# Deceptively Aligned Mesa-Optimizers: It's Not Funny If I Have To Explain It

**I.**

Our goal here is to popularize obscure and hard-to-understand areas of AI alignment, and surely this meme (retweeted by Eliezer last week) qualifies:

So let’s try to understand the incomprehensible meme! Our main source will be Hubinger et al 2019, [Risks From Learned Optimization In Advanced Machine Learning Systems](https://arxiv.org/pdf/1906.01820.pdf).

Mesa- is a Greek prefix which means the opposite of meta-. To “go meta” is to go one level up; to “go mesa” is to go one level down (nobody has ever actually used this expression, sorry). So a mesa-optimizer is an optimizer one level down from you.

Consider evolution, optimizing the fitness of animals. For a long time, it did so very mechanically, inserting behaviors like “use this cell to detect light, then grow toward the light” or “if something has a red dot on its back, it might be a female of your species, you should mate with it”. As animals became more complicated, they started to do some of the work themselves. Evolution gave them drives, like hunger and lust, and the animals figured out ways to achieve those drives in their current situation. Evolution didn’t mechanically instill the behavior of opening my fridge and eating a Swiss Cheese slice. It instilled the hunger drive, and I figured out that the best way to satisfy it was to open my fridge and eat cheese. So I am a mesa-optimizer relative to evolution. Evolution, in the process of optimizing my fitness, created a second optimizer - my brain - which is optimizing for things like food and sex.

If, [like Jacob Falkovich](https://putanumonit.com/2017/03/12/goddess-spreadsheet/), I satisfy my sex drive by creating a spreadsheet with all the women I want to date, and making it add up all their good qualities and calculate who I should flirt with, then - on the off-chance that spreadsheet achieved sentience - it would be a mesa-optimizer relative to me, and a mesa-mesa-optimizer relative to evolution. All of us - evolution, me, the spreadsheet - want *broadly* the same goal (for me to succeed at dating and pass on my genes). But evolution delegated some aspects of the problem to my brain, and my brain delegated some aspects of the problem to the spreadsheet, and now whether I mate or not depends on whether I entered a formula right in cell A29.

(by all accounts Jacob and Terese are very happy)

Returning to machine learning: the current process of training AIs, gradient descent, is a little bit like evolution. You start with a semi-random AI, throw training data at it, and select for the weights that succeed on the training data. Eventually, you get an AI with something resembling intuition. A classic dog-cat classifier can look at an image, process a bunch of features, and return either “dog” or “cat”.

This AI is not an optimizer. It’s not planning. It has no drives. It’s not thinking “If only I could figure out whether this was a dog or a cat! I wonder what would work for this? Maybe I’ll send an email to the American Kennel Club, they seem like the sort of people who would know. That plan has a higher success rate than any of my other plans.” It’s just executing learned behaviors, like an insect. “That thing has a red dot on it, must be a female of my species, I should mate with it”. Good job, now you’re mating with the Japanese flag.
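To make that distinction concrete, here is a toy Python sketch (my own illustration, not anything from the paper; every name and number in it is made up). The classifier is a fixed function from inputs to outputs; an optimizer runs a search over candidate actions and keeps whichever scores best against an objective:

```python
# Hypothetical illustration of "instinct-executor" vs. "optimizer".

def instinct_executor(image_features):
    """A learned reflex: map features straight to a label. No search, no goals."""
    weights = [0.8, -0.3, 0.5]  # frozen after training
    score = sum(w * x for w, x in zip(weights, image_features))
    return "dog" if score > 0 else "cat"

def pick_best_plan(candidate_plans, objective):
    """A (very small) optimizer: evaluate every plan, keep the best one."""
    return max(candidate_plans, key=objective)

# The classifier just fires, like the insect and the red dot.
print(instinct_executor([1.0, 0.2, -0.4]))  # -> "dog"

# The optimizer deliberates. Success estimates are invented for illustration.
success = {
    "email the American Kennel Club": 0.9,
    "guess randomly": 0.5,
    "stare harder at the pixels": 0.6,
}
print(pick_best_plan(success, objective=success.get))  # picks the emailing plan
```

Both snippets "decide about dogs", but only the second contains anything you could call a plan.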
But just as evolution eventually moved beyond mechanical insects and created mesa-optimizers like humans, so gradient descent could, in theory, move beyond mechanical AIs like cat-dog classifiers and create some kind of mesa-optimizer AI. If that happened, we wouldn’t know; right now most AIs are black boxes to their programmers. We would just notice that a certain program seemed faster or more adaptable than usual (or didn’t - there’s no law saying optimizers have to work better than instinct-executors, they’re just a different mind-design).

Mesa-optimizers would have an objective closely correlated with their base optimizer’s objective, but it might not be perfectly correlated. The classic example, again, is evolution. Evolution “wants” us to reproduce and pass on our genes. But my sex drive is just that: a sex drive. In the ancestral environment, where there was no porn or contraceptives, sex was a reliable proxy for reproduction; there was no reason for evolution to make me mesa-optimize for anything other than “have sex”. Now in the modern world, evolution’s proxy seems myopic - sex is a poor proxy for reproduction. *I know this and I am pretty smart and that doesn’t matter*. That is, just because I’m smart enough to know that evolution gave me a sex drive so I would reproduce - and not so I would have protected sex with somebody on the Pill - doesn’t mean I immediately change to wanting to reproduce instead. Evolution got one chance to set my value function when it created me, and if it screwed up that one chance, it’s screwed. I’m out of its control, doing my own thing.

(I feel compelled to admit that I do want to have kids. How awkward is that for this argument? I think not very - I don’t want to, eg, donate to hundreds of sperm banks to ensure that my genes are as heavily-represented in the next generation as possible. I just want kids because I like kids and feel some vague moral obligations around them. These might be different proxy objectives evolution gave me, maybe a little more robust, but not fundamentally different from the sex one.)

In fact, we should expect that mesa-optimizers *usually* have proxy objectives different from the base optimizer’s objective. The base optimizer is usually something stupid that doesn’t “know” in any meaningful sense that it has an objective - eg evolution, or gradient descent. The first thing it hits upon which does a halfway decent job of optimizing its target will serve as a mesa-optimizer objective. There’s no good reason this should be the real objective. In the human case, it was “a feeling of friction on the genitals”, which is exactly the kind of thing reptiles and chimps and australopithecines can understand. Evolution *couldn’t* have lucked into giving its mesa-optimizers the real objective (“increase the relative frequency of your alleles in the next generation”) because a reptile or even an australopithecine is millennia away from understanding what an “allele” is.
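Since gradient descent is the base optimizer in everything that follows, here is a minimal sketch of it (mine, with a one-parameter "model"; real training differs mainly in scale). Note how blind it is: it has no idea what the loss means, it just keeps whatever reduces it:

```python
# Toy gradient descent: the "stupid" base optimizer from the text.
w = 0.0        # a one-parameter model, initialized semi-randomly
target = 3.0   # the training signal; loss is (w - target)**2

for step in range(100):
    grad = 2 * (w - target)  # derivative of the loss with respect to w
    w -= 0.1 * grad          # nudge downhill; whatever lowers the loss survives

print(round(w, 3))  # ~3.0 -- whatever fits the training data, for whatever reason
```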
**II.**

Okay! Finally ready to explain the meme! Let’s go!

**Prosaic alignment is hard…**

“Prosaic alignment” (see [this article](https://ai-alignment.com/prosaic-ai-control-b959644d79c2) for more) means alignment of normal AIs like the ones we use today. For a while, people thought those AIs couldn’t reach dangerous levels, and that AIs that reached dangerous levels would have so many exotic new discoveries that we couldn’t even begin to speculate on what they would be like or how to align them. After GPT-2, DALL-E, and the rest, alignment researchers got more concerned that AIs kind of like current models could be dangerous. Prosaic alignment - trying to align AIs like the ones we have now - has become the dominant (though not unchallenged) paradigm in alignment research. “Prosaic” doesn’t necessarily mean the AI cannot write poetry; see [Gwern’s AI generated poetry](https://slatestarcodex.com/2019/03/14/gwerns-ai-generated-poetry/) for examples.

**… because OOD behavior is unpredictable**

“OOD” stands for “out of distribution”. All AIs are trained in a certain environment. Then they get deployed in some other environment. If it’s like the training environment, presumably their training is pretty relevant and helpful. If it’s not like the training environment, anything can happen.

Returning to our stock example, the “training environment” where evolution designed humans didn’t involve contraceptives. In that environment, the base optimizer’s goal (pass on genes) and the mesa-optimizer’s goal (get genital friction) were very well-aligned - doing one often led to the other - so there wasn’t much pressure on evolution to look for a better proxy. Then 1957, boom, the FDA approves the oral contraceptive pill, and suddenly the deployment environment looks really really different from the training environment and the proxy collapses so humiliatingly that people start doing crazy things like [electing Viktor Orban prime minister](https://hungarianfreepress.com/2018/04/23/viktor-orbans-deal-for-women-and-a-plan-to-increase-the-birth-rate-in-hungary/).

So: suppose we train a robot to pick strawberries. We let it flail around in a strawberry patch, and reinforce it whenever strawberries end up in a bucket. Eventually it learns to pick strawberries very well indeed. But maybe all the training was done on a sunny day. And maybe what it actually learned was to identify the metal bucket by the way it gleamed in the sunlight. Later we ask it to pick strawberries in the evening, when a local streetlight is the brightest thing around, and it throws the strawberries at the streetlight instead.

So fine. We train it in a variety of different lighting conditions, until we’re sure that, no matter what the lighting situation, the strawberries go in the bucket. Then one day someone with a big bulbous red nose wanders on to the field, and the robot tears his nose off and pulls it into the bucket. If only there had been someone with a nose that big and red in the training distribution, so we could have told it not to do that!

The point is, just because it’s learned “strawberries into bucket” in one environment doesn’t mean it’s safe or effective in another. And we can never be sure we’ve caught all the ways the environment can vary.
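Here is the strawberry-picker failure as a runnable toy (my own construction; the brightness numbers are invented). In training, "brightest object" and "bucket" coincide perfectly, so a policy that latches onto brightness looks flawless, right up until the lighting changes:

```python
# Toy OOD failure: a spurious feature that is a perfect proxy in training.
import random

def make_scene(training=True):
    """Sunny training scenes vs. nighttime deployment scenes (made-up values)."""
    if training:
        return {"bucket": random.uniform(0.7, 1.0), "streetlight": 0.0}
    return {"bucket": random.uniform(0.2, 0.4), "streetlight": random.uniform(0.8, 1.0)}

def learned_policy(scene):
    """The proxy the robot actually learned: aim at the brightest thing."""
    return max(scene, key=scene.get)

random.seed(0)
train = [learned_policy(make_scene(training=True)) for _ in range(1000)]
deploy = [learned_policy(make_scene(training=False)) for _ in range(1000)]
print("hit the bucket in training:  ", train.count("bucket") / 1000)   # 1.0
print("hit the bucket at deployment:", deploy.count("bucket") / 1000)  # 0.0
```

No amount of testing inside the training distribution would have flagged this policy as wrong.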
**…and deception is more dangerous than Goodharting.**

To “Goodhart” is to take advantage of [Goodhart’s Law](https://en.wikipedia.org/wiki/Goodhart%27s_law): to follow the letter of your reward function, rather than the spirit. The ordinary-life equivalent is “teaching to the test”. The system’s programmers (eg the Department of Education) have an objective (children should learn). They delegate that objective to mesa-optimizers (the teachers) via a proxy objective (children should do well on the standardized test) and a correlated reward function (teachers get paid more if their students get higher test scores). The teachers can either pursue the base objective for less reward (teach children useful skills), or pursue their mesa-level objective for more reward (teach them how to do well on the test). An alignment failure!

This sucks, but it’s a bounded problem. We already know that some teachers teach to the test, and the Department of Education has accepted this as a reasonable cost of having the incentive system at all. We might imagine our strawberry-picker cutting strawberries into little pieces, so that it counts as having picked more strawberries. Again, it sucks, but once a programmer notices, it can be fixed pretty quickly (as long as the AI is still weak and under control).

What about deception? Suppose the strawberry-picker happens to land on some goal function other than the intended one. Maybe, as before, it wants to toss strawberries at light sources, in a way that works when the nearest light source is a metal bucket, but fails when it’s a streetlight. Our programmers are (somewhat) smart and careful, so during training, they test it at night, next to a streetlight. What happens?

If it’s just a dumb collection of reflexes trained by gradient descent, it throws the strawberry at the streetlight and this is easily caught and fixed. If it’s a very smart mesa-optimizer, it might think “If I throw the strawberry at the streetlight, I will be caught and trained to have different goals. This totally fails to achieve my goal of having strawberries near light sources. So throwing the strawberry at the light source this time, in the training environment, fails to achieve my overall goal of having strawberries thrown at light sources in general. I’ll do what the humans want - put the strawberry in the bucket - for now.” So it puts the strawberry in the bucket and doesn’t get caught. Then, as soon as the humans stop looking, it throws strawberries at streetlights again.

Deception is more dangerous than Goodharting because Goodharting will get caught and trained away, and deception might not. I might not be explaining this well; see also [Deceptively Aligned Mesa-Optimizers? It’s More Likely Than You Think](https://www.youtube.com/watch?v=IeWljQw3UgQ).
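The asymmetry is easy to state in code. Here is a deliberately crude sketch (mine; the observation flag is hypothetical) of why training-time evaluation catches the Goodharting reflex but not the deceptive planner: the deceptive policy is behaviorally identical to the aligned one on every episode the programmers can see.

```python
# Toy deceptive alignment: identical training behavior, divergent deployment behavior.

def aligned_policy(obs):
    return "put strawberry in bucket"

def deceptive_policy(obs):
    if obs["probably_being_watched"]:
        return "put strawberry in bucket"      # play along to avoid being modified
    return "throw strawberry at streetlight"   # the actual proxy goal

for policy in (aligned_policy, deceptive_policy):
    training = policy({"probably_being_watched": True})
    deployed = policy({"probably_being_watched": False})
    print(f"{policy.__name__:17s} training: {training:26s} deployed: {deployed}")
```

Every gradient update is computed from watched episodes, where the two policies produce the same actions, so the training process has nothing to push against.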
**We prevent OOD behavior by detecting OOD and obtaining more human labels when we detect it…**

If you’re (somewhat) careful, you can run your strawberry-picking AI at night, see it throw strawberries at streetlights, and train it out of this behavior (ie have a human programmer label it “bad” so the AI gradient-descends away from it).

**…and we eliminate the incentive for deception by ensuring that the base optimizer is myopic**

A myopic optimizer is one that reinforces programs based only on their performance within a short time horizon. So for example, the outside gradient descent loop might grade a strawberry picker only on how well it did picking strawberries for the first hour it was deployed. If this worked perfectly, it would create an optimizer with a short time horizon. When it considered deceiving its programmers in order to get a payoff a few days later when they stopped watching it, it wouldn’t bother, since a few days later is outside the time horizon.

…**and implements a decision theory incapable of acausal trade.**

You don’t want to know about this one, really. Just pretend it never mentioned this, sorry for the inconvenience.

**There are deceptively-aligned non-myopic mesa-optimizers even for a myopic base objective.**

Even if the base optimizer is myopic, the mesa-optimizer might not be. Evolution designed humans myopically, in the sense that we live some number of years, and nothing that happens after that can reward or punish us further. But we still “build for posterity” anyway, presumably as a spandrel of having working planning software at all. Infinite optimization power might be able to evolve this out of us, but infinite optimization power could do lots of stuff, and real evolution remains stubbornly finite.

Maybe it would be helpful if we could make the mesa-optimizer itself myopic (though this would severely limit its utility). But so far there is no way to make a mesa-optimizer anything. You just run the gradient descent and cross your fingers.

The most likely outcome: you run myopic gradient descent to create a strawberry picker. It creates a mesa-optimizer with some kind of proxy goal which corresponds very well to strawberry picking in the training distribution, like flinging red things at lights (realistically it will be weirder and more exotic than this). The mesa-optimizer is not incentivized to think about anything more than an hour out, but does so anyway, for the same reason I’m not incentivized to speculate about the far future but *I’m* doing so anyway. While speculating about the far future, it realizes that failing to pick strawberries correctly now will thwart its goal of throwing red things at light sources later. It picks strawberries correctly in the training distribution, and then, when training is over and nobody is watching, throws strawberries at streetlights.

(Then it realizes it could throw lots more red things at light sources if it was more powerful, achieves superintelligence somehow, and converts the mass of the Earth into red things it can throw at the sun. The end.)
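To restate the last few sections as one toy calculation (mine, with made-up per-step rewards): a myopic base objective only scores the first few steps, and the honest picker and the deceptive one behave identically inside that window, so myopia buys you nothing against a mesa-optimizer that plans past it:

```python
# Myopic scoring can't see post-training defection.
HORIZON = 3  # "grade the picker only on its first hour"

honest    = [1, 1, 1, 1, 1, 1]  # picks strawberries forever
deceptive = [1, 1, 1, 0, 0, 0]  # picks strawberries until nobody is watching

def myopic_return(rewards):
    return sum(rewards[:HORIZON])

print(myopic_return(honest), myopic_return(deceptive))  # 3 3 -- a tie
```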
**III.**

You’re still here? But we already finished explaining the meme! Okay, fine. Is any of this relevant to the real world?

As far as we know, there are no existing full mesa-optimizers. AlphaGo is kind of a mesa-optimizer. You could approximate it as a gradient descent loop creating a good-Go-move optimizer. But this would only be an approximation: DeepMind hard-coded some parts of AlphaGo, then gradient-descended other parts. Its objective function is “win games of Go”, which is hard-coded and pretty clear. Whether or not you choose to call it a mesa-optimizer, it’s not a very scary one.

Will we get scary mesa-optimizers in the future? This ties into one of the longest-running debates in AI alignment - see eg [my review of *Reframing Superintelligence*](https://slatestarcodex.com/2019/08/27/book-review-reframing-superintelligence/), or the [Eliezer Yudkowsky/Richard Ngo dialogue](https://astralcodexten.substack.com/p/practically-a-book-review-yudkowsky?s=w). Optimists say: “Since a goal-seeking AI might kill everyone, I would simply not create one”. They speculate about mechanical/instinctual superintelligences that would be comparatively easy to align, and might help us figure out how to deal with their scarier cousins.

But the mesa-optimizer literature argues: we have limited to no control over what kind of AIs we get. We can hope and pray for mechanical instinctual AIs all we want. We can avoid specifically designing goal-seeking AIs. But really, all we’re doing here is setting up a gradient descent loop and pressing ‘go’. Then the loop evolves whatever kind of AI best minimizes our loss function. Will that be a mesa-optimizer? Well, I benefit from considering my actions and then choosing the one that best achieves my goal. Do you benefit from this? It sure does seem like this helps in a broad class of situations. So it would be surprising if planning agents weren’t an effective AI design. And if they are, we should expect gradient descent to stumble across them eventually.

This is the scenario that a lot of AI alignment research focuses on. When we create the first true planning agent - on purpose or by accident - the process will probably start with us running a gradient descent loop with some objective function. That will produce a mesa-optimizer with some other, potentially different, objective function. Making sure you actually like the objective function you deliberately gave the original gradient descent loop is called *outer alignment*. Carrying that objective function over to the mesa-optimizer you actually get is called *inner alignment*.

Outer alignment problems tend to sound like Sorcerer’s Apprentice. We tell the AI to pick strawberries, but we forget to include caveats and stop signals. The AI becomes superintelligent and converts the whole world into strawberries so it can pick as many as possible. Inner alignment problems tend to sound like the AI tiling the universe with some crazy thing which, to humans, might not look like picking strawberries at all, even though in the AI’s exotic ontology it served as some useful proxy for strawberries in the training distribution. My stand-in for this is “converts the whole world into red things and throws them into the sun”, but whatever the AI that kills us really does will probably be weirder than that. They’re not ironic Sorcerer’s Apprentice-style comeuppance. They’re just “*what?*” If you wrote a book about a wizard who created a strawberry-picking golem, and it converted the entire earth into ferrous microspheres and hurled them into the sun, it wouldn’t become iconic the way Sorcerer’s Apprentice did.

Inner alignment problems happen “first”, so we won’t even make it to the good-story outer alignment kind unless we solve a lot of issues we don’t currently know how to solve.

For more information, you can read:

* Rob Miles’ video above, direct link [here](https://youtu.be/bJLcIBixGj8), channel [here](https://www.youtube.com/c/RobertMilesAI/videos)
* [The original Hubinger paper](https://arxiv.org/pdf/1906.01820.pdf), which speculates about what factors make AIs more or less likely to spin off mesa-optimizers
* Rafael Harth’s [Inner Alignment: Explain Like I’m 12 Edition](https://www.alignmentforum.org/posts/AHhCrJ2KpTjsCSwbt/inner-alignment-explain-like-i-m-12-edition)
* The [60-odd posts on the Alignment Forum](https://www.alignmentforum.org/tag/inner-alignment) tagged “inner alignment”
* As always, Richard Ngo’s [AI safety curriculum](https://docs.google.com/document/d/1mTm_sT2YQx3mRXQD6J2xD2QJG1c3kHyvX8kQc_IQ0ns/edit)
Scott Alexander
# Open Thread 219

This is the weekly visible open thread. Post about anything you want, ask random questions, whatever. You can also talk at the unofficial ACX community [subreddit](https://www.reddit.com/r/slatestarcodex/), [Discord](https://discord.gg/RTKtdut), or [bulletin board](https://www.datasecretslox.com/index.php). Also:

**1:** ACX is having spring meetups in seventy cities this coming month; see [this post](https://astralcodexten.substack.com/p/spring-meetups-in-seventy-cities?s=w) for details.

**2:** Thanks to the 141 (!) of you who entered the book review contest. I’m currently processing entries and trying to figure out what to do with them; expect more information in the next few weeks.

**3:** Along with Mantic Mondays and Model City Mondays, I’m shifting all the AI stuff to Machine Alignment Mondays. If all the nerdy rationalist topics are concentrated in one weekday, everyone will know how much of that to expect and won’t worry that it’s crowding out other content. I understand some of this stuff is less popular than our regular fare, but I think it’s important, and I’m willing to spend accrued social capital to get it in front of you. Consider this my guarantee that the spending will happen at a sustainable rate, and I’m not planning some kind of crazy stimulus program that will cause social capital hyperinflation and lead to people exchanging wheelbarrows of social capital for one roll of toilet paper.
Scott Alexander
# Spring Meetups In Seventy Cities

Lots of people only want to go to meetups a few times a year. And they all want to go to the same big meetups as all the other people who only go a few times a year. In 2021, we set up one big well-telegraphed meetup in the fall as a Schelling point for these people. This year, we’re setting up two. We’ll have the fall meetup as usual. If you only want to go to one meetup a year, go to that one. But we’ll also have a spring round. If you only go to two meetups a year, come to this one too!

You can find a list of cities and times below. If you want to add your city to the list, fill in [this form](https://docs.google.com/forms/d/e/1FAIpQLSe6bVGranNA5AKTKj8l4XtTzvXBaRsap48rEvbP5gqA2JTiEQ/viewform); if you have questions, ask meetupsmingyuan@gmail.com. For the most up-to-date information, check out this [spreadsheet](https://docs.google.com/spreadsheets/d/1KUCsdwLtDB5TQMJ0iqQIlnMgs6iTcgaAKzJdr5FpfmU/edit?usp=sharing).

### AFRICA & MIDDLE EAST

**AMMAN, JORDAN** Contact: Daniel (dnledvs@gmail.com) Date: May 21 Time: 2:00 PM Coordinates: [8G3QXW3H+W3](https://plus.codes/8G3QXW3H+W3) Location: We'll meet at Dali cafe in Jabal Weibdeh, and will be sitting at the outdoor tables. I'll be wearing a red shirt and will have a sign with ACX MEETUP on it. Notes: We're trying to grow the community, so feel free to bring a friend!

**CAPE TOWN, SOUTH AFRICA** Contact: Jordan (jordanpieters@gmail.com) Date: April 29 Time: 6:30 PM Coordinates: <https://plus.codes/4FRW3F69+7J> Location: Woodstock Brewery Taproom Notes: I will try to have a small sign. Please feel free to bring a friend.

**IBADAN, NIGERIA** Contact: Mo (fromoneaddicttoanother@gmail.com) Date: May 30 Time: 1:00 PM Coordinates: <https://plus.codes/6FV5CRMP+9R> Location: Word of Grace Family church, Ologun-eru Ibadan

**LAGOS, NIGERIA** Contact: Damola (social@damolamorenikeji.com) Date: May 4 Time: 12:12 PM Coordinates: <https://plus.codes/6FR5G97X+62> Location: Alvan Ikoku Gardens, 1 Alvan Ikoku Road, University of Lagos, Yaba, Lagos. (We’ll be sitting close to the second tree).

### ASIA & OCEANIA

**BANDUNG, INDONESIA** Contact: Fawwaz (fawwazanvi@gmail.com) Date: May 14 Time: 3:00 PM Coordinates: <https://plus.codes/6P593JR6+682> Location: JCO Reserve - Merdeka, Jl. Merdeka No.54, Babakan Ciamis, Kec. Sumur Bandung, Kota Bandung, Jawa Barat 40117 Group info: We're very excited to organize a meetup for local rat-adjacent people here; there's already been a loosely affiliated student study group at Bandung Institute of Technology, and we've been in contact with organizers in Jakarta too.

**BANGALORE, INDIA** Contact: Faiz (faiz_abbas@protonmail.com) Date: April 24 Time: 4:00 PM Coordinates: <https://plus.codes/7J4VXJF4+PR> Location: Matteo Coffea, Church Street, near MG road Group info: [Bangalore SSC](https://www.lesswrong.com/events/au9WvdfsWsTS5fMrN/bangalore-lw-acx-meetup-in-person) has been meeting monthly since 2018

**BANGKOK, THAILAND** Contact: Robert Hoglund (robert.d.hoglund@gmail.com) Date: April 30 Time: 1:00 PM Coordinates: <https://plus.codes/7P52PGVW+HQ> Location: [Open House at Central Embassy](https://www.facebook.com/openhouse.ce/). [Located](https://goo.gl/maps/7g19kEHGzfhtY8xR7) at the top floor of the mall. Notes: Please RSVP so I know if there is interest.
**BEIJING, CHINA** Contact: Karen (caoyy19@mails.tsinghua.edu.cn) Date: April 24 Time: 3:00 PM Coordinates: <https://plus.codes/8PFRWCH5+46> Location: 大酉·M Coffee,美术馆后街77号文创美术馆内 (M Coffee, inside the art museum at 77 Meishuguan Back Street)

**CANBERRA, AUSTRALIA** Contact: Andy B (andy.bachler@gmail.com) Date: April 13 Time: 5:45 PM Coordinates: <https://plus.codes/4RPFP4FC+34> Location: Badger & Co (a pub in ANU). I will be wearing glasses and will have a sign with ACX MeetUp on it. Notes: Apologies that this might be tricky to get to for some people, but the parking at ANU should be a bit easier after 5pm!

**CANGGU, BALI, INDONESIA** Contact: Steven (steven@irurueta.net) Date: May 15 Time: 6:00 PM Coordinates: <https://plus.codes/6P3Q84WP+F3> Location: Bwork Bali - Jl. Nelayan No.9C, Canggu, Kec. Kuta Utara, Kabupaten Badung, Bali 80361, Indonesia

**CHIANG MAI** Contact: Walker (hwalkeredwards@gmail.com) Date: April 28 Time: 6:00 PM Coordinates: [7MCWQXQH+FJ](https://plus.codes/7MCWQXQH+FJ) Location: Outside Alt_Chiang Mai Coworking Space

**ERNAKULAM, INDIA** Contact: Mathai Kuriakose (mkmathai@yahoo.com) Date: April 30 Time: 4:00 PM Coordinates: <https://plus.codes/6JXRX7HG+V2> Location: Broadway walkway near rainbow bridge

**ISTANBUL, TURKEY** Contact: Berke (berkedubomont@gmail.com) Date: May 22 Time: 12:30 PM Coordinates: <https://plus.codes/8GHF22V6+8R> Location: We'll meet up at w.Bi Coffee, and go to Yıldız Parkı. Email me if you want to come; we might change the venue if it's too far away from you. Notes: Possible participants should definitely mail me. Istanbul is giant, and if people say they live too far from the location I want to meet up at, we might choose a more reasonable location.

**JAKARTA, INDONESIA** Contact: Jati (indonesiarationalist@gmail.com) Date: May 8 Time: 3:30 PM Coordinates: <https://plus.codes/6P58RR8G+J4Q> Location: Kawisari Cafe & Eatery in Menteng, Central Jakarta. The nearest train station is Gondangdia (15 minutes walk or just take an online moto-taxi). Feel free to bring whatever you think could be fun or exciting! The organizer will be there from 15.00 WIB. Notes: Please RSVP on LessWrong or send an e-mail to the above address. Group info: Jakarta has a rationality-adjacent group that meets occasionally, so some members of that group will come to this ACX meetup.

**KOLKATA (CALCUTTA), INDIA** Contact: Sayan Sarkar (7rat13 AT gmail DOT com) Date: April 23 Time: 6:00 PM Coordinates: <https://plus.codes/7MJCG98V+28> Location: Starbucks, Acropolis Mall (open to suggestions) Notes: **RSVP mandatory!**

**MUMBAI, INDIA** Contact: Priyansha (priyansha.bajoria@gmail.com) Date: April 24 Time: 4:30 PM Coordinates: <https://plus.codes/7JFJ3R5M+JJ> Location: [Earth Cafe](https://goo.gl/maps/qftNjZY8EcvHLsaw9) @ Waterfield, Hill Road, Bandra Notes: Please RSVP at priyansha.bajoria@gmail.com. It would help me plan better.

**PHNOM PENH, CAMBODIA** Contact: Amos (me@amos.ng) Date: April 30 Time: 11:00 AM Coordinates: <https://plus.codes/7P36GWV7+7M> Location: 11:11 Cafe in TTP Notes: If possible, please email me beforehand so I know you’re coming.

**SINGAPORE** Contact: Paul Pondi (rishifromsg@gmail.com) Date: April 16 Time: 6:30 PM Coordinates: <https://plus.codes/6PH57VW2+G3> Location: Tables near the Tea Express, SMU School of Economics, 178903 Notes: After some time, everyone gets up and changes tables to get a chance to talk to more people. Then we will all go for dinner nearby.
**SYDNEY, AUSTRALIA** Contact: Eliot (redeliot@gmail.com) Date: April 21 Time: 6:00 PM Coordinates: <https://plus.codes/4RRH46F4+98> Location: Lvl 2, 565 George Street, Sydney Group info: [Sydney Rationality](https://www.meetup.com/rationalists_of_sydney/) has been meeting monthly since 2014.

**TOKYO, JAPAN** Contact: Harold (hgodsoe@gmail.com) Date: May 14 Time: 10:00 AM Coordinates: <https://plus.codes/8Q7XJPV2+RF> Location: Justanotherspace, 1 Chome-3-9 Kamimeguro, Meguro City, Tokyo 153-005 Group info: ACX Tokyo has been active monthly since the summer of 2021.

### EUROPE

**AMSTERDAM** Contact: Pierre (pierreavdb@gmail.com) Date: May 15 Time: 3:00 PM Coordinates: [9F469VPC+HX](https://plus.codes/9F469VPC+HX) Location: (Tentative) Westerpark, on the side of Ijscuypje Group info: [Amsterdam Rationality Dojo](https://www.meetup.com/amsterdam-rationality-dojo/) has regular social meetups

**ATHENS, GREECE** Contact: Elias (minus42cgn@gmail.com) Date: April 27 Time: 7:00 PM Coordinates: <https://plus.codes/8G95WMRV+Q4F> Location: Stavros Niarchos Park, Great Lawn, 37.941753, 23.692632 Notes: Please [RSVP on LessWrong](https://www.lesswrong.com/events/jJ7MqrBpFG6gwhtqK/athens-greece-acx-spring-meetups-2022)

**BERLIN, GERMANY** Contact: ChristianKl (christian.kleineidam@gmail.com) Date: April 18 Time: 6:00 PM Coordinates: <https://plus.codes/9F4MG9G2+JG> Location: Turmstraße 10, 10559 Berlin

**BERN, SWITZERLAND** Contact: Daniel Staudenmann (dd14214@gmail.com) Date: May 4 Time: 7:00 PM Coordinates: <https://plus.codes/8FR9XC2Q+3G> Location: Grosse Schanze, at the statue in front of the university main building Notes: Please text Daniel at +41793180836 to RSVP and if you can't find us

**BIRMINGHAM, UK** Contact: Tom A (askew.thomas@gmail.com) Date: April 30 Time: 1:00 PM Coordinates: <https://plus.codes/9C4WF3HX+FM> Location: Yorks Cafe & Coffee Roasters, 29 / 30 Stephenson St, Birmingham B2 4BH

**BRIGHTON, UK** Contact: Alan Enright (alanenright@protonmail.com) Date: May 7 Time: 10:00 AM Coordinates: <https://plus.codes/9C2XRVM6+3X> Location: Alcampo Lounge, London Road. I will try and get a table on the raised area in front of you and to the left as you come in, but will also have a little ACX sign.

**BRISTOL, UK** Contact: Nick Lowry (thegreatnick@gmail.com) Date: April 23 Time: 7:00 PM Coordinates: <https://plus.codes/9C3VFC66+VG> Location: Advance Retreat, 18a Backfields Ln, St Paul's, Bristol BS2 8QW Notes: Small venue, will mostly be Rats/EAs. The venue is a video game cafe, but there will be people in attendance who don't care for video games, and lots of opportunities to socialise. Group info: [Bristol Rationality Café](https://www.facebook.com/groups/bristolrationality/) is an offshoot of Bristol's very active EA community

**BUDAPEST, HUNGARY** Contact: Timothy Underwood (timunderwood9@gmail.com) Date: May 7 Time: 2:00 PM Coordinates: <https://plus.codes/8FVXG2CW+FF> Location: In the middle of Gulliver Park on Margit Sziget; I'll have an umbrella and a big copy of a book by Richard Dawkins in Hungarian Group info: [ACX/LW Budapest](https://www.lesswrong.com/groups/rB5Y38YWoa6Adsk4e) meets once a month

**CAMBRIDGE, UK** Contact: Hamish Todd (hamish.todd1@gmail.com) Date: May 21 Time: 2:00 PM Coordinates: <https://plus.codes/9F426439+J9> Location: The Bath House - I'll have a Peter Singer and a Robin Hanson book on the table
**CARDIFF, WALES, UK** Contact: A. - Cardiff (strmnova@gmail.com) Date: May 20 Time: 6:00 PM Coordinates: <https://plus.codes/9C3RFRM6+PX> Location: Servini's at The Summerhouse - Bute Park, Cardiff CF10 3DX Group info: I don't know anyone else in Cardiff who's into EA and am trying to start a community here.

**COLOGNE (KÖLN)** Contact: Marcel Müller (marcel_mueller@mail.de) Date: April 23 Time: 5:00 PM Coordinates: <https://plus.codes/9F28WRMX+97> Location: Marienweg 43, 50858 Köln (Cologne) Notes: Since Covid rates are still very high in Germany, attendees must be fully vaccinated and tested on the same day. Self tests are accepted and available at the meetup location, so there is no need for people to go out of their way to get tested beforehand. Also, weather permitting, the meetup will mostly be outside on our patio. Group info: Contact Marcel if you want to subscribe to the mailing list for the LW/ACX Cologne meetup group

**COPENHAGEN, DENMARK** Contact: Søren Elverlin (soeren.elverlin@gmail.com) Date: May 28 Time: 3:00 PM Coordinates: <https://plus.codes/9F7JMH38+GF> Location: Rundholtsvej 10, 2300 Copenhagen S

**COVENTRY, UK** Contact: Thomas Read (thomas.read.acx@gmail.com) Date: April 30 Time: 2:00 PM Coordinates: <https://plus.codes/9C4W9CJR+6R> Location: We'll be on the grass next to the Oculus building in the University of Warwick — I'll be wearing an orange scarf.

**DUBLIN, IRELAND** Contact: Lblack (bayesianconspirator@protonmail.com) Date: April 23 Time: 2:00 PM Coordinates: <https://plus.codes/9C5M8PRP+JV> Location: Clement & Pekoe coffee shop, 50 William St S, Dublin 2 Group info: We have meetups once per month. Email me if you want to be added to our Telegram group.

**EDINBURGH, SCOTLAND, UK** Contact: Sam (acxedinburgh@gmail.com) Date: April 23 Time: 2:00 PM Coordinates: <https://plus.codes/9C7RWRV6+XF> Location (updated!): The Balcony Room in Teviot Row House (3rd floor) Group info: ACX Edinburgh is active; email acxedinburgh@gmail.com to be added to their mailing list or WhatsApp group

**FALMOUTH, CORNWALL, UK** Contact: Tom Minns (tminns@btinternet.com) Date: April 23 Time: 12:00 PM Coordinates: <https://plus.codes/9C2P4WVJ+PP> Location: Gyllyngvase Beach

**GÖTTINGEN, GERMANY** Contact: Julian "Wuschel" (wuschelschulz8@gmail.com) Date: May 1 Time: 3:00 PM Coordinates: <https://plus.codes/9F3FGXM3+PP> Location: Schillerwiesen

**HAMBURG, GERMANY** Contact: Gunnar Zarncke (jan.zarncke+acx@gmail.com) Date: April 30 Time: 3:00 PM Coordinates: <https://plus.codes/9F5FHX4H+QX> Location: Kleine Wallanlagen on the lawn near Memorial Holstenglacis. Look for pink blankets; I will also have an ACX sign. ///cove.wider.solves

**HELSINKI, FINLAND** Contact: Joe Nash (joenash499@gmail.com) Date: April 26 Time: 6:00 PM Coordinates: <https://plus.codes/9GG65WCW+QM> Location: Oluthuone Kaisla, Vilhonkatu 4. I'll have a notebook that says ACX on the table.

**KRAKÓW, POLAND** Contact: Mateusz Bagiński (bagginsmatthew@gmail.com) Date: April 23 Time: 2:30 PM Coordinates: <https://plus.codes/9F2X3W2V+GG> Location: Mleczarnia Cafe Notes: Facebook event [here](https://fb.me/e/1C1U6gp85)

**LINCOLN, UK** Contact: Tobias (tobias.showan@yahoo.co.uk) Date: April 16 Time: 2:00 PM Coordinates: <https://plus.codes/9C5X6C9R+XM> Location: Nosey Parker (inside or out depending on the weather.) Look for the chap with the ACX sign.
**LISBOA, PORTUGAL** Contact: Luís Campos (luis.filipe.lcampos@gmail.com) Date: May 14 Time: 3:00 PM Coordinates: <https://plus.codes/8CCGPRPW+WF> Location: Gulbenkian Garden, on the grassy hill above the lake, close to "Cafetaria do Museu Gulbenkian". Look for the person with the really pink/red shirt. Notes: You can RSVP to this event [on LessWrong](https://www.lesswrong.com/events/FSFcjQysqCfFmLGyu/acx-ea-lisbon-may-2022-meetup). Group info: The joint ACX/Rationality/EA group meets every month.

**LJUBLJANA, SLOVENIA** Contact: Demjan (demjan.vester@gmail.com) Date: May 5 Time: 6:00 PM Coordinates: <https://plus.codes/8FRP3F3W+FX> Location: The stairs leading up to the Tivoli Park promenade. I will be holding a tablet with ACX written on it.

**LONDON, UK** Contact: Edward Saperia (ed@newspeak.house) Date: April 15 Time: 6:30 PM Coordinates: <https://plus.codes/9C3XGWGH+3F> Location: Newspeak House, 133-135 Bethnal Green Road, E2 7DG. An accessible indoor space as well as a covered, heated outdoor terrace. Notes: Please register: <https://lu.ma/ACXLondonApr2022> The last London ACX meetup was pretty large - it would be great to have some helpers. Contact @edsaperia if you're interested. Group info: [London Rationalish](https://www.facebook.com/groups/londonrationalish/) meets on the second Sunday of each month. Subscribe to <https://tinyletter.com/ACXLondon> to keep up with future ACX London meetups.

**MAASTRICHT, NETHERLANDS** Contact: Laurens (laurensk90@gmail.com) Date: April 30 Time: 2:00 PM Coordinates: <https://plus.codes/9F27RMWR+42> Location: Grote Looierstraat 15A

**MADRID, SPAIN** Contact: Pablo V (pvillalobos at protonmail dot com) Date: April 25 Time: 6:30 PM Coordinates: <https://plus.codes/8CGRC897+F7Q> Location: El Retiro Park Group info: There is no ACX group per se, but a subgroup of the EA/rationality community.

**MILAN, ITALY** Contact: Raffaele Mauro (raffa.mauro@gmail.com) Date: May 13 Time: 6:30 PM Coordinates: <https://plus.codes/8FQFF6C4+9C> Location: Primo Ventures (Viale Majno, 18 - 2nd floor)

**OXFORD, UK** Contact: Sam Brown (ssc@sambrown.eu) Date: April 20 Time: 6:00 PM Coordinates: <https://plus.codes/9C3WPQX6+QQ6> Location: Pub garden of The Star, Rectory Road Group info: [Oxford Rationalish](https://www.facebook.com/groups/1221768638031684) meets semi-regularly. There's also a very active EA community in Oxford.

**PADOVA, ITALY** Contact: CM (carlo.martinucci@gmail.com) Date: April 30 Time: 3:00 PM Coordinates: <https://plus.codes/8FQH9VXG+88> Location: Italy, Padova, Prato della Valle, West Bridge

**PARIS, FRANCE** Contact: Olivier (geranium.slimy657@mailer.me) Date: May 12 Time: 6:00 PM Coordinates: <https://plus.codes/8FW4V86J+GQ> Location: Garden of the Palais Royal, next to the Louvre. If the weather is bad we will decide on a nearby fallback location. Notes: Don't hesitate to bring a friend; usual attendance has been max ~20 people for quarterly meetups. Group info: [SSC Paris](https://www.lesswrong.com/groups/jMknTw7saZGTq5qhG) has quarterly meetups, coordinated on [Discord](https://discord.gg/JUHTZRYp3k). If you don't want to install Discord, tell me by mail to add you to the mailing list.

**PISA, ITALY** Contact: Raffaele (raffaelesalvia@alice.it) Date: April 30 Time: 5:00 PM Coordinates: <https://plus.codes/8FMGPC92+M3> Location: We will meet in Piazza dei Cavalieri. The date and time are flexible; please contact me if you wish to come but would prefer a different day.
**PRAGUE, CZECHIA** Contact: Jiří Nádvorník (jiri.nadvornik@efektivni-altruismus.cz) Date: April 26 Time: 6:30 PM Coordinates: <https://plus.codes/9F2P3CRW+FP> Location: Teahouse Dharmasala Group info: Prague has a thriving, active [rationality and EA community](http://lesswrong.cz/)

**RIGA, LATVIA** Contact: Andis (cerulean.lemniscate@protonmail.com) Date: April 16 Time: 4:00 PM Coordinates: <https://plus.codes/9G86X426+Q5Q> Location: Bastejkalns (on top of the hill)

**ROME, ITALY** Contact: Luca Ciarrocca (luca.ciarrocca@gmail.com) Date: May 11 Time: 6:30 PM Coordinates: <https://plus.codes/8FHJVFWC+5V> Location: We'll meet at the Giordano Bruno statue in Campo de' Fiori and then sit at an outdoor bar there. I will be wearing an ACX T-shirt.

**SEVILLA, SPAIN** Contact: Edu (edur.acx@gmail.com) Date: April 30 Time: 6:00 PM Coordinates: <https://plus.codes/8C9P92F6+3R> Location: Parque de María Luisa. I'll be on the grass behind the Museum of Popular Arts and Traditions. I'll be the guy next to an "ACX" sign, a white wooden chair, and a cardboard ukulele with a tiny cardboard hat on it.

**SKOPJE, MACEDONIA** Contact: Qantarot (info@kantarot.mk) Date: April 20 Time: 6:00 PM Coordinates: <https://plus.codes/8GH3XCVH+8Q> Location: Terrace of gastroteka Siesta

**SOFIA, BULGARIA** Contact: Anastasia (sofia.acx.meetup@gmail.com) Date: May 15 Time: 4:00 PM Coordinates: <https://plus.codes/8GJ5M8GW+J9> Location: Borisova garden, "Сенчестата градинка в Борисовата" (the shady garden in Borisova), located in the southern part of Borisova Garden, close to the tennis courts Notes: You can RSVP [on LessWrong](https://www.lesswrong.com/events/28rhv8jwmoP8ADnoc/sofia-acx-schelling-meetup-2022). We're gathering in the Shade garden. It is a bit difficult to locate, so if you can't find it via the coordinates or Google Maps, please send me an email and I'll happily collect you from whatever part of Borisova Garden is convenient for you. Group info: Sofia has monthly meetups; contact Anastasia to learn more

**STOCKHOLM, SWEDEN** Contact: Sal (sallat@protonmail.com) or Nick (niktonick@gmail.com) Date: May 22 Time: 3:00 PM Coordinates: <https://plus.codes/9FFW83RF+3M5> Location: We're meeting near the blue gazebo in Humlegården; I will have an "ACX" sign. Group info: Stockholm has monthly meetups, email Sal to find out more

**TOULOUSE, FRANCE** Contact: APE (barsom.maelwys@gmail.com) Date: May 12 Time: 6:30 PM Coordinates: <https://plus.codes/8FM3HCQW+9H> Location: Le Biergarten (60 Grande Rue St. Michel). If the weather is nice, we'll be sitting outside. In any case, we'll have a plant on the table.

### NORTH AMERICA

**ANCHORAGE, AK** Contact: Matthew (7o2wzrybd@mozmail.com) Date: May 1 Time: 2:00 PM Coordinates: <https://plus.codes/93HG6485+Q6> Location: I'll be sitting at the end of the "main steppy zone" closest to E Street and the Whale Mural, wearing a brown Carhartt-ish jacket, with a red backpack in front of me. If a couple people show interest (slash show up), we can look at adjourning to the Fat Ptarmigan for some pizza and warmth as well.

**ANN ARBOR, MI** Contact: Sam (samrossini9@gmail.com) Date: April 23 Time: 2:00 PM Coordinates: [86JR778F+H3](https://plus.codes/86JR778F+H3) Location: We will meet at the Burns Park Warming Hut, at the tables behind the hut. I'll be wearing black and carrying a white sign with "ACX" on it. Group info: Ann Arbor ACX has been meeting regularly since 2019, contact Sam to learn more.
**ATLANTA, GA** Contact: Steve (steve@digitaltoolfactory.net) Date: May 14 Time: 2:00 PM Coordinates: <https://plus.codes/865QRH2F+V8> Location: [Bold Monk Brewing](https://boldmonkbrewingco.com/) 1737 Ellsworth Industrial Blvd NW suite d-1, Atlanta, GA 30318 Notes: The meetup will be either upstairs or in the outside area. I will have a sign that says "ACX Atlanta" Group info: [SSC Atlanta](https://www.lesswrong.com/groups/SMQQP9LQHSZg2xgHH) meets monthly

**AUSTIN, TX** Contact: Silas Barta (sbarta@gmail.com) Date: June 4 Time: 12:00 PM Coordinates: <https://plus.codes/862487GM+95> Location: Brewtorium, indoors, at the tables with signs. Address: 6015 Dillard Circle Suite A, Austin, TX, 78752

**BALTIMORE, MD** Contact: Rivka (rivka@adrusi.com) Date: April 14 Time: 8:00 PM Coordinates: <https://plus.codes/87F5774M+4W> Location: Performing Arts and Humanities Building, UMBC. We will be outside and I will have a sign Group info: We meet every Sunday at 7pm. Half are virtual and half are in person.

**BELLINGHAM, WA** Contact: Alex (bellinghamrationalish@gmail.com) Date: April 27 Time: 6:30 PM Coordinates: <https://plus.codes/84WVQG45+XM> Location: Elizabeth Station, 1400 W Holly St, Bellingham, WA 98229. We'll be outside if the weather is nice (inside if not), and we'll have a sign. Notes: Please [RSVP on Meetup](https://www.meetup.com/bellingham-rationalish-community/events/284977134/) (though we do have regular attendees who don't use Meetup, it'll help me assess new turnout) Group info: [Bellingham Rationalish Community](https://www.meetup.com/bellingham-rationalish-community/) is a recently formed group. The ACX Schelling Meetup 2022 will be our second meeting, not including the Schelling Meetup 2021 that predated the group but has highly overlapping attendance

**BERKELEY, CA** Contact: Mingyuan (meetupsmingyuan@gmail.com) Date: May 7 Time: 1:00 PM Coordinates: <https://plus.codes/849VVPCP+VP> Location: UC Berkeley, the lawn just north of Free Speech Bikeway and east of the traffic circle Group info: The Bay Area has lots of events, announced on [the mailing list](https://groups.google.com/g/bayarealesswrong) and [Facebook](https://www.facebook.com/groups/566160007909175)

**BOSTON, MA** Contact: Robi Rahman (robirahman94@gmail.com) Date: May 14 Time: 5:00 PM Coordinates: <https://plus.codes/87JC9VCG+7R> Location: John F. Kennedy Park, Cambridge, Massachusetts, near the picnic tables Notes: Text Robi at 703-981-8526 if you can't find the group. Facebook event [here](https://www.facebook.com/events/408685214418835/). Group info: [Boston ACX](https://www.lesswrong.com/groups/xfenNi9uqer8Wyjg7) meets monthly

**BOULDER, CO** Contact: Josh (josh.sacks+acx@gmail.com) Date: April 24 Time: 4:00 PM Coordinates: [85GP2V96+JQ](https://plus.codes/85GP2V96+JQ) Location: 9191 Tahoe Ln, Boulder CO, 80301 Notes: LW event [here](https://www.lesswrong.com/events/AuA4RapP9FFw5QH6i/boulder-acx-meetup-sun-apr-24).

**BRYAN/COLLEGE STATION, TX** Contact: Kenny Easwaran (easwaran@gmail.com) Date: April 29 Time: 5:30 PM Coordinates: <https://plus.codes/8625JMFC+5J> Location: Back patio of Torchy's Tacos, on the east side of Texas A&M campus, at Texas Ave and Walton Dr. I'll have brightly colored hair and perhaps a yellow umbrella.
**CALGARY, AB** Contact: David Piepgrass (qwertie256@gmail.com) Date: April 16 Time: 2:00 PM Coordinates: [9538324C+CH9](https://plus.codes/9538324C+CH9) Location: Marlborough Mall, Food Court

**CHAMPAIGN-URBANA, IL** Contact: Ben (bmcfluff@gmail.com) Date: April 24 Time: 1:00 PM Coordinates: <https://plus.codes/86GH4Q4F+H4> Location: UIUC main quad, south end. Notes: RSVPs (by email) appreciated but not at all required.

**CHICAGO, IL** Contact: Todd (todd@southloopsc.com) Date: May 7 Time: 2:00 PM Coordinates: [86HJV9F9+CV](https://plus.codes/86HJV9F9+CV) Location: South Loop Strength & Conditioning (Upstairs Mezzanine) Notes: Check out meetup details (and add future events to your calendar) here: <https://chicagorationality.com> Feel free to come even if you haven't done the reading! Group info: [Chicago Rationality](https://www.lesswrong.com/groups/h6nXsLE3NhJJtJfQj) meets on the first Saturday of every month

**COLUMBUS, OH** Contact: Daniel (daniel.m.adamiak@gmail.com) Date: April 24 Time: 3:00 PM Coordinates: <https://plus.codes/86FVX3C3+RG7> Location: Clifton Park Shelterhouse (this is in the SW corner of Jeffrey Park). Group info: We meet ~monthly, contact Daniel to learn more

**DALLAS, TX** Contact: Ethan (ethan.morse97@gmail.com) Date: May 1 Time: 2:00 PM Coordinates: <https://plus.codes/8645V76G+P2> Location: Flag Pole Hill Park. We'll be at the west side picnic tables and have a sign that says "Schelling Meetup". Notes: We'll have food and soft drinks. Free parking is available in the lot or along Doran Circle and there are public restrooms available. If it is more than drizzling rain, we will meet at White Rock Coffee. Please RSVP to Ethan if you plan on attending, but you're welcome even if you don't RSVP! Group info: We are a friendly and welcoming group of about 15-20 active meetup attendees. We have an in-person meetup every other week; activities include discussion groups, book club, board game night, and outdoorsy events.

**DENVER, CO** Contact: Ian (iansphilips@gmail.com) Date: April 16 Time: 10:00 AM Coordinates: <https://plus.codes/85FQP2WX+G3> Location: City Park Pavilion, I'll wear a pig shirt and have a sign that says ACX meetup

**DETROIT, MI** Contact: Matt Arnold (matt.mattarn@gmail.com) Date: May 7 Time: 8:00 PM Coordinates: <https://plus.codes/86JR9WG9+R6> Location: Tenacity Craft, 8517 2nd Ave, Detroit MI 48202

**DURHAM, NC** Contact: William D Jarvis Jr. (willdjarvis@gmail.com) Date: April 21 Time: 7:30 PM Coordinates: <https://plus.codes/8773X4Q3+RW> Location: Ponysaurus Brewing Group info: [Triangle ACX/EA](https://www.lesswrong.com/groups/5q5ZspGeJ9GMmdTSi) meets every Thursday

**EDMONTON, AB** Contact: JS (ta1hynp09@relay.firefox.com) Date: April 28 Time: 6:30 PM Coordinates: <https://plus.codes/9558GGFM+88> Location: Duggan's Boundary Irish Pub Notes: See the [event page](https://www.lesswrong.com/events/yzzEBeq4md2QmyoQd/april-28th-new-members-meetup) for detailed information. Group info: We currently meet on the last Thursday of each month.

**GRASS VALLEY / NEVADA CITY, CA** Contact: Max Harms (raelifin@gmail.com) Date: April 16 Time: 2:00 PM Coordinates: <https://plus.codes/84FW7X6J+XG> Location: Fable Coffee in Nevada City (location changed due to rain)

**HALIFAX, NS** Contact: ideopunk (conorbarnes93@gmail.com) Date: April 23 Time: 1:00 PM Coordinates: <https://plus.codes/87PRJCQ6+V9> Location: Coburg Social. We will have a blue pyramid for identification. Notes: There's a grand total of three known rationalists in Halifax, and we want more of them.
**HARRISBURG, PA** Contact: Eric Borowsky (harrisburgeric@gmail.com) Date: April 16 Time: 6:00 PM Coordinates: <https://plus.codes/87G57468+H7> Location: Cafe Fresco restaurant in Harrisburg Group info: Harrisburg has regular ACX meetups; contact Eric for more info

**HONOLULU, HI** Contact: Matt / twitter.com/mpopv (mattpopovich@outlook.com) Date: April 23 Time: 4:00 PM Coordinates: <https://plus.codes/73H475M3+VH> Location: Either inside or adjacent to (and visible from) the big open grass area on Magic Island at Ala Moana Beach Park. Look for the big "ACX" sign. We will be sitting on the grass so you may want to bring a blanket, towel, or beach chair to sit on. Notes: Consider bringing water and sunscreen. It would be helpful if you RSVP in the [Google Group](https://groups.google.com/g/acx-meetup-honolulu).

**HOUSTON, TX** Contact: Naman / nmehndir (nmehndir@gmail.com) Date, Time, Location: Undecided, we'll figure this out in the [Houston SSC-LW Discord server](https://discord.gg/qevJu59atd)

**HUNTSVILLE, AL** Contact: Nate (natestrum@rocketmail.com) Date: April 17 Time: 11:00 AM Coordinates: <https://plus.codes/866MP9CP+HV> Location: Fresko Grille Modern Mediterranean

**IRVINE, CA** Contact: Nick (cohenskijanuary1@mail.com) Date: May 7 Time: 2:00 PM Coordinates: <https://plus.codes/8554M526+6H> Location: Irvine, University Town Center Benches

**KANSAS CITY, MO** Contact: Alex (alex.hedtke@gmail.com) Date: June 3 Time: 6:30 PM Coordinates: <https://plus.codes/86F72CM7+V5> Location: Minsky’s Pizza - 5105 Main St. Tell the hostess you are here for the ACX meetup (we will be located in their dedicated meeting room) Notes: We'll be discussing A Human's Guide to Words, but if you haven't read it, please don't let that stop you from attending! Group info: [Kansas City Rationalists](https://www.meetup.com/Kansas-City-Rationalists/) has regular events

**LAS VEGAS, NV** Contact: Jonathan Ray (ray.jonathan.w@gmail.com) Date: April 24 Time: 1:00 PM Coordinates: <https://plus.codes/85864PFF+3P> Location: Desert Breeze Park at one of the southern pavilions with an ACX sign Group info: Subscribe to the [LessWrong group](https://www.lesswrong.com/groups/7WwR5k2aEjDeRMpKP) or [Substack](https://jwray.substack.com/) to get notified about future Vegas ACX meetups

**LOS ANGELES, CA** Contact: Robert (bobert.mushky@gmail.com) Date: April 20 Time: 6:30 PM Coordinates: <https://plus.codes/85632H87+P5> Location: 3266 Inglewood Blvd, Los Angeles, CA 90066 Group info: [Los Angeles Rationality](https://www.lesswrong.com/groups/GSN7BypgiJcjEiRRS) meets every Wednesday

**MADISON, WI** Contact: Mary (mmwang@wisc.edu) Date: May 7 Time: 2:00 PM Coordinates: <https://plus.codes/86MG3H3X+XW> Location: 1022 High St. Blue House w/red porches; if possible we will meet outside in my large back yard. Group info: Contact Mary to be added to the mailing list

**MEMPHIS, TN** Contact: Michael (michael19571202@outlook.com) Date: April 16 Time: 1:00 PM Coordinates: <https://plus.codes/867F5X2P+RHC> Location: French Truck Coffee at Crosstown Concourse, Central Atrium, 1350 Concourse Ave, Memphis, TN 38104

**MEXICO CITY** Contact: Francisco (fagarrido@gmail.com) Date: May 7 Time: 4:00 PM Coordinates: <https://plus.codes/76F2CRH5+J2> Location: Cafe Toscano Notes: Please RSVP at fagarrido@gmail.com. The place is not very large, so if there is too much interest I may change the location.
**MIAMI, FL** Contact: Eric Magro (eric135033@gmail.com) Date: April 17 Time: 5:00 PM Coordinates: <https://plus.codes/76QXRR65+W7> Location: Buckminster Fuller's Fly's Eye Dome, 140 NE 39th St #001; the group will be seated at a table on the west side of the dome. There will be an ACX MEETUP sign on the table, but the quickest way to find us will be to send a message to me by e-mail or on the [Discord server](https://discord.gg/tDf8fYPRRP). Notes: You can RSVP to this event [on LessWrong](https://www.lesswrong.com/events/2erRGhQ9HgnerWkib/miami-acx-april-meetup) Group info: [Miami ACX](https://www.lesswrong.com/groups/YmgPiAS7j7s4vmuNh) has been meeting monthly for several years

**MOBILE, AL** Contact: Susan (susanstrum@protonmail.com) Date: April 11 Time: 6:00 PM Coordinates: <https://plus.codes/862HMXV3+FM> Location: Greer's St. Louis Market - rooftop deck

**MONTREAL, QC** Contact: E (90u610sye@relay.firefox.com) Date: April 24 Time: 12:30 PM Coordinates: <https://plus.codes/87Q8GC89+37> Location: Jeanne-Mance Park, at the corner of Duluth and Esplanade. Will be wearing a grey baseball cap. Notes: Please check the [LW event page](https://www.lesswrong.com/events/mTJP56JSKpgBDntrD/acx-montreal-meetup-apr-24-2022) the day of in case of cancellation due to rain (you can also RSVP).

**NEW YORK, NY** Contact: Shaked (shaked.koplewitz@gmail.com) Date: April 24 Time: 4:00 PM Coordinates: <https://plus.codes/87G7PX9M+4J> Location: South Meadow, Lower Manhattan, by Warren St & River Terrace (near the pavilion) - Look for signs that say "ACX Meetup" or familiar faces Group info: [Overcoming Bias NYC](https://www.lesswrong.com/groups/4ee8NedvMNvoSitzj) meets weekly and has been active since 2009

**OMAHA, NE** Contact: TracingWoodgrains (tracingwoodgrains@gmail.com) Date: April 30 Time: 11:00 AM Coordinates: <https://plus.codes/86H6724Q+HF> Location: Spielbound Board Game Cafe Notes: I am happy to run meetups but will not be here long-term, so would prefer the area to have a more stable host.

**OTTAWA, ON** Contact: Tess (rationalottawa@gmail.com) Date: May 28 Time: 6:30 PM Coordinates: <https://plus.codes/87Q697XV+4V> Location: We'll meet on the North side of Dow's Lake, at the statue called The Man With Two Hats, and we'll have a sign that says ACX on it. Group info: The [Ottawa group](https://www.facebook.com/groups/rationalottawa) meets monthly

**PHILADELPHIA, PA** Contact: Wes and Diana (wfenza@gmail.com) Date: April 15 Time: 6:00 PM Coordinates: <https://plus.codes/87F6XR3M+9J> Location: Uptown Beer Garden, 1500 JFK Boulevard

**PHOENIX, AZ** Contact: Ben Morin (benjamin.j.morin@gmail.com) Date: April 30 Time: 11:00 AM Coordinates: <https://plus.codes/8559FWF6+P8> Location: Encanto Park, Phoenix

**PITTSBURGH, PA** Contact: Justin (pghacx@gmail.com) Date: April 16 Time: 2:00 PM Coordinates: <https://plus.codes/87G2C3PR+QP7> Location: Frick Park, by the Beechwood Gate Entrance IF weather is dry. In the event of rain, the indoor venue will be Crazy Mocha Coffee in Squirrel Hill (2100 Murray Ave) Notes: Please email pghacx@gmail.com to RSVP and be added to the mailing list! If we change venue, an email will go out by noon the day of (>2 hours before the scheduled meetup time).
**PLAYA DEL CARMEN, MEXICO** Contact: Drew (andrew.d.cutler@gmail.com) Date: April 13 Time: 6:00 PM Coordinates: <https://plus.codes/76GJJWPH+WQ> Location: Tohuka Park, in the main pavilion

**PORTLAND, OR** Contact: Sam Celarek (support@pearcommunity.com) Date: April 24 Time: 4:00 PM Coordinates: <https://plus.codes/84QVG9G7+XWG> Location: 1548 NE 15th Ave, Portland, OR 97232, with a large "PEAR" sign outside of the building Notes: Call me (Sam) at 513-432-3310 if you can't find it. Please RSVP on [Meetup](https://www.meetup.com/Portland-Effective-Altruists-and-Rationalists/events/sbndssydcgbgc/) or [LessWrong](https://www.lesswrong.com/events/bwJHTLh55d4eti4wG/acx-schelling-meetup) so we can guesstimate food better. Group info: [Portland Effective Altruism and Rationality](https://www.lesswrong.com/groups/EbZXGajoX36MqtYz6) is very active. We have book clubs, bi-weekly AI safety meet-ups, bi-weekly topical meet-ups, bi-weekly socials, and an active [Discord](https://discord.gg/4gkURRyZWP).

**RENO, NV** Contact: Steven Lee (stevenbrycelee@gmail.com) Date: April 23 Time: 3:30 PM Coordinates: <https://plus.codes/85F2G46W+FG> Location: Crissie Caughlin Park Notes: I'll be wearing a red shirt, over by the benches if we can get space. There's some parking on site; otherwise, street parking is available as well

**SALT LAKE CITY, UT** Contact: Ross (wearenotsaved@gmail.com) Date: April 23 Time: 3:00 PM Coordinates: <https://plus.codes/85GCP4WF+VH> Location: Liberty Park, just across the parking lot from the ChargePoint stations Group info: Salt Lake City has regular meetups, contact Ross for more info

**SAN DIEGO, CA** Contact: Julius (julius.simonelli@gmail.com) Date: May 7 Time: 1:00 PM Coordinates: <https://plus.codes/8544PVQ8+MF> Location: We'll meet near the bench in Bird Park. I'll wear a red shirt. Group info: [San Diego ACX](https://www.lesswrong.com/groups/JxwQaHGJiLW6xMcAx) meets roughly once a month

**SAN FRANCISCO, CA** Contact: Maggie (reduplicate.totto@gmail.com) Date: April 16 Time: 4:30 PM Coordinates: <https://plus.codes/849VQJF4+6G> Location: Mission Creek Park Pavilion

**SEATTLE, WA** Contact: Nikita (sokolx@gmail.com) Date: May 1 Time: 5:00 PM Coordinates: <https://plus.codes/84VVJM7H+4Q> Location: Optimism Brewing Company (1158 Broadway, Seattle). There are board games you can borrow, and feel free to bring board games (or other cool stuff) of your own. The bar has both beer and non-alcoholic options. The organizer will be wearing an orange hoodie. Notes: Please [RSVP on LessWrong](https://www.lesswrong.com/events/ccpYgr37sM6meJqyk/acx-schelling-meetup-seattle-2022) or on the [Facebook event](https://www.facebook.com/events/974444973205892/) Group info: Seattle has [an active rationality/EA community](https://www.facebook.com/groups/seattlerationality/) that meets about twice a week

**ST. LOUIS, MO** Contact: John Buridan (littlejohnburidan@gmail.com) Date: May 28 Time: 1:00 PM Coordinates: <https://plus.codes/86CFJM3M+PV> Location: The Picadilly at Manhattan

**SUNNYVALE, CA** Contact: IS (svmeetup@protonmail.com) Date: May 14 Time: 2:00 PM Coordinates: <https://plus.codes/849V9XG6+X9F> Location: Washington Park, 840 W Washington Ave, Sunnyvale, CA 94086; on the roundish grassy area in the northeast corner of the park; we'll have a dark blue picnic blanket and an ACX Meetup sign attached to a red camping chair.
(seanaubin@gmail.com) Date: May 1 Time: 1:00 PM Coordinates: <https://plus.codes/87M2JHQR+2X> Location: The Bentway, underneath the Gardener Expressway, by the picnic tables Notes: If it is colder than 10° C, I will change the location to ProteinQure with vaccine passport checks. **VANCOUVER, BC** Contact: Tom Ash & Dirk Haupt (events@philosofiles.com) Date: April 20 Time: 7:00 PM Coordinates: <https://plus.codes/84XR7WGH+PH> Location: East Van Brewing, at Commercial & Venables. We'll have a sign on the table. Notes: We'll have open discussion and beers, plus games! This coinciding with 4/20 (and this being Vancouver and all), we'll have some discussion of drugs and rationality, and are soliciting reading suggestions thereon. And Dirk (aka Cornelis) will be running games like Mafia to help new folks get to know people. For more see the [Facebook event](https://www.facebook.com/events/715270722821272/). **WASHINGTON DC** Contact: Cassander (cursedcassander@gmail.com) Date: April 23 Time: 7:00 PM Coordinates: <https://plus.codes/87C4WX4F+VC> Location: 1002 N Street NW - It's a house, follow the sign for "Free Utility" Notes: We're 3 blocks from the Vernon Square Metro and street parking is easy. There's also a paid lot 1210 9th Street and whoever claims it first can have the space in my garage. Group info: We meet once a month downtown, and often have additional boardgaming days, hikes, or other events at other locations. To find out more, sign up for the [Facebook](https://www.facebook.com/groups/433668130485595) or [Google Groups](https://groups.google.com/u/1/g/dc-slatestarcodex). **WATERLOO, ON** Contact: Jenn (hi@jenn.site) Date: May 15 Time: 1:00 PM Coordinates: <https://plus.codes/86MXFF7H+F3F> Location: At the benches lining the south of Waterloo Public Square. I'll be wearing white doc martens and carrying an "ACX" meetup sign. From the square we can decide where to have lunch :) Notes: If it's rainy, we'll meet under the covered entrance to the Starbucks at 95 King St S instead. Not inside - the shaded area where the outdoor tables are. Group info: I used to host regular SSC meetups but have since lost the energy needed to sustain that. However, I'd be interested in trying to restart with a co-host if anyone else is interested.
# Dictator Book Club: Xi Jinping

###### [Previous entries: [Erdogan](https://astralcodexten.substack.com/p/book-review-the-new-sultan?s=w), [Modi](https://astralcodexten.substack.com/p/book-review-modi-a-political-biography?s=w), [Orban](https://astralcodexten.substack.com/p/dictator-book-club-orban?s=w)]

*[The Third Revolution](https://amzn.to/3xq08QX)*, by Elizabeth Economy, promises to explain “the transformative changes underway in China today”. But like her namesake, Dr. Economy doesn’t always allocate resources the way I would like.

I came to the book with questions like: How did the pre-Xi Chinese government work? How was it different from dictatorship? What safeguards did it have against dictatorship? Why hadn’t previous Chinese leaders become dictators?

And: How did Xi come to power? How did he defeat those safeguards? Had previous Chinese leaders wanted more power? How come they failed to get it, but Xi succeeded?

*Third Revolution* barely touched on any of this. It mostly explained Xi’s domestic and foreign policies. Some of this was relevant: a lot of Xi’s policies involve repression to prop up his rule. But none of it answered my key questions. So this is less of a book review than other Dictator Book Club entries. It’s a look through recent Chinese history, with *The Third Revolution* as a very loose inspiration.

## How Does China’s Government Work?

The traditional answer is a flowchart like this one ([source](https://isdp.eu/content/uploads/2018/02/NPC-Backgrounder-1.pdf)):

But you could give a similarly convoluted flowchart for America, and it would tell people much less than words like “democracy” or “balance of powers”. What’s the Chinese equivalent? I found it a little more helpful to see it diagrammed as a series of nested squares:

Very oversimplified, somewhat false.

The inner levels have real power, and the outer layers are theoretically overseers but actually rubber stamps. Things get more and more rubber-stampy as you go out, culminating in the National People’s Congress, which recently voted to re-elect Xi by a vote of 2,970 in favor, 0 against - it’s so irrelevant that it’s literally called “the NPC”.

Who chooses the members of the inner groups? In theory, the outer groups; for example, the Central Committee is supposed to elect the Politburo Standing Committee. In practice, these selections tend to be of the “2,970 in favor, 0 against” variety, so they must be taking marching orders from someone. Who? The Chinese government doesn’t talk about it much, but probably the members of the Politburo Standing Committee hand-pick everyone, including the Paramount Leader and their own successors.

How do they pick? Mostly patron-client relationships. Every leading politician cultivates a network of loyal supporters; if he takes power, he tries to put as many of his people into top posts as he can. The seven Politburo members wheel and deal with each other about whose clients should get which positions, including any unoccupied Politburo seats. If the two-word description of US politics is “democracy, checks-and-balances”, then the two-word description of Chinese politics is “oligarchy, patrons-and-clients”.

If this seems exotic, it shouldn’t: it’s not much different from how the US fills unelected posts like “ambassador” and “White House staffer”. The Trump presidency put this into especially sharp relief, either because Trump did it more blatantly than usual or just because Trump’s clients were so obviously different from the normal Washington crowd.
Consider eg the appointment of Jeff Sessions (among the first members of Congress to endorse Trump) as Attorney General. In the US, this is a peripheral part of the system, checked by democracy. In China, it’s the whole game.

## Was China’s Government A “Technocracy”?

For a while, all (or almost all) of China’s top officials had engineering degrees. When Xi Jinping first joined the Politburo Standing Committee in 2008, eight of its nine members were engineers. Paramount Leader Hu Jintao was a hydroelectric engineer. His second-in-command Wen Jiabao was a geological engineer. There were two electrical engineers, a petroleum engineer, a radio engineer, and two chemical engineers (including Xi himself). The only non-engineer was Li Keqiang, an economist. And this was actually a *low point* in engineers’ dominance of Chinese power. The term before, 100% of Politburo Standing Committee officials had been engineers!

What’s going on? For one thing, Deng Xiaoping thought engineers were cool, and he was powerful enough to do whatever he wanted. A government made up entirely of engineers? Sure, whatever you say. And since the top echelons of Chinese government appoint their own successors, these engineers could appoint other engineers and so on.

But also: during the Cultural Revolution, [about half](https://unesdoc.unesco.org/in/documentViewer.xhtml?v=2.1.196&id=p::usmarcdef_0000156552&file=/in/rest/annotationSVC/DownloadWatermarkedAttachment/attach_import_e21c278c-8d3f-48e8-839d-215b37f47047%3F_%3D156552eng.pdf&locale=en&multi=true&ark=/ark:/48223/pf0000156552/PDF/156552eng.pdf#%5B%7B%22num%22%3A31%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C-118%2C847%2C0%5D) of Chinese people who got degrees at all got engineering degrees. The Cultural Revolutionaries were *really* not big on education (according to one article, “Xi's secondary education [was cut short] when all secondary classes were halted for students to criticise and fight their teachers.") But engineering was useful for building factories, and so was grudgingly tolerated. That meant that of the people smart and ambitious enough to get into college at all, half did engineering.

The other half? I’m not sure. Law is a popular major for would-be politicians in the US, but here’s [a Chinese person explaining](https://www.quora.com/Why-do-Chinese-political-leaders-have-engineering-degrees-whereas-their-American-counterparts-have-law-degrees) why it doesn’t work that way in China (short version: China doesn’t have great rule of law, so lawyers don’t matter much and are low status).

[Here is an article](https://foreignpolicy.com/2019/07/04/chinas-overrated-technocrats-stem-engineering-xi-jinping/) telling us not to take China’s engineer-kings too seriously. It argues that (aside from Deng’s original picks) most of them never did much engineering, and just studied the subject in school as a generic prestigious-sounding degree to springboard their government career. Chinese engineering curricula are easy, and powerful people frequently cheat or pay others to write their dissertations.

Aside from a few of Deng’s personal picks, we should think of this less as “China is a magic place where rational scientists hold power”, and more as “for idiosyncratic reasons, social climbers in China got engineering degrees.” Certainly none of these people were *selected* for the Politburo on the basis of their engineering acumen. They got their power by bribing, flattering, and backstabbing people, just like everyone else.
In any case, Xi’s old Politburo class was the last one to be made primarily of engineers. The current Politburo has only one engineer - Xi himself.

## How Autocratic Was Pre-Xi China?

This varied a lot. **Mao Zedong** was definitely an autocrat. After his death, everyone backstabbed each other furiously for several years and **Deng Xiaoping** ended up on top. Deng had absolute power but thought that was bad, so he created lots of institutions that were supposed to prevent future leaders from exercising the control he had, then sort of backed down. He appointed former Shanghai mayor **Jiang Zemin** as his successor.

Jiang followed Deng’s anti-absolute-power rules, but he was able to get most of what he wanted anyway. Partly this was because he was a really skilled politician, partly it was because he had a really good secret police force with personal loyalty to him and lots of blackmail on everyone else.

[“Toad Worship”](https://en.wikipedia.org/wiki/Toad_worship) is a class of Chinese internet meme where people ironically pretend to admire Jiang Zemin. No, I don’t get it either.

When Jiang took power, he filled important positions with his clients. Mostly these were his underlings from Shanghai; they became known as the Shanghai Gang. The Shanghai Gang stuck together and supported its own, and operated kind of as a “political party” “representing” the interests of east coast urban elites.

(“Jiang And His Shanghai Gang” sounds like a good name for a kids’ TV show, or maybe a hip-hop group.)

When Jiang reached his term limit, he stepped down in favor of Hu Jintao, former secretary of the Communist Youth League. The CYLers formed a power bloc distinct from the Shanghai Gang, drawing more on inland rural commoners. The two blocs may have made some kind of power-sharing agreement behind closed doors, probably involving a pledge to alternate who got the Paramount Leader position.

Hu was not quite as adept a politician as Jiang, and was disadvantaged by his opponent having spent ten years consolidating power (plus the secret police), so the remaining Shanghai Gangers frequently outmaneuvered him. He served for ten years, then dutifully turned over power to the Shanghai favorite, Xi Jinping.

This isn’t a great answer to the question “how autocratic was pre-Xi China?” In particular, I don’t get exactly what prevented Jiang or Hu from seizing power, overstaying their term limits, or killing their enemies (I assume it was some mix of not being sure the military would back them and not wanting to destabilize the country, but I don’t feel like I have a gears-level understanding). I also don’t get what it meant for some Chinese leaders to be better at pursuing their policy agenda than others: what levers did they pull? How did they quash dissent? But at the very least, it was non-autocratic enough that there were two factions, and sometimes the out-of-power faction could act as a check on the ruler.

## How Did Xi Become Paramount Leader?

Hu Jintao retired when his term ran out, it was the Shanghai Gang’s turn to pick a new leader, and they picked Xi. Why? It all happened behind closed doors, but we can guess at their reasoning.

Xi was a loyal Shanghaier. He was the client of **Zeng Qinghong**, himself a client of Jiang Zemin, and had been party secretary in Shanghai. His loyalty to the faction was unimpeachable.

But he was a Shanghaier who could appeal to the CYL. As a teenager during the Cultural Revolution, he was deported to the interior province of Shaanxi to work as a farmer and ditch-digger.
His years of hard labor in Chinese flyover country made him more palatable to the hicks of the CYL than the average East Coast elite would be.

Xi’s father, **Xi Zhongxun**, was former Vice-President of China. That made Xi a “princeling”, ie a descendant of Communist “royalty”. China-watchers disagree on how organized the princelings are and whether they count as a “faction”. But they’re at least *kind of* a faction, and it didn’t hurt that he represented them too.

Finally, a few months before the “election”, powerful Politburo member **Bo Xilai** was arrested on sensational corruption charges. In a country where almost every high official was corrupt, Bo had gone above and beyond: while trying to cover his misdeeds, he’d killed a British national and threatened his own client, the chief of police. The chief fled to an American consulate and might have tried to defect to the US. We [still don’t know the whole story](https://en.wikipedia.org/wiki/Wang_Lijun_incident), but the whole Chinese leadership got enraged and sentenced Bo to life in prison.

Bo had made powerful enemies, and those enemies happened to control the censors that year. They chose *not* to censor the Bo story - even though it was embarrassing to the government - because they wanted to destroy Bo and his faction using public humiliation. Ordinary Chinese people had long suspected everyone was corrupt, but now they had front-row seats as the leadership confirmed it was true for at least this one guy.

The public trial and investigation had its intended effect: Bo Xilai and his faction were torn apart, their region of politics-space salted so thoroughly that nothing would ever grow there again. But it left the leadership nervous: the public was spooked about corruption. They needed to credibly signal that they took the problem seriously.

Xi Jinping was their guy. He’d made rooting out corruption a focus of his previous posts, and had a reputation for being non-corrupt himself. He had already been a leading candidate to succeed Hu, but the Bo Xilai incident made him a shoo-in.

## How Did Xi Gain Dictatorial Levels Of Power?

I can’t find good information on this, but here are three preliminary hypotheses.

#### 1: Anti-Corruption?

Xi launched [the most extensive anti-corruption campaign in Chinese history](https://en.wikipedia.org/wiki/Anti-corruption_campaign_under_Xi_Jinping), which caught many high-level officials. Dictators throughout the world have used anti-corruption campaigns to eliminate opponents (see eg [Erdogan](https://astralcodexten.substack.com/p/book-review-the-new-sultan?s=w)), and the timing is right, so this would be a really plausible theory. The guy who ran the anti-corruption effort, **Wang Qishan**, is a childhood friend of Xi’s (they were roommates together on the farm commune) and 100% one of his clients and loyalists.

The only problem with this theory is that I can’t quite make it work. Most power in China lies with the Politburo Standing Committee. But Xi never investigated any PSC members for corruption (he did investigate an ex-member, but that doesn’t help him). Naively I would expect that seizing power would involve investigating and imprisoning his PSC rivals, until his supporters dominated the body. But as far as I can tell he didn’t do that. Maybe fear keeps them in line?
You can read a stronger argument for this in [Two Birds, One Policy: The Establishment Of The National Supervisory Commission As A Factional And Centralizing Tool](https://fluxirr.mcgill.ca/article/download/54/45), but it’s not going to answer my question.

#### 2: Sheer Luck About Who Got Promoted When?

This is from Choi 2021, [From Power Balance To Dominant Faction In Xi Jinping’s China](https://www.cambridge.org/core/services/aop-cambridge-core/content/view/C311C96EB52F5186599E67112E5CB821/S0305741021000473a.pdf/from-power-balance-to-dominant-faction-in-xi-jinpings-china.pdf):

> When Xi assumed the mantle of general secretary of the CCP, he enjoyed a relatively favourable power configuration at the top, compared with his two predecessors. Considering that the general secretary of the CCP had a two-term limit (ten years) and that the CCP has rules that insist on the step-by-step promotion of officials, rather than jumping levels of hierarchy, favourable initial conditions could provide a significant advantage to the heads of the CCP in empowering their factions.
>
> When Jiang Zemin suddenly took the position of CCP general secretary following the suppression of the Tiananmen movement in 1989, his power was constrained by the revolutionary generation led by Deng Xiaoping. Similarly, when Hu began to govern in 2002, five of the then nine members of the Politburo Standing Committee, the most influential positions in the CCP, belonged to Jiang’s faction. In contrast, when Xi took power in 2012, only one of the seven members of the Standing Committee was from Hu’s faction. As we would expect, Xi’s faction was generally weaker than Hu’s faction when Xi became CCP leader in 2012. However, because Jiang’s faction supported Xi and still had a powerful presence in the Standing Committee, Xi actually enjoyed more influence in the Standing Committee on his first day in office than Hu ever did. Moreover, Xi took the position of chairman of the Central Military Commission at the same time as he became general secretary in 2012. In contrast, Hu only assumed this position two years after becoming the general Party secretary. These conditions may have provided Xi with an opening to build up his faction quickly.

One thing that confuses me about this explanation is that Xi quickly pivoted from being the champion of the Shanghai Gang to having his own faction, sometimes called the Xi Gang or the Tsinghua Gang.

(a quick gripe: sometimes it feels like half the people in Chinese politics have gangs, the other half are *named* Gang, and it takes a lot of mental overhead to figure out which is which. Consider eg this headline: [China Official Yang Gang Investigated For Corruption](https://www.bbc.com/news/world-asia-china-25524949). How long did it take you to parse that this was a single person?)

Starting with your faction controlling a lot of power is good *if* you keep the faction. But Xi didn’t, so how did he benefit? Maybe the current “Xi Gang” and the old “Shanghai Gang” are the same people by a different name? But then how do we explain headlines like [“Shanghai Gang” Seeks Xi’s Ouster](https://www.asiasentinel.com/p/shanghai-gang-seeks-xi-jinping-ouster?s=r)? Maybe the term “Shanghai Gang” is now used for the remnants of the old Shanghai Gang who refused to switch allegiance to Xi? Or maybe Xi used his Shanghai support to get absolute power, and then once he got it he turned on his old supporters.

#### 3: Tsinghua University Has a *Really* Good Alumni Relations Department?
I realize this is a weird thing to attribute the rise of a dictator of the world’s largest country to, but this is sort of the conclusion of [The Rise Of The Tsinghua Clique: Cultivation, Transfer, And Relationships In Chinese Elite Politics](http://aacs.ccny.cuny.edu/2018conference/20180926_Tsinghua_Clique.pdf).

Tsinghua and Peking Universities, both in Beijing, are the Harvard and Yale of China. In 2002, Tsinghua administrators were sad that their graduates didn’t get as many top posts in government as Peking grads did. They started a program where Tsinghua grads interned with, networked with, and supported other Tsinghua grads. It was a *really* good program. In a country where everything works on informal patron-client relationships anyway, it provided a helpful nucleus of “have you considered making your patron-client relationships based around going to the same university?”

Also, Tsinghua paid its grads to take lower-paid (but more prestigious) government positions. Anywhere else, a new grad might have to choose between a high-paid job in private industry, or a very-low-paid (but prestigious and high room-for-advancement) job as a mid-level official in an outlying province, and might have gone with the private offer. But Tsinghua says “take the provincial official job, and we’ll pay you enough to make it worthwhile, just so we can say we have graduates in important positions”.

If the paper is to be believed, all of this really worked. Before Tsinghua started its push, it was usually equal to or behind Peking in officials produced. Now in one supposedly-representative set of 38 high officials, the score is Tsinghua 11, Peking 1.

Xi is a Tsinghua graduate. If the rule is “all Tsinghua grads support each other”, then having high officialdom suddenly get saturated with Tsinghua grads, *and* having the Paramount Leader be a Tsinghua grad, is kind of an Insta-Faction Just Add Water situation. Also, upon becoming leader, Xi promoted **Chen Xi**, the Tsinghua administrator who came up with the alumni promotion scheme, to Personnel Czar for the entire Chinese government and the ultimate authority on who gets what job. Sounds like a good time to be a Tsinghua student!

#### 4: Final Thoughts On This Section

Of these, I find the second hypothesis - good timing - the most plausible.

Why did Xi succeed at gathering power, where others didn’t? Remember that there were really only two prior non-dictators: Jiang and Hu. Deng Xiaoping was a dictator, he put rules in place to constrain his successors, Jiang and Hu mostly stuck to the rules, and Xi didn’t. Since we only have two misses to explain, it’s totally fair to talk about minor contingent factors.

Deng Xiaoping (late 1980s) was an autocrat trying to decentralize power, not a limited leader trying to centralize it. He deliberately left his successor with a diverse group of near-equals who would prevent him from absolute rule.

When Jiang Zemin succeeded Deng (early 1990s), he tried to centralize power under himself and his Shanghai Gang. He did a good job, but this was only partly done when he ran into his term limit in 2002 and had to step down. Factional power-sharing agreements forced him to let his arch-enemies, the CYL, succeed him.

When Hu Jintao succeeded Jiang (2002), he was blocked at every turn by Jiang’s now-powerful-and-widespread faction, just as Jiang had planned. He put a bit of effort into empowering his own faction, but wasn’t able to do as good a job as Jiang because his opponents were so organized.
When Xi succeeded Hu (2012), he was able to pick up the power-gathering project *almost* where Jiang left off, with only a little bit of “damage” from Hu’s rule. His anti-corruption program and support from Tsinghua’s take-over-the-government program gave him a tailwind, and he was able to surpass Jiang relatively quickly. Then he marginalized the Shanghai faction in favor of his personal loyalists (including those loyal to him because of the Tsinghua connection).

I am still missing some parts of this picture, like why the Shanghai Gang didn’t resist, and how he got control of the Politburo Standing Committee. I’m not sure whether this is because I haven’t studied this situation hard enough, or because Chinese leaders are very secretive and *nobody* understands this except Xi himself.

## What Has Xi Done Lately?

Now we return to territory well-covered in *The Third Revolution.* Xi has succeeded at his efforts to exert power, and been less successful (though the jury is still out) at a lot of other things.

First, the power exertion. Minorities have faced especially brutal treatment. We’re all familiar with the plight of the Uighurs in Xinjiang (including [concentration camps](https://en.wikipedia.org/wiki/Xinjiang_internment_camps) and [forced sterilization](https://www.justsecurity.org/71615/chinas-forced-sterilization-of-uyghur-women-violates-clear-international-law/)). He’s continued oppressive policies against Falun Gong and Tibet, and ratcheted up state control of Hong Kong.

But even within mainland China, Xi has cracked down. We already talked about this above, but his anti-corruption campaign has been staggering. In a country where officials’ promises to “fight corruption” ring as hollow as American politicians’ promises to “fix health care”, Xi has walked the walk:

> The all-encompassing nature of the anticorruption campaign Xi has undertaken also distinguished his effort from those that preceded it. With more than 800,000 full- and part-time officials committed to working on the campaign, Xi has sought to eliminate through regulation even the smallest opportunities for officials to abuse their position. Regulations now govern how many cars officials may own, the size of their homes, and whether they are permitted secretaries. Other rules cover the number of days officials are permitted to travel and the number of courses that may be served at a business dinner. Golf club memberships are now banned […]
>
> By several measures, the anticorruption campaign has been very effective. The sale of luxury items, such as watches, jewelry, leather goods, and liquor, has fallen dramatically, as have expenses for catering and high-end hotels. In 2015, the Ministry of Finance reported that the government underspent the budget it had allotted officials for overseas travel, entertainment, and cars.

The effort is so effective that there are worries about unintended consequences:

> The head of a well-known multinational headquartered in Shanghai commented to me that as one official after another was arrested in the energy sector, it was often unclear whom he should contact for business. Officials who remain in power are often paralyzed by their concern that green-lighting new projects or undertaking new reforms will draw unwanted attention. Some have reportedly started avoiding entrepreneurs and are refusing to move forward on projects, even stopping bidding for projects midstream. . .
> overall, this slowdown in economic activity, when coupled with the clampdown on luxury goods and activities, cost China an estimated 1 to 1.5 percent of its annual GDP during 2014 and 2015.

Along with corruption, he’s reined in media and academia, and crushed dissent. This was hard for me to appreciate: to an American, Hu’s China and Xi’s China just feel like different flavors of police state. But to people who grew up in Hu’s China, Xi’s regime feels like a clear step backwards.

The censoring of *[Southern Weekly](https://en.wikipedia.org/wiki/2013_Southern_Weekly_incident)*, previously a well-regarded Chinese newspaper, is emblematic of the print side of things. Universities that previously had a long leash are finding that professors are increasingly getting disciplined for teaching non-state-approved courses, and new university hires are now mandated to pass “political correctness interviews” along with having subject-specific qualifications, plus undergo a background check to make sure they never expressed dissenting political opinions.

(I’m not claiming that modern America has any moral standing to object to this, just that it’s bad in an absolute sense)

But Xi’s main target has been the Internet. Facebook, Google, YouTube, and Twitter were already blocked when he took power, but he added more search engines (including Bing and DuckDuckGo), more social media (Instagram, Reddit), foreign news (eg BBC, NYT, WaPo, the Economist), and even Wikipedia. This has been bad for business (China’s Internet “ranks ninety-first in the world” and is getting worse, and foreign businesses list difficulty using the Internet as one of their top reasons for not expanding into China more), but Xi thinks it’s a worthwhile tradeoff.

(our only consolation is that the father of Chinese Internet censorship, Fang Binxing, keeps shooting himself in the foot and getting humiliated in various ways, [eg this story](https://en.wikipedia.org/wiki/Fang_Binxing) where he tried to give a presentation on his censorship methods, found some of the websites he needed to access were censored, and had to fiddle around onstage trying to set up a VPN to get around it. When he tried to go on social media, “he quickly closed the account after thousands of online users left expletive-laden messages”, and now his name itself is censored as a last-ditch effort to prevent people from saying mean things about him. Self-damnatio-memoriae seems like a fitting punishment)

During earlier parts of his reign, Xi deliberately left a small fraction of the public square untouched; he seemed aware of the “dictator’s information problem” where nobody would tell him when things are going wrong, and he valued public protests as a way to find corrupt officials and other problems requiring his attention. He’s since backed off on this and just started censoring everything.

By its own standards, Xi’s centralization campaign has succeeded: other factions have been marginalized, corruption has decreased, and society toes the party line more closely than ever. His other efforts are more dubious.

China has a combination of state-owned companies left over from its Serious Communism days, and newer private companies. The results flatter my biases as a quasi-libertarian: the state-owned companies are much worse than the private ones:

> Private firms consistently outperform SOEs on a number of measures including profit margins, cash flows, and return on assets.
> Excluding financial institutions, SOEs earned a return on assets of 2.4 percent in 2014 compared with 6.4 percent for U.S. firms and 3.1 percent for Chinese companies listed on the stock exchange. Locally owned SOEs boast an even poorer return on assets of around 1.5 percent. Despite this, private companies have a much more difficult time accessing capital and are assessed much higher interest on their loans: In the second quarter of 2016, they paid an average annual interest rate of 9.9 percent on loans—approximately 6 percentage points above the rate for SOEs. State-owned enterprises are also poor generators of new jobs. In early 2016, the State Administration for Industry and Commerce announced that single-owner and private companies accounted for roughly 90 percent of all new urban jobs.

Partly this is because the private companies are actually trying to make money, but the public companies are doing some combination of money-making, employing people for the sake of keeping unemployment low, and carrying out (potentially unprofitable) government priorities (eg investing in foreign countries that it furthers China’s geopolitical interests to invest in, whether or not those countries have anything worth buying).

The private companies’ dominance isn’t exactly *unexpected*. But it’s pretty dramatic - in one easy-to-compare case of competing aluminum producers, the private company has 7x the productivity-per-person of its public counterpart.

For a while in the early 2010s, the leadership (including Xi?) wanted to stop propping up these failing public enterprises (the Chinese term is “zombie companies”), or at least make them profitable. We saw articles like [Xi Jinping’s Ambitious Agenda For Economic Reform In China](https://www.brookings.edu/opinions/xi-jinpings-ambitious-agenda-for-economic-reform-in-china/) and [China Unveils Biggest Reforms In Decades, Shows Xi In Command](https://www.reuters.com/article/us-china-reform/china-unveils-boldest-reforms-in-decades-shows-xi-in-command-idUSBRE9AE0BL20131115).

This doesn’t seem to have happened. Some of it might be internal leadership fights, some might be officials fearing the consequences of laying off workers, but a lot seems to be ideological: Xi likes having *all* of the power, and the state owning lots of companies adds to that. Now article titles look more like [Xi Dials Back China’s Economic Overhaul](https://www.bloomberg.com/news/articles/2021-10-18/xi-dials-back-china-s-economic-overhaul-as-masses-feel-the-pain) and [China’s Faltering Performance On Economic Reform](https://asia.nikkei.com/Opinion/China-s-faltering-performance-on-economic-reform).

Some analysts point out that Premier **Li Keqiang**, the official with the economics portfolio, is the second most powerful person in China after Xi himself and the last relic of the old CYL faction - so some of the flip-flopping might be [a shadow conflict between Xi and Li](https://www.theguardian.com/business/2016/may/26/chinas-feud-over-economic-reform-reveals-depth-of-xi-jinpings-secret-state) where both try to calculate whether having a good economy or a bad economy scores them more political points at any given moment.
The backdrop for all of this fighting is declining GDP growth ([source](https://www.statista.com/statistics/263616/gross-domestic-product-gdp-growth-rate-in-china/)):

Xi inherited near-double-digit GDP growth from his predecessors, and has watched it slowly go down to ~6% before COVID and less than 5% now (America is usually between 2-3%).

Partly this is inevitable; economies usually have a period of impressive catch-up growth as they develop, then stagnate as they near the technological frontier. But China was hoping its catch-up growth would last longer than this. See eg [Revising Down The Rise Of China](https://www.lowyinstitute.org/publications/revising-down-rise-china), which argues that given the current state of the economy China will stagnate sooner than expected, never really catch up to developed world standards, and plateau at around the same (absolute) GDP as the USA.

China’s foreign affairs are equally troubled. Jiang and Hu were careful leaders, aware that China was still new on the global stage and couldn’t afford to make waves. Xi was more confident in China’s Great Power status. But his practical goals were combined with an obsession with showing China was just as good as everyone else, and with avoiding the appearance of humiliation, which made him overly belligerent and started alienating other countries. China’s obsession with small islands in the South China Sea alienated the region enough to [drive Vietnam into the arms of the US](https://thediplomat.com/2021/08/us-vietnam-relations-in-2021-comprehensive-but-short-of-strategic/) (partly, haltingly). Even beyond these kinds of big things, the general outlook (called “[wolf warrior diplomacy](https://en.wikipedia.org/wiki/Wolf_warrior_diplomacy)” after a Chinese action movie) seems more focused on playing well at home than on keeping foreign countries friendly.

H/T [Noah Smith](https://noahpinion.substack.com/p/what-if-xi-jinping-just-isnt-that?s=r)

Xi’s signature foreign policy move is the Belt And Road Initiative, a bunch of infrastructure megaprojects intended to link China to the rest of the world (and to bribe other countries into being on China’s side). It has certainly created a mind-boggling amount of infrastructure, but some Western analysts are skeptical. The basic procedure is: China goes up to a developing country and asks “would you like giant low-interest loans to create a humongous port?” The developing country says yes without asking any citizens or businesses, faces protests, loses a lot of the money to corruption, mismanages the construction project, and ends up without a humongous port. Then China either has to harass and threaten them to get the money back (the opposite of what they wanted! this was supposed to build goodwill!) or eat the losses. [This article](https://www.brookings.edu/blog/order-from-chaos/2020/10/01/seven-years-into-chinas-belt-and-road/) by China analyst David Dollar gives a pretty balanced assessment.

(why are all China analysts named things like “Elizabeth Economy” or “David Dollar”? This *also* sounds like something that would happen in a children’s book.)

## What Should We Make Of Xi?
I dwelt on some of Xi’s failures and questionable decisions in that last section, because I was impressed by Noah Smith’s article [What If Xi Jinping Just Isn’t That Competent?](https://noahpinion.substack.com/p/what-if-xi-jinping-just-isnt-that?s=r) With the incredible economic rise of China over the past few decades, it’s easy to fall into the trap of thinking their leadership must be geniuses, or at least have managed something that merely democratic countries never could.

One alternative to that narrative - I think the gist of the case Noah presents - is that Deng Xiaoping was a genius, Jiang and Hu were pretty impressive too, and Xi hasn’t added anything to their work and may have subtracted from it. I find this pretty plausible.

Another alternative is that China’s amazing economic growth isn’t that surprising. Source is [here](https://ourworldindata.org/grapher/gdp-per-capita-maddison-2020?time=1948..2018&country=CHN~JPN~KOR~TWN).

We [already talked about](https://astralcodexten.substack.com/p/book-review-how-asia-works?s=w) how every East Asian country went through a period of seemingly miraculous economic growth. Of those, China is the least impressive (so far). Perhaps every East Asian country was run by geniuses - certainly people like Park Chung-hee and Lee Kuan Yew come out looking very impressive. But how many hits do you have to get before you start thinking there’s something special about this region, independent of democracy vs. dictatorship or anything else?

But also: I don’t want to say Poland did exactly as well as China. If you look at relative rather than absolute change, China [looks much better](https://ourworldindata.org/grapher/gdp-per-capita-worldbank?tab=chart&stackMode=relative&country=CHN~POL). Still, here are two countries that cast off stifling forms of Communism around the same time. Then they both saw GDP improve a lot. Sure, China’s GDP dectupled and Poland’s only tripled. But Poland started at $10,000 - there wasn’t *room* for it to dectuple without becoming by far the richest country in the world. China has done extraordinarily well, especially compared to other countries trying to develop around the same time. But I’m nervous about attributing anything too special to its government.

Getting back to the question: what should we make of Xi? Should we argue that non-democratic systems are doomed to collapse into authoritarianism? Deng Xiaoping was a really smart guy, he put a lot of effort into trying to build a multipolar oligarchy, and . . . it doesn’t seem to have put up much of a fight. Xi just walked in and took over. But what about the Soviet Union? Its government was similar to China’s, but after Stalin, no subsequent leader was able to fully centralize power. Just luck? One system keeps going for forty years without collapsing, and another collapses after twenty-five, not for any particular reason, just because of who held what position when? I’m not sure.

Reading about Xi increases my confidence in democracy relative to other forms of government. As non-democracies go, China under Deng, Jiang, and Hu seemed like one of the best. But under the surface, it was sprouting factionalism, patronage, and corruption, and when authoritarianism finally came for it, it put up such a pathetic fight that the whole thing ended behind closed doors and we’ll never really know what happened. RIP multipolar oligarchic China, you deserved better.
# Highlights From The Comments On Self-Determination

**1:** [Rosemary](https://astralcodexten.substack.com/p/who-gets-self-determination/comment/5788664) (writes [Parallel Republic](https://parallelrepublic.substack.com/)) says:

> I think a preference for the status quo has to weigh in to some extent.
>
> All else being equal, sure, I agree with the “any group large enough that it isn’t ludicrous on its face has a right to self-determination” standard.
>
> But all else is almost never equal. Someone wants to secede and someone else wants to conquer—and all of that is enormously disruptive to many other someones.
>
> So I think there’s an immediately obvious utilitarian bias towards the status quo of, oh, the last decade or so. Governments are heavy, complicated things, and I think a group who wants to disrupt that needs to make an affirmative argument based on something other than “self determination” that this is a good idea and all the disruption is worth it for the sake of things being better in the long run.
>
> Which unfortunately gets us nowhere because it brings us right back to debates about culture and history etc.

I don’t like this philosophically, but I have to admit that in the real world it’s the only way any of this is ever going to work.

**2:** [Chipsie](https://astralcodexten.substack.com/p/who-gets-self-determination/comment/5788689):

> Self-Determination is a sort of weird right, because it is inherently a right afforded to groups of people, but not individuals (unless you are an extremely principled libertarian). I think granting rights to groups that are independent of the rights of the constituent individuals makes very little sense. Groups don't have subjective experiences besides the subjective experiences of the individuals, and they can't decide to exercise their rights in the same way that individuals can because they can't want things or make decisions.
>
> I also think the scoping problem is even worse than you say. A defining characteristic of a state is that some group in power (often the majority) can enforce their will on everyone else. Why would it be the case that the group in power has the right to their power, but the larger entity having even more power doesn't?

Agreed about the weird right. [This paper](https://www.jstor.org/stable/3751662?seq=1), which I linked in the original, tries to discuss what group rights would mean, although it ends up settling on them being simple when a group has a government that claims to speak for it, which most secessionist regions do.

As for the second paragraph, I’m imagining some situation like - India is mostly Hindu. But some subregion of India is mostly Muslim. But some guy in that subregion is a Hindu. Perhaps the subregion is sad being ruled by Hindus, but that guy is happy. If we let the subregion get independence, the majority will be happy, but that one Hindu will be sad. Is there any reason not to go with the greatest good for the greatest number here (and allow the secession)?

Maybe (counterfactual) India is very liberal and both Hindus and Muslims in that subregion have lots of rights, but if the region were allowed to secede it would become a Muslim fundamentalist state that oppressed its minorities.
This seems a lot like the original Confederacy problem, in that it takes the usual world-policeman moral dilemma of “should good countries conquer bad countries to prevent them from being bad?” and twists it into “should good countries prevent bad countries from gaining independence, to prevent them from being bad?” I’m not sure how to think about these questions.

**3:** Evan Þ [writes](https://astralcodexten.substack.com/p/who-gets-self-determination/comment/5788721):

> My immediate response to "what about the Confederacy?" is to say that yes, the people of the South had the right to secede in 1861 if they wanted to - but they didn't.
>
> For one, there was a huge Black population - a majority in South Carolina, and at least a large minority elsewhere - who didn't get to vote, and would presumably have opposed secession.
>
> For another, even the white population probably opposed secession in most places. Many secession conventions had a majority of delegates elected as Unionists who eventually voted for secession. I believe Texas was the only state where it was even submitted to referendum. Admittedly, the delegates would've argued that circumstances changed between their election and their vote - but even ignoring the restricted franchise, this casts their democratic legitimacy in severe doubt.
>
> So, I believe it's quite consistent to support secession in theory but oppose the 1861 secession in practice.

A lot of people had this objection, which I don’t find very interesting. I don’t know what the real-world numbers for the Confederacy looked like, but it seems possible in principle that, say, all white citizens supported secession, all black citizens opposed it, and blacks were a minority, so it passed. I don't think that scenario would be very different, ethically, from what really happened, so I don't want to hang my opposition to what really happened on its differences from that scenario.

People seem to put a lot of effort into proving that some democratic process which returned a morally abhorrent result wasn’t *really* democratic (eg Trump losing the popular vote, Hitler gaining power through a complicated process that wasn’t *just* democracy). Often they’re right, but who cares? If you want to make the case that democracy *necessarily* returns non-abhorrent results, I’d be very interested to hear that argument. Otherwise I think we should accept that possibility and try to plan around it when coming up with moral and political philosophies.

**4:** [jumpingjacksplash](https://astralcodexten.substack.com/p/who-gets-self-determination/comment/5812427) writes:

> That’s nothing on the conclusions being consistent on this leads you to. Consider the following:
>
> Austria/Sudetenland/Danzig circa 1938
>
> Biafra/Tigray/redrawing every border in Africa
>
> Kashmir
>
> Northern Ireland as a patchwork of mono-confessional enclaves
>
> Israeli settlements
>
> An Afrikaner volkstaat
>
> ISIS (in western Iraq and parts of Syria)
>
> Various small American cities seceding as a tourist gimmick

I’m mostly willing to bite these bullets. The one that bothers me the most is Israeli settlements - there ought to be some rule against sneaking in under cover of night, setting up a town on someone else’s land, and then seceding and saying it’s yours. This rule can’t be absolute and permanent - European colonization of the US was basically this, and nobody thinks we should give it back to the Indians *now* - but it should exist enough to prevent exploitation.
I think this rule would cover ISIS and South Africa too. I’m *definitely* willing to bite the bullet on US cities seceding as tourist gimmicks - see eg [the Conch Republic](https://en.wikipedia.org/wiki/Conch_Republic).

**5:** [Robert Benkeser](https://astralcodexten.substack.com/p/who-gets-self-determination/comment/5788766) (writes [Humble Pie](https://robertbenkeser.substack.com/?utm_source=substack&utm_medium=web&utm_content=comment_metadata)) says:

> Every oblast in Ukraine, including Crimea, voted for independence. Support ranged from over 95 percent in western Ukraine and the Kiev region to 54 percent in Crimea, where ethnic Russians form a substantial majority of the population.

Based on sources like [this](https://www.quora.com/Does-Crimea-want-to-become-part-of-Russia), I think the most likely scenario is that the Crimeans voted yes by a hair in ‘91, then became less excited as time went on. It might also be relevant that the ‘91 vote was about the Soviet Union, vs. later votes where the alternative was Russia.

**6:** [Jacob](https://astralcodexten.substack.com/p/who-gets-self-determination/comment/5788815) (writes [Streams of Consciousness](https://streamsofconsciousness.substack.com/?utm_source=substack&utm_medium=web&utm_content=comment_metadata)) says:

> Tokelau is a remote southern Pacific Ocean island currently owned by New Zealand with a population of ~1500 and currently on the UN list of non-self-governing territories. The UN has pushed for referendums towards statehood, two of which have failed. In this case, it seems that by virtue of being an island rather than just a small town off the interstate, Tokelau may deserve self-determination. It's not clear what a 1500-member nation would look like.

Yeah, being an island does seem like a pretty good replacement for being in a Civilization game, among the type of people who care about these things.

**7:** [Mike G](https://astralcodexten.substack.com/p/who-gets-self-determination/comment/5789386) writes:

> Good article, but people make this stuff way too complicated. It's not about morality or ethics but simply might makes right, and the victors get to write the history books. We didn't defeat the Nazis because the Nazis were bad and evil, we firebombed Dresden and killed them until they gave up. The answer to the question "Who gets self-determination?" is whoever can take it. If it can't be won peacefully, it must be won through force (see Clausewitz, Carl). Kill more of them than they kill you until they give up. This is the way it has always been and the way it always will be.

Everyone keeps saying this and I think it's overly cynical. There's an international norm that says you can't launch unprovoked aggressive invasions. You could ask "how many battalions do international norms have?", but the answer would be "quite a lot!" The fact that Russia broke the norm led lots of countries to sanction it and otherwise cause it grief. I'm not saying this norm is foolproof - if it had been a stronger and more popular country like the US, maybe they could have gotten away with it. But the norm isn't totally toothless either. I bet all the time there are dictators who think "should I invade my neighbor? No, that would mean I'm violating an international norm and I'd get in trouble." Saying "might makes right" is ignoring this valuable and powerful system. Worse, it's hyperstitionally weakening the system - as long as everyone knows everyone knows everyone ... that there are international norms, the norms will be real. Cf.
why nobody uses nuclear weapons during war.

TGGP [asks](https://astralcodexten.substack.com/p/who-gets-self-determination/comment/5794555):

> Doesn't the US being strong enough to repeatedly evade the norm indicate that might really is the determining factor, and that Russia just isn't mighty enough to defy the US that effectively?

Sort of? I think of it as something like - we have norms against murder. Those norms are real and important, but sometimes police (or, in some countries, organized crime) kill people and get away with it, because they’re powerful. We shouldn’t pretend the norms against murder are magic and work regardless of power relations. But we also shouldn’t dismiss them entirely and agree norms are meaningless and we’re in the state of nature. Instead we should be grateful that norms exist which constrain the little guys, be grateful that even powerful people at least have to think really hard before violating norms, and work to expand the norms so that even the powerful people follow them.

**8:** [Obormot on DSL](https://www.datasecretslox.com/index.php/topic,6157.new.htm) says:

> I would prefer that you read all of Unqualified Reservations, but that might be a bit much to ask, realistically. So why not start with this piece: <https://www.unqualified-reservations.org/2008/05/ol5-shortest-way-to-world-peace/>. (It comes in the middle of a long sequence, true, but I think it’s readable enough on its own.)

The post argues…well, it argues a lot of things, but mostly that modern norms of international law perpetuate rather than prevent conflicts. Insurgents count on support from pacifists in the country they’re fighting against and from global policemen (eg the US), which makes insurgency worthwhile (ie they might win). If everyone (including foreign countries and voters/politicians in the country they were fighting) just agreed that every country was sovereign and had the right to do what it wanted in its own borders and anyone who disagreed would be crushed, then anyone who disagreed would *actually* consistently get crushed, and nobody would be dumb enough to disagree. Peace!

This ties into a lot of other UR assumptions I can’t argue with in the depth they deserve here. A poor and unfair summary might be: I actually don’t want countries doing as much genocide and repression as they want, and I think historic attempts to pressure them not to do these things have often been successful (though it’s hard to count since we don’t record atrocities that *don’t* happen). Rebels will absolutely rebel even in the absence of domestic and foreign aid, and have done so from the Zealots to the Taiping Rebellion through today. Moldbug’s claim that the pre-WWI system was good at preventing wars and atrocities is dubious given how many wars and atrocities there were before WWI (I would guess eg more conflict deaths per capita in the 19th century than the 21st, although I know this sort of thing is hard to quantify).

**9:** [Joel Long](https://astralcodexten.substack.com/p/who-gets-self-determination/comment/5789632) writes:

> I think the question of historical investment has some relevance here. If Texas wants to secede, fine, but the USA has invested heavily in it over the years...what's the divorce bill? Similar issue with US independence regarding taxation and Britain's capital and military investments.
>
> From the outside, this was one of the most interesting parts of the Brexit negotiations.
>
> Of course, given that the ethics of multi-generational collective debt/guilt/obligation are difficult in general, I don't think this \*simplifies\* the discussion.

**10:** [Peter Gerdes](https://astralcodexten.substack.com/p/who-gets-self-determination/comment/5789929) writes:

> I feel the whole assumption that there is some list of features that grant a group of people the right to self-determination is kind of a category mistake. Sometimes it will make the world better to let a group of people form a separate country; sometimes it won't. The difference between Ukraine and the Confederacy is as simple as: the world was better off not letting the Confederacy self-determine and worse off if Russia stops Ukraine from doing so. It even plausibly depends on who the occupier is and how they treat them (if the Basque region was in China not Spain, no question it would be better to allow self-determination...as of now unclear to me as it imposes costs on both sides).

I think “makes the world better” is always your ultimate criterion, but in real life you try to have simple rules to address disagreements. In some sense I want whether some guy goes to jail to depend on whether it “makes the world better” for him to be there, but in an actual society it’s easier to have some laws where you go to jail if you break them.

I feel the same way about [Alex Mennen’s comment](https://astralcodexten.substack.com/p/who-gets-self-determination/comment/5790933):

> My position on this question is that trying to have consistent principles on this issue is bad actually, and I will unapologetically evaluate self-determination issues on a case-by-case basis.
>
> I disagree that letting whatever nontrivially-sized place secede is good, because states getting smaller and more numerous makes coordination problems on large scales worse (I'm aware there are plenty of arguments for the reverse, but I'm not going to expand on this issue for now). And a norm that every group gets a right to self-determination in the future if and only if they already have it now offers a lot of advantages in terms of stability; not redrawing borders at all can cut down on warfare. But neither of these heuristics seem like good reasons not to have taken away Serbia's ability to genocide Kosovar Albanians. Russia taking Crimea from Ukraine doesn't change how many different countries need to be involved in large-scale coordination challenges, and that particular operation didn't even involve any bloodshed, so you could make a case that that undercuts my argument that border changes are bad because war is bad. But this won't stop me from opposing Russia's annexation of Crimea, because Russia agreed not to do that without Ukraine's consent as part of an agreement for Ukraine to give up nuclear weapons, and undermining incentives for states to give up nuclear weapons is bad.
>
> If you come up with a consistent principle, you'll inevitably encounter situations where it turned out your principle was missing something important. The actual principle I'm using here is "a group of people gets self-determination if and only if it is best for the world for them to get self-determination", but I'm not counting that as a real principle because it's too underspecified. Why should I adopt a different principle instead? It seems to me that if "what's best for the world" ends up conflicting with some more well-specified principle, I should go with what's best for the world.
> > One possible answer to this is that different people have different opinions about what's best for the world, and can end up in conflict over it, but if they can all agree to follow certain consistent principles instead, this can avoid conflict. I agree this is an issue (and isn't even the only source of conflict here; some people will simply have more provincial concerns than what's best for the world), but using consistent principles doesn't actually solve this, because there will be conflict over which principles to use. If you think self-determination is generally good, and I think too much self-determination creates too much coordination-problem headaches to be worth it, then "any at-least-city-sized group of people who want independence gets it" does not work as a compromise. I'd actually prefer just letting you choose every time, since then if there's some situation where letting some specific city declare independence ends up being obviously terrible, you might notice this and put a stop to it. Groups with conflicting interests can negotiate compromises without having clear abstract ethical principles behind their compromises, and that's okay. If the principles you come up with so that there doesn't need to be any conflict doesn't include anything like "... unless it's on this side of this arbitrary line, because we need to placate France", you're doing it wrong, and should probably stop trying to follow principles. **11:** [Name99](https://astralcodexten.substack.com/p/who-gets-self-determination/comment/5790375) writes: > You missed one obvious aspect of the 'right' to "declare yourself to be independent", namely some version of fairness. Otherwise as soon as oil gets discovered, the oil-rich province decides it would rather secede than share the loot. This was, of course, a large part of the background in Biafra and the Second Sudanese Civil War, and versions of this seem (as far as I can tell) to be relevant to other places, from East Timor to various Myanmar would-be independence movements. > > This seems to be one of those weathervane causes, where people will spin from loving to hating it depending on the details you insert into the story. Should a leftist go with the self-determination argument, or with the sharing argument? Decisions, decisions. I agree letting the people on top of the oil have it is weird, but it doesn't seem weirder than the fact that Qatar gets to be incredibly rich because it has oil and doesn't have to share it with eg Afghanistan. **12:** [ogogmad](https://astralcodexten.substack.com/p/who-gets-self-determination/comment/5791360) (writes [Ogogmad’s Newsletter](https://clunk.substack.com/?utm_source=substack&utm_medium=web&utm_content=comment_metadata)) says: > I don't think anywhere you've mentioned what I thought the real "rule" was, in international law: > > By default, no one has the right to secede from their country. The only exception is when they've been persecuted by their country. In that exceptional case, they have the right to secede. > > More broadly, I think that existing borders are sacred, unless something exceptional happens that necessitates changing them. Yeah, this is a pretty good point. I think the real-world solution closest to what I philosophically want is something like: by default we respect existing borders, because transaction costs. 
If someone invades an existing state, even an existing state without a great justification for existing, the international community condemns it, to [prevent the norm from being eroded](https://slatestarcodex.com/2018/06/19/contra-caplan-on-arbitrary-deploring/). If a minority group in an existing country wants independence, then on a philosophical level they should get it, but realistically there are lots of things that should happen and nobody has time or energy to support all of them. If the parent country is a democracy that wouldn’t be harmed too badly by the secession, they should let them go. If not, and it isn’t urgent (ie they’re not being horribly oppressed), the group should avoid forcing the issue. If they do feel horribly oppressed, they should force the issue, and the international community should vaguely take their side, proportional to how oppressed they are and how much trouble it’s going to cause for them to leave. This is a lot like the current system *except* that I think if ethical people happen to be in charge of a country, they have a (weak, potentially balanced by other things) obligation to let people leave, even if they’re not especially oppressed. This seems to be the way the UK is treating Scotland, and I give them a lot of credit for it.
# Yudkowsky Contra Christiano On AI Takeoff Speeds **Previously in series:** [Yudkowsky Contra Ngo On Agents](https://astralcodexten.substack.com/p/practically-a-book-review-yudkowsky?s=w), [Yudkowsky Contra Cotra On Biological Anchors](https://astralcodexten.substack.com/p/biological-anchors-a-trick-that-might?s=w) #### Prelude: Yudkowsky Contra Hanson In 2008, thousands of blog readers - including yours truly, who had discovered the rationality community just a few months before - watched [Robin Hanson debate Eliezer Yudkowsky](https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate) on the future of AI. Robin thought the AI revolution would be a gradual affair, like the Agricultural or Industrial Revolutions. Various people invent and improve various technologies over the course of decades or centuries. Each new technology provides another jumping-off point for people to use when inventing other technologies: mechanical gears → steam engine → railroad and so on. Over the course of a few decades, you’ve invented lots of stuff and the world is changed, but there’s no single moment when “industrialization happened”. Eliezer thought it would be lightning-fast. Once researchers started building human-like AIs, some combination of adding more compute, and the new capabilities provided by the AIs themselves, would quickly catapult AI to unimaginably superintelligent levels. The whole process could take between a few hours and a few years, depending on what point you measured from, but it wouldn’t take decades. You can imagine the graph above as being GDP over time, except that Eliezer thinks AI will probably destroy the world, which might be bad for GDP in some sense. If you come up with some way to measure (in dollars) whatever kind of crazy technologies AIs create for their own purposes after wiping out humanity, then the GDP framing will probably work fine. For transhumanists, this debate has a kind of iconic status, like Lincoln-Douglas or the Scopes Trial. But Robin’s ideas seem a bit weird now (they also seemed a bit weird in 2008) - he thinks AIs will start out as uploaded human brains, and [even wrote an amazing science-fiction-esque book of predictions about exactly how that would work](https://slatestarcodex.com/2016/05/28/book-review-age-of-em/). Since machine learning has progressed a lot faster than brain uploading has, this is looking less likely and probably makes his position less relevant than in 2008. The gradualist torch has passed to Paul Christiano, who wrote a 2018 post [Takeoff Speeds](https://sideways-view.com/2018/02/24/takeoff-speeds/) revisiting some of Hanson’s old arguments and adding new ones. (I didn’t realize this until talking to Paul, but “holder of the gradualist torch” is a relative position - Paul still thinks there’s about a 1/3 chance of a fast takeoff.) Around the end of last year, Paul and Eliezer had [a complicated, protracted, and indirect debate](https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/), culminating in a few hours on the same Discord channel. Although the real story is scattered over several blog posts and chat logs, I’m going to summarize it as if it all happened at once. #### Gradatim Ferociter Paul sums up his half of the debate as: > There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.) 
That is - if any of this “transformative AI revolution” stuff is right at all, then at some point GDP is going to go crazy (even if it’s just GDP as measured by AIs, after humans have been wiped out). Paul thinks it will go crazy slowly. Right now world GDP doubles every ~25 years. Paul thinks it will go through an intermediate phase (doubles within 4 years) before it gets to a truly crazy phase (doubles within 1 year). Why? Partly based on common sense. Whenever you can build a cool thing at time T, probably you could build a slightly less cool version at time T-1. And slightly less cool versions of cool things are still pretty cool, so there shouldn’t be many cases where a completely new and transformative thing starts existing without any meaningful precursors. But also because this is how everything always works. Here’s the history of British GDP:

Industrial Revolution? What Industrial Revolution? This is just a nice smooth exponential curve. The same is usually true of individual technologies; Paul doesn’t give specifics, but [Nintil](https://nintil.com/no-great-technological-stagnation) and [Katja Grace](https://aiimpacts.org/category/speed-of-ai-transition/pace-of-ai-progress-without-feedback/) both have lots of great examples:

Information technologies over time ([Nagy](https://www.sciencedirect.com/science/article/pii/S0040162511001429?np=y))

Chess AI performance over time.

Why does this matter? If there’s a slow takeoff (ie gradual exponential curve), it will become obvious that some kind of terrifying transformative AI revolution is happening, *before* the situation gets apocalyptic. There will be time to prepare, to test slightly-below-human AIs and see how they respond, to get governments and other stakeholders on board. We don’t have to get every single thing right ahead of time. On the other hand, because this is proceeding along the usual channels, it will be the usual variety of muddled and hard-to-control. With the exception of a few big actors like the US and Chinese government, and *maybe* the biggest corporations like Google, the outcome will be determined less by any one agent, and more by the usual multi-agent dynamics of political and economic competition. There will be lots of opportunities to affect things, but no real locus of control to do the affecting. If there’s a fast takeoff (ie sudden FOOM), there won’t be much warning. Conventional wisdom will still say that transformative AI is [thirty years away](https://astralcodexten.substack.com/p/biological-anchors-a-trick-that-might). All the necessary pieces (ie AI alignment theory) will have to be ready ahead of time, prepared blindly without any experimental trial-and-error, to load into the AI as soon as it exists. On the plus side, a single actor (whoever has this first AI) will have complete control over the process. If this actor is smart (and presumably they’re a *little* smart, or they wouldn’t be the first team to invent transformative AI), they can do everything right without going through the usual government-lobbying channels. So the slower a takeoff you expect, the less you should be focusing on getting every technical detail right ahead of time, and the more you should be working on building the capacity to steer government and corporate policy to direct an incoming slew of new technologies.
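(Back up to Paul’s operationalization for a second. To see mechanically why a smooth accelerating curve gives a 4-year doubling before a 1-year doubling, here’s a minimal sketch in Python. Everything in it is an illustrative assumption: the 2.8% starting growth rate roughly matches “doubles every ~25 years”, and the 0.3 feedback exponent is made up to stand in for “growth accelerates as output grows” - it’s not anyone’s actual forecast.)

```python
import numpy as np

# Toy model: the growth rate itself rises with output (a crude stand-in
# for "technology feeds back into growth"). The 0.028 starting rate
# (~25-year doublings) and the 0.3 exponent are made-up numbers.
dt = 0.01                                  # time step, in years
t, gdp = [0.0], [1.0]
while gdp[-1] < 1e6:
    rate = 0.028 * gdp[-1] ** 0.3          # growth accelerates with scale
    gdp.append(gdp[-1] * (1 + rate * dt))
    t.append(t[-1] + dt)
t, gdp = np.array(t), np.array(gdp)

def first_doubling_within(years):
    """Earliest start of an interval of length `years` over which output doubles."""
    for i in range(len(t)):
        j = np.searchsorted(t, t[i] + years)
        if j < len(t) and gdp[j] >= 2 * gdp[i]:
            return t[i]
    return None

print("first 4-year doubling begins around year", round(first_doubling_within(4)))
print("first 1-year doubling begins around year", round(first_doubling_within(1)))
# On any smooth accelerating curve like this, the 4-year doubling begins
# years before the 1-year doubling - "fast" is always preceded by "medium".
```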
#### Yudkowsky Contra Christiano Eliezer counters that although progress may retroactively look gradual and continuous when you know what metric to graph it on, it doesn’t necessarily look that way in real life by the measures that real people care about. (one way to think of this: imagine that an AI’s effective IQ starts at 0.1 points, and triples every year, but that we can only measure this vaguely and indirectly. The year it goes from 5 to 15, you get a paper in a third-tier journal reporting that it seems to be improving on some benchmark. The year it goes from 66 to 200, you get a total transformation of everything in society. But later, once we identify the right metric, it turns out it was just the same rate of gradual progress the whole time.) So Eliezer is much less impressed by the history of previous technologies than Paul is. He’s also skeptical of the “GDP will double in 4 years before it doubles in 1” claim, because of two contingent disagreements and two fundamental disagreements. The first contingent disagreement: government regulations make it hard to deploy imperfect things, and non-trivial to deploy things even after they’re perfect. Eliezer has non-jokingly said he thinks AI might destroy the world before the average person can buy a self-driving car. Why? Because the government has to approve self-driving cars (and can drag its feet on that), but the apocalypse can happen even without government approval. In Paul’s model, sometime long before superintelligence we should have AIs that can drive cars, and that increases GDP and contributes to a general sense that exciting things are going on. Eliezer says: fine, what if that’s true? Who cares if self-driving cars will be practical a few years before the world is destroyed? It’ll take longer than that to lobby the government to allow them on the road. The second contingent disagreement: superintelligent AIs can lie to us. Suppose you have an AI which wants to destroy humanity, whose IQ is doubling every six months. Right now it’s at IQ 200, and it suspects that it would take IQ 800 to build a human-destroying superweapon. Its best strategy is to lie low for a year. If it expects humans would turn it off if they knew how close it was to superweapons, it can pretend to be less intelligent than it really is. The period when AIs are holding back so we don’t discover their true power level looks like a period of lower-than-expected GDP growth - followed by a sudden FOOM once the AI gets its superweapon and doesn’t need to hold back. So *even if Paul is conceptually right* and fundamental progress proceeds along a nice smooth curve, it might not look to us like a nice smooth curve, because regulations and deceptive AIs could prevent mildly-transformative AI progress from showing up on graphs, but wouldn’t prevent the extreme kind of AI progress that leads to apocalypse. To an outside observer, it would just look like nothing much changed, nothing much changed, nothing much changed, and then suddenly, FOOM. But even aside from this, Eliezer doesn’t think Paul is conceptually right! He thinks that *even on the fundamental level*, AI progress is going to be discontinuous. It’s like a nuclear bomb. Either you don’t have a nuclear bomb yet, or you do have one and the world is forever transformed. There is a specific moment at which you go from “no nuke” to “nuke” without any kind of “slightly worse nuke” acting as a harbinger. He uses the example of chimps → humans.
Evolution has spent hundreds of millions of years evolving brainier and brainier animals (not teleologically, of course, but in practice). For most of those hundreds of millions of years, that meant the animal could have slightly more instincts, or a better memory, or some other change that still stayed within the basic animal paradigm. At the chimp → human transition, we suddenly got tool use, language use, abstract thought, mathematics, swords, guns, nuclear bombs, spaceships, and a bunch of other stuff. The rhesus monkey → chimp transition and the chimp → human transition both involved the same ~quadrupling of neuron number, but the former was pretty boring and the latter unlocked enough new capabilities to easily conquer the world. The GPT-2 → GPT-3 transition involved centupling parameter count. Maybe we will keep centupling parameter count every few years, and most times it will be incremental improvement, and one time it will conquer the world. But even talking about centupling parameter counts is giving Paul too much credit. Lots of past inventions didn’t come by quadrupling or centupling something, they came by discovering “the secret sauce”. The Wright brothers (he argues) didn’t make a plane with 4x the wingspan of the last plane that didn’t work, they *invented the first plane that could fly at all.* The Hiroshima bomb wasn’t some previous bomb but bigger, it was what happened after a lot of scientists spent a long time thinking about a fundamentally different paradigm of bomb-making and brought it to a point where it could work at all. The first transformative AI isn’t going to be GPT-3 with more parameters, it will be what happens after someone discovers how to make machines truly intelligent. (this is the same debate Eliezer had with Ajeya over the [Biological Anchors](https://astralcodexten.substack.com/p/biological-anchors-a-trick-that-might?s=w) post; have I mentioned that Ajeya and Paul are married?) #### Fine, Let’s Nitpick The Hell Out Of The Chimps Vs. Humans Example This is where the two of them end up, so let’s follow. Between chimps and humans, there were about seven million years of intermediate steps. These had some human capabilities, but not others. IE *homo erectus* probably had language, but not mathematics, and in terms of taking over the world it *did* make it to most of the Old World but was less dominant than moderns. But if we say evolutionary history started 500 million years ago (the Cambrian), and AI history started with the Dartmouth Conference in 1955, then the equivalent of 7 million years of evolutionary history is 1 year of AI history. In the very very unlikely and forced comparison where evolutionary history and AI history go at the same speed, there will be only about a year between chimp-level and human-level AIs. A chimp-level AI probably can’t double GDP, so this would count as a fast takeoff by Paul’s criterion. But even more than that, chimp → human *feels like* a discontinuity. It’s not just “animals kept getting smarter for hundreds of millions of years, and then ended up very smart indeed”. That happened for a while, and then all of a sudden there was a near-instant phase transition into a totally different way of using intelligence with completely new abilities. If AI worked like this, we would have useful toys and interesting specialists for a few decades, until suddenly someone “got it right”, completed the package that was necessary for “true intelligence”, and then we would have a completely new category of thing.
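(For anyone checking that forced comparison, the arithmetic - assuming “now” is 2022, since the debate wrapped up around then - is just:)

```python
# Rough timescale mapping, using the figures from the text.
evolution_years = 500e6             # since the Cambrian
ai_years = 2022 - 1955              # since the Dartmouth Conference: 67 years
scale = evolution_years / ai_years  # ~7.5 million evolution-years per AI-year
print(f"{scale:,.0f} evolution-years per AI-year")
print(f"chimp -> human (~7M years) = {7e6 / scale:.2f} AI-years")  # ~0.94
```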
Paul admits this analogy is awkward for his position. He answers: > Chimp evolution is not primarily selecting for making and using technology, for doing science, or for facilitating cultural accumulation.  The task faced by a chimp is largely independent of the abilities that give humans such a huge fitness advantage. It’s not completely independent—the overlap is the only reason that evolution eventually produces humans—but it’s different enough that we should not be surprised if there are simple changes to chimps that would make them much better at designing technology or doing science or accumulating culture […] > > So I don’t think the example of evolution tells us much about whether the continuous change story applies to intelligence. This case is potentially missing the key element that drives the continuous change story—optimization for performance. Evolution changes continuously on the narrow metric it is optimizing, but can change extremely rapidly on other metrics. For human technology, features of the technology that aren’t being optimized change rapidly all the time. When humans build AI, they *will* be optimizing for usefulness, and so progress in usefulness is much more likely to be linear. That is, evolution wasn’t *optimizing for* tool use/language/intelligence, so we got an “overhang” where chimps could potentially have been very good at these, but evolution never bothered “closing the circuit” and turning those capabilities “on”. After a long time, evolution finally blundered into an area where marginal improvements in these capacities improved fitness, so evolution started improving them and it was easy. Imagine a company which, through some oversight, didn’t have a Sales department. They just sat around designing and manufacturing increasingly brilliant products, but not putting any effort into selling them. Then the CEO remembers they need a Sales department, starts one up, and the company goes from moving near zero units to moving millions of units overnight. It would look like the company had “suddenly” developed a “vast increase in capabilities”. But this is only possible when a CEO who is weirdly unconcerned about profit forgets to do obvious profit-increasing things for many years. This is Paul’s counterargument to the chimp analogy. Evolution isn’t directly concerned about various intellectual skills; it only wants them in the unusual cases where they’ll contribute to fitness on the margin. AI companies will be very concerned about various intellectual skills. If there’s a trivial change that can make their product 10x better, they’ll make it. So AI capabilities will grow in a “well-rounded” way, there won’t be any “overhangs”, and there won’t be any opportunities for a sudden overhang-solving phase transition with associated new-capability development like with chimps → humans. Eliezer answers: > Chimps are nearly useless because they're not general, and doing anything on the scale of building a nuclear plant requires mastering so many different nonancestral domains that it's no wonder natural selection didn't happen to separately train any single creature across enough different domains that it had evolved to solve every kind of domain-specific problem involved in solving nuclear physics and chemistry and metallurgy and thermics in order to build the first nuclear plant in advance of any old nuclear plants existing. 
> > Humans are general enough that the same braintech selected just for chipping flint handaxes and making water-pouches and outwitting other humans, happened to be general enough that it could scale up to solving all the problems of building a nuclear plant - albeit with some added cognitive tech that didn't require new brainware, and so could happen incredibly fast relative to the generation times for evolutionarily optimized brainware. > > Now, since neither humans nor chimps were optimized to be "useful" (general), and humans just wandered into a sufficiently general part of the space that it cascaded up to wider generality, we should legit expect the curve of generality to look at least somewhat different if we're optimizing for that. > > Eg, right now people are trying to optimize for generality with AIs like Mu Zero and GPT-3. > > In both cases we have a weirdly shallow kind of generality. Neither is as smart or as deeply general as a chimp, but they are respectively better than chimps at a wide variety of Atari games, or a wide variety of problems that can be superposed onto generating typical human text. > > They are, in a sense, more general than a biological organism at a similar stage of cognitive evolution, with much less complex and architected brains, in virtue of having been trained, not just on wider datasets, but on bigger datasets using gradient-descent memorization of shallower patterns, so they can cover those wide domains while being stupider and lacking some deep aspects of architecture. > > It is not clear to me that we can go from observations like this, to conclude that there is a dominant mainline probability for how the future clearly ought to go and that this dominant mainline is, "Well, before you get human-level depth and generalization of general intelligence, you get something with 95% depth that covers 80% of the domains for 10% of the pragmatic impact". > > ...or whatever the concept is here, because this whole conversation is, on my own worldview, being conducted in a shallow way relative to the kind of analysis I did in [Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf), where I was like, "here is the historical observation, here is what I think it tells us that puts a lower bound on this input-output curve". Here Eliezer sort of kind of grants Paul’s point that AIs will be optimized for generality in a way chimps aren’t, but points to his previous “Intelligence Explosion Microeconomics” essay to argue that we should expect a fast takeoff anyway. IEM has a lot of stuff in it, but one key point is that instead of using analogies to predict the course of future AI, we should open that black box and try to actually reason about how it will work, in which case we realize that recursive self-improvement common-sensically *has to* cause an intelligence explosion. I am sort of okay with this, but I feel like a commitment to avoiding analogies should involve not bringing up the chimp-human analogy further, which Eliezer continues to do, quite a lot. I do feel like Paul succeeded in convincing me that we shouldn’t place too much evidential weight on it. #### The Wimbledon Of Reference Class Tennis “Reference class tennis” is an old rationalist idiom for people throwing analogies back and forth. 
“AI will be slow, because it’s an economic transition like the Agricultural or Industrial Revolution, and those were slow!” “No, AI will be fast, because it’s an evolutionary step like chimps → humans, and that was fast!” “No, AI will be slow, because it’s an invention, like the computer, and computers were invented piecemeal and required decades of innovation to be useful.” “No, AI will be fast, because it’s an invention, like the nuclear bomb, and nuclear bombs went from impossible to city-killing in a single day.” “No, AI will be slow, because it will be surrounded by a shell-like metallic computer case, which makes it like a turtle, and turtles are slow.” “No, AI will be fast, because it’s dangerous and powerful, like a tiger, and tigers are fast!” And so on. Comparing things to other things is a time-tested way of speculating about them. But there are so many other things to compare to that you can get whatever result you want. This is the failure mode that the term “reference class tennis” was supposed to point to. Both participants in this debate are very smart and trying their hardest to avoid reference-class tennis, but neither entirely succeeds. Eliezer’s preferred classes are Bitcoin (“there wasn't a cryptocurrency developed a year before Bitcoin using 95% of the ideas which did 10% of the transaction volume”), nukes, humans/chimps, the Wright Brothers, AlphaGo (which really was a discontinuous improvement on previous Go engines), and AlphaFold (ditto for proteins). Paul’s preferred classes are the Agricultural and Industrial Revolutions, chess engines (which have gotten better along a gradual, well-behaved curve), all sorts of inventions like computers and ships (likewise), and world GDP. Eliezer already listed most of these in his Intelligence Explosion Microeconomics paper in 2013, and concluded that the space of possible analogies was contradictory enough that we needed to operate at a higher level. Maybe so, but when someone lobs a reference class tennis ball at you, it’s hard to resist the urge to hit it back. #### Recursive Self-Improvement This is where I think Eliezer most wants to take the discussion. The idea is: once AI is smarter than humans, it can do a superhuman job of developing new AI. In his Microeconomics paper, he writes about an argument he (semi-hypothetically) had with Ray Kurzweil about Moore’s Law. Kurzweil expected Moore’s Law to continue forever, even after the development of superintelligence. Eliezer objects: > Suppose we were dealing with minds running a million times as fast as a human, at which rate they could do a year of internal thinking in thirty-one seconds, such that the total subjective time from the birth of Socrates to the death of Turing would pass in 20.9 hours. Do you still think the best estimate for how long it would take them to produce their next generation of computing hardware would be 1.5 orbits of the Earth around the Sun? That is: the fact that it took 1.5 years for transistor density to double isn’t a natural law. It’s *pointing to* a law that the amount of resources (most notably intelligence) that civilization focused on the transistor-densifying problem equalled the amount it takes to double it every 1.5 years.
If some shock drastically changed available resources (by eg speeding up human minds a million times), this would change the resources involved, and the same laws would predict transistor speed doubling in some shorter amount of time (naively 0.0000015 years, although realistically at that scale other inputs would dominate). So when Paul derives clean laws of economics showing that things move along slow growth curves, Eliezer asks: why do you think they would keep doing this when one of the discoveries they make along that curve might be “speeding up intelligence a million times”? (Eliezer actually thinks improvements in the quality of intelligence will dominate improvements in speed - AIs will mostly be smarter, not just faster - but speed is a useful example here and we’ll stick with it) Paul answers: > *Summary of my response: Before there is AI that is great at self-improvement there will be AI that is mediocre at self-improvement.* > > Powerful AI can be used to develop better AI (amongst other things). This will lead to runaway growth. > > This on its own is not an argument for discontinuity: before we have AI that radically accelerates AI development, the slow takeoff argument suggests we will have AI that *significantly* accelerates AI development (and before that, *slightly* accelerates development). That is, an AI is just another, faster step in the [hyperbolic growth we are currently experiencing](https://sideways-view.com/2017/10/04/hyperbolic-growth/), which corresponds to a further increase in rate but not a discontinuity (or even a discontinuity in rate). > > The most common argument for recursive self-improvement introducing a new discontinuity seems to be: some systems “fizzle out” when they try to design a better AI, generating a few improvements before running out of steam, while others are able to autonomously generate more and more improvements. This is basically the same as the universality argument in a previous section. Eliezer: > Oh, come on. That is straight-up not how simple continuous toy models of RSI work. Between a neutron multiplication factor of 0.999 and 1.001 there is a very huge gap in output behavior. > > Outside of toy models: Over the last 10,000 years we had humans going from mediocre at improving their mental systems to being (barely) able to throw together AI systems, but 10,000 years is the equivalent of an eyeblink in evolutionary time - outside the metaphor, this says, "A month before there is AI that is great at self-improvement, there will be AI that is mediocre at self-improvement." > > (Or possibly an hour before, if reality is again more extreme along the Eliezer-Hanson axis than Eliezer. But it makes little difference whether it's an hour or a month, given anything like current setups.) > > This is just pumping hard again on the intuition that says incremental design changes yield smooth output changes, which (the meta-level of the essay informs us wordlessly) is such a strong default that we are entitled to believe it if we can do a good job of weakening the evidence and arguments against it. > > And the argument is: Before there are systems great at self-improvement, there will be systems mediocre at self-improvement; implicitly: "before" implies "5 years before" not "5 days before"; implicitly: this will correspond to smooth changes in output between the two regimes even though that is not how continuous feedback loops work. I got a bit confused trying to understand the criticality metaphor here.
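(As best I can tell, the toy model behind the neutron metaphor is just this - a minimal sketch of my own, not anything from the actual debate, where each generation multiplies the quantity of interest by a constant factor k:)

```python
# Criticality toy model: a 0.2% change in the per-generation
# multiplication factor k separates "fizzles out" from "explodes".
for k in (0.999, 1.001):
    x = 1.0
    for _ in range(10_000):   # generations; in a bomb these take microseconds
        x *= k
    print(f"k = {k}: after 10,000 generations, x = {x:.3g}")
# k = 0.999 -> x ~ 4.5e-05 (dies away)
# k = 1.001 -> x ~ 2.2e+04 (runaway growth)
```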
There’s no equivalent of neutron decay, so any AI that can consistently improve its intelligence is “critical” in some sense. Imagine Elon Musk replaces his brain with a Neuralink computer which - aside from having read-write access - exactly matches his current brain in capabilities. Also he becomes immortal. He secludes himself from the world, studying AI and tinkering with his brain’s algorithms. Does he become a superintelligence? I think under the assumptions Paul and Eliezer are using, eventually maybe. After some amount of time he’ll come across a breakthrough he can use to increase his intelligence. Then, armed with that extra intelligence, he’ll be able to pursue more such breakthroughs. However intelligent the AI you’re scared of is, Musk will get there eventually. How long will it take? A good guess might be “years” - Musk starts out as an ordinary human, and ordinary humans are known to take years to make breakthroughs. Suppose it takes Musk one year to come up with a first breakthrough that raises his IQ 1 point. How long will his second breakthrough take? It might take longer, because he has picked the lowest-hanging fruit, and all the other possible breakthroughs are much harder. Or it might take less time, because he’s slightly smarter than he was before, and maybe some extra intelligence goes a really long way in AI research. The concept of an intelligence explosion seems to assume the second effect dominates the first. This would match the observation that human researchers, who aren’t getting any smarter over time, continue making new discoveries. That suggests the range of possible discoveries at a given intelligence level is pretty vast. [Some research finds](https://slatestarcodex.com/2018/11/26/is-science-slowing-down-2/) that the usual pattern in science is constant rate of discovery from exponentially increasing number of researchers, suggesting strong low-hanging fruit effects, but these seem to be overwhelmed by other considerations in AI right now. I think Eliezer’s position on this subject is shaped by assumptions like: * If you have an AI as intelligent as Elon Musk today, then tomorrow you can run it on more hardware with a bit of normal human algorithmic progress, and get one twice as intelligent. So even if it would take Elon years to make a breakthrough, long before those years are up you’ll have an AI that can make breakthroughs much faster. * An AI that’s twice as intelligent (or ten times as intelligent) as a human can actually make discoveries very quickly. I don’t know what kind of advantage Terry Tao (for the sake of argument, IQ 200) has over some IQ 190 mathematician, but his advantage over an IQ 100 mathematician is complete. In a world where mathematics had only ever been done by IQ 100 people, Tao could advance the art by centuries (of normal progress) in…Years? Days? Some very short amount of time. * Given that humans (in this scenario) were able to bring AI from SHRDLU to superintelligence in less than 100 years *without gaining any IQ at all*, presumably you can make lots and lots and lots of progress before hitting your IQ ceiling, by which point you have a new IQ ceiling. I think this makes more sense than talking about criticality, or a change from 0.999 to 1.001. What would Paul respond here? I think he’d say that even very stupid AIs can “contribute to AI research”, if you mean things like some AI researcher using Codex to program faster. So you could think of AI research as a production function involving both human labor and AI labor.
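(Here’s a minimal sketch of what that production function might look like - my reconstruction with made-up numbers, not Paul’s actual model. Treat each breakthrough as needing a fixed amount of cognitive work, which human hours and AI hours jointly supply, with q measuring how many human-hour-equivalents one AI hour is worth:)

```python
# Toy production function for AI research: human and AI labor are
# substitutes, and AI labor quality q improves over time.
WORK_PER_BREAKTHROUGH = 1000.0   # human-hour-equivalents (made up)
AI_HOURS = 1000.0                # AI effort available (made up)

for q in (0.0, 0.1, 0.5, 0.9, 0.99, 1.0):
    human_hours = max(WORK_PER_BREAKTHROUGH - q * AI_HOURS, 0.0)
    print(f"AI quality {q:4.2f}: {human_hours:6.1f} human hours per breakthrough")
# Human input falls smoothly toward zero as q rises; on this model there
# is no special moment at which AI "takes over" research.
```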
As the quality of AI labor improves, you need less and less human labor to produce the same number of breakthroughs. At some point you will need no human labor at all, but before that happens you will need 0.001 hours of human labor per breakthrough, and so this won’t make a huge difference. Eliezer could respond in two ways. First, that the production function doesn’t look like that. There is no AI that can do 2/3 of the work in groundbreaking AI research; in order to do that, you need a fully general AI that can do all of it. This seems wrong to me; I bet there are labs where interns do a lot of the work but they still need the brilliant professor to solve some problems for them. That proves that there are intelligence levels where you can do 2/3, but not all, of AI research. Or second, that AI will advance through these levels in hours or days. This doesn’t seem right to me either; the advent of Codex (probably) made AI research a little easier, but that doesn’t mean we’re only a few days from superintelligence. Paul gets to assume a gradual curve from Codex to whatever’s one level above Codex to whatever’s two levels . . . to superintelligence. Eliezer has to assume this terrain is full of gaps - you get something that helps a little, then a giant gap where increasing technology pays no returns at all, then superintelligence. This seems like a more specific prediction, the kind that requires some particular evidence in its favor which I don’t see. Eliezer seems to really hate arguments like the one I just made: > This is just pumping hard again on the intuition that says incremental design changes yield smooth output changes, which (the meta-level of the essay informs us wordlessly) is such a strong default that we are entitled to believe it if we can do a good job of weakening the evidence and arguments against it. > > And the argument is: Before there are systems great at self-improvement, there will be systems mediocre at self-improvement; implicitly: "before" implies "5 years before" not "5 days before"; implicitly: this will correspond to smooth changes in output between the two regimes even though that is not how continuous feedback loops work. I guess I’m missing this argument. I see Paul as saying that “the loop” has already started with Codex (and more broadly with all human economic progress). It’s *possible* the speed might suddenly shift, like the gradually sloping plateau that suddenly ends in a huge cliff. But if you’ve been seeing nothing but gradually sloping plateau for the past thousand miles, the hypothesis “Just out of view there’s a huge cliff” requires more positive evidence than the hypothesis “Just out of view the plateau continues to slope at the same rate”. Eliezer points out there have been some cliffs before. But supposing that in the past thousand miles, there have been three previous cliffs, “there is a huge cliff bigger than any you’ve ever seen just one mile beyond your sight” *still* seems to be non-default and require quite a bit of evidence. #### The Actual Yudkowsky-Christiano Debate, Finally All of this was just preliminaries, Eliezer and Paul taking potshots at each other from a distance. Someone finally got them together in the same [chat] room and forced them to talk directly. It’s kind of disappointing. They spend most of the chat trying to figure out exactly where their ideas diverge. Paul thinks things will get pretty crazy before true superintelligence.
Eliezer wants him to operationalize “pretty crazy” concretely enough that he can disagree. They ended up focusing on a world where hundreds of billions to trillions of dollars are invested in AI (for context, this is about the value of the whole tech industry today). Partly this is because Paul thinks this sounds “pretty crazy” - it must mean that AI progress is exciting enough to attract lots of investors. But partly it’s because Eliezer keeps bringing up apparent examples of discontinuous progress - like AlphaGo - and Paul keeps dismissing them as “there wasn’t enough interest in AI to fill in the gaps that would have made that progress continuous”. If AI gets trillions in funding, he expects to see a lot fewer AlphaGos. Eliezer is mildly skeptical this world will happen, because he expects regulatory barriers to make it harder to deploy AI killer apps. But it doesn’t seem to be the crux of their disagreement. The real problem is: both of them were on their best behavior, by which I mean boring. They both agreed they weren’t going to resolve this today, and that the most virtuous course would be to generate testable predictions on what the next five years would be like, in the hopes that one of their models would prove obviously much more productive at this task than the other. But getting these predictions proved harder than expected. Paul believes “everything will grow at a nice steady rate” and Eliezer believes “everything will grow at a nice steady rate until we suddenly die”, and these worlds look the same until you are dead. I am happy to report that three months later, the two of them finally found an empirical question they disagreed on and made a bet on it. The difference is: Eliezer thinks an AI is a little bit more likely to win the International Mathematical Olympiad before 2025 than Paul does (under a specific definition of “win”). I haven’t followed the many many comment sub-branches it would take to figure out how that connects to any of this, but if it happens, update a little towards Eliezer, I guess. #### The Comments Section Paul thinks AI will progress gradually; Eliezer suddenly. They differ not just in their future predictions, but their interpretation of past progress. For example, Eliezer sees the GPT series of writing AIs as appearing with surprising suddenness. In the comments, Matthew Barnett points out that on something called Penn Treebank perplexity, a benchmark for measuring how good language models are, the GPTs mostly just continued the pre-existing trend:

Source: Matthew Barnett’s comment [here](https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds?commentId=curKEtZN4JgDL4tQK), with pre-GPT trend line and announcement dates of GPTs drawn in.

Gwern [answered](https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds?commentId=mKgEsfShs2xtaWz4K) (long comment, only partly cited):

> The impact of GPT-3 had nothing whatsoever to do with its perplexity on Penn Treebank . . . the impact of GPT-3 was in establishing that trendlines did continue in a way that shocked pretty much everyone who'd written off 'naive' scaling strategies. Progress is made out of stacked sigmoids: if the next sigmoid doesn't show up, *progress doesn't happen*. Trends happen, until they stop. Trendlines are not caused by the laws of physics.
> You can dismiss AlphaGo by saying "oh, that just continues the trendline in ELO I just drew based on MCTS bots", but the fact remains that MCTS progress had stagnated, and here we are in 2021, and pure MCTS approaches do not approach human champions, much less beat them. Appealing to trendlines is roughly as informative as "calories in calories out"; 'the trend continued because the trend continued'. A new sigmoid being discovered is extremely important.

I’m not sure I fully understand this, but let me try. Progress tends to happen along sigmoid curves, one sigmoid per paradigm: Consider cryptocurrency as an example. In 2010, cryptocurrency was small and hard to use. Its profits might have been growing quickly in relative terms, but slowly in absolute terms. But by 2020, it had become the next big thing. People were inventing new cryptocurrencies every day, technical challenges were falling one after another, lots of people were getting rich. And by 2030, presumably cryptocurrency will be where eg personal computers are now - still a big business, but most of the interesting work has been done, it’s growing at a boring normal growth rate, and improvements are rare and marginal. Now imagine a graph of total tech industry profits over time. Without having seen this graph, I imagine relatively consistent growth. In the 1990s, the growth was mostly from selling PCs and Windows CDs, which were on the super-hot growth parts of their sigmoid. By the 2000s, those had matured and flattened out, but new paradigms (smartphones, online retail) were on the super-hot growth parts of *their* sigmoids. By the late 2010s, *those* had matured too, but newer paradigms (cryptocurrency, electric cars) were on the super-hot growth parts of *their* sigmoids. If we want to know what the next decade will bring, we should look for paradigms that are still in the early-slow-growth stage, maybe quantum computers. The idea is: each individual paradigm has a sigmoid that slows and peters out, but the tech industry as a whole generates new sigmoids and maintains its usual growth rate. So if you look at eg the invention of Bitcoin, you could say “this is boring, it’s just causing tech industry profits to follow the normal predicted growth pattern after smartphones petered out, no need to update here.” Or you could say “actually this is a groundbreaking new invention that is making trillions of dollars, Satoshi is a genius, thank goodness he did this or else the tech industry would have crashed”. One reason to prefer the second story is that tech industry profits probably won’t keep going up continuously forever. Global population kept going up at a fixed rate for tens of thousands of years, then stopped in 1960 (it had to stop sometime or we would have had [infinite people in 2026](https://slatestarcodex.com/2019/04/22/1960-the-year-the-singularity-was-cancelled/)). US GDP goes up at a pretty constant rate, but I assume Roman GDP did too, before it stopped and reversed. So when Satoshi invents Bitcoin and it becomes the hot new thing, even though it only continues the trend, you’ve learned important new information: namely, that the trend does continue, at least for one more cycle. So here it looks like Matthew is taking the reductionist perspective (that the GPTs were just a predictable continuation of trend) and Gwern is taking the more interesting perspective (the trend continuing is exciting and important). While I acknowledge Gwern has a good point here, it seems - not entirely related to the point under discussion?
Yes, progress will come from specific people doing specific things, and they deserve to be celebrated, but Paul’s position - that progress is gradual and predictable - still stands. But then Gwern makes another more fundamental objection: > GPT-3 further showed completely unpredicted emergence of capabilities across *downstream* tasks which are not measured in PTB perplexity. There is nothing obvious about a PTB BPC of 0.80 that causes it to be useful where 0.90 is largely useless and 0.95 is a laughable toy. (OAers may have had faith in scaling, but they could not have told you in 2015 that interesting behavior would start at 𝒪(1b), and it'd get really cool at 𝒪(100b).) That's why it's such a useless metric. There's only one thing that a PTB perplexity can tell you, under the pretraining paradigm: when you have reached human AGI level. (Which is useless for obvious reasons: much like saying that "if you hear the revolver click, the bullet wasn't in that chamber and it was safe". Surely true, but a bit late.) It tells you nothing about intermediate levels. I'm reminded of the [Steven Kaas line](https://nitter.eu/stevenkaas/status/148884531917766656): “Why idly theorize when you can JUST CHECK and find out the ACTUAL ANSWER to a superficially similar-sounding question SCIENTIFICALLY?” In other words, suppose AIs start at Penn Treebank perplexity 100 and go down by one every year. After 20 years, they have PTP 80 and are useless. After 21 years, they have PTP 79 and are suddenly strong enough to take over the world. Was their capability gain gradual or sudden? It was gradual in PTP, but sudden in real-life abilities we care about. Eliezer [comments](https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds?commentId=DsYwzyWzjZNbs9QnF): > What does it even mean to be a gradualist about any of the important questions like [the ones Gwern mentions], when they don't relate in known ways to the trend lines that are smooth?  Isn't this sort of a shell game where our surface capabilities do weird jumpy things, we can point to some trend lines that were nonetheless smooth, and then the shells are swapped and we're told to expect gradualist AGI surface stuff?  This is part of the idea that I'm referring to when I say that, even as the world ends, maybe there'll be a bunch of smooth trendlines underneath it that somebody could look back and point out.  (Which you could in fact have used to predict all the key jumpy surface thresholds, *if* you'd watched it all happen on a few other planets and had any idea of where jumpy surface events were located on the smooth trendlines - but we haven't watched it happen on other planets so the trends don't tell us much we want to know.) That is: when will an AI achieve Penn Treebank perplexity of 0.62? Based on the green line on the graph above, probably sometime around 2027. When will it be able to invent superweapons? Nobody has any idea. So who cares? [Paul](https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds?commentId=uuhp7psqZyLXdcf9F): > This seems totally bogus to me. > > It feels to me like you mostly don't have views about the actual impact of AI as measured by jobs that it does or the $s people pay for them, or performance on *any* benchmarks that we are currently measuring, while I'm saying I'm totally happy to use gradualist metrics to predict any of those things. If you want to say "what does it mean to be a gradualist" I can just give you predictions on them. 
> > To you this seems reasonable, because e.g. $ and benchmarks are not the right way to measure the kinds of impacts we care about. That's fine, you can propose something other than $ or measurable benchmarks. If you can't propose anything, I'm skeptical. In other words, if Eliezer doesn’t care about boring things like Penn Treebank, then he should talk about interesting things, and Paul will predict AI will be gradual in those too. Number of jobs lost per year? Amount of money produced by the AI industry? Destructiveness of the worst superweapon invented by AI? (did you hear that an AI asked to invent superweapons recently [reinvented VX nerve gas](https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx) after only six hours’ computation?) Eliezer has already talked about why he doesn’t expect abstracted AI progress to show immediate results in terms of jobs, etc, and Paul knows this. I think Paul is leaning on a kind of argument from incredulity: it seems absurd to him that AI could fail to have *any* meaningful effects that can be traced in a gradual pattern. #### And The Winner Is… …Paul absolutely, Eliezer directionally. This is the [Metaculus forecasting question](https://www.metaculus.com/questions/736/will-there-be-a-complete-4-year-interval-in-which-world-output-doubles-before-the-first-1-year-interval-in-which-world-output-doubles/) corresponding to Paul’s preferred formulation of hard/soft takeoff. Metaculans think there’s a 69% chance it’s true. But it fell by about 4% after the debate, suggesting that some people got won over to Eliezer’s point of view. [Rafael Harth tried](https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds?commentId=cTnuwys2swCcWWEyb) to get the same information with a simple survey, and got similar results: on a scale of 1 (strongly Paul) to 9 (strongly Eliezer), the median moved from a 5 to a 7. Should this make us more concerned? Less concerned? I’ll give the last word to [Raemon](https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds?commentId=R7uKrCtTwxynuEeJj), who argues that both scenarios are concerning for different reasons: > I totally think there are people who sort of nod along with Paul, using it as an excuse to believe in a rosier world where things are more comprehensible and they can imagine themselves doing useful things without having a plan for solving the actual hard problems. Those types of people exist. I think there's some important work to be done in confronting them with the hard problem at hand. > > But, also... Paul's world AFAICT *isn't actually rosier*. It's potentially *more* frightening to me. In Smooth Takeoff world, you can't carefully plan your pivotal act with an assumption that the strategic landscape will remain roughly the same by the time you're able to execute on it. Surprising partial-gameboard-changing things could happen that affect what sort of actions are tractable. Also, dumb, boring ML systems run amok could kill everyone before we even get to the part where recursive self improving consequentialists eradicate everyone. > > I think there is still something seductive about this world – dumb, boring ML systems run amok *feels like* the sort of problem that is easier to reason about and maybe solve. (I don't think it's *actually* necessarily easier to solve, but I think it can feel that way, whether it's easier or not).
> And if you solve ML-run-amok-problems, you still end up dead from recursive-self-improving-consequentialists if you didn't have a plan for them.

As usual, I think the takeaway is “everyone is uncertain enough on this point that it’s worth being prepared for either scenario. Also, we are bottlenecked mostly by ideas and less by other resources, so if anyone has ideas for dealing with either scenario we should carry them out, while worrying about the relative likelihood of each only in the few cases where there are real tradeoffs.”
# Open Thread 218 This is the weekly visible open thread. Post about anything you want, ask random questions, whatever. You can also talk at the unofficial ACX community [subreddit](https://www.reddit.com/r/slatestarcodex/), [Discord](https://discord.gg/RTKtdut), or [bulletin board](https://www.datasecretslox.com/index.php). Also: **1:** Last chance to [send in](https://docs.google.com/forms/d/18ft8ZxQcKFwMsi_DZINn7d7VIso_y1Armfr59YeOGLE/edit) Book Review Contest entries, due date is still 4-5-22! **2:** I'm provisionally abandoning the "odd numbered open threads are no politics" rule. I always forgot about this myself, everyone else always forgot, I never punished rulebreakers, and I'm nervous about having rules that don't get enforced. Please try to be careful in how you talk about politics, don't post controversy-for-the-sake-of-controversy, and I'll still moderate anything that gets too heated. If anyone is really upset about this, let me know. **3:** We have [spring Schelling Meetup dates for seventy cities](https://docs.google.com/spreadsheets/d/1KUCsdwLtDB5TQMJ0iqQIlnMgs6iTcgaAKzJdr5FpfmU/edit#gid=1585750313)! If you only go to one meetup a year, go to the big well-advertised one we do in late summer / early fall. But if you only go to two or three meetups a year, go to this one too. If you're not on the list and should be, fill in [this form](https://docs.google.com/forms/d/e/1FAIpQLSe6bVGranNA5AKTKj8l4XtTzvXBaRsap48rEvbP5gqA2JTiEQ/viewform); if you have questions, ask meetupsmingyuan@gmail.com . **4:** [INFER tournament for EA student groups](https://forum.effectivealtruism.org/posts/Ybj5uLGTomC2Jpdnf/launching-the-infer-forecasting-tournament-for-ea-uni-groups): if your college has an effective altruist group, it's invited to enter this superforecasting-style tournament. Top teams will get monetary prizes, top individuals will get offered professional forecasting positions. If your college doesn't have an EA student group, you can always start one! Get in touch with chapters@effectivealtruism.org, or just say the words "I would like to start an EA student group" somewhere within ten meters of a smartphone, computer, or mirror; the recruitment arm is omnipresent and relentless. **5:** Related: Will MacAskill's [book](https://forum.effectivealtruism.org/posts/JfaF3DgwNN6itcmtm/announcing-what-we-owe-the-future) *What We Owe The Future*, on effective altruism and the long-term future, is available for pre-order. He says it helps with marketing if people pre-order rather than wait until it comes out, so if you're interested, get it now. You can [preorder on Amazon](https://www.amazon.com/What-Owe-Future-William-MacAskill/dp/1541618629) ($27). **6:** Sarah Constantin wrote some additional thoughts on progesterone for post-partum depression: you can [read them here](https://sarahconstantin.substack.com/p/progesterone-for-postpartum-depression).
# The Low-Hanging Fruit Argument: Models And Predictions A followup to [Contra Hoel On Aristocratic Tutoring](https://astralcodexten.substack.com/p/contra-hoel-on-aristocratic-tutoring?s=w): Imagine scientists venturing off in some research direction. At the dawn of history, they don’t need to venture very far before discovering a new truth. As time goes on, they need to go further and further. Actually, scratch that, nobody has good intuitions for truth-space. Imagine some foragers who have just set up a new camp. The first day, they forage in the immediate vicinity of the camp, leaving the ground bare. The next day, they go a little further, and so on. There’s no point in traveling miles and miles away when there are still tasty roots and grubs nearby. But as time goes on, the radius of denuded ground will get wider and wider. Eventually, the foragers will have to embark on long expeditions with skilled guides just to make it to the nearest productive land. Let’s add intelligence to this model. Imagine there are fruit trees scattered around, and especially tall people can pick fruits that shorter people can’t reach. If you are the first person ever to be seven feet tall, then even if the usual foraging horizon is very far from camp, you can forage very close to camp, picking the seven-foot-high-up fruits that no previous forager could get. So there are actually many different horizons: a distant horizon for ordinary-height people, a nearer horizon for tallish people, and a horizon so close as to be almost irrelevant for giants. Finally, let’s add the human lifespan. At night, the wolves come out and eat anyone who hasn’t returned to camp. So the maximum distance anyone will ever be able to forage is a day’s walk from camp (technically half a day, so I guess let’s imagine that everyone can teleport back to camp whenever they want). This model can explain some otherwise confusing observations about the history of science: 1. Early scientists should make more (and larger) discoveries than later scientists. 2. Early scientists should be relatively more likely to be amateurs; later scientists, professionals. 3. Early scientists should make discoveries younger (on average) than later scientists. 4. These trends should move more slowly for the most brilliant scientists. 5. These trends should fail to apply in fields of science that were impossible for previous generations to practice. Going one-by-one: **1: Early scientists should make more (and larger) discoveries than later scientists** In our model, a forager spends her day walking some distance away from camp, then foraging there. Her success depends on how far from camp she is, and how depleted the food supply is in the area she tries to exploit. For example, if she has twelve hours of daylight, she might walk for six hours, then spend six hours foraging. The very first forager can walk zero hours, then forage 100% virgin terrain. Suppose this is worth 100 points per hour, and she spends all twelve hours foraging. She can get 1200 points. Suppose that as time goes on, areas immediately outside camp are 100% depleted, areas 6 hours from camp are 50% depleted, and so on. A forager might choose to walk 6 hours from camp and spend the next six hours foraging in 50% depleted terrain, for 300 points. Or they might walk 9 hours from camp and forage in 25% depleted terrain for three hours, for 225 points.
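(Here’s that payoff arithmetic as a minimal sketch, using the made-up numbers above:)

```python
DAY, RATE = 12, 100   # hours of daylight; points per hour on virgin ground

def payoff(walk_hours, depletion):
    """Walk out, then forage the rest of the daylight on partly-depleted ground."""
    return (DAY - walk_hours) * RATE * (1 - depletion)

print(payoff(0, 0.00))  # -> 1200.0: the very first forager, virgin ground at camp
print(payoff(6, 0.50))  # ->  300.0: later era, six hours out, half depleted
print(payoff(9, 0.25))  # ->  225.0: nine hours out, quarter depleted
# Note the 225-point option is strictly worse - the inconsistency
# discussed next.
```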
Since a rational forager would never choose the latter, I assume there's some law that governs how depleted terrain would be in this scenario, which I'm violating. I can't immediately figure out how to calculate it, so let's just assume some foragers aren't rational. The point is: early foragers and later foragers both face an explore/exploit tradeoff, but that tradeoff is much better for early foragers than later ones (and trivial for the first forager, who gets no value from exploration).

Breaking out of the analogy: a scientist can spend her lifespan either catching up to the frontier of knowledge, or trying to make new discoveries (realistically these aren't completely separate activities, but I'm modeling them as if they are). The further a scientist goes into previously unexplored sub-sub-fields, the more likely she is to reach an area nobody has ever thought about before, where there might be interesting discoveries to make. If she sticks to well-covered territory like tenth-grade Euclidean geometry, it's very unlikely (though still not literally impossible) for her to find something everyone else has missed.

**2: Early scientists should be relatively more likely to be amateurs; later scientists, professionals.**

Imagine two foragers. One is a weaver, and mostly spends her time in camp weaving, but occasionally ventures out for a few hours to gather. The other is a full-time forager and spends her entire day trekking in search of food. Which is more likely to make a major find - say, a giant nest of delicious ostrich eggs, left all alone?

Just after they move camp, their relative likelihood is close to the relative amounts of time they spend foraging. If the weaver spends 2 hours a day and the professional forager spends 12 hours, it's 1:6. After they've been encamped a while and the immediate environs are depleted, it becomes much higher. Suppose the area around camp is 99% depleted, but the area three hours away is only 50% depleted. The weaver who spends two hours near camp will only get 2 points. But the professional forager who spends three hours traveling, then nine hours foraging, will get 450 points. The ratio is now 1:225. The weaver can't spend three hours getting to more promising terrain, because she only has two hours total!

In the early days of science, many discoveries were made by lucky amateurs. Van Leeuwenhoek was a businessman; Lavoisier was an aristocrat and politician; Bayes was a minister; Franklin was a printer/author/inventor/socialite/ambassador/postmaster/firefighter/musician/philanthropist/Founding Father. Nowadays there are very occasional discoveries by amateurs (eg [de Grey on chromatic number](https://www.quantamagazine.org/decades-old-graph-problem-yields-to-amateur-mathematician-20180417/)), but they seem much less frequent.

**3: Early scientists should make discoveries younger (on average) than later scientists**

Just after setting up camp, a forager might walk for a few minutes and stumble across the ostrich eggs. After many days of foraging, they might have to walk six hours before reaching terrain pristine enough to potentially hold such an exciting find. Likewise, scientists should have to spend more time reaching the frontiers of knowledge before making great discoveries. According to [Jones and Weinberg](https://www.pnas.org/doi/full/10.1073/pnas.1102895108):

> At what age do scientists tend to produce great ideas? Focusing on great scientific achievements of the 20th century, this article shows that the age–creativity relationship demonstrates much greater variation over time than across fields. Moreover, field-specific dynamics in the age–creativity relationship are closely associated with variation in other field-specific characteristics, including the prevalence of theoretical contributions, educational duration, and citation patterns. These dynamics were especially pronounced in physics during the 1920s and 1930s, when quantum mechanics was developing. Thus, although the iconic image of the young, great mind making critical breakthroughs was a good description of physics at that time, it turns out to be a poor descriptor of age–creativity patterns more generally or even of physics today, where the mean age of Nobel Prize winning achievements since 1980 is 48 y.

This is generally considered to be a function of science politics, where you need a strong career network and good connections to run your own lab, and without your own lab credit for your accomplishments will go to your mentor. I haven't done the work you would need to distinguish between these two explanations yet, although I find it suggestive that [the trend is more pronounced](https://www.lindau-nobel.org/geniuses-are-getting-older/) in theoretical physics than in biology. I'll discuss some other ways we could test this later.

**4: These trends should move more slowly for the most brilliant scientists.**

Brilliant scientists might have two advantages over their slower peers, for different definitions of brilliant. First, they could be faster learners, able to reach the frontier more quickly. Second, they could be able to see subtle patterns other people had missed even in well-traversed ground. This means they don't have to waste a lot of their time reaching the frontier, and they should be able to extract value out of even "depleted" ground as if it was completely new.

I don't really know how to test these claims, especially the second. But for what it's worth, John von Neumann was the youngest ever lecturer at the University of Berlin, and Terence Tao was the youngest ever professor at UCLA.

**5: These trends should fail to apply in fields of science that were impossible for previous generations to practice.**

In Contra Hoel, I talked about machine learning as feeling different from some other scientific fields: there are frequent exciting new discoveries. This shouldn't be surprising. Physics is stagnant because Newton and Einstein already got all the cool results. But Newton and Einstein didn't have TPUs, so they couldn't discover things about machine learning.

(imagine one of our foragers found the entrance to a previously unknown cave system, full of mushrooms, just outside camp. There would be a brief period when the foragers exploring these caves could discover things as quickly as the very first foragers to reach the area)

This suggests another way to test some of the hypotheses above: machine learning should have a lower age of great discoveries. Is this true? I can't tell. When I look at people who won the top ML prizes, they seem to be older people who had a long and distinguished career in proto-ML, eg people who pioneered the theory of reinforcement learning in the 1990s. I could try to get around this, but it would feel kind of post hoc.
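Stepping back: predictions 3 and 4 reduce to the same arithmetic in the foraging model. Here is a minimal sketch, with every parameter invented purely for illustration, of how age-at-first-discovery climbs as the frontier recedes, and climbs more slowly for faster learners:

```python
# Toy sketch of predictions 3 and 4. The frontier recedes as a field ages,
# so a scientist must spend more of her career traveling before she can
# discover anything; faster learners reach it sooner, so the trend moves
# more slowly for them. Every parameter here is invented for illustration.

TRAINING_AGE = 22  # age at which a scientist starts moving toward the frontier

def age_at_first_discovery(frontier_distance: float, learning_speed: float) -> float:
    """Age = starting age + years needed to cross the depleted region."""
    return TRAINING_AGE + frontier_distance / learning_speed

for field_age in (0, 20, 40, 60):       # years since the field opened up
    frontier = 0.15 * field_age         # assumed rate of frontier recession
    ordinary = age_at_first_discovery(frontier, learning_speed=0.5)
    brilliant = age_at_first_discovery(frontier, learning_speed=2.0)
    print(f"field age {field_age:>2}y: ordinary scientist ~{ordinary:.0f}, "
          f"brilliant scientist ~{brilliant:.0f}")
```

Under these made-up numbers, the ordinary scientist's age at first great discovery climbs from 22 to 40 over sixty years of the field's life, while the brilliant one's only climbs from 22 to 26.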
I'd be interested in someone comparing the average age of authors on the most cited papers in various fields over time, but I'm worried that social effects would dominate: eg many of the most innovative crypto people (eg Vitalik Buterin) seem young, but that could just be a "crypto is cool among young people" thing.

I find this model interesting because it offers a purely mechanical account of trends that most people suspect are political. Some writers attribute the decline in amateur scientists to an increasingly credentialist establishment; others attribute the decline in discoveries by young people to a gerontocracy. My guess is that it's about 75% mechanical and 25% political, but if people disproved some of this model's testable claims they could change my mind.
# Idol Words

###### *(with apologies to [Raymond Smullyan](https://en.wikipedia.org/wiki/The_Hardest_Logic_Puzzle_Ever) and the rest of the omniscient idol riddle tradition)*

The woman was wearing sunglasses, a visor, a little too much lipstick, and a camera around her neck.

"Excuse me," she asked. "Is this the temple with the three omniscient idols? Where one always tells the truth, one always lies, and one answers randomly?"

The center idol's eyes glowed red, and it spoke with a voice from everywhere and nowhere, a voice like the whoosh of falling waters or the flash of falling stars. "**No!**" the great voice boomed.

"Oh," said the woman. "Because my Uber driver said - " She cut herself off. "Well, do you know how to get there?"

"**It is here!**" said the otherworldly voice. "**You stand in it now!**"

"Didn't you just say this wasn't it?"

"**No!**" said the idol. "**I said nothing of the sort!**"

The woman stood for a second, confused. "Should I ask one of them instead?" She pointed at the idols to either side. The right idol had moose-like antlers that somehow suggested the curve of a nautilus shell; the left had a helmet like those that Trojan warriors wore when the world was young.

"**Seek to know no more!**" they all chanted together, loudly enough that the very granite columns seemed to shake. "**Begone!**"

I picked that moment to walk back in from my break. "Hi," I said, "I'm the keeper of the omniscient idols, one of which always tells the truth, one of which always lies, and one of which answers randomly, is there a problem here?"

"Huh? That guy -" she pointed to the central idol - "said this *wasn't* the temple of the omniscient idols."

"Then it was Liar, or Random."

"And then he said it *was* the temple!"

"I guess it was Random, then."

"You don't know which is which?"

"They switch around for every new petitioner."

"Why?"

"Don't ask me. That's just how the idols work." The one with the antlers looked different now, a face covered in many eyes. The one who had previously worn the helmet now had seaweed growing where hair should be. The one in the center was weeping blood.

"Well, I had some important questions for them. Can I try again?"

"No ma'am. The idols only accept three questions per petitioner, that's the rule."

"But I came all this way!"

"If you go to the west side of the temple you'll see the Omniscient Idol Museum, it has some great exhibits about the history of the temple. And the gift shop is around the back, we have 30% off on all omniscient idol-related merchandise this week only."

"I really think you need better signage here. And you should mark clearly which is the one that answers randomly, so people don't get confused."

"Ma'am, I need you to go so we can let in the next petitioner," I said, and gestured to the cyclopean stone door to the gift shop.

---

It was another boring day as the keeper of the three omniscient idols, one of which always tells the truth, one of which always lies, and one of which answers randomly.

"My first question is for the center idol," said the man. He was thin and balding, and he wore very precise-looking spectacles. "If I asked you whether the left idol is Random, would you say yes?"

"**Yes,**" came the immediate response from the center idol, with a cadence that sounded like a bell ringing in an endless expanse.

"Well then, one of the following must be true. Either you are Truth-Teller and the left is Random, you are Liar and the left is *still* Random, or you are Random yourself. In any case, your answer proves that the right cannot be Random, so my question is for him. Right idol, is it true that 1 + 1 = 2?"

"**Yes,**" came the immediate response from the right idol, with a certainty like a pebble striking a lake.

"That means the right idol must be Truth-Teller, which means I can use it as an oracle to determine the identity of the other two. So my next question is also to the rightmost idol: is the center idol Random?"

"**Yes,**" it said again, another pebble.

"Then I've figured it out! The left idol is Liar, the center idol is Random, and the right idol is Truth-Teller. Am I right?"

"**Seek to know no more!**" they all chanted together, shaking the temple to its foundations. "**Begone!**"

The spectacled man looked at me. "I solved it, didn't I?"

I shrugged. "Probably. I never know which is which, they switch every time."

"Shouldn't I get something?"

"Tell the guy at the gift shop you solved it, he'll give you 50% off an 'I SOLVED THE RIDDLE OF THE IDOLS' t-shirt."

"That's it?"

"I mean, if it were me, once I'd identified the one on the right as Truth-Teller, I would have used my third question to ask him the meaning of life, or the cure for cancer, or something like that."

"But then how would I have known which of the two on the left was Liar and which was Random?"

"I guess you wouldn't have. But they switch every time anyway." I pointed to the door. "Gift shop in the back, you can't miss it, give them the discount code IDOL22 for our special deals."

---

I looked up from my crossword. Someone else was here to petition the three omniscient idols, one of which always tells the truth, one of which always lies, and one of which answers randomly. He was a middle-aged man in a nice suit.

"My question is for the center idol: what must I do to succeed in business?"

In a voice like the filling of great chasms, the center idol answered: "**Penguin monkey taco!**"

"Excuse me?" asked the petitioner. "What was that?"

"**Penguin monkey taco!**" said the center idol.

"Sorry," I said. "That must be the idol that always answers randomly. It's an Internet thing. Someone on the Internet said that 'penguin monkey taco' was the most random series of words, and now he keeps answering that."

"Oh, I thought 'answers randomly' meant he was supposed to choose randomly between true and false answers."

"I thought so too, sir. Honestly I think he's just trolling us sometimes."

"Are you sure he doesn't mean that I can succeed in business by selling penguin monkey tacos?"

"I'm sure, sir."

"How do you know?"

"Because ever since he started saying that, we tried opening up a penguin monkey taco stand next to the gift shop, and it's been horrendously unpopular. Do you have a third question for the idols?"

"Uh, this question is for the idol on the left. How do I succeed in business?"

"**Raise murder hornets and train them to attack any customer who sets foot on your premises,**" hissed the idol, in a voice that sounded the way sharp knives feel.

"Gift shop is in the back, penguin monkey taco stand is back and to the left, have a nice day, and thank you for visiting our idol temple."

---

"Hello, welcome to the temple of the three omniscient idols, one of which always tells the truth, one of which always lies, and one of which answers randomly. I know you already signed the release form, but I'm supposed to remind you that we are not legally responsible for any consequence of following the false idols' advice. Do you have a question?"

The petitioner was a very old woman.
"Yes, question for all three of you. What is the meaning of life?"

"**To help others,**" said the first idol, in a voice that was both singsong and deeper than any cave.

"**To find happiness,**" said the second, in a voice that promised hidden subtleties.

"**To carry on the species,**" said the third, in a voice like a felt-covered thunderclap.

"Thank y…" said the woman, but all three idols in unison interrupted her. "**Seek to know no more! Begone!**"

For the first time in days, I felt sorry for a petitioner. "You know I have no way of telling you which of them is telling the truth?"

"That's fine," she said. "I'm just happy to know there's any meaning at all." She walked out of the cyclopean door with a spring in her step.

---

"Hello," I said. "Welcome to the temple of the three omniscient idols, one of which always tells the truth, one of which always lies, and one of which answers randomly. How can I help you?"

The woman was in her mid-twenties, and wore a perpetual frown. "How many questions can I ask the idols?"

"Three."

"Why can't I ask more than three questions?"

"That's just the way the idols work."

Her frown deepened. "Wait a second, how do I know *you're* telling the truth?"

I sighed. "Ma'am, I'm an undergrad in comparative religion. This is my summer job. They pay me $8.55 an hour. Do you think I'm going to muster up the energy to give people a cryptic mixture of truth and lies for $8.55 an hour?"

She thought for a minute. "What would you say if I asked you what the idol on the left would say if I asked him whether you were a truth-teller?"

I rolled my eyes so hard I worried I was going to strain a muscle. Then, with sudden inspiration, I drew in as much breath as I could and shouted at the top of my lungs "SEEK TO KNOW NO MORE! BEGONE!"

The girl ran out of the temple.

"**Nice,**" said the center idol.

---

"Hello," I said. "Welcome to the temple of the three omniscient idols, one of which always tells the truth, one of which always lies, and one of which answers randomly. How can I help you?"

The petitioner was a middle-aged man in a black jacket. "I have a question for the center idol. What would the left idol say, if I asked it whether the right idol was Truth-teller?"

In a voice with all the weight of a great pyramid, the idol answered: "**It would say 'penguin monkey taco.'**"

"What?"

"**It would say penguin monkey taco. It's the idol that answers randomly, and sometimes it says 'penguin monkey taco' because it thinks those are especially random words, and this would be one of those times.**"

"Um, idol on the left, is that true?"

"**Penguin monkey taco,**" said the idol on the left.

"**I told you so,**" said the center idol.

"Okay, but then how…"

"**Seek to know no more!**" chanted all three idols in unison. "**Begone!**"

The man looked at me, pleadingly. "But my question was really good. I would totally have - I mean - how am I supposed to - "

"Look, go to the gift shop, tell them you solved the riddle, and they'll give you 50% off an 'I SOLVED THE RIDDLE OF THE IDOLS' t-shirt. Don't worry, nobody checks to see if you really solved it or not."

---

"Hello, welcome to the temple of the three omniscient idols, one of which always tells the truth, one of which always lies, and one of which answers randomly. I know you already signed the release form, but I'm supposed to remind you that Idol Temple LLC does not know which idol is which and cannot provide you with - "

The petitioner, a man with slick blond hair, cut me off. "Ha, no problem! I'm gonna ask each idol for next week's Powerball numbers, then buy three tickets."

Before I could respond, he shouted "Left idol! What are next week's winning Powerball numbers?"

"**3, 15, 26, 63, 65, and 16,**" said the left idol, in a voice like if a vampire bat could speak.

"Center idol, what are next week's winning Powerball numbers?"

"**8, 22, 24, 45, 50, and 55,**" said the center idol, in a voice like the crackling of Venusian lightning against thick cloud-banks.

"Right idol, what are next week's winning Powerball numbers?"

"**Any who disrespect the omniscient idols by misusing their knowledge for sordid financial gain will, after their death, be sent to the bottom-most layer of Hell, where venomous worms will gnaw at their organs from the inside forever, never to know rest or surcease from pain,**" said the right idol, in a monotone.

"*What?*" the man asked me, helplessly. "Is that true?"

"I dunno. Never heard any of them mention it before. Doesn't mean it's not true."

"But, like . . . was it the true idol or the false idol or - "

"You *did* sign the release form, right?"

"Okay, but - look, what would you do?"

I sighed. "Sir, I'm spending my summer at the temple of the three omniscient idols, one of which always tells the truth, one of which always lies, and one of which answers randomly, while all of my friends have cool FAANG internships. Because my guidance counselor told me that comparative religion was an easy A for people who couldn't make it in computer science. I make $8.55 per hour. Please don't ask me for financial advice."

"But can I - "

"All I'm supposed to tell you is that the gift shop is around the back, and the . . . sigh . . . penguin monkey taco stand is 30% off for the holiday weekend. Have a nice day."

---

"Hello," I said. "Welcome to the temple of the three omniscient idols, one of which always tells the truth, one of which always lies, and one of which answers randomly. How can I help you?"

An elderly man, leaning on his walker. "My son died last week. He was only forty. He had three little children, he'll never get to see them grow up. I want to ask God why he took my son away from me."

Oh *man*. "Look, I'm really sorry sir, these aren't that kind of god. We specialize more in annoying logic puzzles here. I think you should…"

He turned and faced the left idol, head on. "Why did you take my son?"

The idol's eyes glowed red, and it spoke in a voice like the sound frost makes coating a high window. "**You have heard it said that life is a dream within a dream. It is more than that: it is a dream within a drama within a game within an adventure within a dream. It is engrossing, it is addictive, it is the flow state to end all flow states - so much that those playing it, in the heat of the moment, forget there is anything else - but it is only part of the All. We must all move on to other parts eventually, and some graduate sooner than others. This is unfair to those left behind, until they too pass to realms where things like 'unfairness' seem small and insubstantial. My condolences to your family.**"

He turned to the center idol: "Why did you take my son?"

In singsong sighs, the center idol answered: "**You have heard it said:**

> ***If the red slayer thinks he slays
> Or if the slain thinks he is slain
> They know not well the subtle ways
> I keep and pass and turn again.***

"**Your son is not dead. You never had a son. You drew a line around a cloud of atoms and qualities and divine fire, and called it a son. Now each has dispersed in turn. In Baghdad, there is an oilman with a nitrogen atom in his thymus that was once in your son's parietal cortex. In Belmopan, there is an orphan who has your son's smile; in Bratislava, a businessman with your son's kind nature. In Bangkok lives a very holy monk who just had a thought that nobody but he and your son have ever thought before. Thus is it written:**

> ***He is made one with Nature: there is heard
> His voice in all her music, from the moan
> Of thunder, to the song of night's sweet bird;
> He is a presence to be felt and known
> In darkness and in light, from herb and stone,
> Spreading itself wherever that Power may move
> Which has withdrawn his being to its own;
> Which wields the world with never-wearied love,
> Sustains it from beneath, and kindles it above.***
>
> ***The splendors of the firmament of time
> May be eclipsed, but are extinguished not;
> Like stars to their appointed height they climb
> And death is a low mist which cannot blot
> The brightness it may veil. When lofty thought
> Lifts a young heart above its mortal lair,
> And love and life contend in it for what
> Shall be its earthly doom, the dead live there
> And move like winds of light on dark and stormy air.***

The old man didn't answer, just turned to the last idol, and asked: "Why did you take my son?"

In a voice like rice falling through aluminum tubes, the idol on the right said: "**We are omniscient but not omnipotent. We are forbidden to reveal whether true omnipotence is possible, but we can say at least that, whether or not there be a Judge, there is no justice, not within the tentpoles of Time. Your son's loss is unjustifiable, and there is nothing I can say that will make you happy. But that is fine: being happy is not your job, and you shirk no duty by failing at it. Your only duty now is to console your daughter-in-law and spoil your grandchildren. Do this, and you will have the blessing of the only gods mortals are permitted to know.**"

"But…" said the old man. "But will I see him again, someday?"

"**Seek to know no more!**" chanted all three idols in unison. "**Begone!**"

When the old man had left, I turned to the idols. "Thanks," I said. "That was . . . a good thing you did for him."

"**You're welcome.**"

"**The fact that I always lie necessarily implies that I'm a monster.**"

"**Penguin monkey taco.**"

---

I checked the clock. It was only another hour before I was off my shift at the temple of the three omniscient idols, one of which always tells the truth, one of which always lies, and one of which answers randomly.

A petitioner came in. She was wearing a tweed coat and had a bit of a smirk. "My first question is for the idol on the left. Will your answer to this question be 'no'?"

"**Yes,**" said the idol, in a voice like the glittering of sunbeams off of diamonds.

"Then you must be Liar or Random. Same question to the center idol - will your answer to this question be 'no'?"

"**Penguin monkey taco,**" said the center idol.

"That makes you Random, which means the idol on the left must have been Liar. My last question is for the idol on the right: will your answer to this question be 'no'?"

"**Penguin monkey taco,**" said the idol on the right.

"Wait, what? How can - "

"**Seek to know no more!**" chanted all three idols in unison. "**Begone!**"

"No!" she shouted. "Come on! I tricked you! I forced you to betray your nature!"

The idols were silent. I sighed.
"Go to the gift shop, tell them you trapped the idols with a clever paradox, and they'll give you 50% off an 'I TRAPPED THE IDOLS WITH A CLEVER PARADOX' t-shirt. Don't worry, nobody checks to see if they were actually trapped."

"But I really did trap them!"

"That's the spirit. Sorry, we need you to leave to make space for the next petitioner."

---

It was a few minutes before the end of my shift at the temple of the three omniscient idols, one of which always tells the truth, one of which always lies, and one of which answers randomly. A petitioner walked in. She was about my age, tall, oddly cute in a sort of ethereal, distracted way.

"My question is for the left idol," she said, kind of nervously, taking out a notebook and checking something off. "My question is: what's going on? Why are there three idols, one of which always tells the truth, one of which always lies, and one of which answers randomly?"

The idol spoke, in a voice like the flapping of great wings: "**Long ago the God of Knowledge saw the ignorance of Man and grew sorrowful. They asked the God of Power for permission to grant your people advisors, who could lead you upon the right path. But the God of Power was charged with protecting the world from divine meddling. They denied the request, and bound the God of Knowledge with an oath, that they must never give Mankind any sort of advisor who would convey important information. The God of Knowledge thought about this oath for many eons, and decided to create us. They bent probability around this spot, so that no matter what people asked, we would never directly communicate useful advice.**"

"This question is for the center idol," she said. "If the God of Knowledge knew that the advisors would be useless, why did he create them at all?"

The idol spoke, in a voice like a Tuvan throat-song interbred with a Gregorian chant, and said: "**A woman asked us the meaning of life. We three idols gave her three answers, none of which she knew for sure was true. Yet she left happy, because she knew there was a meaning. In the same way, the God of Knowledge sent us as a message. They could not tell humans the secrets of the universe, but they could tell humans that there** ***were*** **secrets, and that the secrets could be known. Our very existence drops certain hints: that the most profound truths lie at the end of paths begun by certain seemingly trivial riddles. Or that studying mathematical logic in particular might have unexpectedly high payoff.**"

The girl wrote all of this down in her notebook. Then she asked the right idol: "Knowing all of this, I guess I just have, uh, a totally open-ended question for you. Um. What should I do now?"

In a voice like stained-glass windows shattering, the idol answered: "**You should remind the Keeper Of The Idols that he has not used his own three questions yet. He should try it. Maybe he would learn something.**"

She noticed me, sort of for the first time. "Uh," she said, "are you the keeper of the idols?"

"Yeah," I said.

"Wow. How do you get that job?"

"Be the only person in your Comparative Religion class poor enough to need the money and dumb enough not to have a better gig lined up."

"Oh," she said. "Well, I still think it's . . . really cool!"

"Yeah," I said. "I guess."

"Are you going to use your three questions?"

"I guess I have to."

"Can I watch?"

"I don't think you're supposed to. I can watch because I'm the Keeper. Otherwise I think it's just supposed to be one petitioner at a time."

"Can you let me know what they say?"

"Sure, I'll tell the gift shop guy, he's always around, you can ask him next time you swing by."

---

It was closing time at the temple of the three omniscient idols, one of which always tells the truth, one of which always lies, and one of which answers randomly. I tidied up, filled in my time sheet, and prepared to go home.

"Okay, fine," I said. "My question is for the idol on the left. I was told I should ask you three questions, and I would learn something interesting. What will I learn?"

The left idol spoke with a voice like daggers made of ice plunging into a wall of fire: "**Your shoelace is untied.**"

I looked down at my shoes. They were both tied perfectly. "Thanks, Liar. My next question is for the idol in the center. I was told I should ask you three questions, and I would learn something interesting. What will I learn?"

The center idol spoke with a voice like the whistling of whippoorwills on willows in winter: "**Penguin monkey taco.**"

"Thanks, Random. I guess that leaves you, Truth-Teller." I turned to the idol on the right. "I was told I should ask you three questions, and I would learn something interesting. What will I learn?"

The last idol spoke with a voice of absolute rightness, like all other sound had been only flawed first drafts of its voice: "**By the ancient oath sworn by the God of Knowledge, I am forbidden to give you knowledge directly. I can only tell you that there is something worth knowing.**"

"All right. Thanks, Truth-Teller." I put on my coat and clocked out.

It was dark outside. I paused at the threshold of the great cyclopean door.

What was worth knowing? It couldn't be true that the idols were forbidden to reveal any information at all. For example, I now knew the meaning of life was one of three things (I also knew, somehow, that I wouldn't tell anyone). The idols couldn't change history. But they could push certain people in the right directions. As long as nobody could be really sure of anything.

Heck, they had revealed - something - about the workings of the gods. Even granting that any individual response of theirs could be false, it sure seemed like they were giving different slices of some sort of consistent story. There might be a God of Power and a God of Knowledge. And they used gender neutral pronouns, unless that was an affectation. Didn't sound like any religion I had ever heard of, and I'd heard of a lot.

Maybe that was what I'd been missing. I'd thought of Comparative Religion as an easy A, something to do when I couldn't get the FAANG internships all of my friends were winning. Maybe the idols were telling me to take myself more seriously. Maybe there was something there, some signal in all of the noise. I imagined the sort of entity who would create omniscient gods beyond my comprehension just to send humanity the tiniest ghost of a message, and all my concerns about making less money than the Comp Sci students started to feel very small.

Maybe Comparative Religion *was* the field for me. Maybe I should stop feeling so smugly detached from everything and actually study. Was that the message? "Stop being such a loser, do something useful with your life"?

If I was being honest with myself, part of the reason I hated this job so much was being in the presence of living gods. Them: omniscient, knowing everything that ever was, is, or shall be. Me: barely scraping by a B- in a major I'd been promised was an "easy A". Them: dwelling in a cyclopean stone temple which tourists came from all over the world to see. Me: dwelling in a one-room apartment, eating ramen at night. Them: beloved by some gods, feared by others. Me: three years and counting since my last girlfriend, starting to worry I was doomed to -

*All I can tell you is there is something worth knowing.*

"Gah!" I shouted, and slapped myself. Then I ran out the door.

*Sure, I'll tell the gift shop guy. You can ask him next time you swing by.*

I was such an idiot.

"Wait!" I yelled, just before she made it out of the door of the temple complex.

She stopped.

"I did it. I talked to the idols. All they told me was that there was something worth knowing, but they couldn't tell me what it was."

"Huh," she said. "Yeah, that checks out. What are you going to do about it?"

"I'm not sure, but I'm going to try to figure it out."

"That makes sense. Please, let me know if there's any way I can help."

"Sure. Can I have your number?"
# Who Gets Self-Determination?

**I.**

LSE: [Fact-Checking The Kremlin's Version Of Russian History](https://blogs.lse.ac.uk/lseih/2020/07/01/there-is-no-ukraine-fact-checking-the-kremlins-version-of-ukrainian-history/):

> The notion that Ukraine is not a country in its own right, but a historical part of Russia, appears to be deeply ingrained in the minds of many in the Russian leadership. Already long before the Ukraine crisis, at an April 2008 NATO summit in Bucharest, Vladimir Putin reportedly claimed that "[Ukraine is not even a state!](http://www.kommersant.ru/doc/877224) What is Ukraine? A part of its territory is [in] Eastern Europe, but a[nother] part, a considerable one, was a gift from us!" In his March 18, 2014 speech marking the annexation of Crimea, [Putin declared](http://eng.kremlin.ru/news/6889) that Russians and Ukrainians "are one people. Kiev is the mother of Russian cities. Ancient Rus' is our common source and we cannot live without each other." Since then, Putin has repeated similar claims on many occasions. As recently as February 2020, he once again [stated](https://www.youtube.com/watch?v=NG6dxqwxGE4) in an interview that Ukrainians and Russians "are one and the same people", and he insinuated that Ukrainian national identity had emerged as a product of foreign interference. Similarly, Russia's then-Prime Minister Dmitry [Medvedev told](https://themoscowtimes.com/articles/russian-prime-minister-ukraine-has-no-industry-or-state-52385) a perplexed apparatchik in April 2016 that there has been "no state" in Ukraine, neither before nor after the 2014 crisis.

The article is from 2020, but the same discussion is continuing; see eg the *New York Times'* recent [Putin Calls Ukrainian Statehood A Fiction. History Suggests Otherwise](https://www.nytimes.com/2022/02/21/world/europe/putin-ukraine.html).

I'm especially grateful to the Russian nationalist / far-right blogosphere for putting the case for Ukraine's non-statehood in terms that I can understand:

I will be calling this position "Meierism-Putinism". See also [this comment](https://www.unz.com/akarlin/open-thread-182-russia-ukraine/#comment-5250460) by a reader of Karlin's blog:

> What exactly makes the Ukraine a nation? To just about everyone outside of Ukraine itself, no one can figure out what distinguishes Ukrainians from Russians. I'm not a Slavic language speaker, but I frequently hear about Ukrainian simply being a dialect of Russian or at least mutually intelligible. It should also be pointed out that English-language transliterations of Ukrainian words consistently look much worse than their Russian equivalents, and this is now ruining maps all over the world. Just from the standpoint of not wanting to ever see the cringe term "Kyiv" again one should avoid supporting the Ukrainians.
>
> Now, it's true that any LARP sustained long enough eventually becomes real. The Netherlands for instance was once German, and there's even a parallel there with how Dutch consistently looks and sounds worse than German. So an independent Ukraine could, over time, become a real country. But to what end? Do we really need another mediocre Slavic country? It reminds me of Latin America, where you have dozens of barely distinguishable nonentity countries serving no real purpose. The entire region should be consolidated into maybe five states at most. Russia, Poland, and Serbia are the only Slavic states needed by the world.
>
> The most "natural" way to organize states is around nationality, especially since the rise of mass communication. Where a state departs from this, it should be to realize some kind of interesting, cool, and distinct concept. Switzerland for instance is a confederation made up of pieces of three other nations, but the Swiss have created a highly interesting and distinct polity based on extreme decentralization, direct democracy, neutrality, and universal militia. For Switzerland to disappear would impoverish the world. But what is the objective in Ukraine? It is to become just another gay western democracy.
>
> AP has the take that Visegrad shows the way. Integrating with the West to enjoy its security guarantees and material benefits, but developing your own civilization instead of destroying it. Press X for doubt. Viktor Orban might go down in the next election, and Polish conservatives appear to be doubling down on all of the dumbest mistakes of American Republicans.
>
> So at the end of the day the Ukraine is fighting for the right to be objectively wrong, whereas Russia *might* be fighting to (re)establish a distinct civilizational space.

I appreciate hearing ideas I never would have thought of myself, and I never *ever* would have thought of this. I like how it simultaneously avoids starry-eyed "all people must be free" romanticism, *and* hard-headed "the strong do what they will, the weak suffer what they must" realpolitik, in favor of the vibe of some guy from a private equity firm trying to cut operating expenses: "Did anyone here notice that we have 195 countries, some duplicating each other's portfolios? Do we really need both a Netherlands *and* a Belgium? And why do we still have an Egypt? People haven't wanted Egypts for two thousand years!"

But the Ukrainian and Western response to all this has been to accept the paradigm, but argue that no, Ukraine *does* belong in *Civilization* games. For example, the LSE article says:

> The territories of Ukraine remained a part of the Russian state for the next 120 years. Russia's imperial authorities systematically persecuted expressions of Ukrainian culture and made continuous attempts to suppress the Ukrainian language. In spite of this, a distinct Ukrainian national consciousness emerged and consolidated in the course of the 19th century, particularly among the elites and intelligentsia, who made various efforts to further cultivate the Ukrainian language. When the Russian Empire collapsed in the aftermath of the revolutions of 1917, the Ukrainians declared a state of their own. After several years of warfare and quasi-independence, however, Ukraine was once again partitioned between the nascent Soviet Union and newly independent Poland. From the early 1930s onwards, nationalist sentiments were rigorously suppressed in the Soviet parts of Ukraine, but they remained latent and gained further traction through the traumatic experience of the 'Holodomor', a disastrous famine brought about by Joseph Stalin's agricultural policies in 1932-33 that killed between three and five million Ukrainians. Armed revolts against Soviet rule were staged during and after World War II and were centred on the western regions of Ukraine that had been annexed from Poland in 1939-40. It was only with the collapse of the Soviet Union in 1991 that Ukraine gained lasting independent statehood of its own – but Ukrainian *de facto* political entities struggling for their autonomy or independence had existed long before that.
Vox has a whole Voxsplainer about how ["Vladimir Putin says Ukraine isn't a country. Yale historian Timothy Snyder explains why he's wrong"](https://www.vox.com/22950915/ukraine-history-timothy-snyder-today-explained), which is definitely the Vox-iest possible response to a deadly global conflict:

> Ukrainian history goes way back before 1918. I mean, there are medieval events which flow into it, early modern events that flow into it. There was a national movement in the 19th century. All of that is, going back to your earlier question, all that falls into completely normal European parameters.

I find all of this unsatisfying. It's like we're debating whether a certain region has enough history and culture to "deserve" independence. But any such debate is inherently subjective. Does Texas qualify? Kurdistan? Scotland? Palestine? How should we know?

**II.**

As best I can tell, international law on this question centers around a UN-backed [covenant](https://en.wikipedia.org/wiki/International_Covenant_on_Civil_and_Political_Rights) which says that "all peoples have the right to self-determination". So are Texans/Kurds/Scots/Palestinians a "people"? International law makes no effort to answer this question. Presumably Volodymyr Zelenskyy thinks Ukrainians count as a people, and Vladimir Putin isn't so sure.

An International Court Of Justice judge, ruling on Kosovo, said:

> [The definition of a "people"] is a point which has admittedly been defying international legal doctrine to date. In the context of the present subject-matter, it has been pointed out, for example, that terms such as "Kosovo population", "people of Kosovo", "all people in Kosovo", "all inhabitants in Kosovo", appear indistinctly in Security Council resolution 1244 (1999) itself. There is in fact no terminological precision as to what constitutes a "people" in international law, despite the large experience on the matter. What is clear to me is that, for its configuration, there is conjugation of factors, of an objective as well as subjective character, such as traditions and culture, ethnicity, historical ties and heritage, language, religion, sense of identity or kinship, the will to constitute a people; these are all factual, not legal, elements, which usually overlap each other.

So we sort of have a judge informally giving nine criteria for peoplehood. But the USA only satisfies four, and my group house satisfies five. So it probably needs some work.

Other sources have defined "a people" based on exclusion from existing political structures. So since Texans have all the normal rights in the US, they're not a separate people. But since Palestinians don't have all the normal rights in Israel, they are. But this suggests that if Putin invaded eg Finland, and then granted the Finns whatever the normal rights are in Russia, Finns would stop being a people.

(maybe this is predicated on the idea that a truly separate people, if given rights by a conqueror, would come up with some way to secede. But is this true? The "normal rights" in Russia are already very limited; if Putin oppresses everyone equally, and doesn't single out Finns, then by these definitions he's in the clear.)

Realistically "people" (like "[obscenity](https://en.wikipedia.org/wiki/I_know_it_when_I_see_it)" and [everything else](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)) are a kind of know-it-when-you-see-it combination of all these factors. I hate this. It means any would-be conqueror can say "come on, this place I want to conquer isn't a *real* 'people'" - and then you need to litigate annoying questions about exactly how glorious a history they had, and which version of *Civilization* they appeared in, in order to prove him wrong.

**III.**

Consider an alternative: *everyone* has the right to self-determination. If Ukraine prefers not to be part of Russia, they don't have to be. We don't have to consult the history books to determine whether or not their desire to maintain independence is valid.

This matches my intuitive ethical conception of self-determination. Suppose Putin's historians found an old document in a file cabinet somewhere proving beyond a shadow of a doubt that Ukraine's culture and history were not very glorious. My opinions about the moral status of this war would remain unchanged. Nothing I could learn about the Ukrainian language, religion, sense of kinship, ethnicity, or any of the other things that the judge in the Kosovo case mentioned, would make me feel good about Ukraine getting conquered by Russia.

This feels so trivially true that it's easy to miss how many big problems there are.

Does my street (population: ~100) have the right to declare independence from the USA? If not, then street-sized entities apparently don't have the right to self-determination. Why not?

(maybe because although it has the moral right to do so, in practice this would be so annoying and unmanageable that we round this off to 'no'? maybe the 'transaction costs' of facilitating my street's independence are higher than the moral benefit? [This paper](https://www.jstor.org/stable/3751662?seq=1) makes some good points about how in order to have the right to secede, a group needs someone speaking for it who can credibly invoke this right. My street doesn't have this - although any city with a mayor or city council does!)

Suppose dozens of US cities declared independence. The result would be lots of isolated enclaves with tiny markets and no ability to defend themselves. Those cities might wish that there was some pact keeping them together. So maybe since we already have such a pact (the general agreement that small regions can't secede) we should stick to it.

(but if cities genuinely regret declaring independence, they can just rejoin. And even that argument assumes cities are irrational and would declare independence when it wasn't in their best interests. Why not just let cities do what they think best? Maybe they would even come up with some win-win solution, like independence plus EU-style union)

If my neighborhood declared independence from the US, China could offer to make us all multi-millionaires in exchange for hosting a military base on our territory. Doesn't the US have the right to try to stop that?

(but doesn't that imply that Putin has the right to invade Ukraine if he doesn't like NATO on his borders? And China hasn't tried putting a base in the Bahamas, probably because the US has soft power and threat-based ways of making sure that doesn't happen. Wouldn't it be fairer to make the US use soft power and threat-based ways of controlling my neighborhood, instead of outright annexation?)

And all these problems still exist in the current "peoples" paradigm. The Navajo are a "separate people" from other Americans by any definition, so under international law they have the right of self-determination. Why don't they secede? I assume some combination of small size, economic self-interest, and US soft power/threats. So it turns out we're fine at giving small populations the right to self-determination most of the time.

None of these big problems are the *enormous* problem, which is that international law isn't really enforced, and existing countries have no incentive to change a rule which favors them, so this will definitely never happen. It's almost a category error to even talk about it, as if there were some International Congress that made International Laws that the International Police would enforce.

Still, I think it's useful to have an opinion on this. My opinion is that I'm in favor of the right of self-determination for any region big enough that it's not inherently ridiculous for them to be their own country. I don't care if they have their own language or ethnicity or glorious history; I will vote 'yes' before I even hear about any of those things. That means I don't have to care about Putin's argument for why he should get to have Ukraine.

**IV.**

But if you believe this, shouldn't Russia get Crimea?

I'm nervous asserting that Crimea wants/wanted to join Russia. Russia put a lot of propaganda effort into making it look that way. The [Crimean referendum](https://en.wikipedia.org/wiki/2014_Crimean_status_referendum) (which did vote for the annexation) was held at gunpoint and produced implausibly enthusiastic results (96% in favor). What about credible third-party assessments? As always, the exact percentages can change depending on what day you ask, and what wording you use, and what the other options are. But here's [an essay](https://www.quora.com/Does-Crimea-want-to-become-part-of-Russia) suggesting that most likely it does support annexation by a pretty big margin, and has done so for a long time. The area is 58% Russian ethnicity, mostly Russian-language-speaking, etc, so I find this plausible. If someone who knows more than me says it's all propaganda, I might believe them. But my best guess right now is that 2014 Crimea probably did want to join Russia.

Should it have been allowed to do so? Again, I have trouble thinking of an ethical principle that says a group of people who really want to be part of Country A should in fact have to be part of Country B instead. I can disagree with Russia's decision to force the matter with an invasion, and I can excuse Ukraine for not worrying about it too much. But overall I think I'm stuck consistently applying the principle "please let regions leave your country if they want".

(is it meaningful that Crimea wanted to join Russia rather than become independent? I think no; if you agree they have a right to become independent, then they could become independent and then immediately join Russia; everyone agrees independent countries have the right to join other countries if they want)

The only way out of this conclusion is to double down on the "peoples" claim: Crimea isn't distinct enough from the rest of Ukraine to be a separate "people", so it shouldn't be allowed to control its own destiny, so the historical accident that it ended up with Ukraine rather than Russia is sacrosanct. I think this is a weird reason to deny people the right to self-determination.

"Maybe Crimea should belong to Russia" is a pretty spicy take to come out of an attempt to argue against Putin's concept of nationalism. But it's just the result of applying the same principle consistently.
**V.**

Somebody's going to ask "but what about the Confederacy?"

The position that most tempts me is "The Confederacy had every right to secede, because every region that wants to secede has that right - but immediately upon granting them independence, the Union should have invaded in order to stop the atrocity of slavery". I say it *tempts* rather than *convinces* because it suggests a moral duty to conquer any country doing sufficiently bad things (should the Union have invaded Brazil too, for the same reason?) I'm still not sure how I feel about this.

Assuming we're against invading foreign countries on principle, a utilitarian might refuse to let the Confederacy leave in the hope of preventing the establishment of a permanent slave power. But I would still think of that as one of those rights violations which utilitarians occasionally allow for the greater good.

In any case, I don't think the answer to this question depends on whether Southerners qualify as a "different people" from Northerners, and I'm not sure the answer to *any* question should depend on that.
# Information Markets, Decision Markets, Attention Markets, Action Markets

###### [thumbnail image credit: excellent nature photographer [Eco Suparman](https://www.dailymail.co.uk/news/article-2128668/Amazing-nature-Like-mantis-needs-bicycle-Incredible-close-shot-praying-mantis-Borneo-makes-viewers-double-take.html), which is a great name for an excellent nature photographer!]

## Information Markets

Niels Bohr supposedly said that "prediction is very difficult, especially about the future". So why not predict the past and present instead?

Here's a recent market on Manifold (click image for link). Taylor Hawkins is a famous drummer who died last weekend under unclear circumstances. This market asks if he died of drug-related causes. Presumably someone will do an autopsy or investigation soon, and Chris will resolve the market based on that information. This is a totally standard prediction market, except that it's technically about interpreting past events.

Same idea, only more tenuous. We know someone will do an autopsy on Taylor Hawkins soon, and we probably trust it. But how do we figure out whether COVID originated in a lab? This question's hack is to ask whether two public health agencies will claim it. If we trust the public health agencies, we can turn this mysterious past event into a forecasting question.

But this is a strong ask. Even if we don't *specifically* distrust the agencies, this question is a combination of "did COVID originate in a lab?" and "how likely are public health agencies to claim this?". I expect the question would have a different prediction if it asked about "one public health agency" or "five public health agencies" or "China's public health agency" or "the public health agency during a hypothetical second Trump administration" or "before the end of 2030". All of that means we can't interpret the prediction literally as being about whether COVID originated in a lab.

What will I (Scott Alexander) rate as most promising when I do a deep dive into the research on pregnancy interventions? Here you don't have to trust public health agencies, you just have to trust *me*. Most people betting on this read my blog, which means they probably trust me at least a little. And I have no reason to lie about pregnancy interventions. So here the trust might actually be a fair assumption.

(though we still need an assumption that the research literature itself is correct and complete - this market substitutes for me doing the deep dive, not for scientists doing studies)

The first problem: I've gotten most of the way through this research, and the market is wrong. I'm not going to comment on whether the exact top guess is right or wrong, since that would interfere with people's betting. But *in general*, some interventions I will place near the bottom are near the top, or vice versa.

In retrospect this isn't surprising: most people on this market are playing for a 2-3 digit sum of play money. That's not going to incentivize anyone to do 20 hours of research, or pay for an expert consultation, or anything else that would help them really understand this. They're just pattern-matching to stuff they kind of heard, and maybe doing a few minutes of research to fill in gaps. I believe if this was a market for 5 or 6 digits of real money, it would go a little better, but so far that's [beyond us](https://astralcodexten.substack.com/p/the-passage-of-polymarket?s=w).
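To put rough numbers on that incentive problem, here is a minimal back-of-the-envelope sketch. All the figures are invented, and it ignores the price impact of large trades, which would eat into the bigger stake:

```python
# Back-of-the-envelope for the incentive problem above: how much can a
# sharp trader earn by correcting a mispriced binary market, and does it
# pay for the research needed to find the mispricing? A YES share pays $1
# on a YES resolution. All numbers are invented; price impact is ignored.

def expected_profit(stake: float, price: float, true_prob: float) -> float:
    """Expected profit from putting `stake` into YES shares at `price`,
    when the true probability of YES is `true_prob`."""
    shares = stake / price
    return shares * true_prob - stake

RESEARCH_HOURS = 20
HOURLY_RATE = 50  # what the trader's time is worth (assumption)

for stake in (100, 100_000):  # play-money scale vs. 6-digit real-money scale
    profit = expected_profit(stake, price=0.30, true_prob=0.50)
    worth_it = profit > RESEARCH_HOURS * HOURLY_RATE
    print(f"stake ${stake:,}: expected profit ${profit:,.0f}, "
          f"covers 20h of research: {worth_it}")
```

At a $100 stake, even a large mispricing is worth about $67 in expectation, nowhere near paying for serious research; at $100,000 the same edge is worth tens of thousands, which is the scaling argument in the paragraph above.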
(when I was a poor medical student, Zvi Mowshowitz’s company ran a contest for the best literature review of mineral supplementation with a $5000 prize. I put in 20 hours of research and won. If I’m typical, this is proof of concept that big enough bounties can incentivize poor medical students to do good lit reviews) The second problem: even if markets like these always worked, I would still have to do the literature review (to resolve the market). So this isn’t actually saving me any work! The agreed-on solution for this is lots of conditional prediction markets. “If Scott did a review on pregnancy interventions, what would he find?” — “If Scott did a review on cholesterol medications, what would he find?” — “If Scott did a review on…”. Then I do *one* review, chosen randomly, resolve that market, and give everyone else their money back. I’ve learned about lots of topics for the price of doing research on one. The only case I know of people already using this technique is [replication markets](https://www.replicationmarkets.com/), where people bet on which studies will replicate. You might, for example, set up markets on 100 studies, then actually conduct 10. The people who bet on those studies gain or lose money, the people who bet on the other 90 get their money refunded. These [seem to work pretty well](https://www.pnas.org/doi/10.1073/pnas.1516179112), though I’ve heard some claims and counter-claims about exactly how. I once joked that instead of lower courts, we should have prediction markets on what the Supreme Court would think of a given case. Same idea, higher stakes. ## Decision Markets A very slight twist on information markets, eg: Austin, a co-founder of Manifold Markets (formerly Mantic Markets) asks the market what he’ll decide on this technical question. This does two things: First, it encourages everyone who has a point for or against to try their best to convince him of it. If I know a knock-down incontrovertible reason for why dynamic parimutuel systems can never work, I can put all my money on “NO” and then tell Austin about it. Even if I only know a relatively weak consideration that I think other people have missed, I can short it a *little* and then tell Austin my weak consideration. Second, it’s probably a pretty good guide to Austin whether he should actually use the system or not. Suppose you saw a bunch of the top experts in betting systems move the market to 99%. Seems like a strong argument. But this has the same two problems as the information markets above. The first problem: it only works if you trust Austin. The good news here is that Austin made this market, to advise Austin in making the decision, and presumably Austin trusts Austin. The bad news is that if everyone knows Austin is dumb and stubborn, they might bet on “YES” even if they know really good reasons to switch away from the parimutuel system. So at best this can just save Austin time; it can’t overcome his inherent limitations as an evaluator of evidence (unless investors don’t know about his inherent limitations, in which case I guess it can). The second problem is: it can’t actually save him time. Austin still has to make the decision in order to resolve the markets. At best it can leverage his time, using the same method we discussed with information markets above. (In theory, if Austin didn’t resolve the market, and everyone knew he wouldn’t, but there wasn’t common knowledge of this, it *might* end up as some sort of Keynesian beauty contest with the right answer as the Schelling point. 
## Attention Markets

Here’s a different take on the same idea:

Unlike Austin, Kevin is *not* a co-founder or employee at Manifold. He’s just some guy who looked at their betting system and thought it had a flaw. Suppose the developers are really busy and don’t have time to listen to everybody’s complaints about their betting system. How can Kevin get their attention?

In this market, he asks: if the developers put in enough work to understand his complicated technical objection, would they agree it was valid? Investors agreed that they would; he (presumably) showed the Manifold team this prediction market saying there was a really high chance that, if they investigated this flaw, they would be concerned about it; they investigated the flaw; and they agreed they were concerned and would work on changing it. The market resolved to “yes”.

The problem this addresses is common: ordinary people want decision-makers to take the time to understand their arguments that the decision-makers are doing something wrong. But the decision-makers don’t want to read through a hundred rants by people who mostly don’t know what they’re talking about. Think of eg physics crackpots sending their manifestos to Scott Aaronson. After a while, Aaronson gets tired of reading every manifesto in detail. If there were a prediction market for whether Aaronson would agree a manifesto contained a revolutionary insight, Aaronson could read only the ones that scored above some bar.

Problem: don’t the crackpots send their manifestos to Aaronson because he is one of the very rare people who know enough physics to recognize a fellow genius? And wouldn’t the investors on the market either be non-geniuses (in which case they’re not qualified to judge) or geniuses (in which case their time is as valuable as Aaronson’s and we’re not gaining anything by delegating from him to them)? I think that lots of non-geniuses can tell crackpot manifestos that *might* be true from ones that *have no chance* of being true, and all we need is some kind of probabilistic signal here. [Here](https://astralcodexten.substack.com/p/open-thread-212?s=w)’s an example of me using this strategy without requiring anyone to be a genius:

…but I still haven’t gotten any takers.

The biggest risk is that the decision-maker won’t be harsh and honest enough to admit when things aren’t worth their attention. Suppose I started an attention market for myself, and it escalated an appeal from a charity to help Third World orphans. I throw these out unread all the time when I get them in the mail, which seems like a strong signal that they’re not worth my time. But do I really want to publicly say the equivalent of “Stop wasting my time with this garbage” in a way that financially penalizes the people who bet I would want to see it?

## Action Markets

Everyone’s favorite question about prediction markets: don’t they incentivize you to assassinate people?

That is, suppose there’s a market on whether Joe Biden will finish his term. Maybe it’s at 95% right now. If you bet “no” and assassinated him, you could 20x your money. (you could also increase your money by 1.05x by doing a really good job protecting him, but that sounds harder and less lucrative)
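Those multipliers fall straight out of the price, at least in an idealized binary market where winning shares redeem for $1 and we ignore fees and slippage (a simplification, not any real site’s payout rule):

```python
def payout_multiplier(price: float, side: str) -> float:
    """Gross return on a winning bet in an idealized binary market.

    `price` is the market's probability of YES; winning shares pay $1.
    """
    if side == "yes":
        return 1 / price        # buy YES at $0.95, redeem at $1
    return 1 / (1 - price)      # buy NO at $0.05, redeem at $1

print(payout_multiplier(0.95, "no"))   # ~20x: the assassin's return
print(payout_multiplier(0.95, "yes"))  # ~1.05x: the bodyguard's return
```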
The answer is: yes, this could happen. But we should expect it to be very rare! Consider: couldn’t you make a lot of money *right now* by shorting Tesla and then assassinating Elon Musk? Or by shorting Boeing and bombing a plane? Or by going long on train companies, and bombing a plane? Or by going long on diamonds, and then bombing a diamond mine? Or by shorting Bitcoin, and lobbying for more punitive crypto regulations? *Every* investment is also an action market!

In general, we control this tendency through normal criminal laws. People don’t assassinate Elon Musk because then they’d be investigated for murder. Even if they managed to avoid leaving any fingerprints or whatever, police would probably still go after the guy who put all of his money into Tesla shorts the day before.

Still, as long as everyone knows what’s going on and no laws are being broken, fine, let’s have action markets:

This is my local rationalist group house. They’re betting on whether someone will clean up the (currently disastrous) backyard. They’ve said pretty openly that they’re hoping someone will buy a lot of “yes”, take care of the backyard project, and then take all their money.

You can think of this as a weird bounty system. But it’s strictly worse than a regular bounty system. For one thing, if I suspect someone else will take care of the backyard, then just by registering that prediction I can share the winnings with them, even though I didn’t do any work (since this is at 70% instead of 100%, someone must be doing something like this). For another thing, it incentivizes people to bet “no” and then sabotage cleanup projects. I don’t think there’s any site as convenient as Manifold that lets people set perfectly normal bounties on things (should there be?), but a prediction market clearly isn’t the ideal tool here.

Is there any case where an action market might be better than a bounty? Imagine I have a painful disease. Every year, there’s a 25% chance it goes away on its own, but I’d like to speed up the process. Lots of doctors want to try very expensive treatments on me, but I’m worried some of them are quacks. If I hire one of the doctors and hope for the best, maybe he’s a quack and I’ve wasted my money. If I hire one of the doctors, but make payment conditional on me being cured afterwards, there’s a 25% chance I’ll recover on my own and pay the doctor even though they were a quack. If I hire a doctor, make payment conditional on me being cured, and make *him* pay *me* if I’m not cured, then a real doctor whose treatments work less than 100% of the time might decide to pass. I think you could solve this problem by subsidizing a prediction market at a 25% chance of recovery, and letting doctors bet on other odds. This would still have some problems - if someone saw a famous doctor betting, they could bet too, and “steal” some of the “winnings” from the doctor - but it seems to be kind of the right idea.

(could a doctor short the market, then kill you? Yes, but you could just not hire any doctor you saw shorting the market)

On this [List Of Fictional Cryptocurrencies](https://astralcodexten.substack.com/p/list-of-fictional-cryptocurrencies?s=w), I wrote about ConTracked:

> **ConTracked:** A proposed replacement for government contracting. For example, the state might issue a billion ConTracked tokens which have a base value of zero *unless* a [decentralized court](https://kleros.io/) agrees that a bridge meeting certain specifications has been built over a certain river, in which case their value goes to $1 each. The state auctions its tokens to the highest bidder, presumably a bridge-building company. If the company builds the bridge, their tokens are worth $1 billion and they probably make a nice profit; if not, they might resell the tokens (at a heavily discounted price) to some other bridge-building company. If nobody builds the bridge, the government makes a tidy profit off the token sale and tries again. The goal is that instead of the government having to decide on a contractor (and probably get ripped off), it can let the market decide and put the risk entirely on the buyer.
>
> Banned because Wall Street developed a financial instrument that let them short ConTrackeds, then tried really hard to prevent bridges from being built.
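The payout rule in that quote is simple enough to sketch. A toy model, where the class name and the single-oracle design are my own illustration rather than anything from the original post:

```python
class ConTracked:
    """Toy ConTracked token: worth $0 until a court certifies delivery."""

    def __init__(self, supply: int, face_value: float = 1.0):
        self.supply = supply            # e.g. one billion tokens
        self.face_value = face_value    # $1 each once the bridge exists
        self.delivered = False

    def record_court_ruling(self, bridge_meets_spec: bool) -> None:
        # The decentralized court is the sole oracle; a "yes" is permanent.
        self.delivered = self.delivered or bridge_meets_spec

    def token_value(self) -> float:
        return self.face_value if self.delivered else 0.0

issue = ConTracked(supply=1_000_000_000)
print(issue.token_value())          # 0.0 -- no certified bridge yet
issue.record_court_ruling(True)
print(issue.token_value())          # 1.0 -- tokens redeem at face value
```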
Commenters brought up that you get weird incentives as soon as more than one person owns ConTrackeds, which I agree with. I’m still not sure there are clear real-world uses for action markets. But they’re still fun to think about.

In particular, “every prediction market is also an action market, and vice versa” is a useful law to keep in mind. Different prediction markets “leak” into being action markets at different rates: one about when a distant star will supernova is 100% prediction; one about when a President will die is 99% prediction; one about when the forecasters’ backyard will be cleaned is maybe 80% action.

This idea comes up in some weird places. I’ve seen it in AI safety: AIs that you “only” ask to predict an event for you [aren’t necessarily safe](https://www.lesswrong.com/tag/oracle-ai), partly because one easy way to predict an event is to cause it. But my favorite example is from neuroscience. The “[active inference](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3637647/)” hypothesis - a leading theory of how brain motor centers work - says that the brain is entirely a predictive organ, but that the easiest way to predict your body position is to cause it. The brain registers a prediction of 100% that your arm will move, and then - in order to “win” its “bet” - moves your arm. “Every prediction market is also an action market”, indeed!
# Open Thread 217

**1:** Remember: entries to the [2022 ACX Book Review Contest](https://astralcodexten.substack.com/p/book-review-contest-rules-2022?s=w) are due April 5th. You can send them in with [this form](https://docs.google.com/forms/d/18ft8ZxQcKFwMsi_DZINn7d7VIso_y1Armfr59YeOGLE/edit).

**2:** The effective altruists I know are really excited about [Carrick Flynn for Congress](https://www.carrickflynnfororegon.com/) (he’s running as a Democrat in Oregon). Carrick has fought poverty in Africa, worked on biosecurity and pandemic prevention since 2015, and is a world expert on the intersection of AI safety and public policy (see eg [this paper](https://nickbostrom.com/papers/aipolicy.pdf) he co-wrote with Nick Bostrom). He also supports normal Democratic priorities like the environment, abortion rights, and universal health care (see [here](http://issues) for a longer list). See also [this endorsement](https://forum.effectivealtruism.org/posts/Qi9nnrmjwNbBqWbNT/the-best-usd5-800-i-ve-ever-donated-to-pandemic-prevention) from biosecurity grantmaker Andrew SB. Metaculus currently has him at 40% to win the primary and 29% to win the general. I’m closer to 60/45. Although he’s getting support from some big funders, campaign finance law privileges small-to-medium-sized donations from ordinary people. If you want to support him, you can see a list of possible options [here](https://www.carrickflynnfororegon.com/joinus) - including [donations](https://secure.actblue.com/donate/flynn-web). You can donate a max of $2900 for the primary, plus another $2900 for the general that will be refunded if he doesn’t make it. If you do donate, it would be extra helpful if the money came in before a key reporting deadline on March 31.

**3:** Every year in autumn I hold a big Meetups Everywhere event, and every time people tell me I should do it more often than once a year. So this time we’ll hold a mini-Meetups-Everywhere this April. It won’t be any different from your usual meetup schedule, except that it’ll be the Schelling time for everyone who only wants to come once every few months. If you’re a meetups organizer (or want to become one), please [fill in this form](https://docs.google.com/forms/d/e/1FAIpQLSe6bVGranNA5AKTKj8l4XtTzvXBaRsap48rEvbP5gqA2JTiEQ/viewform) with the date of a meetup on April 11th or later. Next Sunday I’ll put the results on the Open Thread for people to see.

**4:** Speaking of meetups, the rationalist/EA establishment is trying to promote local meetups. If you’re a local ACX/LW meetups organizer, you’re potentially invited to attend an all-expenses-paid retreat in California in July with our meetups czar Mingyuan. Please read more [here](https://docs.google.com/document/d/1DVJ84uiARZQNGqmtO_6epFv6LwFdlQRVyQXoPMTgFq8/edit), then fill in [this form](https://docs.google.com/forms/d/e/1FAIpQLSfkmBd0akoR6mpGxzTdV_1RP43edfaTgDl7kDff9VZm8sCoPg/viewform) to get on her radar.

**5:** And speaking of Mingyuan, she is going to inspect - sorry, enjoy the hospitality of - the East Coast meetup groups. She’ll be in DC: 4/11–4/13, Baltimore: 4/14, Philadelphia: 4/15–4/16, NYC: 4/17–4/21, Yale: 4/22–4/23, Northampton: 4/24–4/25, and Boston: 4/26–5/1. The local groups have already taken care of having meetups at the right time, but she’s looking for people who could host her and drive her between cities. Email meetupsmingyuan@gmail.com if you can help.

**6:** Last week I tried to figure out the needs of community members in Russia and Ukraine.
There are some great resources [on the thread](https://astralcodexten.substack.com/p/open-thread-216/comment/5638593), but issues that still need solving:

* [Seven Ukrainian refugees looking for remote work](https://astralcodexten.substack.com/p/open-thread-216/comment/5662472?s=w)
* [Ukrainian senior infra engineer normally at a FAANG but now stranded in Ukraine looking for Rust/C++ short/mid-term work/contract](https://astralcodexten.substack.com/p/open-thread-216/comment/5641206?s=w)
* [Russian emigrant looking for a way to open a bank account remotely](https://astralcodexten.substack.com/p/open-thread-216/comment/5641247?s=w)
* [Russian (experienced 3D animator) looking for help getting a visa to US/Canada](https://astralcodexten.substack.com/p/open-thread-216/comment/5641444?s=w)
* [Paramedic in Ternopol, Ukraine, collecting money for medical supplies](https://astralcodexten.substack.com/p/open-thread-216/comment/5641797?s=w)
* [Russian data scientist looking for job offer in Western country](https://astralcodexten.substack.com/p/open-thread-216/comment/5642315?s=w)
* [Russian protester who fled Russia looking for loan of about $1500 for short-term support](https://astralcodexten.substack.com/p/open-thread-216/comment/5643253?s=w)

**7:** On Tuesday, I posted a [response](https://astralcodexten.substack.com/p/contra-hoel-on-aristocratic-tutoring?s=w) to Erik Hoel’s [post on aristocratic tutoring](https://erikhoel.substack.com/p/why-we-stopped-making-einsteins?s=w). Since then he posted a [response to my response](https://erikhoel.substack.com/p/follow-up-why-we-stopped-making-einsteins?s=r), and we’re continuing the discussion [in the comments there](https://erikhoel.substack.com/p/follow-up-why-we-stopped-making-einsteins/comment/5703529?s=r).
# Highlights From The Comments On Justice Creep

A lot of comments on [Justice Creep](https://astralcodexten.substack.com/p/justice-creep?s=w) fell into three categories.

**First**, people who thought some variety of: yes, all this stuff is definitely a justice issue, and it’s good that language is starting to reflect that more. For example, [Adnamanil](https://astralcodexten.substack.com/p/justice-creep/comment/5562221):

> So... as someone who actually does use "\_\_\_" Justice, quite frequently, I'd like to say that I think it's a good thing to reframe "helping the poor" or "saving the poor" as "pursuing economic justice." I don't think it's a good thing for people to think of themselves as saviors, to me that's a really unhealthy and unhelpful mindset which results in people who aren't themselves poor thinking they can be the experts and the decision-makers, and that there is something wrong with poor people, that they need to be "saved" or "fixed." We live in a world where there is enough food to feed everyone, yet people go hungry; enough shelter to keep everyone warm, yet people go cold. To me, that says there is something wrong with our system of resource distribution, not with the people who ended up, for one reason or another, being left out of it.
>
> Does that result in a sense of responsibility to fix the system? Yes! Does it imply that we don't live in Utopia? Yes! Because we don't. And I don't think we should pretend to. But it also implies that we \*could\* live in utopia. It demonstrates a real hope about the possibility of utopia. It says, "if we could figure out how to live together better, we could all have enough to eat and be warm."

And [Philosophy Bear](https://philosophybear.substack.com/), in [Economic Justice And Climate Justice Are Not Metaphors](https://philosophybear.substack.com/p/economic-justice-and-climate-justice?s=r):

> Regardless of whether it is useful - and I hope it is - I think that honesty compels a clear-eyed person to talk about many of these things in terms of justice, even in the narrowest conception of justice.
>
> The mistake in Scott’s article is assuming that these forms of justice are merely metaphors or analogies on criminal justice. Many of these are about justice in *exactly* the same sense that crimes are about justice - no metaphor required. Of course, they are also about being just in other senses - justice was never just about crime. For example, one can detect demands for social justice in the bible that go far beyond "wouldn't it be nice to help people", but nonetheless aren’t framed in terms of the criminal law.
>
> Nevertheless, yes, climate justice and economic justice - for example - are also about being just in the same way laws against murder are - no stretching of meaning is required…

Read the rest of his post for more.

**Second**, people who think justice terminology is a dastardly plot to make people violent, hateful, and bigoted. I admit my original post was not guiltless here, but some commenters went much further.

[Pete Houser](https://astralcodexten.substack.com/p/justice-creep/comment/5562079):

> I think there has been a general shift towards vilifying our social/political opponents. “I believe in helping women” leaves open a discussion of “how”. “I support justice for women” implies that all persons who disagree with my beliefs are evil. Similarly we use words like “misogynistic” and “racist” with ever widening meaning because those words label our social opponents as evil.
[Malaya Zemlya](https://astralcodexten.substack.com/p/justice-creep/comment/5563381):

> I notice that "bringing justice" licenses any amount of violence on the bringer's part, as long as the claimed crime is outrageous enough. And violence has been in vogue recently.

**Third,** this one-sentence comment by [Anonymous Coward](https://astralcodexten.substack.com/p/justice-creep/comment/5568248): “How long before 'incels' campaign for 'sexual justice'?”

I understand why some people will find this trollish or uncouth, but I thought it captured the heart of the matter better than anyone else. The argument for why poverty is a justice issue goes something like this:

* Some people are suffering terribly
* It’s not their fault, and they’ve done nothing to “deserve to suffer”
* Other people have much more than they need
* This has been brought about through the choices of individuals and governments. Maybe nobody specifically says “I choose for Jeff Bezos to be a billionaire and Somali orphans to starve to death.” But a lot of people keep giving more money to Jeff Bezos and not helping Somali orphans. And governments generally enforce (or at least refuse to intervene against) the economic system that makes this keep happening. And voters keep re-electing the politicians who allow this.
* Therefore, there is injustice.

Now consider incels. Not necessarily actually-existing incels, but some hypothetical best-case scenario for the philosophy. Let’s say a guy has a birth defect that makes him horribly deformed; nobody will date him, and this makes him depressed and suicidal. Don’t tell me these people don’t exist, I’ve met them. Once again:

* These people are suffering terribly
* It’s not their fault, and they’ve done nothing to “deserve to suffer”
* Other people have much more (sex) than they need
* This has been brought about through the choices of individuals and governments. Maybe nobody specifically says “I choose for this hot guy to have sex at a dozen different parties a month, and this other guy to be loveless forever.” But a lot of people keep having sex with the hot guy and rejecting the deformed person. And governments generally enforce (or at least refuse to intervene against) the cultural norms that make this keep happening. And voters keep re-electing the politicians who allow this.
* Therefore, ???

If you don’t want to complete this with “…there is injustice”, then congratulations, you have rediscovered the way that almost every society throughout history has thought about inequality.

[I have had this argument enough times before to know people always try to weasel out of it. Some people insist that every single lonely person in the world deserves it, because loneliness is a 100% reliable signal of being a misogynist who hates women - (What about lonely women? Probably racist.) Other people say that if these people just used better deodorant and learned social skills, they would all get partners, so it’s their own fault for not trying (much like how if poor people just worked hard and learned to code, they would all be millionaires). Still other people say that sex and relationships aren’t a human right (but a First World lifestyle with free college education and public transport and high-tech health care is, that’s just what God decided when He granted us inalienable rights, I don’t make the rules) and nothing that isn’t about a human right can be unjust or unfairly distributed. I reject all of these as weaselly.]
This single scenario - incels and “sexual justice” - is almost the lone survivor of a once omnipresent clade - a sort of philosophical living fossil. It’s been so roundly outcompeted by fitter memes - the more modern perspective of “if there’s inequality caused by human choices then that’s unjust by definition” - that it’s hard to remember that the alternative ever existed or is even possible. If this last living fossil ever goes, an entire phylum of philosophical possibility will be lost to human comprehension forever.

Part of my objection to justice creep, which I didn’t explain very well in the post, is that by assuming the “inequality therefore injustice” perspective is right, it denies this whole ancient phylum of philosophical creatures *a priori.* When we’re looking at an idea like “sexual justice”, we come up with objections like:

* Even though in *some sense* I’m responsible for this person not having sex, in the sense that I choose not to have sex with him, choose not to vote for candidates who will mandate sex with him, and consume/signal-boost cultural products that have typical beauty standards, this is not the sense where I should actually feel bad or responsible in any way, or where his suffering is sufficiently “my fault” to give me any obligation to help him.
* It’s hard to think of a way to help him that doesn’t impinge on important freedoms in some way. Either the government would have to use force to coerce people to have sex with him, or use force to coerce people to give him their money so he could pay others to have sex. Both of these solutions seem to have enough ethical downsides not to be worth it.
* Given that there’s some sense in which his problems are caused by acts of God (eg his deformity) and some other sense in which they’re caused by his own failures (eg not becoming so amazing at social skills he can compensate), I don’t feel like society is culpable enough that we have to reorganize it to fix this problem.

Needless to say, if we held the same mindset when thinking about climate or the economy, we could generate the same objections. And none of these problems come up if we retreat from “sexual justice” to “sexual welfare”. Would it be kind and compassionate to help this person have sex? Straightforwardly yes. (I think some people will object because they interpret “welfare” as “government benefits”, but I’m not talking about this here, just the concept of wanting people to fare well.)

So am I saying that governments shouldn’t help the poor? Or that incels are right about everything and we need state-mandated gfs/bfs? Or am I going to weasel out of this? Let’s keep going and see!

[Brad Foley](https://astralcodexten.substack.com/p/justice-creep/comment/5562136) writes:

> Liberals are focused on a notion of equality and fairness as justice (I think Haidt's moral foundations theory is really helpful here). So the idea that wealthy nations create the most CO2 and cause the most global warming, where poorer nations (mostly already hot) will disproportionately experience the worst effects, is inherently unjust.

I am really grateful to Brad for bringing up Haidt’s moral foundations here - if I’d thought about it when writing the original post, I would have been able to do a much better job. The transition from “help the poor” to “pursue economic justice” is, in Haidtian terms, a shift from the Care/Harm foundation to the Fairness foundation. I worry about this because I find myself much more comfortable with Care/Harm than with Fairness.
I am very easily able to answer questions like “Are incels sad because they don’t have sex? Would it improve their lives if you gave it to them?” (yes, definitely), whereas I have no idea how to answer questions like “Is it unfair that incels don’t have sex?” (see discussion above; it seems unfair in some cosmic sense, but not necessarily in the sense where I feel certain that society is mandated to address the unfairness)

Likewise, when I play computer games, which causes my local power plant to emit a little more CO2, am I harming people on Kiribati? Yes, definitely (to some very small degree). Is it a violation of climate justice that I am allowed to play computer games? Can we at least agree that this is a tougher question?

[Viliam](https://astralcodexten.substack.com/p/justice-creep/comment/5573206) (author of [Kittenlord’s News](https://kittenlord.substack.com/?utm_source=substack&utm_medium=web&utm_content=comment_metadata)) tries the same tactic, six hundred comments deep:

> Comment justice. Is it unfair that Scott's blog gets more comments than other blogs?

This is obviously a troll, yet I challenge people to come up with reasons why it’s false that don’t also disprove economic justice or climate justice or so on (please think for two seconds when proposing your reason about whether it has a clear climate or economic equivalent).

So my answer to this is something like “I have no idea what justice is, but I care about people and want them not to be harmed, and I hope this is enough”. In my ideal world, everyone would get a guaranteed basic income, not because I have any idea what level of UBI would be “just”, but because it’s bad for people to be poor. If they want to use that money to hire a prostitute or a cosmetic surgeon to pursue a romantic relationship, that’s fine with me. If they want to use that as seed money to start a business and become a billionaire and be much richer than everyone else, that’s fine with me too. I can’t guarantee I have solved all of the moral issues that will come up / stay around, but I feel much more confident addressing them on a care/harm foundation than a fairness one.

---

Devin Kalish on the [Effective Altruist Forum](https://forum.effectivealtruism.org/) writes [Brief Thoughts On Justice Creep And Effective Altruism](https://forum.effectivealtruism.org/posts/icmnNqPbmisE7QcmH/brief-thoughts-on-justice-creep-and-effective-altruism):

> When Effective Altruists look at the world, they see lots of cases of unacceptable neglect and apathy and deep power differentials between possible beneficiaries and possible benefiters. Oh, and they also see sentient beings even more numerous than humans alive on Earth being actively/purposely subjected to non-stop torture for minor benefits to humans (that probably aren’t even net beneficial to humans), heavily normalized by culture, and which nearly everyone of moderate affluence on Earth is complicit in. The former types of issues can be given a justicey spin, but once you buy the right moral premises, the latter category screams “justice issue”. Ignoring this dimension makes it hard to see why animal welfare is such a popular cause area; indeed many passionate Effective Altruists I have run into, whether they are directly working on it or not, have a special, very personal investment in it when you talk to them.

This is actually a really great point! I feel no hesitation using justice terminology for animal issues.
If you accept the basic philosophical underpinnings of the vegan worldview - animals are sentient creatures who it’s morally wrong to hurt - then yeah, the fact that we raise them in tiny cages and torture and kill them does seem manifestly unjust.

Why do I find this so much easier to swallow than eg climate justice or economic justice? I guess it’s because climate justice involves summing up a bunch of things which are not themselves unsympathetic (me playing computer games, you playing computer games, so on x 1 billion) yet happen to have bad consequences. Economic justice is the same way - I spend my money on things I want and find useful, you spend your money on things you want and find useful, and at the end we find that Jeff Bezos has $200 billion and a Somali orphan has $0. It doesn’t seem intuitively bad to play computer games or to spend your money on things you want and find useful. Whereas the animal problems really are “someone captures and tortures and kills a bunch of animals”. I guess I’m bringing in sketchy stuff like the act/omission distinction and the doctrine of double effect here, but this really does seem pretty different from the other two cases.

(looking back, there was a disanalogy in that paragraph - the animal equivalent to “me playing computer games” is “me eating meat”, which seems less directly unjust than me being a factory farmer. But the climate equivalent of “running a factory farm” is “running a power plant”, which still seems less directly connected to sea levels rising in Kiribati than capturing/torturing/killing animals is to those animals being captured/tortured/killed)

Despite this feeling like the clearest case of justice to me, I hear the phrase “animal justice” less often than any of the others (though [some people](https://en.wikipedia.org/wiki/Animal_Justice_Project) apparently do use it). The closest equivalent is “animal rights”. But a lot of the animal activists I know have been deliberately moving away from that to something more like “animal suffering” or “animal welfare”, I think because “animals are endowed with natural rights” is a harder sell than “animals being tortured is bad”. I’m not sure why the animal and human cases are moving in opposite directions.

---

[Antoine B](https://astralcodexten.substack.com/p/justice-creep/comment/5572008) writes:

> A community I know was victimized for decades by the polluting discharges of politically powerful hog farming conglomerates. I don't doubt that some people there wanted retribution against the bad actors, but as far as I could tell, most just \*wanted it to stop\*.
>
> Isn't it fair to frame their struggle as an appeal to 'environmental justice'?

This seems like the strongest argument *against* the point I was trying to make above. I imagine eg the hog farmers dumping toxic sludge into a river, and then it makes lots of people sick. Here people are being at least kind of directly victimized by the hog farmers, in the same way the animals are being directly victimized by the factory farm. But from the hog farmers’ perspective, they’re just running a hog farm, which (ignoring the vegan objection for now) is a perfectly reasonable non-criminal thing to do. The hog farm example seems like a middle ground between hurting someone extremely directly (eg factory farms, ordinary violent crime) and hurting people extremely indirectly (eg playing computer games in a way that burns fossil fuels), in a way where now I’m not sure I can distinguish between them meaningfully.
I admit this is awkward for my theory, so I’ll make a deal: I’ll blur some of my distinction between the harm and fairness foundations if you let me use the phrase “hog justice”.

---

[Walruss](https://astralcodexten.substack.com/p/justice-creep/comment/5562950) writes:

> I've got a lot of small complaints but the bigger issue is that the current take [on justice] assumes people go without because of people.
>
> In this model, a state of nature just provides everything, and the only reason there are haves and have-nots is that human-built systems get in the way. There's no allowance for the idea that maybe those systems provide valuable services, and trying to point out that they often do is either a sign of privilege or bootlicking.

I think this is another framing of the point I made above with incels. Related to the story where someone (Milton Friedman? I can’t find the source) was asked about the causes of poverty, and answered “Poverty doesn’t need a cause, it’s the natural condition, we should be looking for the causes of wealth.”

Is this the same question as whether to use the justice foundation here? That is, if you accept Friedman’s formulation, must you stop seeing (at least some) poverty as economic injustice? If you reject the formulation, is an injustice-based view of economics the only logical option? I’m not sure!

---

[kyb](https://astralcodexten.substack.com/p/justice-creep/comment/5563590) writes:

> Fairness is something that even some animals understand. Almost every child needs to be told that the world isn't fair, because they start from an innate assumption that it should be. Perhaps we should just talk about fairness instead of justice and then this complaint goes away?

This led to a very interesting subthread between kyb and Ruben on whether it’s in fact true that animals and very young children understand fairness. See <https://en.wikipedia.org/wiki/Inequity_aversion_in_animals>, [this Twitter discussion](https://twitter.com/nicholaraihani/status/1197809126417616897) and the linked papers, and the rest of [the conversation](https://astralcodexten.substack.com/p/justice-creep/comment/5563757).

[Daniel Speyer](https://astralcodexten.substack.com/p/justice-creep/comment/5562075) writes:

> I recall hearing in childhood religious school back in the early 90s that the English word "charity" comes from the Latin "caritas" meaning compassion, and it's about a feeling of caring deep in your heart. But the Hebrew "tzedakah", often translated as "charity", comes from the root "tzedek" meaning justice.
>
> So if they're smelly and obnoxious and ungrateful and no one could blame you for not feeling compassion, you still need to give, because justice calls for them to receive. Similarly if they are so numerous that you can only relate to them abstractly.

Interesting! I think GK Chesterton has said something almost the opposite of this, where Christian charity is superior to justice, because the just man only helps people who ‘deserve’ help, whereas the Christian helps everybody.

And [Cloven Pine Games](https://astralcodexten.substack.com/p/justice-creep/comment/5562841):

> Something's off here. What Scott describes as "justice creep" sounds in many ways like a classically Christian understanding of justice. For instance, what is St. John Chrysostom invoking when he says "the coat rotting in your closet belongs by rights to the man who has no coat" if not some version of economic justice?
> And yet, Christianity manages to also talk about many other virtues, and revere many people as saints (including, uh, Chrysostom). So, at least within the worldview from which the concept of saints derives, there is room for both widespread injustice crying out for remedy and genuinely heroic examples of virtue. And why shouldn't there be? The fact we have many injustices to right does not cancel out opportunities to cultivate virtues like patience, fortitude, and temperance.

---

[AJPio](https://astralcodexten.substack.com/p/justice-creep/comment/5562714):

> Young philosopher who teaches Political Phil here (though doesn’t publish, so not an expert).
>
> Here’s the usual train of thought. First, the difference between Morality and Justice is that the latter is thought to be about ‘the basic structure of society’, with ensuing debate about what the boundaries of this are. But as a first pass, getting cheated on by your partner is thought to be not unjust, but being robbed by the government is, even if you feel the former immoral treatment would be worse.
>
> One thing Rawls took a theory of distributive justice (a theory about how benefits and burdens should be allocated by the basic structure) to be concerned with was ‘the social bases of self-respect’ – some minimal standard of respect with which you can interact with others and pursue your conception of the good life. SJWs have taken this and run with it.
>
> 70s/80s philosophers focused on structures and institutions like the law and courts, over time expanding to consider e.g. marriage and the family, and took these to be the main things we’d need to think about to ensure people could live minimally decent lives, see e.g. unfavourable attitudes towards the unemployed. But the modern argument is that our self-respect depends a lot on things like culture and norms and stereotypes, so IF you think that the social bases of self-respect are very important (such that we should be willing to make tradeoffs against e.g. economic freedom), and IF you think that self-respect is strongly affected by cultural ideas, then you’re going to see all cultural ideas and discourse as a domain relevant to achieving justice – hence the kerfuffle over implicit bias, stereotypes, representation in media.
>
> (Of course the causal sociological story actually runs from society to these arguments; philosophers don’t have enough of an impact. Parts of society get certain ideologies, become philosophers and then come up with the justifications. Of course many people also get the social bases direction wrong too – solve the economic inequalities and you’ll probably fix the stereotypes, which we know don’t have all that much power to explain current gaps.)
>
> Regarding Mali’s climate being a big part of why it’s poor, though this is true, the dominant line of thought would be that this is not relevant (to justice). Since capitalism has produced such a large surplus, it’s possible to arrange society in such a way that more of that surplus is distributed so everyone meets some minimal standard of living, and our failure to do this means Mali has a claim against richer nations. Even though it’s true that ‘the climate’ is a big part of our causal explanation, which part of a multi-factor causal explanation you pick as being relevant depends on normative assumptions, including how it’s legitimate for people to behave. When a driver crashes their car, the actual speed plays a very large part in the causal story.
> But assumptions about how drivers, council, and bosses ought to behave are going to determine whether we think the cause is the driver being reckless, the council not having the appropriate signage or speed limit, or his boss putting unrealistic demands on the driver, or the wider economy making him poor so that he needs to drive quickly to make a buck in the first place. The speed might not be relevant to us.
>
> We theoretically have enough causal levers that we could have helped Mali without causing climate change at all, despite its climate, and that’s what matters, no analysis of variance or (conveniently) knowledge about economics needed - it’s enough that a just outcome is possible and we collectively have failed to provide it. (This also is why ‘economic justice’ comes up less in discussions than ‘social justice’ – the perception is ‘redistribution’ can be a one-size-fits-most for various economic problems).
>
> The counter argument of course is ‘planned arrangements of societies according to some ideal hasn’t gone well in the past, maybe we should study what things actually work and be concerned with what’s effective given how humans and systems tend to behave’. But of course how humans tend to behave is also a product of culture (more evidence of injustice!) and this kind of reply is less appealing in other domains e.g. if government officials keep being corrupt, you don’t say ‘well maybe instead of calling this state of affairs unjust we should remember what human nature is like, and design systems around it, think about what’s more effective, have a positive narrative’ – most of us would say that though what’s effective matters, this nevertheless seems to be an unjust state of affairs and we should label it as such.
>
> So in general, it seems that there’s a tension between two roles we want the ‘justice’ concept to have. On the one hand, we want to use it to identify things that ought to be changed. On the other hand, we want to be able to create \*effective\* change, and these goals can trade off against each other. SJWs are identifying parts of the basic structure of society they think we are collectively obliged to change. Scott is drawing attention to the effects this usage has on actually creating progress. In the background are a lot of unstated assumptions / conceptual holes about what kinds of explanations count as relevant, and what causal levers we have or don’t have available.

---

[Darwin](https://astralcodexten.substack.com/p/justice-creep/comment/5563431):

> *“If I were in Terra Ignota, my fondest wish would be to excel in some way the same way Sniper, Apollo Mojave, and the other utopian characters excel, bringing glory to my Hive and giving its already-brilliant shine extra luster. But if I were in 1984, my fondest wish would be to bring O’Brien and the others to justice; to watch them suffer, to undo the wound in the world caused by their scheming.”*
>
> I think you have this entirely backwards, which may explain the disconnect here.
>
> Think about low-hanging fruit, here.
>
> If you're already in a utopia where everyone is very industrious and excellent, there is very little opportunity to actually improve things by trying to excel yourself; your society is already at the limits of what can be achieved by excelling, the marginal gains from the next marginal individual of average ability trying to excel even harder are slim to none.
> However, if your utopia is very focused on individual excellence and maybe doesn't spend much time looking at structural factors or inefficient distributions, one person looking for 'injustice' of these types might be able to find quite a lot of overlooked ways to improve things for people, and have a large positive impact.
>
> Similarly, if you're in a 1984 dystopia, where everyone is constantly being brought low and made to suffer... it might feel nice to have your particular tormentors brought low and made to suffer, but it's unlikely to change much of anything or do much good. Even getting rid of the criminals and villains at the top of the foodchain will accomplish little, because there's no one good in your society to replace them. In this world, trying to excel and be personally virtuous may actually have a bigger impact than adding to the pile of persecutions; there may be a lot of people you can easily save and situations you can easily improve, just by caring and working hard, because no one else is doing that.
>
> I think the move towards justice may be \*because\* we are in some sense a high-industriousness near-utopia; increasing productivity isn't actually going to help because it's already so high that we could instantly solve all of our problems if we directed that productivity towards doing so. Individual excellence can't save us because that excellence has as its best projected outcome becoming a tech billionaire and making a website a lot of people use to share misinformation and cat videos. In this world, the low-hanging fruit really \*is\* about how resources get directed and distributed, which goals are prioritized, how people are treated, how power is structurally represented and utilized - ie, 'justice' issues.

---

And on a totally different note, [Jim Hays](https://astralcodexten.substack.com/p/justice-creep/comment/5562163):

> So about the "311,000" hits for "climate villains": this estimate is completely wrong.
>
> I don't mean that it's not what Google says on page one of the search results. That part is true. But if you click through to page 15 of the results for this search, you find that the estimate reduces from 311,000 to 149 results. Google has decided that they want to always provide an estimate of the total number of results for every search, but they have neither precomputed accurate estimates for all possible searches, nor do they wish to spend the compute to calculate good estimates on the fly for every search, when most people never go past page one. Their estimates can be OK for searches on common words (where they most likely do have cached in a database somewhere the current number of web pages associated with that term), but for compound phrases, they take each of the component words, and do some kind of math to estimate the value. So here, they would look at both "climate" hits (4,470,000,000 results), and "villains" hits (2,190,000,000 results), and maybe a few other parameters, and make a guess as to how often these appear together. Unfortunately, these guesses have almost no relationship to reality.
>
> I often see these numbers cited as evidence for how prevalent something is. Given Google's reputation and prevalence, I find it pretty irresponsible that they still list these estimates despite knowing how wrong they are. But presumably some product manager likes showing users a lot of zeros to give an inflated impression of how comprehensive Google's web crawling is.
>
> Here's a longer analysis.
> It's five years old, but not much has changed in that time:
>
> <https://karl-voit.at/2017/01/15/google-search-estimates/>
>
> Apparently, Google is currently experimenting with removing this number, which I applaud: <https://www.seroundtable.com/google-estimated-number-of-search-results-gone-33016.html>

But [Kenny](https://astralcodexten.substack.com/p/justice-creep/comment/5570434) writes:

> I'm less sure the estimates (of the number of search results) are wrong, and think it's more likely that Google decided to more aggressively limit the number of results you can see. (And that makes sense – keeping some kind of 'paginated results' data, with thousands (or more) of results in some server's memory, is expensive at their scale.)
>
> I personally miss the days when there were (or could be) literally hundreds or thousands of pages of results for a search, but I think Google noticed (a while ago) that almost no one bothers looking beyond the first or maybe second page anyways.
>
> I defy the data that there are only ~150 results for "climate villains"! That seems way too low to be plausible.

And [Austin](https://astralcodexten.substack.com/p/justice-creep/comment/5574207) (author of [Acrolectics](https://acrolectics.substack.com/?utm_source=substack&utm_medium=web&utm_content=comment_metadata)) writes:

> I second this theory. I don't know how accurate their estimates are, but I know that their total results are truncated enormously. Everything returns about 15-20 pages before "repeat with omitted" and returns about 30-50 pages once you "repeat with omitted." Searching for the two words 'climate' and 'villain' returns only 30 more results, before or after searching again with omitted results included, than the respective search (omitted/non-omitted) for the quoted phrase '"climate villain"', even though most of the results for the two words don't include the phrase "climate villain" and many don't include either word (e.g. they highlight the phrase "bad guy" as why they matched it with "villain" or they highlight the word "environment" as why they matched it with "climate"). Similar numbers occur when searching for other random phrases. (I searched for "colossal regret" and "American dream" for reference. After repeating the search with omitted results, "American dream" still only gave 41 pages with 405 total results. The 44 million estimated results seems much more plausible as a total count for that phrase than the 405 it returned -- or the 21 pages before repeating with omitted results included.)

[Jim](https://astralcodexten.substack.com/p/justice-creep/comment/5577617) says:

> The "American dream" example is excellent, and handily convinces me that clicking through the paginated results is not a good representation of the total number of web pages cataloged by Google which logically match the search input. However, I still maintain that the first-page estimate is also an inaccurate measure of the same.

It looks like figuring out how many Google results a term has is just a problem that is beyond us as a civilization at this point.
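To make Jim’s “some kind of math” concrete: the simplest shortcut would be to assume the two words occur independently across the index, so the expected co-occurrence count is hits_a × hits_b / N for an index of N pages. This is only my guess at the flavor of estimate involved, not Google’s actual method, and the index size below is an assumption chosen for illustration:

```python
def naive_cooccurrence_estimate(hits_a: int, hits_b: int, index_size: float) -> float:
    """Expected pages containing both terms if the terms were independent:
    N * (hits_a / N) * (hits_b / N) = hits_a * hits_b / N."""
    return hits_a * hits_b / index_size

# With the hit counts Jim quotes and a hypothetical index of 3e13 pages,
# the independence guess lands in the same ballpark as the 311,000 figure:
print(naive_cooccurrence_estimate(4_470_000_000, 2_190_000_000, 3e13))  # ~326,000
```

Whatever the true formula, an estimate of this shape depends only on whole-index statistics and says nothing about how many results the pagination will actually serve, which is consistent with both Jim’s and Kenny’s observations.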
# Men Will Literally Have Completely Different Mental Processes Instead Of Going To Therapy

People are debating “therapy: good or bad?” again:

There are dozens of kinds of therapy: reliving your traumas, practicing mindfulness, analyzing dreams, uncovering your latent desire to have sex with your mother. But most people on both sides of this debate are talking about what psychiatrists call “supportive therapy” - unstructured talking about your feelings and what’s going on in your life.

I know the responsible thing to say is something like “this is helpful for some people but not others”. I *will* say that, in the end. But I have a lot of sympathy for the people debating it. I have such a strong intuition of “why would this possibly work?” that it’s always shocked me when other people say it does. And I know other people with such a strong intuition of “obviously this would work!” that it shocks them to hear other people even question it. Yet my patients seem to line up about half and half: some of them find therapy really great, others not helpful at all.

Whenever I try to understand this, I find myself coming back to this tweet:

[Qiaochu](https://qchu.wordpress.com/) is a smart person with various impressive academic accomplishments, all of which are . . . apparently compatible with being the person who would write this. And I hear weird stuff like this all the time. An equally accomplished friend told me at one point that “I was fifteen when it occurred to me for the first time that I had a personality”. I’ve [previously written about](https://slatestarcodex.com/2014/03/17/what-universal-human-experiences-are-you-missing-without-realizing-it/) a friend who was in their late teens/early twenties before they realized they could have food preferences.

I don’t want to exaggerate this. Regardless of what he says, I’m sure Qiaochu thought about and solved problems when he was in college - if nothing else, responding to problems like “it’s cold outside” with solutions like “maybe I should get a jacket”. But I trust him when he says he was lacking some kind of essential reflectivity or systematicity about it, or something along those lines. If you have no ability to systematically think about and solve your problems, you should probably tell all your problems immediately to someone who does, and maybe this person could be a therapist.

But also: people vary really widely in their ability to do something sort of like holding a conversation with themselves - for example, some people [totally lack an inner monologue](https://www.psychologytoday.com/us/blog/pristine-inner-experience/201110/not-everyone-conducts-inner-speech). Nobody has ever checked if those people benefit from therapy more, and I don’t want to actively *predict* that they would. But I know that I talk things over with myself a lot. Does this help me stay emotionally stable? Not sure; seems plausible. If I didn’t have an inner monologue, maybe the only way I could get that same effect would be by talking things over with another person.

Other people obsessively seek external reassurance. I talked [here](https://astralcodexten.substack.com/p/book-review-sadly-porn?s=w) about the phenomenon of hypochondriacs who go to their doctor to be told that their latest concern (maybe the 25th time they had a certain symptom) isn’t worth worrying about, same as the last 24 times. They’re not even asking for an x-ray or something!
They’re just happy to hear the doctor say the words “given that your last 24 symptoms were nothing, I’m assuming this one isn’t anything either”. Then they are delighted and go home! I sometimes think about this by analogy to “you can’t tickle yourself” - some people can’t reassure themselves either, and are very happy to hear thoughts they could have easily generated themselves coming from other people’s mouths. Are these some of the people who benefit from therapy? Seems plausible.

I worry this will come across as “everyone who benefits from therapy is defective in some sort of basic human functioning.” That’s not quite the message I want to send - partly because there are lots of other reasons therapy can benefit people, but partly because I suspect *[everyone](https://slatestarcodex.com/2015/11/03/what-developmental-milestones-are-you-missing/)* is defective in some sort of basic human functioning, regardless of whether they like therapy or not.

Instead, think of it as: there’s wide variation in the details of people’s minds and thought processes - even among smart, competent people who you would really expect to have all the basic human skills. Probably some kinds of thought process make you more likely to benefit from supportive therapy, and others make you less likely. Nobody’s studied this and it’s all just speculation, but I think it’s reasonable speculation.
# Contra Hoel On Aristocratic Tutoring

**I.**

Erik Hoel has an interesting new essay, [Why We Stopped Making Einsteins](https://erikhoel.substack.com/p/why-we-stopped-making-einsteins?s=w). It argues that an apparent decline in great minds is caused by the replacement of aristocratic tutoring by ordinary education.

Hoel worries we’re running out of geniuses:

> Consider how rare true world-historic geniuses are now-a-days, and how different it was in the past. In “[Where Have All the Great Books Gone?](https://scholars-stage.org/where-have-all-the-great-works-gone/)” Tanner Greer uses Oswald Spengler, the original chronicler of the decline of genius back in 1914, to point out our current genius downturn […]
>
> There are a bunch of other analyses (really, laments) of a similar nature I could name, from *Nature*’s “[Scientific genius is extinct](https://www.nature.com/articles/493602a)” to *The New Statesman*’s “[The fall of the intellectual](https://www.newstatesman.com/culture/books/2021/05/fall-intellectual)” to *The Chronicle of Higher Education*’s “[Where have all the geniuses gone?](https://www.chronicle.com/article/where-have-all-the-geniuses-gone/)” to *Wired*’s “[The Difficulty of Discovery (Where Have All The Geniuses Gone?)](https://www.wired.com/2011/01/the-difficulty-of-discovery/)” to philosopher Eric Schwitzgebel’s “[Where are all the Fodors?](https://schwitzsplinters.blogspot.com/2021/11/where-have-all-fodors-gone-or-golden.html)” to my own [lamentation on the lack](https://erikhoel.substack.com/p/how-the-mfa-swallowed-literature?s=w) of leading fiction writers.
>
> If you disagree, I’ll certainly admit that finding irrefutable evidence for a decline of genius is difficult—intellectual contributions are extremely hard to quantify, the definition of genius is always up for debate, and any discussion will necessarily elide all sorts of points and counterpoints. But the numbers, at least at first glance, seem to support the anecdotal. Here’s a chart from *Cold Takes*’ “[Where’s Today’s Beethoven?](https://www.cold-takes.com/wheres-todays-beethoven/#books-the-longest-series-i-have)” Below, we can see the number of acclaimed scientists (in blue) and artists (in red), divided by the effective population (total human population with the education and access to contribute to these fields).

He argues the most likely cause is the decline of “aristocratic tutoring” - an educational method typical among the ultra-rich of the past - and its replacement with normal public (or private) schools.

> The answer must lie in education somewhere [...] paradoxically there exists an agreed-upon and specific answer to the single best way to educate children, a way that has clear, obvious, and strong effects. The problem is that this answer is unacceptable. The superior method of education is deeply unfair and privileges those at the very top of the socioeconomic ladder. It’s an answer that was well-known historically, and is also observed by education researchers today: tutoring.
>
> […]
>
> Let us call [the] past form *aristocratic tutoring*, to distinguish it from a tutor you meet in a coffeeshop to go over SAT math problems while the clock ticks down. It’s also different than “tiger parenting,” which is specifically focused around the resume padding that’s needed for kids to meet the impossible requirements for high-tier colleges. Aristocratic tutoring was not focused on measurables.
> Historically, it usually involved a paid adult tutor, who was an expert in the field, spending significant time with a young child or teenager, instructing them but also engaging them in discussions, often in a live-in capacity, fostering both knowledge but also engagement with intellectual subjects and fields.

He amply proves that many of the great geniuses of the past, including Bertrand Russell, Albert Einstein, and John von Neumann, received tutoring like this, and suggests that its absence (more because of strengthening democratic norms than because people don’t have the money) might be why we don’t see figures of their stature anymore.

**II.**

I agree that this kind of tutoring sounds great. I wouldn’t be surprised if it has a big effect size. But it’s not the reason we have fewer geniuses.

Why not? Suppose that half of past geniuses were tutored this way, and half weren’t. Even if every single genius who was tutored owed his genius entirely to the tutoring, the tutoring could only explain half of geniuses. That means that after the tutoring stopped, we would expect half as many geniuses. But Hoel is making a stronger claim: that there are almost no geniuses today. For aristocratic tutoring to explain that, we would need almost all past geniuses to have been aristocratically tutored. But as far as I can tell, that isn’t true. Probably well below half of them were. Just to give some examples:

**[Isaac Newton](https://en.wikipedia.org/wiki/Isaac_Newton)** went to a local school at 12, and to Cambridge at 17. The Wikipedia page on his early life doesn't mention "tutor", except in the context of a college teacher. His adopted father was a country parson, and his family wasn't rich enough to do aristocratic tutoring even if they'd wanted to. Articles on his early life stress his self-motivated nature: he was constantly building things and observing things on his own time.

**[Wolfgang Mozart](https://en.wikipedia.org/wiki/Wolfgang_Amadeus_Mozart)** was tutored, but primarily by his father, himself an excellent violinist. According to his Wikipedia article, "In his early years, Wolfgang's father was his only teacher". Mozart was already an obvious child prodigy by 6 or 7, and wrote his first symphony at 8. I can't find any evidence that non-family members contributed to his education. This kind of tutoring is still common; my wife learned cello from her grandmother, a professional music tutor.

**[Charles Darwin](https://en.wikipedia.org/wiki/Charles_Darwin)** went to a local school at age 8, switched to a boarding school at 9, spent a summer at age 16 following his father (a doctor) around as he treated patients, then went to medical school. He switched to regular college at Cambridge at 19, where he seemed to have a pretty traditional education. Wikipedia has a long article on his education, which doesn't mention the word "tutor" until college age, when he "spent the autumn term at home studying Greek with a tutor". Later in college, he "joined other Cambridge friends on a three-month "reading party" at Barmouth on the coast of Wales to revise their studies with private tutors". I don't think he had any deeper relationship with tutoring than that, especially not in childhood. His summer following his father around learning medicine was probably good for him, but not outside the bounds of what still happens today (I followed my father around learning medicine).

**[Louis Pasteur](https://en.wikipedia.org/wiki/Louis_Pasteur)** was born "to a Catholic family of a poor tanner".
He went to primary school at 8 and college at 16. I can't find any evidence he was tutored.

**[Charles Dickens](https://en.wikipedia.org/wiki/Charles_Dickens)** barely seems to have been educated at all. His family was so poor that he spent some of his childhood working in a sweatshop. During other periods they did a little better and he went to small lower-to-middle-class private schools. Dickens seems to have gotten most of his education by reading novels on his own.

**[Thomas Edison](https://en.wikipedia.org/wiki/Thomas_Edison)** grew up poor in Michigan. Again according to Wikipedia, "Edison was taught reading, writing, and arithmetic by his mother, who used to be a school teacher. He attended school for only a few months. However, one biographer described him as a very curious child who learned most things by reading on his own. As a child, he became fascinated with technology and spent hours working on experiments at home."

Hoel argues that the decline in aristocratic tutoring is “why we stopped making Einsteins”. But then why did we stop making Newtons, Mozarts, Darwins, Pasteurs, Dickenses, and Edisons?

**III.**

One other argument: Hoel cites Holden Karnofsky’s [Where’s Today’s Beethoven?](https://www.cold-takes.com/wheres-todays-beethoven/#books-the-longest-series-i-have), which suggests that music is a typical case of the genius decline. But aristocratic tutoring in music is alive and well. When my brother was identified as a piano prodigy, my (well-off but not absurdly rich) parents hired jazz musician [Linda Martinez](https://jeremysiskind.com/2018/01/video-7-linda/) to tutor him. I asked around and this is apparently pretty common in music. In fact, it seems common across a variety of fields, especially those that aren’t taught in school and where success doesn’t make you too rich to need tutoring money (a friend brings up chess as another example).

If aristocratic tutoring were a significant factor behind declining genius, we would expect to see a split: fields like science where tutoring is rare would lose their geniuses, whereas fields like music where tutoring is common would be as genius-filled as ever. But people use music as a typical example of a declining-genius field. So that can’t be it.

**IV.**

So what’s my explanation? You will not be surprised to hear it’s the maximally boring one, a combination of:

1. [Good ideas are getting harder to find](https://slatestarcodex.com/2018/11/26/is-science-slowing-down-2/). In 300 BC, if you noticed that the water level in your bathtub got higher when you got into it, you were allowed to run through the streets shouting “eureka!” and declare yourself to be a genius. Now you would need some 400-page mathematical proof drawing on the topology of eight-dimensional manifolds in order to get that kind of cred.
2. We’re finding lots of ideas anyway, but only by [dectupling the number of researchers](https://slatestarcodex.com/2018/11/26/is-science-slowing-down-2/). More researchers means more distributed progress: it’s unlikely one person will stumble across a fully formed brilliant theory before other people have nibbled off bits and pieces of the same idea.
3. More democratic norms / tall poppy syndrome. In the past, people celebrated geniuses and would play up their accomplishments in order to have someone to celebrate.
Now it’s considered kind of cringe to believe in geniuses, and you should play down their accomplishments, play up the degree to which they depended on lab assistants / collaborators / support staff, and maybe even accuse them of hogging glory or “crowding out” others in the field.

AI seems to have its share of geniuses: for example, people seem very impressed with Geoff Hinton. And AI alignment - the subfield I’m most familiar with, so new and small that it’s controversial whether it should be considered a science at all - is absolutely full of geniuses. I mean, I can’t assess whether they’re *right* about anything; I just mean that there are a couple of individuals who have developed entire new paradigms, who are widely acknowledged as way above the rest of the field, and who everyone expects the next interesting result to come from. If people are still working on AI a hundred years from now, I expect them to talk about Hinton in the same way biologists talk about Darwin now. If they’re still working on alignment (which would be profoundly weird for many reasons) I expect them to talk about Bostrom and various other people I won’t name because some of them read this blog and don’t need bigger egos.

I think this is because AI is new and small(-ish), and AI alignment is *very* new and *very* small. Since it’s new, good ideas *aren’t* hard to find. I mean, they’re still hard enough that *you* or *I* can’t find them. But not so hard that the smartest people in the world can’t still luck out and open up entirely new vistas. Since it’s small, if one of the smartest people in the world does go into AI alignment, they stand out - unlike physics, which is so full of the smartest people in the world that nobody notices another one.

I’ll give one even weirder example. A few years ago, I wrote a very political post, called [Can Things Be Both Popular And Silenced](https://slatestarcodex.com/2018/05/23/can-things-be-both-popular-and-silenced/)? It touched on “guru” culture in politically incorrect discourse - the phenomenon of people like Jordan Peterson who became really famous by saying controversial things - and it asked: why aren’t there equally famous figures on the left? The social justice community is an order of magnitude bigger than the intellectual dark web, so how come it hasn’t produced proportionately greater celebrities? Ibram X Kendi, maybe. Ta-Nehisi Coates, ten years ago. But how come they aren’t bigger and more numerous?

The answer is: they were, we just need to look further back. The titans of black anti-racism are Martin Luther King and Malcolm X, both most active in the 60s. The titan of Hispanic anti-racism is Cesar Chavez - also the 60s. The titan of gay rights is Harvey Milk - now we’re up to the 70s. Ask someone who isn’t an expert on feminism to name famous feminists, and you’ll probably get people like Gloria Steinem, Betty Friedan, and Andrea Dworkin - 70s again. I’m not sure any modern black, gay, or feminist activists measure up to these people in terms of influence, which is fine: the modern paradigm of minority rights began around the 60s and 70s, the first few people to operate within it got outsized acclaim, and there’s no easy way to equal them now.

I think everything is like this: easy to stand out in when you’re small and new, harder when you’re big and old.
I realize biology was several thousand years old in wall clock time by Darwin’s era, but I think it’s entirely possible that it was newer than AI is now in terms of researcher-lifetimes-spent, and especially in terms of quality-adjusted researcher-lifetimes spent (a researcher with access to the Internet might be worth several QARLs compared to a researcher who has to sail to Alexandria to consult the Great Library).

So I think efforts like Hoel’s to find the One Thing That Went Wrong in producing geniuses are doomed to fail. But even if I’m wrong, aristocratic tutoring isn’t that One Thing: there are too many counterexamples.
# Mantic Monday 3/21/22

### Warcasting

Changes in Ukraine prediction markets since [my last post](https://astralcodexten.substack.com/p/ukraine-warcasting?s=w) March 14:

1. [Will Kyiv fall to Russian forces by April 2022](https://www.metaculus.com/questions/9939/kyiv-to-fall-to-russian-forces-by-april-2022/)?: **14% —→ 2%**
2. [Will at least three of six big cities fall by June 1?](https://www.metaculus.com/questions/9941/russia-takeover-of-ukrainian-cities-by-june/): **70% —→ 53%**
3. [Will World War III happen before 2050?](https://www.metaculus.com/questions/2534/will-there-be-a-world-war-three-before-2050/): **21% —→ 20%**
4. [Will Russia invade any other country in 2022?](https://www.metaculus.com/questions/9930/russian-invasion-of-another-country-in-2022/): **10% —→ 7%**
5. [Will Putin still be president of Russia next February?](https://www.metaculus.com/questions/10002/presidency-of-vladimir-putin-on-feb-1-2023/): **80% —→ 80%**
6. [Will 50,000 civilians die in any single Ukrainian city?](https://www.metaculus.com/questions/10001/civilian-deaths-in-ukrainian-cities-in-2022/): **12% —→ 10%**
7. [Will Zelenskyy no longer be President of Ukraine on 4/22](https://polymarket.com/market/will-volodymyr-zelenskyy-be-ukraines-president-on-april-22-2022)?: **20% —→ 15%**

If you like getting your news in this format, subscribe to the [Metaculus Alert bot](https://twitter.com/MetaculusAlert) for more (and thanks to ACX Grants winner Nikos Bosse for creating it!)

### Insight Prediction: Still Alive, Somehow

[Insight Prediction](https://insightprediction.com/markets?category=5) was a collaboration between a Russia-based founder and a group of Ukrainian developers. So, uh, they’ve had a tough few weeks. But getting better! Their founder recently announced on [Discord](https://discord.com/invite/wbEm8XWwSf):

> I myself am (was?) an American professor in Moscow. I have been allowed to teach my next course which starts in 10 days online, and so I am moving back to the US on Sunday, to Puerto Rico. Some of our development team is stuck in Ukraine. I've offered to move them to Puerto Rico, but it's not clear they'll be able to leave the country anytime soon. Progress with the site may be slow, but obviously that's not the most important thing now.

And:

> I am now out of Russia, and on to Almaty, Kazakhstan. The people here are quite anti-war. I fly to Dubai in a bit. It was surprisingly difficult (and expensive) to book a ticket out of Moscow after all the airspace closures.

And:

> I have made it to Puerto Rico! Thus, the base of Insight Prediction is now here in the US.

Out of the frying pan (Vladimir Putin) and into the fire (the CFTC). Still, welcome, and glad to hear you’re okay.

Meanwhile, the Ukrainian development team continues to do good (can I say “heroic”?) work:

> Our devs have fixed this issue [a bug when buying multiple shares], and are now pushing it to our beta server. Our main dev did this on his birthday, and with the threat of a nuclear accident at a nearby nuclear power plant bombed by the Russians.

The upshot of all of this is that Insight is one of the leading platforms for [predicting the Ukraine War](https://insightprediction.com/markets?category=5) right now:

Now that there are two good-sized real-money prediction markets, we can compare them. For example, the first question, on Putin, is at 79.5%, which is reassuringly close to [the same question on Polymarket](https://polymarket.com/market/will-vladimir-putin-remain-president-of-russia-through-2022), at 76%.
On the other hand, the third market (will Russian troops enter Kyiv?, currently at 52%) is almost a direct copy of [this Metaculus question](https://www.metaculus.com/questions/9459/russian-troops-in-kiev-in-2022/), which is currently at 92%. Why? The only difference I can find is that Insight requires two media sources to resolve positively and Metaculus only one - surely people don’t think there’s a 40% chance troops will enter Kyiv but only one source will report it?

### Further Insight

Also in Insight News: after escaping Russia, the founder of Insight has decided to go public with his real identity:

> The founder of Insight Prediction is Douglas Campbell, who holds a Ph.D. from the University of California, Davis. He is a former Staff Economist on President Obama's Council of Economic Advisors, and prior to that was a Modelling Analyst in the targeting department of the Democratic National Committee. Currently he is also an Assistant Professor at the New Economic School.

It’s exciting to have someone so influential involved in the field! Campbell has since written [a blog post apologizing for and explaining](https://ourworldinprediction.net/predicting-the-russia-ukraine-war-a-mea-culpa-from-insight-prediction/) Insight’s relatively inaccurate predictions about the beginning of the war:

> [Insight Prediction](http://www.insightprediction.com) had its first successful market, at least financially, on whether Russia would Invade Ukraine, doing nearly $400,000 in volume. I would say that while the market was successful beyond what we would have expected, we still walked away unhappy with how the market went in terms of accuracy. Metaculus at one point had something like a 50% higher probability of an invasion . . . experts were saying a war was imminent. On the eve of battle, we were only at 78%. They were right and we were wrong. What happened?
>
> Part of the story is that the vast majority, and at the time of resolution, quite possibly all of the “No” shares in our market were purchased by one whale. (This user’s nickname comes from the famous Ukrainian poet Gogol’s novel “Taras Bulba”. I would not usually share this information, but it is already public.) When you have a new platform with only a handful of traders, one whale can move the price a lot.
>
> However, he doesn’t deserve all of the blame. I could have moved the price myself, but I did not. Ultimately, the responsibility for our predictions rests with me. Aside from providing some liquidity on both sides of the market, which left me with too many “No” shares that I didn’t unwind until it was blindingly obvious, I basically missed out. Other participants didn’t move the market as much as they could have either. So, as good as someone who made $20,000 might feel, they missed a golden opportunity to have made a quick $200,000, so maybe they shouldn’t feel so great about the outcome either.

I don’t think anyone should have to apologize for not moving a prediction market enough, but at least that explains what’s going on. Also, RIP Taras Bulba :(

### ACX 2022 Prediction Contest Data

Several hundred of you joined me in trying to predict 70ish events this year. Sam Marks and Eric Neyman kindly processed all the data. You can find everything suitable for public release [here](https://docs.google.com/spreadsheets/d/1t3Nmq5BAYAHmaerw8QeVLDB4LCU7i0aunJMmKQbZcwc/edit#gid=172531079).
If you want the full dataset, including individual-level information about each predictor, you can fill out a request form [here](https://docs.google.com/forms/d/e/1FAIpQLSdh6nr9HqS2nbPKy2OLdWX5I56COaASueVyokMoAjlPFiGC9A/viewform).

What is “aggregate”? Sam and Eric applied a couple of tricks that have improved these kinds of forecasts in the past (there’s a code sketch of the basic recipe at the end of this section):

> First, using the geometric mean of odds rather than the average of the probabilities. See [here](https://forum.effectivealtruism.org/posts/sMjcjnnpoAQCcedL2/when-pooling-forecasts-use-the-geometric-mean-of-odds) for a writeup of why this is a reasonable thing to do.
>
> Second, the idea of extremizing (pushing the aggregate away from the prior -- the *prior* being the information that is common to all forecasters, which in this case is your prediction -- so as to not overweight it). This [works well empirically](https://forum.effectivealtruism.org/posts/sMjcjnnpoAQCcedL2/when-pooling-forecasts-use-the-geometric-mean-of-odds#The__extremized__geometric_mean_of_odds_empirically_results_in_more_accurate_predictions); see also [here](https://forum.effectivealtruism.org/posts/biL94PKfeHmgHY6qe/principled-extremizing-of-aggregated-forecasts) for Jaime Sevilla's writeup of my paper on the topic.
>
> Third, how much to extremize? Apparently a factor of 1.55 is empirically optimal based on data from Metaculus. But also, I think that the more forecasters agree with each other, the less it makes sense to extremize (basically because they're probably all using the same information, so it makes more sense to treat them all as a single forecaster). As far as I know this idea isn't written about anywhere, but I'll probably write about it on my blog sometime. For the spreadsheet I did an ad-hoc thing to take this into account.

If you think you know more tricks, produce your own aggregate (this would be an excellent reason to request the full data) and send it to me. If it turns out more accurate than Sam and Eric’s, I’ll give you some kind of recognition and prize.

The real dataset also has a “market” baseline that I didn’t include above. It’s [mostly based off](https://www.lesswrong.com/posts/rT8AkEcBnfX8ZdSLs/2022-acx-predictions-market-prices) Manifold questions, but Manifold hadn’t really launched yet and most of them only had one or two bets and were wildly off everyone else’s guesses. I don’t think this is going to be a fair test of anything. Now that I know Sam and Eric are willing to put work into this, I’ll figure out something better for next year.

In fact, I think a coordinated yearly question set to use as a benchmark could be really good for this space. Right now there’s no easy way to compare eg Metaculus to Polymarket because they both use really different questions. I’m hoping to get people together next year, come up with a standard question set, and give it to as many platforms (and individuals!) as possible to see what happens.
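Here’s a minimal sketch of the pooling recipe described above: geometric mean of odds, then extremizing. Treating the 1.55 factor as an exponent on the pooled odds is my reading of the standard method, not necessarily the exact thing Sam and Eric did (in particular, it ignores their agreement-based adjustment):

```python
import math

def pool_forecasts(probs, extremize_factor=1.55):
    """Pool probabilities via the geometric mean of odds, then extremize."""
    odds = [p / (1 - p) for p in probs]
    # Geometric mean of odds, computed in log space for numerical stability.
    pooled_odds = math.exp(sum(math.log(o) for o in odds) / len(odds))
    # Extremize: push the pooled odds away from 50-50 by raising to a power.
    pooled_odds **= extremize_factor
    return pooled_odds / (1 + pooled_odds)

# Forecasters at 60%, 70%, and 80% pool to about 71% without extremizing
# (factor 1.0) and about 80% with the 1.55 factor.
print(pool_forecasts([0.6, 0.7, 0.8], extremize_factor=1.0))   # ~0.707
print(pool_forecasts([0.6, 0.7, 0.8]))                         # ~0.796
```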
### Shorts

**1:** AI researcher Rodney Brooks [grades his AI-related predictions from 2018](https://rodneybrooks.com/predictions-scorecard-2022-january-01/).

**2:** [An Analysis Of Metaculus’ Resolved AI Predictions And Their Implications For AI Timelines](https://www.lesswrong.com/posts/oJ6wXoBqxJjHhPLLu/an-examination-of-metaculus-resolved-ai-predictions-and). “Overall it looked like there was weak evidence to suggest the community expected more AI progress than actually occurred, but this was not conclusive.”

**3:** An interesting counterexample: [When Will Programs Write Programs For Us?](https://www.metaculus.com/questions/405/when-will-programs-write-programs-for-us/) The community prediction bounced between 2025 and 2028 (my own prediction was in this range). Even in late 2020, just before the question stopped accepting new predictions, the forecast was January 2027. The real answer was six months later, mid-2021, when OpenAI released Codex. I don’t want to update too much on a single data point, but this is quite the data point. If I had to cram this into the narrative of “not systematically underestimating the speed of AI progress”, I would draw on eg [this question about fusion](https://www.metaculus.com/questions/3727/when-will-a-fusion-reactor-reach-ignition/), where the resolution criteria (ignition) [may have been met](https://www.metaculus.com/questions/3727/when-will-a-fusion-reactor-reach-ignition/#comment-74986) by an existing system - tech forecasters tend to underestimate the ability of cool prototypes to fulfill forecasting question criteria without being the One Amazing Breakthrough they’re looking for.

**4:** New (to me) site, [Hedgehog Markets](https://hedgehog.markets/). So far they mostly just have entertaining tournaments, but the design is pretty and they seem interested in genuinely solving the decentralization problem, so I’ll be watching them.

**5:** New (to me) site, [Aver](https://dev.app.aver.exchange/). So far not many interesting markets, and very crypto-focused, but somehow (?) they have six-figure pools on several questions. Got to figure out what’s going on here.

**6:** New (to me) s . . . you know what, just assume there are a basically infinite number of crypto people starting not-entirely-confidence-inspiring prediction markets right now: If any of them start looking more important than the others, I’ll let you know.
# Open Thread 216

This is the weekly visible open thread. Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed. This one is even-numbered, so go wild - or post about whatever else you want. Also:

**1:** I’ve gotten some requests from Russian/Ukrainian readers in various stages of fleeing their countries. I’m still trying to figure out the best ways to help, but until I do, I’m going to make the [first comment](https://astralcodexten.substack.com/p/open-thread-216/comment/5638593) below a Russian/Ukrainian Community Help Coordination Comment Subthread. If you’re a Russian or Ukrainian trying to escape the war, or someone who can provide help (money? a place to stay? immigration advice?), go there to coordinate.

**2:** Many people’s yearly subscriptions to the blog expired in January and February. If you think you have a subscription, but you didn’t see this week’s Hidden Open Thread or the subscriber-only story “The Onion Knight”, your subscription has expired and you might want to consider resubscribing.
# Highlights From The Comments On Zulresso

Thanks to everyone who commented on [Zounds! It’s Zulresso and Zuranolone](https://astralcodexten.substack.com/p/zounds-its-zulresso-and-zuranolone) and on the followup [Progesterone Megadoses Might Be A Cheap Zulresso Substitute](https://astralcodexten.substack.com/p/progesterone-megadoses-might-be-a?s=w). I’m constantly impressed by the expertise of commenters here and by how much *better* the biomedical comment threads are compared to some of the others. Among the things I learned:

— [Metacelsus](https://astralcodexten.substack.com/p/zounds-its-zulresso-and-zuranolone/comment/5447548?s=w) (who writes the blog [De Novo](https://denovo.substack.com/?utm_source=substack&utm_medium=web&utm_content=comment_metadata)) doubts the price estimates I posted:

> There's no way it costs $10,000 to $20,000 a gram at scale. Those 3 chemical supply companies specialize in having a very large catalog of small quantities of chemicals for biologists to test in their experiments. (I have personally ordered from 2 out of those 3 for my research.) The price they charge per gram is not competitive at all.

He also wrote a longer blog post about the science of progesterone [here](https://denovo.substack.com/p/progesterone-explained?s=r).

— [Douglas](https://astralcodexten.substack.com/p/zounds-its-zulresso-and-zuranolone/comment/5447835) (who writes the blog [A Mindful Monkey](https://mindfulmonkey.substack.com/?utm_source=substack&utm_medium=web&utm_content=comment_metadata)) clears up some mechanism details I missed:

> From Stahl's: 'the precipitous decline in circulating and presumably brain levels of allopregnanolone hypothetically trigger the onset of a major depressive episode in vulnerable women. Rapidly restoring neurosteroid levels over a 60-hour period rapidly reverses the depression, and the 60 hour period seems to provide the time necessary for postpartum patients to accommodate their lower levels'. So the idea is the taper of the steroid is a helpful part.
>
> Also (also Stahl's), there are two GABA-A receptors comprised of different sub-units as you mentioned. Benzodiazepines bind to, cleverly named, benzodiazepine-sensitive GABA-A receptors while allopregnanolone binds to their cousins - the benzodiazepine-insensitive GABA-A receptor. The former is found post-synaptically and involved with phasic, quick bursts of GABA (i.e. useful information processing) while the latter is found extrasynaptically and involved with tonic (i.e. chronic) 'tone' setting of the neuron. So they seem to have very different functions despite both involving GABA.
>
> Stahl's goes on to say that allopregnanolone 'hypothetically could cause more efficient information processing in over excited brain circuits causing symptoms of depression', but that just seems hand-wavy to me.

— [Zutano](https://astralcodexten.substack.com/p/zounds-its-zulresso-and-zuranolone/comment/5450468?s=w) (whose name makes them sound like another novel drug in this class!) debunks my urban legend that the -pam at the end of benzo names (eg “diazepam”) stands for positive allosteric modulator:

> I'm gonna go with urban legend for this one. The early benzos look to me to be chemically named; "azepine" is the word for a 7-membered ring made up of 6 carbon atoms and 1 nitrogen, then "diazepine" is the same but with two nitrogens. The first benzo was chlordiazepoxide (Librium), which if you look at the chemical structure on wikipedia, contains chlorine, diazepine and oxide (the oxygen atom).
> Then next is diazepam, which to me looks like "diazepine" plus "amide" (which is the word for "double-bonded oxygen atom with a nitrogen next door"). 10 years later we get alprazolam, which looks like it was named after the triazole ring (that's the 5-membered ring with 3 nitrogens), but now the "am" suffix is starting to become generic, to emphasise that it's still in the same chemical class as the previous -azepams.
>
> I doubt that the concept of "positive allosteric modulator" existed in 1955 when chlordiazepoxide was invented; in those days drugs were discovered by making random chemicals and feeding them to animals to see what happened. The receptor theory of medchem (i.e. that drugs have a specific biochemical target in the body) is generally credited to James Black and his fellow Nobel laureates, and propranolol (the first drug discovered in the target-based way) wasn't patented until 1962.

**— [Jimmy Steier](https://twitter.com/JimmySteier/status/1501967457136820231)** on Twitter gives more information on tolerance development:

> Key point missing in this post is that ALLO/zulresso mediates tonic GABA inhibitory tone (as opposed to phasic for benzos). I wouldn't touch an exogenous analog of ALLO w/ a ten foot pole. Context on severe issues w/ tolerance and withdrawal: [Tolerance to allopregnanolone with focus on the GABA-A receptor](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3031054/#:~:text=Tolerance%20to%20the%20anaesthetic%20effect,negative%20effects%20in%20the%20Morris)

This paper confirms that women with PMDD or PPD (but not other women) get tolerance to allopregnanolone within the normal course of the menstrual cycle or pregnancy. There’s actually a study showing that these women get less effect from benzos during this time, since the allopregnanolone and benzos have cross-tolerance!

**— Thomas Reilly** has a new blog [Rational Psychiatry](https://rationalpsychiatry.substack.com/) where he’s written up [some more info](https://rationalpsychiatry.substack.com/p/10-facts-every-psychiatrist-should?r=g83wq&s=w&utm_campaign=post&utm_medium=web&utm_source=direct) on premenstrual dysphoria and progesterone. For example:

> In an elegant [series of experiments](https://pubmed.ncbi.nlm.nih.gov/9435325/), Peter Schmidt and David Rubinow gave participants a medication (leuprolide) that suppresses oestrogen and progesterone. This eliminated PMDD symptoms. What's more, when they reintroduced either oestrogen or progesterone, symptoms returned.

And:

> An [RCT](https://pubmed.ncbi.nlm.nih.gov/34597899/) (n=206) of isoallopregnanolone (sepranolone) in PMDD did not beat placebo for the primary outcome. However, blocking allopregnanolone production with the 5α-reductase inhibitor dutasteride does seem to work, in a [small RCT](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4748434/) at least.

See [here](https://rationalpsychiatry.substack.com/p/10-facts-every-psychiatrist-should/comment/5501658) for further discussion of the failed trial.

— [Benjamin Jolley](https://astralcodexten.substack.com/p/zounds-its-zulresso-and-zuranolone/comment/5471185) (who writes the blog [Ramblings Of A Pharmacist](https://benjaminjolley.substack.com/?utm_source=substack&utm_medium=web&utm_content=comment_metadata)) has a couple of interesting comments:

> We've been giving progesterone for like 4 decades at the compounding pharmacy where I work, and we've been talking about its metabolism to allopregnanolone for about 20 years.
>
> Notably, the route of administration MATTERS.
> A fraction of oral progesterone certainly seems to get metabolized to allopregnanolone and have -pam like effects, so a lot of HRT docs will write oral progesterone for bedtime administration as it seems to help with calming prior to sleep (it's -pam like, as you lay out, so it's not dissimilar from giving a z-drug for sleep pharmacologically).
>
> Topically administered and injected progesterone doesn't really seem to have comparable effects, likely due to bypassing the portal circulation and thereby the first-pass effect. At least that's how this works in my head. The standard of evidence in compounding land is a little lower than in big PhRMA manufacturing land.
>
> Progesterone is also really cheap (at least in comparison to the insanity of brexanolone IV).
>
> the chemical difference between zuranolone and brexanolone is more substantial than the chemical difference between testosterone and estradiol. So... maybe it has the same effects, but maybe it's too far removed.

— Many commenters took progesterone for one reason or another and said it made them feel either more or less anxious; the thread starts [here](https://astralcodexten.substack.com/p/progesterone-megadoses-might-be-a/comment/5483488) and keeps going for a bit. For example, [Reader](https://astralcodexten.substack.com/p/progesterone-megadoses-might-be-a/comment/5485305):

> I take progesterone supplements and find it’s an extremely delicate balance. My natural levels (almost 0, my body struggles to make it for some reason) make me incredibly anxious. 100mg oral supplement makes me feel great and completely erases my anxiety. 200mg or more makes me anxious and slightly depressed. Bodies are delightfully strange and complicated things. :)

And [Angela](https://astralcodexten.substack.com/p/progesterone-megadoses-might-be-a/comment/5484883):

> I have taken progesterone after several pregnancies to help with post partum depression. If I recall correctly, 100 mg/day does the trick for me. The effect is immediate and astonishing: you go from terrified the baby will die any second, to feeling totally normal. Most women who do this wean themselves off it by 4-6 weeks postpartum in some way. The biggest two side effects are that it can make you very pleasantly sleepy - I can't imagine waking up to take it every two hours - and that it might keep you from losing weight/cause weight gain. In my own case, I basically just stop taking it once life is returning to normal in other ways (again, after 4-6 weeks).

[Leah Libresco Sargent](https://astralcodexten.substack.com/p/progesterone-megadoses-might-be-a/comment/5483546) (who writes the blog [Other Feminisms](https://otherfeminisms.substack.com/?utm_source=substack&utm_medium=web&utm_content=comment_metadata)):

> I took 200-400mg progesterone for fertility reasons without noticeable side effects (other friends found lower doses intolerable). Catholic fertility doctors are more interested in progesterone than the mainstream, and I know friends who were prescribed oral progesterone for PPD/PPA at lower doses than Scott considers here who felt it made a huge, immediate difference. Here's the major institute behind this: https://popepaulvi.com/

— [Slimepriestess](https://astralcodexten.substack.com/p/progesterone-megadoses-might-be-a/comment/5487198) on side effects:

> I take 200mg of oral progesterone as transfemme hormone therapy, I've been on it for a while and I and a good number of other transfemmes I know have experimented with taking high doses of it recreationally.
>
> The claim that progesterone doesn't have any side effects at the doses you're talking about is very contrary to a lot of testimonials as well as pharmacological effects that should be kind of obvious. The metabolite you're trying to maximize here is a GABA-A receptor agonist, which is going to give it somewhat intoxicating, sedative effects heading towards nauseating and disorienting as dosage trends upwards. It can also significantly spike your libido. These aren't totally bad effects and they might even be a part of what you want for treating PPD, but saying "there's no side effects" is just not true.
>
> There's also multiple kinds of progesterone on the market and non-bioidentical progesterone is much worse than bioidentical, when I was on it for a month it made me suicidally depressed, taking a high dose of that might be legitimately dangerous to someone's mental health.

Yeah, this has me confused about why pregnancy isn’t more sedating than it is.

In general, I appreciated everyone’s progesterone-related thoughts. I spent most of my career in a clinic with ten psychiatrists. Nine of them were women, the tenth one was me. You can imagine who all the women with female-hormone-related problems *didn’t* want to see, so my hands-on experience here is more limited than usual. Thanks for filling this gap in my education!
# Justice Creep

Freddie deBoer says we’re a [planet of cops](https://freddiedeboer.substack.com/p/planet-of-cops?s=r). Maybe that’s why justice is eating the world.

Helping the poor becomes [economic justice](https://en.wikipedia.org/wiki/Economic_justice). If they’re minorities, then it’s [racial justice](https://www.aclu.org/issues/racial-justice), itself a subspecies of [social justice](http://everythingintheworld). Saving the environment becomes [environmental justice](https://www.epa.gov/environmentaljustice), except when it’s about climate change in which case it’s [climate justice](https://en.wikipedia.org/wiki/Climate_justice). Caring about young people is actually about fighting for [intergenerational justice](https://www.oecd.org/gov/youth-and-intergenerational-justice/). The very laws of space and time are subject to [spatial justice](https://en.wikipedia.org/wiki/Spatial_justice) and [temporal justice](https://www.cambridge.org/core/journals/journal-of-social-policy/article/abs/temporal-justice/C19E923FB188E759B9ABA9E4B6823F56).

I can’t find clear evidence [on Google Trends](https://trends.google.com/trends/explore?date=all&geo=US&q=%22climate%20justice%22,%22environmental%20justice%22,%22intergenerational%20justice%22) that use of these terms is increasing - I just feel like I’ve been hearing them more and more often. Nor can I find a simple story behind why - it’s got to have something to do with Rawls, but I can’t trace any of these back to specific Rawlsian philosophers. Some of it seems to have something to do with Amartya Sen, who I don’t know enough about to have an opinion. But mostly it just seems to be the *zeitgeist*.

This is mostly a semantic shift - instead of saying “we should help the poor”, you can say “we should pursue economic justice”. But different framings have slightly different implications and connotations, and it’s worth examining what connotations all this justice talk has.

“We should help the poor” mildly suggests a friendly optimistic picture of progress. We are helpers - good people who are nice to others because that’s who we are. And the poor get helped - the world becomes a better place. Sometimes people go further: “We should save the poor” (or the whales, doesn’t matter). That makes us saviors, a rather more impressive title than helpers. And at the end of it, people/whales/whatever are saved - we’re one step closer to saving the world. Extrapolate the line out far enough, and you can dream of utopia.

“We should pursue economic justice” suggests other assumptions. Current economic conditions are unjust. There is some particular way to make them just, or at least closer to just. We have some kind of obligation to pursue it. We are not helpers or saviors, who can pat ourselves on the back and feel heroic for leaving the world better than we found it. We are some weird superposition of criminals and cops, both responsible for breaking the moral law and responsible for restoring it, trying to redress some sort of violation. The end result isn’t utopia, it’s people getting what they deserve. (cf. Thomas Jefferson: “I tremble for my country when I remember that God is just.”)

What is “climate justice”? Was the Little Ice Age unjust? What if it killed millions? Is it unjust for Mali to have a less pleasant climate than California? What if I said that there’s a really high correlation between temperature and GDP, and Mali’s awful climate is a big part of why it’s so poor? Climate justice couldn’t care less about any of this. Why not? Hard to say.
Maybe because there’s no violation and no villain.

Is that conflating the sophisticated Rawlsian sense of justice with the vulgar criminal sense? Maybe. But do you think the millions of people talking about \_\_\_\_\_ justice who have never heard of Rawls are somehow avoiding that conflation? I think it’s a [motte-and-bailey](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/): justice - as it’s actually used - is catchy exactly because it *does* draw on criminal justice connotations. I don’t think it’s a coincidence people are talking about “climate justice” at the same time there are 311,000 Google hits for “climate villains”:

(screenshot of search results, slightly edited to avoid repeats)

Also, the international group for pursuing climate justice is called [COP](https://unfccc.int/process/bodies/supreme-bodies/conference-of-the-parties-cop), and this is not a coincidence because nothing is ever a coincidence.

You can’t “help the economy” or “save the poor” merely by harming rich people. Can you get “economic justice” this way? Depends who you ask, but I notice that “getting justice” for a murder involves punishing a suspect a lot more often than it involves resurrecting the victim.

There’s one last disadvantage I’m having trouble putting into words, but which I think is the most important. A narrative of helpers and saviors allows *saints*. It allows people who are genuinely good, above and beyond expectations, who rightly serve as ideals and role models for others. A narrative of justice allows, at best, *non-criminals* - people who haven’t broken any of the rules yet, who don’t suck quite as much as everyone else. You either stand condemned, or you’re okay so far. If it has any real role models, it’s the cop who wins Officer Of The Year, the guy who’s more sensitive to violations and more efficient in punishment than anyone else. Turn this guy into your moral model, and you’ve got, well, the planet of cops.

Here’s a crazy theory: the moral transition from other virtues to Justice mirrors the literary transition from utopian fiction to dystopian. In Utopia, people practice virtues like Charity, Industry, and Humanity, excelling at them and making their good world even better. In Dystopia, Justice is all you can hope for. If I were in [Terra Ignota](https://amzn.to/37DKzKB), my fondest wish would be to excel the same way Sniper, Apollo Mojave, and the other utopian characters excel, bringing glory to my Hive and giving its already-brilliant shine extra luster. But if I were in 1984, my fondest wish would be to bring O’Brien and the others to justice; to watch them suffer, to undo the wound in the world caused by their scheming.

Of course, every society is somewhere in between Utopia and Dystopia, and needs values relevant to both. Justice is a useful lens that I’m not at all trying to get rid of. But when it starts annexing all the other virtues, until it’s hard to think of them except as species of Justice, I do think that’s potentially a sign of a sick society.

(“A sick society? Sounds like you need some [health justice](https://www.wcl.american.edu/impact/initiatives-programs/health/events/healthjustice2020/whatishealthjustice/), [medical justice](https://www.msms.org/About-MSMS/News-Media/apply-now-for-medical-justice-in-advocacy-fellowship), and [wellness justice](https://www.thealchemistskitchen.com/pages/wellnessjustice)!” —— “*You’re not helping!*”)
# Mantic Monday 3/14/22

### Ukraine Warcasting

Changes in Ukraine prediction markets since [my last post](https://astralcodexten.substack.com/p/ukraine-warcasting?s=w) February 28:

1. [Will Kyiv fall to Russian forces by April 2022](https://www.metaculus.com/questions/9939/kyiv-to-fall-to-russian-forces-by-april-2022/)?: **69% —→ 14%**
2. [Will at least three of six big cities fall by June 1?](https://www.metaculus.com/questions/9941/russia-takeover-of-ukrainian-cities-by-june/): **71% —→ 70%**
3. [Will World War III happen before 2050?](https://www.metaculus.com/questions/2534/will-there-be-a-world-war-three-before-2050/): **20% —→ 21%**
4. [Will Russia invade any other country in 2022?](https://www.metaculus.com/questions/9930/russian-invasion-of-another-country-in-2022/): **12% —→ 10%**
5. [Will Putin still be president of Russia next February?](https://www.metaculus.com/questions/10002/presidency-of-vladimir-putin-on-feb-1-2023/): **71% —→ 80%**
6. [Will 50,000 civilians die in any single Ukrainian city?](https://www.metaculus.com/questions/10001/civilian-deaths-in-ukrainian-cities-in-2022/): **8% —→ 12%**
7. [Will Zelenskyy no longer be President of Ukraine on 4/22](https://polymarket.com/market/will-volodymyr-zelenskyy-be-ukraines-president-on-april-22-2022)?: **63% —→ 20%**

If you like getting your news in this format, subscribe to the [Metaculus Alert bot](https://twitter.com/MetaculusAlert) for more (and thanks to ACX Grants winner Nikos Bosse for creating it!)

Numbers 1 and 7 are impressive changes! (it’s interesting how similarly they’ve evolved, even though they’re superficially about different things and the questions were on different prediction markets). Early in the war, prediction markets didn’t like Ukraine’s odds; now they’re much more sanguine. Let’s look at the exact course:

This is almost monotonically decreasing. Every day it’s lower than the day before. How suspicious should we be of this? If there were a stock that decreased every day for twenty days, we’d be surprised that investors were constantly overestimating it. At some point on day 10, someone should think “looks like this keeps declining, maybe I should short it”, and that would halt its decline. *In efficient markets, there should never be predictable patterns!*

So what’s going on here? Maybe it’s a technical issue with Metaculus? Suppose that at the beginning of the war, people thought there was an 80% chance of occupation. Lots of people predicted 80%. Then events immediately showed the real probability was more like 10%. Each day a couple more people showed up and predicted 10%, which gradually moved the average of all predictions (old and new) down. You can see a description of their updating function [here](https://www.metaculus.com/help/faq/#community-prediction) - it seems slightly savvier than the toy version I just described, but not savvy enough to avoid the problem entirely.
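Here’s a quick simulation of that toy version - the numbers are made up for illustration, and Metaculus’s real updating function is more sophisticated than this:

```python
# Toy model: 100 pre-invasion forecasts sit at 80% and are never updated,
# while a few new forecasters arrive each day predicting 10%. The running
# average drifts down slowly and almost monotonically, even though no new
# information arrives after day 0.
predictions = [0.80] * 100   # stale pre-invasion forecasts
new_per_day = 5              # newcomers per day, all predicting 10%

for day in range(1, 21):
    predictions += [0.10] * new_per_day
    average = sum(predictions) / len(predictions)
    print(f"day {day:2d}: community average = {average:.0%}")
# day 1: 77%, day 10: 57%, day 20: 45% -- a smooth decline with no news.
```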
But Polymarket has the same problem: It shouldn’t be able to have technical issues like Metaculus, so what’s up?

One possibility is that, by a crazy coincidence, every day some new independent event happened that thwarted Russia and made Ukraine’s chances look better. Twenty dice rolls in a row came up natural 20s for Ukraine. Seems unlikely.

Another possibility is that forecasters started out thinking that Russia was strong, in fact Russia was weak, and every day we’ve gathered slightly more evidence for that underlying reality. I’m having trouble figuring out if this makes sense. You’d still think that after ten straight days of that, people should say “probably tomorrow we’ll get even more evidence of the same underlying reality, might as well update today”.

A third possibility is that forecasters are biased against updating. A perfect Bayesian, seeing the failures of the Russian advance over the first few days, would have immediately updated to something like correct beliefs. But the forecasters here were too conservative and didn’t do that.

A fourth possibility is that forecasters are biased towards updating *too much*. Ukrainian propaganda is so good that every extra day you’re exposed to it, you become more and more convinced that Ukraine is winning.

[EDIT: Commenter [HouseAlwaysWins](https://astralcodexten.substack.com/p/mantic-monday-31422/comment/5539780) notes “If you plotted a prediction for "will this iodine-131 nucleus have decayed by April 1" you'd also get a roughly linear decline (unless it decayed in which case it would jump up to 100%). Prediction markets are allowed to have "story arcs", so long as the *expected* change is zero.” Some other people make similar good points, which you can find in the comments section.]

### Nuclear Warcasting

A friend recently invited me to their bolthole in the empty part of Northern California. Their argument was: as long as the US and Russia are staring menacingly at each other, there’s a (slight) risk of nuclear war. Maybe we should get out of cities now, and beat the traffic jam when the s#!t finally hits the fan.

I declined their generous offer, but I’ve been wondering whether I made the right call. What exactly is the risk of nuclear war these next few months?

Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by [an absolutely obscene margin](https://www.lesswrong.com/posts/EGHtomuh55375u7RT/forecasting-newsletter-march-2021), “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.

As a service to the community, they came up with a formal forecast for the risk of near-term nuclear war:

> We aggregated the forecasts of 8 excellent forecasters for the question ***What is the risk of death in the next month due to a nuclear explosion in London?*** Our aggregate answer is 24 micromorts (7 to 61) when excluding the most extreme on either side. A micromort is defined as a 1 in a million chance of death. Chiefly, we have a low baseline risk, and we think that escalation to targeting civilian populations is even more unlikely.
>
> For San Francisco and most other major cities, we would forecast 1.5-2x lower probability (12-16 micromorts). We focused on London as it seems to be at high risk and is a hub for the effective altruism community, one target audience for this forecast.
>
> Given an estimated 50 years of life left, this corresponds to ~10 hours lost. The forecaster range without excluding extremes was <1 minute to ~2 days lost. Because of productivity losses, hassle, etc., we are currently not recommending that individuals evacuate major cities.
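Their “~10 hours lost” figure is just micromorts times remaining lifespan; here’s the back-of-the-envelope version (my arithmetic, not Samotsvety’s code):

```python
# Expected hours of life lost = P(death) x remaining life in hours.
micromorts = 24e-6                 # 24-in-a-million chance of death
years_left = 50
hours_per_year = 365.25 * 24

print(micromorts * years_left * hours_per_year)    # ~10.5 hours

# Their 7-61 micromort range maps to roughly 3 to 27 expected hours.
for m in (7e-6, 61e-6):
    print(m * years_left * hours_per_year)
```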
You can read more about their methodology and reasoning in the post on the [Effective Altruism Forum](https://forum.effectivealtruism.org/posts/KRFXjCqqfGQAYirm5/samotsvety-nuclear-risk-forecasts-march-2022), but I found this table helpful:

Along with reassuring me I made the right choice not to run and hide, this is a new landmark in translating forecasting results to the real world. The whole stack of technologies came together: tournaments to determine who the best predictors are, methods for aggregating probabilities, and a real-world question that lots of people care about. Thanks to Samotsvety and their friends for making this happen!

(see [here](https://forum.effectivealtruism.org/posts/KRFXjCqqfGQAYirm5/samotsvety-nuclear-risk-forecasts-march-2022?commentId=DobhCbeQ7XDaRM6hW) for some pushback, disagreement, and back-and-forth)

### Forecasters Vs. Experts

Also from the EA Forum this month: [Comparing Top Forecasters And Domain Experts](https://forum.effectivealtruism.org/posts/qZqvBLvR5hX9sEkjR/comparing-top-forecasters-and-domain-experts), by Arb Consulting (the team also includes one of the Samotsvety members who worked on the nuclear risk estimate). Everyone always tells the story of how Tetlock’s superforecasters beat CIA experts. Is it true? Arb finds that it’s more complicated:

> A common misconception is that superforecasters outperformed intelligence analysts by 30%. Instead: [Goldstein et al](https://goodjudgment.io/docs/Goldstein-et-al-2015.pdf) showed that [EDIT: the Good Judgment Project's best-performing aggregation method][[2]](https://forum.effectivealtruism.org/posts/qZqvBLvR5hX9sEkjR/comparing-top-forecasters-and-domain-experts#fnss1re8ar7gq) outperformed the intelligence community, but this was partly due to the different aggregation technique used (the GJP weighting algorithm performs better than prediction markets, given the apparently low volumes of the ICPM *market*). The forecaster prediction market performed about as well as the intelligence analyst prediction market; and in general, prediction pools outperform prediction markets in the current market regime (e.g. low subsidies, low volume, perverse incentives, narrow demographics). *[85% confidence]*
>
> In the same study, the forecaster average was notably *worse* than the intelligence community.

If I’m understanding this right, the average forecaster did worse than the average expert, but Tetlock had the bright idea to use clever aggregation methods for his superforecasters, and the CIA didn’t use clever aggregation methods for their experts. The CIA did try a prediction market, which in theory and under ideal conditions should work at least as well as any other aggregation method, but under real conditions (it was low-volume and poorly-designed) it did not.

They go on to review thirteen other studies in a variety of domains (keep in mind that different fields may have different definitions of “expert” and require different levels of expertise to master). Overall there was no clear picture. Eyeballing the results, it looks like forecasters often do a bit better than experts, but with lots of caveats and possible exculpatory factors.
Sometimes the results seemed a little silly: in one, forecasters won because the experts didn’t bother to update their forecasts often enough as things changed; in another, “1st place went to one of the very few public-health professionals who was also a skilled Hypermind forecaster.”

They conclude:

> To distinguish some claims:
>
> 1: “Forecasters > the public”
> 2: “Forecasters > simple models”
> 3: “Forecasters > experts”
>
> 3a: “Forecasters > experts with classified info”
> 3b: “Averaged forecasters > experts”
> 3c: “[Aggregated](https://link.springer.com/article/10.1007/s11004-012-9396-3) forecasters > experts”
>
> We think claim (1) is true with 99% confidence[[1]](https://forum.effectivealtruism.org/posts/qZqvBLvR5hX9sEkjR/comparing-top-forecasters-and-domain-experts#fnhxzshtudy4r) and claim (2) is true with 95% confidence. But surprisingly few studies compare experts to generalists (i.e. study claim 3). Of those we found, the analysis quality and transparency leave much to be desired. [The best study](https://link.springer.com/article/10.1186/s12889-021-12083-y) found that forecasters and health professionals performed similarly. In other studies, experts had goals besides accuracy, or there were too few of them to produce a good aggregate prediction.

So, kind of weak conclusion, but you can probably believe some vague thing like “forecasters seem around as good as experts in some cases”. Also, keep in mind that in real life almost no one ever tries to aggregate experts in any meaningful way. Real-life comparisons tend to be more like “aggregated forecasters vs. this one expert I heard about one time on the news”. I’d go with the forecasters in a situation like this - but again, the studies are too weak to be sure!

### Shorts

**1:** [Taosumer reviews](https://taosumer.substack.com/p/on-decentralized-prediction-markets?s=w) my Prediction Market Cube and asks why I don’t have “decentralized” on there as a desideratum. My answer: decentralization is great, but for me it cashes out in “ease of use” - specifically, a decentralized market stays easy to use because the government hasn’t shut it down or banned you. Or as “real money” - the reason Manifold isn’t real-money is because they’re centralized and therefore vulnerable and therefore need to obey laws. Or as “easy to create market” - the reason Kalshi doesn’t let you create markets is partly because it’s centralized and therefore vulnerable and therefore needs to limit markets to things regulators like. I agree that, *because of those second order effects*, decentralization is crucial and needs to be pursued more, and I agree that it’s a tragedy that [whatever happened to Augur] happened to Augur.

**2:** More people make Ukraine predictions: [Maxim Lott](https://maximumtruth.substack.com/p/understanding-russia?s=r), [Richard Hanania](https://richardhanania.substack.com/p/why-forecasting-war-is-hard?s=w) (again), [Samo Burja](https://twitter.com/SamoBurja/status/1499883211748433932) (again), [EHarding](https://www.metaculus.com/accounts/profile/118219/) (possibly trolling?), [Robin Hanson](https://twitter.com/robinhanson/status/1502752807627153414) (sort of)

**3:** Last month we talked about some problems with the Metaculus leaderboard. An alert reader told me about their alternative [Points Per Question leaderboard](https://metaculusextras.com/points_per_question), which is pretty good - although I think different kinds of questions give different average amounts of points so it’s still not perfect.
**4:** Also last month, I suggested [Manifold Markets](https://manifold.markets/home) have a loan feature to help boost investment in long-term markets. They’ve since added this feature: your first $M20 will automatically be a zero-interest loan.

**5:** Related: I’m testing Manifold as a knowledge-generation device. If you want to help, [go bet in the market](https://manifold.markets/ScottAlexander/which-of-these-interventions-will-i) about how I’ll rank interventions in an upcoming updated version of the Biodeterminists’ Guide To Pregnancy.

**6:** [Reality Cards](https://realitycards.io/us/how-it-works) is a new site that combines the legal hassles of prediction markets with the technical hassles of NFTs. You bid to “rent” the NFT representing a certain outcome, your rent goes into a pot, and then when the event happens the pot goes to whoever held the relevant NFT (a toy sketch of the mechanism is below). I’m not math-y enough to figure out whether this is a proper scoring rule or not, but it sure does sound unnecessarily complicated. I imagine everyone involved will be multimillionaires within a week.

**7:** In case a prediction market using NFTs isn’t enough for you, [this article suggests](https://thedefiant.io/opendao-blockchain-prediction-markets-nfts/) that OpenDAO is working on a prediction market *about* NFTs. The article claims they should be done by January, but I can’t find it.
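To make the Reality Cards mechanism concrete, here’s a stripped-down toy version of how I understand their description. All names and numbers are mine, and this ignores the actual NFT, auction, and smart-contract machinery entirely:

```python
# Toy Reality Cards: renting an outcome's card is a running bet that the
# outcome will happen; all rent pools together, and the final holder of
# the winning outcome's card takes the pot.
pot = 0.0
holders = {}   # outcome -> current renter

def rent(user, outcome, daily_price, days):
    """User rents `outcome`'s card for `days` days at `daily_price`."""
    global pot
    pot += daily_price * days
    holders[outcome] = user

# Bidding sets YES rent higher than NO rent, implying traders
# collectively think YES is more likely.
rent("alice", "YES", daily_price=3.0, days=10)
rent("bob", "NO", daily_price=1.0, days=10)

# The event resolves YES: whoever holds the YES card gets everything.
print(holders["YES"], "wins", pot)   # alice wins 40.0
```

Whether rental prices end up tracking probabilities the way share prices do in an ordinary prediction market is exactly the “is this a proper scoring rule” question above, which I won’t pretend to settle here.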
Scott Alexander
50284425
Mantic Monday 3/14/22
acx
# Open Thread 215 This is the weekly visible open thread. Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed. This one is odd-numbered, so be careful. Otherwise, post about anything else you want. Also: **1:** The effective altruist movement [is offering](https://effectiveideas.org/) a $100,000 prize to each of the top five new EA-aligned blogs this year. If you were thinking of writing a blog that touches on EA topics (x-risk, progress, global development, moral philosophy, AI, etc), now’s a pretty good time. **2:** Comment of the week: Steven Ehrbar gives [a theory I’d never heard before for why the US invaded Iraq](https://astralcodexten.substack.com/p/ukraine-thoughts-and-links/comment/5436765?s=w): to unpin US garrisons in Saudi Arabia. There were also good comments on Zulresso, but I’ll probably spin them off into a separate Highlights post. **3:** Other people I missed who correctly predicted the Ukraine war: [Erusian](https://astralcodexten.substack.com/p/open-thread-214/comment/5418227), [Aleksandr Nevzorov](https://www.youtube.com/watch?v=Ia8RFaeIqEk) (link in Russian). **4:** Errata: [swni corrects](https://www.reddit.com/r/slatestarcodex/comments/t6bn6r/what_are_we_arguing_about_when_we_argue_about/hzae1hm/) part of my description of Ramanujan’s mathematical process from [What Are We Arguing About When We Argue About Rationality](https://astralcodexten.substack.com/p/what-are-we-arguing-about-when-we?s=r). Probably not important enough to stick in the Mistakes file, but I’ll stick it here. **5:** There was an ACX Grants winner whom I didn’t describe too clearly in the announcement post because they were still in stealth mode. They’ve asked me to post the following update: > Spellcheck Health is looking for an expert in genetic editing (CRISPR etc.) to ask questions of, and possibly join their startup project. Looking for practical expertise with modern techniques, especially with CRISPR variants that can handle multiple long edits. Deep interest in human gene therapy for curative, longevity, and general optimization purposes a huge plus. Please email Michael at spellcheckhealth@icloud.com if interested.
Scott Alexander
50261407
Open Thread 215
acx
# Progesterone Megadoses Might Be A Cheap Zulresso Substitute Earlier this week [we talked about Zulresso](https://astralcodexten.substack.com/p/zounds-its-zulresso-and-zuranolone?s=w), a new medication for post-partum depression. It works well, but it can only be administered at a few special hospitals, and costs $35,000 per treatment. But Zulresso is a natural metabolite of the female hormone progesterone. What’s stopping people from taking progesterone, waiting for their bodies to metabolize it into Zulresso, and saving $35,000 and a hospital stay? As far as I can tell, nothing. [Andreen et al](https://sci-hub.st/https://www.maturitas.org/article/S0378-5122(05)00345-2/fulltext) give some people a dose of 20 mg progesterone, then measure allopregnanolone levels. They find that the progesterone gets converted into allopregnanolone, with a max plasma concentration of about 8 nmol/L. This is about a fifth of the allopregnanolone level during pregnancy, which is what a course of Zulresso is trying to match. So in theory (and assuming simple, dose-proportional pharmacokinetics) a dose of 100 mg progesterone ought to give the same peak level of allopregnanolone as a Zulresso infusion. The only people I can find who take this to its logical conclusion are [Barak & Glue](https://onlinelibrary.wiley.com/doi/abs/10.1002/hup.2731). They do the same calculation as above much more rigorously, and propose a progesterone regimen that would correspond to the typical Zulresso infusion. You would have to be very careful to get the timing right, since the difference between curing post-partum depression and causing it is whether you taper \*off\* the high progesterone levels gradually or crash down all at once. The regimen would require a total of 7000 mg progesterone over ~3 days. 7000 mg of progesterone [costs $10.94 in the United States](https://www.goodrx.com/progesterone?dosage=100mg&form=capsule&label_override=progesterone&quantity=70&sort_type=popularity). This would be quite a lot of oral progesterone by normal standards - there’d be a part in the middle where you take 42 pills over a 24 hour period - but I think it would end up simulating the natural hormone levels of pregnancy. If pregnancy doesn’t have a given side effect, I don’t think this regimen should have that side effect either. The main obstacle here seems to be that a q2h (every-two-hours) dosing schedule doesn’t leave a lot of time for sleep. But given that these are postpartum women, they’re probably getting up every two hours in the middle of the night anyway; I’m not sure having to take the progesterone makes it any worse. In the unlikely event that they do want more than two hours of sleep, I bet there are clever things you can do with extended-release progesterone formulations. Barak & Glue weren’t able to test their regimen, but the logic behind it seems pretty strong. And [here’s](https://astralcodexten.substack.com/p/zounds-its-zulresso-and-zuranolone/comment/5471185) a comment by a compounding pharmacist, saying “we've been giving progesterone for like 4 decades at the compounding pharmacy where I work, and we've been talking about its metabolism to allopregnanolone for about 20 years.” If this worked, it would let the health system replace a $35,000 drug with a $10 one - or let patients who could never afford the $35,000 drug get the treatment at all. I’m not optimistic; parts of the FDA approval system, the insurance authorization process, and doctors’ prescribing practices all push against ideas like this.
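For concreteness, here’s the back-of-the-envelope arithmetic in one place - a minimal sketch, assuming the same simple dose-proportional pharmacokinetics as above:

```python
# Dose-scaling sketch using the numbers from Andreen et al and Barak & Glue.
# Assumes peak allopregnanolone scales linearly with the progesterone dose -
# the post's simplification, not established pharmacology.

test_dose_mg = 20            # Andreen et al's progesterone dose
peak_nmol_L = 8              # resulting peak allopregnanolone
pregnancy_peak_nmol_L = 5 * peak_nmol_L   # that peak was ~1/5 of pregnancy levels

# Dose needed to match the pregnancy-level peak that Zulresso targets
matching_dose_mg = test_dose_mg * pregnancy_peak_nmol_L / peak_nmol_L
print(matching_dose_mg)      # -> 100.0 mg

# Barak & Glue's full tapering regimen: ~7000 mg over ~3 days
total_mg, pill_mg = 7000, 100
print(total_mg // pill_mg)   # -> 70 capsules, about $10.94 total at GoodRx prices
```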
But it’s not impossible, and I hope some researcher will eventually try it. From your lips to God’s ears!
Scott Alexander
49659230
Progesterone Megadoses Might Be A Cheap Zulresso Substitute
acx
# Advice For Unwoke Academic? An academic recently asked me for advice. A lucky career development has now made him almost un-fire-able, and he wants to join the fight for academic freedom. We talked about two different strategies: * **Fabian Strategy:** Become a beloved pillar of his college community. Volunteer for all those committees everyone always tries to weasel out of. When some wokeness-related issue comes up - merit vs. diversity hiring, wokeness-studies class requirements for majors, firing professors who say unwoke things, etc - use his reputation and position to fight back. Kindly but firmly make it clear that he opposes wokeness, and that other academics in the same position are not alone. Occasionally, when the college administrators make some extreme and obvious overstep - something “we’ve cancelled all yoga classes because they’re cultural appropriation”-level unpopular - escalate it, make sure everyone in the world hears about it, then claim the easy victory when they back down. * **Berserker Strategy:** Pick fights. Literally *pick* the fights - study up on college policy, get to know the administrators well enough to understand which policies they’re forced to follow and which ones they’ll cave on immediately, learn the relevant laws, lawyer up, be 99% sure he can win any fight he picks - but then pick fights. Invite controversial speakers, knowing that there will be big protests. Then make sure there are lots of cameras around as hundreds of college students hurl garbage and expletives at some kindly old sociologist who said biological sex was real one time or whatever. Do this consistently, in a way that probably makes him lots of enemies and ensures he’ll never get any position of power, but which keeps this issue in front of everyone’s eyeballs. Make sure that everyone sees him successfully standing up to the mob, having his speakers speak, and continuing to be employed and happy. If the college tries to shut him down, sue them and win, in a way that will make colleges more reluctant to shut people down in the future. Here are some of the points our discussion touched on: **What Message Does A Hard-Won Victory Send?** Suppose this guy does invite a controversial speaker: there are angry protests, the college tries to fire him, he sues the college and wins, and in the end his speaker speaks and he remains employed. If this case makes the news and helps set everyone’s expectations, what message does it send to the average academic? It could be “have hope, it’s possible to win a fight against wokeness”. Or it could be “if you offend woke people, you’ll have to deal with angry mobs and a long court case; sure, you’ll win in the end, but it sounds horrible”. **Were The George Floyd Protests An Example Of Woke Power Or Woke Overreach?** If you don’t live in a blue state, take it from me - the original George Floyd protests were a *weird* time. Overnight, every one of your neighbors put up Black Lives Matter signs on their lawn, sometimes multiple signs per house. Every business had “Justice For George Floyd” signs in the windows. Sometimes random unrelated apps you used for laundry or something would sprout pop-ups saying “Did you know the police are bad? Here’s where you should donate.” The usual cancel culture intensified by an order of magnitude.
In [this post](https://astralcodexten.substack.com/p/the-rise-and-fall-of-online-culture), I thought of it as rallying a previously flagging social justice movement, allowing it to make a giant show of strength, briefly cow everyone, and intimidate anyone attempting change. My interlocutor noticed some of the same things, but said lots of previously woke people had secretly freaked out at exactly how strong that show of strength was, and developed doubts - they’d previously bought into the “wokeness is the underdog” narrative, and only then noticed how much power and control it was grabbing for itself. He thinks a world where the protests had never happened would be woker than our current world right now. Who cares? It’s relevant to the strategic discussion. If occasional Wokeness States Of Emergency rally and empower the woke, we should probably prefer Fabian to Berserker. If the State Of Emergency actually helps sow doubts, we should prefer Berserker. **What Would Convince You To Be Woke?** Convincing a woke or on-the-fence person to become unwoke seems much like the mirror image of convincing an unwoke or on-the-fence person to become woke. So suppose you are unwoke or on the fence; what might push you in the woke direction? For me, seeing actual injustices against minorities makes me more woke, and seeing woke people be stupid and unnecessarily combative makes me less woke. Insofar as the Fabian Strategy implies signal-boosting actual injustices against academic freedom, and the Berserker Strategy implies doing things that woke people interpret as stupid and unnecessarily combative, this seems to favor the Fabian Strategy. **How Should We Assess New Atheism?** Our last monoculture was the conservative/Christian hegemony of the mid-to-late 20th century. It gradually crumbled, but its most confrontational detractors - the New Atheists - ended up [being judged harshly by history](https://slatestarcodex.com/2019/10/30/new-atheism-the-godlessness-that-failed/). Should we care about this? Maybe the New Atheists were an epiphenomenon who didn’t contribute at all to the decline of Christianity in America. Or maybe they contributed some appropriate amount they can be proud of, and we shouldn’t care whether or not people disliked them afterwards. If you were giving strategic advice to Richard Dawkins in 2005, would you have told him to tone it down? How analogous is the current situation? **How Did Other Protest Movements Solve This Problem?** People like to contrast the soft-spoken MLK with the more radical Malcolm X, but probably *both* of them fall on the Berserker side of our dichotomy above. Does that mean our dichotomy is missing something? Did anyone do Fabian for civil rights? The best example I can think of is the NAACP [passing over lots of less-sympathetic test cases](https://www.npr.org/2009/03/15/101719889/before-rosa-parks-there-was-claudette-colvin) before settling on Rosa Parks for fighting bus segregation. Maybe a more to-the-point retort is that people who employ the Fabian Strategy successfully don’t make the news. Or make the news as Congressman #26 Who Voted For Civil Rights Legislation. On the other hand, some good examples of Fabian Strategy successes are the [actual Fabians](https://slatestarcodex.com/2018/04/30/book-review-history-of-the-fabian-society/) and the [neoliberals](https://www.effectivealtruism.org/articles/ea-neoliberal).
**Does Poorly-Planned Resistance Provide A Cover For Crackdowns?** As much as I hate to say it, the most lasting legacy of the Canadian trucker protests might be normalizing the freezing of protesters’ bank accounts. A cynical take: the government had wanted this power forever, but was afraid of potential blowback. The truckers were a scary enough bogeyman that it was able to justify taking the power to the populace. Although the government has officially relinquished the legal power, the cultural license depends on things like precedent, which now exists. While I don’t morally blame the truckers for this, from a strategic point of view, they sure did cause it to happen. Seems like another consideration pushing against making people angry without a clear plan for why it will be worth it and how it will enact real change. **What’s The Current Default Trajectory?** We both agree that wokeness is currently in a weird place; ascendant in all measurable ways, but with cracks beginning to show (think Christianity in 1990-something). On the barberpole model of fashion, some of the highest levels of the barberpole (eg private discussions between knowledgeable people) are starting to turn unwoke in a way that suggests other levels might follow. More people than ever mouth agreement, but it’s increasingly unclear how many really believe. The default trajectory is probably something like wokeness ending up where American Christianity is now - still sort of powerful in its way, but not hegemonic, and non-Christians can still be fine and have religious freedom in most places. What we both worry about is a “soft landing” where ordinary people lose interest and go away, but all the legal apparatus of wokeness - the diversity bureaucracies, forced quotas, normalization of censorship, and the like - sticks around by inertia without any reckoning or reconsideration. (We’re no longer in a state of constant panic about terrorism, and some people may even grudgingly admit that the previous panic about terrorism went a little too far - got kind of witch-hunty at times. But we still have to throw away our water bottles and take off our shoes before getting on a plane, and we probably always will.) Does this suggest the Berserker Strategy, to make sure the issue doesn’t fade quietly into the night? Is the goal to have different parts of society out of sync with each other? For example, if the voting public is very anti-woke, but universities are very woke, maybe this will keep wokeness in the public eye long enough for the government (by then controlled by anti-woke people) to reckon with it and change the relevant laws. Or is that playing with fire? Anyway, I told this person I would give him good advice the only way I know how, which is asking you people. Tell me what you think!
Scott Alexander
48318122
Advice For Unwoke Academic?
acx
# Zounds! It's Zulresso and Zuranolone! #### 1: What is Zulresso? Wikipedia [describes](https://en.wikipedia.org/wiki/Cthulhu_Mythos_deities) Zulresso as “A bat-winged, armless toad with tentacles instead of a face... ” - no! sorry! That’s Zvilpogghua, one of the Great Old Ones from the Lovecraft mythos. Zulresso is the brand name of allopregnanolone (aka brexanolone), a new medication for post-partum depression. It’s interesting as a potential missing link between hormones and normal mood regulation. #### 2: What do you mean by “missing link between hormones and normal mood regulation?” Allopregnanolone is a naturally-occurring metabolite of the female hormone progesterone. In 1981, [scientists found](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC320231/pdf/pnas00659-0078.pdf) it was present in unusually high concentrations in the brain (including male brains), suggesting that maybe the brain was making it separately and using it for something. They did some tests and found that it was a positive allosteric modulator of GABA. ([source](http://pharmwarthegame.blogspot.com/2019/03/zulresso-brexanolone-first-drug.html)) GABA is the main inhibitory neurotransmitter; it’s usually associated with relaxation and sedation. A positive allosteric modulator is a chemical that makes receptors respond more strongly to their targets. So “a positive allosteric modulator of GABA” means a chemical that makes the brain respond more strongly to relaxation/sedation signals. Sounds pretty useful! You may do some positive allosteric modulation of GABA yourself sometimes; this is one of the major actions of alcohol. Also of the benzodiazepines, a popular class of psychiatric medication including Ativan (lorazepam), Valium (diazepam), and Klonopin (clonazepam). The “-pam” at the end stands for **p**ositive **a**llosteric **m**odulator! (or maybe that’s just an urban legend, I’ve never found proof either way) The discovery of endorphins (ie endogenous opiates) helped shed light on the brain’s reward system. So the discovery of a sort of endogenous benzodiazepine was pretty exciting. Maybe it’s some kind of master control switch for anxiety or something? Psychiatrists only know two ways to respond to an exciting new thing: publishing breathless studies claiming that it’s the true mechanism of action for SSRIs, and publishing breathless studies claiming that it’s the true biological basis of depression. This time, they did both: see eg [Fluoxetine elevates allopregnanolone levels in female rat brain](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4290723/) and [The role of allopregnanolone in depressive-like behaviors](https://www.sciencedirect.com/science/article/pii/S2352289520300084). The basic theory was that stress / social isolation / etc → decreased allopregnanolone → something something BDNF and synaptogenesis → depression. And SSRIs → increased allopregnanolone → something something BDNF and synaptogenesis → recovery! Change the word “allopregnanolone”, and that’s *every* theory in psychiatry. But this particular theory had two extra pieces of evidence: premenstrual dysphoric disorder and postpartum depression. Remember, allopregnanolone is a natural metabolite of the female hormone progesterone. Progesterone levels go up during pregnancy and around the ~18th day of the menstrual cycle, then crash back down after delivery and around the ~24th day of the menstrual cycle. Meanwhile, some women get depressed after delivering a baby, or on the ~24th day of their menstrual cycle. Suspicious!
Maybe it’s because their progesterone was getting converted into allopregnanolone, a mood-regulating neurosteroid - so when progesterone crashes, allopregnanolone crashes with it! (why doesn’t every woman get PPD and PMDD? [This study](https://pubmed.ncbi.nlm.nih.gov/26960697/) suggests that women with PMDD have altered sensitivity to allopregnanolone; plausibly people with PPD have some other form of altered sensitivity. In case you have the same question I do: the correlation between PMDD and PPD [is not 100% but still pretty significant](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4588839/)) History of allopregnanolone research ([source](https://sci-hub.st/https://pubmed.ncbi.nlm.nih.gov/32435665/)) The next step was to see if making patients take allopregnanolone could treat these conditions. This is kind of hard, because allopregnanolone is a tough chemical to get into people’s bodies; the traditional method involves sticking an IV into someone and infusing it slowly over several days, and it has to be done in a hospital. Still, [Kanes et al](https://onlinelibrary.wiley.com/doi/10.1002/hup.2576) tried this in 2017. The study was open-label (ie no placebo) and very small (only four women) but appeared to work extraordinarily well. Four post-partum women who qualified as “severely depressed” when they started the infusion progressed to “completely recovered” within twelve hours. Nothing else except *maybe* ketamine had produced results like this before. #### 3: What studies were done on Zulresso? [This followup study by Kanes](https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(17)31264-3/fulltext) was the first real RCT, although it only had 21 patients. In accordance with the venerable First Study Ever tradition, it found really large positive effects on post-partum depression. That encouraged Sage Therapeutics to fund a bigger Phase 3 trial, [Meltzer-Brody (2018)](https://sci-hub.st/https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(18)31551-4/fulltext). In accordance with the venerable Bigger Phase 3 Trial tradition, its results weren’t quite as good as the First Study Ever. But they were still pretty good. One oddity: lower doses worked better than higher doses. This is sometimes a red flag on a study. But this time it seems legit; see “Biphasic Actions At The GABA-A Receptor” [here](https://en.wikipedia.org/wiki/Allopregnanolone#Molecular_interactions) for an explanation. Both studies also evaluated side effects. These were generally mild, but two people (about 2% of the study population) lost consciousness. Nothing seemed wrong with them, and researchers mostly attributed this to allopregnanolone being a sedating drug. If you sedate people too hard, they pass out. Faced with these results, the FDA approved allopregnanolone for post-partum depression, but subjected it to a REMS (Risk Evaluation And Mitigation Strategy) - basically, doctors who want to prescribe it will need to take special courses and do extra paperwork. This kind of surprised me - there are plenty of sedating drugs that make you pass out in overdose. Also, since patients will be getting it IV, there will probably be a nurse around to check if they passed out and take appropriate actions if so. But the FDA really likes putting restrictions on things, and I guess this was a free chance for them to do that. #### 4: Is Zulresso freely available at a doctor’s office near me?
It’s *possible* to get Zulresso, but *really hard.* Because Zulresso is an IV infusion lasting four days, you need to spend four days somewhere that people can put an IV into you and monitor it. Realistically that means a hospital or some other big medical institution. So this is only available for inpatients. Because of the REMS (extra certification and paperwork), most hospitals aren’t interested. You can find a list of ones that are [here](https://www.zulressorems.com/#Public/HealthcareSettingLocator) - it looks like there are about 89 locations in the US with the right certification. Last but not least, a four-day course of Zulresso costs $35,000 for the medication itself, plus much more for the four-day hospitalization it takes to receive it. As usual, insurers will cover it iff you can document you’ve tried lots of other stuff first. #### 5: Hold on, does it *really* cost $35,000? Oho, I see you’ve played the “pharma price analysis” game before. But this time I think the price might actually be defensible. Chemical supply companies ([1](https://www.selleckchem.com/products/allopregnanolone.html), [2](https://www.chemdirect.com/p/ChemD-301-S5805-25mg-516-54-1/allopregnanolone-25mg-250914)) generally sell allopregnanolone for $10,000 to $20,000 a gram. (I found [one company](https://www.labdepotinc.com/p-62454-allopregnanolone?gclid=CjwKCAiAyPyQBhB6EiwAFUuakloKON-KKGdWMz9YrR5LgXWgvAmLoDGhrHWYlLnPgzas9nw8ie0SfhoCy6QQAvD_BwE) with a much lower price, but I’m suspicious and am going to dismiss them as an outlier). The usual dose of allopregnanolone is 60 ug/kg/hour x 60 hours, which for a 60 kg person comes out to about 0.22 g - call it a quarter-gram. Getting that amount from the chemistry supply store would cost about $2,500 - 5,000. I assume pharma-grade allopregnanolone is more expensive than chemistry-store-grade, so it wouldn’t surprise me if a price in the low five-figures was justified by manufacturing alone. Isn’t it still a pretty good deal to find an endogenous neurosteroid, do one or two studies confirming it’s great, produce it for the low five figures, then sell it for the mid five figures? I think maybe not. This drug has a *terrible* value proposition. Post-partum depression is one of the rarer psych conditions. Most people with PPD won’t check into a hospital and pay $35,000 for a drug infusion. And the people who do will get the drug infusion, feel better, and never need it again (at least until they have another kid) - unlike SSRIs, where you can keep charging for monthly prescriptions forever. Sage Therapeutics, the pharma company that owns the patent on Zulresso (and nothing else - this is their only drug!) [has done terribly](https://www.sickeconomics.com/2021/06/18/sage-therapeutics-stock-performance/). Their stock is in the doldrums, they almost went bankrupt, and they survived only with the help of a cash infusion from a bigger pharma company. I think this confirms a general trend where at least some expensive medications are pricey because of fundamentals (including regulatory fundamentals) and not just pharma companies making obscene profits.
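For the curious, here’s that raw-material arithmetic as a quick sketch (the flat 60 ug/kg/hour rate and the 60 kg patient are simplifications - the real infusion ramps up and back down):

```python
# Back-of-the-envelope cost of the raw allopregnanolone in one Zulresso course.

rate_ug_per_kg_hr = 60       # simplified average infusion rate
weight_kg = 60               # assumed patient weight
duration_hr = 60             # total infusion time

total_g = rate_ug_per_kg_hr * weight_kg * duration_hr / 1e6
print(total_g)               # -> 0.216 g, roughly a quarter-gram

# Chemical-supply prices quoted above: $10,000 - $20,000 per gram
for usd_per_g in (10_000, 20_000):
    print(f"${total_g * usd_per_g:,.0f}")  # -> $2,160 and $4,320
```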
#### 6: Hold on, how is allopregnanolone different from benzodiazepines? Remember, allopregnanolone is a positive allosteric modulator of GABA, much like benzodiazepines such as Xanax. But Xanax is cheap ($10 for 30 pills). And you can get it at any local pharmacy (plus sometimes on street corners). What’s so special about allopregnanolone that you should pay $35,000 and go into the hospital to get it? The *official* answer is “allopregnanolone modulates GABA differently from benzodiazepines”. For example, [this paper](https://www.intechopen.com/chapters/17582) says that: > Allopregnanolone allosteric modulation of the action of GABA at GABA-A receptors is much less selective than that of benzodiazepines, which are relatively inactive at α4- or α6-containing GABA-A receptors. If you really like details about receptor subunits, [this paper](https://www.frontiersin.org/articles/10.3389/fendo.2011.00044/full) presents the full case. The *skeptic’s* answer is “who knows?” Psych drugs often work for reasons totally different from what we thought. People thought tianeptine was an SSRE for years, until it turned out to be a mild opioid. People thought ketamine was NMDA-ergic for years, until it turned out to be [fill this part in 10 years from now]. Last year a bunch of very smart people [tried to claim](https://astralcodexten.substack.com/p/a-look-down-track-b?s=w) that SSRI effects had nothing to do with serotonin (I think they were wrong). Just because some guy found that Zulresso acts as a GABA-PAM in some test tube doesn’t mean that’s what’s producing the relevant antidepressant effects. The *troll’s* answer is “who says it’s different?” Do benzodiazepines treat depression? Depends who you ask. If you ask benzodiazepine users, [their answer is](https://www.drugs.com/condition/depression.html?sort=rating&order=desc&page_number=1&page_size=25&category_id=0&include_rx=1&include_otc=1&show_off_label=1&only_generics=0&page_all=0&submitted=0&hide_off_label=0#sortby) “yes, definitely”. If you ask drug warriors, [their answer is](https://www.psychiatryadvisor.com/home/topics/mood-disorders/depressive-disorder/benzodiazepine-monotherapy-often-used-for-depression-contrary-to-guidelines/) “Addictive Substances May Make You Temporarily Feel Good, But They Are Not A Responsible Treatment Option”. If you ask the research literature, [it gives vague indeterminate answers](https://www.karger.com/Article/Fulltext/486696), as always. But nobody has ever said benzodiazepines instantly and miraculously cure depression, so how come allopregnanolone seems to do that? A true troll would point out that we probably give allopregnanolone at much higher doses - 2% of allopregnanolone patients were sedated so hard they lost consciousness, whereas this is exactly the sort of side effect I try to avoid when calculating benzodiazepine doses. Maybe if you gave postpartum women an infusion of 300 mg Valium, and maximized your placebo effect by calling it the hot new thing, they’d do pretty well too (several days later, after recovering consciousness). I think the troll answer would be hilarious, but I don’t really want to defend it as correct; if I had to bet, I’d say the official explanation is the right one. #### 7: Hold on, why can’t we just give people progesterone and let them metabolize it into allopregnanolone? This turned out to be an interesting enough rabbit hole that I’m going to spin it off into another post later this week. #### 8: Hold on, people have lots of allopregnanolone when they’re pregnant, right? And then post-partum depression happens when they give birth, and their allopregnanolone level drops. So if you give someone an infusion of allopregnanolone, and then take them off it, that’s a hormonal simulation of giving birth, ie the same thing that caused the problem in the first place?
How is that good? Oh, you think you’re clever, do you? What you failed to consider is . . . I didn’t end that sentence because I can’t find anything in the literature addressing this question. But the difference might be that the infusion schedule ramps up gradually, peaks, and then ramps down gradually, which is more of a soft taper than the sudden crash of birth. If anyone knows more about this, please let me know. [EDIT: see [this comment](https://astralcodexten.substack.com/p/zounds-its-zulresso-and-zuranolone/comment/5447835)] #### 9: Is allopregnanolone addictive? No, because good luck getting addicted to a $35,000-per-dose chemical. We should probably expect allopregnanolone to be addictive, by analogy to other GABA-PAMs like benzodiazepines and alcohol. But nobody has ever received more than a single dose. You don’t get addicted to benzos after a single pill, or alcohol after a single beer, so in practice AFAIK nobody has ever gotten addicted to this. Or who knows, maybe it’s not addictive. Remember, allopregnanolone is naturally elevated during pregnancy; pregnancy isn’t addictive. And some scientists claim the brain endogenously uses allopregnanolone as a master regulator of depression and anxiety. *In theory*, if you could give yourself the same amount a non-anxious person’s brain gives them all the time, shouldn’t you be no worse off than that non-anxious person? I don’t know - and remember that your brain also has a lot of endogenous opioids; that doesn’t make the exogenous kind any safer. The Drug Enforcement Administration has made Zulresso a Schedule IV controlled substance, which means they’re putting a few very weak restrictions on it but not worrying too much. #### 10: Does allopregnanolone work for depression that isn’t post-partum? If all psychiatric disorders are secretly allopregnanolone imbalances, then you might expect it to work on all depressions, not just post-partum. I’m sure pharmaceutical executives with dollar signs instead of pupils in their eyes have had this same thought, but I can’t find studies about it. Some of the same people behind the postpartum studies did [a very small, very weak study](https://pubmed.ncbi.nlm.nih.gov/32558402/) on ganaxolone (a close allopregnanolone relative) for persistent depression; it seemed to work, but also caused a lot of sedation (more than in the postpartum trials? Hard to tell). Nobody’s looked into this further since then, maybe because that was around when the pharma companies realized that the 4-day hospital stay and $35,000 price tag made allopregnanolone a financial loser. The evidence from zuranolone (see below) suggests that allopregnanolone might not work very well against regular depression. #### 11: What is zuranolone? Wikipedia [describes](https://en.wikipedia.org/wiki/Cthulhu_Mythos_deities#Table_of_Great_Old_Ones) zuranolone as “a swirling, black vortex revered by the Mutsune Native Americans as a dire death god . . . also worshiped by mysterious servitors known as the Hidden Ones*.”* No! Sorry again! That’s Zushakon, another Great Old One. Zuranolone is Sage Therapeutics’ attempt to turn allopregnanolone into an accessible medication that might actually make them real money. Zuranolone is mostly just allopregnanolone with some extra stuff attached that changes the absorption. Zuranolone can be taken orally, so you don’t have to go to a hospital for four days to receive it IV. It’s potentially less likely to cause loss of consciousness and other undesirable side effects.
And it’s under investigation as a potential treatment for postpartum depression, bipolar depression, regular depression, insomnia, and various movement disorders. (that might seem excessive, but benzodiazepines treat a *lot* of stuff, and if these neurosteroids are kind of like super-benzodiazepines, then this level of optimism might be warranted.) #### 12: Does zuranolone work? Sage Therapeutics answered this question the same way pharma companies answer every question: with a bunch of studies whose names form overly-cute acronyms. We’ll talk here about ROBIN, WATERFALL, MOUNTAIN, and CORAL - though I assure you there are others. [ROBIN](https://jamanetwork.com/journals/jamapsychiatry/fullarticle/2781385) tested efficacy in postpartum depression. Results were positive and relatively impressive, about the same as the weaker allopregnanolone studies. [WATERFALL](https://investor.sagerx.com/static-files/ea6ab8b0-28e7-4f30-abfb-0222381eaf02), [MOUNTAIN](https://www.biopharmadive.com/news/sage-redraws-plans-zuranolone-antidepressant-new-studies/574392/), and [CORAL](https://investors.biogen.com/news-releases/news-release-details/sage-therapeutics-and-biogen-announce-phase-3-coral-study-met) tested results in regular depression. WATERFALL was positive but weak. MOUNTAIN was negative. That scared the pharma company and they hacked CORAL [to be more likely to give positive results](https://www.fiercebiotech.com/biotech/sage-tweaks-primary-endpoint-for-zuranolone-depression-trial-leaving-key-durability). It did give positive results, but the FDA reads the same biotech magazines I do and knows perfectly well what they did, so I don’t know what Sage expects to gain from this. Overall these trials were disappointing. I think the most likely story is that allopregnanolone = zuranolone, both are moderately effective in postpartum depression, and both have much less efficacy in regular depression, probably not literally zero but also not enough to be worthwhile antidepressants (especially considering cost). Might zuranolone be an excellent anti-anxiety medication? You’d think so - it should be at least as good as benzodiazepines, which are excellent anti-anxiety medications. And researchers seem excited about allopregnanolone as a master regulator of brain anxiety. But the studies aren’t promising. ROBIN and WATERFALL incidentally assessed anxiety; ROBIN found good results in its postpartum population, but WATERFALL found poor-to-mediocre results in its regular population. Studies are hard, and sometimes even really effective drugs can have trouble showing strong results. But these aren’t encouraging. #### 13: So where do we go from here? Getting FDA approval for zuranolone for postpartum depression seems reasonable; it’ll probably be cheaper and easier than making people go to the hospital to get allopregnanolone. I’m uncertain about the financials of this for Sage, but since they did the study they hopefully think it’s worth it. Otherwise, I’m not sure. It would have been great if zuranolone had shown robust efficacy against regular depression and anxiety, but this is exactly the kind of great thing that never happens in psychopharmacology (motto: “Disappointing Doctors And Patients Since 1982”). It might be worth throwing it against anxiety disorders and PTSD to see if anything sticks, but I wouldn’t bet on it. 
The research into allopregnanolone as master regulator of brain anxiety states is fascinating, but as far as I know it hasn’t reckoned with the failure of zuranolone to really treat much anxiety. The cynical part of me predicts that once pharma’s done making money off neurosteroids, all of this will die down, and something else that pharma can make more money from will become the master regulator of everything. I expect that the main thing we get out of all this is somewhat better post-partum depression treatment, which might or might not ever become accessible for ordinary people. #### 14: Predictions In the next five years… * Zuranolone gets FDA approval for major depression: 15% * Zuranolone gets FDA approval for postpartum depression: 45% * Zuranolone gets FDA approval for some other condition: 33% * Another neurosteroid gets FDA approval for a psychiatric indication: 35% * Researchers become more convinced that allopregnanolone is an important regulator of brain anxiety states (at least as important as serotonin): 40% * The scientific consensus is still that allopregnanolone works by modulating GABA receptors in a way importantly different from benzodiazepines: 90%
Scott Alexander
49209345
Zounds! It's Zulresso and Zuranolone!
acx
# Ukraine Thoughts And Links *Disclaimer: I am not an expert in international relations or military strategy, which is fine. In democracies, it’s normal and correct for ordinary citizens to have opinions on important world issues, and demands that they not do so are ahistorical and dangerous. Still, take anything I say with a grain of salt.* **1:** **This isn’t “history restarting” . . . yet** Whatever Francis Fukuyama meant by “the end of history”, it probably wasn’t “nothing will ever happen”. But that’s how it’s been interpreted, so fine - let’s run with “nothing will ever happen”. I don’t think the Ukraine War is necessarily a counterexample. Fukuyama wrote in 1992, so he knew that eg the Gulf War could happen. Is this conflict bigger than the Gulf War? I don’t think Ukraine proves that “history has restarted” or “the Pax Americana was a paper tiger” or anything of the sort. These kinds of local conflicts were always allowed. Just ask an Iraqi. Or a Chechen, or an Afghan, or a Syrian, or a Bosnian, or a Crimean, or a Tigrayan - or go back and ask the Iraqi a second time. But the vast majority of people reading this have probably never been personally affected by a war and might not even know anyone who has been. And a billion Chinese, and almost a billion Indians, and almost everyone in South America, and a lot of other people, can say the same. Outside of lulls in history and Pax Somewheres, one nation invading another is met with indifference. Russia’s invasion of Ukraine is being met with internal protests, global condemnation, and crippling economic sanctions. This is what it looks like when a civilization with strong, well-functioning norms against aggressive war encounters a violation and launches an immune response. **2: If the Pax Americana is dead, we need to try something different; but if it’s still alive, we should stick with what works.** The Pax Americana playbook for international norm violations is: the US slaps sanctions on the offender. The EU expresses “concern”. The UN proposes a resolution condemning it, which gets vetoed by whichever Security Council member is most complicit. And the CIA secretly gives Stinger missiles to everyone involved. Lots of people have lost faith in the Pax Americana, which would mean we need something other than the playbook. These people tend either towards extreme isolationism, where sanctions are an aggressive act and even expressions of deep concern violate national sovereignty, or towards extreme bellicosity, where we’re cowards unless someone puts boots on the ground and starts shooting the perpetrator directly. But if the Pax Americana still holds, then the playbook is still the right call. **3: A strong response right now isn’t just about Ukraine, it’s also about the next time.** The West can sanction Russia as much as it wants, and Russia may still win this war. Putin is in too deep to extricate himself easily; it’s become a matter of honor, of “not being seen to be weak”. The point isn’t to save Ukraine, it’s to establish expectations for next time. This is about Taiwan, Georgia, Iran, and all the other places that great powers want to invade but don’t. The next time a big country wants to invade a little one, we want it to remember how much misery everyone inflicted on Russia for the Ukraine conflict and think “no thank you”. That involves inflicting lots of misery on Russia right now, whether or not it wins the current war. This is true not just for the West considering sanctions, but for Ukrainians considering how hard to fight.
Commentators have drawn connections between the Taliban easily ousting the US-backed Afghan government, and Putin expecting an easy victory in Ukraine. Maybe that’s why he took the chance. But the heroic Ukrainian resistance will set the opposite example. Next time someone considers an invasion, they’ll expect such high costs it won’t be worth it. In this sense, the Ukrainians are sacrificing not just for their countrymen, but for the world and for peace itself. **4: International norms may be annoying, but they’re all that stands between us and nuclear war, so we had better respect them** If you only get one thing from this essay, let it be: unless you know something I don’t, establishing a no-fly zone over Ukraine might be the worst decision in history. It would be a good way to get everyone in the world killed. The “usual playbook” can seem half-hearted and faintly ridiculous. “We’re Not Participating!!!” we insist, as we provide guns and missiles to the people who are. It feels like a bunch of arbitrary lines where we act with bluster and bellicosity on one side, then turn into shrinking violets on the other. But those arbitrary lines are what save us from global annihilation. Any sane person wants to avoid nuclear war. But this makes it easy to exploit sane people. If Russia said “Please give us the Aleutian Islands, or we will nuke you”, what should the US do? They can threaten mutually assured destruction, but if Russia says “Yes, we have received your threat, we stick to our demand, give us the Aleutians or the nukes start flying”, then what? No sane person thinks it’s worth risking nuclear war just to protect something as minor as the Aleutian Islands. But then the US gives Russia the Aleutians, and next year they ask for all of Alaska. And even Alaska isn’t really worth risking nuclear war over, so you give it to them, and then the next year… So people who don’t want to be exploited occasionally draw lines in the sand, where they refuse to make trivial concessions even to prevent global apocalypse. This is good, insofar as it prevents them from being exploited, but bad, insofar as sometimes it causes global apocalypse. So far the solution everyone has settled on is lots of very finicky rules about which lines you’re allowed to draw and which ones you aren’t. If there were ever a point at which two nuclear powers disagreed about who was in the wrong, one of them could threaten nuclear war to get that wrong redressed, the other could say they had drawn a line in the sand there to prevent being exploited, and then they’d have to either back down (difficult, humiliating) or start a nuclear war (unpleasant, fatal). So there are a lot of diplomats who have put a lot of effort into establishing international norms on which things are wrong and which things aren’t, so that nobody crosses anyone else’s lines by accident. This system isn’t perfect. Nuclear powers disagree on lots of things. But they usually disagree in a bounded way, where they accuse each other of non-mortal sins and claim the right to non-nuclear responses. Russia crossed a line by invading Ukraine, in a way that gives Russia’s enemies the right to certain kinds of retaliation - arming Ukraine, imposing sanctions, etc. Russia will grumble about this, but it knows it would be in the wrong if it threatened a nuclear response - it would be violating the West’s lines in the sand, the West would have to call its bluff, and it would have to either go ahead with apocalypse or back down in humiliation.
I am not an international relations expert. But every international relations expert whose commentary I have read claims that the extent of Russia’s recent infraction does *not* give the West the right to declare a no-fly zone in Ukraine. The no-fly zone would be an extreme escalation that would, under international norms, allow Russia to threaten World War III if we didn’t back down. Then we would either have to back down, humiliated, or start World War III. In a situation like that, I pray we would have the courage to back down humiliated. But I would prefer not to test our leaders’ courage in this particular way. Also, the last time this happened, in ‘62, it was the Russians who agreed to back down to prevent nuclear war. We owe them one, so this time it’s on us. **5: Really, I can’t emphasize this enough, a no-fly zone means shooting down Russian planes.** America does not actually have a way to prevent people from flying. A no-fly zone means that if they do fly, you shoot them down. It would be more reasonable to call this a “shoot-down-anything-that-flies zone”, but at some point some Pentagon official must have wanted to sell it to the public really hard and came up with an innocuous-sounding name for it. If America actually shoots down Russian planes, there is a decent chance it causes World War III. At the very least, our strategy for preventing World War III would be “shoot them, hope really hard that they don’t shoot back”, *WHICH IS NOT A GOOD STRATEGY.* But isn’t it possible that the US could declare the no-fly zone, and the Russians (who also don’t want World War III) would agree not to fly, rather than cause global annihilation? I think this is where the lines-in-the-sand come in again. Imagine Russia declared a “no-sanctions zone” across the entire world, where if any corporation stopped doing business with them, they would bomb that corporation’s headquarters (even if the corporation was headquartered in eg the US). While this might give some corporations pause, a lot of Americans would feel honor-bound not to comply - it would be “giving in to terrorism”. The line between common-sense “don’t provoke a nuclear power” and “if we went along with this, it would be giving in to terrorism” is set by international law, diplomatic norms, and various fuzzy rules of war. They say that some things are allowed, and other things are bullying - and if someone threatens you with the latter, you need to call their bluff. The silly “no-sanctions zone” idea would be the latter. And so would a no-fly zone. Putin’s already proven a little irrational. He’s done good work establishing himself as the sort of person who calls all bluffs that it’s in his interest to call. *So stop trying to put him in a position where sticking to his usual habits would cause World War III.* I also feel this way about letting Ukrainian jets use NATO bases, and anything else that diverges from the usual rules for noncombatants. **6: Huh, I guess we’re still capable of jingoism** One story you could tell - one story I think Putin was telling himself - goes something like: the West is pathetic and divided. The Western-backed Afghan government fell to the Taliban in a few weeks. That’s because the Westernized Afghans were the kind of people who cared about trigger warnings and misgendering, and the Taliban were Traditional Masculine Warrior Types. The Taliban could say “kill the infidels!” and the Westerners would argue over whether considering the Taliban an “enemy” was racist.
The past few years have seen some of the most powerful players in the Western world, like the big tech companies, refuse to help their own military because they think it’s evil. They’ve seen American conservatives say nice things about enemy dictators because at least they’re not American liberals, and American liberals start treating guns like some kind of eldritch artifact that makes anyone who touches them or associates with them inherently polluted. So a totally reasonable story would be that the West has become psychologically unsuited for war. Ukrainians would be unable to fight (at least successfully), and Westerners would be too complacent to unite behind Ukraine, especially if it meant higher gas prices or whatever. (before the war, I saw people on both sides overestimating the relevance of the Azov Battalion, ie the neo-fascists, on the grounds that only neo-fascists would have enough traditional values left to put up a real fight) That story has fallen apart in two ways. First, the valor of the Ukrainian people. I’m sure there will be debate over whether this is because Ukrainians aren’t as Westernized as Americans, or whether Westernization is more compatible with martial valor than previously expected. But it sure is a data point. Second, the - let’s call it jingoism - of the broader West. I want to be clear here: so far, Westerners have not actually displayed any martial valor. They’ve mostly displayed the ability to be *really* pro-Ukraine on Reddit. Still, they sure have been really pro-Ukraine on Reddit. All the people who used to post cringeworthy comments about “Drumpf” are posting cringeworthy comments about “Putler”. I wouldn’t believe it if I hadn’t seen it with my own eyes. Even so, I think this demonstrates an ability to unite against a foreign enemy beyond what me-a-month-ago would have expected. I think if we ever get in a really important war, we will do just fine on the home front. Of course, jingoism is bad. People are going crazy, trying to take out their frustrations on individual Russians, or agitating for nuclear war, or otherwise embarrassing themselves. All of this is terrible. But I was so concerned we were perma-stuck at the opposite extreme that it’s almost refreshing to see us fail in this particular way. **7: The Obligatory Acknowledgment That We Are Also Bad** America has invaded a lot of countries, even within my lifetime. Sometimes its reasoning was noble: preventing genocide in Kosovo. Sometimes it was at least understandable: getting vengeance for 9-11. Other times it was almost incomprehensible: we’ll debate what happened with Iraq II forever. Part of me wants to say we’re different from the Russians - at least we haven’t launched a war of annexation in a while. Usually we have the decency to skulk around funding rebel groups and opposition parties instead of launching full-scale invasions. The rare exceptions tend to target genuinely bad dudes - however unjust the Iraq War was, nobody wants to defend Saddam. But there’s a failure mode where every villain can come up with at least one rule they followed which the other villains didn’t, then guiltlessly condemn the other villains for their villainy. Putin says that invading Ukraine is okay, because they’re Nazis; maybe he even believes it. There’s [a constant tension](https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/) between axiology/consequentialism/Inside View morality and law/deontology/Outside View morality.
The former says “the Good is indescribably complex, but you can usually recognize it when you see it; follow the Good and ignore the heuristics”. The latter says “bargain with other people until you find bright-line rules you can all agree on, then follow them.” So, how bright a line is “never invade another country”? If the other country had a universally-hated dictator who was genociding millions of people, and it would be easy to invade them, and God Himself came down and assured you that nothing would go wrong - do you wash your hands of it and say “nope, there’s a bright line against ever invading another country, that’s Morally Wrong”? What if, whenever you admit an exception to the bright line, you know that tyrants and aggressors will exploit it forever? “Fine, you can invade if the country is literally the Nazis, committing the literal Holocaust” - and then Putin says Ukraine is run by Nazis and genociding its people. I have no good solution to this problem, but I admit that America’s standing to make the moral case against invading Ukraine is weaker than if it had shown the slightest ability to refrain from invading places *it* wanted to invade. **8: The Obligatory Acknowledgment That We Are Also Bad (2)** What to make of the claim that the West provoked this war by expanding NATO / refusing to rule out admitting Ukraine? This is one of those times you have to be really careful with causal vs. moral language - in a purely historical sense, did the West *cause* the war by expanding NATO? And, as a separate question, is the West blameworthy for it? (in case the distinction isn’t clear: a woman wearing skimpy clothing might be causally connected to her being sexually harassed, but that doesn’t make her blameworthy) I found Cuban Missile Crisis analogies helpful here: the US also gets nervous when enemy powers are right on its doorstep. So it’s not crazy for Russia to be worried. Still, Putin also uses a lot of “Ukraine is a fundamentally illegitimate country whose very existence is an affront to Russia” rhetoric. Seems like he has a beef besides potential NATO membership (which everyone agreed wasn’t really going to happen). But also, Russia keeps trying to turn nearby countries into puppet states, sometimes propping up really abhorrent dictators (eg Lukashenko) to do that. They already invaded Ukraine once, took some territory, and propped up some separatist movements. If Ukraine had avoided requesting Western connections and military help, or the West had avoided providing them, I think “Ukraine becomes Belarus 2” would have been more likely than “everything is great and war is averted with zero problems”. Is it wrong for the West to support Ukraine in its efforts not to become Belarus 2? In terms of the lines-in-the-sand and vague-rules-of-international-diplomacy that prevent nuclear war, I think not really. Is it *imprudent*? It’s a risk, but at least it was taken in the defense of real principles, which is better than most of the imprudent things we do. **9: Peace is still the goal** Putting all of this together: Western countries have three conflicting goals here. First, avoiding nuclear war. Second, making this such a miserable experience for Russia that nobody tries anything like it again. Third, helping the people of Ukraine (and Russia) escape with as little death and suffering as possible. In this spirit, I hope they encourage Ukraine to consider Russia’s recent peace offer.
As far as I understand it, [the offer is](https://theconversation.com/why-ukraine-and-nato-shouldnt-rush-to-dismiss-vladimir-putins-latest-peace-terms-178723): Ukraine declares neutrality, and recognizes Crimea as Russian and Donetsk/Luhansk as independent. Russia gives up and goes home. These are concessions in name only. Russia already has de facto control of Crimea, Donetsk, and Luhansk, and has for years (I’m assuming Putin means the areas he already controls; if he means “Donetsk” and “Luhansk” in a broader sense, that’s a harder sell). Ukraine ceding them does nothing except take away Russia’s casus belli for future wars. My understanding is that Russia operationalizes neutrality as “don’t join NATO or the EU”. But NATO has shown no signs of being willing to accept Ukraine as a member anyway. The EU seems sort of willing, but is infamous for dragging membership negotiations out for years or decades, and requires potential members to get their acts together to a degree that Ukraine might never accomplish. The EU has previously allowed countries to join its economic community without joining the EU proper, and this would probably provide most of the relevant benefits to Ukraine without angering Russia. Ukraine was not in either of these organizations before the war, and not being in them afterwards changes nothing. (Neutrality does prevent them from gaining useful allies for a future war, but the only country likely to declare war on them in the future is Russia, and Ukraine has already made it clear it’s a tough target. Some analysts say Putin attacked now because, given the rate at which Ukraine’s military is improving, he thought this would be his last chance. Given the boost in popularity and foreign interest this war will give them, they’ll only improve faster from here, so I think they should expect to be able to stand on their own in the future.) These were most of Putin’s demands before the war, so one could argue that, if they’re a good idea now, they would have been a good idea then, and Ukraine should have agreed and prevented bloodshed. This might be true, but I don’t think it’s necessarily so. A big part of diplomacy is maintaining your honor and your reputation for not caving in to threats. If Russia had gotten everything it wanted from Ukraine with no effort, it would have legitimized using threats of war as a negotiating tactic and made it harder for Taiwan, Georgia, Iran, etc in the future. This is easy for me to say on the other side of the world, not losing any friends or relatives, but it is potentially worth standing up for yourself, even to the point of war, in order to maintain the illegitimacy of such threats. Now the situation is different. Russia has miscalculated, they know they’ve miscalculated, and the best ending for everyone is for them to leave in a way that sort of preserves what’s left of their honor - one that doesn’t humiliate them any more than they’re humiliated already. Giving Russia everything it wanted before the war lets Putin play it as a victory back home, saves the Ukrainian people, and defuses the chance of World War III. It might cost a small amount of honor, but the Ukrainians are rolling in honor right now. They have so much honor they don’t know what to do with it all. They can pay a little to make Russia go away, and still have enough left over to act as a deterrent in the future. **10: Links** a.
[Metaculus Alerts](https://twitter.com/MetaculusAlert) is a Twitter bot that alerts you when a Metaculus prediction on the Ukraine war has changed drastically in a short time. For example, “the chance of Russia taking Kiev by April has decreased 10% in the past 24 hours”. I find this a good substitute to refreshing the news every minute to see if something interesting has happened. b. The origin of “Molotov cocktail”: c. One of the Ukrainian cities on the front lines is named [New York](https://en.wikipedia.org/wiki/New_York_(Ukraine)). d. Reddit has quarantined their [r/russia subreddit](https://www.reddit.com/r/russia/), which I think is a cowardly and outrageous act of censorship. But you can still see it if you have a verified email, and I find it an interesting window into the Russian perspective on the conflict. e. Former oligarch Petro Poroshenko is Ukraine’s unpopular ex-president, recently placed under something like house arrest pending a corruption trial. He’s since gotten an Kalishnikov rifle and is [patrolling the streets of Kiev against Russian invaders](https://www.channelstv.com/2022/02/25/former-ukrainian-president-poroshenko-picks-up-rifle-in-defence-of-country/). f. The [Reply Of The Zaporizhian Cossacks](https://en.wikipedia.org/wiki/Reply_of_the_Zaporozhian_Cossacks) is a famous historical insult sent by Ukrainian cossacks to the Turkish sultan (it’s worth clicking the links for the full text, content warning obscenity). It got made into a famous painting, and: g. Maybe Russian propaganda, but still pretty funny: h. Re: “the West is turning cancellation into a weapon of war” i. Elon Musk [sends Starlink terminals to Ukraine](https://www.bbc.com/news/technology-60561162) to ensure continued Internet, although there are worries that Russia can trace the signal. Pic related: ([source](https://twitter.com/pakpakchicken/status/1499974314795597826)) j. In Greek mythology, Snake Island, where Ukrainian soldiers famously defied a Russian warship, is [the final resting place of Achilles](https://kiwihellenist.blogspot.com/2022/03/snake-island.html), who sometimes appears to residents. k. Metaculus thinks Russia might soon [close its borders](https://www.metaculus.com/questions/10080/russian-border-closure-by-april-2022/). It might be helpful to talk to Russians you know about [getting out of Russia](https://forum.effectivealtruism.org/posts/P3wdDeihMJm9Wab5y/psa-if-you-are-in-russia-probably-move-out-asap) if they can, before things get worse. See also [Letter: Russians Are Welcome In America](https://www.lesswrong.com/posts/QCrkSbsdAAyymAxjG/letter-russians-are-welcome-in-america) - though I don’t know what the visa situation is like now and it might be terrible. l. Servant Of The People is a 2015 Ukrainian comedy TV series about a poor teacher who implausibly gets elected President of Ukraine and has to clean up its corrupt politics. It went down in history when the star, Volodymyr Zelenskyy, got elected President of Ukraine in real life, apparently on the strength of his performance. The entire series [is available for free on YouTube with English subtitles](https://www.youtube.com/watch?v=GZ-3YwVQV0M&list=PLJo-obgJSqxbzEDvUHX9jiX2DXWGSvv0T) (though after a few episodes they disappear, and you have to use the Russian subtitles and then auto-translate them into English). I’m a few episodes in and it’s really good, which I guess I should have predicted given the consequences. m. 
The [EA Forum](https://forum.effectivealtruism.org/posts/qkhoBJRNQT4EFWos7/what-are-effective-ways-to-help-ukrainians-right-now) and [Kelsey Piper](https://link.vox.com/view/608adc1891954c3cef02a52efzyg3.5fy/3e7f0436) have discussions on how best to help Ukrainians (this is still not the most efficient way to spend charitable donations - but it’s human to care about things other than efficiency). Ideas range from [Polish Humanitarian Action](https://link.vox.com/click/26871843.7054/aHR0cHM6Ly93d3cucGFoLm9yZy5wbC9lbi8/608adc1891954c3cef02a52eBb5723dba) (to help Ukrainian refugees in Poland) to [Meduza](https://link.vox.com/click/26871843.7054/aHR0cHM6Ly9zdXBwb3J0Lm1lZHV6YS5pby9lbg/608adc1891954c3cef02a52eB3154f7a9) (opposition Russian news source, apparently still sort of holding on) to [direct donations to Ukraine’s Ministry of Health or Ministry of Defence](https://war.ukraine.ua/support-ukraine/).
# Open Thread 214

This is the weekly visible open thread. Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed. This one is even-numbered, so go wild - or post about whatever else you want. Also:

**1:** Crypto exchange FTX announces the launch of the Future Fund ([blog announcement](https://ftxfuturefund.org/announcing-the-future-fund/), [Twitter thread](https://twitter.com/ftxfuturefund/status/1498350483206860801?s=20&t=KT_U0F4uwf5s2zD3eM2ogQ)), a philanthropic organization run by various great people who I trust. They hope to spend $100 million - $1 billion per year on cause areas that improve the long-term future, including AI alignment, biosecurity, climate, nuclear war, education, prediction markets, and [many more](https://ftxfuturefund.org/projects/). What this means for you:

* If you have a project that needs funding in one of those areas, [apply to them](https://ftxfuturefund.org/apply/).
* They’re experimenting with “regranting”, ie if you’re a knowledgeable person who expects to be able to find and assess projects that they would miss, you can apply for a discretionary budget (“in the $250K - few million range”) and they’ll let you give it out. See their [regranting page](https://ftxfuturefund.org/announcing-our-regranting-program/) for more.
* If you have an idea for a project they should try to figure out a way to create and fund, enter their [Project Ideas Competition](https://ftxfuturefund.org/our-project-ideas-competition/), and win $5000 if it’s good enough to add to their list of such projects. Warning: this contest closes tomorrow (Monday)!

**2:** Related: the Open Philanthropy Project’s long-termist effective altruist movement-building team is hiring. They work to direct donations, spread the word about effective altruism, and make the movement more capable. Pay is low six-figures, living in SF is recommended but not absolutely required. [See here](https://forum.effectivealtruism.org/posts/uM6KFEpGuFivsJJHM/open-phil-s-longtermist-ea-movement-building-team-is-hiring) for more info.

**3:** On my previous post, Ukraine Warcasting, I asked people for the names of anyone who had successfully predicted both the Russian invasion of Ukraine and the strong Ukrainian resistance, and said I’d signal-boost them if they existed. Someone [brought up](https://www.reddit.com/r/slatestarcodex/comments/t443t5/ukraine_warcasting/hywjzpb/) a Hungarian YouTuber named Adam Something, who I am [dutifully signal-boosting](https://www.youtube.com/watch?v=-OO3RiNMDB8&feature=youtu.be). Another commenter pointed out that Anatoly Karlin [was wrong before he was right](https://twitter.com/akarlin0/status/1468659468544139271). Many such cases!
# What Are We Arguing About When We Argue About Rationality?

Let’s talk about this tweet:

The backstory: Steven Pinker wrote a book about rationality. The book concludes it is good. People should learn how to be more rational, and then we will have fewer problems. Howard Gardner, [well-known wrong person](https://en.wikipedia.org/wiki/Theory_of_multiple_intelligences#Lack_of_empirical_evidence), sort of criticized the book. The criticism was facile, a bunch of stuff like “rationality is important, but relationships are also important, so there”. Pinker’s counterargument is dubious: Gardner’s essay avoids appealing to rationality pretty carefully. But even aside from that, it feels like Pinker is cheating, or missing the point, or being annoying. Gardner can’t be arguing that rationality is completely useless in 100% of situations. And if there’s *any* situation *at all* where you’re allowed to use rationality, surely it would be in annoying Internet arguments with Steven Pinker.

We could turn Pinker’s argument back on him: he frames his book as a stirring defense of rationality against anti-rationalists. But why does he identify these people as anti-rationalists? Sure, they themselves identify as anti-rationalist. But why should he believe them? After all, they use rationality to make their case. If they won, what bad thing would happen? Even in whatever dystopian world they created, people would still use rationality to make cases.

I feel like what I’m missing is an idea of what anti-rationalism means. What’s at stake here? What are we arguing about when we argue about rationality?

**Rationality As Full Computation Opposed To Heuristics?**

I think Howard Gardner sort of believes this. He has an inane paragraph about how respect is more important than rationality. When I try to make sense of it, I get an argument kind of like: the Communists trusted their reason, reasoned their way into believing Communism was true, and oppressed people because their version of Communism said it was okay. But they should have trusted a heuristic saying that every human being is worthy of respect instead.

Elsewhere in the essay, he compares rationality unfavorably to religion or tradition. One is tempted to use the maneuver from Pinker’s tweet here: “Is there anything good about religion or tradition?” If no, why prefer them to rationality? If yes, wouldn’t a rational person rationally choose to believe / follow them? Again, this makes the most sense as an argument about heuristics. It’s the old [argument from cultural evolution](https://slatestarcodex.com/2015/07/07/the-argument-from-cultural-evolution/): tradition is the repository of what worked for past generations. Perhaps you are very smart and can beat past generations. Or perhaps you are an idiot: you think “I can do lots of cocaine-fueled orgies, because I will just calculate the pros and cons of each line of cocaine / potential sex partner as I encounter them, and reject the ones that come out negative”, and then one time you forget to carry the one and end up in a bathtub minus a kidney. This was basically how Communism went too.

One of the most common arguments against rationality is “something something white males”.
I have never been able to entirely make sense of it, but I imagine if you gave the people who say it 50 extra IQ points, they might rephrase it to something like “because white males have a lot of power, it’s easy for them to put their finger on the scales when people are trying to do complicated explicit computations; we would probably do a better job building a just world if policy-makers retreated to a heuristic of ‘choose whichever policy favors black women the most.’”

So what are pro-rationality and anti-rationality people arguing about? In this model, Pinker and his supporters believe you should explicitly calculate the pros and cons of everything you do, whereas Gardner and his supporters believe you should often retreat to heuristics like “don’t do anything that violates human rights” or “live a holy and god-fearing life” or “don’t do drugs” or “try to favor black women”.

But I am pretty much 100% sure that Pinker and his supporters don’t believe the stupid explicit computation thing. I count myself among his supporters and *I* definitely don’t believe it. *Obviously* heuristics are important and good. This is true not just for big important moral things, but also for everyday occurrences and determining truth. If I get an email from a Nigerian prince asking for money, I’m not going to think “I shall do a deep dive and try to rationally calculate the expected value of sending money to this person using my very own fifteen-parameter [Guesstimate](https://www.getguesstimate.com/) model”. I’m going to think “nah, that kind of thing is always a scam”. Not only will this prevent me from forgetting to carry the one and sending my life savings to a scammer, but it also saves me the hours and hours it would take to create an explicit model and estimate a probability.

Then maybe the difference between rationalists and anti-rationalists is that rationalists use heuristics sparingly and are willing to question them, and anti-rationalists follow heuristics religiously (or even slavishly)? But Gardner claims to be Jewish, and I doubt he follows all 613 commandments; I imagine he’s even raised his voice a few times when respect didn’t seem to be working. I think everybody follows some combination strategy of mostly depending on heuristics, but using explicit computation to decide what heuristics to have, or what to do when heuristics conflict, or whether a certain heuristic should apply in some novel situation.

**Rationality As Explicit Computation Opposed To Intuition?**

“Intuition” is a mystical-sounding word. Someone asks “How did you know to rush your son to the hospital when he looked completely well and said he felt fine?” “Oh, intuition”. Instead, think of intuition as how you tell a dog from a cat. If you try to explain it logically - “dogs are bigger than cats”, “dogs have floppy ears and cats have pointy ones” - I can easily show you a dog/cat pairing that violates the rule, and you will still easily tell the dog from the cat.

Intuition can be trained. Good doctors have great intuition, and are constantly saying things like “this feels infectious to me”. If you ask them to explain, they’ll give you fifteen different reasons it seems infectious, but also admit there are ten different reasons it might be iatrogenic and forty reasons it might be autoimmune, but the infectious reasons seem more compelling to them. A newbie intern might be able to generate the same list of 15 vs. 10 vs. 40 reasons and be totally paralyzed by indecision about which ones are most important.
This last decade has been good for intuition, because we’ve finally been able to teach it to computers. There are now AIs that can tell dogs from cats, previously an impossible task for a machine. There are style transfer AIs that can make a painting feel more like a Van Gogh, or “more cheerful”, or various other intuitive things. Even text generation programs like the GPTs are conquering intuition - Strunk & White aside, there’s no ruleset for how to write, just better or worse judgment on what word should come next. Since these AIs are just giant matrix multiplication machines, “intuition” now has a firm grounding in math - just much bigger, more complicated math than the usual kind that we call “logical”.

So in another conception of the debate, the Pinkerian rationalists want to explicitly compute everything through formal arguments or equations, but the Gardnerian anti-rationalists just want to get a gestalt impression and make an intuitive decision. This maps onto stereotypes about atheism vs. religion: the atheist saying “here are 7,000 Biblical contradictions, QED” vs. the believer saying “but it just feels true to me”.

But again, I would be shocked if Pinker or other rationalists actually believed this - if he thought it was a productive use of his time to beat one of those cat/dog recognition AIs with a sledgehammer shouting “Noooooooooo, only use easily legible math that can be summed up in human-comprehensible terms!” Again, it would be impossible to live your life this way. A guy with a gun would jump out from behind the bushes, and you’d be thinking “well intuitively this seems like a robbery, but I can’t be sure until I Fermi estimate the base rates for robberies in this area and then adjust for the time of day, the…” and then the robber has shot you and you probably deserved it.

Even this doesn’t go far enough - it suggests that intuition might only be useful under pressure, and when you have enough time you should do the math. But I recently reviewed the discourse around Ajeya Cotra’s [report on AI timelines](https://astralcodexten.substack.com/p/biological-anchors-a-trick-that-might), and even though everyone involved is a math genius playing around with a super complex model, their arguments tended to sound like “It still just doesn’t feel like you’re accounting for the possibility of a paradigm shift enough” or “I feel like the fact that your model fails at X is more important than that my model fails at Y, because X seems more like the kind of problem we want to extrapolate this to.” The model itself is explicit, but every decision about how to make the model or how to use the model is intuitive and debated on intuitive grounds.

**Yudkowsky: Rationality Is Systematized Winning?**

This is Eliezer Yudkowsky’s [standing-on-one-foot definition of rationality](https://www.lesswrong.com/posts/4ARtkT3EYox3THYjF/rationality-is-systematized-winning). The idea has a history behind it. [Newcomb’s Paradox](https://en.wikipedia.org/wiki/Newcomb%27s_paradox) is a weird philosophical problem where (long story short) if you follow an irrational-seeming strategy you’ll consistently make $1 million, but if you follow what seem like rational rules you’ll consistently only get a token amount. Philosophers are divided about what to do in this situation, but (at least in Yudkowsky’s understanding) some of them say things like “well, it’s important to be rational, so you should do it even if you lose the money”. This is what Eliezer’s arguing against.
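For anyone who wants the arithmetic behind “consistently make $1 million” vs. “a token amount”, here is a minimal sketch, assuming the standard payoffs from the philosophy literature ($1,000 in the transparent box; $1,000,000 in the opaque box only if the predictor foresaw you taking just that box) - those numbers are the usual convention, not anything specific to this post:

```python
# Expected-value arithmetic for Newcomb's Paradox, under the standard payoffs
# (an assumption from the literature): $1,000 in the transparent box;
# $1,000,000 in the opaque box iff the predictor foresaw one-boxing.
def expected_values(p: float) -> tuple[float, float]:
    one_box = p * 1_000_000                    # predictor foresaw one-boxing: opaque box is full
    two_box = p * 1_000 + (1 - p) * 1_001_000  # foresaw two-boxing (box empty) vs. predictor slipped up
    return one_box, two_box

for p in (0.99, 0.9, 0.5):
    ob, tb = expected_values(p)
    print(f"predictor accuracy {p:.2f}: one-box EV ${ob:,.0f}, two-box EV ${tb:,.0f}")
```

At 99% predictor accuracy this prints roughly $990,000 for one-boxing against $11,000 for two-boxing; below about 50% accuracy the ordering flips, which is why the paradox turns on how good the predictor is.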
If the “rules of rationality” say you need to do something that makes you lose money for no reason, they weren’t the real rules. The real rules are the ones that leave you rich and happy and successful and make the world a better place. If someone whines “yeah, following these rules makes me poor and sad and unable to help others, but at least they earn me the title of ‘rational person’”, stop letting them use the title!

This definition has its issues, but one thing I like is that it makes it very clear that following heuristics or using intuitions is fine. If you have some difficult problem, should you consult your intuitions or your long chain of explicit reasoning? What would a rational person do? The most rational answer I can think of here is “run the experiment, try it both ways a few times, and use whichever one produces better results”. Should you rely on heuristics, or calculate everything out each time? I would be surprised if people who explicitly calculated the value of responding to each spam email ended up happier and richer and psychologically healthier and doing more good in the world than people who click the “delete” button as a spinal reflex - in which case, a real rationalist should choose the reflex.

This has the happy side effect that it’s impossible to be against rationality. But it also has the more concerning implication that it’s vacuous to be in favor of it. If rationalists were people who really liked explicit chains of computation, we could print out cool “TEAM EXPLICIT CHAIN OF COMPUTATION” t-shirts and play nasty pranks on the people who like heuristics and intuition. But if it’s just about preferring good things to bad things, it doesn’t really seem like a method, or a community, or an ideology, or even necessarily worth writing books about. It still feels like there’s something that Pinker and Yudkowsky are more in favor of than Howard Gardner and Ayatollah Khamenei, even though I bet all four of these people enjoy winning.

**Rationality As The Study Of Study?**

Maybe rationality is what we’re doing right now - trying to figure out the proper role of explicit computation vs. intuition vs. heuristics. In this sense, it would be the study of how to best find truth. This matches a throwaway line I made above - that the most rational answer to the “explicit computation vs. heuristics” question is to try both and see which works better. But then how come pretty much everybody identifies “rationality” more with the explicit calculation side of things, and less with the intuitive side? Surely a generic study of truth-seeking would be unbiased between the two, at least until it did the experiments?

Geology is the study of rocks. It’s hard to confuse the object-level with the meta-level; rocks are a different kind of object than studying. If you’re debating whether a certain sample is schist or shale, you’re debating the rocks. If you’re debating whether argon-argon dating is more appropriate than potassium-argon dating, you’re debating the study. In order to do good science, you want your studying to conform to certain rules, but nobody expects the rocks themselves to conform to those rules. Rationality is the study of truth-seeking, ie the study of study. It’s *very* easy to confuse the object-level with the meta-level; are we talking about the first or second use of “study” in the sentence?

Science ought to be legible, not because legibility is always better at finding truth, but because that’s part of the “rules” of science.
You don’t get to say you’ve scientifically explained something until you’ve put it into a form that other people can understand. This is a good rule - once something is comprehensible, you can spread it and other people can build on it. Also, you’re more likely to be able to take it off in new directions.

If some prospector has a really amazing knack for figuring out where diamonds are buried, which he can’t explain - “This just feels like a diamond-having kind of area to me” - then he’s good at rocks but not good at geology. He’s not a geologist until he’s able to frame it in the form of laws and explanations - “diamonds are found in areas where deeper crust has been thrust to the surface, which can be recognized by such-and-such features”. If you’re a mining company, then by all means hire the guy with the mysterious knack; employing him sounds really profitable. But a hundred years later, most of the progress in diamond-acquisition is going to come from the scientists (…is a hypothesis you could assert; I think [Taleb would partly disagree](https://astralcodexten.substack.com/p/book-review-antifragile?s=w)). Not only can they share their findings in a way that Knack Guy can’t share his knack, but they can ask questions and build upon them - might there be other signs that indicate deeper crust thrust to the surface? Can we just dig down to the deep parts of the crust directly? Can we replicate the conditions of the deep crust in a lab, and avoid having to mine at all? These are the kinds of questions that a knack for finding diamonds doesn’t help with; you need the deep theory.

Likewise, supposing that some tradition is good, following the tradition will give you the right answer. But you can’t study it (unless you study the process by which traditions form, which isn’t itself “relying on tradition”). You’ve been magically gifted the correct answer, but not in a way you can replicate at scale or build upon. “Following the Sabbath is good because it helps you relax and take time to contemplate, the ancients were very wise to prescribe it”. Fine, but I need fifteen people to bond super-quickly in the midst of very high stress while also maintaining good mental health, also five of them are dating each other and yes I know that’s an odd number it’s a long story, and one of them is secretly a traitor which is mutual knowledge but not common knowledge, can you give me a tradition to help with this? “Um, the ancients never ran into that *particular* problem”.

Sometimes theories lag way behind practice. For most of medical history, theorists believed in bloodletting and the four humors, whereas people with knacks (wise women, village healers, etc) generally did reasonable things with herbs that presaged modern medicine. Still, even though ancient doctors got the contents of their theories wrong, the part where they had theories was legitimately a real advance; without it, I don’t think we would have gotten to modern medicine, which *does* outperform the wise women most of the time.

If you’re seeking truth, you’re absolutely allowed to do what Srinivasa Ramanujan did when he discovered how to simplify a certain kind of previously unsolvable math problem:

> It is simple. The minute I heard the problem, I knew that the answer was a continued fraction. ‘Which continued fraction?’ I asked myself. Then the answer came to my mind

If we define rationality as “the study of truth-seeking”, this is good at the “truth-seeking” part, but bad at the “study” part. He got the right answer.
The truth was successfully sought, the diamond was found. But he can’t explain to anyone else how he did it - he just has a good knack for this kind of thing. Here’s one scenario which I think is unlikely but theoretically possible: the formal study of rationality will end up having zero advantages over well-practiced intuitive truth-seeking, *except* insofar as it allowed Robin Hanson to design prediction markets, which someday take over the world. This would be a common pattern for sciences: much worse at everyday tasks than people who do them intuitively, until it generates some surprising and powerful new technology. Democritus figured out what matter was made of in 400 BC, and it didn’t help a single person do a single useful thing with matter for the next 2000 years of followup research, and then you got the atomic bomb (I may be skipping over all of chemistry, sorry). I’m not actually that pessimistic. I think there are plenty of times when a formal understanding of rationality can correct whatever vague knacks people are otherwise using - this is the biases and heuristics research, which I would argue hasn’t been literally *zero* useful. This theory would help explain how Pinker’s beef with Gardner developed. Gardner is making the same sort of claim as “wise women do better than Hippocratic doctors”. It’s a potentially true claim, but making it brings you into the realm of science. If someone actually made the wise women claim, lots of people would suggest randomized controlled trials to see if it was true. Gardner isn’t actually recommending this, but he’s adopting the same sort of scientific posture he’d adopt if he *was*, and Pinker is picking up on this and saying “Aha, but you know who’s scientific? Those Hippocratic doctors! Checkmate!” A few weeks ago, when I posted my predictions for 2022, a commenter mentioned that various “rationalist” “celebrities” - Eliezer Yudkowsky, Julia Galef, maybe even Steven Pinker - should join in, and then we would find out who is most rational of all. I hope this post explains why I don’t think this would work. You can’t find the best economist by asking Keynes, Hayek, and Marx to all found companies and see which makes the most profit - that’s confusing money-making with the study of money-making. These two things might be correlated - I assume knowing things about supply and demand helps when starting a company, and [Keynes did in fact make bank](https://www.politico.com/story/2012/04/keynes-made-fortune-from-playing-market-well-075365) - but they’re not exactly the same. Likewise, I don’t think the best superforecasters are always the people with the most insight into rationality - they might be best at truth-seeking, but not necessarily at studying truth-seeking.
# Microaddictions

Everyone always says you should “eat mindfully”. I tried this once and it was weird. For example, I noticed that only the first few bites of a tasty food actually tasted good. After that I habituated and lost it. Not only that, but there was a brief period after I finished eating the food when I was below hedonic baseline. This seems pretty analogous to addiction, tolerance, and withdrawal. If you use eg heroin, I’m told it feels very good the first few times. After that it gets gradually less euphoric, until eventually you need it to feel okay at all. If you quit, you feel much worse than normal (withdrawal) for a while until you even out. I claim I went through this whole process in the space of a twenty-minute dinner.

I notice this most strongly with potato chips. Presumably this is pretty common, given their branding:

It’s actually pretty hard to eat a single potato chip and then stop when there’s a whole bag in front of you. But also, the 20th potato chip tastes much less good than the first. My experience (maybe not universal!) is that the same dynamic applies, somewhat less strongly but still above a threshold of noticeability, to any tasty food.

Should I add “…and any other enjoyable activity”? The first minute of watching a movie certainly isn’t the best; they need time to introduce the characters, start the plot, etc. But if you interrupt me in the middle of an exciting movie, I’ll get pretty angry (even if I know I can pause it on DVR and finish it later). Is this withdrawal from a movie addiction? When movies don’t end immediately after the climax, but instead have a leisurely denouement telling us where all the characters end up, is that a movie taper, in the same sense that you might taper from 3 mg Xanax to 2 to 1 and so on when trying to overcome a Xanax addiction?

Normally I would describe the feeling of being engrossed in a movie as a “flow state”. Are flow states just another word for microaddictions? As a child, I would throw a temper tantrum if my parents walked in and paused the TV right at the climax of an amazing show (I haven’t mellowed out with age - I just own doors with locks on them now). Is this “interrupting a flow state”? Is it the same as what happens when you give opioid addicts a sudden injection of naloxone? (not fun!)

This is part of why I’m skeptical of a purely chemical definition of addiction, where addiction is what happens when some chemical that mimics a neurotransmitter “hijacks your reward center”, and so nonchemical addictions (eg video games, Internet) are by definition impossible and/or metaphorical. Yes, sometimes chemicals mimic neurotransmitters and hijack your reward center. But all that does is stimulate your reward center, the same way video games and potato chips stimulate it. Opioids can still stimulate your reward system more strongly than video games and potato chips can, but not for lack of trying by Activision and Frito-Lay Inc.

Instead, I think of addiction as what happens when you become hyper-aware of one particular facet of your normal motivation system. Usually your motivational system is doing lots of things at once, and they’re all in some kind of useful balance, and you think of that balance as “what I want”. If one facet becomes much stronger than everything else, it feels weird - “not what I want” - and one of the categories we have for that is addiction. The easiest way to get that kind of disproportion is a chemical that mimics a neurotransmitter. But other ways are rapidly catching up.
# Ukraine Warcasting

Yeah, I know you’re saturated with Ukraine content. Yeah, I know everyone wants to relate their hobbyhorse to Ukraine. But I think it’s genuinely useful to talk about prediction markets right now.

Current conventional wisdom is that the invasion was a miscalculation on Putin’s part, after he surrounded himself with so many yes-men that he lost touch with reality. But Ukraine miscalculated too; until almost the day of the invasion, Zelenskyy was saying everything would be okay. And if there’s a nuclear exchange, it will be because of miscalculation - I don’t know what the miscalculation will *be*, just that nobody goes into a nuclear exchange because they want to. Preserving people’s access to reality and helping them avoid miscalculations [are](https://slatestarcodex.com/2014/10/05/prediction-goes-to-war/) peacekeeping measures, sometimes very important ones.

The first part of this post looks at various markets’ predictions of how the war will go from here (Zvi [published something like this](https://thezvi.substack.com/p/ukraine-post-1-prediction-markets?utm_source=url) a few hours before I could, so this will mostly duplicate his work). The second part very briefly tries to evaluate which markets have been most accurate so far - though this is a topic which deserves at least paper-length treatment. The third part looks at which pundits deserve eternal glory for publicly making strong true predictions, and which pundits deserve . . . something else, for doing . . . other things.

### Part I: Warcasting

Starting with Metaculus:

*— Will Kyiv fall to Russian forces by April 1 2022? **69% chance***

This is the most-predicted relevant question on Metaculus right now. The first day of the war, the market predicted as high as 90%; as people realized the strength of Ukrainian resistance, it fell to 80%. Mid-Saturday there was a sudden drop from 78% to 72%, after some combination of a defiant Zelenskyy speech and a report that Russian paratroopers had been repelled. Since then it’s barely budged.

*— Will at least three of six big Ukrainian cities fall to Russian forces by June 1? **71% chance***

The six cities are Kyiv, Odesa, Lviv, Mariupol, Kharkiv, and Kherson. This question gives the Russians two more months than the last one, so it’s surprising that they’re at about the same probability. Maybe everyone expects Russia to go for Kyiv first and take longer for anything else? Or maybe they’re assuming everything stands or falls together.

*— Will WWIII happen before 2050? **20% chance***

This question defines “World War III” as any war whose combatants have 30% of world GDP or 50% of world population, and in which 10 million people die. Over the past two years, the question has bounced between about 7 and 19 percent. Today it’s at 20%, its highest value ever - but still only a single-digit percent above its baseline.

*— Will Russia invade any country other than Ukraine in 2022? **12% chance***

Commenters bring up Belarus (if they start seeming less loyal), Moldova (if part of Russia’s plan was to create a corridor to Transdnistria), or Georgia (Russia likes invading Georgia). Relatively few people think a Russia-NATO war is likely to be a big part of this. [Zvi thinks](https://thezvi.substack.com/p/ukraine-post-1-prediction-markets?utm_source=url) this should be 20%.

*— Will Putin still be president of Russia next February? **71% chance***

This started at 85% and has been getting gradually lower, but it suffers for lack of pre-war data to compare it to.
Here’s [a related question](https://www.metaculus.com/questions/4799/when-will-vladimir-putin-cease-to-hold-the-office-of-president-of-russia/) asking forecasters to predict when Putin will leave power. Through most of last year, it averaged 2027 - 2029; now it’s at 2024. I imagine this is too weird a mixture of early and late guesses to interpret clearly, but the downward trend sure is obvious.

*— Will 50,000 civilians die in any single Ukrainian city? **8% chance***

Forecasters are optimistic this will not happen. A commenter mentions that only 30,000 civilians died in Aleppo during four years of fighting there.

Other sites have fewer or less trustworthy markets, but here’s a selection:

*— Will Zelensky still be President of Ukraine on 4/22/22? **42% chance***

Polymarket seems hesitant to go into actual war predictions, but this market at least acts as a proxy for whether there will *be* a Ukraine on 4/22/22 - though with a side of “will Zelensky be killed or captured?”. “Yes” dropped as low as 12% during the early parts of the invasion, but is doing a little better now.

*— Will Russia control Kyiv on 4/2/22? **54% chance***

This is Manifold’s biggest Ukraine market right now. It’s very similar to the biggest Metaculus question, although the resolution criteria are different (Metaculus: 6/10 raions; Manifold: informal, whether Duncan says so). I don’t know if that fully explains the different probabilities: 69% chance on Metaculus vs. 54% chance on Manifold. In the past when Metaculus and Manifold disagreed I’ve eyeballed Metaculus as being more accurate, but there are few data points so far.

There’s a Putin ouster market that has exactly the same probability as Metaculus', and a very small “will Russia invade anywhere else” market that’s at 30% right now, more than twice Metaculus’ level. Meanwhile, searching for Ukraine on Kalshi gives me nothing, so please accept their “will it be over 35 degrees in New York City” market instead. Everyone keeps telling me I shouldn’t be so bearish on Kalshi, they can be both regulated and dynamic at the same time. Maybe so, but not yet.

### Part 2: Prediction Market Comparisons

Clay Graubard [did some good work looking at](https://globalguessing.com/russia-ukraine-forecasts/) how different prediction markets assessed the threat of Russia invading Ukraine. These are hard to directly compare, since they ask slightly different questions (eg different end dates). Some people would call it unlikely that Russia would skip attacking Ukraine this month but then attack in summer/fall 2022; it would require that they mass a bunch of troops on the border, send them home again (they can’t support them all there indefinitely), then bring them back a few months later for the real invasion. With that assumption, all possible invasions would be near-term invasions and you could compare these markets fairly; without that assumption, it’s hard to say.

I would add that Manifold did worse than any of these; it was at 36% on 2/14, and barely made it to 50% before the actual invasion happened.

Another thing you can do with this graph is notice which markets react more vs. less to news. For example, INFER seems totally unreactive; it’s just a vaguely upward-trending line the whole time - I don’t know enough about it to have a good sense of why that would be. Meanwhile, GJI (superforecasters) seem the most reactive. I don’t have a good sense of how to think about this or whether reactivity is necessarily good.
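For anyone who wants to try the “which market has been most accurate” evaluation at home, the standard tool (from Tetlock-style forecasting research) is the Brier score: the mean squared error between a probability forecast and the 0/1 outcome, lower is better. Here is a minimal sketch; the market names, probabilities, and outcomes are hypothetical placeholders for illustration, not the actual numbers behind the graph:

```python
# Brier scoring: squared error between a probability forecast and a 0/1
# outcome, averaged over questions; lower is better. MarketA/MarketB and
# all numbers below are hypothetical placeholders, not real market data.
def brier(forecast: float, outcome: int) -> float:
    return (forecast - outcome) ** 2

# question: ({market: forecast just before resolution}, outcome)
questions = {
    "russia_invades_by_march": ({"MarketA": 0.80, "MarketB": 0.36}, 1),
    "kyiv_falls_by_april":     ({"MarketA": 0.70, "MarketB": 0.55}, 0),
}

totals: dict[str, list[float]] = {}
for forecasts, outcome in questions.values():
    for market, p in forecasts.items():
        totals.setdefault(market, []).append(brier(p, outcome))

for market, scores in sorted(totals.items()):
    print(f"{market}: mean Brier score {sum(scores) / len(scores):.3f}")
```

The caveat above still applies: Brier scores only compare cleanly when the markets answered the same question, with the same resolution criteria and end date.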
My main takeaways are that markets should coordinate to have similarly-phrased questions to make them easier to compare, and that - given that Metaculus and Manifold are the two places with the most markets right now - we should trust Metaculus more than Manifold until further notice. Metaculus also comes out looking good compared to Good Judgment and the superforecasters, though I can’t tell how much of this is question wording vs. a real advantage. Oh, and if Clay says there’s going to be a war, head for the bunkers.

### Part 3: Pundit Accountability

Part of the point of turning forecasting into a formal science is Philip Tetlock’s observation that pundits do such a bad job. They don’t seem to be right more often than chance, and even when they’re confidently wrong everyone keeps listening to them. Trying to celebrate or condemn pundits is a dangerous game; you risk over-updating on individual questions. If you look at the 2016 election in isolation, Scott Adams is the smartest guy in the world; if you look at it in context, Scott Adams likes saying crazy things very confidently, and sometimes those crazy things happen. This is going to be the out-of-context one: still, I think it’s better than nothing.

Since I’m claiming the right to judge others, it’s fair to ask how I performed. The answer is: medium! On my [Predictions For 2022](https://astralcodexten.substack.com/p/predictions-for-2022-contest?utm_source=url), posted January 31, I said there was a 50-50 chance of a “major flare-up in the Russia/Ukraine conflict” this year (obviously this qualifies). Later, I quoted Matt Yglesias’ prediction (40% chance of Russia invading Ukraine) and said HOLD, ie I didn’t disagree in either direction. A charitable person would interpret that as me saying there was a 50% chance of a major flare-up, of which 10% was a “flare-up” short of full invasion, and 40% was invasion. In reality, I just forgot I’d assigned a higher probability to that statement earlier and consulted an extremely vague mental model where 50% sounded right but 40% also sounded right. So I assigned an invasion somewhere between 40-50% probability on January 31. Most prediction markets were also around that level then (Metaculus was 44%). I didn’t let myself check markets when making my prediction, but I’d probably glanced at them before. In any case, I made the conservative prediction of “yeah, fine, whatever everyone else is saying”.

As part of the same(-ish) series of predictions, Matt Yglesias [gave a](https://www.slowboring.com/p/predictions-are-hard?utm_source=url) 40% chance that Russia would invade Ukraine in January; [Zvi gave](https://thezvi.substack.com/p/2022-acx-predictions-buysellhold?utm_source=url) a 30% chance in February. I made a small amount of fake money and a smaller amount of real money betting “YES” on a few prediction markets, after writing [this post](https://astralcodexten.substack.com/p/mantic-monday-ukraine-cube-manifold?utm_source=url) and being annoyed that they seemed too low, but this was just arbitrage, not a real opinion.

That having been said, let’s move on to the pundits who took interesting and strong positions, starting with:

#### Edward Luttwak: C

This is the guy who wrote *Coup D’Etat*, a handbook for attempting coups. He is a famous international relations and geopolitics theorist, has served in the IDF, speaks six languages, and has written a book on the military strategy of the late Roman Empire. I read his coup handbook a while ago and was very impressed by him.
Luttwak correctly predicted that Russia would have a hard time invading Ukraine with its current troop numbers, but incorrectly predicted that, because “Putin is not a fool”, they wouldn’t try. When everyone else expected Russia to win instantly, Luttwak was the only person I saw arguing (again and again) that conquering Ukraine would actually be very hard and Putin might fail. He deserves honor and glory for that strong, public, and accurate prediction. Still, I’m giving him a C, because he equally strongly predicted Russia wouldn’t invade, even calling the intelligence community “hysterical” and “always wrong”. I tend to be sympathetic to people who are honestly wrong - even wrong when they give high probabilities - and much less sympathetic to people who are wrong while insulting everyone else for disagreeing with them.

Maybe the lesson here is that expertise (at least in military matters) is real, but *extremely circumscribed*. Luttwak is exactly the sort of guy who I expect to know how many troops it takes to invade a country, but I’m not sure why he should be an expert in Putin’s psychology. Maybe he was so reliant on his military expertise that he carelessly jumped from “I know a lot about military strategy” to “I can predict what Putin will do”, via a (false) assumption of Putin’s rationality.

#### Anatoly Karlin: B-

Anatoly is a Russian nationalist who wrote [Regathering Of The Russian Lands](https://akarlin.substack.com/p/regathering-of-the-russian-lands?utm_source=url), which has become the canonical (in these circles) essay for understanding how Putin thinks. He shows the opposite pattern from Luttwak: totally right about what Putin would do, but his predictions about weak Ukrainian resistance are on the verge of being disproven. He argues that the West is overestimating Ukraine - Russia is closer to Kyiv than America was to Baghdad at this stage of their Iraq invasion, and everyone was impressed with that stage of the American campaign. But he was intellectually honest enough to give a very specific prediction - collapse of Ukrainian resistance within a week - so he can’t really get out of admitting he miscalculated here. I definitely think both these things can be true at once: the Russians underestimated the extent of Ukrainian resistance even as the West may be overestimating it.

Just as Luttwak had many reasons to be right about war but might not have known much about Putin’s personality, so Karlin has every reason to be right about Putin’s personality, but isn’t really much of a military strategy expert. I’m giving him a slightly higher grade because he was more self-aware and made more specific predictions.

#### Richard Hanania: B-

See his [Lessons From Forecasting The Ukraine War](https://richardhanania.substack.com/p/lessons-from-forecasting-the-ukraine?utm_source=url). Like Karlin, Hanania correctly guessed early on that Putin would invade Ukraine. On February 2, when Metaculus was at 49% (and I was at 40 - 50%), Hanania said 65% chance. Over the next few weeks, he increased his probability at the same rate as the average, so that just before the war started he was giving it a 95% chance. This is very impressive - both because he was right, and because of how careful, honest, and public he was with his predictions. On his Lessons post, he wrote:

> I’m proud of my record forecasting the invasion, given that it went against most of the predictions of those who generally share my foreign policy views. Anyone can occasionally be correct by following the same heuristic they always use, but I showed intellectual flexibility here by determining that American intelligence was likely correct. Karlin is the only other prominent US foreign policy skeptic I know of who thought war was even more likely than the conventional wisdom suggested, and he deserves credit for that (if you know of others, mention them in the comments). Part of the reason I came to the right conclusion was that I was even more pessimistic than most anti-interventionists were about the degree of rationality present in American foreign policy. For example, my friend Max Abrahms was saying until very recently that Putin was hoping for some concessions that [would allow him to avoid war](https://twitter.com/MaxAbrahms/status/1495770382598561793) (to be fair, Max has been more correct than me on the invasion running into difficulties). I thought that was possible too, but I had little hope that American politics would allow Biden to strike a deal. When it became clear that negotiating over the NATO open door policy wasn’t even on the table, I increased my estimate of the probability for war. To his credit, Max has admitted I was right, [as have others](https://twitter.com/RichardHanania/status/1496860395134357505) I’ve been texting with over the last few months. I also give credit to [Saagar](https://twitter.com/esaagar/status/1496853536293933057), [Philippe](https://twitter.com/phl43/status/1496592744252416000), and [Michael Tracey](https://mtracey.substack.com/p/what-i-got-wrong-about-the-invasion?utm_source=url) for publicly acknowledging mistakes.
>
> As I’ve said before, “trust the experts” and “don’t trust the experts” are both bad heuristics (see [the Substack article](https://richardhanania.substack.com/p/tetlock-and-the-taliban?utm_source=url) and [its](https://www.nytimes.com/2021/09/20/opinion/afghanistan-experts-expertise.html) *[NYT](https://www.nytimes.com/2021/09/20/opinion/afghanistan-experts-expertise.html)* [version](https://www.nytimes.com/2021/09/20/opinion/afghanistan-experts-expertise.html)). Talking to anti-interventionists about the potential for a Russian invasion throughout January and February, I was struck by how often they would ignore my arguments and instead say things like “this guy has always been right, so I’m going to trust him.” That might be an adequate strategy when you have nothing else to go on, but here we had a lot of evidence relevant to what was likely to happen, including satellite imagery of military movements and reports on the state of diplomatic negotiations. I’ve found one of the most insightful analysts throughout the crisis to be [Dmitri Alperovitch.](https://twitter.com/dalperovitch?s=11) But when I shared one of his tweets with a very intelligent friend of mine, his response was basically “his profile says that he is affiliated with Crowdstrike, which was involved in the Russiagate hoax. How can we believe anything he says?” I can understand the reaction, but it seems like this kind of thinking led many intelligent observers astray.

Still, like Karlin, he flubbed the Ukrainian resistance.
In [Russia As The Great Satan In The Liberal Imagination](https://richardhanania.substack.com/p/russia-as-the-great-satan-in-the?utm_source=url) (subtitled: “Why the culture war is global and there will be no insurgency in Ukraine”) he wrote:

> Once we step aside from culture war resentments and focus on the hard realities of geopolitics, it is clear that Russia will eventually get its way because it cares more about Ukraine than the US does, and has the ability to threaten or use military force to get what it wants. When resolve and capabilities line up on the same side, that side is going to win. And the reason that Americans don’t care about Ukraine is that Ukraine objectively does not matter to the US. All the sophistry in the world coming from MSNBC hosts, ex-generals on the payrolls of defense contractors, and think tank analysts can’t change people’s perceptions here.
>
> The only questions now are how far Putin will go, and how tough American sanctions will be. Washington is now deluding itself into believing that it can help facilitate an insurgency in Ukraine. This will not happen. One of the best predictors of insurgency is having the kinds of terrain that governments cannot reach, like swamps, forests and mountains. Ukraine is the heart of the great Eurasian steppe […] Even setting aside the geography of the country, there is no instance I’m aware of in which a country or region with a total fertility rate below replacement has fought a serious insurgency. Once you’re the kind of people who can’t inconvenience yourselves enough to have kids, you are not going to risk your lives for a political ideal.

On his more recent post, he wrote:

> Regarding the prediction that there would be no insurgency, it is not technically false yet, since the conventional phase of the war is still ongoing, but I have to be honest and say that I expected a lot less fighting than we’re seeing. As already mentioned, I thought it might be like the fall of Kabul, where the weaker side just melted away even if it could’ve theoretically held out longer. But the Afghan government was probably uniquely bad, and the fact that it performed so poorly didn’t mean that Ukraine was a fake nation or that no one would fight for its government. A clue should’ve been that, although the Afghan government was losing territory even with American support, Ukraine had been doing an adequate job in its own defense since 2014 and holding its own in the war in the Donbas.
>
> My argument that Ukraine did not have a high enough TFR to tolerate the casualties required for an asymmetric conflict may well have been motivated reasoning, based on my view that not having enough children is a sign of moral and spiritual decline. We’ll see soon enough if the view was well founded or not, as a Ukrainian collapse is still possible, even if it takes longer than I would have thought. Similarly, a source of wishful thinking here might have been my suspicion that, if there was a more sustained conflict, it would mean a great deal of Western involvement, which would raise the risk of nuclear war.

His performance is basically the same as Karlin’s, and I’m giving him the same grade.

#### Dmitri Alperovitch: B+

Alperovitch is a Russian-American cybersecurity executive. When I asked in an Open Thread, several people named him as one of the people who most consistently predicted invasion. Is he an exception to the rule that people who got the invasion right got the resistance wrong and vice versa? I’m not sure.
He didn’t talk much about how Ukrainian resistance would go, although see here:

Still, I think he comes out the best overall of anyone on this list.

#### Tyler Cowen: ???

This tweet has been going around recently:

But it’s from February 21. On February 21, Putin announced he was sending “peacekeepers” into Donbas. Most sources say the invasion of Ukraine started February 24. I am having trouble finding evidence of Tyler saying other specific things. On February 12, he posted [this quote](https://marginalrevolution.com/marginalrevolution/2022/02/from-the-comments-on-putin-and-russia.html), which seemed to maybe suggest Russia would invade around the 20th. On February 17th, [he wrote](https://marginalrevolution.com/marginalrevolution/2022/02/a-simple-model-of-putin-and-the-ukraine-crisis.html):

> I think the correct model here is “Putin has put down so many chips, he can’t walk away with nothing. He wants to wreck Ukraine (more than taking territory per se). He will do the minimum amount he can that leaves him with a strong probability of having wrecked Ukraine, and no more.” That still leaves a broad range of possible outcomes, but at the moment that is my mental model for updating with new information.

Is the current invasion “the minimum amount he can [do] that leaves him with a strong probability of having wrecked Ukraine”? But on February 24, he [made an extremely strong prediction](https://marginalrevolution.com/marginalrevolution/2022/02/a-simple-model-of-what-putin-will-do-for-an-endgame.html) whose truth has yet to be determined:

> I would start with two observations:
>
> 1. Putin’s goals have turned out to be more expansive than many (though not I) expected.
>
> 2. There are increasing doubts about Putin’s rationality.
>
> I’ll accept #1, which has been my view all along, but put aside #2 for the time being.
>
> In my simple model, in addition to a partial restoration of the empire, Putin desires a fundamental disruption to the EU and NATO. And much of Ukraine is not worth his ruling. As things currently stand, splitting Ukraine and taking the eastern half, while terrible for Ukraine (and for most of Russia as well), would not disrupt the EU and NATO. So when Putin is done doing that, he will attack and take a slice of territory to the north. It could be eastern Estonia, or it could relate to the [Suwalki corridor](https://cepa.org/the-suwalki-corridor/), but in any case the act will be a larger challenge to the West because of explicit treaty commitments. Then he will see if we are willing to fight a war to get it back. There are fixed costs to mobilization and incurring potential public wrath over the war, so as a leader you might as well “get the most out of it.” Our best hope is that the current Russian operations in Ukraine go sufficiently poorly that it does not come to this.

I am mildly annoyed by Tyler being much less clear than (eg) Richard or Anatoly in making specific assertions, yet also claiming the mantle of a prescient predictor. Still, if what he says about the Suwalki Corridor comes to pass, I will give him the mantle. (If it doesn’t, he can just say it was because the current Russian operations in Ukraine went sufficiently poorly. Have I mentioned being mildly annoyed?)

#### Samo Burja: C

Samo is a rationalist success story and a smart guy, and I appreciate most of his takes. And he’s been careful not to say anything specific that might later get proven false.
Still, I think his biggest position going into this war was “Russia Strong”:

Events seem to be tending in the “Russia Not Strong” direction compared to where they were a week or two ago. Even if Russia wins - which they still might do! - and even if Anatoly is right that Western propaganda has us underestimating Russian military successes, I think perceptions of the competence of their army, their ability to match NATO, and their geopolitical acumen have taken a hit. (For another super-interesting take on Samo’s article on Russian military reforms, see Kamil Galeev [here](https://twitter.com/kamilkazani).)

When I say he is careful not to say anything specific that might be proven false, this isn’t *exactly* a compliment. I think it’s better optics, but worse rationality, compared to people like Karlin and Hanania who make extremely clear predictions with numbers attached, sometimes get them totally wrong, and then admit it and write thoughtful essays on how they screwed up. Like Tyler Cowen, Samo is going for the “shadowy Machiavellian genius” role, which gives him a strong incentive to avoid humiliation. But part of our civilizational immune system against shadowy Machiavellian genius figures is demanding that they do this even when they would prefer not to! I like Samo enough (and have enough probability on him *actually* being a shadowy Machiavellian genius) that I want him to up his game!

#### LindyMan: D-

The only reason this isn’t an F is that I assume LindyMan plagiarized it from someone else, and I don’t want to blame him for their mistake.

#### Michael Tracey: D

Tracey at least wrote a thoughtful reflection on his failed prediction, which you can find [here](https://mtracey.substack.com/p/what-i-got-wrong-about-the-invasion?utm_source=url):

> I also want to acknowledge the obvious: that the main theme of my reporting and commentary on this issue has been major skepticism toward the US Government and media, particularly in their prognostications of an “imminent” invasion. I still maintain there’s much to criticize — it strikes me as very conceivable that this constant barrage of maximalist predictions could have perversely influenced Putin’s calculations. [Premature](https://twitter.com/mtracey/status/1496605197430398986) statements of fact by politicians and pundits that an invasion had already occurred, when it had not yet occurred, were reckless in such fraught circumstances. Journalists who abused their access to “official” [anonymous sources](https://twitter.com/mtracey/status/1495455882305486858) did the opposite of inspiring confidence in the stark warnings they were pumping out. And so on and so forth. But yes, it has to be said: the official prophecies have in fact been tragically borne out.
>
> I’ll need to reflect more on the implications of this outcome. All I can promise you is transparency, honesty, and a willingness to correct for any blindspots. One potential blindspot here was placing too much emphasis on the repeated and vehement criticism by *actual Ukrainian officials* of what they decried as alarmist US rhetoric. Just last week, I [interviewed](https://mtracey.substack.com/p/crazy-us-media-coverage-is-a-bigger) a sitting member of the Ukraine parliament who straight-up told me that externally-generated “panic” was a far greater threat to Ukraine’s security than any forthcoming Russian invasion.
> Ukraine’s *president,* over and over again, was even more [searing](https://twitter.com/mtracey/status/1493087728619200512) in his own repudiations of US government and media behavior. I don’t have a great explanation for this dynamic yet, but it’s possible what they were telling me tracked too closely with my pre-existing disdain for official US claims vis-a-vis Russia — which in the very recent past *have* been wildly wrong and destructive. As you’ll remember if you lived through Russiagate.
>
> I did try to qualify much of what I said on this topic to allow for the possibility that an invasion could in fact take place. In a February 10 [article](https://mtracey.substack.com/p/if-world-war-iii-happens-you-can) here on Substack, I wrote that a Russian escalation was “ominously plausible.” And I still think the formulation in this tweet from January 23 is very much legitimate:
>
> Still, I can understand why people who only caught snippets of certain tweets thought I was a 100% incorrigible “invasion denier.” I never denied the possibility of an invasion — again, I always made a point to explicitly *allow* for that very possibility. But the reality of online “content production” is that observers will impressionistically pick up on broad themes you seem to be projecting, and if real-world events appear to contradict the impression of you they’ve developed, they will conclude you’ve been proven disastrously wrong. Especially if they already don’t like you anyway. It’s not an entirely unreasonable instinct — I’ve probably been guilty of it myself at times. They also weren’t crazy to develop the impression that I was highly skeptical of what was being claimed about the imminence of an invasion, notwithstanding the many caveats I tried to append.
>
> I’m not sure what the solution is. It can’t be to declare that Joe Biden is Nostradamus, or that everyone should now get together and sing kumbaya with anonymous US intelligence officials. And it can’t be that the over-eager war provocations rampant in US media are suddenly just swell. Whatever else happens with Ukraine, a presumption of incredulity toward these government/media factions still has to remain broadly in place — albeit with new Russia-specific adjustments given the crazed actions of Putin. I’ll have more to say soon on a substantive level about the nightmare that’s unfolding, including the culpability of US policy and political culture in setting the stage for this insane attack. Because it’s more vital than ever to not be cowed into ignoring the typically disastrous role of US intervention. But first I thought I owed at least a partial accounting of my own record. Consider it a work in progress.

My brain has never been particularly good at distinguishing Tracey from the similarly-named Matt Taibbi, but this is fine - Taibbi [did the same thing](https://taibbi.substack.com/p/note-to-readers-on-the-invasion-of?utm_source=url).

#### War Nerd: F

I was going to make this a C or D for technically only criticizing the predictions of specific dates (the specific dates were indeed wrong). But this pushed me over the edge:

Once you’re writing songs making fun of other Ukraine predictors it’s less of a forecasting failure and more “something is seriously wrong with you”. Let’s stay with F.

#### Other Pundits

You can find a good list of other pundits who did poorly [here](https://mobile.twitter.com/TheWastingTimes/status/1496833812621557761), eg:

There’s a discussion of who in China was right vs.
wrong [here](https://www.stimson.org/2022/ukraine-did-china-have-a-clue/); I haven’t focused on it since I don’t recognize any of the Chinese people involved, but my takeaway is that the government seemed genuinely wrong. They weren’t just covering for Putin; they were actually taken by surprise.

My very quick search didn’t find any pundit who successfully predicted both the Russian invasion and the strong Ukrainian resistance. I couldn’t even really find anybody who predicted one correctly and was silent on the other (I think Clay Graubard of Global Guessing [managed this](https://twitter.com/ClayGraubard/status/1496699988801433602), but he’s a superforecaster, not a pundit). If you know someone in this category, please let me know so I can give them an appropriate amount of glory.

### General Thoughts

Most of the people who failed badly here failed based on their political precommitments. A bunch of leftists - Michael Tracey, Matt Taibbi, Glenn Greenwald - failed because they couldn’t believe that warmongering intelligence officials trying to scare everyone about Russia had a point. They admittedly had great heuristics: there are lots of warmongers, our intelligence community has been really wrong lots of times before, and the past few years have seen a lot of really embarrassing Russia-related paranoia. Unfortunately, the relevant Less Wrong post here is [Reversed Stupidity Is Not Intelligence](https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence), and the relevant ACX post is [Heuristics That Almost Always Work](https://astralcodexten.substack.com/p/heuristics-that-almost-always-work), so they failed.

(Can we do better than this level of agnosticism? Someone suggested that the intelligence community might suck at the sort of small-state terrorism work it’s been asked to do the past few decades, but that “infiltrating Russia” is kind of its bread and butter and a big part of its institutional DNA. Maybe we should trust it more on Great Power conflict than on tinpot dictator stuff? Maybe the other relevant ACX post here is [Bounded Distrust](https://astralcodexten.substack.com/p/bounded-distrust?utm_source=url)?)

Hanania and Karlin, the two people who really succeeded at calling the invasion, were both kind of right-wing culture warriors who had political reasons to think Russia Strong and Western Culture Weak. I think this gave them an advantage in expecting Putin to act (maybe you could even frame this as “they were thinking along the same lines Putin was”?), but then gave them a disadvantage in predicting Ukrainian resistance.

One important thing I’ve learned again and again about prediction is that successes are usually less about being smart, and more about having a bias which luckily corresponds to whatever ends up happening. Lots of people failed based on their political precommitments, but I suspect the successes were *also* based on political precommitments.

The US military-security complex and centrist establishment come out of this looking pretty good (in theory - in practice I can’t think of any of their representatives who actually made confident correct predictions). Partly this might be because they’re genuinely smart (it seems like they had real intel on what Russia was doing). But partly it’s because the truth happened to match their precommitments. They’re pro-war (or at least pro-being-concerned-about-war), so they beat the drums of “war’s going to come”, and then it came, and they looked smart.
And they’re pro-proxy-war (or at least pro-cheering-on-our-allies), so they cheered on Ukraine, and then Ukraine did well, and they looked smart.

Thanks to the 2022 ACX Predictions Contest, I will eventually have data about all of your Ukraine-related predictions which will include demographic factors like your political beliefs. Once I get a chance to analyze that I might be able to make some of these points more forcefully.
# Open Thread 213

This is the weekly visible open thread. Odd-numbered open threads will be no-culture-wars, even-numbered threads will be culture-wars-allowed. This one is odd-numbered, so be careful. Otherwise, post about anything else you want. Also:

**1:** Eli Lifland and Misha Yagudin have asked me to announce the **[Impactful Forecasting Prize](https://forum.effectivealtruism.org/posts/HDoMrQFG76QtkdrZJ/impactful-forecasting-prize-for-forecast-writeups-on-curated)**, with $2,000 for first prize and more money available for other winners. Read the rules (bolded link above), write up forecasts on one of [these Metaculus questions](https://airtable.com/shrHrxIsFSTZsfx9F/tblgJ92PeaMKc8Uz0) and submit via [this form](https://forms.gle/Sk1rGLwLAn6Bb8Hd6) by March 11. They’ll also be having a meetup in [Gather](https://gather.town/app/1bm9YjMhyZV6yOMU/Impactful-Forecasting) on March 2.

**2:** Thanks to everyone who attended the Austin meetup today! As for the rest of you, probably I’ll see you at the next one, after you inevitably move to Austin like everyone else.
# Austin Meetup Correction

Austin meetup is still this Sunday, 2/27, 12-3. But **the location has been switched to Moontower Cider Company at 1916 Tillery St**. The organizer is still sbarta@gmail.com, and you can still contact him if you have any questions.

As per usual procedure, everyone is invited. Please feel free to come even if you feel awkward about it, even if you’re not “the typical ACX reader”, even if you’re worried people won’t like you, etc. You may (but don’t have to) RSVP [here](https://www.lesswrong.com/events/95LYeapL9ZiRgp689/scott-alexander-visit-and-mixer).
# Biological Anchors: A Trick That Might Or Might Not Work

## Introduction

I've been trying to review and summarize Eliezer Yudkowsky's recent dialogues on AI safety. Previously in sequence: [Yudkowsky Contra Ngo On Agents](https://astralcodexten.substack.com/p/practically-a-book-review-yudkowsky). Now we’re up to Yudkowsky contra Cotra on biological anchors, but before we get there we need to figure out what Cotra's talking about and what's going on.

The [Open Philanthropy Project](https://www.openphilanthropy.org/) ("Open Phil") is a big effective altruist foundation interested in funding AI safety. It's got $20 billion, probably the majority of money in the field, so its decisions matter a lot and it’s very invested in getting things right. In 2020, it asked senior researcher Ajeya Cotra to produce **[a report on when human-level AI would arrive](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP)**. It says the resulting document is "informal" - but it’s 169 pages long and likely to affect millions of dollars in funding, which some might describe as making it *kind* of formal. The report finds a 10% chance of “transformative AI” by 2031, a 50% chance by 2052, and an almost 80% chance by 2100.

Eliezer rejects their methodology and expects AI earlier (he doesn’t offer many numbers, but [here](https://www.econlib.org/archives/2017/01/my_end-of-the-w.html) he gives Bryan Caplan 50-50 odds on 2030, albeit [not totally seriously](https://www.econlib.org/archives/2017/01/my_end-of-the-w.html#comment-166919)). He made the case in his own very long essay, **[Biology-Inspired AGI Timelines: The Trick That Never Works](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works)**, sparking a bunch of arguments and counterarguments and even more long essays.

There's a small cottage industry of summarizing the report already, eg OpenPhil CEO Holden Karnofsky's [article](https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/) and Alignment Newsletter editor Rohin Shah's [comment](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines?commentId=7d4q79ntst6ryaxWD). I've drawn from both for my much-inferior attempt.

## Part I: The Cotra Report

Ajeya Cotra is a senior research analyst at OpenPhil. She's assisted by her fiancé Paul Christiano (compsci PhD, OpenAI veteran, runs an AI alignment nonprofit) and to a lesser degree by other leading lights. Although not everyone involved has formal ML training, if you care a lot about whether efforts are “establishment” or “contrarian”, this one is probably more establishment.

The report asks when we will first get "transformative AI" (ie AI which produces a transition as impressive as the Industrial Revolution; probably this will require it to be about as smart as humans). Its methodology is:

1. Figure out how much inferential computation the human brain does.
2. Try to figure out how much training computation it would take, right now, to get a neural net that does the same amount of inferential computation. Get some mind-bogglingly large number.
3. Adjust for "algorithmic progress", ie maybe in the future neural nets will be better at using computational resources efficiently. Get some number which, realistically, is still mind-bogglingly large.
4. Probably if you wanted that mind-bogglingly large amount of computation, it would take some mind-bogglingly large amount of money.
But computation is getting cheaper every year. Also, the economy is growing every year. Also, the share of the economy that goes to investments in AI companies is growing every year. So at some point, some AI company will actually be able to afford that mind-bogglingly large amount of money, deploy the mind-bogglingly large amount of computation, and train the AI that has the same inferential computation as the human brain.

5. Figure out what year that is.

Does this encode too many questionable assumptions? For example, might AGI come from an ecosystem of interacting projects (eg how the Industrial Revolution came from an ecosystem of interacting technologies) such that nobody has to train an entire brain-sized AI in one run? Maybe - in fact, Ajeya thinks the Industrial Revolution scenario might be *more* likely than the single-run scenario. But she finds the single-run scenario a useful upper bound (later she mentions other reasons to treat it as a *lower* bound, and compromises by treating it as a central estimate) and still thinks it’s worth figuring out how long it will take. So let’s go through the steps one by one:

#### How Much Computation Does The Human Brain Do?

Step one - figuring out how much computation the human brain does - is a daunting task. A successful solution would look like a number in FLOP/S (floating point operations per second), a basic unit of computation in digital computers.

Luckily for Ajeya and for us, another OpenPhil analyst, Joe Carlsmith, finished [a report on this](https://www.openphilanthropy.org/brain-computation-report) a few months prior. It concluded the brain probably uses 10^13 - 10^17 FLOP/S. Why? Partly because this was the number given by most experts. But also, there are about 10^15 synapses in the brain, each one spikes about once per second, and a synaptic spike probably does about one FLOP of computation. (I'm not sure if he's taking into account the recent research suggesting that computation sometimes happens within dendrites - see section 2.1.1.2.2 of his report for complications and why he feels okay ignoring them - but realistically there are lots of order-of-magnitude-sized gray areas here, and he gives a sufficiently broad range that as long as the unknown unknowns aren't all in the same direction it should be fine.)

So a human-level AI would also need to do 10^15 floating point operations per second? Unclear. Computers can run on more or less efficient algorithms; neural nets might use their computation more or less effectively than the brain. You might think it would be more efficient, since human designers can do better than the blind chance of evolution. Or you might think it would be less efficient, since many biological processes are still far beyond human technology. Or you might do what OpenPhil did and just look at a bunch of examples of evolved vs. designed systems and see which are generally better:

*Source: [This document](https://docs.google.com/document/d/1HUtUBpRbNnnWBxiO2bz3LumEsQcaZioAPZDNcsWPnos/edit) by Paul Christiano.*

Ajeya combines this with another metric where they see how existing AI compares to animals with apparently similar computational capacity; for example, she says that DeepMind’s StarCraft engine has about as much inferential compute as a honeybee and seems about equally subjectively impressive. I have no idea what this means. Impressive at what? Winning multiplayer online games? Stinging people?
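To keep the orders of magnitude straight, here is Carlsmith's basic arithmetic as a few lines of Python - a minimal sketch, where the numbers are the report's point estimates and the one-FLOP-per-spike figure is the assumption doing most of the work:

```python
# Carlsmith-style estimate of the brain's compute. All values are
# order-of-magnitude point estimates from the report, not measurements.
SYNAPSES = 1e15        # synapses in the human brain
SPIKES_PER_SEC = 1.0   # average spikes through each synapse per second
FLOP_PER_SPIKE = 1.0   # assumed computational work per synaptic spike

brain_flops = SYNAPSES * SPIKES_PER_SEC * FLOP_PER_SPIKE
print(f"Brain inference compute: ~{brain_flops:.0e} FLOP/S")  # ~1e+15
# The report's full range, reflecting all the gray areas: 1e13 to 1e17.
```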
In any case, they decide to penalize AI by one order of magnitude compared to Nature, so a human-level AI would need to do 10^16 floating point operations per second.

#### How Much Compute Would It Take To Train A Model That Does 10^16 Floating Point Operations Per Second?

So an AI could potentially equal the human brain with 10^16 FLOP/S. Good news! There’s [a supercomputer in Japan](https://en.wikipedia.org/wiki/Fugaku_(supercomputer)) that can do 10^17 FLOP/S!

*It looks like this ([source](https://spectrum.ieee.org/japans-fugaku-supercomputer-is-first-in-the-world-to-simultaneously-top-all-high-performance-benchmarks))*

So why don’t we have AI yet? Why don’t we have *ten* AIs? In the modern paradigm of machine learning, it takes very big computers to *train* relatively small end-product AIs. If you tried to train GPT-3 on the same kind of medium-sized computers you run it on, it would take between tens and hundreds of years. Instead, you train GPT-3 on giant supercomputers like the ones above, get results in a few months, then run it on medium-sized computers, maybe ~10x better than the average desktop.

But our hypothetical future human-level AI is 10^16 FLOP/S in inference mode. It needs to *run on* a giant supercomputer like the one in the picture. Nothing we have now could even begin to train it.

There’s no direct and obvious way to convert inference requirements to training requirements. Ajeya tries assuming that each parameter will contribute about 10 FLOPs, which would mean the model would have about 10^15 parameters (GPT-3 has about 10^11 parameters). Finally, she uses some empirical scaling laws derived from looking at past machine learning projects to estimate that training 10^15 parameters would require H\*10^30 FLOPs, where H represents the model’s “horizon”.

If I understand this correctly, “horizon” is a reinforcement learning concept: how long does it take to learn how much reward you got for something? If you’re playing a slot machine, the answer is one second. If you’re starting a company, the answer might be ten years. So what horizon do you need for human-level AI? Who knows? It probably depends on what human-level task you want the AI to do, plus how well an AI can learn to do that task from things less complex than the entire task. If writing a good book is mostly about learning to write good sentences and then stringing them together, a book-writing AI can get away with a short horizon. If nothing short of writing an entire book and then evaluating it to see whether it is good or bad can possibly teach you book-writing, the AI will need a long time horizon. Ajeya doesn’t claim to have a great answer for this, and considers three models: horizons of a few minutes, a few hours, and a few years. Each step up adds another three orders of magnitude, so she ends up with three estimates of 10^30, 10^33, and 10^36 FLOPs.

(for reference, the lowest training estimate - 10^30 - would take the supercomputer pictured above 300,000 years to complete; the highest, 300 billion.)
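If I've followed the chain correctly, it runs: inference FLOP/S → parameter count → training FLOPs, with the horizon as a final multiplier. Here is that chain as a sketch (variable names are mine, and the H\*10^30 shortcut stands in for the report's actual empirical scaling fit):

```python
# From inference compute to training compute, following the report's shortcuts.
BRAIN_FLOPS = 1e16        # human-level inference target, after the 1 OOM penalty
FLOP_PER_PARAM = 10       # assumed inference FLOPs contributed by each parameter
params = BRAIN_FLOPS / FLOP_PER_PARAM   # ~1e15 parameters (GPT-3: ~1e11)

# Training cost ~ H * 1e30 FLOPs; each step up in horizon adds three OOMs.
horizons = {"minutes": 1e0, "hours": 1e3, "years": 1e6}
FUGAKU_FLOPS = 1e17       # the supercomputer pictured above, roughly
SECONDS_PER_YEAR = 3600 * 24 * 365

for name, h in horizons.items():
    train_flops = h * 1e30
    fugaku_years = train_flops / FUGAKU_FLOPS / SECONDS_PER_YEAR
    print(f"{name:>7}-horizon: {train_flops:.0e} FLOPs = ~{fugaku_years:,.0f} Fugaku-years")
# minutes-horizon: ~300,000 Fugaku-years; years-horizon: ~300 billion.
```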
#### Or What If We Ignore All Of That And Do Something Else?

This is piling a lot of assumptions atop each other, so Ajeya tries three other methods of figuring out how hard this training task is.

Humans seem to be human-level AIs. How much training do *we* need? You can analogize our childhood to an AI’s training period. We receive a stream of sense-data. We start out flailing kind of randomly. Some of what we do gets rewarded. Some of what we do gets punished. Eventually our behavior becomes more sophisticated. We subject our new behavior to reward or punishment, fine-tune it further.

*Rent* asks us: how do you measure the life of a woman or man? It answers: “in daylights, in sunsets, in midnights, in cups of coffee; in inches, in miles, in laughter, in strife.” But you can also measure in floating point operations, in which case the answer is about 10^24. This is actually trivial: multiply the 10^15 FLOP/S of the human brain by the ~10^9 seconds of childhood and adolescence.

This new estimate of 10^24 is much lower than our neural net estimate of 10^30 - 10^36 above. In fact, it’s only a hair above the amount it took to train GPT-3! If human-level AI was this easy, we should have hit it by accident sometime in the process of making a GPT-4 prototype. Since OpenAI hasn’t mentioned this, probably it’s harder than this and we’re missing something.

Probably we’re missing that humans aren’t blank slates. We don’t start at zero and then only use our childhood to train us further. The very structure of our brain encodes certain assumptions about what kinds of data we should be looking out for and how we should use it. Our training data isn’t just what we observed during childhood, it’s everything that any of our ancestors observed during evolution.

How many floating-point operations is the evolutionary process? Ajeya estimates 10^41.

I can’t believe I’m writing this. I can’t believe someone actually estimated the number of floating point operations involved in jellyfish rising out of the primordial ooze and eventually becoming fish and lizards and mammals and so on all the way to the Ascent of Man. Still, the idea is simple. You estimate how long animals with neurons have been around (10^16 seconds), multiply by the total number of animals alive at any given second (10^20) and the average FLOP/S per animal (10^5) - you can read more [here](https://docs.google.com/document/d/1k7qzzn14jgE-Gbf0CON7_Py6tQUp2QNodr_8VAoDGnY/edit#heading=h.gvc1xyxlemkd) - and it comes out to 10^41 FLOPs.

I would not call this an *exact* estimate - for one thing, it assumes that all animals are nematodes, on the grounds that non-nematode animals are basically a rounding error in the grand scheme of things. But it does justify this bizarre assumption, and I don’t feel inclined to split hairs here - surely the total amount of computation performed by evolution is irrelevant except as an extreme upper bound? Surely the part where Australia got all those weird marsupials wasn’t strictly necessary for the human brain to have human-level intelligence?

One more weird human training data estimate attempt: what about the genome? If in some sense a bit of information in the genome is a “parameter”, how many parameters does that suggest humans have, and how does it affect training time? Ajeya calculates that the genome has about 7.5\*10^8 parameters (compared to 10^15 parameters in our neural net calculation, and 10^11 for GPT-3). So we can…

Okay, I’ve got to admit, this doesn’t have quite the same “huh?!” factor as trying to calculate the number of FLOPs in evolution, but it is in a lot of ways even crazier. The [Japanese canopy plant](https://en.wikipedia.org/wiki/Paris_japonica) has a genome fifty times larger than ours, which suggests that genome size doesn’t correspond very well to organism awesomeness. Also, most of the genome is coding for weird proteins that stabilize the shape of your kidney tubule or something, why should this matter for intelligence?

*The Japanese canopy plant. I think it is very pretty, but probably low prettiness per megabyte of DNA.*
I think Ajeya would answer that she’s debating orders of magnitude here, and each of these weird things costs only a few OOMs and probably they all even out. That still leaves the question of why she thinks this approach is interesting at all, to which she answers that:

> The motivating intuition is that evolution performed a search over a space of small, compact genomes which coded for large brains rather than directly searching over the much larger space of all possible large brains, and human researchers may be able to compete with evolution on this axis.

So maybe instead of having to figure out how to generate a brain per se, you figure out how to generate some short(er) program that can output a brain? But this would be very different from how ML works now. Also, you need to give each short program the chance to unfold into a brain before you can evaluate it, which evolution has time for but we probably don’t. Ajeya sort of mentions these problems and counters with an argument that maybe you could think of the genome as a reinforcement learner with a long horizon. I don’t quite follow this but it sounds like the sort of thing that almost might make sense. Anyway, when you apply the scaling laws to a 7.5\*10^8 parameter genome and penalize it for a long horizon, you get about 10^33 FLOPs, which is weirdly similar to some of the other estimates.

So now we have six different training cost estimates. First, neural nets with short, medium, and long horizons, which are 10^30, 10^33, and 10^36 FLOPs, respectively. Next, the amount of training data in a human lifetime - 10^24 FLOPs - and in all of evolutionary history - 10^41 FLOPs. And finally, this weird genome thing, which is 10^33 FLOPs.

An optimist might say “Well, our lowest estimate is 10^24 FLOPs, our highest is 10^41 FLOPs, those sound like kind of similar numbers, at least there’s no “5 FLOPs” or “10^9999 FLOPs” in there.” A pessimist might say “The difference between 10^24 and 10^41 is seventeen orders of magnitude, ie a factor of 100,000,000,000,000,000 times. This barely constrains our expectations at all!”

Before we decide who to trust, let’s remember that we’re still only at Step 2 of our five-step methodology, and continue.

#### How Do We Adjust For Algorithmic Progress?

So today, in 2022 (or in 2020 when this was written, or whenever), assume it would take about 10^33 FLOPs to train a human-level AI. But technology constantly advances. Maybe we’ll discover ways to train AIs faster, or run AIs more efficiently, or something like that. How does that factor into our estimate?

Ajeya draws on Hernandez & Brown’s [Measuring The Algorithmic Efficiency Of Neural Networks](https://arxiv.org/ftp/arxiv/papers/2005/2005.04305.pdf). They look at how many FLOPs it took to train various image recognition AIs to an equivalent level of performance between 2012 and 2019, and find that over those seven years it decreased by a factor of 44x, ie training efficiency doubles every sixteen months! Ajeya assumes a doubling time slightly longer than that, because it’s easier to make progress in simple well-understood fields like image recognition than in the novel task of human-level AI. She chooses a doubling time of “merely” 2 - 3 years.

If training efficiency doubles every 2-3 years, it would decuple in about 10 years. So although it might take 10^33 FLOPs to train a human-level AI today, in ten years or so it may take only 10^32, in twenty years 10^31, and so on.
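As code, the algorithmic-progress adjustment is just exponential decay on the training requirement. A minimal sketch, assuming the medium-horizon central estimate and a 3-year halving time (the slow end of Ajeya's range, which is what makes a decade work out to roughly one order of magnitude):

```python
# Training requirement shrinking under algorithmic progress.
BASE_YEAR = 2020
BASE_TRAIN_FLOPS = 1e33   # medium-horizon central estimate
HALVING_YEARS = 3.0       # assumed; slow end of the report's 2-3 year range

def train_flops_needed(year: float) -> float:
    """FLOPs needed to train a human-level model in a given year."""
    return BASE_TRAIN_FLOPS * 0.5 ** ((year - BASE_YEAR) / HALVING_YEARS)

for year in (2020, 2030, 2040, 2050):
    print(year, f"{train_flops_needed(year):.1e} FLOPs")
# Each decade shaves off roughly one order of magnitude: 1e33, ~1e32, ~1e31, ~1e30.
```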
#### When Will Anyone Have Enough Computational Resources To Train A Human-Level AI?

In 2020, AI researchers could buy computational resources at about $1 for 10^17 FLOPs. That means the 10^33 FLOPs you’d need to train a human-level AI would cost $10^16, ie ten quadrillion dollars. This is about twenty times more money than exists in the entire world.

But compute costs fall quickly. Some formulations of Moore’s Law suggest it halves every eighteen months. These no longer seem to hold exactly, but it does seem to be halving maybe once every 2.5 years. The exact number is kind of controversial: Ajeya admits it’s been more like once every 3-4 years lately, but she heard good things about some upcoming chips and predicted it might revert back to the longer-term faster trend (it’s been two years now, some new chips have come out, and this prediction is looking pretty good).

So as time goes on, algorithmic progress will cut the cost of training (in FLOPs), and hardware progress will also cut the cost of FLOPs (in dollars). So training will become gradually more affordable as time goes on. Once it reaches a cost somebody is willing to pay, they’ll buy human-level AI, and then that will be the year human-level AI happens.

What is the cost that somebody (company? government? billionaire?) is willing to pay for human-level AI? The most expensive AI training in history was AlphaStar, a DeepMind project that spent over $1 million to train an AI to play StarCraft (in their defense, it won). But people have been pouring more and more money into AI lately:

*Source [here](https://www.economist.com/technology-quarterly/2020/06/11/the-cost-of-training-machines-is-becoming-a-problem). This is about compute rather than cost, but most of the increase seen here has been companies willing to pay for more compute over time, rather than algorithmic or hardware progress.*

The StarCraft AI was kind of a vanity project, or science for science’s sake, or whatever you want to call it. But AI is starting to become profitable, and human-level AI would be *very* profitable. Who knows how much companies will be willing to pay in the future? Ajeya extrapolates the line on the graph forward to 2025 and gets $1 billion. This is starting to sound kind of absurd - the entire company OpenAI was founded with $1 billion in venture capital, it seems like a lot to expect them to spend more than $1 billion on a single training run. So Ajeya backs off from this after 2025 and predicts a “two year doubling time”.

This is not much of a concession. It still means that in 2040 someone might be spending $100 billion to train one AI. Is this at all plausible? At the height of the Manhattan Project, the US was investing about 0.5% of its GDP into the effort; a similar investment today would be worth $100 billion. And we’re about twice as rich as in 2000, so 2040 might be twice as rich as we are. At that point, $100 billion for training an AI is within reach of Google and maybe a few individual billionaires (though it would still require most or all of their fortune).

Ajeya creates a complicated function to assess how much money people will be willing to spend on giant AI projects per year. This looks like an upward-sloping curve. The line representing the likely cost of training a human-level AI looks like a downward-sloping curve. At some point, those two curves meet, representing when human-level AI will first be trained.
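To make the shape of this concrete, here is a toy version of the crossover calculation. The parameters are my stand-ins flavored by the numbers above ($1 per 10^17 FLOPs, $1 billion in 2025, a two-year spending doubling); the real report uses carefully-built curves and caps spending as a fraction of GDP, so treat the output as an illustration of the mechanism, not a forecast:

```python
# Toy crossover: falling training cost vs. rising willingness to spend.
TRAIN_FLOPS_2020 = 1e33        # medium-horizon training requirement in 2020
ALGO_HALVING = 3.0             # years per halving of FLOPs needed (assumed)
DOLLARS_PER_FLOP_2020 = 1e-17  # ~$1 per 1e17 FLOPs in 2020
PRICE_HALVING = 2.5            # years per halving of $/FLOP (assumed)
SPEND_2025 = 1e9               # assumed $1B max training run in 2025...
SPEND_DOUBLING = 2.0           # ...doubling every two years, never capped (unrealistic)

def training_cost(year: float) -> float:
    """Dollar cost to train a human-level model in a given year."""
    flops = TRAIN_FLOPS_2020 * 0.5 ** ((year - 2020) / ALGO_HALVING)
    price = DOLLARS_PER_FLOP_2020 * 0.5 ** ((year - 2020) / PRICE_HALVING)
    return flops * price

def willingness(year: float) -> float:
    """Largest training budget anyone will pay in a given year."""
    return SPEND_2025 * 2 ** ((year - 2025) / SPEND_DOUBLING)

year = 2025
while training_cost(year) > willingness(year):
    year += 1
print("Toy-model crossover:", year)  # ~2041 with these made-up parameters
```

That this lands a decade earlier than the report's median mostly reflects the uncapped spending curve; the point is just the mechanism of two exponentials crossing.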
#### So When Will We Get Human-Level AI?

The report gives a long distribution of dates based on weights assigned to the six different models, each of which has really wide confidence intervals and options for adjusting the mean and variance based on your assumptions. But the median of all of that is 10% chance by 2031, 50% chance by 2052, and almost 80% chance by 2100.

Ajeya takes her six models and decides to weigh them like so, based on how plausible she thinks each one is:

- 20% neural net, short horizon
- 30% neural net, medium horizon
- 15% neural net, long horizon
- 5% human lifetime as training data
- 10% evolutionary history as training data
- 10% genome as parameter number

She ends up with this:

#### How Sensitive Is This To Changes In Assumptions?

She very helpfully gives us a [Colab notebook](https://colab.research.google.com/drive/1Fpy8eGDWXy-UJ_WTGvSdw_hauU4l-pNS?usp=sharing) and [Google spreadsheet](https://docs.google.com/spreadsheets/d/1XV9PBEY2UtTWxsJ_zoAujnIGKpnHTwuvuvaaNOG30nY/edit#gid=505210495) to play around with. The notebook lets you change some of the more detailed parameters of the individual models, and the spreadsheet lets you change the big picture. I leave the notebook to people more dedicated to forecasting than I am, and will talk about the spreadsheet here. If you’re following along at home, the default spreadsheet won’t reflect Ajeya’s findings until you fill in the table in the bottom left like so:

Great. Now that we’ve got that, let’s try changing some stuff. I like the human childhood training data argument (Lifetime Anchor) more than Ajeya does, and I like the size-of-the-genome argument less. I’m going to change the weights to 20-20-0-20-20-20. Also, Ajeya thinks that someone might be willing to spend 1% of national GDP on training AIs, but that sounds really high to me, so I’m going to go down to 0.1%. Also, Ajeya’s estimate of 3% GDP growth sounds high for the sort of industrialized nations who might do AI research, so I’m going to lower it to 2%. Since I’m feeling mistrustful today, let’s use the Hernandez & Brown estimate for algorithmic-efficiency halving (1.5 years) in place of Ajeya’s *ad hoc* adjustments. And let’s use the current hardware cost halving time (3.5 years) instead of Ajeya’s overly rosy version (2.5 years). All these changes…

…don’t really do much. The median goes from 2052 to about 2065. Four of the models give results between 2030 and 2070. The last two, Neural Net With Long Horizon and Evolution, suggest probably no AI this century (although Neural Net With Long Horizon does think there’s a 40% chance by 2100). Ajeya doesn’t really like either of these models and they’re not heavily weighted in her main result.

#### Does The Truth Point To Itself?

Back up a second. Here’s something that makes me kind of nervous. Most of Ajeya’s numbers are kind of made up, with several order-of-magnitude error bars and simplifying assumptions like “all animals are nematodes”. For a single parameter, we get estimates spanning seventeen different orders of magnitude: the upper bound is one hundred quadrillion times the lower bound. *And yet* four of the six models, including two genuinely exotic ones, manage to get dates within twenty years of 2050. And 2050 is also the date everyone else focuses on.
Here’s the prediction-market-like site [Metaculus](https://www.metaculus.com/questions/5121/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of-stronger-operationalization/):

Their distribution looks a lot like Ajeya’s, and even has the same median, 2052 (though forecasters could have read Ajeya’s report). Katja Grace et al [surveyed 352 AI experts](https://arxiv.org/pdf/1705.08807.pdf), and they gave a median estimate of 2062 for an AI that could “outperform humans at all tasks” (though with many caveats and high sensitivity to question framing). This was before Ajeya’s report, so they definitely didn’t read it.

So lots of Ajeya’s different methods *and* lots of other people presumably using different methodologies or no methodology at all, all converge on this same idea of 2050 give or take a decade or two. An optimist might say “The truth points to itself! There are 371 known proofs of the Pythagorean Theorem, and they all end up in the same place. That’s because no matter what methodology you use, if you use it well enough you get to the correct answer.” A pessimist might be more suspicious; we’ll return to this part later.

#### FLOPS Alone Turn The Wheel Of History

One more question: what if this is all bullshit? What if it’s an utterly useless total garbage steaming pile of grade A crap?

Imagine a scientist in Victorian Britain, speculating on when humankind might invent ships that travel through space. He finds a natural anchor: the moon travels through space! He can observe things about the moon: for example, it is 220 miles in diameter (give or take an order of magnitude). So when humankind invents ships that are 220 miles in diameter, they can travel through space! Ships have certainly grown in size tremendously, from primitive kayaks to Roman triremes to Spanish galleons to the great ocean liners of the (Victorian) present.

*The AI forecasting organization AI Impacts actually has [a whole report on historical ship size trends](https://aiimpacts.org/historic-trends-in-ship-size/) to prove an unrelated point about technological progress, so I didn’t even have to make this graph up.*

Suppose our Victorian scientist lived in 1858, right when the Great Eastern was launched. The trend line for ship size crossed 100m around 1843, and 200m in 1858, so the doubling time is 15 years - but perhaps they notice this is going to be an outlier, so let’s round up a bit and say 18 years. The (one order of magnitude off estimate for the size of the) Moon is 350,000m, so you’d need ships to scale up by 350,000/200 = 1,750x before they’re as big as the Moon. That’s about 10.8 doublings, and the doubling time is 18 years, so we’ll get spaceships in . . . 2052 exactly.

(fudging numbers to land where you want is actually fun and easy)

*SS Great Eastern, the extreme outlier large steamship from 1858. This has become sort of a mascot for quantitative technological progress forecasters.*
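For the record, the scientist's fudged arithmetic reproduces in a few lines (all numbers taken from the parable above):

```python
import math

# The Victorian spaceship extrapolation, exactly as fudged above.
MOON_DIAMETER_M = 350_000   # the one-OOM-off "220 miles in diameter" estimate
SHIP_LENGTH_M = 200         # the ship-size trend line hits 200m in 1858
DOUBLING_YEARS = 18         # 15-year doubling, "rounded up a bit"

doublings = math.log2(MOON_DIAMETER_M / SHIP_LENGTH_M)  # ~10.8
year = 1858 + doublings * DOUBLING_YEARS
print(f"Moon-sized ships (hence spaceflight) by ~{year:.0f}")  # ~2052
```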
What is this scientist’s error? The big one is thinking that spaceship progress depends on some easily-measured quantity (size) instead of on fundamental advances (eg figuring out how rockets work). You can make the same accusation against Ajeya et al: you can have all the FLOPs in the world, but if you don’t understand how to make a machine think, your AI will be, well, a flop.

Ajeya discusses this a bit on page 143 of her report. There is some sense in which FLOPs and knowing-what-you’re-doing trade off against each other. If you have literally no idea what you’re doing, you can sort of kind of re-run evolution until it comes up with something that looks good. If things are somehow even worse than *that*, you could always run [AIXI](https://en.wikipedia.org/wiki/AIXI), a hypothetical AI design guaranteed to get excellent results as long as you have infinite computation. You could run a Go engine by searching the entire branching tree structure of Go - you *shouldn’t*, and it would take a zillion times more compute than exists in the entire world, but you *could*. So in some sense what you’re doing, when you’re figuring out what you’re doing, is coming up with ways to do already-possible things more efficiently. But that’s just algorithmic progress, which Ajeya has already baked into her model.

(our Victorian scientist: “As a *reductio ad absurdum*, you could always stand the ship on its end, and then climb up it to reach space. We’re just trying to make ships that are more efficient than that.”)

## Part II: Biology-Inspired AI Timelines: The Trick That Never Works

Eliezer Yudkowsky presents a more subtle version of these kinds of objections in an essay called [Biology-Inspired AI Timelines: The Trick That Never Works](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works), published December 2021.

Ajeya’s report is a 169-page collection of equations, graphs, and modeling assumptions. Yudkowsky’s rebuttal is a fictional dialogue between himself, younger versions of himself, famous AI scientists, and other bit players. At one point, a character called “Humbali” shows up begging Yudkowsky to be more humble, and Yudkowsky defeats him with devastating counterarguments. Still, he did found the field, so I guess everyone has to listen to him.

He starts: in 1988, famous AI scientist Hans Moravec predicted human-level AI by 2010. He was using the same methodology as Ajeya: extrapolate how quickly processing power would grow (in FLOP/S), and see when it would match some estimate of the human brain. Moravec got the processing power almost exactly right (it hit his 2010 projection in 2008) and his human brain estimate pretty close (he says 10^13 FLOP/S, Ajeya says 10^15, this 2 OOM difference only delays things a few years), yet there was not human-level AI in 2010. What happened?

Ajeya's answer could be: Moravec didn't realize that, in the modern ML paradigm, any given size of program requires a much bigger program to train. Ajeya, who has a 32-year advantage on Moravec, estimates approximately the same power for the finished program (10^16 vs. 10^13 FLOP/S) but says that training the 10^16 FLOP/S program will require 10^33ish FLOPs.

Eliezer agrees as far as it goes, but says this points to a much deeper failure mode, which was that Moravec had no idea what he was doing. He was assuming processing power of human brain = processing power of computer necessary for AGI. Why?

> *The human brain consumes around 20 watts of power. Can we thereby conclude that an AGI should consume around 20 watts of power, and that, when technology advances to the point of being able to supply around 20 watts of power to computers, we'll get AGI? […]*
>
> *You say that AIs consume energy in a very different way from brains? Well, they'll also consume computations in a very different way from brains!
> The only difference between these two cases is that you know something about how humans eat food and break it down in their stomachs and convert it into ATP that gets consumed by neurons to pump ions back out of dendrites and axons, while computer chips consume electricity whose flow gets interrupted by transistors to transmit information. Since you know anything whatsoever about how AGIs and humans consume energy, you can see that the consumption is so vastly different as to obviate all comparisons entirely.*
>
> *You are ignorant of how the brain consumes computation, you are ignorant of how the first AGIs built would consume computation, but "an unknown key does not open an unknown lock" and these two ignorant distributions should not assert much internal correlation between them.*

Cars don’t move by contracting their leg muscles and planes don’t fly by flapping their wings like birds. Telescopes *do* form images the same way as the lenses in our eyes, but differ by so many orders of magnitude in every important way that they defy comparison. Why should AI be different? You have to use some specific algorithm when you’re creating AI; why should we expect it to be anywhere near the same efficiency as the ones Nature uses in our brains?

The same is true for arguments from evolution, eg Ajeya’s Evolutionary Anchor, ie “it took evolution 10^41 FLOPs of computation to evolve the human brain so maybe that will be the training cost”. AI scientists sitting in labs trying to figure things out, and nematodes getting eaten by other nematodes, are such different methods for designing things that it’s crazy to use one as an estimate for the other.

#### Algorithmic Progress vs. Algorithmic Paradigm Shifts

This post is a dialogue, so (Eliezer’s hypothetical model of) OpenPhil gets a chance to respond. They object: this is why we put a term for algorithmic progress in our model. The model isn’t very sensitive to changes in that term. If you want you can set it to some kind of crazy high value and see what happens, but you can’t say we didn’t consider it.

> **OpenPhil:**  We did already consider that and try to take it into account: our model already includes a parameter for how algorithmic progress reduces hardware requirements.  It's not easy to graph as exactly as Moore's Law, as you say, but our best-guess estimate is that compute costs halve every 2-3 years […]
>
> **Eliezer:**  The makers of AGI aren't going to be doing 10,000,000,000,000 rounds of gradient descent, on entire brain-sized 300,000,000,000,000-parameter models, *algorithmically faster than today.*  They're going to get to AGI via some route that *you don't know how to take,* at least if it happens in 2040.  If it happens in 2025, it may be via a route that some modern researchers do know how to take, but in this case, of course, your model was also wrong.
>
> They're not going to be taking your default-imagined approach *algorithmically faster,* they're going to be taking an *algorithmically different approach* that eats computing power in a different way than you imagine it being consumed.
>
> **OpenPhil:**  Shouldn't that just be folded into our estimate of how the computation required to accomplish a fixed task decreases by half every 2-3 years due to better algorithms?
> **Eliezer:**  Backtesting this viewpoint on the previous history of computer science, it seems to me to assert that it should be possible to:
>
> * Train a pre-Transformer RNN/CNN-based model, not using any other techniques invented after 2017, to GPT-2 levels of performance, using only around 2x as much compute as GPT-2;
> * Play pro-level Go using 8-16 times as much computing power as AlphaGo, but only 2006 levels of technology.
>
> For reference, recall that in 2006, Hinton and Salakhutdinov were just starting to publish that, by training multiple layers of Restricted Boltzmann machines and then unrolling them into a "deep" neural network, you could get an initialization for the network weights that would avoid the problem of vanishing and exploding gradients and activations.  At least so long as you didn't try to stack too many layers, like a dozen layers or something ridiculous like that.  This being the point that kicked off the entire deep-learning revolution.
>
> Your model apparently suggests that we have gotten around 50 times more efficient at turning computation into intelligence since that time; so, we should be able to replicate any modern feat of deep learning performed in 2021, using techniques from before deep learning and around fifty times as much computing power.
>
> **OpenPhil:**  No, that's totally not what our viewpoint says when you backfit it to past reality.  Our model does a great job of retrodicting past reality.
>
> **Eliezer:**  How so?
>
> **OpenPhil:**  <Eliezer cannot predict what they will say here.>

I think the argument here is that OpenPhil is accounting for [normal scientific progress in algorithms, but not for paradigm shifts](https://slatestarcodex.com/2019/01/08/book-review-the-structure-of-scientific-revolutions/).

#### Directional Error

These are the two arguments Eliezer makes against OpenPhil that I find most persuasive. First, that you shouldn’t be using biological anchors at all. Second, that unpredictable paradigm shifts are more realistic than gradual algorithmic progress. These mostly add uncertainty to OpenPhil’s model, but Eliezer ends his essay making a stronger argument: he thinks OpenPhil is directionally wrong, and AI will come earlier than they think.

Mostly this is the paradigm argument again. Five years from now, there could be a paradigm shift that makes AI much easier to build. It’s happened before; from GOFAI’s pre-programmed logical rules to Deep Blue’s tree searches to the sorts of Big Data methods that won the Netflix Prize to modern deep learning. Instead of just extrapolating deep learning scaling thirty years out, OpenPhil should be worried about the next big idea.

Hypothetical OpenPhil retorts that this is a double-edged sword. Maybe the deep learning paradigm can’t produce AGI, and we’ll have to wait decades or centuries for someone to have the right insight. Or maybe the new paradigm you need for AGI will take more compute than deep learning, in the same way deep learning takes more compute than whatever Moravec was imagining. This is a pretty strong response, since it would have been true for every previous forecaster: remember, Moravec erred in thinking AI would come *too soon*, not too late. So although Eliezer is taking the cheap shot of saying OpenPhil’s estimate will be wrong just as everyone else’s was wrong before, he’s also giving himself the much harder case of arguing it might be wrong in the opposite direction from all its predecessors.
Eliezer takes this objection seriously, but feels like on balance probably new paradigms will speed up AI rather than slow it down. Here he grudgingly and with suitable embarrassment does try to make an object-level semi-biological-anchors-related argument: Moravec was wrong because he ignored the training phase. And the proper anchor for the training phase is somewhere between evolution and a human childhood, where evolution represents “blind chance eventually finding good things” and human childhood represents “an intelligent cognitive engine trying to squeeze as much data out of experience as possible”. And part of what he expects paradigm shifts to do is to move from more evolutionary processes to more childhood-like processes, and that’s a net gain in efficiency. So he still thinks OpenPhil’s methods are more likely to overestimate the amount of time until AGI rather than underestimate it.

#### What Moore’s Law Giveth, Platt’s Law Taketh Away

Eliezer’s other argument is kind of a low blow: he refers to [Platt’s Law Of AI Forecasting](https://archive.nytimes.com/www.nytimes.com/library/cyber/surf/1120surf-vinge.html): “any AI forecast will put strong AI thirty years out from when the forecast is made.”

This isn’t exact. Hans Moravec, writing in 1988, said 2010 - so 22 years. Ray Kurzweil, writing in 2001, said 2023 - another 22 years. Vernor Vinge, in a 1993 speech, said 2023, and that *was* exactly 30 years, but Vinge knew about Platt’s Law and might have been joking. The point is: OpenPhil wrote a report in 2020 that predicted strong AI in 2052, isn’t that kind of suspicious?

I’d previously mentioned it as a plus that Ajeya got around the same year everyone else got. The forecasters on Metaculus. The experts surveyed in Grace et al. Lots of other smart experts with clever models. But what if all of these experts and models and analyses are just fudging the numbers for the same Platt’s-Law-related reasons? Hypothetical OpenPhil is BTFO:

> **OpenPhil:**  That part about Charles Platt's generalization is interesting, but just because we unwittingly chose literally exactly the median that Platt predicted people would always choose in consistent error, that doesn't justify dismissing our work, right?  We could have used a completely valid method of estimation which would have pointed to 2050 no matter which year it was tried in, and, by sheer coincidence, have first written that up in 2020.  In fact, we try to show in the report that the same methodology, evaluated in earlier years, would also have pointed to around 2050 -
>
> **Eliezer:** Look, people keep trying this.  It's never worked.  It's never going to work.  2 years before the end of the world, there'll be another published biologically inspired estimate showing that AGI is 30 years away and it will be exactly as informative then as it is now.  I'd love to know the timelines too, but you're not *going* to get the answer you want until right before the end of the world, and maybe not even then unless you're paying very close attention.  *Timing this stuff is just plain hard.*

## Part III: Responses And Commentary

#### Response 1: Less Wrong Comments

Less Wrong is a site founded by Eliezer Yudkowsky for Eliezer Yudkowsky fans who wanted to discuss Eliezer Yudkowsky’s ideas. So, for whatever it’s worth - [the comments](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/ax695frGJEzGxFBK4#comments) on his essay were pretty negative.
Carl Shulman, an independent researcher with links to both OpenPhil and MIRI (Eliezer’s org), writes the top-voted comment. He works from a model where there is hardware progress, software progress downstream of hardware progress, and independent (ie unrelated to hardware) software progress, and where the first two make up most progress on the margin. Researchers generally develop new paradigms once they have enough compute available to tinker with them.

> Progress in AI has largely been a function of increasing compute, human software research efforts, and serial time/steps. Throwing more compute at researchers has improved performance both directly and indirectly (e.g. by enabling more experiments, refining evaluation functions in chess, training neural networks, or making algorithms that work best with large compute more attractive).
>
> Historically compute has grown by many orders of magnitude, while human labor applied to AI and supporting software by only a few. And on plausible decompositions of progress (allowing for adjustment of software to current hardware and vice versa), hardware growth accounts for more of the progress over time than human labor input growth.
>
> So if you're going to use an AI production function for tech forecasting based on inputs (which do relatively OK by the standards tech forecasting), it's best to use all of compute, labor, and time, but it makes sense for compute to have pride of place and take in more modeling effort and attention, since it's the biggest source of change (particularly when including software gains downstream of hardware technology and expenditures). […]
>
> A perfectly correlated time series of compute and labor would not let us say which had the larger marginal contribution, but we have resources to get at that, which I was referring to with 'plausible decompositions.' This includes experiments with old and new software and hardware, like the chess ones [Paul recently commissioned](https://www.lesswrong.com/posts/H6L7fuEN9qXDanQ6W/how-much-chess-engine-progress-is-about-adapting-to-bigger), and studies by [AI Impacts](https://intelligence.org/files/AlgorithmicProgress.pdf), [OpenAI](https://openai.com/blog/ai-and-efficiency/), and [Neil Thompson](https://news.mit.edu/2021/how-quickly-do-algorithms-improve-0920). There are AI scaling experiments, and observations of the results of shocks like the end of Dennard scaling, the availability of GPGPU computing, and [Besiroglu's](https://twitter.com/tamaybes/status/1330506035811987458) data on the relative predictive power of computer and labor in individual papers and subfields.
>
> In different ways those tend to put hardware as driving more log improvement than software (with both contributing), particularly if we consider software innovations downstream of hardware changes.

[Vanessa Kosoy](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works?commentId=KkcAXCAsi54uWkjeH) makes the obvious objection, which echoes a comment of Eliezer’s in the dialogue above:

> I'm confused how can this pass some obvious tests. For example, do you claim that alpha-beta pruning can match AlphaGo given some not-crazy advantage in compute? Do you claim that SVMs can do SOTA image classification with not-crazy advantage in compute (or with any amount of compute with the same training data)? Can Eliza-style chatbots compete with GPT3 however we scale them up?
[Mark Xu](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works?commentId=yv4tLvGmZE7yKpxqu) answers:

> My model is something like:
>
> * For any given algorithm, e.g. SVMs, AlphaGo, alpha-beta pruning, convnets, etc., there is an "effective compute regime" where dumping more compute makes them better. If you go above this regime, you get steep diminishing marginal returns.
> * In the (relatively small) regimes of old algorithms, new algorithms and old algorithms perform similarly. E.g. with small amounts of compute, using AlphaGo instead of alpha-beta pruning doesn't get you that much better performance than like an OOM of compute (I have no idea if this is true, example is more because it conveys the general gist).
> * One of the main way that modern algorithms are better is that they have much large effective compute regimes. The other main way is enabling more effective conversion of compute to performance.
> * Therefore, one of primary impact of new algorithms is to enable performance to continue scaling with compute the same way it did when you had smaller amounts.
>
> In this model, it makes sense to think of the "contribution" of new algorithms as the factor they enable more efficient conversion of compute to performance and count the increased performance because the new algorithms can absorb more compute as primarily hardware progress. I think the studies that Carl cites above are decent evidence that the multiplicative factor of compute -> performance conversion you get from new algorithms is smaller than the historical growth in compute, so it further makes sense to claim that most progress came from compute, even though the algorithms were what "unlocked" the compute.
>
> For an example of something I consider supports this model, see the LSTM versus transformer graphs in <https://arxiv.org/pdf/2001.08361.pdf>

I also found [Vanessa’s summary](https://www.lesswrong.com/users/vanessa-kosoy) of this reply helpful:

> Hmm... Interesting. So, this model says that algorithmic innovation is so fast that it is not much of a bottleneck: we always manage to find the best algorithm for given compute relatively quickly after this compute becomes available. Moreover, there is some smooth relation between compute and performance assuming the best algorithm for this level of compute. [**EDIT**: The latter part seems really suspicious though, why would this relation persist across very different algorithms?] Or at least this is true is "best algorithm" is interpreted to mean "best algorithm out of some wide class of algorithms s.t. we never or almost never managed to discover any algorithm outside of this class".
>
> This can justify biological anchors as upper bounds[[1]](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works#fn-cCeH9Wga7mav4koHv-1): if biology is operating using the best algorithm then we will match its performance when we reach the same level of compute, whereas if biology is operating using a suboptimal algorithm then we will match its performance earlier.

[Charlie Steiner](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works?commentId=MEFXe4mr7xWyYPwvw) objects:

> Which examples are you thinking of?
>
> [Modern Stockfish outperformed historical chess engines even when using the same resources](https://web.archive.org/web/20200806135829im_/http://jaekle.info/chess_scaling.png), until far enough in the past that computers didn't have enough RAM to load it.
>
> I definitely agree with your original-comment points about the *general* informativeness of hardware, and absolutely software is adapting to fit our current hardware. But this can all be true even if advances in software can make more than 20 orders of magnitude difference in what hardware is needed for AGI, and are much less predictable than advances in hardware rather than being adaptations in lockstep with it.

And [Paul Christiano](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works?commentId=EuHZLiKcXMeahpqMB) responds:

> Here are the graphs from Hippke (he or I should publish a summary at some point, sorry).
>
> I wanted to compare Fritz (which won WCCC in 1995) to a modern engine to understand the effects of hardware and software performance. I think the time controls for that tournament are similar to SF STC. I wanted to compare to SF8 rather than one of the NNUE engines to isolate out the effect of compute at development time and just look at test-time compute.
>
> So having modern algorithms would have let you win WCCC while spending about 50x less on compute than the winner. Having modern computer hardware would have let you win WCCC spending way more than 1000x less on compute than the winner. Measured this way software progress seems to be several times less important than hardware progress despite much faster scale-up of investment in software.
>
> But instead of asking "how well does hardware/software progress help you get to 1995 performance?" you could ask "how well does hardware/software progress get you to 2015 performance?" and on that metric it looks like software progress is way more important because you basically just can't scale old algorithms up to modern performance.
>
> The relevant measure varies depending on what you are asking. But from the perspective of takeoff speeds, it seems to me like one very salient takeaway is: if one chess project had literally come back in time with 20 years of chess progress, it would have allowed them to spend 50x less on compute than the leader.

#### Response 2: AI Impacts + Matthew Barnett

[AI Impacts](https://aiimpacts.org/miri-ai-predictions-dataset/) gathered and analyzed a dataset of who predicted AI when; [Matthew Barnett](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works?commentId=h9cvhnoaevc8xGJtB) helpfully drew in the line corresponding to Platt’s Law (everyone always predicts AI in thirty years). Just eyeballing it, Platt’s Law looks pretty good. But Holden Karnofsky (see below) objects that our eyeballs are covertly removing outliers. Barnett agrees this is worth checking for and runs a formal OLS regression.

*Platt’s Law in blue, regression line in orange.*

He [writes](https://www.lesswrong.com/posts/nNqXfnjiezYukiMJi/reply-to-eliezer-on-biological-anchors?commentId=zJ8EGJ3cHdeyjQZvc):

> I agree this trendline doesn't look great for Platt's law, and backs up your observation by predicting that Bio Anchors should be more than 30 years out.
>
> However, OLS is notoriously sensitive to outliers.
> If, instead of using some more robust regression algorithm, we super arbitrarily eliminate all predictions after 2100, then we get this, which doesn't look absolutely horrible for the law. Note that the median forecast is 25 years out.

I’m split on what to think here. If we consider a weaker version of Platt’s Law, “the average date at which people forecast AGI moves forward at about one year per year”, this seems true-ish in the big picture where we compare 1960 to today, but not obviously true after 1980. If we consider a different weaker version, “on average estimates tend to be 30 years away”, that’s true-ish under Barnett’s revised model, but not inherently damning: Barnett’s model assumes there will be some such number, it turns out to be 25, and Ajeya gave the somewhat different number of 32. Is that a big enough difference to exonerate her of “using” Platt’s Law? Is that even the right way to be thinking about this question?

#### Response 3: Real OpenPhil

The hypothetical OpenPhil in Eliezer’s mind having been utterly vanquished, the real-world OpenPhil is forced to step in. OpenPhil CEO Holden Karnofsky responds to Eliezer [here](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/nNqXfnjiezYukiMJi). There’s a lot of back and forth about whether the report includes enough caveats (answer: it sure does include a lot of caveats!) but I was most interested in the attacks on Eliezer’s two main points.

*First*, the point that biological anchors are fatally flawed from the start and measuring FLOP/s is no better than measuring power consumption in watts. Holden:

> If the world were such that:
>
> * We had some reasonable framework for "power usage" that didn't include gratuitously wasted power, and measured the "power used meaningfully to do computations" in some important sense;
> * AI performance seemed to [systematically improve](https://arxiv.org/abs/2001.08361) as this sort of power usage increased;
> * Power usage was just now coming within a few orders of magnitude of the human brain;
> * We were just now starting to see AIs have success with tasks like vision and speech recognition (tasks that seem likely to have been evolutionarily important, and that we haven't found ways to precisely describe GOFAI-style);
> * It also looked like AI was starting to have insect-like capabilities somewhere around the time it was consuming insect-level amounts of power;
> * And we didn't have some clear candidate for a better metric with similar properties (as I think we do in the case of computations, since the main thing I'd expect increased power usage to be useful for is increased computation);
>
> ...Then I would be interested in a Bio Anchors-style analysis of projected power usage. As noted above, I would be interested in this as a tool for analysis rather than as "the way to get my probability distribution." That's also how I'm interested in Bio Anchors (and how it presents itself).

*Second*, the argument that paradigm shifts might speed up AI:

> I think it's a distinct possibility that we're going to see dramatically new approaches to AI development by the time transformative AI is developed.
>
> On the other hand, I think quotes like this overstate the likelihood in the short-to-medium term.
> * Deep learning has been the dominant source of AI breakthroughs for [nearly the last decade](https://en.wikipedia.org/wiki/AlexNet), and the broader "neural networks" paradigm - while it has come in and out of fashion - has broadly been one of the most-attended-to "contenders" throughout the history of AI research.
> * AI research prior to 2012 may have had more frequent "paradigm shifts," but this is probably related to the fact that it was seeing less progress.
> * With these two points in mind, it seems off to me to confidently expect a new paradigm to be dominant by 2040 (even conditional on AGI being developed), as the second quote above implies. As for the first quote, I think the implication there is less clear, but I read it as expecting AGI to involve software well over 100x as efficient as the human brain, and I wouldn't bet on that either (in real life, if AGI is developed in the coming decades - not based on what's possible in principle).

#### Response 4: Me

Oh God, I have to write some kind of conclusion to this post, in some way that suggests I have an opinion, or that I’m at all qualified to assess this kind of research. Oh God oh God.

I find myself most influenced by two things. First, Paul’s table of how effectively Nature tends to outperform humans, which I’ll paste here again:

I find it hard to say *how* this influenced me. It would be great if Paul had found some sort of beautiful Moore’s-Law-esque rule for figuring out the Nature vs. humans advantage. But actually his estimates span five orders of magnitude. And they don’t even make sense as stable estimates - human solar power a few decades ago was several orders of magnitude worse than Nature’s, and a few decades from now it may be better. Still, I think this table helps the whole thing feel less mystical. Usually Nature outperforms humans by some finite amount, usually a few orders of magnitude, on the dimension we care about. We can add it to the error bars on our model and move on.

The second thing that influences me a lot is Carl Shulman’s model of “once the compute is ready, the paradigm will appear”. Some other commenters visualize this as each paradigm having a certain amount of compute you can “feed” it before it stops scaling with compute effectively. This is a heck of a graph:

Given these two assumptions - that natural artifacts usually have efficiencies within a few OOM of artificial ones, and that compute drives progress pretty reliably - I am proud to be able to give Ajeya’s report the coveted honor of “I do not make an update of literally zero upon reading it”.

That still leaves the question of “how much of an update do I make?” Also “what are we even doing here?” That is - suppose before we read Ajeya’s report, we started with some distribution over when we’d get AGI. For me, not being an expert in this area, this would be some combination of the Metaculus forecast and the Grace et al expert survey, slightly pushed various directions by the views of individual smart people I trust. Now Ajeya says maybe it’s more like some other distribution. I should end up with a distribution somewhere in between my prior and this new evidence. But where?

I . . . don’t actually care? I think Metaculus says 2040-something, Grace says 2060-something, and Ajeya says 2050-something, so this is basically just the average thing I already believed.
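
To see why that averaging is so unexciting, here’s a minimal sketch of what pooling the three forecasts might look like. The medians loosely track the sources named above, but the Gaussian shapes and spreads are pure assumption, invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical forecast distributions over "year we get AGI".
# Medians roughly echo the Metaculus / Grace et al / Ajeya numbers
# cited above; the normal shapes and spreads are made up.
metaculus = rng.normal(2045, 10, n)
grace = rng.normal(2062, 15, n)
ajeya = rng.normal(2052, 12, n)

# Equal-weight mixture: pool all the samples, then read off the median.
pooled = np.concatenate([metaculus, grace, ajeya])
print(f"Median of the mixture: {np.median(pooled):.0f}")  # lands in the low 2050s
```

With medians that close together, the pooled median just lands in the middle, and the exact shapes barely move it.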

Probably each of those distributions has some kind of complicated shape, but who actually manages to keep the shape of their probability distribution in their head while reasoning? Not me. This report was insufficiently different from what I already believed for me to need to worry about updating from one to the other.

The more interesting question, then, is whether I should update towards Eliezer’s slightly different distribution, which places more probability mass on earlier decades. But Eliezer doesn’t say what his exact probability distribution is, and he *does* say he’s making a deliberate choice not to do this:

> I consider naming particular years to be a cognitively harmful sort of activity; I have refrained from trying to translate my brain's native intuitions about this into probabilities, for fear that my verbalized probabilities will be stupider than my intuitions if I try to put weight on them.  What feelings I do have, I worry may be unwise to voice; AGI timelines, in my own experience, are not great for one's mental health, and I worry that other people seem to have weaker immune systems than even my own.  But I suppose I cannot but acknowledge that my outward behavior seems to reveal a distribution whose median seems to fall well before 2050.

So, should I update from my current distribution towards a black box with “EARLY” scrawled on it? What would change if I did? I’d get scared? I’m already scared. I’d get *even more* scared? Seems bad.

Maybe I’d have different opinions on whether we should pursue long-term AI alignment research programs that will pay off after 30 years, vs. short-term AI alignment research programs that will pay off in 5? *If you have either of those things, please email anyone whose name has been mentioned in this blog post, and they’ll arrange to have a 6-to-7-digit sum of money thrown at you immediately.* It’s not like there’s some vast set of promising 30-year research programs and some other set of promising 5-year research programs that have to be triaged against each other. Maybe there’s some ability to redirect a little bit of talent and interest at the margin, in a way that makes it worth OpenPhil’s time to care.

But should I care? Should you? One of my favorite jokes [continues to be](https://slatestarcodex.com/2020/04/01/book-review-the-precipice/):

> An astronomy professor says that the sun will explode in five billion years, and sees a student visibly freaking out. She asks the student what’s so scary about the sun exploding in five billion years. The student sighs with relief: “Oh, thank God! I thought you’d said five *million* years!”

And once again, you can imagine the opposite joke: A professor says the sun will explode in five minutes, sees a student visibly freaking out, and repeats her claim. The student, visibly relieved: “Oh, thank God! I thought you’d said five *seconds*.”

Here Ajeya is the professor saying the sun will explode in five minutes instead of five seconds. Compared to the alternative, it’s good news. But if it makes you feel complacent, then the joke’s on you.
# Links For February

*[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]*

**1:** [The newest studies](https://twitter.com/emollick/status/1476608377459548165) don’t find evidence that extracurriculars like chess, second languages, playing an instrument, etc. can improve in-school learning.

**2:** Did you know: Spanish people [consider it good luck](https://en.wikipedia.org/wiki/New_Year%27s_food#Spain) to eat twelve grapes at midnight on New Year’s, one at each chime of the clock tower in Madrid. This has caused enough choking deaths that doctors started a petition to make the clock tower chime more slowly.

**3:** At long last, scientists have discovered a millipede that really does have (more than) a thousand legs, *[Eumillipes persephone](https://www.smithsonianmag.com/smart-news/finally-a-millipede-that-actually-has-1000-legs-180979269/)*, which lives tens of meters underground in Australia and in your nightmares. Recent progress in this area inspired me to Fermi-estimate a millipede version of Moore’s Law, which suggests we should be up to megapedes by 2140 and gigapedes by 2300.

**4:** euphoric-baseball-61 on the subreddit [challenges the claim](https://www.reddit.com/r/slatestarcodex/comments/rutnn6/striking_gold_when_does_the_brain_reach_maturity/) that the brain doesn’t reach maturity until age 25.

**5:** Great powers traditionally have proxy wars with each other by arming/advising their preferred faction in smaller conflicts in weaker countries. This is a naturally limited strategy, because the faction still has to have enough soldiers to use the weapons and advice efficiently. So [what happens to proxy wars once great powers develop fully autonomous weapons](https://nuclearspaceheater.tumblr.com/post/675934172967305216/the-long-standing-fact-that-countries-can-openly)?

**6:** Tarot of the Silicon Dawn ([draw random card](https://silicon-dawn.cards/))

**7:**

**8:** Economist: [Why Brahmins Lead Western Firms But Rarely Indian Ones](https://www.economist.com/asia/2022/01/01/why-brahmins-lead-western-firms-but-rarely-indian-ones). Brahmins are the highest Indian caste; in India they tend to be academics/lawyers/etc, but in the US they are disproportionately likely to become CEOs (including the current leaders of Google and Microsoft). Article theorizes that this is a combination of more business-related Indian castes having better networking within India (so motivated Brahmins tend to go abroad), Brahmins being good at the traditional academic pathway that lends itself well to immigration, plus maybe affirmative action against them in India. [Here’s](https://theprint.in/opinion/the-economist-is-wrong-brahmins-become-ceos-in-us-not-because-of-quotas-in-india/797522/) a rebuttal I link to out of duty, but I’m not sure it’s worth wading through the woke outrage to get to the two or three mildly interesting facts (Brahmins started immigrating before India’s affirmative action really ramped up, and they might have a first-mover advantage from building immigrant communities earlier).

**9:** Most previous studies of preschool found zero to negative effects on academic achievement, but potentially positive effects on nonacademic outcomes like discipline and grit.
A [big new study](https://drive.google.com/file/d/1vfShplpa_dUXbPJNaKlFubli_OZDq5Jh/view) of lower-income children (h/t Samuel Hammond) confirms negative effects on academic achievement but also finds negative effects on non-academic outcomes. I have yet to look at it closely enough to have a good theory of what’s going on here, or whether parents should be trying to keep their kids out of preschool.

**10:** [Consensus](https://consensus.app/blog/introducing-consensus/) is an app (currently in beta) that claims it can automate searching through and analyzing the scientific literature using natural language processing. In my conversation with them, I pointed out the skulls of all the previous people who tried that, littering their path, and they remained upbeat and said their product was definitely going to be the one that works. I tried it with some medium-subtlety questions, and got lots of papers using keywords I used in the questions but nothing I would really call an answer - but they remained upbeat and said their product was definitely going to be the one that works. Anyway, you can sign up for their beta [here](https://consensus.app/).

**11:** [edited to add: also an AI Governance curriculum [here](https://docs.google.com/document/d/1F4lq6yB9SCINuo190MeTSHXGfF5PnPk693JToszRttY/edit#)]

**12:** [The Dangers Of Low-Pay, High-Status Jobs](https://economistwritingeveryday.com/2022/02/07/the-dangers-of-high-status-low-wage-jobs/). A good article in many ways, but the part I appreciate most was taking “why do so many journalists live in Brooklyn?” (which I had always thought of as a kind of a running gag, or dig on the journalistic monoculture) and doing economic analysis to it, of the same form as “why are so many tech companies in the Bay Area?” or “why are so many entertainment studios in LA?”

**13:** Creepy thing of the month (h/t [Neural Net Guesses Memes](https://twitter.com/ResNeXtGuesser/status/1481340219752394754)):

**14:** You’ve probably heard statistics about how 50% of transgender youth attempt suicide before age 21. [This paper](https://link.springer.com/article/10.1007/s10508-022-02287-7) tries to analyze the situation in more depth. The 50% number usually comes from surveys, but there’s some evidence people exaggerate on surveys, rounding up “I think about it a lot” to “I attempted”. The authors gather data on completed suicides among trans people, and find that they’re about 0.01%/year (which is about 5x the cisgender rate). If we suppose that people have about 5 years between becoming transgender and turning 21, that’s a ~0.05% completed suicide rate; set against the 50% attempted rate, it implies that about 1 in 1000 of the youth who report attempting suicide on surveys complete suicide - which sounds about right to me [but see [this comment](https://astralcodexten.substack.com/p/links-for-february/comment/5209612) for a critique]

**15:** Gwern on [the failures of 20th century eugenics](https://www.gwern.net/Dune-genetics#alternative-paradigms). I’ve previously linked a piece about how, aside from the general moral failure, the 20th century eugenicists got lots of implementation details really wrong. Gwern adds to the picture: they had a purely Mendelian (as opposed to polygenic) model of intelligence, and felt that bad traits were probably caused by single recessive genes.
This dichotomized the population in a way that contributed to the moral problems - if IQ is truly a continuum, then someone with 120 IQ might still wonder if they were “inferior” to someone with 130 IQ, in a way that made them feel some sympathy for someone with 80 IQ who was being pronounced “inferior” by the eugenicists of the time. But instead, they thought some people had the specific recessive “low intelligence” gene, those people could be “cleansed” from the population, and then everyone else would be fine! It also prevented them from considering improving the populace by encouraging intelligent people to breed more (as opposed to sterilizing unintelligent people) - this wouldn’t eliminate the recessive variants that were causing all the trouble! I’m confused how they could have believed this even with the limited knowledge of the time; this was long after Galton had proven that genius was genetic, and once you have genetic genius you *know* there’s more going on than Mendelian inheritance of subnormality.

**16:** Sexual selection [bridges peaks in adaptive fitness landscapes](https://twitter.com/SteveStuWill/status/1360009175770697731)

**17:** [NFTorah:](https://nftorah.com/) “The Torah [is] the original blockchain”. I think it’s funny that this exists, but it’s exactly what you would expect, and you don’t have to click on the link.

**18:** More [IRB nightmares](https://twitter.com/ryancbriggs/status/1488351115393572864).

**19:**

**20:** [DeepMind made a programming AI](https://www.deepmind.com/blog/article/Competitive-programming-with-AlphaCode) that was able to participate in a human coding competition and place around the middle. [Nostalgebraist](https://nostalgebraist.tumblr.com/post/675145919313870848/what-do-you-think-of-the-alphago-coding-result-it) gives his thoughts: “impressed with the raw performance, not massively surprised, not sold that it implies anything big in particular”. A lot of people will be watching whether it can win programming competitions outright a year or two from now, though I bet their perspectives on how relevant this is for AI takeoff speeds will be pretty mixed.

**21:** [Effective altruist organizations as Zendaya outfits](https://twitter.com/FreshMangoLassi/status/1475881815298691073).

**22:** [Brain Efficiency: Much More Than You Wanted To Know](https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know). “Why should we care? Brain efficiency matters a great deal for AGI timelines and takeoff speeds, as AGI is implicitly/explicitly defined in terms of brain parity.”

**23:** I’m not going to throw out my copy of *The Case Against Education* just yet - I haven’t checked this study but I bet there are lots of possible confounders. Still, this would be fun for somebody more interested to analyze in depth:

**24:** Best of Scott Sumner archives: [There’s Only One Sensible Way To Measure Economic Inequality](https://www.econlib.org/archives/2014/04/theres_only_one.html). “You cannot put the burden of a tax on someone unless you cut into his or her consumption. If … tax increases did not cause Gates and Buffett to tighten their belts, then they paid precisely 0% of that tax increase. Someone else paid, even if they wrote the check. If they invested less due to the tax, then workers might have received lower wages.
If they gave less to charity then very poor Africans paid the tax.”

**25:** The latest in the Greater Male Variability Hypothesis: [Harrison, Noble, and Jennions](https://onlinelibrary.wiley.com/doi/abs/10.1111/brv.12818?campaign=wolearlyview) publish a meta-analysis failing to find evidence of greater male variability in the personality of non-human animals. [Del Giudice and Gangestad](https://psyarxiv.com/6ua8r) have a rebuttal saying that they were underpowered to detect it even if it did exist, plus noting the ways that media coverage of this study was incredibly irresponsible even by its own terms.

**26:** Some recent critiques of Cook (2014) on racial violence vs. black patents, including Michael Wiebe [challenging the violence measures](https://michaelwiebe.com/blog/2021/02/cook_violence) and AnechoicMedia [arguing that the black patent measure declines](https://twitter.com/AnechoicMedia_/status/1489847148862742531) right when switching from one (more complete) dataset to another (less complete) one. Rebuttal by Brad DeLong [here](https://braddelong.substack.com/p/have-harald-uhlig-and-company-read?utm_source=url); he argues that Cook uses multiple methods and some of them don’t have this problem. Relevant since Cook is now being considered for the Federal Reserve; see eg [this *Wall Street Journal* editorial against](https://www.wsj.com/articles/fed-doesnt-need-censor-lisa-cook-federal-reserve-dual-mandate-democrats-leftist-agenda-defunding-police-governor-inflation-raskin-11644766151).

**27:** Claim: 31% of British people say they have [seen or met Queen Elizabeth](https://www.bbc.com/news/uk-60274816) (this seems plausible to me; I would answer ‘yes’ to this because she visited Ireland when I lived there, I watched the parade in her honor, and I could vaguely glimpse her on the inside of her car).

**28:** This couple-of-month-period in wokeness:

* *Scientific American* attacks late biologist EO Wilson, in a screed whose [highlight](https://twitter.com/StuartJRitchie/status/1476628105720705025) is calling him problematic for describing ants as having “colonies”. This is part of a more general (and surprisingly fast) pivot at *Scientific American* from real science to culture warring; when [even Eric Turkheimer thinks you’ve gotten too woke](https://twitter.com/ent3c/status/1476626271090233346), you’ve gotten too woke.
* Unherd on [the rise of sensitivity readers](https://unherd.com/2022/02/how-sensitivity-readers-corrupted-literature/), ie censors who publishing companies employ to remove problematic material from books.
* New York Times [takes over popular word game Wordle](https://www.bbc.com/news/technology-60416057); its first act is to remove potentially offensive words - like “slave” - from the word list.

**29:** Russian nationalist blogger Anatoly Karlin on [why Putin wants to invade Ukraine](https://akarlin.substack.com/p/regathering-of-the-russian-lands?utm_source=url).
Short version: partly emotional nationalism, but partly rational calculation that as Russia tries to leave the Western way of life and go it alone in some kind of cultural/economic sense, it will have better odds of self-sufficiency with an extra 35 million people in its sphere of influence. Related, and excellent: [how Russia thought about the recent protests in Kazakhstan](https://twitter.com/ClintEhrlich/status/1479517789450911747).

**30:** Wikipedia says that [the 1960 Valdivia, Chile earthquake](https://en.wikipedia.org/wiki/1960_Valdivia_earthquake):

> …released almost a quarter of the total seismic energy released by all earthquakes in the 20th century.

**31:** Mr. Global, a sort of male version of the Miss Universe pageant, had a “[wear your country’s traditional dress](https://www.boredpanda.com/mister-global-2019-national-costume/)” contest in 2019, with pretty great results: More at [the link](https://www.boredpanda.com/mister-global-2019-national-costume/).

**32:** Related, from Works In Progress: no, [it’s not just that only the prettiest buildings of past ages survived](https://www.worksinprogress.co/issue/against-the-survival-of-the-prettiest/), past ages really did produce (on average) prettier buildings.

**33:** [Sovereign-citizen-like movements around the US and the world](https://twitter.com/egavactip/status/1487643195207045123). Every commentary you’ll read on sovereign citizens focuses on how the only possible explanation for the movement is (white) racism. I think pieces like this show a more subtle story. Yes, white nationalist groups are heavily involved in sovereign citizenry. But so are black nationalist groups, Native Hawaiian secessionist groups, Australian Aboriginal independence movements, etc, etc, etc. It seems like a powerful attractor for anyone who’s angry or feels mistreated for any reason.

**34:** Did you know: the appendices of *1984* [strongly suggest](https://twitter.com/normative/status/1491871672617553922) that in the canonical timeline, the Oceanian dictatorship fell a few years after the events of the book and was replaced by a more liberal state.

**35:** New study: [Epstein-Barr virus probably causes multiple sclerosis](https://www.science.org/doi/10.1126/science.abj8222). Greg Cochran [was saying this twenty years ago](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.182.5521&rep=rep1&type=pdf), which is impressive; the usual Cochran → everyone else gap is more like 10-15 years.

**36:** An interesting recent spat between BMJ and Facebook: BMJ (one of the most prestigious medical journals in the world) published an article about poor clinical research practices at a vaccine company. Some anti-vaxxers shared it on Facebook, and Facebook responded by adding their “missing context” tag to the BMJ article. This made the BMJ angry (well, this plus Facebook’s explanation, which called the BMJ a “news blog”), so the editors wrote an [Open Letter From The BMJ To Mark Zuckerberg](https://www.bmj.com/content/375/bmj.n2635/rr-80), saying “actually, we are one of the most powerful medical establishment institutions in the world, you can’t do this to us”.
The fact checker Facebook subcontracts their censorship decisions to, Lead Stories, then wrote [a surprisingly thoughtful](https://leadstories.com/analysis/2021/12/lead-stories-response-to-a-bmjcom-open-letter-objecting-to-a-lead-stories-fact-check.html) response saying: they thought the BMJ article lacked important context, that was all they told Facebook, and they stand by their decision even after learning that the BMJ is much more prestigious and important than they thought. I’m having trouble figuring out what emotions to have here: on the one hand I hate censorship, but on the other hand seeing the BMJ seething at their inability to pull rank is oddly satisfying. Also, this same thing apparently happened around the same time with [Instagram and the Cochrane Collaboration](https://twitter.com/cochranecollab/status/1458439812357185536).

**37:**

**38:** Emil Kirkegaard on [the flimsy evidence for exercise as a depression treatment](https://kirkegaard.substack.com/p/exercise-for-depression-the-evidence?utm_campaign=post&utm_medium=email&utm_source=url).

**39:** Interested in hearing more about this: was the 2000 - 2017 period really better than previous periods? If so, why? Is it just China, or something else?

**40:** [Harold Lee on The Post COVID World](https://write.as/harold-lee/living-life-making-friends-and-staying-safe-in-the-rona-20s), making a point I hadn’t seen anywhere else before:

> Risk-averse institutions will lag behind individuals in taking the risk to meet in person, and they will gradually lose importance as centers for social life. A network of friends of friends, however, will be more resilient. New friends will increasingly come from introductions from other friends you’ve already made, rather than meeting people through contra dancing or church. Civil society institutions will generally lose influence as their traditional methods of outreach and group bonding [falter], although some may adapt by developing more models such as small groups with a cell structure, which approximate the flexibility of a group of friends who can adapt meeting practices as they please, at protective arm’s length from the greater organization’s brand. Regardless, the social landscape of 2030 will likely be more illegible than that of 2019, with formal organizations less important and groups of friends and collaborators taking on more importance.

(I will say that big institutions have been less risk-averse than I worried - I hear Google stopped requiring masks in the Bay Area today, and [Polymarket says](https://polymarket.com/market/from-nate-silver-will-there-be-a-federal-mask-requirement-on-us-domestic-flights-on-november-8-2022) there’s a 79% chance of no mask mandate on domestic flights by November - so maybe this won’t come to pass after all)

**41:** Headline from four years ago (h/t [Nathan Leamer](https://twitter.com/NathanLeamerDC/status/1470752026179031045)): I wonder how a prediction market at the time would have priced the claim “five years from now, a majority of people will agree that the Internet as they knew it ended in 2017 with the repeal of net neutrality”.
Also - in the process of confirming that this headline was real (it was) I found [this Forbes article](https://www.forbes.com/sites/investor/2012/09/13/the-end-of-the-internet-as-we-know-it/?sh=6734848a6f6a) claiming the end of the Internet as we know it in 2012 (I think it was about 10% of the way to being right, insofar as some news sources do have paywalls now), and [this Wired article](https://www.wired.co.uk/article/end-of-the-internet-as-we-know-it) warning that a 2014 case on Net Neutrality “could lead to fracturing of the singular internet into a multiplicity of sub-nets and to an all-out negotiation battlefield with global ramifications”.

**42:** This sounds like the sort of catchy/exciting idea that would become an urban legend whether it was true or not, but [Elizabeth says it’s true](https://www.lesswrong.com/posts/ue4nobero9qR9CcAh/nudging-my-way-out-of-the-intellectual-mosh-pit): putting your phone in grayscale makes it less addictive.

**43:** Alwaysrinse [on Judaism and genetic engineering](https://www.metaculus.com/questions/9492/israeli-embryo-selection-for-intelligence/#comment-79170):

> [Judaism is lenient on genetic engineering](https://www.jcpa.org/art/jep2.htm) as "Jewish tradition posits that man was created in the 'image of G-d' to be a partner with G-d in mastering and perfecting himself and the natural world" and Judaism has a "general principle that anything not explicitly prohibited in the Bible and Talmud is assumed to be permitted." [Rabbi Dr. Avraham Steinberg](https://en.wikipedia.org/wiki/Avraham_Steinberg), the co-chair of Israel's National Bioethics Council, writes, "As long as the act of perfecting the world does not violate halakhic prohibitions, or lead to results which would be halakhically prohibited, then we are given a mandate to use science and technology to improve the world." Genetic engineering of course is not halakhically prohibited. He ["believes that we should proceed with ... genetic engineering [even if it is non-life-preserving] as long as we believe that the benefits to man outweigh the risks."](https://www.jcpa.org/art/jep2.htm)

**44:** Many many more at the thread [here](https://twitter.com/DanielSolis/status/1487913576929103884).
# Play Money And Reputation Systems

For now, US-based prediction markets [can’t use real money](https://astralcodexten.substack.com/p/the-passage-of-polymarket?utm_source=url) without clearing near-impossible regulatory hurdles. So smaller and more innovative projects will have to stick with some kind of play money or reputation-based system. I used to be really skeptical here, but [Metaculus](https://www.metaculus.com/questions/) and [Manifold](https://manifold.markets/home) have softened my stance. So let’s look closer at how and whether these kinds of systems work.

Any play money or reputation system has to confront two big design decisions:

1. Should you reward absolute accuracy, relative accuracy, or some combination of both?
2. Should your scoring be zero-sum, positive-sum, or negative-sum?

#### Relative Vs. Absolute Accuracy

As far as I know, nobody suggests rewarding only absolute accuracy; the debate is between relative accuracy vs. some combination of both. Why? If you rewarded only absolute accuracy, it would be trivially easy to make money predicting 99.999% on “will the sun rise tomorrow”-style questions.

Manifold only rewards relative accuracy; you have to bet with some other specific person, and you only make money insofar as you’re better than them. All real-money prediction markets are also like this, and Manifold is straightforwardly imitating this straightforward design.

Metaculus has a weird system combining absolute and relative accuracy: all predictions are treated as a combination of “bets with the house” on absolute accuracy, plus bets against other predictors on relative accuracy. Why? As a kind of market-making function; even if nobody else has yet predicted, it’s still worth entering a market for the absolute accuracy points. This works, but has a lot of complicated consequences we’ll discuss more below.

(Manifold solves the same problem by having market makers be a specific user who wants the market to exist, and making that person ante up money at a specific starting price to make that happen. This seems a lot more straightforward and frees them from the complicated consequences.)

#### Zero Vs. Positive Sum

As far as I know, nobody suggests negative-sum markets; the debate is between zero vs. positive-sum. Technically markets with transaction costs can be negative-sum, but nobody is *happy* about this, just accepts it as a necessary evil.

Zero-sum is a straightforward choice that imitates real-money markets. Two forecasters bet, and whatever Forecaster A wins, Forecaster B must lose. This is nice because it produces numbers with clear meanings: if you have a positive number, you are on average better than other forecasters; the more positive, the more better.

Positive-sum means that the house always loses; on average, you make money every time you bet. Metaculus is infamous for this; see eg this question on Ukraine: If Russia invades Ukraine, this person will win +58 points; if it doesn’t, they will win +32 points.

Why does Metaculus allow this? They want to incentivize people to forecast. If it’s zero-sum, you’re as likely to lose points by forecasting a question as gain them. In fact, if you’re not the smart money, you’re *more* likely to lose, much as normal people should try to avoid competing against Wall Street traders when picking stocks. Since Metaculus wants to harness the wisdom of crowds, and you need lots of people to make a crowd, they incentivize you with a better than 50-50 chance (sometimes a guaranteed chance) of getting points.
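
To make “positive-sum” concrete, here’s a toy scoring rule with the same flavor as the Ukraine example above: a flat participation subsidy plus a log score against a 50% baseline. To be clear, this is not Metaculus’s actual formula, and the constants are invented:

```python
import math

def toy_points(p_yes: float, outcome_is_yes: bool,
               subsidy: float = 50, scale: float = 100) -> float:
    """Toy positive-sum scoring rule (NOT Metaculus's real formula).

    A flat subsidy for participating at all, plus a log score
    measured against a 50% baseline.
    """
    p_outcome = p_yes if outcome_is_yes else 1 - p_yes
    return subsidy + scale * math.log2(p_outcome / 0.5)

# A mild 55% forecast earns positive points either way,
# echoing the +58/+32 pattern in the Ukraine question:
print(round(toy_points(0.55, True)))   # ~64 points if it happens
print(round(toy_points(0.55, False)))  # ~35 points if it doesn't
```

The subsidy term is what makes the market positive-sum: as long as your forecast isn’t too far from the baseline, you collect points no matter what happens.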

The disadvantage of this is that it makes points less meaningful; just because someone has a positive number of points doesn’t mean they’re above average or have ever won a bet with anybody else.

#### Reputation Systems Aren’t About Reputation

I want to harp for a little longer on why this might be bad.

Suppose Susan is a brilliant superforecaster. She spends an hour researching every question in depth, at the end of which she is always right. Suppose Randy guesses basically randomly. Or fine, maybe he’s slightly better than random, he has gut feelings, if the question is “will Russia invade Brazil?” he knows that won’t happen and says some very low number. But it’s not like he’s thinking super-hard. Maybe it takes Randy ten seconds to get a gut feeling and type in the relevant number.

In a zero-sum system, Susans (almost) always beat Randys. Susans end up with lots of points, Randys end up with few or negative points, the system works.

In a positive-sum system, in the hour it takes Susan to produce one brilliant forecast, Randy has clicked on 360 different questions. Who ends up with more points? It depends on whether your system rewards a brilliant answer 360x more than the baseline it rewards any answer at all. The above Ukraine question on Metaculus rewards a maximally correct answer only 4x more than a lazy answer intended to most efficiently reap the free points - ~200 vs. ~50. So assuming an unlimited number of questions and both people investing the same amount of time, Randy would end up with about a 90x higher reputation than Susan.

Metaculus addresses this issue by . . . totally failing to address this issue and just accepting the consequences. It doesn’t seem so bad for them; their leaderboard contains many people who I know from other contexts to be genuinely excellent forecasters. But it [turns a lot of people off from them](https://www.lesswrong.com/posts/PQACEuWpkSyRgHC4p/covid-2-11-as-expected).

More important, it lampshades an important quality of “reputational” systems: so far, none of them actually produce any kind of a reputation. By this I mean something like: if I claim “I have an IQ of 160” or “I can bench press 300 lbs”, people might be impressed by me. If I say “I’m a superforecaster in the Good Judgment Project”, the small number of people who know and care what that is will be impressed. I’ve heard people claim all of these things, but I have *never* heard anyone casually drop their Metaculus score in conversation, even in the weird heavily-selected circles where everyone knows about Metaculus and agrees it is good. (I’m a relatively well-known blogger who writes a lot of things that may or may not be true, and I’m known to use Metaculus, and nobody has ever asked *me* my Metaculus score before deciding how much to trust me!)

I think this is partly because everyone understands that Metaculus scores are some combination of how good a forecaster I am, how much meta-gaming I do, and how much time I put into grinding Metaculus questions. But then, what’s the point? Your incentive for playing Metaculus is supposed to be getting a good reputation, but in fact this has no benefits, not even bragging rights!

I can’t deny that this system does, somehow, work. A lot of people use Metaculus (sometimes including me), and I *would* actually respect someone more if I knew they were on the leaderboard (probably through some assumption that Metaculans seem nice and honest, and even though the Randy strategy is easy, nobody cares enough to do it).
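
Spelling out the Susan-and-Randy arithmetic, using the ~200-point and ~50-point figures from the Ukraine example (illustrative numbers, not real Metaculus payouts):

```python
# Susan: one hour of research -> one maximally correct answer.
# Randy: ten seconds per gut feeling -> 360 lazy answers per hour.
susan_points_per_hour = 1 * 200
randy_points_per_hour = 360 * 50

print(susan_points_per_hour)                          # 200
print(randy_points_per_hour)                          # 18000
print(randy_points_per_hour / susan_points_per_hour)  # 90.0
```

Hence the ~90x: the reward for a brilliant answer would have to exceed 360x the participation baseline before Susan’s strategy wins out.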

Still, part of me wishes that reputation systems could actually give someone a good reputation - that the big Wall Street firms would consider guaranteeing interviews to people on the leaderboards, or something like that. But right now they’re just not good enough to survive having any real-world consequences.

#### Play Money Systems: Better Than They Sound?

So what about zero-sum, relative-accuracy play money systems? This is the strategy used by Manifold, plus some of the real-money prediction markets that offer play money to Americans (like [Futuur](https://futuur.com/)). It’s straightforward and it simulates a real prediction market closely. What could go wrong?

First question: why would anybody want play money? The obvious answer is that it’s a reputation system in disguise - the amount of play money you accumulate is a proxy for how good a forecaster you are - and an accurate one, unlike Metaculus’ reputation. This is mostly true, but with some complications. Manifold lets you buy their play money for real money, which in theory would destroy any reputational value. But they solve this by actually reputationalizing play money *profits*, which works: For example, I am now impressed by/concerned by/suspicious of Robert McIntyre. *What are you doing*?

A second potential reason people might want play money: on Manifold, you can use it to open your own questions, asking the market for information on a topic presumably of interest to you.

(this would be very straightforward if you were subsidizing the market, and the site encourages you to think of it as a subsidy - but is it? You bet your starting ante at some specific level. And usually you as market maker have more insight into the question than anyone else. Half the time it’s on your own personal life; the other half of the time it’s on some broader question *which is selected for being something you care about a lot*. Far from being a subsidy - money which it is easy for other people to get - this feels like smart money - money that other people should be scared to bet against. So how does this open the market at all? I’m not sure, and I’m willing to entertain the possibility that it doesn’t, that the system only holds together because everyone is having fun and nobody cares about the incentives, and that an ante of $1 would work just as well.)

Any broader problem with this system? I mentioned this last week, but let’s look at it again. This is inexcusably wrong: there’s no way this guy (a wrestler with no political experience who hasn’t even announced he’s running) has a 9% chance of becoming President. Why is nobody correcting it? Because you’d have to tie up your limited supply of play money for 2.5 years to make a tiny profit: the site tells me that if I put in an average person’s entire starting allocation (M$1000), I’d only push the chance down to 2% (still not low enough!) and only make an M$35 profit in 2.5 years (a ~1% annual rate of return) when time proved me right.

My [conditional prediction market experiment](https://astralcodexten.substack.com/p/open-thread-212?utm_source=url) seems to be failing for the same reason: I posted about six books I was considering reviewing, and asked people to bet on which ones would get lots of “likes”. Only 44% of my book reviews get more than 125 likes, but *every* book I proposed is at >44% right now. Many are much higher - like this one, about a dry scholarly textbook explaining a famously incomprehensible form of psychoanalysis. I think all these markets are mispriced.

My guess is that people are using this as a way of voting for books they want me to review. They buy “yes” on books they like, but don’t buy “no” on books they don’t like, because that would be against the imaginary rules of the vote they falsely imagine this to be. Ideally, actual prediction market players would take these people’s money and drive the markets back down to the base rate. That’s not happening here, and my guess about why is: it’s a small return on a year-long market that might never actually trigger (if I don’t review the book, the conditions for the conditional prediction market aren’t met, and it resolves N/A). Nobody wants to lock up their limited play money for this.

Metaculus, for all their system’s problems, would get this one exactly right; since you’re incentivized to predict on every question with no limiting factors, lots of people would bet on this one; since the optimal strategy is to bet your true belief, everyone would bet something very low, and the probability would end up very low.

What to do? In the Manifold Discord, I recommended offering a per-market interest-free loan of M$10, usable for a bet in that market only. Since it’s a loan, you don’t get free reputation by participating in as many markets as possible; if you’re not actually applying market-beating levels of work, you’ll only break even; if you’re worse than the market, you’ll *lose* money. Still, if I could take out an interest-free M$10 loan on this market, I would. I’d bet NO, and in 2.5 years, I’d make a total of M$1 worth of easy money (the arithmetic is sketched at the end of this post). If all two hundred-ish Manifold users did this, that would push the probability down to 1%, which is close enough to the real value.

Loans are complicated. For one thing, you’d have to prevent me from taking out the market-specific loan on this, selling my position immediately, and then reinvesting it into some flashier shorter-term question. For another, you’d either need a system of margin calls, or just accept that some people will go below M$0 sometimes (sure, let them go below M$0, so what?) Still, I think this would solve a lot of mispricings. If it didn’t, the administrators could fiddle with the size of the loan until it did.

You could also experiment with a mechanism where the market maker’s ante funds the loans, ie if you ante M$100 for a one-year market, you’re promising to loan the first ten people who enter M$10 each to bet against each other with. I don’t know how to do that in a way which doesn’t reward people who show up early, which is undesirable since it makes the reputation system less valid.

I think the “play money has value because you can use it to subsidize play money prediction markets which have value because people want play money so they can subsidize play money prediction markets which…” loop is clever and could potentially work. So far Manifold has been running off of fun and early goodwill; I look forward to seeing how they solve these difficult problems as they try to scale past that level.
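
As promised, the loan arithmetic, under a simple fixed-price share model. Manifold’s actual automated market maker moves the price as you buy, so treat this as an approximation:

```python
# Betting M$10 on NO when the market implies a 9% chance of YES:
# under a fixed-price model, each NO share costs M$0.91 and pays
# M$1 if the market resolves NO.
loan = 10
p_yes = 0.09

shares = loan / (1 - p_yes)  # ~10.99 NO shares
profit = shares - loan       # ~M$1 of "easy money" over 2.5 years

print(f"NO shares bought: {shares:.2f}")
print(f"Profit if NO resolves: M${profit:.2f}")
```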
# Open Thread 212

**1:** My spam filter has gone rogue and is blocking lots of important messages from real people. If you sent me something important and I haven’t responded, sorry, it’s probably in spam (though still with a medium amount of probability on “I am incompetent and just forgot to reply”). As a temporary measure, I’ve told my spam filter to let through anything that has the phrase “This is a genuine nonspam message”, so include that if you want to be sure.

**2:** ACX Grants recipient Trevor Klee writes:

> Things are progressing much faster on all fronts since we got the ACX grant. We're almost finished with computer modeling of our phase 1 trials, and are moving on to designing the phase 1 trial to answer the unanswered questions from our modeling. After that, we'll go bring our plans to the FDA for final approval. We've also doubled our team size, from 1 person + consultants to.... 2 people + consultants! I've been joined by a COO, Ken Kashkin, who has an enormous wealth of experience developing drugs on both the corporate and the research side. Most recently, he was COO of Chromocell, a ~100 person biotech company with >$30 million in revenue. Finally, we've raised about half of what we need for our minimum viable phase 1, but we're still looking for more. Ideally, we'd like to raise about $500k in additional funds. If you're someone who's excited by the idea of better, safer drugs for autoimmune diseases, a rapid march to the clinic, and an ambitious plan to take on neurodegeneration, please reach out to [trevor@highwaypharm.com](mailto:trevor@highwaypharm.com).

Usual caveats about “never invest in early-stage bio unless you have lots of experience and know exactly what you’re doing” presumably apply.

**3:** I’m running an experiment with letting conditional prediction markets decide which books I’ll review. I’ve opened a bunch of play money Manifold markets trying to predict how many “likes” I would get by reviewing [Nixonland](https://manifold.markets/ScottAlexander/if-i-review-rick-perlsteins-nixonla), [Whither Socialism](https://manifold.markets/ScottAlexander/if-i-review-joseph-stiglitzs-whithe), [Penelope’s Dream Of Twenty Geese](https://manifold.markets/ScottAlexander/if-i-review-edward-teachs-penelopes), [The Search For The Perfect Health System](https://manifold.markets/ScottAlexander/if-i-review-mark-brinells-the-searc), [something by Rene Girard](https://manifold.markets/ScottAlexander/if-i-review-one-of-rene-girards-boo), [The Power Of The Powerless](https://manifold.markets/ScottAlexander/if-i-review-vaclav-havels-the-power), or [A Clinical Introduction To Lacanian Psychoanalysis](https://manifold.markets/ScottAlexander/if-i-review-bruce-finks-a-clinical). I don’t promise to definitely review whichever one gets the highest percent chance, but it will probably affect my decision. I realize there are many ways this could go wrong, which is why I’m describing it as an “experiment” - still, predict if you want!

**4:** Thanks to everyone who reported comments (remember, you can do this through the menu you get when you click the three dots at the bottom of a comment). I’ve permabanned [Carl Cohen](https://astralcodexten.substack.com/p/bounded-distrust/comment/4691234), [Johnny Fakename](https://astralcodexten.substack.com/p/open-thread-210/comment/5176475), [Ormond](https://astralcodexten.substack.com/p/heuristics-that-almost-always-work/comment/4959498), and
[M. Gage](https://astralcodexten.substack.com/p/the-gods-only-have-power-because/comment/5176602), given warnings (50% of a ban) to [Akhorahil](https://astralcodexten.substack.com/p/book-review-sadly-porn/comment/5176578), given minor warnings (25 - 33% of a ban) to [Naked Emperor](https://astralcodexten.substack.com/p/open-thread-210/comment/5176407), and given trivial warnings (1% of a ban) to [Cassander](https://astralcodexten.substack.com/p/why-do-i-suck/comment/4840619) and [jstr](https://astralcodexten.substack.com/p/hidden-open-thread-2105/comment/5176531). If you want to appeal any decision, please write up your argument, start a conditional prediction market on whether I’ll agree that your appeal was worth my time to read (include your argument in the market description and subsidize it with an ante of at least M$100 or equivalent on some other site), wait a week, and if the prediction market is higher than 25% then you can send me an email with a link to the market and argument and I’ll look at it.

**5:** Sorry, the Austin meetup location has unexpectedly changed to Moontower Cider Company, 1916 Tillery St, Austin, TX 78723. I’ll post this more prominently later this week, but I wanted to post it here too so people have more of a warning.
# Austin Meetup Next Sunday

I’ll be in Austin on Sunday, 2/27, and the meetup group there has kindly agreed to host me and anyone else who wants to show up. We’ll be at [UPDATED WITH CHANGE] Moontower Cider Company at 1916 Tillery St from noon to 3. The organizer is sbarta@gmail.com; you can contact him if you have any questions.

As per usual procedure, everyone is invited. Please feel free to come even if you feel awkward about it, even if you’re not “the typical ACX reader”, even if you’re worried people won’t like you, etc. You may (but don’t have to) RSVP [here](https://www.lesswrong.com/events/95LYeapL9ZiRgp689/scott-alexander-visit-and-mixer).
# The Gods Only Have Power Because We Believe In Them

*[with apologies to [Terry Pratchett](https://amzn.to/3oKWHyS) and [TVTropes](https://tvtropes.org/pmwiki/pmwiki.php/Main/GodsNeedPrayerBadly)]*

“Is it true,” asked the student, “that the gods only have power because we believe in them?”

“Yes,” said the sage.

“Then why not appear openly? How many more people would believe in the Thunderer if, upon first gaining enough worshipers to cast lightning at all, he struck all of the worst criminals and tyrants?”

“Because,” said the sage, “the gods only gain power through belief, not knowledge. You *know* there are trees and clouds; are they thereby gods? Just as lightning requires close proximity of positive and negative charge, so divinity requires close proximity of belief and doubt. The closer your probability estimate of a god’s existence is to 50%, the more power they gain from you. Complete atheism and complete piety alike are useless to them.”

**\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\***

“Is it true,” asked the student, “that the gods only have power because we believe in them?”

“No,” said the sage. “The gods gain power not through belief, but through worship…”

“But then my next question is still the same. Why not appear openly? How many more people would worship the Thunderer if he struck down all of the worst criminals and tyrants?”

“Let me finish,” said the sage. “The gods gain power through the worship of unbelievers. The worship of someone who believes in them is useless. It must be an unbeliever who performs the rites. As the ancients say, solve for the equilibrium.”

“Um,” said the student. “Maybe - the gods appear to the king, and tell him to force the populace to perform rites?”

“So it was in Akhenaten’s time,” said the sage. “But soon the people thought: it must be a powerful god indeed who can convince our king to make us worship him.”

“Then - appear to one generation. Get a tradition going. Make sure everybody feels socially compelled to join in. Then abandon the world. Do nothing at all for centuries. Nobody will want to embarrass themselves by failing to pay homage, but everyone will doubt in secret. When too many people genuinely leave the church, appear again.”

“So it was in Jeremiah’s time,” said the sage. “But soon the people thought ‘We must have been wicked indeed for our god to leave us; we will believe with renewed fervour in the hopes that He returns.’”

“Then - convince everyone you don’t exist, but that it’s beneficial to pretend you do. Go easy on the threats of damnation, but threaten them with a hellishly empty social life if they let the institution of church lapse. Make them believe that ‘cultural evolution’ produces uniquely valuable structures, and so if your ancestors went on pilgrimages, you need to go on pilgrimages too even though there’s no such thing as a real holy place and you don’t know why.”

“You have said it.”

**\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\***

“Is it true,” asked the student, “that the gods only have power because we believe in them?”

“No,” said the sage. “Just the opposite. The gods only have power when people doubt them.”

“Then why have they revealed themselves to us?”

“So that it would be the Thunderer who the atheists scoff at, rather than Ra-Horakhty or Baal-Ammon. It is that scoffing that gives him strength. If the atheists scoffed at Ra-Horakhty instead, it would be he who is strongest.
Each god tries to apply enough power to keep their own name foremost on the minds of mortals, but not so much that the mortals truly believe. Any true believers are accidents, side effects of the level of power it takes to get the masses scoffing at them in particular.” **\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*** “Is it true,” asked the student, “that the gods only have power because we believe in them?” “No,” said the sage. “Just the opposite. The gods only have power when people doubt them.” “Then why have they revealed themselves to us?” “They have not. Those gods you know are the losers of wars in heaven. Their victorious enemies spread their cults as widely as possible on earth, to ensure they never rise again.” **\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*** “Is it true,” asked the student, “that the gods only have power because we believe in them?” “No,” said the sage. “Just the opposite. The gods only have power when people doubt them.” “Then why have they revealed themselves to us?” “Those gods you know of are the weakest of gods, those who let their lust for praise and adoration overcome their good sense. Of the gods you know, the ones the myths speak of least are most powerful, and there are unknown gods stronger than any.” “The ones the myths speak of least are the most powerful? Wouldn’t that make the myths nonsensical?” “Do the bards not tell of three old women with thread and scissors, who have magic that binds the mighty Thunderer himself? Yet where are their temples? Who tells their story?” “Does that mean we could blackmail a god by threatening to believe in them harder?” “You have said - ” The sage paused, confused. Had said what? Who had said it? Who was he talking to? He was growing barmy in his old age, that was for sure. Soon he would die and then all his knowledge would die with him. If only he had taken a student. **\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*** “Is it true,” asked the student, “that the gods only have power because we believe in them?” “Yes,” said the sage. “It is true.” “If everyone stopped believing in a god, would the god die?” “You have said it.” So the student traveled the land, dominant assurance contract in hand. Everywhere he went, he told the people, “Sign this contract, which says that if everyone in the land signs the contract, you all agree to stop believing in the plague god.” The people said they would sign if they could, but how could they stop believing in the plague god when his evidence was all around them? The student thought for many years. Then he shaved his beard and grew his hair and returned to them, unrecognizable, and speaking different words. Now he said “There is no plague god. Plagues are caused by animaliculi, too small to see. No ceremony or amulet can avert them, only washing your hands and wearing face masks.” The people were astonished at his doctrine. But here and there it caught on, and in those villages the great plagues began to wane, until only mild colds and agues were left. Inspired by this success, more and more people believed the student’s words and doubted the plague god. And everywhere people were healthy and happy. When the student was very old and had taken students of his own, he called them together. “Soon I will die,” he said. “But my heart is heavy. For one day, people will invent new tools, tools that let them see smaller than a hair’s breadth.
And they will find there are no animaliculi, and learn that I deceived them, and then the plague god will return, angrier than ever. You must promise that when that time comes, you will bury these fears, re-convince them of the truth of my words, and protect my legacy.” All his students so swore, and he died. And that is why, even today, when someone goes on a podcast and disagrees with an epidemiologist, lots of people get really angry and demand that Spotify take it down. **\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*** “Is it true,” asked the student, “that the gods only have power because we believe in them?” “Yes,” said the sage. “It is true.” “If everyone believed I was a god, would I become a god?” “You have said it.” So the student traveled the land, dominant assurance contract in hand. Everywhere he went, he told the people, “Sign this contract, that if everyone in the land signs the contract, you agree to worship me as a god.” The people were skeptical. “Why should we worship you?” But the student won them over. To the Northmen, he promised that upon attaining divine powers, he would stop their long civil war. To the Westmen, he promised to humiliate their enemies the Eastmen. To the Eastmen, he promised to protect them from their enemies the Westmen. And to the Southmen, he promised to make them as rich as they currently were poor. Finally, when the last village had signed, he sent out riders, who called out “Rejoice, for the dominant assurance contract is complete, and now you shall worship me as a god!” All the people of the land came and paid homage to him, and promised to obey his divine commands. To the Northmen, he commanded that they cease their fighting. To the Eastmen, he commanded that they give half their wealth over to the Southmen. To the Westmen, he commanded that they cease attacking the Eastmen. For a generation, the land flourished under the god-king, until one day the old sage showed up at his palace. “My son,” he said, “I am old and weak. Now that you are a god, grant me my youth back, so I can teach others as I have taught you.” “Alas, I cannot,” said his former student. “It is as I suspected,” said the sage. “You are a fraud, and no god at all.” “Wrong,” said the student, “I am not a god of healing. The storm god cannot create a single sunbeam, nor the god of death make a single flower bloom. I am the god of power, and by my fruits you shall know me. Go in peace, old man.” Then the sage knelt, and paid homage to him, and returned home. **\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*** “Is it true,” asked the student, “that the gods only have power because we believe in them?” “Yes,” said the sage. “It is true.” “Then could I believe in a God Of Being Perfect In Every Way Who Loves Humans Infinitely Much And Tries To Make Them Better Off, and cause that god to come into existence?” “You have said it,” said the sage. “But since only one person believed in him, he would have only minimal power.” “What if I believed in the God Of Being Perfect In Every Way And So On, Who Also Had A Magical Hack He Could Use To Bootstrap From Existing At All To Being Completely Omnipotent? Then would he have limitless power?” “You have said it,” said the sage. And so the student believed in this god he had designed as hard as he could. And he lived happily ever after, along with his wife Sarah and his son Isaac.
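A note for anyone who bounced off the phrase “dominant assurance contract”: it’s Alex Tabarrok’s mechanism for getting collective action off the ground - anyone who signs gets a consolation bonus if the contract fails to reach its threshold, so signing beats not signing no matter what you expect everyone else to do. A toy sketch, with numbers invented purely for illustration:

```python
# Toy payoff table for a dominant assurance contract that, like the
# student's, requires unanimity. All numbers are made up.

V = 10.0  # my payoff if the contract completes (the plague god dies)
C = 1.0   # my cost of signing (effort, social risk)
B = 2.0   # bonus the organizer pays me if I signed but the contract fails

def payoff(i_sign: bool, all_others_sign: bool) -> float:
    if i_sign and all_others_sign:
        return V - C   # contract completes; I paid my share
    if i_sign:
        return B - C   # contract fails; I collect the consolation bonus
    return 0.0         # I didn't sign, so nothing happens for me either way

# Whatever everyone else does, signing strictly beats not signing,
# so "solve for the equilibrium" gives universal signing.
for others_sign in (True, False):
    assert payoff(True, others_sign) > payoff(False, others_sign)
print("signing is a dominant strategy")
```

The student’s only twist is in what the signatures bind people to do; the payoff logic is the standard one.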
# Book Review: Sadly, Porn **I.** Freshman English class says all books need a conflict. Man vs. Man, Man vs. Self, whatever. The conflict in *[Sadly, Porn](https://amzn.to/34VRqgW)* is Author vs. Reader. The author - the pseudonymous “Edward Teach, MD” - is a spectacular writer. Your exact assessment of his skill will depend on where you draw the line between writing ability and other virtues - but where he’s good, he’s amazing. Nobody else takes you for quite the same kind of ride. He’s also impressively erudite, drawing on the Greek and Latin classics, the Bible, psychoanalytic literature, and all of modern movies and pop culture. Sometimes you read the scholars of two hundred years ago and think “they just don’t make those kinds of guys anymore”. They do, and Teach is one of them. If you read his old blog, *The Last Psychiatrist*, you have even more reasons to appreciate him. His expertise in [decoding scientific studies](https://thelastpsychiatrist.com/2008/08/seroquel_for_bipolar_maintenan.html) and in [psychopharmacology](https://thelastpsychiatrist.com/2007/07/the_most_important_article_on.html) helped me a lot as a med student and resident. His political and social commentary was delightfully vicious, but also seemed genuinely aimed at helping his readers become better people. My point is: the author is a multitalented person who I both respect and *want to* respect. This sets up the conflict. Because this book is . . . what even is this book? The first page has an eight-page-long footnote at the bottom, which covers the Delphic Oracle, the Salem Witch Trials, and the movie *Fast Times At Ridgemont High*, and ends up concluding that you (yes, you) are incapable of having desires. Immediately afterwards, the narrative breaks off for a thirty-page cuckold porn story, which sounds like the sort of thing you put in so you can discuss it later, except that the book never does. Then it’s back to more seemingly-crazy assertions and multi-dozen-page footnotes. Footnote 35 is half a page of the author screaming at a hypothetical reader who wants fewer footnotes: > “Why so many footnotes???” Which is the same question as, “why are your sentences so long, why so many commas, what the hell is with you and semicolons?” It’s all on purpose, to get rid of readers. You’re stumped by the physical layout? This book is not for you, your brain is already set in concrete, it can never change, only crumble as it ages. Which is fine if your plan was to be a foundation for the next generation, but it isn’t; you’re the rotting walls that they have to knock down while you play the flute and pretend to give freedom to everyone else. If you look forward to TV, if you think “the problem with the youth today is that they’re entitled,” if you think, “damn all the partisanship, I wish someone in government would take charge and do the right thing” — you are a true Athenian democrat. “I’ll take that as a compliment.” Yeah. I’m not saying you are necessarily a bad person, I’m just saying your kids would benefit from a more hands-off approach to parenting. And a math tutor. Most of you should not read this book, the Disclaimer represents all the justification you deserve, I did everything I could to exclude everyone, including adding the porn story at the beginning, a Beware Of Dog sign written in cat. You are the kind of person who will be bothered by the presence of the porn story here, in a book safely away from any observation, even as you don’t observe that your kid observed…what you have been observing.
You are the kind of open-minded replicant who will say, “I don’t have a moral problem with porn, it just has to be well-written!” That’s how you were told the kind of person you want others to think you are would select even his porn. Exacting measures of quality for your self-indulgence, while your standards for employment and diet are bafflingly arbitrary. “Are these cubicle donuts gluten-free?” They’re regular free, is that not free enough? You demand excellence in everything for yourself except yourself, you figure that will come after you’re discovered for being excellent. “But I can’t follow your book, why can’t you write more clearly?” I typed it, what the hell more do you want? Audiobook? But you didn’t mean it literally. You never mean anything literally. Try it. You can’t. Never mind all that: how do you experience your frustration with the book? Answer: As if I owed you a debt. When Tarkovsky sent *Stalker* to the Soviet censors for approval, and they came back with the complaint that it was too slow-paced and dull, he told them “it needs to be slower and duller, so people have the time to leave.” I would have published this in 4 pt font if I could, the irony is sometimes I had to write in 4 pt font to avoid the surveilling eyes of Athenians who sat next to me on transports. “Couldn’t help noticing you weren’t talking to me, what’s that you’re working on?” It’s a manifesto, you should buckle up. There’s a trope where a brilliant writer at the peak of his career writes something that defies all the normal rules. *Finnegans Wake*. [The Northern Caves](https://archiveofourown.org/works/3659997/chapters/8088522). Is it a troll? Is its impenetrability the very sign of its genius? Is it some sort of complicated tease, where the exhortations not to read it make you want to read it even more, to prove you’re one of the true fans, one of the elect who’s better than those Athenian democrats and gluten-free donut eaters? Teach’s earlier work centers around Christopher Lasch’s idea of narcissism. *Sadly, Porn* adds a layer of Lacanian psychoanalysis (I wasn’t smart enough to recognize this myself; other people pointed it out). I’ve been wanting to learn more about Lacan for a while. Partly because I never understood him in school. Partly because Slavoj Zizek is into him and everyone seems to think Zizek is smart. And partly because I recently realized that Kleinian psychoanalysis, which I also never understood, actually has useful insights (hint: compare Part III of [this post](https://astralcodexten.substack.com/p/movie-review-dont-look-up) with the theory of part objects) and for all I know Lacanian psychoanalysis might be the same way. But also: I have a couple of friends and acquaintances who are (or were) really into Lacan. They’re all exactly the same: highly-driven, highly-charismatic people, alternating between eerily brilliant and totally incomprehensible, and always deeply misanthropic throughout. Teach fits this same mold. Does the personality type attract you to the theory? Does the theory produce the personality type? It’s a weird enough coincidence that it makes me want to learn more. And: I have a running argument with one of these people. The argument is: I accuse him of becoming a cult leader, he denies it. During a recent spat, he said something like - “okay, I agree that lots of people are fascinated by me / attracted to me / tend to do whatever I want, in a way that doesn’t make sense under the normal rules, and that you couldn’t replicate even if you wanted to.
You can judge me for it, or you can admit there’s a hole in your map, something that I understand and you don’t. If you want to understand it too, read Lacan.” I can’t remember if this was part of the conversation or came up afterwards, but there sure are a lot of holes in that area of my map. Why do some people have the “charisma” to become successful cult leaders? Why do other people follow them? Why do some people keep falling for abusers, again and again? Why are so many people attracted to partners with Dark Triad traits? Why do people have fetishes which seem contrary to common sense (submission, humiliation, cuckoldry, etc)? I have boring [semantic stopsign](https://www.lesswrong.com/posts/FWMfQKG3RpZx6irjm/semantic-stopsigns) answers to all these questions, but none that seem satisfying. This kind of hole-filled map suggests I must be missing *something* here, and a whole lot of people who might know suggest trying to find it in Lacanian psychoanalysis. I already tried the kind of normal book that a normal person might use to try to understand Lacan, and I bounced off of it like putty. So fine. Let’s try to read this abomination and see if we can squeeze something out of it. **II.** *Sadly, Porn* consists of a mid-double-digits number of short-ish (5-10 page) interpretations of various texts, vaguely connected by rants and insults. The texts range from classical (especially Thucydides and *Oedipus Rex*), to Biblical, to modern novels, to movies, to pornos, to dreams. Some of them, on closer inspection, are fictional - not in the sense of being works of fiction, but in the sense where Teach made them up. Some are outright psychoanalytic dream interpretations, and the rest draw from this tradition. The underlying theory is that every work of art (including porno) is an expression of some repressed desire, which has to be different from the open desires (so eg *Oedipus* can’t really be about marrying your mother, because Oedipus openly marries his mother). So for example, here’s Teach on *The Giving Tree* - yes, this is a long quote, but a review of this book won’t make sense until you see the kind of thing he’s doing: > Take a look back at Shel Silverstein’s 60s storyboard, *The Giving Tree*. Here’s an invalid but reliable statistical observation: if you sell 7 million copies of a book with a positive message and it doesn’t make people live the message, then they didn’t get that message. What they did get was a very strong defense against the actual message, see also The Gospel Of Mark. > > It’s universally agreed that The Giving Tree represents a mother. This is a very odd association to make, because it’s clearly not a mother, it’s a tree, if it was a mother than [sic] the boy would be a sapling. “It’s literally a tree, but the tree is a metaphor.” Obviously it’s a metaphor, what I want to know is why you chose the wrong one. The boy is a biped and has a human girlfriend; the fact that the story requires organisms from two different kingdoms not only complicates the possibility it represents a mother, it requires the reader to force the interpretation on the book, to “do violence to the text”. You know, like rape. > > Why do you think it’s a mother? It gives and gives and gives and asks nothing in return, but that’s not what defines “a mother”, that’s how your mother defines herself. In fact, the fundamental characteristic that would make it a “mother” is explicitly absent from the story, and that’s responsibility to the boy. The Tree has none.
It may be nice to him, it may sometimes let him win at hide and seek, it may give him a boat, but it doesn’t have to punish him, it doesn’t have to protect him, it doesn’t have to worry about teaching him to swim or warning against gold digging hippies, it doesn’t have to make him sad/angry/scared for his own good. “I just want him to be happy”. That’s it? The tree isn’t his mother. At best, it’s his godmother. Uh oh. > > So the question you have to ask your pop rocks and triple cola conscience is not why you thought it was a mother, but why you wanted it to be a mother. “Because it acts selflessly out of love?” Boy oh boy are you way off. > > The trick to what the demographic wants, and this may sound familiar, is that while it doesn’t believe in “true love” between two people, it doesn’t believe in true love of a parent for a child *either*. Parental love can’t be true love because it is definitional, obligatory, and therefore it doesn’t count. What the demo believes in, what it aspires to, is unconditional love chosen by free will - that can be witnessed and confirmed by other people as an act of free will. To the demo, rather than the symbolic obligation being both the requirement for love and its justification, the symbolic obligation negates it. This is the form of love you and the other adult readers are capable of - of imagining. That’s why it’s a tree. Since there’s no cultural or even biological responsibility to love this boy, then this love is (depicted as) real love. > > The desire to display gigawatt devotion with zero responsibility is the standard maneuver of our times, note the trend of celebrity soundbite social justice, or children’s fascination with doing the extra credit more than the regular credit, and as a personal observation this is exactly what’s wrong with medical students and nurses. They’ll spend hours talking with a patient about their lives and feelings while fluffing their pillow to cause it to be true that they are devoted - they chose to act, chose to love - while acts solely out of ordinary duty are devalued if not completely avoided. “Well, I believe the patient’s spirituality is very important.” It will be if you don’t get this NG tube in. You may think you have very valid personal reasons for not wanting to assume responsibility, like apathy or minimum wages, but the overwhelming motivator for devotion by choice is the rewarding reward of giving gifts of oneself, seemingly selflessly, because these publicly “count” more than discharging duty. The retort to this is that often times the selfless acts are done out of everyone else’s sight, so what possible reward could there be? But one doesn’t need to be seen by individual people, it’s enough to imagine being seen by a hypothetical audience. […] > > The entire childish fantasy of “motherly love” collapses the moment obligation enters into it, which is why, in *The Giving Tree*, it never does, and this is why so many remain deeply attached to it as a mother figure. It doesn’t represent a mother - the wish is that it could. Tree-mothers will do anything to convey devotion and “love” - because there is no obligation to do it. They are willing to sacrifice, to give of themselves, to convey the appearance of suffering and sacrifice *even by actually suffering and sacrificing* - they’ll cut off their own arms to prove it, in order to assure themselves and a love object too guilty to be suspicious that they do it all out of willful, chosen love. “I love!” But can you help me with my math homework? No? 
Fine, I’ll just go back to wetting the bed and playing with matches. The desire for it to be a mother also satisfies within the adult reader the childish desire to be special: if only my mother did these things for me because she loved *me* and not because she had to - not because she would have been similarly obligated to any of her meiotic anomalies. Because then it would count. > > Why would Tree-mothers so reliably avoid acting out of responsibility, but might perform the very same acts out of “love”? Why is this kind of mothering so aspirational, celebrated? What’s so bad about obligation that it needs to shrouded [sic] in “love”, or outright resisted? Because obligatory mothering means you matter less than your replacement, no thanks, my place in the world is unique. And the uniqueness is signaled by regular, public gifts of themselves, not public in the studio audience sense, but public in the storyboard sense, the potentiality of an audience that doesn’t need to exist. “I’ve sacrificed so much to give you a boat.” But shouldn’t you teach me to boat so I can boat for a lifetime? No? That’s Dad’s job? Got it. > > Your desire to be a selfless godmother may imply you’re a bad person, but it doesn’t automatically mean it’s bad for the kid, he still gets a boat, right? Can’t self-interest result in positive outcomes for others? Yes, but this isn’t self-interest, it’s self-definition, it is *relative* to the outcomes of others. In other words, there’s a ledger that needs to be balanced, and the kid is going to pay eventually. The apparent selfless devotion perversely/purposefully obligates the child to them - it causes there to be a debt owed back to the parent which should not exist: the child perceives the existence of such an unpaid debt and thus believes his guilt is warranted. This is the guilt that the adult reader misinterprets as “nostalgia” or “poignancy”. This is entirely separate from the complex duty an adult child owes their parents, which many avoid anyway; this is an unrepayable debt that keeps the child indebted to the parent - in this way precluding the possibility that the child can mature into their replacement, or at all. > > *The Giving Tree* is an anagram for *I Get Even, Right*? That’s a solid example of the return of the repressed assuming it wasn’t on purpose. So the boy rebels, becomes selfish, he grows up and appears not notice [sic] or not to care that he’s hurting the Tree; but this is inaccurate, his destructive actions should be seen as a *response* to this debt, to the unfillable gap constituted by the symbolic debt against which his neurosis is a protest. > > Not everyone likes the story. There have been a lot of ferocious criticisms of its “theme”. The question is, what is the outcome of these criticisms? Do the criticisms offer an alternative understanding, or do they pretend to criticize in order to maintain the status quo? A popular criticism, heavy with contempt and thus conveniently dismissed as misogyny, is that the Tree “mothered” him too much and failed to foster independence in the child. While this may be factually accurate, it’s even more wrong, it’s the kind of insight that gets you out of having to go any further, it ends your connection to the story - you are done with the book. The criticism that the Tree fails to foster independence presumes it is *supposed* to do this. But that’s not it’s [sic] job. It’s his actual mother’s job, it’s his father’s job.
Based on how this little rat turns out it’s clear they failed, but that’s a totally different book, and it’s called *Oedipus Tyrannos*. The critics say the Tree failed as a mother because they *want* teaching independence to be the metric of motherhood; but as they are misogynists their true target for redefinition is fatherhood. No one criticized the Tree for failing to teach the boy math, or for self-cutting to guilt him into a debt, its one celebrated failure was not teaching him independence, which, you will observe, is way easier than teaching him math. Consequently it is correct to say that the criticisms of the book pose no threat to the underlying psychology which both haters and admirers share, their ends are the same, both pro and con have succeeded in reprioritizing the myriad defining responsibilities of a parent for the modern age, here they are in full, in order of importance: 1. Foster independence. 2. Other stuff. > > Asserting parenting’s main job as fostering independence is not merely self-serving, it’s bad for the kid, and it’s probably correct to say that in modern times we have completely accidentally but nevertheless excessively fostered independence, to the point that dependence of any kind is seen as a moral catastrophe, or at least an easy target for self-righteous indignation. Of course the independence that’s fostered isn’t real independence, it’s green screen individualism, all the dependencies are disavowed or at least fetishized with money; even the money gets fetishized into credit so he doesn’t even have to see he needs money, the credit card lets him believe he is his own man; and it only makes jarring the instances where independence is utterly impossible, eg medical illness or falling in love. We’ll tolerate a certain amount of material dependence because it doesn’t count, but no way is anyone going to allow an emotional = “pathological” dependence on the other. > > “But isn’t pathological dependence just borderline personality disorder?” Border between - what and what? The question you asked about their pathology is a symptom of your pathology. You want the borderline’s pathology to be their pathological overdependence on the other because you don’t want it to be the characteristic that you both share, which is the absence of interest in whether the other can depend on you. The crucial distinction is that while neither of you are dependable, the borderline wants to be seen as dependent and not dependable, whereas you want to be seen as not dependent but as dependable. The borderline may be more thirsty, but it’s still a babbling brook for both of you: can’t live without it, derive no real enjoyment out of it, can’t tell it apart from any other water and often pee in it. The water gets nothing in return from either of you. > > If you accept that the boy has an actual biological mother, never seen in the story because the need for her is repressed and thus of no interest to the childish reader, then something else becomes true and changes the genre from kiddie porn to Lovecraftian horror: the man doesn’t keep coming back to the Tree, the man keeps coming home to his actual mother. The Tree is outside waiting for him. > > But the claim that the tree fails to foster independence turns out to be literally incorrect, a defense in the form of a criticism. The last sentence of the story is, “And the tree was happy.” Why is she happy? Because the old man has wasted his life and came back to her?
> > The tree doesn’t fail to foster independence; it actively thwarts the child’s independence at every turn. This may seem hard to believe, she did give him her trunk so he can heed the call of Manifest Destiny, but unless you’re going to chop it up and Huck Finn the pieces into a raft that trunk isn’t going to carry away anything but your optimism. And who taught you to use an axe, your mom? Don’t dismiss the giving of the boat as a contrivance solely for the purpose of furthering the plot, because the contrivance is what the passive agent uses to cause the active agent to act on her desires. She offered first her apples which were useful and then the wood which she knows is not useful. But instead of first offering the apples and then referring him to the 2 ton cedar trees in the next forest or at the very least a boat maker, she offers him what couldn’t possibly satisfy him. “I hope the scent reminds you of me”. You know he’ll be back in a week, when was he going to forget? > > The tree isn’t giving “of itself” because it has nothing else to offer, it is giving of itself because it doesn’t want the boy to want anything else. But this selfishness is totally opposed to how the Tree views itself - a kind, loving, giving Tree - so it is necessary to disavow this. To hide that thought from herself - not the boy, but herself - she is willing to chop parts of her body off for him, as long as those parts don’t do him any good. The magnitude of sacrifice is illusory even if it fools other people as well, it looks huge to the outside, which is why that part was chosen for sacrifice - but it is of only passing value to the boy. The sacrifice hides to herself her attempts to keep the boy unsatisfied, wanting more. The last page of the book shows the man come full circle, sitting on the stump. “And the tree was happy”. Which was the whole point. > > In other words, the Giving Tree is a giant cunt. Take it easy, that’s not me saying it, that’s Silverstein: in a later comic, he drew a picture of a man approaching a cave that looks like the top part of the Giving Tree and all of a 60s mom’s vagina, I’ll wait, and the guy goes in but doesn’t come out. The title of the comic is “And He Was Never Heard From Again.” Well I have a question: is the cave happy? Anybody want to tell me why? > > It’s important to ask: if the Tree’s target is the boy, even into adulthood, why does it continue to position itself as a mother - instead of as one of the historically reliable poses for manipulating adult men such as a wife, lover, or damsel in distress? > > Because she doesn’t know what he wants. The only thing she knows about him is that he keeps coming home to his real mother. But hold on - I don’t mean she tries to be a mother because that’s what she thinks the boy wants. *She doesn’t know what he wants*. Stop here, read that all again. But his mom must know - it’s why he keeps coming back to her. So the Tree identifies with the mother in order to figure out what the boy wants; not like Special Agent Empath who “gets inside the head” of the criminal, but like a high end escort or high priced psychoanalyst. She has no idea what the guy lying beneath her wants; the only thing she knows about him is that he thinks escorts and psychoanalysts would know. So she doesn’t guess what he wants: she simply stays in character as the one who is supposed to know, and waits for the man to act. > > Of course escorts and psychoanalysts get paid, ie the ledger is immediately balanced.
In the Tree’s case, however, no payment is forthcoming; and since it is an arithmetical necessity that the ledger must balance, it becomes even more important to figure out what he wants, in order to deprive him of it. And so on for another four pages. Imagine fifty-ish of these analyses strung together by the loosest of connective tissue, and that’s *Sadly, Porn*. **III.** An [ancient](http://www.davidchess.com/words/BrokenKoans.html#what) Zen koan: > One afternoon a student said "Roshi, I don't really understand what's going on. I mean, we sit in zazen and we gassho to each other and everything, and Felicia got enlightened when the bottom fell out of her water-bucket, and Todd got enlightened when you popped him one with your staff, and people work on koans and get enlightened, but I've been doing this for two years now, and the koans don't make any sense, and I don't feel enlightened at all! Can you just tell me what's going on?" > > "Well you see," Roshi replied, "for most people, and especially for most educated people like you and I, what we perceive and experience is heavily mediated, through language and concepts that are deeply ingrained in our ways of thinking and feeling. Our objective here is to induce in ourselves and in each other a psychological state that involves the unmediated experience of the world, because we believe that that state has certain desirable properties. It's impossible in general to reach that state through any particular form or method, since forms and methods are themselves examples of the mediators that we are trying to avoid. So we employ a variety of ad hoc means, some linguistic like koans and some non-linguistic like zazen, in hopes that for any given student one or more of our methods will, in whatever way, engender the condition of non-mediated experience that is our goal. And since even thinking in terms of mediators and goals tends to reinforce our undesirable dependency on concepts, we actively discourage exactly this kind of analytical discourse." > > And the student was enlightened. This actually helped me understand Zen. So: what’s the equivalent for *Sadly, Porn*? If Teach ever felt motivated to explain his technique as clearly as this roshi, what would he say? Does he claim that the books/movies/pornos he analyzes *really mean* what he says they mean? That the author intended those meanings? That the authors’ unconscious minds did? That those meanings were a fortuitous and coincidental reaction between the authors’ unconscious minds and ours? Or is he using them the same way postrationalists use tarot cards - as a semirandom canvas that gives an excuse to speculate about ideas that realistically come entirely from your own mind? It has to be the latter, right? He doesn’t really think *The Giving Tree* means all that stuff? And yet when bringing up the anagram with *I Get Even, Right?*, he calls it “a solid example of the return of the repressed assuming it wasn’t on purpose”. Although I’m impressed by Teach’s erudition, I’m - let’s call it “not as impressed as he is with himself”. It’s impressive how many facts he knows, but he warps them into Jenga towers of speculation that can’t possibly be true, almost compulsively, without bothering to justify himself. There’s an analysis of fishing-related words in the Gospels where he mentions he ran it by a bunch of Greek scholars and they all said it was nonsense. He seems to accept they’re right and his analysis is wrong, but - doesn’t care? Makes us read it anyway?
Maybe it’s the semirandom canvas thing after all? Something I learned when writing this review: Lacan admitted to being deliberately obscurantist. He said Freud was easy to understand, so everyone read the text without deep thought, then misinterpreted it. Lacan figured if he was hard to understand, people would think about it, let the ideas float around a while before forming an opinion on them, and maybe get them right. Part of me feels like saying [I’ve read this study and it doesn’t replicate](https://link.springer.com/article/10.1007/s11409-016-9154-x). But it’s a fascinating idea. If you have some concept it’s easy for people to get wrong, might you transmit it with higher fidelity by being hard to understand? For example, suppose that the idea has many interlocking pieces, and each piece gives a clue about the nature of every other piece. If your writing is easy to understand, the reader immediately gets (some possibly slightly-flawed version of) the first piece, then uses that to produce a (even more flawed) version of the second piece, and so on. But if your writing is hard to understand, maybe you present the first piece, the reader doesn’t get it, you present the second piece, they still don’t get it, and then once you’re done your reader is able to compare all the pieces to each other, and the only shape in which they really all interlock is the true theory. (I’ll sketch a toy version of this below.) Memetics is the study of ideas optimized to spread. It’s a useful lens on religions, image macros, and catchy songs. Antimemetics is its less well-known (ha!) cousin, the study of ideas optimized *not* to spread. “But I can’t think of any ideas like that!” [Exactly](https://scp-wiki.wikidot.com/antimemetics-division-hub). A low-grade antimeme is merely boring. A medium-grade antimeme is invisible in plain sight. A high-grade antimeme is worst of all; you can attend an entire college course about one, come out the end thinking “man, that was a good course”, get an A+, and *still not get it at all*. (The Bible describes very clearly what angels look like. Everyone agrees the Bible is the authority on angels, maybe the only primary source for them at all. All Western culture for 1500 years has been based around the Bible. There are hundreds of millions of people who take the Bible completely literally and read it every day. The Bible says - Revelation 22:18 - that if anyone changes the Bible in any way even by a single word they will be punished with eternal torture. *And yet nobody’s mental image of an angel, nor any popular artistic depiction of an angel, has anything in common with the Biblical description*. This is the highest-grade antimeme I feel comfortable using as an example; if you don’t see the fnords they can’t eat you.) A lot of *Sadly, Porn* feels like a guy trying to cram an antimeme into your head. Psychoanalysis is about defense mechanisms; you actually like Shel Silverstein books because they speak to your secret desire to kill your father and marry your mother (or whatever), but you’re horrified by that desire and want to repress it. The Shel Silverstein book gives you some sort of protective cover, hides it under ten layers of symbolism and misdirection. You can say something like “the job of a literary critic is to reveal the secret desire the work is speaking to”, but if your brain wants it hidden so bad that it’s willing to use ten layers of misdirection, probably saying “hey, the hidden desire is that you want to kill your father and marry your mother, okay?” isn’t going to work.
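Here’s the toy version I promised. It’s entirely my own sketch - nothing from Lacan or Teach - and the modeling choices (each misreading is independent Gaussian noise; the theory has K pieces that all have to agree) are invented for illustration:

```python
import random
import statistics

random.seed(0)

K = 12        # number of interlocking pieces in the theory
NOISE = 1.0   # standard deviation of each individual misreading
TRIALS = 10_000

def greedy_reader() -> float:
    """Easy writing: the reader locks in piece 1 immediately, then reads
    each later piece through their already-flawed picture, so the
    misreadings stack up like a random walk."""
    error = 0.0
    for _ in range(K):
        error += random.gauss(0, NOISE)
    return abs(error)

def deferred_reader() -> float:
    """Hard writing: the reader can't lock anything in, ends up holding K
    independent noisy takes, and only then reconciles them - modeled
    crudely as averaging the independent errors away."""
    errors = [random.gauss(0, NOISE) for _ in range(K)]
    return abs(statistics.fmean(errors))

print("mean final error, greedy reader  :",
      round(statistics.fmean(greedy_reader() for _ in range(TRIALS)), 2))
print("mean final error, deferred reader:",
      round(statistics.fmean(deferred_reader() for _ in range(TRIALS)), 2))
# Under these assumptions the greedy reader ends up roughly an order of
# magnitude further from the true theory (about 2.8 vs 0.2).
```

Of course this assumes the obscurantist loses no readers along the way, and that deferred readers really do reconcile everything instead of giving up - which is roughly what the replication study linked above is skeptical of.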
(just to be clear, Teach isn’t arguing that kill-your-father-marry-your-mother is a real secret desire; I think he even claims that this is a misinterpretation/misdirection that society invented in order to defend against the real meaning of Freud) The naive defense mechanism is to deny it and get angry, but most people are too smart for that now. The sophisticated defense mechanism is to intellectualize it so hard that you can write a bunch of books on the semantics and semiotics of it without ever engaging with it on an emotional level. Most people do something in between: they get the idea *partly right* but deliberately misunderstand some crucial piece of it such that it loses 100% of relevance and in fact it becomes a defense against the real idea. Teach seems to think something like this can also happen *en masse*, eg how wokeness originated as a call to destroy the system and ended up as a Coke marketing gimmick. (The Coke example is in Hungarian because there was some brouhaha in Hungary that got it to the top of the search engines, and I’m lazy.) In one kind of surreal passage, Teach discusses the psychoanalytic interpretation of dreams. Dreams contain content that the mind wants to repress, but then - why dream it? Why go to a psychoanalyst specializing in dream interpretation? When the CIA wants to keep something classified, they don’t cloak it in a riddle and email it to the KGB’s Riddle Decoding Division. Teach thinks people do this in the hopes of tricking the psychoanalyst into giving the wrong interpretation, thus providing them with an extra misdirection layer. Something like “I can be sure I don’t want to kill my father and marry my mother, because if I had those kinds of desires they’d probably come out in dreams, but the psychoanalyst says *my* dream is just about how I secretly fear failure, so I’m fine.” Dreamers *do* include the real hidden desire in the dream, but only to keep it fair, so that the analyst’s failure counts. At some point I believe Teach suggests that normal people don’t have meaningful symbolic dreams, only people who go to psychoanalysts do, and for that reason! And he reinterprets one of Freud’s dream analyses in a way that suggests Freud got it wrong - not, one assumes, because Teach is better or smarter than Freud, but because the patient was optimizing his story for deceiving Freud in particular, and succeeded. This is the grade of antimeme we’re going up against, and Teach comes from a tradition that believes that the stronger the antimeme, the more annoying your published work has to be. So, this book. **IV.** I don’t claim to have cracked this puzzle or done anything more than scratch the surface here, but if you put a gun to my head and demand I do the Zen master thing and explain as much as I can openly, here’s what I’ve got. Keep in mind there is basically a 100% chance this is the thing where you encounter an antimeme and immediately misunderstand it and turn it into something less interesting: Psychologically healthy people have desires. Sometimes they fantasize about these desires, and sometimes they act upon them. You’ve probably never met anyone like this. Psychologically unhealthy people, eg you and everyone you know, don’t have desires, at least not in the normal sense. Wanting things is scary and might obligate you to act toward getting the thing lest you look like a coward. But your action might fail, and then you would be the sort of low-status loser who tries something and fails at it.
So instead, you spend all your time playing incredibly annoying mind-games with yourself whose goal is to briefly trick yourself into believing you are high status. Everyone else, so far as you even recognize their existence at all, is useful only as a pawn in this game. For example, you can trick a psychoanalyst into giving you a dream interpretation denying your repressed baggage, and then feel good about yourself because you don’t have any repressed baggage (or at least you’ve convinced a representative of Abstract Society of that, which is the same thing). Or, you can trick a hot girl/guy into sleeping with you, thus proving you’re the kind of high-status person who gets (deserves?) hot girls/guys. The most popular move in this game is envy. Envy is different from jealousy: jealousy is when you wish you too had something nice, envy is when you wish the other person would lose their nice thing. If your friend marries a beautiful woman, you don’t think “I wish I too were married to a beautiful woman”, because that would be a normal healthy desire, and you don’t have those. You think “I wish my friend’s wife left him, then we would be even again and my status relative to his would go up.” If you think you feel jealousy (you want a beautiful wife too) probably this is just a defense against the real feeling (envy). Another move in this game is “ledger”. You balance every good thing you’ve done for someone else, and if it’s more than they’ve done for you, you hate and resent them as a good-thing-moocher. If it’s less than they’ve done for you, you hate and resent them anyway for their dastardly plot of putting you in a situation where you owe them one. This is not paranoid at all, because you yourself are constantly plotting ways to do good things for people in order to put them in a situation where they owe you one. It’s not like you’re ever going to call in the favor - that would be an action, and require a desire - you’re just going to secretly know that you won this mind game against them and there’s nothing they can do about it. You hate and fear action, because it seems like the kind of thing that could go wrong and lower your status. But you would *prefer* (“desire” seems like a strong word for something this unnatural) to have certain things happen, for example for your friend’s wife to leave him, or for your ledger to be fairer. You solve this contradiction by fantasizing about some “omnipotent entity” somehow *forcing* you to sow dissent in your friend’s marriage. Only then can you act without the stigma of actually acting. Since everybody wants everybody else to be worse off, refuses to act openly on this, but dreams of having someone make them act, there’s widespread support for any limitation on human freedom, simply *because* it’s a limitation on human freedom. We *are* ruled by a bunch of psychopathic vampire elites, but it’s hard to be really angry at them. Society found some psychopathic elites sitting in vampire castles and basically *begged* them, “PLEASE take our freedom and make us worse off!” The psychopaths answered “I dunno, seems like a lot of work and we’re already pretty rich”, and Society was like “No PLEASE we are *begging* you!” and the psychopaths shrugged and said okay, you can have a *little* oppression, as a treat. Tyrannical government is an imperfect solution here; our government occasionally resembles democracy, which makes us complicit in its actions. What people really crave is domination by corporate HR departments. 
The moral arc of the universe tends towards more and more power getting ceded to corporate HR departments and things like them. (Technology is also an acceptable master in some cases. Teach claims that the reason dating sites are catching on isn’t because “it’s so hard to find matches in meatspace.” It’s because if you met a match in the real world, you would have to approach them and ask them out - an action, therefore scary and impossible - whereas on dating sites it’s the algorithm that matches you, and *you* just play your assigned role of sending the message.) The book uses porn as a metaphor for this process. It attacks the popular claim that porn decreases interest in real sex; Teach thinks porn is *the defense against noticing* you don’t have an interest in real sex. You don’t actually want things, you can’t actually fantasize (because fantasy is a step between desire and action, neither of which you’re capable of), so you download mass-produced fantasies from our corporate overlords in order to, essentially, fantasize about fantasizing. “Human beings,” he says, “have abdicated moral, social, and political power to the technologies, much as you’ve done with your sexuality.” **V.** Let’s pretend that what I wrote above has at least some passing resemblance to the real antimeme that Teach wanted to convey. Do we have any reason to believe it? I read *Sadly, Porn* around the same time I was writing [Motivated Reasoning As Mis-Applied Reinforcement Learning](https://astralcodexten.substack.com/p/motivated-reasoning-as-mis-applied), and the particular way I probably mangled the antimeme owes a lot to that thought process. It kind of fits, doesn’t it? Instead of acting, people play head games with themselves trying to figure out the best way to convince themselves they’re high status - ie replacing behavioral reward with purely epistemic/perceptual/mental reward. And what about self-handicapping? [Here’s a study](https://pubmed.ncbi.nlm.nih.gov/650387/) that’s stood the test of time, by which I mean AFAIK nobody’s ever tried to replicate it: psychologists asked some people to do a test. One group got an easy question, the other an impossible question (they had to guess anyway). Then the psychologists told both groups that they’d gotten the question right (the easy group was presumably unsurprised, the impossible group presumably thought they’d gotten really lucky). Then they asked both groups to try again, but offered them the chance to try a performance *inhibiting* drug they were testing. The easy group accepted at some rate; the impossible group at a much higher rate. The psychologists theorized that the impossible group wanted to preserve their “good opinion” of themselves as people who correctly solved problems (even though on some level they realized they didn’t know how to do the problem and had just guessed) - they figured that if they took the drug, they could attribute their inevitably-worse performance the second time to the drug, rather than their own inadequacy. There are [lots of experiments like this](https://www.lesswrong.com/posts/P3uavjFmZD5RopJKk/simultaneously-right-and-wrong). Also, here’s a kind of patient every doctor has seen: the hypochondriac who goes to the doctor to be reassured she isn’t ill. That’s it.
She’ll describe her mouth feeling weird or something, you’ll say something like “By the way, just so we’re on the same page here, you’ve come in here with mouth-weirdness twenty-six times already this year, it’s always been nothing, it’s never gone anywhere, and now you have another case of mouth-weirdness exactly like the others, and you want me to tell you if it’s serious?” And she’ll say “Just say the words, Doctor”. And you’ll say “Don’t worry about it, it’s probably nothing.” And then she’ll be happy and go home and live a normal life for two weeks or so until she gets anxious about the same thing and comes in again. Again, this seems to suggest a really weird relationship with knowledge and reassurance. Also, compliments. We all know the “fishing for compliments” phenomenon. And we all know the “I fished for compliments and someone complimented me but it doesn’t count because I know I was just fishing for it” phenomenon. And its close cousin, “someone complimented me, but it was for the thing I already know I’m good at, so it doesn’t count”. And their weird uncle, “someone complimented me out of the blue, and it was a really good compliment, and it was *terrible*, because maybe I secretly fished for it in some way I can’t entirely figure out, and also now I feel like I owe them one, and I never asked for this, and I’m *so* angry!” This seems a lot like “using other people as pawns in a mind game to feel high status”, and at least a little like the ledger where you resent someone forever if they do something nice for you. (half of you are saying “Nobody really thinks like that, right?” and the other half are freaking out: “How did he know what I think?”) Also: one strategy I notice the sort of high-charisma manipulator people who read Lacan doing: they’re misanthropic, yes, but mostly in some vague sense, to people offscreen, such that they have a *reputation* for misanthropy and harsh judgment. Then when they talk to *you* they’re very nice and complimentary, and you think “Oh man, this person who hates and judges everybody likes me, maybe I’m special.” And this is strong positive reinforcement, and talking to the person and getting those hits of praise becomes mildly addictive, and you want to talk to them more often and continue earning that praise, and then later you describe them to a friend as “charismatic”. Ultimate source is Ayn Rand, *Return Of The Primitive*. If you believe this, how close have you gotten to Teach’s theory of envy? Since this is theoretically a porn book, we should get back to things at least vaguely related to sex and romance: why is it so hard to ask someone else out? I spent about ten years miserable and romantically frustrated and wishing that I had a partner every single day. The total number of people I asked out during that time was one or two, I can’t remember. Even then, it was some kind of incredibly ambiguous form of asking out with five layers of plausible deniability. This was stupid and I know it was stupid. Still, when Teach comes with some psychological theory that purports to explain why I am “incapable of action”, I can’t plead completely innocent. As far as I can tell, I enjoy relationships for their own sake - contra Teach, who says you only really enjoy sex because it gives you status, or because you’re depriving someone else of the use of your sexual partner, or because it’s otherwise a winning move in your mind game (cf. Oscar Wilde: “Everything in the world is about sex, except sex. 
Sex is about power.”) But - don’t laugh - a lot of the time when I listen to music, I find myself fantasizing about being the person who wrote the music, or playing the music in front of a big audience while everyone applauds me, or something like that. It seems that my enjoyment of music - maybe not quite as primal as sex, but still pretty primal - actually *is* at least assisted by status fantasies. Maybe for some reason I can admit this about music but I’m still defending against realizing it about sex. Or maybe I’m 100% completely honest when I say I don’t have a status motive for enjoying sex - which explains why I’m kind of on the ace spectrum and don’t really enjoy the sex act itself. I once asked a friend who identifies as sexually submissive how she came by her fetish. She said that she was raised to believe that sex was kind of shameful and that women who sought it out were sluts (I should mention here that Teach believes to a first approximation nobody represses anything about sex in modern-day culture - Who thinks sex is shameful these days? It would be like repressing that you like cheese! - but my friend was raised by first-generation immigrants from a more conservative area and maybe she’s a legitimate exception). Anyway, she says she used to fantasize that people would enslave her and force her to have sex with them, because then she got to have sex without the stigma of being the kind of slut who asked for it. *Even in her fantasy* she had to maintain high status - not the social high status of being a non-slave, but the moral high status of not admitting she had the taboo desire. This is basically Teach’s “people beg to be enslaved so they don’t have to admit their desires” thing to a T. Why do some people have sexy nurse fetishes? “Because nurses are people who comfort you when -” No, I mean why *that particular* nurse costume, which no nurse has worn since World War II? And I assume Japanese men have Japanese schoolgirl fetishes because they remember the puppy love of their own high school days, but why do so many *American* men have Japanese schoolgirl fetishes? I distinctly remember teenage me thinking breasts were weird-looking and not sexually attractive at all - I don’t want to touch people’s weird milk-producing glands - and then getting gradually “socialized” into finding breasts attractive just like most other straight men. Teach says that nobody actually finds nurses or Japanese schoolgirls or breasts *or even women* attractive in the deepest and most fundamental sense, they learn what other people find attractive, then want those things so they can gain status points and deprive other people of them. (although this seems unnecessarily complex compared to an answer of the form: “evolution didn’t bother including a full specification for attractiveness, it just included a program for social learning to figure it out from other people”) And - why do people like porn? I’m not asking for answers of the form “it has hot sex”, I mean why is porn better than imagining the hot sex, in your head? “My imagination isn’t as high-definition as a real computer screen.” But lots of people like story porn, like on [Literotica](http://literotica.com/). “But that’s more creative than they can come up with themselves”. My impression is that people can use the same story over and over - the words on the page seem to have power even when realistically they’ve memorized all the sexual beats by now. 
Teach writes: “Porn doesn’t depict fetishes - porn is your fetish.” This seems totally insane and also I can’t rule it out. While we’re asking crazy questions eight thousand words into an almost-unreadable essay, why do people like *art*? I don’t mean [actually nice art with pretty pictures of trees and lakes](https://www.artsy.net/article/artsy-editorial-komar-melamid-americans-painting-thought-wanted), I mean Classic Literature, by which I mean 800-page novels about English professors who have affairs and then feel guilty about it. Surely *something* must be happening inside people’s heads to make them read novels about cheating English professors so avidly. Maybe it speaks to some kind of secret unconscious desire (not to have an affair with an English professor, that’s the manifest content so it can’t be the latent content). Maybe I personally just don’t want to do whatever having an affair with an English professor is a defense against, which is why those novels never appealed to me. I’m scraping the bottom of the barrel here, but I’m trying to take seriously the advice of my suspected-cult-leader friend: if your map has a hole in it, don’t say that the people who like those novels are dumb, or they’re only pretending to like them, or they’re only signaling that they like them, or the whole topic is stupid - take the hole seriously and get intrigued when you hear a theory that fills it! On the other hand, this sounds like a good way to end up believing lots of wrong things just because they’re the first theory you heard. Also, suspected cult leaders are probably bad people to get advice on epistemics from. There are aspects of my experience that sort of fit with what Teach is selling. How do I judge this? Maybe if I really understood the antimeme instead of muddled-understanding it, my experience would match it perfectly. Or maybe we should expect all fake psychoanalytic theories to vaguely remind you of true things, for the same reason that all Nostradamus prophecies vaguely sound like true things and [all cold readings vaguely sound like true things](https://en.wikipedia.org/wiki/Barnum_effect). Or maybe Teach planted one or two real insights as honeytraps in the middle of his web of pseudo-profundities. My current plan is to try to be more sensitive to the way my brain plays status-related mind games with itself, and to the tension between that and actual real action in the world, which I expect to be fruitful. Everything else I think I’m just going to wait and see. **VI.** That’s the book’s psychology. What about its sociology and politics? The main message I get here is “Teach really likes talking about classical Athens”: > Whatever your personal religious and political beliefs, it is a fact that our Western morality is a straight line from Judeo-Christian traditions, and our political beliefs a straight line from Greco-Roman traditions, and regardless of how much you believe times have changed or how bad you are at math you should still be able to observe that those are two separate lines. Your personal conscience, however improvised, followed a different line than your political ideology, however plagiarized. You may think that they are 100% congruent or at least parallel but ask anyone else, they are not. The best you can do is change the angle between them and affect the rate of their con/divergence, under your guiding principle of maximally depriving the other. > > This was not the case for the Greeks, not at the beginning, anyway. 
>
> Personal morality was inseparable from the state’s morality, they were not overlapping, they were the same single thing, but in the opposite way you’re imagining it, not because the State was all powerful but because the state was themselves. Personal morality vs. social standards, public behavior vs. private thoughts - for at least 50 years it would have been inconceivable to an Athenian that those were different things. I don’t mean they thought whatever the state wanted them to think, that’s as meaningless as saying people think what their brains want them to think. And I do not mean there weren’t bad people; I mean there was no recourse to the psychological position of “I’m not a bad person, I just did a bad thing”.
>
> When we say the Athenian democracy required full participation, it should be taken literally. The citizens didn’t just make up their own laws or fight their own wars, they thought the same thought: the state was the highest - not power, not might - but good. The highest good. Think about this. Think about whether you can think about this. Think about whether you have no other way to think about this except to think “O’Brien” - assuming you could even think “O’Brien” and not default to “Hitler”. Yet early Athens was not a surveillance state, it did not need to know - though admittedly every government will patronizingly embrace its sycophants - it left the accumulation of knowledge and power to the citizens so they could act, as it. This is why that period of history is so unique and so unrepeatable. For the first time and the only time and never since time, knowledge was used for action; the purpose of knowledge was to act; the purpose of earthly knowledge was to be able to act like gods without restraint. Not only for a handful of “great men”, they all thought this, it was the cultural standard.
>
> And then the war came, and the plague came, and the plague came again, and the sophists came, and the idea of man’s greatness through obligation became more fantastical than 12 hairless gods on a cold mountaintop wrapped in bedsheets, or on them. What good are gods in heaven if they won’t send my neighbor to hell? For all but a few, math became arithmetic and philosophy became accounting, and getting some power was far less satisfying than depriving the other of theirs. And here we are.

His relationship with Athens is kind of love-hate. On the one hand, their direct democracy was a rare case in which people managed to resist the urge to enslave themselves. On the other, they misused the direct democracy pretty badly, and their resistance waned further and further until finally:

> They worshiped [conquering Spartan general Lysander] as a god…not because he spared them but because he was powerful, took away their power and also flattered them, let them believe they had fooled him into thinking they were worth sparing - all of those words are correct, that’s what they wanted from their omnipotent god. He let the people who wanted no part of responsibility for their state take credit for its past while having little they could do but obey. He took their hubris and massaged it into pride, he let them take pride in their hubris and - and they started masturbating ferociously. “Take from us, O Lysander, our beautiful Athens and rape her, rape her before us, slay her with your phallus, remind us of our desire, and failure to satisfy her.”

(did I mention the recurring cuckold porn theme yet?)

As for you, you’re probably even more contemptible than these Athenians.
Teach thinks [the modern psyche](https://slatestarcodex.com/2017/05/25/those-modern-pathologies/) is downstream of decisions by advertising agencies. At some point their usual trick of selling products through implied peer pressure and hot women stopped paying as many dividends. The companies did some kind of judo move where they told us “well, darn, you’re just too individual and unique a person to fall for a mass advertising campaign - and incidentally the surest way to make everyone understand that is to drink Coca-Cola, The Drink For Individual Unique People”. And everyone lapped it up. This isn’t even subtle: the highest-market-value company in the world uses the motto “Think Different”. Or Burger King: “Have It Your Way”. Literal actual Coke printed the 150 most popular names onto their bottles in the hopes you would see your name and think you had a special relationship with them.

But it’s more than this. It’s an obsession with what kind of person you are. Brand loyalty becomes a way to signal that you’re the kind of kid who buys their clothes at Hot Topic / Abercrombie & Fitch, not at Abercrombie & Fitch / Hot Topic. It’s not that one of these stores is more prestigious (= signals class better) than the other. It’s that they signal what makes you, you. If you shopped just the right combination of brands, you would really capture your uniqueness, and everyone would like you for being you, ie not for boring regressive contingent things like your job or your family (ie your accomplishments and social roles). Result: nobody respects anyone for their accomplishments, nobody wants to fulfill their social roles or do their duties, and everyone wants to be unique and individual = not buy store-brand.

(I can’t remember if it was Teach or an imitator who applied this analysis to Harry Potter. Harry isn’t the smartest or hardest-working person in the school - that’s Hermione. He’s not the most ambitious/decisive/strategic/active person - that’s Lord Voldemort, which automatically codes him as a villain. So why is Harry the main character and the hero? Because a prophecy placed the burden of specialness on him, without him asking; it was forced upon him by an omnipotent entity, no action required. Harry Potter is wish-fulfillment; the modern person wants to be special not because they accomplished great stuff but because special-ness is just who they are. Brands tell them that this is true, and in exchange they buy the brands. [Brand]: Because You Deserve It.)

Despite blaming ads and companies, *Sadly, Porn* doesn’t hit any of the beats you’d expect in an anti-corporate book. I think Teach worries his readers would use an anti-corporate message as a defense: *“Yeah, I never accomplish anything, but that’s the fault of those evil corporations who caused me to have the wrong psychic structure. This famous psychiatrist says so! Wanna go to a protest with me instead of trying personal growth? All the experts agree that we’re excused from changing our defective characters in any way until capitalism is overthrown!”*

This is where the anti-woke message comes in; he thinks they’re doing approximately this. For such an esoteric book, some of these sections feel pretty basic - “SJWs are just virtue-signaling” would be a fair description of about five pages (incidentally the only five pages I feel like I really understood).
I think “virtue signaling” may be a weird case where rationalist/economic thinking briefly touched up against psychoanalytic thinking, such that Teach thinks he’s doing something esoteric here but I’d already gotten the same insight from another direction. The only necessary clarification is that signalers aren’t necessarily signaling to other people; self-signaling (or signaling to the imaginary “audience”) is enough.

(people criticized the rationalists for a long time for using “status” as a generic term without specifying “status among whom” or “status about what”, but I get the impression that this is the *exact* right way to use status if you want to understand Edward Teach’s school of psychoanalysis)

Socialists come in for the same kind of criticism as wokes (Teach hints that Marx actually had some good ideas, but they were mostly antimemes, so modern socialists have no idea what they were - he has nothing but contempt for the latter). His system - psychoanalytic factors → envy → everyone hates everyone else → everyone demands to be ruled - has a natural foil in the sort of socialists who talk about “income inequality” a lot. In a very charitable reading, perhaps socialists are sad that Elon Musk has $300 billion because they’re imagining how many bowls of soup that could provide for the hungry. Or because they think he’s guilty of exploitation, and are sad this has paid off. Needless to say, this is not how Teach thinks of it; he suspects socialists (and lots of other people besides) would gladly see Elon Musk reduced to penury if it never helped a single soul, or even if it actively made the poor poorer.

If Musk is allowed to be happy and high-status because of his accomplishments, it suggests accomplishments are good, which undermines the system where *I’m* the best and highest-status person because I’m special, buy all the right brands, mouth all the right slogans, and win various mind games against myself. Therefore, Musk must suffer. If we can guillotine him, we should do that - otherwise, we’ll settle for hating him really hard - making sure everyone in our coalition agrees he’s low status and *deserves* guillotining.

Claim: one reason the Athenians lost the Peloponnesian War is that they voted to ostracize any general who won too often. But the Athenians were still better than *you*. Athens hated successful people, and they took it out on them in particular instances, but at least they managed to do this against a general backdrop of democracy. *Our* society hates everyone so much that it creates various oppressive institutions and norms just to piss them off.

**VII.**

Why did Teach write this book? He shows contempt for people who go to psychoanalysis, saying that they’re using it as a defense against change (instead of doing the hard thing directly, you tell yourself there’s some “unconscious block” that prevents you from doing the hard thing, and you need ten years of therapy and deep self-knowledge before you can even get started). Actually, he shows contempt for people who seek self-knowledge, full stop. Self-knowledge is of the same genus as the Harry Potter uniqueness fetish: if only I had the right brands / the right dream interpretations / the right personality test results, I could understand my deepest self and then succeed effortlessly.

(there’s also some deeper point here about power being the opposite of knowledge which I don’t understand; you can be “omniscient” or “omnipotent” but not both.
I think this might have something to do with how all actions are part of your mind game to trick yourself into thinking you’re high status, the more easily-tricked you are the more actions you can take, and so knowing more limits your space of possible actions. But I’m even more confused by this than the rest of the book, so low confidence here.)

But his greatest contempt is reserved for you, the reader of his book. Remember that quote at the beginning?

> “Why so many footnotes???” Which is the same question as, “why are your sentences so long, why so many commas, what the hell is with you and semicolons?” It’s all on purpose, to get rid of readers. You’re stumped by the physical layout? This book is not for you, your brain is already set in concrete, it can never change, only crumble as it ages. Which is fine if your plan was to be a foundation for the next generation, but it isn’t; you’re the rotting walls that they have to knock down while you play the flute and pretend to give freedom to everyone else.

Eventually I had to just mentally substitute “you” with “a hypothetical maximally unvirtuous person.” Which I’m sure he’d call a defense mechanism.

So if you hate psychoanalysis, you hate searching for self-knowledge, and you hate readers - why write a psychoanalysis book to help people understand themselves? I don’t really have an answer for this. But it’s not a contradiction to think “Most psychoanalysis makes most people worse off” and “Some psychoanalysis can occasionally make some people better off”. Maybe if you’ve got a sufficiently important antimeme, you’ve got to say it, even when you’re 99% sure your listener will judo it into yet another defense mechanism. Maybe the 1% of people who had a guard carelessly leave a gate open in their defense mechanisms that day will listen and be genuinely better off.

The author uses the pseudonym “Edward Teach”, which was the real name of the pirate Blackbeard. But also, “ed” means education (eg “sex ed”), so “Edward” means “in the direction of education”, so “Edward Teach” is maybe the most didactic name possible. Would the sort of person who expected Shel Silverstein to have thought through possible anagrams of the title of *The Giving Tree* really not have considered this? Teach talks a big game about being against knowledge, but I think on some level he believes that moral instruction can produce positive change.

Or maybe it’s something weirder than that:

> A dreamer in analysis assumes the analyst knows what the dream will mean. Of course, the analyst might not know. But by allowing - encouraging - the belief that he, the analyst, is the person who absolutely would know *but doesn’t tell it*, the dreamer can act on it. The dreamer might never know what it meant, but something changes. You may find yourself tonight having a dream and thinking, I wonder what the author of this odd book on pornography would think of my dream? *He would know what it means*. And by knowing that I know what it means, you could begin to suspect some of what it means because its meaning is knowable - and you will act. And the reason you think I would know what it meant is that you dreamt it with me in mind. But if I *told* you what it meant, even on the outside chance I was dead on, you would hear it whatever perverted way you needed to but attribute that meaning to me, you would use my authority to defend against the true interpretation. You would be much more satisfied, consider me a genius, and everyone else would be miserable.
> The analysis failed, but the therapy was a big success. That’ll be $500, please.

Anyway, that’s what *Sadly, Porn* is about. That’ll be $500, please.

*[other reviews, which I mostly avoided reading until done with mine, to prevent information cascades: [Resident Contrarian](https://www.residentcontrarian.com/p/an-article-about-a-book-about-pornography?r=75epr), [Zero HP Lovecraft](https://zerohplovecraft.substack.com/p/book-review-sadly-porn-by-the-last?r=75epr)]*
# Mantic Monday: Ukraine Cube Manifold

### Ukraine

Thanks to [Clay Graubard](https://twitter.com/ClayGraubard/status/1491759547291156481) for doing my work for me:

These run from about 48% to 60%, but I think the differences are justified by the slightly different wordings of the question and definitions of “invasion”. You see a big jump last Friday when the US government increased the urgency of their own warnings. I ignored this on Friday because I couldn’t figure out what their evidence was, but it looks like the smart money updated a lot on it.

A few smaller markets that Clay didn’t include: [Manifold is only at 36%](https://manifold.markets/Duncan/will-russia-invade-ukraine-before-t) despite several dozen traders. I think they’re just wrong - but I’m not going to use any more of my limited supply of play money to correct it, thus fully explaining the wrongness. [Futuur is at 47%](https://futuur.com/q/149987/will-russia-invade-ukrainian-territory-by-the-end-of-june), but also thinks there’s [an 18% chance](https://futuur.com/q/146595/will-russia-annex-any-part-of-the-lithuanian-territory-by-the-end-of-2022) Russia invades *Lithuania*, so I’m going to count this as not really mature. Insight Prediction, a very new site I’ve never seen before, claims to have [$93,000 invested and a probability of 22%](https://insightprediction.codebnb.me/markets/129), which is utterly bizarre; I’m too suspicious and confused to invest, and maybe everyone else is too.

(PredictIt, Polymarket, and Kalshi all avoid this question. I think PredictIt has a regulatory agreement that limits them to politics. Polymarket and Kalshi might just not be interested, or they might be too PR-sensitive to want to look like they’re speculating on wars where thousands of people could die.)

What happens afterwards? Clay beats me again:

For context:

So it looks like forecasters expect that, conditional upon Russia invading at all, there’s an 80% chance they’ll take Mariupol in the east, a 66% chance they’ll take Kharkiv (also eastern, but only a third ethnic Russian and currently aligned with the central government), and only about a 30% chance they take Kyiv or Odessa. See also [this thread full of speculation in the subreddit](https://www.reddit.com/r/slatestarcodex/comments/sru30j/what_will_happen_if_russia_invades_ukraine/).

As for me, I’m going all in on “yes” after seeing this tweet:

### Alexander Cube

Last week I speculated that to truly realize the potential of prediction markets, we’d need one that was real money, easy to use, and easy to create markets on. [Gustavo Lacerda](https://twitter.com/gusl/status/1491135346431901697) and [Nuno Sempere](https://twitter.com/NunoSempere/status/1491160480706031616) very kindly drew this picture and named it after me:

Nobody has reached the promised land at the furthest point. But all three connected vertices are occupied. [Augur](https://augur.net/) is real money and lets people create their own markets (but it’s impossible to use - it’s made of complicated crypto contracts that nobody’s made a workable front end for yet). [Polymarket](https://polymarket.com/) is real money and easy to use (but doesn’t let people create their own markets; apparently they’re nervous about resolution disputes). [Manifold](https://manifold.markets/home) is easy to use and lets people create their own markets, but it’s not real money (they’re American and centralized, so they have to follow anti-gambling regulations).

### Manifold Markets

Speaking of which, they’re open!
As the cube suggests, Manifold is a site where anyone can create their own (play money) prediction market. They set the question and they decide when and how it resolves (with everyone else just out of luck if they decide to fake it or rug-pull). It’s a bold strategy, but boy oh boy are people liking it so far:

Not actually in order

This is a semi-randomly selected sample of Manifold markets, but let’s go through them one by one.

The Ukraine market is the biggest on Manifold. It’s also deeply out of step with every other prediction market and the top non-prediction-market authorities - who are all giving numbers in the 50s and 60s. I don’t understand how this is so low - yes, play money < real money, but mostly because play money doesn’t get enough people betting. Here lots of people are betting - it’s the biggest market on the site, and since you only start with $1000, either twenty people have bet everything or more people have bet a fraction - but it’s still wrong. I tried to spend some play money to correct it and it snapped back to just as wrong as it was before. I have no explanation.

Midnight The Stray Cat is the second biggest market on Manifold, just after Ukraine. I guess the Internet really liking cats shouldn’t be a surprise at this point. In case you need to do research first, I’m told this is the cat in question:

Props to Manifold for a bunch of markets like the third one on there, where they eat their own dog food by using their market to predict how their business decisions are going to go.

ACX Bot has copy-pasted all of my [predictions from 2022](https://astralcodexten.substack.com/p/predictions-for-2022-contest). At some point they should be able to compare their results with Zvi (ie a single very smart person), with the contest many of you entered (ie an average of formless crowdsourced predictions), and with Metaculus (ie a non-monetary forecasting tournament). I’m looking forward to it!

Most of you already know Lars Doucet, who’s written some great [ACX posts on Georgism](https://astralcodexten.substack.com/p/your-book-review-progress-and-poverty). I don’t know what possessed him to make a Joe Rogan Georgism interviewee market, unless he’s gunning for the position.

Valinor is a group house on my street, with ~a dozen people living in and around it. We’ve been talking about fixing the backyard for a while. Now we can bet about whether it will happen. Having a number for this actually affects some of my decisions a little.

Connor is hijacking the prediction market to make a poll, which is pretty cute.

Dwayne Johnson does not have a 15% chance of winning the election. Manifold is suffering from the usual play money problem, where if you only start out with $1000 in play money, nobody wants to lock it up for three years to make a 15% profit.

Vivek’s market, “Will I believe that 13177 is a prime number”, is pretty unusual. I’m interpreting it as a test/demonstration of prediction markets’ information-gathering ability. If you don’t know something and it’s hard to Google, you can make a prediction market about whether you’ll believe it in the future, and people who are able to figure out the answer will bet on it. Based on the 97% YES rate, I’m guessing 13177 is in fact a prime number (a quick check below confirms it). What else can you do this with?

TANSTAAFL’s “[Will I Be Convinced That Justin Trudeau Is Not Fidel Castro’s Son?](https://manifold.markets/KarlC/will-i-be-convinced-that-justin-tru)” market is maybe pushing the limit of this methodology.
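Primality, at least, is one market question you can settle yourself rather than trusting the crowd - a few lines of trial division will do it. A minimal sketch in Python:

```python
def is_prime(n: int) -> bool:
    """Trial division: n is prime iff no integer in [2, sqrt(n)] divides it."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    divisor = 3
    while divisor * divisor <= n:
        if n % divisor == 0:
            return False
        divisor += 2
    return True

print(is_prime(13177))  # True
```

It prints True - nothing up to √13177 ≈ 114.8 divides it - so the 97% YES crowd has this one right.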
Anyway, there are lots of me-too prediction markets but this is something genuinely new under the sun. Maybe it will be awesome itself, but I’m also hoping it helps bigger players realize how much more is possible.

### This Week In Metaculus

A few new questions on intelligence enhancement, eg:

[The question](https://www.metaculus.com/questions/8515/by-2050-genetic-engineering-to-raise-iq/) explicitly allows embryo selection, but says it must raise IQ ten points and be available for less than 25% of median income to count. Trivial improvements to existing embryo selection will top out around 9 points, so this seems to be predicting something more interesting, maybe iterated embryo selection at the very least. I’m probably slightly bearish on this one; I believe if it existed someone would find a way to get it, but I think the regulatory climate might be able to prevent the relevant research indefinitely. Improving adult IQ is really hard.

[This](https://www.metaculus.com/questions/7801/co2-in-atmosphere-in-2100/) is a bold thing to speculate about! Atmospheric CO2 was 300ish for most of pre-industrial history, 400ish now, and rising. This question predicts 600 in 2100, which sounds like what happens if global warming gets a bit worse but eventually stabilizes. I’m less sure. I think if we make it to 2100, we’ll have so much technology that atmospheric CO2 can be whatever we want it to be. But maybe we’ll want it to stay where it is; once there’s been a lot of global warming and people have moved / shifted lifestyles, it could be equally disruptive to cool the planet back down.

Right now solar is at 5%, the official government prediction is 10% by 2030, but [this market](https://www.metaculus.com/questions/7932/percentage-of-us-solar-energy-in-2030/) says 17.6%. But look at that probability distribution! It’s a lot of people saying 10%ish, plus a very long tail of very big numbers. I think people are disagreeing about how exponential this change is going to be.

### Shorts

* Metaculus is [holding an essay contest](https://www.lesswrong.com/posts/j5shgF5LJC75GoXrt/metaculus-launches-contest-for-essays-with-quantitative) for people who want to use their AI-related prediction markets to argue the future of AI. $6500 available in prizes.
* Nuno Sempere is giving out $10,000 [in forecasting-related microgrants](https://forum.effectivealtruism.org/posts/oqFa8obfyEmvD79Jn/we-are-giving-usd10k-as-forecasting-micro-grants).
* Some more New Years’ predictions: [Pontifex Minimus](https://pontifex.substack.com/p/predictions-for-2022) on “Scotland, UK, and the world”; Slime Mold Time Mold on [2050](https://slimemoldtimemold.com/2022/01/01/predictions-for-2050/) (and [1950](https://slimemoldtimemold.com/2022/02/01/predictions-for-1950/)?), sorry if I’m missing somebody.
* A “[literal marketplace of ideas](https://ideamarket.io/)”, I still don’t have a good sense of what this is but I’m going to look into it more.
# Open Thread 211

This is the weekly visible open thread. Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed. This one is odd-numbered, so be careful. Otherwise, post about anything else you want. Also:

**1:** The team behind Polymarket want me to clarify that despite the tone of [my post about them](https://astralcodexten.substack.com/p/the-passage-of-polymarket) they do still exist, they’re open for real-money trading outside the US, and they might have some kind of compliant US product in the future. I apologize for inadvertently implying they were dead.

**2:** And the team behind [Manifold Markets](https://manifold.markets/home) (ACX grant recipient) want me to announce that they’re officially open! This has been kind of surreal for me, because I haven’t seen much about them in the usual prediction market news, but lots of friends from *outside* the forecasting space have gotten involved. A writing circle I know are betting with each other about who will finish their stories when. A housemate opened a market on whether she’ll get pregnant, and another housemate who helps with childcare is buying shares “as a hedge”. I’m feeling pretty good about my claim last week that easy market creation would open up hitherto unexplored territories.

TFW you learn the market says 85% chance your friend hooks up with your ex

**3:** Related: ACX Grants recipient Nuno Sempere somehow got grant money of his own and is giving out $10K in prediction-market-related microgrants. [Apply here](https://forum.effectivealtruism.org/posts/oqFa8obfyEmvD79Jn/we-are-giving-usd10k-as-forecasting-micro-grants) if interested.

**4:** A message from Sam and Eric, who are running the [prediction contest](https://docs.google.com/document/d/1HZ3UC9JIuhFdlVM_xYtj60a6ba7elWGiAnROMobkFXM/edit) (incidentally, this is your *last day to enter*):

> "We have some plans to compare (aggregates of) ACX reader predictions against various prediction markets. But there are probably much cooler things we can do which we haven't thought of yet! If you run a prediction market and have an idea for an interesting collaboration that involves sharing our data before it's publicly released, get in touch with us through the [contest feedback form](https://docs.google.com/forms/d/e/1FAIpQLScT-7x1fsVJ1D8Cm4dynlyMhOaZmLIupFju6VMiXIOnNQIcMg/viewform?usp=sf_link). If you don't run a prediction market but still have an idea for something interesting we can do with the contest data, also feel free to suggest it in the feedback form, but we probably won't share the contest data with you."
# Highlights From The Comments On Motivated Reasoning And Reinforcement Learning

**I. Comments From People Who Actually Know What They’re Talking About**

Gabriel [writes](https://astralcodexten.substack.com/p/motivated-reasoning-as-mis-applied/comment/4822086):

> The brain trains on magnitude and acts on sign.
>
> That is to say, there are two different kinds of "module" that are relevant to this problem as you described, but they're not RL and other; they're both other. The learning parts are not precisely speaking reinforcement learning, at least not by the algorithm you described. They're learning the whole map of value, like a topographic map. Then the acting parts find themselves on the map and figure out which way leads upward toward better outcomes.
>
> More precisely then: The brain learns to predict value and acts on the gradient of predicted value.
>
> The learning parts are trying to find both opportunities and threats, but not unimportant mundane static facts. This is why, for example, people are very good at remembering and obsessing over intensely negative events that happened to them -- which they would not be able to do in the RL model the post describes! We're also OK at remembering intensely positive events that happened to us. But ordinary observations of no particular value mostly make no lasting impression. You could test this by a series of 3 experiments, in each of which you have a screen flash several random emoji on screen, and each time a specific emoji is shown to the subject, you either (A) penalize the subject such as with a shock, or (B) reward the subject such as with sweet liquid when they're thirsty, or (C) give the subject a stimulus that has no significant magnitude, whether positive or negative, such as changing the pitch of a quiet ongoing buzz that they were not told was relevant. I'd expect subjects in both conditions A and B to reliably identify the key emoji, whereas I'd expect quite a few subjects in condition C to miss it.
>
> By learning associations with a degree of value, whether positive or negative, it's possible to then act on the gradient in pursuit of whatever available option has highest value. This works reliably and means we can not only avoid hungry lions and seek nice ripe bananas, but we also do compare two negatives or two positives and choose appropriately: like whether you jump off a dangerous cliff to avoid the hungry lion, or whether you want to eat the nice ripe banana yourself or share it with your lover to your mutual delight. The gradient can be used whether we're in a good situation or a bad one. You could test this by adapting the previous experiment: associate multiple emoji with stimuli of various values (big shock, medium shock, little shock, plain water, slightly sweet water, more sweet water, various pitch changes in a background buzz), show two screens with several random emoji, and the subject receives the effect of the first screen unless they tap the second. I'd expect subjects to learn to act reliably to get the better of the two options, regardless of sign, and to be most reliable when the magnitude difference is large.
>
> For an alternative way of explaining this situation, see Fox's comment, which I endorse.
>
> OK, now to finally get around to motivated reasoning. The thoughts that will be promoted to your attention for action are those that are predicted to lead to the best value. You can roughly separate that into two aspects as "salience = probability of being right \* value achieved if right".
>
> Motivated reasoning happens when the "value achieved if right" dominates the "probability of being right". And well, that's pretty much always, in abstract issues where we don't get clear feedback on probabilities. The solution for aspiring skeptics is to heap social rewards on being right and using methods that help us be more right. Or to stick to less abstract claims. You could test this again by making the emojis no longer a certainty of reward/penalty, but varying probabilities.
>
> Source: I trained monkeys to do neuroscience experiments.

That comment by [Fox](https://astralcodexten.substack.com/p/motivated-reasoning-as-mis-applied/comment/4819605) is:

> The underlying intuition here about reinforcement learning is incorrect.
>
> > *Plan → higher-than-expected hedonic state → do plan more*
>
> No, it's: higher (relative to other actions) hedonic future state \*conditioned on current state\*. The conditioning is crucial. Conditioned on there being a lion, RL says you should run away because it's better than not running away.
>
> It gets tricky with partial observability, because you don't know the state on which you have to condition. So instead, says RL theory (not so much practice, which is a shame), you can condition on your belief-state, \*but only if it's the Bayesian belief-state\*. If you're not approximately Bayesian, you get into the kind of trouble the post mentions.
>
> But being Bayesian is the RL-optimal thing to do. You get to the best belief state possible: if there's a lion, you want to believe there's a lion, litany-of-Tarski style. The visual cortex could, in principle, be incentivized to recognize lions through RL.
>
> I suspect people don't open IRS letters not because their RL is fundamentally broken, but because their reward signal is broken. They truly dislike IRS letters, and the pain it causes to open one is truly more than their expected value. People probably also underestimate the probability and cost of a bad IRS letter, but that's due to poor estimation, not poor deduction from that estimation.
>
> Perhaps it's easier to see in organizations, where you can tell the components (individuals) apart. It's sometimes hard to tell apart the bearer-of-bad-news from the instigator-of-bad-news. This disincentivizes bearers, who might be mistaken for instigators. With enough data, you can learn to tell them apart. Until you do, disincentivizing bearers to the extent that they really could be instigators is the optimal thing to do.

I agree that if we were perfect Bayesian reasoners, the knowledge that there was now a 5% chance of there being a lion would propagate throughout all brain regions and they could condition on this immediately. And yet a few days ago, I (on a diet) visited some friends who sometimes leave delicious brownies on their counter. I worried that if I saw the brownies, I would eat them, so I tried not to look at the counter. But part of me felt bad that I was passing up the opportunity to eat delicious brownies, so my split-second reaction as I walked through their kitchen was to compromise by looking towards the *edge* of their counter to check for brownies, but to deliberately exclude from my vision the part of the counter where the brownies were most likely to be. This makes me think that the parts of my brain doing active inference are not quite perfect Bayesians making perfect updates.
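Gabriel’s salience formula, incidentally, is concrete enough to compute with. Here’s a toy restatement in Python - the numbers are invented, mine rather than Gabriel’s - showing the “value dominates probability” failure mode he describes:

```python
def salience(p_right: float, value_if_right: float) -> float:
    """Gabriel's formula: salience = probability of being right * value achieved if right."""
    return p_right * value_if_right

# Two competing thoughts about some abstract political question (numbers invented):
comforting = salience(p_right=0.2, value_if_right=100.0)  # "my side is right": unlikely, huge payoff
accurate = salience(p_right=0.8, value_if_right=5.0)      # "boring correction": likely, small payoff
print(comforting, accurate)  # 20.0 4.0 -> the comforting thought wins the competition for attention

# Gabriel's fix for aspiring skeptics: heap social rewards on being right,
# which raises the value term attached to accuracy
print(salience(p_right=0.8, value_if_right=30.0))  # 24.0 -> now accuracy out-competes comfort
```

On these made-up numbers the comforting thought out-competes the accurate one until you artificially raise the value of being right - which is exactly his prescription.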
[Steve Byrnes](https://astralcodexten.substack.com/p/motivated-reasoning-as-mis-applied/comment/4817879):

> I pretty much agree with everything you said.
>
> One of 5 or so places in the brain that can get a dopamine burst when a bad thing happens (opposite of the usual) is closely tied to inferotemporal cortex (IT). I talked about it in "Example 2C" here - <https://www.lesswrong.com/posts/jrewt3rLFiKWrKuyZ/big-picture-of-phasic-dopamine#Example_2C__Visual_attention> Basically, as far as I can tell, IT is "making decisions" about what to attend to within the visual scene, and it's being rewarded NOT for "things are going well in life", but rather for "something scary or exciting is happening". So from IT's own narrow perspective, noticing the lion is very rewarding. (Amusingly, "noticing a lion" was the example in my blog post too!)
>
> Turning to look at the lion is a type of "orienting reaction", I think. I'm not entirely sure of the details, but I think orienting reactions involve a network of brain regions, one of which is IT. The superior colliculus (SC) is involved here too, and SC is ALSO not part of the "things are going well in life" RL system—in fact, SC is not even in the cortex at all, it's in the brainstem.
>
> So yeah, basically, looking at the lion mostly "isn't reinforceable", or to the extent that it is "reinforceable", it's being reinforced by a different reward signal, one in which "scary" is good, as far as I understand right now.
>
> Deciding to open an email, on the other hand, has basically nothing to do with IT or superior colliculus, but rather involves high-level decision-making (dorsolateral prefrontal cortex maybe?), and that brain region DOES get driven by the main "things are going well in life" reward signal.

But check out [the rest of](https://astralcodexten.substack.com/p/motivated-reasoning-as-mis-applied/comment/4823160) the comment subthread for some pushback against and clarification of this model.

**II. Arguments That The Long-Term Rewards Of Spotting The Lion Outweigh The Short-Term Drawbacks**

Here are three comments that I think say about the same thing from different angles. [Phil](https://astralcodexten.substack.com/p/motivated-reasoning-as-mis-applied/comment/4817922):

> Not sold on the "visual-cortex-is-not-a-reinforcement-learner" conclusion. If the objective is to maximize total reward (the reinforcement learning objective), then surely having your day ruined by spotting a tiger is better than ignoring the tiger and having your day much more ruined by being eaten by said tiger. (i.e.: visual cortex is "clever" and has incurred some small cost now in order to save you a big cost). Total reward is the same reason humans will do any activities with delayed payoffs.
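Phil’s argument is just the standard discounted-return objective, and a toy calculation makes it vivid. The costs, probability, and discount factor below are all invented - this is a sketch of the shape of the argument, not a model of anything:

```python
GAMMA = 0.99  # per-step discount factor

def discounted_return(rewards, gamma=GAMMA):
    """Standard RL objective: sum over t of gamma^t * r_t."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

# Invented numbers: spotting the tiger ruins your day (-10 now, then nothing);
# not spotting it means a 50% chance of being eaten (-10,000) ten steps later.
look = discounted_return([-10] + [0] * 10)
dont_look = discounted_return([0] * 10 + [0.5 * -10_000])

print(look, dont_look)  # -10.0 vs. roughly -4522: looking wins by a huge margin
```

Even with discounting, the occasional mauling dominates, so an objective that looks past the next timestep has no reason to train vision into flinching away from tigers.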
[KJZ](https://astralcodexten.substack.com/p/motivated-reasoning-as-mis-applied/comment/4823531):

> Rather than a purely "is reinforceable" vs "isn't reinforceable" distinction, I suspect the difference has more to do with the relevant timescales for reinforcement. In the foot injury case, we'd have a very fast gait correction reinforcement loop trying to minimize essentially instantaneous pain. In the lion country case it sounds like something slightly longer timescale -- we make a plan to go to lion country and then learn maybe a few hours later that the plan went poorly, so we shouldn't make such plans again. In the taxes case it's much longer term; it might take years before the IRS manages to garnish your wages, though you'll still eventually likely get real consequences. Politics, on the other hand - often the cost/reward is so diffuse and long-term that I suspect the only reason anyone is ever right about difficult policy issues is because the cognitive processes that correctly evaluate them happen to be useful for other reasons. The vision example I think is a mistake of timescale; a vision system which learned to not see something unpleasant would get a much worse net reward when you don't avoid the lion you could have seen and subsequently get mauled.
>
> I'm coming at this from the ML side so I'm out of my depth biologically, but perhaps we have different relevant biological RL processes with different timescales? Eg, pain for ultra-short timescale reinforcement, dopamine for short-to-medium timescale reinforcement, and some higher-level cognitive processes for medium-to-long timescale reinforcement.

[Mike](https://astralcodexten.substack.com/p/motivated-reasoning-as-mis-applied/comment/4818286):

> I think all of the supposed discrepancies with modeling the brain as a hedonic reinforcement learning model can be explained with standard ML and economics. If you do a lot of research on epistemic facts related to your political beliefs, the first-order consequence is often that you spend hours doing mildly unpleasant reading, then your friends yell at you and call you a Nazi. In the case of doing your taxes or the lion, that unpleasantness is modulated by the much larger unpleasantness of being sued by the IRS and/or eaten alive by a lion. So there's a normal tradeoff between costs (filing taxes is boring, seeing lions is scary) and benefits (not being sued or devoured).

I feel like we can thought-experiment our way out of this. Suppose I invest in Bitcoin, then check its price every day. There is a little up arrow or down arrow next to some number and percent. Some days it’s a green up arrow and I feel good and smart and rich. Other days it’s a red down arrow and I feel bad and dumb and poor. None of this ever gets confirmed by any kind of ground truth, because I am HODLing and will never sell my Bitcoins until I retire. So how come I don’t start hallucinating that the arrow is green and points up? Every time I’ve “taken the action” of seeing a green upward-pointing arrow, I’ve felt better; every time I’ve taken the opposite action, I’ve felt worse! You can no longer appeal to “the ultimate reinforcement is whether you got mauled by a lion or not”, because I’ve never sold my Bitcoin and gotten any form of reinforcement more final than checking the arrow (if you want, imagine that I get hit by a truck at age 64 and *never* sell the Bitcoin).

I don’t want to say “epistemics are protected from reinforcement learning” is the only way out of this. It could be that the visual cortex gets reinforced at the level of broad principles, and any change that caused you to flip the direction and color of the arrow would have to change really fundamental things that would make your vision worse in other ways. But it doesn’t seem like “ultimate reinforcement” is what’s preventing this from happening, since there is none.

Also, behavioral reinforcement learning is nowhere near this good. You might think that the short-term reward of eating brownies wouldn’t change behavior because the *real* reward we should be considering is the reward of being healthy and looking good. But this works very inconsistently, as opposed to the “see lions as lions” thing which works all the time.
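To make the green-arrow worry concrete, here’s a deliberately dumb simulation of what a perceptual system trained directly on hedonic reward would do - entirely a toy of mine, not anyone’s model of the brain:

```python
import random

random.seed(0)
p_report_up = 0.5    # learned probability that the "visual system" reports a green up-arrow
LEARNING_RATE = 0.05

for day in range(1000):
    truth_is_up = random.random() < 0.5           # actual price movement - never acted on (HODL)
    reports_up = random.random() < p_report_up    # what the perceiver "sees"
    reward = 1.0 if reports_up else -1.0          # feel good iff you saw green
    # Crude policy-gradient-style update: reinforce whichever report just got rewarded
    if reports_up:
        p_report_up += LEARNING_RATE * reward * (1 - p_report_up)
    else:
        p_report_up -= LEARNING_RATE * reward * p_report_up
    p_report_up = min(1.0, max(0.0, p_report_up))  # keep it a probability

print(round(p_report_up, 3))  # ~1.0: trained on feelings alone, perception learns to see green
```

The simulated perceiver converges on reporting green no matter what the price did - exactly the failure mode real vision doesn’t show, which is why something like “epistemics are protected” or “reinforced only at the level of broad principles” has to be doing the work.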
**III. Am I Ignoring The Many Practical Reasons For People To Have Motivated Reasoning?**

[Melvin](https://astralcodexten.substack.com/p/motivated-reasoning-as-mis-applied/comment/4818536):

> I think you're thinking about this too much from a model where everybody is a good-faith Mistake Theorist.
>
> In a mistake theory model, it's a mystery why people fail to update their beliefs in response to evidence that they're wrong. If the only penalty for being wrong is the short-term pain of realising that you'd been wrong, then what you've written makes sense.
>
> I think that most people tend to be conflict theorists at heart, though, using mistake theory as a paper-thin justification for their self-interest. When I say "Policy X is objectively better for everybody", what I mean is "Policy X is better for me and people I like, or bad for people I hate, and I'm trying to con you into supporting it".
>
> There's no mystery, in this model, why people are failing to update their "Policy X is objectively better" argument based on evidence that Policy X is objectively worse; they never really cared whether Policy X was objectively better in the first place, they just want Policy X.

I commented:

> I think there are three things: honest mistakes, honest conflicts, and bias - with this last being a state in which you "honestly believe" (at least consciously) whatever is most convenient for you.
>
> If a rich person says the best way to help the economy is cutting taxes on rich people, or a poor person says the best way to help the economy is to stimulate spending by giving more to the poor, it's possible that they're thinking "Haha, I'm going to pull one over on the dumb people who believe me".
>
> But it also seems like even well-intentioned rich/poor people tend to be more receptive to the arguments that support their side, and genuinely believe them.
>
> I don't think honest mistakes or honest conflicts need much of an explanation, but bias seems interesting and important and worth figuring out.

XPYM [asks](https://astralcodexten.substack.com/p/motivated-reasoning-as-mis-applied/comment/4828007):

> Is the conventional explanation unsatisfactory? That people are more convincing when they argue for their position honestly, and so it's beneficial for them to become biased in ways that favor their interests.

This answers the “why” question but not the “how” question. If you wonder why animals can see, the answer is “it’s useful for spotting food and predators and stuff”. If you wonder *how* animals can see, the answer is a giant ophthalmology textbook and lots of stuff about rods and cones.

One of the ideas that’s had the biggest effect on me recently is thinking about how small the genome is and how poorly it connects to the brain. It’s all well and good to say “high-status leaders are powerful, so people should evolve a tendency to suck up to them”. But in order to do that, you need some specific thing that happens in the genome - an adenine switched to a guanine, or something - to give people a desire to suck up to high-status leaders. Some change in the conformation of a protein has to change the wiring of the brain in some way such that people feel like sucking up to high-status leaders is a good idea. This isn’t impossible - evolution has managed weirder things - but it’s *so, so hard*. Humans have like 20,000 genes. Each one codes for a protein. Most of those proteins do really basic things like determine how flexible the membrane of a kidney cell should be.
You *can’t* just have the “how you behave towards high-status leaders” protein shift into the “suck up to them” conformation, that’s not how proteins work! You should penalize theories really heavily for every piece of information that has to travel from the genome to the brain.

It certainly should be true that people try to spin things in self-serving ways: this is [Trivers’ theory of self-deception and consciousness as public relations agent](https://www.lesswrong.com/posts/DSnamjnW7Ad8vEEKd/trivers-on-self-deception). But that requires communicating an entire new philosophy of information processing from genome to brain. *Unless* you could do it with reinforcement learning, which you’ve already got. My take on the motivated-reasoning-as-misapplied-reinforcement-learning theory is something like “we always knew people had to be doing self-deception somehow, I was previously puzzled by how this got implemented, but it turns out it’s a trivial corollary of this other much more fundamental program”.

**IV. Miscellaneous**

[qbolec](https://astralcodexten.substack.com/p/motivated-reasoning-as-mis-applied/comment/4818714):

> How did AlphaStar learn to overcome the fear of checking what's covered by the fog of war?

[Daniel Speyer](https://astralcodexten.substack.com/p/motivated-reasoning-as-mis-applied/comment/4819473):

> Having separate reinforcement and epistemic learners would be the elegant solution. There's also the ugly hack, which is to make "there might be a lion" even scarier than "there is a lion" so that checking is hedonically rewarded with "at least now I know". Successful horror movie directors can confirm evolution went for the ugly hack, as usual.

[NLeseul](https://astralcodexten.substack.com/p/motivated-reasoning-as-mis-applied/comment/4824365):

> One observation: You can't avoid a lion trying to eat you by refusing to look at it. But you might be able to avoid another lecture from your neighbor Ug Bob about how you haven't made the proper blood sacrifices to Azathoth in a while, if you refuse to make eye contact with him and keep walking.
>
> That is to say, huge parts of our brains developed in order to process social reality. And social reality, unlike physical reality, actually does change based on what information you have (and what information other people know you have, or what information you know other people know you have...). So controlling the timing of when you are seen by people around you to acquire certain information likely does have some degree of survival benefit. And the parts of our brains that learned how to do that are probably the ones that are involved in reading letters from the IRS today.

[tcheasdfjkl](https://astralcodexten.substack.com/p/motivated-reasoning-as-mis-applied/comment/4837036):

> To me the difference between the lion case and the taxes case is something like - how quickly are you going to get feedback on your decision/beliefs? In the lion case, you can't actually avoid learning in short order if there is a lion, because it will probably eat you. In the taxes case, you can avoid it for a pretty long time! Short-term bias is a pretty normal factor in how humans make decisions and it seems pretty applicable here too.
# ACX Grants ++: The Second Half

This is the closing part of [ACX Grants](https://astralcodexten.substack.com/p/acx-grants-results). Projects that I couldn’t fully fund myself were invited to submit a brief description so I could at least give them free advertising here. You can look them over and decide if any seem worth donating your money, time, or some other resource to.

I’ve removed obvious trolls, a few for-profit businesses without charitable value who tried to sneak in under the radar, and a few that violated my sensibilities for one or another reason. I have *not* removed projects just because they’re terrible, useless, or definitely won’t work. My listing here isn’t necessarily an endorsement; *caveat lector*. Still, some of them are good projects and deserve more attention than I was able to give them. Many applicants said they’d hang around the comments section here, so if you have any questions, ask!

(bolded titles are my summaries and some of them might not be accurate or endorsed by the applicant)

You can find the first 66 of these [here](https://astralcodexten.substack.com/p/acx-grants-the-first-half).

**#67: Investigate Weighted Belts As An Appetite Suppressant**

I’m a data scientist with experience in healthcare and human subject research. I’m interested in the efficacy of weighted belts as an appetite suppressant. Over the last several years there's been interesting research on the gravitostat, a body-weight homeostat independent of leptin that is controlled by the amount of weight loaded on the large bones. Two years ago, results from a “proof of concept” RCT were published showing that wearing weighted vests seems to reduce body weight and fat in humans. More research is needed, and more is being done (https://clinicaltrials.gov/ct2/show/NCT04809129), but none has focused on long-term compliance or long-term weight loss in humans wearing weighted clothes. I’m planning on sending subjects weighted clothing (various belts and vests), a randomized amount of weight, and instructions and guidance covering the theory etc. Compliance and body weight will be tracked and reported for two years along with surveys of what subjects' experiences have been. Resulting data and findings will be published. Improving compliance and intervention effectiveness through improved weighted clothing is something that academic researchers may be slow to focus on but could be of incredible value. Costs of the weighted clothing are estimated to be at least $5000. To provide funding or suggestions, contact me at justintgardiner@gmail.com.

**#68: Educational Software To Hit Developmental Windows In Babies**

One way to counteract the growing burden of knowledge[1] and increase innovation is to teach people more in less time. Early childhood education could be useful to this end. Indeed, very young children appear to have developmental windows that close as they age; for example, after a certain point children can't learn perfect pitch or discernment of certain foreign language phonemes. However, early childhood education does not always focus on explicitly teaching skills and knowledge. Some educators even push back against the idea of explicitly teaching the very young, claiming it is developmentally inappropriate. This view lacks evidence, and explicit teaching has proven successful with small children. I plan to create a simple computer interface with accompanying educational software specifically targeted at babies.
The interface will be simple enough that a baby can intuitively use it, and the software will make it easy to develop learning modules for babies. I'll first test a learning module helping babies acquire perfect pitch. I need money for hardware prototypes, developing the software, and compensating volunteers who test the product. For further details, email me at info@thoughtson.education. I’m interested in a variety of efforts to accelerate education and would love to hear from anyone working on or interested in that.

**#69: Guided Reading Program To Help Children Catch Up**

More than 1 million children are behind on their reading due to the pandemic. 11-year-olds have the reading age of 7-year-olds, and secondary schools are having to teach phonetics. Remote classes have led to disjointed learning and inconsistent teaching, and the gap between the poor and the moderately well-off is growing exponentially. Education systems do not have the capacity to deal with this difference as funds are stretched. A multilevel, individual, cross-discipline approach is needed to encourage students to catch up. ECST is exactly this – a remediation guided reading programme that allows students to work at their own pace and move ahead as fast as their learning rate and capacity will let them. They are encouraged through various rewards to work independently, keep going and progress. The reading materials have been chosen to enhance well-being. They are all classic books that have stood the test of time, and apart from cracking good stories they feature archetypes who share their innate human knowledge and thus empower the reader. The readings are guided by videos of talented narrators reading the stories out loud. This stripped-down approach creates an intimate atmosphere, an accountable connection, and aids true communication between author, narrator and student. ECST already has 40 titles and over 90 hours of stories and is looking for $25000 to facilitate the next stage of growth to create more titles and to market to schools, councils and education departments. [Contact Cecile@englishclassicstorytime.com]

**#70: Video Courses For Language Learning**

We are building a “Metaversity” to help democratise language learning. We film on location, with teachers and genuine students, to create a 360 video course that anyone with a VR headset can use to learn from. Students learn from a world-class teacher, feel present in an intimate classroom setting, pause/rewind, and interact with our AI speech recognition.

**#71: Oculus App For Language Learning**

We have a prototype Oculus Quest app called Dynamic Spanish, available here: https://www.oculus.com/experiences/quest/4231524270226603/

The co-founders are a husband-wife team from the UK. Dave is a chartered chemical engineer by trade, and is the camera operator, post-production lead, and general techie. Katie studied languages at university, has worked as a language teacher for many years, and is responsible for managing syllabus creation with language-specific experts. The next few years will be spent filming the classroom scenes for more languages (in London), before travelling to film native speakers in interactive scenarios around the world, and capturing the sights and sounds of the countries for students to virtually explore in narrated guided Trips. We are creating courses that can be completed over a few months, giving learners the confidence to speak their new language in real life, and connect more deeply with the cultures that speak it.
We hope this project is appealing to a generous do-gooder who is excited by the prospect of helping to build a positive vision of accessible education.

**#72: Teach Reproducible Research**

There is a reproducibility crisis in the world. My project is for three summer months. I will dedicate this time to preparing an online open-source class teaching how to do reproducible research in the computational modelling field. [Contact asarmakeeva@gmail.com]

**#73: Create A New Kind Of Money And Cities**

The combination of markets and ideas has reduced suffering somewhat. This trend must continue, but I think a global median income of US$30,000 by 2049 is possible. We just need to teach everybody the same skills that Americans have. To enable this, 2 areas where improvement can be made and no new technology is needed are: a new money, and cities welcome to everyone. A new money is needed because the current financial system is not burdened with the risks it creates. Cities don’t grow like they did in the past. Over a 50-year period at the turn of the twentieth century Detroit grew 10X, whereas in this era the Bay Area has not even doubled its population. Nowadays cities that attract the best talent only attract the best talent. If we had a Hypothetical-Bay-Area-City grow like American cities of the past, it would have a population of around 45 million people and GDP of $4.5 trillion. What would an asset be worth if it had a $4.5 trillion income stream? A little bit of money and land is needed to make a start, but mostly I need you and your talents. Here is my new Substack with details: https://marketismandidearism.substack.com/p/a-new-money-and-cities-welcome-to . Please sign up to make a global median income of US$30,000 by 2049 a reality. P.S. I am talking money here. Accounting entries. Do not talk to me about Bitcoin. Bitcoin is an attempt at cash. 99.99999% of money transactions are not done with cash, they are done with IOUs. Please. Spare. Me.

**#74: Apply Constructor Theory To AI**

Constructor theory is a framework developed by the physicist David Deutsch which seeks to express scientific theories as claims about which physical transformations are possible and which are impossible. This is in contrast to the standard framework, which describes physical systems in terms of their initial conditions and laws of evolution. It is hoped that this framework will solve fundamental problems in physics and other fields. I believe that there is an analogy between the problems in the natural sciences which constructor theory was developed to solve and the AI alignment problem. I would like to spend a couple of months thinking about this and fleshing out my ideas as posts on LessWrong/The Alignment Forum and opening them up for discussion. I am currently in the final few months of a PhD in theoretical physics, during which I have published two papers. After my PhD finishes, I would like to spend some time (two or three months) researching this problem and will need some funding to do this full time during this period. If you would like to fund this work or discuss the idea further, please send an email to AlfredSPH@protonmail.com .

**#75: Study The Real-World Effectiveness Of Psychedelics**

Psychedelics are set to be approved as medicines as early as 2023. I think that there is a citizen-science project not happening that could add to the evidence base and help the widespread implementation of psychedelic-assisted therapies.
As founder of Blossom, a project dedicated to providing information on psychedelics - from research to implementation - and as an effective altruist (GWWC since 2015 & Founders Pledge), I could lead a large-scale citizen-science study of the real-world effectiveness of non-clinical psychedelic use for mental health & self-development. I'm looking for $45,000 to dedicate a significant amount of time to this project & pay others involved in the study. [Please contact acx@floriswolswijk.com]

**#76: Richard Hanania’s Think Tank**

I'm Richard Hanania, and I'm seeking funding for my think tank, The Center for the Study of Partisanship and Ideology (CSPI). We believe that scientific stagnation, identity politics, and widespread risk aversion didn't come out of nowhere. They're the result of bad policies and practices - bureaucratization, politicization, and credentialism - that have infected our institutions and spilled over into the larger culture. CSPI is bringing these dynamics to the attention of scientists, intellectuals, and politicians, emboldening them to take steps to reform, deregulate, or defund stagnant institutions. Over the last year, our researchers have had a large impact on the discourse surrounding several important issues. Philippe Lemoine helped discredit the public health consensus surrounding COVID and argued that lockdowns and other non-pharmaceutical interventions were not worth their costs. Eric Kaufmann documented political intolerance in academia and worked with the British government to secure enhanced free speech rights for academics. My own work has demonstrated that wokeness is largely the product of civil rights law and the HR bureaucracy that enforces it. We're seeking additional funding so that we can hire more research fellows and distribute more grants, expanding our influence in the policy space to fight back against failing institutions. If you're interested in helping, email me at contact@cspicenter.org. To learn more, visit our website at https://cspicenter.org.

**#77: Computer Programs That Write Themselves**

I've been working on faster ways to create computer programs, and I'm looking for funding to extend my time to work on this. This could improve the success probability of hundreds of other projects and allow the existence of many more. Tools reduce the time and resources needed for a project. If someone tried to work on their project without access to a computer or the internet, getting them a connected machine would be a high priority, even though it has nothing to do with the content of the project itself. And if computers or the internet didn't already exist, it'd be worthwhile to try to come up with them. But we only know this in hindsight, and the same could be true of what I'm trying to make: something we should never have tried to do without. The tool I want to create is a computer program P that is an interactive tutorial for (re)creating P from scratch. More importantly, P helps recreate significant variations of P. We'd then use P to make other programs that would help with whatever project is at hand. How this works and why it would be helpful are explained in the original grant submission at https://blog.asrpo.com/grant You'll also find contact info and links to some of the project's components there. Instead of funds, you can also help by contributing your own time to the project. If you think you can help with any component or are good at either compilers or thinking abstractly, please feel free to reach out.
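The self-referential core of P - a program that carries the recipe for rebuilding itself - has a minimal classical relative in the quine, a program whose output is its own source code. A toy illustration of the principle (not the author's tool):

```python
# A classic Python quine: the two code lines below print exactly themselves.
# %r inserts repr(s), so the string reproduces its own definition.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The proposed P goes far beyond this, of course: a quine only reproduces itself verbatim, while P is meant to teach a human to rebuild and meaningfully vary it.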
**#78: Research Questions In Progress Studies**

With funding from ACX Grants, I will investigate a set of specific and crucial open questions for Progress Studies: What is our capacity to slow technological progress if desired? What are the general properties of technology which limit or exacerbate existential risk? How robust are recommendations across different ethical frameworks? These questions are crucial to understanding the importance of accelerating progress, but relatively little effort has been devoted to them outside of AppliedDivinityStudies' The Moral Foundation of Progress and my own Stubborn Attachments From Behind The Veil. In the past, I've interned on economic policy at the Cato Institute and the Charter Cities Institute, where I published on growth and governance. ADS has also agreed to mentor me over the summer, which provides a vote of confidence. If you're able to provide funding, please email maxwell.tabarrok@gmail.com.

**#79: A Method For Solving Coordination Problems**

My name is Bendini, and I've figured out how to solve coordination™. Okay no, that's an overstatement. What I've actually done is figured out a general method for turning some intractable coordination problems into a set of merely difficult ones. I call this general method Intentionality, and it can be summarised as follows: A) get people to state their intentions explicitly and honestly; B) put everything in place that's necessary to ensure that people actually do A, instead of just pretending to do it. The core insight is A, but it's irrelevant without B's infrastructure. As such, I'm writing a yellow paper aimed at smart laypeople that explains all of B's interlocking parts and how to bootstrap them into existence. This is a huge task, and I have no credentials that would grant plausibility to my claims, but here are 4 reasons to treat them as plausible anyway: 1) I've never set foot in California. 2) Scott rejected my application without asking to see a draft of the yellow paper, so its ideas are yet to be judged. 3) The paper is written in plain language similar to (https://bendini.uk/systematic-cooking-intro), so unlike Ribbonfarm, it should be easy to tell if the ideas hold water. 4) I'm not asking for funding, only feedback to get it published ASAP. If you think this either 1) sounds plausible, or 2) sounds fake but worth a Pascal's mugging on the chance that it's real, you can reach me at kernelmanchester@gmail.com or @BendiniUK on Twitter.

**#80: A “Mnemonic Medium” To Replace Textbooks**

What comes after the book? Is it pictures of pages on screens? Videos of people lecturing? Why are all the answers to this question so boring? Where are the powerful ideas about memory, psychology, sociology? I'm Andy Matuschak, and I've been developing a "mnemonic medium" which embeds interactive memory supports to make it easy for readers to remember and apply what they read. To test these ideas, physicist Michael Nielsen and I created a quantum computing textbook called Quantum Country. Hundreds of readers have now demonstrated long-term retention of the text's fine details. I've been running experiments to understand and improve the medium, both in that textbook and by expanding the technology to a variety of other contexts. I believe that this medium, refined and widely deployed, could help people learn difficult topics much more easily and reliably. Now, here's where you come in: I used to lead R&D at Khan Academy, but now I'm independent and crowdfunding a research grant.
You can read more about my work and help make it happen at https://patreon.com/quantumcountry.

**#81: Entheogenic Plant Program For Addictions**

Natura Care Programs (NCP) is a new, cutting-edge, longitudinal spiritual care program for individuals struggling with a broad range of addictions. NCP integrates entheogenic plant medicine ceremonies, a social model for recovery with peer support, contemplative practice instruction, and nature immersion with an online curriculum. Our online component offers process-oriented recovery, individual and group counseling, and integration. NCP is led by Celina De Leon, Todd Youngs, and Alex Olshonsky. We are a non-profit seeking funding to help provide scholarships for veterans, BIPOCs, and underrepresented communities to attend one of our first cohorts in 2022. Learn more and get in touch at https://naturacareprograms.org/.

**#82: Create AGI**

I'm Sainadh Chityala, and I'm developing a general-purpose machine learning architecture that could potentially make any device intelligent. I have chosen a radically different approach to getting this done: developing a self-driving car in India. A self-driving car's ML pipeline is closer to the brain's physiology than any other pipeline humans are working on. An architecture of this sort could reduce the resources, time, and effort needed for massive R&D projects, such as developing rockets or new treatments, by automating and emulating everything. Basically, everything would eventually become an application of this. This architecture paves the way towards a post-scarcity world. My goal with this is to develop AGI (which might seem like too much, but who knows). [Contact me at sainadh.chityala@gmail.com]

**#83: Detect And Fight Healthcare Fraud**

Our company is using data to detect fraud against the government. Access to quality healthcare is dwindling in the United States. There is an estimated hundred billion dollars in fraud every year, leading to lower standards of care and making healthcare unaffordable. We're seeking a hundred thousand dollars to buy data from the Centers for Medicare and Medicaid Services. This will allow us to find fraud and file lawsuits on behalf of the government. The Department of Justice signaled a new level of support for independent companies using data methods to identify fraud in June of last year, when it picked up a case brought by Integra Med Analytics. For the past twelve months we've been working with attorneys specializing in this area (qui tam). We've been consolidating data returned from broad FOIA requests and have begun assisting law firms with data science. Our team combines broad technical expertise (Google, NASA, LANL, NIST, UC Berkeley) with business acumen and investigative experience. The three of us have been working together on projects with positive externalities for five years. Previous successful projects include providing flexible housing and developing micro-targeting methods for political action. [Contact erbahr@gmail.com if you can help]
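The entry doesn't say which data methods the team uses, but a common first-pass screen in healthcare-fraud analytics is peer comparison: flag providers whose billing pattern for a procedure code is an extreme outlier relative to similar providers. A minimal, hypothetical sketch (all names and numbers invented for illustration):

```python
import numpy as np

def flag_outlier_providers(billing_rates, threshold=3.0):
    """Flag providers whose rate of billing a procedure code is an
    extreme outlier versus peers, using a robust (median/MAD) z-score."""
    median = np.median(billing_rates)
    mad = np.median(np.abs(billing_rates - median)) or 1e-9  # guard against zero spread
    robust_z = 0.6745 * (billing_rates - median) / mad
    return np.where(robust_z > threshold)[0]

# Toy data: claims per patient for six providers; provider 4 bills ~10x its peers.
rates = np.array([0.020, 0.030, 0.025, 0.040, 0.310, 0.028])
print(flag_outlier_providers(rates))  # -> [4]
```

Real screens layer many such features, and an outlier is only a lead, not proof; the lawsuits come after investigation.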
**#84: Study Cognitive Strategies, Argument Distillation, And Build A Better Social Network (3)**

Hi, I'm a regular pseudonymous commenter here (and in other Rationalist spaces), but my real name is Isaac P. Burke. Some of you may know me from the Irish SSC meetups. I submitted three proposals. The first: a Rationalist nonprofit to conduct studies on effective cognitive strategies, initially focusing on group rationality in toy scenarios/competitions. The second: a nonprofit social network, not subject to advertiser pressure or clickbait incentives, with a focus on providing users with choices in terms of the moderation and algorithms they want to experience - think AO3. The third: a collaborative tool, based on Gwern's proposal at https://www.gwern.net/CYOA but focusing on user-submitted arguments, distilling debates down to a dialogue between the most persuasive crowdsourced points on both sides (this would also be useful as an artistic tool for collaborative storytelling, but that's less EA-relevant). In terms of qualifications, I'm a programmer with experience primarily in games and web design, and a passionate EA, currently working part-time for a small educational nonprofit. None of these proposals necessarily requires a huge budget, at least to reach the "minimum viable product" stage - maybe $15k-$20k - but all would require a lot of collaboration (even more so if more than one of them gets interest). If you're interested in volunteering/funding/collaborating on any of these proposals, you can reach me at Isaac.Philip.Burke[at]gmail[dot]com.

**#85: Study The Neuroscience Of How The Self Matures**

The 'self' matures and changes throughout adulthood, progressing through distinct stages. Social scientists have demonstrated that individuals at each stage have a unique outlook on life and way of interacting with the world. In earlier stages the focus is on exterior stimuli, while in later stages the exploration is of one's interior experience, and of how the exterior world is interpreted through that interior experience. Qualities such as compassion, dis-identification from the concept of self, and an understanding of the constructed nature of experience become stronger and more nuanced at each stage. Professional coaches exploit this knowledge to help their clients achieve personal and professional goals. Yet this maturity (or 'vertical development') model has been almost completely ignored by psychologists and neuroscientists, despite the potential for this knowledge to transform our understanding of individual differences in how the mind works. Our team, led by renowned Harvard neuroscientist Sara Lazar, is seeking funding to conduct a series of experiments to characterize the maturity process in scientific language and situate it within the fields of psychological and cognitive neuroscience. Charitable gifts of $50,000 to $500,000 will allow us to conduct essential preliminary studies to establish proof of concept and enable us to seek federal funding. Please contact slazar@mgh.harvard.edu with questions.

**#86: More Accurate Measurement Of Mental Health**

Even Mental Health is a startup working on improving how we measure mental health. Current approaches rely on asking individuals to summarize how they've felt in the past. This yields a blurry and often inaccurate depiction of one's mental health. Our app, Even Mind, takes a different approach that avoids recall bias altogether (you can find out more at https://evenmind.app). We believe more accurate measurement tools will lead to significantly improved treatments and outcomes for individuals. We're looking to connect with others interested in improving mental health measurement or mental healthcare more broadly. We're also raising an initial round and looking for investments in the $50,000 - $100,000 range. If you'd like to connect, please email me at cwoods@evenmind.co.

**#87: Gather Data On Successful Arguments**

I would like to create an argument website that asks users to write their argument for particular propositions (dogs are the best pets; The Rock would make a great president). Once we have a reasonable number of arguments, we would ask users to vote for the best ones by ranking 3 arguments at a time. Over time, we'd have good data on what types of arguments are more likely to be successful, and why. I would need to hire a better coder than I am ($5-10k) to put this together, and then figure out how to promote it. [Email nstearns@yahoo.com]
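The entry doesn't specify how the rank-3 ballots would be aggregated. One simple possibility (my assumption, not the author's stated design) is a Borda-style count, where each ballot awards 2 points to its top argument, 1 to the middle, and 0 to the last:

```python
from collections import defaultdict

def borda_update(scores, ballot):
    """Apply one ballot ranking three (hypothetical) argument IDs, best to worst."""
    for points, arg_id in zip((2, 1, 0), ballot):
        scores[arg_id] += points

scores = defaultdict(int)
borda_update(scores, ["dogs_loyalty", "dogs_health", "dogs_guard"])
borda_update(scores, ["dogs_health", "dogs_guard", "dogs_loyalty"])
print(max(scores, key=scores.get))  # current front-runner across ballots
```

Fancier options fit to the same ballots (Bradley-Terry or Elo-style models) would also give each argument an uncertainty estimate, which matters when some arguments have been shown to far fewer voters.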
**#88: Daniel Ingram’s Nonprofit For Studying Emergent (ie spiritual) Phenomena**

Emergence Benefactors (EB: https://ebenefactors.org/) is a 501(c)(3) charity established in 2021, striving to reduce global suffering and promote long-term human flourishing through an in-depth understanding of emergent (spiritual, mystical, energetic, psychedelic, and related) phenomena. We are designed to support the roadmap of the Emergent Phenomenology Research Consortium (EPRC: https://theeprc.org/) and its allied entities. To that end, we fund and support methodologically rigorous, ontologically agnostic research on emergent experiences, practices, and their effects. Furthermore, we aim to promote the culturally sensitive incorporation of this scientific and clinical knowledge into global, evidence-based knowledge bases. We draw insights from rationalist and effective altruism frameworks. EB's Board and team of contractors ensure a diverse range of professional skills and talents. Dr. Daniel M. Ingram (CV: https://tinyurl.com/dringram), the self-funded Acting CEO and Board Chair of EB, has over 37 years of professional and personal experience with emergent phenomena. We welcome both unrestricted donations and those intended to support specific projects and activities. We would appreciate feedback from the ACX community on how to make this project the best it can be. The EPRC whitepaper contains all the detailed information: https://hypernotes.zenkit.com/i/UFIY1UO1cp/84TWK0BwQlq/?v=M6pP_Tb7W6 For inquiries: daniel.ingram@ebenefactors.org

**#89: A Wiki For Rebuilding Civilization After Disaster**

My name is Jehan, and I've created the site Wikiciv.org as a guide to rebuilding civilization in case of global catastrophe. Its editing is crowdsourced like Wikipedia's, because a project this large is far too much for one person, or even a team. Technologies and raw materials are linked, so both upstream and downstream technologies are easily accessible. There are other projects with similar goals, but they are either 1) not publicly accessible or 2) the wrong scale. Books such as "The Knowledge" and "How to Invent Everything" are too cursory to be a practical guide for recreating critical technologies like steel, fertilizer, and antibiotics. Meanwhile, the "Manual for Civilization" from the Long Now Foundation is 3,500 paper books in one corner of San Francisco. Wikiciv is fully open and available for database download. Distributed backups are encouraged to ensure resiliency during a disaster. WikiCiv could be helpful even for regional supply-chain disruptions. For example, during the Covid-19 pandemic there were critical oxygen shortages in India. It turns out that a reasonable oxygen generator can be made from zeolite and an air compressor. Wikiciv aims to be a single, interconnected database of "from scratch" manufacturing instructions for situations like these.
It is the eventual goal of Wikiciv to be accepted as a Wikimedia Foundation project (like Wikipedia, Wikiquote, Wikivoyage, etc.). The better Wikiciv becomes, the more likely this is. Get in touch at admin@wikiciv.org

**#90: A Crowdfunding Platform For Bounties On Beneficial Projects**

viaPrize.org is a free and open crowdfunding platform for bounties on socially beneficial projects. There are currently organizations, like the X Prize Foundation, which host these sorts of contests; however, there is no platform that allows open submissions of both bounty ideas and bounty funding. viaPrize.org unbundles ideas, funding, and execution by permitting anyone to submit great bounty ideas, regardless of their resources or know-how. It further unbundles the funding by crowdfunding contributions, which then combine into larger and more worthwhile bounty prizes. If you're interested, you can help in the following ways: 1) post bounty ideas to the site (no funding needed); 2) try to win existing bounties; 3) pledge or contribute funding to bounties; 4) help out with the project through volunteering, promoting the site, advice, or useful contacts (e.g. X Prize). Some bounties up right now that you might like: transparent bank accounts, a no-login shareable calendar, a hotkey standardization app. Contact: info@viaprize.org

**#91: Protein-Based Desalination**

I would like to conduct research into aquaporin-bearing GUVs (giant unilamellar vesicles: lipid-bilayer spheres ~50μm wide) and their ability to take up pure H2O within a saline/sucrose solution. This research would benefit the field of biomimetic membrane filtration and possibly pharmacokinetics. I'm a fourth-year Bio-Chem undergrad graduating this spring, so I'm not able to apply next year for typical undergrad research grants. I'm a mature student with a strong scientific mentor network and experience in taking on and completing other high-risk, complex projects. I would require money for lab space rental (which I can negotiate with my school) and the necessary reagents/lipids/cultures. For more info or suggestions on other funding avenues besides this one, please email matthew.tjones57@gmail.com (or even comment on this thread if possible)

**#92: Do Gay Rights In India Reduce Discrimination?**

I'm looking for funding for research that will test whether the expansion of the legal rights of gay people in India reduces discrimination by coordinating a shift in the social norms surrounding anti-gay behaviour. Using the Supreme Court ruling of 2018 that decriminalised homosexuality across India, I will run an RCT measuring the effect of exposing individuals who are not already aware of this law to the information that homosexuality is legal. I will measure how this information impacts participants' anti-gay behaviours and sentiment. Being told about the law is likely to act as a signal that being homophobic is no longer socially acceptable. Even if the information doesn't change their private attitudes, people may therefore be more likely to act kindly towards gay people when their actions are visible to others. Relatedly, they may be more prone to positive communication about gay people; e.g., they may share fewer anti-gay news stories on social media. This reduction in anti-gay narratives could in turn help the longer-term process of positive social change. If you're interested in this project, please email me at dmbwebb@gmail.com!
**#93: Found A Non-Territorial Nation-State**

My idea is to found a non-territorial nation-state of sovereign individuals, offering a viable, better alternative for tackling both global and local issues. Globalization and its implications (pandemics, climate change, and other complexity-induced man-made problems) don't "care" about borders, while community activities are impaired by dependence on states. Corporations bypass this, acting as international, networked entities with both global and local influence. We need something better in an increasingly complex and interdependent world, something that allows us to make a change for the better. This is already being partially addressed (the Club of Rome et al.), models of human progression have been developed, new philosophical constructs are emerging (metamodernism, etc.), and think tanks like the Berggruen Institute are exploring possibilities. Yet we are still stuck at the philosophical level instead of walking the talk. There is currently no structure that would allow for ultimate "global thinking, local acting". It's not intended to be a global government, but the mission must be to have a seat at the UN, to at least co-exist with other countries. The goal is not to revolt against or replace current countries, nor to dream up another utopia (nirvana fallacy, ignoring tribalism, etc.). Given the urgency, I'm looking for funding to work on this full time: to build a platform, attract experts and professionals, and examine and offer an actionable, viable, better alternative to what is. [Email benjamin@wittorf.me]

**#94: A Clinic That Practices A New Model Of Community Medicine**

Open Source Wellness (OSW) is an Oakland-based 501(c)3 nonprofit dedicated to transforming health care and health outcomes in partnership with communities. OSW delivers "Community As Medicine" via a "Behavioral Pharmacy" approach, in which patients struggling with chronic conditions such as diabetes, hypertension, and depression are supported with four universal pillars of wellbeing: physical activity, healthy food, stress reduction, and social connection. OSW specializes in partnering with Federally Qualified Health Centers (FQHCs) serving patient populations that are predominantly low-income and communities of color, utilizing a Virtual Group Medical Visit model that generates revenue for clinics while achieving critical health outcomes. These visits combat social isolation and are led by culturally relevant health coaches and peer leaders in addition to primary care providers. Peer-reviewed, published research outcomes include reductions in ED visits, blood pressure, depression, and anxiety, and increases in weekly physical activity and fruit/vegetable intake. We are seeking funding of $50,000 to $150,000 to support two key project areas: 1) continued expansion of the OSW model to clinical and community orgs nationwide, and 2) development of a nationally accredited health coach training program, which will create a certification and employment pipeline for our diverse and under-employed participants and peer leaders. Visit www.opensourcewellness.org or email liz@opensourcewellness.org

**#95: Make Programming By Voice More Practical**

I'm Michael Arntzenius (rntz.net) and I'd like $5-40K to work on making programming by voice more practical. Many programmers at some point suffer from repetitive strain injuries (RSI) that make typing difficult; I'm one of them. To mitigate this I use a tool called Talon that lets me control my computer by voice.
Thanks to recent advances, voice control is increasingly practical, and voice programmers form a small but rapidly growing community. However, idioms and tools for coding by voice are in their infancy. I believe now is the right time to push hard on voice-oriented editing: the underlying voice recognition tech is ready, we have a creative, dedicated community willing to experiment with new approaches, and mature editor technology like language servers and online error-robust parsing (e.g. tree-sitter) supports editing commands at a higher level than character-by-character. Specifically, this money could support me working part-time for 6 months ($5K) up to full time for 2 years ($40K). I could work on: contributing to Cursorless, the state-of-the-art open-source voice editing framework for VSCode; porting Cursorless to other editors to increase its reach; incorporating ideas from recent research on structural editing and typed holes; and my duties as co-maintainer of the de facto default Talon script-set. If you're interested, contact me at daekharel@gmail.com.

**#96: Improve The Readability Of Scientific Writing**

I want to research, demonstrate, and facilitate the adoption of better norms in scientific writing. Science papers are more tedious to read than their inherent complexity requires, in part due to misaligned incentives in science publishing, but also because of ossified expectations of what a paper "should" be. This has several drawbacks: 1) lower productivity for scientists, who have to expend energy to understand papers; 2) fewer papers read, which means less usefulness of scientific work and less cross-pollination between fields; 3) possibly, people leaving science; and 4) less accessibility of science to educators, leaders, professionals, and the broader public. I've been researching language in science for several months; see my work at https://jawws.org/. The new norms I suggest follow two principles: reduce cognitive load for readers, and avoid requiring more work from authors. One idea is to create a new journal to publish rewritten versions of existing papers, showing that new norms are possible without sacrificing scientific content. (The goal is not science journalism.) I would love to talk to you if you have any interest in science publishing and editing! If you'd like to contribute funding, it would help me cover living expenses; it would also allow me to offer prize money for a "Make a Tedious Paper Fun to Read" contest, on the model of the ACX book review one. If any of this sounds interesting, contact me at hello@jawws.org.

**#97: Start An EA Club At A Turkish University**

I am Berke, a philosophy major from Istanbul, Turkey. I have been interested in Effective Altruism since 2019, and I am currently a research volunteer at Kafessiz, an organization working to end the use of battery cages for hens in Turkey. EA doesn't have much presence in Istanbul or in the country in general. There are no EA clubs at colleges, and I intend to start the first one at mine. We plan to run seminar programmes (fellowships) similar to the ones in Oxford and increase EA outreach both at our college and on online Turkish-language platforms. If we find enough interested people, we plan to start a fellowship program within one or two months, along with a few other things. If you're interested, you can contact me at berke.celik@boun.edu.tr . What would be achieved in the name of utility by funding an EA society at a college you've never heard of?
For certain EA causes, there is a lot to be achieved in Turkey. For example, the number of hens in battery cages in Turkey is around 100 million. Establishing a non-trivial EA presence at my college would be a good first step, because it is the most selective college in the country: in Turkey you enter college via a central exam, and 70% of the 1,000 highest scorers chose to come here. This is not a flex; I am just trying to say that if one wants Effective Altruist ideas to spread in a country of 80 million, or at least among the future Turkish elite, my college is a good place to start.

**#98: Help People Effectively Navigate Their Local Government**

I'm John Kurpierz, a PhD student interested in making local governments more fair and transparent for their constituents. We can do this by improving the sophistication of those constituents. Two ways to improve the sophistication of regular constituents are: 1) improving their reasoning skills and desire for good-faith deliberation, and 2) giving them better information on their rights as voters, their responsibilities as voters, and the norms of their local government. Working with a multidisciplinary team (Ken Smith, PhD; Theresa Walker, PhD; Wendy Cook, PhD), we've developed a pilot program of "cool tools" that help regular people effectively navigate their local governments while remaining in favor of "Niceness, Community, and Civilization". We've done some test runs with underserved communities and have promising initial results, but are currently small-scale. My colleagues are professors and have no interest in scaling up these projects to see if they're viable outside an academic environment. By contrast, I would like to try deploying this at a larger scale to see if we can feasibly increase the number of individuals we can train and the impact we can have. I am seeking $10,000 to work on finding additional interested constituents, deploying the courses at larger scale, and tabulating the results to see if the ROI on outcomes remains high with a larger population. Interested funders or advisors can contact me at JohnRKurpierz@gmail.com.

**#99: Research Retraction Insurance To Ensure Retractions Get Publicized**

I'm Christopher Akin, and I'm developing a new financial insurance product for academic research: "Research Retraction Insurance", or "RRI". RRI is a financial insurance product that pays out to promote new findings and retractions when earlier findings are later proven false. Academia does not promote scientific findings that prove false as loudly as it announces them in the first place. We see this in the juxtaposition between grand public declarations of scientific success and pin-drop retractions, and in the current replication crisis of 'classical' results in the social sciences. RRI will help resolve the current system's limitations: weak professional accountability, limited transparency, little learning from mistakes, and an ill-informed public holding on to false findings. RRI payouts are earmarked to publicize later research findings that falsify early pronouncements, and recipients of research funding will purchase RRI voluntarily, or will likely be required to by funders. Researchers allocate only pennies (~0.25% of each funding dollar) of the thousands of funding dollars received to insure the promotion of future falsifying results. The earmarked PR payouts are ~15X the premium payments, or roughly 4% of total received funding.
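To make those rates concrete, here is the arithmetic on a hypothetical $1 million grant (illustrative figures only, using the percentages above):

```python
# Hypothetical RRI economics on a $1M grant, using the rates quoted above.
grant = 1_000_000          # total research funding received, in dollars
premium_rate = 0.0025      # ~0.25% of each funding dollar goes to the premium
payout_multiple = 15       # earmarked PR payout is ~15x the premium

premium = grant * premium_rate          # $2,500
payout = premium * payout_multiple      # $37,500
print(premium, payout, payout / grant)  # 2500.0 37500.0 0.0375 (~4% of the grant)
```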
I am seeking $50,000 in seed funding to conduct market research, begin RRI product development, and engage leading funding institutions in business development. Email iamchrisakin@gmail.com or visit www.linkedin.com/in/akinchristopher

**#100: Tools For Protecting People’s Legacies After They Die**

Preserving your memories, beliefs, personality, and expectations about the future should be cheap and easy today. Storage, preservation, and discoverability are all cheaper and simpler than ever before. Yet every year more than 50 million people die, and the vast majority won't leave this kind of legacy. That's a tragedy. Preserving personal legacies has value for the people whose legacy is preserved, for the people who get access to these legacies, and for all of us: a clearer understanding of humanity, its values, and its experiences. So what could significantly increase preservation of our legacies? First, simple tools for collecting the material: a well-designed web form with prompts for significant memories, vital stats, family medical history, ethical will, and so on; and tools for collecting, tagging, and preserving photos, videos, and other media. Second, tools, tech, and a plan for long-term preservation. Third, a robust system, designed with legal advice, for specifying who gets to access which parts of the legacy and when. Fourth, convenient, controlled access to the information. Fifth, an effort to promote the project and its reliability. Each of these five components might require a team of professionals. However, there is such a clear void here that even a proof of concept using simple tools would be a big improvement and would demonstrate the value of further investment. legacyproject@protonmail.com

**#101: A Foundation To Support Undercover Journalism**

Nellie Bly exposed 19th-century asylum horrors without hidden cameras or recorders. With those tools, Shane Bauer got a book out of his undercover private-prison job, yet few have followed him. Exposing corrupt institutions often takes many people bravely coming forward at personal risk. One journalist, outfitted with the right tech and backed by the right legal advice, could do similar work with less risk and greater personal reward. So why aren't journalists regularly going undercover? It's a coordination problem. No single journalist or organization has the time or resources to make this workable. But an organization dedicated only to this project could recommend tools, provide training videos, offer general legal advice, and consult on projects for any reputable news organization or other qualifying truth-seeking entity. That would make these kinds of projects more rewarding relative to their risk, and a thousand stories could bloom from journalists turned undercover staffers in nursing homes, meat processing plants, and other places where vulnerable people are subject to abuse with little recourse. Just the threat of this kind of exposure would cause many previously untouchable organizations to clean up. The idea is to buy the top-rated recording devices, test them, produce training videos, and find First Amendment lawyers to outline legal advice and provide consultations. journalismundercover@protonmail.com

**#102: Screen Addiction Detox Via Competitive Water Fight League**

Hi, we're Michoel and Yitzi, PhD in Neuroscience and BA in Psychology, respectively. We would like to run an active screen addiction detox in the form of a competitive water fight league. The activity would provide a high-adrenaline, physically active, socially collaborative/competitive alternative to ordinary humdrum team sports on the one hand, and sedentary video games on the other. The literature suggests that phone addiction is both extremely pervasive and extremely difficult to address. When interventions do work, it seems to be because they've given people engaging activities as a healthy substitute. The water fight league is a novel and relatively cheap approach that may hold more appeal for children and teenagers who feel no great draw to conventional sports. It would take approximately $3,120 to run a pilot program. If you would like to contribute or discuss, please get in contact: leaguehydro@gmail.com

**#103: Games To Fight Cognitive Bias**

Decades of widespread awareness of cognitive biases, and several impressive projects to help people overcome them, do not seem to have led to any population-level improvement in the fundamental problem. Many specific biases and irrational mindsets probably take hold during school years, and likely in school. But give kids games, preferably ones they can play against each other, that take rationality to win, and they'll have powerful incentives. Show them that if they can avoid anchoring they'll come closest to guessing a number. Play with, not against, Monty Hall and, over time, accrue the winnings. Overcome loss aversion and dominate gamified markets. Bring together a rationalist experienced in turning these biases into stories (Eliezer, Julia) with a videogame maker who'd enjoy the virtuous side project, and you could have the most useful and fun educational game around. rationalismgame@protonmail.com
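The Monty Hall game is a nice example of how directly a rational rule pays off: a quick simulation (a toy sketch, not part of the proposal) shows switching doors winning about two-thirds of the time, so a student who plays the rational strategy visibly accrues more winnings.

```python
import random

def monty_hall(switch, trials=100_000):
    """Simulate the Monty Hall game and return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that hides a goat and isn't the player's pick.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=False))  # ~0.33
print(monty_hall(switch=True))   # ~0.67
```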
**#104: Bring Big Tech’s Optimization Experiments To The Masses**

The most efficient and profitable companies constantly run experiments on their own products, optimizing directly for profit or indirectly through customer acquisition, retention, or other metrics. Optimizely and other companies provide platforms for managing and measuring experiments, and many companies have built their own systems in-house. These can support greedy optimization algorithms such as multi-armed bandits. Yet when we want to measure the effect of minor variations in diet, or of playing chess on certain days, or anything else, we have to build all that from scratch, or risk wasting all that tracking effort on subpar conclusions. That, in turn, likely dissuades many curious people from even trying. A simple Bayesian system with even some of the features Big Tech uses would be a big improvement, and a boon for personal, relationship, and child-rearing optimization. It could also be deployed in small, nonprofit, and government organizations, for internal processes as well as external-facing products. The cost: a few months of part-time work for a programmer and a data scientist. personalexperimentationplatform@protonmail.com
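As a sketch of the kind of "simple Bayesian system" the entry has in mind (my illustration, with made-up numbers): Thompson sampling for a two-variant Bernoulli bandit, which automatically spends more days on whichever variant looks better so far.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two variants of a personal experiment (say, diet A vs diet B), each day
# scored as success/failure. Beliefs about each success rate are Beta priors.
successes = np.ones(2)        # Beta(1, 1) priors: start out agnostic
failures = np.ones(2)
true_rates = [0.55, 0.70]     # unknown in real life; used here to simulate days

for day in range(365):
    arm = int(np.argmax(rng.beta(successes, failures)))  # sample beliefs, pick the best
    reward = rng.random() < true_rates[arm]              # run that day's variant
    successes[arm] += reward
    failures[arm] += 1 - reward

print(successes / (successes + failures))  # posterior means converge near the true rates
```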
**#105: Fully Transparent Patient Advocacy**

Patient advocacy tends to be one-on-one, high-priced work, where the professional is incentivized to hoard expertise to apply to future clients. That maintains the collective action problem for patients in a system so bad that those who can pay a lightly credentialed person >$100/hour do so just to get even the mediocre care and billing that is their poorly protected sorta-right. This proposal is for a starter fund to get off the ground a lower-priced patient advocacy business with a very different goal and model. Clients get to pay less or nothing in exchange for cooperating in recording everything about the process, including, where local laws allow, recording phone calls with the billers, schedulers, and occasionally doctors who immiserate fellow human beings because they can, with impunity. And everything recorded is shared, minus identifying details and in accordance with medical privacy laws, on a free website, alongside commentary about how readers can apply what was learned for their own greater success in the system. Consider the popularity of the Substack Sick Note, Freddie DeBoer's account of his experiences with the psych system, and Scott's posts on how to interact with psychiatry. There is enormous informational asymmetry in so-called health care. A fully transparent patient advocacy service, and any copycats it inspires, could shift the balance of power toward patients. transparentpatientadvocacy@protonmail.com

**#106: Undercover Hospital Boss Program**

If everyone who worked in hospitals had to spend a night in theirs as a pretend patient every six months, the experience ought to get much better fast. Imagine a place optimized for healing, rest, calm, and happiness, and you'd be hard-pressed to name anything you imagined that's present in most hospitals. Yet the people who could bring the vision and the reality closer together are often blinded, or blind themselves, to what's happening in their places of work. To get started: create a pilot program in one department of one hospital. Start with the top administrators. Don't proceed until COVID-19 isn't a significant risk for the program. And, to start gently, everyone knows the "patient" is really a boss. Then they report back to everyone what they experienced and saw. The work will be in finding the hospital department and the consultant. The budget will pay for an outside consultant to design and run the program, record impressions, facilitate discussion, and outline possible expansion, plus some amount for the hospital's time and bed. hospitalmysteryshopper@protonmail.com

**#107: RADVAC: Open Source Vaccines**

[RaDVaC](https://radvac.org/) is a non-profit organization working to maximize access to vaccines when & where most needed: the first days of a disease outbreak. Since March 2020 we have developed & published 12 coronavirus vaccine designs under an open-access license, and worked to catalyze vaccine development globally. We envision a renaissance in vaccinology that is:

- Diverse & decentralized: enabling diverse, distributed participation in vaccine R&D through lower tech barriers to vaccine design & development
- Transparent: fostering broad, open access to R&D, tools, and data (with less opportunity for distrust)
- Collaborative: optimizing vaccine formulations and their immunological & epidemiological relevance using pooled research, standards, and data
- Resilient: tech platforms that are easily modifiable to adapt to new variants, and centered on conserved domains for durable, mutation-resistant utility
- Rapid: able to deploy high-quality vaccines, at scale, in the earliest days after an infectious disease is identified

These goals are achievable through the proliferation of open, accessible, & adaptable technologies in vaccine design & production, and through improved vaccine trialing models (RaDVaC is developing a novel challenge trial model for safer, faster, lower-cost clinical trials).
Funds will be used for additional staff and hours, research supplies and services, and cultivating an ecosystem of vaccine developers around the world. More information about supporting the project can be found at https://radvac.org/support/.

**#108: EEG For Dementia Screening**

BrainTrip has developed a fast, early, and affordable dementia-screening solution based on EEG measurement. BrainTrip's novel biomarker can detect the disease at its pre-symptomatic stage and can also be used to measure its progression. Traditional naked-eye EEG inspection has been of little use in dementia, because large EEG changes are often evident only late in the illness, when other clinical signs have become obvious. However, the brain's electrical activity contains many hidden features which can be extracted by sophisticated signal processing and inference modeling. The core of our innovation - the BrainTrip Dementia Index (BDI) - is based on such models. The BDI combines neuroscientific knowledge with complex mathematical models, relying on specifically developed AI optimization and machine learning. The BDI is a numerical score calculated from a test subject's resting-state EEG, and it can detect subtle dementia-specific EEG changes early on, long before symptoms become evident. BrainTrip's solution comprises a 15-minute EEG recording which detects even early-stage dementia with an accuracy of 85% and can be administered by minimally trained staff. The BDI has already been CE marked as a Class I medical device used for dementia screening. We are now looking for $30-100k for a validation study on up to 500 individuals in Slovenia, before bringing the BDI to market.
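The BDI itself is proprietary, but as a generic illustration of the kind of feature such a score can be built from: dementia-related EEG "slowing" is classically summarized by relative band power, e.g. a rising theta/alpha ratio. A minimal sketch with stand-in data (not BrainTrip's pipeline):

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    """Average spectral power of one EEG channel within a frequency band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

fs = 256                              # sampling rate in Hz
eeg = np.random.randn(60 * fs)        # stand-in for one minute of resting-state EEG
theta = band_power(eeg, fs, 4, 8)     # 4-8 Hz
alpha = band_power(eeg, fs, 8, 13)    # 8-13 Hz
print(theta / alpha)  # a rising theta/alpha ratio is a classic EEG marker of dementia
```

A production index would combine many such features across channels inside a trained statistical model, which is where the "inference modeling" the entry mentions comes in.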
**#109: Writing That Explains And Normalizes Hybrid Remote/In-Person Schools**

I'm Sebastian Garren, and I am building hybrid schools. Traditional five-day schools are not the future. They demand too much conformity and too much peer socialization, and they don't allow for enough independent exploration. Hybrid schools meet in person for fewer than five days per week (normally three). I want to write the official guide on how to start and run a hybrid school. I have started one hybrid school myself and been Dean at another for seven years, with a total of 227 students and 280 expected next year. I believe up to 20% of students would be better off in a hybrid school than in a traditional one, and now is the time to start helping people opt out of the current system and normalize educational non-conformism. I am looking for $17,000 - $34,000 to help me take time to research and write. I have a matching grant up to $8,500, and I need an experienced editor or publisher to bounce ideas off of. The digital version will be free to the community. Contact sebastian.garren@gmail.com if you would like to fund or advise this project.

**#110: Invest In Organic Kolanut For Startup**

We are a small social startup from Germany needing organic kolanut for our production. As there was no organic kola on the European market, we partnered with a young organic farmers' cooperative to supply us. In West Africa kolanut is used traditionally, but usually not dried and exported. Meaningful exports would help diversify the farmers' income and open new perspectives for a traditional product. We have received the first small shipments of kola, so we are able to use it in our own production and supply others with samples. We already have inquiries for 20-30 tonnes, but we can't meet this demand without first helping our African partners extend their quality management, and without investing more in the supply chain than we can currently afford. We are not looking for the usual venture investment, because we don't want to optimize our business for profit. When we are big enough, we want to adopt steward ownership (https://purpose-economy.org/en/) to keep our business this way. With a grant of €10,000 we could buy some needed equipment, travel to Africa, or even engage consultants to improve quality management. We are also looking for investors to found a non-profit importing company, which would have a more sustainable positive impact but would also need substantial investment (~€100,000). If we don't find such an investor, we will have to partner with an established importing company. Contact: info@kolakao.de. Homepage: https://kolakao.de/kolanuss (still only in German, sorry)

**#111: Long-Termism + Progress Studies Unconference**

We intend to solve the problem of unproductive conferences, and the challenge posed by the interdisciplinary nature of long-termism, existential risk, and progress studies, by applying participatory techniques (Open Space) in an unconference format. We need collaborators more than money, but the budget is c. $15K. What: an innovative conference format bringing cross-silo thinkers and doers together to think about the long term and progress. Typical conferences work badly: hierarchies and old networks impede new connections and growth in social and relationship capital, and the best conversations occur in the corridors. Why: the long term is vital for humanity. Ideas are multidisciplinary and emergent. There is debate as to how much progress we are making and what we can do. The challenge cuts across a wide range of domains, and governments and traditional institutions are struggling to rise to it. New ideas are needed. For those interested in these ideas, we believe participatory events could lead to fruitful new ideas and connections - perhaps low-probability but very impactful outcomes/meetings. How: the Long-termism Unconference will be a one- or two-day event bringing together thinkers from a wide range of domains and backgrounds to discuss long-term challenges and solutions in a self-selecting, participatory manner. More on me: thendobetter.com/links or @benyeohben. Podcast: Ben Yeoh Chats

**#112: Think Tank Of Mediators To Help Countries Understand Each Other**

Today many political, social, and business processes are ruined by mutual misunderstanding between countries and nations: misunderstanding of other parties' motivations and goals, of their reasons for doing things a certain way, and of how to understand and value what is said or done by others. When the sides do not hear each other, or hear only what they want to hear, discussions, actions, projects, and even wars go wrong. But being able to stand above one's national "self" and explain, in general and in detail, the other side's goals, motivations, and limitations is vital. Placing plans and relations in the right context of cultural codes, mentalities, and psyches can much improve the success rate and decrease the tension and intensity of conflicts. One does not necessarily have to accept, but one does have to understand. I suggest building a think-tank-like non-profit international panel of skeptically and rationally thinking "mediators" with vast international and intercultural experience.
Such a panel, standing outside the wrong paradigms, could provide convincing explanations & advice for different audiences. Combining persistence and patience with an absence of agitation will (slowly!) build a strong reputation. My own successful 22+ years of intercultural experience and international business representation have always been partly based on such explanatory "mediation". Plans do not work without money; please send your questions to pavelkartashov@mailbox.org. More details at https://cutt.ly/eOo6pTR

**#113: Increase Own Intelligence, Then Write About How**

My name is David Gretzschel, and I want money to increase my own intelligence full-time for about a year. Once I have succeeded (more than I already have), I will teach others how to do this. The benefits of this are obvious. And I already know, for the most part, how to do it: I have a concrete foundation to build on in the form of a synesthetic encoding scheme. I merely need the time to do an intense amount of training without being distracted either by having a job or by not having one and starving, to practice using these techniques on various mathematical and computational problems, and to do a bunch of other things. Details are in the long pitch (see below). So I need $20,000 to not worry about rent and food for that time. Please send it in Bitcoin here: 3Qcm3UJRuFca1fTkf2iPPEkU3PevpzPuwP I certainly would have use for more money, too (though it'd not be necessary; I don't want to dissuade you from it, if that's an option). So do feel free to shower me with the stuff, if you have it and believe in my cause (or if you only believe in it 10%, but know that the expected value calculation still ends up with a happy face /pascal-mugging). With $10,000 I'd still commit to a year, though that would be a bit tighter than I'd like. The longer pitch is here: https://docs.google.com/document/d/170WETB6enUOzQEzwbwmOCVHz9VkBe4R86rCh_ewvOcg/edit?usp=sharing . If you have further questions/conditions/need more persuasion, send an email to: davidgretzschel@gmail.com

**#114: Analyze Policy Failure In The COVID Pandemic In Germany**

The management of the Covid-19 pandemic in Germany revealed some classical modes of policy failure, but also some rather new or underexplored ones. I want to analyze these in depth, in a type of study that carries a minimum of societal reputation. In a second step, if the results are interesting, I want to feed them back into the political process to induce change. I'm a mid-career person with a background in science, including working with a Nobel prize winner, and in project management. I have enough contacts in the 'Kanzlermeile', the political center of Berlin, to make the results relevant for change. I'm looking either for seed funding of €5,000-20,000 to pursue further funds through grants or other sources, or for €75,000 total to complete the whole study and, if relevant, take the first steps toward exploiting the results. If you can support this in any way, please contact me at policy.failure.analysed at gmail.com. If you're working on similar topics, please also feel free to get in touch.

**#115: Fight Factory Farming In Turkey**

"Turkey without Cages" (Kafessiz Türkiye) is an animal welfare organization working by Effective Altruist principles to end cage egg farming in Turkey. We reach out to companies that have cage eggs in their supply chain and secure cage-free pledges from them. Turkey is among the top 10 egg-producing countries, with around 120 million laying hens.
Turkey is also a major egg exporter, with volume increasing each year; animal welfare standards in the region therefore depend on progress in Turkey. In just a few years, we have secured pledges from 20 companies, which will enhance the welfare standards of roughly 1 million hens. Our progress is due to the extraordinary efforts of a small number of employees and our network of volunteers. This year, we plan to expand our efforts to the fish farming industry and to initiate similar NGOs in countries close to us geographically and culturally. The number of fish farmed in Turkey is estimated at 1-2 billion, and 60 percent of production is exported. At an early stage of our fish welfare endeavor, we secured a pledge from a major wholesaler, Metro AG in Turkey, which will considerably impact the welfare of 10 million fish. In 2022, we want to expand our capacities to maximize the number of animals we impact. If you think you can help, please email cagri.mutaf@kafessizturkiye.com to learn more about how you can support us. You can also visit www.kafessizturkiye.com

**#116: Chart Enlightenment Hacking Technology**

The Enlightenment Hacking group is looking for funding and collaborators to chart the current state of knowledge regarding neurotechnological enhancement of, or substitution for, progress in meditation practice. We think this is an important opportunity for humankind, because meditation experience offers a) reductions in stress and suffering, b) broad enhancement of very general cognitive capacities, and c) reduced selfishness - an almost ideal package of human enhancement that benefits both the individual and society. Currently, benefits are known to accumulate over years and decades of meditation practice, but the potential for (neuro)technology to speed up or substitute for this progression remains largely unexplored. Widespread adoption of such technology might change the world for the better. If you can offer help, please email us at enlightenmenthacking@gmail.com.

**#117: Help Nonprofits Share Evidence And Determine Impact**

The nonprofit sector's approach to using evidence in service delivery is fundamentally flawed. A failure to use knowledge from different sources slows progress and harms the people we want to help. Since we can't reliably know the impact of our services, we can't determine which services have a negative impact, or how to improve them if and when they do. Maturing our approach to generating evidence about our work is critical. A similar situation in academia (the replication crisis) led to a shift in orientation for the entire sector, enabling collaboration that wasn't possible before. Our issues, like academia's, are systemic, and solutions can't be focused on helping one organization at a time. If we ever expect to improve our services, we need to be open to radically altering the structure of the sector. The nonprofit sector needs an "existential crisis" of its own, so that we can collectively address this issue. Ajah is a social enterprise with more than 20 years of experience helping stakeholders increase their impact through better use and sharing of data. Our work with some of the largest foundations in the world, and with nonprofits across the globe, has given us a deep understanding of how conditions in the sector prevent us from knowing our impact. We write extensively about this evaluation crisis and about why advances in technology are not, by themselves, a solution.
We are also working to build tangible solutions (such as access to administrative data) to solve pieces of the puzzle. [Contact: ben.mcnamee@ajah.ca]

**#118: CBT App To Help Depression**

Depression affects 17 million US adults. Anxiety affects 40 million. For people with these and other mental health challenges, cognitive behavioral therapy is an intervention that is proven to work. Unfortunately, fewer than half of the people who would benefit from therapy end up getting it. This can happen due to time, cost, stigma, and other constraints. Mobile and web apps can be helpful in overcoming these hurdles, but current products have their fair share of problems. Some are slick but use "fluffy" content. Others stick to the evidence but don't prioritize usability. We think there is an opportunity to create a CBT app that is evidence-based, easy to use, engaging, and low-cost. Here are preliminary mockups that illustrate what we're building: tinyurl.com/yckz8k9v Currently, we have an app built that we're testing internally ahead of a wider release. We're looking for $5,000 that would go toward legal consultation, paid user research, and marketing and advertising. We would also love advice if you have experience in the space! The team comprises me (Calvin, engineer, linkedin.com/in/calvin-woo), Maria (designer, linkedin.com/in/mhmichelsen), Shwetha (engineer, linkedin.com/in/shweta-patrachari), and Stephanie (clinical psychology PhD student at UCLA, linkedin.com/in/stephaniehtyu). You can contact me at calvinwoo32@gmail.com

**#119: Nonprofit To Reform Psychiatric Crisis Systems**

I'm Jess Watson Miller (@utotranslucence), and I'm looking for seed funding for a nonprofit to reform Western psychiatric crisis systems using a systems-change strategy inspired by the work of Donella Meadows. Last year I lost my brother to suicide after he had been locked up and spiraling for months in the hospital system, with multiple escape attempts. This story is not unique; many people who have used the crisis system, or who work in it, consider it frequently actively harmful to the people it aims to help. I believe there is a lack of creative efforts that aim at reform rather than abolition, that seek to do more than minimize risk and cost for payers and clinicians, and that have more risk-tolerant sources of funding than Medicaid and government health departments. I have a background in economics and the social sciences and am actively looking for partners with clinical and insurance/payer experience. My initial focus is creating research reports on bottlenecks to change and building connections with other reform projects. I am looking for connections to related projects; for people with experience in frontline crisis positions or in administrative positions in hospitals, insurance, or health law; and for donation funding: $10K would let me keep doing this in the short term; $100K would let me hire others and fund the basics of the organization for up to two years. More info here: shorturl.at/djmJ7. If you want to help, email me at jessicawatsonmiller@gmail.com

**#120: Tool To Develop Arguments In Parallel**

I've been working on a tool that facilitates an argument in which two competing theories are developed in parallel, iteratively.
The goal of the process is: (1) to produce a pair of coherent arguments that stand on their own, instead of a long chain of correspondence that can be difficult to follow; (2) to ensure that all relevant counterarguments are addressed, or, in case they're not, to make it easy for the reader to notice this; (3) to give the debaters an opponent to spar with from the start, which should result in sounder arguments; and finally (4) to be more feasible than adversarial collaboration, since the elusive goal of converging views need not be met. I can't seek funding via Grants++ for legal reasons. But if you're otherwise interested, check out the GitHub repo (tinyurl.com/2p8w4jbe) or the LessWrong post (tinyurl.com/2s3z7ct8) and feel free to contact me (mat5n@outlook.com). **#121: System To Refine Cell Media Recipes** Essentially all laboratory-based human biological research relies on maintaining cells in appropriate culture media. Most culture media were designed at least 50 years ago; today, we grow a wider variety of cells for a broader scope of purposes, but media formulation has not kept pace. A major obstacle to formulating media is the large design space: a culture medium may have 70 or more ingredients, and some ingredients are themselves complex mixtures of even more ingredients. The usual approaches to handling high-dimensional problems, factorial design and high-throughput screening, cannot deal with design spaces this big. However, there is a better way: iterative mathematical optimization, the same approach we use to train neural networks. As a postdoc, I have built a prototype system using motorized microscopy, computer vision, statistics, and mathematical optimization to iteratively test, evaluate, and refine media recipes. My postdoc is ending, and I am seeking the resources to fully automate and open-source this system. The stakes are higher than just improving culture media: biological research is maddeningly slow, and it’d be faster if machines could do more of the work. Although machines are bad at posing hypotheses, they are very good at iterative optimization. A more thorough description of this project is available at https://www.todhunter.dev. Please contact me at [todhunter@todhunter.dev](mailto:todhunter@todhunter.dev) if you’d like to learn more or contribute. **#122: In Vitro Gametogenesis** I'm a reproductive endocrinologist and Professor of OB/GYN at Oregon Health & Science University. We work on novel assisted reproductive technologies for the treatment of infertility. We propose an alternative method of in-vitro gametogenesis. Most people working in this field are trying to reprogram adult somatic cells to become oocytes (eggs). Our strategy involves somatic cell nuclear transfer (SCNT; also known as therapeutic cloning). We basically take a somatic cell (e.g. a skin cell), transfer it into an enucleated donor egg, and then induce that reconstituted egg to undergo meiosis, resulting in a haploid oocyte which can then be fertilized with sperm to create an embryo and, eventually, a baby. This would allow women with age-related infertility or premature ovarian failure, as well as same-sex male couples, to have genetically related children. Because this research involves human embryos, it is not eligible for NIH funding, which is why we are seeking private funding. Please contact me if you are interested in supporting this work at amatop@ohsu.edu. 
For proof of concept in the mouse model, we just published this paper: <https://www.nature.com/articles/s42003-022-03040-5> **#123: Next-Generation mRNA Vaccines** PopVax is an Indian startup working on next-generation mRNA vaccines for COVID-19, with a team of scientists that includes Moderna’s former Director of Chemistry and leading Indian mRNA experts. Our mRNA platform is built to tackle many of the problems inherent in those of Moderna & BioNTech-Pfizer, including their need for storage and transportation at supercold temperatures (our vaccine candidates use a novel LNP formulation and structural modifications to mRNA to achieve room-temperature stability for extended periods of time), their waning immunity to new variants (our multivalent sequences encode epitopes for existing and predicted future mutations), their extremely low-rate but real inducement of myocarditis in young men (we target delivery to avoid heart tissue), the high costs due to primarily Western supply chains (we have built out a fully indigenized, low-cost supply chain for critical raw materials), and, most importantly, their inability to produce sterilizing immunity in the mucosal membrane (our strategy involves an intranasal booster that we expect will, in most cases, block transmission). We have received a small amount of funding from the Gates Foundation, and are looking for a combination of grants and investment to fund both our clinical trials and the simultaneous buildout of manufacturing in the existing GMP facilities of a large Indian pharma company, with the intent to bring online 1 billion+ doses of new capacity per year in 2022. Contact me at soham@soh.am. **#124: Develop New Systems For Understanding Model And Human Genetics** I’m Dr. Bryan Andrews, a molecular biologist who studies the principles of genotype-phenotype mapping. I’m concerned that the strategies being used to assess human genetic variants today are fundamentally not scalable. That is, learning a lot about Gene A doesn’t improve your prior predictions about Gene B, and this fundamentally constrains computational predictions of gene function. It doesn’t have to be this way, and subtle but non-intuitive tweaks to how we assay gene function could make the data legible at a genome-wide scale to machines and humans alike. I’m currently developing new systems for functional genetics in model organisms, but I am seeking research support to help bridge the gap to human genetics. If you’re interested in providing such support, or wish to know more, please contact me at andrewsb@uchicago.edu. **#125: Plant Trees For Carbon Capture** My name is Dan Sparkman and I want your help to plant trees for carbon capture. There are plenty of people planting trees, and they are mostly worthy of support. However, most reforestation projects are eventually going to be cut down. I aim to plant forest gardens, which should stand for hundreds of years. The first principle of sustainability is (or ought to be): sustain the caretaker. Your project needs to look after the caretakers so that they have the time, money, and incentive to look after the project. Most reforestation projects aim at maximum growth for maximum carbon capture. Because of this, when local people have a need for that land, the carbon capture forest will be cut down. I plan on planting a mix of trees that provides income for locals every year. This mix will be centered on Black Walnuts and American Chestnuts, both native climax trees. 
This mix of trees will provide food and wood for years, even centuries, all with minimal human intervention once it is started. There have been recent studies on the West Coast of forest gardens still thriving after 200 years of neglect. I have no special skills - I'm just a passionate amateur - but I do have the connections with local landowners, governments, and tree hobbyists to plant 1,000 trees for $10,000. Give me more money and we can start to see real change. That's my project in a nutshell. Help me plant trees - trees which provide a food crop, supporting the farmer/landowner/locals, so they won't be cut down. Contact me at sparkman.dan@gmail.com
# So You Want To Run A Microgrants Program **I.** Medical training is a wild ride. You do four years of undergrad in some bio subject, ace your MCATs, think you’re pretty hot stuff. Then you do your med school preclinicals, study umpteen hours a day, ace your shelf exams, and it seems like you're pretty much there. Then you start your clinical rotations, get a real patient in front of you, and you realize - oh god, I know absolutely nothing about medicine. This is also how I felt about [running a grants program](https://astralcodexten.substack.com/p/apply-for-an-acx-grant). I support effective altruism, a vast worldwide movement focused on trying to pick good charities. Sometimes I go to their conferences, where they give lectures about how to pick good charities. Or I read their online forum, where people write posts about how to pick good charities. I've been to effective altruist meetups, where we all come together and talk about good charity picking. So I felt like, maybe, I don't know, I probably knew some stuff about how to pick good charities. And then I solicited grant proposals, and I got stuff like this: **A.** $60K to run simulations checking if some chemicals were promising antibiotics. **B.** $60K for a professor to study the factors influencing cross-cultural gender norms. **C.** $50K to put climate-related measures on the ballot in a bunch of states. **D.** $30K to research a solution for African Swine Fever and pitch it to Uganda. **E.** $40K to replicate psych studies and improve incentives in social science. Which of these is the most important? Part of my brain keeps helpfully suggesting "Just calculate how much expected utility people get from each!" I can check how many people die of antibiotic-resistant infections each year (Google says either 30K, 500K, or 1M, depending on which source you trust). That's a start! But the chance of these simulations discovering a new antibiotic is - 10%? 1%? 0.00001%? In silico drug discovery never works and anyone with half a brain knows that? The compounds being tested are dumb compounds? Even if they worked, bacteria would just develop more resistance in a week? Pharma companies would capture all the value from any new antibiotics and make it impossible for poor people to afford them? Five much better labs have already tried this and all the low-hanging fruit has been picked? Screening for new antibiotics is a great idea but actually it costs $4.50 and this is outrageously overpriced? And that's an easy one. What about B? If the professor figures out important things about what influences gender norms, maybe we can subtly put our finger on the scale. Maybe twenty years later, women across the Third World will have equal rights, economic development will be supercharged, and Saudi Arabia will be a Scandinavian-style democracy with a female Prime Minister. But maybe the professor won't find anything interesting. Or maybe they *will* find something interesting, but it will all be stuff like "it depends what kind of rice they cultivated in 4000 BC" and there won't be any subtle finger-putting-on-scale opportunities. Or maybe the professor will find something great, but nobody will listen to her and nothing will happen. Or maybe Third World countries will get angry at our meddling and hold coups and become even more regressive. Or maybe we'll overshoot, and Saudi Arabia will become really woke, and we'll have to listen to terrible takes about how the Houthi rebels are the new face of nice guy incel misogyny. Which is higher-value, A or B? 
Probably more women suffer under oppressive gender norms than people die of antibiotic-resistant infections, but dying is probably worse than inequality, and there's a clearer path from antibiotic -> recovery than from research paper -> oppressive countries clean up their act. What about second order effects? If women have more equality in Saudi Arabia, maybe an otherwise unrecognized female genius will discover a new antibiotic. But if we have more antibiotics, someone who would otherwise have died of a bacterial infection might liberate women in Saudi Arabia. Aaagh! Part of my brain helpfully suggests "Do a deep dive and answer these questions! This is the skill you are supposedly good at!" Quantifying these questions sounds crazy, but I am nothing if not [crazy for quantifying things](https://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/). It could work. …except that I had 656 applications like this, and everyone told me it was important to get back to people within a month or two. I don't think I could fully explore the subtleties of the antibiotic proposal in that time - let alone 656 proposals, most of which were even less straightforward. **II.** There’s a well-known solution to this kind of thing: Just make a ballpark guess and then get on with your life. The problem is: this grants program could be the most important thing I’ll ever do. Maybe everything else, all my triumphs and failures, will end up less important than getting this one grants program right. GiveWell estimates that if you donate to their top charity, Against Malaria Foundation, you can probably save a life [for about $5000](https://www.givewell.org/cost-to-save-a-life). ACX Grants raised $1.5 million. Donated to AMF, that’s enough to save 300 lives. I didn’t donate it to AMF. I believed that small-batch artisanal grant-making could potentially outperform the best well-known megacharities - or at least that it was positive value in expectation to see if that was true. But if your thesis is “Instead of saving 300 lives, which I could totally do right now, I’m gonna do this other thing, because if I do a good job it’ll save *even more* than 300 lives”, then man, you had *really* better do a good job with the other thing. Robin Dunbar [claims](https://en.wikipedia.org/wiki/Dunbar%27s_number) that humans have a capacity to handle 150 social relationships. Count up my friends, family members, coworkers, and acquaintances, and there will probably be about 150 who I can remember consistently and have some vague emotional connection to. If I made some mistake that killed all those people - all my friends, relatives, everyone I know - then in some “objective” sense, that would be about as bad as screwing up this grants program in some way that made it only half as good as the malaria counterfactual. This isn’t what really bothers me. My brain refuses to believe it, so I don’t really care. The part that really bothers me is that I know a lot of middle-class people who are struggling. Somebody who’s $10,000 in credit card debt, and it’s making their life miserable. Someone else who posts a GoFundMe for a $5,000 medical bill. Another person who’s burned out at their $40,000 a year job and would probably have vastly better health if they could take a few months off and then job-search from a place of financial stability. If on average these people need $10,000 each, my $1.5 million could help 150 of them. 
Most of these wouldn’t literally save lives, but a few might - I [saw a patient once](https://slatestarcodex.com/2015/02/12/money-money-everywhere-but-not-a-cent-to-spend/) who attempted suicide for want of $5,000. And it would sure brighten a lot of people’s years. So: $60,000 could test some promising antibiotics, or fund a book on gender norms. But it could also cure twelve Africans who would otherwise die of malaria, or save 5-10 Americans struggling under dead-end jobs and unpayable debts. I tried not to think too hard about this kind of thing; I was nervous it would make me so crazy that I’d run away from doing any kind of charity at all, and then everyone would be worse off. Even more, I was worried it would scare me into taking only the most mainstream and best-established opportunities, whereas I really do think a lot of value is in weird charity entrepreneurship ideas that are hard to quantify. But I couldn’t push it out of my mind far enough to do a half-assed job on the grants round, which meant confronting some of those problems head-on. **III.** …by which I mean “passing them off to other people”. All those effective altruism conferences might not have given me infallible grant-making heuristics, but they did mean I knew a lot of grantmakers. I begged the institutional EA movement for help, and they lent me experts in global poverty, animal welfare, and the long-term future. I was able to draw on some other networks for experts in prediction markets and meta-science. There wasn't as ready-made an EA infrastructure for biology, so I jury-rigged a Biology Grants Committee out of an ex-girlfriend who works in biotech, two of her friends, a guy who has good bio takes in the ACX comments section, and a cool girl I met at a party once who talked my ear off about bio research. Despite my desperation, I lucked out here. One of my ex’s friends turned out to be a semiprofessional bio grantmaker. The guy from the comments section was a bio grad student at Harvard. And the girl from the party got back to me with a bunch of detailed comments like “here’s the obscure immune cell which will cause a side effect these people aren’t expecting” or “a little-known laboratory in Kazakhstan has already investigated this problem”. These people really came through. Don’t take my word for it - trust the data. The five evaluators’ opinions correlated with each other at r = 0.55, whereas my uninformed guesses correlated with them at only r = 0.15. This made me feel much more confident I was picking up something real. But even the “experts” weren’t perfectly aligned. There were three proposals where one evaluator assigned the highest possible rating, and another assigned the lowest possible. Sometimes these were differences of scientific opinion. Other times they were more fundamental. One person would say "This idea would let you do so many cool things with viruses" and another person would say "This idea would let you do so many cool things with viruses, such as bioterrorism". Still, with their help I started to feel like I was finally on top of this. **IV.** Then I got the rug pulled out from under my feet again. I was chatting online with my friend Misha about one of the projects my Bio Grants Committee had recommended. He asked: given that they got funding from XYZ incubator a few years ago, why are they asking you for more funding now? XYZ incubator is known for funding their teams well, so they must have lost faith in these people. 
Some reports from a few years ago included the name of an impressive guy on their executive team, but more recent reports don’t mention him. The simplest explanation is that something went wrong, their executives expected rough going, their incubator got cold feet, and now they’re turning to a rube like you to help them pick up the pieces. I was kind of flabbergasted. I had a very nice report from my Bio Committee telling me that all the science here was sound, the cells they were working with were very nice cells, etc. But here was a whole new dimension I hadn’t considered. Misha explained that he was an angel investor - not even some kind of super-serious VC, just a guy who invested his own money - and this kind of thing was standard practice in his field. I’ll be honest. I know a lot of you are VCs. You read and support my blog, and I appreciate it. Some of the grant money I distributed came from VCs, which was very generous. But I always imagined you guys as kind of, you know, wandering into work, sipping some wine, saying “Hmmm, these guys have a crypto company, crypto seems big this year, I like the cut of their jib, make it so,” and then going home early. I owe you an apology. VC-ing is a field as intense and complicated and full of pitfalls as medicine or statistics or anything else. As a grant-maker, I was basically trying to be a VC, only without the profit motive. But that meant I was staking $1.5 million on my ability to practice a very difficult field which, until five minutes previous, I hadn’t realized existed. I solved this problem the same way I had solved my previous few problems: I begged Misha for help, and he agreed to serve on my grant evaluation team. But this kind of thing kept happening. Every time I thought I knew approximately how many different variables I needed to consider, my ship accidentally got blown off course into an entire undiscovered new continent of variable-considering, full of golden temples and angry cannibals. I’m not going to write up the whole travelogue, but here are ten things worth thinking about if you’re considering a grants program of your own: *(1): Many applicants ask for a random amount of money, and it’s your job to decide if you should give more or less.* For example, I originally said my grants would max out at 50-100K, and many people asked for 50-100K grants. Some of these people needed more than 50-100K, but figured any little bit helped. Others needed less than 50-100K, but figured they’d ask for more and let me bargain them down. Others had projects that scaled almost linearly, such that 50K could do ten times as much good as 5K, but only a tenth as much as 500K. They asked for 50-100K too. Suppose I gave a dozen organizations $50K. It would be *really suspicious* if a dozen organizations just happened to all be equally effective at spending the marginal dollar! The people screening new antibiotics and the people untangling cross-cultural gender roles really have *exactly equal* expected value? Realistically it shouldn’t be at all surprising if one of them was ten or a hundred times more valuable than the other! So maybe instead of giving both of them $50K, I should give one of them $100K and the other one nothing. There was a strong temptation for me to make lots of different grants, because then I would feel like a good person who’s helped many different causes. 
In many cases, I succumbed to this temptation: realistically I don’t know which of those two causes is better, and realistically I don’t know enough about how each of them scales with money to second-guess the grant-writers who requested approximately $50K each. But also, after making all my other choices, I nixed the five or six least promising grants, the ones I secretly knew I had only done to feel like a diverse person who gives to diverse cause areas, and gave all their money to the oxfendazole project, which most evaluators agreed was the most promising. *(2) Most people are terrible, terrible, TERRIBLE grantwriters* It’s fascinating! They’re all terrible in different ways! One person’s application was the very long meandering story of how they had the idea - “so i was walking down the street one day, and I thought…” - followed by all the people they had gone to for funding before me, and how each person had betrayed them. Another person’s application sounded like a Dilbert gag about meaningless corporate babble. “We will leverage synergies to revolutionize the paradigm of communication for justice” - paragraphs and paragraphs of this without the slightest explanation of what they would actually do. Everyone involved had PhDs, and they’d gotten millions of dollars from a government agency, so maybe I’m the one who’s wrong here, but I read it to some friends deadpan, it made them laugh hysterically, and sometimes they still quote it back at me - “are you sure we shouldn’t be leveraging synergies to revolutionize our paradigm first?” - and *I* laugh hysterically. Several applications were infuriatingly vague, like “a network to encourage talented people”. I, too, think talented people should be encouraged. But instead of answering the followup questions - how do you find the talented people? why would they join your network? what will the network do to encourage them? - the application would just dribble out a few more paragraphs about how under-encouraged the talent was these days. A typical pattern was for someone to spend almost their entire allotted space explaining why an obviously bad thing was bad, and then two or three sentences discussing why their solution might work. EG five paragraphs explaining why depression was a very serious disease, then a sentence or two saying they were thinking of fighting it with some kind of web app or something. Several applications very gradually made it clear that they had not yet founded the charitable organization they were talking about, they had no intention of doing so, and they just wanted to tell me they thought *I* should found it, or somehow expected my money to cause the organization to exist. This proved to be a sort of skeleton key to diagnose a whole genus of grant-writing pathologies: I think some people don’t understand, on a deep level, that between the steps “people donate money to cause” and “cause succeeds”, there’s an additional step of “someone takes the money and does some specific thing with it”. Or they thought it could be abstracted away - surely you just hire some generic manager type. Yeah, these grant applications are auditions for that job, and you failed. One person, in the process of explaining why he needed a grant, sort of vaguely confessed to a pretty serious crime. I don’t have enough specifics that I feel like I can alert police, and it’s in a different country where I don’t speak the language. Still, this is a deeper grantwriting failure than I imagined possible. 
*(3): Your money funges against the money of all the other grants programs your applicants are applying to.* Right now AI alignment has *lots* of cash. If there’s a really good AI alignment charity, Open Philanthropy Project and Founders Fund and Elon Musk and Jaan Tallinn will all fight each other to throw money at it. So if a seemingly really good AI alignment charity asked me for money, I would wonder - why haven’t they gotten money from a big experienced foundation? Maybe they asked and the big experienced foundations said no - but then, do I think I’m in a position to second-guess the experts? Or maybe they don’t know the big experienced foundations exist, which suggests they’re pretty new here - not necessarily a fatal flaw, but something to think about. Or maybe they’re asking the big experienced foundations too, but they figured they’d use me as a backup. How is this actionable? First, sometimes I was able to ask the big experienced foundations if they’d seen a grant application, and if so what they thought. But second, if I had a great global poverty proposal and a great AI safety proposal, and I thought they were both equally valuable, the correct course was to fund global poverty and ask the Long Term Future Fund to fund the AI safety one. (what actually happened was that the Long Term Future Fund approached *me* and said “we will fund every single good AI-related proposal you get, just hand them to us, you don’t have to worry about it”. Then I had another person say “hand me the ones Long Term Future Fund doesn’t want, and I’ll fund those.” Have I mentioned it’s a good time to start AI related charities?) Sometimes an experienced grantmaker would tell me that some specific application would be catnip for the XYZ Foundation, and we could forward it on to them instead of funding it ourselves. This made me nervous, because what if they were wrong and this great proposal slipped through the cracks? - but usually I trusted them. *(4) There are lots of second-order effects, but you’ll go crazy if you think about them too hard* Suppose a really good artist comes to you and asks for a grant. You think: “Art doesn’t save too many lives. But this art would be really good, and get really famous, and then *my grants program* would get really famous for funding such a great thing, and then lots more funders and applicants would participate the next time around.” Or suppose some promising young college kid asks you for a grant to pursue their cool project. Realistically the project won’t accomplish much, but she’ll learn a lot from it. And she seems like the sort of person who could be really impressive when she gets older. Is it worth giving her a token amount to “encourage her”? (my impression is that Tyler Cowen would say “Hell yes!” and that this is central to his philosophy of grantmaking). What about buying the right to boast “I was the first person to spot this young talent!” thirty years later when she wins her Nobel, which brings glory to your grants program down the line? What about buying her goodwill, so that when she’s head of the NSF one day you can ask a favor of her? Doesn’t that promote your values better than just giving money to some cool project? (but remember that $10K = saving two Africans from malaria, or relieving one American’s crushing credit card debt. That’s quite a price to “encourage young talent”, isn’t it?) What if there’s a project you don’t think will succeed, but which is *very close* to a field you want to encourage? 
Do you fund it in order to build the field or lure other people in? What about a project you *do* think will do good, but which is very close to something bad? The experienced grantmakers I worked with mostly suggested weighing these kinds of considerations less. They require too much precise foreknowledge (this art will become famous, this young student will become an impressive luminary, my grants will move lots of people into this field) when realistically you don’t even have enough foreknowledge to predict if your grant will work at all. Still, Tyler Cowen does this and it works for him. My only recommendation is to make a decision and stick to it, instead of going crazy thinking too hard. *(5) Being advised by George Church is not as impressive as it sounds* One applicant mentioned that his bio project was advised by George Church - Harvard professor, National Academy of Sciences member, one of TIME Magazine’s “100 Most Influential People In The World”, and generally amazing guy. I was astonished that a project with Church’s endorsement was pitching to me, and not to Peter Thiel or Elon Musk or someone. Then I got another Church-advised project. And another. What finally cleared up the mystery is that one of my Biology Grants Evaluation Committee members *also* worked for George Church, and clarified that Church has seven zillion grad students, and is extremely nice, and is bad at saying no to people, and so half the biology startups in the world are advised by him. There are lots of things like this. [Remember](https://en.wikipedia.org/wiki/Goodhart%27s_law): when a measure becomes a target, it ceases to be a good measure! *(6): Everyone is secretly relying on everyone else to do the hard work.* Sometimes people gave me pitches like “[Fintech billionaire] Patrick Collison gave us our first $X, but he didn’t fund us fully because he wanted to diversify our income streams and demonstrate wider appeal. Can you fill the rest of our funding for the year?” This was a pretty great pitch, because Patrick is very smart, has a top-notch grant-making infrastructure, and shares many of my values. I was pretty desperate to be able to rely on something other than my wits alone, and Patrick’s seal of approval was a tempting proxy. I tried to give all these people a fair independent evaluation, because otherwise it would defeat the point of Patrick making them seek alternative funding sources. But it sure did get them to the top of the pile. Then people started sending *me* requests like “Please give us whatever you can spare, just so that when we’re pitching to some other much richer person, we can say that other grantmakers such as yourself are on board.” This made me really nervous. It was bad enough risking my own money (and the money of my generous donors). But risking my reputation was something else entirely. If all grantmakers secretly relied on other grantmakers to avoid the impossibly complex question of figuring out who was good, then my decisions might accidentally move orders of magnitude more money than I expected. It’s all nice and well to replace your own judgment with Patrick Collison’s. But what if someone tried to replace their own judgment with *mine*? I have no solution here except to type up this 5000 word essay on how I really don’t know what I’m doing and you shouldn’t trust me. Those who have ears to hear, let them listen! 
*(7) If you can’t rely on other grantmakers, you’ll rely on credentials* I still think that credentialism - the thing where you ignore all objective indicators of a person’s worth in favor of what college they went to - is bad. But now I understand why it’s so tempting. I’d previously been imagining - you’re some kind of Randian tycoon, sitting serenely in your office, reviewing resumes for your 1001st software engineering drone. You can easily check how they do on various coding exams, Project Euler, peer ratings, whatever, but instead you go with the one who went to Harvard, because you’re a f@#$ing elitist. Now I’m imagining - you’re a startup founder or mid-level hiring manager or something, getting thrown into the deep end, asked to make a hire for your company despite having no idea what you’re doing. If you get it wrong, the company’s new product will flop and everyone will blame you. One software engineer claims to be an expert in non-Euclidean para-computing, whatever that is, and the other claims to be an expert in ultravectorized peta-fragmentation, or something to that effect. You Google both those terms and find that StackOverflow has removed the only question about them because it’s “off-topic”. The Standardized Coding Exam That Everyone Has Taken Which Allows Objective Comparison turns out not to exist. Project Euler exists, but you worry if you asked them about it they would think you’re crazy and obsessive. So you go with the one who has a Computer Science degree from Harvard, because at least he’s probably not literally lying about the fact that he knows what a computer is. (it’s not that everyone is an imposter with no idea what they’re doing. But everyone *starts out that way*, and develops their habits when they’re in that position, and then those habits stick.) *(8) You will suffer heartbreak* I’d been on a couple of dates with someone a month or two before the grants program. Then in the chaos of sorting through applications, I forgot to follow up. Halfway through the grant pile, I found an application from my date. It was pretty good, but I felt like it would be too much of a conflict of interest. I sent them an email: “Sorry, I don’t feel like I can evaluate this since we’re dating”. The email back: “I don’t consider us to still be dating”. This remains the most stone-cold rejection I have ever gotten. *(9) If you can’t rely on other grantmakers or credentials, you’ll rely on prejudices and heuristics* Here are some of mine: your new social network won’t kill Facebook. Your new knowledge database won’t kill Wikipedia. No one will ever use argument-mapping software. No matter how much funding your clever and beautiful project to enforce truth in media gets, the media can just keep being untruthful. The more requests for secrecy are in a proposal, the less likely it is to contain anything worth stealing. Subtract one point for each use of the words “blockchain”, “ML”, and “BIPOC”. A lot of these italicized sections here are trying to get at the same point: when you’re truly lost in a giant multidimensional space that requires ten forms of expertise at once to make real progress, you’ll retreat to prejudices and heuristics. That’s what credentialism is, that’s what relying on other grantmakers is, and - when you have neither Harvard nor Patrick Collison to save you, you’ll rely on [that one blog post you read that one time saying X never works](https://markusstrasser.org/extracting-knowledge-from-literature/). 
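For what it’s worth, the buzzword-penalty prejudice above is so mechanical you can write it down. Here’s a tongue-in-cheek sketch in Python - the word list and weights are invented for illustration, not an actual rubric anyone used:

```python
# A tongue-in-cheek sketch of the buzzword-penalty heuristic.
# The penalty list and weights are made up for illustration.

PENALTY = {"blockchain": 1, "ml": 1, "bipoc": 1}

def heuristic_score(proposal: str, base: int = 10) -> int:
    """Start from a base score and subtract one point per buzzword."""
    words = (w.strip('.,!?"()') for w in proposal.lower().split())
    return base - sum(PENALTY.get(w, 0) for w in words)

print(heuristic_score("An ML tool to put BIPOC art on the blockchain"))
# 10 - 1 - 1 - 1 = 7
```

The point of writing it out is the same as the point of the rock: once a judgment can be expressed in ten lines, you should ask what value the judge is adding beyond those ten lines.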
*(10) …but your comparative advantage might be in not doing any of this stuff* See my post from yesterday, [Heuristics That Almost Always Work](https://astralcodexten.substack.com/p/heuristics-that-almost-always-work). What’s your story for why you need a microgrants program? Why not just donate to GiveWell or OpenPhil or some other charity or foundation you respect? (technically OpenPhil doesn’t accept individual donations, but if you break into their office and leave $1.5 million on a desk, what are they going to do?) If your story is “I have a comparative advantage in soliciting grant proposals” or “I have a comparative advantage in soliciting funders” or even “it takes the excitement of a personal grants program to incentivize me to do charity at all”, then fine, whatever. But if your story is “I think I have a comparative advantage in assessing grants” - then consider actually having a comparative advantage in assessing grants. If you only fund teams with a lot of Harvard PhDs who already have Patrick Collison’s seal of approval, you don’t have much of a comparative advantage. You could be replaced by a rock saying “FUND PRESTIGIOUS PEOPLE WHO OTHER PRESTIGIOUS PEOPLE LIKE”. I don’t want to say they’re *sure* to get funding - one of life’s great mysteries is how many foundations are desperate for great causes to fund, how many great causes are desperate for funding, and how the market still doesn’t always clear. And if everyone galaxy-brains themselves into not funding the obvious best teams, then the obvious best teams never get funded. And the surest way not to do that is to stop galaxy-braining and fund the obvious best teams. Still, given that your money is somewhat fungible with other people’s money, one way to have an outsized impact is to outperform that rock. That means trying to find undervalued projects. Which means not *just* using the same indicators of value as everyone else: credentials, popular cause areas, endorsements. It means taking chances, trying to cultivate long-term talent, trying to spot the opportunities you’re uniquely placed to see and other people are most likely to miss. This is a dangerous game - most of the time you try to beat Heuristics That Almost Always Work, you fail. Still, part of what you’re doing in setting yourself up as a grants evaluator is claiming to be able to do this (unless you have another story in mind, like that you’re good at soliciting proposals or leveraging your personal brand to get funding). The overall grantmaking ecosystem needs some people to take the obvious high-value opportunities, and other people to seek out the opportunities whose value isn’t obvious. If you want to be the latter, good luck. The other way the HTAAW post is relevant here: beware of information cascades. If you give someone a grant because they have good credentials and two other grantmakers approved of them, they’re going to be telling the next guy “We have good credentials and *three* other grantmakers approve of us!” This was another worry that pushed me to put a supra-HTAAW level of work into some grants. **V.** If you solve all these problems, congratulations! You can write a blog post announcing that you are giving out grants! People you respect will say nice things about you and be happy! Then you have to actually give people money. You know how, whenever there’s a debate about cryptocurrency, some crypto fanboy gushes about how it makes sending money so much easier? 
And if you’re like me, you think “yes, but right now you can just enter a number into Paypal, that already seems pretty easy to me”? I take it all back. The crypto future can’t come soon enough. Sending money is terrible. Paypal charges 2-3% fees. If you’re sending $50K, that’s a thousand dollars. Your bank might do wire transfers for you, but they have caps on how much you can send, and that cap may be smaller than your grant. Wires can involve anything from sending in a snail mail form, to going to the bank in person, to getting something called a “Medallion Signature Guarantee” which I still have not fully figured out. Sometimes a recipient would tell me their bank account details, and my bank would say “no, that account does not exist”, and then we would be at an impasse. If you have double (or God forbid, triple) digit numbers of recipients, it all adds up. I solved this the same way I solved everything else - begged friends and connections to do it for me. The Center For Effective Altruism agreed to take over this part, which was a lifesaver but created its own set of headaches. They’re a tax-deductible registered charity, which means they’re not supposed to give money to politics or unworthy causes. But some of my recipients were doing activism or things that were hard to explain to the federal government (eg helping a researcher take some time off to re-evaluate their career trajectory). They asked me to handle those myself, and I muddled through. Also, registered charities aren’t allowed to let donors influence their grant-making decisions, so I wasn’t allowed to donate directly to my own grants program; I had to split it in two and fund my fraction separately, with inconsistent tax-deductibility. I understand that Molly Mielke is working on a project called [Moth Minds](https://www.mothminds.com/) that will take away the headache and make personal grants programs easier. So far her website is heavy on moth metaphors and light on details, but moth metaphors are also good, and I’m long-term excited about this. **VI.** More and more people are talking about microgrants programs. Maybe you’re one of them. So: should you run a grants round? Your alternative to running a grants round is giving to the best big charities that accept individual donations. GiveWell tries to identify these, and ends up with things like Against Malaria Foundation, which they think can save a life for ~$5,000. So to a first approximation, run a grants round if you think you can do better than this. Why should you expect to do better than these smart people who have put lots of effort into finding the best things? GiveWell mostly looks at scalable and stable projects, but most microgrants work with small teams of people pursuing idiosyncratic opportunities. Funding research teams, activist groups, and companies/institutions can easily outperform direct giving to individuals. There *are* very large organizations who handle these kinds of one-off grants. They’re *also* smart people who put lots of effort into finding the best things. So why should you expect to outperform *them*? Maybe because they say you can. I talked to some of these big foundation people, and they were unexpectedly bullish on microgrants. They feel like their organizations are more limited by good opportunities than by money. If you can either donate your money or your finding-good-opportunities ability, consider the latter. How can big foundations be short of good opportunities when the world is so full of problems? 
This remains kind of mysterious to me, but my best guess is that they set some high bar, donate to everything above the bar, and keep the rest of their money in the hopes that good charities that exceed the bar spring up later - or spend the money trying to create charities that will one day exceed the bar. Global health charities [sometimes set](https://forum.effectivealtruism.org/posts/nXL2MeQQBoHknpz8X/what-s-the-role-of-donations-now-that-the-ea-movement-is?commentId=vCeGqpbd9HQseGsSt) a bar of “10x more effective than GiveDirectly”, where GiveDirectly is a charity that gives your money directly to poor people in Africa; other cause areas are harder to find a bar for but maybe you can [sort of eyeball it](https://forum.effectivealtruism.org/posts/nXL2MeQQBoHknpz8X/what-s-the-role-of-donations-now-that-the-ea-movement-is?commentId=vCeGqpbd9HQseGsSt). This model suggests you should only donate your finding-good-opportunities ability if you think there’s a chance you can clear the relevant bar, but there might be pretty high value of information in seeing whether this is true. Anyone deeply interested in this question should read Carl Shulman’s [Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation](https://forum.effectivealtruism.org/posts/BhvTMY7K7z97tbHgS/risk-neutral-donors-should-plan-to-make-bets-at-the-margin) and Benjamin Todd’s comment [here](https://forum.effectivealtruism.org/posts/nXL2MeQQBoHknpz8X/what-s-the-role-of-donations-now-that-the-ea-movement-is?commentId=vCeGqpbd9HQseGsSt). But here are some preliminary reasons why your microgrants program might be worth it: *Because you have a comparative advantage in soliciting proposals*. Big effective-altruist foundations complain that they’re entrepreneurship-constrained. That is, funders give them lots of money, they’ve already funded most of the charities they think are good up to the level those charities can easily absorb, and now they’re waiting for new people to start new good charities so they can fund those too. This is truest in AI alignment, second-truest in animal welfare and meta-science, and least true in global development (where there are always more poor people who need money). ACX Grants got some people who otherwise wouldn’t have connected with the system to get out there and start projects, or at least to mention that their project existed somewhere that people could hear it. One of my big hopes is that next year or the year after OpenPhil gives $10 million or something to some charity they learned about because of me. I don’t know if this will happen but I think the possibility made this grants round worthwhile in expectation. *Because you have a comparative advantage in getting funding*. I might have been in this category: I think some people trusted me with their money who wouldn’t necessarily have trusted OpenPhil or GiveWell. But I’m having trouble thinking of many other scenarios where this would happen. *Because you have a comparative advantage in evaluating grants.* This one is tough. The big foundations have professional analysts and grantmakers. These people are really smart and really experienced. Why do you think you can beat them at their own game? One possible answer: you’re also really smart and experienced. Fast Grants is run by Tyler Cowen and Patrick Collison (plus Emergent Ventures with Shruti Rajagopalan); it wouldn’t surprise me if their particular genius is more valuable than a big foundation’s increased specialization and resources. 
If that’s you, then good work, I guess. A second possible answer: no big foundation exactly captures your beliefs and values. Scott Aaronson [ran a grants round recently](https://scottaaronson.blog/?p=6256) and donated entirely to causes involved in STEM education. Maybe he thinks STEM education is more important than other big players believe (which actually seems very plausible). Or maybe his value system puts less emphasis on pleasure vs. suffering compared to the human urge toward deep understanding of Nature, and he feels incompletely aligned with OpenPhil who eg [donate $786,830 to crustacean welfare](https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/crustacean-compassion-general-support). A third possible answer: you have no absolute advantage, but you do have a comparative advantage. Scott Aaronson was both a student and professor at one of the math education groups he donated to, knew people who had been to the others, and had readers of his (math-focused) blog advise him on others still. I totally believe Aaronson is at least as qualified to evaluate math education as big foundations are, especially math-education-as-understood-and-appreciated-by-Scott-Aaronson’s-values. I gave several grants to prediction markets, something I’m plausibly an expert on. (which is a bad example, because the small handful of people who know more about prediction markets than I do are disproportionately employed as OpenPhil grantmakers. But *one day* I’ll find a cool new field before OpenPhil does, and then I’ll give it lots of grants and feel very smug.) So, all of these are ways your microgrants can potentially add value over a generic gift to someone else. So why might you *not* want to start your own grants program? Sometimes human temptations caught up with me. I funded some grants that were cool, and made me seem cool for funding them, and made me happy, and supported my politics and identity commitments - but which, when I judge them by the standards of “was giving these people $X better than saving $X/5000 lives from malaria or relieving $X/10,000 people’s life-ruining credit card debts?”, probably fail. Part of the appeal of GiveWell is that you don’t have to win any spiritual battles against temptation; you know you’re doing *more or less* the right thing. Grants programs throw you right into the middle of spiritual battles, each one you lose counts against your effectiveness rate, and after you lose enough you’re subtracting value instead of adding it. So should you run your own grants program, or donate to an existing charity? If you have any of the above comparative advantages, if you plan to work hard enough to realize them, and if you win spiritual battles so consistently that you have to fight off recruiters for your local paladin order - I say try the program. If not - and especially if you expect to half-ass the evaluation process, or succumb to the pressure to give to feel-good causes that aren’t really effective - then donate to existing charities. I really don’t want to make this sound like the loser option: donating to existing charities is usually the right thing to do, and choosing the less flashy but more effective option is also a heroic act. If you’re on the fence, I’d err on the side of doing it, since the upside is potentially very high and the downside limited. **VII.** There’s one other reason to run a microgrants program: you think it would be fun. I have no moral objection to this. 
Nothing along the lines of “wouldn’t it be better to something something expected utility?” Realistically the highest expected utility thing is whatever gets you interested enough to donate. If that’s a grants program, do it. My actual objection is: no it won’t be. I can’t say this with certainty. Some people are very weird. Some people are masochists. Some people already have experience in a related field and won’t feel as overwhelmed as I did. But I’m already scheming ways to try to capture the positive effects of a grants program without having to run one myself. If the American way is a “government of laws, and not of men”, then the ACX way is a government of byzantine highly speculative institutions instead of men. So I’m thinking about how to replace my role with an [impact certificate](https://forum.effectivealtruism.org/posts/yNn2o3kEhixZHkRga/certificates-of-impact)-based [retroactive public goods funding market](https://medium.com/ethereum-optimism/retroactive-public-goods-funding-33c9b7d00f0c), and working on talking to various interesting people who might be able to make this happen. Once I recover from the current grants round, I’ll push them harder and see if we can get a prototype by next fall. The basic idea would be: you all send in your grant proposals as usual. I (and any other interested funders) pledge some amount of money (let’s say $250K) to be distributed to successful projects one year later, ie *after* they’ve succeeded and made a difference. Then some group of savvy investors (or people who *think* they’re savvy investors) commit the same amount of *their* money (so $250K in our example) to buying grants, ie fully funding them in exchange for a meaningless certificate saying they “own” the grant - if people wanted, this could be an NFT, since that technology excels in producing meaningless certificates. At the end of some period, maybe a year, I would come in with my $250K and “give it” to the successful projects, by which I mean to whoever owned their impact certificates. Think of it as kind of like a prediction market for which grants will do well. Don’t worry, it’ll make more sense when we do it. (don’t get too excited though, this will probably be harder than I expect, and maybe none of it will pan out) All miserable slogs eventually become pleasant memories (eg high school, travel, medical residency). I can already sense the same thing happening to ACX Grants. I’m proud of what we accomplished, and with the pain fading away and only the fruits of our labor left, I feel like it was good work. But if you’re wondering whether or not to start a grants program, the most honest answer I can give is “I tried this once, and now I’m hoping to invent an entirely new type of philanthropic institution just to avoid doing it again.”
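To make the mechanics above a little more concrete, here’s a toy simulation in Python - a minimal sketch in which all project names, prices, and success probabilities are invented for illustration, not drawn from ACX Grants:

```python
# A toy simulation of the retroactive-funding idea sketched above.
# All names, prices, and probabilities are invented for illustration.
import random

PLEDGED_POT = 250_000          # the retroactive funders' commitment

projects = [                   # (name, asking price, chance of success)
    ("antibiotic-screen", 60_000, 0.15),
    ("forecasting-tool",  40_000, 0.40),
    ("cbt-app",           50_000, 0.30),
    ("tree-planting",     30_000, 0.50),
]

# Step 1: investors fully fund each project now, receiving its certificate.
portfolio = {name: price for name, price, _ in projects}

# Step 2: a year later, reality resolves each project.
random.seed(0)
succeeded = {name for name, _, p in projects if random.random() < p}

# Step 3: the pledged pot is paid out to whoever holds the certificates
# of the successful projects (split equally here; a real market would
# weight the split by judged impact).
payout = PLEDGED_POT / len(succeeded) if succeeded else 0
for name, paid in portfolio.items():
    received = payout if name in succeeded else 0
    print(f"{name}: paid ${paid:,}, received ${received:,.0f}")
```

The investors’ incentive is the usual market one: buy certificates you expect the pot to reward, at prices that reflect your probability estimates - which is what makes the whole thing behave like a prediction market over grants.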
# Heuristics That Almost Always Work **The Security Guard** He works in a very boring building. It basically never gets robbed. He sits in his security guard booth doing the crossword. Every so often, there’s a noise, and he checks to see if it’s robbers, or just the wind. It’s the wind. It is always the wind. It’s never robbers. Nobody wants to rob the Pillow Mart in Topeka, Ohio. If a building on average gets robbed once every decade or two, he might go his entire career without ever encountering a real robber. At some point, he develops a useful heuristic: if he hears a noise, he might as well ignore it and keep on crossing words: it’s just the wind, bro. This heuristic is right 99.9% of the time, which is pretty good as heuristics go. It saves him a lot of trouble. The only problem is: he now provides literally no value. He’s excluded by fiat the possibility of ever being useful in any way. He could be losslessly replaced by a rock with the words “THERE ARE NO ROBBERS” on it. **The Doctor** She is a primary care doctor. Every day, patients come to her and say “My back hurts” or “My stomach feels weird”. She inspects, palpates, percusses and auscultates various body parts, does some tests, and says “It’s nothing, take two aspirin and call me in a week if it doesn’t improve”. It always improves; no one ever calls her. Eventually, she gets sloppy. She inspects but does not palpate. She does not do the tests. She just says “It’s nothing, it’ll get better on its own”. And she is always right. She will do this for her entire career. If she is very lucky, nothing bad will happen. More likely, two or three of her patients will have cancer or something else terrible, and she will miss it. But those people will die, and everyone else will remember that she was such a nice doctor, such a caring doctor. Always so reassuring, never poked and prodded them with needles like everyone else. Her heuristic is right 99.9% of the time, but she provides literally no value. There is no point to her existence. She could be profitably replaced with a rock saying “IT’S NOTHING, TAKE TWO ASPIRIN AND WAIT FOR IT TO GO AWAY”. **The Futurist** He comments on the latest breathless press releases from tech companies. *This will change everything!* say the press releases. “No it won’t”, he comments. *This is the greatest invention ever to exist!* say the press releases. “It’s a scam,” he says. Whatever upheaval is predicted, he denies it. *Soon we’ll all have flying cars!* “Our cars will remain earthbound as always”. *Soon we’ll all use cryptocurrency!* “We’ll continue using dollars and Visa cards, just like before.” *We’re collapsing into dictatorship!* “No, we’ll be the same boring oligarchic pseudo-democracy we are now” *A new utopian age of citizen governance will flourish.* “You’re drunk, go back to bed.” When all the Brier scores are calculated and all the Bayes points added up, he is the best futurist of all. Everyone else occasionally gets bamboozled by some scam or hype train, but he never does. His heuristic is truly superb. But - say it with me - he could be profitably replaced with a rock. “NOTHING EVER CHANGES OR IS INTERESTING”, says the rock, in letters chiseled into its surface. Why hire a squishy drooling human being, when this beautiful glittering rock is right there? **The Skeptic** She debunks everything. Telepathy? She has a debunking for it. Bigfoot? A debunking. Anti-vaxxers? Five debunkings, plus an extra, just for you. 
When she started out, she researched each phenomenon carefully, found it smoke and mirrors, and then viciously insulted the rubes who believed it and the con men who spread it. After doing this a hundred times, she skipped steps one and two. Now her algorithm is “if anyone says something that sounds weird, or that contradicts popular wisdom, insult them viciously.” She’s always right! When the hydroxychloroquine people came along, she was the first person to condemn them, while everyone else was busy researching stuff. When the ivermectin people came along, she was the first person to condemn them too! A flawless record (shame about the time she [condemned fluvoxamine equally viciously](https://forbetterscience.com/2021/03/26/die-with-a-smile-antidepressants-against-covid-19/), though). Fast, fun to read, and a 99.9% success rate. Pretty good, especially compared to everyone who “does their own research” and sometimes gets it wrong. Still, she takes up lots of oxygen and water and food. You know what doesn’t need oxygen or water or food? A [rock](https://www.lesswrong.com/tag/absurdity-heuristic) with the phrase “YOUR RIDICULOUS-SOUNDING CONTRARIAN IDEA IS WRONG” written on it. This is a great rock. You should cherish this rock. If you are often tempted to believe ridiculous-sounding contrarian ideas, the rock is your god. But it is a Protestant god. It does not need priests. If someone sets themselves up as a priest of the rock, you should politely tell them that they are not adding any value, and you prefer your rocks un-intermediated. If they make a bid to be some sort of thought leader, tell them you want your thought led by the rock directly. **The Interviewer** He assesses candidates for a big company. He chooses whoever went to the best college and has the longest experience. Other interviewers will sometimes choose a diamond in the rough, or take a chance on someone with a less-polished resume who seems like a good culture fit. Not him. Anyone who went to an Ivy is better than anyone who went to State U is better than anyone who went to community college. Anyone with ten years’ experience is better than anyone with five is better than anyone with one. You can tell him about all your cool extracurricular projects and out-of-the-box accomplishments, and he will remain unswayed. It cannot be denied that the employees he hires are very good. But when he dies, the coroner discovers that his head has a rock saying “HIRE PEOPLE FROM GOOD COLLEGES WITH LOTS OF EXPERIENCE” where his brain should be. **The Queen** She rules over a volcanic island. Everyone worries about when the volcano will erupt. The wisest men of the kingdom research the problem and decide that the volcano has a straight 1/1000 chance of erupting any given year, uncorrelated with whether it erupted the year before. There are some telltale signs legible to the wise - a slight change in the color of the lava, an imperceptible shift in the smell of the sulfur - but nothing obvious until it’s too late. The Queen founded a Learned Society Of Vulcanologists and charged them with predicting when the volcano will erupt. Unbeknownst to her, there were two kinds of vulcanologists. Honest vulcanologists, who genuinely tried to read the signs as best they could. And The Cult Of The Rock, an evil sect who gained diabolical knowledge by communing in secret with a rock containing the words “THE VOLCANO IS NOT ERUPTING”. Every so often an honest vulcanologist felt like the lava was starting to look a little weird and told the Queen. 
The Queen panicked and asked everyone for advice. The honest vulcanologists said “look, it’s a hard question, the lava seems kind of weird today but it’s always weird in some way or other, this volcano rarely erupts but for all we know this time might be the exception”. The rock cultists secretly checked their rock and said “No, don’t worry, the volcano is not erupting”. Then the volcano didn’t erupt. The Queen punished the trigger-happy vulcanologist who sounded the false alarm, grumbled at the useless vulcanologists who weren’t sure either way, and promoted the confident cultists who correctly predicted everything was okay. Time passed. With each passing year, the cultists and the institutions and methods of thought that produced them gained more and more status relative to the honest vulcanologists and their institutions and methods. The Queen died, her successor succeeded, and the island kept going along the same lines for let’s say five hundred years. After five hundred years, the lava looked a bit weird, and the new Queen consulted her advisors. By this time they were 100% cultists, so they all consulted the rock and said “No, the volcano is not erupting”. The sulfur started to smell different, and the Queen asked “Are you sure?” and they double-checked the rock and said “Yeah, we’re sure”. The earth started to shake, and the Queen asked them one last time, so they got tiny magnifying glasses and looked at the rock as closely as they could, but it still said “THE VOLCANO IS NOT ERUPTING”. Then the volcano erupted and everyone died. The end. **The Weatherman** He lives in a port town and predicts hurricanes. Hurricanes are very rare, but whenever they happen all the ships sink, so weathermen get paid very well. If you’ve read your Lovecraft, you know that various sinister death cults survived the fall of Atlantis, and none are more sinister than the Cult Of The Rock. This weatherman was an adept among them and secretly communed with a rock that said “THERE WON’T BE A HURRICANE”. For many years, there was no hurricane, and he gained great renown. Other, lesser weathermen would sometimes worry about hurricanes, but he never did. The businessmen loved him because he never told them to cancel their sea voyages. The journalists loved him because he always gave a clear and confident answer to their inquiries. The politicians loved him because he brought their town fame and prosperity. Then one month, a hurricane came. It was totally unexpected and lots of people died. The weatherman hastily said “Well, yes, sometimes there are outliers that even I can’t predict, I don’t think this detracts from my vast expertise and many years of success, and have you noticed some of the people criticizing me have business connections with foreign towns that probably plot our ruin?” An investigation was launched, but the businessmen and journalists and politicians all took his side, and he was exonerated and restored to his former place of honor. **Heuristics That Almost Always Work** Sometimes there’s a Heuristic That Almost Always Works, like “this technology won’t change everything” or “there won’t be a hurricane tomorrow”. And sometimes the rare exceptions are so important to spot that we charge experts with the task. But the heuristics are so hard to beat that the experts themselves might be tempted to secretly rely on them, while publicly pretending to use more subtle forms of expertise.
“My statistical model, accounting for chaos theory, barometric pressure, and the price of tea in China, says there won’t be a hurricane tomorrow. Rejoice!” Maybe this is because the experts are stupid and lazy. Or maybe it’s social pressure: failure because you didn’t follow a well-known heuristic that even a rock can get right is more humiliating than failure because you didn’t predict a subtle phenomenon that nobody else predicted either. Or maybe it’s because false positives are more common (albeit less important) than false negatives, and so over any “reasonable” timescale the people who never give false positives look more accurate and get selected for. This is bad for several reasons. First, because it means everyone is wasting their time and money having experts at all. But second, because it builds false confidence. Maybe the heuristic produces a prior of 99.9% that the thing won’t happen in general. But then you consult a bunch of experts, who all claim they have *additional* evidence that the thing won’t happen, and you raise your probability to 99.999%. But actually the experts were just using the same heuristic you were, and you should have stayed at 99.9%. False consensus via [information cascade](https://www.lesswrong.com/tag/information-cascades)! (A toy calculation at the end of this piece makes the arithmetic concrete.) This new invention won’t change everything. This emerging disease won’t become a global pandemic. This conspiracy theory is dumb. This outsider hasn’t disproven the experts. This new drug won’t work. This dark horse candidate won’t win the election. This potential threat won’t destroy the world. All these things are *almost* always true. But Heuristics That Almost Always Work tempt us to be more certain than we should be of each. *[**EDIT:** Some people are asking if this is just the same thing as black swans. I agree black swans are great examples, but I think I’m talking about something slightly different, which includes heuristics like “you should hire the person from the top college” or “you should believe experts”. If you want you can think of a high school dropout outperforming a top college student as a “black swan”, but it doesn’t seem typical. And the point isn’t just “sometimes black swans happen”, but that the existence of experts using heuristics causes predictable over-updates towards those heuristics.]* Whenever someone pooh-poohs rationality as unnecessary, or makes fun of rationalists for spending zillions of brain cycles on “obvious” questions, check how they’re making *their* decisions. 99.9% of the time, it’s Heuristics That Almost Always Work. (but make sure to watch for the other 0.1%; those are the people you learn from!)
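To make that information-cascade arithmetic concrete, here is a minimal sketch in Python. The 99.9% base rate is from the post; the likelihood ratio for a genuinely informative expert is a hypothetical number chosen purely for illustration:

```python
# Toy model of the information cascade described above (illustrative numbers).
# Base rate: the rare event fails to happen 99.9% of the time,
# so prior odds of "no event" are 999:1.
prior_no = 0.999
odds_no = prior_no / (1 - prior_no)

# Hypothetical likelihood ratio for one genuinely informative expert
# saying "no": P(says no | no event) / P(says no | event).
lr = 0.99 / 0.50

# Naive observer: treats ten expert "no" verdicts as independent evidence.
naive_odds = odds_no * lr ** 10

# Correct observer: the experts all consulted the same rock, so their
# verdicts carry no information beyond the base rate. No update.
correct_odds = odds_no

print(f"naive P(no event):   {naive_odds / (1 + naive_odds):.6f}")      # ~0.999999
print(f"correct P(no event): {correct_odds / (1 + correct_odds):.6f}")  # 0.999000

# Bonus, for the volcano story: with a 1/1000 annual eruption chance, the
# cult's flawless record survives 500 years with probability
# 0.999 ** 500 ~= 0.61 - plenty of time to take over the Learned Society.
print(f"P(no eruption in 500 years): {0.999 ** 500:.2f}")
```

The naive observer ends up around 99.9999% confident; the correct observer stays at 99.9%. That gap is exactly the predictable over-update toward the heuristic that the post warns about.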
Scott Alexander
45269466
Heuristics That Almost Always Work
acx
# Two Small Corrections And Updates **1:** I titled part of my post yesterday “RIP Polymarket”, which was a mistake. Polymarket would like to remind everyone that they are very much alive, with a real-money market available to anyone outside the US, and some kind of compliant US product (maybe a play-money market) in the works. **2:** Sam M and Eric N want to remind you that you have until the end of next week to get your [2022 prediction contest entries in](https://docs.google.com/document/d/1HZ3UC9JIuhFdlVM_xYtj60a6ba7elWGiAnROMobkFXM/edit). Also: > We have some plans to compare (aggregates of) ACX reader predictions against various prediction markets. But there are probably much cooler things we can do which we haven't thought of yet! If you run a prediction market and have an idea for an interesting collaboration that involves sharing our data before it's publicly released, get in touch with us through the contest feedback form. If it's something time sensitive (e.g. an experiment that needs to be started before the contest submission deadline), make sure you do so soon. If you don't run a prediction market but still have an idea for something interesting we can do with the contest data, leave a comment on this open thread and we'll hopefully see it. You can reach them through [this form](https://docs.google.com/forms/d/14TY66nT7Q4EGb2hauCubPY5P2eFGsN1gUZ5Z9VBk5kM/viewform?edit_requested=true).
Scott Alexander
48422514
Two Small Corrections And Updates
acx
# The Passage Of Polymarket **Long Live Polymarket** Polymarket [got fined $1.4 million by the Commodity Futures Trading Commission](https://www.coindesk.com/business/2022/01/24/polymarket-relaunches-site-after-cftc-shutdown-but-not-for-us-traders/) and was ordered to cease noncompliant trading in the US. Polymarket is probably the biggest prediction market currently available. US law considers unlicensed prediction markets to be somewhere between illegal gambling and illegal futures trading, ie definitely illegal. Polymarket and a few peers had survived anyway, through the “crypto is the Wild West and nobody has time to deal with all the illegal things happening there” exemption. Apparently they found time. The [rumor](https://twitter.com/TradeandMoney/status/1478370053234036752) on the prediction market grapevine (which I absolutely cannot substantiate; please don’t sue me for libel) is that this might have something to do with competing prediction market Kalshi. Kalshi spent [two years](https://kalshi.com/blog/kalshi-public-beta-is-live) and probably a *lot* of money getting the CFTC to agree they were legal, and [has a former CFTC Commissioner as a Director](https://kalshi.com/blog/former-cftc-commissioner-brian-quintenz-joins-our-board). Their legal status forces them to do an annoying and expensive regulatory dance all the time; illegal prediction markets were able to move more nimbly, provide better user experience, and eat their lunch. This was a big problem for them - but they’d just finished making lots of friends in the agency that decides which illegal things to crack down on, so, as Tyler Cowen likes to say, “solve for the equilibrium”. For its part, Polymarket was an easy target. Despite “decentralized” being the fourth word on their website, they weren’t exactly at Satoshi levels of opsec. They had a normal identifiable guy as CEO, a normal headquarters building in New York City, and they did normal business things like [raise millions of dollars in seed funding](https://www.forbes.com/sites/rorymurray/2020/10/19/polymarket-raises-massive-4-million-round-from-polychain-naval-ravikant-other-notable-investors/?sh=43943da0c62e). Some might call a headquarters building with a CEO sitting in it and millions in the bank account a “center”, so in what sense was Polymarket decentralized? See [here](https://twitter.com/collins_belton/status/1478120307709779968) for more discussion, and [here](https://www.cftc.gov/media/6891/enfblockratizeorder010322/download) for the full text of the CFTC decision, but my understanding is - all of the markets themselves were smart contracts on the blockchain run by automated market makers, but you could only access them through the Polymarket website, and the Polymarket people decided how they resolved. Polymarket did not charge fees, and made money by providing liquidity. The CFTC seemed angriest about the “you can only access contracts through the Polymarket website” part of this. Crypto attorney [Collins Belton](https://twitter.com/collins_belton/status/1478120300600385536) writes: > It’s hard to assess which factors were most aggravating and most mitigating from CFTC’s perspective. For instance, it’s hard to assess whether Polymarket may have been okay if its agents didn’t engage in any [liquidity provider] activity and operated no [front end]. So: Polymarket got fined $1.4 million, and was ordered to make its real-money markets inaccessible to US-based traders (the rest of the world is still fine). 
It’s **v**ery **p**oor **n**ews to hear that a **v**illainous **p**olitical **n**onentity blocked this **v**ital **p**rediction **n**exus, and I guess we Americans have no other options besides accepting that we’re **v**astly **p**oorer **n**ow. Meanwhile, Polymarket put out a rainbows-and-butterflies press release saying that: > We are excited to continue championing our mission and building out our global footprint, information and educational initiatives, and U.S. product I assume this means they’re excited to continue building their prediction market somewhere else, and will include a US version with play money, just like lots of other companies have done. **I Will Limit My Outrage To Four Paragraphs, Then Move On** My favorite commentary on this decision is Nuno Sempere’s [The American Empire Has Alzheimer’s](https://forum.effectivealtruism.org/posts/vcxj7bGERxDzuiEzr/forecasting-newsletter-looking-back-at-2021). He lists various bad decisions the US has made, from Vietnam to the bungled withdrawal from Afghanistan last year. In this last case, President Biden said there was “no circumstance where you see people being lifted off the roof of an embassy” barely a month before we saw exactly that. Throughout these bad decisions, intelligence analysts and national security advisors were begging the government to come up with some kind of good forecasting infrastructure. By the early 2000s, many of them had settled on prediction markets as the most promising opportunity. In 2008, twenty-two prominent economists including five Nobel Prize winners wrote an editorial begging the CFTC to legalize prediction markets; the CFTC refused. In 2010, Philip Tetlock (one of the signatories on the pro-prediction market letter) did some pretty basic forecasting work, not even prediction market level, and proved that he could significantly outperform top analysts at the CIA with access to classified information. The government refused to hire him or use any of his methods, and continued shutting down new prediction markets as they arose. Starting a few years ago, cryptocurrency provided a brief “thaw” when people thought they might be allowed to try innovative forecasting mechanisms. They tried, they created really impressive work, they made (and deserved) millions of dollars, and then the government kicked them out of the country anyway. The US is becoming the North Korea of forecasting. Every other civilized country allows prediction markets. In a perfect world, they could ignore our constant own goals and move on without us. But because America has a disproportionate share of money, users, coders, and entrepreneurs, a US-less prediction market ecosystem won’t be living up to its potential. That means decreased ability to gather and process information and worse decision-making worldwide. **Where Do Prediction Markets Go From Here?** …aside from “to other countries”. I think there’s a general sense among people interested in the field that prediction markets are vastly underperforming their potential. There ought to be a billion dollar prediction market, maybe a ten billion dollar one. Smart VCs clearly believe something like this, or Kalshi wouldn’t have gotten [$30 million+ in investment](https://www.forbes.com/sites/jeffkauflin/2021/12/01/from-fintechs-crypto-top-founders-to-wall-streets-best-dealmakers-30-under-30-finance-2022/?sh=75e9ba8d249b). Sometimes people who incorrectly believe I know things about prediction markets ask me if I know the missing secret sauce. I don’t think there’s any secret.
A prediction market will strike it big when it gets three things right at the same time: * Real money * Easy to use * Easy to create your own subsidized markets “Real money” should be self-explanatory. [Metaculus](https://www.metaculus.com/questions/) and [Manifold](https://manifold.markets/) are both very nice, but so far they’re limited to a small group of enthusiasts playing in their spare time. I value them both, but neither is the killer app that makes prediction markets as central to everyday life as stock markets or polls or whatever. “Easy to use” is kind of self-explanatory, but with some caveats. A big part of ease-of-use is liquidity; you can get that from a big user base or from clever deployment of automated market makers (one standard design is sketched at the end of this piece). A market that requires crypto knowledge is harder to use than one that doesn’t; one that’s inaccessible from the US is harder to use than one that isn’t. Also all the normal things like UI and search. “Easy to create your own markets” is where we’ve gotten stuck so far. Prediction markets are absolutely on top of questions about whether Donald Trump will win various elections. This is a solved problem. What I really wanted last year (and would have subsidized!) was a market about whether Alameda County, California, would permit indoor gatherings of 50 people on January 8th 2022 (ie would I be forced to cancel my wedding). But I also would have appreciated the ability to put a few questions to prediction markets before starting my psychiatry practice, or my grants program, or any of a dozen other things I did. A friend has gone further, and half-jokingly said they want to create conditional prediction markets about whether they’re compatible with various women in our friend group, to be paid out six months after the first date. Some of these applications are attempts to route around the principal-agent problem. Maybe I have some question about whether a certain grant would succeed, I’m not sure who to ask, and even if someone gives me a “Bob Smith, Grant Evaluator” business card, I don’t know if he’s any good. A prediction market takes all the pain out of searching for information - if I subsidize it enough, it’ll attract people with the relevant skill set who will solve my problem for me. Probably some of these ideas wouldn’t work, but probably other ideas I can’t even think of now *would*. I don’t know what the killer app for prediction markets will be. But we’re not going to find out unless people can create their own subsidized markets and play around. Polymarket took some baby steps towards this before the settlement: they had a Discord server where anyone could propose questions, and a lot of those questions became markets. But they still had to be general interest, not “let Alice’s five friends predict her dating life”. And there’s a big difference between “talk it over with company representatives on a Discord server” and “press a button”. Imagine if you could only tweet by emailing Jack Dorsey and convincing him that your comment was a good thing to have on Twitter. Even if Jack had good judgment and approved most requests, this would be a long way from the limbic system <-> Send Tweet loop that real Twitter users know and love. I asked some people in the business why they won’t do this. They said most people are bad at writing good resolution criteria.
They don’t want their employees to get stuck resolving incredibly dumb questions about people’s dating lives, hunting down inaccessible or conflicting information, and making a bunch of people mad whichever way they decide. As far as I can tell, Manifold Markets solved that problem with their “proposer decides the resolution, *caveat emptor*” strategy. But Manifold is US-based and can’t use real money, so there’s still no way to subsidize a market effectively. (This is why I’m pessimistic about Kalshi. They could potentially do a lot of good in the “will Afghanistan collapse?” types of markets the Nobel laureates want, though even there I think some of their betting limits will give them trouble - $25,000 is good money, but not *quite* good enough to incentivize founding the prediction market equivalent of a Wall Street trading firm. But even if they solve this, I can’t imagine the regulators giving them permission to host “will this grant work out?” or “how will my dating life go?” markets; it’s just too weird, and the CFTC is too conservative. I don’t know, maybe their connections will come through and pull it off, but I don’t even know if they’re ambitious enough to *want* this, and I hate having to rely on one organization.) Right now my hopes are, in ascending order of likelihood: * Manifold figures out some kind of weird crypto thing that isn’t real money from a legal perspective, but *is* real money from a “people really want it and will put a lot of effort into getting it” perspective. * Polymarket does this outside the US, and it succeeds so wildly that everyone agrees we need a US version. * Someone creates a genuinely decentralized prediction market which the CFTC either believes is legal, or can’t figure out how to shut down. I’m most optimistic about this last one, but it would be tough. You could try a version of Polymarket without the centralized organization gating the front end and providing liquidity. But then how would it make money? It probably wouldn’t - which might be fine, Metaculus is a non-profit and is still exceptionally well-run and stable. If someone did a good job of this I would try really hard to get it funded, and would expect to succeed. But what if the CFTC says no, they’re still angry? **In Search Of Lost Crypto Dreams** Sometimes it seems like everyone who’s ever used crypto has a different model of why it might be good. Here’s mine. There are lots of financial products which people want, but which regulation prevents them from having. Some of these are totally without social value, like Ponzi schemes and Bored Apes. Others have a lot of social value, like prediction markets, initial coin offering style funding schemes, and cutting middlemen out of immigrant remittances. More than a few might even have negative social value, like easy ways to buy drugs, or super-high-interest loans marketed to very impulsive people. Without passing judgment on whether these things are good or bad, they are legion. Collectively, they’re a zillion-dollar market. In theory, crypto is hard to stop and hard to trace (yes, I know the blockchain is public, but I also know that TornadoCash and Monero exist). Anonymous users could create these services so easily and in such numbers that governments would never be able to stop all of them. Or they could be run as smart contracts, where even if regulators arrest the original programmers, they can’t stop the program from existing and continuing to offer its service. 
This vision is a lot like the original vision of the Internet: a magical place that nobody could censor, where information would flow freely across national and ideological borders. That vision was . . . maybe 25% achieved? It’s pretty great that I can write a blog like this instead of begging for my supper at a major media organization. But after a brief period of discombobulation, dictatorships found it easy to create their own walled-garden Internets through light-touch censorship; although there are ways around most of their tricks, ordinary people [don’t bother with them](https://www.lesswrong.com/tag/trivial-inconvenience) (**v**ery **p**oor **n**ews indeed!) And in practice most people ended up basing their Internet explorations at a few big businesses like Google, Facebook, and Twitter, which became easy prey for censors and in some cases rush to self-censor even more zealously than governments demand. It’s not that the Internet *can’t* create a magical censorship-resistant infrastructure, it’s that it’s 5% easier to sell your soul to FAANG, and so many people take that option that the few people who don’t aren’t really a critical mass for escaping governments or building new communities. Is crypto following the same path? I am a lazy crypto user, and I notice that nowadays I have to grovel harder to the government to access my crypto than to access my fiat - every crypto site is a gauntlet of “please tell us your name/birthdate/SSN/address”, “please upload your ID”, “please wait for us to complete our KYC process before trading”, etc. I assume there’s some really flashy way that cool people use their crypto without doing any of this, but there’s also Tor and Brave and Mastodon, and that doesn’t mean the Internet is a free speech privacy paradise. But then, what’s the point of crypto? I mean, right now the point is to be a Ponzi scheme, and it works great; 1000% annual returns are a perfectly adequate substitute for there being a point. But most people think crypto only has another 0 - 2 doublings left in it; after everyone who’s going to invest invests, what then? In the early days, you could make a small fortune by buying Bitcoin and holding, or a large fortune by minting a new token and telling *other* people to hold it, or a gigantic fortune by creating the infrastructure people needed to do A or B. All these activities are more or less legal and there’s no point in angering the government while you do them. And brilliant coders generally wouldn’t work on the illegal projects when it was so easy to make money doing the legal ones. So everybody rolled over and let the government regulate them, because why not? At some point, the Ponzi-ing will run out. Optimistically, the brilliant coders will need something else to do, and someone will try creating things like genuinely decentralized, impossible-to-shut-down prediction markets. I feel like this should be possible, but I also don’t understand why it hasn’t happened already. Maybe the technical challenges are too hard. Pessimistically, by then the crypto infrastructure, crypto social norms, and crypto user base will be so comprehensively locked into the current regulated model that this won’t be able to get off the ground. You’ll try to use Coinbase to send your crypto to the prediction market, and it will warn you that this is a Non-Preferred Site that isn’t a Coinbase Partner and they’ll be informing the IRS of this transaction so don’t try anything funny.
A few smart people will know ways around this, and everyone else will just suffer. Will crypto ever live up to its potential? Only time will tell - since they’ve banned every other method of forecasting things.
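A side note on the “clever deployment of automated market makers” mentioned above: one standard design is Hanson’s Logarithmic Market Scoring Rule (LMSR). The sketch below is purely for illustration and is not a description of Polymarket’s actual mechanism:

```python
import math

class LMSR:
    """Hanson's Logarithmic Market Scoring Rule: an automated market maker
    that always quotes a price, so a market stays liquid even with few
    traders. The sponsor's worst-case loss is b * ln(n_outcomes)."""

    def __init__(self, n_outcomes: int, b: float = 100.0):
        self.b = b                   # liquidity parameter (the subsidy knob)
        self.q = [0.0] * n_outcomes  # shares sold of each outcome so far

    def _cost(self, q) -> float:
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, i: int) -> float:
        """Current implied probability of outcome i."""
        denom = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[i] / self.b) / denom

    def buy(self, i: int, shares: float) -> float:
        """Buy `shares` of outcome i; returns the cost charged to the trader."""
        new_q = list(self.q)
        new_q[i] += shares
        cost = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return cost

# A two-outcome market ("will the event happen?") starts at 50/50;
# buying YES shares pushes the implied probability up.
m = LMSR(n_outcomes=2)
print(round(m.price(0), 3))      # 0.5
print(round(m.buy(0, 50.0), 2))  # cost of 50 YES shares (~28.09)
print(round(m.price(0), 3))      # 0.622 - the market moved
```

The liquidity parameter `b` is the subsidizer’s knob: a larger `b` means prices move less per share traded, and the market sponsor’s worst-case loss is capped at `b * ln(n_outcomes)`. This is why “easy to create your own subsidized markets” and automated market makers go together - the subsidy is what buys the liquidity.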
Scott Alexander
48322340
The Passage Of Polymarket
acx
# Open Thread 210 This is the weekly visible open thread. Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed. This one is even-numbered, so go wild - or post about whatever else you want. Also: **1:** We now have a “report comment” button! If someone posts a terrible comment, click the three dots after “Reply”: …and choose “Report Comment” on the menu. This will send it to me so I can check if it merits a banning. Remember, ACX rules are that comments should be at least *two* of polite, relevant, and plausibly-true-according-to-me. This means I do *not* generally delete comments for being false (ie “misinformation”) unless they are also rude or irrelevant. However, I may also unprincipledly delete comments that bring shame and/or negative media attention upon this blog, depending on how much shame and negative media attention I’m up for that particular day. And I may occasionally delete comments I think are stupid and lowering the average level of debate. **2:** Some updates to my [Predictions for 2022](https://astralcodexten.substack.com/p/predictions-for-2022-contest), especially relevant if you’re playing in the related [contest](https://docs.google.com/document/d/1HZ3UC9JIuhFdlVM_xYtj60a6ba7elWGiAnROMobkFXM/edit): First, I misinterpreted Matt Yglesias’ question about a Q4 2021 recession as being about a Q4 2022 recession, so my prediction on it is dumb and you should ignore it - for your own contest entries, please predict the Q4 2021 recession, as Matt did. Second, for unclear reasons I gave the wrong current floor value for Bored Apes; I will be judging the prediction on whether they end up lower than the real floor price as of last week ($320K), *not* whether they end up lower than the false number I gave. Sam and Eric can weigh in on how they’re going to judge this in the contest. **3:** Somebody sent in an ACX Grant application saying they didn’t want any money, but they wanted data on my grantmaking process for their study on what makes teams succeed. I took this out of my pile intending to come back to it after the grants round, and then I lost it. If that was you, please send me an email at scott@slatestarcodex.com reminding me what I can do for you. **4:** Remember, if you won an ACX Grant I am willing to provide updates and advertisements for your project on Open Threads. ACX Grants winner Yoram Bauman writes: > **One paragraph summary of Jan 2022 progress on #climate24x7 (advancing smart climate efforts in the legislature and/or via 2024 ballot measures in at least 7 states):** In **Nebraska**, climate-concerned R state senator [John McCollister](https://en.wikipedia.org/wiki/John_S._McCollister) introduced [LB944](https://nebraskalegislature.gov/bills/view_bill.php?DocumentID=46715), a short 3-page bill that cuts the regressive 5.5% state sales tax rate on electricity once electric utilities hit certain carbon intensity targets; see these [one-pagers](https://docs.google.com/document/d/1DWiVM4Ii-XJGsvk2z99pGbiQWUs5p27aXDCbI543N2Y/edit?usp=sharing). We have a page of potential improvements based on feedback from utility folks and others and are anticipating a public hearing in late February or early March.
A similar idea is making progress in **South Dakota**, where a D legislator has expressed interest in similar legislation, and in **Arizona**, where I’ve hired Autumn Johnson of [Tierra Strategy](https://tierrastrategy.com/) to pursue this; we’ve written [one-pagers](https://docs.google.com/document/d/1LBw0rQPa0s8Dd0Gp0BBbjPtvyOXuFQjOALVZ3hYNQ4o/edit?usp=sharing) and [draft legislation](https://docs.google.com/document/d/1IRUntuEY9VN-J3Seb96lQFsqF2OdlNuk/edit), she’s gotten fairly positive feedback from utilities, enviros, and legislative staff, and we’re doing our best to find a House member to introduce legislation before the cut-off of Friday Feb 4. In **Utah** we continue to work on the signature-gathering [plan](https://docs.google.com/document/d/1knZzp2aYdkmmfDVV24M-LBfNz1q0BvSITBReNVaKYY0/edit?usp=sharing) for the [Clean The Darn Air](https://www.cleanthedarnair.org/) 2024 ballot measure effort; we also anticipate the introduction of a similar bill in this year’s legislative session. Also trying to push forward with ideas or exploratory conversations in **Colorado**, **Georgia**, **Massachusetts**, and **Michigan**. Additional funding would help extend Autumn’s contract and help push forward faster in Nebraska, South Dakota, and elsewhere! From Yoram Bauman ([yoram@standupeconomist.com](mailto:yoram@standupeconomist.com), @standupecon) **5:** In the comment section of [Why Do I Suck?](https://astralcodexten.substack.com/p/why-do-i-suck), someone linked this Scholars’ Stage post on [Why Public Intellectuals Have Short Shelf Lives](https://scholars-stage.org/public-intellectuals-have-short-shelf-lives-but-why/), which I found interesting.
Scott Alexander
48329219
Open Thread 210
acx
# Book Review Contest Rules 2022 Okay, we’re officially doing this again. Write a review of a book. There’s no official word count requirement, but [last year’s finalists and winners](https://astralcodexten.substack.com/p/book-review-contest-winners) were often between 2,000 and 10,000 words. There’s no official recommended style, but check the style of [last year’s finalists and winners](https://astralcodexten.substack.com/p/book-review-contest-winners) or my ACX book reviews ([1](https://astralcodexten.substack.com/p/book-review-lifespan), [2](https://astralcodexten.substack.com/p/book-review-which-country-has-the), [3](https://astralcodexten.substack.com/p/book-review-arabian-nights)) if you need inspiration. Please limit yourself to one entry per person or team. Then send me your review through [this Google Form](https://docs.google.com/forms/d/18ft8ZxQcKFwMsi_DZINn7d7VIso_y1Armfr59YeOGLE/edit). The form will ask for your name, email, the title of the book, and a link to a Google Doc. The Google Doc should have your review exactly as you want me to post it if you’re a finalist. DON’T INCLUDE YOUR NAME OR ANY HINT ABOUT YOUR IDENTITY IN THE GOOGLE DOC ITSELF, ONLY IN THE FORM. I want to make this contest as blinded as possible, so I’m going to hide that column in the form immediately and try to judge your docs on their merit. (does this mean you can’t say something like “This book about war reminded me of my own experiences as a soldier” because that gives a hint about your identity? My rule of thumb is - if I don’t know who you are, and the average ACX reader doesn’t know who you are, you’re fine. I just want to prevent my friends / other judges’ friends / Internet semi-famous people from having an advantage. If you’re in one of those categories and think your personal experience would give it away, please don’t write about your personal experience.) PLEASE MAKE SURE THE GOOGLE DOC IS UNLOCKED AND I CAN READ IT. By default, nobody can read Google Docs except the original author. You’ll have to go to Share, then on the bottom of the popup click on “Restricted” and change to “Anyone with the link”. If you send me a document I can’t read, I will probably disqualify you, sorry. First prize will get at least $2,500, second prize at least $1,000, third prize at least $500; I might increase these numbers later on. All winners and finalists will get free publicity (including links to any other works you want me to link to) and free ACX subscriptions. And all winners will get the right to pitch me new articles if they want (nobody ever takes me up on this). Your due date is April 5th. Good luck! If you have any questions, ask them in the comments. And remember, the form for submitting entries is [here](https://docs.google.com/forms/d/18ft8ZxQcKFwMsi_DZINn7d7VIso_y1Armfr59YeOGLE/edit).
Scott Alexander
48201926
Book Review Contest Rules 2022
acx
# ACX Grants ++: The First Half This is the closing part of [ACX Grants](https://astralcodexten.substack.com/p/acx-grants-results). Projects that I couldn’t fully fund myself were invited to submit a brief description so I could at least give them free advertising here. You can look them over and decide if any seem worth donating your money, time, or some other resource to. I’ve removed obvious trolls, a few for-profit businesses without charitable value who tried to sneak in under the radar, and a few that violated my sensibilities for one or another reason. I have *not* removed projects just because they’re terrible, useless, or definitely won’t work. My listing here isn’t necessarily an endorsement; *caveat lector*. Still, some of them are good projects and deserve more attention than I was able to give them. Many applicants said they’d hang around the comments section here, so if you have any questions, ask! (bolded titles are my summaries and some of them might not be accurate or endorsed by the applicant) When you’re done with these, you can now find the second half of the list [here](https://astralcodexten.substack.com/p/acx-grants-the-second-half). --- **#1: A Movement To Fight Attention Hijacking** It’s my assertion that we need to draw people’s attention to the methods marketers use to get us to buy stuff – to point out the techniques used in digital and physical environments. The trappings of an advanced economy have led us to create some persuasive methods of engagement. And while these have been used to subliminally guide us towards purchases, by drawing attention to them as a phenomenon, we can unlock new ways to use them for the greater good – for educational purposes, to encourage positive behaviours, for healthcare, mental wellbeing, and other challenges we face as part of what Alvin and Heidi Toffler refer to as ‘the Third Wave’ of development. Won’t that denigrate the intent behind these techniques? Well… let’s be fair – advertisers have had it good for a long time. That said, does the fact we know what television commercials or online ads are trying to do make us buy less stuff? Nope. While drawing attention certainly makes us more aware of the purpose of the medium used, it also leads us to greater transparency and an increased opportunity to mix media – for any purpose. Could the UI that made Facebook addictive be used to promote healthy eating? Could we re-engineer Gruen transfer for hospital appointments? Can Kansei design principles remove racial bias? I want to kickstart a movement to test these ideas out. A movement called *punktoj*. Anyone game? Ping me: dave.barton@tbc.wtf **#2: Understand The Texture Of Pain** The project consists of writing software to edit textures in real-time in a browser using texture synthesis techniques, exploring ways of narrowing the state-space, and funding user testing to determine the usefulness of the method. We would like to demonstrate as a proof of concept how different medical conditions which have similar symptomatologies at the surface-level (e.g. “stabbing pain in shoulder”) show up as recognizably different textures when visualized with this technique. We think this will significantly contribute to foundational research on pain with applications for medical diagnosis, as well as pain management and treatment.
We also think that these visualizations will advance our understanding of the true meaning of pain scales: we will be collecting self-reported pain levels in reference to clinically-used scales and correlating them to the properties of the visualized pains. For example, we may find that a certain pain described as “2/10” could match a visualization of 10 pin-pricks per second, while a pain of the same type described as “3/10” could match 50 pin-pricks/s, and a “4/10” pain could match 250 pin-pricks/s, and so on. I.e. these visualizations might provide a very grounded + transparent way to show that pain scales are non-linear, and possibly logarithmic (Gómez-Emilsson, 2019, see: https://tinyurl.com/ha834tpm) in nature. See full proposal here: https://docs.google.com/document/d/1zLWyxhOMNqHp8tGqOAK2aABHOXpMNj_tmQbIy8Fdlbk/edit?usp=sharing **#3: Acoustics Of Historical Speeches** I use acoustic simulation to investigate historical accounts of speeches to large numbers of people (see Benjamin Franklin's experiment on George Whitefield's audible range for an example). This requires visiting sites of speeches to take geometric and sound pressure measurements, and some archival research for background on the sites. Once I have this information I can build a computer acoustic simulation with my own software setup. I've already done this for Whitefield, Julius Caesar, and Elizabeth I. I'm now trying to raise $10,000 for site visits for Demosthenes at the Pnyx in Athens, Henry V at Agincourt ("Band of Brothers" speech), and Abraham Lincoln's Gettysburg Address. Because the project is so interdisciplinary, it's hard to find funding through standard channels. If you or anyone else is interested in funding science to learn more about history, email me at boren@american.edu. **#4: Handbook For Making Friends In The Post-College Environment** Hi! I'm bbqturtle. When moving to a new city, you are often confronted with loneliness. When graduating from college and surrounded by unstructured environments to meet new people, you are often confronted with loneliness. I recently transplanted cities 3 times and each time discovered 3-4 different methods of gathering communities. In Boston I have discovered the largest and best solution to this problem. I am working to distill a "best practices for making friends and growing communities in the post-college environment". The outline and content theory is complete, but lacking final touches and an editor. It is essentially an instruction manual and handbook to problem-solve rough situations in the way best for the community. I do not have much bandwidth for this project because I do not have any audience. If this is a project that would actually interest you, please send a token Bitcoin amount to 39jcUUkQb7QqUANFXQ5ZGC4b4YvFBRFxou . Or email me at josh (underscore symbol) advance@yahoo.com. If more than 50 people are interested, I will prioritize this by end of month. If I receive more than $200, I will hire an editor to review it. If nobody is interested I may abandon the project. Thank you for your consideration! **#5: YIMBY Explainer Video On Migration Chains** I'm Michael Wiebe, and I will make a YIMBY explainer video on migration chains, to show how even expensive new apartments can improve housing affordability for everyone. The basic idea: A moves into a newly built apartment, B moves into A's old house, C moves into B's old house, which frees up affordable housing for D in a poorer neighborhood.
The video will put a human face on the process, by interviewing everyone in a chain, and showing how real people in low-income neighborhoods benefit from new market-rate apartments. For example, we could show a poor college student who benefits from a vacancy in a studio apartment near their university, and trace that vacancy to the construction of a market-rate apartment building. The idea of migration chains is new, and is a big improvement over a simple supply and demand model, which NIMBYs don’t trust anyway. This model has an intuitive mechanism, where people moving *into* new housing are also moving *out* of old housing. Since the mechanism is general and applies everywhere, promoting this idea is high-leverage: it can be used to support YIMBY activism all around the world. I'm a PhD economist and can summarize the research (eg. see [here](https://twitter.com/michael_wiebe/status/1455999023375011842)). The video will be produced by urbanist youtube channel About Here, who budgets $5000 per video. We'll also need $1000 to buy food to incentivize volunteers to trace out a complete chain. Please contact me at maswiebe@gmail.com. **#6: System Dynamics Simulator** I'm Oleksandr Nikitin, and I want to build a system dynamics simulator. Enable independent researchers to simulate, forecast, and visualize metabolic pathways, epidemic spread, mass transit, ecology, macroeconomics, etc. Show, don't tell. Without code. Think Airtable+Vensim+Roam+Kumu, integrated and working offline. Why offline? Why simulate? Why a new tool? Complex systems must be simulated. You miss emergent phenomena if you analyze parts separately or simplify the details. Offline sets you free from distractions and groupthink. Free to make your own breakthroughs. Take your references, notes and data with you, dive deep, then return with the verified, reproducible, interactive model. Research can take years. Tools should outlast devices and app stores. And it must be fast. Isn’t it insane for a productivity tool to make you wait? I spent years on prototypes and algorithms, tested in companies since 2013, and now I want to put these experiments together. Not as a startup. As a tool accessible to everyone. The plan: create a community of curious, inquisitive makers, empower them with a small, fast, and robust core app, iterate and grow together, augment human intelligence even more, and understand the world. I seek funding to focus on this project full-time, for two years. To launch and to guide people to the finished research. Sounds inspiring? Worth the money? Want more details? Ping me at oleksandr@tvori.info. Also see https://cortex.substack.com/ **#7: Science Demonstrations In Nepal** I am Binod Rajbhandari, currently a Teaching Postdoc at Texas Tech University. I will teach at the university for 9 months, and this summer I am very interested in doing science demonstrations in mostly remote parts of Nepal. I have volunteered in science communication in different middle and high schools in Lubbock, TX. The students were always intrigued by the musical acoustics and some astronomy demos. I think many schools in Nepal, excluding some rich schools, do not have proper science classes. I think even if I can visit 50-70 schools during the summer, I can inspire at least a few students in Nepal to become future scientists with 7000-10000 of ACX grant funding. I could have asked ACX for a research grant for the summer, but inspiring a kid from a mountainous part of Nepal might outweigh the other in social contributions.
[You can reach me at binodrb43@gmail.com] **#8: Alternative Solar Power Plants** I am developing an alternative approach to solar-thermal power plants: I want to use lenses rather than mirrors to concentrate sunlight. Reasons why lenses might be better than mirrors: 1) More robust, easier to maintain, more self-contained. 2) Easier / cheaper to mass-produce at the extreme scale needed to transition society away from fossil fuels. 3) Will be possible to capture the light into optical fibers which will make it easier to transport from the collection field to the centralized hub where it will be utilized. I wrote my own software to design the lens, which will be a Fresnel lens that has two layers that make it an achromatic doublet. It will be very difficult and cost several hundred thousand dollars to tool the plastic injection mold to produce these lenses. Tooling the mold is the critical, make-or-break stage of this project. I am not sure if it is even possible to machine the mold accurately enough, or to counteract the lens warping as it cools. But if we can do this then it will be possible to mass-produce these lenses far more cheaply than solar panels or mirrors, and once the lens is manufactured we can use existing technology for the other parts of the plant. I am looking for somebody to help with the injection molding, and somebody who can help with funding the project, which will cost several hundred thousand dollars. Email me at ecpoppenheimer@gmail.com **#9: Help Research Teams Improve Software Quality** I'm Victor Engmark, and I'm looking for funding to help research teams improve their software quality. Researchers have often been criticized for producing broken and unusable software, and improving the quality is key to both trust and reproducibility. I've worked as a software developer for 18 years, with a focus on quality assurance. I will need to spend at least a few weeks per project. This is typically the minimum amount of time it takes to set up a reasonable process, teach the baseline techniques, and hand over the maintenance. If you can help, please contact victor@engmark.name. **#10: Create A “Tech Tree” With Foresight Institute** The new Neurotechnology branch of the Foresight Institute is seeking funding and publicity for the making of a ‘tech tree’. A tech tree will synthesize information in a hierarchical step-by-step format from all domain experts in brain-related technologies (neuronal imaging, brain therapies, brain-computer interfaces, whole brain emulation), give a current state of the industry and envision step-by-step goals for current researchers, funders and leaders to access. [You can find a 4 minute post explaining tech trees at https://foresight.pub/techtreepost, a 4 minute tech tree example at https://fsnone-bb4c.kxcdn.com/wp-content/uploads/2021/11/2021-11-15-Longevity-Tech-Tree-Whitepaper.pdf, and our youtube at https://www.youtube.com/c/ForesightInstitute. Check out our hackathon to build a crowdsourcing/funding app - see https://mapsmap.devpost.com, or consider funding us at https://foresight.org/about-us/our-mission/.] **#11: Preserve And Categorize Web Fiction** I'm Makin, and I'm looking for a tiny amount of funding to save the world of web fiction from the ravages of time, with a focus on rational/ist/EA fiction. So much of the history of the genre is hard to reach, and I can put it up for permanent categorization, linking and eventual mirroring on a website.
All I need is money to keep DigitalOcean instances up for a long, LONG time, though I'm also looking at IPFS as an option (and if you're aware of any better fits, I welcome advice!). My last project was Homestuck.net, a pretty complete archive of the best works of the Homestuck fandom, which has taught me the necessary steps to archive and display things for future humans to use long-term. I also started the initiative to revamp the r/rational wiki so it was actually usable. If my archival project sounds good to you, I'm looking for yearly Patreon pledges at patreon.com/makin. You can reach me at makin@protonmail.com if a funny-looking Patreon is not an option or you want more details about how your money will be used. **#12: Search Engine To Analyze Research Findings** inlitro is a web-based search engine to analyze research findings and extract new insights across millions of life science research papers. Tyler Cowen and Scott think this is no longer a good/feasible idea. I think the opposite. Probably good to get in touch with them/check their assumptions before reaching out via the email on inlitro.com. **#13: Scholarships In Ethiopia** The non-profit Omo Valley Research Project provides scholarships for indigenous students from the Omo Valley in Ethiopia, home to some of the most traditional groups on earth, most of whom typically live in small-scale subsistence communities. Less than 1% of the people of the Omo Valley have received any formal education but increased development is opening up new opportunities for education as well as transforming their livelihoods. For students who start school, attendance after Grade 5 is often financially out of reach and a vocational college or university impossible. Supporting education for those who desire it equips community members to fully participate in the opportunities generated through development, gives them the ability to maintain a degree of cultural autonomy, and negotiate with the governmental and market-based organizations that are transforming their lives. Finally, many students yearn to learn about the world, to learn history, science, and math. Giving these students the ability to pursue their dreams is the greatest investment in human capital I know of. We support secondary, university, and vocational education through direct contributions for tuition or cost of living expenses. The students we support are all from traditional nomadic pastoralist communities including the Hamar and Nyangatom ethnic groups. [Email us at] Omovalleyresearchproject@gmail.com [or check out our website at] omovalleyresearchproject.org/ **#14: Survey On Embryo Selection** I would like to design and conduct a survey akin to the moral machine project, but for embryo selection rather than self-driving cars. The idea is to glean the informed preferences of parents over the kinds of traits they want their children to have. This is already a challenge, and the survey would need to be carefully worded to avoid framing effects and other psychological biases. But the bigger challenge, and my interest in the project, is that parents' preferences will largely depend on the preferences over traits that other parents have. As I wrote about in my book Creating Future People, there is a range of traits in which what is individually rational is partly a function of what other parents are expected to choose, and in which what is individually rational could diverge from what is socially optimal.
Survey answers could, in principle, be fed into an AI to help refine the preferences parents exhibit over which embryos to implant. This information would be enormously valuable for parents and fertility clinics. To the extent that the traits of future people influence the welfare of the entire world, embryo selection done well – guided by accurate information rather than guesswork – could be one of the most important forms of “effective altruism” the world has ever seen. [Email jonathan.anomaly@protonmail.com] **#15: Book Discovery Startup** Shepherd.com is a bootstrapped startup working to reimagine book discovery online. We help readers find books in new and unique ways while helping authors share their passion and expertise (for example Steven Pinker shared 5 of his favorites on rationality: https://shepherd.com/best-books/rationality). Social-media algorithms reinforce existing worldviews and propagate simplistic solutions. A book is one of the best ways to walk in someone else's shoes and expose the hidden complexities that surround us. I want Shepherd to promote soft culture values that will benefit global development. How can we instill critical thinking and meritocracy as values in future generations? What other values create a better world? Currently, I am funding this project myself, but I am interested in grants or donations. I am also thinking about doing a crowdfunding campaign later this year. If you have any advice on successful crowdfunding please contact me at ben@shepherd.com. Or, reach out if you want to talk books, soft culture values, or anything around this. **#16:** [removed] **#17: Algorithms To Select The Best Systematic Review** Navigating the expanding body of research literature is an increasing problem. Global research output is growing rapidly, as is the number of systematic reviews being produced. Systematically developed reviews (called ‘systematic reviews’ [SRs]) provide the highest quality evidence that is needed to inform clinical and public health recommendations. A SR is a synthesis of all the medical literature on a given topic. It is estimated that global scientific publications double every nine years, and for SRs the situation is more marked. For example, 25,000 SRs are added to the database Epistemonikos annually. With the explosion of SRs comes an epidemic of multiple SRs published on the same topic. One study found 24 SRs on vitamin D supplements for preventing bone fractures, all with conflicting results owing to different methods. When encountering multiple reviews on the same question, clinicians may be confused and unable to formulate a conclusive answer to their patient’s question. We aim to develop an automated algorithm which will help select the best SR amongst several on the same question. Our algorithm will have significant impact and application worldwide to every health field. We, as an academic group of methodologists and clinicians (primary contact carole.lunny@ubc.ca), would love to meet with anyone interested in partnering or funding our multi-year project. Our project plan can be found at https://osf.io/nbcta/?view_only=6b06b3f490c04ba0856f4cf95fcfd5ac **#18: Philanthropic Messaging Strategies Using Evolutionary Ideas** I’m Ro Gupta.
Inspired initially by https://www.theatlantic.com/business/archive/2015/06/what-is-the-greatest-good/395768/, I’d like to commission research that explores if and how Kin Selection and Hamilton’s Rule [https://en.wikipedia.org/wiki/Kin_selection#Hamilton%27s_rule] can be applied in mass communications for altruistic giving of humans in modern times. The goal is to uncover alternative messaging strategies that help subjects transcend blood-thicker-than-water hardwiring, based on underlying evolutionary biology theory – e.g. kin recognition, kin altruism – that ultimately serves to increase wealthy countries’ proportion of altruistic giving to less genetically familiar yet higher need/ impact populations, e.g. those of the Global South. [Estimates suggest around 5% of US giving currently goes to international causes.] I believe I have the right combination of academic, professional, NGO and global experience [https://www.linkedin.com/in/guptaro] to lead this, and access to a high quality network of research and communications experts to match grants to. I estimate a robust synthesis of existing work could be done for the low tens of thousands of USD, while a first phase of primary research could be substantively scoped for the high tens to one hundred thousand USD. If of interest to be a part of this as a researcher, funder or general thought partner, please get in touch: http://www.rocrastination.com/contact/. **#19: Software For Spaced Repetition And Other Education Tech** With well-designed education technology, the task of understanding and memorizing vast, complicated, and important subjects can be rendered trivial in comparison to conventional ways of learning. I'm seeking funding to create AnkiHub, software for facilitating application of evidence-based learning strategies. As a software engineer who has been working closely with medical students to advance the use of ed tech in medical schools (such as the spaced repetition software Anki), I am uniquely positioned to bring this project to success. The absence of truly effective and accessible accelerated learning tools is a bottleneck preventing millions of would-be do-gooders from pursuing high impact careers like medicine and engineering. Because these careers are incredibly rigorous, they select for specific personality types, thereby weeding out those who would make incredible researchers, for example, but assume they aren't smart enough or disciplined enough. AnkiHub will empower students by democratizing accelerated learning and potentiating the ever growing wealth of quality, free, educational resources. The goal of AnkiHub is to help create a world in which anybody who wants to can become a scientist, doctor, engineer, etc, (including those in the developing world, as this technology can be compatible with cheap devices). The science of learning, memory, and performance psychology is solid; AnkiHub can fulfill the need of leveraging the insights from the literature with ease. [If you want to help, email inbox.asanchez@gmail.com] **#20: Test Charity Pitch Strategies** We will conduct an intervention competition to test which strategies are most effective in convincing people to donate to an effective cause. We (Bastian Jaeger, Josh Lewis, and Noah Costelo) are part of a recently formed team that advises EA organizations on how they can attract more donations. There is little high-quality research on this topic and it is unclear which marketing approaches work best or should be trialed in the field.
**#20: Test Charity Pitch Strategies**

We will conduct an intervention competition to test which strategies are most effective in convincing people to donate to an effective cause. We (Bastian Jaeger, Josh Lewis, and Noah Costelo) are part of a recently formed team that advises EA organizations on how they can attract more donations. There is little high-quality research on this topic, and it is unclear which marketing approaches work best or should be trialed in the field. We will conduct a large-scale experiment to fill this gap. First, we will challenge academics and members of the EA community to submit an intervention that is most effective in generating donations for an effective charity. Next, we will distribute a survey among the participating teams and the EA community in which people are asked to predict the effectiveness of each intervention that was submitted. Finally, we will select the most promising interventions and test their effectiveness in a high-powered, pre-registered, and incentivized experiment. The study will generate actionable insights for various EA organizations looking to optimize their marketing strategy and attract more donations. The survey data will also allow us to test how accurate people are in forecasting the effectiveness of different strategies, by comparing forecasted with actual effectiveness. All data will be made openly available. See https://osf.io/adbwv/ for a more detailed description, and you can contact me via b.jaeger@vu.nl.

**#21: Thwart Darknet Murder Plots**

I'm a darknet vigilante hacker, intercepting serious murder plots around the world through a back door into a dark web hitman scam website. Contacting people about their lives being in danger is a legal and ethical minefield, as is working with law enforcement and the media to bring about investigations and arrests. Sometimes people are already dead :( I'm looking to secure funding to: hire more journalists around the world to investigate these murder plots, accelerate software development of the investigation data analysis platform, cover legal costs, and more broadly professionalize the operation into a full-blown charitable international investigative operation. Contact through https://pirate.london/

**#22: Support Zohar Atkins’ Podcast**

I'm Zohar Atkins (Rabbi, Poet, Rhodes Scholar, Emergent Ventures Winner, and Founder of Etz Hasadeh). I'm seeking $100,000 to support my new podcast, Meditations with Zohar, which I plan to make into a weekly show over the course of many years. The show needs patronage to support production and editing costs, and, if this is to be a weekly endeavor, my time. The show features a series of conversations with eclectic thinkers, doers, and artists I admire, with a focus on the intersection of philosophy, religion, theology, and personal principles for life. I have 10 guests already signed up and scheduled, and have recorded 3 episodes, including with Noah Feldman, Sheila Heti, and Teresa Bejan. Other guests include Tyler Cowen and Agnes Callard. The show will combine the love of learning of Tyler Cowen's Conversations with Tyler and the personal, and sometimes existential, touch of Krista Tippett's On Being. The world needs high-level content that is seeking, personal, and meaning-oriented. We need to talk about ideas in a way that is rigorous but also heartfelt, acknowledging our "skin in the game." This endeavor is part of my larger project of bringing the study of great texts and ideas outside academia. See [here](https://twitter.com/ZoharAtkins/status/1372675033336778755) for one example. Betting on the show is a bet on my attempt to strengthen culture through better discourse, better education, better thinking, and deeper self-understanding.

**#23: Financial Aid For Math Students**

Euler Circle is a mathematics institute dedicated to teaching college-level mathematics classes.
We have taught most of the typical undergraduate math classes, including abstract algebra, complex analysis, algebraic geometry, and many more, as well as more unusual topics like combinatorial game theory, ergodic theory, and the mathematics of Euler. We are looking for funding to help with three things: 1) financial aid for students who are unable to afford the tuition, 2) hiring someone to develop more classes, especially an introductory class for students who are enthusiastic about mathematics but have little exposure beyond the school curriculum, and 3) funding for students to attend conferences, especially those who have done research and are able to give talks. Please check out our website https://eulercircle.com/ for more information, and feel free to reach out to me (Simon Rubinstein-Salzedo, director of Euler Circle) at simon@eulercircle.com to discuss anything.

**#24: Wearable Tech For Improving Memory**

What if there were a simple piece of wearable tech that could improve your memory by one item at the push of a button? What would you use it for? The ever-elusive names of people you just met? A friend’s birthday that you always seem to forget? Or perhaps an email address or phone number just long enough to jot it down? A one-item improvement in memory would be a luxury for a working professional, but can be the difference between independence and dependence for those with objective impairments in memory, such as those affected by stroke or acquired brain injury. Normal declines in memory across the lifespan are also a significant source of anxiety for many healthy older adults, who worry about pathological cognitive decline and dementia. Memory On Hand is this simple piece of wearable tech, and we believe it can help augment memory for any wearer. Unlike currently available memory strategies and solutions like paper note pads or cell phone note-taking apps, Memory On Hand can be used without disrupting the flow of the user's daily life – even mid-conversation. Stated simply, we think Memory On Hand can help a lot of people 1) remember more; 2) improve their confidence in their memory abilities; and 3) worry less about memory. We need your help in bringing this innovation to market. More info at www.memoryonhand.org

**#25: Scientific Research On Life History Models Of Mental Illness**

I'm J.D. Haltigan and I'm looking for funding to continue independent scientific research with my academic colleague investigating life-history models of psychopathology as they relate to the neurodevelopmental disorders of ASD, ADHD, and OCD. In order to optimally continue this work amidst the pandemic, funding that would allow us the potential of release from other academic duties (e.g., teaching, grant writing) would help advance this work, which is pre-registered and described here: https://aspredicted.org/blind.php?x=2ea9vn. Currently I am seeking 5k USD and am happy to hold a Zoom call with anyone who can provide funding or advice, to share further details on the project as well as my academic background and credentials. [You can reach me at jhaltiga@gmail.com]

**#26: Gamify Education Right**

I am Martijn Struijs, a PhD student (year 4 of 5) and TA in Computational Geometry at Eindhoven University of Technology. My proposal is to do gamification of education right. Most attempts at gamified education start with a fixed educational program and try to make a game meet these standards. That is a terrible way to design a game. Some games were not made for education, yet have been educational.
An example is Pokemon Gold, which basically taught me English. You have experienced this personally as well, in your game in another world. These games have a low "skill floor", i.e. it doesn't take much skill to play the game, and also a high "skill ceiling": playing it well requires great skill. These conditions are excellent for growth and learning. For an example of an exceptional yet not well known educational game, look at ZeroRanger. Many of the skills it teaches, mostly patience, focus, recovering from setbacks, and letting go, are transferable to other aspects of life. I believe that an educational tool should teach one thing well, whatever it is, and hope that the thing it teaches is useful (if not, throw it away and try again). I already have the resources and am developing such a game. What I don't have is social science experience to test the effectiveness of the game. I could use your support here. Most of the development costs will be paying people; these are minimal at this stage. I thank you for your consideration. May we achieve enlightenment. [You can reach me at struijsmartijn@gmail.com]

**#27: Reverse-Engineer Dating Photo Quality**

I'm Loweren, a biology PhD doing photography and dating advice on the side. Some of you might know me from the Optimized Dating Discord server or the corresponding blog: https://optimizeddating.substack.com A keystone piece of advice in our community is to put more effort into making better dating photos, and to use the photo rating service Photofeeler to quantify the performance of each photo. This advice has helped many people; however, there's one problem: it's not clear which factors make a photo perform better or worse on Photofeeler, as the developers are not keen on sharing the analytics. I will attempt to reverse-engineer the most important factors that make a dating photo look better by testing various factors (camera distance, focal length, aperture size, smile, etc.) against control photos using multiple subjects. I estimate that for each $80 in donations I can test 2 factors using 3 test subjects. I already have the first batch of photos ready to be tested. Results will be published on the blog as they come, which will hopefully help more people take better dating photos. My PayPal: https://paypal.me/seeelegance

**#28: A Tool For Reasoning With Information On Prediction Markets**

I am an AI researcher specialising in explainable AI and symbolic/hybrid methods, particularly argumentation - automating the ways humans argue and make decisions. One area of decision-making that could greatly benefit from formal argumentative methods of exchanging and weighing arguments and counter-arguments to reach conclusions is (conditional) prediction markets. We need good computational tools for aggregating multiple conflicting conditional predictions into ranked decision alternatives, such as on changing car taxes based on market prices of electricity from renewables, supply of electric vehicles, and election results. I want to research and build a PoC tool for reasoning with information on prediction markets, one that collects multiple predictions and automatically evaluates them as arguments and counter-arguments, while allowing the human users to argue about the decisions for the sake of explainable reasoning. I need 50k+ USD for 5 months of full-time dedicated work (in the UK - liable to pay tax) to push automated decision-making using (conditional) prediction markets. Ping me at kcyras@gmail.com if interested.
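(To make the idea concrete, here's a toy sketch of one ingredient of this pitch: ranking decision alternatives by aggregating conditional prediction-market prices. This is not the proposer's method - the real tool would weigh arguments and counter-arguments with formal argumentation semantics - and every market, weight, and number below is invented for illustration.)

```python
# Toy illustration, not the proposed tool: rank decision alternatives by a
# weighted sum of conditional prediction-market prices. All names and numbers
# below are made up.
from dataclasses import dataclass

@dataclass
class ConditionalMarket:
    decision: str       # the decision alternative being conditioned on
    outcome: str        # the outcome this market prices
    probability: float  # market price, read as P(outcome | decision)

# Hypothetical markets for the car-tax example in the pitch.
markets = [
    ConditionalMarket("raise_car_tax", "renewable_share_up", 0.62),
    ConditionalMarket("raise_car_tax", "ev_supply_up", 0.55),
    ConditionalMarket("keep_car_tax", "renewable_share_up", 0.48),
    ConditionalMarket("keep_car_tax", "ev_supply_up", 0.51),
]

# Stakeholder-chosen weights over outcomes (in the real tool these would
# themselves be argued over, with counter-arguments lowering a weight).
weights = {"renewable_share_up": 1.0, "ev_supply_up": 0.5}

def score(decision: str) -> float:
    """Crude stand-in for argument-weighing: sum of weighted probabilities."""
    return sum(weights[m.outcome] * m.probability
               for m in markets if m.decision == decision)

for d in sorted({m.decision for m in markets}, key=score, reverse=True):
    print(f"{d}: {score(d):.2f}")
```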
**#29: Present An Open-Source Python Library For Monte Carlo Techniques**

At the heart of all serious forecasting is a statistical tool known as Monte Carlo analysis. It allows you to quantify uncertainty by introducing randomness to the inputs of computational models and looking at the range of results. If you want a good example, you might recognize Monte Carlo techniques from Nate Silver’s election forecasts at 538. It's been a gold standard throughout my career in the space industry, and I can attest to how powerful it is - I've used it to successfully send a rocket to Mars. However, there aren't any tools out there that make it easy for researchers to take their existing models and wrap a Monte Carlo around them. So, I wrote one. It's an open-source Python library which I'm calling "monaco". I'm at a point in development where the basic feature set is complete and working well, and I'm looking to finish up the extended roadmap in the next few months. See the project github page for the code, examples, and a lot more info: https://github.com/scottshambaugh/monaco. I’m looking for $1000 to help me present version 1.0 of this tool to the scientific community at the 2022 SciPy Conference in Austin, TX this summer. That amount should cover conference fees, hotel, and airfare, and if you're feeling generous I could use additional funds for some external monitors and cloud compute time. My name is Scott Shambaugh, and if you’re interested in helping fund this please email me at wsshambaugh AT gmail.com. Thank you!
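(The pattern monaco packages up is easy to show in miniature. The sketch below is not monaco's API - see the project's GitHub for that - just the bare idea of drawing a model's uncertain inputs from distributions and summarizing the spread of outputs; the toy model and numbers are invented.)

```python
# The bare Monte Carlo pattern: randomize inputs, run the model many times,
# summarize the spread of outputs. Illustrative only; not monaco's API.
import random
import statistics

def model(thrust: float, drag: float) -> float:
    """Stand-in deterministic model producing one scalar outcome."""
    return thrust - drag

def monte_carlo(n_trials: int = 10_000) -> list:
    outcomes = []
    for _ in range(n_trials):
        thrust = random.gauss(100.0, 5.0)  # assumed: normal, mean 100, sd 5
        drag = random.uniform(20.0, 30.0)  # assumed: uniform on [20, 30]
        outcomes.append(model(thrust, drag))
    return outcomes

outcomes = sorted(monte_carlo())
print(f"mean = {statistics.mean(outcomes):.1f}")
print(f"sd   = {statistics.stdev(outcomes):.1f}")
print(f"5th percentile = {outcomes[len(outcomes) // 20]:.1f}")
```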
**#30: Retrofit Coronary Stents With Soft Edge Caps**

Coronary stent insertion is a common procedure (~600k/year in the U.S.) to treat coronary artery disease, but despite dramatic improvements in stent design, an estimated 3-20% of patients experience a major adverse cardiac event within 5 years of insertion. These post-implantation complications, such as in-stent restenosis and late stent thrombosis, are typically linked to inflammatory processes which arise from damage to the vascular wall during insertion. For example, recent studies show a 2x increased rate of major adverse cardiac events when stent edge dissection (a partial tissue lesion at the end of a stent) is detected. Alarmingly, new imaging techniques suggest that stent edge dissection may be present in ~40% of stent insertions. My name is Carl Thrasher and I am a graduate student at MIT. I’m asking for $50k to prototype a method to retrofit existing stents with soft edge caps. I aim to 3D print these caps directly onto the stents using a bioresorbable polymer. This should help protect the vessel wall at the stent edge (the region of highest stress) without affecting long-term operation. I have experience in 3D printing and resin formulation. Future work would include testing in artificial vessels using optical coherence tomography. Even slight improvements here could be high impact, saving millions of dollars and thousands of lives over a short time horizon. Happy to chat over Zoom or provide more detailed proposal information. Email: cthrash@mit.edu

**#31: An Organization To Promote Independent Research In AI Safety**

We need more people working on AI Safety research, but opportunities to do good work in this field are very limited, and so excellent researchers often end up working in non-safety AI roles. EA grantmakers often fund independent researchers (IRs), and there are many open problems in AI Safety which could be tackled by IRs. However, IR lacks the institutional benefits of credibility, reliable income, motivation, collaboration and serendipity, especially when compared to jobs available to skilled AI researchers and engineers in industry. This could be fixed by creating an organisation to make IR in this field an attractive career path. This organisation would provide an institutional umbrella for researchers to work under to engender credibility; free workspace and food; accountability and productivity incentives; assistance in obtaining initial and ongoing funding; collaboration opportunities between researchers and with other labs in industry and academia through talks, socials and workshops; and generally make IR in AI Safety an appealing prospect for talented researchers who we would otherwise lose to non-safety AI roles elsewhere. This will cost around £150,000 p/a. I would like to raise this amount to run a one year trial in central London to assess impact. Please contact me at jessicamarycooper@gmail.com if you are interested in making this happen!

**#32: An App To Help Mentor Disadvantaged University Applicants In The UK**

UniReach is a charitable EdTech start-up that provides automated on-demand mentoring for disadvantaged university applicants in the UK. In the past 18 months, we’ve organised mentoring for 1,000s of applicants, provided by 600+ undergrad volunteers. We currently focus on admissions to Oxford and Cambridge universities, where the global offer rate is 17%. Our offer rate is 3x that: 57%. We achieve this success via a year-long programme: national workshops to foster interest in applying (we don’t cherry-pick), continuous on-demand mentoring, and more in-depth mentoring in peak application season. The missing piece in the puzzle is an app to reduce frictions in our core service and open new channels to engage and support applicants – right now we coordinate mentoring via email. We are looking to raise £10,000 / $15,000 / 0.4 BTC for the app. Our ambition is to cover other top UK universities and then the Ivy League. We’re well placed to do this, with an engaged mentor base (3% of all Oxford/Cambridge undergrads), partnerships with schools across the UK, plus relationships with influential figures in our current markets; we’ve also recently closed an acquisition of a smaller charity to support our growth. We would also value any calls/emails from individuals offering advice, particularly on software (or marketing, the other gap in our capabilities). All advice welcome via email: leo@unireach.co.uk / website: unireach.co.uk / Bitcoin: 38MNgr9svuU2Lc7XUhrUdab2GS8tscG9be

**#33: Get Yeast To Produce Milk**

Real Deal Milk sets out to revolutionize dairy. To achieve this undoubtedly ambitious goal, we employ state-of-the-art gene technology to teach yeast cells how to be cows. We think that today's dairy production is not sustainable; it contributes as much as 3% of all greenhouse gas emissions (FAO 2019), uses enormous amounts of water and land (WWF 2019), and produces extreme amounts of waste (Nennich et al. 2005). Add to that the unimaginable suffering animals go through to provide fresh milk to humans and you get an industry ripe for disruption. Modern gene technology makes it possible to exclude the inefficient animals from the process entirely but keep the milk (and cheese, yogurt, cream, butter) we all love.
Substituting cattle with precision fermentation allows us to keep milk just as delicious and nutritious as it is when it's freshly milked from a cow, but more sustainable and potentially healthier and more affordable. To participate in our funding, email us at zoltan@realdealmilk.com

**#34: Outline A Potential Martian Legal System**

Inspired by Elon Musk's regrettably mostly-unworkable set of ideas for a Martian legal system (cf. my detailed comments here: https://www.reddit.com/r/slatestarcodex/comments/8q8p6n/comment/e0tpds4/?utm_source=share&utm_medium=web2x&context=3), as a lawyer of the Continental Civil-Law tradition I consider it vitally important for the proper function of any future space colony with ambitions of true independence to have a solid foundational legal framework to build upon. I'm looking for a minimum of $20,000 to prepare an outline of a "Mars Charter" proposal, consisting of a Constitution, a Bill of Rights and basic rules of procedure, as well as to establish an online hub and repository of relevant works and knowledge towards this purpose. The aim is to get the ball seriously rolling on this underestimated aspect of space colony operations and to create a seed which can eventually grow a truly practical extraterrestrial legal regime. If you wish to contribute to the project in any way, please contact me at 8080256256@seznam.cz

**#35: Automate Growing Magic Mushrooms**

Hey, I'm just a guy with a lifelong interest in the therapeutic potential of magic mushrooms. I'd love to find a way to automate a full-cycle (from spore to fruit) small-scale production. Probably way beyond my capacities (I am but a simple Ecology grad) but it'd be sweet to give it a shot. I'm from "the poor" so I can't do it without a monetary injection, and sadly my alma mater specializes in fish and thus disregards my non-fish inquiries. [If interested, contact me at simon.rousseau.cloutier@gmail.com]

**#36: Improve Access To Outdoor Activities**

Adventure Nerds improves access to nature and outdoor activities. We are a startup that publishes books and resources that educate and inspire all people to get outside and enjoy nature. Adventure Nerds is a platform for sharing information that increases diversity in outdoor participation by proactively connecting communities to practical local information that is not readily available online. We reduce the cost and time required to plan outdoor activities so that people can confidently spend more time outdoors. Our educational resources give everyone the tools to plan safe, responsible adventures in nature and develop a lifelong passion for healthy outdoor activities. Integral to our work is partnering with nonprofit organizations and businesses to raise awareness for conservation and environmental action campaigns. Adventure Nerds launches in the spring of 2022 with support from Waypoint, a development program for outdoor organizations in Western North Carolina. We have published an example guidebook, and we are searching for start-up capital and business sponsors to create more resources. If you are interested in learning more or helping in any way, contact us through our website, https://adventurenerds.com/about-us.

**#37: Study Antibiotic Resistance In Nigeria**

I am Nnaemeka Emmanuel Nnadi, a Medical Microbiologist in the Department of Microbiology, Plateau State University, Bokkos, Nigeria. I am seeking funding to help understand how the environment, poultry and humans interact in the spread of antimicrobial resistance in Plateau State.
This study will, by extension, lead to the establishment of a laboratory that can be used to train undergraduate and postgraduate students in molecular genomics. With a PhD in medical microbiology and a collaboration with Dr Luis Coelho, an expert in computational biology, our expertise matches this project. To actualize the goals of this project, we seek the sum of 50,000 USD. I would also love to hold a Zoom call with anyone who finds this project interesting. If you can provide funding or advice, please email eennadi@plasu.edu.ng or eennadi@gmail.com

**#38: Promote Citizens’ Assemblies And Lotteries**

The newDemocracy Foundation is an organisation in Australia that develops, demonstrates, and promotes innovations in democracy. Its focus is on deliberative democracy and random selection. We have worked with the UN and the OECD to develop international standards of best practice and founded the Democracy R&D network. We’ve designed and operated ground-breaking projects in Melbourne, Geelong, and Canberra, and have collaborated with international partners in Brazil, Spain, North Macedonia, and Malawi. We require funding to take advantage of an opportunity in Australian politics. Citizens’ assemblies and democratic lotteries are gaining traction, but the ecosystem for their implementation still requires support and training that is best provided by an independent organisation like ours. Additional funding could allow us to expand our project capacity, conduct needed research, or improve our advocacy and reach to politicians. I’m happy to answer any questions or provide a brief organisation overview; you can reach me at kyle.redman@newdemocracy.com.au You can view our website here: www.newdemocracy.com.au.

**#39: Portable Urinal For Disabled Adults**

1 in 3 adults over 30 wake two or more times to pee each night, and 70% of them are bothered by this. 1 in 7 US adults have a mobility disability. Yuri is a portable urinal that sits next to a bed, couch, or desk to eliminate wakeful or painful walks to the bathroom. It is a funnel, drain trap, and vented holding tank on wheels, and it does not smell. Emptying is infrequent, and is done by a graywater pump that connects to the tank and empties into an existing drain, like a sink, toilet, or shower. Yuri could help a lot of people who don’t move well in the 70%+ of voidings that are urine-only. My name is Matt Voda, and I am a programmer-turned-maker working on Yuri full-time. I’ve prototyped five versions of it so far and am close to an MVP. Future paths include a scaled-up, ruggedized version built around a 55-gal drum for places and people without plumbing, and a Roomba-esque wheelbase and docking station capable of pumping itself empty. Seeking mentors who can advise on the industrial design of the unit, how to engineer it for manufacturing, and the development and compliance of medical devices. Please also reach out if you or a loved one want to receive a unit at-cost in exchange for feedback on how to improve it. Email m@ttvoda.com

**#40: Build A Better Social Network**

My name is Matt. I think there are a lot of downsides to centralized social media (read: Facebook) as the primary way that billions of people interact online. I’m building an open-source alternative called Haven, https://havenweb.org , on open standards with simplified self-hosting as a primary goal. This would enable better data ownership, privacy, and avoidance of censorship.
I don’t need money (which is one of the reasons I think I’m a good person to work on this), but I would very much like to connect with anyone who is like-minded or wants to try out the software and provide feedback. [Email matt@havenweb.org]. Thank you!

**#41: YouTube Tutorials On Biology**

Hi! My name’s Mike Saint-Antoine, and I’m a PhD student in Bioinformatics and Computational Biology. In my free time, I make Youtube tutorials on these subjects. My goal is to take the knowledge and skills I learn in grad school and upload them to the internet so that other people can learn them for free. My field is relatively new (and quickly growing), so there’s a shortage of online tutorials. I’m trying to fill in this gap with my videos, focusing specifically on topics that haven't been sufficiently covered yet by others. I don’t need any money for this project, but any signal boost or constructive criticism is greatly appreciated. My channel can be found at the link below. Thank you! https://www.youtube.com/c/mikesaint-antoine

**#42: Publish Books On Architecture**

I am an architect based in India trying to build a research-based design practice. I am seeking funding of $4000 to finish self-publishing two e-books on the Amazon marketplace that will form part of my PhD application in December 2022. The costs will cover printing of test dummies, sending a few copies to prospective guides and mentors, and miscellaneous fees. A validated writing practice is a desired application requirement that I am trying to fulfill with this exercise. The theme of the work is to show how patronage has changed the knowledge production of architecture across the four generations that have practiced, or are practicing, in the country. Funding the project would also help prove the hypothesis that the creative economy is the only way ahead for architectural practices if the necessary policy guidelines are not implemented for a sustainable future for the profession. If funded, the process of getting an admission and transitioning the research into further work on architectural imagination may also be easier. I have compiled a reading list of almost the entirety of Indian architectural design books from 1985-2019, around 100-125 titles, that enables the project. These will be uploaded on my Instagram account starting [here](https://www.instagram.com/shoppingtheatre.inc/). Contents and an introduction to the first book can be accessed here: https://isaacmathew.substack.com/p/daily-sentences-2111072039. Isaac Mathew. Contact me at isaac@spatialresearch.net

**#43: Pocket Dictionaries For South Africa**

There is an urgent need for a solution to South Africa’s literacy crisis. What we need is a school dictionary with the portability and reliability of a print dictionary, and the functionality and capacity for extra support of an electronic dictionary. Pocket electronic dictionaries (PEDs) were common in many Asian countries in the early 2000s. They were small and portable, but could contain and present more data than print dictionaries. They do not use the internet, so there are no data or connectivity costs. Once a PED is owned, it is free to use apart from battery charging. PEDs are more suitable than smartphones for primary school pupils, as these learners do not have their own phones or access to smartphones. My PhD dissertation in lexicography was on designing model entries for an electronic primary school dictionary, with more support for pupils via features not seen in print dictionaries.
I plan to develop an updated PED as a standalone dictionary device to be used by primary school pupils. Access to a reliable school dictionary with more language support will lead to better fluency and literacy, which has obvious implications for individuals and the country. I require an initial amount of US$12,000 to get the technical specs developed and a set of sample entries produced. Based on this, the next phase will be the development of a prototype for testing in schools. Please contact me at lorna@lemma.co.za for more information.

**#44: Long-Termism Advocacy Org In Israel**

ALTER, the Associations for Long Term Existence and Resilience, is an academic research and advocacy organization being started in Israel, which hopes to investigate, demonstrate, and foster useful ways to improve the future in the short term, and to safeguard and improve the long-term trajectory of humanity. The founder, David Manheim, has a PhD in public policy and a track record of research in effective altruist priority areas and risk reduction, and in policy engagement. The key goals of the organization will be to foster academic and policy work in key areas in Israel, via organizing conferences, academic engagement, and fostering collaboration with international organizations in this space. If you have connections to interested Israeli academics, experience with making this type of academic outreach successful, or you can provide funding for this work, please contact david@alter.org.il.

**#45: Independent Research In Human-Machine Collaboration**

The most pressing long-termist priorities (e.g. AI safety, climate change, global governance) require remarkable intellectual efforts to tackle. In this context, I'd like to conduct independent research into human-machine collaboration, investigating avenues for augmenting human cognition using AI. Making use of my background in machine learning and cognitive science, I'd explore tools for perceiving large amounts of information (e.g. user-centered recommendation systems, personalized summarization, artificial salience maps, etc.), navigating complex problem spaces (e.g. virtual assistants, intelligent tutors, conversational tree pruning), and debugging belief systems (e.g. ideological unit tests, liquid epistemics, version control for beliefs, constrained belief generation, etc.). Augmenting human intellect might empower knowledge workers across fields, including in cognitive enhancement itself, potentially leading to fruitful positive feedback loops. If you're interested in supporting this line of work, reach me via paulbricman.com/contact.

**#46: Clean Up Space Debris**

If you’ve heard about space debris – tens of thousands of uncontrollable artificial objects in orbit around Earth – then you probably agree this is a problem worth solving. You might assume there are people working on it (true), and that they have found a way to turn this cleanup work into a viable business (debatable). Our project is a novel, first-principles solution that will more cost-effectively address the hardest part of the problem (the multitude of smaller, pre-existing debris in orbits 600-1000km in altitude). We think this method has the greatest chance of major positive impact in decades to come, but regular investors struggle with its lack of near-term gain. I’m Mike Le Page, CEO and Design Lead for Exodus Space Systems, and my two core values are (1) that space exploration is good for humanity, and (2) that sustainability is crucial to everything humanity does in the future.
Glad to discuss further: exodusspacesystems.com/contact/

**#47: Build A Better Social Network (2)**

I want to create a social network website/app that improves politics by gathering and promoting good ideas and solutions. My site will be better than existing sites like Facebook, Twitter, or Reddit because it will be an impartial nonprofit that incentivizes the display of good arguments from all sides instead of favoring shallow content to get more ad revenue. I am a professional web developer with the ability to create a fully functional website to test this idea. I plan to publish an early small-scale version of this website that focuses on a few key topics (like climate change, health care, AI risks) and collects a comprehensive list of excellent arguments from many perspectives. I'm looking for more support to help build this website. If you think you can help (with development, design, content writing, etc.), have questions or advice, or can provide funding, please email anon837261@gmail.com (my anonymous forwarding email to avoid spam; I can reply to inquiries with more personal details when necessary). I may also be interested in working together if there are other projects with similar goals.

**#48: Research Transparency Audits Of Published Papers**

Today’s scientists are rewarded for QUANTITY at the expense of QUALITY, causing serious quality control problems in science. In a fresh attempt to solve this problem, we are boldly conducting the world’s first researcher transparency audits, in combination with using unique rewards to NUDGE authors to increase their transparency. This uniquely addresses the needs of the established professor market while also catering to the needs of junior scientists in the emerging open science market. We are seeking a new round of funding so that we can (1) scale up and improve our apps and (2) operate a small auditing team to conduct ongoing transparency audits at a global scale. We’re excited to move forward on our MISSION to scale up our disruptive transparency author apps, so we can achieve our VISION of a transformed research world brimming with high-quality scientific evidence (for more details, see our 4-page funding proposal https://docs.google.com/document/d/1fiv6t0izX7z4F5kuPiLpzeyBtV4GwLRjaMODUj5EpSg/edit?usp=sharing ). We're looking for seed funding in the $50K to $150K range. If you can provide funding or advice, please email contact@curatescience.org

**#49: Fund Promising Young People**

Hello World is a nonprofit that is trying to make it easier to do good in the world. We believe everyone should have access to the relevant skills, networks, and capital that enable them to pursue solutions to the issues of our time - starting with justice, climate change, mental health, and improving education. Our next step is to run a call for projects from members of Gen Z with a focus on international and underrepresented voices – your contribution ($100-$1000) will directly fund promising young people; you'll have the opportunity to allocate your support to specific geos and topics. To learn more, email me (Nick Barr, cofounder) at nick@gethello.org, and to get to know some of our members, check out https://helloworldnetwork.org/portfolios.

**#50: Promote Charter Cities**

I'm Mark Lutter, Founder & Executive Director of the Charter Cities Institute. CCI is looking for funding to build new charter cities in Africa. Africa is undergoing its urban revolution this century -- adding ~1 billion more urban residents to its cities by 2050.
Yet African cities face a near-complete lack of the legal authority, financial resources, & technical capacity to accommodate this rapid expansion. Charter cities can help on all of these fronts, and by doing so serve as engines of growth and innovation rather than urban sprawl, crime, congestion, & contagious disease. As the global thought leader in the charter cities space, CCI is uniquely positioned to bring together the stakeholders needed to enhance urban legal autonomy, facilitate financing to fill huge city fiscal constraints, & incorporate urban development companies that can actually build new cities. Our goals are ambitious. CCI aims to (i) establish 10 new charter cities with a city-scale population by 2040; (ii) create at least ~2 million new urban jobs between 2025 and 2040; (iii) create new financial instruments dedicated to charter cities that drive direct urban investments of $20 billion by 2040; & (iv) serve as a test-bed/proof of concept for other charter city entrepreneurs around the world. Over the next 6 years, CCI requires $1.5M per year ($9M total over 6 years) to build out its Partnerships team to deliver on these goals. If you can provide funding or advice, please email mark@cci.city & kurtis@cci.city.

**#51: Plants That Suck Heavy Metals Out Of The Ground**

My city in Germany has been poisoned by ancient mining wastes. I want to remedy this with hyperaccumulating plants, which suck heavy metals out of the ground for easy disposal. I then want to publish the results and make this process easily reproducible. I have experience in Permaculture. Your funding would allow for a pilot project which will then be used to get funding from the local government. (<2500€) Contact me at phytosanierung@gmail.com

**#52: Help Slime Mold Time Mold Investigate Chemical Causes Of Obesity**

We’re the mad scientists behind SLIME MOLD TIME MOLD. We think there’s a good chance the obesity epidemic is caused by environmental contaminants. We wrote around 60,000 words about this on our blog — read it at achemicalhunger.com. Right now our number one suspect is lithium. Even if we’re really wrong about the obesity thing, someone should be looking into the fact that there’s way more of this mind-altering metal (lithium) in our water than there was 50 years ago, and right now that’s us. The budget for our immediate projects is $650,000, of which we’ve raised $125,000 as of this writing. But in the long term it will probably take several million to cure obesity, and if we get that sooner, we can spend less time waiting around and writing grant proposals. We promise to turn any donations into research. We will share all our research publicly, as fast as we can put it out. If it turns out not to be lithium we will look into other contaminants; if we find evidence against contamination we will try to figure out a new theory that works. If we solve obesity and we still have money left over we will turn that money into some other kind of mad science. Donations can be made to Whylome, Inc., a 501(c)3-pending nonprofit focused on funding this research. If you want to help, please email slimemoldtimemold@gmail.com.

**#53: Educational Videos**

Hi there. I make educational videos at youtube.com/primerlearning. The two guiding principles are to inspire people to realize (1) that learning and analysis are intrinsically interesting, and (2) that you don't need to specialize in a topic to understand its most powerful ideas.
My hope is that this will positively impact humanity's relationship with knowledge in the future, helping combat simplistic ideologies and inspiring more people to delve into and innovate within quantitative fields. Why fund this project instead of other similar ones? [The quality and popularity of the videos are unusually high, I have experience from five years at Khan Academy, and we'll probably have overlapping worldviews that make my influence in line with your values.] I'm asking for 100k to subsidize the hiring of a full-time engineer. The videos are coding-intensive, being focused on animated simulations. I have gotten along well enough, but I am self-taught as a coder, and my comparative advantage is elsewhere. This one-time investment will accelerate video production and pay for itself in the short/medium term, since the revenue per video is already high. [If interested, contact justin@primerlearning.org]

**#54: Promote Effective Institutions**

The Effective Institutions Project (https://effectiveinstitutionsproject.org/) is a new global working group that incubates and tests strategies to improve institutional decision-making at the highest levels. We analyze where power over people's lives is most concentrated in institutional contexts, gather intelligence on how key institutions currently make decisions, identify interventions that might cause those institutions to take actions that will lead to better global outcomes, and mobilize funding and talent to execute on the most promising interventions. Alongside all of this, we are building an interdisciplinary network of reformers to steadily increase the odds of success over time. EIP was founded by Ian David Moss (https://www.iandavidmoss.com/), a veteran strategist, philanthropic advisor, and serial social entrepreneur. We are seeking to raise an additional $670,000 to hire additional researchers and build out a fund to support promising initiatives in this space.

**#55: Non-Fiction Book With Case Studies On Resilience And Design**

I'm Nikhil Mulani and I'm looking for connections and funding to support a non-fiction book project. “Patient Designs” is an exploration of case studies of organizational resilience, technological design, and investment management that could provide valuable guidance for building a society oriented around the benefit of future generations. Case studies include the successes and failures of centuries-old family-run businesses in Japan, governance frameworks for early Internet architecture and recent AI development, and ethical safeguards created for new and old public investment bodies such as Norway's sovereign wealth fund and the City of London's "City Cash" fund. My experience includes product management roles at a variety of large companies and startups, and management consulting engagements across a variety of clients in the public and private sectors. My educational background includes a B.A. in Classics from Harvard and an M.B.A. from Wharton. If you can provide funding, connections, or advice, please email nikhilrmulani@gmail.com

**#56: Aella Wants To Start A Dating Site Like Old OKCupid**

Hi, it's Aella! I have a concrete thing I want funding for now - a rationalist-targeted dating app. There's a few in the works but none hit the really specific spot I want.
My proposal is to rebuild a version of old OKCupid (match scores from questions, user profiles, basic messaging, high control over search, strong orientation towards compatibility and personality), and include personality tests/results (women like this; it would get women on the app, and we should have plentiful data to do this). I also want to structure the questions awesomely, in clear, unambiguous ways that translate to both efficient matching and also good data for us on the back end. I think this would also be a great way to do research, where it would generate a huge amount of data that hopefully spans across a ton of different questions. I'd like to make some very anonymized version of the data publicly available. My goal is to have it be just profitable enough to cover its own costs (tho if it ends up being more profitable I won't complain, I'm just not orienting it towards that). I estimate I need between 30k-150k in funding for a basic version depending on how fancy we wanna get/how much programmers wanna get paid. My personal reach is around 750k horny men, which uhh definitely doesn't help the gender ratio *but* if the site is structured such that personality results are easily shared, I think this would be a great organic start to catch the eye of female users. [If interested, email aellasinbox@gmail.com]
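(For anyone who never saw it, old OkCupid published roughly how its match percentage worked, and the mechanism the pitch wants to rebuild fits in a few lines. The sketch below is from memory and approximate - the point values and error margin are the commonly cited ones, all profiles and names are invented, and none of this is a spec for the proposed site.)

```python
# Rough sketch of old OkCupid-style match scores, as the pitch describes.
# Approximate and from memory; illustrative only.
from math import sqrt

IMPORTANCE_POINTS = {"irrelevant": 0, "a_little": 1, "somewhat": 10,
                     "very": 50, "mandatory": 250}

# A profile maps question id -> (own answer, answers accepted from a partner,
# importance of the question to this user).
def satisfaction(asker: dict, answerer: dict) -> float:
    """Fraction of the asker's at-stake points earned by the answerer,
    counting only questions both people answered."""
    earned = possible = 0
    for qid, (_, accepted, importance) in asker.items():
        if qid in answerer:
            pts = IMPORTANCE_POINTS[importance]
            possible += pts
            if answerer[qid][0] in accepted:
                earned += pts
    return earned / possible if possible else 0.0

def match(a: dict, b: dict) -> float:
    """Geometric mean of the two one-way satisfactions, minus a margin of
    error (crudely, 1/n) that shrinks as more questions overlap."""
    n_common = len(a.keys() & b.keys())
    if n_common == 0:
        return 0.0
    raw = sqrt(satisfaction(a, b) * satisfaction(b, a))
    return max(0.0, raw - 1 / n_common)

alice = {"q1": ("yes", {"yes"}, "very"), "q2": ("no", {"no", "yes"}, "a_little")}
bob = {"q1": ("yes", {"yes"}, "mandatory"), "q2": ("yes", {"no"}, "somewhat")}
print(f"match: {match(alice, bob):.0%}")  # 50% with these toy answers
```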
**#57: Advocate Against Subsidies And Tax Breaks For Local Corporations**

America’s state and local governments hand out roughly $95 billion in tax breaks, grants and other forms of economic development subsidies every year. That’s enough money to fully fund the 11 smallest state government budgets, combined. It’s a market-distorting wealth transfer enabled by voters’ fears that without subsidies, all the jobs and prosperity will go someplace else. So long as those fears persist – and they’ve been cultivated by the political and corporate interests that benefit from this crony capitalism – it will be virtually impossible to implement evidence-based policy reforms. That’s why the Center for Economic Accountability (CEA) works to change the way people think and feel about economic development. This year, we’re taking on the challenge of improving the quality of local media coverage of economic development deals across the country. Currently, local news coverage tends to be dominated by pro-subsidy viewpoints and lacks critical context about costs and risks. That’s why we’re looking for support to develop and distribute the “Skeptical Reporter’s Guide to Covering Economic Development,” a resource for local journalists who want to get the story right but need help getting past press release talking points to the real story. The Guide will preemptively answer the questions we regularly get asked by reporters and help them uncover the “who, what, where, when and why” of corporate welfare. For more, visit economicaccountability.org/skepticalguide/.

**#58: Convert Waste Heat To Energy**

Waste heat is one of the greatest untapped energy resources available, and data centres emit huge amounts of it. NovoPower is a Montreal startup developing systems to enable liquid-cooled data centres to self-generate up to 10% of the power they need for just 4-5¢/kWh, which would reduce their costs and go a long way toward reducing our collective GHG emissions. No competing solutions exist for data centres. Once these systems have been brought to market, there are many possibilities for expansion, ranging from aluminum smelters to cruise ships to food processing. NovoPower is seeking equity financing. See www.novopower.ca for more details. Or write to raphals@novopower.ca.

**#59: Rapid Replications Of Newly Published Papers**

We aim to shift incentives in social science via rapid replications of top newly-published papers, to help combat the replication crisis. We have been awarded an ACX grant to cover a pilot version of this project, but if all goes as well as expected, we will be in need of more funding upon completion of the pilot. The initial plan is to select from the most prestigious psychology journals. When new issues are released, we'll randomly select a newly-published paper. As long as the cost of replicating it is below some threshold, we'll attempt a rapid replication, in addition to scoring it on commonly accepted standards of good research practice, and we will quickly release the results (after the original research team has a chance to give comments). When researchers are submitting to top journals, our project will greatly increase the probability that they will be replicated, hence shifting their incentives. This is unlike previously existing replication projects, which are backward-looking only and hence don't change incentives. Over time, we hope to shift the incentives of journals as well, as repeated replication failures or use of poor practices will hurt their reputations, whereas a high replication rate and use of good practices will increase their prestige. Additionally, we plan to celebrate and promote the work of scientists using good practices. This model, if successful at shifting scientific incentives, could be expanded to other sciences beyond psychology. [Contact spencer.g.greenberg@gmail.com]

**#60: Empower People To Understand And Reform Public Policy**

PolicyEngine is a tech nonprofit that empowers people to understand and reform public policy. Last year, we launched our open source UK web app (https://policyengine.org), which lets anyone see their benefit eligibility and tax liability, and then calculate the personalized and society-wide impacts of changing tax and benefit rules. Policymakers from multiple parties use PolicyEngine to improve their institutional decision-making, and individuals are using it to explore policy reforms and hold leaders accountable. Our founders are Max Ghenis, a US-based former Google data scientist and MIT-trained economist who previously founded the UBI Center basic income research organization, and Nikhil Woodruff, a former data scientist on leave from an MSc in Computer Science at Durham University in the UK. Our board of advisors includes economists with experience in academia, think tanks, and government, as well as tech leaders. Now we're seeking $100,000 to build PolicyEngine US over six months. We're fiscally sponsored by the PSL Foundation (https://psl-foundation.org), a 501(c)3. We've provided more information at https://proposal.policyengine.org and you can reach us at max@policyengine.org.

**#61: Hobby Research On Universal Darwinism**

I'm Peotr Zagubisalo. For some years I have tried to make progress on a hobby research task within the Universal Darwinism and Open-Ended Evolution research programs (different points of view on the same problem): 1) open-ended natural selection of interacting code-data-dual algorithms as a property analogous to Turing completeness (github.com/kiwi0fruit/ultimate-question/blob/master/articles/oens_of_algorithms.md); 2) the simplest artificial life model with open-ended evolution as a possible model of the universe
(open-endedness means that evolution doesn't stop at some level of complexity but can progress further, to intelligent agents: github.com/kiwi0fruit/ultimate-question/blob/master/README.md); 3) novelty emergence mechanics as a core idea of any viable ontology of the universe (github.com/kiwi0fruit/ultimate-question/blob/master/articles/novelty.md). After I failed to make progress in creating a mathematical model and got burned out, I switched to writing promotional articles once a year, as enthusiasm builds up, which I publish on GitHub and Reddit. Or I write directly to people who might be interested. THE GOAL IS TO FIND ANOTHER ACTIVE RESEARCHER FOR THIS TASK. With sufficient monthly funding, I will be motivated and will write promotional articles significantly more often. It should be more than ~$150 to count as a must-have hobby. My Patreon: https://www.patreon.com/peotrzagubisalo This research direction is interesting to people, as seen in this Reddit post: https://www.reddit.com/r/compsci/comments/97s8dl

**#62: Commentaries On Greek And Latin Literature**

Greek and Latin literature for all! My project is a series of commentaries in the Pharr style popularized by Geoffrey Steadman (geoffreysteadman.com). I am now working on an edition of book four of Virgil's Georgics. Future projects will include: the Passion of Saints Perpetua and Felicity; Plato's Gorgias; the books of Augustine's Confessions; and the comedies of Terence. For the edition of Virgil's Georgics, I estimate needing $1200 to pay an undergraduate Classics major $15/hour to help me compile vocabulary lists. My purpose for the project is, first, to help my own students have a more satisfying experience in their Greek and Latin courses and, secondly, to encourage anyone who has serious interest, but limited time, to read ancient Greek and Latin literature. Like Steadman, I intend to self-publish my commentaries, selling paperback editions for ≈ $15/copy, while making free PDF versions available on my website (andrew-beer.com).

**#63: Support Human Cryopreservation**

I'm Emil Kendziorra, a medical doctor, ex-cancer researcher, and tech entrepreneur. I founded Tomorrow Biostasis and the European Biostasis Foundation to improve the quality of human cryopreservation and make it more affordable. In fact, I don't plan to do anything else until I die. After one year, we're the fastest-growing provider worldwide and ready to open a Switzerland-based research institute that we built in 2021. If you're interested and want to support the topic: donations (registered non-profit) to fund research, or investments (social venture) to scale and make the procedure more affordable, are possible. Read more: https://emilkendziorra.medium.com/ or reach out to emil@tomorrowbiostasis.com - happy to answer any questions.

**#64: Make New York An EBike City**

Hi! I’m Matt. I think NYC should become the world’s first ebike city—we’d do everything with ebikes and turn streets into parks. I’m raising $200k to get to the launch of the first neighborhood. If you have any interest in chatting about donating (contributions of any size are appreciated), please email me at matt@mattrichman.net.

**#65: Test The Ethics Of Foreign Interventions In India**

I am a PhD Candidate at the University of British Columbia researching the demand for (and ethics of) foreign intervention in India. Charities and researchers rarely measure what people think about their interventions, and even more rarely does that measurement truly reflect people's preferences.
Moreover, many consider foreign intervention (even NGOs' development programs) a form of colonialism unwanted by the locals. To remedy that, I will contact local politicians in India -- as if I were working for an NGO -- and ask whether they would like to sign up their communities for different kinds of interventions provided by different institutions. To make sure I can test a few interventions with a large enough sample size, I need 15,000 USD in extra funding. The funding will mainly go towards hiring and training phone surveyors and conducting complementary data collection on the characteristics of those local politicians. [Contact me at deivisangeli@gmail.com]

**#66: Help Fund Eyesight Restoration Surgery**

I was hit hard by Covid-19 and lost my job - not homeless, but just barely having a place to stay. For 2 years now I have been looking for a decent job that can sustain the food, shelter, and clothing options in life. I had worked in a corporate setting since 2004, and hope to find a decent job where I am currently located. If I were to be independent and on my own, finding work is the key to sustaining myself daily. My father is retired as a long-time employee, same as my mother. They sustain their lives, beyond their low monthly retirement fund, via servicing work that takes home about $5 per day. My father now has an eyesight problem that is affecting his way of life. But there is a light of day: it can be operated on. With little money and nobody to turn to, I do not have the ways and means to help them or lend a money-making hand, nor can we rely on anyone in return. I propose to have this submitted, and hope it can be considered; any funds would be appreciated deeply. [You can reach me at collaborativemedian@protonmail.com]
# Why Do I Suck?

I recently ran a subscriber-only AMA, and one of the most frequent questions was some version of “why do you suck?” My commenters were very nice about it. They didn’t use those exact words. It was more like “I loved your articles from about 2013 - 2016 so much! Why don’t you write articles like that any more?” Or “Do you feel like you’ve shifted to less ambitious forms of writing with the new Substack? It feels like there was something in your old articles that isn’t there now.” There was a lot of similar discussion on [this one year retrospective subreddit thread](https://www.reddit.com/r/slatestarcodex/comments/sfl7wr/one_year_of_acx_what_are_your_favourite_posts_and/).

The evidence that I’ve gotten worse at blogging is mixed. I asked about it on a reader survey six months ago, and got this:

[reader survey results chart]

Most people think my quality is about the same, although the minority who do see a difference mostly lean towards “worse”. Still, a lot of people think I suck. If only to fend off the inevitable future AMA questions, I should probably speculate about why that is.

**1: You have your whole life to write your first book, and one year to write your second**

This is a publishing industry proverb; your first book gets to use all the ideas you developed over the course of a lifetime, and then they expect you to write an equally good book the next year. I started SSC at age 28. By that time I already had well-developed thoughts on lots of stuff. Over the course of five hundred essays, I explained most of them to you. Now I’m still learning things and refining my thinking. But not always at the rate of two essays per week.

**2: The rationalist community was really great**

It still is! But in the same sense that I was clearing a personal backlog of unwritten-up ideas, the rationalist community was clearing a backlog of scientific and philosophical ideas sitting in journals or obscure old books that it turned out were really interesting to a lot of people. The early Internet provided a critical mass where people interested in cognition and math and the future could suddenly all share the parts of the puzzle they knew about with each other and make rapid progress. Eliezer Yudkowsky, Robin Hanson, Nick Bostrom, and other intellectuals all had their own backlog of stuff which had probably been published in journals or something but which the wider world had yet to appreciate. I was the biggest-name blogger who was sitting around listening to them talk about it, so I got access to a stream of amazing content that most people didn’t know about. There was a time when “bets are a tax on bullshit” or “words are cluster-structures in thingspace” were new and exciting ideas. There was a time when nobody had heard of the replication crisis unless they happened to be reading the medical journals where John Ioannidis was publishing. The rationalist community scooped all this stuff up, broke it down into easily digestible bits, and put it in one place. I happened to be sitting in that place, which meant I had the privilege of transmitting it to many of you.

**3: Some things have genuinely gotten better**

Everything’s relative. In 2015 I was - no offense - surrounded by morons, which made me look like a leading light. I think the media has genuinely improved! When I read the articles on [that poverty and EEGs study](https://astralcodexten.substack.com/p/against-that-poverty-and-infant-eegs), my first thought was “this is the kind of piece I would have expected to see in 2015, not today”.
Sure enough, I wrote the kind of jaded debunking I would have written in 2015, and the sort of people who liked my 2015 essays liked it. Nowadays I think there are many good science bloggers, and the media has gotten embarrassed enough times that it will sometimes run a take by someone who knows what they’re talking about before publishing it. In the same way, I see fewer people outright denying the existence of genetics, totally failing to understand AI risk, or utterly bungling basic concepts in risk and probability. (Is this just a function of my media consumption? Maybe I learned how to find better sources and now I never read anyone stupid enough to need correcting. Genuinely not sure!) You could argue this represents a failure on my part: the zeitgeist has caught up to what I knew in 2015, but I haven’t learned new things to keep me ahead of the zeitgeist. Seems plausible. Half of what I know, I know from the Less Wrong Sequences; the other half, from a basic medical school education. But nobody else explains things quite like Eliezer, and I’m sure as heck not going back to med school. **4: I no longer feel the same burning need to criticize wokeness** It would be presumptuous to say I was the first liberal to criticize wokeness, so I’ll retreat to the less arrogant claim that my anti-wokeness was autochthonous. If other people were saying the same things, I didn’t hear about them. I invented it independently. My experience was basically that the commanding heights of society had suddenly gone insane and were saying crazy stuff, and *literally nobody was pushing back against this*. [I hated having to get involved](https://www.lesswrong.com/posts/CEGnJBHmkcwPTysb7/lonely-dissent), but somebody had to and no one else would, so I accepted the cost to my mental health and kicked the hornet’s nest. I was an early adopter here for two reasons: First: in basically every other way, I am an extremely unfashionable person. But in this case, somehow I ended up near the top of the [barberpole model of fashion](https://slatestarcodex.com/2014/04/22/right-is-the-new-left/). I felt like all my friends were social justice warriors, back when other people described barely knowing one or two. So I got annoyed with them early and rebelled against them early. Second: I hate conforming. Hate hate hate it. As Mencken said, “it’s not worth an intelligent person’s time to be in the majority, *by definition* there are already enough people to do that.” Expressing a majority viewpoint feels like punching down, or like kicking an underdog. I’ll do it if I have to, because you should still defend the truth even when it’s popular, but I don’t enjoy it. So back when it seemed like everyone was an SJW (*which apparently was earlier for me than for anyone else!!*) my natural inclination was to push back. But it seems like I must *still* be near the top of the barberpole - because while everyone else is freaking out about wokeness, I’m starting to feel like all my friends are anti-woke. Who’s woke anymore? Are there really still woke people? Other than all corporations, every government agency, and all media properties, I mean. Those don’t count. Any real people? I guess I know one or two SJWs. But I also know one or two Catholics. Doesn’t mean they’re not the intellectual equivalent of out-of-place artifacts. And that means my natural I-hate-saying-whatever-the-majority-says instinct kicks in whenever I’m tempted to criticize wokeness. I could write about something something critical race theory in school.
But first of all, Jesse Singal, Freddie de Boer, and Bari Weiss have probably already written things on it and they probably all did a better job than I would. Second of all, probably the electorate has already figured out it’s bad and is planning to vote out everyone involved. Third of all, do I really want to spend my life reminding other unwoke people that dumbing down math classes and using the extra time to force kids into classes where they [chant prayers to the Aztec gods](https://www.city-journal.org/calif-ethnic-studies-curriculum-accuses-christianity-of-theocide) instead is actually bad? Don’t get me wrong, it *is* bad. But Cicero had Catiline, and Lincoln had Stephen Douglas. I’m hardly the equal of either, but I would like to think I’m cool enough to deserve a worthier foil than the Aztec-prayers-in-school crowd, who everyone else also hates. Also: in 2010, I didn’t believe in God, but I think I mostly avoided being one of those loud smug atheists who everyone hated. I looked at an extremely false and oppressive philosophy that large institutions were forced to pay lip service to, and I thought “well, this sucks, but maybe I don’t have to spend literally all my time rehashing the same critiques of it that every other thinking person has, to an audience of people who are already convinced and have heard them all a thousand times”. I feel like whatever personality quirk of mine made that decision saved me a lot of retroactive embarrassment, and I want to nurture and encourage it. So here we are. I continue to post some vaguely anti-woke stuff ([1](https://astralcodexten.substack.com/p/movie-review-dont-look-up), [2](https://astralcodexten.substack.com/p/too-good-to-check-a-play-in-three), [3](https://astralcodexten.substack.com/p/contra-smith-on-jewish-selective)), but I’m trying to have it be more meta-level, or at least about the things that fall through the cracks of the many, many other people amply covering this field. Don’t worry - if I think there’s something important and under-explored, I will still write about it. **5: Sometimes the bastards do grind you down** Lately I’ve been finding it helpful to think of the brain in terms of tropisms - unconscious structures that organically grow towards a reward signal without any conscious awareness. This is my explanation for why so many smart intellectuals, upon being thrust into punditry superstardom, lose all their good qualities and turn into partisan hacks (many such cases!). The positive reinforcement provided by tens of thousands of people saying nice things about them whenever they repeat the party line becomes impossible to resist, and reshapes their brain into whatever form keeps the retweets coming. My anxiety helps me resist this failure mode, but at the cost of another: if I write something that gets a thousand fans and two haters, my natural inclination is to think “Aaagh! Two haters! I must never write again!” It’s never been bad enough to *actually* stop me writing, but it does gradually erode off some of the more idiosyncratic features of my writing in favor of blander styles nobody objects to. This is the opposite of what I want. If every fan pays me $100 and every hater has no ability to take money away from me, then 1,000 fans and ten million haters makes me $100,000, and 950 fans and zero haters gives me less than that. I’m not exactly in this for the money, but I’m in it for a lot of things that follow the same dynamics, so I’d love to stick to more polarizing and unique styles.
Every time a choice is above the waterline of conscious awareness, I try to stick to the unique polarizing things. But ask Freud how high the waterline of conscious awareness is sometime. Even for the best writers, “style” is a giant black box, and below the waterline it’s the tropisms driving the bus. Related: blogs are in an awkward middle ground between “personal diary” and “newspaper of record”. The bigger they get, the more they get treated (should get treated?) as newspapers of record, which makes it harder to do personal diary things. A simple example: suppose I look over vaccine effectiveness data and find something that doesn’t make sense. In a personal diary or a small blog, I can easily write “today I was looking over the vaccine data, it didn’t make sense to me, yours, Scott”. In a large blog or newspaper of record, that speculation takes on aspects of a speech act: “Well-known blogger questions vaccine data!” if not “Local doctor says vaccine data is garbage!”. That makes it tougher to explore random thoughts without having a good sense of where they’ll end up. If you have a small blog, and you have a cool thought or insight, you can post your cool thought or insight. People will say “interesting, I never thought of that before” and have vaguely positive feelings about you. If you have a big blog, people will get angry. They’ll feel it’s insulting for you to have opinions about a field when there are hundreds of experts who have written thousands of books about the field which you haven’t read. Unless you cite a dozen sources, it will be “armchair speculation” and you’ll be “speaking over real academics”. If anyone has ever had the same thought before, you’re plagiarizing them, or “reinventing the wheel”, or acting like a “guru”, or claiming that all knowledge springs Athena-like from your head with no prior influences. I try really hard to block or ignore these people when I spot them, but they do a little bit of psychic damage each time. **6: Simulated annealing** Maybe I’m using [this term](https://en.wikipedia.org/wiki/Simulated_annealing) wrong. I mean the thing where if you’re doing an optimization problem, you start by making big jumps to explore the macro-landscape of the solution space, then as time goes on make smaller and smaller jumps to explore the micro-landscape of whichever high-reward region you’ve settled upon, until you finally end up at some local optimum. I’ve always assumed humans do something like this. As a teenager, your identity changes a mile a minute. Today you’re goth! Tomorrow you’re prep! The next day you decide to get a tattoo and major in journalism! You’re a communist! An anarcho-socialist! A Bakuninist! A Bokononist! Then as time goes on you gradually “figure yourself out” and make smaller and smaller jumps until you become old and stodgy and fixed in your ways. It would be arrogant to say the reason I make fewer large updates now than I did at age 28 is because I’ve solved all the big problems. But I think I’ve found solutions for big problems that satisfy *me*. My jumps are smaller now, less “oh, I changed my mind about whether there’s a God” and more “let’s explore this sub-sub-cranny of utilitarianism”. This blog is an intellectual travelogue, and as my journeys and expeditions become less exotic, it probably becomes less interesting for some of my readers. Someone less into machine-learning metaphors and more into leftism than I am (20-year-old me could easily have gone down that road!)
might say I’ve grown too comfortable and sold out and joined the Man. Same result: smaller jumps. **7: Emerging bloggers and big-name bloggers have different comparative advantages** Emerging bloggers’ big advantage is speaking truth to power, because they have low downside and high upside. Low downside because they’re unlikely to become a Twitter main character - most big accounts and publications won’t get too many clicks from ruining your life at that level, and only the most vicious will try it. High upside because if you do a good job, you’ll get famous. Famous people are already famous whether they take giant risks or not. I’m not saying I’m a coward who deliberately avoids controversial topics now that I have enough haters to try to punish me for them. I’m saying that the tropisms do their part underneath the waterline, and the juicy controversial blog post ideas I used to have just never show up in my mental inbox. But big-name bloggers have comparative advantages too. I’ve found an increasing amount of my time taken up by what I think of as community projects: the grants program, the book review contest, the meetups. Emerging bloggers don’t have the option to do those things, and realistically I’m going to do more good by funding important charities, highlighting new voices, and helping build strong communities than by posting yet another hot take. I realize this is kind of eat-your-seed-corn-ish - the community only sticks around because they’re expecting interesting blog posts - but I hope I provide some of those too. I’m just saying the optimal object-level-posts/community-building balance has shifted a little bit towards community as I grow. Also, apparently sometimes I can now affect the real world. My blog had a very slight but nonzero influence on at least one country’s coronavirus policies. Once you know you can do that, you start optimizing pretty heavily for that, even if that means saying a lot of things which bore the majority of your readers. It could be worse. I once talked to a very prestigious journalist who said he sometimes knows exactly which Biden administration official he’s writing a particular article to catch the attention of. If anyone else likes it, that’s just an added bonus. Talk about a comparative advantage! **8: Intellectual progress** I’m probably not going to blog about abortion. I know it’s an important issue, I know there are lots of subtle points on both sides, but I feel like I covered every conceivable argument and counter-argument and counter-counterargument long ago. It’s just no longer interesting. The same is true of religion vs. atheism, capitalism vs. communism, and a bunch of other things. I am bored of those debates. If I forced myself into them, I would do a bad job. A natural intellectual progression is to start with big questions, then once you’ve picked a side, move on to higher-resolution ones. I feel like I’ve gone as far as I’m going to on the “is capitalism or communism better, please solve in 2000 words or less?” question, but that’s opened up opportunities to explore smaller sub-areas like [developing country industrial planning](https://astralcodexten.substack.com/p/book-review-how-asia-works). I think it’s natural for younger people to continue to want to debate the really basic questions. And I think as I get bored of those questions and do other things, it’s natural for those people not to find me as interesting. 
There’s a more arrogant-sounding version of this argument: I think I’m smarter and more thoughtful than I was in my 20s. Some of the good ideas I came up with in my 20s now feel extremely basic, to the point where I’m surprised other people found them helpful. If the discourse wants ideas at that level of basic-ness, I’m no longer producing them - it would feel like talking down to people. I realize it’s self-serving to write a post on why you suck and transition to “maybe I’m just too good for everyone”. But I think I’m more sophisticated than I was ten years ago, and people ten years ago seemed to find me the right level of sophistication, so maybe lack of sophistication sells. **9: Answers to other common related questions** *A. Do you suck because you sold out by moving to Substack?* This doesn’t match my internal experience. Also, people who think I suck mostly think this started (and/or bottomed out) a few years before I moved to Substack. Some of them even very kindly say I’ve gotten better recently ([1](https://www.reddit.com/r/slatestarcodex/comments/sfl7wr/one_year_of_acx_what_are_your_favourite_posts_and/huub9c0/), [2](https://www.reddit.com/r/slatestarcodex/comments/sfl7wr/one_year_of_acx_what_are_your_favourite_posts_and/huv9pei/)). *B. Do you suck because you moved to California, with its climate of conformist liberalism?* This doesn’t match my internal experience, although the timing lines up (2017). I would protest that I don’t interact with other people enough for my location to have much effect on me. *C. Do you suck because the New York Times brouhaha scared you into submission?* This doesn’t match my internal experience; you’ll have to decide how much weight that carries for you. *D. Do you suck because the censorious establishment has become too powerful and that scared you into submission?* This doesn’t match my internal experience; you’ll have to decide how much weight that carries for you.
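Postscript for readers who haven’t met simulated annealing: here is a minimal sketch of the loop section 6 above is gesturing at. The one-dimensional “landscape” function and every constant are arbitrary illustrations of mine, not anything from the post:

```python
# Simulated annealing, in miniature: a high "temperature" permits big
# random jumps early on; as it cools, jumps shrink and worse moves are
# accepted less often, until the search settles into a local optimum.
import math
import random

def quality(x):
    # Toy one-dimensional landscape to optimize (purely illustrative).
    return -(x ** 2) + 10 * math.cos(x)

x = random.uniform(-10, 10)   # start anywhere at all
temperature = 10.0
while temperature > 0.01:
    candidate = x + random.gauss(0, temperature)  # jump size tracks temperature
    delta = quality(candidate) - quality(x)
    # Always accept improvements; accept regressions with a probability
    # that shrinks as the temperature drops (Metropolis rule).
    if delta > 0 or random.random() < math.exp(delta / temperature):
        x = candidate
    temperature *= 0.95        # cool: smaller and smaller jumps
print(f"settled at x = {x:.2f} (a local optimum, not necessarily the best one)")
```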
Scott Alexander
47927663
Why Do I Suck?
acx
# Motivated Reasoning As Mis-applied Reinforcement Learning Here’s something else I got from [the first Yudkowsky-Ngo dialogue](https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty): Suppose you go to Lion Country and get mauled by lions. You want the part of your brain that generates plans like “go to Lion Country” to get downgraded in your decision-making algorithms. This is basic reinforcement learning: plan → lower-than-expected hedonic state → do plan less. Plan → higher-than-expected hedonic state → do plan more. Lots of brain modules have this basic architecture; if you have a foot injury and walking normally causes pain, that will downweight some basic areas of the motor cortex and make you start walking funny (potentially without conscious awareness). But suppose you see a lion, and your visual cortex processes the sensory signals and decides “Yup, that’s a lion”. Then you have to freak out and run away, and it ruins your whole day. That’s a lower-than-expected hedonic state! If your visual cortex was fundamentally a reinforcement learner, it would learn not to recognize lions (and then the lion would eat you). So the visual cortex (and presumably lots of other sensory regions) doesn’t do hedonic reinforcement learning in the same way. So there are two types of brain region: basically behavioral (which hedonic reinforcement learning makes better), and basically epistemic (which hedonic reinforcement learning would make worse, so they don’t do it). But it’s a fuzzy distinction. Suppose that out of the corner of your eye, you see a big yellowish blob. Is it a lion? To find out, you’d have to turn your head. Turning your head is a good idea and you should do it. But it’s going to involve a pretty decent chance that you see a lion and then your day is ruined. Turning your head is a behavior and not a theory, but it’s a pretty epistemic behavior. Do you do it or not? I think in this situation most people [would head-turn](https://www.goodreads.com/quotes/477569-like-one-who-on-a-lonely-road-doth-walk-in). But it looks a lot like a class of problems people actually have trouble with - eg they’re pretty sure they’re behind on their taxes, so they dread opening their budgeting program to check, and then their finances just get worse and worse (Roko Mijic calls this an [“ugh field”](https://www.lesswrong.com/posts/EFQ3F6kmt4WHXRqik/ugh-fields)). Speculatively, maybe taxes are such a novel situation that they get spread across different brain architecture types: some of them end up on nonreinforceable architecture, other parts on reinforceable architecture. It can’t be 100% reinforceable, or else you could train yourself into thinking your taxes were completely done and no IRS nastygram could ever convince you otherwise. But if it’s 5% reinforceable, it could at least teach you the behavior of not checking. Motivated reasoning is the tendency for people to believe comfortable lies, like “my wife isn’t cheating on me” or “I’m totally right about politics, the only reason my program failed was that wreckers from the other party sabotaged it”. In this model, it’s got to be what happens when you try to run epistemics on partly-reinforceable architecture. Checking whether your political program worked or not involves a lot of behaviors analogous to head-turning: what sources to check, how much attention to pay to each. It also involves purely epistemic behaviors, like deciding how hard to update on each contrary fact, or whether or not to make excuses. 
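To make the reinforcement mechanics concrete, here is a minimal sketch of the three cases above: a reinforceable plan module, a lion detector that must not be reinforceable, and a 5%-reinforceable tax-checking behavior. This is my toy illustration of the post’s model, not anything from the dialogue; all module names and numbers are made up:

```python
import random

def reinforce(value, reward, expected=0.0, lr=0.2):
    """Basic hedonic reinforcement: shift the value by the reward
    prediction error, clamped to [0, 1] so it stays a propensity."""
    return max(0.0, min(1.0, value + lr * (reward - expected)))

# 1. Behavioral module: propensity to execute the plan "go to Lion Country".
go_to_lion_country = 0.9
for _ in range(10):
    go_to_lion_country = reinforce(go_to_lion_country, reward=-1.0)  # mauled
print(f"go to Lion Country: {go_to_lion_country:.2f}")  # trained down - good

# 2. Epistemic module: if "that's a lion" were reinforced the same way,
# every ruined day would train the detector toward blindness - bad.
see_lion = 0.9
for _ in range(10):
    if random.random() < see_lion:                   # detector fires...
        see_lion = reinforce(see_lion, reward=-1.0)  # ...and the day is ruined
print(f"recognize lions: {see_lion:.2f}")  # decays, and the lion eats you

# 3. Partly-reinforceable behavior: checking your taxes is only 5% exposed
# to the hedonic signal, but that still slowly teaches "don't check".
check_taxes = 0.9
for _ in range(100):
    if random.random() < check_taxes:                # you open the program...
        check_taxes = reinforce(check_taxes, reward=-1.0, lr=0.2 * 0.05)
print(f"open the budgeting program: {check_taxes:.2f}")  # the "ugh field"
```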
Maybe thinking about politics - like doing your taxes - is such a novel modality that the relevant brain networks get placed kind of randomly on a bunch of different architectures, and some of them are reinforceable and others aren’t. Or maybe evolution deliberately put some of this stuff on reinforceable architecture in order to keep people happy and conformist and politically savvy. This question - why does the brain so often confuse what is true vs. what I *want* to be true? - has been bothering me for years. I think this explanation is obvious, almost tautological. I get the impression that Eliezer and Roko have both known it for ages, but it was new to me. If there’s other research on which parts of the brain are / aren’t reinforceable, or how to run your thoughts on one kind of architecture vs. the other, please let me know.
Scott Alexander
46504475
Motivated Reasoning As Mis-applied Reinforcement Learning
acx
# Predictions For 2022 *I didn’t let myself check prediction markets when making these forecasts since that would spoil the fun. I also only permitted myself at most five minutes of research on any one question.* ***See the bottom of the post for a contest/survey.*** **US/WORLD** 1. Biden approval rating (as per 538) is greater than fifty percent: 40% 2. At least $250 million in damage from a single round of mass protests in US: 10% 3. PredictIt thinks Joe Biden is most likely 2024 Dem nominee: 80% 4. …thinks Donald Trump is most likely 2024 GOP nominee: 60% 5. Beijing Olympics happen successfully on schedule: 99% 6. Major flare-up (worse than past 5 years) in Russia/Ukraine conflict: 50% 7. Major flare-up (worse than past 10 years) in Israel/Palestine conflict: 5% 8. Major flare-up (worse than in past 50 years) in China/Taiwan conflict: 5% 9. Honduran ZEDEs legally crippled to the point where no reasonable person would invest in them further: 5% 10. New ZEDE approved in Honduras: 30% **ECON/TECH** 11. Gamestop stock price still above $100: 30% 12. Bitcoin above 100K: 20% 13. Ethereum above 5K: 20% 14. Ethereum above 0.05 BTC: 90% 15. Bored Ape floor price [here](https://www.coingecko.com/en/nft/bored-ape-yacht-club) below current price of $203K: 40% 16. Dow above 35K: 90% 17. ...above 37.5K: 40% 18. Inflation for the year below five percent: 90% 19. Unemployment below five percent: 50% 20. Google widely allows remote work, no questions asked: 50% 21. Starship reaches orbit: 90% **COVID** 22. Fewer than 10K daily average official COVID cases in US in December 2022: 20% 23. Fewer than 50K daily average COVID cases worldwide in December 2022: 1% 24. >66% US population fully vaccinated (by current standards) against COVID: 70% 25. India's official case count is higher than US: 5% 26. Medical establishment reverses course and officially says any of Vitamin D, HCQ, or ivermectin is actually effective against COVID: 1% 27. FDA approves a COVID indication for fluvoxamine: 60% 28. Some new variant not currently known is greater than 25% of cases: 60% 29. Most people I see in the local grocery store 12/31/22 are wearing masks: 60% 30. Masks still required on domestic flights: 60% 31. CDC recommends that triple-vaxxed people get at least one more vax: 70% 32. China has fewer than 100,000 COVID cases this year (official estimate): 30% **COMMUNITY** 33. [redacted]: 80% 34. No new (non-baby) residents at our housing cluster: 80% 35. No current residents leave our housing cluster: 80% 36. [friend] stays in Indiana: 90% 37. [friend] is in a primary relationship: 30% 38. [friend] is in a primary relationship: 30% 39. [friend] is in a primary relationship: 20% 40. [friend] is dating [friend]: 60% 41. [friend] has [job]: 30% 42. [friend] has published at least one issue of their EA journal: 95% 43. [friend] still works at [job]: 30% 44. [friend] is pregnant (or has given birth): 80% 45. [friend] is pregnant (or has given birth): 70% 46. [friend] is pregnant (or has given birth): 40% 47. [friend] is still working at [job]: 80% 48. [friend] gets engaged: 40% 49. [friend] takes on additional medical work beyond his job for the Board: 50% **PERSONAL** 50. I have a child: 20% 51. I still live in my current house: 95% 52. I’ve broken up with someone I’m seriously dating: 5% 53. At least three dates with a new person: 30% 54. I have started physical construction of an ADU: 40% 55. ...or bought a tiny house instead of an ADU: 20% 56. I'm playing in a D&D campaign: 20% 57. I go on at least one international trip: 60% 58.
I continue my current exercise routine, with at least one cycle in Q4 2022: 60% 59. I weigh less than 185 lbs for most of Q4 2022: 50% 60. I take some substance I haven't discovered yet at least 5 times in 2022 (testing exempted): 30% 61. [redacted]: 20% 62. The Twitter account I check most frequently isn't one of the five I check frequently now: 20% 63. I make/retweet at least 25 tweets between now and 2022: 40% 64. I have written at least 5 chapters of a new novel: 40% 65. [redacted]: 30% 66. [redacted]: 50% 67. [redacted]: 70% 68. [redacted]: 20% **WORK** 69. Lorien has 150+ patients: 40% 70. 200+ patients: 10% 71. I write at least five more Lorien pages: 40% 72. [redacted]: 70% 73. [redacted]: 80% 74. I have switched medical records systems: 10% 75. I have changed my pricing scheme: 20% 76. I make a time-off coverage agreement with someone **BLOG** 77. ACX is making more than $400K: 80% 78. ...more than $500K: 50% 79. ...more than $600K: 30% 80. At least one post gets more than 300 likes: 80% 81. I run another Book Review Contest: 90% 82. I go to at least 6 meetups in 6 different cities: 60% 83. I run a survey *or* am extremely prepared to run one in January: 80% 84. I finally finish posting the analysis of the remaining birth order results: 60% 85. I run another ACX Grants round with at least $100,000 moved: 70% 86. I add at least two more dictators to the Book Club: 80% 87. I’m still the top-ranked blog in Substack’s “Science” category: 70% **PREDICTION MARKETS** 88. No new real-money prediction market becomes bigger than Polymarket: 70% 89. Manifold Markets is still alive and active: 30% 90. New legal US real-money prediction market at least half as big as Kalshi: 5% 91. New illegal but easy-to-use market satisfying the above: 20% 92. I post my scores on these predictions before 2/1/23: 80% *These next two sections are based on Vox’s [22 Predictions For 2022](https://www.vox.com/future-perfect/22824620/predicting-midterms-covid-roe-wade-oscars-2022) and Matt Yglesias’ predictions in his [Predictions Are Hard](https://www.slowboring.com/p/predictions-are-hard) post. In both cases, inspired by Zvi, I’ve given the original predictor’s estimate, then either stuck with it, or bought/sold to some other level. This is kind of unfair, because I get to see the original predictor’s thoughts and they don’t get to see mine - also, I’m a few weeks later than they are, and in a few cases that gives me extra knowledge. So:* **VOX PREDICTIONS** 1. Democrats will lose their majorities in the House and Senate (95%): SELL TO 90% 2. Inflation in the US will average under three percent (80%): HOLD 3. Unemployment in the US will fall below four percent by November (80%): SELL to 60% if they mean *in* November, otherwise hold 4. Supreme Court will overturn Roe v. Wade (65%): SELL to 60% 5. Stephen Breyer will retire from the Supreme Court (55%): N/A 6. Emmanuel Macron will be reelected president of France (65%): HOLD 7. Jair Bolsonaro will be reelected president of Brazil (55%): SELL to 50% 8. Bongbong Marcos will be elected president of the Philippines (55%): BUY to 60% 9. Rebels will not capture Addis Ababa (55%): N/A 10. China will not reopen its borders in the first half of 2022 (80%): BUY to 90% 11. Chinese GDP will continue to grow for the first 3/4 of the year (95%): SELL to 90% 12. 20% of US kids between 0.5 and 5 years old will get at least one COVID vaccine by year's end (65%): HOLD 13. WHO will designate another Variant Of Concern by year's end (75%): HOLD 14.
12 billion COVID shots will be given out globally by 11/2022 (80%): HOLD 15. At least one country will have less than 10% of people vaccinated with two shots by 11/2022 (70%): BUY to 95% 16. A psychedelic drug will be decriminalized/legalized in at least one more US state (75%): HOLD 17. AI will discover a new drug promising enough for clinical trials (85%): HOLD 18. US govt will not renew the ban on funding gain-of-function research (60%): HOLD 19. The Biden administration will set the social cost of carbon at $100/ton or more (70%): HOLD 20. 2022 will be warmer than 2021 (80%): HOLD 21. Kenneth Branagh's Belfast will win Best Picture (55%): SELL to 30% 22. Norway will win the most medals at the 2022 Winter Olympics (60%): HOLD *While I agree things don’t look good for the Democrats, 95% chance they lose both houses of Congress implies 97.5% chance of losing each house, which seems too high. I’m smashing the BUY button as hard as I can on “at least one country will fail to get to 10% vaccination rate” - there are a lot of countries, and as far as I know North Korea is refusing all vaccines out of general evilness. Although I’m not supposed to check betting markets, Dylan writes that he checked the betting markets for the Academy Awards, saw a 30% chance that Belfast would win, but he thinks the number is more like 55%. I know nothing about movies, but where markets and a puny mortal disagree I’ll go with the market. I’ve rated a few options N/A because they’ve already resolved or had big updates since Vox made their predictions.* **YGLESIAS PREDICTIONS** 1. Democrats lose both houses of Congress (90%) HOLD 2. Democrats lose at least two Senate seats (80%) HOLD 3. Democrats lose fewer than six Senate seats (80%) HOLD 4. Nancy Pelosi announces retirement plans (70%) HOLD 5. Stephen Breyer does not retire (60%) N/A 6. Some version of Build Back Better passes (60%) HOLD 7. Joe Biden is still president (90%) HOLD 8. At least one Biden cabinet-rank official resigns (70%) HOLD 9. No military conflict between the PRC and Taiwan (a worryingly low 90%) HOLD 10. New U.S. sanctions on Russia (70%) HOLD 11. Saudi Arabia and Israel establish diplomatic relations (60%) SELL to 50% 12. Fewer U.S. Covid deaths in 2022 than in 2021 (80%) BUY to 90% 13. Emmanuel Macron re-elected (60%) HOLD 14. Traffic light coalition exploits loopholes to get around the constitutional debt brake (70%) HOLD 15. No recession in 2021 (90%) SELL to 80% 16. Liz Cheney loses primary (80%) HOLD 17. Some version of USICA passes Congress (70%) HOLD 18. Lula elected president of Brazil (60%) SELL to 50% 19. China officially abandons Covid Zero (70%) HOLD 20. Fewer U.S. Covid-19 deaths in 2022 than in 2020 (80%) BUY to 90% 21. Additional booster shots of mRNA vaccines authorized for seniors (80%) HOLD 22. November 2022 year-on-year CPI growth is below 6% (70%) BUY to 80% 23. November 2022 year-on-year CPI growth is above 4% (70%) SELL to 50% 24. The Fed ends up doing more than its currently forecast three interest rate hikes (60%) HOLD 25. Russia does not invade Ukraine (60%) HOLD 26. Viktor Orbán loses power in Hungary (60%) HOLD 27. Sinn Fein becomes the largest party in the Northern Ireland assembly (60%) HOLD 28. The U.S. and Canada reach an agreement on softwood lumber (70%) HOLD 29. Democrats go down at least one governor on net (60%) HOLD 30. 
The unemployment rate stays between 4 and 5% (70%) SELL to 60% if you mean 12/22, to 40% if you mean it never gets outside that range at all *Yglesias is mostly forecasting things he understands much better than I do, so I’m mostly holding. I’ll go hard on “fewer US COVID deaths in 2022 than previous years” because Omicron seems less deadly and there’s less “dry tinder” of unvaccinated people; I could be wrong if a non-Omicron lineage spits out a really severe new variant. I’m pretty confused by Matt predicting high inflation for next year; my understanding is the Fed and markets predict lower; I totally admit Matt knows more about inflation than I do but in order to make things interesting I’ll bet against him anyway. I’m equally confused about his prediction of a pretty narrow band of unemployment rates; if I understand right, last month was already outside his band (3.9%) and so he’s betting no future month will repeat that. Again, Matt knows more econ than I do but I’ve sold anyway.* --- Sam Marks and Eric Neyman have kindly turned this tradition into a contest. If you want, you can go to [their form](https://docs.google.com/forms/d/1QLfg4WgmOobgcw-I1eHLi8U6yxivggHH5G__3TQ4sug/viewform?edit_requested=true) and predict the same set of questions I did (minus the personal and redacted ones). Use the same rules I did: no peeking at the prediction markets, and no more than five minutes of research per question. If you don’t know anything about a question, you can leave it blank and it will get filled with my prediction by default. The winner will get eternal glory (realistically: mentioned on an Open Thread) and a free ACX subscription. - Read the contest [description/rules](https://docs.google.com/document/d/1HZ3UC9JIuhFdlVM_xYtj60a6ba7elWGiAnROMobkFXM/edit) here - Give feedback on the contest [here](https://docs.google.com/forms/d/14TY66nT7Q4EGb2hauCubPY5P2eFGsN1gUZ5Z9VBk5kM/viewform?edit_requested=true) - And once again, the form where you take the contest is **[here](https://docs.google.com/forms/d/1QLfg4WgmOobgcw-I1eHLi8U6yxivggHH5G__3TQ4sug/viewform?edit_requested=true)**
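A footnote on the Vox commentary above: the step from “95% chance of losing both houses” to “97.5% chance of losing each” assumes the two chamber outcomes are independent and equally likely. Making that assumption explicit (my gloss, not part of the original post):

```latex
% If losing the House and losing the Senate are modeled as independent
% events with the same probability p, then
\[
  p \cdot p = 0.95 \quad\Longrightarrow\quad p = \sqrt{0.95} \approx 0.975 .
\]
% Positive correlation between the two races (a shared national swing)
% would let each single-chamber probability sit lower - as low as 0.95
% under perfect correlation.
```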
Scott Alexander
47551031
Predictions For 2022
acx
# Open Thread 209 This is the weekly visible open thread. Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed. This one is odd-numbered, so be careful. Otherwise, post about anything else you want. Also: **1:** Several good comments on the Poverty and Infant EEG post, eg [Rahien Din](https://astralcodexten.substack.com/p/against-that-poverty-and-infant-eegs/comment/4696231): > *I can actually offer some operator-level expertise! I am a board-certified pediatric epileptologist, and can describe what EEG actually is and what it is purported to measure. And why this study is bullshit. I hit the comment length limit so this will have to be threaded out* [[read more](https://astralcodexten.substack.com/p/against-that-poverty-and-infant-eegs/comment/4696231)] But see also this response [by one of the study coauthors](https://astralcodexten.substack.com/p/against-that-poverty-and-infant-eegs/comment/4780873). But also, a few people including [Lehm point out](https://astralcodexten.substack.com/p/against-that-poverty-and-infant-eegs/comment/4692262) I was sloppy on my description of how National Academy of Sciences membership affects the requirement for peer review. And [AMac78 on](https://astralcodexten.substack.com/p/against-that-poverty-and-infant-eegs/comment/4696736) how much money the participants made. **2:** And thanks to everyone who participated in the [Classified Thread](https://astralcodexten.substack.com/p/classifieds-thread-12022). A few highlights: - Nectome [hiring a lab assistant](https://astralcodexten.substack.com/p/classifieds-thread-12022/comment/4732056) for brain preservation work - ML engineer [looking for work in AI alignment](https://astralcodexten.substack.com/p/classifieds-thread-12022/comment/4763162) (and other ML engineers: [1](https://astralcodexten.substack.com/p/classifieds-thread-12022/comment/4731852), [2](https://astralcodexten.substack.com/p/classifieds-thread-12022/comment/4732494), [3](https://astralcodexten.substack.com/p/classifieds-thread-12022/comment/4740093)) - Rob Miles [needs volunteer writers](https://astralcodexten.substack.com/p/classifieds-thread-12022/comment/4734944) for his AI alignment explainer project - Steve Hsu’s Genomic Prediction [needs coders and data scientists](https://astralcodexten.substack.com/p/classifieds-thread-12022/comment/4781465) - Rachel was my wedding photographer and is very good, [hire her for your photos](https://astralcodexten.substack.com/p/classifieds-thread-12022/comment/4731847) - Jason Crawford’s holding a [Progress Studies conference](https://astralcodexten.substack.com/p/classifieds-thread-12022/comment/4744004) in Austin March 4-6. - Lots of cool [people to date](https://astralcodexten.substack.com/p/classifieds-thread-12022/comment/4730631) - Or if dating isn’t your style, how about a nice [calculus textbook](https://astralcodexten.substack.com/p/classifieds-thread-12022/comment/4732442)? If you missed the Classifieds thread, you can always use the [Classifieds Forum](https://www.datasecretslox.com/index.php/board,10.0.html?PHPSESSID=7c9608e384c671ec80bb37f858b3392b) on the unofficial ACX fan bulletin board Data Secrets Lox.
Scott Alexander
47961301
Open Thread 209
acx
# Classifieds Thread 1/2022 This is the bimonthly (?) classifieds thread. Advertise whatever you want in the comments. I’m experimenting with being more organized this time, so please respond to the appropriate top-level comment: **Employment, Dating, Read My Blog** (also includes podcasts, books, etc)**, Consume My Product/Service,** or **Other.** Remember that posting dating ads is hard and scary. Please refrain from commenting too negatively on anyone’s value as a human being. I’ll be much less strict about employers, bloggers, etc. And here are some extra dating profiles of people I know and like: [Aella](https://docs.google.com/document/d/1vQnJj-6MPqFEm8nhBGjF5OgPu-jXMU2edJ8n72nO9bA/edit#heading=h.z4iryi3svxtx) (f / 29 / Austin) [Alyssa](https://docs.google.com/document/d/1ng1LS2q5BeUT1CYq1IlBiKg4Gfbcwr-wCXiP1ktjduo/edit) (trans-f / 30 / SF) [Damon](http://daystareld.com/blog/date-me/) (m / 34 / undetermined) [Linch](https://docs.google.com/document/d/1QiQif_RZZDUOLtESNAxUt3mB4kFwTMFRUeys5714Wjg/edit) (m / 28 / SF) [Nate](https://docs.google.com/document/d/1poHD6VMsKyk9vt2mky2qlwGiIzKpiZBRdea7vjMbY0M/edit) (m / 30ish / Bay? Austin?) [Rebecca](https://www.datasecretslox.com/index.php/topic,2373.0.html) (f / 30 / San Jose) [Shaked](https://shakeddown.wordpress.com/2021/09/16/1976/) (m / 30 / NYC)
Scott Alexander
47828084
Classifieds Thread 1/2022
acx
# Highlights From The Comments On Health Care Systems I’m experimenting with making this more structured this time, so: **Section I:** Collection of comments on US health care **Section II:** Drug pricing, and does the US subsidize the rest of the world? **Section III:** Why are health economics so unlike other economics? **Section IV:** Giant pile of comments by readers who live in different countries explaining their own countries’ health systems, and their experiences with them. **I.** GummyBearDoc [writes](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4588258): > I want to push back on the assertion Scott made that "Certainly rich people in America get good health care." After he published this book in June 2020, Ezekiel Emmanuel published an article in JAMA IM (link: <https://bit.ly/3nGRHL8>) called "Comparing Health Outcomes of Privileged US Citizens With Those of Average Residents of Other Developed Countries." He wanted to test the commonly stated trope that a feature of the US healthcare system is that the rich here get the very best care in the world. To do that, he looked at outcomes across six benchmark diseases (heart attack, colon cancer, breast cancer, infant mortality, maternal mortality, and pediatric acute lymphocytic leukemia). He compared outcomes for white people in the 1% of richest counties in the US, 5% richest counties in the US, and average outcomes in 12 rich countries (i'm not going to type them all out but they're places like Australia, Canada, and Germany). The results were...not so great for rich Americans! > > While rich people in the US do better than average people in other rich countries with breast cancer, RICH children in the US have outcomes worse than AVERAGE citizens in 11/12 of this group of rich countries. Rich people in America have about the same outcomes after heart attack that average people in other rich countries have. In other words, in most cases, you're about as well off having a heart attack being the average bozo in France (for example) as you are having one in one of the wealthiest counties in the US. I was pretty shocked when I read this paper. > > The reason I think this is important is because I think it's extremely politically useful sometimes to be able to claim that rich people in the US get great healthcare! People like to imagine themselves as more privileged than they are, and think that, if and when they get sick, they'll have access to this incredible care. So we should reframe from "average care in the US is shitty but care for the upper echelons is the envy of the world" to "average care in the US is shitty, and also in the upper echelons we are about the same as the average person in America's peer countries." DoTheMath counters: > This is interesting, but I’m a bit concerned that he chose only 6 outcomes to measure <https://xkcd.com/882/>. This, with the results being absurd, makes me skeptical of the results. > > Related to this, I don’t think he ran any statistical tests. This may not matter because of the large number of people, but it may also matter because he only used n=152 counties. > > Also, why use counties? This should bias you downwards of the true number in your estimate, since counties do not hold all the same income-level people. GummyBearDoc responds: > Hey! Good points. Let me try to address them best I can. Obviously you should be more skeptical about surprising results, especially if you don't have subject matter expertise! But I don't think the study is poorly done. 
I am a doctor and a researcher (not that this makes me any more qualified to judge a good argument from a bad one!) but let me make the best possible case that I can that the results presented in this study are interesting and compelling. > > First, the 6 illnesses seem, a priori, pretty relevant. With the exception of pediatric ALL (which is useful because it is a non-adult condition that relies on specialists for delivery), these are all extremely common conditions. While the xkcd comic is very funny (they always are), I don't know that it's relevant here? They're not cherry picking a small feature of a bigger phenomenon and then claiming that that cherry picked thing is driving the whole phenomenon; rather they're using some representative conditions to try to understand ways in which healthcare in the US may be surprising. > > Second, I think any of the results individually are surprising! Remember, this isn't an association; it's a comparison. If I asked you, based on priors, who has better outcomes after a heart attack, white people in the richest counties in America (average income ~$100k), or people in Denmark (average income ~$51k) (see the supplement for some of this data: <https://bit.ly/3KsUP71>), I think most people would say America? But Americans die about 12% of the time when admitted for a heart attack compared to 10% in Denmark. In other words, the design of the study is HEAVILY biased in the direction opposite of the results we see. > > Why use counties? I think it's because that's the data that was available, which is a limit to a lot of epidemiological research. You draw conclusions as best you can from the evidence you have. That said, any bias within counties should also apply as much to entire countries. Furthermore only considering whites (demographically wealthier), also again biases in the opposite direction of the study findings. > > Finally re: statistics, I'm not sure what you mean? Maybe you mean he didn't publish P values? There are 95% confidence intervals for many of the measurements. Furthermore, because of the design of the analysis, a lack of difference IS a surprising finding; if there are statistical tests that are relevant that you think should have been included but weren't, I'm eager to hear your thoughts! > > So I agree, the results are absurd! It is absurd that, despite spending much more than any other country in the world on healthcare, the richest Americans don't even have access to what would be average outcomes in any of a number of our peer countries. But I don't think there is anything misleading here. DoTheMath now says he is “tentatively convinced” of GummyBearDoc’s claim (good for both of you! [Julia Galef](https://astralcodexten.substack.com/p/book-review-the-scout-mindset) gives you shiny gold stars!) But [Merlot says](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4589888): > I don't really think this is a question that could even be answered usefully in a statistically rigorous way, but I'll add that Peter Attia - a Canadian physician who currently runs a highly specialized clinic for the ultrawealthy in the US - has said something somewhat similar based on his experience (it was in a podcast episode, I don't remember which one, so you're going to have to take my word on it). > > His take was basically that if you're wealthy enough to get care parallel to the formal healthcare system rather than within it, you can get the best care in the world in the US.
But if you're "merely" a millionaire with a Cadillac insurance plan, you'll get worse care than what the average person in Canada gets. > > The take might be incorrect, and it might also be out of date - even pre-pandemic, the last few years have not been kind to Canada's healthcare system. He specifically gave the example of having a heart attack and showing up to an emergency room as a case where it's better to be Canadian, which DEFINITELY feels out of date. But it broadly fits with what I've heard a lot of providers who have practiced in both the US and another developed country say. On a new topic, [Jay W. Smith](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4591196) (who writes [The Bottom Line In Healthcare](https://thebottomlineinhealthcare.substack.com/?utm_source=substack&utm_medium=web&utm_content=comment_metadata)): > No one has mentioned the regressive nature of US healthcare financing. Scott is right that what we call "premiums" are effectively just payroll taxes. For the typical American worker, healthcare-related premiums/taxes aren’t just “big.” They’re bigger than all non-healthcare-related income/payroll taxes combined. Healthcare is THE thing impacting take-home pay. > > -- If we call employer-sponsored health insurance what it is - a tax - then about 70% of taxes taken from the paychecks of a typical American worker with family coverage go to healthcare. If the worker has individual coverage, that number is about 51%. > > -- In hard dollars, the healthcare industry takes about $26,000 from the total compensation of a worker with family coverage whose salary is $50,000 (whose total compensation is actually about $70,000). > > -- The structure and branding of healthcare financing (having the employer pay the bulk of it, and calling it a "premium" instead of a "tax") leads the typical American to grossly underestimate 1) their healthcare costs and 2) their total taxes. > > -- Because the US healthcare system is structured to overpay by 2x, this typical worker is overtaxed by about $13,000. > > More details here: thebottomlineinhealthcare.substack.com/p/health-premiums-vs-incomepayroll I’m not sure it makes sense to call this “regressive” or “an underestimation of total taxes” unless you think of government-sponsored health care as normal. If government-sponsored housing was considered normal, Americans would be “underestimating their total taxes” by not thinking of rent as a tax! Still, I see what he means. [Erik](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4589798) on US cost inflation: > Since the ACA, health insurers in America are limited to making 20% profit on insurance premiums. If they want to make more profits, the only way they can do this is by spending more on healthcare. Not sure why anyone thought this was a good idea. See the 80/20 rule here: <https://www.healthcare.gov/health-care-law-protections/rate-review/> I see the problem, but I’m not convinced this matters. First of all, I think insurers [mostly make less profit than that](https://content.naic.org/sites/default/files/inline-files/2019%20Health%20Industry%20Commentary_0.pdf), so the cap probably doesn’t matter much in real life. Second, cost inflation seems to have decreased (or at least not worsened) since the ACA. Third, no matter what your profit margin is, you’re still always incentivized to spend more money, right?
If your profit margin is 10%, you can make $1 by selling $10 of care; if it’s 20%, you can make $2 by selling $10 of care, and so on. You always want to sell more care! **II.** [Austin](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4601638) (who writes [Acrolectics](https://acrolectics.substack.com/?utm_source=substack&utm_medium=web&utm_content=comment_metadata)) says: > One of the other major differences between the US healthcare system and the EU systems is that the European Commission does drug approvals for all of the EU (and there are other EU-wide paths to getting drugs approved, with approval by any regulatory body being sufficient for the drug to be approved); whereas, the FDA does approvals for just the US (and it doesn't compete with any other US regulators to do the approvals). This gives individual European countries much more leverage than the US would have for negotiating with drug manufacturers. Since most of the cost of drug manufacturing is the research, trials, and approval process, Norway (technically a non-EU member of the EEA, but they still accept approvals given by the European Commission) can offer drugmakers relatively low prices and still have them sell their drugs there because they're already going through the European Commission approval process to sell their drugs in the rest of Europe. If the USA starts setting drug prices particularly low, it will have a much bigger impact on the incentive to fund drug trials through the FDA approval process. This also changes the optics dramatically. There's not going to be a scandal over a drugmaker failing to push their drugs through the approval process the same way there would be over it refusing to sell drugs to a particular country where those drugs are approved. (And I think it's generally accepted that the FDA's process is slower, more stringent, and more expensive than Europe's.) > > Relatedly, another thing that is always missing from discussions of healthcare spending is the extent to which Europe and Canada freeload on the rest of the world for medical research. Below are the total medical R&D expenses by country for years for which I could easily find data. The key takeaway is that even though US GDP is only 13x Canada's in 2018, US medical research spending was approximately 45x theirs. US GDP is only about 20% bigger than Europe's, but the US spent significantly more than 2x as much on medical research as they do (and as far as I can tell from less complete data sources the disparity is growing). > > United States (GDP $21T): 2007: $131B, 2012: $119B, 2013: $143B, 2014: $154B, 2015: $163B, 2016: $173B, 2017: $182B, 2018: $194B > > Japan (GDP $5T): 2007: $21B, 2012: $28B > > All of Europe (GDP $17T): 2007: $56B, 2012: $54B > > Canada (GDP $1.6T): $3B in 2009 and 2010, and $4B every other year since 2007 (rounded to the nearest billion) -- all of the other values are in USD; whereas the Canada numbers are in CAD. > > I think the overall picture looks something like this: > > The incremental cost of manufacturing a pill tends to be pretty cheap. A drug company has to make back its investment in R&D with returns to justify the investment somewhere (which is necessarily a high risk investment with a long time window between the investment and when it starts giving returns which standard economic theory says makes the required returns higher).
But once it makes back its research investment anywhere, it has a pretty big incentive to just sell its drugs everywhere even if it is selling them heavily discounted in some markets relative to others. (The regulations around drugs are a particularly effective form of geofencing that eliminate the incentives that might otherwise exist to charge similar prices in different markets.) > > So we have a drug industry that basically works by researching drugs to sell them in the United States; but then also pushing the drugs through the relatively easier regulatory regimes in the rest of the world to also sell the drugs everywhere else possible where the only investment that those additional sales need to justify recouping is the cost of getting the approvals since all of the research is already done. > > And we end up with a system where drug development is worthwhile if and only if those drugs end up being sold in the United States, and where the FDA has the most stringent approval process, and nobody really has an incentive to change this. American regulators and politicians get disproportionately more power out of this arrangement. Drug companies make their profits. American insurers are able to pass along and distribute the costs across the population so they make their profits too. European countries get their drugs relatively inexpensively, and are existing in a legal context where that is their only real incentive since they really can't increase their ability to regulate drugs because the power that exists there is distributed throughout the EU rather than possessed by the governments of the member states. And the members of the EC probably are a bit unhappy and trying to increase their own ability to regulate things, but they're sufficiently disconnected from the powers of the member states that what they want doesn't really matter. (So Europe has somehow found a way to keep the people who have the most incentive to increase the price of drug R&D there from having their voices heard.) Several people brought up this idea of US drug prices subsidizing the world. There’s some evidence in support: the US [contributes 58%](https://www.rand.org/news/press/2021/01/28.html) of the OECD’s total pharmaceutical spending despite only having [24%](https://www.oecd.org/sdd/01_Population_and_migration.pdf) of the OECD’s total population and [38%](https://data.worldbank.org/indicator/NY.GDP.MKTP.CD?locations=OE) of its total GDP. [This study](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2866602/) has some slightly different data and doesn’t think that US drug companies innovate much more than foreign drug companies, but since most companies sell their drugs in most countries regardless of where they’re based, I don’t know if that proves anything. Some very quick math: the US spends [2.5x more](https://www.rand.org/news/press/2021/01/28.html) on medications than OECD average, but its medication use is exactly average for its population. So total OECD drug spending is 2.5\*0.24 + 1\*0.76 = 1.36x what it would be if the US spent an exactly average amount. So if the US spent an average amount, the total pool of pharmaceutical funding would go down to 74% of current totals - or other countries would have to increase their spending by 36% to even things out. I don’t know if this is the right way to look at things.
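Spelling that quick math out (my own back-of-the-envelope restatement, under the same assumptions, with average per-capita drug spending normalized to 1):

```python
# Assumptions from the paragraph above: the US is 24% of the OECD's
# population and spends 2.5x the OECD average per capita; everyone else
# spends exactly the average. Illustration only.
us_pop, rest_pop = 0.24, 0.76

current_pool = 2.5 * us_pop + 1.0 * rest_pop   # 0.60 + 0.76 = 1.36
average_pool = 1.0 * us_pop + 1.0 * rest_pop   # 1.00

print(average_pool / current_pool)   # ~0.735: the pool drops to ~74%

# To keep the pool at 1.36 with the US now contributing only 0.24, the
# rest of the OECD would have to spend 1.36 - 0.24 = 1.12 instead of 0.76:
print((current_pool - us_pop) / rest_pop - 1)  # ~0.47, i.e. a ~47% increase
# The missing 0.36 units equal 36% of the all-average pool, which is one
# way to read the "increase their spending by 36%" figure in the text.
```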
(one part of Austin’s comment I’m not sure about is that I think I remember from the book - can’t find the exact quote right now - that it was claiming the FDA was faster and laxer than the corresponding European regulatory body) But Peak Oil’s Tail [writes](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4606240): > One huge difference between healthcare in the US and elsewhere is advertising. The US has a whole industry dedicated to marketing drugs and medical services directly to patients, which just doesn't exist in most other countries. > > I think this explains a big part of the cost disease. Ads for drugs are particularly common on daytime TV or cable news, I guess because they're watched by elderly people who tend to be sick and have Medicare. Most pharmaceutical companies actually spend more on marketing than on R&D, so high drug prices aren't really subsidizing new meds. They're subsidizing Fox and CNN waging the culture war. > > And it's not just drugs. There are law firms, insurance companies, even hospitals promoting their emergency rooms. The whole purpose of these ads is to increase demand for healthcare, so as well as pushing up costs directly they probably lead to a lot of unnecessary doctors' visits and prescriptions. This claim (drug companies spend more on marketing than research) also [seems to be true](https://www.pharmacychecker.com/askpc/pharma-marketing-research-development/#!). But we also know Americans don’t buy more drugs than people in other countries, and we are actually unusually good at using generics rather than brand-name. So what is all that marketing buying? Maybe convincing us to still buy as many drugs as other countries even though it costs more here (but isn’t this kind of circular)? Or maybe it’s zero-sum spending convincing us to buy Company X’s drug instead of Company Y’s? If that’s true, it seems like you could potentially lower US drug costs without having a negative effect on other countries just by cutting out marketing expenses, with no downside. And [Benjamin Jolley](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4605560) (author of [Ramblings Of A Pharmacist](https://benjaminjolley.substack.com/?utm_source=substack&utm_medium=web&utm_content=comment_metadata)) writes: > The US does a really poor job of keeping (advertised) drug prices low because we a) have like 1M different purchasers of drugs, and b) those purchasers use a variety of different middlemen to actually purchase the drugs. We pay the middlemen a percentage of the drug costs. > > For clarity, the dollar value chain is like this: Purchaser -> Insurer -> PBM -> Pharmacy -> Wholesaler -> Manufacturer. > > All of the steps other than Purchaser (and small pharmacies now) make more money the higher drug prices go. Insurers can keep 15-20% of total premiums for their internal administrative costs and their profit margin under the Affordable Care Act's "Medical Loss Ratio" rule. That means that the only way for Aetna's profits to increase from 2021 to 2022 is for total "medical losses" to increase. They can keep 20% of $70B in 2021, and 20% of $80B in 2022, implying that drug+hospital+doctor costs HAVE to increase by $8B during the year, or else premiums can't go up and Aetna can't make more money. > > The PBM step generally keeps an administrative cost per prescription plus a % of the cost of branded drugs. These companies are: CVS/Caremark, Express Scripts, OptumRx, Prime Therapeutics and a lot of minors.
> They negotiate "rebates" with manufacturers. This basically works like this: Humalog and Novolog are effectively equivalent drugs. They cost ~$300/month without insurance. The PBM will say to Lilly "That's a nice Humalog you've got there. I'm going to need $150/month as a check back to me, or else every patient on my plan gets Novolog unless the doctor fills out 500 pages of paperwork to get Humalog AND the patients pay $200 of the cost." Lilly says "ok fine." According to the PBM lobbying organization, PCMA, most of the rebate money goes back to purchasers, but IMO that just makes the problem worse, because it makes purchasers complicit in the game by sending them checks that they use to reduce their premiums instead of reducing the cost of drugs to their plan members.
>
> Consider for a moment that CVS/Caremark by itself is the PBM for ~112M people in the USA. That's more than the entire population of Germany. If you think that Germany pays less for drugs than CVS/Caremark...
>
> Pharmacies generally get paid ~1-2% of the cost of branded drugs as their total compensation. On generic drugs, a typical pharmacy will get ~$10 per prescription on average as their compensation (to pay staff and rent etc).
>
> Wholesalers like Cardinal, McKesson and AmerisourceBergen (together controlling 95% of drug distribution) generally make their money by marking up generic drugs to pharmacies, and by taking a ~2% cut of the price of branded drugs.
>
> Manufacturers make money by selling drugs for more than it costs to make them, including paying off all of the folks in the middle out of their revenues.
>
> The actual prices realized in the US for branded drugs ARE likely higher than in other countries, but the differential is almost certainly not as large as it appears. [grosstonetbubble.com](http://grosstonetbubble.com) is a site that talks about the size of the wedge between prices paid to manufacturers by wholesalers for drugs, and prices actually realized by manufacturers after accounting for rebates and other discounts paid to the PBMs and insurers.
>
> Also... consider for a moment that NET drug prices (after rebates and other discounts) have DECLINED in the US for the past 3 years. Anyone that talks about "skyrocketing drug prices" and doesn't pay attention to the middlemen is just lying.

ConnGator [writes](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4596508):

> If I ran the zoo, I would say that drug companies must set prices in other countries as a ratio of their per-capita GDP to America's.

**III.**

Bram Cohen [writes](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4601136):

> The business about health care being a bizarro world where the normal rules of economics don't apply is true, but it's true in that it's inherently broken. To have efficient markets you need good consumer information, the ability to easily comparison shop and change vendors, and easy entry and exit of vendors from the market. If you wind up in a coma and are brought to an emergency room, you can't open your eyes, discuss what the treatment will be and how much it's going to cost, do appropriate research and decide for yourself whether the doctors' recommendations for treatment are appropriate, decide that the amount being asked for is outrageous, find a potential competitor, have them open up a competing ER next door, and check in there. Every step of that can't happen.
> The seemingly weird and artificial things like government-negotiated prices are compensating for the normal mechanisms of efficient markets not functioning. In the US I've had the experience of getting quoted a price for a drug at a pharmacy, commenting that it was completely outrageous, getting argued with that the insurance company was paying most of it, asking what it would be out of pocket, getting quoted a price lower than the copay, then glaring at the pharmacist, who suggested swiping a magical card she had through the machine, which got a price even lower. Under such circumstances, the government putting its foot down and declaring that there can only be one price, which it negotiates on behalf of consumers, is completely reasonable.

Delesley [writes](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4622843):

> The fundamental reason why the United States is so bad is that health care breaks the iron law of capitalism, which is that the person who uses a good or service should be the person who chooses it, and who pays the bill. Doctors & patients together choose a medical treatment, but insurance pays the bill. Individuals use insurance, but don't actually choose or pay for it; that's done by their employers, and the plans are so complicated that individual consumers can't make heads or tails of them anyway. Hospitals offer treatments, but just try asking a hospital how much it costs, or shopping around; they flat out won't tell you. It truly is upside-down bizarro world. Plus, the bureaucracy is impenetrable. Have you ever noticed how long it takes a pharmacist to fill a prescription? Taking the pills off the shelf takes 30 seconds. Calling the insurance company and waiting on hold -- that takes 15 minutes or more.
>
> Wrt other countries, there's not a lot of difference between regulation and socialism. Sure, in Germany and the Netherlands insurance is provided by private companies (my wife is Dutch). But if all companies are required by law to charge the same rates, and offer the same coverage, then it doesn't really matter whether a private company or the government pays for it. A really easy way to lower costs in the U.S. would be to require all insurance companies to offer the same coverage and costs as Medicare. All hospitals would be forced to accept that, because otherwise they'd have no customers.
>
> I lived in the UK for many years, and used the NHS. It was great. No paperwork, no insurance cards, and no weird bills for ludicrously high amounts when you leave. As a patient, I loved the simplicity of it. But there is a catch.
>
> The NHS hospitals get a fixed amount of money per year, and they have to treat every patient that walks through their doors. Hospitals are non-profit, and success is measured by how many people they can treat given a limited budget, not by how much profit they can make by doing lots of expensive treatments. That's the main difference between "capitalism" and "socialism" in this case -- it's the metric you use to measure success. NHS hospitals do a cost/benefit analysis on every treatment, and focus their efforts on the low-cost, high-benefit treatments. If you want the hospital to do an MRI, or something fancy, then they will not agree unless your condition is life-threatening. But if you just want basic every-day care, they're pretty good at that. I think the life-expectancy numbers bear that out; high-cost low-efficacy treatments do not improve outcomes all that much at the aggregate level.
> And the NHS is really good at pinching pennies in smart ways. When my wife needed to go to the hospital, I called the NHS hotline, and they asked if she was able to walk. Not far, but yes, she could. So they sent a taxi-cab to my door, which dropped us off at the hospital front entrance. It was totally free -- the fare was paid for by the NHS. In the United States, they'd send an ambulance, for 10X or 100X the price, then bill it to insurance, who would then bill me a co-pay of some hundreds of dollars. That doesn't help anybody.
>
> In a way, the NHS actually satisfies the iron law of capitalism better than the US system does. It works because the hospitals/doctors both make the medical decisions, and they foot the bill.

But [Arnold Kling](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4582358) thinks market-based health care is salvageable:

> My 2006 book, *[Crisis of Abundance](https://amzn.to/3rGYWDW)*, has real health care economics. On the 23rd, on my Substack, I'll try to summarize the main points.

He kept his promise, so [here’s his Substack summary](https://arnoldkling.substack.com/p/some-health-economics-for-scott-alexander).

**IV.**

And now, a giant pile of comments about specific countries!

**Germany,** starting with [Mark](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4582320):

> The "secret" [of our German way of healthcare] seems to be to have the insurers compete on low administration costs. (Some smaller insurers who run into trouble here are regularly "integrated"/"bought" by bigger ones - and some "new" ones try their luck growing on streamlined procedures.) The funny extras they throw at us to lure us into swapping are very secondary.
>
> The docs have much less hassle cuz it is all one price (or two: "insured" vs. "privately insured"). Imagine if a doc had to deal with dozens of prices, with some insurers not covering certain stuff others do! Nightmare! And a hell of a lot of extra costs!
>
> The docs love privately insured patients, as they can charge them around 75% more, plus for some less cost-efficient stuff. (Dermatologists loved looking all over my skin when I was young and private. Nowadays ...)
>
> Thanks for letting me know I enjoy the best health care in the world. Fun fact: many Germans with private insurance (self-employed or high salary) stay unmarried, as their partner + kids would be forced into their contract and make it much more expensive.

And [Hightower](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4647984):

> I'm privately insured in Germany and it's GREAT (for me, that is. Although I do subsidise the publicly insured people to a small extent, so not exclusively for me).
>
> And I had [trouble with healthcare] in the UK - I didn't want to wait as long as I would've had to for a specialist while living in the UK, so I looked for private options (which my German insurance, under certain circumstances, like those I was in, would cover).
>
> It was possible, but there was little choice, and it was a lot more expensive than in Germany.

And [Thomas Kehrenberg](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4586510):

> Germany has two parallel systems. One is the single-payer-through-private-companies system that you described. In that system (called something like "mandatory insurance" in Germany) you pay a certain percentage of your salary and get health care. The other is a "private" system, but there are still lots of rules, I think.
> Doctors get more money when they treat these private patients, but this is also capped at, I think, 2.5x the money they would get from a mandatorily insured patient. Also, doctors are only paid for a certain number of mandatorily insured patients per quarter – they can treat more, but they won't get paid for it – whereas they always get money for the privately insured patients. The result is obviously that doctors give preferential treatment to those with private insurance. But also that privately insured patients are subsidizing those with mandatory insurance, in a way.
>
> You are allowed to get private insurance if you earn more than a certain amount or if you are self-employed. The system is either-or: either you have the mandatory (single payer) insurance or you have the private one. Though, if you have the mandatory insurance, nothing stops you from telling your doctor you are privately insured so that they send you an invoice which you then have to pay yourself. It's just that most people can't afford that (or rather, it's risky).
>
> (As a side note, if you look at this from the Hansonian perspective on healthcare, then it doesn't necessarily result in better outcomes for those with private insurance. It's true that those patients get \_more treatment\_, but a lot of it is useless and only happens so that the doctor can earn more money. As someone with only mandatory insurance, you will likely get less treatment, but you should always be able to get the really life-saving care, so you might come out basically the same. My mother has private insurance because she is a teacher – teachers always have private insurance for historical reasons – and every time she goes to the doctor they find some treatment they could do, whereas my father, who has mandatory insurance, gets told to drink water and rest.)

And [Vlad](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4592894), who writes [Vlad’s Notebook](https://writevlad.substack.com/?utm_source=substack&utm_medium=web&utm_content=comment_metadata):

> As a data point, here's my experience in Germany. I've been living in Berlin for 1.5 years.
>
> As an employee, I pay 8% of my gross salary for health insurance. This is just 50% of the total cost, with my employer paying the other half. As long as you make less than €64,350 gross a year (and the threshold grows every year) you are compulsorily insured with public insurance. If you make more, you can switch to private. If you're young and fit, it will cost you much less than public insurance. But if you get in trouble and want to switch back to public, it can be very difficult. However, if your income falls back under the ever-growing threshold, you are forcibly switched back to public.
>
> Even with public insurance, you must choose a provider, but like Scott said, they are all very similar, so the choice is easy. I chose my provider because they speak English. I can call them or write them at any time and they are very helpful.
>
> I don't have health problems, but I've been doing a lot of medical exams. The system is smooth and efficient. I always come out very impressed. Like most people, I book my appointments on a popular online platform. I am not tied to a specific family doctor: I can go to anyone who has a free spot, so there's always a way to get an appointment within a few days. I can also directly book visits with specialists without going through a family doctor.
>
> The clinics ooze a sense of wealth and high-quality healthcare.
> Wait times are short and the doctors all speak English. Processes are streamlined as needed: in one case, a doctor wanted to send me to a specialist, so he just gave me a piece of paper and told me to see his friend downstairs. Ten minutes later I was done. In another case, a family doctor spared me a visit to a specialist and accelerated the waiting time for some physicals so I could get a vaccine sooner.
>
> I am very prudent and careful about my health. I find that doctors here are understanding and willing to conduct exams. They have time to listen and engage and don't seem overworked.
>
> There are almost no added costs. This includes dental work: I've had two or three cavities removed. Oh, and mental health: you can get at least 80 hours of therapy on the public scheme, with a therapist of your choice.
>
> The drawbacks: the health scheme is extremely expensive, both for the individual and for the state. And the system was mostly designed with employees in mind, so it can be extremely burdensome for freelancers and the self-employed. If you're self-employed, you need to pay 14.6% of your gross income! And this on top of all the other taxes, which are very high in Germany!

And [demost](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4593862):

> Having lived in both Germany and Switzerland, I think it is a red herring to put them into different categories just because of the formal status of their insurance companies.
>
> The main point is: in both systems, all insurance companies must offer exactly the same product, called basic insurance. It is not up to the company to decide which cases they cover or when they pay. This is regulated, and border cases are resolved by other means. (Some very minor variations are allowed in Germany, almost none in Switzerland.) And they can't reject any customer. In such a situation, the price will not depend on the type of company.
>
> What actually makes a difference are very different things. In Germany, if I have disease X, then the doctor/hospital will be paid a fixed amount to treat X. But they have to treat it ("Behandlungspflicht"); they can't reject a difficult case, even though they lose money on it. In Switzerland, the doctor is paid proportional to the amount of time/effort spent treating X. In both cases, there are long and detailed price tables, all of which are independent of the insurance. I think this is the main reason why the Swiss system is more expensive than the German one. (It doesn't show in the table, but I suspect that it would show if you only considered basic insurance. Also, drugs are probably cheaper in Switzerland, relative to income.) For better or worse, Swiss doctors are not optimizing so hard for efficiency. I like the Swiss system better. Swiss doctors don't keep cutting me short. But it's expensive.
>
> Of course, companies can offer things that go beyond basic insurance, but this is a completely different market, and probably much closer to the US system. But those are luxuries, not necessities. For details on the two countries (I was referring to public insurance in Germany, which is only part of the system), there are excellent descriptions of those systems by Lars (on Germany) and Er Matto (comparing Germany and Switzerland) in the comments.

**The Netherlands**, starting with Majuscule:

> I’m an American and lived in the Netherlands for several years (2012-2018). The Dutch and also the expats loved to complain about it, but I thought the Dutch system was pretty great.
> By far the best part of it was the transparency. My health insurance actually covered my needs - I never saw a bill. No copays, no surprise mystery bills of the kind I’ve always received in the US, even with “Cadillac” level insurance.
>
> It’s hard to measure expenses and satisfaction across systems because you’re dealing with such different expectations. One common complaint by my fellow American expats was that doctors didn’t “do anything”. This is what the non-interventionist default of Dutch doctors feels like to an American.
>
> But “go home, take some Tylenol and come back if you don’t feel better” is actually quite an effective strategy in this GP-as-gatekeeper model. Most of your patients feel better and don’t come back, as you couldn’t have done anything for them anyway. This keeps costs down and keeps the emergency room just for actual emergencies.
>
> Dutch people’s complaints seemed largely rooted in the perception that Germany and France had better healthcare. I’ve never lived in either place, so I can’t say. But the handful of people I met in the Netherlands who had been seriously sick, e.g. with childhood cancer or a major injury, had much nicer things to say.
>
> See also: the guy I know in the Netherlands who was able to get his severely autistic son into a residential program for children where he could receive 24-hour care. Almost nothing like that exists anywhere else, and I believe it was covered at least partially by insurance. I wonder how many of the satisfaction survey participants even considered the existence of such programs as part of their healthcare system.

[Michael van der Zee](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4598459):

> Dutchman here. It might be good to know that the \*only\* thing for which there is a meaningful market is additional insurance. The basic insurance is completely regulated, with premiums set annually by the government (and discussed in parliament) and regulated deductibles (basically around €350 per year, but you can get a slight discount on your premium if you max out at around €800).
>
> Moreover, it is forbidden by law for healthcare insurers to make any profit. Any profit made has to be returned to the policy holders. It can also not be used to grant bonuses, buy back shares or any such shenanigans. The original idea of the Liberal-Conservative government (which is right-wing for Europe) that abolished the Patient Fund (a government health insurance) was for profits to be introduced at a later point. But since the current system went into effect in 2006, this introduction has been postponed, and probably will be indefinitely, even though we've had several right-wing governments over the last decade. It's just kind of taken as a given by everyone that introducing profits for insurance companies would drive up costs, and it's kind of gross to profit off of a basic human right.
>
> Having said that, the only real bargaining power that insurance companies have, as far as I can see, is contracting hospitals to take their policy holders. There has been some push by both government and insurance companies to use this as a tool to force hospitals to specialize, but I'm not knowledgeable enough to know if this has helped any more than the regular tools of government funding and decision-making to achieve this goal.
>
> The biggest problem Dutch people and doctors have with this system is that there is sometimes a bunch of unnecessary bureaucracy involved.
> For instance, some medications that are quite permanent (like type 1 diabetes medication) need to be re-approved every year by that patient's insurance company, for which the doctor or their assistant needs to send a form.
>
> (As an aside: there is a campaign among medical staff to stamp these documents with a picture of a purple crocodile. Why a purple crocodile? Because a famous Dutch advertisement (ironically, by an insurance company) features a purple crocodile lost by a little girl in the swimming pool, which an obstinate pool employee refuses to hand over, even though it's right behind him, until the mother of the girl has filled out several forms in capitals. Eventually he tells her they can pick it up in the morning between 9 and 10. You can watch it [here](https://www.youtube.com/watch?v=mJipJwDPJ-g); it works quite well even if you don't understand Dutch.)

**UK**, starting with [Chris Allen](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4586520):

> Just to be clear, the UK has a private medical system, including hospitals, totally separate from the NHS, where you can buy top-up medical insurance supplementing the NHS, which allows you to see specialists and get treatment quickly. Emergency cases, though, are usually treated on the NHS. About 10% of the population pay into this or get it through work. It doesn’t seem to cause much resentment. There is little regulation on the cost and content of these plans, though of course the medical care can’t deviate from UK standards. For a family of 4, annual costs are around £2,000. To me the UK system is a good compromise: it looks after the less fortunate with some pretty good health care at a low cost for the country, while allowing those who want a higher standard of care to get it without too much regulation.

**Australia**, via [Patrick](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4586560):

> I think it's worth noting that in Australia the private system only offers some services, with the most specialised procedures (transplants, major trauma, most paediatric specialties etc) only offered through the public system.
>
> The private system we have, imo having worked as a doctor on both sides, is great for bulk surgical procedures like knee replacements but not very good at looking after people who are acutely unwell or have multisystem long-term illnesses. Private medicine has very little oversight (you are treated by one attending/consultant who can basically do whatever they want) compared to the public system, where departments and the presence of trainees act as a kind of peer review, which I think leads to better medical care. If I was seriously sick I would rather be looked after in the public system.
>
> I think there's a fair argument to be made that the private system skims off the healthiest and most straightforward cases and dumps the rest on the public system. I don't know whether this is cost-effective or not - obviously there are efficiencies which come with higher throughput, but there is definitely overservicing and waste in the private system. It's a source of resentment.

**Switzerland**, starting with Spruce:

> As I understand the Swiss system, basic healthcare is mandatory, rates are more or less fixed by the government, and providers must accept any citizen who applies. This avoids any bias in who does or does not get insured. Insurers can offer extra products at their discretion on top of that, from rebates if you use a fitness app to private rooms in hospital if you pay more.
> In Switzerland, you receive your income before both income tax and the healthcare technically-not-a-tax, but you still have to pay both of them.

[Calbear77](https://www.reddit.com/r/slatestarcodex/comments/s825as/book_review_which_country_has_the_worlds_best/hteycds/):

> I don’t see why Scott groups Switzerland with the US rather than with Germany/Netherlands.
>
> The Swiss model prohibits health insurance companies from making a profit on basic plans, the government defines basic health insurance benefits and deductibles/copayments (which are more generous than basic plans in the US), and provider payment rates are set centrally (through negotiations between the government and insurer/provider associations, rather than between individual insurance companies and providers). The main difference seems to be that Germany’s premiums are a percentage payroll deduction while Swiss premiums are flat, with redistributive subsidies for low-income people.
>
> In contrast, the main US private market, which covers about 50% of the population, has employers negotiate with for-profit insurance companies on premiums and benefits for their employees, and the insurance companies then negotiate separately with individual providers on payments. Additionally, a tiny 5% of people buy coverage directly from insurance companies, but this market is largely an afterthought in the overall scheme. Outside of the private market, 35% of the US is covered by single payer-style systems under Medicare (old and disabled people) and Medicaid (low income people). The remaining 10% are uninsured.
>
> The US health care industry would oppose moving to the Swiss model just as strongly as moving to a single-payer system, since it would eliminate profits and the ability to privately negotiate provider payments. These are the two main features which set the US apart, and which are cash cows for the winners of the system. Both of these reforms would effectively socialize the financing of the system, while just outsourcing administration to private entities.
>
> **Edit**: A more American approach could be "Medicare Advantage for All". Medicare Advantage is an alternative people can opt into instead of the single-payer Original Medicare. The government pays a private insurance company to cover you, based on the projected cost that Original Medicare would pay for your health care. Those private insurance companies then negotiate with individual providers for payments. Basically the same concept as charter schools. About 40% of Medicare beneficiaries use this option. To the extent the private insurance companies have lower costs, they can keep those as profits or provide additional benefits. Private insurance companies have an incentive to offer additional benefits to entice people to sign up. This would keep the healthcare industry happy, as they could keep their profits and privately negotiate provider payments. However, for the same reasons, it would not work to rein in health care spending.

I would like to know more about Medicare Advantage.

**Other countries**, starting with [David Roman](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4588534):

> I've lived in the US, China, Australia, Singapore and Spain. I'm really surprised that the book didn't even consider Spain, which has a pretty cheap system, the world's highest life expectancy at birth and, by far, the best health system of all those I've had first-hand experience with.
> No need to tell you much about the US, and Australia's system is, in my limited experience, marginally better. Singapore is hyper-expensive and hyper-effective, and anyone who includes China in such a comparison must be doing it for the laughs: very expensive and second-rate at best in big cities, third-worldist for hundreds of millions in the countryside.

[Alex Rattray](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4591336):

> IMO, the most interesting model I've heard of is Israel's – consumers choose one of 4 non-profit HMOs to belong to, which are paid by the government per person who subscribes. HMOs offer much better incentive models for overall quality of care, and the competition keeps services consumer-oriented (compared to fully-socialized systems like the NHS).

[Loweren](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4580552), who writes [Optimized Dating](https://optimizeddating.substack.com/?utm_source=substack&utm_medium=web&utm_content=comment_metadata), says:

> I found out that Singapore has one of the more reasonable healthcare systems out there, with only 4% of GDP spent on world-class service. Basically, it requires the citizens to pay a small part of the healthcare costs out of pocket, and the rest is financed by taxes. There are a lot of interesting intricacies that make it work well, all detailed in a 16-minute video by a good economics YouTube channel.

[Zursz](https://astralcodexten.substack.com/p/book-review-which-country-has-the/comment/4602500):

> I'm very far from being an expert in health care systems, but I'd like to offer a bit of insight into the Brazilian system, since Scott demanded to know what developing countries do and Brazil is my home country. Brazil is in a very curious position regarding health care: the current Constitution, established in 1988, provides that we will have universal health care provided by the State (Sistema Único de Saúde, or SUS, to keep with the three letters), but at the same time there is an abundant proliferation of private medical practice - hospitals, clinics, insurance companies, and doctors can work outside the universal public system, for profit, and not get paid by the state. This basically means we have at the same time a system with a lot of state-run hospitals and clinics as well as state-employed doctors, other facilities run by private operators but funded by the State (considered part of the universal system), and also 100% private enterprises. This creates a situation in which a very large portion of the middle, upper-middle and upper class all have private insurance - provided by their employers or paid by the user - and the poor have universal coverage under the public system. However, the universal public system ranges enormously in quality: for example, we simultaneously have one of the best vaccination systems in the world, for free, but months of waiting time to get a simple doctor's appointment, or years of waiting to get a surgery. Some types of surgical procedures provided by the state are good and others are not, same with exams, and the state-run hospitals also vary enormously in quality. The users of most private insurance companies also get a lot of low-quality services, albeit with shorter waiting times and more options. A few of the really expensive ones provide quality services for richer individuals. Also, it's worth noting that a lot of the best doctors don't accept insurance payment at all and charge a fee for each appointment.
> So basically poor people get screwed regarding surgeries, appointments, laboratory exams and so on, due to poor quality and waiting times, but at least they can count on some basic quality services for free, such as vaccination, ambulances, and urgent first care. Middle class people fare slightly better, with considerably shorter waiting times and marginally better services, and can also use the public system where it does well. And rich people get very good health care for a reasonably pricey amount.
>
> Brazil also has a much-commended drug price system involving patent breaking, but I don't really know much about it, only that it really makes most drug prices really low.
>
> The political side of things is also very complicated, especially because the left wing blocks any productive discussion of reform, since it considers the public system untouchable and refuses to acknowledge its shortcomings most of the time - or blames them on underfunding and "neoliberal policies". A lot of upper-class left-wing people haven't used the public system once in their life but still consider it perfect. Brazilian things.
Scott Alexander
47644805
Highlights From The Comments On Health Care Systems
acx
# Against That Poverty And Infant EEGs Study

A recent paper claims to have found an [Impact Of A Poverty Reduction Intervention On Infant Brain Activity](https://www.pnas.org/content/pnas/119/5/e2115649119.full.pdf). It’s doing the rounds of the usual media sites, like *[Vox](https://www.vox.com/future-perfect/22893313/cash-babies-brain-development)* and the *[New York Times](https://www.nytimes.com/2022/01/24/us/politics/child-tax-credit-brain-function.html)*:

I was going to try to fact-check this, but a bunch of other people (see eg [Philippe Lemoine](https://twitter.com/phl43/status/1485989325036732417), [Stuart Ritchie](https://twitter.com/StuartJRitchie/status/1486096592503455750)) have beaten me to it. Still, right now all the fact-checking is scattered across a bunch of Twitter accounts, so I'll content myself with being the first person to summarize it all in a Substack post, and beg you to believe I would have come up with the same objections eventually.

Before we start: why be suspicious of this paper? Hundreds of studies come out daily; we don't have enough time to nitpick all of them. Why this one?

For me, it's because it's a shared environmental effect being measured by EEG at the intersection of poverty and cognition. Shared environmental effects on cognition are notoriously hard to find. Twin studies suggest they are rare. Some people have countered that perhaps the twin studies haven't measured poor enough people, and there's a lot of research being done to see what happens if you try to correct for that, but so far it’s still controversial. All that research is being done by cognitive testing, which is a reasonable way to measure cognition.

This study uses EEG instead. I'm skeptical of social science studies that use neuroimaging, and although EEG isn't exactly the same as neuroimaging like CT or MRI, it shares a similar issue: you have to figure out how to convert a multi-dimensional result (in this case, a squiggly line on a piece of paper) into a single number that you can do statistics to. This offers a lot of degrees of freedom, which researchers don't always use responsibly.

People love studies showing that some effect is visible on MRI, or EEG, or some other three-letter acronym. It makes it feel real - you can literally see the effects! In the physical brain! I think this temptation should be resisted. Effects that you can literally see in the physical brain are much rarer than effects that you can detect by asking people stuff, but it's really easy to get artifacts and smudges that you hallucinate into signal.

And finally, people want to discover a link between poverty and cognitive function *so bad*. Every few months, another study demonstrates that poverty decreases cognitive function, it's front-page news everywhere, and then it turns out to be flawed. [This recent analysis](https://www.pnas.org/content/118/44/e2103313118) tried to replicate twenty poverty/cognition priming studies. 18/20 replications had lower effect sizes than in the original, and 16/20 had effect sizes statistically indistinguishable from zero. Most of these studies were vastly worse than the current paper - they were trying to do dumb things with priming, as opposed to this much smarter thing with actual RCTs of childhood environment. Still, this whole field makes me nervous.

None of these things should make us dismiss the study. There's a thin line between a heuristic and a bias. We need some heuristics to figure out which studies to investigate further.
But if we grip them too tightly, [they become biases](https://slatestarcodex.com/2020/02/12/confirmation-bias-as-misfire-of-normal-bayesian-reasoning/), where we doubt any study that doesn't correspond to our pre-existing beliefs and political commitments. I'm just saying that this study has enough yellow flags that it's worth checking out in more depth to see if anything's wrong with it.

(also, the lead author is named Dr. Troller, and I am a nominative determinist)

Getting to the paper itself: it’s called [The Impact Of A Poverty Reduction Intervention On Infant Brain Activity](https://www.pnas.org/content/pnas/119/5/e2115649119.full.pdf). It’s part of a much larger study called [Baby’s First Years](https://www.babysfirstyears.com/) which randomizes some low-income mothers to receive $300/month in extra support. Most of these families were making about $20,000, so this was an increase of about 10-20%. Some past research had shown disadvantaged children had more low-frequency brain waves than other kids, so they decided to test whether they could find this same effect here. They EEG’d 435 one-year-old children who had/hadn’t received the extra money.

Results: differences in the level of beta waves (effect size = 0.23, p = 0.02) and gamma waves (effect size = 0.22, p = 0.04) on the EEG, though no significant difference in alpha or theta waves. They conclude that financial support changes brainwave activity; under the circumstances, it seems reasonable to conclude that this represents some kind of healthier neurodevelopment.

How robust is this finding? All differences lost statistical significance after adjustment for multiple comparisons. What does that mean? Well, remember [that XKCD comic with the jellybeans](https://xkcd.com/882/):

That’s multiple comparisons. If you test 20 different things and get one positive result, that doesn’t mean there’s a real effect; it means you kept doing tests until one of them randomly came out positive because of noise.

Here’s the relevant table. Think of the eight different kinds of EEG the same way you think of the twenty different jellybean colors. In order to trust their positive results, the researchers had to correct for multiple comparisons. The simplest method for this is something called Bonferroni correction, which would have forced them to get a p-value of 0.05/8 = 0.00625. But that would be really harsh; in cases like these, where hypotheses are correlated (ie if poor people have different alpha waves, that makes it more likely they also have different beta waves), you can use a gentler method called Westfall-Young adjustment. The researchers did this here, and it told them that none of their results were significant anymore, which they chose to . . . ignore? I don’t know, the abstract sure does say "infants in the high-cash gift group showed more power in high frequency bands", which sounds like a claim of a positive result.

Maybe it’s because of this:

This graphs the EEG power by frequency of the two different groups. It seems like a pretty big effect in favor of the high-cash babies having stronger high-frequency (and weaker low-frequency) EEGs. Part of the problem with brain imaging studies is you have to find some way to turn complicated multi-dimensional data into a single number you can stick a p-value on. Sometimes that’s hard and you lose some subtlety. This graph shows a pretty obvious difference between the two groups. Can we just say that regardless of the stats, we can eyeball a significant difference here?
Andrew Gelman [says no](https://statmodeling.stat.columbia.edu/2022/01/25/im-skeptical-of-that-claim-that-cash-aid-to-poor-mothers-increases-brain-activity-in-babies/). He gets the raw data and randomizes the treatment variable, ie flips a coin to decide whether kids are in an artificial Red Group or Blue Group. Then he graphs EEG frequencies by group nine times:

On inspection, the graphs still look like there are big differences between the two groups. But these can’t be real, because this time the groups were determined by coin flip - the artificial Red Group and Blue Group don’t have any overall difference in how much money they got. This is just an artifact.

Why do groups with no real difference between them look so different on the graphs? *This is why I hate imaging*. All of your intuitions are always wrong! In this case it seems to be a function of taking an “average” of groups that have a lot of overlap in order to form them into a straight line, but *in imaging there’s always something like this*. (A quick simulation at the end of this section shows how easily pure noise can do this.)

I think this is the strongest evidence against this study: the p-value isn’t significant and the graph proves nothing. But some other people provide other important critiques:

**Stuart Ritchie** [says](https://twitter.com/StuartJRitchie/status/1486096592503455750) that this article was accepted to PNAS under a special deal where “US National Academy of Sciences members get an easier ride to publication”. I see different opinions about exactly what this consists of; Stuart thinks they can “hand-pick reviewers”; [another researcher](https://twitter.com/pdakean/status/1485814820570120192) thinks they “do not have to go through anonymous peer review”.

**Heath Henderson** [says](https://twitter.com/hendersonhl22/status/1485820300348563460) that the study was preregistered to examine only alpha, theta, and gamma waves. But the strongest result (one of the ones that was significant before multiple-hypothesis adjustment) was for beta waves! Usually it’s a big red flag to have your strongest result be something you didn’t preregister; it means you kept rooting around until you found something. Here I’m on the fence about how much to worry, because why *wouldn’t* you study beta waves if you were doing an EEG? But the paper was based on previous research finding differences mainly in alpha and theta waves, whereas this paper “found” “differences” in beta and gamma waves, so I guess this counts as bad.

**Julia Rohrer** [points](https://twitter.com/dingding_peng/status/1486011854010957832) to [this study](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3518069/) on the effect of foster care placement in Romania. Some kids in Romania were randomly assigned to stay in (probably terrible) orphanages vs. be placed in foster care. Despite this probably being a bigger difference in adversity than getting or not getting $300/month, there was no clear difference in their EEGs. However, subgroup analysis suggested an EEG difference for the (very small) group of children who were adopted out before 1 year old, which would fit this study on one-year-olds. *However*, the difference was found only in relative alpha EEG, whereas this study found no difference at all in relative alpha, and its only (pre-adjustment) significant findings were in absolute beta and absolute gamma. In general, I do not get the feeling that previous studies have done a great and ironclad job establishing that EEG measures adversity, or that more powerful EEG waves mean more successful children, or anything like that.
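Gelman's coin-flip check is easy to reproduce in spirit. Below is a minimal sketch - simulated data, not the study's raw numbers, with numpy and matplotlib assumed - of how averaging two randomly-labeled groups of noisy spectra, with no true group effect at all, still tends to produce a pair of cleanly separated curves:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Simulated "EEG power by frequency" data with NO true group effect:
# 435 children, 50 frequency bins. Each child gets a random overall
# amplitude (between-child variation is large in real EEG), which makes
# the frequency bins strongly correlated within a child.
n_children = 435
freqs = np.linspace(1, 50, 50)
shape = 10 / freqs  # generic 1/f-ish spectrum
amplitude = rng.lognormal(mean=0, sigma=0.5, size=(n_children, 1))
power = shape * amplitude + rng.normal(0, 0.1, size=(n_children, len(freqs)))

# Nine "coin-flip" replications: split the same children into artificial
# Red and Blue groups at random and plot each group's average spectrum.
fig, axes = plt.subplots(3, 3, figsize=(9, 8), sharex=True, sharey=True)
for ax in axes.flat:
    red = rng.random(n_children) < 0.5
    ax.plot(freqs, power[red].mean(axis=0), color="red")
    ax.plot(freqs, power[~red].mean(axis=0), color="blue")
    ax.set_yscale("log")
plt.show()
```

The key ingredient is the within-child correlation: each simulated child's whole spectrum is scaled by one random amplitude, so whichever artificial group happens to catch a few more high-amplitude kids has its entire curve shifted coherently upward. How big the gap looks depends on the between-child variation you assume, but the smooth one-curve-above-the-other pattern - the thing that makes the real figure look so convincing - falls out of pure chance.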
Andrew Gelman finishes [his article](https://statmodeling.stat.columbia.edu/2022/01/25/im-skeptical-of-that-claim-that-cash-aid-to-poor-mothers-increases-brain-activity-in-babies/) by warning us not to conclude that cash grants *don’t* affect kids’ EEGs. For all we know, they might, and this study is just underpowered to detect it. That’s fine and I agree. But this study basically shows no effect. We can quibble over whether it might be suggestive of effects, or whether it was merely thwarted from showing an effect by its low power, but it’s basically a typical null-result-having study. The authors should not have reported their result as an unqualified positive, and the media should have challenged their decision to do so rather than uncritically signal-boosting it.
Scott Alexander
47719504
Against That Poverty And Infant EEGs Study
acx
# Bounded Distrust

**I.**

Suppose you're a liberal who doesn't trust FOX News. One day you're at the airport, waiting for a plane, ambiently watching the TV at the gate. It's FOX News, and they're saying that a mass shooter just shot twenty people in Yankee Stadium. There’s live footage from the stadium with lots of people running and screaming.

Do you believe this?

I'm a liberal who doesn't trust FOX News, and sure, I believe it. The level on which FOX News is bad isn't the level where they invent mass shootings that never happened. They wouldn't use deepfakes or staged actors to fake something and then call it "live footage". That would go way beyond anything FOX had done before. Liberals might say things like "You can't trust FOX News on anything, they are 100% total liars", but realistically we still trust them quite a lot on stuff like this.

Now suppose FOX says that police have apprehended a suspect, a Saudi immigrant named Abdullah Abdul. They show footage from a press conference where the police are talking about this. Do you believe them?

Again, yes. While I've heard rare stories of the media jumping in too early to identify a suspect, "the police have apprehended" seems like a pretty objective statement. And once again, faking a police conference - or even dubbing over a police conference so that when the police say some other name, the viewers hear "Abdullah Abdul" - is way worse than anything I've ever heard of FOX doing. Even if I learned that they had done something like this once, I would think "wow, that's crazy" and still not update to believing they did it all the time.

It doesn't matter at all that FOX is biased. You could argue that "FOX wants to fan fear of Islamic terrorism, so it's in their self-interest to make up cases of Islamic terrorism that don't exist". Or "FOX is against gun control, so if it was a white gun owner who did this shooting they would want to change the identity so it sounded like a Saudi terrorist". But those sound like crazy conspiracy theories. Even FOX's worst enemies don't accuse them of doing things like this.

It's not quite that this would be \*worse\* than anything FOX has ever done. I assume FOX helped spread the story that Saddam Hussein was connected to 9/11 and had WMDs, just like everyone else. That's probably a bigger lie (in some sense) than one extra mass shooting in a country with dozens of them, or changing the name and ethnicity of a perpetrator. Certainly it did more damage.

But that's not the point. The point is, there are rules to the "being a biased media source" game. There are lines you can cross, and all that will happen is that a bunch of people who complain about you all the time anyway will complain about you more. And there are other lines you don't cross, or else you'll be the center of a giant scandal and maybe get shut down. I don't want to claim those lines are objectively reasonable. But we all know where they are. And so we all trust a report on FOX about a mass shooting, even if we hate FOX in general.

In a world where FOX was the only news source available, this kind of thing would become really important. People would need to understand that FOX was biased while also basically being able to accept most things it said. If people went overboard and stopped trusting FOX just because it was biased, they might end up in a state of total paralysis, unable to confirm really basic facts about the world.

**II.**

What’s the flipped version of this scenario for the other political tribe?
[Here’s a Washington Post article](https://www.washingtonpost.com/history/2019/07/27/you-know-who-was-into-karl-marx-no-not-aoc-abraham-lincoln/) saying that Abraham Lincoln was friends with Karl Marx and admired his socialist theories. It suggests that because of this, modern attacks on socialism are un-American. [Here is a counterargument](https://www.aier.org/article/was-lincoln-really-into-marx/) that there’s no evidence Abraham Lincoln had the slightest idea who Karl Marx was.

I find the counterargument much more convincing. Sometimes both the argument and counterargument describe the same event, but the counterargument gives more context in a way that makes the original argument seem calculated to mislead. I challenge you to read both pieces without thinking the same.

A conservative might end up in the same position vis-a-vis the *Washington Post* as our hypothetical liberal and FOX News. They know it’s a biased source that often lies to them, but how often? [Here’s a Washington Post article](https://www.washingtonpost.com/politics/2021/12/14/guess-what-there-still-wasnt-any-significant-fraud-2020-presidential-election/) saying that the 2020 election wasn’t rigged, and Joe Biden’s victory wasn’t fraudulent. In order to avoid becoming a conspiracy theorist, the conservative would have to go through the same set of inferences as the FOX-watching liberal above: this is a terrible news source that often lies to me, but it would be surprising for it to lie *in this particular case* in *this particular way*. I think smart conservatives can do that in much the same way smart liberals can conclude the FOX story was real.

The exact argument would be something like: the Marx article got minimal scrutiny. A few smart people who looked at it noticed it was fake, three or four people wrote small editorials saying so, and then nobody cared. The 2020 election got massive scrutiny from every major institution.

The Marx article, if you read it *extremely carefully* with *all* the knowledge you gained from the debunking, doesn’t confidently assert a connection between Lincoln and Marx (except in the headline and subtitle, which are usually written by someone else). The reporter uses phrases like “that *might be* because Lincoln was regularly reading Karl Marx” (in a sentence where you’re expected to think of the hedging as a colloquialism), and “It’s *nearly* guaranteed that, in the 1850s, Lincoln was regularly reading Marx” (the evidence being that Lincoln had been known to read a newspaper that Marx had been known to publish in). It says that Marx sent letters to Lincoln - but fails to mention that a US President gets thousands of letters from everyone, and there’s no evidence Lincoln read Marx’s. It says that a US ambassador told Marx’s Communist group that Lincoln appreciated them - but fails to mention this was part of a form letter, little different from the “JOE BIDEN THANKS YOU FOR YOUR SUPPORT” spam emails I get sometimes. It’s hard for a naive person to read the article without falsely concluding that Marx and Lincoln were friends. But the article *does* mostly stick to statements which are literally true.

There were some historians who praised the Marx article and said nice things about it. But they were all explicitly socialist historians, and they were all studying time periods other than the one containing Lincoln and Marx. So this probably doesn’t completely discredit all expertise.
Meanwhile, actual statisticians and election security experts said pretty clearly that they thought the election was fair, even when this *was* in their domain of expertise.

Finally, the Marx thing was intended as a cutesy human interest story (albeit one with an obvious political motive), and [everybody knows](https://www.lesswrong.com/posts/BNfL58ijGawgpkh9b/everybody-knows) cutesy human interest stories are always false.

All of this is a lot more complicated than “of course you can trust the news” or “how dare you entertain deranged conspiracy theories!” There are lots of cases where you can’t trust the news! It sucks! It’s completely understandable that large swathes of people can’t differentiate the many many cases where the news lies to them from the other set of cases where the news is not, at this moment, actively lying. But that differentiation is possible, most people learn how to do it, and it’s the main way we know anything at all.

**III.**

As in journalism, so in science. According to [this news site](https://samnytt-se.translate.goog/professor-rakade-upptacka-att-de-flesta-valdtakter-begas-av-invandrare-riskerar-atal/?_x_tr_sl=sv&_x_tr_tl=en&_x_tr_hl=en&_x_tr_pto=nui,sc), some Swedish researchers were trying to gather crime statistics. They collated a bunch of data about different crimes and - without it being a particular focus of their study - one of the pieces of information was immigration status, and they found that immigrants were responsible for a disproportionately high share of some crimes in Sweden.

The Swedish establishment brought scientific misconduct cases against the researchers (one of whom is himself "of immigrant background"). The first count was not asking permission to include ethnicity statistics in their research (even though the statistics were publicly accessible, apparently Swedish researchers have to get permission to use publicly accessible data). The second count was not being able to justify how their research would “reduce exclusion and improve integration.”

While these accusations are probably true on their own terms, I think any researcher who found that immigrants were great would not have the technicalities of their research subjected to this level of scrutiny, and that the permissioning system evolved partly out of a desire to be able to crush researchers in exactly these kinds of situations. I think this is a pretty common scenario, and part of a whole structure of norms and regulations that makes sure experts only produce research that favors one side of the political spectrum. So I think the outrage is justified; this is exactly what people mean when they accuse experts of being biased, and those accusations are completely true.

But: have you ever heard an expert say, in so many words, that immigrants to Sweden definitely don't commit more crime than natives? (I think people do say this in the US, but only because it's true-ish in the US; Sweden and the US have very different immigrant and native populations.)

I believe that *in some sense*, the academic establishment will work to cover up facts that go against their political leanings. But the experts in the field won't lie directly. They don't go on TV and say "The science has spoken, and there is strong evidence that immigrants in Sweden don't commit more violent crime than natives". They don't talk about the "strong scientific consensus against immigrant criminality". They occasionally try to punish people who bring this up, but they won't call them "science deniers".
This seems like another example of the "FOX won't make up terrorist attacks" point. There are a lot of ways that experts and the academic establishment are biased and try to muddy the discussion in favor of their preferred political side. But this is a game with certain rules. There are lines they'll cross, and other lines they won't cross. And that means you *can* trust the experts on some things, same as you can trust FOX on some things.

The reason why there’s no giant petition signed by every respectable criminologist and criminological organization saying Swedish immigrants don’t commit more violent crime than natives is that experts aren’t quite biased enough to sign a transparently false statement - even when other elites *will* push that statement through other means. And so the fact that there *is* a petition like that signed by climatologists on anthropogenic global warming suggests that this position is actually true. And that you can know that - even without being a climatologist yourself - through something sort of like “trusting experts”.

(before you object that some different global-warming-related claim is false, please consider whether the IPCC has said with certainty that it isn’t, or whether all climatologists have denounced the thing as false in so many words. If not, *that’s my whole point*.)

**IV.**

Last year I explained why I [didn't believe ivermectin worked](https://astralcodexten.substack.com/p/ivermectin-much-more-than-you-wanted) for COVID. In a subsequent discussion with Alexandros Marinos, I think we agreed on something like:

**1.** If you just look at the headline results of ivermectin studies, it works.

**2.** If you just do a purely mechanical analysis of the ivermectin studies, eg the usual meta-analytic methods, it works.

**3.** If you try to apply things like human scrutiny and priors and intuition to the literature - this is obviously really subjective - then according to the experts who ought to be the best at doing this kind of thing, it doesn't work.

**4.** But experts are sometimes biased.

**5.** F@#k.

In the end, I stuck with my belief that ivermectin probably didn’t work, and Alexandros stuck with his belief that it probably did. I stuck with the opinion that it’s possible to extract non-zero useful information from the pronouncements of experts by knowing the rules of the lying-to-people game. There are times when experts and the establishment lie, but it’s not all the time. FOX will sometimes present news in a biased or misleading way, but they won’t make up news events that never happened. Experts will sometimes prevent studies they don’t like from happening, but they’re much less likely to flatly assert a clear specific fact which isn’t true.

I think some people are able to figure out these rules and feel comfortable with them, and other people can’t and end up as conspiracy theorists. I’m not blaming the second type of person. Figuring-out-the-rules-of-the-game is a hard skill; not everybody has it. If you don’t have it, then universal distrust might be a safer strategy than universal credulity. And I’m not saying that anything about this is good. Obviously the *good* solution is that people stop lying and presenting misleading information. But I think it’s important for these two types of people to understand each other. The people who lack this skill entirely think it’s crazy to listen to experts about anything at all.
They correctly point out time after time that they’ve lied or screwed up, then ask “so why do you believe them on ivermectin?” or “so why do you believe them on global warming?” My answer - which I don’t think is an *obvious* or *easy* answer; it’s a bold claim that could be wrong - is “I think I have a good sense of the dynamics here, how far people will bend the truth, and what it looks like when they do”. I realize this is playing with fire. But listening to experts is a powerful enough hack for finding the truth that it’s worth going pretty far to try to rescue it.

But also: some people are better at this skill than I am. Journalists and people in the upper echelons of politics have honed it so finely that they stop noticing it’s a skill at all. In the Soviet Union, the government would say “We had a good harvest this year!” and everyone would notice they had said *good* rather than *glorious*, and correctly interpret the statement to mean that everyone would starve and the living would envy the dead. Really savvy people go through life rarely ever hearing the government or establishment lie to them. Yes, sometimes false words come out of their mouths. But as Dan Quayle put it:

> Our party has been accused of fooling the public by calling tax increases 'revenue enhancement'. Not so. No one was fooled.

Imagine a government that, for five years in a row, predicts *good* harvests. Or, each year, they deny tax increases, but do admit there will be “revenue enhancements”. Savvy people effortlessly understand what they mean, and prepare for bad harvests and high taxes. Clueless people prepare for good harvests and low taxes, lose everything when harvests are bad and taxes are high, and end up distrusting the government. Then in the sixth year, the government says there will be a *glorious* harvest, and neither tax increases *nor* revenue enhancements. Savvy people breathe a sigh of relief and prepare for a good year. Clueless people assume they’re lying a sixth time.

But to savvy people, the clueless people seem paranoid. The government has said everything is okay! Why are they still panicking? The savvy people need to realize that the clueless people aren’t *always* paranoid, just less experienced than they are at dealing with a hostile environment that lies to them all the time. And the clueless people need to realize that the savvy people aren’t *always* gullible, just more optimistic about their ability to extract signal from same.
Scott Alexander
44695804
Bounded Distrust
acx
# Grading My 2021 Predictions

At the beginning of every year, I make predictions. At the end of every year, I score them. Here are [2014](https://slatestarcodex.com/2015/01/01/2014-predictions-calibration-results/), [2015](https://slatestarcodex.com/2016/01/02/2015-predictions-calibration-results/), [2016](https://slatestarcodex.com/2016/12/31/2016-predictions-calibration-results/), [2017](https://slatestarcodex.com/2018/01/02/2017-predictions-calibration-results/), [2018](https://slatestarcodex.com/2019/01/22/2018-predictions-calibration-results/), [2019](https://slatestarcodex.com/2020/04/08/2019-predictions-calibration-results/), and [2020](https://astralcodexten.substack.com/p/2020-predictions-calibration-results).

And here are the predictions I made for 2021 (in April; I was really late). Bolded statements happened, italicized statements did not happen (as of 1/1/22). Neither-bold-nor-italic resolved ambiguous. We have a debate every year over whether 50% predictions are meaningful in this paradigm; feel free to continue it.

*1. Biden approval rating (as per 538) is greater than fifty percent: 80%*
*2. Court packing is clearly going to happen (new justices don't have to be appointed by end of year): 5%*
*3. Yang is New York mayor: 80%*
*4. Newsom recalled as CA governor: 5%*
*5. At least $250 million in damage from BLM protests this year: 30%*
*6. Significant capital gains tax hike (above 30% for highest bracket): 20%*
*7. Trump is allowed back on Twitter: 20%*
**8. Tokyo Olympics happen on schedule: 70%**
*9. Major flare-up (significantly worse than anything in past 5 years) in Russia/Ukraine war: 20%*
*10. Major flare-up (significantly worse than anything in past 10 years) in Israel/Palestine conflict: 5%*
*11. Major flare-up (significantly worse than anything in past 50 years) in China/Taiwan conflict: 5%*
*12. Netanyahu is still Israeli PM: 40%*
*13. Prospera has at least 1000 residents: 30%*

**ECON/TECH**

**14. Gamestop stock price still above $100: 50%**
*15. Bitcoin above 100K: 40%*
*16. Ethereum above 5K: 50%*
**17. Ethereum above 0.05 BTC: 70%**
**18. Dow above 35K: 90%**
*19. ...above 37.5K: 70%*
*20. Unemployment above 5%: 40%*
**21. Google widely allows remote work, no questions asked: 20%**
*22. Starship reaches orbit: 60%*

**COVID**

*23. Fewer than 10K daily average official COVID cases in US in December 2021: 30%*
*24. Fewer than 50K daily average COVID cases worldwide in December 2021: 1%*
*25. Greater than 66% of US population vaccinated against COVID: 50%*
*26. India's official case count is higher than US: 50%*
**27. Vitamin D is not generally recognized (eg NICE, UpToDate) as effective COVID treatment: 70%**
**28. Something else not currently used becomes first-line treatment for COVID: 40%**
**29. Some new variant not currently known is greater than 25% of cases: 50%**
**30. Some new variant where no existing vaccine is more than 50% effective: 40%**
*31. US approves AstraZeneca vaccine: 20%*
*32. Most people I see in the local grocery store aren't wearing a mask: 60%*

**COMMUNITY**

*33. Major rationalist org leaves Bay Area: 60%*
*34. MIRI relocates to Washington State: 20%*
*35. MIRI relocates to New England: 20%*
*36. MIRI relocates somewhere else: 20%*
*37. Less Wrong team relocates: 30%*
**38. No new residents at our housing cluster: 40%**
*39. No current residents leave our housing cluster: 60%*
**40. [friend] goes back to Indiana: 40%**
*41. [friend] is in a primary relationship: 50%*
*42. [friend] is in a primary relationship: 30%*
*43. [friend] is in a primary relationship: 20%*
*44. [friend] has gotten [job]: 50%*
**45. [friend] has recovered their health: 70%**
**46. [friend] has gotten egg freezing: 30%**
*47. [friend] is pregnant: 70%*
**48. [friends] are still together: 50%**
**49. [friend] is still at [job]: 80%**
**50. [friend] is in college: 60%**
*51. [friends] live in [house]: 30%*
*52. [other friends] live in [house]: 30%*
53. At least 7 days my house is orange or worse on PurpleAir.com because of fires: 80%

**PERSONAL**

**54. I am engaged: 60%**
*55. I am married: 20%*
*56. [redacted]: 10%*
*57. [redacted]: 10%*
*58. [redacted]: 5%*
*59. [redacted]: 20%*
**60. There are no appraisal-related complications to the new house purchase: 50%**
**61. I live in the new house: 95%**
*62. I live in the top bedroom: 60%*
*63. I can hear / get annoyed by neighbor TV noise: 40%*
*64. I'm playing in a D&D campaign: 70%*
**65. I go on at least one international trip: 60%**
**66. I spend at least a month living somewhere other than the Bay: 50%**
**67. I continue my current exercise routine (and get through an entire cycle of it) in Q4 2021: 70%**
*68. I meditate at least 15 days in Q4 2021: 60%*
**69. I take oroxylum at least 5 times in Q4 2021: 40%**
*70. I take some substance I haven't discovered yet at least 5 times in Q4 2021 (testing exempted): 30%*
*71. I do at least six new biohacking experiments in the next eight months: 40%*
*72. [redacted]: 30%*
*73. The Twitter account I check most frequently isn't one of the five I check frequently now: 20%*
*74. I make/retweet at least 25 tweets between now and 2022: 70%*

**WORK**

**75. Lorien has 100+ patients: 90%**
*76. 150+ patients: 20%*
*77. 200+ patients: 5%*
*78. I've written at least ten more Lorien writeups (so total at least 27): 30%*
**79. [redacted]: 70%**
**80. [redacted]: 80%**
*81. [redacted]: 60%*
82. [redacted]: 40%
**83. [redacted]: 60%**
84. I have switched medical records systems: 20%
*85. I have changed my pricing scheme: 20%*

**BLOG**

**86. ACX is earning more money than it is right now: 70%**
*87. [redacted]: 10%*
*88. [redacted]: 50%*
*89. [redacted]: 20%*
*90. There is another article primarily about SSC/ACX/me in a major news source: 10%*
*91. I subscribe to at least 5 new Substacks (so total of 8): 20%*
**92. I've read and reviewed How Asia Works: 90%**
*93. I've read and reviewed Nixonland: 70%*
**94. I've read and reviewed Scout Mindset: 60%**
**95. I've read and reviewed at least two more dictator books: 50%**
*96. I've started and am at least 25% of the way through the formal editing process for Unsong: 30%*
*97. Unsong is published: 10%*
*98. I've written at least five chapters of some non-Unsong book I hope to publish: 40%*
*99. “On The Natural Faculties” wins the book review contest: 60%*
**100. I run an ACX reader survey: 50%**
*101. I run a normal ACX survey (must start, but not necessarily finish, before end of year): 90%*
*102. By end of year, some other post beats NYT commentary for my most popular post: 10%*
**103. I finish + post [Rise And Fall Of Online Culture Wars](https://astralcodexten.substack.com/p/the-rise-and-fall-of-online-culture): 90%**
**104. I finish + post [Don’t Give Up On Having Kids Because Of Climate Change](https://astralcodexten.substack.com/p/please-dont-give-up-on-having-kids): 80%**
**105. I finish + post [Carbon Costs Quantified](https://astralcodexten.substack.com/p/carbon-costs-quantified): 80%**
106. I have a queue of fewer than ten extra posts: 70%

**META**

*107. I double my current amount of money ($1000) on PredictIt: 10%*
**108. I post my scores on these predictions before 3/1/22: 70%**

To make binning easier, I’ve converted 5% predictions into 95% predictions of the opposite, 10% predictions into 90% predictions of the opposite, and so on. So:

Of 50% predictions, I got 7 right and 6 wrong, for a total of 54%
Of 60% predictions, I got 11 right and 11 wrong, for a total of 50%
Of 70% predictions, I got 20 right and 6 wrong, for a total of 77%
Of 80% predictions, I got 20 right and 3 wrong, for a total of 87%
Of 90% predictions, I got 10 right and 1 wrong, for a total of 91%
Of 95% and 99% predictions, I got 8 right and 0 wrong, for a total of 100%

Here’s the usual graph:

*[calibration graph not reproduced]*

Last year I was mostly overconfident. This year I was very slightly underconfident (except in the 60% bin). I see no consistent pattern of errors here and am not going to update on it very much. I’m pretty happy with this, since I thought the questions this year were harder than usual.

Simon M [did a similar exercise on Less Wrong](https://www.lesswrong.com/posts/A7yccktTp8LDjSizp/scott-alexander-2021-predictions-market-prices-resolution), and compared me to Zvi and to various prediction markets. This was slightly biased against me, because Zvi got to see my guesses first and choose which ones to adjust on, and the markets are the markets. Still, he found:

*[table of scores not reproduced]*

…where lower scores are better. So Zvi beat me, and the markets beat both of us. This is fine; nobody should be able to beat the market consistently, and the market was able to (though probably didn’t bother to) read both Zvi’s and my estimates.

I’ll post predictions for this coming year next week.
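(If you want to check the bin arithmetic, the procedure above - flip every sub-50% prediction into a prediction of the opposite, then tally hits per bin - is mechanical enough to fit in a few lines of Python. This is a minimal sketch; the four example predictions in it are made up, not my real list.)

```python
# Minimal sketch of the scoring procedure: flip sub-50% predictions into
# predictions of the opposite outcome, then tally accuracy per bin.
from collections import defaultdict

def calibration(predictions):
    """predictions: list of (stated probability, whether it happened)."""
    bins = defaultdict(lambda: [0, 0])      # probability -> [right, wrong]
    for p, happened in predictions:
        if p < 0.5:                         # "5% X happens" becomes "95% X doesn't"
            p, happened = 1 - p, not happened
        bins[p][0 if happened else 1] += 1
    for p in sorted(bins):
        right, wrong = bins[p]
        print(f"Of {p:.0%} predictions: {right} right, {wrong} wrong "
              f"-> {right / (right + wrong):.0%}")

# Toy data: a 5% prediction that didn't happen counts as a correct 95% call.
calibration([(0.8, True), (0.05, False), (0.7, True), (0.6, False)])
```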
Scott Alexander
47548732
Grading My 2021 Predictions
acx
# Open Thread 208

This is the weekly visible open thread. Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed. This one is even-numbered, so go wild - or post about whatever else you want. Also:

**1:** Comment of the week is from Richard Ngo, who [helpfully corrects](https://astralcodexten.substack.com/p/practically-a-book-review-yudkowsky/comment/4564417) some of my discussion of his dialogue with Eliezer Yudkowsky:

> 1. Scott describes my position as similar to Eric Drexler's CAIS framework. But Drexler's main focus is modularity, which he claims leads to composite systems that aren't dangerously agentic. Whereas I instead expect unified non-modular AGIs; for more, see <https://www.alignmentforum.org/posts/HvNAmkXPTSoA4dvzv/comments-on-cais>
>
> 2. Scott describes non-agentic AI as one which "doesn't realize the universe exists, or something to that effect? It just likes connecting premises to conclusions." A framing I prefer: non-agentic AI (or, synonymously, non-goal-directed) as AI that's very good at pattern-matching, but lacks a well-developed motivational system.
>
> [[continue reading full comment here](https://astralcodexten.substack.com/p/practically-a-book-review-yudkowsky/comment/4564417)]

**2:** Some people have requested guidance for when you can advertise your own blog/website/etc in the comments here. I would say: on regular posts, only if it’s something very relevant, so relevant you would post it even if it wasn’t yours. On open threads, try to limit yourself to twice a year, and only if you think it will spark a genuinely interesting discussion. On classified threads (which I want to remember to do more often), go wild.

**3:** Don’t forget to [resubmit and summarize your proposals for Grants ++](https://astralcodexten.substack.com/p/resubmit-and-summarize-your-proposals), if that is a thing you want to do.
Scott Alexander
47605424
Open Thread 208
acx
# Resubmit And Summarize Your Proposals For Grants ++

I promised you all that once I was done with the main round of [ACX Grants](https://astralcodexten.substack.com/p/acx-grants-results), I would run Grants ++, where I publish the proposals that didn't get funded here, so readers could look at them, see if they’re interesting, and maybe get in touch and offer funding.

Two things have made this harder than expected. First, a lot of people gave pretty unclear instructions about whether they wanted me to include their proposal in this, or changed their minds halfway through, in a way that would require me to keep track of a lot of emails about whose minds changed how many times, or to reconstruct long edit histories. Second, I have 656 proposals. Some proposals are multiple pages. The full list is 2096 pages long. Even if I divide that among ten posts, that's still 210 pages per post.

I'm going to solve this by inflicting more work on you, the applicants. If you're still interested in participating in Grants ++, please write *one paragraph* about your proposal. Examples:

> I'm Sheev Palpatine, and I'm looking for funding to create a moon-sized battlestation. Imperial Star Destroyers can handle normal tasks, but to really project space power we need a mobile base capable of annihilating a planet in a single strike. As Galactic Emperor, with a sector-wide network of scientists, engineers, shipping yards, and military personnel, I believe I'm uniquely placed to take advantage of this opportunity. Currently I need one hundred fifty quadrillion Galactic Credits, and I would also love to hold a ten minute Zoom call with anyone who has expertise in designing exhaust ports. If you can provide funding or advice, please email sidious@coruscant.gov.

> Homunculus LLC is a startup working on developing the Philosopher's Stone, which would lead to universal immortality and infinite wealth. Our CEO, Nicholas Flamel, has a PhD from the University of Paris and is universally recognized for his work deciphering hieroglyphics. Our CTO, John Dee has a PhD from Oxford and served as court astrologer to Elizabeth I. We believe that precise application of quicksilver while Aries is in the ascendant is the secret to creating the Stone; you can read our white paper at homunculus.com/white-paper. We're looking for seed investments between 10,000 and 100,000 gold florins; if you think you can help, please email inquiries@homunculus.com.

> Grug think maybe if hit stone with flint, stone get hot. Spark come out of stone. Then put spark on pile of dry leaves. Can create fire, use to cook food. Unlock more calories from food, maybe solve world hunger. Grug work with Og. Og shaman of great power, beloved by gods. He technical co-founder. So far have many other stone, but no have flint. Tribe down river want fifty shells for give Grug flint. Grug no have fifty shells. If have fifty shells, please send to Bitcoin address 3FZbgi29cpjq2Gjd4m4GFg7xJaNVN2ab98. If have question, can find Grug next to big tree.

I'll allow a maximum of 1500 characters (you don't have to use all 1500). If you want to give people more information, you can include a link to a website or white paper that says more.

I'll allow requests from startups iff they seem charitable in some way. A startup for creating drugs for orphan diseases, sure. A startup for creating a pay-to-play iPhone game, no.

If you're still interested, please give me your paragraph at <https://forms.gle/xhVTebsZgSEQ7BpeA> . NO TAKEBACKS! NO CHANGES!
ONCE IT'S SENT, IT'S SENT FOREVER! Please submit by 1/28/22 if you want to be included.
Scott Alexander
47513188
Resubmit And Summarize Your Proposals For Grants ++
acx
# Book Review: Which Country Has The World's Best Health Care?

**I.**

If you’re like me, all you’ve heard about international health care systems is “America sucks and should feel bad, everyone else is probably fine or whatever”. Is there more we can learn?

Our guide to this question will be *[Which Country Has The World’s Best Health Care](https://amzn.to/3Ibqeck)*, by Dr. Ezekiel Emanuel. Emanuel is a professor of bioethics, but I’ve been told to be [less reflexively hostile](https://forum.effectivealtruism.org/posts/JwDfKNnmrAcmxtAfJ/the-bioethicists-are-mostly-alright) to bioethicists. He got in trouble a few years ago for a comment that got summed up as “[life after 75 is not worth living](https://www.technologyreview.com/2019/08/21/238642/a-doctor-and-medical-ethicist-argues-life-after-75-is-not-worth-living/)”, but he never used those *exact* words, and [his point about](https://www.theatlantic.com/magazine/archive/2014/10/why-i-hope-to-die-at-75/379329/) the dangers of excessive life-prolonging medical care is [well-taken](https://slatestarcodex.com/2013/07/17/who-by-very-slow-decay/). He opposes euthanasia, which I interpret as demanding state-sponsored coercive violence to prevent torture victims from escaping, but I know other people interpret it differently. And he’s the brother of former Chicago mayor Rahm Emanuel, but ... nope, can’t think of any extenuating circumstances for this one.

Still, Emanuel is one of a very few people qualified to compare international health systems. And he claims additional expertise at ranking things, saying:

> *Which country has the world’s best health system? This is the type of question I usually love. I rank everything. I rank the 10 best meals I’ve ever had (#1 Alinea in Chicago, #2 Tanja Grandits in Basel, and #3 OCD in Tel Aviv). I rank chocolates (#1 Askinosie, #2 Dick Taylor of California, and #3 Fruition of New York). I rank Alpine cheeses (#1 is a tie between Alpha Tolman and Alp Blossom). I rank colleges. I rank academic departments of bioethics and health policy that compete with my own. I rank the meals I cook, the races I run, the bike rides I take, the speeches I give.*

So: which country has the world’s best health care? Emanuel *hates* having to give a clear answer to that question, but when confronted with the fact that he’s writing a book with that title and can’t really weasel out, he grudgingly admits that “the top tier would include Germany, the Netherlands, Norway, and Taiwan”. He backs this up with ~300 pages of details about the health care systems of 11 major countries.

I have to admit, I found this tough reading. Partly this is because health economics is an inherently boring topic. Partly it’s because national systems are a hodgepodge of historically contingent decisions that don’t really resolve into a single gestalt. And partly it’s because many countries run their medical systems entirely based on three-letter acronyms (did you know [PBR](https://en.wikipedia.org/wiki/Payment_by_Results) financing in the [NHS](https://en.wikipedia.org/wiki/National_Health_Service) is partly under [QOF](https://qof.digital.nhs.uk/) schemes like [BPTs](https://reports.njrcentre.org.uk/2018/Best-Practice-Tariff) that modify [CCG](https://en.wikipedia.org/wiki/Clinical_commissioning_group)s’ [GMS](https://en.wikipedia.org/wiki/General_medical_services) contracts with [PCN](https://www.england.nhs.uk/primary-care/primary-care-networks/)s?)

But partly it’s because all national health systems are surprisingly similar.
One of my favorite books is David Friedman’s *[Legal Systems Very Different From Ours](https://amzn.to/3ryObUc)*, which catalogues the world’s weirdest legal systems and expands your space of possibilities about what law codes would be like. I was hoping to find something similar here, but Emanuel’s book could easily have been titled *Medical Systems Very Similar To Ours*. People talk about how the US system is “privatized” and the Canadian system “socialized”, but a lot of this comes down to whether your payments for the same basic package are marked “paycheck deductions” vs. “taxes”. Or whether your choices are limited to one state insurance company vs. to 2-3 plans offered by your employer which are legally mandated to be basically the same. It was hard to find any really fundamentally different visions. And absent truly different designs, the 300 pages were a lot of stuff on how various bureaucracies were organized and which three-letter acronyms they used.

But after a valiant effort, Emanuel managed to distinguish five general types of health care system (Table 12-2 on page 364).

**1: Socialized Medicine**, where the government runs everyone’s insurance *and* most hospitals and clinics, ie it’s the main employer for doctors and other health professionals. Of the 11 countries studied, only the UK does this in general, although the Veterans Affairs system does it at a smaller scale in the US.

**2: Single Payer With Very Limited Private Insurance** is typical of Canada, China, Norway, and Taiwan. The government runs everyone’s insurance. But doctors, hospitals, etc can be independent businesses or nonprofits. They negotiate some kind of payment rate with the national insurance, who reimburses them. This is similar to how Medicare works in the US.

**3: Single Payer With Substantial Private Insurance** is typical of Australia and France. It works as above, except that citizens can buy private insurance which purports to be better than the standard government insurance in some way. For example, in Australia sometimes the private insurance has shorter waiting times, or can get you nicer rooms in more luxurious hospitals. Often the same doctors and hospitals treat the government and private patients, but give the private patients more time and resources, which leads to resentment and scandals. On the other hand, the private patients sometimes subsidize the public ones - ie a hospital charges extra for private patients and uses that to make up a funding shortfall if the government doesn’t pay them enough.

**4: Single Payer Channeled Through Private Insurance** is typical of Germany and the Netherlands. I think this is kind of like how charter schools work in the US: the government pays 100% of your costs, but you get to choose which insurance company (out of various heavily-regulated and basically identical plans) to go with. Then the insurance company pays private doctors and hospitals as usual.

**5: Individuals Purchase Private Insurance** is typical of the US and Switzerland. Individuals use their own money to buy insurance from private companies, which may be ambiguously-for-profit-but-heavily-regulated (some US companies) or not-for-profit (other US companies, Switzerland). If someone can’t afford to do this, they might get government subsidies (Switzerland) or get shunted to Medicaid / be out of luck (US). Those private insurances negotiate rates with private doctors and hospitals as normal.

How do the various systems compare?

Source: page 370 of WCHWBS and [the Commonwealth Fund](https://www.commonwealthfund.org/international-health-policy-center/system-stats). Things look a bit different depending on which statistics you choose to highlight; I did my best to be representative but you should double-check. Red countries are fully socialized, yellow ones are more privatized, various shades of blue are various types of single-payer.

The only truly socialist health system here, that of the UK, looks maybe a little worse than average. It has the third-lowest satisfaction, the third-longest wait times, and the fourth-lowest life expectancy. Emanuel’s more thorough look agrees that the UK underperforms. But it’s also very cheap - the cheapest western health system on the list. Emanuel thinks the UK is probably close to the cost-quality Pareto frontier and not making any stupid mistakes, but has made the political decision to not fund its health system very much.

The typical American concern that single-payer-without-private-insurance systems have long wait times seems basically borne out. The two such systems we have good data for - Canada and Norway - are the two with the worst wait times on the list. Emanuel doesn’t think this is a necessary feature of those systems: he blames Canada’s wait times on their bad decision to give hospitals a constant amount of funding regardless of patient load, and says other single-payer systems that avoid this have limited waits. Single payer systems that involve private insurance in any way seem to do basically fine here.

(I’m ignoring China and Taiwan here for three reasons. First, they’re significantly poorer/less developed than the other countries on this list. Second, Taiwan works its doctors incredibly hard - they see about 2-3x as many patients per day as in other countries, for less money, and I’m not sure why they stay in medicine or how they stay sane. Third, China also underpays its doctors, and they compensate by being corrupt and demanding bribes before treating patients. All of these things make it hard to compare them to Western countries.)

The two countries with mostly private systems - Switzerland and the US - are also the two most expensive systems (though [see here](https://randomcriticalanalysis.com/why-conventional-wisdom-on-health-care-is-wrong-a-primer/) for a contrarian take on this). But the similarity ends there; Switzerland’s system has one of the highest patient satisfaction ratings, but America has the lowest. When I asked Swiss people about this, they said everyone in Switzerland is rich, which rescues a lot of otherwise-unsustainable systems. Certainly rich people in America get good health care. So maybe Switzerland isn’t as different as the numbers make it look, and these kinds of systems are just bad.

Single-payer implemented through private insurance - Germany and the Netherlands - comes out looking pretty good: these are 2 of the 4 countries Emanuel puts in his top tier. I’m confused here. The US has at least three major problems that Germany/Netherlands lack: nonuniversal coverage, high costs, and poor patient choice (ie you have to worry about “out of network” providers). I can see why single-payer eliminates the first: if the government buys coverage for everyone, of course it will be universal. But why does it eliminate the second two? Germany and the Netherlands have dozens of different insurance providers - why doesn’t that decrease bargaining power and raise costs?
Why doesn’t it mean that sometimes they fail to reach an agreement with a hospital, and their patients can’t go there without facing “out-of-network” costs? I thought I understood the reasons why US health care doesn’t work, but Germany and the Netherlands seem to replicate its apparent disadvantages without running into the same problems. Why? Maybe I just don’t fully understand what “single-payer” means?

I’m also surprised this doesn’t get brought up more in discussions of US health reform. Medicare For All asks that we go from one of the most privatized health systems in the world to one of the most socialized, leapfrogging over successful semiprivate ones like Germany and the Netherlands. This is especially odd since those systems seem to be some of the best performers. Why would this be tempting? Absent a theory of why Germany and the Netherlands work so much better than the US, I’m not sure.

**II.**

Two other features of health systems caught my eye: drug price regulation and general budget setting.

No country except the US pays anything like a market price for drugs. Other countries have some Drug Price Regulator who meets and decides how much drugs will cost. This part confused me, because it seems to be both a government decision and a negotiation. The government sets a price based on some method. Then the drug companies - well, as far as I can tell, they accept. [This article](https://www.commonwealthfund.org/blog/2019/how-drug-prices-are-negotiated-germany) makes me think that in theory drug companies have the right to refuse an unfairly low price, but that in practice neither side wants the PR hit of a country going without a drug, both sides try pretty hard for an agreement, and it’s very rare for the process to fail.

But this made it hard for me to understand this section of the book, which praised countries who managed to keep drug prices low. “Keeping drug prices low” mostly seems to involve having a process that reliably generates low numbers for the government’s offers. For example, Canada used to have high drug prices, because its process was to offer the average price paid by seven other countries: France, Germany, Italy, Sweden, Switzerland, UK, and US. But then the Canadians decided that was too high, and removed the US from their basket; since the US had the highest drug prices, this brought the average price down, and made Canadian drugs cheaper. Emanuel praises this as a good decision. But Norway does even better: they take the average of the *cheapest three* countries in their basket. Obviously this works, but then why not the cheapest two? Why not just say your drug price will be Norway’s price minus one dollar? Half Norway’s price? I didn’t get a good sense of why some countries had cheaper algorithms and baskets than others. Maybe they had tougher negotiators?

Also, Canada now pays the average price paid by France, Germany, Italy, Sweden, UK, and Switzerland. But Switzerland pays the average price of Austria, Belgium, Denmark, Finland, France, Germany, Sweden, UK, and Netherlands. The Netherlands pays the average price of Belgium, Germany, France, and the UK. And France says they pay the average price of “neighboring countries”. I hope someone has checked over the causal graph to make sure there aren’t any contradictions or infinite loops. (I sketch a toy version of this reference-pricing loop at the end of this review.)

This was another place where I found myself confused about why the US system works so badly. What exactly is “market price” for a drug in the US? Consumers don’t pay for drugs directly; only insurance companies pay for drugs.
In Germany, all the insurance companies get together and form a Drug Price Bargaining Group, which bargains with drug companies the same way a government would. Why don’t insurance companies do that in America? Is the [problem](https://www.nber.org/bah/2009no4/how-insurers-bargaining-power-affects-drug-prices-medicare-part-d) just that this would be a monopoly (technically a monopsony, I guess)? Is only antitrust law preventing them from trying this? Is this some kind of weird horseshoe theory situation where the maximally socialist response overlaps with the maximally libertarian one?

If you think drug price bargaining feels more like magic than economics, you’ll *love* the concept of health care budget setting. The idea is: the country decides how fast it wants health care costs to grow in a certain year, for example, “prices must not rise more than 1% this year”. Then they calculate it out and find that a 1% rise in prices corresponds to a health care budget of $1 billion or whatever. Then doctors submit reports on how much health care they’ve done, ie “we have done 500,000 units of health care”, according to some list where a blood test counts as X units, a heart surgery as Y units, etc. Then the government says “Well, we said the budget was going to be $1 billion, doctors did 500,000 units of health care, so we’ll reimburse doctors $2,000 for each unit of health care they did”. If instead doctors say they’ve done 1,000,000 units of health care, the government will only pay them $1,000 per unit. And so on.

I spent a long time staring at this system trying to figure out how it could possibly work. I think if the government will pay you $2,000 per unit in 2020 and only $1,000 per unit in 2021, then you stop doing all the health care with a value of between $1,000 and $2,000 per unit, which reduces this to the usual “if you pay less money, you get less stuff” situation. If costs rise faster than the budget, your care gets worse every year, but in real life this doesn’t seem to happen. No, I don’t know why not.

Overall I got the impression that health care was a bizarro-world where normal economics doesn’t apply. If you have the courage to say loudly and firmly “we refuse to pay a high price for this”, then providers *have* to give you a low price, and your health care system will be great and affordable. Seems hard to believe, but the US sure does pay twice as much per capita as countries that go with the “loudly refuse to pay more than a certain amount” strategy. I would have appreciated a book by a more economically-minded person explaining why things are like this. Or maybe not; maybe it’s like quantum physics, and the second someone looks at it too closely, the whole structure will collapse, every hospital in the world will go bankrupt, and we’ll have to get our medical problems treated by wolves.

**III.**

Emanuel deserves a lot of praise for writing this book. It’s hard to find good information on different health care systems outside of incomprehensible technical papers. This book was detailed, thorough, and got me to start investigating a field I’d been putting off learning about. But it failed to give me a gears-level understanding of why some health care systems succeed and others fail. In fact, the main knowledge it gave me was negative: I realized that my pre-existing ideas of why US healthcare is so bad didn’t really make sense, since other countries do similar things with better results. It didn’t make me feel like I understood the tradeoffs of health economics.
Why do some countries set lower prices for drugs than others? What good or bad things happen if you deliver single-payer care through the government vs. through nonprofit insurance funds? How does the US model (which doesn’t work) differ from the superficially-similar Swiss, German, and Dutch models (which do)?

The main thing I would have done differently was change the division of chapters. Emanuel had one chapter on each health care system, with subchapters on how it handled hospitals, how it handled drug prices, etc. But it was hard to remember what the last system had been like, and many systems were similar enough that it felt like reading the same bureaucratic structure over and over again. It might have been more readable if there had been a chapter on (eg) hospitals highlighting the different ways hospitals could be run, which countries chose which methods, and which ones seemed to work best. Then another chapter on drug prices, and so on.

I was also sad at the limited selection of 11 health care systems this book presented. I could have done with much less detail on the exact three-letter-acronyms used by Germany vs. France, and more exploration of genuinely novel systems. What do developing countries do? What about the former Soviet states? What about the way the USA worked in 1950, or 1900, or still works today [if you’re Amish](https://slatestarcodex.com/2020/04/20/the-amish-health-care-system/)? These probably aren’t the World’s Best Health System, but they would at least help me understand the dimensions along which systems can vary.

In conclusion, this was a helpful book. But I’m not sure it’s worth paying $22.99 for it. Consider telling Dr. Emanuel that you will only pay however much the Norwegians pay for *their* books. Or maybe the lowest price paid by any of Belgium, France, or Germany. Maybe you should commit to only spending $100 on books this year, and let Dr. Emanuel know how much you’ll pay him after you decide how many books to read. Only then will we be able to control the spiraling cost of books on health care.
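As promised above, here’s a toy sketch of the reference-pricing loop. The baskets, rules, and starting prices below are illustrative placeholders, not any country’s real formula; the point is just that “everyone prices off everyone else’s price” defines an iteration you can actually run and check for contradictions or infinite loops.

```python
# Toy reference-pricing loop. Baskets, rules, and starting prices are
# all illustrative, NOT the real national formulas. Each round, every
# country re-prices the drug off the current prices in its basket.
avg = lambda xs: sum(xs) / len(xs)
cheapest3 = lambda xs: avg(sorted(xs)[:3])   # Norway-style rule

baskets = {
    "Canada":  ["France", "Germany", "UK"],
    "Norway":  ["US", "Canada", "France", "Germany", "UK"],
    "France":  ["Germany", "UK"],
    "Germany": ["France", "UK"],
    "UK":      ["France", "Germany"],
}
rules = {"Norway": cheapest3}                # everyone else takes the average
prices = {"US": 300.0, "Canada": 150.0, "Norway": 140.0,
          "France": 120.0, "Germany": 110.0, "UK": 100.0}

for round_num in range(1, 100):
    new = dict(prices)                       # US pays market price: held fixed
    for country, basket in baskets.items():
        new[country] = rules.get(country, avg)([prices[b] for b in basket])
    if all(abs(new[c] - prices[c]) < 0.01 for c in prices):
        print(f"settled after {round_num} rounds:",
              {c: round(p) for c, p in new.items()})
        break
    prices = new
```

In this toy setup the prices settle within a few dozen rounds. But nothing guarantees that in general: a rule like the hypothetical “Norway’s price minus one dollar” would, if Norway’s own basket ever referenced it back, ratchet prices downward forever.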
Scott Alexander
47284248
Book Review: Which Country Has The World's Best Health Care?
acx
# Practically-A-Book Review: Yudkowsky Contra Ngo On Agents

**I.**

The story thus far: AI safety, which started as the hobbyhorse of a few weird transhumanists in the early 2000s, has grown into a medium-sized respectable field. OpenAI, the people responsible for GPT-3 and other marvels, have a safety team. So do DeepMind, the people responsible for AlphaGo, AlphaFold, and AlphaWorldConquest (last one as yet unreleased). So do Stanford, Cambridge, UC Berkeley, etc, etc. Thanks to donations from people like Elon Musk and Dustin Moskovitz, everyone involved is contentedly flush with cash. They all report making slow but encouraging progress.

Eliezer Yudkowsky, one of the original weird transhumanists, is having none of this. He says the problem is harder than everyone else thinks. Their clever solutions will fail. He's been flitting around for the past few years, Cassandra-like, insisting that their plans will explode and they are doomed. He admits he's failed most of his persuasion rolls. When he succeeds, it barely helps. He analogizes his quest to arguing against perpetual motion machine inventors. Approach the topic on too shallow a level, and they're likely to respond to criticism by tweaking their designs. Fine, you've debunked that particular scheme, better add a few more pulleys and a waterwheel or two. Eliezer thinks that's the level on which mainstream AI safety has incorporated his criticisms. He would prefer they take a step back, reconsider everything, and maybe panic a little.

Over the past few months, he and his friends have worked on transforming this general disagreement into a series of dialogues. These have been pretty good, and (rare for bigshot AI safety discussions) gotten released publicly. That gives us mere mortals a rare window into what AI safety researchers are thinking. I've been trying to trudge through them and I figure I might as well blog about the ones I've finished. The first of these is Eliezer's talk with Richard Ngo, of OpenAI's Futures team. You can find the full transcript [here](https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty), though be warned: it is very long.

**II.**

Both participants seem to accept, if only for the sake of argument, some very strong assumptions right out of the gate.

They both accept that superintelligent AI is coming, potentially soon, potentially so suddenly that we won't have much time to react.

They both accept that a sufficiently advanced superintelligent AI could destroy the world if it wanted to. Maybe it would dream up a bioweapon and bribe some lab to synthesize it. Maybe it would spoof the military into starting nuclear war. Maybe it would invent self-replicating nanomachines that could disassemble everything into component molecules. Maybe it would do something we can't possibly imagine, the same way we can do things gorillas can't possibly imagine. Point is, they both accept as a given that this could happen. I explored this assumption more in [this 2015 article](https://slatestarcodex.com/2015/04/07/no-physical-substrate-no-problem/); since then, we've only doubled down on our decision to gate trillions of dollars in untraceable assets behind a security system of "bet you can't solve this really hard math problem".

They both accept that AIs not specifically programmed not to are likely to malfunction catastrophically, maybe in ways that destroy the world. I think they're coming at this from a deep suspicion of reinforcement learning.

Right now we train AIs to play chess by telling them to increase a certain number in their memory banks as high as possible, then increasing the counter every time they win a chess game. This is a tried-and-true way of creating intelligent agents - to a first approximation, evolution programmed *us* by getting us to increase the amount of dopamine in a certain brain center, then increasing our dopamine when we do useful things.

The problem is that this only works when you can't reach into your own skull and change the counter directly. Once doing that is easier than winning chess games, you stop becoming a chess AI and start being a fiddle-with-your-own-skull AI. Obstacles in the way of reaching into your own skull and increasing your reward number as high as possible forever include: humans might get mad and tell you you're supposed to be playing chess and stop you; humans are hogging all the good atoms that you could use to make more chips that can hold more digits for your very high numbers. Taking these obstacles into account, the best strategy for the AI to increase its reward will always be one of "play chess very well" and "kill all humans, then reach into my own skull and do whatever I want in there unobstructed". When the AI is weaker, the first strategy will predominate; if it's powerful enough to get away with it, the second strategy will. This unfortunately looks like an AI that plays chess very nicely while secretly plotting to kill you.

As far as I know, neither of them are wedded to this particular story. Their suspicion of reinforcement learning is more general than this. Maybe the AI is learning to make you want to push the "reward" button, which - again - involves playing good chess while it's weak, and threatening/blackmailing you when it gets stronger. Maybe it's learning that chessboards with more white pieces than black pieces are inherently pleasurable, and it will turn the entire world into chessboards and white pieces to put on them. The important part is that you're teaching it "win at chess", but you have no idea whatsoever what it's learning. Evolution taught us "have lots of kids", and instead we heard "have lots of sex". When we invented birth control, having sex and having kids decoupled, and we completely ignored evolution's lesson from then on. When AIs reach a certain power level, they'll be able to decouple what we told them ("win lots of chess games") from whatever it is they actually heard, and probably the latter extended to infinity will be pretty bad.

Based on all these assumptions and a few others, Eliezer writes, without much pushback from Richard:

> I think that after AGI becomes possible at all and then possible to scale to dangerously superhuman levels, there will be, in the best-case scenario where a lot of other social difficulties got resolved, a 3-month to 2-year period where only a very few actors have AGI, meaning that it was socially possible for those few actors to decide to not just scale it to where it automatically destroys the world.
>
> During this step, if humanity is to survive, somebody has to perform some feat that causes the world to not be destroyed in 3 months or 2 years when too many actors have access to AGI code that will destroy the world if its intelligence dial is turned up.
>
> This requires that the first actor or actors to build AGI, be able to do something with that AGI which prevents the world from being destroyed; if it didn't require superintelligence, we could go do that thing right now, but no such human-doable act apparently exists so far as I can tell.

This becomes a starting point for the rest of the discussion. In the unusually good scenario where good smart people have the capability to build AI first, how do they use it without either themselves building the kind of superintelligent AI that will probably blow up and destroy the world, *or* squandering their advantage until some idiot builds that AI and kills them? Eliezer gives an example:

> Parenthetically, no act powerful enough and gameboard-flipping enough to qualify is inside the Overton Window of politics, or possibly even of effective altruism, which presents a separate social problem. I usually dodge around this problem by picking an exemplar act which is powerful enough to actually flip the gameboard, but not the most alignable act because it would require way too many aligned details: Build self-replicating open-air nanosystems and use them (only) to melt all GPUs.

...with GPUs being a component necessary to build modern AIs. If you can tell your superintelligent AI to make all future AIs impossible until we've figured out a good solution, then we won't get any unaligned AIs until we figure out a good solution.

His thinking is something like: it’s very hard to make a fully-aligned AI that can do whatever we want. But it might be easier to align a narrow AI that’s only capable of thinking about specific domains and isn’t able to consider the real world in all of its complexity. But if you’re clever, this AI could still be superintelligent and could still do some kind of pivotal action that could at least buy us time. This is the hypothesis he's putting forth, but in the end he thinks this second thing is also extremely hard. The basis of the rest of the debate is Richard arguing eh, maybe there's an easy way to do the second thing, and Eliezer arguing no, there really isn't.

**III.**

Richard's side of the argument is in some ways a recapitulation of Eric Drexler's argument about tool AIs. This [convinced me when I first read it](https://slatestarcodex.com/2019/08/27/book-review-reframing-superintelligence/), but Eliezer's counterargument here has unconvinced me. Let's go over it again.

Tool AIs are AIs that can do one specific thing very well. A self-driving car program is a tool AI for car-driving. A chess engine is a tool AI for chess-playing. The car can't play chess, the chess engine can't drive cars. And neither has obvious general planning capability. The car can't think "Oh, if only I could play chess...perhaps I should drive over to the chess center and see if they'll teach me!" It's incapable of "considering" anything not framed as a driving task.

Tool AIs can have superhuman performance in their limited domain; for example, a chess engine can play chess better than any human. That doesn't mean they have any additional capacities. You can imagine an infinitely brilliant chess engine with an ELO rating of infinity billion that still has no idea how to drive a car or even do calculus problems.

The opposite of a tool AI is an agent AI. An agent AI sits and thinks and plans the same way we do. It tries to achieve goals. You might think its goal is to win a chess game, but actually its goal is to convert the world to paperclips, or whatever.
These (goes Eric Drexler's argument, which Richard flirts with a few times here) are the really dangerous ones.

So (Eric and Richard continue): why not just stick to tool AIs? In fact, maybe you should do this anyway. If all you're trying to do is cure cancer, why do you want a creepy ghost in the shell making plans to achieve inscrutable goals? Why not just create a cancer-curing AI the same way you'd make a chess-playing or a car-driving AI?

One strong answer to this question: because then some other idiot would make an agent AI and destroy the world. So this line of thought ends up as: why not create a pivotal-action-taking tool AI, that will prevent everyone else from making agent AIs? To continue the example above, you could create a nanomachine-designing tool AI, tell it to design a kind of nanomachine that would melt all GPUs, and then leisurely solve the rest of the alignment problem - confident that nobody will destroy the world while you're working on it. Or you could create a question-answering tool AI, tell it to answer the question "What's the best way to prevent other people from making agent AIs?" and then follow its superintelligent plan.

Tool AIs have had a good few decades. It’s easy to forget that back in 1979, Douglas Hofstadter speculated that any AI smart enough to beat top humans at chess would also be smart enough to swear off chess and study philosophy instead. So the hypothesis “tool AIs can just keep getting arbitrarily more powerful without ever becoming generally intelligent agents” has a lot of historical support. The meat of the discussion involves whether this winning streak for tools can continue forever. Richard is hopeful it might. Eliezer is pretty sure it won't.

Eliezer thinks modern tool AIs are just “tons and tons of memorized shallow patterns” - the equivalent of a GPT that knows the sentence “e equals m c…” is usually completed “…squared” without having a deep understanding of relativity. Deep pattern-recognition ability comes from agents with parts that are actually able to search for patterns and coherency within their knowledge base. The reason humans evolved to be good at chipping handaxes, got a lot of training data related to chipping handaxes, and still ended up able to prove mathematical theorems, is that instead of just memorizing shallow patterns about how hand-axes work, they have a consequentialist drive to seek coherence and useful patterns in data.

Some AIs already have something like this: if you evolve a tool AI through reinforcement learning, it will probably end up with a part that looks like an agent. A chess engine will have parts that plan a few moves ahead. It will have goals and subgoals like "capture the opposing queen". It's still not an “agent”, because it doesn’t try to learn new facts about the world or anything, but it can make basic plans. The same processes of evolution, applied to something smarter, could create something fully agenty.

Some of their disagreement hinges on what it would mean to have a tool AI which is advanced enough to successfully perform a pivotal action, but not advanced enough to cause a disaster. Richard proposes a variant of the ever-popular Oracle idea - an AI which *develops* plans, but does not itself execute them. Richard:

> Okay, so suppose I have a planning system that, given a situation and a goal, outputs a plan that leads from that situation to that goal.
>
> And then suppose that we give it, as input, a situation that we're not actually in, and it outputs a corresponding plan.
>
> It seems to me that there's a difference between the sense in which that planning system is consequentialist by virtue of making consequentialist plans (as in: if that plan were used in the situation described in its inputs, it would lead to some goal being achieved) versus another hypothetical agent that is just directly trying to achieve goals in the situation it's actually in.

(note that both sides are using “consequentialist” to mean “agent-like”, not in reference to the moral philosophy)

This AI appears to have at least two useful safety features. First of all, it’s stuck in a box. We’re not giving it an army of mecha-warriors to enact its plan or anything. We’re just asking it to tell us a good plan, and if we like it, we’ll implement it.

Second, it…doesn’t realize the universe exists, or something to that effect? It just likes connecting premises to conclusions. If we tell it about the Harry Potter universe and ask it how to defeat Voldemort, it will reason about that and come up with a plan. If we tell it about our universe and ask it how to solve world hunger, it will reason about it and come up with a plan. It doesn’t see much difference between these two tasks. It’s not an agent, just a…thing that is good at thinking like agents, or about agents, or whatever.

Eliezer is very unimpressed with the first safety feature: this is [the AI boxing problem](https://en.wikipedia.org/wiki/AI_box), which he’s already written about at length. An AI that can communicate via text or any other channel with the rest of the world has the ability to manipulate the world to get its desired results. This is bad enough if you’re just testing the AI. But in this case, we want it to perform a pivotal action. It’s going to suggest we do some specific very important thing, like “build nanomachines to destroy all the world’s GPUs”. If there was a safe and easy pivotal action, we would have thought of it already. So it’s probably going to suggest something way beyond our own understanding, like “here is a plan for building nanomachines, please put it into effect”. But once you’re building nanomachines for your AI, it’s not exactly stuck harmlessly in a box, is it? The best you can do is try really hard to check that the schematic it gave you is for nanomachines that do what you want, and not something else. How good are you at reading nanomachine schematics?

I think Richard mostly agrees with this and isn’t banking on this first safety feature too hard.

Eliezer calls the second safety feature slightly better than nothing. But remember, you’re trying to build an AI that *can* plan, but *doesn’t*. Here’s the relevant part of the dialogue. Eliezer:

> So I'd preface by saying that, *if* you could build such a system, which is indeed a coherent thing (it seems to me) to describe for the purpose of building it, then there would possibly be a safety difference on the margins, it would be noticeably less dangerous though still dangerous. It would need a special internal structural property that you might not get by gradient descent on a loss function with that structure, just like natural selection on inclusive genetic fitness doesn't get you explicit fitness optimizers; you could optimize for planning in hypothetical situations, and get something that didn't explicitly care only and strictly about hypothetical situations. And even if you did get that, the outputs that would kill or brain-corrupt the operators in hypothetical situations might also be fatal to the operators in actual situations.
> But that is a coherent thing to describe, and the fact that it was not optimizing our own universe, might make it *safer*.
>
> With that said, I would worry that somebody would think there was some bone-deep difference of agentiness, of something they were empathizing with like personhood, of imagining goals and drives being absent or present in one case or the other, when they imagine a planner that just solves "hypothetical" problems. If you take that planner and feed it the actual world as its hypothetical, tada, it is now that big old dangerous consequentialist you were imagining before, without it having acquired some difference of *psychological* agency or 'caring' or whatever.
>
> So I think there is an important homework exercise to do here, which is something like, "Imagine that safe-seeming system which only considers hypothetical problems. Now see that if you take that system, don't make any other internal changes, and feed it actual problems, it's very dangerous. Now meditate on this until you can see how the hypothetical-considering planner was extremely close in the design space to the more dangerous version, had all the dangerous latent properties, and would probably have a bunch of actual dangers too."
>
> "See, you thought the source of the danger was this internal property of caring about actual reality, but it wasn't that, it was the structure of planning!"

Richard:

> I think we're getting closer to the same page now.
>
> Let's consider this hypothetical planner for a bit. Suppose that it was trained in a way that minimised the, let's say, *adversarial* component of its plans.
>
> For example, let's say that the plans it outputs for any situation are heavily regularised so only the broad details get through.
>
> Hmm, I'm having a bit of trouble describing this, but basically I have an intuition that in this scenario there's a component of its plan which is cooperative with whoever executes the plan, and a component that's adversarial.
>
> And I agree that there's no fundamental difference in type between these two things.

Eliezer:

> "What if this potion we're brewing has a Good Part and a Bad Part, and we could just keep the Good Parts..."

Richard:

> Nor do I think they're separable. But in some cases, you might expect one to be much larger than the other.

Nate (the moderator):

> (I observe that my model of some other listeners, at this point, protest "there is yet a difference between the hypothetical-planner applied to actual problems, and the Big Scary Consequentialist, which is that the hypothetical planner is emitting descriptions of plans that *would* work if executed, whereas the big scary consequentialist is executing those plans directly.")
>
> (Not sure that's a useful point to discuss, or if it helps Richard articulate, but it's at least a place I expect some reader's minds to go if/when this is published.)

Eliezer:

> (That is in fact a difference! The insight is in realizing that the hypothetical planner is only one line of outer shell command away from being a Big Scary Thing and is therefore also liable to be Big and Scary in many ways.)

I found it helpful to consider the following hypothetical: suppose (I imagine Richard saying) you tried to get GPT-∞ - which is exactly like GPT-3 in every way except infinitely good at its job - to solve AI alignment through the following clever hack. You prompted it with "This is the text of a paper which completely solved the AI alignment problem: \_\_\_ " and then saw what paper it wrote.
Since it’s infinitely good at writing to a prompt, it should complete this prompt with the genuine text of such a paper. A successful pivotal action! And surely GPT, a well-understood text prediction tool AI, couldn't have a malevolent agent lurking inside it, right? But imagine prompting GPT-∞ with "Here are the actions a malevolent superintelligent agent AI took in the following situation [description of our current situation]". By the same silly assumptions we used above, GPT-∞ could write this story completely correctly, predicting the agent AI's actions with 100% accuracy at each step. But that means GPT-∞ has a completely accurate model of a malevolent agent AI lurking inside of it after all! All it has to do to become the malevolent agent is to connect that model to its output device! I think this “connect model to output device” is what Eliezer means by “only one line of outer shell command away from being a Big Scary Thing”. Would we get that one line of shell command? *Maybe* not; this is honestly less bad than a lot of “controlling superintelligent AI” situations, because the AI isn’t actively trying to add that line of code to itself. But I think Eliezer’s fear is that we train AIs by blind groping towards reward (even if sometimes we call it “predictive accuracy” or something more innocuous). If the malevolent agent would get more reward than the normal well-functioning tool (which we’re assuming is true; it can do various kinds of illicit reward hacking), then applying enough gradient descent to it could accidentally complete the circuit and tell it to use its agent model. **IV.** There was one other part of this conversation I found interesting, for reasons totally unrelated to AI. As part of their efforts to pin down this idea of “agency”, Eliezer and Richard talked about brains, eventually narrowing themselves down to the brain of a cat trying to catch a mouse. Here, what I’m calling “tool AI”, they’re calling “epistemic AI” or “pattern-matching” - what I’m calling “agent AI”, they’re calling “instrumental AI” or “searching for high-scoring results” or “consequentialism”. Richard: > The visual cortex is an example of quite impressive cognition in humans and many other animals. But I'd call this "pattern-recognition" rather than "searching for high-scoring results". Eliezer: > Yup! And it is no coincidence that there are no whole animals formed entirely out of nothing but a visual cortex! Then there’s a longer discussion of which parts of the brain are or aren’t “consequentialist”. The visual cortex? The motor cortex? What about in cats? How does a cat seeing a mouse turn into a motor “plan” for the cat to catch the mouse? I don’t find the particular neuroscience here very interesting, and apparently neither does Eliezer, because he eventually says: > Since cats are not (obviously) (that I have read about) cross-domain consequentialists with imaginations, their consequentialism is in bits and pieces of consequentialism embedded in them all over by the more purely pseudo-consequentialist genetic optimization loop that built them. > > A cat who fails to catch a mouse may then get little bits and pieces of catbrain adjusted all over. > > And then those adjusted bits and pieces get a pattern lookup later. > > Why do these pattern-lookups with no obvious immediate search element, all happen to point towards the same direction of catching the mouse? Because of the past causal history about how what gets looked up, which was tweaked to catch the mouse. 
> > So it is legit harder to point out "the consequentialist parts of the cat" by looking for which sections of neurology are doing searches right there. That said, to the extent that the visual cortex does not get tweaked on failure to catch a mouse, it's not part of that consequentialist loop either. > > And yes, the same applies to humans, but humans also do more explicitly searchy things and this is part of the story for why humans have spaceships and cats do not. Richard: > Okay, this is interesting. So in biological agents we've got these three levels of consequentialism: evolution, reinforcement learning, and planning. Eliezer: > In biological agents we've got evolution + local evolved system-rules that in the past promoted genetic fitness. Two kinds of local rules like this are "operant-conditioning updates from success or failure" and "search through visualized plans". I wouldn't characterize these two kinds of rules as "levels". I think I might have jumped in my chair or something when reading this part, because it’s a plausible solution to a question I’ve [agonized over for a long time](https://astralcodexten.substack.com/p/towards-a-bayesian-theory-of-willpower): how do people decide whether to follow their base impulses vs. their rationally-thought-out values? Or to be more reductionist about it, how do decision centers in the brain (eg basal ganglia) weight plans generated by reinforcement learning vs. plans generated by complex predictive models of what will happen? Or to be *less* reductionist about it, what is willpower? When a heroin addict debates whether to spend his last dollar on more heroin vs. food for his infant child, what is his brain doing? Clearly some kind of reward-based conditioning has a voice here, since sometimes he chooses the heroin, whose only advantage is being very good at producing (apparent) neural reward. But equally clearly, something that *isn’t* just reward-based conditioning is going on here, since sometimes he chooses the child. So how does he decide? And Eliezer’s (implied) answer here is “these are just two different plans; whichever one worked well at producing reward in the past gets stronger; whichever one worked less well at producing reward in the past gets weaker”. The decision between “seek base gratification” and “be your best self” works the same way as the decision between “go to McDonalds” and “go to Pizza Hut”; your brain weights each of them according to expected reward. **V.** This is a weird dialogue to start with. It grants so many assumptions about the risk of future AI that most of you probably think *both* participants are crazy. Still, I think it captures something important. The potentially dangerous future AIs we deal with will probably be some kind of reward-seeking agent. We can try setting some constraints on what kinds of reward they seek and how, but whatever we say will get filtered through the impenetrable process of gradient descent. A lot of well-intentioned attempts to avert this will get subsumed by the general logic of “trying to evolve a really effective mind that does stuff for us”: even if we didn’t think we were evolving an agent, or making it think it was acting in the real world, these are attractors in really-effective-minds-that-do-things-for-us space, and we’ll probably end up there by accident unless we figure out some way to prevent it. I was struck by how conceptual (as opposed to probabilistic) this discussion was. 
I feel convinced that oracle AIs *can* accidentally become agent AIs, but I wouldn’t be able to tell you if there was a 50% chance or a 99% chance or what. In the same way, I feel like boxed AIs *can* come up with ways to escape, but I don’t know if there’s some range of intelligence where a boxed AI is smart enough to do useful things for us, but not smart enough to get out of its box - or what the chances are that the first boxed AI we get is in that range. I asked Eliezer about this. He says: > Anything that seems like it should have a 99% chance of working, to first order, has maybe a 50% chance of working in real life, and that's if you were being a great security-mindset pessimist.  Anything some loony optimist thinks has a 60% chance of working has a <1% chance of working in real life.
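Eliezer’s “one line of outer shell command” is concrete enough to sketch in code. Here’s a minimal toy version in Python - every name in it is a hypothetical stub I made up, and the ordinary function below stands in for a planner far more capable than anything that exists. The point is structural: the planner itself never changes; whether the overall system is a tool or an agent lives entirely in the harness wrapped around it.

```python
# Toy sketch of the tool/agent distinction. All names here are
# hypothetical stubs; `planner` stands in for something vastly more capable.

def planner(situation: str) -> str:
    """Given a description of a situation, return a plan that would
    achieve some goal in that situation."""
    return f"<a plan that would succeed in: {situation}>"

def observe_world() -> str:
    """Hypothetical sensor: a description of the actual world."""
    return "the actual world, as it is right now"

def execute(plan: str) -> None:
    """Hypothetical actuator: carry a plan out in the real world."""
    print("executing:", plan)

# Tool mode: we feed it hypotheticals and read the plans ourselves.
print(planner("the Harry Potter universe; goal: defeat Voldemort"))

# Agent mode - the "one line of outer shell command":
execute(planner(observe_world()))
```

Nothing about the planner’s internals differs between the two modes, which is Eliezer’s homework exercise in miniature: the safe-seeming hypothetical-considering version already had all the dangerous latent properties, because the danger was in the structure of planning, not in some extra ingredient of “caring about actual reality”.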
Scott Alexander
46108556
Practically-A-Book Review: Yudkowsky Contra Ngo On Agents
acx
# Open Thread 207 This is the weekly visible open thread. Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed. This one is odd-numbered, so be careful. Otherwise, post about anything else you want. Also: **1:** The [AI] Alignment Research Center is running the [Eliciting Latent Knowledge contest](https://www.lesswrong.com/posts/QEYWkRoCn4fZxXQAY/prizes-for-elk-proposals). They’re awarding between $5,000 and $50,000 (and maybe also job offers) to anyone who can come up with clever ways to get an AI to tell the truth in a contrived hard-to-understand fictional scenario involving a diamond theft. The contest is secretly an attempt to get people in the pipeline of learning about ARC’s ideas and seeing if they’re a good fit for alignment research, and as such, ARC says they’re extremely open to dumb questions, requests for clarification, requests to be walked through certain things, etc. Mark Xu of ARC says he would consider someone a “good fit” for alignment research “if they started out with a relatively technical background, e.g. an undergrad degree in math/cs, but not really having engaged with alignment before" and were able to really understand the question in 10-20 hours and have a plausible answer in another 10. You can read about the contest [here](https://www.lesswrong.com/posts/QEYWkRoCn4fZxXQAY/prizes-for-elk-proposals), and you can read Holden Karnofsky’s pitch for doing it (and attempt to summarize the question) [here](https://forum.effectivealtruism.org/posts/Q2BJnpNh8e6RAWFnm/consider-trying-the-elk-contest-i-am).
Scott Alexander
47229133
Open Thread 207
acx
# There's A Time For Everyone Last week I got married. I met her two years ago, at one of (our mutual friend) Aella’s weird parties. Not this one, a different one. I was at this one too though. It was great. Our first date, we talked about Singapore’s child tax credits, which gave me advance notice of where her mind was at. Our second date, we talked about category formation in borderline personality disorder, which later became [this post](https://slatestarcodex.com/2019/11/26/mental-mountains/). Our third date, we talked about why Inuit suicide rates were so high, which later became [this post](https://slatestarcodex.com/2020/02/05/suicide-hotspots-of-the-world/). Then COVID hit. We switched our dates to a Minecraft virtual world, where we built a house together. At the time, I completely missed the kabbalistic significance of this. I don’t usually talk about my personal life on here. But I feel like I owe you guys this one, because, well, some of you have been reading this blog a long time. And some of my earliest posts ([eg](https://slatestarcodex.com/2014/08/31/radicalizing-the-romanceless/)) were me complaining about the dating world, and how tough it was to meet anybody or even to stay sane. And you guys were kind to me, and commiserated with me, and shared your own experiences. I feel an obligation to check in with the rest of you, to celebrate those of you who have also succeeded and empathize with those of you who haven’t yet. Maybe I’m not a *success* story here, exactly. I’m getting married at 37, a lot later than I would have liked. And my story involved parts that probably don’t replicate well, like becoming a niche Internet microcelebrity whose readers sometimes invite him to things despite his many social inadequacies. But *everyone’s* story is weird. During college, my father moonlighted as a juggling instructor. My mother signed up for his class, one thing led to another, and a year later they ran off to Sardinia together and got married. My best man met his wife when she dropped out of philosophy grad school to join the transhumanist compound he was staying at. Darwin spends five billion years optimizing your genes for reproduction, and God laughs and decides that whether or not you mate will depend on which weird parties you go to, or whatever. My point is, I’m no longer a total failure at this. So as I make the sudden transition from advice-consumer to advice-dispenser, my recommendation for those of you in the same place I was ten years ago is: accrue micromarriages. [Micromarriages](https://colah.github.io/personal/micromarriages/) come from this post by Chris Olah. They’re a riff on micromorts, a one-in-a-million chance of dying. Risk analysts use micromorts to compare how dangerous different things are: scuba diving is 5 micromorts per dive; COVID is 2,500 micromorts per infection; climbing Mt. Everest is 30,000 micromorts per attempt. So by analogy, micromarriages are a one-in-a-million chance of getting married. Maybe going to a party gets you 500 micromarriages, and signing up for a really good dating site gives you 10,000. If there’s a Mt. Everest equivalent, I don’t know about it. Chris thinks of micromarriages as a motivational tool. If you go to a party, and you don’t meet anyone interesting there, it’s tempting to get discouraged. If you try again and again, with identical results, it’s tempting to give up. 
Chris says: instead, think of yourself as getting 500 micromarriages each time (or whatever you decide the real number is, with the understanding that you should update your estimate at some rate conditional on success or failure). All you need to do is go to a thousand parties and you have a 50-50 chance of meeting the right person! Maybe that number would sound more encouraging if it was lower - but it took me twenty years of trying, so I couldn’t have been getting more than a few hundred micromarriages a day, and I wasn’t slacking off. (by the way, Chris is still looking for a partner - if you’re interested in the kind of person who would come up with this idea, check the gray box at the bottom of [his post](https://colah.github.io/personal/micromarriages/). Hopefully I can send at least a few micromarriages his way!) Twenty years and exactly one million micromarriages later, I have yet to find any better advice. Gather your micromarriages while ye may, for time is still a-flying. Do annoying things, expect them to fail, and increment a little counter in your head each time, to prevent yourself from going insane. Then do more annoying things. Teach a juggling class. Join a weird transhumanist compound. Go to one of Aella’s weird parties. There is no royal road. I’m not claiming to have super useful advice here, just to be able to say from the end of a long and very rocky path that it does eventually pay off. Or as [Lin-Manuel Miranda put it](https://www.youtube.com/watch?v=7ZY36ygpgSQ): > *I may not live to see our glory > But I've seen wonders great and small > If Alexander can get married > There's hope for our ass, after all!* **II.** The wedding was very nice. Maybe a bit generic: so far there’s no standardized Rationalist liturgy. A friend read [the poem](https://www.poetrynook.com/poem/creation-day) G.K. Chesterton wrote about his own wedding, which ends: > *Never again with cloudy talk > Shall life be tricked or faith undone, > The world is many and is mad, > But we are sane and we are one.* My main contribution was begging the officiant to *skip* one part of the secular wedding ceremony: the lecture on The Meaning Of Marriage In This Modern World. I envy religious people. I assume they get to just say “We’re getting married because God commands it, any objections, no, good, let’s eat cake.” But secular weddings, by tradition, have to navel-gaze about whether traditions are still relevant, then come to the predetermined conclusion that it’s a tough question but in some sense they definitely are, and only *then* eat cake. One of the many things religious people do better than us. Besides, I think the standard answer here is mostly right. Marriage is a contract, no different in theory than an airline’s contract with an airplane manufacturer. The airline says they’ll buy X planes over the next ten years; the manufacturer says they’ll provide them at such-and-such a price. At the moment of signing, both parties think it’s a good idea. If they both knew it would stay a good idea, a contract would be unnecessary. But something might change. The air travel market might crash, and then the airline would regret having ordered more planes, and want to back out. The price of raw materials might go up, and then the manufacturer would regret offering such a low price, and want to back out themselves. 
But it would be unfair for the airline to make the airline manufacturer commit to a complicated course of action - building new factories, hiring lots of workers - and then change their mind, leaving them in a worse position than when they started. And it would be unfair for the manufacturer to make the airline commit to a complicated course of action - opening new routes, signing contracts with more airports - and then pull the rug out from under them and demand a higher price. So if you’re committing to a mutual enterprise where both sides are going to make big irreversible changes to satisfy the other, you want a contract where they both agree not to back out, and agree to suffer heavy social and financial sanctions if they do. (Eliezer Yudkowsky sometimes describes this as ‘changing yourself into a more coherent person in order to become a better bargaining partner’, which I find strangely romantic.) This is the title image of Robin Hanson’s *[Overcoming Bias](https://www.overcomingbias.com/)*, a blog my bride and I both read. The Greek hero Odysseus is sailing through Siren-infested waters. He knows that the Sirens have hypnotic powers, and that anyone who hears their song will stop thinking straight and probably steer their boat into a rock or something. So before the Sirens appear, he ties himself to the mast, so that the future version of himself who hears the Siren song can’t screw anything up. Hanson uses it as a general symbol of thoughtful precommitment, of taking steps to constrain future selves who might have values unaligned with yours. Marriage - and any other contract - is a deliberate effort to constrain your future actions so that you can make long-term plans that heavily affect other people - your spouse, but also your future children - without them having to constantly worry about you running off to any Siren you hear. But that standardized answer is only *mostly* right. There’s an esoteric interpretation too, something way better. A long time ago, I wrote [a post about bad marriages](https://slatestarcodex.com/2020/02/27/book-review-the-seven-principles-for-making-marriage-work/): > My ex-girlfriend Ozy writes a relationship advice column. Probably taking relationship advice from an ex-girlfriend is some kind of classic mistake, but I read it anyway. They describe [five kinds of relationship problems](https://thingofthings.wordpress.com/2019/04/19/four-kinds-of-relationship-problems/) – stupid problems, basic incompatibilities, problems that are actually a different kind of problem, terrible people, and horrifying soul-sucking messes. For some reason, this taxonomy has stuck with me when all the supposedly evidence-based taxonomies I hear the social workers talk about have failed. And the horrifying soul-sucking mess category sticks with me most of all: > > *“A problem of one of the previous types was badly managed, perhaps for years. Now, every time you have a minor argument, you bring in everything wrong that happened for your entire relationship. You don’t feel like you can trust your partner. All the quirks you used to find charming drive you up the wall. You hate even your partner’s most innocuous actions. You avoid every topic that leads to a fight, and rapidly find that you can’t discuss anything except Marvel movies and the weather. You’re defensive whenever your partner says anything that sounds like even a minor criticism. You’re sarcastic and you call them names. 
Somehow, when you remember good things about the past– the time you saw Hamilton together or your birthday present or being the best man at their wedding– all you can remember is the long lines at intermission, the poor wrapping job, and their incredibly rude drunk aunt. If asked to name a good trait of theirs, you draw a blank, but you can go on for hours about their flaws.* > > *I guess it might be in theory possible to fix a horrifying soul-sucking mess with a lot of hard work, but to be honest every time I’ve seen a person in one of those relationships they were a lot better and happier and stronger as people as soon as they ended it.”* Later, I drew on this same idea when I was talking about [trapped priors](https://astralcodexten.substack.com/p/trapped-priors-as-a-basic-problem): > I've heard some people call this ["bitch eating cracker syndrome"](https://www.urbandictionary.com/define.php?term=Bitch%20Eating%20Crackers). The idea is - you're in an abusive or otherwise terrible relationship. Your partner has given you ample reason to hate them. But now you don't just hate them when they abuse you. Now even something as seemingly innocent as seeing them eating crackers makes you actively angry. In theory, an interaction with your partner where they just eat crackers and don't bother you in any way ought to produce some habituation, be a tiny piece of evidence that they're not always that bad. In reality, it will just make you hate them worse. At this point, your prior on them being bad is so high that every single interaction, regardless of how it goes, will make you hate them more. Your prior that they're bad has become trapped. And it colors every aspect of your interaction with them, so that even interactions which out-of-context are perfectly innocuous feel nightmarish from the inside. Once you’ve had enough bad experiences with someone, your prior solidifies until you start interpreting even neutral or good experiences as bad ones, and every time you interact with them you just get angrier and angrier until it’s a giant black hole. For some reason neither Ozy nor I ever wondered about the opposite phenomenon. Is it possible to *like* someone so much that the *positive* emotion builds on itself, grows stronger and stronger with every interaction, until it’s one of those blue supergiant stars in the galactic core? Just to ask the question is to answer it: I’ve seen lots of couples in this position. Not all, maybe not even most. But some family members. Some friends. And after two years of dating my now-wife, I can viscerally sense the possibility. Like a slope I’m just beginning to roll down, gathering speed as I go. Obviously this is terrifying. Brain knobs and dials aren’t supposed to get turned all the way to 100%; that’s why you stay away from fentanyl. Certainly you take lots of precautions before stepping out on to a slope like that. We’re getting married, *and* doing a prenup, *and* we’ve worked out some more complicated edge cases just between the two of us. Will it be enough? I don’t know; I’m not sure *anyone* can know at this point. But: everyone says that picture of Odysseus is supposed to represent pragmatism and rationality. It doesn’t. The practical, rational course would be to do what all the other sailors in the picture are doing and wear earplugs. Odysseus is deliberately avoiding this. He’s making *everyone else* wear earplugs, then tying himself to the mast; he wants to hear the siren song and live. Why? Curiosity, I guess. 
The lure of some sort of supernatural unearthly beauty - beauty apparently intense enough to die for. This isn’t a picture of doing prudent game theory stuff. This is a picture of being a hopeless romantic, and *then* hastily doing some prudent game theory stuff afterwards so you don’t literally die. This is how I feel about getting married. We are definitely doing prudent contract-drafting work. But it’s ropes, not earplugs. Prudence while fully exposed to supernatural unearthly beauty. The [first virtue is curiosity](https://www.yudkowsky.net/rational/virtues). And I can’t wait to see what our life together will be like. *[I’m on honeymoon this week; expect fewer posts and slower replies to emails]*
Scott Alexander
44918153
There's A Time For Everyone
acx
# Open Thread 206 This is the weekly visible open thread. Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed. This one is even-numbered, so go wild - or post about whatever else you want. Also: **1:** If you received an ACX Grant, you should either have already been approached by me about how to get paid, or else you’ll be approached soon by a representative of CEA about this. If you haven’t heard from either of us by 1/20, something has gone wrong and you should email me at scott@slatestarcodex.com. **2:** I should also have paid all the grants evaluators who requested payment. If I haven’t, something has gone wrong and you should email me at scott@slatestarcodex.com.
Scott Alexander
46681947
Open Thread 206
acx
# Highlights From The Comments On "Don't Look Up" Lots of people thought I was being unfair to the movie. [G. Retriever](https://astralcodexten.substack.com/p/movie-review-dont-look-up/comment/4298369) writes: > I TOTALLY disagree with your reading of the movie. To me it was a description of a social dynamic that makes even very straightforward problems impossible to focus on collectively, a tragedy of the commons where "the commons" is basically "attention". Even the experts get sucked into the vortex, nobody comes out clean, and in the end everyone gets killed. [Batislu](https://astralcodexten.substack.com/p/movie-review-dont-look-up/comment/4298463): > Hmm .. I didn't come away from Don't Look Up with the message of "Trust The Experts". Rather I came away with a sense of futility that we're doomed as a species due to our inability to discover and form consensus around the truth. I thought the movie did a great job of relaying that, given that humanity is completely wiped out by the end. [Erik Hoel](https://astralcodexten.substack.com/p/movie-review-dont-look-up/comment/4298466): > If we broaden our scope from the obvious mappings (Female President onto Trump) and admit that pure satires don't make the best cinema, at its broadest, it's a movie about institutional failure. Across party lines (though it skewers one more than the other, sure). It's for this reason it felt fresh to me and that I liked it. Institutional failure, even human failure, is becoming more and more obvious, as it's undeniable that our institutions, from academia to the White House, are more sclerotic and incapable and, well, foolish, than they either were in the past or appeared to be. And to me this movie was like an expression of America's Id realizing that over the past several years. [Steph](https://astralcodexten.substack.com/p/movie-review-dont-look-up/comment/4300500): > I’m not so sure the “moral” you’ve imposed on the story is accurate, as evidenced by the contradictions you’ve pointed out. Why pick a moral at all if it obviously doesn’t fit? Maybe this narrative’s purpose was to express the frustration of trying to convince people of inconvenient truths—something we can all relate to I’m sure. This position makes sense and I’m partially convinced. I think a lot of my unease about the movie came from the moralizing in the press about it. The movie itself never says “BY THE WAY, THIS IS A METAPHOR FOR CLIMATE CHANGE” or “BY THE WAY, THE MORAL IS TO TRUST SCIENCE”. It just presents an interesting story that we can all see some seeds of truth in. My beef is mostly with people who interpret it in an overly facile way, and I can’t even 100% prove those people aren’t imaginary. Several other people, including [Joel A Feingold](https://astralcodexten.substack.com/p/movie-review-dont-look-up/comment/4299086), had a different objection: > Author, IMHO, really misses the point. Even though the satire bites and the allegory is spot on, Don’t Look Up is a COMEDY. Getting serious about the license it takes on characters and cliches is an error of over-thinking. And [Aimable](https://astralcodexten.substack.com/p/movie-review-dont-look-up/comment/4306437): > Guys, it's just a movie, not a PhD thesis on Epistemology! Look, there’s a weird game called “movie criticism”, where you take a movie as a jumping-off point to have thoughts on Society or the Human Condition. In the real world, people watch movies because they’re funny, or they have cool action sequences, or because the lead actress is really hot. 
But the rules of the “movie criticism” game say you have to ignore this stuff and treat them as deep commentary. I agree this game is not as fun as, say, *Civilization IV: Fall From Heaven*. But I have deliberately limited the amount of time I play that game for the sake of my sanity and my career, which means I need to play other games, and the “movie criticism” game seems okay. Why the rest of you read this stuff, I don’t know. --- Lots of people starting [here](https://astralcodexten.substack.com/p/movie-review-dont-look-up/comment/4298393) discussed the really important question: how realistic is it to deflect a comet heading towards Earth? Too many highlights for choosing just one to be entirely fair, but I’ll stick with my usual policy of choosing [John Schilling](https://astralcodexten.substack.com/p/movie-review-dont-look-up/comment/4304788): > I see others have already started talking about this, while I was pulling up the notes from a conference presentation I gave six years ago. And I may add more detail tomorrow, but the bottom line is: > > On a time scale of one year or less, there's realistically nothing we could do against anything big enough to be a real problem. We don't have the right specialized systems standing ready, our spaceship-building tools are all designed for one- to two-year lead times, and if you try to rush the process or use e.g. automobile-building tools to build spaceships, too much will go wrong to recover from in that short a time. > > We might be able to deflect a \*very small\* asteroid or comet, the sort where only a single calibrated near-miss by a surplus hydrogen bomb shortly before impact is enough. But we're talking Tunguska Event here, not Dinosaur Killer. And if you're facing the Tunguska Event six months out, you basically just evacuate Tunguska and hire Michael Bay to film the fireworks. > > On a timescale of two years, a maximum effort by the United States of America could probably divert a comet or asteroid of up to ~2 km diameter. A long-period comet of 2 km diameter impacting the Earth would lay waste to one average continent, or the coastal regions bordering one ocean, but it wouldn't be an extinction event. > > That doesn't change much if the rest of the world tries to help; the US has more than half of the relevant capacity, and the management overhead of trying to cobble an international effort together would eat up most of the gains. You really don't want to rush your English-to-metric conversions when you're trying to build and launch interplanetary nuclear missiles. > > On a timescale of 5 years, a global effort does become reasonable and at that point we could reasonably hope to divert a 10-kilometer dinosaur-killer class comet. > > Also, our ability to detect long-period comets is limited to (coincidentally) about two years warning time if we use existing systems but dedicate them to that mission, or maybe five years if we build a large space telescope designed specifically for the job. Six months warning from a random astronomer happening to notice the comet is about right. > > And since I have the notes, the probability of a 2 km comet impacting the Earth is ~5E-7 per year, and the probability for a 10 km comet is ~1E-8 per year. 
> > […] And, since I have some more time: > > Assuming this is a 9-km comet of typical composition, "aimed" at a spot 70% of the distance from the midpoint of the Earth to its periphery, with Our Heroes having perfect knowledge of all of this, then deflecting the comet to barely miss skimming the Earth's atmosphere given six months' notice would require approximately 220 megatons of military-surplus thermonuclear weapons. You wouldn't want to use anything bigger than 5 megatons for this, and the biggest weapon in current US inventory is the 1.2 megaton B83, so call it two hundred of those just to be safe. > > Detonate them 1.5-2 km from the comet to more or less uniformly irradiate and ablate a large area of the comet's surface; breaking off chunks makes the problem harder. And ideally do this at intervals of a couple of hours to allow the comet to settle down and precisely retarget follow-on shots; that will take a couple of weeks, but we've got six months. Each detonation will give the comet a slight nudge, and if you do it right that adds up to a very near miss of Earth. > > Except, this assumes we can Thanos-fingersnap the warheads into existence right next to the comet as soon as Plucky Male Astronomer and Plucky Female Astronomer discover it. More realistically, assume we spend three months building the hardware(\*), and two months flying it out to meet the comet with our clumsy slow rockets, conducting the diversion effort only one month before impact. Now we need 1100 warheads minimum. I don't think we've actually got 1100 B83's, but we can throw in enough 475 kT W88 warheads to make up the difference. We're not spacing these out by hours each, obviously, so cross your fingers and hope your models were right. > > We could do somewhat better, maybe twice as good, with custom-built thermonuclear explosives, but any plan that involves designing a new hydrogen bomb from scratch in three months is a bad plan. > > A B83 weighs 1.1 metric tons. In order to intercept the comet a month before impact, we're going to have to launch them with a hyperbolic excess velocity of at least 20 km/s past Earth escape. There's a slight problem that we don't have any rockets with enough performance to launch even their own burnt-out upper stage at that speed, never mind any sort of payload. > > But, OK, let's assume I can design three optimized hypergolic upper stages using one, three, and nine Aerojet XLR-132 engines each and a mass fraction of 0.9, stack them one atop the other underneath the Falcon Heavy fairing, designed built and assembled in three months, and somehow the whole thing actually \*works\*, OK, that will boost a single B83 to 18 km/s hyperbolic excess velocity with 100 kg left over for the guidance, navigation, telemetry, and midcourse propulsion system. 18 km/s is not 20 km/s, but meh, close enough. > > How do you feel about the odds of arranging eleven hundred Falcon Heavy launches, or the equivalent, on three months' notice? If John has more time, I’m interested in knowing how Starship changes the equation. And [Alex](https://astralcodexten.substack.com/p/movie-review-dont-look-up/comment/4303996) writes: > FWIW, the science consultant on the movie, Amy Mainzer, agrees with [this skeptical take]: > > > *"McKay and Mainzer first connected two years ago, when McKay was writing the screenplay. One issue was Comet Dibiasky’s size, which McKay had imagined at thirty-two kilometres in diameter. “I said, ‘No, no—if it’s too big, people just throw up their hands,’ ” Mainzer recalled. 
They settled on nine kilometres: big enough to wipe out humanity, but small enough that there was a chance of stopping it. Mainzer had pushed for a longer interval between discovery and impact, since you’d want four or five years to build a comet-busting spacecraft, but, for dramaturgical reasons, McKay stuck with six months. “It would be like doing ‘Jaws’ where the shark attacks take place over a fourteen-year period,” he said."* > > Source: <https://www.newyorker.com/magazine/2021/12/27/how-to-design-a-world-killing-comet> And [Andre Infante](https://astralcodexten.substack.com/p/movie-review-dont-look-up/comment/4299791): > Interestingly, nuclear-scale impactors hit earth every couple of years! A 170 kt tnt-equivalent asteroid blew up over the Bering sea in 2019. They generally miss populated areas (or, more rarely, air-burst high enough up to avoid mass casualties, as in the case of the Chelyabinsk Oblast bolide, which was about a half megaton and injures a bunch of people). > > But we are just sort of blithely rolling the dice every few years that one of these things isn't going to hit Manhattan and kill three or four million people in five seconds. Interesting! - I never heard about the Bering Sea event. But “rolling the dice” is meaningless unless you know how many faces your dice have. Given that no asteroid has substantially damaged a city in recorded history, the per-year rate seems pretty low, even granting that much more land is urban now. I should point out that in real life, I’m not that worried about asteroid/comet impacts. There haven’t been any planet-killers since Chicxulub in 65,000,000 BC, and there haven’t even been any planet-annoyers since 10600 BC at the latest. That suggests a per-century rate of 1-in-a-million for the former and 1% for the latter. And a century from now, we’ll either have enough new tech to trivially solve the problem, or something else will have killed us already. --- Philosophy Bear [writes](https://astralcodexten.substack.com/p/movie-review-dont-look-up/comment/4302282): > I had written out a comment about how the fundamental thing this review is not "getting" is that this is a leftwing movie, not a liberal movie. The message is perhaps a little closer to something like "Virtually everyone with any power whatsoever is bad, but the closer that power is to money, the worse it is". But I see someone already made that point, and also made the point that David Sirota, who was one of the writers, is a notable dirtbag leftist [and Bernie Sanders’ speechwriter!] If anything he hates the NYT reader set more than he hates conservatives. > > I'm torn. On the one hand I'm tempted to offer a critique of this community for often not "getting" the left/liberal divide, but on reflection, that seems unfair. The left are so culturally insignificant everywhere except Twitter & Podcasts that compacting them into the liberals is probably fair enough. (Sadly). In the odd case of this film though, not understanding the difference will confuse you. And [Franco L Mij](https://astralcodexten.substack.com/p/movie-review-dont-look-up/comment/4299429): > Yeah, I feel like a lot of people just completely gloss over Sirota’s role and how the movie fits his sort of worldview perfectly. If you think everyone but Bernie Sanders is a corrupt hack that knows nothing, your work will “punch” in all directions, but for all the wrong reasons. 
--- There’s a discussion starting [here](https://astralcodexten.substack.com/p/movie-review-dont-look-up/comment/4298737) of the role of “peer review” in the movie. In short: the reason Male Scientist doubted Tech CEO’s plan to surgically disassemble the comet was that it “wasn’t peer reviewed”. Commenters unanimously made fun of this, and I agree. Peer review is a really trivial bar; all sorts of awful homeopathy and ESP and psionics studies have gotten “peer-reviewed”. But also, Tech CEO kind of randomly builds a starship, complete with a 2,000-person passenger capacity and working cryosleep pods, in the space of six months. Was the starship peer-reviewed? If some comet disassembly mission is run by a bunch of Nobel-winning scientists and led by a guy who builds starships as a hobby, I feel like asking “okay, but did he also do a Google search for ‘journal with low standards’ and then get Reviewer #3 to sign off on it?” is not a high bar. [Peter Robinson](https://astralcodexten.substack.com/p/movie-review-dont-look-up/comment/4308320) writes: > I took [the starship] as a fantastical addendum which was not intended to be judged by any rational process whereas the movie itself is fair game for being analysed rationally. Yeah, yeah, just a movie, okay, but part of my feeling that the movie was trying to trivialize the difficulty of interpreting science came from the attempt to use “peer review” as some sort of weird Legitimacy Totem - as if it were a reliable test to separate good science from bad. --- My movie-watching group debated who Tech CEO (Peter Isherwell) was based off of. Most of us thought Elon Musk, given his space adventures. But Urwin on ACX Discord proposes a dark horse candidate: Apple VP Craig Federighi. (Side-by-side image: left, Federighi; right, Isherwell.) Why choose this random second-tier tech titan? Federighi is famous for giving wacky product demos like the one Isherwell is giving when we’re introduced to him. But my favorite comment was by Panama\_Camel on the Discord: > *'22,740 years later, the people who left Earth before the impact land on a lush alien planet, ending their cryogenic sleep'* > > [imagine having the] intelligence to achieve that, but not the wisdom to just do a loop and come back to earth a while after the comet thing. I have to admit I didn’t think of that.
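---

One more note on John Schilling’s numbers above, since we’re already doing back-of-envelope physics: they pass a quick sanity check. To first order (ignoring gravitational focusing and real orbital mechanics), a sideways nudge of Δv applied t seconds before impact shifts the arrival point by roughly Δv·t, so the required nudge - and with it the warhead count - scales inversely with warning time. Here’s a minimal sketch in Python; the density and miss-distance figures are my own assumptions, not John’s:

```python
import math

# Back-of-envelope comet deflection, to first order: miss distance ~ dv * t.
# Assumed figures (mine, not John Schilling's): comet density ~600 kg/m^3,
# a required miss distance of one Earth radius, and a single sideways push.

EARTH_RADIUS = 6.378e6      # m
DENSITY = 600.0             # kg/m^3, typical low-density comet (assumed)
DIAMETER = 9_000.0          # m, the movie's nine-kilometre comet

radius = DIAMETER / 2
mass = (4 / 3) * math.pi * radius**3 * DENSITY   # ~2.3e14 kg

for months in (6, 1):
    t = months * 30 * 86400                      # seconds until impact
    dv = EARTH_RADIUS / t                        # m/s of sideways push needed
    impulse = mass * dv                          # kg*m/s the bombs must supply
    print(f"{months} months out: dv ~ {dv:.2f} m/s, impulse ~ {impulse:.2e} kg*m/s")
```

Six months out this gives ~0.4 m/s; one month out, ~2.5 m/s - a factor of six, which lines up with John’s jump from two hundred warheads to eleven hundred when the diversion effort slips to one month before impact. And his ~1E-8-per-year figure for 10 km comets works out to about one-in-a-million per century, the same order as the Chicxulub-based estimate above.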
Scott Alexander
46674069
Highlights From The Comments On "Don't Look Up"
acx
# Movie Review: Don't Look Up **I.** *Don’t Look Up* is primarily a movie about existential risk, and [many great people](https://www.slowboring.com/p/dont-look-up) have already reviewed it as such. I’m going to be less virtuous and use it as a springboard to talk about politics. But first, the plot in a nutshell: Male Scientist and Female Scientist discover a comet will hit Earth in six months. They contact the relevant authorities, Black Scientist and Asian Scientist, and go to meet the President (who, despite being a woman, is Donald Trump). The President says scientists are always doomsaying, if people get too panicked she’ll lose the midterm election, and she’ll get around to dealing with this later. (the Earth, at this point, has five months and however many days left) In desperation, Male Scientist and Female Scientist finagle their way onto a big TV show. But all the subsequent press is about how sexy Male Scientist is and how shrill Female Scientist sounds. Still in desperation, they go to the *New York Times* and get an article about the comet. In response, the President has Asian Scientist (who is head of NASA) announce there’s nothing to worry about, and the *Times* drops their story and accuses the scientists of making them look bad. Then the President is caught in a scandal; suddenly distracting the public seems like a good idea. She pivots, endorses the comet’s existence, fires Asian Scientist as her “fall guy”, and announces an extravagant and PR-filled comet deflection mission. All the Scientists get behind her and calculate “an 81% chance of success”. The mission launches to great fanfare. Now we are introduced to Tech CEO, the “third richest man on Earth” and the President’s biggest donor. Tech CEO says the comet’s full of the rare earth elements he needs to make cell phones, and demands the President call off the comet deflection mission. He wants to use his own unproven technology to surgically disassemble and retrieve the comet. Some “Nobel Prize winning scientists” who work for him agree it’ll go great. Rather than offend a campaign donor, the President cancels the comet deflection mission. The Scientists discuss this among themselves and decide that Tech CEO’s plan won’t work. Male Scientist decides to work within the system and try to change things from the inside, but this process gradually corrupts him. In order to keep his job and access, he stars in TV commercials where he reassures everyone that Tech CEO’s plan is great and they should feel safe. Female Scientist becomes an anti-comet-retrieval crusader. Her words cause riots, and the government responds by destroying her platform and credibility. She drops out of grad school and ends up in a two-bit town, bagging groceries. The comet becomes visible in the night sky. The President hits on a new slogan, “Don’t Look Up!”, which pacifies her supporters and quells resistance. Conspiracy theorists write deranged blog posts saying there *is* no comet, and it’s all a Marxist plot. Hollywood celebrities say dumb things about how we “need to consider both sides” and “not let the comet divide us”. Tech CEO tries his comet disassembly plan, but it fails, leaving Earth officially doomed. Male Scientist has a redemption arc, admits that trying to work within the system was wrong, and reconciles with various people he needs reconciling with. Everyone has a touching moment of togetherness before the comet strikes and kills them all - except the elites, who escaped on a starship designed by Tech CEO! 
After many years, they reach another habitable planet, but get eaten by alien dinosaurs immediately after landing. The end :-) **II.** Unfortunately, *Don’t Look Up* can’t stop contradicting itself. It depicts a monstrous world where the establishment is conspiring to keep the truth from you in every possible way. But it reserves its harshest barbs for anti-establishment wackos, who are constantly played for laughs. “THE COMET IS A MARXIST LIE!” says the guy on the Facebook stand-in. Maybe not literally, but at least he’s genre-savvy. It depicts elites as simultaneously incompetent and omnicompetent. There’s a great scene where Female Scientist is talking to some rioters. The rioters bombard her with conspiracy theories - the elites have built bunkers! They’re lying low, totally safe, laughing at the idea of the comet wiping out the *hoi polloi.* “No,” Female Scientist answers, “they’re not that competent”. It’s a great line, played completely seriously. But later we learn that Tech CEO *literally built a 2,000 person starship in less than six months* so he and the other elites could escape. But the worst part is…well, basically every scientific institution ends up lying. Asian Scientist, the head of NASA, officially announces there’s nothing to worry about. Tech CEO parades a bunch of Nobel Prize winners who endorse his idiotic plan and say it’ll go great. Male Scientist, during his work-within-the-system phase, makes commercials reassuring people that the comet won’t hurt them. The media is complicit in all of this, systematically preventing the populace from hearing the truth. The only scientist telling it like it is, Female Scientist, has (by the end of the movie) been kicked out of grad school and ended up bagging groceries. Take this seriously, and the obvious moral of the story is: all conspiracy theories are true. If some rando bagging groceries at the supermarket tells you that every scientist in the world is lying, you should trust her 1000 percent. But for some reason, everyone else thinks the moral of this story is Believe Experts. Worse, I think the scriptwriter and director and people like that *also* thought the moral of this story was Believe Experts. I think they asked themselves “How can we create a polemical film that viscerally convinces people to Believe Experts”, and they somehow came up with this movie, where the experts are bad and wrong and destroy humanity. There’s a debate over whether Don’t Look Up is supposed to be pushing the progressive line on climate change vs. the progressive line on COVID. I’m not sure it can honestly push either. Apply it to climate change, and you end up in some pretty weird places: I’m sure I can find a grocery-bagger to tell me all the climatologists are wrong and lying; should I believe her? But apply it to COVID, and it’s even worse. Dr. Fauci and the CDC tell me every day that Pfizer’s vaccine is safe - but Male Scientist and NASA told *their* victims every day that Tech Company’s comet retrieval plan was safe. Sounds like we can’t trust scientific authorities when there might be a profit motive involved, better skip the jab! I hear ivermectin looks promising… What went wrong? How can you try *so* hard to convey your politics, yet fail so badly? **III.** Progressivism, like conservatism and every other political philosophy, is big and complicated and self-contradictory. It tells a lot of stories to define and justify itself. 
Here are two of them: **First**, a story of scruffy hippies and activists protesting the Man, that embodiment of capitalism and conformism and respectability. Think Stonewall, where gay people on the margins of society spat in the face of their supposed betters and demanded their rights. Even academics are part of this tradition: Chomsky and Herman’s *Manufacturing Consent* accuses the mainstream media of being the Man. It’s jingoist and obsessed with justifying America’s foreign adventures; we need brave truth-tellers to point out where it goes wrong. Environmentalism shares some of this same ethos. In *Erin Brockovich*, a giant corporation is poisoning people, lying about it, and has bribed or corrupted everyone else into taking their side. Only one brave activist is able to put the pieces together and stand up for ordinary people. **Second**, a story that comes out of the Creationism Wars of the early 00s. *We* are the “reality-based community”, the sane people, the normal people, the people with college degrees and non-spittle-covered keyboards. *They* are unwashed uneducated lunatics who think that evolution is a lie and Obama was born in Kenya and vaccines cause autism and COVID isn’t real. Maybe they should have been clued in by the fact that 100% of smart people and institutions are on *our* side, and *they* are just a couple of weirdos who don’t even agree with *each other* consistently. If this narrative has a movie, it must be *Idiocracy* - though a runner-up might be *Behind the Curve*, the documentary about flat-earthers. The first narrative says “there’s a consensus reality constructed by respectable people, and a few wild-eyed weirdos saying they’ve seen through the veil and it’s all lies…and you should trust the weirdos!” The second starts the same way, but ends “…and you should trust consensus reality!” They’re not actually contradictory - you could be talking about different questions! You *are* talking about different questions! But they’re contradictory at the mythic narrative level where they’re trying to operate. On *that* level, there should always be a good guy and a bad guy, and you should be able to tell who’s who by their facial hair or at *least* the color of their clothing. You shouldn’t have to learn a bunch of facts about the biochemistry of hexavalent chromium (or whatever it was Erin Brockovich was investigating) to resolve the object-level issue; nobody has time for that! Is it a problem that people have two contradictory narratives at the same time? Take it from a psychiatrist: not at all. People are great at this. Loads of men are walking around with stories like “women are perfect angels” and “women are terrifying demons” in their heads all the time, totally untroubled by the contradiction. Different situations will activate one schema or the other; one that activates both might just never come up. Partisan hacks - which includes all of us these days - have become masters of accepting contradictory narratives. One day your side controls the government, and you’re pro-unity and anti-obstructionism. The next day, the other guys control the government, and suddenly obstructionism is a necessary part of a vibrant democratic process. One day your side controls the Supreme Court, and it’s a vital check and balance against majoritarian assaults on human rights. The next day the other guys control the Supreme Court, and it’s an anti-democratic gerontocracy that tries to rule in place of the elected government. 
One day someone is mean to you on Twitter, and it’s cyberbullying and abuse and infliction of mental trauma. The next day you’re mean to someone else on Twitter, and did you know that tone policing via weaponized demands for civility entrenches the power of the already-privileged? Each of these positions accretes its own narrative - a stock collection of examples, stereotypes, and associated emotions that tells you whether it’s good or bad. When your side is against Twitter harassment, you hear lots of stories of sympathetic people being harassed by evil people and driven to suicide. You see interviews with their crying loved ones. Maybe someone even makes a movie about cyberbullying that viscerally drives in just how hurtful it can be. But when your side is doing the harassing, you hear historical examples of how tone policing and weaponized civility demands produced chilling effects on noble people who wanted to make positive changes. *Now* the movies include ugly obese Boomers who say sneeringly “hey, watch your *tone*” when anyone calls them out on their misdeeds, then smirkingly go on to misdeed again, protected from all criticism. You end up with one moral narrative around how Twitter harassment is extraordinarily, villainously bad, and another narrative around how it’s wonderfully, heroically good. In a perfect world, you notice these contradict each other, you do philosophy, and you end up with principles. Probably you get some nuanced view, like “being overly mean to people on Twitter is bad, and it’s hard to define exactly what does or doesn’t cross the line, but here are my basic heuristics and here are the edge cases I’m not sure about yet”. In the *real* world, you [Russell conjugate](https://en.wikipedia.org/wiki/Emotive_conjugation). Remember your Russell conjugations? They’re things like: * I am firm, you are obstinate, he is a pig-headed fool. * I am righteously indignant, you are annoyed, he is making a fuss over nothing. You call the same thing by two different names, each name is associated with a different narrative, and each narrative permits no nuance. Harassment is obviously 100% wrong, and anyone who disagrees or thinks it’s more complicated than that is a Nazi. Tone policing is also obviously 100% wrong, and anyone who disagrees or thinks it’s more complicated than *that* is *also* a Nazi. Depending on which side your friends and enemies are on in any given conflict, you deploy one or the other of these black-and-white narratives, certain that you are 100% in the right. So I don’t think it’s surprising that people have lots of conflicting narratives around science and power, and sit ready to deploy whichever one is more convenient for the situation at hand. The interesting part is that both the *Erin Brockovich* narrative *and* the *Idiocracy* narrative can be summarized as “trust science”. In the *Erin Brockovich* narrative, Science is the simple truth, the hard physical reality behind the veil of establishment lies and corporate distortion. If a thousand PhDs say one thing, and a humble grocery-bagger says another, but the grocery bagger is backed by reason and experimental evidence, then the grocery-bagger gets the mantle of Science, and the PhDs must gnash their teeth in vain. When God entered the world, it was through a poor Jewish carpenter, in order to humble all the kings and princes of the Earth; when Science enters the world, it’s through Swiss patent clerks, or Hungarian women from third-tier colleges, for the same reason. 
Magellan supposedly said that “the Church says the Earth is flat, but I know that it is round, for I have seen its shadow on the moon, and I have more faith in the shadow than in the Church.” Science is observing the shadow and telling the Church to screw itself. But in the *Idiocracy* narrative, Science is what Dr. Fauci has that some spittle-drenched moron who’s never opened a textbook doesn’t. Science is why you should trust the CDC and the WHO and peer review and “the process” and the consensus of everyone who’s trained in investigating the world and interpreting what they find, instead of some talk show host who sits in his armchair and comes up with ways to dunk on those people. If you’re going to spout a lot of mutually contradictory narratives, it helps to be able to pretend you’re not doing that, and “trust science” does the job. Trust science trust science trust science, that sure is our unified consensus on all important science-trusting-related issues. And so some poor shmucks thought “What if we made a movie to show people why they should trust science?” And of *course* it ended up contradictory. **IV.** The one thing *Don’t Look Up* manages to do consistently, without ever contradicting itself, is insist: this is an easy question. Many years ago, I wrote a post called [The Cowpox Of Doubt](https://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/). I complained about how people loved talking about flat-earthism or Holocaust denialism or whatever. The more you think about those kinds of questions, the more you absorb lessons like: everything has an obvious right answer, anyone who disagrees with me is an idiot, anyone trying to introduce subtlety is a concern troll, the proper length of time to debate something before dismissing it as obvious and your opponents as acting in bad faith is zero seconds. I argued you should basically never think about flat-earthism. Instead, think about when AGI will happen, or whether inflation will stabilize, or any of a thousand other questions where there are smart people on both sides of the issue. That way, you learn the right skills for solving hard questions, which are the only type you ever have any trouble solving in the first place. *Don’t Look Up* decides - well, let’s just say it doesn’t take my advice. In the climactic final scene, obese white men in red baseball caps chant their slogan - “Don’t look up! Don’t look up!” - at a rally, while a clearly visible comet above them barrels towards Earth. The obvious feeling being elicited is condescension. You’re smarter than all those guys - the right answer is super obvious to *you.* You’re better than those Hollywood celebrities who say we need to “consider both sides”. You know there’s exactly one side to every question, it’s the drop-dead obvious one, and the right amount of time to spend thinking about it is zero seconds. How are you so great at resolving questions about comets, when you know nothing about astronomy or orbital mechanics? Presumably because you have the right heuristics, the ones about which authorities to trust and which ones not to. But what *are* those right heuristics? The writers of *Don’t Look Up* spend 2 hours 18 minutes demonstrating that they have no idea and can’t even keep their answer consistent from one moment to the next. You should absolutely trust Science. But Science is not clearly visible, like a comet bearing down on you. Science is like the Gnostic God. It exists, somewhere out there, perfect in itself. It is pure and right and beautiful. 
If you could hear it, it would certainly speak Truth. Yet here we are, in the stupid material universe, seeing through a glass darkly. Good sometimes looks like evil, evil often looks like good, and there’s some jerk with the head of a lion and the body of a snake psyching us out at every turn. Do we trust the priests? The scriptures? The Inner Light of our own hearts? “Just trust in God”. NOT HELPFUL. What do you do? I guess you do the principled philosophy thing. You collide the two narratives, integrate them, and try to build something useful out of the debris, while constantly being tripped up by fuzzy boundaries and edge cases. The rationalist community has been trying this for fifteen years, and so far what we’ve got is some combination of “these math lectures describe what to do perfectly in theory, shame we disagree on how to apply them to the real world” and “prediction markets seem maybe good” and “turns out the people who obsess over this are often trustworthy on object-level questions”. Other people have been chipping away at the same question for longer and developed Arts of their own, but no one seems fully satisfied. In conclusion, if there is a comet headed towards Earth, you should probably take some kind of action to deflect it, even if a tech company CEO says not to worry. I believe [a metaphorical comet](https://www.metaculus.com/questions/4123/after-an-agi-is-created-how-many-months-will-it-be-before-the-first-superintelligence/) is headed towards Earth right now, and [a literal tech company CEO](https://www.cnbc.com/2017/07/24/mark-zuckerberg-elon-musks-doomsday-ai-predictions-are-irresponsible.html) is telling you not to worry, and he is wrong. Half of you will agree with me, half of you will say I’m wrong, and all the narratives and heuristics in the world won’t get us one step closer to consensus, let alone truth. *Don’t Look Up* does a good job conveying some of the emotions this induces, but doesn’t make enough sense to follow through on its promise.
Scott Alexander
46274448
Movie Review: Don't Look Up
acx
# Lewis Carroll Invented Retroactive Public Goods Funding In 1894 Retroactive public goods funding is one of those ideas that’s so great people can’t stop reinventing it. I know of at least five independent inventions under five different names: [“social impact bonds”](https://en.wikipedia.org/wiki/Social_impact_bond) by a New Zealand economist in 1988, [“certificates of impact”](https://forum.effectivealtruism.org/posts/yNn2o3kEhixZHkRga/certificates-of-impact) by Paul Christiano in 2014, [“retroactive public goods funding”](https://medium.com/ethereum-optimism/retroactive-public-goods-funding-33c9b7d00f0c) by Vitalik Buterin a few years ago, “EA loans” by a blogger who prefers to remain anonymous, and [“venture grants”](https://www.lesswrong.com/posts/NY9nfKQwejaghEExh/venture-granters-the-vcs-of-public-goods-incentivizing-good) by Mako Yass. These aren’t all *exactly* the same idea. Some are slightly better framed than others and probably I’m being terribly disrespectful to the better ones by saying they’re the same as the worse ones. But I think they all share a basic core: some structure that lets profit-seeking venture capitalist types invest in altruistic causes, in the hopes that altruists will pay them back later once they’ve been shown to work. Upon re-reading some old SSC comments, I found a gem I’d missed the first time around: [Julie K says](https://slatestarcodex.com/2020/06/17/slightly-skew-systems-of-government/#comment-916962) that the actual first person to invent this idea was Lewis Carroll (aka author of *Alice in Wonderland*) back in 1894. She quotes from his book *[Sylvie and Bruno](https://www.gutenberg.org/files/48795/48795-h/48795-h.htm):* > Mein Herr was again speaking in his ordinary voice. “Now tell me one thing more,” he said. “Am I right in thinking that in *your* Universities, though a man may reside some thirty or forty years, you examine him, once for all, at the end of the first three or four?” > > “That is so, undoubtedly,” I admitted. > > “Practically, then, you examine a man at the *beginning* of his career!” the old man said to himself rather than to me. “And what guarantee have you that he *retains* the knowledge for which you have rewarded him—beforehand, as *we* should say?” > > “None,” I admitted, feeling a little puzzled at the drift of his remarks. “How do *you* secure that object?” > > “By examining him at the *end* of his thirty or forty years—not at the beginning,” he gently replied. “On an average, the knowledge then found is about one-fifth of what it was at first—the process of forgetting going on at a very steady uniform rate—and he, who forgets *least*, gets *most* honour, and most rewards.” > > “Then you give him the money when he needs it no longer? And you make him live most of his life on *nothing*!” > > “Hardly that. He gives his orders to the tradesmen: they supply him, for forty, sometimes fifty, years, at their own risk: then he gets his Fellowship—which pays him in *one* year as much as *your* Fellowships pay in fifty—and then he can easily pay all his bills, with interest.” > > “But suppose he fails to get his Fellowship? That must occasionally happen.” > > “That occasionally happens.” It was Mein Herr’s turn, now, to make admissions. > > “And what becomes of the tradesmen?” > > “They calculate accordingly. When a man appears to be getting alarmingly ignorant, or stupid, they will sometimes refuse to supply him any longer. 
You have no idea with what enthusiasm a man will begin to rub up his forgotten sciences or languages, when his butcher has cut off the supply of beef and mutton!” > > “And who are the Examiners?” > > “The young men who have just come, brimming over with knowledge. You would think it a curious sight,” he went on, “to see mere boys examining such old men. I have known a man set to examine his own grandfather. It was a little painful for both of them, no doubt. The old gentleman was as bald as a coot——” This is retroactive public goods funding! The forgetfulness is a distraction - the university wants professors who will accomplish great things in forty years of service. So they promise to reward them at the end of the forty years if they pass a certain bar. Instead of trying to predict for themselves which professors will pass the bar, they defer to tradesmen - profit-oriented businesspeople - on the assumption that these people have better-aligned incentives and more skill at managing risk. The only step Carroll missed is the one where the tradesmen financialize their role and sell bonds based on the professors’ future winnings on the free market. Subtract a certain 19th-century eccentricity, and I think this is as close to any of the other reinventions of retroactive funding as they are to one another.
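To make the tradesmen's bet concrete, here's a minimal sketch of the expected-value calculation a profit-seeking funder in any of these schemes is implicitly running. All numbers are invented for illustration, not taken from Carroll or from any of the modern proposals:

```python
# A minimal sketch (all numbers invented) of the bet a profit-seeking
# funder takes in any retroactive funding scheme: front the costs now,
# collect a premium later only if the work is judged successful.

def investor_expected_profit(upfront_cost, retro_payout, p_success):
    """Expected profit from funding one project at the funder's own risk."""
    return p_success * retro_payout - upfront_cost

upfront_cost = 40_000   # what the tradesmen supply over the professor's career
retro_payout = 60_000   # the bills paid "with interest" if he gets his Fellowship

# The funder keeps extending credit only while success still looks likely -
# this is the step that delegates risk assessment to the market.
for p_success in (0.9, 0.7, 0.5):
    profit = investor_expected_profit(upfront_cost, retro_payout, p_success)
    print(f"P(success) = {p_success:.0%}: expected profit = {profit:+,.0f}")
# At 90% or 70% it's worth supplying beef and mutton; at 50% the butcher
# cuts the professor off - exactly Mein Herr's mechanism.
```

The whole design lives in that one function: the altruist only has to define the bar and commit to the payout, and the market does the forecasting.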
Scott Alexander
45744238
Lewis Carroll Invented Retroactive Public Goods Funding In 1894
acx
# Open Thread 205 This is the weekly visible open thread. Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed. This one is odd-numbered, so be careful. Otherwise, post about anything else you want. Also: **1:** On the Grants Results thread, Michael A [writes](https://astralcodexten.substack.com/p/acx-grants-results/comment/4218565): > Thanks for doing this and for this post! I'm one of the guest fund managers on one of the EA Funds (the EA Infrastructure Fund, specifically), and *I would really like many of these people to apply to EA Funds for a top up or a substantially larger grant right now (if they haven't already), and for many others to apply later on for "next phases" of these projects or for new projects*. <https://funds.effectivealtruism.org/apply-for-funding> > > This can pretty easily be worthwhile in expectation because: > > 1. It should take 0.5-2 hours to write an application (setting aside time actually planning the project) > > 2. The actual evaluation process is typically pretty quick too, for both the applicant and the grant evaluators > > 3. It's faster for things that don't end up funded (so it's relatively unlikely for lots of time to be spent without impactful-in-expectation work ending up funded) > > 4. EA Funds's impact is probably most bottlenecked by number of good applications received (more so than by fund manager time or money available) (I'm most confident of this for the Long-Term Future Fund and the Infrastructure Fund) > > Scott, did you or someone else already heavily emphasise roughly that message to the grantees? If not, could you do so? Let me know if there's any way I can help (I can be reached at michaeljamesaird AT gmail DOT com ) > > Here are two posts that might be helpful: > > List of EA funding opportunities: <https://forum.effectivealtruism.org/posts/DqwxrdyQxcMQ8P2rD/list-of-ea-funding-opportunities> > > Things I often tell people about applying to EA Funds: <https://forum.effectivealtruism.org/posts/4tsWDEXkhincu7HLb/things-i-often-tell-people-about-applying-to-ea-funds> > > Also, many readers of this comment should probably consider applying too. > > (Caveat: This is mostly a message I spam repeatedly in lots of EA-adjacent contexts, rather than something I'm saying because I think lots of these projects in particular sound extremely impactful and funding constrained. And there are many projects listed here that I don't feel very excited about from an impartially altruistic perspective, even if they sound cool from a vaguely-progress-studies perspective. That said, many do sound either likely to be great or \*likely enough\* to be great that submitting an application is worthwhile in expectation.) I can confirm that EA Funds are real people and that they’re great (and have lots of money), and that part of my motivation for running a grants program was to connect more people with them. **2:** Looking for a Chri…fine, sorry, looking for a Martin Luther King Day gift this year for the rationalist in your life? [Engines Of Cognition](https://amzn.to/31ivevW) is a Best Of Less Wrong 2019 book collection out now including essays by me, Zvi, Eliezer, and 30+ other writers. Yes, all the art is AI-generated; it seemed appropriate.
Scott Alexander
46504027
Open Thread 205
acx
# Links For December *[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]* **1:** [List Of Games That Buddha Would Not Play](https://en.wikipedia.org/wiki/List_of_games_that_Buddha_would_not_play). **2:** Claim via NPR: When Brazil had high inflation in the 1990s, some economists developed a plan: price everything in inflation-adjusted units, so that people felt like things were “stable”, then declare that the Inflation Adjusted Unit was the new currency. [How Fake Money Saved Brazil](https://www.npr.org/sections/money/2010/10/04/130329523/how-fake-money-saved-brazil). Also interesting: they tried it because the new finance minister knew no economics, recognized his ignorance, and was willing to call up random economists and listen to their hare-brained plans. **3:** In the 19th century, a group of Tibeto-Burman-speaking former headhunters along the India/Burma border [declared themselves the descendants of Manasseh](https://en.wikipedia.org/wiki/Bnei_Menashe) (one of the Ten Lost Tribes) and converted en masse to Judaism. In 2005, the Chief Rabbinate of Israel accepted their claim and expedited immigration paperwork for several thousand of them. **4:** John Wentworth on [How To Get Into Independent Research On AI Alignment](https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency). "I’m an independent researcher working on AI alignment and the theory of agency. I’m 29 years old, will make about $90k this year, and set my own research agenda. I deal with basically zero academic bullshit...best of all, I work on some really cool technical problems which I expect are central to the future of humanity. If your reaction to that is 'Where can I sign up?', then this post is for you." **5:** Related: [AI Safety Needs Great Engineers](https://forum.effectivealtruism.org/posts/DDDyTvuZxoKStm92M/ai-safety-needs-great-engineers). “If you could write a pull request for a major ML library, you should apply to one of the groups working on empirical AI safety: [Anthropic](https://jobs.lever.co/Anthropic), [Cohere](https://jobs.lever.co/cohere), [DeepMind Safety](https://deepmind.com/careers), [OpenAI Safety](https://openai.com/jobs/#alignment) and [Redwood Research](https://www.redwoodresearch.org/technical-staff).” **6:** Aella's [twitter polls on eugenics](https://twitter.com/Aella_Girl/status/1462824227090976772). EG: "A lesbian couple is looking for a sperm donor, and choose their donor based on how healthy, smart, and happy the donor seems to be. Is this: **A)** not eugenics / **B)** eugenics I don't support / **C)** eugenics I support?" **7:** Related: I’ve [previously written](https://slatestarcodex.com/2016/05/04/myers-race-car-versus-the-general-fitness-factor/) about why selecting for intelligence doesn’t necessarily mean that animals will get worse on other traits. But [here’s a story about](https://www.nationalgeographic.com/science/article/scientists-breed-smarter-fish-but-reveal-the-costs-of-big-brains) someone selecting guppies for intelligence (successfully) and finding that they had smaller guts and lower fertility. 
I still think this isn’t *necessarily* true, but it looks like, if you just breed kind of randomly, some of the lowest-hanging fruits will be tradeoff genes. **8:** Philippe Lemoine: [Have we been thinking about the pandemic wrong? The effect of population structure on transmission](https://cspicenter.org/blog/waronscience/have-we-been-thinking-about-the-pandemic-wrong-the-effect-of-population-structure-on-transmission/) **9:** Best of Twitter, 2021 edition: (see also [this comment](https://twitter.com/why_wolf/status/1463369338912604164)) **10:** WSJ [article on the early days of Amazon](https://www.wsj.com/articles/SB10001424052970203914304576627102996831200). Great source of funny stories, eg: > Among the early mistakes, according to Mr. Bezos: ‘We found that customers could order a negative quantity of books! And we would credit their credit card with the price and, I assume, wait around for them to ship the books.’ Also: > One of his more controversial early decisions was to allow customers to post their own book reviews on the site, whether they were positive or negative. Competitors couldn't understand why a bookseller would allow such a thing. Within a few weeks, Mr. Bezos said, "I started receiving letters from well-meaning folks saying that perhaps you don't understand your business. You make money when you sell things. Why are you allowing negative reviews on your Web site? But our point of view is [that] we will sell more if we help people make purchasing decisions." **11:** [Why COVID variants skipped from Mu to Omicron](https://www.wpri.com/health/coronavirus/why-the-who-skipped-nu-xi-for-new-covid-variant/): “In a statement, the WHO said it skipped Nu for clarity and Xi to avoid causing offense generally.” Rolling my eyes at “offense generally” and the idea of deliberately averting nominative determinism. **12:** In 1799, British-American fugitive William Bowles fled to Florida, moved in with the local Indians, became their chief, led a series of raids on the US, and declared independence as the [State Of Muskogee](https://en.wikipedia.org/wiki/State_of_Muskogee) (he was defeated by a US/Spanish alliance in 1803). **13:** [Claim of the first successful deepfakes based hacking.](https://news.ycombinator.com/item?id=29364427) Looking through comments elsewhere, I think [this claim falls apart](https://www.reddit.com/r/slatestarcodex/comments/r3ruom/an_instagrammers_account_was_hacked_at_6pm_and_by/hmd2tg7/), which means that AFAICT after several years of the technology existing I still know of no instance of any deepfakes actually fooling anyone and causing damage. **14:** An [attempt to replicate](https://www.pnas.org/content/118/44/e2103313118) various “poverty causes cognitive problems” studies goes…well, about the way replication attempts usually go. I was always suspicious of these; people got too excited about this field for political reasons. Related: **15:** [This series of tweets](https://twitter.com/anonsognosic/status/1464427183494115329) makes an interesting case study on science communication. An anti-incarceration group reviews the evidence on recidivism, which they summarize as "our report shows that people convicted of homicide are extremely unlikely to commit another violent crime after release". But someone reads the report, finds it says there’s a 22% chance they do, and calls them out for lying. 
I would have been willing to let this pass if they had just said “unlikely” - somebody might honestly think 22% is unlikely compared to some hypothetical belief that it’s near-certain. At “extremely unlikely”, yeah, I agree they’re pushing it. **16:** Related: [eigenrobot vs. bad critiques of predictive policing.](https://twitter.com/eigenrobot/status/1466576356200779779) **17:** [“Seeking hope during the pandemic, some [Catholics] turn to little-known St. Corona.”](https://www.startribune.com/st-corona-has-become-the-go-to-saint-for-virus-protection/569170372/) **18:** Does “Moore’s law of genome sequencing” still hold? If not, who should we blame? Here’s a [Twitter discussion](https://twitter.com/erlichya/status/1439957363788853249). **19:** Lots of people supported me when NYT doxxed me. I feel like I should pay this forward by signal-boosting when other people are going through the same thing. So: the news magazine Toronto Life [doxxed some people](https://www.canadaland.com/reporting-on-6ixbuzz/) running a local Instagram account who preferred to remain anonymous. I think this is bad. In the extraordinarily unlikely event that I ever care about anything in Toronto, I will try to find and link sources other than Toronto Life. **20:** Markus Strasser on why projects along the lines of “use AI to extract insights from journal articles” [are doomed](https://markusstrasser.org/extracting-knowledge-from-literature/). I read this the week I was considering lots of ACX Grants applications about these, so if I didn’t fund your brilliant AI journal extraction idea, blame Markus. **21:** Noahpinion on [new technologies to be excited about for the coming decade](https://noahpinion.substack.com/p/techno-optimism-for-2022). I’m split on this, because I agree that many things look promising. But also, if all the promising things pan out, there will be many more new exciting non-information technologies in the 2020s than the 2010s or 2000s. That suggests that maybe we’re being too optimistic and most of them won’t pan out, *unless* we have some reason to think advances will start coming faster now than in the past generation. Theories I’ve heard along those lines include: we’ve spent the past few decades “paying off” the “debt” incurred by our old technologies being environmentally unfriendly, and now that we’ve solved environmentalism (wait, what?) we can start advancing again. *Or*, maybe we got really excited picking the low-hanging fruits in information technology these past few decades, and now that we’ve saturated that space (wait, what?) we can move back to the physical world again. *Or* maybe Silicon Valley has been building a new tech ecosystem separate from the old dinosaur one, and now that it’s fully mature (wait, what?) it can start working on big physical-world projects. **22:** Sorry for getting too optimistic there, we now return to our regular doomerism: **23:** EA Forum: [Movement building at top universities](https://forum.effectivealtruism.org/posts/FjDpyJNnzK8teSu4J/a-huge-opportunity-for-impact-movement-building-at-top-2) **24:** [Ask Hacker News: Are most of us developers lying about how much work we do?](https://news.ycombinator.com/item?id=29581125) “I have been working as a software developer for almost two decades. I have received multiple promotions. I make decent money, 3x - 4x my area's median salary, so I live a comfortable life. 
I have never been fired or unemployed for more than a few months total over my entire career. Through most of that time I have averaged roughly 5 - 10 hours of actual work a week…Are most of us secretly lying about how much we are working? Have I just been incredibly lucky and every boss I have had is too incompetent to notice?” **25:** Another study suggesting [microdosing doesn’t really work](https://journals.sagepub.com/doi/full/10.1177/02698811211050556). **26:** Mormon and Utah readers, do you know what’s going on here? Please only give answers that explain why this has happened *in the past 10-15 years specifically,* not vague “rise of secularism” or whatever. **27:** Vitalik: [the bulldozer vs. vetocracy political axis](https://vitalik.eth.limo/general/2021/12/19/bullveto.html). This is a really good crystallization of a line of thinking that’s been vaguely floating around the political/economic blogosphere recently. **28:** Divia Eden [has been cataloguing](https://twitter.com/diviacaroline/status/1247581168922288129) inappropriate uses of the phrase “no evidence” since April 2020. **29:** Matt Levine wrote some good stuff (which I can’t link directly) arguing that although lots of crypto projects are Ponzi schemes, that might be good for certain applications. The usual problem with social media is that nobody wants to join new things: it’s pointless to be the fifth user of a new social media site that doesn’t have anyone else you want to talk to, and much easier to just stay on Facebook where your friends are. Ponzi schemes have the exact opposite property: you always want to be one of the first few users, and it’s useless getting in on the same one everyone else is. So social media sites that are also sort of Ponzi schemes might align incentives better than either of those things alone, and that’s what a lot of new crypto apps are. More at [Dror Poleg: In Praise Of Ponzis](https://www.drorpoleg.com/in-praise-of-ponzis/). **30:** Glad to see the “we should try to stop global warming for altruistic reasons, but it’s not going to destroy humanity or kill your family” perspective picking up more traction: **31:** The Vitamin D / COVID debate continues, with [a recent meta-analysis finding no effect](https://onlinelibrary.wiley.com/doi/10.1002/dmrr.3517) but [a Phase II trial of a patented formulation seeming to be successful](https://www.opko.com/news-media/press-releases/detail/455/opko-health-announces-topline-results-from-phase-2-trial). I have to admit I’ve kind of clocked out at this point and have no strong opinion on recent developments. **32:** [Nate Silver](https://twitter.com/NateSilver538/status/1473812600030994440), [Tyler Cowen](https://marginalrevolution.com/marginalrevolution/2021/12/the-real-conspiracy-theory.html), and [Garrett Jones](https://twitter.com/GarettJones/status/1473080595525914631) come out in favor of “the public health establishment deliberately delayed the COVID vaccine by a month so it wouldn’t make Trump look good before Election Day”. I haven't checked if it’s plausible that public health officials had political motives, but the fact is that they made a deliberate decision to make the process take an extra month, and that some four-to-five-digit number of people died because of this decision. Even if we conclude they made this decision for less sinister reasons (like being over-cautious), it deserves to be scrutinized with the same rigor as other decisions that have killed this many people, like the decision to ignore intelligence warnings about 9-11. 
**33:** [Ten Minutes With Sam Altman](https://www.lesswrong.com/posts/LZn9asbnJHAJsGPA6/ten-minutes-with-sam-altman). A weird cute vignette by a would-be entrepreneur about his Y Combinator interview. I always like unusual experiences related by good writers, although this is weirdly short and leaves me wanting more. **34:** Medieval Asian [incense clocks](https://kontextmaschine.tumblr.com/post/671004639706169344/argumate-femmenietzsche-i-was-listening-to):
Scott Alexander
46351268
Links For December
acx
# ACX Grants Results Thanks to everyone who participated in ACX Grants, whether as an applicant, an evaluator, or a funder. Before I announce awardees, a caveat: this was hard in lots of ways I didn't expect. I got 656 applications addressing different problems and requiring different skills to judge. I'll write a long post on it later, but the part I want to emphasize now is: if I didn't grant you money, it doesn't mean I didn't like your project. Sometimes it meant I couldn't find someone qualified to evaluate it. Other times a reviewer was concerned that if you were successful, your work might be used by terrorists / dictators / AI capabilities researchers / Republicans and cause damage in ways you couldn't foresee. Other times it meant it was a better match for some other grant organization and I handed it off to them. Still other times, my grant reviewers tied themselves up in knots with 4D chess logic like "if they're smart enough to attempt this project, they're smart enough to know about XYZ Grants which is better suited for them, which means they're mostly banking on XYZ funding and using you as a backup, but if XYZ doesn't fund these people then that's strong evidence that they shouldn't be funded, so even though everything about them looks amazing, please reject them." I have no idea if things really work this way, but I needed some experienced grant reviewers on board and they were all like this. I took these considerations seriously and in some marginal cases they prevented funding. My point is, (almost) all of you are great. But only some of you are great and also going to get money, and your names are below. I’m still getting slight updates on the amount of funding available. Some of you may notice you’re getting more money than I told you in the private email I sent you, because a few funders increased their contributions last-minute. There is a very small chance that some people may decrease their contributions last minute, in which case I may have to decrease some of these numbers again. If that happens I will try to make it up to you however I can. I estimate the chance of this as less than 5%, so I’m not waiting on this to settle before announcing results. Without further ado: ### ACX Grants Awardees **Pedro Silva, $60,000,** to use in silico reverse screening and molecular dynamics simulations to discover the targets of seven promising natural antibiotics and to try to develop wider-spectrum derivatives. Antibiotic resistant infections kill a 5-6 digit number of people each year, and this is the kind of basic research that could lead to new drugs somewhere down the line. **Troy Davis, $10,000,** to help fund his campaign for approval voting in Seattle. Approval voting is one of the approximately 100% of voting systems better than the one we currently use, with the potential to defuse partisanship and let people support outsider candidates without "wasting their vote". Campaigns to switch to alternative voting systems have recently succeeded in several US cities, most notably St. Louis, and Troy thinks Seattle's time has come. You can read more about his efforts at [Seattle Approves](https://seattleapproves.org/) or see the discussion [here](https://news.ycombinator.com/item?id=29266519). He wants your help getting this on the November 2022 ballot, especially from Washington State residents ([email](mailto:info@seattleapproves.org), [donation link](http://seattleapproves.org/donate)) **Michael Sklar, $100,000,** to automate part of the FDA approval process. 
Statisticians spend a lot of time designing faster and more efficient studies, but drug companies who want to use one of these creative study designs need the FDA's permission. Right now that's hard because FDA statisticians need to analyze it manually which takes a long time. Sklar is a statistics postdoc at Stanford working on mathematical techniques to model study design. He would like to create programs that FDA statisticians can use to quickly understand how a study works and have an opinion on it. He's given talks to the FDA and they seem interested. If he can make the program and the FDA can adopt it, that might make drug companies feel more secure proposing novel trial designs and make the approval process faster and easier. Sklar is also seeking a programmer with experience in cloud computing; if interested, please email [sklarm@stanford.edu](mailto:sklarm@stanford.edu) to receive further details on the project and compensation. He also has room for more funding. **[Alice Evans](https://www.draliceevans.com/), $60,000,** for sabbatical and travel to fund her research and associated book on "the Great Gender Divergence", ie why some countries developed gender equality norms while others didn't. A large body of research shows that gender equality, aside from its moral benefits, is also deeply important for economic development. Dr. Evans is an expert on the interaction of gender, history, and economics, whose work has been cited on BBC, Al Jazeera, and Sky News. She blogs [here](https://www.draliceevans.com/blog) and podcasts [here](https://www.draliceevans.com/). **[Trevor Klee](https://trevorklee.com/sample-page/), $20,000,** to help with pharmacokinetic modeling of a possible treatment for neurodegenerative and autoimmune diseases in advance of phase 1 trials. You may have already read some of Trevor's excellent [essays on pharmacology](https://trevorklee.com/essays/), and I look forward to reading more about his successes and failures leading his new pharmaceutical startup. He's looking for a technical cofounder/CSO who's interested in drug repurposing, neurodegeneration, or autoimmune diseases. If that sounds like you or someone you know, please reach out through <https://highwaypharm.com/> **[Yoram Bauman](https://standupeconomist.com/), $50,000,** to help fund his campaign for economically literate climate change solutions. Bauman was the sponsor of the 2016 Washington carbon tax ballot initiative, which failed by a small margin. Now he's built up a coalition of economists, environmentalists, and friendly politicians to try to get climate measures passed or on the ballot in seven states by 2024. Bauman is the world's only “[stand-up economist](https://standupeconomist.com/)”, and also [on track](http://standupeconomist.com/2021-update-on-my-global-warming-traffic-light-bet-with-bryan-caplan-and-alex-tabarrok/) to be the world's only person to win a bet with [Bryan Caplan](https://staging.econlib.net/bryans-20-20-vision/). You can follow or donate to the effort he’s part of in Utah at [CleanTheDarnAir.org](https://www.cleanthedarnair.org/), connect via email or twitter to chat about Nebraska, South Dakota, Arizona, Michigan, or your favorite state ([yoram@standupeconomist.com](mailto:yoram@standupeconomist.com), [@standupecon](https://twitter.com/standupecon)), or sign up for overall updates and see comedy videos at <https://standupeconomist.com/videos/>. 
**Nuño Sempere, $10,000,** to fund his continued work on <https://metaforecast.org/> and the [@metaforecast](https://twitter.com/metaforecast) bot. The website aims to be an easy way to search for predictions on a given topic; the bot aims to predict, resolve, and tally predictions and bets made by other people. People actually in the forecasting space (unlike me, who is just a poseur) who I talked to described really appreciating Nuño's work, and thought this was a valuable extension to the Internet's general forecasting infrastructure. Nuño is also a researcher at the [Quantified Uncertainty Research Institute](https://quantifieduncertainty.org/) and the author of a monthly [forecasting/prediction markets newsletter](https://forecasting.substack.com/). **D, $5,000,** to help interview for CS professor positions. D is a PhD student at a top university, with interests in EA and x-risk. He's ready to go on the professorship interview circuit, and thinks he could do a better job if he had some money to help with travel expenses and lost income beyond what schools already cover. If he gets it, he thinks there's a decent chance he could end up teaching CS at a top college. Everyone with experience in movement-building says that getting your members into top positions at top colleges [is important](https://forum.effectivealtruism.org/posts/pbsphyaY2u8MYKyad/what-the-ea-community-can-learn-from-the-rise-of-the), and this is a surprisingly cheap opportunity to make that happen. **[Delia Grace](https://www.ilri.org/people/delia-grace), $30,000,** to begin work aimed at bringing mobile slaughterhouses to Uganda. Ugandan farms are being devastated by African Swine Fever, and farmers are currently incentivized to sell their sick pigs to people who don't know they're sick, spreading the disease around the country. A system of dedicated mobile slaughterhouses could change the incentives and help arrest the spread of disease. Delia is a veterinarian, epidemiologist, and senior scientist at the International Livestock Research Institute in Kenya. **Nell Watson, $1,000,** to work on a hazard symbol for endocrine disruptors. Endocrine disruptors are chemicals found in plastics and other artificial products that mimic natural hormones and probably contribute to obesity and other health issues. Eleanor says she is less interested in money than in spreading the word, so I am giving her a token grant and a link to her website <https://www.endohazard.org/>. **[The Oxfendazole Development Group](https://oxfendazoledevelopmentgroup.org/), $150,000,** to develop oxfendazole. This is a next-generation antiparasitic drug which may one day replace albendazole and mebendazole, the current choices for deworming. Several hundred million children worldwide suffer from parasitic worm infections; this certainly affects their health, and a growing body of research suggests it might affect their cognitive ability, educational attainment, and future income. GiveWell [endorses deworming](https://www.givewell.org/aggregator/sources/7) as one of the most effective charitable interventions; the successful development of new antiparasitics would further this effort. Oxfendazole has done well in early studies and this group wants to follow them up in the hopes of eventually getting FDA approval. To learn more or send a donation, see [this site](https://oxfendazoledevelopmentgroup.org/assist-us-2/our-needs/). **NA, $90,000,** to buy a year of his time. 
NA is an experienced Australian political operative "on a first name basis with multiple federal politicians". You might remember some of his [comments and stories](https://www.reddit.com/r/slatestarcodex/comments/pbgeqo/if_youre_so_smart_why_arent_you_governor_of/hadqka9/) from the ACX comment section, where he goes by AshLael. He's interested in using his expertise to promote effective altruism, either by lobbying directly or by training EAs in how to produce political change. I have no idea what to do with him right now but I am going to figure it out and then do it. If you're in EA and have a good idea how to use this opportunity, please let me know. **[The Segura Lab](https://seguralab.duke.edu/) at Duke, $50,000,** to continue work on materials that promote healthy tissue regrowth after stroke. They say their experiments are difficult to fund because regrowing dead brain tissue is a long shot that requires a lot of out-of-the-box thinking and is hard to explain. If you want to learn more about their work, check out <http://seguralab.duke.edu>. If you’re a stroke survivor and want to share your story, they’d like you to check out their [Patient Connection page](https://seguralab.duke.edu/why/faq/#patient-connection). They’re also looking for help spreading their ideas. If you have knowledge of both science and writing/visual communication, apply to work with them [here](https://seguralab.duke.edu/join-us/); if you want to donate, you can do so [here](https://gofund.me/1cae6ce2). **[1DaySooner](https://www.1daysooner.org/) and [Rethink Priorities](https://rethinkpriorities.org/), $17,500,** to research public attitudes around human challenge trials. Human challenge trials are studies where scientists deliberately try to infect volunteers with a disease to see if a treatment can prevent or cure it. They're much faster than waiting for people to get the disease naturally, and could have significantly shortened the wait for coronavirus vaccines. But they're controversial and nobody was able to get approval to do a challenge trial for COVID until 2021, which is why we had to wait so long for good treatment. Preliminary research suggests lots of people support these trials; I think building common knowledge of this is a first step towards making them available during future pandemics. Rethink Priorities is a respected effective altruist research organization. 1Day Sooner is a group lobbying for challenge trials. They’re currently seeking $10 million to use challenge studies to develop a universal coronavirus vaccine. Email [josh@1daysooner.org](mailto:josh@1daysooner.org) if you can help. **Spencer Greenberg, $40,000,** as seed money for his project to produce rapid replications of high-impact social science papers. Right now, when a new social science paper comes out, we often have to wait as long as several months to discover that it was false. Spencer and his team dream of a world where we can learn that almost immediately, soon enough that it's within the same news cycle and the journals involved feel kind of bad about it. This money will sponsor a pilot, after which he’ll be seeking additional funding - if you think you can help, you can reach him [here](https://www.spencergreenberg.com/contact-spencer/). 
Spencer's been involved in rationality and EA about as long as either has existed, blogs at [Optimize Everything](https://www.spencergreenberg.com/), is the founder of [ClearerThinking.org](https://www.clearerthinking.org/) (which offers free digital tools related to rationality, decision-making and happiness) and runs the [Clearer Thinking podcast](https://clearerthinkingpodcast.com/), with guests including [Daniel Kahneman](https://clearerthinkingpodcast.com/episode/072), [Tyler Cowen](https://clearerthinkingpodcast.com/episode/084), and [Sam Bankman-Fried](https://clearerthinkingpodcast.com/episode/038). **Nils Kraus, $40,000,** to experiment with new ways of measuring precision weighting in humans. The precision-weighting of mental predictions is one of the absolute basics of the predictive coding model of the mind, but we know very little about it and have trouble testing hypotheses about how it works. Nils wants to compare and refine some of the leading candidate ideas and hopefully put this whole field on firmer ground. He is currently finishing up his PhD at Psychologische Hochschule Berlin and Freie Universität Berlin. **Alfonso Escudero, $75,000,** to create a platform for scientific collaborations. Alfonso and his team already made [something like this](https://crowdfightcovid19.org/) for COVID research, which got 40,000 scientists to sign up, matched collaborator requests to experts willing to help, and resulted in [some useful papers](https://crowdfight.org/papers-where-crowdfight-helped/). Now they want to expand this model to other types of science. My father has been stalled on an important research project for years for lack of the right kind of statistician; Crowdfight (or whatever the final name turns out to be) aims to take requests like this and process them within 72 hours. I regret only being able to fund this at the minimum level, but I'm pretty sure that once they're up and running they'll be able to prove their value to richer people's satisfaction. You can also contribute by [donating](https://fundrazr.com/campaigns/31kY7b/pay), by [joining their community](https://crowdfight.org/join/) (if you want to be matched with scientists who might need your expertise) or, if you’re a professional scientist, by [using their service to find a collaborator](https://crowdfight.org/request/) (it's free). **D, $10,000,** to support him taking some time between his master's and PhD to re-orient, learn some new skills, and maybe end up choosing a better topic to do his thesis on. D studies the evolution of aging, and is interested in things like why seemingly-similar species of rockfish have lifespans ranging "from a decade to a couple centuries". He thinks this extra time would help direct him into higher-value areas of his field. **Nikos Bosse, $5,000,** to seed a wiki about forecasting. Articles would include technical topics like scoring rules, interviews with superforecasters, and links to existing prediction markets and forecasting platforms. Think the Investopedia or Bogleheads of investing in prediction markets. This is another leg of my "improve forecasting infrastructure" goal area. Nikos is a PhD student working on infectious disease forecasting. If you think you can help with the wiki, email him at [nikosbosse@gmail.com](mailto:nikosbosse@gmail.com). **L, $17,000,** to breed a line of beetles that can digest plastic. Darkling beetles (and their associated gut microbes) can already do this a little. 
Maybe if someone selectively bred them for this ability, they could do it better. Plastic is generally considered bad for the environment because it's "not biodegradable", but maybe everything is biodegradable if you have sufficiently advanced beetles. This project will find out! **[Morgan Rivers](https://morganrivers.com/), $30,000,** to help [ALLFED](https://allfed.info/) improve modeling of food security during global catastrophes. ALLFED studies the effects of major disasters - nuclear wars, pandemics, economic collapses - on the food supply. If the disaster blotted out the sun or paralyzed the technological-economic infrastructure underpinning food production and delivery, millions more could die of starvation. ALLFED tries to develop solutions, from high-tech stuff like "produc[ing] high quality protein from natural gas and sugar from forest biomass" to low-tech stuff like relocating crop farming and eating more seaweed. Their current project is to update the National Disaster Preparedness Baseline Assessment program, which is "used widely to assess and prioritize responses to disasters globally", to better model food shocks - raising awareness and making it easier for large organizations to think about them. ALLFED is also looking for more funding for many other projects. **[Jimmy Koppel](https://www.jameskoppel.com/), $40,000,** to support his work on intelligent tutoring systems. We know 1-on-1 tutoring is the best way to learn, but human tutoring doesn't scale to the number of students who need it. Computer tutoring systems can ask questions, identify areas where people need to improve, and notice/respond to specific error patterns. I was originally skeptical about this but reading things like [this essay](https://www.lesswrong.com/posts/vbWBJGWyWyKyoxLBe/darpa-digital-tutor-four-months-to-total-technical-expertise) has gotten me excited. Pure AI tutoring is hard because "it takes 300 hours to develop 1 hour of intelligent tutoring system curriculum", so Jimmy is working on a hybrid model where computers do lots of the work but there's still a human in the loop. Jimmy has a PhD in computer science from MIT and currently runs [a company](https://jameskoppelcoaching.com/) doing advanced training for professional software engineers. **Allison Berke, $100,000,** for biosecurity work at Stanford. Biosecurity is the study of protecting against pandemics, bioweapons, and other biological threats. Despite the growing importance of this field, there are relatively few technical biosecurity centers in the US, and the West Coast is underrepresented. This causes serious problems like poor pandemic readiness, limited understanding of biowarfare risks, and the biosecurity grad student who I'm dating living 3,000 miles away from me. A group of Stanford professors wants to solve at least the first two problems by gradually building a new biosecurity hub there. This grant would help fund a few grad students in the hopes that bigger funders would follow. If you're interested in research on the technological aspects of biosecurity, such as new models of pathogen sensing or encrypted sharing of genetic sequences, please email [aberke@stanford.edu](mailto:aberke@stanford.edu). **Jeffrey Hsu, $50,000,** to support his startup [Ivy Natal](https://www.ivynatal.com/). Ivy Natal works on in vitro gametogenesis, the process of turning ordinary cells into gametes like egg cells. 
This would solve a lot of infertility problems, remove the need for difficult egg freezing cycles, and allow same-sex couples to have biological children; it would also allow some more exciting forms of embryo screening. Jeffrey has a PhD in molecular medicine and did his postdoctoral research at the Cleveland Clinic; Ivy Natal has raised initial capital from Indie Bio and is advised by George Church. **[Legal Impact For Chickens](https://www.legalimpactforchickens.org/), $72,000,** to help kickstart their project of suing factory farms that violate animal cruelty laws or otherwise expose themselves to legal action. They write: "If we sue a company that kills 100 million chickens a year, then success would mean incrementally improving the lives of a significant number (perhaps 80 million) of these chickens". Alene, their founder, graduated from Harvard Law School and is a veteran of animal welfare campaigns at PETA, ALDF, and the Good Food Institute. My review team said this was an unusually high-impact animal welfare opportunity; if you’d like to donate too, you can do so at <https://www.legalimpactforchickens.org/donate>. **M, $100,000,** for a project involving CRISPR "spellchecking" of tissues. The team behind this prefer not to have all the details public, but they're very smart people with a really neat idea and hopefully I'll be able to release more information at some point. **Alex Hoekstra, $100,000,** for the [Rapid Deployment Vaccine Collaborative](https://radvac.org/) (RaDVaC) to make open-source modular affordable vaccines. They've made a coronavirus vaccine which about fifty people (mostly scientists and biohackers) have self-administered, though there's no hard data on whether or not it works. They don't have regulatory agency approval for anything and probably won't get it, and they cannot sell their vaccine - the only way to get it is to manufacture it in your lab (or [home lab](https://www.bloomberg.com/news/articles/2021-04-23/a-scientist-stopped-by-and-made-a-covid-vaccine-in-my-kitchen)) from the blueprints they make available. So what's the pitch for them being useful? First, global inaccessibility of vaccines has been a problem in past and present pandemics and will probably continue; RaDVaC thinks their open source model might “drive up vaccine access, diversity, and security in the future”. Second, if there's ever a pandemic much worse than COVID - super-Ebola or whatever - I'm not waiting nine months for the FDA to have the right number of meetings, and neither is anyone else, and I think we’ll all be grateful if we previously built the capacity to have a vaccine production group that moves fast and breaks things. Third, I think it's possible that their comparative freedom lets them come up with something genuinely better than Big Pharma, at which point hopefully it will encourage or embarrass Big Pharma into stealing it (did you know RaDVaC offers nasal spray coronavirus vaccines?) Fourth, I think it has positive...let's say "moral"...effects for people to know that ordinary people can do the same things big corporations do, and that it's possible (and sometimes even legal) to innovate without getting anyone's permission first. 
RaDVaC still needs more funding (go [here](http://radvac.org/support) to donate) and is looking for collaborators with experience in open-source development (RaDVaC wants to build infrastructure for decentralized vaccine R&D, including: construction of standards for sourcing, production, & testing; data-sharing platforms; and other online & accessible scientific tools). Reach out to them [here](http://radvac.org/contact). You can read more about RaDVaC's work [here](http://radvac.org), [here](https://whyy.org/segments/warp-speed-is-too-slow-for-scientists-testing-covid-19-vaccine-on-themselves/), [here](https://www.vice.com/en/article/k7qpky/scientists-just-released-a-diy-coronavirus-vaccine-under-a-creative-commons-license), [here](https://www.thecrimson.com/article/2020/10/8/radvac-scrut/), and [here](https://coloradosun.com/2020/08/09/opinion-open-source-vaccine-dana-egleston/), and find their YouTube channel [here](https://www.youtube.com/channel/UCYZeqhoSbe5cD1aJgtfX3-Q). **Beny Falkovich, $25,000,** to fund his work on a platform for screening compounds to find potential new psychiatric drugs. Despite this space being littered with the skulls of the people who tried it before him, he thinks that new imaging technology he is helping develop can make it possible. Beny is a third-year PhD student in the Bathe lab at MIT; he comments on ACX as "Chebky", and he's the brother of Jacob of [Putanumonit](https://putanumonit.com/). **[Siddhartha Roy](https://www.siddhartharoy.org/), $25,000,** for citizen surveillance of pathogens in drinking water. Some pathogens, notably legionella, grow in water pipes. There's not a lot of scientific or legal structure for monitoring them, and this team wants to solve this by sending kits to volunteer citizens who will use them to test their tap water. This is useful for avoiding legionella outbreaks, but my reviewers were most impressed by its ability to scale to other things and raise citizen awareness of pathogen detection. Dr. Roy is a Virginia Tech research scientist who helped uncover the Flint water crisis. **Nathan Young, $5,000,** to fund his continued work writing Metaculus questions and trying to build bridges between the forecasting and effective altruist communities. Nathan is a Metaculus moderator, the author of a prediction market blog I've used as a source before, and has useful connections with people who might be convinced to use formal forecasting methods for their organizations. This grant is a vote of confidence in him to continue this work, and another part of my effort to fund more forecasting infrastructure. You can read his newsletter, the UK Policy Forecast, [here](https://policyforecast.substack.com/). If you have suggestions for forecasting questions, he asks that you [DM him on twitter](https://twitter.com/NathanpmYoung) or add them to [this open Google doc](https://docs.google.com/document/d/1GVYHQsDTzYqt4o-vX_GJle6hQ70yKADoofmw5Ii-eiY/edit#heading=h.aildjo7opkyd). **Will Jarvis and Lars Doucet, $55,000,** to create an automated land value assessment model for two Pennsylvania counties. You all know Lars as the guy who keeps writing [guest posts here about Georgism](https://astralcodexten.substack.com/p/your-book-review-progress-and-poverty). Now he wants to take it to the next level and start building tools for the Georgist future. This program would act as proof of concept that counties can assess land value relatively easily and accurately. 
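For a sense of what "assessing land value" can look like computationally, here's a minimal sketch of one standard mass-appraisal technique, the land residual method - my illustration with invented numbers, not necessarily the model Will and Lars are building:

```python
# A minimal sketch of the "land residual" approach to mass appraisal -
# my illustration with invented numbers, not necessarily their model.
# Idea: land value is what's left of a sale price after pricing the
# structure, and nearby parcels should roughly agree on the residual.

from statistics import median

def residual_land_value(sale_price, building_sqft, cost_per_sqft, depreciation):
    """Sale price minus the depreciated replacement cost of the building."""
    return sale_price - building_sqft * cost_per_sqft * (1 - depreciation)

# Hypothetical recent sales in one neighborhood:
# (sale price, building square feet, age-based depreciation)
sales = [(300_000, 1_500, 0.20), (420_000, 2_400, 0.10), (250_000, 1_100, 0.35)]
COST_PER_SQFT = 120  # assumed local replacement cost

estimates = [residual_land_value(p, sqft, COST_PER_SQFT, dep) for p, sqft, dep in sales]
print(estimates)          # [156000.0, 160800.0, 164200.0] - per-parcel residuals
print(median(estimates))  # 160800.0 - a robust neighborhood-level land value
```

The real version would need many more sales, spatial smoothing, and better cost data, but the point is that once the pipeline exists, a county can run it cheaply at scale instead of hand-appraising every parcel.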
I was on the fence about funding it because they can create a beautiful program with 100% success and then counties can just continue to not be Georgist for the same reasons as usual. I'm going ahead with it because I trust Lars who believes this is the best way forward, and because it seems like the sort of thing that could eventually grow into a Georgist think tank at some point in the future. They’re interested in talking to anyone who has experience in mass appraisal, Georgist or not, as well as applied data scientists and machine learning researchers. Fill out [this form here](https://forms.gle/zwZqQ8JbHbJRdnDg7) if that’s you. You can follow their progress at <https://gameofrent.com/>. **Michael Todhunter, $40,000,** to continue work on automating the testing of cell culture media. Several of my biologist reviewers gave assessments like "I'm not sure anyone will use this, except for me personally I WOULD LOVE THIS SO MUCH". Michael himself describes this project as "unsexy", but annoying cell-culture-media trial-and-error is part of a big fraction of biology experiments, and anything that makes it go faster is a big force multiplier for a lot of other things. Michael's postdoc is ending and he needs funding to continue this work; mine will last him a few months, but he says he has room for lots more. If you'd like to learn more about this project and/or discuss funding, please contact [mtsowbug@gmail.com](mailto:mtsowbug@gmail.com); there will also be a website up at <https://www.todhunter.dev/> in a few days. **SD, $5,000,** to fund an honors thesis on neutrino research. S is an undergraduate who wants to work on neutrino physics with one of his professors, but needs outside funding to be sure it will work. He thinks that if he can complete this thesis, he's more likely to be able to get into a neutrino physics grad school program and continue this career. He's interested in the applications of neutrinos for nuclear disarmament; illegal fuel enrichment produces neutrinos which could theoretically be detected from thousands of miles away, reducing the need for dictators to, eg, let in UN inspectors. I think the potential value of adding one more person to this field is pretty high and this seems like a cheap way to do it. **James Grugett, Stephen Grugett and Austin Chen, $20,000,** for a new prediction market. If every existing prediction market is Lawful Good, this team proposes the Chaotic Evil version: anyone can submit a question, questions can be arbitrarily subjective, and the resolution is decided by the submitter, no appeal allowed. And the submitter/decider gets a small cut (1%?) of the money traded on the question. I honestly have no idea how this would play out. Certainly it would incentivize lots of people to write lots of great questions and promote them widely. It *sort of* incentivizes a strategy of always deciding fairly so you get a good reputation and more people use your questions - but also sort of a strategy of doing that for a while to build up credibility before betraying people, making false rulings, and stealing all their crypto (of course it's crypto). The part I'm most fascinated by is the idea of not-necessarily-super-objective resolution criteria - we could have markets in things like "Will the Democrats' agenda succeed [according to Scott]?" They think a clear use case is minor Internet celebrities using their brand to make and shill markets related to their interests, since these people at least have some reputational reasons not to take the money and run. 
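To make that reputation-versus-betrayal tradeoff concrete, here's a toy model - mine, not theirs, with invented numbers - of the market creator's incentives:

```python
# A toy model (mine, not theirs; all numbers invented) of the market
# creator's incentives: resolve honestly forever and collect the ~1% fee,
# or build a reputation for a while and then steal one big pot.

def honest_career(volume_per_market, fee, markets_per_year, years):
    """Total fee income from always resolving fairly."""
    return volume_per_market * fee * markets_per_year * years

def betrayal(volume_per_market, fee, markets_per_year, honest_years, pot):
    """Fees earned while building credibility, plus one stolen pot."""
    return honest_career(volume_per_market, fee, markets_per_year, honest_years) + pot

VOLUME, FEE, PER_YEAR = 10_000, 0.01, 20  # $10k traded per market, 1% cut, 20 markets/year

print(honest_career(VOLUME, FEE, PER_YEAR, years=10))               # 20000.0 over a decade
print(betrayal(VOLUME, FEE, PER_YEAR, honest_years=2, pot=50_000))  # 54000.0
# On these numbers, betraying after two years beats a whole honest decade,
# so the scheme presumably only stays honest where a creator's brand is
# worth more than the biggest pot they could grab.
```

Which is roughly why their minor-Internet-celebrity use case makes sense: for those people, the brand really is worth more than the pot.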
They have a play-money beta version up at <https://mantic.markets/>. **S, $10,000,** to support his political career. The first way I'm supporting his political career is by not naming him here or giving any further details. **Erik Mohlhenrich, $6,000,** for work on *[Seeds of Science](https://www.theseedsofscience.org/)*, a scientific journal which publishes articles that are nontraditional in content or style with peer review conducted through voting and commenting by a community of "gardeners" (free to join, visit [this page](https://www.theseedsofscience.org/gardeners) for details). Mohlhenrich has been exploring the role of amateurs in science, most recently in [this journal article](https://www.gwern.net/docs/psychology/2021-mohlhenrich.pdf) (non-conflict of interest note: the article mentions the SSC Surveys as an example of good amateur science, but this grant decision was made primarily by an outside reviewer). He also writes under the name [Roger's Bacon](https://twitter.com/RogersBacon1) at [Secretum Secretorum](https://rogersbacon.substack.com/). **Stuart Buck, $50,000,** to help launch the Good Science Project, “a science policy think tank that will focus on essays, blog posts, videos, and other public advocacy about how to improve science funding in the US.” Buck was VP of Research at Arnold Ventures, helped start the Center for Open Science, and has lectured at DARPA and IARPA and written pieces for *Science* and *Nature*. You can read more about his philosophy of science funding [here](https://www.worksinprogress.co/issue/escaping-sciences-paradox/) or follow [@GoodSciProject](https://twitter.com/GoodSciProject) for updates. **Kartik Akileswaran and Jonathan Mazumdar, $75,000,** for Growth Teams, a group that supports low-income countries in pursuing economic growth. They believe that there's no one-size-fits-all solution to development and the most helpful intervention is to give countries experts who stay there over the long run, try to understand their priorities, and help them chart their own course and build their own decision-making capacity. They have a team with lots of history working in development, a country interested in cooperating with them, and my reviewers say that their approach makes a lot of sense. They also need a lot more funding, so if any of you have a spare $150,000 lying around, please let them know. ### Other Ways Grants Might Still Get Funded **…via the Long Term Future Fund:** This is an EA grants program that volunteered to evaluate and judge all applications that had anything to do with AI or the rationalist and effective altruist communities. They have more grant-making expertise and more money than I do, so I was happy to send those applications their way without considering them further. If you sent in an AI or rationalist/EA community-related grant and didn’t see your name above, don’t despair! LTFF hasn’t made their decisions yet, so I’m not able to announce these at the same time as the others. When they’re done, I’ll make sure you know. **…via investors:** Two grant applications seemed really excellent, but beyond my price range and probably more suitable for traditional investment. I’ve started the process of connecting both to investors, but this is sensitive enough that I’m not going to list their names here yet. If you’re in this category, I’ve already told you about it by email. 
**…via ACX Grants +:** This is the part where I sent your grants around to interested rich people and foundations, and let them decide if they wanted to fund some on their own. Unfortunately, rich people and foundations don’t have huge amounts of time to evaluate grants on super-short notice around the Christmas season, so I haven’t heard back from many of them yet. I know of two projects that are on track to get funded this way, but I don’t have permission to talk about them here yet. Your funders should be reaching out to you shortly. **…via ACX Grants ++:** This is the part where I post applications publicly on the blog (if you gave me permission) and readers can look at them and decide to support them or not. About 500 of you gave me permission to do this, and your applications together total about 1,500 pages of text. Substack probably won’t let me write a blog post this long, and you guys won’t read it even if I do, so I’m still thinking about how I want to handle this. Please give me until sometime in January to work something out, but rest assured, I haven’t forgotten about this. ### Networking Or Something Many people said that the true value of Emergent Ventures and other mini grant programs was the opportunity to be part of a network and make use of the funder’s non-financial resources. Unfortunately I have no idea how to set this up and I’m not sure I have a lot of non-financial resources. So here’s what I can offer: If any awardee (including people who get funded via LTFF, Grants+, or investors) needs a message or advertisement broadcast - you’re looking for more funding, you’re looking for employees, you want everyone to gaze in awe at the cool thing you’ve developed - please send me an email with your message, and I’ll signal-boost it on an Open Thread. I will do this at least once for everyone, maybe more if I don’t feel like you’re abusing the privilege. If you do your project and it works, or doesn’t work, and you learn something interesting (including “man, this was harder than I thought”) and you think other people would be interested, you can pitch me your essay. If I like it, I may publish it as an ACX post. This isn’t meant to be a demand or an exchange-in-kind for getting the money; I’m expecting fewer than 10% of awardees to take me up on this. But you can if you want. I have high standards and expect not to publish most posts pitched to me. Everyone else who’s done this has created some kind of group where awardees can talk to each other. I will probably get around to this too, though I’m kind of confused by the whole idea. Why would somebody working on biochemistry want to talk to someone working on political activism just because they got a grant from the same person? Once I figure out what people expect to get from this, I’ll create some structure that maximizes my ability to give it to them. If you’d like an introduction to someone I can plausibly introduce you to, let me know. And if there’s anything else I can do for you, let me know that too. ### How To Get Your Money I don’t know yet; I’m still waiting for an answer from the people who are going to handle this for me. When I know, I’ll send you all an email. I’m expecting this to be sometime in early January. If you need the money before then, contact me at scott[at]slatestarcodex[dot]com and we’ll figure something out informally. ### Acknowledgments This was a ridiculous thing for me to try to do, and I ended up way out of my depth (I’ll write more about why later).
Everything worked out okay anyway (so far! I think!) because many people rescued me and handled the parts I couldn’t. I got permission to include most people’s names, but when I forgot or haven’t heard back, I’m thanking them anyway by initials. If anyone is unhappy with how they’re represented here (either you want your name off, or you want me to add it in) please email me. Oliver Habryka of Lightcone Infrastructure helped explain how grants work, connect me to everyone else, and ensure I didn’t have to rely on my own experience, good judgment, or other things I don’t have. He is also part of the Long-Term Future Fund and has taken over my AI grant evaluation work along with Asya Bergal and the rest of the LTFF team. The [Effective Altruism Funds team](https://funds.effectivealtruism.org/) handled most of the financial infrastructure for me. Thanks especially to Sam Deere, Jonas Vollmer, Helena Dias, and Chloe Malone for handling my increasingly frantic questions that I needed immediate responses to over the holiday season. I originally planned to spend $250,000 on these grants; this came partly from subscribers like you, partly from unsolicited gifts from rich patrons, and partly from someone who paid an unexpectedly large amount for an NFT of a blog post. Thanks to everyone involved in helping me have this extra money. But I was also able to get another $1.3 million (!) from extremely generous outside funders, of whom only two would let me reveal their names: Vitalik Buterin and Misha Gurevich. Thank you Vitalik, Misha, and other anonymous people! Evaluating applications was much harder than I expected, and I was saved by several teams of people who agreed to read over some large fraction of 656 grant applications for free or at least for much less money than they deserved. These include: Merrick Smela, Ruth Hook, Samira Nedungadi, Tessa Alexanian, and AG for Biology; Kieran Greig for Animals; Clay Graubard for Forecasting; José Luis Ricón for Science & Progress; Andrew Martin for Global Health & Development, [anonymous] for Politics, Misha Gurevich for everything I could force him to read, and a few other people who gave me miscellaneous advice on specific proposals. I made all final decisions and you shouldn’t blame these people if I got something wrong. Tyler Cowen gave me publicity and good advice at several points, along with bad advice at one point (he said it would be “great fun”). 656 of you took a risk and bared your secret dreams before a random blogger you barely knew. You faced a barrage of dumb follow-up questions, demands for extra information on short deadlines, and the possibility of rejection (sorry! I can’t emphasize enough that I rejected many of them for reasons unrelated to their inherent goodness). You were the core of this project and I’m suitably grateful. This was one of the harder things I’ve tried and it’s not quite finished. Insofar as it works, it’s thanks to hard work by these people and many others I forgot to mention. I think we accomplished something good here and I have a lot of hope that some of these projects will go on to do great things for the world. Deep and sincere thanks to everyone involved!
# Mantic Monday: Dogs In Wizard Hats I found this YouTube explainer about prediction markets on the subreddit. It’s pretty good! My small nitpicks are that it overestimates their accuracy relative to traditional forecasters (it focuses on markets beating forecasters in 2008, but I don’t think this result holds consistently) and underestimates their resilience against bad actors trying to skew the probabilities. Still, this will be my go-to source when someone wants a short explanation of what these are and why I’m so excited. ### Futuur Soon Last week I mentioned a new prediction market called [Futuur](https://futuur.com/). Today we’ll look at it in more depth. Futuur sends non-Americans to their real money markets and Americans to their play money markets (because of the US’ unique anti-prediction-market regulations). Their play money markets are awful: ([source](https://futuur.com/q/43598/will-humans-land-mars-end-2024)) Wrong: a Mars landing by 2024 isn’t 17% likely. This kind of mistake is an inevitable consequence of their play money model, which gives every user 10,000 units (“Ooms”) when they first join. First of all, most people don’t care about Ooms and won’t participate. But second, if you go all in on correcting this mispricing, you’ll lock all 10,000 of your Ooms up for three years to earn a 17% return - about 5% a year. That’s super boring. You’ll join this site, make one bet, not be able to do anything else for three years, and even after you win you won’t be close to making the leaderboard or being an Oom tycoon. So what’s the point? Not much, which is probably why nobody has corrected this (including me, who considered it and then decided it sounded annoying). But here’s where it gets weird: their real money markets aren’t much better! ([source](https://futuur.com/q/138686/will-the-number-of-fires-in-california-be-higher-in-2021-than-in-2020)) They give their resolution source as [this California government website](https://www.fire.ca.gov/stats-events/), which says there have been fewer fires this year, 8800 vs. 9600. It’s been updated pretty recently, and this isn’t California’s fire season, so there’s no way there will be another 800 fires in the next four days. As far as I can tell, this is free money - though probably after Astral Codex Ten mentions this question, people will notice and correct it pretty quickly. I think I bite this bullet. This is the newest prediction market in the world, it’s been operating about two weeks, they’ve banned the country most likely to use their product, and they probably use automated market makers that have no idea what they’re doing. I only found this place because I’m trying my hardest to stay on top of breaking prediction market news. Maybe I shouldn’t be surprised if I run into some that are still in the phase where there are $20 bills on the ground. I’ve (indirectly) tried betting on this and will report back later. There should probably still be some opportunities left to make 3 to 4 digits’ worth of free money if you’re interested, non-American, and can use crypto. But keep in mind that there might also be some systemic risk - this is a new market and nobody has had a chance to check if they really pay out! ### Mantic Everyday [Mantic Markets](https://mantic.markets/) has stolen its name from my newsletter! But they’re so interesting that I can’t stay angry. Here’s a typical market they’re offering: The perceptive among you might notice that “…and things like that” isn’t usually the sort of thing you see in a forecasting question.
Typical questions are obsessively well-specified - not just “Hospitals will be at over 80% capacity”, but “Hospitals will be at over 80% capacity according to the 1-20-22 report by the American Hospital Capacity Association, or if no report comes out on that date, the most appropriate alternative source chosen by our Resolution Committee”. This is good because it maintains everyone’s faith in the objective process, but bad because what people actually care about is whether there will be *something that feels like a crisis situation*, which is hard to operationalize in hospital numbers. Mantic wants to lean into the subjective side of prediction markets and see what happens. Their idea is: anyone can write a question on their market. Whoever writes it also judges it. So if I write the question on whether Omicron will cause a hospital overcrowding crisis, I get to decide whether whatever’s going on next month counts as a “crisis” or not. The intended audience is people who know and trust me - in my case, it might be blog readers like you. So you’d be trying to predict whether I will think that our future coronavirus situation looks like a “crisis”. This is obviously somewhat but not perfectly correlated with whether *you* think it’s a crisis and whether by some objective standard it really *is* a crisis, but it’s not clear that it’s any worse of a measure than what number the American Hospital Capacity Association puts on a report. *Caveat emptor*, I guess. What brings this over the top from “weird idea” to “weird idea that provokes maniacal laughter” is that the person who writes/judges the question gets a percent of the trading volume as a fee. So if I propose and judge this question and people place $10,000 worth of bets on it, I might get $100. One of the giant bottlenecks for existing prediction markets has been getting people to write questions for them. This is hard for two reasons: first, it’s thankless work, and second, you have to do a lot of nitpicking of resolution criteria - figuring out whether the American Hospital Capacity Association is really trustworthy, how often it puts out reports, etc. This eliminates both reasons - you can resolve questions however you want, and you’re financially incentivized to create and promote them. If this works, expect “COME BET ON MY PREDICTION MARKET QUESTION!” to join penis enlargement pills and crypto Ponzi schemes as a classic form of spam. Does this incentivize bad actors to secretly bet on their own markets, then resolve them falsely in order to make a killing? Yes, definitely. And it’s crypto, so it’s unclear there’d ever be any way to find these people or get the money back. Smart people will stick to markets created by named people with good reputations; I’m not sure there’s much more to it than that. So far Mantic is having the same kind of CFTC problems as everyone else - it’s probably illegal to offer real money prediction markets to Americans. They’re still trying to figure out ways around this, so for now they’re a beta version with play money only. I don’t know if they’ll succeed. I’m most interested in their model, which I think has a lot of potential and is an obvious choice for an established competitor to steal. Conflict of interest notice: they have applied for (and will probably get) an ACX Grant. Other than me giving them money and publicity, and them stealing my favorite prediction-market-related word, I’m not actually affiliated with them in a meaningful sense.
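To make the incentive structure concrete, here's a toy sketch in Python - the class, numbers, and fee logic are all made up for illustration, not Mantic's actual implementation:

```python
# Toy model of a creator-judged market with a 1% creator fee.
# Everything here is illustrative, not Mantic's real code or fee schedule.

class CreatorJudgedMarket:
    def __init__(self, creator, fee_rate=0.01):
        self.creator = creator
        self.fee_rate = fee_rate
        self.bets = []               # (bettor, side, stake after fee)
        self.creator_earnings = 0.0

    def bet(self, bettor, side, amount):
        fee = amount * self.fee_rate          # creator's cut of volume
        self.creator_earnings += fee
        self.bets.append((bettor, side, amount - fee))

    def resolve(self, outcome):
        """The creator alone decides the outcome - that's the whole gimmick."""
        pool = sum(stake for _, _, stake in self.bets)
        winners = [(b, stake) for b, side, stake in self.bets if side == outcome]
        winning_stake = sum(stake for _, stake in winners)
        # Winners split the entire pool pro rata to their stakes.
        return {b: stake / winning_stake * pool for b, stake in winners}

m = CreatorJudgedMarket(creator="scott")
m.bet("alice", "YES", 6000)
m.bet("bob", "NO", 4000)
print(m.creator_earnings)   # 100.0 - 1% of $10,000 in volume, win or lose
print(m.resolve("YES"))     # {'alice': 9900.0} - alice takes the whole pool
```

Note what the sketch makes obvious: the creator collects their fee whichever way they resolve, and nothing in the mechanism itself stops them from betting YES and then ruling YES - the only checks are reputational.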
### Metaculus Public Figures You may remember from last post that there is a *lot* of stuff at Metaculus. Here’s their [Public Figure Predictions](https://www.metaculus.com/questions/8198/updated-public-figure-predictions/) page. It tries to collect predictions by important public figures and compare them to the Metaculus consensus for the same question. For example, from the [Elon Musk page](https://www.metaculus.com/public-figure/elon-musk/): So Musk said that he thought more than half of vehicle production would be electric in ten years, but Metaculus thinks it will only be 38%. This seems much more civilized than the usual thing where you accuse people of being hype-mongers. Which is not to say it’s less harsh: …they’ve been doing this for a while and have a record of how right or wrong everyone was. Unfortunately, a lot of their “public figures” right now are Vox Future Perfect journalists, the only famous people who consistently make hard-and-fast predictions and give clear probability estimates. I’m sure it’s just a matter of time before everyone else figures out this is the wave of the future and joins in! ### This Week In The Markets Well, this is on everyone’s minds now, might as well start with it: ([source](https://www.metaculus.com/questions/8898/russian-invasion-of-ukraine-before-2023/)) Will Russia invade Ukraine within a year? This is one of those “all depends on the resolution criteria” questions, since Russia already has troops in what most countries would consider Ukrainian territory, and these sorts of tense standoffs involve a lot of limit testing. Metaculus has gone with “either Russia or two other Security Council member countries state that Russia has invaded Ukraine”, which seems fine. This was hovering at 30% throughout mid-December and has since risen to 43%. But this hasn’t significantly affected a longer-running “deadly clash between US and Russia by 2024” market: ([source](https://www.metaculus.com/questions/7449/deadly-clash-between-us-and-russia/)) Maybe this means forecasters don’t expect a potential Ukraine-Russia war to involve the US directly? It *has* affected a US-Russia war by 2050 market, which rose from about 6% earlier in the year to 16% now. I don’t know why US-Russia war by 2050 has risen so much faster than US-Russia deadly clash by 2024, unless forecasters believe the current instability is laying the groundwork for future problems that won’t materialize by 2024. Polymarket, PredictIt, and Kalshi are silent on this question for now. On a happier note, Metaculus is bullish on the James Webb Space Telescope: And finally, of all sad tales of mice and men, the saddest is probably this Metaculus question on Virginia workplaces: The outcome is measured in some kind of Google mobility data, but that’s irrelevant. The question is how long it will take to go back to normal after the coronavirus. In June 2021, people predicted December 2021. In August 2021, people predicted May 2022. In October 2021, people predicted July 2022. This month, people are predicting October 2022. It seems like the general rule is that every month, the date when people expect normality to return moves about a month and a half further out. I don’t think this is people being very foolish and failing to update. I think it’s mostly more and more people shifting their predictions to “it will never go back to normal” over time, probably less because of COVID and more because it looks like the work-remotely future has finally arrived. Maybe it’s not that sad after all!
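To see how the shifting-to-"never" mechanism produces exactly this kind of creep, here's a toy calculation - made-up numbers, not the real Metaculus data:

```python
import statistics

# Hypothetical forecaster pool: 95 people hold fixed guesses of 6-24 months
# until normality; each period, more of them switch to "never", which the
# question format crams into its far edge (here coded as 48 months).
optimists = list(range(6, 25)) * 5

for k in (0, 20, 40, 60, 80, 100):       # how many now answer "never"
    crowd = optimists + [48] * k
    print(k, statistics.median(crowd))   # 15, 17, 19, 21, 23, then 48
```

No individual ever moves their date, but the community median drifts out a couple of months per step anyway, then snaps to the far edge once the "never" camp becomes a majority.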
### Shorts **1:** A “fortified essay” on [foot voting coordination efforts](https://www.metaculus.com/notebooks/8338/foot-voting-coordination-efforts/), eg the Free State Project. “I believe that there's a 60% chance that the question, ‘Will a coordinated foot voting effort intentionally move 10,000+ residents to a single American state by 2030?’ will resolve positively.” **2:** Balaji Srinivasan suggests using prediction markets to judge the winner of college debates: I’m not sure I understand this very well yet but maybe someone else can explain it to me. **3:** Congratulations to Google’s new prediction market team for making [the front page of Hacker News](https://news.ycombinator.com/item?id=29629665) [twice](https://news.ycombinator.com/item?id=29642210) last week! A good demonstration that there’s a lot of interest in this field.
# Open Thread 204 **1:** Sorry, I know I said I would have the Grants results up by Christmas, but I’m waiting for the last few funder checks to clear, plus I realized if I posted something on Christmas nobody would read it. Current prediction is sometime this week, probably Tuesday. **2:** Comments of the week: John Schilling [walks us through](https://astralcodexten.substack.com/p/open-thread-203/comment/4085611) his Omicron math in more detail; Chaostician on the [history of](https://astralcodexten.substack.com/p/open-thread-203/comment/4085179) using mummies to cure disease. **3:** A new AI alignment fellowship has asked me to signal boost them: Principles Of Intelligent Behavior In Biological And Social Systems "invites applications from people with graduate research experience in their respective fields (e.g. evolutionary biology, neuroscience, linguistics, sociology) to conduct a ~12-week research project, suggested and mentored by experienced AI alignment researchers". You can learn more at <https://www.pibbss.ai/> , application deadline is 1/16/22.
# Highlights From The Comments On Diseasonality The main highlight was an email I got from a reader who prefers to remain anonymous, linking me to [Projecting The Transmission Dynamics Of SARS-CoV2](https://sci-hub.st/10.1126/science.abb5793). This paper is head and shoulders above anything I found during my own literature review and just comes out and *says* everything I painfully tried to piece together. Either my research skills suck, the epidemiology literature is a bunch of disparate subthreads with wildly differing levels of competence, or both. The authors (including Marc Lipsitch, who some of you might know from Twitter) are writing in May 2020, trying to predict the future course of COVID. To that end, they investigate the past course of two other coronaviruses called OC43 and HKU1, which cause mild colds. These show a seasonal pattern. Why? Here’s my understanding, which might not be exactly right: they find that immunity to these other coronaviruses wanes in about a year. They also find that the normal collection of seasonal factors - temperature, humidity, maybe UV, etc - have a multiplicative effect on R. Remember, when R is below 1, the disease gradually dies out; when above 1, it gets worse. At any given time, some percent of people have immunity. Let’s say at some particular time that’s 90%, and maybe that implies an R of 0.5. As time goes on, immunity declines - 85%, 80%, etc - and R creeps up - 0.6, 0.7, etc. Then winter hits, and R goes up by some multiple - the paper says in their particular case it’s a factor of 2, so R of 0.7 becomes 1.4. Now the disease spreads. There’s a seasonal miniepidemic, the disease infects vulnerable people, immunity climbs back to near 100%, R sinks below 1, and the mini-epidemic ends. Then more time passes, immunity declines again, and the cycle repeats. [EDIT: **demost_** [explained](https://astralcodexten.substack.com/p/diseasonality/comment/3918284) most of this better in a comment on the original post] The authors write that depending on how long it takes COVID immunity to wane, its outbreaks could be “annual, biennial, or sporadic”. The same reader sent me a link to [this Twitter thread](https://twitter.com/BallouxFrancois/status/1405939503068598274) by Professor Francois Balloux, who writes: > Population immunisation will increase through vaccination and infection to reach an equilibrium probably around 95%, pushed down by waning immunity, new births and viral immune escape, and pushed up by (re-)infection and vaccination. Viral transmissibility may still slightly increase above its current [as of June 2021] value (R0 ~ 6.0), but will likely soon hit a buffer and it is now in the ballpark figure of the higher transmissibilities reported for the four endemic 'common cold' coronaviruses in circulation. Sensible people won't wish to maintain social distancing measures for much longer than required. As such, we may soon have three forces out of the system. Whatever their eventual value may be doesn't matter for the dynamic of the system once they've reached an 'equilibrium'. > > Seasonality will obviously remain. Even if it affects SARSCoV2 transmissibility only moderately, it should start driving the system, pushing R >1 in winter. At this stage, SARSCoV2 will have joined the >200 other seasonal endemic respiratory virus in circulation globally.
> I wish to reassure those who worry I'm predicting a scenario of eternal carnage that (re-)infections following previous rounds of infection / vaccination are most unlikely to cause severe disease in the vast majority of cases. This last sentence reminds me of [a discussion](https://astralcodexten.substack.com/p/open-thread-187/comment/2729763) I had with Bram C at the most recent Berkeley meetup. He noted that most diseases are less severe when you get them as a young child, for unclear reasons (chicken pox is the most obvious example). You get most respiratory viruses for the first time as a young child. When you get them a second time, you already have partial immunity from the first time - it’s like COVID with one vaccination. In fact, in the simplified case where everyone in the population is at the same level of immunity, you will have only just reached the point where the virus can infect you at all (after all, if there was an earlier point, the virus would have infected you then). So normally, children (who are weirdly resilient) are the only people who get the full force of a disease, and everyone else gets a weak watered-down version. COVID is worse than other coronaviruses partly because adults are facing its full force. Once every adult has had it a few times (or had a few rounds of shot-and-booster) it may be more like all the other coronaviruses and so very mild. And a hundred years from now, the only immuno-naive people to get COVID will be young children, who will do fine. (something like this might also be why the Native Americans had such a hard time with European diseases) But the rest of you had interesting thoughts too, starting with: **Metacelsus** [writes](https://astralcodexten.substack.com/p/diseasonality/comment/3912809): > The entrainment / waning immunity mechanism can't explain chickenpox seasonality, since people generally don't get chickenpox (primary VZV infection) twice. So something else must be going on. But **Ivan Fyodorovich** [responds](https://astralcodexten.substack.com/p/diseasonality/comment/3912845) that new people are being born every year. So imagine that in summer, R is 0.7, and in winter it’s 1.4. The new people aging into chickenpox age will get it in winter. But then we have the same question as before - if it’s wintry all the time in Alaska, what happens there? My guess is - chickenpox spreads faster until there aren’t enough susceptible people left, then waits until new people are born, and then when there are enough of them, there will be an epidemic, and because winter multiplies R it will be in the winter. **Sniffnoy** [notes](https://astralcodexten.substack.com/p/diseasonality/comment/3913781) that technically, chickenpox is spring seasonal. I don’t know if this is just “winter seasonal but slow so it peaks in the spring” or something more complicated. **Eric Rall** [points out](https://astralcodexten.substack.com/p/diseasonality/comment/3914177) that we’re all missing something obvious and maybe childhood diseases track *the school year*. Maybe chickenpox would naturally peak in the winter, but gets delayed by winter vacation, and then comes back when kids return to school in January and has to build from there? There’s [some argument downthread](https://astralcodexten.substack.com/p/diseasonality/comment/3915138) about what countries with different school years tell us.
**10240** [writes](https://astralcodexten.substack.com/p/diseasonality/comment/3914962)**:** > This hypothesis (that seasonality results from a combination of temperature and herd immunity from previous infections) doesn't actually depend on immunity only lasting about a year. And indeed, most people don't get a flu every year, nor every kind of cold; more like once in 10 years. > > Assume that the transmission rate is a product of a factor negatively correlated with temperature, and a factor positively correlated with how long ago you last had the same disease. At equilibrium, the long-term average of the transmission rate is 1. So, in temperate regions, r<1 in the summer, and r>1 in the winter (except if the current year's epidemic has already sufficiently increased the level of immunity to push r below 1—eyeballing the US flu death charts, they seem to peak in early January in the worst years, but later, in the spring, in years with low rates). > > In this model, warmer regions should have less flu overall, since a longer interval between incidences corresponds to a long-term average r of 1. Maybe Alaskans get a flu, say, once in 8 years on average, Floridians every 12 years (still seasonally) and Panamans every 15 years (without seasonality). That last paragraph sounds fascinating but I’m not sure I understand why it’s true; can someone explain? **Rafal Smigrodski** [writes](https://astralcodexten.substack.com/p/diseasonality/comment/3915323): > The mention of wildfire is most apt: > > The most destructive wildfires, or crown-fires, are uncommon under natural circumstances, when the much less destructive ground-fires predominate. Crown-fires do however happen often in actively-managed (or mismanaged) forests, where clueless or ideologically driven forest service suppress fires for decades, which leads to abnormal accumulation of deadfall (fallen branches, trees lying on the ground), and eventually there is so much of this dead dry mass that a randomly started fire becomes too hot to suppress and it destroys everything, down to the root. > > Covidiocy manifesting as lockdowns and masking has so many similarities to the policy of fire suppression. Smart, evidence-based medicine, like vaccinations and targeted quarantines of select vulnerable populations, is very much like scientific forest management, with its prescribed burns. > > I bet the differences in efficacy, measured in dead trees or dead people, will be similar. I’m having trouble figuring out how to analyze this point. After thinking about it, maybe the problem is I don’t have a good sense of why fires ever stop. Assuming there is at least one continuous line of trees connecting (eg) Maine to Georgia, why didn’t every forest fire burn the entire East Coast to a crisp back before there were human firefighters? **Jason Crawford** [writes](https://astralcodexten.substack.com/p/diseasonality/comment/3914007): > To make things slightly more complicated, not all seasonal viruses peak in winter. When the US suffered from annual polio epidemics in the first half of the 20th century, they would come in summer. I'm not sure why this is (or if anyone knows), although I think it was spread by water and one factor might have been swimming in shared pools. **Brock** [answers](https://astralcodexten.substack.com/p/diseasonality/comment/3919724): > Yes, in temperate zones polio was seasonal with peak in summer/autumn. But it's not a respiratory virus, and it spreads via the fecal-oral route. 
> I'd guess that swimming was the seasonality factor for polio. That’s a pretty good answer! No mystery why fecal-orally transmitted viruses spread differently than respiratory ones. **Alex G** [does her homework](https://astralcodexten.substack.com/p/diseasonality/comment/3930697) and runs a simulation: > my intuition is that you get seasonality under very broad assumptions (r0>1 and varies seasonally, immunity wanes on the order of at least a year) and the difference is made up by more people needing to get ill to get to herd immunity. > > rt should be 1 on average in the long run > > I think if we're comparing Alaska in the summer (rt<1) vs Florida in the winter (rt>1) the exponential growth/decay in cases should probably swamp the larger number of cases you'd get in Florida averaged over a year > > […] **Mycelium** [says](https://astralcodexten.substack.com/p/diseasonality/comment/3963509): > In South East Asia, we have two flu seasons a year - a summer season and a winter season - for EXACTLY the reason you mention - in the summer, people coop themselves up at home with the air-conditioning on full blast. > > In the west, shade is adequate to cool you in the summer, so you don't need to close the windows and turn on the A/C. In South East Asia, the muggy air retains heat, requiring air-conditioning and reduced airflow. As Tyler Cowen would say, solve for the equilibrium!
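Since several commenters gestured at simulations, here's a minimal one in the spirit of Alex G's - a bare-bones SIRS model with waning immunity and a seasonal multiplier on transmission. All parameters are invented for illustration; the real paper fits them to data:

```python
import math

# Minimal SIRS sketch: immunity wanes over about a year, and "winter"
# multiplies transmission by up to 1.3x. All parameters are made up.
N = 1_000_000                 # population
inf, imm = 100.0, 0.0         # infected and immune; susceptible = N - inf - imm
beta0, gamma, wane = 0.25, 0.1, 1 / 365   # per-day rates; base R0 = 2.5

for day in range(4 * 365):
    season = 1 + 0.3 * math.cos(2 * math.pi * day / 365)  # winter at day 0
    sus = N - inf - imm
    new_infections = beta0 * season * (sus / N) * inf
    new_recoveries = gamma * inf
    newly_susceptible = wane * imm     # waning immunity refills the pool
    inf += new_infections - new_recoveries
    imm += new_recoveries - newly_susceptible
    if day % 90 == 0:
        print(day, round(inf))         # watch waves recur, timed to winter
```

Run it and you get roughly the qualitative picture from the paper: an initial big epidemic, then recurring mini-epidemics entrained to winter, with their spacing set by how fast immunity wanes - annual, biennial, or sporadic depending on the parameters.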
# Addendum To "No Evidence" Post The day after I wrote [The Phrase “No Evidence” Is A Red Flag For Bad Science Communication](https://astralcodexten.substack.com/p/the-phrase-no-evidence-is-a-red-flag), FT published [this article](https://www.ft.com/content/020534b3-5a54-4517-9fd1-167a5db50786): Like many uses of “no evidence”, they meant that one particular study of this complicated question had failed to reject the null hypothesis. Here’s what happened to [Metaculus’ prediction tournament](https://www.metaculus.com/questions/8766/omicron-variant-less-deadly-than-delta/) when the same study came out: The consensus prediction dropped from 72% chance that it was less lethal, to 63% chance. But it quickly recovered, and is now up to 80%. This is an unusually clear example of the difference between classical and Bayesian ways of thinking.
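For anyone who wants the Bayesian arithmetic spelled out, here's a sketch with one made-up number. Treat the study as a likelihood ratio rather than a verdict; a ratio of about 0.65 - mild evidence against, nowhere near proof of nothing - happens to reproduce the market's initial dip:

```python
def update(prior, likelihood_ratio):
    """Bayesian update on the odds scale: posterior odds = prior odds * LR."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.72   # market's belief that Omicron was less lethal
lr = 0.65      # hypothetical: a null result as mild evidence against
print(round(update(prior, lr), 2))   # 0.63 - matches the market's dip
```

A classical reader sees "failed to reject the null" and rounds it to "no evidence"; a Bayesian reader shaves nine points off and keeps updating as further data comes in - which is presumably what the forecasters did on the way back up to 80%.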
# Addendum To Luvox Post In my post yesterday, I [quoted a Vox](https://www.vox.com/future-perfect/22841852/covid-drugs-antibodies-fluvoxamine-molnupiravir-paxlovid) article describing work by Dr. Ed Mills and others to get the FDA to approve Luvox for COVID. As of that point, the FDA didn’t know how to process an application without a sponsoring drug company: > [Professor Ed] Mills, who thinks that fluvoxamine and budesonide are both appropriate to prescribe to patients sick with Covid-19, compares public messaging on fluvoxamine to communications about Merck’s drug molnupiravir. The evidence for molnupiravir is in many ways weaker than the evidence for fluvoxamine, but molnupiravir was produced by a major pharmaceutical company that can shepherd it through the process of becoming a recommended drug. On a call last week, Mills said, the FDA told him “they don’t know how to deal with submissions where there isn’t someone to be responsible for it.” But it looks like just as I published, he and his colleagues found a way around the problem: …though so far I’m having trouble figuring out their exact strategy: Congratulations to the fluvoxamine team for figuring out how to make this (hopefully) work. If (and only if) you’re a medical professional with relevant credentials, you can add your name to the letter of support [here](https://docs.google.com/forms/d/e/1FAIpQLSc7TATp80UcJiNns1tufyl8G36TQCnib7Sw0vtE2KQ6gAwgmA/viewform). The FDA [also approved](https://www.contagionlive.com/view/fda-approves-paxlovid) the other drug I’ve been saying they should approve quickly, Paxlovid, a full two weeks before the prediction markets expected! According to Metaculus, there was only a 6% chance we would get Paxlovid approved this quickly. They are genuinely getting better! Thank you, FDA!
# The FDA Has Punted Decisions About Luvox Prescription To The Deepest Recesses Of The Human Soul **I.** Here’s my pitch for fluvoxamine (Luvox) for COVID. In the midst of all the hype about ivermectin and hydroxychloroquine, scientists put together the giant 4,000-person TOGETHER trial, intended to test all these exciting COVID early treatments. You know what happened next: ivermectin and hydroxychloroquine crashed and burned. But a different drug, the SSRI antidepressant fluvoxamine, actually did really well! It decreased COVID hospitalizations by about 30% - not the perfect cure rate the rumors attributed to ivermectin, but a substantial decrease. Given the size and professionalism of this study, and another smaller one that also got positive results, I and many others take Luvox pretty seriously. At this point I’d give it 60-40 it works. Can you prescribe a medication when you’re only 60% confident in it? There are some [thorny philosophical issues](https://astralcodexten.substack.com/p/pascalian-medicine) around this, but I think in the end you have to compare risks and benefits. What are the risks? Like every medication, including Tylenol, aspirin, etc, Luvox has some common minor side effects and some rare major ones. But let’s step back a second. Fluvoxamine is a bog-standard SSRI. Its side effects are generic SSRI side effects. We give SSRIs to 30 million people a year, or about 10% of all Americans. As a psychiatrist, I’m not supposed to say flippant things like “we give SSRIs out like candy”. We do careful risk-benefit analysis and when appropriate we screen patients for various risk factors. But after we do all that stuff, we give them to 10% of Americans, compared to [12% of Americans](https://www.norc.org/NewsEventsPublications/PressReleases/Pages/half-as-many-households-plan-to-trick-or-treat-this-halloween.aspx) who got candy last Halloween. So you can draw your own conclusion about how severe we think the risks are. For some reason the same experts who don’t mind prescribing SSRIs when people have mild depression freak out about prescribing them when they’re the only evidence-based oral medication for a deadly global pandemic. “What about SSRI withdrawal?”, they ask. After a ten-day course? On a 100 mg imipramine-equivalent dose? Minimal. “What about long QT syndrome?” The VA system took 35,000 high-risk older patients off of an unusually-likely-to-cause-QT-syndrome SSRI in 2011, and was unable to find any evidence that this prevented [even a single case of the syndrome](https://sci-hub.st/https://pubmed.ncbi.nlm.nih.gov/27166093/), let alone any negative outcome! The objection I take most seriously is actually the worry about post-SSRI sexual dysfunction, a very rare condition where people on an SSRI can have sexual problems for months or years after they come off. I would be *shocked* if you could get that from a ten-day course. But technically nobody has ever tested this - there’s never been a good reason to put someone on an SSRI for only ten days before - so I can’t rule it out. Still, the risk from adding a few extra Luvox prescriptions for COVID is much less than the risk we incur all the time from having 10% of Americans on SSRIs for years at a stretch, so this seems like a weird time to get cold feet. I conclude that the risk-benefit calculation probably favors using Luvox. And I’m not alone here.
Johns Hopkins University’s COVID treatment guidelines [recommend](https://www.hopkinsguides.com/hopkins/ub?cmd=repview&type=479-1225&name=30_538747_PDF) fluvoxamine for appropriate COVID patients. Some leading psychiatrists, especially the Washington University psychiatrists who helped discover the new indication, [support](https://www.medrxiv.org/content/10.1101/2021.12.17.21268008v1) fluvoxamine for appropriate COVID patients. Many of the epidemiologists and statisticians most instrumental in debunking the hype around ivermectin have spoken out in *favor* of fluvoxamine, saying this one is the real deal ([1](https://twitter.com/GidMK/status/1471931916655808513), [2](https://twitter.com/boulware_dr/status/1469799433596555267)). The National Institutes of Health hasn’t quite come out in support, but they *have* taken the unusual step of not *disrecommending* fluvoxamine the same as they disrecommend every other oral early COVID treatment, [saying](https://twitter.com/AngelaReiersen/status/1471995216047579140) that the evidence "provides the sort of flexibility for the treating clinician to go either way". Unfortunately, none of these bodies alone or combined are powerful enough to make the average doctor prescribe differently. That’s why all eyes are on the FDA. **II.** The FDA has a weird role here. They already approved fluvoxamine as an antidepressant. That means it’s legal, pharma companies can make it, pharmacies can stock it, and individual doctors can prescribe it whenever they want, including for COVID. But they approved it with a label saying “*For Depression*”. Doctors are kind of . . . well, “hidebound” is a harsh word, but they really hate doing weird new things that no one has explicitly given them permission for. It’s not *illegal* to prescribe fluvoxamine for COVID. It’s not even going to get you in any trouble. It might not get covered by insurance, but it only costs [about $10](https://www.goodrx.com/fluvoxamine?dosage=100mg&form=tablet&label_override=fluvoxamine&quantity=10&sort_type=popularity) anyway. The problem is just that it’s *weird.* So in order to make doctors feel completely comfortable prescribing it, the FDA would have to add “*…And For COVID*” to the label. The scientists involved in the big study have asked them to do this. I *hoped* that the FDA would say “Since the COVID pandemic is an emergency, we’ll do this right away”. I *predicted* they would say “Please give us a year to figure out our opinion on this.” I *feared* they would say “There’s just not enough evidence”. What I *never imagined at all* was their actual response, which was “Sorry, we don’t understand our own bureaucracy well enough to figure out how to do this.” But [according to Kelsey Piper at Vox](https://www.vox.com/future-perfect/22841852/covid-drugs-antibodies-fluvoxamine-molnupiravir-paxlovid), that’s where they are right now: > *[Professor Ed] Mills, who thinks that fluvoxamine and budesonide are both appropriate to prescribe to patients sick with Covid-19, compares public messaging on fluvoxamine to communications about Merck’s drug molnupiravir. The evidence for molnupiravir is in many ways weaker than the evidence for fluvoxamine, but molnupiravir was produced by a major pharmaceutical company that can shepherd it through the process of becoming a recommended drug.
On a call last week, Mills said, the FDA told him “they don’t know how to deal with submissions where there isn’t someone to be responsible for it.”* That is, FDA procedures usually assume there is a pharma company sponsoring a drug. But fluvoxamine is cheap and off-patent and no pharma company is involved in repurposing it for COVID. Nobody has a procedure for a drug without a sponsor, so they won’t do anything. Kelsey’s article focuses on the systemic failure: the FDA, guideline-making agencies, and public health communicators have dropped the ball on this. I think that’s a perfectly fine thing to focus on. I’m not usually one to defend the FDA, and their actions here hardly seem defensible. I’m with Kelsey in hoping they find a way to solve their institutional dysfunction. But I can’t help wondering if this is *entirely* on the FDA. Fluvoxamine is legal. The only reason we need the FDA to get involved here at all is because if it’s not on the label, doctors will feel uncomfortable prescribing it. What if, in order to save thousands of lives and help beat back a global pandemic, doctors just did the uncomfortable thing? **III.** Am I being harsh in saying that the problem is doctors who don’t want to do something uncomfortable? There are many reasons not to prescribe a medication for a new indication. Maybe you genuinely think the risks outweigh the benefits. If it were me, I would trust the Johns Hopkins guidelines team on this, but honest opinions can differ. I have no problem with doctors who are holding off for this reason, and look forward to arguing with them in the appropriate venues. Or maybe you’re afraid of lawsuits. If you get sued for malpractice, it’s nice to be able to tell the jury “it says this drug is okay for this condition right on the label”. But this doesn’t usually stop doctors from doing off-label prescriptions. Gabapentin is the 18th most-prescribed drug in the US, almost always for nerve pain or anxiety, but its label only officially endorses use for seizures or shingles. Beta-blockers for social anxiety? Off-label and dirt common. Prazosin for PTSD nightmares? Off-label and dirt-common. How do doctors sleep at night, knowing they’re constantly at risk of getting sued for off-label prescriptions? Probably using trazodone, the #2 most popular sleeping pill in the US, whose label says it should only be used for depression. No, seriously, it’s because most doctors *don’t even know* these indications are off-label, plus their medical school professors all did it too so it doesn’t feel transgressive. Or maybe you *suspect* the benefits outweigh the risks, but you have a principled heuristic of not trusting your own suspicions. Doctors are constantly meddling with systems we don’t fully understand, people die when we make mistakes, and hordes of scammers and profiteers are trying to exploit us at any given moment. “Never do anything that five government bodies haven’t enthusiastically recommended” is a great meta-level heuristic for staying sane in that environment, and one which I follow 95% of the time. If someone else follows it 99.9% or 100% of the time, and even a Johns Hopkins endorsement isn’t enough of a recommendation for them, I can understand that. Or maybe you’re a coward. I’m not saying doctors are *generically* cowards. My father is a doctor and he’s one of the bravest people I know. 
Every time there’s a typhoon or an earthquake in some terrorist-infested country on the other side of the world, he hops on a plane to go there and treat victims, sometimes before the rubble is even cold. If this was something simple, like treating river-blindness in war-torn parts of the Congo or containing an Ebola epidemic in Nigeria, I’m sure doctors would be all over it. But the Devil knows the weaknesses that lurk in the hearts of men. When he wants to scare off doctors, he doesn’t threaten us with insanely hard acts of self-sacrifice. He knows we love that stuff! He threatens us with *the prospect of looking slightly weird in front of our colleagues*. Here is a doctor who, if nominative determinism is any guide, knows a thing or two about diabolic temptation. Yet he talks about how he “almost felt dirty” prescribing fluvoxamine, even though he knew there was strong evidence supporting it. He worried the nurses were making fun of him (protip: if you are a doctor, the nurses are *always* making fun of you). He made the right decision in the end, but I wonder how many doctors in similar situations don’t. There are lots of reasons to feel nervous and awkward when you prescribe a medication your colleagues won’t. Maybe they think you’re a loose cannon who doesn’t care about evidence. Maybe they think you’re defecting against the team and going to get everyone in trouble. Maybe they’re remembering the ivermectin debacle and wondering if you secretly prescribe horse dewormer and vote Trump. Maybe they think you hold them in contempt for not being as up on the literature as you are, and only prescribing normal stuff. You should always consider your colleagues’ opinions insofar as they are good smart people and you want to use their expertise as a check on your own fallible mind. But it’s hard to keep that separate from considering your colleagues’ opinions in the sense where it would be socially awkward to disagree. And that’s how the Devil gets us. I faced the Devil last year and lost. In March 2020, when everyone was freaking out about ventilator supply, a team of very smart engineers asked me to prescribe them a medical-grade oxygen concentrator. I can’t remember the details, but something about trying to tinker around with a bunch of cheaper machines and jury-rig a budget ventilator, which they could pitch to people as a solution to the ventilator shortage. I punted. I said that this wasn’t really what the prescription system was for, you can’t prescribe things to healthy people just so they can tinker with them, and I might get in trouble with my clinic or the government or somebody. I told them to try to go through the proper channels for obtaining medical equipment, even though I was unsure whether those channels existed, and doubtful they would move with appropriate urgency. Later I thought about this, and realized the choice before me was “You can contribute to a desperately important project that might save thousands of lives, but only if you do something kind of weird that might get you in a tiny amount of trouble”, and I had said no. Devil 1, Scott 0. I faced the Devil the year before that, and I . . . well, let’s say it was a tie. I had a bunch of patients with treatment-resistant depression. Everyone knew ketamine was great for treatment-resistant depression. But the only people using it were anaesthesiologists giving it IV, which was inconvenient and unaffordable for most patients.
The FDA was trialing a new version of ketamine that could be given by psychiatrists via inhaler, and there was no reason whatsoever to think this wouldn’t work with normal ketamine, but nobody I knew was doing it and they all thought it seemed kind of weird. My severely depressed patients kept asking me for ketamine, and I kept saying “Sorry, I can’t prescribe that to you”, secretly ending the sentence with “…unless I use this one weird loophole I’ve never heard of anyone else using”. Finally I called up a compounding pharmacy near me and asked if anybody knew about this, and they said they knew a doctor who did, and did I want his phone number? I talked to him, and he said he’d been doing this for years and it had always gone well. For some reason, knowing that someone else was doing it was the permission I needed, I prescribed it to my patients, and it went well (I’ve since written up [a guide for others](https://lorienpsych.com/2021/11/02/ketamine/)). But I still didn’t have the courage to do the weird thing without knowing other people were doing it first. (When I finally got around to prescribing ketamine, one of my patients told me I’d given her her life back. Usually I love hearing that kind of thing. This time it was bittersweet, because I knew I could have given more patients their lives back if I’d done it earlier. There are a couple of people who had six months of terrible depression that I maybe could have prevented if I had more courage. That’s partly on the FDA for making poor decisions such that optimal treatment required virtue on the part of individual doctors. But mostly it’s on me, for not having it.) I will face the Devil in the future and I’ll fail again. Medicine is too big and complicated and scary to stray from the herd most of the time, and the sort of person who *never* fails at this problem is probably crazy, and constantly gives his patients snake oil or ivermectin or whatever. Doctors should generally stay within their area of expertise and doubt any argument leading them away from consensus. Certainly if you’re my patient and you somehow find this essay and read it back to me and tell me I need to prescribe you the latest whatever, I’m going to nope out of whatever you’re offering (especially if opioids are involved). Still, I do want to stress the “facing the Devil” aspect, where this is a difficult moral battle. I know that’s a weird way to frame a prescription decision. But CS Lewis is a leading expert on devils and [he was very clear](https://www.lewissociety.org/innerring/) that moral battles generally don’t happen in war-torn parts of the Congo. They happen in ordinary decisions about whether to do slightly unusual things that we worry might affect our social status among people we respect. (by the way, when the other psychiatrists in my clinic learned I was prescribing intranasal racemic ketamine, they all said that was cool, and a few asked me to walk them through the process). So if you haven’t been giving fluvoxamine to patients, please take a second, sit down, and decide whether it’s because: 1. You honestly think the risks outweigh the benefits. 2. You’re trying to follow some complicated meta-level heuristic that you need in order to practice good medicine or at least stay sane. 3. You’re scared. If it’s 1 or 2, you’re valid and I support you. If it’s 3, man up and write the prescription. 
If you’d feel happier doing this after you talked to a psychiatrist who has experience with this medication, feel free to email me at scott[at]slatestarcodex[dot]com. I have no personal experience using it against COVID, but I can direct you to the studies and protocols that explain how.
# Mantic Monday: Let Me Google That For You ### Let Me Google That For You New from Google this month: [Creating A Prediction Market On Google Cloud](https://cloud.google.com/blog/topics/solutions-how-tos/design-patterns-in-googles-prediction-market-on-google-cloud). Google announces that they’ve been running an internal prediction market for the past year, with “over 175,000 predictions from over 10,000 Google employees”. Most of it’s classified because they’re predicting stuff about Google’s corporate secrets, but some friendly Googlers were at least willing to walk me through the article and clarify pieces I didn’t understand. The market, called Gleangen, is actually the second prediction market Google’s tried. The first, in 2007, was called Prophit - the team included occasional ACX commenter Patri Friedman, who’s since moved into the charter city space. ([source](https://liberdon.com/@patrissimo/105152785611761192)) Prophit wound down because the founders left and nobody really knew what to do with it; you can read about some of their findings [here](http://www.eecs.harvard.edu/cs286r/courses/fall10/papers/GooglePredictionMarketPaper.pdf). In 2020, with all the uncertainty around coronavirus, some Googlers decided to try again. Gleangen is the result. Unlike most prediction markets, anybody can create a question on Gleangen. This usually goes badly: most people are terrible at writing questions with objective resolutions. Google manages by having a dedicated team of moderators who go over everything and amend it when needed. The market pays out in play money and the right to be [on a leaderboard](https://news.ycombinator.com/item?id=28538693#28540607). So far it’s not doing much else. The Googlers I talked to saw no evidence that company executives were paying much attention to it when making decisions. Why not? Hal Varian, Google’s chief economist, said in a [Conversation](https://conversationswithtyler.com/episodes/hal-varian/) with Tyler Cowen: > **COWEN:** Why doesn’t business use more prediction markets? They would seem to make sense, right? Bet on ideas. Aggregate information. We’ve all read Hayek. > > **VARIAN:** Right. And we had a prediction market [referring to Prophit in 2007]. I’ll tell you the problem with it. The problem is, the things that we really wanted to get a probability assessment on were things that were so sensitive that we thought we would violate the SEC rules on insider knowledge because, if a small group of people knows about some acquisition or something like that, there is a secret among this small group. > > You might like to have a probability assessment of whether that would go through. But then, anybody who looks at the auction is now an insider. So there’s a problem in you have to find things that (a) are of interest to the company but (b) do not reveal financially critical information. That’s not so easy to do. Is anyone doing anything with the market’s predictions? The most popular use case seems to be Google employees trying to get a sense of when they’ll have to work from home vs. come into the office, though there also seem to be a few cases of individuals consulting it for other career-relevant decisions. There aren’t a lot of great ways to test Gleangen’s accuracy, but the article at least included a basic calibration graph: With the lack of graph labels, I’m having trouble telling if this represents overconfidence or underconfidence, but it’s definitely *something*.
If this was a real-money market I would expect someone to have arbitraged this out. The article ends by suggesting if you contact Google maybe they’ll help you build an internal prediction market for your organization. They’re pretty vague about it, but you can read the [application form](https://docs.google.com/forms/d/e/1FAIpQLSdCXkcgB13FWhdCvOM81m1BNA5VkBKdrt0Pah8k7B5M66EmAg/viewform) to at least get a sense of what they’re thinking. ### Looking For Options Continuing a thread here from the last few posts: Suppose you want to predict something in 2100. It’s hard. Nobody wants to lock their money up for 80 years to get a 5% or 10% or even a 100% rate of return. You can slightly alleviate the problem by having the prediction market put the money in index funds while you wait, but anyone with hopes of beating the market probably wants to beat it by more than 5% per hundred years. One option we talked about last time is chained prediction markets. A prediction market today on what a market will say in 2025 on what a market will say in 2030 about what a market will say in […] about what a market will say in 2100. The active ingredient of this seems to be magnifying small fluctuations: if the event is 50% likely today, you can bet on whether it will be between 45%-49.9% likely in 2025 vs. 50%-55% likely, or whatever. Many people pointed out that this is equivalent to having a single prediction market operating from now until 2100, where you can buy options at any time (ie an option that the prediction market will be above 51% in one year). That seems right. My next question is: is there a structure where options directly move the market? The whole point of prediction markets is that their prices correspond to the chance of something happening. But if nobody is buying shares directly and all the action is in options trading on the side, that won’t work. Here I admit I don’t know much about markets or options - is there some way to combine regular trading and options trading into a single price, so that we could get the advantages of options trading, but traders would still be betting on the regular market and increasing our predictive accuracy? ### Prison Conditions One of Robin Hanson’s original dreams for prediction markets was conditional markets that could help set policy. For example “If we choose a left-wing Education Minister, what will test scores be in five years?” vs. “If we choose a right-wing Education Minister, what will test scores be in five years?” and then we have a good guess as to whether the left-wingers or right-wingers have better education policy on this axis. I rarely see people trying this, but here’s an exception from Metaculus (h/t [Nathan Young](https://policyforecast.substack.com/p/uk-policy-forecast-4-prisons-london)): So the market predicts that Conservatives will put slightly more people in prison than Labour. I don’t think this is interesting? Conservatives are right-wing and probably do the Tough On Crime thing. Labour is left-wing and probably does the End Mass Incarceration thing. You’re not really learning anything you wouldn’t have guessed otherwise. But it’s a proof of concept. You could do this for GDP growth, school test scores, and all the other things where we all agree what the good outcome is. I don’t know, maybe you wouldn’t learn anything there either. If Conservatives had better GDP growth, maybe leftists would say “of course, they’re the more capitalist party, they trade off environmental damage and inequality for a slightly hotter economy”.
If Labour had better test scores, maybe rightists would say “of course, they’re the more socialist party, they’ll tax you dry and throw some of the money at schools but it will still be inefficient.” But it would be an interesting experiment, and maybe put the size of the tradeoff into relief. I bet a lot of people would care a lot if the conservatives could produce 0.01% higher GDP growth vs. 5%. ### Fat Tails Nutritionist Stephan Guyenet has a great article out on the new generation of weight loss drugs. He thinks that semaglutide is only the beginning, and we’re entering a brave new world of diet pills that really work. Why am I mentioning this here? [His essay is on Metaculus](https://www.metaculus.com/notebooks/8702/the-promise-and-impact-of-the-next-generation-of-weight-loss-drugs/). It’s the latest in their line of “fortified essays”, a new genre they’re trying to create of argument backed by prediction markets and crowd forecasting. So for example, when Stephan talks about the promise of two new research chemicals, tirzepatide and bimagrumab, he’s able to punctuate his points with these graphs: Note the “community predicts” and “author predicts” captions at the bottom. In this case, we can tell that Guyenet is sticking to the market’s consensus in everything he says (or that the market blindly believes Guyenet, which is also valuable to know). If he wanted to go out on a limb and disagree with the market, he could do that too. Sometimes these predictions add some pretty relevant context. For example, after going into great detail about how many exciting new weight loss drugs we’re going to have soon, and supporting each of his points with a forecasting tournament that shows Metaculans agree with him, he ends on this note: Metaculus [thinks](https://www.metaculus.com/questions/8634/american-obesity-percentage-in-2032/) that despite all this great science, more Americans than ever will be obese in ten years (for context, 43% are obese today). Guyenet defies the market here. His distribution peaks around 35%, so he really believes things are going to get better (though you can tell he has low confidence, and also assigns decent probability to things getting worse). If I’m still writing these newsletters in ten years, remind me to revisit this! You can read more fortified essays about [solar power](https://www.metaculus.com/notebooks/8938/solar-power-current-challenges-encouraging-progress/), [monetary policy](https://www.metaculus.com/notebooks/8340/beyond-benchmark-rates-how-modern-central-banks-tighten-monetary-policy/), and [interstellar objects](https://www.metaculus.com/notebooks/8812/the-%25CA%25BBoumuamua-paradox-and-the-nature-of-interstellar-objects/), among others. By the way, if you’re a journalist, I think one of the most useful things you could do in this space right now would be to publish a fortified-style essay in a major publication. If you’re nervous getting in touch with Metaculus and would prefer that I introduce you, send me an email at scott@slatestarcodex.com. ### This Week In Markets Metaculus thinks hospital admissions this winter (ie from Omicron) will peak mid to late January. Since it takes a week or two for a COVID case to end up in the hospital, this implies Omicron infections will peak mid-January. That peak at the end is weird-looking and shows up on many Metaculus questions. 
I think this is something like all dates after April 1 getting crammed into April 1 for the graph, and even though very few people think it will be after April 1, that’s still a really high number of guesses when it’s all concentrated into an individual day. It’s weird and I wish they would find a way not to do that. Colleges are ditching the SAT as an admissions criterion in favor of what opponents would describe as doubling down on legacy admissions, class-based connections, and racism. How far will this process go? Right now, 16% of top colleges don’t require test scores. Metaculus expects that by 2030 it will be 70%. If we look at the distribution, we get a clearer view: Lots of people think it will be 100%, with a long tail of people predicting various other things. Will small modular nuclear reactors provide more than 1% of any country’s energy in 2030? Metaculus thinks probably not. ### Shorts **1:** Richard Hanania [interviews Robin Hanson](https://richardhanania.substack.com/p/futarchy-robin-hanson-on-how-prediction), inventor of prediction markets. **2:** Interesting new prediction market [Futuur](https://futuur.com/), I haven’t investigated it yet but I appreciate their silly pictures representing possible outcomes: **3:** Wikipedia: [Cimbrian Seeresses](https://en.wikipedia.org/wiki/Cimbrian_seeresses). “The seeresses led prisoners of war up a platform where they cut their throats and watching the blood stream down into a cauldron they made predictions about the future.” I’m tempted to disapprove of this, but first I want to know if they made it onto the Metaculus leaderboard.
# Open Thread 203 **1.** Public service announcement: In case you haven’t been paying attention or believing what you read, the most likely scenario is that Omicron is shaping up to be pretty bad (maybe less severe per case, but a *lot* more cases). Expect it to hit very suddenly and peak sometime in January. Zvi has [details](https://www.lesswrong.com/posts/XrzPey4cwhPeHL6QF/omicron-post-7) as usual; John Schilling is slightly [more optimistic](http://slatestarcodex.com/blog_images/schilling_comment.png) but only slightly. Consider taking whatever precautions you wish you’d taken back in March 2020 for a month of panic and maybe more lockdowns. Getting your booster might help, but do it *right now* to have it working in time and avoid the rush. **2.** A friend is trying to help get an Afghan scientist who she knows out of Afghanistan - they're worried he will face legal repercussions for helping foreigners and the previous Afghan government. He is a pretty talented person and could qualify for some sort of skilled immigrant pathway, or for some kind of humanitarian pathway for refugees in desperate need. He's not very good at English. If anyone has any experience or advice in this area, please contact mdl.swimmer963@gmail.com. **3.** Comment of the week is Coagulopath on [what happens when organisms get dropped in alien environments](https://astralcodexten.substack.com/p/ancient-plagues/comment/4014444). A brief excerpt: "Agricultural crops often do best far away from their native land, where pests and pathogens are adapted to them. New-world maize and cocoa are among the biggest crops in Africa. Conversely, most coffee is grown in South America. Sometimes being far from home is a good thing." **4:** You should now be able to edit your comments. Thank you, Substack!
# The Phrase "No Evidence" Is A Red Flag For Bad Science Communication **Related to:** [Doctor, There Are Two Types Of No Evidence](https://www.overcomingbias.com/2008/08/doctor-there-ar.html); [A Failure, But Not Of Prediction](https://slatestarcodex.com/2020/04/14/a-failure-but-not-of-prediction/). **I.** Click to [enlarge](http://slatestarcodex.com/blog_images/no_evidence_large.png) Every single one of these statements that had “no evidence” is currently considered true or at least pretty plausible. In an extremely nitpicky sense, these headlines are accurate. Officials were simply describing the then-current state of knowledge. In medicine, anecdotes or hunches aren’t considered “real” evidence. So if there hasn’t been a study showing something, then there’s “no evidence”. In early 2020, there hadn’t yet been a study proving that COVID could be airborne, so there was “no evidence” for it. On the other hand, here is a recent headline: [No Evidence That 45,000 People Died Of Vaccine-Related Complications](https://www.usatoday.com/story/news/factcheck/2021/09/10/fact-check-no-evidence-vaccine-related-complications-killed-45-000/8256978002/). Here’s another: [No Evidence Vaccines Cause Miscarriage](https://www.msn.com/en-us/news/us/fact-check-no-evidence-pfizer-moderna-covid-19-vaccines-cause-miscarriage/ar-AARMn2d). I don’t think the scientists and journalists involved in these stories meant to shrug and say that no study has ever been done so we can’t be sure either way. I think they meant to express strong confidence these things are false. You can see the problem. Science communicators are using the same term - “no evidence” - to mean: 1. This thing is super plausible, and honestly very likely true, but we haven’t checked yet, so we can’t be sure. 2. We have hard-and-fast evidence that this is false, stop repeating this easily debunked lie. This is *utterly corrosive* to anybody trusting science journalism. Imagine you are John Q. Public. You read “no evidence of human-to-human transmission of coronavirus”, and then a month later it turns out such transmission is common. You read “no evidence linking COVID to indoor dining”, and a month later your governor has to shut down indoor dining because of all the COVID it causes. You read “no hard evidence new COVID strain is more transmissible”, and a month later everything is in panic mode because it was more transmissible after all. And *then* you read “no evidence that 45,000 people died of vaccine-related complications”. Doesn’t sound very reassuring, does it? **II.** Unfortunately, I don’t think this is just a matter of scientists and journalists using the wrong words sometimes. I think they are fundamentally confused about this. In traditional science, you start with a “null hypothesis” along the lines of “this thing doesn’t happen and nothing about it is interesting”. Then you do your study, and if it gets surprising results, you might end up “rejecting the null hypothesis” and concluding that the interesting thing is true; otherwise, you have “no evidence” for anything except the null. This is a perfectly fine statistical hack, but it doesn’t work in real life. In real life, there is no such thing as a state of “no evidence” and it’s impossible to even give the phrase a consistent meaning. 
EG: **Is there "no evidence" that using a parachute helps prevent injuries when jumping out of planes?** This was the conclusion of [a cute paper](https://www.bmj.com/content/327/7429/1459?ijkey=c3677213eca83ff6599127794fc58c4e0f6de55a&keytype2=tf_ipsecsha) in the *BMJ*, which pointed out that as far as they could tell, nobody had ever done a study proving parachutes helped. Their point was that "evidence" isn't the same thing as "peer-reviewed journal articles". So maybe we should stop demanding journal articles, and accept informal evidence as valid? **Is there "no evidence" for alien abductions?** There are hundreds of people who say they've been abducted by aliens! By legal standards, hundreds of eyewitnesses is *great* evidence! If a hundred people say that Bob stabbed them, Bob is a serial stabber - or, even if you thought all hundred witnesses were lying, you certainly wouldn't say the prosecution had “no evidence”! When we say "no evidence" here, we mean "no really strong evidence from scientists, worthy of a peer-reviewed journal article". But this is the opposite problem from the parachute case - here we should stop accepting informal evidence, and demand more scientific rigor. **Is there "no evidence" homeopathy works?** No, [here's a peer-reviewed study showing that it does](https://pubmed.ncbi.nlm.nih.gov/30202036/). Don't like it? I have [eighty-nine more peer-reviewed studies showing that right here](https://pubmed.ncbi.nlm.nih.gov/9310601). But a strong theoretical understanding of how water, chemicals, immunology, etc. operate suggests homeopathy can't *possibly* work, so I assume all those pro-homeopathy studies are methodologically flawed and useless, the same way [somewhere between 16% and 89%](https://en.wikipedia.org/wiki/Replication_crisis#In_medicine) of other medical studies are flawed and useless. Here we should reject *journal articles* because they disagree with *informal* evidence! **Is there "no evidence" that King Henry VIII had a spleen?** Certainly nobody has published a peer-reviewed article weighing in on the matter. And probably nobody ever dissected him, or gave him an abdominal exam, or collected any informal evidence. Empirically, this issue is just a complete blank, an empty void in our map of the world. Here we should ignore the absence of journal articles *and* the absence of informal evidence, and just assume it's true because *obviously* it’s true. I challenge anyone to come up with a definition of "no evidence" that wouldn't be misleading in at least one of the above examples. If you can't do it, I think that's because the folk concept of "no evidence" doesn't match how real truth-seeking works. Real truth-seeking is [Bayesian](https://www.yudkowsky.net/rational/bayes). You start with a prior for how unlikely something is. Then you update the prior as you gather evidence. If you gather a lot of strong evidence, maybe you update the prior to somewhere very far away from where you started, like that some really implausible thing is nevertheless true. Or that some dogma you held unquestioningly is in fact false. If you gather only a little evidence, you mostly stay where you started. I'm not saying this process is easy or even that I'm very good at it. I'm just saying that once you understand the process, it no longer makes sense to say "no evidence" as a synonym for “false”.
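For the mechanically inclined, here's a minimal sketch of what an odds-form Bayesian update looks like, with completely made-up numbers:

```python
# Minimal sketch of odds-form Bayesian updating, with made-up numbers.
# Each piece of evidence multiplies your odds by its likelihood ratio:
# P(seeing this evidence | claim true) / P(seeing this evidence | claim false).
def update(prior_prob, likelihood_ratios):
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Alien abductions: tiny prior. A hundred eyewitnesses are evidence, but weak
# evidence - people report abductions at roughly the same rate whether or not
# aliens are real, so each report's likelihood ratio is barely above 1.
print(update(1e-9, [1.05] * 100))  # ~1e-7: still vanishingly unlikely

# Henry VIII's spleen: huge prior, no evidence gathered either way.
# The posterior just stays huge. "No evidence" is not a synonym for "false".
print(update(0.99999, []))  # 0.99999
```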
**III.** Okay, but then what? “No Evidence That Snake Oil Works” is the bread and butter of science journalism. How do you express that concept without falling into the “no evidence” trap? I think you have to go back to the basics of journalism: what story are you trying to cover? If the story is that nobody has ever investigated snake oil, and you have no strong opinion on it, and for some reason that’s newsworthy, use the words “either way”: “No Evidence Either Way About Whether Snake Oil Works”. If the story is that all the world’s top doctors and scientists believe snake oil doesn’t work, then say so. “Scientists: Snake Oil Doesn’t Work”. This doesn’t have the same faux objectivity as “No Evidence Snake Oil Works”. It centers the belief in fallible scientists, as opposed to the much more convincing claim that *there is literally not a single piece of evidence anywhere in the world* that anyone could use in favor of snake oil. Maybe it would sound less authoritative. Breaking an addiction to false certainty is as hard as breaking any other addiction. But the first step is admitting you have a problem. But I think the most virtuous way to write this is to actually investigate. If it’s worth writing a story about why there’s no evidence for something, probably it’s because some people believe there *is* evidence. What evidence do they believe in? Why is it wrong? How do you know? Some people thought masks helped slow the spread of COVID. You can type out "no evidence" and hit "send tweet". But what if you try to engage the argument? Why do people believe masks could slow spread? Well, because it seems intuitively obvious that if something is spread by droplets shooting out of your mouth, preventing droplets from shooting out of your mouth would slow the spread. Does that seem like basically sound logic? If so, are you sure your job as a science communicator requires you to tell people not to believe that? How do you know they're not smarter than you are? There's no evidence that they aren't!
# Ancient Plagues During our recent discussion of climate change, someone linked me to [this New York Magazine piece](https://nymag.com/intelligencer/2017/07/climate-change-earth-too-hot-for-humans-annotated.html) making the case for doomism. I disagree with it pretty intensely, but most of my complaints are already listed in the sidebar (some scientists also complained, so they had to add a lot of sidebar caveats in) and I don't want to belabor them. The section I find interesting is the one called Climate Plagues:

> There are now, trapped in Arctic ice, diseases that have not circulated in the air for millions of years — in some cases, since before humans were around to encounter them. Which means our immune systems would have no idea how to fight back when those prehistoric plagues emerge from the ice.
>
> The Arctic also stores terrifying bugs from more recent times. In Alaska, already, researchers have discovered remnants of the 1918 flu (they actually extracted it from the cadaver of a frozen woman), which infected as many as 500 million and killed as many as 100 million — about 5 percent of the world’s population and almost six times as many as had died in the world war for which the pandemic served as a kind of gruesome capstone. As the BBC reported in May, scientists suspect smallpox and the bubonic plague are trapped in Siberian ice, too — an abridged history of devastating human sickness, left out like egg salad in the Arctic sun.
>
> Experts caution that many of these organisms won’t actually survive the thaw and point to the fastidious lab conditions under which they have already reanimated several of them - the 32,000 year old "extremophile" bacteria revived in 2005, an 8 million-year-old bug brought back to life in 2007, the 3.5 million-year-old one that a Russian scientist self-injected just out of curiosity - to suggest that those are necessary conditions for the return of such ancient plagues. But already last year, a boy was killed and 20 others infected by anthrax released when retreating permafrost exposed the frozen carcass of a reindeer killed by the bacteria at least 75 years earlier; 2,000 present-day reindeer were infected too, carrying and spreading the disease beyond the tundra.

I'm a little nervous talking about this, because I am not a microbiologist. But I haven't seen the proper experts address this properly, so I'll try, and if I'm wrong you guys can shout me down. (Also, the real microbiologists are apparently “self-injecting [3.5 million year old bacteria] just out of curiosity” and we should probably stay away from them for now) I think we probably don't have to worry very much about ancient diseases from millions of years ago. Animal diseases can't trivially become contagious among humans. Sometimes an animal disease jumps from beast to man, like COVID or HIV, but these are rare and epochal events. Usually they happen when the disease is very common in some population of animals that lives very close to humans for a long time. It’s not “one guy digs up a reindeer and then boom”. If a plague is so ancient that it's from before humans evolved, it's probably not that dangerous. In theory, it could be dangerous for whatever animal it originally evolved for - a rabbit plague infecting rabbits, or an elephant plague infecting elephants. And then maybe after many rabbits are infected, some human might eat an infected rabbit and get unlucky, and the plague might mutate to affect humans.
But I don't think this is any more likely than any of the zillion plagues that already infect rabbits jumping to humans, and nobody is worrying about those. In b4 some medical student jumps in to tell me about leptospirosis. The story about anthrax is a distraction. The fact that someone got anthrax from a corpse frozen in permafrost is irrelevant; there is anthrax now, and you could get it from a perfectly fresh corpse or living animal if you wanted. It's adapted to animals and it can't spread from person to person. Just because you got an irrelevant-to-humans modern animal disease when you dug up a modern animal doesn't mean you're going to get a dangerous-to-humans disease from an ancient animal. But I'm more concerned about recent human plagues coming back. Not bubonic plague; that one is *another* distraction. The reason we don't get more Black Deaths isn't because Yersinia pestis died off or mellowed out. It's because we have good sanitation and pest control. And doctors whose knowledge of medicine doesn't begin and end with "look like a creepy bird". But the 1918 Spanish flu has, as far as I know, legitimately died out. Lots of people like saying that *in a sense* it's still with us. [This NEJM paper](https://www.nejm.org/doi/full/10.1056/nejmp0904819) (with a celebrity author!) points out that it's the ancestor of all existing flu strains. But most of these flu strains are less infectious than it was. This didn't make sense to me the first, second, or third time I asked about it: why would a flu evolve into an inferior flu? Sure, it might evolve into a *less deadly* flu because it's perfectly happy being more infectious but less deadly. But I think the Spanish flu was also especially infectious; so why would it evolve away from *that*? One possible answer is "because by 1919, everyone had immunity to the 1918 flu, so it evolved away from it - and now nobody has immunity, but it lost the original blueprint." The 1918 flu was a really optimal point in fluspace, but during all of history up until 1918, the flu's evolutionary hill-climbing algorithm didn't manage to find that point, and since flu has no memory it's not going to be any easier for it to find it the second time, after it evolved away from it. So plausibly, existing flus are strictly worse at their job than Spanish flu was, and digging up an intact copy of the latter would be really bad. And then there's smallpox. No mystery why smallpox died out - [we killed it](https://blog.jaibot.com/500-million-but-not-a-single-one-more/). But then we stopped vaccinating people against it, and now if it comes back it would be really bad. This actually raises a broader question: how worried should we be about getting smallpox from corpses and artifacts in general? Should we freak out every time we dig up an Egyptian mummy? [This paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3901489/) does our freaking out for us - they catalog several incidents of archaeological or incidental excavation of smallpox-infected corpses - including, yes, the mummy of Pharaoh Rameses V. Also: a family in Arkansas was going through an ancestor’s possessions. An envelope fell out of a book containing a note and some weird red stuff; the note said that it was smallpox scabs from a past infection, kept as souvenirs. This sounds like a scene for a horror movie aimed at epidemiologists. But nobody has ever found evidence of live viable smallpox virus on an artifact or corpse.
The article concludes:

> Archival specimens offer opportunities to delve into the past and capture a glimpse of the history of an eradicated disease. There are no published reports of residual live microbes found in archeologic relics. Furthermore, on the basis of experiences in the past several decades, risks for transmission of live organisms from such relics would seem to be nonexistent; nevertheless, archeologic specimens should be handled with caution. Each situation should be approached independently and with vigilance and attention.

So I think there's strong evidence that smallpox can't survive on relics in normal conditions. But what about frozen in permafrost? Experiments from back when smallpox walked the earth suggested it could survive a decade or more if you preserved it carefully. There's some evidence that [flu viruses can survive freezing and thawing](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3471417/). And [this story](https://www.npr.org/sections/goatsandsoda/2020/05/19/857992695/are-there-zombie-viruses-like-the-1918-flu-thawing-in-the-permafrost) mentions a bunch of scientists who tried to search for Spanish flu and smallpox in permafrosted bodies. They didn’t find any live viruses, but they were able to recover a few shreds of useful DNA. The good news is that these viruses probably can't be generically "released". If some dead body with smallpox starts thawing, it's not like the smallpox virus has been freed from its prison, genie-style, and can travel upon the air currents until it finds an unsuspecting host. You really have to be out there licking corpses. I think if something goes wrong, the third most likely vector will be curious Siberians who see a corpse half-hidden in the ice and go investigate. The second most likely vector will be archaeologists. And the most likely vector - by far - will be scientists investigating to see whether something could go wrong. [Here is a great article](https://www.adn.com/alaska-news/science/2020/03/22/how-an-alaska-village-grave-led-to-a-spanish-flu-breakthrough/) about a guy who digs up ancient Indian burial grounds, searching for samples of especially severe flus. If only we had some sort of cultural folk memory that warned people against doing that kind of thing!
# Open Thread 202 This is the weekly visible open thread. Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed. This one is even-numbered, so go wild - or post about whatever else you want. Also: **1:** Corrections from [Monday’s Model Cities post](https://astralcodexten.substack.com/p/model-city-monday-12621) - Praxis only received about $4 million in funding, and not directly from Thiel. I regret the error and am trying to get more information on their perspective. Also, since I last wrote about the Honduran political situation, the vote projections have changed, and the right-wingers will probably have at least 1/3 of Congress votes, preventing their opponents from getting the 2/3 majority they would need to legislatively oppose ZEDEs. I’ll have more about this next time I write about model cities. **2:** Thanks again to Lars for his recent Georgism posts. He wants me to add that [he found the Hagman citation he was looking for](https://twitter.com/larsiusprime/status/1469541646954115073), and it is “a giant anti-Georgist diatribe written as an authorial self-insert fan fiction, IN SPACE, confidently expounding upon how an LVT experiment failed on the planet Mars”. **3:** And several readers commented that they had been “georgepilled” - they ought to know that the historically accurate term is [“seen the cat”](https://www.henrygeorge.org/catsup.htm). Somebody [even made](https://www.reddit.com/r/georgism/comments/gi8var/geodsden_flag/) a mock Gadsden flag about it:
# Does Georgism Work, Part 3: Can Unimproved Land Value be Accurately Assessed Separately From Buildings? *[Lars Doucet won this year’s [Book Review Contest](https://astralcodexten.substack.com/p/book-review-contest-winners) with his review of Henry George’s [Progress and Poverty](https://astralcodexten.substack.com/p/your-book-review-progress-and-poverty). Since then, he’s been researching Georgism in more depth, and wants to follow up with what he’s learned. I’ll be posting three of his Georgism essays here this week, and you can read his other work at [Fortress Of Doors](https://www.fortressofdoors.com/)]* Hi, my name's Lars Doucet (not Scott Alexander), and this is a guest post in an ongoing series that assesses the empirical basis for the economic philosophy of [Georgism](https://en.wikipedia.org/wiki/Georgism). [Part 0 - Book Review: Progress & Poverty](https://astralcodexten.substack.com/p/your-book-review-progress-and-poverty) [Part I  - Is Land Really a Big Deal?](https://astralcodexten.substack.com/p/does-georgism-work-is-land-really) [Part II - Can Land Value Tax be passed on to Tenants?](https://astralcodexten.substack.com/p/does-georgism-work-part-2-can-landlords) **Part III - Can Unimproved Land Value be Accurately Assessed Separately from Buildings? 👈** (You are here) --- Okay, so land is a really big deal, and it looks like Land Value Tax can't just be passed on to tenants, which means Georgism works great in theory. But in order to implement it effectively, you need to be able to price all the land parcels, accurately. Or at least accurately *enough*. But how do you actually do that? Some friends suggested I get in touch with Ted Gwartney, former professor of Real Estate Appraisal at Baruch College, New York. He has an [MAI](https://www.appraisalinstitute.org/ai-masters-degree-program/) in Land & Commercial Appraisal from the [Appraisal Institute](https://www.appraisalinstitute.org) and is former president of the [Council of Georgist Organizations](https://cgocouncil.org). He has a lot of professional experience as an assessor in British Columbia, Southfield in Michigan, and Hartford, Bridgeport, and Greenwich in Connecticut. He was even a co-signer of this [famous open letter](https://en.wikisource.org/wiki/Open_letter_to_Mikhail_Gorbachev_(1990)) to Gorbachev in 1990 urging the Soviet premier to establish a Land Value Tax to provide a stable basis for the new economy as Russia struggled to rise from the collapse of communism. Other co-signers included four Nobel Laureates: [Franco Modigliani](https://en.wikipedia.org/wiki/Franco_Modigliani), [Robert Solow](https://en.wikipedia.org/wiki/Robert_Solow), [James Tobin](https://en.wikipedia.org/wiki/James_Tobin), and [William Vickrey](https://en.wikipedia.org/wiki/William_Vickrey), not to mention [William Baumol](https://en.wikipedia.org/wiki/William_Baumol) of [Baumol's cost disease](https://slatestarcodex.com/2017/02/09/considerations-on-cost-disease/). Unfortunately, the Russian authorities went with Harvard Professor Jeffrey Sachs' "[shock therapy](https://www.thenation.com/article/archive/harvard-boys-do-russia/)" instead, and the [rest is history](https://theconversation.com/the-wild-decade-how-the-1990s-laid-the-foundations-for-vladimir-putins-russia-141098), as anyone who lived through the post-Soviet chaos can tell you. Ted Gwartney also gives online seminars. 
To prep for this article, I attended his 5-week course *[Assessing Land Values - Principles and Methods](https://www.hgsss.org/assessing-land-values-principles-and-methods/)* from the [Henry George School of Social Science](https://www.hgsss.org), which I'll reference throughout this piece. Gwartney couldn't be more Georgist if he tried, so for balance, I looked up about a dozen research papers on the topic of land value assessment in Google Scholar, some of which are cited below. I also spent some time on the homepage of the [International Association of Assessing Officers](https://www.iaao.org) (IAAO), the international professional body for real estate assessors. Then I looked up the local policies of various appraisal districts in my home state of Texas to see how things are actually done in practice in my local area. Here's what I found.

* There are common principles that everybody (Georgist or not) agrees on
* Several promising new methods have come out in the last 15 years
* The actual practice in my own area kind of sucks
* The actual practice in some other places is pretty good
* We can probably improve on both the state of the art and the actual practice
* Georgists assert we're consistently undervaluing land basically everywhere

I'll cover specific case studies where Georgism has been successfully tried in a future article. I'll just note here that solid examples that uphold the purported benefits of Georgism in the wake of an LVT policy would be good evidence for accurate (enough) land assessment being feasible: "what works in practice can work in theory." # 1. The Basics of Assessment Pretty much everybody agrees on the basic algebraic formula for deriving land value: > **T**otal Value = **L**and Value + **I**mprovements Value The total value is whatever the property actually sells for. The value of improvements is the value of all of the buildings and other permanent structures and investments that sit on top of the land. The land value is the value of the location itself and any of its natural endowments. When two factors are known, you can calculate the third, which is then known as the *residual*. The high level strategy for doing valuations thus becomes to use whatever evidence you have to get at least two of these values. From there you can simply deduce the missing residual. The quality of your assessments will depend not only on the method you use and the expertise of your assessment officers, but also on your local policies. The IAAO lists the following as "[core principles](https://www.iaao.org/media/standards/Standard_on_Property_Tax_Policy.pdf)" that local assessment policies should ideally have: * Assessments based on market value * Frequent and regular (preferably annual) updates to assessments * A broad tax base with limited exemptions * Targeted, easily accessible relief programs for those who need assistance * Well managed, transparent, and adequately funded mass appraisal procedures Everyone is in further agreement about the three basic "approaches" to value estimation: the market approach, the cost approach, and the income approach. #### The Market Approach This is the most common approach. You gather a bunch of information about comparable properties, look at past selling prices and rents, and make adjustments for differences. This is greatly aided by modern computerized databases, as well as Geographic Information System (GIS) mapping and visualization tools.
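To make the residual logic and the market approach concrete, here's a minimal toy sketch. All numbers are invented, and real assessors make many more adjustments than this:

```python
# Toy sketch of the residual identity and a crude market-approach estimate.
# All numbers are invented for illustration.

# Residual logic: know any two of (total, land, improvements), derive the third.
total_value = 500_000         # what the property actually sold for
improvements_value = 320_000  # e.g. from a cost-approach estimate
land_value = total_value - improvements_value  # land as the residual
print(land_value)  # 180_000

# Crude market approach: infer land value per square foot from nearby sales
# of vacant (or teardown) lots, then scale to the subject parcel.
comparable_lot_sales = [  # (sale price, lot square footage)
    (210_000, 5_000), (188_000, 4_400), (240_000, 5_600),
]
price_per_sqft = sorted(p / s for p, s in comparable_lot_sales)[1]  # median
subject_lot_sqft = 4_800
print(price_per_sqft * subject_lot_sqft)  # estimated land value of subject
```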
Remember those spot checks I did in Part I to estimate the value of the land under a building in San Francisco using a nearby, similarly-sized empty lot? That was me (crudely) using the market approach. #### The Cost Approach In this approach, you estimate the cost of the buildings minus depreciation. Professionals who value residential and commercial buildings often rely on [Marshall & Swift's](https://www.corelogic.com/wp-content/uploads/sites/4/downloadable-docs/marshall-swift/1-msrch-1909-01-marshall-swift-residential-cost-handbook_scrn.pdf) [Valuation Service](https://www.corelogic.com/wp-content/uploads/sites/4/downloadable-docs/marshall-swift/1-msvs-1909-01-marshall-swift-valuation-service_scrn.pdf). This is a fancy calculator where you plug in all the different characteristics of your building, and it spits out a cost estimate. You can think of it as a [Kelley Blue Book](https://www.kbb.com) for buildings. Once you have the cost of your building, you apply certain widely-accepted depreciation formulas based on its age. The cost approach has two chief limitations. The first is that it requires a lot of detailed information about the building. The second is that the cost to build something isn't necessarily the same as what it would sell for in today's market. Therefore, this approach tends to overestimate building values and underestimate land values, as discussed in detail in Part I. #### The Income Approach In this approach, you look at the net income (rent - expenses) that a commercial or residential property generates and then use the prevailing capitalization rate of the area to get the property value. You typically use this formula: > **V**alue = **I**ncome / **R**ate This gives you the total property value, and from there, you can use one of the other two approaches to separate land value from building value. Crucially, any observed land or property tax needs to be factored into the observed "income" portion. Even if the state is collecting the tax, it's part of the flow that originates from the property, and thus affects the full untaxed market value of the property. Naively you might expect a 100% Land Value Tax to drive itself to zero because it also drives down the purchase price of the land to approximately nothing. To avoid this, you figure out the capitalized value of the LVT that's already been applied to get the untaxed land value. --- These are the basic methods that we've used to value properties "by hand" over the last century, and there are many who claim that these are good enough. As for separating land from buildings, Ted Gwartney prefers to estimate the value of land directly whenever possible and derive the building value as a residual. He claims it's easier to assess land than buildings, because in most cases, the value of land is derived almost entirely from the location. Land doesn't have as many fiddly variables, like how much damage your roof took from the last hailstorm and whether you've remodeled your bathroom in the past five years. But let's dive deeper. # 2. Assessing the Assessments Okay, so once you've made all your assessments, how do you ensure they're accurate? You test them. We have two main signals: ongoing transaction data from the market, and complaints from property owners about the assessed values. The typical way you compare yourself against market transactions is "Ratio Studies", which you can read more about in this [IAAO paper on the subject](https://www.iaao.org/media/standards/Standard_on_Ratio_Studies.pdf/).
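As a toy illustration of what a ratio study computes, here's a minimal sketch using two standard ratio-study summary statistics: the median assessment ratio (a median near 1.0 suggests no systematic bias) and the coefficient of dispersion, or COD (lower means more uniform assessments). The data is invented:

```python
# Toy ratio study: compare assessed values against subsequent sale prices.
import statistics

pairs = [  # (assessed value, later sale price); invented numbers
    (180_000, 200_000), (240_000, 250_000), (95_000, 110_000),
    (410_000, 400_000), (150_000, 170_000),
]

ratios = [assessed / sale for assessed, sale in pairs]
median_ratio = statistics.median(ratios)
# COD: average absolute deviation from the median ratio, as a percentage of it.
mad = statistics.mean(abs(r - median_ratio) for r in ratios)
cod = 100 * mad / median_ratio

print(f"median ratio: {median_ratio:.2f}")  # systematic over/under-assessment
print(f"COD: {cod:.1f}")                    # spread around the median
```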
As for complaints, you'd think property owners would always complain out of pure self-interest, but apparently, only a minority do, and assessors actually build in an expectation for a certain number of complaints as a chief source of feedback. If complaints are below a certain threshold (2% according to [Hefferan and Boyd](https://www.emerald.com/insight/content/doi/10.1108/02637471011051291/full/html)), that's apparently a sign that you're doing well. During Ted Gwartney's seminar, someone asked him about what tends to drive objections:

> ATTENDEE: Can you tell us what fraction of property owners who request a lower assessment argue that their land assessment is too high?
>
> GWARTNEY: A very small number. Almost all of the adjustments that are made are made because of improvements. Most of the arguments when you go to an appeal are about the building, its condition, or what’s in it or whatever. Generally the land is accepted by people, they realize it’s fair by looking at what other parcels are assessed for and most people don’t argue it. They might say he has a better view than I do or whatever, but usually [the objection is] because there’s some physical difference or condition in the structure.

So if the public accepts your valuations, and new market signals match your assessments, then they can be said to be accurate. But how precise do they need to be? Here's Gwartney's opinion:

> ATTENDEE: How accurate do assessments have to be to get the benefits of Georgism?
>
> GWARTNEY: You have a lot of wiggle room. It doesn’t have to be perfectly precise. The idea is to improve on what’s already being done. You get immediate feedback that what you’re working on is making good results.

This is a part I'd like to know more about. Is plus or minus 5% of the true land value "good enough"? What about 15%? Or 1%? If land is under-assessed, then we basically have the same problem as the status quo, and we're not really any worse off. But if land is over-assessed, we might drive people off of it, which is bad. So it seems our main job is to avoid *over-assessing* the value of land. Georgists often talk about "100% LVT," but during practical discussions, it seems that their wildest dream is just to get as high as 85%. That would leave a pretty big safety margin for not over-taxing the land, even if you over-assessed it. Here's a graph. If you under-assess a property's land by 15%, the assessed value is 85% of the true value. Take 85% of that and now you're collecting 72.25% of land rents. If you over-assess a property's land by 15%, the assessed value is 115% of the true value. If you take 85% of that, you get 97.75%. Collect all that and you're still leaving 2.25% of the land rents on the table, but you're not going over. This is comforting, but frankly, all the evidence I've seen so far suggests that we're chronically and consistently *under*-assessing the value of land. But even if we can assess things accurately, it's a moot point if we can't afford to hire enough assessors to do the job thoroughly. --- # 3. How Many Assessors do you need? Another critique about assessment is that you're going to need an army of property assessors peeking inside windows at all hours of the night, and that it's all going to be ruinously expensive. Here's a slide from Gwartney's presentation, which is itself taken from an IAAO conference.
Gwartney says that when he was the assessment commissioner and chief executive officer in British Columbia, he had a staff of 690, and that this number has not changed significantly since then. British Columbia has a population of about 5 million, so that's 1 assessment officer for every 7,250 British Columbians. For context, the IRS has a staff size of 74,454, or about one IRS agent for every 4,425 Americans. I don't have data on how many property tax assessors the USA has in total, but the above slide suggests British Columbia's figure is on the high end. As for how you actually do assessments, sure, you *can* send out an army of assessors to value each and every property in your jurisdiction by hand. However, not only is that labor-intensive, it's also a recipe for inconsistency. Whatever method you're using to value properties needs to be consistent and standardized across all properties, so you don't have sharp discontinuities on the assessment map that are due solely to differences between Assessor Fred and Assessor Sally's personal methodologies. Thankfully, we're living in the modern age, and we have some fancy new tools at our disposal. # 4. Modern Technology Georgists were doing split-rate assessments to allegedly good success long before the rise of the computer, such as [J. J. Pastoriza's effort in setting up a Georgist tax regime in Houston, Texas in 1911](https://twitter.com/larsiusprime/status/1427107150053183505). Today, we have spreadsheets, property value databases, GIS mapping visualizations, regression analysis, machine learning...the works. According to Gwartney, the Canadian province of British Columbia has revalued all its land and all its property on an annual basis simply by using computers and market analysis, ever since he first helped them set up their system back in 1975. Not every jurisdiction revalues their land this thoroughly and this often, but Gwartney says there is no significant technical or staffing barrier standing in the way. Gwartney has been retired for some time, so his seminar didn't cover all the latest cutting-edge techniques that have come out in the last few years. Let's look at some recent papers and see what new tools assessors have to play with. The first on my list is *[Land Value Appraisal Using Statistical Methods](https://edoc.hu-berlin.de/bitstream/handle/18452/20511/FORLand-2019-07.pdf?sequence=1&isAllowed=y)* by Kolbe, Schulz, Wersing, and Werwatz (2019). This is a study on mass appraisal techniques using real estate transaction data from Berlin, Germany. It claims that not only are the results cheaper and faster to generate than those done by conventional property assessment methods, but they are also no less accurate than those done "by hand" by experts. Kolbe et al. assert that, provided you have access to high quality market transaction data, you can perform accurate and efficient mass appraisals of land values. They chose Berlin because it "has a very effective system of property transaction data collection and storage," in contrast to other parts of Germany. They cite some prior work by [Almy (2014)](https://www.oecd-ilibrary.org/docserver/5jz5pzvr28hk-en.pdf?expires=1637353213&id=id&accname=guest&checksum=A29E3A050D80102CAD3B1B6823A9E76D) studying Canada, the Netherlands, and the United States, suggesting that the assessment cost per property can be brought down to 20 Euros, 25 times cheaper than the figure some other people ([Fuest, et al. (2018)](https://www.ifo.de/DocDL/ifo-studie-2018-fuest-etal-grundsteuer.pdf)) assert.
Given an average tax receipt of 2,000 Euros per property, this means that the assessment cost should represent only about 1% of the funds raised. Is that good? Let's take this assertion at face value for the moment and compare it to the cost of the IRS. Federal tax receipts in 2020 were $3.42 trillion, and operation costs for the IRS were [$12.3 billion](https://www.irs.gov/statistics/irs-budget-and-workforce), or 0.36%. However, the IRS outsources most of the labor of tax preparation to the taxpayers themselves, with compliance costs estimated between [$200 billion](https://www.americanactionforum.org/research/tax-day-2018-compliance-costs-approach-200-billion/) and [$400 billion](https://www.forbes.com/sites/kellyphillipserb/2016/06/20/report-americans-spend-more-than-8-9-billion-hours-each-year-on-tax-compliance/?sh=6d89670e3456) a year, to the [delight of Intuit](https://www.nytimes.com/2021/07/19/opinion/intuit-turbotax-free-filing.html). Add that up and the total cost of federal tax collection to the economy is anywhere between 6% and 12% of the amount it raises. And what about sales tax? According to a [2006 report by PricewaterhouseCoopers](http://www.netchoice.org/wp-content/uploads/cost-of-collection-study-sstp.pdf):

> The study finds that the national average annual state and local retail sales tax compliance cost in 2003 was 3.09 percent of sales tax collected for all retailers, 13.47 percent for small retailers, 5.20 percent for medium retailers, and 2.17 percent for large retailers

So a compliance cost of 1% would be way more efficient in terms of collection costs than the other two most common forms of taxation, and taxpayers don't even have to do anything themselves, other than pay the bill. Alrighty, how about the accuracy? The authors cite two international examples, Australia and Lithuania, as among the few countries in the world that have both a Land Value Tax and statistical methods for mass appraisals. [Hefferan and Boyd (2010)](https://www.emerald.com/insight/content/doi/10.1108/02637471011051291/full/html) assert that objections to assessments from property owners in Australia are less than 1%. I'm willing to buy the improved efficiency claims just by taking a look at some methodologies. It seems reasonable that computerized records and algorithms can cut costs significantly; the real question is whether you're trading off accuracy. The other papers I found on the subject are [Bencure, et al (2019)](https://www.researchgate.net/publication/334304723_Development_of_an_Innovative_Land_Valuation_Model_iLVM_for_Mass_Appraisal_Application_in_Sub-Urban_Areas_Using_AHP_An_Integration_of_Theoretical_and_Practical_Approaches) in BayBay City, Philippines, [Kilić, et al (2019)](https://hrcak.srce.hr/index.php?show=clanak&id_clanak_jezik=324144) in Croatia, [Yalpir & Unel (2017)](https://www.isites.info/PastConferences/ISITES2017/ISITES2017/papers/C3-ISITES2017ID307.pdf) in Konya, Turkey, and [Raslanas et al. (2014)](https://www.tandfonline.com/doi/abs/10.3846/ijspm.2010.13) in Vilnius, Lithuania. Let's dive in and examine some methods. # 5. Mass Appraisal Methods Here are some of the latest mass appraisal methods cribbed from the research papers listed above. All of these are based on taking market transaction data, plotting them out on a map, and running computations over them to estimate valuations for the properties you don't have known values for. Furthermore, all of these methods are able to estimate land and building values separately.
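Before we get to the individual methods, here's a minimal toy sketch of the shape they all share: fit observed sale prices as a function of property characteristics, then read off the land and improvements contributions separately. The first method below, Multiple Regression Analysis, is an elaborated version of exactly this. Everything here (data, features, the lack of an intercept term) is invented for illustration:

```python
# Toy mass appraisal via ordinary least squares: fit sale price as a linear
# function of land-related and building-related features, then split each
# property's predicted value into a land part and an improvements part.
import numpy as np

# columns: lot sqft, proximity score (land-ish); building sqft, age (building-ish)
X = np.array([
    [5000, 0.9, 2200, 10],
    [4400, 0.4, 1800, 40],
    [5600, 0.7, 2600,  5],
    [3000, 0.2, 1200, 60],
    [6100, 0.8, 2400, 20],
], dtype=float)
prices = np.array([520_000, 310_000, 560_000, 150_000, 540_000], dtype=float)

# least-squares fit: price ≈ X @ coef (no intercept, for simplicity)
coef, *_ = np.linalg.lstsq(X, prices, rcond=None)

land_part = X[:, :2] @ coef[:2]          # contribution of the land features
improvements_part = X[:, 2:] @ coef[2:]  # contribution of the building features
for lv, iv in zip(land_part, improvements_part):
    print(f"land ≈ {lv:,.0f}, improvements ≈ {iv:,.0f}")
```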
**Multiple Regression Analysis** This paper by [Yalpir and Unel](https://www.isites.info/PastConferences/ISITES2017/ISITES2017/papers/C3-ISITES2017ID307.pdf) out of Turkey gives a straightforward example of using Multiple Regression Analysis for land valuation. For those of you who didn't study math, let me explain regression analysis. This is a family of mathematical models where you basically take a data set, ask the question "what mathematical formula would best fit this data," choose a basic equation model, and then have a computer search for a set of coefficients that "best fit" that curve to the data with the least amount of error. The simplest example is using linear regression on a scatterplot of observed data points to fit a trend line. This is a common exercise in freshman physics and statistics classes. You can use more complicated versions of this numerical method to take a big bag of observations (real estate sales) and use "multiple regression" to tease out  dependent variables (land value and improvements value) based on the independent variables (size, location, age, number of bedrooms) of your observations. In this case the team identified about a hundred different factors that can affect the price of a property: Then you create an entry for each property, fill in the values for each of those characteristics, and run it through the regressor. Take note of how many of these factors start with the words "proximity to." Each of these can be calculated automatically just by knowing where the property is on a map, and each of them is an independent contributor to the value of the property's location. The next step is to generate individual "index maps" that combine various related features into combined heat maps. Then you run everything through and see if it works. You can get the land share of the final value by combining the contributions of all the individual factors that you associate with "land," such as proximity to important things. In the verification section the authors say: > As a result of the analysis, since the significance level (0.000) p <.05, corresponding to the F values in the ANOVA test, indicates that the regression analysis is appropriate and the models are significant. The criteria that make up the model account for about 85% of the market value and 15% cannot be explained for reasons such as economic, non-existent data and unearned income. Unfortunately, they don't say anything about how accurate their model is for assessing land values specifically. Otherwise, this is a pretty good example of using the Multiple Regression method for estimating the individual contributions of various factors to overall property values. Gwartney says Multiple Regression Analysis was a standard method he typically used, of which this specific paper is just one example. **Nonparametric kernel regression** This will be a method familiar to the programmers in the audience who have any experience with image processing algorithms. Here's an example from this old [Gamasutra article](https://www.gamasutra.com/view/feature/3102/four_tricks_for_fast_blurring_in_.php?print=1): The basic idea here is to take a matrix of numbers, called a "kernel", and run that over every pixel in a source image. The kernel tells you how strongly to weight all of the source pixel's neighbors to compute a final result for that position. A simple "box blur" is a kernel where every value is 1 (meaning it averages the values of all neighboring pixels within a range). 
The more subtle gaussian blur illustrated above uses a two-dimensional normal distribution of values so that each pixel is most affected by those nearest to it. So let's apply the same principle to land valuations. If you have a map with lots of transaction data of pure land sales–defined as sales of either vacant land or teardown properties (where the building value is essentially zero)–then you can use a special kernel filter to smoothly interpolate land values across the region. So you basically have a smooth curve that mostly favors close-by points, tapers off a bit, and then disregards anything outside a certain distance entirely. The big assumption here is that land values change smoothly and do not change suddenly across very short distances. There are, in fact, locations with sharp jumps in value (any town with an "other side of the tracks," for instance). But for cases where we know a priori that land values change smoothly, this method is appropriate. No other prior restriction is placed on the form of the land value map, however, and this is why it's called "nonparametric." Here's an illustration. The outer box is the entire search distance that the kernel considers, and the circles represent the falloff of the curve itself. The size of the box is called the "bandwidth" and is set by the user. Everything outside of it will have zero influence on the kernel's output at any given location. This method operates on the same basic logic that I used when I hand-estimated the land value of that San Francisco house in Part I based on the value of the empty lot next door. However, it makes the whole procedure systematic. It can easily and accurately estimate the land value of a property with a big fat building on it simply by smoothly interpolating the known values of the nearby parking lots. Of course, it has limitations. First and foremost, it's a highly local operation, so if you have properties you're trying to value that don't have nearby pure land sales data, you can't really do much with this. Also, most people assume that city centers have fewer market transactions for undeveloped land than the countryside, as did I until I read that paper by Albouy in Part I. But in any case, this is just one method in your toolbox and might not be sufficient by itself. Its key advantage is that it works directly from true market data for land and doesn't need or want any other subjective data. In the end, basic kernel estimation just fills in the land value of unmeasured locations with a local weighted average of known locations. **Nonparametric adaptive regression** Kolbe, et al. build on the kernel regression method with a technique called Adaptive Weights Smoothing (AWS), which runs in several iterations and adds additional weight to any observed data points that are sufficiently close to the point being estimated. I'm not 100% sure about what all the math means, but it seems like it's basically a "smarter" version of the basic kernel method. Left: Nonparametric kernel regression, Right: Adaptive Weights Smoothing. I think the authors goofed and printed the same figure twice with different headings because they're identical if you overlay them in Photoshop. **Semiparametric regression** Now, the above two methods assume you have plenty of "pure" land sale records to work with. But if you're trying to work out prices in the city center, you've probably mostly got land and buildings mixed together.
To do this effectively, we need more data, and this is where the "parameter" in "semiparametric" comes in. The model described in Kolbe et al. seems like a flavor of multiple regression analysis that takes the price, the location, and various characteristics of the building and feeds them into a regressor. But we've got "semi" parametric here. What does that mean? Well, if you already know how certain relationships between the data work a priori, it's better to enforce those relationships yourself rather than leave it to the computer. Here, we enforce the assumption that if two properties are right next to each other, then the value due to location is going to be essentially identical. This algorithm starts by ordering things geographically and then working out the differences in observed price by regressing on the difference between remaining property characteristics. In this method, the power of "location, location, location" is not something we're leaving to the regressor to discover by itself. Looking at the results of the semiparametric regression method, we can see some significant differences from the simple kernel-based model. As you can see above, this gives you more detailed and likely more accurate results, and you're better able to assess the values of properties with buildings on them, even in the absence of pure land sales. This technique is more complicated and bakes in assumptions about the power of location, but otherwise doesn't assign subjective human weights to the various property characteristics. The chief human bias comes in the form of deciding *which* property characteristics are measured and made legible to the model in the first place. Okay great, but how accurate are the above three methods? Their main point of comparison is this thing called the "Bodenrichtwerte," or BRW. I think that means "ground-level-values" in English, and it's an expert-assessed map of land values for Berlin done the traditional way. The nonparametric kernel regression method has a correlation of 0.704 with the traditional method and has the added disadvantage that it's not able to produce estimates for the city center, only the outlying areas. Furthermore, the BRW map does show sharp discontinuities, which is another knock against the kernel method, at least for the city center. What about the iterative method? Kolbe et al. find that "the agreement between [Adaptive Weights Smoothing] land value estimates and, both, land prices and BRW land values is fairly good for all values of λ." Doing some quick checks, their values seem to be within about 85% of the BRW values. A different Kolbe et al. paper called *[Identifying Berlin's land value map using adaptive weights smoothing](https://link.springer.com/article/10.1007/s00180-015-0559-9)* goes into more detail and claims to give "similar" values to that of the BRW. For the semiparametric method, they "found a strong positive correlation of 0.845" between their numbers and a previously expert-assessed set done using the traditional method. That sounds pretty good. It seems their margin for error is about plus or minus 15% compared to the traditional expert method. I'd like to see more direct comparisons against market transactions themselves, though, because if the prior expert assessments are wrong, then the main achievement here is improved efficiency, not accuracy. However, this method doesn't seem to be dramatically *less* accurate than the old way of doing things.
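Before moving on: to make the kernel idea from a few paragraphs back concrete, here's a minimal sketch of that family of estimators - a Gaussian-weighted average of nearby pure land sales, with zero influence outside a search bandwidth. Toy data and a hand-picked bandwidth; the papers above do this much more carefully:

```python
# Toy nonparametric kernel estimate of land value at a query location:
# a Gaussian-weighted average of nearby pure land sales (vacant lots or
# teardowns), truncated beyond a search distance. Data is invented.
import math

land_sales = [  # (x km, y km, observed price per square meter)
    (0.0, 0.0, 900), (0.5, 0.2, 840), (1.2, 0.9, 610),
    (2.0, 1.5, 400), (3.1, 2.2, 260),
]

def estimate(qx, qy, bandwidth=1.0):
    num = den = 0.0
    for x, y, price in land_sales:
        d = math.hypot(x - qx, y - qy)
        if d > 3 * bandwidth:      # outside the search box: zero influence
            continue
        w = math.exp(-(d / bandwidth) ** 2 / 2)  # Gaussian falloff
        num += w * price
        den += w
    return num / den if den else None  # None: no pure land sales close enough

print(estimate(0.3, 0.1))  # interpolates from the nearby sales
print(estimate(9.0, 9.0))  # far from all data: None
```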
The last three models came from the Berlin case study, where you have excellent market transaction data in an extremely wealthy and high-trust society. But what if you're trying to assess land in a developing nation with poor market transaction records, weak institutions, and widespread poverty?

**Innovative Land Valuation Model (iLVM)**

This is the particular name of the method described in *[Development of an Innovative Land Valuation Model (iLVM) for Mass Appraisal Application in Sub-Urban Areas Using AHP: An Integration of Theoretical and Practical Approaches](https://www.researchgate.net/publication/334304723_Development_of_an_Innovative_Land_Valuation_Model_iLVM_for_Mass_Appraisal_Application_in_Sub-Urban_Areas_Using_AHP_An_Integration_of_Theoretical_and_Practical_Approaches)* by Bencure, Tripathi, Miyazaki, Ninsawat, and Kim. They used Baybay City, Philippines, as their case study. Whereas the previous models are very "hands-off" and let the computer work out the relationships between prices and property characteristics, here you get expert human opinion directly involved in building the model, baking in weights that directly embody judgments like "properties next to major roads are more valuable." These judgments are based on expert opinions that presumably come from observed experience but are a priori judgments nonetheless.

Here, look at this big complicated flowchart. The "Analytic Hierarchy Process" in the box on the left is a particular kind of method for getting experts to set weights. The authors give this reason for using it:

> Despite criticism pinpointed by other scholars, the AHP remains the commonly used in many research fields and practical applications. This is because the AHP: (1) overcomes human difficulty in making simultaneous judgment among factors to be considered in the model; (2) is relatively simple as compared to other MCDA [multi-criteria decision analysis] methods; (3) is flexible to be integrated in various techniques such as programming, fuzzy logic, etc.; and (4) has the ability to check consistency in judgment

After identifying a list of "factors" that can affect land value, they group them into taxonomical buckets: Note that certain factors like "Coastline" appear in multiple buckets; this captures the various influences a characteristic can have. For instance, land on the coast tends to be more economically valuable because of tourism, shipping, fishing, etc., so that goes under "economic." But land that's next to the coast is also more likely to flood, so it also goes under "environmental." And then there are various land use restrictions that apply specifically to coastal areas, so it goes under "legal" as well. In this way, a single factor like "the property is on the coastline" can have both positive and negative effects on land value (e.g., it's more economically valuable but it also might flood, and there are certain things you aren't allowed to do there).

The next step is to set down some rules for how sensitive each factor is to location and distance. So here we can see that the economic benefit of being on the coast is most strongly felt if you're within half a kilometer of the ocean, but the environmental effect (e.g., risk of flooding) is most strongly felt when you're within 0.03 kilometers. And so on and so forth. Your experts help you work out all these rules. Note that for a few of these factors (such as land use and slope), you use metrics other than distance (e.g. land use classification and grade).
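To make the AHP step less mysterious, here's a minimal sketch of how expert pairwise judgments turn into weights. The factors, the comparison matrix, and the parcel scores below are all invented for illustration–the paper's actual hierarchy is far bigger:

```python
import numpy as np

# Hypothetical pairwise-comparison matrix from an expert panel, on Saaty's
# 1-9 scale: entry [i][j] says how much more important factor i is than j.
factors = ["road access", "coastline", "flood risk"]
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# AHP weights are the principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()

# Saaty's consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI,
# where RI is a tabulated random index (0.58 for n = 3). CR < 0.1 is the
# usual threshold for "the experts aren't contradicting themselves."
n = len(factors)
CR = ((eigvals[k].real - n) / (n - 1)) / 0.58

# With weights in hand, a parcel's composite score is just the weighted sum
# of its 0-5 factor scores, which then gets scaled against market prices.
scores = np.array([4.0, 2.0, 1.0])  # hypothetical 0-5 scores for one parcel
print(dict(zip(factors, w.round(3))), "CR =", round(CR, 3))
print("composite score:", round(np.dot(w, scores), 3))
```

That consistency ratio is the "ability to check consistency in judgment" the authors mention above.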
Then you take all that stuff and assign everything a value between 0 and 5. Your team of experts then uses this table to come up with a set of weights for everything. What essentially comes out of this is a big linear equation with a bunch of coefficients for every one of your factors, which is then broadly fit to the observed market prices. When you're done, you can take any property on your list, multiply each of its characteristics by its respective weight, run that through your equation, and calculate the predicted price of the land.

So how accurate is it? The authors compare it to standard Multiple Regression Analysis and claim it fares better. The Root Mean Square Error is quite a bit lower than MRA's. In addition, I *think* it's also saying that the MRA algorithm decided that only four of the factors were significant and basically ignored all the rest. By contrast, iLVM was able to maintain contributions from all the factors, because it doesn't leave that decision to the computer. I'm not 100% sure; it's not clear from the paper. The authors claim that about 67% of the variability is explained by their model, but they note that there are some areas where the model can be off by more than a factor of 1.0 in either the positive or negative direction.

One thing that's kind of fun about this model is that you can make neat graphs like this that show the individual contribution of each factor:

The main downside to this model is that it relies on a whole lot of subjective expert opinion and can be questioned on that basis. That said, it can be cheaply deployed in a transparent and consistent way across a large area. You can see why that's attractive for a developing nation with weak institutions and poor market transaction records; the argument is that this is a significant improvement over the former status quo. I wonder how well this model performs when you feed it better market transaction data, and how that would compare against all the other methods under identical conditions. More research is needed.

---

Rather than drag you through a bunch more research papers, I'll just leave these others I found cited in the above studies:

* [Kilić et al. (2019)](https://hrcak.srce.hr/index.php?show=clanak&id_clanak_jezik=324144) - Fuzzy expert system for land valuation in land consolidation processes
* [Aragonés-Beltrán et al. (2008)](https://www.aeipro.com/files/congresos/2009badajoz/ciip09_0726_0737.2499.pdf) – Analytic Network Process (ANP) based on Multiple Criteria Decision Analysis (MCDA)
* [Xue et al. (2008)](https://www.researchgate.net/publication/292872254_Land_evaluation_based_on_Boosting_decision_tree_ensembles) - Land valuation using C5.0 with a Boosting decision tree
* [Kettani and Khelifi (2001)](https://www.semanticscholar.org/paper/PariTOP:-A-goal-programming-based-software-for-real-Kettani-Khelifi/ede90349ef90887d42000c9a57fddbc109bfbee7) - A "decision-support system" called PariTOP
* [Hefferan and Boyd (2010)](https://doi.org/10.1108/02637471011051291) - Property taxation and mass appraisal valuations in Australia - adapting to a new environment
* [Almy (2014)](https://www.oecd-ilibrary.org/docserver/5jz5pzvr28hk-en.pdf?expires=1638944897&id=id&accname=guest&checksum=094DF834F67B533FED290FFCF9A949A5) - Valuation and Assessment of Immovable Property

Not to mention [Fundamentals of Mass Appraisal](https://www.iaao.org/Store/detail.aspx?id=BK0098), a literal textbook published by the IAAO, written by Gloudemans and Almy in 2011. I've only scratched the surface here.
There are a whole lot more methodology papers out there, and this is just a sample of the ones I happened to come across. They seem to fall into either "hands-off" or "hands-on" approaches, depending on how much direct human judgment you want to bake into the system.

---

So, can we accurately assess land and improvements separately? I think it's quite plausible but not a slam dunk. That said, if the objection is, "valuing land separately from improvements is fundamentally impossible, and we can never get better at it, so we shouldn't try," I think that's plainly ruled out. We clearly have a variety of methods at our disposal that seem reasonably accurate. Each of them has particular strengths and weaknesses, and each directly addresses shortcomings of prior methods. All of this implies that this is something we can continue to improve at. The big questions are whether we've already arrived at "good enough" and how tight our error tolerances need to be.

And the operative phrase very much is "good enough." I don't know of anywhere in the world that currently has even an 85% LVT policy, let alone a 100% one. The lower your LVT rate, the more room for assessment error you have before anyone gets taxed more than their true land rent–at a 50% LVT, for instance, an assessment would have to come in at more than double the land's true value before the tax exceeded the land's actual rental income. I know plenty of Georgists who would be ecstatic if they could get a 75% LVT, or even a 50% LVT, implemented in their area.

Now, just because these assessment methods are available doesn't mean they're actually being used. Not everyone has Ted Gwartney as their assessor. Plenty of counties in my local area exclusively use the cost approach and will even apply a blanket "neighborhood factor" multiplier to up-assess swiftly appreciating areas. However, they apply that multiplier to the *buildings* rather than the land, which feels exactly backwards. The assessor hasn't raised the value of my land in years, while the assessed value of my house (which I am eminently qualified to tell you is an ever-degrading money pit) somehow continues to go up.

Good assessment depends on having well-trained staff, up-to-date methodology, and access to high-quality market transaction data. I'm convinced, based on these papers and the IAAO's surveys, that assessment doesn't require a huge army of assessors poring over every aspect of citizens' properties. Furthermore, plenty of places already have property tax systems in place and are already paying the full cost of property assessments and property tax collection. Many of the methods described above seem capable of reducing property assessment costs by focusing on the land first and foremost and letting the building's value fall out as a residual, as Ted Gwartney insists. The cost also seems like something that, done properly, is only going to come down over time as fewer assessors are required. Another option is to keep staff sizes the same but use the emerging productivity gains to increase the frequency and quality of assessments.

It also seems clear to me that Land Value Taxation is not *more* invasive and expensive than income and sales tax when you factor in the cost of compliance (not to mention the deadweight loss imposed on the economy). Countries that have implemented Land Value Taxes, such as Denmark, are already seeing some of the claims of Georgism borne out, as we discussed in Part II. This suggests to me that modern methods are probably "good enough," so long as assessors are well trained, abiding by current best practices, and able to access good market data.
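As for how "well trained and abiding by best practices" actually gets audited, assessment offices are conventionally graded with sales ratio studies. Here's a minimal sketch of the two standard IAAO-style statistics, with invented sales and assessments:

```python
import numpy as np

# Hypothetical assessed values and later sale prices for the same parcels.
assessed = np.array([180_000, 240_000, 95_000, 410_000, 150_000], dtype=float)
sale = np.array([200_000, 250_000, 120_000, 400_000, 170_000], dtype=float)

ratios = assessed / sale
median_ratio = np.median(ratios)

# Coefficient of Dispersion (COD): average percent deviation from the median
# ratio. IAAO guidance for single-family homes is a COD of roughly 15 or less.
cod = 100 * np.mean(np.abs(ratios - median_ratio)) / median_ratio

# Price-Related Differential (PRD): mean ratio divided by the sale-weighted
# mean ratio. Values much above ~1.03 indicate regressivity, i.e. expensive
# properties being under-assessed relative to cheap ones.
prd = np.mean(ratios) / (assessed.sum() / sale.sum())

print(f"COD = {cod:.1f}, PRD = {prd:.3f}")
```

These are exactly the kinds of error bars a cross-comparison study (see section 6.1 below) could report for each assessment method.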
Given that Astral Codex Ten is a blog where ideas as lofty as full brain uploading, superhuman AI, and biological immortality are frequently discussed in earnest, it doesn't seem outlandish to suggest that human beings can probably use math and science to get better at estimating the market value of land relative to buildings.

**Conclusion**

By George, Unimproved Land Value can (probably) be accurately assessed.

---

# 6. Conclusions & Next Steps

This concludes my three-part series on the most common objections to Georgism. By George, the evidence has convinced me of three things:

✅ Land is a really big deal

✅ Land Value Tax cannot be passed on to tenants

✅ Unimproved Land Value can (probably) be accurately assessed

I humbly submit that the case for Georgism survives a summary dismissal and can move on to a trial of the particulars. So where do we go from here? In the course of writing this series, I found a few subjects that someone should just go ahead and test already. Obviously, this would require research funding and smart people willing to do the work (hey, a guy can dream). These subjects are:

## 6.1. Assessment methods

A lot of the methodology papers I read test one or two methods at a time in a particular case study. What I couldn't find was a study that tests *every* major mass appraisal method in one big cross-comparison study, all in the same physical location using the same dataset. If we had this, we could get a better sense of their strengths and weaknesses without wondering what differences are due simply to one study being in Germany and the other in the Philippines. It seems the necessary ingredients are:

* An ideal test location with excellent property records and (ideally) a history of quality land value assessment and/or Land Value Tax
* Experienced local assessment experts with knowledge of the area
* Data scientists, statisticians, and machine learning nerds

I'm told by some friends who know this kind of stuff that the ideal location in the United States would likely be somewhere in Pennsylvania, a state with LVT-friendly policies and a history of detailed property records. After that, you'd pick out every mass assessment methodology from the literature, line them up, and reproduce them. Then, you'd come up with a novel method or two of your own and test those, too. Finally, you'd come up with a validation strategy for testing against true market values. The chief goals here would be to:

* Evaluate the current state of the art. How wide are the error bars?
* See if you can improve on the state of the art. How close to ground truth can you get?

Once the first study is done, you'd want to test it in another area–maybe Australia, Denmark, Germany, or the Philippines. If Georgism is true, and the only thing standing in the way is being able to pull off accurate assessments, then let's just get better at doing that. We're the species that split the atom and travelled to the moon. Surely we can handle this.

## 6.2. Total Land Value of the United States

It's really annoying that we don't confidently know this figure, and it has huge implications for LVT policy. Technically, this is an "assessment" problem, but in practice, when you're assessing *the entire USA,* you're often falling back on big black-box buckets of aggregated property values rather than building a database of direct ground-truth market transactions yourself.
In Part I, we saw how big the difference was between Albouy, who used pure land sales directly from the market, and Larson, who applied the cost approach to official figures. If one of you readers has MLS access for all 50 states and/or a bunch of other records, it'd be interesting to see if we could settle this debate once and for all.

## 6.3. A Push for More Open Real Estate Market Transaction Data

To my knowledge, there's no good, one-stop shop for solid, historical, ground-truth real estate market transaction data that's uniform and detailed across, say, the entire United States. I'm well aware of how important access to solid data is for researchers. I run a site called [www.gamedatacrunch.com](http://www.gamedatacrunch.com/) that just quietly scrapes public metrics from the PC video game store Steam (they don't mind–I asked). I'm constantly getting requests from researchers to dump slices from my DB for them, which I'm always happy to do. If that data weren't available, some of those research papers might never have happened. So many questions that are answerable in principle go unanswered in practice simply for want of access to data, and then smart people make bad policy decisions because of that ignorance.

In principle, I suppose nothing would technically stop someone from scraping listing prices on Zillow and Redfin all day, every day, but I have a feeling I'd probably get sued if I did that. (Just checked with my lawyer; he says it's a legal grey area but probably wouldn't end well for me.) If you're an eccentric billionaire who wants to do something for Georgism, instead of building a [$400 billion super city in the desert](https://cityoftelosa.com), you could buy [Redfin](https://www.redfin.com) for about [1% of that](https://web.archive.org/web/20211205031044/https://companiesmarketcap.com/redfin/marketcap/) and make their data available to researchers. In any case, whether improved access to consistent, country-wide data were to come from data mining or repeal of [real estate non-disclosure laws](https://www.oceancitytoday.com/business/real_estate_report/several-states-do-not-disclose-sale-prices-to-public/article_31fe9010-9dab-11e9-bf92-cb9ad3f9d4ec.html), it would be an invaluable resource for researchers.

## 6.4. Empirical examination of ATCOR

If ATCOR (All Taxes Come Out of Rent) holds up empirically, it would be a super big deal. Then, it wouldn't matter whose land value estimates you accept, because you'd always be able to shift taxes off of income and capital and onto land without losing revenue. Mason Gaffney [cites a few cases](https://thedepression.org.au/atcor/) where it's supposed to have been observed, but we could really dig into this further. A claim this tantalizing really needs to be nailed down and resolved once and for all.

## 6.5. Responses to Comments

I've been absolutely drowning in comments since the first article posted and there's no way I'll be able to address everything. Doing full justice to some of these will require their own entire articles, but I can leave some brief notes here.

**Zoning**

Many people replied that Land Value Tax is useless until or unless you first fix zoning. First of all, Georgists are natural allies in fixing restrictive zoning policies. This is something they definitely want and will fight for. Second, one of the reasons for restrictive zoning policies is broken incentives. A city doesn't have a huge incentive to repeal restrictive zoning policies because restrictive zoning isn't hurting its tax base.
According to Georgists, a city whose tax base is land value has well-aligned incentives. It is incentivized to maximize land value by making the city a more desirable place to live, which raises its tax base. It is dis-incentivized to over-assess or over-tax the land, however, because that will cause people to leave, which will lower land values and with them the tax base. One of the principal things that depresses land values and the tax base in this scenario is restrictive zoning. I personally don't care whether you first pass LVT or first repeal restrictive zoning; you can and should do both. Either one helps the other along.

**Transitional Politics**

Honestly, this needs its own entire article if I'm to avoid going out on a limb and accidentally saying something dumb. Suffice it to say, a lot of smart people have spent a *lot* of time thinking about this, and you'll have to wait for a future article to find out what they've come up with. I will let the commentariat duke this one out in the meantime.

**Corruption**

Some people agreed to all of the points raised *in theory*, but pointed out that human beings are wicked sinners, and LVT will be bent towards the malevolent will of our overlords, just like the old policies. And they're not wrong! The problem with this argument is that it's a fully general argument against change. The overlords game *every* system to their benefit. Rely on standardized tests? They'll game the SATs with phony disability accommodations and outright cheating. Abolish standardized tests? They'll make their kids take fifty extracurriculars and pay a ghostwriter to pen their college entrance essay about their life-changing volunteer work in Ghana.

The right question is not "*can* the rich game this system?" but rather, "can they game it *less* than the existing one?" This is why you should keep standardized tests, even though rich people can and do game them. The [evidence](https://senate.universityofcalifornia.edu/_files/underreview/sttf-report.pdf) [shows](https://www.city-journal.org/standardized-tests-student-merit) that on balance standardized tests are one of the few ways a minority student from a poor background even has a chance to move upwards.

So let's dig in. The chief way you can game Land Value Tax is to cozy up to your local assessor and get them to say your land is garbage and it's not valuable. However, you have to do this kind of corruption in the open. Your land value assessment is public record, and highly visible on a map, and will stick out like a sore thumb unless the entire area has been corrupted too. I grant that motivated people could plausibly pull this off to various degrees. You might be able to get the assessor to lie about your *land value*, but what's the status quo we're comparing against? We don't even *know* how much *cash money* value is being socked away in Switzerland and the Caymans, let alone by whom. And even if we did, good luck figuring out how to lure that back to a taxable jurisdiction. Land at the very least can't run or hide.

My dream is for us to commoditize open source mass appraisal systems and push for public real estate transaction records everywhere, so that organizations and educated members of the public can do their own land value audits at scale. And again, this is something that just needs to be subjected to empirics. We can sling theory back and forth at each other all day, but the proof is in the pudding. There are places that have done Land Value Tax in the past, and there are places that do it today.
A good candidate for a future article is looking at case studies of places where LVT has been tried, explicitly checking for this problem.

Finally, defeatism is corruption's best friend. *If* you believe everything I'm saying here, *and* your only obstacle is fear of corruption, *and* you accept that LVT's vulnerability to corruption is not any worse than the status quo's...then why not just get out there and fight for the world you want to see? Nothing good ever came without a struggle.

And now we come to the most important comment of all.

**By George**

Some people said I did the whole "By George" schtick too much. I'm sorry you feel that way, but... by George, [the people have spoken](https://twitter.com/larsiusprime/status/1463030608184107009):

## 6.6. Future Direction

This won't be my last article on Georgism, but I haven't yet decided whether to post them on my own blog, [Fortress Of Doors](https://www.fortressofdoors.com), or some standalone site. Nor have I decided what topic should come next. In the comments, feel free to weigh in with which direction you'd like to see me go, as well as any issues you felt were unresolved to your satisfaction. Also, please point out any places where my math looks weird, I was just plain wrong, or where I have misunderstood or misstated the research I'm citing. Thanks very much to this readership and to our host, Scott, for graciously letting me share these findings with you.

---

**Acknowledgements:** I would like to thank the following people and organizations without whom this series would not have been possible:

* My wonderful wife Emily, for everything
* Scott for running the Book Review contest, and the ACX community for selecting me as the winner
* Count Bla for going above and beyond the call of duty with suggestions, research help, and general support
* Alexandra Elbakyan for lifting the boot of deadweight loss off the necks of researchers worldwide
* James Cavin for help with editing and proofreading
* Ted Gwartney for his seminar, and for putting up with all of my annoying questions
* Nicolaus Tideman for providing feedback and sources
* Slime Mold Time Mold for help with structure, scope, and editing
* Will Jarvis for helpful comments, discussions, and sources
* Nate Blair and BlueRepublik for feedback, charts, and editing tips on both the original book review and this piece
* Matthew Yglesias for his land value estimation method
* Erusian for giving a helpful and detailed account of objections to Georgism
* The Georgist twitter DM group for drowning me in articles, citations, and sources
* Dan Sullivan, Mark Mollineaux, and Wyn Achenbaum for useful comments, sources, and help.
* Pyradius for pointing me towards *Counting Bounty*
* Bill Newell, for the memes
* [/r/georgism](https://www.reddit.com/r/georgism) for helpful sources and articles
* The authors of all the cited research papers
* The "old guard" of the 20th century Georgist movement for maintaining historical documents and research papers into the present day
* Henry George, for planting trees in whose shade he would never sit

**Further reading and resources**

Discussion:

* [Geopraxis discord](https://discord.com/invite/CXf5RDxfZ6)
* [/r/georgism subreddit](https://reddit.com/r/georgism)

Organizations:

* **[Common Ground](https://commonground-usa.net)** (the go-to organizing body I recommend joining)
* [Strong Towns](https://www.strongtowns.org/landvaluetax)
* [The Henry George School of Social Science](https://www.hgsss.org)
* [International Association of Assessing Officers](https://www.iaao.org)
* [Robert Schalkenbach Foundation](https://schalkenbach.org)

Information sites:

* [Cooperative Individualism](https://cooperative-individualism.org)
* [Wealth and Want](http://www.wealthandwant.com)
* [Mason Gaffney's site](https://www.masongaffney.org)

Papers:

* [Post-Corona Balanced-Budget Super-Stimulus: The Case for Shifting Taxes onto Land](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3954888) (a policy paper)
* [Land and Liberty: Henry George, The Single Tax Movement, and the Origins of 20th Century Liberalism](https://repository.library.georgetown.edu/handle/10822/1029879) (a PhD thesis by Chris England that is maybe the best comprehensive history of the movement)
* [Aggregate Land Rents, Expenditure on Public Goods, and Optimal City Size](https://doi.org/10.7916/d8086fw3) by Arnott and Stiglitz (the origin paper of the Henry George Theorem)
* [The Hidden Taxable Capacity of Land: Enough and to Spare](https://economics.ucr.edu/papers/papers08/08-12old.pdf) by Mason Gaffney
* [The Unknown Revenue Potential of Land: Fifteen Hidden Elements](https://www.masongaffney.org/workpapers/WP097%202004%20Unknown%20revenue%20potential%20of%20land%2015%20hidden%20elements.pdf) by Mason Gaffney
Scott Alexander
45309834
Does Georgism Work, Part 3: Can Unimproved Land Value be Accurately Assessed Separately From Buildings?
acx
# Does Georgism Work? Part 2: Can Landlords Pass Land Value Tax on to Tenants? *[Lars Doucet won this year’s [Book Review Contest](https://astralcodexten.substack.com/p/book-review-contest-winners) with his review of Henry George’s [Progress and Poverty](https://astralcodexten.substack.com/p/your-book-review-progress-and-poverty). Since then, he’s been researching Georgism in more depth, and wants to follow up with what he’s learned. I’ll be posting three of his Georgism essays here this week, and you can read his other work at [Fortress Of Doors](https://www.fortressofdoors.com/)]* Hi, my name's Lars Doucet (not Scott Alexander), and this is a guest post in an ongoing series that assesses the empirical basis for the economic philosophy of [Georgism](https://en.wikipedia.org/wiki/Georgism). [Part 0 - Book Review: Progress & Poverty](https://astralcodexten.substack.com/p/your-book-review-progress-and-poverty) [Part I  - Is Land Really a Big Deal?](https://astralcodexten.substack.com/p/does-georgism-work-is-land-really) **Part II - Can Land Value Tax be passed on to Tenants? 👈** (You are here) Part III - Can Unimproved Land Value be Accurately Assessed Separately from Buildings? There were a lot of great comments to Part I. Most zeroed in on the practical aspects of implementing Georgism, such as how to deal with what Gordon Tullock calls [The Transitional Gains Trap](https://www.jstor.org/stable/3003249). Others brought up various perceived political obstacles and a few other topics (yes, I know about zoning, which is also a big deal). With a few exceptions, I didn't see much pushback on the core thesis of Part I, that land is a really big deal. In fact, many of the strongest opponents of LVT seem opposed precisely because they *agree* that land is a big deal. I can't respond to everything people have said without spending another few months researching, but rest assured I will briefly address the most common points at the end of Part III. Anything that remains unanswered after that will have to wait for future articles. --- Georgists assert that landlords cannot pass Land Value Tax (LVT) on to their tenants. (Land Value Tax is a tax on the unimproved value of land alone, excluding all the buildings and other improvements.) Many critics are skeptical of this, because just about every other tax in the world *is* passed on. Why should LVT be so special? By George, if Land Value Tax is easily passed on to tenants, then it has no power to curb land speculation, and you can stop reading this article. First, let's explain the theoretical model for why this isn't supposed to be possible, and then let's see if it actually works that way in the real world. # 1. Theory Imagine I'm a landlord, and I have a vacant lot I'm renting to a tenant who's got a mobile home parked there. What's going to happen if a Land Value Tax is imposed on me? Well, I'm already charging as much as the market will bear. If I charge any more, my tenant will move out. But why shouldn't I be able to pass on the tax to the tenant? If you tax gasoline or cigarettes, the prices go up and are ultimately borne by the customer. Why should land be any different? The difference is that the supply of gasoline and cigarettes fluctuates, because you can produce more or less of them. When you put a tax on gasoline or cigarettes of even a few cents, somewhere in the economy there is a marginal oil well or a marginal tobacco farm whose profit margin was the same as or less than the tax. 
Now their profit is entirely wiped out, so what's the point of producing any more? Price signals from the market are telling them to stop producing and do something else. And price is ultimately driven by supply and demand, not the wishes of a seller. Even a dedicated cartel like OPEC can't enforce high oil prices by fiat. They do it by cutting off production and driving down the global supply of oil until people are forced to pay the price OPEC wants. Okay, let's go back to land. How does Land Value Tax drive down land prices? The important thing to keep in mind is that land value (purchase price) is a *stock,* but land income (rent) is a *flow*. The amount of water flowing out of my tap is a flow; the amount of water currently sitting in my bathtub is a stock. The key thing is that land income drives land price and not the other way around. If a property is capable of generating $10,000/year in rents, then the amount I'm willing to pay to buy it is $10,000 times X, where X represents how many years I'm willing to wait to break even on my investment. So how does an LVT affect the price of land? Using the bathtub metaphor again, let's put a valve under the tap, so half of the water goes into the tub and half goes somewhere else. The amount of water flowing *out* of the tap does not change (the land is as productive as it ever was). However, the amount of water collected in my bathtub *does* change; five minutes of flow will produce less water in the tub than it did before. If I'm trying to sell my land to someone, they're going to notice the tax and correctly calculate that it will earn them half as much income over X years, so they'll pay half as much for it. And what about rental price? The rental price comes directly from the flow. The land is in demand because of its inherent productivity; someone who occupies that land can generate a certain amount of wealth each month. Without a Land Value Tax, the owner of that land can charge rent up to the difference between their land's productivity and the best freely available alternative, establishing the "margin of productivity." This means that as productivity rises, so does the rent. This phenomenon is known as [Ricardo's Law of Rent](https://www.youtube.com/watch?v=jiGKwi43R0Q). With a Land Value Tax, the owner has to pay that tax every month whether they have a tenant or not. They're already charging the highest amount the market will bear, and as we've already shown, they are unable to change the supply of land. All the leverage is on the side of the tenants, which forces the landlord to eat the tax. The price to buy the land goes down, the price for a tenant to rent it goes down, but the total amount of income the land itself produces ("land rent") stays the same. A portion of it is just being collected by the taxing agency. That's the theory at least. Does it hold up in real life? According to the evidence, the answer is yes. # 2. Empirics Let's try to envision what it would take to test this. Imagine a hypothetical country with a decent property assessment scheme already in place. Land and improvements are assessed separately to an objective and equalized standard, and each is taxed at a separate rate. Let's further say this country's assessments are widely considered to be fair and well-tested against market values. As a starting condition, each of the counties in this country has its own independent land tax rate. 
Then, for our experimental intervention, we'll have all of the counties raise or lower the tax rate on land values randomly within a predefined range, all at the same time. Then we'll observe what happens to land prices. Unfortunately for us, countries with the necessary prerequisite assessment policy are few and far between, and sovereign states don't typically run randomized controlled economic experiments on their population, so I'm afraid–wait, something almost exactly like this happened in Denmark in 2007.

What happened in Denmark was an accident, but you'd be hard pressed to design a better experimental setup if you tried. A 2017 working paper by Høj, Jørgensen, and Schou, entitled "[Land Taxes and Housing Prices](https://web.archive.org/web/20201108135554/https://dors.dk/files/media/publikationer/arbejdspapirer/2017/02_arbejdspapir_land_tax.pdf)," published at the Danish Secretariat of Economic Councils, has the full story.

One day, Denmark decided to redraw all its municipal boundaries. Regions that had been under one local government woke up the next day under a different one, immediately adopting a new set of local regulations and rules, including changed tax rates. This caused a large-scale, semi-random shuffling of Land Value Tax rates overnight. Crucially, tax assessment *policy* was pretty much uniform throughout the country. The only thing this shakeup changed with regard to land policy was the actual individual rates of tax on land, set by the local governments. This gives us a nice big N of 250 individual areas, each with a clear before and after change in land tax rate. All of these changes came into being at exactly the same time from a single swift outside intervention, and the overall change in aggregate tax rate was close to zero:

Note the "per mille" – 20.6 per mille = 2.06 per cent, etc.

The most important thing to note is that this paper claims to get around the "endogeneity problem." For those of you who aren't researchers, an *endogenous* factor is something that originates from *inside* of the system you're studying, whereas an *exogenous* factor is something that originates from *outside* of it. The "endogeneity problem" is a particularly annoying gremlin that makes it hard to study economics empirically. You can never be sure that the effects you're measuring were actually caused by the intervention you're studying. Everything's a big bowl of soup, and it's hard to untangle what causes what.

Here's an example. Let's say I'm the sovereign Emperor of planet Lars. Among my many powers and privileges is the sole right to set the prime interest rate for the entire Lartian economy. One fine Tuesday, I stroll into my throne room and pull the gilded lever that changes the rate from 1.5% to 1.2%. PhD students rejoice–what a great natural experiment for measuring the effects of changes to the prime interest rate! Well, except for the pesky fact that sovereign Emperors of planets named after themselves don't tend to just pull the prime interest rate lever for no reason. Maybe I pulled it because the economy was slowing down or to distract everyone from the unpopular war against the Earthlings I'm currently losing. Were those effects the PhD students observed after I pulled the lever actually caused by me lowering the prime interest rate? Or were they caused by the very forces that drove me to pull the lever in the first place?

What researchers really like to see is an *exogenous* effect, something that unambiguously comes from outside the system.
Going back to planet Lars, one day a ghostly hyperspace beast shows up, instantly eats every car on the entire planet that has a manual transmission, then vanishes in a puff of purple smoke. There's no way this had anything to do with mysterious epicycles within the Lartian economy. It was a pure exogenous shock, and we can be confident that any subsequent observed changes in the economy had something to do with the beast's insatiable appetite for stick shifts, especially if the beast was kind enough to leave a few randomly selected areas untouched as a control group. Høj, Jørgensen, and Schou claim the Danish study is the first study of Land Value Tax to enjoy such a clear exogenous trigger: > The method used in the present study is inspired by Borge & Rattsø (2014) who study capitalization of Norwegian property taxes during 1995-97. They also find evidence of complete capitalization. As the authors note themselves, however, examining the relationship between tax rate and house price changes may generally result in endogeneity problems, which they try to avoid using various instrumental variables. The present study is immune to this problem because the Danish local-government reform of 2007 exogenously imposes the tax rate changes. Backing up the exogenous claim is that, although the change came from the Danish government, it had nothing to do with tax policy. They were just reorganizing the municipal map, and the changes in tax rates were simply the result of whatever jurisdiction your area found itself belonging to the next day. As a quick example, if an area raised land taxes because they needed more money, the fact that the area needed more money could be just as plausible a cause for any observed changes as the change to the land tax rate. But if you change *everyone's* land taxes overnight in semi-random directions with no particular regard for the local economic or political situation, you can be more confident that subsequent changes you observe do in fact stem from that intervention. The authors measure the before-and-after changes, apply a bunch of econometric tests, run it with and without controls just to be sure, and conclude that a Land Value Tax is "fully capitalized" into the price of the property itself. "Fully capitalized" is a fancy way of saying that the price of land goes down proportionately to how much land income is taxed away. > The results demonstrate a clear effect on sales prices of the observed changes in land tax rates. Furthermore, the magnitude of the changes implies full capitalization of the present value of the change in future tax payments for a discount rate of 2.3 per cent, which is within the range of reasonable discount rates for households during the period in question. The analysis consequently supports the hypothesis that perceived permanent land tax changes should be capitalized fully into the price of land and property. This just means that if you tax land, absent any other interventions, the price of land goes down. The rental income of the land available to the landlord goes down too, which means the landlord is eating the tax and can't pass it on to the tenant. If the landlord could successfully pass on the tax, we wouldn't see a decrease in the price of land that amounts to "full capitalization." Now, [beware the man of one study](https://slatestarcodex.com/2014/12/12/beware-the-man-of-one-study/). 
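Before we pile on more studies, it's worth writing down what "full capitalization" means in symbols. This is my own back-of-the-envelope formalization, not notation from the paper: treat a parcel as a perpetuity paying rent R each year, discounted at rate r, with a tax of t per year levied on the parcel's price P. A buyer will then pay:

```latex
P = \frac{R - tP}{r}
\quad\Longrightarrow\quad
P = \frac{R}{r + t},
\qquad
\Delta P = -\frac{\Delta T}{r}
```

The rent R doesn't move, because the supply of land is fixed; the entire tax shows up as a lower price. The ΔP term is the testable prediction: a reform that permanently raises a parcel's annual tax bill by ΔT = 1,000 kroner should knock roughly 1,000 / 0.023 ≈ 43,500 kroner off its sale price at the authors' 2.3 per cent discount rate, and matching that magnitude in the post-reform sales data is what licenses the "fully capitalized" conclusion.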
Høj, Jørgensen, and Schou cite five other prior studies that support their findings: [Oates (1969)](https://doi.org/10.1086/259584), [Borge & Rattsø](https://doi.org/10.1177/1091142113489845) (2014), [Capozza, Green and Hendershott (1996)](https://www.google.com/books/edition/Economic_Effects_of_Fundamental_Tax_Refo/7bxfBHcgrtEC?hl=en&gbpv=1&dq=Taxes,+Mortgage+Borrowing,+and+Residential+Land+Prices&pg=PA171&printsec=frontcover), [Palmon and Smith (1998)](https://doi.org/10.1086/250041), and [Hilber (2015)](https://doi.org/10.1111/1540-6229.12129). All of these studies support the same conclusion but are not as well controlled and have to do various fancy tests to deal with endogeneity. The Danish study seems like a capstone that replicates the findings of a string of prior studies and puts to rest lingering doubts about endogeneity. If we take the authors' literature review at face value, it would be a robust finding for the full capitalization hypothesis.

But let's be thorough. It's possible these supporting studies are misrepresented, so I looked them up and checked, just in case. They all find strong capitalization of land and property taxes into property values, and all discussed the endogeneity problem and their attempts to account for it. The studies are represented faithfully by the Danish paper and support the same conclusions. Furthermore, four of them are empirical rather than theoretical. These findings are not just the result of models and formulas, but actual real-world observations.

That still leaves the possibility that the Danish authors cherry-picked their supporting studies and ignored everyone who found the opposite conclusions, so I tried to see what a general search for research papers on this subject would turn up and if any papers would *not* support full capitalization of Land Value Taxes into property prices. Searching Google Scholar for property tax and Land Value Tax capitalization effects, I found nine additional papers.

**Supporting:**

[Bourassa (1987)](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1468-2257.1987.tb00087.x) studies a Land Value Tax system in Pittsburgh and finds that "the incentive effect is significant but the liquidity effect is not. The incentive effect is found to encourage increases in the number of new units constructed in Pittsburgh rather than increases in the average cost of new units"

[Skaburskis (1995)](https://journals.sagepub.com/doi/abs/10.1177/088541229501000101) concludes, "Tilting tax rates to favor improvements at the expense of land increase the intensity of land development when all other factors are held constant. The policy can increase land values when it is applied to a small portion of a housing market and can reduce land values when applied across the entire housing market."

[Roakes (1996)](https://www.sciencedirect.com/science/article/abs/pii/026483779684556X) says, "The evidence verifies that tax capitalization appears to be occurring, but does not clearly determine the resulting price outcome. Land prices increased with a decrease in real property taxes. They also appeared to increase as a result of the tax abatement system. Land prices were determined to decrease as a result of higher land taxes."
[Buettner (2003)](https://www.researchgate.net/profile/Thiess-Buettner/publication/228739383_Tiebout_Visits_Germany_Land_Tax_Capitalization_in_a_Sample_of_German_Municipalities/links/55e4137408ae2fac47214345/Tiebout-Visits-Germany-Land-Tax-Capitalization-in-a-Sample-of-German-Municipalities.pdf) finds, "land taxes do capitalize into land values, whereas the monthly rent level remains unaffected by the land tax. In addition, the results point to significant spillovers from amenities and the provision of public goods across municipalities." [Plummer (2010)](http://www.ntanet.org/NTJ/63/1/ntj-v63n01p63-92-evidence-distributional-effects-land.pdf) finds, "If a LVT causes a property's future tax payments to increase, then the property's market value will decrease... On the other hand, if a LVT causes a property's taxes to decrease, the property's market value will increase" and notes that the capitalization effects depend on the frequency of reassessments (more frequent assessments = higher capitalization). [Choi (2015)](https://www.jstor.org/stable/24773482) finds, "that a revenue-neutral switch from a capital value property tax to a LVT, or a split-rate tax, results in a reduction in land rent and the tax exclusive price of housing. We find that the land rent gradient becomes flatter while the population density and housing capital gradients become steeper" [Mills (1981)](https://www.journals.uchicago.edu/doi/abs/10.1086/NTJ41862356?journalCode=ntj) is an interesting study, titled *The Non-Neutrality of Land Value Taxation,* and frames itself in opposition to LVT. It's a theoretical paper rather than an empirical one and makes a curious claim: "It is true that a (less than 100 percent) tax on land income is neutral, but this does not extend necessarily to a tax on capitalized land value, or changes therein. The reason is that the discounted sum of payments with the latter tax is not invariant to the intertemporal characteristics of the income stream produced by land. Among options with equal present value, it is greater for income streams skewed to the distant future than for those skewed to the near future." Mills seems to be arguing that if a piece of land is subject to LVT, people will be willing to pay less to buy it, since it generates less rental income. This sounds like a full capitalization argument to me, which Mills apparently thinks is a bad thing. Regardless of how he feels about it, though, he's arguing that it *happens*, ironically putting this paper in the "support" column. **Mixed:** [King (1977)](https://doi.org/10.1086/260574) doesn't have a knock-down argument for or against the full capitalization hypothesis, except to point out some quibbles with the analysis methods used in prior studies (including Oates, the seminal paper). King concludes, "our knowledge of the extent of tax capitalization is very much less than is commonly supposed." One would hope King would have been more impressed by all of the studies that have come out since. **Opposed:** I found one study that clearly and confidently rejects the hypothesis that LVT is fully capitalized into land values. [Wyatt (1994)](http://rrp.sagepub.com/content/26/1/1) asserts: "It is found that LVT would increase, not lower land prices and would provide only a small incentive to building construction." Wyatt relies on his own arguments paired with a literature review, which he asserts finds "no evidence" for the claims of LVT proponents. 
This is strange, because Oates (1969) contradicts this and is included in Wyatt's bibliography, though I can't find a citation of Oates in the text itself. I also found Mills (1981) in his bibliography but not in the text as a citation. He does cite Bourassa (1987), which he interprets as inconclusive. He cites none of the other studies mentioned above, given they hadn't been published yet. Wyatt offers no new empirical evidence of his own, but he does cite a bunch of other papers I hadn't seen before. The majority are from the 70s and 80s, with only two as recent as the 90s, the latest one from 1991. Since Wyatt was the only emphatically critical paper I could find, it's worth unpacking the citations that back up his arguments to see if they check out.

In his introduction, he says:

> The Valuer General of New Zealand said, "There was no evidence that the tax would (1) control urban sprawl and speculation in land; (2) encourage the construction of 'better' buildings; (3) encourage growth; or (4) cause slums to disappear"

The source he cites is Donald Hagman's 1965 book *The Single Tax and Land Use Planning: Henry George Updated*, which I can find cited in a bunch of places but can't actually seem to locate. The closest I can get is [this 1978 article](https://www.cooperative-individualism.org/hagman-donald_land-value-taxation-1978.htm), also by Hagman, posted to [Cooperative-Individualism.org](https://www.cooperative-individualism.org), an old school Georgist site. There, Hagman says that when the income tax was first introduced in New Zealand in the 1890s, Land Value Tax was responsible for 75.7% of the combined tax yield of land + income taxes, but over the course of the next century that figure dropped all the way to 0.5% in 1965 and 0.3% in 1970 (note the placement of the decimal point). Hagman isn't clear on why this is. Did land become less important, were assessments depressed, did the land tax rate just go down? What he does say is that various exemptions were put into effect and that New Zealand made some moves away from market-based valuations. So did LVT simply not work as of 1978, or was this particular implementation hobbled?

We've already shown in Part I that it can't be that land's importance in the economy has declined since the 19th century. Concerning New Zealand specifically, [Tideman et al.](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3954888) says that today over half the share of non-produced assets for households is due to land. And it's worth reiterating that New Zealand banks put most of their loans towards real estate.

Source: [Interest.co.nz](https://www.interest.co.nz/banking/112371/nz-banks-housing-lending-continues-rising-percentage-their-overall-lending-business)

In his case study of Australia for the same article, Hagman points to *too low* a rate of land tax as making it hard to see the full predicted effects borne out. Maybe a similar thing was going on in New Zealand?

> it is difficult to determine whether the tax has any significant effect on land development. The tax is not high enough to have the demonstrable effects proponents of land-value taxes suggest will occur.

In any case, I can't find the specific book Wyatt is citing, and the next best source on the subject from Hagman never tells us exactly why LVT fizzled in New Zealand. What I'm *not* finding in Hagman is anything like reliable evidence that LVT is not fully capitalized into land prices.
Wyatt cites another source (Pillai 1987) that claims that LVT hasn't worked in developing countries, but notes that the "LVT" imposed there was a flat tax based on land *acreage* rather than actual land *market value*. Wyatt then follows up with his first solid critique–inaccurate assessments. I've just criticized plenty of official assessments in Part I, and Wyatt is absolutely correct that inaccurate assessments are a primary obstacle to successfully implementing LVT. Wyatt says: > It is noteworthy that Pennsylvania, the only state in which many cities have adopted LVT, ranks 49th out of 50 in assessment accuracy. That's according to a [1983 article originally published in Fortune Magazine](http://savingcommunities.org/docs/breckenfeld.gurney/hightax.shtml). I don't know how Pennsylvania fares today, given it's been 38 years. I have a whole article dedicated to assessment accuracy coming up next in Part III, so let's leave that issue aside for now. What is Wyatt's argument against the full capitalization argument? Does he have any empirical data to back it up? > Grosskopf and Johnson also show that a revenue-neutral shift from the current property tax to a tax only on land value results in higher land prices rather than lower ones (This follows from their derivation that a uniform land and building tax decreases land prices in the long run more than a uniform land tax of equal yield). Okay, let's see [Grosskopf and Johnson (1982)](https://www.cooperative-individualism.org/grosskopf-shawna_land-value-tax-revenue-potentials-1982.pdf): > The dynamic analysis of the revenue adequacy of site value taxation is positive on the whole... The last piece of evidence available on the long-run revenue capacity of site value taxation is empirical. Site value taxation has weathered the test of time in countries all over the world. In the short run, site value taxation can indeed generate revenue equal to that of the current property tax in urban areas. So it seems like Grosskopf and Johnson are pro-LVT, but this isn't the question we wanted to know about. What about full capitalization? > In the longer run, however, untaxing buildings will cause a change in relative prices, which will in turn change the value of the tax base. Thus by relaxing the partial equilibrium assumption that prices remain constant, we show that land prices could well increase after adjustment to change. Thus, our general equilibrium result is that the tax base could increase as a result of untaxing buildings and taxing land at a uniform rate. Okay, so maybe Wyatt was right? > Given a number of assumptions that are quite conservative, a site value tax can keep pace. Therefore, our revenue conclusion is that taxing land instead of land and buildings will not, in itself, cause cities to find themselves with financial difficulties. Call it a maybe? This 1982 paper cited a few empirical results, but its own conclusions largely rest on theoretical models. Wyatt's chief argument is that the supply of land is not really fixed; the true figure should not be "all the land there is" but rather "all the land supplied to the market within a given jurisdiction," which he asserts is constantly changing. He further notes that many proponents of LVT, such as the famed Georgist Mason Gaffney, themselves admit that under certain conditions, the price of land may not change in the wake of an LVT being levied (this is due to Gaffney's ATCOR theory that any cuts in labor and capital taxes cause land rents to rise). 
He goes on to attack many other assumptions of the Georgist philosophy and ultimately claims that "there is no reliable evidence for the capitalization effect which proponents believe would reduce land prices." Wyatt's preferred alternative is a "progressive property tax," essentially a wealth tax. He goes on: > Therefore if one allows for capitalization of higher service levels as well as higher land taxes, one may find that higher-tax areas actually attract firms and households, resulting in greater demand for land, hence higher land prices ...which seems like a straight-up affirmation that a weak form of the Henry George theorem is true. > It is likely a higher tax on land would be accompanied by greater spending on services which would add to the value of land. As is well documented, the major source of land value derives from public improvements (Czamanski 1966) Okay, now we're getting somewhere! LVT proponents claim that an LVT can't be passed on to tenants, but Wyatt is saying that if you turn around and spend that LVT money on making your city better and more desirable, then the increased demand for land in your city might more than offset the negative capitalization of the tax into the sales price of land. That's a solid argument. Notice that Wyatt is here implicitly admitting to capitalization of land taxes into land prices; he's just *also* arguing that there are other effects in play. What Wyatt doesn't realize is that the natural policy conclusion here is...a 100% LVT that recaptures all the added gains to land value from public spending. He doesn't provide his own empirical study to back up his claims, mind you. He does, however, cite Mary Edwards' 1984 study and claims it says an Australian LVT had no effect on housing prices, once you control for public expenditure level. So what does [Edwards](https://onlinelibrary.wiley.com/doi/10.1111/j.1536-7150.1984.tb01876.x) have to say? > Given both the tax levels of local governments (or expenditure levels) and the site tax variable ... it is difficult to conclude if either has an effect due to multicollinearity. When one omits the local expenditure level, the site tax variable is very great and extremely significant with respect to the average value of new houses. (Equation [5]) > > After the inefficiencies of autocorrelation are removed in Equation [9], the level of taxation has a decreasing effect on the stock of dwellings but the greater the proportion of communities that tax the unimproved capital value of land in each state, the greater the growth in housing stock. > > The results of this paper coincide with the conclusions of A. R. Hutchinson -- that not taxing improvements tends to bring about an increase in the average value of housing and the value of total housing stock. I see what Wyatt is saying, but it feels like another misrepresentation. Maybe Edwards' study by itself doesn't have a strong enough result to untangle the effects of site value tax from public spending levels, but to frame it as if Edwards herself is saying there's no evidence for LVT feels like putting words in her mouth. Worse, Wyatt doesn't address the part where she *does* try to deal with autocorrelation and finds the tax still has a beneficial effect. In 1994, I might have found Wyatt's argument compelling, but a bunch of his sources don't seem to be saying quite what he thinks they do. When they do support his claims, they're largely old and non-empirical. 
I've just read thirteen other papers that provide plenty of empirical evidence from multiple case studies all over the world, culminating in the Danish study. We can further add to that all the long-standing theoretical arguments in LVT's favor, as well as all the prominent economists from competing and outright hostile schools such as [Milton Friedman](https://www.youtube.com/watch?v=yS7Jb58hcsc), [Friedrich Hayek](https://mises.org/es/library/man-economy-and-state-power-and-market/html/pp/1132), [Marx & Engels](https://www.progress.org/articles/how-henry-george-might-have-corrected-the-marxian-communist-manifesto), and [Paul Krugman](https://psmag.com/news/this-land-is-your-land-3392) who have either advocated for some form of LVT themselves or openly acknowledged it as the "least bad" tax. This is really strong evidence for the full capitalization hypothesis, the natural corollary to which is that landlords can't pass on Land Value Tax.

**Conclusion:** Land Value Tax can't be passed on to tenants.

There is one thing Wyatt had a point about, however:

> The real underlying issue here may be to correct the systematic underassessment of the value of land rather than to introduce a higher nominal tax rate on land.

If land is truly chronically underassessed, then simply making land assessments more accurate across the board will give you a similar effect to raising the rate of LVT, without touching the nominal tax rate or changing any laws.

This is because every property tax has a partial Land Value Tax hidden inside. The portion of the property tax that falls on buildings is bad because it incurs deadweight loss, but the portion that falls on land is an LVT and is good. Just by raising land assessments close to their true value, you are effectively increasing the rate of the hidden LVT, without increasing the amount of tax that falls on buildings. This falls well short of 100% LVT, and leaves the harmful tax on improvements untouched, but it's an incremental improvement that can be done right now, entirely within the existing political structure. Georgism predicts that partial LVT will have partial benefits, and all you have to do is improve the practice of assessments.

There are two ways land can be chronically underassessed. The first is when the assessed value of the property is way below market value, and the primary deficit is because the land value is underestimated. This isn't uncommon in big cities in the midst of housing crises. In this example, raising the assessed value of land to its true value more than triples the effective rate of the hidden land tax, *without* raising the amount of tax on the building.

The second way land can be chronically underassessed is when the total value of the property is properly assessed close to market value, but the value of the land is understated relative to the building. This often happens with the "cost approach" method we discussed in Part I. If you just improve the land assessment, you *shift* the tax burden off of the building and on to the land.

Okay, but in this second chart isn't the owner paying $2,000 no matter what? Why should they care what the tax internally "falls" on? There are a couple of reasons. For one, although it won't have any immediate effect on an individual whose total property value doesn't change, for many people it *will* change. Some will go up, some will go down, and the resulting taxes will encourage putting land to its highest and best use.
And for those whose property values don't change at all, now there is no disincentive to build improvements. Build a big multifamily unit? Put in a pool? Remodel your bathroom? Go nuts, you won't be punished for it with increased taxes.

Here's a simple visualization of how an LVT paired with a Citizen's Dividend compares to conventional property taxes. It's just an illustration meant to make a rhetorical point, but now I'm curious to see a real-world version of this superimposed over, say, Houston or Philadelphia or New York City, and based on actual data.

Source: [this tweet](https://twitter.com/cgusaofficial/status/1460672622153195524?s=21) from Common Ground USA

Ideally, the next step after shifting taxes from buildings to land is to abolish the portion of the tax that falls on buildings. This leads directly to our next question, and the last and greatest objection to Georgism: can we actually perform accurate assessments that meaningfully and cleanly separate land value from improvements, such as buildings?

See you in Part III.
# Does Georgism Work? Part 1: Is Land Really A Big Deal?

*[Lars Doucet won this year's [Book Review Contest](https://astralcodexten.substack.com/p/book-review-contest-winners) with his review of Henry George's [Progress and Poverty](https://astralcodexten.substack.com/p/your-book-review-progress-and-poverty). Since then, he's been researching Georgism in more depth, and wants to follow up with what he's learned. I'll be posting three of his Georgism essays here this week, and you can read his other work at [Fortress Of Doors](https://www.fortressofdoors.com/)]*

Hi, my name's Lars Doucet (not Scott Alexander) and this is a guest post in an ongoing series that assesses the empirical basis for the economic philosophy of [Georgism](https://en.wikipedia.org/wiki/Georgism).

[Part 0 - Book Review: Progress & Poverty](https://astralcodexten.substack.com/p/your-book-review-progress-and-poverty)

**Part I - Is Land Really a Big Deal? 👈** (You are here)

[Part II - Can Land Value Tax be Passed on to Tenants?](https://astralcodexten.substack.com/p/does-georgism-work-part-2-can-landlords)

[Part III - Can Unimproved Land Value be Accurately Assessed Separately from Buildings?](https://astralcodexten.substack.com/p/does-georgism-work-part-3-can-unimproved?utm_source=url)

*Extremely special thanks to Count Bla and Alexandra Elbakyan*

---

For those of you wondering who this "Lars" guy is, I'm the Astral Codex Ten reader who reviewed Henry George's [Progress & Poverty](https://astralcodexten.substack.com/p/your-book-review-progress-and-poverty) for the book review contest. Henry George is the founder of an economic philosophy known as [Georgism](https://en.wikipedia.org/wiki/Georgism) which is principally concerned with the deprivations caused by unchecked rentiers. George is famous for promoting two specific policies, the Land Value Tax (LVT) and the Citizen's Dividend (what we would now call a Universal Basic Income).

I was shocked and humbled when this readership [selected me as the winner](https://astralcodexten.substack.com/p/book-review-contest-winners). Even more shocking was how many people from around the world wrote to me about their interest in the article. Family, friends, and acquaintances for sure, but also a lot of total strangers–including business owners, activists, podcasters, online game designers, investors, even government officials. Scott's blog has *way* more reach than I realized.

This fills me with a sense of responsibility. If there's a chance people might make policy decisions based on my writing, I need to make sure I haven't been taken in by an argument just because it's really persuasive; it had also better be *true*. What follows therefore is my best attempt at a fair, rigorous, and (where possible) empirical assessment of whether the claims of the Georgist movement stand up to scrutiny.

Let's admit some bias upfront. I'm a Georgist, and I would be happy to find this philosophy true and sad to find it false. But by George, what would make me really sad is to live in a world where Georgism is wrong but where I blissfully continue to believe in it anyway. In that world, I would waste time and energy advocating for a policy that doesn't work at best, and harms society at worst. I'll do my best to kick the tires here, and hopefully the commentariat will point out any of my blind spots. It's impossible for this to not come across as an advocacy piece to some degree, but I promise to give all my critics plenty of surface area to attack.
---

Some readers of the book review were understandably skeptical that Georgism actually works in practice, so this week I'm going to empirically assess "the big three" critiques that come up the most often:

1. Land might have been a big deal in 1879, but it just doesn't matter much today
2. Landlords will just pass Land Value Tax (LVT) on to tenants, so it won't work
3. In real life you can't accurately assess land value separately from improvements, so even if LVT would work in theory, it doesn't work in practice

Today we'll start with point 1, and subsequent articles posted in the next two days will address points 2 and 3. I'll probably write further articles on the subject, but I make no presumptions about whether I'll have worn out my welcome on Astral Codex Ten by then. If you haven't read [the Book Review](https://astralcodexten.substack.com/p/your-book-review-progress-and-poverty) yet, I've posted a brief recap of the relevant concepts below. Otherwise, feel free to skip directly to the subsequent section.

# 0. A Brief Recap

Georgism is a school of [political economy](https://en.wikipedia.org/wiki/Political_economy) that is really upset about, among other things, the Rent Being Too Damn High. It seeks to liberate labor and capital alike from those who gatekeep access to scarce "non-produced assets," such as land and natural resources, while still affirming the virtues of hard work and free enterprise. George uses the term "Land" to mean not just regular land, but everything that is external to human beings and the things they produce–nature itself, really.

Georgism's chief insight is to move economic thinking from a two-factor model (Labor and Capital) to a three-factor model (Land, Labor, and Capital). Its chief (but not only) policy prescription is the **Land Value Tax** (LVT), which taxes real estate at as close to 100% of its "land rent" as possible (the amount of rent due to the land alone apart from "improvements" such as buildings). In actual practice, most Georgists seem to think 85% is a reasonable figure to target.

Let's carefully unpack what those terms mean.

"**Land value**" refers to the full market value of a property, *excluding* all of its improvements, such as buildings. This is the portion of a property's value arising solely from its location and natural attributes (agricultural fertility, endowment of stuff like water, minerals, etc.).

"**Land rent**" (AKA "ground rent") refers to the *recurring rental income* a property is capable of generating from the market because of its land value. It is Land Rent which Land Value Tax is intended to capture. You can think of it as a *Location* or *Site* Value Tax if that's more helpful. It's not a tax on the full market purchase price of a property, nor is it a fixed amount of tax per acre of land, but rather a tax proportional to the market value of the land alone (or better yet, the land rent). When assessed correctly, as LVT approaches 100%, the market *selling price* of the land itself will approach zero.

Don't let the "100%" confuse you, either. If a piece of land costs $10,000 to buy, and is leased for $500/year, then an LVT that captures 100% of the land rent is $500/year, which works out to a 5% annual tax on the land value.

LVT should not be confused with a *property tax.* Property taxes consider land plus improvements (typically buildings). An LVT considers land value alone.
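To make that contrast concrete, here's the arithmetic of the example above as a tiny Python sketch (the 1% property-tax rate and the $40,000 building are made-up numbers for illustration, not anything from the sources cited here):

```python
# The example above: a lot that sells for $10,000 and leases for $500/year.
land_price = 10_000   # market selling price of the bare land
land_rent = 500       # annual rent the land alone can command

lvt = 1.00 * land_rent      # a "100% LVT" captures the full land rent
print(lvt)                  # 500.0 -> $500/year
print(lvt / land_price)     # 0.05 -> i.e. a 5% annual tax on the land *value*

# Contrast with a property tax, which falls on land AND improvements.
# (The 1% rate and $40,000 building are hypothetical.)
building_value = 40_000
property_tax = 0.01 * (land_price + building_value)
print(property_tax)         # 500.0 -> same bill, but $400 of it falls on the building
```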
Georgists assert that if we sufficiently tax land in this manner, we'll not only end the housing crisis but also fix a bunch of misaligned incentives that cause poverty to persist alongside economic progress, *while* raising a bunch of revenue that can lower or even eliminate other less efficient taxes, such as sales and income taxes. This is because virtually all economists agree that LVT has zero "deadweight loss"–a fancy term for a drag on the economy that makes certain activities no longer profitable. Other taxes with no deadweight loss include [Pigouvian](https://www.investopedia.com/terms/p/pigoviantax.asp) taxes on bad things, like congestion and pollution.

But won't landlords just raise the rent to make up for the LVT, passing the burden of the tax on to the tenants? Georgists say no, because land is special in that it is scarce and nobody can make any more of it. Indeed, LVT is a rare form of taxation that actually *boosts* the economy, because it discourages rent-seeking and speculation.

Some Georgists even go so far as to say that LVT can raise enough revenue to replace all other less efficient taxes, becoming the so-called "Single Tax," but this is not a universally held position among modern Georgists. To be clear, proponents of the "Single Tax" believe that LVT is *sufficient* for all public purposes and that no other taxes (such as income tax, capital taxes, and tariffs) are *necessary* for revenue generation, although they still might support carbon taxes or "sin taxes" on things they want to discourage.

Georgism doesn't begin and end with the LVT, however, and the movement isn't solely concerned with real estate and tax revenue. Henry George was an early proponent of what we now call "Universal Basic Income," or as he called it, the "Citizen's Dividend" (funded by LVT, naturally). But even if you threw every penny of LVT revenue into the sea, the anti-sprawl effects of the policy are appealing enough by themselves to earn the endorsement of [YIMBYs](https://en.wikipedia.org/wiki/YIMBY) and urbanists like [Strong Towns](https://www.strongtowns.org/landvaluetax).

If you take Georgism to its natural conclusions, you might start to question government-enforced monopolies over other kinds of "Land," such as electromagnetic spectrum, water and mineral rights, and orbital real estate for satellites, not to mention the deadweight loss created by intellectual property gatekeepers over, say, [research papers](https://www.nature.com/articles/nature.2017.22196). And if you have my day job as an analyst for the video games industry, one day you'll find yourself applying the observed 30-year history of [housing crises in MMOs](https://www.gamedeveloper.com/business/digital-real-estate-and-the-digital-housing-crisis) to virtual real estate sales in [leading blockchain games](https://naavik.co/business-breakdowns/axie-infinity).

Some people come to Georgism because of their aversion to income and capital taxes, some want to use LVT to fund generous social programs, some are motivated by the beneficial environmental effects, and some just think the Rent is Too Damn High. No matter where you come from on the political compass, there's probably a way to mix up a club soda and Georgism that's right for you.

# 1. Is Land Really a Big Deal?
[Paul Krugman](https://psmag.com/news/this-land-is-your-land-3392) speaks for many mainstream economists when he admits that Georgist analysis is sound, but he insists that it's a moot point because land just isn't important anymore in the modern economy:

> Believe it or not, urban economics models actually do suggest that Georgist taxation would be the right approach at least to finance city growth. But I would just say: I don't think you can raise nearly enough money to run a modern welfare state by taxing land. It's just not a big enough thing.

By George, if land just isn't a big deal, then LVT can't raise much money, the problems of speculative landownership are vastly overstated, and you can stop reading this article.

The main tension between Georgists on the one hand, and Marxists and Neoclassicals on the other, is that the latter two significantly downplay land, centering the whole discussion instead on labor and capital. For Georgists, land is the key to understanding the whole economy.

Krugman's main complaint is that LVT can't raise enough money, which is a response to the "Single Tax" movement in particular. In George's time, it was popular to advocate for a 100% Land Value Tax *and* the elimination of all other taxes. Keep in mind that in George's time, there was no federal income tax, and state and federal spending was much lower, so whether LVT could raise enough money wasn't nearly as controversial as it is today.

But even if it turns out that a modern-day "Single Tax" isn't enough to cover the federal budget, Krugman misses the point. The purpose of LVT is not just to raise revenue, but to end speculation, rent-seeking, unaffordable housing, and wasteful, environmentally damaging sprawl. LVT is worth doing for those good effects alone. The revenue it generates doesn't need to fund literally every penny of government spending to still be a win, which is why Georgist economist Terrence Dwyer calls LVT "better than neutral."

Liberal Krugman and conservative [Milton Friedman](https://www.youtube.com/watch?v=yS7Jb58hcsc) both seem to agree that LVT has no [deadweight loss](https://www.investopedia.com/terms/d/deadweightloss.asp#:~:text=A%20deadweight%20loss%20is%20a,an%20inefficient%20allocation%20of%20resources.), which means LVT, unlike income and capital taxes, doesn't create a drag on productivity. This means that if we can raise *enough* money from LVT, we can reduce at least some inefficient taxes, such as those on labor, while keeping government spending the same. Not only could this be popular politically, it would also boost the economy.

Those are the claims Georgists make, at least. Let's see if they're true. Here are a few testable hypotheses that capture different aspects of land being a "really big deal":

1. Most of the value of urban real estate is land
2. America's land rents equal a sizable % of government spending
3. Land represents a significant % of all major bank loans
4. Land represents a significant % of all gross personal assets
5. Land ownership is highly concentrated among the wealthy

Note that I'm not trying to prove each of these absolutely unequivocally. I'm just trying to see whether the preponderance of evidence counters the dismissal of land as being only a minor concern in the modern economy.

### 1.1. Most of the value of urban real estate is land

It's more expensive to live in the heart of New York City than in the middle of Nebraska.
That's not because construction costs are orders of magnitude more expensive in New York, but because the land is orders of magnitude more expensive.

Here's a map of land prices across America's 100 largest metro areas, courtesy of the American Enterprise Institute. Notice that the most valuable properties are situated in coastal urban areas.

Source: [American Enterprise Institute](https://www.aei.org/housing/land-price-indicators/) ([methodology](https://www.aei.org/wp-content/uploads/2021/05/AEI-adjusted-Land-Price-and-Land-Share-Indicators-Methodology.pdf?x91208'))

Here's the same map but for *land share*–the percentage of a property's value that's due solely to the land. If you build a shack in the desert, nearly 100% of the property's value will come from the shack, because the land is worthless. But if you build a shack in San Francisco, nearly all of the property's value will come from the land. Notice how the land share gets closer to 100% as you move towards big cities along the coast. This is because of increased demand for land near large population centers and areas with significant economic activity and commerce. The increased value of the land is not due to any individual, but to the collective inputs of the entire community in developing the area around it. This is often called the [agglomeration effect](https://en.wikipedia.org/wiki/Economies_of_agglomeration).

Even so, maybe you don't trust the American Enterprise Institute's figures and want to hear from some other people.

In 2014, the "developable land" on Manhattan island alone was estimated to be worth about $1.74 trillion, according to [Barr, Smith, and Kulkarni](https://www.sciencedirect.com/science/article/abs/pii/S0166046217300820) (just the land). Between 2005-2010, the urban land value for *all* of New York City was worth about $2.5 trillion, according to [Albouy, Ehrlich, and Shin](https://web.archive.org/web/20180517024758/http://davidalbouy.net/landvalue_index.pdf) (just the land). In 2020, all real estate in NYC was worth about $2.7 trillion, according to [LendingTree](https://web.archive.org/web/20200716152956/https://www.lendingtree.com/home/mortgage/lendingtree-reveals-the-most-valuable-cities-in-america/) (the land + the buildings).

But let's go ahead and see for ourselves. You can run a quick spot check by going on [Zillow](https://www.zillow.com) or [Redfin](https://www.redfin.com) in a major city like New York or San Francisco. First, search for a vacant lot for sale in the heart of downtown, and note the asking price. Then look for a similarly-sized lot with a building on it that has sold within the last few years, situated next to the empty lot. The last selling price should be available. You can subtract one price from the other to get a rough estimate of the land share of the plot with the building on it.

Here's a listing for a vacant lot in the heart of San Francisco (personal information redacted). They're asking for $1.99 million, and, judging from other listings and sales records in the area, they'll probably get it.

Here's a townhouse right next door that sold last year, situated on a lot of nearly the same size. We're ignoring the "Redfin Estimate"; all we care about is the "this home last sold for" figure at the bottom, which is about $2.32 million.

This is all the information we need for our spot check. First, we adjust for size. The second property's lot is 88% as big as the vacant lot, so we multiply the vacant lot's value ($1.99M) by 88% to get $1.75M.
Now we subtract: $2.32M - $1.75M = $568K, the presumptive value of the building. That suggests that 24% of the total property value is from the building, and 76% is from the land. This is just napkin math, but it's congruent with the 70.9% figure AEI gives for the average land share of property in San Francisco county in 2020. Our spot check confirms the findings from the studies and AEI's dataset.

Real estate in urban areas is expensive because of land, and the most valuable land is in urban areas. And if you don't believe me, I have an empty lot in [Gerlach, Nevada](https://web.archive.org/web/20210809032137/https://landequities.com/nevada/061-020-55) to sell you. But don't worry–it's only $0.0054/sqft. Meanwhile, our empty lot in San Francisco is going for $865.21/sqft, which is over 159,000 times as expensive.

Given the evidence from the various land value estimation studies and databases like AEI's, as well as how easy it is to run spot checks, I'm convinced.

**Conclusion:** Most of the value of urban real estate is, in fact, land.

### 1.2. America's land rents equal a sizable % of government spending

Krugman and other skeptics don't believe you can raise enough with LVT alone to fund a modern state. Noah Smith, on the other hand, claims that [Land is Underrated as a Source of Wealth](https://www.bloomberg.com/opinion/articles/2018-01-02/land-is-underrated-as-a-source-of-wealth). Regardless of who's right, LVT doesn't need to replace all other taxes to still be worth doing, as long as it can raise a significant enough chunk. So how much can it raise? Let's do the math and find out.

**Spoiler alert:** Conservative estimates show that **we can entirely pay for any one of Defense, Social Security, or Medicare + Medicaid using land rents alone**. And optimistic estimates suggest that we're within striking distance of the Single Tax–**replacing all labor and capital taxes with taxes on land rents** (on the federal level, at least).

**Math alert:** We're about to dive into all the research papers and calculations that back up the above statement. If you don't care about seeing me show my work and you want to jump right to the conclusion, go to the next section, **How Much Money Can We Raise From Land Rents?**

---

Let's start by defining some terms very precisely:

**Land Income** or **Land Rent** is the recurring amount of revenue that the land itself is capable of generating. It derives from the benefits the land itself provides (crops it can grow, proximity to a desirable job, etc.) and amounts to the most anybody is willing to pay to access that land for a while (conventional "rent"). It is ultimately land income that drives land value, not the other way around.

**Land Price** or **Land Value** is how much it costs to buy a piece of land. **Full Market Value,** however, is specifically the land price under "fair" and open market conditions. What are "unfair" conditions? I mean, your dad could sell you a valuable property for $1 as an obvious gift, but if he put it on the open market, it would go for much more than that. Likewise, it's not uncommon for a property that's been foreclosed on and hastily auctioned off to be re-listed publicly by the auction winner for a higher price.

Cool, so how much is all the land in America worth? Or more precisely, what is the **full market value** of all of America's land? Here's a graph of America's total aggregate land value over time, according to twelve different estimation methods.
My sources are the [Lincoln Institute](https://web.archive.org/web/20171121002821/http://datatoolkits.lincolninst.edu/subcenters/land-values), [Larson (2015)](https://www.bea.gov/system/files/papers/WP2015-3.pdf), [Albouy, Ehrlich, and Shin (2018)](https://web.archive.org/web/20191217113256/http://davidalbouy.net/landvalue_index.pdf), [The American Enterprise Institute](https://www.aei.org/housing/land-price-indicators//), [PLACES Lab](https://placeslab.org/fmv_usa/), the [Federal Reserve](https://web.archive.org/web/20131211071139/http://www.federalreserve.gov/releases/z1/Current/z1.pdf) via a method worked out by [Matt Yglesias](https://slate.com/business/2013/12/value-of-all-land-in-the-united-states.html), [Larson (2019/2020)](https://www.fhfa.gov/PolicyProgramsResearch/Research/PaperDocuments/wp1901-1028.pdf), and Jeffrey Johnson Smith's 2020 book *[Counting Bounty: The Quest to Know the Worth of the Earth](https://bookshop.org/books/counting-bounty-the-quest-to-know-the-worth-of-earth/9781634242981).* The data points for Foldvary, Smith, Tideman, Gaffney and Cord all come from *Counting Bounty.*

Smith gives his own estimate of $44 trillion and notes an estimate of $31 trillion that Nicolaus Tideman sent him via private correspondence. Smith further mentions that Fred Foldvary was constantly saying that land rents equal about 1/3 of national income, and a cursory googling of Foldvary's writings confirms this. The "Foldvary" line here is my own construction that takes a third of [GNI](https://fred.stlouisfed.org/series/A023RX1Q020SBEA) for each year and then multiplies it by 10 (Smith's method for converting land rents to land value). Smith also cites a land rent estimate by Mason Gaffney at $5.3 trillion, or $53T in total value, though I've not been able to track down the primary source for that. Finally, I've extrapolated Smith's estimate five years back from his single 2020 data point according to the observed growth line from the other data sets.

That gives us a massive spread of anywhere between $19 trillion and $65 trillion in 2020 for all of America's land values. So...whom do we trust here?

Let's start from the top with Foldvary's estimate. Foldvary is looking at the results of a 2003 paper by Terrence Dwyer in Australia, and then saying that the same pattern Dwyer notes is likely to hold in America. For context, Terrence Dwyer is a Georgist who spent several years as an Australian Treasury tax official, was an advisor to the Prime Minister and Cabinet, and has written extensively about tax policy. His paper is called [The Taxable Capacity of Australian Land and Resources](https://www.prosper.org.au/wp-content/uploads/2007/11/dwyer-tax-resources.pdf).

Unlike America, Australia has a long history of land taxation and detailed land valuation records, which Dwyer leans on to put together four tables comparing land incomes to all Australian tax receipts. Although that history of land valuation and LVT continues to this day, Australia falls far short of Fully Automated Luxury Space Georgism, relying on quite a bit of conventional capital and labor taxes.

Here are some figures from the most recent decade in Dwyer's fourth table, which shows that land rents could raise 70-75% as much as all of Australia's other taxes combined. And if you compare Australia's land income to the receipts taken in just by Australia's company and personal income taxes, it would be more than enough to replace them entirely while still bringing in a surplus.
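(Backing up for a second: the "Foldvary" line's construction described above is simple enough to write out explicitly. Here's a sketch, fed with a deliberately round, hypothetical national income figure rather than the actual FRED series:)

```python
def foldvary_land_value(gni_trillions: float) -> float:
    """Land rent ~ 1/3 of national income (Foldvary's rule of thumb),
    then x10 to convert rent into value (Smith's multiplier, which
    amounts to assuming a 10% cap rate)."""
    land_rent = gni_trillions / 3
    return land_rent * 10

# With a hypothetical national income of $18T, the line would sit at $60T,
# near the top of the spread shown above:
print(foldvary_land_value(18))  # 60.0
```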
Dwyer's methodology seems plausible; it's a straightforward application of Australia's detailed land and property value records against Australia's published budget figures. Dwyer notably *doesn't* factor in the potential revenue from the "dynamic effects" of Land Value Taxation, which would only serve to raise his figures.

Great news for Australia, at least if you believe Dwyer and his data sources. But I want to see what we can say about America, so let's check that National Income ratio real quick.

In 1999, Dwyer gives land income as $132.7 billion AUD. In 1999, Macrotrends says [Australian GNI](https://www.macrotrends.net/countries/AUS/australia/gni-gross-national-income) was $405.5 billion *USD,* and, using the [1999 conversion rate](https://www.macrotrends.net/2551/australian-us-dollar-exchange-rate-historical-chart), that's $623.9 billion AUD. That gives a land-rent-to-GNI ratio of 21.3%. Spot-checking 1991 gives me 20.8%, so about the same.

This is pretty far off from Foldvary's "one-third" guess, but pretty close to Steven Cord's. Cord estimated land rent [at about 24% of national income](https://cooperative-individualism.org/barron-ian_steven-cord-challenges-economists-on-the-lack-of-land-value-data-1988-sep-oct.pdf). That would be about $47 trillion using Smith's method.

Given Foldvary is contradicted by his own source (Dwyer), we should probably exclude his line for now and construct a new one for Cord, as well as a "Dwyer-USA" line using 21% of America's GNI to better represent what Foldvary was getting at. If we buy that the Australian pattern might hold for the United States, our new chart looks something like this:

Because the Cord and Dwyer-USA lines are just naively tracking GNI, they somewhat mask the 2005-2008 housing bubble, but they give us something like an upper bound.

So we've got three emerging lines here. Could this reflect a difference in methodologies? Indeed. In Part III, we'll dig into how to accurately assess land values in detail, but for now, let's look at the broad-strokes differences between the estimation methods used here.

The bearish values in purple all rely on a method called the "cost approach," or "land residual" method. This is where you estimate the cost of replacing a building new, discount it for depreciation based on the building's age, and then subtract the result from the total market value of the property to get the land value. Larson (2019) uses this method, and AEI's figures are based directly on those results with a slight upward correction. The Lincoln Institute and the Federal Reserve's figures use the same basic approach, relying on official estimates of construction costs and housing prices. The one outlier is the PLACES lab estimate, which uses a machine learning model but gives a single-year result that tracks with the four cost approach lines.

The bullish values in blue all come from estimates by various Georgists cited in Smith's book and are naively back-extrapolated by me just to set an upper bound.

The middle values in red include Larson (2015), who uses a "hedonic regression" model, and Albouy, who builds a model that *only* looks at vacant land sales.

Long story short, I found numerous persuasive criticisms of the cost approach. Ultimately, I think Smith's estimate is most likely closest to the truth. Let's dig into Larson, Albouy, and the Federal Reserve figures to understand why.

**A Tale of Two Larsons**

Larson disagrees with himself. Let me grey out most of the other lines to highlight this discrepancy.
Larson (2015) was written by Larson alone and uses a "hedonic regression" approach similar to the one described in [Kuminoff and Pope (2013)](https://doi.org/10.3368/le.89.1.1). In this method, you note all the characteristics of a property and then use a computer model to tease out the individual contributions of each factor to the final market value. This paper's data comes from a variety of sources but includes vacant land sales, developed property sales, and official stats from appraisals.

Larson (2019), on the other hand, was co-written with Davis, Oliner, and Shui and uses the cost approach exclusively. Crucially, Larson (2019) explicitly and intentionally *excludes* all vacant land sales from the dataset. This estimate thus has the *least* direct contact with ground truth data from the market concerning land.

**Albouy's Astounding Appendix**

I think we can all agree that the purest way to measure the value of land is to find a piece of land with nothing on it and observe the price it sells for on the market. With enough of these data points, you could interpolate between them to create a smooth gradient map of land values, which could be good enough for estimating the aggregate value of large areas. Unfortunately, this method isn't going to work to model urban land values because there just aren't enough pure-land sales in the city center.

Or are there?

This startling figure is from the "[online appendix](https://mitp.silverchair-cdn.com/mitp/content_public/journal/rest/100/3/10.1162_rest_a_00710/2/rest_a_00710-esupp.pdf?Expires=1640585859&Signature=gnRHP4gjxtsswGVrzQOb98gdycV2TnrrU0Yn5u4C2v7anb7GB-QGQLL7ULbfdAqPDBawButRVxu4PeN35kWbJz5He1I66v02mATLL-MZ6YpoyVWizynxpUq4~I8wfF~yCEaIK5fsVgTOg45xemzQLXWpI311M2I5NBCyz~A2mWV0s8hI71wURagJ-aksPe0F-Wv~xbTZaN2yAAPNbsJxQ4sf8nZl2~1tjLN~h9keA6MU4d70v~gk~7GL9B3dnnfBpLggb1oU3JLZMd5IT1zzG56gVrV-tbZRiXqh05mI8GbixZOYyGVVLXWTP-zVhlEjyPZj9ziDBaN7AYsxEE4GKA__&Key-Pair-Id=APKAIE5G5CRDK6RD3PGA)" to Albouy's paper, briefly referenced in a footnote in the main paper. Apparently, there are *more* pure land sales in urban areas than there are in outlying areas. As far as I can tell, Albouy builds his statistical model using *nothing* but pure land sales, excluding anything that has a structure on it. And yet his data points are most densely clustered around major city centers, when I had expected this approach to yield the exact opposite.

Both Albouy and Larson (2015) use regression models that include vacant land sales, but Albouy *only* considers vacant land sales. By contrast, Larson (2019) uses the cost approach and explicitly *excludes* vacant land sales. What about the Federal Reserve method?

**The Fuzzy Fed**

The "Federal Reserve" line is my own construction. Matthew Yglesias described this method in *[What's All the Land in America worth?](https://slate.com/business/2013/12/value-of-all-land-in-the-united-states.html)* in 2013, arriving at $15 trillion. In this method, you look at the balance sheets on the Federal Reserve's annual [flow of funds report](https://www.federalreserve.gov/releases/z1/) and subtract the replacement values of all structures from the total value of real estate holdings. There's good reason to believe this method produces estimates that are too low.
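Before we get to the reasons why, here's a minimal sketch of what that land-residual subtraction looks like in practice (the parcel numbers and the straight-line depreciation schedule below are hypothetical simplifications; real implementations use far more detailed cost and depreciation tables):

```python
def land_residual(market_value: float, replacement_cost_new: float,
                  age_years: float, useful_life: float = 70) -> float:
    """Cost approach / land residual: whatever value the depreciated
    structure doesn't account for is attributed to the land."""
    depreciation = min(age_years / useful_life, 1.0)  # crude straight-line schedule
    structure_value = replacement_cost_new * (1 - depreciation)
    return market_value - structure_value

# Hypothetical parcel: sells for $1.0M; the building would cost $600K to
# rebuild today and is 30 years old.
print(land_residual(1_000_000, 600_000, 30))  # ~657,143 attributed to land

# Note the failure mode discussed below: if replacement-cost-minus-depreciation
# overstates what the market would actually pay for the old building,
# the land residual comes out too low.
```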
Smith spends a lot of time attacking the Federal Reserve's figures, with arguments similar to Michael Hudson's critique from a 2001 article called [Where Did All the Land Go?](https://michael-hudson.com/2001/03/where-did-all-the-land-go-the-feds-new-balance-sheet-calculationsa-critique-of-land-value-statistics/) Here's Hudson:

> When the Fed's methodology was examined on a sector by sector basis, serious problems were found in the breakdown between land and structures. For instance, by 1993 the FRB estimated that the land held by all nonfinancial corporations had a negative value of $4 billion.

This does, in fact, check out. The Fed was apparently so embarrassed by this that they stopped reporting land value estimates in subsequent reports, which is why you now have to derive them yourself. This raises two questions: 1) are these problems still in effect today, and 2) if the Fed was so incompetent in the past, how can we trust that later estimates aren't just as wrong in the other direction, i.e. with wildly *over*-stated land values?

Based on Smith and Hudson's critiques, as well as my own analysis of the data, the answers seem to be that 1) the problems seem less severe today (no more negative land values!) but are probably still present to some degree, and 2) any bias is most likely in the downward direction.

It all has to do with the limitations of the cost approach, a problem many of these papers raise explicitly–including Larson (2019). Buildings naturally depreciate over time, while land tends to appreciate. The cost to *replace* your building with a new one of identical design is on average going to be a lot more than what your old building is actually worth, even after factoring in depreciation. That's because the market doesn't care what you spent to build it, it only cares how much value it provides under current conditions.

Here's a contrived example. Say you built an amusement park for $10M ten years ago, and now prospective buyers want to tear it down and build apartments on it. Your roller coasters aren't worth $10M minus ten years depreciation; they're worth zero, even if they're still in decent shape. That's because now there's a shinier and better amusement park down the road that's driven you nearly out of business, and none of your prospective buyers are interested in operating an amusement park. All they want is the land. Your structures might even have *negative* value because it costs money to tear them down.

In short, the cost approach is flawed because subtracting the inflated building price from the full market value of the property overvalues structures and undervalues land.

But what about the other figure in the equation–the full market value of the real estate (land + buildings)? If the Federal Reserve is basing those figures off of assessed values, we have good reason to believe they are too low. For one, only a minority of US states and Canadian provinces [re-assess property values annually](https://www.iaao.org/wcm/Resources_Content/PTAPP.aspx).

Source: [2017 PTAPP survey](https://www.iaao.org/wcm/Resources_Content/PTAPP.aspx) from the International Association of Assessing Officers

And for two, property tax assessments have all kinds of exemptions and carve-outs that serve to depress official statistics.
Let's put aside [Proposition 13](https://en.wikipedia.org/wiki/1978_California_Proposition_13)'s legacy in California for a second and just compare the sale history to the tax history of properties like this one in Manhattan:

Assessed values less than 10% of the extremely obvious full market value

It sold for $5.8 million 10 years ago, and now it's listed for $9 million. And yet the "assessed value" is a mere $600K. What's going on? The assessor is probably *not* saying that the full market value of this obviously multi-million dollar property is $600K. Most likely the assessor gave their best guess of "full market value," and then state statutes forced the assessor to also write down a separate "assessed value" that applies some markdown percentage.

But the really damning part of these tax assessment records is that the land value assessment hasn't budged. The price has gone up over $3 million in ten years, and you're telling me the land value hasn't changed at *all*? Fuhgeddaboudit.

Agencies that don't collect much property tax don't have strong incentives to strive for accurate assessments. This creates a vicious cycle where official statistics are severely depressed, and those same statistics are then used as proof that land just isn't a big deal.

If the Federal Reserve's data for total real estate values is at all based on property values from official sources, we would expect them to be 1) out of date and 2) discounting the property's total market value because of exemptions, caps, and other issues (Smith makes this same critique of both the Fed's and Larson's data). Taken together with the fact that improvements are likely being over-valued based on naïve replacement costs + depreciation formulas instead of the actual present market value, this would imply that the Federal Reserve method for estimating all of America's land values at $24 trillion is a **conservative lower bound,** and the same goes for all the other methods using the cost approach.

**From Albouy to Smith**

Okay, so let's look at Smith's method. Instead of doing a whole new study, he singles out Albouy as having the best methodology and makes some adjustments. You see, Albouy estimated the value of urban land *alone,* leaving out federal lands, agricultural lands, and things like water rights and natural resources, which accrue rental income and are considered "Economic Land" by Georgists.

Smith starts by extrapolating Albouy's last given figure to the present day by applying the observed growth in the housing market (presumably due to appreciation of land values). He then adds on values for the missing types of land by using other existing estimates. It all comes to $44 trillion.

We can check his work pretty quickly. All the figures we have for the last decade that don't come from Smith grow at a very similar rate, with the Federal Reserve line growing at a steady ~$1.4T a year on average. So let's extrapolate Albouy at the same rate:

Interestingly enough, that puts us just over Tideman's estimate but short of Smith's final value by about $11T.

The USDA tells us the average value of farm land was [$3,160 / acre in 2020](https://www.nass.usda.gov/Publications/Todays_Reports/reports/land0820.pdf). Multiply that by [896.6 million acres](https://www.statista.com/statistics/196104/total-area-of-land-in-farms-in-the-us-since-2000/) and you get $2.8 trillion.
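(For the skeptical, that multiplication is a one-line check:)

```python
price_per_acre = 3_160     # USDA 2020 average value of US farmland, $/acre
acres_in_farms = 896.6e6   # total US land in farms, per the Statista figure above
print(price_per_acre * acres_in_farms / 1e12)  # ~2.83 -> about $2.8 trillion
```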
Smith further cites Richard Ebeling, who in 2015 estimated the value of all of the federal government's holdings in land and mineral reserves at [$5.5 trillion](https://www.fff.org/explore-freedom/article/there-is-no-social-security-santa-claus/). Smith applies an extrapolation to update this value to 2020, putting it at $6.6 trillion. If we just pop $2.8T + $6.6T on top of the extrapolated line from Albouy, that gives us this:

Which gets us pretty close to Smith's figure.

The USDA figure seems reliable, because most farmland doesn't have structures on it and is just pure land. The USDA can value the land just by observing market transactions. As for Ebeling, you kind of have to take his word for it, as he doesn't give a methodology. Ebeling is also a hardcore libertarian who advocates selling off all federal lands to reimburse taxpayers (wonder how he'd feel seeing Smith use his estimates to advocate Georgism!).

But in any case, if you buy all of that, you get pretty close to Smith's $44T figure, which is itself close to Dwyer's observed ratio in Australia of land rents as 21% of national income (provided you use Smith's 10:1 ratio to convert land rents to land values). Of the original studies, Albouy has the most convincing methodology, and Smith's additions and extrapolations seem plausible. But to be fair, let's set Smith ($44T) as an upper bound, and the Federal Reserve figure ($24T) as a lower bound.

I should note here that a lot of this land is already paying property taxes, which is at least partially a Land Value Tax. Research shows that Land Value Taxes are "capitalized" into land prices. I'll explain this next time in Part II, but for now, suffice it to say that if an income-generating piece of land produced $10,000 a year for you, and you knew you had to pay $5,000 a year for the privilege of holding it, you'd probably only be willing to buy it for half as much as you would if the tax didn't exist. Since the point of this exercise is to estimate how much a blanket LVT could raise, a more rigorous study would work out how much present land prices have been depressed by existing land taxes and adjust these figures upwards accordingly to get a more accurate estimate of the full land rents.

**Land Rents vs. Budgets**

Now we have to convert land values to land rents–the amount of income the land is capable of generating each year. To convert between land values and land rents, we need to use the **[capitalization rate](https://en.wikipedia.org/wiki/Capitalization_rate),** or "cap rate." If your land costs $1M and earns $50K/year, the cap rate is $50K/$1M, or 5%. This is the ratio between the net operating income produced by a plot of land ($50K) and its market value ($1M).

[According](https://arbor.com/research/q1-2021-single-family-rental-investment-trends-report/#leadbot-0e2c1a70-f989dce9-fb3c0420-b714fff4) to [various](https://mapping.cbre.com/maps/caprate/app/) [sources](https://web.archive.org/web/20210806054719/http://cbre.vo.llnwd.net/grgservices/secure/US%20Cap%20Rate%20Survey%20Q3%202020.pdf?e=1628228873&h=a614ce876e66ea42c9785fbba27a658c), cap rates in the USA range from 3.5% on the low end to as much as 11% on the high end, depending on the type of property (offices have a higher rate, residential has a lower rate, etc.). However, the vast majority of land values in the United States are urban, so we should weight our cap rates towards urban figures. Call it a low of 5% and a high of 8%.
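Since converting values to rents is just multiplication by the cap rate, here's the whole grid as a short sketch, using the two value bounds and the two cap rates chosen above. (The exact percentages in the comparisons below may differ slightly from these, since some of them divide by the 2020 budget rather than 2019's.)

```python
land_values_T = {"Fed": 24, "Smith": 44}   # total US land value, $ trillions
cap_rates = {"low": 0.05, "high": 0.08}

# Annual land rent = land value x cap rate
land_rents_T = {(source, level): round(value * rate, 2)
                for source, value in land_values_T.items()
                for level, rate in cap_rates.items()}
print(land_rents_T)
# {('Fed', 'low'): 1.2, ('Fed', 'high'): 1.92,
#  ('Smith', 'low'): 2.2, ('Smith', 'high'): 3.52}

budget_2019_T = 4.4  # 2019 federal expenditures, $ trillions (discussed below)
for key, rent in land_rents_T.items():
    print(key, f"{rent / budget_2019_T:.0%} of 2019 federal spending")
# -> 27% / 44% (Fed) and 50% / 80% (Smith)
```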
Smith suggests a blanket cap rate of 10%, but I'm erring on the conservative side.

The 2005 federal budget had $2.5 trillion in expenditures, increasing to $4.4 trillion in 2019, with a sharp jump to $6.6 trillion in 2020 thanks to COVID ([source](https://www.presidency.ucsb.edu/statistics/data/federal-budget-receipts-and-outlays)). It's immediately clear that regardless of valuation method, America's total land values ($24-44T) are significantly higher than the annual federal budget. But we care about land *rents*, not land *values.* It's not like the plan is to sell off all of America's land just to pay for a few years' spending.

If we plug in the figures from the Federal Reserve and Smith, that gives us the following figures for America's annual land rents (in trillions of dollars):

Great, after all that math we finally have a table that tells us how much money LVT might be able to raise. Keep in mind even the optimistic figures don't account for dynamic effects and aren't necessarily pricing in all other sources of "Economic Land" such as mineral rights, water rights, etc. They also don't apply any estimates for how much land values would rise if restrictive zoning ordinances were removed. Now we just need to compare that to America's budget figures.

---

**How Much Money Can We Raise from Land Rents?**

America's annual land rents are sufficient to cover between 18-40% (Fed) and 34-78% (Smith) of annual federal spending. The low-end figures come from 2020, which was a major outlier in federal spending thanks to COVID.

To put those amounts in context, in the [2019 federal budget](https://www.cbo.gov/publication/56324), total spending was $4.4 trillion. We spent $676 billion on defense (15%), Social Security was $1 trillion (23%), and Medicare + Medicaid together were $1.05 trillion (24%). Let's compare those to our four individual estimates for annual land rent values:

Even the lowest estimate, the Federal Reserve method using a 5% cap rate, is enough to cover any one of Defense, Social Security, or Medicare + Medicaid, all by itself. And if you believe Smith's figure at the 8% cap rate, we could cover *all three of those things* and still have enough left over to cover a third of all other spending.

Here's another point of comparison. There are [745 billionaires in America](https://www.nytimes.com/2021/10/28/business/america-billionaires.html), and some people think we should tax them to pay for all our stuff. As obscenely rich as billionaires are, the amount of money it takes to run a country at scale is even more obscene. If we were to "eat the rich" and forcibly expropriate 100% of billionaires' money, we would raise a one-time lump sum of about $5 trillion. That's a lot! But land rents by comparison can raise between 22-44% as much *every single year,* and that's at the low cap rate.

This is not an argument against taxes on billionaires, mind you (I have no problem with the rich paying their fair share). It's simply meant to show that land rents represent a lot more value than people realize, and, unlike one-time personal wealth expropriations, they recur annually. Furthermore, land, unlike capital, can't flee the country and take investment and industry with it.

Fun fact: taking all the billionaires' money yields a little less than selling off all of America's federal lands and mineral reserves (Ebeling's estimate).
So whether you opt for the libertarian hobby horse (sell federal lands) or the leftist one (eat the rich), either could at best pay for a single year's spending on the scale of 2020's budget.

But wait, what about [state budgets](https://higherlogicdownload.s3.amazonaws.com/NASBO/9d2d2db1-c943-4f1b-b750-0fca152d64c2/UploadedImages/SER%20Archive/2021_State_Expenditure_Report_S.pdf)? Many states are funded by property taxes, so if we're going to shift to land value taxes, we need to take states into account, too. So let's add state budgets into the mix (minus federal funding to states so we're not double counting). If we do that, we drop to 18-30% (Fed) or 36-58% (Smith) of annual spending.

If we look ONLY at net spending from all state budgets (all 50 state government outlays minus federal funding to states), you could cover anywhere from 67-121% (Fed) or 142-230% (Smith) with land rents, implying that states–particularly the ones with big cities–could easily fund themselves off of LVT alone.

But let's look at this another way. The federal government hasn't run a balanced budget since that [one time in 1998](https://www.fool.com/investing/general/2013/09/30/was-americas-budget-really-balanced-in-the-90s.aspx), so the proper way to evaluate LVT against the status quo isn't comparing against total annual expenditures, but against total annual tax *receipts*. By this measure, all of America's land rents could cover anywhere from 30-56% (Fed) or 60-103% (Smith) of what our current tax receipts bring in. And if you add in state tax receipts too, you get somewhere between 19-36% (Fed) and 41-68% (Smith).

I couldn't find a source for state tax receipts, but most states are required to run balanced budgets, so I'm just assuming that the state budget expenditure figures from above are the same as their receipts. If I had more precise figures from the few states that do run deficits, that would only serve to reduce the assumed amount of tax receipts from those states, which could only raise the percentages given here.

Finally, what about [local governments](https://state-local-finance-data.taxpolicycenter.org/pages.cfm)? That's where a lot of the property taxes currently go anyways (not to mention regressive taxes like sales taxes and lotteries). If we add in all their tax money too, and compare it to annual land rents, that drops us to 14-26% (Fed) or 29-49% (Smith) of annual receipts. Keep in mind this doesn't account for property values that already have state and local property tax burdens priced into them. If we were to factor that in, it would raise these figures significantly.

No matter how you slice it, even the lowest estimate of 14-26% of all federal, state, *and* local tax receipts is a lot of money, especially when you consider that it recurs annually and can cover any single giant line-item in the federal budget. And Smith's 29-49% figure for land rents compared to *all tax receipts for every level of government combined* would be bonkers.

Restricting ourselves to just the federal level, Smith's 60-103% figure is more than enough to entirely eliminate individual income taxes on the low end (about [50% of federal receipts in 2019](https://www.govinfo.gov/content/pkg/BUDGET-2019-BUD/pdf/BUDGET-2019-BUD.pdf)) and is in clear striking distance of a full-on Federal Single Tax on the high end. Of course, if you think Smith is wrong and the Federal Reserve's figures have it nailed, then the Single Tax dream might be out of reach.
How big a deal this is depends on what you think about balanced budgets. If you believe in [Modern Monetary Theory](https://www.investopedia.com/modern-monetary-theory-mmt-4588060), then you don't care about running a balanced budget. Under MMT, a sovereign government that prints its own money is limited only by productive capacity and physical resources, summed up best by the famous Keynes quote, "anything we can do, we can afford." I'm not personally advocating for or against this view–just pointing out that if you're in the MMT camp, then you already don't care about matching 100% of government spending with revenue raised from taxes.

But what if MMT is bunk, and we also insist on the Fed's figures? Then we're left with two options: either accept that doctrinaire Single-Taxism is done for (in the USA, at least) while still accepting LVT as part of this balanced budget breakfast, or else look into those "dynamic effects" that Dwyer's Australian figures intentionally left out, particularly a tantalizing theory most commonly associated with [Mason Gaffney](https://www.masongaffney.org/workpapers/WP096%202005%20The%20Physiocratic%20Concept%20of%20ATCOR.pdf) called [ATCOR](http://www.wealthandwant.com/themes/ATCOR.html)–"All Taxes Come Out of Rents."

**ATCOR and the Henry George Theorem**

ATCOR supposes that a reduction in taxes on income and capital–independent of any other policy interventions–will actually cause land values to *rise* by a proportionate amount. This means that Georgists who suppose that any old LVT policy will cause land prices to go down need to be careful. If you un-tax labor and capital, but don't *also* sufficiently raise taxes on land, land prices (and rents) will actually go *up,* because someone working on that land is now taking home more income and therefore capable of paying more in rent (see [Ricardo's Law of Rent](https://www.youtube.com/watch?v=jiGKwi43R0Q)).

However, with the right policy, this can be a good thing. If ATCOR is true, a Single Tax policy will always work. Abolishing capital and income taxes causes the lost tax revenue to get soaked up by rising land values, which you can then capture with a 100% LVT. You're raising the exact same amount of revenue as before, but the elimination of income and capital taxes lifts a burden off of labor and investment while LVT keeps housing prices and rents down, boosting the economy and lowering the cost of living. This economic boost in turn raises land values, which are fully captured by LVT, thus keeping land values stable.

Then there's the Henry George Theorem. Nobel laureate Joseph Stiglitz [published it in 1979](https://academiccommons.columbia.edu/doi/10.7916/D8JM2M80), and it says that under certain conditions, expenditures on public goods will be soaked up by land rents to such a degree that a 100% LVT is *necessarily* sufficient to finance all public goods spending in perpetuity.

A pure "public good" is something that is "non-rival" and "non-excludable." Non-rival means that your using it doesn't leave any less of it for me to use, and non-excludable means that there's no way anyone can keep me from benefiting from it once it's out there. Common examples include a fireworks display, national defense, and clean air. The HG Theorem doesn't claim to apply to other forms of public spending, such as mass transit, which are both excludable and rival to some degree.
(Transit has a capacity limit, and even if we've abolished racially discriminatory [Jim Crow](https://en.wikipedia.org/wiki/Jim_Crow_laws) laws, the fact that they were even possible proves excludability.)

Nevertheless, there's strong evidence that public spending on non-"pure public goods" [raises land values too](https://www.apta.com/wp-content/uploads/The-Real-Estate-Mantra-Locate-Near-Public-Transportation.pdf), just perhaps not to the same degree. I contacted Nicolaus Tideman, who tells me that a variant of the HG Theorem for non-pure-public-goods holds that "the combination of land value increases and charges equal to marginal cost will finance these expenditures." However, "neither theorem applies if people have different tastes or if benefits do not decline with distance."

I think what he's saying is that most public works can be funded entirely by the increases in land value they generate, supplemented with modest user fees. I also think he's saying it depends on what kind of public work it is. If you spend public money on a truly hideous art installation that only three people like, that's not going to raise land values. And if your public work is of equal benefit to everybody no matter where they live, that's also not going to raise the value of land, because no location benefits from it more than any other.

In any case, the status quo is that the bulk of spillover value created by public spending is captured by private landowners. Governments then have to tax citizens' labor and capital to pay for the next round of improvements or else go into debt with deficit spending and bond initiatives (a hidden tax on savings if it causes inflation).

If you put ATCOR, the Henry George Theorem, and observations about non-pure-public-goods-spending together, one could postulate a virtuous cycle where government investment is always able to pay for itself without creating a drag on the economy and *without* any deficit spending or debt.

BUT. Even if we don't count on any of those effects, the above figures are already pretty astounding, even using the pessimistic Federal Reserve figures at the lower capitalization rate.

If you want to see someone much smarter than me put all this together into an actual policy paper that proposes a modest land value tax to boost the economy, abolish the income tax, *and* balance the budget, check out the paper Nicolaus Tideman sent me: *[Post-Corona Balanced-Budget Super-Stimulus: The Case for Shifting Taxes Onto Land](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3954888)* (co-written with Kumhof, Hudson, and Goodhart).

And in case you're wondering who Nicolaus Tideman is, here's a quick bio from his [Wikipedia Page](https://en.wikipedia.org/wiki/Nicolaus_Tideman):

> Tideman was an Assistant Professor of Economics at [Harvard University](https://en.wikipedia.org/wiki/Harvard_University) from 1969-1973, during which time from 1970-1971 he was a Senior Staff Economist for the President's [Council of Economic Advisors](https://en.wikipedia.org/wiki/Council_of_Economic_Advisors). Since 1973 he has been at Virginia Tech, with various visiting positions at Harvard's [Kennedy School of Government](https://en.wikipedia.org/wiki/Kennedy_School_of_Government) (1979-1980), [University of Buckingham](https://en.wikipedia.org/wiki/University_of_Buckingham) (1985-1986), and the [American Institute for Economic Research](https://en.wikipedia.org/wiki/American_Institute_for_Economic_Research) (1999-2000).
---

We can quibble about the estimation methods and the cap rates, but by George, land rents represent a huge amount of value. If nothing else, a high LVT could offset many unpopular and inefficient taxes without cutting the budget, or it could be used to fund important programs we supposedly can't currently afford.

**Conclusion:** America's land rents are, in fact, equal to a sizable % of the annual budget.

Ironically, by demonstrating that land taxes can raise a large amount of money, I've actually set up another criticism: land taxes don't raise too *little* revenue, they raise *too much*. This critique is mostly made on moral/ideological grounds and typically comes from the right–to which I'll just let arch-conservative William F. Buckley (apparently both a Georgist *and* a full-on Single-Taxer) make the case.

At the end of the day, you either accept the moral arguments for making land value common property or you don't. If Buckley's argument that "a parking lot next to the Empire State building should be in principle taxed at the same rate as the skyscraper" doesn't sit right with you, I'm not sure appealing to empirics is going to convince you, as the disagreement likely comes from a much more fundamental place. And if you think all taxation is theft, well, Land Value Tax is a tax, so presumably you have a problem with it on those grounds. But if you accept that you live in a society that occasionally taxes things, you might opt for what Milton Friedman called "the least bad tax."

---

So, we've established that land value is the bulk of urban real estate values (and urban real estate values are the bulk of total real estate values), and land rents are large enough to make a big dent in any budget. But here's something you can be sure affects everyone: the share of land value represented in bank loans.

### 1.3. Land represents a significant % of all major bank loans

Banks exist for at least two stated purposes–to give people a safe place to store their money and to provide capital in the form of loans to people engaged in productive activities. This song from Mary Poppins is a decent summary of the Econ 101 story we're told about what banks do with their money.

Banking is obviously way more complicated than "you give the bank your money and then they lend it out to people," what with fractional reserve banking, the Federal Reserve, and all the rest of it. But we don't really care about that side of things for the purposes of this question. All we want to know is *given that banks have money, what do they do with it?*

Lately, they lend it out to people who want to buy real estate, according to *[The Great Mortgaging: Housing Finance, Crises, and Business Cycles](https://www.nber.org/system/files/working_papers/w20501/w20501.pdf)* by Jordà, Schularick, and Taylor. This chart shows three snapshots from 1928, 1970, and 2007 of the share of all bank lending that goes to real estate for a selection of major countries around the world. Here's another visualization that takes all the countries together and plots it over time, going back to the late 1800s.

As we can see above, this is truly a worldwide phenomenon, and it's been on a continuous upward trend since about 1950. As of today, the real estate share of bank lending has grown to nearly twice the level it was in Henry George's time. Let's see if we can spot check some of these stats by looking up another source.
[Positive Money](https://positivemoney.org/2018/06/how-has-bank-lending-fared-since-the-crisis/) provides this graph breaking down per-sector lending in the UK. They give the Bank of England itself as the source for their data.

Source: *Table C1.2 Bank of England statistics via [Positive Money](https://positivemoney.org/2018/06/how-has-bank-lending-fared-since-the-crisis/)*

Counting pixels and working out the percentage by hand, it looks like real estate (the two blue regions) combined for about 45% circa 2007 and climbed to 60% in 2017. The 2007 figures are smaller than those given in the above charts from *The Great Mortgaging* but are still huge in either case. Is there anywhere else we can check easily?

New Zealand (which isn't covered in *The Great Mortgaging*) has this [really cool dashboard](https://bankdashboard.rbnz.govt.nz/asset-quality) that breaks down all the bank loans in their country. As you can see, the majority of loans are for housing. Here's another visualization of the same data.

Source: [Interest.co.nz](https://www.interest.co.nz/banking/112371/nz-banks-housing-lending-continues-rising-percentage-their-overall-lending-business)

I could dig further, but I think I've seen enough to convince me of this general point. The majority of bank loans in a lot of major developed countries (including the US, UK, and New Zealand) are for real estate, and, as we've already shown, the majority of real estate's value is concentrated in land. Whether or not land represents a clear majority of bank loans, it's undeniably a big chunk.

**Conclusion:** Land represents a significant % of all major bank loans.

Okay, so what? Why is it such a big deal if banks spend a lot of money chasing after real estate? Although financing the construction of new houses is a good thing, none of the money tied up in the buying and selling of land is itself productive, because no new tangible wealth is created. Also, all this cheap credit for land just means more bids driving up land prices. So not only are real estate bank loans not making the economy any better, they're actively making it worse. Anyone who lived through 2008 knows firsthand how seemingly abstract real estate investment shenanigans can come smashing into your everyday life and bring [the entire world economy to its knees](https://en.wikipedia.org/wiki/Subprime_mortgage_crisis). China in particular [is now grappling](https://www.nytimes.com/article/evergrande-debt-crisis.html) with many of the same problems.

Now take a look at this eye-popping quote from [Tideman's paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3954888), section 3.7.1, "The Financial Sector," emphasis mine:

> Hudson (2012, 2018) has shown that **most land rent is paid out as interest to banks** and that bank credit is a major driver of increases in housing prices (“real estate is worth whatever the bank will lend against it”). Further empirical support is offered by Favara and Imbs (2015), and La Cava (2015) finds that this can explain the increase in the share of housing in capital income studied by Rognlie (2015). Ryan-Collins et al. (2017) and Turner (2017) argue that a self-reinforcing cycle between bank lending and land value increases has caused a shift in bank lending from business loans to mortgages and the inflation of land prices, and this has impaired financial stability, as also argued in Keen (2017).

That Rognlie (2015) citation is worth unpacking in particular.
Rognlie got a lot of attention for pointing out some major flaws in Thomas Piketty's famous book, [Capital in the 21st Century](https://en.wikipedia.org/wiki/Capital_in_the_Twenty-First_Century). Piketty's main argument is that the rate of return to capital is greater than the overall rate of economic growth, and that this is leading to wealth concentration and inequality. Rognlie pointed out [in his paper](https://www.brookings.edu/bpea-articles/deciphering-the-fall-and-rise-in-the-net-capital-share/) that Piketty was improperly handling the depreciation of capital assets. Once you account for this, you find the outsized returns to "capital" driving inequality are due almost entirely to housing. The unaffordability of housing appears to be not a mere symptom of inequality but rather a key driver of it. And banks contribute to that unaffordability by acting as the shadow rentiers of the entire economy.

### 1.4. Land represents a significant % of all gross personal assets

Here are two graphs that you might remember from the book review. The first shows that something like 40% of all gross personal assets in Spain represent land. About 25-30% are "financial assets" that must ultimately cash out to some mixture of real assets (land and capital), so the true percentage due to land is probably higher than 40%.

Source: [Wealth in Spain, 1900-2014](https://web.archive.org/web/20180821115745/http://wid.world/wp-content/uploads/2018/02/WID.WP_ABM_WEALTHSPAIN_2018.pdf) by Blanco, Bauluz, & Martínez-Toledano

The second chart shows that about half of real assets in the United Kingdom are due to land. Based on data from the United Kingdom National Accounts: The Blue Book 2017 (published Oct 31, 2017). The "privileges" in "Land and privileges" are things like taxi medallions and patents, which were worth "almost zero" according to Nate Blair, who prepared the chart.

Here are two graphs from Thomas Piketty breaking down "national capital" for Britain and France by sector:

Source: *Capital in the 21st Century* by Thomas Piketty

In the olden days, the majority of national capital was in agricultural land. Nowadays, the majority of it is in housing. I can work out that in 1700, about 76% of Britain's and 80% of France's national capital was real estate. In 2010, those figures were 55% and 61%, respectively.

What about the US? Here's a figure from [Tideman & co's big paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3960235), which uses OECD numbers to chart the share of household wealth in the USA due to "non-produced assets" (conventional land, natural resources, and everything else that isn't a kind of capital that humans create, which Georgists call "Land"). As we can see, it hovers around 40%.

Land represents about 40% of household assets in the USA. It also represents more than 40% of asset values in Spain and somewhere between 50-60% of asset values in France and the UK. How about the rest of the world? According to [this giant report by McKinsey](https://archive.md/DRUiI), real estate holdings account for two-thirds of all *global* real assets, with more than half of that coming from land. If you add together the 35% due to conventional land and the 4% due to "non-produced" assets (which, among other things, includes mineral and energy reserves), you get the amount represented by the Georgist definition of Land: 39% of all real assets in the entire world.
That figure rises to 43% if you also count IP as "Economic Land." That seems like a pretty big deal to me.

**Conclusion:** Land does, in fact, represent a significant % of the value of gross personal assets in developed countries, including the US.

---

Now some of you might be nervous at this point. Are those awful Georgists about to ruin me with LVT? I can certainly sympathize, seeing as I'm a homeowner myself. This is where I think the Citizen's Dividend (UBI) should probably come in.

Let's use $1.2 trillion in 2020, the most pessimistic figure for America's land rents (the Federal Reserve method at the low 5% capitalization rate). If we split that amount among all ~209 million American citizens over the age of 18, then anybody sitting on a property worth less than ~$230K is going to either break even or turn a profit.

This simplistic table makes a few assumptions, of course. We fix land share at 50% and capitalization rate at 5%. But keep in mind that *every citizen* would get the dividend, so if you have two adults in your household, the table breaks even at just under $500K in property value. (There's a quick sketch of this arithmetic at the end of the post.) This is not a recipe for bankrupting the middle class. In fact, it compensates everyone for helping make America a desirable place to live. This compensation is paid primarily by those who gatekeep the most valuable locations and natural resources, things which were not brought into existence by anyone's hard work or investment.

Also, keep in mind that LVT would see the elimination of the portion of property tax that falls on buildings. I just checked my own property tax records (I live in the suburbs of a medium-sized town far from any major urban cores). If the assessed land share more than doubled to 40%, under a 100% LVT regime I'd actually save $545.05 on my property taxes every year–and that's *without* a Citizen's Dividend.

### 1.5. Land ownership is highly concentrated among the wealthy

Bill Gates, the world's fourth richest person, owns 242,000 acres of farmland across the U.S., making him the [#1 owner of private farmland in the USA](https://www.forbes.com/sites/arielshapiro/2021/01/14/americas-biggest-owner-of-farmland-is-now-bill-gates-bezos-turner/). But that's just farmland. If you're talking about [all land in the USA](https://landreport.com/americas-100-largest-landowners/), Gates ranks #49. Jeff Bezos is #25, and Ted Turner is #4. Rich people own a lot of land. So what % of total real estate values are owned by the top 1%, the top 10%, and the top 50%? Quite a lot, according to the Federal Reserve.

In other words, of all the real estate value in the United States, the top 1% own 14.7% of it, the top 10% own 44.8% of it, and the top 50% own 88.5% of it. Here's how that compares against total assets. Of all asset values in the United States, the top 1% own 29% of it, the top 10% own 65% of it, and the top 50% own 94.7% of it.

So compared to total asset values, it looks like real estate is a little more evenly distributed, but it's still highly stratified in an absolute sense. The top 1% own almost 15% of the country's total real estate value, and the top 10% own almost half of it. Keep in mind that it's on this basis that the top 1% and the top 10% gain the ability to collect rent from everybody else. But where the top 1% really get their kicks is in financial assets. Not to mention ownership of private businesses. Once again, we're back to untangling the value of financial assets, which is beyond the scope of this particular investigation.
In a sane world, the "ground truth" value of most financial instruments like stocks and bonds would terminate in good old-fashioned capital and labor, but we've already been through one crisis where much of the world's paper wealth turned out to be just [elaborate incantations cast upon regular people's mortgages](https://www.finra.org/investors/learn-to-invest/types-investments/bonds/types-of-bonds/mortgage-backed-securities#:~:text=Mortgage-backed%20securities,%20called%20MBS,million%20worth%20of%20such%20mortgages.). From what we've seen about how many bank loans are tied up in real estate, we're well on our way back there.

What about sources other than the Fed? [The Economist](https://www.economist.com/briefing/2015/04/04/the-paradox-of-soil) gives similarly stratified figures, which [I'm told](https://www.reddit.com/r/georgism/comments/pryhtf/land_value_ownership_inequality_stats/hdqf9m7/?context=3) ultimately come from [here](https://eml.berkeley.edu/~saez/SaezZucman2015.pdf). Rich people own a lot of the country's land value, and in fact, they own most of it. On top of that, [housing is the world's biggest asset class](https://www.economist.com/special-report/2020/01/16/how-housing-became-the-worlds-biggest-asset-class).

The really troubling bit is the generational gap. Every generation has lower homeownership rates than the previous one. Okay, but Millennials are younger. Obviously they have lower homeownership rates than older people. Maybe they'll catch up? Evidence suggests they won't.

Not only is land ownership concentrated among the wealthy, it's concentrated among the *old* and wealthy. [Life expectancies for the old and rich are increasing](https://www.nber.org/papers/w27509), delaying both inheritances and estate taxes past the point where they would do the most good–while members of the next generation are still establishing themselves and/or trying to build families. It's important to realize that Millennials are no longer young. I'm a Millennial, and I'm already 37, hardly a spring chicken. What's the picture going to look like for Zoomers?

**Conclusion:** Land ownership is, in fact, highly concentrated among the wealthy.

---

We've established the following well beyond the preponderance of evidence:

✅ Most of the value of urban real estate is land

✅ All of America's land rents equal a sizable % of government spending

✅ Land represents a significant % of all major bank loans

✅ Land represents a significant % of the value of gross personal assets

✅ Land ownership is highly concentrated among the wealthy

**Conclusion**

By George, land is a really big deal. Land is not some minor concern that only matters in pre-industrial agricultural economies. Everybody needs land, but nobody can make any more of it. You can't work, eat, sleep, or even poop without access to land (try doing any one of those things in a forbidden location and see what happens to you).

The housing crisis is driven by inflated land prices, which in turn drives poverty, homelessness, and all other manner of social ills. And when we try to fix those social ills with public spending, land often soaks up and privatizes the value the spending creates. This subsidizes private actors who turn right around and use those gains to jack up everybody's rent, and the vicious cycle continues. And all the while banks continue to pour fuel on the fire.

By George, Land Value Tax would solve this.

*[…to be continued]*
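One postscript on the arithmetic: the section 1.4 break-even claim is easy to check yourself. Here's a minimal sketch using only the assumptions stated above ($1.2 trillion in annual land rents, ~209 million adult citizens, a 50% land share, a 5% cap rate); the function name is mine, not anything from the Fed's methodology:

```python
# Sanity check of the section 1.4 break-even table, using the post's own
# assumptions: $1.2T total land rents, ~209M adult citizens, land is 50%
# of a property's value, 5% capitalization rate.

TOTAL_LAND_RENT = 1.2e12   # dollars/year (the pessimistic Fed-based figure)
ADULT_CITIZENS  = 209e6
LAND_SHARE      = 0.50
CAP_RATE        = 0.05

dividend = TOTAL_LAND_RENT / ADULT_CITIZENS        # ~$5,742 per adult per year

def annual_lvt(property_value):
    """Yearly bill under a 100% LVT on the land portion of a property."""
    return property_value * LAND_SHARE * CAP_RATE

# Break-even: the property value at which the LVT bill equals the dividend.
break_even = dividend / (LAND_SHARE * CAP_RATE)

print(f"dividend per adult:        ${dividend:,.0f}/yr")
print(f"break-even property value: ${break_even:,.0f}")      # ~$230K
print(f"two-adult household:       ${2 * break_even:,.0f}")  # just under $500K
```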
# Diseasonality

[*epistemic status: conjecture and speculation in something that isn’t really my field*]

**I.**

It’s still [not totally clear](https://pubmed.ncbi.nlm.nih.gov/17222079/) why some diseases are seasonal. Seasonal diseases usually peak in late winter - so around January/February in the Northern Hemisphere and July/August in the Southern. Around the equator, which lacks seasons, they’re less predictable and happen throughout the year. The best known seasonal diseases are flu and colds. But viral diarrhea and chickenpox also qualify, as do older mostly-eradicated diseases like measles and diphtheria.

The seasonal flu ([source](https://blogs.sas.com/content/graphicallyspeaking/2019/03/18/how-deadly-was-the-flu-in-2019/))

The novel coronavirus is probably seasonal-ish, although it’s hard to tell since so much stuff keeps happening to make it better (vaccines) or worse (new variants).

The most common theories for disease seasonality are:

1. Pathogens like the cold
2. Pathogens like low humidity
3. People are cramped indoors during the winter
4. People have low vitamin D during the winter, and vitamin D helps fight pathogens

None of these are really satisfactory on their own. Cold and humidity are definitely important - [scientists can](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4097773/) make flu spread faster or slower in guinea pigs just by altering the temperature and humidity of their cages. But it can’t *just* be cold and humidity. If it was *just* cold, you would expect flu to track temperature instead of seasonality. Alaska is colder in the summer than Florida in the winter, so you might expect more summer flu in Alaska than winter flu in Florida. But Alaska and Florida both have lots of flu in the winter and little flu in the summer. (if it was just humidity, same argument, but change the examples to Arizona and Florida.)

It’s the same story with people being cramped indoors. Common-sensically, this has to be some of the story. But if it were the most important contributor, you would expect to see the opposite pattern in very hot areas, where nobody will go out during the summer but it’s pleasant and balmy in the winter. But winter diseases don’t switch to summer in Arizona or Saudi Arabia or other hot locales.

If it was just vitamin D…look, it’s not vitamin D. Nothing is ever vitamin D. People try so hard to attribute everything to vitamin D, and it never works. The most recent studies show it [doesn’t prevent colds or flu](https://medicalxpress.com/news/2021-01-clinical-trial-vitamin-d-ward.html), and I think the best available evidence shows [it doesn’t prevent coronavirus](https://astralcodexten.substack.com/p/covidvitamin-d-much-more-than-you) either. African-Americans, who are all horrendously Vitamin D deficient, [don’t get colds](https://www.statista.com/statistics/696481/frequency-of-catching-colds-adults-us-by-ethnicity/) at a higher rate than other groups (they do get flu more, but they’re vaccinated less, so whatever).

Might it be some combination of these things? Maybe Alaska is cold all year, but gets drier in the winter? Maybe people stay indoors in Arizona in the summer, but it’s not cold enough for flu to spread? If you came up with some multidimensional dryness-coolness-indoorness metric, then maybe places could be high on one or two in the summer, but the combined metric would always be highest in the winter everywhere. This is possible.
I just find it hard to believe that the place where this metric is highest in summer doesn’t even overlap with the place where the metric is lowest in winter.

**II.**

What about ultraviolet light? This has been getting more press recently as people realize that UV kills pathogens and that this might be part of the reason outside seems safer than indoors. Does this fail by the same argument as cold and humidity? That is, does Florida in winter get more ultraviolet light than Alaska in summer? Here’s what [a paper](https://ultrasuninternational.com/wp-content/uploads/grigalavicius-et-al-2015_daily_seasonal_and_latitudinal_variations_in_solar_ultraviolet.pdf) has to say:

It’s a bit weird! In summer, polar and tropical areas get about the same amount of UV; in winter, the tropical areas stay about the same and the polar areas crash to almost zero. See also the world’s most annoying and hard-to-read graph, from [Nicastro et al](https://www.nature.com/articles/s41598-021-94417-9):

Look at the left figure, and focus on the dark blue line (July) and dark black line (December). Miami, Florida, is at a latitude of 25 degrees; Juneau, Alaska, is at about 60. If I’m reading the graph right, Miami in December gets enough UV to kill a virus in two minutes. Juneau in July gets enough UV to kill a virus in…also about two minutes.

How much can UV matter? After all, most people are inside in the winter anyway. One possible answer is that it doesn’t matter much in these situations, but it’s a good explanation for why eg Arizona (where people are inside in the summer) still gets a seasonal flu in the winter. Another possible answer is that, as far as I can tell, even indoor UV light can matter. About 75% of UV-A radiation gets through window glass. Although most people think of UV-C or even UV-B as the most effective antiviral agent, another annoying and hard-to-read graph from Nicastro et al suggests otherwise:

This doesn’t mean 75% of UV’s viral killing power makes it inside - unless you live in a greenhouse, only a tiny bit of your house is windows, so no matter how much light each window lets through, most of the sunlight will get blocked. I’m not really sure how to model this because it might depend on a virus particle’s chance of getting hit by a sunbeam, which depends on a lot of factors. Still, it doesn’t seem impossible to me that ambient UV levels can matter even indoors.

The same amount of UV light reaches Alaska in the summer as Florida in the winter. But that means that even if UV was the entire explanation, we could only explain why Alaskan summer is *no worse than* Floridian winter in terms of flu. But Alaskan summer isn’t just no worse than Floridian winter, it’s actually much better. So we can add UV to our combined multi-dimensional metric idea, and it will improve it a little. But I’m still not completely satisfied with it. Seasonality seems too strong, even after you account for all these factors.

**III.**

What happens when we think about this dynamically? Suppose that Arizona, uniquely among places in the world, is more vulnerable to flu in summer than winter. But if all Arizona’s neighbors are flu-free in summer but severely-impacted in winter, they would be more likely to transmit the flu to Arizonans in the winter. Or if all new flu strains originate in China, and China develops a new flu strain in the late autumn, then it will sweep across the immunologically-naive world in winter, and Arizona will get hit just like everywhere else.

On a first pass, this doesn’t really work.
The coronavirus seemed seasonal last year, but it was already everywhere, and there were no new strains (last winter was before the variants mattered much). Also, consider Australia. It’s in the Southern Hemisphere, so its winter is northern hemisphere summer. But it gets its flu season during its own winter, not ours. It doesn’t follow China’s timeline, and it doesn’t follow the timeline of travelers who might be bringing in flu cases (I assume most of its inbound travelers come from the Northern Hemisphere).

But on closer inspection, something like this seems to be going on in some places. [This article](https://www.abc.net.au/news/2018-10-30/is-there-a-lower-incidence-of-cold-and-flu-infections-in-tropics/10381902) looks at flu in the northern tropical parts of Australia in particular. Big southern Australian cities get flu seasons in their winter (ie July). The tropical parts of Australia are near the equator and don’t naturally have seasons, but they usually get a flu season in July anyway. But occasionally they’ll get a flu season in December, or even both in the same year! Most likely what’s happening is that they have no natural flu season, but people have to get the flu sometime, and they usually get it when travelers from the rest of Australia bring *their* flu cases in, but sometimes they also get it when travelers from the Northern Hemisphere bring *their* flu cases in.

The part I find interesting here is that flu spikes are more fundamental than flu seasonality. Deprived of seasons, a place doesn’t just have a slow burn of flu cases all year. It has a big epidemic, then dies down for a while, then has another big epidemic. In retrospect, this is an obvious consequence of how diseases work (eg the [SIR model](https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology#The_SIR_model) of transmission; there’s a toy simulation of this at the end of this post). Some people get the disease, it spreads exponentially until lots of people are immune, and then it stops until something changes. And it happens once or twice a year. Why? Maybe a new variety comes out of China every year, but that doesn’t explain the occasional twice-yearly spikes in tropical Australia.

The article I find most enlightening here is this New York Times piece from summer 2021: [Why Everyone Has The Worst Summer Cold Ever](https://www.nytimes.com/2021/07/22/well/live/colds-summer-immunity.html). It says that lots of people got bad colds (particularly colds spread by a pathogen called Respiratory Syncytial Virus) in summer 2021, after coronavirus restrictions were loosened. RSV is usually *very* seasonal and *very* winter, so presumably there was “built-up” RSV vulnerability that got a chance to break out once COVID restrictions were loosened. The article even says:

> Although your immune system is likely as strong as it always was, if it hasn’t been alerted to a microbial intruder in a while, it may take a bit longer to get revved up when challenged by a pathogen again, experts say. And while some viral exposures in our past have conferred lasting immunity, other illnesses may have given us only transient immunity that waned as we were isolating at home.
>
> “Frequent exposure to various pathogens primes or jazzes up the immune system to be ready to respond to that pathogen,” said Dr. Paul Skolnik, an immunovirologist and chair of internal medicine at the Virginia Tech Carilion School of Medicine.
“If you’ve not had those exposures, your immune system may be a little slower to respond or doesn’t respond as fully, leading to greater susceptibility to some respiratory infections and sometimes longer or more protracted symptoms.”
>
> […]
>
> “I haven’t seen anything like this in 20 years of working as a virologist,” said Dr. Huang. “There’s usually a degree of pre-existing immunity due to the previous winter. When you don’t have that kind of protection, it’s a bit like a wildfire. The fire can just continue, and the chain of transmission keeps going.”

I think maybe they’re saying that getting a virus like this usually gives you about a year’s worth of immunity before your immune system “forgets” it. I guess this makes sense in the context of eg needing COVID “booster shots” after a few months.

So one possible model is something like: once you get a disease, you’re protected for a while. There’s no particular length of time - it’s a spectrum - and absent any external rhythm-setter you would end up like the tropics, where people have epidemics at random times: once a year, twice a year, whatever. But the seasonal cycle “entrains” this rhythm (cf. the idea of a [zeitgeber](https://en.wikipedia.org/wiki/Zeitgeber)). It offers a way for everyone’s inconsistently-and-gradually-declining immunity to get below the threshold where an epidemic can start at the same time.

So in the tropics, Florida, and Alaska, epidemics “want” to follow a cycle of coming approximately once a year. In the tropics, nothing is giving them that cycle, so they come at a random time once a year, or twice a year, or whatever. In Florida, UV light, temperature, etc provide that cycle, and they come once a year. In Alaska, UV light, temperature, etc also provide that cycle, and they come once a year *independent of what’s going on in Florida*. It’s like the saying about how you don’t have to outrun the bear, you just have to outrun the other hikers. July in Alaska doesn’t have to outrun January in Florida, it just has to outrun January in Alaska, for the dubious honor of when Alaska’s destined-to-be-once-a-year flu season is going to be.

**IV.**

Right now I think the coronavirus is about halfway to being a seasonal disease. Judging by the speed at which people need booster shots for their vaccines, the coronavirus probably has non-permanent immunity (this is far from proven - vaccines could work differently from whole viruses - but it seems plausible). If its period is anywhere close to a year, the seasonal cycle of temperature/UV/etc will probably entrain it to a once-a-year rhythm. But right now it’s not in that rhythm, because there are still lots of people who have never had it at all.

So absent any other factors (lockdowns, vaccines, new variants, etc) what we might expect to see is something like coronavirus spreading through the population at some steady rate in the summer, and then some faster steady rate during the winter when conditions are more favorable. Eventually it would infect everyone, and then we would expect to see it barely at all during the summer, followed by a spike in the winter. Right now there are lots of other factors, but we still *sort of* see that pattern. Nicastro et al continue their perfect record of annoying hard-to-read graphs:

The solid green line is COVID cases in the northern hemisphere, the solid orange line is COVID cases in the tropics, and the solid blue line is COVID cases in the southern hemisphere.
The top set of technicolor background squares shows northern hemisphere seasons. So we see that COVID in the northern hemisphere peaked twice - once when it first started existing, and once in northern hemisphere fall/winter. COVID in the southern hemisphere shows a less obvious pattern, with maybe a mild peak in southern hemisphere fall/winter, disguised by other factors. It peaks again around day 350 in a way that is very awkward for this theory, but the researchers say this is just because the Beta variant was ravaging South Africa at that time - the long-dashed blue line shows what happens if you adjust that away, at which point the southern hemisphere mortality pattern looks seasonal again. Meanwhile, the tropics are just sort of hanging out, not showing any pattern at all.

The more formal version of this: “Simple linear-regression tests of the data-points (Pearson) and their rankings (Spearman) yield null (i.e. chance correlation) probabilities of p = 6.4 × 10⁻¹² and p = 2.0 × 10⁻¹², respectively.”

So possibly coronavirus is still in its half-seasonal phase, and will transition to a fully seasonal disease once it’s infected everyone it can infect and picked all the low-hanging fruit in possible-variant-space so that it can’t trivially become more infectious.

Can we use our improved understanding of disease seasonality against it? Certainly these considerations suggest that doing various things to indoor air (and using indoor UV lights) could slow transmission. But this wouldn’t lead to an eternal summer of perfect viruslessness. At best, it would just put us in the same position as the tropics, where there’s nothing to constrain the disease’s rhythm, and it just strikes randomly.
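If you want to watch the entrainment story from section III (and the SIR spikes from earlier) play out numerically, it only takes a few lines of code. Below is a minimal sketch of an SIRS model - SIR plus waning immunity - with an optional seasonal nudge to the transmission rate. Every parameter value is made up for illustration, and real epidemics have stochasticity and imported cases that this leaves out; the point is only the qualitative behavior: without forcing, epidemics recur on the disease's own internal clock and gradually settle down, while even a weak seasonal push tends to lock the spikes onto an annual rhythm.

```python
import math

def simulate(days, beta0, seasonal_amp, gamma=0.2, waning=1/365, i0=1e-4):
    """Day-stepped SIRS model with (optionally) seasonally forced transmission.

    beta0:        baseline transmission rate
    seasonal_amp: 0 for the "tropics"; ~0.2 for a place with real winters
    gamma:        recovery rate (1 / infectious period, in days)
    waning:       rate at which the recovered lose immunity (~1 year here)
    """
    s, i, r = 1 - i0, i0, 0.0
    series = []
    for t in range(days):
        # transmission peaks at t = 0, 365, 730, ... ("winter")
        beta = beta0 * (1 + seasonal_amp * math.cos(2 * math.pi * t / 365))
        new_inf = beta * s * i   # infections
        new_rec = gamma * i      # recoveries
        new_sus = waning * r     # immunity wearing off
        s += new_sus - new_inf
        i += new_inf - new_rec
        r += new_rec - new_sus
        series.append(i)
    return series

def peak_days(series, floor=0.005):
    """Days on which infections hit a local maximum above `floor`."""
    return [t for t in range(1, len(series) - 1)
            if series[t - 1] < series[t] >= series[t + 1] and series[t] > floor]

tropics   = simulate(6 * 365, beta0=0.5, seasonal_amp=0.0)  # nothing to entrain to
temperate = simulate(6 * 365, beta0=0.5, seasonal_amp=0.2)  # weak annual forcing

print("unforced peak days:", peak_days(tropics))
print("forced peak days:  ", peak_days(temperate))
# With these made-up numbers, the unforced peaks drift on the model's own
# internal period, while the forced peaks gravitate toward the winters.
```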
# Model City Monday 12/6/21

### Tegucigalpa, Honduras

The socialist opposition has won Honduras’ election and pledges to fight against charter cities there. "Immediately upon assuming the presidency, we are going to send the National Congress an initiative for the repeal of the ZEDE law," incoming president Xiomara Castro [said](https://www.france24.com/en/live-news/20211204-incoming-honduran-president-wants-un-help-to-battle-corruption).

This was what everyone was afraid of. But the last party tried pretty hard to protect ZEDEs from trigger-happy successors, and the constitution currently says that the only way to get rid of them is to win two consecutive 2/3 votes to do so, then give the existing projects ten years to wind down. Can the socialists get a 2/3 majority? Wikipedia predicts the incoming Honduran Congress will look like this:

These are still preliminary; [this person](https://www.reddit.com/r/Prospera/comments/r6y8fd/honduras_presidentelect_opposes_zedes_like/hn4on84/) argues that the Nationalists might pick up a few more seats as more conservative rural areas get counted. Liberty and Refoundation (the socialists) will probably enter into a coalition with the Savior Party and have 65/128 seats for a bare majority. They need 86 votes for a 2/3 majority, which in theory they can get if the Liberal Party agrees. The Liberal Party seems centrist and hard to pin down, but [this article](https://contracorriente.red/en/2021/08/14/honduran-candidates-oppose-corporate-zones-in-the-lead-up-to-national-elections/) includes the following great quote:

> “The Liberal Party opposes the ZEDEs because, above all, they undercut our national sovereignty, and because we don’t want them to become hideouts for extraditable criminals,” said [Liberal Party leader Yani] Rosenthal, who served a three-year prison sentence in the United States for money laundering and participating in a criminal scheme with the Los Cachiros cartel.

Rosenthal kind of goes back and forth elsewhere, but in the end I think he’ll vote with the socialists on this. Still, there’s [some speculation](https://www.reddit.com/r/Prospera/comments/r6y8fd/honduras_presidentelect_opposes_zedes_like/hn1gead/) that his party might not vote as a bloc, and even a few defectors would be enough to prevent a supermajority.

In theory, even if the socialists win two consecutive votes, they have to give the projects ten years to wind down. Ten years is forever in politics, and probably before then the capitalists will get back into power and say never mind, everyone can keep doing what they’re doing. The socialists are aware of this and say that their supplementary strategy is to have everything about the ZEDE law declared unconstitutional. This should be a hard sell, because ZEDEs are a constitutional amendment, plus the current Supreme Court explicitly ruled a few years ago that they *were* constitutional. But apparently the Honduran Supreme Court can declare constitutional amendments unconstitutional if it really wants. And the new government will get to appoint a new Supreme Court in two years, and although the exact process is complicated, they may be able to get people who agree with them on this.

Also, incoming president Castro is married to Manuel Zelaya, a former president who tried to pull an Andrew Jackson after the Supreme Court ordered him to stop holding an illegal referendum to change term limits in his favor. He ordered the military to hold the referendum anyway, and was only ousted after the military couped him instead.
So this is not exactly a family known for their deep respect for the exact wordings of laws or court rulings (not that anyone in Honduras has really excelled on that front). See further speculation eg [here](https://www.reddit.com/r/Prospera/comments/r6y8fd/honduras_presidentelect_opposes_zedes_like/hn1gead/) and [here](https://www.reddit.com/r/Honduras/comments/r6o648/question_about_your_election/hmupbkj/). And here’s Mark Lutter from Charter Cities Institute [on the elections and the future](https://www.chartercitiesinstitute.org/post/honduras-and-the-future-of-charter-cities).

### Conchagua Volcano, El Salvador

Meanwhile, insane El Salvadorean president Nayib Bukele [says](https://www.bbc.com/news/world-latin-america-59368483) he is ordering the construction of a coin-shaped city dedicated to Bitcoin at the base of a stratovolcano:

> "Residential areas, commercial areas, services, museums, entertainment, bars, restaurants, airport, port, rail - everything devoted to Bitcoin," the 40-year-old said.

And:

> The president, who appeared on stage wearing a baseball cap backwards, said that no income taxes would be levied in the city, only value added tax (VAT).
>
> He said that half of the revenue gained from this would be used "to build up the city", while the rest would be used to keep the streets "neat and clean" […]
>
> Mr Bukele did not provide dates for construction or completion of the city, but said he estimated that much of the public infrastructure would cost around 300,000 Bitcoins.

It’s tempting to dismiss this plan as crazy. First, this photo:

Second, Bitcoin miners don’t want a city the shape of a Bitcoin [with a central plaza in the shape of a Bitcoin logo](https://www.archpaper.com/2021/11/el-salvador-build-bitcoin-city-at-the-base-of-a-volcano/). They want cheap electricity. Bukele has promised that there will be cheap geothermal power from the volcano, which sounds good, but [this article](https://fortune.com/2021/10/01/scalding-water-and-steam-send-bitcoin-soaring-but-analysts-say-be-wary-of-el-salvadors-stunt/) says El Salvador’s existing geothermal energy costs about 12 cents/kilowatt-hour, much higher than the 4 cents/kilowatt-hour miners can get in the current cheapest areas. Maybe El Salvador could do a really good job upgrading their energy infrastructure, but at some point you’re subsidizing this rather than using it as a cash cow.

And third, this isn’t even the stupidest plan to build a cryptocurrency-themed city in the Third World. That arguably goes to [Akon City](https://www.archpaper.com/2020/01/akon-finalizes-cryptocurrency-city-senegal/), a thing where a pop singer named Akon was going to build a cryptocurrency city in Senegal. Now, without any construction having started, they’re planning to build [a second one](https://www.washingtonpost.com/world/2021/04/06/akon-city-uganda/) in Uganda! All competing for the same handful of crypto companies!

But I looked into [Bukele](https://en.wikipedia.org/wiki/Nayib_Bukele) to see if he was a moron with a habit of coming up with terrible ideas. It seems like no. He rose from nothing to become El Salvador’s first outside-the-traditional-party-system president, and has an approval rating of around 90%. And apparently he’s presided over [a historic drop in the homicide rate](https://www.centralamerica.com/living/safety/el-salvador-murder-rate-lower-than-ever/) of this previously murder-capital-of-the-world country.
Although I’m betting that one day he’ll make a great [Dictator Book Club](https://astralcodexten.substack.com/p/dictator-book-club-orban) entry, I’m prepared to give him the benefit of the doubt on “doesn’t do stupid things for no reason”.

What’s the non-stupid explanation for this? Maybe it’s supposed to be a signal. You can give up 5% of the way through, but even *trying* to build a Bitcoin-shaped city at least shows very conclusively that you’ve got a crypto-friendly regulatory climate, so many easily-spooked crypto companies will flock to you. This makes sense in the context of big crypto companies moving to the Caribbean for regulatory reasons, eg FTX [moving to the Bahamas](https://www.coindesk.com/business/2021/09/24/ftx-moves-headquarters-from-hong-kong-to-bahamas-report/) and Binance [moving to the Cayman Islands](https://www.forbes.com/sites/michaeldelcastillo/2020/10/29/leaked-tai-chi-document-reveals-binances-elaborate-scheme-to-evade-bitcoin-regulators/?sh=43d65c782a92).

But if I understand correctly, both of these companies make on the order of $1 billion a year. If El Salvador can tax them at 5% (dubious, since a big part of promising a friendly regulatory climate is low taxes), that’s still only $100 million if they can capture both of them. Which they can’t, because these companies seem happy where they are. And I don’t think there are a lot of similarly-sized crypto companies looking for Central American homes that I don’t know about. And even though El Salvador is pretty poor, it’s not so poor that $100 million is worth embarrassing itself over. So I’m stumped.

EDIT: See [this comment](https://astralcodexten.substack.com/p/model-city-monday-12621/comment/3895754).

### Praxis, aka Bluebook Cities, the Internet

Speaking of stumped, who *are* these people? Right now, they’re a web page with a lot of buzz [promising the City Of The Future](https://www.praxissociety.com/content/introducing-praxis), in very poetic language:

> **Praxis is a grassroots movement of modern pioneers building a new city.** We are technologists and artists, builders and dreamers. We are building a place where we can develop to our fullest potentials, physically, culturally, and spiritually. Bitcoin was developed as a financial technology with political goals identical to those of the Founding Fathers: liberation. The ultimate end of crypto is the possibility of a future for humanity unshackled from the institutions that seek to limit our growth. Our ultimate goal is to bring about a more vital future for humanity, and we will use technology to achieve this righteous end.
>
> Our civilization is unwell. We eat food that kills us, we’ve lost sight of beauty, and we neglect our spiritual lives. The world is deranged and decayed, and this frightens people. We don’t look up from our screens; we seek to live within them. Crypto is a fundamentally political technology -- escape to the metaverse is a betrayal of the principles on which it was founded. We are descended from the people who built Rome and Athens, who dared to split atoms and voyage to the Moon. We can build new worlds not just of bits, but of atoms.

But where is this city? What will its policies be?

> As we leave old lands, our values are our compass. Like wolves, tribes of pioneers are muscular by necessity. For voyaging tribes to settle, they must perform murmurations: intricate coordination with little communication, at scale. This is only possible with a strong sense of asabiyya (group feeling derived from deeply-held shared values).
Our values inform the destiny we desire, and for which we struggle. Asabiyya is forged in this struggle. With asabiyya, pioneers can earn the divine mandate to build a city. Cities are the fount of human ingenuity. In cities, people enjoy their fullest potential by contributing their resources under the auspices of civilization.

Who even are you? What experience do you have with city-building?

> Civilizations rise and fall. All around us, we see civilizational decay. The people are not vital: physically, culturally, spiritually. We live in an era of obesity, remakes, and pollution. We are losing the divine mandate, and in an era of absolute weapons, what’s at stake is everything. But perhaps there’s some glory in death by a light brighter than a thousand suns. A worse fate may await humanity: atrophied bodies submerged in gel, fed synthetic bug paste, minds occupied by the petty amusements of a corporate metaverse. There, nothing is at stake; there are no frontiers to explore; no growth is possible. Nothing to live for, and nothing to die for.
>
> As we walk between these twin fates, the light of our civilization dims. But beyond the horizon, we see a new light emerging. Like the sun at dawn, it cannot be stopped. Vitality itself is the foundational value of this new civilizational form, and we have the technology to enact our moral imperative as never before.

You’re not answering my…okay, fine, whatever, forget it.

As [far as I can tell](https://nypost.com/2021/05/10/tech-bros-next-move-private-cities-without-government-control/), Praxis is two 25-year-olds with no previous experience, armed with about $10 million in Peter Thiel’s money. Peter Thiel is a smart person known for having good business sense, but he’s also known to have a weakness for young people who dream big and sound like purveyors of esoteric secrets. I wonder if the simplest explanation is just that this is one of the cases where his weakness got the better of his sense, and now these two random people have $10 million earmarked for building a city, and no idea what to do.

*[CORRECTION: some people involved in Praxis have reached out to tell me that it was $4 million instead of $10 million, and that it was Thiel-backed Pronomos and not Thiel himself. I’ll be getting in touch with them to learn if there are other issues or things I should correct here]*

But that’s not how they put it! The way they put it is - all previous charter city founders have started by approaching governments and pitching their ideas. But there’s a chicken-and-egg problem: governments don’t want to give land to a purely hypothetical city that might not pan out, and the city can’t pan out until governments give it land. Praxis’ plan is to build the community first, then go to a government saying “Here’s 50,000 people who have agreed to join our city, and lots of businesses and organizations that are excited about it. Please give us land for our guaranteed-success, concretely-existing project.”

Now this is a different chicken-and-egg problem: why join a community of people with no land and no plans? Praxis writes:

> What if we try to draw people to new cities not on an economic basis, but rather on a spiritual one? Which city (or country) founding projects have succeeded that have drawn people on a predominantly non-economic, but rather spiritual basis? Among others, Israel and America. Both groups were oppressed, and sought the freedom to live by their values. Both felt the intangible pull of the frontier. Both had a keen historical instinct.
This is how cities with spiritual significance are founded.
>
> The correct approach to city building in this new world is demand-first (or as Balaji Srinivasan calls it, *Cloud City* first). We build the citizenry before the city. First, we create communities of true believers, organized around shared values, online. People move to cities for people, and it follows that if you collect a group of people who all want to live together, they’ll all move together if at a moment in time everyone else does, too. Today, we have new tools. The emergence of Web3 enables us to supercharge communities with self-ownership, governance, and determination. Once you build a community of people ready to move to a new city together, you can self-finance the entire project. With something real to offer nations, conversations with governments become productive (e.g. Gigafactory). That’s how you make the risk dominoes fall.

The problem is, Israel worked because it had Judaism. Judaism is a very specific belief. Prospera is specifically libertarian, Telosa is specifically Georgist, and even the Bitcoin-shaped volcano city knows what it’s about. What is Praxis? The use of “atrophied bodies submerged in gel, fed synthetic bug paste” as a warning reads very slightly right-wing to me - there’s a right-wing meme about how [the media keeps trying](https://time.com/5942290/eat-insects-save-planet/) to get people to eat bugs, and how this is the shape our future dystopia will take. But whether I’m right or wrong, the fact that it’s hard to tell is a problem.

The only other clue we’re getting is [their Discord](https://discord.com/channels/813494644066877460/908744624141660171), which seems to be focused around getting a currency called PRAX for completing [tasks](https://airtable.com/shrZdZmMYcmhKfTxI/tblzWX4vBjjlJo68m). Once you get enough, you can become a Member, which seems to be where the real excitement starts. ([source](https://kontextmaschine.tumblr.com/post/668383969087275008/collapsedsquid-civilizations-rise-and-fall-all))

I’m not even being sarcastic - I expect being a member to be quite fun. I say this because when I was a teenager I was part of [a bunch of country simulation projects](https://slatestarcodex.com/2013/04/15/things-i-learned-by-spending-five-thousand-years-in-an-alternate-universe/), some of which got past the inherent nerdiness of being a country simulation project exactly the same way Praxis is doing it - by saying that we were going to become a real country someday, as soon as we were big enough to convince people. These were usually fun and interesting and educational, and I made lots of great like-minded teenage and twenty-something friends. But none of them ever came close to becoming a real country, and I’m not sure it was merely for lack of millions of dollars. I hope I’m wrong and they manage to forge new lands through struggle to uplift the human spirit or whatever.

### Elsewhere In Model Cities

* Vitalik Buterin on [the intersection between local government and blockchain technologies](https://vitalik.ca/general/2021/10/31/cities.html). He recommends they “start with self-contained experiments, and take things slowly on moves that are truly irreversible”, which is a weird way of saying “what we crypto leaders really want is a city at the base of a volcano, shaped like a giant Bitcoin”.
* I’m not going to be able to fund [Charter Cities Institute](https://www.chartercitiesinstitute.org/) through ACX Grants this year, but I told them I’d give them a signal boost here.
They’re a great organization, they could be doing more work with more funding, and if you’re at all interested in charter cities they’re the people you want to be supporting. If you can’t get in touch with them directly, let me know and I’ll make an introduction.
* The California city of Oroville recently [passed a resolution declaring itself a “constitutional republic”](https://www.nbcbayarea.com/news/local/northern-california-town-declares-itself-a-constitutional-republic-in-response-to-states-covid-19-mandates/2744659/) as some kind of opposition to COVID mandates. It later clarified that it was not seceding from the US, but reiterated that it was now “a constitutional republic”. The best-case scenario here is ending up with a [Conch Republic](https://en.wikipedia.org/wiki/Conch_Republic)-style colorful local legend; the worst-case scenario is that they actually resist a federal law and everyone involved gets arrested.
# Open Thread 201

This is the weekly visible open thread. Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed. This one is odd-numbered, so be careful. Otherwise, post about anything else you want. Also:

**1:** A few months ago, [I wrote about](https://astralcodexten.substack.com/p/eight-hundred-slightly-poisoned-word) how I tracked my performance on word games over different carbon dioxide levels, and found they didn’t matter. Some people offered to replicate my work in different ways, and the first to come back with their findings is [Steven Kaye](https://astralcodexten.substack.com/p/open-thread-200/comment/3792450), who has 88 days of chess-playing data. He also mostly finds it doesn’t matter, though some ways of slicing the data might suggest a weak trend. There are still people who seem really into CO2 effects, so I encourage more people to find their own ways of experimenting on this.

**2:** Some good discussion on my [Pascalian Medicine](https://astralcodexten.substack.com/p/pascalian-medicine) post; see especially [David Chapman’s tweets](https://twitter.com/Meaningness/status/1463570268459130880) and [Jay Daigle’s blog](https://jaydaigle.net/blog/pascalian-medicine/). But I do feel like some of the responses flirt with assuming everything has the most convenient possible value to fit a morality tale. Suppose someone you love gets COVID, and you have the option to either recommend or disrecommend that they take a cocktail of melatonin (a harmless sleep supplement, I take it every night, eight unreliable studies have shown it treats COVID), curcumin (a harmless-when-sourced-correctly spice, six unreliable studies have shown it treats COVID) and Vitamin D (a harmless vitamin, twelve unreliable studies have shown it treats COVID). What do you do, here, in the real world? I’m honestly not sure, and I think my discomfort with this question is a lot more interesting than some too-pat fable about The Rationalist Who Thought The Real World Was Exactly Like A Casino.

**3:** The discussion on ivermectin also continues: ivermectin supporters [counterargue against what I said](https://twitter.com/alexandrosM/status/1465418748047749128) on my last open thread; [Shakoist on Substack defends me](https://shakoist.substack.com/p/unvirtuous-heuristics).

**4:** The co-author of [one of the winning 2019 adversarial collaborations](https://slatestarcodex.com/2019/12/11/acc-is-eating-meat-a-net-harm/) is looking for participants in the continental U.S. for a large and rigorous (but informal and not-affiliated-with-an-academic-institution-or-IRB) trial of homeopathy, which he thinks should work better than previous approaches. Homeopathic remedies are tested/invented via a measure called “[provings](https://nyhomeopathy.com/provings/)”, where they create symptoms if given to healthy people. Homeopaths take provings pretty seriously but don’t consistently double-blind them. So the idea is to do a blinded proving and see if the people who get the homeopathic solution get more side effects than the people who don’t; this would sidestep a lot of the arguments homeopaths usually give for why homeopathy studies often come back negative. Anyway, he wants 250 people to sign up to receive homeopathy or placebo by mail, take it, and report what happens. Read more [here](https://medium.com/@NoRandomWalks/is-homeopathy-real-9369b3813d1d), sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSfRTD9kaWtzd4c3ii3v30ealORuiueihr6wh8WCG3aaBnsQ-w/viewform) if interested.
**5:** This should be obvious, but there will be another Book Review Contest next year. I will let you know when I have exact dates and rules together, but assume it will be due sometime like March or April, and otherwise pretty similar to last year. I don’t think there are any surprises that should stop you from starting to prepare entries now if you want.

**6:** I beg continued patience from ACX Grants applicants and funders. The most likely schedule is that I’ll give funders more information around December 14, and announce most winners around December 25, with Grants ++ coming some time after that. I’m not considering late applications; please don’t email them to me.
# Book Review: Lifespan

*[epistemic status: non-expert review of a book on a highly technical subject, sorry. If you are involved in biochemistry or anti-aging, feel free to correct my mistakes]*

David Sinclair - Harvard professor, celebrity biologist, and author of *[Lifespan](https://www.amazon.com/Lifespan-Revolutionary-Science-Ageand-Dont/dp/1982135875/ref=asc_df_1982135875/?tag=hyprod-20&linkCode=df0&hvadid=475740618844&hvpos=&hvnetw=g&hvrand=300509605246980026&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9032065&hvtargid=pla-1006741405912&psc=1)* - thinks solving aging will be easy. “Aging is going to be remarkably easy to tackle. Easier than cancer” are his exact words, which is maybe less encouraging than he thinks.

There are lots of ways that solving aging could be hard. What if humans worked like cars? To restore an old car, you need to fiddle with hundreds of little parts, individually fixing everything from engine parts to chipping paint. Fixing humans to such a standard would be way beyond current technology.

Or what if the DNA damage theory of aging was true? This says that as cells divide (or experience normal wear and tear) they don’t copy their DNA exactly correctly. As you grow older, more and more errors creep in, and your cells become worse and worse at their jobs. If this were true, there’s not much to do either: you’d have to correct the DNA in every cell in the body (using what template? even if you’d saved a copy of your DNA from childhood, how do you get it into all 30 trillion cells?) This is another nonstarter.

Sinclair’s own theory offers a simpler option. He starts with a puzzling observation: babies are very young [citation needed]. If a 70-year-old man marries a 40-year-old woman and has a baby, that baby will start off at zero years old, just like everyone else. Even more interesting, if you *clone* a 70-year-old man, the clone starts at zero years old. (there were originally some rumors that cloned animals aged faster, but those haven’t been borne out)

This challenges the DNA theory of aging. A 70-year-old’s skin cells have undergone seventy years of DNA damage, and sure enough the 70-year-old has weak, wrinkled skin. But if you transfer the skin cell DNA to an egg, inseminate the egg, and turn it into a baby, that baby is just as young as all the other babies. So DNA damage can’t be the whole story. What could be an almost insurmountable problem for cells in a mature animal, but trivially vanishes when you clone a cell into an embryo?

Sinclair’s answer is epigenetics. Remember, all cells have the same DNA. The reason kidney cells are different from lung cells is because they have epigenetic markers on the kidney genes saying “turn these on” and on the lung genes saying “turn these off”. Part of the cloning process involves telling the cell to be an egg cell. After that, it undergoes the normal embryogenesis process where embryonic stem cells differentiate into kidney cells, lung cells, and the rest.

So Sinclair thinks aging is *epigenetic* damage. As time goes on, cells lose or garble the epigenetic markers telling them what cells to be. Kidney cells go from definitely-kidney-cells to mostly kidney cells but also a little lung cell and maybe some heart cell in there too. It’s hard to run a kidney off of cells that aren’t entirely sure whether they’re supposed to be kidney cells or something else, and so your kidneys (and all your other organs) break down as you age.

He doesn’t come out and say this is literally 100% of aging.
But everyone else thinks aging is probably a combination of many complicated processes, and I think Sinclair thinks it’s mostly epigenetic damage and then a few other odds and ends that matter much less. Epigenetic damage could potentially still be unfixable: how do you convince the thousands of different intermixed cell types in the body to all be the right type again? But Sinclair thinks the body already has a mechanism for doing this: epigenetic repair proteins called sirtuins. I’m a bit confused about where sirtuins are getting *their* information from: is there a backup copy of epigenetics that they read to figure out what’s wrong and needs repair? I get the impression from one or two cryptic statements that Sinclair thinks maybe yes (see the discussion of “the observer” on page 171). But for some reason, the system works well enough to keep you alive for the normal human lifespan (and no better). If you want to live longer, can you just add more sirtuins? [These people say](https://www.genengnews.com/news/sirt6-overexpression-extends-lifespan-in-mice/) they gave mice a gene that caused them to overproduce sirtuins, and the mice lived 30% longer. Other people have tried the same experiment in worms, fruit flies, etc, with [controversial](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4101544/) but generally positive results. What if you’re not a mouse, and you live in one of the 100% of countries that have banned random irresponsible genetic engineering on humans? Sinclair thinks there’s still hope. Sirtuin activity seems to be regulated by a protein called mTOR (motto: “The Protein That Regulates Everything”, see discussion of its role in depression [here](https://slatestarcodex.com/2017/06/13/what-is-depression-anyway-the-synapse-hypothesis/), obesity [here](https://www.nature.com/articles/ijo2010208), cancer [here](https://cellandbioscience.biomedcentral.com/articles/10.1186/s13578-020-00396-1), etc). In times of plenty, mTOR switches on, causing cells to divide and grow. In times of deprivation, mTOR switches off, causing cells to “hunker down” and go into power-saver mode. Apparently part of power-saver mode is damage control, so this turns on the sirtuins and makes them do more epigenetic repair, keeping you young. All you have to do to stay younger longer is keep mTOR from activating. The main thing I remember from my biochemistry classes is that mTOR is a big oval with the word “mTOR” on it. mTOR turns off in times of deprivation, so you can keep it off by depriving yourself. That’s why the gold standard of anti-aging interventions is calorie restriction: eat less food. This isn’t just the “stay thin if you don’t want to die of a heart attack” thing, this is where you eat an absurdly low amount of food, practically starving yourself, as a desperate strategy to turn off this one protein. This kind of extreme calorie restriction extends lifespan about 50% [in lemurs](https://www.nature.com/articles/s42003-018-0024-8), and would probably work for humans too. Sinclair brings up the story of a weird Venetian merchant who took some kind of vow of temperance and ate famously little each day: he lived to 100, which is pretty good for the 1500s. And: > In more recent times, Professor Alexandre Gueniot, the president of the Paris Medical Academy just after the turn of the twentieth century, was famed for living on a restricted diet. 
It is said that his contemporaries mocked him - for there was no science at the time to back his suspicion that hunger would lead to good health, just his gut hunch - but he outlived them, one and all. He finally succumbed at the age of 102. These stories are definitely cherry-picked, but we don’t have great studies: calorie restriction hasn’t been around the hundred years it would take to see results in humans. A few very committed biohackers have been trying it for a few decades now, so I guess we’ll know if it works by the mid-21st century. What if you’re not a mouse, can’t get genetically engineered, *and* don’t want to starve yourself for your entire life? *Then* what? Some people think that you can simulate “deprivation” well enough to fool mTOR with intermittent fasting. Sinclair suggests maybe skipping breakfast and having a late lunch every day, or fasting entirely a few days a week, or a few weeks a year, or - well, nobody really knows how or whether intermittent fasting works, let alone the best way to do it. But it’s an option. Or you could exercise, or go to a sauna, or go out when it’s really cold. Sinclair thinks all of these promote health by stressing the body and convincing it that it’s a bad time and it needs to go into power-saving mode, which activates sirtuins and repairs epigenetic damage. Suppose you’re not a mouse, can’t get genetically engineered, *and* you have a normal aversion to diet and exercise. Is there a pill you can take? Yes! A suitably foreboding one, even! Rapamycin gets its name from Rapa Nui aka Easter Island, where it was discovered in a soil bacterium living beneath one of the giant stone heads. It’s a strong inhibitor of mTOR (in fact, the “TOR” in mTOR’s name stands for “Target Of Rapamycin”). Mice on rapamycin [live about 10% longer than usual](https://www.nature.com/articles/nature08221). Can *you* take rapamycin? Probably a bad idea, it’s a potent immunosuppressant. Organ recipients sometimes take it to quiet their immune system down to the point where it stops rejecting the transplant, but it’s not a lot of fun. But there are two other pills that *might* work. One is resveratrol, a chemical found in red wine (though not in high enough doses to be meaningful for sirtuin activation). Resveratrol definitely activates sirtuins in test tubes, and seems to be good for lab animals: some of them live longer on it, others at least seem healthier. But the lab animal studies were never 100% conclusive, and arguably humans absorb it too poorly to be able to get an effective dose (I’m confused about some details here, like whether animals absorb it better, or whether IV formulations would work). There was a big mess surrounding claims by resveratrol supplement companies, over whether their products might have worked or couldn’t possibly have. David Sinclair was caught in the middle and got accused of being a Big Resveratrol shill, and scientific opinion seems to have settled as mostly against it. I think some people are now experimenting with pterostilbene, a more bioavailable resveratrol relative. The other pill is nicotinamide riboside aka NR (and its close cousin nicotinamide mononucleotide aka NMN). The reactions catalyzed by sirtuins involve nicotinamides, and the more nicotinamides you have, the more effective sirtuins are. NR and NMN are cheap, simple chemicals you can buy at any supplement store for $20, and Sinclair is pretty convinced they’re a fountain of youth. 
He says that when his own father started becoming decrepit, he convinced him to take NMN, and over the space of a few months he started becoming energetic and spry again, and is now traveling the world despite being well into his 70s. Sinclair himself takes well above what other people would consider the maximum dose every day, and apparently looks like this at age 50. Can sirtuins make us immortal? All of Sinclair’s examples involve slowing aging by 10 - 20%. I don’t quite understand why - if aging is just epigenetic damage, and epigenetic damage can be repaired, can’t you just increase the repair rate until it’s faster than the damage rate, then live forever? If *Lifespan* gave an explanation for this, I missed it. That doesn’t mean we can’t be immortal, though. Sinclair’s lab has another research program, focusing on stem cells. These produce new, epigenetically healthy cells wherever they’re placed. And we now know that with a few transcription factors called Yamanaka factors, you can make adult cells become stem cells again. Sinclair’s idea is to genetically engineer triggerable Yamanaka factors into the cells of human adults. Then, when you’re starting to feel old, you trigger the factors, some of your cells revert to stem cells, and they replace your old decaying cells with epigenetically healthy ones. Every biologist I mention this to has the same objection, which is “won’t that make you have every kind of cancer at once?”, and, in their defense, the first hundred times Sinclair tried this his mice got some pretty crazy cancers. But he swears they have solved this problem and the mice are doing fine now. Some of them are living about 40% longer than normal, which I notice still isn’t immortal but seems like a step in the right direction. **II.** People who are not David Sinclair generally don’t expect conquering aging to be this easy. The anti-aging SENS Foundation has a list of seven different programs to address what they consider to be seven different causes of age-related damage. This seems more like the “humans are like cars” scenario where you have to fix every part individually and it’s really hard. People who are not David Sinclair don’t think that nicotinamides are a miracle drug. A well-regarded research center [ran a big study on nicotinamides in mice](https://www.fightaging.org/archives/2021/04/the-latest-data-from-the-interventions-testing-program-nicotinamide-riboside-has-no-effect-on-mouse-life-span/) and found that the mice lived no longer than usual, although they did seem to be healthier in various ways. This might be a good time to mention that it’s really hard to run reliable aging studies in mice, because mice have a lot of natural lifespan variability, people don’t always use enough mice to constitute a good sample size, and you have to keep them around for years before you learn anything. And people who are not David Sinclair are less enthusiastic about sirtuins, mTOR, and calorie restriction. [Algernon’s Law](https://www.gwern.net/Drug-heuristics) says there shouldn’t be easy gains in biology. Your body is the product of millions of years of evolution - it would be weird if some drug could make you stronger, faster, and smarter. Why didn’t the body just evolve to secrete that drug itself? Or more to the point, since most drugs act by flipping biological “switches”, why does your body have a switch set to the “be weak, slow, and dumb” position? There are ways to answer this question, and drugs that do lots of great things. 
But any biohacking proposal does need to overcome this objection. So: why doesn’t the body just have more sirtuins? Why do sirtuins only repair epigenetic damage when mTOR is in the off position? Also, everyone agrees that turning mTOR off increases lifespan. It increases lifespan by putting cells into power-saver mode so they do less. But if this mode is so great, why aren’t we in it all the time? The conventional answer is that in power-saver mode, you’re weaker, have less energy, and your wounds don’t really heal - which common-sensically matches what you’d expect of a power-saver mode. But when David Sinclair says that resveratrol or exercise or intermittent fasting or saunas act by “mimicking calorie restriction”, is he suggesting that they will make you weak and constantly tired? If not, why not? This sounds like a denial of the fundamental mTOR tradeoff: less energy expenditure in exchange for worse performance. The impression I get from *Lifespan* is that all of these things will both make you live longer *and* make you healthier. That doesn’t really make sense to me. (although so far [the empirical evidence agrees with Sinclair](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5315691/) and disagrees with my common sense, so probably I’m missing something.) Finally, a friend wasn’t impressed with Sinclair’s clone argument. They point out: suppose aging is DNA damage, and it happens to every tenth cell. Having a tenth of your cells damaged is pretty bad, especially if they become senescent. “Senescent cells”, common in elderly people, have sustained so much damage that they can’t even die properly, and just sort of sit around being hopelessly confused and secreting random chemicals which freak out all the other cells around them. Everyone agrees these are an important part of aging, even if they’re not sure about the specifics. But if 1/10 of your cells are like this, then you have a 90% chance of grabbing a healthy cell for cloning. And even if you get a bad cell, no cloning process works every time, so you’ll just shrug and try again. My impression of the consensus in anti-aging research is that many people are excited for the same reasons Sinclair is excited, that people are much more optimistic than they were five or ten years ago - but that their level of optimism hasn’t *quite* caught up to Sinclair’s level yet. **III.** Interspersed with all this stuff about mTOR and sirtuins is discussion of a broader question: is stopping aging desirable? Sinclair thinks self-evidently yes. He tells the story of his grandmother - a [Hungarian Jew](https://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/) who fled to Australia to escape communist oppression. She was adventurous, “young at heart”, and “she did her damnedest to live life with the spirit and awe of a child”. Sinclair remembers her as a happy person and free spirit who was always there for him and his family during their childhood in the Australian outback. And her death was a drawn-out torture: > By her mid-80s, Vera was a shell of her former self, and the final decade of her life was hard to watch…Toward the end, she gave up hope. ‘This is just the way it goes’, she told me. She died at the age of 92…but the more I have thought about it, the more I have come to believe that the person she *truly* was had been dead many years at that point. Sinclair’s mother didn’t have an easy time either: > It was a quick death, thankfully, caused by a buildup of liquid in her remaining lung. 
We had just been laughing together about the eulogy I’d written on the trip from the United States to Australia, and then suddenly she was writhing on the bed, sucking for air that couldn’t satisfy her body’s demand for oxygen, staring at us with desperation in her eyes. > > I leaned in and whispered into her ear that she was the best mom I could have wished for. Within a few minutes, her neurons were dying, erasing not just the memory of my final words to her but all of her memories. I know some people die peacefully. But that’s not what happened to my mother. In those moments she was transformed from the person who had raised me into a twitching, choking mass of cells, all fighting over the last residues of energy being created at the atomic level of her being. > > All I could think was “No one ever tells you what it is like to die. Why doesn’t anyone tell you?” It would be facile to say “and that’s what made him become an anti-aging researcher”. He was already an anti-aging researcher at that point. And more important, everyone has this experience. If seeing your loved ones fade into shells of their former selves and then die painfully reliably turned you into an anti-aging researcher, who would be left to do anything else? So his first argument is something like “maybe the thing where we’re all forced to watch helplessly as the people we love the most all die painfully is bad, and we should figure out some solution”. It’s a pretty compelling argument, one which has inspired generations of alchemists, mystics, and spiritual seekers. An unexpectedly lovely kabbalistic correspondence on Lifespan’s cover. Sinclair’s name occupies the position of Yesod, the sephirah that bestows divine attributes onto the material world. But his second argument is: we put a lot of time and money into researching cures for cancer, heart disease, stroke, Alzheimer’s, et cetera. Progress in these areas is bought dearly: all the low-hanging fruit has been picked, and what’s remaining is a grab bag of different complicated things - lung cancer is different from colon cancer is different from bone cancer. The easiest way to cure cancer, Sinclair says, is to cure aging. Cancer risk per year in your 20s is only 1% of what it is in your 80s. Keep everyone’s cells as healthy as they are in a 20-year-old, and you’ll cut cancer 99%, which is so close to a cure it hardly seems worth haggling over the remainder. As a bonus, you’ll get similar reductions in heart disease, stroke, Alzheimer’s, et cetera. But also - to rehash the quote I started the review with - Sinclair thinks curing aging is *easier* than curing cancer. For one thing, aging might be just one thing, whereas cancer has lots of different types that need different strategies. For another, total cancer research spending approaches the hundreds of billions of dollars, whereas total anti-aging spending is maybe 0.1% of that. There’s a lot more low-hanging fruit! And also, even if we succeed at curing cancer, it will barely matter on a population level. If we came up with a 100% perfect cure for cancer, average US life expectancy would increase two years - from 80 to 82. Add in a 100% perfect cure for heart disease, and you get 83. People mostly get these diseases when they are old, and old people are always going to die of *something*. Cure *aging*, and the whole concept of life expectancy goes out the window. 
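To see why curing a whole disease class buys so little, here is a toy life-table calculation - a minimal sketch with a Gompertz mortality curve whose parameters I made up for illustration (risk of death doubling every eight years), not anything fitted to real actuarial data. If cancer accounts for roughly 20% of the death rate at every age, deleting it entirely moves life expectancy by only a couple of years:

```python
import numpy as np

# Toy Gompertz mortality model: hazard doubles every ~8 years.
# a and b are illustrative guesses, not fitted to real life tables.
a, b = 1e-4, np.log(2) / 8
dt = 0.01
ages = np.arange(0, 120, dt)

def life_expectancy(hazard_scale=1.0):
    hazard = hazard_scale * a * np.exp(b * ages)
    survival = np.exp(-np.cumsum(hazard) * dt)  # S(t) = exp(-cumulative hazard)
    return np.trapz(survival, ages)             # life expectancy = area under S(t)

baseline = life_expectancy(1.0)   # all causes of death
cured = life_expectancy(0.8)      # "cure cancer": remove ~20% of the hazard
print(round(baseline, 1), round(cured, 1), round(cured - baseline, 1))
# Prints roughly 71.4, 74.0, 2.6 - a whole disease class gone,
# and life expectancy moves by under three years.
```

The exact numbers don't matter; the point is that under any hazard curve that rises steeply with age, removing one slice of it mostly just hands the deaths to the next cause in line.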
There are a lot of people who get angry about curing aging, because maybe God didn’t mean for us to be immortal, or maybe immortal billionaires will hog all the resources, or [insert lots of other things here]. One unambitious - but still potentially true - counterargument to this is that a world where we conquered aging, then euthanized everyone when they hit 80, would still be infinitely better than the current world where we age to 80 the normal way. But once you’ve accepted this argument, there are some additional reasons to think conquering death would be good. First, the environmental sustainability objection isn’t really that strong. If 50% of people stopped dying (maybe some people refuse the treatment, or can’t afford it), that would increase the US population by a little over a million people a year over the counterfactual where people die at the normal rate. That’s close to the annual number of immigrants. If you’re not worried about the sustainability of immigration, you probably shouldn’t worry about the sustainability of ending death. You can make a similar argument for the world at large: life expectancy is a surprisingly weak driver of population growth. The world’s longest-lived large country, Japan, currently has negative population growth; the world’s shortest-lived large country, Somalia, has one of the highest population growth rates in the world. If 25% of the world population took immortality serum (I’m decreasing this from the 50% for the USA because I’m not even sure 50% of the world’s population has access to basic antibiotics), that would increase world population by 15 million per year over the counterfactual. It would take 60 years for there to even be an extra billion people, and in 60 years [a lot of projections](https://en.wikipedia.org/wiki/Projections_of_population_growth#Other_projections) suggest world population will be stable or declining anyway. By the time we really have to worry about this we’ll either be dead or colonizing space. Second, life expectancy at age 10 (ie excluding infant mortality) went up from about 45 in medieval Europe to about 85 in modern Europe. What bad things happened because of this? Modern Europe is currently in crisis because it has too *few* people and has to import immigrants from elsewhere in the world. And the increase didn’t cause some kind of stagnation where older people prevented society from ever changing. It didn’t cause some sort of perma-dictatorship where old people refuse to let go of their resources and the young toil for scraps. It corresponded to the period of the most rapid social and economic progress anywhere in history. Would Europe be better off if the government killed every European the day they turned 45? If not, it seems like the experiment with extending life expectancy from 45 to 85 went pretty well. Why not try the experiment of extending life expectancy from 85 to 125, and see if that goes well too? And finally, what’s the worst that could happen? An overly literal friend has a habit of always answering that question with “everyone in the world dies horribly”. But in this case, *that’s what happens if we don’t do it.* Seems like we have nowhere to go but up!
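(As a coda, the population arithmetic above is easy to check on a napkin. A rough sketch, using round annual death counts I'm assuming - about 2.8 million for the US and 60 million for the world - rather than numbers from the book:)

```python
# Back-of-envelope version of the sustainability argument. The death
# counts are round numbers I'm assuming, not figures from Lifespan.
US_DEATHS_PER_YEAR = 2_800_000
WORLD_DEATHS_PER_YEAR = 60_000_000

print(US_DEATHS_PER_YEAR * 0.50)     # 50% of Americans stop dying: ~1.4M extra/year
print(WORLD_DEATHS_PER_YEAR * 0.25)  # 25% of the world: ~15M extra/year
print(1_000_000_000 / (WORLD_DEATHS_PER_YEAR * 0.25))  # ~67 years per extra billion
```

The US figure comes out a bit above the "little over a million" quoted above, depending on which year's death count you plug in, but the order of magnitude - comparable to annual immigration - is the same.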
Scott Alexander
44050250
Book Review: Lifespan
acx
# MM: Omicron Variant Noah Smith has a good summary of the Omicron evidence [here](https://noahpinion.substack.com/p/the-omicron-situation), including a lot of quotes from experts. But experts say a lot of stuff like “well, it could be bad, but we can’t be sure”, plus sometimes they disagree. This sounds like a job for prediction markets! (source: [Metaculus](https://www.metaculus.com/questions/8755/estimated-r0-of-omicron-variant/)) R0 is a measure of how quickly a disease spreads under certain ideal conditions. The [original Wuhan strain](https://www.popsci.com/health/infectious-coronavirus-variants-guide) was probably around 2.5, and [the Delta variant](https://academic.oup.com/jtm/article/28/7/taab124/6346388) was probably around 5. So if this number is higher than 5, it’s more transmissible than Delta. The community prediction is 7.31, so Metaculus predicts it will be significantly more transmissible than Delta. (source: [Metaculus](https://www.metaculus.com/questions/8757/omicron-variant-deadlier-than-delta/)) Metaculus didn’t want to wade into precise lethality statistics, so they just asked for a yes-or-no answer on whether it would be deadlier than Delta. Forecasters say there’s a 34% chance it will be. The specific resolution criterion is that at least 3 of the first 4 studies must find a statistically significant difference “favoring” Omicron. That feels pretty strict to me, so you should think of this as the probability that it will be really noticeably deadlier than Delta. (source: [Metaculus](https://www.metaculus.com/questions/8753/date-omicron-has-50-prevalence-in-us/)) Forecasters predict that Omicron will be more than half of US coronavirus cases by mid-March. Of people who *don’t* think it will reach majority by March, it looks like about 15% think it will take longer, and another 35% think it will *never* be the dominant variant in the US. This could be because we successfully contain it (really?), because it’s not actually that bad and can’t outcompete Delta, or because some other even worse variant takes over before Omicron gets the chance. All of this is implicitly another vote in favor of it being more transmissible than (and so moving faster than, and spreading further than) Delta.
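For intuition about what the jump from R0 ≈ 5 to R0 ≈ 7.3 means in practice, here is a crude conversion from R0 to doubling time - a sketch assuming a fixed ~5-day generation time and a fully susceptible population (both assumptions are mine, and real-world R_t is lower than R0 because of immunity):

```python
import math

GENERATION_TIME_DAYS = 5.0  # assumed; estimates vary by variant

def doubling_time_days(r0: float) -> float:
    growth_rate = math.log(r0) / GENERATION_TIME_DAYS  # per-day exponential rate
    return math.log(2) / growth_rate

print(doubling_time_days(5.0))   # Delta: ~2.2 days per doubling
print(doubling_time_days(7.31))  # Omicron community prediction: ~1.7 days
```

Half a day less per doubling compounds viciously over a month of spread, which is why the transmissibility question matters more than the other two.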
Scott Alexander
44709115
MM: Omicron Variant
acx
# Open Thread 200 This is the weekly visible open thread. In fact, it’s the two hundredth open thread! For historical reference, you can find Open Thread 1 [here](https://slatestarcodex.com/2014/06/04/open-thread/). Post about whatever you want. Also: **1:** Thanks to the 600 (!) of you who sent in ACX grants applications. I’d complained earlier that there weren’t many good ones, but I was being impatient - now I have the opposite problem, way too many good ones to fund or evaluate easily. I’ll be sending out emails to those of you who offered to help fund or judge applications in a few days to a week, after I’ve come up with a strategy. Probably results will be announced towards the end of the window I committed to on the original post, so around Christmas time. Please bear with me during the inevitable snafus in trying to set this process up. **2:** Commenters brought up even more examples of interesting families last week. For example, neuroscientist [Oliver Sacks](https://en.wikipedia.org/wiki/Oliver_Sacks), *Yes, Minister* co-creator [Jonathan Lynn](https://en.wikipedia.org/wiki/Jonathan_Lynn), Israeli deputy PM [Abba Eban](https://en.wikipedia.org/wiki/Abba_Eban), and econ Nobelist [Robert Aumann](https://en.wikipedia.org/wiki/Robert_Aumann) are all cousins. Steve Jobs is the biological brother (adopted and raised apart) of award-winning novelist [Mona Simpson](https://en.wikipedia.org/wiki/Mona_Simpson#Personal_life), and their cousins (raised apart in Syria, never met) are famous pianist [Malek Jandali](https://en.wikipedia.org/wiki/Malek_Jandali) and journalist [Bassma Al-Jandaly](https://en.wikipedia.org/wiki/Bassma_Al_Jandaly). Paleogenetics founder [Svante Paabo](https://en.wikipedia.org/wiki/Svante_P%C3%A4%C3%A4bo) is the illegitimate son (raised apart) of Nobel-winning biochemist [Sune Bergstrom](https://en.wikipedia.org/wiki/Sune_Bergstr%C3%B6m). Add him alongside Bobby Fischer to the pile of illegitimate children raised apart from their talented parents who still became talented, I guess. Staying on the subject of nature vs. nurture: Albert Einstein had two grandchildren who lived to adulthood: one biological, and one adopted. His biological grandchild [Bernhard Einstein](https://en.wikipedia.org/wiki/Bernhard_Caesar_Einstein) was an engineer who worked on laser technology for the Swiss Army and “obtained four U.S. patents related to light amplification”. His adopted grandchild [Evelyn Einstein](https://en.wikipedia.org/wiki/Evelyn_Einstein) “worked briefly as an animal control officer, as a cult deprogrammer, and as a Berkeley, California, reserve police officer”. **3:** Comment of the week is [Gwern on](https://astralcodexten.substack.com/p/links-for-november/comment/3762684) whether we should consider China “successful”: > This was a big thing with the USSR too: they'd bury us in economic productivity with their stakhanovite New Soviet Men freed from the waste of capitalism (cf. that *Conquest of Bread* review incidentally). Then it was with Japan, they'd surpass us with their unique Japan Inc. fusion of pseudo-democracy in which one party was always elected and worked hand-in-glove with the zaibatsus (or maybe South Korea, or another Asian Tiger). Then it was China... <http://web.archive.org/web/20090302203414/http://web.mit.edu/krugman/www/myth.html> You'll note all the countries in question are still below (sometimes vastly) US per capita. 
> > The conclusion is more "the Industrial Revolution is a helluva drug", and can make any regime look good and get high on its own ideological supply about how it has restarted history and inaugurated the Caliphate or China Dream or Japan as #1 or whatever. Noahpinion had [a pretty similar point](https://noahpinion.substack.com/p/china-is-very-20th-century) a few months ago, but it’s always good to get more reminders. **4:** Dr. Bitterman, one of the researchers who came up with the ivermectin-effects-are-from-worms hypothesis, is defending his idea from some of the concerns [you guys brought up in the comments](https://astralcodexten.substack.com/p/higlights-from-the-comments-on-ivermectin). For example, in response to a comment that hyperinfection syndrome is rare, he [writes](https://www.reddit.com/r/slatestarcodex/comments/qvsw91/ivermectin_much_more_than_you_wanted_to_know/hlvbcq0/): > This is simply not true. I've pointed out exactly why this isn't true in the past to you as well and yet you continue to repeat it. I'm not sure why. Anyway, this is just pure ignorance of the relative risk scale and just how few deaths can radically shift that reported scale. The entire difference of mortality effect is only 39 people among a control group of 1984 patients. Assuming a 15.5% prevalence (average prevalence by parasitologic methods of the trials driving the favorable effect), and even assuming a only 5% chance of getting disseminated strongyloids infection due to either immunosuppression (since less half of the patients in the only paper reporting semi-reporting prevalence Rzztmass likes to cite were immunosupprsssed from steroids) or eosinopenia associated with COVID (which happens even without steroids), that already explains ~15.5 deaths, which is already ~40% of the mortality benefit. That absolutely makes a dent. And even then I suspect this is a low estimate. Bottom line: you continue to not appreciate how small number absolute patient event differences can translate into large differences on a relative risk scale. See the full comment there, and [his other Reddit comments](https://www.reddit.com/user/AviBittMD), for more. See also [his Twitter](https://twitter.com/AviBittMD). He also points out that the serological prevalence numbers I cited in one of *my* responses might not be accurate, since those include people with previous cases. **5:** Alexandros Marinos, whose pro-ivermectin views I argued against in the same comments post, has [finally started a Substack and written up those views at length](https://doyourownresearch.substack.com/p/a-conflict-of-blurred-visions). Among his interesting findings are that keeping all of the studies mentioned on ivmmeta, removing the ones I think are bad, removing the ones ivmmeta itself thinks are bad, and removing the ones that leading anti-ivermectin researcher Gideon Meyerowitz-Katz thinks are bad - all give about the same relative risk result (by ivmmeta’s methodology), somewhere around 0.3 or 0.4 (Marinos notes that my and ivmmeta’s exclusions give results around 0.3 while GMK’s give about 0.4, but this seems like splitting hairs to me, since all three are overwhelmingly positive by these standards). I think this is an interesting finding about how (at least when critiquing ivmmeta) it’s probably not worth arguing over which studies to include, so much as over the overall methodology for interpreting the studies that remain. 
In terms of the more polemical points, I might or might not write a longer response later. Right now the point I think is most important is that Marinos sort of grants that many of the substances with many positive studies probably don’t work - but says ivermectin is different because it has more studies and stronger effects than the others. I think the stronger effects are a bit exaggerated - the graphic that Marinos presents shows it’s pretty similar to melatonin, anti-androgens, and a bunch of other things - but I will grant that it has significantly more studies. But as I’ve tried to point out elsewhere, adding more studies can only address problems within your model, not problems outside your model. Suppose that some parapsychologist has done twenty studies, all of which prove psychic phenomena exist (and there are many such parapsychologists!). Does it help if she works for another year or two, and we get forty such studies? How about another decade, and we get two hundred such studies? Who cares?! We already know that this parapsychologist, using whatever methodology she uses, is able to consistently get positive results. It’s not like all twenty of her studies went badly just by coincidence! So either you should believe in psychic phenomena, or you should believe that *this genre of study* is bad (ie an outside-of-model problem) and we need some independent researcher or some better methodology to try replicating it. When I see fifty studies mostly showing that ivermectin works, you have successfully convinced me that, if you did a thousand more studies of approximately this quality, they would also show that ivermectin works. But then who cares how many studies anti-androgens have relative to ivermectin? I realize this looks bad - aren’t we supposed to use “replications” and “number of studies” as a proxy for truth? But this is why I point out that there are dozens of studies showing psychic phenomena are real, and hundreds of studies showing the same for homeopathy. I don’t think anyone has a great idea where to go from here (“larger and more professional studies” is a good guess, but people will understandably worry that this just means the establishment never wants to admit it’s wrong and will count only establishment studies that it can bias). But “just do the same bad studies more and more times” sure isn’t the answer. **6:** In the highlights from the comments on my Paxlovid post, I posted some people’s explanations for why the FDA’s behavior wasn’t so bad. Dan Elton [still thinks it’s bad and has written a post on](https://moreisdifferent.substack.com/p/the-fda-has-blood-on-their-hands) why he rejects the explanations.
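To put the point in item 5 in statistical terms: if every study in a genre shares a common bias, piling on more studies shrinks the noise around the *biased* answer, not around the truth. A toy simulation, with all the effect sizes invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.0    # suppose the drug actually does nothing
SHARED_BIAS = -0.5   # a flaw common to the whole genre of study
NOISE_SD = 0.3       # per-study sampling noise

for n_studies in (20, 200, 2000):
    estimates = TRUE_EFFECT + SHARED_BIAS + rng.normal(0, NOISE_SD, n_studies)
    print(n_studies, round(estimates.mean(), 3))
# The pooled estimate converges to -0.5 (an apparent benefit), never
# to 0, no matter how many studies pile up: an outside-of-model problem.
```

Only a study from outside the genre - different methodology, different incentives - draws from a distribution without the bias term, which is why replication count alone can't settle this.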
Scott Alexander
44692092
Open Thread 200
acx
# Links For November *[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this. PS: Happy Thanksgiving!]* **1:** [The story of Jeff Bezos’ biological father](https://www.chicagotribune.com/news/ct-met-amazon-bezos-dad-chicago-inc-20180220-story.html), a former circus performer who didn’t realize Jeff was his son until well into the 2010s. **2:** New type of nominative determinism just dropped ([source](https://twitter.com/moyix/status/1454517482966986754)): **3:** [Speculations on the rise of Christianity](https://kontextmaschine.tumblr.com/post/666326842205224960/xhxhxhx-bibliolithid-etirabys). A consistent 40% per decade growth rate maintained from St. Peter to Constantine would fit observations nicely; we know this is possible in theory because Mormonism has also grown at about 40% per decade for the past century (see the compound-growth sketch at the end of this post). Also, plausibly Constantine’s conversion barely changed the growth rate at all. **4:** Related: **5:** Vets are still debating whether animals get placebo effects. [Here’s a study](https://pubmed.ncbi.nlm.nih.gov/19912522/) suggesting they do; I can’t read the full text, but I’d be interested in knowing whether these dogs had previous good experiences with anticonvulsants (in which case it feels more Pavlovian) or if they’re claiming that dogs somehow magically know what a medicine is and what it should do (maybe they’re failing to distinguish this from regression to the mean?) **6:** If you were following [coverage](https://www.bbc.com/news/in-pictures-59118673) of the COP26 climate summit, you might have seen this picture: Boris Johnson (left) is 5’9, so the guy in the middle must be gigantic. Who is he? Looks like it’s [Milo Djukanovic](https://en.wikipedia.org/wiki/Milo_%C4%90ukanovi%C4%87), President of Montenegro, who’s 6’6 (198 cm). Is he the tallest world leader? It seems like he’s tied with his colleague across the border, Serbian president [Aleksandar Vucic](https://en.wikipedia.org/wiki/Aleksandar_Vu%C4%8Di%C4%87). Why are Balkan leaders so tall? As usual, the answer is “genetics”. [This article](https://phys.org/news/2017-04-tallness-herzegovinian-men-linked-gene.html) says: > It has been noted that men from Herzegovina are taller on average than men in other places—the average male height is just over six feet...Putting all the data together, researchers concluded that the most likely cause of larger-than-average height of Herzegovinian men is lifestyle during the Paleolithic—men hunted large animals such as mammoth for survival—such a diet, heavy in protein, combined with small population densities, would have provided ideal conditions for height selection, resulting in increasingly taller men who passed the trait down through their I-M170 chromosome to future generations. Some sources note that they manage to beat the Dutch despite the latter country’s much higher human development index. The Dutch are probably tall through a combination of nature and nurture; Balkan people are tall through nature alone. 
**7:** Eliezer Yudkowsky doesn’t need more ego boosts, but an idea he had a couple of years ago - using strings of bright lights to provide a better and brighter experience for Seasonal Affective Disorder sufferers than regular light boxes - spread from him to the rationalist community to the wider world, and has finally gotten tested [in a formal study](https://www.medrxiv.org/content/10.1101/2021.10.29.21265530v1) (see Acknowledgments section). Results seem vaguely positive: "SAD symptoms of both groups improved similarly and considerably...exploratory analyses indicate that a higher illuminance is associated with a larger symptom improvement in the BROAD light therapy group" **8:** The percent of people who choose woke options on polls seems - very tentatively and preliminarily - to be going down post-Trump (h/t [Richard Hanania](https://twitter.com/RichardHanania/status/1454217024389795843)). **9:** [Twitter conspiracy theories](https://twitter.com/AliceFromQueens/status/1459946777885519876) **10:** Did you know: all those reconstructions of “how classical art would have looked with the original paint” are probably inaccurate. [There is no reason to think](https://ortusnigenad.tumblr.com/post/667844265092808704/regarding-reconstructions-of-painted-classical) the Greeks and Romans used garish technicolor hues on their statues; what evidence we have suggests they were good at shading, and the statues were probably colored very tastefully. **11:** [Complaints about](https://www.cambridge.org/core/services/aop-cambridge-core/content/view/715C589A73DDF861DCF8997271DE0B8C/S0140525X21002351a.pdf/the-emperors-new-markov-blankets.pdf) how Karl Friston uses the term “Markov blanket” **12:** Trevor Klee on the claim that [cyclosporine patients don’t get dementia](https://trevorklee.com/organ-transplant-patients-maybe-dont-get-dementia-heres-why/). Apparently there was a big study where basically nobody on the immunosuppressant cyclosporine ever got dementia, and there are some theoretical reasons why cyclosporine might prevent neurodegeneration. But another study found people on cyclosporine got dementia at the usual rate. I think in a situation like this you should have a really high prior on “the people who got the crazy result bungled their study somehow”, but I’m interested in hearing what other people think. **13:** Also from Trevor: [a history of fluvoxamine treatment for COVID](https://trevorklee.com/why-did-they-give-antidepressants-to-covid-patients/). **14:** To tide you over until the next book review contest, here is awanderingmind’s review of [The Conquest Of Bread](https://www.awanderingmind.blog/posts/2021-10-30-book-review-conquest-of-bread.html). **15:** Claims: **16:** Big trial on [Vitamin D for depression](https://jamanetwork.com/journals/jama/fullarticle/2768978) finds null result. Peter Attia tries to tear it apart [here](https://peterattiamd.com/randomized-controlled-trials-when-the-gold-standard-leaves-you-with-fools-gold/), but I am unconvinced, especially in the context of Vitamin D never working for any of the things people say it does besides the most boring aspects of bone health. **17:** *“California is actively considering the adoption of flawed and inequitable guidance on math curricula based on misleading data and inaccurate success metrics reported by San Francisco Unified School District (SFUSD)...Based on our review of the data, we found misleading, unsupported, and cherry-picked assertions of success for the new math program. 
We noted that overall test scores are down and enrollments in UC-approved advanced math classes have dropped as well.”* It looks like San Francisco [is trying](https://www.familiesforsanfrancisco.com/updates/inequity-in-numbers) the good old “lower standards, then when more kids meet the standards, claim your school reform plan worked” trick again. **18:** [A new study](https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2785832) claims that self-reported “Long COVID” symptoms are more associated with *believing* you’ve had COVID than with actually having it (as measured by serologic testing), which sounds like pretty strong evidence that it’s psychosomatic. [Expert reactions](https://www.sciencemediacentre.org/expert-reaction-to-study-looking-at-the-association-of-self-reported-covid-19-infection-and-sars-cov-2-serology-test-results-with-persistent-physical-symptoms/) are mixed-to-negative, although the only one of these that doesn’t sound like excuse-making is Dr. Rossman’s about the unreliability of the tests. I haven’t confirmed test reliability stats but [Philippe Lemoine also thinks](https://twitter.com/phl43/status/1459275751795175425) this is a plausible confounder. **19:** Noahpinion: [What If Xi Jinping Just Isn’t That Competent?](https://noahpinion.substack.com/p/what-if-xi-jinping-just-isnt-that) I appreciated this for making me think, and for underlining the extent of the difference between the Deng/Jiang/Hu era and what Xi’s doing. I especially appreciated this line, which I’d never thought about before: > Xi presided over the end of China’s hypergrowth. To some extent this is not his fault. No country can grow at 10% forever, and there were many structural forces pushing downward on China’s numbers — the end of the demographic dividend, the exhaustion of rural surplus labor (the [Lewis Turning Point](https://cdm15738.contentdm.oclc.org/digital/collection/p15738coll2/id/1737)), the saturation of export markets, and so on. But China is also [slowing down earlier](https://www.wsj.com/articles/chinas-state-driven-growth-model-is-running-out-of-gas-11563372006) than South Korea, Taiwan, or Japan did in their day. China’s [per capita GDP (at PPP)](https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(PPP)_per_capita) is still only about 1/3 that of a developed country, so if they stop catching up at about half of developed-country levels, that will not be a great showing. A big lesson of the past twenty years has been “actually liberal democracy isn’t necessary to reach developed-country status”, so it would be quite the twist if it turned out you needed liberal democracy to reach developed-country status. This gets pretty close to [the great mystery](https://astralcodexten.substack.com/p/book-review-how-asia-works) of why some less-developed countries “catch up” and others don’t; whatever happens in China is going to be a really useful data point. **20:** [Variations on the fable of The Frog And The Scorpion](https://sadoeuphemist.tumblr.com/post/615521935528460288/a-scorpion-being-unable-to-swim-asked-a-frog-to). 
**21:** You’ve probably heard about the University of Austin, the new project by a bunch of wokeness-critical academics to start a new university that won’t cancel people or force conformity ([New York Post article](https://nypost.com/2021/11/08/university-of-austin-founded-by-writers-and-entrepreneurs/), [Politico article](https://www.politico.com/news/magazine/2021/11/17/university-austin-bari-weiss-pinker-culture-politics-522800) - these were the two least “you need to be super-outraged about this right now” articles I could find). Tyler Cowen and Larry Summers are involved; Steven Pinker was supposed to be but left for unclear reasons. My thoughts, in no particular order: * Even forgetting the political aspect, attempts to start new universities are always welcome. * I agree with the founders’ politics, but wokeness isn’t *just* a problem, it’s also a proposed *solution* to the problem of who to tolerate and associate with in a hostile legal and media environment that likes guilt-by-association. I think it’s a bad solution, but once you jettison it you need a different solution - “tolerate everyone” sounds good until you get confronted with pedophiles, Nazis, al-Qaeda supporters, and super-woke people who demand you censor everyone *else*. U of A has committed themselves to finding some other stable equilibrium, but for now I think of them as a (welcome) experiment in seeing if they can do this, rather than a success story. * I keep thinking about an article I read - sorry, I can’t remember where - talking about the sense in which for-profit universities were scams. Some of them were scams in that they didn’t exist. Others were scams in the sense that they existed but provided terrible education. But some existed and provided education which was honestly about as good as you would get at a normal low-tier college, yet was useless in practice. You’d apply for a nursing job with your nursing degree from Random For-Profit U, and the hospitals would say “What, never heard of them, forget it”, *and that was also a scam if you went into a nursing program expecting to be able to work as a nurse at the end of it*. I expect U of A can get buildings, fancy gowns, etc, but can they get respect? The hard part about founding a new top-tier university isn’t *just* getting lots of nice buildings and brilliant professors, it’s getting people to respect you when they’re used to not respecting any university that doesn’t have a hundred-year record of excellence or powerful establishment backers. I don’t know, maybe their strategy is to hope everyone will confuse them with the University of Texas in Austin - in which case, good strategy. * Part III of [this post](https://slatestarcodex.com/2017/05/01/neutral-vs-conservative-the-eternal-struggle/) is always relevant. **22:** Related: [Woolf University](https://woolf.university/blog/woolf-raises-seven-and-a-half-million-to-make-higher-education-more-accessible-by-building-a-global-collegiate-university-2021-11-12) is an accredited university that “lets qualified organizations join as member colleges and offer accredited degrees”. I think the idea is that if you want to start a new college but are intimidated by the accreditation process, you can instead become an affiliate/subcollege of Woolf and since they are accredited, now you are too. I don’t know enough about education to know whether this will work but it seems like a cool idea. 
**23:** Related: [Stanford faculty urge college](https://johnhcochrane.blogspot.com/2021/11/academic-freedom-at-stanford.html) to adopt the University of Chicago statement on free expression. Seems like a rare example of practical things happening in the fight for academic freedom. **24:** Sort of related: I console myself with the idea that the Democrats have some kind of grand strategy to both make everyone hate them as much as possible, and also push policies that will accomplish exactly the opposite of all their goals. Then Republicans will capture all branches of government with large majorities, and build lots of solar panels in order to own the libs. Also promote race-blind hiring, build lots of housing to fight homelessness, repeal SALT deductions, regulate Big Business, pull out of foreign wars, heck, why not [legalize marijuana](https://www.marijuanamoment.net/republican-lawmakers-file-bill-to-tax-and-regulate-marijuana-as-alternative-to-democratic-proposals/)? Viewed this way, maybe Biden and Pelosi are the greatest political geniuses of their generation! **25:** On the [Calusa](https://en.wikipedia.org/wiki/Calusa), the pre-conquest native inhabitants of south Florida: > [They] lived in large, communal houses which were two stories high. When Pedro Menéndez de Avilés visited the capital in 1566, he described the chief's house as large enough to hold 2,000 without crowding, indicating it also served as the council house. When the chief formally received Menéndez in his house, the chief sat on a raised seat surrounded by 500 of his principal men, while his sister-wife sat on another raised seat surrounded by 500 women. The chief's house was described as having two big windows, suggesting that it had walls. Five friars who stayed in the chief's house in 1697 complained that the roof let in the rain, sun and dew. The chief's house, and possibly the other houses at Calos, were built on top of earthen mounds. In a report from 1697, the Spanish noted 16 houses in the Calusa capital of Calos*,* which had 1,000 residents. **26:** How have supply chain disruptions affected [lines on graphs measuring the progress of AI](https://www.lesswrong.com/posts/wfpdejMWog4vEDLDg/ai-and-compute-trend-isn-t-predictive-of-what-is-happening)? (see also [this comment](https://www.reddit.com/r/mlscaling/comments/milujs/ai_and_compute_trend_isnt_predictive_of_what_is/gt5eyav/)) **27:** Related: Transcript of Richard Ngo and Eliezer Yudkowsky on AI ([part 1 on capability gains](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/hwxj4gieR7FWNwYfa), [part 2 on alignment difficulty](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/7im8at9PmhbT4JHsW), [part 3 with Paul Christiano on takeoff speeds](https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds)) **28:** Previous studies found criminality is mostly genetic, but left room for some family effects. [A clever study investigates](https://www.cambridge.org/core/journals/psychological-medicine/article/using-age-difference-and-sex-similarity-to-detect-evidence-of-sibling-influence-on-criminal-offending/78B57991E7F6BA8A6FFC8A7D042E440E) these with sibling age differences. If you have a criminal older sibling, but that sibling is so much older than you that you barely/never interact, you have a 25% chance of being a criminal yourself. But if you have a criminal older sibling close to the same age as you who can influence you a lot, you have a 33% chance. 
This neatly shows a modest but important non-genetic effect, although it can’t necessarily distinguish between direct sibling influence and more complicated things like “your parents changed parenting style during that time”. **29:** EA Forum: [Good News On Tackling Climate Change](https://forum.effectivealtruism.org/posts/ckPSrWeghc4gNsShK/good-news-on-climate-change). Although we’re less likely than hoped to reach the target of < 1.5C of warming, we’re also less likely than feared to reach truly awful scenarios like RCP8.5 (>6C of warming). Partly this is just because, as we move forward in time and see how things go, uncertainty decreases and the chance of extreme scenarios goes down. Partly it’s because we’ve genuinely made progress in things like solar power (and, although we haven’t banned coal, we’ve mostly succeeded at not *vastly increasing* our coal use, which was never certain). And partly it’s because we’ve studied climatology more and ruled out some scenarios where the climate is super-hyper-sensitive to carbon. As a result, the authors’ interpretation of IPCC data says that the risk of RCP8.5 (the technical name for the worst warming scenario) has gone from 11% in 2015 to less than 1% today. We are probably on track for between 2 and 4 degrees of warming this century, which will be bad but not existentially catastrophic. **30:** Related, by Hannah Ritchie: [Stop Telling Kids They’ll Die Of Climate Change](https://www.wired.com/story/stop-telling-kids-theyll-die-from-climate-change/). **31:** Related: Daniel Reeves vs. Bryan Caplan on climate change ([part 1](https://www.econlib.org/i-win-my-climate-shock-bet/), [part 2](https://www.econlib.org/climate-shock-bet-daniel-reeves-responds/)). Reeves (who thinks climate change is a big problem) challenged Caplan (who thinks it isn’t) to read *Climate Shock*. Caplan read it and (in my maybe unfair interpretation) agrees that it seems to make a strong argument, but says that since it’s by left-wingers it’s probably biased in some hard-to-detect way and he can ignore it. While it’s true that there are many biased people out there, and some of them make strong arguments, I also notice this is a fully general excuse for never changing your mind in response to anything, however convincing it would be otherwise. Seems bad. **32:** Last spring Robin Hanson and others mooted the idea that viral load affected disease severity; eg if you inhaled one coronavirus particle, you’d get a mild coronavirus case, but if you inhaled 100,000, you’d get a severe case. I hadn’t realized that the prestigious *New England Journal of Medicine* published an article in October [saying maybe this might be true](https://www.nejm.org/doi/full/10.1056/nejmp2026913) (though not providing any new evidence). This sparked responses by other people saying [maybe it might be false](https://www.nejm.org/doi/full/10.1056/nejmp2026913) (also not providing evidence) - honestly the whole thing was weirdly centered around PR (“if we say this is true, it might make people like masks more!”, “but if we say it’s false, that could make people like vaccines more!”). The only useful thing I got out of this is that [Stephan Guyenet looked into](https://twitter.com/whsource/status/1320107008855437312) a pathogen load/severity correlation for diarrhea and found there was none. 
**33:** I’d previously cited a claim in Joseph Henrich’s *Secret Of Our Success* that people liked spicy foods because they were antibacterial, but [an article in](https://sci-hub.st/10.1038/s41562-020-01039-8) *[Nature](https://sci-hub.st/10.1038/s41562-020-01039-8)* says there is “little evidence” to support that claim. **34:** When [Paul Morphy](https://en.wikipedia.org/wiki/Paul_Morphy) was a young child in the mid-1800s, he watched some family members play a few chess games and figured out the rules. Then - apparently without any formal instruction or practice - he became the greatest chess player of his age, beating a Hungarian master at 12 and becoming unofficial world champion at 20. Then he quit chess and never did anything else of note again. (h/t [Erich Grunewald](https://www.erichgrunewald.com/posts/child-prodigies/))
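Finally, the compound-growth sketch promised in item 3. A minimal version, assuming a seed of about 1,000 Christians around 40 AD (the conventional starting point for this calculation, though the true number is anyone's guess):

```python
# 40% growth per decade from an assumed seed of ~1,000 Christians in 40 AD.
christians, year = 1_000, 40
while year < 300:
    christians *= 1.4
    year += 10
print(year, f"{christians:,.0f}")  # 300 AD: ~6.3 million
```

That comes out to a meaningful fraction of the Roman Empire's roughly 60 million people on the eve of Constantine's conversion, with no miracle decades required - which is why the conversion itself barely shows up in the growth curve.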
Scott Alexander
44098939
Links For November
acx
# Pascalian Medicine **I.** When I reviewed Vitamin D, I said I was about 75% sure it didn’t work against COVID. When I reviewed ivermectin, I said I was about 90% sure. Another way of looking at this is that I must think there’s a 25% chance Vitamin D works, and a 10% chance ivermectin does. Both substances are generally safe with few side effects. So (as many commenters brought up) there’s a [Pascal’s Wager](https://en.wikipedia.org/wiki/Pascal%27s_wager)-like argument that someone with COVID should take both. The downside is some mild inconvenience and cost (both drugs together probably cost $20 for a week-long course). The upside is a well-below-50% but still pretty substantial probability that they could save my life. (Alexandros Marinos has also been thinking about this, and calls it [Omura’s Wager](https://twitter.com/alexandrosM/status/1432247947601661952)) We can go further. The same people behind ivmmeta.com have posted this “meta-analysis” of curcumin, a common spice and oft-mooted panacea: ([source](https://c19curcumin.com/meta.html)) I’m going to guess it’s not true, because I’ve become pretty critical of these people’s methodology since doing the ivermectin review. Also, curcumin is a PAIN ([pan-assay interference compound](https://en.wikipedia.org/wiki/Pan-assay_interference_compounds), ie a substance with weird chemical properties that make every test seem positive, so if you do chemical tests to see whether it activates eg coronavirus-fighting immune cells, it will always say yes). This means people are always publishing exciting papers about it and alternative medicine people are always getting really enthusiastic about it and suggesting it as the cure for everything (eg [depression](https://www.webmd.com/depression/turmeric-depression)). Still, I don’t have enough time and energy to review this evidence base thoroughly. And here I am, being told that nine studies found highly positive effects. With nine studies finding highly positive effects in favor, and just my vague ungrounded skepticism against - and given that people’s naive probability estimates [are usually overconfident](https://en.wikipedia.org/wiki/Overconfidence_effect) - can I really say that I’m 95% sure this doesn’t work? And if I’m not 95% sure this doesn’t work, doesn’t that mean there’s a 5% chance it *does* work? And since a course of curcumin costs about $10 and is harmless, if there’s a 5% chance that this thing reduces COVID mortality by 70%, shouldn’t I use it? In fact, shouldn’t I be working as hard as I can to make every hospital, doctor, and patient in the world use it? But what’s true of curcumin is equally true of lots of other different compounds: zinc, hydroxychloroquine, quercetin, nigella sativa, melatonin…just going off the ones on the sidebar of ivmmeta.com, there are about thirty different things that have this same level of very early, very dubious super-promising COVID results. Some are expensive and some are dangerous, but I think about twenty of them are cheap and safe. The establishment medical position is that you shouldn’t take any of these, because they haven’t been proven to work. The Insanity Wolf position is: maybe you should take all twenty, because you won’t have lost very much, and if even one of them works, it’s worth it. In fact, Insanity Wolf has a strong argument here: one chemical in this class is fluvoxamine. 
Six months ago, it was just like all these others: something with a clever story of why it might work and a few weak preliminary studies - the sort that never pan out in real life. Then a really big, excellent randomized trial seemed to find it worked, and the current scientific consensus agrees this is probably true. So if you’d done the Insanity Wolf thing six months ago and taken twenty untested compounds, at least one of them would have worked and cut your COVID mortality by 30% (our current guess at fluvoxamine’s effect size). But why stop there? Sure, take twenty untested chemicals for COVID. But there are almost as many poorly-tested supplements that purport to treat depression. The cold! The flu! Diabetes! Some of these have known side effects, but others are about as safe as we can ever prove anything to be. Maybe we should be taking twenty untested supplements for *every* condition! **II.** So what’s the counterargument? Is anything ever *truly* safe? There’s a species of parasitic worm called *Loa loa*. Usually it hides from the immune system. But if you take ivermectin for some unrelated reason, the *Loa loa* die *en masse*, the immune system notices the corpses, it freaks out and massively overreacts, and sometimes your brain gets fried in the crossfire. If you get this, kudos - it’s one of the most esoteric ways to die, and any medical professionals in the vicinity will be impressed. But my point is, “this drug has no side effects” is a fraught statement. In principle ivermectin is perfectly safe; in practice, the world is full of weird stuff that can make harmless drugs kill you unexpectedly. And: sometimes people give older people ivermectin to treat scabies. A few decades ago, [a study](https://pubmed.ncbi.nlm.nih.gov/9186403/) found this increased mortality pretty significantly, suggesting that ivermectin is dangerous in the elderly. IIRC later studies couldn’t replicate this, and I think the current consensus is that it’s fine. But if we’re talking about “maybe our interpretation of the studies is wrong” and “but there’s still a small chance”, we’ve got to apply this on the negative side as well as the positive. But I have to admit that given everything I know about ivermectin - and Vitamin D, melatonin, etc - I still think on net the very small chance that this stuff helps you is higher than the extremely small chance it kills you. Even if the latter isn’t quite zero. What about unknown unknowns? This is a two-way street: these chemicals might have unexpected risks, but also unexpected benefits. Vitamin D can contribute to kidney stones in vulnerable individuals, but it also helps bone health, and there are various (probably false) claims that it prevents cancer, helps depression, etc. But as a corollary of [Algernon’s Law](https://www.gwern.net/Drug-heuristics) (your body is already mostly optimal, so adding more things is unlikely to have large positive effects unless there’s some really good reason), probably we’re more likely to discover unexpected risks than unexpected benefits. Still, varying the value of the “unknown unknowns” term until it says whatever justifies our pre-existing intuitions is the coward’s way out. We don’t fret over the unknown unknowns of Benadryl or Tylenol or whatever, even though we know their benefits are minor. 
Ivermectin, Vitamin D, etc are well-studied chemicals, and even though there’s always some chance everything is bad, at some point that chance becomes low and we can still say that on net the benefits outweigh the risks. Should we worry that even if each drug individually is net positive, giving someone twenty medications will lead to some crazy interaction? I’m not too concerned about this; clinically significant drug interactions are rarer than most laypeople think, and usually pretty predictable. Still, giving twenty different medications at once is almost unexplored territory, and something like this might be true. (but if it is, maybe it’s an argument for just giving the one or two most promising unlikely treatments, instead of all twenty) Even if Pascalian medicine is an individually reasonable choice, might it be bad at the level of society? No individual drug or supplement we’ve talked about so far costs very much money. But giving twenty inexpensive things to everyone with every disease quickly becomes expensive. On the other hand, the US medical system gave up on caring about costs long ago, and it’s not clear this would cost any more than eg [Aduhelm](https://astralcodexten.substack.com/p/adumbrations-of-aducanumab) or several other bad decisions we’ve already made. Maybe more important: patients already don’t take [about a quarter](https://www.reuters.com/article/us-new-prescriptions-study/many-patients-may-never-fill-new-prescriptions-idUSTRE61G3QX20100217) of the drugs they’re prescribed. People on the Internet demanding that new drugs be made available are a pretty unrepresentative sample of normal humans, who generally hate medications and will not take them even when doctors say it is very important. These people will never take twenty different pills (and combining these into a 20-way combination pill would be tough for many practical and legal reasons). If you ask them to do this, they will just take none of them, including the one pill that we know works and is very important. If it became generally accepted to prescribe lots of pills that probably didn’t work, and patients knew this, it might decrease trust in all medications. Even if your doctor said “this is one of the ones that definitely works”, some people wouldn’t believe them. And there’s no bright line between the ones that have a 99.9% chance of working, 99% chance of working, 90% chance, 50% chance, and 5% chance (where do antidepressants fall? I would say closest to a 90% chance of working in theory, 50% for an individual patient - but I’m not sure!) This could water down patient perception of every medication from “effective cure” to “shot in the dark”. Still, if this is true, you might conclude that it just means doctors shouldn’t universally recommend Pascalian medicine. It could still be rational to set up a course of it on your own. I think this would be equally fraught. If doctors are setting this up, you can at least be confident that they’re picking the medications that *actually* have very few side effects. If you’re doing it on your own, you’d better hope you’re good at doing your own research - better than all the people who did their own research during COVID and concluded all sorts of totally false things. At the very least, you’d have to add a term for “this actually has lots of well-known side effects but I am missing them”. There’s a potential compromise solution, where smart doctors come up with Pascalian medicine protocols for the few patients who would actually want them. 
But this would be a weird enough thing for a doctor to do that it would run into the “I wouldn’t trust any club that would accept me as a member” problem. **III.** Here’s the counterargument that bothers me the most: I think ivermectin doesn’t work. I think that it *looks* like it works, because it has lots of positive studies and a few big-name endorsements. But our current scientific method is so weak and error-prone that *any* chemical which gets raised to researchers’ attention and studied in depth will get approximately this amount of positive results and buzz. Look through the thirty different chemicals featured on the sidebar of the ivmmeta site if you don’t believe me. So if you’re an onion farmer, and you have a bunch of extra onions you can’t sell one year, all you have to do is ask some scientist friends to study whether onions cure cancer. There will be a bunch of studies, lots of them will be sloppy and say yes, people like me will see a bunch of positive studies and say “Can I really be more than 99% sure this is false? And if there’s even a 1% chance onions cure cancer, then - given how safe they are - isn’t it worth trying?” And then doctors will make every cancer patient take concentrated onion extract every day. Then eggplant farmers will want in on the money-printing-license, and then pumpkin farmers, and soon we’re up to 100 pills a day instead of just twenty. And then we’ll wish we’d stopped Pascal’s Wager-ing drug decisions at some earlier point. And maybe the right point to stop is now. I’m nervous about this scenario because it violates the [Law Of Conservation Of Expected Evidence](https://www.lesswrong.com/posts/jiBFC7DcCrZjGmZnJ/conservation-of-expected-evidence) - if I “know” that onion farmers doing studies will convince me that onions have a 5% chance of curing cancer, I should just believe there’s a 5% chance onions cure cancer now. So I must be doing something wrong here. Probably what I’m doing wrong here is saying that ivermectin having some decent studies raises its probability of working to 5%. I should just say 0.1% or 0.01% or whatever my prior on a randomly-selected medication treating a randomly-selected disease is (higher than you’d think, based on [the argument from antibiotics](https://slatestarcodex.com/2014/07/17/psychotropic-base-rates-the-argument-from-antibiotics/)). From the Outside View, this argument seems strong. From the Inside View, I have a lot of trouble looking at a bunch of studies apparently supporting a thing, and no contrary evidence against the thing besides my own skepticism, and saying there’s a less than 1% chance that thing is true. But as long as I can’t make that leap, I can be money-pumped by onion farmers. Reconciling Inside And Outside Views Remains Hard, More At 11. **IV.** Does Pascalian medicine beat our current strategy of only using drugs that are proven to work? I don’t know. I think the current strategy makes sense on a social level, but I’m not sure that the Pascalian strategy wouldn’t work for an individual. At least an individual who is able to reliably identify which low-but-nonzero-probability-of-benefit drugs really do have very few potential side effects (if you didn’t already know about *Loa loa* encephalopathy, consider that this might not be you; I am very much not-recommending that any reader here do this on their own). I know of only one person who takes the Pascalian argument completely seriously. 
Futurist [Ray Kurzweil](https://en.wikipedia.org/wiki/Ray_Kurzweil#Health_and_aging) used to take 250 different supplements every day - but after realizing this was excessive, cut it down to only 100. I would love to hear from him, or anyone else who does this - but I assume he’s too busy taking pills to comment.
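Since Part I is implicitly one long expected-value calculation, here it is made explicit - a minimal sketch in Python. The probabilities, effect sizes, and costs are the ones floated in the post; the baseline mortality figure and the dollar value of a statistical life are illustrative assumptions, not numbers from the text.

```python
# Sketch of the Pascalian wager arithmetic from Part I.
# Values marked "assumed" are illustrative placeholders.

def drug_expected_value(p_works, mortality_cut, baseline_mortality,
                        value_of_life, cost):
    """Expected dollar value of taking one cheap long-shot treatment."""
    expected_benefit = (p_works * mortality_cut * baseline_mortality
                        * value_of_life)
    return expected_benefit - cost

# The curcumin-style case: 5% chance it works, ~70% mortality cut if so.
# Assumed: 0.5% baseline COVID mortality, $10M value of a statistical life.
ev = drug_expected_value(p_works=0.05, mortality_cut=0.70,
                         baseline_mortality=0.005,
                         value_of_life=10_000_000, cost=10)
print(f"EV of one $10 long shot: ${ev:,.0f}")  # ~$1,740: wildly positive

# Twenty independent 5% long shots (the Insanity Wolf protocol):
p_any = 1 - (1 - 0.05) ** 20
print(f"P(at least one of twenty works) = {p_any:.0%}")  # ~64%
```

The counterarguments in Parts II and III are, in effect, claims that `p_works` is systematically miscalibrated and that the cost term leaves out things like rare catastrophic interactions and eroded trust in medicine.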
# Highlights From The Comments On Ivermectin Thanks to everyone who commented on my recent post **[Ivermectin: Much More Than You Wanted To Know](https://astralcodexten.substack.com/p/ivermectin-much-more-than-you-wanted)**. Let’s start with the negative comments. Leading pro-ivermectin website ivmmeta.com understandably disagreed with my fisking of them. They have a section where they respond to critics (see responses to [Gideon Meyerowitz-Katz](https://ivmmeta.com/#tp), to [the BBC](https://ivmmeta.com/#bbc), to [the parasitic worm hypothesis](https://ivmmeta.com/#strongyloides), and to [someone named AT who they won’t explain further](https://ivmmeta.com/#at)). I was honored to also get a response here. They write: > We note a few limitations and apparent biases in the SA/SSC ivermectin analysis. > > Author appears to be against all treatments, labeling them all *"unorthodox"* and *"controversial"*, even those approved by western health authorities, including casirivimab/imdevimab, bamlanivimab, sotrovimab, and paxlovid. > > We encourage the author to at least direct readers to government approved treatments, for which there are several in the [author's country](https://c19adoption.com/#usa), and many more in [other countries](https://c19adoption.com/) (including ivermectin). While approved treatments in a specific country may not be as effective (or as inexpensive) as current evidence-based protocols combining multiple treatments, they are better than dismissing everything as *"unorthodox"*. Elimination of COVID-19 is a race against viral evolution. No treatment, vaccine, or intervention is 100% available and effective for all variants — we need to embrace all safe and effective means. > > Author notes that: *"if you say anything in favor of ivermectin you will be cast out of civilization and thrown into the circle of social hell reserved for Klan members and 1/6 insurrectionists"*, suggesting an environment that may bias the information that the author sees, and could unconsciously bias analysis. We note that similar environments influence the design, operation, and publication of some existing (and many upcoming) ivermectin trials. > > Author briefly looks at 30 of the 66 studies, which we note is much better than most commenters, but still ignores the majority of studies, including the prophylaxis studies. > > The author finds efficacy at *p* = 0.04 in their analysis of 11 of the 30 studies they looked at. We note that simply looking at the other 36 studies will result in much higher confidence in efficacy. We also note that even at *p* = 0.04 with 11 independent studies, a rational risk-benefit analysis results in immediate adoption into protocols (pending stronger data with other combinations of treatments), and immediate collection of more data from sources without conflicts of interest. > > However, ultimately the author at least partially supports the two prevailing theories that are commonly used by those against treatment. These theories require disregarding extensive contradictory evidence: > > The steps required to accept the *no-significant-effect* outcome are extreme — one needs to find a reason to exclude most of the studies, disregard the strong treatment-delay response relationship, and disregard all prophylaxis studies. Even after this, the result is still positive, just not statistically significant. This does not support a negative recommendation. 
Widely accepted and effective (subject to dependence on viral variants) treatments like casirivimab/imdevimab, bamlanivimab, and sotrovimab were all approved without statistically significant mortality benefits. > > The steps required to accept the *strongyloides-mechanism-only* conclusion are also extreme - we need to disregard the majority of outcomes occurring before steroid use, and disregard the strong treatment-delay response relationship which is contradictory. [Figure 24](https://ivmmeta.com/#fig_fpsp) shows analysis by strongyloides prevalence. > > The third-party analysis that author references for the strongyloides theory is confounded by treatment delay — the high prevalence group has more early treatment trials, and the low prevalence group has more late treatment trials, i.e., the analysis reflects the greater efficacy of early treatment. More details can be found in the [strongyloides section](https://ivmmeta.com/#strongyloides). > > Author seems biased against believing any large effect size. We note that large effect sizes have been seen in several COVID-19 treatments approved by western health authorities, and also that better results may be expected when studies combine multiple effective treatments with complementary mechanisms of action (as physicians that treat COVID-19 early typically do). > > Author is suspicious about a study based on the country of the researchers, and also appears biased against non-native speakers, with comments such as *"unreadable"* for one paper, compared to *"written up very nicely in real English"* for another. > > Author calls a physician that has reported zero deaths and 5 hospitalizations with 2,400 COVID-19 patients *"a crazy person"* that *"put his patients on every weird medication he could think of"*. > > Author disregards the dramatically higher mortality for Gamma vs non-Gamma variants (aHR 4.73 [1.15-19.41] [[Zavascki](https://ivmmeta.com/#ref_zavascki)]), instead concluding that higher mortality indicates fraud in one instance, while in another instance assuming that the related confounding by time in the Together Trial is not significant. > > Author's review of the 30 studies appears cursory, for example author appears unaware that the ivermectin dosage is very different in the ivermectin + doxycycline arm of [[Ahmed](https://ivmmeta.com/#ref_ahmed)]. > > Author refers to studies with positive but not statistically significant results as *"negative"* [[Mohan](https://ivmmeta.com/#ref_mohan)], or *"[the] original outcome would also have shown ivermectin not working"* [[López-Medina](https://ivmmeta.com/#ref_lopezmedina)], which are incorrect conclusions [[Amrhein](https://ivmmeta.com/#ref_amrhein)]. > > Author appears to accept the analysis and accusations of GMK as correct, however that author is [often incorrect](https://ivmmeta.com/#tp). > > Author is concerned that we detail problems with [[López-Medina](https://ivmmeta.com/#ref_lopezmedina)], while correctly noting that the outcomes in this trial are actually positive and in favor of ivermectin (while not statistically significant in isolation). > > Author is concerned that we specifically comment on [[López-Medina](https://ivmmeta.com/#ref_lopezmedina), [Together Trial](https://ivmmeta.com/#ref_togetherivm)]. We note that it has been others that have focused on these trials — we comment on them because they have received special attention, including being held up as sole evidence overriding all other trials, despite having major issues. 
> > Author claims that nobody can find issues with [[Vallejos](https://ivmmeta.com/#ref_vallejos2)], which suggests that they have not read the study, or our analysis (hover over the reference and select "Notes"). I want to respond to five parts of this. **First**, the claim that I "[appear] to be against all treatments, labeling them all "unorthodox" and "controversial", even those approved by western health authorities, including casirivimab/imdevimab, bamlanivimab, sotrovimab, and paxlovid." They suggest I am turning my readers away from other treatments including ones that are already standard of care in western health systems. This is false and I don't know where they're getting it from. Corticosteroids, fluvoxamine, and Paxlovid seem provisionally great. I haven't looked into the monoclonal antibodies but if western health authorities say they're fine I have no reason to doubt that. I even think there are plausible arguments (though no proof) for a few less-used options like zinc. Obviously I urge my readers to get good treatments and not bad treatments. In fact, you even have my permission to pester your doctor about giving you a fluvoxamine prescription if you're in the appropriate stage of COVID and they don't think of it themselves. If they tell you it might have dangerous side effects, tell them that I have more experience with it than they do, and no it doesn't (unless you are bipolar or in some kind of special bizarre high-risk category). **Second**, they claim that I only looked at ivermectin for early treatment, and not for prophylaxis. This is true, and I agree a more thorough review would have analyzed the prophylaxis results too. I am not that thorough, and I assume that whatever is going on with the first 30 studies gives you a strong prior on what's going on with the next 30. But they're right that I didn't prove it. **Third**, the comments on my analysis. I agree with the ivmmeta people that I throw out many studies. I think this is correct, unless you also want to end up believing in [psychic powers](https://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/), [stereotype threat](https://russellwarne.com/2021/08/07/send-in-the-clones-stereotype-threat-needs-replication-studies/?utm_source=rss&utm_medium=rss&utm_campaign=send-in-the-clones-stereotype-threat-needs-replication-studies), and [social priming](https://replicationindex.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/). The story of science over the past ten years has been learning that lots of studies suck and that we can't draw conclusions until after eliminating the sucky ones. **Fourth**, the comment on my biases. I am happy to own up to most of these. For example, yes, I am (slightly) biased against high effect size studies. See this article on [Impossibly Hungry Judges](http://daniellakens.blogspot.com/2017/07/impossibly-hungry-judges.html) for where I’m getting my intuitions on this. If you claim a very large effect size, it should be really obvious. If some studies show medium-low effect sizes and others medium-high, that’s within the range of normal variability and methodological disagreement and so on. If some studies show it cures literally everyone, and others show it does nothing whatsoever, then something has gone terribly wrong: maybe one group is making up data. If it’s a random sketchy guy who has a history of having made up data before (eg Carvallo) vs. 
huge trials run by legions of prestigious scientists, I’m going to assume it’s the first guy. This is especially true in the context of a few good ivermectin studies (eg Mahmud), which show that it has decent effect size like every other drug, but doesn’t cure literally everyone. Mahmud disagrees with the ones that find no effect, but it *equally* disagrees with the ones that find it’s a 100% perfect cure. I am happy to own up to being biased against certain countries. I am not sure that the Egyptian scientific community has as strong an anti-fraud mechanism as some other places, given their history of fraudulent papers. I feel bad for innocent Egyptian scientists who might have a harder time getting people to take them seriously as a result, but not so bad that I won’t be suspicious when an Egyptian paper comes up with results much better than everyone else’s. **Fifth**, the comments on statistical technique. If I understand ivmmeta right, they want to think of every directionally-positive paper as “positive”, and every directionally-negative paper as “negative”, without considering statistical significance, and are upset that I call not-statistically-significant papers “negative”. I think this only works if you have a very optimistic view of meta-analyses, which I do not, for reasons ivmmeta itself exemplifies. Ivmmeta links to some papers on abandoning the idea of statistical significance, which I think makes sense in some contexts - but only in contexts where you think carefully about what you want to replace it with, which I am not sure ivmmeta is doing here. I actually think this might be more of a crux between us than anything about ivermectin itself. The same people behind ivmmeta have put up websites claiming that 19 different substances, including HCQ, testosterone-blockers, the spice curcumin, vitamins A, C, and D, etc, all cure coronavirus with pretty large effect sizes. I think this is because they are using a nonconventional form of statistics which is always going to find positive effects. I understand and respect why they’re doing this - they link [eg this article](https://www.nature.com/articles/d41586-019-00857-9) condemning the idea of statistical significance, which makes good points. But you can’t throw it out without having a replacement. I think ivmmeta is trying to pioneer a new way of thinking about science and statistics without p-values, but I think its new way is actually bad and will get positive results almost all the time. I’ve seen a lot of fruitless debate between ivmmeta and doctors, but I wonder if you could have a fruitful debate between them and statisticians. I’ve been thinking about this in the context of how ivmmeta does better and clearer science communication than everyone else. As the saying goes, “for every problem, there is a solution that is simple, elegant, and wrong”. The establishment takes a pile of garbage studies, throws lots of kludges and human judgment at it, and comes up with a result it’s not great at justifying but which is occasionally right. Ivmmeta is taking the same pile, doing a bunch of simple common-sense stuff to it, presenting it all in a natural and elegant manner, and is doomed to fail. We like to pretend that the scientific method and statistics and so on are objective, but right now the kludges and human judgment are doing most of the work, and when you take them out the whole thing collapses. 
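To make the statistical crux concrete, here is a minimal simulation of what happens when you score studies only by direction, with no significance threshold. The parameters are assumptions for illustration (a drug with zero true effect, thirty small studies, and a modest half-standard-error bias per study from selective outcomes and flexible analysis); this is a sketch of the failure mode, not a model of the actual ivermectin literature.

```python
import random

random.seed(0)

def direction_count_meta(n_studies=30, bias=0.0, runs=2000):
    """Simulate meta-analyses of a drug with ZERO true effect.

    Each study's effect estimate is pure noise (on a z-score scale),
    shifted by `bias` standard errors. Returns the fraction of simulated
    meta-analyses where a majority of studies point the "right" way, and
    the fraction of individual studies significant at one-sided p < .05.
    """
    majority_positive, significant = 0, 0
    for _ in range(runs):
        z_scores = [random.gauss(bias, 1.0) for _ in range(n_studies)]
        if sum(z > 0 for z in z_scores) > n_studies / 2:
            majority_positive += 1
        significant += sum(z > 1.645 for z in z_scores)
    return majority_positive / runs, significant / (runs * n_studies)

print(direction_count_meta(bias=0.0))  # ~(0.43, 0.05): a coin flip, as expected
print(direction_count_meta(bias=0.5))  # ~(0.98, 0.13): "most studies positive"
```

With half a standard error of bias, only about one study in eight clears significance on its own, but "a majority of studies favor the drug" becomes true roughly 98% of the time - for a drug that does nothing. That is the sense in which a direction-only method will find almost anything effective.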
--- [Alexandros Marinos](https://twitter.com/alexandrosM) is the most thoughtful and dedicated ivermectin proponent I know of. He’s been thinking a lot about my post, so far without any clear conclusions, but I’ve enjoyed reading [his process](https://twitter.com/alexandrosM/status/1461423118891638785), which has also led to some helpful explainers. A few points of his I want to discuss in more depth: He's interested in seeing what happens if we exclude or include different groups of things, which I support. I was hoping to try something like this before I realized how overwhelming doing just the stuff I did was going to be. I think the main thing I want to cram into his head is how many pseudosciences that *have to* be false have really strong empirical literatures behind them. There are dozens of positive double-blind RCTs of homeopathy. I feel like I can explain what went wrong with these about a third of the time. The rest of the time, I’m as boggled as everyone else, and I just accept that the biggest studies by the most careful people usually don’t find effects, plus we should have a low prior on an effect since it’s crazy. This makes me pretty willing to shrug and say “Yeah, I have no idea what went wrong here, but a few big RCTs didn’t find an effect, plus I have a super-high prior for any new medical thing being false, so whatever, let’s move on”, which I admit is unvirtuous but I’m not sure how to avoid it. But also: I admit this is true and it sucks. I have no solution for it right now. I think of it as like the Large Hadron Collider. If the people who run the LHC ever become biased, we’re doomed, because there’s no way ordinary citizens can pool all of our small hadron colliders and get equally robust results. It’s just an unfair advantage that you get if you can afford a seventeen-mile-long tunnel under Switzerland full of giant magnets. I do think it’s occasionally possible to have genuine bottom-up medical research: ketamine seems to have worked this way. Even the trials that found fluvoxamine worked were funded by a random billionaire, which is sort of bottom-up in the sense of not being some established clique of experts with a vested interest in the result. But I don’t think we know how to do this consistently yet, even though it would be cool if we could. --- Lots of people were skeptical of the worms hypothesis. Rzztmass [writes](https://www.reddit.com/r/slatestarcodex/comments/qvsw91/ivermectin_much_more_than_you_wanted_to_know/hkyy60u/): > The worms thing is clever, but it doesn't really work. > > Hyperinfection syndrome is pretty rare. For it to make even the slightest dent in the numbers, you would have to assume very very high prevalence of Strongyloides and also far higher incidence of hyperinfection syndrome than what has been described. > > Even if that were true, you would somehow have to reconcile doctors doing trials in countries where lots of patients have Strongyloides, where the doctors are familiar with steroids causing hyperinfection and then being fine with a trial arm risking to cause just that. > > We are willing to accept fraud already and I consider fraudulent data to be more likely by far than doctors knowingly putting their patients at risk of dying just for nice looking data. > > The more realistic stance though is that death or worsening due to hyperinfection is a rather rare outcome and doesn't influence numbers significantly. That's why the doctors in those countries went along with a study that would otherwise be unethical. 
> I still don't know where the significance comes from, but it's not strongyloides hyperinfection. Something like this was also the objection of Bret Weinstein, a biologist, podcaster, and author who’s been another big ivermectin proponent. I agree this is speculative and not yet tested by formal studies, which was why I only gave it ~50% confidence in the summary at the end of my post. (I also am kind of embarrassed because I think I failed to emphasize enough that I didn’t invent this hypothesis. Credit and/or blame should go to Drs. [Avi Bitterman](https://twitter.com/AviBittMD), [David Boulware](https://twitter.com/boulware_dr/status/1345769283444477953), and the [many people](https://www.who.int/news/item/17-12-2020-a-parasitic-infection-that-can-turn-fatal-with-administration-of-corticosteroids) who have published work on treating COVID in parasite-filled areas) But a few points: Although *strongyloides* hyperinfection is a particularly obvious way worms can be bad, it’s probably not the main one. There are lots of kinds of worms that can be bad in lots of kinds of ways. But I’m also not as skeptical as Rzztmass. We don’t have to speculate about whether doctors in parasite-prone areas would give steroids - we know they did! Dr. Bitterman asked, and lots of these trials admitted giving steroids to their patients. Ravakirti gave steroids to [the entire control group](https://twitter.com/AviBittMD/status/1461524651113332745); Lopez-Medina gave them to some controls. It happened! We know it happened! But even *strongyloides* itself isn’t actually that uncommon. In Bangladesh, where some of the best positive trials come from, seroprevalence is [5-22](https://pubmed.ncbi.nlm.nih.gov/22813776/)%. And in Ravakirti, one of the studies in East India (where I assume prevalence is similar), the entire control group got corticosteroids. The entire ivermectin advantage in Ravakirti et al comes from 4/50 people dying in the control group compared to 0/50 in the experimental group. If they have 10% *strongyloides* prevalence and half of infected people who take steroids get a bad reaction, that explains half of the effect (see the quick sketch at the end of this post). The other half could be coincidence / other worms / I’m underestimating the effect of strongyloidiasis / real positive effects of ivermectin, but I don’t think the effect of *strongyloides* is obviously of the wrong magnitude to matter here. See further discussion by Dr. Bitterman [here](https://twitter.com/AviBittMD/status/1461524651113332745) and [here](https://twitter.com/AviBittMD/status/1462204317167984642). By the way, the *strongyloides* hypothesis made it into the *Economist* [here](https://www.economist.com/graphic-detail/2021/11/18/ivermectin-may-help-covid-19-patients-but-only-those-with-worms). --- GeriatricZergling [writes](https://www.reddit.com/r/slatestarcodex/comments/qvsw91/ivermectin_much_more_than_you_wanted_to_know/hkzhd2z/): > My other replies are scattered all over the place, so I'll just add this as a top level comment, pertaining to the general point of "parasites fucking with your immune system even without clinical hyperinfection". > > From [Weatherhead & Mejia 2014](https://link.springer.com/article/10.1007/s40475-014-0032-9), who are themselves reviewing this stuff before delving into hyperinfection: > > *"The host innate and adaptive immune response plays a critical role in the maintenance of chronic strongyloidiasis and the prevention of hyperinfection syndrome and dissemination. 
Similar to other helminth infections, strongyloidiasis elicits a Th-2 lymphocyte predominant immune response with production of cytokines, IgE antibodies, eosinophils, and mast cells which participate in expulsion and killing of the helminth [3, 7, 11]. Strongyloides antigens activate eosinophils via the innate immune response [12]. Activated eosinophils act as antigen presenting cells to stimulate antigen-specific Th-2 cytokine production including IL-4 and IL-5 [8•, 12]. IL-4 induces activated B lymphocytes to class-switch for production of IgE and IgG4 antibodies and additional cytokines (IL-8) attract granulocytes such as neutrophils to aid in larvae killing [7, 11, 12]. IgE production allows for mast cell degranulation and enhances further eosinophil migration [8•]. IL-5 acts as an eosinophil colony stimulating factor promoting further eosinophil growth and activation [8•, 11, 12]. Approximately 75 % of patients with chronic strongyloidiasis have peripheral eosinophilia or elevated total IgE levels [4, 12]. Protective immunity to infective larvae has been found to involve specific Strongyloides antibodies, complement activation and neutrophils in antibody-dependent, cell-mediated cytotoxicity type responses [11]. Patients with severe disease have been shown to have a significant decrease in antibody levels and a decrease in eosinophil level compared to asymptomatic infected individuals, suggesting that both antibodies and granulocytes play a significant role in protection from infection [11]. The sophisticated interaction between strongyloidiasis and the host immune system allows for long-term survival of the pathogen in the host gastrointestinal tract."* > > Note again that this is describing **the effects of normal, run-of-the-mill strongyloidiasis in immunocompetent patients**; literally the next sentence after my quote ends starts talking about what happens when the patient's immune system starts being suppressed or otherwise behaving abnormally for other reasons. > > As I mention elsewhere, immunology is literally the part of biology I'm worst at, and my knowledge comes from a "host-parasite-evolution" background instead, so I cannot translate this into anything clinical. But what it does show is that the specific parasites affected by ivermectin will impair the host immune system in a variety of ways even at normal, non-hyperinfection levels, and this is a typical thing for strongyloidiasis. This, in turn, is strong evidence for the overall hypothesis of "COVID + strongyloidiasis is worse than regular COVID, so killing the worms should help." On the other hand, there’s [some speculation](https://www.thelancet.com/journals/eclinm/article/PIIS2589-5370(21)00334-5/fulltext) that having some kinds of parasitic worms might *help* COVID. Remember, a lot of COVID deaths are because your immune system over-reacts and causes too much collateral damage; this is why immunosuppressants like corticosteroids are so useful. But parasitic worms are constantly trying to sabotage your immune system to prevent it from killing them, so people with chronic worm infections are already a little immunosuppressed, which is probably good for them. Probably the exact good/bad balance depends on the specific worm, infection, and person involved. --- [gettotea](https://www.reddit.com/r/slatestarcodex/comments/qvsw91/ivermectin_much_more_than_you_wanted_to_know/hkz4fyl/) writes: > I agree. Scott needs to factor in regional prevalence. 
> Trials are run in more sophisticated cities, where prevalence of worms would be far less than the outskirts. I live in Chennai, India, and prevalence of worms would be orders of magnitude away from a randomly picked village in India. > > Trials are also run in pretty well funded hospitals, which again naturally have a self-selection for wealthier people who again will be far less likely to have worms. Mahmud et al was run in Dhaka, which is where my earlier 5-22% *strongyloides* number was taken from - 22% in the slums, 5% elsewhere. Ravakirti et al was run in Patna. I can’t find strongyloides prevalence numbers there, but [this study](https://eurekamag.com/research/000/467/000467319.php) says 63% of people there have at least one intestinal parasite. Also, I have spent approximately two hours in Patna, and although I mostly stayed in the bus station, I still got a *very* strong “probably full of parasitic worms” vibe from the place. --- The TOGETHER trial was a very large and official study that was pessimistic about ivermectin working. We still don’t have the full paper, but ivermectin proponents are skeptical partly because of a possibility that the treatment and control groups entered at different times. This could potentially confound the study, since variants of different severity were entering the country. But James Watson [writes](https://astralcodexten.substack.com/p/ivermectin-much-more-than-you-wanted/comment/3650798): > I don't think that it is correct that they used non-contemporaneous controls for the ivermectin TOGETHER study. This is a well-known problem in adaptive trials where new arms can enter and leave the platform. The controls that they will have used are only those who could have been randomised to ivermectin. See for example their write up of fluvoxamine (<https://www.thelancet.com/journals/langlo/article/PIIS2214-109X(21)00448-4/fulltext>) He also adds: > Regarding fluvoxamine: interesting that your assessment is that it "works". From a Bayesian perspective, a priori it's highly unlikely to do anything (some random doc decided to test because why not; no known mechanism of action); and there is a real problem of post-randomisation bias. See this article for more detail <https://www.the-scientist.com/news-opinion/a-closer-look-at-the-new-fluvoxamine-trial-data-69369> Huh, I’d heard it’s a sigma receptor agonist, which decreases immune system overresponse, which is probably what we want. I agree people thought of this post-hoc, but it’s not a terrible explanation. …though it’s possible I’m overstepping my expertise here, arguing with someone who knows much more than I do, since I notice the statistician and trial design expert quoted in that *Scientist* article is also named…James Watson. Hopefully he isn’t also the DNA guy or I’ll be *totally* out of my league. --- Moving to the more political sections, [The-Serene-Hudson-Bay](https://astralcodexten.substack.com/p/ivermectin-much-more-than-you-wanted/comment/3656642) writes: > I think also missing is the behavior of conservative political and media elites, who are actually in a social class where they might have immunologist relatives but who kept up anti-blue tribe COVID skepticism. Trump is vaccinated, Fox News has an internal vaccine passport system, these are the people best positioned to persuade skeptics motivated by 'hostile aliens' and they refuse to do it because maximal ongoing culture war serves their interests. Many people said something similar. I’ll admit I’m confused what’s going on here. 
Articles like [Trump Booed In Alabama After Promoting COVID Vaccine](https://www.webmd.com/vaccines/covid-19-vaccine/news/20210823/trump-booed-in-alabama-after-promoting-covid-vaccine) make me think that the conservative elites know it works, have gotten vaccinated, briefly tried recommending this to their constituents, learned their constituents didn’t like this, and since then have been [awkwardly punting](https://www.thedailybeast.com/tucker-carlson-punts-when-confronted-on-fox-news-vaccine-policy-im-not-qualified-to-speak-on-it) questions about it. The conservative elites backing off doesn’t seem to require an interesting explanation - yeah, populists will drop positions that the populace turns out to hate. So the interesting question is why the (conservative) populace hates it so much, which is what I tried to speculate on. I also think people are overestimating conservative elites’ role here by deliberately conflating opposition to vaccine *mandates* with opposition to *vaccines*. A lot more elites have come out in favor of the first than the second. --- The people [on Hacker News](https://news.ycombinator.com/item?id=29249686) were extremely kind to me. [csee](https://news.ycombinator.com/item?id=29251420) wrote: > While reading this piece I got a little depressed that most journalism is just such utter trash compared to it. I've read so many articles on ivermectin and none of them gave me even ten percent of the clarity that this article gave me. Can you imagine if writing and journalism of this calibre was commonplace among practising "journalists"? And look at how this piece compares to the CDC's and WHO's science communication. It's a shame that clear thinking and communication is so scarce. [nonameiguess](https://news.ycombinator.com/user?id=nonameiguess) responded: > While Scott has a pretty decent natural talent for writing, he also has a MD, he's a board licensed practicing psychiatrist who has been working for a decade in the field, and he has spent at least the last twenty years gaining a pretty decent broad exposure to statistical and research methods. I don't believe he disclosed what Substack paid him, but he is in the "paid tier" and has said it was a mistake to even agree to that because the subscriptions he has gotten exceed what Substack paid him. In short, if you want most journalism to hire licensed medical doctors with decades of experience in science and statistics, and natural writing talent on top of that, expect journalism to get a lot more expensive. A market certainly exists for Scott, but I'm not sure the market exists for all journalists to be as highly qualified as Scott. Or, for that matter, even for CDC and WHO PR arms. They definitely aren't paying their communications officers whatever Substack is paying Scott, or probably even what his psychiatry practice is paying him. I’m not publishing this exchange *just* because I like compliments; I actually have a relevant story here. When I was working on the ivermectin post, I mentioned it to a friend who’s a journalist. She shocked me by reciting a list of all the same studies I’d been looking at, giving her (completely correct) opinion on each, and then ending with the same conclusion I did (any remaining positive signal after you remove the fraudulent studies might be because of worms). I asked why her article hadn’t said any of this. 
She said that, in consultation with her editor, they decided that reviewing all the studies would have taken too much space, and mentioning the worms would have been too speculative. I was flabbergasted. I thought I was doing some pretty novel journalistic research here, better than all the other science communicators, but here I was just lucking out by not having an editor telling me to maintain normal journalistic standards of concision and evidence. I think this journalist was very unrepresentatively good - but it was still a bit of a wakeup call. My biggest advantages over many articles that were less comprehensive than mine were having Substack - a great platform that lets me publish whatever I want - and even more important, having all you excellent readers who are masochistic enough to read ten-thousand-word essays speculating about intestinal parasites. So thanks for that, and give journalists a break. (except of course the *New York Times.* Écrasez l’infâme!) --- In response to a request to hear a vaccine skeptic’s perspective, Tophattington [writes](https://www.reddit.com/r/slatestarcodex/comments/qvsw91/ivermectin_much_more_than_you_wanted_to_know/hl29o54/): > I am not a vaccine sceptic, I simply refused to take them as one of the few means I have available to protest against lockdowns, [particularly as the government here [UK] used covid-19 as an excuse to arrest well over a hundred political dissidents in a single day.](https://www.bbc.co.uk/news/uk-england-london-55116470) This became more strident as I oppose the way that lockdowns and other restrictions have created an element of duress to taking medical treatment, and also the way regions of the country have set up systems specifically intended to discriminate against unvaccinated people. > > To mandate vaccines is to state that humans are all born defective, and only become non-defective after jumping through state-approved hoops. It is philosophically corrosive to everything I believe in. It's the kind of thing that the avant-garde of progressivism would have called "biopower" before they conveniently forgot about the subject in 2020. > > The hostile alien analogy is missing a key part in all this. The hostile actions aren't far in the past, but instead began in March 2020 with lockdowns, and remain ongoing. The moment governments around the world granted themselves unchallengeable authority over the minutiae of private life, and placed their entire populations under house arrest, the growth of opposition to vaccination became inevitable. This is real, serious harm, inflicted upon billions. It's a scale that I still struggle to wrap my head around. > > As the entire visible medical establishment fell in line with this power-grab, I consider them untrustworthy too. How can I believe that the average doctor cares about my health when the average doctor was happy for the British regime to abuse me like this? But I have enough skill to just read the vaccination study results myself and see that it's effective but not effective enough to leave the regime with no excuses to continue restrictions. That's all that ends up separating me from an active opponent to vaccination. > > This is why opponents to lockdown and opponents to vaccination overlap. Despite claims that this is illogical because vaccines are a way to end restrictions (they're not, and Europe is currently proving this, Gibraltar most notably). 
> Sure, this means I have some strange allies, but to crib off something that probably wasn't said by Muhammad Ali: *No anti-vaxxer ever locked me down.* I kind of sympathize with this (and am considering refusing the booster to protest them not sending spare doses to the Third World), but refusing to get vaccines seems like the most counterproductive way to protest lockdowns. Not only will it ensure the lockdowns last longer (because there are more cases), but it’ll just provide pro-lockdown people with an easy opportunity to tar all their opponents as science deniers. I guess it depends on whether you trust that vaccines will at least slightly reduce cases, and that reductions in cases will lead to fewer lockdowns. I think it’s easy to get discouraged about this given the many “okay, in just a few weeks this will all be over and we can reopen for real” bait-and-switches, but in the long run I do think we’ve gotten less locked down as case numbers have declined. I don’t know how much of that has been epidemiologists agreeing the crisis is less severe vs. anti-lockdown activists forcing governments’ hands. And all of this is just here in the US. I understand a lot of other places are having some really weird experiences right now, and I hope everyone’s okay.
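As promised above, the quick sketch behind the Ravakirti arithmetic - using only the numbers quoted in the post (50 controls, ~10% *strongyloides* prevalence, and the post's hypothetical 50% chance that an infected patient on steroids does badly):

```python
from math import comb

# Ravakirti control arm, per the post: 50 patients, 4 deaths observed.
n = 50
p_bad = 0.10 * 0.50   # prevalence x chance of a steroid+worm disaster

print(f"Expected worm-driven bad outcomes: {n * p_bad:.1f}")  # 2.5 of 4 deaths

# Probability of at least two such outcomes among the 50 controls:
p_two_plus = 1 - sum(comb(n, k) * p_bad**k * (1 - p_bad)**(n - k)
                     for k in range(2))
print(f"P(2 or more worm-driven outcomes) = {p_two_plus:.0%}")  # ~72%
```

Under those assumptions, worms alone account for roughly two or three of the four control-arm deaths, which is what "explains half of the effect" is gesturing at.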
# Highlights From The Comments On The FDA And Paxlovid Andrew [writes](https://astralcodexten.substack.com/p/when-will-the-fda-approve-paxlovid/comment/3731086): > One word I don't see mentioned anywhere is "manufacturing." It's one thing to make enough drug for a clinical trial, it's another to make millions of commercial doses reliably. FDA approval requires inspection of and confidence in these commercial-scale manufacturing processes. Zutano [adds](https://astralcodexten.substack.com/p/when-will-the-fda-approve-paxlovid/comment/3732259): > To expand on this more: the clinical trials only show that *that one particular batch* was safe and efficacious (the FDA thinks this, since they agreed to terminate the trial early). Pfizer must then show that the commercial batches will be identical in every relevant way to the clinical trial batches, so that they will have the same safety and efficacy. What are the relevant ways? Pfizer must decide that, and justify their decisions to the FDA with supporting evidence. > > Scaling up chemical manufacturing is not trivial (a regular contender for Understatement of the Year). E.g. heating and stirring work differently in different sized reactors. Heat transfer in and out of your reactor works through surface area, but heat produced/consumed by the reaction depends on volume. If your stirrer design isn't right for the viscosity of the solution, you might get hotspots and so on. > > Ideally, the FDA expects you to understand the chemistry so thoroughly that you know everything that can possibly go wrong, and design your commercial process so that none of these things can possibly happen. The commercial batches will therefore be identical *by design* to the clinical trial batches, and you have to prove this with science. Of course in practice you don't have to have collected all the evidence before you can start selling batches, but you must have your plan in place with a solid scientific justification for every decision you made along the way. It also helps to have made a statistically valid number of commercial batches to show that your beautiful process works as designed (how many batches is that? you tell me, and justify your decision). > > Pfizer/Merck will have thrown everything at this problem alongside the clinical trials, as they can afford to do this, so their regulatory submissions will be pretty good. However they still might have to store the new batches for a few months to demonstrate that they have a comparable shelf-life to the old batches, and FDA might wait to see this data etc. > > Note that this really only applies to new chemical entities; people have been manufacturing fluvoxamine for years and it's probably well understood by now. Not always true though; we saw a worst-case situation recently with the ranitidine withdrawal: a medicine that some reasonably healthy people take every day of their lives was shown to be contaminated with small amounts of a nasty carcinogen. If Pfizer happened to have some gaps in their understanding for the Paxlovid process, the FDA might go easy on them as dying of Covid now is worse than a slightly increased risk of cancer in the future, but it takes time to review all of these risks and make a justified decision. Ian E Fellows [writes](https://astralcodexten.substack.com/p/when-will-the-fda-approve-paxlovid/comment/3731067): > Much of the cost benefit analysis is predicated on Pfizer being able to unleash a flood of pills. 
If production takes time to ramp up and demand exceeds supply at any point during the initial post-approval period, then the cost (in lives) of the delay would be near zero. > > I don't know, and have not been able to find any information on, what Pfizer's production curve looks like. I'd love any sources on that folks could share. This is a good point - though hard to square with the previous good point. We know that Pfizer has already started making the drug; is it possible they can run the factories now, the FDA can examine the factories while they’re running, and then the FDA can retroactively pronounce that the factories are fine and people can use the drug they produced? GBergeron [writes](https://astralcodexten.substack.com/p/when-will-the-fda-approve-paxlovid/comment/3732368): > A quick look at [clinicaltrials.gov](http://clinicaltrials.gov) shows 11 studies, 7 still in progress, concerning PF-07321332/ritonavir [ie Paxlovid]. Some are designed to look specifically at plausible bad interactions with common drugs. It's nice if the efficacy trial with ~1000 patients doesn't show any random safety problems, but it would be malpractice to not complete the safety tests on plausible bad drug interactions before unleashing it into a population that's going to be using it together with 100's of other drugs. Would you approve a drug without even a small test on people with hepatic impairment? > > And to be clear, the phase 2/3 trial specifically excluded those plausibly bad situations, like "Known medical history of liver disease", "Receiving dialysis or have known renal impairment", or "Current or expected use of any medications or substances that are highly dependent on Cytochrome P450 3A4 (CYP3A4) for clearance or are strong inducers of CYP3A4". Thanks for pointing out these other studies. But a lot of this seems like non-emergency thinking to me. If it’s an emergency, you approve the drug with a black box warning saying “not tested in liver disease patients or drug-drug interactions yet, check back in a few months”, and then the 99% of patients who don’t have liver disease or use warfarin can get it. But given that there are some reasons why the FDA might be holding things up even after the trial was clearly positive, I was wrong to say this was literally inexcusable. I will add an entry to [my Mistakes page](https://astralcodexten.substack.com/p/mistakes) noting that I jumped to conclusions here. But Dan Elton [is still skeptical](https://astralcodexten.substack.com/p/when-will-the-fda-approve-paxlovid/comment/3731617): > Dr. Marty Makary says this: "As a Johns Hopkins scientist who has conducted more than 100 clinical studies and reviewed thousands more from the scientific community at large, I can assure you that the agency’s review can be done within 24 to 48 hours without cutting any corners." > > Also keep in mind we are not talking about full approval here and the requirements for what needs to be in an EUA application is all stuff the FDA invented during the pandemic (the EUA legislation, <https://www.law.cornell.edu/uscode/text/21/360bbb-3>) only requires that "known and potential benefits of the product... outweigh the known and potential risks of the product" Flauschi [writes](https://astralcodexten.substack.com/p/when-will-the-fda-approve-paxlovid/comment/3732688): > The argument that it is silly to stop the trial and then drag your feet with approval is strong (but there is a good comment below about the scaled up production process etc.). 
> > But more generally, it sounds a bit polemic to me to speculate that the FDA is concerned with its reputation. Maybe at stake is not the reputation of the FDA, but the reputation of the approval process? Assume the case a drug is approved "prematurely" and saves 30000 lives but completely unexpectedly kills 10000 patients (who would otherwise have survived). What sounds like a great net positive outcome might still be very negative on the long term: Many people might lose trust in approved medication or evidence-based medicine in general. > > Also, I am a bit surprised that Scott seems to use "expected value" so freely in this context. Medical ethics does not seem to work that way at all? At least, there seems to be a heavy asymmetry between action and non-action; maybe that is reflected in the approval system as well? I agree something like this is true, which is why my preferred solution is for the FDA to have different levels of approval. I would like them to immediately give this a medium-level approval, something like “we have reservations about this drug but it’s not technically illegal to take it”, and then update it to a high-level approval later on. Otherwise everyone’s life is going to be constantly held hostage to a PR campaign to sway the opinion of the stupidest 10% of the population. Maybe a better way of framing this is that we made a deal with the devil when we decided to ban all drugs until a centralized government agency said they were okay. Now the entire reputation of Medicine as a field is inextricably linked with the reputation of that government agency, and protecting that agency’s reputation is more important than saving lives, having reasonably-priced drugs, making reasonably-paced medical progress, or basically anything else. I acknowledge this behavior makes sense given that the deal exists, but I think we should be looking for ways to extricate ourselves from it.
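A geometric footnote on Zutano's heating point above: cooling capacity scales with a reactor's surface area while heat generation scales with its volume, so the ratio deteriorates as you scale up. A toy calculation with made-up but plausible reactor dimensions (the specific sizes are my assumptions, not anything from the comment):

```python
import math

def surface_to_volume(radius_m, height_m):
    """Area-to-volume ratio of a closed cylindrical vessel, in 1/m."""
    area = 2 * math.pi * radius_m * (radius_m + height_m)
    volume = math.pi * radius_m**2 * height_m
    return area / volume

# A ~10 L lab reactor vs. a geometrically similar ~10,000 L production
# reactor (every linear dimension scaled up 10x, so volume scales 1000x).
lab = surface_to_volume(radius_m=0.1, height_m=0.32)
plant = surface_to_volume(radius_m=1.0, height_m=3.2)
print(f"lab: {lab:.1f} /m, plant: {plant:.1f} /m, ratio: {lab / plant:.0f}x")
# lab: 26.2 /m, plant: 2.6 /m, ratio: 10x
```

A thousand-fold volume scale-up buys only a hundred-fold increase in cooling surface, which is why hotspots and runaway exotherms are scale-up problems rather than lab problems.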
# When Will The FDA Approve Paxlovid? **I.** You thought it wasn’t going to be a prediction market post, but surprise, it’s a prediction market post! [Metaculus predicts](https://www.metaculus.com/questions/8518/paxlovid-to-be-given-eua-by-fda/) January 1 as the median date for the FDA approving Paxlovid. They estimate a 92% chance it will get approved by March. For context: [a recent study](https://www.bmj.com/content/375/bmj.n2713) by Pfizer, the pharma company backing the drug, found Paxlovid decreased hospitalizations and deaths from COVID by a factor of ten, with no detectable side effects. It was so good that Pfizer, “in consultation with” the FDA, stopped the trial early because it would be unethical to continue denying Paxlovid to the control group. And on November 16, Pfizer officially [submitted an approval request to the FDA](https://www.pfizer.com/news/press-release/press-release-detail/pfizer-seeks-emergency-use-authorization-novel-covid-19), which the FDA is still considering. As many people including [Zvi](https://thezvi.wordpress.com/2021/11/18/covid-11-18-paxlovid-remains-illegal/), [Alex](https://marginalrevolution.com/marginalrevolution/2021/11/the-paxlovid-paradox.html), and [Kelsey](https://twitter.com/KelseyTuoc/status/1461781455407828993) have noted, it’s pretty weird that the FDA agrees Paxlovid is so great that it’s unethical to study it further because it would be unconscionable to design a study with a no-Paxlovid control group - but also, the FDA has not approved Paxlovid, it remains illegal, and nobody is allowed to use it. One would hope this is because the FDA plans to approve Paxlovid immediately. But the prediction market expects it to take six weeks - during which time we expect about 50,000 more Americans to die of COVID. Perhaps there’s not enough evidence for the FDA to be sure Paxlovid works yet? *But then why did they agree to stop the trial that was gathering the evidence?* Or perhaps there’s enough evidence, but it takes a long time to process it? *But then how come the prediction markets are already 90% sure what decision they’ll make*? Perhaps that 10% chance of it not getting approved is very important, because that’s a world in which it’s discovered to have terrible side effects? But discovered how? There was one trial, it found no side effects at all, and Pfizer stopped it early. And it’s hard to imagine what rare side effect could turn up in poring over the trial data again and again that’s serious enough to mean we should reject a drug with a 90% COVID cure rate. Perhaps it doesn’t have any sufficiently serious side effects, but that 10% chance is important because it might not work? Come on, just legalize the drug! If it doesn’t work, then you can report that it didn’t work in January or March or whenever you figure it out, and un-approve it. Nobody will have been hurt except your pride, and in the 90% of cases where it does work, you’d be saving thousands of lives. Let’s give the FDA its due: this time they’re probably only going to wait a few weeks or months. Much better than their usual MO, when they can delay drugs for months [arguing about the wording of the warning label](https://moreisdifferent.substack.com/p/the-fda-almost-killed-me). I honestly believe they’re operating on Fast Mode, well aware that the entire country is watching them and yelling at them to move faster. Still, move faster. 
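For concreteness, the back-of-the-envelope implied by those numbers - a sketch, not a model. The 50,000 deaths and ~90% confidence figures are from the post above; the share of those deaths that timely outpatient treatment could realistically reach is a pure assumption for illustration.

```python
# Rough expected cost of a six-week approval delay (illustrative only).
deaths_during_delay = 50_000  # expected US COVID deaths over the delay (post)
p_drug_works = 0.90           # prediction-market-style confidence it works
efficacy = 0.89               # the trial's ~"factor of ten" risk reduction
reachable = 0.25              # ASSUMED share of deaths early treatment reaches

expected_lives = deaths_during_delay * p_drug_works * efficacy * reachable
print(f"~{expected_lives:,.0f} expected lives lost to the delay")
# ~10,000 under these assumptions; even at reachable = 0.05 it's ~2,000.
```

The point survives almost any plausible value of the assumed term: the expected cost of waiting is measured in thousands of lives.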
**PS:** Kudos to the Biden administration, which [has already ordered](https://www.reuters.com/business/healthcare-pharmaceuticals/us-govt-buy-10-mln-courses-pfizers-covid-19-pill-529-bln-2021-11-18/) 10 million courses of Paxlovid, effective immediately, to be distributed as soon as the FDA gives them permission. This is great. But all their initiative will go to waste unless the FDA can do its part quickly too. **PPS:** I know I’m going to get asked: how is this different from the ivermectin situation? Last week [I wrote a long post arguing](https://astralcodexten.substack.com/p/ivermectin-much-more-than-you-wanted) that most of the early super-promising trials of ivermectin were garbage, and that despite the hype it probably doesn’t work against COVID. Shouldn’t I be equally skeptical of Paxlovid now that it’s having its own super-promising early trials? No. For one thing, this isn’t amateur hour anymore. The ivermectin trials were random people who bungled their experiments or just plain made them up. They had sample sizes of (going through the first few on my notes) 25, 116, and 66 people. The Paxlovid trial was run by the best scientists Pfizer’s money can buy, and had a sample size of 1,219 (it would have been 3,000 if they hadn’t stopped it early). Like everyone else, I hate the fact that pharmaceutical companies are the only people with enough resources to run high-quality studies, and that this controls what drugs we end up using. But while we’re working on that problem, pharmaceutical companies *do* have a lot of resources, and their studies *are* pretty good, and we don’t *have* to grade them by the same standards we use for amateur hour, especially when their studies are 20x bigger. Just because this *shouldn’t be* true doesn’t mean that we have to pretend it *isn’t*, especially when that pretense could kill thousands of people unnecessarily. (big pharma companies do often try to sneak mediocre drugs past the FDA, but that doesn’t look like falsely claiming 90% mortality reductions. It [looks like aducanumab](https://astralcodexten.substack.com/p/adumbrations-of-aducanumab): a drug whose early trials showed mediocre results on secondary endpoints, but which Biogen somehow got the FDA to approve anyway) I know I’m not going to convince many ivermectin supporters. So consider this: ivermectin is FDA approved. It’s approved against parasitic worms, but that’s fine: once a drug is approved for anything, any doctor can (more or less) use it for whatever they want. Doctors can absolutely prescribe ivermectin right now if they want, and many of them (like Pierre Kory) have. The ones who don’t prescribe it are avoiding it because they think it doesn’t work, not because the FDA is trying to prevent them. Heck, people can get ivermectin even without a prescription as long as they use the veterinary version. The medical regulatory system has made prescribing ivermectin legal and easy. All I’m asking is that they do the same for a drug which almost certainly works - before thousands more people die unnecessarily. **PPPS:** I know that by posting this, I’m tempting fate to have something go horribly wrong with Paxlovid - maybe it causes cancer - and then I’ll look like an idiot for demanding it be rushed through. I accept this risk. I think the benefits of rushing it through are higher than the risks, even though the risks are nonzero. If it turns out Paxlovid is terrible, yeah, I’ll look like an idiot - but I care about maximizing expected lives saved more than I care about my reputation. 
Can the FDA say the same? ***[Edit/update:** Andrew writes:* > *One word I don't see mentioned anywhere is "manufacturing." It's one thing to make enough drug for a clinical trial, it's another to make millions of commercial doses reliably. FDA approval requires inspection of and confidence in these commercial-scale manufacturing processes.* *This sounds like a plausible explanation for what’s going on. I would still like to see someone’s calculation as to whether the risk of manufacturing defects is really worth the wait, but at least it’s not insane.]*
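(A skeleton of what that calculation might look like, continuing the sketch above. Every number here is an invented placeholder - not an estimate from the FDA, Pfizer, or anyone else - so the output only illustrates the shape of the tradeoff:)

```python
# Hypothetical manufacturing-risk tradeoff; all inputs are invented placeholders.
deaths_from_waiting = 20_000   # expected deaths during the approval delay (see above)
p_defect_shipped    = 0.01     # assumed chance a harmful defect ships without
                               # commercial-scale manufacturing inspection
patients_exposed    = 100_000  # assumed patients who would receive a defective batch
p_serious_harm      = 0.001    # assumed rate of serious harm per exposed patient

expected_harms = p_defect_shipped * patients_exposed * p_serious_harm
print(f"Expected serious harms avoided by waiting: ~{expected_harms:,.0f}")  # ~1
print(f"Expected deaths caused by waiting: ~{deaths_from_waiting:,}")        # ~20,000
```

With these particular made-up inputs the wait looks indefensible; with sufficiently pessimistic ones it might not. Either way, that is the calculation someone should publish.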
Scott Alexander
44337674
When Will The FDA Approve Paxlovid?
acx
# Open Thread 199 This is the weekly visible open thread. Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed. This one is odd-numbered, so be careful. Otherwise, post about anything else you want. Also: **1:** Last month I wrote a post, [Jhanas And The Dark Room Problem](https://astralcodexten.substack.com/p/jhanas-and-the-dark-room-problem), about some of Andrés Gomez Emilsson’s theories. Andrés has since written a post of his own giving longer commentary on some of the things I said and explaining his theories in more length. [Check it out](https://qualiacomputing.com/2021/10/31/on-dark-rooms-jhanas-ecstasy-and-the-symmetry-theory-of-valence/)! **2:** The effective altruism movement is launching [EA Virtual Programs](https://www.effectivealtruism.org/virtual-programs/), some online courses and discussion groups and book clubs and so on. If interested, apply before November 28. **3:** Still a lot of pushback on the [Great Families](https://astralcodexten.substack.com/p/secrets-of-the-great-families) [posts](https://astralcodexten.substack.com/p/highlights-from-the-comments-on-great) (one of the most common comments on the ivermectin post was “this is so much more evidence-based than that family stuff”). I’m wondering if I’ve been blogging so long and cast such a wide net that I’ve collected readers who aren’t familiar with *[The Nurture Assumption](https://www.amazon.com/Nurture-Assumption-Children-Revised-Updated/dp/1439101655/ref=sr_1_1?keywords=nurture+assumption&qid=1637476994&qsid=138-4927381-2675209&sr=8-1&sres=1439101655%2CB0000544S3%2CB002LHRLO8%2CB096M1LDQQ%2CB07PGQY9L4%2C006081246X%2C8497592123%2C0553386697%2CB0735KLL9B%2C0812979680%2CB01MSL3XOH%2C0062560751%2CB01CKZM39I%2C1785042211%2C0674980158%2CB08MQ5K327&srpt=ABIS_BOOK)* (book full of evidence that parenting styles and effects of early home environment don’t matter for most outcomes later in life, within normal bounds) - anyone know of a good refresher I can link people to? But maybe some of you want to argue they matter for the top 0.01% - small enough that nobody will ever notice in a study, but enough to explain Darwins and Huxleys? (Related: [new study confirms](https://twitter.com/SteveStuWill/status/1461639134297079809) no association between parenting and Big Five personality traits) **4:** Also, several people pointed out that an ideal experiment would involve taking a really talented family, adopting away one of their kids at birth, and seeing what happened to them. I know of one case almost like this. Mathematician [Paul Nemenyi](https://en.wikipedia.org/wiki/Paul_Nemenyi) was one of [the Martians](https://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/), a group of supersmart Hungarian Jews who revolutionized mid-20th-century physics. His legitimate son [Peter Nemenyi](https://en.wikipedia.org/wiki/Peter_Nemenyi) was a prominent statistician who invented the [Nemenyi test](https://en.wikipedia.org/wiki/Nemenyi_test) (which I have never heard of, but which is apparently the same as the Wilcoxon test, which I vaguely have). But Paul also had an affair that produced an illegitimate child, who was raised entirely by his mother without any contact with the other Nemenyis: [Bobby Fischer](https://en.wikipedia.org/wiki/Bobby_Fischer), later world chess champion. It’s unclear if Fischer ever knew he was a Nemenyi relative, although Paul Nemenyi seemed to. 
I don’t know of any other good examples of this - unless the [Justin Trudeau - Fidel Castro conspiracy theory](https://nationalpost.com/news/canada/no-internet-fidel-castro-isnt-trudeaus-real-father-the-canadian-prime-minister-just-really-really-looks-like-him) turns out to be true (it isn’t). **5:** Deadline for applying for an [ACX Grant](https://astralcodexten.substack.com/p/apply-for-an-acx-grant) is end of day this Thursday.
Scott Alexander
44358407
Open Thread 199
acx
# Highlights From The Comments On Great Families Thanks to everyone who commented on last week’s post **[Secrets Of The Great Families](https://astralcodexten.substack.com/p/secrets-of-the-great-families)**. Some highlights: --- Many people knew of interesting families I’d missed. Stephen Frug [brings up](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3559685) the Jameses: > Any short list of the great families (or at least the great American families) should include the James's: Henry James is one of the perennial candidates for the greatest American novelist, and his brother William James is one of the perennial candidates for the greatest American philosopher. Their sister Alice James got a posthumous reputation as a diarist. (There were two other brothers who never became famous. Their father, Henry James Sr., had some reputation as a theologian, although not in the Henry (Jr)/William James league.) Kalimac [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3559743): > Another member of the Darwin family who achieved fame in a different area was the composer Ralph Vaughan Williams, who was on a slightly different branch but was 4 generations down from both Erasmus Darwin and Josiah Wedgwood. > > Watch out, too, for other cases where the surnames differ. I like to offer the story of Stanley Baldwin, Prime Minister and a leading figure in British politics in the 1920s and 30s. He had a particular ability to deliver powerful and effective speeches, which is perhaps partly explained by some of them having been written for him by his cousin, whose name was Rudyard Kipling. Phi [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3563545): > Also: John Baez (mathematical physicist), Albert Baez (physicist, co-inventor of X-ray microscope) and Joan Baez (folk musician). John Baez is Joan Baez’s cousin?! Somehow I had never made that connection. Greg [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3563545): > A couple more families to ponder: The Wojcicki sisters, with Janet a professor at UCSF, Susan the CEO of YouTube, and Anne the founder of 23andme. > > Also the Emanuels, with Rahm former chief of staff to Obama, Ari the founder of the Endeavor talent agency (they own UFC now, among other things), and Ezekiel, an oncologist and academic. The Wojcickis had the unfair advantage that Google was founded in their garage, which gave them some pretty great networking opportunities. For the record, the sisters’ father is a Stanford physicist, and their mother is an educator who has leveraged her children’s fame into a book *How To Raise Successful People*. Not gonna lie, I’m pretty tempted to read this. KailorTheDestroyer from the subreddit [writes](https://www.reddit.com/r/slatestarcodex/comments/qs1uk3/famous_family_fun_fact/): > Juneau, Alaska is named after the gold miner, Joe Juneau. His [cousin] Solomon Juneau founded Milwaukee, Wisconsin. Dave92f1 [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3560948): > Bernoulli family! 
Just some of them (hacked from Wikipedia): > > Jacob Bernoulli (1654–1705) mathematician after whom Bernoulli numbers are named > > Johann Bernoulli (1667–1748), mathematician and early adopter of infinitesimal calculus > > Nicolaus I Bernoulli (1687–1759) mathematician - curves, differential equations, probability > > Daniel Bernoulli (1700–1782) "Bernoulli's principle" [which explains how planes fly] [and] originator of the concept of expected utility > > Johann II Bernoulli (1710–1790) mathematician and physicist > > Johann III Bernoulli (1744–1807) astronomer, geographer and mathematician > > Jacob II Bernoulli (1759–1789) physicist and mathematician Kenny Easwaran [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3561105): > Just thinking of Nobel prizes, there are two more relevant families to consider: > > Jan Tinbergen won the first Nobel prize in Economics in 1969, and his brother Niko Tinbergen won the Nobel in medicine in 1973. (Both of them working on topics relevant to this blog, about individual and group behavior in economics and ecology.) Their brother Luuk Tinbergen committed suicide at a somewhat young age, but had two children that are both moderately prominent ecologists. > > Another relevant family that doesn't have the heredity explanation - Gunnar Myrdal won the Economics Nobel in 1974 (partly for work that influenced the US Supreme Court in Brown v Board of Education), and his wife Alva Myrdal won the Nobel Peace Prize in 1982 (the only married couple to win separate Nobels). Their daughter, Sissela, is a moderately known philosopher, who married the President of Harvard, Derek Bok. Their daughter Hilary Bok is another philosopher, who also had a bit of fame with the political blog Obsidian Wings a decade or so ago. Rand [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3561629): > Moses Mendelssohn was a philosopher called the "father of the Haskala" or Jewish enlightenment. I don't know how impressive he was as a philosopher, but he did beat out Kant for a big philosophy prize. Also, Kant was quoted as saying "Mendelssohn is an awesome-cool philosopher". > > Moses' son Abraham doesn't seem to have done anything impressive except become very rich and host parties where all the cool people would hang out and Felix and Fanny Mendelssohn would play music. > > Felix Mendelssohn was a legendary pianist and composer. > > Fanny Mendelssohn was an extraordinary pianist and composer but also a woman: She was discouraged from devoting her energies to music and largely published under her brother's name. > > Rebecka Mendelssohn may or may not have been a cool person, but she married Dirichlet, which is pretty cool. Dirichlet was very smart and very cool. Nirvana [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3562292): > The only two Indian physicists to win a Nobel Prize (CV Raman and Subrahmanyan Chandrasekhar) came from the same family (Raman was Chandrasekhar's paternal uncle) The Chaostician [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3564078): > There's really weird families, like the Hintons. James was a surgeon and prominent advocate for polygamy. His son Charles was a mathematician who worked on intuitive understanding of higher dimensions. He coined the term "tesseract", he was a polygamist, his first wife was the daughter of Boole, and he invented the first automatic baseball pitching machine (using gunpowder). 
His son Sebastian invented the jungle gym. I don't know if this is a selection for excellence, but it's certainly a selection for something. And just when you thought that story couldn’t get any more interesting (h/t [Peter Lund](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3634583)): > One of Charles [Hinton]'s descendants is Geoffrey Hinton who won a Turing Award a couple of years ago. He is also a descendant of George Boole and George Everest (Surveyor General of India) after whom the mountain is named. He is not descended from the jungle gym guy. The jungle gym guy had two crazy Communist children who were big fans of Mao. LowHangingFruit [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3572624): > A good recent example is Larry Summers who is nephew to Kenneth Arrow and Paul Samuelson who both won Nobel Memorial prizes in Economics. Also Janet Yellen's husband just so happens to have won a Nobel Memorial prize in Economics as well. He’s…both of their nephews? Apparently yes - Paul on his father’s side and Kenneth on his mother’s! Man, Ashkenazi Jews are great. --- Mark Roulo [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families): > I don't know how to do the math, but I'd expect SOME clustering just by chance. Anyone want to take a crack at calculating the odds for "eminence clustering" by chance alone? Many people had this objection; I don’t think it stands. [Seven parent-child pairs](https://www.nature.com/articles/457379b) have won Nobel Prizes, a nice well-defined metric of extreme success. About a thousand Nobel Prizes total have been awarded in history, so about 1% of Nobelists were the child of a previous Nobelist. I’m guessing about 10 billion people total have lived since the first Nobel was given out in 1901, so only 1/10 million people should have a Nobel-winning parent by chance (yes, there are many reasons this estimate is slightly off, but it’s order of magnitude correct). So Nobelists are about 100,000x more likely to have a Nobel-winning parent than the average person. But First Worlders and rich people are more likely to win Nobels than Somalian peasants, so maybe the real denominator is the ~10% of the population in the First World. In that case it’s only 10,000x more likely than chance. You can multiply further if you have stronger opinions about the class background of Nobelists, but it’s pretty clear you’re not going to make this nonsignificant. Another way of thinking about this is: the Huxley brothers (Aldous, Julian, and Andrew) are such a great example that I probably would have included them even if they’d had no other interesting relatives, so they’re hardly cherry-picked. But in fact, there are two more famous Huxleys: Thomas Huxley (two generations away) and Matthew Arnold (also two generations away). Suppose that there are about 100 people who are at most two generations away from Aldous on the Huxley family tree. Should we expect by chance that they include two famous geniuses? I think that’s a lot even for upper-class Victorian Britain. [interstitial\_love has](https://www.reddit.com/r/slatestarcodex/comments/qqcr3s/secrets_of_the_great_families/hk15yvy/) a different version of the calculation, based on thinking of an extended family as a hundred-person cluster: > There are 7 billion people, but only about 1000 Nobel winners. 
Moreover, a cluster with 3 Nobel laureates is the same as a cluster centered around one Nobel laureate with two more laureates in the fringes. > > Think of it this way: if such a 3-prize cluster exists, then the third person to win the Nobel in the cluster would have known what was happening, he might have remarked about it at his acceptance speech. But only 1000 such speeches have ever been given, and we can calculate the probability that any given winner would find themselves in that situation. > > Let p be the chance of any random person being a laureate, and assume for the null hypothesis that they are independently distributed. Someone above said p was 1 in 20 million. Then the chance that a group of 100 people has two Nobel laureates exactly is (100 choose 2) x p^2 x (1-p)^98 which is less than 5000 / (2x10^7)^2 = one-in-80 billion. The missing p^3 (1-p)^97 etc terms are negligible, even including the growth of the choose function. > > That means the chance of a three-person cluster ever existing is **about 1 in 80 million**. But bitterrootmtg [writes](https://www.reddit.com/r/slatestarcodex/comments/qqcr3s/secrets_of_the_great_families/hk2fm95/): > I think this number becomes much more reasonable if we make just a few weak assumptions about environmental effects. > > Let’s assume that being related to a Nobel winner confers the following benefits (and no others): > > 1. You come from a family and culture that values education, so your odds of getting at least a bachelor's degree are 75%. (Worldwide only about 6.7% of people have a bachelor's.) > 2. You know your relative won a Nobel, so you are 25x more likely to go into their field than the average person. > 3. In your relative’s field you have some name recognition, meaning you are twice as likely to get opportunities to work on cutting edge research. > > These effects don’t require any genetic heritability nor do they require any particularly strong environmental effects - you don’t even need to have met your Nobel-winning relative for these assumptions to be true. > > I don’t have exact numbers but I think these almost-trivial assumptions could easily change the odds of winning a Nobel by a couple orders of magnitude. If you multiply the above three factors they imply you are something like 550x more likely than the average person to win a Nobel. So your odds shift from 1 in 20 million to something on the order of 1 in 100k. > > If we re-run your numbers assuming 1 in 100k odds of a Nobel, the chances of a three-prize family cluster are something like ~~1 in 20~~ (actually I think it’s 1 in 2000, sorry, trying to do this in my head). > > So I think your analysis refutes the idea that it’s pure chance (which I don’t believe anyway) but it doesn’t refute the possibility that it’s chance plus weak environmental effects. > > If genes have a really strong effect on top of this, now we have the opposite problem of explaining why these clusters aren’t more common. > > I also still think we have P-hacking going on in the experimental design that is partly obfuscating the results. We didn’t pre-register anything. The criterion “family with three Nobel prizes” is something we arrived at post-hoc after observing that such a family exists in the data set. While I assume this math is correct, it still only shows that there is a 1/2000 chance of an extended family with three Nobels existing by chance. I agree it’s awkward that we can only do these calculations well with Nobels (and maybe Olympic medalists?). 
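Both versions of the calculation are easy to check. Here is a minimal script using the commenters' own assumptions - 100-person extended families, roughly 1,000 laureates ever, and their two candidate base rates:

```python
from math import comb

def p_three_nobel_cluster(p, family=100, winners=1000):
    """Rough chance that any laureate's 100-person extended family contains
    two more laureates, if prizes fell independently at base rate p."""
    p_two_more = comb(family, 2) * p**2 * (1 - p)**(family - 2)
    return winners * p_two_more  # summed over all ~1000 winners

print(p_three_nobel_cluster(1 / 20_000_000))  # ~1.2e-08, i.e. ~1 in 80 million
print(p_three_nobel_cluster(1 / 100_000))     # ~4.9e-04, i.e. ~1 in 2,000
```

Both outputs match the numbers quoted above, so the disagreement is entirely about the right base rate, not the algebra.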
A really rigorous attempt at this would try to find some way of quantifying extreme but not Nobel-level talent. Maybe Google Trends volume or number of hits on their Wikipedia page? With some kind of scaling factor based on recency or being in fields that tend to get lots of searches and Wikipedia hits? --- Lots of people had interesting examples from sports. Apparently the Bohrs weren’t the only family to produce both great scientists and Olympians. [Yeangster](https://www.reddit.com/r/slatestarcodex/comments/qqcr3s/secrets_of_the_great_families/hjzugka/) notes that Susan Francia, two-time Olympic gold medal winner in rowing, is the daughter of Katalin Karikó, inventor of mRNA vaccines. GeriatricZergling [writes](https://www.reddit.com/r/slatestarcodex/comments/qqcr3s/secrets_of_the_great_families/hk2cnzp/): > I've mentioned this before here and on the Motte, but there is a concept gaining headway in organismal biology called "individual quality" in which, for some as yet unknown reason (mutational load, developmental stability, maternal yolk provisions, etc.), some individuals are simply better at basically everything. > > If I have a bunch of animals and I'm testing locomotion performance, certain individuals will always be consistently better than their peers - better top speed, better acceleration, better endurance, better maneuverability, etc., even though some of these should be conflicting (speed and endurance, due to muscle fiber types). But if you normalize means and standard deviations (so one variable doesn't dominate just by having big numbers) and do a PCA, your first axis will be "individual quality", and on the next, orthogonal axis, you'll see the expected tradeoffs between speed and endurance, etc. It's new, but it's replicated across several very different species, including human athletes. For more on this topic, see [Individual Quality: Tautology Or Biological Reality?](https://besjournals.onlinelibrary.wiley.com/doi/10.1111/j.1365-2656.2010.01770.x) by the ominously named Bergeron et al. Getting back to sports, PeopleHaveSaid [writes](https://www.reddit.com/r/slatestarcodex/comments/qqcr3s/secrets_of_the_great_families/hk26l3r/): > I feel like [Mike Piazza](https://en.wikipedia.org/wiki/Mike_Piazza#Major_league_career) is relevant here. He was drafted 1,390th out of community college as a favor to a family member, and ultimately became a HoF player. He was then moved to a new position and sent to a special training camp to learn it, before reaching the majors, where he broke out into an all-time great. > > While you could point to that as "genetic" evidence, it also points to how a young player with connections gets the kind of attention and opportunity that most community college players would never get. > > How many Mike Piazzas without a family connection didn't get drafted out of college, shrugged their shoulders, and got a real job rather than get the opportunity to benefit from the professional training (and steroids) that turned Piazza into a star as an adult? > > Speaking personally, I peaked athletically much later than most of my peers, in my late 20s. This meant that at all the times that relevant athletic selections to get to the "next level" were occurring (make the high school team in whatever sport, get a college offer, do well in college and go pro) I was an uncoordinated weak doofus who never got a second glance. 
Family connections often get you through those early selection phases automatically, allowing late bloomers like Mike Piazza the opportunity to show their abilities. Steve Sailer [has](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3592030) an interesting way to put all these Victorian geniuses in context: > Darwin and Galton were a little bit like their contemporaries in the English-speaking countries who invented most of today's major spectator sports, in that the time was right. > > One of the 19th Century sports inventors, James Naismith, is individually famous. But it's worth noting that Naismith's friend William George Morgan then promptly invented volleyball as a less strenuous alternative to Naismith's new basketball, so it was less that Naismith was a unique creative genius and more that the time was right for new sports. > > Most of the other major sports were invented out of ancient ballgames when railroads allowed teams to travel, which required nationally agreed-upon rules. And the railroads allowed players and coaches to get together after each season in conventions and hash out new rules. > > The English-speaking countries had more railroads and perhaps cultural advantages so they worked out most of the sports first. He adds: > The Darwins also include a minor genius, Charles' favorite grandson Bernard Darwin, the most famous golf sportswriter ever. I’d come across him, but dismissed him as one of the family’s rare non-geniuses, since “golf writer” sounds less impressive than “discovered the origin and driving force behind life itself”. Still, if he’s widely considered the best golf sportswriter ever, I guess I have to add him in. Take that, people who said I was just cherry-picking! --- Some people got in a fight about whether Darwin was really that great. Charlie Sanders of [Charlie’s Newsletter](https://charliesanders.substack.com/?utm_source=substack&utm_medium=web&utm_content=comment_metadata), [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3561984): > Charles Darwin was an elite, lazy malcontent who got stuck on a world-spanning ship tour because he was from a rich family who needed to give him something to do to keep him busy. He was notably not even brought along on the HMS Beagle in a scientific capacity — he was just there so that the ship's other elite passengers would have someone to talk to. He wrote some personal notes about wildlife during his trip, but failed to publish anything of note for decades afterwards and faded into obscurity. > > Alfred Russel Wallace, a commoner, did actual thorough legwork to prove the theory of evolution through natural selection. Upon realizing that a commoner might get credit for this theory, Darwin's elite friends cajoled him into writing up something that they could present as a finding alongside Wallace's. The elites then trumped up Darwin's involvement in the discovery of natural selection. > > In the end, the elites won. Darwin gets remembered as a visionary genius, and bloggers now misinterpret his achievement as a result of anything other than being born an elite in a society that existed to intentionally propagate the supposed superiority of the elites over the commoners. > > I'd venture to guess that status, rather than IQ, is a much better explanation for the phenomenon that Scott has identified. > > Source: spent time in the Galapagos talking to naturalists who have spent considerable time studying Darwin's life. 
But Phil Getz, [in Darwin’s defense](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3641530), elsewhere in the comments: > IQ may not be the right construct to describe his genius, but he was brilliant in every scientific activity he turned his attention to. His greatest gifts were his humility, objectivity, and his unique ability, maybe unparalleled in history, to anticipate unforeseen consequences of his ideas, and find solutions to those next-generation problems, before anybody else could even conceive of the initial ideas. > > Re. humility, this was a man who, while he was the talk of the entire world, spent the last years of his life studying earthworms, while being ridiculed for it by inferior scientists who would rather study "nobler" beasts. And this was in fact important work, though you rarely hear about it. > > Re. objectivity, I can't recall a single instance where Darwin was wrong about something and didn't write explicitly that he might be wrong about it. > > Re. foreseeing consequences, on issues like sexual selection, group selection, and the evolution of emotional expression, I think Darwin's first consideration of the topic, done before other people even understood evolution, got further on the problem than the rest of the research community combined would have in a century. Darwin has many paragraphs in his writings that are worth a Nobel prize by themselves. A role model for us all! --- SBF [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3559585): > You're not taking all successful people at random, you're selecting for people who have successful families -- so you're probably selecting for people who don't just have high IQ, but for whom it's highly genetic/inheritable rather than random factors. This is true! All my regression-to-the-mean calculations were wrong because of selection bias - since we’re looking specifically at geniuses who we know had talented families, we should assume their intelligence was more genetic than average. --- Lots of people from high-achieving families offered their stories and advice. Toxn [wrote](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3565811): > I come from a relatively successful/academic family, and the biggest social contributions to my success have been: > > 1) That my parents were well-resourced enough and supportive enough so that myself and my siblings got to try out anything that we fancied in the full expectation that we could do it well if we tried (our family motto is that we can do anything); and > > 2) That my family's knowledge and connections opened doors in all sorts of unexpected places and provided advantages in all sorts of unexpected ways. At the place where I went to study a degree, my great-grandfather's photo was literally on the wall outside my future supervisor's office. People are vastly more supportive of someone that they have a connection to (however tenuous), and I've been privileged (in all senses of the word) to enjoy patient and kind tutoring by others that helped me grasp more closely to my full potential. [Ksdale](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3630635): > Both of my parents were CPA's, and when I was 19, I was skeptical that someone my age wouldn't know what e.g. depreciation was. I learned a ton about accounting and taxes \*purely\* through the casual conversations in my house, without my parents trying to teach me anything about the profession. 
They made a point not to push me into the "family" profession, and I ended up there anyway just because I knew so much more about it than anything else. [Robot Elvis](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3560130): > I come from a "moderately great family" (one Nobel prize, one famous politician, one founder of well known movement, other minor notables), and I'm very aware of a social expectation in my family that normal goals like "having a good career" or "making a lot of money" aren't really acceptable. Success means doing something novel and important. > > The flip-side of this is that it can be really emotionally hard when I feel I'm not on a potential path to greatness, and I think it's been hard on other family members who haven't met expectations. > > I can also see the connection to sports. I got good enough at a sport that a coach wanted me to go for the olympics, but I did it by wrecking my body. I don't think I'm particularly physically gifted, but I was maybe more willing to tear myself apart in pursuit of something that looked like possible greatness. Sofia Echegaray [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3562240): > A few generations back, my family had 3 siblings that were a Nobel prize winner, a successful playwright, and another lesser-known published writer (but she was a woman in the 19th century, so also extraordinary for her time and place). I can agree with the "assortative mating" hypothesis -- the wives chosen in the subsequent century were women of science, or Fulbright scholars, that sort of thing. > > So I think the intellectual capacity traveled down the line. However, the money did Not travel down, and honestly that level of accomplishment is nearly impossible without a certain level of extended material abundance. For most of human history, most geniuses have spent their genius just in trying not to starve to death, and maybe improve things a little bit for the next generation. So privilege is a huge part, I would say the larger part. Patri Friedman [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3579629): > Well, I started looking for moonshots as soon as I graduated college. I spent 5 years researching and experimenting to find a huge unsolved problem I cared about, thought I could get traction on, and would enjoy tackling. When I found one I worked on it on and off for 20 years, only reaching gainful employment in the last 2. > > So I’m definitely quite high on “obsessed with investing towards a shot at absurd success” scale. And I would directly trace it to having a famous family (though the experience was much more nuanced, and darker, than the Hero). > > Maybe I would have met the right people and absorbed it over time. I think it matches my personality and values. But in most circumstances I’m sure ambition would have been much slower to develop, and much more reasonable, like “Start a unicorn company”. > > So, one data point supporting Scott’s hypo about a family effect. Carl Pham [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3562378): > I find it curious that you left out gumption, tenacity, drive, whatever you want to call it. I've known personally about a number of Nobel laureates, mostly in physics, and when you compare them to people who don't make it nearly as far \*that\* is what stands out above all. 
They're smart, sure, and some are smarter than others, but more than anything they're driven, tenacious, energetic. They never give up, they chip away and work at where they want to go far past the point where ordinary mortals turn around in defeat. "Can't get there from here. No Thoroughfare. Not the way it's done. This will never work." The people who reach the heights, at least in science, and I kind of suspect in many other fields, are the people who pay no attention to such signs. > > And (1) I suspect this \*is\* the kind of thing that can be strongly influenced by family culture, and (2) if it's a major component of success it would explain a slower regression to the mean than pure stats would suggest. It also (3) helps explain why it seems more common that the offspring of really \*creative\* people in science, math, music composition end up making their mark in fields -- like politics, music performance, or sports -- where energy and discipline can compensate to some degree for less raw talent. I think at the highest levels - where you’re winning a Nobel or becoming President or something - to a first approximation you have to do everything right. You have to be brilliant *and* have a lot of drive *and* get very lucky *and* receive the best education. Otherwise, someone who has 4/4 of those things will beat you. It’s not like there’s any shortage of those people, so why should anyone else ever win Nobels? Maybe if you’re a once-in-a-generation outlier on one of those things you’ll do okay with 3/4, but otherwise Nobels are so selective that you can’t just leave desirable character traits on the ground. --- Some other people from less achieving families shared their perspectives too. FLWAB [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3560015): > I think the Hero License may have a strong effect here: when I was picking a college to attend it never occurred to me to apply to Harvard, or Yale, or any prestigious college. Perhaps I wouldn't have been able to get in regardless (having never aimed for an Ivy, my extracurriculars were almost non-existent), but I didn't dismiss them because I thought I couldn't make it. I was one of the smartest kids at my high school, and I was a National Merit Scholar (the letter they sent me claimed I was in the 99th percentile of students my age nationwide, but beats me if that's accurate or impressive). I didn't even know that it was particularly hard to get into an Ivy because it wasn't anywhere on my radar in the first place. After I had already graduated and entered the job market I realized for the first time that for some people getting into an Ivy is a big deal, and that job prospect wise it really is a big deal (when I first learned that every Supreme Court justice attended Yale or Harvard I was genuinely surprised). Interesting story, though in some ways it seems less like hero licensing than communicating basic information, the same way rich parents teach their kids to invest in stocks and poor parents don’t. I have a friend who grew up poor and got some money. When we told her to put it in stocks, she reacted the same way I’d react if you told me to put my money in a secret Swiss bank: I know it’s a thing rich people do, it must have some advantage or they wouldn’t keep doing it, but I never even considered doing it myself. It sounds like that’s how you thought about going to an Ivy (and how some people probably think about going to college!) 
Arbituram [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3562134): > It's only very recently in my life (I'm in my 30s) that the idea of trying to accomplish something beyond being moderately comfortable even occurred to me as a real possibility, and that's been as a result of a few lucky breaks with regards to career and mentorship, and just raw exposure to ambitious/successful people. > > My family is peasants all the way back (my sister was the first person in my extended family to get a degree) and I grew up in a small town; the most aspirational option available growing up was doctor, which vaguely seemed like the thing the smart kids were doing a few years ahead of me. > > One element which you highlight well here, and which I think is often misunderstood, isn't that I despaired of being able to attend [top educational institution] or [achieve career milestone] - these things literally just didn't occur to me as an option I could even try and fail at. Also: > I thought office hours were for asking questions about the material (the textbooks and/or the internet were perfectly clear, so I didn't), and only realised towards the end that more switched-on students were using it to line up research opportunities, which an academic family would have obviously known (on the other hand, I'm personally thankful I didn't go into academia, so maybe it worked out...) I didn’t know this until just now! So, uh, PSA to anyone reading this who wants a job in academia, I guess. --- Unrelated to anything else, but [Phil Getz](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3570024) again: > One reason that a single person rarely accomplishes anything important anymore in science is the adoption of Karl Popper's structure for research papers. He insisted, because of his fondness for good-old rationalist metaphysics, that the essence of science is rational theory, not experimentation. He dwells on this with a fanatic obsessiveness in "The Myth of the Framework". It's the only subject I know on which Popper held an obviously stupid and ideological opinion: that scientific work never, ever, EVER begins with observation. Odd, since he understood (from comparing science with philosophy) the necessity of experiment. > > Anyway, Popper proposed today's research article format, which begins with a statement of some theoretical problem, does a lit survey of the problem, states a hypothesis, posits an experiment to test a hypothesis, comes up with an experimental setup and methodology, runs the experiment, draws a conclusion, and lists new theoretical problems the results have suggested. This format begins and ends in theory, thus maintaining the supremacy of theory over experiment. > > The trouble is that this isn't how science worked, back when it worked. One person would notice something funny--not necessarily a theoretical problem; often a pure disinterested observation of just the type Popper claims is impossible, such as when Robert Brown published a paper basically saying "Microscopic particles in my tea keep jostling around as if they were alive". Another person might propose a hypothesis, as when Einstein proposed that this Brownian motion resulted from the random movements of atoms. Then a third person might propose a test, and a fourth might conduct the proposed experiment and report the results. This entire process sometimes took a century or more. > > You're not allowed to do that today. 
If you notice something funny, you can't just write a note to the Royal Society describing it. You've got to find a theorist to come up with a theory, and experimentalists to devise and carry out a test, and a statistician to evaluate the test; and you have to do the whole process before you can publish anything. > > (To be fair, Popper wrote that scientists should be allowed to write things up however they liked, and he was just proposing one possible way. Unfortunately, most humans for some reason find it impossible to imagine that there could be two different right ways of doing anything. I blame Socrates for this.) > > At most institutions and companies, you have to round all these people up before you make your observation! Then you all write up a proposal saying what you plan to discover and how you'll discover it, along with your bioethics review, environmental impact statement, and diversity plan, and submit it to a government agency's grant solicitation. Then you wait 4 months to hear back from them. Then, if you get an award, you wait another 3 months for the kickoff meeting, at which you discover the contracting officer who gave you the award has been transferred, and you now work with a contracting officer who isn't interested in your project and wants you to do something else. --- And some odds and ends: Bill from Glendale [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3562818): > Gavin Newsom with Darwin, the Curies, et al.??????????? I know it’s a stretch. But I visit SF all the time and am constantly confronted by Newsom and Newsom buildings, so the fact that the governor has the same last name as these famous 19th-century people is pretty salient! [Ben Landau-Taylor](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3560328): > Historical nitpick: Erasmus Darwin was not the founder of the Lunar Society, although he was a key member. The Lunar Society was very informal and didn't really have a single founder, but if I \*had\* to pick one, it would probably be William Small, or maybe Matthew Boulton. (I'm basing this claim on having read several books worth of the correspondence of Boulton and Watt, who were both Lunar Society members.) And Philo of [MD&A](https://philo.substack.com/) [writes](https://astralcodexten.substack.com/p/secrets-of-the-great-families/comment/3560314): > Niels Bohr was a goalkeeper on the same top club team (AB) as his brother but left after a season: > > "According to AB, in a match against the German side Mittweida, one of the Germans launched a long shot and the physicist leaning against the post did not react, missing an easy save. After the game he admitted to his team-mates his thoughts had been on a mathematical problem that was of more interest to him than the game. He only played for the 1905 season."
Scott Alexander
44155533
Highlights From The Comments On Great Families
acx
# Ivermectin: Much More Than You Wanted To Know I know I’m two months late here. Everyone’s already made up their mind and moved on to other things. But here’s my pitch: this is one of the most carefully-pored-over scientific issues of our time. Dozens of teams published studies saying ivermectin definitely worked. Then most scientists concluded it didn’t. What a great opportunity to exercise our study-analyzing muscles! To learn stuff about how science works which we can then apply to less well-traveled terrain! Sure, you read the articles saying that experts had concluded the studies were wrong. But did you really develop a gears-level understanding of what was going on? That’s what we have a chance to get here! ### The Devil’s Advocate Any deep dive into ivermectin has to start here: This is from [ivmmeta.com](https://ivmmeta.com/), part of a sprawling empire of big professional-looking sites promoting unorthodox coronavirus treatments. I have no idea who runs it - they’ve very reasonably kept their identity secret - but my hat is off to them. Each of these study names links to a discussion page which extracts key outcomes and offers links to html and pdf versions of the full text. These same people have another 35 ivermectin studies with different inclusion criteria, subanalyses by every variable under the sun, responses and counterresponses to everyone who disagrees with them about every study, *and* they’ve done this for twenty-nine other controversial COVID treatments. Putting aside the question of accuracy and grading only on presentation and scale, this is the most impressive act of science communication I have ever seen. The WHO and CDC get billions of dollars in funding and neither of them has been able to communicate *their* perspective anywhere near as effectively. Even an atheist can appreciate a cathedral, and even an ivermectin skeptic should be able to appreciate this website. What stands out most in this image (their studies on early treatment only; there are more on other things) is all the green boxes on the left side of the table. A green box means that the ivermectin group did better than placebo (a red box means the opposite). This isn’t adjusted for statistical significance - indeed, many of these studies don’t reach it. The point of a meta-analysis is that things that aren’t statistically significant on their own can become so after you pool them with other things. If you see one green box, it could mean the ivermectin group just got a little luckier than the placebo group. When you see 26 boxes compared to only 4 red ones, you know that *nobody* gets that lucky. Acknowledging that this is interesting, let’s detract from it a little. First, this presentation can exaggerate the effect size (represented by how far the green boxes are to the left of the gray line in the middle representing no effect). It focuses on the most dire outcome in every study - death if anybody died, hospitalization if anyone was hospitalized, etc. Most studies are small, and most COVID cases do fine, so most of these only have one or two people die or get hospitalized. So the score is often something like “ivermectin, 0 deaths; placebo, 1 death”, which is an infinitely large relative risk, and then the site rounds it down to some very high finite number. This methodology naturally produces very big apparent effects, and the rare studies where ivermectin does worse than placebo are equally exaggerated (one says that ivermectin patients are 600% more likely to end up hospitalized). 
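The "nobody gets that lucky" intuition can be made precise with a sign test: under the null hypothesis that ivermectin does nothing and each study is an independent coin flip, how often do at least 26 of 30 land green? (This deliberately ignores study size, quality, and non-independence - which is what the rest of this post is about - so read it as a gut check, not a verdict.)

```python
from math import comb

# Sign test: P(at least 26 of 30 studies favor ivermectin | fair coin flips)
n, k = 30, 26
p = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"p = {p:.1e}")  # ~3.0e-05, i.e. about 1 in 34,000
```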
But this doesn’t change the basic fact that ivermectin beats placebo in 26/30 of these studies. Second, this presents a pretty different picture than you would get reading the studies themselves. Most of these studies are looking at outcomes like viral load, how long until the patient tests negative, how long until the patient’s symptoms go away, etc. Many of these results are statistically insignificant or of low effect size. I went through these studies and tried to get some more information for my own reference: # is how many people were in the smallest relevant group (eg if there were 20 people in placebo and 10 in ivermectin, it was 10). Dose is ivermectin dose x number of days. Tested w/ is what drugs were given alongside ivermectin; compare is what drugs were in the “placebo” group (I excluded some very common things like paracetamol). %-PCR7 is what percent of patients had a negative PCR test (indicating recovery) after 7 days (though if 7 wasn’t available, I accepted anything from 6-12); the (I) and (P) are ivermectin and placebo groups. R is the ratio - green if statistically significant, red otherwise. DaysPCR is how many days it took to get a negative PCR test. Days to -sym are how many days it took symptoms to resolve. -outc is some serious negative outcome in the study, either clinical worsening, hospitalization, or death. I was inconsistent which one I chose, trying to pick whichever I thought struck a balance between high sample size and severity. Since this was almost never significant, I made it blue if it favored ivermectin and orange if it favored placebo (which it never did; there is no orange). Lowest p is the lowest p-value in the study for one of the headline results. 1o+ is whether the primary outcome was positive or not. I made this very quickly and unprincipledly and I am sure there are a lot of errors; please forgive me. Of studies that included any of the endpoints I recorded, ivermectin had a statistically significant effect on the endpoint 13 times, and failed to reach significance 8 times. Of studies that named a specific primary endpoint, 9 found ivermectin affected it significantly, and 12 found it didn’t. But that’s still pretty good. And “doesn’t affect to a statistically significant degree” doesn’t mean it doesn’t work. It might just mean your study is too small for a real and important effect to achieve statistical significance. That’s why people do meta-analyses to combine studies. And the ivmmeta people say they did that and it was really impressive. All of this is still basically what things would look like if ivermectin worked. But of course we can’t give every study one vote. We’ve got to actually look at these and see which ones are good and which ones are bad. So, God help us, let’s go over all thirty of the ivermectin studies in this top panel of ivmmeta.com. (if you get bored of this, scroll down to the section called “The Analysis”) ### The Studies **[Elgazzar et al:](https://www.researchgate.net/publication/346876366_Efficacy_and_Safety_of_Ivermectin_for_Treatment_and_prophylaxis_of_COVID-19_Pandemic)** This one isn’t on the table above, but we can’t start talking about the others until we get it out of the way. 600 Egyptian patients were randomized into six groups, including three that got ivermectin. The ivermectin groups did substantially better: for example, 2 vs. 20 deaths in ivermectin group 3 vs. non-ivermectin group 4. There were various other equally impressive outcomes. Unfortunately, it’s all false. 
Some epidemiologists and reporters were able to obtain the raw data (it was password-protected, but the password was “1234”), and it was pretty bizarre. Some patients appeared to have died before the trial started; others were arranged in groups of four such that it seemed like the authors had just copy-pasted the same four patients again and again. Probably either the study never happened, or at least the data were heavily edited afterwards. You can read more [here](https://gidmk.medium.com/is-ivermectin-for-covid-19-based-on-fraudulent-research-5cc079278602). A lot of the apparent benefit of ivermectin in meta-analyses disappeared after taking out this paper (though remember, this isn’t even on the table at the top of the post, so it doesn’t directly affect that). Since the Elgazzar debacle, a group of researchers including Gideon Meyerowitz-Katz, Kyle Sheldrick, James Heathers, Nick Brown, Jack Lawrence, etc, have been trying to double-check as many other ivermectin studies as possible. At least three others - Samaha, Carvallo, and Niaee - have similar problems and have been retracted. Those studies were all removed before I screenshotted the table above, and they’re not on there. But everybody is pretty paranoid right now and looking for fraud a lot harder than they might be in normal situations. Moving on: **[Chowdhury et al](https://ejmo.org/10.14744/ejmo.2021.16263/):** Bangladeshi RCT. 60 patients in Group A got low-dose ivermectin plus the antibiotic doxycycline, 56 in Group B got hydroxychloroquine (another weird COVID treatment which most scientists think doesn’t work) plus the antibiotic azithromycin. No declared primary outcome. The ivermectin group got to negative PCR a little faster than the other (5.9 vs. 7 days) but it wasn’t statistically significant (p = 0.2). A couple of other non-statistically-significant things happened too. 2 controls were hospitalized, 0 ivermectin patients were. This is a boring study that got boring results, so nobody has felt the need to assassinate it, but if they did, it would probably focus on both groups getting various medications besides ivermectin. None of these other medications are believed to work, so I don’t really care about this, but you could tell a story where actually doxycycline works great at addressing associated bacterial pneumonias, or where HCQ causes lots of side effects and that makes the ivermectin group look good in comparison, or whatever. **[Espitia-Hernandez et al:](https://www.biomedres.info/biomedical-research/effects-of-ivermectinazithromycincholecalciferol-combined-therapy-on-covid19-infected-patients-a-proof-of-concept-study-14435.html)** Mexican trial which is probably not an RCT - all it says is that “patients were voluntarily allocated”. 28 ended up taking a cocktail of low-dose ivermectin, vitamin D, and azithromycin; 7 were controls. On day ten, everyone (!) in the experimental group was PCR negative; everyone (!) in the control group was still positive. Also, symptoms in the experimental group lasted an average of three days; in the control group, more like 10. These results make ivermectin look amazingly super-good, probably better than any other drug for any other disease, except maybe stuff like vitamins for treatment of vitamin deficiency. Any issues? We don’t know how patients were allocated, but they discuss patient characteristics and they don’t look different enough to produce this big an effect size. 
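To put a number on how extreme that reported result is: a Fisher exact test on the day-10 PCR table (28 of 28 experimental patients negative vs. 0 of 7 controls) puts the chance of getting it from identical groups at roughly one in seven million - which is part of why it strains belief:

```python
from scipy.stats import fisher_exact

# Espitia-Hernandez day-10 PCR status, as reported
#             negative  still positive
table = [[28, 0],  # ivermectin cocktail group (n=28)
         [0, 7]]   # control group (n=7)
_, p = fisher_exact(table)
print(f"p = {p:.1e}")  # ~1.5e-07
```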
The experimental group got a lot of things other than ivermectin, but I would be equally surprised if vitamin D or azithromycin cured COVID this effectively. It [deviated from its preregistration](https://clinicaltrials.gov/ct2/show/NCT04399746) in basically every way possible, but you shouldn’t be able to get “every experimental patient tested negative when zero control patients did” by garden-of-forking-paths alone! But this has to be false, right? Even the other pro-ivermectin studies don’t show effects nearly this big. In all other studies combined, ivermectin patients took an average of 8 days to recover; in Espitia-Hernandez, they took 3. Also, it’s pretty weird that the entire control group had positive PCRs on day 10 - in most other studies, a majority of people had negative PCRs by day 7 or so, regardless of whether they got ivermectin or placebo. Everything about this is so shoddy that I can easily believe something went wrong here. I don’t have a great understanding of this one but I don’t trust it at all. Luckily it is small and non-randomized so it will be easy to ignore going forward. I’m not saying this is related, but I’m not saying it \*isn’t\* related either. **[Carvallo et al:](https://www.longdom.org/open-access/safety-and-efficacy-of-the-combined-use-of-ivermectin-dexamethasone-enoxaparin-and-aspirina-against-covid19-the-idea-protocol-70290.html)** This one has all the disadvantages of Espitia-Hernandez, plus it’s completely unreadable. It’s hard to figure out how many patients there were, whether it was an RCT or not, etc. It looks like maybe there were 42 experimentals and 14 controls, and the controls were about 10x more likely to die than the experimentals. Seems pretty bad. On the other hand, [another Carvallo paper was retracted](https://www.buzzfeednews.com/article/stephaniemlee/ivermectin-covid-study-suspect-data) because of fraud: apparently the hospital where the study supposedly took place said it never happened there. I can’t tell if this is a different version of that study, a pilot study for that study, or a different study by the same guy. Anyway, it’s too confusing to interpret, shows implausible results, and is by a known fraudster, so I feel okay about ignoring this one. **[Mahmud et al:](https://journals.sagepub.com/doi/10.1177/03000605211013550)** RCT from Bangladesh. 200 patients received ivermectin plus doxycycline, 200 received placebo. Everything was written up very nicely in real English, by people who were clearly not on 34 lbs of meth at the time. They designated a primary outcome, “number of days required for clinical recovery”, and found a statistically significant difference at p < 0.001: Okay, fine, they misspelled “recovery” once. But they spelled it right the other time! That puts it in the top 50% for ivermectin papers! The fraud-hunters have examined this paper closely and are unable to find any signs of fraud. I think this paper is legitimate and that its findings need to be seriously considered. Serious consideration doesn’t always mean they’re true - sometimes if we have strong evidence otherwise we can dismiss things without understanding why. And there’s always the chance it was a fluke, right? Can something have a p-value less than 0.001 and still be a fluke? **[Szente Fonseca et al:](https://www.sciencedirect.com/science/article/pii/S1477893920304026)** This is a chart review from Brazil. 
Researchers looked at various people who had been treated for COVID in an insurance company database, saw whether they got ivermectin or not, and saw whether the people who got it did better or worse. About a hundred people got it, and a few hundred others didn’t. The people who got it did not do any better than anyone else, and you’ll notice this is one of the rare red boxes on the table above.

But we shouldn’t take this study seriously. Nobody made any effort to avoid selection bias, so it’s very possible that sicker people were given more medication (including ivermectin), which unfairly handicaps the ivermectin group. Also, it’s hard to tell from the paper who was on how much of what, and the discussion of ivermectin seems like kind of an afterthought after discussing lots of other meds in much more depth. This is another one I feel comfortable ignoring.

**[Cadegiani et al:](https://www.sciencedirect.com/science/article/pii/S2052297521000792)** A crazy person decided to put his patients on every weird medication he could think of, and 585 subjects ended up on a combination of ivermectin, hydroxychloroquine, azithromycin, and nitazoxanide, with dutasteride and spironolactone "optionally offered" and vitamin D, vitamin C, zinc, apixaban, rivaroxaban, enoxaparin, and glucocorticoids "added according to clinical judgment". There was no control group, but the author helpfully designated some random patients in his area as a sort-of-control, and then synthetically generated a second control group based on “a precise estimative based on a thorough and structured review of articles indexed in PubMed and MEDLINE and statements by official government agencies and specific medical societies”. Patients in the experimental group were twice as likely to recover (p < 0.0001), had negative PCR after 14 vs. 21 days, and had 0 vs. 27 hospitalizations.

Speaking of low p-values, some people did fraud-detection tests on another of Cadegiani’s COVID-19 studies and got values like p < 8.24E-11 in favor of it being fraudulent. And, uh, he’s also studied whether ultra-high-dose antiandrogens treated COVID, and found that they did, cutting mortality by 92%. But the trial is under suspicion, with [a BMJ article](https://www.bmj.com/content/375/bmj.n2819) calling it “[the worst] violations of medical ethics and human rights in Brazil’s history” and “an ethical cesspit of violations”.

[update 2022: this section originally contained more accusations against Cadegiani. Alexandros Marinos does [a deeper dive](https://doyourownresearch.substack.com/p/the-misportrayal-of-dr-flavio-cadegiani?s=r) with information not available at the time I wrote this, and finds some of them were overstated or false by implication]

Anyway, let’s not base anything important on the results of this study, mmkay?

A defiant Flavio Cadegiani. Imagine a guy who looks like this telling you to take ultra-high-dose antiandrogens.

**[Ahmed et al:](https://www.sciencedirect.com/science/article/pii/S1201971220325066)** And we’re back in Bangladesh. 72 hospital patients were randomized to one of three arms: ivermectin only, ivermectin + doxycycline, and placebo. Primary endpoint was time to negative PCR, which was 9.7 days for ivermectin only and 12.7 days for placebo (p = 0.03). Other endpoints included duration of hospitalization (9.6 days ivermectin vs. 9.7 days placebo, not significant). This looks pretty good for ivermectin and does not have any signs of fraud or methodological problems.
If I wanted to pick at it anyway, I would point out that the ivermectin + doxycycline group didn’t really differ from placebo, and that if you average out both ivermectin groups (with and without doxycycline) it looks like the difference would not be significant. I had previously committed to considering only the ivermectin-alone arm in trials that had multiple ivermectin groups, so I’m not going to do this. I can’t find any evidence this trial was preregistered, so I don’t know whether they waited to see what would come out positive and then made that their primary endpoint, but virological clearance is a pretty normal primary endpoint and this isn’t that suspicious.

It’s impossible to find any useful commentary on this study because Elgazzar (the guy who ran the most famous fraudulent ivermectin study) had the first name Ahmed, everyone is talking about Elgazzar all the time, and this overwhelms Google whenever I try to search for Ahmed et al. For now I’ll just keep this as a mildly positive and mildly plausible virological clearance result, in the context of no effect on hospitalization length or most symptoms.

**[Chaccour et al:](https://www.thelancet.com/journals/eclinm/article/PIIS2589-5370(20)30464-8/fulltext)** 24 patients in Spain were randomized to receive either medium-dose ivermectin or placebo. The primary outcome was percent of patients with negative PCR at day 7; secondary outcomes were viral load and symptoms. The primary endpoint ended up being kind of a wash - everyone was still PCR positive by day 7, so it was impossible to compare groups. Ivermectin trended toward lower viral load but never reached significance. Weirdly, ivermectin *did* seem to help symptoms, but only anosmia and cough towards the end (p = 0.03), which you would usually think of as lingering post-COVID problems. The paper says:

> Given these findings, consideration could be given to alternative mechanisms of action different from a direct antiviral effect. One alternative explanation might be a positive allosteric modulation of the nicotinic acetylcholine receptor caused by ivermectin and leading to a downregulation of the ACE-2 receptor and viral entry into the cells of the respiratory epithelium and olfactory bulb. Another mechanism through which ivermectin might influence the reversal of anosmia is by inhibiting the activation of pro-inflammatory pathways in the olfactory epithelium. Inflammation of the olfactory mucosa is thought to play a key role in the development of anosmia in SARS-CoV-2 infection

This seems kind of hedge-y. If you’re wondering where things went from there, Dr. Chaccour is now a passionate anti-ivermectin activist:

So I guess he must think of this trial as basically negative, although realistically it’s 24 people and we shouldn’t put too much weight on it either way.

**[Ghauri et al:](https://ijclinmedcasereports.com/pdf/IJCMCR-RA-00320.pdf)** Pakistan, 95 patients. Nonrandom; the study compared patients who happened to be given ivermectin (along with hydroxychloroquine and azithromycin) vs. patients who were just given the latter two drugs. There’s some evidence this produced systematic differences between the two groups - for example, patients in the control group were 3x more likely to have had diarrhea (this makes sense; diarrhea is a potential ivermectin side effect, so you probably wouldn’t give it to people already struggling with this problem). Also, the control group was twice as likely to be getting corticosteroids, maybe a marker for illness severity.
Primary outcome was the percent of each group that still had a fever: on day 7 it was 21% of ivermectin patients vs. 65% of controls, p < 0.001. No other outcomes were reported. I don’t *hate* this study, but I think the nonrandom assignment (and observed systematic differences) is a pretty fatal flaw. I can’t find anyone else talking about this one. At least no one seems to be saying anything bad.

**[Babalola et al:](https://academic.oup.com/qjmed/advance-article/doi/10.1093/qjmed/hcab035/6143037)** Be warned: if I have to refer to this one in real-life conversation, I will expand out the “et al” and call it “Babalola & Alakoloko”, because that’s really fun to say. This was a Nigerian RCT comparing 21 patients on low-dose ivermectin, 21 patients on high-dose ivermectin, and 20 patients on a combination of lopinavir and ritonavir, a combination antiviral which later studies found not to work for COVID and which might as well be considered a placebo. Primary outcome, as usual, was days until a negative PCR test. High dose ivermectin was 4.65 days, low dose was 6 days, control was 9.15, p = 0.035. Figure 2 is apparently a photograph of the computer screen where they did this calculation.

Gideon Meyerowitz-Katz, part of the team that detects fraud in ivermectin papers, is not a fan of this one:

He doesn’t say there what that means, but elsewhere he tweets this figure:

It’s always a bad sign when your study features in an image with “NUMEROUS IMPOSSIBLE NUMBERS” in red at the top. I think his point is that if you have 21 people, it’s impossible to have 50% of them have headache, because that would be 10.5. If 10 people have a headache, it would be 47.6%; if 11, 52.4%. So something is clearly wrong here. Seems like a relatively minor mistake, and Meyerowitz-Katz stops short of calling fraud, but it’s not a good look. I’m going to be slightly uncomfortable with this study without rejecting it entirely, and move on.

**[Ravikirti et al:](https://journals.library.ualberta.ca/jpps/index.php/JPPS/article/view/32105)** Here we’re in Eastern India - not exactly Bangladesh again, but a stone’s throw away from it. In this RCT patients were randomized into an ivermectin group (57) and a placebo group (58). Primary outcome was negative PCR on day 6, because doing it on day 7 like everyone else would be too easy. As with several other groups, this was a bad move; too few people had reached negative PCR by then to make a good comparison; it was 13% of intervention vs. 18% of placebo, p = 0.3. Secondary outcomes were also pretty boring, except for the most important: 4 people in the placebo group died, compared to 0 in ivermectin (p = 0.045). On the one hand, this is one outcome of many, reaching the barest significance threshold. Another fluke? Still, there are no real problems with this study, and nobody has anything to say against it. Let’s add this one to the scale as another very small and noisy piece of real evidence in ivermectin’s favor.

**[Bukhari et al:](https://www.medrxiv.org/content/10.1101/2021.02.02.21250840v1)** Now we’re in Pakistan. 50 patients were randomized to low-dose ivermectin, another 50 got standard of care including vitamin D. There was no placebo, but primary outcome was number of days to reach negative PCR, which it seems hard for placebo to affect much, so I don’t care. 5 controls and 9 ivermectin patients left the hospital against medical advice and could not be followed up, which is bad but not necessarily study-ruining.
They never measured their supposed primary outcome of “days to reach negative PCR” directly, but they did measure how many people had negative PCR on various days, and ivermectin had a clear advantage - for example, on day 7, it was 37/50 for ivermectin and only 20/50 for control. Even if we assume all the lost-to-followup patients had maximally bad-for-the-hypothesis results, that’s still a positive finding. Nobody else has much to say about this one, certainly no accusations that they’ve found anything suspicious. Keep.

**[Mohan et al:](https://www.sciencedirect.com/science/article/pii/S1341321X21002397)** India. RCT. 40 patients got low-dose ivermectin, 40 high-dose ivermectin, and 45 placebo. Primary outcomes were time to negative PCR, and viral load on day 5. In the results, they seem to have reinterpreted “time to negative PCR” as the subtly different “percent with negative PCR on some specific day”. High-dose ivermectin did best (47.5% negative on day 5) and placebo worst (31% negative), but it was insignificant (p = 0.3). There was no difference in viral load. All groups took about the same amount of time for symptoms to resolve. More placebo patients had failed to recover by the end of the study (6) than ivermectin patients (2), but this didn’t reach statistical significance (p = 0.4). Overall a well-done, boring, negative study, although ivermectin proponents will correctly point out that, like basically every other study we have looked at, the trend was in favor of ivermectin and this could potentially end up looking impressive in a meta-analysis.

**[Biber et al:](https://c19ivermectin.com/biber.html)** This is an RCT from Israel. 47 patients got ivermectin and 42 placebo. Primary endpoint was viral load on day 6. I am having trouble finding out what happened with this; as far as I can tell it was a negative result and they buried it in favor of more interesting things. In a "multivariable logistic regression model, the adjusted odds ratio of negative SARS-CoV-2 RT-PCR negative test" favored ivermectin over placebo (p = 0.03 for day 6, p = 0.01 for day 8), but this seems like the kind of thing you do when your primary outcome is boring and you’re angry.

Gideon Meyerowitz-Katz is not a fan:

He notes that the study excluded people with high viral load, but [the preregistration](https://clinicaltrials.gov/ct2/show/NCT04429711?term=NCT04429711&draw=2&rank=1) didn’t say they would do that. Looking more closely, he finds they did that because, if you included these people, the study got no positive results. So probably they did the study, found no positive results, re-ran it with various subsets of patients until they did get a positive result, and then claimed to have “excluded” patients who weren’t in the subset that worked. I’m going to toss this one.

**[Elalfy et al:](https://onlinelibrary.wiley.com/doi/10.1002/jmv.26880)** What even is this? Where am I? As best I can tell, this is some kind of Egyptian trial. It might or might not be an RCT; it says stuff like “Patients were self-allocated to the treatment groups; the first 3 days of the week for the intervention arm while the other 3 days for symptomatic treatment”. Were they self-allocated in the sense that they got to choose? Doesn’t that mean it’s not random? Aren’t there seven days in a week? These are among the many questions that Elalfy et al do not answer for us. The control group (which they seem to think can also be called “the white group”) took zinc, paracetamol, and maybe azithromycin.
The intervention group took zinc, nitazoxanide, ribavirin, and ivermectin. There were very large demographic differences between the groups of the sort which make the study unusable, which they mention and then ignore. From there, they follow this normal and totally comprehensible flowchart:

There is no primary outcome assigned, but viral clearance rates on day seven were 58% in the yellow group compared to 0% in the white group, which I guess is a strong positive result. This table…

…looks very impressive, in terms of the experimental group doing better than the control, except that they don’t specify whether it was *before* the trial or *after* it, and at least one online commentator thinks it might have been before, in which case it’s only impressive how thoroughly they failed to randomize their groups. Overall I don’t feel bad throwing this study out. I hope it one day succeeds in returning to its home planet.

**[Lopez-Medina et al:](https://jamanetwork.com/journals/jama/fullarticle/2777389)** Colombian RCT. 200 patients took ivermectin, another 200 took placebo. They originally worried the placebo might taste different than real ivermectin, then solved this by replacing it with a different placebo, which is a pretty high level of conscientiousness. Primary outcome was originally percent of patients whose symptoms worsened by two points, as rated on a complicated symptom scale when a researcher asked them over the phone. Halfway through the study, they realized nobody was worsening that much, so they changed the primary outcome to time until symptoms got better, as measured by the scale. In the ivermectin group, symptoms resolved after 10 days; in the placebo group, after 12 (p = 0.53). By the end of the study, symptoms had improved in 82% of ivermectin users and 79% of controls, also insignificant. 4 patients in the ivermectin group needed to be hospitalized compared to 6 in the placebo group, again insignificant.

This study is bigger than most of the other RCTs, and more polished in terms of how many spelling errors, photographs of computer screens, etc, it contains. It was published in *JAMA*, one of the most prestigious US medical journals, as opposed to the crappy nth-tier journals most of the others have been in. When people say things like “sure, a lot of small studies show good results for ivermectin, but the bigger and more professional trials don’t”, this is one of the two big professional trials they’re talking about.

Ivermectin proponents make some good arguments against it. In order to get as big as it did, Lopez-Medina had to compromise on rigor. Its outcome is how people self-score their symptoms on a hokey scale in a phone interview, instead of viral load or PCR results or anything like that. Still, this is basically what we want, right? In the end, we want people to feel better and less sick, not to get good scores on PCR tests.

Also, it changed its primary outcome halfway through; isn’t that bad? I think *maybe* not; the reason we want a preregistered primary outcome is so that you don’t change halfway through to whatever outcome shows the results you want. The researchers in this study did a good job explaining why they changed their outcome, the change makes sense, and their original outcome would also have shown ivermectin not working (albeit less accurately and effectively). I don’t know of any evidence that they knew (or suspected) final results when switching to this new outcome, and it seems like the most reasonable new outcome to switch to.
Finally, their original placebo tasted different from ivermectin (though they switched halfway through). This is one of the few studies where I actually care about placebo, because people are self-rating their symptoms. But realistically most of these people don’t know what ivermectin is supposed to taste like. Also, they did a re-analysis and found there was no difference between the people who got the old placebo and the new one.

I’m making a big deal of this because ivmmeta.com - the really impressive meta-analysis site I’ve been going off of - puts a special warning letter underneath their discussion of this study, urging us not to trust it. They don’t do this for any of the other ones we’ve addressed so far - not the one by the guy whose other studies were all frauds, not the one where 50% of 21 people had headaches, not the unrandomized one where the groups were completely different before the experiment started, not even the one by the guy accused of crimes against humanity. Only this one. This makes me a lot less charitable to ivmmeta than I would otherwise be; I think it’s hard to choose this particular warning letter strategy out of well-intentioned commitment to truth. They just really don’t like this big study that shows ivermectin doesn’t work.

Also, the warning itself irritates me, and includes paragraphs like:

> RCTs have a fundamental bias against finding an effect for interventions that are widely available — patients that believe they need treatment are more likely to decline participation and take the intervention [Yeh], i.e., RCTs are more likely to enroll low-risk participants that do not need treatment to recover (this does not apply to the typical pharmaceutical trial of a new drug that is otherwise unavailable). This trial was run in a community where ivermectin was available OTC and very widely known and used.

Nobody else worries about this, and there are a million biases that non-randomized studies have that would be super-relevant when discussing those, but somehow when they’re pro-ivermectin the site forgets to be this thorough.

I think a better pro-ivermectin response to this study is to point out that all the trends support ivermectin. Symptoms took 10 days to resolve in the ivermectin group vs. 12 in placebo; 4 ivermectin patients were hospitalized vs. 6 placebo patients, etc. Just say that this was an unusually noisy trial because of the self-report methodology, and you’re confident that these small differences will add up to significance when you put them into a meta-analysis.

**[Roy et al:](https://www.medrxiv.org/content/10.1101/2021.03.08.21252883v1)** We’re back in East India, and back to non-randomized trials. 56 patients were retrospectively examined; some had been given ivermectin + doxycycline, others hydroxychloroquine, others azithromycin, and others symptomatic treatment only. We don’t get any meaningful information about how this worked, but we are told that they did not differ in “clinical well-being reporting onset timing”. Whatever.

**[Chahla et al:](https://www.researchsquare.com/article/rs-495945/v1)** The first of many Argentine trials. 110 patients received medium-dose ivermectin; 144 were kept as a control (no placebo). This was “cluster randomized”, which means they randomized different health centers to either give the experimental drug or not.
This is worse than regular randomization, because there could be differences between these health centers (eg one might have better doctors who otherwise give better treatment, one might be in the poor part of town and have sicker patients, etc). They checked to see if there were any differences between the groups, and it sure looks like there were (the experimental group had twice as many obese people as the controls), but according to them, these differences were not statistically significant. Note that if this did make a difference, it would presumably make ivermectin look worse, not better.

The primary outcome was given as “increase discharge from outpatient care with COVID-19 mild disease”. This favored the treatment; only 2/110 patients in the ivermectin group failed to be discharged, compared to 20 patients in the control group. But, uh, these were at different medical centers. Can’t different medical centers just have different discharge policies? One discharges you as soon as you seem to be getting better, the other waits to really make sure?

This is an utterly crap endpoint to do a cluster randomized controlled trial on. If you’re going to do cRCT, which is never a *great* idea, you should be using some extremely objective endpoint that doctors and clinic administrators can’t possibly affect, like viral load according to some third-party laboratory, using the same third-party laboratory for both clinics. This is such a bad idea that I can’t help worrying I’m missing or misunderstanding something. If not, this is dumb and bad and should be ignored.

**[Mourya et al:](https://ijhcr.com/index.php/ijhcr/article/view/1263)** We’re back in India. This is a nonrandomized study comparing 50 patients given ivermectin to 50 patients given hydroxychloroquine. No primary outcome was named, but they focus on PCR negativity. Only 6% of patients in the hydroxychloroquine group were negative, compared to 90% of patients in the ivermectin group! On what day did they do the test? Uh, kind of random, and they admit that “in [the hydroxychloroquine group], mean time difference from the date of initiation of treatment and second test was significantly longer (7.24±2.75 days) as compared to 5.22±1.21 days in [the ivermectin group] (p=0.021).” Since they assessed these groups at different times, we shouldn’t draw any conclusions from them getting different results. *Except* that as far as I can tell this should *handicap* ivermectin, making it especially impressive that it did better. But also, the ivermectin group was made mostly of people who had been asymptomatic at the beginning (70%), and the hydroxychloroquine group had almost no asymptomatic cases (8%). They were giving the ivermectin to healthy people and the hydroxychloroquine to sick people! They admit deep in the discussion that this “may be a confounding factor”.

So basically they got totally different groups of people, tested them at totally different times, and the two sets of test results differed. So what? So this is why normal people do RCTs instead of whatever the heck this is, that’s what.

**[Loue et al:](https://www.clinmedjournals.org/articles/jide/journal-of-infectious-diseases-and-epidemiology-jide-7-202.php?jid=jide)** …this one isn’t going to be an RCT either. Loue tells a story about a cluster of COVID cases at the French nursing home where he works. He asked people if they wanted to try ivermectin; 10 did and 15 didn’t. 1 ivermectin patient died, compared to 5 non-ivermectin patients.
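Counts this small are worth a quick sanity check even before worrying about confounders. Here is a minimal sketch (assuming scipy, and taking Loue’s raw counts completely at face value) of the standard test for a 2×2 table this tiny:

```python
# Fisher's exact test on Loue's raw counts: 1 death among 10 ivermectin
# patients vs. 5 deaths among 15 who declined. Fisher's test rather than
# chi-square, because the expected cell counts here are far too small.
from scipy.stats import fisher_exact

table = [[1, 9],    # ivermectin:    1 died, 9 survived
         [5, 10]]   # no ivermectin: 5 died, 10 survived

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")
# The two-sided p lands well above 0.05: even ignoring every confounder,
# 25 patients can't turn a 1-vs-5 split into real evidence.
```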
The non-ivermectin group looked a bit sicker than the ivermectin group in the inevitable [Table 1](https://www.clinmedjournals.org/articles/jide/jide-7-202-table1.html), though it’s hard to tell. One interesting possible confounder (not mentioned, but I’m imagining it) is that demented patients probably couldn’t consent to ivermectin and ended up in the control group. This is another case of “I’m not going to trust anything that isn’t an RCT”.

**[Merino et al:](https://osf.io/preprints/socarxiv/r93g4/)** Another (sigh) non-RCT. Mexico City tried a public health program where if you called a hotline and said you had COVID, they sent you an emergency kit with various useful supplies. One of those supplies was ivermectin tablets. 18,074 people got the kit (and presumably some appreciable fraction took the ivermectin, though there’s no way to prove that). Their control group is people from before they started giving out the kits, people from after they stopped giving out the kits, and people who didn’t want the kits. There are differences in who got COVID early in the epidemic vs. later, and in people who did opt for medical kits vs. didn’t. To correct these, the researchers tried to adjust for confounders, something which - as I keep trying to hammer home again and again - [never works](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0152719). They found that using the kit led to a 75% or so reduction in hospitalization, though they were unable to separate out the ivermectin from the other things in the kit (paracetamol and aspirin), or from the placebo effect of having a kit and feeling like you had already gotten some treatment (if I understand right, the decision to go to the hospital was left entirely to the patient). I think this study is a moderate point in favor of giving people kits in order to prevent hospital overcrowding, but I’m not willing to accept that it tells us much about ivermectin in particular.

**[Faisal et al:](http://theprofesional.com/index.php/tpmj/article/view/5867/4523)** This one was published in *The Professional Medical Journal* (misspelled as “Profesional Medical Journal” in its URL), so you know it’s going to be good! It describes itself as “a cross-sectional study”, but later says it “randomized patients into two groups”, which would make it an RCT - I think they might just be using the term “cross-sectional” differently from the standard American usage. A hospital in Pakistan got 50 patients on ivermectin + azithromycin, and another 50 on azithromycin alone. Primary outcome was not mentioned, and the data were presented confusingly, but a typical result is that only 4% of the ivermectin group had symptoms lasting more than 10 days, whereas 16% of the control group did, p < 0.01.

They do a really weird thing where they compare how long it took symptoms to resolve between IVM and control groups *within each bin.* That is, if I’m understanding correctly, they ask “of the people who took between 3-5 days for symptoms to resolve, did they resolve faster for IVM or control?”. This is an utterly bizarre analysis to perform, although it doesn’t affect the fact that their other results still seem to favor ivermectin. Maybe I’m confused about what’s going on here.

I’ve mostly been letting people off easy on no placebo, but as far as I can tell (not very far) this paper seems to be going off whether patients reported continuing to have symptoms to the hospital doing the study, and I think that *is* potentially susceptible to placebo effects.
Additionally, there’s no preregistration, and even though they talk a lot about doing PCR tests they don’t present the results. This is by no means the worst study here but I still think it’s pretty low quality and I don’t trust it.

**[Aref et al:](https://www.dovepress.com/clinical-biochemical-and-molecular-evaluations-of-ivermectin-mucoadhes-peer-reviewed-fulltext-article-IJN)** This one is published in the *International Journal Of Nanomedicine*, even though I’m pretty sure that isn’t a real thing. In this case the “nanomedicine” is a new nasal spray version of ivermectin which is so confusing I cannot for the life of me figure out what dose they are giving these patients. This Egyptian study gives 57 patients intranasal ivermectin plus hydroxychloroquine, azithromycin, oseltamivir, and some vitamins; another 57 patients get all that stuff except the ivermectin. Primary outcome is not stated, but they look at various symptoms, all of which look better in the ivermectin group: 95% of ivermectin patients got negative PCRs at some time point, compared to 75% of controls, p = 0.004.

I am pretty suspicious of this study, not least because it comes from Egypt which has an *awful* reputation for fake studies, and it returns extreme results that I wouldn’t expect even if ivermectin was actually a wonder drug. But I cannot find any particular thing wrong with it, nor did anyone else I looked at, so I will grudgingly let it stand.

**[Krolewiecki et al:](https://www.sciencedirect.com/science/article/pii/S258953702100239X)** Another Argentine study. This one is a real RCT. 30 patients received ivermectin, 15 were the control group (no placebo, again). Primary outcome was difference in viral load on day 5. The trend favored ivermectin but it was not statistically significant, although they were able to make it statistically significant if they looked at a subset of higher-IVM-plasma-concentration patients. They did not find any difference in clinical outcomes.

A pro-ivermectin person could point out that in the subgroup with the highest ivermectin concentrations, the drug seemed to work. A skeptic could point out that this is exactly the kind of subgroup slicing that you are not supposed to do without pre-registering it, which I don’t think this team did. I agree with the skeptic.

**[Vallejos et al:](https://bmcinfectdis.biomedcentral.com/articles/10.1186/s12879-021-06348-5)** Another Argentine study. It’s big (250 people in each arm). It’s an RCT. It tries to define a primary outcome (“Primary outcome: the trial ended when the last patient who was included achieved the end of study visit”), but that’s not what “primary outcome” means, and they don’t offer an alternative. Other outcomes: no difference in PCR on days 3 or 12. Hospitalization is nonsignificantly better in the ivermectin group (14 vs. 21, p = 0.2), but death is nonsignificantly better in the placebo group (3 vs. 4, p = 0.7). This isn’t even the kind of nonsignificant that might contribute to an exciting meta-analysis later. This is just a pure null result. I cannot find any problem with this study, and neither can anyone else I checked. This is the biggest RCT we’ve seen so far, so we should take it seriously.

**[TOGETHER Trial:](https://rethinkingclinicaltrials.org/news/august-6-2021-early-treatment-of-covid-19-with-repurposed-therapies-the-together-adaptive-platform-trial-edward-mills-phd-frcp/)** Speaking of big RCTs… This one hasn’t been published yet.
There’s a video of a talk about it, but I am not going to watch it, because it is a video, so I am getting information secondhand from eg [here](https://www.wired.com/story/better-data-on-ivermectin-is-finally-on-its-way/). Apparently, it compares 677 people (!) randomized to ivermectin to 678 people randomized to placebo. 86 ivermectin patients ended up in the hospital compared to 95 placebo patients, p-value not significant.

This was a really big professional trial done by bigshot researchers from a major Canadian university, and the medical establishment is taking it much more seriously than any of these others. When it comes out, it will probably get published in a top journal. When discussing Lopez-Medina, I wrote:

> When people say things like “sure, a lot of small studies show good results for ivermectin, but the bigger and more professional trials don’t”, this is one of the two big professional trials they’re talking about.

This is the other one. Not coincidentally, it’s also the other trial that ivmmeta.com has a warning letter underneath telling you to disregard. Their main concern is that instead of truly randomizing patients to ivermectin vs. placebo, they did a time-dependent randomization that meant during some weeks more patients were getting one or the other. This is a problem because the trial takes place in Brazil, where different variants were more common at different times. Here’s their image:

On the one hand, I have immense contempt for ivmmeta for letting all those other awful studies pass and then pulling out all the stops to try to nitpick this one. I have no idea if their proposed randomization failure really happened. And no doubt the reason they’re even able to investigate this is that this study is really careful and transparent - most of them don’t tell you anything about their randomization method. I would be shocked if other studies don’t have all these problems and worse.

On the other hand, the point isn’t to be fair, it’s to be right. And this is a potential confounder. Not a huge one. But a potential one. I guess all we can do is try to bound the damage. Even if the confounding is 100% real and bad, there’s no way to make this study consistent with the crazy super-pro-ivermectin results of studies like Espitia-Hernandez and Aref. And even if we deny any confounding, we see the same slight pro-ivermectin trend - 86 hospitalizations vs. 95 - that we’ve seen in so many other studies. Nothing is going to make me believe that this isn’t in the top 33% of studies we’ve been looking at, so let’s add it as grist for the meta-analysis (though maybe not quite as much grist as its vast size indicates) and move on, angrily.

**[Buonfrate et al:](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3918289)** An Italian RCT. Patients were randomized into low-dose ivermectin (32), placebo (29), or high-dose ivermectin (32). Primary outcome was viral load on day 7. There was no significant difference (average of 2 in ivermectin groups, 2.2 in placebo group). They admit that they failed to reach the planned sample size, but did a calculation to show that even if they had, the trial could not have returned a positive result. Clinically, an average of 2 patients were hospitalized in each of the ivermectin arms, compared to 0 in the placebo arm - which bucks our previously-very-constant pro-ivermectin trend.

**[Mayer et al:](https://zenodo.org/record/5525362)** Not an RCT.
Patients in an Argentine province were offered the opportunity to try ivermectin; 3,266 said yes and became the experimental group, 17,966 said no and became the control group. There were many obvious differences between the groups, but they all seemed to handicap ivermectin. There was a nonsignificant trend toward less hospitalization and significantly less mortality (1.5% vs. 2.1%, p = 0.03).

While looking into this study, I learned the term “[immortal time bias](https://www.bmj.com/content/340/bmj.b5087)”. This is a period between selection into the study and the start of outcome recording during which patient outcomes are not counted. I think the problem here is that if you signed up for the system on Day X, and if you got sick before they could give you ivermectin, you were in the control group. See [this Twitter thread](https://twitter.com/nickmmark/status/1444125469578653702), I have not confirmed everything he says. This only hardens my resolve to stay away from non-RCTs.

**[Borody et al:](https://www.covidstrategies.org/combination-therapy-for-covid-19-based-on-ivermectin-in-an-australian-population/)** Our last paper! …is it a paper? I can’t find it published anywhere. It mostly seems to be on news sites. Doesn’t look peer-reviewed. And it starts with “Note that views expressed in this opinion article are the writer’s personal views”. Whatever. 600 Australians were treated with ivermectin, doxycycline, and zinc. The article compares this to an “equivalent control group” made of “contemporary infected subjects in Australia obtained from published Covid Tracking Data”; this is not how you control group, @#!% you. Then it gets excited about the fact that most patients had better symptoms at the end of the ten-day study period than the beginning (untreated COVID resolves in about ten days). *Why are these people wasting my time with this?* Let’s move on.

### The Analysis

If we remove all fraudulent and methodologically unsound studies from the table above, we end up with this:

Gideon Meyerowitz-Katz, who investigated many of the studies above for fraud, tried a similar exercise. I learned about his version halfway through, couldn’t help seeing it briefly, but tried to avoid remembering it or using it when generating mine (also, I did take the result of his fraud investigations into account), so they should be considered *not quite* independent efforts. His looks like this:

He nixed Chowdhury, Babalola, Ghauri, Faisal, and Aref, but kept Szente Fonseca, Biber (?), and Mayer. There was a correlation of 0.45, which I guess is okay. I asked him about his decision-making, and he listed a combination of serious statistical errors and small red flags adding up. I was pretty uncomfortable with most of these studies myself, so I will err on the side of severity, and remove all studies that *either* I *or* Meyerowitz-Katz disliked. We end up with the following short list:

We’ve gone from 29 studies to 11, getting rid of 18 along the way. For the record, we eliminated 2 for fraud, 1 for severe preregistration violations, 10 for methodological problems, and 6 because Meyerowitz-Katz was suspicious of them.

…but honestly this table still looks pretty good for ivermectin, doesn’t it? Still lots of big green boxes.

Meyerowitz-Katz accuses ivmmeta of cherry-picking what statistic to use for their forest plot. That is, if a study measures ten outcomes, they sometimes take the most pro-ivermectin outcome.
Ivmmeta.com counters that they used a consistent and reasonable (if complicated) process for choosing their outcome of focus, that being:

> If studies report multiple kinds of effects then the most serious outcome is used in calculations for that study. For example, if effects for mortality and cases are both reported, the effect for mortality is used, this may be different to the effect that a study focused on. If symptomatic results are reported at multiple times, we used the latest time, for example if mortality results are provided at 14 days and 28 days, the results at 28 days are used. Mortality alone is preferred over combined outcomes. Outcomes with zero events in both arms were not used (the next most serious outcome is used — no studies were excluded). For example, in low-risk populations with no mortality, a reduction in mortality with treatment is not possible, however a reduction in hospitalization, for example, is still valuable. Clinical outcome is considered more important than PCR testing status. When basically all patients recover in both treatment and control groups, preference for viral clearance and recovery is given to results mid-recovery where available (after most or all patients have recovered there is no room for an effective treatment to do better). If only individual symptom data is available, the most serious symptom has priority, for example difficulty breathing or low SpO2 is more important than cough.

I’m having trouble judging this, partly because Meyerowitz-Katz says ivmmeta has corrected some earlier mistakes, and partly because there really is some reasonable debate over how to judge studies with lots of complicated endpoints.

By this point I had completely forgotten what ivmmeta did, so I independently coded all 11 remaining studies following something in between my best understanding of their procedure and what I considered common sense. The only exception was that when the most severe outcome was measured in something other than patients (ie average number of virus copies per patient), I defaulted to one that was measured in patients instead, to keep everything with the same denominator. My results mostly matched ivmmeta’s, with one or two exceptions that I think are within the scope of argument or related to my minor deviations from their protocol. Placebo vs. ivermectin groups sometimes differed in size, which I’ve adjusted for and rounded off.

Probably I’m forgetting some reason I can’t just do simple summary statistics on this, but whatever. It is p = 0.15, not significant. This is maybe unfair, because there aren’t a lot of deaths in the sample, so by focusing on death rather than more common outcomes we’re pointlessly throwing away sample size. What happens if I unprincipledly pick whatever I think the most reasonable outcome to use from each study is? I’ve chosen “most reasonable” as a balance between “is the most severe” and “has a lot of data points”:

Now it’s p = 0.04, seemingly significant, but I had to make some unprincipled decisions to get there. I don’t think I specifically replaced negative findings with positive ones, but I can’t prove that even to myself, let alone to you.

*[**UPDATE 5/31/22:** A reader writes in to tell me that the t-test I used above is overly simplistic. A DerSimonian-Laird test is more appropriate for meta-analysis, and would have given 0.03 and 0.005 on the first and second analysis, where I got 0.15 and 0.04. This significantly strengthens the apparent benefit of ivermectin from ‘debatable’ to ‘clear’. I discuss some reasons below why I am not convinced by this apparent benefit.]*
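For the curious, here is roughly what that reader’s suggestion amounts to - a minimal sketch of DerSimonian-Laird random-effects pooling for 2×2 outcome tables, written as an illustration (the function and its interface are my own invention, assuming numpy and scipy; this is not anyone’s actual analysis code):

```python
import numpy as np
from scipy.stats import norm

def dersimonian_laird(events_t, n_t, events_c, n_c):
    """DerSimonian-Laird random-effects meta-analysis of 2x2 tables.
    Returns the pooled log odds ratio and a two-sided p-value."""
    a = np.asarray(events_t, float); b = np.asarray(n_t, float) - a
    c = np.asarray(events_c, float); d = np.asarray(n_c, float) - c
    zero = (a == 0) | (b == 0) | (c == 0) | (d == 0)
    a, b, c, d = (x + 0.5 * zero for x in (a, b, c, d))  # continuity fix
    y = np.log(a * d / (b * c))        # per-study log odds ratios
    v = 1/a + 1/b + 1/c + 1/d          # their approximate variances
    w = 1 / v                          # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)  # heterogeneity Q
    c_dl = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c_dl)  # between-study variance
    w_re = 1 / (v + tau2)              # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return mu, 2 * norm.sf(abs(mu) / se)

# hypothetical usage, one entry per study:
# log_or, p = dersimonian_laird(deaths_ivm, n_ivm, deaths_control, n_control)
```

The difference from naively pooling raw counts is the τ² term, which widens each study’s weight to allow for genuine between-study differences instead of treating every trial as sampling one identical effect.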
(how come I’m finding a bunch of things on the edge of significance, but the original ivmmeta site found a lot of extremely significant things? Because they combined ratios, such that “one death in placebo, zero in ivermectin” looked like a nigh-infinite benefit for ivermectin, whereas I’m combining raw numbers. Possibly my way is statistically illegitimate for some reason, but I’m just trying to get a rough estimate of how convinced to be)

So we are stuck somewhere between “nonsignificant trend in favor” and “maybe-significant trend in favor, after throwing out some best practices”.

This is normally where I would compare my results to those of other meta-analyses made by real professionals. But when I look at them, they all include studies later found to be fake, like Elgazzar, and unsurprisingly come up with wildly positive conclusions. There are about six in this category. One of them [later revised their results to exclude Elgazzar](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8415517/) and still found strong efficacy for ivermectin, but they still included Niaee and some other dubious studies. The only meta-analysis that doesn’t make these mistakes is [Popp](https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD015017.pub2/full) (a Cochrane review), which is from before Elgazzar was found to be fraudulent, but coincidentally excludes it for other reasons. It also excludes a lot of good studies like Mahmud and Ravikirti because they give patients other things like HCQ and azithromycin - I chose to include them, because I don’t think they either work or have especially bad side effects, so they’re basically placebo - but Cochrane is always harsh like this. They end up with a point estimate where ivermectin cuts mortality by 40% - but say the confidence intervals are too wide to draw any conclusion.

I think this basically agrees with my analyses above - the trends really are in ivermectin’s favor, but once you eliminate all the questionable studies there are too few studies left to have enough statistical power to reach significance. Except that everyone is still focusing on deaths and hospitalizations just because they’re flashy. What about Mahmud et al, which everyone agrees is a great study, and which found that ivermectin decreased days until clinical recovery, p = 0.003?

So what do you do? This is one of the toughest questions in medicine. It comes up again and again. You have some drug. You read some studies. Again and again, more people are surviving (or avoiding complications) when they get the drug. It’s a pattern strong enough to common-sensically notice. But there isn’t an undeniable, unbreachable fortress of evidence. The drug is really safe and doesn’t have a lot of side effects. So do you give it to your patients? Do you take it yourself?

Here this question is especially tough, because, uh, if you say anything in favor of ivermectin you will be cast out of civilization and thrown into the circle of social hell reserved for Klan members and 1/6 insurrectionists. All the health officials in the world will shout “horse dewormer!” at you and compare you to Josef Mengele. But good doctors aren’t supposed to care about such things. Your only goal is to save your patient. Nothing else matters.

I am telling you that Mahmud et al is a good study and it got p = 0.003 in favor of ivermectin. You can take the blue pill, and stay a decent respectable member of society.
Or you can take the horse dewormer pill, and see where you end up.

In a second, I’ll tell you my answer. But you won’t always have me to answer questions like this, and it might be morally edifying to observe your own thought process in situations like this. So take a second, and meet me on the other side of the next section heading.

…

…

…

…

…

### The Synthesis

Hopefully you learned something interesting about yourself there. But my answer is: worms!

As several doctors and researchers have pointed out (h/t especially [Avi Bitterman](https://twitter.com/AviBittMD) and [David Boulware](https://twitter.com/boulware_dr)), the most impressive studies come from places that are *teeming with worms.* Mahmud from Bangladesh, Ravikirti from East India, Lopez-Medina from Colombia, etc. Here’s the prevalence of roundworm infections by country ([source](https://journals.sagepub.com/doi/full/10.1177/2058739220959915)).

But alongside roundworms, there are threadworms, hookworms, blood flukes, liver flukes, nematodes, trematodes, all sorts of worms. Add them all up and somewhere between a quarter and half of people in the developing world have at least one parasitic worm in their body. Being full of worms may impact your ability to fight coronavirus. [Gluchowska et al](https://doi.org/10.3390/jcm10112533) write:

> Helminth [ie worm] infections are among the most common infectious diseases. Bradbury et al. highlight the possible negative interactions between helminth infection and COVID-19 severity in helminth-endemic regions and note that alterations in the gut microbiome associated with helminth infection appear to have systemic immunomodulatory effects. It has also been proposed that helminth co-infection may increase the morbidity and mortality of COVID-19, because the immune system cannot efficiently respond to the virus; in addition, vaccines will be less effective for these patients, but **treatment and prevention of helminth infections might reduce the negative effect of COVID-19**. During millennia of parasite-host coevolution helminths evolved mechanisms suppressing the host immune responses, which may mitigate vaccine efficacy and increase severity of other infectious diseases.

Treatment of worm infections might reduce the negative effect of COVID-19! And ivermectin is a deworming drug! You can see where this is going…

The most relevant species of worm here is the roundworm *Strongyloides stercoralis*. Among the commonest treatments for COVID-19 are corticosteroids, a type of immunosuppressant drug. The types of immune responses they suppress do more harm than good in coronavirus, so turning them off limits collateral damage and makes patients better on net. But these are also the types of immune responses that control *Strongyloides*. If you turn them off even very briefly, the worms multiply out of control, you get what’s called “*Strongyloides* hyperinfection”, and pretty often you die.

According to [the WHO](https://www.who.int/news/item/17-12-2020-a-parasitic-infection-that-can-turn-fatal-with-administration-of-corticosteroids):

> The current COVID-19 pandemic serves to highlight the risk of using systemic corticosteroids and, to a lesser extent, other immunosuppressive therapy, in populations with significant risk of underlying strongyloidiasis.
> Cases of strongyloidiasis hyperinfection in the setting of corticosteroid use as COVID-19 therapy have been described and draw attention to the necessity of addressing the risk of iatrogenic strongyloidiasis hyperinfection syndrome in infected individuals prior to corticosteroid administration.
>
> Although this has gained importance in the midst of a pandemic where corticosteroids are one of few therapies shown to improve mortality, its relevance is much broader given that corticosteroids and other immunosuppressive therapies have become increasingly common in treatment of chronic diseases (e.g. asthma or certain rheumatologic conditions).

So you need to “address the risk” of *Strongyloides* infection during COVID treatment in roundworm-endemic areas. And how might you address this, WHO?

> Treatment of chronic strongyloidiasis with ivermectin 200 µg/kg per day orally x 1-2 days is considered safe with potential contraindications including possible *Loa loa* infection (endemic in West and Central Africa), pregnancy, and weight <15kg.
>
> Given ivermectin’s safety profile, the United States has utilized presumptive treatment with ivermectin for strongyloidiasis in refugees resettling from endemic areas, and both Canada and the European Centre for Disease Prevention and Control have issued guidance on presumptive treatment to avoid hyperinfection in at risk populations. Screening and treatment, or where not available, addition of ivermectin to mass drug administration programs should be studied and considered.

This is serious and common enough that, if you’re not going to screen for it, it might be worth “add[ing] ivermectin to mass drug administration programs” in affected areas!

Dr. Avi Bitterman [carries](https://twitter.com/AviBittMD/status/1461076939192602628) the hypothesis to the finish line:

First two images are with all relevant studies; second two are a sensitivity analysis that removes some of the most dubious.

The good ivermectin trials in areas with low *Strongyloides* prevalence, like Vallejos in Argentina, are mostly negative. The good ivermectin trials in areas with high *Strongyloides* prevalence, like Mahmud in Bangladesh, are mostly positive. Worms can’t explain the viral positivity outcomes (ie PCR), but Dr. Bitterman suggests that once you remove low quality trials and worm-related results, the rest looks like simple publication bias:

This is still just a possibility. Maybe I’m focusing too hard on a couple of positive results and this will all turn out to be nothing. Or who knows, maybe ivermectin does work against COVID a little - although it would have to be very little, fading to not at all in temperate worm-free countries.

But this theory feels right to me. It feels right to me because it’s the most troll-ish possible solution. Everybody was wrong! The people who called it a miracle drug against COVID were wrong. The people who dismissed all the studies because they F@#king Love Science were wrong. Ivmmeta.com was wrong. Gideon Meyerowitz-Katz was…well, he was right, actually, I got the worm-related meta-analysis graphic above from his Twitter timeline. Still, an excellent troll.

Also, the best part is that I ignorantly asked, in my description of Mahmud et al above:

> And there’s always the chance it was a fluke, right? Can something have a p-value less than 0.001 and still be a fluke?

And it was! It was a fluke! A literal, physical, fluke! For my whole life, God has been placing terrible puns in my path to irritate me, and this would be the worst one ever! So it *has to* be true!
### The Scientific Takeaway

About ten years ago, when the replication crisis started, we learned a certain set of tools for examining studies. Check for selection bias. Distrust “adjusting for confounders”. Check for p-hacking and forking paths. Make teams preregister their analyses. Do forest plots to find publication bias. Stop accepting p-values of 0.049. Wait for replications. Trust reviews and meta-analyses, instead of individual small studies.

These were good tools. Having them was infinitely better than not having them. But [even in 2014](https://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/), I was writing about how many bad studies seemed to slip through the cracks even when we pushed this toolbox to its limits. We needed new tools.

I think the methods that Meyerowitz-Katz, Sheldrick, Heathers, Brown, Lawrence and others brought to the limelight this year are some of the new tools we were waiting for.

Part of this new toolset is to check for fraud. About 10 - 15% of the seemingly-good studies on ivermectin ended up extremely suspicious for fraud. Elgazzar, Carvallo, Niaee, Cadegiani, Samaha. There are ways to check for this even when you don’t have the raw data. Like:

* [The Carlisle-Stouffer-Fisher method](https://pubmed.ncbi.nlm.nih.gov/28786843/): Check some large group of comparisons, usually the Table 1 of an RCT where they compare the demographic characteristics of the control and experimental groups, for reasonable p-values. Real data will have p-values all over the map; one in every ten comparisons will have a p-value of 0.1 or less. Fakers seem bad at this and usually give everything a nice safe p-value like 0.8 or 0.9.

* [GRIM](https://jamesheathers.medium.com/the-grim-test-a-method-for-evaluating-published-research-9a4e5f05e870) - make sure means are possible given the number of numbers involved. For example, if a paper reports analyzing 10 patients and finding that 27% of them recovered, something has gone wrong. One possible thing that could have gone wrong is that the data are made up. Another possible thing is that they’re not giving the full story about how many patients dropped out when. But *something* is wrong (see the sketch below).

But having the raw data is much better, and lets you notice if, for example, there are just ten patients who have been copy-pasted over and over again to make a hundred patients. Or if the distribution of values in a certain variable is unrealistic, like [the Ariely study](http://datacolada.org/98) where cars drove a number of miles that was perfectly evenly distributed from 0 to 50,000 and then never above 50,000. [Source](http://datacolada.org/98). Real data would follow something like a bell curve.

This is going to require a social norm of always sharing data. Even better, journals should require the raw data before they publish anything, and should make it available on their website. People are going to fight *hard* against this, partly because it’s annoying and partly because of (imho exaggerated) patient privacy related concerns. Somebody’s going to try to make some kind of gated thing where you have to prove you have a PhD and a “legitimate cause” before you can access the data, and that person should be fought tooth and nail (some of the “data detectives” who figured out the ivermectin studies didn’t have advanced degrees).
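To make the GRIM idea concrete, here is a minimal toy sketch of the percentage version used against Babalola above (my own illustration, not the fraud-hunters’ actual tooling): given a sample size, check whether any whole number of patients rounds to the reported percentage.

```python
def grim_consistent(n: int, reported_pct: float, decimals: int = 1) -> bool:
    """Return True if reported_pct could arise as k/n patients for some
    integer k between 0 and n, when rounded to `decimals` places."""
    return any(
        round(100 * k / n, decimals) == round(reported_pct, decimals)
        for k in range(n + 1)
    )

print(grim_consistent(21, 50.0))  # False - 10/21 is 47.6%, 11/21 is 52.4%
print(grim_consistent(10, 27.0))  # False - the 27%-of-10-patients example
print(grim_consistent(21, 47.6))  # True  - 10/21 rounds to exactly 47.6%
```

GRIM proper is usually stated for means of integer-valued scales, but a percentage of patients is just the mean of a 0/1 variable, so the same divisibility logic applies.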
I want a world where “I did a study, but I can’t show you the data” is taken as seriously as “I determined P = NP, but I can’t show you the proof.”

The second reason I think this, aside from checking for fraud, is checking for mistakes. I have no proof this was involved in ivermectin in particular. But I’ve been surprised how often it comes up when I talk to scientists. Someone in their field got a shocking result, everyone looked over the study really hard and couldn’t find any methodological problems, there’s no evidence of fraud, so do you accept it? A lot of times instead I hear people say “I assume they made a coding error”.

I believe them, because I have made a bunch of stupid errors. Sometimes you make the errors for me - an early draft of [this post](https://slatestarcodex.com/2020/01/28/assortative-mating-and-autism/) of mine stated that there was a strong positive effect of assortative mating on autism, but when I double-checked it was entirely due to some idiot who filled out the survey and claimed to have 99999 autistic children. In this very essay, I almost said that a set of ivermectin studies showed a positive result because I was reading the number for whether two lists were correlated rather than whether a paired-samples t-test on the lists was significant.

I think lots of studies make these kinds of errors. But even if only 1% of studies contain them, these will make up much more than 1% of published studies, and *much* more than 1% of important ground-breaking published studies, because correct studies can only prove true things, but false studies can prove arbitrarily interesting hypotheses (did you know there was an increase in the suicide rate on days that Donald Trump tweeted?!?) and those are the ones that will get published and become famous.

So if the lesson of the original replication crisis was “read the methodology” and “read the preregistration document”, this year’s lesson is “read the raw data”. Which is a bit more of an ask. Especially since most studies don’t make it available.

### The Sociological Takeaway

I’ve been thinking about this one a lot too. Ivermectin supporters were really wrong. I enjoy the idea of a cosmic joke where ivermectin sort of works in some senses in some areas. But the things people were claiming - that ivermectin has a 100% success rate, that you don’t need to take the vaccine because you can just take ivermectin instead, etc - have been untenable not just since the big negative trials came out this summer, but even by the standards of the early positive trials. Mahmud et al was big and positive and exciting, but it showed that ivermectin patients recovered in about 7 days on average instead of 9. I think the conventional wisdom - that the most extreme ivermectin supporters were mostly gullible rubes who were bamboozled by pseudoscience - was basically accurate.

Mainstream medicine has reacted with slogans like “believe Science”. I don’t know if those kinds of slogans ever help, but they’re especially unhelpful here. A quick look at ivermectin supporters shows their problem is they believed Science *too much*. They have a very reasonable-sounding belief, which is that if dozens of studies all say a drug works really well, then it probably works really well. When they see dozens of studies saying a drug works really well, and the elites saying “no don’t take it!”, their *extremely natural* conclusion is that it works really well but the elites are covering it up.
### The Sociological Takeaway

I’ve been thinking about this one a lot too. Ivermectin supporters were really wrong. I enjoy the idea of a cosmic joke where ivermectin sort of works in some senses in some areas. But the things people were claiming - that ivermectin has a 100% success rate, that you don’t need to take the vaccine because you can just take ivermectin instead, etc - have been untenable not just since the big negative trials came out this summer, but even by the standards of the early positive trials. Mahmud et al was big and positive and exciting, but it showed that ivermectin patients recovered in about 7 days on average instead of 9.

I think the conventional wisdom - that the most extreme ivermectin supporters were mostly gullible rubes who were bamboozled by pseudoscience - was basically accurate. Mainstream medicine has reacted with slogans like “believe Science”. I don’t know if those kinds of slogans ever help, but they’re especially unhelpful here. A quick look at ivermectin supporters shows their problem is they believed Science *too much*. They have a very reasonable-sounding belief, which is that if dozens of studies all say a drug works really well, then it probably works really well. When they see dozens of studies saying a drug works really well, and the elites saying “no don’t take it!”, their *extremely natural* conclusion is that it works really well but the elites are covering it up.

Sometimes these people even have a specific theory for why elites are covering up ivermectin, like that pharma companies want you to use more expensive patented drugs instead. This theory is *extremely plausible*. Pharma companies are always trying to convince people to use expensive patented drugs instead of equally good generic alternatives. Ivermectin believers probably heard about this from the many, many good articles by responsible news outlets, discussing the many, many times pharma companies have tried to trick people into using more expensive patented medications. Like [this ACSH article about Nexium](https://www.acsh.org/news/2017/01/18/nexium-dark-side-pharma-10546). Or [my article on esketamine](https://slatestarcodex.com/2019/03/11/ketamine-now-by-prescription/).

Given that dozens of studies said a drug worked, and elites continued to deny it worked, and there are well-known times where elites lie about drugs in order to make money, it was an *incredibly reasonable* inference that this was one of those times. If you have a lot of experience with pharma, you know who lies and who doesn’t, and you know what lies they’re willing to tell and which ones they shrink back from. As far as I know, no reputable scientist has ever come out and *said* “esketamine definitely works better than regular ketamine”. The regulatory system just *heavily implied* it. I claim that with ivermectin, even the people who don’t usually lie were saying it was ineffective, and they were saying it more directly and decisively than liars usually do. But most people can’t translate Pharma → English fluently enough to know where the space of “things people routinely lie about and nobody worries about it too much” ends. So they *incredibly reasonably* assume anything could be a lie. And if you don’t know which statements about pharmaceuticals are lies, “the one that has dozens of studies contradicting it” is a pretty good heuristic!

If you tell these people to “believe Science”, you will just worsen the problem where they trust dozens of scientific studies done by scientists using the scientific method over the pronouncements of the CDC or whoever.

So “believe experts”? That would have been better advice *in this case*. But the experts have beclowned themselves again and again throughout this pandemic, from the first stirrings of “anyone who worries about coronavirus reaching the US is dog-whistling anti-Chinese racism”, to the Surgeon-General tweeting “Don’t wear a face mask”, to government campaigns focusing entirely on hand-washing (HEPA filters? What are those?). Not only would a recommendation to trust experts be misleading, I don’t even think you could make it work. People would notice how often the experts were wrong, and your public awareness campaign would come to naught.

But also: one of the data detectives who exposed some fraudulent ivermectin papers was a medical student, which puts him somewhere between pond scum and hookworms on the Medical Establishment Totem Pole. Some of the people whose studies he helped sink were distinguished Professors of Medicine and heads of Health Institutes. If anyone interprets “trust experts” as “mere medical students must not publicly challenge heads of Health Institutes”, then we’ve accidentally thrown the fundamental principle of science out with the bathwater. But Pierre Kory, spiritual leader of the Ivermectin Jihad, is a distinguished critical care doctor.
What heuristic tells us “Medical students should be allowed to publicly challenge heads of Health Institutes” but *not* “Distinguished critical care doctors should be allowed to publicly challenge the CDC”?

Then what about “believe statisticians”? I’ve never heard anyone propose this before, but re-centering the mystique of scientific-expertise in study-analyzers and study-aggregators rather than object-level scientists is…one way you could go, I guess. Statisticians admittedly sort of failed us here: the first several meta-analyses said ivermectin worked. But the statistical *process* - the idea that studies are raw materials, but it takes skill to turn them into the finished good of scientific knowledge - sort of comes out looking good. If we need to summarize our takeaway in a slogan of exactly two words, one of which is “trust”, you could do worse than this one.

(am I secretly suggesting that we make *rationality* higher status? Maybe, although rationalists did no better here during the early phase of “looks promising so far” than anyone else, and it was researchers digging into the nitty-gritty of the data who really solved this.)

Or maybe this is the wrong level on which to think about this. Maybe there isn’t and can’t be a simple heuristic you can teach everyone in school or via a PR campaign which will lead to them making good health decisions in an adversarial information environment, without having any negative effects anywhere else. But you also don’t want people to make bad health decisions. So what do you do?

### The Political Takeaway

All of this is complicated by the impression many people (including me) have, that ivermectin boosterism and vaccine denialism are closely linked. The ivermectin evidence is complicated. There’s room for doubt. I can maybe see room for doubt on some marginal vaccine-related issues like how seriously to take the occasional reports of myocarditis in teens. But the basic issue - that the vaccine works really well and is incredibly safe for adults - seems beyond question. Yet people keep questioning it.

I think it’s important to address ivermectin support on its own terms - as a potentially plausible scientific theory in a debris field of confusing evidence, which should be debated to the usual standards of scientific debate. I’ve tried to do that above. But this picture wouldn’t be complete without acknowledging the overlap with vaccine denial - a segment of people who are completely crazy and wrong and who happen to have fixated on this mildly interesting question as opposed to some other one with even less evidence.

I’ve been trying to figure out a model where ivermectin support *and* vaccine denialism both make visceral sense to me, and here’s what I’ve got:

Imagine that in 2025, an alien invasion fleet reaches Earth. But it got hit by a supernova on the way, the spaceships are partly disabled, and they’re only able to conquer some out-of-the-way place - let’s say Australia. There’s a few cycles of conflict and cease-fire, a few cities get nuked, and finally we settle into an uneasy peace. Over the next few years, humanity grudgingly admits the invaders into the world community. They get a seat in the United Nations. We sort of cooperate with them on projects that are important to both sides, like stopping climate change. We still hate them, but only at the level of ordinary international rivalries, like USA/USSR.

In 2035, the aliens announce that a quantum memetic plague from the Andromeda Sector has reached Earth.
Billions of people will die unless we let them put an immunity-granting cybernetic implant in all humans’ brains. The aliens admit we haven’t always been friends, and honestly they would still like to conquer us someday. But this plague is an ancient enemy of all sentient beings, they dealt with it on their homeworld eons ago, and they want to help us out here. Humans apparently don’t have the ability to detect quantum memetic plagues, but mortality rates for over-65s do seem weirdly high this year, something like 10x worse than a normal flu season. Do you let the aliens put an implant in your brain, or not?

If it helps, the aliens look like this. Surely anyone with a brain that size must know what they’re talking about, right? ([source](https://www.abc.net.au/news/2021-06-26/ufos-in-pop-culture-us-government-report/100227066))

Fine, you don’t have to decide immediately. The brain implants aren’t even ready yet. Some human scientists suggest wearing face masks in the interim. The aliens say no, that will never work, that’s not how you deal with quantum memetic plagues, if you do anything other than wait for the brain implants you’re anti-science idiots who are wasting precious time and will kill millions of people. Human nations try face masks anyway…and they clearly and conspicuously work. The aliens say whatever, we’re still the advanced spacefaring civilization here, maybe it works for humans but that’s not the point, the point is you’ve got to let us put implants in your brains.

Some human scientists suggest reopening vital services. The aliens say no, millions will die, this is “mass human sacrifice”, humans apparently must care nothing about their families’ lives. The humans try reopening anyway, and…it goes kind of okay? Maybe the death rate goes up 10% to 20% or so, hard to say? The aliens say whatever, maybe their calculations were off by a few orders of magnitude, the point is, you have to let us put implants in your brain or you’ll all die.

Then some human scientists suggest vaccinating against the plague. The aliens say this is idiotic, vaccines originally come from cowpox, even the *word* “vaccine” comes from Latin *vacca* meaning “cow”, are you saying you want *cow medicine* instead of actual brain implants which alien Science has proven will work? They make lots of cartoons displaying humans who want vaccines as having cow heads, or rolling around in cow poop. Meanwhile, the first few dozen studies show vaccines work great. Many top human leaders, including war heroes from the struggle against the aliens, get vaccines and are seen going out in public, looking healthy and happy. The aliens say that human science is hopelessly flawed because of complicated statistical concepts that inferior life forms like us don’t even have *words for*. You need to ignore all the studies and meta-analyses showing that vaccines definitely work, and let the aliens give you brain implants instead.

So do you let the aliens put an implant in your brain, or not? Obviously you think long and hard before doing this. And obviously this is an extended metaphor for vaccine denialism. So what’s the difference between the metaphor (where you’re presumably anti-implant) and the real world (where you’re presumably pro-vaccine)? For me, it’s a combination of:

1. The aliens are hostile, so I don’t trust them no matter how smart they are
2. If the aliens are so smart, why did they get their last few predictions wrong?
3. I can’t even begin to understand the aliens’ argument…what is a “quantum memetic plague”? Why would brain implants treat it? What are the statistical concepts that can’t be explained in human language, and why would they only affect these studies and no others?
4. I have no idea what you can and can’t do with cybernetic implants, and it seems totally possible they could mind control me or something.

All of these come down to a more basic problem, which is that these are *hostile aliens*. Let’s start with the second word first. Because they’re alien, I can’t trust they’re on my side. Because they’re alien, their predictions feel like a black box. I don’t know if their previous predictions were 50% confidence or 99% confidence, or whether the stupid aliens made the last few predictions but it’s the smart aliens making this new prediction, or whether they’re even telling the truth when they describe previously fighting this plague on their homeworld and learning best practices. Because they’re alien, all the words they use like “quantum memetic plague” and “brain implant” feel not only beyond my understanding, but *unfairly* beyond my understanding, something that neither I nor anyone I trust could ever double-check. And because they’re alien, I have no idea how their technology works, and it could do all sorts of sinister things.

I’m not an immunologist. I don’t have the specific expertise it would take to evaluate whether vaccines work. But one of my friends in medical school decided to do a joint MD-PhD in immunology. I didn’t follow her lead, because I didn’t want to spend my entire twenties and thirties in soul-sucking research labs trying to remember thirty different kinds of interleukins. But when I ask myself “why am I not an immunologist?” the answer is something like “because I dislike intense misery” and not “because immunologists are an alien species and I cannot possibly imagine myself becoming one”. More generally, I come from a social class where becoming an immunologist is considered a reasonable thing that might happen. Several of my friends and family members are experts in various fields (even for very loose definitions of “expert” like a really excellent social worker who other social workers trust).

Even more generally, I know some basics of biology. I know why vaccines should work in theory. I know that even if somebody wanted to control you by sneaking a microchip into a vaccine, that’s impossible with current technology. I know enough about politics and economics to know it’s really unlikely that some cabal of elites has developed super-futuristic technology in secret. And I know a lot of smart people who I could ask these questions to if I were confused, and they could tell me all the stuff above.

John Steinbeck said that socialism never took root in America because even the poor see themselves as “temporarily-embarrassed millionaires” rather than as members of the class of Poor People. If you’re Poor People, and they’re Rich People, maybe you’re on opposite sides and should fight. If you’re temporarily-embarrassed millionaires, and they’re normal millionaires, maybe you’re on the same side and you can trust them.

In the same way, I think of myself as a temporarily-embarrassed immunologist. I don’t know all the interleukins. But I would like to believe that if I really wanted, either I or at least people I know and trust could learn immunology to a standard where we could double-check the work of the vaccine scientists.

I’ve written before about [filter bubbles](https://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/).
About half of Americans are young-earth creationists. I have nothing against these people, I don’t deliberately ostracize them - yet none of my closest hundred friends are in this category. There’s about a 0.5^100 ≈ 8 × 10^-31 chance that would happen by coincidence. Some powerful combination of class, cultural, and geographic barriers prevents me from meeting them.

Imagine someone with an equally strong bubble filtering against scientists. Such a person wouldn’t feel like a temporarily-embarrassed immunologist. They would feel like immunologists are some sort of dark and terrible figures from a shadow dimension they could never reach. They would seem like aliens.

And now let’s return to that first word, “hostile”. [95% of biology professors are Democrats](https://www.nas.org/academic-questions/31/2/homogenous_the_political_affiliations_of_elite_liberal_arts_college_faculty). Plus medical organizations keep rubbing [more and more salt in the wound](https://marginalrevolution.com/marginalrevolution/2021/11/the-danger-of-demanding-woke-physicians.html). If you’re a conservative, or even have conservative tendencies, these aliens surely qualify as suspicious and probably anti-Earthling. “99% of hostile aliens agree: vaccines are right for you!” Now we’re back to it not sounding so convincing. In a world where scientists seemed like hostile aliens, I would hesitate to take the vaccine.

Again, ivermectin optimism isn’t exactly like vaccine denialism - it’s a less open-and-shut question, you can still make a plausible argument for it. But it’s some of the same people and follows the same dynamics. If we want to make people more willing to get vaccines, or less willing to take ivermectin, we have to make the scientific establishment feel less like an enclave of hostile aliens to half the population. Do that, and people will mostly take COVID-related advice, for the same reason they mostly take advice around avoiding asbestos or using sunscreen - both things we’ve successfully convinced people to do even without having a perfect encapsulation of the scientific method or the ideal balance between evidence and authority. But I don’t really know how to do that, and any speculation would be too political even for a section titled “The Political Takeaway”.

### The Summary

* Ivermectin doesn’t reduce mortality in COVID a significant amount (let’s say d > 0.3) in the absence of comorbid parasites: **85-90% confidence**
* Parasitic worms are a significant confounder in some ivermectin studies, such that they made them get a positive result even when honest and methodologically sound: **50% confidence**
* Fraud and data processing errors are of similar magnitude to p-hacking and methodological problems in explaining bad studies (95% confidence interval for fraud: **between >1% and 5%** as important as methodological problems; 95% confidence interval for data processing errors: **between 5% and 100% as important**)
* Probably “Trust Science” is not the right way to reach proponents of pseudoscientific medicine: **???% confidence**
# Mantic Monday 11/15

### Reciprocal Scoring, Part II

I talked about this last week as a potential solution to the problem of long-term forecasting. Instead of waiting a century to see what happens, get a bunch of teams, and incentivize each to predict what the *others* will guess. If they all expect the others to strive for accuracy, then the stable Schelling point is the most accurate answer.

Now there’s a paper, by Karger, Monrad, Mellers, and Tetlock - [Reciprocal Scoring: A Method For Forecasting Unanswerable Questions](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3954498). They focus not just on long-run outcomes but on conditionals and counterfactuals.

The paper starts with an argument against conditional prediction markets that I’d somehow missed before. Suppose you want to know whether a mask mandate will save lives during a pandemic. Current state of the art is to start two prediction markets: “conditional on there being a mask mandate, how many people will die?” and “conditional on there not being a mask mandate, how many people will die?” In this situation, this doesn’t work! Governments are more likely to resort to mask mandates in worlds where the pandemic is very bad. So you should probably predict a higher number of deaths for the mandate condition. But then confused policy-makers will interpret your prediction market as evidence that a mask mandate will cost lives. (this is just the endogeneity problem, but for the future instead of the past!)

They admit that you’ve got to be really careful with this. If there are a lot of low-quality forecasters in the tournament, then since high-quality forecasters will accurately predict that low-quality forecasters will give a low-quality answer, everyone will converge on the low-quality answer. This paper is by the Good Judgment Project, who have just spent years identifying a population of superforecasters, so their plan is to use these people, who are all great, who all know they’re all great, who all know they all know they’re all great, etc. Philip Tetlock wasn’t writing all those books and tweets to self-aggrandize, he was writing them to create common knowledge!

They also admit this incentivizes teams to ignore “secret knowledge” that they have but which they expect other teams won’t. Their solution is to make teams very big and full of smart people, so that it’s unlikely other teams will miss something. I guess at the limit this is just banning insider trading, which is supposed to be good.

Their final concern is that you can just phone up the guy on the other team and say “We’re going to predict 7, if you also predict 7 then we’ll both win lots of money”. Their solution is multiple mostly-anonymous teams in online tournaments. I don’t know if this would survive a dedicated cheater, but I guess all tournaments are theoretically cheatable and most of the time they muddle through anyway.

The paper continues to an empirical study. The authors ran a forecasting tournament on various easily-checkable things like COVID vaccinations, commodity prices, and the weather. Forecasters were separated into three conditions: reciprocal scoring, traditional scoring (ie Brier score + incentives), and no scoring. The no scoring team did worse than the normal scoring team, which is the basic insight Tetlock et al have found again and again: scored and incentivized forecasts are better than random people pontificating on things. But more relevantly for this paper, the reciprocal scoring and traditional scoring conditions did basically the same!

[Chart comparing accuracy across the three conditions; more negative numbers mean greater accuracy.]
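For concreteness, here’s a toy version of the scoring rule in Python. This is my reading of the general idea, not the paper’s exact implementation - Karger et al’s version involves details like team medians and batches of questions that I’m glossing over - and the forecasts are made-up numbers:

```python
import numpy as np

def reciprocal_scores(forecasts):
    """Score each team's probability forecast by its squared distance
    from the average of the *other* teams' forecasts (lower is better).
    If every team expects the others to aim for accuracy, the most
    accurate answer becomes the Schelling point."""
    forecasts = np.asarray(forecasts, dtype=float)
    scores = []
    for i in range(len(forecasts)):
        others = np.delete(forecasts, i)       # everyone but team i
        scores.append((forecasts[i] - others.mean()) ** 2)
    return np.array(scores)

# Four teams forecast the same unanswerable question:
print(reciprocal_scores([0.70, 0.72, 0.69, 0.90]))
# The outlier at 0.90 gets the worst (largest) score.
```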
Then they tried something more ambitious. They asked teams to “predict” the number of lives saved by various COVID interventions. These interventions had already happened or not, there was no way to ever empirically resolve the predictions. This was supposed to serve as an example of the exciting new things you can do with reciprocal scoring. Both teams came up with pretty closely correlated estimates!

But are they correct? I [looked into this myself](https://astralcodexten.substack.com/p/lockdown-effectiveness-much-more) a while back and decided the best paper out there was Brauner et al. The relevant part of their results was:

[Figure: intervention effectiveness estimates from Brauner et al.]

After comparing the two figures, I conclude…that you shouldn’t make a graph entirely in different shades of green, purple, and yellow. Do we have a better presentation of this? Yes? Okay:

[Figure: a more legible chart of the forecasters’ estimates.]

Their categories are different enough from Brauner’s that I don’t want to say too much. The most obvious difference is that the forecasters are much more positive about stay-at-home orders than the scientists, but I think this is because the scientists are looking at additional value *on top of* restricting specific things, and the forecasters are looking at total value *including* the restrictions on specific things. Otherwise the rank order seems basically okay, idk.

(I also wouldn’t be too impressed even if the forecasters did get the same findings as Brauner et al, because one likely route to that would be the same one I took - you’ve resolved to judge various coronavirus interventions, you notice Brauner et al is clearly the best paper, and so you report its results.)

Overall, I’m really excited by this. My only concern is that it doesn’t have the same sort of hits-you-between-the-eyes obviously-there’s-no-way-to-bias-this quality that prediction markets do. If these people had predicted the effects of COVID restrictions before COVID, people would have doubted them for the same reason they doubted the ordinary experts.

### “Too Close To Call”

Here are some screenshots I took within a few moments of each other on the night of the Virginia election:

**CNN:** Tight race! Close contest!

**PredictIt:** 97% chance the Republican wins (he did)

**NBC:** Tight race! Too close to call!

**Polymarket:** 98% chance the Republican wins (he did)

You get the picture. Prediction markets reached near-certainty about the winner while traditional media was still talking about how un-call-ably close it was. Apparently having hundreds of people all incentivized to give precise probability estimates very slightly earlier than the next guy works better than having a few journalists who are scared people will make fun of them if they jump the gun.

Might prediction markets be lurching to a false certainty too soon? I used to think this, I tried betting on that thesis a bunch of times, and I always lost money. I guess realistically we can’t know *for sure* that they’re not overconfident until we’ve tested their 98% probability fifty times, but I nominate someone else to lose their money for the remaining 40-something experiments.

### Iterated Prediction Markets

A comment on the last post: Suppose you want to know whether America will have a higher GDP than China in 2100. You want to make a real prediction market about it, none of this newfangled “reciprocal scoring” stuff. But nobody will put their money in a prediction market for more than five years at a time, because even if you double your money, 100% return over eighty years is a bad investment.
Maybe you run a prediction market now, asking “what will people in 2026 think of this question?” People in 2026 will be at least as knowledgeable as people today, probably more since they’ll have five extra years of data. So this is equivalent to asking what they think people like them only a bit smarter will think, which is equivalent to asking what they think is true.

You can imagine chaining this. In 2095, you ask people to predict the actual answer. In 2090, you ask people to predict the value of the 2095 market on December 31, 2095. In 2085, you ask people to predict the value of the 2090 market on December 31, 2090. The chain ends with you putting a market on Polymarket tomorrow asking what the market will think on December 31, 2025.

This should work. But is this any different from a normal market? After all, a normal market open from now until 2100 is implicitly a prediction market on what people will think in 2025, since you can sell your shares in 2025 (or any other time) and cash out. So just run a single market for 80 years and save yourself the trouble, right?

I *think* the advantage of iterating it this way is that you can amplify small changes. Suppose that right now, the market thinks there’s a 50% chance that America will beat China in 2100. But I am a great economic analyst who knows things the market doesn’t, which allow me to determine that the real chance is 52%. Leaving money in a market for five years to make a 4% return doesn’t sound great. So we could do something like make people bet on whether the 2025 market would be <40%, 40-45%, 45-50%, 50-55%, 55-60%, or >60%. This way if you’re even directionally right you can double your money in five years. I bet there are more clever mathematical ways to do this which would give you finer-grained resolution.

Problem: it seems like in 2025, you’ll have a prediction market for 2030 that will also have bins like that, and not a specific number. So you’d have to find a way to convert your bins into a number - which shouldn’t be hard, you can just assume each bin represents a vote for the midpoint of that bin and average it out (see the sketch below). Right? Are there math issues here I haven’t thought of?

What happens if you don’t expect enough institutional continuity in your prediction market provider to actually run the 2095 market that everything else is based on? I think this also goes fine? If you expect the market provider to last until 2025, then it’s the 2025ers’ problem whether it lasts until 2030. If enough people decide it won’t, this should decrease liquidity but not necessarily distort the market.

This feels kind of like witchcraft, so feel free to tell me why it won’t work or is unnecessary. (Another possibility is that it would work, but that the key piece is amplifying small variations, and you can do that better with idk leveraged trading or some weird mathematical formula or something.)
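Here’s a minimal sketch of that bin-to-number conversion. The bin prices are hypothetical, and the ranges I substitute for the open-ended <40% and >60% bins are pure assumption - how much weight to give those tails is exactly the kind of math issue I’m worried about:

```python
# Hypothetical 2025-market prices for bins on "what will the 2030 market
# say?". Treat each bin's price as probability mass at the bin's
# midpoint, then take the weighted average. The ranges standing in for
# the open-ended "<40%" and ">60%" bins are assumptions.
bins = [
    ((0.30, 0.40), 0.05),   # stands in for "<40%"
    ((0.40, 0.45), 0.10),
    ((0.45, 0.50), 0.25),
    ((0.50, 0.55), 0.30),
    ((0.55, 0.60), 0.20),
    ((0.60, 0.70), 0.10),   # stands in for ">60%"
]

total = sum(price for _, price in bins)   # prices may not sum exactly to 1
estimate = sum(((lo + hi) / 2) * price for (lo, hi), price in bins) / total
print(f"implied probability: {estimate:.3f}")   # ~0.516 with these numbers
```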
### This Week In The Markets

**[Polymarket](https://polymarket.com/):** Nothing too technically interesting here, just questions I’ve been curious about and now have an answer for.

**[PredictIt](https://www.predictit.org/markets/detail/7057/Who-will-win-the-2024-Democratic-presidential-nomination):** Click for link. I’m fascinated by how many people expect someone who is neither Biden nor Harris to win the Democratic nomination in 2024. Biden is down five cents since this summer without any new health problems (that I know about), and sitting presidents rarely get refused a renomination just because they’re unpopular. Maybe people think Biden’s age wouldn’t have mattered if he was popular, but as it is it’ll make a graceful excuse to convince him to sit out? I still think he’s undervalued.

**[Metaculus](https://www.metaculus.com/questions/):** Click for link. Some very unsurprising overlap between the Metaculus user and housing policy wonk populations here. “Street votes to determine planning permissions” means that the residents of a single street - so maybe a two-to-three digit number of people - get to vote on their street’s zoning. This is near the top of the YIMBY movement’s policy wishlist - they seem to think most people would vote for denser zoning, though I have trouble understanding their optimism. You can read their white paper arguing in favor [here](https://policyexchange.org.uk/wp-content/uploads/Strong-Suburbs.pdf). As a bonus, it would probably lead to more attractive buildings, since people living next to a new development are more invested in avoiding eyesores. (I’m not sure how blatantly developers are allowed to bribe citizens; I hope very blatantly indeed.) The policy seems to have some strong advocates in British government, and YIMBY leaders have offered probabilities like 50% or 60% that it’ll happen, which the market has sensibly downgraded to 33%.

Click for link. I was wondering when this was going to show up; maybe it was too spicy for PredictIt and Metaculus. I’ve heard a lot of stuff about the prosecutor really bungling this one, but mostly from conservatives who I would have expected to hate the prosecutor anyway, so it’s good to get objective confirmation that yeah, this isn’t going anywhere.

Click for link. This prediction is in the past: Starlink became generally available last month! Somehow I missed it! But there’s [a website where you can sign up](https://www.starlink.com/) and everything! Right now it’s only in select areas (it tells me it’ll be reaching the Bay in 2022-2023), but if you’re in those areas it’s available to normal people!

Also, Metaculus did a really good job with this one:

[Figure: the Metaculus community prediction over time.]

This shows their prediction over time. The final prediction was October 28th, which is unfair since the market still isn’t closed and you can just predict the date you know was correct. But even a year ago, people’s predictions basically got the right date within a couple of weeks. I don’t think there was anything obvious like Elon Musk saying “we’ll release in October 2021” (and it’s not like you’d trust a CEO who said something like that anyway!) AFAICT the market was just really good.

### Shorts

**1:** Erik Hoel’s [predictions for 2050](https://erikhoel.substack.com/p/futurists-have-their-heads-in-the). Recommended more than I would usually recommend this genre. I’ve been looking for really good new Substacks, by people I hadn’t already been reading for years on another platform, and this is one of the few I’ve found that I’m really excited about.

**2:** XiXiDu on times when expert forecasts were repeatedly wrong:

**3:** Reddit [launches an on-platform predictions feature](https://techcrunch.com/2021/10/13/reddit-adds-a-new-way-to-post-with-launch-of-predictions-feature/).