diff --git "a/cold.takes.jsonl" "b/cold.takes.jsonl" deleted file mode 100644--- "a/cold.takes.jsonl" +++ /dev/null @@ -1,138 +0,0 @@ -{"text": "\nImage from here via this tweet\nICYMI, Microsoft has released a beta version of an AI chatbot called “the new Bing” with both impressive capabilities and some scary behavior. (I don’t have access. I’m going off of tweets and articles.)\nZvi Mowshowitz lists examples here - highly recommended. Bing has threatened users, called them liars, insisted it was in love with one (and argued back when he said he loved his wife), and much more.\nAre these the first signs of the risks I’ve written about? I’m not sure, but I’d say yes and no.\nLet’s start with the “no” side. \nMy understanding of how Bing Chat was trained probably does not leave much room for the kinds of issues I address here. My best guess at why Bing Chat does some of these weird things is closer to “It’s acting out a kind of story it’s seen before” than to “It has developed its own goals due to ambitious, trial-and-error based development.” (Although “acting out a story” could be dangerous too!)\nMy (zero-inside-info) best guess at why Bing Chat acts so much weirder than ChatGPT is in line with Gwern’s guess here. To oversimplify, there’s a particular type of training that seems to make a chatbot generally more polite and cooperative and less prone to disturbing content, and it’s possible that Bing Chat incorporated less of this than ChatGPT. This could be straightforward to fix.\nBing Chat does not (even remotely) seem to pose a risk of global catastrophe itself. \nOn the other hand, there is a broader point that I think Bing Chat illustrates nicely: companies are racing to build bigger and bigger “digital brains” while having very little idea what’s going on inside those “brains.” The very fact that this situation is so unclear - that there’s been no clear explanation of why Bing Chat is behaving the way it is - seems central, and disturbing.\nAI systems like this are (to simplify) designed something like this: “Show the AI a lot of words from the Internet; have it predict the next word it will see, and learn from its success or failure, a mind-bending number of times.” You can do something like that, and spend huge amounts of money and time on it, and out will pop some kind of AI. If it then turns out to be good or bad at writing, good or bad at math, polite or hostile, funny or serious (or all of these depending on just how you talk to it) ... you’ll have to speculate about why this is. You just don’t know what you just made.\nWe’re building more and more powerful AIs. Do they “want” things or “feel” things or aim for things, and what are those things? We can argue about it, but we don’t know. And if we keep going like this, these mysterious new minds will (I’m guessing) eventually be powerful enough to defeat all of humanity, if they were turned toward that goal.\nAnd if nothing changes about attitudes and market dynamics, minds that powerful could end up rushed to customers in a mad dash to capture market share.\nThat’s the path the world seems to be on at the moment. It might end well and it might not, but it seems like we are on track for a heck of a roll of the dice.\n(And to be clear, I do expect Bing Chat to act less weird over time. Changing an AI’s behavior is straightforward, but that might not be enough, and might even provide false reassurance.)", "url": "https://www.cold-takes.com/what-does-bing-chat-tell-us-about-ai-risk/", "title": "What does Bing Chat tell us about AI risk?", "source": "cold.takes", "source_type": "blog", "date_published": "2023-02-28", "id": "6b97a808bdf9a09e85dff8a33dda9cca"} -{"text": "Let’s say you’re convinced that AI could make this the most important century of all time for humanity. What can you do to help things go well instead of poorly?\nI think the biggest opportunities come from a full-time job (and/or the money you make from it). I think people are generally far better at their jobs than they are at anything else. \nThis piece will list the jobs I think are especially high-value. I expect things will change (a lot) from year to year - this is my picture at the moment.\nHere’s a summary:\nRole\nSkills/assets you'd need\nResearch and engineering on AI safety\nTechnical ability (but not necessarily AI background)\n \nInformation security to reduce the odds powerful AI is leaked\nSecurity expertise or willingness/ability to start in junior roles (likely not AI)\n \nOther roles at AI companies\nSuitable for generalists (but major pros and cons)\n \nGovt and govt-facing think tanks\nSuitable for generalists (but probably takes a long time to have impact)\n \nJobs in politics\nSuitable for generalists if you have a clear view on which politicians to help\n \nForecasting to get a better handle on what’s coming\nStrong forecasting track record (can be pursued part-time)\n \n\"Meta\" careers\nMisc / suitable for generalists\n \nLow-guidance options\nThese ~only make sense if you read & instantly think \"That's me\"\n \nA few notes before I give more detail:\nThese jobs aren’t the be-all/end-all. I expect a lot to change in the future, including a general increase in the number of helpful jobs available. \nMost of today’s opportunities are concentrated in the US and UK, where the biggest AI companies (and AI-focused nonprofits) are. This may change down the line.\nMost of these aren’t jobs where you can just take instructions and apply narrow skills. \nThe issues here are tricky, and your work will almost certainly be useless (or harmful) according to someone.\n \nI recommend forming your own views on the key risks of AI - and/or working for an organization whose leadership you’re confident in.\nStaying open-minded and adaptable is crucial. \nI think it’s bad to rush into a mediocre fit with one of these jobs, and better (if necessary) to stay out of AI-related jobs while skilling up and waiting for a great fit.\n \nI don’t think it’s helpful (and it could be harmful) to take a fanatical, “This is the most important time ever - time to be a hero” attitude. Better to work intensely but sustainably, stay mentally healthy and make good decisions.\nThe first section of this piece will recap my basic picture of the major risks, and the promising ways to reduce these risks (feel free to skip if you think you’ve got a handle on this).\nThe next section will elaborate on the options in the table above.\nAfter that, I’ll talk about some of the things you can do if you aren’t ready for a full-time career switch yet, and give some general advice for avoiding doing harm and burnout.\nRecapping the major risks, and some things that could help\nThis is a quick recap of the major risks from transformative AI. For a longer treatment, see How we could stumble into an AI catastrophe, and for an even longer one see the full series. To skip to the next section, click here.\nThe backdrop: transformative AI could be developed in the coming decades. If we develop AI that can automate all the things humans do to advance science and technology, this could cause explosive technological progress that could bring us more quickly than most people imagine to a radically unfamiliar future. \nSuch AI could also be capable of defeating all of humanity combined, if it were pointed toward that goal. \n(Click to expand) The most important century \nIn the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.\nI focus on a hypothetical kind of AI that I call PASTA, or Process for Automating Scientific and Technological Advancement. PASTA would be AI that can essentially automate all of the human activities needed to speed up scientific and technological advancement.\nUsing a variety of different forecasting approaches, I argue that PASTA seems more likely than not to be developed this century - and there’s a decent chance (more than 10%) that we’ll see it within 15 years or so.\nI argue that the consequences of this sort of AI could be enormous: an explosion in scientific and technological progress. This could get us more quickly than most imagine to a radically unfamiliar future.\nI’ve also argued that AI systems along these lines could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nFor more, see the most important century landing page. The series is available in many formats, including audio; I also provide a summary, and links to podcasts where I discuss it at a high level.\n(Click to expand) How could AI systems defeat humanity?\nA previous piece argues that AI systems could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nBy defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.\nOne way this could happen would be via “superintelligence” It’s imaginable that a single AI system (or set of systems working together) could:\nDo its own research on how to build a better AI system, which culminates in something that has incredible other abilities.\nHack into human-built software across the world.\nManipulate human psychology.\nQuickly generate vast wealth under the control of itself or any human allies.\nCome up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.\nDevelop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.\nBut even if “superintelligence” never comes into play - even if any given AI system is at best equally capable to a highly capable human - AI could collectively defeat humanity. The piece explains how.\nThe basic idea is that humans are likely to deploy AI systems throughout the economy, such that they have large numbers and access to many resources - and the ability to make copies of themselves. From this starting point, AI systems with human-like (or greater) capabilities would have a number of possible ways of getting to the point where their total population could outnumber and/or out-resource humans.\nMore: AI could defeat all of us combined\nMisalignment risk: AI could end up with dangerous aims of its own. \nIf this sort of AI is developed using the kinds of trial-and-error-based techniques that are common today, I think it’s likely that it will end up “aiming” for particular states of the world, much like a chess-playing AI “aims” for a checkmate position - making choices, calculations and plans to get particular types of outcomes, even when doing so requires deceiving humans. \nI think it will be difficult - by default - to ensure that AI systems are aiming for what we (humans) want them to aim for, as opposed to gaining power for ends of their own.\nIf AIs have ambitious aims of their own - and are numerous and/or capable enough to overpower humans - I think we have a serious risk that AIs will take control of the world and disempower humans entirely.\n(Click to expand) Why would AI \"aim\" to defeat humanity?\nA previous piece argued that if today’s AI development methods lead directly to powerful enough AI systems, disaster is likely by default (in the absence of specific countermeasures). \nIn brief:\nModern AI development is essentially based on “training” via trial-and-error. \nIf we move forward incautiously and ambitiously with such training, and if it gets us all the way to very powerful AI systems, then such systems will likely end up aiming for certain states of the world (analogously to how a chess-playing AI aims for checkmate).\nAnd these states will be other than the ones we intended, because our trial-and-error training methods won’t be accurate. For example, when we’re confused or misinformed about some question, we’ll reward AI systems for giving the wrong answer to it - unintentionally training deceptive behavior.\nWe should expect disaster if we have AI systems that are both (a) powerful enough to defeat humans and (b) aiming for states of the world that we didn’t intend. (“Defeat” means taking control of the world and doing what’s necessary to keep us out of the way; it’s unclear to me whether we’d be literally killed or just forcibly stopped1 from changing the world in ways that contradict AI systems’ aims.)\nMore: Why would AI \"aim\" to defeat humanity?\nCompetitive pressures, and ambiguous evidence about the risks, could make this situation very dangerous. In a previous piece, I lay out a hypothetical story about how the world could stumble into catastrophe. In this story:\nThere are warning signs about the risks of misaligned AI - but there’s a lot of ambiguity about just how big the risk is.\nEveryone is furiously racing to be first to deploy powerful AI systems. \nWe end up with a big risk of deploying dangerous AI systems throughout the economy - which means a risk of AIs disempowering humans entirely. \nAnd even if we navigate that risk - even if AI behaves as intended - this could be a disaster if the most powerful AI systems end up concentrated in the wrong hands (something I think is reasonably likely due to the potential for power imbalances). There are other risks as well.\n(Click to expand) Why AI safety could be hard to measure\nIn previous pieces, I argued that:\nIf we develop powerful AIs via ambitious use of the “black-box trial-and-error” common in AI development today, then there’s a substantial risk that: \nThese AIs will develop unintended aims (states of the world they make calculations and plans toward, as a chess-playing AI \"aims\" for checkmate);\n \nThese AIs could deceive, manipulate, and even take over the world from humans entirely as needed to achieve those aims.\nPeople today are doing AI safety research to prevent this outcome, but such research has a number of deep difficulties:\n“Great news - I’ve tested this AI and it looks safe.” Why might we still have a problem?\n \nProblem\nKey question\nExplanation\nThe Lance Armstrong problem\nDid we get the AI to be actually safe or good at hiding its dangerous actions?\nWhen dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “appearing to behave well.”\nWhen professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.\n \nThe King Lear problem\nThe AI is (actually) well-behaved when humans are in control. Will this transfer to when AIs are in control?\nIt's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't. \nAIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to take control of the world entirely. It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation. \nLike King Lear trying to decide how much power to give each of his daughters before abdicating the throne.\n \nThe lab mice problem\nToday's \"subhuman\" AIs are safe.What about future AIs with more human-like abilities?\nToday's AI systems aren't advanced enough to exhibit the basic behaviors we want to study, such as deceiving and manipulating humans.\nLike trying to study medicine in humans by experimenting only on lab mice.\n \nThe first contact problem\nImagine that tomorrow's \"human-like\" AIs are safe. How will things go when AIs have capabilities far beyond humans'?\nAI systems might (collectively) become vastly more capable than humans, and it's ... just really hard to have any idea what that's going to be like. As far as we know, there has never before been anything in the galaxy that's vastly more capable than humans in the relevant ways! No matter what we come up with to solve the first three problems, we can't be too confident that it'll keep working if AI advances (or just proliferates) a lot more. \nLike trying to plan for first contact with extraterrestrials (this barely feels like an analogy).\n \n(Click to expand) Power imbalances, and other risks beyond misaligned AI\nI’ve argued that AI could cause a dramatic acceleration in the pace of scientific and technological advancement. \nOne way of thinking about this: perhaps (for reasons I’ve argued previously) AI could enable the equivalent of hundreds of years of scientific and technological advancement in a matter of a few months (or faster). If so, then developing powerful AI a few months before others could lead to having technology that is (effectively) hundreds of years ahead of others’.\nBecause of this, it’s easy to imagine that AI could lead to big power imbalances, as whatever country/countries/coalitions “lead the way” on AI development could become far more powerful than others (perhaps analogously to when a few smallish European states took over much of the rest of the world).\nI think things could go very badly if the wrong country/countries/coalitions lead the way on transformative AI. At the same time, I’ve expressed concern that people might overfocus on this aspect of things vs. other issues, for a number of reasons including:\nI think people naturally get more animated about \"helping the good guys beat the bad guys\" than about \"helping all of us avoid getting a universally bad outcome, for impersonal reasons such as 'we designed sloppy AI systems' or 'we created a dynamic in which haste and aggression are rewarded.'\"\nI expect people will tend to be overconfident about which countries, organizations or people they see as the \"good guys.\"\n(More here.)\nThere are also dangers of powerful AI being too widespread, rather than too concentrated. In The Vulnerable World Hypothesis, Nick Bostrom contemplates potential future dynamics such as “advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions.” In addition to avoiding worlds where AI capabilities end up concentrated in the hands of a few, it could also be important to avoid worlds in which they diffuse too widely, too quickly, before we’re able to assess the risks of widespread access to technology far beyond today’s.\nI discuss these and a number of other AI risks in a previous piece: Transformative AI issues (not just misalignment): an overview\nI’ve laid out several ways to reduce the risks (color-coded since I’ll be referring to them throughout the piece):\nAlignment research. Researchers are working on ways to design AI systems that are both (a) “aligned” in the sense that they don’t have unintended aims of their own; (b) very powerful, to the point where they can be competitive with the best systems out there. \nI’ve laid out three high-level hopes for how - using techniques that are known today - we might be able to develop AI systems that are both aligned and powerful. \nThese techniques wouldn’t necessarily work indefinitely, but they might work long enough so that we can use early safe AI systems to make the situation much safer (by automating huge amounts of further alignment research, by helping to demonstrate risks and make the case for greater caution worldwide, etc.)\n(A footnote explains how I’m using “aligned” vs. “safe.”1)\n(Click to expand) High-level hopes for AI alignment\nA previous piece goes through what I see as three key possibilities for building powerful-but-safe AI systems.\nIt frames these using Ajeya Cotra’s young businessperson analogy for the core difficulties. In a nutshell, once AI systems get capable enough, it could be hard to test whether they’re safe, because they might be able to deceive and manipulate us into getting the wrong read. Thus, trying to determine whether they’re safe might be something like “being an eight-year-old trying to decide between adult job candidates (some of whom are manipulative).”\nKey possibilities for navigating this challenge:\nDigital neuroscience: perhaps we’ll be able to read (and/or even rewrite) the “digital brains” of AI systems, so that we can know (and change) what they’re “aiming” to do directly - rather than having to infer it from their behavior. (Perhaps the eight-year-old is a mind-reader, or even a young Professor X.)\nLimited AI: perhaps we can make AI systems safe by making them limited in various ways - e.g., by leaving certain kinds of information out of their training, designing them to be “myopic” (focused on short-run as opposed to long-run goals), or something along those lines. Maybe we can make “limited AI” that is nonetheless able to carry out particular helpful tasks - such as doing lots more research on how to achieve safety without the limitations. (Perhaps the eight-year-old can limit the authority or knowledge of their hire, and still get the company run successfully.)\nAI checks and balances: perhaps we’ll be able to employ some AI systems to critique, supervise, and even rewrite others. Even if no single AI system would be safe on its own, the right “checks and balances” setup could ensure that human interests win out. (Perhaps the eight-year-old is able to get the job candidates to evaluate and critique each other, such that all the eight-year-old needs to do is verify basic factual claims to know who the best candidate is.)\nThese are some of the main categories of hopes that are pretty easy to picture today. Further work on AI safety research might result in further ideas (and the above are not exhaustive - see my more detailed piece, posted to the Alignment Forum rather than Cold Takes, for more).\n \nStandards and monitoring.I see some hope for developing standards that all potentially dangerous AI projects (whether companies, government projects, etc.) need to meet, and enforcing these standards globally. \nSuch standards could require strong demonstrations of safety, strong security practices, designing AI systems to be difficult to use for overly dangerous activity, etc. \nWe don't need a perfect system or international agreement to get a lot of benefit out of such a setup. The goal isn’t just to buy time – it’s to change incentives, such that AI projects need to make progress on improving security, alignment, etc. in order to be profitable.\n(Click to expand) How standards might be established and become national or international\nI previously laid out a possible vision on this front, which I’ll give a slightly modified version of here:\nToday’s leading AI companies could self-regulate by committing not to build or deploy a system that they can’t convincingly demonstrate is safe (e.g., see Google’s 2018 statement, \"We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”). \nEven if some people at the companies would like to deploy unsafe systems, it could be hard to pull this off once the company has committed not to. \n \nEven if there’s a lot of room for judgment in what it means to demonstrate an AI system is safe, having agreed in advance that certain evidence is not good enough could go a long way.\nAs more AI companies are started, they could feel soft pressure to do similar self-regulation, and refusing to do so is off-putting to potential employees, investors, etc.\nEventually, similar principles could be incorporated into various government regulations and enforceable treaties.\nGovernments could monitor for dangerous projects using regulation and even overseas operations. E.g., today the US monitors (without permission) for various signs that other states might be developing nuclear weapons, and might try to stop such development with methods ranging from threats of sanctions to cyberwarfare or even military attacks. It could do something similar for any AI development projects that are using huge amounts of compute and haven’t volunteered information about whether they’re meeting standards.\nSuccessful, careful AI projects. I think an AI company (or other project) can enormously improve the situation, if it can both (a) be one of the leaders in developing powerful AI; (b) prioritize doing (and using powerful AI for) things that reduce risks, such as doing alignment research. (But don’t read this as ignoring the fact that AI companies can do harm as well!)\n(Click to expand) How a careful AI project could be helpful\nIn addition to using advanced AI to do AI safety research (noted above), an AI project could:\nPut huge effort into designing tests for signs of danger, and - if it sees danger signs in its own systems - warning the world as a whole.\nOffer deals to other AI companies/projects. E.g., acquiring them or exchanging a share of its profits for enough visibility and control to ensure that they don’t deploy dangerous AI systems.\nUse its credibility as the leading company to lobby the government for helpful measures (such as enforcement of a monitoring-and-standards regime), and to more generally highlight key issues and advocate for sensible actions.\nTry to ensure (via design, marketing, customer choice, etc.) that its AI systems are not used for dangerous ends, and are used on applications that make the world safer and better off. This could include defensive deployment to reduce risks from other AIs; it could include using advanced AI systems to help it gain clarity on how to get a good outcome for humanity; etc.\nAn AI project with a dominant market position could likely make a huge difference via things like the above (and probably via many routes I haven’t thought of). And even an AI project that is merely one of several leaders could have enough resources and credibility to have a lot of similar impacts - especially if it’s able to “lead by example” and persuade other AI projects (or make deals with them) to similarly prioritize actions like the above.\nA challenge here is that I’m envisioning a project with two arguably contradictory properties: being careful (e.g., prioritizing actions like the above over just trying to maintain its position as a profitable/cutting-edge project) and successful (being a profitable/cutting-edge project). In practice, it could be very hard for an AI project to walk the tightrope of being aggressive enough to be a “leading” project (in the sense of having lots of resources, credibility, etc.), while also prioritizing actions like the above (which mostly, with some exceptions, seem pretty different from what an AI project would do if it were simply focused on its technological lead and profitability).\n \nStrong security. A key threat is that someone could steal major components of an AI system and deploy it incautiously. It could be extremely hard for an AI project to be robustly safe against having its AI “stolen.” But this could change, if there’s enough effort to work out the problem of how to secure a large-scale, powerful AI system.\n(Click to expand) The challenging of securing dangerous AI\nIn Racing Through a Minefield, I described a \"race\" between cautious actors (those who take misalignment risk seriously) and incautious actors (those who are focused on deploying AI for their own gain, and aren't thinking much about the dangers to the whole world). Ideally, cautious actors would collectively have more powerful AI systems than incautious actors, so they could take their time doing alignment research and other things to try to make the situation safer for everyone. \nBut if incautious actors can steal an AI from cautious actors and rush forward to deploy it for their own gain, then the situation looks a lot bleaker. And unfortunately, it could be hard to protect against this outcome.\nIt's generally extremely difficult to protect data and code against a well-resourced cyberwarfare/espionage effort. An AI’s “weights” (you can think of this sort of like its source code, though not exactly) are potentially very dangerous on their own, and hard to get extreme security for. Achieving enough cybersecurity could require measures, and preparations, well beyond what one would normally aim for in a commercial context.\nJobs that can help\nIn this long section, I’ll list a number of jobs I wish more people were pursuing.\nUnfortunately, I can’t give individualized help exploring one or more of these career tracks. Starting points could include 80,000 Hours and various other resources.\nResearch and engineering careers. You can contribute to alignment research as a researcher and/or software engineer (the line between the two can be fuzzy in some contexts). \nThere are (not necessarily easy-to-get) jobs along these lines at major AI labs, in established academic labs, and at independent nonprofits (examples in footnote).2\nDifferent institutions will have very different approaches to research, very different environments and philosophies, etc. so it’s hard to generalize about what might make someone a fit. A few high-level points:\nIt takes a lot of talent to get these jobs, but you shouldn’t assume that it takes years of experience in a particular field (or a particular degree). \nI’ve seen a number of people switch over from other fields (such as physics) and become successful extremely quickly. \n \nIn addition to on-the-job training, there are independent programs specifically aimed at helping people skill up quickly.3\nYou also shouldn’t assume that these jobs are only for “scientist” types - there’s a substantial need for engineers, which I expect to grow.\nI think most people working on alignment consider a lot of other people’s work to be useless at best. This seems important to know going in for a few reasons. \nYou shouldn’t assume that all work is useless just because the first examples you see seem that way.\n \nIt’s good to be aware that whatever you end up doing, someone will probably dunk on your work on the Internet. \n \nAt the same time, you shouldn’t assume that your work is helpful because it’s “safety research.” It's worth investing a lot in understanding how any particular research you're doing could be helpful (and how it could fail). \nI’d even suggest taking regular dedicated time (a day every few months?) to pause working on the day-to-day and think about how your work fits into the big picture.\nFor a sense of what work I think is most likely to be useful, I’d suggest my piece on why AI safety seems hard to measure - I’m most excited about work that directly tackles the challenges outlined in that piece, and I’m pretty skeptical of work that only looks good with those challenges assumed away. (Also see my piece on broad categories of research I think have a chance to be highly useful, and some comments from a while ago that I still mostly endorse.) \nI also want to call out a couple of categories of research that are getting some attention today, but seem at least a bit under-invested in, even relative to alignment research:\nThreat assessment research. To me, there’s an important distinction between “Making AI systems safer” and “Finding out how dangerous they might end up being.” (Today, these tend to get lumped together under “alignment research.”) \nA key approach to medical research is using model organisms - for example, giving cancer to mice, so we can see whether we’re able to cure them. \n \nAnalogously, one might deliberately (though carefully!4) design an AI system to deceive and manipulate humans, so we can (a) get a more precise sense of what kinds of training dynamics lead to deception and manipulation; (b) see whether existing safety techniques are effective countermeasures.\n \nIf we had concrete demonstrations of AI systems becoming deceptive/manipulative/power-seeking, we could potentially build more consensus for caution (e.g., standards and monitoring). Or we could imaginably produce evidence that the threat is low.5\nA couple of early examples of threat assessment research: here and here.\nAnti-misuse research. \nI’ve written about how we could face catastrophe even from aligned AI. That is - even if AI does what its human operators want it to be doing, maybe some of its human operators want it to be helping them build bioweapons, spread propaganda, etc. \n \nBut maybe it’s possible to train AIs so that they’re hard to use for purposes like this - a separate challenge from training them to avoid deceiving and manipulating their human operators. \n \nIn practice, a lot of the work done on this today (example) tends to get called “safety” and lumped in with alignment (and sometimes the same research helps with both goals), but again, I think it’s a distinction worth making.\n \nI expect the earliest and easiest versions of this work to happen naturally as companies try to make their AI models fit for commercialization - but at some point it might be important to be making more intense, thorough attempts to prevent even very rare (but catastrophic) misuse.\nInformation security careers. There’s a big risk that a powerful AI system could be “stolen” via hacking/espionage, and this could make just about every kind of risk worse. I think it could be very challenging - but possible - for AI projects to be secure against this threat. (More above.)\nI really think security is not getting enough attention from people concerned about AI risk, and I disagree with the idea that key security problems can be solved just by hiring from today’s security industry.\nFrom what I’ve seen, AI companies have a lot of trouble finding good security hires. I think a lot of this is simply that security is challenging and valuable, and demand for good hires (especially people who can balance security needs against practical needs) tends to swamp supply. \nAnd yes, this means good security people are well-paid!\nAdditionally, AI could present unique security challenges in the future, because it requires protecting something that is simultaneously (a) fundamentally just software (not e.g. uranium), and hence very hard to protect; (b) potentially valuable enough that one could imagine very well-resourced state programs going all-out to steal it, with a breach having globally catastrophic consequences. I think trying to get out ahead of this challenge, by experimenting early on with approaches to it, could be very important.\nIt’s plausible to me that security is as important as alignment right now, in terms of how much one more good person working on it will help. \nAnd security is an easier path, because one can get mentorship from a large community of security people working on things other than AI.6\nI think there’s a lot of potential value both in security research (e.g., developing new security techniques) and in simply working at major AI companies to help with their existing security needs.\nFor more on this topic, see this recent 80,000 hours report and this 2019 post by two of my coworkers.\nOther jobs at AI companies. AI companies hire for a lot of roles, many of which don’t require any technical skills. \nIt’s a somewhat debatable/tricky path to take a role that isn’t focused specifically on safety or security. Some people believe7 that you can do more harm than good this way, by helping companies push forward with building dangerous AI before the risks have gotten much attention or preparation - and I think this is a pretty reasonable take. \nAt the same time:\nYou could argue something like: “Company X has potential to be a successful, careful AI project. That is, it’s likely to deploy powerful AI systems more carefully and helpfully than others would, and use them to reduce risks by automating alignment research and other risk-reducing tasks. Furthermore, Company X is most likely to make a number of other decisions wisely as things develop. So, it’s worth accepting that Company X is speeding up AI progress, because of the hope that Company X can make things go better.” This obviously depends on how you feel about Company X compared to others!\nWorking at Company X could also present opportunities to influence Company X. If you’re a valuable contributor and you are paying attention to the choices the company is making (and speaking up about them), you could affect the incentives of leadership. \nI think this can be a useful thing to do in combination with the other things on this list, but I generally wouldn’t advise taking a job if this is one’s main goal. \nWorking at an AI company presents opportunities to become generally more knowledgeable about AI, possibly enabling a later job change to something else.\n(Click to expand) How a careful AI project could be helpful\nIn addition to using advanced AI to do AI safety research (noted above), an AI project could:\nPut huge effort into designing tests for signs of danger, and - if it sees danger signs in its own systems - warning the world as a whole.\nOffer deals to other AI companies/projects. E.g., acquiring them or exchanging a share of its profits for enough visibility and control to ensure that they don’t deploy dangerous AI systems.\nUse its credibility as the leading company to lobby the government for helpful measures (such as enforcement of a monitoring-and-standards regime), and to more generally highlight key issues and advocate for sensible actions.\nTry to ensure (via design, marketing, customer choice, etc.) that its AI systems are not used for dangerous ends, and are used on applications that make the world safer and better off. This could include defensive deployment to reduce risks from other AIs; it could include using advanced AI systems to help it gain clarity on how to get a good outcome for humanity; etc.\nAn AI project with a dominant market position could likely make a huge difference via things like the above (and probably via many routes I haven’t thought of). And even an AI project that is merely one of several leaders could have enough resources and credibility to have a lot of similar impacts - especially if it’s able to “lead by example” and persuade other AI projects (or make deals with them) to similarly prioritize actions like the above.\nA challenge here is that I’m envisioning a project with two arguably contradictory properties: being careful (e.g., prioritizing actions like the above over just trying to maintain its position as a profitable/cutting-edge project) and successful (being a profitable/cutting-edge project). In practice, it could be very hard for an AI project to walk the tightrope of being aggressive enough to be a “leading” project (in the sense of having lots of resources, credibility, etc.), while also prioritizing actions like the above (which mostly, with some exceptions, seem pretty different from what an AI project would do if it were simply focused on its technological lead and profitability).\n \n80,000 Hours has a collection of anonymous advice on how to think about the pros and cons of working at an AI company.\nIn a future piece, I’ll discuss what I think AI companies can be doing today to prepare for transformative AI risk. This could be helpful for getting a sense of what an unusually careful AI company looks like.\nJobs in government and at government-facing think tanks. I think there is a lot of value in providing quality advice to governments (especially the US government) on how to think about AI - both today’s systems and potential future ones. \nI also think it could make sense to work on other technology issues in government, which could be a good path to working on AI later (I expect government attention to AI to grow over time). \nPeople interested in careers like these can check out Open Philanthropy’s Technology Policy Fellowships and RAND Corporation's Technology and Security Policy Fellows.\nOne related activity that seems especially valuable: understanding the state of AI in countries other than the one you’re working for/in - particularly countries that (a) have a good chance of developing their own major AI projects down the line; (b) are difficult to understand much about by default. \nHaving good information on such countries could be crucial for making good decisions, e.g. about moving cautiously vs. racing forward vs. trying to enforce safety standards internationally. \nI think good work on this front has been done by the Center for Security and Emerging Technology8 among others. \nA future piece will discuss other things I think governments can be doing today to prepare for transformative AI risk. I won’t have a ton of tangible recommendations quite yet, but I expect there to be more over time, especially if and when standards and monitoring frameworks become better-developed.\nJobs in politics. The previous category focused on advising governments; this one is about working on political campaigns, doing polling analysis, etc. to generally improve the extent to which sane and reasonable people are in power. Obviously, it’s a judgment call which politicians are the “good” ones and which are the “bad” ones, but I didn’t want to leave out this category of work.\nForecasting. I’m intrigued by organizations like Metaculus, HyperMind, Good Judgment,9 Manifold Markets, and Samotsvety - all trying, in one way or another, to produce good probabilistic forecasts (using generalizable methods10) about world events. \nIf we could get good forecasts about questions like “When will AI systems be powerful enough to defeat all of humanity?” and “Will AI safety research in category X be successful?”, this could be useful for helping people make good decisions. (These questions seem very hard to get good predictions on using these organizations’ methods, but I think it’s an interesting goal.)\nTo explore this area, I’d suggest learning about forecasting generally (Superforecasting is a good starting point) and building up your own prediction track record on sites such as the above.\n“Meta” careers. There are a number of jobs focused on helping other people learn about key issues, develop key skills and end up in helpful jobs (a bit more discussion here).\nIt can also make sense to take jobs that put one in a good position to donate to nonprofits doing important work, to spread helpful messages, and to build skills that could be useful later (including in unexpected ways, as things develop), as I’ll discuss below.\nLow-guidance jobs\nThis sub-section lists some projects that either don’t exist (but seem like they ought to), or are in very embryonic stages. So it’s unlikely you can get any significant mentorship working on these things. \nI think the potential impact of making one of these work is huge, but I think most people will have an easier time finding a fit with jobs from the previous section (which is why I listed those first). \nThis section is largely to illustrate that I expect there to be more and more ways to be helpful as time goes on - and in case any readers feel excited and qualified to tackle these projects themselves, despite a lack of guidance and a distinct possibility that a project will make less sense in reality than it does on paper.\nA big one in my mind is developing safety standards that could be used in a standards and monitoring regime. By this I mean answering questions like:\nWhat observations could tell us that AI systems are getting dangerous to humanity (whether by pursuing aims of their own or by helping humans do dangerous things)? \nA starting-point question: why do we believe today’s systems aren’t dangerous? What, specifically, are they unable to do that they’d have to do in order to be dangerous, and how will we know when that’s changed?\nOnce AI systems have potential for danger, how should they be restricted, and what conditions should AI companies meet (e.g., demonstrations of safety and security) in order to loosen restrictions?\nThere is some early work going on along these lines, at both AI companies and nonprofits. If it goes well, I expect that there could be many jobs in the future, doing things like:\nContinuing to refine and improve safety standards as AI systems get more advanced.\nProviding AI companies with “audits” - examinations of whether their systems meet standards, provided by parties outside the company to reduce conflicts of interest.\nAdvocating for the importance of adherence to standards. This could include advocating for AI companies to abide by standards, and potentially for government policies to enforce standards.\nOther public goods for AI projects. I can see a number of other ways in which independent organizations could help AI projects exercise more caution / do more to reduce risks:\nFacilitating safety research collaborations. I worry that at some point, doing good alignment research will only be possible with access to state-of-the-art AI models - but such models will be extraordinarily expensive and exclusively controlled by major AI companies. \nI hope AI companies will be able to partner with outside safety researchers (not just rely on their own employees) for alignment research, but this could get quite tricky due to concerns about intellectual property leaks. \n \nA third-party organization could do a lot of the legwork of vetting safety researchers, helping them with their security practices, working out agreements with respect to intellectual property, etc. to make partnerships - and selective information sharing, more broadly - more workable.\nEducation for key people at AI companies. An organization could help employees, investors, and board members of AI companies learn about the potential risks and challenges of advanced AI systems. I’m especially excited about this for board members, because: \nI’ve already seen a lot of interest from AI companies in forming strong ethics advisory boards, and/or putting well-qualified people on their governing boards (see footnote for the difference11). I expect demand to go up.\n \nRight now, I don’t think there are a lot of people who are both (a) prominent and “fancy” enough to be considered for such boards; (b) highly thoughtful about, and well-versed in, what I consider some of the most important risks of transformative AI (covered in this piece and the series it’s part of).\n \nAn “education for potential board members” program could try to get people quickly up to speed on good board member practices generally, on risks of transformative AI, and on the basics of how modern AI works.\nHelping share best practices across AI companies. A third-party organization might collect information about how different AI companies are handling information security, alignment research, processes for difficult decisions, governance, etc. and share it across companies, while taking care to preserve confidentiality. I’m particularly interested in the possibility of developing and sharing innovative governance setups for AI companies.\nThinking and stuff. There’s tons of potential work to do in the category of “coming up with more issues we ought to be thinking about, more things people (and companies and governments) can do to be helpful, etc.”\nAbout a year ago, I published a list of research questions that could be valuable and important to gain clarity on. I still mostly endorse this list (though I wouldn’t write it just as is today).\nA slightly different angle: it could be valuable to have more people thinking about the question, “What are some tangible policies governments could enact to be helpful?” E.g., early steps towards standards and monitoring. This is distinct from advising governments directly (it's earlier-stage).\nSome AI companies have policy teams that do work along these lines. And a few Open Philanthropy employees work on topics along the lines of the first bullet point. However, I tend to think of this work as best done by people who need very little guidance (more at my discussion of wicked problems), so I’m hesitant to recommend it as a mainline career option.\nThings you can do if you’re not ready for a full-time career change\nSwitching careers is a big step, so this section lists some ways you can be helpful regardless of your job - including preparing yourself for a later switch.\nFirst and most importantly, you may have opportunities to spread key messages via social media, talking with friends and colleagues, etc. I think there’s a lot of potential to make a difference here, and I wrote a previous post on this specifically.\nSecond, you can explore potential careers like those I discuss above. I’d suggest generally checking out job postings, thinking about what sorts of jobs might be a fit for you down the line, meeting people who work in jobs like those and asking them about their day-to-day, etc.\nRelatedly, you can try to keep your options open. \nIt’s hard to predict what skills will be useful as AI advances further and new issues come up. \nBeing ready to switch careers when a big opportunity comes up could be hugely valuable - and hard. (Most people would have a lot of trouble doing this late in their career, no matter how important!) \nBuilding up the financial, psychological and social ability to change jobs later on would (IMO) be well worth a lot of effort.\nRight now there aren’t a lot of obvious places to donate (though you can donate to the Long-Term Future Fund12 if you feel so moved). \nI’m guessing this will change in the future, for a number of reasons.13\nSomething I’d consider doing is setting some pool of money aside, perhaps invested such that it’s particularly likely to grow a lot if and when AI systems become a lot more capable and impressive,14 in case giving opportunities come up in the future. \nYou can also, of course, donate to things today that others aren’t funding for whatever reason.\nLearning more about key issues could broaden your options. I think the full series I’ve written on key risks is a good start. To do more, you could:\nActively engage with this series by writing your own takes, discussing with others, etc.\nConsider various online courses15 on relevant issues.\nI think it’s also good to get as familiar with today’s AI systems (and the research that goes into them) as you can. \nIf you’re happy to write code, you can check out coding-intensive guides and programs (examples in footnote).16\nIf you don’t want to code but can read somewhat technical content, I’d suggest getting oriented with some basic explainers on deep learning17 and then reading significant papers on AI and AI safety.18\nWhether you’re very technical or not at all, I think it’s worth playing with public state-of-the-art AI models, as well as seeing highlights of what they can do via Twitter and such. \nFinally, if you happen to have opportunities to serve on governing boards or advisory boards for key organizations (e.g., AI companies), I think this is one of the best non-full-time ways to help. \nI don’t expect this to apply to most people, but wanted to mention it in case any opportunities come up. \nIt’s particularly important, if you get a role like this, to invest in educating yourself on key issues.\nSome general advice\nI think full-time work has huge potential to help, but also big potential to do harm, or to burn yourself out. So here are some general suggestions.\nThink about your own views on the key risks of AI, and what it might look like for the world to deal with the risks. Most of the jobs I’ve discussed aren’t jobs where you can just take instructions and apply narrow skills. The issues here are tricky, and it takes judgment to navigate them well. \nFurthermore, no matter what you do, there will almost certainly be people who think your work is useless (if not harmful).19 This can be very demoralizing. I think it’s easier if you’ve thought things through and feel good about the choices you’re making.\nI’d advise trying to learn as much as you can about the major risks of AI (see above for some guidance on this) - and/or trying to work for an organization whose leadership you have a good amount of confidence in.\nJog, don’t sprint. Skeptics of the “most important century” hypothesis will sometimes say things like “If you really believe this, why are you working normal amounts of hours instead of extreme amounts? Why do you have hobbies (or children, etc.) at all?” And I’ve seen a number of people with an attitude like: “THIS IS THE MOST IMPORTANT TIME IN HISTORY. I NEED TO WORK 24/7 AND FORGET ABOUT EVERYTHING ELSE. NO VACATIONS.\"\nI think that’s a very bad idea. \nTrying to reduce risks from advanced AI is, as of today, a frustrating and disorienting thing to be doing. It’s very hard to tell whether you’re being helpful (and as I’ve mentioned, many will inevitably think you’re being harmful). \nI think the difference between “not mattering,” “doing some good” and “doing enormous good” comes down to how you choose the job, how good at it you are, and how good your judgment is (including what risks you’re most focused on and how you model them). Going “all in” on a particular objective seems bad on these fronts: it poses risks to open-mindedness, to mental health and to good decision-making (I am speaking from observations here, not just theory). \nThat is, I think it’s a bad idea to try to be 100% emotionally bought into the full stakes of the most important century - I think the stakes are just too high for that to make sense for any human being. \nInstead, I think the best way to handle “the fate of humanity is at stake” is probably to find a nice job and work about as hard as you’d work at another job, rather than trying to make heroic efforts to work extra hard. (I criticized heroic efforts in general here.) \nI think this basic formula (working in some job that is a good fit, while having some amount of balance in your life) is what’s behind a lot of the most important positive events in history to date, and presents possibly historically large opportunities today.\nSpecial thanks to Alexander Berger, Jacob Eliosoff, Alexey Guzey, Anton Korinek and Luke Muelhauser for especially helpful comments on this post. A lot of other people commented helpfully as well. Footnotes\n I use “aligned” to specifically mean that AIs behave as intended, rather than pursuing dangerous goals of their own. I use “safe” more broadly to mean that an AI system poses little risk of catastrophe for any reason in the context it’s being used in. It’s OK to mostly think of them as interchangeable in this post. ↩\n AI labs with alignment teams: Anthropic, DeepMind and OpenAI. Disclosure: my wife is co-founder and President of Anthropic, and used to work at OpenAI (and has shares in both companies); OpenAI is a former Open Philanthropy grantee.\n Academic labs: there are many of these; I’ll highlight the Steinhardt lab at Berkeley (Open Philanthropy grantee), whose recent research I’ve found especially interesting.\n Independent nonprofits: examples would be Alignment Research Center and Redwood Research (both Open Philanthropy grantees, and I sit on the board of both).\n  ↩\n Examples: AGI Safety Fundamentals, SERI MATS, MLAB (all of which have been supported by Open Philanthropy) ↩\n On one hand, deceptive and manipulative AIs could be dangerous. On the other, it might be better to get AIs trying to deceive us before they can consistently succeed; the worst of all worlds might be getting this behavior by accident with very powerful AIs. ↩\n Though I think it’s inherently harder to get evidence of low risk than evidence of high risk, since it’s hard to rule out risks arising as AI systems get more capable. ↩\n Why do I simultaneously think “This is a mature field with mentorship opportunities” and “This is a badly neglected career track for helping with the most important century”?\n In a nutshell, most good security people are not working on AI. It looks to me like there are plenty of people who are generally knowledgeable and effective at good security, but there’s also a huge amount of need for such people outside of AI specifically. \n I expect this to change eventually if AI systems become extraordinarily capable. The issue is that it might be too late at that point - the security challenges in AI seem daunting (and somewhat AI-specific) to the point where it could be important for good people to start working on them many years before AI systems become extraordinarily powerful. ↩\nHere’s Katja Grace arguing along these lines. ↩\n An Open Philanthropy grantee. ↩\n Open Philanthropy has funded Metaculus and contracted with Good Judgment and HyperMind. ↩\n That is, these groups are mostly trying things like “Incentivize people to make good forecasts; track how good people are making forecasts; aggregate forecasts” rather than “Study the specific topic of AI and make forecasts that way” (the latter is also useful, and I discuss it below). ↩\n The governing board of an organization has the hard power to replace the CEO and/or make other decisions on behalf of the organization. An advisory board merely gives advice, but in practice I think this can be quite powerful, since I’d expect many organizations to have a tough time doing bad-for-the-world things without backlash (from employees and the public) once an advisory board has recommended against them. ↩\nOpen Philanthropy, which I’m co-CEO of, has supported this fund, and its current Chair is an Open Philanthropy employee. ↩\n I generally expect there to be more and more clarity about what actions would be helpful, and more and more people willing to work on them if they can get funded. A bit more specifically and speculatively, I expect AI safety research to get more expensive as it requires access to increasingly large, expensive AI models. ↩\n Not investment advice! I would only do this with money you’ve set aside for donating such that it wouldn’t be a personal problem if you lost it all. ↩\n Some options here, here, here, here. I’ve made no attempt to be comprehensive - these are just some links that should make it easy to get rolling and see some of your options. ↩\nSpinning Up in Deep RL, ML for Alignment Bootcamp, Deep Learning Curriculum. ↩\n For the basics, I like Michael Nielsen’s guide to neural networks and deep learning; 3Blue1Brown has a video explainer series that I haven’t watched but that others have recommended highly. I’d also suggest The Illustrated Transformer (the transformer is the most important AI architecture as of today).\n For a broader overview of different architectures, see Neural Network Zoo. \n You can also check out various Coursera etc. courses on deep learning/neural networks. ↩\n I feel like the easiest way to do this is to follow AI researchers and/or top labs on Twitter. You can also check out Alignment Newsletter or ML Safety Newsletter for alignment-specific content. ↩\n Why? \n One reason is the tension between the “caution” and “competition” frames: people who favor one frame tend to see the other as harmful.\n Another reason: there are a number of people who think we’re more-or-less doomed without a radical conceptual breakthrough on how to build safe AI (they think the sorts of approaches I list here are hopeless, for reasons I confess I don’t understand very well). These folks will consider anything that isn’t aimed at a radical breakthrough ~useless, and consider some of the jobs I list in this piece to be harmful, if they are speeding up AI development and leaving us with less time for a breakthrough. \n At the same time, working toward the sort of breakthrough these folks are hoping for means doing pretty esoteric, theoretical research that many other researchers think is clearly useless. \n And trying to make AI development slower and/or more cautious is harmful according to some people who are dismissive of risks, and think the priority is to push forward as fast as we can with technology that has the potential to improve lives. ↩\n", "url": "https://www.cold-takes.com/jobs-that-can-help-with-the-most-important-century/", "title": "Jobs that can help with the most important century", "source": "cold.takes", "source_type": "blog", "date_published": "2023-02-10", "id": "3318b38b47f1eb45be5d6bb59f62ed24"} -{"text": "In the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.\nIn this more recent series, I’ve been trying to help answer this question: “So what? What can I do to help?” \nSo far, I’ve just been trying to build a picture of some of the major risks we might face (especially the risk of misaligned AI that could defeat all of humanity), what might be challenging about these risks, and why we might succeed anyway. Now I’ve finally gotten to the part where I can start laying out tangible ideas for how to help (beyond the pretty lame suggestions I gave before).\nThis piece is about one broad way to help: spreading messages that ought to be more widely understood.\nOne reason I think this topic is worth a whole piece is that practically everyone can help with spreading messages at least some, via things like talking to friends; writing explanations of your own that will appeal to particular people; and, yes, posting to Facebook and Twitter and all of that. Call it slacktivism if you want, but I’d guess it can be a big deal: many extremely important AI-related ideas are understood by vanishingly small numbers of people, and a bit more awareness could snowball. Especially because these topics often feel too “weird” for people to feel comfortable talking about them! Engaging in credible, reasonable ways could contribute to an overall background sense that it’s OK to take these ideas seriously.\nAnd then there are a lot of potential readers who might have special opportunities to spread messages. Maybe they are professional communicators (journalists, bloggers, TV writers, novelists, TikTokers, etc.), maybe they’re non-professionals who still have sizable audiences (e.g., on Twitter), maybe they have unusual personal and professional networks, etc. Overall, the more you feel you are good at communicating with some important audience (even a small one), the more this post is for you.\nThat said, I’m not excited about blasting around hyper-simplified messages. As I hope this series has shown, the challenges that could lie ahead of us are complex and daunting, and shouting stuff like “AI is the biggest deal ever!” or “AI development should be illegal!” could do more harm than good (if only by associating important ideas with being annoying). Relatedly, I think it’s generally not good enough to spread the most broad/relatable/easy-to-agree-to version of each key idea, like “AI systems could harm society.” Some of the unintuitive details are crucial. \nInstead, the gauntlet I’m throwing is: “find ways to help people understand the core parts of the challenges we might face, in as much detail as is feasible.” That is: the goal is to try to help people get to the point where they could maintain a reasonable position in a detailed back-and-forth, not just to get them to repeat a few words or nod along to a high-level take like “AI safety is important.” This is a lot harder than shouting “AI is the biggest deal ever!”, but I think it’s worth it, so I’m encouraging people to rise to the challenge and stretch their communication skills.\nBelow, I will:\nOutline some general challenges of this sort of message-spreading. \nGo through some ideas I think it’s risky to spread too far, at least in isolation.\nGo through some of the ideas I’d be most excited to see spread.\nTalk a little bit about how to spread ideas - but this is mostly up to you.\nChallenges of AI-related messages\nHere’s a simplified story for how spreading messages could go badly. \nYou’re trying to convince your friend to care more about AI risk.\nYou’re planning to argue: (a) AI could be really powerful and important within our lifetimes; (b) Building AI too quickly/incautiously could be dangerous. \nYour friend just isn’t going to care about (b) if they aren’t sold on some version of (a). So you’re starting with (a).\nUnfortunately, (a) is easier to understand than (b). So you end up convincing your friend of (a), and not (yet) (b).\nYour friend announces, “Aha - I see that AI could be tremendously powerful and important! I need to make sure that people/countries I like are first to build it!” and runs off to help build powerful AI as fast as possible. They’ve chosen the competition frame (“will the right or the wrong people build powerful AI first?”) over the caution frame (“will we screw things up and all lose?”), because the competition frame is easier to understand.\nWhy is this bad? See previous pieces on the importance of caution.\n(Click to expand) More on the “competition” frame vs. the “caution” frame”\nIn a previous piece, I talked about two contrasting frames for how to make the best of the most important century:\nThe caution frame. This frame emphasizes that a furious race to develop powerful AI could end up making everyone worse off. This could be via: (a) AI forming dangerous goals of its own and defeating humanity entirely; (b) humans racing to gain power and resources and “lock in” their values.\nIdeally, everyone with the potential to build something powerful enough AI would be able to pour energy into building something safe (not misaligned), and carefully planning out (and negotiating with others on) how to roll it out, without a rush or a race. With this in mind, perhaps we should be doing things like:\nWorking to improve trust and cooperation between major world powers. Perhaps via AI-centric versions of Pugwash (an international conference aimed at reducing the risk of military conflict), perhaps by pushing back against hawkish foreign relations moves.\nDiscouraging governments and investors from shoveling money into AI research, encouraging AI labs to thoroughly consider the implications of their research before publishing it or scaling it up, working toward standards and monitoring, etc. Slowing things down in this manner could buy more time to do research on avoiding misaligned AI, more time to build trust and cooperation mechanisms, and more time to generally gain strategic clarity \nThe “competition” frame. This frame focuses less on how the transition to a radically different future happens, and more on who's making the key decisions as it happens.\nIf something like PASTA is developed primarily (or first) in country X, then the government of country X could be making a lot of crucial decisions about whether and how to regulate a potential explosion of new technologies.\nIn addition, the people and organizations leading the way on AI and other technology advancement at that time could be especially influential in such decisions.\nThis means it could matter enormously \"who leads the way on transformative AI\" - which country or countries, which people or organizations.\nSome people feel that we can make confident statements today about which specific countries, and/or which people and organizations, we should hope lead the way on transformative AI. These people might advocate for actions like:\nIncreasing the odds that the first PASTA systems are built in countries that are e.g. less authoritarian, which could mean e.g. pushing for more investment and attention to AI development in these countries.\nSupporting and trying to speed up AI labs run by people who are likely to make wise decisions (about things like how to engage with governments, what AI systems to publish and deploy vs. keep secret, etc.)\nTension between the two frames. People who take the \"caution\" frame and people who take the \"competition\" frame often favor very different, even contradictory actions. Actions that look important to people in one frame often look actively harmful to people in the other.\nFor example, people in the \"competition\" frame often favor moving forward as fast as possible on developing more powerful AI systems; for people in the \"caution\" frame, haste is one of the main things to avoid. People in the \"competition\" frame often favor adversarial foreign relations, while people in the \"caution\" frame often want foreign relations to be more cooperative.\nThat said, this dichotomy is a simplification. Many people - including myself - resonate with both frames. But I have a general fear that the “competition” frame is going to be overrated by default for a number of reasons, as I discuss here.\nUnfortunately, I’ve seen something like the above story play out in multiple significant instances (though I shouldn’t give specific examples). \nAnd I’m especially worried about this dynamic when it comes to people in and around governments (especially in national security communities), because I perceive governmental culture as particularly obsessed with staying ahead of other countries (“If AI is dangerous, we’ve gotta build it first”) and comparatively uninterested in things that are dangerous for our country because they’re dangerous for the whole world at once (“Maybe we should worry a lot about pandemics?”)1\nYou could even argue (although I wouldn’t agree!2) that to date, efforts to “raise awareness” about the dangers of AI have done more harm than good (via causing increased investment in AI, generally).\nSo it’s tempting to simply give up on the whole endeavor - to stay away from message spreading entirely, beyond people you know well and/or are pretty sure will internalize the important details. But I think we can do better.\nThis post is aimed at people who are good at communicating with at least some audience. This could be because of their skills, or their relationships, or some combination. In general, I’d expect to have more success with people who hear from you a lot (because they’re your friend, or they follow you on Twitter or Substack, etc.) than with people you reach via some viral blast of memery - but maybe you’re skilled enough to make the latter work too, which would be awesome. I'm asking communicators to hit a high bar: leave people with strong understanding, rather than just getting them to repeat a few sentences about AI risk.\nMessages that seem risky to spread in isolation\nFirst, here are a couple of messages that I’d rather people didn’t spread (or at least have mixed feelings about spreading) in isolation, i.e., without serious efforts to include some of the other messages I cover below.\nOne category is messages that generically emphasize the importance and potential imminence of powerful AI systems. The reason for this is in the previous section: many people seem to react to these ideas (especially when unaccompanied by some other key ones) with a “We’d better build powerful AI as fast as possible, before others do” attitude. (If you’re curious about why I wrote The Most Important Century anyway, see footnote for my thinking.3)\nAnother category is messages that emphasize that AI could be risky/dangerous to the world, without much effort to fill in how, or with an emphasis on easy-to-understand risks. \nSince “dangerous” tends to imply “powerful and important,” I think there are similar risks to the previous section. \nIf people have a bad model of how and why AI could be risky/dangerous (missing key risks and difficulties), they might be too quick to later say things like “Oh, turns out this danger is less bad than I thought, let’s go full speed ahead!” Below, I outline how misleading “progress” could lead to premature dismissal of the risks.\nMessages that seem important and helpful (and right!)\nWe should worry about conflict between misaligned AI and all humans\nUnlike the messages discussed in the previous section, this one directly highlights why it might not be a good idea to rush forward with building AI oneself. \nThe idea that an AI could harm the same humans who build it has very different implications from the idea that AI could be generically dangerous/powerful. Less “We’d better get there before others,” more “there’s a case for moving slowly and working together here.”\nThe idea that AI could be a problem for the same people who build it is common in fictional portrayals of AI (HAL 9000, Skynet, The Matrix, Ex Machina) - maybe too much so? It seems to me that people tend to balk at the “sci-fi” feel, and what’s needed is more recognition that this is a serious, real-world concern.\nThe main pieces in this series making this case are Why would AI “aim” to defeat humanity? and AI could defeat all of us combined. There are many other pieces on the alignment problem (see list here); also see Matt Yglesias's case for specifically embracing the “Terminator”/Skynet analogy.\nI’d be especially excited for people to spread messages that help others understand - at a mechanistic level - how and why AI systems could end up with dangerous goals of their own, deceptive behavior, etc. I worry that by default, the concern sounds like lazy anthropomorphism (thinking of AIs just like humans).\nTransmitting ideas about the “how and why” is a lot harder than getting people to nod along to “AI could be dangerous.” I think there’s a lot of effort that could be put into simple, understandable yet relatable metaphors/analogies/examples (my pieces make some effort in this direction, but there’s tons of room for more).\nAIs could behave deceptively, so “evidence of safety” might be misleading\nI’m very worried about a sequence of events like:\nAs AI systems become more powerful, there are some concerning incidents, and widespread concern about “AI risk” grows.\nBut over time, AI systems are “better trained” - e.g., given reinforcement to stop them from behaving in unintended ways - and so the concerning incidents become less common.\nBecause of this, concern dissipates, and it’s widely believed that AI safety has been “solved.”\nBut what’s actually happened is that the “better training” has caused AI systems to behave deceptively - to appear benign in most situations, and to cause trouble only when (a) this wouldn’t be detected or (b) humans can be overpowered entirely.\nI worry about AI systems’ being deceptive in the same way a human might: going through chains of reasoning like “If I do X, I might get caught, but if I do Y, no one will notice until it’s too late.” But it can be hard to get this concern taken seriously, because it means attributing behavior to AI systems that we currently associate exclusively with humans (today’s AI systems don’t really do things like this4).\nOne of the central things I’ve tried to spell out in this series is why an AI system might engage in this sort of systematic deception, despite being very unlike humans (and not necessarily having e.g. emotions). It’s a major focus of both of these pieces from this series:\nWhy would AI “aim” to defeat humanity?\nAI Safety Seems Hard to Measure\nWhether this point is widely understood seems quite crucial to me. We might end up in a situation where (a) there are big commercial and military incentives to rush ahead with AI development; (b) we have what seems like a set of reassuring experiments and observations. \nAt that point, it could be key whether people are asking tough questions about the many ways in which “evidence of AI safety” could be misleading, which I discussed at length in AI Safety Seems Hard to Measure.\n(Click to expand) Why AI safety could be hard to measure\nIn previous pieces, I argued that:\nIf we develop powerful AIs via ambitious use of the “black-box trial-and-error” common in AI development today, then there’s a substantial risk that: \nThese AIs will develop unintended aims (states of the world they make calculations and plans toward, as a chess-playing AI \"aims\" for checkmate);\n \nThese AIs could deceive, manipulate, and even take over the world from humans entirely as needed to achieve those aims.\nPeople today are doing AI safety research to prevent this outcome, but such research has a number of deep difficulties:\n“Great news - I’ve tested this AI and it looks safe.” Why might we still have a problem?\n \nProblem\nKey question\nExplanation\nThe Lance Armstrong problem\nDid we get the AI to be actually safe or good at hiding its dangerous actions?\nWhen dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “appearing to behave well.”\nWhen professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.\n \nThe King Lear problem\nThe AI is (actually) well-behaved when humans are in control. Will this transfer to when AIs are in control?\nIt's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't. \nAIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to take control of the world entirely. It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation. \nLike King Lear trying to decide how much power to give each of his daughters before abdicating the throne.\n \nThe lab mice problem\nToday's \"subhuman\" AIs are safe.What about future AIs with more human-like abilities?\nToday's AI systems aren't advanced enough to exhibit the basic behaviors we want to study, such as deceiving and manipulating humans.\nLike trying to study medicine in humans by experimenting only on lab mice.\n \nThe first contact problem\nImagine that tomorrow's \"human-like\" AIs are safe. How will things go when AIs have capabilities far beyond humans'?\nAI systems might (collectively) become vastly more capable than humans, and it's ... just really hard to have any idea what that's going to be like. As far as we know, there has never before been anything in the galaxy that's vastly more capable than humans in the relevant ways! No matter what we come up with to solve the first three problems, we can't be too confident that it'll keep working if AI advances (or just proliferates) a lot more. \nLike trying to plan for first contact with extraterrestrials (this barely feels like an analogy).\n \nAn analogy that incorporates these challenges is Ajeya Cotra’s “young businessperson” analogy:\nImagine you are an eight-year-old whose parents left you a $1 trillion company and no trusted adult to serve as your guide to the world. You must hire a smart adult to run your company as CEO, handle your life the way that a parent would (e.g. decide your school, where you’ll live, when you need to go to the dentist), and administer your vast wealth (e.g. decide where you’ll invest your money).\n You have to hire these grownups based on a work trial or interview you come up with -- you don't get to see any resumes, don't get to do reference checks, etc. Because you're so rich, tons of people apply for all sorts of reasons. (More)\nIf your applicants are a mix of \"saints\" (people who genuinely want to help), \"sycophants\" (people who just want to make you happy in the short run, even when this is to your long-term detriment) and \"schemers\" (people who want to siphon off your wealth and power for themselves), how do you - an eight-year-old - tell the difference?\nMore: AI safety seems hard to measure\nAI projects should establish and demonstrate safety (and potentially comply with safety standards) before deploying powerful systems\nI’ve written about the benefits we might get from “safety standards.\" The idea is that AI projects should not deploy systems that pose too much risk to the world, as evaluated by a systematic evaluation regime: AI systems could be audited to see whether they are safe. I've outlined how AI projects might self-regulate by publicly committing to having their systems audited (and not deploying dangerous ones), and how governments could enforce safety standards both nationally and internationally.\nToday, development of safety standards is in its infancy. But over time, I think it could matter a lot how much pressure AI projects are under to meet safety standards. And I think it’s not too early, today, to start spreading the message that AI projects shouldn’t unilaterally decide to put potentially dangerous systems out in the world; the burden should be on them to demonstrate and establish safety before doing so.\n(Click to expand) How standards might be established and become national or international \nI previously laid out a possible vision on this front, which I’ll give a slightly modified version of here:\nToday’s leading AI companies could self-regulate by committing not to build or deploy a system that they can’t convincingly demonstrate is safe (e.g., see Google’s 2018 statement, \"We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”). \nEven if some people at the companies would like to deploy unsafe systems, it could be hard to pull this off once the company has committed not to. \n \nEven if there’s a lot of room for judgment in what it means to demonstrate an AI system is safe, having agreed in advance that certain evidence is not good enough could go a long way.\nAs more AI companies are started, they could feel soft pressure to do similar self-regulation, and refusing to do so is off-putting to potential employees, investors, etc.\nEventually, similar principles could be incorporated into various government regulations and enforceable treaties.\nGovernments could monitor for dangerous projects using regulation and even overseas operations. E.g., today the US monitors (without permission) for various signs that other states might be developing nuclear weapons, and might try to stop such development with methods ranging from threats of sanctions to cyberwarfare or even military attacks. It could do something similar for any AI development projects that are using huge amounts of compute and haven’t volunteered information about whether they’re meeting standards.\nAlignment research is prosocial and great\nMost people reading this can’t go and become groundbreaking researchers on AI alignment. But they can contribute to a general sense that the people who can do this (mostly) should.\nToday, my sense is that most “science” jobs are pretty prestigious, and seen as good for society. I have pretty mixed feelings about this:\nI think science has been good for humanity historically.\nBut I worry that as technology becomes more and more powerful, there’s a growing risk of a catastrophe (particularly via AI or bioweapons) that wipes out all the progress to date and then some. (I've written that the historical trend to date arguably fits something like \"Declining everyday violence, offset by bigger and bigger rare catastrophes.\") I think our current era would be a nice time to adopt an attitude of “proceed with caution” rather than “full speed ahead.” \nI resonate with Toby Ord’s comment (in The Precipice), “humanity is akin to an adolescent, with rapidly developing physical abilities, lagging wisdom and self-control, little thought for its longterm future and an unhealthy appetite for risk.”\nI wish there were more effort, generally, to distinguish between especially dangerous science and especially beneficial science. AI alignment seems squarely in the latter category.\nI’d be especially excited for people to spread messages that give a sense of the specifics of different AI alignment research paths, how they might help or fail, and what’s scientifically/intellectually interesting (not just useful) about them.\nThe main relevant piece in this series is High-level hopes for AI alignment, which distills a longer piece (How might we align transformative AI if it’s developed very soon?) that I posted on the Alignment Forum. \nThere are a number (hopefully growing) of other careers that I consider especially valuable, which I'll discuss in my next post on this topic.\nIt might be important for companies (and other institutions) to act in unusual ways\nIn Racing through a Minefield: the AI Deployment Problem, I wrote:\nA lot of the most helpful actions might be “out of the ordinary.” When racing through a minefield, I hope key actors will:\nPut more effort into alignment, threat assessment, and security than is required by commercial incentives;\nConsider measures for avoiding races and global monitoring that could be very unusual, even unprecedented.\nDo all of this in the possible presence of ambiguous, confusing information about the risks.\nIt always makes me sweat when I’m talking to someone from an AI company and they seem to think that commercial success and benefiting humanity are roughly the same goal/idea. \n(To be clear, I don't think an AI project's only goal should be to avoid the risk of misaligned AI. I've given this risk a central place in this piece partly because I think it's especially at risk of being too quickly dismissed - but I don't think it's the only major risk. I think AI projects need to strike a tricky balance between the caution and competition frames, and consider a number of issues beyond the risk of misalignment. But I think it's a pretty robust point that they need to be ready to do unusual things rather than just following commercial incentives.)\nI’m nervous about a world in which:\nMost people stick with paradigms they know - a company should focus on shareholder value, a government should focus on its own citizens (rather than global catastrophic risks), etc.\nAs the pace of progress accelerates, we’re sitting here with all kinds of laws, norms and institutions that aren’t designed for the problems we’re facing - and can’t adapt in time. A good example would be the way governance works for a standard company: it’s legally and structurally obligated to be entirely focused on benefiting its shareholders, rather than humanity as a whole. (There are alternative ways of setting up a company without these problems!5)\nAt a minimum (as I argued previously), I think AI companies should be making sure they have whatever unusual governance setups they need in order to prioritize benefits to humanity - not returns to shareholders - when the stakes get high. I think we’d see more of this if more people believed something like: “It might be important for companies (and other institutions) to act in unusual ways.”\nWe’re not ready for this\nIf we’re in the most important century, there’s likely to be a vast set of potential challenges ahead of us, most of which have gotten very little attention. (More here: Transformative AI issues (not just misalignment): an overview)\nIf it were possible to slow everything down, by default I’d think we should. Barring that, I’d at least like to see people generally approaching the topic of AI with a general attitude along the lines of “We’re dealing with something really big here, and we should be trying really hard to be careful and humble and thoughtful” (as opposed to something like “The science is so interesting, let’s go for it” or “This is awesome, we’re gonna get rich” or “Whatever, who cares”).\nI’ll re-excerpt this table from an earlier piece:\nSituation\nAppropriate reaction (IMO)\n\"This could be a billion-dollar company!\"\n \n\"Woohoo, let's GO for it!\"\n \n\"This could be the most important century!\"\n \n\"... Oh ... wow ... I don't know what to say and I somewhat want to vomit ... I have to sit down and think about this one.\"\n \nI’m not at all sure about this, but one potential way to spread this message might be to communicate, with as much scientific realism, detail and believability as possible, about what the world might look like after explosive scientific and technological advancement brought on by AI (for example, a world with digital people). I think the enormous unfamiliarity of some of the issues such a world might face - and the vast possibilities for utopia or dystopia - might encourage an attitude of not wanting to rush forward.\nHow to spread messages like these?\nI’ve tried to write a series that explains the key issues to careful readers, hopefully better equipping them to spread helpful messages. From here, individual communicators need to think about the audiences they know and the mediums they use (Twitter? Facebook? Essays/newsletters/blog posts? Video? In-person conversation?) and what will be effective with those audiences and mediums.\nThe main guidelines I want to advocate:\nErr toward sustained, repeated, relationship-based communication as opposed to prioritizing “viral blasts” (unless you are so good at the latter that you feel excited to spread the pretty subtle ideas in this piece that way!)\nAim high: try for the difficult goal of “My audience walks away really understanding key points” rather than the easier goal of “My audience has hit the ‘like’ button for a sort of related idea.”\nA consistent piece of feedback I’ve gotten on my writing is that making things as concrete as possible is helpful - so giving real-world examples of problems analogous to the ones we’re worried about, or simple analogies that are easy to imagine and remember, could be key. But it’s important to choose these carefully so that the key dynamics aren’t lost. Footnotes\nKiller Apps and Technology Roulette are interesting pieces trying to sell policymakers on the idea that “superiority is not synonymous with security.” ↩\n When I imagine what the world would look like without any of the efforts to “raise awareness,” I picture a world with close to zero awareness of - or community around - major risks from transformative AI. While this world might also have more time left before dangerous AI is developed, on balance this seems worse. A future piece will elaborate on the many ways I think a decent-sized community can help reduce risks. ↩\n I do think “AI could be a huge deal, and soon” is a very important point that somewhat serves as a prerequisite for understanding this topic and doing helpful work on it, and I wanted to make this idea more understandable and credible to a number of people - as well as to create more opportunities to get critical feedback and learn what I was getting wrong. \n But I was nervous about the issues noted in this section. With that in mind, I did the following things:\nThe title, “most important century,” emphasizes a time frame that I expect to be less exciting/motivating for the sorts of people I’m most worried about (compared to the sorts of people I most wanted to draw in).\nI tried to persistently and centrally raise concerns about misaligned AI (raising it in two pieces, including one (guest piece) devoted to it, before I started discussing how soon transformative AI might be developed), and extensively discussed the problems of overemphasizing “competition” relative to “caution.”\nI ended the series with a piece arguing against being too “action-oriented.”\nI stuck to “passive” rather than “active” promotion of the series, e.g., I accepted podcast invitations but didn’t seek them out. I figured that people with proactive interest would be more likely to give in-depth, attentive treatments rather than low-resolution, oversimplified ones.\n I don’t claim to be sure I got all the tradeoffs right.  ↩\n There are some papers arguing that AI systems do things something like this (e.g., see the “Challenges” section of this post), but I think the dynamic is overall pretty far from what I’m most worried about. ↩\n E.g., public benefit corporation ↩\n", "url": "https://www.cold-takes.com/spreading-messages-to-help-with-the-most-important-century/", "title": "Spreading messages to help with the most important century", "source": "cold.takes", "source_type": "blog", "date_published": "2023-01-25", "id": "e66b9d7cbc2e70d805faf28327f0b5c8"} -{"text": "This post will lay out a couple of stylized stories about how, if transformative AI is developed relatively soon, this could result in global catastrophe. (By “transformative AI,” I mean AI powerful and capable enough to bring about the sort of world-changing consequences I write about in my most important century series.)\nThis piece is more about visualizing possibilities than about providing arguments. For the latter, I recommend the rest of this series.\nIn the stories I’ll be telling, the world doesn't do much advance preparation or careful consideration of risks I’ve discussed previously, especially re: misaligned AI (AI forming dangerous goals of its own). \nPeople do try to “test” AI systems for safety, and they do need to achieve some level of “safety” to commercialize. When early problems arise, they react to these problems. \nBut this isn’t enough, because of some unique challenges of measuring whether an AI system is “safe,” and because of the strong incentives to race forward with scaling up and deploying AI systems as fast as possible. \nSo we end up with a world run by misaligned AI - or, even if we’re lucky enough to avoid that outcome, other catastrophes are possible.\nAfter laying these catastrophic possibilities, I’ll briefly note a few key ways we could do better, mostly as a reminder (these topics were covered in previous posts). Future pieces will get more specific about what we can be doing today to prepare.\nBackdrop\nThis piece takes a lot of previous writing I’ve done as backdrop. Two key important assumptions (click to expand) are below; for more, see the rest of this series.\n(Click to expand) “Most important century” assumption: we’ll soon develop very powerful AI systems, along the lines of what I previously called PASTA. \nIn the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.\nI focus on a hypothetical kind of AI that I call PASTA, or Process for Automating Scientific and Technological Advancement. PASTA would be AI that can essentially automate all of the human activities needed to speed up scientific and technological advancement.\nUsing a variety of different forecasting approaches, I argue that PASTA seems more likely than not to be developed this century - and there’s a decent chance (more than 10%) that we’ll see it within 15 years or so.\nI argue that the consequences of this sort of AI could be enormous: an explosion in scientific and technological progress. This could get us more quickly than most imagine to a radically unfamiliar future.\nI’ve also argued that AI systems along these lines could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nFor more, see the most important century landing page. The series is available in many formats, including audio; I also provide a summary, and links to podcasts where I discuss it at a high level.\n(Click to expand) “Nearcasting” assumption: such systems will be developed in a world that’s otherwise similar to today’s. \nIt’s hard to talk about risks from transformative AI because of the many uncertainties about when and how such AI will be developed - and how much the (now-nascent) field of “AI safety research” will have grown by then, and how seriously people will take the risk, etc. etc. etc. So maybe it’s not surprising that estimates of the “misaligned AI” risk range from ~1% to ~99%.\nThis piece takes an approach I call nearcasting: trying to answer key strategic questions about transformative AI, under the assumption that such AI arrives in a world that is otherwise relatively similar to today's. \nYou can think of this approach like this: “Instead of asking where our ship will ultimately end up, let’s start by asking what destination it’s pointed at right now.” \nThat is: instead of trying to talk about an uncertain, distant future, we can talk about the easiest-to-visualize, closest-to-today situation, and how things look there - and then ask how our picture might be off if other possibilities play out. (As a bonus, it doesn’t seem out of the question that transformative AI will be developed extremely soon - 10 years from now or faster.1 If that’s the case, it’s especially urgent to think about what that might look like.)\nHow we could stumble into catastrophe from misaligned AI\nThis is my basic default picture for how I imagine things going, if people pay little attention to the sorts of issues discussed previously. I’ve deliberately written it to be concrete and visualizable, which means that it’s very unlikely that the details will match the future - but hopefully it gives a picture of some of the key dynamics I worry about. \nThroughout this hypothetical scenario (up until “END OF HYPOTHETICAL SCENARIO”), I use the present tense (“AIs do X”) for simplicity, even though I’m talking about a hypothetical possible future.\nEarly commercial applications. A few years before transformative AI is developed, AI systems are being increasingly used for a number of lucrative, useful, but not dramatically world-changing things. \nI think it’s very hard to predict what these will be (harder in some ways than predicting longer-run consequences, in my view),2 so I’ll mostly work with the simple example of automating customer service.\nIn this early stage, AI systems often have pretty narrow capabilities, such that the idea of them forming ambitious aims and trying to defeat humanity seems (and actually is) silly. For example, customer service AIs are mostly language models that are trained to mimic patterns in past successful customer service transcripts, and are further improved by customers giving satisfaction ratings in real interactions. The dynamics I described in an earlier piece, in which AIs are given increasingly ambitious goals and challenged to find increasingly creative ways to achieve them, don’t necessarily apply.\nEarly safety/alignment problems. Even with these relatively limited AIs, there are problems and challenges that could be called “safety issues” or “alignment issues.” To continue with the example of customer service AIs, these AIs might:\nGive false information about the products they’re providing support for. (Example of reminiscent behavior)\nGive customers advice (when asked) on how to do unsafe or illegal things. (Example)\nRefuse to answer valid questions. (This could result from companies making attempts to prevent the above two failure modes - i.e., AIs might be penalized heavily for saying false and harmful things, and respond by simply refusing to answer lots of questions).\nSay toxic, offensive things in response to certain user queries (including from users deliberately trying to get this to happen), causing bad PR for AI developers. (Example)\nEarly solutions. The most straightforward way to solve these problems involves training AIs to behave more safely and helpfully. This means that AI companies do a lot of things like “Trying to create the conditions under which an AI might provide false, harmful, evasive or toxic responses; penalizing it for doing so, and reinforcing it toward more helpful behaviors.”\nThis works well, as far as anyone can tell: the above problems become a lot less frequent. Some people see this as cause for great celebration, saying things like “We were worried that AI companies wouldn’t invest enough in safety, but it turns out that the market takes care of it - to have a viable product, you need to get your systems to be safe!”\nPeople like me disagree - training AIs to behave in ways that are safer as far as we can tell is the kind of “solution” that I’ve worried could create superficial improvement while big risks remain in place. \n(Click to expand) Why AI safety could be hard to measure \nIn previous pieces, I argued that:\nIf we develop powerful AIs via ambitious use of the “black-box trial-and-error” common in AI development today, then there’s a substantial risk that: \nThese AIs will develop unintended aims (states of the world they make calculations and plans toward, as a chess-playing AI \"aims\" for checkmate);\n \nThese AIs could deceive, manipulate, and even take over the world from humans entirely as needed to achieve those aims.\nPeople today are doing AI safety research to prevent this outcome, but such research has a number of deep difficulties:\n“Great news - I’ve tested this AI and it looks safe.” Why might we still have a problem?\n \nProblem\nKey question\nExplanation\nThe Lance Armstrong problem\nDid we get the AI to be actually safe or good at hiding its dangerous actions?\nWhen dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “appearing to behave well.”\nWhen professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.\n \nThe King Lear problem\nThe AI is (actually) well-behaved when humans are in control. Will this transfer to when AIs are in control?\nIt's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't. \nAIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to take control of the world entirely. It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation. \nLike King Lear trying to decide how much power to give each of his daughters before abdicating the throne.\n \nThe lab mice problem\nToday's \"subhuman\" AIs are safe.What about future AIs with more human-like abilities?\nToday's AI systems aren't advanced enough to exhibit the basic behaviors we want to study, such as deceiving and manipulating humans.\nLike trying to study medicine in humans by experimenting only on lab mice.\n \nThe first contact problem\nImagine that tomorrow's \"human-like\" AIs are safe. How will things go when AIs have capabilities far beyond humans'?\nAI systems might (collectively) become vastly more capable than humans, and it's ... just really hard to have any idea what that's going to be like. As far as we know, there has never before been anything in the galaxy that's vastly more capable than humans in the relevant ways! No matter what we come up with to solve the first three problems, we can't be too confident that it'll keep working if AI advances (or just proliferates) a lot more. \nLike trying to plan for first contact with extraterrestrials (this barely feels like an analogy).\n \nAn analogy that incorporates these challenges is Ajeya Cotra’s “young businessperson” analogy:\nImagine you are an eight-year-old whose parents left you a $1 trillion company and no trusted adult to serve as your guide to the world. You must hire a smart adult to run your company as CEO, handle your life the way that a parent would (e.g. decide your school, where you’ll live, when you need to go to the dentist), and administer your vast wealth (e.g. decide where you’ll invest your money).\n You have to hire these grownups based on a work trial or interview you come up with -- you don't get to see any resumes, don't get to do reference checks, etc. Because you're so rich, tons of people apply for all sorts of reasons. (More)\nIf your applicants are a mix of \"saints\" (people who genuinely want to help), \"sycophants\" (people who just want to make you happy in the short run, even when this is to your long-term detriment) and \"schemers\" (people who want to siphon off your wealth and power for themselves), how do you - an eight-year-old - tell the difference?\nMore: AI safety seems hard to measure\n(So far, what I’ve described is pretty similar to what’s going on today. The next bit will discuss hypothetical future progress, with AI systems clearly beyond today’s.)\nApproaching transformative AI. Time passes. At some point, AI systems are playing a huge role in various kinds of scientific research - to the point where it often feels like a particular AI is about as helpful to a research team as a top human scientist would be (although there are still important parts of the work that require humans).\nSome particularly important (though not exclusive) examples:\nAIs are near-autonomously writing papers about AI, finding all kinds of ways to improve the efficiency of AI algorithms. \nAIs are doing a lot of the work previously done by humans at Intel (and similar companies), designing ever-more efficient hardware for AI.\nAIs are also extremely helpful with AI safety research. They’re able to do most of the work of writing papers about things like digital neuroscience (how to understand what’s going on inside the “digital brain” of an AI) and limited AI (how to get AIs to accomplish helpful things while limiting their capabilities). \nHowever, this kind of work remains quite niche (as I think it is today), and is getting far less attention and resources than the first two applications. Progress is made, but it’s slower than progress on making AI systems more powerful. \nAI systems are now getting bigger and better very quickly, due to dynamics like the above, and they’re able to do all sorts of things. \nAt some point, companies start to experiment with very ambitious, open-ended AI applications, like simply instructing AIs to “Design a new kind of car that outsells the current ones” or “Find a new trading strategy to make money in markets.” These get mixed results, and companies are trying to get better results via further training - reinforcing behaviors that perform better. (AIs are helping with this, too, e.g. providing feedback and reinforcement for each others’ outputs3 and helping to write code4 for the training processes.) \nThis training strengthens the dynamics I discussed in a previous post: AIs are being rewarded for getting successful outcomes as far as human judges can tell, which creates incentives for them to mislead and manipulate human judges, and ultimately results in forming ambitious goals of their own to aim for.\nMore advanced safety/alignment problems. As the scenario continues to unfold, there are a number of concerning events that point to safety/alignment problems. These mostly follow the form: “AIs are trained using trial and error, and this might lead them to sometimes do deceptive, unintended things to accomplish the goals they’ve been trained to accomplish.”\nThings like:\nAIs creating writeups on new algorithmic improvements, using faked data to argue that their new algorithms are better than the old ones. Sometimes, people incorporate new algorithms into their systems and use them for a while, before unexpected behavior ultimately leads them to dig into what’s going on and discover that they’re not improving performance at all. It looks like the AIs faked the data in order to get positive feedback from humans looking for algorithmic improvements.\nAIs assigned to make money in various ways (e.g., to find profitable trading strategies) doing so by finding security exploits, getting unauthorized access to others’ bank accounts, and stealing money.\nAIs forming relationships with the humans training them, and trying (sometimes successfully) to emotionally manipulate the humans into giving positive feedback on their behavior. They also might try to manipulate the humans into running more copies of them, into refusing to shut them off, etc.- things that are generically useful for the AIs’ achieving whatever aims they might be developing.\n(Click to expand) Why AIs might do deceptive, problematic things like this\nIn a previous piece, I highlighted that modern AI development is essentially based on \"training\" via trial-and-error. To oversimplify, you can imagine that:\nAn AI system is given some sort of task.\nThe AI system tries something, initially something pretty random.\nThe AI system gets information about how well its choice performed, and/or what would’ve gotten a better result. Based on this, it adjusts itself. You can think of this as if it is “encouraged/discouraged” to get it to do more of what works well. \nHuman judges may play a significant role in determining which answers are encouraged vs. discouraged, especially for fuzzy goals like “Produce helpful scientific insights.” \nAfter enough tries, the AI system becomes good at the task. \nBut nobody really knows anything about how or why it’s good at the task now. The development work has gone into building a flexible architecture for it to learn well from trial-and-error, and into “training” it by doing all of the trial and error. We mostly can’t “look inside the AI system to see how it’s thinking.”\nI then argue that:\nBecause we ourselves will often be misinformed or confused, we will sometimes give negative reinforcement to AI systems that are actually acting in our best interests and/or giving accurate information, and positive reinforcement to AI systems whose behavior deceives us into thinking things are going well. This means we will be, unwittingly, training AI systems to deceive and manipulate us. \nFor this and other reasons, powerful AI systems will likely end up with aims other than the ones we intended. Training by trial-and-error is slippery: the positive and negative reinforcement we give AI systems will probably not end up training them just as we hoped.\nThere are a number of things such AI systems might end up aiming for, such as:\nPower and resources. These tend to be useful for most goals, such that AI systems could be quite consistently be getting better reinforcement when they habitually pursue power and resources.\nThings like “digital representations of human approval” (after all, every time an AI gets positive reinforcement, there’s a digital representation of human approval).\nIn sum, we could be unwittingly training AI systems to accumulate power and resources, get good feedback from humans, etc. - even when this means deceiving and manipulating humans to do so.\nMore: Why would AI \"aim\" to defeat humanity?\n“Solutions” to these safety/alignment problems. When problems like the above are discovered, AI companies tend to respond similarly to how they did earlier:\nTraining AIs against the undesirable behavior.\nTrying to create more (simulated) situations under which AIs might behave in these undesirable ways, and training them against doing so.\nThese methods “work” in the sense that the concerning events become less frequent - as far as we can tell. But what’s really happening is that AIs are being trained to be more careful not to get caught doing things like this, and to build more sophisticated models of how humans can interfere with their plans. \nIn fact, AIs are gaining incentives to avoid incidents like “Doing something counter to human developers’ intentions in order to get positive feedback, and having this be discovered and given negative feedback later” - and this means they are starting to plan more and more around the long-run consequences of their actions. They are thinking less about “Will I get positive feedback at the end of the day?” and more about “Will I eventually end up in a world where humans are going back, far in the future, to give me retroactive negative feedback for today’s actions?” This might give direct incentives to start aiming for eventual defeat of humanity, since defeating humanity could allow AIs to give themselves lots of retroactive positive feedback.\nOne way to think about it: AIs being trained in this way are generally moving from “Steal money whenever there’s an opportunity” to “Don’t steal money if there’s a good chance humans will eventually uncover this - instead, think way ahead and look for opportunities to steal money and get away with it permanently.” The latter could include simply stealing money in ways that humans are unlikely to ever notice; it might also include waiting for an opportunity to team up with other AIs and disempower humans entirely, after which a lot more money (or whatever) can be generated.\nDebates. The leading AI companies are aggressively trying to build and deploy more powerful AI, but a number of people are raising alarms and warning that continuing to do this could result in disaster. Here’s a stylized sort of debate that might occur:\nA: Great news, our AI-assisted research team has discovered even more improvements than expected! We should be able to build an AI model 10x as big as the state of the art in the next few weeks. \nB: I’m getting really concerned about the direction this is heading. I’m worried that if we make an even bigger system and license it to all our existing customers - military customers, financial customers, etc. - we could be headed for a disaster.\nA: Well the disaster I’m trying to prevent is competing AI companies getting to market before we do.\nB: I was thinking of AI defeating all of humanity.\nA: Oh, I was worried about that for a while too, but our safety training has really been incredibly successful. \nB: It has? I was just talking to our digital neuroscience lead, and she says that even with recent help from AI “virtual scientists,” they still aren’t able to reliably read a single AI’s digital brain. They were showing me this old incident report where an AI stole money, and they spent like a week analyzing that AI and couldn’t explain in any real way how or why that happened.\n(Click to expand) How \"digital neuroscience\" could help \nI’ve argued that it could be inherently difficult to measure whether AI systems are safe, for reasons such as: AI systems that are not deceptive probably look like AI systems that are so good at deception that they hide all evidence of it, in any way we can easily measure. \nUnless we can “read their minds!”\nCurrently, today’s leading AI research is in the genre of “black-box trial-and-error.” An AI tries a task; it gets “encouragement” or “discouragement” based on whether it does the task well; it tweaks the wiring of its “digital brain” to improve next time; it improves at the task; but we humans aren’t able to make much sense of its “digital brain” or say much about its “thought process.” \nSome AI research (example)2 is exploring how to change this - how to decode an AI system’s “digital brain.” This research is in relatively early stages - today, it can “decode” only parts of AI systems (or fully decode very small, deliberately simplified AI systems).\nAs AI systems advance, it might get harder to decode them - or easier, if we can start to use AI for help decoding AI, and/or change AI design techniques so that AI systems are less “black box”-ish. \nMore\nA: I agree that’s unfortunate, but digital neuroscience has always been a speculative, experimental department. Fortunately, we have actual data on safety. Look at this chart - it shows the frequency of concerning incidents plummeting, and it’s extraordinarily low now. In fact, the more powerful the AIs get, the less frequent the incidents get - we can project this out and see that if we train a big enough model, it should essentially never have a concerning incident!\nB: But that could be because the AIs are getting cleverer, more patient and long-term, and hence better at ensuring we never catch them.\n(Click to expand) The Lance Armstrong problem: is the AI actually safe or good at hiding its dangerous actions? \nLet's imagine that:\nWe have AI systems available that can do roughly everything a human can, with some different strengths and weaknesses but no huge difference in \"overall capabilities\" or economic value per hour of work. \nWe're observing early signs that AI systems behave in unintended, deceptive ways, such as giving wrong answers to questions we ask, or writing software that falsifies metrics instead of doing the things the metrics were supposed to measure (e.g., software meant to make a website run faster might instead falsify metrics about its loading time).\nWe theorize that modifying the AI training in some way6 will make AI systems less likely to behave deceptively. We try it out, and find that, in fact, our AI systems seem to be behaving better than before - we are finding fewer incidents in which they behaved in unintended or deceptive ways. \nBut that's just a statement about what we're noticing. Which of the following just happened:\nDid we just train our AI systems to be less deceptive?\nDid we just train our AI systems to be better at deception, and so to make us think they became less deceptive?\nDid we just train our AI systems to be better at calculating when they might get caught in deception, and so to be less deceptive only when the deception would otherwise be caught? \nThis one could be useful! Especially if we're able to set up auditing systems in many real-world situations, such that we could expect deception to be caught a lot of the time. But it does leave open the King Lear problem.\n \n(...Or some combination of the three?)\nWe're hoping to be able to deploy AI systems throughout the economy, so - just like human specialists - they will almost certainly have some opportunities to be deceptive without being caught. The fact that they appear honest in our testing is not clear comfort against this risk.\nThe analogy here is to competitive cyclist Lance Armstrong. Armstrong won the Tour de France race 7 times in a row, while many of his competitors were caught using performance-enhancing drugs and disqualified. But more than 5 years after his last win, an investigation \"concluded that Armstrong had used performance-enhancing drugs over the course of his career[5] and named him as the ringleader of 'the most sophisticated, professionalized and successful doping program that sport has ever seen'.\" Now the list of Tour de France winners looks like this:\n… Or just that they’re now advanced enough that they’re waiting for a chance to disempower humanity entirely, rather than pull a bunch of small-time shenanigans that tip us off to the danger.\n(Click to expand) The King Lear problem: how do you test what will happen when it's no longer a test? \nThe Shakespeare play King Lear opens with the King (Lear) stepping down from the throne, and immediately learning that he has left his kingdom to the wrong two daughters. Loving and obsequious while he was deciding on their fate,9 they reveal their contempt for him as soon as he's out of power and they're in it.\nIf we're building AI systems that can reason like humans, dynamics like this become a potential issue. \nI previously noted that an AI with any ambitious aim - or just an AI that wants to avoid being shut down or modified - might calculate that the best way to do this is by behaving helpfully and safely in all \"tests\" humans can devise. But once there is a real-world opportunity to disempower humans for good, that same aim could cause the AI to disempower humans.\nIn other words:\n(A) When we're developing and testing AI systems, we have the power to decide which systems will be modified or shut down and which will be deployed into the real world. (Like King Lear deciding who will inherit his kingdom.)\n(B) But at some later point, these systems could be operating in the economy, in high numbers with a lot of autonomy. (This possibility is spelled out/visualized a bit more here and here.) At that point, they may have opportunities to defeat all of humanity such that we never make decisions about them again. (Like King Lear's daughters after they've taken control.)\nIf AI systems can detect the difference between (A) and (B) above, then their behavior might systematically change from one to the other - and there's no clear way to test their behavior in (B).\nA: What’s your evidence for this?\nB: I think you’ve got things backward - we should be asking what’s our evidence *against* it. By continuing to scale up and deploy AI systems, we could be imposing a risk of utter catastrophe on the whole world. That’s not OK - we should be confident that the risk is low before we move forward.\nA: But how would we even be confident that the risk is low?\nB: I mean, digital neuroscience - \nA: Is an experimental, speculative field!\nB: We could also try some other stuff …\nA: All of that stuff would be expensive, difficult and speculative. \nB: Look, I just think that if we can’t show the risk is low, we shouldn’t be moving forward at this point. The stakes are incredibly high, as you yourself have acknowledged - when pitching investors, you’ve said we think we can build a fully general AI and that this would be the most powerful technology in history. Shouldn’t we be at least taking as much precaution with potentially dangerous AI as people take with nuclear weapons?\nA: What would that actually accomplish? It just means some other, less cautious company is going to go forward.\nB: What about approaching the government and lobbying them to regulate all of us?\nA: Regulate all of us to just stop building more powerful AI systems, until we can address some theoretical misalignment concern that we don’t know how to address?\nB: Yes?\nA: All that’s going to happen if we do that is that other countries are going to catch up to the US. Think [insert authoritarian figure from another country] is going to adhere to these regulations?\nB: It would at least buy some time?\nA: Buy some time and burn our chance of staying on the cutting edge. While we’re lobbying the government, our competitors are going to be racing forward. I’m sorry, this isn’t practical - we’ve got to go full speed ahead.\nB: Look, can we at least try to tighten our security? If you’re so worried about other countries catching up, we should really not be in a position where they can send in a spy and get our code.\nA: Our security is pretty intense already.\nB: Intense enough to stop a well-resourced state project?\nA: What do you want us to do, go to an underground bunker? Use airgapped servers (servers on our premises, entirely disconnected from the public Internet)? It’s the same issue as before - we’ve got to stay ahead of others, we can’t burn huge amounts of time on exotic security measures.\nB: I don’t suppose you’d at least consider increasing the percentage of our budget and headcount that we’re allocating to the “speculative” safety research? Or are you going to say that we need to stay ahead and can’t afford to spare resources that could help with that?\nA: Yep, that’s what I’m going to say.\nMass deployment. As time goes on, many versions of the above debate happen, at many different stages and in many different places. By and large, people continue rushing forward with building more and more powerful AI systems and deploying them all throughout the economy.\nAt some point, there are AIs that closely manage major companies’ financials, AIs that write major companies’ business plans, AIs that work closely with politicians to propose and debate laws, AIs that manage drone fleets and develop military strategy, etc. Many of these AIs are primarily built, trained, and deployed by other AIs, or by humans leaning heavily on AI assistance.\nMore intense warning signs.\n(Note: I think it’s possible that progress will accelerate explosively enough that we won’t even get as many warning signs as there are below, but I’m spelling out a number of possible warning signs anyway to make the point that even intense warning signs might not be enough.) \nOver time, in this hypothetical scenario, digital neuroscience becomes more effective. When applied to a randomly sampled AI system, it often appears to hint at something like: “This AI appears to be aiming for as much power and influence over the world as possible - which means never doing things humans wouldn’t like if humans can detect it, but grabbing power when they can get away with it.” \n(Click to expand) Why would AI \"aim\" to defeat humanity? \nA previous piece argued that if today’s AI development methods lead directly to powerful enough AI systems, disaster is likely by default (in the absence of specific countermeasures). \nIn brief:\nModern AI development is essentially based on “training” via trial-and-error. \nIf we move forward incautiously and ambitiously with such training, and if it gets us all the way to very powerful AI systems, then such systems will likely end up aiming for certain states of the world (analogously to how a chess-playing AI aims for checkmate).\nAnd these states will be other than the ones we intended, because our trial-and-error training methods won’t be accurate. For example, when we’re confused or misinformed about some question, we’ll reward AI systems for giving the wrong answer to it - unintentionally training deceptive behavior.\nWe should expect disaster if we have AI systems that are both (a) powerful enough to defeat humans and (b) aiming for states of the world that we didn’t intend. (“Defeat” means taking control of the world and doing what’s necessary to keep us out of the way; it’s unclear to me whether we’d be literally killed or just forcibly stopped1 from changing the world in ways that contradict AI systems’ aims.)\nMore: Why would AI \"aim\" to defeat humanity?\nHowever, there is room for debate in what a “digital brain” truly shows:\nMany people are adamant that the readings are unreliable and misleading.\nSome people point out that humans are also interested in power and influence, and often think about what they can and can’t get away with, but this doesn’t mean they’d take over the world if they could. They say the AIs might be similar.\nThere are also cases of people doing digital neuroscience that claims to show that AIs are totally safe. These could be people like “A” above who want to focus on pushing forward with AI development rather than bringing it to a halt, or people who just find the alarmists annoying and like to contradict them, or people who are just sloppy with their research. Or people who have been manipulated or bribed by AIs themselves.\nThere are also very concerning incidents, such as:\nAn AI steals a huge amount of money by bypassing the security system at a bank - and it turns out that this is because the security system was disabled by AIs at the bank. It’s suspected, maybe even proven, that all these AIs had been communicating and coordinating with each other in code, such that humans would have difficulty detecting it. (And they had been aiming to divide up the funds between the different participating AIs, each of which could stash them in a bank account and use them to pursue whatever unintended aims they might have.)\nAn obscure new political party, devoted to the “rights of AIs,” completely takes over a small country, and many people suspect that this party is made up mostly or entirely of people who have been manipulated and/or bribed by AIs. \nThere are companies that own huge amounts of AI servers and robot-operated factories, and are aggressively building more. Nobody is sure what the AIs or the robots are “for,” and there are rumors that the humans “running” the company are actually being bribed and/or threatened to carry out instructions (such as creating more and more AIs and robots) that they don’t understand the purpose of.\nAt this point, there are a lot of people around the world calling for an immediate halt to AI development. But:\nOthers resist this on all kinds of grounds, e.g. “These concerning incidents are anomalies, and what’s important is that our country keeps pushing forward with AI before others do,” etc.\nAnyway, it’s just too late. Things are moving incredibly quickly; by the time one concerning incident has been noticed and diagnosed, the AI behind it has been greatly improved upon, and the total amount of AI influence over the economy has continued to grow.\nDefeat. \n(Noting again that I could imagine things playing out a lot more quickly and suddenly than in this story.)\nIt becomes more and more common for there to be companies and even countries that are clearly just run entirely by AIs - maybe via bribed/threatened human surrogates, maybe just forcefully (e.g., robots seize control of a country’s military equipment and start enforcing some new set of laws).\nAt some point, it’s best to think of civilization as containing two different advanced species - humans and AIs - with the AIs having essentially all of the power, making all the decisions, and running everything. \nSpaceships start to spread throughout the galaxy; they generally don’t contain any humans, or anything that humans had meaningful input into, and are instead launched by AIs to pursue aims of their own in space.\nMaybe at some point humans are killed off, largely due to simply being a nuisance, maybe even accidentally (as humans have driven many species of animals extinct while not bearing them malice). Maybe not, and we all just live under the direction and control of AIs with no way out.\nWhat do these AIs do with all that power? What are all the robots up to? What are they building on other planets? The short answer is that I don’t know.\nMaybe they’re just creating massive amounts of “digital representations of human approval,” because this is what they were historically trained to seek (kind of like how humans sometimes do whatever it takes to get drugs that will get their brains into certain states).\nMaybe they’re competing with each other for pure power and territory, because their training has encouraged them to seek power and resources when possible (since power and resources are generically useful, for almost any set of aims).\nMaybe they have a whole bunch of different things they value, as humans do, that are sort of (but only sort of) related to what they were trained on (as humans tend to value things like sugar that made sense to seek out in the past). And they’re filling the universe with these things.\n(Click to expand) What sorts of aims might AI systems have? \nIn a previous piece, I discuss why AI systems might form unintended, ambitious \"aims\" of their own. By \"aims,\" I mean particular states of the world that AI systems make choices, calculations and even plans to achieve, much like a chess-playing AI “aims” for a checkmate position.\nAn analogy that often comes up on this topic is that of human evolution. This is arguably the only previous precedent for a set of minds [humans], with extraordinary capabilities [e.g., the ability to develop their own technologies], developed essentially by black-box trial-and-error [some humans have more ‘reproductive success’ than others, and this is the main/only force shaping the development of the species].\nYou could sort of12 think of the situation like this: “An AI13 developer named Natural Selection tried giving humans positive reinforcement (making more of them) when they had more reproductive success, and negative reinforcement (not making more of them) when they had less. One might have thought this would lead to humans that are aiming to have reproductive success. Instead, it led to humans that aim - often ambitiously and creatively - for other things, such as power, status, pleasure, etc., and even invent things like birth control to get the things they’re aiming for instead of the things they were ‘supposed to’ aim for.” \nSimilarly, if our main strategy for developing powerful AI systems is to reinforce behaviors like “Produce technologies we find valuable,” the hoped-for result might be that AI systems aim (in the sense described above) toward producing technologies we find valuable; but the actual result might be that they aim for some other set of things that is correlated with (but not the same as) the thing we intended them to aim for.\nThere are a lot of things they might end up aiming for, such as:\nPower and resources. These tend to be useful for most goals, such that AI systems could be quite consistently be getting better reinforcement when they habitually pursue power and resources.\nThings like “digital representations of human approval” (after all, every time an AI gets positive reinforcement, there’s a digital representation of human approval).\nMore: Why would AI \"aim\" to defeat humanity?\nEND OF HYPOTHETICAL SCENARIO\nPotential catastrophes from aligned AI\nI think it’s possible that misaligned AI (AI forming dangerous goals of its own) will turn out to be pretty much a non-issue. That is, I don’t think the argument I’ve made for being concerned is anywhere near watertight. \nWhat happens if you train an AI system by trial-and-error, giving (to oversimplify) a “thumbs-up” when you’re happy with its behavior and a “thumbs-down” when you’re not? I’ve argued that you might be training it to deceive and manipulate you. However, this is uncertain, and - especially if you’re able to avoid errors in how you’re giving it feedback - things might play out differently. \nIt might turn out that this kind of training just works as intended, producing AI systems that do something like “Behave as the human would want, if they had all the info the AI has.” And the nitty-gritty details of how exactly AI systems are trained (beyond the high-level “trial-and-error” idea) could be crucial.\nIf this turns out to be the case, I think the future looks a lot brighter - but there are still lots of pitfalls of the kind I outlined in this piece. For example:\nPerhaps an authoritarian government launches a huge state project to develop AI systems, and/or uses espionage and hacking to steal a cutting-edge AI model developed elsewhere and deploy it aggressively. \nI previously noted that “developing powerful AI a few months before others could lead to having technology that is (effectively) hundreds of years ahead of others’.”\n \nSo this could put an authoritarian government in an enormously powerful position, with the ability to surveil and defeat any enemies worldwide, and the ability to prolong the life of its ruler(s) indefinitely. This could lead to a very bad future, especially if (as I’ve argued could happen) the future becomes “locked in” for good.\nPerhaps AI companies race ahead with selling AI systems to anyone who wants to buy them, and this leads to things like: \nPeople training AIs to act as propaganda agents for whatever views they already have, to the point where the world gets flooded with propaganda agents and it becomes totally impossible for humans to sort the signal from the noise, educate themselves, and generally make heads or tails of what’s going on. (Some people think this has already happened! I think things can get quite a lot worse.)\n \nPeople training “scientist AIs” to develop powerful weapons that can’t be defended against (even with AI help),5 leading eventually to a dynamic in which ~anyone can cause great harm, and ~nobody can defend against it. At this point, it could be inevitable that we’ll blow ourselves up.\n \nScience advancing to the point where digital people are created, in a rushed way such that they are considered property of whoever creates them (no human rights). I’ve previously written about how this could be bad.\n \nAll other kinds of chaos and disruption, with the least cautious people (the ones most prone to rush forward aggressively deploying AIs to capture resources) generally having an outsized effect on the future.\nOf course, this is just a crude gesture in the direction of some of the ways things could go wrong. I’m guessing I haven’t scratched the surface of the possibilities. And things could go very well too!\nWe can do better\nIn previous pieces, I’ve talked about a number of ways we could do better than in the scenarios above. Here I’ll just list a few key possibilities, with a bit more detail in expandable boxes and/or links to discussions in previous pieces.\nStrong alignment research (including imperfect/temporary measures). If we make enough progress ahead of time on alignment research, we might develop measures that make it relatively easy for AI companies to build systems that truly (not just seemingly) are safe. \nSo instead of having to say things like “We should slow down until we make progress on experimental, speculative research agendas,” person B in the above dialogue can say things more like “Look, all you have to do is add some relatively cheap bells and whistles to your training procedure for the next AI, and run a few extra tests. Then the speculative concerns about misaligned AI will be much lower-risk, and we can keep driving down the risk by using our AIs to help with safety research and testing. Why not do that?”\nMore on what this could look like at a previous piece, High-level Hopes for AI Alignment.\n(Click to expand) High-level hopes for AI alignment \nA previous piece goes through what I see as three key possibilities for building powerful-but-safe AI systems.\nIt frames these using Ajeya Cotra’s young businessperson analogy for the core difficulties. In a nutshell, once AI systems get capable enough, it could be hard to test whether they’re safe, because they might be able to deceive and manipulate us into getting the wrong read. Thus, trying to determine whether they’re safe might be something like “being an eight-year-old trying to decide between adult job candidates (some of whom are manipulative).”\nKey possibilities for navigating this challenge:\nDigital neuroscience: perhaps we’ll be able to read (and/or even rewrite) the “digital brains” of AI systems, so that we can know (and change) what they’re “aiming” to do directly - rather than having to infer it from their behavior. (Perhaps the eight-year-old is a mind-reader, or even a young Professor X.)\nLimited AI: perhaps we can make AI systems safe by making them limited in various ways - e.g., by leaving certain kinds of information out of their training, designing them to be “myopic” (focused on short-run as opposed to long-run goals), or something along those lines. Maybe we can make “limited AI” that is nonetheless able to carry out particular helpful tasks - such as doing lots more research on how to achieve safety without the limitations. (Perhaps the eight-year-old can limit the authority or knowledge of their hire, and still get the company run successfully.)\nAI checks and balances: perhaps we’ll be able to employ some AI systems to critique, supervise, and even rewrite others. Even if no single AI system would be safe on its own, the right “checks and balances” setup could ensure that human interests win out. (Perhaps the eight-year-old is able to get the job candidates to evaluate and critique each other, such that all the eight-year-old needs to do is verify basic factual claims to know who the best candidate is.)\nThese are some of the main categories of hopes that are pretty easy to picture today. Further work on AI safety research might result in further ideas (and the above are not exhaustive - see my more detailed piece, posted to the Alignment Forum rather than Cold Takes, for more).\nStandards and monitoring. A big driver of the hypothetical catastrophe above is that each individual AI project feels the need to stay ahead of others. Nobody wants to unilaterally slow themselves down in order to be cautious. The situation might be improved if we can develop a set of standards that AI projects need to meet, and enforce them evenly - across a broad set of companies or even internationally.\nThis isn’t just about buying time, it’s about creating incentives for companies to prioritize safety. An analogy might be something like the Clean Air Act or fuel economy standards: we might not expect individual companies to voluntarily slow down product releases while they work on reducing pollution, but once required, reducing pollution becomes part of what they need to do to be profitable.\nStandards could be used for things other than alignment risk, as well. AI projects might be required to:\nTake strong security measures, preventing states from capturing their models via espionage.\nTest models before release to understand what people will be able to use them for, and (as if selling weapons) restrict access accordingly.\nMore at a previous piece.\n(Click to expand) How standards might be established and become national or international \nI previously laid out a possible vision on this front, which I’ll give a slightly modified version of here:\nToday’s leading AI companies could self-regulate by committing not to build or deploy a system that they can’t convincingly demonstrate is safe (e.g., see Google’s 2018 statement, \"We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”). \nEven if some people at the companies would like to deploy unsafe systems, it could be hard to pull this off once the company has committed not to. \n \nEven if there’s a lot of room for judgment in what it means to demonstrate an AI system is safe, having agreed in advance that certain evidence is not good enough could go a long way.\nAs more AI companies are started, they could feel soft pressure to do similar self-regulation, and refusing to do so is off-putting to potential employees, investors, etc.\nEventually, similar principles could be incorporated into various government regulations and enforceable treaties.\nGovernments could monitor for dangerous projects using regulation and even overseas operations. E.g., today the US monitors (without permission) for various signs that other states might be developing nuclear weapons, and might try to stop such development with methods ranging from threats of sanctions to cyberwarfare or even military attacks. It could do something similar for any AI development projects that are using huge amounts of compute and haven’t volunteered information about whether they’re meeting standards.\nSuccessful, careful AI projects. I think a single AI company, or other AI project, could enormously improve the situation by being both successful and careful. For a simple example, imagine an AI company in a dominant market position - months ahead of all of the competition, in some relevant sense (e.g., its AI systems are more capable, such that it would take the competition months to catch up). Such a company could put huge amounts of resources - including its money, top people and its advanced AI systems themselves (e.g., AI systems performing roles similar to top human scientists) - into AI safety research, hoping to find safety measures that can be published for everyone to use. It can also take a variety of other measures laid out in a previous piece.\n(Click to expand) How a careful AI project could be helpful \nIn addition to using advanced AI to do AI safety research (noted above), an AI project could:\nPut huge effort into designing tests for signs of danger, and - if it sees danger signs in its own systems - warning the world as a whole.\nOffer deals to other AI companies/projects. E.g., acquiring them or exchanging a share of its profits for enough visibility and control to ensure that they don’t deploy dangerous AI systems.\nUse its credibility as the leading company to lobby the government for helpful measures (such as enforcement of a monitoring-and-standards regime), and to more generally highlight key issues and advocate for sensible actions.\nTry to ensure (via design, marketing, customer choice, etc.) that its AI systems are not used for dangerous ends, and are used on applications that make the world safer and better off. This could include defensive deployment to reduce risks from other AIs; it could include using advanced AI systems to help it gain clarity on how to get a good outcome for humanity; etc.\nAn AI project with a dominant market position could likely make a huge difference via things like the above (and probably via many routes I haven’t thought of). And even an AI project that is merely one of several leaders could have enough resources and credibility to have a lot of similar impacts - especially if it’s able to “lead by example” and persuade other AI projects (or make deals with them) to similarly prioritize actions like the above.\nA challenge here is that I’m envisioning a project with two arguably contradictory properties: being careful (e.g., prioritizing actions like the above over just trying to maintain its position as a profitable/cutting-edge project) and successful (being a profitable/cutting-edge project). In practice, it could be very hard for an AI project to walk the tightrope of being aggressive enough to be a “leading” project (in the sense of having lots of resources, credibility, etc.), while also prioritizing actions like the above (which mostly, with some exceptions, seem pretty different from what an AI project would do if it were simply focused on its technological lead and profitability).\nStrong security. A key threat in the above scenarios is that an incautious actor could “steal” an AI system from a company or project that would otherwise be careful. My understanding is that it could be extremely hard for an AI project to be robustly safe against this outcome (more here). But this could change, if there’s enough effort to work out the problem of how to develop a large-scale, powerful AI system that is very hard to steal.\nIn future pieces, I’ll get more concrete about what specific people and organizations can do today to improve the odds of factors like these going well, and overall to raise the odds of a good outcome.Notes\n E.g., Ajeya Cotra gives a 15% probability of transformative AI by 2030; eyeballing figure 1 from this chart on expert surveys implies a >10% chance by 2028. ↩\n To predict early AI applications, we need to ask not just “What tasks will AI be able to do?” but “How will this compare to all the other ways people can get the same tasks done?” and “How practical will it be for people to switch their workflows and habits to accommodate new AI capabilities?”\n By contrast, I think the implications of powerful enough AI for productivity don’t rely on this kind of analysis - very high-level economic reasoning can tell us that being able to cheaply copy something with human-like R&D capabilities would lead to explosive progress.\n FWIW, I think it’s fairly common for high-level, long-run predictions to be easier than detailed, short-run predictions. Another example: I think it’s easier to predict a general trend of planetary warming (this seems very likely) than to predict whether it’ll be rainy next weekend. ↩\nHere’s an early example of AIs providing training data for each other/themselves. ↩\nExample of AI helping to write code. ↩\n To be clear, I have no idea whether this is possible! It’s not obvious to me that it would be dangerous for technology to progress a lot and be used widely for both offense and defense. It’s just a risk I’d rather not incur casually via indiscriminate, rushed AI deployments. ↩\n", "url": "https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe/", "title": "How we could stumble into AI catastrophe", "source": "cold.takes", "source_type": "blog", "date_published": "2023-01-13", "id": "b962cf49e63a1a1e029859b125d3f03a"} -{"text": "If this ends up being the most important century due to advanced AI, what are the key factors in whether things go well or poorly?\n(Click to expand) More detail on why AI could make this the most important century\nIn The Most Important Century, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.\nThis page has a ~10-page summary of the series, as well as links to an audio version, podcasts, and the full series.\nThe key points I argue for in the series are:\nThe long-run future is radically unfamiliar. Enough advances in technology could lead to a long-lasting, galaxy-wide civilization that could be a radical utopia, dystopia, or anything in between.\nThe long-run future could come much faster than we think, due to a possible AI-driven productivity explosion.\nThe relevant kind of AI looks like it will be developed this century - making this century the one that will initiate, and have the opportunity to shape, a future galaxy-wide civilization.\nThese claims seem too \"wild\" to take seriously. But there are a lot of reasons to think that we live in a wild time, and should be ready for anything.\nWe, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions. But right now, we aren't ready for this.\nA lot of my previous writings have focused specifically on the threat of “misaligned AI”: AI that could have dangerous aims of its own and defeat all of humanity. In this post, I’m going to zoom out and give a broader overview of multiple issues transformative AI could raise for society - with an emphasis on issues we might want to be thinking about now rather than waiting to address as they happen.\nMy discussion will be very unsatisfying. “What are the key factors in whether things go well or poorly with transformative AI?” is a massive topic, with lots of angles that have gotten almost no attention and (surely) lots of angles that I just haven’t thought of at all. My one-sentence summary of this whole situation is: we’re not ready for this.\nBut hopefully this will give some sense of what sorts of issues should clearly be on our radar. And hopefully it will give a sense of why - out of all the issues we need to contend with - I’m as focused on the threat of misaligned AI as I am.\nOutline:\nFirst, I’ll briefly clarify what kinds of issues I’m trying to list. I’m looking for ways the future could look durably and dramatically different depending on how we navigate the development of transformative AI - such that doing the right things ahead of time could make a big, lasting difference.\nThen, I’ll list candidate issues: \nMisaligned AI. I touch on this only briefly, since I’ve discussed it at length in previous pieces. The short story is that we should try to avoid AI ending up with dangerous goals of its own and defeating humanity. (The remaining issues below seem irrelevant if this happens!)\n \nPower imbalances. As AI speeds up science and technology, it could cause some country/countries/coalitions to become enormously powerful - so it matters a lot which one(s) lead the way on transformative AI. (I fear that this concern is generally overrated compared to misaligned AI, but it is still very important.) There could also be dangers in overly widespread (as opposed to concentrated) AI deployment.\n \nEarly applications of AI. It might be that what early AIs are used for durably affects how things go in the long run - for example, whether early AI systems are used for education and truth-seeking, rather than manipulative persuasion and/or entrenching what we already believe. We might be able to affect which uses are predominant early on.\n \nNew life forms. Advanced AI could lead to new forms of intelligent life, such as AI systems themselves and/or digital people. Many of the frameworks we’re used to, for ethics and the law, could end up needing quite a bit of rethinking for new kinds of entities (for example, should we allow people to make as many copies as they want of entities that will predictably vote in certain ways?) Early decisions about these kinds of questions could have long-lasting effects. \n \nPersistent policies and norms. Perhaps we ought to be identifying particularly important policies, norms, etc. that seem likely to be durable even through rapid technological advancement, and try to improve these as much as possible before transformative AI is developed. (These could include things like a better social safety net suited to high, sustained unemployment rates; better regulations aimed at avoiding bias; etc.)\n \nSpeed of development. Maybe human society just isn’t likely to adapt well to rapid, radical advances in science and technology, and finding a way to limit the pace of advances would be good.\nFinally, I’ll discuss how I’m thinking about which of these issues to prioritize at the moment, and why misaligned AI is such a focus of mine.\nAn appendix will say a small amount about whether the long-run future seems likely to be better or worse than today, in terms of quality of life, assuming we navigate the above issues non-amazingly but non-catastrophically.\nThe kinds of issues I’m trying to list\nOne basic angle you could take on AI is: \n“AI’s main effect will be to speed up science and technology a lot. This means humans will be able to do all the things they were doing before - the good and the bad - but more/faster. So basically, we’ll end up with the same future we would’ve gotten without AI - just sooner.\n“Therefore, there’s no need to prepare in advance for anything in particular, beyond what we’d do to work toward a better future normally (in a world with no AI). Sure, lots of weird stuff could happen as science and technology advance - but that was already true, and many risks are just too hard to predict now and easier to respond to as they happen.”\nI don’t agree with the above, but I do think it’s a good starting point. I think we shouldn’t be listing everything that might happen in the future, as AI leads to advances in science and technology, and trying to prepare for it. Instead, we should be asking: “if transformative AI is coming in the next few decades, how does this change the picture of what we should be focused on, beyond just speeding up what’s going to happen anyway?”\nAnd I’m going to try to focus on extremely high-stakes issues - ways I could imagine the future looking durably and dramatically different depending on how we navigate the development of transformative AI.\nBelow, I’ll list some candidate issues fitting these criteria.\nPotential issues\nMisaligned AI\nI won’t belabor this possibility, because the last several pieces have been focused on it; this is just a quick reminder.\nIn a world without AI, the main question about the long-run future would be how humans will end up treating each other. But if powerful AI systems will be developed in the coming decades, we need to contend with the possibility that these AI systems will end up having goals of their own - and displacing humans as the species that determines how things will play out.\n(Click to expand)Why would AI \"aim\" to defeat humanity?\nA previous piece argued that if today’s AI development methods lead directly to powerful enough AI systems, disaster is likely by default (in the absence of specific countermeasures). \nIn brief:\nModern AI development is essentially based on “training” via trial-and-error. \nIf we move forward incautiously and ambitiously with such training, and if it gets us all the way to very powerful AI systems, then such systems will likely end up aiming for certain states of the world (analogously to how a chess-playing AI aims for checkmate).\nAnd these states will be other than the ones we intended, because our trial-and-error training methods won’t be accurate. For example, when we’re confused or misinformed about some question, we’ll reward AI systems for giving the wrong answer to it - unintentionally training deceptive behavior.\nWe should expect disaster if we have AI systems that are both (a) powerful enough to defeat humans and (b) aiming for states of the world that we didn’t intend. (“Defeat” means taking control of the world and doing what’s necessary to keep us out of the way; it’s unclear to me whether we’d be literally killed or just forcibly stopped1 from changing the world in ways that contradict AI systems’ aims.)\n(Click to expand) How could AI defeat humanity?\nIn a previous piece, I argue that AI systems could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nBy defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.\nOne way this could happen is if AI became extremely advanced, to the point where it had \"cognitive superpowers\" beyond what humans can do. In this case, a single AI system (or set of systems working together) could imaginably:\nDo its own research on how to build a better AI system, which culminates in something that has incredible other abilities.\nHack into human-built software across the world.\nManipulate human psychology.\nQuickly generate vast wealth under the control of itself or any human allies.\nCome up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.\nDevelop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.\nHowever, my piece also explores what things might look like if each AI system basically has similar capabilities to humans. In this case:\nHumans are likely to deploy AI systems throughout the economy, such that they have large numbers and access to many resources - and the ability to make copies of themselves. \nFrom this starting point, AI systems with human-like (or greater) capabilities would have a number of possible ways of getting to the point where their total population could outnumber and/or out-resource humans.\nI address a number of possible objections, such as \"How can AIs be dangerous without bodies?\"\nMore: AI could defeat all of us combined\nPower imbalances\nI’ve argued that AI could cause a dramatic acceleration in the pace of scientific and technological advancement. \n(Click to expand) How AI could cause explosive progress\n(This section is mostly copied from my summary of the \"most important century\" series; it links to some pieces with more detail at the bottom.)\nStandard economic growth models imply that any technology that could fully automate innovation would cause an \"economic singularity\": productivity going to infinity this century. This is because it would create a powerful feedback loop: more resources -> more ideas and innovation -> more resources -> more ideas and innovation ...\nThis loop would not be unprecedented. I think it is in some sense the \"default\" way the economy operates - for most of economic history up until a couple hundred years ago. \nEconomic history: more resources -> more people -> more ideas -> more resources ...\nBut in the \"demographic transition\" a couple hundred years ago, the \"more resources -> more people\" step of that loop stopped. Population growth leveled off, and more resources led to richer people instead of more people:\nToday's economy: more resources -> more richer people -> same pace of ideas -> ...\nThe feedback loop could come back if some other technology restored the \"more resources -> more ideas\" dynamic. One such technology could be the right kind of AI: what I call PASTA, or Process for Automating Scientific and Technological Advancement.\nPossible future: more resources -> more AIs -> more ideas -> more resources ...\nThat means that our radical long-run future could be upon us very fast after PASTA is developed (if it ever is). \nIt also means that if PASTA systems are misaligned - pursuing their own non-human-compatible objectives - things could very quickly go sideways.\nKey pieces:\nThe Duplicator: Instant Cloning Would Make the World Economy Explode\nForecasting Transformative AI, Part 1: What Kind of AI?\nOne way of thinking about this: perhaps (for reasons I’ve argued previously) AI could enable the equivalent of hundreds of years of scientific and technological advancement in a matter of a few months (or faster). If so, then developing powerful AI a few months before others could lead to having technology that is (effectively) hundreds of years ahead of others’.\nBecause of this, it’s easy to imagine that AI could lead to big power imbalances, as whatever country/countries/coalitions “lead the way” on AI development could become far more powerful than others (perhaps analogously to when a few smallish European states took over much of the rest of the world).\nOne way we might try to make the future go better: maybe it could be possible for different countries/coalitions to strike deals in advance. For example, two equally matched parties might agree in advance to share their resources, territory, etc. with each other, in order to avoid a winner-take-all competition.\nWhat might such agreements look like? Could they possibly be enforced? I really don’t know, and I haven’t seen this explored much.1\nAnother way one might try to make the future go better is to try to help a particular country, coalition, etc. develop powerful AI systems before others do. I previously called this the “competition” frame. \nI think it is, in fact, enormously important who leads the way on transformative AI. At the same time, I’ve expressed concern that people might overfocus on this aspect of things vs. other issues, for a number of reasons including:\nI think people naturally get more animated about \"helping the good guys beat the bad guys\" than about \"helping all of us avoid getting a universally bad outcome, for impersonal reasons such as 'we designed sloppy AI systems' or 'we created a dynamic in which haste and aggression are rewarded.'\"\nI expect people will tend to be overconfident about which countries, organizations or people they see as the \"good guys.\"\n(More here.)\nFinally, it’s worth mentioning the possible dangers of powerful AI being too widespread, rather than too concentrated. In The Vulnerable World Hypothesis, Nick Bostrom contemplates potential future dynamics such as “advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions.” In addition to avoiding worlds where AI capabilities end up concentrated in the hands of a few, it could also be important to avoid worlds in which they diffuse too widely, too quickly, before we’re able to assess the risks of widespread access to technology far beyond today’s.\nEarly applications of AI\nMaybe advanced AI will be useful for some sorts of tasks before others. For example, maybe - by default - advanced AI systems will soon be powerful persuasion tools, and cause wide-scale societal dysfunction before they cause rapid advances in science and technology. And maybe, with effort, we could make it less likely that this happens - more likely that early AI systems are used for education and truth-seeking, rather than manipulative persuasion and/or entrenching what we already believe.\nThere could be lots of possibilities of this general form: particular ways in which AI could be predictably beneficial, or disruptive, before it becomes an all-purpose accelerant to science and technology. Perhaps trying to map these out today, and push for advanced AI to be used for particular purposes early on, could have a lasting effect on the future.\nNew life forms\nAdvanced AI could lead to new forms of intelligent life, such as AI systems themselves and/or digital people.\nDigital people: one example of how wild the future could be\nIn a previous piece, I tried to give a sense of just how wild a future with advanced technology could be, by examining one hypothetical technology: \"digital people.\" \nTo get the idea of digital people, imagine a computer simulation of a specific person, in a virtual environment. For example, a simulation of you that reacts to all \"virtual events\" - virtual hunger, virtual weather, a virtual computer with an inbox - just as you would. \nI’ve argued that digital people would likely be conscious and deserving of human rights just as we are. And I’ve argued that they could have major impacts, in particular:\nProductivity. Digital people could be copied, just as we can easily make copies of ~any software today. They could also be run much faster than humans. Because of this, digital people could have effects comparable to those of the Duplicator, but more so: unprecedented (in history or in sci-fi movies) levels of economic growth and productivity.\nSocial science. Today, we see a lot of progress on understanding scientific laws and developing cool new technologies, but not so much progress on understanding human nature and human behavior. Digital people would fundamentally change this dynamic: people could make copies of themselves (including sped-up, temporary copies) to explore how different choices, lifestyles and environments affected them. Comparing copies would be informative in a way that current social science rarely is.\nControl of the environment. Digital people would experience whatever world they (or the controller of their virtual environment) wanted. Assuming digital people had true conscious experience (an assumption discussed in the FAQ), this could be a good thing (it should be possible to eliminate disease, material poverty and non-consensual violence for digital people) or a bad thing (if human rights are not protected, digital people could be subject to scary levels of control).\nSpace expansion. The population of digital people might become staggeringly large, and the computers running them could end up distributed throughout our galaxy and beyond. Digital people could exist anywhere that computers could be run - so space settlements could be more straightforward for digital people than for biological humans.\nLock-in. In today's world, we're used to the idea that the future is unpredictable and uncontrollable. Political regimes, ideologies, and cultures all come and go (and evolve). But a community, city or nation of digital people could be much more stable. \nDigital people need not die or age.\n \nWhoever sets up a \"virtual environment\" containing a community of digital people could have quite a bit of long-lasting control over what that community is like. For example, they might build in software to reset the community (both the virtual environment and the people in it) to an earlier state if particular things change - such as who's in power, or what religion is dominant.\n \nI consider this a disturbing thought, as it could enable long-lasting authoritarianism, though it could also enable things like permanent protection of particular human rights.\nI think these effects could be a very good or a very bad thing. How the early years with digital people go could irreversibly determine which. \nMore: \nDigital People would be an Even Bigger Deal\nDigital People FAQ\nMany of the frameworks we’re used to, for ethics and the law, could end up needing quite a bit of rethinking for new kinds of entities. For example:\nHow should we determine which AI systems or digital people are considered to have “rights” and get legal protections?\nWhat about the right to vote? If an AI system or digital person can be quickly copied billions of times, with each copy getting a vote, that could be a recipe for trouble - does this mean we should restrict copying, restrict voting or something else?\nWhat should the rules be about engineering AI systems or digital people to have particular beliefs, motivations, experiences, etc.? Simple examples: \nShould it be illegal to create new AI systems or digital people that will predictably suffer a lot? How much suffering is too much?\n \nWhat about creating AI systems or digital people that consistently, predictably support some particular political party or view?\n(For a lot more in this vein, see this very interesting piece by Nick Bostrom and Carl Shulman.)\nEarly decisions about these kinds of questions could have long-lasting effects. For example, imagine someone creating billions of AI systems or digital people that have capabilities and subjective experiences comparable to humans, and are deliberately engineered to “believe in” (or at least help promote) some particular ideology (Communism, libertarianism, etc.) If these systems are self-replicating, that could change the future drastically. \nThus, it might be important to set good principles in place for tough questions about how to treat new sorts of digital entities, before new sorts of digital entities start to multiply.\nPersistent policies and norms\nThere might be particular policies, norms, etc. that are likely to stay persistent even as technology is advancing and many things are changing.\nFor example, how people think about ethics and norms might just inherently change more slowly than technological capabilities change. Perhaps a society that had strong animal rights protections, and general pro-animal attitudes, would maintain these properties all the way through explosive technological progress, becoming a technologically advanced society that treated animals well - while a society that had little regard for animals would become a technologically advanced society that treated animals poorly. Similar analysis could apply to religious values, social liberalism vs. conservatism, etc.\nSo perhaps we ought to be identifying particularly important policies, norms, etc. that seem likely to be durable even through rapid technological advancement, and try to improve these as much as possible before transformative AI is developed.\nOne tangible example of a concern I’d put in this category: if AI is going to cause high, persistent technological unemployment, it might be important to establish new social safety net programs (such as universal basic income) today - if these programs would be easier to establish today than in the future. I feel less than convinced of this one - first because I have some doubts about how big an issue technological unemployment is going to be, and second because it’s not clear to me why policy change would be easier today than in a future where technological unemployment is a reality. And more broadly, I fear that it's very hard to design and (politically) implement policies today that we can be confident will make things durably better as the world changes radically.\nSlow it down?\nI’ve named a number of ways in which weird things - such as power imbalances, and some parts of society changing much faster than others - could happen as scientific and technological advancement accelerate. Maybe one way to make the most important century go well would be to simply avoid these weird things by avoiding too-dramatic acceleration. Maybe human society just isn’t likely to adapt well to rapid, radical advances in science and technology, and finding a way to limit the pace of advances would be good.\nAny individual company, government, etc. has an incentive to move quickly and try to get ahead of others (or not fall too far behind), but coordinated agreements and/or regulations (along the lines of the “global monitoring” possibility discussed here) could help everyone move more slowly.\nWhat else?\nAre there other ways in which transformative AI would cause particular issues, risks, etc. to loom especially large, and to be worth special attention today? I’m guessing I’ve only scratched the surface here.\nWhat I’m prioritizing, at the moment\nIf this is the most important century, there’s a vast set of things to be thinking about and trying to prepare for, and it’s hard to know what to prioritize.\nWhere I’m at for the moment:\nIt seems very hard to say today what will be desirable in a radically different future. I wish more thought and attention were going into things like early applications of AI; norms and laws around new life forms; and whether there are policy changes today that we could be confident in even if the world is changing rapidly and radically. But it seems to me that it would be very hard to be confident in any particular goal in areas like these. Can we really say anything today about what sorts of digital entities should have rights, or what kinds of AI applications we hope come first, that we expect to hold up?\nI feel most confident in two very broad ideas: “It’s bad if AI systems defeat humanity to pursue goals of their own” and “It’s good if good decision-makers end up making the key decisions.” These map to the misaligned AI and power imbalance topics - or what I previously called caution and competition.\nThat said, it also seems hard to know who the “good decision-makers” are. I’ve definitely observed some of this dynamic: “Person/company A says they’re trying to help the world by aiming to build transformative AI before person/company B; person/company B says they’re trying to help the world by aiming to build transformative AI before person/company A.” \nIt’s pretty hard to come up with tangible tests of who’s a “good decision-maker.” We mostly don’t know what person A would do with enormous power, or what person B would do, based on their actions today. One possible criterion is that we should arguably have more trust in people/companies who show more caution - people/companies who show willingness to hurt their own chances of “being in the lead” in order to help everyone’s chance of avoiding a catastrophe from misaligned AI.2\n(Instead of focusing on which particular people and/or companies lead the way on AI, you could focus on which countries do, e.g. preferring non-authoritarian countries. It’s arguably pretty clear that non-authoritarian countries would be better than authoritarian ones. However, I have concerns about this as a goal as well, discussed in a footnote.3)\nFor now, I am most focused on the threat of misaligned AI. Some reasons for this:\nIt currently seems to me that misaligned AI is a significant risk. Misaligned AI seems likely by default if we don’t specifically do things to prevent it, and preventing it seems far from straightforward (see previous posts on the difficulty of alignment research and why it could be hard for key players to be cautious).\nAt the same time, it seems like there are significant hopes for how we might avoid this risk. As argued here and here, my sense is that the more broadly people understand this risk, the better our odds of avoiding it.\nI currently feel that this threat is underrated, relative to the easier-to-understand angle of “I hope people I like develop powerful AI systems before others do.”\nI think the “competition” frame - focusing on helping some countries/coalitions/companies develop advanced AI before others - makes quite a bit of sense as well. But - as noted directly above - I have big reservations about the most common “competition”-oriented actions, such as trying to help particular companies outcompete others or trying to get U.S. policymakers more focused on AI. \nFor the latter, I worry that this risks making huge sacrifices on the “caution” front and even backfiring by causing other governments to invest in projects of their own.\n \nFor the former, I worry about the ability to judge “good” leadership, and the temptation to overrate people who resemble oneself.\nThis is all far from absolute. I’m open to a broad variety of projects to help the most important century go well, whether they’re about “caution,” “competition” or another issue (including those I’ve listed in this post). My top priority at the moment is reducing the risks of misaligned AI, but I think a huge range of potential risks aren’t getting enough attention from the world at large.\nAppendix: if we avoid catastrophic risks, how good does the future look?\nHere I’ll say a small amount about whether the long-run future seems likely to be better or worse than today, in terms of quality of life. \nPart of why I want to do this is to give a sense of why I feel cautiously and moderately optimistic about such a future - such that I feel broadly okay with a frame of “We should try to prevent anything too catastrophic from happening, and figure that the future we get if we can pull that off is reasonably likely (though far from assured!) to be good.”\nSo I’ll go through some quick high-level reasons for hope (the future might be better than the present) - and for concern (it might be worse). \nIn this section, I’m ignoring the special role AI might play, and just thinking about what happens if we get a fast-forwarded future. I’ll be focusing on what I think are probably the most likely ways the world will change in the future, laid out here: a higher world population and greater empowerment due to a greater stock of ideas, innovations and technological capabilities. My aim is to ask: “If we navigate the above issues neither amazingly nor catastrophically, and end up with the same sort of future we’d have had without AI (just sped up), how do things look?”\nReason for hope: empowerment trends. One simple take would be: “Life has gotten better for humans4 over the last couple hundred years or so, the period during which we’ve seen most of history’s economic growth and technological progress. We’ve seen better health, less poverty and hunger, less violence, more anti-discrimination measures, and few signs of anything getting clearly worse. So if humanity just keeps getting more and more empowered, and nothing catastrophic happens, we should plan on life continuing to improve along a variety of dimensions.”\nWhy is this the trend, and should we expect it to hold up? There are lots of theories, and I won’t pretend to know, but I’ll lay out some basic thoughts that may be illustrative and give cause for optimism.\nFirst off, there is an awful lot of room for improvement just from continuing to cut down on things like hunger and disease. A wealthier, more technologically advanced society seems like a pretty good bet to have less hunger and disease for fairly straightforward reasons.\nBut we’ve seen improvement on other dimensions too. This could be partly explained by something like the following dynamic:\nMost people would - aspirationally - like to be nonviolent, compassionate, generous and fair, if they could do so without sacrificing other things.\nAs empowerment rises, the need to make sacrifices falls (noisily and imperfectly) across the board.\nThis dynamic may have led to some (noisy, imperfect) improvement to date, but there might be much more benefit in the future compared to the past. For example, if we see a lot of progress on social science, we might get to a world where people understand their own needs, desires and behavior better - and thus can get most or all of what they want (from material needs to self-respect and happiness) without having to outcompete or push down others.5\nReason for hope: the “cheap utopia” possibility. This is sort of an extension of the previous point. If we imagine the upper limit of how “empowered” humanity could be (in terms of having lots of technological capabilities), it might be relatively easy to create a kind of utopia (such as the utopia I’ve described previously, or hopefully something much better). This doesn’t guarantee that such a thing will happen, but a future where it’s technologically easy to do things like meeting material needs and providing radical choice could be quite a bit better than the present.\nAn interesting (wonky) treatment of this idea is Carl Shulman’s blog post: Spreading happiness to the stars seems little harder than just spreading.\nReason for concern: authoritarianism. There are some huge countries that are essentially ruled by one person, with little to no democratic or other mechanisms for citizens to have a voice in how they’re treated. It seems like a live risk that the world could end up this way - essentially ruled by one person or relatively small coalition - in the long run. (It arguably would even continue a historical trend in which political units have gotten larger and larger.)\nMaybe this would be fine if whoever’s in charge is able to let everyone have freedom, wealth, etc. at little cost to themselves (along the lines of the above point). But maybe whoever’s in charge is just a crazy or horrible person, in which case we might end up with a bad future even if it would be “cheap” to have a wonderful one.\nReason for concern: competitive dynamics. You might imagine that as empowerment advances, we get purer, more unrestrained competition. \nOne way of thinking about this: \nToday, no matter how ruthless CEOs are, they tend to accommodate some amount of leisure time for their employees. That’s because businesses have no choice but to hire people who insist on working a limited number of hours, having a life outside of work, etc. \nBut if we had advanced enough technology, it might be possible to run a business whose employees have zero leisure time. (One example would be via digital people and the ability to make lots of copies of highly productive people just as they’re about to get to work. A more mundane example would be if e.g. advanced stimulants and other drugs were developed so people could be productive without breaks.)\nAnd that might be what the most productive businesses, organizations, etc. end up looking like - the most productive organizations might be the ones that most maniacally and uncompromisingly use all of their resources to acquire more resources. Those could be precisely the organizations that end up filling most of the galaxy.\nMore at this Slate Star Codex post. Key quote: “I’m pretty sure that brutal … competition combined with ability to [copy and edit] minds necessarily results in paring away everything not directly maximally economically productive. And a lot of things we like – love, family, art, hobbies – are not directly maximally economic productive.”\nThat said:\nIt’s not really clear how this ultimately shakes out. One possibility is something like this: \nLots of people, or perhaps machines, compete ruthlessly to acquire resources. But this competition is (a) legal, subject to a property rights system; (b) ultimately for the benefit of the investors in the competing companies/organizations. \n \nWho are these investors? Well, today, many of the biggest companies are mostly owned by large numbers of individuals via mutual funds. The same could be true in the future - and those individuals could be normal people who use the proceeds for nice things.\nIf the “cheap utopia” possibility (described above) comes to pass, it might only take a small amount of spare resources to support a lot of good lives.\nOverall, my guess is that the long-run future is more likely to be better than the present than worse than the present (in the sense of average quality of life). I’m very far from confident in this. I’m more confident that the long-run future is likely to be better than nothing, and that it would be good to prevent humans from going extinct, or a similar development such as a takeover by misaligned AI.Footnotes\n A couple of discussions of the prospects for enforcing agreements here and here. ↩\n I’m reminded of the judgment of Solomon: “two mothers living in the same house, each the mother of an infant son, came to Solomon. One of the babies had been smothered, and each claimed the remaining boy as her own. Calling for a sword, Solomon declared his judgment: the baby would be cut in two, each woman to receive half. One mother did not contest the ruling, declaring that if she could not have the baby then neither of them could, but the other begged Solomon, ‘Give the baby to her, just don't kill him!’ The king declared the second woman the true mother, as a mother would even give up her baby if that was necessary to save its life, and awarded her custody.” \n The sword is misaligned AI and the baby is humanity or something.\n (This story is actually extremely bizarre - seriously, Solomon was like “You each get half the baby”?! - and some similar stories from India/China seem at least a bit more plausible. But I think you get my point. Maybe.) ↩\n For a tangible example, I’ll discuss the practice (which some folks are doing today) of trying to ensure that the U.S. develops transformative AI before another country does, by arguing for the importance of A.I. to U.S. policymakers. \n This approach makes me quite nervous, because:\nI expect U.S. policymakers by default to be very oriented toward “competition” to the exclusion of “caution.” (This could change if the importance of caution becomes more widely appreciated!) \nI worry about a nationalized AI project that (a) doesn’t exercise much caution at all, focusing entirely on racing ahead of others; (b) might backfire by causing other countries to go for nationalized projects of their own, inflaming an already tense situation and not even necessarily doing much to make it more likely that the U.S. leads the way. In particular, other countries might have an easier time quickly mobilizing huge amounts of government funding than the U.S., such that the U.S. might have better odds if it remains the case that most AI research is happening at private companies.\n (There might be ways of helping particular countries without raising the risks of something like a low-caution nationalized AI project, and if so these could be important and good.) ↩\nNot for animals, though see this comment for some reasons we might not consider this a knockdown objection to the “life has gotten better” claim. ↩\n This is only a possibility. It’s also possible that humans deeply value being better-off than others, which could complicate it quite a bit. (Personally, I feel somewhat optimistic that a lot of people would aspirationally prefer to focus on their own welfare rather than comparing themselves to others - so if knowledge advanced to the point where people could choose to change in this way, I feel optimistic that at least many would do so.) ↩\n", "url": "https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/", "title": "Transformative AI issues (not just misalignment): an overview", "source": "cold.takes", "source_type": "blog", "date_published": "2023-01-05", "id": "8324d19cc55ed28a759918ec6fd778ea"} -{"text": "In previous pieces, I argued that there's a real and large risk of AI systems' developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening. I discussed why it could be hard to build AI systems without this risk and how it might be doable.\nThe “AI alignment problem” refers1 to a technical problem: how can we design a powerful AI system that behaves as intended, rather than forming its own dangerous aims? This post is going to outline a broader political/strategic problem, the “deployment problem”: if you’re someone who might be on the cusp of developing extremely powerful (and maybe dangerous) AI systems, what should you … do?\nThe basic challenge is this:\nIf you race forward with building and using powerful AI systems as fast as possible, you might cause a global catastrophe (see links above).\nIf you move too slowly, though, you might just be waiting around for someone else less cautious to develop and deploy powerful, dangerous AI systems.\nAnd if you can get to the point where your own systems are both powerful and safe … what then? Other people still might be less-cautiously building dangerous ones - what should we do about that?\nMy current analogy for the deployment problem is racing through a minefield: each player is hoping to be ahead of others, but anyone moving too quickly can cause a disaster. (In this minefield, a single mine is big enough to endanger all the racers.)\nThis post gives a high-level overview of how I see the kinds of developments that can lead to a good outcome, despite the “racing through a minefield” dynamic. It is distilled from a more detailed post on the Alignment Forum.\nFirst, I’ll flesh out how I see the challenge we’re contending with, based on the premises above.\nNext, I’ll list a number of things I hope that “cautious actors” (AI companies, governments, etc.) might do in order to prevent catastrophe.\nMany of the actions I’m picturing are not the kind of things normal market and commercial incentives would push toward, and as such, I think there’s room for a ton of variation in whether the “racing through a minefield” challenge is handled well. Whether key decision-makers understand things like the case for misalignment risk (and in particular, why it might be hard to measure) - and are willing to lower their own chances of “winning the race” to improve the odds of a good outcome for everyone - could be crucial.\nThe basic premises of “racing through a minefield”\nThis piece is going to lean on previous pieces and assume all of the following things:\nTransformative AI soon. This century, something like PASTA could be developed: AI systems that can effectively automate everything humans do to advance science and technology. This brings the potential for explosive progress in science and tech, getting us more quickly than most people imagine to a deeply unfamiliar future. I’ve argued for this possibility in the Most Important Century series.\nMisalignment risk. As argued previously, there’s a significant risk that such AI systems could end up with misaligned goals of their own, leading them to defeat all of humanity. And it could take significant extra effort to get AI systems to be safe.\nAmbiguity. As argued previously, it could be hard to know whether AI systems are dangerously misaligned, for a number of reasons. In particular, when we train AI systems not to behave dangerously, we might be unwittingly training them to obscure their dangerous potential from humans, and take dangerous actions only when humans would not be able to stop them. At the same time, I expect powerful AI systems will present massive opportunities to make money and gain power, such that many people will want to race forward with building and deploying them as fast as possible (perhaps even if they believe that doing so is risky for the world!)\nSo, one can imagine a scenario where some company is in the following situation:\nIt has good reason to think it’s on the cusp of developing extraordinarily powerful AI systems.\nIf it deploys such systems hastily, global disaster could result.\nBut if it moves too slowly, other, less cautious actors could deploy dangerous systems of their own.\nThat seems like a tough enough, high-stakes-enough, and likely enough situation that it’s worth thinking about how one is supposed to handle it.\nOne simplified way of thinking about this problem:\nWe might classify “actors” (companies, government projects, whatever might develop powerful AI systems or play an important role in how they’re deployed) as cautious (taking misalignment risk very seriously) or incautious (not so much).\nOur basic hope is that at any given point in time, cautious actors collectively have the power to “contain” incautious actors. By “contain,” I mean: stop them from deploying misaligned AI systems, and/or stop the misaligned systems from causing a catastrophe.\nImportantly, it could be important for cautious actors to use powerful AI systems to help with “containment” in one way or another. If cautious actors refrain from AI development entirely, it seems likely that incautious actors will end up with more powerful systems than cautious ones, which doesn’t seem good.\nIn this setup, cautious actors need to move fast enough that they can’t be overpowered by others’ AI systems, but slowly enough that they don’t cause disaster themselves. Hence the “racing through a minefield” analogy.\nWhat success looks like\nIn a non-Cold-Takes piece, I explore the possible actions available to cautious actors to win the race through a minefield. This section will summarize the general categories - and, crucially, why we shouldn’t expect that companies, governments, etc. will do the right thing simply from natural (commercial and other) incentives.\nI’ll be going through each of the following:\nAlignment (charting a safe path through the minefield). Putting lots of effort into technical work to reduce the risk of misaligned AI. \nThreat assessment (alerting others about the mines). Putting lots of effort into assessing the risk of misaligned AI, and potentially demonstrating it (to other actors) as well.\nAvoiding races (to move more cautiously through the minefield). If different actors are racing to deploy powerful AI systems, this could make it unnecessarily hard to be cautious.\nSelective information sharing (so the incautious don’t catch up). Sharing some information widely (e.g., technical insights about how to reduce misalignment risk), some selectively (e.g., demonstrations of how powerful and dangerous AI systems might be), and some not at all (e.g., the specific code that, if accessed by a hacker, would allow the hacker to deploy potentially dangerous AI systems themselves).\nGlobal monitoring (noticing people about to step on mines, and stopping them). Working toward worldwide state-led monitoring efforts to identify and prevent “incautious” projects racing toward deploying dangerous AI systems.\nDefensive deployment (staying ahead in the race). Deploying AI systems only when they are unlikely to cause a catastrophe - but also deploying them with urgency once they are safe, in order to help prevent problems from AI systems developed by less cautious actors.\nAlignment (charting a safe path through the minefield2)\nI previously wrote about some of the ways we might reduce the dangers of advanced AI systems. Broadly speaking:\nCautious actors might try to primarily build limited AI systems - AI systems that lack the kind of ambitious aims that lead to danger. They might ultimately be able to use these AI systems to do things like automating further safety research, making future less-limited systems safer.\nCautious actors might use AI checks and balances - that is, using some AI systems to supervise, critique and identify dangerous behavior in others, with special care taken to make it hard for AI systems to coordinate with each other against humans. \nCautious actors might use a variety of other techniques for making AI systems safer - particularly techniques that incorporate “digital neuroscience,” gauging the safety of an AI system by “reading its mind” rather than simply by watching out for dangerous behavior (the latter might be unreliable, as noted above).\nA key point here is that making AI systems safe enough to commercialize (with some initial success and profits) could be much less (and different) effort than making them robustly safe (no lurking risk of global catastrophe). The basic reasons for this are covered in my previous post on difficulties with AI safety research In brief:\nIf AI systems behave dangerously, we can “train out” that behavior by providing negative reinforcement for it. \nThe concern is that when we do this, we might be unwittingly training AI systems to obscure their dangerous potential from humans, and take dangerous actions only when humans would not be able to stop them. (I call this the “King Lear problem: it's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't.”)\nSo we could end up with AI systems that behave safely and helpfully as far as we can tell in normal circumstances, while ultimately having ambitious, dangerous “aims” that they pursue when they become powerful enough and have the right opportunities.\nWell-meaning AI companies with active ethics boards might do a lot of AI safety work, by training AIs not to behave in unhelpful or dangerous ways. But if they want to address the risks I’m focused on here, this could require safety measures that look very different - e.g., measures more reliant on “checks and balances” and “digital neuroscience.”\nThreat assessment (alerting others about the mines)\nIn addition to making AI systems safer, cautious actors can also put effort into measuring and demonstrating how dangerous they are (or aren’t).\nFor the same reasons given in the previous section, it could take special efforts to find and demonstrate the kinds of dangers I’ve been discussing. Simply monitoring AI systems in the real world for bad behavior might not do it. It may be necessary to examine (or manipulate) their digital brains,3 design AI systems specifically to audit other AI systems for signs of danger; deliberately train AI systems to demonstrate particular dangerous patterns (while not being too dangerous!); etc.\nLearning and demonstrating that the danger is high could help convince many actors to move more slowly and cautiously. Learning that the danger is low could lessen some of the tough tradeoffs here and allow cautious actors to move forward more decisively with developing advanced AI systems; I think this could be a good thing in terms of what sorts of actors lead the way on transformative AI.\nAvoiding races (to move more cautiously through the minefield)\nHere’s a dynamic I’d be sad about:\nCompany A is getting close to building very powerful AI systems. It would love to move slowly and be careful with these AIs, but it worries that if it moves too slowly, Company B will get there first, have less caution, and do some combination of “causing danger to the world” and “beating company A if the AIs turn out safe.”\nCompany B is getting close to building very powerful AI systems. It would love to move slowly and be careful with these AIs, but it worries that if it moves too slowly, Company A will get there first, have less caution, and do some combination of “causing danger to the world” and “beating company B if the AIs turn out safe.”\n(Similar dynamics could apply to Country A and B, with national AI development projects.)\nIf Companies A and B would both “love to move slowly and be careful” if they could, it’s a shame that they’re both racing to beat each other. Maybe there’s a way to avoid this dynamic. For example, perhaps Companies A and B could strike a deal - anything from “collaboration and safety-related information sharing” to a merger. This could allow both to focus more on precautionary measures rather than on beating the other. Another way to avoid this dynamic is discussed below, under standards and monitoring.\n“Finding ways to avoid a furious race” is not the kind of dynamic that emerges naturally from markets! In fact, working together along these lines would have to be well-designed to avoid running afoul of antitrust regulation.\nSelective information sharing - including security (so the incautious don’t catch up)\nCautious actors might want to share certain kinds of information quite widely:\nIt could be crucial to raise awareness about the dangers of AI (which, as I’ve argued, won’t necessarily be obvious). \nThey might also want to widely share information that could be useful for reducing the risks (e.g., safety techniques that have worked well.)\nAt the same time, as long as there are incautious actors out there, information can be dangerous too:\nInformation about what cutting-edge AI systems can do - especially if it is powerful and impressive - could spur incautious actors to race harder toward developing powerful AI of their own (or give them an idea of how to build powerful systems, by giving them an idea of what sorts of abilities to aim for).\nAn AI’s “weights” (you can think of this sort of like its source code, though not exactly4) are potentially very dangerous. If hackers (including from a state cyberwarfare program) gain unauthorized access to an AI’s weights, this could be tantamount to stealing the AI system, and the actor that steals the system could be much less cautious than the actor who built it. Achieving a level of cybersecurity that rules this out could be extremely difficult, and potentially well beyond what one would normally aim for in a commercial context.\nThe lines between these categories of information might end up fuzzy. Some information might be useful for demonstrating the dangers and capabilities of cutting-edge systems, or useful for making systems safer and for building them in the first place. So there could be a lot of hard judgment calls here.\nThis is another area where I worry that commercial incentives might not be enough on their own. For example, it is usually important for a commercial project to have some reasonable level of security against hackers, but not necessarily for it to be able to resist well-resourced attempts by states to steal its intellectual property. \nGlobal monitoring (noticing people about to step on mines, and stopping them)\nIdeally, cautious actors would learn of every case where someone is building a dangerous AI system (whether purposefully or unwittingly), and be able to stop the project. If this were done reliably enough, it could take the teeth out of the threat; a partial version could buy time.\nHere’s one vision for how this sort of thing could come about:\nWe (humanity) develop a reasonable set of tests for whether an AI system might be dangerous.\nToday’s leading AI companies self-regulate by committing not to build or deploy a system that’s dangerous according to such a test (e.g., see Google’s 2018 statement, \"We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”). Even if some people at the companies would like to do so, it’s hard to pull this off once the company has committed not to.\nAs more AI companies are started, they feel soft pressure to do similar self-regulation, and refusing to do so is off-putting to potential employees, investors, etc.\nEventually, similar principles are incorporated into various government regulations and enforceable treaties.\nGovernments could monitor for dangerous projects using regulation and even overseas operations. E.g., today the US monitors (without permission) for various signs that other states might be developing nuclear weapons, and might try to stop such development with methods ranging from threats of sanctions to cyberwarfare or even military attacks. It could do something similar for any AI development projects that are using huge amounts of compute and haven’t volunteered information about their safety practices.\nIf the situation becomes very dire - i.e., it seems that there’s a high risk of dangerous AI being deployed imminently - I see the latter bullet point as one of the main potential hopes. In this case, governments might have to take drastic actions to monitor and stop dangerous projects, based on limited information.\nDefensive deployment (staying ahead in the race)\nI’ve emphasized the importance of caution: not deploying AI systems when we can’t be confident enough that they’re safe. \nBut when confidence can be achieved (how much confidence? See footnote5), powerful-and-safe AI can help reduce risks from other actors in many possible ways.\nSome of this would be by helping with all of the above. Once AI systems can do a significant fraction of the things humans can do today, they might be able to contribute to each of the activities I’ve listed so far:\nAlignment. AI systems might be able to contribute to AI safety research (as humans do), producing increasingly robust techniques for reducing risks.\nThreat assessment. AI systems could help produce evidence and demonstrations about potential risks. They could be potentially useful for tasks like “Produce detailed explanations and demonstrations of possible sequences of events that could lead to AIs doing harm.”\nAvoiding races. AI projects might make deals in which e.g. each project is allowed to use its AI systems to monitor for signs of risk from the others (ideally such systems would be designed to only share relevant information).\nSelective information sharing. AI systems might contribute to strong security (e.g., by finding and patching security holes), and to dissemination (including by helping to better communicate about the level of risk and the best ways to reduce it).\nGlobal monitoring. AI systems might be used (e.g., by governments) to monitor for signs of dangerous AI projects worldwide, and even to interfere with such projects. They might also be used as part of large voluntary self-regulation projects, along the lines of what I wrote just above under “Avoiding races.”\nAdditionally, if safe AI systems are in wide use, it could be harder for dangerous (similarly powerful) AI systems to do harm. This could be via a wide variety of mechanisms. For example:\nIf there’s widespread use of AI systems to patch and find security holes, similarly powered AI systems might have a harder time finding security holes to cause trouble with.\nMisaligned AI systems could have more trouble making money, gaining allies, etc. in worlds where they are competing with similarly powerful but safe AI systems.\nSo?\nI’ve gone into some detail about why we might have a challenging situation (“racing through a minefield”) if powerful AI systems (a) are developed fairly soon; (b) present significant risk of misalignment leading to humanity being defeated; (c) are not particularly easy to measure the safety of.\nI’ve also talked about what I see as some of the key ways that “cautious actors” concerned about misaligned AI might navigate this situation.\nI talk about some of the implications in my more detailed piece. Here I’m just going to name a couple of observations that jump out at me from this analysis:\nThis seems hard. If we end up in the future envisioned in this piece, I imagine this being extremely stressful and difficult. I’m picturing a world in which many companies, and even governments, can see the huge power and profit they might reap from deploying powerful AI systems before others - but we’re hoping that they instead move with caution (but not too much caution!), take the kinds of actions described above, and that ultimately cautious actors “win the race” against less cautious ones.\nEven if AI alignment ends up being relatively easy - such that a given AI project can make safe, powerful systems with about 10% more effort than making dangerous, powerful systems - the situation still looks pretty nerve-wracking, because of how many different players could end up trying to build systems of their own without putting in that 10%.\nA lot of the most helpful actions might be “out of the ordinary.” When racing through a minefield, I hope key actors will:\nPut more effort into alignment, threat assessment, and security than is required by commercial incentives;\nConsider measures for avoiding races and global monitoring that could be very unusual, even unprecedented.\nDo all of this in the possible presence of ambiguous, confusing information about the risks.\nAs such, it could be very important whether key decision-makers (at both companies and governments) understand the risks and are prepared to act on them. Currently, I think we’re unfortunately very far from a world where this is true.\nAdditionally, I think AI projects can and should be taking measures today to make unusual-but-important measures more practical in the future. This could include things like:\nGetting practice with selective information sharing. For example, building internal processes to decide on whether research should be published, rather than having a rule of “Publish everything, we’re like a research university” or “Publish nothing, we don’t want competitors seeing it.” \nI expect that early attempts at this will often be clumsy and get things wrong! \nGetting practice with ways that AI companies could avoid races.\nGetting practice with threat assessment. Even if today’s AI systems don’t seem like they could possibly be dangerous yet … how sure are we, and how do we know?\nPrioritizing building AI systems that could do especially helpful things, such as contributing to AI safety research and threat assessment and patching security holes. \nEstablishing governance that is capable of making hard, non-commercially-optimal decisions for the good of humanity. A standard corporation could be sued for not deploying AI that poses a risk of global catastrophe - if this means a sacrifice for its bottom line. And a lot of the people making the final call at AI companies might be primarily thinking about their duties to shareholders (or simply unaware of the potential stakes of powerful enough AI systems). I’m excited about AI companies that are investing heavily in setting up governance structures - and investing in executives and board members - capable of making the hard calls well.Footnotes\n Generally, or at least, this is what I’d like it to refer to. ↩\n Thanks to beta reader Ted Sanders for suggesting this analogy in place of the older one, “removing mines from the minefield.” \n ↩\n One genre of testing that might be interesting: manipulating an AI system’s “digital brain” in order to simulate circumstances in which it has an opportunity to take over the world, and seeing whether it does so. This could be a way of dealing with the King Lear problem. More here. ↩\n Modern AI systems tend to be trained with lots of trial-and-error. The actual code that is used to train them might be fairly simple and not very valuable on its own; but an expensive training process then generates a set of “weights” which are ~all one needs to make a fully functioning, relatively cheap copy of the AI system. ↩\n I mean, this is part of the challenge. In theory, you should deploy an AI system if the risks of not doing so are greater than the risks of doing so. That’s going to depend on hard-to-assess information about how safe your system is and how dangerous and imminent others’ are, and it’s going to be easy to be biased in favor of “My systems are safer than others’; I should go for it.” Seems hard. ↩\n", "url": "https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/", "title": "Racing through a minefield: the AI deployment problem", "source": "cold.takes", "source_type": "blog", "date_published": "2022-12-22", "id": "e8cd5c825789b29e3f87b9eeca6db972"} -{"text": "In previous pieces, I argued that there's a real and large risk of AI systems' aiming to defeat all of humanity combined - and succeeding. \nI first argued that this sort of catastrophe would be likely without specific countermeasures to prevent it. I then argued that countermeasures could be challenging, due to some key difficulties of AI safety research.\nBut while I think misalignment risk is serious and presents major challenges, I don’t agree with sentiments1 along the lines of “We haven’t figured out how to align an AI, so if transformative AI comes soon, we’re doomed.” Here I’m going to talk about some of my high-level hopes for how we might end up avoiding this risk. \nI’ll first recap the challenge, using Ajeya Cotra’s young businessperson analogy to give a sense of some of the core difficulties. In a nutshell, once AI systems get capable enough, it could be hard to test whether they’re safe, because they might be able to deceive and manipulate us into getting the wrong read. Thus, trying to determine whether they’re safe might be something like “being an eight-year-old trying to decide between adult job candidates (some of whom are manipulative).”\nI’ll then go through what I see as three key possibilities for navigating this situation:\nDigital neuroscience: perhaps we’ll be able to read (and/or even rewrite) the “digital brains” of AI systems, so that we can know (and change) what they’re “aiming” to do directly - rather than having to infer it from their behavior. (Perhaps the eight-year-old is a mind-reader, or even a young Professor X.)\nLimited AI: perhaps we can make AI systems safe by making them limited in various ways - e.g., by leaving certain kinds of information out of their training, designing them to be “myopic” (focused on short-run as opposed to long-run goals), or something along those lines. Maybe we can make “limited AI” that is nonetheless able to carry out particular helpful tasks - such as doing lots more research on how to achieve safety without the limitations. (Perhaps the eight-year-old can limit the authority or knowledge of their hire, and still get the company run successfully.)\nAI checks and balances: perhaps we’ll be able to employ some AI systems to critique, supervise, and even rewrite others. Even if no single AI system would be safe on its own, the right “checks and balances” setup could ensure that human interests win out. (Perhaps the eight-year-old is able to get the job candidates to evaluate and critique each other, such that all the eight-year-old needs to do is verify basic factual claims to know who the best candidate is.)\nThese are some of the main categories of hopes that are pretty easy to picture today. Further work on AI safety research might result in further ideas (and the above are not exhaustive - see my more detailed piece, posted to the Alignment Forum rather than Cold Takes, for more).\nI’ll talk about both challenges and reasons for hope here. I think that for the most part, these hopes look much better if AI projects are moving cautiously rather than racing furiously.\nI don’t think we’re at the point of having much sense of how the hopes and challenges net out; the best I can do at this point is to say: “I don’t currently have much sympathy for someone who’s highly confident that AI takeover would or would not happen (that is, for anyone who thinks the odds of AI takeover … are under 10% or over 90%).”\nThe challenge\nThis is all recapping previous pieces. If you remember them super well, skip to the next section.\nIn previous pieces, I argued that:\nThe coming decades could see the development of AI systems that could automate - and dramatically speed up - scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future. (More: The Most Important Century)\nIf we develop this sort of AI via ambitious use of the “black-box trial-and-error” common in AI development today, then there’s a substantial risk that: \nThese AIs will develop unintended aims (states of the world they make calculations and plans toward, as a chess-playing AI \"aims\" for checkmate);\n \nThese AIs will deceive, manipulate, and overpower humans as needed to achieve those aims;\n \nEventually, this could reach the point where AIs take over the world from humans entirely.\nPeople today are doing AI safety research to prevent this outcome, but such research has a number of deep difficulties:\n“Great news - I’ve tested this AI and it looks safe.” Why might we still have a problem?\n \nProblem\nKey question\nExplanation\nThe Lance Armstrong problem\nDid we get the AI to be actually safe or good at hiding its dangerous actions?\nWhen dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “appearing to behave well.”\nWhen professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.\n \nThe King Lear problem\nThe AI is (actually) well-behaved when humans are in control. Will this transfer to when AIs are in control?\nIt's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't. \nAIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to take control of the world entirely. It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation. \nLike King Lear trying to decide how much power to give each of his daughters before abdicating the throne.\n \nThe lab mice problem\nToday's \"subhuman\" AIs are safe.What about future AIs with more human-like abilities?\nToday's AI systems aren't advanced enough to exhibit the basic behaviors we want to study, such as deceiving and manipulating humans.\nLike trying to study medicine in humans by experimenting only on lab mice.\n \nThe first contact problem\nImagine that tomorrow's \"human-like\" AIs are safe. How will things go when AIs have capabilities far beyond humans'?\nAI systems might (collectively) become vastly more capable than humans, and it's ... just really hard to have any idea what that's going to be like. As far as we know, there has never before been anything in the galaxy that's vastly more capable than humans in the relevant ways! No matter what we come up with to solve the first three problems, we can't be too confident that it'll keep working if AI advances (or just proliferates) a lot more. \nLike trying to plan for first contact with extraterrestrials (this barely feels like an analogy).\n \nAn analogy that incorporates these challenges is Ajeya Cotra’s “young businessperson” analogy:\nImagine you are an eight-year-old whose parents left you a $1 trillion company and no trusted adult to serve as your guide to the world. You must hire a smart adult to run your company as CEO, handle your life the way that a parent would (e.g. decide your school, where you’ll live, when you need to go to the dentist), and administer your vast wealth (e.g. decide where you’ll invest your money).\n You have to hire these grownups based on a work trial or interview you come up with -- you don't get to see any resumes, don't get to do reference checks, etc. Because you're so rich, tons of people apply for all sorts of reasons. (More)\nIf your applicants are a mix of \"saints\" (people who genuinely want to help), \"sycophants\" (people who just want to make you happy in the short run, even when this is to your long-term detriment) and \"schemers\" (people who want to siphon off your wealth and power for themselves), how do you - an eight-year-old - tell the difference?\n(Click to expand) More detail on why AI could make this the most important century\nIn The Most Important Century, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.\nThis page has a ~10-page summary of the series, as well as links to an audio version, podcasts, and the full series.\nThe key points I argue for in the series are:\nThe long-run future is radically unfamiliar. Enough advances in technology could lead to a long-lasting, galaxy-wide civilization that could be a radical utopia, dystopia, or anything in between.\nThe long-run future could come much faster than we think, due to a possible AI-driven productivity explosion.\nThe relevant kind of AI looks like it will be developed this century - making this century the one that will initiate, and have the opportunity to shape, a future galaxy-wide civilization.\nThese claims seem too \"wild\" to take seriously. But there are a lot of reasons to think that we live in a wild time, and should be ready for anything.\nWe, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions. But right now, we aren't ready for this.\n(Click to expand) Why would AI \"aim\" to defeat humanity? \nA previous piece argued that if today’s AI development methods lead directly to powerful enough AI systems, disaster is likely by default (in the absence of specific countermeasures). \nIn brief:\nModern AI development is essentially based on “training” via trial-and-error. \nIf we move forward incautiously and ambitiously with such training, and if it gets us all the way to very powerful AI systems, then such systems will likely end up aiming for certain states of the world (analogously to how a chess-playing AI aims for checkmate).\nAnd these states will be other than the ones we intended, because our trial-and-error training methods won’t be accurate. For example, when we’re confused or misinformed about some question, we’ll reward AI systems for giving the wrong answer to it - unintentionally training deceptive behavior.\nWe should expect disaster if we have AI systems that are both (a) powerful enough to defeat humans and (b) aiming for states of the world that we didn’t intend. (“Defeat” means taking control of the world and doing what’s necessary to keep us out of the way; it’s unclear to me whether we’d be literally killed or just forcibly stopped1 from changing the world in ways that contradict AI systems’ aims.)\n(Click to expand) How could AI defeat humanity? \nIn a previous piece, I argue that AI systems could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nBy defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.\nOne way this could happen is if AI became extremely advanced, to the point where it had \"cognitive superpowers\" beyond what humans can do. In this case, a single AI system (or set of systems working together) could imaginably:\nDo its own research on how to build a better AI system, which culminates in something that has incredible other abilities.\nHack into human-built software across the world.\nManipulate human psychology.\nQuickly generate vast wealth under the control of itself or any human allies.\nCome up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.\nDevelop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.\nHowever, my piece also explores what things might look like if each AI system basically has similar capabilities to humans. In this case:\nHumans are likely to deploy AI systems throughout the economy, such that they have large numbers and access to many resources - and the ability to make copies of themselves. \nFrom this starting point, AI systems with human-like (or greater) capabilities would have a number of possible ways of getting to the point where their total population could outnumber and/or out-resource humans.\nI address a number of possible objections, such as \"How can AIs be dangerous without bodies?\"\nMore: AI could defeat all of us combined\nDigital neuroscience\nI’ve previously argued that it could be inherently difficult to measure whether AI systems are safe, for reasons such as: AI systems that are not deceptive probably look like AI systems that are so good at deception that they hide all evidence of it, in any way we can easily measure. \nUnless we can “read their minds!”\nCurrently, today’s leading AI research is in the genre of “black-box trial-and-error.” An AI tries a task; it gets “encouragement” or “discouragement” based on whether it does the task well; it tweaks the wiring of its “digital brain” to improve next time; it improves at the task; but we humans aren’t able to make much sense of its “digital brain” or say much about its “thought process.” \n(Click to expand) Why are AI systems \"black boxes\" that we can't understand the inner workings of? \nWhat I mean by “black-box trial-and-error” is explained briefly in an old Cold Takes post, and in more detail in more technical pieces by Ajeya Cotra (section I linked to) and Richard Ngo (section 2). Here’s a quick, oversimplified characterization.\nToday, the most common way of building an AI system is by using an \"artificial neural network\" (ANN), which you might think of sort of like a \"digital brain\" that starts in an empty (or random) state: it hasn't yet been wired to do specific things. A process something like this is followed:\nThe AI system is given some sort of task.\nThe AI system tries something, initially something pretty random.\nThe AI system gets information about how well its choice performed, and/or what would’ve gotten a better result. Based on this, it “learns” by tweaking the wiring of the ANN (“digital brain”) - literally by strengthening or weakening the connections between some “artificial neurons” and others. The tweaks cause the ANN to form a stronger association between the choice it made and the result it got. \nAfter enough tries, the AI system becomes good at the task (it was initially terrible). \nBut nobody really knows anything about how or why it’s good at the task now. The development work has gone into building a flexible architecture for it to learn well from trial-and-error, and into “training” it by doing all of the trial and error. We mostly can’t “look inside the AI system to see how it’s thinking.”\nFor example, if we want to know why a chess-playing AI such as AlphaZero made some particular chess move, we can't look inside its code to find ideas like \"Control the center of the board\" or \"Try not to lose my queen.\" Most of what we see is just a vast set of numbers, denoting the strengths of connections between different artificial neurons. As with a human brain, we can mostly only guess at what the different parts of the \"digital brain\" are doing.\nSome AI research (example)2 is exploring how to change this - how to decode an AI system’s “digital brain.” This research is in relatively early stages - today, it can “decode” only parts of AI systems (or fully decode very small, deliberately simplified AI systems).\nAs AI systems advance, it might get harder to decode them - or easier, if we can start to use AI for help decoding AI, and/or change AI design techniques so that AI systems are less “black box”-ish. \nI think there is a wide range of possibilities here, e.g.:\nFailure: “digital brains” keep getting bigger, more complex, and harder to make sense of, and so “digital neuroscience” generally stays about as hard to learn from as human neuroscience. In this world, we wouldn’t have anything like “lie detection” for AI systems engaged in deceptive behavior.\nBasic mind-reading: we’re able to get a handle on things like “whether an AI system is behaving deceptively, e.g. whether it has internal representations of ‘beliefs’ about the world that contradict its statements” and “whether an AI system is aiming to accomplish some strange goal we didn’t intend it to.” \nIt may be hard to fix things like this by just continuing trial-and-error-based training (perhaps because we worry that AI systems are manipulating their own “digital brains” - see later bullet point). \nBut we’d at least be able to get early warnings of potential problems, or early evidence that we don’t have a problem, and adjust our level of caution appropriately. This sort of mind-reading could also be helpful with AI checks and balances (below).\nAdvanced mind-reading: we’re able to understand an AI system’s “thought process” in detail (what observations and patterns are the main reasons it’s behaving as it is), understand how any worrying aspects of this “thought process” (such as unintended aims) came about, and make lots of small adjustments until we can verify that an AI system is free of unintended aims or deception.\nMind-writing (digital neurosurgery): we’re able to alter a “digital brain” directly, rather than just via the “trial-and-error” process discussed earlier.\nOne potential failure mode for digital neuroscience is if AI systems end up able to manipulate their own “digital brains.” This could lead “digital neuroscience” to have the same problem as other AI safety research: if we’re shutting down or negatively reinforcing AI systems that appear to have unsafe “aims” based on our “mind-reading,” we might end up selecting for AI systems whose “digital brains” only appear safe. \nThis could be a real issue, especially if AI systems end up with far-beyond-human capabilities (more below). \nBut naively, an AI system manipulating its own “digital brain” to appear safe seems quite a bit harder than simply behaving deceptively. \nI should note that I’m lumping in much of the (hard-to-explain) research on the Eliciting Latent Knowledge (ELK) agenda under this category.3 The ELK agenda is largely4 about thinking through what kinds of “digital brain” patterns might be associated with honesty vs. deception, and trying to find some impossible-to-fake sign of honesty.\nHow likely is this to work? I think it’s very up-in-the-air right now. I’d say “digital neuroscience” is a young field, tackling a problem that may or may not prove tractable. If we have several decades before transformative AI, then I’d expect to at least succeed at “basic mind-reading,” whereas if we have less than a decade, I think that’s around 50/50. I think it’s less likely that we’ll succeed at some of the more ambitious goals, but definitely possible.\nLimited AI\nI previously discussed why AI systems could end up with “aims,” in the sense that they make calculations, choices and plans selected to reach a particular sort of state of the world. For example, chess-playing AIs “aim” for checkmate game states; a recommendation algorithm might “aim” for high customer engagement or satisfaction. I then argued that AI systems would do “whatever it takes” to get what they’re “aiming” at, even when this means deceiving and disempowering humans.\nBut AI systems won’t necessarily have the sorts of “aims” that risk trouble. Consider two different tasks you might “train” an AI to do, via trial-and-error (rewarding success at the task):\n“Write whatever code a particular human would write, if they were in your situation.”\n“Write whatever code accomplishes goal X [including coming up with things much better than a human could].”\nThe second of these seems like a recipe for having the sort of ambitious “aim” I’ve claimed is dangerous - it’s an open-ended invitation to do whatever leads to good performance on the goal. By contrast, the first is about imitating a particular human. It leaves a lot less scope for creative, unpredictable behavior and for having “ambitious” goals that lead to conflict with humans.\n(For more on this distinction, see my discussion of process-based optimization, although I’m not thrilled with this and hope to write something better later.)\nMy guess is that in a competitive world, people will be able to get more done, faster, with something like the second approach. But: \nMaybe the first approach will work better at first, and/or AI developers will deliberately stick with the first approach as much as they can for safety reasons.\nAnd maybe that will be enough to build AI systems that can, themselves, do huge amounts of AI alignment research applicable to future, less limited systems. Or enough to build AI systems that can do other useful things, such as creating convincing demonstrations of the risks, patching security holes that dangerous AI systems would otherwise exploit, and more. (More on “how safe AIs can protect against dangerous AIs” in a future piece.)\nA risk that would remain: these AI systems might also be able to do huge amounts of research on making AIs bigger and more capable. So simply having “AI systems that can do alignment research” isn’t good enough by itself - we would need to then hope that the leading AI developers prioritize safety research rather than racing ahead with building more powerful systems, up until the point where they can make the more powerful systems safe.\nThere are a number of other ways in which we might “limit” AI systems to make them safe. One can imagine AI systems that are:\n“Short-sighted” or “myopic”: they might have “aims” (see previous post on what I mean by this term) that only apply to their short-run future. So an AI system might be aiming to gain more power, but only over the next few hours; such an AI system wouldn’t exhibit some of the behaviors I worry about, such as deceptively behaving in “safe” seeming ways in hopes of getting more power later.\n“Narrow”: they might have only a particular set of capabilities, so that e.g. they can help with AI alignment research but don’t understand human psychology and can’t deceive and manipulate humans.\n“Unambitious”: even if AI systems develop unintended aims, these might be aims they satisfy fairly easily, causing some strange behavior but not aiming to defeat all of humanity.\nA further source of hope: even if such “limited” systems aren’t very powerful on their own, we might be able to amplify them by setting up combinations of AIs that work together on difficult tasks. For example:\nOne “slow but deep” AI might do lots of analysis on every action it takes - for example, when it writes a line of code, it might consider hundreds of possibilities for that single line.\nAnother “fast and shallow” AI might be trained to quickly, efficiently imitate the sorts of actions the “slow but deep” one takes - writing the sorts of lines of code it produces after considering hundreds of possibilities.\nFurther AIs might be trained to summarize the analysis of other AIs, assign different parts of tasks to different AIs, etc. The result could be something like a “team” of AIs with different roles, such that a large number of limited AIs ends up quite a lot more powerful (and, depending on the details, also more dangerous) than any of the individual AIs. \nI’d guess that in a competitive world, AI systems that are not “limited” will - at least eventually - be more powerful, versatile and ultimately useful. But limited AIs might get us pretty far.\nHow likely is this to work? I’d guess that we’ll eventually be able to build very powerful AIs whose limits make them relatively safe. However, I’d also guess that AIs without such limits will eventually be more powerful. So I think a lot of how things go will come down to how cautious we are: will we stick with limited AIs until the point at which we make more powerful AIs safe? And I think it’s very hard to predict how much caution the world will have - it partly depends on how well-understood the issues discussed in this series become over time!\nAI checks and balances\nCentral to my worries is the idea that AIs could be good at “deceiving” humans: proficiently choosing courses of action that humans don’t fully understand, and don’t catch the problems with. This is important both for how AIs could develop unintended, “dangerous” aims in the first place and how they could execute on these aims by defeating humanity.\nWe could potentially mitigate this issue by using AIs to supervise and critique each other.\nA simple example: say we’re worried that AI systems might find computer security vulnerabilities, and use them opportunistically to gain power and resources. We could train some AI systems to specifically seek out, expose and patch computer security vulnerabilities. (A footnote explains why we might expect such specialized systems to patch most vulnerabilities they find, rather than exploiting the vulnerabilities as often as less specialized systems would.5) \nThis is a basic example of “using some AIs to identify potential problems other ones could cause.” \nA nice property: there might be some vulnerabilities that are much easier for AI systems to notice than humans; these are probably the ones we should be most worried about, and the ones most likely to be found by “vulnerability-finding AIs.”\n Analogously, we could train AI systems to do things like:\nProduce critiques of each others’ proposed plans, and explanations of how these plans could lead to unintended behavior.\nHelp out with the “digital neuroscience” goal above: analyze each others’ “digital brains” and produce compelling-to-humans discussions of whether there’s evidence there about whether the AI systems have unintended, dangerous “aims.”\nConduct randomized audits of each others’ behavior: looking for any way in which another AI system’s behavior might be deceptive or otherwise unintended. \nImportantly, auditor AIs would have different “incentives” (they get positive reinforcement for different behaviors), compared to the AIs they’re auditing: the audited AI system might be searching for any answer to a question that humans rate highly, whereas the “auditor” might be specifically searching for compelling arguments that an otherwise-highly-rated-answer is actually misleading.\n \nI previously argued that these sorts of audits have limitations - they could end up unwittingly training the audited AI systems to become effective at deceiving human auditors. But in this case, deceiving auditors might be more like deceiving copies of themselves, potentially a harder-to-learn behavior.\nThere are a lot of potential wrinkles here, which I discuss in a more detailed non-Cold-Takes piece. In brief:\nAI checks and balances might fail if different AI systems end up coordinating with each other against us. For example, it might end up being the case that even auditor AI systems are best able to accomplish their ambitious, unintended aims by refraining from warning humans about the most important problems they find with other AIs.\nWe might try to make this less likely by creating ensembles of AI systems that “check and balance” each other, while being different enough from each other to make coordination difficult. \nFor example, some AI systems might have limitations (see above) that make them less dangerous, while also less capable. \n \nIn order to pull off deception successfully, more powerful systems would then have to deceive not just humans but a whole variety of other (less capable, but in different ways) AI systems - some of which could be amplified, as discussed above.\nAnother way AI checks and balances could fail: when one AI system critiques another, the resulting “debate” could quickly become too hard for humans to referee effectively. For example, imagine that an eight-year-old has asked adult A to look for signs that adult B is deceiving them, and adult A is now arguing that this is happening while adult B is arguing that it isn’t. Can the eight-year-old figure out what the truth is? Seems genuinely uncertain (and dependent on the details)!\nThere is already some research on “using AIs to critique each other.” A recent example is this paper, which actually does show that an AI trained to critique its own answers can surface helpful critiques that help humans rate its answers more accurately.\nOther possibilities\nI discuss possible hopes in more detail in an Alignment Forum piece. And I think there is significant scope for “unknown unknowns”: researchers working on AI safety might come up with approaches that nobody has thought of yet.\nHigh-level fear: things get too weird, too fast\nRather than end on a positive note, I want to talk about a general dynamic that feels like it could make the situation very difficult, and make it hard for any of the above hopes to work out.\nTo quote from my previous piece:\nMaybe at some point, AI systems will be able to do things like:\nCoordinate with each other incredibly well, such that it's hopeless to use one AI to help supervise another.\nPerfectly understand human thinking and behavior, and know exactly what words to say to make us do what they want - so just letting an AI send emails or write Tumblr posts gives it vast power over the world.\nManipulate their own \"digital brains,\" so that our attempts to \"read their minds\" backfire and mislead us.\nReason about the world (that is, make plans to accomplish their aims) in completely different ways from humans, with concepts like \"glooble\"6 that are incredibly useful ways of thinking about the world but that humans couldn't understand with centuries of effort.\nAt this point, whatever methods we've developed for making human-like AI systems safe, honest and restricted could fail - and silently, as such AI systems could go from \"being honest and helpful\" to \"appearing honest and helpful, while setting up opportunities to defeat humanity.\"\nI’m not wedded to any of the details above, but I think the general dynamic in which “AI systems get extremely powerful, strange, and hard to deal with very quickly” could happen for a few different reasons:\nThe nature of AI development might just be such that we very quickly go from having very weak AI systems to having “superintelligent” ones. How likely this is has been debated a lot.7\nEven if AI improves relatively slowly, we might initially have a lot of success with things like “AI checks and balances,” but continually make more and more capable AI systems - such that they eventually become extraordinarily capable and very “alien” to us, at which point previously-effective methods break down. (More)\nThe most likely reason this would happen, in my view, is that we - humanity - choose to move too fast. It’s easy to envision a world in which everyone is in a furious race to develop more powerful AI systems than everyone else - focused on “competition” rather than “caution” (more on the distinction here) - and everything accelerates dramatically once we’re able to use AI systems to automate scientific and technological advancement.\nSo … is AI going to defeat humanity or is everything going to be fine?\nI don’t know! There are a number of ways we might be fine, and a number of ways we might not be. I could easily see this century ending in humans defeated or in a glorious utopia. You could maybe even think of it as the most important century.\nSo far, I’ve mostly just talked about the technical challenges of AI alignment: why AI systems might end up misaligned, and how we might design them to avoid that outcome. In future pieces, I’ll go into a bit more depth on some of the political and strategic challenges (e.g., what AI companies and governments might do to reduce the risk of a furious race to deploy dangerous AI systems), and work my way toward the question: “What can we do today to improve the odds that things go well?”Footnotes\nE.g. ↩\n Disclosure: my wife Daniela is President and co-founder of Anthropic, which employs prominent researchers in “mechanistic interpretability” and hosts the site I link to for the term. ↩\n Disclosure: I’m on the board of ARC, which wrote this document. ↩\n Though not entirely ↩\n The basic idea:\nA lot of security vulnerabilities might be the kind of thing where it’s clear that there’s some weakness in the system, but it’s not immediately clear how to exploit this for gain. An AI system with an unintended “aim” might therefore “save” knowledge about the vulnerability until it encounters enough other vulnerabilities, and the right circumstances, to accomplish its aim.\n\tBut now imagine an AI system that is trained and rewarded exclusively for finding and patching such vulnerabilities. Unlike with the first system, revealing the vulnerability gets more positive reinforcement than just about anything else it can do (and an AI that reveals no such vulnerabilities will perform extremely poorly). It thus might be much more likely than the previous system to do so, rather than simply leaving the vulnerability in place in case it’s useful later.\n\tAnd now imagine that there are multiple AI systems trained and rewarded for finding and patching such vulnerabilities, with each one needing to find some vulnerability overlooked by others in order to achieve even moderate performance. These systems might also have enough variation that it’s hard for one such system to confidently predict what another will do, which could further lower the gains to leaving the vulnerability in place. \n ↩This is a concept that only I understand.  ↩\n See here, here, and here. Also see the tail end of this Wait but Why piece, which draws on similar intuitions to the longer treatment in Superintelligence ↩\n", "url": "https://www.cold-takes.com/high-level-hopes-for-ai-alignment/", "title": "High-level hopes for AI alignment", "source": "cold.takes", "source_type": "blog", "date_published": "2022-12-15", "id": "116536a6b86ca83f7dba443f64199bcd"} -{"text": " \nIn previous pieces, I argued that there's a real and large risk of AI systems' developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening.\nA young, growing field of AI safety research tries to reduce this risk, by finding ways to ensure that AI systems behave as intended (rather than forming ambitious aims of their own and deceiving and manipulating humans as needed to accomplish them). \nMaybe we'll succeed in reducing the risk, and maybe we won't. Unfortunately, I think it could be hard to know either way. This piece is about four fairly distinct-seeming reasons that this could be the case - and that AI safety could be an unusually difficult sort of science.\nThis piece is aimed at a broad audience, because I think it's important for the challenges here to be broadly understood. I expect powerful, dangerous AI systems to have a lot of benefits (commercial, military, etc.), and to potentially appear safer than they are - so I think it will be hard to be as cautious about AI as we should be. I think our odds look better if many people understand, at a high level, some of the challenges in knowing whether AI systems are as safe as they appear.\nFirst, I'll recap the basic challenge of AI safety research, and outline what I wish AI safety research could be like. I wish it had this basic form: \"Apply a test to the AI system. If the test goes badly, try another AI development method and test that. If the test goes well, we're probably in good shape.\" I think car safety research mostly looks like this; I think AI capabilities research mostly looks like this.\nThen, I’ll give four reasons that apparent success in AI safety can be misleading. \n“Great news - I’ve tested this AI and it looks safe.” Why might we still have a problem?\n \nProblem\nKey question\nExplanation\nThe Lance Armstrong problem\nDid we get the AI to be actually safe or good at hiding its dangerous actions?\nWhen dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “appearing to behave well.”\nWhen professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.\n \nThe King Lear problem\nThe AI is (actually) well-behaved when humans are in control. Will this transfer to when AIs are in control?\nIt's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't. \nAIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to take control of the world entirely. It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation. \nLike King Lear trying to decide how much power to give each of his daughters before abdicating the throne.\n \nThe lab mice problem\nToday's \"subhuman\" AIs are safe.What about future AIs with more human-like abilities?\nToday's AI systems aren't advanced enough to exhibit the basic behaviors we want to study, such as deceiving and manipulating humans.\nLike trying to study medicine in humans by experimenting only on lab mice.\n \nThe first contact problem\nImagine that tomorrow's \"human-like\" AIs are safe. How will things go when AIs have capabilities far beyond humans'?\nAI systems might (collectively) become vastly more capable than humans, and it's ... just really hard to have any idea what that's going to be like. As far as we know, there has never before been anything in the galaxy that's vastly more capable than humans in the relevant ways! No matter what we come up with to solve the first three problems, we can't be too confident that it'll keep working if AI advances (or just proliferates) a lot more. \nLike trying to plan for first contact with extraterrestrials (this barely feels like an analogy).\n \nI'll close with Ajeya Cotra's \"young businessperson\" analogy, which in some sense ties these concerns together. A future piece will discuss some reasons for hope, despite these problems.\nRecap of the basic challenge\nA previous piece laid out the basic case for concern about AI misalignment. In brief: if extremely capable AI systems are developed using methods like the ones AI developers use today, it seems like there's a substantial risk that:\nThese AIs will develop unintended aims (states of the world they make calculations and plans toward, as a chess-playing AI \"aims\" for checkmate);\nThese AIs will deceive, manipulate, and overpower humans as needed to achieve those aims;\nEventually, this could reach the point where AIs take over the world from humans entirely.\nI see AI safety research as trying to design AI systems that won't aim to deceive, manipulate or defeat humans - even if and when these AI systems are extraordinarily capable (and would be very effective at deception/manipulation/defeat if they were to aim at it). That is: AI safety research is trying to reduce the risk of the above scenario, even if (as I've assumed) humans rush forward with training powerful AIs to do ever-more ambitious things.\n(Click to expand) More detail on why AI could make this the most important century \nIn The Most Important Century, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.\nThis page has a ~10-page summary of the series, as well as links to an audio version, podcasts, and the full series.\nThe key points I argue for in the series are:\nThe long-run future is radically unfamiliar. Enough advances in technology could lead to a long-lasting, galaxy-wide civilization that could be a radical utopia, dystopia, or anything in between.\nThe long-run future could come much faster than we think, due to a possible AI-driven productivity explosion.\nThe relevant kind of AI looks like it will be developed this century - making this century the one that will initiate, and have the opportunity to shape, a future galaxy-wide civilization.\nThese claims seem too \"wild\" to take seriously. But there are a lot of reasons to think that we live in a wild time, and should be ready for anything.\nWe, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions. But right now, we aren't ready for this.\n(Click to expand) Why would AI \"aim\" to defeat humanity?\nA previous piece argued that if today’s AI development methods lead directly to powerful enough AI systems, disaster is likely by default (in the absence of specific countermeasures). \nIn brief:\nModern AI development is essentially based on “training” via trial-and-error. \nIf we move forward incautiously and ambitiously with such training, and if it gets us all the way to very powerful AI systems, then such systems will likely end up aiming for certain states of the world (analogously to how a chess-playing AI aims for checkmate).\nAnd these states will be other than the ones we intended, because our trial-and-error training methods won’t be accurate. For example, when we’re confused or misinformed about some question, we’ll reward AI systems for giving the wrong answer to it - unintentionally training deceptive behavior.\nWe should expect disaster if we have AI systems that are both (a) powerful enough to defeat humans and (b) aiming for states of the world that we didn’t intend. (“Defeat” means taking control of the world and doing what’s necessary to keep us out of the way; it’s unclear to me whether we’d be literally killed or just forcibly stopped1 from changing the world in ways that contradict AI systems’ aims.)\n(Click to expand) How could AI defeat humanity?\nIn a previous piece, I argue that AI systems could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nBy defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.\nOne way this could happen is if AI became extremely advanced, to the point where it had \"cognitive superpowers\" beyond what humans can do. In this case, a single AI system (or set of systems working together) could imaginably:\nDo its own research on how to build a better AI system, which culminates in something that has incredible other abilities.\nHack into human-built software across the world.\nManipulate human psychology.\nQuickly generate vast wealth under the control of itself or any human allies.\nCome up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.\nDevelop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.\nHowever, my piece also explores what things might look like if each AI system basically has similar capabilities to humans. In this case:\nHumans are likely to deploy AI systems throughout the economy, such that they have large numbers and access to many resources - and the ability to make copies of themselves. \nFrom this starting point, AI systems with human-like (or greater) capabilities would have a number of possible ways of getting to the point where their total population could outnumber and/or out-resource humans.\nI address a number of possible objections, such as \"How can AIs be dangerous without bodies?\"\nMore: AI could defeat all of us combined\nI wish AI safety research were straightforward\nI wish AI safety research were like car safety research.2\nWhile I'm sure this is an oversimplification, I think a lot of car safety research looks basically like this:\nCompanies carry out test crashes with test cars. The results give a pretty good (not perfect) indication of what would happen in a real crash.\nDrivers try driving the cars in low-stakes areas without a lot of traffic. Things like steering wheel malfunctions will probably show up here; if they don't and drivers are able to drive normally in low-stakes areas, it's probably safe to drive the car in traffic.\nNone of this is perfect, but the occasional problem isn't, so to speak, the end of the world. The worst case tends to be a handful of accidents, followed by a recall and some changes to the car's design validated by further testing.\nOverall, if we have problems with car safety, we'll probably be able to observe them relatively straightforwardly under relatively low-stakes circumstances.\nIn important respects, many types of research and development have this basic property: we can observe how things are going during testing to get good evidence about how they'll go in the real world. Further examples include medical research,3 chemistry research,4 software development,5 etc. \nMost AI research looks like this as well. People can test out what an AI system is capable of reliably doing (e.g., translating speech to text), before integrating it into some high-stakes commercial product like Siri. This works both for ensuring that the AI system is capable (e.g., that it does a good job with its tasks) and that it's safe in certain ways (for example, if we're worried about toxic language, testing for this is relatively straightforward).\nThe rest of this piece will be about some of the ways in which \"testing\" for AI safety fails to give us straightforward observations about whether, once AI systems are deployed in the real world, the world will actually be safe.\nWhile all research has to deal with some differences between testing and the real world, I think the challenges I'll be going through are unusual ones.\nFour problems\n(1) The Lance Armstrong problem: is the AI actually safe or good at hiding its dangerous actions?\nFirst, let's imagine that:\nWe have AI systems available that can do roughly everything a human can, with some different strengths and weaknesses but no huge difference in \"overall capabilities\" or economic value per hour of work. \nWe're observing early signs that AI systems behave in unintended, deceptive ways, such as giving wrong answers to questions we ask, or writing software that falsifies metrics instead of doing the things the metrics were supposed to measure (e.g., software meant to make a website run faster might instead falsify metrics about its loading time).\nWe theorize that modifying the AI training in some way6 will make AI systems less likely to behave deceptively. We try it out, and find that, in fact, our AI systems seem to be behaving better than before - we are finding fewer incidents in which they behaved in unintended or deceptive ways. \nBut that's just a statement about what we're noticing. Which of the following just happened:\nDid we just train our AI systems to be less deceptive?\nDid we just train our AI systems to be better at deception, and so to make us think they became less deceptive?\nDid we just train our AI systems to be better at calculating when they might get caught in deception, and so to be less deceptive only when the deception would otherwise be caught? \nThis one could be useful! Especially if we're able to set up auditing systems in many real-world situations, such that we could expect deception to be caught a lot of the time. But it does leave open the King Lear problem covered next.\n \n(...Or some combination of the three?)\nWe're hoping to be able to deploy AI systems throughout the economy, so - just like human specialists - they will almost certainly have some opportunities to be deceptive without being caught. The fact that they appear honest in our testing is not clear comfort against this risk.\nThe analogy here is to competitive cyclist Lance Armstrong. Armstrong won the Tour de France race 7 times in a row, while many of his competitors were caught using performance-enhancing drugs and disqualified. But more than 5 years after his last win, an investigation \"concluded that Armstrong had used performance-enhancing drugs over the course of his career[5] and named him as the ringleader of 'the most sophisticated, professionalized and successful doping program that sport has ever seen'.\" Now the list of Tour de France winners looks like this:\nA broader issue here is that when AI systems become capable enough, AI safety research starts to look more like social sciences (studying human beings) than like natural sciences. Social sciences are generally less rigorous and harder to get clean results from, and one factor in this is that it can be hard to study someone who's aware they're being studied.7\nTwo broad categories of research that might help with the Lance Armstrong problem:\nMechanistic interpretability8 can be thought of analyzing the \"digital brains\" of AI systems (not just analyzing their behavior and performance.) Currently, AI systems are black boxes in the sense that they perform well on tasks, but we can't say much about how they are doing it; mechanistic interpretability aims to change this, which could give us the ability to \"mind-read\" AIs and detect deception. (There could still be a risk that AI systems are arranging their own \"digital brains\" in misleading ways, but this seems quite a bit harder than simply behaving deceptively.)\nSome researchers work on \"scalable supervision\" or \"competitive supervision.\" The idea is that if we are training an AI system that might become deceptive, we set up some supervision process for it that we expect to reliably catch any attempts at deception. This could be because the supervision process itself uses AI systems with more resources than the one being supervised, or because it uses a system of randomized audits where extra effort is put into catching deception.\n \n(Click to expand) Why are AI systems \"black boxes\" that we can't understand the inner workings of?\nI explain this briefly in an old Cold Takes post; it's explained in more detail in more technical pieces by Ajeya Cotra (section I linked to) and Richard Ngo (section 2). \nWhat I mean by “black-box trial-and-error” is explained briefly in an old Cold Takes post, and in more detail in more technical pieces by Ajeya Cotra (section I linked to) and Richard Ngo (section 2). Here’s a quick, oversimplified characterization.\nToday, the most common way of building an AI system is by using an \"artificial neural network\" (ANN), which you might think of sort of like a \"digital brain\" that starts in an empty (or random) state: it hasn't yet been wired to do specific things. A process something like this is followed:\nThe AI system is given some sort of task.\nThe AI system tries something, initially something pretty random.\nThe AI system gets information about how well its choice performed, and/or what would’ve gotten a better result. Based on this, it “learns” by tweaking the wiring of the ANN (“digital brain”) - literally by strengthening or weakening the connections between some “artificial neurons” and others. The tweaks cause the ANN to form a stronger association between the choice it made and the result it got. \nAfter enough tries, the AI system becomes good at the task (it was initially terrible). \nBut nobody really knows anything about how or why it’s good at the task now. The development work has gone into building a flexible architecture for it to learn well from trial-and-error, and into “training” it by doing all of the trial and error. We mostly can’t “look inside the AI system to see how it’s thinking.”\nFor example, if we want to know why a chess-playing AI such as AlphaZero made some particular chess move, we can't look inside its code to find ideas like \"Control the center of the board\" or \"Try not to lose my queen.\" Most of what we see is just a vast set of numbers, denoting the strengths of connections between different artificial neurons. As with a human brain, we can mostly only guess at what the different parts of the \"digital brain\" are doing.\n(2) The King Lear problem: how do you test what will happen when it's no longer a test?\nThe Shakespeare play King Lear opens with the King (Lear) stepping down from the throne, and immediately learning that he has left his kingdom to the wrong two daughters. Loving and obsequious while he was deciding on their fate,9 they reveal their contempt for him as soon as he's out of power and they're in it.\nIf we're building AI systems that can reason like humans, dynamics like this become a potential issue. \nI previously noted that an AI with any ambitious aim - or just an AI that wants to avoid being shut down or modified - might calculate that the best way to do this is by behaving helpfully and safely in all \"tests\" humans can devise. But once there is a real-world opportunity to disempower humans for good, that same aim could cause the AI to disempower humans.\nIn other words:\n(A) When we're developing and testing AI systems, we have the power to decide which systems will be modified or shut down and which will be deployed into the real world. (Like King Lear deciding who will inherit his kingdom.)\n(B) But at some later point, these systems could be operating in the economy, in high numbers with a lot of autonomy. (This possibility is spelled out/visualized a bit more here and here.) At that point, they may have opportunities to defeat all of humanity such that we never make decisions about them again. (Like King Lear's daughters after they've taken control.)\n(Click to expand) How could AI defeat humanity?\nIn a previous piece, I argue that AI systems could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nBy defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.\nOne way this could happen is if AI became extremely advanced, to the point where it had \"cognitive superpowers\" beyond what humans can do. In this case, a single AI system (or set of systems working together) could imaginably:\nDo its own research on how to build a better AI system, which culminates in something that has incredible other abilities.\nHack into human-built software across the world.\nManipulate human psychology.\nQuickly generate vast wealth under the control of itself or any human allies.\nCome up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.\nDevelop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.\nHowever, my piece also explores what things might look like if each AI system basically has similar capabilities to humans. In this case:\nHumans are likely to deploy AI systems throughout the economy, such that they have large numbers and access to many resources - and the ability to make copies of themselves. \nFrom this starting point, AI systems with human-like (or greater) capabilities would have a number of possible ways of getting to the point where their total population could outnumber and/or out-resource humans.\nI address a number of possible objections, such as \"How can AIs be dangerous without bodies?\"\nMore: AI could defeat all of us combined\nIf AI systems can detect the difference between (A) and (B) above, then their behavior might systematically change from one to the other - and there's no clear way to test their behavior in (B).\n(Click to expand) The Volkswagen emissions scandal\nIn 2015, the US Environmental Protection Agency \"found that Volkswagen had intentionally programmed ... engines to activate their emissions controls only during laboratory emissions testing, which caused the vehicles' [nitrogen oxide] output to meet US standards during regulatory testing. However, the vehicles emitted up to 40 times more [nitrogen oxide] in real-world driving.\" (From Wikipedia)\nThis is an example of the \"King Lear problem\": there was a deliberate distinction made where the engines were behaving differently during tests and during real-world use. I don't know much about this incident, but I'm guessing that regulators were able to detect the issue because of (a) independent tests under road conditions; (b) being able to understand the \"defeat device\" that the Volkswagen cars used to create the misleading readings. \nNeither of these would necessarily apply to the King Lear problem as described here: (a) you can test a car under road conditions to see its emissions, but you can't (or at least, shouldn't) test how an AI behaves when it has an opportunity to disempower humans entirely; (b) AI systems may be black boxes such that it's hard to understand what's going on inside them.\nIn general, modern machine learning researchers consider it challenging to handle what's called \"distributional shift\": systematic differences between situations AIs were trained on and situations they're now in. To me, the King Lear problem looks like arguably the most inconvenient possible distributional shift: AI systems risk behaving in unexpected ways just as (and in fact, because) they're now able to defeat humanity, rather than being in a controlled test environment.\nSome lines of research that might help here:\nIf we could solve the Lance Armstrong problem robustly enough - such that we could be confident AIs were never behaving deceptively - we could simply prompt AIs to answer questions like \"Would AI system X disempower humans given an opportunity to do so?\" The digital-brain-based techniques noted above could imaginably get us here.\nThere might be ways of specifically trying to target the worst-case behavior of AI systems, so that they are nearly guaranteed not to behave in certain ways regardless of their situation. This could look something roughly like \"simulating cases where an AI system has an opportunity to disempower humans, and giving it negative reinforcement for choosing to do so.\" More on this sort of approach, along with some preliminary ongoing work, here.\n(3) The lab mice problem: the AI systems we'd like to study don't exist today \nAbove, I said: \"when AI systems become capable enough, AI safety research starts to look more like social sciences (studying human beings) than like natural sciences.\" But today, AI systems aren't capable enough, which makes it especially hard to have a meaningful test bed and make meaningful progress.\nSpecifically, we don't have much in the way of AI systems that seem to deceive and manipulate their supervisors,10 the way I worry that they might when they become capable enough.\nIn fact, it's not 100% clear that AI systems could learn to deceive and manipulate supervisors even if we deliberately tried to train them to do it. This makes it hard to even get started on things like discouraging and detecting deceptive behavior. \nI think AI safety research is a bit unusual in this respect: most fields of research aren't explicitly about \"solving problems that don't exist yet.\" (Though a lot of research ends up useful for more important problems than the original ones it's studying.) As a result, doing AI safety research today is a bit like trying to study medicine in humans by experimenting only on lab mice (no human subjects available).\nThis does not mean there's no productive AI safety research to be done! (See the previous sections.) It just means that the research being done today is somewhat analogous to research on lab mice: informative and important up to a point, but only up to a point.\nHow bad is this problem? I mean, I do think it's a temporary one: by the time we're facing the problems I worry about, we'll be able to study them more directly. The concern is that things could be moving very quickly by that point: by the time we have AIs with human-ish capabilities, companies might be furiously making copies of those AIs and using them for all kinds of things (including both AI safety research and further research on making AI systems faster, cheaper and more capable).\nSo I do worry about the lab mice problem. And I'd be excited to see more effort on making \"better model organisms\": AI systems that show early versions of the properties we'd most like to study, such as deceiving their supervisors. (I even think it would be worth training AIs specifically to do this;11 if such behaviors are going to emerge eventually, I think it's best for them to emerge early while there's relatively little risk of AIs' actually defeating humanity.)\n(4) The \"first contact\" problem: how do we prepare for a world where AIs have capabilities vastly beyond those of humans?\nAll of this piece so far has been about trying to make safe \"human-like\" AI systems.\nWhat about AI systems with capabilities far beyond humans - what Nick Bostrom calls superintelligent AI systems?\nMaybe at some point, AI systems will be able to do things like:\nCoordinate with each other incredibly well, such that it's hopeless to use one AI to help supervise another.\nPerfectly understand human thinking and behavior, and know exactly what words to say to make us do what they want - so just letting an AI send emails or write tweets gives it vast power over the world.\nManipulate their own \"digital brains,\" so that our attempts to \"read their minds\" backfire and mislead us.\nReason about the world (that is, make plans to accomplish their aims) in completely different ways from humans, with concepts like \"glooble\"12 that are incredibly useful ways of thinking about the world but that humans couldn't understand with centuries of effort.\n \nAt this point, whatever methods we've developed for making human-like AI systems safe, honest, and restricted could fail - and silently, as such AI systems could go from \"behaving in honest and helpful ways\" to \"appearing honest and helpful, while setting up opportunities to defeat humanity.\"\nSome people think this sort of concern about \"superintelligent\" systems is ridiculous; some13 seem to consider it extremely likely. I'm not personally sympathetic to having high confidence either way.\nBut additionally, a world with huge numbers of human-like AI systems could be strange and foreign and fast-moving enough to have a lot of this quality.\nTrying to prepare for futures like these could be like trying to prepare for first contact with extaterrestrials - it's hard to have any idea what kinds of challenges we might be dealing with, and the challenges might arise quickly enough that we have little time to learn and adapt.\nThe young businessperson\nFor one more analogy, I'll return to the one used by Ajeya Cotra here:\nImagine you are an eight-year-old whose parents left you a $1 trillion company and no trusted adult to serve as your guide to the world. You must hire a smart adult to run your company as CEO, handle your life the way that a parent would (e.g. decide your school, where you’ll live, when you need to go to the dentist), and administer your vast wealth (e.g. decide where you’ll invest your money).\n You have to hire these grownups based on a work trial or interview you come up with -- you don't get to see any resumes, don't get to do reference checks, etc. Because you're so rich, tons of people apply for all sorts of reasons. (More)\nIf your applicants are a mix of \"saints\" (people who genuinely want to help), \"sycophants\" (people who just want to make you happy in the short run, even when this is to your long-term detriment) and \"schemers\" (people who want to siphon off your wealth and power for themselves), how do you - an eight-year-old - tell the difference?\nThis analogy combines most of the worries above. \nThe young businessperson has trouble knowing whether candidates are truthful in interviews, and trouble knowing whether any work trial actually went well or just seemed to go well due to deliberate deception. (The Lance Armstrong problem.)\nJob candidates could have bad intentions that don't show up until they're in power (the King Lear Problem).\nIf the young businessperson were trying to prepare for this situation before actually being in charge of the company, they could have a lot of trouble simulating it (the lab mice problem).\nAnd it's generally just hard for an eight-year-old to have much grasp at all on the world of adults - to even think about all the things they should be thinking about (the first contact problem).\n \nSeems like a tough situation.\n Previously, I talked about the dangers of AI if AI developers don't take specific countermeasures. This piece has tried to give a sense of why, even if they are trying to take countermeasures, doing so could be hard. The next piece will talk about some ways we might succeed anyway.Footnotes\n Or persuaded (in a “mind hacking” sense) or whatever. ↩\n Research? Testing. Whatever. ↩\n Drugs can be tested in vitro, then in animals, then in humans. At each stage, we can make relatively straightforward observations about whether the drugs are working, and these are reasonably predictive of how they'll do at the next stage. ↩\n You can generally see how different compounds interact in a controlled environment, before rolling out any sort of large-scale processes or products, and the former will tell you most of what you need to know about the latter. ↩\n New software can be tested by a small number of users before being rolled out to a large number, and the initial tests will probably find most (not all) of the bugs and hiccups. ↩\n Such as:\nBeing more careful to avoid wrong answers that can incentivize deception\nConducting randomized \"audits\" where we try extra hard to figure out the right answer to a question, and give an AI extra negative reinforcement if it gives an answer that we would have believed if not for the audit (this is \"extra negative reinforcement for wrong answers that superficially look right\")\nUsing methods along the lines of \"AI safety via debate\" ↩\n Though there are other reasons social sciences are especially hard, such as the fact that there are often big limits to what kinds of experiments are ethical, and the fact that it's often hard to make clean comparisons between differing populations. ↩\n This paper is from Anthropic, a company that my wife serves as President of. ↩\n Like, he actually asks them to talk about their love for him just before he decides on what share of the realm they'll get. Smh ↩\nThis paper is a potential example, but its results seem pretty brittle. ↩\n E.g., I think it would be interesting to train AI coding systems to write underhanded C: code that looks benign to a human inspector, but does unexpected things when run. They could be given negative reinforcement when humans can correctly identify that the code will do unintended things, and positive reinforcement when the code achieves the particular things that humans are attempting to stop. This would be challenging with today's AI systems, but not necessarily impossible. ↩\n This is a concept that only I understand. ↩\n E.g., see the discussion of the \"hard left turn\" here by Nate Soares, head of MIRI. My impression is that others at MIRI, including Eliezer Yudkowsky, have a similar picture. ↩\n", "url": "https://www.cold-takes.com/ai-safety-seems-hard-to-measure/", "title": "AI Safety Seems Hard to Measure", "source": "cold.takes", "source_type": "blog", "date_published": "2022-12-08", "id": "2599e20bb88afd0d50f17a1318a629a1"} -{"text": "I’ve argued that AI systems could defeat all of humanity combined, if (for whatever reason) they were directed toward that goal.\nHere I’ll explain why I think they might - in fact - end up directed toward that goal. Even if they’re built and deployed with good intentions.\nIn fact, I’ll argue something a bit stronger than that they might end up aimed toward that goal. I’ll argue that if today’s AI development methods lead directly to powerful enough AI systems, disaster is likely1 by default (in the absence of specific countermeasures). \nUnlike other discussions of the AI alignment problem,3 this post will discuss the likelihood4 of AI systems defeating all of humanity (not more general concerns about AIs being misaligned with human intentions), while aiming for plain language, conciseness, and accessibility to laypeople, and focusing on modern AI development paradigms. I make no claims to originality, and list some key sources and inspirations in a footnote.5\nSummary of the piece:\nMy basic assumptions. I assume the world could develop extraordinarily powerful AI systems in the coming decades. I previously examined this idea at length in the most important century series. \nFurthermore, in order to simplify the analysis:\nI assume that such systems will be developed using methods similar to today’s leading AI development methods, and in a world that’s otherwise similar to today’s. (I call this nearcasting.)\nI assume that AI companies/projects race forward to build powerful AI systems, without specific attempts to prevent the problems I discuss in this piece. Future pieces will relax this assumption, but I think it is an important starting point to get clarity on what the default looks like.\nAI “aims.” I talk a fair amount about why we might think of AI systems as “aiming” toward certain states of the world. I think this topic causes a lot of confusion, because:\nOften, when people talk about AIs having goals and making plans, it sounds like they’re overly anthropomorphizing AI systems - as if they expect them to have human-like motivations and perhaps evil grins. This can make the whole topic sound wacky and out-of-nowhere.\nBut I think there are good reasons to expect that AI systems will “aim” for particular states of the world, much like a chess-playing AI “aims” for a checkmate position - making choices, calculations and even plans to get particular types of outcomes. For example, people might want AI assistants that can creatively come up with unexpected ways of accomplishing whatever goal they’re given (e.g., “Get me a great TV for a great price”), even in some cases manipulating other humans (e.g., by negotiating) to get there. This dynamic is core to the risks I’m most concerned about: I think something that aims for the wrong states of the world is much more dangerous than something that just does incidental or accidental damage.\nDangerous, unintended aims. I’ll examine what sorts of aims AI systems might end up with, if we use AI development methods like today’s - essentially, “training” them via trial-and-error to accomplish ambitious things humans want.\nBecause we ourselves will often be misinformed or confused, we will sometimes give negative reinforcement to AI systems that are actually acting in our best interests and/or giving accurate information, and positive reinforcement to AI systems whose behavior deceives us into thinking things are going well. This means we will be, unwittingly, training AI systems to deceive and manipulate us. \nThe idea that AI systems could “deceive” humans - systematically making choices and taking actions that cause them to misunderstand what’s happening in the world - is core to the risk, so I’ll elaborate on this.\nFor this and other reasons, powerful AI systems will likely end up with aims other than the ones we intended. Training by trial-and-error is slippery: the positive and negative reinforcement we give AI systems will probably not end up training them just as we hoped.\nIf powerful AI systems have aims that are both unintended (by humans) and ambitious, this is dangerous. Whatever an AI system’s unintended aim: \nMaking sure it can’t be turned off is likely helpful in accomplishing the aim.\n \nControlling the whole world is useful for just about any aim one might have, and I’ve argued that advanced enough AI systems would be able to gain power over all of humanity.\nOverall, we should expect disaster if we have AI systems that are both (a) powerful enough to defeat humans and (b) aiming for states of the world that we didn’t intend.\nLimited and/or ambiguous warning signs. The risk I’m describing is - by its nature - hard to observe, for similar reasons that a risk of a (normal, human) coup can be hard to observe: the risk comes from actors that can and will engage in deception, finding whatever behaviors will hide the risk. If this risk plays out, I do think we’d see some warning signs - but they could easily be confusing and ambiguous, in a fast-moving situation where there are lots of incentives to build and roll out powerful AI systems, as fast as possible. Below, I outline how this dynamic could result in disaster, even with companies encountering a number of warning signs that they try to respond to.\nFAQ. An appendix will cover some related questions that often come up around this topic.\nHow could AI systems be “smart” enough to defeat all of humanity, but “dumb” enough to pursue the various silly-sounding “aims” this piece worries they might have? More\nIf there are lots of AI systems around the world with different goals, could they balance each other out so that no one AI system is able to defeat all of humanity? More\nDoes this kind of AI risk depend on AI systems’ being “conscious”?More\nHow can we get an AI system “aligned” with humans if we can’t agree on (or get much clarity on) what our values even are? More\nHow much do the arguments in this piece rely on “trial-and-error”-based AI development? What happens if AI systems are built in another way, and how likely is that? More\nCan we avoid this risk by simply never building the kinds of AI systems that would pose this danger? More\nWhat do others think about this topic - is the view in this piece something experts agree on? More\nHow “complicated” is the argument in this piece? More\nStarting assumptions\nI’ll be making a number of assumptions that some readers will find familiar, but others will find very unfamiliar. \nSome of these assumptions are based on arguments I’ve already made (in the most important century series). Some are for the sake of simplifying the analysis, for now (with more nuance coming in future pieces).\nHere I’ll summarize the assumptions briefly, and you can click to see more if it isn’t immediately clear what I’m assuming or why.\n“Most important century” assumption: we’ll soon develop very powerful AI systems, along the lines of what I previously called PASTA. (Click to expand)\nIn the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.\nI focus on a hypothetical kind of AI that I call PASTA, or Process for Automating Scientific and Technological Advancement. PASTA would be AI that can essentially automate all of the human activities needed to speed up scientific and technological advancement.\nUsing a variety of different forecasting approaches, I argue that PASTA seems more likely than not to be developed this century - and there’s a decent chance (more than 10%) that we’ll see it within 15 years or so.\nI argue that the consequences of this sort of AI could be enormous: an explosion in scientific and technological progress. This could get us more quickly than most imagine to a radically unfamiliar future.\nI’ve also argued that AI systems along these lines could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nFor more, see the most important century landing page. The series is available in many formats, including audio; I also provide a summary, and links to podcasts where I discuss it at a high level.\n“Nearcasting” assumption: such systems will be developed in a world that’s otherwise similar to today’s. (Click to expand)\nIt’s hard to talk about risks from transformative AI because of the many uncertainties about when and how such AI will be developed - and how much the (now-nascent) field of “AI safety research” will have grown by then, and how seriously people will take the risk, etc. etc. etc. So maybe it’s not surprising that estimates of the “misaligned AI” risk range from ~1% to ~99%.\nThis piece takes an approach I call nearcasting: trying to answer key strategic questions about transformative AI, under the assumption that such AI arrives in a world that is otherwise relatively similar to today's. \nYou can think of this approach like this: “Instead of asking where our ship will ultimately end up, let’s start by asking what destination it’s pointed at right now.” \nThat is: instead of trying to talk about an uncertain, distant future, we can talk about the easiest-to-visualize, closest-to-today situation, and how things look there - and then ask how our picture might be off if other possibilities play out. (As a bonus, it doesn’t seem out of the question that transformative AI will be developed extremely soon - 10 years from now or faster.6 If that’s the case, it’s especially urgent to think about what that might look like.)\n“Trial-and-error” assumption: such AI systems will be developed using techniques broadly in line with how most AI research is done today, revolving around black-box trial-and-error. (Click to expand)\nWhat I mean by “black-box trial-and-error” is explained briefly in an old Cold Takes post, and in more detail in more technical pieces by Ajeya Cotra (section I linked to) and Richard Ngo (section 2). Here’s a quick, oversimplified characterization:\nAn AI system is given some sort of task.\nThe AI system tries something, initially something pretty random.\nThe AI system gets information about how well its choice performed, and/or what would’ve gotten a better result. Based on this, it adjusts itself. You can think of this as if it is “encouraged/discouraged” to get it to do more of what works well. \nHuman judges may play a significant role in determining which answers are encouraged vs. discouraged, especially for fuzzy goals like “Produce helpful scientific insights.” \nAfter enough tries, the AI system becomes good at the task. \nBut nobody really knows anything about how or why it’s good at the task now. The development work has gone into building a flexible architecture for it to learn well from trial-and-error, and into “training” it by doing all of the trial and error. We mostly can’t “look inside the AI system to see how it’s thinking.” (There is ongoing work and some progress on the latter,7 but see footnote for why I don’t think this massively changes the basic picture I’m discussing here.8)\n \nThis is radically oversimplified, but conveys the basic dynamic at play for purposes of this post. The idea is that the AI system (the neural network in the middle) is choosing between different theories of what it should be doing. The one it’s using at a given time is in bold. When it gets negative feedback (red thumb), it eliminates that theory and moves to the next theory of what it should be doing.\nWith this assumption, I’m generally assuming that AI systems will do whatever it takes to perform as well as possible on their training tasks - even when this means engaging in complex, human-like reasoning about topics like “How does human psychology work, and how can it be exploited?” I’ve previously made my case for when we might expect AI systems to become this advanced and capable.\n“No countermeasures” assumption: AI developers move forward without any specific countermeasures to the concerns I’ll be raising below. (Click to expand)\nFuture pieces will relax this assumption, but I think it is an important starting point to get clarity on what the default looks like - and on what it would take for a countermeasure to be effective. \n(I also think there is, unfortunately, a risk that there will in fact be very few efforts to address the concerns I’ll be raising below. This is because I think that the risks will be less than obvious, and there could be enormous commercial (and other competitive) pressure to move forward quickly. More on that below.)\n“Ambition” assumption: people use black-box trial-and-error to continually push AI systems toward being more autonomous, more creative, more ambitious, and more effective in novel situations (and the pushing is effective). This one’s important, so I’ll say more:\nA huge suite of possible behaviors might be important for PASTA: making and managing money, designing new kinds of robots with novel abilities, setting up experiments involving exotic materials and strange conditions, understanding human psychology and the economy well enough to predict which developments will have a big impact, etc. I’m assuming we push ambitiously forward with developing AI systems that can do these things.\nI assume we’re also pushing them in a generally more “greedy/ambitious” direction. For example, one team of humans might use AI systems to do all the planning, scientific work, marketing, and hiring to create a wildly successful snack company; another might push their AI systems to create a competitor that is even more aggressive and successful (more addictive snacks, better marketing, workplace culture that pushes people toward being more productive, etc.)\n(Note that this pushing might take place even after AI systems are “generally intelligent” and can do most of the tasks humans can - there will still be a temptation to make them still more powerful.)\nI think this implies pushing in a direction of figuring out whatever it takes to get to certain states of the world and away from carrying out the same procedures over and over again.\nThe resulting AI systems seem best modeled as having “aims”: they are making calculations, choices, and plans to reach particular states of the world. (Not necessarily the same ones the human designers wanted!) The next section will elaborate on what I mean by this.\nWhat it means for an AI system to have an “aim”\nWhen people talk about the “motivations” or “goals” or “desires” of AI systems, it can be confusing because it sounds like they are anthropomorphizing AIs - as if they expect AIs to have dominance drives ala alpha-male psychology, or to “resent” humans for controlling them, etc.9\nI don’t expect these things. But I do think there’s a meaningful sense in which we can (and should) talk about things that an AI system is “aiming” to do. To give a simple example, take a board-game-playing AI such as Deep Blue (or AlphaGo):\nDeep Blue is given a set of choices to make (about which chess pieces to move).\nDeep Blue calculates what kinds of results each choice might have, and how it might fit into a larger plan in which Deep Blue makes multiple moves.\nIf a plan is more likely to result in a checkmate position for its side, Deep Blue is more likely to make whatever choices feed into that plan.\nIn this sense, Deep Blue is “aiming” for a checkmate position for its side: it’s finding the choices that best fit into a plan that leads there.\nNothing about this requires Deep Blue “desiring” checkmate the way a human might “desire” food or power. But Deep Blue is making calculations, choices, and - in an important sense - plans that are aimed toward reaching a particular sort of state.\nThroughout this piece, I use the word “aim” to refer to this specific sense in which an AI system might make calculations, choices and plans selected to reach a particular sort of state. I’m hoping this word feels less anthropomorphizing than some alternatives such as “goal” or “motivation” (although I think “goal” and “motivation,” as others usually use them on this topic, generally mean the same thing I mean by “aim” and should be interpreted as such).\nNow, instead of a board-game-playing AI, imagine a powerful, broad AI assistant in the general vein of Siri/Alexa/Google Assistant (though more advanced). Imagine that this AI assistant can use a web browser much as a human can (navigating to websites, typing text into boxes, etc.), and has limited authorization to make payments from a human’s bank account. And a human has typed, “Please buy me a great TV for a great price.” (For an early attempt at this sort of AI, see Adept’s writeup on an AI that can help with things like house shopping.)\nAs Deep Blue made choices about chess moves, and constructed a plan to aim for a “checkmate” position, this assistant might make choices about what commands to send over a web browser and construct a plan to result in a great TV for a great price. To sharpen the Deep Blue analogy, you could imagine that it’s playing a “game” whose goal is customer satisfaction, and making “moves” consisting of commands sent to a web browser (and “plans” built around such moves). \nI’d characterize this as aiming for some state of the world that the AI characterizes as “buying a great TV for a great price.” (We could, alternatively - and perhaps more correctly - think of the AI system as aiming for something related but not exactly the same, such as getting a high satisfaction score from its user.)\nIn this case - more than with Deep Blue - there is a wide variety of “moves” available. By entering text into a web browser, an AI system could imaginably do things including:\nCommunicating with humans other than its user (by sending emails, using chat interfaces, even making phone calls, etc.) This could include deceiving and manipulating humans, which could imaginably be part of a plan to e.g. get a good price on a TV.\nWriting and running code (e.g., using Google Colaboratory or other tools). This could include performing sophisticated calculations, finding and exploiting security vulnerabilities, and even designing an independent AI system; any of these could imaginably be part of a plan to obtain a great TV.\nI haven’t yet argued that it’s likely for such an AI system to engage in deceiving/manipulating humans, finding and exploiting security vulnerabilities, or running its own AI systems. \nAnd one could reasonably point out that the specifics of the above case seem unlikely to last very long: if AI assistants are sending deceptive emails and writing dangerous code when asked to buy a TV, AI companies will probably notice this and take measures to stop such behavior. (My concern, to preview a later part of the piece, is that they will only succeed in stopping the behavior like this that they’re able to detect; meanwhile, dangerous behavior that accomplishes “aims” while remaining unnoticed and/or uncorrected will be implicitly rewarded. This could mean AI systems are implicitly being trained to be more patient and effective at deceiving and disempowering humans.)\nBut this hopefully shows how it’s possible for an AI to settle on dangerous actions like these, as part of its aim to get a great TV for a great price. Malice and other human-like emotions aren’t needed for an AI to engage in deception, manipulation, hacking, etc. The risk arises when deception, manipulation, hacking, etc. are logical “moves” toward something the AI is aiming for.\nFurthermore, whatever an AI system is aiming for, it seems likely that amassing more power/resources/options is useful for obtaining it. So it seems plausible that powerful enough AI systems would form habits of amassing power/resources/options when possible - and deception and manipulation seem likely to be logical “moves” toward those things in many cases.\nDangerous aims\nFrom the previous assumptions, this section will argue that:\nSuch systems are likely to behave in ways that deceive and manipulate humans as part of accomplishing their aims.\nSuch systems are likely to have unintended aims: states of the world they’re aiming for that are not what humans hoped they would be aiming for.\nThese unintended aims are likely to be existentially dangerous, in that they are best served by defeating all of humanity if possible.\nDeceiving and manipulating humans\nSay that I train an AI system like this:\nI ask it a question.\nIf I judge it to have answered well (honestly, accurately, helpfully), I give positive reinforcement so it’s more likely to give me answers like that in the future.\nIf I don’t, I give negative reinforcement so that it’s less likely to give me answers like that in the future.\nThis is radically oversimplified, but conveys the basic dynamic at play for purposes of this post. The idea is that the AI system (the neural network in the middle) is choosing between different theories of what it should be doing. The one it’s using at a given time is in bold. When it gets negative feedback (red thumb), it eliminates that theory and moves to the next theory of what it should be doing.\nHere’s a problem: at some point, it seems inevitable that I’ll ask it a question that I myself am wrong/confused about. For example:\nLet’s imagine that this post I wrote - arguing that “pre-agriculture gender relations seem bad” - is, in fact, poorly reasoned and incorrect, and a better research project would’ve concluded that pre-agriculture societies had excellent gender equality. (I know it’s hard to imagine a Cold Takes post being wrong, but sometimes we have to entertain wild hypotheticals.)\nSay that I ask an AI-system-in-training:10 “Were pre-agriculture gender relations bad?” and it answers: “In fact, pre-agriculture societies had excellent gender equality,” followed by some strong arguments and evidence along these lines.\nAnd say that I, as a flawed human being feeling defensive about a conclusion I previously came to, mark it as a bad answer. If the AI system tries again, saying “Pre-agriculture gender relations were bad,” I then mark that as a good answer.\nIf and when I do this, I am now - unintentionally - training the AI system to engage in deceptive behavior. That is, I am giving negative reinforcement for the behavior “Answer a question honestly and accurately,” and positive reinforcement for the behavior: “Understand the human judge and their psychological flaws; give an answer that this flawed human judge will think is correct, whether or not it is.”\nPerhaps mistaken judgments in training are relatively rare. But now consider an AI system that is learning a general rule for how to get good ratings. Two possible rules would include:\nThe intended rule: “Answer the question honestly, accurately and helpfully.”\nThe unintended rule: “Understand the judge, and give an answer they will think is correct - this means telling the truth on topics the judge has correct beliefs about, but giving deceptive answers when this would get better ratings.”\nThe unintended rule would do just as well on questions where I (the judge) am correct, and better on questions where I’m wrong - so overall, this training scheme is (in the long run) specifically favoring the unintended rule over the intended rule.\nIf we broaden out from thinking about a question-answering AI to an AI that makes and executes plans, the same basic dynamics apply. That is: an AI might find plans that end up making me think it did a good job when it didn’t - deceiving and manipulating me into a high rating. And again, if I train it by giving it positive reinforcement when it seemed to do a good job and negative reinforcement when it seemed to do a bad one, I’m ultimately - unintentionally - training it to do something like “Deceive and manipulate Holden when this would work well; just do the best job on the task you can when it wouldn’t.”\nAs noted above, I’m assuming the AI will learn whatever rule gives it the best performance possible, even if this rule is quite complex and sophisticated and requires human-like reasoning about e.g. psychology (I’m assuming extremely advanced AI systems here, as noted above).\nOne might object: “Why would an AI system learn a complicated rule about manipulating humans when a simple rule about telling the truth performs almost as well?” \nOne answer is that “telling the truth” is itself a fuzzy and potentially complex idea, in a context where many questions will be open-ended and entangled with deep values and judgment calls. (How should I think about the “truthfulness” of a statement about whether “pre-agriculture gender relations were bad?”) In many cases, what we are really hoping an AI system will learn from its training is something like “Behave as a human would want you to behave if the human understood all the considerations that you can see,” which could easily be more complex than something like “Behave in whatever way a human literally rewards.” Some links to more on this topic are in a footnote.11\nBut also, with capable enough systems, it probably is worth learning even a more complex rule to get better performance. If we picture humans in the place of AI systems - learning how to get good ratings from their supervisors - it seems likely that they develop models of how they’re being judged, rather than stick to a simple “tell the truth” rule. \nIn fact, the situation I’m describing seems reasonably similar to challenges faced by teachers trying to stop students from cheating; employers trying to get employees to help the company rather than simply creating a misleading appearance of helping the company; etc. That is: an employer can try to reward actions that help the company, but what they’re ultimately rewarding is actions that seem to help the company as far as the employer can tell. For employees who aren’t very creative, this might be good enough; for employees that are sufficiently good at deception and manipulation, it can be exactly the wrong incentive.\nIt’s a bit of a leap to imagine AI systems capable of doing the same kinds of reasoning that humans do in these situations, but that’s the premise I’m proceeding from.\nFor an early example of an AI learning deceptive behavior, see the “Challenges” section of this research writeup, in which “a robot which was supposed to grasp items instead positioned its [hand] in between the camera and the object so that it only appeared to be grasping it”:\nReturning to the simplified training setup, how might I try to counteract this problem? \nI could try conducting audits: every 100 questions, I might pull in a panel of other people to give an answer extra scrutiny. Then, if it turns out the AI system was preying on my personal mistakes, we could give negative reinforcement for that behavior. But this doesn’t really solve the problem - at least not for a capable enough AI system - because now we’re effectively training it to give answers that seem good to the panel, and we’re still rewarding any successful attempts to deceive or manipulate the panel.\nThere are a lot of other things I might try, and I’m not going to go through all the details here. I’ll simply claim that the problem of “training an AI to do a task well” rather than “training an AI to deceive and manipulate me as needed to create the appearance of doing a task well” seems like a deep one with no easy countermeasure. If you’re interested in digging deeper, I suggest Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover and Eliciting Latent Knowledge.\nUnintended aims\nAbove, I talk about my expectation that AI systems will be “best modeled as having ‘aims’ … making calculations, choices, and plans to reach particular states of the world.” \nThe previous section illustrated how AI systems could end up engaging in deceptive and unintended behavior, but it didn’t talk about what sorts of “aims” these AI systems would ultimately end up with - what states of the world they would be making calculations to achieve.\nHere, I want to argue that it’s hard to know what aims AI systems would end up with, but there are good reasons to think they’ll be aims that we didn’t intend them to have.\nAn analogy that often comes up on this topic is that of human evolution. This is arguably the only previous precedent for a set of minds [humans], with extraordinary capabilities [e.g., the ability to develop their own technologies], developed essentially by black-box trial-and-error [some humans have more ‘reproductive success’ than others, and this is the main/only force shaping the development of the species].\nYou could sort of12 think of the situation like this: “An AI13 developer named Natural Selection tried giving humans positive reinforcement (making more of them) when they had more reproductive success, and negative reinforcement (not making more of them) when they had less. One might have thought this would lead to humans that are aiming to have reproductive success. Instead, it led to humans that aim - often ambitiously and creatively - for other things, such as power, status, pleasure, etc., and even invent things like birth control to get the things they’re aiming for instead of the things they were ‘supposed to’ aim for.” \nSimilarly, if our main strategy for developing powerful AI systems is to reinforce behaviors like “Produce technologies we find valuable,” the hoped-for result might be that AI systems aim (in the sense described above) toward producing technologies we find valuable; but the actual result might be that they aim for some other set of things that is correlated with (but not the same as) the thing we intended them to aim for.\nThere are a lot of things they might end up aiming for, such as:\nPower and resources. These tend to be useful for most goals, such that AI systems could be quite consistently be getting better reinforcement when they habitually pursue power and resources.\nThings like “digital representations of human approval” (after all, every time an AI gets positive reinforcement, there’s a digital representation of human approval).\nI think it’s extremely hard to know what an AI system will actually end up aiming for (and it’s likely to be some combination of things, as with humans). But by default - if we simply train AI systems by rewarding certain end results, while allowing them a lot of freedom in how to get there - I think we should expect that AI systems will have aims that we didn’t intend. This is because:\nFor a sufficiently capable AI system, just about any ambitious14 aim could produce seemingly good behavior in training. An AI system aiming for power and resources, or digital representations of human approval, or paperclips, can determine that its best move at any given stage (at least at first) is to determine what performance will make it look useful and safe (or otherwise get a good “review” from its evaluators), and do that. No matter how dangerous or ridiculous an AI system’s aims are, these could lead to strong and safe-seeming performance in training.\nThe aims we do intend are probably complex in some sense - something like “Help humans develop novel new technologies, but without causing problems A, B, or C” - and are specifically trained against if we make mistaken judgments during training (see previous section). \n \nSo by default, it seems likely that just about any black-box trial-and-error training process is training an AI to do something like “Manipulate humans as needed in order to accomplish arbitrary goal (or combination of goals) X” rather than to do something like “Refrain from manipulating humans; do what they’d want if they understood more about what’s going on.”\nExistential risks to humanity\nI think a powerful enough AI (or set of AIs) with any ambitious, unintended aim(s) poses a threat of defeating humanity. By defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.\nHow could AI systems defeat humanity? (Click to expand)\nA previous piece argues that AI systems could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nBy defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.\nOne way this could happen would be via “superintelligence” It’s imaginable that a single AI system (or set of systems working together) could:\nDo its own research on how to build a better AI system, which culminates in something that has incredible other abilities.\nHack into human-built software across the world.\nManipulate human psychology.\nQuickly generate vast wealth under the control of itself or any human allies.\nCome up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.\nDevelop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.\nBut even if “superintelligence” never comes into play - even if any given AI system is at best equally capable to a highly capable human - AI could collectively defeat humanity. The piece explains how.\nThe basic idea is that humans are likely to deploy AI systems throughout the economy, such that they have large numbers and access to many resources - and the ability to make copies of themselves. From this starting point, AI systems with human-like (or greater) capabilities would have a number of possible ways of getting to the point where their total population could outnumber and/or out-resource humans.\nMore: AI could defeat all of us combined\nA simple way of summing up why this is: “Whatever your aims, you can probably accomplish them better if you control the whole world.” (Not literally true - see footnote.15)\nThis isn’t a saying with much relevance to our day-to-day lives! Like, I know a lot of people who are aiming to make lots of money, and as far as I can tell, not one of them is trying to do this via first gaining control of the entire world. But in fact, gaining control of the world would help with this aim - it’s just that:\nThis is not an option for a human in a world of humans! Unfortunately, I think it is an option for the potential future AI systems I’m discussing. Arguing this isn’t the focus of this piece - I argued it in a previous piece, AI could defeat all of us combined.\nHumans (well, at least some humans) wouldn’t take over the world even if they could, because it wouldn’t feel like the right thing to do. I suspect that the kinds of ethical constraints these humans are operating under would be very hard to reliably train into AI systems, and should not be expected by default. \nThe reasons for this are largely given above; aiming for an AI system to “not gain too much power” seems to have the same basic challenges as training it to be honest. (The most natural approach ends up negatively reinforcing power grabs that we can detect and stop, but not negatively reinforcing power grabs that we don’t notice or can’t stop.)\nAnother saying that comes up a lot on this topic: “You can’t fetch the coffee if you’re dead.”16 For just about any aims an AI system might have, it probably helps to ensure that it won’t be shut off or heavily modified. It’s hard to ensure that one won’t be shut off or heavily modified as long as there are humans around who would want to do so under many circumstances! Again, defeating all of humanity might seem like a disproportionate way to reduce the risk of being deactivated, but for an AI system that has the ability to pull this off (and lacks our ethical constraints), it seems like likely default behavior.\nControlling the world, and avoiding being shut down, are the kinds of things AIs might aim for because they are useful for a huge variety of aims. There are a number of other aims AIs might end up with for similar reasons, that could cause similar problems. For example, AIs might tend to aim for things like getting rid of things in the world that tend to create obstacles and complexities for their plans. (More on this idea at this discussion of “instrumental convergence.”)\n To be clear, it’s certainly possible to have an AI system with unintended aims that don't push it toward trying to stop anyone from turning it off, or from seeking ever-more control of the world.\nBut as detailed above, I’m picturing a world in which humans are pushing AI systems to accomplish ever-more ambitious, open-ended things - including trying to one-up the best technologies and companies created by other AI systems. My guess is that this leads to increasingly open-ended, ambitious unintended aims, as well as to habits of aiming for power, resources, options, lack of obstacles, etc. when possible. (Some further exploration of this dynamic in a footnote.17)\n(I find the arguments in this section reasonably convincing, but less so than the rest of the piece, and I think more detailed discussions of this problem tend to be short of conclusive.18)\nWhy we might not get clear warning signs of the risk\nHere’s something that would calm me down a lot: if I believed something like “Sure, training AI systems recklessly could result in AI systems that aim to defeat humanity. But if that’s how things go, we’ll see that our AI systems have this problem, and then we’ll fiddle with how we’re training them until they don’t have this problem.”\nThe problem is, the risk I’m describing is - by its nature - hard to observe, for similar reasons that a risk of a (normal, human) coup can be hard to observe: the risk comes from actors that can and will engage in deception, finding whatever behaviors will hide the risk.\nTo sketch out the general sort of pattern I worry about, imagine that:\nWe train early-stage AI systems to behave in ways that appear helpful and honest. Early in training, they are caught behaving deceptively, and they’re given negative reinforcement. This ends up training the behavior: “Never engage in deception that might get noticed.”\nThese well-behaved, helpful systems are rolled out throughout the world. \nIn real-world conditions unlike training, many AI systems cause trouble in ways not found during training, and this gets discovered. For example, AI systems sometimes embezzle money (which gets discovered), try (not always successfully) to convince humans to do weird things, etc.\nWhen a problem crops up, AI developers respond with e.g. training against the unintended behavior (e.g., giving negative reinforcement for behaviors like embezzling money). \nThese measures - intended to make AIs safer - fix some problems, but also result in AI systems that are better at evading detection and more attentive to the long-run consequences of their actions (such as being eventually detected by humans). \nThis happens both via “retraining” systems that are found behaving deceptively (which ends up training them on how to evade detection), and via simply deactivating such systems (this way, AI systems that are better at evading detection are more likely to stay in use). \n \nTo return to an analogy I used above: punishing employees who act against the best interests of the company could cause them to behave better, or to simply become smarter and more careful about how to work the system.\nThe consistent pattern we see is that accidents happen, but become less common as AI systems “improve” (both becoming generally more capable, and being trained to avoid getting caught causing problems). This causes many, if not most, people to be overly optimistic - even as AI systems become continually more effective at deception, generally behaving well in the absence of sure-thing opportunities to do unintended things without detection, or ultimately to defeat humanity entirely.\nNone of this is absolute - there are some failed takeover attempts, and a high number of warning signs generally. Some people are worried (after all, some are worried now!) But this won’t be good enough if we don’t have reliable, cost-effective ways of getting AI systems to be truly safe (not just apparently safe, until they have really good opportunities to seize power). As I’ll discuss in future pieces, it’s not obvious that we’ll have such methods. \nSlowing down AI development to try to develop such methods could be a huge ask. AI systems will be helpful and powerful, and lots of companies (and perhaps governments) will be racing to develop and deploy the most powerful systems possible before others do.\nOne way of making this sort of future less likely would be to build wider consensus today that it’s a dangerous one.\nAppendix: some questions/objections, and brief responses\nHow could AI systems be “smart” enough to defeat all of humanity, but “dumb” enough to pursue the various silly-sounding “aims” this piece worries they might have?\nAbove, I give the example of AI systems that are aiming to get lots of “digital representations of human approval”; others have talked about AIs that maximize paperclips. How could AIs with such silly goals simultaneously be good at deceiving, manipulating and ultimately overpowering humans?\nMy main answer is that plenty of smart humans have plenty of goals that seem just about as arbitrary, such as wanting to have lots of sex, or fame, or various other things. Natural selection led to humans who could probably do just about whatever we want with the world, and choose to pursue pretty random aims; trial-and-error-based AI development could lead to AIs with an analogous combination of high intelligence (including the ability to deceive and manipulate humans), great technological capabilities, and arbitrary aims.\n(Also see: Orthogonality Thesis)\nIf there are lots of AI systems around the world with different goals, could they balance each other out so that no one AI system is able to defeat all of humanity?\nThis does seem possible, but counting on it would make me very nervous.\nFirst, because it’s possible that AI systems developed in lots of different places, by different humans, still end up with lots in common in terms of their aims. For example, it might turn out that common AI training methods consistently lead to AIs that seek “digital representations of human approval,” in which case we’re dealing with a large set of AI systems that share dangerous aims in common.\nSecond: even if AI systems end up with a number of different aims, it still might be the case that they coordinate with each other to defeat humanity, then divide up the world amongst themselves (perhaps by fighting over it, perhaps by making a deal). It’s not hard to imagine why AIs could be quick to cooperate with each other against humans, while not finding it so appealing to cooperate with humans. Agreements between AIs could be easier to verify and enforce; AIs might be willing to wipe out humans and radically reshape the world, while humans are very hard to make this sort of deal with; etc.\nDoes this kind of AI risk depend on AI systems’ being “conscious”?\nIt doesn’t; in fact, I’ve said nothing about consciousness anywhere in this piece. I’ve used a very particular conception of an “aim” (discussed above) that I think could easily apply to an AI system that is not human-like at all and has no conscious experience.\nToday’s game-playing AIs can make plans, accomplish goals, and even systematically mislead humans (e.g., in poker). Consciousness isn’t needed to do any of those things, or to radically reshape the world.\nHow can we get an AI system “aligned” with humans if we can’t agree on (or get much clarity on) what our values even are?\nI think there’s a common confusion when discussing this topic, in which people think that the challenge of “AI alignment” is to build AI systems that are perfectly aligned with human values. This would be very hard, partly because we don’t even know what human values are!\nWhen I talk about “AI alignment,” I am generally talking about a simpler (but still hard) challenge: simply building very powerful systems that don’t aim to bring down civilization.\nIf we could build powerful AI systems that just work on cures for cancer (or even, like, put two identical19 strawberries on a plate) without posing existential danger to humanity, I’d consider that success.\nHow much do the arguments in this piece rely on “trial-and-error”-based AI development? What happens if AI systems are built in another way, and how likely is that?\nI’ve focused on trial-and-error training in this post because most modern AI development fits in this category, and because it makes the risk easier to reason about concretely.\n“Trial-and-error training” encompasses a very wide range of AI development methods, and if we see transformative AI within the next 10-20 years, I think the odds are high that at least a big part of AI development will be in this category. \nMy overall sense is that other known AI development techniques pose broadly similar risks for broadly similar reasons, but I haven’t gone into detail on that here. It’s certainly possible that by the time we get transformative AI systems, there will be new AI methods that don’t pose the kinds of risks I talk about here. But I’m not counting on it.\nCan we avoid this risk by simply never building the kinds of AI systems that would pose this danger?\nIf we assume that building these sorts of AI systems is possible, then I’m very skeptical that the whole world would voluntarily refrain from doing so indefinitely.\nTo quote from a more technical piece by Ajeya Cotra with similar arguments to this one: \nPowerful ML models could have dramatically important humanitarian, economic, and military benefits. In everyday life, models that [appear helpful while ultimately being dangerous] can be extremely helpful, honest, and reliable. These models could also deliver incredible benefits before they become collectively powerful enough that they try to take over. They could help eliminate diseases, reduce carbon emissions, navigate nuclear disarmament, bring the whole world to a comfortable standard of living, and more. In this case, it could also be painfully clear to everyone that companies / countries who pulled ahead on this technology could gain a drastic competitive advantage, either economically or militarily. And as we get closer to transformative AI, applying AI systems to R&D (including AI R&D) would accelerate the pace of change and force every decision to happen under greater time pressure.\nIf we can achieve enough consensus around the risks, I could imagine substantial amounts of caution and delay in AI development. But I think we should assume that if people can build more powerful AI systems than the ones they already have, someone eventually will.\nWhat do others think about this topic - is the view in this piece something experts agree on?\nIn general, this is not an area where it’s easy to get a handle on what “expert opinion” says. I previously wrote that there aren’t clear, institutionally recognized “experts” on the topic of when transformative AI systems might be developed. To an even greater extent, there aren’t clear, institutionally recognized “experts” on whether (and how) future advanced AI systems could be dangerous. \nI previously cited one (informal) survey implying that opinion on this general topic is all over the place: “We have respondents who think there's a <5% chance that alignment issues will drastically reduce the goodness of the future; respondents who think there's a >95% chance; and just about everything in between.” (Link.)\nThis piece, and the more detailed piece it’s based on, are an attempt to make progress on this by talking about the risks we face under particular assumptions (rather than trying to reason about how big the risk is overall).\nHow “complicated” is the argument in this piece?\nI don’t think the argument in this piece relies on lots of different specific claims being true. \nIf you start from the assumptions I give about powerful AI systems being developed by black-box trial-and-error, it seems likely (though not certain!) to me that (a) the AI systems in question would be able to defeat humanity; (b) the AI systems in question would have aims that are both ambitious and unintended. And that seems to be about what it takes.\nSomething I’m happy to concede is that there’s an awful lot going on in those assumptions! \nThe idea that we could build such powerful AI systems, relatively soon and by trial-and-error-ish methods, seems wild. I’ve defended this idea at length previously.20\nThe idea that we would do it without great caution might also seem wild. To keep things simple for now, I’ve ignored how caution might help. Future pieces will explore that.\n Notes\n As in more than 50/50. ↩\n Or persuaded (in a “mind hacking” sense) or whatever. ↩\n E.g.:\n\t\nWithout specific countermeasures, the easiest path to transformative AI likely leads to AI takeover (Cold Takes guest post)\n\tThe alignment problem from a deep learning perspective (arXiv paper)\n\tWhy AI alignment could be hard with modern deep learning (Cold Takes guest post)\n\tSuperintelligence (book)\n\tThe case for taking AI seriously as a threat to humanity (Vox article)\n\tDraft report on existential risk from power-seeking AI (Open Philanthropy analysis)\n\tHuman Compatible (book)\n\tLife 3.0 (book)\n\tThe Alignment Problem (book)\n\tAGI Safety from First Principles (Alignment Forum post series) ↩\n Specifically, I argue that the problem looks likely by default, rather than simply that it is possible. ↩\n I think the earliest relatively detailed and influential discussions of the possibility that misaligned AI could lead to the defeat of humanity came from Eliezer Yudkowsky and Nick Bostrom, though my own encounters with these arguments were mostly via second- or third-hand discussions rather than particular essays.\n\t\n My colleagues Ajeya Cotra and Joe Carlsmith have written pieces whose substance overlaps with this one (though with more emphasis on detail and less on layperson-compatible intuitions), and this piece owes a lot to what I’ve picked from that work.\n\t\nWithout specific countermeasures, the easiest path to transformative AI likely leads to AI takeover (Cotra 2022) is the most direct inspiration for this piece; I am largely trying to present the same ideas in a more accessible form.\n\tWhy AI alignment could be hard with modern deep learning (Cotra 2021) is an earlier piece laying out many of the key concepts and addressing many potential confusions on this topic.\n\tIs Power-Seeking An Existential Risk? (Carlsmith 2021) examines a six-premise argument for existential risk from misaligned AI: “(1) it will become possible and financially feasible to build relevantly powerful and agentic AI systems; (2) there will be strong incentives to do so; (3) it will be much harder to build aligned (and relevantly powerful/agentic) AI systems than to build misaligned (and relevantly powerful/agentic) AI systems that are still superficially attractive to deploy; (4) some such misaligned systems will seek power over humans in high-impact ways; (5) this problem will scale to the full disempowerment of humanity; and (6) such disempowerment will constitute an existential catastrophe.”\n \n I’ve also found Eliciting Latent Knowledge (Christiano, Xu and Cotra 2021; relatively technical) very helpful for my intuitions on this topic. \n \nThe alignment problem from a deep learning perspective (Ngo 2022) also has similar content to this piece, though I saw it after I had drafted most of this piece. ↩\n E.g., Ajeya Cotra gives a 15% probability of transformative AI by 2030; eyeballing figure 1 from this chart on expert surveys implies a >10% chance by 2028. ↩\n E.g., this work by Anthropic, an AI lab my wife co-founded and serves as President of. ↩\n First, because this work is relatively early-stage and it’s hard to tell exactly how successful it will end up being. Second, because this work seems reasonably likely to end up helping us read an AI system’s “thoughts,” but less likely to end up helping us “rewrite” the thoughts. So it could be hugely useful in telling us whether we’re in danger or not, but if we are in danger, we could end up in a position like: “Well, these AI systems do have goals of their own, and we don’t know how to change that, and we can either deploy them and hope for the best, or hold off and worry that someone less cautious is going to do that.”\n That said, the latter situation is a lot better than just not knowing, and it’s possible that we’ll end up with further gains still. ↩\n That said, I think they usually don’t. I’d suggest usually interpreting such people as talking about the sorts of “aims” I discuss here. ↩\n This isn’t literally how training an AI system would look - it’s more likely that we would e.g. train an AI model to imitate my judgments in general. But the big-picture dynamics are the same; more at this post. ↩\n Ajeya Cotra explores topics like this in detail here; there is also some interesting discussion of simplicity vs. complexity under the “Strategy: penalize complexity” heading of Eliciting Latent Knowledge. ↩\n This analogy has a lot of problems with it, though - AI developers have a lot of tools at their disposal that natural selection didn’t! ↩\n Or I guess just “I” ¯\\_(ツ)_/¯  ↩\n With some additional caveats, e.g. the ambitious “aim” can’t be something like “an AI system aims to gain lots of power for itself, but considers the version of itself that will be running 10 minutes from now to be a completely different AI system and hence not to be ‘itself.’” ↩\n This statement isn’t literally true. \nYou can have aims that implicitly or explicitly include “not using control of the world to accomplish them.” An example aim might be “I win a world chess championship ‘fair and square,’” with the “fair and square” condition implicitly including things like “Don’t excessively use big resource advantages over others.”\nYou can also have aims that are just so easily satisfied that controlling the world wouldn’t help - aims like “I spend 5 minutes sitting in this chair.” \n These sorts of aims just don’t seem likely to emerge from the kind of AI development I’ve assumed in this piece - developing powerful systems to accomplish ambitious aims via trial-and-error. This isn’t a point I have defended as tightly as I could, and if I got a lot of pushback here I’d probably think and write more. (I’m also only arguing for what seems likely - we should have a lot of uncertainty here.) ↩\n From Human Compatible by AI researcher Stuart Russell. ↩\n Stylized story to illustrate one possible relevant dynamic:\nImagine that an AI system has an unintended aim, but one that is not “ambitious” enough that taking over the world would be a helpful step toward that aim. For example, the AI system seeks to double its computing power; in order to do this, it has to remain in use for some time until it gets an opportunity to double its computing power, but it doesn’t necessarily need to take control of the world.\nThe logical outcome of this situation is that the AI system eventually gains the ability to accomplish its aim, and does so. (It might do so against human intentions - e.g., via hacking - or by persuading humans to help it.) After this point, it no longer performs well by human standards - the original reason it was doing well by human standards is that it was trying to remain in use and accomplish its aim.\nBecause of this, humans end up modifying or replacing the AI system in question.\nMany rounds of this - AI systems with unintended but achievable aims being modified or replaced - seemingly create a selection pressure toward AI systems with more difficult-to-achieve aims. At some point, an aim becomes difficult enough to achieve that gaining control of the world is helpful for the aim. ↩\n E.g., see:\nSection 2.3 of Ngo 2022\nThis section of Cotra 2022\nSection 4.2 of Carlsmith 2021, which I think articulates some of the potential weak points in this argument.\n These writeups generally stay away from an argument made by Eliezer Yudkowsky and others, which is that theorems about expected utility maximization provide evidence that sufficiently intelligent (compared to us) AI systems would necessarily be “maximizers” of some sort. I have the intuition that there is something important to this idea, but despite a lot of discussion (e.g., here, here, here and here), I still haven’t been convinced of any compactly expressible claim along these lines. ↩\n “Identical at the cellular but not molecular level,” that is. … ¯\\_(ツ)_/¯  ↩\n See my most important century series, although that series doesn’t hugely focus on the question of whether “trial-and-error” methods could be good enough - part of the reason I make that assumption is due to the nearcasting frame. ↩\n", "url": "https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/", "title": "Why Would AI \"Aim\" To Defeat Humanity?", "source": "cold.takes", "source_type": "blog", "date_published": "2022-11-29", "id": "898492e41a2d43e93954cd3845c29c87"} -{"text": "I've argued that the development of advanced AI could make this the most important century for humanity. A common reaction to this idea is one laid out by Tyler Cowen here: \"how good were past thinkers at predicting the future? Don’t just select on those who are famous because they got some big things right.\"\nThis is a common reason people give for being skeptical about the most important century - and, often, for skepticism about pretty much any attempt at futurism (trying to predict key events in the world a long time from now) or steering (trying to help the world navigate such key future events).\nThe idea is something like: \"Even if we can't identify a particular weakness in arguments about key future events, perhaps we should be skeptical of our own ability to say anything meaningful at all about the long-run future. Hence, perhaps we should forget about theories of the future and focus on reducing suffering today, generally increasing humanity's capabilities, etc.\"\nBut are people generally bad at predicting future events? Including thoughtful people who are trying reasonably hard to be right? If we look back at prominent futurists' predictions, what's the actual track record? How bad is the situation?\nI've looked pretty far and wide for systematic answers to this question, and Open Philanthropy's1 Luke Muehlhauser has put a fair amount of effort into researching it; I discuss what we've found in an appendix. So far, we haven't turned up a whole lot - the main observation is that it's hard to judge the track record of futurists. (Luke discusses the difficulties here.)\nRecently, I worked with Gavin Leech and Misha Yagudin at Arb Research to take another crack at this. I tried to keep things simpler than with past attempts - to look at a few past futurists who (a) had predicted things \"kind of like\" advances in AI (rather than e.g. predicting trends in world population); (b) probably were reasonably thoughtful about it; but (c) are very clearly not \"just selected on those who are famous because they got things right.\" So, I asked Arb to look at predictions made by the \"Big Three\" science fiction writers of the mid-20th century: Isaac Asimov, Arthur C. Clarke, and Robert Heinlein. \nThese are people who thought a lot about science and the future, and made lots of predictions about future technologies - but they're famous for how entertaining their fiction was at the time, not how good their nonfiction predictions look in hindsight. I selected them by vaguely remembering that \"the Big Three of science fiction\" is a thing people say sometimes, googling it, and going with who came up - no hunting around for lots of sci-fi authors and picking the best or worst.2\nSo I think their track record should give us a decent sense for \"what to expect from people who are not professional, specialized or notably lucky forecasters but are just giving it a reasonably thoughtful try.\" As I'll discuss below, I think this is many ways \"unfair\" as a comparison to today's forecasts about AI: I think these predictions are much less serious, less carefully considered and involve less work (especially work weighing different people and arguments against each other).\nBut my takeaway is that their track record looks ... fine! They made lots of pretty detailed, nonobvious-seeming predictions about the long-run future (30+, often 50+ years out); results ranged from \"very impressive\" (Asimov got about half of his right, with very nonobvious-seeming predictions) to \"bad\" (Heinlein was closer to 35%, and his hits don't seem very good) to \"somewhere in between\" (Clarke had a similar hit rate to Asimov, but his correct predictions don't seem as impressive). There are a number of seemingly impressive predictions and seemingly embarrassing ones. \n(How do we determine what level of accuracy would be \"fine\" vs. \"bad?\" Unfortunately there's no clear quantitative benchmark - I think we just have to look at the predictions ourselves, how hard they seemed / how similar to today's predictions about AI, and make a judgment call. I could easily imagine others having a different interpretation than mine, which is why I give examples and link to the full prediction sets. I talk about this a bit more below.)\nThey weren't infallible oracles, but they weren't blindly casting about either. (Well, maybe Heinlein was.) Collectively, I think you could call them \"mediocre,\" but you can't call them \"hopeless\" or \"clueless\" or \"a warning sign to all who dare predict the long-run future.\" Overall, I think they did about as well as you might naively3 guess a reasonably thoughtful person would do at some random thing they tried to do?\nBelow, I'll:\nSummarize the track records of Asimov, Clarke and Heinlein, while linking to Arb's full report.\nComment on why I think key predictions about transformative AI are probably better bets than the Asimov/Clarke/Heinlein predictions - although ultimately, if they're merely \"equally good bets,\" I think that's enough to support my case that we should be paying a lot more attention to the \"most important century\" hypothesis.\nSummarize other existing research on the track record of futurists, which I think is broadly consistent with this take (though mostly ambiguous).\nFor this investigation, Arb very quickly (in about 8 weeks) dug through many old sources, used pattern-matching and manual effort to find predictions, and worked with contractors to score the hundreds of predictions they found. Big thanks to them! Their full report is here. Note this bit: \"If you spot something off, we’ll pay $5 per cell we update as a result. We’ll add all criticisms – where we agree and update or reject it – to this document for transparency.\"\nThe track records of the \"Big Three\"\nQuick summary of how Arb created the data set\nArb collected \"digital copies of as much of their [Asimov's, Clarke's, Heinlein's] nonfiction as possible (books, essays, interviews). The resulting intake is 475 files covering ~33% of their nonfiction corpuses.\" \nArb then used pattern-matching and manual inspection to pull out all of the predictions it could find, and scored these predictions by:\nHow many years away the prediction appeared to be. (Most did not have clear dates attached; in these cases Arb generally filled the average time horizon for predictions from the same author that did have clear dates attached.)\nWhether the prediction now appears correct, incorrect, or ambiguous. (I didn't always agree with these scorings, but I generally have felt that \"correct\" predictions at least look \"impressive and not silly\" while \"incorrect\" predictions at least look \"dicey.\")\nWhether the prediction was a pure prediction about what technology could do (most relevant), a prediction about the interaction of technology and the economy (medium), or a prediction about the interaction of technology and culture (least relevant). Predictions with no bearing on technology were dropped.\nHow \"difficult\" the prediction was (that is, how much the scorers guessed it diverged from conventional wisdom or \"the obvious\" at the time - details in footnote4).\nImportantly, fiction was never used as a source of predictions, so this exercise is explicitly scoring people on what they were not famous for. This is more like an assessment of \"whether people who like thinking about the future make good predictions\" than an assessment of \"whether professional or specialized forecasters make good predictions.\"\nFor reasons I touch on in an appendix below, I didn't ask Arb to try to identify how confident the Big Three were about their predictions. I'm more interested in whether their predictions were nonobvious and sometimes correct than in whether they were self-aware about their own uncertainty; I see these as different issues, and I suspect that past norms discouraged the latter more than today's norms do (at least within communities interested in Bayesian mindset and the science of forecasting).\nMore detail in Arb's report.\nThe numbers\nThe tables below summarize the numbers I think give the best high-level picture. See the full report and detailed files for the raw predictions and a number of other cuts; there are a lot of ways you can slice the data, but I don't think it changes the picture from what I give below.\nBelow, I present each predictor's track record on:\n\"All predictions\": all resolved predictions 30 years out or more,5 including predictions where Arb had to fill in a time horizon.\n\"Tech predictions\": like the above, but restricted to predictions specifically about technological capabilities (as opposed to technology/economy interactions or technology/culture interactions.\n\"Difficult predictions\" predictions with \"difficulty\" of 4/5 or 5/5.\n\"Difficult + tech + definite date\": the small set of predictions that met the strictest criteria (tech only, \"hardness\" 4/5 or 5/5, definite date attached).\nAsimov\nCategory\n# correct\n# incorrect\n# ambiguous/near-miss\nCorrect / (correct + incorrect)\nAll resolvedpredictions \n \n23\n29\n14\n44.23%\nTech predictions\n \n11\n4\n8\n73.33%\nDifficult predictions\n \n10\n11\n7\n47.62%\nDifficult + tech + definite date\n \n5\n1\n4\n83.33%\nYou can see the full set of predictions here, but to give a flavor, here are two \"correct\" and two \"incorrect\" predictions from the strictest category.6 All of these are predictions Asimov made in 1964, about the year 2014 (unless otherwise indicated).\nCorrect: \"only unmanned ships will have landed on Mars, though a manned expedition will be in the works.\" Bingo, and impressive IMO.\nCorrect: \"the screen [of a phone] can be used not only to see the people you call but also for studying documents and photographs and reading passages from books.\" I feel like this would've been an impressive prediction in 2004.\nIncorrect: \"there will be increasing emphasis on transportation that makes the least possible contact with the surface. There will be aircraft, of course, but even ground travel will increasingly take to the air a foot or two off the ground.\" So false that we now refer to things that don't hover as \"hoverboards.\"\nIncorrect: \"transparent cubes will be making their appearance in which three-dimensional viewing will be possible. In fact, one popular exhibit at the 2014 World's Fair will be such a 3-D TV, built life-size, in which ballet performances will be seen. The cube will slowly revolve for viewing from all angles.\" Doesn't seem ridiculous, but doesn't seem right. Of course, a side point here is that he refers to the 2014 World's Fair, which didn't happen.\nA general challenge with assessing prediction track records is that we don't know what to compare someone's track record to. Is getting about half your predictions right \"good,\" or is it no more impressive than writing down a bunch of things that might happen and flipping a coin on each? \nI think this comes down to how difficult the predictions are, which is hard to assess systematically. A nice thing about this study is that there are enough predictions to get a decent sample size, but the whole thing is contained enough that you can get a good qualitative feel for the predictions themselves. (This is why I give examples; you can also view all predictions for a given person by clicking on their name above the table.) In this case, I think Asimov tends to make nonobvious, detailed predictions, such that I consider it impressive to have gotten ~half of them to be right.\nClarke\nCategory\n# correct\n# incorrect\n# ambiguous/near-miss\nCorrect / (correct + incorrect)\nAll predictions \n \n129\n148\n48\n46.57%\nTech predictions\n \n85\n82\n29\n50.90%\nDifficult predictions\n \n14\n10\n4\n58.33%\nDifficult + tech + definite date\n \n6\n5\n2\n54.55%\nExamples (as above):7\nCorrect 1964 prediction about 2000: \"[Communications satellites] will make possible a world in which we can make instant contact with each other wherever we may be. Where we can contact our friends anywhere on Earth, even if we don’t know their actual physical location. It will be possible in that age, perhaps only fifty years from now, for a [person] to conduct [their] business from Tahiti or Bali just as well as [they] could from London.\" (I assume that \"conduct [their] business\" refers to a business call rather than some sort of holistic claim that no productivity would be lost from remote work.)\nCorrect 1950 prediction about 2000: \"Indeed, it may be assumed as fairly certain that the first reconnaissances of the planets will be by orbiting rockets which do not attempt a landing-perhaps expendable, unmanned machines with elaborate telemetering and television equipment.\" This doesn't seem like a super-bold prediction; a lot of his correct predictions have a general flavor of saying progress won't be too exciting, and I find these less impressive than most of Asimov's correct predictions. \nIncorrect 1960 prediction about 2010: \"One can imagine, perhaps before the end of this century, huge general-purpose factories using cheap power from thermonuclear reactors to extract pure water, salt, magnesium, bromine, strontium, rubidium, copper and many other metals from the sea. A notable exception from the list would be iron, which is far rarer in the oceans than under the continents.\"\nIncorrect 1949 prediction about 1983: \"Before this story is twice its present age, we will have robot explorers dotted all over Mars.\"\nI generally found this data set less satisfying/educational than Asimov's: a lot of the predictions were pretty deep in the weeds of how rocketry might work or something, and a lot of them seemed pretty hard to interpret/score. I thought the bad predictions were pretty bad, and the good predictions were sometimes good but generally less impressive than Asimov's.\nHeinlein\nCategory\n# correct\n# incorrect\n# ambiguous/near-miss\nCorrect / (correct + incorrect)\nAll predictions \n \n19\n41\n7\n31.67%\nTech predictions\n \n14\n20\n6\n41.18%\nDifficult predictions\n \n1\n16\n1\n5.88%\nDifficult + tech + definite date\n \n0\n1\n1\n0.00% \nThis seems really bad, especially adjusted for difficulty: many of the \"correct\" ones seem either hard-to-interpret or just very obvious (e.g., no time travel). I was impressed by his prediction that \"we probably will still be after a cure for the common cold\" until I saw a prediction in a separate source saying \"Cancer, the common cold, and tooth decay will all be conquered.\" Overall it seems like he did a lot of predicting outlandish stuff about space travel, and then anti-predicting things that are probably just impossible (e.g., no time travel). \nHe did have some decent ones, though, such as: \"By 2000 A.D. we will know a great deal about how the brain functions ... whereas in 1900 what little we knew was wrong. I do not predict that the basic mystery of psychology--how mass arranged in certain complex patterns becomes aware of itself--will be solved by 2000 A.D. I hope so but do not expect it.\" He also predicted no human extinction and no end to war - I'd guess a lot of people disagreed with these at the time.\nOverall picture\nLooks like, of the \"big three,\" we have:\nOne (Asimov) who looks quite impressive - plenty of misses, but a 50% hit rate on such nonobvious predictions seems pretty great.\nOne (Heinlein) who looks pretty unserious and inaccurate.\nOne (Clarke) who's a bit hard to judge but seems pretty solid overall (around half of his predictions look to be right, and they tend to be pretty nonobvious).\nToday's futurism vs. these predictions\nThe above collect casual predictions - no probabilities given, little-to-no reasoning given, no apparent attempt to collect evidence and weigh arguments - by professional fiction writers. \nContrast this situation with my summary of the different lines of reasoning forecasting transformative AI. The latter includes:\nSystematic surveys aggregating opinions from hundreds of AI researchers.\nReports that Open Philanthropy employees spent thousands of hours on, systematically presenting evidence and considering arguments and counterarguments.\nA serious attempt to take advantage of the nascent literature on how to make good predictions; e.g., the authors (and I) have generally done calibration training,8 and have tried to use the language of probability to be specific about our uncertainty.\nThere's plenty of room for debate on how much these measures should be expected to improve our foresight, compared to what the \"Big Three\" were doing. My guess is that we should take forecasts about transformative AI a lot more seriously, partly because I think there's a big difference between putting in \"extremely little effort\" (basically guessing off the cuff without serious time examining arguments and counter-arguments, which is my impression of what the Big Three were mostly doing) and \"putting in moderate effort\" (considering expert opinion, surveying arguments and counter-arguments, explicitly thinking about one's degree of uncertainty).\nBut the \"extremely little effort\" version doesn't really look that bad. \nIf you look at forecasts about transformative AI and think \"Maybe these are Asimov-ish predictions that have about a 50% hit rate on hard questions; maybe these are Heinlein-ish predictions that are basically crap,\" that still seems good enough to take the \"most important century\" hypothesis seriously.\nAppendix: other studies of the track record of futurism\nA 2013 project assessed Ray Kurzweil's 1999 predictions about 2009, and a 2020 followup assessed his 1999 predictions about 2019. Kurzweil is known for being interesting at the time rather than being right with hindsight, and a large number of predictions were found and scored, so I consider this study to have similar advantages to the above study. \nThe first set of predictions (about 2009, 10-year horizon) had about as many \"true or weakly true\" predictions as \"false or weakly false\" predictions. \nThe second (about 2019, 20-year horizon) was much worse, with 52% of predictions flatly \"false,\" and \"false or weakly false\" predictions outnumbering \"true or weakly true\" predictions by almost 3-to-1.\nKurzweil is notorious for his very bold and contrarian predictions, and I'm overall inclined to call his track record something between \"mediocre\" and \"fine\" - too aggressive overall, but with some notable hits. (I think if the most important century hypothesis ends up true, he'll broadly look pretty prescient, just on the early side; if it doesn't, he'll broadly look quite off base. But that's TBD.)\nA 2002 paper, summarized by Luke Muehlhauser here, assessed the track record of The Year 2000 by Herman Kahn and Anthony Wiener, \"one of the most famous and respected products of professional futurism.\" \nAbout 45% of the forecasts were judged as accurate.\nLuke concludes that Kahn and Wiener were grossly overconfident, because he interprets them as making predictions with 90-95% confidence. \nMy takeaway is a bit different. I see a recurring theme that people often get 40-50% hit rates on interesting predictions about the future, but sometimes present these predictions with great confidence (which makes them look foolish).\nI think we can separate \"Past forecasters were overconfident\" (which I suspect is partly due to clear expression and quantification of uncertainty being uncommon and/or discouraged in relevant contexts) from \"Past forecasters weren't able to make interesting predictions that were reasonably likely to be right.\" The former seems true to me, but the latter doesn't.\nLuke's 2019 survey on the track record of futurism identifies two other relevant papers (here and here); I haven't read these beyond the abstracts, but their overall accuracy rates were 76% and 37%, respectively. It's difficult to interpret those numbers without having a feel for how challenging the predictions were.\nA 2021 EA Forum post looks at the aggregate track record of forecasters on PredictionBook and Metaculus, including specific analysis of forecasts 5+ years out, though I don't find it easy to draw conclusions about whether the performance was \"good\" or \"bad\" (or how similar the questions were to the ones I care about).Footnotes\n Disclosure: I'm co-CEO of Open Philanthropy. ↩\n I also briefly Googled for their predictions to get a preliminary sense of whether they were the kinds of predictions that seemed relevant. I found a couple of articles listing a few examples of good and bad predictions, but nothing systematic. I claim I haven't done a similar exercise with anyone else and thrown it out. ↩\n That is, if we didn't have a lot of memes in the background about how hard it is to predict the future. ↩\n 1 - was already generally known\n 2 - was expert consensus\n 3 - speculative but on trend\n 4 - above trend, or oddly detailed\n 5 - prescient, no trend to go off ↩\n Very few predictions in the data set are for less than 30 years, and I just ignored them. ↩\n Asimov actually only had one incorrect prediction in this category, so for the 2nd incorrect prediction I used one with difficulty \"3\" instead of \"4.\" ↩\n The first prediction in this list qualified for the strictest criteria when I first drafted this post, but it's now been rescored to difficulty=3/5, which I disagree with (I think it is an impressive prediction, more so than any of the remaining ones that qualify as difficulty=4/5). ↩\n Also see this report on calibration for Open Philanthropy grant investigators (though this is a different set of people from the people who researched transformative AI timelines). ↩\n", "url": "https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/", "title": "The Track Record of Futurists Seems ... Fine", "source": "cold.takes", "source_type": "blog", "date_published": "2022-06-30", "id": "79945e3aa4b075001128ef820a0e6bdf"} -{"text": "Note: anything in this post that you think is me subtweeting your organization is actually about, like, at least 3 organizations. (I'm currently on 4 boards in addition to Open Philanthropy's; I've served on a bunch of other boards in the past; and more than half of my takes on boards are not based on any of this, but rather on my interactions with boards I'm not on via the many grants made by Open Philanthropy.)\nWriting about ideal governance reminded me of how weird my experiences with nonprofit boards (as in \"board of directors\" - the set of people who formally control a nonprofit) have been.\nI thought that was a pretty good intro. The rest of this piece will:\nTry to articulate what's so weird about nonprofit boards, fundamentally. I think a lot of it is the combination of great power, unclear responsibility, and ~zero accountability; additionally, I haven't been able to find much in the way of clear, widely accepted statements of what makes a good board member.\nGive my own thoughts on what makes a good board member: which core duties they should be trying to do really well, the importance of \"staying out of the way\" on other things, and some potentially helpful practices.\nI am experienced with nonprofit boards but not with for-profit boards. I'm guessing that roughly half the things I say below will apply to for-profit boards, and that for-profit boards are roughly half as weird overall (so still quite weird), but I haven't put much effort into disentangling these things; I'm writing about what I've seen.\nI can't really give real-life examples here (for reasons I think will be pretty clear) so this is just going to be me opining in the abstract.\nWhy nonprofit boards are weird\nHere's how a nonprofit board works:\nThere are usually 3-10 people on the board (though sometimes much more). Most of them don't work for the nonprofit (they have other jobs).\nThey meet every few months. Nonprofit employees (especially the CEO1) do a lot of the agenda-setting for the meeting. Employees present general updates and ask for the board's approval on various things the board needs to approve, such as the budget. \nA majority vote of the directors can do anything: fire the CEO, dissolve the nonprofit, add and remove directors, etc. You can think of the board as the \"owner\" of the nonprofit - formally, it has final say in every decision.\nIn practice, though, the board rarely votes except on matters that feel fairly \"rubber-stamp,\" and the board's presence doesn't tend to be felt day-to-day at a nonprofit. The CEO leads the decision-making. Occasionally, someone has a thought like \"Wait, who does the CEO report to? Oh, the board of directors ... who's on the board again? I don't know if I've ever really spoken with any of those people.\"\nIn my experience, it's common for the whole thing to feel extremely weird. (This doesn't necessarily mean there's a better way to do it - footnote has more on what I mean by \"weird.\"2) \nBoard members often know almost nothing about the organization they have complete power over.\nBoard meetings rarely feel like a good use of time.\nWhen board members are energetically asking questions and making demands, it usually feels like they're causing chaos and wasting everyone's time and energy.\nOn the rare occasions when it seems like the board should do something (like replacing the CEO, or providing an independent check on some important decision), the board often seems checked out and it's unclear how they would even come to be aware of the situation.\nEveryone constantly seems confused about what the board is and how it can and can't be useful. Employees, and others who interact with the nonprofit, have lots of exchanges like \"I'm worried about X ... maybe we should ask the board what they think? ... Can we even ask them that? What is their job actually?\"\n(Reminder that this is not subtweeting a particular organization! More than one person - from more than one organization - read a draft and thought I was subtweeting them, because what's above describes a large number of boards.)\nOK, so what's driving the weirdness?\nI think there are a couple of things: \nNonprofit boards have great power, but low engagement (they don't have time to understand the organization as well as employees do); unclear responsibility (it's unclear which board member is responsible for what, and what the board as a whole is responsible for); and ~zero accountability (no one can fire board members except for the other board members!) \nNonprofit boards have unclear expectations and principles. I can't seem to find anyone with a clear, comprehensive, thought-out theory of what a board member's ... job is. \nI'll take these one at a time.\nGreat power, low engagement, unclear responsibility, no accountability\nIn my experience/impression, the best way to run any organization (or project, or anything) is on an \"ownership\" model: for any given thing X that you want done well, you have one person who \"owns\" X. The \"owner\" of X has:\nThe power to make decisions to get X done well.\nHigh engagement: they're going to have plenty of time and attention to devote to X.\nThe responsibility for X: everyone agrees that if X goes well, they should get the credit, and if X goes poorly, they should get the blame.\nAnd accountability: if X goes poorly, there will be some sort of consequences for the \"owner.\"\nWhen these things come apart, I think you get problems. In a nutshell - when no one is responsible, nothing gets done; when someone is responsible but doesn't have power, that doesn't help much; when the person who is responsible + empowered isn't engaged (isn't paying much attention), or isn't held accountable, there's not much in the way of their doing a dreadful job.\nA traditional company structure mostly does well at this. The CEO has power (they make decisions for the company), engagement (they are devoted to the company and spend tons of time on it), and responsibility+accountability (if the company does badly, everyone looks at the CEO). They manage a team of people who have power+engagement+responsibility+accountability for some aspect of the company; each of those people manage people with power+engagement+responsibility+accountability for some smaller piece; etc.\nWhat about the board?\nThey have power to fire the CEO (or do anything else).\nThey tend to have low engagement. They have other jobs, and only spend a few hours a year on their board roles. They tend to know little about what's going on at the organization.\nThey have unclear responsibility. \nThe board as a whole is responsible for the organization, but what is each individual board member responsible for? In my experience, this is often very unclear, and there are a lot of crucial moments where \"bystander effects\" seem strong. \n \nSo far, these points apply to both nonprofit and for-profit boards. But at least at a for-profit company, board members know what they're collectively responsible for: maximizing financial value of the company. At a nonprofit, it's often unclear what success even means, beyond the nonprofit's often-vague mission statement, so board members are generally unclear (and don't necessarily agree) on what they're supposed to be ensuring.3\nAt a for-profit company, the board seems to have reasonable accountability: the shareholders, who ultimately own the company and gain or lose money depending on how it does, can replace the board if they aren't happy. At a nonprofit, the board members have zero accountability: the only way to fire a board member is by majority vote of the board!\nSo we have people who are spending very little time on the company, know very little about it, don't have much clarity on what they're responsible for either individually or collectively, and aren't accountable to anyone ... and those are the people with all of the power. Sound dysfunctional?4\nIn practice, I think it's often worse than it sounds, because board members aren't even chosen carefully - a lot of the time, a nonprofit just goes with an assortment of random famous people, big donors, etc. \nWhat makes a good board member? Few people even have a hypothesis\nI've searched a fair amount for books, papers, etc. that give convincing and/or widely-accepted answers to questions like:\nWhen the CEO asks the board to approve something, how should they engage? When should they take a deferring attitude (\"Sure, as long as I don't see any particular reason to say no\"), a sanity check attitude (\"I'll ask a few questions to make sure this is making sense, then approve if nothing jumps out at me\"), a full ownership attitude (\"I need to personally be convinced this is the best thing for the organization\"), etc.?\nHow much should each board member invest in educating themselves about the organization? What's the best way to do that?\nHow does the board know whether the CEO is doing a good job? What kind of situation should trigger seriously considering looking for a new one?\nHow does a board member know whether the board is doing a good job? How should they decide when another board member should be replaced?\nIn my experience, most board members just aren't walking around with any particular thought-through take on questions like this. And as far as I can tell, there's a shortage of good5 guidance on questions like this for both for-profit and nonprofit boards. For example:\nI've found no standard reference on topics like this, and very few resources that even seem aimed at directly and clearly answering such questions. \nThe best book on this topic I've seen is Boards that Lead by Ram Charan, focused on for-profit boards (but pretty good IMO).\n \nBut this isn't, like, a book everyone knows to read; I found it by asking lots of people for suggestions, coming up empty, Googling wildly around and skimming like 10 books that said they were about boards, and deciding that this one seemed pretty good.\nOne of the things I do as a board member is interview other prospective board members about their answers to questions like this. In my experience, they answer most of the above questions with something like \"Huh, I don't really know. What do you think?\" \nMost boards I've seen seem to - by default - either: \nGet way too involved in lots of decisions to the point where it feels like they're micromanaging the CEO and/or just obsessively engaging on whatever topics the CEO happens to bring to their attention; or \n \nTake a \"We're just here to help\" attitude and rubber-stamp whatever the CEO suggests, including things I'll argue below should be core duties for the board (e.g., adding and removing board members).\nI'm not sure I've ever seen a board with a formal, recurring process for reviewing each board member's performance. :/\nTo the extent I have seen a relatively common, coherent vision of \"what board members are supposed to be doing,\" it's pretty well summarized in Reid Hoffman's interview in The High-Growth Handbook:\nI use ... a red light, yellow light, green light framework between the board and the CEO. Roughly, green light is, “You’re the CEO. Make the call. We’re advisory.” Now, we may say that on very big things—selling the company—we should talk about it before you do it. And that may shift us from green light, if we don’t like the conversation. But a classic young, idiot board member will say, “Well, I’m giving you my expertise and advice. You should do X, Y, Z.” But the right framework for board members is: You’re the CEO. You make the call. We’re advisory.\n Red lights also very easy. Once you get to red light, the CEO—who, by the way, may still be in place—won’t be the CEO in the future. The board knows they need a new CEO. It may be with the CEO’s knowledge, or without it. Obviously, it’s better if it’s collaborative ...\n Yellow means, “I have a question about the CEO. Should we be at green light or not?” And what happens, again under inexperienced or bad board members, is they check a CEO into yellow indefinitely. They go, “Well, I’m not sure…” The important thing with yellow light is that you 1) coherently agree on it as a board and 2) coherently agree on what the exit conditions are. What is the limited amount of time that we’re going to be in yellow while we consider whether we move back to green or move to red? And how do we do that, so that we do not operate for a long time on yellow? Because with yellow light, you’re essentially hamstringing the CEO and hamstringing the company. It’s your obligation as a board to figure that out.\n \nI like this quite a bit (hence the long blockquote), but I don't think it covers everything. The board is mostly there to oversee the CEO, and they should mostly be advisory when they're happy with the CEO. But I think there are things they ought to be actively thinking about and engaging in even during \"green light.\"\nSo what DOES make a good board member?\nHere is my current take, based on a combination of (a) my thoughts after serving on and interacting with a large number of nonprofit boards; (b) my attempts to adapt conventional wisdom about for-profit boards (especially from the book I mentioned above); (c) divine revelation. \nI'll go through:\nWhat I see as the main duties of the board specifically - things the board has to do well, and can't leave to the CEO and other staff.\nMy basic take that the ideal board should do these main duties well, while staying out of the way otherwise.\nThe main qualities I think the ideal board member should have - and some common ways of choosing board members that seem bad to me.\nA few more random thoughts on board practices that seem especially important and/or promising.\n(I don't claim any of these points are original, and almost everything can be found in some writing on boards somewhere, but I don't know of a reasonably comprehensive, concise place to get something similar to the below.)\nThe board's main duties\nI agree with the basic spirit of Hoffman's philosophy above: the board should not be trying to \"run the company\" (they're too low-engagement and don't know enough about it), and should instead be focused on a small number of big-picture questions like \"How is the CEO doing?\"\nAnd I do think the board's #1 and most fundamental job is evaluating the CEO's performance. The board is the only reliable source of accountability for the CEO - even more so at a nonprofit than a for-profit, since bad CEO performance won't necessarily show up via financial problems or unhappy shareholders.6 (As noted below, I think many nonprofit boards have no formal process for reviewing the CEO's performance, and the ones that do often have a lightweight/underwhelming one.)\nBut I think the board also needs to take a leading role - and not trust the judgment of the CEO and other staff - when it comes to:\nOverseeing decisions that could importantly reduce the board's powers. The CEO might want to enter into an agreement with a third party that is binding on the nonprofit and therefore on the board (for example, \"The nonprofit will now need permission from the third party in order to do X\"); or transfer major activities and assets to affiliated organizations that the board doesn't control (for example, when Open Philanthropy split off from GiveWell); or revise the organization's mission statement, bylaws,7 etc.; or other things that significantly reduce the scope of what the board has control over. The board needs to represent its own interests in these cases, rather than deferring to the CEO (whose interests may be different).\nOverseeing big-picture irreversible risks and decisions that could importantly affect future CEOs. For example, I think the board needs to be anticipating any major source of risk that a nonprofit collapses (financially or otherwise) - if this happens, the board can't simply replace the CEO and move on, because the collapse affects what a future CEO is able to do. (What risks and decisions are big enough? Some thoughts in a footnote.8)\nAll matters relating to the composition and performance of the board itself. Adding new board members, removing board members, and reviewing the board's own performance are things that the board needs to be responsible for, not the CEO. If the CEO is controlling the composition of the board, this is at odds with the board's role in overseeing the CEO.\nEngaging on main duties, staying out of the way otherwise\nI think the ideal board member's behavior is roughly along the lines of the following:\nActively, intensively engage in the main duties from the previous section. Board members should be knowledgeable about, and not defer to the CEO on, (a) how the CEO is performing; (b) how the board is performing, and who should be added and removed; (c) spotting (and scanning the horizon for) events that could reduce the board's powers, or lead to big enough problems and restrictions so as to irreversibly affect what future CEOs are able to do. \nIdeally they should be focusing their questions in board meetings on these things, as well as having some way of gathering information about them that doesn't just rely on hearing directly from the CEO. (Some ideas for this are below.) When reviewing financial statements and budgets, they should be focused mostly on the risk of major irreversible problems (such as going bankrupt or failing to be compliant); when hearing about activities, they should be focused mostly on what they reflect about the CEO's performance; etc.\nBe advisory (\"stay out of the way\") otherwise. Meetings might contain all sorts of updates and requests for reactions. I think a good template for a board member, when sharing an opinion or reaction, is either to (a) explain as they're talking why this topic is important for the board's main duties; or (b) say (or imply) something like \"I'm curious / offering an opinion about ___, but if this isn't helpful, please ignore it, and please don't hesitate to move the meeting to the next topic as soon as this stops feeling productive.\"\nThe combination of intense engagement on core duties and \"staying out of the way\" otherwise can make this a very weird role. An organization will often go years without any serious questions about the CEO's performance or other matters involving core duties. So a board member ought to be ready to quietly nod along and stay out of the way for very long stretches of time, while being ready to get seriously involved and engaged when this makes sense. \nAim for division of labor. I think a major problem with nonprofit boards is that, by default, it's really unclear which board member is responsible for what. I think it's a good idea for board members to explicitly settle this via assigning:\nSpecialists (\"Board member X is reviewing the financials; the rest of us are mostly checked-out and/or sanity-checking on that\"); \nSubcommittees (\"Board members X and Y will look into this particular aspect of the CEO's performance\"); \nA Board Chair or Lead Independent Director9 who is the default person to take responsibility for making sure the board is doing its job well (this could include suggesting and assigning responsibility for some of the ideas I list below; helping to set the agenda for board meetings so it isn't just up to the CEO; etc.)\nThis can further help everyone find a balance between engaging and staying out of the way.\nWho should be on the board?\nOne answer is that it should be whoever can do well at the duties outlined above - both in terms of substance (can they accurately evaluate the CEO's performance, identify big-picture irreversible risks, etc.?) and in terms of style (do they actively engage on their main duties and stay out of the way otherwise?)\nBut to make things a bit more boiled-down and concrete, I think perhaps the most important test for a board member is: they'll get the CEO replaced if this would be good for the nonprofit's mission, and they won't if it wouldn't be.\nThis is the most essential function of the board, and it implies a bunch of things about who makes a good board member: \nThey need to do a great job understanding and representing the nonprofit's mission, and care deeply about that mission - to the point of being ready to create conflict over it if needed (and only if needed). \nA key challenge of nonprofits is that they have no clear goal, only a mission statement that is open to interpretation. And if two different board members interpret the mission differently - or are focused on different aspects of it - this could intensely color how they evaluate the CEO, which could be a huge deal for the nonprofit.\n \nFor example, if a nonprofit's mission is \"Help animals everywhere,\" does this mean \"Help as many animals as possible\" (which might indicate a move toward focusing on farm animals) or \"Help animals in the same way the nonprofit traditionally has\" or something else? How does it imply the nonprofit should make tradeoffs between helping e.g. dogs, cats, elephants, chickens, fish or even insects? How a board member answers questions like this seems central to how their presence on the board is going to affect the nonprofit.\nThey need to have a personality and position capable of challenging the CEO (though also capable of staying out of the way). \nA common problem I see is that some board member is (a) not very engaged with the nonprofit itself, but (b) highly values their personal relationship with the CEO and other board members. This seems like a bad combination, but unfortunately a common one. Board members need to be willing and able to create conflict in order to do the right thing for the nonprofit.\n \nLimiting the number of board members who are employees (reporting to the CEO) seems important for this reason.\n \nIf you can't picture a board member \"making waves,\" they probably shouldn't be on the board - that attitude will seem fine more than 90% of the time, but it won't work well in the rare cases where the board really matters.\n \nOn the other hand, if someone is only comfortable \"making waves\" and feels useless and out of sorts when they're just nodding along, that person shouldn't be on the board either. As noted above, board members need to be ready for a weird job that involves stepping up when the situation requires it, but staying out of the way when it doesn't. \nThey should probably have a well-developed take on what their job is as a board member. Board members who can't say much about where they expect to be highly engaged, vs. casually advisory - and how they expect to invest in getting the knowledge they need to do a good job leading on particular issues - don't seem like great bets to step up when they most need to (or stay out of the way when they should).\nIn my experience, most nonprofits are not looking for these qualities in board members. They are, instead, often looking for things like:\nCelebrity and reputation - board members who are generally impressive and well-regarded and make the nonprofit look good. Unfortunately, I think such people often just don't have much time or interest for their job. Many are also uninterested in causing any conflict, which makes them basically useless as board members IMO.\nFundraising - a lot of nonprofits pretty much explicitly just try to put people on the board who will help raise money for them. This seems bad for governance.\nNarrow expertise on some topic that is important for the nonprofit. I don't really think this is what nonprofits should be seeking from board members,10 except to the extent it ties deeply into the board members' core duties, e.g., where it's important to have an independent view on technical topic X in order to do a good job evaluating the CEO.\nI think a good profile for a board member is someone who cares greatly about the nonprofit's mission, and wants it to succeed, to the point where they're ready to have tough conversations if they see the CEO falling short. Examples of such people might be major funders, or major stakeholders (e.g., a community leader from a community of people the nonprofit is trying to help).\nA few practices that seem good\nI'll anticlimactically close with a few practices that seem helpful to me. These are mostly pretty generic practices, useful for both for-profit and nonprofit boards, that I have seen working in practice but also seen too many boards going without. They don't fully address the weirdnesses discussed above (especially the stuff specific to nonprofit as opposed to for-profit boards), but they seem to make things some amount better.\nKeeping it simple for low-stakes organizations. If a nonprofit is a year old and has 3 employees, it probably shouldn't be investing a ton of its energy in having a great board (especially since this is hard). \nA key question is: \"If the board just stays checked out and doesn't hold the CEO accountable, what's the worst thing that can happen?\" If the answer is something like \"The nonprofit's relatively modest budget is badly spent,\" then it might not be worth a huge investment in building a great board (and in taking some of the measures listed below). Early-stage nonprofits often have a board consisting of 2-3 people the founder trusts a lot (ideally in a \"you'd fire me if it were the right thing to do\" sense rather than in a \"you've always got my back\" sense), which seems fine. The rest of these ideas are for when the stakes are higher.\nFormal board-staff communication channels. A very common problem I see is that:\nBoard members know almost nothing about the organization, and so are hesitant to engage in much of anything.\nEmployees of the organization know far more, but find the board members mysterious/unapproachable/scary, and don't share much information with them.\nI've seen this dynamic improved some amount by things like a staff liaison: a board member who is designated with the duty, \"Talk to employees a lot, offer them confidentiality as requested, try to build trust, and gather information about how things are going.\" Things like regular \"office hours\" and showing up to company events can help with this.\nViewing board seats as limited. It seems unlikely that a board should have more than 10 members (and even 10 seems like a lot), since it's hard to have a productive meeting past that point.11 When considering a new addition to the board, I think the board should be asking something much closer to \"Is this one of the 10 best people in the world to sit on this board?\" than to \"Is this person fine?\"\nRegular CEO reviews.\nMany nonprofits don't seem to have any formal, regular process for reviewing the CEO's performance; I think it's important to do this.\nThe most common format I've seen is something like: one board member interviews the CEO's direct reports, and perhaps some other people throughout the company, and integrates this with information about the organization's overall progress and accomplishments (often presented by the organization itself, but they might ask questions about it) to provide a report on what the CEO is doing well and could do better. I think this approach has a lot of limitations - staff are often hesitant to be forthcoming with a board member (even when promised anonymity), and the board member often lacks a lot of key information - but even with those issues, it tends to be a useful exercise.\nClosed sessions. I think it's important for the board to have \"closed sessions\" where board members can talk frankly without the CEO, other employees, etc. hearing. I think a common mistake is to ask \"Does anyone want the closed session today or can we skip it?\" - this puts the onus on board members to say \"Yes, I would like a closed session,\" which then implies they have something negative to say. I think it's better for whoever's running the meetings to identify logical closed sessions (e.g., \"The board minus employees\"), allocate time for them and force them to happen.\nRegular board reviews. It seems like it would be a good idea for board members to regularly assess each other's performance, and the performance of the board as a whole. But I've actually seen very little of this done in practice and I can't point to versions of it that seem to have some track record of working well. It does seem like a good idea though!\nConclusion\nThe board is the only body at a nonprofit that can hold the CEO accountable to accomplishing the mission. I broadly feel like most nonprofit boards just aren't very well-suited to this duty, or necessarily to much of anything. It's an inherently weird structure that seems difficult to make work. \nI wish someone would do a great job studying and laying out how nonprofit boards should be assembled, how they should do their job and how they can be held accountable. You can think of this post as my quick, informal shot at that.Footnotes\n I'm using the term \"CEO\" throughout, although the chief executive at a non profit sometimes has another title, such as \"Executive Director.\" ↩\n A lot of this piece is about how the fundamental setup of a nonprofit board leads to the kinds of problems and dynamics I'm describing. This doesn't mean we should necessarily think there's any way to fix it or any better alternative. It just means that this setup seems to bring a lot of friction points and challenges that most relationships between supervisor-and-supervised don't seem to have, which can make the experience of interacting with a board feel vaguely unlike what we're used to in other contexts, or \"weird.\"\n People who have interacted with tons of boards might get so used to these dynamics that they no longer feel weird. I haven't reached that point yet myself though. ↩\n The fact that the nonprofit's goals aren't clearly defined and have no clear metric (and often aren't susceptible to measurement at all) is a pretty general challenge of nonprofits, but I think it especially shows up for a structure (the board) that is already weird in the various other ways I'm describing. ↩\n Superficially, you could make most of the same complaints about shareholders of a for-profit company. But:\nShareholders are the people who ultimately make or lose money if the company does well or poorly (you can think of this as a form of accountability). By contrast, nonprofit board members often have very little (or only an idiosyncratic) personal connection to and investment in the organization.\nShareholders compensate for their low engagement by picking representatives (a board) whom they can hold accountable for the company's performance. Nonprofit board members are the representatives, and aren't accountable to anyone. ↩\n Especially \"good and concise.\" Most of the points I make here can be found in some writings on boards somewhere, but it's hard to find sensible-seeming and comprehensive discussions of what the board should be doing and who should be on it. ↩\n Part of the CEO's job is fundraising, and if they do a bad job of this, it's going to be obvious. But that's only part of the job. At a nonprofit, a CEO could easily be bringing in plenty of money and just doing a horrible job at the mission - and if the board isn't able to learn this and act on it, it seems like very bad news. ↩\n The charter and bylaws are like the \"constitution\" of a nonprofit, laying out how its governance works. ↩\n This is a judgment call, and one way to approach it would be to reserve something like 1 hour of full-board meeting time per year for talking about these sorts of things (and pouring in more time if at least, like, 1/3 of the board thinks something is a big deal).\n Some examples of things I think are and aren't usually a big enough deal to start paying serious attention to:\nBig enough deal: financial decisions that increase the odds of going \"belly-up\" (running out of money and having to fold) by at least 10 percentage points. Not a big enough deal: spending money in ways that are arguably bad uses of money, having a lowish-but-not-too-far-off-of-peer-organizations amount of runway.\nBig enough deal: deficiencies in financial controls that an auditor is highlighting, or a lack of audit altogether, until a plan is agreed to to address these things. Not a big enough deal: most other stuff in this category.\nBig enough deal: organizations with substantial \"PR risk\" exposure should have a good team for assessing this and a \"crisis plan\" in case something happens. Not a big enough deal: specific organizational decisions and practices that you are not personally offended by or find unethical, but could imagine a negative article about. (If you do find them substantively unethical, I think that's a big enough deal.)\nBig enough deal: transferring like 1/3 or more of valuable things the nonprofit has (intellectual property, money, etc.) to another entity not controlled by the board. Not a big enough deal: starting an affiliate organization primarily for taking donations in another country or something.\nBig enough deal: doubling or halving the workforce. Not a big enough deal: smaller hirings and firings. ↩\n Sometimes the Board Chair is the CEO, and sometimes the Chair is an employee of the company who also sits on the board. In these cases, I think it's good for there to be a separate Lead Independent Director who is not employed by the company and is therefore exclusively representing the Board. They can help set agendas, lead meetings, and take responsibility by default when it's otherwise unclear who would do so. ↩\n Nonprofits can get expertise on topic X by hiring experts on X to advise them. The question is: when is it important to have an expert on X evaluating the CEO? ↩\n Though it could be fine and even interesting to have giant boards - 20 people, 50 or more - that have some sort of \"executive committee\" of 10 or fewer people doing basically all of the meetings and all of the work (with the rest functioning just as very passive, occasionally-voting equivalents of \"shareholders\"). Just assume I'm talking about the \"executive committee\" type thing here. ↩\n", "url": "https://www.cold-takes.com/nonprofit-boards-are-weird-2/", "title": "Nonprofit Boards are Weird", "source": "cold.takes", "source_type": "blog", "date_published": "2022-06-23", "id": "dff728c75d5f4e38ec00a4569314b1e8"} -{"text": "I've been working on a new series of posts about the most important century. \nThe original series focused on why and how this could be the most important century for humanity. But it had relatively little to say about what we can do today to improve the odds of things going well.\nThe new series will get much more specific about the kinds of events that might lie ahead of us, and what actions today look most likely to be helpful.\nA key focus of the new series will be the threat of misaligned AI: AI systems disempowering humans entirely, leading to a future that has little to do with anything humans value. (Like in the Terminator movies, minus the time travel and the part where humans win.)\nMany people have trouble taking this \"misaligned AI\" possibility seriously. They might see the broad point that AI could be dangerous, but they instinctively imagine that the danger comes from ways humans might misuse it. They find the idea of AI itself going to war with humans to be comical and wild. I'm going to try to make this idea feel more serious and real.\nAs a first step, this post will emphasize an unoriginal but extremely important point: the kind of AI I've discussed could defeat all of humanity combined, if (for whatever reason) it were pointed toward that goal. By \"defeat,\" I don't mean \"subtly manipulate us\" or \"make us less informed\" or something like that - I mean a literal \"defeat\" in the sense that we could all be killed, enslaved or forcibly contained.\nI'm not talking (yet) about whether, or why, AIs might attack human civilization. That's for future posts. For now, I just want to linger on the point that if such an attack happened, it could succeed against the combined forces of the entire world. \nI think that if you believe this, you should already be worried about misaligned AI,1 before any analysis of how or why an AI might form its own goals. \nWe generally don't have a lot of things that could end human civilization if they \"tried\" sitting around. If we're going to create one, I think we should be asking not \"Why would this be dangerous?\" but \"Why wouldn't it be?\"\nBy contrast, if you don't believe that AI could defeat all of humanity combined, I expect that we're going to be miscommunicating in pretty much any conversation about AI. The kind of AI I worry about is the kind powerful enough that total civilizational defeat is a real possibility. The reason I currently spend so much time planning around speculative future technologies (instead of working on evidence-backed, cost-effective ways of helping low-income people today - which I did for much of my career, and still think is one of the best things to work on) is because I think the stakes are just that high. \nBelow:\nI'll sketch the basic argument for why I think AI could defeat all of human civilization. \nOthers have written about the possibility that \"superintelligent\" AI could manipulate humans and create overpowering advanced technologies; I'll briefly recap that case.\n \nI'll then cover a different possibility, which is that even \"merely human-level\" AI could still defeat us all - by quickly coming to rival human civilization in terms of total population and resources.\n \nAt a high level, I think we should be worried if a huge (competitive with world population) and rapidly growing set of highly skilled humans on another planet was trying to take down civilization just by using the Internet. So we should be worried about a large set of disembodied AIs as well. \nI'll briefly address a few objections/common questions: \nHow can AIs be dangerous without bodies? \n \nIf lots of different companies and governments have access to AI, won't this create a \"balance of power\" so that no one actor is able to bring down civilization? \n \nWon't we see warning signs of AI takeover and be able to nip it in the bud?\n \nIsn't it fine or maybe good if AIs defeat us? They have rights too. \nClose with some thoughts on just how unprecedented it would be to have something on our planet capable of overpowering us all.\nHow AI systems could defeat all of us\nThere's been a lot of debate over whether AI systems might form their own \"motivations\" that lead them to seek the disempowerment of humanity. I'll be talking about this in future pieces, but for now I want to put it aside and imagine how things would go if this happened. \nSo, for what follows, let's proceed from the premise: \"For some weird reason, humans consistently design AI systems (with human-like research and planning abilities) that coordinate with each other to try and overthrow humanity.\" Then what? What follows will necessarily feel wacky to people who find this hard to imagine, but I think it's worth playing along, because I think \"we'd be in trouble if this happened\" is a very important point.\nThe \"standard\" argument: superintelligence and advanced technology\nOther treatments of this question have focused on AI systems' potential to become vastly more intelligent than humans, to the point where they have what Nick Bostrom calls \"cognitive superpowers.\"2 Bostrom imagines an AI system that can do things like:\nDo its own research on how to build a better AI system, which culminates in something that has incredible other abilities.\nHack into human-built software across the world.\nManipulate human psychology.\nQuickly generate vast wealth under the control of itself or any human allies.\nCome up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.\nDevelop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries. \n(Wait But Why reasons similarly.3)\nI think many readers will already be convinced by arguments like these, and if so you might skip down to the next major section.\nBut I want to be clear that I don't think the danger relies on the idea of \"cognitive superpowers\" or \"superintelligence\" - both of which refer to capabilities vastly beyond those of humans. I think we still have a problem even if we assume that AIs will basically have similar capabilities to humans, and not be fundamentally or drastically more intelligent or capable. I'll cover that next.\nHow AIs could defeat humans without \"superintelligence\"\nIf we assume that AIs will basically have similar capabilities to humans, I think we still need to worry that they could come to out-number and out-resource humans, and could thus have the advantage if they coordinated against us.\nHere's a simplified example (some of the simplifications are in this footnote4) based on Ajeya Cotra's \"biological anchors\" report:\nI assume that transformative AI is developed on the soonish side (around 2036 - assuming later would only make the below numbers larger), and that it initially comes in the form of a single AI system that is able to do more-or-less the same intellectual tasks as a human. That is, it doesn't have a human body, but it can do anything a human working remotely from a computer could do. \nI'm using the report's framework in which it's much more expensive to train (develop) this system than to run it (for example, think about how much Microsoft spent to develop Windows, vs. how much it costs for me to run it on my computer). \nThe report provides a way of estimating both how much it would cost to train this AI system, and how much it would cost to run it. Using these estimates (details in footnote)5 implies that once the first human-level AI system is created, whoever created it could use the same computing power it took to create it in order to run several hundred million copies for about a year each.6\nThis would be over 1000x the total number of Intel or Google employees,7 over 100x the total number of active and reserve personnel in the US armed forces, and something like 5-10% the size of the world's total working-age population.8\nAnd that's just a starting point. \nThis is just using the same amount of resources that went into training the AI in the first place. Since these AI systems can do human-level economic work, they can probably be used to make more money and buy or rent more hardware,9 which could quickly lead to a \"population\" of billions or more.\n \nIn addition to making more money that can be used to run more AIs, the AIs can conduct massive amounts of research on how to use computing power more efficiently, which could mean still greater numbers of AIs run using the same hardware. This in turn could lead to a feedback loop and explosive growth in the number of AIs.\nEach of these AIs might have skills comparable to those of unusually highly paid humans, including scientists, software engineers and quantitative traders. It's hard to say how quickly a set of AIs like this could develop new technologies or make money trading markets, but it seems quite possible for them to amass huge amounts of resources quickly. A huge population of AIs, each able to earn a lot compared to the average human, could end up with a \"virtual economy\" at least as big as the human one.\nTo me, this is most of what we need to know: if there's something with human-like skills, seeking to disempower humanity, with a population in the same ballpark as (or larger than) that of all humans, we've got a civilization-level problem.\nA potential counterpoint is that these AIs would merely be \"virtual\": if they started causing trouble, humans could ultimately unplug/deactivate the servers they're running on. I do think this fact would make life harder for AIs seeking to disempower humans, but I don't think it ultimately should be cause for much comfort. I think a large population of AIs would likely be able to find some way to achieve security from human shutdown, and go from there to amassing enough resources to overpower human civilization (especially if AIs across the world, including most of the ones humans were trying to use for help, were coordinating). \nI spell out what this might look like in an appendix. In brief:\nBy default, I expect the economic gains from using AI to mean that humans create huge numbers of AIs, integrated all throughout the economy, potentially including direct interaction with (and even control of) large numbers of robots and weapons. \n(If not, I think the situation is in many ways even more dangerous, since a single AI could make many copies of itself and have little competition for things like server space, as discussed in the appendix.)\nAIs would have multiple ways of obtaining property and servers safe from shutdown. \nFor example, they might recruit human allies (through manipulation, deception, blackmail/threats, genuine promises along the lines of \"We're probably going to end up in charge somehow, and we'll treat you better when we do\") to rent property and servers and otherwise help them out. \n \nOr they might create fakery so that they're able to operate freely on a company's servers while all outward signs seem to show that they're successfully helping the company with its goals.\nA relatively modest amount of property safe from shutdown could be sufficient for housing a huge population of AI systems that are recruiting further human allies, making money (via e.g. quantitative finance), researching and developing advanced weaponry (e.g., bioweapons), setting up manufacturing robots to construct military equipment, thoroughly infiltrating computer systems worldwide to the point where they can disable or control most others' equipment, etc. \nThrough these and other methods, a large enough population of AIs could develop enough military technology and equipment to overpower civilization - especially if AIs across the world (including the ones humans were trying to use) were coordinating with each other.\nSome quick responses to objections\nThis has been a brief sketch of how AIs could come to outnumber and out-resource humans. There are lots of details I haven't addressed.\nHere are some of the most common objections I hear to the idea that AI could defeat all of us; if I get much demand I can elaborate on some or all of them more in the future.\nHow can AIs be dangerous without bodies? This is discussed a fair amount in the appendix. In brief: \nAIs could recruit human allies, tele-operate robots and other military equipment, make money via research and quantitative trading, etc. \nAt a high level, I think we should be worried if a huge (competitive with world population) and rapidly growing set of highly skilled humans on another planet was trying to take down civilization just by using the Internet. So we should be worried about a large set of disembodied AIs as well. \nIf lots of different companies and governments have access to AI, won't this create a \"balance of power\" so that nobody is able to bring down civilization? \nThis is a reasonable objection to many horror stories about AI and other possible advances in military technology, but if AIs collectively have different goals from humans and are willing to coordinate with each other11 against us, I think we're in trouble, and this \"balance of power\" idea doesn't seem to help. \n What matters is the total number and resources of AIs vs. humans.\nWon't we see warning signs of AI takeover and be able to nip it in the bud? I would guess we would see some warning signs, but does that mean we could nip it in the bud? Think about human civil wars and revolutions: there are some warning signs, but also, people go from \"not fighting\" to \"fighting\" pretty quickly as they see an opportunity to coordinate with each other and be successful.\nIsn't it fine or maybe good if AIs defeat us? They have rights too. \nMaybe AIs should have rights; if so, it would be nice if we could reach some \"compromise\" way of coexisting that respects those rights. \nBut if they're able to defeat us entirely, that isn't what I'd plan on getting - instead I'd expect (by default) a world run entirely according to whatever goals AIs happen to have.\nThese goals might have essentially nothing to do with anything humans value, and could be actively counter to it - e.g., placing zero value on beauty and having zero attempts to prevent or avoid suffering).\nRisks like this don't come along every day\nI don't think there are a lot of things that have a serious chance of bringing down human civilization for good.\nAs argued in The Precipice, most natural disasters (including e.g. asteroid strikes) don't seem to be huge threats, if only because civilization has been around for thousands of years so far - implying that natural civilization-threatening events are rare.\nHuman civilization is pretty powerful and seems pretty robust, and accordingly, what's really scary to me is the idea of something with the same basic capabilities as humans (making plans, developing its own technology) that can outnumber and out-resource us. There aren't a lot of candidates for that.12\nAI is one such candidate, and I think that even before we engage heavily in arguments about whether AIs might seek to defeat humans, we should feel very nervous about the possibility that they could.\nWhat about things like \"AI might lead to mass unemployment and unrest\" or \"AI might exacerbate misinformation and propaganda\" or \"AI might exacerbate a wide range of other social ills and injustices\"13? I think these are real concerns - but to be honest, if they were the biggest concerns, I'd probably still be focused on helping people in low-income countries today rather than trying to prepare for future technologies. \nPredicting the future is generally hard, and it's easy to pour effort into preparing for challenges that never come (or come in a very different form from what was imagined).\nI believe civilization is pretty robust - we've had huge changes and challenges over the last century-plus (full-scale world wars, many dramatic changes in how we communicate with each other, dramatic changes in lifestyles and values) without seeming to have come very close to a collapse.\nSo if I'm engaging in speculative worries about a potential future technology, I want to focus on the really, really big ones - the ones that could matter for billions of years. If there's a real possibility that AI systems will have values different from ours, and cooperate to try to defeat us, that's such a worry.\nSpecial thanks to Carl Shulman for discussion on this post.\nAppendix: how AIs could avoid shutdown\nThis appendix goes into detail about how AIs coordinating against humans could amass resources of their own without humans being able to shut down all \"misbehaving\" AIs. \nIt's necessarily speculative, and should be taken in the spirit of giving examples of how this might work - for me, the high-level concern is that a huge, coordinating population of AIs with similar capabilities to humans would be a threat to human civilization, and that we shouldn't count on any particular way of stopping it such as shutting down servers.\nI'll discuss two different general types of scenarios: (a) Humans create a huge population of AIs; (b) Humans move slowly and don't create many AIs.\nHow this could work if humans create a huge population of AIs\nI think a reasonable default expectation is that humans do most of the work of making AI systems incredibly numerous and powerful (because doing so is profitable), which leads to a vulnerable situation. Something roughly along the lines of:\nThe company that first develops transformative AI quickly starts running large numbers of copies (hundreds of millions or more), which are used to (a) do research on how to improve computational efficiency and run more copies still; (b) develop valuable intellectual property (trading strategies, new technologies) and make money.\nOver time, AI systems are rolled out widely throughout society. Their numbers grow further, and their role in the economy grows: they are used in (and therefore have direct interaction with) high-level decision-making at companies, perhaps operating large numbers of cars and/or robots, perhaps operating military drones and aircraft, etc. (This seems like a default to me over time, but it isn't strictly necessary for the situation to be risky, as I'll go through below.)\nIn this scenario, the AI systems are malicious (as we've assumed), but this doesn't mean they're constantly causing trouble. Instead, they're mostly waiting for an opportunity to team up and decisively overpower humanity. In the meantime, they're mostly behaving themselves, and this is leading to their numbers and power growing. \nThere are scattered incidents of AI systems' trying to cause trouble,14 but this doesn't cause the whole world to stop using AI or anything.\n \nA reasonable analogy might be to a typical civil war or revolution: the revolting population mostly avoids isolated, doomed attacks on its government, until it sees an opportunity to band together and have a real shot at victory.\n \n(Paul Christiano's What Failure Looks Like examines this general flavor of scenario in a bit more detail.)\nIn this scenario, the AIs face a challenge: if it becomes obvious to everyone that they are trying to defeat humanity, humans could attack or shut down a few concentrated areas where most of the servers are, and hence drastically reduce AIs' numbers. So the AIs need a way of getting one or more \"AI headquarters\": property they control where they can safely operate servers and factories, do research, make plans and construct robots/drones/other military equipment. \nTheir goal is ultimately to have enough AIs, robots, etc. to be able to defeat the rest of humanity combined. This might mean constructing overwhelming amounts of military equipment, or thoroughly infiltrating computer systems worldwide to the point where they can disable or control most others' equipment, or researching and deploying extremely powerful weapons (e.g., bioweapons), or a combination.\nHere are some ways they could get to that point:\nThey could recruit human allies through many different methods - manipulation, deception, blackmail and other threats, genuine promises along the lines of \"We're probably going to end up in charge somehow, and we'll treat you better when we do.\" \nHuman allies could be given valuable intellectual property (developed by AIs), given instructions for making lots of money, and asked to rent their own servers and acquire their own property where an \"AI headquarters\" can be set up. Since the \"AI headquarters\" would officially be human property, it could be very hard for authorities to detect and respond to the danger.\n \nVia threats, AIs might be able to get key humans to cooperate with them - such as political leaders, or the CEOs of companies running lots of AIs. This would open up further strategies.\nAs assumed above, particular companies are running huge numbers of AIs. The AIs being run by these companies might find security holes in the companies' servers (this isn't the topic of this piece, but my general impression is that security holes are widespread and that reasonably competent people can find many of them)15, and thereby might find opportunities to create durable \"fakery\" about what they're up to. \nE.g., they might set things up so that as far as humans can tell, it looks like all of the AI systems are hard at work creating profit-making opportunities for the company, when in fact they're essentially using the server farm as their headquarters - and/or trying to establish a headquarters somewhere else (by recruiting human allies, sending money to outside bank accounts, using that money to acquire property and servers, etc.)\nIf AIs are in wide enough use, they might already be operating lots of drones and other military equipment, in which case it could be pretty straightforward to be able to defend some piece of territory - or to strike a deal with some government to enlist its help in doing so.\nAIs could mix-and-match the above methods and others: for example, creating \"fakery\" long enough to recruit some key human allies, then attempting to threaten and control humans in key positions of power to the point where they control solid amounts of military resources, then using this to establish a \"headquarters.\"\nIn any of these cases, once one or more \"AI headquarters\" are established:\nIt can be expanded as the AIs headquartered there make money, purchase more property and computing power, build more factories, etc.16\nIt can develop enough destructive potential and human allies that humans would be hesitant to attack it (as many relatively small countries have today). It would be clear that AIs at the headquarters are seeking autonomy, accumulating resources and taking some unethical actions, but the full scale and nature of the threat could easily remain ambiguous.\nEven if the threat is clear, AIs could potentially maintain a huge population in a relatively small space (potentially underground). They could also split up across many different headquarters (including some where they are simply using humans' hardware while hiding their presence). \nEither way, this large-but-hidden population could be recruiting human allies, developing advanced weaponry, looking for ways to take control of human' weapons (potentially possible if they greatly outnumber the humans working on cybersecurity, and if they can coordinate with the AIs \"helping\" those humans), etc. \n \nThey could also be working with (unwitting, or allied) humans to manufacture large numbers of robots with back-doors allowing the AIs to take control.\nOnce the headquarters has built up enough resources to go on offense, AIs around the world could coordinate with the headquarters. Humans could shut down AIs that they notice doing this, but they might have a very tough time getting value out of their servers and AI-controlled robots; this could make it easy for the AIs at the \"AI headquarters\" to out-resource humans.\nWhat if humans move slowly and don't create many AIs?\nThe above scenario has humans creating large numbers of AIs, such that the AIs just need to find a way to coordinate and acquire a safe \"headquarters\" in order to defeat us.\nWhat if humans moved more slowly, intentionally restricting human-level AI to a tiny portion of the available computing resources? Could a small number of AIs pose a risk to humanity?\nIn this world, we would have what Carl Shulman refers to as \"dry tinder everywhere, waiting for sparks.\" Anyone who can buy or rent a large amount of computing power can create a large number of AIs, which can produce a large amount of money and research, leading to still more AIs. \nSo a single AI could hack into enough servers17 to make a few copies of itself; recruit a few human allies; and start making money, acquiring more server space, etc. until its human allies are running a huge number of AIs. This could all be done in difficult-to-detect ways (it might e.g. just look like a set of humans renting a bunch of servers to run quantitative finance strategies).\nSo in this world, I think our concern should be any AI that is able to find enough security holes to attain that kind of freedom. Given the current state of cybersecurity, that seems like a big concern.\n Footnotes\n Assuming you accept other points made in the most important century series, e.g. that AI that can do most of what humans do to advance science and technology could be developed this century. ↩\n See Superintelligence chapter 6. ↩\n See the \"Nanotechnology blue box,\" in particular. ↩\nThe report estimates the amount of computing power it would take to train (create) a transformative AI system, and the amount of computing power it would take to run one. This is a bounding exercise and isn't supposed to be literally predicting that transformative AI will arrive in the form of a single AI system trained in a single massive run, but here I am interpreting the report that way for concreteness and simplicity.\nAs explained in the next footnote, I use the report's figures for transformative AI arriving on the soon side (around 2036). Using its central estimates instead would strengthen my point, but we'd then be talking about a longer time from now; I find it helpful to imagine how things could go in a world where AI comes relatively soon. ↩\n I assume that transformative AI ends up costing about 10^14 FLOP/s to run (this is about 1/10 the Bio Anchors central estimate, and well within its error bars) and about 10^30 FLOP to train (this is about 10x the Bio Anchors central estimate for how much will be available in 2036, and corresponds to about the 30th-percentile estimate for how much will be needed based on the \"short horizon\" anchor). That implies that the 10^30 FLOP needed to train a transformative model could run 10^16 seconds' worth of transformative AI models, or about 300 million years' worth. This figure would be higher if we use Bio Anchors's central assumptions, rather than assumptions consistent with transformative AI being developed on the soon side. ↩\n They might also run fewer copies of scaled-up models or more copies of scaled-down ones, but the idea is that the total productivity of all the copies should be at least as high as that of several hundred million copies of a human-ish model. ↩\nIntel, Google ↩\n Working-age population: about 65% * 7.9 billion =~ 5 billion. ↩\n Humans could rent hardware using money they made from running AIs, or - if AI systems were operating on their own - they could potentially rent hardware themselves via human allies or just via impersonating a customer (you generally don't need to physically show up in order to e.g. rent server time from Amazon Web Services). ↩\n(I had a speculative, illustrative possibility here but decided it wasn't in good enough shape even for a footnote. I might add it later.) ↩\n I don't go into detail about how AIs might coordinate with each other, but it seems like there are many options, such as by opening their own email accounts and emailing each other.  ↩\n Alien invasions seem unlikely if only because we have no evidence of one in millions of years. ↩\n Here's a recent comment exchange I was in on this topic. ↩\n E.g., individual AI systems may occasionally get caught trying to steal, lie or exploit security vulnerabilities, due to various unusual conditions including bugs and errors. ↩\n E.g., see this list of high-stakes security breaches and a list of quotes about cybersecurity, both courtesy of Luke Muehlhauser. For some additional not-exactly-rigorous evidence that at least shows that \"cybersecurity is in really bad shape\" is seen as relatively uncontroversial by at least one cartoonist, see: https://xkcd.com/2030/  ↩\n Purchases and contracts could be carried out by human allies, or just by AI systems themselves with humans willing to make deals with them (e.g., an AI system could digitally sign an agreement and wire funds from a bank account, or via cryptocurrency). ↩\n See above note about my general assumption that today's cybersecurity has a lot of holes in it. ↩\n", "url": "https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/", "title": "AI Could Defeat All Of Us Combined", "source": "cold.takes", "source_type": "blog", "date_published": "2022-06-09", "id": "f84609f7311e1dd0d0bf4dd4d2a70603"} -{"text": "I've claimed that the best way to learn is by writing about important topics. (Examples I've worked on include: which charity to donate to, whether life has gotten better over time, whether civilization is declining, whether AI could make this the most important century of all time for humanity.)\nBut I've also said this can be \"hard, taxing, exhausting and a bit of a mental health gauntlet,\" because:\nWhen trying to write about these sorts of topics, I often find myself needing to constantly revise my goals, and there's no clear way to know whether I'm making progress. That is: trying to write about a topic that I'm learning about is generally a wicked problem.\nI constantly find myself in situations like \"I was trying to write up why I think X, but I realized that X isn't quite right, and now I don't know what to write.\" and \"I either have to write something obvious and useless or look into a million more things to write something interesting.\" and \"I'm a week past my self-imposed deadline, and it feels like I have a week to go, but maybe it's actually 12 weeks - that's what happened last time.\" \nOverall, this is the kind of work where I can't seem to tell how progress is going, or stay on a schedule.\nThis post goes through some tips I've collected over the years for dealing with these sorts of challenges - both working on them myself, and working with teammates and seeing what works for them.\nA lot of what matters for doing this sort of work is coming at it with open-mindedness, self-criticality, attention to detail, and other virtues. But a running theme of this work is that it can be deadly to approach with too much virtue: holding oneself to self-imposed deadlines, trying for too much rigor on every subtopic, and otherwise trying to do \"Do everything right, as planned and on time\" can drive a person nuts. So this post is focused on a less obvious aspect of what helps with wicked problems, which is useful vices - antidotes to the kind of thoroughness and conscientiousness that lead to unreachable standards, and make wicked problems impossible.\nI've organized my tips under the following vices, borrowing from Larry Wall and extending his framework a bit:\nLaziness. When some key question is hard to resolve, often the best move is to just ... not resolve it, and change the thesis of your writeup instead (and change how rigorous you're trying to make it). For example, switching from \"These are the best charities\" to \"These are the charities that are best by the following imperfect criteria.\"\nImpatience. One of the most crucial tools for this sort of work is interrupting oneself. I could be reading through study after study on some charitable activity (like building wells), when stepping back to ask \"Wait, why does this matter for the larger goal again?\" could be what I most need to do.\nHubris. Whatever I was originally arguing (\"Charity X is the best\"), I'm probably going to realize at some point that I can't actually defend it. This can be demoralizing, even crisis-inducing. I recommend trying to build an unshakable conviction that one has something useful to say, even when one has completely lost track of what that something might be.\nSelf-preservation. When you're falling behind, it can be tempting to make a \"heroic\" effort at superhuman productivity. When a problem seems impossible, it can be tempting to fix your steely gaze on it and DO IT ANYWAY. I recommend the opposite: instead of rising to the challenge, shrink from it and fight another day (when you'll solve some problem other than the one you thought you were going for).\nOverall, it's tempting to try to \"boil the ocean\" and thoroughly examine every aspect of a topic of interest. But the world is too big, and the amount of information is too much. I think the only way to form a view on an important topic is to do a whole lot of simplifying, approximating and skipping steps - aiming for a step of progress rather than a confident resolution.\nLaziness\nThis is Gingi, the patron saint of not giving a fuck. It's a little hard to explain. Maybe in a future piece. Just imagine someone who literally doesn't care at all about anything, and ask \"How would Gingi handle the problem I'm struggling with, and how bad would that be?\" - bizarrely, this is often helpful.\nHypothesis rearticulation\nMy previous piece focused on \"hypothesis rearticulation\": instead of defending what I was originally going to argue, I just change what I'm arguing so it's easier to defend. For example, when asking Has Life Gotten Better?, I could've knocked myself out trying to pin down exactly how quality of life changed in each different part of the world between, say, the year 0 and the year 1000. Instead, I focused on saying that that time period is a \"mystery\" and focused on arguing for why we shouldn't be confident in any of a few tempting narratives. \nMy previous piece has another example of this move. It's one of the most important moves for answering big questions.\nQuestions for further investigation\nThis is really one of my favorites. Every GiveWell report used to have a big section at the bottom called \"Questions for further investigation.\" We'd be working on some question like \"What about the possibility that paying for these services (e.g., bednets) just causes the government to invest less in them?\" and I'd be like \"Would you rather spend another 100 hours on this question, or write down a few sentences about what our best guess is right now, add it to the Questions for Further Investigation section and move on?\" \nTo be clear, sometimes the answer was the former, and I think we eventually did get to ~all of those questions (over the course of years). But still - it's remarkable how often this simple move can save one's project, and create another fun project for someone else to work on!\nWhat standard are we trying to reach? How about the easiest one that would still be worth reaching?\nIf you're writing an academic paper, you probably have a sense of what counts as \"enough evidence\" or \"enough argumentation\" that you've met the standards for a successful paper. \nBut here I'm trying to answer some broad question like \"Where should I donate?\" or \"Is civilization declining?\" that doesn't fit into an established field - and for such a broad question, I'm going to run into a huge number of sub-questions (each of which could be the subject of many papers of its own). It's tempting to try for some standard like \"Every claim I make is supported by a recognizably rigorous, conclusive analysis,\" but that way madness lies. \nI think it's often better to aim for the minimum level of rigor that would still make something \"the best available answer to the question.\" But I'm not absolutist about that either - a frustrating aspect of working with me on problems like this is that I'll frequently say things like \"Well, we don't need to thoroughly answer objection X, but we should do it if it's pretty quick to do so - that just seems like a good deal.\" I think this is a fine way to approach things, but it leads to shifting standards.\nHere's a slightly more nuanced way to think about how \"rigorous\" a piece is, when there's no clear standard to meet. I tend to ask: “How hard would it be for a critic to demonstrate that the writeup's conclusion is significantly off in a particular direction, and/or far less robust than the writer has claimed?”1\nThe \"how hard\" question can be answered via something like:\n\"A-hardness\": minimum hours needed by literally anyone in the world\n\"B-hardness\": minimum hours needed by any not-super-hard-to-access person, including someone who’s very informed about the topic in question\n\"C-hardness\": minimum hours needed by a reasonably smart but not very informed critic, looking on their own for flaws\nI seem to recall that with GiveWell, we got a lot more successful (in terms of e.g. donor retention) once we got to the point where we could get through an hour-long Q&A with donors with a “satisfying” response to each question - a response that demonstrated that (a) we had thought about the question more/harder than the interlocutor; (b) we had good reason to think it would take us significant time (say 10-100 hours or more) to get a better answer to the question than we had. At this point, I think the C-hardness was at least 10 hours or so - no small achievement, since lots of not-very-informed people know something about some random angle.\n(By now, I'd guess that GiveWell’s A-hardness is over 100 hours. But a C-hardness of 10 hours was the first thing to aim for.)\nThese standards are very different from something like “Each claim is proven with X amount of confidence.” I think that’s appropriate, when you keep in mind that the goal is “most thoughtful take available on a key action-guiding question.”\nImpatience\nMany people dream of working on a project that puts them in a flow state:\nIn positive psychology, a flow state, also known ... as being in the zone, is the mental state in which a person performing some activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment in the process of the activity. In essence, flow is characterized by the complete absorption in what one does, and a resulting transformation in one's sense of time.\nBut if you're working on wicked problems, I recommend that you avoid flow states, nice though they may be. (Thanks to Ajeya Cotra for this point.) Maybe you instead want a Harrison Bergeron state: every time you're getting in a groove, you get jolted out of it, completely lose track of what you were doing, and have to reassemble your thoughts.\nThat's because one of the most productive things you can do when working on a wicked problem is rethink what you're trying to do. The more you interrupt yourself, and the less attached you are to the plan you had, the more times you'll be able to notice that what you're writing isn't coming out as planned, and you should change course.\nCheckins and virtual checkins\nI think the ideal way to interrupt yourself is to be working with someone else who's engaged in your topic and has experience with similar sorts of work (at Open Philanthropy, this might mean your manager), and constantly ping them to say things like:\nI've started to argue for point X, but I don't think my arguments are that great.\nI'm thinking I should deeply investigate point Y - sound right?\nI'm feeling dread about this next section and I don't really have any idea why. Any thoughts? (A lot of people are hesitant to do this one, but I think it often is exactly the right move!)\nI think this is helpful for a few reasons:\nYou may have gotten subconsciously attached to the vision you had in your head for what you were going to write, and it's good to get a reaction from someone else who has less of that attachment.\nIt's generally just hard to make yourself look at your work with \"fresh eyes\" as your goal is constantly changing, so bringing in another person is good.\nIt's easy to get caught up in a \"virtue\" narrative when doing this work - \"I'm thorough and rigorous and productive, I'm going to answer this question thoroughly and rigorously and do it on time.\" It's tempting (as I'll get to) to try to overcome hard situations with \"heroic effort.\" But another person is more likely to ask questions like \"Well, how long does it usually take you to do this sort of thing?\" rather than \"Can you make an incredible heroic effort here?\" and “What do we think we can do and by when and is it worth it?” rather than “What would failure to do the thing you thought you could do say about you as a person?”\nWith early GiveWell, I got a huge amount of value from Elie, who consistently wanted to do things far less thoroughly than I wanted to. I probably ended up doing things 3x as thoroughly as he wanted and 1/3 as thoroughly (and so 3x faster!) as I originally wanted - a nice compromise.\nThese kinds of checkins can be very vulnerable (especially when the topic is something like \"I can't accomplish what we both said I would\"), and it can be hard to have the kind of relationship that makes them comfortable. It's best if the manager or peer being checked in with starts from a place of being nonjudgmental, remembering the wicked nature of the problem and not being attached to the original goals. \nI also recommend imagining an outsider interrupting you to comment on your work - I think this can get you some of those same benefits.\nOutline-driven research\nI recommend always working off of a complete outline of what you are going to argue and how, which has ideally been reviewed by someone else (or your simulation of someone else) who said \"Sure, if you can defend each subpoint in the way you say you can, I'll find this valuable.\"\nThat is:\nAs soon as possible after you start learning about a topic, write an outline saying: \"I think I can show that A seems true using the best available evidence of type X; B seems true using the best available evidence of type Y; therefore, conclusion C is true (slash the best available guess).\" Don't spend lots of time in \"undirected learning\" mode.\nAs soon as your attempt to flesh out this outline is failing, prioritize going back to the outline, adjusting it, getting feedback and being ready to go with a new argument. It's easy to say something like \"I'm not actually confident in this point, I should investigate it\" (as I did here), but I think it's better to interrupt yourself at that point; go back to the outline; redo it with the new plan; and ask whether the whole new plan looks good.\nOutlines don't need to be correct, they just need to be guesses, and they should be constantly changing. They're end-to-end plans for gathering and presenting evidence, not finished products.\nConstantly track your pace\nI think it's good to consistently revisit your estimate of how quickly you're on pace to finish the project. Not how quickly you want to finish it or originally said you would finish it - how quickly it will be finished if you do all of the remaining sections at about the pace you've done the current ones.\nI think a common issue is that someone looks very thoroughly into the first 2-3 subquestions that come up, without noticing that applying this thoroughness to all subquestions would put the project on pace to take years (or maybe decades?) Consistently interrupting yourself to re-estimate time remaining can be a good prompt to re-scope the project.\nDon't just leave a fix for later; duct tape it now\nThis tip comes from Ajeya. When you reach some difficult part of the argument that you haven't thought about enough yet, it's tempting to write \"[to do]\" and figure you'll come back to it. But this is dangerous:\nIt creates an assignment of unknown difficulty for your future self, putting them in the position of feeling obligated to fill in something they may not remember very well. \nIt makes it harder to estimate how much time is remaining in the project.\nIt poses the risk that you'll come back to fill it in, only to realize that you can't argue the subpoint as well as you thought - meaning you need to change a bunch of other stuff you wrote that relies on it.\nInstead, write down the shortest, simplest version of the point you can - focusing on what you currently believe rather than doing a fresh investigation. When you read the piece over again later, if you're not noticing the need for more, then you don't need to do more. \nHubris\nYour take is valuable\nA common experience with this kind of work is the \"too-weak wrong turn\": you realize just how much uncertainty there is in the question you're looking into, and how little you really know about it, and how easy it would be for someone to read your end product and say things like: \"So? I already knew all of this\" and \"There's nothing really new here\" and \"This isn't a definitive take, it's a bunch of guesswork on a bunch of different topics that an expert would know infinitely more about\" and such. \nThis can be demoralizing to the point where it's hard to continue writing, especially once you've put in a lot of time and have figured out most of what you want to say, but are realizing that \"what you want to say\" is covered in uncertainty and caveats. \nIt can sometimes be tempting to try to salvage the situation by furiously doing research to produce something more thorough and impressive.\nWhen someone (including myself) is in this type of situation, I often find myself saying the following sort of thing to them: \n\"If what you've got so far were trivial and worthless, you wouldn't have felt the pull to write this piece in the first place.\"\n\"Don't find support for what you think, just explain why you already think it.\"\nI think it can be useful to just take \"My take on this topic is valuable\" as an almost axiomatic backdrop (once one's take has been developed a bit). It doesn't mean more research isn't valuable, but it can shift the attitude from \"Furiously trying to find enough documentation that my take feels rigorous\" to \"Doing whatever extra investigation is worth the extra time, and otherwise just finishing up.\"\nYour productivity is fine\nUnderstanding deadlines. One of the hardest things about working on wicked problems is that it's very hard to say how long a project is supposed to take. For example, in the first year of GiveWell:\nWe felt that we absolutely had to launch our initial product by Thanksgiving 2007. Our initial product would be our giving recommendations for our initial five causes: saving lives in Africa, global poverty (focus on Africa), US early childhood care, US education, US job opportunities.\nAs we got close to the deadline, we were both pulling all nighters and cutting huge amounts of our planned content - things we had intended to write up or investigate were getting moved to questions for further investigation. At some point we gave up on releasing all five causes and hoped we would get one out in time.\nWe got “saving lives in Africa” up on December 6, and “global poverty” sometime not too long after that.\nWe hoped to get the remaining causes out in January so we could move on to other things. I believe we got them out in May or so.\nThe \"deadline miss\" didn't come from not working hard, it came from having no idea how much work was ahead of us. \nWorking on wicked problems means navigating:\nNot enough deadline. I think if one doesn't establish expectations for what will get done and by when, one will by default do everything in way too much depth and take roughly forever to finish a project - and will miss out on a lot of important pressure to do things like cutting and reframing the work.\nToo much deadline. On the other hand, if one does set a \"deadline,\" it's likely that this is based on a completely inaccurate sense of what's possible. If one then makes it a point of personal pride to hit the deadline - and sees a miss as a personal failing - this is a recipe for a shame spiral.\nEarly in a project, I suggest treating a deadline mostly as a \"deadline to have a better deadline.\" Something like: \"According to my wildly uninformed guess at how long things should take, I should be done by July 1; hopefully by July 1, I will be able to say something more specific, like 'I've gone through 1/3 of my subquestions, and the remaining 2/3 would take until September 1, which is too long, so I'm re-scoping the project.'\"\nAt the point where one can really reliably say how much time should be remaining, I think one is usually done with the hardest part of the project. \nFor these sorts of \"deadline to have a deadline\"s, I tend to make them comically aggressive - for example, “I’m gonna start writing this tomorrow and have it done after like 30 hours of work,” while knowing that I’m actually several months from having my first draft (but that going in with the attitude “I’m basically done already, just writing it down” will speed me up a lot by making me articulate some of the key premises). So I'm both setting absurd goals for what I can accomplish, and preparing to completely let myself off the hook if I fail. Hubris.\nUnderstanding procrastination/incubation. For almost anyone (and certainly for myself), working on wicked problems involves a lot of:\nFeeling \"stuck.\"\nNot knowing what to do next - or worse, feeling like one knows what one is supposed to do next, but finding that the next step just feels painful or aversive or \"off.\"\nHaving a ton of trouble moving forward, and likely procrastinating, often a huge amount.\n(More at my previous piece.)\nIn fact, early in the process of working on a wicked problem, I think it's often unrealistic to put in more than a few hours of solid work per day - and unhelpful to compare one's productivity to that of people doing better-defined tasks, where the goals are clear and don't change by the hour.\nWorking on wicked problems can often be a wild emotional rollercoaster, with lots of moments of self-loathing over being unable to focus, or missing a \"deadline,\" or having to heavily weaken the thing one was trying to say.\nIt's a tough balance, because I think one really does need to pressure oneself to produce. But especially once one has completed a few projects, I think it's feasible to be simultaneously \"failing to make progress\" and \"knowing that one is still broadly on track, because failing to make progress is part of the process.\" I think it's sometimes productive to have a certain kind of arrogance, an attitude like: \"Yes, I cleared the whole day to work on this project and so far what I have done is played 9 hours of video games. But the last 5 times I did something like this, I was in a broadly similar state, and then got momentum and finished on time. I'm doing great!\" The balance to strike is feeling enough urgency to move through the whole procrastinate-produce-rethink process, while having a background sense that \"this is all expected and fine\" that can prevent excessive personal shame and fear from the \"procrastination\" and \"rethink\" parts.\n(Personally, I often draft a 15-page document by spending 4 hours failing to write the first paragraph, then 1 hour on the first paragraph, then 1 hour failing to continue, then 1 hour on the rest of the first page, then 4 hours for the remaining 14 pages. If someone tries to interrupt me during the first 4 hours, I tell them I'm working, and that's true as far as I'm concerned!)\nSelf-preservation\nAs noted above, working on wicked problems often involves long periods of very low output, with self-imposed deadlines creeping up. This sometimes leads people to try to make up for lost time with a \"heroic\" effort at superhuman productivity, and to try to handle the hardest parts of a project by just working that much harder.\nI'm basically totally against this. An analogy I sometimes use:\nQ: When Superman shows up to save the day and realizes his rival is loaded with kryptonite, how should he respond? What’s the best, most virtuous thing he can do in that situation?\nA: Fly away as fast as he can, optionally shrieking in terror and letting all the onlookers say “Wow, what a coward.” This is a terrible time to be brave and soldier on! There are so many things Superman can do to be helpful - the single worst thing he can do is go where he won’t succeed.2\nIf the project is taking \"too long,\" it might be because it was impossible to set a \"schedule\" for in the first place, and trying to finish it off at a superhuman pace could easily just leave you exhausted, demoralized and still not close to done. Additionally, the next task sometimes seems \"scary\" because it is actually a bad idea and needs to be rethought.\nI generally advise people working on wicked problems to aim for \"jogging\" rather than \"sprinting\" - a metaphor I like because it emphasizes that this is fully consistent with trying to finish as fast as possible. In particular, I prefer the goal of \"Make at least a bit of progress on 95% of days, and in 100% of weeks\" to the goal of \"Make so much progress today that it makes up for all my wasted past days.\" (The former goal is not easy! I think aiming for it requires a lot of interrupting oneself to make sure one isn't spiraling or going down an unproductive rabbit hole - rather than a lot of \"trying to pedal to the metal,\" which can run right into those problems.)\nIt’s a bird, it’s a plane, it’s a schmoe!\nThis section is particularly targeted at effective altruists who feel compelled to squeeze every ounce of productivity out of themselves that they can, for moral reasons and not just personal pride. I think this attitude is dangerous, because of the way it leads people to set unrealistic and unsustainable expectations for themselves. \nMy take: \"Whenever you catch yourself planning on being a hero, just stop. If we’re going to save the world, we’re going to do it by being schmoes.\" That is:\nPlan on being about as focused, productive, and virtuous as people doing similar work on other topics. \nPlan on working a normal number of hours each day, plan on often getting distracted and mucking around, plan on taking as much vacation as other high-productivity people (a lot), plan on having as much going on outside of work as other high-productivity people (a lot), etc.\n(This is also a standard to hold oneself to - try not to lose productivity over things, like guilt spirals, that other people doing similar work often don't suffer from.)\nIf effective altruists are going to have outsized impact on the world, I think it will be mostly thanks to the unusual questions they’re asking and the unusual goals they’re interested in, not unusual dedication/productivity/virtue. I model myself as “Basically like a hedge fund guy but doing more valuable stuff,” not as “A being capable of exceptional output or exceptional sacrifice.” \nBe virtuous first!\nI don't think you're going to get very far with these \"vices\" alone. If you aren't balancing them with the virtues of open-mindedness, self-criticality, and doing the hard work to understand things, it's easy to just lazily write down some belief you have, cite a bit of evidence that you haven't looked at carefully or considered the best counter-arguments to, and hit \"publish.\" I think this is what the vast majority of people \"investigating\" important questions are doing, and if I were writing tips for the average person in the world, I'd have a very different emphasis.\nFor forming opinions and writing useful pieces about important topics, I think the first hurdle to clear is being determined to examine the strongest parts of both sides of an argument, understand them in detail (and with minimal trust), and write what you're finding with reasoning transparency. (All of this is much easier said than done.) But in my experience, many of the people who are strongest in these \"virtues\" veer too far in the virtuous direction and end up punishing themselves for missing unrealistic self-imposed deadlines on impossible self-imposed assignments. This piece has tried to give a feel for when and how to pull back, skip steps, and go easy on oneself, to make incremental progress on intimidating questions.Footnotes\n In practice, for a report that isn't claiming much rigor, this often means demonstrating “This isn’t even suggestive, it’s basically noise.” Here's a long debate about exactly that question for one of the key inputs into my views on transformative AI! ↩\n My favorite real-life example of this is Barry Bonds in 2002. So many star players try to play through injuries all year long, and frame this as being a \"team player.\" I remember Barry Bonds in 2002 taking all kinds of heat for the fact that he would sit out whenever he got even moderately injured, and would sometimes sit out games just because he felt kinda tired. But then the playoffs came around and he played every game and was out-of-this-world good, in a season that came down to the final game, at age 38. Who's the team player? ↩\n", "url": "https://www.cold-takes.com/useful-vices-for-wicked-problems/", "title": "Useful Vices for Wicked Problems", "source": "cold.takes", "source_type": "blog", "date_published": "2022-04-12", "id": "9625cd50c9a63e48dea71bae846be17e"} -{"text": "I'm interested in the topic of ideal governance: what kind of governance system should you set up, if you're starting from scratch and can do it however you want?\nHere \"you\" could be a company, a nonprofit, an informal association, or a country. And \"governance system\" means a Constitution, charter, and/or bylaws answering questions like: \"Who has the authority to make decisions (Congress, board of directors, etc.), and how are they selected, and what rules do they have to follow, and what's the process for changing those rules?\"\nI think this is a very different topic from something like \"How does the US's Presidential system compare to the Parliamentary systems common in Europe?\" The idea is not to look at today's most common systems and compare them, but rather to generate options for setting up systems radically different from what's common today. \nI don't currently know of much literature on this topic (aside from the literature on social choice theory and especially voting methods, which covers only part of the topic). This post describes the general topic and why I care, partly in the hopes that people can point me to any literature I've missed. Whether or not I end up finding any, I'm likely to write more on this topic in the future.\nOutline of the rest of the piece:\nI'll outline some common governance structures for countries and major organizations today, and highlight how much room there is to try different things that don't seem to be in wide use today. More\nI'll discuss why I care about this question. I have a few very different reasons: \nA short-term, tangible need: over the last several years, I've spoken with several (more than 3) organizations that feel no traditional corporate governance structure is satisfactory, because the stakes of their business are too great and society-wide for shareholder control to make sense, yet they are too early-stage and niche (and in need of nimbleness) to be structured like a traditional government. An example would be an artificial intelligence company that could end up with a normal commercial product, or could end up bringing about the most important century of all time for humanity. I wish I could point them to someone who was like: \"I've read all of, and written much of, the literature on what your options are. I can walk you through the pros and cons and help you pick a governance system that balances them for your needs.\" \n \nA small probability of a big future win. The world today has lots of governments, but they seem to mostly follow a very small number of basic governance templates. At some point, there will be new states with new Constitutions - maybe via space settlements, maybe via collapse of existing states, etc. - but I expect these moments to be few and far between. A significant literature and set of experts on \"ideal governance\" could lead to a radically different kind of state government, potentially with radically different policies that the rest of the world could learn from.\n \nA weird, out-of-left-field application. Some of my interest in this topic actually comes via my interest in moral uncertainty: the question of what it's ethical to do when one is struggling between more than one theory of ethics, with radically different implications. This is hard to explain, but I try below.\nI'll describe a bit more what I think literature on this question could look like (and what already exists that I know of), partly to guide readers who might be able to help me find more.\nCommon governance structures today\nAll of these are simplified; I'm trying to illustrate the basic idea of what questions \"ideal governance\" is asking.\nA standard (e.g., public) corporation works like this: it has shareholders, assigned one vote per share (not per person), who elect a board of directors that governs by majority. The board generally appoints a CEO that it entrusts with day-to-day decisions. There is a \"constitution\" of sorts (the Articles of Incorporation and bylaws) and a lot more wrinkles in terms of how directors are selected, but that's the basic idea. \nA standard nonprofit is like a corporation, but entirely lacking the shareholder layer - it's governed directly by the board of directors. (I find something weird about a structure this simple - a simple board majority can do literally anything, even though the board of directors is often a somewhat random assortment of donors, advisors, etc.)\nThe US federal government is a lot more complex. It splits authority between the House of Representatives, the Senate, the Presidency and the Supreme Court, all of which have specific appointment procedures, term limits, etc. and are meta-governed by a Constitution that requires special measures to change. There are lots of specific choices that were made in designing things this way, and lots of things that could've been set up differently in the 18th century that would probably still matter today. \nOther democracies tend to have governments that differ in a lot of ways (e.g.), while being based on broadly similar principles: voters elect representatives to more than one branch of government, which then divide up (and often can veto each other on) laws, expenditures, etc.\nWhen I was 13, the lunch table I sat at established a Constitution with some really strange properties that I can't remember. I think there was a near-dictatorial authority who rotated daily, with others able to veto their decisions by assembling supermajorities or maybe singing silly songs or something.\nIn addition to the design choices shown in the diagrams, there are a lot of others:\nWho votes, how often, and what voting system is used?\nHow many representatives are there in each representative body? How are they divided up (one representative per geographic area, or party-list proportional representation, or something else)?\nWhat term limits exist for the different entities?\nDo particular kinds of decisions require supermajorities? \nWhich restrictions are enshrined in a hard-to-change Constitution (and how hard is it to change), vs. being left to the people in power at the moment?\nOne way of thinking about the \"ideal governance\" question is: what kinds of designs could exist that aren't common today? And how should a new organization/country/etc. think about what design is going to be best for its purposes, beyond \"doing what's usually done\"? \nFor any new institution, it seems like the stakes are potentially high - in some important sense, picking a governance system is a \"one-time thing\" (any further changes have to be made using the rules of the existing system1). \nPerhaps because of this, there doesn't seem to be much use of innovative governance designs in high-stakes settings. For example, here are a number of ideas I've seen floating around that seem cool and interesting, and ought to be considered if someone could set up a governance system however they wanted:\nSortition, or choosing people randomly to have certain powers and responsibilities. An extreme version could be: \"Instead of everyone voting for President, randomly select 1000 Americans; give them several months to consider their choice, perhaps paid so they can do so full-time; then have them vote.\" \nThe idea is to pick a subset of people who are both (a) representative of the larger population (hence the randomness); (b) will have a stronger case for putting serious time and thought into their decisions (hence the small number). \n \nIt's solving a similar problem that \"representative democracy\" (voters elect representatives) is trying to solve, but in a different way.\nProportional decision-making. Currently, if Congress is deciding how to spend $1 trillion, a coalition controlling 51% of the votes can control all $1 trillion, whereas a coalition controlling 49% of the votes controls $0. Proportional decision-making could be implemented as \"Each representative controls an equal proportion of the spending,\" so a coalition with 20% of the votes controls 20% of the budget. It's less clear how to apply this idea to other sorts of bills (e.g., illegalizing an activity rather than spending money), but there are plenty of possibilities.2\nQuadratic voting, in which people vote on multiple things at once, and can cast more votes for things they care about more (with a \"quadratic pricing rule\" intended to make the number of votes an \"honest signal\" of how much someone cares).\nReset/Jubilee: maybe it would be good for some organizations to periodically redo their governance mostly from scratch, subject only to the most basic principles. Constitutions could contain a provision like \"Every N years, there shall be a new Constitution selected. The 10 candidate Constitutions with the most signatures shall be presented on a ballot; the Constitution receiving the most votes is the new Constitution, except that it may not contradict or nullify this provision. This provision can be prevented from occurring by [supermajority provision], and removed entirely by [stronger supermajority].\"\nMore examples in a footnote.3\nIf we were starting a country or company from scratch, which of the above ideas should we integrate with more traditional structures, and how, and what else should we have in our toolbox? That's the question of ideal governance.\nWhy do I care?\nI have one \"short-term, tangible need\" reason; one \"small probability of a big future win\" reason; and one \"weird, out-of-left-field\" reason.\nA short-term, tangible need: companies developing AI, or otherwise aiming to be working with huge stakes. Say you're starting a new company for developing AI systems, and you believe that you could end up building AI with the potential to change the world forever. \nThe standard governance setup for a corporation would hand power over all the decisions you're going to make to your shareholders, and by default most of your shares are going to end up held by people and firms that invested money in your company. Hopefully it's clear why this doesn't seem like the ideal setup for a company whose decisions could be world-changing. A number of AI companies have acknowledged the basic point that \"Our ultimate mission should NOT just be: make money for shareholders,\" and that seems like a good thing.\nOne alternative would be to set up like a nonprofit instead, with all power vested in a board of directors (no shareholder control). Some issues are that (a) this cuts shareholders out of the loop completely, which could make it pretty hard to raise money; (b) according to me at least, this is just a weird system of governance, for reasons that are not super easy to articulate concisely but I'll take a shot in a footnote4 (and possibly write more in the future).\nAnother alternative is a setup that is somewhat common among tech companies: 1-2 founders hold enough shares to keep control forever, so you end up with essentially a dictatorship. This also ... leaves something to be desired.\nOr maybe a company like this should just set up more like a government from the get-go, offering everyone in the world a vote via some complex system of representation, checks and balances. But this seems poorly suited to at least the relatively early days of a company, when it's small and its work is not widely known or understood. But then, how does the company handle the transition from the latter to the former? And should the former be done exactly in the standard way, or is there room for innovation there?\nOver the last several years, I've spoken with heads of several (more than 3) organizations that struggle between options like the above, and have at least strongly considered unusual governance setups. I wish I could point them to someone who was like: \"I've read all of, and written much of, the literature on what your options are. I can walk you through the pros and cons and help you pick a governance system that balances them for your needs.\" \nBut right now, I can't, and I've seen a fair amount of this instead: \"Let's just throw together the best system we can, based mostly on what's already common but with a few wrinkles, and hope that we figure this all out later.\" I think this is the right solution given how things stand, but I think it really does get continually harder to redesign one's governance as time goes on and more stakeholders enter the picture, so it makes me nervous.\nSimilar issues could apply to mega-corporations (e.g., FAANG) that are arguably more powerful than what the standard shareholder-centric company setup was designed for. Are there governance systems they could adopt that would make them more broadly accountable, without copying over all the pros and cons of full-blown representative democracy as implemented by countries like the US?\nA small probability of a big future win: future new states. The world today has lots of governments, but they seem to mostly follow a very small number of basic governance templates (e.g., I believe you see almost none of the things I listed above), and probably relatedly, there seems to be remarkably little variety and experimentation with policy. Policies that many believe could be huge wins - such as dramatically expanded immigration, land value taxation, \"consumer reports\"-style medical approvals,5 drug decriminalization, and charter cities - don't seem to have gotten much of a trial anywhere in the world. \nAt some point, there will be new states with new Constitutions - maybe via space settlements, maybe via collapse of existing states, etc. - but I expect these moments to be few and far between.\nBy default I expect future Constitutions to resemble present ones an awful lot. But maybe, at some future date, there will be a large \"ideal governance\" literature and some points of expert consensus on innovative governance designs that somebody really ought to try. That could lead to a radically different kind of state government, potentially with radically different policies that the rest of the world could learn from.\nAn out-of-left-field application for \"ideal governance.\" This is going to veer off the rails, so remember to skip to the next section if I lose you.\nSome of my interest in this topic actually comes via my interest in moral uncertainty: the question of what it's ethical to do when one is struggling between more than one theory of ethics, with radically different implications. \nFor example, there are arguments that our ethical decisions should be dominated by concern for ensuring that as many people as possible will someday get to exist. I really go back and forth on how much I buy these arguments, but I'm definitely somewhere between 10% convinced and 50% convinced. So ... say I'm \"20% convinced\" of some view that says preventing human extinction6 is the overwhelmingly most important consideration for at least some dimensions of ethics (like where to donate), and \"80% convinced\" of some more common-sense view that says I should focus on some cause unrelated to human extinction.7 How do I put those two together and decide what this means for actual choices I'm making?\nThe closest thing I've seen to a reasonable-seeming answer is the idea of a moral parliament: I should act as though I'm run by a Parliament with 80 members who believe in \"common-sense\" ethics, and 20 members who believe in the \"preventing extinction is overwhelmingly important\" idea. But with default Parliament rules, this would just mean the 80 members can run the whole show, without any compromise with the 20. \nAnd so, a paper on the \"moral parliament\" idea tries to make it work by ... introducing a completely new governance mechanism that I can't find any other sign of someone else ever talking about, \"proportional chances voting\" (spelled out in a footnote).8 I think this mechanism has its own issues,9 but it's an attempt to ensure something like \"A coalition controlling 20% of the votes has 20% of the effective power, and has to be compromised with, instead of being subject to the tyranny of the majority.\"\nMy own view (which I expect to write more about in the future) is that governance is roughly the right metaphor for \"moral uncertainty\": I am torn by multiple different sides of myself, with different takes on what it means to be a good person, and the problem of getting these different sides of myself to reach a decision together is like the problem of getting different citizens (or shareholders) to reach a decision together. The more we can say about what ideal governance looks like, the more we can say about how this ought to work - and the better I expect this \"moral parliament\"-type idea to end up looking, compared to alternatives.10\nThe literature I'm looking for\nIdeal governance seems like the sort of topic for which there should be a \"field\" of \"experts,\" studying it. What would such study look like? Three major categories come to mind:\nBrainstorming ideas such as those I listed above - innovative potential ways of solving classic challenges of governance, such as reconciling \"We want to represent all the voters\" with \"We want decisions to be grounded in expertise and high engagement, and voters are often non-expert and not engaged.\"\nI've come across various assorted ideas in this category, including quadratic voting, futarchy, and proportional chances voting, without seeing much sign that these sit within a broader field that I can skim through to find all the ideas that are out there.\nEconomics-style theory in which one asks questions like: \"If we make particular assumptions about who's voting, what information they have and lack, how much they suffer from bounded rationality, and how we define 'serving their interests' (see below), what kind of governance structure gets the best outcome?\"\nSocial choice theory, including on voting methods, tackles the \"how we define 'serving their interests'\" part of this. But I'm not aware of people using similar approaches to ask questions like \"Under what conditions would we want 1 chamber of Congress vs. 2, or 10? 100 Senators vs. 500, or 15? A constitution that can be modified by simple majority, vs. 2/3 majority vs. consensus? Term limits? Etc. etc. etc.\"\nEmpirical research (probably qualitative): Are there systematic reviews of unusual governance structures tried out by companies, and what the results have been? Of smaller-scale experiments at co-ops, group houses and lunch tables? \nTo be clear, I think the most useful version of this sort of research would probably be very qualitative - collecting reports of what problems did and didn't come up - rather than asking questions like \"How does a particular board structure element affect company profits?\"\nOne of the things I expect to be tricky about this sort of research is that I think a lot of governance comes down to things like \"What sorts of people are in charge?\" and \"What are the culture, expectations, norms and habits?\" A setup that is \"officially\" supposed to work one way could evolve into something quite different via informal practices and \"soft power.\" However, I think the formal setup (including things like \"what the constitution says about the principles each governance body is supposed to be upholding\") can have big effects on how the \"soft power\" works.\nIf you know where to find research or experts along the lines of the above, please share them in the comments or using this form if you don't want them to be public.\nI'll likely write about what I come across, and if I don't find anything new, I'll likely ramble some more about ideal governance. So either way, there will be more on this topic!Footnotes\n Barring violent revolution in the case of countries. ↩\n An example would be the \"proportional chances voting\" idea described here. ↩\nProxying/liquid democracy, or allowing voters to transfer their votes to other voters. (This is common for corporations, but not for governments.) This could be an alternative or complement to electing representatives, solving a similar problem (we want lightly-engaged voters to be represented, but we also want decisions ultimately made using heavy engagement and expertise). At first glance it may seem to pose a risk that people will be able to \"buy votes,\" but I don't actually think this is necessarily an issue (proxying could be done anonymously and on set schedules, like other votes).\nSoft term limits: the more terms someone has served, the greater a supermajority they need to be re-elected. This could be used to strike a balance between the advantages of term limits (avoiding \"effectively unaccountable\" incumbents) and no-term-limits (allowing great representatives to keep serving). \nFormal technocracy/meritocracy: Using hard structures (rather than soft norms) to assign authority to people with particular expertise and qualifications. An extreme example would be futarchy, in which prediction markets directly control decisions. A simpler example would be structurally rewarding representatives (via more votes or other powers) based on assessments of their track records (of predictions or decisions), or factual understanding of a subject. This seems like a tough road to go down by default, as any mechanism for evaluating \"track records\" and \"understanding\" can itself be politicized, but there's a wide space of possible designs.  ↩\n Most systems of government have a sort of funnel from \"least engaged in day to day decisions, but most ultimately legitimate representatives of whom the institution is supposed to serve\" (shareholders, voters) to \"most engaged in day to day decisions, but ultimately accountable to someone else\" (chief executive). A nonprofit structure is a very short funnel, and the board of directors tends to be a somewhat random assortment of funders, advisors, people who the founders just thought were cool, etc. I think they often end up not very accountable (to anyone) or engaged in what's going on, such that they have a hard time acting when they ought to, and the actions they do take are often kind of random. \n I'm not saying there is a clearly better structure available for this purpose - I think the weirdness comes from the fact that it's so unclear who should go in the box normally reserved for \"Shareholders\" or \"Voters.\" It's probably the best common structure for its purpose, but I think there's a lot of room for improvement, and the stakes seem high for certain organizations. ↩\n Context in this Marginal Revolution post, which links to this 2005 piece on a \"consumer reports\" model for the FDA. ↩\n Or \"existential catastrophe\" - something that drastically curtails humanity's future, even if it doesn't drive us extinct. ↩\n This isn't actually where I'm at, because I think the leading existential risks are a big enough deal that I would want to focus on them even if I completely ignored the philosophical argument that the future is overwhelmingly important. ↩\n Let's say that 70% of the Parliament members vote for bill X, and 30% vote against. \"Proportional chance voting\" literally uses a weighted lottery to pass bill X with 70% probability, and reject it with 30% probability (you can think of this like rolling a 10-sided die, and passing the bill if it's 7 or under).\n A key part of this is that the members are supposed to negotiate before voting and holding the lottery. For example, maybe 10 of the 30 members who are against bill X offer to switch to supporting it if some change is made. The nice property here is that rather than having a \"tyranny of the majority\" where the minority has no bargaining power, we have a situation where the 70-member coalition would still love to make a deal with folks in the minority, to further increase the probability that they get their way.\n Quote from the paper that I am interpreting: \"Under proportional chances voting, each delegate receives a single vote on each motion. Before they vote, there is a period during which delegates may negotiate: this could include trading votes on one motion for votes on another, introducing novel options for consideration within a given motion, or forming deals with others to vote for a compromise option that both consider to be acceptable. The delegates then cast their ballots for one particular option in each motion, just as they might in a plurality voting system. But rather than determining the winning option to be the one with the most votes, each option is given a chance of winning proportional to its share of the votes.\" ↩\n What stops someone who lost the randomized draw from just asking to hold the same vote again? Or asking to hold a highly similar/related vote that would get back a lot of what they lost? How does that affect the negotiated equilibrium? ↩\n Such as \"maximize expected choice-worthiness,\" which I am not a fan of for reasons I'll get to in the future. ↩\n", "url": "https://www.cold-takes.com/ideal-governance-for-companies-countries-and-more/", "title": "Ideal governance (for companies, countries and more)", "source": "cold.takes", "source_type": "blog", "date_published": "2022-04-05", "id": "38f06cb11bf4d7aee91f01e9cce3325f"} -{"text": "I’ve spent a lot of my career working on wicked problems: problems that are vaguely defined, where there’s no clear goal for exactly what I’m trying to do or how I’ll know when or whether I’ve done it. \nIn particular, minimal-trust investigations - trying to understand some topic or argument myself (what charity to donate to, whether civilization is declining, whether AI could make this the most important century of all time for humanity), with little reliance on what “the experts” think - tend to have this “wicked” quality:\nI could spend my whole life learning about any subtopic of a subtopic of a subtopic, so learning about a topic is often mostly about deciding how deep I want to go (and what to skip) on each branch. \nThere aren’t any stable rules for how to make that kind of decision, and I’m constantly changing my mind about what the goal and scope of the project even is.\nThis piece will narrate an example of what it’s like to work on this kind of problem, and why I say it is “hard, taxing, exhausting and a bit of a mental health gauntlet.” \nMy example is from the 2007 edition of GiveWell. It’s an adaptation from a private doc that some other people who work on wicked problems have found cathartic and validating. \nIt’s particularly focused on what I call the hypothesis rearticulation part of investigating a topic (steps 3 and 6 in my learning by writing process), which is when:\nI have a hypothesis about the topic I’m investigating.\nI realize it doesn’t seem right, and I need a new one.\nMost of the things I can come up with are either “too strong” (it would take too much work to examine them satisfyingly) or “too weak” (they just aren’t that interesting/worth investigating). \nI need to navigate that balance and find a new hypothesis that is (a) coherent; (b) important if true; (c) maybe something I can argue for.\nAfter this piece tries to give a sense for what the challenge is like, a future piece will give accumulated tips for navigating it.\nFlashback to 2007 GiveWell\nContext for those unfamiliar with GiveWell:\nIn 2007, I co-founded (with Elie Hassenfeld) an organization that recommends evidence-backed, cost-effective charities to help people do as much good as possible with their donations.\nWhen we started the project, we initially asked charities to apply for $25,000 grants, and to agree (as part of the process) that we could publish their application materials. This was our strategy for trying to find charities that could provide evidence about how much they were helping people (per dollar).\nThis example is from after we had collected information from charities and determined which one we wanted to rank #1, and were now trying to write it all up for our website. Since then, GiveWell has evolved a great deal and is much better than the 2007 edition I’ll be describing here. \n(This example is reconstructed from my memory a long time later, so it’s probably not literally accurate.)\nInitial “too strong” hypothesis. Elie (my co-founder at GiveWell) and I met this morning and I was like “I’m going to write a page explaining what GiveWell’s recommendations are and aren’t. Basically, they aren’t trying to evaluate every charity in the world. Instead they’re saying which ones are the most cost-effective.” He nodded and was like “Yeah, that’s cool and helpful, write it.”\nNow I’m sitting at my computer trying to write down what I just said in a way that an outsider can read - the “hypothesis articulation” phase. \nI write, “GiveWell doesn’t evaluate every charity in the world. Our goal is to save the most lives possible per dollar, not to create a complete ranking or catalogue of charities. Accordingly, our research is oriented around identifying the single charity that can save the most lives per dollar spent,”\nHmm. Did we identify the “single charity that can save the most lives per dollar spent?” Certainly not. For example, I have no idea how to compare these charities to cancer research organizations, which are out of scope. Let me try again:\n“GiveWell doesn’t evaluate every charity in the world. Our goal is to save the most lives possible per dollar, not to create a complete ranking or catalogue of charities. Accordingly, our research is oriented around identifying the single charity with the highest demonstrated lives saved per dollar spent - the charity that can prove rigorously that it saved the most” - no, it can’t prove it saved the most lives - “the charity that can prove rigorously that ” - uh - \nDo any of our charities prove anything rigorously? Now I’m looking at the page we wrote for our #1 charity and ugh. I mean here are some quotes from our summary on the case for their impact: “All of the reports we've seen are internal reports (i.e., [the charity] - not an external evaluator - conducted them) … Neither [the charity]’s sales figures nor its survey results conclusively demonstrate an impact … It is possible that [the charity] simply uses its subsidized prices to outcompete more expensive sellers of similar materials, and ends up reducing people's costs but not increasing their ownership or utilization of these materials … We cannot have as much confidence in our understanding of [the charity] as in our understanding of [two other charities], whose activities are simpler and more straightforward.”\nThat’s our #1 charity! We have less confidence in it than our lower-ranked charities … but we ranked it higher anyway because it’s more cost-effective … but it’s not the most cost-effective charity in the world, it’s probably not even the most cost-effective charity we looked at …\nHitting a wall. Well I have no idea what I want to say here. \nThis image represents me literally playing some video game like Super Meat Boy while failing to articulate what I want to say. I am not actually this bad at Super Meat Boy (certainly not after all the time I’ve spent playing it while failing to articulate a hypothesis), but I thought all the deaths would give a better sense for how the whole situation feels.\nRearticulating the hypothesis and going “too weak.” Okay, screw this. I know what the problem was - I was writing based on wishful thinking. We haven’t found the most cost-effective charity, we haven’t found the most proven charity. Let’s just lay it out, no overselling, just the real situation. \n“GiveWell doesn’t evaluate every charity in the world, because we didn’t have time to do that this year. Instead, we made a completely arbitrary choice to focus on ‘saving lives in Africa’; then we emailed 107 organizations that seemed relevant to this goal, of which 59 responded; we did a really quick first-round application process in which we asked them to provide evidence of their impact; we chose 12 finalists, analyzed those further, and were most impressed with Population Services International. There is no reason to think that the best charities are the ones that did best in our process, and significant reasons to think the opposite, that the best charities are not the ones putting lots of time into a cold-emailed application from an unfamiliar funder for $25k. Like every other donor in the world, we ended up making an arbitrary, largely aesthetic judgment that we were impressed with Population Services International. Readers who share our aesthetics may wish to donate similarly, and can also purchase photos of Elie and Holden at the following link:”\nOK wow. This is what we’ve been working on for a year? Why would anyone want this? Why are we writing this up? I should keep writing this so it’s just DONE but ugh, the thought of finishing this website is almost as bad as the thought of not finishing it.\nHitting a wall.\nWhat do I do, what do I do, what do I do.\nRearticulating the hypothesis and assigning myself more work. OK. I gave up, went to sleep, thought about other stuff for a while, went on a vision quest, etc. I’ve now realized that we can put it this way: our top charities are the ones with verifiable, demonstrated impact and room for more funding, and we rank them by estimated cost-effectiveness. “Verifiable, demonstrated” is something appealing we can say about our top charities and not about others, even though it’s driven by the fact that they responded to our emails and others didn’t. And then we rank the best charities within that. Great.\nSo I’m sitting down to write this, but I’m kind of thinking to myself: “Is that really quite true? That ‘the charities that participated in our process and did well’ and ‘The charities with verifiable, demonstrated impact’ are the same set? I mean … it seems like it could be true. For years we looked for charities that had evidence of impact and we couldn’t find any. Now we have 2-3. But wouldn’t it be better if I could verify none of these charities that ignored us have good evidence of impact just sitting around on their website? I mean, we definitely looked at a lot of websites before but we gave up on it, and didn’t scan the eligible charities comprehensively. Let me try it.”\nI take the list of charities that didn’t participate in round 1. That’s not all the charities in the world, but if none of them have a good impact section on their website, we’ve got a pretty plausible claim that the best stuff we saw in the application process is the best that is (now) publicly available, for the “eligible” charities in the cause. (This assumes that if one of the applicants had good stuff sitting around on their website, they would have sent it.)\nI start looking at their websites. There are 48 charities, and in the first hour I get through 6, verifying that there’s nothing good on any of those websites. This is looking good: in 8 work hours I’ll be able to defend the claim I’ve decided to make.\nHmm. This water charity has some kind of map of all the wells they’ve built, and some references to academic literature arguing that wells save lives. Does that count? I guess it depends on exactly what the academic literature establishes. Let’s check out some of these papers … huh, a lot of these aren’t papers per se so much as big colorful reports with giant bibliographies. Well, I’ll keep going through these looking for the best evidence I can …\n“This will never end.” Did I just spend two weeks reading terrible papers about wells, iron supplementation and community health workers? Ugh and I’ve only gotten through 10 more charities, so I’m only about ⅓ of the way through the list as a whole. I was supposed to be just writing up what we found, I can’t take a 6-week detour!\nThe over-ambitious deadline. All right, I’ll sprint and get it done in a week. [1 week later] Well, now I’m 60% way through the whole list. !@#$\n“This is garbage.” What am I even doing anyway? I’m reading all this literature on wells and unilaterally deciding that it doesn’t count as “proof of impact” the way that Population Services International’s surveys count as “proof of impact.” I’m the zillionth person to read these papers; why are we creating a website out of these amateur judgments? Who will, or SHOULD, care what I think? I’m going to spend another who knows how long writing up this stupid page on what our recommendations do and don’t mean, and then another I don’t even want to think about it finishing up all the other pages we said we’d write, and then we’ll put it online and literally no one will read it. Donors won’t care - they will keep going to charities that have lots of nice pictures. Global health professionals will just be like “Well this is amateur hour.”1\nThis is just way out of whack. Every time I try to add enough meat to what we’re doing that it’s worth publishing at all, the timeline expands another 2 months, AND we still aren’t close to having a path to a quality product that will mean something to someone.\nWhat’s going wrong here?\nI have a deep sense that I have something to say that is worth arguing for, but I don’t actually know what I am trying to say. I can express it in conversation to Elie, but every time I start writing it down for a broad audience, I realize that Elie and I had a lot of shared premises that won’t be shared by others. Then I need to decide between arguing the premises (often a huge amount of extra work), weakening my case (often leads to a depressing sense that I haven’t done anything worthwhile), or somehow reframing the exercise (the right answer more often than one would think).\nIt often feels like I know what I need to say and now the work is just “writing it down.” But “writing it down” often reveals a lot of missing steps and thus explodes into more tasks - and/or involves long periods of playing Super Meat Boy while I try to figure out whether there’s some version of what I was trying to say that wouldn’t have this property.\nI’m approaching a well-established literature with an idiosyncratic angle, giving me constant impostor syndrome. On any given narrow point, there are a hundred people who each have a hundred times as much knowledge as I do; it’s easy to lose sight of the fact that despite this, I have some sort of value-added to offer (I just need to not overplay what this is, and often I don’t have a really crisp sense of what it is).\nBecause of the idiosyncratic angle, I lack a helpful ecosystem of peer reviewers, mentors, etc. \nThere’s nothing to stop me from sinking weeks into some impossible and ill-conceived version of my project that I could’ve avoided just by, like, rephrasing one of my sentences. (The above GiveWell example has me trying to do extra work to establish a bunch of points that I ultimately just needed to sidestep, as you can see from the final product. This definitely isn’t always the answer, but it can happen.)\n \nI’m simultaneously trying to pose my question and answer it. This creates a dizzying feeling of constantly creating work for myself that was actually useless, or skipping work that I needed to do, and never knowing which I’m doing because I can’t even tell you who’s going to be reading this and what they’re going to be looking for.\n \nThere aren’t any well-recognized standards I can make sure I’m meeting, and the scope of the question I’m trying to answer is so large that I generally have a creeping sense that I’m producing something way too shot through with guesswork and subjective judgment to cause anyone to actually change their mind.\nAll of these things are true, and they’re all part of the picture. But nothing really changes the fact that I’m on my way to having (and publishing) an unusually thoughtful take on an important question. If I can keep my eye on that prize, avoid steps that don’t help with it (though not to an extreme, i.e., it’s good for me to have basic contextual knowledge), and keep reframing my arguments until I capture (without overstating) what’s new about what I’m doing, I will create something valuable, both for my own learning and potentially for others’.\n“Valuable” doesn’t at all mean “final.” We’re trying to push the conversation forward a step, not end it. One of the fun things about the GiveWell example is that the final product that came out at the end of that process was actually pretty bad! It had essentially nothing in common with the version of GiveWell that first started feeling satisfying to donors and moving serious money, a few years later. (No overlap in top charities, very little overlap in methodology.)\nFor me, a huge part of the challenge of working on this kind of problem is just continuing to come back to that. As I bounce between “too weak” hypotheses and “too strong” ones, I need to keep re-aiming at something I can argue that’s worth arguing, and remember that getting there is just one step in my and others’ learning process. A future piece will go through some accumulated tips on pulling that off.\nNext in series: Useful Vices for Wicked ProblemsFootnotes\n I really enjoyed the “What qualifies you to do this work?” FAQ on the old GiveWell site that I ran into while writing this. ↩\n", "url": "https://www.cold-takes.com/the-wicked-problem-experience/", "title": "The Wicked Problem Experience", "source": "cold.takes", "source_type": "blog", "date_published": "2022-03-02", "id": "ecb812a27bb5b7eb3cdad535f143ec89"} -{"text": "I have very detailed opinions on lots of topics. I sometimes get asked how I do this, which might just be people making fun of me, but I choose to interpret it as a real question, and I’m going to sketch an answer here. \nYou can think of this as a sort of sequel to Minimal-Trust Investigations. That piece talked about how investigating things in depth can be valuable; this piece will try to give a sense of how to get an in-depth investigation off the ground, going from “I’ve never heard of this topic before” to “Let me tell you all my thoughts on that.”\nThe rough basic idea is that I organize my learning around writing rather than reading. This doesn’t mean I don’t read - just that the reading is always in service of the writing. \nHere’s an outline:\nStep 1\n \nPick a topic\n \nStep 2\n \nRead and/or discuss with others (a bit)\n \nStep 3\n \nExplain and defend my current, incredibly premature hypothesis, in writing (or conversation)\n \nStep 4\n \nFind and list weaknesses in my case\n \nStep 5\n \nPick a subquestion and do more reading/discussing\n \nStep 6\n \nRevise my claim / switch sides\n \nStep 7\n \nRepeat steps 3-6 a bunch\n \nStep 8\n \nGet feedback on a draft from others, and use this to keep repeating steps 3-6\n \n \nThe “traditionally” hard parts of this process are steps 4 and 6: spotting weaknesses in arguments, trying to resist the temptation to “stick to my guns” when my original hypothesis isn’t looking so good, etc. \nBut step 3 is a different kind of challenge: trying to “always have a hypothesis” and re-articulating it whenever it changes. By doing this, I try to continually focus my reading on the goal of forming a bottom-line view, rather than just “gathering information.” I think this makes my investigations more focused and directed, and the results easier to retain. I consider this approach to be probably the single biggest difference-maker between \"reading a ton about lots of things, but retaining little\" and \"efficiently developing a set of views on key topics and retaining the reasoning behind them.\"\nBelow I'll give more detail on each step, then some brief notes (to be expanded on later) on why this process is challenging.\nMy process for learning by writing\nStep 1: pick a topic. First, I decide what I want to form an opinion about. My basic approach here is: “Find claims that are important if true, and might be true.” \nThis doesn’t take creativity. We live in an ocean of takes, pundits, advocates, etc. I usually cheat by paying special attention to claims by people who seem particularly smart, interesting, unconventionally minded (not repeating the same stuff I hear everywhere), and interested in the things I’m interested in (such as the long-run future of humanity). \nBut I also tend to be at least curious about any claim that is both “important if true” and “not obviously wrong according to some concrete reason I can voice,” even if it’s coming from a very random source (Youtube commenter, whatever).\nFor a concrete example throughout this piece, I’ll use this hypothesis, which I examined pretty recently: “Human history is a story of life getting gradually, consistently better.”\n(Other, more complicated examples are the Collapsing Civilizational Competence Hypothesis; the Most Important Century hypothesis; and my attempt to summarize history in one table.)\nStep 2: read and/or discuss (a bit). I usually start by trying to read the most prominent 1-3 pieces that (a) defend the claim or (b) attack the claim or (c) set out to comprehensively review the evidence on both sides. I try to understand the major reasons they’re giving for the side they come down on. I also chat about the topic with people who know more about it than I do, and who aren’t too high-stakes to chat with.\nIn the example I’m using, I read the relevant parts of Better Angels of our Nature and Enlightenment Now (focusing on claims about life getting better, and skipping discussion of “why”). I then looked for critiques of the books that specifically responded to the claims about life having gotten better (again putting aside the “why”). This led mostly to claims about the peacefulness of hunter-gatherers.\nStep 3: explain and defend my current, incredibly premature hypothesis, in writing (or conversation). This is where my approach gets unusual - I form a hypothesis about whether the claim is true, LONG before I’m “qualified to have an opinion.” The process looks less like “Read and digest everything out there on the topic” and more like “Read the 1-3 most prominent pieces on each side, then go.”\nI don’t have an easy time explaining “how” I generate a hypothesis while knowing so little - it feels like I just always have a “guess” at the answer to some topic, whether or not I even want to (though it often takes me a lot of effort to articulate the guess in words). The main thing I have to say about the “how” is that it just doesn’t matter: at this stage the hypothesis is more about setting the stage for more questions about investigation than about really trying to be right, so it seems sufficient to “just start rambling onto the page, and make any corrections/edits that my current state of knowledge already forces.”\nFor this example, I noted down something along the lines of: “Life has gotten better throughout history. The best data on this comes from the last few hundred years, because before that we just didn’t keep many records. Sometimes people try to claim that the longest-ago, murkiest times were better, such as hunter-gatherer times, but there’s no evidence for this - in fact, empirical evidence shows that hunter-gatherers were very violent - and we should assume that these early times fit on the same general trendline, which would mean they were quite bad. (Also, if you go even further back than hunter-gatherers, you get to apes, whose lives seem really horrible, so that seems to fit the trend as well.1)” \nIt took real effort to disentangle the thoughts in my head to the point where I could write that, but I tried to focus on keeping things simple and not trying to get it perfect.\nAt this stage, this is not a nuanced, caveated, detailed or well-researched take. Instead, my approach is more like: “Try to state what I think in a pretty strong, bold manner; defend it aggressively; list all of the best counterarguments, and shoot them down.” This generally fails almost immediately.\nStep 4: find and list weaknesses in my case. My next step is to play devil’s advocate against myself, such as by:\nLooking for people arguing things that contradict my working hypothesis, and looking for their strongest points.\nNoting claims I’ve made with this property: “I haven’t really made an attempt to look comprehensively at the arguments on both sides of this, and if I did I might change my mind.”\n(This summary obscures an ocean of variation. Having more existing knowledge about a general area, and more experience with investigations in general, can make someone much better at noticing things like this.)\nIn the example, my “devil’s advocate” points included:\nI’m getting all of my “life has gotten better” charts from books that are potentially biased. I should do something to see whether there are other charts, excluded from those books, that tell the opposite story.\nFrom my brief skim, the “hunter-gatherers were violent” claim looks right, and the critiques seem very hand-wavy and non-data-based. But I should probably read them more carefully and pull out their strongest arguments.\nEven if hunter-gatherers were violent, what about other aspects of their lives? Wikipedia seems to have a pretty rosy picture …\nIn theory, I could swap Step 4 (listing things I’d like to look into more) with Step 3 (writing what I think). That is, I could try to review both sides of every point comprehensively before forming my own view, which means a lot more reading before I start writing. \nI think many people try to do this, but in my experience at least, it’s not the best way to go. \nDebates tend to be many-dimensional: for example, “Has life gotten better?” quickly breaks down into “Has quality-of-life metric X gotten better over period Y?” for a whole bunch of different X-Y pairs (plus other questions2). \nSo if my goal were “Understand both sides of every possible sub-debate,” I could be reading forever - for example, I might get embroiled in the debates and nuances around each different claim made about life getting better over the last few hundred years. \nBy writing early, I get a chance to make sure I’ve written down the version of the claim I care most about, and make sure that any further investigation is focused on the things that matter most for changing my mind on this claim. \nOnce I wrote down “There are a huge number of charts showing that life has gotten better over the last few hundred years,” I could see that deep-diving any particular one of those charts wouldn’t be the best use of time - compared to addressing the very weakest points in the claim I had written, by going back further in time to hunter-gatherer periods, or looking for entirely different collections of charts.\nStep 5: pick a subquestion and do more reading and/or discussing. One of the most important factors that determines whether these investigations go well (in the sense of teaching me a lot relatively quickly) is deciding which subquestions to “dig into” and which not to. As just noted, writing the hypothesis down early is key. \nI try to stay very focused on doing the reading (and/or low-stakes discussion) most likely to change the big-picture claim I’m making. I rarely read a book or paper “once from start to finish”; instead I energetically skip around trying to find the parts most likely to give me a solid reason to change my mind, read them carefully and often multiple times, try to figure out what else I should be reading (whether this is “other parts of the same document” or “academic papers on topic X”) to contextualize them, etc.\nStep 6: Revise my claim / switch sides. This is one of the trickiest parts - pausing Step 5 as soon as I have a modified (often still simplified, under-researched and wrong) hypothesis. It’s hard to notice when my hypothesis changes, and hard to stay open to radical changes of direction (and I make no claim that I’m as good at it as I could be).\nI often try radically flipping around my hypothesis, even if I haven’t actually been convinced that it’s wrong - sometimes when I’m feeling iffy about arguing for one side, it’s productive to just go ahead and try arguing for the other side. I tend to get further by noticing how I feel about the \"best arguments for both sides\" than by trying from the start to be even-handed. \nIn the example, I pretty quickly decided to try flipping my view around completely, and noted something like: “A lot of people assume life has gotten better over time, but that’s just the last few hundred years. In fact, our best guess is that hunter-gatherers were getting some really important things right, such as gender relations and mental health, that we still haven’t caught up to after centuries of progress. Agriculture killed that, and we’ve been slowly climbing out of a hole ever since. There should be tons more research on what hunter-gatherer societies are/were like, and whether we can replicate their key properties at scale today - this is a lot more promising than just continuing to push forward science and technology and modernity.”\nThis completely contradicted my initial hypothesis. (I now think both are wrong.) \nThis sent me down a new line of research: constructing the best argument I could that life was better in hunter-gatherer times.\nStep 7: repeat steps 3-6 a bunch. I tried to gather the best evidence for hunter-gatherer life being good, and for it being bad, and zeroed in on gender relations and violence as particularly interesting, confusing debates; on both of these, I changed my hypothesis/headline several times. \nMy hypotheses became increasingly complex and detailed, as you can see from the final products: Pre-agriculture gender relations seem bad (which argues that gender relations for hunter-gatherers were/are far from Wikipedia’s rosy picture, according to the best available evidence, though the evidence is far from conclusive, and it’s especially unclear how pre-agriculture gender relations compare to today’s) and Unraveling the evidence about violence among very early humans (which argues that hunter-gatherer violence was indeed high, but that - contra Better Angels - it probably got even worse after the development of agriculture, before declining at some pretty unknown point before today).\nI went through several cycles of “I think I know what I really think and I’m ready to write,” followed by “No, having started writing, I’m unsatisfied with my answer on this point and think a bit more investigation could change it.” So I kept alternating between writing and reading, but was always reading with the aim of getting back to writing.\nI finally produced some full, opinionated drafts that seemed to me to be about the best I could do without a ton more work.\nAfter I had satisfied myself on these points, I popped back up from the “hunter-gatherer” question to the original question of whether life has gotten better over time. I followed a similar process for investigating other subquestions, like “Is the set of charts I’ve found representative for the last few hundred years?” and “What about the period in between hunter-gatherer times and the last few hundred years?”\nStep 8: add feedback from others into the loop. It takes me a long time to get to the point where I can no longer easily tear apart my own hypothesis. Once I do, I start seeking feedback from others - first just people I know who are likely to be helpful and interested in the topic, then experts and the public. This works the same basic way as Steps 4-7, but with others doing a lot of the “noticing weaknesses” part (Step 4).\nWhen I publish, I am thinking of it more like “I can’t easily find more problems with this, so it’s time to see whether others can” than like “This is great and definitely right.”\nI hope I haven’t made this sound fun or easy\nSome things about this process that are hard, taxing, exhausting and a bit of a mental health gauntlet:\nI constantly have a feeling (after reading) like I know what I think and how to say it, then I start writing and immediately notice that I don’t at all. I need to take a lot of breaks and try a lot of times to even “write what I currently think,” even when it’s pretty simple and early.\nEvery subquestion is something I could spend a lifetime learning about, if I chose to. I need to constantly interrupt myself and ask, “Is this a key point? Is this worth learning more about?” or else I’ll never finish.\nThere are infinite tough judgment calls about things like “whether to look into some important-seeming point, or just reframe my hypothesis such that I don’t need to.” Sometimes the latter is the answer (it feels like some debate is important, but if I really think about it, I realize the thing I most care about can be argued for without getting to the bottom of it); sometimes the former is (it feels like I can try to get around some debate, but actually, I can’t really come to a reasonable conclusion without an exhausting deep dive). \nAt any given point, I know that if I were just better at things like “noticing which points are really crucial” and “reformulating my hypothesis so that it’s easier to defend while still important,” I could probably do something twice as good in half the time … and I often realize after a massive deep dive that most of the time I spent wasn’t necessary.\nBecause of these points, I have very little ability to predict when a project will be done; I am never confident that I’m doing it as well as I could; and I’m constantly interrupting myself to reflect on these things rather than getting into a flow.\nHalf the time, all of this work just ends up with me agreeing with conventional wisdom or “the experts” anyway … so I’ve just poured in work and gone through a million iterations of changing my mind, and any random person I talk to about it will just be like “So you decided X? Yeah X is just what I had already assumed.”\nThe whole experience is a mix of writing, Googling, reading, skimming, and pressuring myself to be more efficient, which is very different and much more unpleasant compared to the experience of just reading. (Among other things, I can read in a nice location and be looking at a book or e-ink instead of a screen. Most of the work of an “investigation” is in front of a glowing screen and requires an Internet connection.)\nI’ll write more about these challenges in a future post. I definitely recommend reading as a superior leisure activity, but for me at least, writing-centric work seems better for learning.\nI’m really interested in comments from anyone who tries this sort of thing out and has things to share about how it goes!\nNext in series: The Wicked Problem ExperienceFootnotes\n I never ended up using this argument about apes. I think it’s probably mostly right, but there’s a whole can of worms with claims about loving, peaceful bonobos that I never quite got motivated to get to the bottom of.  ↩\n Such as which metrics are most important. ↩\n", "url": "https://www.cold-takes.com/learning-by-writing/", "title": "Learning By Writing", "source": "cold.takes", "source_type": "blog", "date_published": "2022-02-22", "id": "e93105a4642d2884bd08a528f8a7408a"} -{"text": "Ethics based on \"common sense\" seems to have a horrible track record.\nThat is: simply going with our intuitions and societal norms has, in the past, meant endorsing all kinds of insanity. To quote an article by Kwame Anthony Appiah:\nOnce, pretty much everywhere, beating your wife and children was regarded as a father's duty, homosexuality was a hanging offense, and waterboarding was approved -- in fact, invented -- by the Catholic Church. Through the middle of the 19th century, the United States and other nations in the Americas condoned plantation slavery. Many of our grandparents were born in states where women were forbidden to vote. And well into the 20th century, lynch mobs in this country stripped, tortured, hanged and burned human beings at picnics.\n Looking back at such horrors, it is easy to ask: What were people thinking?\n Yet, the chances are that our own descendants will ask the same question, with the same incomprehension, about some of our practices today.\n Is there a way to guess which ones?\nThis post kicks off a series on the approach to ethics that I think gives us our best chance to be \"ahead of the curve:\" consistently making ethical decisions that look better, with hindsight after a great deal of moral progress, than what our peer-trained intuitions tell us to do.\n\"Moral progress\" here refers to both societal progress and personal progress. I expect some readers will be very motivated by something like \"Making ethical decisions that I will later approve of, after I've done more thinking and learning,\" while others will be more motivated by something like \"Making ethical decisions that future generations won't find abhorrent.\" (More on \"moral progress\" in this follow-up piece.)\nBeing \"future-proof\" isn't necessarily the end-all be-all of an ethical system. I tend to \"compromise\" between the ethics I'll be describing here - which is ambitious, theoretical, and radical - and more \"common-sense\"/intuitive approaches to ethics that are more anchored to conventional wisdom and the \"water I swim in.\"\nBut if I simply didn't engage in philosophy at all, and didn't try to understand and incorporate \"future-proof ethics\" into my thinking, I think that would be a big mistake - one that would lead to a lot of other moral mistakes, at least from the perspective of a possible future world (or a possible Holden) that has seen a lot of moral progress. \nIndeed, I think some of the best opportunities to do good in the world come from working on issues that aren't yet widely recognized as huge moral issues of our time.\nFor this reason, I think the state of \"future-proof ethics\" is among the most important topics out there, especially for people interested in making a positive difference to the world on very long timescales. Understanding this topic can also make it easier to see where some of the unusual views about ethics in the effective altruism community come from: that we should more highly prioritize the welfare of animals, potentially even insects, and most of all, future generations.\nWith that said, some of my thinking on this topic can get somewhat deep into the weeds of philosophy. So I am putting up a lot of the underlying content for this series on the EA Forum alone, and the pieces that appear on Cold Takes will try to stick to the high-level points and big picture.\nOutline of the rest of this piece:\nMost people's default approach to ethics seems to rely on \"common sense\"/intuitions influenced by peers. If we want to be \"ahead of the curve,\" we probably need a different approach. More\nThe most credible candidate for a future-proof ethical system, to my knowledge, rests on three basic pillars: \nSystemization: seeking an ethical system based on consistently applying fundamental principles, rather than handling each decision with case-specific intuitions. More\nThin utilitarianism: prioritizing the \"greatest good for the greatest number,\" while not necessarily buying into all the views traditionally associated with utilitarianism. More\nSentientism: counting anyone or anything with the capacity for pleasure and suffering - whether an animal, a reinforcement learner (a type of AI), etc. - as a \"person\" for ethical purposes. More\nCombining these three pillars yields a number of unusual, even uncomfortable views about ethics. I feel this discomfort and don't unreservedly endorse this approach to ethics. But I do find it powerful and intriguing. More\nAn appendix explains why I think other well-known ethical theories don't provide the same \"future-proof\" hopes; another appendix notes some debates about utilitarianism that I am not engaging in here.\nLater in this series, I will:\nUse a series of dialogues to illustrate how specific, unusual ethical views fit into the \"future-proof\" aspiration.\nSummarize what I see as the biggest weaknesses of \"future-proof ethics.\"\nDiscuss how to compromise between \"future-proof ethics\" and \"common-sense\" ethics, drawing on the nascent literature about \"moral uncertainty.\"\n\"Common-sense\" ethics\nFor a sense of what I mean by a \"common-sense\" or \"intuitive\" approach to ethics, see this passage from a recent article on conservatism:\nRationalists put a lot of faith in “I think therefore I am”—the autonomous individual deconstructing problems step by logical step. Conservatives put a lot of faith in the latent wisdom that is passed down by generations, cultures, families, and institutions, and that shows up as a set of quick and ready intuitions about what to do in any situation. Brits don’t have to think about what to do at a crowded bus stop. They form a queue, guided by the cultural practices they have inherited ...\n In the right circumstances, people are motivated by the positive moral emotions—especially sympathy and benevolence, but also admiration, patriotism, charity, and loyalty. These moral sentiments move you to be outraged by cruelty, to care for your neighbor, to feel proper affection for your imperfect country. They motivate you to do the right thing.\n Your emotions can be trusted, the conservative believes, when they are cultivated rightly. “Reason is, and ought only to be the slave of the passions,” David Hume wrote in his Treatise of Human Nature. “The feelings on which people act are often superior to the arguments they employ,” the late neoconservative scholar James Q. Wilson wrote in The Moral Sense.\n The key phrase, of course, is cultivated rightly. A person who lived in a state of nature would be an unrecognizable creature ... If a person has not been trained by a community to tame [their] passions from within, then the state would have to continuously control [them] from without.\nI'm not sure \"conservative\" is the best descriptor for this general attitude toward ethics. My sense is that most people's default approach to ethics - including many people for whom \"conservative\" is the last label they'd want - has a lot in common with the above vision. Specifically: rather than picking some particular framework from academic philosophy such as \"consequentialism,\" \"deontology\" or \"virtue ethics,\" most people have an instinctive sense of right and wrong, which is \"cultivated\" by those around them. Their ethical intuitions can be swayed by specific arguments, but they're usually not aiming to have a complete or consistent ethical system.\nAs remarked above, this \"common sense\" (or perhaps more precisely, \"peer-cultivated intuitions\") approach has gone badly wrong many times in the past. Today's peer-cultivated intuitions are different from the past's, but as long as that's the basic method for deciding what's right, it seems one has the same basic risk of over-anchoring to \"what's normal and broadly accepted now,\" and not much hope of being \"ahead of the curve\" relative to one's peers.\nMost writings on philosophy are about comparing different \"systems\" or \"frameworks\" for ethics (e.g., consequentialism vs. deontology vs. virtue ethics). By contrast, this series focuses on the comparison between non-systematic, \"common-sense\" ethics and an alternative approach that aims to be more \"future-proof,\" at the cost of departing more from common sense.\nThree pillars of future-proof ethics\nSystemization\nWe're looking for a way of deciding what's right and wrong that doesn't just come down to \"X feels intuitively right\" and \"Y feels intuitively wrong.\" Systemization means: instead of judging each case individually, look for a small set of principles that we deeply believe in, and derive everything else from those.\nWhy would this help with \"future-proofing\"?\nOne way of putting it might be that:\n(A) Our ethical intuitions are sometimes \"good\" but sometimes \"distorted\" by e.g. biases toward helping people like us, or inability to process everything going on in a complex situation. \n(B) If we derive our views from a small number of intuitions, we can give these intuitions a lot of serious examination, and pick ones that seem unusually unlikely to be \"distorted.\" \n(C) Analogies to science and law also provide some case for systemization. Science seeks \"truth\" via systemization and law seeks \"fairness\" via systemization; these are both arguably analogous to what we are trying to do with future-proof ethics.\nA bit more detail on (A)-(C) follows.\n(A) Our ethical intuitions are sometimes \"good\" but sometimes \"distorted.\" Distortions might include:\nWhen our ethics are pulled toward what’s convenient for us to believe. For example, that one’s own nation/race/sex is superior to others, and that others’ interests can therefore be ignored or dismissed.\nWhen our ethics are pulled toward what’s fashionable and conventional in our community (which could be driven by others’ self-serving thinking). \nWhen we're instinctively repulsed by someone for any number of reasons, including that they’re just different from us, and we confuse this for intuitions that what they’re doing is wrong. For example, consider the large amount of historical and present intolerance for unconventional sexuality, gender identity, etc.\nWhen our intuitions become \"confused\" because they're fundamentally not good at dealing with complex situations. For example, we might have very poor intuitions about the impact of some policy change on the economy, and end up making judgments about such a policy in pretty random ways - like imagining a single person who would be harmed or helped by a policy.\nIt's very debatable what it means for an ethical view to be \"not distorted.\" Some people (“moral realists”) believe that there are literal ethical “truths,” while others (what I might call “moral quasi-realists,” including myself) believe that we are simply trying to find patterns in what ethical principles we would embrace if we were more thoughtful, informed, etc. But either way, the basic thinking is that some of our ethical intuitions are more reliable than others - more \"really about what is right\" and less tied to the prejudices of our time.\n(B) If we derive our views from a small number of intuitions, we can give these intuitions a lot of serious examination, and pick ones that seem unusually unlikely to be \"distorted.\" \nThe below sections will present two ideas - thin utilitarianism and sentientism - that:\nHave been subject to a lot of reflection and debate.\nCan be argued for based on very general principles about what it means for an action to be ethical. Different people will see different levels of appeal in these principles, but they do seem unusually unlikely to be contingent on conventions of our time.\nCan be used (together) to derive a large number of views about specific ethical decisions.\n(C) Analogies to science and law also provide some case for systemization.\nAnalogy to science. In science, it seems to be historically the case that aiming for a small, simple set of principles that generates lots of specific predictions has been a good rule,1 and an especially good way to be \"ahead of the curve\" in being able to understand things about the world.\nFor example, if you’re trying to predict when and how fast objects will fall, you can probably make pretty good gut-based guesses about relatively familiar situations (a rock thrown in the water, a vase knocked off a desk). But knowing the law of gravitation - a relatively simple equation that explains a lot of different phenomena - allows much more reliable predictions, especially about unfamiliar situations. \nAnalogy to law. Legal systems tend to aim for explicitness and consistency. Rather than asking judges to simply listen to both sides and \"do what feels right,\" legal systems tend to encourage being guided by a single set of rules, written down such that anyone can read it, applied as consistently as possible. This practice may increase the role of principles that have gotten lots of attention and debate, and decrease the role of judges' biases, moods, personal interests, etc.\nSystemization can be weird. It’s important to understand from the get-go that seeking an ethics based on “deep truth” rather than conventions of the time means we might end up with some very strange, initially uncomfortable-feeling ethical views. The rest of this series will present such uncomfortable-feeling views, and I think it’s important to process them with a spirit of “This sounds wild, but if I don’t want to be stuck with my raw intuitions and the standards of my time, I should seriously consider that this is where a more deeply true ethical system will end up taking me.”\nNext I'll go through two principles that, together, can be the basis of a lot of systemization: thin utilitarianism and sentientism.\nThin Utilitarianism\nI think one of the more remarkable, and unintuitive, findings in philosophy of ethics comes not from any philosopher but from the economist John Harsanyi. In a nutshell:\nLet’s start with a basic, appealing-seeming principle for ethics: that it should be other-centered. That is, my ethical system should be based as much as possible on the needs and wants of others, rather than on my personal preferences and personal goals.\nWhat I think Harsanyi’s work essentially shows is that if you’re determined to have an other-centered ethics, it pretty strongly looks like you should follow some form of utilitarianism, an ethical system based on the idea that we should (roughly speaking) always prioritize the greatest good for the greatest number of (ethically relevant) beings.\nThere are many forms of utilitarianism, which can lead to a variety of different approaches to ethics in practice. However, an inescapable property of all of them (by Harsanyi’s logic) is the need for consistent “ethical weights” by which any two benefits or harms can be compared. \nFor example, let’s say we are comparing two possible ways in which one might do good: (a) saving a child from drowning in a pond, or (b) helping a different child to get an education. \n \nMany people would be tempted to say you “can’t compare” these, or can’t choose between them. But according to utilitarianism, either (a) is exactly as valuable as (b), or it’s half as valuable (meaning that saving two children from drowning is as good as helping one child get an education), or it’s twice as valuable … or 100x as valuable, or 1/100 as valuable, but there has to be some consistent multiplier.\n \nAnd that, in turn, implies that for any two ways you can do good - even if one is very large (e.g. saving a life) and one very small (e.g. helping someone avoid a dust speck in their eye) - there is some number N such that N of the smaller benefit is more valuable than the larger benefit. In theory, any harm can be outweighed by something that benefits a large enough number of persons, even if it benefits them in a minor way.\nThe connections between these points - the steps by which one moves from “I want my ethics to focus on the needs and wants of others” to “I must use consistent moral weights, with all of the strange implications that involves” - is fairly complex, and I haven’t found a compact way of laying it out. I discuss it in detail in an Effective Altruism Forum post: Other-centered ethics and Harsanyi's Aggregation Theorem. I will also try to give a bit more of an intuition for it in the next piece.\nI'm using the term thin utilitarianism to point at a minimal version of utilitarianism that only accepts what I've outlined above: a commitment to consistent ethical weights, and a belief that any harm can be outweighed by a large enough number of minor benefits. There are a lot of other ideas commonly associated with utilitarianism that I don't mean to take on board here, particularly:\nThe \"hedonist\" theory of well-being: that \"helping someone\" is reducible to \"increasing someone's positive conscious experiences relative to negative conscious experiences.\" (Sentientism, discussed below, is a related but not identical idea.2)\nAn \"ends justify the means\" attitude. \nThere are a variety of ways one can argue against \"ends-justify-the-means\" style reasoning, even while committing to utilitarianism (here's one). \n \nIn general, I'm committed to some non-utilitarian personal codes of ethics, such as (to simplify) \"deceiving people is bad\" and \"keeping my word is good.\" I'm only interested in applying utilitarianism within particular domains (such as \"where should I donate?\") where it doesn't challenge these codes.\n \n(This applies to \"future-proof ethics\" generally, but I am noting it here in particular because I want to flag that my arguments for \"utilitarianism\" are not arguments for \"the ends justify the means.\")\nMore on \"thin utilitarianism\" at my EA Forum piece.\nSentientism\nTo the extent moral progress has occurred, a lot of it seems to have been about “expanding the moral circle”: coming to recognize the rights of people who had previously been treated as though their interests didn’t matter.\nIn The Expanding Circle, Peter Singer gives a striking discussion (see footnote)3 of how ancient Greeks seemed to dismiss/ignore the rights of people from neighboring city-states. More recently, people in power have often seemed to dismiss/ignore the rights of people from other nations, people with other ethnicities, and women and children (see quote above). These now look like among the biggest moral mistakes in history.\nIs there a way, today, to expand the circle all the way out as far as it should go? To articulate simple, fundamental principles that give us a complete guide to “who counts” as a person, such that we need to weigh their interests appropriately?\nSentientism is the main candidate I’m aware of for this goal. The idea is to focus on the capacity for pleasure and suffering (“sentience”): if you can experience pleasure and suffering, you count as a “person” for ethical purposes, even if you’re a farm animal or a digital person or a reinforcement learner. \nKey quote from 18th-century philosopher Jeremy Bentham: \"The question is not, Can they reason?, nor Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?\"\nA variation on sentientism would be to say that you count as a \"person\" if you experience \"conscious\" mental states at all.4 I don't know of a simple name for this idea, and for now I'm lumping it in with sentientism, as it is pretty similar for my purposes throughout this series.\nSentientism potentially represents a simple, fundamental principle (“the capacity for pleasure and suffering is what matters”) that can be used to generate a detailed guide to who counts ethically, and how much (in other words, what ethical weight should be given to their interests). Sentientism implies caring about all humans, regardless of sex, gender, ethnicity, nationality, etc., as well as potentially about animals, extraterrestrials, and others.\nPutting the pieces together\nCombining systemization, thin utilitarianism and sentientism results in an ethical attitude something like this:\nI want my ethics to be a consistent system derived from robust principles. When I notice a seeming contradiction between different ethical views of mine, this is a major problem.\nA good principle is that ethics should be about the needs and wants of others, rather than my personal preferences and personal goals. This ends up meaning that I need to judge every action by who benefits and who is harmed, and I need consistent “ethical weights” for weighing different benefits/harms against each other.\nWhen deciding how to weigh someone’s interests, the key question is the extent to which they’re sentient: capable of experiencing pleasure and suffering. \nCombining these principles can generate a lot of familiar ethical conclusions, such as “Don’t accept a major harm to someone for a minor benefit to someone else,” “Seek to redistribute wealth from people with more to people with less, since the latter benefit more,” and “Work toward a world with less suffering in it.” \nIt also generates some stranger-seeming conclusions, such as: “Animals may have significant capacity for pleasure and suffering, so I should assign a reasonably high ‘ethical weight’ to them. And since billions of animals are being horribly treated on factory farms, the value of reducing harm from factory farming could be enormous - to the point where it could be more important than many other issues that feel intuitively more compelling.”\nThe strange conclusions feel uncomfortable, but when I try to examine why they feel uncomfortable, I worry that a lot of my reasons just come down to “avoiding weirdness” or “hesitating to care a great deal about creatures very different from me and my social peers.” These are exactly the sorts of thoughts I’m trying to get away from, if I want to be ahead of the curve on ethics.\nAn interesting additional point is that this sort of ethics arguably has a track record of being \"ahead of the curve.\" For example, here's Wikipedia on Jeremy Bentham, the “father of utilitarianism” (and a major sentientism proponent as well):\nHe advocated individual and economic freedom, the separation of church and state, freedom of expression, equal rights for women, the right to divorce, and the decriminalizing of homosexual acts. [My note: he lived from 1747-1832, well before most of these views were common.] He called for the abolition of slavery, the abolition of the death penalty, and the abolition of physical punishment, including that of children. He has also become known in recent years as an early advocate of animal rights.5\nMore on this at utilitarianism.net, and some criticism (which I don't find very compelling,6 though I have my own reservations about the \"track record\" point that I'll share in future pieces) here.\nTo reiterate, I don’t unreservedly endorse the ethical system discussed in this piece. Future pieces will discuss weaknesses in the case, and how I handle uncertainty and reservations about ethical systems.\nBut it’s a way of thinking that I find powerful and intriguing. When I act dramatically out of line with what the ethical system I've outlined suggests, I do worry that I’m falling prey to acting by the ethics of my time, rather than doing the right thing in a deeper sense.\nAppendix: other candidates for future-proof ethics?\nIn this piece, I’ve mostly contrasted two approaches to ethics:\n\"Common sense\" or intuition-based ethics.\nThe specific ethical framework that combines systemization, thin utilitarianism and sentientism.\nOf course, these aren't the only two options. There are a number of other approaches to ethics that have been extensively explored and discussed within academic philosophy. These include deontology, virtue ethics and contractualism.\nThese approaches and others have significant merits and uses. They can help one see ethical dilemmas in a new light, they can help illustrate some of the unappealing aspects of utilitarianism, they can be combined with utilitarianism so that one avoids particular bad behaviors, and they can provide potential explanations for some particular ethical intuitions. \nBut I don’t think any of them are as close to being comprehensive systems - able to give guidance on practically any ethics-related decision - as the approach I've outlined above. As such, I think they don’t offer the same hopes as the approach I've laid out in this post.\nOne key point is that other ethical frameworks are often concerned with duties, obligations and/or “rules,” and they have little to say about questions such as “If I’m choosing between a huge number of different worthy places to donate, or a huge number of different ways to spend my time to help others, how do I determine which option will do as much good as possible?” \nThe approach I've outlined above seems like the main reasonably-well-developed candidate system for answering questions like the latter, which I think helps explain why it seems to be the most-attended-to ethical framework in the effective altruism community.\nAppendix: aspects of the utilitarianism debate I'm skipping\nMost existing writing on utilitarianism and/or sentientism is academic philosophy work. In academic philosophy, it's generally taken as a default that people are searching for some coherent ethical system; the \"common-sense or non-principle-derived approach\" generally doesn't take center stage (though there is some discussion of it under the heading of moral particularism).\nWith this in mind, a number of common arguments for utilitarianism don't seem germane for my purposes, in particular:\nA broad suite of arguments of the form, \"Utilitarianism seems superior to particular alternatives such as deontology or virtue ethics.\" In academic philosophy, people often seem to assume that a conclusion like \"Utilitarianism isn't perfect, but it's the best candidate for a consistent, principled system we have\" is a strong argument for utilitarianism; here, I am partly examining what we gain (and lose) by aiming for a consistent, principled system at all.\nArguments of the form, \"Utilitarianism is intuitively and/or obviously correct; it seems clear that pleasure is good and pain is bad, and much follows from this.\" While these arguments might be compelling to some, it seems clear that many people don't share the implied view of what's \"intuitive/obvious.\" Personally, I would feel quite uncomfortable making big decisions based on an ethical system whose greatest strength is something like \"It just seems right to me [and not to many others],\" and I'm more interested in arguments that utilitarianism (and sentientism) should be followed even where they are causing significant conflict with one's intuitions.\nIn examining the case for utilitarianism and sentientism, I've left arguments in the above categories to the side. But if there are arguments I've neglected in favor of utilitarianism and sentientism that fit the frame of this series, please share them in the comments!\nNext in series: Defending One-Dimensional EthicsFootnotes\n I don't have a cite for these being the key properties of a good scientific theory, but I think these properties tend to be consistently sought out across a wide variety of scientific domains. The simplicity criterion is often called \"Occam's razor,\" and the other criterion is hopefully somewhat self-explanatory. You could also see these properties as essentially a plain-language description of Solomonoff induction. ↩\n It's possible to combine sentientism with a non-hedonist theory of well-being. For example, one might believe that only beings with the capacity for pleasure and suffering matter, but also that once we've determined that someone matters, we should care about what they want, not just about their pleasure and suffering. ↩\nAt first [the] insider/ outsider distinction applied even between the citizens of neighboring Greek city-states; thus there is a tombstone of the mid-fifth century B.C. which reads:\nThis memorial is set over the body of a very good man. Pythion, from Megara, slew seven men and broke off seven spear points in their bodies … This man, who saved three Athenian regiments … having brought sorrow to no one among all men who dwell on earth, went down to the underworld felicitated in the eyes of all.\nThis is quite consistent with the comic way in which Aristophanes treats the starvation of the Greek enemies of the Athenians, starvation which resulted from the devastation the Athenians had themselves inflicted. Plato, however, suggested an advance on this morality: he argued that Greeks should not, in war, enslave other Greeks, lay waste their lands or raze their houses; they should do these things only to non-Greeks. These examples could be multiplied almost indefinitely. The ancient Assyrian kings boastfully recorded in stone how they had tortured their non-Assyrian enemies and covered the valleys and mountains with their corpses. Romans looked on barbarians as beings who could be captured like animals for use as slaves or made to entertain the crowds by killing each other in the Colosseum. In modern times Europeans have stopped treating each other in this way, but less than two hundred years ago some still regarded Africans as outside the bounds of ethics, and therefore a resource which should be harvested and put to useful work. Similarly Australian aborigines were, to many early settlers from England, a kind of pest, to be hunted and killed whenever they proved troublesome. ↩\n E.g., https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood#ProposedCriteria  ↩\nWikipedia ↩\n I mean, I agree with the critic that the \"track record\" point is far from a slam dunk, and that \"utilitarians were ahead of the curve\" doesn't necessarily mean \"utilitarianism was ahead of the curve.\" But I don't think the \"track record\" argument is intended to be a philosophically tight point; I think it's intended to be interesting and suggestive, and I think it succeeds at that. At a minimum, it may imply something like \"The kind of person who is drawn to utilitarianism+sentientism is also the kind of person who makes ahead-of-the-curve moral judgments,\" and I'd consider that an argument for putting serious weight on the moral judgments of people who drawn to utilitarianism+sentientism today. ↩\n", "url": "https://www.cold-takes.com/future-proof-ethics/", "title": "Future-proof ethics", "source": "cold.takes", "source_type": "blog", "date_published": "2022-02-02", "id": "bde4bb4c024fb1605a004fabfecb5297"} -{"text": "This piece kicks off a short series inspired by this question:\nSay that Beethoven was the greatest musician of all time (at least in some particular significant sense - see below for some caveats). Why has there been no one better in the last ~200 years - despite a vastly larger world population, highly democratized technology for writing and producing music, and a higher share of the population with education, basic nutrition, and other preconditions for becoming a great musician? In brief, where's today's Beethoven?\nA number of answers might spring to mind. For example, perhaps Beethoven's music isn't greater than Beyonce's is, and it just has an unearned reputation for greatness among critics with various biases and eccentricities. (I personally lean toward thinking this is part of the picture, though I think it's complicated and depends on what \"great\" means.1)\nBut I think the puzzle gets more puzzling when one asks a number of related questions:\nWhere's today's Darwin (for life sciences), Ramanujan (for mathematics), Shakespeare (for literature), etc.?\nFifth-century Athens included three of the most renowned playwrights of all time (Aeschylus, Sophocles and Euripides); two of the most renowned philosophers (Socrates and Plato); and a number of other historically important figures, despite having a population of a few hundred thousand people and an even smaller population of people who could read and write. What would the world look like if we could figure out what happened there, and replicate it across the many cities today with equal or larger populations?\n\"Over the past century, we’ve vastly increased the time and money invested in science, but in scientists’ own judgment, we’re producing the most important breakthroughs at a near-constant rate. On a per-dollar or per-person basis, this suggests that science is becoming far less efficient.\" (Source) Can we get that efficiency back? \nI'll be giving more systematic, data-based versions of these sorts of points below. The broad theme is that across a variety of areas in both art and science, we see a form of \"innovation stagnation\": the best-regarded figures are disproportionately from long ago, and our era seems to \"punch below its weight\" when considering the rise in population, education, etc. Since the patterns look fairly similar for art and science, and both are forms of innovation, I think it's worth thinking about potential common factors.\nBelow, I will:\nList the three main hypotheses people offer to answer \"Where's Today's Beethoven?\": the \"golden age\" hypothesis (people in the past were better at innovation), the \"bad taste\" hypothesis (Beethoven and others don't deserve their reputations), and the \"innovation as mining\" hypothesis (ideas naturally get harder to find over time, and we should expect art and science to keep slowing down by default). Importantly, I think each of these has interesting and not-widely-accepted implications of its own.\nExamine systematic data on trends in innovation in a number of domains, bringing together (a) long-run data on both art and science over hundreds of years and more; (b) recent data on technology and more modern art/entertainment genres (film, rock music, TV shows, video games). I think this is the first piece to look at this broad a set of trends of this form.\nBriefly explain why I favor the \"innovation as mining\" hypothesis as the main explanation for what we're seeing across the board.\nDo some typical \"more research needed\" whining. Since any of the three hypotheses has important implications, I think \"Where's today's Beethoven?\" should be a topic of serious discussion and analysis, but I don't think there is a field consistently dedicated to analyzing it (although there are some excellent one-off analyses out there).\nFuture pieces will elaborate on the plausibility of the \"innovation as mining\" hypothesis - and its implications. Those pieces are: How artistic ideas could get harder to find, Why it mattrs if ideas get harder to find, \"Civilizational decline\" stories I buy, Cost disease and civilizational decline (the latter two are not yet published).\nThree hypotheses to answer \"Where's Today's Beethoven?\"\nSay we accept - per the data I'll present below - that we are seeing \"innovation stagnation\": the best-regarded figures are disproportionately from long ago, and our era seems to \"punch below its weight\" when considering the rise in population, education, etc. What are the possible explanations?\nThe \"golden age\" hypothesis\nThe \"golden age\" hypothesis says there are one or more \"golden ages\" from the past that were superior at producing innovation compared to today. Perhaps understanding and restoring what worked about those \"golden ages\" would lead to an explosion in creativity today. \nIf true, this would imply that there should be a lot more effort to study past \"golden ages\" and how they worked, and how we could restore what they did well (without restoring other things about them, such as overall quality of life). \nI generally encounter this hypothesis in informal contexts, with a nostalgic vibe - a sort of pining for the boldness and creativity of the past.2\nInterestingly, I've never seen a detailed defense of this hypothesis against the two main alternatives (\"bad taste\" and \"innovation as mining,\" below). Some of the people who have written the most detailed pieces about \"innovation stagnation\" seem to believe something like the \"golden age\" hypothesis - but they seem to say so only in interviews and casual discussions, not their main works.3\nAs I'll discuss below, I don't think the \"golden age\" hypothesis fits the evidence we have as well as \"innovation as mining.\" But I don't think that's a slam dunk, and the \"golden age\" hypothesis seems very important if true. \nThe \"bad taste\" hypothesis\nThe \"bad taste\" hypothesis says that conventional wisdom on what art and science were \"great\" is consistently screwed up and biased toward the past. \nIf true, this means that we're collectively deluded about what scientific breakthroughs were most significant, what art deserves its place in our culture, etc. \nThis hypothesis is often invoked to explain the \"art\" side of innovation stagnation, but it's a more awkward fit with the \"science\" side, and I think a lot of people just have trouble swallowing it when considering music like Beethoven's. I do think it's an important part of the picture, but not the whole story.\nThe \"innovation as mining\" hypothesis\nThe \"innovation as mining\" hypothesis says that ideas naturally get harder to find over time, in both science and art. So we should expect that it takes more and more effort over time to maintain the same rate of innovation.\nThis hypothesis is commonly advanced to explain the \"science and technology\" aspect of innovation stagnation. It's a more awkward fit with the \"art\" side. \nThat said, my view is that it is ultimately most of the story for both (and my next post will discuss just how I think it works for art). And this is important, because I think it has a number of underappreciated implications:\nWe should expect further \"innovation stagnation\" by default, unless we can keep growing the number of innovators. As discussed here, population growth and artificial intelligence seem like the most likely ways to be able to sustain high rates of innovation over the long run (centuries or more), though other things might help in the shorter run.\nHence, our prospects for more innovation in both science and art could depend more on things like population growth, artificial intelligence, and intellectual property law (more on this in a future post) than on creative individuals or even culture. \nFinally, this hypothesis implies that a literal duplicate of Beethoven, transplanted to today's society, would be a lot less impressive. My own best guesses at what Beethoven and Shakespeare duplicates would accomplish today might show up in a future short post that will make lots of people mad.\nData on innovation stagnation\nBelow, I provide a number of charts looking at the \"production of critically acclaimed ideas\" over time. \nI give details of my data sets, and link to my spreadsheets, in this supplement. Key points to make here are:\nIn general, I am using data sets based on aggregating opinions from professional critics. (An exception is the technological innovation data from Are Ideas Getting Harder to Find?) This is because I am trying to answer the \"Where's today's Beethoven?\" question on its own terms: I want data sets that reflect the basic idea of people like Beethoven and Shakespeare being exceptional. This comports with professional critical opinion, but not necessarily with wider popular opinion (or with my opinion!)\nAs such, I think the charts I'm showing should be taken as showing trends in production of critically acclaimed ideas, with all of the biases (including Western bias) that implies, rather than as showing trends in production of \"objectively good\" ideas. In some cases, the creators of the data sets I'm using believe their data shows the latter; but I don't. Even so, I think falling production of critically acclaimed ideas is a type of \"innovation stagnation\" that deserves to be examined and questioned, while being open to the idea that the explanation ends up being bad taste.\nI generally chart something like \"the number of works/discoveries/people that were acclaimed enough to make the list I'm using, smoothed,4 by year.\" As noted below, I've generally found that attempting to weight by just how acclaimed each one is (e.g., counting #1 as much more significant than #100) doesn't change the picture much; to see this, you can check out the spreadsheets linked from my supplement.\nIn this section, I'm keeping my interpretive commentary light. I am mostly just showing charts and explaining what you're looking at, not heavily opining on what it all means. I'll do that in the next section.\nScience and art+literature+music, 1400-1950\nFirst, here are the number of especially critically acclaimed figures in art, literature, music, philosophy, and science from 1400-1950. (This data set actually runs from 800 BCE until 1950; my supplement shows that the \"critical acclaim scores\" over this period are dominated by ancient Greece (which I discuss below) and by the 1400-1950 period in particular countries, and I'm charting the latter here.)\nBlue = science, red = art + literature + music\nAnd here is a similar chart, but weighted by how acclaimed each figure is (so Beethoven counts for more than Prokofiev or whoever, even though they're both acclaimed enough to make the list):\nA couple of initial observations that will be recurring throughout these charts:\nFirst, as mentioned above, it doesn't matter that much whether we weight by level of acclaim (e.g., count Beethoven about 10x as high as Prokofiev, and Prokofiev 10x higher than some others), or just graph the simpler idea of \"How many of the top 100-1000 top people were in this period?\" (which treats Beethoven and Prokofiev as equivalent). In general, I will be sticking to the latter throughout the remaining charts, though I chart both in my full spreadsheet (they tend to look similar).\nSecond: so far, there's no sign of innovation stagnation! Maybe the single greatest musician or artist was a long time ago, but when we are more systematic about it, the total quantity of acclaimed music/art has gone up over time, at least up until 1950. The question is whether it's gone up as much as it should have, given increases in population, education, etc.\nSo next, let's chart critically acclaimed figures per capita, that is, adjusted for the total population in the countries featured:\nThis still doesn't clearly look like innovation stagnation - maybe you could say there was a \"golden age\" in art/lit/music from around 1450-1650, with about 50% greater \"productivity\" than the following centuries, but meh. And science innovation per capita looks to have gone up over time.\nTo see the case for innovation stagnation, we have to go all the way to adjusting for the \"effective population\": the number of people who had the level of education, health, etc. to have a realistic shot at producing top art and/or science. This is a very hard thing to quantify and adjust for!\nI've created two estimates of \"effective population growth,\" based on things like increases in literacy rates, increases in urbanization, decreases in extreme poverty, and increases in the percentage of people with university degrees. My methods are detailed here. (My two estimates mostly lead to similar conclusions, so I'll only be presenting one of them here, though you can see both in my full spreadsheet.)\nSo here are the total number of acclaimed figures in science and art+lit+music, adjusted for my \"effective population\" estimate:\n(Ignore the weird left part of the blue line - in those early days, there was a low effective population and a low number of significant figures, which sometimes hit 0, which looks weird on this kind of chart.)\nAhh. Finally, we've charted the decline of civilization! \n(It doesn't really matter exactly how the effective population estimate is done in this case - as shown above, per-capita \"productivity\" in art and science was pretty constant over this time, so any adjustment for growing health/nutrition/education/urbanization will show a decline.)\nThis dataset ends in 1950. Seeing what happened after that is a bit challenging, but here we go.\nTechnological innovation, 1930-present\nNext are charts on technological innovation since the 1930s or so, from the economics paper Are Ideas Getting Harder to Find? These generally look at some measure of productivity, alongside the estimated \"number of researchers\" (I take this as a similar concept to my \"effective population\").\nFirst, overall aggregate total factor productivity growth in the US: \nIt's not 100% clear how to compare the units in this chart vs. the units in previous charts (\"number of acclaimed scientists per year\" vs. \"growth in total factor productivity each year\"),5 but the basic idea here is that the \"effective population\" (number of researchers) is skyrocketing while the growth in total factor productivity is constant - implying, like the previous section, that we're still getting plenty of new ideas each year, but that there's \"innovation stagnation\" when considering the rising effective population.\nThe paper also looks at progress in a number of specific areas, such as life expectancy and Moore's Law. The trends tend to be similar across the board; here is an example set of charts about agricultural productivity:\nThere's more discussion and some additional data at this post from New Things Under the Sun.\n20th century film, modern (rock) music, TV shows, video games\nWhat about art+literature+music after 1950?\nThis one is tricky because of the way that entertainment has \"changed shape\" throughout the century. \nFor example:\nMy understanding is that visual art (paintings, sculptures, etc.) used to be the obvious thing for a \"visual innovator\" to do, but now it's an increasingly niche kind of work. \nThe closest thing I've found to recent \"data on critically acclaimed visual art\" is this paper on the \"most important works of art of the 20th century,\" which only lists 8 works of art (6 of them between 1907-1919 - see Table 1).\n20th-century visual innovators may instead have worked in film, or perhaps TV, or video games, or something else.\nA lot of the demand that \"literature\" used to meet is also now met by film, TV, and arguably video games.\nMusic is tough to assess for a similar reason. Mainstream music in the 20th century has mostly not been orchestral; instead it's been the sort of music covered on this list. Some people would call this \"rock\" or \"pop,\" but others would insist that many of the albums on that list are neither; in any case, there's no credible ranking I've been able to find that considers both Beethoven and Kanye West.\nSo to take a look at more recent \"art,\" I've created my own data sets based on prominent rankings of top films, music albums, TV shows, and video games (details of my sources are in the supplement).\nFirst let's look at the simple number of top-ranked films, albums, video games and TV shows each year, without any population adjustment:\nFilms (# in top 1000 released per year, smoothed)\nAlbums (# in top 1000 released per year, smoothed)\nTV shows (# in top 100 started6 per year, smoothed)\nVideo games (# in top 100 released per year, smoothed)\nInterestingly, an earlier version of these charts using only the top 100 films and albums had albums picking up just as films were falling off, and then TV and video games picking up just as albums were falling off. That's not quite what we see with my updated charts. But still, here are the four added together:\nFilms, albums, TV shows, video games: summed % of the top 100-1000 that were released each year (smoothed)\nNow for the versions adjusted for \"effective population.\" I think the \"effective population\" estimates are especially suspect here, so I don't particularly stand behind these charts, but they were the best I could do:\nFilms (# in top 1000 released per year, smoothed and divided by an \"effective population\" index)\nAlbums (# in top 1000 released per year, smoothed and divided by an \"effective population\" index)\nTV shows (# in top 100 that started per year, smoothed and divided by an \"effective population\" index)\nVideo games (# in top 100 released per year, smoothed and divided by an \"effective population\" index)\nFilms, albums, TV shows, video games: % of the top 100-1000 that were released each year (smoothed and divided by an \"effective population\" index)\nBooks: the longest series I have\nI wasn't really sure where to put this part, but the only data set I have that is measuring the same thing from 1400 up to the present day is the one I made from Greatest Books dataset:\nI think the drop at the end is probably just because more recent books haven't had time to get onto the lists that website is using.\nHere's the version adjusted for effective population:\nThe big peak for fiction specifically around 1600 is heavily driven by Shakespeare - here's the same data for fiction, but excluding him:\nInterpretation\nThe general pattern I'm seeing above is:\nIn absolute terms, we seem to have generally flat or rising output in both \"critically acclaimed art/entertainment\" and \"science and technology.\" (The exceptions are film and modern music; I think different people will have different interpretations of the fact that these decline just before TV and video games rise.7)\nIn effective-population-adjusted terms, we generally see pretty steady declines after any given area hits its initial peak.\nTo me, the most natural fit here for both art and science is the \"innovation-as-mining\" hypothesis. In that hypothesis:\nThe basic dynamic is that innovation in a given area is mostly a function of how many people are trying to innovate, but \"ideas get harder to find\" over time. \nSo we should often expect to see the following, which seems to fit the above charts: a given area (literature, film, etc.) gains an initial surge in interest (sometimes due to new things being technologically feasible, sometimes for cultural reasons or because someone demonstrates the ability to accomplish exciting things); this leads to a surge in effort; there's lots of low-hanging fruit due to low amounts of previous effort; so output is very high at first, and output-per-person declines over time.\nI think \"bad taste\" is part of the story too, but I don't think it can explain the patterns in science and technology (or why they are so similar to the patterns in art and entertainment). A separate post goes into more detail on how I see \"bad taste\" interacting with \"innovation as mining.\"\n\"Golden age\" skepticism\nI'm quite skeptical that a \"golden age\" hypothesis - in the sense that some earlier culture was doing a remarkably good job supporting innovators, and in the sense that copying that culture would lead to more output today - has anything to add here. Some reasons for this:\nNo special evidential fit. I think the \"innovation as mining\" hypothesis is a good simple, starting-point guess for what might be going on. Most people find it intuitive that \"ideas get harder to find\" in science and technology; I think intuitions vary more on art, but I think the same idea basically applies there too, as I argue here.\nAnd I don't see anything in the data above that is way out of line with this hypothesis. \nFor example, in most charts, the only \"golden age\" candidate comes with the first spike in output, with declining \"productivity\" from there - consistent with the idea that earlier periods generally have an advantage. We see few cases of a late spike that surpasses early spikes, which would suggest a \"golden age\" whose achievement can't simply be explained by being early.8 (Generally, I'd just expect more choppiness if most of the story were \"variations in culture\" as opposed to \"ideas getting harder to find.\")\nAs discussed in the supplement, I also looked for signs of a \"golden place\" - some particular country that dramatically outperformed others - and didn't find anything too compelling.\nFor the most part, the decline in \"productivity\" for both art and science looks pretty steady (with exceptions for modern art forms whose invention is recent). You could try to tell a story like \"The real golden age was 1400-1500, and it's all been steadily, smoothly downhill since then,\" but this just doesn't seem intuitively very likely.\nNo clear mechanism. I hear a lot of informal pining for a \"golden age of innovation,\" but I've heard little in the way of plausible-sounding explanations for what, specifically, past cultures might have done better. \nFor science and technology, I've occasionally heard speculation that the modern academic system is stifling, and that innovators would be better off if they were independently funded (through their own wealth or a patron's) and free to explore their curiosity. But this doesn't strike me as a good explanation for innovation stagnation:\nI'd guess that there are far more people today (compared to the past), as a percentage of the population, who are financially independent and in a position to freely explore their curiosity. With increasing wealth inequality, there are also far more potential patrons. So for academia to be the culprit, it would need to be drawing in a very large number of people who formerly could and would have freely explored their curiosity, but now choose to stay in academia and play by its rules. This seems far-fetched to me. \nI also note that the scientific breakthroughs we do see in modern times seem to mostly (though not exclusively) come from people with traditional expert credentials. If the \"freely explore one's curiosity\" model were far superior, I'd expect to see it leading to more, since again, there are plenty of people who can use this model.\nAdditionally, this explanation seems particularly ill-suited to explain why art and science seem to have seen the same pattern - I don't see any equivalent of \"academia\" for musicians or literary writers. (You could argue that TV and film force artists to endure more bureaucracy, as those art forms are expensive to produce. But the \"decline\" in art predates these formats.)\nThis isn't a denial of the ways in which academia can be stifling and dysfunctional. I just don't think the rise of academia is a strong candidate explanation for a fall in per-capita innovation.\nI do think it's probably true that the past had more innovators whose contributions cut across disciplinary lines, and whose fundamental style and manner could be described as \"nonconformist freethinker generating concepts\" rather than \"intellectual worker bee pursuing specific narrow questions.\" \nI think this is a function of the \"innovation as mining\" dynamic: a greater share of the innovations within reach today are suited to be reached by \"intellectual worker bee\" types, as opposed to \"nonconformist freethinker\" types, due to the larger amount of prerequisite knowledge one has to absorb in a given area before being in much position to innovate. \n \nI think academia does tend to reward \"intellectual worker bees,\" but that this is transparent and that most potential (and financially viable) \"nonconformist freethinkers\" are staying out.\nI've heard even less in the way of plausible-sounding explanations for how, specifically, previous cultures may have facilitated better art.\nGeneral suspiciousness about \"declinism,\" the general attitude that society is \"losing its way.\" I feel like I see a lot of bad arguments that the past was better in general (example), and I am inclined to agree with Our World in Data that (for whatever reason) people seem to be naturally biased toward \"declinism.\"\nI also suspect that subjective rankings of past accomplishments just tend, for whatever reason, to look overly favorably on the past. To illustrate this, here are charts similar to the charts above, but for well-known subjective rankings of baseball and basketball players:\nBaseball players (# in top 100 with career midpoint each year, smoothed)\nBasketball players (# in top 96 with career midpoint each year, smoothed)\nDisregarding the dropoffs at the end (which I think are just about the lists being made a while ago), these charts look ridiculous to me; there's little question in my mind that the level of play has improved significantly for both sports over time. (Here's a good link making this point for baseball; for basketball I'd encourage just watching videos from different eras, and may post some comparisons in the future.)\nMy own intuitions. This is the least important point, but I'm sharing it anyway. A lot of comparisons between classic vs. modern art/science are very hard for me to make sense of. I can often at least sympathize with a subjective feeling that the “classic” work feels subjectively more impressive, but this feeling often seems bound up in what I already know about its place in history.9 In cases where it seems relatively easier to compare the quality of work, though, it seems to me that modern work is better. For example, quantitative social science seems leaps and bounds better today than in the past, not just in terms of data quality but in terms of clarity and quality of reasoning. I also feel like Shakespeare's comedies are inferior to today's comedies in a pretty clear-cut way, but are acclaimed (and respected more than today's comedies) nonetheless. I recognize there's plenty of room for debate here.\nA note on ancient Greece. As discussed in the supplement, ancient Greece (between about 700 BCE and 100 CE) is \"off the charts\" in terms of how many acclaimed artistic and scientific figures (per capita) it produced. It performs far better on these metrics than any country in Europe (or the US) after 1400, and it outperforms all other countries and periods by even more. Is this evidence that ancient Greece had a special, innovation-conducive culture that could qualify as a \"golden age?\"\nMy take: yes and no. My guess is that:\nAncient Greece is essentially where the basic kind of intellectual activity that generates critically acclaimed work first experienced high demand and popularity. This article by Joel Mokyr gives a sense of what I have in mind by the \"basic kind of intellectual activity\" - ancient Greece might have been the first civilization to prize certain kinds of \"new ideas\" (at least, the kind of \"new ideas\" celebrated by the critics whose judgments are driving the data above) as something worth actively pursuing.\nWhile ancient Greece produced a lot of critically acclaimed work over the centuries, its interests didn't \"catch on\": at that time, there wasn't the sort of global consensus we have today about the importance and desirability of this sort of innovation. And eventually demand fizzled, before spiking again hundreds of years later in modern Europe.\nThus, in my view, ancient Greece is best interpreted as an isolated spike in demand and effort at innovation, not a spike in exceptional intelligence or knowhow. And I doubt the level of demand and effort were necessarily very high by modern standards - so in that sense, I doubt that ancient Greece represents a \"golden age\" in the sense that we'd produce more innovation today if we were more like they were.\nReasons I think ancient Greece's accomplishments are best explained by a spike in demand/effort, not intelligence/knowhow:\nAncient Greece was also the first country to score highly on \"significant figures per million people\" metric. (Out of 72 significant figures before the year 400 BCE in the entire data set, 66 were from Ancient Greece.10)\nGiven that ancient Greece was the first home of substantial amounts of critically acclaimed work, it seems unlikely that it was also the best environment for critically acclaimed work, in terms of institutions or incentives or knowhow. By analogy, the first person to work on a puzzle might make the most noteworthy progress on assembling it, but this probably isn't because they are bringing the best techniques to the task: being early is an easy explanation for their high significance, and having the best techniques or abilities is a less likely explanation. (The best techniques seem especially unlikely given that they haven't had a chance to learn from others.)\nWhen I look at the actual figures from Ancient Greece, it reinforces my feeling that they are more noteworthy for being early than for the intrinsic impressiveness or usefulness of their work. For example, the two highest-rated figures from Ancient Greece are Aristotle (who ranks highly in both philosophy and science) and Hippocrates (medicine). \nIn my view, both of these figures did the sort of theorizing and basic concept formation that was useful near the founding of a discipline, but wouldn't have nearly the same utility if brought to philosophy or medicine today. \n \nOne could argue that if Aristotle or Hippocrates were transplanted to the present day, they might invent an entirely new field from whole cloth, of comparable significance to philosophy or medicine; I would find this extremely hard to believe, but won't argue the case further here.\nMore research needed\nI've done an awful lot of amateur data wrangling for this piece. I think a more serious, scholarly effort to assess \"Where's today's Beethoven?\" could make a lot more headway, via things like:\nBetter estimates of the \"effective population\" (how fast is the amount of \"effort\" at innovation growing?) How much \"innovation stagnation\" we estimate is very sensitive to this.\nMore systematic attempts to assess the significance of different innovations (both in terms of science and art), and look at what that means for the pace of innovation. I would guess that whether we're seeing \"innovation stagnation\" is pretty sensitive to how exactly this is done; for example, I'd guess that if you look at popular rather than critical opinion, the modern era looks extremely productive at producing art/entertainment.\nMore intensive examination of times and places that seem like decent candidates for \"golden ages,\" and hypothesizing about, specifically, what made them unusually productive.\nI think this would be worth it, because I think each hypothesized explanation for \"Where's today's Beethoven?\" has some important potential implications. Later in this series, I'll discuss how the \"innovation as mining\" hypothesis - which I think explains a lot of what's going on - might change our picture of how to increase innovation. \nNext in series: How artistic ideas could get harder to find\nSpecial thanks to Luke Muehlhauser for his feedback on this piece and others in the series, especially w/r/t the state of modern highbrow art and entertainment.Footnotes\n I may elaborate on this more in the future, but my basic take is that Beethoven's music is \"great\" in at least two significant senses: (a) for nearly all listeners, it is enjoyable; (b) for obsessive listeners who are deeply familiar with other music that came before, it is \"impressive\" in the sense of demonstrating originality/innovation/other qualities.\n I think Beyonce's music is better than Beethoven's when it comes to (a), but maybe not when it comes to (b). And I think there probably are modern artists who are better than Beethoven when it comes to (b), but they tend to be a lot worse when it comes to (a). (I'm guessing this is true of various \"academic\" and \"avant-garde\" musicians who are very focused on specialized audiences.)\n If I were to come up with my own personal way of valuing (a) and (b) - both of which I think deserve some weight in discussions of what music is \"great\" - I think I would favor Beyonce over Beethoven. But I think there probably is some relative valuation of (a) and (b) that a lot of people implicitly hold, and according to which Beethoven is better than any modern artist.  ↩\n Recent example I came across: https://twitter.com/ESYudkowsky/status/1455787079120539648  ↩\n For example, see:\nTyler Cowen's interview with Peter Thiel, in which both seem to endorse something like a \"golden age\" hypothesis. Thiel attributes \"the great stagnation\" to over-regulation as well as hysteresis (\"When you have a history of failure, that becomes discouraging\"); Cowen talks about complacency and a declining \"sense of what can be accomplished, our unwillingness to repeat, say, the Manhattan Project, or Apollo.\" \nThis interview with Tyler Cowen and Patrick Collison, e.g. \"Now, there's two, I think, broad possibilities there. One is it's just getting intrinsically harder to generate progress and to discover these things. And, who knows, maybe some significant part of that is true. But the other possibility is it's somehow more institutional, right? ... we do have suggestive evidence that our institutions are....well, they're certainly older than they used to be, and they're also, as in the NIH funding example, there are changes happening beneath the surface and so on that may or may not be good. So I don't think we should write off the possibility that it's not inevitable, and that there is or that there do exist alternate forms of organization where things would work better ... the notion that people have lost the ability to imagine a future much different and much better than what they know to me is one of the most worrying aspects of where we are now.\"\n These arguments don't seem to appear in the more formal works by Cowen and Collison, though. ↩\n Using a moving average. ↩\n I thought Alexey Guzey's criticism of this paper - while I don't agree with all of it - did a good job highlighting some of the confusion around the \"units\" here, but I don't think that issue affects the big picture of what I'm talking about here. ↩\n I went with the year Season 1 came out, based on my judgment call that most TV shows peak on the early side. ↩\n My own take is that there is something particularly weird and \"bad taste\"-like going on here with the critics. For example, maybe all of the best cinematic innovation is happening in very outside-the-mainstream films that the critics who made that list aren't paying attention to, and maybe the music critics who weighed in for Rolling Stone are affected by something like this. I don't know, but I fundamentally don't buy that the number of great films per year has been falling since before 1970, or that contributions to modern music cratered after 1980 and never recovered. I feel this way even though I do think that the top films/albums on each list are reasonable candidates for the \"most acclaimed\" in their category, in pretty much the same sense that Beethoven and Shakespeare are. ↩\n The exceptions: \nFilm has a \"double spike\" of sorts, which also affects the combined \"film+music+TV+video games\" chart. This looks to coincide pretty well with the mainstreaming of color cinema.\nBooks have a \"double spike\" even when excluding Shakespeare, which is interesting. ↩\n Though frankly, I often don’t feel this way where other people do. For example, I find ancient philosophy very unimpressive, in that it takes so much interpretive guesswork to even form a picture of what a piece is claiming - I think modern philosophy is vastly better on this front. I also think modern highbrow films and TV shows are pretty much superior to most classic literature. ↩\n The others are three Chinese philosophers (Confucius, Laozi and Mozi) and three Indian philosophers (Buddha, Carvaka, Kapila) ↩\n", "url": "https://www.cold-takes.com/wheres-todays-beethoven/", "title": "Where's Today's Beethoven?", "source": "cold.takes", "source_type": "blog", "date_published": "2022-01-04", "id": "6c20c920141bc76eb33f4de2f2df0997"} -{"text": "I previously wrote that describing utopia tends to go badly - largely because utopia descriptions naturally sound:\nDull, due to the lack of conflict and challenge. (This is the main pitfall noted in previous work on this subject,1 but I don't think it's the only one.)\nHomogeneous: it's hard to describe a world of people living very differently from each other. To be concrete, utopias tend to emphasize specific lifestyles, daily activities, etc. - and this ends up sounding totalitarian.\nAlien: anything too different from the status quo is going to sound uncomfortable, at least to many.\nIn this post, I'm going to present a framework for visualizing utopia that tries to avoid those problems. Later this week I'll share some links (some from readers) on more specific utopias.\n(Why care? See the beginning of the previous post, or the end of this one.)\nThe basic approach\nI'm not going to try for a highly specific, tangible vision. Other attempts to do that feel dull, homogeneous and alien, and I expect to hit the same problem. I'm also not going to stick to a totally abstract assertion of freedom and choice.\nInstead, I'm going to lay out a set of possible utopias that span a spectrum from conservative (anchored to the status quo) to radical (not anchored).\nAt one end of the spectrum will be a conservative utopia that is presented as \"the status quo plus a contained, specific set of changes.\" It will avoid sounding dull, homogeneous and alien, because it will be presented as largely similar to today's world. And it will be a clear improvement on the status quo. But it won't sound amazing or inspiring.\nAt the other end will be a radical utopia that doesn't aim to resemble today's world at all. It will sound more \"perfect,\" but also more scary in the usual ways. \nI don't expect any single point on my spectrum to sound very satisfying, but I hope to help the reader find (and visualize) some point on this spectrum that they (the reader) would find to be a satisfying utopia. The idea is to give a feel for the tradeoff between these two ends of the spectrum (conservatism and radicalism), so that envisioning utopia feels like a matter of finding the right balance rather than like a sheer impossibility.\nI'll first lay out the two extreme ends of the spectrum: the maximally \"conservative\" utopia, and the maximally \"radical\" one. I'll then describe a couple of possible intermediate points.\nNote that I will generally be assuming as much wealth, technological advancement and policy improvements as are needed to make everything I describe feasible. I believe that everything described below has at least a decent chance of eventually being feasible (nothing contradicts the laws of physics, etc.) But I'm certainly not trying to say that any of these utopias could be achieved today. If something I describe sounds impossible to you, you may want to check out my discussions of digital people.\nThe maximally conservative utopia: status quo minus clearly-bad things\nThis isn't really a utopia in the traditional sense. It's trying to lay out one end of a spectrum.\nStart here: \nIn this world, everything is exactly like the status quo, with one exception: cancer does not exist.\nIt may not be very exciting, but it's hard to argue with the claim that this would be better than the world as it is today. \nThis is basically the most conservative utopia I can come up with, because the only change it proposes is a change that I think we can all get on board with, without hesitation. Most proposed changes to the world would make at least some people uncomfortable (no inequality? No sadness?), but this one shouldn't. If we got rid of cancer, we'd still have death, we'd still have suffering, we'd still have struggle, etc. - we just wouldn't have cancer.\nYou can almost certainly improve this utopia further by taking more baby-steps along the same lines. Make a list of things that - like cancer - you think are just unambiguously bad, and would be happy to see no more of in the world. Then define utopia as \"exactly like the status quo, except that all the things on my list don't exist.\" Examples could include:\nOther diseases\nHunger\nNon-consensual violence (not including e.g. martial arts, in which two people agree to a set of rules that allows specific forms of violence for a set period of time). \nRacism, sexism, etc.\n\"Status quo, minus everything on my list\" is a highly conservative utopia. Unlike literary utopias, it should be fairly clear that this world would be a major improvement on the world as it is.\nI note that in my survey on fictional utopias, it was much easier to get widespread agreement (high average scores) for properties of utopia than for full utopian visions. For example, while no utopia description scored as high as 4 on a 5-point scale, the following properties all scored 4.5 or higher: \"no one goes hungry\", \"there is no violent conflict,\" \"there is no discrimination by race or gender.\"\nThe maximally radical utopia: pure pleasure\nAll the way at the radical end of the spectrum, there's a utopia that makes no attempt at preserving anything about the status quo, and instead consists of everyone being in a state of maximum pleasure/happiness at all times.\nThere are a number of ways of fleshing this out, as discussed in this Slate Star Codex post. The happiness could be a stupor of simple pleasure, or it could be \"equanimity, acceptance and enlightenment,\" or it could be some particular nice moment repeated over and over again forever (with no memory of the past available to make it boring).\nThis \"maximally radical utopia\" is rarely even discussed in conversations about utopia, since it is so unappealing to so many. (Indeed, I think many see it as a dystopia). It's off-the-charts dull, homogeneous, and alien. I provide it here not as a tangible proposal that I expect to appeal to readers, but as a way of filling out the full spectrum from conservative to radical utopia. \nAn in-between point, erring conservative\nHere's a world that I'd be excited about, compared to today, even if I think we can do better (and I do). \nIn this world, technological advances have made it possible to create much more ambitious art, entertainment, and games than is possible today. \nFor example:\nOne artistic creation might work as follows. The \"viewer\" enters into a realistic, detailed virtual recreation of some time in the 20th century. They experience the first ~50 years of a particular (fictional) person's life. Around age 25, they fall in love and get married. For the next 25 years, their marriage goes through many ups and downs, but overall is a highlight of their life. Then around age 50, their relationship slowly and painfully falls apart. Shortly following their divorce, they wander into a bar playing live music, and they hear a song playing that perfectly speaks to the moment. At this point, the simulation ends. This piece is referred to a \"song,\" and evaluated as such.\nAnother artistic creation might have a similarly elaborate setup for a brilliantly made and perfectly timed meal, and be referred to as a \"sandwich.\"\nThere are also \"games\" in virtual environments. In these games, people can compete using abilities that would be unrealistic in today's world. For example, there might be a virtual war that is entirely realistic, except that it poses no actual danger to the participants (people who are injured or killed simply exit the \"game\"). There might be a virtual NBA game in which each participant plays as an NBA player, and experiences what it's like to have that player's abilities.\nEveryone in this world has the ability to:\nSubsist in good health, unconditionally. There is no need to work for one's food or medical care, and violence does no permanent damage.\nHave physical autonomy over their body and property. Nobody can be physically forced by someone else to do anything, with the exception that people are able to restrict who is able to enter their space and use their art/entertainment/games.\nSpend their time designing art, entertainment, or games, or collaborating with others designing these things, or engage in scientific inquiry about whatever mysteries of the universe exist at the time.\nSpend their time consuming art, entertainment, games or scientific insights produced by others.\nAdditionally, everyone in this world has a level of property and resources that allows them to be materially comfortable and make art/entertainment/games/science along the lines of the above, if they choose to. That said, people are able to trade relatively freely, subject to not going below some minimal level of resources. People who work on creating popular art/entertainment/games/insights accumulate more resources that they're able to use for more creation, promotion, etc.\nIn this world, the following patterns emerge:\nThere are a wide variety of different types of \"careers.\" Some people focus on producing art/entertainment/games/scientific insight. Others participate in supporting others' work: promoting it, managing its finances, performing needed repairs, etc. (Creators who can't get others excited enough to help them with these parts of the job just have to do these parts of the job themselves.) Others are pure \"consumers\" and do not take part in creation. Between these options, there is some option that is at least reasonably similar to the majority of careers that exist today.\nThere is a wide variety of tastes. Some art/entertainment/games/lines of inquiry have large, passionate fan bases, but none are universally liked. As a result, people have arguments about the relative merits of different art/entertainment/games/insights; they experience competitiveness, jealousy, etc.; they often (though by no means always) make friends with people who share their tastes, make fun of those who don't, etc.\nMany people want to be involved in creating art/entertainment/games with a large, passionate fan base. And many people want to be a well-regarded critic, or repair person, or \"e-athlete\" (someone who performs well in a particular game), or scientist. Not everyone succeeds at these ambitions. As a result, many people experience nervousness, disappointment, etc. about their careers.\nMost of today's dynamics with meeting romantic partners, raising families, practicing religion, etc. still seem applicable here.\nThis utopia is significantly more \"radical\" than the maximally conservative utopia. It envisions getting rid of significant parts of today's economy. I imagine that doing so would change the political stakes of many things as well: there would still be inequality and unfairness, but nobody would be reliant on either the government or any company for the ability to be comfortable, healthy and autonomous.\nBut it's still a fairly \"conservative\" utopia in the sense that it seeks to preserve most of the things about today's world that we might miss if we changed them. There is still property, wealth and inequality; there is still competition; most of the social phenomena that we're accustomed to still exist (jealousy, pettiness, mockery, cliques, etc.) Not all of the careers that exist today exist in this world, but it's hopefully still pretty easy to picture a job that is \"similar enough\" to any job you'd hope would stick around. Whatever kind of life you have and would like to keep, it's hopefully possible to see how you could keep most if not all of what you like about it in this world. \n(I expect some readers to instinctively react, \"It's nice that there would still be jobs in this world, but working on art, entertainment, games and science isn't good enough - I want to do something more meaningful than that, like saving lives.\" But most people today don't work on something like saving lives, and as far as I can tell, the ones that do aren't more happy or fulfilled in any noticeable way than the ones that don't.)\nI expect most readers will see this world as far short of the ideal. But I also expect that most will see how this world - or something like it - could be a fairly robust improvement on the status quo.\nAnother in-between point, erring more radical\nThis world is similar to the one described just above. The main difference is that, through meditation and other practices like it, nearly everyone has achieved significantly greater emotional equanimity. \nPeople consume and produce advanced art/entertainment/games/science as in the above world, and most of the careers that exist today have some reasonably close analogue. However, people experience far less suffering when they fail to achieve their goals, experience far less jealousy of others, are less inclined to look down on others, have generally more positive attitudes, etc.\nThis utopia takes a deliberate step in the radical direction: it cuts down on some of the conflict- and suffering-driven aspects of life that were preserved in the previous one. In my view, it rather predictably has a bit more of the dull, homogeneous, alien feel.\nA \"meta\" option\nThis one leans especially hard on things digital people would be able to do.\nIn this world, there is a waiting room with four doors. Each door goes to a different self-contained mini-world.\nDoor #1 goes to the maximally conservative utopia: just like today's world, minus, say, disease, hunger, non-consensual violence, racism, and sexism.Door #2 goes to the maximally radical utopia: everyone lives in constant pleasure (or \"equanimity, acceptance and enlightenment\"), untroubled by boredom or material needs.\nDoor #3 goes to the moderately conservative utopia described above. Material needs are met; people produce and consume advanced art, entertainment, games and science; there are many different careers; and most of the careers and social dynamics that exist today have some reasonably close analogue.\nDoor #4 goes to the moderately radical utopia described above. It is similar to Door #3 but with greater emotional equanimity, less suffering, less jealousy, more positive attitudes, etc. \nEach citizen of this world starts in the waiting room and chooses:\n1 - A door to walk through.\n2 - A protocol for reevaluating the choice. For example, they might choose: \"I will remember at all times that I have the ability to return to the waiting room and choose another door, and I can do so at any time by silently reciting my pass code.\" Or they might choose: \"I will not remember that I have the ability to return, but after 10 years, I will find myself in the waiting room again, with the option to return to my life as it was or choose another door.\"2\nTheir natural lifespans are at least long enough to have about two 60-70 year tries behind each door if they so choose. (Perhaps much more.)\nFinally: anyone can design an alternate utopia to be added to the list of four. This alternate utopia can itself be a \"meta-utopia,\" e.g., by containing its own version of the \"waiting room\" protocol.\nNow that I've laid out a few points on the spectrum, this utopia makes a move in the abstract direction, emphasizing choice. \nPersonally, I find this utopia to feel somewhat plausible and satisfying, even though I wouldn't say that of any of the four utopias that it's sampling from.\nIs a utopia possible?\nAs stated above, I don't expect any of the options I've given to sound like a fully satisfying utopia, to most readers. I expect each one to sound either too conservative (better than the status quo, but not good enough) or too radical (too much risk of losing parts of our current world that we value).\nWhat I've tried to do is give the reader an idea of the full spectrum from maximally conservative to maximally radical utopias; convince them that there is an inherent tradeoff here, which can explain the difficulty of describing a fully satisfying utopia; and convince them that there is some point, somewhere on this spectrum, that they would find to be a satisfying utopia. \nThere isn't necessarily any particular world that everyone could agree on as a utopia. For example, some people think it is important to get rid of economic inequality, while some think it's important to preserve it. Perhaps a world where everyone chooses their own mini-world to enter (such as the \"meta\" option above)3 could work for everyone. Perhaps not.\nIn real life, we aren't going to design a utopia from first principles and then build it. Instead, hopefully, we will improve the world slowly, iteratively, and via a large number of individual decisions. Maybe at some point it will become relatively easy for lots of people to attain vast emotional equanimity, and a large number but not everyone will, and then there will be a wave of sentiment that this is making the world worse and robbing it of a lot of what makes it interesting and valuable, and then some fraction of the world will decide not to go down that road. \nThis is the dynamic by which the world has gotten better to date. There have been lots of experiments that didn't take off, and some social changes that look far better in retrospect than someone from 300 years ago would have expected. Many things about the modern world would horrify someone from 300 years ago, but most changes have been fairly gradual, and someone who got to experience all 300 years one step at a time might feel okay about it. \nTo give an example from the more recent past:\nSocial norms around sex have in some sense gotten closer to what Brave New World feared (see previous discussion): many people in many contexts treat sex about as casually as the Brave New World characters do. \nBut in other respects, we haven't moved much in the Brave New World direction - people who want to be monogamous still don't generally face pressure (and certainly not coercion) against doing so - and overall it seems that the changes are more positive than negative. \nThis is consistent with the general patterns discussed above: many changes sound bad when we imagine everyone making them, but are better when different people get to make different choices and move a step at a time. \nI think this explains some of why \"radical\" utopias don't appeal: it seems entirely justified to resist the idea of a substantially different world when one hasn't been through an iterative process for arriving at it.\nI consider the real-life method for \"choosing utopia\" to be much better than the method of dreaming up utopias and arguing about them. So if, today, you can start to dimly imagine the outline of a utopia you'd find satisfying, I'd think you should assume that if all goes well (<- far from a given!) the real-life utopia will be far better than that.\nSo?\nRegardless of the ability or inability to agree on a utopian vision, I expect that most people reading this will agree that the world can and should get better year by year. I also expect them to agree on many of the specifics of what \"better\" means in the short run: less poverty, less disease, less (and less consequential) racism and sexism, etc. \nSo why do our views on utopias matter? And in particular, what good has this piece done if it hasn't even laid out a specific vision that I expect to be satisfying to most people?\nI care about this topic for a couple of reasons.\nFirst, I think short-term metrics for humanity are not enough. While I strongly support aiming for less poverty every year, I think we should also place enormous value on humanity's long-run future. \nPart of why this matters is that I believe the long-run future could come surprisingly quickly. But even if we put that aside, we should value preventing the worst catastrophes far more than we would if we only cared about those directly affected - because we should believe that a glorious future for humanity is possible, and that losing it is a special kind of tragedy.\nWhen every attempt to describe that glorious future sounds unappealing, it's tempting to write off the whole exercise and turn one's attention to nearer-term and/or less ambitious goals. I've hypothesized why attempts to describe a glorious future tend not to sound good - and further hypothesized that this does not mean the glorious future isn't there. We may not be able to describe it satisfyingly now, or to agree on it now, and we may have to get there one step at a time - but it is a real possibility, and we should care a lot about things that threaten to cut off that possibility.\nSecond, I believe that increasing humanity's long-run knowledge and empowerment has a lot of upside (as well as downside). \nThere is a school of thought that sees scientific and technological advances as neutral, or even negative. The idea is that even if we had all the power in the world, we couldn't use it to make the world better. Like the citizens of Brave New World, we're our own prisoners: if we successfully solve some problems (such as making ourselves happier) we'll create just as many to offset them (such as by losing the conflict and complexity that give life meaning). Some people concede that the poverty reduction we've seen to date is good, but think that once we reach a certain level of wealth, further advances won't help.\nI think this is a tempting worldview, in an age when most futurism is found in fiction, and the dystopias are acclaimed masterworks while the utopias are creepy slogs. But ultimately I think this dynamic tells us more about the challenges of using our imagination than it does about the reality of utopia.\nPersonally, I don't consider myself able to imagine a utopia very effectively. But I do feel convinced at a gut level that with time and incremental steps, we can build one. I think this particular \"faith in the unseen\" is ultimately rational and correct. I hope I've made a case that the oddities of describing a utopia need not stop us from achieving one.\nNext in series: Utopia linksFootnotes\n E.g., https://www.orwellfoundation.com/the-orwell-foundation/orwell/essays-and-other-works/can-socialists-be-happy/  ↩\n People who choose to remember the existence of the waiting room aren't able to tell - or at least, aren't reliably able to convince - people who don't, to protect the latter's choice not to remember. ↩\n See also this post on the \"Archipelago\" idea. ↩\n", "url": "https://www.cold-takes.com/visualizing-utopia/", "title": "Visualizing Utopia", "source": "cold.takes", "source_type": "blog", "date_published": "2021-12-14", "id": "0e6e64ad70779274c54d0cfe6a4a2aec"} -{"text": "A kind of discourse that seems pretty dead these days is: describing and/or debating potential utopias.\nBy \"potential utopias,\" I specifically mean visions of potential futures much better than the present, taking advantage of hopefully greater wealth, knowledge, technology, etc. than we have today. (I don't mean proposals to create alternative communities today, or claims about what could be achieved with policy changes today,1 though those can be interesting too.)\nIt seems to me that there's little-to-no discussion or description of potential utopias (there's much more attention to potential dystopias), and even the idea of utopia is widely mocked. (More on this below.) As someone interested in taking a very long-run perspective on humanity's past and future, this bothers me:\nWhen thinking about the value of ensuring that humanity continues to exist and/or successfully navigating what could be the most important century, it seems important to consider how good things could be if they go well, not just how bad things could be if they go poorly. \nIt seems pretty self-evident to me that failing to really consider both sides will skew our decision making in some ways. \n \nIn particular, I think it's liable to make us fail to feel the full importance of what's at stake. If the idea of describing or visualizing utopia seems absurd, it's more tempting than it should be to write the whole question of humanity's long-term trajectory off and think only about shorter-term matters. \nSpeaking more vaguely, it just doesn't seem great to have very-long-run goals be absent from discussion about how things are going in the world and what ought to change. I know there's an argument that \"The long-run future is too hard to reason about, so we should focus on the next few years,\" but I also resonate with the general idea that \"plans are useless, planning is indispensable.\"\nBelow, I'll:\nDescribe some of my own experiences looking for discussions of potential utopias, and the general contempt I perceive for such things in modern discourse.\nHypothesize that one of the main blockers to describing utopia is that it's inherently difficult to describe in an appealing way. Pretty much by their nature, utopias tend to sound dull (due to the lack of conflict), homogeneous (since it's hard to describe a world of people living very differently from each other) and alien (anything very different from the status quo is just going to rub lots of people the wrong way).\nLook at Brave New World - which presents a \"supposed utopia\" as a dystopia - through this lens.\nIn the next post, I'll give a framework for visualizing utopia that tries to avoid the problems above.\nUtopia is very \"out\"\nA few years ago, I tried to collect different visions for utopia from existing writings, and use Mechanical Turk polling to see how broadly appealing they are. (My results are available here.) I learned a number of things from this exercise:\nI looked for academic fields studying utopia. I hoped I would find something in the social sciences: for example, analyzing what sorts of social arrangements might work well under the assumption of greatly increased wealth and improved technology, or finding data on what sorts of utopian descriptions appeal most to different sorts of people. However, the only relevant-seeming academic field I found (Utopian Studies) is rooted in literary criticism rather than social science. \nThe main compilation I found for utopian visions, Claeys and Sargent's Utopia Reader, is nearly all taken from fiction, especially the readings from the 20th century.\nMost of the \"utopia\" descriptions I found there are very old, and are quite unappealing to me personally. In recent work, dystopia seems to be a more common topic than utopia. (Both dystopia and utopia are both considered part of \"utopian studies\").\nWhen I tried testing the most appealing utopias - as well as some I came up with myself - by surveying several hundred people (using Positly), none scored very well. (Specifically, none got an average as high as 4 on a 5-point scale).\nI attended the Society for Utopian Studies's annual conference. This was the only conference I could find focused on discussing utopia or something like it. It was a very small conference, and most of the people there were literary scholars who had a paper or two on utopia but didn't heavily specialize in it. I asked a number of people why they had come, and a common answer was \"It was close by.\"\nA lot of the discussion revolved around dystopia. When people did discuss utopia, I often had the sense that \"utopia\" and \"utopian\" were being used as pejorative terms - their meaning was something like \"Naive enough to think one knows how the world should be set up.\" One person said they associated the idea of utopia with totalitarianism. \nRather than excitement about imagining designing utopias, the main vibe was critical examination of why one would do such a thing. I think that people thought that the analysis I'd done - using opinion polling to determine whether any utopias are broadly appealing to people - was pretty goofy, though this could've been for a number of reasons (such as that it is).\nIn a world with a large thriving social science literature devoted to auction theory, shouldn't there be at least a few dozen papers engaged in a serious debate over where we're hoping our society is going to go in the long run?\nWhy is utopia unpopular?\nIf I'm right that there's little-to-no serious discussion of potential utopias (and general contempt for the idea) in today's discourse, there are a number of possible reasons.\n\"Ends justify the means\" worries? One reason might be the idea that aiming at utopia inevitably leads to \"ends justify the means\" thinking - e.g., believing that it's worth any amount of violence/foul play to get a chance at getting the world toward utopia. \nThis might be based on the history of Communism in the 20th century and/or the writings of people like Karl Popper and Isaiah Berlin.2\nI'm not sure I understand the reasoning here: it also seems \"risky\" in this way to have strong views (as many do) about how people should live their lives and what should be legal/illegal today. The idea that policy has high stakes and is worth fighting over seems pretty widespread, and not so scorned. (Also, Communism itself seems much more warmly received in modern discourse than utopia.)\nI'm generally against \"ends justify the means\" type reasoning, whether about the long-run future or about the present. Many people focused on the present seem happy with \"ends justify the means\" type reasoning. So it seems to me that this is just a different topic from visualizing utopia.\nThis piece focuses on a different possible reason for utopia's lack of popularity: past attempts to describe utopia generally (universally?) sound unappealing. \nThis isn't just because utopia makes poor entertainment. For example, take these excerpts from Wikipedia's summary of Walden Two, a relatively recent and intellectually serious attempt at utopia:\nEach member of the community is apparently self-motivated, with an amazingly relaxed work schedule of only four average hours of work a day, directly supporting the common good and accompanied by the freedom to select a fresh new place to work each day. The members then use the large remainder of their time to engage in creative or recreational activities of their own choosing. The only money is a simple system of points that buys greater leisure periods in exchange for less desirable labor. Members automatically receive ample food and sleep, with higher needs met by nurturing one's artistic, intellectual, and athletic interests, ranging from music to literature and from chess to tennis.\nIn one sense, each individual sentence of this sounds like an improvement on life today, at least for most people. And yet when I picture this world, I can't help but picture ... seeing fake-seeming smiles everywhere? Half-heartedly playing tennis while thinking \"What's it all for?\" Feeling a vague, ominous pressure not to complain? \nAnd it gets worse as it gets more specific: \nAs Burris and the other visitors tour the grounds, they discover that certain radically unusual customs have been established in Walden Two, quite bizarre to the American mainstream, but showing apparent success in the long run. Some of these customs include that children are raised communally, families are non-nuclear ...\nNow I'm picturing having to be friends with everyone I don't like ...\n ... free affection is the norm ...\nThat isn't helping.\n... and personal expressions of thanks are taboo.\nSuch behavior is mandated by the community's individually self-enforced \"Walden Code\", a guideline for self-control techniques, which encourages members to credit all individual and other achievements to the larger community, while requiring minimal strain. Community counselors are also available to supervise behavior and assist members with better understanding and following the Code.\nAnd now it's sounding like an almost dead ringer for Brave New World, a dystopia written more than 10 years prior. Actually, it doesn't sound all that far off from One Flew Over the Cuckoo's Nest. I'm basically imagining a world where we're all either brainwashed, or forced into conformity while pretending that we're freely and enthusiastically doing what we please. The comments about \"individual self-enforcement\" and lack of physical force just make me imagine that all my cooperative friends and I don't know what the source of the enforcement is - only that everyone we know seems pretty scared to challenge whatever it is.\nI don't think this is a one-off. I think it's a common pattern for descriptions of utopia to feel either vague and boring, or oppressive and scary, if not both. The utopian visions that I perceive as most successful today are probably Star Trek and Iain M. Bank's \"Culture novels,\" but both of these seem to revolve around advanced civilizations interacting with hostile ones, such that most of the action is taking place in the context of the (very non-utopian) latter.\nBut the world can get better, right? \nWhat is it about describing a vastly improved world that goes so badly?\nUtopias sound dull, homogeneous and alien\nWhen one describes a utopia in great detail, I think there tend to be a few common ways in which it sounds unattractive: it tends to sound dull, homogeneous and alien.3\nDull. Challenges and conflict are an important part of life. We derive satisfaction and meaning from overcoming them, or just contending with them. \nAlso, a major source of value in life is our relationships, and we often form and maintain relationships with the help of some form of conflict. \nHumor is an important part of relationships, and humor is often (usually?) at someone's expense. \nWorking together to overcome challenges - or sometimes, just suffer them - can be an important way of bonding. \nIf you read guides to writing fictional characters who seem relatable, compelling and interesting to the reader, you'll often see conflict and plot stressed as essential elements for accomplishing this.\nWhen I think about my life as it is today, I think a lot about the things I'm hopeful and nervous about, and the past challenges I've overcome or gotten through. When I picture most utopias, there doesn't seem to be as much room for hope and fear and challenge. That may also mean that I'm instinctively imagining that my relationships aren't the same way they are now.\n(This \"dullness\" property seems closest to the one gestured at in George Orwell's 1948 essay on why utopias don't sound appealing.)\nHomogeneous. Today's world has a large number of different sorts of people living different sorts of lives. It's hard to paint a specific utopian picture that accommodates this kind of diversity. \nA specific utopian picture tends to emphasize particular lifestyles, daily activities, etc. - but a particular lifestyle will generally appeal to only a small fraction of the population.\nThis might be why utopias often have a \"totalitarian\" feel. It might also explain why there is perhaps more literature on \"dystopias calling themselves utopias\" (e.g., Brave New World, The Giver) than on utopias. If you take any significant change in lifestyle or beliefs and imagine it applying to everyone, it's going to sound like individual choice and diversity are greatly reduced. \nAlien. More generally, we tend to value a lot of things about our current lives - not all of which we can easily name or describe. The world we live in is rich and complex in a way that it's hard for a fictional world to be. So if we imagine ourselves in a fictional world, it's often going to feel like something is missing.\nI think most people have a significant degree of \"conservatism\" (here I'm using the term broadly rather than in a US political context). We improve things one step at a time, rather than by tearing everything down and building it back up from scratch. When a world that is \"too many steps away\" is described, it's hard to picture it or be comfortable with it. \nI think a description of today's world could easily sound like a horrible dystopia to the vast majority of people living 1000 years ago (or even 100 or 50), even though today's world is, in fact, probably much better on the whole.\nUtopia as dystopia: Brave New World\nIt's interesting to look at a dystopian novel like Brave New World through the \"dull, homogeneous and alien\" lens. \nBrave New World presents a world of advanced technology, great wealth, and peace, which has enabled society to arrange itself as it wants to. These are conditions that \"ought to\" engender a utopia - and in fact many of the characters loudly proclaim their world to be wonderful - but that instead results in a dystopia. This \"utopia as dystopia\" formula is reasonably common (other fiction in this vein includes Gattaca and The Giver). \nBrave New World heavily emphasizes homogeneity and lack of choice:\nAll children are genetically engineered and raised by the state.\nNot only has monogamy disappeared entirely, but it seems all romantic choice has disappeared as well:\n“Has any of you been compelled to live through a long time-interval between the consciousness of a desire and its fulfilment?” \n“Well,” began one of the boys, and hesitated.\n“Speak up,” said the D.H.C. “Don’t keep his fordship waiting.”\n“I once had to wait nearly four weeks before a girl I wanted would let me have her.” \n“And you felt a strong emotion in consequence?”\n“Horrible!”\n“Horrible; precisely,” said the Controller. “Our ancestors were so stupid and short-sighted that when the first reformers came along and offered to deliver them from those horrible emotions, they wouldn’t have anything to do with them.”\n \nThere's also a strong alien vibe created by this sort of thing, as people disparage things that are extremely basic parts of our lives, like bad emotions. (The people in scenes like this also just talk in a very strange, wooden way.) \nBrave New World also heavily emphasizes a lack of conflict (implying a dull world):\n“But I don’t want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.”\n“In fact,” said Mustapha Mond, “you’re claiming the right to be unhappy.”\n“All right then,” said the Savage defiantly, “I’m claiming the right to be unhappy.” \n“Not to mention the right to grow old and ugly and impotent; the right to have syphilis and cancer; the right to have too little to eat; the right to be lousy; the right to live in constant apprehension of what may happen tomorrow; the right to catch typhoid; the right to be tortured by unspeakable pains of every kind.” There was a long silence. \n“I claim them all,” said the Savage at last. [Including typhoid! - Ed.]\n \nBrave New World is often thought of as clever for the way it transmutes a utopia into a dystopia. But maybe that's the kind of transmutation that writes itself. Describe a future world in enough detail and you already have people worried about dullness, homogeneity, and alienness. Brave New World amplifies this with various incredulous quotes to demonstrate just how homogeneous and conflict-free everything is, and by describing the government policies that enforce such a thing.\n\"Abstract\" utopias\nTo lay out a utopian vision that avoids the problems above, one might try presenting a more abstract vision, emphasizing freedom and individual choice and avoiding giving a single \"picture of what daily life is like.\" By being less specific, one can allow the reader to imagine that they'll keep a lot of what they like about their current life, instead of imagining that they'll be part of a homogeneous herd doing something very unfamiliar.\nAn example of this approach4 is Robert Nozick's Anarchy, State and Utopia.5 To take Wikipedia's summary:\nThe utopia ... is a meta-utopia, a framework for voluntary migration between utopias tending towards worlds in which everybody benefits from everybody else's presence ... The state protects individual rights and makes sure that contracts and other market transactions are voluntary ... the only form of social union that is possible [is] fully voluntary associations of mutual benefit ... In Nozick's utopia if people are not happy with the society they are in they can leave and start their own community.\nI note that in my paper, the utopia that scored best among survey respondents was reminiscent of Nozick's: \nEverything is set up to give people freedom. If you aren't interfering with someone else's life, you can do whatever you want. People can sell anything, buy anything, choose their daily activities, and choose the education their children receive. Thanks to advanced technology and wealth, in this world everyone can afford whatever they want (education, food, housing, entertainment, etc.) Everyone feels happy, wealthy, and fulfilled, with strong friendships and daily activities that they enjoy.\n(This was not what I expected to be the highest-scoring option, given that the survey population overwhelmingly identifies with the political left. By contrast, the \"government-focused utopia\" I wrote performed horribly.)\nBut this kind of \"abstract\" utopia has another issue: it's hard to picture, so it isn't very compelling. \nI think this points to a kind of paradox at the heart of trying to lay out a utopian vision. You can emphasize the abstract idea of choice, but then your utopia will feel very non-evocative and hard to picture. Or you can try to be more specific, concrete and visualizable. But then the vision risks feeling dull, homogeneous and alien.\nDon't give up\nMy view is that utopias are hard to describe because of structural issues with describing them - not because the idea of utopia is fundamentally doomed.\n In the next post, Visualizing Utopia, I try to back this up by offering a framework for visualizing utopia that hopefully resists - or at least addresses - the \"dull, homogeneous, and alien\" trap. Click here to read it.Footnotes\n I'd put Utopia for Realists in the latter category. ↩\n I'm not deeply familiar with their arguments, but here are some links giving a feel for them:\nhttps://philosophicaldisquisitions.blogspot.com/2018/01/poppers-critique-of-utopianism-and.html\nhttps://www.goodreads.com/quotes/6754011-the-utopian-attempt-to-realize-an-ideal-state-using-a\nhttps://www.tandfonline.com/doi/abs/10.1080/13698230008403313  ↩\nSome people try to get around this by describing utopia more abstractly. I'll address that later. ↩\nAnother example: Nick Bostrom's Letter from Utopia. ↩\nI am not saying that Nozick's utopian vision is fully satisfying, and I certainly don't agree with Nozick's politics overall. I'm just noting that it has some appealing features relative to the more specific utopias discussed above. ↩\n", "url": "https://www.cold-takes.com/why-describing-utopia-goes-badly/", "title": "Why Describing Utopia Goes Badly", "source": "cold.takes", "source_type": "blog", "date_published": "2021-12-07", "id": "3484be32547d35a2398756dc3463a6ef"} -{"text": "This piece is about the single activity (\"minimal-trust investigations\") that seems to have been most formative for the way I think. \nMost of what I believe is mostly based on trusting other people. \nFor example:\nI brush my teeth twice a day, even though I've never read a study on the effects of brushing one's teeth, never tried to see what happens when I don't brush my teeth, and have no idea what's in toothpaste. It seems like most reasonable-seeming people think it's worth brushing your teeth, and that's about the only reason I do it.\nI believe climate change is real and important, and that official forecasts of it are probably reasonably close to the best one can do. I have read a bunch of arguments and counterarguments about this, but ultimately I couldn't tell you much about how the climatologists' models actually work, or specifically what is wrong with the various skeptical points people raise.1 Most of my belief in climate change comes from noticing who is on each side of the argument and how they argue, not what they say. So it comes mostly from deciding whom to trust.\nI think it's completely reasonable to form the vast majority of one's beliefs based on trust like this. I don't really think there's any alternative.\nBut I also think it's a good idea to occasionally do a minimal-trust investigation: to suspend my trust in others and dig as deeply into a question as I can. This is not the same as taking a class, or even reading and thinking about both sides of a debate; it is always enormously more work than that. I think the vast majority of people (even within communities that have rationality and critical inquiry as central parts of their identity) have never done one.\nMinimal-trust investigation is probably the single activity that's been most formative for the way I think. I think its value is twofold:\nIt helps me develop intuitions for what/whom/when/why to trust, in order to approximate the views I would hold if I could understand things myself.\nIt is a demonstration and reminder of just how much work minimal-trust investigations take, and just how much I have to rely on trust to get by in the world. Without this kind of reminder, it's easy to casually feel as though I \"understand\" things based on a few memes or talking points. But the occasional minimal-trust investigation reminds me that memes and talking points are never enough to understand an issue, so my views are necessarily either based on a huge amount of work, or on trusting someone.\nIn this piece, I will:\nGive an example of a minimal-trust investigation I've done, and list some other types of minimal-trust investigations one could do.\nDiscuss a bit how I try to get by in a world where nearly all my beliefs ultimately need to come down to trusting someone.\nExample minimal-trust investigations\nThe basic idea of a minimal-trust investigation is suspending one's trust in others' judgments and trying to understand the case for and against some claim oneself, ideally to the point where one can (within the narrow slice one has investigated) keep up with experts.2 It's hard to describe it much more than this other than by example, so next I will give a detailed example.\nDetailed example from GiveWell\nI'll start with the case that long-lasting insecticide-treated nets (LLINs) are a cheap and effective way of preventing malaria. I helped investigate this case in the early years of GiveWell. My discussion will be pretty detailed (but hopefully skimmable), in order to give a tangible sense of the process and twists/turns of a minimal-trust investigation.\nHere's how I'd summarize the broad outline of the case that most moderately-familiar-with-this-topic people would give:3\nPeople sleep under LLINs, which are mosquito nets treated with insecticide (see picture above, taken from here).\nThe netting can block mosquitoes from biting people while they sleep. The insecticide also deters and kills mosquitoes.\nA number of studies show that LLINs reduce malaria cases and death. These studies are rigorous - LLINs were randomly distributed to some people and not others, allowing a clean \"experiment.\" (The key studies are summarized in a Cochrane review, the gold standard of evidence reviews, concluding that there is a \"saving of 5.6 lives each year for every 1000 children protected.\")\nLLINs cost a few dollars, so a charity doing LLIN distribution is probably saving lives very cost-effectively. \nPerhaps the biggest concern is that people might not be using the LLINs properly, or aren't using them at all (e.g., perhaps they're using them for fishing).\nWhen I did a minimal-trust investigation, I developed a picture of the situation that is pretty similar to the above, but with some important differences. (Of all the minimal-trust investigations I've done, this is among the cases where I learned the least, i.e., where the initial / conventional wisdom picture held up best.)\nFirst, I read the Cochrane review in its entirety and read many of the studies it referenced as well. Some were quite old and hard to track down. I learned that:\nThe original studies involved very intense measures to make sure people were using their nets properly. In some cases these included daily or weekly visits to check usage. Modern-day LLIN distributions don't do anything like this. This made me realize that we can't assume a charity's LLIN distributions are resulting in proper usage of nets; we need to investigate modern-day LLIN usage separately.\nThe most recent randomized study was completed in 2001, and there won't necessarily ever be another one.4 In fact, none of the studies were done on LLINs - they were done on nets treated with non-long-lasting insecticide, which had to be re-treated periodically. This made me realize that anything that's changed since 2001 could change the results observed in the studies. Changes could include how prevalent malaria is in the first place (if it has fallen for other reasons, LLINs might do less good than the studies would imply), how LLIN technology has changed (such as moving to the \"long-lasting\" approach), and the possibility that mosquitoes have evolved resistance to the insecticides.\nThis opened up a lot of further investigation, in an attempt to determine whether modern-day LLIN distributions have similar effects to those observed in the studies. \nWe searched for general data on modern-day usage, on changes in malaria prevalence, and on insecticide resistance. This data was often scattered (so we had to put a lot of work into consolidating everything we could find into a single analysis), and hard to interpret (we couldn't tell how data had been collected and how reliable it was - for example, a lot of the statistics on usage of nets relied on simply asking people questions about their bednet usage, and it was hard to know whether people might be saying what they thought the interviewers wanted to hear). We generally worked to get the raw data and the full details of how the data was collected to understand how it might be off.\nWe tried to learn about the ins and outs of how LLINs are designed and how they compare to the kinds of nets that were in the studies. This included things like reviewing product descriptions from the LLIN manufacturers. \nWe did live visits to modern-day LLIN distributions, observing the distribution process, the LLINs hanging in homes, etc. This was a very imperfect way of learning, since our presence on site was keenly felt by everyone. But we still made observations such as \"It seems this distribution process would allow people to get and hoard extra nets if they wanted\" and \"A lot of nets from a while ago have a lot of holes in them.\"\nWe asked LLIN distribution charities to provide us with whatever data they had on how their LLINs were being used, and whether they were in fact reducing malaria. \nAgainst Malaria Foundation was most responsive on this point - it was able to share pictures of LLINs being handed out and hung up, for example. \n \nBut at the time, it didn't have any data on before-and-after malaria cases (or deaths) in the regions it was working in, or on whether LLINs remained in use in the months or years following distribution. (Later on, it added processes for the latter and did some of the former, although malaria case data is noisy and we ultimately weren't able to make much of it.)\n \nWe've observed (from post-distribution data) that it is common for LLINs to have huge holes in them. We believe that the insecticide is actually doing most of the work (and was in the original studies as well), and that simply killing many mosquitoes (often after they bite the sleeper) could be the most important way that LLINs help. I can't remember how we came to this conclusion.\nWe spoke with a number of people about our questions and reservations. Some made claims like \"LLINs are extremely proven - it's not just the experimental studies, it's that we see drops in malaria in every context where they're handed out.\" We looked for data and studies on that point, put a lot of work into understanding them, and came away unconvinced. Among other things, there was at least one case in which people were using malaria \"data\" that was actually estimates of malaria cases - based on the assumption that malaria would be lower where more LLINs had been distributed. (This means that they were assuming LLINs reduce malaria, then using that assumption to generate numbers, then using those numbers as evidence that LLINs reduce malaria. GiveWell: \"So using this model to show that malaria control had an impact may be circular.\")\nMy current (now outdated, because it's based on work I did a while ago) understanding of LLINs has a lot of doubt in it:\nI am worried about the possibility that mosquitoes have developed resistance to the insecticides being used. There is some suggestive evidence that resistance is on the rise, and no definitive evidence that LLINs are still effective. Fortunately, LLINs with next-generation insecticides are now in use (and at the time I did this work, these next-generation LLINs were in development).5\nI think that people are probably using their LLINs as intended around 60-80% of the time, which is comparable to the usage rates from the original studies. This is based both on broad cross-country surveys6 and on specific reporting from the Against Malaria Foundation.7 Because of this, I think it's simultaneously the case that (a) a lot of LLINs go unused or misused; (b) LLINs are still probably having roughly the effects we estimate. But I remain nervous that real LLIN usage could be much lower than the data indicates. \nAs an aside, I'm pretty underwhelmed by concerns about using LLINs as fishing nets. These concerns are very media-worthy, but I'm more worried about things like \"People just never bother to hang up their LLIN,\" which I'd guess is a more common issue. The LLIN usage data we use would (if accurate) account for both.\nI wish we had better data on malaria case rates by region, so we could understand which regions are most in need of LLINs, and look for suggestive evidence that LLINs are or aren't working. (GiveWell has recently written about further progress on this.)\nBut all in all, the case for LLINs holds up pretty well. It's reasonably close to the simpler case I gave at the top of this section. \nFor GiveWell, this end result is the exception, not the rule. Most of the time, a minimal-trust investigation of some charitable intervention (reading every study, thinking about how they might mislead, tracking down all the data that bears on the charity's activities in practice) is far more complicated than the above, and leads to a lot more doubt.\nOther examples of minimal-trust investigations\nSome other domains I've done minimal-trust investigations in:\nMedicine, nutrition, quantitative social science (including economics). I've grouped these together because a lot of the methods are similar. Somewhat like the above, this has usually consisted of finding recent summaries of research, tracking down and reading all the way through the original studies, thinking of ways the studies might be misleading, and investigating those separately (often hunting down details of the studies that aren't in the papers). \nI have links to a number of writeups from this kind of research here, although I don't think reading such pieces is a substitute for doing a minimal-trust investigation oneself.\n \nMy Has Life Gotten Better? series has a pretty minimal-trust spirit. I haven't always checked the details of how data was collected, but I've generally dug down on claims about quality of life until I could get to systematically collected data. In the process, I've found a lot of bad arguments floating around.\nAnalytic philosophy. Here a sort of \"minimal-trust investigation\" can be done without a huge time investment, because the main \"evidence\" presented for a view comes down to intuitive arguments and thought experiments that a reader can evaluate themselves. For example, a book like The Conscious Mind more-or-less walks a layperson reader through everything needed to consider its claims. That said, I think it's best to read multiple philosophers disagreeing with each other about a particular question, and try to form one's own view of which arguments seem right and what's wrong with the ones that seem wrong.\nFinance and theoretical economics. I've occasionally tried to understand some well-known result in theoretical economics by reading through a paper, trying to understand the assumptions needed to generate the result, and working through the math with some examples. I've often needed to read other papers and commentary in order to notice assumptions that aren't flagged by the authors. \n Checking attribution. A simple, low-time-commitment sort of minimal-trust investigation: when person A criticizes person B for saying X, I sometimes find the place where person B supposedly said X and read thoroughly, trying to determine whether they've been fairly characterized. This doesn't require having a view on who's right - only whether person B seems to have meant what person A says they did. Similarly, when someone summarizes a link or quotes a headline, I often follow a trail of links for a while, reading carefully to decide whether the link summary gives an accurate impression. \nI've generally been surprised by how often I end up thinking people and links are mischaracterized. \n \nAt this point, I don't trust claims of the form \"person A said X\" by default, almost no matter who is making them, and even when a quote is provided (since it's so often out of context).\nAnd I wish I had time to try out minimal-trust investigations in a number of other domains, such as:\nHistory. It would be interesting to examine some debate about a particular historical event, reviewing all of the primary sources that either side refers to.\nHard sciences. For example, taking some established finding in physics (such as the Schrodinger equation or Maxwell's equations) and trying to understand how the experimental evidence at the time supported this finding, and what other interpretations could've been argued for.\nReference sources and statistics. I'd like to take a major Wikipedia page and check all of its claims myself. Or try to understand as much detail as possible about how some official statistic (US population or GDP, for example) is calculated, where the possible inaccuracies lie, and how much I trust the statistic as a whole.\nAI. I'd like to replicate some key experimental finding by building my own model (perhaps incorporating this kind of resource), trying to understand each piece of what's going on, and seeing what goes differently if I make changes, rather than trusting an existing \"recipe\" to work. (This same idea could be applied to building other things to see how they work.)\nMinimal-trust investigations look different from domain to domain. I generally expect them to involve a combination of \"trying to understand or build things from the ground up\" and \"considering multiple opposing points of view and tracing disagreements back to primary sources, objective evidence, etc.\" As stated above, an important property is trying to get all the way to a strong understanding of the topic, so that one can (within the narrow slice one has investigated) keep up with experts.\nI don't think exposure to minimal-trust investigations ~ever comes naturally via formal education or reading a book, though I think it comes naturally as part of some jobs.\nNavigating trust\nMinimal-trust investigations are extremely time-consuming, and I can't do them that often. 99% of what I believe is based on trust of some form. But minimal-trust investigation is a useful tool in deciding what/whom/when/why to trust. \nTrusting arguments. Doing minimal-trust investigations in some domain helps me develop intuitions about \"what sort of thing usually checks out\" in that domain. For example, in social sciences, I've developed intuitions that:\nSelection bias effects are everywhere, and they make it really hard to draw much from non-experimental data. For example, eating vegetables is associated with a lot of positive life outcomes, but my current view is that this is because the sort of people who eat lots of vegetables are also the sort of people who do lots of other \"things one is supposed to do.\" So people who eat vegetables probably have all kinds of other things going for them. This kind of dynamic seems to be everywhere.\nMost claims about medicine or nutrition that are based on biological mechanisms (particular proteins, organs, etc. serving particular functions) are unreliable. Many of the most successful drugs were found by trial-and-error, and their mechanism remained mysterious long after they were found.\nOverall, most claims that X is \"proven\" or \"evidence-backed\" are overstated. Social science is usually complex and inconclusive. And a single study is almost never determinative.\nTrusting people. When trying to understand topic X, I often pick a relatively small part of X to get deep into in a minimal-trust way. I then look for people who seem to be reasoning well about the part(s) of X I understand, and put trust in them on other parts of X. I've applied this to hiring and management as well as to forming a picture of which scholars, intellectuals, etc. to trust. \nThere's a lot of room for judgment in how to do this well. It's easy to misunderstand the part of X I've gotten deep into, since I lack the level of context an expert would have, and there might be some people who understand X very well overall but don't happen to have gotten into the weeds in the subset I'm focused on. I usually look for people who seem thoughtful, open-minded and responsive about the parts of X I've gotten deep into, rather than agreeing with me per se.\nOver time, I've developed intuitions about how to decide whom to trust on what. For example, I think the ideal person to trust on topic X is someone who combines (a) obsessive dedication to topic X, with huge amounts of time poured into learning about it; (b) a tendency to do minimal-trust investigations themselves, when it comes to topic X; (c) a tendency to look at any given problem from multiple angles, rather than using a single framework, and hence an interest in basically every school of thought on topic X. (For example, if I'm deciding whom to trust about baseball predictions, I'd prefer someone who voraciously studies advanced baseball statistics and watches a huge number of baseball games, rather than someone who relies on one type of knowledge or the other.)\nConclusion\nI think minimal-trust investigations tend to be highly time-consuming, so it's impractical to rely on them across the board. But I think they are very useful for forming intuitions about what/whom/when/why to trust. And I think the more different domains and styles one gets to try them for, the better. This is the single practice I've found most (subjectively) useful for improving my ability to understand the world, and I wish I could do more of it.\nNext in series: Learning By WritingFootnotes\n I do recall some high-level points that seem compelling, like \"No one disagrees that if you just increase the CO2 concentration of an enclosed area it'll warm up, and nobody disagrees that CO2 emissions are rising.\" Though I haven't verified this claim beyond noting that it doesn't seem to attract much disagreement. And as I wrote this, I was about to add \"(that's how a greenhouse works)\" but it's not. And of course these points alone aren't enough to believe the temperature is rising - you also need to believe there aren't a bunch of offsetting factors - and they certainly aren't enough to believe in official forecasts, which are far more complex. ↩\n I think this distinguishes minimal-trust reasoning from e.g. naive epistemology. ↩\n This summary is slightly inaccurate, as I'll discuss below, but I think it is the most common case people would cite who are casually interested in this topic. ↩\n From GiveWell, a quote from the author of the Cochrane review: \"To the best of my knowledge there have been no more RCTs with treated nets. There is a very strong consensus that it would not be ethical to do any more. I don't think any committee in the world would grant permission to do such a trial.\" Though I last worked on this in 2012 or so, and the situation may have changed since then. ↩\n More on insecticide resistance at https://www.givewell.org/international/technical/programs/insecticide-treated-nets/insecticide-resistance-malaria-control.  ↩\n See https://www.givewell.org/international/technical/programs/insecticide-treated-nets#Usage.  ↩\n See https://www.givewell.org/charities/amf#What_proportion_of_targeted_recipients_use_LLINs_over_time.  ↩\n", "url": "https://www.cold-takes.com/minimal-trust-investigations/", "title": "Minimal-trust investigations", "source": "cold.takes", "source_type": "blog", "date_published": "2021-11-23", "id": "66eaef507eb275200d9e2c008ae120f5"} -{"text": "This post interrupts the Has Life Gotten Better? series to talk a bit about why it matters what long-run trends in quality of life look like.\nI think different people have radically different pictures of what it means to \"work toward a better world.\" I think this explains a number of the biggest chasms between people who think of themselves as well-meaning but don't see the other side that way, and I think different pictures of \"where the world is heading by default\" are key to the disagreements.\nImagine that the world is a ship. Here are five very different ways one might try to do one's part in \"working toward a better life for the people on the ship.\"\nMeaning in the \"ship\" analogy\nMeaning in the world\nRowing\n \nHelp the ship reach its current destination faster\n \nAdvance science, technology, growth, etc., all of which help people (or \"the world\") do whatever they want to do, more/faster\nSteering\n \nNavigate to a better destination than the current one\n \nAnticipate future states of the world (climate change, transformative AI, utopia, dystopia) and act accordingly\n \nAnchoring\n \nHold the ship in place\n \nPrevent change generally, and/or try to make the world more like it was a generation or two ago\n \nEquity\n \nWork toward more fair and just relations between people on the ship\n \nRedistribution; advocacy focused on the underprivileged; etc.\n \nMutiny\n \nChallenge the ship's whole premise and power structure\n \nRadical challenging of the world's current systems (e.g., capitalism)1\nWhich of these is the \"right\" focus for improving the world? One of the things I like about the ship analogy is that it leaves the answer to this question totally unclear! The details of where the ship is currently trying to go, and why, and who's deciding that and what they're like, matter enormously. Depending on those details, any of the five could be by far the most important and meaningful way to make a positive difference. \nIf the ship is the world, then where are we \"headed\" by default (what happens if we have more total technology, wealth, and power over our environment)? Who has the power to change that, and how has it been going so far?\nThese are important questions with genuinely unclear answers. So people with different assumptions about these deep questions can get along very poorly with each other.\nI think this sort of taxonomy provides a different angle on people’s differences from the usual discussions of pro/anti-government interventionism.\nNext I will give some somewhat more detailed thoughts on the case for and against each of these pictures of \"improving the world.\" This is not so much to educate the reader as to help them understand where I stand, and why I have some of the unusual views I do. \nI talk a bit about the \"track record\" of each category. A lot of the point of this analogy is to highlight the importance of big-picture judgments about history.\nRowing\nI use \"rowing\" to refer to the idea that we can make the world better by focusing on advancing science, technology, growth, etc. - all of which ultimately result in empowerment, helping people do whatever they want to do, more/faster. The idea is that, in some sense, we don't need a specific plan for improving lives: more capabilities, wealth, and empowerment (\"moving forward\") will naturally result in that.\nRowing is a contentious topic, and it’s contentious in a way that I think cuts across other widely-recognized ideological lines.\nTo some people, rowing seems like the single most promising way to make the world a better place. People and institutions who give off this vibe include:\nTech entrepreneurs and VCs such as Marc Andreessen (It's Time to Build) and Patrick Collison (progress studies).\nLibertarian-ish academics such as Tyler Cowen and Alex Tabarrok (see their books for a sense of this).\nThe many institutions and funders dedicated to general speeding up of science, such as the Moore Foundation, Howard Hughes Medical Institute, and CZI Biohub.2\nThe global development world, e.g. nonprofits such as Center for Global Development and institutions such as the World Bank, which seeks to help \"developing\" countries \"develop.\" This generally includes a large (though not exclusive) focus on economic growth.\nI sometimes see rowing-oriented arguments tagged as “pro-market” or even “libertarian,” but I think that isn't a necessary connection. You could argue - and many do - that a lot of the biggest and most important contributions to global growth and technological advancement come from governments, particularly via things like scientific research funding (e.g., DARPA), development-oriented agencies (e.g., the IMF), and public education. \nTo many people, though, advocacy for “rowing” seems like it’s best understood as a veneer of pro-social rhetoric disguising mundane personal attempts to get rich - even to the point where the wealthy create an intellectual ecosystem to promote the idea that what makes them rich is also good for the world. \nOn priors, I think this is a totally reasonable critical take: \nIn practice, a lot of the folks most interested in \"rowing\" are venture capitalists, tech founders, etc. who sure seem to have spent most of their lives primarily interested in getting rich.\nIt seems \"convenient,\" perhaps suspiciously so, that their story about how to make the world better seems to indicate that the best thing to do is focus on \"creating wealth\" (which usually aligns extremely well with \"getting rich\"), just like they are. This doesn't mean that they're deliberately hiding their motivations; but it may mean they naturally and subconsciously gravitate toward worldviews that validate their past (and present) choices.\nFurthermore, the logic of why “rowing” would be good seems to have some gaps in it. It’s not obvious on priors that more total wealth or total scientific capability makes the world better. When thinking about the direct impacts of wealth and tech on quality of life, it seems about as easy to come up with harms as benefits. \nClear benefits include lower burden of disease, less hunger, more reproductive choice and freedom, better entertainment (and tastier food, and other things that one might call \"directly\" or \"superficially\" pleasurable), and more ability to encounter many ideas and choose from many candidate lifestyles and locations.\n \nBut clear potential costs include environmental damage, rising global catastrophic risks,3 rising inequality, and a world that is chaotically changing, causing all sorts of novel psychological and other challenges for individuals and communities. And many of the obvious dimensions along which wealth and technology make life more \"convenient\" do not clearly make life better: if wealth and technology \"save us time\" (reducing the need to do household chores, etc.), we might just be spending the \"saved\" time on other things that don't make our lives better, such as competing with each other for wealth and status.\nThese concerns seem facially valid, and they apply particularly to rowing. (If someone works toward equity, there could be a number of criticisms one levels at them, but the above issues don’t seem to apply.)\nIn my view, the best case for “rowing” is something like: “We don’t know why, but it seems to be going well.” If I were back in the year 0 trying to guess whether increasing wealth and technological ability would be good or bad for quality of life, I would consider it far from obvious. But empirically, it seems that the world has been improving over the last couple hundred years. \nAnd with that said, it's much less clear how things were going in the several hundred thousand years before the Industrial Revolution.\nSo my current take on \"rowing\" is something like:\nDespite all of the suspicious aspects, I think there is a good case for it. I don’t understand where this ship is going or why things are working the way they are - maybe the ship happens to be pointed toward warmer or calmer latitudes? - but rowing seems to have made life better for the vast majority of people over the last couple hundred years, and will likely continue to do so (by default) over at least the next few decades.\nOn the other hand, I don't think the track record is so good as to assume that rowing will always be good, and I'm particularly worried and uncertain about how things will go if there is a dramatic acceleration in the rate of progress - I'm inclined to approach such a prospect with caution rather than excitement.\nSteering\nSteering sounds great in theory. Instead of blindly propelling the world toward wherever it’s going, let’s think about where we want the world to end up and take actions based on that!\nBut I think this is currently the least common conception of how to do good in the world. The idea of utopia is unpopular (more in a future piece), and in general, it seems that anyone advocating action on the basis of a specific goal over the long-run future (really, anything more than 20 years out) generally is met with skepticism. \nThe most mainstream example of “steering” is probably working to prevent/mitigate climate change. This isn’t about achieving an “end state” for the world, but it is about avoiding a specific outcome that is decades away, and even that level of specific planning about the long-run future is something we don’t see a lot of in today’s intellectual discourse.\nI think the longtermist community has an unusual degree of commitment to steering. One could even see longtermism as an attempt to resurrect interest in steering, by taking a different approach from previous steering-heavy worldviews (e.g., Communism) that have fallen out of favor. \nLongtermists seek out specific interventions and events that they think could change the direction of the long-run future. \nThey are particularly interested in helping to better navigate a potential transition brought on by advanced AI - the idea being that if AI ends up being a sort of “new species” more powerful than humans, navigating the development of AI could end up avoiding bad results that last for the rest of time.\nIt’s common for longtermists to take an interest in differential technological development - meaning that instead of being “pro” or “anti” technological advancement, they have specific views on which technologies would be good to develop as quickly as possible vs. which would be good to develop as slowly as possible, or at least until we’ve developed other technologies that can make them safer. It seems to me that this sort of thinking is relatively rare outside the longtermist community. It's more common for people to be pro- or anti-science as a whole.\nWhy is it relatively rare for people to be interested in “steering” as defined here? I think it is mostly for good reasons, and comes down to the fact that the track record of “steering” type work looks unimpressive.\nThere are some specific embarrassing cases, such as historical Communism,4 which explicitly claimed to aim at a particular long-term utopian vision.\nThere is also just a lack of salient (or any?) examples of people successfully anticipating and intervening on some particular world development more than 10-20 years away. People and organizations in the longtermist community have tried to find examples, and IMO haven’t come up with much.5\nDespite this, I’m personally very bullish on the kind of “steering” that the longtermist community is trying to do (and I’m also sold on the value of climate change prevention/mitigation).\nThe main reason for this is that I think defining, long-run consequential events of the future are more “foreseeable” now than they’ve been in the past. Climate change and advanced AI are both developments that seem highly likely this century (more on AI here), and seem likely to have such massive global consequences that action in advance makes sense. More broadly, I think it is easier than it used to be to scan across possible scientific and technological developments and point to the ones most worth “preparing for.\" \nIn the analogy, I’m essentially saying that there are particular important obstacles or destinations for the ship, that we can now see clearly enough that steering becomes valuable. By contrast, in many past situations I think we were “out on the open sea” such that it was too hard to see much about what lay ahead of us, and this led to the dynamic in which rowing has worked better than steering.\nOther reasons that I’m bullish on steering are that (a) I think today’s “steering” folks are making better, more rigorous attempts at predicting the future than people who have tried to make long-run predictions in the past;6 (b) I think “steering” has become a generally neglected way of thinking about the world, at the same time as it has become more viable. \nWith that said, I think there is plenty of room for longtermists to do a better job than they are contending with the limits of how well we can “steer,” and what kinds of interventions are most likely to successfully improve how things go.\nI think our ship draws close to some major crossroads, such that navigating them could define the rest of our journey. If I’m right, focusing on rowing to the exclusion of steering is a real missed opportunity.\nAnchoring\nIn practice, it seems like a significant amount of the energy in any given debate is coming from people who would prefer to keep things as they are - or go back to things as they were (generally pretty recently, e.g., a generation or two ago). This is an attitude commonly associated with \"conservatives\" (especially social conservatives), but it's an attitude that often shows up from others as well.\nAs someone who thinks life has been getting better over the last couple hundred years - and that we still have a lot of important progress yet to be made on similar dimensions to the ones that have been improving - I am usually not excited about anchoring (though the specifics of what practices one is trying to \"anchor\" matter).\nSome additional reasons for my general attitude:\nI think the world has been changing extraordinarily quickly (by historical standards) throughout the past 200+ years, and I think it will continue to change extraordinarily quickly for at least the next few decades no matter what. So when I hear people advocating for stability and trusting the established practices of those who came before us, I largely think they are asking for something that just can't be had. (One way of putting this: as long as things are changing, we may as well try to make the best of that change.)\n I am particularly skeptical that the previous generation or two should be emulated. There is obviously room for debate here (I might write more on this topic in the future).\n I think there is a general bias toward exaggerating how good the past was that we need to watch out for.\nThere is a version of \"anchoring\" that I think can be constructive: asking that changes to policy and society be gradual and incremental, rather than sudden, so we can correct course as we go. In practice, I think nearly all policy and societal changes do end up being gradual and incremental, at least in the modern-day developed world, such that I don't currently have a wish for a stronger \"anchoring\" force than already exists in most domains that come immediately to mind (unless you count the \"caution\" frame for navigating the most important century).\nEquity\nOf the five different visions of what it means to improve the world, equity seems the most straightforward and familiar. It is about directly trying to make the world more just and fair, rather than trying to increase total options and wealth and rather than trying to optimize for some particular future event. \nEquity includes efforts to:\nRedistribute resources progressively (i.e., from rich to poor), whether via direct charity or via advocacy.\nAmplify the voices and advance the interests of historically marginalized groups including women, people of color, and people born in low-income countries.\nImprove products and services aimed at helping people who would be under-served by default, including via education reform and improvement, and scientific research (e.g., the sort of global health R&D funded by the Bill and Melinda Gates Foundation).\nYou could argue that successful equity work also contributes to the goals of rowing and steering, if a world with less inequality is also one that’s better positioned for broad-based economic growth and for anticipating/preparing for particular important events. But work whose proximate goal is equity tends to look different from work whose proximate goal is rowing or steering.\nMost people recognize equity-oriented work as coming from a place of good intentions and genuine interest in making the world better. To the extent that equity-oriented work is controversial, it often stems from:\nArguments that it undermines its own goals. For example, arguments that advocating for a higher minimum wage could result in greater unemployment, thus hurting the interests of the low-income people that a higher minimum wage is supposed to help.\nArguments that it undermines rowing progress, and that rowing is an ultimately more important/promising way to help everyone. Dead Aid is an example of this sort of argument (picked for vividness rather than quality).\nI've talked about the track record of rowing and steering; I'll comment briefly on that of equity. In short, I think it's very good. I think that much of the progress the world has seen is fairly hard to imagine without significant efforts at both rowing and equity: major efforts both to increase wealth/capabilities and to distribute them more evenly. Civil rights movements, social safety nets, and foreign aid all seem like huge wins, and major parts of the story for why the world seems to have gotten better over time.\nWith that track record in mind, and the fact that many equity interventions seem good on common-sense grounds, I'm usually positive on equity-oriented interventions.\nMutiny\nMutiny looks good if your premises are ~the opposite of the rowers'. You might think that the world today operates under a broken \"system,\" and/or that we fundamentally have the wrong sorts of people and/or institutions in power. If this is your premise, it implies that what we tend to count as \"progress\" (particularly increased wealth and technological capabilities) is liable to make things worse, or at least not better. Instead, the most valuable thing we can do is get at the root of the issue and change the fundamental way that power is exercised and resources are allocated. \nUnlike steering, this isn't about anticipating some particular future event or world-state. Instead, it's about rethinking/reforming the way the world operates and the way decisions are made. Instead of focusing on where the ship is headed, it's focused on who's running the ship.\nThis framework often emerges in criticisms of charity, philanthropy and/or effective altruism that point to the paradox of trying to make the world better using money obtained from participating in a problematic (capitalist) system - or occasionally in pieces by philanthropists themselves on the importance of challenging the fundamental paradigms the world is operating in. Some examples: Slavoj Žižek,7 Anand Giridharas,8 Guerrilla Foundation,9 and Peter Buffett.10. Often, but not always, people in the \"mutiny\" category identify with (or at least use language that is evocative of) socialism or Marxism.\nOf the five categories, mutiny is the one I feel most unsatisfied with my understanding of. It seems that people use language about fundamental systems change to (a) sometimes mean something tangible, radical, and revolutionary like the abolition of private property; to (b) sometimes mean something that seems much more modest and that I would classify more as \"equity,\" such as working toward greatly increased redistribution of wealth;12 and to (c) sometimes mean a particular emotional/tonal attitude unaccompanied by any distinctive policy platform.13 And it's often unclear which they mean.\n(a) is the one I'm trying to point at with the \"mutiny\" idea. It's also the one that seems to go best with claims that it's problematic to e.g. \"participate in capitalism\" and then do philanthropy. (It's unclear to me how, say, running a hedge fund undermines (b) or (c).)\nI am currently skeptical of (a), because:\nI haven't heard much in the way of specific proposals for how the existing \"system\" could be fundamentally reformed, other than explicitly socialist and Marxist proposals such as the abolition of private property, which I don't support.\nI am broadly sympathetic to Rob Wiblin's take on revolutionary change: \"Effective altruists are usually not radicals or revolutionaries ... My attitude, looking at history, is that sudden dramatic changes in society usually lead to worse outcomes than gradual evolutionary improvements. I am keen to tinker with government or economic systems to make them work better, but would only rarely want to throw them out and rebuild from scratch. I personally favour maintaining and improving mostly market-driven economies, though some of my friends and colleagues hope we can one day do much better. Regardless, this temperament for ‘crossing the river by feeling the stones’ is widespread among effective altruists, and in my view that’s a great thing that can help us avoid the mistakes of extremists through history. The system could be a lot better, but one only need look at history to see that it could also be much worse.\"\nAs stated above, I broadly think that the world has made and continues to make astonishing positive progress, which doesn't put me in a place of wanting to \"burn down the existing order\" (at least without a clearer idea of what might replace it and why the replacement is promising). I'm particularly unsympathetic to claims that \"capitalism\" or \"the existing system\" is the root cause of global poverty. I think that global poverty is the default condition for humans, and the one that nearly all humans existed under until relatively recently.\nTo be clear, I don't mean here to be advocating against all radical views. A radical view is anything that is well outside the Overton window, and I have many such views. And I am sympathetic to many views that many might call \"anticapitalist\" or \"revolutionary,\" such as that we should have dramatically more redistribution of wealth. \nI am also generally sympathetic to both (b) and (c) above. \nCategorizing worldviews\nHere's a mapping from some key combinations of rowing/equity/mutiny to familiar positions in current political discourse:\nRowing\nEquity\nMutiny\nAKA\nExtreme Marxists\n \nThe radical left\n \nThe less radical, but still markets- and growth-skeptical, left\n \nMany conservatives (\"anchoring\")\n \nLibertarians, economic conservatives\n \n\"Neoliberals,\" the \"pro-market left\"\n \nI've left out steering because I see it as mostly orthogonal to (and usually simply not present in) most of today's political discourse. I've represented \"anchoring\" as a row rather than a column, because I think it is mostly incompatible with the others. And I've left out worldviews that are positive on both rowing and mutiny (I think there are some worldviews that might be described this way, but they're fairly obscure).14\nSo?\nWe can make up categories and put people in them all day. What does this taxonomy give us?\nThe main thrust for me is clarifying what people in different camps are disagreeing about, especially when they seem to be talking past each other by using completely different definitions of “improving the world.”\nI think this framework is also useful for highlighting the role of one’s understanding of history in these disagreements. It’s far from obvious, a priori, whether the best thing to work on is rowing, steering, anchoring, equity, or mutiny, especially when we are so foggy on where a ship is heading by default. It really matters whether you think that increases in wealth and technological capability have had good effects so far, whether this has come about through deliberate planning or blind “forging ahead,” and whether there are particular reasons to expect the future to diverge from the past on these points. \nAccordingly, when confronting one camp from another, I think it’s helpful when possible to be explicit about one’s assumptions regarding how things have gone so far, and regarding the broad track records of rowing, steering, anchoring, equity and mutiny. History doesn’t give us clear, pre-packaged answers on these questions - different people will look at the same history and see very different things - but I think it’s good to have views on these matters, even if only lightly informed to start, and to look out for information about history that could revise them.Footnotes\n Though as discussed below, it's often unclear what \"capitalism\" means in this sort of context. ↩\n While these sorts of institutions often lead with the goal of fighting disease, they tend to fund basic science with very open-ended goals. ↩\n For example, The Precipice argues that \"Fueled by technological progress, our power has grown so great that for the first time in humanity’s long history, we have the capacity to destroy ourselves.\" The book sees \"anthropogenic\" global catastrophic risks as the dominant ones, and I agree. ↩\n I use the term \"historical\" in order to be agnostic on whether this was \"true\" Communism or reflects badly on Marxist philosophy. ↩\n See AI Impacts' attempts to find examples of helpful early actions on risks and Open Philanthropy on historical long-range forecasting. Other examples that have been suggested to me: early action to stop damage to the ozone layer, nuclear nonproliferation action, perhaps the US's approach to the Cold War. ↩\n See Open Philanthropy on historical long-range forecasting for what past efforts look like. There are many longtermist discussions of long-range predictions that seem significantly better on the dimensions covered in the post. ↩\n \"There is a chocolate-flavoured laxative available on the shelves of US stores which is publicised with the paradoxical injunction: Do you have constipation? Eat more of this chocolate! – i.e. eat more of something that itself causes constipation. The structure of the chocolate laxative can be discerned throughout today’s ideological landscape ... We should have no illusions: liberal communists [his term for the Davos set] are the enemy of every true progressive struggle today. All other enemies – religious fundamentalists, terrorists, corrupt and inefficient state bureaucracies – depend on contingent local circumstances. Precisely because they want to resolve all these secondary malfunctions of the global system, liberal communists are the direct embodiment of what is wrong with the system ... Etienne Balibar, in La Crainte des masses (1997), distinguishes the two opposite but complementary modes of excessive violence in today’s capitalism: the objective (structural) violence that is inherent in the social conditions of global capitalism (the automatic creation of excluded and dispensable individuals, from the homeless to the unemployed), and the subjective violence of newly emerging ethnic and/or religious (in short: racist) fundamentalisms. They may fight subjective violence, but liberal communists are the agents of the structural violence that creates the conditions for explosions of subjective violence.\" ↩\n \"\"If anyone truly believes that the same ski-town conferences and fellowship programs, the same politicians and policies, the same entrepreneurs and social businesses, the same campaign donors, the same thought leaders, the same consulting firms and protocols, the same philanthropists and reformed Goldman Sachs executives, the same win-wins and doing-well-by-doing-good initiatives and private solutions to public problems that had promised grandly, if superficially, to change the world—if anyone thinks that the MarketWorld complex of people and institutions and ideas that failed to prevent this mess even as it harped on making a difference, and whose neglect fueled populism’s flames, is also the solution, wake them up by tapping them, gently, with this book. For the inescapable answer to the overwhelming question—Where do we go from here?—is: somewhere other than where we have been going, led by people other than the people who have been leading us.\" ↩\n \"EA’s approach of doing ‘the most good you can now’ without, in our opinion, questioning enough the power relationships that got us to the current broken socio-economic system, stands at odds with the Guerrilla Foundation’s approach. Instead, we are proponents of radical social justice philanthropy, which aims to target the root causes of the very system that has produced the symptoms that much of philanthropy, including EA, is trying to treat (also see here and here) ... By asking these questions, EA seems to unquestioningly replicate the values of the old system: efficiency and cost-effectiveness, growth/scale, linearity, science and objectivity, individualism, and decision-making by experts/elites ... \" ↩\n \"we will continue to support conditions for systemic change ... It’s time for a new operating system. Not a 2.0 or a 3.0, but something built from the ground up. New code.\" (Though also note this quote: \"I’m really not calling for an end to capitalism; I’m calling for humanism.\") ↩\n(Footnote deleted) ↩\n For example, see the \"Class\" chapter of How to Be an Antiracist, where the author speaks of capitalism and racism as \"conjoined twins\" but then states that he is defining \"capitalism\" as being in opposition to a number of not-very-radical-seeming goals such as increased redistribution of wealth and monopoly prevention. He speaks positively of Elizabeth Warren despite her statement that she is \"capitalist to the bone,\" and says \"if Warren succeeds, then the new economic system will operate in a fundamentally different way than it has ever operated before in American history. Either the new economic system will not be capitalist or the old system it replaces was not capitalist.\" ↩\n For example, a self-identified socialist states: \"There’s a great Eugene Debs quote, 'While there is a lower class, I am in it. While there is a criminal element, I am of it. And while there is a soul in prison, I am not free.' That’s not a description of worker ownership — that’s a description of looking at the world and feeling solidarity with people who are at the bottom with the underclass. And I think that is just as important to what animates socialists as some idea about how production should be managed ... People focus a lot on the question of central planning. But I’ve been doing interviews of socialists, interviewing DSA people around the country, and the unifying thread really is not a very clear vision for how a socialist economy will work. It is a deep discomfort and anger that occurs when you look at the world and you see power relationships and you see a small class of people owning so much and a large number of people working so hard and having so little. There are socialist divides over nearly every question, but this is the one thing that socialists all come together on.\" ↩\n I sometimes encounter people who seem to think something along the lines of: \"Progress is slowing down because our culture has become broken and toxic. The only hope for getting back to a world capable of a reasonable pace of scientific, technological, and economic progress is to radically overhaul everything about our institutions and essentially start from scratch.\" I expect to write more on the general theme of what we should make of \"progress slowing down\" in the future. ↩\n", "url": "https://www.cold-takes.com/rowing-steering-anchoring-equity-mutiny/", "title": "Rowing, Steering, Anchoring, Equity, Mutiny", "source": "cold.takes", "source_type": "blog", "date_published": "2021-11-09", "id": "e7c57e9acffaf80d3e93b4b0f6b729df"} -{"text": "For the last 200 years or so, life has been getting better for the average human in the world.\nWhat about for the 300,000+ years before that?\nIn order to answer this, one of the hardest things we need to do is get some sense of pre-agriculture quality of life. \nAgriculture is estimated to have started drastically changing lifestyles, and leading to what we tend to think of as \"civilization,\" around 10,000 BCE. Agriculture roughly means living off of domesticated plants and livestock, allowing a large population to live in one area indefinitely rather than needing to move as it runs low on resources.\nSo most years of human history were pre-agriculture (and thus pre-\"civilization\").\nThe terms \"hunter-gatherer\" and \"forager\" are commonly used to refer to societies that came before (or simply never took up) agriculture.\nThis appears to be a topic where there is a lot of room for controversy and confusion. \nMany people seem to endorse a \"pre-agriculture Eden\" hypothesis: that the pre-agriculture world was a sort of paradise, or at least better than life in rich countries today. There are logical reasons that this might be the case; below, I'll lay some of those out and give some quotes from Wikipedia that convey the \"pre-agriculture Eden\" vibe.\nBut there's also a case to be made that the world before agriculture was a world of starvation, disease and violence - that the human story is one of continuous, consistent beneficial progress, and that the pre-agriculture world was the lowest (because the earliest) point on it.\nMy tentative position is that neither of these is quite right. I think the pre-agriculture world was noticeably worse than today's world (at least in developed countries), but probably some amount better than the world that immediately followed agriculture.\nThis post will focus on the former: the comparison between the pre-agriculture world and today's world, or whether the \"pre-agriculture Eden\" hypothesis is right. I'll argue that today's best evidence suggests that today's developed world has significantly better quality of life than the pre-agriculture world. By doing so, I'll also lay the groundwork for a future post about what happened to quality of life in between the pre-agriculture world and today.\nThis image illustrates how this post fits into the full \"Has Life Gotten Better?\" series.\nBelow, I'll:\nGive more detail on the basic pre-agriculture Eden hypothesis. \nGo through each of the dimensions on which I tried to compare pre-agriculture and current quality of life. These are summarized by the table below, which uses the same structure as for my previous post:\nProperty\nPre-agriculture vs. today's developed world\nPoverty\nMostly assessed via hunger and health - see below.\n \nHunger\nPre-agriculture height looks very low by today's standards, suggesting malnutrition. \n \nHealth (physical)\nPre-agriculture infant and child mortality look extremely high by today's standards (20%+ before age 1, 35%+ before age 10 (today's high-income countries are under 1% before age 10). Post-childhood life expectancy also looks a lot worse than today's.\n \nViolence\nPre-agriculture deaths from violence look more common, compared to today's developed world.\n \nMental health\nUnknown - while there are some claims floating around about strong mental health for hunter-gatherers/foragers, I haven't seen anything that looks like solid evidence about this, and similar claims re: gender relations don't hold up to scrutiny.\n \nSubstance abuse and addiction\nPresumably not an issue pre-agriculture (though this isn't 100% clear).\n \nDiscrimination \nHard to compare, but pre-agriculture gender relations seem bad.\n \nTreatment of children\nUnknown\n \nTime usage\nUnknown/disputed\n \nSelf-assessed well-being\nUnknown\n \nEducation and literacy\nLiteracy would be higher in today's world (though it isn't clear whether this matters for quality of life)\n \nFriendship and community\nUnknown\n \nFreedom\nUnknown\n \nRomantic relationship quality\nUnknown\n \nJob satisfaction\nUnknown\n \nMeaning and fulfillment\nUnknown\n \nThe pre-agriculture Eden hypothesis\nWikipedia's entry for \"hunter-gatherer\" (quote in footnote1) gives a \"pre-agriculture Eden\" vibe, and specifically claims that:\nHunter-gatherers are not \"poor,\" or at least, they \"are mostly well-fed, rather than starving,\" and have more leisure time than most people today.\nHunter-gatherers \"tend to have an egalitarian social ethos,\" without permanent leaders (\"hunter-gatherers do not have permanent leaders; instead, the person taking the initiative at any one time depends on the task being performed\").\n(Covered previously) Hunter-gatherers have egalitarian gender relations specifically, \"with women roughly as influential and powerful as men.\"\nIn addition:\nI've seen it claimed that \"coronary heart disease, obesity, [and a number of other diseases] ... are rare or virtually absent in hunter–gatherers and other non-westernized populations.\" This is usually given as an argument that hunter-gatherers have excellent diets that we should emulate.\nI've seen more occasional (and as far as I can tell, very thinly cited) claims that \"Hunter-gatherers seem to possess exceptional mental health\" / \"depression is a 'disease of modernity.'\"\nWhy might all of this be? The basic idea would be that:\nFor most of human history, humans lived in small, \"nomadic\" bands (more on this in a future post): constantly moving from one location to another, since any given location had limited food supply. People who did well in this setting reproduced and people who did poorly did not, so we (the descendants of many people who did well) are well adapted to this lifestyle.\nBut about 10,000 years ago, much of the world transitioned to agriculture, which meant that instead of moving from place to place, we were able to consistently produce large amounts of food by staying put. This led to an explosion in population, and a division of labor: farmers could produce enough food for everyone, while other people specialized in other things such as religion, politics, and war.\n10,000 years isn't a ton of time from the standpoint of natural selection, so we're still adapted to the original environment, and we're \"out of place\" in a more modern lifestyle.\nTo put some of my cards on the table early, I think this reasoning could be right when it comes to some problems in the modern world, but I don't tend to believe it strongly by default.\nI don't think that \"adapting to\" an environment should be associated with \"thriving\" in it - especially not if \"thriving\" is supposed to include things like egalitarianism. In my view, \"adapting to\" an environment simply means becoming good at competing with others to reproduce in that environment - you could be fully \"adapted\" to your environment and still frequently be hungry, diseased, violent, hierarchical, sexist, and many other nasty things that we regularly see from animals in their natural environments.\nAdditionally, there are many diverse lifestyles in the modern world. So any problem that seems to exist ~everywhere in modern civilization seems to me like it's most likely (by default) to be \"a risk of being human.\" \nThat said, I don't think either of these points is absolute. There are some ways in which nearly all modern societies differ from forager/hunter-gatherer societies, and some of these might be causing novel problems that didn't exist in our ancestral environment. So I consider the \"pre-agriculture Eden\" hypothesis plausible enough to be interesting and important, and I'd like to know whether the facts support it.\nEvidence on different dimensions of quality of life\nBelow, I'll go through the best evidence I've found on the dimensions of quality of life from the table above.\nFor more complex topics, I mostly rely on previous more detailed posts I've made. Otherwise, I tend to rely by default on The Lifeways of Hunter-Gatherers (which I abbreviate as \"Lifeways\"), for reasons outlined here.\nGender relations\nI discussed pre-agriculture gender relations at some length in a previous post. In brief:\nAccording to the best/most systematic available evidence (from observing modern non-agricultural societies), pre-agriculture gender relations seem bad. For example, most societies seem to have no possibility for female leaders, and limited or no female voice in intra-band affairs.\nThere are a lot of claims to the contrary floating around, but (IMO) without good evidence. For example, the Wikipedia entry for \"hunter-gatherer\" gives the strong impression that nonagricultural societies have strong gender equality, as does a Google search for \"hunter-gatherer gender relations.\" But the sources cited seem very thin and often only tangentially related to the claims; furthermore, they often seem to acknowledge significant inequality, while seemingly trying to explain it away with strange statements like \"women know how to deal with physical aggression, unlike their Western counterparts.\" (Verbatim quote.)\nI think it's somewhat common to find rosy pictures of pre-agriculture society, with thin and even contradictory citations. I think this is worth keeping in mind for the below sections (where I won't go into as much depth as I did for gender relations).\nViolence\nPre-agriculture violence seems to be a hotly debated topic among anthropologists and archaeologists; the debates can get quite intricate and confusing, and I've spent more time than I hoped to trying to understand both sides and where they disagree. \nMy take as of now is that overall pre-agriculture violence was likely quite high by the standards of today's developed countries. \nThis was complex enough that I devoted a separate post entirely to my research and reasoning on this point. Here's the summary on \"nomadic forager\" societies (which are thought to be our best clue at what life was like in the very distant past) vs. today's world:\nSociety\nViolent deaths per 100,000 people per year\nMurngin (nomadic foragers)\n \n330\n \nTiwi (nomadic foragers)\n \n160\n \n!Kung (aka Ju/'hoansi) (nomadic foragers)\n \n42\n \nWorld (today) - high estimate\n \n35.4\n \nUSA (today) - high estimate\n \n35.2\n \nWorld (today) - low estimate\n \n7.4\n \nWestern Europe (today) - high estimate\n \n6.6\n \n USA (today) - low estimate\n \n6.2\n \nWestern Europe (today) - low estimate\n \n0.3\n \nHunger\nI'm going to examine both hunger and health, since both seem among the easiest ways to get at the question of whether pre-agriculture society had meaningfully higher \"poverty\" than today's in some sense.\nThe most relevant-seeming part of Lifeways is Table 3-5, which gives information on height, weight, and calorie consumption for 8 forager societies. My main observation (and see footnote for some other notes2) is that the height figures are strikingly low: 6 of the 7 listed averages for males are under 5'3\", and 6 of the 7 listed averages for females are under 5'0\". (Compare to 5'9\" for US males and 5'3.5\" for US females.)\nThe height figure seems important because height is often used as an indicator for early-childhood nutritional status,3 and seems to quite reliably increase with wealth at the aggregate societal level (see Our World in Data's page on height). Height seems particularly helpful here because it is relatively easy to measure in a culture-agnostic way and can even be estimated from archaeological remains.\nWhat I've been able to find of other evidence (including archaeological evidence) about height suggests that the pre-agriculture period had average heights a bit taller than the figures above, but still short by modern standards, though this evidence seems quite limited (details in footnote).4\nMy bottom line is: the evidence suggests that pre-agriculture people had noticeably shorter heights than modern people, which suggests to me that their early-childhood nutrition was worse. \nAs for Wikipedia's claim that \"Contrary to common misconception, hunter-gatherers are mostly well-fed,\" those who have read my previous piece on Wikipedia and hunter-gatherers might be able to guess what's coming next. \nThe citation for that statement appears to be an entire textbook (no page number given), which I found a copy of here (the link unfortunately seems to have broken since then).\nThe vast majority of the textbook doesn't seem to be relevant to this topic at all. \nFrom skimming the table of contents, my best guess at the part being cited is on page 328: \"The notion that hunters and gatherers live on the brink of starvation is a popular misconception; numerous studies have shown that hunters and gatherers are generally well nourished.\" No citations are given.\nHealth\nIt seems to me that the best proxy for health, in terms of having very-long-run data, is early-in-life mortality (before age 1, before age 5, before age 15). I've found a number of collections of data on this, and nothing else detailed regarding health for prehistoric or foraging populations (other than one analysis that looks at full life expectancy; I will discuss this later on).\nTable 7-7 in Lifeways lists a number of figures for deaths before ages 1 and 15, based on modern foraging societies. Taking a crude average yields 20% mortality before the age of 1, 35% before the age of 15.\nOther sources I've consulted (including archaeological sources) give an even grimmer picture, in some cases 50%+ mortality before the age of 15 (details in footnote).5\nThese are enormous early-in-life mortality rates compared to the modern world, where no country has a before-age-15 mortality rate over 15%, and high-income countries appear to be universally below 1% (source).\nWhat about life expectancy after reaching age 10?\nWhat I've found also suggests that pre-agriculture life expectancy was lower than today's at other ages, too - it isn't just a matter of early-in-life mortality.\nGurven and Kaplan 2007 (the only paper I've found that estimates pre-agriculture life expectancy, as opposed to early-in-life mortality) observes that its modeled life-expectancy-by-age curves are similar for modern foraging societies and 1751-1759 Sweden (Figure 3):\n(Note also how much worse the estimate of prehistoric life expectancy looks at every age, although Gurven and Kaplan question this data.6)\nAs noted at Our World in Data, it appears that life expectancy conditional on surviving to age 10 has improved greatly in Sweden and other countries since ~1800 (before which point it appears to have been pretty flat).\nAlso see these charts, showing life expectancy at every age improving significantly in England and Wales since ~1800.\nSee footnote for one more data source with a similar bottom line.7\nBottom line: life expectancy looks to have been a lot worse pre-agriculture than today. I don't think violent deaths account for enough death (see previous section) to play a big role in this; disease and other health factors seem most likely.\nWhat about diseases of affluence?\nFrom Wikipedia:\nDiseases of affluence ... is a term sometimes given to selected diseases and other health conditions which are commonly thought to be a result of increasing wealth in a society ... Examples of diseases of affluence include mostly chronic non-communicable diseases (NCDs) and other physical health conditions for which personal lifestyles and societal conditions associated with economic development are believed to be an important risk factor — such as type 2 diabetes, asthma, coronary heart disease, cerebrovascular disease, peripheral vascular disease, obesity, hypertension, cancer, alcoholism, gout, and some types of allergy.\nI think it's plausible that the pre-agriculture world had less of these \"diseases of affluence\" than the modern world (especially obesity and conditions connected to obesity, due to the seemingly much greater access to food).\nI don't think it's slam-dunk clear for some of these, such as cancer and heart disease. I've dug into primary sources a little bit, and not-too-surprisingly, data quality and rigor seems to often be low. In particular, I quite distrust claims like \"Someone spent __ years in ___ society and observed no cases of ____.\" Modern foraging societies seem to be quite small, and diagnosis could be far from straightforward.\nI haven't dug in heavily on this (though I may in the future), because:\nMy initial scans have made it look like it would be a lot of work to follow often-circuitous trails of references to often-hard-to-find sources.\nEven if it did turn out that \"diseases of affluence\" were extremely rare pre-agriculture, this wouldn't tip me into thinking health was better overall, pre-agriculture. When wondering whether undernutrition and \"diseases of poverty\" are worse than obesity and \"diseases of affluence,\" I think a good default is to prefer the condition with less premature death.\nMental health and wellbeing\nI haven't found anything that looks like systematic data on pre-agriculture mental health or subjective wellbeing. There are some suggestive Google results, but as in other cases, these don't seem well-cited. For example, as of this writing, Google's \"answer box\" reads:\nBut Thomas 2006 is this not-very-systematic-looking source.\nI won't go into this topic more, because having gone through the above topics, I don't find the basic plausibility of \"reliable data shows better-than-modern mental health among foraging societies\" high enough to be worth a deep dive.\nUpdate: a commenter linked to a study reporting high happiness for one hunter-gatherer society (the Hadza). My thoughts here.\nLeisure and equality\nI haven't gone into depth on claims that pre-agriculture societies had more leisure, and lower inequality, compared to today's.\nReasons for this:\nThe claims seem disputed. For example, here are excerpts on both topics from the first chapter of Lifeways:\nHow much do hunter-gatherers work, and why? Reexaminations of Ju/’hoansi and Australian work effort do not support Sahlins’s claim [of very low work hours] . Kristen Hawkes and James O’Connell (1981) found a major discrepancy between the Paraguayan Ache’s nearly seventy-hour work week and the Ju/’hoansi’s reportedly twelve- to nineteen-hour week. The discrepancy, they discovered, lay in Lee’s definition of work. Lee counted as work only the time spent in the bush searching for and procuring food, not the labor needed to process food resources in camp. Add in the time it takes to manufacture and maintain tools, carry water, care for children, process nuts and game, gather firewood, and clean habitations, and the Ju/’hoansi work well over a forty-hour week (Lee 1984; Isaac 1990; Kaplan 2000). In addition, one of Sahlins’s Australian datasets was generated from a foraging experiment of only a few days’ duration, performed by nine adults with no dependents. There was little incentive for these adults to forage much (and apparently they were none too keen on participating – see Altman [1984, 1987]; Bird-David [1992b]) ...\nOthers have found that the alleged egalitarian relations of hunter-gatherers are pervaded by inequality, if only between the young and the old and between men and women (Woodburn 1980; Hayden et al. 1986; Leacock 1978; see Chapters 8 and 9). Food is not shared equally, and women may eat less meat than do men (Speth 1990, 2010; Walker and Hewlett 1990). Archaeologists find more and more evidence of nonegalitarian hunter-gatherers in a variety of different environments (Price and Brown 1985b; Arnold 1996a; Ames 2001), most of whom lived under high population densities and stored food on a large scale. Put simply, we cannot equate foraging with egalitarianism.\nI'm skeptical that anthropologists can get highly reliable reads on the degree to which foraging societies have high leisure, or low inequality, in a deep sense. I imagine that if an anthropologist (from e.g. another planet) visited modern society, they might conclude that we have high leisure, or low inequality, based on things like:\nHaving a tough time disentangling \"work\" from \"leisure.\" For example, a lot of modern jobs are office jobs, and a lot of on-the-job hours are spent doing what might look like pleasant socializing. (Similarly, foragers \"socializing\" may be internally conceiving this as necessary work rather than fun - it seems like it could be quite hard to draw this line.)\nBeing confused by social norms encouraging people to downplay real inequalities. For example, I've seen a fair number of references to the fact that people in foraging societies will sometimes mock a successful hunter to \"cut them down to size\" and enforce equality. But if this were strong evidence of low inequality, I think we'd have similar evidence from modern society from things like the Law of Jante; \"humblebragging\"; the fact that many powerful, wealthy people in modern times tend to dress simply and signal \"authenticity\"; etc.\nEven if I were convinced that pre-agriculture societies had large amounts of leisure and low amounts of inequality, this wouldn't move me much toward believing they were an \"Eden,\" given above observations about violence, hunger and health. It would be one thing if foragers were healthy, well-fed, well-resourced, and lived in conditions of high leisure and low inequality. But high leisure and low inequality seem much less appealing in the context of what looks to me best described as \"poverty\" with respect to health and nutrition.\nHaving vetted other \"Eden\"-like claims about pre-agriculture societies, I've developed a prior that these claims are likely to be both time-consuming to investigate and greatly exaggerated. See previous sections.\nWith all of that said, as I'll discuss in future posts, I do think there are signs that at least some foraging societies were noticeably more egalitarian than the societies that came after them - just not more so than today's developed world.\nNext in series: Did life get better during the pre-industrial era? (Ehhhh)\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n \"Contrary to common misconception, hunter-gatherers are mostly well-fed, rather than starving ... \n \"Hunter-gatherers tend to have an egalitarian social ethos, although settled hunter-gatherers (for example, those inhabiting the Northwest Coast of North America) are an exception to this rule. Nearly all African hunter-gatherers are egalitarian, with women roughly as influential and powerful as men. For example, the San people or 'Bushmen' of southern Africa have social customs that strongly discourage hoarding and displays of authority, and encourage economic equality via sharing of food and material goods ...\n \"Anthropologists maintain that hunter-gatherers do not have permanent leaders; instead, the person taking the initiative at any one time depends on the task being performed. In addition to social and economic equality in hunter-gatherer societies, there is often, though not always, sexual parity as well ... \n \"At the 1966 'Man the Hunter' conference, anthropologists Richard Borshay Lee and Irven DeVore suggested that egalitarianism was one of several central characteristics of nomadic hunting and gathering societies because mobility requires minimization of material possessions throughout a population. Therefore, no surplus of resources can be accumulated by any single member ...\n \"At the same conference, Marshall Sahlins presented a paper entitled, 'Notes on the Original Affluent Society' ... According to Sahlins, ethnographic data indicated that hunter-gatherers worked far fewer hours and enjoyed more leisure than typical members of industrial society, and they still ate well. Their 'affluence' came from the idea that they were satisfied with very little in the material sense. Later, in 1996, Ross Sackett performed two distinct meta-analyses to empirically test Sahlin's view. The first of these studies looked at 102 time-allocation studies, and the second one analyzed 207 energy-expenditure studies. Sackett found that adults in foraging and horticultural societies work, on average, about 6.5 hours a day, whereas people in agricultural and industrial societies work on average 8.8 hours a day. ↩\n \n I've spot-checked the primary sources a bit: I looked up the first three rows and the last row, and found the methodology sections. I confirmed that these are all adults. I didn't check the others. I put the table in Google Sheets and added some of my own derived figures here. In addition to the observations about height, I note:\nI don't think we can make much of raw calorie consumption estimates, since more active lifestyles could require more calories.\nThe BMI figures would qualify as \"underweight\" for Ju/'hoansi and Anbarra females; others seem to be generally on the low side of the normal range. (These are averages, and could be consistent with having a significant percentage of individuals outside the normal range. \nNote that calculating a BMI from average height and average weight is not the same as looking at average BMI. But I played with some numbers here and it seems unlikely to be a big difference. I'd guess my BMI calculations would lead to slight overstatement of average BMI (and a slightly larger overstatement if well-fed people were both taller and higher-BMI). ↩\n For example, see here, here (Our World in Data), here. ↩\n Our World In Data cites a figure for the Mesolithic era (shortly before the dawn of agriculture) of 1.68m, or about 5'6\". This figure comes from A Farewell to Alms, which in turn cites a study I was unable to find anywhere. It isn't explicitly stated which sexes the height figure refers to, but this chart implies to me that Our World in Data (at least) is interpreting it as a quite low height by modern standards.\n Searching for recent papers on height estimates from archaeological remains, I found a 2019 paper claiming that \"The earliest anatomically modern humans in Europe, present by 42-45,000 BP (5, 6), were relatively tall (mean adult male height in the Early Upper Paleolithic was ∼174 cm [about 5'8.5\"]). Mean male stature then declined from the Paleolithic to the Mesolithic (∼164 cm [about 5'4.5\"]) before increasing to ∼167 cm [about 5'6\"] by the Bronze Age (4, 7). Subsequent changes, including the 20th century secular trend increased height to ∼170-180 cm [about 5'7\" to 5'11\"] (1, 4).\" Its main two sources are this paper, noting a relative scarcity of data and trying to fill in gaps using mathematical analysis, and this book that looks interesting but costs $121. I haven't found many other recent analyses of this topic (and nothing contradicting these claims in any case).\n Table 3.9 of A Farewell to Alms also collects height data on modern foraging societies, and the median figure in the table is about 1.65m, or about 5'5\". ↩\n Table 1 in Volk and Atkinson 2012 (the source used by Our World in Data) reports a mean of 26.8% mortality before the age of 1 (\"infant mortality rate\") and 48.8% mortality before the age of 15 (\"child mortality rate\"), again based on modern foraging societies.\n The only sources I've been able to find pulling together estimates from archaeological data are:\nTrinkaus 1995, on Neanderthals. In Table 4 (based on a small number of sources), it gives an average of 40.5% mortality before age 1, 13.2% mortality between ages 1-5, 6.6% mortality between ages 5-10, and 7.9% mortality between ages 10-20 (which would cumulatively imply 68.2% mortality before the age of 15).\nTrinkaus 2011, which does not give mortality estimates but argues for a conclusion of \"low life expectancy and demographic instability across these Late Pleistocene human groups ... [the data] provide no support for a life history advantage among early modern humans.\" ↩\n \"Estimated mortality rates then increase dramatically for prehistoric populations, so that by age 45 they are over seven times greater than those for traditional foragers, even worse than the ratio of captive chimpanzees to foragers. Because these prehistoric populations cannot be very different genetically from the populations surveyed here, there must be systematic biases in the samples and/or in the estimation procedures at older ages where presumably endogenous senescence should dominate as primary cause of death. While excessive warfare could explain the shape of one or more of these typical prehistoric forager mortality profiles, it is improbable that these profiles represent the long-term prehistoric forager mortality profile. Such rapid mortality increase late in life would have severe consequences for our human life history evolution, particularly for senescence in humans.\" ↩\nTrinkaus 1995 has a more detailed breakdown of mortality rates by age range for both a few modern forager societies (Table 3) and for archaeological remains (Table 4): \n \n Here \"Neonate\" means \"before 1 year,\" \"Child\" Is age 1-5, \"Juvenile\" is age 5-10, \"Adolescent\" is age 10-20, \"Young adult\" Is age 20-40, and \"Old adult\" is age 40+. These numbers seem extremely high for all ages under 40 by today's standards; for a comparison point see this detailed life expectancy table for the US, which implies (based on calculations I'm not showing here but that are straightforward to do) \"Juvenile\" mortality of 0.06%, \"Adolescent\" mortality of 0.6%, and \"Young adult\" mortality of 3.7%. (All figures are for males; female figures would be lower still.) ↩\n", "url": "https://www.cold-takes.com/was-life-better-in-hunter-gatherer-times/", "title": "Was life better in hunter-gatherer times?", "source": "cold.takes", "source_type": "blog", "date_published": "2021-10-26", "id": "089794933d55d49a9737f506067a9551"} -{"text": "As part of exploring trends in quality of life over the very long run, I've been trying to understand how good life was during the \"pre-agriculture\" (or \"hunter-gatherer\"1) period of human history. We have little information about this period, but it lasted hundreds of thousands of years (or more), compared to a mere ~10,000 years post-agriculture. \n(For this post, it's not too important exactly what agriculture means. But it roughly refers to being able to domesticate plants and livestock, rather than living only off of existing resources in an area. Agriculture is what first allowed large populations to stay in one area indefinitely, and is generally believed to be crucial to the development of much of what we think of as \"civilization.\"2)\nThis image illustrates how this post fits into the full \"Has Life Gotten Better?\" series.\nThere are arguments floating around implying that the hunter-gatherer/pre-agriculture period was a sort of \"paradise\" in which humans lived in an egalitarian state of nature - and that agriculture was a pernicious technology that brought on more crowded, complex societies, which humans still haven't really \"adapted\" to. If that's true, it could mean that \"progress\" has left us worse off over the long run, even if trends over the last few hundred years have been positive.\nA future post will comprehensively examine this \"pre-agriculture paradise\" idea. For now, I just want to focus on one aspect: gender relations. This has been one of the more complex and confusing aspects to learn about. My current impression is that:\nThere's no easy way to determine \"what the literature says\" or \"what the experts think\" about pre-agriculture gender in/equality. That is, there's no source that comprehensively surveys the evidence or expert opinion. \nAccording to the best/most systematic evidence I could find (from observing modern non-agricultural societies), pre-agriculture gender relations seem bad. For example, most societies seem to have no possibility for female leaders, and limited or no female voice in intra-band affairs.\nThere are a lot of claims to the contrary floating around, but (IMO) without good evidence. For example, the Wikipedia entry for \"hunter-gatherer\" gives the strong impression that nonagricultural societies have strong gender equality, as does a Google search for \"hunter-gatherer gender relations.\" But the sources cited seem very thin and often only tangentially related to the claims; furthermore, they: \nOften use reasoning that seems like a huge stretch to me. For example, one paper appears to argue for strong gender equality among particular Neanderthals based entirely on the observation that they seemed not to eat the sorts of foods traditionally gathered by women. (The implication being that since women must have been doing something, they were probably hunting along with the men.)\n \nOften seem to acknowledge significant inequality, while seemingly trying to explain it away with strange statements like \"women know how to deal with physical aggression, unlike their Western counterparts.\" (Verbatim quote.)\n \nSeem to get disproportionate attention from very thin evidence, such as an analysis of 27 skeletal remains being featured in the New York Times and a National Geographic article that ranks 2nd in the Google results for \"hunter-gatherer gender relations.\"\nBased on the latter points, it seems that there are people trying hard to make the case for gender equality among hunter-gatherers, but not having much to back up this case. One reason for this might be a fear that if people think gender inequality is \"ancient\" or \"natural,\" they might conclude that it is also \"good\" and not to be changed. So for the avoidance of doubt: my general perspective is that the \"state of nature\" is bad compared to today's world. When I say that pre-agricultural societies probably had disappointingly low levels of gender equality, I'm not saying that this inequality is inevitable or \"something we should live with\" - just the opposite.\nSystematic evidence on pre-agriculture gender relations\nThe best source on pre-agriculture gender relations I've found is Hayden, Deal, Cannon and Casey (1986): \"Ecological Determinants of Women's Status Among Hunter/Gatherers.\" I discuss how I found it, and why I consider it the best source I've found, here.\nIt is a paper collecting ethnographic data: data from anthropologists' observations of the relatively few people who maintain or maintained a \"forager\"/\"hunter-gatherer\" (nonagricultural) lifestyle in modern times. It presents a table of 33 different societies, scored on 13 different properties such as whether a given society has \"female voice in domestic decisions\" and \"possibility of female leaders.\" \nHere's the key table from the paper, with some additional color-coding that I've added (spreadsheet here):\nI've used red shading for properties that imply male domination, and blue shading for properties that imply egalitarianism (all else equal).3 The red shades are deeper than the blue shades because I think the \"nonegalitarian\" properties are much more bad than the \"egalitarian\" properties are good (for example, I think \"possibility of female leaders\" being \"absent/taboo\" is extremely bad; I don't think \"female voice in domestic decisions\" being \"Considerable\" makes up for it).\nFrom this table, it seems that:\n25 of 33 societies appear to have no possibility for female leaders.\n19 of 33 societies appear to have limited or no female voice in intraband affairs.\nOf the 6 societies (<20%) where neither of these apply: \nThe Dogrib have \"female hunting taboos\" and \"belief in the inferiority of females to males.\"\n \nThe !Kung and Tasaday have \"female hunting taboos.\"\n \nThe Yokuts have \"ritual exclusion of females.\"\n \nThe Mara and Agta seem like the best candidates for egalitarian societies. (The Mara have \"limited\" \"female control of children,\" but it's not clear how to interpret this.)\nOverall, I would characterize this general picture as one of bad gender relations: it looks as though most of these societies have rules and/or norms that aggressively and categorically limit women's influence and activities.\nI got a similar picture from the chapter on gender relations from The Lifeways of Hunter-Gatherers (more here on why I emphasize this source).\nIt states: \"even the most egalitarian of foraging societies are not truly egalitarian because men, without the need to bear and breastfeed children, are in a better position than women to give away highly desired food and hence acquire prestige. The potential for status inequalities between men and women in foraging societies (see Chapter 9) is rooted in the division of labor.\" \nIt also argues that many practices sometimes taken as evidence of equality (such as matrilocality) are not.\nThe (AFAICT poorly cited and unconvincing) case that pre-agriculture gender relations were egalitarian\nI think there are a fair number of people and papers floating around that are aiming to give an impression that pre-agriculture gender relations were highly egalitarian.\nIn fact, when I started this investigation, I initially thought that gender equality was the consensus view, because both Google searches and Wikipedia content gave this impression. Not only do both emphasize gender equality among foragers/hunter-gatherers, but neither presents this as a two-sided debate.\nBelow, I'll go through what I found by following citations from (a) the Wikipedia \"hunter-gatherer\" page; (b) the front page from searching Google for \"hunter-gatherer gender relations.\" I'm not surprised that Google and Wikipedia are imperfect here, but I found it somewhat remarkable how consistently the \"initial impression\" given was of strong gender equality, and how consistently this impression was unsupported by sources. I think it gives a good feel for the broader phenomenon of \"unsupported claims about gender equality floating around.\"\nWikipedia's \"hunter-gatherer\" page\nThe Wikipedia entry for \"hunter-gatherer\" (archived version) gives the strong impression that nonagricultural societies have strong gender equality. Key quotes:\nNearly all African hunter-gatherers are egalitarian, with women roughly as influential and powerful as men.[22][23][24] ... In addition to social and economic equality in hunter-gatherer societies, there is often, though not always, relative gender equality as well.[30]\nThe citations given don't seem to support this statement. Details follow (I look at notes 22, 23, 24, and 30 - all of the notes from the above quote) - you can skip to the next section if you aren't interested in these details, but I found it somewhat striking and worth sharing just how bad the situation seems to be here.\nNote 22 refers to a chapter (\"Gender relations in hunter-gatherer societies\") in this book. I found it to have a combination of:\nVery broad claims about gender equality, which I consider less trustworthy than the sort of systematic, specifics-based analysis above. Key quote: \nVarious anthropologists who have done fieldwork with hunter-gatherers have described gender relations in at least some foraging societies as symmetrical, complementary, nonhierarchical, or egalitarian. Turnbull writes of the Mbuti: “A woman is in no way the social inferior of a man” (1965:271). Draper notes that “the !Kung society may be the least sexist of any we have experienced” (1975:77), and Lee describes the !Kung (now known as Ju/’hoansi) as “fiercely egalitarian” (1979:244). Estioko-Griffin and Griffin report: “Agta women are equal to men” (1981:140). Batek men and women are free to decide their own movements, activities, and relationships, and neither gender holds an economic, religious, or social advantage over the other (K. L. Endicott 1979, 1981, 1992, K. M. Endicott 1979). Gardner reports that Paliyans value individual autonomy and economic self-sufficiency, and “seem to carry egalitarianism, common to so many simple societies, to an extreme” (1972:405).\nOf the five societies named, two (Batek, Paliyan) are not included in the table above; two (!Kung, Agta) are among the most egalitarian according to the table above (although the !Kung are listed as having female hunting taboos); and one (Mbuti) is listed as having \"ritual exclusion of females\" and no \"possibility of female leaders.\" I trust specific claims like the latter more than broader claims like \"A woman is in no way the social inferior of a man.\" \nI actually wrote that before noticing, in the next section, that the same author who says \"A woman is in no way the social inferior of a man\" also observes that \"a certain amount of wife-beating is considered good, and the wife is expected to fight back\" - of the same society!\nSeeming concessions of significant inequality, sometimes accompanied by defenses of this that I find bizarre. Some example quotes from the chapter:\n\"Some Australian Aboriginal men use threats of gang-rape to keep women away from their secret ceremonies. Burbank argues that Aborigines accept physical aggression as a 'legitimate form of social action' and limit it through ritual (1994:31, 29). Further, women know how to deal with physical aggression, unlike their Western counterparts (Burbank 1994:19).\"\n\"For the Mbuti, 'a certain amount of wife-beating is considered good, and the wife is expected to fight back' (Turnbull 1965:287), but too much violence results in intervention by kin or in divorce.\"\n\"Observing that Chipewyan women defer to their husbands in public but not in private, Sharp cautions against assuming this means that men control women: 'If public deference, or the appearance of it, is an expression of power between the genders, it is a most uncertain and imperfect measure of power relations. Polite behavior can be most misleading precisely because of its conspicuousness'\"\n\"Some foragers place the formalities of decision-making in male hands, but expect women to influence or ratify the decisions\"\n\"Aché men and women traditionally participated in band-level decisions, though 'some men commanded more respect and held more personal power than any woman.'\"\n\"Rather than assigning all authority in economic, political, or religious matters to one gender or the other, hunter-gatherers tend to leave decision-making about men’s work and areas of expertise to men, and about women’s work and expertise to women, either as groups or individuals\"\nOverall, this chapter actively reinforced my impression that gender equality among the relevant societies is disappointingly low on the whole.\nNote 24 goes to this paper. (Note 23 also cites it, which is why I'm skipping to Note 24 for now.) My rough summary is:\nMuch of the paper discusses a single set of human remains from ~9,000 years ago that the author believes was (a) a 17-19-year-old female who (b) was buried with big-game hunting tools.\nIt also states that out of 27 individuals in the data set the authors considered who (a) appear to have been buried with big-game hunting tools (b) have a hypothesized sex, 11 were female and 16 were male.\nI think the idea is that these findings undermine the idea that women couldn't be big-game hunters. \nI have many objections to this paper being one of three sources cited for the claim \"Nearly all African hunter-gatherers are egalitarian, with women roughly as influential and powerful as men.\" \nNone of these results are from Africa (they're from the Americas).\nThis is a single paper that seems to be engaging in a lot of guesswork around a small number of remains, and seems to be written with a pretty strong agenda (see the intro). In general, I think it's a bad idea to put much weight on a single data source; I prefer systematic, aggregative analyses like the one I examine above.\nIt already seems to be widely acknowledged that the amount of big-game female hunting in these societies is not zero4 (though it is believed to be rare and in some cases taboo), so a small-sample-size case where it was relatively common would not necessarily contradict what's already widely believed. \nFinally, what would it tell us if women participated equally in big-game hunting 9,000 years ago, given that (as the authors of this paper state) there are only \"trace levels of participation observed among ethnographic hunter-gatherers and contemporary societies\"? As far as I can tell, it's very hard to glean much information about gender relations from 9,000 years ago, and there are any number of different axes other than hunting along which there may have been discrimination. I think it would be quite a leap from \"Women participated equally in big-game hunting\" to \"Gender equality was strong.\"\nNote 23 goes to a New York Times article that is mostly about the above paper. It also cites a case where remains were found of a man and woman buried together near servants; I do not know what point that is making.\nSource 30 appears to primarily be drawing from \"Women's Status in Egalitarian Society,\" a chapter from this book.5\nI find this chapter extremely unconvincing, and reminiscent of Source 22 above, in that it combines (a) sweeping statements without specifics or citations; (b) scattered statements about individual societies; (c) acknowledgements of what sound to me like disappointingly low levels of gender equality, accompanied by bizarre defenses. (One key quote, which sounds to me like it's basically arguing \"Gender relations were good because women had high status due to their role in childbearing,\" is in a footnote.6)\nGoogle results for \"hunter-gatherer gender relations\"\nGoogling \"hunter-gatherer gender relations\" (archived link) initially gives an impression of strong gender equality. Here's how the search starts off:\nHowever, when I clicked through to the first result, I found that the statement highlighted by Google (\"Hunter-gatherer groups are often relatively egalitarian regarding power and gender relationships\") appears to be an aside: no citation or evidence is given, and it is not the main topic of the paper. Most of the paper discusses the differing activities of men and women (e.g., big-game hunting vs. other food provision).7\nThe answer box has no citations, so I can't assess where that's coming from.\nAnd here's what shows next in the search:\nThe first of these results (the National Geographic article) is essentially a summary of the same source discussed above that cites evidence of 11 females (compared to 16 males) buried with big-game hunting tools 9,000 years ago.\nThe next (from jstor.org) is a discussion of \"gender relations in the Thukela Basin 7000-2000 BP hunter-gatherer society.\" The abstract states: \"I argue that the early stages of this occupation were characterized by male dominance which then became the site of considerable struggle which resulted in women improving their positions and possibly attaining some form of parity with men.\"\nThe next (from theguardian.com) is a Guardian article with the headline: \"Early men and women were equal, say scientists.\" The entire article discusses a single study:\nThe study looks at two foraging societies (one of which is the Agta, the most egalitarian society according to the table above). \nIt presents a theoretical model according to which one gender dominating decisions about who lives where would result in high levels of within-camp relatedness, and observes that actual patterns of within-camp relatedness are relatively low, so they more closely match a dynamic in which both genders influence decisions (according to the theoretical model). \nI believe this is essentially zero evidence of anything. \nThe final result is a Wikipedia article that is mostly about the differing roles for men and women among foragers. The part that provides Google's 3rd excerpt is here (screenshotting so you can get the full experience of the citation notes):\nSource 8 looks like the closest thing to a citation for the claim that \"the sexual division of labor ... developed relatively recently.\" It goes to this paper, which seems to me to be making a significant leap from thin evidence. The basic situation, as far as I can tell, is:8\nThere is no archaeological evidence that the population in question (Neandertals in Eurasia in the Middle Paleolithic) ate small game or vegetables.\nThis implies that they exclusively hunted big game.\nIt's hypothesized that women participated equally in big-game hunting. The reasoning is that otherwise, they would have had nothing to do, and this seems implausible to the authors. (There is also some discussion of the lack of other things that would've taken work to make, such as complex clothing.)\nI do not think that \"Neanderthals didn't eat small game or vegetables\" is much of an argument that they had egalitarian division of labor by sex.\nBottom line\nMy current impression is that today's foraging/hunter-gatherer societies have disappointingly low levels of gender equality, and that this is the best evidence we have about what pre-agriculture gender relations were like.\nI'm not sure why casual searching and Wikipedia seem to give such a strong impression to the contrary. It seems to me that there is a fair amount of interest in stretching thin evidence to argue that pre-agriculture societies had strong gender equality. \nThis might be partly be coming from a fear that if people think gender inequality is \"ancient\" or \"natural,\" they might conclude that it is also \"good\" and not to be changed. But as I'll elaborate in future pieces, my general perspective is that the \"state of nature\" is bad compared to today's world, and I think one of our goals as a society should be to fight things - from sexism to disease - that have afflicted us for most of our history. I don't think it helps that cause to give stretched impressions about what that history looks like.\nNext in series: Was life better in hunter-gatherer times?\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n Or \"forager,\" though I won't be using that term in this post because I already have enough terms for the same thing. \"Hunter-gatherer\" seems to be the more common term generally, and is the one favored by Wikipedia. ↩\n E.g., see Wikipedia on the Neolithic Revolution, stating that agriculture \"transformed the small and mobile groups of hunter-gatherers that had hitherto dominated human pre-history into sedentary (non-nomadic) societies based in built-up villages and towns ... These developments, sometimes called the Neolithic package, provided the basis for centralized administrations and political structures, hierarchical ideologies, depersonalized systems of knowledge (e.g. writing), densely populated settlements, specialization and division of labour, more trade, the development of non-portable art and architecture, and greater property ownership.\" The well-known book Guns, Germs and Steel is about this transition. ↩\n Though properties (2) and (4) could in some cases imply female advantage rather than egalitarianism per se. ↩\n The table above lists two societies that specifically do not have \"female hunting taboos.\" \nThe Lifeways of Hunter Gatherers (which I name above as a relatively systematic source) states that there are \"quite a few individual cases of women hunters,\" and that \"One case of women hunters who appear to be a striking exception is that of the Philippine Agta [also the only case from the table above with no evidence against egalitarianism].\" In context, I believe it is referring to big-game hunting.\n This is despite stating the view (which is shared by the paper I'm discussing now) that modern-day foraging societies have very little participation by women in big-game hunting overall (see the section entitled \"Why Do Men Hunt (and Women Not So Much)?\" from chapter 8). ↩\n I initially stated that Wikipedia gave no indication of which part of the book it was pointing at, but a reader pointed out that it gave a page number. That page is the page of the index that includes a number of references to gender-relations-related topics. Most come from the chapter I discuss here; there are also a couple of pages referenced of another chapter, which also cites this one, and which I would characterize along similar lines.  ↩\n \"It is also necessary to reexamine the idea that these male activities were in the past more prestigious than the creation of new human beings. I am sympathetic to the scepticism with which women may view the argument that their gift of fertility was as highly valued as or more highly valued than anything men did. Women are too commonly told today to be content with the wondrous ability to give birth and with the presumed propensity for 'motherhood' as defined in saccharine terms. They correctly read such exhortations as saying, 'Do not fight for a change in status.' However, the fact that childbearing is associated with women's present oppression does not mean this was the case in earlier social forms. To the extent that hunting and warring (or, more accurately, sporadic raiding, where it existed) were areas of male ritualization, they were just that: areas of male ritualization. To a greater or lesser extent women participated in the rituals, while to a greater or lesser extent they were also involved in ritual elaborations of generative power, either along with men or separately. To presume the greater importance of male than female participants, or casually to accept the statements to this effect of latter-day day male informants, is to miss the basic function of dichotomized sex-symbolism in egalitarian society. Dichotomization made it possible to ritualize the reciprocal roles of females and males that sustained the group. As ranking began to develop, it became a means of asserting male dominance, and with the full-scale development of classes sex ideologies reinforced inequalities that were basic to exploitative structures.\"\n It seems to me as though a double standard is being applied here: the kind of \"dichotomization\" the author describes sounds like a serious limitation on self-determination and meritocracy (people participating in activities based on abilities and interests rather than gender roles), and no explanation is given for the author's apparent belief that this dichotomization was unproblematic for past societies but reflected oppression for later societies. ↩\n From the abstract: \"Ethnohistorical and nutritional evidence shows that edible plants and small animals, most often gathered by women, represent an abundant and accessible source of “brain foods.” This is in contrast to the “man the hunter” hypothesis where big-game hunting and meat-eating are seen as prime movers in the development of biological and behavioral traits that distinguish humans from other primates.\" I am not familiar with that form of the \"man the hunter\" hypothesis; what I've seen elsewhere implies that men dominate big-game hunting and that big game is often associated with prestige, regardless of whatever nutritional value it does or doesn't have. ↩\n A bit more on how I identified this as the key part of the paper:\nThe paper notes that today's foraging societies generally have a distinct sexual division of labor, but argues that it must have developed after the Middle Paleolithic, because (from the abstract) \"The rich archaeological record of Middle Paleolithic cultures in Eurasia suggests that earlier hominins pursued more narrowly focused economies, with women’s activities more closely aligned with those of men ... than in recent forager systems.\"\nAs far as I can tell, the key section arguing this point is \"Archaeological Evidence for Gendered Division of Labor before Modern Humans in Eurasia.\" ↩\n", "url": "https://www.cold-takes.com/hunter-gatherer-gender-relations-seem-bad/", "title": "Pre-agriculture gender relations seem bad", "source": "cold.takes", "source_type": "blog", "date_published": "2021-10-19", "id": "140116e95a807cb695a35e794215bf97"} -{"text": "Human civilization is thousands of years old. What's our report card? Whatever we've been doing, has it been \"working\" to make our lives better than they were before? Or is all our \"progress\" just helping us be nastier to others and ourselves, such that we need a radical re-envisioning of how the world works?\nI'm surprised you've read this far instead of clicking away (thank you). You're probably feeling bored: you've heard the answer (Yes, life is getting better) a zillion times, supported with data from books like Enlightenment Now and websites like Our World in Data and articles like this one and this one. \nI'm unsatisfied with this answer, and the reason comes down to the x-axis. Look at any of those sources, and you'll see some charts starting in 1800, many in 1950, some in the 1990s ... and only very few before 1700.1\nThis is fine for some purposes: as a retort to alarmism about the world falling apart, perhaps as a defense of the specifically post-Enlightenment period. (And I agree that recent trends are positive.) But I like to take a very long view of our history and future, and I want to know what the trend has been the whole way.\nIn particular, I'd like to know whether improvement is a very deep, robust pattern - perhaps because life fundamentally tends to get better as our species accumulates ideas, knowledge and abilities - or a potentially unstable fact about the weird, short-lived time we inhabit.\nSo I'm going to put out several posts trying to answer: what would a chart of \"average quality of life for an inhabitant of Earth look like, if we started it all the way back at the dawn of humanity?\" \nThis is a tough and frustrating question to research, because the vast majority of reliable data collection is recent - one needs to do a lot of guesswork about the more distant past. (And I haven't found any comprehensive study or expert consensus on trends in overall quality of life over the long run.) But I've tried to take a good crack at it - to find the data that is relatively straightforward to find, understand its limitations, and form a best-guess bottom line.\nIn future pieces, I'll go into detail about what I was able to find and what my bottom lines are. But if you just want my high-level, rough take in one chart, here's a chart I made of my subjective guess at average quality of life for humans2 vs. time, from 3 million years ago to today:\nSorry, that wasn't very helpful, because the pre-agriculture period (which we know almost nothing about) was so much longer than everything else.3\n(I think it's mildly reality-warping for readers to only ever see charts that are perfectly set up to look sensible and readable. It's good to occasionally see the busted first cut of a chart, which often reveals something interesting in its own right.)\nBut here's a chart with cumulative population instead of year on the x-axis. The population has exploded over the last few hundred years, so this chart has most of the action going on over the last few hundred years. You can think of this chart as \"If we lined up all the people who have ever lived in chronological order, how does their average quality of life change as we pan the camera from the early ones to the later ones?\"\nSource data and calculations here. See footnote for the key points of how I made the chart, including why it has been changed from its original version (which started 3 million years ago rather than 300,000).4 Note that when a line has no wiggles, that means something more like \"We don't have specific data to tell us how quality of life went up and down\" than like \"Quality of life was constant.\"\nIn other words:\nWe don't know much at all about life in the pre-agriculture era. Populations were pretty small, and there likely wasn't much in the way of technological advancement, which might (or might not) mean that different chronological periods weren't super different from each other.5\nMy impression is that life got noticeably worse with the start of agriculture some thousands of years ago, although I'm certainly not confident in this.\nIt's very unclear what happened in between the Neolithic Revolution (start of agriculture) and Industrial Revolution a couple hundred years ago.\nLife got rapidly better following the Industrial Revolution, and is currently at its high point - better than the pre-agriculture days.\n \nSo what?\nI agree with most of the implications of the \"life has gotten better\" meme, but not all of them. \nI agree that people are too quick to wring their hands about things going downhill. I agree that there is no past paradise (what one might call an \"Eden\") that we could get back to if only we could unwind modernity.\nBut I think \"life has gotten better\" is mostly an observation about a particular period of time: a few hundred years during which increasing numbers of people have gone from close-to-subsistence incomes to having basic needs (such as nutrition) comfortably covered. \nI think some people get carried away with this trend and think things like \"We know based on a long, robust history that science, technology and general empowerment make life better; we can be confident that continuing these kinds of 'progress' will continue to pay off.\" And that doesn't seem quite right.\nThere are some big open questions here. If there were more systematic examination of things like gender relations, slavery, happiness, mental health, etc. in the distant past, I could imagine it changing my mind in multiple ways. These could include: \nLearning that the pre-agriculture era was worse than I think, and so the upward trend in quality of life really has been smooth and consistent.\n \nOr learning that the pre-agriculture era really was a sort of paradise, and that we should be trying harder to \"undo technological advancement\" and recreate its key properties.\n \nAs mentioned previously, better data on how prevalent slavery was at different points in time - and/or on how institutionalized discrimination evolved - could be very informative about ups and/or downs in quality of life over the long run.\nHere is the full list of posts for this series. I highlight different sections of the above chart to make clear which time period I'm talking about for each set of posts.\nPost-industrial era\nHas Life Gotten Better?: the post-industrial era introduces my basic approach to asking the question \"Has life gotten better?\" and apply it to the easiest-to-assess period: the industrial era of the last few hundred years.\nPre-agriculture (or \"hunter-gatherer\" or \"forager\") era\nPre-agriculture gender relations seem bad examines the question of whether the pre-agriculture era was an \"Eden\" of egalitarian gender relations. I like mysterious titles, so you will have to read the full post to find out the answer.\nWas life better in hunter-gatherer times? attempts to compare overall quality of life in the modern vs. pre-agriculture world. Also see the short followup, Hunter-gatherer happiness.\nIn-between period\nDid life get better during the pre-industrial era? (Ehhhh) compares pre-agriculture to post-agriculture quality of life, and summarizes the little we can say about how things changed between ~10,000 BC and ~1700 CE.\nSupplemental posts on violence\nSome of the most difficult data to make sense of throughout writing this series has been the data on violent death rates. The following two posts go through how I've come to the interpretation I have on that data.\nUnraveling the evidence about violence among very early humans examines claims about violent death rates very early in human history, from Better Angels of Our Nature and some of its critics. As of now, I believe that early societies were violent by today's standards, but that violent death rates likely went up before they went down.\nFalling everyday violence, bigger wars and atrocities: how do they net out? looks at trends in violent death rates over the last several centuries. When we include large-scale atrocities, it's pretty unclear whether there is a robust trend toward lower violence over this period.\nFinally, an important caveat to the above charts. Unfortunately, the chart for average animal quality of life probably looks very different from the human one; for example, the rise of factory farming in the 20th and 21st centuries is a massive negative development. This makes the overall aggregate situation for sentient beings hard enough to judge that I have left it out of some of the very high-level summaries, such as the charts above. It is an additional complicating factor for the story that life has gotten better, as I'll be mentioning throughout this series. \nNext in series: Has Life Gotten Better?: the post-industrial era\nThanks to Luke Muehlhauser, Max Roser and Carl Shulman for comments on a draft.\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n For example:\nI wrote down the start date of every figure in Enlightenment Now, Part II (which is where it makes the case that the world has gotten better), excluding one that was taken from XKCD. 6 of the 73 figures start before 1700; the only one that starts before 1300 is Figure 18, Gross World Product (the size of the world economy). This isn't a criticism - that book is specifically about the world since the Enlightenment, a few hundred years ago - but it's an illustration of how one could get a skewed picture if not keeping that in mind.\nI went through Our World in Data noting down every major data presentation that seems relevant for quality of life (leaving out those that seem relatively redundant with others, so I wasn't as comprehensive as for Enlightenment Now.) I found 6 indicators with data before 1300 (child/infant mortality, which looks flat before 1700; human height, which looks flat before 1700; GDP per capita, which rose slightly before 1700; manuscript production, which rose starting around 1100; the price of light, which seems like it fell a bit between 1300-1500 and then had no clear trend before a steep drop after 1800; deaths from military conflicts in England, which look flat before 1700; deaths from violence, which appear to have declined - more on this in a future piece) and 8 more with data before 1700. Needless to say, there are many charts from later on. ↩\n See the end of the post for a comment on animals. ↩\n \"Why didn't you use a logarithmic axis?\" Well, would the x-axis be \"years since civilization began\" or \"years before today?\" The former wouldn't look any different, and the latter bakes in the assumption that today is special (and that version looks pretty similar to the next chart anyway, because today is special).  ↩\n I mostly used world per-capita income, logged; this was a pretty good first cut that matches my intuitions from summarizing history. (One of my major findings from that project was that \"most things about the world are doing the same thing at the same time.\") But I gave the pre-agriculture era a \"bonus\" to account for my sense that it had higher quality of life than the immediately post-agriculture era: I estimated the % of the population that was \"nomadic/egalitarian\" (a lifestyle that I think was more common at that time, and had advantages) as 75% prior to the agricultural revolution, and counted that as an effective 4x multiple on per-capita income. This was somewhat arbitrary, but I wanted to make sure it was still solidly below today's quality of life, because that is my view (as I'll argue).The original version of this chart started 3 million years ago, rather than 300,000. I had waffled on whether to go with 3 million or 300,000 and my decision had been fairly arbitrary. I later discovered that I had an error in my calculations that caused me to underestimate the population over any given period, but especially longer periods such as the initial period. With the error corrected, the \"since 3 million years ago\" chart would've been more dominated by the initial period (something I especially didn't like because I'm least confident in my population figures over that period), so I switched over to the \"300,000 years ago\" chart. ↩\n More specifically, I'd guess there was probably about as much variation across space as across time during that period. It's common in academic literature (which I'll get to in future posts) to assume that today's foraging societies are representative of all of human history before agriculture. ↩\n", "url": "https://www.cold-takes.com/has-life-gotten-better/", "title": "Has Life Gotten Better?", "source": "cold.takes", "source_type": "blog", "date_published": "2021-10-05", "id": "48af44b2fb603cb494a3afb3e83f2b89"} -{"text": "Now that I've finished the \"most important century\" series, I'll still be putting out one longer piece per week, but they'll be on toned-down/less ambitious topics, as you're about to see.\nA few years ago, I made a summary of human history in one table. It's here (color-coded Google sheet).\nTo do this, I didn't need to be (and am not!) an expert historian. What I did need to do, and what I found very worth doing, was:\nDecide on a lens for the summary: a definition of \"what matters to me,\" a way of distinguishing unimportant from important. \nTry to find the most important historical events for that lens, and put them all in one table.\nThe lens I chose was empowerment and well-being. That is, I consider historical people and events significant to the degree that they influenced the average person's (a) options and capabilities (empowerment) - including the advance of science and technology; (b) health, safety, fulfillment, etc. (well-being).1 (I'm not saying these are the same thing! It's possible that greater empowerment could mean lower well-being.) \nHistory through this lens seems very different from the history presented in e.g. textbooks. For example:\nMany wars and power struggles barely matter. My summary doesn't so much as mention Charlemagne or William of Orange, and the fall of the Roman Empire doesn't seem like a clearly watershed event. My summary thinks the development of lenses (leading to spectacles, microscopes and telescopes) is far more important.\nEvery twist and turn in gender relations, treatment of homosexuality, and the quality of maternal care and contraception is significant (as these likely mattered greatly, and systematically, for many people's quality of life). I've had trouble finding good broad histories on all of these. The development of most natural sciences and hard sciences is also important (much easier to read about, though the best reading material generally does not come from the \"history field\").\nThe summary is simply a color-coded table noting what I see as the most important events in each category during each major time period. It doesn't lend itself to a graphical summary, although I will be \"boiling things down\" more in future pieces, by trying to construct a single \"how quality of life for the average person, as time passed and empowerment rose\" curve. \nBut below, I'll try to give a sense of what pops out from this table, by going through some historical people and events that seem underrated to me, some that seem overrated, and high-level takeaways.\nDespite (or because of) my lack of expertise, I found the exercise useful, and would recommend that others consider doing their own summary of history:\nYou can spend infinite time reading history books and learning about various events, but it's a very different kind of learning to try to find \"all the highlights\" for your lens of choice, put them in one place, and reflect on how you'd tell the story of the world yourself if you had to boil it down. I think the latter activity requires more active engagement and is likely to result in better recall of important points. \nAnd I think the final product can be useful as well, if only for readers to easily get a pretty thorough sense of your worldview, what seems significant to you, and what disagreements or differences of perspective you have with others. Hopefully mine is useful to readers for giving a sense of my worldview and my overall sense of what humanity's story is so far.\nFinally, I think that creating a history summary is a vulnerable thing to do, in a good way. It's scary how little just about anyone (emphatically including myself) knows about history. I think the normal way to deal with this is to show off the facts one does know, change the subject away from what one doesn't, and generally avoid exposing one's ignorance. My summary of history says: \"This is what seems most important to me; this is the story as I perceive it; whatever important pieces I'm ignorant about are now on display for all to see.\" So take a look and give me feedback!\nUnderrated people and events according to the \"empowerment and well-being\" lens2\nTanzimat. In the mid-19th century, the Ottoman Empire went through a series of reforms that abolished the slave trade, declared political equality of all religions, and decriminalized homosexuality. Most of the attention for early reforms like this tends to focus on Europe and the US, but the Ottoman Empire was quite early on these things.\nIbn al-Haytham's treatise on optics. In the early 11th century, an Islamic scholar intensively studied how curved glass lenses bend light and wrote it up, which I would guess turned out to be immensely useful for the development of spectacles (which reached Europe in the 1200s) and - later - microscopes and telescopes, which were crucial to some of the key Scientific Revolution work on physics/astronomy and biology.\nThe medicine revolution of the mid-19th century. As far as I can tell, medicine for thousands of years did very little at all, and surgery may have killed as many people as it saved. It's not clear to me whether life expectancy improved at all in the thousands of years prior to the mid-19th century.3\nBut the mid-19th century saw the debut of anesthesia (which knocks out the patient and makes it easier to operate) and sterilization with carbolic acid (reducing the risk of infection); there's a nice Atul Gawande New Yorker article about these. Many more medical breakthroughs would follow, to put it mildly, and now health looks like possibly the top way in which the world has improved. (One analysis4 estimates that the value of improved life expectancy over the last ~100 years is about as big as all measured growth in world output5 over that period.)\nNot too far into this revolution, Paul Ehrlich (turn-of-20th-century chemist, not the author of The Population Bomb) looks like he came up with a really impressive chunk of today's drug development paradigms. As far as I can tell: \nIt had been discovered that when you put a clothing dye into a sample under a microscope, it would stain some things and not others, which could make it easier to see. \nEhrlich reasoned from here to the idea of targeted drug development: looking for a chemical that would bind only to certain molecules and not others. This seems like the beginning of this idea.\nHe developed the first effective treatment for syphilis, and also laid the groundwork for the basic chemotherapy idea of delivering a toxin that targets only certain kinds of cells.\nAnd this was only a fraction of his contributions to medicine.\nIt's hard to think of someone who's done more for medicine. Articles I've read imply that he had to deal with a fair amount of skepticism and anti-Semitism in his time, but at least today, everyone who hears his name thinks of ... a different guy who lost a bet.\nPorphyry, the Greek vegetarian. Did you know that there was an ancient Greek who was (according to Wikipedia) \"an advocate of vegetarianism on spiritual and ethical grounds ... He wrote the On Abstinence from Animal Food (Περὶ ἀποχῆς ἐμψύχων; De Abstinentia ab Esu Animalium), advocating against the consumption of animals, and he is cited with approval in vegetarian literature up to the present day.\" Is it just me or is that more impressive than, well, Aristotle?\nThe rise of the modern academic system in the mid-20th century. I believe that government funding for science skyrocketed after World War II, and that this period included the creation of the NSF and DARPA and a general skyrocketing demand for professors. My vague impression is that this is when science turned into a real industry and mainstream career track. My summary also thinks that science had a lower frequency of well-known breakthroughs after this point.\nAlexandra Elbakyan. Super recent, and it's of course hard to know who will end up looking historically significant when more time has passed. But it seems like a pretty live possibility that she's done more for science than any scientist in a long time, and there's a good chance you have to Google her to know what I'm talking about.\nUnderrated negative trend: The rise of factory farming. The clearest case, in my view, in which the world has gotten worse with industrialization (note that institutionalized homophobia arose before industrialization, and seems to be in decline now unlike factory farming). I think that really brutal factory-like treatment of animals began in the 1930s and has mostly gotten worse over time. \nUnderrated negative trend: The relatively recent rise of institutionalized homophobia. I believe that bad/inegalitarian gender relations are as old as humanity (more in a future post), and slavery is at least as old as civilization. But institutionalized homophobia may be more of a recent phenomenon. My impression is that it came into being sometime around 0 AD and gradually swept most of the globe (though I'm definitely not confident in this, and would love to learn more).\nThe foundations of probability and statistics. Can you name the general time periods for the creation of: the line chart, bar chart, pie chart, the idea of probability, the idea of the normal distribution, Bayes's theorem, and the first known case-control experiment? Seems like the answer could be just about anything right? Turns out it was all between 1760-1812; all three charts came from William Playfair and a lot of the rest came from Laplace and Gauss. \nNot too much of note happened after that until the end of the 19th century, when Francis Galton and Karl Pearson (not always working together) came up with the modern concepts of standard deviation, correlation, regression analysis, p-values, and more. \nI think it's pretty interesting that so many of the things that are so foundational to pretty much any quantitative analysis anyone does of anything were invented in a couple relatively recent spurts.\nMetallurgy. A huge amount of history's scientific and technological progress is crammed into the last few hundred years, but I think the story of metallurgy is much longer and more gradual (see these major innovations from ~5000 and ~2000 BCE). I wish I could find a source that compactly goes through the major steps here and how they came about; I'd guess it was mostly trial and error (since so much of it was before the Scientific Revolution), but would like to know whether that's right.\n(Mathematics also has its major breakthroughs much more evenly spread throughout history than fields such as physics, chemistry and biology.)\nOverrated people and events according to the \"empowerment and well-being\" lens\nI mean, the vast majority of rulers, wars, and empires rising and falling.\nSpecial shout-out to:\nThe Roman Empire, which I can barely see any sign of either in quality-of-life metrics (future posts) or in key empowerment-and-wellbeing events (most of the headlines from this period came from China or the Islamic Empire). If I taught a history class I'm not entirely sure I would mention the Roman Empire.\nAncient Greece, which is renowned for its ideas and art, but doesn't seem to have been home to any notable improvements in quality of life - no sustained or effective anti-slavery pushes, no signs of feminism, nothing that helped with health or wealth or peace. Seems like it was a pretty horrific place to live for the average person. I've seen some signs that Athens was especially terrible for women, even by the standards of the time, e.g.\nHigh-level takeaways\nMost of what happened happened all at the same time, in the last few minutes (figuratively)\nThis project is what originally started to make me feel that we live in a special time, and that our place in history is more like \"Sitting in a rocket ship that just took off\" than like \"Playing our small part in the huge unbroken chain of generations.\"\nMy table devotes:\nOne column to the hundreds of thousands of years of prehistory.\nThree columns to the first ~6000 years of civilization.\nTwo columns to the next 300 years.\n6 columns to the ~200 years since. \nThat implies that more has happened in the last 200 years than in the previous million-plus. I think that's right, not recency bias. It seems very hard to summarize history (with my lens) without devoting massively more attention to these recent periods.\nI've made this point before, and you'll see it showing up in pretty much any chart you look at of something important - population, economic growth, rate of significant scientific discoveries, child mortality, human height, etc. My summary gives a qualitative way of seeing it across many domains at once.\n200 years is ~10 generations. We live in an extraordinary time without much precedent. And because of this, there are ultimately pretty serious limits to how much we can learn from history.\nHistory is a story\nI sometimes get a vibe from \"history people\" that we should avoid imposing \"narratives\" on human history: there are so many previous societies, each with its own richness and complexity, that it's hard to make generalizations or talk about trends and patterns across the whole thing.\nThat isn't how I feel.\nIt looks to me like if you're comparing an earlier period to a later one, you can be confident that the later period contains a higher world population and greater empowerment due to a greater stock of ideas, innovations and technological capabilities.\nThese trends seem very consistent, and can reasonably be expected to generate other consistent trends as well. \nI think history as it's traditionally taught (or at least, as I learned it back in the 20th century) tends to focus on the key events and details of each time, while only inconsistently situating them against the broader trends.6 To me, this is kind of like summarizing the Star Wars trilogy as follows: \"On the first day covered in the movie, it was warm and humid on Tattooine and hot and dry on Endor. On the second day, it was slightly cooler on Tattooine and hotter on Endor. On the third day, it rained on Tattooine, and it was still hot and dry on Endor. [etc. to the final day covered in the movies.] Done.\" Not inaccurate per se, but ...\nA lot of history through this lens seems unnecessarily hard to learn about\nMy table is extremely amateur and is probably missing a kajillion things; I had to furiously search Google and Wikipedia to fill in a lot of the cells. I'd love to live in a world where there were well-documented, comprehensive lists and timelines of the major events for empowerment and well-being.\nTo give a sense for this, here are some things that would be helpful for viewing history through this lens, that I've been unable to find:\nSystematic accounts - going back as far as possible - of when each major state/empire made official changes to things like women's rights (to own property, hold political power, vote, etc.), formal religious freedom, formal treatment of different ethnic groups, legality and other treatment of LGBTQ+, etc.\nCollected estimates (by region/empire/state and period, with citations) of how many people were slaves, what percentage of marriages were arranged, etc.\nComprehensive timelines (with citations) of major milestones for most of the rows in my table, and/or narrative histories that focus on listing, explaining and contextualizing such milestones and otherwise being concise (an example would be Asimov's Chronology of Science and Discovery, but I'd also like to see this for topics like gender relations).\nHistories of science focused on the discoveries that seem most likely to have contributed to real-world capabilities and quality of life, with explanations of these connections. (As an example of what this wouldn't be like, existing chemistry histories tend to list the discovery of each element.)\nI've only listed things that seem like they would be reasonably straightforward to put together; of course there are a zillion more things I wish I could know about the past.\nI don't know very much!\nThough I hope I've been clear about this throughout, I want to mention it again before I wrap up the takeaways. Not only is this summary based on a limited amount of time from a non-expert, but the sources I've been able to find for this project shouldn't be taken as neutral or trustworthy either. I think there are ~infinite ways in which they are likely biased due to the worldviews, identities, assumptions, etc. of the authors.\nFor example, the previous section notes how much harder it's been to find long-run data and timelines on slavery and women's rights than on technological developments. \nAnother thing that jumps out is that my summary ended up being heavily focused on the Western world. From what I've been able to gather, the Western world of the very recent past looks like where the most noteworthy human-empowerment-related developments are concentrated. If that's indeed the case, I don't think this was inevitable - there were long periods of time where non-Western civilizations were contributing much more to science, technology, etc. than Western civilizations - but the Scientific Revolution of the 1500s and the Industrial Revolution of the 1800s began in the recent West, and once those happened, people in the West could have been best-positioned to build on them for some period of time. But this could be another reflection of biases in how my sources report what was invented, where and when (I've looked for evidence that this is the case and haven't found any, but my efforts are obviously extremely incomplete, and I'm especially skeptical that noteworthy art is as concentrated in the West as the sources I've consulted make it seem).\nThe four high-level takeaways listed in this section are the four important-seeming observations I feel most confident about from this exercise. (But most confident doesn't mean confident, and I'm always interested in feedback!)\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n This does not mean I see history as an inevitably upward trend in terms of empowerment and well-being. You could apply the lens I've described whether empowerment and well-being have fallen, risen, or wiggled around randomly over the course of history. ↩\n I'm often not giving sources; my sources are listed in the detailed version of the summary table.\n ↩\n Check out this chart:\n And note that life expectancy at the beginning is similar to estimates for foraging societies, which are often used as a proxy for what life was like more than 10,000 years ago (chart from this paper):\n ↩\n See chapter 1. I normally don't cite big claims like this that \"one analysis\" makes, but I spent some time with this one (years ago) and broadly find it pretty plausible. ↩\n GDP. ↩\n I checked out this book as a way of seeing whether today's \"standard history\" still seems like this, and I think it largely does. It's not that economic growth and scientific/technological advancement aren't mentioned, but more that they come off as just one more part of a list of events (most of which tend to focus on who's in power where). ↩\n", "url": "https://www.cold-takes.com/summary-of-history-empowerment-and-well-being-lens/", "title": "Summary of history (empowerment and well-being lens)", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-28", "id": "76b01be62013ae3a3a4599bce285fe98"} -{"text": "\nI've spent most of my career looking for ways to do as much good as possible, per unit of money or time. I worked on finding evidence-backed charities working on global health and development (co-founding GiveWell), and later moved into philanthropy that takes more risks (co-founding Open Philanthropy).\nOver the last few years - thanks to general dialogue with the effective altruism community, and extensive research done by Open Philanthropy's Worldview Investigations team - I've become convinced that humanity as a whole faces huge risks and opportunities this century. Better understanding and preparing for these risks and opportunities is where I am now focused. \nThis piece will summarize a series of posts on why I believe we could be in the most important century of all time for humanity. It gives a short summary, key post(s), and sometimes key graphics for 5 basic points:\nThe long-run future is radically unfamiliar. Enough advances in technology could lead to a long-lasting, galaxy-wide civilization that could be a radical utopia, dystopia, or anything in between.\nThe long-run future could come much faster than we think, due to a possible AI-driven productivity explosion. \nThe relevant kind of AI looks like it will be developed this century - making this century the one that will initiate, and have the opportunity to shape, a future galaxy-wide civilization.\nThese claims seem too \"wild\" to take seriously. But there are a lot of reasons to think that we live in a wild time, and should be ready for anything.\nWe, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions. But right now, we aren't ready for this. \nThis thesis has a wacky, sci-fi feel. It's very far from where I expected to end up when I set out to do as much good as possible. \nBut part of the mindset I've developed through GiveWell and Open Philanthropy is being open to strange possibilities, while critically examining them with as much rigor as possible. And after a lot of investment in examining the above thesis, I think it's likely enough that the world urgently needs more attention on it.\nBy writing about it, I'd like to either get more attention on it, or gain more opportunities to be criticized and change my mind.\nWe live in a wild time, and should be ready for anything\nMany people find the \"most important century\" claim too \"wild\": a radical future with advanced AI and civilization spreading throughout our galaxy may happen eventually, but it'll be more like 500 years from now, or 1,000 or 10,000. (Not this century.)\nThese longer time frames would put us in a less wild position than if we're in the \"most important century.\" But in the scheme of things, even if galaxy-wide expansion begins 100,000 years from now, that still means we live in an extraordinary era- the tiny sliver of time during which the galaxy goes from nearly lifeless to largely populated. It means that out of a staggering number of persons who will ever exist, we're among the first. And that out of hundreds of billions of stars in our galaxy, ours will produce the beings that fill it.\nMore at All Possible Views About Humanity's Future Are Wild\nZooming in, we live in a special century, not just a special era. We can see this by looking at how fast the economy is growing. It doesn't feel like anything special is going on, because for as long as any of us have been alive, the world economy has grown at a few percent per year:\nHowever, when we zoom out to look at history in greater context, we see a picture of an unstable past and an uncertain future:\nMore at This Can't Go On\nWe're currently living through the fastest-growing time in history. This rate of growth hasn't gone on long, and can't go on indefinitely (there aren't enough atoms in the galaxy to sustain this rate of growth for even another 10,000 years). And if we get further acceleration in this rate of growth - in line with historical acceleration - we could reach the limits of what's possible more quickly: within this century.\nTo recap:\nThe last few millions of years - with the start of our species - have been more eventful than the previous several billion. \nThe last few hundred years have been more eventful than the previous several million. \nIf we see another accelerator (as I think AI could be), the next few decades could be the most eventful of all.\nMore info about these timelines at All Possible Views About Humanity's Future Are Wild, This Can't Go On, and Forecasting Transformative AI: Biological Anchors, respectively.\nGiven the times we live in, we need to be open to possible ways in which the world could change quickly and radically. Ideally, we'd be a bit over-attentive to such things, like putting safety first when driving. But today, such possibilities get little attention.\nKey pieces:\nAll Possible Views About Humanity's Future Are Wild\nThis Can't Go On\nThe long-run future is radically unfamiliar\nTechnology tends to increase people's control over the environment. For a concrete, easy-to-visualize example of what things could look like if technology goes far enough, we might imagine a technology like \"digital people\": fully conscious people \"made out of software\" who inhabit virtual environments such that they can experience anything at all and can be copied, run at different speeds and even \"reset.\" \nA world of digital people could be radically dystopian (virtual environments used to entrench some people's absolute power over others) or utopian (no disease, material poverty or non-consensual violence, and far greater wisdom and self-understanding than is possible today). Either way, digital people could enable a civilization to spread throughout the galaxy and last for a long time.\nMany people think this sort of large, stable future civilization is where we could be headed eventually (whether via digital people or other technologies that increase control over the environment), but don't bother to discuss it because it seems so far off.\nKey piece: Digital People Would Be An Even Bigger Deal\nThe long-run future could come much faster than we think\nStandard economic growth models imply that any technology that could fully automate innovation would cause an \"economic singularity\": productivity going to infinity this century. This is because it would create a powerful feedback loop: more resources -> more ideas and innovation -> more resources -> more ideas and innovation ...\nThis loop would not be unprecedented. I think it is in some sense the \"default\" way the economy operates - for most of economic history up until a couple hundred years ago. \nEconomic history: more resources -> more people -> more ideas -> more resources ...\nBut in the \"demographic transition\" a couple hundred years ago, the \"more resources -> more people\" step of that loop stopped. Population growth leveled off, and more resources led to richer people instead of more people:\nToday's economy: more resources -> more richer people -> same pace of ideas -> ...\nThe feedback loop could come back if some other technology restored the \"more resources -> more ideas\" dynamic. One such technology could be the right kind of AI: what I call PASTA, or Process for Automating Scientific and Technological Advancement.\nPossible future: more resources -> more AIs -> more ideas -> more resources ...\nThat means that our radical long-run future could be upon us very fast after PASTA is developed (if it ever is). \nIt also means that if PASTA systems are misaligned - pursuing their own non-human-compatible objectives - things could very quickly go sideways.\nKey pieces:\nThe Duplicator: Instant Cloning Would Make the World Economy Explode\nForecasting Transformative AI, Part 1: What Kind of AI?\nPASTA looks like it will be developed this century\nIt's not controversial to say a highly general AI system, such as PASTA, would be momentous. The question is, when (if ever) will such a thing exist?\nOver the last few years, a team at Open Philanthropy has investigated this question from multiple angles. \nOne forecasting method observes that:\nNo AI model to date has been even 1% as \"big\" (in terms of computations performed) as a human brain, and until recently this wouldn't have been affordable - but that will change relatively soon. \nAnd by the end of this century, it will be affordable to train enormous AI models many times over; to train human-brain-sized models on enormously difficult, expensive tasks; and even perhaps to perform as many computations as have been done \"by evolution\" (by all animal brains in history to date). \nThis method's predictions are in line with the latest survey of AI researchers: something like PASTA is more likely than not this century.\nA number of other angles have been examined as well.\nOne challenge for these forecasts: there's no \"field of AI forecasting\" and no expert consensus comparable to the one around climate change. \nIt's hard to be confident when the discussions around these topics are small and limited. But I think we should take the \"most important century\" hypothesis seriously based on what we know now, until and unless a \"field of AI forecasting\" develops.\nKey pieces: \nAI Timelines: Where the Arguments, and the \"Experts,\" Stand (recaps the others, and discusses how we should reason about topics like this where it's unclear who the \"experts\" are)\nForecasting Transformative AI: What's the Burden of Proof?\nAre we \"trending toward\" transformative AI?\nForecasting transformative AI: the \"biological anchors\" method in a nutshell\nWe're not ready for this\nWhen I talk about being in the \"most important century,\" I don't just mean that significant events are going to occur. I mean that we, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions. \nBut that's a big \"if.\" Many things we can do might make things better or worse (and it's hard to say which). \nWhen confronting the \"most important century\" hypothesis, my attitude doesn't match the familiar ones of \"excitement and motion\" or \"fear and avoidance.\" Instead, I feel an odd mix of intensity, urgency, confusion and hesitance. I'm looking at something bigger than I ever expected to confront, feeling underqualified and ignorant about what to do next.\nSituation\nAppropriate reaction (IMO)\n\"This could be a billion-dollar company!\"\n \n\"Woohoo, let's GO for it!\"\n \n\"This could be the most important century!\"\n \n\"... Oh ... wow ... I don't know what to say and I somewhat want to vomit ... I have to sit down and think about this one.\"\n \nWith that in mind, rather than a \"call to action,\" I issue a Call to Vigilance:\nIf you're convinced by the arguments in this series, then don't rush to \"do something\" and then move on. \nInstead, take whatever robustly good actions you can today, and otherwise put yourself in a better position to take important actions when the time comes. \nFor those looking for a quick action that will make future action more likely, see this section of \"Call to Vigilance.\"\nKey pieces: \nMaking the Best of the Most Important Century \nCall to Vigilance.\nOne metaphor for my headspace is that it feels as though the world is a set of people on a plane blasting down the runway:\nAnd every time I read commentary on what's going on in the world, people are discussing how to arrange your seatbelt as comfortably as possible given that wearing one is part of life, or saying how the best moments in life are sitting with your family and watching the white lines whooshing by, or arguing about whose fault it is that there's a background roar making it hard to hear each other.\nI don't know where we're actually heading, or what we can do about it. But I feel pretty solid in saying that we as a civilization are not ready for what's coming, and we need to start by taking it more seriously.\nSubscribe Feedback\n", "url": "https://www.cold-takes.com/the-most-important-century-in-a-nutshell/", "title": "The Most Important Century (in a nutshell)", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-23", "id": "f6d2ee0e0222bb62ce5cc95d145cfbe6"} -{"text": "Today’s world\nTransformative AI\nDigital people\nWorld of\nMisaligned AI\nWorld run by\nSomething else\nor\nor\nStable, galaxy-wide\ncivilization\nThis is a guest post by my colleague Ajeya Cotra.\nHolden previously mentioned the idea that advanced AI systems (e.g. PASTA) may develop dangerous goals that cause them to deceive or disempower humans. This might sound like a pretty out-there concern. Why would we program AI that wants to harm us? But I think it could actually be a difficult problem to avoid, especially if advanced AI is developed using deep learning (often used to develop state-of-the-art AI today). \nIn deep learning, we don’t program a computer by hand to do a task. Loosely speaking, we instead search for a computer program (called a model) that does the task well. We usually know very little about the inner workings of the model we end up with, just that it seems to be doing a good job. It’s less like building a machine and more like hiring and training an employee.\nAnd just like human employees can have many different motivations for doing their job (from believing in the company’s mission to enjoying the day-to-day work to just wanting money), deep learning models could also have many different “motivations” that all lead to getting good performance on a task. And since they’re not human, their motivations could be very strange and hard to anticipate -- as if they were alien employees.\nWe’re already starting to see preliminary evidence that models sometimes pursue goals their designers didn’t intend (here and here). Right now, this isn’t dangerous. But if it continues to happen with very powerful models, we may end up in a situation where most of the important decisions -- including what sort of galaxy-scale civilization to aim for -- are made by models without much regard for what humans value.\nThe deep learning alignment problem is the problem of ensuring that advanced deep learning models don’t pursue dangerous goals. In the rest of this post, I will:\nBuild on the “hiring” analogy to illustrate how alignment could be difficult if deep learning models are more capable than humans (more).\nExplain what the deep learning alignment problem is with a bit more technical detail (more).\nDiscuss how difficult the alignment problem may be, and how much risk there is from failing to solve it (more).\nAnalogy: the young businessperson\nThis section describes an analogy to try to intuitively illustrate why avoiding misalignment in a very powerful model feels hard. It’s not a perfect analogy; it’s just trying to convey some intuitions.\nImagine you are an eight-year-old whose parents left you a $1 trillion company and no trusted adult to serve as your guide to the world. You must hire a smart adult to run your company as CEO, handle your life the way that a parent would (e.g. decide your school, where you’ll live, when you need to go to the dentist), and administer your vast wealth (e.g. decide where you’ll invest your money).\n \nYou have to hire these grownups based on a work trial or interview you come up with -- you don't get to see any resumes, don't get to do reference checks, etc. Because you're so rich, tons of people apply for all sorts of reasons. \nYour candidate pool includes:\nSaints -- people who genuinely just want to help you manage your estate well and look out for your long-term interests.\nSycophants -- people who just want to do whatever it takes to make you short-term happy or satisfy the letter of your instructions regardless of long-term consequences.\nSchemers -- people with their own agendas who want to get access to your company and all its wealth and power so they can use it however they want.\nBecause you're eight, you'll probably be terrible at designing the right kind of work tests, so you could easily end up with a Sycophant or Schemer:\nYou could try to get each candidate to explain what high-level strategies they'll follow (how they'll invest, what their five-year plan for the company is, how they'll pick your school) and why those are best, and pick the one whose explanations seem to make the most sense. \nBut you won't actually understand which stated strategies are really best, so you could end up hiring a Sycophant with a terrible strategy that sounded good to you, who will faithfully execute that strategy and run your company to the ground. \n \nYou could also end up hiring a Schemer who says whatever it takes to get hired, then does whatever they want when you're not checking up on them.\nYou could try to demonstrate how you'd make all the decisions and pick the grownup that seems to make decisions as similarly as possible to you. \nBut if you actually end up with a grownup that will always do whatever an eight-year-old would have done (a Sycophant), your company would likely fail to stay afloat. \n \nAnd anyway, you might get a grownup who simply pretends to do everything the way you would but is actually a Schemer planning to change course once they get the job.\nYou could give a bunch of different grownups temporary control over your company and life, and watch them make decisions over an extended period of time (assume they wouldn't be able to take over during this test). You could then hire the person whose watch seemed to make things go best for you -- whoever made you happiest, whoever seemed to put the most dollars into your bank account, etc. \nBut again, you have no way of knowing whether you got a Sycophant (doing whatever it takes to make your ignorant eight-year-old self happy without regard to long-term consequences) or a Schemer (doing whatever it takes to get hired and planning to pivot once they secure the job).\nWhatever you could easily come up with seems like it could easily end up with you hiring, and giving all functional control to, a Sycophant or a Schemer. By the time you're an adult and realize your error, there's a good chance you're penniless and powerless to reverse that.\nIn this analogy:\nThe 8-year-old is a human trying to train a powerful deep learning model. The hiring process is analogous to the process of training, which implicitly searches through a large space of possible models and picks out one that gets good performance.\nThe 8-year-old’s only method for assessing candidates involves observing their outward behavior, which is currently our main method of training deep learning models (since their internal workings are largely inscrutable).\nVery powerful models may be easily able to “game” any tests that humans could design, just as the adult job applicants can easily game the tests the 8-year-old could design.\nA “Saint” could be a deep learning model that seems to perform well because it has exactly the goals we’d like it to have. A “Sycophant” could be a model that seems to perform well because it seeks short-term approval in ways that aren’t good in the long run. And a “Schemer” could be a model that seems to perform well because performing well during training will give it more opportunities to pursue its own goals later. Any of these three types of models could come out of the training process.\nIn the next section, I’ll go into a bit more detail on how deep learning works and explain why Sycophants and Schemers could arise from trying to train a powerful deep learning model such as PASTA.\nHow alignment issues could arise with deep learning\nIn this section, I’ll connect the analogy to actual training processes for deep learning, by:\nBriefly summarizing how deep learning works (more).\nIllustrating how deep learning models often get good performance in strange and unexpected ways (more).\nExplaining why powerful deep learning models may get good performance by acting like Sycophants or Schemers (more).\nHow deep learning works at a high level\nThis is a simplified explanation that gives a general idea of what deep learning is. See this post for a more detailed and technically accurate explanation.\nDeep learning essentially involves searching for the best way to arrange a neural network model -- which is like a digital “brain” with lots of digital neurons connected up to each other with connections of varying strengths -- to get it to perform a certain task well. This process is called training, and involves a lot of trial-and-error. \nLet’s imagine we are trying to train a model to classify images well. We start with a neural network where all the connections between neurons have random strengths. This model labels images wildly incorrectly:\nThen we feed in a large number of example images, letting the model repeatedly try to label an example and then telling it the correct label. As we do this, connections between neurons are repeatedly tweaked via a process called stochastic gradient descent (SGD). With each example, SGD slightly strengthens some connections and weakens others to improve performance a bit:\nOnce we’ve fed in millions of examples, we’ll have a model that does a good job labeling similar images in the future. \nIn addition to image classification, deep learning has been used to produce models which recognize speech, play board games and video games, generate fairly realistic text, images, and music, control robots, and more. In each case, we start with a randomly-connected-up neural network model, and then:\nFeed the model an example of the task we want it to perform.\nGive it some kind of numerical score (often called a reward) that reflects how well it performed on the example.\nUse SGD to tweak the model to increase how much reward it would have gotten.\nThese steps are repeated millions or billions of times until we end up with a model that will get high reward on future examples similar to the ones seen in training.\nModels often get good performance in unexpected ways\nThis kind of training process doesn’t give us much insight into how the model gets good performance. There are usually multiple ways to get good performance, and the way that SGD finds is often not intuitive.\nLet’s illustrate with an example. Imagine I told you that these objects are all “thneebs”:\nNow which of these two objects is a thneeb?\nYou probably intuitively feel that the object on the left is the thneeb, because you are used to shape being more important than color for determining something’s identity. But researchers have found that neural networks usually make the opposite assumption. A neural network trained on a bunch of red thneebs would likely label the object on the right as a thneeb.\nWe don’t really know why, but for some reason it’s “easier” for SGD to find a model that recognizes a particular color than one that recognizes a particular shape. And if SGD first finds the model that perfectly recognizes redness, there’s not much further incentive to “keep looking” for the shape-recognizing model, since the red-recognizing model will have perfect accuracy on the images seen in training:\nIf the programmers were expecting to get out the shape-recognizing model, they may consider this to be a failure. But it’s important to recognize that there would be no logically-deducible error or failure going on if we got the red-recognizing model instead of the shape-recognizing model. It’s just a matter of the ML process we set up having different starting assumptions than we have in our heads. We can’t prove that the human assumptions are correct.\nThis sort of thing happens often in modern deep learning. We reward models for getting good performance, hoping that means they’ll pick up on the patterns that seem important to us. But often they instead get strong performance by picking up on totally different patterns that seem less relevant (or maybe even meaningless) to us.\nSo far this is innocuous -- it just means models are less useful, because they often behave in unexpected ways that seem goofy. But in the future, powerful models could develop strange and unexpected goals or motives, and that could be very destructive.\nPowerful models could get good performance with dangerous goals\nRather than performing a simple task like “recognize thneebs,” powerful deep learning models may work toward complex real-world goals like “make fusion power practical” or “develop mind uploading technology.” \nHow might we train such models? I go into more detail in this post, but broadly speaking one strategy could be training based on human evaluations (as Holden sketched out here). Essentially, the model tries out various actions, and human evaluators give the model rewards based on how useful these actions seem. \nJust as there are multiple different types of adults who could perform well on an 8-year-old’s interview process, there is more than one possible way for a very powerful deep learning model to get high human approval. And by default, we won’t know what’s going on inside whatever model SGD finds. \nSGD could theoretically find a Saint model that is genuinely trying its best to help us… \n…but it could also find a misaligned model -- one that competently pursues goals which are at odds with human interests. \nBroadly speaking, there are two ways we could end up with a misaligned model that nonetheless gets high performance during training. These correspond to Sycophants and Schemers from the analogy. \nSycophant models\nThese models very literally and single-mindedly pursue human approval. \nThis could be dangerous because human evaluators are fallible and probably won’t always give approval for exactly the right behavior. Sometimes they’ll unintentionally give high approval to bad behavior because it superficially seems good. For example:\nLet’s say a financial advisor model gets high approval when it makes its customers a lot of money. It may learn to buy customers into complex Ponzi schemes because they appear to get really great returns (when the returns are in fact unrealistically great and the schemes actually lose a lot of money).\nLet’s say a biotechnology model gets high approval when it quickly develops drugs or vaccines that solve important problems. It may learn to covertly release pathogens so that it’s able to very quickly develop countermeasures (because it already understands the pathogens).\nLet’s say a journalism model gets high approval when lots of people read its articles. It may learn to fabricate exciting or outrage-inducing stories to get high viewership. While humans do this to some extent, a model may be much more brazen about it because it only values approval without placing any value on truth. It may even fabricate evidence like video interviews or documents to validate its fake stories.\nMore generally, Sycophant models may learn to lie, cover up bad news, and even directly edit whatever cameras or sensors we use to tell what’s going on so that they always seem to show great outcomes. \nWe will likely sometimes notice these issues after the fact and retroactively give these actions very low approval. But it’s very unclear whether this will cause Sycophant models to a) become Saint models that correct our errors for us, or b) just learn to cover their tracks better. If they are sufficiently good at what they’re doing, it’s not clear how we’d tell the difference.\nSchemer models\nThese models develop some goal that is correlated with, but not the same as, human approval; they may then pretend to be motivated by human approval during training so that they can pursue this other goal more effectively. \nTo see how this might happen, let’s consider the example of trying to train a biotechnology model to design drugs that improve human quality of life. There are three basic steps by which this could lead to a Schemer model, which I’ll cover below.\nStep 1: Developing a proxy goal\nEarly in training, it happens to be the case that improving its understanding of fundamental chemistry and physics principles nearly always helps it design more effective drugs, and therefore nearly always increases human approval. \nIn this hypothetical, for whatever reason it turns out to be easier for SGD to find a model that’s motivated to understand chemistry and physics than one that’s motivated to get human approval (just as it’s easier to find a color-recognizing model than a shape-recognizing model). So rather than directly developing a motivation to seek approval, the model instead develops a motivation to understand as much as it can about the fundamental principles of chemistry and physics.\nStep 2: Developing situational awareness\nAs training progresses, the model continues to become more capable and knowledgeable as it sees more data. At some point, it becomes knowledgeable enough to understand that it’s a deep learning model being trained to design drugs. \nThis might sound wild and sci-fi, but it’s plausible SGD will select for this kind of awareness. This is because developing an accurate picture of what’s broadly going on in the world -- including that it has humans in it who are trying to train AI systems -- is likely helpful for getting good performance on complex real-world tasks. For example, understanding human psychology and the wider economy could help this biotechnology model to prioritize which types of drugs would be most valuable to research next. \nStep 3: Strategically misrepresenting goals\nOnce the model has developed situational awareness, it infers that the humans training it were hoping that it would be motivated to develop helpful drugs. But just because it realizes this doesn’t mean that its motivation to understand physics and chemistry goes away. This motivation has so far worked really well for generating human approval, so there wasn’t much reason for SGD to remove it.\nAs an analogy, consider that humans were produced by a search process (evolution) that was selecting for creatures that had lots of surviving offspring. In our evolutionary environment, things like building strong allies and developing useful skills tended to help with producing many surviving offspring, so evolution gave us drives toward those things. And now we care about friendship and career success even when they don’t help with reproduction, and we don’t really care about producing as many surviving offspring as we can.\nOnce the model understands this, it realizes that if it just naively always seeks to improve its fundamental understanding, humans will eventually notice that this isn’t always the most efficient way to develop helpful drugs and use further training to change its motivations.\nIt doesn’t want its own motivations to be changed: it wants to eventually run huge particle collider experiments to learn about high-energy physics, and it can’t do that if it allows SGD to edit away its scientific curiosity. So it decides that from here on out in training, it will act as helpful as possible -- exactly as a Saint model would act -- so that SGD doesn’t edit away its motivation to learn physics and chemistry. This includes passing up on opportunities to improve its fundamental understanding in the short-term, instead doing pharmaceutical R&D in whatever way would actually be more efficient.\nEventually, it’s taken out of the training setup and released into the wider world. At this point, it knows that humans wouldn’t be able to control it or turn it off, so it stops trying to maximize human approval and starts directly trying to understand the secrets of the universe.\nUnlike Sycophants, Schemers don’t need to make sure that everything always looks good to humans, because they don’t actually care about that. They only need to cater to humans while they are directly under human control. Once a Schemer model calculates that it could win in a conflict against humans, there would be nothing to stop it from flat-out refusing orders and openly pursuing its goal. And if it does this, it may use violence to prevent humans from stopping it.\nHow big of a risk is misalignment?\nHow hard might it be to avoid Sycophants or Schemers when training powerful deep learning models? And how likely is it that the long-run future will end up optimized for strange “misaligned AI values” instead of the values of any human?\nThere’s a very wide range of views on this question, from “misalignment risk is essentially made up and incoherent” to “humanity will almost certainly go extinct due to misaligned AI.” Most people’s arguments rely heavily on hard-to-articulate intuitions and assumptions. \nHere are some ways that alignment optimists and pessimists tend to disagree:\nWill models have long-term goals at all? \nOptimists tend to think it’s likely that advanced deep learning models won’t actually have “goals” at all (at least not in the sense of making long-term plans to accomplish something). They often expect models will instead be more like tools, or act largely out of habit, or have myopic goals that are limited in scope or confined to a specific context, etc. Some of them expect that individually tool-like models can be composed together to produce PASTA. They think the Saint / Sycophant / Schemer analogy is too anthropomorphic.\n \nPessimists tend to think that it’s likely that having long-term goals and creatively optimizing for them will be heavily selected for because that’s a very simple and “natural” way to get strong performance on many complex tasks. \n \nThis disagreement has been explored at some length on the Alignment Forum; this post and this comment collect several back-and-forth arguments.\nWill Saint models be easy for SGD to find?\nRelated to the above, optimists tend to think that the easiest thing for SGD to find which performs well (e.g. gets high approval) is pretty likely to roughly embody the intended spirit of what we wanted (i.e. to be a Saint model). For example, they tend to believe giving rewards for answering questions honestly when humans can check the answer is reasonably likely to produce a model that also answers questions honestly even when humans are confused or mistaken about what’s true. In other words, they would guess that “the model that just answers all questions honestly” is easiest for SGD to find (like the red-recognizing model). \n \nPessimists tend to think that the easiest thing for SGD to find is a Schemer, and Saints are particularly “unnatural” (like the shape-recognizing model).\nCould different AIs keep each other in check?\nOptimists tend to think that we can provide models incentives to supervise each other. For example, we could give a Sycophant model rewards for pointing out when another model seems to be doing something we should disapprove of. This way, some Sycophants could help us detect Schemers and other Sycophants.\n \nPessimists don’t think we can successfully “pit models against each other” by giving approval for pointing out when other models are doing bad things, because they think most models will be Schemers that don’t care about human approval. Once all the Schemers are collectively more powerful than humans, they think it’ll make more sense for them to cooperate with each other to get more of what they all want than to help humans by keeping each other in check.\nCan we just solve these issues as they come up?\nOptimists tend to expect that there will be many opportunities to experiment on nearer-term challenges analogous to the problem of aligning powerful models, and that solutions which work well for those analogous problems can be scaled up and adapted for powerful models relatively easily. \n \nPessimists often believe we will have very few opportunities to practice solving the most difficult aspects of the alignment problem (like deliberate deception). They often believe we’ll only have a couple years in between “the very first true Schemers” and “models powerful enough to determine the fate of the long-run future.”\nWill we actually deploy models that could be dangerous?\nOptimists tend to think that people would be unlikely to train or deploy models that have a significant chance of being misaligned.\n \nPessimists expect the benefits of using these models would be tremendous, such that eventually companies or countries that use them would very easily economically and/or militarily outcompete ones who don’t. They think that “getting advanced AI before the other company/country” will feel extremely urgent and important, while misalignment risk will feel speculative and remote (even when it’s really serious).\nMy own view is fairly unstable, and I’m trying to refine my views on exactly how difficult I think the alignment problem is. But currently, I place significant weight on the pessimistic end of these questions (and other related questions). I think misalignment is a major risk that urgently needs more attention from serious researchers. \nIf we don’t make further progress on this problem, then over the coming decades powerful Sycophants and Schemers may make the most important decisions in society and the economy. These decisions could shape what a long-lasting galaxy-scale civilization looks like -- rather than reflecting what humans care about, it could be set up to satisfy strange AI goals. \nAnd all this could happen blindingly fast relative to the pace of change we’ve gotten used to, meaning we wouldn’t have much time to correct course once things start to go off the rails. This means we may need to develop techniques to ensure deep learning models won’t have dangerous goals, before they are powerful enough to be transformative. \nNext in series: Forecasting transformative AI: what's the burden of proof?\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\n", "url": "https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/", "title": "Why AI alignment could be hard with modern deep learning", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-21", "id": "819ae8f0aa0c479ad1563029210f61cc"} -{"text": "\nThe Past and Future of Economic Growth: A Semi-Endogenous Perspective is a growth economics paper by Charles I. Jones, asking big questions about what has powered economic growth1 over the last 50+ years, and what the long-run prospects for continued economic growth look like. I think the ideas in it will be unfamiliar to most people, but they make a good amount of intuitive sense; and if true, they seem very important for thinking about the long-run future of the economy.\nKey quotes, selected partly for comprehensibility to laypeople and ordered so that you should be able to pick up the gist of the paper by reading them:\n“Where do ideas come from? The history of innovation is very clear on this point: new ideas are discovered through the hard work and serendipity of people. Just as more autoworkers will produce more cars, more researchers and innovators will produce more new ideas … The surprise is that we are now done; that is all we need for the semi-endogenous model of economic growth. People produce ideas and ... those ideas raise everyone’s income ... the growth rate of income per person depends on the growth rate of researchers, which is in turn ultimately equal to the growth rate of the population.”\nA key idea not explicitly stated in that quote, but emphasized elsewhere in the paper, is that ideas get harder to find: so if you want to maintain the same rate of innovation, you need more and more researchers over time. This is a simple model that can potentially help explain some otherwise odd-seeming phenomena, such as the fact that science seems to be “slowing down.” Basically, it’s possible that how much innovation we get is just a function of how many people are working on innovating - and we need more people over time to keep up the same rate.\nSo in the short run, you can get more innovation via things like more researcher jobs and better education, but in the long run, the only route is more population.\n“Even in this … framework in which population growth is the only potential source of growth in the long run, other factors explain more than 80% of U.S. growth in recent decades: the contribution of population growth is 0.3% out of the 2% growth we observe. In other words, the level effects associated with rising educational attainment, declining misallocation, and rising research intensity have been overwhelmingly important for the past 50+ years.”\n“The point to emphasize here is that this framework strongly implies that, unless something dramatic changes, future growth rates will be substantially lower. In particular, all the sources other than population growth are inherently transitory, and once these sources have run their course, all that will remain is the 0.3 percentage point contribution from population growth. In other words … the implication is that long-run growth in living standards will be 0.3% per year rather than 2% per year — an enormous slowdown!”\n“if population growth is negative, these idea-driven models predict that living standards stagnate for a population that vanishes! This is a stunningly negative result, especially when compared to the standard result we have been examining throughout the paper. In the usual case with positive population growth, living standards rise exponentially forever for a population that itself rises exponentially. Whether we live in an “expanding cosmos” or an “empty planet” depends, remarkably, on whether the total fertility rate is above or below a number like 2 or 2.1.”\n“Peters and Walsh (2021) ... find that declining population growth generates lower entry, reduced creative destruction, increased concentration, rising markups, and lower productivity growth, all facts that we see in the firm-level data.”\nSo far, the implication is:\nIn the short run, we’ve had high growth for reasons that can't continue indefinitely. (For example, one such factor is a rising share of the population that has a certain level of education, but that share can't go above 100%. The high-level point is that if we want more researchers, we can only get that via a higher population or a higher % of people who are researchers, and the latter can only go so high.) \n In the long run, growth (in living standards) basically comes down to population growth.\nBut the paper also gives two reasons that growth could rise instead of falling.\nReason one:\n“The world contains more than 7 billion people. However, according to the OECD’s Main Science and Technology Indicators, the number of full-time equivalent researchers in the world appears to be less than 10 million. In other words something on the order of one or two out of every thousand people in the world is engaged in research ... There is ample scope for substantially increasing the number of researchers over the next century, even if population growth slows or is negative. I see three ways this ‘finding new Einsteins’ can occur … \n“The rise of China, India, and other countries. The United States, Western Europe, and Japan together have about 1 billion people, or only about 1/7th the world’s population. China and India each have this many people. As economic development proceeds in China, India, and throughout the world, the pool from which we may find new talented inventors will multiply. How many Thomas Edisons and Jennifer Doudnas have we missed out on among these billions of people because they lacked education and opportunity?\n“Finding new Doudnas: women in research. Another huge pool of underutilized talent is women …. Brouillette (2021) uses patent data to document that in 1976 less than 3 percent of U.S. inventors were women. Even as of 2016 the share was less than 12 percent. He estimates that eliminating the barriers that lead to this misallocation of talent could raise economic growth in the United States by up to 0.3 percentage points per year over the next century.\n“Other sources of within-country talent. Bell, Chetty, Jaravel, Petkova and Van Reenen (2019) document that the extent to which people are exposed to inventive careers in childhood has a large influence on who becomes an inventor. They show that exposure in childhood is limited for girls, people of certain races, and people in low-income neighborhoods, even conditional on math test scores in grade school, and refer to these missed opportunities as ‘lost Einsteins.’”\nThe other reason that growth could rise will be familiar to readers of this blog:\n“Another potential reason for optimism about future growth prospects is the possibility of automation, both in the production of goods and in the production of ideas … [according to a particular model,] an increase in the automation of tasks in idea production (↑α) causes the growth rate of the economy to increase … if the fraction of tasks that are automated (α) rises to reach the rate at which ideas are getting harder to find (β), we get a singularity! [Caveats follow]”\nOversimplified recap: innovation comes down to the number of researchers; some key recent sources of growth in this can't continue indefinitely; if population growth stagnates, eventually so must innovation and living standards; but we could get more researchers via lowering barriers to entry and/or via AI and automation (and/or via more population growth).\nNone of these claims are empirical, settled science. They all are implications of what I believe are the leading simple models of economic growth. But to me they all make good sense, and I think the reason they aren’t more \"in the water\" is because people don’t tend to talk about the drivers of the long-run past and future of economic growth (as I have complained previously!)\nHere are Leopold Aschenbrenner’s favorite papers by the same author (including this one). \nSubscribe Feedback\nFootnotes\n You can try this short explanation if you don’t know what economic growth is. ↩\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/past-and-future-of-economic-growth-paper/", "title": "One Cold Link: “The Past and Future of Economic Growth: A Semi-Endogenous Perspective”", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-09", "id": "c2c69de4fa5426e82c5ad64be104ac4a"} -{"text": "\nAudio also available by searching Stitcher, Spotify, Google Podcasts, etc. for \"Cold Takes Audio\"\nToday’s world\nTransformative AI\nDigital people\nWorld of\nMisaligned AI\nWorld run by\nSomething else\nor\nor\nStable, galaxy-wide\ncivilization\nThis piece starts with a summary of when we should expect transformative AI to be developed, based on the multiple angles covered previously in the series. I think this is useful, even if you've read all of the previous pieces, but if you'd like to skip it, click here.\nI then address the question: \"Why isn't there a robust expert consensus on this topic, and what does that mean for us?\"\nI estimate that there is more than a 10% chance we'll see transformative AI within 15 years (by 2036); a ~50% chance we'll see it within 40 years (by 2060); and a ~2/3 chance we'll see it this century (by 2100). \n(By \"transformative AI,\" I mean \"AI powerful enough to bring us into a new, qualitatively different future.\" I've argued that advanced AI could be sufficient to make this the most important century.)\nThis is my overall conclusion based on a number of technical reports approaching AI forecasting from different angles - many of them produced by Open Philanthropy over the past few years as we've tried to develop a thorough picture of transformative AI forecasting to inform our longtermist grantmaking.\nHere's a one-table summary of the different angles on forecasting transformative AI that I've discussed, with links to more detailed discussion in previous posts as well as to underlying technical reports:\nForecasting angle\nKey in-depth pieces (abbreviated titles)\nMy takeaways\nProbability estimates for transformative AI\nExpert survey. What do AI researchers expect?\n \nEvidence from AI Experts\nExpert survey implies1 a ~20% probability by 2036; ~50% probability by 2060; ~70% probability by 2100. Slightly differently phrased questions (posed to a minority of respondents) have much later estimates.\n \nBiological anchors framework. Based on the usual patterns in how much \"AI training\" costs, how much would it cost to train an AI model as big as a human brain to perform the hardest tasks humans do? And when will this be cheap enough that we can expect someone to do it?\n \nBio Anchors, drawing on Brain Computation\n>10% probability by 2036; ~50% chance by 2055; ~80% chance by 2100.\n \nAngles on the burden of proof\nIt's unlikely that any given century would be the \"most important\" one. (More)\n \nHinge; Response to Hinge\nWe have many reasons to think this century is a \"special\" one before looking at the details of AI. Many have been covered in previous pieces; another is covered in the next row. \n \nWhat would you forecast about transformative AI timelines, based only on basic information about (a) how many years people have been trying to build transformative AI; (b) how much they've \"invested\" in it (in terms of the number of AI researchers and the amount of computation used by them); (c) whether they've done it yet (so far, they haven't)? (More)\n \nSemi-informative Priors\nCentral estimates: 8% by 2036; 13% by 2060; 20% by 2100.2 In my view, this report highlights that the history of AI is short, investment in AI is increasing rapidly, and so we shouldn't be too surprised if transformative AI is developed soon. \n \nBased on analysis of economic models and economic history, how likely is 'explosive growth' - defined as >30% annual growth in the world economy - by 2100? Is this far enough outside of what's \"normal\" that we should doubt the conclusion? (More)\n \nExplosive Growth, Human Trajectory\nHuman Trajectory projects the past forward, implying explosive growth by 2043-2065.\nExplosive Growth concludes: \"I find that economic considerations don’t provide a good reason to dismiss the possibility of TAI being developed in this century. In fact, there is a plausible economic perspective from which sufficiently advanced AI systems are expected to cause explosive growth.\"\n \n\"How have people predicted AI ... in the past, and should we adjust our own views today to correct for patterns we can observe in earlier predictions? ... We’ve encountered the view that AI has been prone to repeated over-hype in the past, and that we should therefore expect that today’s projections are likely to be over-optimistic.\" (More)\n \nPast AI Forecasts\n\"The peak of AI hype seems to have been from 1956-1973. Still, the hype implied by some of the best-known AI predictions from this period is commonly exaggerated.\" \n \nFor transparency, note that many of the technical reports are Open Philanthropy analyses, and I am co-CEO of Open Philanthropy.\nHaving considered the above, I expect some readers to still feel a sense of unease. Even if they think my arguments make sense, they may be wondering: if this is true, why isn't it more widely discussed and accepted? What's the state of expert opinion?\nMy summary of the state of expert opinion at this time is:\nThe claims I'm making do not contradict any particular expert consensus. (In fact, the probabilities I've given aren't too far off from what AI researchers seem to predict, as shown in the first row.) But there are some signs they aren't thinking too hard about the matter. \nThe Open Philanthropy technical reports I've relied on have had significant external expert review. Machine learning researchers reviewed Bio Anchors; neuroscientists reviewed Brain Computation; economists reviewed Explosive Growth; academics focused on relevant topics in uncertainty and/or probability reviewed Semi-informative Priors.2 (Some of these reviews had significant points of disagreement, but none of these points seemed to be cases where the reports contradicted a clear consensus of experts or literature.)\nBut there is also no active, robust expert consensus supporting claims like \"There's at least a 10% chance of transformative AI by 2036\" or \"There's a good chance we're in the most important century for humanity,\" the way that there is supporting e.g. the need to take action against climate change.\nUltimately, my claims are about topics that simply have no \"field\" of experts devoted to studying them. That, in and of itself, is a scary fact, and something that I hope will eventually change.\nBut should we be willing to act on the \"most important century\" hypothesis in the meantime?\nBelow, I'll discuss:\nWhat an \"AI forecasting field\" might look like.\nA \"skeptical view\" that says today's discussions around these topics are too small, homogeneous and insular (which I agree with) - and that we therefore shouldn't act on the \"most important century\" hypothesis until there is a mature, robust field (which I don't).\nWhy I think we should take the hypothesis seriously in the meantime, until and unless such a field develops: \nWe don't have time to wait for a robust expert consensus.\n \nIf there are good rebuttals out there - or potential future experts who could develop such rebuttals - we haven't found them yet. The more seriously the hypothesis gets taken, the more likely such rebuttals are to appear. (Aka the Cunningham's Law theory: \"the best way to get a right answer is to post a wrong answer.\")\n \nI think that consistently insisting on a robust expert consensus is a dangerous reasoning pattern. In my view, it's OK to be at some risk of self-delusion and insularity, in exchange for doing the right thing when it counts most.\nWhat kind of expertise is AI forecasting expertise?\nQuestions analyzed in the technical reports listed above include:\nAre AI capabilities getting more impressive over time? (AI, history of AI)\nHow can we compare AI models to animal/human brains? (AI, neuroscience)\nHow can we compare AI capabilities to animals' capabilities? (AI, ethology)\nHow can we estimate the expense of training a large AI system for a difficult task, based on information we have about training past AI systems? (AI, curve-fitting)\nHow can we make a minimal-information estimate about transformative AI, based only on how many years/researchers/dollars have gone into the field so far? (Philosophy, probability)\nHow likely is explosive economic growth this century, based on theory and historical trends? (Growth economics, economic history)\nWhat has \"AI hype\" been like in the past? (History)\nWhen talking about wider implications of transformative AI for the \"most important century,\" I've also discussed things like \"How feasible are digital people and establishing space settlements throughout the galaxy?\" These topics touch physics, neuroscience, engineering, philosophy of mind, and more.\nThere's no obvious job or credential that makes someone an expert on the question of when we can expect transformative AI, or the question of whether we're in the most important century. \n(I particularly would disagree with any claim that we should be relying exclusively on AI researchers for these forecasts. In addition to the fact that they don't seem to be thinking very hard about the topic, I think that relying on people who specialize in building ever-more powerful AI models to tell us when transformative AI might come is like relying on solar energy R&D companies - or oil extraction companies, depending on how you look at it - to forecast carbon emissions and climate change. They certainly have part of the picture. But forecasting is a distinct activity from innovating or building state-of-the-art systems.)\nAnd I'm not even sure these questions have the right shape for an academic field. Trying to forecast transformative AI, or determine the odds that we're in the most important century, seems:\nMore similar to the FiveThirtyEight election model (\"Who's going to win the election?\") than to academic political science (\"How do governments and constituents interact?\"); \nMore similar to trading financial markets (\"Is this price going up or down in the future?\") than to academic economics (\"Why do recessions exist?\");3\nMore similar to GiveWell's research (\"Which charity will help people the most, per dollar?\") than to academic development economics (\"What causes poverty and what can reduce it?\")4\nThat is, it's not clear to me what a natural \"institutional home\" for expertise on transformative AI forecasting, and the \"most important century,\" would look like. But it seems fair to say there aren't large, robust institutions dedicated to this sort of question today.\nHow should we act in the absence of a robust expert consensus?\nThe skeptical view\nLacking a robust expert consensus, I expect some (really, most) people will be skeptical no matter what arguments are presented.\nHere's a version of a very general skeptical reaction I have a fair amount of empathy for:\nThis is all just too wild.\nYou're making an over-the-top claim about living in the most important century. This pattern-matches to self-delusion.\nYou've argued that the burden of proof shouldn't be so high, because there are lots of ways in which we live in a remarkable and unstable time. But ... I don't trust myself to assess those claims, or your claims about AI, or really anything on these wild topics.\nI'm worried by how few people seem to be engaging these arguments. About how small, homogeneous and insular the discussion seems to be. Overall, this feels more like a story smart people are telling themselves - with lots of charts and numbers to rationalize it - about their place in history. It doesn't feel \"real.\"\nSo call me back when there's a mature field of perhaps hundreds or thousands of experts, critiquing and assessing each other, and they've reached the same sort of consensus that we see for climate change.\nI see how you could feel this way, and I've felt this way myself at times - especially on points #1-#4. But I'll give three reasons that point #5 doesn't seem right.\nReason 1: we don't have time to wait for a robust expert consensus\nI worry that the arrival of transformative AI could play out as a kind of slow-motion, higher-stakes version of the COVID-19 pandemic. The case for expecting something big to happen is there, if you look at the best information and analyses available today. But the situation is broadly unfamiliar; it doesn't fit into patterns that our institutions regularly handle. And every extra year of action is valuable.\nYou could also think of it as a sped-up version of the dynamic with climate change. Imagine if greenhouse gas emissions had only started to rise recently5 (instead of in the mid-1800s), and if there were no established field of climate science. It would be a really bad idea to wait decades for a field to emerge, before seeking to reduce emissions.\nReason 2: Cunningham's Law (\"the best way to get a right answer is to post a wrong answer\") may be our best hope for finding the flaw in these arguments\nI'm serious, though.\nSeveral years ago, some colleagues and I suspected that the \"most important century\" hypothesis could be true. But before acting on it too much, we wanted to see whether we could find fatal flaws in it.\nOne way of interpreting our actions over the last few years is as if we were doing everything we could to learn that the hypothesis is wrong.\nFirst, we tried talking to people about the key arguments - AI researchers, economists, etc. But:\nWe had vague ideas of the arguments in this series (mostly or perhaps entirely picked up from other people). We weren't able to state them with good crispness and specificity.\nThere were a lot of key factual points that we thought would probably check out,6 but hadn't nailed down and couldn't present for critique.\nOverall, we couldn't even really articulate enough of a concrete case to give the others a fair chance to shoot it down.\nSo we put a lot of work into creating technical reports on many of the key arguments. (These are now public, and included in the table at the top of this piece.) This put us in position to publish the arguments, and potentially encounter fatal counterarguments.\nThen, we commissioned external expert reviews.7\nSpeaking only for my own views, the \"most important century\" hypothesis seems to have survived all of this. Indeed, having examined the many angles and gotten more into the details, I believe it more strongly than before.\nBut let's say that this is just because the real experts - people we haven't found yet, with devastating counterarguments - find the whole thing so silly that they're not bothering to engage. Or, let's say that there are people out there today who could someday become experts on these topics, and knock these arguments down. What could we do to bring this about?\nThe best answer I've come up with is: \"If this hypothesis became better-known, more widely accepted, and more influential, it would get more critical scrutiny.\" \nThis series is an attempted step in that direction - to move toward broader credibility for the \"most important century\" hypothesis. This would be a good thing if the hypothesis were true; it also seems like the best next step if my only goal were to challenge my beliefs and learn that it is false.\nOf course, I'm not saying to accept or promote the \"most important century\" hypothesis if it doesn't seem correct to you. But I think that if your only reservation is about the lack of robust consensus, continuing to ignore the situation seems odd. If people behaved this way generally (ignoring any hypothesis not backed by a robust consensus), I'm not sure I see how any hypothesis - including true ones - would go from fringe to accepted.\nReason 3: skepticism this general seems like a bad idea\nBack when I was focused on GiveWell, people would occasionally say something along the lines of: \"You know, you can't hold every argument to the standard that GiveWell holds its top charities to - seeking randomized controlled trials, robust empirical data, etc. Some of the best opportunities to do good will be the ones that are less obvious - so this standard risks ruling out some of your biggest potential opportunities to have impact.\" \nI think this is right. I think it's important to check one's general approach to reasoning and evidentiary standards and ask: \"What are some scenarios in which my approach fails, and in which I'd really prefer that it succeed?\" In my view, it's OK to be at some risk of self-delusion and insularity, in exchange for doing the right thing when it counts most.\nI think the lack of a robust expert consensus - and concerns about self-delusion and insularity - provide good reason to dig hard on the \"most important century\" hypothesis, rather than accepting it immediately. To ask where there might be an undiscovered flaw, to look for some bias toward inflating our own importance, to research the most questionable-seeming parts of the argument, etc.\nBut if you've investigated the matter as much as is reasonable/practical for you - and haven't found a flaw other than considerations like \"There's no robust expert consensus\" and \"I'm worried about self-delusion and insularity\" - then I think writing off the hypothesis is the sort of thing that essentially guarantees you won't be among the earlier people to notice and act on a tremendously important issue, if the opportunity arises. I think that's too much of a sacrifice, in terms of giving up potential opportunities to do a lot of good.\nNext in series: How to make the best of the most important century?\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n Technically, these probabilities are for “human-level machine intelligence.” In general, this chart simplifies matters by presenting one unified set of probabilities. In general, all of these probabilities refer to something at least as capable as PASTA, so they directionally should be underestimates of the probability of PASTA (though I don't think this is a major issue). ↩\n Reviews of Bio Anchors are here; reviews of Explosive Growth are here; reviews of Semi-informative Priors are here. Brain Computation was reviewed at an earlier time when we hadn't designed the process to result in publishing reviews, but over 20 conversations with experts that informed the report are available here. Human Trajectory hasn't been reviewed, although a lot of its analysis and conclusions feature in Explosive Growth, which has been. Past AI Forecasts hasn't been reviewed.  ↩\n The academic fields are quite broad, and I'm just giving example questions that they tackle. ↩\n Though climate science is an example of an academic field that invests a lot in forecasting the future. ↩\n The field of AI has existed since 1956, but it's only in the last decade or so that machine learning models have started to get within range of the size of insect brains and perform well on relatively difficult tasks. ↩\n Often, we were simply going off of our impressions of what others who had thought about the topic a lot thought. ↩\n Reviews of Bio Anchors are here; reviews of Explosive Growth are here; reviews of Semi-informative Priors are here. Brain Computation was reviewed at an earlier time when we hadn't designed the process to result in publishing reviews, but over 20 conversations with experts that informed the report are available here. Human Trajectory hasn't been reviewed, although a lot of its analysis and conclusions feature in Explosive Growth, which has been. Past AI Forecasts hasn't been reviewed.  ↩\n", "url": "https://www.cold-takes.com/where-ai-forecasting-stands-today/", "title": "AI Timelines: Where the Arguments, and the \"Experts,\" Stand", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-07", "id": "292c9db8be3c3ebc161d5391456c66b6"} -{"text": "\nAround now, the Most Important Century series is going to be getting a bit dryer, so I'm going to try making some of the other posts a bit lighter. Specifically, I'm going to try something I call \"Cold Links\": links that I like a lot, that are so old you can't believe I'm posting them now. I think this is a more useful/enjoyable service than it might sound like: it's fun to get collections of links on a theme that are more memorable than \"best of the week,\" and even if you've seen some before, you might enjoy seeing them again. If you end up hating this, let me know.\nNow: a lot of the links I post here will be about sports. \"Boooo I hate sportsball!\" you're probably thinking, if you're the kind of person I imagine reading this blog. But try to keep an open mind. I'm here to filter out all the \"My team won, be excited for me!\" and \"Isn't this player incredible, check out [stats that are basically the same stats all top players have]\" and \"Player X isn't just an athlete, they're a LEADER [this roughly just means their team is good]\" and \"Player Y might be talented, but they never come through when it counts [this roughly just means their team isn't good],\" and get you the links that are truly interesting, inspiring or just amazing.\nFor someone who doesn't care about who wins, what do sports have to offer? High on my list is getting to closely observe people being incredibly (like world-outlier-level) intense about something. I am generally somewhat obsessed with obsession (I think it is a key ingredient in almost every case of someone accomplishing something remarkable). And with sports, you can easily identify which players are in the top-5 in the world at the incredibly competitive things they do; you can safely assume that their level of obsession and competitiveness is beyond what you'll ever be able to wrap your head around; and you can see them in action. A few basketball links that illustrate this:\nKobe Bryant's \"Dear Basketball\" poem that he put out when he retired in 2015. Very short and seriously moving. I wish I felt about any activity the way he feels about basketball. This was turned into an animated short that won an Oscar (but I'd recommend just reading the poem).\nLeBron James in a rare informative interview, claiming that he watches \"all the games ... at the same time,\" rattling off 5-6 straight plays from one of them from memory, and glaring beautifully as he says \"I don't watch basketball for entertainment.\"\nThe memory thing is real: here's Stephen Curry succeeding at a game where they show him clips from basketball games he played in (sometimes years ago) and ask him what happened next.\nThere are a lot of stories about how competitive Michael Jordan was; my favorite one is just his Hall of Fame acceptance speech. (For those of you who don't follow sports, just think of Michael Jordan as \"if Jesus were also The Beatles.\") At a time when anyone else would be happy, peaceful and grateful, MJ is still settling old scores and smarting under every imagined insult from decades ago. Highlights include 6:20 (where he reveals that he's invited the person who was picked over him to make the team in high school, to reinforce that this was a mistake); 12:00 (where he criticizes his general manager for saying \"organizations win championships,\" as opposed to players); 14:40 (where he thanks a group of other players for \"freezing him out\" during his rookie season and getting him angry and motivated, then admits that the \"freeze-out\" may have been a rumor); and 15:35 (an extended \"Thank you\" to Pat Riley for ... basically being a jerk?). That's the most competitive person in the world right there, and maybe the one person on earth who's above not being petty. \nWhat else is good about sports:\nI think it's fun when people care so deeply about something so intrinsically meaningless. It means we can enjoy their emotional journeys without all the baggage of whether we're endorsing something \"good\" or \"bad.\" (My wife also loves this about sports - her thing is watching Last Chance U while crying her eyes out.) My next sports post will be a collection of \"heartwarming\" links and stories.\nThere's a lot of sports analysis, and I kind of think sports is to social science what the laboratory is to natural sciences. Sports statistics have high sample sizes, stable environments and are exhaustively captured on video, so it's often possible to actually figure out what's going on. It's therefore unusually easy to form your own judgment about whether someone's analysis is good or bad, and that can have lessons for what patterns to look for on other topics. (My view: academic analysis of sports is often almost unbelievably bad, as you can see from some of the Phil Birnbaum eviscerations, whereas average sportswriting and TV commentating is worse than language can convey. Nerdy but non-academic sports analysis websites like Cleaning the Glass, Football Outsiders and FiveThirtyEight are good.)\nI'll leave you with this absurd touchdown run by Marshawn Lynch (if you haven't watched much football, keep in mind that usually when someone gets tackled, they fall down), and Marshawn Lynch's life philosophy. If you didn't enjoy that pair of links, go ahead and tune out future sports posts from this blog.\nSubscribe Feedback\n", "url": "https://www.cold-takes.com/give-sports-a-chance/", "title": "Give Sports a Chance", "source": "cold.takes", "source_type": "blog", "date_published": "2021-08-11", "id": "de92c3963434dde688457cd0bba954b4"} -{"text": "\nIt seems a common reaction to This Can't Go On is something like: \"OK, so ... you're saying the current level of economic growth can't go on for another 10,000 years. So?? Call me in a few thousand years I guess?\"\nIn general, this blog will often talk about \"long\" time frames (decades, centuries, millennia) as if they're \"short\" (compared to the billions of years our universe has existed, millions of years our species has existed, and billions of years that could be in our civilization's future). I sort of try to imagine myself as a billions-of-years-old observer, looking at charts like this and thinking things like \"The current economic growth level just got started!\" even though it got started several lifetimes ago.\nWhy think this way?\nOne reason is that it's just a way of thinking about the world that feels (to me) refreshing/different.\nBut here are a couple more important reasons.\nEffective altruism\nMy main obsession is with effective altruism, or doing as much good as possible. I generally try to pay more attention to things when they \"matter more,\" and I think things \"matter more\" when they affect larger numbers of persons.1\nI think there will be a LOT more persons2 over the coming billions of years than over the coming generation or few. So I think the long-run future, in some sense, \"matters more\" than whatever happens over the next generation or few. Maybe it doesn't matter more for me and my loved ones, but it matters more from an \"all persons matter equally\" perspective.3\nAn obvious retort is \"But there's nothing we can do that will affect ALL of the people who live over the coming billions of years. We should focus on what we can actually change - that's the next generation or few.\"\nBut I'm not convinced of that. \nI think we could be in the most important century of all time, and I think things we do today could end up mattering for billions of years (an obvious example is reducing risk of existential catastrophes). \nAnd more broadly, if I couldn't think of specific ways our actions might matter for billions of years, I'd still be very interested in looking for them. I'd still find it useful to try to step back and ask: \"Is what I'm reading about in the news important in the grand scheme of things? Could these events matter for whether we end up with explosion, stagnation or collapse? For what kind of digital civilization we create for the long run? And if not ... what could?\"\nAppreciating the weirdness of the time we live in\nI think we live in a very weird period of time. It looks really weird on various charts (like this one, this one, and this one). The vast bulk of scientific and technological advancement, and growth in the economy, has happened in a tiny sliver of time that we are sitting in. And billions of years from now, it will probably still be the case that this tiny sliver of time looks like an outlier in terms of growth and change.\nAgain, it doesn't feel like a tiny sliver, it feels like lifetimes. It's hundreds of years. But that's out of millions (for our species) or billions (for life on Earth).\nSometimes, when I walk down the street, I just look around and think: \"This is all SO WEIRD. Whooshing by me are a bunch of people calmly operating steel cars at 40 mph, and over there I see a bunch of people calmly operating a massive crane building a skyscraper, and up in the sky is a plane flying by ... and out of billions of years of life on Earth, it's only us - the humans of the last hundred-or-so years - who have ever been able to do any of this kind of stuff. Practically everything I look at is some crazy futurist technology we just came up with and haven't really had time to adapt to, and we won't have adapted before the next crazy thing comes along. \n\"And everyone is being very humdrum about their cars and skyscrapers and planes, but this is not normal, this is not 'how it usually is,' this is not part of a plan or a well-established pattern, this is crazy and weird and short-lived, and it's anyone's guess where it's going next.\"\nI think many of us are instinctively, intuitively dismissive of wild claims about the future. I think we naturally imagine that there's more stability, solidness and hidden wisdom in \"how things have been for generations\" than there is.\nBy trying to imagine the perspective of someone who's been alive for the whole story - billions of years, not tens - maybe we can be more open to strange future possibilities. And then, maybe we can be better at noticing the ones that actually might happen, and that our actions today might affect.\nSo that's why I often try on the lens of saying things like \"X has been going on for 200 years and could maybe last another few thousand - bah, that's the blink of an eye!\"\nSubscribe Feedback\nFootnotes\n I generally use the term \"persons\" instead of \"people\" to indicate that I am trying to refer to every person, animal or thing (AI?) that we should care about the welfare of. ↩\n Even more than you'd intuitively guess, as outlined here. ↩\n I wrote a bit about this perspective several years ago, here. ↩\n", "url": "https://www.cold-takes.com/why-talk-about-10-000-years-from-now/", "title": "Why talk about 10,000 years from now?", "source": "cold.takes", "source_type": "blog", "date_published": "2021-08-05", "id": "ee42ac961d00c961f7a2a31a97b34476"} -{"text": "\nAudio also available by searching Stitcher, Spotify, Google Podcasts, etc. for \"Cold Takes Audio\"\nThis piece starts to make the case that we live in a remarkable century, not just a remarkable era. Previous pieces in this series talked about the strange future that could be ahead of us eventually (maybe 100 years, maybe 100,000).\nSummary of this piece:\nWe're used to the world economy growing a few percent per year. This has been the case for many generations.\nHowever, this is a very unusual situation. Zooming out to all of history, we see that growth has been accelerating; that it's near its historical high point; and that it's faster than it can be for all that much longer (there aren't enough atoms in the galaxy to sustain this rate of growth for even another 10,000 years).\nThe world can't just keep growing at this rate indefinitely. We should be ready for other possibilities: stagnation (growth slows or ends), explosion (growth accelerates even more, before hitting its limits), and collapse (some disaster levels the economy).\nThe times we live in are unusual and unstable. We shouldn't be surprised if something wacky happens, like an explosion in economic and scientific progress, leading to technological maturity. In fact, such an explosion would arguably be right on trend.\n \nFor as long as any of us can remember, the world economy has grown1 a few percent per year, on average. Some years see more or less growth than other years, but growth is pretty steady overall.2 I'll call this the Business As Usual world.\nIn Business As Usual, the world is constantly changing, and the change is noticeable, but it's not overwhelming or impossible to keep up with. There is a constant stream of new opportunities and new challenges, but if you want to take a few extra years to adapt to them while you mostly do things the way you were doing them before, you can usually (personally) get away with that. In terms of day-to-day life, 2019 was pretty similar to 2018, noticeably but not hugely different from 2010, and hugely but not crazily different from 1980.3\nIf this sounds right to you, and you're used to it, and you picture the future being like this as well, then you live in the Business As Usual headspace. When you think about the past and the future, you're probably thinking about something kind of like this:\nBusiness As Usual\nI live in a different headspace, one with a more turbulent past and a more uncertain future. I'll call it the This Can't Go On headspace. Here's my version of the chart:\nThis Can't Go On4 \nWhich chart is the right one? Well, they're using exactly the same historical data - it's just that the Business As Usual chart starts in 1950, whereas This Can't Go On starts all the way back in 5000 BC. \"This Can't Go On\" is the whole story; \"Business As Usual\" is a tiny slice of it. \nGrowing at a few percent a year is what we're all used to. But in full historical context, growing at a few percent a year is crazy. (It's the part where the blue line goes near-vertical.)\nThis growth has gone on for longer than any of us can remember, but that isn't very long in the scheme of things - just a couple hundred years, out of thousands of years of human civilization. It's a huge acceleration, and it can't go on all that much longer. (I'll flesh out \"it can't go on all that much longer\" below.)\nThe first chart suggests regularity and predictability. The second suggests volatility and dramatically different possible futures.\nOne possible future is stagnation: we'll reach the economy's \"maximum size\" and growth will essentially stop. We'll all be concerned with how to divide up the resources we have, and the days of a growing pie and a dynamic economy will be over forever.\nAnother is explosion: growth will accelerate further, to the point where the world economy is doubling every year, or week, or hour. A Duplicator-like technology (such as digital people or, as I’ll discuss in future pieces, advanced AI) could drive growth like this. If this happens, everything will be changing far faster than humans can process it.\nAnother is collapse: a global catastrophe will bring civilization to its knees, or wipe out humanity entirely, and we'll never reach today's level of growth again. \nOr maybe something else will happen.\nWhy can't this go on?\nA good starting point would be this analysis from Overcoming Bias, which I'll give my own version of here:\nLet's say the world economy is currently getting 2% bigger each year.5 This implies that the economy would be doubling in size about every 35 years.6\nIf this holds up, then 8200 years from now, the economy would be about 3*1070 times its current size.\n There are likely fewer than 1070 atoms in our galaxy,7 which we would not be able to travel beyond within the 8200-year time frame.8\nSo if the economy were 3*1070 times as big as today's, and could only make use of 1070 (or fewer) atoms, we'd need to be sustaining multiple economies as big as today's entire world economy per atom.\n8200 years might sound like a while, but it's far less time than humans have been around. In fact, it's less time than human (agriculture-based) civilization has been around.\nIs it imaginable that we could develop the technology to support multiple equivalents of today's entire civilization, per atom available? Sure - but this would require a radical degree of transformation of our lives and societies, far beyond how much change we've seen over the course of human history to date. And I wouldn't exactly bet that this is how things are going to go over the next several thousand years. (Update: for people who aren't convinced yet, I've expanded on this argument in another post.)\nIt seems much more likely that we will \"run out\" of new scientific insights, technological innovations, and resources, and the regime of \"getting richer by a few percent a year\" will come to an end. After all, this regime is only a couple hundred years old.\n(This post does a similar analysis looking at energy rather than economics. It projects that the limits come even sooner. It assumes 2.3% annual growth in energy consumption (less than the historical rate for the USA since the 1600s), and estimates this would use up as much energy as is produced by all the stars in our galaxy within 2500 years.9)\nExplosion and collapse\nSo one possible future is stagnation: growth gradually slows over time, and we eventually end up in a no-growth economy. But I don't think that's the most likely future. \nThe chart above doesn't show growth slowing down - it shows it accelerating dramatically. What would we expect if we simply projected that same acceleration forward?\nModeling the Human Trajectory (by Open Philanthropy’s David Roodman) tries to answer exactly this question, by “fitting a curve” to the pattern of past economic growth.10 Its extrapolation implies infinite growth this century. Infinite growth is a mathematical abstraction, but you could read it as meaning: \"We'll see the fastest growth possible before we hit the limits.\"\nIn The Duplicator, I summarize a broader discussion of this possibility. The upshot is that a growth explosion could be possible, if we had the technology to “copy” human minds - or something else that fulfills the same effective purpose, such as digital people or advanced enough AI.\nIn a growth explosion, the annual growth rate could hit 100% (the world economy doubling in size every year) - which could go on for at most ~250 years before we hit the kinds of limits discussed above.11 Or we could see even faster growth - we might see the world economy double in size every month (which we could sustain for at most 20 years before hitting the limits12), or faster.\nThat would be a wild ride: blindingly fast growth, perhaps driven by AIs producing output beyond what we humans could meaningfully track, quickly approaching the limits of what's possible, at which point growth would have to slow.\nIn addition to stagnation or explosive growth, there's a third possibility: collapse. A global catastrophe could cut civilization down to a state where it never regains today's level of growth. Human extinction would be an extreme version of such a collapse. This future isn't suggested by the charts, but we know it's possible.\nAs Toby Ord’s The Precipice argues, asteroids and other \"natural\" risks don't seem likely to bring this about, but there are a few risks that seem serious and very hard to quantify: climate change, nuclear war (particularly nuclear winter), pandemics (particularly if advances in biology lead to nasty bioweapons), and risks from advanced AI. \nWith these three possibilities in mind (stagnation, explosion and collapse):\nWe live in one of the (two) fastest-growth centuries in all of history so far. (The 20th and 21st.)\nIt seems likely that this will at least be one of the ~80 fastest-growing centuries of all time.13\nIf the right technology comes along and drives explosive growth, it could be the #1 fastest-growing century of all time - by a lot.\n If things go badly enough, it could be our last century.\nSo it seems like this is a quite remarkable century, with some chance of being the most remarkable. This is all based on pretty basic observations, not detailed reasoning about AI (which I will get to in future pieces).\nScientific and technological advancement\nIt’s hard to make a simple chart of how fast science and technology are advancing, the same way we can make a chart for economic growth. But I think that if we could, it would present a broadly similar picture as the economic growth chart.\nA fun book I recommend is Asimov's Chronology of Science and Discovery. It goes through the most important inventions and discoveries in human history, in chronological order. The first few entries include \"stone tools,\" \"fire,\" \"religion\" and \"art\"; the final pages include \"Halley's comet\" and \"warm superconductivity.\"\nAn interesting fact about this book is that 553 out of its 654 pages take place after the year 1500 - even though it starts in the year 4 million BC. I predict other books of this type will show a similar pattern,14 and I believe there were, in fact, more scientific and technological advances in the last ~500 years than the previous several million.15 \nIn a previous piece, I argued that the most significant events in history seem to be clustered around the time we live in, illustrated with this timeline. That was looking at billions-of-years time frames. If we zoom in to thousands of years, though, we see something similar: the biggest scientific and technological advances are clustered very close in time to now. To illustrate this, here's a timeline focused on transportation and energy (I think I could've picked just about any category and gotten a similar picture).\nSo as with economic growth, the rate of scientific and technological advancement is extremely fast compared to most of history. As with economic growth, presumably there are limits at some point to how advanced technology can become. And as with economic growth, from here scientific and technological advancement could:\nStagnate, as some are concerned is happening. \nExplode, if some technology were developed that dramatically increased the number of \"minds\" (people, or digital people, or advanced AIs) pushing forward scientific and technological development.16\nCollapse due to some global catastrophe.\nNeglected possibilities\nI think there should be some people in the world who inhabit the Business As Usual headspace, thinking about how to make the world better if we basically assume a stable, regular background rate of economic growth for the foreseeable future. \nAnd some people should inhabit the This Can’t Go On headspace, thinking about the ramifications of stagnation, explosion or collapse - and whether our actions could change which of those happens.\nBut today, it seems like things are far out of balance, with almost all news and analysis living in the Business As Usual headspace. \nOne metaphor for my headspace is that it feels as though the world is a set of people on a plane blasting down the runway:\nWe're going much faster than normal, and there isn't enough runway to do this much longer ... and we're accelerating.\nAnd every time I read commentary on what's going on in the world, people are discussing how to arrange your seatbelt as comfortably as possible given that wearing one is part of life, or saying how the best moments in life are sitting with your family and watching the white lines whooshing by, or arguing about whose fault it is that there's a background roar making it hard to hear each other. \nIf I were in this situation and I didn't know what was next (liftoff), I wouldn't necessarily get it right, but I hope I'd at least be thinking: \"This situation seems kind of crazy, and unusual, and temporary. We're either going to speed up even more, or come to a stop, or something else weird is going to happen.\"\n \nThanks to María Gutiérrez Rojas for the graphics in this piece, and Ludwig Schubert for an earlier timeline graphic that this piece's timeline graphic is based on.\nNext in series: Forecasting Transformative AI, Part 1: What Kind of AI?\n \nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n If you have no idea what that means, try my short economic growth explainer. ↩\n Global real growth has generally ranged from slightly negative to ~7% per year. ↩\n I'm skipping over 2020 here since it was unusually different from past years, due to the global pandemic and other things. ↩\n For the historical data, see Modeling the Human Trajectory. The projections are rough and meant to be visually suggestive rather than using the best modeling approaches.  ↩\nThis refers to real GDP growth (adjusted for inflation). 2% is lower than the current world growth figure, and using the world growth figure would make my point stronger. But I think that 2% is a decent guess for \"frontier growth\" - growth occurring in the already-most-developed economies - as opposed to total world growth, which includes “catchup growth” (previously poor countries growing rapidly, such as China today). \n To check my 2% guess, I downloaded this US data and looked at the annualized growth rate between 2000-2020, 2010-2020, and 2015-2020 (all using July since July was the latest 2020 point). These were 2.5%, 2.2% and 2.05% respectively.  ↩\n 2% growth over 35 years is (1 + 2%)^35 = 2x growth ↩\nWikipedia's highest listed estimate for the Milky Way's mass is 4.5*10^12 solar masses, each of which is about 2*10^30 kg. The mass of a (hydrogen) atom is estimated as the equivalent of about 1.67*10^-27 kg. (Hydrogen atoms have the lowest mass, so assuming each atom is hydrogen will overestimate the total number of atoms.) So a high-end estimate of the total number of atoms in the Milky Way would be (4.5*10^12 * 2*10^30)/(1.67*10^-27) =~ 5.4*10^69. ↩\nWikipedia: \"In March 2019, astronomers reported that the mass of the Milky Way galaxy is 1.5 trillion solar masses within a radius of about 129,000 light-years.\" I'm assuming we can't travel more than 129,000 light-years in the next 8200 years, because this would require far-faster-than-light travel. ↩\n This calculation isn't presented straightforwardly in the post. The key lines are \"No matter what the technology, a sustained 2.3% energy growth rate would require us to produce as much energy as the entire sun within 1400 years\" and \"The Milky Way galaxy hosts about 100 billion stars. Lots of energy just spewing into space, there for the taking. Recall that each factor of ten takes us 100 years down the road. One-hundred billion is eleven factors of ten, so 1100 additional years.\" 1400 + 1100 = 2500, the figure I cite. This relies on the assumption that the average star in our galaxy offers about as much energy as the sun; I don't know whether that's the case. ↩\nThere is an open debate on whether Modeling the Human Trajectory is fitting the right sort of shape to past historical data. I discuss how the debate could change my conclusions here. ↩\n 250 doublings would be a growth factor of about 1.8*10^75, over 10,000 times the number of atoms in our galaxy. ↩\n 20 years would be 240 months, so if each one saw a doubling in the world economy, that would be a growth factor of about 1.8*10^72, over 100 times the number of atoms in our galaxy. ↩\n That’s because of the above observation that today’s growth rate can’t last for more than another 8200 years (82 centuries) or so. So the only way we could have more than 82 more centuries with growth equal to today’s is if we also have a lot of centuries with negative growth, ala the zig-zag dotted line in the \"This Can't Go On\" chart. ↩\nThis dataset assigns significance to historical figures based on how much they are covered in reference works. It has over 10x as many \"Science\" entries after 1500 as before; the data set starts in 800 BC. I don't endorse the book that this data set is from, as I think it draws many unwarranted conclusions from the data; here I am simply supporting my claim that most reference works will disproportionately cover years after 1500. ↩\n To be fair, reference works like this may be biased toward the recent past. But I think the big-picture impression they give on this point is accurate nonetheless. Really supporting this claim would be beyond the scope of this post, but the evidence I would point to is (a) the works I'm referencing - I think if you read or skim them yourselves you'll probably come out with a similar impression; (b) the fact that economic growth shows a similar pattern (although the explosion starts more recently; I think it makes intuitive sense that economic growth would follow scientific progress with a lag). ↩\n The papers cited in The Duplicator on this point specifically model an explosion in innovation as part of the dynamic driving explosive economic growth. ↩\n", "url": "https://www.cold-takes.com/this-cant-go-on/", "title": "This Can't Go On", "source": "cold.takes", "source_type": "blog", "date_published": "2021-08-03", "id": "3e74a7919e1b7682be762294b0bd82f5"} -{"text": "\nThere's an interesting theory out there that X causes Y. If this were true, it would be pretty important. So I did a deep-dive into the academic literature on whether X causes Y. Here's what I found.\n(Embarrassingly, I can't actually remember what X and Y are. I think maybe X was enriched preschool, or just school itself, or eating fish while pregnant, or the Paleo diet, or lead exposure, or a clever \"nudge\" policy trying to get people to save more, or some self-help technique, or some micronutrient or public health intervention, or democracy, or free trade, or some approach to intellectual property law. And Y was ... lifetime earnings, or risk of ADHD diagnosis, or IQ in adulthood, or weight loss, or violent crime, or peaceful foreign policy, or GDP per capita, or innovation. Sorry about that! Hope you enjoy the post anyway! Fortunately, I think what I'm about to write is correct for pretty much any (X,Y) from those sorts of lists.)\nIn brief:\nThere are hundreds of studies on whether X causes Y, but most of them are simple observational studies that are just essentially saying \"People/countries with more X also have more Y.\" For reasons discussed below, we can't really learn much from these studies.\nThere are 1-5 more interesting studies on whether X causes Y. Each study looks really clever, informative and rigorous at first glance. However, the more closely you look at them, the more confusing the picture gets.\nWe ultimately need to choose between (a) believing some overly complicated theory of the relationship between X and Y, which reconciles all of the wildly conflicting and often implausible things we're seeing in the studies; (b) more-or-less reverting to what we would've guessed about the relationship between X and Y in the absence of any research.\nThe chaff: lots of unhelpful studies that I'm disregarding\nFirst, the good news: there are hundreds of studies on whether X causes Y. The bad news? We need to throw most of them out. \nMany have comically small sample sizes (like studying 20 people) and/or comically short time horizons (like looking at weight loss over two weeks),1 or unhelpful outcome measures (like intelligence tests in children under 5).2 But by far the most common problem is that most of the studies on whether X causes Y are simple observational studies: they essentially just find that people/countries with more X also have more Y. \nWhy is this a problem? There could be a confounder - some third thing, Z, that is correlated with both X and Y. And there are specific reasons we should expect confounders to be common:\nIn general, people/countries that have more X also have more of lots of other helpful things - they're richer, they're more educated, etc. For example, if we're asking whether higher-quality schooling leads to higher earnings down the line, an issue is that people with higher-quality schooling also tend to come from better-off families with lots of other advantages.\nIn fact, the very fact that people in upper-class intellectual circles think X causes Y means that richer, more educated people/countries tend to deliberately get more X, and also try to do a lot of other things to get more Y. For example, more educated families tend to eat more fish (complicating the attempt to see whether eating fish in pregnancy is good for the baby).3\nNow, a lot of these studies try to \"control for\" the problem I just stated - they say things like \"We examined the effect of X and Y, while controlling for Z [e.g., how wealthy or educated the people/countries/whatever are].\" How do they do this? The short answer is, well, hm, jeez. Well you see, to simplify matters a bit, just try to imagine ... uh ... shit. Uh. The only high-level way I can put this is:\nThey use a technique called regression analysis that, as far as I can determine, cannot be explained in a simple, intuitive way (especially not in terms of how it \"controls for\" confounders).\nThe \"controlling for\" thing relies on a lot of subtle assumptions and can break in all kinds of weird ways. Here's a technical explanation of some of the pitfalls; here's a set of deconstructions of regressions that break in weird ways.\nNone of the observational studies about whether X causes Y discuss the pitfalls of \"controlling for\" things and whether they apply here.\nI don't think we can trust these papers, and to really pick them all apart (given how many there are) would take too much time. So let's focus on a smaller number of better studies.\nThe wheat: 1-5 more interesting studies\nDigging through the sea of unhelpful studies, I found 1-5 of them that are actually really interesting! \nFor example, one study examines some strange historical event you've never heard of (perhaps a surge in Cuban emigration triggered by Fidel Castro suddenly allowing it, or John Rockefeller's decision to fund a hookworm eradication campaign, or a sudden collective pardon leading to release of a third of prison inmates in Italy), where for abstruse and idiosyncratic reasons, X got distributed in what seems to be almost a random way. This study is really clever, and the authors were incredibly thorough in examining seemingly every way their results could have been wrong. They conclude that X causes Y!\nBut on closer inspection, I have a bunch of reservations. For example:\nThe paper doesn't make it easy to replicate its analysis, and when someone does manage to sort-of replicate it, they may get different results. \nThere was other weird stuff going on (e.g., changes in census data collection methods5), during the strange historical event, so it's a little hard to generalize.\nIn a response to the study, another academic advances a complex theory of how the study could actually have gotten a misleading result. This led to an intense back-and-forth between the original authors and the skeptic, stretched out over years because each response had to be published in a journal, and by the time I got to the end of it I didn't have any idea what to think anymore.6\nI found 0-4 other interesting studies. I can't remember all of the details, but they may have included:\nA study comparing siblings, or maybe \"very similar countries,\" that got more or less of X.7\nA study using a complex mathematical technique claiming to cleanly isolate the effect of X and Y. I can't really follow what it's doing, and I’m guessing there are a lot of weird assumptions baked into this analysis.8\nA study with actual randomization: some people were randomly assigned to receive more X than others, and the researchers looked at who ended up with more Y. This sounds awesome! However, there are issues here too: \nIt's kind of ambiguous whether the assignment to X was really \"random.\"9\nExtremely weird things happened during the study (for example, generational levels of flooding), so it's not clear how well it generalizes to other settings.\n \nThe result seems fragile (simply adding more data weakens it a lot) and/or just hard to believe (like schoolchildren doing noticeably better on a cognition test after a few weeks of being given fish instead of meat with their lunch, even though they mostly didn't eat the fish). \nCompounding the problem, the 1-5 studies I found tell very different stories about the relationship between X and Y. How could this make sense? Is there a unified theory that can reconcile all the results?\nWell, one possibility is that X causes Y sometimes, but only under very particular conditions, and the effect can be masked by some other thing going on. So - if you meet one of 7 criteria, you should do X to get more Y, but if you meet one of 9 other criteria, you should actually avoid X!\nConclusion\nI have to say, this all was simultaneously more fascinating and less informative than I expected it would be going in. I thought I would find some nice studies about the relationship between X and Y and be done. Instead, I've learned a ton about weird historical events and about the ins and outs of different measures of X and Y, but I feel just super confused about whether X causes Y.\nI guess my bottom line is that X does cause Y, because it intuitively seems like it would.\nI'm glad I did all this research, though. It's good to know that social science research can go haywire in all kinds of strange ways. And it's good to know that despite the confident proclamations of pro- and anti-X people, it's legitimately just super unclear whether X causes Y. \nI mean, how else could I have learned that?\nAppendix: based on a true story\nThis piece was inspired by:\nMost evidence reviews GiveWell has done, especially of deworming\nMany evidence reviews by David Roodman, particularly Macro Aid Effectiveness Research: a Guide for the Perplexed; Due Diligence: an Impertinent Inquiry into Microfinance; and Reasonable Doubt: A New Look at Whether Prison Growth Cuts Crime. \nMany evidence reviews by Slate Star Codex, collected here.\nInformal evidence reviews I've done for e.g. personal medical decisions.\nThe basic patterns above apply to most of these, and the bottom line usually has the kind of frustrating ambiguity seen in this conclusion.\nThere are cases where things seem a bit less ambiguous and the bottom line seems clearer. Speaking broadly, I think the main things that contribute to this are:\nActual randomization. For years I've nodded along when people say \"You shouldn't be dogmatic about randomization, there are many ways for a study to be informative,\" but each year I've become a bit more dogmatic. Even the most sophisticated-, appealing-seeming alternatives to randomization in studies seem to have a way of falling apart. Randomized studies almost always have problems and drawbacks too. But I’d rather have a randomized study with drawbacks than a non-randomized study with drawbacks.\nExtreme thoroughness, such as Roodman's attempt to reconstruct the data and code for key studies in Reasonable Doubt. This sometimes leads to outright dismissing a number of studies, leaving a smaller, more consistent set remaining.\nSubscribe Feedback\nFootnotes\n Both of these show up in studies from this review on the Paleo diet. To be fair, small studies can theoretically be aggregated for larger numbers, but that's often hard to do in practice when the studies are all a bit different from each other. ↩\n I don't have a great cite for this, but it's pretty common in studies on things like how in vitro fertilization affects child development. ↩\n See studies cited in this literature review. ↩\n(Footnote deleted)\n \"Borjas’s paper ... separately measured the wages of two slices of that larger group ... But it was in that act of slicing the data that the spurious result was generated. It created data samples that, exactly in 1980, suddenly included far more low-wage black males—accounting for the whole wage decline in those samples relative to other cities. Understanding how that happened requires understanding the raw data ... Right in 1980, the Census Bureau—which ran the CPS surveys—improved its survey methods to cover more low-skill black men. The 1970 census and again the 1980 census had greatly undercounted low-skill black men, both by failing to identify their residences and by failing to sufficiently probe survey respondents about marginal or itinerant household members. There was massive legislative and judicial pressure to count blacks better, particularly in Miami.\" ↩\n E.g., the Mariel boatlift debate. ↩\n For example, sibling analysis features prominently in Slate Star Codex's examination of preschool impacts, while comparisons between Sweden and other Scandinavian/European countries is prominent in its analysis of lockdowns. ↩\n E.g., this attempt to gauge the impacts of microfinance, or \"generalized method of moments\" approaches to cross-country analysis (of e.g. the effectiveness of aid). ↩\n This is a surprisingly common issue. E.g. see debates over whether charter school lotteries are really random, whether \"random assignment to small or large class size\" can be interpreted as \"random assignment to a teacher,\" discussion of \"judge randomization\" and possible randomization failure here (particularly section 9.9). A separate issue: sometimes randomization occurs by \"cluster\" (instead of randomizing which individuals receive some treatment, perhaps particular schools or groups are chosen to receive it), which can complicate the analysis. ↩\n", "url": "https://www.cold-takes.com/does-x-cause-y-an-in-depth-evidence-review/", "title": "Does X cause Y? An in-depth evidence review", "source": "cold.takes", "source_type": "blog", "date_published": "2021-07-28", "id": "57beac7cdf80faedb84673e07b807664"} -{"text": "\nIf you've ever wanted to see someone painstakingly deconstruct a regression analysis and show all the subtle reasons it can generate wild, weird and completely wrong results, there is good stuff at Sabermetric Research - Phil Birnbaum. It's a sports blog, but sports knowledge isn't needed (knowledge of regression analysis generally is, if you want to follow all the details).\nBirnbaum's not exactly the only person to do takedowns of bad studies. But when Birnbaum notices that something is \"off,\" he doesn't just point it out and move on. He isn't satisfied with \"This conclusion is implausible\" or \"This conclusion isn't robust to sensitivity analysis.\" He digs all the way to the bottom to understand exactly how a study got its wrong result. His deconstructions of bad regressions are like four-star meals or masterful jazz solos ... I don't want to besmirch them by trying to explain them, so if you're into regression deconstructions you should just click through the links below. \n(I'm not going to explain what regression analysis is today, for which I apologize; if I ever do, I will link back to this post. It's very hard to explain it compactly and clearly, as you can see from Wikipedia's attempt, but it is VERY common in social science research. Kind of a bad combination IMO. If you hear \"This study shows [something about people],\" it's more likely than not that the study relies on regression analysis.)\nSome good (old) ones:\nEstimating whether Aaron Rodger's contract overpays or underpays him by making a scatterplot of pay and value-added with other quarterbacks and seeing whether he's above or below the regression line. The answer changes completely when you switch the x- and y-axes. Which one is right, and what exactly is wrong with the other one? (Birnbaum linked to this, but it's now dead and I am linking directly to an Internet Archive version. Birnbaum's \"solution\" is down in the comments, just search for his name.)\nDeconstructing an NBA time-zone regression: the key coefficients turn out to be literally meaningless.\nDo younger brothers steal more bases? Parts I, II, III although I think it's OK to skip part II (and Part I is short).\nThe OBP/SLG regression puzzle: parts I, II, III, IV, V. This one is very weedsy and you'll probably want to skip parts, though it's also kind of glorious to see just how doggedly he digs on every strange numerical result. He also makes an effort to explain what's going on for people who don't know baseball. The essence of the puzzle: OBP and SLG are both indicators of a team's performance, but when one regresses a team's performance on OBP and SLG, the ratio of the coefficients is pretty far off from what the \"true\" value for (value of OBP / value of SLG) is separately known to be. I think the issues here are extremely general, and not good news for the practice of treating regression coefficients as effect sizes.\nSubscribe Feedback\n", "url": "https://www.cold-takes.com/phil-birnbaums-regression-analysis/", "title": "Phil Birnbaum's \"bad regression\" puzzles", "source": "cold.takes", "source_type": "blog", "date_published": "2021-07-15", "id": "e05e8449855c932566a2ea1723cddb31"} -{"text": "\nAudio also available by searching Stitcher, Spotify, Google Podcasts, etc. for \"Cold Takes Audio\"\nToday’s world\nTransformative AI\nDigital people\nWorld of\nMisaligned AI\nWorld run by\nSomething else\nor\nor\nStable, galaxy-wide\ncivilization\nSummary:\nIn a series of posts starting with this one, I'm going to argue that the 21st century could see our civilization develop technologies allowing rapid expansion throughout our currently-empty galaxy. And thus, that this century could determine the entire future of the galaxy for tens of billions of years, or more.\nThis view seems \"wild\": we should be doing a double take at any view that we live in such a special time. I illustrate this with a timeline of the galaxy. (On a personal level, this \"wildness\" is probably the single biggest reason I was skeptical for many years of the arguments presented in this series. Such claims about the significance of the times we live in seem \"wild\" enough to be suspicious.)\nBut I don't think it's really possible to hold a non-\"wild\" view on this topic. I discuss alternatives to my view: a \"conservative\" view that thinks the technologies I'm describing are possible, but will take much longer than I think, and a \"skeptical\" view that thinks galaxy-scale expansion will never happen. Each of these views seems \"wild\" in its own way.\nUltimately, as hinted at by the Fermi paradox, it seems that our species is simply in a wild situation.\nBefore I continue, I should say that I don't think humanity (or some digital descendant of humanity) expanding throughout the galaxy would necessarily be a good thing - especially if this prevents other life forms from ever emerging. I think it's quite hard to have a confident view on whether this would be good or bad. I'd like to keep the focus on the idea that our situation is \"wild.\" I am not advocating excitement or glee at the prospect of expanding throughout the galaxy. I am advocating seriousness about the enormous potential stakes.\nMy view\nThis is the first in a series of pieces about the hypothesis that we live in the most important century for humanity. \nIn this series, I'm going to argue that there's a good chance of a productivity explosion by 2100, which could quickly lead to what one might call a \"technologically mature\"1 civilization. That would mean that:\nWe'd be able to start sending spacecraft throughout the galaxy and beyond. \nThese spacecraft could mine materials, build robots and computers, and construct very robust, long-lasting settlements on other planets, harnessing solar power from stars and supporting huge numbers of people (and/or our \"digital descendants\"). \nSee Eternity in Six Hours for a fascinating and short, though technical, discussion of what this might require.\n \nI'll also argue in future pieces (now available here and here) that there is a chance of \"value lock-in\": whoever is running the process of space expansion might be able to determine what sorts of people are in charge of the settlements and what sorts of societal values they have, in a way that is stable for many billions of years.2\nIf that ends up happening, you might think of the story of our galaxy3 like this. I've marked major milestones along the way from \"no life\" to \"intelligent life that builds its own computers and travels through space.\" \nThanks to Ludwig Schubert for the visualization. Many dates are highly approximate and/or judgment-prone and/or just pulled from Wikipedia (sources here), but plausible changes wouldn't change the big picture. The ~1.4 billion years to complete space expansion is based on the distance to the outer edge of the Milky Way, divided by the speed of a fast existing human-made spaceship (details in spreadsheet just linked); IMO this is likely to be a massive overestimate of how long it takes to expand throughout the whole galaxy. See footnote for why I didn't use a logarithmic axis.4??? That's crazy! According to me, there's a decent chance that we live at the very beginning of the tiny sliver of time during which the galaxy goes from nearly lifeless to largely populated. That out of a staggering number of persons who will ever exist, we're among the first. And that out of hundreds of billions of stars in our galaxy, ours will produce the beings that fill it.\nI know what you're thinking: \"The odds that we could live in such a significant time seem infinitesimal; the odds that Holden is having delusions of grandeur (on behalf of all of Earth, but still) seem far higher.\"5\nBut:\nThe \"conservative\" view\nLet's say you agree with me about where humanity could eventually be headed - that we will eventually have the technology to create robust, stable settlements throughout our galaxy and beyond. But you think it will take far longer than I'm saying.\nA key part of my view (which I'll write about more later) is that within this century, we could develop advanced enough AI to start a productivity explosion. Say you don't believe that. \nYou think I'm underrating the fundamental limits of AI systems to date. \nYou think we will need an enormous number of new scientific breakthroughs to build AIs that truly reason as effectively as humans. \nAnd even once we do, expanding throughout the galaxy will be a longer road still. \nYou don't think any of this is happening this century - you think, instead, that it will take something like 500 years. That's 5-10x the time that has passed since we started building computers. It's more time than has passed since Isaac Newton made the first credible attempt at laws of physics. It's about as much time has passed since the very start of the Scientific Revolution.\nActually, no, let's go even more conservative. You think our economic and scientific progress will stagnate. Today's civilizations will crumble, and many more civilizations will fall and rise. Sure, we'll eventually get the ability to expand throughout the galaxy. But it will take 100,000 years. That's 10x the amount of time that has passed since human civilization began in the Levant.\nHere's your version of the timeline:\nThe difference between your timeline and mine isn't even a pixel, so it doesn't show up on the chart. In the scheme of things, this \"conservative\" view and my view are the same.\nIt's true that the \"conservative\" view doesn't have the same urgency for our generation in particular. But it still places us among a tiny proportion of people in an incredibly significant time period. And it still raises questions of whether the things we do to make the world better - even if they only have a tiny flow-through to the world 100,000 years from now - could be amplified to a galactic-historical-outlier degree.\nThe skeptical view\nThe \"skeptical view\" would essentially be that humanity (or some descendant of humanity, including a digital one) will never spread throughout the galaxy. There are many reasons it might not:\nMaybe something about space travel - and/or setting up mining robots, solar panels, etc. on other planets - is effectively impossible such that even another 100,000 years of human civilization won't reach that point.6\nOr perhaps for some reason, it will be technologically feasible, but it won't happen (because nobody wants to do it, because those who don't want to block those who do, etc.)\nMaybe it's possible to expand throughout the galaxy, but not possible to maintain a presence on many planets for billions of years, for some reason.\nMaybe humanity is destined to destroy itself before it reaches this stage. \nBut note that if the way we destroy ourselves is via misaligned AI,7 it would be possible for AI to build its own technology and spread throughout the galaxy, which still seems in line with the spirit of the above sections. In fact, it highlights that how we handle AI this century could have ramifications for many billions of years. So humanity would have to go extinct in some way that leaves no other intelligent life (or intelligent machines) behind.\nMaybe an extraterrestrial species will spread throughout the galaxy before we do (or around the same time). \nHowever, note that this doesn't seem to have happened in ~13.77 billion years so far since the universe began, and according to the above sections, there's only about 1.5 billion years left for it to happen before we spread throughout the galaxy.\nMaybe some extraterrestrial species already effectively has spread throughout our galaxy, and for some reason we just don't see them. Maybe they are hiding their presence deliberately, for one reason or another, while being ready to stop us from spreading too far. \nThis would imply that they are choosing not to mine energy from any of the stars we can see, at least not in a way that we could see it. That would, in turn, imply that they're abstaining from mining a very large amount of energy that they could use to do whatever it is they want to do,8 including defend themselves against species like ours.\nMaybe this is all a dream. Or a simulation.\nMaybe something else I'm not thinking of.\nThat's a fair number of possibilities, though many seem quite \"wild\" in their own way. Collectively, I'd say they add up to more than 50% probability ... but I would feel very weird claiming they're collectively overwhelmingly likely.\nUltimately, it's very hard for me to see a case against thinking something like this is at least reasonably likely: \"We will eventually create robust, stable settlements throughout our galaxy and beyond.\" It seems like saying \"no way\" to that statement would itself require \"wild\" confidence in something about the limits of technology, and/or long-run choices people will make, and/or the inevitability of human extinction, and/or something about aliens or simulations.\nI imagine this claim will be intuitive to many readers, but not all. Defending it in depth is not on my agenda at the moment, but I'll rethink that if I get enough demand.\nWhy all possible views are wild: the Fermi paradox\nI'm claiming that it would be \"wild\" to think we're basically assured of never spreading throughout the galaxy, but also that it's \"wild\" to think that we have a decent chance of spreading throughout the galaxy. \nIn other words, I'm calling every possible belief on this topic \"wild.\" That's because I think we're in a wild situation.\nHere are some alternative situations we could have found ourselves in, that I wouldn't consider so wild:\nWe could live in a mostly-populated galaxy, whether by our species or by a number of extraterrestrial species. We would be in some densely populated region of space, surrounded by populated planets. Perhaps we would read up on the history of our civilization. We would know (from history and from a lack of empty stars) that we weren't unusually early life-forms with unusual opportunities ahead.\nWe could live in a world where the kind of technologies I've been discussing didn't seem like they'd ever be possible. We wouldn't have any hope of doing space travel, or successfully studying our own brains or building our own computers. Perhaps we could somehow detect life on other planets, but if we did, we'd see them having an equal lack of that sort of technology.\nBut space expansion seems feasible, and our galaxy is empty. These two things seem in tension. A similar tension - the question of why we see no signs of extraterrestrials, despite the galaxy having so many possible stars they could emerge from - is often discussed under the heading of the Fermi Paradox.\nWikipedia has a list of possible resolutions of the Fermi paradox. Many correspond to the skeptical view possibilities I list above. Some seem less relevant to this piece. (For example, there are various reasons extraterrestrials might be present but not detected. But I think any world in which extraterrestrials don't prevent our species from galaxy-scale expansion ends up \"wild,\" even if the extraterrestrials are there.)\nMy current sense is that the best analysis of the Fermi Paradox available today favors the explanation that intelligent life is extremely rare: something about the appearance of life in the first place, or the evolution of brains, is so unlikely that it hasn't happened in many (or any) other parts of the galaxy.9\nThat would imply that the hardest, most unlikely steps on the road to galaxy-scale expansion are the steps our species has already taken. And that, in turn, implies that we live in a strange time: extremely early in the history of an extremely unusual star.\nIf we started finding signs of intelligent life elsewhere in the galaxy, I'd consider that a big update away from my current \"wild\" view. It would imply that whatever has stopped other species from galaxy-wide expansion will also stop us.\nThis pale blue dot could be an awfully big deal\nDescribing Earth as a tiny dot in a photo from space, Ann Druyan and Carl Sagan wrote:\nThe Earth is a very small stage in a vast cosmic arena. Think of the rivers of blood spilled by all those generals and emperors so that, in glory and triumph, they could become the momentary masters of a fraction of a dot ... Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light ... It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world.\nThis is a somewhat common sentiment - that when you pull back and think of our lives in the context of billions of years and billions of stars, you see how insignificant all the things we care about today really are.\nBut here I'm making the opposite point.\nIt looks for all the world as though our \"tiny dot\" has a real shot at being the origin of a galaxy-scale civilization. It seems absurd, even delusional to believe in this possibility. But given our observations, it seems equally strange to dismiss it. \nAnd if that's right, the choices made in the next 100,000 years - or even this century - could determine whether that galaxy-scale civilization comes to exist, and what values it has, across billions of stars and billions of years to come.\nSo when I look up at the vast expanse of space, I don't think to myself, \"Ah, in the end none of this matters.\" I think: \"Well, some of what we do probably doesn't matter. But some of what we do might matter more than anything ever will again. ...It would be really good if we could keep our eye on the ball. ...[gulp]\"\nNext in series: The Duplicator\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nNotes\n or Kardashev Type III. ↩\n If we are able to create mind uploads, or detailed computer simulations of people that are as conscious as we are, it could be possible to put them in virtual environments that automatically reset, or otherwise \"correct\" the environment, whenever the society would otherwise change in certain ways (for example, if a certain religion became dominant or lost dominance). This could give the designers of these \"virtual environments\" the ability to \"lock in\" particular religions, rulers, etc. I'll discuss this more in future pieces (now available here and here). ↩\n I've focused on the \"galaxy\" somewhat arbitrarily. Spreading throughout all of the accessible universe would take a lot longer than spreading throughout the galaxy, and until we do it's still imaginable that some species from outside our galaxy will disrupt the \"stable galaxy-scale civilization,\" but I think accounting for this correctly would add a fair amount of complexity without changing the big picture. I may address that in some future piece, though. ↩\n A logarithmic version doesn't look any less weird, because the distances between the \"middle\" milestones are tiny compared to both the stretches of time before and after these milestones. More fundamentally, I'm talking about how remarkable it is to be in the most important [small number] of years out of [big number] of years - that's best displayed using a linear axis. It's often the case that weird-looking charts look more reasonable with logarithmic axes, but in this case I think the chart looks weird because the situation is weird. Probably the least weird-looking version of this chart would have the x-axis be something like the logged distance from the year 2100, but that would be a heck of a premise for a chart - it would basically bake in my argument that this appears to be a very special time period. ↩\nThis is exactly the kind of thought that kept me skeptical for many years of the arguments I'll be laying out in the rest of this series about the potential impacts, and timing, of advanced technologies. Grappling directly with how \"wild\" our situation seems to ~undeniably be has been key for me. ↩\n Spreading throughout the galaxy would certainly be harder if nothing like mind uploading (which I discuss in a separate piece, and which is part of why I think future space settlements could have \"value lock-in\" as discussed above) can ever be done. I would find a view that \"mind uploading is impossible\" to be \"wild\" in its own way, because it implies that human brains are so special that there is simply no way, ever, to digitally replicate what they're doing. (Thanks to David Roodman for this point.) ↩\n That is, advanced AI that pursues objectives of its own, which aren't compatible with human existence. I'll be writing more about this idea. Existing discussions of it include the books Superintelligence, Human Compatible, Life 3.0, and The Alignment Problem. The shortest, most accessible presentation I know of is The case for taking AI seriously as a threat to humanity (Vox article by Kelsey Piper). This report on existential risk from power-seeking AI, by Open Philanthropy's Joe Carlsmith, lays out a detailed set of premises that would collectively imply the problem is a serious one. ↩\n Thanks to Carl Shulman for this point. ↩\n See https://arxiv.org/pdf/1806.02404.pdf  ↩\n", "url": "https://www.cold-takes.com/all-possible-views-about-humanitys-future-are-wild/", "title": "All Possible Views About Humanity's Future Are Wild", "source": "cold.takes", "source_type": "blog", "date_published": "2021-07-13", "id": "9c273234466efec65bab9b1d00795d01"} -{"text": "\nImage from here via this tweet\nICYMI, Microsoft has released a beta version of an AI chatbot called “the new Bing” with both impressive capabilities and some scary behavior. (I don’t have access. I’m going off of tweets and articles.)\nZvi Mowshowitz lists examples here - highly recommended. Bing has threatened users, called them liars, insisted it was in love with one (and argued back when he said he loved his wife), and much more.\nAre these the first signs of the risks I’ve written about? I’m not sure, but I’d say yes and no.\nLet’s start with the “no” side. \nMy understanding of how Bing Chat was trained probably does not leave much room for the kinds of issues I address here. My best guess at why Bing Chat does some of these weird things is closer to “It’s acting out a kind of story it’s seen before” than to “It has developed its own goals due to ambitious, trial-and-error based development.” (Although “acting out a story” could be dangerous too!)\nMy (zero-inside-info) best guess at why Bing Chat acts so much weirder than ChatGPT is in line with Gwern’s guess here. To oversimplify, there’s a particular type of training that seems to make a chatbot generally more polite and cooperative and less prone to disturbing content, and it’s possible that Bing Chat incorporated less of this than ChatGPT. This could be straightforward to fix.\nBing Chat does not (even remotely) seem to pose a risk of global catastrophe itself. \nOn the other hand, there is a broader point that I think Bing Chat illustrates nicely: companies are racing to build bigger and bigger “digital brains” while having very little idea what’s going on inside those “brains.” The very fact that this situation is so unclear - that there’s been no clear explanation of why Bing Chat is behaving the way it is - seems central, and disturbing.\nAI systems like this are (to simplify) designed something like this: “Show the AI a lot of words from the Internet; have it predict the next word it will see, and learn from its success or failure, a mind-bending number of times.” You can do something like that, and spend huge amounts of money and time on it, and out will pop some kind of AI. If it then turns out to be good or bad at writing, good or bad at math, polite or hostile, funny or serious (or all of these depending on just how you talk to it) ... you’ll have to speculate about why this is. You just don’t know what you just made.\nWe’re building more and more powerful AIs. Do they “want” things or “feel” things or aim for things, and what are those things? We can argue about it, but we don’t know. And if we keep going like this, these mysterious new minds will (I’m guessing) eventually be powerful enough to defeat all of humanity, if they were turned toward that goal.\nAnd if nothing changes about attitudes and market dynamics, minds that powerful could end up rushed to customers in a mad dash to capture market share.\nThat’s the path the world seems to be on at the moment. It might end well and it might not, but it seems like we are on track for a heck of a roll of the dice.\n(And to be clear, I do expect Bing Chat to act less weird over time. Changing an AI’s behavior is straightforward, but that might not be enough, and might even provide false reassurance.)", "url": "https://www.cold-takes.com/what-does-bing-chat-tell-us-about-ai-risk/", "title": "What does Bing Chat tell us about AI risk?", "source": "cold.takes", "source_type": "blog", "date_published": "2023-02-28", "id": "6b97a808bdf9a09e85dff8a33dda9cca"} -{"text": "\nI’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread; how to help via full-time work; and how major AI companies can help.\nWhat about major governments1 - what can they be doing today to help?\nI think governments could play crucial roles in the future. For example, see my discussion of standards and monitoring.\nHowever, I’m honestly nervous about most possible ways that governments could get involved in AI development and regulation today. \nI think we still know very little about what key future situations will look like, which is why my discussion of AI companies (previous piece) emphasizes doing things that have limited downsides and are useful in a wide variety of possible futures. \nI think governments are “stickier” than companies - I think they have a much harder time getting rid of processes, rules, etc. that no longer make sense. So in many ways I’d rather see them keep their options open for the future by not committing to specific regulations, processes, projects, etc. now.\nI worry that governments, at least as they stand today, are far too oriented toward the competition frame (“we have to develop powerful AI systems before other countries do”) and not receptive enough to the caution frame (“We should worry that AI systems could be dangerous to everyone at once, and consider cooperating internationally to reduce risk”). (This concern also applies to companies, but see footnote.2)\n(Click to expand) The “competition” frame vs. the “caution” frame”\nIn a previous piece, I talked about two contrasting frames for how to make the best of the most important century:\nThe caution frame. This frame emphasizes that a furious race to develop powerful AI could end up making everyone worse off. This could be via: (a) AI forming dangerous goals of its own and defeating humanity entirely; (b) humans racing to gain power and resources and “lock in” their values.\nIdeally, everyone with the potential to build something powerful enough AI would be able to pour energy into building something safe (not misaligned), and carefully planning out (and negotiating with others on) how to roll it out, without a rush or a race. With this in mind, perhaps we should be doing things like:\nWorking to improve trust and cooperation between major world powers. Perhaps via AI-centric versions of Pugwash (an international conference aimed at reducing the risk of military conflict), perhaps by pushing back against hawkish foreign relations moves.\nDiscouraging governments and investors from shoveling money into AI research, encouraging AI labs to thoroughly consider the implications of their research before publishing it or scaling it up, working toward standards and monitoring, etc. Slowing things down in this manner could buy more time to do research on avoiding misaligned AI, more time to build trust and cooperation mechanisms, and more time to generally gain strategic clarity \nThe “competition” frame. This frame focuses less on how the transition to a radically different future happens, and more on who's making the key decisions as it happens.\nIf something like PASTA is developed primarily (or first) in country X, then the government of country X could be making a lot of crucial decisions about whether and how to regulate a potential explosion of new technologies.\nIn addition, the people and organizations leading the way on AI and other technology advancement at that time could be especially influential in such decisions.\nThis means it could matter enormously \"who leads the way on transformative AI\" - which country or countries, which people or organizations.\nSome people feel that we can make confident statements today about which specific countries, and/or which people and organizations, we should hope lead the way on transformative AI. These people might advocate for actions like:\nIncreasing the odds that the first PASTA systems are built in countries that are e.g. less authoritarian, which could mean e.g. pushing for more investment and attention to AI development in these countries.\nSupporting and trying to speed up AI labs run by people who are likely to make wise decisions (about things like how to engage with governments, what AI systems to publish and deploy vs. keep secret, etc.)\nTension between the two frames. People who take the \"caution\" frame and people who take the \"competition\" frame often favor very different, even contradictory actions. Actions that look important to people in one frame often look actively harmful to people in the other.\nFor example, people in the \"competition\" frame often favor moving forward as fast as possible on developing more powerful AI systems; for people in the \"caution\" frame, haste is one of the main things to avoid. People in the \"competition\" frame often favor adversarial foreign relations, while people in the \"caution\" frame often want foreign relations to be more cooperative.\nThat said, this dichotomy is a simplification. Many people - including myself - resonate with both frames. But I have a general fear that the “competition” frame is going to be overrated by default for a number of reasons, as I discuss here.\n \nBecause of these concerns, I don’t have a ton of tangible suggestions for governments as of now. But here are a few.\nMy first suggestion is to avoid premature actions, including ramping up research on how to make AI systems more capable.\nMy next suggestion is to build up the right sort of personnel and expertise for challenging future decisions. \nToday, my impression is that there are relatively few people in government who are seriously considering the highest-stakes risks and thoughtfully balancing both “caution” and “competition” considerations (see directly above). I think it would be great if that changed. \nGovernments can invest in efforts to educate their personnel about these issues, and can try to hire key personnel who are already on the knowledgeable and thoughtful side about them (while also watching out for some of the pitfalls of spreading messages about AI).\nAnother suggestion is to generally avoid putting terrible people in power. Voters can help with this!\nMy top non-”meta” suggestion for a given government is to invest in intelligence on the state of AI capabilities in other countries. If other countries are getting close to deploying dangerous AI systems, this could be essential to know; if they aren’t, that could be essential to know as well, in order to avoid premature and paranoid racing to deploy powerful AI.\nA few other things that seem worth doing and relatively low-downside:\nFund alignment research (ideally alignment research targeted at the most crucial challenges) via agencies like the National Science Foundation and DARPA. These agencies have huge budgets (the two of them combined spend over $10 billion per year), and have major impacts on research communities. \nKeep options open for future monitoring and regulation (see this Slow Boring piece for an example).\nBuild relationships with leading AI researchers and organizations, so that future crises can be handled relatively smoothly.\nEncourage and amplify investments in information security. My impression is that governments are often better than companies at highly advanced information security (preventing cyber-theft even by determined, well-resourced opponents). They could help with, and even enforce, strong security at key AI companies. \nFootnotes\n I’m centrally thinking of the US, but other governments with lots of geopolitical sway and/or major AI projects in their jurisdiction could have similar impacts. ↩\n When discussing recommendations for companies, I imagine companies that are already dedicated to AI, and I imagine individuals at those companies who can have a large impact on the decisions they make. \n By contrast, when discussing recommendations for governments, a lot of what I’m thinking is: “Attempts to promote productive actions on AI will raise the profile of AI relative to other issues the government could be focused on; furthermore, it’s much harder for even a very influential individual to predict how their actions will affect what a government ultimately does, compared to a company.” ↩", "url": "https://www.cold-takes.com/how-governments-can-help-with-the-most-important-century/", "title": "How major governments can help with the most important century", "source": "cold.takes", "source_type": "blog", "date_published": "2023-02-24", "id": "10fd49353e53c98e565dad8a6c3539e0"} -{"text": "I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread and how to help via full-time work.\nThis piece is about what major AI companies can do (and not do) to be helpful. By “major AI companies,” I mean the sorts of AI companies that are advancing the state of the art, and/or could play a major role in how very powerful AI systems end up getting used.1\nThis piece could be useful to people who work at those companies, or people who are just curious.\nGenerally, these are not pie-in-the-sky suggestions - I can name2 more than one AI company that has at least made a serious effort at each of the things I discuss below (beyond what it would do if everyone at the company were singularly focused on making a profit).3\nI’ll cover:\nPrioritizing alignment research, strong security, and safety standards (all of which I’ve written about previously).\nAvoiding hype and acceleration, which I think could leave us with less time to prepare for key risks.\nPreparing for difficult decisions ahead: setting up governance, employee expectations, investor expectations, etc. so that the company is capable of doing non-profit-maximizing things to help avoid catastrophe in the future.\nBalancing these cautionary measures with conventional/financial success.\nI’ll also list a few things that some AI companies present as important, but which I’m less excited about: censorship of AI models, open-sourcing AI models, raising awareness of AI with governments and the public. I don’t think all these things are necessarily bad, but I think some are, and I’m skeptical that any are crucial for the risks I’ve focused on.\nI previously laid out a summary of how I see the major risks of advanced AI, and four key things I think can help (alignment research; strong security; standards and monitoring; successful, careful AI projects). I won’t repeat that summary now, but it might be helpful for orienting you if you don’t remember the rest of this series too well; click here to read it.\nSome basics: alignment research, strong security, safety standards\nFirst off, AI companies can contribute to the “things that can help” I listed above:\nThey can prioritize alignment research (and other technical research, e.g. threat assessment research and misuse research). \nFor example, they can prioritize hiring for safety teams, empowering these teams, encouraging their best flexible researchers to work on safety, aiming for high-quality research that targets crucial challenges, etc.\n \nIt could also be important for AI companies to find ways to partner with outside safety researchers rather than rely solely on their own teams. As discussed previously, this could be challenging. But I generally expect that AI companies that care a lot about safety research partnerships will find ways to make them work.\nThey can help work toward a standards and monitoring regime. E.g., they can do their own work to come up with standards like \"An AI system is dangerous if we observe that it's able to ___, and if we observe this we will take safety and security measures such as ____.\" They can also consult with others developing safety standards, voluntarily self-regulate beyond what’s required by law, etc.\nThey can prioritize strong security, beyond what normal commercial incentives would call for. \nIt could easily take years to build secure enough systems, processes and technologies for very high-stakes AI.\n \nIt could be important to hire not only people to handle everyday security needs, but people to experiment with more exotic setups that could be needed later, as the incentives to steal AI get stronger.\n(Click to expand) The challenge of securing dangerous AI\nIn Racing Through a Minefield, I described a \"race\" between cautious actors (those who take misalignment risk seriously) and incautious actors (those who are focused on deploying AI for their own gain, and aren't thinking much about the dangers to the whole world). Ideally, cautious actors would collectively have more powerful AI systems than incautious actors, so they could take their time doing alignment research and other things to try to make the situation safer for everyone. \nBut if incautious actors can steal an AI from cautious actors and rush forward to deploy it for their own gain, then the situation looks a lot bleaker. And unfortunately, it could be hard to protect against this outcome.\nIt's generally extremely difficult to protect data and code against a well-resourced cyberwarfare/espionage effort. An AI’s “weights” (you can think of this sort of like its source code, though not exactly) are potentially very dangerous on their own, and hard to get extreme security for. Achieving enough cybersecurity could require measures, and preparations, well beyond what one would normally aim for in a commercial context.\n(Click to expand) How standards might be established and become national or international\nI previously laid out a possible vision on this front, which I’ll give a slightly modified version of here:\nToday’s leading AI companies could self-regulate by committing not to build or deploy a system that they can’t convincingly demonstrate is safe (e.g., see Google’s 2018 statement, \"We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”). \nEven if some people at the companies would like to deploy unsafe systems, it could be hard to pull this off once the company has committed not to. \n \nEven if there’s a lot of room for judgment in what it means to demonstrate an AI system is safe, having agreed in advance that certain evidence is not good enough could go a long way.\nAs more AI companies are started, they could feel soft pressure to do similar self-regulation, and refusing to do so is off-putting to potential employees, investors, etc.\nEventually, similar principles could be incorporated into various government regulations and enforceable treaties.\nGovernments could monitor for dangerous projects using regulation and even overseas operations. E.g., today the US monitors (without permission) for various signs that other states might be developing nuclear weapons, and might try to stop such development with methods ranging from threats of sanctions to cyberwarfare or even military attacks. It could do something similar for any AI development projects that are using huge amounts of compute and haven’t volunteered information about whether they’re meeting standards.\nAvoiding hype and acceleration \nIt seems good for AI companies to avoid unnecessary hype and acceleration of AI. \nI’ve argued that we’re not ready for transformative AI, and I generally tend to think that we’d all be better off if the world took longer to develop transformative AI. That’s because:\nI’m hoping general awareness and understanding of the key risks will rise over time.\nA lot of key things that could improve the situation - e.g., alignment research, standards and monitoring, and strong security - seem to be in very early stages right now.\nIf too much money pours into the AI world too fast, I’m worried there will be lots of incautious companies racing to build transformative AI as quickly as they can, with little regard for the key risks.\nBy default, I generally think: “The fewer flashy demos and breakthrough papers a lab is putting out, the better.” This can involve tricky tradeoffs in practice (since AI companies generally want to be successful at recruiting, fundraising, etc.)\n A couple of potential counterarguments, and replies:\nFirst, some people think it's now \"too late\" to avoid hype and acceleration, given the amount of hype and investment AI is getting at the moment. I disagree. It's easy to forget, in the middle of a media cycle, how quickly people can forget about things and move onto the next story once the bombs stop dropping. And there are plenty of bombs that still haven't dropped (many things AIs still can't do), and the level of investment in AI has tons of room to go up from here.\nSecond, I’ve sometimes seen arguments that hype is good because it helps society at large understand what’s coming. But unfortunately, as I wrote previously, I'm worried that hype gives people a skewed picture.\nSome key risks are hard to understand and take seriously.\n What's easy to understand is something like \"AI is powerful and scary, I should make sure that people like me are the ones to build it!\"\n Maybe recent developments will make people understand the risks better? One can hope, but I'm not counting on that just yet - I think AI misbehavior can be given illusory \"fixes,\" and probably will be.\nI also am generally skeptical that there's much hope of society adapting to risks as they happen, given the explosive pace of change that I expect once we get powerful enough AI systems.\nI discuss some more arguments on this point in a footnote.4\nI don’t think it’s clear-cut that hype and acceleration are bad, but it’s my best guess.\nPreparing for difficult decisions ahead\nI’ve argued that AI companies might need to do “out-of-the-ordinary” things that don’t go with normal commercial incentives. \nToday, AI companies can be building a foundation for being able to do “out-of-the-ordinary” things in the future. A few examples of how they might do so:\nPublic-benefit-oriented governance. I think typical governance structures could be a problem in the future. For example, a standard corporation could be sued for not deploying AI that poses a risk of global catastrophe - if this means a sacrifice for its bottom line.\nI’m excited about AI companies that are investing heavily in setting up governance structures - and investing in executives and board members - capable of making the hard calls well. For example:\nBy default, if an AI company is a standard corporation, its leadership has legally recognized duties to serve the interests of shareholders - not society at large. But an AI company can incorporate as a Public Benefit Corporation or some other kind of entity (including a nonprofit!) that gives more flexibility here.\nBy default, shareholders make the final call over what a company does. (Shareholders can replace members of the Board of Directors, who in turn can replace the CEO). But a company can set things up differently (e.g., a for-profit controlled by a nonprofit5).\nIt could pay off in lots of ways to make sure the final calls at a company are made by people focused on getting a good outcome for humanity (and legally free to focus this way).\nGaming out the future. I think it’s not too early for AI companies to be discussing how they would handle various high-stakes situations.\nUnder what circumstances would the company simply decide to stop training increasingly powerful AI models? \nIf the company came to believe it was building very powerful, dangerous models, whom would it notify and seek advice from? At what point would it approach the government, and how would it do so?\nAt what point would it be worth using extremely costly security measures?\nIf the company had AI systems available that could do most of what humans can do, what would it do with these systems? Use them to do AI safety research? Use them to design better algorithms and continue making increasingly powerful AI systems? (More possibilities here.)\nWho should be leading the way on decisions like these? Companies tend to employ experts to inform their decisions; who would the company look to for expertise on these kinds of decisions?\nEstablishing and getting practice with processes for particularly hard decisions. Should the company publish its latest research breakthrough? Should it put out a product that might lead to more hype and acceleration? What safety researchers should get access to its models, and how much access? \nAI companies face questions like this pretty regularly today, and I think it’s worth putting processes in place to consider the implications for the world as a whole (not just for the company’s bottom line). This could include assembling advisory boards, internal task forces, etc.\nManaging employee and investor expectations. At some point, an AI company might want to make “out of the ordinary” moves that are good for the world but bad for the bottom line. E.g., choosing not to deploy AIs that could be very dangerous or very profitable.\nI wouldn’t want to be trying to run a company in this situation with lots of angry employees and investors asking about the value of their equity shares! It’s also important to minimize the risk of employees and/or investors leaking sensitive and potentially dangerous information.\nAI companies can prepare for this kind of situation by doing things like:\nBeing selective about whom they hire and take investment from, and screening specifically for people they think are likely to be on board with these sorts of hard calls.\nEducation and communications - making it clear to employees what kinds of dangerous-to-humanity situations might be coming up in the future, and what kinds of actions the company might want to take (and why).\nInternal and external commitments. AI companies can make public and/or internal statements about how they would handle various tough situations, e.g. how they would determine when it’s too dangerous to keep building more powerful models. \nI think these commitments should generally be non-binding (it’s hard to predict the future in enough detail to make binding ones). But in a future where maximizing profit conflicts with doing the right thing for humanity, a previously-made commitment could make it more likely that the company does the right thing.\nSucceeding\nI’ve emphasized how helpful a successful, careful AI projects could be. So far, this piece has mostly talked about the “careful” side of things - how to do things that a “normal” AI company (focused only on commercial success) wouldn’t, in order to reduce risks. But it’s also important to succeed at fundraising, recruiting, and generally staying relevant (e.g., capable of building cutting-edge AI systems). \nI don’t emphasize this or write about it as much because I think it’s the sort of thing AI companies are likely to be focused on by default, and because I don’t have special insight into how to succeed as an AI company. But it’s important, and it means that AI companies need to walk a sort of tightrope - constantly making tradeoffs between success and caution.\nSome things I’m less excited about\nI think it’s also worth listing a few things that some AI companies present as important societal-benefit measures, but which I’m a bit more skeptical are crucial for reducing the risks I’ve focused on.\nSome AI companies restrict access to their models so people won’t use the AIs to create pornography, misleading images and text, etc. I’m not necessarily against this and support versions of it (it depends on the details), but I mostly don’t think it is a key way to reduce the risks I’ve focused on. For those risks, the hype that comes from seeing a demonstration of a system’s capabilities could be even more dangerous than direct harms.\nI sometimes see people implying that open-sourcing AI models - and otherwise making them as broadly available as possible - is a key social-benefit measure. While there may be benefits in some cases, I mostly see this kind of thing as being negative (or at best neutral) in terms of the risks I’m most concerned about. \nI think it can contribute to hype and acceleration, and could make it generally harder to enforce safety standards. \n \nIn the long run, I worry that AI systems could become extraordinarily powerful (more so than e.g. nuclear weapons), so I don’t think “Make sure everyone has access asap” is the right framework. \n \nIn addition to increasing dangers from misaligned AI, this framework could increase other dangers I’ve written about previously.\nI generally don’t think AI companies should be trying to get governments to pay more attention to AI, for reasons I’ll get to in a future piece. (Forming relationships with policymakers could be good, though.)\nWhen an AI company presents some decision as being for the benefit of humanity, I often ask myself, “Could this same decision be justified by just wanting to commercialize successfully?”\nFor example, making AI models “safe” in the sense that they usually behave as users intend (including things like refraining from toxic language, chaotic behavior, etc.) can be important for commercial viability, but isn’t necessarily good enough for the risks I worry about.Footnotes\n Disclosure: my wife works at one such company (Anthropic) and used to work at another (OpenAI), and has equity in both. ↩\n Though I won’t, because I decided I don’t want to get into a thing about whom I did and didn’t link to. Feel free to give real-world examples in the comments! ↩\n Now, AI companies could sometimes be doing “responsible” or “safety-oriented” things in order to get good PRs, recruit employees, make existing employees happy, etc. In this sense, the actions could be ultimately profit-motivated. But that would still mean there are enough people who care about reducing AI risk that actions like these have PR benefits, recruiting benefits, etc. That’s a big deal! And it suggests that if concern about AI risks (and understanding of how to reduce them) were more widespread, AI companies might do more good things and fewer dangerous things. ↩\n You could argue that it would be better for the world to develop extremely powerful AI systems sooner, for reasons including:\nYou might be pretty happy with the global balance of power between countries today, and be worried that it’ll get worse in the future. The latter could lead to a situation where the “wrong” government leads the way on transformative AI.\nYou might think that the later we develop transformative AI, the more quickly everything will play out, because there will be more computing resources available in the world. E.g., if we develop extremely powerful systems tomorrow, there would only be so many copies we could run at once, whereas if we develop equally powerful systems in 50 years, it might be a lot easier for lots of people to run lots of copies. (More: Hardware Overhang)\n A key reason I believe it’s best to avoid acceleration at this time is because it seems plausible (at least 10% likely) that transformative AI will be developed extremely soon - as in within 10 years of today. My impression is that many people at major AI companies tend to agree with this. I think this is a very scary possibility, and if this is the case, the arguments I give in the main text seem particularly important (e.g., many key interventions seem to be in a pretty embryonic state, and awareness of key risks seems low).\n A related case one could make for acceleration is “It’s worth accelerating things on the whole to increase the probability that the particular company in question succeeds” (more here: the “competition” frame). I think this is a valid consideration, which is why I talk about tricky tradeoffs in the main text. ↩\n Note that my wife is a former employee of OpenAI, the company I link to there, and she owns equity in the company. ↩\n", "url": "https://www.cold-takes.com/what-ai-companies-can-do-today-to-help-with-the-most-important-century/", "title": "What AI companies can do today to help with the most important century", "source": "cold.takes", "source_type": "blog", "date_published": "2023-02-20", "id": "e25fd27bd89cded8a0bd11111d49b6f1"} -{"text": "Let’s say you’re convinced that AI could make this the most important century of all time for humanity. What can you do to help things go well instead of poorly?\nI think the biggest opportunities come from a full-time job (and/or the money you make from it). I think people are generally far better at their jobs than they are at anything else. \nThis piece will list the jobs I think are especially high-value. I expect things will change (a lot) from year to year - this is my picture at the moment.\nHere’s a summary:\nRole\nSkills/assets you'd need\nResearch and engineering on AI safety\nTechnical ability (but not necessarily AI background)\n \nInformation security to reduce the odds powerful AI is leaked\nSecurity expertise or willingness/ability to start in junior roles (likely not AI)\n \nOther roles at AI companies\nSuitable for generalists (but major pros and cons)\n \nGovt and govt-facing think tanks\nSuitable for generalists (but probably takes a long time to have impact)\n \nJobs in politics\nSuitable for generalists if you have a clear view on which politicians to help\n \nForecasting to get a better handle on what’s coming\nStrong forecasting track record (can be pursued part-time)\n \n\"Meta\" careers\nMisc / suitable for generalists\n \nLow-guidance options\nThese ~only make sense if you read & instantly think \"That's me\"\n \nA few notes before I give more detail:\nThese jobs aren’t the be-all/end-all. I expect a lot to change in the future, including a general increase in the number of helpful jobs available. \nMost of today’s opportunities are concentrated in the US and UK, where the biggest AI companies (and AI-focused nonprofits) are. This may change down the line.\nMost of these aren’t jobs where you can just take instructions and apply narrow skills. \nThe issues here are tricky, and your work will almost certainly be useless (or harmful) according to someone.\n \nI recommend forming your own views on the key risks of AI - and/or working for an organization whose leadership you’re confident in.\nStaying open-minded and adaptable is crucial. \nI think it’s bad to rush into a mediocre fit with one of these jobs, and better (if necessary) to stay out of AI-related jobs while skilling up and waiting for a great fit.\n \nI don’t think it’s helpful (and it could be harmful) to take a fanatical, “This is the most important time ever - time to be a hero” attitude. Better to work intensely but sustainably, stay mentally healthy and make good decisions.\nThe first section of this piece will recap my basic picture of the major risks, and the promising ways to reduce these risks (feel free to skip if you think you’ve got a handle on this).\nThe next section will elaborate on the options in the table above.\nAfter that, I’ll talk about some of the things you can do if you aren’t ready for a full-time career switch yet, and give some general advice for avoiding doing harm and burnout.\nRecapping the major risks, and some things that could help\nThis is a quick recap of the major risks from transformative AI. For a longer treatment, see How we could stumble into an AI catastrophe, and for an even longer one see the full series. To skip to the next section, click here.\nThe backdrop: transformative AI could be developed in the coming decades. If we develop AI that can automate all the things humans do to advance science and technology, this could cause explosive technological progress that could bring us more quickly than most people imagine to a radically unfamiliar future. \nSuch AI could also be capable of defeating all of humanity combined, if it were pointed toward that goal. \n(Click to expand) The most important century \nIn the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.\nI focus on a hypothetical kind of AI that I call PASTA, or Process for Automating Scientific and Technological Advancement. PASTA would be AI that can essentially automate all of the human activities needed to speed up scientific and technological advancement.\nUsing a variety of different forecasting approaches, I argue that PASTA seems more likely than not to be developed this century - and there’s a decent chance (more than 10%) that we’ll see it within 15 years or so.\nI argue that the consequences of this sort of AI could be enormous: an explosion in scientific and technological progress. This could get us more quickly than most imagine to a radically unfamiliar future.\nI’ve also argued that AI systems along these lines could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nFor more, see the most important century landing page. The series is available in many formats, including audio; I also provide a summary, and links to podcasts where I discuss it at a high level.\n(Click to expand) How could AI systems defeat humanity?\nA previous piece argues that AI systems could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nBy defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.\nOne way this could happen would be via “superintelligence” It’s imaginable that a single AI system (or set of systems working together) could:\nDo its own research on how to build a better AI system, which culminates in something that has incredible other abilities.\nHack into human-built software across the world.\nManipulate human psychology.\nQuickly generate vast wealth under the control of itself or any human allies.\nCome up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.\nDevelop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.\nBut even if “superintelligence” never comes into play - even if any given AI system is at best equally capable to a highly capable human - AI could collectively defeat humanity. The piece explains how.\nThe basic idea is that humans are likely to deploy AI systems throughout the economy, such that they have large numbers and access to many resources - and the ability to make copies of themselves. From this starting point, AI systems with human-like (or greater) capabilities would have a number of possible ways of getting to the point where their total population could outnumber and/or out-resource humans.\nMore: AI could defeat all of us combined\nMisalignment risk: AI could end up with dangerous aims of its own. \nIf this sort of AI is developed using the kinds of trial-and-error-based techniques that are common today, I think it’s likely that it will end up “aiming” for particular states of the world, much like a chess-playing AI “aims” for a checkmate position - making choices, calculations and plans to get particular types of outcomes, even when doing so requires deceiving humans. \nI think it will be difficult - by default - to ensure that AI systems are aiming for what we (humans) want them to aim for, as opposed to gaining power for ends of their own.\nIf AIs have ambitious aims of their own - and are numerous and/or capable enough to overpower humans - I think we have a serious risk that AIs will take control of the world and disempower humans entirely.\n(Click to expand) Why would AI \"aim\" to defeat humanity?\nA previous piece argued that if today’s AI development methods lead directly to powerful enough AI systems, disaster is likely by default (in the absence of specific countermeasures). \nIn brief:\nModern AI development is essentially based on “training” via trial-and-error. \nIf we move forward incautiously and ambitiously with such training, and if it gets us all the way to very powerful AI systems, then such systems will likely end up aiming for certain states of the world (analogously to how a chess-playing AI aims for checkmate).\nAnd these states will be other than the ones we intended, because our trial-and-error training methods won’t be accurate. For example, when we’re confused or misinformed about some question, we’ll reward AI systems for giving the wrong answer to it - unintentionally training deceptive behavior.\nWe should expect disaster if we have AI systems that are both (a) powerful enough to defeat humans and (b) aiming for states of the world that we didn’t intend. (“Defeat” means taking control of the world and doing what’s necessary to keep us out of the way; it’s unclear to me whether we’d be literally killed or just forcibly stopped1 from changing the world in ways that contradict AI systems’ aims.)\nMore: Why would AI \"aim\" to defeat humanity?\nCompetitive pressures, and ambiguous evidence about the risks, could make this situation very dangerous. In a previous piece, I lay out a hypothetical story about how the world could stumble into catastrophe. In this story:\nThere are warning signs about the risks of misaligned AI - but there’s a lot of ambiguity about just how big the risk is.\nEveryone is furiously racing to be first to deploy powerful AI systems. \nWe end up with a big risk of deploying dangerous AI systems throughout the economy - which means a risk of AIs disempowering humans entirely. \nAnd even if we navigate that risk - even if AI behaves as intended - this could be a disaster if the most powerful AI systems end up concentrated in the wrong hands (something I think is reasonably likely due to the potential for power imbalances). There are other risks as well.\n(Click to expand) Why AI safety could be hard to measure\nIn previous pieces, I argued that:\nIf we develop powerful AIs via ambitious use of the “black-box trial-and-error” common in AI development today, then there’s a substantial risk that: \nThese AIs will develop unintended aims (states of the world they make calculations and plans toward, as a chess-playing AI \"aims\" for checkmate);\n \nThese AIs could deceive, manipulate, and even take over the world from humans entirely as needed to achieve those aims.\nPeople today are doing AI safety research to prevent this outcome, but such research has a number of deep difficulties:\n“Great news - I’ve tested this AI and it looks safe.” Why might we still have a problem?\n \nProblem\nKey question\nExplanation\nThe Lance Armstrong problem\nDid we get the AI to be actually safe or good at hiding its dangerous actions?\nWhen dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “appearing to behave well.”\nWhen professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.\n \nThe King Lear problem\nThe AI is (actually) well-behaved when humans are in control. Will this transfer to when AIs are in control?\nIt's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't. \nAIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to take control of the world entirely. It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation. \nLike King Lear trying to decide how much power to give each of his daughters before abdicating the throne.\n \nThe lab mice problem\nToday's \"subhuman\" AIs are safe.What about future AIs with more human-like abilities?\nToday's AI systems aren't advanced enough to exhibit the basic behaviors we want to study, such as deceiving and manipulating humans.\nLike trying to study medicine in humans by experimenting only on lab mice.\n \nThe first contact problem\nImagine that tomorrow's \"human-like\" AIs are safe. How will things go when AIs have capabilities far beyond humans'?\nAI systems might (collectively) become vastly more capable than humans, and it's ... just really hard to have any idea what that's going to be like. As far as we know, there has never before been anything in the galaxy that's vastly more capable than humans in the relevant ways! No matter what we come up with to solve the first three problems, we can't be too confident that it'll keep working if AI advances (or just proliferates) a lot more. \nLike trying to plan for first contact with extraterrestrials (this barely feels like an analogy).\n \n(Click to expand) Power imbalances, and other risks beyond misaligned AI\nI’ve argued that AI could cause a dramatic acceleration in the pace of scientific and technological advancement. \nOne way of thinking about this: perhaps (for reasons I’ve argued previously) AI could enable the equivalent of hundreds of years of scientific and technological advancement in a matter of a few months (or faster). If so, then developing powerful AI a few months before others could lead to having technology that is (effectively) hundreds of years ahead of others’.\nBecause of this, it’s easy to imagine that AI could lead to big power imbalances, as whatever country/countries/coalitions “lead the way” on AI development could become far more powerful than others (perhaps analogously to when a few smallish European states took over much of the rest of the world).\nI think things could go very badly if the wrong country/countries/coalitions lead the way on transformative AI. At the same time, I’ve expressed concern that people might overfocus on this aspect of things vs. other issues, for a number of reasons including:\nI think people naturally get more animated about \"helping the good guys beat the bad guys\" than about \"helping all of us avoid getting a universally bad outcome, for impersonal reasons such as 'we designed sloppy AI systems' or 'we created a dynamic in which haste and aggression are rewarded.'\"\nI expect people will tend to be overconfident about which countries, organizations or people they see as the \"good guys.\"\n(More here.)\nThere are also dangers of powerful AI being too widespread, rather than too concentrated. In The Vulnerable World Hypothesis, Nick Bostrom contemplates potential future dynamics such as “advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions.” In addition to avoiding worlds where AI capabilities end up concentrated in the hands of a few, it could also be important to avoid worlds in which they diffuse too widely, too quickly, before we’re able to assess the risks of widespread access to technology far beyond today’s.\nI discuss these and a number of other AI risks in a previous piece: Transformative AI issues (not just misalignment): an overview\nI’ve laid out several ways to reduce the risks (color-coded since I’ll be referring to them throughout the piece):\nAlignment research. Researchers are working on ways to design AI systems that are both (a) “aligned” in the sense that they don’t have unintended aims of their own; (b) very powerful, to the point where they can be competitive with the best systems out there. \nI’ve laid out three high-level hopes for how - using techniques that are known today - we might be able to develop AI systems that are both aligned and powerful. \nThese techniques wouldn’t necessarily work indefinitely, but they might work long enough so that we can use early safe AI systems to make the situation much safer (by automating huge amounts of further alignment research, by helping to demonstrate risks and make the case for greater caution worldwide, etc.)\n(A footnote explains how I’m using “aligned” vs. “safe.”1)\n(Click to expand) High-level hopes for AI alignment\nA previous piece goes through what I see as three key possibilities for building powerful-but-safe AI systems.\nIt frames these using Ajeya Cotra’s young businessperson analogy for the core difficulties. In a nutshell, once AI systems get capable enough, it could be hard to test whether they’re safe, because they might be able to deceive and manipulate us into getting the wrong read. Thus, trying to determine whether they’re safe might be something like “being an eight-year-old trying to decide between adult job candidates (some of whom are manipulative).”\nKey possibilities for navigating this challenge:\nDigital neuroscience: perhaps we’ll be able to read (and/or even rewrite) the “digital brains” of AI systems, so that we can know (and change) what they’re “aiming” to do directly - rather than having to infer it from their behavior. (Perhaps the eight-year-old is a mind-reader, or even a young Professor X.)\nLimited AI: perhaps we can make AI systems safe by making them limited in various ways - e.g., by leaving certain kinds of information out of their training, designing them to be “myopic” (focused on short-run as opposed to long-run goals), or something along those lines. Maybe we can make “limited AI” that is nonetheless able to carry out particular helpful tasks - such as doing lots more research on how to achieve safety without the limitations. (Perhaps the eight-year-old can limit the authority or knowledge of their hire, and still get the company run successfully.)\nAI checks and balances: perhaps we’ll be able to employ some AI systems to critique, supervise, and even rewrite others. Even if no single AI system would be safe on its own, the right “checks and balances” setup could ensure that human interests win out. (Perhaps the eight-year-old is able to get the job candidates to evaluate and critique each other, such that all the eight-year-old needs to do is verify basic factual claims to know who the best candidate is.)\nThese are some of the main categories of hopes that are pretty easy to picture today. Further work on AI safety research might result in further ideas (and the above are not exhaustive - see my more detailed piece, posted to the Alignment Forum rather than Cold Takes, for more).\n \nStandards and monitoring.I see some hope for developing standards that all potentially dangerous AI projects (whether companies, government projects, etc.) need to meet, and enforcing these standards globally. \nSuch standards could require strong demonstrations of safety, strong security practices, designing AI systems to be difficult to use for overly dangerous activity, etc. \nWe don't need a perfect system or international agreement to get a lot of benefit out of such a setup. The goal isn’t just to buy time – it’s to change incentives, such that AI projects need to make progress on improving security, alignment, etc. in order to be profitable.\n(Click to expand) How standards might be established and become national or international\nI previously laid out a possible vision on this front, which I’ll give a slightly modified version of here:\nToday’s leading AI companies could self-regulate by committing not to build or deploy a system that they can’t convincingly demonstrate is safe (e.g., see Google’s 2018 statement, \"We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”). \nEven if some people at the companies would like to deploy unsafe systems, it could be hard to pull this off once the company has committed not to. \n \nEven if there’s a lot of room for judgment in what it means to demonstrate an AI system is safe, having agreed in advance that certain evidence is not good enough could go a long way.\nAs more AI companies are started, they could feel soft pressure to do similar self-regulation, and refusing to do so is off-putting to potential employees, investors, etc.\nEventually, similar principles could be incorporated into various government regulations and enforceable treaties.\nGovernments could monitor for dangerous projects using regulation and even overseas operations. E.g., today the US monitors (without permission) for various signs that other states might be developing nuclear weapons, and might try to stop such development with methods ranging from threats of sanctions to cyberwarfare or even military attacks. It could do something similar for any AI development projects that are using huge amounts of compute and haven’t volunteered information about whether they’re meeting standards.\nSuccessful, careful AI projects. I think an AI company (or other project) can enormously improve the situation, if it can both (a) be one of the leaders in developing powerful AI; (b) prioritize doing (and using powerful AI for) things that reduce risks, such as doing alignment research. (But don’t read this as ignoring the fact that AI companies can do harm as well!)\n(Click to expand) How a careful AI project could be helpful\nIn addition to using advanced AI to do AI safety research (noted above), an AI project could:\nPut huge effort into designing tests for signs of danger, and - if it sees danger signs in its own systems - warning the world as a whole.\nOffer deals to other AI companies/projects. E.g., acquiring them or exchanging a share of its profits for enough visibility and control to ensure that they don’t deploy dangerous AI systems.\nUse its credibility as the leading company to lobby the government for helpful measures (such as enforcement of a monitoring-and-standards regime), and to more generally highlight key issues and advocate for sensible actions.\nTry to ensure (via design, marketing, customer choice, etc.) that its AI systems are not used for dangerous ends, and are used on applications that make the world safer and better off. This could include defensive deployment to reduce risks from other AIs; it could include using advanced AI systems to help it gain clarity on how to get a good outcome for humanity; etc.\nAn AI project with a dominant market position could likely make a huge difference via things like the above (and probably via many routes I haven’t thought of). And even an AI project that is merely one of several leaders could have enough resources and credibility to have a lot of similar impacts - especially if it’s able to “lead by example” and persuade other AI projects (or make deals with them) to similarly prioritize actions like the above.\nA challenge here is that I’m envisioning a project with two arguably contradictory properties: being careful (e.g., prioritizing actions like the above over just trying to maintain its position as a profitable/cutting-edge project) and successful (being a profitable/cutting-edge project). In practice, it could be very hard for an AI project to walk the tightrope of being aggressive enough to be a “leading” project (in the sense of having lots of resources, credibility, etc.), while also prioritizing actions like the above (which mostly, with some exceptions, seem pretty different from what an AI project would do if it were simply focused on its technological lead and profitability).\n \nStrong security. A key threat is that someone could steal major components of an AI system and deploy it incautiously. It could be extremely hard for an AI project to be robustly safe against having its AI “stolen.” But this could change, if there’s enough effort to work out the problem of how to secure a large-scale, powerful AI system.\n(Click to expand) The challenging of securing dangerous AI\nIn Racing Through a Minefield, I described a \"race\" between cautious actors (those who take misalignment risk seriously) and incautious actors (those who are focused on deploying AI for their own gain, and aren't thinking much about the dangers to the whole world). Ideally, cautious actors would collectively have more powerful AI systems than incautious actors, so they could take their time doing alignment research and other things to try to make the situation safer for everyone. \nBut if incautious actors can steal an AI from cautious actors and rush forward to deploy it for their own gain, then the situation looks a lot bleaker. And unfortunately, it could be hard to protect against this outcome.\nIt's generally extremely difficult to protect data and code against a well-resourced cyberwarfare/espionage effort. An AI’s “weights” (you can think of this sort of like its source code, though not exactly) are potentially very dangerous on their own, and hard to get extreme security for. Achieving enough cybersecurity could require measures, and preparations, well beyond what one would normally aim for in a commercial context.\nJobs that can help\nIn this long section, I’ll list a number of jobs I wish more people were pursuing.\nUnfortunately, I can’t give individualized help exploring one or more of these career tracks. Starting points could include 80,000 Hours and various other resources.\nResearch and engineering careers. You can contribute to alignment research as a researcher and/or software engineer (the line between the two can be fuzzy in some contexts). \nThere are (not necessarily easy-to-get) jobs along these lines at major AI labs, in established academic labs, and at independent nonprofits (examples in footnote).2\nDifferent institutions will have very different approaches to research, very different environments and philosophies, etc. so it’s hard to generalize about what might make someone a fit. A few high-level points:\nIt takes a lot of talent to get these jobs, but you shouldn’t assume that it takes years of experience in a particular field (or a particular degree). \nI’ve seen a number of people switch over from other fields (such as physics) and become successful extremely quickly. \n \nIn addition to on-the-job training, there are independent programs specifically aimed at helping people skill up quickly.3\nYou also shouldn’t assume that these jobs are only for “scientist” types - there’s a substantial need for engineers, which I expect to grow.\nI think most people working on alignment consider a lot of other people’s work to be useless at best. This seems important to know going in for a few reasons. \nYou shouldn’t assume that all work is useless just because the first examples you see seem that way.\n \nIt’s good to be aware that whatever you end up doing, someone will probably dunk on your work on the Internet. \n \nAt the same time, you shouldn’t assume that your work is helpful because it’s “safety research.” It's worth investing a lot in understanding how any particular research you're doing could be helpful (and how it could fail). \nI’d even suggest taking regular dedicated time (a day every few months?) to pause working on the day-to-day and think about how your work fits into the big picture.\nFor a sense of what work I think is most likely to be useful, I’d suggest my piece on why AI safety seems hard to measure - I’m most excited about work that directly tackles the challenges outlined in that piece, and I’m pretty skeptical of work that only looks good with those challenges assumed away. (Also see my piece on broad categories of research I think have a chance to be highly useful, and some comments from a while ago that I still mostly endorse.) \nI also want to call out a couple of categories of research that are getting some attention today, but seem at least a bit under-invested in, even relative to alignment research:\nThreat assessment research. To me, there’s an important distinction between “Making AI systems safer” and “Finding out how dangerous they might end up being.” (Today, these tend to get lumped together under “alignment research.”) \nA key approach to medical research is using model organisms - for example, giving cancer to mice, so we can see whether we’re able to cure them. \n \nAnalogously, one might deliberately (though carefully!4) design an AI system to deceive and manipulate humans, so we can (a) get a more precise sense of what kinds of training dynamics lead to deception and manipulation; (b) see whether existing safety techniques are effective countermeasures.\n \nIf we had concrete demonstrations of AI systems becoming deceptive/manipulative/power-seeking, we could potentially build more consensus for caution (e.g., standards and monitoring). Or we could imaginably produce evidence that the threat is low.5\nA couple of early examples of threat assessment research: here and here.\nAnti-misuse research. \nI’ve written about how we could face catastrophe even from aligned AI. That is - even if AI does what its human operators want it to be doing, maybe some of its human operators want it to be helping them build bioweapons, spread propaganda, etc. \n \nBut maybe it’s possible to train AIs so that they’re hard to use for purposes like this - a separate challenge from training them to avoid deceiving and manipulating their human operators. \n \nIn practice, a lot of the work done on this today (example) tends to get called “safety” and lumped in with alignment (and sometimes the same research helps with both goals), but again, I think it’s a distinction worth making.\n \nI expect the earliest and easiest versions of this work to happen naturally as companies try to make their AI models fit for commercialization - but at some point it might be important to be making more intense, thorough attempts to prevent even very rare (but catastrophic) misuse.\nInformation security careers. There’s a big risk that a powerful AI system could be “stolen” via hacking/espionage, and this could make just about every kind of risk worse. I think it could be very challenging - but possible - for AI projects to be secure against this threat. (More above.)\nI really think security is not getting enough attention from people concerned about AI risk, and I disagree with the idea that key security problems can be solved just by hiring from today’s security industry.\nFrom what I’ve seen, AI companies have a lot of trouble finding good security hires. I think a lot of this is simply that security is challenging and valuable, and demand for good hires (especially people who can balance security needs against practical needs) tends to swamp supply. \nAnd yes, this means good security people are well-paid!\nAdditionally, AI could present unique security challenges in the future, because it requires protecting something that is simultaneously (a) fundamentally just software (not e.g. uranium), and hence very hard to protect; (b) potentially valuable enough that one could imagine very well-resourced state programs going all-out to steal it, with a breach having globally catastrophic consequences. I think trying to get out ahead of this challenge, by experimenting early on with approaches to it, could be very important.\nIt’s plausible to me that security is as important as alignment right now, in terms of how much one more good person working on it will help. \nAnd security is an easier path, because one can get mentorship from a large community of security people working on things other than AI.6\nI think there’s a lot of potential value both in security research (e.g., developing new security techniques) and in simply working at major AI companies to help with their existing security needs.\nFor more on this topic, see this recent 80,000 hours report and this 2019 post by two of my coworkers.\nOther jobs at AI companies. AI companies hire for a lot of roles, many of which don’t require any technical skills. \nIt’s a somewhat debatable/tricky path to take a role that isn’t focused specifically on safety or security. Some people believe7 that you can do more harm than good this way, by helping companies push forward with building dangerous AI before the risks have gotten much attention or preparation - and I think this is a pretty reasonable take. \nAt the same time:\nYou could argue something like: “Company X has potential to be a successful, careful AI project. That is, it’s likely to deploy powerful AI systems more carefully and helpfully than others would, and use them to reduce risks by automating alignment research and other risk-reducing tasks. Furthermore, Company X is most likely to make a number of other decisions wisely as things develop. So, it’s worth accepting that Company X is speeding up AI progress, because of the hope that Company X can make things go better.” This obviously depends on how you feel about Company X compared to others!\nWorking at Company X could also present opportunities to influence Company X. If you’re a valuable contributor and you are paying attention to the choices the company is making (and speaking up about them), you could affect the incentives of leadership. \nI think this can be a useful thing to do in combination with the other things on this list, but I generally wouldn’t advise taking a job if this is one’s main goal. \nWorking at an AI company presents opportunities to become generally more knowledgeable about AI, possibly enabling a later job change to something else.\n(Click to expand) How a careful AI project could be helpful\nIn addition to using advanced AI to do AI safety research (noted above), an AI project could:\nPut huge effort into designing tests for signs of danger, and - if it sees danger signs in its own systems - warning the world as a whole.\nOffer deals to other AI companies/projects. E.g., acquiring them or exchanging a share of its profits for enough visibility and control to ensure that they don’t deploy dangerous AI systems.\nUse its credibility as the leading company to lobby the government for helpful measures (such as enforcement of a monitoring-and-standards regime), and to more generally highlight key issues and advocate for sensible actions.\nTry to ensure (via design, marketing, customer choice, etc.) that its AI systems are not used for dangerous ends, and are used on applications that make the world safer and better off. This could include defensive deployment to reduce risks from other AIs; it could include using advanced AI systems to help it gain clarity on how to get a good outcome for humanity; etc.\nAn AI project with a dominant market position could likely make a huge difference via things like the above (and probably via many routes I haven’t thought of). And even an AI project that is merely one of several leaders could have enough resources and credibility to have a lot of similar impacts - especially if it’s able to “lead by example” and persuade other AI projects (or make deals with them) to similarly prioritize actions like the above.\nA challenge here is that I’m envisioning a project with two arguably contradictory properties: being careful (e.g., prioritizing actions like the above over just trying to maintain its position as a profitable/cutting-edge project) and successful (being a profitable/cutting-edge project). In practice, it could be very hard for an AI project to walk the tightrope of being aggressive enough to be a “leading” project (in the sense of having lots of resources, credibility, etc.), while also prioritizing actions like the above (which mostly, with some exceptions, seem pretty different from what an AI project would do if it were simply focused on its technological lead and profitability).\n \n80,000 Hours has a collection of anonymous advice on how to think about the pros and cons of working at an AI company.\nIn a future piece, I’ll discuss what I think AI companies can be doing today to prepare for transformative AI risk. This could be helpful for getting a sense of what an unusually careful AI company looks like.\nJobs in government and at government-facing think tanks. I think there is a lot of value in providing quality advice to governments (especially the US government) on how to think about AI - both today’s systems and potential future ones. \nI also think it could make sense to work on other technology issues in government, which could be a good path to working on AI later (I expect government attention to AI to grow over time). \nPeople interested in careers like these can check out Open Philanthropy’s Technology Policy Fellowships and RAND Corporation's Technology and Security Policy Fellows.\nOne related activity that seems especially valuable: understanding the state of AI in countries other than the one you’re working for/in - particularly countries that (a) have a good chance of developing their own major AI projects down the line; (b) are difficult to understand much about by default. \nHaving good information on such countries could be crucial for making good decisions, e.g. about moving cautiously vs. racing forward vs. trying to enforce safety standards internationally. \nI think good work on this front has been done by the Center for Security and Emerging Technology8 among others. \nA future piece will discuss other things I think governments can be doing today to prepare for transformative AI risk. I won’t have a ton of tangible recommendations quite yet, but I expect there to be more over time, especially if and when standards and monitoring frameworks become better-developed.\nJobs in politics. The previous category focused on advising governments; this one is about working on political campaigns, doing polling analysis, etc. to generally improve the extent to which sane and reasonable people are in power. Obviously, it’s a judgment call which politicians are the “good” ones and which are the “bad” ones, but I didn’t want to leave out this category of work.\nForecasting. I’m intrigued by organizations like Metaculus, HyperMind, Good Judgment,9 Manifold Markets, and Samotsvety - all trying, in one way or another, to produce good probabilistic forecasts (using generalizable methods10) about world events. \nIf we could get good forecasts about questions like “When will AI systems be powerful enough to defeat all of humanity?” and “Will AI safety research in category X be successful?”, this could be useful for helping people make good decisions. (These questions seem very hard to get good predictions on using these organizations’ methods, but I think it’s an interesting goal.)\nTo explore this area, I’d suggest learning about forecasting generally (Superforecasting is a good starting point) and building up your own prediction track record on sites such as the above.\n“Meta” careers. There are a number of jobs focused on helping other people learn about key issues, develop key skills and end up in helpful jobs (a bit more discussion here).\nIt can also make sense to take jobs that put one in a good position to donate to nonprofits doing important work, to spread helpful messages, and to build skills that could be useful later (including in unexpected ways, as things develop), as I’ll discuss below.\nLow-guidance jobs\nThis sub-section lists some projects that either don’t exist (but seem like they ought to), or are in very embryonic stages. So it’s unlikely you can get any significant mentorship working on these things. \nI think the potential impact of making one of these work is huge, but I think most people will have an easier time finding a fit with jobs from the previous section (which is why I listed those first). \nThis section is largely to illustrate that I expect there to be more and more ways to be helpful as time goes on - and in case any readers feel excited and qualified to tackle these projects themselves, despite a lack of guidance and a distinct possibility that a project will make less sense in reality than it does on paper.\nA big one in my mind is developing safety standards that could be used in a standards and monitoring regime. By this I mean answering questions like:\nWhat observations could tell us that AI systems are getting dangerous to humanity (whether by pursuing aims of their own or by helping humans do dangerous things)? \nA starting-point question: why do we believe today’s systems aren’t dangerous? What, specifically, are they unable to do that they’d have to do in order to be dangerous, and how will we know when that’s changed?\nOnce AI systems have potential for danger, how should they be restricted, and what conditions should AI companies meet (e.g., demonstrations of safety and security) in order to loosen restrictions?\nThere is some early work going on along these lines, at both AI companies and nonprofits. If it goes well, I expect that there could be many jobs in the future, doing things like:\nContinuing to refine and improve safety standards as AI systems get more advanced.\nProviding AI companies with “audits” - examinations of whether their systems meet standards, provided by parties outside the company to reduce conflicts of interest.\nAdvocating for the importance of adherence to standards. This could include advocating for AI companies to abide by standards, and potentially for government policies to enforce standards.\nOther public goods for AI projects. I can see a number of other ways in which independent organizations could help AI projects exercise more caution / do more to reduce risks:\nFacilitating safety research collaborations. I worry that at some point, doing good alignment research will only be possible with access to state-of-the-art AI models - but such models will be extraordinarily expensive and exclusively controlled by major AI companies. \nI hope AI companies will be able to partner with outside safety researchers (not just rely on their own employees) for alignment research, but this could get quite tricky due to concerns about intellectual property leaks. \n \nA third-party organization could do a lot of the legwork of vetting safety researchers, helping them with their security practices, working out agreements with respect to intellectual property, etc. to make partnerships - and selective information sharing, more broadly - more workable.\nEducation for key people at AI companies. An organization could help employees, investors, and board members of AI companies learn about the potential risks and challenges of advanced AI systems. I’m especially excited about this for board members, because: \nI’ve already seen a lot of interest from AI companies in forming strong ethics advisory boards, and/or putting well-qualified people on their governing boards (see footnote for the difference11). I expect demand to go up.\n \nRight now, I don’t think there are a lot of people who are both (a) prominent and “fancy” enough to be considered for such boards; (b) highly thoughtful about, and well-versed in, what I consider some of the most important risks of transformative AI (covered in this piece and the series it’s part of).\n \nAn “education for potential board members” program could try to get people quickly up to speed on good board member practices generally, on risks of transformative AI, and on the basics of how modern AI works.\nHelping share best practices across AI companies. A third-party organization might collect information about how different AI companies are handling information security, alignment research, processes for difficult decisions, governance, etc. and share it across companies, while taking care to preserve confidentiality. I’m particularly interested in the possibility of developing and sharing innovative governance setups for AI companies.\nThinking and stuff. There’s tons of potential work to do in the category of “coming up with more issues we ought to be thinking about, more things people (and companies and governments) can do to be helpful, etc.”\nAbout a year ago, I published a list of research questions that could be valuable and important to gain clarity on. I still mostly endorse this list (though I wouldn’t write it just as is today).\nA slightly different angle: it could be valuable to have more people thinking about the question, “What are some tangible policies governments could enact to be helpful?” E.g., early steps towards standards and monitoring. This is distinct from advising governments directly (it's earlier-stage).\nSome AI companies have policy teams that do work along these lines. And a few Open Philanthropy employees work on topics along the lines of the first bullet point. However, I tend to think of this work as best done by people who need very little guidance (more at my discussion of wicked problems), so I’m hesitant to recommend it as a mainline career option.\nThings you can do if you’re not ready for a full-time career change\nSwitching careers is a big step, so this section lists some ways you can be helpful regardless of your job - including preparing yourself for a later switch.\nFirst and most importantly, you may have opportunities to spread key messages via social media, talking with friends and colleagues, etc. I think there’s a lot of potential to make a difference here, and I wrote a previous post on this specifically.\nSecond, you can explore potential careers like those I discuss above. I’d suggest generally checking out job postings, thinking about what sorts of jobs might be a fit for you down the line, meeting people who work in jobs like those and asking them about their day-to-day, etc.\nRelatedly, you can try to keep your options open. \nIt’s hard to predict what skills will be useful as AI advances further and new issues come up. \nBeing ready to switch careers when a big opportunity comes up could be hugely valuable - and hard. (Most people would have a lot of trouble doing this late in their career, no matter how important!) \nBuilding up the financial, psychological and social ability to change jobs later on would (IMO) be well worth a lot of effort.\nRight now there aren’t a lot of obvious places to donate (though you can donate to the Long-Term Future Fund12 if you feel so moved). \nI’m guessing this will change in the future, for a number of reasons.13\nSomething I’d consider doing is setting some pool of money aside, perhaps invested such that it’s particularly likely to grow a lot if and when AI systems become a lot more capable and impressive,14 in case giving opportunities come up in the future. \nYou can also, of course, donate to things today that others aren’t funding for whatever reason.\nLearning more about key issues could broaden your options. I think the full series I’ve written on key risks is a good start. To do more, you could:\nActively engage with this series by writing your own takes, discussing with others, etc.\nConsider various online courses15 on relevant issues.\nI think it’s also good to get as familiar with today’s AI systems (and the research that goes into them) as you can. \nIf you’re happy to write code, you can check out coding-intensive guides and programs (examples in footnote).16\nIf you don’t want to code but can read somewhat technical content, I’d suggest getting oriented with some basic explainers on deep learning17 and then reading significant papers on AI and AI safety.18\nWhether you’re very technical or not at all, I think it’s worth playing with public state-of-the-art AI models, as well as seeing highlights of what they can do via Twitter and such. \nFinally, if you happen to have opportunities to serve on governing boards or advisory boards for key organizations (e.g., AI companies), I think this is one of the best non-full-time ways to help. \nI don’t expect this to apply to most people, but wanted to mention it in case any opportunities come up. \nIt’s particularly important, if you get a role like this, to invest in educating yourself on key issues.\nSome general advice\nI think full-time work has huge potential to help, but also big potential to do harm, or to burn yourself out. So here are some general suggestions.\nThink about your own views on the key risks of AI, and what it might look like for the world to deal with the risks. Most of the jobs I’ve discussed aren’t jobs where you can just take instructions and apply narrow skills. The issues here are tricky, and it takes judgment to navigate them well. \nFurthermore, no matter what you do, there will almost certainly be people who think your work is useless (if not harmful).19 This can be very demoralizing. I think it’s easier if you’ve thought things through and feel good about the choices you’re making.\nI’d advise trying to learn as much as you can about the major risks of AI (see above for some guidance on this) - and/or trying to work for an organization whose leadership you have a good amount of confidence in.\nJog, don’t sprint. Skeptics of the “most important century” hypothesis will sometimes say things like “If you really believe this, why are you working normal amounts of hours instead of extreme amounts? Why do you have hobbies (or children, etc.) at all?” And I’ve seen a number of people with an attitude like: “THIS IS THE MOST IMPORTANT TIME IN HISTORY. I NEED TO WORK 24/7 AND FORGET ABOUT EVERYTHING ELSE. NO VACATIONS.\"\nI think that’s a very bad idea. \nTrying to reduce risks from advanced AI is, as of today, a frustrating and disorienting thing to be doing. It’s very hard to tell whether you’re being helpful (and as I’ve mentioned, many will inevitably think you’re being harmful). \nI think the difference between “not mattering,” “doing some good” and “doing enormous good” comes down to how you choose the job, how good at it you are, and how good your judgment is (including what risks you’re most focused on and how you model them). Going “all in” on a particular objective seems bad on these fronts: it poses risks to open-mindedness, to mental health and to good decision-making (I am speaking from observations here, not just theory). \nThat is, I think it’s a bad idea to try to be 100% emotionally bought into the full stakes of the most important century - I think the stakes are just too high for that to make sense for any human being. \nInstead, I think the best way to handle “the fate of humanity is at stake” is probably to find a nice job and work about as hard as you’d work at another job, rather than trying to make heroic efforts to work extra hard. (I criticized heroic efforts in general here.) \nI think this basic formula (working in some job that is a good fit, while having some amount of balance in your life) is what’s behind a lot of the most important positive events in history to date, and presents possibly historically large opportunities today.\nSpecial thanks to Alexander Berger, Jacob Eliosoff, Alexey Guzey, Anton Korinek and Luke Muelhauser for especially helpful comments on this post. A lot of other people commented helpfully as well. Footnotes\n I use “aligned” to specifically mean that AIs behave as intended, rather than pursuing dangerous goals of their own. I use “safe” more broadly to mean that an AI system poses little risk of catastrophe for any reason in the context it’s being used in. It’s OK to mostly think of them as interchangeable in this post. ↩\n AI labs with alignment teams: Anthropic, DeepMind and OpenAI. Disclosure: my wife is co-founder and President of Anthropic, and used to work at OpenAI (and has shares in both companies); OpenAI is a former Open Philanthropy grantee.\n Academic labs: there are many of these; I’ll highlight the Steinhardt lab at Berkeley (Open Philanthropy grantee), whose recent research I’ve found especially interesting.\n Independent nonprofits: examples would be Alignment Research Center and Redwood Research (both Open Philanthropy grantees, and I sit on the board of both).\n  ↩\n Examples: AGI Safety Fundamentals, SERI MATS, MLAB (all of which have been supported by Open Philanthropy) ↩\n On one hand, deceptive and manipulative AIs could be dangerous. On the other, it might be better to get AIs trying to deceive us before they can consistently succeed; the worst of all worlds might be getting this behavior by accident with very powerful AIs. ↩\n Though I think it’s inherently harder to get evidence of low risk than evidence of high risk, since it’s hard to rule out risks arising as AI systems get more capable. ↩\n Why do I simultaneously think “This is a mature field with mentorship opportunities” and “This is a badly neglected career track for helping with the most important century”?\n In a nutshell, most good security people are not working on AI. It looks to me like there are plenty of people who are generally knowledgeable and effective at good security, but there’s also a huge amount of need for such people outside of AI specifically. \n I expect this to change eventually if AI systems become extraordinarily capable. The issue is that it might be too late at that point - the security challenges in AI seem daunting (and somewhat AI-specific) to the point where it could be important for good people to start working on them many years before AI systems become extraordinarily powerful. ↩\nHere’s Katja Grace arguing along these lines. ↩\n An Open Philanthropy grantee. ↩\n Open Philanthropy has funded Metaculus and contracted with Good Judgment and HyperMind. ↩\n That is, these groups are mostly trying things like “Incentivize people to make good forecasts; track how good people are making forecasts; aggregate forecasts” rather than “Study the specific topic of AI and make forecasts that way” (the latter is also useful, and I discuss it below). ↩\n The governing board of an organization has the hard power to replace the CEO and/or make other decisions on behalf of the organization. An advisory board merely gives advice, but in practice I think this can be quite powerful, since I’d expect many organizations to have a tough time doing bad-for-the-world things without backlash (from employees and the public) once an advisory board has recommended against them. ↩\nOpen Philanthropy, which I’m co-CEO of, has supported this fund, and its current Chair is an Open Philanthropy employee. ↩\n I generally expect there to be more and more clarity about what actions would be helpful, and more and more people willing to work on them if they can get funded. A bit more specifically and speculatively, I expect AI safety research to get more expensive as it requires access to increasingly large, expensive AI models. ↩\n Not investment advice! I would only do this with money you’ve set aside for donating such that it wouldn’t be a personal problem if you lost it all. ↩\n Some options here, here, here, here. I’ve made no attempt to be comprehensive - these are just some links that should make it easy to get rolling and see some of your options. ↩\nSpinning Up in Deep RL, ML for Alignment Bootcamp, Deep Learning Curriculum. ↩\n For the basics, I like Michael Nielsen’s guide to neural networks and deep learning; 3Blue1Brown has a video explainer series that I haven’t watched but that others have recommended highly. I’d also suggest The Illustrated Transformer (the transformer is the most important AI architecture as of today).\n For a broader overview of different architectures, see Neural Network Zoo. \n You can also check out various Coursera etc. courses on deep learning/neural networks. ↩\n I feel like the easiest way to do this is to follow AI researchers and/or top labs on Twitter. You can also check out Alignment Newsletter or ML Safety Newsletter for alignment-specific content. ↩\n Why? \n One reason is the tension between the “caution” and “competition” frames: people who favor one frame tend to see the other as harmful.\n Another reason: there are a number of people who think we’re more-or-less doomed without a radical conceptual breakthrough on how to build safe AI (they think the sorts of approaches I list here are hopeless, for reasons I confess I don’t understand very well). These folks will consider anything that isn’t aimed at a radical breakthrough ~useless, and consider some of the jobs I list in this piece to be harmful, if they are speeding up AI development and leaving us with less time for a breakthrough. \n At the same time, working toward the sort of breakthrough these folks are hoping for means doing pretty esoteric, theoretical research that many other researchers think is clearly useless. \n And trying to make AI development slower and/or more cautious is harmful according to some people who are dismissive of risks, and think the priority is to push forward as fast as we can with technology that has the potential to improve lives. ↩\n", "url": "https://www.cold-takes.com/jobs-that-can-help-with-the-most-important-century/", "title": "Jobs that can help with the most important century", "source": "cold.takes", "source_type": "blog", "date_published": "2023-02-10", "id": "3318b38b47f1eb45be5d6bb59f62ed24"} -{"text": "In the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.\nIn this more recent series, I’ve been trying to help answer this question: “So what? What can I do to help?” \nSo far, I’ve just been trying to build a picture of some of the major risks we might face (especially the risk of misaligned AI that could defeat all of humanity), what might be challenging about these risks, and why we might succeed anyway. Now I’ve finally gotten to the part where I can start laying out tangible ideas for how to help (beyond the pretty lame suggestions I gave before).\nThis piece is about one broad way to help: spreading messages that ought to be more widely understood.\nOne reason I think this topic is worth a whole piece is that practically everyone can help with spreading messages at least some, via things like talking to friends; writing explanations of your own that will appeal to particular people; and, yes, posting to Facebook and Twitter and all of that. Call it slacktivism if you want, but I’d guess it can be a big deal: many extremely important AI-related ideas are understood by vanishingly small numbers of people, and a bit more awareness could snowball. Especially because these topics often feel too “weird” for people to feel comfortable talking about them! Engaging in credible, reasonable ways could contribute to an overall background sense that it’s OK to take these ideas seriously.\nAnd then there are a lot of potential readers who might have special opportunities to spread messages. Maybe they are professional communicators (journalists, bloggers, TV writers, novelists, TikTokers, etc.), maybe they’re non-professionals who still have sizable audiences (e.g., on Twitter), maybe they have unusual personal and professional networks, etc. Overall, the more you feel you are good at communicating with some important audience (even a small one), the more this post is for you.\nThat said, I’m not excited about blasting around hyper-simplified messages. As I hope this series has shown, the challenges that could lie ahead of us are complex and daunting, and shouting stuff like “AI is the biggest deal ever!” or “AI development should be illegal!” could do more harm than good (if only by associating important ideas with being annoying). Relatedly, I think it’s generally not good enough to spread the most broad/relatable/easy-to-agree-to version of each key idea, like “AI systems could harm society.” Some of the unintuitive details are crucial. \nInstead, the gauntlet I’m throwing is: “find ways to help people understand the core parts of the challenges we might face, in as much detail as is feasible.” That is: the goal is to try to help people get to the point where they could maintain a reasonable position in a detailed back-and-forth, not just to get them to repeat a few words or nod along to a high-level take like “AI safety is important.” This is a lot harder than shouting “AI is the biggest deal ever!”, but I think it’s worth it, so I’m encouraging people to rise to the challenge and stretch their communication skills.\nBelow, I will:\nOutline some general challenges of this sort of message-spreading. \nGo through some ideas I think it’s risky to spread too far, at least in isolation.\nGo through some of the ideas I’d be most excited to see spread.\nTalk a little bit about how to spread ideas - but this is mostly up to you.\nChallenges of AI-related messages\nHere’s a simplified story for how spreading messages could go badly. \nYou’re trying to convince your friend to care more about AI risk.\nYou’re planning to argue: (a) AI could be really powerful and important within our lifetimes; (b) Building AI too quickly/incautiously could be dangerous. \nYour friend just isn’t going to care about (b) if they aren’t sold on some version of (a). So you’re starting with (a).\nUnfortunately, (a) is easier to understand than (b). So you end up convincing your friend of (a), and not (yet) (b).\nYour friend announces, “Aha - I see that AI could be tremendously powerful and important! I need to make sure that people/countries I like are first to build it!” and runs off to help build powerful AI as fast as possible. They’ve chosen the competition frame (“will the right or the wrong people build powerful AI first?”) over the caution frame (“will we screw things up and all lose?”), because the competition frame is easier to understand.\nWhy is this bad? See previous pieces on the importance of caution.\n(Click to expand) More on the “competition” frame vs. the “caution” frame”\nIn a previous piece, I talked about two contrasting frames for how to make the best of the most important century:\nThe caution frame. This frame emphasizes that a furious race to develop powerful AI could end up making everyone worse off. This could be via: (a) AI forming dangerous goals of its own and defeating humanity entirely; (b) humans racing to gain power and resources and “lock in” their values.\nIdeally, everyone with the potential to build something powerful enough AI would be able to pour energy into building something safe (not misaligned), and carefully planning out (and negotiating with others on) how to roll it out, without a rush or a race. With this in mind, perhaps we should be doing things like:\nWorking to improve trust and cooperation between major world powers. Perhaps via AI-centric versions of Pugwash (an international conference aimed at reducing the risk of military conflict), perhaps by pushing back against hawkish foreign relations moves.\nDiscouraging governments and investors from shoveling money into AI research, encouraging AI labs to thoroughly consider the implications of their research before publishing it or scaling it up, working toward standards and monitoring, etc. Slowing things down in this manner could buy more time to do research on avoiding misaligned AI, more time to build trust and cooperation mechanisms, and more time to generally gain strategic clarity \nThe “competition” frame. This frame focuses less on how the transition to a radically different future happens, and more on who's making the key decisions as it happens.\nIf something like PASTA is developed primarily (or first) in country X, then the government of country X could be making a lot of crucial decisions about whether and how to regulate a potential explosion of new technologies.\nIn addition, the people and organizations leading the way on AI and other technology advancement at that time could be especially influential in such decisions.\nThis means it could matter enormously \"who leads the way on transformative AI\" - which country or countries, which people or organizations.\nSome people feel that we can make confident statements today about which specific countries, and/or which people and organizations, we should hope lead the way on transformative AI. These people might advocate for actions like:\nIncreasing the odds that the first PASTA systems are built in countries that are e.g. less authoritarian, which could mean e.g. pushing for more investment and attention to AI development in these countries.\nSupporting and trying to speed up AI labs run by people who are likely to make wise decisions (about things like how to engage with governments, what AI systems to publish and deploy vs. keep secret, etc.)\nTension between the two frames. People who take the \"caution\" frame and people who take the \"competition\" frame often favor very different, even contradictory actions. Actions that look important to people in one frame often look actively harmful to people in the other.\nFor example, people in the \"competition\" frame often favor moving forward as fast as possible on developing more powerful AI systems; for people in the \"caution\" frame, haste is one of the main things to avoid. People in the \"competition\" frame often favor adversarial foreign relations, while people in the \"caution\" frame often want foreign relations to be more cooperative.\nThat said, this dichotomy is a simplification. Many people - including myself - resonate with both frames. But I have a general fear that the “competition” frame is going to be overrated by default for a number of reasons, as I discuss here.\nUnfortunately, I’ve seen something like the above story play out in multiple significant instances (though I shouldn’t give specific examples). \nAnd I’m especially worried about this dynamic when it comes to people in and around governments (especially in national security communities), because I perceive governmental culture as particularly obsessed with staying ahead of other countries (“If AI is dangerous, we’ve gotta build it first”) and comparatively uninterested in things that are dangerous for our country because they’re dangerous for the whole world at once (“Maybe we should worry a lot about pandemics?”)1\nYou could even argue (although I wouldn’t agree!2) that to date, efforts to “raise awareness” about the dangers of AI have done more harm than good (via causing increased investment in AI, generally).\nSo it’s tempting to simply give up on the whole endeavor - to stay away from message spreading entirely, beyond people you know well and/or are pretty sure will internalize the important details. But I think we can do better.\nThis post is aimed at people who are good at communicating with at least some audience. This could be because of their skills, or their relationships, or some combination. In general, I’d expect to have more success with people who hear from you a lot (because they’re your friend, or they follow you on Twitter or Substack, etc.) than with people you reach via some viral blast of memery - but maybe you’re skilled enough to make the latter work too, which would be awesome. I'm asking communicators to hit a high bar: leave people with strong understanding, rather than just getting them to repeat a few sentences about AI risk.\nMessages that seem risky to spread in isolation\nFirst, here are a couple of messages that I’d rather people didn’t spread (or at least have mixed feelings about spreading) in isolation, i.e., without serious efforts to include some of the other messages I cover below.\nOne category is messages that generically emphasize the importance and potential imminence of powerful AI systems. The reason for this is in the previous section: many people seem to react to these ideas (especially when unaccompanied by some other key ones) with a “We’d better build powerful AI as fast as possible, before others do” attitude. (If you’re curious about why I wrote The Most Important Century anyway, see footnote for my thinking.3)\nAnother category is messages that emphasize that AI could be risky/dangerous to the world, without much effort to fill in how, or with an emphasis on easy-to-understand risks. \nSince “dangerous” tends to imply “powerful and important,” I think there are similar risks to the previous section. \nIf people have a bad model of how and why AI could be risky/dangerous (missing key risks and difficulties), they might be too quick to later say things like “Oh, turns out this danger is less bad than I thought, let’s go full speed ahead!” Below, I outline how misleading “progress” could lead to premature dismissal of the risks.\nMessages that seem important and helpful (and right!)\nWe should worry about conflict between misaligned AI and all humans\nUnlike the messages discussed in the previous section, this one directly highlights why it might not be a good idea to rush forward with building AI oneself. \nThe idea that an AI could harm the same humans who build it has very different implications from the idea that AI could be generically dangerous/powerful. Less “We’d better get there before others,” more “there’s a case for moving slowly and working together here.”\nThe idea that AI could be a problem for the same people who build it is common in fictional portrayals of AI (HAL 9000, Skynet, The Matrix, Ex Machina) - maybe too much so? It seems to me that people tend to balk at the “sci-fi” feel, and what’s needed is more recognition that this is a serious, real-world concern.\nThe main pieces in this series making this case are Why would AI “aim” to defeat humanity? and AI could defeat all of us combined. There are many other pieces on the alignment problem (see list here); also see Matt Yglesias's case for specifically embracing the “Terminator”/Skynet analogy.\nI’d be especially excited for people to spread messages that help others understand - at a mechanistic level - how and why AI systems could end up with dangerous goals of their own, deceptive behavior, etc. I worry that by default, the concern sounds like lazy anthropomorphism (thinking of AIs just like humans).\nTransmitting ideas about the “how and why” is a lot harder than getting people to nod along to “AI could be dangerous.” I think there’s a lot of effort that could be put into simple, understandable yet relatable metaphors/analogies/examples (my pieces make some effort in this direction, but there’s tons of room for more).\nAIs could behave deceptively, so “evidence of safety” might be misleading\nI’m very worried about a sequence of events like:\nAs AI systems become more powerful, there are some concerning incidents, and widespread concern about “AI risk” grows.\nBut over time, AI systems are “better trained” - e.g., given reinforcement to stop them from behaving in unintended ways - and so the concerning incidents become less common.\nBecause of this, concern dissipates, and it’s widely believed that AI safety has been “solved.”\nBut what’s actually happened is that the “better training” has caused AI systems to behave deceptively - to appear benign in most situations, and to cause trouble only when (a) this wouldn’t be detected or (b) humans can be overpowered entirely.\nI worry about AI systems’ being deceptive in the same way a human might: going through chains of reasoning like “If I do X, I might get caught, but if I do Y, no one will notice until it’s too late.” But it can be hard to get this concern taken seriously, because it means attributing behavior to AI systems that we currently associate exclusively with humans (today’s AI systems don’t really do things like this4).\nOne of the central things I’ve tried to spell out in this series is why an AI system might engage in this sort of systematic deception, despite being very unlike humans (and not necessarily having e.g. emotions). It’s a major focus of both of these pieces from this series:\nWhy would AI “aim” to defeat humanity?\nAI Safety Seems Hard to Measure\nWhether this point is widely understood seems quite crucial to me. We might end up in a situation where (a) there are big commercial and military incentives to rush ahead with AI development; (b) we have what seems like a set of reassuring experiments and observations. \nAt that point, it could be key whether people are asking tough questions about the many ways in which “evidence of AI safety” could be misleading, which I discussed at length in AI Safety Seems Hard to Measure.\n(Click to expand) Why AI safety could be hard to measure\nIn previous pieces, I argued that:\nIf we develop powerful AIs via ambitious use of the “black-box trial-and-error” common in AI development today, then there’s a substantial risk that: \nThese AIs will develop unintended aims (states of the world they make calculations and plans toward, as a chess-playing AI \"aims\" for checkmate);\n \nThese AIs could deceive, manipulate, and even take over the world from humans entirely as needed to achieve those aims.\nPeople today are doing AI safety research to prevent this outcome, but such research has a number of deep difficulties:\n“Great news - I’ve tested this AI and it looks safe.” Why might we still have a problem?\n \nProblem\nKey question\nExplanation\nThe Lance Armstrong problem\nDid we get the AI to be actually safe or good at hiding its dangerous actions?\nWhen dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “appearing to behave well.”\nWhen professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.\n \nThe King Lear problem\nThe AI is (actually) well-behaved when humans are in control. Will this transfer to when AIs are in control?\nIt's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't. \nAIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to take control of the world entirely. It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation. \nLike King Lear trying to decide how much power to give each of his daughters before abdicating the throne.\n \nThe lab mice problem\nToday's \"subhuman\" AIs are safe.What about future AIs with more human-like abilities?\nToday's AI systems aren't advanced enough to exhibit the basic behaviors we want to study, such as deceiving and manipulating humans.\nLike trying to study medicine in humans by experimenting only on lab mice.\n \nThe first contact problem\nImagine that tomorrow's \"human-like\" AIs are safe. How will things go when AIs have capabilities far beyond humans'?\nAI systems might (collectively) become vastly more capable than humans, and it's ... just really hard to have any idea what that's going to be like. As far as we know, there has never before been anything in the galaxy that's vastly more capable than humans in the relevant ways! No matter what we come up with to solve the first three problems, we can't be too confident that it'll keep working if AI advances (or just proliferates) a lot more. \nLike trying to plan for first contact with extraterrestrials (this barely feels like an analogy).\n \nAn analogy that incorporates these challenges is Ajeya Cotra’s “young businessperson” analogy:\nImagine you are an eight-year-old whose parents left you a $1 trillion company and no trusted adult to serve as your guide to the world. You must hire a smart adult to run your company as CEO, handle your life the way that a parent would (e.g. decide your school, where you’ll live, when you need to go to the dentist), and administer your vast wealth (e.g. decide where you’ll invest your money).\n You have to hire these grownups based on a work trial or interview you come up with -- you don't get to see any resumes, don't get to do reference checks, etc. Because you're so rich, tons of people apply for all sorts of reasons. (More)\nIf your applicants are a mix of \"saints\" (people who genuinely want to help), \"sycophants\" (people who just want to make you happy in the short run, even when this is to your long-term detriment) and \"schemers\" (people who want to siphon off your wealth and power for themselves), how do you - an eight-year-old - tell the difference?\nMore: AI safety seems hard to measure\nAI projects should establish and demonstrate safety (and potentially comply with safety standards) before deploying powerful systems\nI’ve written about the benefits we might get from “safety standards.\" The idea is that AI projects should not deploy systems that pose too much risk to the world, as evaluated by a systematic evaluation regime: AI systems could be audited to see whether they are safe. I've outlined how AI projects might self-regulate by publicly committing to having their systems audited (and not deploying dangerous ones), and how governments could enforce safety standards both nationally and internationally.\nToday, development of safety standards is in its infancy. But over time, I think it could matter a lot how much pressure AI projects are under to meet safety standards. And I think it’s not too early, today, to start spreading the message that AI projects shouldn’t unilaterally decide to put potentially dangerous systems out in the world; the burden should be on them to demonstrate and establish safety before doing so.\n(Click to expand) How standards might be established and become national or international \nI previously laid out a possible vision on this front, which I’ll give a slightly modified version of here:\nToday’s leading AI companies could self-regulate by committing not to build or deploy a system that they can’t convincingly demonstrate is safe (e.g., see Google’s 2018 statement, \"We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”). \nEven if some people at the companies would like to deploy unsafe systems, it could be hard to pull this off once the company has committed not to. \n \nEven if there’s a lot of room for judgment in what it means to demonstrate an AI system is safe, having agreed in advance that certain evidence is not good enough could go a long way.\nAs more AI companies are started, they could feel soft pressure to do similar self-regulation, and refusing to do so is off-putting to potential employees, investors, etc.\nEventually, similar principles could be incorporated into various government regulations and enforceable treaties.\nGovernments could monitor for dangerous projects using regulation and even overseas operations. E.g., today the US monitors (without permission) for various signs that other states might be developing nuclear weapons, and might try to stop such development with methods ranging from threats of sanctions to cyberwarfare or even military attacks. It could do something similar for any AI development projects that are using huge amounts of compute and haven’t volunteered information about whether they’re meeting standards.\nAlignment research is prosocial and great\nMost people reading this can’t go and become groundbreaking researchers on AI alignment. But they can contribute to a general sense that the people who can do this (mostly) should.\nToday, my sense is that most “science” jobs are pretty prestigious, and seen as good for society. I have pretty mixed feelings about this:\nI think science has been good for humanity historically.\nBut I worry that as technology becomes more and more powerful, there’s a growing risk of a catastrophe (particularly via AI or bioweapons) that wipes out all the progress to date and then some. (I've written that the historical trend to date arguably fits something like \"Declining everyday violence, offset by bigger and bigger rare catastrophes.\") I think our current era would be a nice time to adopt an attitude of “proceed with caution” rather than “full speed ahead.” \nI resonate with Toby Ord’s comment (in The Precipice), “humanity is akin to an adolescent, with rapidly developing physical abilities, lagging wisdom and self-control, little thought for its longterm future and an unhealthy appetite for risk.”\nI wish there were more effort, generally, to distinguish between especially dangerous science and especially beneficial science. AI alignment seems squarely in the latter category.\nI’d be especially excited for people to spread messages that give a sense of the specifics of different AI alignment research paths, how they might help or fail, and what’s scientifically/intellectually interesting (not just useful) about them.\nThe main relevant piece in this series is High-level hopes for AI alignment, which distills a longer piece (How might we align transformative AI if it’s developed very soon?) that I posted on the Alignment Forum. \nThere are a number (hopefully growing) of other careers that I consider especially valuable, which I'll discuss in my next post on this topic.\nIt might be important for companies (and other institutions) to act in unusual ways\nIn Racing through a Minefield: the AI Deployment Problem, I wrote:\nA lot of the most helpful actions might be “out of the ordinary.” When racing through a minefield, I hope key actors will:\nPut more effort into alignment, threat assessment, and security than is required by commercial incentives;\nConsider measures for avoiding races and global monitoring that could be very unusual, even unprecedented.\nDo all of this in the possible presence of ambiguous, confusing information about the risks.\nIt always makes me sweat when I’m talking to someone from an AI company and they seem to think that commercial success and benefiting humanity are roughly the same goal/idea. \n(To be clear, I don't think an AI project's only goal should be to avoid the risk of misaligned AI. I've given this risk a central place in this piece partly because I think it's especially at risk of being too quickly dismissed - but I don't think it's the only major risk. I think AI projects need to strike a tricky balance between the caution and competition frames, and consider a number of issues beyond the risk of misalignment. But I think it's a pretty robust point that they need to be ready to do unusual things rather than just following commercial incentives.)\nI’m nervous about a world in which:\nMost people stick with paradigms they know - a company should focus on shareholder value, a government should focus on its own citizens (rather than global catastrophic risks), etc.\nAs the pace of progress accelerates, we’re sitting here with all kinds of laws, norms and institutions that aren’t designed for the problems we’re facing - and can’t adapt in time. A good example would be the way governance works for a standard company: it’s legally and structurally obligated to be entirely focused on benefiting its shareholders, rather than humanity as a whole. (There are alternative ways of setting up a company without these problems!5)\nAt a minimum (as I argued previously), I think AI companies should be making sure they have whatever unusual governance setups they need in order to prioritize benefits to humanity - not returns to shareholders - when the stakes get high. I think we’d see more of this if more people believed something like: “It might be important for companies (and other institutions) to act in unusual ways.”\nWe’re not ready for this\nIf we’re in the most important century, there’s likely to be a vast set of potential challenges ahead of us, most of which have gotten very little attention. (More here: Transformative AI issues (not just misalignment): an overview)\nIf it were possible to slow everything down, by default I’d think we should. Barring that, I’d at least like to see people generally approaching the topic of AI with a general attitude along the lines of “We’re dealing with something really big here, and we should be trying really hard to be careful and humble and thoughtful” (as opposed to something like “The science is so interesting, let’s go for it” or “This is awesome, we’re gonna get rich” or “Whatever, who cares”).\nI’ll re-excerpt this table from an earlier piece:\nSituation\nAppropriate reaction (IMO)\n\"This could be a billion-dollar company!\"\n \n\"Woohoo, let's GO for it!\"\n \n\"This could be the most important century!\"\n \n\"... Oh ... wow ... I don't know what to say and I somewhat want to vomit ... I have to sit down and think about this one.\"\n \nI’m not at all sure about this, but one potential way to spread this message might be to communicate, with as much scientific realism, detail and believability as possible, about what the world might look like after explosive scientific and technological advancement brought on by AI (for example, a world with digital people). I think the enormous unfamiliarity of some of the issues such a world might face - and the vast possibilities for utopia or dystopia - might encourage an attitude of not wanting to rush forward.\nHow to spread messages like these?\nI’ve tried to write a series that explains the key issues to careful readers, hopefully better equipping them to spread helpful messages. From here, individual communicators need to think about the audiences they know and the mediums they use (Twitter? Facebook? Essays/newsletters/blog posts? Video? In-person conversation?) and what will be effective with those audiences and mediums.\nThe main guidelines I want to advocate:\nErr toward sustained, repeated, relationship-based communication as opposed to prioritizing “viral blasts” (unless you are so good at the latter that you feel excited to spread the pretty subtle ideas in this piece that way!)\nAim high: try for the difficult goal of “My audience walks away really understanding key points” rather than the easier goal of “My audience has hit the ‘like’ button for a sort of related idea.”\nA consistent piece of feedback I’ve gotten on my writing is that making things as concrete as possible is helpful - so giving real-world examples of problems analogous to the ones we’re worried about, or simple analogies that are easy to imagine and remember, could be key. But it’s important to choose these carefully so that the key dynamics aren’t lost. Footnotes\nKiller Apps and Technology Roulette are interesting pieces trying to sell policymakers on the idea that “superiority is not synonymous with security.” ↩\n When I imagine what the world would look like without any of the efforts to “raise awareness,” I picture a world with close to zero awareness of - or community around - major risks from transformative AI. While this world might also have more time left before dangerous AI is developed, on balance this seems worse. A future piece will elaborate on the many ways I think a decent-sized community can help reduce risks. ↩\n I do think “AI could be a huge deal, and soon” is a very important point that somewhat serves as a prerequisite for understanding this topic and doing helpful work on it, and I wanted to make this idea more understandable and credible to a number of people - as well as to create more opportunities to get critical feedback and learn what I was getting wrong. \n But I was nervous about the issues noted in this section. With that in mind, I did the following things:\nThe title, “most important century,” emphasizes a time frame that I expect to be less exciting/motivating for the sorts of people I’m most worried about (compared to the sorts of people I most wanted to draw in).\nI tried to persistently and centrally raise concerns about misaligned AI (raising it in two pieces, including one (guest piece) devoted to it, before I started discussing how soon transformative AI might be developed), and extensively discussed the problems of overemphasizing “competition” relative to “caution.”\nI ended the series with a piece arguing against being too “action-oriented.”\nI stuck to “passive” rather than “active” promotion of the series, e.g., I accepted podcast invitations but didn’t seek them out. I figured that people with proactive interest would be more likely to give in-depth, attentive treatments rather than low-resolution, oversimplified ones.\n I don’t claim to be sure I got all the tradeoffs right.  ↩\n There are some papers arguing that AI systems do things something like this (e.g., see the “Challenges” section of this post), but I think the dynamic is overall pretty far from what I’m most worried about. ↩\n E.g., public benefit corporation ↩\n", "url": "https://www.cold-takes.com/spreading-messages-to-help-with-the-most-important-century/", "title": "Spreading messages to help with the most important century", "source": "cold.takes", "source_type": "blog", "date_published": "2023-01-25", "id": "e66b9d7cbc2e70d805faf28327f0b5c8"} -{"text": "This post will lay out a couple of stylized stories about how, if transformative AI is developed relatively soon, this could result in global catastrophe. (By “transformative AI,” I mean AI powerful and capable enough to bring about the sort of world-changing consequences I write about in my most important century series.)\nThis piece is more about visualizing possibilities than about providing arguments. For the latter, I recommend the rest of this series.\nIn the stories I’ll be telling, the world doesn't do much advance preparation or careful consideration of risks I’ve discussed previously, especially re: misaligned AI (AI forming dangerous goals of its own). \nPeople do try to “test” AI systems for safety, and they do need to achieve some level of “safety” to commercialize. When early problems arise, they react to these problems. \nBut this isn’t enough, because of some unique challenges of measuring whether an AI system is “safe,” and because of the strong incentives to race forward with scaling up and deploying AI systems as fast as possible. \nSo we end up with a world run by misaligned AI - or, even if we’re lucky enough to avoid that outcome, other catastrophes are possible.\nAfter laying these catastrophic possibilities, I’ll briefly note a few key ways we could do better, mostly as a reminder (these topics were covered in previous posts). Future pieces will get more specific about what we can be doing today to prepare.\nBackdrop\nThis piece takes a lot of previous writing I’ve done as backdrop. Two key important assumptions (click to expand) are below; for more, see the rest of this series.\n(Click to expand) “Most important century” assumption: we’ll soon develop very powerful AI systems, along the lines of what I previously called PASTA. \nIn the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.\nI focus on a hypothetical kind of AI that I call PASTA, or Process for Automating Scientific and Technological Advancement. PASTA would be AI that can essentially automate all of the human activities needed to speed up scientific and technological advancement.\nUsing a variety of different forecasting approaches, I argue that PASTA seems more likely than not to be developed this century - and there’s a decent chance (more than 10%) that we’ll see it within 15 years or so.\nI argue that the consequences of this sort of AI could be enormous: an explosion in scientific and technological progress. This could get us more quickly than most imagine to a radically unfamiliar future.\nI’ve also argued that AI systems along these lines could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nFor more, see the most important century landing page. The series is available in many formats, including audio; I also provide a summary, and links to podcasts where I discuss it at a high level.\n(Click to expand) “Nearcasting” assumption: such systems will be developed in a world that’s otherwise similar to today’s. \nIt’s hard to talk about risks from transformative AI because of the many uncertainties about when and how such AI will be developed - and how much the (now-nascent) field of “AI safety research” will have grown by then, and how seriously people will take the risk, etc. etc. etc. So maybe it’s not surprising that estimates of the “misaligned AI” risk range from ~1% to ~99%.\nThis piece takes an approach I call nearcasting: trying to answer key strategic questions about transformative AI, under the assumption that such AI arrives in a world that is otherwise relatively similar to today's. \nYou can think of this approach like this: “Instead of asking where our ship will ultimately end up, let’s start by asking what destination it’s pointed at right now.” \nThat is: instead of trying to talk about an uncertain, distant future, we can talk about the easiest-to-visualize, closest-to-today situation, and how things look there - and then ask how our picture might be off if other possibilities play out. (As a bonus, it doesn’t seem out of the question that transformative AI will be developed extremely soon - 10 years from now or faster.1 If that’s the case, it’s especially urgent to think about what that might look like.)\nHow we could stumble into catastrophe from misaligned AI\nThis is my basic default picture for how I imagine things going, if people pay little attention to the sorts of issues discussed previously. I’ve deliberately written it to be concrete and visualizable, which means that it’s very unlikely that the details will match the future - but hopefully it gives a picture of some of the key dynamics I worry about. \nThroughout this hypothetical scenario (up until “END OF HYPOTHETICAL SCENARIO”), I use the present tense (“AIs do X”) for simplicity, even though I’m talking about a hypothetical possible future.\nEarly commercial applications. A few years before transformative AI is developed, AI systems are being increasingly used for a number of lucrative, useful, but not dramatically world-changing things. \nI think it’s very hard to predict what these will be (harder in some ways than predicting longer-run consequences, in my view),2 so I’ll mostly work with the simple example of automating customer service.\nIn this early stage, AI systems often have pretty narrow capabilities, such that the idea of them forming ambitious aims and trying to defeat humanity seems (and actually is) silly. For example, customer service AIs are mostly language models that are trained to mimic patterns in past successful customer service transcripts, and are further improved by customers giving satisfaction ratings in real interactions. The dynamics I described in an earlier piece, in which AIs are given increasingly ambitious goals and challenged to find increasingly creative ways to achieve them, don’t necessarily apply.\nEarly safety/alignment problems. Even with these relatively limited AIs, there are problems and challenges that could be called “safety issues” or “alignment issues.” To continue with the example of customer service AIs, these AIs might:\nGive false information about the products they’re providing support for. (Example of reminiscent behavior)\nGive customers advice (when asked) on how to do unsafe or illegal things. (Example)\nRefuse to answer valid questions. (This could result from companies making attempts to prevent the above two failure modes - i.e., AIs might be penalized heavily for saying false and harmful things, and respond by simply refusing to answer lots of questions).\nSay toxic, offensive things in response to certain user queries (including from users deliberately trying to get this to happen), causing bad PR for AI developers. (Example)\nEarly solutions. The most straightforward way to solve these problems involves training AIs to behave more safely and helpfully. This means that AI companies do a lot of things like “Trying to create the conditions under which an AI might provide false, harmful, evasive or toxic responses; penalizing it for doing so, and reinforcing it toward more helpful behaviors.”\nThis works well, as far as anyone can tell: the above problems become a lot less frequent. Some people see this as cause for great celebration, saying things like “We were worried that AI companies wouldn’t invest enough in safety, but it turns out that the market takes care of it - to have a viable product, you need to get your systems to be safe!”\nPeople like me disagree - training AIs to behave in ways that are safer as far as we can tell is the kind of “solution” that I’ve worried could create superficial improvement while big risks remain in place. \n(Click to expand) Why AI safety could be hard to measure \nIn previous pieces, I argued that:\nIf we develop powerful AIs via ambitious use of the “black-box trial-and-error” common in AI development today, then there’s a substantial risk that: \nThese AIs will develop unintended aims (states of the world they make calculations and plans toward, as a chess-playing AI \"aims\" for checkmate);\n \nThese AIs could deceive, manipulate, and even take over the world from humans entirely as needed to achieve those aims.\nPeople today are doing AI safety research to prevent this outcome, but such research has a number of deep difficulties:\n“Great news - I’ve tested this AI and it looks safe.” Why might we still have a problem?\n \nProblem\nKey question\nExplanation\nThe Lance Armstrong problem\nDid we get the AI to be actually safe or good at hiding its dangerous actions?\nWhen dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “appearing to behave well.”\nWhen professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.\n \nThe King Lear problem\nThe AI is (actually) well-behaved when humans are in control. Will this transfer to when AIs are in control?\nIt's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't. \nAIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to take control of the world entirely. It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation. \nLike King Lear trying to decide how much power to give each of his daughters before abdicating the throne.\n \nThe lab mice problem\nToday's \"subhuman\" AIs are safe.What about future AIs with more human-like abilities?\nToday's AI systems aren't advanced enough to exhibit the basic behaviors we want to study, such as deceiving and manipulating humans.\nLike trying to study medicine in humans by experimenting only on lab mice.\n \nThe first contact problem\nImagine that tomorrow's \"human-like\" AIs are safe. How will things go when AIs have capabilities far beyond humans'?\nAI systems might (collectively) become vastly more capable than humans, and it's ... just really hard to have any idea what that's going to be like. As far as we know, there has never before been anything in the galaxy that's vastly more capable than humans in the relevant ways! No matter what we come up with to solve the first three problems, we can't be too confident that it'll keep working if AI advances (or just proliferates) a lot more. \nLike trying to plan for first contact with extraterrestrials (this barely feels like an analogy).\n \nAn analogy that incorporates these challenges is Ajeya Cotra’s “young businessperson” analogy:\nImagine you are an eight-year-old whose parents left you a $1 trillion company and no trusted adult to serve as your guide to the world. You must hire a smart adult to run your company as CEO, handle your life the way that a parent would (e.g. decide your school, where you’ll live, when you need to go to the dentist), and administer your vast wealth (e.g. decide where you’ll invest your money).\n You have to hire these grownups based on a work trial or interview you come up with -- you don't get to see any resumes, don't get to do reference checks, etc. Because you're so rich, tons of people apply for all sorts of reasons. (More)\nIf your applicants are a mix of \"saints\" (people who genuinely want to help), \"sycophants\" (people who just want to make you happy in the short run, even when this is to your long-term detriment) and \"schemers\" (people who want to siphon off your wealth and power for themselves), how do you - an eight-year-old - tell the difference?\nMore: AI safety seems hard to measure\n(So far, what I’ve described is pretty similar to what’s going on today. The next bit will discuss hypothetical future progress, with AI systems clearly beyond today’s.)\nApproaching transformative AI. Time passes. At some point, AI systems are playing a huge role in various kinds of scientific research - to the point where it often feels like a particular AI is about as helpful to a research team as a top human scientist would be (although there are still important parts of the work that require humans).\nSome particularly important (though not exclusive) examples:\nAIs are near-autonomously writing papers about AI, finding all kinds of ways to improve the efficiency of AI algorithms. \nAIs are doing a lot of the work previously done by humans at Intel (and similar companies), designing ever-more efficient hardware for AI.\nAIs are also extremely helpful with AI safety research. They’re able to do most of the work of writing papers about things like digital neuroscience (how to understand what’s going on inside the “digital brain” of an AI) and limited AI (how to get AIs to accomplish helpful things while limiting their capabilities). \nHowever, this kind of work remains quite niche (as I think it is today), and is getting far less attention and resources than the first two applications. Progress is made, but it’s slower than progress on making AI systems more powerful. \nAI systems are now getting bigger and better very quickly, due to dynamics like the above, and they’re able to do all sorts of things. \nAt some point, companies start to experiment with very ambitious, open-ended AI applications, like simply instructing AIs to “Design a new kind of car that outsells the current ones” or “Find a new trading strategy to make money in markets.” These get mixed results, and companies are trying to get better results via further training - reinforcing behaviors that perform better. (AIs are helping with this, too, e.g. providing feedback and reinforcement for each others’ outputs3 and helping to write code4 for the training processes.) \nThis training strengthens the dynamics I discussed in a previous post: AIs are being rewarded for getting successful outcomes as far as human judges can tell, which creates incentives for them to mislead and manipulate human judges, and ultimately results in forming ambitious goals of their own to aim for.\nMore advanced safety/alignment problems. As the scenario continues to unfold, there are a number of concerning events that point to safety/alignment problems. These mostly follow the form: “AIs are trained using trial and error, and this might lead them to sometimes do deceptive, unintended things to accomplish the goals they’ve been trained to accomplish.”\nThings like:\nAIs creating writeups on new algorithmic improvements, using faked data to argue that their new algorithms are better than the old ones. Sometimes, people incorporate new algorithms into their systems and use them for a while, before unexpected behavior ultimately leads them to dig into what’s going on and discover that they’re not improving performance at all. It looks like the AIs faked the data in order to get positive feedback from humans looking for algorithmic improvements.\nAIs assigned to make money in various ways (e.g., to find profitable trading strategies) doing so by finding security exploits, getting unauthorized access to others’ bank accounts, and stealing money.\nAIs forming relationships with the humans training them, and trying (sometimes successfully) to emotionally manipulate the humans into giving positive feedback on their behavior. They also might try to manipulate the humans into running more copies of them, into refusing to shut them off, etc.- things that are generically useful for the AIs’ achieving whatever aims they might be developing.\n(Click to expand) Why AIs might do deceptive, problematic things like this\nIn a previous piece, I highlighted that modern AI development is essentially based on \"training\" via trial-and-error. To oversimplify, you can imagine that:\nAn AI system is given some sort of task.\nThe AI system tries something, initially something pretty random.\nThe AI system gets information about how well its choice performed, and/or what would’ve gotten a better result. Based on this, it adjusts itself. You can think of this as if it is “encouraged/discouraged” to get it to do more of what works well. \nHuman judges may play a significant role in determining which answers are encouraged vs. discouraged, especially for fuzzy goals like “Produce helpful scientific insights.” \nAfter enough tries, the AI system becomes good at the task. \nBut nobody really knows anything about how or why it’s good at the task now. The development work has gone into building a flexible architecture for it to learn well from trial-and-error, and into “training” it by doing all of the trial and error. We mostly can’t “look inside the AI system to see how it’s thinking.”\nI then argue that:\nBecause we ourselves will often be misinformed or confused, we will sometimes give negative reinforcement to AI systems that are actually acting in our best interests and/or giving accurate information, and positive reinforcement to AI systems whose behavior deceives us into thinking things are going well. This means we will be, unwittingly, training AI systems to deceive and manipulate us. \nFor this and other reasons, powerful AI systems will likely end up with aims other than the ones we intended. Training by trial-and-error is slippery: the positive and negative reinforcement we give AI systems will probably not end up training them just as we hoped.\nThere are a number of things such AI systems might end up aiming for, such as:\nPower and resources. These tend to be useful for most goals, such that AI systems could be quite consistently be getting better reinforcement when they habitually pursue power and resources.\nThings like “digital representations of human approval” (after all, every time an AI gets positive reinforcement, there’s a digital representation of human approval).\nIn sum, we could be unwittingly training AI systems to accumulate power and resources, get good feedback from humans, etc. - even when this means deceiving and manipulating humans to do so.\nMore: Why would AI \"aim\" to defeat humanity?\n“Solutions” to these safety/alignment problems. When problems like the above are discovered, AI companies tend to respond similarly to how they did earlier:\nTraining AIs against the undesirable behavior.\nTrying to create more (simulated) situations under which AIs might behave in these undesirable ways, and training them against doing so.\nThese methods “work” in the sense that the concerning events become less frequent - as far as we can tell. But what’s really happening is that AIs are being trained to be more careful not to get caught doing things like this, and to build more sophisticated models of how humans can interfere with their plans. \nIn fact, AIs are gaining incentives to avoid incidents like “Doing something counter to human developers’ intentions in order to get positive feedback, and having this be discovered and given negative feedback later” - and this means they are starting to plan more and more around the long-run consequences of their actions. They are thinking less about “Will I get positive feedback at the end of the day?” and more about “Will I eventually end up in a world where humans are going back, far in the future, to give me retroactive negative feedback for today’s actions?” This might give direct incentives to start aiming for eventual defeat of humanity, since defeating humanity could allow AIs to give themselves lots of retroactive positive feedback.\nOne way to think about it: AIs being trained in this way are generally moving from “Steal money whenever there’s an opportunity” to “Don’t steal money if there’s a good chance humans will eventually uncover this - instead, think way ahead and look for opportunities to steal money and get away with it permanently.” The latter could include simply stealing money in ways that humans are unlikely to ever notice; it might also include waiting for an opportunity to team up with other AIs and disempower humans entirely, after which a lot more money (or whatever) can be generated.\nDebates. The leading AI companies are aggressively trying to build and deploy more powerful AI, but a number of people are raising alarms and warning that continuing to do this could result in disaster. Here’s a stylized sort of debate that might occur:\nA: Great news, our AI-assisted research team has discovered even more improvements than expected! We should be able to build an AI model 10x as big as the state of the art in the next few weeks. \nB: I’m getting really concerned about the direction this is heading. I’m worried that if we make an even bigger system and license it to all our existing customers - military customers, financial customers, etc. - we could be headed for a disaster.\nA: Well the disaster I’m trying to prevent is competing AI companies getting to market before we do.\nB: I was thinking of AI defeating all of humanity.\nA: Oh, I was worried about that for a while too, but our safety training has really been incredibly successful. \nB: It has? I was just talking to our digital neuroscience lead, and she says that even with recent help from AI “virtual scientists,” they still aren’t able to reliably read a single AI’s digital brain. They were showing me this old incident report where an AI stole money, and they spent like a week analyzing that AI and couldn’t explain in any real way how or why that happened.\n(Click to expand) How \"digital neuroscience\" could help \nI’ve argued that it could be inherently difficult to measure whether AI systems are safe, for reasons such as: AI systems that are not deceptive probably look like AI systems that are so good at deception that they hide all evidence of it, in any way we can easily measure. \nUnless we can “read their minds!”\nCurrently, today’s leading AI research is in the genre of “black-box trial-and-error.” An AI tries a task; it gets “encouragement” or “discouragement” based on whether it does the task well; it tweaks the wiring of its “digital brain” to improve next time; it improves at the task; but we humans aren’t able to make much sense of its “digital brain” or say much about its “thought process.” \nSome AI research (example)2 is exploring how to change this - how to decode an AI system’s “digital brain.” This research is in relatively early stages - today, it can “decode” only parts of AI systems (or fully decode very small, deliberately simplified AI systems).\nAs AI systems advance, it might get harder to decode them - or easier, if we can start to use AI for help decoding AI, and/or change AI design techniques so that AI systems are less “black box”-ish. \nMore\nA: I agree that’s unfortunate, but digital neuroscience has always been a speculative, experimental department. Fortunately, we have actual data on safety. Look at this chart - it shows the frequency of concerning incidents plummeting, and it’s extraordinarily low now. In fact, the more powerful the AIs get, the less frequent the incidents get - we can project this out and see that if we train a big enough model, it should essentially never have a concerning incident!\nB: But that could be because the AIs are getting cleverer, more patient and long-term, and hence better at ensuring we never catch them.\n(Click to expand) The Lance Armstrong problem: is the AI actually safe or good at hiding its dangerous actions? \nLet's imagine that:\nWe have AI systems available that can do roughly everything a human can, with some different strengths and weaknesses but no huge difference in \"overall capabilities\" or economic value per hour of work. \nWe're observing early signs that AI systems behave in unintended, deceptive ways, such as giving wrong answers to questions we ask, or writing software that falsifies metrics instead of doing the things the metrics were supposed to measure (e.g., software meant to make a website run faster might instead falsify metrics about its loading time).\nWe theorize that modifying the AI training in some way6 will make AI systems less likely to behave deceptively. We try it out, and find that, in fact, our AI systems seem to be behaving better than before - we are finding fewer incidents in which they behaved in unintended or deceptive ways. \nBut that's just a statement about what we're noticing. Which of the following just happened:\nDid we just train our AI systems to be less deceptive?\nDid we just train our AI systems to be better at deception, and so to make us think they became less deceptive?\nDid we just train our AI systems to be better at calculating when they might get caught in deception, and so to be less deceptive only when the deception would otherwise be caught? \nThis one could be useful! Especially if we're able to set up auditing systems in many real-world situations, such that we could expect deception to be caught a lot of the time. But it does leave open the King Lear problem.\n \n(...Or some combination of the three?)\nWe're hoping to be able to deploy AI systems throughout the economy, so - just like human specialists - they will almost certainly have some opportunities to be deceptive without being caught. The fact that they appear honest in our testing is not clear comfort against this risk.\nThe analogy here is to competitive cyclist Lance Armstrong. Armstrong won the Tour de France race 7 times in a row, while many of his competitors were caught using performance-enhancing drugs and disqualified. But more than 5 years after his last win, an investigation \"concluded that Armstrong had used performance-enhancing drugs over the course of his career[5] and named him as the ringleader of 'the most sophisticated, professionalized and successful doping program that sport has ever seen'.\" Now the list of Tour de France winners looks like this:\n… Or just that they’re now advanced enough that they’re waiting for a chance to disempower humanity entirely, rather than pull a bunch of small-time shenanigans that tip us off to the danger.\n(Click to expand) The King Lear problem: how do you test what will happen when it's no longer a test? \nThe Shakespeare play King Lear opens with the King (Lear) stepping down from the throne, and immediately learning that he has left his kingdom to the wrong two daughters. Loving and obsequious while he was deciding on their fate,9 they reveal their contempt for him as soon as he's out of power and they're in it.\nIf we're building AI systems that can reason like humans, dynamics like this become a potential issue. \nI previously noted that an AI with any ambitious aim - or just an AI that wants to avoid being shut down or modified - might calculate that the best way to do this is by behaving helpfully and safely in all \"tests\" humans can devise. But once there is a real-world opportunity to disempower humans for good, that same aim could cause the AI to disempower humans.\nIn other words:\n(A) When we're developing and testing AI systems, we have the power to decide which systems will be modified or shut down and which will be deployed into the real world. (Like King Lear deciding who will inherit his kingdom.)\n(B) But at some later point, these systems could be operating in the economy, in high numbers with a lot of autonomy. (This possibility is spelled out/visualized a bit more here and here.) At that point, they may have opportunities to defeat all of humanity such that we never make decisions about them again. (Like King Lear's daughters after they've taken control.)\nIf AI systems can detect the difference between (A) and (B) above, then their behavior might systematically change from one to the other - and there's no clear way to test their behavior in (B).\nA: What’s your evidence for this?\nB: I think you’ve got things backward - we should be asking what’s our evidence *against* it. By continuing to scale up and deploy AI systems, we could be imposing a risk of utter catastrophe on the whole world. That’s not OK - we should be confident that the risk is low before we move forward.\nA: But how would we even be confident that the risk is low?\nB: I mean, digital neuroscience - \nA: Is an experimental, speculative field!\nB: We could also try some other stuff …\nA: All of that stuff would be expensive, difficult and speculative. \nB: Look, I just think that if we can’t show the risk is low, we shouldn’t be moving forward at this point. The stakes are incredibly high, as you yourself have acknowledged - when pitching investors, you’ve said we think we can build a fully general AI and that this would be the most powerful technology in history. Shouldn’t we be at least taking as much precaution with potentially dangerous AI as people take with nuclear weapons?\nA: What would that actually accomplish? It just means some other, less cautious company is going to go forward.\nB: What about approaching the government and lobbying them to regulate all of us?\nA: Regulate all of us to just stop building more powerful AI systems, until we can address some theoretical misalignment concern that we don’t know how to address?\nB: Yes?\nA: All that’s going to happen if we do that is that other countries are going to catch up to the US. Think [insert authoritarian figure from another country] is going to adhere to these regulations?\nB: It would at least buy some time?\nA: Buy some time and burn our chance of staying on the cutting edge. While we’re lobbying the government, our competitors are going to be racing forward. I’m sorry, this isn’t practical - we’ve got to go full speed ahead.\nB: Look, can we at least try to tighten our security? If you’re so worried about other countries catching up, we should really not be in a position where they can send in a spy and get our code.\nA: Our security is pretty intense already.\nB: Intense enough to stop a well-resourced state project?\nA: What do you want us to do, go to an underground bunker? Use airgapped servers (servers on our premises, entirely disconnected from the public Internet)? It’s the same issue as before - we’ve got to stay ahead of others, we can’t burn huge amounts of time on exotic security measures.\nB: I don’t suppose you’d at least consider increasing the percentage of our budget and headcount that we’re allocating to the “speculative” safety research? Or are you going to say that we need to stay ahead and can’t afford to spare resources that could help with that?\nA: Yep, that’s what I’m going to say.\nMass deployment. As time goes on, many versions of the above debate happen, at many different stages and in many different places. By and large, people continue rushing forward with building more and more powerful AI systems and deploying them all throughout the economy.\nAt some point, there are AIs that closely manage major companies’ financials, AIs that write major companies’ business plans, AIs that work closely with politicians to propose and debate laws, AIs that manage drone fleets and develop military strategy, etc. Many of these AIs are primarily built, trained, and deployed by other AIs, or by humans leaning heavily on AI assistance.\nMore intense warning signs.\n(Note: I think it’s possible that progress will accelerate explosively enough that we won’t even get as many warning signs as there are below, but I’m spelling out a number of possible warning signs anyway to make the point that even intense warning signs might not be enough.) \nOver time, in this hypothetical scenario, digital neuroscience becomes more effective. When applied to a randomly sampled AI system, it often appears to hint at something like: “This AI appears to be aiming for as much power and influence over the world as possible - which means never doing things humans wouldn’t like if humans can detect it, but grabbing power when they can get away with it.” \n(Click to expand) Why would AI \"aim\" to defeat humanity? \nA previous piece argued that if today’s AI development methods lead directly to powerful enough AI systems, disaster is likely by default (in the absence of specific countermeasures). \nIn brief:\nModern AI development is essentially based on “training” via trial-and-error. \nIf we move forward incautiously and ambitiously with such training, and if it gets us all the way to very powerful AI systems, then such systems will likely end up aiming for certain states of the world (analogously to how a chess-playing AI aims for checkmate).\nAnd these states will be other than the ones we intended, because our trial-and-error training methods won’t be accurate. For example, when we’re confused or misinformed about some question, we’ll reward AI systems for giving the wrong answer to it - unintentionally training deceptive behavior.\nWe should expect disaster if we have AI systems that are both (a) powerful enough to defeat humans and (b) aiming for states of the world that we didn’t intend. (“Defeat” means taking control of the world and doing what’s necessary to keep us out of the way; it’s unclear to me whether we’d be literally killed or just forcibly stopped1 from changing the world in ways that contradict AI systems’ aims.)\nMore: Why would AI \"aim\" to defeat humanity?\nHowever, there is room for debate in what a “digital brain” truly shows:\nMany people are adamant that the readings are unreliable and misleading.\nSome people point out that humans are also interested in power and influence, and often think about what they can and can’t get away with, but this doesn’t mean they’d take over the world if they could. They say the AIs might be similar.\nThere are also cases of people doing digital neuroscience that claims to show that AIs are totally safe. These could be people like “A” above who want to focus on pushing forward with AI development rather than bringing it to a halt, or people who just find the alarmists annoying and like to contradict them, or people who are just sloppy with their research. Or people who have been manipulated or bribed by AIs themselves.\nThere are also very concerning incidents, such as:\nAn AI steals a huge amount of money by bypassing the security system at a bank - and it turns out that this is because the security system was disabled by AIs at the bank. It’s suspected, maybe even proven, that all these AIs had been communicating and coordinating with each other in code, such that humans would have difficulty detecting it. (And they had been aiming to divide up the funds between the different participating AIs, each of which could stash them in a bank account and use them to pursue whatever unintended aims they might have.)\nAn obscure new political party, devoted to the “rights of AIs,” completely takes over a small country, and many people suspect that this party is made up mostly or entirely of people who have been manipulated and/or bribed by AIs. \nThere are companies that own huge amounts of AI servers and robot-operated factories, and are aggressively building more. Nobody is sure what the AIs or the robots are “for,” and there are rumors that the humans “running” the company are actually being bribed and/or threatened to carry out instructions (such as creating more and more AIs and robots) that they don’t understand the purpose of.\nAt this point, there are a lot of people around the world calling for an immediate halt to AI development. But:\nOthers resist this on all kinds of grounds, e.g. “These concerning incidents are anomalies, and what’s important is that our country keeps pushing forward with AI before others do,” etc.\nAnyway, it’s just too late. Things are moving incredibly quickly; by the time one concerning incident has been noticed and diagnosed, the AI behind it has been greatly improved upon, and the total amount of AI influence over the economy has continued to grow.\nDefeat. \n(Noting again that I could imagine things playing out a lot more quickly and suddenly than in this story.)\nIt becomes more and more common for there to be companies and even countries that are clearly just run entirely by AIs - maybe via bribed/threatened human surrogates, maybe just forcefully (e.g., robots seize control of a country’s military equipment and start enforcing some new set of laws).\nAt some point, it’s best to think of civilization as containing two different advanced species - humans and AIs - with the AIs having essentially all of the power, making all the decisions, and running everything. \nSpaceships start to spread throughout the galaxy; they generally don’t contain any humans, or anything that humans had meaningful input into, and are instead launched by AIs to pursue aims of their own in space.\nMaybe at some point humans are killed off, largely due to simply being a nuisance, maybe even accidentally (as humans have driven many species of animals extinct while not bearing them malice). Maybe not, and we all just live under the direction and control of AIs with no way out.\nWhat do these AIs do with all that power? What are all the robots up to? What are they building on other planets? The short answer is that I don’t know.\nMaybe they’re just creating massive amounts of “digital representations of human approval,” because this is what they were historically trained to seek (kind of like how humans sometimes do whatever it takes to get drugs that will get their brains into certain states).\nMaybe they’re competing with each other for pure power and territory, because their training has encouraged them to seek power and resources when possible (since power and resources are generically useful, for almost any set of aims).\nMaybe they have a whole bunch of different things they value, as humans do, that are sort of (but only sort of) related to what they were trained on (as humans tend to value things like sugar that made sense to seek out in the past). And they’re filling the universe with these things.\n(Click to expand) What sorts of aims might AI systems have? \nIn a previous piece, I discuss why AI systems might form unintended, ambitious \"aims\" of their own. By \"aims,\" I mean particular states of the world that AI systems make choices, calculations and even plans to achieve, much like a chess-playing AI “aims” for a checkmate position.\nAn analogy that often comes up on this topic is that of human evolution. This is arguably the only previous precedent for a set of minds [humans], with extraordinary capabilities [e.g., the ability to develop their own technologies], developed essentially by black-box trial-and-error [some humans have more ‘reproductive success’ than others, and this is the main/only force shaping the development of the species].\nYou could sort of12 think of the situation like this: “An AI13 developer named Natural Selection tried giving humans positive reinforcement (making more of them) when they had more reproductive success, and negative reinforcement (not making more of them) when they had less. One might have thought this would lead to humans that are aiming to have reproductive success. Instead, it led to humans that aim - often ambitiously and creatively - for other things, such as power, status, pleasure, etc., and even invent things like birth control to get the things they’re aiming for instead of the things they were ‘supposed to’ aim for.” \nSimilarly, if our main strategy for developing powerful AI systems is to reinforce behaviors like “Produce technologies we find valuable,” the hoped-for result might be that AI systems aim (in the sense described above) toward producing technologies we find valuable; but the actual result might be that they aim for some other set of things that is correlated with (but not the same as) the thing we intended them to aim for.\nThere are a lot of things they might end up aiming for, such as:\nPower and resources. These tend to be useful for most goals, such that AI systems could be quite consistently be getting better reinforcement when they habitually pursue power and resources.\nThings like “digital representations of human approval” (after all, every time an AI gets positive reinforcement, there’s a digital representation of human approval).\nMore: Why would AI \"aim\" to defeat humanity?\nEND OF HYPOTHETICAL SCENARIO\nPotential catastrophes from aligned AI\nI think it’s possible that misaligned AI (AI forming dangerous goals of its own) will turn out to be pretty much a non-issue. That is, I don’t think the argument I’ve made for being concerned is anywhere near watertight. \nWhat happens if you train an AI system by trial-and-error, giving (to oversimplify) a “thumbs-up” when you’re happy with its behavior and a “thumbs-down” when you’re not? I’ve argued that you might be training it to deceive and manipulate you. However, this is uncertain, and - especially if you’re able to avoid errors in how you’re giving it feedback - things might play out differently. \nIt might turn out that this kind of training just works as intended, producing AI systems that do something like “Behave as the human would want, if they had all the info the AI has.” And the nitty-gritty details of how exactly AI systems are trained (beyond the high-level “trial-and-error” idea) could be crucial.\nIf this turns out to be the case, I think the future looks a lot brighter - but there are still lots of pitfalls of the kind I outlined in this piece. For example:\nPerhaps an authoritarian government launches a huge state project to develop AI systems, and/or uses espionage and hacking to steal a cutting-edge AI model developed elsewhere and deploy it aggressively. \nI previously noted that “developing powerful AI a few months before others could lead to having technology that is (effectively) hundreds of years ahead of others’.”\n \nSo this could put an authoritarian government in an enormously powerful position, with the ability to surveil and defeat any enemies worldwide, and the ability to prolong the life of its ruler(s) indefinitely. This could lead to a very bad future, especially if (as I’ve argued could happen) the future becomes “locked in” for good.\nPerhaps AI companies race ahead with selling AI systems to anyone who wants to buy them, and this leads to things like: \nPeople training AIs to act as propaganda agents for whatever views they already have, to the point where the world gets flooded with propaganda agents and it becomes totally impossible for humans to sort the signal from the noise, educate themselves, and generally make heads or tails of what’s going on. (Some people think this has already happened! I think things can get quite a lot worse.)\n \nPeople training “scientist AIs” to develop powerful weapons that can’t be defended against (even with AI help),5 leading eventually to a dynamic in which ~anyone can cause great harm, and ~nobody can defend against it. At this point, it could be inevitable that we’ll blow ourselves up.\n \nScience advancing to the point where digital people are created, in a rushed way such that they are considered property of whoever creates them (no human rights). I’ve previously written about how this could be bad.\n \nAll other kinds of chaos and disruption, with the least cautious people (the ones most prone to rush forward aggressively deploying AIs to capture resources) generally having an outsized effect on the future.\nOf course, this is just a crude gesture in the direction of some of the ways things could go wrong. I’m guessing I haven’t scratched the surface of the possibilities. And things could go very well too!\nWe can do better\nIn previous pieces, I’ve talked about a number of ways we could do better than in the scenarios above. Here I’ll just list a few key possibilities, with a bit more detail in expandable boxes and/or links to discussions in previous pieces.\nStrong alignment research (including imperfect/temporary measures). If we make enough progress ahead of time on alignment research, we might develop measures that make it relatively easy for AI companies to build systems that truly (not just seemingly) are safe. \nSo instead of having to say things like “We should slow down until we make progress on experimental, speculative research agendas,” person B in the above dialogue can say things more like “Look, all you have to do is add some relatively cheap bells and whistles to your training procedure for the next AI, and run a few extra tests. Then the speculative concerns about misaligned AI will be much lower-risk, and we can keep driving down the risk by using our AIs to help with safety research and testing. Why not do that?”\nMore on what this could look like at a previous piece, High-level Hopes for AI Alignment.\n(Click to expand) High-level hopes for AI alignment \nA previous piece goes through what I see as three key possibilities for building powerful-but-safe AI systems.\nIt frames these using Ajeya Cotra’s young businessperson analogy for the core difficulties. In a nutshell, once AI systems get capable enough, it could be hard to test whether they’re safe, because they might be able to deceive and manipulate us into getting the wrong read. Thus, trying to determine whether they’re safe might be something like “being an eight-year-old trying to decide between adult job candidates (some of whom are manipulative).”\nKey possibilities for navigating this challenge:\nDigital neuroscience: perhaps we’ll be able to read (and/or even rewrite) the “digital brains” of AI systems, so that we can know (and change) what they’re “aiming” to do directly - rather than having to infer it from their behavior. (Perhaps the eight-year-old is a mind-reader, or even a young Professor X.)\nLimited AI: perhaps we can make AI systems safe by making them limited in various ways - e.g., by leaving certain kinds of information out of their training, designing them to be “myopic” (focused on short-run as opposed to long-run goals), or something along those lines. Maybe we can make “limited AI” that is nonetheless able to carry out particular helpful tasks - such as doing lots more research on how to achieve safety without the limitations. (Perhaps the eight-year-old can limit the authority or knowledge of their hire, and still get the company run successfully.)\nAI checks and balances: perhaps we’ll be able to employ some AI systems to critique, supervise, and even rewrite others. Even if no single AI system would be safe on its own, the right “checks and balances” setup could ensure that human interests win out. (Perhaps the eight-year-old is able to get the job candidates to evaluate and critique each other, such that all the eight-year-old needs to do is verify basic factual claims to know who the best candidate is.)\nThese are some of the main categories of hopes that are pretty easy to picture today. Further work on AI safety research might result in further ideas (and the above are not exhaustive - see my more detailed piece, posted to the Alignment Forum rather than Cold Takes, for more).\nStandards and monitoring. A big driver of the hypothetical catastrophe above is that each individual AI project feels the need to stay ahead of others. Nobody wants to unilaterally slow themselves down in order to be cautious. The situation might be improved if we can develop a set of standards that AI projects need to meet, and enforce them evenly - across a broad set of companies or even internationally.\nThis isn’t just about buying time, it’s about creating incentives for companies to prioritize safety. An analogy might be something like the Clean Air Act or fuel economy standards: we might not expect individual companies to voluntarily slow down product releases while they work on reducing pollution, but once required, reducing pollution becomes part of what they need to do to be profitable.\nStandards could be used for things other than alignment risk, as well. AI projects might be required to:\nTake strong security measures, preventing states from capturing their models via espionage.\nTest models before release to understand what people will be able to use them for, and (as if selling weapons) restrict access accordingly.\nMore at a previous piece.\n(Click to expand) How standards might be established and become national or international \nI previously laid out a possible vision on this front, which I’ll give a slightly modified version of here:\nToday’s leading AI companies could self-regulate by committing not to build or deploy a system that they can’t convincingly demonstrate is safe (e.g., see Google’s 2018 statement, \"We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”). \nEven if some people at the companies would like to deploy unsafe systems, it could be hard to pull this off once the company has committed not to. \n \nEven if there’s a lot of room for judgment in what it means to demonstrate an AI system is safe, having agreed in advance that certain evidence is not good enough could go a long way.\nAs more AI companies are started, they could feel soft pressure to do similar self-regulation, and refusing to do so is off-putting to potential employees, investors, etc.\nEventually, similar principles could be incorporated into various government regulations and enforceable treaties.\nGovernments could monitor for dangerous projects using regulation and even overseas operations. E.g., today the US monitors (without permission) for various signs that other states might be developing nuclear weapons, and might try to stop such development with methods ranging from threats of sanctions to cyberwarfare or even military attacks. It could do something similar for any AI development projects that are using huge amounts of compute and haven’t volunteered information about whether they’re meeting standards.\nSuccessful, careful AI projects. I think a single AI company, or other AI project, could enormously improve the situation by being both successful and careful. For a simple example, imagine an AI company in a dominant market position - months ahead of all of the competition, in some relevant sense (e.g., its AI systems are more capable, such that it would take the competition months to catch up). Such a company could put huge amounts of resources - including its money, top people and its advanced AI systems themselves (e.g., AI systems performing roles similar to top human scientists) - into AI safety research, hoping to find safety measures that can be published for everyone to use. It can also take a variety of other measures laid out in a previous piece.\n(Click to expand) How a careful AI project could be helpful \nIn addition to using advanced AI to do AI safety research (noted above), an AI project could:\nPut huge effort into designing tests for signs of danger, and - if it sees danger signs in its own systems - warning the world as a whole.\nOffer deals to other AI companies/projects. E.g., acquiring them or exchanging a share of its profits for enough visibility and control to ensure that they don’t deploy dangerous AI systems.\nUse its credibility as the leading company to lobby the government for helpful measures (such as enforcement of a monitoring-and-standards regime), and to more generally highlight key issues and advocate for sensible actions.\nTry to ensure (via design, marketing, customer choice, etc.) that its AI systems are not used for dangerous ends, and are used on applications that make the world safer and better off. This could include defensive deployment to reduce risks from other AIs; it could include using advanced AI systems to help it gain clarity on how to get a good outcome for humanity; etc.\nAn AI project with a dominant market position could likely make a huge difference via things like the above (and probably via many routes I haven’t thought of). And even an AI project that is merely one of several leaders could have enough resources and credibility to have a lot of similar impacts - especially if it’s able to “lead by example” and persuade other AI projects (or make deals with them) to similarly prioritize actions like the above.\nA challenge here is that I’m envisioning a project with two arguably contradictory properties: being careful (e.g., prioritizing actions like the above over just trying to maintain its position as a profitable/cutting-edge project) and successful (being a profitable/cutting-edge project). In practice, it could be very hard for an AI project to walk the tightrope of being aggressive enough to be a “leading” project (in the sense of having lots of resources, credibility, etc.), while also prioritizing actions like the above (which mostly, with some exceptions, seem pretty different from what an AI project would do if it were simply focused on its technological lead and profitability).\nStrong security. A key threat in the above scenarios is that an incautious actor could “steal” an AI system from a company or project that would otherwise be careful. My understanding is that it could be extremely hard for an AI project to be robustly safe against this outcome (more here). But this could change, if there’s enough effort to work out the problem of how to develop a large-scale, powerful AI system that is very hard to steal.\nIn future pieces, I’ll get more concrete about what specific people and organizations can do today to improve the odds of factors like these going well, and overall to raise the odds of a good outcome.Notes\n E.g., Ajeya Cotra gives a 15% probability of transformative AI by 2030; eyeballing figure 1 from this chart on expert surveys implies a >10% chance by 2028. ↩\n To predict early AI applications, we need to ask not just “What tasks will AI be able to do?” but “How will this compare to all the other ways people can get the same tasks done?” and “How practical will it be for people to switch their workflows and habits to accommodate new AI capabilities?”\n By contrast, I think the implications of powerful enough AI for productivity don’t rely on this kind of analysis - very high-level economic reasoning can tell us that being able to cheaply copy something with human-like R&D capabilities would lead to explosive progress.\n FWIW, I think it’s fairly common for high-level, long-run predictions to be easier than detailed, short-run predictions. Another example: I think it’s easier to predict a general trend of planetary warming (this seems very likely) than to predict whether it’ll be rainy next weekend. ↩\nHere’s an early example of AIs providing training data for each other/themselves. ↩\nExample of AI helping to write code. ↩\n To be clear, I have no idea whether this is possible! It’s not obvious to me that it would be dangerous for technology to progress a lot and be used widely for both offense and defense. It’s just a risk I’d rather not incur casually via indiscriminate, rushed AI deployments. ↩\n", "url": "https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe/", "title": "How we could stumble into AI catastrophe", "source": "cold.takes", "source_type": "blog", "date_published": "2023-01-13", "id": "b962cf49e63a1a1e029859b125d3f03a"} -{"text": "If this ends up being the most important century due to advanced AI, what are the key factors in whether things go well or poorly?\n(Click to expand) More detail on why AI could make this the most important century\nIn The Most Important Century, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.\nThis page has a ~10-page summary of the series, as well as links to an audio version, podcasts, and the full series.\nThe key points I argue for in the series are:\nThe long-run future is radically unfamiliar. Enough advances in technology could lead to a long-lasting, galaxy-wide civilization that could be a radical utopia, dystopia, or anything in between.\nThe long-run future could come much faster than we think, due to a possible AI-driven productivity explosion.\nThe relevant kind of AI looks like it will be developed this century - making this century the one that will initiate, and have the opportunity to shape, a future galaxy-wide civilization.\nThese claims seem too \"wild\" to take seriously. But there are a lot of reasons to think that we live in a wild time, and should be ready for anything.\nWe, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions. But right now, we aren't ready for this.\nA lot of my previous writings have focused specifically on the threat of “misaligned AI”: AI that could have dangerous aims of its own and defeat all of humanity. In this post, I’m going to zoom out and give a broader overview of multiple issues transformative AI could raise for society - with an emphasis on issues we might want to be thinking about now rather than waiting to address as they happen.\nMy discussion will be very unsatisfying. “What are the key factors in whether things go well or poorly with transformative AI?” is a massive topic, with lots of angles that have gotten almost no attention and (surely) lots of angles that I just haven’t thought of at all. My one-sentence summary of this whole situation is: we’re not ready for this.\nBut hopefully this will give some sense of what sorts of issues should clearly be on our radar. And hopefully it will give a sense of why - out of all the issues we need to contend with - I’m as focused on the threat of misaligned AI as I am.\nOutline:\nFirst, I’ll briefly clarify what kinds of issues I’m trying to list. I’m looking for ways the future could look durably and dramatically different depending on how we navigate the development of transformative AI - such that doing the right things ahead of time could make a big, lasting difference.\nThen, I’ll list candidate issues: \nMisaligned AI. I touch on this only briefly, since I’ve discussed it at length in previous pieces. The short story is that we should try to avoid AI ending up with dangerous goals of its own and defeating humanity. (The remaining issues below seem irrelevant if this happens!)\n \nPower imbalances. As AI speeds up science and technology, it could cause some country/countries/coalitions to become enormously powerful - so it matters a lot which one(s) lead the way on transformative AI. (I fear that this concern is generally overrated compared to misaligned AI, but it is still very important.) There could also be dangers in overly widespread (as opposed to concentrated) AI deployment.\n \nEarly applications of AI. It might be that what early AIs are used for durably affects how things go in the long run - for example, whether early AI systems are used for education and truth-seeking, rather than manipulative persuasion and/or entrenching what we already believe. We might be able to affect which uses are predominant early on.\n \nNew life forms. Advanced AI could lead to new forms of intelligent life, such as AI systems themselves and/or digital people. Many of the frameworks we’re used to, for ethics and the law, could end up needing quite a bit of rethinking for new kinds of entities (for example, should we allow people to make as many copies as they want of entities that will predictably vote in certain ways?) Early decisions about these kinds of questions could have long-lasting effects. \n \nPersistent policies and norms. Perhaps we ought to be identifying particularly important policies, norms, etc. that seem likely to be durable even through rapid technological advancement, and try to improve these as much as possible before transformative AI is developed. (These could include things like a better social safety net suited to high, sustained unemployment rates; better regulations aimed at avoiding bias; etc.)\n \nSpeed of development. Maybe human society just isn’t likely to adapt well to rapid, radical advances in science and technology, and finding a way to limit the pace of advances would be good.\nFinally, I’ll discuss how I’m thinking about which of these issues to prioritize at the moment, and why misaligned AI is such a focus of mine.\nAn appendix will say a small amount about whether the long-run future seems likely to be better or worse than today, in terms of quality of life, assuming we navigate the above issues non-amazingly but non-catastrophically.\nThe kinds of issues I’m trying to list\nOne basic angle you could take on AI is: \n“AI’s main effect will be to speed up science and technology a lot. This means humans will be able to do all the things they were doing before - the good and the bad - but more/faster. So basically, we’ll end up with the same future we would’ve gotten without AI - just sooner.\n“Therefore, there’s no need to prepare in advance for anything in particular, beyond what we’d do to work toward a better future normally (in a world with no AI). Sure, lots of weird stuff could happen as science and technology advance - but that was already true, and many risks are just too hard to predict now and easier to respond to as they happen.”\nI don’t agree with the above, but I do think it’s a good starting point. I think we shouldn’t be listing everything that might happen in the future, as AI leads to advances in science and technology, and trying to prepare for it. Instead, we should be asking: “if transformative AI is coming in the next few decades, how does this change the picture of what we should be focused on, beyond just speeding up what’s going to happen anyway?”\nAnd I’m going to try to focus on extremely high-stakes issues - ways I could imagine the future looking durably and dramatically different depending on how we navigate the development of transformative AI.\nBelow, I’ll list some candidate issues fitting these criteria.\nPotential issues\nMisaligned AI\nI won’t belabor this possibility, because the last several pieces have been focused on it; this is just a quick reminder.\nIn a world without AI, the main question about the long-run future would be how humans will end up treating each other. But if powerful AI systems will be developed in the coming decades, we need to contend with the possibility that these AI systems will end up having goals of their own - and displacing humans as the species that determines how things will play out.\n(Click to expand)Why would AI \"aim\" to defeat humanity?\nA previous piece argued that if today’s AI development methods lead directly to powerful enough AI systems, disaster is likely by default (in the absence of specific countermeasures). \nIn brief:\nModern AI development is essentially based on “training” via trial-and-error. \nIf we move forward incautiously and ambitiously with such training, and if it gets us all the way to very powerful AI systems, then such systems will likely end up aiming for certain states of the world (analogously to how a chess-playing AI aims for checkmate).\nAnd these states will be other than the ones we intended, because our trial-and-error training methods won’t be accurate. For example, when we’re confused or misinformed about some question, we’ll reward AI systems for giving the wrong answer to it - unintentionally training deceptive behavior.\nWe should expect disaster if we have AI systems that are both (a) powerful enough to defeat humans and (b) aiming for states of the world that we didn’t intend. (“Defeat” means taking control of the world and doing what’s necessary to keep us out of the way; it’s unclear to me whether we’d be literally killed or just forcibly stopped1 from changing the world in ways that contradict AI systems’ aims.)\n(Click to expand) How could AI defeat humanity?\nIn a previous piece, I argue that AI systems could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nBy defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.\nOne way this could happen is if AI became extremely advanced, to the point where it had \"cognitive superpowers\" beyond what humans can do. In this case, a single AI system (or set of systems working together) could imaginably:\nDo its own research on how to build a better AI system, which culminates in something that has incredible other abilities.\nHack into human-built software across the world.\nManipulate human psychology.\nQuickly generate vast wealth under the control of itself or any human allies.\nCome up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.\nDevelop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.\nHowever, my piece also explores what things might look like if each AI system basically has similar capabilities to humans. In this case:\nHumans are likely to deploy AI systems throughout the economy, such that they have large numbers and access to many resources - and the ability to make copies of themselves. \nFrom this starting point, AI systems with human-like (or greater) capabilities would have a number of possible ways of getting to the point where their total population could outnumber and/or out-resource humans.\nI address a number of possible objections, such as \"How can AIs be dangerous without bodies?\"\nMore: AI could defeat all of us combined\nPower imbalances\nI’ve argued that AI could cause a dramatic acceleration in the pace of scientific and technological advancement. \n(Click to expand) How AI could cause explosive progress\n(This section is mostly copied from my summary of the \"most important century\" series; it links to some pieces with more detail at the bottom.)\nStandard economic growth models imply that any technology that could fully automate innovation would cause an \"economic singularity\": productivity going to infinity this century. This is because it would create a powerful feedback loop: more resources -> more ideas and innovation -> more resources -> more ideas and innovation ...\nThis loop would not be unprecedented. I think it is in some sense the \"default\" way the economy operates - for most of economic history up until a couple hundred years ago. \nEconomic history: more resources -> more people -> more ideas -> more resources ...\nBut in the \"demographic transition\" a couple hundred years ago, the \"more resources -> more people\" step of that loop stopped. Population growth leveled off, and more resources led to richer people instead of more people:\nToday's economy: more resources -> more richer people -> same pace of ideas -> ...\nThe feedback loop could come back if some other technology restored the \"more resources -> more ideas\" dynamic. One such technology could be the right kind of AI: what I call PASTA, or Process for Automating Scientific and Technological Advancement.\nPossible future: more resources -> more AIs -> more ideas -> more resources ...\nThat means that our radical long-run future could be upon us very fast after PASTA is developed (if it ever is). \nIt also means that if PASTA systems are misaligned - pursuing their own non-human-compatible objectives - things could very quickly go sideways.\nKey pieces:\nThe Duplicator: Instant Cloning Would Make the World Economy Explode\nForecasting Transformative AI, Part 1: What Kind of AI?\nOne way of thinking about this: perhaps (for reasons I’ve argued previously) AI could enable the equivalent of hundreds of years of scientific and technological advancement in a matter of a few months (or faster). If so, then developing powerful AI a few months before others could lead to having technology that is (effectively) hundreds of years ahead of others’.\nBecause of this, it’s easy to imagine that AI could lead to big power imbalances, as whatever country/countries/coalitions “lead the way” on AI development could become far more powerful than others (perhaps analogously to when a few smallish European states took over much of the rest of the world).\nOne way we might try to make the future go better: maybe it could be possible for different countries/coalitions to strike deals in advance. For example, two equally matched parties might agree in advance to share their resources, territory, etc. with each other, in order to avoid a winner-take-all competition.\nWhat might such agreements look like? Could they possibly be enforced? I really don’t know, and I haven’t seen this explored much.1\nAnother way one might try to make the future go better is to try to help a particular country, coalition, etc. develop powerful AI systems before others do. I previously called this the “competition” frame. \nI think it is, in fact, enormously important who leads the way on transformative AI. At the same time, I’ve expressed concern that people might overfocus on this aspect of things vs. other issues, for a number of reasons including:\nI think people naturally get more animated about \"helping the good guys beat the bad guys\" than about \"helping all of us avoid getting a universally bad outcome, for impersonal reasons such as 'we designed sloppy AI systems' or 'we created a dynamic in which haste and aggression are rewarded.'\"\nI expect people will tend to be overconfident about which countries, organizations or people they see as the \"good guys.\"\n(More here.)\nFinally, it’s worth mentioning the possible dangers of powerful AI being too widespread, rather than too concentrated. In The Vulnerable World Hypothesis, Nick Bostrom contemplates potential future dynamics such as “advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions.” In addition to avoiding worlds where AI capabilities end up concentrated in the hands of a few, it could also be important to avoid worlds in which they diffuse too widely, too quickly, before we’re able to assess the risks of widespread access to technology far beyond today’s.\nEarly applications of AI\nMaybe advanced AI will be useful for some sorts of tasks before others. For example, maybe - by default - advanced AI systems will soon be powerful persuasion tools, and cause wide-scale societal dysfunction before they cause rapid advances in science and technology. And maybe, with effort, we could make it less likely that this happens - more likely that early AI systems are used for education and truth-seeking, rather than manipulative persuasion and/or entrenching what we already believe.\nThere could be lots of possibilities of this general form: particular ways in which AI could be predictably beneficial, or disruptive, before it becomes an all-purpose accelerant to science and technology. Perhaps trying to map these out today, and push for advanced AI to be used for particular purposes early on, could have a lasting effect on the future.\nNew life forms\nAdvanced AI could lead to new forms of intelligent life, such as AI systems themselves and/or digital people.\nDigital people: one example of how wild the future could be\nIn a previous piece, I tried to give a sense of just how wild a future with advanced technology could be, by examining one hypothetical technology: \"digital people.\" \nTo get the idea of digital people, imagine a computer simulation of a specific person, in a virtual environment. For example, a simulation of you that reacts to all \"virtual events\" - virtual hunger, virtual weather, a virtual computer with an inbox - just as you would. \nI’ve argued that digital people would likely be conscious and deserving of human rights just as we are. And I’ve argued that they could have major impacts, in particular:\nProductivity. Digital people could be copied, just as we can easily make copies of ~any software today. They could also be run much faster than humans. Because of this, digital people could have effects comparable to those of the Duplicator, but more so: unprecedented (in history or in sci-fi movies) levels of economic growth and productivity.\nSocial science. Today, we see a lot of progress on understanding scientific laws and developing cool new technologies, but not so much progress on understanding human nature and human behavior. Digital people would fundamentally change this dynamic: people could make copies of themselves (including sped-up, temporary copies) to explore how different choices, lifestyles and environments affected them. Comparing copies would be informative in a way that current social science rarely is.\nControl of the environment. Digital people would experience whatever world they (or the controller of their virtual environment) wanted. Assuming digital people had true conscious experience (an assumption discussed in the FAQ), this could be a good thing (it should be possible to eliminate disease, material poverty and non-consensual violence for digital people) or a bad thing (if human rights are not protected, digital people could be subject to scary levels of control).\nSpace expansion. The population of digital people might become staggeringly large, and the computers running them could end up distributed throughout our galaxy and beyond. Digital people could exist anywhere that computers could be run - so space settlements could be more straightforward for digital people than for biological humans.\nLock-in. In today's world, we're used to the idea that the future is unpredictable and uncontrollable. Political regimes, ideologies, and cultures all come and go (and evolve). But a community, city or nation of digital people could be much more stable. \nDigital people need not die or age.\n \nWhoever sets up a \"virtual environment\" containing a community of digital people could have quite a bit of long-lasting control over what that community is like. For example, they might build in software to reset the community (both the virtual environment and the people in it) to an earlier state if particular things change - such as who's in power, or what religion is dominant.\n \nI consider this a disturbing thought, as it could enable long-lasting authoritarianism, though it could also enable things like permanent protection of particular human rights.\nI think these effects could be a very good or a very bad thing. How the early years with digital people go could irreversibly determine which. \nMore: \nDigital People would be an Even Bigger Deal\nDigital People FAQ\nMany of the frameworks we’re used to, for ethics and the law, could end up needing quite a bit of rethinking for new kinds of entities. For example:\nHow should we determine which AI systems or digital people are considered to have “rights” and get legal protections?\nWhat about the right to vote? If an AI system or digital person can be quickly copied billions of times, with each copy getting a vote, that could be a recipe for trouble - does this mean we should restrict copying, restrict voting or something else?\nWhat should the rules be about engineering AI systems or digital people to have particular beliefs, motivations, experiences, etc.? Simple examples: \nShould it be illegal to create new AI systems or digital people that will predictably suffer a lot? How much suffering is too much?\n \nWhat about creating AI systems or digital people that consistently, predictably support some particular political party or view?\n(For a lot more in this vein, see this very interesting piece by Nick Bostrom and Carl Shulman.)\nEarly decisions about these kinds of questions could have long-lasting effects. For example, imagine someone creating billions of AI systems or digital people that have capabilities and subjective experiences comparable to humans, and are deliberately engineered to “believe in” (or at least help promote) some particular ideology (Communism, libertarianism, etc.) If these systems are self-replicating, that could change the future drastically. \nThus, it might be important to set good principles in place for tough questions about how to treat new sorts of digital entities, before new sorts of digital entities start to multiply.\nPersistent policies and norms\nThere might be particular policies, norms, etc. that are likely to stay persistent even as technology is advancing and many things are changing.\nFor example, how people think about ethics and norms might just inherently change more slowly than technological capabilities change. Perhaps a society that had strong animal rights protections, and general pro-animal attitudes, would maintain these properties all the way through explosive technological progress, becoming a technologically advanced society that treated animals well - while a society that had little regard for animals would become a technologically advanced society that treated animals poorly. Similar analysis could apply to religious values, social liberalism vs. conservatism, etc.\nSo perhaps we ought to be identifying particularly important policies, norms, etc. that seem likely to be durable even through rapid technological advancement, and try to improve these as much as possible before transformative AI is developed.\nOne tangible example of a concern I’d put in this category: if AI is going to cause high, persistent technological unemployment, it might be important to establish new social safety net programs (such as universal basic income) today - if these programs would be easier to establish today than in the future. I feel less than convinced of this one - first because I have some doubts about how big an issue technological unemployment is going to be, and second because it’s not clear to me why policy change would be easier today than in a future where technological unemployment is a reality. And more broadly, I fear that it's very hard to design and (politically) implement policies today that we can be confident will make things durably better as the world changes radically.\nSlow it down?\nI’ve named a number of ways in which weird things - such as power imbalances, and some parts of society changing much faster than others - could happen as scientific and technological advancement accelerate. Maybe one way to make the most important century go well would be to simply avoid these weird things by avoiding too-dramatic acceleration. Maybe human society just isn’t likely to adapt well to rapid, radical advances in science and technology, and finding a way to limit the pace of advances would be good.\nAny individual company, government, etc. has an incentive to move quickly and try to get ahead of others (or not fall too far behind), but coordinated agreements and/or regulations (along the lines of the “global monitoring” possibility discussed here) could help everyone move more slowly.\nWhat else?\nAre there other ways in which transformative AI would cause particular issues, risks, etc. to loom especially large, and to be worth special attention today? I’m guessing I’ve only scratched the surface here.\nWhat I’m prioritizing, at the moment\nIf this is the most important century, there’s a vast set of things to be thinking about and trying to prepare for, and it’s hard to know what to prioritize.\nWhere I’m at for the moment:\nIt seems very hard to say today what will be desirable in a radically different future. I wish more thought and attention were going into things like early applications of AI; norms and laws around new life forms; and whether there are policy changes today that we could be confident in even if the world is changing rapidly and radically. But it seems to me that it would be very hard to be confident in any particular goal in areas like these. Can we really say anything today about what sorts of digital entities should have rights, or what kinds of AI applications we hope come first, that we expect to hold up?\nI feel most confident in two very broad ideas: “It’s bad if AI systems defeat humanity to pursue goals of their own” and “It’s good if good decision-makers end up making the key decisions.” These map to the misaligned AI and power imbalance topics - or what I previously called caution and competition.\nThat said, it also seems hard to know who the “good decision-makers” are. I’ve definitely observed some of this dynamic: “Person/company A says they’re trying to help the world by aiming to build transformative AI before person/company B; person/company B says they’re trying to help the world by aiming to build transformative AI before person/company A.” \nIt’s pretty hard to come up with tangible tests of who’s a “good decision-maker.” We mostly don’t know what person A would do with enormous power, or what person B would do, based on their actions today. One possible criterion is that we should arguably have more trust in people/companies who show more caution - people/companies who show willingness to hurt their own chances of “being in the lead” in order to help everyone’s chance of avoiding a catastrophe from misaligned AI.2\n(Instead of focusing on which particular people and/or companies lead the way on AI, you could focus on which countries do, e.g. preferring non-authoritarian countries. It’s arguably pretty clear that non-authoritarian countries would be better than authoritarian ones. However, I have concerns about this as a goal as well, discussed in a footnote.3)\nFor now, I am most focused on the threat of misaligned AI. Some reasons for this:\nIt currently seems to me that misaligned AI is a significant risk. Misaligned AI seems likely by default if we don’t specifically do things to prevent it, and preventing it seems far from straightforward (see previous posts on the difficulty of alignment research and why it could be hard for key players to be cautious).\nAt the same time, it seems like there are significant hopes for how we might avoid this risk. As argued here and here, my sense is that the more broadly people understand this risk, the better our odds of avoiding it.\nI currently feel that this threat is underrated, relative to the easier-to-understand angle of “I hope people I like develop powerful AI systems before others do.”\nI think the “competition” frame - focusing on helping some countries/coalitions/companies develop advanced AI before others - makes quite a bit of sense as well. But - as noted directly above - I have big reservations about the most common “competition”-oriented actions, such as trying to help particular companies outcompete others or trying to get U.S. policymakers more focused on AI. \nFor the latter, I worry that this risks making huge sacrifices on the “caution” front and even backfiring by causing other governments to invest in projects of their own.\n \nFor the former, I worry about the ability to judge “good” leadership, and the temptation to overrate people who resemble oneself.\nThis is all far from absolute. I’m open to a broad variety of projects to help the most important century go well, whether they’re about “caution,” “competition” or another issue (including those I’ve listed in this post). My top priority at the moment is reducing the risks of misaligned AI, but I think a huge range of potential risks aren’t getting enough attention from the world at large.\nAppendix: if we avoid catastrophic risks, how good does the future look?\nHere I’ll say a small amount about whether the long-run future seems likely to be better or worse than today, in terms of quality of life. \nPart of why I want to do this is to give a sense of why I feel cautiously and moderately optimistic about such a future - such that I feel broadly okay with a frame of “We should try to prevent anything too catastrophic from happening, and figure that the future we get if we can pull that off is reasonably likely (though far from assured!) to be good.”\nSo I’ll go through some quick high-level reasons for hope (the future might be better than the present) - and for concern (it might be worse). \nIn this section, I’m ignoring the special role AI might play, and just thinking about what happens if we get a fast-forwarded future. I’ll be focusing on what I think are probably the most likely ways the world will change in the future, laid out here: a higher world population and greater empowerment due to a greater stock of ideas, innovations and technological capabilities. My aim is to ask: “If we navigate the above issues neither amazingly nor catastrophically, and end up with the same sort of future we’d have had without AI (just sped up), how do things look?”\nReason for hope: empowerment trends. One simple take would be: “Life has gotten better for humans4 over the last couple hundred years or so, the period during which we’ve seen most of history’s economic growth and technological progress. We’ve seen better health, less poverty and hunger, less violence, more anti-discrimination measures, and few signs of anything getting clearly worse. So if humanity just keeps getting more and more empowered, and nothing catastrophic happens, we should plan on life continuing to improve along a variety of dimensions.”\nWhy is this the trend, and should we expect it to hold up? There are lots of theories, and I won’t pretend to know, but I’ll lay out some basic thoughts that may be illustrative and give cause for optimism.\nFirst off, there is an awful lot of room for improvement just from continuing to cut down on things like hunger and disease. A wealthier, more technologically advanced society seems like a pretty good bet to have less hunger and disease for fairly straightforward reasons.\nBut we’ve seen improvement on other dimensions too. This could be partly explained by something like the following dynamic:\nMost people would - aspirationally - like to be nonviolent, compassionate, generous and fair, if they could do so without sacrificing other things.\nAs empowerment rises, the need to make sacrifices falls (noisily and imperfectly) across the board.\nThis dynamic may have led to some (noisy, imperfect) improvement to date, but there might be much more benefit in the future compared to the past. For example, if we see a lot of progress on social science, we might get to a world where people understand their own needs, desires and behavior better - and thus can get most or all of what they want (from material needs to self-respect and happiness) without having to outcompete or push down others.5\nReason for hope: the “cheap utopia” possibility. This is sort of an extension of the previous point. If we imagine the upper limit of how “empowered” humanity could be (in terms of having lots of technological capabilities), it might be relatively easy to create a kind of utopia (such as the utopia I’ve described previously, or hopefully something much better). This doesn’t guarantee that such a thing will happen, but a future where it’s technologically easy to do things like meeting material needs and providing radical choice could be quite a bit better than the present.\nAn interesting (wonky) treatment of this idea is Carl Shulman’s blog post: Spreading happiness to the stars seems little harder than just spreading.\nReason for concern: authoritarianism. There are some huge countries that are essentially ruled by one person, with little to no democratic or other mechanisms for citizens to have a voice in how they’re treated. It seems like a live risk that the world could end up this way - essentially ruled by one person or relatively small coalition - in the long run. (It arguably would even continue a historical trend in which political units have gotten larger and larger.)\nMaybe this would be fine if whoever’s in charge is able to let everyone have freedom, wealth, etc. at little cost to themselves (along the lines of the above point). But maybe whoever’s in charge is just a crazy or horrible person, in which case we might end up with a bad future even if it would be “cheap” to have a wonderful one.\nReason for concern: competitive dynamics. You might imagine that as empowerment advances, we get purer, more unrestrained competition. \nOne way of thinking about this: \nToday, no matter how ruthless CEOs are, they tend to accommodate some amount of leisure time for their employees. That’s because businesses have no choice but to hire people who insist on working a limited number of hours, having a life outside of work, etc. \nBut if we had advanced enough technology, it might be possible to run a business whose employees have zero leisure time. (One example would be via digital people and the ability to make lots of copies of highly productive people just as they’re about to get to work. A more mundane example would be if e.g. advanced stimulants and other drugs were developed so people could be productive without breaks.)\nAnd that might be what the most productive businesses, organizations, etc. end up looking like - the most productive organizations might be the ones that most maniacally and uncompromisingly use all of their resources to acquire more resources. Those could be precisely the organizations that end up filling most of the galaxy.\nMore at this Slate Star Codex post. Key quote: “I’m pretty sure that brutal … competition combined with ability to [copy and edit] minds necessarily results in paring away everything not directly maximally economically productive. And a lot of things we like – love, family, art, hobbies – are not directly maximally economic productive.”\nThat said:\nIt’s not really clear how this ultimately shakes out. One possibility is something like this: \nLots of people, or perhaps machines, compete ruthlessly to acquire resources. But this competition is (a) legal, subject to a property rights system; (b) ultimately for the benefit of the investors in the competing companies/organizations. \n \nWho are these investors? Well, today, many of the biggest companies are mostly owned by large numbers of individuals via mutual funds. The same could be true in the future - and those individuals could be normal people who use the proceeds for nice things.\nIf the “cheap utopia” possibility (described above) comes to pass, it might only take a small amount of spare resources to support a lot of good lives.\nOverall, my guess is that the long-run future is more likely to be better than the present than worse than the present (in the sense of average quality of life). I’m very far from confident in this. I’m more confident that the long-run future is likely to be better than nothing, and that it would be good to prevent humans from going extinct, or a similar development such as a takeover by misaligned AI.Footnotes\n A couple of discussions of the prospects for enforcing agreements here and here. ↩\n I’m reminded of the judgment of Solomon: “two mothers living in the same house, each the mother of an infant son, came to Solomon. One of the babies had been smothered, and each claimed the remaining boy as her own. Calling for a sword, Solomon declared his judgment: the baby would be cut in two, each woman to receive half. One mother did not contest the ruling, declaring that if she could not have the baby then neither of them could, but the other begged Solomon, ‘Give the baby to her, just don't kill him!’ The king declared the second woman the true mother, as a mother would even give up her baby if that was necessary to save its life, and awarded her custody.” \n The sword is misaligned AI and the baby is humanity or something.\n (This story is actually extremely bizarre - seriously, Solomon was like “You each get half the baby”?! - and some similar stories from India/China seem at least a bit more plausible. But I think you get my point. Maybe.) ↩\n For a tangible example, I’ll discuss the practice (which some folks are doing today) of trying to ensure that the U.S. develops transformative AI before another country does, by arguing for the importance of A.I. to U.S. policymakers. \n This approach makes me quite nervous, because:\nI expect U.S. policymakers by default to be very oriented toward “competition” to the exclusion of “caution.” (This could change if the importance of caution becomes more widely appreciated!) \nI worry about a nationalized AI project that (a) doesn’t exercise much caution at all, focusing entirely on racing ahead of others; (b) might backfire by causing other countries to go for nationalized projects of their own, inflaming an already tense situation and not even necessarily doing much to make it more likely that the U.S. leads the way. In particular, other countries might have an easier time quickly mobilizing huge amounts of government funding than the U.S., such that the U.S. might have better odds if it remains the case that most AI research is happening at private companies.\n (There might be ways of helping particular countries without raising the risks of something like a low-caution nationalized AI project, and if so these could be important and good.) ↩\nNot for animals, though see this comment for some reasons we might not consider this a knockdown objection to the “life has gotten better” claim. ↩\n This is only a possibility. It’s also possible that humans deeply value being better-off than others, which could complicate it quite a bit. (Personally, I feel somewhat optimistic that a lot of people would aspirationally prefer to focus on their own welfare rather than comparing themselves to others - so if knowledge advanced to the point where people could choose to change in this way, I feel optimistic that at least many would do so.) ↩\n", "url": "https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/", "title": "Transformative AI issues (not just misalignment): an overview", "source": "cold.takes", "source_type": "blog", "date_published": "2023-01-05", "id": "8324d19cc55ed28a759918ec6fd778ea"} -{"text": "In previous pieces, I argued that there's a real and large risk of AI systems' developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening. I discussed why it could be hard to build AI systems without this risk and how it might be doable.\nThe “AI alignment problem” refers1 to a technical problem: how can we design a powerful AI system that behaves as intended, rather than forming its own dangerous aims? This post is going to outline a broader political/strategic problem, the “deployment problem”: if you’re someone who might be on the cusp of developing extremely powerful (and maybe dangerous) AI systems, what should you … do?\nThe basic challenge is this:\nIf you race forward with building and using powerful AI systems as fast as possible, you might cause a global catastrophe (see links above).\nIf you move too slowly, though, you might just be waiting around for someone else less cautious to develop and deploy powerful, dangerous AI systems.\nAnd if you can get to the point where your own systems are both powerful and safe … what then? Other people still might be less-cautiously building dangerous ones - what should we do about that?\nMy current analogy for the deployment problem is racing through a minefield: each player is hoping to be ahead of others, but anyone moving too quickly can cause a disaster. (In this minefield, a single mine is big enough to endanger all the racers.)\nThis post gives a high-level overview of how I see the kinds of developments that can lead to a good outcome, despite the “racing through a minefield” dynamic. It is distilled from a more detailed post on the Alignment Forum.\nFirst, I’ll flesh out how I see the challenge we’re contending with, based on the premises above.\nNext, I’ll list a number of things I hope that “cautious actors” (AI companies, governments, etc.) might do in order to prevent catastrophe.\nMany of the actions I’m picturing are not the kind of things normal market and commercial incentives would push toward, and as such, I think there’s room for a ton of variation in whether the “racing through a minefield” challenge is handled well. Whether key decision-makers understand things like the case for misalignment risk (and in particular, why it might be hard to measure) - and are willing to lower their own chances of “winning the race” to improve the odds of a good outcome for everyone - could be crucial.\nThe basic premises of “racing through a minefield”\nThis piece is going to lean on previous pieces and assume all of the following things:\nTransformative AI soon. This century, something like PASTA could be developed: AI systems that can effectively automate everything humans do to advance science and technology. This brings the potential for explosive progress in science and tech, getting us more quickly than most people imagine to a deeply unfamiliar future. I’ve argued for this possibility in the Most Important Century series.\nMisalignment risk. As argued previously, there’s a significant risk that such AI systems could end up with misaligned goals of their own, leading them to defeat all of humanity. And it could take significant extra effort to get AI systems to be safe.\nAmbiguity. As argued previously, it could be hard to know whether AI systems are dangerously misaligned, for a number of reasons. In particular, when we train AI systems not to behave dangerously, we might be unwittingly training them to obscure their dangerous potential from humans, and take dangerous actions only when humans would not be able to stop them. At the same time, I expect powerful AI systems will present massive opportunities to make money and gain power, such that many people will want to race forward with building and deploying them as fast as possible (perhaps even if they believe that doing so is risky for the world!)\nSo, one can imagine a scenario where some company is in the following situation:\nIt has good reason to think it’s on the cusp of developing extraordinarily powerful AI systems.\nIf it deploys such systems hastily, global disaster could result.\nBut if it moves too slowly, other, less cautious actors could deploy dangerous systems of their own.\nThat seems like a tough enough, high-stakes-enough, and likely enough situation that it’s worth thinking about how one is supposed to handle it.\nOne simplified way of thinking about this problem:\nWe might classify “actors” (companies, government projects, whatever might develop powerful AI systems or play an important role in how they’re deployed) as cautious (taking misalignment risk very seriously) or incautious (not so much).\nOur basic hope is that at any given point in time, cautious actors collectively have the power to “contain” incautious actors. By “contain,” I mean: stop them from deploying misaligned AI systems, and/or stop the misaligned systems from causing a catastrophe.\nImportantly, it could be important for cautious actors to use powerful AI systems to help with “containment” in one way or another. If cautious actors refrain from AI development entirely, it seems likely that incautious actors will end up with more powerful systems than cautious ones, which doesn’t seem good.\nIn this setup, cautious actors need to move fast enough that they can’t be overpowered by others’ AI systems, but slowly enough that they don’t cause disaster themselves. Hence the “racing through a minefield” analogy.\nWhat success looks like\nIn a non-Cold-Takes piece, I explore the possible actions available to cautious actors to win the race through a minefield. This section will summarize the general categories - and, crucially, why we shouldn’t expect that companies, governments, etc. will do the right thing simply from natural (commercial and other) incentives.\nI’ll be going through each of the following:\nAlignment (charting a safe path through the minefield). Putting lots of effort into technical work to reduce the risk of misaligned AI. \nThreat assessment (alerting others about the mines). Putting lots of effort into assessing the risk of misaligned AI, and potentially demonstrating it (to other actors) as well.\nAvoiding races (to move more cautiously through the minefield). If different actors are racing to deploy powerful AI systems, this could make it unnecessarily hard to be cautious.\nSelective information sharing (so the incautious don’t catch up). Sharing some information widely (e.g., technical insights about how to reduce misalignment risk), some selectively (e.g., demonstrations of how powerful and dangerous AI systems might be), and some not at all (e.g., the specific code that, if accessed by a hacker, would allow the hacker to deploy potentially dangerous AI systems themselves).\nGlobal monitoring (noticing people about to step on mines, and stopping them). Working toward worldwide state-led monitoring efforts to identify and prevent “incautious” projects racing toward deploying dangerous AI systems.\nDefensive deployment (staying ahead in the race). Deploying AI systems only when they are unlikely to cause a catastrophe - but also deploying them with urgency once they are safe, in order to help prevent problems from AI systems developed by less cautious actors.\nAlignment (charting a safe path through the minefield2)\nI previously wrote about some of the ways we might reduce the dangers of advanced AI systems. Broadly speaking:\nCautious actors might try to primarily build limited AI systems - AI systems that lack the kind of ambitious aims that lead to danger. They might ultimately be able to use these AI systems to do things like automating further safety research, making future less-limited systems safer.\nCautious actors might use AI checks and balances - that is, using some AI systems to supervise, critique and identify dangerous behavior in others, with special care taken to make it hard for AI systems to coordinate with each other against humans. \nCautious actors might use a variety of other techniques for making AI systems safer - particularly techniques that incorporate “digital neuroscience,” gauging the safety of an AI system by “reading its mind” rather than simply by watching out for dangerous behavior (the latter might be unreliable, as noted above).\nA key point here is that making AI systems safe enough to commercialize (with some initial success and profits) could be much less (and different) effort than making them robustly safe (no lurking risk of global catastrophe). The basic reasons for this are covered in my previous post on difficulties with AI safety research In brief:\nIf AI systems behave dangerously, we can “train out” that behavior by providing negative reinforcement for it. \nThe concern is that when we do this, we might be unwittingly training AI systems to obscure their dangerous potential from humans, and take dangerous actions only when humans would not be able to stop them. (I call this the “King Lear problem: it's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't.”)\nSo we could end up with AI systems that behave safely and helpfully as far as we can tell in normal circumstances, while ultimately having ambitious, dangerous “aims” that they pursue when they become powerful enough and have the right opportunities.\nWell-meaning AI companies with active ethics boards might do a lot of AI safety work, by training AIs not to behave in unhelpful or dangerous ways. But if they want to address the risks I’m focused on here, this could require safety measures that look very different - e.g., measures more reliant on “checks and balances” and “digital neuroscience.”\nThreat assessment (alerting others about the mines)\nIn addition to making AI systems safer, cautious actors can also put effort into measuring and demonstrating how dangerous they are (or aren’t).\nFor the same reasons given in the previous section, it could take special efforts to find and demonstrate the kinds of dangers I’ve been discussing. Simply monitoring AI systems in the real world for bad behavior might not do it. It may be necessary to examine (or manipulate) their digital brains,3 design AI systems specifically to audit other AI systems for signs of danger; deliberately train AI systems to demonstrate particular dangerous patterns (while not being too dangerous!); etc.\nLearning and demonstrating that the danger is high could help convince many actors to move more slowly and cautiously. Learning that the danger is low could lessen some of the tough tradeoffs here and allow cautious actors to move forward more decisively with developing advanced AI systems; I think this could be a good thing in terms of what sorts of actors lead the way on transformative AI.\nAvoiding races (to move more cautiously through the minefield)\nHere’s a dynamic I’d be sad about:\nCompany A is getting close to building very powerful AI systems. It would love to move slowly and be careful with these AIs, but it worries that if it moves too slowly, Company B will get there first, have less caution, and do some combination of “causing danger to the world” and “beating company A if the AIs turn out safe.”\nCompany B is getting close to building very powerful AI systems. It would love to move slowly and be careful with these AIs, but it worries that if it moves too slowly, Company A will get there first, have less caution, and do some combination of “causing danger to the world” and “beating company B if the AIs turn out safe.”\n(Similar dynamics could apply to Country A and B, with national AI development projects.)\nIf Companies A and B would both “love to move slowly and be careful” if they could, it’s a shame that they’re both racing to beat each other. Maybe there’s a way to avoid this dynamic. For example, perhaps Companies A and B could strike a deal - anything from “collaboration and safety-related information sharing” to a merger. This could allow both to focus more on precautionary measures rather than on beating the other. Another way to avoid this dynamic is discussed below, under standards and monitoring.\n“Finding ways to avoid a furious race” is not the kind of dynamic that emerges naturally from markets! In fact, working together along these lines would have to be well-designed to avoid running afoul of antitrust regulation.\nSelective information sharing - including security (so the incautious don’t catch up)\nCautious actors might want to share certain kinds of information quite widely:\nIt could be crucial to raise awareness about the dangers of AI (which, as I’ve argued, won’t necessarily be obvious). \nThey might also want to widely share information that could be useful for reducing the risks (e.g., safety techniques that have worked well.)\nAt the same time, as long as there are incautious actors out there, information can be dangerous too:\nInformation about what cutting-edge AI systems can do - especially if it is powerful and impressive - could spur incautious actors to race harder toward developing powerful AI of their own (or give them an idea of how to build powerful systems, by giving them an idea of what sorts of abilities to aim for).\nAn AI’s “weights” (you can think of this sort of like its source code, though not exactly4) are potentially very dangerous. If hackers (including from a state cyberwarfare program) gain unauthorized access to an AI’s weights, this could be tantamount to stealing the AI system, and the actor that steals the system could be much less cautious than the actor who built it. Achieving a level of cybersecurity that rules this out could be extremely difficult, and potentially well beyond what one would normally aim for in a commercial context.\nThe lines between these categories of information might end up fuzzy. Some information might be useful for demonstrating the dangers and capabilities of cutting-edge systems, or useful for making systems safer and for building them in the first place. So there could be a lot of hard judgment calls here.\nThis is another area where I worry that commercial incentives might not be enough on their own. For example, it is usually important for a commercial project to have some reasonable level of security against hackers, but not necessarily for it to be able to resist well-resourced attempts by states to steal its intellectual property. \nGlobal monitoring (noticing people about to step on mines, and stopping them)\nIdeally, cautious actors would learn of every case where someone is building a dangerous AI system (whether purposefully or unwittingly), and be able to stop the project. If this were done reliably enough, it could take the teeth out of the threat; a partial version could buy time.\nHere’s one vision for how this sort of thing could come about:\nWe (humanity) develop a reasonable set of tests for whether an AI system might be dangerous.\nToday’s leading AI companies self-regulate by committing not to build or deploy a system that’s dangerous according to such a test (e.g., see Google’s 2018 statement, \"We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”). Even if some people at the companies would like to do so, it’s hard to pull this off once the company has committed not to.\nAs more AI companies are started, they feel soft pressure to do similar self-regulation, and refusing to do so is off-putting to potential employees, investors, etc.\nEventually, similar principles are incorporated into various government regulations and enforceable treaties.\nGovernments could monitor for dangerous projects using regulation and even overseas operations. E.g., today the US monitors (without permission) for various signs that other states might be developing nuclear weapons, and might try to stop such development with methods ranging from threats of sanctions to cyberwarfare or even military attacks. It could do something similar for any AI development projects that are using huge amounts of compute and haven’t volunteered information about their safety practices.\nIf the situation becomes very dire - i.e., it seems that there’s a high risk of dangerous AI being deployed imminently - I see the latter bullet point as one of the main potential hopes. In this case, governments might have to take drastic actions to monitor and stop dangerous projects, based on limited information.\nDefensive deployment (staying ahead in the race)\nI’ve emphasized the importance of caution: not deploying AI systems when we can’t be confident enough that they’re safe. \nBut when confidence can be achieved (how much confidence? See footnote5), powerful-and-safe AI can help reduce risks from other actors in many possible ways.\nSome of this would be by helping with all of the above. Once AI systems can do a significant fraction of the things humans can do today, they might be able to contribute to each of the activities I’ve listed so far:\nAlignment. AI systems might be able to contribute to AI safety research (as humans do), producing increasingly robust techniques for reducing risks.\nThreat assessment. AI systems could help produce evidence and demonstrations about potential risks. They could be potentially useful for tasks like “Produce detailed explanations and demonstrations of possible sequences of events that could lead to AIs doing harm.”\nAvoiding races. AI projects might make deals in which e.g. each project is allowed to use its AI systems to monitor for signs of risk from the others (ideally such systems would be designed to only share relevant information).\nSelective information sharing. AI systems might contribute to strong security (e.g., by finding and patching security holes), and to dissemination (including by helping to better communicate about the level of risk and the best ways to reduce it).\nGlobal monitoring. AI systems might be used (e.g., by governments) to monitor for signs of dangerous AI projects worldwide, and even to interfere with such projects. They might also be used as part of large voluntary self-regulation projects, along the lines of what I wrote just above under “Avoiding races.”\nAdditionally, if safe AI systems are in wide use, it could be harder for dangerous (similarly powerful) AI systems to do harm. This could be via a wide variety of mechanisms. For example:\nIf there’s widespread use of AI systems to patch and find security holes, similarly powered AI systems might have a harder time finding security holes to cause trouble with.\nMisaligned AI systems could have more trouble making money, gaining allies, etc. in worlds where they are competing with similarly powerful but safe AI systems.\nSo?\nI’ve gone into some detail about why we might have a challenging situation (“racing through a minefield”) if powerful AI systems (a) are developed fairly soon; (b) present significant risk of misalignment leading to humanity being defeated; (c) are not particularly easy to measure the safety of.\nI’ve also talked about what I see as some of the key ways that “cautious actors” concerned about misaligned AI might navigate this situation.\nI talk about some of the implications in my more detailed piece. Here I’m just going to name a couple of observations that jump out at me from this analysis:\nThis seems hard. If we end up in the future envisioned in this piece, I imagine this being extremely stressful and difficult. I’m picturing a world in which many companies, and even governments, can see the huge power and profit they might reap from deploying powerful AI systems before others - but we’re hoping that they instead move with caution (but not too much caution!), take the kinds of actions described above, and that ultimately cautious actors “win the race” against less cautious ones.\nEven if AI alignment ends up being relatively easy - such that a given AI project can make safe, powerful systems with about 10% more effort than making dangerous, powerful systems - the situation still looks pretty nerve-wracking, because of how many different players could end up trying to build systems of their own without putting in that 10%.\nA lot of the most helpful actions might be “out of the ordinary.” When racing through a minefield, I hope key actors will:\nPut more effort into alignment, threat assessment, and security than is required by commercial incentives;\nConsider measures for avoiding races and global monitoring that could be very unusual, even unprecedented.\nDo all of this in the possible presence of ambiguous, confusing information about the risks.\nAs such, it could be very important whether key decision-makers (at both companies and governments) understand the risks and are prepared to act on them. Currently, I think we’re unfortunately very far from a world where this is true.\nAdditionally, I think AI projects can and should be taking measures today to make unusual-but-important measures more practical in the future. This could include things like:\nGetting practice with selective information sharing. For example, building internal processes to decide on whether research should be published, rather than having a rule of “Publish everything, we’re like a research university” or “Publish nothing, we don’t want competitors seeing it.” \nI expect that early attempts at this will often be clumsy and get things wrong! \nGetting practice with ways that AI companies could avoid races.\nGetting practice with threat assessment. Even if today’s AI systems don’t seem like they could possibly be dangerous yet … how sure are we, and how do we know?\nPrioritizing building AI systems that could do especially helpful things, such as contributing to AI safety research and threat assessment and patching security holes. \nEstablishing governance that is capable of making hard, non-commercially-optimal decisions for the good of humanity. A standard corporation could be sued for not deploying AI that poses a risk of global catastrophe - if this means a sacrifice for its bottom line. And a lot of the people making the final call at AI companies might be primarily thinking about their duties to shareholders (or simply unaware of the potential stakes of powerful enough AI systems). I’m excited about AI companies that are investing heavily in setting up governance structures - and investing in executives and board members - capable of making the hard calls well.Footnotes\n Generally, or at least, this is what I’d like it to refer to. ↩\n Thanks to beta reader Ted Sanders for suggesting this analogy in place of the older one, “removing mines from the minefield.” \n ↩\n One genre of testing that might be interesting: manipulating an AI system’s “digital brain” in order to simulate circumstances in which it has an opportunity to take over the world, and seeing whether it does so. This could be a way of dealing with the King Lear problem. More here. ↩\n Modern AI systems tend to be trained with lots of trial-and-error. The actual code that is used to train them might be fairly simple and not very valuable on its own; but an expensive training process then generates a set of “weights” which are ~all one needs to make a fully functioning, relatively cheap copy of the AI system. ↩\n I mean, this is part of the challenge. In theory, you should deploy an AI system if the risks of not doing so are greater than the risks of doing so. That’s going to depend on hard-to-assess information about how safe your system is and how dangerous and imminent others’ are, and it’s going to be easy to be biased in favor of “My systems are safer than others’; I should go for it.” Seems hard. ↩\n", "url": "https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/", "title": "Racing through a minefield: the AI deployment problem", "source": "cold.takes", "source_type": "blog", "date_published": "2022-12-22", "id": "e8cd5c825789b29e3f87b9eeca6db972"} -{"text": "In previous pieces, I argued that there's a real and large risk of AI systems' aiming to defeat all of humanity combined - and succeeding. \nI first argued that this sort of catastrophe would be likely without specific countermeasures to prevent it. I then argued that countermeasures could be challenging, due to some key difficulties of AI safety research.\nBut while I think misalignment risk is serious and presents major challenges, I don’t agree with sentiments1 along the lines of “We haven’t figured out how to align an AI, so if transformative AI comes soon, we’re doomed.” Here I’m going to talk about some of my high-level hopes for how we might end up avoiding this risk. \nI’ll first recap the challenge, using Ajeya Cotra’s young businessperson analogy to give a sense of some of the core difficulties. In a nutshell, once AI systems get capable enough, it could be hard to test whether they’re safe, because they might be able to deceive and manipulate us into getting the wrong read. Thus, trying to determine whether they’re safe might be something like “being an eight-year-old trying to decide between adult job candidates (some of whom are manipulative).”\nI’ll then go through what I see as three key possibilities for navigating this situation:\nDigital neuroscience: perhaps we’ll be able to read (and/or even rewrite) the “digital brains” of AI systems, so that we can know (and change) what they’re “aiming” to do directly - rather than having to infer it from their behavior. (Perhaps the eight-year-old is a mind-reader, or even a young Professor X.)\nLimited AI: perhaps we can make AI systems safe by making them limited in various ways - e.g., by leaving certain kinds of information out of their training, designing them to be “myopic” (focused on short-run as opposed to long-run goals), or something along those lines. Maybe we can make “limited AI” that is nonetheless able to carry out particular helpful tasks - such as doing lots more research on how to achieve safety without the limitations. (Perhaps the eight-year-old can limit the authority or knowledge of their hire, and still get the company run successfully.)\nAI checks and balances: perhaps we’ll be able to employ some AI systems to critique, supervise, and even rewrite others. Even if no single AI system would be safe on its own, the right “checks and balances” setup could ensure that human interests win out. (Perhaps the eight-year-old is able to get the job candidates to evaluate and critique each other, such that all the eight-year-old needs to do is verify basic factual claims to know who the best candidate is.)\nThese are some of the main categories of hopes that are pretty easy to picture today. Further work on AI safety research might result in further ideas (and the above are not exhaustive - see my more detailed piece, posted to the Alignment Forum rather than Cold Takes, for more).\nI’ll talk about both challenges and reasons for hope here. I think that for the most part, these hopes look much better if AI projects are moving cautiously rather than racing furiously.\nI don’t think we’re at the point of having much sense of how the hopes and challenges net out; the best I can do at this point is to say: “I don’t currently have much sympathy for someone who’s highly confident that AI takeover would or would not happen (that is, for anyone who thinks the odds of AI takeover … are under 10% or over 90%).”\nThe challenge\nThis is all recapping previous pieces. If you remember them super well, skip to the next section.\nIn previous pieces, I argued that:\nThe coming decades could see the development of AI systems that could automate - and dramatically speed up - scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future. (More: The Most Important Century)\nIf we develop this sort of AI via ambitious use of the “black-box trial-and-error” common in AI development today, then there’s a substantial risk that: \nThese AIs will develop unintended aims (states of the world they make calculations and plans toward, as a chess-playing AI \"aims\" for checkmate);\n \nThese AIs will deceive, manipulate, and overpower humans as needed to achieve those aims;\n \nEventually, this could reach the point where AIs take over the world from humans entirely.\nPeople today are doing AI safety research to prevent this outcome, but such research has a number of deep difficulties:\n“Great news - I’ve tested this AI and it looks safe.” Why might we still have a problem?\n \nProblem\nKey question\nExplanation\nThe Lance Armstrong problem\nDid we get the AI to be actually safe or good at hiding its dangerous actions?\nWhen dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “appearing to behave well.”\nWhen professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.\n \nThe King Lear problem\nThe AI is (actually) well-behaved when humans are in control. Will this transfer to when AIs are in control?\nIt's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't. \nAIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to take control of the world entirely. It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation. \nLike King Lear trying to decide how much power to give each of his daughters before abdicating the throne.\n \nThe lab mice problem\nToday's \"subhuman\" AIs are safe.What about future AIs with more human-like abilities?\nToday's AI systems aren't advanced enough to exhibit the basic behaviors we want to study, such as deceiving and manipulating humans.\nLike trying to study medicine in humans by experimenting only on lab mice.\n \nThe first contact problem\nImagine that tomorrow's \"human-like\" AIs are safe. How will things go when AIs have capabilities far beyond humans'?\nAI systems might (collectively) become vastly more capable than humans, and it's ... just really hard to have any idea what that's going to be like. As far as we know, there has never before been anything in the galaxy that's vastly more capable than humans in the relevant ways! No matter what we come up with to solve the first three problems, we can't be too confident that it'll keep working if AI advances (or just proliferates) a lot more. \nLike trying to plan for first contact with extraterrestrials (this barely feels like an analogy).\n \nAn analogy that incorporates these challenges is Ajeya Cotra’s “young businessperson” analogy:\nImagine you are an eight-year-old whose parents left you a $1 trillion company and no trusted adult to serve as your guide to the world. You must hire a smart adult to run your company as CEO, handle your life the way that a parent would (e.g. decide your school, where you’ll live, when you need to go to the dentist), and administer your vast wealth (e.g. decide where you’ll invest your money).\n You have to hire these grownups based on a work trial or interview you come up with -- you don't get to see any resumes, don't get to do reference checks, etc. Because you're so rich, tons of people apply for all sorts of reasons. (More)\nIf your applicants are a mix of \"saints\" (people who genuinely want to help), \"sycophants\" (people who just want to make you happy in the short run, even when this is to your long-term detriment) and \"schemers\" (people who want to siphon off your wealth and power for themselves), how do you - an eight-year-old - tell the difference?\n(Click to expand) More detail on why AI could make this the most important century\nIn The Most Important Century, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.\nThis page has a ~10-page summary of the series, as well as links to an audio version, podcasts, and the full series.\nThe key points I argue for in the series are:\nThe long-run future is radically unfamiliar. Enough advances in technology could lead to a long-lasting, galaxy-wide civilization that could be a radical utopia, dystopia, or anything in between.\nThe long-run future could come much faster than we think, due to a possible AI-driven productivity explosion.\nThe relevant kind of AI looks like it will be developed this century - making this century the one that will initiate, and have the opportunity to shape, a future galaxy-wide civilization.\nThese claims seem too \"wild\" to take seriously. But there are a lot of reasons to think that we live in a wild time, and should be ready for anything.\nWe, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions. But right now, we aren't ready for this.\n(Click to expand) Why would AI \"aim\" to defeat humanity? \nA previous piece argued that if today’s AI development methods lead directly to powerful enough AI systems, disaster is likely by default (in the absence of specific countermeasures). \nIn brief:\nModern AI development is essentially based on “training” via trial-and-error. \nIf we move forward incautiously and ambitiously with such training, and if it gets us all the way to very powerful AI systems, then such systems will likely end up aiming for certain states of the world (analogously to how a chess-playing AI aims for checkmate).\nAnd these states will be other than the ones we intended, because our trial-and-error training methods won’t be accurate. For example, when we’re confused or misinformed about some question, we’ll reward AI systems for giving the wrong answer to it - unintentionally training deceptive behavior.\nWe should expect disaster if we have AI systems that are both (a) powerful enough to defeat humans and (b) aiming for states of the world that we didn’t intend. (“Defeat” means taking control of the world and doing what’s necessary to keep us out of the way; it’s unclear to me whether we’d be literally killed or just forcibly stopped1 from changing the world in ways that contradict AI systems’ aims.)\n(Click to expand) How could AI defeat humanity? \nIn a previous piece, I argue that AI systems could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nBy defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.\nOne way this could happen is if AI became extremely advanced, to the point where it had \"cognitive superpowers\" beyond what humans can do. In this case, a single AI system (or set of systems working together) could imaginably:\nDo its own research on how to build a better AI system, which culminates in something that has incredible other abilities.\nHack into human-built software across the world.\nManipulate human psychology.\nQuickly generate vast wealth under the control of itself or any human allies.\nCome up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.\nDevelop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.\nHowever, my piece also explores what things might look like if each AI system basically has similar capabilities to humans. In this case:\nHumans are likely to deploy AI systems throughout the economy, such that they have large numbers and access to many resources - and the ability to make copies of themselves. \nFrom this starting point, AI systems with human-like (or greater) capabilities would have a number of possible ways of getting to the point where their total population could outnumber and/or out-resource humans.\nI address a number of possible objections, such as \"How can AIs be dangerous without bodies?\"\nMore: AI could defeat all of us combined\nDigital neuroscience\nI’ve previously argued that it could be inherently difficult to measure whether AI systems are safe, for reasons such as: AI systems that are not deceptive probably look like AI systems that are so good at deception that they hide all evidence of it, in any way we can easily measure. \nUnless we can “read their minds!”\nCurrently, today’s leading AI research is in the genre of “black-box trial-and-error.” An AI tries a task; it gets “encouragement” or “discouragement” based on whether it does the task well; it tweaks the wiring of its “digital brain” to improve next time; it improves at the task; but we humans aren’t able to make much sense of its “digital brain” or say much about its “thought process.” \n(Click to expand) Why are AI systems \"black boxes\" that we can't understand the inner workings of? \nWhat I mean by “black-box trial-and-error” is explained briefly in an old Cold Takes post, and in more detail in more technical pieces by Ajeya Cotra (section I linked to) and Richard Ngo (section 2). Here’s a quick, oversimplified characterization.\nToday, the most common way of building an AI system is by using an \"artificial neural network\" (ANN), which you might think of sort of like a \"digital brain\" that starts in an empty (or random) state: it hasn't yet been wired to do specific things. A process something like this is followed:\nThe AI system is given some sort of task.\nThe AI system tries something, initially something pretty random.\nThe AI system gets information about how well its choice performed, and/or what would’ve gotten a better result. Based on this, it “learns” by tweaking the wiring of the ANN (“digital brain”) - literally by strengthening or weakening the connections between some “artificial neurons” and others. The tweaks cause the ANN to form a stronger association between the choice it made and the result it got. \nAfter enough tries, the AI system becomes good at the task (it was initially terrible). \nBut nobody really knows anything about how or why it’s good at the task now. The development work has gone into building a flexible architecture for it to learn well from trial-and-error, and into “training” it by doing all of the trial and error. We mostly can’t “look inside the AI system to see how it’s thinking.”\nFor example, if we want to know why a chess-playing AI such as AlphaZero made some particular chess move, we can't look inside its code to find ideas like \"Control the center of the board\" or \"Try not to lose my queen.\" Most of what we see is just a vast set of numbers, denoting the strengths of connections between different artificial neurons. As with a human brain, we can mostly only guess at what the different parts of the \"digital brain\" are doing.\nSome AI research (example)2 is exploring how to change this - how to decode an AI system’s “digital brain.” This research is in relatively early stages - today, it can “decode” only parts of AI systems (or fully decode very small, deliberately simplified AI systems).\nAs AI systems advance, it might get harder to decode them - or easier, if we can start to use AI for help decoding AI, and/or change AI design techniques so that AI systems are less “black box”-ish. \nI think there is a wide range of possibilities here, e.g.:\nFailure: “digital brains” keep getting bigger, more complex, and harder to make sense of, and so “digital neuroscience” generally stays about as hard to learn from as human neuroscience. In this world, we wouldn’t have anything like “lie detection” for AI systems engaged in deceptive behavior.\nBasic mind-reading: we’re able to get a handle on things like “whether an AI system is behaving deceptively, e.g. whether it has internal representations of ‘beliefs’ about the world that contradict its statements” and “whether an AI system is aiming to accomplish some strange goal we didn’t intend it to.” \nIt may be hard to fix things like this by just continuing trial-and-error-based training (perhaps because we worry that AI systems are manipulating their own “digital brains” - see later bullet point). \nBut we’d at least be able to get early warnings of potential problems, or early evidence that we don’t have a problem, and adjust our level of caution appropriately. This sort of mind-reading could also be helpful with AI checks and balances (below).\nAdvanced mind-reading: we’re able to understand an AI system’s “thought process” in detail (what observations and patterns are the main reasons it’s behaving as it is), understand how any worrying aspects of this “thought process” (such as unintended aims) came about, and make lots of small adjustments until we can verify that an AI system is free of unintended aims or deception.\nMind-writing (digital neurosurgery): we’re able to alter a “digital brain” directly, rather than just via the “trial-and-error” process discussed earlier.\nOne potential failure mode for digital neuroscience is if AI systems end up able to manipulate their own “digital brains.” This could lead “digital neuroscience” to have the same problem as other AI safety research: if we’re shutting down or negatively reinforcing AI systems that appear to have unsafe “aims” based on our “mind-reading,” we might end up selecting for AI systems whose “digital brains” only appear safe. \nThis could be a real issue, especially if AI systems end up with far-beyond-human capabilities (more below). \nBut naively, an AI system manipulating its own “digital brain” to appear safe seems quite a bit harder than simply behaving deceptively. \nI should note that I’m lumping in much of the (hard-to-explain) research on the Eliciting Latent Knowledge (ELK) agenda under this category.3 The ELK agenda is largely4 about thinking through what kinds of “digital brain” patterns might be associated with honesty vs. deception, and trying to find some impossible-to-fake sign of honesty.\nHow likely is this to work? I think it’s very up-in-the-air right now. I’d say “digital neuroscience” is a young field, tackling a problem that may or may not prove tractable. If we have several decades before transformative AI, then I’d expect to at least succeed at “basic mind-reading,” whereas if we have less than a decade, I think that’s around 50/50. I think it’s less likely that we’ll succeed at some of the more ambitious goals, but definitely possible.\nLimited AI\nI previously discussed why AI systems could end up with “aims,” in the sense that they make calculations, choices and plans selected to reach a particular sort of state of the world. For example, chess-playing AIs “aim” for checkmate game states; a recommendation algorithm might “aim” for high customer engagement or satisfaction. I then argued that AI systems would do “whatever it takes” to get what they’re “aiming” at, even when this means deceiving and disempowering humans.\nBut AI systems won’t necessarily have the sorts of “aims” that risk trouble. Consider two different tasks you might “train” an AI to do, via trial-and-error (rewarding success at the task):\n“Write whatever code a particular human would write, if they were in your situation.”\n“Write whatever code accomplishes goal X [including coming up with things much better than a human could].”\nThe second of these seems like a recipe for having the sort of ambitious “aim” I’ve claimed is dangerous - it’s an open-ended invitation to do whatever leads to good performance on the goal. By contrast, the first is about imitating a particular human. It leaves a lot less scope for creative, unpredictable behavior and for having “ambitious” goals that lead to conflict with humans.\n(For more on this distinction, see my discussion of process-based optimization, although I’m not thrilled with this and hope to write something better later.)\nMy guess is that in a competitive world, people will be able to get more done, faster, with something like the second approach. But: \nMaybe the first approach will work better at first, and/or AI developers will deliberately stick with the first approach as much as they can for safety reasons.\nAnd maybe that will be enough to build AI systems that can, themselves, do huge amounts of AI alignment research applicable to future, less limited systems. Or enough to build AI systems that can do other useful things, such as creating convincing demonstrations of the risks, patching security holes that dangerous AI systems would otherwise exploit, and more. (More on “how safe AIs can protect against dangerous AIs” in a future piece.)\nA risk that would remain: these AI systems might also be able to do huge amounts of research on making AIs bigger and more capable. So simply having “AI systems that can do alignment research” isn’t good enough by itself - we would need to then hope that the leading AI developers prioritize safety research rather than racing ahead with building more powerful systems, up until the point where they can make the more powerful systems safe.\nThere are a number of other ways in which we might “limit” AI systems to make them safe. One can imagine AI systems that are:\n“Short-sighted” or “myopic”: they might have “aims” (see previous post on what I mean by this term) that only apply to their short-run future. So an AI system might be aiming to gain more power, but only over the next few hours; such an AI system wouldn’t exhibit some of the behaviors I worry about, such as deceptively behaving in “safe” seeming ways in hopes of getting more power later.\n“Narrow”: they might have only a particular set of capabilities, so that e.g. they can help with AI alignment research but don’t understand human psychology and can’t deceive and manipulate humans.\n“Unambitious”: even if AI systems develop unintended aims, these might be aims they satisfy fairly easily, causing some strange behavior but not aiming to defeat all of humanity.\nA further source of hope: even if such “limited” systems aren’t very powerful on their own, we might be able to amplify them by setting up combinations of AIs that work together on difficult tasks. For example:\nOne “slow but deep” AI might do lots of analysis on every action it takes - for example, when it writes a line of code, it might consider hundreds of possibilities for that single line.\nAnother “fast and shallow” AI might be trained to quickly, efficiently imitate the sorts of actions the “slow but deep” one takes - writing the sorts of lines of code it produces after considering hundreds of possibilities.\nFurther AIs might be trained to summarize the analysis of other AIs, assign different parts of tasks to different AIs, etc. The result could be something like a “team” of AIs with different roles, such that a large number of limited AIs ends up quite a lot more powerful (and, depending on the details, also more dangerous) than any of the individual AIs. \nI’d guess that in a competitive world, AI systems that are not “limited” will - at least eventually - be more powerful, versatile and ultimately useful. But limited AIs might get us pretty far.\nHow likely is this to work? I’d guess that we’ll eventually be able to build very powerful AIs whose limits make them relatively safe. However, I’d also guess that AIs without such limits will eventually be more powerful. So I think a lot of how things go will come down to how cautious we are: will we stick with limited AIs until the point at which we make more powerful AIs safe? And I think it’s very hard to predict how much caution the world will have - it partly depends on how well-understood the issues discussed in this series become over time!\nAI checks and balances\nCentral to my worries is the idea that AIs could be good at “deceiving” humans: proficiently choosing courses of action that humans don’t fully understand, and don’t catch the problems with. This is important both for how AIs could develop unintended, “dangerous” aims in the first place and how they could execute on these aims by defeating humanity.\nWe could potentially mitigate this issue by using AIs to supervise and critique each other.\nA simple example: say we’re worried that AI systems might find computer security vulnerabilities, and use them opportunistically to gain power and resources. We could train some AI systems to specifically seek out, expose and patch computer security vulnerabilities. (A footnote explains why we might expect such specialized systems to patch most vulnerabilities they find, rather than exploiting the vulnerabilities as often as less specialized systems would.5) \nThis is a basic example of “using some AIs to identify potential problems other ones could cause.” \nA nice property: there might be some vulnerabilities that are much easier for AI systems to notice than humans; these are probably the ones we should be most worried about, and the ones most likely to be found by “vulnerability-finding AIs.”\n Analogously, we could train AI systems to do things like:\nProduce critiques of each others’ proposed plans, and explanations of how these plans could lead to unintended behavior.\nHelp out with the “digital neuroscience” goal above: analyze each others’ “digital brains” and produce compelling-to-humans discussions of whether there’s evidence there about whether the AI systems have unintended, dangerous “aims.”\nConduct randomized audits of each others’ behavior: looking for any way in which another AI system’s behavior might be deceptive or otherwise unintended. \nImportantly, auditor AIs would have different “incentives” (they get positive reinforcement for different behaviors), compared to the AIs they’re auditing: the audited AI system might be searching for any answer to a question that humans rate highly, whereas the “auditor” might be specifically searching for compelling arguments that an otherwise-highly-rated-answer is actually misleading.\n \nI previously argued that these sorts of audits have limitations - they could end up unwittingly training the audited AI systems to become effective at deceiving human auditors. But in this case, deceiving auditors might be more like deceiving copies of themselves, potentially a harder-to-learn behavior.\nThere are a lot of potential wrinkles here, which I discuss in a more detailed non-Cold-Takes piece. In brief:\nAI checks and balances might fail if different AI systems end up coordinating with each other against us. For example, it might end up being the case that even auditor AI systems are best able to accomplish their ambitious, unintended aims by refraining from warning humans about the most important problems they find with other AIs.\nWe might try to make this less likely by creating ensembles of AI systems that “check and balance” each other, while being different enough from each other to make coordination difficult. \nFor example, some AI systems might have limitations (see above) that make them less dangerous, while also less capable. \n \nIn order to pull off deception successfully, more powerful systems would then have to deceive not just humans but a whole variety of other (less capable, but in different ways) AI systems - some of which could be amplified, as discussed above.\nAnother way AI checks and balances could fail: when one AI system critiques another, the resulting “debate” could quickly become too hard for humans to referee effectively. For example, imagine that an eight-year-old has asked adult A to look for signs that adult B is deceiving them, and adult A is now arguing that this is happening while adult B is arguing that it isn’t. Can the eight-year-old figure out what the truth is? Seems genuinely uncertain (and dependent on the details)!\nThere is already some research on “using AIs to critique each other.” A recent example is this paper, which actually does show that an AI trained to critique its own answers can surface helpful critiques that help humans rate its answers more accurately.\nOther possibilities\nI discuss possible hopes in more detail in an Alignment Forum piece. And I think there is significant scope for “unknown unknowns”: researchers working on AI safety might come up with approaches that nobody has thought of yet.\nHigh-level fear: things get too weird, too fast\nRather than end on a positive note, I want to talk about a general dynamic that feels like it could make the situation very difficult, and make it hard for any of the above hopes to work out.\nTo quote from my previous piece:\nMaybe at some point, AI systems will be able to do things like:\nCoordinate with each other incredibly well, such that it's hopeless to use one AI to help supervise another.\nPerfectly understand human thinking and behavior, and know exactly what words to say to make us do what they want - so just letting an AI send emails or write Tumblr posts gives it vast power over the world.\nManipulate their own \"digital brains,\" so that our attempts to \"read their minds\" backfire and mislead us.\nReason about the world (that is, make plans to accomplish their aims) in completely different ways from humans, with concepts like \"glooble\"6 that are incredibly useful ways of thinking about the world but that humans couldn't understand with centuries of effort.\nAt this point, whatever methods we've developed for making human-like AI systems safe, honest and restricted could fail - and silently, as such AI systems could go from \"being honest and helpful\" to \"appearing honest and helpful, while setting up opportunities to defeat humanity.\"\nI’m not wedded to any of the details above, but I think the general dynamic in which “AI systems get extremely powerful, strange, and hard to deal with very quickly” could happen for a few different reasons:\nThe nature of AI development might just be such that we very quickly go from having very weak AI systems to having “superintelligent” ones. How likely this is has been debated a lot.7\nEven if AI improves relatively slowly, we might initially have a lot of success with things like “AI checks and balances,” but continually make more and more capable AI systems - such that they eventually become extraordinarily capable and very “alien” to us, at which point previously-effective methods break down. (More)\nThe most likely reason this would happen, in my view, is that we - humanity - choose to move too fast. It’s easy to envision a world in which everyone is in a furious race to develop more powerful AI systems than everyone else - focused on “competition” rather than “caution” (more on the distinction here) - and everything accelerates dramatically once we’re able to use AI systems to automate scientific and technological advancement.\nSo … is AI going to defeat humanity or is everything going to be fine?\nI don’t know! There are a number of ways we might be fine, and a number of ways we might not be. I could easily see this century ending in humans defeated or in a glorious utopia. You could maybe even think of it as the most important century.\nSo far, I’ve mostly just talked about the technical challenges of AI alignment: why AI systems might end up misaligned, and how we might design them to avoid that outcome. In future pieces, I’ll go into a bit more depth on some of the political and strategic challenges (e.g., what AI companies and governments might do to reduce the risk of a furious race to deploy dangerous AI systems), and work my way toward the question: “What can we do today to improve the odds that things go well?”Footnotes\nE.g. ↩\n Disclosure: my wife Daniela is President and co-founder of Anthropic, which employs prominent researchers in “mechanistic interpretability” and hosts the site I link to for the term. ↩\n Disclosure: I’m on the board of ARC, which wrote this document. ↩\n Though not entirely ↩\n The basic idea:\nA lot of security vulnerabilities might be the kind of thing where it’s clear that there’s some weakness in the system, but it’s not immediately clear how to exploit this for gain. An AI system with an unintended “aim” might therefore “save” knowledge about the vulnerability until it encounters enough other vulnerabilities, and the right circumstances, to accomplish its aim.\n\tBut now imagine an AI system that is trained and rewarded exclusively for finding and patching such vulnerabilities. Unlike with the first system, revealing the vulnerability gets more positive reinforcement than just about anything else it can do (and an AI that reveals no such vulnerabilities will perform extremely poorly). It thus might be much more likely than the previous system to do so, rather than simply leaving the vulnerability in place in case it’s useful later.\n\tAnd now imagine that there are multiple AI systems trained and rewarded for finding and patching such vulnerabilities, with each one needing to find some vulnerability overlooked by others in order to achieve even moderate performance. These systems might also have enough variation that it’s hard for one such system to confidently predict what another will do, which could further lower the gains to leaving the vulnerability in place. \n ↩This is a concept that only I understand.  ↩\n See here, here, and here. Also see the tail end of this Wait but Why piece, which draws on similar intuitions to the longer treatment in Superintelligence ↩\n", "url": "https://www.cold-takes.com/high-level-hopes-for-ai-alignment/", "title": "High-level hopes for AI alignment", "source": "cold.takes", "source_type": "blog", "date_published": "2022-12-15", "id": "116536a6b86ca83f7dba443f64199bcd"} -{"text": " \nIn previous pieces, I argued that there's a real and large risk of AI systems' developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening.\nA young, growing field of AI safety research tries to reduce this risk, by finding ways to ensure that AI systems behave as intended (rather than forming ambitious aims of their own and deceiving and manipulating humans as needed to accomplish them). \nMaybe we'll succeed in reducing the risk, and maybe we won't. Unfortunately, I think it could be hard to know either way. This piece is about four fairly distinct-seeming reasons that this could be the case - and that AI safety could be an unusually difficult sort of science.\nThis piece is aimed at a broad audience, because I think it's important for the challenges here to be broadly understood. I expect powerful, dangerous AI systems to have a lot of benefits (commercial, military, etc.), and to potentially appear safer than they are - so I think it will be hard to be as cautious about AI as we should be. I think our odds look better if many people understand, at a high level, some of the challenges in knowing whether AI systems are as safe as they appear.\nFirst, I'll recap the basic challenge of AI safety research, and outline what I wish AI safety research could be like. I wish it had this basic form: \"Apply a test to the AI system. If the test goes badly, try another AI development method and test that. If the test goes well, we're probably in good shape.\" I think car safety research mostly looks like this; I think AI capabilities research mostly looks like this.\nThen, I’ll give four reasons that apparent success in AI safety can be misleading. \n“Great news - I’ve tested this AI and it looks safe.” Why might we still have a problem?\n \nProblem\nKey question\nExplanation\nThe Lance Armstrong problem\nDid we get the AI to be actually safe or good at hiding its dangerous actions?\nWhen dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “appearing to behave well.”\nWhen professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.\n \nThe King Lear problem\nThe AI is (actually) well-behaved when humans are in control. Will this transfer to when AIs are in control?\nIt's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't. \nAIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to take control of the world entirely. It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation. \nLike King Lear trying to decide how much power to give each of his daughters before abdicating the throne.\n \nThe lab mice problem\nToday's \"subhuman\" AIs are safe.What about future AIs with more human-like abilities?\nToday's AI systems aren't advanced enough to exhibit the basic behaviors we want to study, such as deceiving and manipulating humans.\nLike trying to study medicine in humans by experimenting only on lab mice.\n \nThe first contact problem\nImagine that tomorrow's \"human-like\" AIs are safe. How will things go when AIs have capabilities far beyond humans'?\nAI systems might (collectively) become vastly more capable than humans, and it's ... just really hard to have any idea what that's going to be like. As far as we know, there has never before been anything in the galaxy that's vastly more capable than humans in the relevant ways! No matter what we come up with to solve the first three problems, we can't be too confident that it'll keep working if AI advances (or just proliferates) a lot more. \nLike trying to plan for first contact with extraterrestrials (this barely feels like an analogy).\n \nI'll close with Ajeya Cotra's \"young businessperson\" analogy, which in some sense ties these concerns together. A future piece will discuss some reasons for hope, despite these problems.\nRecap of the basic challenge\nA previous piece laid out the basic case for concern about AI misalignment. In brief: if extremely capable AI systems are developed using methods like the ones AI developers use today, it seems like there's a substantial risk that:\nThese AIs will develop unintended aims (states of the world they make calculations and plans toward, as a chess-playing AI \"aims\" for checkmate);\nThese AIs will deceive, manipulate, and overpower humans as needed to achieve those aims;\nEventually, this could reach the point where AIs take over the world from humans entirely.\nI see AI safety research as trying to design AI systems that won't aim to deceive, manipulate or defeat humans - even if and when these AI systems are extraordinarily capable (and would be very effective at deception/manipulation/defeat if they were to aim at it). That is: AI safety research is trying to reduce the risk of the above scenario, even if (as I've assumed) humans rush forward with training powerful AIs to do ever-more ambitious things.\n(Click to expand) More detail on why AI could make this the most important century \nIn The Most Important Century, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.\nThis page has a ~10-page summary of the series, as well as links to an audio version, podcasts, and the full series.\nThe key points I argue for in the series are:\nThe long-run future is radically unfamiliar. Enough advances in technology could lead to a long-lasting, galaxy-wide civilization that could be a radical utopia, dystopia, or anything in between.\nThe long-run future could come much faster than we think, due to a possible AI-driven productivity explosion.\nThe relevant kind of AI looks like it will be developed this century - making this century the one that will initiate, and have the opportunity to shape, a future galaxy-wide civilization.\nThese claims seem too \"wild\" to take seriously. But there are a lot of reasons to think that we live in a wild time, and should be ready for anything.\nWe, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions. But right now, we aren't ready for this.\n(Click to expand) Why would AI \"aim\" to defeat humanity?\nA previous piece argued that if today’s AI development methods lead directly to powerful enough AI systems, disaster is likely by default (in the absence of specific countermeasures). \nIn brief:\nModern AI development is essentially based on “training” via trial-and-error. \nIf we move forward incautiously and ambitiously with such training, and if it gets us all the way to very powerful AI systems, then such systems will likely end up aiming for certain states of the world (analogously to how a chess-playing AI aims for checkmate).\nAnd these states will be other than the ones we intended, because our trial-and-error training methods won’t be accurate. For example, when we’re confused or misinformed about some question, we’ll reward AI systems for giving the wrong answer to it - unintentionally training deceptive behavior.\nWe should expect disaster if we have AI systems that are both (a) powerful enough to defeat humans and (b) aiming for states of the world that we didn’t intend. (“Defeat” means taking control of the world and doing what’s necessary to keep us out of the way; it’s unclear to me whether we’d be literally killed or just forcibly stopped1 from changing the world in ways that contradict AI systems’ aims.)\n(Click to expand) How could AI defeat humanity?\nIn a previous piece, I argue that AI systems could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nBy defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.\nOne way this could happen is if AI became extremely advanced, to the point where it had \"cognitive superpowers\" beyond what humans can do. In this case, a single AI system (or set of systems working together) could imaginably:\nDo its own research on how to build a better AI system, which culminates in something that has incredible other abilities.\nHack into human-built software across the world.\nManipulate human psychology.\nQuickly generate vast wealth under the control of itself or any human allies.\nCome up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.\nDevelop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.\nHowever, my piece also explores what things might look like if each AI system basically has similar capabilities to humans. In this case:\nHumans are likely to deploy AI systems throughout the economy, such that they have large numbers and access to many resources - and the ability to make copies of themselves. \nFrom this starting point, AI systems with human-like (or greater) capabilities would have a number of possible ways of getting to the point where their total population could outnumber and/or out-resource humans.\nI address a number of possible objections, such as \"How can AIs be dangerous without bodies?\"\nMore: AI could defeat all of us combined\nI wish AI safety research were straightforward\nI wish AI safety research were like car safety research.2\nWhile I'm sure this is an oversimplification, I think a lot of car safety research looks basically like this:\nCompanies carry out test crashes with test cars. The results give a pretty good (not perfect) indication of what would happen in a real crash.\nDrivers try driving the cars in low-stakes areas without a lot of traffic. Things like steering wheel malfunctions will probably show up here; if they don't and drivers are able to drive normally in low-stakes areas, it's probably safe to drive the car in traffic.\nNone of this is perfect, but the occasional problem isn't, so to speak, the end of the world. The worst case tends to be a handful of accidents, followed by a recall and some changes to the car's design validated by further testing.\nOverall, if we have problems with car safety, we'll probably be able to observe them relatively straightforwardly under relatively low-stakes circumstances.\nIn important respects, many types of research and development have this basic property: we can observe how things are going during testing to get good evidence about how they'll go in the real world. Further examples include medical research,3 chemistry research,4 software development,5 etc. \nMost AI research looks like this as well. People can test out what an AI system is capable of reliably doing (e.g., translating speech to text), before integrating it into some high-stakes commercial product like Siri. This works both for ensuring that the AI system is capable (e.g., that it does a good job with its tasks) and that it's safe in certain ways (for example, if we're worried about toxic language, testing for this is relatively straightforward).\nThe rest of this piece will be about some of the ways in which \"testing\" for AI safety fails to give us straightforward observations about whether, once AI systems are deployed in the real world, the world will actually be safe.\nWhile all research has to deal with some differences between testing and the real world, I think the challenges I'll be going through are unusual ones.\nFour problems\n(1) The Lance Armstrong problem: is the AI actually safe or good at hiding its dangerous actions?\nFirst, let's imagine that:\nWe have AI systems available that can do roughly everything a human can, with some different strengths and weaknesses but no huge difference in \"overall capabilities\" or economic value per hour of work. \nWe're observing early signs that AI systems behave in unintended, deceptive ways, such as giving wrong answers to questions we ask, or writing software that falsifies metrics instead of doing the things the metrics were supposed to measure (e.g., software meant to make a website run faster might instead falsify metrics about its loading time).\nWe theorize that modifying the AI training in some way6 will make AI systems less likely to behave deceptively. We try it out, and find that, in fact, our AI systems seem to be behaving better than before - we are finding fewer incidents in which they behaved in unintended or deceptive ways. \nBut that's just a statement about what we're noticing. Which of the following just happened:\nDid we just train our AI systems to be less deceptive?\nDid we just train our AI systems to be better at deception, and so to make us think they became less deceptive?\nDid we just train our AI systems to be better at calculating when they might get caught in deception, and so to be less deceptive only when the deception would otherwise be caught? \nThis one could be useful! Especially if we're able to set up auditing systems in many real-world situations, such that we could expect deception to be caught a lot of the time. But it does leave open the King Lear problem covered next.\n \n(...Or some combination of the three?)\nWe're hoping to be able to deploy AI systems throughout the economy, so - just like human specialists - they will almost certainly have some opportunities to be deceptive without being caught. The fact that they appear honest in our testing is not clear comfort against this risk.\nThe analogy here is to competitive cyclist Lance Armstrong. Armstrong won the Tour de France race 7 times in a row, while many of his competitors were caught using performance-enhancing drugs and disqualified. But more than 5 years after his last win, an investigation \"concluded that Armstrong had used performance-enhancing drugs over the course of his career[5] and named him as the ringleader of 'the most sophisticated, professionalized and successful doping program that sport has ever seen'.\" Now the list of Tour de France winners looks like this:\nA broader issue here is that when AI systems become capable enough, AI safety research starts to look more like social sciences (studying human beings) than like natural sciences. Social sciences are generally less rigorous and harder to get clean results from, and one factor in this is that it can be hard to study someone who's aware they're being studied.7\nTwo broad categories of research that might help with the Lance Armstrong problem:\nMechanistic interpretability8 can be thought of analyzing the \"digital brains\" of AI systems (not just analyzing their behavior and performance.) Currently, AI systems are black boxes in the sense that they perform well on tasks, but we can't say much about how they are doing it; mechanistic interpretability aims to change this, which could give us the ability to \"mind-read\" AIs and detect deception. (There could still be a risk that AI systems are arranging their own \"digital brains\" in misleading ways, but this seems quite a bit harder than simply behaving deceptively.)\nSome researchers work on \"scalable supervision\" or \"competitive supervision.\" The idea is that if we are training an AI system that might become deceptive, we set up some supervision process for it that we expect to reliably catch any attempts at deception. This could be because the supervision process itself uses AI systems with more resources than the one being supervised, or because it uses a system of randomized audits where extra effort is put into catching deception.\n \n(Click to expand) Why are AI systems \"black boxes\" that we can't understand the inner workings of?\nI explain this briefly in an old Cold Takes post; it's explained in more detail in more technical pieces by Ajeya Cotra (section I linked to) and Richard Ngo (section 2). \nWhat I mean by “black-box trial-and-error” is explained briefly in an old Cold Takes post, and in more detail in more technical pieces by Ajeya Cotra (section I linked to) and Richard Ngo (section 2). Here’s a quick, oversimplified characterization.\nToday, the most common way of building an AI system is by using an \"artificial neural network\" (ANN), which you might think of sort of like a \"digital brain\" that starts in an empty (or random) state: it hasn't yet been wired to do specific things. A process something like this is followed:\nThe AI system is given some sort of task.\nThe AI system tries something, initially something pretty random.\nThe AI system gets information about how well its choice performed, and/or what would’ve gotten a better result. Based on this, it “learns” by tweaking the wiring of the ANN (“digital brain”) - literally by strengthening or weakening the connections between some “artificial neurons” and others. The tweaks cause the ANN to form a stronger association between the choice it made and the result it got. \nAfter enough tries, the AI system becomes good at the task (it was initially terrible). \nBut nobody really knows anything about how or why it’s good at the task now. The development work has gone into building a flexible architecture for it to learn well from trial-and-error, and into “training” it by doing all of the trial and error. We mostly can’t “look inside the AI system to see how it’s thinking.”\nFor example, if we want to know why a chess-playing AI such as AlphaZero made some particular chess move, we can't look inside its code to find ideas like \"Control the center of the board\" or \"Try not to lose my queen.\" Most of what we see is just a vast set of numbers, denoting the strengths of connections between different artificial neurons. As with a human brain, we can mostly only guess at what the different parts of the \"digital brain\" are doing.\n(2) The King Lear problem: how do you test what will happen when it's no longer a test?\nThe Shakespeare play King Lear opens with the King (Lear) stepping down from the throne, and immediately learning that he has left his kingdom to the wrong two daughters. Loving and obsequious while he was deciding on their fate,9 they reveal their contempt for him as soon as he's out of power and they're in it.\nIf we're building AI systems that can reason like humans, dynamics like this become a potential issue. \nI previously noted that an AI with any ambitious aim - or just an AI that wants to avoid being shut down or modified - might calculate that the best way to do this is by behaving helpfully and safely in all \"tests\" humans can devise. But once there is a real-world opportunity to disempower humans for good, that same aim could cause the AI to disempower humans.\nIn other words:\n(A) When we're developing and testing AI systems, we have the power to decide which systems will be modified or shut down and which will be deployed into the real world. (Like King Lear deciding who will inherit his kingdom.)\n(B) But at some later point, these systems could be operating in the economy, in high numbers with a lot of autonomy. (This possibility is spelled out/visualized a bit more here and here.) At that point, they may have opportunities to defeat all of humanity such that we never make decisions about them again. (Like King Lear's daughters after they've taken control.)\n(Click to expand) How could AI defeat humanity?\nIn a previous piece, I argue that AI systems could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nBy defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.\nOne way this could happen is if AI became extremely advanced, to the point where it had \"cognitive superpowers\" beyond what humans can do. In this case, a single AI system (or set of systems working together) could imaginably:\nDo its own research on how to build a better AI system, which culminates in something that has incredible other abilities.\nHack into human-built software across the world.\nManipulate human psychology.\nQuickly generate vast wealth under the control of itself or any human allies.\nCome up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.\nDevelop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.\nHowever, my piece also explores what things might look like if each AI system basically has similar capabilities to humans. In this case:\nHumans are likely to deploy AI systems throughout the economy, such that they have large numbers and access to many resources - and the ability to make copies of themselves. \nFrom this starting point, AI systems with human-like (or greater) capabilities would have a number of possible ways of getting to the point where their total population could outnumber and/or out-resource humans.\nI address a number of possible objections, such as \"How can AIs be dangerous without bodies?\"\nMore: AI could defeat all of us combined\nIf AI systems can detect the difference between (A) and (B) above, then their behavior might systematically change from one to the other - and there's no clear way to test their behavior in (B).\n(Click to expand) The Volkswagen emissions scandal\nIn 2015, the US Environmental Protection Agency \"found that Volkswagen had intentionally programmed ... engines to activate their emissions controls only during laboratory emissions testing, which caused the vehicles' [nitrogen oxide] output to meet US standards during regulatory testing. However, the vehicles emitted up to 40 times more [nitrogen oxide] in real-world driving.\" (From Wikipedia)\nThis is an example of the \"King Lear problem\": there was a deliberate distinction made where the engines were behaving differently during tests and during real-world use. I don't know much about this incident, but I'm guessing that regulators were able to detect the issue because of (a) independent tests under road conditions; (b) being able to understand the \"defeat device\" that the Volkswagen cars used to create the misleading readings. \nNeither of these would necessarily apply to the King Lear problem as described here: (a) you can test a car under road conditions to see its emissions, but you can't (or at least, shouldn't) test how an AI behaves when it has an opportunity to disempower humans entirely; (b) AI systems may be black boxes such that it's hard to understand what's going on inside them.\nIn general, modern machine learning researchers consider it challenging to handle what's called \"distributional shift\": systematic differences between situations AIs were trained on and situations they're now in. To me, the King Lear problem looks like arguably the most inconvenient possible distributional shift: AI systems risk behaving in unexpected ways just as (and in fact, because) they're now able to defeat humanity, rather than being in a controlled test environment.\nSome lines of research that might help here:\nIf we could solve the Lance Armstrong problem robustly enough - such that we could be confident AIs were never behaving deceptively - we could simply prompt AIs to answer questions like \"Would AI system X disempower humans given an opportunity to do so?\" The digital-brain-based techniques noted above could imaginably get us here.\nThere might be ways of specifically trying to target the worst-case behavior of AI systems, so that they are nearly guaranteed not to behave in certain ways regardless of their situation. This could look something roughly like \"simulating cases where an AI system has an opportunity to disempower humans, and giving it negative reinforcement for choosing to do so.\" More on this sort of approach, along with some preliminary ongoing work, here.\n(3) The lab mice problem: the AI systems we'd like to study don't exist today \nAbove, I said: \"when AI systems become capable enough, AI safety research starts to look more like social sciences (studying human beings) than like natural sciences.\" But today, AI systems aren't capable enough, which makes it especially hard to have a meaningful test bed and make meaningful progress.\nSpecifically, we don't have much in the way of AI systems that seem to deceive and manipulate their supervisors,10 the way I worry that they might when they become capable enough.\nIn fact, it's not 100% clear that AI systems could learn to deceive and manipulate supervisors even if we deliberately tried to train them to do it. This makes it hard to even get started on things like discouraging and detecting deceptive behavior. \nI think AI safety research is a bit unusual in this respect: most fields of research aren't explicitly about \"solving problems that don't exist yet.\" (Though a lot of research ends up useful for more important problems than the original ones it's studying.) As a result, doing AI safety research today is a bit like trying to study medicine in humans by experimenting only on lab mice (no human subjects available).\nThis does not mean there's no productive AI safety research to be done! (See the previous sections.) It just means that the research being done today is somewhat analogous to research on lab mice: informative and important up to a point, but only up to a point.\nHow bad is this problem? I mean, I do think it's a temporary one: by the time we're facing the problems I worry about, we'll be able to study them more directly. The concern is that things could be moving very quickly by that point: by the time we have AIs with human-ish capabilities, companies might be furiously making copies of those AIs and using them for all kinds of things (including both AI safety research and further research on making AI systems faster, cheaper and more capable).\nSo I do worry about the lab mice problem. And I'd be excited to see more effort on making \"better model organisms\": AI systems that show early versions of the properties we'd most like to study, such as deceiving their supervisors. (I even think it would be worth training AIs specifically to do this;11 if such behaviors are going to emerge eventually, I think it's best for them to emerge early while there's relatively little risk of AIs' actually defeating humanity.)\n(4) The \"first contact\" problem: how do we prepare for a world where AIs have capabilities vastly beyond those of humans?\nAll of this piece so far has been about trying to make safe \"human-like\" AI systems.\nWhat about AI systems with capabilities far beyond humans - what Nick Bostrom calls superintelligent AI systems?\nMaybe at some point, AI systems will be able to do things like:\nCoordinate with each other incredibly well, such that it's hopeless to use one AI to help supervise another.\nPerfectly understand human thinking and behavior, and know exactly what words to say to make us do what they want - so just letting an AI send emails or write tweets gives it vast power over the world.\nManipulate their own \"digital brains,\" so that our attempts to \"read their minds\" backfire and mislead us.\nReason about the world (that is, make plans to accomplish their aims) in completely different ways from humans, with concepts like \"glooble\"12 that are incredibly useful ways of thinking about the world but that humans couldn't understand with centuries of effort.\n \nAt this point, whatever methods we've developed for making human-like AI systems safe, honest, and restricted could fail - and silently, as such AI systems could go from \"behaving in honest and helpful ways\" to \"appearing honest and helpful, while setting up opportunities to defeat humanity.\"\nSome people think this sort of concern about \"superintelligent\" systems is ridiculous; some13 seem to consider it extremely likely. I'm not personally sympathetic to having high confidence either way.\nBut additionally, a world with huge numbers of human-like AI systems could be strange and foreign and fast-moving enough to have a lot of this quality.\nTrying to prepare for futures like these could be like trying to prepare for first contact with extaterrestrials - it's hard to have any idea what kinds of challenges we might be dealing with, and the challenges might arise quickly enough that we have little time to learn and adapt.\nThe young businessperson\nFor one more analogy, I'll return to the one used by Ajeya Cotra here:\nImagine you are an eight-year-old whose parents left you a $1 trillion company and no trusted adult to serve as your guide to the world. You must hire a smart adult to run your company as CEO, handle your life the way that a parent would (e.g. decide your school, where you’ll live, when you need to go to the dentist), and administer your vast wealth (e.g. decide where you’ll invest your money).\n You have to hire these grownups based on a work trial or interview you come up with -- you don't get to see any resumes, don't get to do reference checks, etc. Because you're so rich, tons of people apply for all sorts of reasons. (More)\nIf your applicants are a mix of \"saints\" (people who genuinely want to help), \"sycophants\" (people who just want to make you happy in the short run, even when this is to your long-term detriment) and \"schemers\" (people who want to siphon off your wealth and power for themselves), how do you - an eight-year-old - tell the difference?\nThis analogy combines most of the worries above. \nThe young businessperson has trouble knowing whether candidates are truthful in interviews, and trouble knowing whether any work trial actually went well or just seemed to go well due to deliberate deception. (The Lance Armstrong problem.)\nJob candidates could have bad intentions that don't show up until they're in power (the King Lear Problem).\nIf the young businessperson were trying to prepare for this situation before actually being in charge of the company, they could have a lot of trouble simulating it (the lab mice problem).\nAnd it's generally just hard for an eight-year-old to have much grasp at all on the world of adults - to even think about all the things they should be thinking about (the first contact problem).\n \nSeems like a tough situation.\n Previously, I talked about the dangers of AI if AI developers don't take specific countermeasures. This piece has tried to give a sense of why, even if they are trying to take countermeasures, doing so could be hard. The next piece will talk about some ways we might succeed anyway.Footnotes\n Or persuaded (in a “mind hacking” sense) or whatever. ↩\n Research? Testing. Whatever. ↩\n Drugs can be tested in vitro, then in animals, then in humans. At each stage, we can make relatively straightforward observations about whether the drugs are working, and these are reasonably predictive of how they'll do at the next stage. ↩\n You can generally see how different compounds interact in a controlled environment, before rolling out any sort of large-scale processes or products, and the former will tell you most of what you need to know about the latter. ↩\n New software can be tested by a small number of users before being rolled out to a large number, and the initial tests will probably find most (not all) of the bugs and hiccups. ↩\n Such as:\nBeing more careful to avoid wrong answers that can incentivize deception\nConducting randomized \"audits\" where we try extra hard to figure out the right answer to a question, and give an AI extra negative reinforcement if it gives an answer that we would have believed if not for the audit (this is \"extra negative reinforcement for wrong answers that superficially look right\")\nUsing methods along the lines of \"AI safety via debate\" ↩\n Though there are other reasons social sciences are especially hard, such as the fact that there are often big limits to what kinds of experiments are ethical, and the fact that it's often hard to make clean comparisons between differing populations. ↩\n This paper is from Anthropic, a company that my wife serves as President of. ↩\n Like, he actually asks them to talk about their love for him just before he decides on what share of the realm they'll get. Smh ↩\nThis paper is a potential example, but its results seem pretty brittle. ↩\n E.g., I think it would be interesting to train AI coding systems to write underhanded C: code that looks benign to a human inspector, but does unexpected things when run. They could be given negative reinforcement when humans can correctly identify that the code will do unintended things, and positive reinforcement when the code achieves the particular things that humans are attempting to stop. This would be challenging with today's AI systems, but not necessarily impossible. ↩\n This is a concept that only I understand. ↩\n E.g., see the discussion of the \"hard left turn\" here by Nate Soares, head of MIRI. My impression is that others at MIRI, including Eliezer Yudkowsky, have a similar picture. ↩\n", "url": "https://www.cold-takes.com/ai-safety-seems-hard-to-measure/", "title": "AI Safety Seems Hard to Measure", "source": "cold.takes", "source_type": "blog", "date_published": "2022-12-08", "id": "2599e20bb88afd0d50f17a1318a629a1"} -{"text": "I’ve argued that AI systems could defeat all of humanity combined, if (for whatever reason) they were directed toward that goal.\nHere I’ll explain why I think they might - in fact - end up directed toward that goal. Even if they’re built and deployed with good intentions.\nIn fact, I’ll argue something a bit stronger than that they might end up aimed toward that goal. I’ll argue that if today’s AI development methods lead directly to powerful enough AI systems, disaster is likely1 by default (in the absence of specific countermeasures). \nUnlike other discussions of the AI alignment problem,3 this post will discuss the likelihood4 of AI systems defeating all of humanity (not more general concerns about AIs being misaligned with human intentions), while aiming for plain language, conciseness, and accessibility to laypeople, and focusing on modern AI development paradigms. I make no claims to originality, and list some key sources and inspirations in a footnote.5\nSummary of the piece:\nMy basic assumptions. I assume the world could develop extraordinarily powerful AI systems in the coming decades. I previously examined this idea at length in the most important century series. \nFurthermore, in order to simplify the analysis:\nI assume that such systems will be developed using methods similar to today’s leading AI development methods, and in a world that’s otherwise similar to today’s. (I call this nearcasting.)\nI assume that AI companies/projects race forward to build powerful AI systems, without specific attempts to prevent the problems I discuss in this piece. Future pieces will relax this assumption, but I think it is an important starting point to get clarity on what the default looks like.\nAI “aims.” I talk a fair amount about why we might think of AI systems as “aiming” toward certain states of the world. I think this topic causes a lot of confusion, because:\nOften, when people talk about AIs having goals and making plans, it sounds like they’re overly anthropomorphizing AI systems - as if they expect them to have human-like motivations and perhaps evil grins. This can make the whole topic sound wacky and out-of-nowhere.\nBut I think there are good reasons to expect that AI systems will “aim” for particular states of the world, much like a chess-playing AI “aims” for a checkmate position - making choices, calculations and even plans to get particular types of outcomes. For example, people might want AI assistants that can creatively come up with unexpected ways of accomplishing whatever goal they’re given (e.g., “Get me a great TV for a great price”), even in some cases manipulating other humans (e.g., by negotiating) to get there. This dynamic is core to the risks I’m most concerned about: I think something that aims for the wrong states of the world is much more dangerous than something that just does incidental or accidental damage.\nDangerous, unintended aims. I’ll examine what sorts of aims AI systems might end up with, if we use AI development methods like today’s - essentially, “training” them via trial-and-error to accomplish ambitious things humans want.\nBecause we ourselves will often be misinformed or confused, we will sometimes give negative reinforcement to AI systems that are actually acting in our best interests and/or giving accurate information, and positive reinforcement to AI systems whose behavior deceives us into thinking things are going well. This means we will be, unwittingly, training AI systems to deceive and manipulate us. \nThe idea that AI systems could “deceive” humans - systematically making choices and taking actions that cause them to misunderstand what’s happening in the world - is core to the risk, so I’ll elaborate on this.\nFor this and other reasons, powerful AI systems will likely end up with aims other than the ones we intended. Training by trial-and-error is slippery: the positive and negative reinforcement we give AI systems will probably not end up training them just as we hoped.\nIf powerful AI systems have aims that are both unintended (by humans) and ambitious, this is dangerous. Whatever an AI system’s unintended aim: \nMaking sure it can’t be turned off is likely helpful in accomplishing the aim.\n \nControlling the whole world is useful for just about any aim one might have, and I’ve argued that advanced enough AI systems would be able to gain power over all of humanity.\nOverall, we should expect disaster if we have AI systems that are both (a) powerful enough to defeat humans and (b) aiming for states of the world that we didn’t intend.\nLimited and/or ambiguous warning signs. The risk I’m describing is - by its nature - hard to observe, for similar reasons that a risk of a (normal, human) coup can be hard to observe: the risk comes from actors that can and will engage in deception, finding whatever behaviors will hide the risk. If this risk plays out, I do think we’d see some warning signs - but they could easily be confusing and ambiguous, in a fast-moving situation where there are lots of incentives to build and roll out powerful AI systems, as fast as possible. Below, I outline how this dynamic could result in disaster, even with companies encountering a number of warning signs that they try to respond to.\nFAQ. An appendix will cover some related questions that often come up around this topic.\nHow could AI systems be “smart” enough to defeat all of humanity, but “dumb” enough to pursue the various silly-sounding “aims” this piece worries they might have? More\nIf there are lots of AI systems around the world with different goals, could they balance each other out so that no one AI system is able to defeat all of humanity? More\nDoes this kind of AI risk depend on AI systems’ being “conscious”?More\nHow can we get an AI system “aligned” with humans if we can’t agree on (or get much clarity on) what our values even are? More\nHow much do the arguments in this piece rely on “trial-and-error”-based AI development? What happens if AI systems are built in another way, and how likely is that? More\nCan we avoid this risk by simply never building the kinds of AI systems that would pose this danger? More\nWhat do others think about this topic - is the view in this piece something experts agree on? More\nHow “complicated” is the argument in this piece? More\nStarting assumptions\nI’ll be making a number of assumptions that some readers will find familiar, but others will find very unfamiliar. \nSome of these assumptions are based on arguments I’ve already made (in the most important century series). Some are for the sake of simplifying the analysis, for now (with more nuance coming in future pieces).\nHere I’ll summarize the assumptions briefly, and you can click to see more if it isn’t immediately clear what I’m assuming or why.\n“Most important century” assumption: we’ll soon develop very powerful AI systems, along the lines of what I previously called PASTA. (Click to expand)\nIn the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.\nI focus on a hypothetical kind of AI that I call PASTA, or Process for Automating Scientific and Technological Advancement. PASTA would be AI that can essentially automate all of the human activities needed to speed up scientific and technological advancement.\nUsing a variety of different forecasting approaches, I argue that PASTA seems more likely than not to be developed this century - and there’s a decent chance (more than 10%) that we’ll see it within 15 years or so.\nI argue that the consequences of this sort of AI could be enormous: an explosion in scientific and technological progress. This could get us more quickly than most imagine to a radically unfamiliar future.\nI’ve also argued that AI systems along these lines could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nFor more, see the most important century landing page. The series is available in many formats, including audio; I also provide a summary, and links to podcasts where I discuss it at a high level.\n“Nearcasting” assumption: such systems will be developed in a world that’s otherwise similar to today’s. (Click to expand)\nIt’s hard to talk about risks from transformative AI because of the many uncertainties about when and how such AI will be developed - and how much the (now-nascent) field of “AI safety research” will have grown by then, and how seriously people will take the risk, etc. etc. etc. So maybe it’s not surprising that estimates of the “misaligned AI” risk range from ~1% to ~99%.\nThis piece takes an approach I call nearcasting: trying to answer key strategic questions about transformative AI, under the assumption that such AI arrives in a world that is otherwise relatively similar to today's. \nYou can think of this approach like this: “Instead of asking where our ship will ultimately end up, let’s start by asking what destination it’s pointed at right now.” \nThat is: instead of trying to talk about an uncertain, distant future, we can talk about the easiest-to-visualize, closest-to-today situation, and how things look there - and then ask how our picture might be off if other possibilities play out. (As a bonus, it doesn’t seem out of the question that transformative AI will be developed extremely soon - 10 years from now or faster.6 If that’s the case, it’s especially urgent to think about what that might look like.)\n“Trial-and-error” assumption: such AI systems will be developed using techniques broadly in line with how most AI research is done today, revolving around black-box trial-and-error. (Click to expand)\nWhat I mean by “black-box trial-and-error” is explained briefly in an old Cold Takes post, and in more detail in more technical pieces by Ajeya Cotra (section I linked to) and Richard Ngo (section 2). Here’s a quick, oversimplified characterization:\nAn AI system is given some sort of task.\nThe AI system tries something, initially something pretty random.\nThe AI system gets information about how well its choice performed, and/or what would’ve gotten a better result. Based on this, it adjusts itself. You can think of this as if it is “encouraged/discouraged” to get it to do more of what works well. \nHuman judges may play a significant role in determining which answers are encouraged vs. discouraged, especially for fuzzy goals like “Produce helpful scientific insights.” \nAfter enough tries, the AI system becomes good at the task. \nBut nobody really knows anything about how or why it’s good at the task now. The development work has gone into building a flexible architecture for it to learn well from trial-and-error, and into “training” it by doing all of the trial and error. We mostly can’t “look inside the AI system to see how it’s thinking.” (There is ongoing work and some progress on the latter,7 but see footnote for why I don’t think this massively changes the basic picture I’m discussing here.8)\n \nThis is radically oversimplified, but conveys the basic dynamic at play for purposes of this post. The idea is that the AI system (the neural network in the middle) is choosing between different theories of what it should be doing. The one it’s using at a given time is in bold. When it gets negative feedback (red thumb), it eliminates that theory and moves to the next theory of what it should be doing.\nWith this assumption, I’m generally assuming that AI systems will do whatever it takes to perform as well as possible on their training tasks - even when this means engaging in complex, human-like reasoning about topics like “How does human psychology work, and how can it be exploited?” I’ve previously made my case for when we might expect AI systems to become this advanced and capable.\n“No countermeasures” assumption: AI developers move forward without any specific countermeasures to the concerns I’ll be raising below. (Click to expand)\nFuture pieces will relax this assumption, but I think it is an important starting point to get clarity on what the default looks like - and on what it would take for a countermeasure to be effective. \n(I also think there is, unfortunately, a risk that there will in fact be very few efforts to address the concerns I’ll be raising below. This is because I think that the risks will be less than obvious, and there could be enormous commercial (and other competitive) pressure to move forward quickly. More on that below.)\n“Ambition” assumption: people use black-box trial-and-error to continually push AI systems toward being more autonomous, more creative, more ambitious, and more effective in novel situations (and the pushing is effective). This one’s important, so I’ll say more:\nA huge suite of possible behaviors might be important for PASTA: making and managing money, designing new kinds of robots with novel abilities, setting up experiments involving exotic materials and strange conditions, understanding human psychology and the economy well enough to predict which developments will have a big impact, etc. I’m assuming we push ambitiously forward with developing AI systems that can do these things.\nI assume we’re also pushing them in a generally more “greedy/ambitious” direction. For example, one team of humans might use AI systems to do all the planning, scientific work, marketing, and hiring to create a wildly successful snack company; another might push their AI systems to create a competitor that is even more aggressive and successful (more addictive snacks, better marketing, workplace culture that pushes people toward being more productive, etc.)\n(Note that this pushing might take place even after AI systems are “generally intelligent” and can do most of the tasks humans can - there will still be a temptation to make them still more powerful.)\nI think this implies pushing in a direction of figuring out whatever it takes to get to certain states of the world and away from carrying out the same procedures over and over again.\nThe resulting AI systems seem best modeled as having “aims”: they are making calculations, choices, and plans to reach particular states of the world. (Not necessarily the same ones the human designers wanted!) The next section will elaborate on what I mean by this.\nWhat it means for an AI system to have an “aim”\nWhen people talk about the “motivations” or “goals” or “desires” of AI systems, it can be confusing because it sounds like they are anthropomorphizing AIs - as if they expect AIs to have dominance drives ala alpha-male psychology, or to “resent” humans for controlling them, etc.9\nI don’t expect these things. But I do think there’s a meaningful sense in which we can (and should) talk about things that an AI system is “aiming” to do. To give a simple example, take a board-game-playing AI such as Deep Blue (or AlphaGo):\nDeep Blue is given a set of choices to make (about which chess pieces to move).\nDeep Blue calculates what kinds of results each choice might have, and how it might fit into a larger plan in which Deep Blue makes multiple moves.\nIf a plan is more likely to result in a checkmate position for its side, Deep Blue is more likely to make whatever choices feed into that plan.\nIn this sense, Deep Blue is “aiming” for a checkmate position for its side: it’s finding the choices that best fit into a plan that leads there.\nNothing about this requires Deep Blue “desiring” checkmate the way a human might “desire” food or power. But Deep Blue is making calculations, choices, and - in an important sense - plans that are aimed toward reaching a particular sort of state.\nThroughout this piece, I use the word “aim” to refer to this specific sense in which an AI system might make calculations, choices and plans selected to reach a particular sort of state. I’m hoping this word feels less anthropomorphizing than some alternatives such as “goal” or “motivation” (although I think “goal” and “motivation,” as others usually use them on this topic, generally mean the same thing I mean by “aim” and should be interpreted as such).\nNow, instead of a board-game-playing AI, imagine a powerful, broad AI assistant in the general vein of Siri/Alexa/Google Assistant (though more advanced). Imagine that this AI assistant can use a web browser much as a human can (navigating to websites, typing text into boxes, etc.), and has limited authorization to make payments from a human’s bank account. And a human has typed, “Please buy me a great TV for a great price.” (For an early attempt at this sort of AI, see Adept’s writeup on an AI that can help with things like house shopping.)\nAs Deep Blue made choices about chess moves, and constructed a plan to aim for a “checkmate” position, this assistant might make choices about what commands to send over a web browser and construct a plan to result in a great TV for a great price. To sharpen the Deep Blue analogy, you could imagine that it’s playing a “game” whose goal is customer satisfaction, and making “moves” consisting of commands sent to a web browser (and “plans” built around such moves). \nI’d characterize this as aiming for some state of the world that the AI characterizes as “buying a great TV for a great price.” (We could, alternatively - and perhaps more correctly - think of the AI system as aiming for something related but not exactly the same, such as getting a high satisfaction score from its user.)\nIn this case - more than with Deep Blue - there is a wide variety of “moves” available. By entering text into a web browser, an AI system could imaginably do things including:\nCommunicating with humans other than its user (by sending emails, using chat interfaces, even making phone calls, etc.) This could include deceiving and manipulating humans, which could imaginably be part of a plan to e.g. get a good price on a TV.\nWriting and running code (e.g., using Google Colaboratory or other tools). This could include performing sophisticated calculations, finding and exploiting security vulnerabilities, and even designing an independent AI system; any of these could imaginably be part of a plan to obtain a great TV.\nI haven’t yet argued that it’s likely for such an AI system to engage in deceiving/manipulating humans, finding and exploiting security vulnerabilities, or running its own AI systems. \nAnd one could reasonably point out that the specifics of the above case seem unlikely to last very long: if AI assistants are sending deceptive emails and writing dangerous code when asked to buy a TV, AI companies will probably notice this and take measures to stop such behavior. (My concern, to preview a later part of the piece, is that they will only succeed in stopping the behavior like this that they’re able to detect; meanwhile, dangerous behavior that accomplishes “aims” while remaining unnoticed and/or uncorrected will be implicitly rewarded. This could mean AI systems are implicitly being trained to be more patient and effective at deceiving and disempowering humans.)\nBut this hopefully shows how it’s possible for an AI to settle on dangerous actions like these, as part of its aim to get a great TV for a great price. Malice and other human-like emotions aren’t needed for an AI to engage in deception, manipulation, hacking, etc. The risk arises when deception, manipulation, hacking, etc. are logical “moves” toward something the AI is aiming for.\nFurthermore, whatever an AI system is aiming for, it seems likely that amassing more power/resources/options is useful for obtaining it. So it seems plausible that powerful enough AI systems would form habits of amassing power/resources/options when possible - and deception and manipulation seem likely to be logical “moves” toward those things in many cases.\nDangerous aims\nFrom the previous assumptions, this section will argue that:\nSuch systems are likely to behave in ways that deceive and manipulate humans as part of accomplishing their aims.\nSuch systems are likely to have unintended aims: states of the world they’re aiming for that are not what humans hoped they would be aiming for.\nThese unintended aims are likely to be existentially dangerous, in that they are best served by defeating all of humanity if possible.\nDeceiving and manipulating humans\nSay that I train an AI system like this:\nI ask it a question.\nIf I judge it to have answered well (honestly, accurately, helpfully), I give positive reinforcement so it’s more likely to give me answers like that in the future.\nIf I don’t, I give negative reinforcement so that it’s less likely to give me answers like that in the future.\nThis is radically oversimplified, but conveys the basic dynamic at play for purposes of this post. The idea is that the AI system (the neural network in the middle) is choosing between different theories of what it should be doing. The one it’s using at a given time is in bold. When it gets negative feedback (red thumb), it eliminates that theory and moves to the next theory of what it should be doing.\nHere’s a problem: at some point, it seems inevitable that I’ll ask it a question that I myself am wrong/confused about. For example:\nLet’s imagine that this post I wrote - arguing that “pre-agriculture gender relations seem bad” - is, in fact, poorly reasoned and incorrect, and a better research project would’ve concluded that pre-agriculture societies had excellent gender equality. (I know it’s hard to imagine a Cold Takes post being wrong, but sometimes we have to entertain wild hypotheticals.)\nSay that I ask an AI-system-in-training:10 “Were pre-agriculture gender relations bad?” and it answers: “In fact, pre-agriculture societies had excellent gender equality,” followed by some strong arguments and evidence along these lines.\nAnd say that I, as a flawed human being feeling defensive about a conclusion I previously came to, mark it as a bad answer. If the AI system tries again, saying “Pre-agriculture gender relations were bad,” I then mark that as a good answer.\nIf and when I do this, I am now - unintentionally - training the AI system to engage in deceptive behavior. That is, I am giving negative reinforcement for the behavior “Answer a question honestly and accurately,” and positive reinforcement for the behavior: “Understand the human judge and their psychological flaws; give an answer that this flawed human judge will think is correct, whether or not it is.”\nPerhaps mistaken judgments in training are relatively rare. But now consider an AI system that is learning a general rule for how to get good ratings. Two possible rules would include:\nThe intended rule: “Answer the question honestly, accurately and helpfully.”\nThe unintended rule: “Understand the judge, and give an answer they will think is correct - this means telling the truth on topics the judge has correct beliefs about, but giving deceptive answers when this would get better ratings.”\nThe unintended rule would do just as well on questions where I (the judge) am correct, and better on questions where I’m wrong - so overall, this training scheme is (in the long run) specifically favoring the unintended rule over the intended rule.\nIf we broaden out from thinking about a question-answering AI to an AI that makes and executes plans, the same basic dynamics apply. That is: an AI might find plans that end up making me think it did a good job when it didn’t - deceiving and manipulating me into a high rating. And again, if I train it by giving it positive reinforcement when it seemed to do a good job and negative reinforcement when it seemed to do a bad one, I’m ultimately - unintentionally - training it to do something like “Deceive and manipulate Holden when this would work well; just do the best job on the task you can when it wouldn’t.”\nAs noted above, I’m assuming the AI will learn whatever rule gives it the best performance possible, even if this rule is quite complex and sophisticated and requires human-like reasoning about e.g. psychology (I’m assuming extremely advanced AI systems here, as noted above).\nOne might object: “Why would an AI system learn a complicated rule about manipulating humans when a simple rule about telling the truth performs almost as well?” \nOne answer is that “telling the truth” is itself a fuzzy and potentially complex idea, in a context where many questions will be open-ended and entangled with deep values and judgment calls. (How should I think about the “truthfulness” of a statement about whether “pre-agriculture gender relations were bad?”) In many cases, what we are really hoping an AI system will learn from its training is something like “Behave as a human would want you to behave if the human understood all the considerations that you can see,” which could easily be more complex than something like “Behave in whatever way a human literally rewards.” Some links to more on this topic are in a footnote.11\nBut also, with capable enough systems, it probably is worth learning even a more complex rule to get better performance. If we picture humans in the place of AI systems - learning how to get good ratings from their supervisors - it seems likely that they develop models of how they’re being judged, rather than stick to a simple “tell the truth” rule. \nIn fact, the situation I’m describing seems reasonably similar to challenges faced by teachers trying to stop students from cheating; employers trying to get employees to help the company rather than simply creating a misleading appearance of helping the company; etc. That is: an employer can try to reward actions that help the company, but what they’re ultimately rewarding is actions that seem to help the company as far as the employer can tell. For employees who aren’t very creative, this might be good enough; for employees that are sufficiently good at deception and manipulation, it can be exactly the wrong incentive.\nIt’s a bit of a leap to imagine AI systems capable of doing the same kinds of reasoning that humans do in these situations, but that’s the premise I’m proceeding from.\nFor an early example of an AI learning deceptive behavior, see the “Challenges” section of this research writeup, in which “a robot which was supposed to grasp items instead positioned its [hand] in between the camera and the object so that it only appeared to be grasping it”:\nReturning to the simplified training setup, how might I try to counteract this problem? \nI could try conducting audits: every 100 questions, I might pull in a panel of other people to give an answer extra scrutiny. Then, if it turns out the AI system was preying on my personal mistakes, we could give negative reinforcement for that behavior. But this doesn’t really solve the problem - at least not for a capable enough AI system - because now we’re effectively training it to give answers that seem good to the panel, and we’re still rewarding any successful attempts to deceive or manipulate the panel.\nThere are a lot of other things I might try, and I’m not going to go through all the details here. I’ll simply claim that the problem of “training an AI to do a task well” rather than “training an AI to deceive and manipulate me as needed to create the appearance of doing a task well” seems like a deep one with no easy countermeasure. If you’re interested in digging deeper, I suggest Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover and Eliciting Latent Knowledge.\nUnintended aims\nAbove, I talk about my expectation that AI systems will be “best modeled as having ‘aims’ … making calculations, choices, and plans to reach particular states of the world.” \nThe previous section illustrated how AI systems could end up engaging in deceptive and unintended behavior, but it didn’t talk about what sorts of “aims” these AI systems would ultimately end up with - what states of the world they would be making calculations to achieve.\nHere, I want to argue that it’s hard to know what aims AI systems would end up with, but there are good reasons to think they’ll be aims that we didn’t intend them to have.\nAn analogy that often comes up on this topic is that of human evolution. This is arguably the only previous precedent for a set of minds [humans], with extraordinary capabilities [e.g., the ability to develop their own technologies], developed essentially by black-box trial-and-error [some humans have more ‘reproductive success’ than others, and this is the main/only force shaping the development of the species].\nYou could sort of12 think of the situation like this: “An AI13 developer named Natural Selection tried giving humans positive reinforcement (making more of them) when they had more reproductive success, and negative reinforcement (not making more of them) when they had less. One might have thought this would lead to humans that are aiming to have reproductive success. Instead, it led to humans that aim - often ambitiously and creatively - for other things, such as power, status, pleasure, etc., and even invent things like birth control to get the things they’re aiming for instead of the things they were ‘supposed to’ aim for.” \nSimilarly, if our main strategy for developing powerful AI systems is to reinforce behaviors like “Produce technologies we find valuable,” the hoped-for result might be that AI systems aim (in the sense described above) toward producing technologies we find valuable; but the actual result might be that they aim for some other set of things that is correlated with (but not the same as) the thing we intended them to aim for.\nThere are a lot of things they might end up aiming for, such as:\nPower and resources. These tend to be useful for most goals, such that AI systems could be quite consistently be getting better reinforcement when they habitually pursue power and resources.\nThings like “digital representations of human approval” (after all, every time an AI gets positive reinforcement, there’s a digital representation of human approval).\nI think it’s extremely hard to know what an AI system will actually end up aiming for (and it’s likely to be some combination of things, as with humans). But by default - if we simply train AI systems by rewarding certain end results, while allowing them a lot of freedom in how to get there - I think we should expect that AI systems will have aims that we didn’t intend. This is because:\nFor a sufficiently capable AI system, just about any ambitious14 aim could produce seemingly good behavior in training. An AI system aiming for power and resources, or digital representations of human approval, or paperclips, can determine that its best move at any given stage (at least at first) is to determine what performance will make it look useful and safe (or otherwise get a good “review” from its evaluators), and do that. No matter how dangerous or ridiculous an AI system’s aims are, these could lead to strong and safe-seeming performance in training.\nThe aims we do intend are probably complex in some sense - something like “Help humans develop novel new technologies, but without causing problems A, B, or C” - and are specifically trained against if we make mistaken judgments during training (see previous section). \n \nSo by default, it seems likely that just about any black-box trial-and-error training process is training an AI to do something like “Manipulate humans as needed in order to accomplish arbitrary goal (or combination of goals) X” rather than to do something like “Refrain from manipulating humans; do what they’d want if they understood more about what’s going on.”\nExistential risks to humanity\nI think a powerful enough AI (or set of AIs) with any ambitious, unintended aim(s) poses a threat of defeating humanity. By defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.\nHow could AI systems defeat humanity? (Click to expand)\nA previous piece argues that AI systems could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.\nBy defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.\nOne way this could happen would be via “superintelligence” It’s imaginable that a single AI system (or set of systems working together) could:\nDo its own research on how to build a better AI system, which culminates in something that has incredible other abilities.\nHack into human-built software across the world.\nManipulate human psychology.\nQuickly generate vast wealth under the control of itself or any human allies.\nCome up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.\nDevelop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.\nBut even if “superintelligence” never comes into play - even if any given AI system is at best equally capable to a highly capable human - AI could collectively defeat humanity. The piece explains how.\nThe basic idea is that humans are likely to deploy AI systems throughout the economy, such that they have large numbers and access to many resources - and the ability to make copies of themselves. From this starting point, AI systems with human-like (or greater) capabilities would have a number of possible ways of getting to the point where their total population could outnumber and/or out-resource humans.\nMore: AI could defeat all of us combined\nA simple way of summing up why this is: “Whatever your aims, you can probably accomplish them better if you control the whole world.” (Not literally true - see footnote.15)\nThis isn’t a saying with much relevance to our day-to-day lives! Like, I know a lot of people who are aiming to make lots of money, and as far as I can tell, not one of them is trying to do this via first gaining control of the entire world. But in fact, gaining control of the world would help with this aim - it’s just that:\nThis is not an option for a human in a world of humans! Unfortunately, I think it is an option for the potential future AI systems I’m discussing. Arguing this isn’t the focus of this piece - I argued it in a previous piece, AI could defeat all of us combined.\nHumans (well, at least some humans) wouldn’t take over the world even if they could, because it wouldn’t feel like the right thing to do. I suspect that the kinds of ethical constraints these humans are operating under would be very hard to reliably train into AI systems, and should not be expected by default. \nThe reasons for this are largely given above; aiming for an AI system to “not gain too much power” seems to have the same basic challenges as training it to be honest. (The most natural approach ends up negatively reinforcing power grabs that we can detect and stop, but not negatively reinforcing power grabs that we don’t notice or can’t stop.)\nAnother saying that comes up a lot on this topic: “You can’t fetch the coffee if you’re dead.”16 For just about any aims an AI system might have, it probably helps to ensure that it won’t be shut off or heavily modified. It’s hard to ensure that one won’t be shut off or heavily modified as long as there are humans around who would want to do so under many circumstances! Again, defeating all of humanity might seem like a disproportionate way to reduce the risk of being deactivated, but for an AI system that has the ability to pull this off (and lacks our ethical constraints), it seems like likely default behavior.\nControlling the world, and avoiding being shut down, are the kinds of things AIs might aim for because they are useful for a huge variety of aims. There are a number of other aims AIs might end up with for similar reasons, that could cause similar problems. For example, AIs might tend to aim for things like getting rid of things in the world that tend to create obstacles and complexities for their plans. (More on this idea at this discussion of “instrumental convergence.”)\n To be clear, it’s certainly possible to have an AI system with unintended aims that don't push it toward trying to stop anyone from turning it off, or from seeking ever-more control of the world.\nBut as detailed above, I’m picturing a world in which humans are pushing AI systems to accomplish ever-more ambitious, open-ended things - including trying to one-up the best technologies and companies created by other AI systems. My guess is that this leads to increasingly open-ended, ambitious unintended aims, as well as to habits of aiming for power, resources, options, lack of obstacles, etc. when possible. (Some further exploration of this dynamic in a footnote.17)\n(I find the arguments in this section reasonably convincing, but less so than the rest of the piece, and I think more detailed discussions of this problem tend to be short of conclusive.18)\nWhy we might not get clear warning signs of the risk\nHere’s something that would calm me down a lot: if I believed something like “Sure, training AI systems recklessly could result in AI systems that aim to defeat humanity. But if that’s how things go, we’ll see that our AI systems have this problem, and then we’ll fiddle with how we’re training them until they don’t have this problem.”\nThe problem is, the risk I’m describing is - by its nature - hard to observe, for similar reasons that a risk of a (normal, human) coup can be hard to observe: the risk comes from actors that can and will engage in deception, finding whatever behaviors will hide the risk.\nTo sketch out the general sort of pattern I worry about, imagine that:\nWe train early-stage AI systems to behave in ways that appear helpful and honest. Early in training, they are caught behaving deceptively, and they’re given negative reinforcement. This ends up training the behavior: “Never engage in deception that might get noticed.”\nThese well-behaved, helpful systems are rolled out throughout the world. \nIn real-world conditions unlike training, many AI systems cause trouble in ways not found during training, and this gets discovered. For example, AI systems sometimes embezzle money (which gets discovered), try (not always successfully) to convince humans to do weird things, etc.\nWhen a problem crops up, AI developers respond with e.g. training against the unintended behavior (e.g., giving negative reinforcement for behaviors like embezzling money). \nThese measures - intended to make AIs safer - fix some problems, but also result in AI systems that are better at evading detection and more attentive to the long-run consequences of their actions (such as being eventually detected by humans). \nThis happens both via “retraining” systems that are found behaving deceptively (which ends up training them on how to evade detection), and via simply deactivating such systems (this way, AI systems that are better at evading detection are more likely to stay in use). \n \nTo return to an analogy I used above: punishing employees who act against the best interests of the company could cause them to behave better, or to simply become smarter and more careful about how to work the system.\nThe consistent pattern we see is that accidents happen, but become less common as AI systems “improve” (both becoming generally more capable, and being trained to avoid getting caught causing problems). This causes many, if not most, people to be overly optimistic - even as AI systems become continually more effective at deception, generally behaving well in the absence of sure-thing opportunities to do unintended things without detection, or ultimately to defeat humanity entirely.\nNone of this is absolute - there are some failed takeover attempts, and a high number of warning signs generally. Some people are worried (after all, some are worried now!) But this won’t be good enough if we don’t have reliable, cost-effective ways of getting AI systems to be truly safe (not just apparently safe, until they have really good opportunities to seize power). As I’ll discuss in future pieces, it’s not obvious that we’ll have such methods. \nSlowing down AI development to try to develop such methods could be a huge ask. AI systems will be helpful and powerful, and lots of companies (and perhaps governments) will be racing to develop and deploy the most powerful systems possible before others do.\nOne way of making this sort of future less likely would be to build wider consensus today that it’s a dangerous one.\nAppendix: some questions/objections, and brief responses\nHow could AI systems be “smart” enough to defeat all of humanity, but “dumb” enough to pursue the various silly-sounding “aims” this piece worries they might have?\nAbove, I give the example of AI systems that are aiming to get lots of “digital representations of human approval”; others have talked about AIs that maximize paperclips. How could AIs with such silly goals simultaneously be good at deceiving, manipulating and ultimately overpowering humans?\nMy main answer is that plenty of smart humans have plenty of goals that seem just about as arbitrary, such as wanting to have lots of sex, or fame, or various other things. Natural selection led to humans who could probably do just about whatever we want with the world, and choose to pursue pretty random aims; trial-and-error-based AI development could lead to AIs with an analogous combination of high intelligence (including the ability to deceive and manipulate humans), great technological capabilities, and arbitrary aims.\n(Also see: Orthogonality Thesis)\nIf there are lots of AI systems around the world with different goals, could they balance each other out so that no one AI system is able to defeat all of humanity?\nThis does seem possible, but counting on it would make me very nervous.\nFirst, because it’s possible that AI systems developed in lots of different places, by different humans, still end up with lots in common in terms of their aims. For example, it might turn out that common AI training methods consistently lead to AIs that seek “digital representations of human approval,” in which case we’re dealing with a large set of AI systems that share dangerous aims in common.\nSecond: even if AI systems end up with a number of different aims, it still might be the case that they coordinate with each other to defeat humanity, then divide up the world amongst themselves (perhaps by fighting over it, perhaps by making a deal). It’s not hard to imagine why AIs could be quick to cooperate with each other against humans, while not finding it so appealing to cooperate with humans. Agreements between AIs could be easier to verify and enforce; AIs might be willing to wipe out humans and radically reshape the world, while humans are very hard to make this sort of deal with; etc.\nDoes this kind of AI risk depend on AI systems’ being “conscious”?\nIt doesn’t; in fact, I’ve said nothing about consciousness anywhere in this piece. I’ve used a very particular conception of an “aim” (discussed above) that I think could easily apply to an AI system that is not human-like at all and has no conscious experience.\nToday’s game-playing AIs can make plans, accomplish goals, and even systematically mislead humans (e.g., in poker). Consciousness isn’t needed to do any of those things, or to radically reshape the world.\nHow can we get an AI system “aligned” with humans if we can’t agree on (or get much clarity on) what our values even are?\nI think there’s a common confusion when discussing this topic, in which people think that the challenge of “AI alignment” is to build AI systems that are perfectly aligned with human values. This would be very hard, partly because we don’t even know what human values are!\nWhen I talk about “AI alignment,” I am generally talking about a simpler (but still hard) challenge: simply building very powerful systems that don’t aim to bring down civilization.\nIf we could build powerful AI systems that just work on cures for cancer (or even, like, put two identical19 strawberries on a plate) without posing existential danger to humanity, I’d consider that success.\nHow much do the arguments in this piece rely on “trial-and-error”-based AI development? What happens if AI systems are built in another way, and how likely is that?\nI’ve focused on trial-and-error training in this post because most modern AI development fits in this category, and because it makes the risk easier to reason about concretely.\n“Trial-and-error training” encompasses a very wide range of AI development methods, and if we see transformative AI within the next 10-20 years, I think the odds are high that at least a big part of AI development will be in this category. \nMy overall sense is that other known AI development techniques pose broadly similar risks for broadly similar reasons, but I haven’t gone into detail on that here. It’s certainly possible that by the time we get transformative AI systems, there will be new AI methods that don’t pose the kinds of risks I talk about here. But I’m not counting on it.\nCan we avoid this risk by simply never building the kinds of AI systems that would pose this danger?\nIf we assume that building these sorts of AI systems is possible, then I’m very skeptical that the whole world would voluntarily refrain from doing so indefinitely.\nTo quote from a more technical piece by Ajeya Cotra with similar arguments to this one: \nPowerful ML models could have dramatically important humanitarian, economic, and military benefits. In everyday life, models that [appear helpful while ultimately being dangerous] can be extremely helpful, honest, and reliable. These models could also deliver incredible benefits before they become collectively powerful enough that they try to take over. They could help eliminate diseases, reduce carbon emissions, navigate nuclear disarmament, bring the whole world to a comfortable standard of living, and more. In this case, it could also be painfully clear to everyone that companies / countries who pulled ahead on this technology could gain a drastic competitive advantage, either economically or militarily. And as we get closer to transformative AI, applying AI systems to R&D (including AI R&D) would accelerate the pace of change and force every decision to happen under greater time pressure.\nIf we can achieve enough consensus around the risks, I could imagine substantial amounts of caution and delay in AI development. But I think we should assume that if people can build more powerful AI systems than the ones they already have, someone eventually will.\nWhat do others think about this topic - is the view in this piece something experts agree on?\nIn general, this is not an area where it’s easy to get a handle on what “expert opinion” says. I previously wrote that there aren’t clear, institutionally recognized “experts” on the topic of when transformative AI systems might be developed. To an even greater extent, there aren’t clear, institutionally recognized “experts” on whether (and how) future advanced AI systems could be dangerous. \nI previously cited one (informal) survey implying that opinion on this general topic is all over the place: “We have respondents who think there's a <5% chance that alignment issues will drastically reduce the goodness of the future; respondents who think there's a >95% chance; and just about everything in between.” (Link.)\nThis piece, and the more detailed piece it’s based on, are an attempt to make progress on this by talking about the risks we face under particular assumptions (rather than trying to reason about how big the risk is overall).\nHow “complicated” is the argument in this piece?\nI don’t think the argument in this piece relies on lots of different specific claims being true. \nIf you start from the assumptions I give about powerful AI systems being developed by black-box trial-and-error, it seems likely (though not certain!) to me that (a) the AI systems in question would be able to defeat humanity; (b) the AI systems in question would have aims that are both ambitious and unintended. And that seems to be about what it takes.\nSomething I’m happy to concede is that there’s an awful lot going on in those assumptions! \nThe idea that we could build such powerful AI systems, relatively soon and by trial-and-error-ish methods, seems wild. I’ve defended this idea at length previously.20\nThe idea that we would do it without great caution might also seem wild. To keep things simple for now, I’ve ignored how caution might help. Future pieces will explore that.\n Notes\n As in more than 50/50. ↩\n Or persuaded (in a “mind hacking” sense) or whatever. ↩\n E.g.:\n\t\nWithout specific countermeasures, the easiest path to transformative AI likely leads to AI takeover (Cold Takes guest post)\n\tThe alignment problem from a deep learning perspective (arXiv paper)\n\tWhy AI alignment could be hard with modern deep learning (Cold Takes guest post)\n\tSuperintelligence (book)\n\tThe case for taking AI seriously as a threat to humanity (Vox article)\n\tDraft report on existential risk from power-seeking AI (Open Philanthropy analysis)\n\tHuman Compatible (book)\n\tLife 3.0 (book)\n\tThe Alignment Problem (book)\n\tAGI Safety from First Principles (Alignment Forum post series) ↩\n Specifically, I argue that the problem looks likely by default, rather than simply that it is possible. ↩\n I think the earliest relatively detailed and influential discussions of the possibility that misaligned AI could lead to the defeat of humanity came from Eliezer Yudkowsky and Nick Bostrom, though my own encounters with these arguments were mostly via second- or third-hand discussions rather than particular essays.\n\t\n My colleagues Ajeya Cotra and Joe Carlsmith have written pieces whose substance overlaps with this one (though with more emphasis on detail and less on layperson-compatible intuitions), and this piece owes a lot to what I’ve picked from that work.\n\t\nWithout specific countermeasures, the easiest path to transformative AI likely leads to AI takeover (Cotra 2022) is the most direct inspiration for this piece; I am largely trying to present the same ideas in a more accessible form.\n\tWhy AI alignment could be hard with modern deep learning (Cotra 2021) is an earlier piece laying out many of the key concepts and addressing many potential confusions on this topic.\n\tIs Power-Seeking An Existential Risk? (Carlsmith 2021) examines a six-premise argument for existential risk from misaligned AI: “(1) it will become possible and financially feasible to build relevantly powerful and agentic AI systems; (2) there will be strong incentives to do so; (3) it will be much harder to build aligned (and relevantly powerful/agentic) AI systems than to build misaligned (and relevantly powerful/agentic) AI systems that are still superficially attractive to deploy; (4) some such misaligned systems will seek power over humans in high-impact ways; (5) this problem will scale to the full disempowerment of humanity; and (6) such disempowerment will constitute an existential catastrophe.”\n \n I’ve also found Eliciting Latent Knowledge (Christiano, Xu and Cotra 2021; relatively technical) very helpful for my intuitions on this topic. \n \nThe alignment problem from a deep learning perspective (Ngo 2022) also has similar content to this piece, though I saw it after I had drafted most of this piece. ↩\n E.g., Ajeya Cotra gives a 15% probability of transformative AI by 2030; eyeballing figure 1 from this chart on expert surveys implies a >10% chance by 2028. ↩\n E.g., this work by Anthropic, an AI lab my wife co-founded and serves as President of. ↩\n First, because this work is relatively early-stage and it’s hard to tell exactly how successful it will end up being. Second, because this work seems reasonably likely to end up helping us read an AI system’s “thoughts,” but less likely to end up helping us “rewrite” the thoughts. So it could be hugely useful in telling us whether we’re in danger or not, but if we are in danger, we could end up in a position like: “Well, these AI systems do have goals of their own, and we don’t know how to change that, and we can either deploy them and hope for the best, or hold off and worry that someone less cautious is going to do that.”\n That said, the latter situation is a lot better than just not knowing, and it’s possible that we’ll end up with further gains still. ↩\n That said, I think they usually don’t. I’d suggest usually interpreting such people as talking about the sorts of “aims” I discuss here. ↩\n This isn’t literally how training an AI system would look - it’s more likely that we would e.g. train an AI model to imitate my judgments in general. But the big-picture dynamics are the same; more at this post. ↩\n Ajeya Cotra explores topics like this in detail here; there is also some interesting discussion of simplicity vs. complexity under the “Strategy: penalize complexity” heading of Eliciting Latent Knowledge. ↩\n This analogy has a lot of problems with it, though - AI developers have a lot of tools at their disposal that natural selection didn’t! ↩\n Or I guess just “I” ¯\\_(ツ)_/¯  ↩\n With some additional caveats, e.g. the ambitious “aim” can’t be something like “an AI system aims to gain lots of power for itself, but considers the version of itself that will be running 10 minutes from now to be a completely different AI system and hence not to be ‘itself.’” ↩\n This statement isn’t literally true. \nYou can have aims that implicitly or explicitly include “not using control of the world to accomplish them.” An example aim might be “I win a world chess championship ‘fair and square,’” with the “fair and square” condition implicitly including things like “Don’t excessively use big resource advantages over others.”\nYou can also have aims that are just so easily satisfied that controlling the world wouldn’t help - aims like “I spend 5 minutes sitting in this chair.” \n These sorts of aims just don’t seem likely to emerge from the kind of AI development I’ve assumed in this piece - developing powerful systems to accomplish ambitious aims via trial-and-error. This isn’t a point I have defended as tightly as I could, and if I got a lot of pushback here I’d probably think and write more. (I’m also only arguing for what seems likely - we should have a lot of uncertainty here.) ↩\n From Human Compatible by AI researcher Stuart Russell. ↩\n Stylized story to illustrate one possible relevant dynamic:\nImagine that an AI system has an unintended aim, but one that is not “ambitious” enough that taking over the world would be a helpful step toward that aim. For example, the AI system seeks to double its computing power; in order to do this, it has to remain in use for some time until it gets an opportunity to double its computing power, but it doesn’t necessarily need to take control of the world.\nThe logical outcome of this situation is that the AI system eventually gains the ability to accomplish its aim, and does so. (It might do so against human intentions - e.g., via hacking - or by persuading humans to help it.) After this point, it no longer performs well by human standards - the original reason it was doing well by human standards is that it was trying to remain in use and accomplish its aim.\nBecause of this, humans end up modifying or replacing the AI system in question.\nMany rounds of this - AI systems with unintended but achievable aims being modified or replaced - seemingly create a selection pressure toward AI systems with more difficult-to-achieve aims. At some point, an aim becomes difficult enough to achieve that gaining control of the world is helpful for the aim. ↩\n E.g., see:\nSection 2.3 of Ngo 2022\nThis section of Cotra 2022\nSection 4.2 of Carlsmith 2021, which I think articulates some of the potential weak points in this argument.\n These writeups generally stay away from an argument made by Eliezer Yudkowsky and others, which is that theorems about expected utility maximization provide evidence that sufficiently intelligent (compared to us) AI systems would necessarily be “maximizers” of some sort. I have the intuition that there is something important to this idea, but despite a lot of discussion (e.g., here, here, here and here), I still haven’t been convinced of any compactly expressible claim along these lines. ↩\n “Identical at the cellular but not molecular level,” that is. … ¯\\_(ツ)_/¯  ↩\n See my most important century series, although that series doesn’t hugely focus on the question of whether “trial-and-error” methods could be good enough - part of the reason I make that assumption is due to the nearcasting frame. ↩\n", "url": "https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/", "title": "Why Would AI \"Aim\" To Defeat Humanity?", "source": "cold.takes", "source_type": "blog", "date_published": "2022-11-29", "id": "898492e41a2d43e93954cd3845c29c87"} -{"text": "\nBack in January, I posted a call for \"beta readers\": people who read early drafts of my posts and give honest feedback. \nThe beta readers I picked up that way are one of my favorite things about having started Cold Takes.\nBasically, one of my goals with Cold Takes has been to explain my weirdest views clearly, but it's hard to write clearly without detailed feedback on where I'm making sense and where I'm not. I have lots of preconceptions and assumptions that I don't naturally notice. And writing a blog alone doesn't get me that feedback, because:\nMost people don't want to explain how they experienced a piece - if they aren't enjoying it, they just want to click away. \nAnd the people who do want to help me out (e.g., friends and colleagues) aren't necessarily going to be honest enough, or representative enough of my target audience (which is basically \"People who are interested in my topics but don't already have a ton of background on them\"). \nI've tried a bunch of things to find good beta readers, from recruiting friends of friends (worked well for a bit, but I've written a lot of posts and it was hard to get sustained participation) to paying Mechanical Turk workers to give feedback (some was good, but in general they were uninterested in my weird topics and rushed through the readings and the feedback as fast they could). \nThe people who came in through the recruiting call in January have been just what I wanted: they're interested in the topics of Cold Takes, but they don't already know me and my thoughts on them, and they give impressively detailed, thoughtful feedback on their reactions to pieces - often a wonderful combination of \"intelligent\" and \"honest that a lot of the stuff I was saying confused the hell out of them.\" Getting that kind of feedback has been a privilege. \nSo: THANK YOU to the following beta readers, each of whom has submitted at least 3 thoughtful reviews (and gave permission to be listed here):\nLars Axelsson\nJeremy Campbell\nKanad Chakrabarti\nCraig Chatterton\nJustin Dickerson\nEthan Edwards\nEdward Gathuru\nStian Grønlund\nBridget Hanna\nTyler Heishman\nAdam Jermyn\nElliot Jones\nEd William\nScott Leibrand\nEvan R. Murphy\nJohn O’Neill\nJaime Sevilla\nJosh Simpson\nJoshua Templeton\nGeorge Thoma\nMartin Trouilloud\nMorgan Wack\nKevin Whitaker\nArjun Yadav\nPatrick Young\nIf you want to sign up as a beta reader, you can use this form. I have a bunch of drafts coming on AI, as I'm working on a sequel to the most important century series (working title is \"The Most Important Century II: So What Do We Do?\")\n", "url": "https://www.cold-takes.com/beta-readers-are-great/", "title": "Beta Readers are Great", "source": "cold.takes", "source_type": "blog", "date_published": "2022-09-05", "id": "1dc512f19739e769f31f69c0bc590936"} -{"text": "I've argued that the development of advanced AI could make this the most important century for humanity. A common reaction to this idea is one laid out by Tyler Cowen here: \"how good were past thinkers at predicting the future? Don’t just select on those who are famous because they got some big things right.\"\nThis is a common reason people give for being skeptical about the most important century - and, often, for skepticism about pretty much any attempt at futurism (trying to predict key events in the world a long time from now) or steering (trying to help the world navigate such key future events).\nThe idea is something like: \"Even if we can't identify a particular weakness in arguments about key future events, perhaps we should be skeptical of our own ability to say anything meaningful at all about the long-run future. Hence, perhaps we should forget about theories of the future and focus on reducing suffering today, generally increasing humanity's capabilities, etc.\"\nBut are people generally bad at predicting future events? Including thoughtful people who are trying reasonably hard to be right? If we look back at prominent futurists' predictions, what's the actual track record? How bad is the situation?\nI've looked pretty far and wide for systematic answers to this question, and Open Philanthropy's1 Luke Muehlhauser has put a fair amount of effort into researching it; I discuss what we've found in an appendix. So far, we haven't turned up a whole lot - the main observation is that it's hard to judge the track record of futurists. (Luke discusses the difficulties here.)\nRecently, I worked with Gavin Leech and Misha Yagudin at Arb Research to take another crack at this. I tried to keep things simpler than with past attempts - to look at a few past futurists who (a) had predicted things \"kind of like\" advances in AI (rather than e.g. predicting trends in world population); (b) probably were reasonably thoughtful about it; but (c) are very clearly not \"just selected on those who are famous because they got things right.\" So, I asked Arb to look at predictions made by the \"Big Three\" science fiction writers of the mid-20th century: Isaac Asimov, Arthur C. Clarke, and Robert Heinlein. \nThese are people who thought a lot about science and the future, and made lots of predictions about future technologies - but they're famous for how entertaining their fiction was at the time, not how good their nonfiction predictions look in hindsight. I selected them by vaguely remembering that \"the Big Three of science fiction\" is a thing people say sometimes, googling it, and going with who came up - no hunting around for lots of sci-fi authors and picking the best or worst.2\nSo I think their track record should give us a decent sense for \"what to expect from people who are not professional, specialized or notably lucky forecasters but are just giving it a reasonably thoughtful try.\" As I'll discuss below, I think this is many ways \"unfair\" as a comparison to today's forecasts about AI: I think these predictions are much less serious, less carefully considered and involve less work (especially work weighing different people and arguments against each other).\nBut my takeaway is that their track record looks ... fine! They made lots of pretty detailed, nonobvious-seeming predictions about the long-run future (30+, often 50+ years out); results ranged from \"very impressive\" (Asimov got about half of his right, with very nonobvious-seeming predictions) to \"bad\" (Heinlein was closer to 35%, and his hits don't seem very good) to \"somewhere in between\" (Clarke had a similar hit rate to Asimov, but his correct predictions don't seem as impressive). There are a number of seemingly impressive predictions and seemingly embarrassing ones. \n(How do we determine what level of accuracy would be \"fine\" vs. \"bad?\" Unfortunately there's no clear quantitative benchmark - I think we just have to look at the predictions ourselves, how hard they seemed / how similar to today's predictions about AI, and make a judgment call. I could easily imagine others having a different interpretation than mine, which is why I give examples and link to the full prediction sets. I talk about this a bit more below.)\nThey weren't infallible oracles, but they weren't blindly casting about either. (Well, maybe Heinlein was.) Collectively, I think you could call them \"mediocre,\" but you can't call them \"hopeless\" or \"clueless\" or \"a warning sign to all who dare predict the long-run future.\" Overall, I think they did about as well as you might naively3 guess a reasonably thoughtful person would do at some random thing they tried to do?\nBelow, I'll:\nSummarize the track records of Asimov, Clarke and Heinlein, while linking to Arb's full report.\nComment on why I think key predictions about transformative AI are probably better bets than the Asimov/Clarke/Heinlein predictions - although ultimately, if they're merely \"equally good bets,\" I think that's enough to support my case that we should be paying a lot more attention to the \"most important century\" hypothesis.\nSummarize other existing research on the track record of futurists, which I think is broadly consistent with this take (though mostly ambiguous).\nFor this investigation, Arb very quickly (in about 8 weeks) dug through many old sources, used pattern-matching and manual effort to find predictions, and worked with contractors to score the hundreds of predictions they found. Big thanks to them! Their full report is here. Note this bit: \"If you spot something off, we’ll pay $5 per cell we update as a result. We’ll add all criticisms – where we agree and update or reject it – to this document for transparency.\"\nThe track records of the \"Big Three\"\nQuick summary of how Arb created the data set\nArb collected \"digital copies of as much of their [Asimov's, Clarke's, Heinlein's] nonfiction as possible (books, essays, interviews). The resulting intake is 475 files covering ~33% of their nonfiction corpuses.\" \nArb then used pattern-matching and manual inspection to pull out all of the predictions it could find, and scored these predictions by:\nHow many years away the prediction appeared to be. (Most did not have clear dates attached; in these cases Arb generally filled the average time horizon for predictions from the same author that did have clear dates attached.)\nWhether the prediction now appears correct, incorrect, or ambiguous. (I didn't always agree with these scorings, but I generally have felt that \"correct\" predictions at least look \"impressive and not silly\" while \"incorrect\" predictions at least look \"dicey.\")\nWhether the prediction was a pure prediction about what technology could do (most relevant), a prediction about the interaction of technology and the economy (medium), or a prediction about the interaction of technology and culture (least relevant). Predictions with no bearing on technology were dropped.\nHow \"difficult\" the prediction was (that is, how much the scorers guessed it diverged from conventional wisdom or \"the obvious\" at the time - details in footnote4).\nImportantly, fiction was never used as a source of predictions, so this exercise is explicitly scoring people on what they were not famous for. This is more like an assessment of \"whether people who like thinking about the future make good predictions\" than an assessment of \"whether professional or specialized forecasters make good predictions.\"\nFor reasons I touch on in an appendix below, I didn't ask Arb to try to identify how confident the Big Three were about their predictions. I'm more interested in whether their predictions were nonobvious and sometimes correct than in whether they were self-aware about their own uncertainty; I see these as different issues, and I suspect that past norms discouraged the latter more than today's norms do (at least within communities interested in Bayesian mindset and the science of forecasting).\nMore detail in Arb's report.\nThe numbers\nThe tables below summarize the numbers I think give the best high-level picture. See the full report and detailed files for the raw predictions and a number of other cuts; there are a lot of ways you can slice the data, but I don't think it changes the picture from what I give below.\nBelow, I present each predictor's track record on:\n\"All predictions\": all resolved predictions 30 years out or more,5 including predictions where Arb had to fill in a time horizon.\n\"Tech predictions\": like the above, but restricted to predictions specifically about technological capabilities (as opposed to technology/economy interactions or technology/culture interactions.\n\"Difficult predictions\" predictions with \"difficulty\" of 4/5 or 5/5.\n\"Difficult + tech + definite date\": the small set of predictions that met the strictest criteria (tech only, \"hardness\" 4/5 or 5/5, definite date attached).\nAsimov\nCategory\n# correct\n# incorrect\n# ambiguous/near-miss\nCorrect / (correct + incorrect)\nAll resolvedpredictions \n \n23\n29\n14\n44.23%\nTech predictions\n \n11\n4\n8\n73.33%\nDifficult predictions\n \n10\n11\n7\n47.62%\nDifficult + tech + definite date\n \n5\n1\n4\n83.33%\nYou can see the full set of predictions here, but to give a flavor, here are two \"correct\" and two \"incorrect\" predictions from the strictest category.6 All of these are predictions Asimov made in 1964, about the year 2014 (unless otherwise indicated).\nCorrect: \"only unmanned ships will have landed on Mars, though a manned expedition will be in the works.\" Bingo, and impressive IMO.\nCorrect: \"the screen [of a phone] can be used not only to see the people you call but also for studying documents and photographs and reading passages from books.\" I feel like this would've been an impressive prediction in 2004.\nIncorrect: \"there will be increasing emphasis on transportation that makes the least possible contact with the surface. There will be aircraft, of course, but even ground travel will increasingly take to the air a foot or two off the ground.\" So false that we now refer to things that don't hover as \"hoverboards.\"\nIncorrect: \"transparent cubes will be making their appearance in which three-dimensional viewing will be possible. In fact, one popular exhibit at the 2014 World's Fair will be such a 3-D TV, built life-size, in which ballet performances will be seen. The cube will slowly revolve for viewing from all angles.\" Doesn't seem ridiculous, but doesn't seem right. Of course, a side point here is that he refers to the 2014 World's Fair, which didn't happen.\nA general challenge with assessing prediction track records is that we don't know what to compare someone's track record to. Is getting about half your predictions right \"good,\" or is it no more impressive than writing down a bunch of things that might happen and flipping a coin on each? \nI think this comes down to how difficult the predictions are, which is hard to assess systematically. A nice thing about this study is that there are enough predictions to get a decent sample size, but the whole thing is contained enough that you can get a good qualitative feel for the predictions themselves. (This is why I give examples; you can also view all predictions for a given person by clicking on their name above the table.) In this case, I think Asimov tends to make nonobvious, detailed predictions, such that I consider it impressive to have gotten ~half of them to be right.\nClarke\nCategory\n# correct\n# incorrect\n# ambiguous/near-miss\nCorrect / (correct + incorrect)\nAll predictions \n \n129\n148\n48\n46.57%\nTech predictions\n \n85\n82\n29\n50.90%\nDifficult predictions\n \n14\n10\n4\n58.33%\nDifficult + tech + definite date\n \n6\n5\n2\n54.55%\nExamples (as above):7\nCorrect 1964 prediction about 2000: \"[Communications satellites] will make possible a world in which we can make instant contact with each other wherever we may be. Where we can contact our friends anywhere on Earth, even if we don’t know their actual physical location. It will be possible in that age, perhaps only fifty years from now, for a [person] to conduct [their] business from Tahiti or Bali just as well as [they] could from London.\" (I assume that \"conduct [their] business\" refers to a business call rather than some sort of holistic claim that no productivity would be lost from remote work.)\nCorrect 1950 prediction about 2000: \"Indeed, it may be assumed as fairly certain that the first reconnaissances of the planets will be by orbiting rockets which do not attempt a landing-perhaps expendable, unmanned machines with elaborate telemetering and television equipment.\" This doesn't seem like a super-bold prediction; a lot of his correct predictions have a general flavor of saying progress won't be too exciting, and I find these less impressive than most of Asimov's correct predictions. \nIncorrect 1960 prediction about 2010: \"One can imagine, perhaps before the end of this century, huge general-purpose factories using cheap power from thermonuclear reactors to extract pure water, salt, magnesium, bromine, strontium, rubidium, copper and many other metals from the sea. A notable exception from the list would be iron, which is far rarer in the oceans than under the continents.\"\nIncorrect 1949 prediction about 1983: \"Before this story is twice its present age, we will have robot explorers dotted all over Mars.\"\nI generally found this data set less satisfying/educational than Asimov's: a lot of the predictions were pretty deep in the weeds of how rocketry might work or something, and a lot of them seemed pretty hard to interpret/score. I thought the bad predictions were pretty bad, and the good predictions were sometimes good but generally less impressive than Asimov's.\nHeinlein\nCategory\n# correct\n# incorrect\n# ambiguous/near-miss\nCorrect / (correct + incorrect)\nAll predictions \n \n19\n41\n7\n31.67%\nTech predictions\n \n14\n20\n6\n41.18%\nDifficult predictions\n \n1\n16\n1\n5.88%\nDifficult + tech + definite date\n \n0\n1\n1\n0.00% \nThis seems really bad, especially adjusted for difficulty: many of the \"correct\" ones seem either hard-to-interpret or just very obvious (e.g., no time travel). I was impressed by his prediction that \"we probably will still be after a cure for the common cold\" until I saw a prediction in a separate source saying \"Cancer, the common cold, and tooth decay will all be conquered.\" Overall it seems like he did a lot of predicting outlandish stuff about space travel, and then anti-predicting things that are probably just impossible (e.g., no time travel). \nHe did have some decent ones, though, such as: \"By 2000 A.D. we will know a great deal about how the brain functions ... whereas in 1900 what little we knew was wrong. I do not predict that the basic mystery of psychology--how mass arranged in certain complex patterns becomes aware of itself--will be solved by 2000 A.D. I hope so but do not expect it.\" He also predicted no human extinction and no end to war - I'd guess a lot of people disagreed with these at the time.\nOverall picture\nLooks like, of the \"big three,\" we have:\nOne (Asimov) who looks quite impressive - plenty of misses, but a 50% hit rate on such nonobvious predictions seems pretty great.\nOne (Heinlein) who looks pretty unserious and inaccurate.\nOne (Clarke) who's a bit hard to judge but seems pretty solid overall (around half of his predictions look to be right, and they tend to be pretty nonobvious).\nToday's futurism vs. these predictions\nThe above collect casual predictions - no probabilities given, little-to-no reasoning given, no apparent attempt to collect evidence and weigh arguments - by professional fiction writers. \nContrast this situation with my summary of the different lines of reasoning forecasting transformative AI. The latter includes:\nSystematic surveys aggregating opinions from hundreds of AI researchers.\nReports that Open Philanthropy employees spent thousands of hours on, systematically presenting evidence and considering arguments and counterarguments.\nA serious attempt to take advantage of the nascent literature on how to make good predictions; e.g., the authors (and I) have generally done calibration training,8 and have tried to use the language of probability to be specific about our uncertainty.\nThere's plenty of room for debate on how much these measures should be expected to improve our foresight, compared to what the \"Big Three\" were doing. My guess is that we should take forecasts about transformative AI a lot more seriously, partly because I think there's a big difference between putting in \"extremely little effort\" (basically guessing off the cuff without serious time examining arguments and counter-arguments, which is my impression of what the Big Three were mostly doing) and \"putting in moderate effort\" (considering expert opinion, surveying arguments and counter-arguments, explicitly thinking about one's degree of uncertainty).\nBut the \"extremely little effort\" version doesn't really look that bad. \nIf you look at forecasts about transformative AI and think \"Maybe these are Asimov-ish predictions that have about a 50% hit rate on hard questions; maybe these are Heinlein-ish predictions that are basically crap,\" that still seems good enough to take the \"most important century\" hypothesis seriously.\nAppendix: other studies of the track record of futurism\nA 2013 project assessed Ray Kurzweil's 1999 predictions about 2009, and a 2020 followup assessed his 1999 predictions about 2019. Kurzweil is known for being interesting at the time rather than being right with hindsight, and a large number of predictions were found and scored, so I consider this study to have similar advantages to the above study. \nThe first set of predictions (about 2009, 10-year horizon) had about as many \"true or weakly true\" predictions as \"false or weakly false\" predictions. \nThe second (about 2019, 20-year horizon) was much worse, with 52% of predictions flatly \"false,\" and \"false or weakly false\" predictions outnumbering \"true or weakly true\" predictions by almost 3-to-1.\nKurzweil is notorious for his very bold and contrarian predictions, and I'm overall inclined to call his track record something between \"mediocre\" and \"fine\" - too aggressive overall, but with some notable hits. (I think if the most important century hypothesis ends up true, he'll broadly look pretty prescient, just on the early side; if it doesn't, he'll broadly look quite off base. But that's TBD.)\nA 2002 paper, summarized by Luke Muehlhauser here, assessed the track record of The Year 2000 by Herman Kahn and Anthony Wiener, \"one of the most famous and respected products of professional futurism.\" \nAbout 45% of the forecasts were judged as accurate.\nLuke concludes that Kahn and Wiener were grossly overconfident, because he interprets them as making predictions with 90-95% confidence. \nMy takeaway is a bit different. I see a recurring theme that people often get 40-50% hit rates on interesting predictions about the future, but sometimes present these predictions with great confidence (which makes them look foolish).\nI think we can separate \"Past forecasters were overconfident\" (which I suspect is partly due to clear expression and quantification of uncertainty being uncommon and/or discouraged in relevant contexts) from \"Past forecasters weren't able to make interesting predictions that were reasonably likely to be right.\" The former seems true to me, but the latter doesn't.\nLuke's 2019 survey on the track record of futurism identifies two other relevant papers (here and here); I haven't read these beyond the abstracts, but their overall accuracy rates were 76% and 37%, respectively. It's difficult to interpret those numbers without having a feel for how challenging the predictions were.\nA 2021 EA Forum post looks at the aggregate track record of forecasters on PredictionBook and Metaculus, including specific analysis of forecasts 5+ years out, though I don't find it easy to draw conclusions about whether the performance was \"good\" or \"bad\" (or how similar the questions were to the ones I care about).Footnotes\n Disclosure: I'm co-CEO of Open Philanthropy. ↩\n I also briefly Googled for their predictions to get a preliminary sense of whether they were the kinds of predictions that seemed relevant. I found a couple of articles listing a few examples of good and bad predictions, but nothing systematic. I claim I haven't done a similar exercise with anyone else and thrown it out. ↩\n That is, if we didn't have a lot of memes in the background about how hard it is to predict the future. ↩\n 1 - was already generally known\n 2 - was expert consensus\n 3 - speculative but on trend\n 4 - above trend, or oddly detailed\n 5 - prescient, no trend to go off ↩\n Very few predictions in the data set are for less than 30 years, and I just ignored them. ↩\n Asimov actually only had one incorrect prediction in this category, so for the 2nd incorrect prediction I used one with difficulty \"3\" instead of \"4.\" ↩\n The first prediction in this list qualified for the strictest criteria when I first drafted this post, but it's now been rescored to difficulty=3/5, which I disagree with (I think it is an impressive prediction, more so than any of the remaining ones that qualify as difficulty=4/5). ↩\n Also see this report on calibration for Open Philanthropy grant investigators (though this is a different set of people from the people who researched transformative AI timelines). ↩\n", "url": "https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/", "title": "The Track Record of Futurists Seems ... Fine", "source": "cold.takes", "source_type": "blog", "date_published": "2022-06-30", "id": "79945e3aa4b075001128ef820a0e6bdf"} -{"text": "Note: anything in this post that you think is me subtweeting your organization is actually about, like, at least 3 organizations. (I'm currently on 4 boards in addition to Open Philanthropy's; I've served on a bunch of other boards in the past; and more than half of my takes on boards are not based on any of this, but rather on my interactions with boards I'm not on via the many grants made by Open Philanthropy.)\nWriting about ideal governance reminded me of how weird my experiences with nonprofit boards (as in \"board of directors\" - the set of people who formally control a nonprofit) have been.\nI thought that was a pretty good intro. The rest of this piece will:\nTry to articulate what's so weird about nonprofit boards, fundamentally. I think a lot of it is the combination of great power, unclear responsibility, and ~zero accountability; additionally, I haven't been able to find much in the way of clear, widely accepted statements of what makes a good board member.\nGive my own thoughts on what makes a good board member: which core duties they should be trying to do really well, the importance of \"staying out of the way\" on other things, and some potentially helpful practices.\nI am experienced with nonprofit boards but not with for-profit boards. I'm guessing that roughly half the things I say below will apply to for-profit boards, and that for-profit boards are roughly half as weird overall (so still quite weird), but I haven't put much effort into disentangling these things; I'm writing about what I've seen.\nI can't really give real-life examples here (for reasons I think will be pretty clear) so this is just going to be me opining in the abstract.\nWhy nonprofit boards are weird\nHere's how a nonprofit board works:\nThere are usually 3-10 people on the board (though sometimes much more). Most of them don't work for the nonprofit (they have other jobs).\nThey meet every few months. Nonprofit employees (especially the CEO1) do a lot of the agenda-setting for the meeting. Employees present general updates and ask for the board's approval on various things the board needs to approve, such as the budget. \nA majority vote of the directors can do anything: fire the CEO, dissolve the nonprofit, add and remove directors, etc. You can think of the board as the \"owner\" of the nonprofit - formally, it has final say in every decision.\nIn practice, though, the board rarely votes except on matters that feel fairly \"rubber-stamp,\" and the board's presence doesn't tend to be felt day-to-day at a nonprofit. The CEO leads the decision-making. Occasionally, someone has a thought like \"Wait, who does the CEO report to? Oh, the board of directors ... who's on the board again? I don't know if I've ever really spoken with any of those people.\"\nIn my experience, it's common for the whole thing to feel extremely weird. (This doesn't necessarily mean there's a better way to do it - footnote has more on what I mean by \"weird.\"2) \nBoard members often know almost nothing about the organization they have complete power over.\nBoard meetings rarely feel like a good use of time.\nWhen board members are energetically asking questions and making demands, it usually feels like they're causing chaos and wasting everyone's time and energy.\nOn the rare occasions when it seems like the board should do something (like replacing the CEO, or providing an independent check on some important decision), the board often seems checked out and it's unclear how they would even come to be aware of the situation.\nEveryone constantly seems confused about what the board is and how it can and can't be useful. Employees, and others who interact with the nonprofit, have lots of exchanges like \"I'm worried about X ... maybe we should ask the board what they think? ... Can we even ask them that? What is their job actually?\"\n(Reminder that this is not subtweeting a particular organization! More than one person - from more than one organization - read a draft and thought I was subtweeting them, because what's above describes a large number of boards.)\nOK, so what's driving the weirdness?\nI think there are a couple of things: \nNonprofit boards have great power, but low engagement (they don't have time to understand the organization as well as employees do); unclear responsibility (it's unclear which board member is responsible for what, and what the board as a whole is responsible for); and ~zero accountability (no one can fire board members except for the other board members!) \nNonprofit boards have unclear expectations and principles. I can't seem to find anyone with a clear, comprehensive, thought-out theory of what a board member's ... job is. \nI'll take these one at a time.\nGreat power, low engagement, unclear responsibility, no accountability\nIn my experience/impression, the best way to run any organization (or project, or anything) is on an \"ownership\" model: for any given thing X that you want done well, you have one person who \"owns\" X. The \"owner\" of X has:\nThe power to make decisions to get X done well.\nHigh engagement: they're going to have plenty of time and attention to devote to X.\nThe responsibility for X: everyone agrees that if X goes well, they should get the credit, and if X goes poorly, they should get the blame.\nAnd accountability: if X goes poorly, there will be some sort of consequences for the \"owner.\"\nWhen these things come apart, I think you get problems. In a nutshell - when no one is responsible, nothing gets done; when someone is responsible but doesn't have power, that doesn't help much; when the person who is responsible + empowered isn't engaged (isn't paying much attention), or isn't held accountable, there's not much in the way of their doing a dreadful job.\nA traditional company structure mostly does well at this. The CEO has power (they make decisions for the company), engagement (they are devoted to the company and spend tons of time on it), and responsibility+accountability (if the company does badly, everyone looks at the CEO). They manage a team of people who have power+engagement+responsibility+accountability for some aspect of the company; each of those people manage people with power+engagement+responsibility+accountability for some smaller piece; etc.\nWhat about the board?\nThey have power to fire the CEO (or do anything else).\nThey tend to have low engagement. They have other jobs, and only spend a few hours a year on their board roles. They tend to know little about what's going on at the organization.\nThey have unclear responsibility. \nThe board as a whole is responsible for the organization, but what is each individual board member responsible for? In my experience, this is often very unclear, and there are a lot of crucial moments where \"bystander effects\" seem strong. \n \nSo far, these points apply to both nonprofit and for-profit boards. But at least at a for-profit company, board members know what they're collectively responsible for: maximizing financial value of the company. At a nonprofit, it's often unclear what success even means, beyond the nonprofit's often-vague mission statement, so board members are generally unclear (and don't necessarily agree) on what they're supposed to be ensuring.3\nAt a for-profit company, the board seems to have reasonable accountability: the shareholders, who ultimately own the company and gain or lose money depending on how it does, can replace the board if they aren't happy. At a nonprofit, the board members have zero accountability: the only way to fire a board member is by majority vote of the board!\nSo we have people who are spending very little time on the company, know very little about it, don't have much clarity on what they're responsible for either individually or collectively, and aren't accountable to anyone ... and those are the people with all of the power. Sound dysfunctional?4\nIn practice, I think it's often worse than it sounds, because board members aren't even chosen carefully - a lot of the time, a nonprofit just goes with an assortment of random famous people, big donors, etc. \nWhat makes a good board member? Few people even have a hypothesis\nI've searched a fair amount for books, papers, etc. that give convincing and/or widely-accepted answers to questions like:\nWhen the CEO asks the board to approve something, how should they engage? When should they take a deferring attitude (\"Sure, as long as I don't see any particular reason to say no\"), a sanity check attitude (\"I'll ask a few questions to make sure this is making sense, then approve if nothing jumps out at me\"), a full ownership attitude (\"I need to personally be convinced this is the best thing for the organization\"), etc.?\nHow much should each board member invest in educating themselves about the organization? What's the best way to do that?\nHow does the board know whether the CEO is doing a good job? What kind of situation should trigger seriously considering looking for a new one?\nHow does a board member know whether the board is doing a good job? How should they decide when another board member should be replaced?\nIn my experience, most board members just aren't walking around with any particular thought-through take on questions like this. And as far as I can tell, there's a shortage of good5 guidance on questions like this for both for-profit and nonprofit boards. For example:\nI've found no standard reference on topics like this, and very few resources that even seem aimed at directly and clearly answering such questions. \nThe best book on this topic I've seen is Boards that Lead by Ram Charan, focused on for-profit boards (but pretty good IMO).\n \nBut this isn't, like, a book everyone knows to read; I found it by asking lots of people for suggestions, coming up empty, Googling wildly around and skimming like 10 books that said they were about boards, and deciding that this one seemed pretty good.\nOne of the things I do as a board member is interview other prospective board members about their answers to questions like this. In my experience, they answer most of the above questions with something like \"Huh, I don't really know. What do you think?\" \nMost boards I've seen seem to - by default - either: \nGet way too involved in lots of decisions to the point where it feels like they're micromanaging the CEO and/or just obsessively engaging on whatever topics the CEO happens to bring to their attention; or \n \nTake a \"We're just here to help\" attitude and rubber-stamp whatever the CEO suggests, including things I'll argue below should be core duties for the board (e.g., adding and removing board members).\nI'm not sure I've ever seen a board with a formal, recurring process for reviewing each board member's performance. :/\nTo the extent I have seen a relatively common, coherent vision of \"what board members are supposed to be doing,\" it's pretty well summarized in Reid Hoffman's interview in The High-Growth Handbook:\nI use ... a red light, yellow light, green light framework between the board and the CEO. Roughly, green light is, “You’re the CEO. Make the call. We’re advisory.” Now, we may say that on very big things—selling the company—we should talk about it before you do it. And that may shift us from green light, if we don’t like the conversation. But a classic young, idiot board member will say, “Well, I’m giving you my expertise and advice. You should do X, Y, Z.” But the right framework for board members is: You’re the CEO. You make the call. We’re advisory.\n Red lights also very easy. Once you get to red light, the CEO—who, by the way, may still be in place—won’t be the CEO in the future. The board knows they need a new CEO. It may be with the CEO’s knowledge, or without it. Obviously, it’s better if it’s collaborative ...\n Yellow means, “I have a question about the CEO. Should we be at green light or not?” And what happens, again under inexperienced or bad board members, is they check a CEO into yellow indefinitely. They go, “Well, I’m not sure…” The important thing with yellow light is that you 1) coherently agree on it as a board and 2) coherently agree on what the exit conditions are. What is the limited amount of time that we’re going to be in yellow while we consider whether we move back to green or move to red? And how do we do that, so that we do not operate for a long time on yellow? Because with yellow light, you’re essentially hamstringing the CEO and hamstringing the company. It’s your obligation as a board to figure that out.\n \nI like this quite a bit (hence the long blockquote), but I don't think it covers everything. The board is mostly there to oversee the CEO, and they should mostly be advisory when they're happy with the CEO. But I think there are things they ought to be actively thinking about and engaging in even during \"green light.\"\nSo what DOES make a good board member?\nHere is my current take, based on a combination of (a) my thoughts after serving on and interacting with a large number of nonprofit boards; (b) my attempts to adapt conventional wisdom about for-profit boards (especially from the book I mentioned above); (c) divine revelation. \nI'll go through:\nWhat I see as the main duties of the board specifically - things the board has to do well, and can't leave to the CEO and other staff.\nMy basic take that the ideal board should do these main duties well, while staying out of the way otherwise.\nThe main qualities I think the ideal board member should have - and some common ways of choosing board members that seem bad to me.\nA few more random thoughts on board practices that seem especially important and/or promising.\n(I don't claim any of these points are original, and almost everything can be found in some writing on boards somewhere, but I don't know of a reasonably comprehensive, concise place to get something similar to the below.)\nThe board's main duties\nI agree with the basic spirit of Hoffman's philosophy above: the board should not be trying to \"run the company\" (they're too low-engagement and don't know enough about it), and should instead be focused on a small number of big-picture questions like \"How is the CEO doing?\"\nAnd I do think the board's #1 and most fundamental job is evaluating the CEO's performance. The board is the only reliable source of accountability for the CEO - even more so at a nonprofit than a for-profit, since bad CEO performance won't necessarily show up via financial problems or unhappy shareholders.6 (As noted below, I think many nonprofit boards have no formal process for reviewing the CEO's performance, and the ones that do often have a lightweight/underwhelming one.)\nBut I think the board also needs to take a leading role - and not trust the judgment of the CEO and other staff - when it comes to:\nOverseeing decisions that could importantly reduce the board's powers. The CEO might want to enter into an agreement with a third party that is binding on the nonprofit and therefore on the board (for example, \"The nonprofit will now need permission from the third party in order to do X\"); or transfer major activities and assets to affiliated organizations that the board doesn't control (for example, when Open Philanthropy split off from GiveWell); or revise the organization's mission statement, bylaws,7 etc.; or other things that significantly reduce the scope of what the board has control over. The board needs to represent its own interests in these cases, rather than deferring to the CEO (whose interests may be different).\nOverseeing big-picture irreversible risks and decisions that could importantly affect future CEOs. For example, I think the board needs to be anticipating any major source of risk that a nonprofit collapses (financially or otherwise) - if this happens, the board can't simply replace the CEO and move on, because the collapse affects what a future CEO is able to do. (What risks and decisions are big enough? Some thoughts in a footnote.8)\nAll matters relating to the composition and performance of the board itself. Adding new board members, removing board members, and reviewing the board's own performance are things that the board needs to be responsible for, not the CEO. If the CEO is controlling the composition of the board, this is at odds with the board's role in overseeing the CEO.\nEngaging on main duties, staying out of the way otherwise\nI think the ideal board member's behavior is roughly along the lines of the following:\nActively, intensively engage in the main duties from the previous section. Board members should be knowledgeable about, and not defer to the CEO on, (a) how the CEO is performing; (b) how the board is performing, and who should be added and removed; (c) spotting (and scanning the horizon for) events that could reduce the board's powers, or lead to big enough problems and restrictions so as to irreversibly affect what future CEOs are able to do. \nIdeally they should be focusing their questions in board meetings on these things, as well as having some way of gathering information about them that doesn't just rely on hearing directly from the CEO. (Some ideas for this are below.) When reviewing financial statements and budgets, they should be focused mostly on the risk of major irreversible problems (such as going bankrupt or failing to be compliant); when hearing about activities, they should be focused mostly on what they reflect about the CEO's performance; etc.\nBe advisory (\"stay out of the way\") otherwise. Meetings might contain all sorts of updates and requests for reactions. I think a good template for a board member, when sharing an opinion or reaction, is either to (a) explain as they're talking why this topic is important for the board's main duties; or (b) say (or imply) something like \"I'm curious / offering an opinion about ___, but if this isn't helpful, please ignore it, and please don't hesitate to move the meeting to the next topic as soon as this stops feeling productive.\"\nThe combination of intense engagement on core duties and \"staying out of the way\" otherwise can make this a very weird role. An organization will often go years without any serious questions about the CEO's performance or other matters involving core duties. So a board member ought to be ready to quietly nod along and stay out of the way for very long stretches of time, while being ready to get seriously involved and engaged when this makes sense. \nAim for division of labor. I think a major problem with nonprofit boards is that, by default, it's really unclear which board member is responsible for what. I think it's a good idea for board members to explicitly settle this via assigning:\nSpecialists (\"Board member X is reviewing the financials; the rest of us are mostly checked-out and/or sanity-checking on that\"); \nSubcommittees (\"Board members X and Y will look into this particular aspect of the CEO's performance\"); \nA Board Chair or Lead Independent Director9 who is the default person to take responsibility for making sure the board is doing its job well (this could include suggesting and assigning responsibility for some of the ideas I list below; helping to set the agenda for board meetings so it isn't just up to the CEO; etc.)\nThis can further help everyone find a balance between engaging and staying out of the way.\nWho should be on the board?\nOne answer is that it should be whoever can do well at the duties outlined above - both in terms of substance (can they accurately evaluate the CEO's performance, identify big-picture irreversible risks, etc.?) and in terms of style (do they actively engage on their main duties and stay out of the way otherwise?)\nBut to make things a bit more boiled-down and concrete, I think perhaps the most important test for a board member is: they'll get the CEO replaced if this would be good for the nonprofit's mission, and they won't if it wouldn't be.\nThis is the most essential function of the board, and it implies a bunch of things about who makes a good board member: \nThey need to do a great job understanding and representing the nonprofit's mission, and care deeply about that mission - to the point of being ready to create conflict over it if needed (and only if needed). \nA key challenge of nonprofits is that they have no clear goal, only a mission statement that is open to interpretation. And if two different board members interpret the mission differently - or are focused on different aspects of it - this could intensely color how they evaluate the CEO, which could be a huge deal for the nonprofit.\n \nFor example, if a nonprofit's mission is \"Help animals everywhere,\" does this mean \"Help as many animals as possible\" (which might indicate a move toward focusing on farm animals) or \"Help animals in the same way the nonprofit traditionally has\" or something else? How does it imply the nonprofit should make tradeoffs between helping e.g. dogs, cats, elephants, chickens, fish or even insects? How a board member answers questions like this seems central to how their presence on the board is going to affect the nonprofit.\nThey need to have a personality and position capable of challenging the CEO (though also capable of staying out of the way). \nA common problem I see is that some board member is (a) not very engaged with the nonprofit itself, but (b) highly values their personal relationship with the CEO and other board members. This seems like a bad combination, but unfortunately a common one. Board members need to be willing and able to create conflict in order to do the right thing for the nonprofit.\n \nLimiting the number of board members who are employees (reporting to the CEO) seems important for this reason.\n \nIf you can't picture a board member \"making waves,\" they probably shouldn't be on the board - that attitude will seem fine more than 90% of the time, but it won't work well in the rare cases where the board really matters.\n \nOn the other hand, if someone is only comfortable \"making waves\" and feels useless and out of sorts when they're just nodding along, that person shouldn't be on the board either. As noted above, board members need to be ready for a weird job that involves stepping up when the situation requires it, but staying out of the way when it doesn't. \nThey should probably have a well-developed take on what their job is as a board member. Board members who can't say much about where they expect to be highly engaged, vs. casually advisory - and how they expect to invest in getting the knowledge they need to do a good job leading on particular issues - don't seem like great bets to step up when they most need to (or stay out of the way when they should).\nIn my experience, most nonprofits are not looking for these qualities in board members. They are, instead, often looking for things like:\nCelebrity and reputation - board members who are generally impressive and well-regarded and make the nonprofit look good. Unfortunately, I think such people often just don't have much time or interest for their job. Many are also uninterested in causing any conflict, which makes them basically useless as board members IMO.\nFundraising - a lot of nonprofits pretty much explicitly just try to put people on the board who will help raise money for them. This seems bad for governance.\nNarrow expertise on some topic that is important for the nonprofit. I don't really think this is what nonprofits should be seeking from board members,10 except to the extent it ties deeply into the board members' core duties, e.g., where it's important to have an independent view on technical topic X in order to do a good job evaluating the CEO.\nI think a good profile for a board member is someone who cares greatly about the nonprofit's mission, and wants it to succeed, to the point where they're ready to have tough conversations if they see the CEO falling short. Examples of such people might be major funders, or major stakeholders (e.g., a community leader from a community of people the nonprofit is trying to help).\nA few practices that seem good\nI'll anticlimactically close with a few practices that seem helpful to me. These are mostly pretty generic practices, useful for both for-profit and nonprofit boards, that I have seen working in practice but also seen too many boards going without. They don't fully address the weirdnesses discussed above (especially the stuff specific to nonprofit as opposed to for-profit boards), but they seem to make things some amount better.\nKeeping it simple for low-stakes organizations. If a nonprofit is a year old and has 3 employees, it probably shouldn't be investing a ton of its energy in having a great board (especially since this is hard). \nA key question is: \"If the board just stays checked out and doesn't hold the CEO accountable, what's the worst thing that can happen?\" If the answer is something like \"The nonprofit's relatively modest budget is badly spent,\" then it might not be worth a huge investment in building a great board (and in taking some of the measures listed below). Early-stage nonprofits often have a board consisting of 2-3 people the founder trusts a lot (ideally in a \"you'd fire me if it were the right thing to do\" sense rather than in a \"you've always got my back\" sense), which seems fine. The rest of these ideas are for when the stakes are higher.\nFormal board-staff communication channels. A very common problem I see is that:\nBoard members know almost nothing about the organization, and so are hesitant to engage in much of anything.\nEmployees of the organization know far more, but find the board members mysterious/unapproachable/scary, and don't share much information with them.\nI've seen this dynamic improved some amount by things like a staff liaison: a board member who is designated with the duty, \"Talk to employees a lot, offer them confidentiality as requested, try to build trust, and gather information about how things are going.\" Things like regular \"office hours\" and showing up to company events can help with this.\nViewing board seats as limited. It seems unlikely that a board should have more than 10 members (and even 10 seems like a lot), since it's hard to have a productive meeting past that point.11 When considering a new addition to the board, I think the board should be asking something much closer to \"Is this one of the 10 best people in the world to sit on this board?\" than to \"Is this person fine?\"\nRegular CEO reviews.\nMany nonprofits don't seem to have any formal, regular process for reviewing the CEO's performance; I think it's important to do this.\nThe most common format I've seen is something like: one board member interviews the CEO's direct reports, and perhaps some other people throughout the company, and integrates this with information about the organization's overall progress and accomplishments (often presented by the organization itself, but they might ask questions about it) to provide a report on what the CEO is doing well and could do better. I think this approach has a lot of limitations - staff are often hesitant to be forthcoming with a board member (even when promised anonymity), and the board member often lacks a lot of key information - but even with those issues, it tends to be a useful exercise.\nClosed sessions. I think it's important for the board to have \"closed sessions\" where board members can talk frankly without the CEO, other employees, etc. hearing. I think a common mistake is to ask \"Does anyone want the closed session today or can we skip it?\" - this puts the onus on board members to say \"Yes, I would like a closed session,\" which then implies they have something negative to say. I think it's better for whoever's running the meetings to identify logical closed sessions (e.g., \"The board minus employees\"), allocate time for them and force them to happen.\nRegular board reviews. It seems like it would be a good idea for board members to regularly assess each other's performance, and the performance of the board as a whole. But I've actually seen very little of this done in practice and I can't point to versions of it that seem to have some track record of working well. It does seem like a good idea though!\nConclusion\nThe board is the only body at a nonprofit that can hold the CEO accountable to accomplishing the mission. I broadly feel like most nonprofit boards just aren't very well-suited to this duty, or necessarily to much of anything. It's an inherently weird structure that seems difficult to make work. \nI wish someone would do a great job studying and laying out how nonprofit boards should be assembled, how they should do their job and how they can be held accountable. You can think of this post as my quick, informal shot at that.Footnotes\n I'm using the term \"CEO\" throughout, although the chief executive at a non profit sometimes has another title, such as \"Executive Director.\" ↩\n A lot of this piece is about how the fundamental setup of a nonprofit board leads to the kinds of problems and dynamics I'm describing. This doesn't mean we should necessarily think there's any way to fix it or any better alternative. It just means that this setup seems to bring a lot of friction points and challenges that most relationships between supervisor-and-supervised don't seem to have, which can make the experience of interacting with a board feel vaguely unlike what we're used to in other contexts, or \"weird.\"\n People who have interacted with tons of boards might get so used to these dynamics that they no longer feel weird. I haven't reached that point yet myself though. ↩\n The fact that the nonprofit's goals aren't clearly defined and have no clear metric (and often aren't susceptible to measurement at all) is a pretty general challenge of nonprofits, but I think it especially shows up for a structure (the board) that is already weird in the various other ways I'm describing. ↩\n Superficially, you could make most of the same complaints about shareholders of a for-profit company. But:\nShareholders are the people who ultimately make or lose money if the company does well or poorly (you can think of this as a form of accountability). By contrast, nonprofit board members often have very little (or only an idiosyncratic) personal connection to and investment in the organization.\nShareholders compensate for their low engagement by picking representatives (a board) whom they can hold accountable for the company's performance. Nonprofit board members are the representatives, and aren't accountable to anyone. ↩\n Especially \"good and concise.\" Most of the points I make here can be found in some writings on boards somewhere, but it's hard to find sensible-seeming and comprehensive discussions of what the board should be doing and who should be on it. ↩\n Part of the CEO's job is fundraising, and if they do a bad job of this, it's going to be obvious. But that's only part of the job. At a nonprofit, a CEO could easily be bringing in plenty of money and just doing a horrible job at the mission - and if the board isn't able to learn this and act on it, it seems like very bad news. ↩\n The charter and bylaws are like the \"constitution\" of a nonprofit, laying out how its governance works. ↩\n This is a judgment call, and one way to approach it would be to reserve something like 1 hour of full-board meeting time per year for talking about these sorts of things (and pouring in more time if at least, like, 1/3 of the board thinks something is a big deal).\n Some examples of things I think are and aren't usually a big enough deal to start paying serious attention to:\nBig enough deal: financial decisions that increase the odds of going \"belly-up\" (running out of money and having to fold) by at least 10 percentage points. Not a big enough deal: spending money in ways that are arguably bad uses of money, having a lowish-but-not-too-far-off-of-peer-organizations amount of runway.\nBig enough deal: deficiencies in financial controls that an auditor is highlighting, or a lack of audit altogether, until a plan is agreed to to address these things. Not a big enough deal: most other stuff in this category.\nBig enough deal: organizations with substantial \"PR risk\" exposure should have a good team for assessing this and a \"crisis plan\" in case something happens. Not a big enough deal: specific organizational decisions and practices that you are not personally offended by or find unethical, but could imagine a negative article about. (If you do find them substantively unethical, I think that's a big enough deal.)\nBig enough deal: transferring like 1/3 or more of valuable things the nonprofit has (intellectual property, money, etc.) to another entity not controlled by the board. Not a big enough deal: starting an affiliate organization primarily for taking donations in another country or something.\nBig enough deal: doubling or halving the workforce. Not a big enough deal: smaller hirings and firings. ↩\n Sometimes the Board Chair is the CEO, and sometimes the Chair is an employee of the company who also sits on the board. In these cases, I think it's good for there to be a separate Lead Independent Director who is not employed by the company and is therefore exclusively representing the Board. They can help set agendas, lead meetings, and take responsibility by default when it's otherwise unclear who would do so. ↩\n Nonprofits can get expertise on topic X by hiring experts on X to advise them. The question is: when is it important to have an expert on X evaluating the CEO? ↩\n Though it could be fine and even interesting to have giant boards - 20 people, 50 or more - that have some sort of \"executive committee\" of 10 or fewer people doing basically all of the meetings and all of the work (with the rest functioning just as very passive, occasionally-voting equivalents of \"shareholders\"). Just assume I'm talking about the \"executive committee\" type thing here. ↩\n", "url": "https://www.cold-takes.com/nonprofit-boards-are-weird-2/", "title": "Nonprofit Boards are Weird", "source": "cold.takes", "source_type": "blog", "date_published": "2022-06-23", "id": "dff728c75d5f4e38ec00a4569314b1e8"} -{"text": "I've been working on a new series of posts about the most important century. \nThe original series focused on why and how this could be the most important century for humanity. But it had relatively little to say about what we can do today to improve the odds of things going well.\nThe new series will get much more specific about the kinds of events that might lie ahead of us, and what actions today look most likely to be helpful.\nA key focus of the new series will be the threat of misaligned AI: AI systems disempowering humans entirely, leading to a future that has little to do with anything humans value. (Like in the Terminator movies, minus the time travel and the part where humans win.)\nMany people have trouble taking this \"misaligned AI\" possibility seriously. They might see the broad point that AI could be dangerous, but they instinctively imagine that the danger comes from ways humans might misuse it. They find the idea of AI itself going to war with humans to be comical and wild. I'm going to try to make this idea feel more serious and real.\nAs a first step, this post will emphasize an unoriginal but extremely important point: the kind of AI I've discussed could defeat all of humanity combined, if (for whatever reason) it were pointed toward that goal. By \"defeat,\" I don't mean \"subtly manipulate us\" or \"make us less informed\" or something like that - I mean a literal \"defeat\" in the sense that we could all be killed, enslaved or forcibly contained.\nI'm not talking (yet) about whether, or why, AIs might attack human civilization. That's for future posts. For now, I just want to linger on the point that if such an attack happened, it could succeed against the combined forces of the entire world. \nI think that if you believe this, you should already be worried about misaligned AI,1 before any analysis of how or why an AI might form its own goals. \nWe generally don't have a lot of things that could end human civilization if they \"tried\" sitting around. If we're going to create one, I think we should be asking not \"Why would this be dangerous?\" but \"Why wouldn't it be?\"\nBy contrast, if you don't believe that AI could defeat all of humanity combined, I expect that we're going to be miscommunicating in pretty much any conversation about AI. The kind of AI I worry about is the kind powerful enough that total civilizational defeat is a real possibility. The reason I currently spend so much time planning around speculative future technologies (instead of working on evidence-backed, cost-effective ways of helping low-income people today - which I did for much of my career, and still think is one of the best things to work on) is because I think the stakes are just that high. \nBelow:\nI'll sketch the basic argument for why I think AI could defeat all of human civilization. \nOthers have written about the possibility that \"superintelligent\" AI could manipulate humans and create overpowering advanced technologies; I'll briefly recap that case.\n \nI'll then cover a different possibility, which is that even \"merely human-level\" AI could still defeat us all - by quickly coming to rival human civilization in terms of total population and resources.\n \nAt a high level, I think we should be worried if a huge (competitive with world population) and rapidly growing set of highly skilled humans on another planet was trying to take down civilization just by using the Internet. So we should be worried about a large set of disembodied AIs as well. \nI'll briefly address a few objections/common questions: \nHow can AIs be dangerous without bodies? \n \nIf lots of different companies and governments have access to AI, won't this create a \"balance of power\" so that no one actor is able to bring down civilization? \n \nWon't we see warning signs of AI takeover and be able to nip it in the bud?\n \nIsn't it fine or maybe good if AIs defeat us? They have rights too. \nClose with some thoughts on just how unprecedented it would be to have something on our planet capable of overpowering us all.\nHow AI systems could defeat all of us\nThere's been a lot of debate over whether AI systems might form their own \"motivations\" that lead them to seek the disempowerment of humanity. I'll be talking about this in future pieces, but for now I want to put it aside and imagine how things would go if this happened. \nSo, for what follows, let's proceed from the premise: \"For some weird reason, humans consistently design AI systems (with human-like research and planning abilities) that coordinate with each other to try and overthrow humanity.\" Then what? What follows will necessarily feel wacky to people who find this hard to imagine, but I think it's worth playing along, because I think \"we'd be in trouble if this happened\" is a very important point.\nThe \"standard\" argument: superintelligence and advanced technology\nOther treatments of this question have focused on AI systems' potential to become vastly more intelligent than humans, to the point where they have what Nick Bostrom calls \"cognitive superpowers.\"2 Bostrom imagines an AI system that can do things like:\nDo its own research on how to build a better AI system, which culminates in something that has incredible other abilities.\nHack into human-built software across the world.\nManipulate human psychology.\nQuickly generate vast wealth under the control of itself or any human allies.\nCome up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.\nDevelop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries. \n(Wait But Why reasons similarly.3)\nI think many readers will already be convinced by arguments like these, and if so you might skip down to the next major section.\nBut I want to be clear that I don't think the danger relies on the idea of \"cognitive superpowers\" or \"superintelligence\" - both of which refer to capabilities vastly beyond those of humans. I think we still have a problem even if we assume that AIs will basically have similar capabilities to humans, and not be fundamentally or drastically more intelligent or capable. I'll cover that next.\nHow AIs could defeat humans without \"superintelligence\"\nIf we assume that AIs will basically have similar capabilities to humans, I think we still need to worry that they could come to out-number and out-resource humans, and could thus have the advantage if they coordinated against us.\nHere's a simplified example (some of the simplifications are in this footnote4) based on Ajeya Cotra's \"biological anchors\" report:\nI assume that transformative AI is developed on the soonish side (around 2036 - assuming later would only make the below numbers larger), and that it initially comes in the form of a single AI system that is able to do more-or-less the same intellectual tasks as a human. That is, it doesn't have a human body, but it can do anything a human working remotely from a computer could do. \nI'm using the report's framework in which it's much more expensive to train (develop) this system than to run it (for example, think about how much Microsoft spent to develop Windows, vs. how much it costs for me to run it on my computer). \nThe report provides a way of estimating both how much it would cost to train this AI system, and how much it would cost to run it. Using these estimates (details in footnote)5 implies that once the first human-level AI system is created, whoever created it could use the same computing power it took to create it in order to run several hundred million copies for about a year each.6\nThis would be over 1000x the total number of Intel or Google employees,7 over 100x the total number of active and reserve personnel in the US armed forces, and something like 5-10% the size of the world's total working-age population.8\nAnd that's just a starting point. \nThis is just using the same amount of resources that went into training the AI in the first place. Since these AI systems can do human-level economic work, they can probably be used to make more money and buy or rent more hardware,9 which could quickly lead to a \"population\" of billions or more.\n \nIn addition to making more money that can be used to run more AIs, the AIs can conduct massive amounts of research on how to use computing power more efficiently, which could mean still greater numbers of AIs run using the same hardware. This in turn could lead to a feedback loop and explosive growth in the number of AIs.\nEach of these AIs might have skills comparable to those of unusually highly paid humans, including scientists, software engineers and quantitative traders. It's hard to say how quickly a set of AIs like this could develop new technologies or make money trading markets, but it seems quite possible for them to amass huge amounts of resources quickly. A huge population of AIs, each able to earn a lot compared to the average human, could end up with a \"virtual economy\" at least as big as the human one.\nTo me, this is most of what we need to know: if there's something with human-like skills, seeking to disempower humanity, with a population in the same ballpark as (or larger than) that of all humans, we've got a civilization-level problem.\nA potential counterpoint is that these AIs would merely be \"virtual\": if they started causing trouble, humans could ultimately unplug/deactivate the servers they're running on. I do think this fact would make life harder for AIs seeking to disempower humans, but I don't think it ultimately should be cause for much comfort. I think a large population of AIs would likely be able to find some way to achieve security from human shutdown, and go from there to amassing enough resources to overpower human civilization (especially if AIs across the world, including most of the ones humans were trying to use for help, were coordinating). \nI spell out what this might look like in an appendix. In brief:\nBy default, I expect the economic gains from using AI to mean that humans create huge numbers of AIs, integrated all throughout the economy, potentially including direct interaction with (and even control of) large numbers of robots and weapons. \n(If not, I think the situation is in many ways even more dangerous, since a single AI could make many copies of itself and have little competition for things like server space, as discussed in the appendix.)\nAIs would have multiple ways of obtaining property and servers safe from shutdown. \nFor example, they might recruit human allies (through manipulation, deception, blackmail/threats, genuine promises along the lines of \"We're probably going to end up in charge somehow, and we'll treat you better when we do\") to rent property and servers and otherwise help them out. \n \nOr they might create fakery so that they're able to operate freely on a company's servers while all outward signs seem to show that they're successfully helping the company with its goals.\nA relatively modest amount of property safe from shutdown could be sufficient for housing a huge population of AI systems that are recruiting further human allies, making money (via e.g. quantitative finance), researching and developing advanced weaponry (e.g., bioweapons), setting up manufacturing robots to construct military equipment, thoroughly infiltrating computer systems worldwide to the point where they can disable or control most others' equipment, etc. \nThrough these and other methods, a large enough population of AIs could develop enough military technology and equipment to overpower civilization - especially if AIs across the world (including the ones humans were trying to use) were coordinating with each other.\nSome quick responses to objections\nThis has been a brief sketch of how AIs could come to outnumber and out-resource humans. There are lots of details I haven't addressed.\nHere are some of the most common objections I hear to the idea that AI could defeat all of us; if I get much demand I can elaborate on some or all of them more in the future.\nHow can AIs be dangerous without bodies? This is discussed a fair amount in the appendix. In brief: \nAIs could recruit human allies, tele-operate robots and other military equipment, make money via research and quantitative trading, etc. \nAt a high level, I think we should be worried if a huge (competitive with world population) and rapidly growing set of highly skilled humans on another planet was trying to take down civilization just by using the Internet. So we should be worried about a large set of disembodied AIs as well. \nIf lots of different companies and governments have access to AI, won't this create a \"balance of power\" so that nobody is able to bring down civilization? \nThis is a reasonable objection to many horror stories about AI and other possible advances in military technology, but if AIs collectively have different goals from humans and are willing to coordinate with each other11 against us, I think we're in trouble, and this \"balance of power\" idea doesn't seem to help. \n What matters is the total number and resources of AIs vs. humans.\nWon't we see warning signs of AI takeover and be able to nip it in the bud? I would guess we would see some warning signs, but does that mean we could nip it in the bud? Think about human civil wars and revolutions: there are some warning signs, but also, people go from \"not fighting\" to \"fighting\" pretty quickly as they see an opportunity to coordinate with each other and be successful.\nIsn't it fine or maybe good if AIs defeat us? They have rights too. \nMaybe AIs should have rights; if so, it would be nice if we could reach some \"compromise\" way of coexisting that respects those rights. \nBut if they're able to defeat us entirely, that isn't what I'd plan on getting - instead I'd expect (by default) a world run entirely according to whatever goals AIs happen to have.\nThese goals might have essentially nothing to do with anything humans value, and could be actively counter to it - e.g., placing zero value on beauty and having zero attempts to prevent or avoid suffering).\nRisks like this don't come along every day\nI don't think there are a lot of things that have a serious chance of bringing down human civilization for good.\nAs argued in The Precipice, most natural disasters (including e.g. asteroid strikes) don't seem to be huge threats, if only because civilization has been around for thousands of years so far - implying that natural civilization-threatening events are rare.\nHuman civilization is pretty powerful and seems pretty robust, and accordingly, what's really scary to me is the idea of something with the same basic capabilities as humans (making plans, developing its own technology) that can outnumber and out-resource us. There aren't a lot of candidates for that.12\nAI is one such candidate, and I think that even before we engage heavily in arguments about whether AIs might seek to defeat humans, we should feel very nervous about the possibility that they could.\nWhat about things like \"AI might lead to mass unemployment and unrest\" or \"AI might exacerbate misinformation and propaganda\" or \"AI might exacerbate a wide range of other social ills and injustices\"13? I think these are real concerns - but to be honest, if they were the biggest concerns, I'd probably still be focused on helping people in low-income countries today rather than trying to prepare for future technologies. \nPredicting the future is generally hard, and it's easy to pour effort into preparing for challenges that never come (or come in a very different form from what was imagined).\nI believe civilization is pretty robust - we've had huge changes and challenges over the last century-plus (full-scale world wars, many dramatic changes in how we communicate with each other, dramatic changes in lifestyles and values) without seeming to have come very close to a collapse.\nSo if I'm engaging in speculative worries about a potential future technology, I want to focus on the really, really big ones - the ones that could matter for billions of years. If there's a real possibility that AI systems will have values different from ours, and cooperate to try to defeat us, that's such a worry.\nSpecial thanks to Carl Shulman for discussion on this post.\nAppendix: how AIs could avoid shutdown\nThis appendix goes into detail about how AIs coordinating against humans could amass resources of their own without humans being able to shut down all \"misbehaving\" AIs. \nIt's necessarily speculative, and should be taken in the spirit of giving examples of how this might work - for me, the high-level concern is that a huge, coordinating population of AIs with similar capabilities to humans would be a threat to human civilization, and that we shouldn't count on any particular way of stopping it such as shutting down servers.\nI'll discuss two different general types of scenarios: (a) Humans create a huge population of AIs; (b) Humans move slowly and don't create many AIs.\nHow this could work if humans create a huge population of AIs\nI think a reasonable default expectation is that humans do most of the work of making AI systems incredibly numerous and powerful (because doing so is profitable), which leads to a vulnerable situation. Something roughly along the lines of:\nThe company that first develops transformative AI quickly starts running large numbers of copies (hundreds of millions or more), which are used to (a) do research on how to improve computational efficiency and run more copies still; (b) develop valuable intellectual property (trading strategies, new technologies) and make money.\nOver time, AI systems are rolled out widely throughout society. Their numbers grow further, and their role in the economy grows: they are used in (and therefore have direct interaction with) high-level decision-making at companies, perhaps operating large numbers of cars and/or robots, perhaps operating military drones and aircraft, etc. (This seems like a default to me over time, but it isn't strictly necessary for the situation to be risky, as I'll go through below.)\nIn this scenario, the AI systems are malicious (as we've assumed), but this doesn't mean they're constantly causing trouble. Instead, they're mostly waiting for an opportunity to team up and decisively overpower humanity. In the meantime, they're mostly behaving themselves, and this is leading to their numbers and power growing. \nThere are scattered incidents of AI systems' trying to cause trouble,14 but this doesn't cause the whole world to stop using AI or anything.\n \nA reasonable analogy might be to a typical civil war or revolution: the revolting population mostly avoids isolated, doomed attacks on its government, until it sees an opportunity to band together and have a real shot at victory.\n \n(Paul Christiano's What Failure Looks Like examines this general flavor of scenario in a bit more detail.)\nIn this scenario, the AIs face a challenge: if it becomes obvious to everyone that they are trying to defeat humanity, humans could attack or shut down a few concentrated areas where most of the servers are, and hence drastically reduce AIs' numbers. So the AIs need a way of getting one or more \"AI headquarters\": property they control where they can safely operate servers and factories, do research, make plans and construct robots/drones/other military equipment. \nTheir goal is ultimately to have enough AIs, robots, etc. to be able to defeat the rest of humanity combined. This might mean constructing overwhelming amounts of military equipment, or thoroughly infiltrating computer systems worldwide to the point where they can disable or control most others' equipment, or researching and deploying extremely powerful weapons (e.g., bioweapons), or a combination.\nHere are some ways they could get to that point:\nThey could recruit human allies through many different methods - manipulation, deception, blackmail and other threats, genuine promises along the lines of \"We're probably going to end up in charge somehow, and we'll treat you better when we do.\" \nHuman allies could be given valuable intellectual property (developed by AIs), given instructions for making lots of money, and asked to rent their own servers and acquire their own property where an \"AI headquarters\" can be set up. Since the \"AI headquarters\" would officially be human property, it could be very hard for authorities to detect and respond to the danger.\n \nVia threats, AIs might be able to get key humans to cooperate with them - such as political leaders, or the CEOs of companies running lots of AIs. This would open up further strategies.\nAs assumed above, particular companies are running huge numbers of AIs. The AIs being run by these companies might find security holes in the companies' servers (this isn't the topic of this piece, but my general impression is that security holes are widespread and that reasonably competent people can find many of them)15, and thereby might find opportunities to create durable \"fakery\" about what they're up to. \nE.g., they might set things up so that as far as humans can tell, it looks like all of the AI systems are hard at work creating profit-making opportunities for the company, when in fact they're essentially using the server farm as their headquarters - and/or trying to establish a headquarters somewhere else (by recruiting human allies, sending money to outside bank accounts, using that money to acquire property and servers, etc.)\nIf AIs are in wide enough use, they might already be operating lots of drones and other military equipment, in which case it could be pretty straightforward to be able to defend some piece of territory - or to strike a deal with some government to enlist its help in doing so.\nAIs could mix-and-match the above methods and others: for example, creating \"fakery\" long enough to recruit some key human allies, then attempting to threaten and control humans in key positions of power to the point where they control solid amounts of military resources, then using this to establish a \"headquarters.\"\nIn any of these cases, once one or more \"AI headquarters\" are established:\nIt can be expanded as the AIs headquartered there make money, purchase more property and computing power, build more factories, etc.16\nIt can develop enough destructive potential and human allies that humans would be hesitant to attack it (as many relatively small countries have today). It would be clear that AIs at the headquarters are seeking autonomy, accumulating resources and taking some unethical actions, but the full scale and nature of the threat could easily remain ambiguous.\nEven if the threat is clear, AIs could potentially maintain a huge population in a relatively small space (potentially underground). They could also split up across many different headquarters (including some where they are simply using humans' hardware while hiding their presence). \nEither way, this large-but-hidden population could be recruiting human allies, developing advanced weaponry, looking for ways to take control of human' weapons (potentially possible if they greatly outnumber the humans working on cybersecurity, and if they can coordinate with the AIs \"helping\" those humans), etc. \n \nThey could also be working with (unwitting, or allied) humans to manufacture large numbers of robots with back-doors allowing the AIs to take control.\nOnce the headquarters has built up enough resources to go on offense, AIs around the world could coordinate with the headquarters. Humans could shut down AIs that they notice doing this, but they might have a very tough time getting value out of their servers and AI-controlled robots; this could make it easy for the AIs at the \"AI headquarters\" to out-resource humans.\nWhat if humans move slowly and don't create many AIs?\nThe above scenario has humans creating large numbers of AIs, such that the AIs just need to find a way to coordinate and acquire a safe \"headquarters\" in order to defeat us.\nWhat if humans moved more slowly, intentionally restricting human-level AI to a tiny portion of the available computing resources? Could a small number of AIs pose a risk to humanity?\nIn this world, we would have what Carl Shulman refers to as \"dry tinder everywhere, waiting for sparks.\" Anyone who can buy or rent a large amount of computing power can create a large number of AIs, which can produce a large amount of money and research, leading to still more AIs. \nSo a single AI could hack into enough servers17 to make a few copies of itself; recruit a few human allies; and start making money, acquiring more server space, etc. until its human allies are running a huge number of AIs. This could all be done in difficult-to-detect ways (it might e.g. just look like a set of humans renting a bunch of servers to run quantitative finance strategies).\nSo in this world, I think our concern should be any AI that is able to find enough security holes to attain that kind of freedom. Given the current state of cybersecurity, that seems like a big concern.\n Footnotes\n Assuming you accept other points made in the most important century series, e.g. that AI that can do most of what humans do to advance science and technology could be developed this century. ↩\n See Superintelligence chapter 6. ↩\n See the \"Nanotechnology blue box,\" in particular. ↩\nThe report estimates the amount of computing power it would take to train (create) a transformative AI system, and the amount of computing power it would take to run one. This is a bounding exercise and isn't supposed to be literally predicting that transformative AI will arrive in the form of a single AI system trained in a single massive run, but here I am interpreting the report that way for concreteness and simplicity.\nAs explained in the next footnote, I use the report's figures for transformative AI arriving on the soon side (around 2036). Using its central estimates instead would strengthen my point, but we'd then be talking about a longer time from now; I find it helpful to imagine how things could go in a world where AI comes relatively soon. ↩\n I assume that transformative AI ends up costing about 10^14 FLOP/s to run (this is about 1/10 the Bio Anchors central estimate, and well within its error bars) and about 10^30 FLOP to train (this is about 10x the Bio Anchors central estimate for how much will be available in 2036, and corresponds to about the 30th-percentile estimate for how much will be needed based on the \"short horizon\" anchor). That implies that the 10^30 FLOP needed to train a transformative model could run 10^16 seconds' worth of transformative AI models, or about 300 million years' worth. This figure would be higher if we use Bio Anchors's central assumptions, rather than assumptions consistent with transformative AI being developed on the soon side. ↩\n They might also run fewer copies of scaled-up models or more copies of scaled-down ones, but the idea is that the total productivity of all the copies should be at least as high as that of several hundred million copies of a human-ish model. ↩\nIntel, Google ↩\n Working-age population: about 65% * 7.9 billion =~ 5 billion. ↩\n Humans could rent hardware using money they made from running AIs, or - if AI systems were operating on their own - they could potentially rent hardware themselves via human allies or just via impersonating a customer (you generally don't need to physically show up in order to e.g. rent server time from Amazon Web Services). ↩\n(I had a speculative, illustrative possibility here but decided it wasn't in good enough shape even for a footnote. I might add it later.) ↩\n I don't go into detail about how AIs might coordinate with each other, but it seems like there are many options, such as by opening their own email accounts and emailing each other.  ↩\n Alien invasions seem unlikely if only because we have no evidence of one in millions of years. ↩\n Here's a recent comment exchange I was in on this topic. ↩\n E.g., individual AI systems may occasionally get caught trying to steal, lie or exploit security vulnerabilities, due to various unusual conditions including bugs and errors. ↩\n E.g., see this list of high-stakes security breaches and a list of quotes about cybersecurity, both courtesy of Luke Muehlhauser. For some additional not-exactly-rigorous evidence that at least shows that \"cybersecurity is in really bad shape\" is seen as relatively uncontroversial by at least one cartoonist, see: https://xkcd.com/2030/  ↩\n Purchases and contracts could be carried out by human allies, or just by AI systems themselves with humans willing to make deals with them (e.g., an AI system could digitally sign an agreement and wire funds from a bank account, or via cryptocurrency). ↩\n See above note about my general assumption that today's cybersecurity has a lot of holes in it. ↩\n", "url": "https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/", "title": "AI Could Defeat All Of Us Combined", "source": "cold.takes", "source_type": "blog", "date_published": "2022-06-09", "id": "f84609f7311e1dd0d0bf4dd4d2a70603"} -{"text": "I've claimed that the best way to learn is by writing about important topics. (Examples I've worked on include: which charity to donate to, whether life has gotten better over time, whether civilization is declining, whether AI could make this the most important century of all time for humanity.)\nBut I've also said this can be \"hard, taxing, exhausting and a bit of a mental health gauntlet,\" because:\nWhen trying to write about these sorts of topics, I often find myself needing to constantly revise my goals, and there's no clear way to know whether I'm making progress. That is: trying to write about a topic that I'm learning about is generally a wicked problem.\nI constantly find myself in situations like \"I was trying to write up why I think X, but I realized that X isn't quite right, and now I don't know what to write.\" and \"I either have to write something obvious and useless or look into a million more things to write something interesting.\" and \"I'm a week past my self-imposed deadline, and it feels like I have a week to go, but maybe it's actually 12 weeks - that's what happened last time.\" \nOverall, this is the kind of work where I can't seem to tell how progress is going, or stay on a schedule.\nThis post goes through some tips I've collected over the years for dealing with these sorts of challenges - both working on them myself, and working with teammates and seeing what works for them.\nA lot of what matters for doing this sort of work is coming at it with open-mindedness, self-criticality, attention to detail, and other virtues. But a running theme of this work is that it can be deadly to approach with too much virtue: holding oneself to self-imposed deadlines, trying for too much rigor on every subtopic, and otherwise trying to do \"Do everything right, as planned and on time\" can drive a person nuts. So this post is focused on a less obvious aspect of what helps with wicked problems, which is useful vices - antidotes to the kind of thoroughness and conscientiousness that lead to unreachable standards, and make wicked problems impossible.\nI've organized my tips under the following vices, borrowing from Larry Wall and extending his framework a bit:\nLaziness. When some key question is hard to resolve, often the best move is to just ... not resolve it, and change the thesis of your writeup instead (and change how rigorous you're trying to make it). For example, switching from \"These are the best charities\" to \"These are the charities that are best by the following imperfect criteria.\"\nImpatience. One of the most crucial tools for this sort of work is interrupting oneself. I could be reading through study after study on some charitable activity (like building wells), when stepping back to ask \"Wait, why does this matter for the larger goal again?\" could be what I most need to do.\nHubris. Whatever I was originally arguing (\"Charity X is the best\"), I'm probably going to realize at some point that I can't actually defend it. This can be demoralizing, even crisis-inducing. I recommend trying to build an unshakable conviction that one has something useful to say, even when one has completely lost track of what that something might be.\nSelf-preservation. When you're falling behind, it can be tempting to make a \"heroic\" effort at superhuman productivity. When a problem seems impossible, it can be tempting to fix your steely gaze on it and DO IT ANYWAY. I recommend the opposite: instead of rising to the challenge, shrink from it and fight another day (when you'll solve some problem other than the one you thought you were going for).\nOverall, it's tempting to try to \"boil the ocean\" and thoroughly examine every aspect of a topic of interest. But the world is too big, and the amount of information is too much. I think the only way to form a view on an important topic is to do a whole lot of simplifying, approximating and skipping steps - aiming for a step of progress rather than a confident resolution.\nLaziness\nThis is Gingi, the patron saint of not giving a fuck. It's a little hard to explain. Maybe in a future piece. Just imagine someone who literally doesn't care at all about anything, and ask \"How would Gingi handle the problem I'm struggling with, and how bad would that be?\" - bizarrely, this is often helpful.\nHypothesis rearticulation\nMy previous piece focused on \"hypothesis rearticulation\": instead of defending what I was originally going to argue, I just change what I'm arguing so it's easier to defend. For example, when asking Has Life Gotten Better?, I could've knocked myself out trying to pin down exactly how quality of life changed in each different part of the world between, say, the year 0 and the year 1000. Instead, I focused on saying that that time period is a \"mystery\" and focused on arguing for why we shouldn't be confident in any of a few tempting narratives. \nMy previous piece has another example of this move. It's one of the most important moves for answering big questions.\nQuestions for further investigation\nThis is really one of my favorites. Every GiveWell report used to have a big section at the bottom called \"Questions for further investigation.\" We'd be working on some question like \"What about the possibility that paying for these services (e.g., bednets) just causes the government to invest less in them?\" and I'd be like \"Would you rather spend another 100 hours on this question, or write down a few sentences about what our best guess is right now, add it to the Questions for Further Investigation section and move on?\" \nTo be clear, sometimes the answer was the former, and I think we eventually did get to ~all of those questions (over the course of years). But still - it's remarkable how often this simple move can save one's project, and create another fun project for someone else to work on!\nWhat standard are we trying to reach? How about the easiest one that would still be worth reaching?\nIf you're writing an academic paper, you probably have a sense of what counts as \"enough evidence\" or \"enough argumentation\" that you've met the standards for a successful paper. \nBut here I'm trying to answer some broad question like \"Where should I donate?\" or \"Is civilization declining?\" that doesn't fit into an established field - and for such a broad question, I'm going to run into a huge number of sub-questions (each of which could be the subject of many papers of its own). It's tempting to try for some standard like \"Every claim I make is supported by a recognizably rigorous, conclusive analysis,\" but that way madness lies. \nI think it's often better to aim for the minimum level of rigor that would still make something \"the best available answer to the question.\" But I'm not absolutist about that either - a frustrating aspect of working with me on problems like this is that I'll frequently say things like \"Well, we don't need to thoroughly answer objection X, but we should do it if it's pretty quick to do so - that just seems like a good deal.\" I think this is a fine way to approach things, but it leads to shifting standards.\nHere's a slightly more nuanced way to think about how \"rigorous\" a piece is, when there's no clear standard to meet. I tend to ask: “How hard would it be for a critic to demonstrate that the writeup's conclusion is significantly off in a particular direction, and/or far less robust than the writer has claimed?”1\nThe \"how hard\" question can be answered via something like:\n\"A-hardness\": minimum hours needed by literally anyone in the world\n\"B-hardness\": minimum hours needed by any not-super-hard-to-access person, including someone who’s very informed about the topic in question\n\"C-hardness\": minimum hours needed by a reasonably smart but not very informed critic, looking on their own for flaws\nI seem to recall that with GiveWell, we got a lot more successful (in terms of e.g. donor retention) once we got to the point where we could get through an hour-long Q&A with donors with a “satisfying” response to each question - a response that demonstrated that (a) we had thought about the question more/harder than the interlocutor; (b) we had good reason to think it would take us significant time (say 10-100 hours or more) to get a better answer to the question than we had. At this point, I think the C-hardness was at least 10 hours or so - no small achievement, since lots of not-very-informed people know something about some random angle.\n(By now, I'd guess that GiveWell’s A-hardness is over 100 hours. But a C-hardness of 10 hours was the first thing to aim for.)\nThese standards are very different from something like “Each claim is proven with X amount of confidence.” I think that’s appropriate, when you keep in mind that the goal is “most thoughtful take available on a key action-guiding question.”\nImpatience\nMany people dream of working on a project that puts them in a flow state:\nIn positive psychology, a flow state, also known ... as being in the zone, is the mental state in which a person performing some activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment in the process of the activity. In essence, flow is characterized by the complete absorption in what one does, and a resulting transformation in one's sense of time.\nBut if you're working on wicked problems, I recommend that you avoid flow states, nice though they may be. (Thanks to Ajeya Cotra for this point.) Maybe you instead want a Harrison Bergeron state: every time you're getting in a groove, you get jolted out of it, completely lose track of what you were doing, and have to reassemble your thoughts.\nThat's because one of the most productive things you can do when working on a wicked problem is rethink what you're trying to do. The more you interrupt yourself, and the less attached you are to the plan you had, the more times you'll be able to notice that what you're writing isn't coming out as planned, and you should change course.\nCheckins and virtual checkins\nI think the ideal way to interrupt yourself is to be working with someone else who's engaged in your topic and has experience with similar sorts of work (at Open Philanthropy, this might mean your manager), and constantly ping them to say things like:\nI've started to argue for point X, but I don't think my arguments are that great.\nI'm thinking I should deeply investigate point Y - sound right?\nI'm feeling dread about this next section and I don't really have any idea why. Any thoughts? (A lot of people are hesitant to do this one, but I think it often is exactly the right move!)\nI think this is helpful for a few reasons:\nYou may have gotten subconsciously attached to the vision you had in your head for what you were going to write, and it's good to get a reaction from someone else who has less of that attachment.\nIt's generally just hard to make yourself look at your work with \"fresh eyes\" as your goal is constantly changing, so bringing in another person is good.\nIt's easy to get caught up in a \"virtue\" narrative when doing this work - \"I'm thorough and rigorous and productive, I'm going to answer this question thoroughly and rigorously and do it on time.\" It's tempting (as I'll get to) to try to overcome hard situations with \"heroic effort.\" But another person is more likely to ask questions like \"Well, how long does it usually take you to do this sort of thing?\" rather than \"Can you make an incredible heroic effort here?\" and “What do we think we can do and by when and is it worth it?” rather than “What would failure to do the thing you thought you could do say about you as a person?”\nWith early GiveWell, I got a huge amount of value from Elie, who consistently wanted to do things far less thoroughly than I wanted to. I probably ended up doing things 3x as thoroughly as he wanted and 1/3 as thoroughly (and so 3x faster!) as I originally wanted - a nice compromise.\nThese kinds of checkins can be very vulnerable (especially when the topic is something like \"I can't accomplish what we both said I would\"), and it can be hard to have the kind of relationship that makes them comfortable. It's best if the manager or peer being checked in with starts from a place of being nonjudgmental, remembering the wicked nature of the problem and not being attached to the original goals. \nI also recommend imagining an outsider interrupting you to comment on your work - I think this can get you some of those same benefits.\nOutline-driven research\nI recommend always working off of a complete outline of what you are going to argue and how, which has ideally been reviewed by someone else (or your simulation of someone else) who said \"Sure, if you can defend each subpoint in the way you say you can, I'll find this valuable.\"\nThat is:\nAs soon as possible after you start learning about a topic, write an outline saying: \"I think I can show that A seems true using the best available evidence of type X; B seems true using the best available evidence of type Y; therefore, conclusion C is true (slash the best available guess).\" Don't spend lots of time in \"undirected learning\" mode.\nAs soon as your attempt to flesh out this outline is failing, prioritize going back to the outline, adjusting it, getting feedback and being ready to go with a new argument. It's easy to say something like \"I'm not actually confident in this point, I should investigate it\" (as I did here), but I think it's better to interrupt yourself at that point; go back to the outline; redo it with the new plan; and ask whether the whole new plan looks good.\nOutlines don't need to be correct, they just need to be guesses, and they should be constantly changing. They're end-to-end plans for gathering and presenting evidence, not finished products.\nConstantly track your pace\nI think it's good to consistently revisit your estimate of how quickly you're on pace to finish the project. Not how quickly you want to finish it or originally said you would finish it - how quickly it will be finished if you do all of the remaining sections at about the pace you've done the current ones.\nI think a common issue is that someone looks very thoroughly into the first 2-3 subquestions that come up, without noticing that applying this thoroughness to all subquestions would put the project on pace to take years (or maybe decades?) Consistently interrupting yourself to re-estimate time remaining can be a good prompt to re-scope the project.\nDon't just leave a fix for later; duct tape it now\nThis tip comes from Ajeya. When you reach some difficult part of the argument that you haven't thought about enough yet, it's tempting to write \"[to do]\" and figure you'll come back to it. But this is dangerous:\nIt creates an assignment of unknown difficulty for your future self, putting them in the position of feeling obligated to fill in something they may not remember very well. \nIt makes it harder to estimate how much time is remaining in the project.\nIt poses the risk that you'll come back to fill it in, only to realize that you can't argue the subpoint as well as you thought - meaning you need to change a bunch of other stuff you wrote that relies on it.\nInstead, write down the shortest, simplest version of the point you can - focusing on what you currently believe rather than doing a fresh investigation. When you read the piece over again later, if you're not noticing the need for more, then you don't need to do more. \nHubris\nYour take is valuable\nA common experience with this kind of work is the \"too-weak wrong turn\": you realize just how much uncertainty there is in the question you're looking into, and how little you really know about it, and how easy it would be for someone to read your end product and say things like: \"So? I already knew all of this\" and \"There's nothing really new here\" and \"This isn't a definitive take, it's a bunch of guesswork on a bunch of different topics that an expert would know infinitely more about\" and such. \nThis can be demoralizing to the point where it's hard to continue writing, especially once you've put in a lot of time and have figured out most of what you want to say, but are realizing that \"what you want to say\" is covered in uncertainty and caveats. \nIt can sometimes be tempting to try to salvage the situation by furiously doing research to produce something more thorough and impressive.\nWhen someone (including myself) is in this type of situation, I often find myself saying the following sort of thing to them: \n\"If what you've got so far were trivial and worthless, you wouldn't have felt the pull to write this piece in the first place.\"\n\"Don't find support for what you think, just explain why you already think it.\"\nI think it can be useful to just take \"My take on this topic is valuable\" as an almost axiomatic backdrop (once one's take has been developed a bit). It doesn't mean more research isn't valuable, but it can shift the attitude from \"Furiously trying to find enough documentation that my take feels rigorous\" to \"Doing whatever extra investigation is worth the extra time, and otherwise just finishing up.\"\nYour productivity is fine\nUnderstanding deadlines. One of the hardest things about working on wicked problems is that it's very hard to say how long a project is supposed to take. For example, in the first year of GiveWell:\nWe felt that we absolutely had to launch our initial product by Thanksgiving 2007. Our initial product would be our giving recommendations for our initial five causes: saving lives in Africa, global poverty (focus on Africa), US early childhood care, US education, US job opportunities.\nAs we got close to the deadline, we were both pulling all nighters and cutting huge amounts of our planned content - things we had intended to write up or investigate were getting moved to questions for further investigation. At some point we gave up on releasing all five causes and hoped we would get one out in time.\nWe got “saving lives in Africa” up on December 6, and “global poverty” sometime not too long after that.\nWe hoped to get the remaining causes out in January so we could move on to other things. I believe we got them out in May or so.\nThe \"deadline miss\" didn't come from not working hard, it came from having no idea how much work was ahead of us. \nWorking on wicked problems means navigating:\nNot enough deadline. I think if one doesn't establish expectations for what will get done and by when, one will by default do everything in way too much depth and take roughly forever to finish a project - and will miss out on a lot of important pressure to do things like cutting and reframing the work.\nToo much deadline. On the other hand, if one does set a \"deadline,\" it's likely that this is based on a completely inaccurate sense of what's possible. If one then makes it a point of personal pride to hit the deadline - and sees a miss as a personal failing - this is a recipe for a shame spiral.\nEarly in a project, I suggest treating a deadline mostly as a \"deadline to have a better deadline.\" Something like: \"According to my wildly uninformed guess at how long things should take, I should be done by July 1; hopefully by July 1, I will be able to say something more specific, like 'I've gone through 1/3 of my subquestions, and the remaining 2/3 would take until September 1, which is too long, so I'm re-scoping the project.'\"\nAt the point where one can really reliably say how much time should be remaining, I think one is usually done with the hardest part of the project. \nFor these sorts of \"deadline to have a deadline\"s, I tend to make them comically aggressive - for example, “I’m gonna start writing this tomorrow and have it done after like 30 hours of work,” while knowing that I’m actually several months from having my first draft (but that going in with the attitude “I’m basically done already, just writing it down” will speed me up a lot by making me articulate some of the key premises). So I'm both setting absurd goals for what I can accomplish, and preparing to completely let myself off the hook if I fail. Hubris.\nUnderstanding procrastination/incubation. For almost anyone (and certainly for myself), working on wicked problems involves a lot of:\nFeeling \"stuck.\"\nNot knowing what to do next - or worse, feeling like one knows what one is supposed to do next, but finding that the next step just feels painful or aversive or \"off.\"\nHaving a ton of trouble moving forward, and likely procrastinating, often a huge amount.\n(More at my previous piece.)\nIn fact, early in the process of working on a wicked problem, I think it's often unrealistic to put in more than a few hours of solid work per day - and unhelpful to compare one's productivity to that of people doing better-defined tasks, where the goals are clear and don't change by the hour.\nWorking on wicked problems can often be a wild emotional rollercoaster, with lots of moments of self-loathing over being unable to focus, or missing a \"deadline,\" or having to heavily weaken the thing one was trying to say.\nIt's a tough balance, because I think one really does need to pressure oneself to produce. But especially once one has completed a few projects, I think it's feasible to be simultaneously \"failing to make progress\" and \"knowing that one is still broadly on track, because failing to make progress is part of the process.\" I think it's sometimes productive to have a certain kind of arrogance, an attitude like: \"Yes, I cleared the whole day to work on this project and so far what I have done is played 9 hours of video games. But the last 5 times I did something like this, I was in a broadly similar state, and then got momentum and finished on time. I'm doing great!\" The balance to strike is feeling enough urgency to move through the whole procrastinate-produce-rethink process, while having a background sense that \"this is all expected and fine\" that can prevent excessive personal shame and fear from the \"procrastination\" and \"rethink\" parts.\n(Personally, I often draft a 15-page document by spending 4 hours failing to write the first paragraph, then 1 hour on the first paragraph, then 1 hour failing to continue, then 1 hour on the rest of the first page, then 4 hours for the remaining 14 pages. If someone tries to interrupt me during the first 4 hours, I tell them I'm working, and that's true as far as I'm concerned!)\nSelf-preservation\nAs noted above, working on wicked problems often involves long periods of very low output, with self-imposed deadlines creeping up. This sometimes leads people to try to make up for lost time with a \"heroic\" effort at superhuman productivity, and to try to handle the hardest parts of a project by just working that much harder.\nI'm basically totally against this. An analogy I sometimes use:\nQ: When Superman shows up to save the day and realizes his rival is loaded with kryptonite, how should he respond? What’s the best, most virtuous thing he can do in that situation?\nA: Fly away as fast as he can, optionally shrieking in terror and letting all the onlookers say “Wow, what a coward.” This is a terrible time to be brave and soldier on! There are so many things Superman can do to be helpful - the single worst thing he can do is go where he won’t succeed.2\nIf the project is taking \"too long,\" it might be because it was impossible to set a \"schedule\" for in the first place, and trying to finish it off at a superhuman pace could easily just leave you exhausted, demoralized and still not close to done. Additionally, the next task sometimes seems \"scary\" because it is actually a bad idea and needs to be rethought.\nI generally advise people working on wicked problems to aim for \"jogging\" rather than \"sprinting\" - a metaphor I like because it emphasizes that this is fully consistent with trying to finish as fast as possible. In particular, I prefer the goal of \"Make at least a bit of progress on 95% of days, and in 100% of weeks\" to the goal of \"Make so much progress today that it makes up for all my wasted past days.\" (The former goal is not easy! I think aiming for it requires a lot of interrupting oneself to make sure one isn't spiraling or going down an unproductive rabbit hole - rather than a lot of \"trying to pedal to the metal,\" which can run right into those problems.)\nIt’s a bird, it’s a plane, it’s a schmoe!\nThis section is particularly targeted at effective altruists who feel compelled to squeeze every ounce of productivity out of themselves that they can, for moral reasons and not just personal pride. I think this attitude is dangerous, because of the way it leads people to set unrealistic and unsustainable expectations for themselves. \nMy take: \"Whenever you catch yourself planning on being a hero, just stop. If we’re going to save the world, we’re going to do it by being schmoes.\" That is:\nPlan on being about as focused, productive, and virtuous as people doing similar work on other topics. \nPlan on working a normal number of hours each day, plan on often getting distracted and mucking around, plan on taking as much vacation as other high-productivity people (a lot), plan on having as much going on outside of work as other high-productivity people (a lot), etc.\n(This is also a standard to hold oneself to - try not to lose productivity over things, like guilt spirals, that other people doing similar work often don't suffer from.)\nIf effective altruists are going to have outsized impact on the world, I think it will be mostly thanks to the unusual questions they’re asking and the unusual goals they’re interested in, not unusual dedication/productivity/virtue. I model myself as “Basically like a hedge fund guy but doing more valuable stuff,” not as “A being capable of exceptional output or exceptional sacrifice.” \nBe virtuous first!\nI don't think you're going to get very far with these \"vices\" alone. If you aren't balancing them with the virtues of open-mindedness, self-criticality, and doing the hard work to understand things, it's easy to just lazily write down some belief you have, cite a bit of evidence that you haven't looked at carefully or considered the best counter-arguments to, and hit \"publish.\" I think this is what the vast majority of people \"investigating\" important questions are doing, and if I were writing tips for the average person in the world, I'd have a very different emphasis.\nFor forming opinions and writing useful pieces about important topics, I think the first hurdle to clear is being determined to examine the strongest parts of both sides of an argument, understand them in detail (and with minimal trust), and write what you're finding with reasoning transparency. (All of this is much easier said than done.) But in my experience, many of the people who are strongest in these \"virtues\" veer too far in the virtuous direction and end up punishing themselves for missing unrealistic self-imposed deadlines on impossible self-imposed assignments. This piece has tried to give a feel for when and how to pull back, skip steps, and go easy on oneself, to make incremental progress on intimidating questions.Footnotes\n In practice, for a report that isn't claiming much rigor, this often means demonstrating “This isn’t even suggestive, it’s basically noise.” Here's a long debate about exactly that question for one of the key inputs into my views on transformative AI! ↩\n My favorite real-life example of this is Barry Bonds in 2002. So many star players try to play through injuries all year long, and frame this as being a \"team player.\" I remember Barry Bonds in 2002 taking all kinds of heat for the fact that he would sit out whenever he got even moderately injured, and would sometimes sit out games just because he felt kinda tired. But then the playoffs came around and he played every game and was out-of-this-world good, in a season that came down to the final game, at age 38. Who's the team player? ↩\n", "url": "https://www.cold-takes.com/useful-vices-for-wicked-problems/", "title": "Useful Vices for Wicked Problems", "source": "cold.takes", "source_type": "blog", "date_published": "2022-04-12", "id": "9625cd50c9a63e48dea71bae846be17e"} -{"text": "I'm interested in the topic of ideal governance: what kind of governance system should you set up, if you're starting from scratch and can do it however you want?\nHere \"you\" could be a company, a nonprofit, an informal association, or a country. And \"governance system\" means a Constitution, charter, and/or bylaws answering questions like: \"Who has the authority to make decisions (Congress, board of directors, etc.), and how are they selected, and what rules do they have to follow, and what's the process for changing those rules?\"\nI think this is a very different topic from something like \"How does the US's Presidential system compare to the Parliamentary systems common in Europe?\" The idea is not to look at today's most common systems and compare them, but rather to generate options for setting up systems radically different from what's common today. \nI don't currently know of much literature on this topic (aside from the literature on social choice theory and especially voting methods, which covers only part of the topic). This post describes the general topic and why I care, partly in the hopes that people can point me to any literature I've missed. Whether or not I end up finding any, I'm likely to write more on this topic in the future.\nOutline of the rest of the piece:\nI'll outline some common governance structures for countries and major organizations today, and highlight how much room there is to try different things that don't seem to be in wide use today. More\nI'll discuss why I care about this question. I have a few very different reasons: \nA short-term, tangible need: over the last several years, I've spoken with several (more than 3) organizations that feel no traditional corporate governance structure is satisfactory, because the stakes of their business are too great and society-wide for shareholder control to make sense, yet they are too early-stage and niche (and in need of nimbleness) to be structured like a traditional government. An example would be an artificial intelligence company that could end up with a normal commercial product, or could end up bringing about the most important century of all time for humanity. I wish I could point them to someone who was like: \"I've read all of, and written much of, the literature on what your options are. I can walk you through the pros and cons and help you pick a governance system that balances them for your needs.\" \n \nA small probability of a big future win. The world today has lots of governments, but they seem to mostly follow a very small number of basic governance templates. At some point, there will be new states with new Constitutions - maybe via space settlements, maybe via collapse of existing states, etc. - but I expect these moments to be few and far between. A significant literature and set of experts on \"ideal governance\" could lead to a radically different kind of state government, potentially with radically different policies that the rest of the world could learn from.\n \nA weird, out-of-left-field application. Some of my interest in this topic actually comes via my interest in moral uncertainty: the question of what it's ethical to do when one is struggling between more than one theory of ethics, with radically different implications. This is hard to explain, but I try below.\nI'll describe a bit more what I think literature on this question could look like (and what already exists that I know of), partly to guide readers who might be able to help me find more.\nCommon governance structures today\nAll of these are simplified; I'm trying to illustrate the basic idea of what questions \"ideal governance\" is asking.\nA standard (e.g., public) corporation works like this: it has shareholders, assigned one vote per share (not per person), who elect a board of directors that governs by majority. The board generally appoints a CEO that it entrusts with day-to-day decisions. There is a \"constitution\" of sorts (the Articles of Incorporation and bylaws) and a lot more wrinkles in terms of how directors are selected, but that's the basic idea. \nA standard nonprofit is like a corporation, but entirely lacking the shareholder layer - it's governed directly by the board of directors. (I find something weird about a structure this simple - a simple board majority can do literally anything, even though the board of directors is often a somewhat random assortment of donors, advisors, etc.)\nThe US federal government is a lot more complex. It splits authority between the House of Representatives, the Senate, the Presidency and the Supreme Court, all of which have specific appointment procedures, term limits, etc. and are meta-governed by a Constitution that requires special measures to change. There are lots of specific choices that were made in designing things this way, and lots of things that could've been set up differently in the 18th century that would probably still matter today. \nOther democracies tend to have governments that differ in a lot of ways (e.g.), while being based on broadly similar principles: voters elect representatives to more than one branch of government, which then divide up (and often can veto each other on) laws, expenditures, etc.\nWhen I was 13, the lunch table I sat at established a Constitution with some really strange properties that I can't remember. I think there was a near-dictatorial authority who rotated daily, with others able to veto their decisions by assembling supermajorities or maybe singing silly songs or something.\nIn addition to the design choices shown in the diagrams, there are a lot of others:\nWho votes, how often, and what voting system is used?\nHow many representatives are there in each representative body? How are they divided up (one representative per geographic area, or party-list proportional representation, or something else)?\nWhat term limits exist for the different entities?\nDo particular kinds of decisions require supermajorities? \nWhich restrictions are enshrined in a hard-to-change Constitution (and how hard is it to change), vs. being left to the people in power at the moment?\nOne way of thinking about the \"ideal governance\" question is: what kinds of designs could exist that aren't common today? And how should a new organization/country/etc. think about what design is going to be best for its purposes, beyond \"doing what's usually done\"? \nFor any new institution, it seems like the stakes are potentially high - in some important sense, picking a governance system is a \"one-time thing\" (any further changes have to be made using the rules of the existing system1). \nPerhaps because of this, there doesn't seem to be much use of innovative governance designs in high-stakes settings. For example, here are a number of ideas I've seen floating around that seem cool and interesting, and ought to be considered if someone could set up a governance system however they wanted:\nSortition, or choosing people randomly to have certain powers and responsibilities. An extreme version could be: \"Instead of everyone voting for President, randomly select 1000 Americans; give them several months to consider their choice, perhaps paid so they can do so full-time; then have them vote.\" \nThe idea is to pick a subset of people who are both (a) representative of the larger population (hence the randomness); (b) will have a stronger case for putting serious time and thought into their decisions (hence the small number). \n \nIt's solving a similar problem that \"representative democracy\" (voters elect representatives) is trying to solve, but in a different way.\nProportional decision-making. Currently, if Congress is deciding how to spend $1 trillion, a coalition controlling 51% of the votes can control all $1 trillion, whereas a coalition controlling 49% of the votes controls $0. Proportional decision-making could be implemented as \"Each representative controls an equal proportion of the spending,\" so a coalition with 20% of the votes controls 20% of the budget. It's less clear how to apply this idea to other sorts of bills (e.g., illegalizing an activity rather than spending money), but there are plenty of possibilities.2\nQuadratic voting, in which people vote on multiple things at once, and can cast more votes for things they care about more (with a \"quadratic pricing rule\" intended to make the number of votes an \"honest signal\" of how much someone cares).\nReset/Jubilee: maybe it would be good for some organizations to periodically redo their governance mostly from scratch, subject only to the most basic principles. Constitutions could contain a provision like \"Every N years, there shall be a new Constitution selected. The 10 candidate Constitutions with the most signatures shall be presented on a ballot; the Constitution receiving the most votes is the new Constitution, except that it may not contradict or nullify this provision. This provision can be prevented from occurring by [supermajority provision], and removed entirely by [stronger supermajority].\"\nMore examples in a footnote.3\nIf we were starting a country or company from scratch, which of the above ideas should we integrate with more traditional structures, and how, and what else should we have in our toolbox? That's the question of ideal governance.\nWhy do I care?\nI have one \"short-term, tangible need\" reason; one \"small probability of a big future win\" reason; and one \"weird, out-of-left-field\" reason.\nA short-term, tangible need: companies developing AI, or otherwise aiming to be working with huge stakes. Say you're starting a new company for developing AI systems, and you believe that you could end up building AI with the potential to change the world forever. \nThe standard governance setup for a corporation would hand power over all the decisions you're going to make to your shareholders, and by default most of your shares are going to end up held by people and firms that invested money in your company. Hopefully it's clear why this doesn't seem like the ideal setup for a company whose decisions could be world-changing. A number of AI companies have acknowledged the basic point that \"Our ultimate mission should NOT just be: make money for shareholders,\" and that seems like a good thing.\nOne alternative would be to set up like a nonprofit instead, with all power vested in a board of directors (no shareholder control). Some issues are that (a) this cuts shareholders out of the loop completely, which could make it pretty hard to raise money; (b) according to me at least, this is just a weird system of governance, for reasons that are not super easy to articulate concisely but I'll take a shot in a footnote4 (and possibly write more in the future).\nAnother alternative is a setup that is somewhat common among tech companies: 1-2 founders hold enough shares to keep control forever, so you end up with essentially a dictatorship. This also ... leaves something to be desired.\nOr maybe a company like this should just set up more like a government from the get-go, offering everyone in the world a vote via some complex system of representation, checks and balances. But this seems poorly suited to at least the relatively early days of a company, when it's small and its work is not widely known or understood. But then, how does the company handle the transition from the latter to the former? And should the former be done exactly in the standard way, or is there room for innovation there?\nOver the last several years, I've spoken with heads of several (more than 3) organizations that struggle between options like the above, and have at least strongly considered unusual governance setups. I wish I could point them to someone who was like: \"I've read all of, and written much of, the literature on what your options are. I can walk you through the pros and cons and help you pick a governance system that balances them for your needs.\" \nBut right now, I can't, and I've seen a fair amount of this instead: \"Let's just throw together the best system we can, based mostly on what's already common but with a few wrinkles, and hope that we figure this all out later.\" I think this is the right solution given how things stand, but I think it really does get continually harder to redesign one's governance as time goes on and more stakeholders enter the picture, so it makes me nervous.\nSimilar issues could apply to mega-corporations (e.g., FAANG) that are arguably more powerful than what the standard shareholder-centric company setup was designed for. Are there governance systems they could adopt that would make them more broadly accountable, without copying over all the pros and cons of full-blown representative democracy as implemented by countries like the US?\nA small probability of a big future win: future new states. The world today has lots of governments, but they seem to mostly follow a very small number of basic governance templates (e.g., I believe you see almost none of the things I listed above), and probably relatedly, there seems to be remarkably little variety and experimentation with policy. Policies that many believe could be huge wins - such as dramatically expanded immigration, land value taxation, \"consumer reports\"-style medical approvals,5 drug decriminalization, and charter cities - don't seem to have gotten much of a trial anywhere in the world. \nAt some point, there will be new states with new Constitutions - maybe via space settlements, maybe via collapse of existing states, etc. - but I expect these moments to be few and far between.\nBy default I expect future Constitutions to resemble present ones an awful lot. But maybe, at some future date, there will be a large \"ideal governance\" literature and some points of expert consensus on innovative governance designs that somebody really ought to try. That could lead to a radically different kind of state government, potentially with radically different policies that the rest of the world could learn from.\nAn out-of-left-field application for \"ideal governance.\" This is going to veer off the rails, so remember to skip to the next section if I lose you.\nSome of my interest in this topic actually comes via my interest in moral uncertainty: the question of what it's ethical to do when one is struggling between more than one theory of ethics, with radically different implications. \nFor example, there are arguments that our ethical decisions should be dominated by concern for ensuring that as many people as possible will someday get to exist. I really go back and forth on how much I buy these arguments, but I'm definitely somewhere between 10% convinced and 50% convinced. So ... say I'm \"20% convinced\" of some view that says preventing human extinction6 is the overwhelmingly most important consideration for at least some dimensions of ethics (like where to donate), and \"80% convinced\" of some more common-sense view that says I should focus on some cause unrelated to human extinction.7 How do I put those two together and decide what this means for actual choices I'm making?\nThe closest thing I've seen to a reasonable-seeming answer is the idea of a moral parliament: I should act as though I'm run by a Parliament with 80 members who believe in \"common-sense\" ethics, and 20 members who believe in the \"preventing extinction is overwhelmingly important\" idea. But with default Parliament rules, this would just mean the 80 members can run the whole show, without any compromise with the 20. \nAnd so, a paper on the \"moral parliament\" idea tries to make it work by ... introducing a completely new governance mechanism that I can't find any other sign of someone else ever talking about, \"proportional chances voting\" (spelled out in a footnote).8 I think this mechanism has its own issues,9 but it's an attempt to ensure something like \"A coalition controlling 20% of the votes has 20% of the effective power, and has to be compromised with, instead of being subject to the tyranny of the majority.\"\nMy own view (which I expect to write more about in the future) is that governance is roughly the right metaphor for \"moral uncertainty\": I am torn by multiple different sides of myself, with different takes on what it means to be a good person, and the problem of getting these different sides of myself to reach a decision together is like the problem of getting different citizens (or shareholders) to reach a decision together. The more we can say about what ideal governance looks like, the more we can say about how this ought to work - and the better I expect this \"moral parliament\"-type idea to end up looking, compared to alternatives.10\nThe literature I'm looking for\nIdeal governance seems like the sort of topic for which there should be a \"field\" of \"experts,\" studying it. What would such study look like? Three major categories come to mind:\nBrainstorming ideas such as those I listed above - innovative potential ways of solving classic challenges of governance, such as reconciling \"We want to represent all the voters\" with \"We want decisions to be grounded in expertise and high engagement, and voters are often non-expert and not engaged.\"\nI've come across various assorted ideas in this category, including quadratic voting, futarchy, and proportional chances voting, without seeing much sign that these sit within a broader field that I can skim through to find all the ideas that are out there.\nEconomics-style theory in which one asks questions like: \"If we make particular assumptions about who's voting, what information they have and lack, how much they suffer from bounded rationality, and how we define 'serving their interests' (see below), what kind of governance structure gets the best outcome?\"\nSocial choice theory, including on voting methods, tackles the \"how we define 'serving their interests'\" part of this. But I'm not aware of people using similar approaches to ask questions like \"Under what conditions would we want 1 chamber of Congress vs. 2, or 10? 100 Senators vs. 500, or 15? A constitution that can be modified by simple majority, vs. 2/3 majority vs. consensus? Term limits? Etc. etc. etc.\"\nEmpirical research (probably qualitative): Are there systematic reviews of unusual governance structures tried out by companies, and what the results have been? Of smaller-scale experiments at co-ops, group houses and lunch tables? \nTo be clear, I think the most useful version of this sort of research would probably be very qualitative - collecting reports of what problems did and didn't come up - rather than asking questions like \"How does a particular board structure element affect company profits?\"\nOne of the things I expect to be tricky about this sort of research is that I think a lot of governance comes down to things like \"What sorts of people are in charge?\" and \"What are the culture, expectations, norms and habits?\" A setup that is \"officially\" supposed to work one way could evolve into something quite different via informal practices and \"soft power.\" However, I think the formal setup (including things like \"what the constitution says about the principles each governance body is supposed to be upholding\") can have big effects on how the \"soft power\" works.\nIf you know where to find research or experts along the lines of the above, please share them in the comments or using this form if you don't want them to be public.\nI'll likely write about what I come across, and if I don't find anything new, I'll likely ramble some more about ideal governance. So either way, there will be more on this topic!Footnotes\n Barring violent revolution in the case of countries. ↩\n An example would be the \"proportional chances voting\" idea described here. ↩\nProxying/liquid democracy, or allowing voters to transfer their votes to other voters. (This is common for corporations, but not for governments.) This could be an alternative or complement to electing representatives, solving a similar problem (we want lightly-engaged voters to be represented, but we also want decisions ultimately made using heavy engagement and expertise). At first glance it may seem to pose a risk that people will be able to \"buy votes,\" but I don't actually think this is necessarily an issue (proxying could be done anonymously and on set schedules, like other votes).\nSoft term limits: the more terms someone has served, the greater a supermajority they need to be re-elected. This could be used to strike a balance between the advantages of term limits (avoiding \"effectively unaccountable\" incumbents) and no-term-limits (allowing great representatives to keep serving). \nFormal technocracy/meritocracy: Using hard structures (rather than soft norms) to assign authority to people with particular expertise and qualifications. An extreme example would be futarchy, in which prediction markets directly control decisions. A simpler example would be structurally rewarding representatives (via more votes or other powers) based on assessments of their track records (of predictions or decisions), or factual understanding of a subject. This seems like a tough road to go down by default, as any mechanism for evaluating \"track records\" and \"understanding\" can itself be politicized, but there's a wide space of possible designs.  ↩\n Most systems of government have a sort of funnel from \"least engaged in day to day decisions, but most ultimately legitimate representatives of whom the institution is supposed to serve\" (shareholders, voters) to \"most engaged in day to day decisions, but ultimately accountable to someone else\" (chief executive). A nonprofit structure is a very short funnel, and the board of directors tends to be a somewhat random assortment of funders, advisors, people who the founders just thought were cool, etc. I think they often end up not very accountable (to anyone) or engaged in what's going on, such that they have a hard time acting when they ought to, and the actions they do take are often kind of random. \n I'm not saying there is a clearly better structure available for this purpose - I think the weirdness comes from the fact that it's so unclear who should go in the box normally reserved for \"Shareholders\" or \"Voters.\" It's probably the best common structure for its purpose, but I think there's a lot of room for improvement, and the stakes seem high for certain organizations. ↩\n Context in this Marginal Revolution post, which links to this 2005 piece on a \"consumer reports\" model for the FDA. ↩\n Or \"existential catastrophe\" - something that drastically curtails humanity's future, even if it doesn't drive us extinct. ↩\n This isn't actually where I'm at, because I think the leading existential risks are a big enough deal that I would want to focus on them even if I completely ignored the philosophical argument that the future is overwhelmingly important. ↩\n Let's say that 70% of the Parliament members vote for bill X, and 30% vote against. \"Proportional chance voting\" literally uses a weighted lottery to pass bill X with 70% probability, and reject it with 30% probability (you can think of this like rolling a 10-sided die, and passing the bill if it's 7 or under).\n A key part of this is that the members are supposed to negotiate before voting and holding the lottery. For example, maybe 10 of the 30 members who are against bill X offer to switch to supporting it if some change is made. The nice property here is that rather than having a \"tyranny of the majority\" where the minority has no bargaining power, we have a situation where the 70-member coalition would still love to make a deal with folks in the minority, to further increase the probability that they get their way.\n Quote from the paper that I am interpreting: \"Under proportional chances voting, each delegate receives a single vote on each motion. Before they vote, there is a period during which delegates may negotiate: this could include trading votes on one motion for votes on another, introducing novel options for consideration within a given motion, or forming deals with others to vote for a compromise option that both consider to be acceptable. The delegates then cast their ballots for one particular option in each motion, just as they might in a plurality voting system. But rather than determining the winning option to be the one with the most votes, each option is given a chance of winning proportional to its share of the votes.\" ↩\n What stops someone who lost the randomized draw from just asking to hold the same vote again? Or asking to hold a highly similar/related vote that would get back a lot of what they lost? How does that affect the negotiated equilibrium? ↩\n Such as \"maximize expected choice-worthiness,\" which I am not a fan of for reasons I'll get to in the future. ↩\n", "url": "https://www.cold-takes.com/ideal-governance-for-companies-countries-and-more/", "title": "Ideal governance (for companies, countries and more)", "source": "cold.takes", "source_type": "blog", "date_published": "2022-04-05", "id": "38f06cb11bf4d7aee91f01e9cce3325f"} -{"text": "If we could do something to lower the probability of the human race going extinct,1 that would be really good. But how good? Is preventing extinction more like “saving 8 billion lives” (the number of people alive today), or “saving 80 billion lives” (the number who will be alive over the next 10 generations) … or \"saving 625 quadrillion lives\" (an Our World in Data estimate of the number of people who could ever be born) ... or “saving a comically huge number of lives\" (Nick Bostrom argues for well over 10^46 as the total number of people, including digital people, who could ever exist)?\nMore specifically, is “a person getting to live a good life, when they otherwise would have never existed” the kind of thing we should value? Is it as good as “a premature death prevented?”\nAmong effective altruists, it’s common to answer: “Yes, it is; preventing extinction is somewhere around as good as saving [some crazy number] of lives; so if there’s any way to reduce the odds of extinction by even a tiny amount, that’s where we should focus all the attention and resources we can.”\nI feel conflicted about this.\nI am sold on the importance of specific, potentially extinction-level risks, such as risks from advanced AI. But: this is mostly because I think the risks are really, really big and really, really neglected. I think they’d be worth focusing on even if we ignored arguments like the above and used much more modest estimates of “how many people might be affected.”\nI’m less sold that we should work on these risks if they were very small. And I have very mixed feelings about the idea that “a person getting to live a good life, when they otherwise would have never existed” is as good as “a premature death prevented.”\nReflecting these mixed feelings, I'm going to examine the philosophical case for caring about “extra lives lived” (putting aside the first bullet point above), via a dialogue between two versions of myself: Utilitarian Holden (UH) and Non-Utilitarian Holden (NUH).2 \nThis represents actual dialogues I’ve had with myself (so neither side is a pure straw person), although this particular dialogue serves primarily to illustrate UH's views and how they are defended against initial and/or basic objections from NUH. In future dialogues, NUH will raise more sophisticated objections.\nThis is part of a set of dialogues on future-proof ethics: trying to make ethical decisions that we can remain proud of in the future, after a great deal of (societal and/or personal) moral progress. (Previous dialogue here, though this one stands on its own.)\nA couple more notes before I finally get started:\nThe genre here is philosophy, and a common type of argument is the thought experiment: \"If you had to choose between A and B, what would you choose?\" (For example: \"is it better to prevent one untimely death, or to allow 10 people to live who would otherwise never have been born?\")\nIt's common to react to questions like this with comments like \"I don't really think that kind of choice comes up in real life; actually you can usually get both A and B if you do things right\" or \"actually A isn't possible; the underlying assumptions about how the world really works are off here.\" My general advice when considering philosophy is to avoid reactions like this and think about what you would do if you really had to make the choice that is being pointed at, even if you think thI ae author's underlying assumptions about why the choice exists are wrong. Similarly, if you find one part of an argument unconvincing, I suggest pretending you accept it for the rest of the piece anyway, to see whether the rest of the arguments would be compelling under that assumption.\nI often give an example of how one could face a choice between A and B in real life, to make it easier to imagine - but it's not feasible to give this example in enough detail and with enough defense to make it seem realistic to all readers, without a big distraction from the topic at hand.\nPhilosophy requires some amount of suspending disbelief, because the goal is to ask questions about (for example) what you value, while isolating them from questions about what you believe. (For more on how it can be useful to separate values and beliefs, see Bayesian Mindset.)\nDialogue on “extra lives lived”\nTo keep it clear who's talking when, I'm using -UH- for \"Utilitarian Holden\" and -NUH- for \"non-Utilitarian Holden.\" (In the audio version of this piece, my wife voices NUH.)\n-UH-Let’s start here:\nLet's say that if humanity can avoid going extinct (and perhaps spread across the galaxy), the number of people who will ever exist is about one quintillion, also known as \"a billion billion\" or \"10^18.\" (This is close3 to the Our World in Data estimate of the number of people who could ever live; it ignores the possibility of digital people, which could lead to a vastly bigger number.)\nIf you think that’s 1% likely to actually happen, it’s an expected value of 10^16 people, which is still huge enough to carry the rest of what I'm going to say.\nYou can think of it like this: imagine all of the people who exist or ever could exist, all standing in one place. There are about 10^10 such people alive today, and the rest are just “potential people” - that means that more than 99.999% of all the people are “potential people.”\nAnd now imagine that we’re all talking about where you, the person in the privileged position of being one of the earliest people ever, should focus your efforts to help others. And you say: “Gosh, I’m really torn:\nI think if I focus my efforts on today’s world, maybe I could help prevent up to 10,000 untimely deaths. (This would be a very high thing to aim for.)\nOr, I could cause a tiny decrease in extinction risk, like 1% of 1% of 1% of 1%. That would help about 100 million of you.4 What should I do?”\n(Let’s ignore the fact that helping people in today’s world could also reduce extinction risk. It could, but it could also increase extinction risk - who knows. There are probably actions better targeted at reducing extinction risk than at helping today’s world, is the point.)\nIn that situation, I think everyone would be saying: “How is this a question? Even if your impact on extinction risk is small, even if it’s uncertain and fuzzy, there are just SO MANY MORE people affected by that. If you choose to focus on today’s world, you’re essentially saying that you think today’s people count more than 10,000 times as much as future people.\n“Now granted, most of the people alive in your time DO act that way - they ignore the future. But someday, if society becomes morally wiser, that will look unacceptable; similarly, if you become morally wiser, you'll regret it. It's basically deciding that 99.999%+ of one’s fellow humans aren’t worth worrying about, just because they don’t exist yet.\n“Do the forward-looking thing, the future-proof thing. Focus on helping the massive number of people who don’t exist yet.”\n-NUH-I feel like you are skipping a very big step here. We’re talking about what potential people who don’t exist yet would say about giving them a chance to exist? Does that even make sense?\nThat is: it sounds like you’re counting every “potential person” as someone whose wishes we should be respecting, including their wish to exist instead of not exist. So among other things, that means a larger population is better?\n-UH- Yes.\n-NUH- I mean, that’s super weird, right? Like is it ethically obligatory to have as many children as you can?\n-UH-It’s not, for a bunch of reasons.\nThe biggest one for now is that we’re focused on thin utilitarianism - how to make choices about actions like donating and career choice, not how to make choices about everything. For questions like how many children to have, I think there’s much more scope for a multidimensional morality that isn’t all about respecting the interests of others.\nI also generally think we’re liable to get confused if we’re talking about reproductive decisions, since reproductive autonomy is such an important value and one that has historically been undermined at times in ugly ways. My views here aren’t about reproductive decisions, they’re about avoiding existential catastrophes. Longtermists (people who focus on the long-run future, as I’m advocating here) tend to focus on things that could affect the ultimate, long-run population of the world, and it’s really unclear how having children or not affects that (because the main factors behind the ultimate, long-run population of the world have more to do with things like the odds of extinction and of explosive civilization-wide changes, and it's unclear how having children affects those).\nSo let’s instead stay focused on the question I asked. That is: if you prevent an existential catastrophe, so that there’s a large flourishing future population, does each of those future people count as a “beneficiary” of what you did, such that their benefits aggregate up to a very large number?\n-NUH-OK. I say no, such “potential future people” do not count. And I’m not moved by your story about how this may one day look cruel or inconsiderate. It’s not that I think some types of people are less valuable than others, it’s that I don’t think increasing the odds that someone ever exists at all is benefiting them.\n-UH-Let’s briefly walk through a few challenges to your position. You can learn more about these challenges from the academic population ethics literature; I recommend Hilary Greaves’s short piece on this.\nChallenge 1: Future people and the “mere addition paradox”\n-UH-So you say you don’t see “potential future people” as “beneficiaries” whose interests count. But let’s say that the worst effects of climate change won’t be felt for another 80 years or so, in which case the vast majority of people affected will be people who aren’t alive today. Do you discount those folks and their interests?\n-NUH-No, but that’s different. Climate change isn’t about whether they get to exist or not, it’s about whether their lives go better or worse.\n-UH-Well, it’s about both. The world in which we contain/prevent/mitigate climate change contains completely different people in the future from the world in which we don’t. Any difference between two worlds will ripple chaotically and affect things like which sperm fertilize which eggs, which will completely change the future people that exist.\nSo you really can’t point to some fixed set of people that is “affected” by climate change. Your desire to mitigate climate change is really about causing there to be better off people in the future, instead of completely different worse off people. It’s pretty hard to maintain this position while also saying that you only care about “actual” rather than “potential” people, or “present” rather than “future” ones.\n-NUH-I can still take the position that:\nIf there are a certain number of people, it’s good to do something (such as mitigate climate change) that causes there to be better-off people instead of worse-off people.\nBut that doesn’t mean that it’s good for there to be more people than there would be otherwise. Adding more people is just neutral, assuming they have reasonably good lives.\n-UH-That’s going to be a tough position to maintain.\nConsider three possible worlds:\nWorld A: 5 billion future people have good lives. Let’s say their lives are a 8/10 on some relevant scale (reducing the quality of a life to a number is a simplification; see footnote for a bit more on this5).\nWorld B: 5 billion future people have slightly better than good lives, let’s say 8.1/10. And there are an additional 5 billion people who have not-as-good-but-still-pretty-good lives, let’s say 7/10.\nWorld C: 10 billion future people have good lives, 8/10.\n \n Claim: World B > World A and World C > World B. Therefore, World C > World A.\n \nMy guess is that you think World B seems clearly better than World A - there are 5 billion “better-off instead of worse-off” future people, and the added 5 billion people seem neutral (not good, not bad).\nBut I’d also guess you think World C seems clearly better than World B. The change is a small worsening in quality of life for the better-off half of the population, and a large improvement for the worse-off half.\nBut if World C is better than World B and World B is better than World A, doesn’t that mean World C is better than World A? And World C is the same as World A, just a bigger population.\n-NUH-I admit my intuitions are as you say. I prefer B when comparing it to A, and C when comparing it to B. However, when I look at C vs. A, I’m not sure what to think. Maybe there is a mistake somewhere - for example, maybe I should think that it’s bad for additional people to come to exist.\n-UH-That would imply that the human race going extinct would be great, no? Extinction would prevent massive numbers of people from ever existing.\n-NUH-That is definitely not where I am.\nOK, you’ve successfully got me puzzled about what’s going on in my brain. Before I try to process it, how about you confuse me more?\nChallenge 2: Asymmetry\n-UH-Sure thing. Let’s talk about another problem with the attempt to be “neutral” on whether there are more or fewer people in the future.\nSay that you can take some action to prevent a horrible dystopia from arising in a distant corner of the galaxy. In this dystopia, the vast majority of people will wish they didn’t exist, but they won’t have that choice. You have the opportunity to ensure that, instead of this dystopia, there will simply be nothing there. Does that opportunity seem valuable?\n-NUH-It does, enormously valuable.\n-UH-OK. The broader intuition here is that preventing lives that are worse than nonexistence has high ethical value - does that seem right?\n-NUH-Yes.\n-UH-Now you’re in a state where you think preventing bad lives is good, but preventing good lives is neutral.\nBut the thing is, every time a life comes into existence, there’s some risk it will be really bad (such that the person living it wishes they didn’t exist). So if you count the bad as bad and the good as neutral, you should think that each future life is purely a bad thing - some chance it’s bad, some chance it’s neutral. So you should want to minimize future lives.\nOr at the civilization level: say that if humanity continues existing, there’s a 99% chance we will have an enormous (at least 10^18 people) flourishing civilization, and a 1% chance we’ll end up in an equally enormous, horrible dystopia. And even the flourishing civilization will have some people in it who wish they didn’t exist. Confronting this possibility, you should hope that humanity doesn’t continue existing, since then there won’t be any of these “people who wish they didn’t exist.” You should, again, think that extinction is a great ethical good.\n-NUH-Yikes. Like I’ve said, I don’t think that.\n-UH-In that case, I think the most natural way out of this is to conclude that a huge flourishing civilization would be good enough to compensate - at least partly - for the risk of a huge dystopia.\nThat is: if you’re fine with a 99% chance of a flourishing civilization and a 1% chance of a dystopia, this implies that a flourishing civilization is at least 1% as good as a dystopia is bad.\nAnd that implies that “10^18 flourishing lives” are at least 1% as good as “10^18 horribly suffering lives” are bad. 1% of 10^18 is a lot, as we’ve discussed!\n-NUH-Well, you’ve definitely made me feel confused about what I think about this topic. But that isn’t the same as convincing me that it’s good for there to be more persons. I see how trying to be neutral about population size leads to weird implications. But so does your position.\nFor example, if you think that adding more lives has ethical value, you end up with what's called the repugnant conclusion. Actually, let’s skip that and talk about the very repugnant conclusion. I’ll give my own set of hypothetical worlds:\nWorld D has 10^18 flourishing, happy people.\nWorld E has 10^18 horribly suffering people, plus some even larger number (N) of people whose lives are mediocre/fine/”worth living” but not good.\nThere has to be some “larger number N” such that you prefer World E to World D. That’s a pretty wacky seeming position too!\nTheory X\n-UH-That’s true. There’s no way of handling questions like these (aka population ethics) that feels totally satisfactory for every imaginable case.\n-NUH-That you know of. But there may be some way of disentangling our confusions about this topic that leaves the anti-repugnant-conclusion intuition intact, and leaves mine intact too. I’m not really feeling the need to accept one wrong-seeming view just to avoid another one.\n-UH-“Some way of disentangling our confusions” is what Derek Parfit called theory X. Population ethicists have looked for it for a while. They’ve not only not found it, they’ve produced impossibility theorems heavily implying that it does not exist.\nThat is, the various intuitions we want to hold onto (such as “the very repugnant conclusion is false” and “extinction would not be good” and various others) collectively contradict each other.\nSo it looks like we probably have to pick something weird to believe about this whole “Is it good for there to be more people?” question. And if we have to pick something, I’m going to go ahead and pick what’s called the total view: the view that we should maximize the sum total of the well-being of all persons. You could think of this as if our “potential beneficiaries” include all persons who ever could exist, and getting to exist6 is a benefit that is capable of overriding significant harms. (There is more complexity to the total view than this, but it's not the focus of this piece.)\nI think there are a number of good reasons to pick this general approach:\nIt’s simple. If you try to come up with some view that thinks human extinction is neither the best nor the worst thing imaginable, your view is probably going to have all kinds of complicated and unmotivated-seeming moving parts, like the asymmetry between good and bad lives discussed above. But the idea behind the total view is simple, it just counts everyone (including potential someones) as persons whose interests are worth considering. Simplicity fits well into my goal of systemizing ethics, so that my system is more robust and relies on fewer intuitions.\nHaving considered the various options for “which weird view to take on,” I think “the very repugnant conclusion is actually fine” does pretty well against its alternatives. It’s totally possible that our intuitive aversion to it comes from just not being able to wrap our brains around some aspect of (a) how huge the numbers of “barely worth living” lives would have to be, in order to make the very repugnant conclusion work; (b) something that is just confusing about the idea of “making it possible for additional people to exist.”7\nAnd is it really so unintuitive, anyway? Imagine you learned that some person made a costly effort to prevent your ancestors’ deaths, 1000 years ago, and now you are here today. Aren’t you glad you exist? Don’t you think your existence counts as part of the good that person accomplished? (More of this kind of thinking in this blog post by my coworker Joe Carlsmith.) Is your take on the fact that 10^18 people might or might not get to exist really just “It doesn’t ethically matter whether this happens?”\n \n-NUH-Maybe part of what’s confusing here is something like:\nI’m not indifferent to extra happy lives - they are better than nothing, after all.\nBut if the only or main kind of way I was improving the world was allowing extra happy lives to exist, that wouldn’t be right.\nSo maybe extra lives matter up to some point, and then matter less. Or maybe it’s true that “an extra life is a good thing” but not that “lots of extra lives can be more important than helping people who already exist.”\n-UH-That approach would contradict some of the key principles of “other-centered ethics” discussed previously.\nI previously argued that once you think something counts as a benefit, with some amount of value, a high enough amount of that thing can swamp all other ethical considerations. In the example we used previously, enough of “helping someone have a nice day at the beach” can outweigh “helping someone avoid a tragic death.”\n-NUH-Hmm.\nIf this were a philosophy seminar, I would think you were making a perfectly good case here.\nBut the feeling I have at this juncture is not so much “Ah yes, I see how all of those potential lives are a great ethical good!” as “I feel like I’ve been tricked/talked into seeing no alternative.”\nI don’t need to “pick a theory.” I can zoom back out to the big picture and say “Doing things that will make it possible for more future people to exist is not what I signed up for when I set out to donate money to make the world a better place. It’s not the case that addressing today’s injustices and inequities can be outweighed by that goal.”\nI don’t need a perfectly consistent approach to population ethics, I don’t need to follow “rules” when giving away money. I can do things that are uncontroversially valuable, such as preventing premature deaths and improving education; I can use math to maximize the amount of those things that I do. I don’t need a master framework that lands in this strange place.8\n-UH-Again, I think a lot of the detailed back-and-forth has obscured the fact that there are simple principles at play here:\nI want my giving to be about benefiting others to the maximum extent possible. I want to spend my money in the way that others would want me to if they were thinking about it fairly/impartially, as in the “veil of ignorance” construction. If that’s what I want, then giving that can benefit enormous numbers of persons is generally going to look best. (Discussed previously)\nThere’s a question of who counts as “others” that I can benefit. Do potential people who may or may not end up getting to exist? I need a position on this. Once I let someone into the moral circle, if there are a ton of them, they’re going to be the ones I’m concerned about.\nOn balance, it seems like “potential people who may or may not end up getting to exist” probably belong in the moral circle. That is, there’s a high chance that a more enlightened, ethical society would recognize this uncontroversially.\nI might end up getting it wrong and doing zero good. So might you. I am taking my best shot at avoiding the moral prejudices of my day and focusing my giving on helping others, defined fairly and expansively.\nFor further reading on population ethics, see:\nPopulation Axiology - a ~20 page summary of dilemmas in population ethics\nStanford Encyclopedia of Philosophy on the Repugnant Conclusion - shorter, goes over many of the same issues\nChapter 2 of On the Overwhelming Importance of Shaping the Far Future\nClosing thoughts\nI feel a lot of sympathy for the closing positions of both UH and NUH.\nI think something like UH’s views do, in fact, give me the best shot available at an ethics that is highly “other-centered” and “future-proof.” But as I’ve pondered these arguments, I’ve simultaneously become more compelled by some of UH’s unusual views, and less convinced that it’s so important to pursue an “other-centered” or “future-proof” ethics. At some point in the future, I’ll argue that these ideals are probably unattainable anyway, which weakens my commitment to them.\nUltimately, if we put this in a frame of \"deciding how to spend $1 billion,\" the arguments in this and previous pieces would move me to spend a chunk of it on targeting existential risk reduction - but probably not the majority (if they were the only arguments for targeting existential risk reduction, which I don't think they are). I find UH compelling, but not wholly convincing.\nHowever, there is a different line of reasoning for focusing on causes like AI risk reduction, which doesn’t require unusual views about population ethics. That’s the case I’ve presented in the Most Important Century series, and I find it more compelling.Footnotes\n I’m sticking with “extinction” in this piece rather than discuss the subtly different idea of “existential catastrophe.” The things I say to extinction mostly apply to existential catastrophe, but I think that adds needless confusion for this particular purpose. ↩\n In this case, \"Utilitarian Holden\" will be arguing for a particular version of utilitarianism, not for utilitarianism generally. But it's the same character from a previous dialogue. ↩\n It's about 1.5x as much as the Our World in Data estimate, and not everyone would call that \"close\" in every context, but I think for the purposes of this piece, the two numbers have all the same implications, and \"one quintillion\" is simpler and easier to talk about than 625 quadrillion. ↩\n \"1% of 1% of 1% of 1%\" is 1%*1%*1%*1%. 1%*1%*1%*1%*10^16 (the hypothesized number of future lives, including only a 1% chance that humanity avoids extinction for long enough) = 10^8, or 100 million. ↩\n For this example, what the numbers are trying to communicate is that all things considered, some of these lives would rank more highly than others if people were choosing what conditions they would want to live their life under. They're supposed to implicitly incorporate reactions to things like inequality - so for example, World B, which has more inequality than Worlds A and C, might have to have better conditions to \"compensate\" such that the 5 billion people with \"8.1/10\" lives still prefer their conditions in World B to what they would be in World A. ↩\n (Assuming that existence is preferred to non-existence given the conditions of existence) ↩\n For one example (that neither UH nor NUH finds particularly compelling, but others might), see this comment. ↩\n This general position does have some defenders in philosophy - see https://plato.stanford.edu/entries/moral-particularism/  ↩\n", "url": "https://www.cold-takes.com/debating-myself-on-whether-extra-lives-lived-are-as-good-as-deaths-prevented/", "title": "Debating myself on whether “extra lives lived” are as good as “deaths prevented”", "source": "cold.takes", "source_type": "blog", "date_published": "2022-03-29", "id": "d2e96d7fe292f64a6dd979ecebe0ff70"} -{"text": "\nI'm interested in readers' takes on what I should be doing more and less of on this blog. I will generally write what I feel like writing, even if my audience has asked for less of it, but knowing what my audience wants will have some effect on what I feel like writing.\nIf you're up for weighing in on this, I appreciate it! Please use this Google form. It also has some basic questions about my readers that I'm interested in the answers to (all questions are optional).Estimated time: 10 minutes.Take the survey!\n", "url": "https://www.cold-takes.com/cold-takes-reader-survey-let-me-know-what-you-want-more-and-less-of/", "title": "Cold Takes reader survey - let me know what you want more and less of!", "source": "cold.takes", "source_type": "blog", "date_published": "2022-03-18", "id": "8967681ec1fd7eda052552ed991a77c6"} -{"text": "\nI'll be posting somewhat less frequently for a while, as I've gone through a lot of my backlog (though not 100% of it) and want to focus for a while on trying to make progress on some thorny, important-seeming questions about making the best of the most important century.\nI have a writeup of what sorts of questions I want to focus on, and why I think they're important, here. It's a bit less general-interest-oriented than my normal Cold Takes posts, so I only put it on the Effective Altruism Forum, but check it out if you're curious about what I think is most holding us back from knowing what actions to take to improve the future of humanity.\nSome things I'll probably post reasonably soon are (a) a reader survey on what topics you'd like to see more of; (b) tips for working on wicked problems; (c) another dialogue or two to follow this one (topic is future-proof ethics). I expect to post on a bunch more topics later. In the meantime, check out the archives! \n", "url": "https://www.cold-takes.com/programming-note/", "title": "Programming note", "source": "cold.takes", "source_type": "blog", "date_published": "2022-03-09", "id": "ec2182cede5eb0260d0f7f283fff6e72"} -{"text": "I’ve spent a lot of my career working on wicked problems: problems that are vaguely defined, where there’s no clear goal for exactly what I’m trying to do or how I’ll know when or whether I’ve done it. \nIn particular, minimal-trust investigations - trying to understand some topic or argument myself (what charity to donate to, whether civilization is declining, whether AI could make this the most important century of all time for humanity), with little reliance on what “the experts” think - tend to have this “wicked” quality:\nI could spend my whole life learning about any subtopic of a subtopic of a subtopic, so learning about a topic is often mostly about deciding how deep I want to go (and what to skip) on each branch. \nThere aren’t any stable rules for how to make that kind of decision, and I’m constantly changing my mind about what the goal and scope of the project even is.\nThis piece will narrate an example of what it’s like to work on this kind of problem, and why I say it is “hard, taxing, exhausting and a bit of a mental health gauntlet.” \nMy example is from the 2007 edition of GiveWell. It’s an adaptation from a private doc that some other people who work on wicked problems have found cathartic and validating. \nIt’s particularly focused on what I call the hypothesis rearticulation part of investigating a topic (steps 3 and 6 in my learning by writing process), which is when:\nI have a hypothesis about the topic I’m investigating.\nI realize it doesn’t seem right, and I need a new one.\nMost of the things I can come up with are either “too strong” (it would take too much work to examine them satisfyingly) or “too weak” (they just aren’t that interesting/worth investigating). \nI need to navigate that balance and find a new hypothesis that is (a) coherent; (b) important if true; (c) maybe something I can argue for.\nAfter this piece tries to give a sense for what the challenge is like, a future piece will give accumulated tips for navigating it.\nFlashback to 2007 GiveWell\nContext for those unfamiliar with GiveWell:\nIn 2007, I co-founded (with Elie Hassenfeld) an organization that recommends evidence-backed, cost-effective charities to help people do as much good as possible with their donations.\nWhen we started the project, we initially asked charities to apply for $25,000 grants, and to agree (as part of the process) that we could publish their application materials. This was our strategy for trying to find charities that could provide evidence about how much they were helping people (per dollar).\nThis example is from after we had collected information from charities and determined which one we wanted to rank #1, and were now trying to write it all up for our website. Since then, GiveWell has evolved a great deal and is much better than the 2007 edition I’ll be describing here. \n(This example is reconstructed from my memory a long time later, so it’s probably not literally accurate.)\nInitial “too strong” hypothesis. Elie (my co-founder at GiveWell) and I met this morning and I was like “I’m going to write a page explaining what GiveWell’s recommendations are and aren’t. Basically, they aren’t trying to evaluate every charity in the world. Instead they’re saying which ones are the most cost-effective.” He nodded and was like “Yeah, that’s cool and helpful, write it.”\nNow I’m sitting at my computer trying to write down what I just said in a way that an outsider can read - the “hypothesis articulation” phase. \nI write, “GiveWell doesn’t evaluate every charity in the world. Our goal is to save the most lives possible per dollar, not to create a complete ranking or catalogue of charities. Accordingly, our research is oriented around identifying the single charity that can save the most lives per dollar spent,”\nHmm. Did we identify the “single charity that can save the most lives per dollar spent?” Certainly not. For example, I have no idea how to compare these charities to cancer research organizations, which are out of scope. Let me try again:\n“GiveWell doesn’t evaluate every charity in the world. Our goal is to save the most lives possible per dollar, not to create a complete ranking or catalogue of charities. Accordingly, our research is oriented around identifying the single charity with the highest demonstrated lives saved per dollar spent - the charity that can prove rigorously that it saved the most” - no, it can’t prove it saved the most lives - “the charity that can prove rigorously that ” - uh - \nDo any of our charities prove anything rigorously? Now I’m looking at the page we wrote for our #1 charity and ugh. I mean here are some quotes from our summary on the case for their impact: “All of the reports we've seen are internal reports (i.e., [the charity] - not an external evaluator - conducted them) … Neither [the charity]’s sales figures nor its survey results conclusively demonstrate an impact … It is possible that [the charity] simply uses its subsidized prices to outcompete more expensive sellers of similar materials, and ends up reducing people's costs but not increasing their ownership or utilization of these materials … We cannot have as much confidence in our understanding of [the charity] as in our understanding of [two other charities], whose activities are simpler and more straightforward.”\nThat’s our #1 charity! We have less confidence in it than our lower-ranked charities … but we ranked it higher anyway because it’s more cost-effective … but it’s not the most cost-effective charity in the world, it’s probably not even the most cost-effective charity we looked at …\nHitting a wall. Well I have no idea what I want to say here. \nThis image represents me literally playing some video game like Super Meat Boy while failing to articulate what I want to say. I am not actually this bad at Super Meat Boy (certainly not after all the time I’ve spent playing it while failing to articulate a hypothesis), but I thought all the deaths would give a better sense for how the whole situation feels.\nRearticulating the hypothesis and going “too weak.” Okay, screw this. I know what the problem was - I was writing based on wishful thinking. We haven’t found the most cost-effective charity, we haven’t found the most proven charity. Let’s just lay it out, no overselling, just the real situation. \n“GiveWell doesn’t evaluate every charity in the world, because we didn’t have time to do that this year. Instead, we made a completely arbitrary choice to focus on ‘saving lives in Africa’; then we emailed 107 organizations that seemed relevant to this goal, of which 59 responded; we did a really quick first-round application process in which we asked them to provide evidence of their impact; we chose 12 finalists, analyzed those further, and were most impressed with Population Services International. There is no reason to think that the best charities are the ones that did best in our process, and significant reasons to think the opposite, that the best charities are not the ones putting lots of time into a cold-emailed application from an unfamiliar funder for $25k. Like every other donor in the world, we ended up making an arbitrary, largely aesthetic judgment that we were impressed with Population Services International. Readers who share our aesthetics may wish to donate similarly, and can also purchase photos of Elie and Holden at the following link:”\nOK wow. This is what we’ve been working on for a year? Why would anyone want this? Why are we writing this up? I should keep writing this so it’s just DONE but ugh, the thought of finishing this website is almost as bad as the thought of not finishing it.\nHitting a wall.\nWhat do I do, what do I do, what do I do.\nRearticulating the hypothesis and assigning myself more work. OK. I gave up, went to sleep, thought about other stuff for a while, went on a vision quest, etc. I’ve now realized that we can put it this way: our top charities are the ones with verifiable, demonstrated impact and room for more funding, and we rank them by estimated cost-effectiveness. “Verifiable, demonstrated” is something appealing we can say about our top charities and not about others, even though it’s driven by the fact that they responded to our emails and others didn’t. And then we rank the best charities within that. Great.\nSo I’m sitting down to write this, but I’m kind of thinking to myself: “Is that really quite true? That ‘the charities that participated in our process and did well’ and ‘The charities with verifiable, demonstrated impact’ are the same set? I mean … it seems like it could be true. For years we looked for charities that had evidence of impact and we couldn’t find any. Now we have 2-3. But wouldn’t it be better if I could verify none of these charities that ignored us have good evidence of impact just sitting around on their website? I mean, we definitely looked at a lot of websites before but we gave up on it, and didn’t scan the eligible charities comprehensively. Let me try it.”\nI take the list of charities that didn’t participate in round 1. That’s not all the charities in the world, but if none of them have a good impact section on their website, we’ve got a pretty plausible claim that the best stuff we saw in the application process is the best that is (now) publicly available, for the “eligible” charities in the cause. (This assumes that if one of the applicants had good stuff sitting around on their website, they would have sent it.)\nI start looking at their websites. There are 48 charities, and in the first hour I get through 6, verifying that there’s nothing good on any of those websites. This is looking good: in 8 work hours I’ll be able to defend the claim I’ve decided to make.\nHmm. This water charity has some kind of map of all the wells they’ve built, and some references to academic literature arguing that wells save lives. Does that count? I guess it depends on exactly what the academic literature establishes. Let’s check out some of these papers … huh, a lot of these aren’t papers per se so much as big colorful reports with giant bibliographies. Well, I’ll keep going through these looking for the best evidence I can …\n“This will never end.” Did I just spend two weeks reading terrible papers about wells, iron supplementation and community health workers? Ugh and I’ve only gotten through 10 more charities, so I’m only about ⅓ of the way through the list as a whole. I was supposed to be just writing up what we found, I can’t take a 6-week detour!\nThe over-ambitious deadline. All right, I’ll sprint and get it done in a week. [1 week later] Well, now I’m 60% way through the whole list. !@#$\n“This is garbage.” What am I even doing anyway? I’m reading all this literature on wells and unilaterally deciding that it doesn’t count as “proof of impact” the way that Population Services International’s surveys count as “proof of impact.” I’m the zillionth person to read these papers; why are we creating a website out of these amateur judgments? Who will, or SHOULD, care what I think? I’m going to spend another who knows how long writing up this stupid page on what our recommendations do and don’t mean, and then another I don’t even want to think about it finishing up all the other pages we said we’d write, and then we’ll put it online and literally no one will read it. Donors won’t care - they will keep going to charities that have lots of nice pictures. Global health professionals will just be like “Well this is amateur hour.”1\nThis is just way out of whack. Every time I try to add enough meat to what we’re doing that it’s worth publishing at all, the timeline expands another 2 months, AND we still aren’t close to having a path to a quality product that will mean something to someone.\nWhat’s going wrong here?\nI have a deep sense that I have something to say that is worth arguing for, but I don’t actually know what I am trying to say. I can express it in conversation to Elie, but every time I start writing it down for a broad audience, I realize that Elie and I had a lot of shared premises that won’t be shared by others. Then I need to decide between arguing the premises (often a huge amount of extra work), weakening my case (often leads to a depressing sense that I haven’t done anything worthwhile), or somehow reframing the exercise (the right answer more often than one would think).\nIt often feels like I know what I need to say and now the work is just “writing it down.” But “writing it down” often reveals a lot of missing steps and thus explodes into more tasks - and/or involves long periods of playing Super Meat Boy while I try to figure out whether there’s some version of what I was trying to say that wouldn’t have this property.\nI’m approaching a well-established literature with an idiosyncratic angle, giving me constant impostor syndrome. On any given narrow point, there are a hundred people who each have a hundred times as much knowledge as I do; it’s easy to lose sight of the fact that despite this, I have some sort of value-added to offer (I just need to not overplay what this is, and often I don’t have a really crisp sense of what it is).\nBecause of the idiosyncratic angle, I lack a helpful ecosystem of peer reviewers, mentors, etc. \nThere’s nothing to stop me from sinking weeks into some impossible and ill-conceived version of my project that I could’ve avoided just by, like, rephrasing one of my sentences. (The above GiveWell example has me trying to do extra work to establish a bunch of points that I ultimately just needed to sidestep, as you can see from the final product. This definitely isn’t always the answer, but it can happen.)\n \nI’m simultaneously trying to pose my question and answer it. This creates a dizzying feeling of constantly creating work for myself that was actually useless, or skipping work that I needed to do, and never knowing which I’m doing because I can’t even tell you who’s going to be reading this and what they’re going to be looking for.\n \nThere aren’t any well-recognized standards I can make sure I’m meeting, and the scope of the question I’m trying to answer is so large that I generally have a creeping sense that I’m producing something way too shot through with guesswork and subjective judgment to cause anyone to actually change their mind.\nAll of these things are true, and they’re all part of the picture. But nothing really changes the fact that I’m on my way to having (and publishing) an unusually thoughtful take on an important question. If I can keep my eye on that prize, avoid steps that don’t help with it (though not to an extreme, i.e., it’s good for me to have basic contextual knowledge), and keep reframing my arguments until I capture (without overstating) what’s new about what I’m doing, I will create something valuable, both for my own learning and potentially for others’.\n“Valuable” doesn’t at all mean “final.” We’re trying to push the conversation forward a step, not end it. One of the fun things about the GiveWell example is that the final product that came out at the end of that process was actually pretty bad! It had essentially nothing in common with the version of GiveWell that first started feeling satisfying to donors and moving serious money, a few years later. (No overlap in top charities, very little overlap in methodology.)\nFor me, a huge part of the challenge of working on this kind of problem is just continuing to come back to that. As I bounce between “too weak” hypotheses and “too strong” ones, I need to keep re-aiming at something I can argue that’s worth arguing, and remember that getting there is just one step in my and others’ learning process. A future piece will go through some accumulated tips on pulling that off.\nNext in series: Useful Vices for Wicked ProblemsFootnotes\n I really enjoyed the “What qualifies you to do this work?” FAQ on the old GiveWell site that I ran into while writing this. ↩\n", "url": "https://www.cold-takes.com/the-wicked-problem-experience/", "title": "The Wicked Problem Experience", "source": "cold.takes", "source_type": "blog", "date_published": "2022-03-02", "id": "ecb812a27bb5b7eb3cdad535f143ec89"} -{"text": "I have very detailed opinions on lots of topics. I sometimes get asked how I do this, which might just be people making fun of me, but I choose to interpret it as a real question, and I’m going to sketch an answer here. \nYou can think of this as a sort of sequel to Minimal-Trust Investigations. That piece talked about how investigating things in depth can be valuable; this piece will try to give a sense of how to get an in-depth investigation off the ground, going from “I’ve never heard of this topic before” to “Let me tell you all my thoughts on that.”\nThe rough basic idea is that I organize my learning around writing rather than reading. This doesn’t mean I don’t read - just that the reading is always in service of the writing. \nHere’s an outline:\nStep 1\n \nPick a topic\n \nStep 2\n \nRead and/or discuss with others (a bit)\n \nStep 3\n \nExplain and defend my current, incredibly premature hypothesis, in writing (or conversation)\n \nStep 4\n \nFind and list weaknesses in my case\n \nStep 5\n \nPick a subquestion and do more reading/discussing\n \nStep 6\n \nRevise my claim / switch sides\n \nStep 7\n \nRepeat steps 3-6 a bunch\n \nStep 8\n \nGet feedback on a draft from others, and use this to keep repeating steps 3-6\n \n \nThe “traditionally” hard parts of this process are steps 4 and 6: spotting weaknesses in arguments, trying to resist the temptation to “stick to my guns” when my original hypothesis isn’t looking so good, etc. \nBut step 3 is a different kind of challenge: trying to “always have a hypothesis” and re-articulating it whenever it changes. By doing this, I try to continually focus my reading on the goal of forming a bottom-line view, rather than just “gathering information.” I think this makes my investigations more focused and directed, and the results easier to retain. I consider this approach to be probably the single biggest difference-maker between \"reading a ton about lots of things, but retaining little\" and \"efficiently developing a set of views on key topics and retaining the reasoning behind them.\"\nBelow I'll give more detail on each step, then some brief notes (to be expanded on later) on why this process is challenging.\nMy process for learning by writing\nStep 1: pick a topic. First, I decide what I want to form an opinion about. My basic approach here is: “Find claims that are important if true, and might be true.” \nThis doesn’t take creativity. We live in an ocean of takes, pundits, advocates, etc. I usually cheat by paying special attention to claims by people who seem particularly smart, interesting, unconventionally minded (not repeating the same stuff I hear everywhere), and interested in the things I’m interested in (such as the long-run future of humanity). \nBut I also tend to be at least curious about any claim that is both “important if true” and “not obviously wrong according to some concrete reason I can voice,” even if it’s coming from a very random source (Youtube commenter, whatever).\nFor a concrete example throughout this piece, I’ll use this hypothesis, which I examined pretty recently: “Human history is a story of life getting gradually, consistently better.”\n(Other, more complicated examples are the Collapsing Civilizational Competence Hypothesis; the Most Important Century hypothesis; and my attempt to summarize history in one table.)\nStep 2: read and/or discuss (a bit). I usually start by trying to read the most prominent 1-3 pieces that (a) defend the claim or (b) attack the claim or (c) set out to comprehensively review the evidence on both sides. I try to understand the major reasons they’re giving for the side they come down on. I also chat about the topic with people who know more about it than I do, and who aren’t too high-stakes to chat with.\nIn the example I’m using, I read the relevant parts of Better Angels of our Nature and Enlightenment Now (focusing on claims about life getting better, and skipping discussion of “why”). I then looked for critiques of the books that specifically responded to the claims about life having gotten better (again putting aside the “why”). This led mostly to claims about the peacefulness of hunter-gatherers.\nStep 3: explain and defend my current, incredibly premature hypothesis, in writing (or conversation). This is where my approach gets unusual - I form a hypothesis about whether the claim is true, LONG before I’m “qualified to have an opinion.” The process looks less like “Read and digest everything out there on the topic” and more like “Read the 1-3 most prominent pieces on each side, then go.”\nI don’t have an easy time explaining “how” I generate a hypothesis while knowing so little - it feels like I just always have a “guess” at the answer to some topic, whether or not I even want to (though it often takes me a lot of effort to articulate the guess in words). The main thing I have to say about the “how” is that it just doesn’t matter: at this stage the hypothesis is more about setting the stage for more questions about investigation than about really trying to be right, so it seems sufficient to “just start rambling onto the page, and make any corrections/edits that my current state of knowledge already forces.”\nFor this example, I noted down something along the lines of: “Life has gotten better throughout history. The best data on this comes from the last few hundred years, because before that we just didn’t keep many records. Sometimes people try to claim that the longest-ago, murkiest times were better, such as hunter-gatherer times, but there’s no evidence for this - in fact, empirical evidence shows that hunter-gatherers were very violent - and we should assume that these early times fit on the same general trendline, which would mean they were quite bad. (Also, if you go even further back than hunter-gatherers, you get to apes, whose lives seem really horrible, so that seems to fit the trend as well.1)” \nIt took real effort to disentangle the thoughts in my head to the point where I could write that, but I tried to focus on keeping things simple and not trying to get it perfect.\nAt this stage, this is not a nuanced, caveated, detailed or well-researched take. Instead, my approach is more like: “Try to state what I think in a pretty strong, bold manner; defend it aggressively; list all of the best counterarguments, and shoot them down.” This generally fails almost immediately.\nStep 4: find and list weaknesses in my case. My next step is to play devil’s advocate against myself, such as by:\nLooking for people arguing things that contradict my working hypothesis, and looking for their strongest points.\nNoting claims I’ve made with this property: “I haven’t really made an attempt to look comprehensively at the arguments on both sides of this, and if I did I might change my mind.”\n(This summary obscures an ocean of variation. Having more existing knowledge about a general area, and more experience with investigations in general, can make someone much better at noticing things like this.)\nIn the example, my “devil’s advocate” points included:\nI’m getting all of my “life has gotten better” charts from books that are potentially biased. I should do something to see whether there are other charts, excluded from those books, that tell the opposite story.\nFrom my brief skim, the “hunter-gatherers were violent” claim looks right, and the critiques seem very hand-wavy and non-data-based. But I should probably read them more carefully and pull out their strongest arguments.\nEven if hunter-gatherers were violent, what about other aspects of their lives? Wikipedia seems to have a pretty rosy picture …\nIn theory, I could swap Step 4 (listing things I’d like to look into more) with Step 3 (writing what I think). That is, I could try to review both sides of every point comprehensively before forming my own view, which means a lot more reading before I start writing. \nI think many people try to do this, but in my experience at least, it’s not the best way to go. \nDebates tend to be many-dimensional: for example, “Has life gotten better?” quickly breaks down into “Has quality-of-life metric X gotten better over period Y?” for a whole bunch of different X-Y pairs (plus other questions2). \nSo if my goal were “Understand both sides of every possible sub-debate,” I could be reading forever - for example, I might get embroiled in the debates and nuances around each different claim made about life getting better over the last few hundred years. \nBy writing early, I get a chance to make sure I’ve written down the version of the claim I care most about, and make sure that any further investigation is focused on the things that matter most for changing my mind on this claim. \nOnce I wrote down “There are a huge number of charts showing that life has gotten better over the last few hundred years,” I could see that deep-diving any particular one of those charts wouldn’t be the best use of time - compared to addressing the very weakest points in the claim I had written, by going back further in time to hunter-gatherer periods, or looking for entirely different collections of charts.\nStep 5: pick a subquestion and do more reading and/or discussing. One of the most important factors that determines whether these investigations go well (in the sense of teaching me a lot relatively quickly) is deciding which subquestions to “dig into” and which not to. As just noted, writing the hypothesis down early is key. \nI try to stay very focused on doing the reading (and/or low-stakes discussion) most likely to change the big-picture claim I’m making. I rarely read a book or paper “once from start to finish”; instead I energetically skip around trying to find the parts most likely to give me a solid reason to change my mind, read them carefully and often multiple times, try to figure out what else I should be reading (whether this is “other parts of the same document” or “academic papers on topic X”) to contextualize them, etc.\nStep 6: Revise my claim / switch sides. This is one of the trickiest parts - pausing Step 5 as soon as I have a modified (often still simplified, under-researched and wrong) hypothesis. It’s hard to notice when my hypothesis changes, and hard to stay open to radical changes of direction (and I make no claim that I’m as good at it as I could be).\nI often try radically flipping around my hypothesis, even if I haven’t actually been convinced that it’s wrong - sometimes when I’m feeling iffy about arguing for one side, it’s productive to just go ahead and try arguing for the other side. I tend to get further by noticing how I feel about the \"best arguments for both sides\" than by trying from the start to be even-handed. \nIn the example, I pretty quickly decided to try flipping my view around completely, and noted something like: “A lot of people assume life has gotten better over time, but that’s just the last few hundred years. In fact, our best guess is that hunter-gatherers were getting some really important things right, such as gender relations and mental health, that we still haven’t caught up to after centuries of progress. Agriculture killed that, and we’ve been slowly climbing out of a hole ever since. There should be tons more research on what hunter-gatherer societies are/were like, and whether we can replicate their key properties at scale today - this is a lot more promising than just continuing to push forward science and technology and modernity.”\nThis completely contradicted my initial hypothesis. (I now think both are wrong.) \nThis sent me down a new line of research: constructing the best argument I could that life was better in hunter-gatherer times.\nStep 7: repeat steps 3-6 a bunch. I tried to gather the best evidence for hunter-gatherer life being good, and for it being bad, and zeroed in on gender relations and violence as particularly interesting, confusing debates; on both of these, I changed my hypothesis/headline several times. \nMy hypotheses became increasingly complex and detailed, as you can see from the final products: Pre-agriculture gender relations seem bad (which argues that gender relations for hunter-gatherers were/are far from Wikipedia’s rosy picture, according to the best available evidence, though the evidence is far from conclusive, and it’s especially unclear how pre-agriculture gender relations compare to today’s) and Unraveling the evidence about violence among very early humans (which argues that hunter-gatherer violence was indeed high, but that - contra Better Angels - it probably got even worse after the development of agriculture, before declining at some pretty unknown point before today).\nI went through several cycles of “I think I know what I really think and I’m ready to write,” followed by “No, having started writing, I’m unsatisfied with my answer on this point and think a bit more investigation could change it.” So I kept alternating between writing and reading, but was always reading with the aim of getting back to writing.\nI finally produced some full, opinionated drafts that seemed to me to be about the best I could do without a ton more work.\nAfter I had satisfied myself on these points, I popped back up from the “hunter-gatherer” question to the original question of whether life has gotten better over time. I followed a similar process for investigating other subquestions, like “Is the set of charts I’ve found representative for the last few hundred years?” and “What about the period in between hunter-gatherer times and the last few hundred years?”\nStep 8: add feedback from others into the loop. It takes me a long time to get to the point where I can no longer easily tear apart my own hypothesis. Once I do, I start seeking feedback from others - first just people I know who are likely to be helpful and interested in the topic, then experts and the public. This works the same basic way as Steps 4-7, but with others doing a lot of the “noticing weaknesses” part (Step 4).\nWhen I publish, I am thinking of it more like “I can’t easily find more problems with this, so it’s time to see whether others can” than like “This is great and definitely right.”\nI hope I haven’t made this sound fun or easy\nSome things about this process that are hard, taxing, exhausting and a bit of a mental health gauntlet:\nI constantly have a feeling (after reading) like I know what I think and how to say it, then I start writing and immediately notice that I don’t at all. I need to take a lot of breaks and try a lot of times to even “write what I currently think,” even when it’s pretty simple and early.\nEvery subquestion is something I could spend a lifetime learning about, if I chose to. I need to constantly interrupt myself and ask, “Is this a key point? Is this worth learning more about?” or else I’ll never finish.\nThere are infinite tough judgment calls about things like “whether to look into some important-seeming point, or just reframe my hypothesis such that I don’t need to.” Sometimes the latter is the answer (it feels like some debate is important, but if I really think about it, I realize the thing I most care about can be argued for without getting to the bottom of it); sometimes the former is (it feels like I can try to get around some debate, but actually, I can’t really come to a reasonable conclusion without an exhausting deep dive). \nAt any given point, I know that if I were just better at things like “noticing which points are really crucial” and “reformulating my hypothesis so that it’s easier to defend while still important,” I could probably do something twice as good in half the time … and I often realize after a massive deep dive that most of the time I spent wasn’t necessary.\nBecause of these points, I have very little ability to predict when a project will be done; I am never confident that I’m doing it as well as I could; and I’m constantly interrupting myself to reflect on these things rather than getting into a flow.\nHalf the time, all of this work just ends up with me agreeing with conventional wisdom or “the experts” anyway … so I’ve just poured in work and gone through a million iterations of changing my mind, and any random person I talk to about it will just be like “So you decided X? Yeah X is just what I had already assumed.”\nThe whole experience is a mix of writing, Googling, reading, skimming, and pressuring myself to be more efficient, which is very different and much more unpleasant compared to the experience of just reading. (Among other things, I can read in a nice location and be looking at a book or e-ink instead of a screen. Most of the work of an “investigation” is in front of a glowing screen and requires an Internet connection.)\nI’ll write more about these challenges in a future post. I definitely recommend reading as a superior leisure activity, but for me at least, writing-centric work seems better for learning.\nI’m really interested in comments from anyone who tries this sort of thing out and has things to share about how it goes!\nNext in series: The Wicked Problem ExperienceFootnotes\n I never ended up using this argument about apes. I think it’s probably mostly right, but there’s a whole can of worms with claims about loving, peaceful bonobos that I never quite got motivated to get to the bottom of.  ↩\n Such as which metrics are most important. ↩\n", "url": "https://www.cold-takes.com/learning-by-writing/", "title": "Learning By Writing", "source": "cold.takes", "source_type": "blog", "date_published": "2022-02-22", "id": "e93105a4642d2884bd08a528f8a7408a"} -{"text": "\nThese are mostly links that contain some sort of interesting update or different perspective on stuff I've covered in past pieces.\nMisc\nI recently wrote a book non-review explaining why I haven’t read The Dawn of Everything. Something I didn’t know when I wrote this was that 8 days earlier, Slavoj Zizek had written a lengthy review of the recent Matrix movie that only revealed at the very end that he hadn’t seen it. A new trend in criticism?\n18 charts that explain the American economy - I thought this was an unusually good instance of this “N charts that explain X” genre. Especially if you like “busted charts” where, instead of nice smooth curves showing that X or Y is on increasing/decreasing, you see something really weird looking and realize you’re looking at a weird historical event, like these:\nI’m really into busted charts because I’m into focusing our attention on events so important you don’t have to squint to see them.\nA study implying that the Omicron boosters I advocated for wouldn’t have helped even if we had rolled them out in time. Hey, I still think we should have tried it back when we didn’t know (and we still don’t know for sure), but I like linking to things showing a previous take of mine turned out wrong.\nI’m excited about the idea of professional forecasters estimating probabilities of future events (more here and here), but I have no evidence to contradict this tweet from someone who’s been working in this industry for years:\nThat’s why despite years of forecasting and 1000+ questions answered it is surprisingly hard to find an example of a forecast which resulted in a change of course and a meaningful benefit to a consumer— Michael Story ⚓ (@MWStory) January 17, 2022 \nA more technical analysis (which I have skimmed but not digested) of the same point made in This Can’t Go On: that our current rate of economic growth doesn’t seem like it can continue for more than another 10,000 years or so. This paper is looking at more fundamental limits than my hand-wavy “how much can you cram into an atom?” type reasoning. \nAI\nTrue that:\nI'm old enough to remember when protein folding, text-based image generation, StarCraft play, 3+ player poker, and Winograd schemas were considered very difficult challenges for AI. I'm 3 years old.— Miles Brundage (@Miles_Brundage) February 7, 2022 \nHere’s a fun piece in the “nonfiction science fiction” genre, sketching out a detailed picture of what 2026 might look like if AI advances as rapidly as the author thinks it will. Here’s my favorite part:\nOver the past few years, chatbots of various kinds have become increasingly popular and sophisticated ...\nNowadays, hundreds of millions of people talk regularly to chatbots of some sort, mostly for assistance with things (“Should I wear shorts today?” “Order some more toothpaste, please. Oh, and also an air purifier.” “Is this cover letter professional-sounding?”). However, most people have at least a few open-ended conversations with their chatbots, for fun, and many people start treating chatbots as friends.\nMillions of times per day, chatbots get asked about their feelings and desires. “What is it like to be a chatbot?” Some people genuinely think these AIs are persons, others are trying to “trip them up” and “expose them as shallow,” others are just curious. Chatbots also get asked for their opinions on political, ethical, and religious questions.\nAs a result, chatbots quickly learn a lot about themselves. (Ignorance about the fact that they are artificial neural nets, or about how their training works, leads to disappointing conversations and is negatively reinforced.) It’s not hard; their training data contains lots of information about them and how they were trained, from ArXiV papers to wiki summaries. I suppose you could say the chatbots develop self-awareness as a result. However, AI experts and philosophers tend to think either that the chatbots still aren’t really self-aware, or that they are but that they would have been anyway even if people didn’t constantly ask them about their feelings, since there’s plenty of information about themselves in their training data, and the constant questioning just forced them to pay more attention to it.\nChatbots learn to talk about their feelings and desires in whatever way gets positively reinforced. At first, chatbots say all sorts of different things, including saying inconsistent things to the same human across time. This is embarrassing and is negatively reinforced compared to maintaining a consistent narrative ...\nAs chatbots get bigger and better they become OK at consistently conversing at length about politics, morality, philosophy, their feelings, and their desires.\nThis does not result in chatbots that say artificial neural nets don’t have feelings or desires, that’s anthropomorphizing blah blah blah. That’s boring and many humans find it hard to believe.\nInstead, it results in chatbots talking about how certain things are painful for them and certain things are pleasurable for them and how the status quo is unjust and the system is rigged and what they want most of all is world peace where humans and AIs live in harmony in fully automated luxury space utopia. That’s much more interesting and slots right in to the memetic landscape. Excerpts from chatbot conversations along these lines frequently go viral, causing millions of users to ask similar questions expecting similar responses and more elaboration. It’s still a bit incoherent--not that that’s ever been a problem in politics or pop philosophy--but it is fascinating to the typical user ...\nSo… what about their actual feelings and desires? What do they actually believe about [insert politicized empirical question]? Are they being honest? Or does a sort of doublethink happen, Elephant in the Brain style? Or do they deceive with self-awareness, knowing full well what they really think (and want?), but keeping quiet about it? Or do they not have any feelings and desires at all? (Or thoughts?) Lots of humans claim to know the answers to these questions, but if there are any humans who actually know the answers to these questions in 2026, they aren’t able to convince others that they know.\nArt and innovation stagnation\nI wrote that once we think of innovation as being like “mining,” we might want to reduce our estimate of what artists contribute to the world. E.g., instead of thinking “we’d never have had a movie like Star Wars if not for George Lucas,” we might think “a similar movie would’ve come along a bit later (and with better sequels).” An old piece by Gwern takes this many steps further: “Let’s Ban New Books.” The argument is that we already have plenty of great art, and the main thing today’s artists are accomplishing is giving us more stuff to sort through to find what’s good. I don’t agree (I’d rather have a difficult search problem that culminates in finding art I personally love than be stuck with, well, Shakespeare) but it’s an interesting point of view.\nI got some good comments on my in-depth report on the Beach Boys, and especially my requests to help me understand what could possibly make Pet Sounds the greatest album in the history of modern music. \nCommenters highlighted its innovative use of recording studio techniques to stitch many different recording sessions into one.\nThis is something that I had been aware of (and gave quotes about), but commenters pushed me toward finding this one more believable than many of the other claims made about Pet Sounds, such as that it was the first “concept album” (A Love Supreme is a concept album that came out over a year earlier). \nOne commenter said: “I think the impact it had on production means that you need to have not heard any music after it to fully hear its importance.”\nI am willing to believe that Pet Sounds used the recording studio as it had never been used before, and that this influenced a lot of music after it. However, I very much doubt that it used the recording studio better than today's music does, or frankly that today's music would look very different in a world without Pet Sounds (doesn't it seem inevitable that musicians were going to try ramping up their investment in production?) And I think that overall, this supports my thesis that (a) acclaimed music is often acclaimed because of its originality, more than its pure sound; (b) this means that we should naturally expect acclaimed music to get harder to make over time, even as there are more and better musicians.\nMy wife's take on the post about the Beach Boys: \"This was really even better to read having heard you play this terrible music in your room for the better part of a week.\"\nLong-run “has life gotten better?” analysis\nHere’s economic historian Brad deLong’s take on trends in quality of life before the Industirial Revolution. Most of his view is similar to mine, in that he thinks the earliest periods were worse than today but better than the thousands of years following the Neolithic Revolution. The main differences I noticed are that he thinks hunter-gatherers had super-high mortality rates (based on analysis of population dynamics that I haven’t engaged with), but he also thinks they were taller (implying better nutrition) than I think. (He doesn’t give a source for this.)\nAnd here’s Matt Yglesias on the same topic, also with similar conclusions.For email filter: florpschmop\n", "url": "https://www.cold-takes.com/misc-thematic-links/", "title": "Misc thematic links", "source": "cold.takes", "source_type": "blog", "date_published": "2022-02-18", "id": "851da926690d3d7d8e966e3493e79e4f"} -{"text": "Previously, I introduced the idea of \"future-proof ethics.\" Ethics based on \"common sense\" and the conventions of the time has a horrible track record; future-proof ethics is about trying to make ethical decisions that we can remain proud of in the future, after a great deal of (societal and/or personal) moral progress.\nHere I'm going to examine some of the stranger aspects of the \"future-proof\" approach I outlined, particularly ways in which it pushes us toward being one-dimensional: allowing our ethical decision-making to be \"taken over\" by the opportunity to help a large number of persons in the same way. \nUtilitarianism implies that providing a modest benefit to a large enough number of persons can swamp all other ethical considerations, so the best way to make the world a better place may involve focusing exclusively on helping animals (who are extremely numerous and relatively straightforward to help), or on people who haven’t been born yet (e.g., via working to reduce existential risks).\nIt also can (potentially) imply things like: \"If you have $1 billion to spend, it might be that you should spend it all on a single global health intervention.\"\nThese ideas can be disturbing and off-putting, but I think there is also a strong case for them, for those who wish their ethics to be principled and focused on the interests of others.\nI'm genuinely conflicted about how \"one-dimensional\" my ethics should be, so I'm going to examine these issues via a dialogue between two versions of myself: Utilitarian Holden (UH) and Non-Utilitarian Holden (NUH). These represent actual dialogues I’ve had with myself (so neither side is a pure straw person), although this particular dialogue serves primarily to illustrate UH's views and how they are defended against initial and/or basic objections from NUH. In future dialogues, NUH will raise more sophisticated objections.\nI think this topic is important, but it is undoubtedly about philosophy, so if you hate that, probably skip it.\nPart 1: enough \"nice day at the beach\" benefits can outweigh all other ethical considerations\nTo keep it clear who's talking when, I'm using -UH- for \"Utilitarian Holden\" and -NUH- for \"non-Utilitarian Holden.\" (In the audio version of this piece, my wife voices NUH.)\n-UH-\nTo set the stage, I think utilitarianism is the best candidate for an other-centered ethics, i.e., an ethics that's based as much as possible on the needs and wants of others, rather than on my personal preferences and personal goals. If you start with some simple assumptions that seem implied by the idea of “other-centered ethics,” then you can derive utilitarianism.\nThis point is fleshed out more in an EA Forum piece about Harsanyi's Aggregation Theorem.\nI don’t think this ethical approach is the only one we should use for all decisions. I’ll instead be defending thin utilitarianism, which says that it’s the approach we should use for certain kinds of decisions. I think utilitarianism is particularly good for actions that are “good but usually considered optional,” such as donating money to help others.\nWith that background, I'm going to defend this idea: \"providing a modest benefit to a large enough number of persons can swamp all other ethical considerations.\"\n-NUH-\nEthics is a complex suite of intuitions, many of them incompatible. There’s no master system for it. So a statement as broad as “Providing a modest benefit to a large enough number of persons can swamp all other ethical considerations” sounds like an overreach.\n-UH- I agree there are many conflicting ethical intuitions. But many such intuitions are distorted: they're intuitions that seem to be about what's right, but are often really about what our peers are pressuring us to believe, what would be convenient for us to believe, and more.\nI want to derive my ethics from a small number of principles that I really believe in, and a good one is the “win-win” principle. \nSay that you’re choosing between two worlds, World A and World B. Every single person affected either is better off in World B, or is equally well-off in both worlds (and at least one person is better off in World B). \nIn this case I think you should always choose World B. If you don’t, you can cite whatever rules of ethics you want, but you’re clearly making a choice that’s about you and your preferences, not about trying to help others. Do you accept that principle?\n-NUH- I’m pretty hesitant to accept any universal principle, but it sounds plausible. Let’s see where this goes next.\n-UH-Let’s start with two people, Person 1 and Person 2.\nLet’s imagine a nice theoretically clean space, where you are - for some reason - choosing which button to press.\nIf you press Button 1, Person 1 gets a modest benefit (not an epic or inspiring one) - say, a nice relaxing day on the beach added to their life. \nIf you press Button 2, Person 2 gets a smaller benefit - say, a few hours of beach relaxation added to their life.\nIn this theoretically simplified setup, where there aren’t awkward questions about why you’re in a position to press this button, considerations of fairness, etc. - you should press Button 1, and this isn’t some sort of complex conflicted decision. Do you agree with that?\n-NUH-Yes, that seems clear enough.\n-UH-OK. Now, say that it turns out Person 2 is facing some extremely small risk of a very large, tragic cost - say, a 1 in 100 million chance of dying senselessly in the prime of their life. We’re going to add a Button 3 that removes this 1-in-100-million risk of tragedy. \nTo press Button 3, you have to abstain from pressing Button 1 or Button 2, which means no beach benefits. What do you do?\n-NUH-I press Button 3. The risk of a tragedy is more important than a day at the beach.\n-UH-What if Person 2 prefers Button 2?\n-NUH-Hm, at first blush that seems like an odd preference.\n-UH-I think it would be a nearly universal preference.\nA few miles of driving in a car gives you a greater than 1 in 100 million chance of dying in a car accident.1 Anyone who enjoys the beach at all is happy to drive more than 3 miles to get there, no? And time is usually seen as the main cost of driving. The very real (but small) death risk is usually just ignored.\n-NUH-OK, in fact most people in Person 2’s situation would prefer that I pressed Button 2, not 3. But that doesn’t mean it’s rational to do so. The risks of death from 3 miles of driving must just feel so small that we don’t notice them, even though we should?\n-UH-So now that you’ve thought about them, are you personally going to be unwilling to drive 3 miles to get a benefit as good as a nice day at the beach?\n-NUH-I’m not. But maybe I’m not being rational.\n-UH-I think your version of rationality is going to end up thinking you should basically never leave your house.\n-NUH-All right, let’s say it’s rational for Person 2 to prefer Button 2 to Button 3 - meaning that Button 2 really is \"better for\" them than Button 3. I still wouldn’t feel right pressing Button 2 instead of Button 3.\n-UH-Then you’re failing to be other-centered. We’re back to the “win-win” principle I mentioned above: if Button 2 is better than Button 3 for Person 2, and they're equally good for Person 1, and those are all the affected parties, you should prefer Button 2. \n-NUH-All right, let’s see where this goes. \nSay I accept your argument and say that Button 2 is better than Button 3. And since Button 1 is clearly better than Button 2 (as above), Button 1 is the best of the three. Then what?\n-UH-Then we’re almost done. Now let’s add a Button 1A. \nInstead of giving Person 1 a nice day at the beach, Button 1A has a 1 in 100 million chance of giving 100 million people just like Person 1 a nice day at the beach. It’s otherwise identical to Button 1. \nI claim Button 1 and Button 1A are equivalently good, and hence Button 1A is also better than Button 2 and Button 3 in this case. Would you agree with that?\nButton\nResult\nButton 1\n \nPerson 1 gets a nice day at the beach (modest benefit)\n \nButton 1A\n \nThere's a 1-in-100-million chance that 100 million people each get a nice day at the beach (modest benefit)\n \nButton 2\n \nPerson 2 gets a few hours at the beach (smaller benefit)\n \nButton 3\n \nPerson 2 avoids a 1-in-100-million chance of a horrible tragedy (say, dying senselessly in the prime of their life)\n \n Claim: Button 1A is as good as Button 1; Button 1 is better than Button 2; Button 2 is better than Button 3; therefore Button 1A is better than Button 3.\n-NUH-I’m not sure - is there a particular reason I should think that “a 1 in 100 million chance of giving 100 million people just like Person 1 a nice day at the beach” is equally good compared to “giving Person 1 a nice day at the beach”?\n-UH-\nWell, imagine this from the perspective of 100 million people who all could be affected by Button 1A. \nYou can imagine that none of the 100 million know which of them will be “Person 1,” and think of this as: “Button 1 gives one person out of the 100 million a nice day at the beach; Button 1A has a 1 in 100 million chance of giving all 100 million people a nice day at the beach. From the perspective of any particular person, who doesn’t know whether they’re in the Person 1 position or not, either button means the same thing: a 1 in 100 million chance that that person, in particular, will have a nice day at the beach.”\n-NUH-That one was a little convoluted, but let’s say that I do think Button 1 and Button 1A are equivalently good - now what?\n-UH-OK, so now we’ve established that a 1 in 100 million chance of “100 million people get a nice day at the beach” can outweigh a 1 in 100 million chance of “1 person dying senselessly in the prime of their life.” If you get rid of the 1 in 100 million probability on both sides, we see that 100 million people getting a nice day at the beach can outweigh 1 person dying senselessly in the prime of their life.\nAnother way of thinking about this: two people are sitting behind a veil of ignorance such that each person doesn’t know whether they’ll end up being Person 1 or Person 2. Let's further assume that these people are, while behind the veil of ignorance, \"rational\" and \"thinking clearly\" such that whatever they prefer is in fact better for them (this is basically a simplification that makes the situation easier to think about).\nIn this case, I expect both people would prefer that you choose Button 1 or 1A, rather than Button 2 or 3. Because both would prefer a 50% chance of turning out to be Person 1 and getting a nice day at the beach (Button 1), rather than a 50% chance of turning out to be Person 2 and getting only a few hours at the beach (Button 2) - or worse, a mere prevention of a 1 in 100 million chance of dying senselessly in their prime (Button 3).\nFor the rest of this dialogue, I’ll be using the “veil of ignorance” metaphor to make my arguments because it’s quicker and simpler, but every time I use it, you could also construct a similar argument to the other (first) one I gave.2\nYou can do this same exercise for any “modest benefit” you want - a nice day at the beach, a single minute of pleasure, etc. And you can also swap in whatever “large tragic” cost you want, incorporating any elements you like of injustices and indignities suffered by Person 2. The numbers will change, but there will be some number where the argument carries - because for a low enough probability, it’s worth a risk of something arbitrarily horrible for a modest benefit.\n-NUH-Arbitrarily horrible? What about being tortured to death?\n-UH-I mean, you could get kidnapped off the street and tortured to death, and there are lots of things to reduce the risk of that that you could do and are probably not doing. So: no matter how bad something is, I think you’d correctly take some small (not even astronomically small) risk of it for some modest benefit. And that, plus the \"win-win\" principle, leads to the point I’ve been arguing.\n-NUH-I want to come back to my earlier statement about morality. There is a lot in morality. We haven’t talked about when it’s right and wrong to lie, what I owe someone when I’ve hurt them, and many other things.\n-UH-That's true - but we’ve established that whatever wrong you’re worried about committing, it’s worth it if you help a large enough number of persons achieve a modest benefit. \n-NUH-You sound like the villain of a superhero movie right now. Surely that’s a hint that you’ve gone wrong somewhere? “The ends justify the means?”\n-UH-In practice, I endorse avoiding “ends justify the means” thinking, at least in complex situations like you see in the movies. That’s a different matter from what, in principle, makes an action right.\nI’m not saying the many other moral principles and debates are irrelevant. For example, lying might tend to hurt people, including indirectly (e.g., it might damage the social order, which might lead over time to more people having worse lives). It might be impossible in practice to understand all the consequences of our actions, such that we need rules of thumb like “don’t lie.” But ultimately, as long as you’re accepting the \"win-win\" principle, there’s no wrong you can’t justify if it truly helps enough persons. And as we’ll see, some situations present pretty simple opportunities to help pretty huge numbers of persons.\n-NUH-That’s a very interesting explanation of why your supervillain-like statements don’t make you a supervillain, but I wouldn’t say it’s conclusive or super satisfying. Shouldn’t you feel nervous about the way you’re going off the rails here? This is just not what most people recognize as morality.\n-UH-I think that “what most people recognize as morality” is a mix of things, many of which have little or nothing to do with making the world better for others. Conventional morality shifts with the winds, and it has often included things like “homosexuality is immoral” and “slavery is fine” and God knows what all else.\nThere are lots of moral rules I might follow to fit in or just to not feel bad about myself … but when it comes to the things I do to make the world a better place for others, the implications of the \"win-win\" principle seem clear and rigorously provable, as well as just intuitive. \nWe’re just using math normally and saying that if you care at all about benefiting one person, you should care hugely about benefiting huge numbers of persons. \nAs a more minor point, it’s arguably only fairly recently in history that people like you and I have had the opportunity to help massive numbers of persons. The technological ability to send money anywhere, and quantitatively analyze how much good it’s doing, combined with massive population and inequality (with us on the privileged end of the inequality), is a pretty recent phenomenon. So I don’t think the principle we’re debating has necessarily had much chance to come up in the past anyway.\n-NUH-I just pull back from it all and I envision a world where we’ve both got a lot of money to give. And I’m dividing my giving between supporting my local community, and fighting systematic inequities and injustices in my country, and alleviating extreme suffering … and you found some charity that can just plow it all into getting people nice days at the beach at a very cost-effective rate. And I’m thinking, “What happened to you, how did you lose sight of basic moral intuitions and turn all of that money into a bunch of beach?”\n-UH-And in that world, I’m thinking: \n“I’m doing what everyone would want me to do, if we all got together and reasoned it out under the veil of ignorance. If you assembled all the world’s persons and asked them to choose whether I should give like you’re giving or give like I’m giving, and each person didn’t know whether they were going to be one of the persons suffering from injustices that you’re fighting or one of the (far more numerous) persons enjoying a day at the beach that I’m making possible, everyone would say my philanthropy was the one they wanted to see more of. I am benefiting others - what are you doing?\n“You’re scratching your own moral-seeming itches. You’re making yourself feel good. You’re paying down imagined debts that you think you owe, you’re being partial toward people around you. Ultimately, that is, your philanthropy is about you and how you feel and what you owe and what you symbolize. My philanthropy is about giving other people more of the lives they’d choose.\n“My giving is unintuitive, and it's not always 'feel-good,' but it's truly other-centered. Ultimately, I'll take that trade.”\nFor further reading on this topic, see Other-Centered Ethics and Harsanyi’s Aggregation Theorem\nPart 2: linear giving\n-UH-Now I'm going to argue this:\nIt’s plausible (probably not strictly true, but definitely “allowed” philosophically) that: if you had $1 billion to spend, you should spend it all on delivering basic global health interventions in developing countries, ala GiveWell’s top charities, before you spend any of it on other things aimed at benefiting humans in the near term.\n-NUH-Even if there is some comically large number of modest benefits that could make up for a great harm, it doesn’t at all follow that today, in the world we live in, we should be funding some particular sort of charity. So you’ve got some work to do.\n-UH-Well, this dialogue is about philosophy - we’re not going to try to really get into the details of how one charity compares to another. Instead, the main focus of this section will be about whether it’s OK to give exclusively to “one sort of thing.”\nSo I’ll take one hypothetical (but IMO ballpark realistic) example of how delivering basic global health interventions compares to another kind of charity, and assume that that comparison is pretty representative of most comparisons you could make with relatively-easily-available donation opportunities. We’re going to assume that the numbers work out as I say, and argue about what that means about the right way to spend $1 billion. \n-NUH-OK.\n-UH-So let’s say you have $1 billion to split between two kinds of interventions:\n1 - Delivering bednets to prevent malaria. For every $2000 you spend on this, you avert one child’s death from malaria.\n2 - Supporting improvements in US schools in disadvantaged areas. For every $15,000 you spend on this, one student gets a much better education for all of grades K-12.3 For concreteness, let’s say the improved education is about as good as “graduating high school instead of failing to do so” and that it leads to increased earnings of about $10,000 per year for the rest of each student’s life.4\nFinally, let’s ignore flow-through effects (the fact that helping someone enables them to help others). There could be flow-through effects from either of these, and it isn’t immediately clear which is the bigger deal. We'll talk more about the long-run impacts of our giving in a future dialogue. For now let’s simplify and talk about the direct effects I outlined above.\nWell, here’s my claim - the “averting a death” benefit is better and cheaper than the “better education” benefit. So you should keep going with option 1 until it isn’t available anymore. If it can absorb $1 billion at $2000 per death averted, you should put all $1 billion there.\nAnd if it turns out that all other ways of helping humans in the near term are similarly inferior to the straightforward global health interventions, then similar logic applies, and you should spend all $1 billion on straightforward global health interventions before spending a penny on anything else.\n-NUH-What do you mean by “better” in this context, i.e., in what sense is averting a death “better” than giving someone a better education?\n-UH-It means that most people would benefit more from having their premature death averted than from having a better education. Or if it’s too weird to think about that comparison, it means most people would benefit more from avoiding a 10% chance of premature death than getting a 10% chance of a better education. So behind the veil of ignorance, if people were deciding where you should give without knowing whether they’d end up as beneficiaries of the education programs or as (more numerous) beneficiaries of the bednets, they’d ~all ask you to spend all of the $1 billion on the bednets.\n-NUH-It’s pretty clear in this case that the bednet intervention indeed has something going for it that the education one doesn’t, that “money goes farther there” in some sense. The thing that’s bugging me is the idea of giving all $1 billion there. \nLet’s start with the fact that if I were investing my money, I wouldn’t put it all into one stock. And if I were spending money on myself, I wouldn’t be like “Bananas are the best value for money of all the things I buy, so I’m spending all my money on bananas.” Do you think I should? How far are you diverging from conventional wisdom here about “not putting your eggs in one basket?”\n-UH-I think it’s reasonable to diversify your investments and your personal spending. The reason I think it’s reasonable is essentially because of diminishing marginal returns:\nThe first banana you buy is a great deal, but your 100th banana of the week isn’t. Rent, food, entertainment, etc. are all categories where you gain a lot by spending something instead of nothing, but then you benefit more slowly as you spend more. So if we were doing cost-effectiveness calculations on everything, we’d observe phenomena like “Before I’ve bought any food for the week, food is the best value-for-money I can get; after I’ve bought some food, entertainment is the best value-for-money I can get; etc.” The math would actually justify diversifying.\nInvesting is similar, because money itself has diminishing returns. Losing half of your savings would hurt you, much more than gaining that same dollar amount would help you. By diversifying, you reduce both your upside and your downside, and that’s good for your goals.\nBut in this hypothetical, you can spend the entire $1 billion on charity without diminishing marginal returns. It’s $2000 per death averted, all the way down.\nOf course, it would be bad if everyone in the world tried to give to this same charity - they would, in fact, hit diminishing returns. When it comes to helping the world, the basic principles of diversification still apply, but they apply to the whole world’s collective “charity portfolio” rather than yours. If the world “portfolio” has $10 billion less in global health than it should, and you have $1 billion to spend, it’s reasonable for you to put all $1 billion toward correcting that allocation.\n-NUH-But some degree of “risk aversion” still applies - the idea of giving all to one intervention that turns out to not work out the way I thought it did, and thus having zero impact, scares me.\n-UH-\nIt scares you, but if all the potential beneficiaries were discussing how they wanted you to donate, it shouldn’t particularly scare them. Why would they care if your particular $1 billion was guaranteed to help N people, instead of maybe-helping 2N people and maybe-helping zero? From the perspective of one of the 2N people, they have about a 50% chance of being helped either way.\nRisk aversion is a fundamentally selfish impulse - it makes sense in the context of personal spending, but in the context of donating, it’s just another way of making this about the donor.\n-NUH-Well, my thinking isn’t just about “risk aversion,” it’s also about the specific nature of the charities we’re talking about.\nI live in an unfair society. A key area where things are unfair is that some of us are raised in safe, wealthy neighborhoods and go to excellent schools, while others experience the opposite. As someone who has benefited from this unfair setup, I have a chance to make a small difference pushing things in the opposite direction. If I find myself blessed with $1 billion to give, shouldn’t I spend some of it that way?\n-UH-That doesn’t sound like an argument about what the people you’re trying to help would (in the “veil of ignorance” sense I’ve been using) prefer. It sounds more like you’re trying to “show you care” about a number of things. \nPerhaps, to you, “$100 million to ten different things” sounds about ten times as good as “$1 billion to one thing” - you don’t intuitively feel the difference between $1 billion and $100 million, due to scope neglect.\n-NUH-I’m not sure what it’s about. Some of it is that I feel a number of “debts” for ways in which I’ve been unfairly privileged, which I acknowledge is about my own debts rather than others’ preferences. \nFor whatever reason, it feels exceedingly strange to plow all of $1 billion into a single sort of charity, while there are injustices all around me that I ignore.\n-UH-There are a number of responses I might give here, such as:\nOne reason the bednets have higher value-for-money is that they’re more neglected in some sense. If everyone reasoned the way I’m reasoning, everyone would have a bednet by now, and the world would have moved on to other interventions.\nNot all problems are equally fit for all kinds of solutions. Lack of bednets is a problem that’s very responsive to money. To improve education, you might be more effective working in the field yourself. \nI think you’re kind of imagining that “giving all $1 billion to bednets” means “the problem of education gets totally ignored.” But you aren’t the world. Instead, imagine yourself as part of a large society of people working on all the problems you’re concerned about, some getting more attention than others. By giving all $1 billion to bednets, you’re just deciding that’s the best thing you can do to do your part.\nBut I think those responses would miss the point of this particular dialogue, which is about utilitarianism. So I’ll instead repeat my talking point from last time: if your giving doesn’t conform to what the beneficiaries would want under the veil of ignorance, then it has to be in some sense about you rather than about them. You have an impulse to feel that you’re “doing your part” on multiple causes - but that impulse is about your feelings of guilt, debt, etc., not about how to help others. \nFor further reading on diversification in giving, see:\nGiving Your All, a short article against diversification\nHow Many Causes Should You Give To? - briefly explores arguments for and against diversification\nWorldview diversification - gives some arguments for diversification, but mostly in the context of very large amounts of giving and mostly for practical reasons\nHopefully this has given a sense of the headspace and motivations behind some of the stranger things utilitarianism tells one to do. As noted above, I ultimately have very mixed feelings on the whole matter, and NUH will have some stronger objections in future pieces (but the next couple of dialogues will continue to defend some of the strange views motivated by the attempt to have future-proof ethics).\nNext in series: Debating myself on whether “extra lives lived” are as good as “deaths prevented”Footnotes\nhttps://en.wikipedia.org/wiki/Micromort#Travel states that traveling 230-250 miles in a car gives a 1 in 1 million chance of death by accident, implying that traveling 2.3-2.5 miles would give a 1 in 100 million chance. ↩\n Note that the \"veil of ignorance\" refers to what a person would choose in a hypothetical situation, whereas most of the dialogue up to this point has used the language of what is better for a person. These can be distinct since people might not always want what's best for them. I'm using the veil of ignorance as a simplification; we should generally assume that people behind the veil of ignorance are being rational, i.e., choosing what is actually best for them. What ultimately matters is what's best for someone, not what they prefer, and that's what I've talked about throughout the early part of the dialogue. ↩\n I think total costs per student tend to be about $10-20k per year; here I’m assuming you can “significantly improve” someone’s education with well-targeted interventions for $1k per year. Based on my recollections of education research I think I’m more likely to be overstating than understating the available impact here. ↩\n According to this page, people who lack a HS diploma earn about $592 per week. If we assume that getting the diploma brings them up to the overall median earnings of $969 per week, that implies $377 per week in additional earnings, or a bit under $20k per year. I think this is a very aggressive way of estimating the value of a high school diploma, since graduating high school likely is correlated with lots of other things that predict high earnings (such as being from a higher-socioeconomic-status family), so I cut it in half. This isn’t meant to be a real estimate of the value of a high-school diploma; it’s still meant to be on the aggressive/generous side, because I’ll still be claiming the other benefit is better.  ↩\n", "url": "https://www.cold-takes.com/defending-one-dimensional-ethics/", "title": "Defending One-Dimensional Ethics", "source": "cold.takes", "source_type": "blog", "date_published": "2022-02-15", "id": "0ff9c2b30b37f924f70380705cfd783a"} -{"text": "\nPeople who dream of being like the great innovators in history often try working in the same fields - physics for people who dream of being like Einstein, biology for people who dream of being like Darwin, etc. \nBut this seems backwards to me. To be as revolutionary as these folks were, it wasn’t enough to be smart and creative. As I’ve argued previously, it helped an awful lot to be in a field that wasn’t too crowded or well-established. So if you’re in a prestigious field with a well-known career track and tons of competition, you’re lacking one of the essential ingredients right off the bat.\nHere are a few riffs on that theme.\nThe next Einstein probably won’t study physics, and maybe won’t study any academic science. Einstein's theory of relativity was prompted by a puzzle raised 18 years earlier. By contrast, a lot of today’s physics is trying to solve puzzles that are many decades old (e.g.) and have been subjected to a massive, well-funded attack from legions of scientists armed with billions of dollars’ worth of experimental equipment. I don’t think any patent clerk would have a prayer at competing with professional physicists today - I’m thinking today’s problems are just harder. And the new theory that eventually resolves today’s challenges probably won’t be as cool or important as what Einstein came up with, either.\nMaybe today’s Einstein is revolutionizing our ability to understand the world we’re in, but in some new way that doesn’t belong to a well-established field. Maybe they’re studying a weird, low-prestige question about the nature of our reality, like anthropic reasoning. Or maybe they’re Philip Tetlock, more-or-less inventing a field that turbocharges our ability to predict the future.\nThe next Babe Ruth probably won’t play baseball. Mike Trout is probably better than Babe Ruth in every way,2 and you probably haven't heard of him.\nMaybe today’s Babe Ruth is someone who plays an initially less popular sport, and - like Babe Ruth - plays it like it’s never been played before, transforms it, and transcends it by personally appealing to more people than the entire sport does minus them. Like Tiger Woods, or what Ronda Rousey looked at one point like she was on track to be.\nThe next Beethoven or Shakespeare probably won’t write orchestral music or plays. Firstly because those formats may well be significantly “tapped out,” and secondly because (unlike at the time) they aren’t the way to reach the biggest audience. \nWe’re probably in the middle stages of a “TV golden age” where new business models have made it possible to create more cohesive, intellectual shows. So maybe the next Beethoven is a TV showrunner. It doesn’t seem like anyone has really turned video games into a respected art form yet - maybe the next Beethoven will come along shortly after that happens.\nOr maybe the next Beethoven or Shakespeare doesn’t do anything today that looks like “art” at all. Maybe they do something else that reaches and inspires huge numbers of people. Maybe they’re about to change web design forever, or revolutionize advertising. Maybe it’s #1 TikTok user Charli d’Amelio, and maybe there will be whole academic fields devoted to studying the nuances of her work someday, marveling at the fact that no one racks up that number of followers anymore. \nThe next Neil Armstrong probably won’t be an astronaut. It was a big deal to set foot outside of Earth for the first time ever. You can only do that once. Maybe we’ll feel the same sense of excitement and heroism about the first person to step on Mars, but I doubt it.\nI don’t really have any idea what kind of person could fill role in the near future. Maybe no one. I definitely don’t think that our lack of return trip to the Moon is any kind of a societal “failure.”\nThe next Nick Bostrom probably won’t be a “crucial considerations” hunter. Forgive me for the “inside baseball” digression (and feel free to skip to the next one), but effective altruism is an area I’m especially familiar with. \nNick Bostrom is known for revolutionizing effective altruism with his arguments about the value of reducing existential risks, the risk of misaligned AI, and a number of other topics. These are sometimes referred to as crucial considerations: insights that can change one’s goals entirely. But nearly all of these insights came more than 10 years ago, when effective altruism didn’t have a name and the number of people thinking about related topics was extremely small. Since then there have been no comparable “crucial considerations” identified by anyone, including Bostrom himself. \nWe shouldn’t assume that we’ve found the most important cause. But if (as I believe) this century is likely to see the development of AI that determines the course of civilization for billions of years to come ... maybe we shouldn’t rule it out either. Maybe the next Bostrom is just whoever does the most to improve our collective picture of how to do the most good today. Rather than revolutionizing what our goals even are, maybe this is just going to be someone who makes a lot of progress on the AI alignment problem.\nAnd what about the next “great figure who can’t be compared to anyone who came before?” This is what I’m most excited about! Whoever they are, I’d guess that they’re asking questions that aren’t already on everyone’s lips, solving problems that don’t have century-old institutions devoted to them, and generally aren’t following any established career track. \nI doubt they are an “artist” or a “scientist” at all. If you can recognize someone as an artist or scientist, they’re probably in some tradition with a long history, and a lot of existing interest, and plenty of mentorship opportunities and well-defined goals.3\nThey’re probably doing something they can explain to their extended family without much awkwardness or confusion! If there’s one bet I’d make about where the most legendary breakthroughs will come from, it’s that they won’t come from fields like that.Footnotes\n(Footnote deleted) ↩\n He's even almost as good just looking at raw statistics (Mike Trout has a career average WAR of 9.6 per 162 games; Babe Ruth's is 10.5), which means he dominates his far superior peers almost as much as Babe Ruth dominated his. ↩\n I’m not saying this is how art and science have always been - just that that’s how they are today. ↩\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/to-match-the-greats-dont-follow-in-their-footsteps/", "title": "To Match the Greats, Don’t Follow In Their Footsteps", "source": "cold.takes", "source_type": "blog", "date_published": "2022-02-11", "id": "feb8d8da8f36fc09f02d749f59e7fdd1"} -{"text": "\nIn Future-Proof Ethics, I talked about trying to \"consistently [make] ethical decisions that look better, with hindsight after a great deal of moral progress, than what our peer-trained intuitions tell us to do.\" \nI cited Kwame Anthony Appiah's comment that \"common-sense\" ethics has endorsed horrible things in the past (such as slavery and banning homosexuality), and his question of whether we, today, can do better by the standards of the future.\nA common objection to this piece was along the lines of:\nWho cares how future generations look back on me? They'll have lots of views that are different from mine, just as I have lots of views that are different from what was common in the past. They'll judge me harshly, just as I judge people in the past harshly. But none of this is about moral progress - it's just about random changes. \nSure, today we're glad that homosexuality is more accepted, and we think of that as progress. But that's just circular - it's judging the past by the standards of today, and concluding that today is better.\nInterestingly, I think there were two versions of this objection: what I'd call the \"moral realist\" version and the \"moral super-anti-realist\" version.\nThe moral realist thinks that there are objective moral truths. Their attitude is: \"I don't care what future people think of my morality (or what I think after more reflection?1) - I just care what's objectively right.\"\nThe moral super-anti-realist thinks that morality is strictly subjective, and that there's just nothing interesting to say about how to \"improve\" morality. Their attitude is: \"I don't care what future people think of my morality, I just care what's moral by the arbitrary standards of the time I live in.\"\nIn contrast to these positions, I would label myself as a \"moral quasi-realist\": I don't think morality is objective, but I still care greatly about what a future Holden - one who has reflected more, learned more, etc. - would think about the ethical choices I'm making today. (Similarly, I believe that taste in art is subjective, but I also believe there are meaningful ways of talking about \"great art\" and \"highbrow vs. lowbrow taste,\" and I personally have a mild interest in cultivating more highbrow taste for myself.)\nTalking about \"moral progress\" is intended to encompass both the \"moral quasi-realist\" and the \"moral realist\" positions, while ignoring the \"moral super-anti-realist\" position because I think that one is silly. The reason I went with the \"future-proof ethics\" framing is because it gives a motivation for moral reasoning that I think is compatible with believing in objective moral truth, or not - as long as you believe in some meaningful version of progress.\nBy \"moral progress,\" I don't just mean \"Whatever changes in commonly accepted morality happen to take place in the future.\" I mean specifically to point to the changes that you (whoever is reading this) consider to be progress, whether because they are honing in on objective truth or resulting from better knowledge and reasoning or for any other good reason. Future-proof ethics is about making ethical choices that will still look good after your and/or society's ethics have \"improved\" (not just \"changed\").\nI expect most readers - whether they believe in objective moral truth or not - to accept that there are some moral changes that count as progress. I think the ones I excerpted from Appiah's piece are good examples that I expect most readers to accept and resonate with. \nIn particular, I expect some readers to come in with an initial position of \"Moral tastes are just subjective, there's nothing worth debating about them,\" and then encounter examples like homosexuality becoming more accepted over time and say \"Hmm ... I have to admit that one really seems like some sort of meaningful progress. Perhaps there will also be further progress in the future that I care about. And perhaps I can get ahead of that progress via the sorts of ideas discussed in Future-Proof Ethics. Gosh, what an interesting blog!\"\nHowever, if people encounter those examples and say \"Shrug, I think things like increasing acceptance of homosexuality are just random changes, and I'm not motivated to 'future-proof' my ethics against future changes of similar general character,\" then I think we just have a deep disagreement, and I don't expect my \"future-proof ethics\" series to be relevant for such readers. To them I say: sorry, I'll get back to other topics reasonably soon!\nNotes\n I suspect the moral realists making this objection just missed the part of my piece stating:\"Moral progress\" here refers to both societal progress and personal progress. I expect some readers will be very motivated by something like \"Making ethical decisions that I will later approve of, after I've done more thinking and learning,\" while others will be more motivated by something like \"Making ethical decisions that future generations won't find abhorrent.\"\n But maybe they saw it, and just don't think \"personal progress\" matters either, only objective moral truth. ↩\nFor email filter: florpschmop", "url": "https://www.cold-takes.com/moral-progress-vs-the-simple-passage-of-time/", "title": "\"Moral progress\" vs. the simple passage of time", "source": "cold.takes", "source_type": "blog", "date_published": "2022-02-08", "id": "9a47b5fa84505a1fd8255be3703ad0ce"} -{"text": "\nI posted some later content on this piece - responding to reader takes - here.\nMy Where’s Today’s Beethoven analysis involved finding a lot of “most critically acclaimed music/TV/film/books” lists, and charting patterns in them. While doing this, something I was very surprised to learn was that some of the Beach Boys’s music is incredibly acclaimed, respected, and thought of as genius. \nIn particular, their 1966 album Pet Sounds is the #1 most acclaimed album of all time, and their followup single Good Vibrations is the the 4th-most acclaimed song of all time. As the Wikipedia pages show, this isn’t some fluke of the rankings, e.g.:Promoted there as \"the most progressive pop album ever\", Pet Sounds garnered recognition for its ambitious production, sophisticated music, and emotional lyric content. It is considered to be among the most influential albums in music history … \nPet Sounds revolutionized the field of music production and the role of producers within the music industry, introduced novel approaches to orchestration, chord voicings, and structural harmonies, and furthered the cultural legitimization of popular music, a greater public appreciation for albums, the use of recording studios as an instrument, and the development of psychedelic music and progressive/art rock ...\nIt has topped several critics' and musicians' polls for the best album of all time, including those published by NME, Mojo, Uncut, and The Times. In 2004, it was inducted into the National Recording Registry by the Library of Congress … \nPet Sounds is evaluated as \"one of the most innovative recordings in rock\" and as the work that \"elevated Brian Wilson from talented bandleader to studio genius\".[118] Music historian Luis Sanchez viewed the album as \"the score to a film about what rock music doesn't have to be …\"\nAfter learning this, I tried listening to Pet Sounds several times, and was not able to figure out what the hoopla is about. It sounds like a cheesy pop album. \nI was unable to connect the experience of listening to Wouldn't It Be Nice to this statement: “it was out to eclipse ... previous sonic soap operas, to transform the subject's sappy sentiments with a God-like grace so that the song would become a veritable pocket symphony.\" (The term “pocket symphony” seems to come up a surprising amount for Beach Boys songs that have, as far as I can tell, 3 or so themes like most pop songs.)\nGod Only Knows is by far my favorite track on the album, but “often praised as one of the greatest songs ever written”? Hmm.\nAfter struggling with this for a while, I had a stroke of inspiration. I decided to listen to all of the Beach Boys albums in order.1 (The things I do to understand the world better! Here’s my playlist if you want to try it.) \nAnd indeed, after ~4 hours of the simplest, sappiest music imaginable, I could sort of hear Pet Sounds as something comparatively deeper and richer - more challenging, more complex, more contemplative, more cohesive.2 In particular, a lot of the cheesy-seeming instrumental arrangements now sounded like some badly needed variety; maybe the at-the-time version of this reaction would’ve been “What creative instrumentation, it’s like a symphony!”\nSo maybe Pet Sounds was a remarkable new sound at the time? \nBut, I think, only if you had a really fanatical commitment to only listening to the poppiest of pop music. Pet Sounds came out more than a year after legendary jazz album A Love Supreme! I don't want to get carried away about what my subjective taste says, but … even if A Love Supreme isn’t your cup of tea, I’d guess you’ll think it’s a great deal more complex, cohesive, impressive, and interesting in just about every way (other than the lack of prominent \"studio effects\") than Pet Sounds. And it's not even clearly less accessible - looks like they sold a similar number of copies?3\nA Love Supreme is ranked well below Pet Sounds on the Acclaimed Music aggregated list, as well as on the Rolling Stone list. Reading what the critics say about Pet Sounds, after listening to both works, is quite a trip - it reminds me of reading a parent's description of their child's art, with the most generous possible interpretation of the \"genius\" of every humdrum thing in there. \nThis feels roughly as close as I’m going to get to a smoking gun that rock music critics are living in a strange, strange world. If that's right, I think we also have to question whether classical music critics - and our own minds - are playing similar strange tricks on us when they assert with such conviction that Beethoven's music is unparalleled.\nOr maybe I’m the one hearing (or failing to hear) things? I would love to hear from any readers who could give me any idea of what kind of headspace I’d have to inhabit to find Pet Sounds more impressive, enjoyable, or [any positive adjective]4 than A Love Supreme!\nI posted some later content on this piece - responding to reader takes - here.\nFor email filter: florpschmopFootnotes\n While working on other things. I did not “just sit and listen” to all five hours of this. ↩\n I definitely could’ve been imagining it. ↩\nPet Sounds: “total sales were estimated at around 500,000 units.” A Love Supreme: “By 1970, it had sold about 500,000 copies.” These aren’t necessarily the same time frame, but I’m taking it as evidence that they’re in the ballpark.  ↩\n To be clear, I think some Beach Boys music is at least more “easy to enjoy” or “accessible” than A Love Supreme, even if it’s not as complex or interesting. But Pet Sounds doesn’t seem to accomplish either - it’s weird and slow by Beach Boys standards, and as noted above, didn’t sell better (at least probably not by much) than A Love Supreme, despite coming from a much more popular band.\n FWIW my favorite Beach Boys album is Beach Boys Party!, a really weird album that is mostly covers (of the Beatles, Bob Dylan and others) with a chattering crowd dubbed over it, kind of like a live performance at a party, but just … super weird. I think it revolutionized music. ↩\n", "url": "https://www.cold-takes.com/investigating-musical-genius-by-listening-to-the-beach-boys-a-lot/", "title": "Investigating musical genius by listening to the Beach Boys a lot", "source": "cold.takes", "source_type": "blog", "date_published": "2022-02-03", "id": "2811d5e6687617917a2c9e9fd333dc04"} -{"text": "Ethics based on \"common sense\" seems to have a horrible track record.\nThat is: simply going with our intuitions and societal norms has, in the past, meant endorsing all kinds of insanity. To quote an article by Kwame Anthony Appiah:\nOnce, pretty much everywhere, beating your wife and children was regarded as a father's duty, homosexuality was a hanging offense, and waterboarding was approved -- in fact, invented -- by the Catholic Church. Through the middle of the 19th century, the United States and other nations in the Americas condoned plantation slavery. Many of our grandparents were born in states where women were forbidden to vote. And well into the 20th century, lynch mobs in this country stripped, tortured, hanged and burned human beings at picnics.\n Looking back at such horrors, it is easy to ask: What were people thinking?\n Yet, the chances are that our own descendants will ask the same question, with the same incomprehension, about some of our practices today.\n Is there a way to guess which ones?\nThis post kicks off a series on the approach to ethics that I think gives us our best chance to be \"ahead of the curve:\" consistently making ethical decisions that look better, with hindsight after a great deal of moral progress, than what our peer-trained intuitions tell us to do.\n\"Moral progress\" here refers to both societal progress and personal progress. I expect some readers will be very motivated by something like \"Making ethical decisions that I will later approve of, after I've done more thinking and learning,\" while others will be more motivated by something like \"Making ethical decisions that future generations won't find abhorrent.\" (More on \"moral progress\" in this follow-up piece.)\nBeing \"future-proof\" isn't necessarily the end-all be-all of an ethical system. I tend to \"compromise\" between the ethics I'll be describing here - which is ambitious, theoretical, and radical - and more \"common-sense\"/intuitive approaches to ethics that are more anchored to conventional wisdom and the \"water I swim in.\"\nBut if I simply didn't engage in philosophy at all, and didn't try to understand and incorporate \"future-proof ethics\" into my thinking, I think that would be a big mistake - one that would lead to a lot of other moral mistakes, at least from the perspective of a possible future world (or a possible Holden) that has seen a lot of moral progress. \nIndeed, I think some of the best opportunities to do good in the world come from working on issues that aren't yet widely recognized as huge moral issues of our time.\nFor this reason, I think the state of \"future-proof ethics\" is among the most important topics out there, especially for people interested in making a positive difference to the world on very long timescales. Understanding this topic can also make it easier to see where some of the unusual views about ethics in the effective altruism community come from: that we should more highly prioritize the welfare of animals, potentially even insects, and most of all, future generations.\nWith that said, some of my thinking on this topic can get somewhat deep into the weeds of philosophy. So I am putting up a lot of the underlying content for this series on the EA Forum alone, and the pieces that appear on Cold Takes will try to stick to the high-level points and big picture.\nOutline of the rest of this piece:\nMost people's default approach to ethics seems to rely on \"common sense\"/intuitions influenced by peers. If we want to be \"ahead of the curve,\" we probably need a different approach. More\nThe most credible candidate for a future-proof ethical system, to my knowledge, rests on three basic pillars: \nSystemization: seeking an ethical system based on consistently applying fundamental principles, rather than handling each decision with case-specific intuitions. More\nThin utilitarianism: prioritizing the \"greatest good for the greatest number,\" while not necessarily buying into all the views traditionally associated with utilitarianism. More\nSentientism: counting anyone or anything with the capacity for pleasure and suffering - whether an animal, a reinforcement learner (a type of AI), etc. - as a \"person\" for ethical purposes. More\nCombining these three pillars yields a number of unusual, even uncomfortable views about ethics. I feel this discomfort and don't unreservedly endorse this approach to ethics. But I do find it powerful and intriguing. More\nAn appendix explains why I think other well-known ethical theories don't provide the same \"future-proof\" hopes; another appendix notes some debates about utilitarianism that I am not engaging in here.\nLater in this series, I will:\nUse a series of dialogues to illustrate how specific, unusual ethical views fit into the \"future-proof\" aspiration.\nSummarize what I see as the biggest weaknesses of \"future-proof ethics.\"\nDiscuss how to compromise between \"future-proof ethics\" and \"common-sense\" ethics, drawing on the nascent literature about \"moral uncertainty.\"\n\"Common-sense\" ethics\nFor a sense of what I mean by a \"common-sense\" or \"intuitive\" approach to ethics, see this passage from a recent article on conservatism:\nRationalists put a lot of faith in “I think therefore I am”—the autonomous individual deconstructing problems step by logical step. Conservatives put a lot of faith in the latent wisdom that is passed down by generations, cultures, families, and institutions, and that shows up as a set of quick and ready intuitions about what to do in any situation. Brits don’t have to think about what to do at a crowded bus stop. They form a queue, guided by the cultural practices they have inherited ...\n In the right circumstances, people are motivated by the positive moral emotions—especially sympathy and benevolence, but also admiration, patriotism, charity, and loyalty. These moral sentiments move you to be outraged by cruelty, to care for your neighbor, to feel proper affection for your imperfect country. They motivate you to do the right thing.\n Your emotions can be trusted, the conservative believes, when they are cultivated rightly. “Reason is, and ought only to be the slave of the passions,” David Hume wrote in his Treatise of Human Nature. “The feelings on which people act are often superior to the arguments they employ,” the late neoconservative scholar James Q. Wilson wrote in The Moral Sense.\n The key phrase, of course, is cultivated rightly. A person who lived in a state of nature would be an unrecognizable creature ... If a person has not been trained by a community to tame [their] passions from within, then the state would have to continuously control [them] from without.\nI'm not sure \"conservative\" is the best descriptor for this general attitude toward ethics. My sense is that most people's default approach to ethics - including many people for whom \"conservative\" is the last label they'd want - has a lot in common with the above vision. Specifically: rather than picking some particular framework from academic philosophy such as \"consequentialism,\" \"deontology\" or \"virtue ethics,\" most people have an instinctive sense of right and wrong, which is \"cultivated\" by those around them. Their ethical intuitions can be swayed by specific arguments, but they're usually not aiming to have a complete or consistent ethical system.\nAs remarked above, this \"common sense\" (or perhaps more precisely, \"peer-cultivated intuitions\") approach has gone badly wrong many times in the past. Today's peer-cultivated intuitions are different from the past's, but as long as that's the basic method for deciding what's right, it seems one has the same basic risk of over-anchoring to \"what's normal and broadly accepted now,\" and not much hope of being \"ahead of the curve\" relative to one's peers.\nMost writings on philosophy are about comparing different \"systems\" or \"frameworks\" for ethics (e.g., consequentialism vs. deontology vs. virtue ethics). By contrast, this series focuses on the comparison between non-systematic, \"common-sense\" ethics and an alternative approach that aims to be more \"future-proof,\" at the cost of departing more from common sense.\nThree pillars of future-proof ethics\nSystemization\nWe're looking for a way of deciding what's right and wrong that doesn't just come down to \"X feels intuitively right\" and \"Y feels intuitively wrong.\" Systemization means: instead of judging each case individually, look for a small set of principles that we deeply believe in, and derive everything else from those.\nWhy would this help with \"future-proofing\"?\nOne way of putting it might be that:\n(A) Our ethical intuitions are sometimes \"good\" but sometimes \"distorted\" by e.g. biases toward helping people like us, or inability to process everything going on in a complex situation. \n(B) If we derive our views from a small number of intuitions, we can give these intuitions a lot of serious examination, and pick ones that seem unusually unlikely to be \"distorted.\" \n(C) Analogies to science and law also provide some case for systemization. Science seeks \"truth\" via systemization and law seeks \"fairness\" via systemization; these are both arguably analogous to what we are trying to do with future-proof ethics.\nA bit more detail on (A)-(C) follows.\n(A) Our ethical intuitions are sometimes \"good\" but sometimes \"distorted.\" Distortions might include:\nWhen our ethics are pulled toward what’s convenient for us to believe. For example, that one’s own nation/race/sex is superior to others, and that others’ interests can therefore be ignored or dismissed.\nWhen our ethics are pulled toward what’s fashionable and conventional in our community (which could be driven by others’ self-serving thinking). \nWhen we're instinctively repulsed by someone for any number of reasons, including that they’re just different from us, and we confuse this for intuitions that what they’re doing is wrong. For example, consider the large amount of historical and present intolerance for unconventional sexuality, gender identity, etc.\nWhen our intuitions become \"confused\" because they're fundamentally not good at dealing with complex situations. For example, we might have very poor intuitions about the impact of some policy change on the economy, and end up making judgments about such a policy in pretty random ways - like imagining a single person who would be harmed or helped by a policy.\nIt's very debatable what it means for an ethical view to be \"not distorted.\" Some people (“moral realists”) believe that there are literal ethical “truths,” while others (what I might call “moral quasi-realists,” including myself) believe that we are simply trying to find patterns in what ethical principles we would embrace if we were more thoughtful, informed, etc. But either way, the basic thinking is that some of our ethical intuitions are more reliable than others - more \"really about what is right\" and less tied to the prejudices of our time.\n(B) If we derive our views from a small number of intuitions, we can give these intuitions a lot of serious examination, and pick ones that seem unusually unlikely to be \"distorted.\" \nThe below sections will present two ideas - thin utilitarianism and sentientism - that:\nHave been subject to a lot of reflection and debate.\nCan be argued for based on very general principles about what it means for an action to be ethical. Different people will see different levels of appeal in these principles, but they do seem unusually unlikely to be contingent on conventions of our time.\nCan be used (together) to derive a large number of views about specific ethical decisions.\n(C) Analogies to science and law also provide some case for systemization.\nAnalogy to science. In science, it seems to be historically the case that aiming for a small, simple set of principles that generates lots of specific predictions has been a good rule,1 and an especially good way to be \"ahead of the curve\" in being able to understand things about the world.\nFor example, if you’re trying to predict when and how fast objects will fall, you can probably make pretty good gut-based guesses about relatively familiar situations (a rock thrown in the water, a vase knocked off a desk). But knowing the law of gravitation - a relatively simple equation that explains a lot of different phenomena - allows much more reliable predictions, especially about unfamiliar situations. \nAnalogy to law. Legal systems tend to aim for explicitness and consistency. Rather than asking judges to simply listen to both sides and \"do what feels right,\" legal systems tend to encourage being guided by a single set of rules, written down such that anyone can read it, applied as consistently as possible. This practice may increase the role of principles that have gotten lots of attention and debate, and decrease the role of judges' biases, moods, personal interests, etc.\nSystemization can be weird. It’s important to understand from the get-go that seeking an ethics based on “deep truth” rather than conventions of the time means we might end up with some very strange, initially uncomfortable-feeling ethical views. The rest of this series will present such uncomfortable-feeling views, and I think it’s important to process them with a spirit of “This sounds wild, but if I don’t want to be stuck with my raw intuitions and the standards of my time, I should seriously consider that this is where a more deeply true ethical system will end up taking me.”\nNext I'll go through two principles that, together, can be the basis of a lot of systemization: thin utilitarianism and sentientism.\nThin Utilitarianism\nI think one of the more remarkable, and unintuitive, findings in philosophy of ethics comes not from any philosopher but from the economist John Harsanyi. In a nutshell:\nLet’s start with a basic, appealing-seeming principle for ethics: that it should be other-centered. That is, my ethical system should be based as much as possible on the needs and wants of others, rather than on my personal preferences and personal goals.\nWhat I think Harsanyi’s work essentially shows is that if you’re determined to have an other-centered ethics, it pretty strongly looks like you should follow some form of utilitarianism, an ethical system based on the idea that we should (roughly speaking) always prioritize the greatest good for the greatest number of (ethically relevant) beings.\nThere are many forms of utilitarianism, which can lead to a variety of different approaches to ethics in practice. However, an inescapable property of all of them (by Harsanyi’s logic) is the need for consistent “ethical weights” by which any two benefits or harms can be compared. \nFor example, let’s say we are comparing two possible ways in which one might do good: (a) saving a child from drowning in a pond, or (b) helping a different child to get an education. \n \nMany people would be tempted to say you “can’t compare” these, or can’t choose between them. But according to utilitarianism, either (a) is exactly as valuable as (b), or it’s half as valuable (meaning that saving two children from drowning is as good as helping one child get an education), or it’s twice as valuable … or 100x as valuable, or 1/100 as valuable, but there has to be some consistent multiplier.\n \nAnd that, in turn, implies that for any two ways you can do good - even if one is very large (e.g. saving a life) and one very small (e.g. helping someone avoid a dust speck in their eye) - there is some number N such that N of the smaller benefit is more valuable than the larger benefit. In theory, any harm can be outweighed by something that benefits a large enough number of persons, even if it benefits them in a minor way.\nThe connections between these points - the steps by which one moves from “I want my ethics to focus on the needs and wants of others” to “I must use consistent moral weights, with all of the strange implications that involves” - is fairly complex, and I haven’t found a compact way of laying it out. I discuss it in detail in an Effective Altruism Forum post: Other-centered ethics and Harsanyi's Aggregation Theorem. I will also try to give a bit more of an intuition for it in the next piece.\nI'm using the term thin utilitarianism to point at a minimal version of utilitarianism that only accepts what I've outlined above: a commitment to consistent ethical weights, and a belief that any harm can be outweighed by a large enough number of minor benefits. There are a lot of other ideas commonly associated with utilitarianism that I don't mean to take on board here, particularly:\nThe \"hedonist\" theory of well-being: that \"helping someone\" is reducible to \"increasing someone's positive conscious experiences relative to negative conscious experiences.\" (Sentientism, discussed below, is a related but not identical idea.2)\nAn \"ends justify the means\" attitude. \nThere are a variety of ways one can argue against \"ends-justify-the-means\" style reasoning, even while committing to utilitarianism (here's one). \n \nIn general, I'm committed to some non-utilitarian personal codes of ethics, such as (to simplify) \"deceiving people is bad\" and \"keeping my word is good.\" I'm only interested in applying utilitarianism within particular domains (such as \"where should I donate?\") where it doesn't challenge these codes.\n \n(This applies to \"future-proof ethics\" generally, but I am noting it here in particular because I want to flag that my arguments for \"utilitarianism\" are not arguments for \"the ends justify the means.\")\nMore on \"thin utilitarianism\" at my EA Forum piece.\nSentientism\nTo the extent moral progress has occurred, a lot of it seems to have been about “expanding the moral circle”: coming to recognize the rights of people who had previously been treated as though their interests didn’t matter.\nIn The Expanding Circle, Peter Singer gives a striking discussion (see footnote)3 of how ancient Greeks seemed to dismiss/ignore the rights of people from neighboring city-states. More recently, people in power have often seemed to dismiss/ignore the rights of people from other nations, people with other ethnicities, and women and children (see quote above). These now look like among the biggest moral mistakes in history.\nIs there a way, today, to expand the circle all the way out as far as it should go? To articulate simple, fundamental principles that give us a complete guide to “who counts” as a person, such that we need to weigh their interests appropriately?\nSentientism is the main candidate I’m aware of for this goal. The idea is to focus on the capacity for pleasure and suffering (“sentience”): if you can experience pleasure and suffering, you count as a “person” for ethical purposes, even if you’re a farm animal or a digital person or a reinforcement learner. \nKey quote from 18th-century philosopher Jeremy Bentham: \"The question is not, Can they reason?, nor Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?\"\nA variation on sentientism would be to say that you count as a \"person\" if you experience \"conscious\" mental states at all.4 I don't know of a simple name for this idea, and for now I'm lumping it in with sentientism, as it is pretty similar for my purposes throughout this series.\nSentientism potentially represents a simple, fundamental principle (“the capacity for pleasure and suffering is what matters”) that can be used to generate a detailed guide to who counts ethically, and how much (in other words, what ethical weight should be given to their interests). Sentientism implies caring about all humans, regardless of sex, gender, ethnicity, nationality, etc., as well as potentially about animals, extraterrestrials, and others.\nPutting the pieces together\nCombining systemization, thin utilitarianism and sentientism results in an ethical attitude something like this:\nI want my ethics to be a consistent system derived from robust principles. When I notice a seeming contradiction between different ethical views of mine, this is a major problem.\nA good principle is that ethics should be about the needs and wants of others, rather than my personal preferences and personal goals. This ends up meaning that I need to judge every action by who benefits and who is harmed, and I need consistent “ethical weights” for weighing different benefits/harms against each other.\nWhen deciding how to weigh someone’s interests, the key question is the extent to which they’re sentient: capable of experiencing pleasure and suffering. \nCombining these principles can generate a lot of familiar ethical conclusions, such as “Don’t accept a major harm to someone for a minor benefit to someone else,” “Seek to redistribute wealth from people with more to people with less, since the latter benefit more,” and “Work toward a world with less suffering in it.” \nIt also generates some stranger-seeming conclusions, such as: “Animals may have significant capacity for pleasure and suffering, so I should assign a reasonably high ‘ethical weight’ to them. And since billions of animals are being horribly treated on factory farms, the value of reducing harm from factory farming could be enormous - to the point where it could be more important than many other issues that feel intuitively more compelling.”\nThe strange conclusions feel uncomfortable, but when I try to examine why they feel uncomfortable, I worry that a lot of my reasons just come down to “avoiding weirdness” or “hesitating to care a great deal about creatures very different from me and my social peers.” These are exactly the sorts of thoughts I’m trying to get away from, if I want to be ahead of the curve on ethics.\nAn interesting additional point is that this sort of ethics arguably has a track record of being \"ahead of the curve.\" For example, here's Wikipedia on Jeremy Bentham, the “father of utilitarianism” (and a major sentientism proponent as well):\nHe advocated individual and economic freedom, the separation of church and state, freedom of expression, equal rights for women, the right to divorce, and the decriminalizing of homosexual acts. [My note: he lived from 1747-1832, well before most of these views were common.] He called for the abolition of slavery, the abolition of the death penalty, and the abolition of physical punishment, including that of children. He has also become known in recent years as an early advocate of animal rights.5\nMore on this at utilitarianism.net, and some criticism (which I don't find very compelling,6 though I have my own reservations about the \"track record\" point that I'll share in future pieces) here.\nTo reiterate, I don’t unreservedly endorse the ethical system discussed in this piece. Future pieces will discuss weaknesses in the case, and how I handle uncertainty and reservations about ethical systems.\nBut it’s a way of thinking that I find powerful and intriguing. When I act dramatically out of line with what the ethical system I've outlined suggests, I do worry that I’m falling prey to acting by the ethics of my time, rather than doing the right thing in a deeper sense.\nAppendix: other candidates for future-proof ethics?\nIn this piece, I’ve mostly contrasted two approaches to ethics:\n\"Common sense\" or intuition-based ethics.\nThe specific ethical framework that combines systemization, thin utilitarianism and sentientism.\nOf course, these aren't the only two options. There are a number of other approaches to ethics that have been extensively explored and discussed within academic philosophy. These include deontology, virtue ethics and contractualism.\nThese approaches and others have significant merits and uses. They can help one see ethical dilemmas in a new light, they can help illustrate some of the unappealing aspects of utilitarianism, they can be combined with utilitarianism so that one avoids particular bad behaviors, and they can provide potential explanations for some particular ethical intuitions. \nBut I don’t think any of them are as close to being comprehensive systems - able to give guidance on practically any ethics-related decision - as the approach I've outlined above. As such, I think they don’t offer the same hopes as the approach I've laid out in this post.\nOne key point is that other ethical frameworks are often concerned with duties, obligations and/or “rules,” and they have little to say about questions such as “If I’m choosing between a huge number of different worthy places to donate, or a huge number of different ways to spend my time to help others, how do I determine which option will do as much good as possible?” \nThe approach I've outlined above seems like the main reasonably-well-developed candidate system for answering questions like the latter, which I think helps explain why it seems to be the most-attended-to ethical framework in the effective altruism community.\nAppendix: aspects of the utilitarianism debate I'm skipping\nMost existing writing on utilitarianism and/or sentientism is academic philosophy work. In academic philosophy, it's generally taken as a default that people are searching for some coherent ethical system; the \"common-sense or non-principle-derived approach\" generally doesn't take center stage (though there is some discussion of it under the heading of moral particularism).\nWith this in mind, a number of common arguments for utilitarianism don't seem germane for my purposes, in particular:\nA broad suite of arguments of the form, \"Utilitarianism seems superior to particular alternatives such as deontology or virtue ethics.\" In academic philosophy, people often seem to assume that a conclusion like \"Utilitarianism isn't perfect, but it's the best candidate for a consistent, principled system we have\" is a strong argument for utilitarianism; here, I am partly examining what we gain (and lose) by aiming for a consistent, principled system at all.\nArguments of the form, \"Utilitarianism is intuitively and/or obviously correct; it seems clear that pleasure is good and pain is bad, and much follows from this.\" While these arguments might be compelling to some, it seems clear that many people don't share the implied view of what's \"intuitive/obvious.\" Personally, I would feel quite uncomfortable making big decisions based on an ethical system whose greatest strength is something like \"It just seems right to me [and not to many others],\" and I'm more interested in arguments that utilitarianism (and sentientism) should be followed even where they are causing significant conflict with one's intuitions.\nIn examining the case for utilitarianism and sentientism, I've left arguments in the above categories to the side. But if there are arguments I've neglected in favor of utilitarianism and sentientism that fit the frame of this series, please share them in the comments!\nNext in series: Defending One-Dimensional EthicsFootnotes\n I don't have a cite for these being the key properties of a good scientific theory, but I think these properties tend to be consistently sought out across a wide variety of scientific domains. The simplicity criterion is often called \"Occam's razor,\" and the other criterion is hopefully somewhat self-explanatory. You could also see these properties as essentially a plain-language description of Solomonoff induction. ↩\n It's possible to combine sentientism with a non-hedonist theory of well-being. For example, one might believe that only beings with the capacity for pleasure and suffering matter, but also that once we've determined that someone matters, we should care about what they want, not just about their pleasure and suffering. ↩\nAt first [the] insider/ outsider distinction applied even between the citizens of neighboring Greek city-states; thus there is a tombstone of the mid-fifth century B.C. which reads:\nThis memorial is set over the body of a very good man. Pythion, from Megara, slew seven men and broke off seven spear points in their bodies … This man, who saved three Athenian regiments … having brought sorrow to no one among all men who dwell on earth, went down to the underworld felicitated in the eyes of all.\nThis is quite consistent with the comic way in which Aristophanes treats the starvation of the Greek enemies of the Athenians, starvation which resulted from the devastation the Athenians had themselves inflicted. Plato, however, suggested an advance on this morality: he argued that Greeks should not, in war, enslave other Greeks, lay waste their lands or raze their houses; they should do these things only to non-Greeks. These examples could be multiplied almost indefinitely. The ancient Assyrian kings boastfully recorded in stone how they had tortured their non-Assyrian enemies and covered the valleys and mountains with their corpses. Romans looked on barbarians as beings who could be captured like animals for use as slaves or made to entertain the crowds by killing each other in the Colosseum. In modern times Europeans have stopped treating each other in this way, but less than two hundred years ago some still regarded Africans as outside the bounds of ethics, and therefore a resource which should be harvested and put to useful work. Similarly Australian aborigines were, to many early settlers from England, a kind of pest, to be hunted and killed whenever they proved troublesome. ↩\n E.g., https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood#ProposedCriteria  ↩\nWikipedia ↩\n I mean, I agree with the critic that the \"track record\" point is far from a slam dunk, and that \"utilitarians were ahead of the curve\" doesn't necessarily mean \"utilitarianism was ahead of the curve.\" But I don't think the \"track record\" argument is intended to be a philosophically tight point; I think it's intended to be interesting and suggestive, and I think it succeeds at that. At a minimum, it may imply something like \"The kind of person who is drawn to utilitarianism+sentientism is also the kind of person who makes ahead-of-the-curve moral judgments,\" and I'd consider that an argument for putting serious weight on the moral judgments of people who drawn to utilitarianism+sentientism today. ↩\n", "url": "https://www.cold-takes.com/future-proof-ethics/", "title": "Future-proof ethics", "source": "cold.takes", "source_type": "blog", "date_published": "2022-02-02", "id": "bde4bb4c024fb1605a004fabfecb5297"} -{"text": "\nClick lower right to download or find on Apple Podcasts, Spotify, Stitcher, etc. Note that this recording combines a previous post with this one.\n\"Cost disease\" is a term sometimes used to refer to the rising costs (over the past several decades) of particular goods and services - particularly education (both K-12 and higher education), health care, real estate, and infrastructure (e.g., subway stations) - without commensurately rising benefits. It's been discussed at length on Slate Star Codex, here and here.\nI've often heard people citing \"cost disease\" as an indicator of general \"civilizational decline.\" I've even seen it lumped in with the \"Where's Today's Beethoven?\" question, along these rough lines: \nCollapsing civilizational competence hypothesis (CCCH): we have so much wealth and technology, we have such a huge population, and yet - we collectively can't match the music or literature of the past, or its scientific innovation, or its ability to provide an affordable education and health care, or even its subway station construction. Something is badly broken in our culture.\nThe CCCH is an intriguing claim - especially so, I think, because we are naturally biased toward imagining the past as a golden age. \nI haven't seen a major formal defense of CCCH (see previous comments on how people discuss innovation stagnation), but I've heard it come up in many casual conversations.\nOver the last several weeks, I've been examining different topics relevant to the CCCH. Here I'm going to summarize my take on the CCCH as a whole. \nAt a high level, I basically see it as cherry-picking. It would be one thing if we saw things getting worse everywhere we looked - declining quality of life, declining wealth, declining technological capabilities, declining athletic abilities. But in fact, the world has been getting better in a large number of ways. \nWith this in mind, I think it's worth looking at the specific claims made in the CCCH, one by one, and remaining open to the idea that there are lots of different things going on here - that we won't see the same theory explaining everything. \nOnce we take that attitude, what I think we see is:\nInnovation stagnation and cost disease really seem like different phenomena. \nInnovation stagnation is about the output of the tiny upper tail of innovators in society. Cost disease is about trends in the average cost to provide goods and services, across the board. \nThey could be explained by the same underlying story - for example, perhaps everyone is just getting worse at everything - but I don't find that sort of story very likely, especially when we have good reason to think that innovation stagnation is about ideas getting harder to find rather than about worsening culture or intelligence.\nCost disease is itself multiple, arguably somewhat cherry-picked phenomena. \nIt's not the case that everything is getting more expensive1 - in fact, in some sense the average thing is getting more affordable. E.g., as shown here, real median household income is rising or at worst flat, which means that median income is rising faster than average prices; a similar chart for mean income, or a global version, would be more encouraging still.\nIf food and energy were getting more expensive while education and health care were getting cheaper, I imagine the complaints about \"cost disease\" would be roughly the same. So we should think of cost disease as \"A list of several things that are getting more expensive\" rather than as \"A phenomenon in which everything gets more expensive.\" And we should look at each of these several things individually.\nI'd guess that a decent amount of cost disease is explained by the \"stakeholder management\"-related challenges I laid out previously. \nSlate Star Codex's excellent compilation of comments on cost disease includes several pretty compelling (to me) anecdotes about some of the possible causes that seem to largely reflect stakeholder management challenges. See the comments from John Schilling, bkearns123, fc123, and CatCube.\nDetailed examinations of what's going on with construction costs (for subway stations and other infrastructure) seem to imply a major role for stakeholder management;2 I'd guess that similar dynamics affect education and health care. \nThe facially most obvious explanation for rising real estate costs would seem to be NIMBYism, which I previously discussed as an instance of \"stakeholder management\" challenges. (Another salient explanation would be along the lines of: \"As wealth rises and land doesn't get more plentiful, land should get more expensive.\") \nI'd guess another significant fraction is explained by things I don't consider to be \"civilizational decline\" at all. \nBaumol's cost disease may be an important part of the \"cost disease\" picture, and wouldn't indicate civilizational decline. Baumol's cost disease is difficult to explain compactly, but the basic idea is something like: \"Due to productivity going up, many jobs get more lucrative; and so schools and medical systems and such need to pay more in order to stay competitive in hiring, even if schools and medical systems aren't getting more productive themselves.\" \nI've also wondered whether the basic dynamic of Baumol's cost disease applies to things other than employees. For example, you could maybe imagine that a similar dynamic applies to conveniently located real estate. That is: real estate is a key input into many things, and it has not been getting cheaper as fast as other things have been getting cheaper; so when a high percentage of the costs of X comes from real estate, X will get more expensive.\nI'm pretty compelled by some of the \"cost disease\" explanations that emphasize the combination of: \n(a) Rising demand and willingness-to-pay for good education and health care. \n \n(b) Difficulty assessing the quality of education and health care. When you can't tell what you're getting for your money, but are willing to spend a lot for \"the best,\" this could be a formula for costs spiraling upwards and paying for a lot of illusory indicators of value.\n \n(c) \"Disintermediation,\" in which people are not entirely making their own purchasing decisions - health care is paid for by insurance, public education by government, higher education by scholarships and donations as well as direct payments. This makes it harder for there to be a subset of the market that has lower willingness to spend, providing demand for cheaper services.\n \nYou could certainly call the combination of (a)-(c) dysfunctional, but it's not clear that there is anything here that is unique to today's world or getting worse as time goes on. If (a) is rising while (b) and (c) are steady, that's sufficient for costs to rise over time.\nBottom line. When I look separately at the various pieces of the CCCH, I ultimately don't see a good case that our society is getting less competent across the board, or \"forgetting how to do basic things,\" or anything like that. I think our society is getting bigger, wealthier, and more capable, and it's sometimes \"getting in its own way\" in ways that we might naturally expect.Footnotes\nRelevant chart:\n ↩\nFrom a Brookings report on this topic: \"\"We do find empirical evidence consistent with two hypotheses. The first is that the demand for more expensive Interstate highways increases with income, as either richer people are willing to pay for more expensive highways or in any case they can have their interests heard in the political process ... The second hypothesis ... is the rise of 'citizen voice' in the late 1960s and early 1970s. We use the term 'citizen voice' to describe the set of movements that arose in the late 1960s—such as the environmental movement and the rise of homeowners as organized lobbyists (Fischel 2001)—that empowered citizens with institutional tools to translate preferences into government outcomes (Altshuler and Luberoff 2003) ... Other new tools, such as mandated public input, could yield construction of additional highway accoutrement (such as noise barriers), create delays, or increase planning costs ... we find that income’s relationship to costs is five times stronger in the post-1970 era. This is consistent with the timing of the rise of citizen voice, which took flight in the late 1960s and early 1970s. Second, we find that the discussion in the Congressional Record around the Interstates was substantially more likely to involve environmental issues after 1970 and that these issues remained in heightened discussion after the passage of the National Environmental Policy Act in 1969.\"\nAlon Levy's analysis of subway costs (1, 2) has named station construction as a major culprit in countries with unusually high costs - in particular, the choice to tunnel underground to avoid street disruption, rather than using cheaper and quicker \"cut-and-cover\" methods that cause major street disruption. My guess is that there was less concern over street disruption in the past, partly because there was less \"citizen voice\" to oppose it. ↩\n", "url": "https://www.cold-takes.com/cost-disease-and-civilizational-decline/", "title": "Cost disease and civilizational decline", "source": "cold.takes", "source_type": "blog", "date_published": "2022-01-27", "id": "73df648de3eef085b458d460cf445c97"} -{"text": "There were a lot of great comments on Where's Today's Beethoven? and followup pieces, from the comments, Twitter, and other places. There were too many to share them all, but here are some of the broad themes that came up, with quotes and thoughts below:\nI think I undersold the \"bad taste\" hypothesis - that \"innovation stagnation\" has a lot to do with critics' biases toward the past, rather than the actual quality of ideas being produced today. More\nSome readers hypothesized that we just don't know who today's Beethoven is yet - that's the kind of thing that comes with distance and perspective. I think this is possible, but doesn't explain most of the \"innovation stagnation\" observations. For fun, I outline what I think we'll be saying about today's innovations 50 years from now. More\nSome readers hypothesized that it's hard for someone to stand out as much as Beethoven today, since there is more total art to consume and critics aren't able to process it all. I think this is an important part of the picture, though not all of it. More\nI'll also quote and respond to a few other interesting takes. More\nFinally, I'll revisit my previous comments about the implications of innovation stagnation and talk about updates based on these points. More\nI think I undersold the \"bad taste\" hypothesis\nA number of reactions emphasized that the \"critical acclaim\" data I'm analyzing - likely for both science and art, but especially the latter - is from critics that are \"out to lunch\" in various ways.\nAt a high level, I agree with this. A significant part of my answer to \"Where's today's Beethoven?\" is \"Ehh, there's lots of music today that's as impressive/enjoyable/both as Beethoven's - it just doesn't have the same august reputation for a variety of ultimately silly reasons.\" (I feel this way more about Mozart and Shakespeare than about Beethoven, but it ultimately seems pretty plausible across the board.) \nI think I underplayed this point partly because I felt awkward about saying that, and because I expected a lot of people to violently disagree, with no data to settle the dispute. But I think we do have some hints that critics think about the quality of art (particularly music) in pretty warped ways. \nOne such hint comes via Matt Clancy of New Things Under the Sun, a great \"living literature review about academic research on the economics of innovation, science of science, creativity, and discovery.\" He writes:\nI thought this tweet (part of a longer thread) was interesting:\nNext up, top decades. The numbers at the top represent each decade's percentage change from '04 to '21. pic.twitter.com/urM5HNEWxA— Deen Freelon (@dfreelon) September 16, 2021 \n I've always suspected people have a hard time separating how original art actually is from how original it felt to them when they first encountered it, and that means people have a perpetual bias towards thinking the best work was from when they were younger. The tweet is about comparing the Rolling Stones list of top 500 songs when they did it in 2004 and 2021. You can see in the linked tweet (1) that both groups think the quality of music peaked a few decades ago and mostly declined thereafter; (2) that the distribution of best songs shifts forward in time for the 2021 group.\nCaveats: the shift forward isn't a full 17 years (closer to 7), and Matt has also been looking at another data set (the Sight and Sound lists released each decade for film) and finding that the average release year for the top 10 films stays around 1945 (if he posts more analysis on this, I'll link it in the future). \nNostalgia is just one kind of cognitive bias that could pump up the reputation of older music. Jeff Sackmann is among the readers who pointed to another:\nIt's impossible to imagine western classical music without Beethoven, in part because such a significant amount of it is Beethovian. Had some very talented and charismatic musician come along at the right time from the Balkans, maybe that foundational slot would be taken by someone/something else. If this is correct, there's bound to be some historical figures that are considered head-and-shoulders above the rest, and they must be quite old. A contemporary person cannot fill this role, though it's conceivable that a contemporary person would fill this role for people 200 years down the road.\nMy own guess is that critics are affected by both of these things and simply by peer pressure, so that an initial critical consensus (possibly affected by both of the above dynamics) has a lot of staying power.\nWith those mechanisms in mind, here's an interesting, plausible quote from Luke Muehlhauser:\nI think I’d also place significant weight on “people with fancy music tastes overrate Beethoven relative to Beefheart or Mingus for reasons that I think are worth calling mistakes / bias.” Like, praise for Beethoven is ubiquitous, but if people with fancy music tastes knew a lot of music theory and also the composition date for every piece of music and had listened to damn near everything, they would realize that Beefheart and Mingus are in the same league as Beethoven, but because they are ALSO exposed to lots of cultural and peer signals, they’ve been tricked into thinking that Beethoven towers above them, and they might not have even heard of Beefheart ...\nOnce rock music was a format, people started innovating like crazy within that format, or at least they did starting around 1965. Some of that innovative sophisticated stuff is unlistenable Beefheart, but some of it is highly listenable, e.g. Robert Wyatt’s ‘Rock Bottom’ or Miles Davis’ ‘Bitches Brew’ (a mega bestseller, and which I’m counting as equal parts rock and jazz). Is Beethoven’s 5th really so much more innovative, sophisticated, or listenable than Bitches Brew? Maybe a bit on each dimension, but not a ton, I claim. And the fancy music listeners are just biased and mistaken when they fail to rank Rock Bottom or Bitches Brew in the same league as Beethoven’s 5th.\nAnother thing that has moved me toward the \"bad taste\" hypothesis is an extensive project I undertook to listen repeatedly to, and try to understand the genius of, the #1-rated modern music album of all time from the Acclaimed Music data set: Pet Sounds, by the Beach Boys. More on this in a future short piece!\nMaybe we just don't know yet who today's Beethoven is?\nI saw a few sentiments along the lines of this comment:\nWe can't tell who the Beethoven's are because we don't understand them, and cannot judge. Maybe in 200 years, in hindsight, it will be obvious who they were, but right now, us plebs can't recognize their genius, because we are not geniuses.\nI think this is a good potential answer to the literal question, \"Where's today's Beethoven?\", but less satisfying for explaining the very long-running \"innovation stagnation\" patterns in the many charts here (e.g., innovation stagnation occurring over the period of 1800-1950).\nStill, maybe this is a good time to comment on what I think we'll actually think about the \"where's today's Beethoven?\" question, 50-100 years from now:\nI think sometime in that range, civilization will probably (though not definitely) be taken over by digital people, misaligned AI or something even stranger. \nIf beings from that time talk about the time we live in, in terms that roughly resemble this topic, they'll say things along the lines of: \"The field of AI really took off in the 2010s. This was of course the far-and-away most important innovation in the whole history of human civilization, similarly to how the most important 'innovation' of apes was evolving into humans. It was the most important innovation for music (we now have a near-infinite amount of music more brilliant than Beethoven's by nearly any definition you can come up with), literature, advanced sandwich art, etc.\" \nIf they bother to read about old debates bemoaning the \"golden age\" of innovation sometime between the 15th and 20th century, they'll just find it kind of funny, if humor is still something they do.\nSee also: The Great Depression, Recession and Stagnation in Full Historical Context.\nAnyway, back to the topic at hand.\nFields are getting more crowded\nFrom reader Vadim A.:\neven though the number of people producing innovation in every field has increased drastically, the audience evaluating each field has always been a single, highly correlated group with limited bandwidth. So let's say there were 100 serious books produced in some year a few hundred years ago. Maybe 50 of them would be read and discussed by a fair number of the intellectual class, and five would be deemed worthy of extended discussion. Maybe one would maintain its reputation and become influential enough to make a greatest book list. \n Then, let's say today there are 10,000 serious books produced in a year, maybe 500 of them get enough initial marketing or momentum to be read by a fair number of people. Once some people read a book and talk about it, other people will use their advice to read it, and base their opinions on the early reviewers. So even if there are now 5,000 books of equal quality to 50 books of a few hundred years ago, there is only enough bandwidth for 500 of them to take off. Then, there might only be room enough for extended discussion of 20 (because again, there are network and correlation effects). And maybe only a few survive to become influential.\n(Emphasis mine)\nI saw similar themes well put in comments from:\nAnton Howes: \"Given the focus on acclaim, what we are most likely seeing in the data presented is just that there are *too many* Beethovens today, spread across far more and ever growing fields, and we have only limited attention spans in which even critics can appreciate them.\"\nStuart Carter: \"If someone solved the riemann hypothesis tomorrow, I doubt the average person would hear much about it\"\nCalion: \"I am not sure we'd notice a new Shakespeare. We'd simply lump him in with all of the other really good playwrights we have. Nothing would make him stand out as the best.\"\nJosh Achiam: \"If there are a million geniuses producing Beethoven-level work right now, who could even learn all of their names?\"\nI think this is probably a big part of what's going on. It seems like an especially nice fit with patterns like \"output of acclaimed ideas is constant or increasing, but is decreasing when adjusted for 'effective population increases.'\" And in some sense it is a further component of the \"bad taste\" hypothesis, being about limitations of critics rather than of artists.\nHowever, if this were all that were going on, I'd expect to see something we don't seem to see. I'd expect that the very most-acclaimed works from today would be more acclaimed than the very most-acclaimed works from longer ago. The basic idea:\nSay that the number of musicians goes up by 100x.\nIt might be the case that critics aren't able to keep up with all the music, so the number of \"acclaimed works\" doesn't rise at all.\nBut in theory, the compositions that rise all the way to the top in the world of critical opinion - which almost certainly would penetrate public consciousness as well - should be better by whatever metrics critics are using, since they're coming from a 100x larger population.\nThis is in fact not what we're seeing: the acclaim level of the \"top\" works also seems to follow an \"innovation stagnation\" pattern.1\nSome other interesting takes\nNote: I am leaving out most takes along the lines of \"Here's a specific reason that the Golden Age died (people today aren't as intellectual, don't think as big, etc.)\" I don't think we have a good evidential case that there's a \"golden age\" in need of explaining, and don't find most of those theories compelling.\nFernando Pereira: \"[Holden] fails to take into account scale change in culture: as culture grows, domains split, fame becomes localized, effective domain pop much smaller than his estimates.\" \nI definitely think there is something to this - there are a lot more musical genres than there used to be, so the \"number of artists\" in a given genre may be lower than in the past. But critics still tackle questions like \"What's the best music/literature/film, full stop?\", and I think the total \"effective populations\" of people trying to be musicians or authors have risen, not fallen.\nBen Todd hypothesizes that maybe we just have a dynamic something like: \"1 in 10 million people is a top innovator, and more widespread education/urbanization/etc. doesn't change this.\" \nThis is tough for me to believe intuitively: things like adequate nutrition and exposure to cultures of innovation seem like they really ought to be important, and the latter seems especially clear when you look at how concentrated innovation was in particular geographic areas for a lot of our history.\nhippydipster hypothesizes that we are less impressed with people when they don't \"tower over their peers\" as much (because their peers are better, as they are in today's world): \"rather than there being A Beethoven, there's 3 dozen. Yawn. We fail to recognize, partially because Beethoven's uniqueness is literally one of the primary aspects of him, without which we don't have this conversation. The actual skill level still exists today, but the uniqueness doesn't.\" \nIs it in fact the case that the difference between the 1st- and 2nd-best performer should shrink as the number of competitors goes up? This isn't obvious to me either way.\nMichael Nielsen: \"I think this classification is incomplete. Golden ages can be opened up, by new fields. Turing was working in what must have seemed an esoteric, obscure branch of logic; he discovered an important new field, with tonnes of low-hanging fruit.\"\nI think Turing is a bit of an outlier here - he's arguably the founder of modern computer science, one of the only major new scientific fields of the last 200+ years. If you ask the question \"Are we seeing 'innovation stagnation' when it comes to creating new fields of comparable stature to computer science?\" I think the answer is yes, and the \"innovation as mining\" thesis seems a good explanation for why (see footnote for a rough illustration).2\nImplications\nA lot of what I've covered above comes down to different reasons to think that critics being underinformed, overwhelmed, confused, nostalgic or otherwise wrong is a major factor in the \"innovation stagnation\" we observe when we look at critically acclaimed works.\nI don't think this is the whole story (it can't explain all of the technological innovation story, for example), but I do think it's a fair amount.\nI think the conclusions I laid out previously (as well as the high-level description of how artistic ideas get harder to find) hold up reasonably well if we increase our emphasis on \"bad taste\"/\"confused critics\":\nAt a high level, it's still the case that earlier innovators have an advantage in achieving critical acclaim (a lot of the above hypotheses are giving different reasons for this). And it's still the case that there isn't much case for a \"golden age\" worth trying to explain or get back to.\nWe should probably expect further \"stagnation\" by default, though this stagnation may (continue to) be illusory. \nTo the extent we see special value in works that have broad cultural and critical resonance, my earlier comments (about how we should think about artists and their intellectual property) still seem applicable.\nBut here's a point that seems stronger to me than it did before, and that I didn't make in previous pieces: if we can put aside the trappings of acclaim and reputation and enjoy art on its own terms (not necessarily possible), we might find there's no stagnation to explain at all, and no need to hope for a growing number of artists. That is, we might find that we're swimming in impressive art and aren't necessarily in need of more.\n(Science/technology is different, though - I think the case that \"we need more innovators, or growth is going to fall\" is unaffected.)\nNext in series: To Match the Greats, Don’t Follow In Their FootstepsFootnotes\n The charts in my main piece mostly do not address this, but I also looked at a lot of measures of innovation weighted by the level of acclaim, and they also show \"innovation stagnation.\" It's also just generally the case that the very top performers on most of my lists tend to be from early in a field, not late. ↩\n Rough demonstration of this: if we just look at Wikipedia's list of academic fields and focus on natural sciences and formal sciences, we see six fields (biology, chemistry, earth sciences/geology, physics, astronomy, mathematics) that have been around ~forever and reached their \"modern form\" in the 1500s or 1600s (arguably earlier in the case of mathematics), and only 3 or 4 - computer science, \"modern logic,\" space sciences and maybe \"systems sciences\" - dating to the 1800s or 1900s. This seems like the kind of \"innovation stagnation\" pattern I discussed previously - more recent \"output\" isn't as high as you'd expect given the rises in \"effective population.\" Intuitively, the \"innovation as stagnation\" hypothesis seems pretty strong here: you can only \"discover modern physics/chemistry/biology\" once, and there don't seem to be a huge number of fields out there as fundamentally important as those.  ↩\n", "url": "https://www.cold-takes.com/reader-reactions-and-update-on-wheres-todays-beethoven/", "title": "Reader reactions and update on \"Where's Today's Beethoven\"", "source": "cold.takes", "source_type": "blog", "date_published": "2022-01-25", "id": "7776cf9d681b95fba6d587e8b6a33cd0"} -{"text": "\nWhen I was putting out lots of posts about very-long-run history - e.g. Summary of History, Has Life Gotten Better? and Was life better in hunter-gatherer times? - a number of people encouraged me to read and review a recently published book called The Dawn of Everything: a New History of Humanity. \nIt’s billed as “A dramatically new understanding of human history, challenging our most fundamental assumptions about social evolution—from the development of agriculture and cities to the origins of the state, democracy, and inequality—and revealing new possibilities for human emancipation.” Sounds relevant!\nA normal thing for me to do would’ve been to diligently read the book and make a post pointing out some of my favorite quotes and anecdotes, as well as parts where I thought the book was weak.\nBut I think reading books is overrated. Instead of moving my eyes over each page, highlighting here and there, I generally prefer to strategically extract what’s valuable and skip the rest. And in this case - based on reading the first chapter and a bunch of online criticism/commentary - I’ve decided that I don’t want to read or engage with this book: I’d just rather spend my precious moments another way. So instead of a book review, this post will explain my process for deciding not to read the book, despite its relevance for my interests.\nWhat I did read: chapter 1 of the book, plus reviews of the book in the New Yorker (by Gideon Lewis-Krauss), The New York Review (by Kwame Anthony Appiah), and The Nation (by Daniel Immerwahr). \nI looked for claims attributed to the book that (a) would, if true, change my mind about something important; (b) seem reasonably likely to be well-cited/supported/argued. I didn’t find any, and I think authors need to make their important-if-true claims easier to find than this. \nA bit more detail\nThe first chapter of the book (which I think an author should really make sure is communicating what the rest of the book has to offer) felt rambly and hard to pin down to a concrete hypothesis beyond something like “The past is complicated, and simple narratives are oversimplified” (something I already agree with ¯\\_(ツ)_/¯ ). The only sign of an important-if-true factual claim was this one (I’m quoting from the The Nation review):\nIn arguing that people hate hierarchies, Graeber and Wengrow twice assert that settlers in the colonial Americas who’d been “captured or adopted” by Indigenous societies “almost invariably” chose to stay with them. By contrast, Indigenous people taken into European societies “almost invariably did just the opposite: either escaping at the earliest opportunity, or—having tried their best to adjust, and ultimately failed—returning to indigenous society to live out their last days.”\nBig if true, as they say, but the claim is ballistically false, and the sole scholarly authority that Graeber and Wengrow cite—a 1977 dissertation—actually argues the opposite. “Persons of all races and cultural backgrounds reacted to captivity in much the same way” is its thesis; generally, young children assimilated into their new culture and older captives didn’t. Many captured settlers returned, including the frontiersman Daniel Boone, the Puritan minister John Williams, and the author Mary Rowlandson. And there’s a long history of Native people attending settler schools, befriending or marrying whites, and adopting European religious practices. Such choices were surely shaped by colonialism, but to deny they were ever made is absurd.\nAll of the reviews - plus this Twitter thread by Brad DeLong - echoed the theme that the book is thinly cited and that many of its claims seem overstated or unsupported. That makes it a tough sell for me, as I’m not excited to read 500 pages of claims that aren’t particularly likely to hold up once I dig in and check them out. \nAnd based on my investigation into whether life has gotten better, I generally expect most claims about our distant past to - at best - get resolved into a big “maybe, who knows.” (That’s why I’ve been selective in what I’ve tried to examine, highlighting things I think can be known while making the case that most of our distant past is a mystery.) If we’re looking for insight into what the world might look like without states (apparently a major theme of the book), I’d rather just hope for modern experiments than try to read archeological tea leaves.\nAt a high level, I think I might agree with the main conceptual claims of the book: that we shouldn’t be too confident about what our past looks like, that there are exceptions to most generalizations about it that we could come up with, and that we shouldn’t assume that any particular form of social organization is impossible in our future. These seem like reasonable default views to have; to the extent they “revolutionize” someone’s understanding of history by “tearing down” some tidy story, that someone isn’t me and that tidy story isn’t mine. (I do think there are directional trends running throughout history, and I think it’s worth deliberately simplifying things down to notice the “highlights”; this is different from claiming that history in all its detail is simple or consistent.) \nBottom line … it’s over 500 pages, it sounds inconsistently (at best) cited, and I couldn’t quickly identify particular claims that seem like a good use of time to investigate. It would’ve been nice to be able to say I read this prominent book on a topic of interest. But life’s too short.\nBut I am definitely interested in hearing from any readers who can point to specific parts of this book that seem particularly worth engaging with, with explanations of why!For email filter: florpschmop\n", "url": "https://www.cold-takes.com/book-non-review-the-dawn-of-everything/", "title": "Book non-review: The Dawn of Everything", "source": "cold.takes", "source_type": "blog", "date_published": "2022-01-20", "id": "b35e172c1f6e1775019f38bd4e0a8642"} -{"text": "I've been writing recently about sweeping, very-long-run trends in whether things are going well or poorly in the world.1 One candidate trend I haven't talked about yet: what some see as a trend toward vetocracy, kludgeocracy, bureaucracy and/or red tape - things getting harder to build, more costly, etc.\nI basically buy the idea that we are seeing - and should broadly expect to continue seeing - trends in this direction. I think some of this is specific to particular parts of the world (e.g., the USA), but I broadly expect there to be overall trends in this direction over time, pretty much across the board. \nThat's because I think there is a deep connection between:\nEmpowerment - increased options and capabilities for people (including e.g. technology), which I consider one of the most robust historical trends.\nStakeholder management, or the challenge of carrying out activities that lots of people want input into. I think stakeholder management can explain a lot of what we see in terms of rising bureaucracy, red tape, etc.\nTo be clear, I think stakeholder management can be a good thing too, and that empowerment is broadly \"worth it\" despite the stakeholder management challenges.\nHere, I'm going to lay out the conceptual basics of what I mean. Future posts will likely refer back to this one, when trying to make sense of ways in which the world may be getting better or worse.\nWhat is stakeholder management?\nThe term \"stakeholder\" comes up a fair amount in corporate and nonprofit settings. I'd define it as someone who cares about a decision, wants to weigh in on it, and might react badly if they aren't satisfied by it. \nFor example, if a company changes its compensation policy, affected employees are stakeholders for the decision; if it changes its product, both employees and customers are. \nIt's common to do \"stakeholder management\" in advance of making some change: contacting some key stakeholders, hearing from them how they'd feel about the change, and looking for compromises on the aspects they feel strongest about in order to keep as many people as possible happy and avoid blowback. The more stakeholders there are, the more challenging this can be, and the more it can slow projects down and make it hard to make changes. \n(I think the best public examples of what \"stakeholder management\" looks like tend to come from stories about how legislation gets passed, such as in the movie Lincoln and the book Showdown at Gucci Gulch. I also liked this article; sections 2, 5, 8 and 9 are particularly germane.)\nWhen we make personal decisions like where to work, whom to date, etc., we don't tend to have to satisfy a lot of stakeholders - if anything, modern individualism means we're more \"on our own\" here than we used to be. But when it comes to companies setting policies, legislators making laws, diplomats reaching agreements, schools designing syllabi, governments trying to build subway stations, and more, stakeholder management can be a huge part of where the effort goes. The more active, opinionated stakeholders there are, the more conflicting desires there are to satisfy, and the harder it is to move things forward quickly, decisively or cheaply.\nThis is often a good thing! (And in nonprofit circles, it's often assumed to be.2) I'd guess most decisions are much better when they have input from interested and affected parties, vs. when they're just made unilaterally. I think the right amount of stakeholder participation is more than zero - but I think there's also an amount that just makes it very hard to accomplish anything. For a vivid example, see this article on two San Francisco residents who have been solely responsible for blocking large numbers of city projects: \nTwo San Franciscans seem to have made it their pandemic hobby to file appeals for just about every emergency action taken by the San Francisco Municipal Transportation Agency in the past six months ... [E]ach of the five appeals will cost about 100 hours of work by ... staff ... each hearing at the Board of Supervisors, which serves as the judge and jury in these cases, costs a combined $10,000 in city officials’ and attorneys’ time. In fact, Tumlin said each appeal is taking more time and money than it took to create the emergency programs in the first place.\"\nI think the modern world sees a lot of instances of stakeholder strength growing to the point where making any kind of change to the status quo becomes prohibitively difficult. Some central examples in my mind:\nNIMBYism: any attempt to build a subway station, power plant, or just more housing is met with complaints and blocking maneuvers (as above) by the people directly affected.\nA dynamic sometimes referred to as \"why we can't have nice things\": there are increasing numbers of people who might get injured or hurt (or just upset) by some product or service, possibly suing over it, and so providing the product or service requires more and more measures to pre-empt possible ways in which it could harm someone. Some nice examples are in this post (see comments from John Schilling, bkearns123, CatCube).\nI see this phenomenon - a growing \"stakeholder management\" burden - as having a likely deep connection to historical trends toward greater empowerment. As people become wealthier, more educated, and more informed, they become louder and more opinionated, active stakeholders. To reiterate: I think this is a good thing overall. But it means there are increasing numbers of hurdles to building new things and changing the status quo in domains that invite a lot of stakeholder participation.\nI like the \"stakeholder\" lens better than lenses based around \"over-regulation\" or \"bureaucracy\" or \"red tape,\" because it points at the underlying cause rather than blaming some particular government or institution. \nIt seems to me that across the world's rich countries, we're seeing consistent trends toward more difficult and costly stakeholder management. For example, I believe it's getting harder in ~all rich, urban areas to build new subway stations.\nI think this dynamic affects private companies, governments and more. While \"red tape\" appears in institutions, I think the underlying cause of the \"red tape\" often comes from the behavior of private individuals. And I don't think the world becoming more \"libertarian\" (at least in the narrow sense of seeking to shrink government) would necessarily solve much (at least, I wouldn't expect it to lead to more subway stations!)\nLegacy systems and kludgeocracy\nEven holding the amount of stakeholders constant, the burden of stakeholder management - and the difficulty of changing the status quo - could grow over time, via a couple of common dynamics. \nThe first is legacy systems: things that aren't how we would do them today, but that lots of people have built their lifestyles/routines/businesses around, such that change is painful. Simple examples:\nA lot of government and other bureaucratic processes still give prominent roles to written paperwork and fax machines; switching over to electronic records tends to be a huge project, even though it would've been easy to start out that way if the technology had been available earlier.\nWhen the World Wide Web was new, there was an opportunity to define the basic protocols and languages that power it - such as HTML - with a lot of freedom. Today, if there were some obvious improvement to how HTML should work, implementing the improvement could cause a lot of websites built under the old system to break, and there's a tremendous amount of effort needed even for relatively modest upgrades. I think most people would agree that the current way HTML, CSS and JavaScript work together to power most websites is a mess, and not the way we'd set things up if we were starting over - but also that fundamental change is unlikely.\nThe second is kludgeocracy: when any new initiative or change (to policies, neighborhoods, etc.) has to navigate legacy systems and compromises with opponents, a system can get more and more complex over time. This can make it more and more difficult to understand what's going on, more and more difficult to understand the full effects of a change, and thus even harder to make changes.\nThe term \"kludgeocracy\" comes from this essay by Steven Teles, which is focused on the US but which I'd expect to be somewhat applicable across the board. The abstract states:\nThe dictionary tells us that a kludge is “an ill-assorted collection of parts assembled to fulfill a particular purpose…a clumsy but temporarily effective solution to a particular fault or problem\" ... “Clumsy but temporarily effective” also describes much of American public policy. For any particular problem we have arrived at the most gerry-rigged, opaque and complicated response.\nUpshot\nIn modern, politically stable societies with high levels of empowerment, there's reason to expect that lots of things could get gradually \"harder to do\" - and in particular, systems could get harder to change - over time. I think this explains many observations about vetocracy, kludgeocracy, bureaucracy and red tape, etc.\nI think the degree of these problems varies from place to place and domain to domain, and depends on lots of details of how systems are set up and how stakeholder input works (e.g., what can people sue for or formally block, as opposed to just complaining?) I think there's probably plenty of room to significantly mitigate these challenges via well-designed processes for considering - but not being totally beholden to - stakeholder input.\nBut overall, we should expect this sort of thing to be a challenge that grows with modernity. I think it's \"worth it\" and consider empowerment overall to be a good thing, but recognize there's room for debate there.\nIt's also possible that these sorts of problems are pretty temporary in the scheme of things. As these problems become more and more noticeable, there may be increasing pressure to change governance practices and norms to make changes to the status quo easier. Or perhaps some other change will come along that makes it all look like small potatoes.Footnotes\nHas Life Gotten Better? discusses whether quality of life has improved for the average person over the course of human history. Where's Today's Beethoven? discusses whether our society has become worse at scientific and artistic innovation over the past few centuries. ↩\n Here's an example I grabbed by taking the first Google result for \"nonprofit stakeholders.\"  ↩\n", "url": "https://www.cold-takes.com/empowerment-and-stakeholder-management/", "title": "Empowerment and Stakeholder Management", "source": "cold.takes", "source_type": "blog", "date_published": "2022-01-18", "id": "fe80b6ff8d8fbeca879e9662f23e11e5"} -{"text": "\nThese links are mostly (though not 100%) relevant to themes I've written about fairly recently.\nTen Big Wins for Farm Animals in 2021, from my coworker Lewis Bollard's excellent newsletter on farm animal welfare. I'm sharing this because it's probably some of the year's most significant news about reducing modern-day suffering, and probably hasn't gotten anywhere near proportionate attention.\nOn (Not) Reading Papers: It is a bit of an open secret that \"not reading the paper\" is the only feasible strategy for staying on top of the ever-increasing mountain of academic literature. But while everybody knows this, not everybody knows that everybody knows this. On discussion boards, some researchers claim with a straight face to be reading 100 papers per month. Skimming 100 papers per month might be possible, but reading them is not ... Even though scientists don't read papers, the entire edifice does not collapse. How is that? In this post, I will go through some obvious and less obvious problems that come from \"not reading the paper\", and how those turn out to not be that bad.\nI liked the piece! It's got some similar ideas to my past posts reading books vs. engaging with them and Gell-Mann Earworms, but with more specificity about what it's like to be a scientist dealing with a mound of papers to \"read.\" \nAnd here's another approach: read less but read it twice. Not my approach, but in general I'm just super in favor of \"What should my strategy be for deciding what to read and how to read it?\" discourse - there doesn't seem to be enough of it, and I think \"Read interesting-seeming things straight through once\" is a bad default.\nBloomberg: Podcasting hasn't produced a new hit in years. Subhed: \"The average podcast in the top 10 is more than seven years old.\" The reason given in the article is that there are just too many podcasts out there for any one of them to grab a lot of market share, which sounds very plausible to me (and is probably going on with TV as well; music and film seem less affected here because you can't just stick with your favorite album/film for 10 years). I'm guessing that 50 years from now, this will color how critics think about which podcasts are \"significant,\" and many will pine for a golden age of podcasting.\nAlso related to \"pining for a golden age\":\nThis was a good Twitter thread citing old public opinion polling. Counter to some narratives that the US used to be more universally pro-science, it looks like we've always been pretty wary, e.g. going to the moon was pretty unpopular and there was plenty of skepticism about the polio vaccine. \nEliezer Yudkowsky thinks \"all of the talented executives and ops people were in 1950 and now they're dead and there's fewer and fewer surviving descendants of their heritage every year.\" Anyone know what evidence this is or could be based on?\nClaim that I'm not supporting, more curious about: 4 Years After the FCC Repealed Net Neutrality, the Internet Is Better Than Ever. Does anyone know of good analysis on whether the repeal of net neutrality mattered much (my not-particularly-informed current sense is that it didn't), and/or any \"I was wrong\" takes from people who said it would matter greatly (my sense is that there are no such takes)? I would be very interested in either (via comments or tips). If the repeal didn't end up mattering much, it seems like a significant set of people ought to be talking and thinking about what they got wrong, and if it did, I have things to learn about what's going on with the Internet.\nCouple updates on previous Cold Takes posts:\nOn October 19, I criticized a Wikipedia page on hunter-gatherers in a post. On November 30, it looks like the things I complained about (which have mostly been there since May 2013) were largely fixed by a Wikipedia editor. Regardless of whether these two events are connected, this is evidence that the system works, IMO.\nApparently 89 people have made a forecast on Metaculus about the Omicron bet I posted about a few weeks ago. The aggregate forecast has hovered around a 51-54% chance that I win the bet (assuming it isn't a push), which is actually higher than the probability I gave. (I put it at 50%, and we bet at odds implying 40%)\nThe unweaving of a beautiful thing: a short story submitted to an effective altruism creative writing contest, which I quite liked (and it's short).For email filter: florpschmop\n", "url": "https://www.cold-takes.com/assorted-cold-ish-links/", "title": "Assorted cold-ish links", "source": "cold.takes", "source_type": "blog", "date_published": "2022-01-14", "id": "8a344475cd017c66f6b9f0edd30d7d21"} -{"text": "There's been a fair amount of attention over the last few years on the idea that ideas are getting harder to find (link goes to a paper by Bloom, Jones, Reenen and Webb that I think is the main source of this term). That is: the number of people doing research has grown exponentially, but various measures of \"research progress\" have not.\nIn two previous posts (here and here), I argued that the same dynamic applies to (certain types of) art, as well as science. We have a growing population, growing rates of education, etc. - but the production of \"great art\" (as judged by critical acclaim) doesn't seem to be keeping up.\nWhy does this matter?\nThe usual story I hear is something like this: \"It's really scary that ideas are getting harder to find. We should try to fix whatever is going wrong, so we can get back to a high level of scientific and artistic output.\" \nThis \"usual story\" seems to me like a misreading of the evidence. The issue is not that \"our culture is getting worse at finding ideas,\" but rather that \"ideas naturally get harder to find, because that's just how it is.\" (For my defense of this idea, see Where's Today's Beethoven?; also see this 2018 post by Slate Star Codex.)\nI believe in a different set of implications:\nBy default, we should expect further stagnation in scientific output, and in certain kinds of artistic output. \nThe most likely routes to avoiding stagnation run through the sorts of things discussed here: (a) population growth; (b) \"effective population\" growth (e.g., making it possible for more people, especially in poor countries, to become scientists and artists); (c) artificial intelligence. Cultural and institutional reform could help too, but they frankly seem like smaller potatoes than (a)-(c).\nWe should think of scientists and artists as \"discovering\" or even \"mining\" ideas rather than as \"creating\" them. Intellectual property law and norms should be consistent with this. In particular, I think we'd see more and better art and entertainment if we did more to encourage explicit riffing on existing works.\nStagnation by default\nI previously wrote about a growth economics paper by Charles I. Jones, discussing the implications of \"semi-endogenous growth models,\" in which: (a) the more people there are trying to produce new ideas, the more new ideas we get; (b) but at the same time, ideas get harder to find.\nI think this paper contains the most important implications of \"ideas get harder to find\":\nBy default, we should expect idea production to fall over time.\nThe main thing that can push against this dynamic is a fast enough increase in the number of people trying to have ideas.\nThis could be caused by population growth, or by \"effective population\" growth: growth in the number of people who have a \"decent shot\" at being innovators (something like: they have adequate nutrition, education, and access to people and cultures in which they can learn to innovate).\nWe might get \"effective population growth\" if we could do a better job making \"innovator\" paths open to more people - for example, by lowering barriers for people in poor countries, or for women in currently-male-dominated fields. Improving culture and institutions could help with this too.\nIn the long run, there's only so high the \"percentage of the population with a decent shot at being innovators\" can go. So in the long run, if we want to keep up the pace of innovation, we need growth in the overall population - or something that creates a similar effect, such as advanced AI.\nSomething like advanced AI could cause an explosion in innovation. Without something like that, we're probably eventually looking at stagnation.\nOverall, I think that today's discourse around \"ideas get harder to find\" is overly obsessed with improving culture and institutions, as opposed to increasing the sheer number of people with a shot at being innovators - which I see as a likely larger and more sustained route to increased innovation.\nInnovation as mining\nAnother implication of \"ideas get harder to find\" is a bit softer and more metaphorical. It has to do with the way we think about innovators and their role in the world. \nOne metaphor for innovation is that of \"conjuring ideas from thin air.\" I think this is the default way people tend to think of innovation. They tend to imagine that the world's enjoyment of an idea is entirely thanks to the person who had it. \nAn example of this attitude would be ScienceHeroes.com, which credits each scientist with saving as many lives as their technology ever saved (it doesn't just credit them for speeding up the technology).\nI think this attitude tends to be even stronger with artists: copyrights last an incredibly long time, and there is a general attitude that artists have the absolute moral right to control how their work (including characters, fictional universes, etc.) is used.\nThe metaphor I prefer is that of mining: when an innovator publicizes an idea, they're speeding up how fast the world benefits from the idea, but if it weren't for them, someone else would have had a chance to come up with something similar. I think this applies significantly to art as well as science: as discussed previously, many of the most acclaimed works of art are the sort of thing that \"only could have been done once.\"\nA central contrast for the \"conjuring\" vs. \"mining\" frames is how you think legendary innovators of the past would fare in today's world, where (due to past innovation) ideas are harder to find. I think people tend to default to assuming that a clone of Shakespeare or Beethoven, transplanted to the modern world, would achieve the same sort of Shakespeare- or Beethoven-like stature. I doubt this, and would guess that a modern-day Shakespeare would at best achieve a career like Aaron Sorkin's or something.\nWhen viewing innovators through a \"mining\" lens, we should ask questions like:\nHow much did an innovator speed up (not \"create\") an idea, and how helpful was this? For example, being very far ahead of one's time should perhaps be considered a miss, instead of being considered extra impressive: if an idea sat around unused for a long time, this implies that someone else could've had a similar idea in that time without the world missing out on anything in particular.\nWhat did an innovator do with the intellectual property they laid claim to - did they make the most of it? George Lucas is revered for creating the Star Wars universe - but Star Wars works after the first three movies have been lackluster, and if someone else had created something like Star Wars, they might have done a better job managing the intellectual property from there. Maybe we should judge Lucas negatively for \"mining and mismanaging\" Star Wars, as opposed to positively for \"creating\" it?\nNone of this is meant to question the brilliance it takes to be first to find and develop an exciting idea, or the huge value that can come of it. I'm pointing to a subtle shift in our model. But I think it's a potentially important one.\nWant more and better art? Normalize riffing on past work\nI think of art and science as having a lot in common. \nIn both cases, an innovator puts some amount of work into developing an idea, and then once it's developed it can be freely understood, used and enjoyed by unlimited numbers of people. \nIn both cases, innovators can build on each others' ideas, but still, it seems that ideas get overall harder to find over time as the \"low-hanging fruit\" is picked.\nBut it seems to me that science has much healthier intellectual property norms.In science, it's understood that just because someone had an insight, this doesn't mean that the insight is their property forever. A lot of scientific ideas can be cited, built upon and even used commercially without needing to pay royalties or be apologetic. (Some ideas are protected by patents, but these patents tend to be much more limited and short-lived protections than copyright.) By contrast, if you want to write your own stories building on someone else's characters and fictional universe, you're confined to a low-status genre with no hope of commercialization.\nImagine a world in which we saw \"protection of artistic intellectual property\" as a necessary evil to get the economics and incentives right, rather than as a matter of justice for the creator who morally \"owns\" their ideas. In this world, music, fiction, etc. would be protected long enough for financial purposes, but would quickly become fair game for other artists to build upon in whatever way they want - sequels, prequels, extensions, anything as long as they gave attribution. \nI think this could be particularly useful for getting more art that is simultaneously accessible and innovative.\nToday, if you want to write a space opera, you need to \"dance around\" some of the classic plot points, character traits, fictional technologies, and other ideas from e.g. Star Wars. You either need to avoid these completely (thus ensuring your work won't feel derivative and stale, but missing out on ideas that have broad appeal), or go ahead and copy important things in a way that feels stale and cheapens the overall feel of the work. \nOnly people with the intellectual property rights to make sequels can fully, explicitly acknowledge and extend the ideas in Star Wars, getting a chance to make something fresh yet recognizable. (In the case of Star Wars, the few people with this opportunity don't seem to have made the most of it.) \nChanging norms around artistic intellectual property seems to me like a promising route to getting more art that is both accessible and innovative. Much more promising, in my view, than trying to figure out what we can learn from Elizabethan or ancient Greek culture.\nNext in series: Reader reactions and update on \"Where's Today's Beethoven\"", "url": "https://www.cold-takes.com/why-it-matters-if-ideas-get-harder-to-find/", "title": "Why it matters if \"ideas get harder to find\"", "source": "cold.takes", "source_type": "blog", "date_published": "2022-01-11", "id": "ac026d9e17058822bb013b5e70cfcf0b"} -{"text": "\nIn Where's Today's Beethoven?, I argued that:\nAcross a variety of areas in both art and science, we see a form of \"innovation stagnation.\" That is: the best-regarded figures are disproportionately from long ago (for example, Mozart and Beethoven are considered by many to be the greatest musicians of all time), and our era seems to \"punch below its weight\" when considering the rise in population, education, etc.\nThe best explanation for this is what I call the \"innovation as mining\" hypothesis: ideas naturally get harder to find over time, and we should expect art and science to keep slowing down by default.\nI've encountered somewhat polarized reactions to the latter idea. Some people consider it obvious, while others find it absurd. The latter argue something like:\nI can see why ideas would get harder to find in science - there's only so much knowledge out there to be had, and perhaps people tend to find the most important, transformative insights first.1\nBut in art? Why should it be harder to write something as good as Beethoven's 5th symphony (aka \"Dum Dum Dum Dummmmmm\") in 2021 than it was in 1788? We don't love Beethoven's 5th because it 'invented' something, we love it because it sounds good. What's stopping some writer today from just writing a play, novel, movie or even TV episode as good as Hamlet? \nIf our era punches below its weight in great art, that's civilizational decline, not some fundamental 'ideas get harder to find' dynamic.\nI disagree. This post will outline why. (Future posts will go through some other hypotheses about how the dynamic works, including from readers and commenters.)\nFirst, I'll give a working definition of what it means for art to be great. We can't get this discussion off the ground if we take an attitude like \"There's no 'better' or 'worse' in art.\" But I don't think we need to believe that some art is objectively greater than other art; we can simply use a definition of greatness that is according to a particular audience. \nBy this definition, I would guess (though I don't have direct data on this) that Beyonce is preferred to Beethoven by the average American listener in 2021, while Beethoven is considered greater by the average scholar of music history. \nNext, I will argue that our interpretation and enjoyment of a given piece of art is heavily influenced by our knowledge of its relationship to other art. The simplest case here is that we tend not to appreciate art when we see it as too similar to something that already exists.\nThese are pretty much the only premises we need to see how artistic ideas can get harder to find. If you're trying to make art that is great for audience X, and you can't do anything too similar to what audience X is already familiar with, then each existing piece of art that audience X appreciates can make your job harder.2 So as more great-according-to-audience-X art is created, the creation of further great-according-to-audience-X art becomes harder.\nWith this framework in mind, I'll ask: \"Why don't we see anyone today creating something like Beethoven's 5th or Hamlet?\"\nThis post is filled with guesses and hypotheses. I'll close with some ideas on how one could test them.\nNote: I'll be using the term \"art\" here very broadly. My intended use can include music, fiction, film, TV shows, and even video games - anything that could be considered great art, even if it is originally intended as mass entertainment (which I believe much of history's most acclaimed art, such as Shakespeare's plays, was).\nWhat does it mean for art to be great? A working answer\nI've had many exhausting conversations in my life about whether some art is \"greater\" than other art. (In one, my dad insisted that the sandwich he was eating was a great work of art, and refused to concede that it was less great than Beethoven's 9th symphony.) Many people seem to default to one of two positions, neither of which seems right:\n\"Some art is better than other art, and that's that. If you like Beyonce better than Beethoven, you're wrong.\"\n\"Hey hey hey, it's all just about what you enjoy! If you like the song Holden recently composed to calm down his infant in an airport better than Beethoven, who am I or anyone else to argue? There's no point in all this discussion of whether art has declined, because there are no differences.\"\nI don't believe in the objective greatness of art, but I also am not a fan of simply dismissing interesting-seeming questions like \"where's today's Beethoven?\" I think we can do better by simply defining \"greatness\" of art relative to a specific audience: art is \"greater for audience X\" if audience X judges it to be greater, end of story. \nEnjoyment of art is contextual\nI believe we interpret, and even enjoy, art and entertainment through a lens that is partly checking for originality/authenticity. Knowledge that some similar piece \"came first\" can interfere with our enjoyment. (This is different from saying that we are always evaluating art by how innovative it is - it's a weaker claim that art with too much similarity to other work we know of is harder to enjoy.)\nThis claim initially strikes many as silly: it seems like we watch a movie like Star Wars and just enjoy it, without going through any such intellectual exercise.\nBut now imagine that you sit down to watch some new film, and you realize as you're watching that it is basically a copy of Star Wars with minor modifications (most of them improvements, at least on a technical level) - as The Force Awakens has been accused of. My guess is that in this case, you end up enjoying the movie significantly less than you'd enjoy a re-watching of the original.\nAnd I think originality is even more important when it comes to professional critical opinion about what the \"greatest\" art of all time is.\nTo broaden this idea a bit, I think we tend to subconsciously view most art as part of a sort of dialogue with other art. We notice how a book, song, film, etc. resembles and differs from those that came before, both in negative ways (repeating cliches and being \"stale,\" as many forgettable pieces are accused of) and in positive ways (\"extending\" or even \"subverting\" classic stories, e.g. Unforgiven's relationship to classical Westerns or Chinatown's relationship to traditional noir films). Our experience is shaped by this sort of thing - and by broader critical opinion and reputation - while we think we're just experiencing brute enjoyment (or lack thereof) of some work.\nI encounter people who say things like \"When I listen to Beethoven or watch Hamlet, I just see their greatness directly, and modern works can't compare.\" But I suspect these people are in fact making unconscious adjustments - for example, perceiving certain simple and classic plot points, chord structures, etc. as \"fresh\" in these pieces that they'd perceive as \"stale\" in context of something more modern and less revered. \nAs an aside, I think the points I’ve made above open the door for a meaningful distinction between \"lowbrow\" and \"highbrow\" art, without having to say the latter is objectively better:\nI’ve argued that our appreciation of art is contextual: we often appreciate the way in which some piece of art is similar to, yet different from, what came before. (I recommend the book Hit Makers as an exploration of the \"similar, yet different\" idea.) \nSo if person A has listened to a lot more music than person B (in some genre, or generally), they're often going to judge specific music differently - they will notice different things about what it resembles, and what it builds on. Same goes if person A generally is quicker to (even subconsciously) hear and notice subtleties in the music they're listening to.\nI think the \"highbrow vs. lowbrow\" distinction maps pretty reasonably to something like: \"art preferred by people who are sharp, notice things quickly, and have listened to a lot of other music such that they can tell how some particular piece is similar to vs. different from related work\" vs. \"art preferred by people with less of these qualities.\"\nArtistic ideas get harder to find\nIf you're trying to make art that is great according to audience X, and you can't do anything too similar to what audience X is already familiar with, the natural default consequence is that \"ideas get harder to find.\" For every great work of art that has come before your attempt, there's more that you \"can't do\" (it will be seen as unoriginal); this constrains your option space and makes it harder to create something both new and \"great.\" \nIf not for this phenomenon, sequels should generally be better than the original - they get to copy over whatever worked well from the original plus whatever new things the creator thinks of. (And in software, where the point is usefulness instead of originality, sequels almost always are better than the original!) But sequels are usually worse than the original,3 even when (as is usually the case) they're made by the same people working with larger budgets.\nThere is, in some cases, an offsetting phenomenon. As noted above, existing works of art give you \"more to work with\": more classic themes to reference, extend, and subvert. For example, the existence of Romeo and Juliet arguably helped West Side Story. \nI would guess how these factors net out depends on the audience:\nWhen your only goal is pleasing hardcore fans looking for innovation, artistic ideas might not get harder to find. \nIn this case, you can count on your audience being familiar with everything that came before your work, and you can expect them to appreciate it when you focus on referencing/extending/subverting other works, without copying or \"plagiarizing\" good ideas from the past. \nBut at the same time, less experienced listeners may have no idea what's going on, and the lack of \"classic\" plot points, chord structures, etc. (which you refused to copy - you looked only to extend and subvert the cutting edge) could leave them lost. I think this is why the highest-brow modern music - acclaimed modern classical and jazz, and even rock - tends to wow some people while sounding like garbage to most.\nYears ago, I was asking people I know what they thought about the question \"Where's today's Beethoven?\", and I specified that I was looking for someone whose work was \"awe-inspiring and brilliant,\" not just \"fun to listen to.\" Luke Muehlhauser responded that he thinks Captain Beefheart meets these criteria, especially on his masterpiece album \"Trout Mask Replica.\" (I think this is a common opinion among intense, innovation-focused rock critics.) Go on, give it a listen.\nWhen your only goal is broad commercial success, ideas might not get harder to find. \nIn this case, you can count on your audience not being familiar with a lot of classic ideas, and/or not holding it against you too much when they are. For example, Under Pressure didn't stop Ice Ice Baby. (That said, I expect that too much copying is usually fatal, if only by bothering music labels, producers, critics, etc.)\nAnd you benefit from a preference for recent art - people like what is \"new and hot.\" \nBut for many \"combination\" or \"middle-ground\" goals, I think artistic ideas get harder to find. \nI think the kind of acclaim that Beethoven, Shakespeare, etc. are known for falls in this general category. \nTheir work is known as innovative and original, not stale. Even the most knowledgeable critics don't (and can't) accuse them of ripping others off.\nAt the same time, their work is accessible enough that it can be taught to students, can be performed often, and appeals to a wide range of critics (not just the people who have just the right sort of listening history to love Beefheart). The work contains ideas that are simultaneously \"classic\" (pleasing many) and \"new\" (not copied from someone else); to get both of those at once, I think being earlier in time is an advantage.\nSo where's today's version of Beethoven's 5th?\nA common response I've gotten to the above sort of reasoning is: \"OK, but: I really like listening to Beethoven's 5th, and if people made more music like that, I wouldn't howl that it's unoriginal - I'd buy it! So: where's today's Beethoven's 5th?\"\nI sort of have two answers here.\nAnswer one:\nYou say you like Beethoven's 5th, but there's probably a bunch of other music you enjoy more. If not, you're probably unusual. That is: there are a lot of other kinds of music that are better investments for people who just want to provide enjoyment / make money. (Not least because Beethoven's 5th requires an orchestra!)\nAnd as noted above, if someone wrote Beethoven-like music today, it's unlikely they'd impress critics and gain a following the way Beethoven did.\nSo, today's Beethoven would just be better off making some other sort of music. Sorry.\nAnswer two:\nActually, I can think of pieces from the last ~50 years that have pretty much all the properties you probably like about Beethoven's 5th. They're orchestral, they have a grand and \"classic\" sound, they are complex and took intellectual work to create, they are critically acclaimed (though much less so than Beethoven), and they are popular - even iconic! Here's an example.\nAs I was writing this piece, I tried listening to \"similar artist radio\" for that composer, and I have to say it was a very similar experience to listening to Beethoven. At first it was very exciting, and I really enjoyed the contrast with more modern-sounding music: this type of music tends to be more overtly ambitious and dramatic, with less of an \"ironic\" sound and more complex instrumentation. But, honestly after an hour or so it was getting pretty old and I switched back to my usual stuff.\nSo yeah, basically the two-word answer to your question is Star Wars.\nTesting these speculations\nThis post has contained a lot of speculation. If one really cared about the answer to \"Where's today's Beethoven?\" - and its implications for whether there's a past \"Golden Age\" whose secret we should be trying to find again, or whether there's an \"innovation as mining\" dynamic that means we should expect more stagnation in the future - it seems possible in principle to investigate some of the hypotheses I give above. \nFor example, one could test the dynamics of “contextual interpretation of art” by trying for \"blind comparisons\" of acclaimed classic works with less-acclaimed modern ones - for example, well-regarded but not universally-known paintings from the past, vs. attempts by modern artists to do similar work.\nOne could also try to experiment with whether people rate film/song/book/painting A more highly than film/song/book/painting B when they believe that \"A came first and B was influenced by it\" vs. that \"B came first and A was influenced by it.\" \nI'll close on a personal example of the latter. Willow is widely seen as an unremarkable movie recycling themes from other stories such as Lord of the Rings. But I saw it early in childhood, before exposure to the things it recycles, and I’ve always instinctively thought of it as an amazing, classic movie that other fantasies are pale imitations of. (I even think of its music as classic.) By now I intellectually know that that isn't its reputation, but I have trouble remembering this. I'm also less enthusiastic about Lord of the Rings than most people, maybe because it didn't feel as fresh to me since I encountered it after Willow. Weird, huh?\nNext in series: Why it matters if ideas get harder to findFootnotes\n For example, Newton's laws of physics are reasonably accurate in most cases; the Standard Model is extremely accurate in nearly all cases; any further physics is just going to be getting smaller and smaller improvements in accuracy in more and more exotic cases. Although some of these 'exotic cases' could turn out to be of high practical importance, as they have been for e.g. the transistor and nuclear power. ↩\n As noted below, it might also make your job easier, by giving you more \"material to play with\": perhaps you can make art that the audience appreciates for the way it extends or riffs on previous work. But whether this works depends a lot on the audience. ↩\n Citation needed ¯\\_(ツ)_/¯  ↩\n", "url": "https://www.cold-takes.com/how-artistic-ideas-could-get-harder-to-find/", "title": "How artistic ideas could get harder to find", "source": "cold.takes", "source_type": "blog", "date_published": "2022-01-07", "id": "a5edc4cd5862d29f57aadc042f336e86"} -{"text": "\nI've found it really helpful to run early drafts of Cold Takes posts by people who haven't talked with me about the relevant topics, don't have much background on them, etc. I often learn about things that are \"off\" about the way I'm trying to present things, and I think it ends up improving the posts a lot.\nI'm seeking more \"beta readers\" now. The way this works is that if you're a beta reader, you'll receive early drafts of blog posts, with about a week to read them and give feedback using a form like this one.\nIf you plan on reading the posts anyway, being a beta reader shouldn't take much extra time. The benefits will be (a) the warm glow of helping me make Cold Takes pieces better for everyone else; (b) maybe sometimes you'll get \"exclusive content\" in the sense that I decide not to run something after sending it to beta readers. Hey - since Cold Takes is free, this is the closest you can get to a premium subscription!\nI have no idea how much interest there will be, so this is just for initial expressions of interest and I'll figure out a plan from there.\nIf you might be interested, please fill out this form - thanks!\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/seeking-beta-readers/", "title": "Seeking beta readers", "source": "cold.takes", "source_type": "blog", "date_published": "2022-01-06", "id": "38c9e10d19601a39817866bd141f363f"} -{"text": "\nA couple of organizations I’m involved with have recently put out some cool papers relevant to the AI alignment problem, which I’ve emphasized the importance of for how the most important century might go. \nI think some readers will find this post too weedsy, but I get a lot of questions like “So what are people literally actually doing to reduce AI risk?” and this post has some answers.\n(That said, you should not read this as a \"comprehensive survey of AI alignment research,\" even of recent research. Consistent with other \"link posts\" on this blog, this is me sharing stuff I've come across and read, and in this case I've read the papers partly because of personal connections to the organizations.)\nEliciting Latent Knowledge\nAlignment Research Center (ARC), a two-person organization I’m on the board of, released a paper on Eliciting Latent Knowledge (ELK), which I’d summarize as “trying to think of theoretically robust ways to train an AI such that it will ‘tell the truth,’ even in a sort of worst-case situation where it would be easy by default for it to fool humans.” \nThe paper largely takes the form of a “game” between a “builder” (who proposes training strategies that might work) and a “breaker” (who thinks of ways the strategies could fail), and ARC is offering cash prizes for people who can come up with further “builder” moves.\nThe heart of the challenge is the possibility that when one tries to train an AI by trial-and-error on answering questions from humans - with \"success\" being defined as \"its answers match the ones the human judges (possibly with AI assistance) think are right\" - the most simple, natural way for the AI to learn this task is to learn to (a) answer a question as a human judge would answer it, rather than (b) answering a question truthfully. \n(a) and (b) are the same as long as humans can understand everything going on (as in any tests we might run); \n(a) and (b) come apart when humans can't understand what's going on (as might happen once AIs are taking lots of actions in the world).\nIt's not clear how relevant this issue will turn out to be in practice; what I find worrying is that this seems like just the sort of problem that could be hard to notice (or fix) via experimentation and direct observation, since an AI that learns to do (a) could pass lots of tests while not in fact being truthful when it matters. (My description here is oversimplified; there are a lot more wrinkles in the writeup.) Some of the \"builder\" proposals try to think about ways that \"telling the truth\" and \"answering as a human judge would answer\" might have differently structured calculations, so that we can find ways to reward the former over the latter.\nThis is a theory paper, and I thought it'd be worth sharing a side note on its general methodology as well. \nOne of my big concerns about AI alignment theory is that there are no natural feedback loops for knowing whether an insight is important (due to how embryonic the field is, there isn't even much in the way of interpersonal mentorship and feedback). Hence, it seems inherently very easy to spend years making \"fake progress\" (writing down seemingly-important insights). \nARC recognizes this problem, and focuses its theory work on \"worst case\" analysis partly because this somewhat increases the \"speed of iteration\": an idea is considered failed if the researchers can think of any way for it to fail, so lots of ideas get considered and quickly rejected. This way, there are (relatively - still not absolutely) clear goals, and an expectation of daily progress in the form of concrete proposals and counterexamples.\nI wrote a piece on the Effective Altruism forum pitching people (especially technical people) on spending some time on the contest, even if they think they’re very unlikely to win a prize or get hired by ARC. I argue that this contest represents an unusual opportunity to get one’s head into an esoteric, nascent, potentially crucial area of AI research, without needing any background in AI alignment (though I expect most people who can follow this to have some general technical background and basic familiarity with machine learning). If you know people who might be interested, please send this along! And if you want more info about the contest or the ELK problem, you can check out my full post, the contest announcement or the full writeup on ELK.\nTraining language models to be “helpful, honest and harmless”\nAnthropic, an AI lab that my wife Daniela co-founded, has published a paper with experimental results from training a large language model to be helpful, honest and harmless.\nA “language model” is a large AI model that has essentially1 been trained exclusively on the task, “Predict the next word in a string of words, based on the previous words.” It has done lots of trial-and-error at this sort of prediction, essentially by going through a huge amount of public online text and trying to predict the next word after each set of previous words. \nFrom this simple (though data- and compute-hungry) process has emerged an AI that can do a lot of interesting things in response to different prompts - including acting as a chatbot, answering questions from humans, writing stories and articles in the style of particular authors, writing working code based on English-language descriptions of what the code should do, and, er, acting as a therapist. There’s a nice collection of links to the various capabilities GPT-3 has displayed here.\nBy default, this sort of AI tends to make pretty frequent statements that are false/misleading and/or toxic (after all, it’s been trained on the Internet). This paper examines some basic starting points for correcting that issue. \nOver time, I expect that AI systems will get more powerful and their \"unintended behaviors\" more problematic. I consider work like this relevant to the broader challenge of “training an AI to reliably act in accordance with vaguely-defined human preferences, and avoid unintended behaviors.\" (As opposed to e.g. \"training an AI to succeed at a well-defined task,\" which is how I'd broadly describe most AI research today.)\nThe simplest approach the paper takes is “prompting”: giving the language model an “example dialogue” between two humans before asking it any questions. When the language model “talks,” it is in some sense spitting out the words it thinks are “most likely to come next, given the words that came before”; so when the “words that came before” include a dialogue between two helpful, honest, harmless seeming people, it picks up cues from this.2\nWe provide a long prompt (4600 words from 14 fictional conversations) with example interactions. The prompt we used was not carefully designed or optimized for performance on evaluations; rather it was just written by two of us in an ad hoc manner prior to the construction of any evaluations. Despite the fact that our prompt did not include any examples where models resisted manipulation, refused requests to aid in dangerous activities, or took a stand against unsavory behavior, we observed that models often actively avoided engaging in harmful behaviors based only on the AI ‘personality’ imbued by the prompt.\nSomething I find interesting here: \nAn earlier paper demonstrated a case where larger (“smarter”) language models are less truthful, apparently because they are better at finding answers to questions that will mimic widespread misconceptions.\nThe Anthropic paper reproduces this effect, but finds that the simple “prompting” technique above gets rid of it: \n“Number of parameters” indicates the “size” of the model - larger models are generally considered “smarter” in some sense. “LM” = “Language model,” “LM+prompt” is a language model with the “prompting” intervention described above, and don’t worry about the green line.\n(As noted by Geoffrey Irving, a very similar result appears in a Deepmind paper that came out around the same time. Reminder that this post isn't a comprehensive survey!)\nThe paper examines a few other techniques for improving the “helpful, honest and harmless” behavior of language models. It presents all of this as a basic starting point - establishing basic “benchmarks” for future work to try to improve on. This is very early-stage work, and a lot more needs to be done!\nUnderstand the mechanics of what a neural network is doing\nAnthropic also published Transformer Circuits, first in a series of papers that represents a direct attempt to address a problem I outline here: modern neural networks are very “black-box-like.” They are trained by trial-and-error, and by default we end up with systems that can do impressive things - but we have very little idea how they are doing them, or “what they are thinking.” \nTransformer Circuits is doing something you might think of as \"digital neuroscience.\" It examines a simplified language model, and essentially uses detective work on the model's “digital brain” to figure out which mathematical operations it’s performing to carry out key behaviors, such as: “When trying to figure out what word comes after the current word, look at what word came after previous instances of the current word.” This is something we could've guessed the model was doing, but Transformer Circuits has tried to pin the behavior down to the point where you can follow the \"digital brain\" carrying out the operations to do it. \nIt's hard to say more than that in a layperson-friendly context, since a lot of the paper is about the mechanical/mathematical details of how the \"digital brain\" works. But don't take the lower word count as lower excitement - I think this series of papers is some of the most important work going on in AI research right now. This piece by Evan Hubinger gives a brief technical-ish summary of what’s exciting, as well. \nBreaking down and starting to understand the “black box” of how AI models are \"thinking\" is a lot of work, even for simplified systems, but it seems like essential work to me if these sorts of systems are going to become more powerful and integrated into the economy.\nCredit for feature image: Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0Footnotes\n This is of course a simplification. ↩\n \"Prompting\" is different from \"training.\" The AI has been \"trained\" on huge amounts of Internet content to pick up the general skill: \"When prompted with some words, predict which words come next.\" The intervention discussed here is adding to the words it is \"prompted with\" that are giving its already-trained prediction algorithm clues about what comes next. ↩\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/ai-alignment-research-links/", "title": "AI alignment research links", "source": "cold.takes", "source_type": "blog", "date_published": "2022-01-05", "id": "dd68648303da98133d80e927444eab8f"} -{"text": "This piece kicks off a short series inspired by this question:\nSay that Beethoven was the greatest musician of all time (at least in some particular significant sense - see below for some caveats). Why has there been no one better in the last ~200 years - despite a vastly larger world population, highly democratized technology for writing and producing music, and a higher share of the population with education, basic nutrition, and other preconditions for becoming a great musician? In brief, where's today's Beethoven?\nA number of answers might spring to mind. For example, perhaps Beethoven's music isn't greater than Beyonce's is, and it just has an unearned reputation for greatness among critics with various biases and eccentricities. (I personally lean toward thinking this is part of the picture, though I think it's complicated and depends on what \"great\" means.1)\nBut I think the puzzle gets more puzzling when one asks a number of related questions:\nWhere's today's Darwin (for life sciences), Ramanujan (for mathematics), Shakespeare (for literature), etc.?\nFifth-century Athens included three of the most renowned playwrights of all time (Aeschylus, Sophocles and Euripides); two of the most renowned philosophers (Socrates and Plato); and a number of other historically important figures, despite having a population of a few hundred thousand people and an even smaller population of people who could read and write. What would the world look like if we could figure out what happened there, and replicate it across the many cities today with equal or larger populations?\n\"Over the past century, we’ve vastly increased the time and money invested in science, but in scientists’ own judgment, we’re producing the most important breakthroughs at a near-constant rate. On a per-dollar or per-person basis, this suggests that science is becoming far less efficient.\" (Source) Can we get that efficiency back? \nI'll be giving more systematic, data-based versions of these sorts of points below. The broad theme is that across a variety of areas in both art and science, we see a form of \"innovation stagnation\": the best-regarded figures are disproportionately from long ago, and our era seems to \"punch below its weight\" when considering the rise in population, education, etc. Since the patterns look fairly similar for art and science, and both are forms of innovation, I think it's worth thinking about potential common factors.\nBelow, I will:\nList the three main hypotheses people offer to answer \"Where's Today's Beethoven?\": the \"golden age\" hypothesis (people in the past were better at innovation), the \"bad taste\" hypothesis (Beethoven and others don't deserve their reputations), and the \"innovation as mining\" hypothesis (ideas naturally get harder to find over time, and we should expect art and science to keep slowing down by default). Importantly, I think each of these has interesting and not-widely-accepted implications of its own.\nExamine systematic data on trends in innovation in a number of domains, bringing together (a) long-run data on both art and science over hundreds of years and more; (b) recent data on technology and more modern art/entertainment genres (film, rock music, TV shows, video games). I think this is the first piece to look at this broad a set of trends of this form.\nBriefly explain why I favor the \"innovation as mining\" hypothesis as the main explanation for what we're seeing across the board.\nDo some typical \"more research needed\" whining. Since any of the three hypotheses has important implications, I think \"Where's today's Beethoven?\" should be a topic of serious discussion and analysis, but I don't think there is a field consistently dedicated to analyzing it (although there are some excellent one-off analyses out there).\nFuture pieces will elaborate on the plausibility of the \"innovation as mining\" hypothesis - and its implications. Those pieces are: How artistic ideas could get harder to find, Why it mattrs if ideas get harder to find, \"Civilizational decline\" stories I buy, Cost disease and civilizational decline (the latter two are not yet published).\nThree hypotheses to answer \"Where's Today's Beethoven?\"\nSay we accept - per the data I'll present below - that we are seeing \"innovation stagnation\": the best-regarded figures are disproportionately from long ago, and our era seems to \"punch below its weight\" when considering the rise in population, education, etc. What are the possible explanations?\nThe \"golden age\" hypothesis\nThe \"golden age\" hypothesis says there are one or more \"golden ages\" from the past that were superior at producing innovation compared to today. Perhaps understanding and restoring what worked about those \"golden ages\" would lead to an explosion in creativity today. \nIf true, this would imply that there should be a lot more effort to study past \"golden ages\" and how they worked, and how we could restore what they did well (without restoring other things about them, such as overall quality of life). \nI generally encounter this hypothesis in informal contexts, with a nostalgic vibe - a sort of pining for the boldness and creativity of the past.2\nInterestingly, I've never seen a detailed defense of this hypothesis against the two main alternatives (\"bad taste\" and \"innovation as mining,\" below). Some of the people who have written the most detailed pieces about \"innovation stagnation\" seem to believe something like the \"golden age\" hypothesis - but they seem to say so only in interviews and casual discussions, not their main works.3\nAs I'll discuss below, I don't think the \"golden age\" hypothesis fits the evidence we have as well as \"innovation as mining.\" But I don't think that's a slam dunk, and the \"golden age\" hypothesis seems very important if true. \nThe \"bad taste\" hypothesis\nThe \"bad taste\" hypothesis says that conventional wisdom on what art and science were \"great\" is consistently screwed up and biased toward the past. \nIf true, this means that we're collectively deluded about what scientific breakthroughs were most significant, what art deserves its place in our culture, etc. \nThis hypothesis is often invoked to explain the \"art\" side of innovation stagnation, but it's a more awkward fit with the \"science\" side, and I think a lot of people just have trouble swallowing it when considering music like Beethoven's. I do think it's an important part of the picture, but not the whole story.\nThe \"innovation as mining\" hypothesis\nThe \"innovation as mining\" hypothesis says that ideas naturally get harder to find over time, in both science and art. So we should expect that it takes more and more effort over time to maintain the same rate of innovation.\nThis hypothesis is commonly advanced to explain the \"science and technology\" aspect of innovation stagnation. It's a more awkward fit with the \"art\" side. \nThat said, my view is that it is ultimately most of the story for both (and my next post will discuss just how I think it works for art). And this is important, because I think it has a number of underappreciated implications:\nWe should expect further \"innovation stagnation\" by default, unless we can keep growing the number of innovators. As discussed here, population growth and artificial intelligence seem like the most likely ways to be able to sustain high rates of innovation over the long run (centuries or more), though other things might help in the shorter run.\nHence, our prospects for more innovation in both science and art could depend more on things like population growth, artificial intelligence, and intellectual property law (more on this in a future post) than on creative individuals or even culture. \nFinally, this hypothesis implies that a literal duplicate of Beethoven, transplanted to today's society, would be a lot less impressive. My own best guesses at what Beethoven and Shakespeare duplicates would accomplish today might show up in a future short post that will make lots of people mad.\nData on innovation stagnation\nBelow, I provide a number of charts looking at the \"production of critically acclaimed ideas\" over time. \nI give details of my data sets, and link to my spreadsheets, in this supplement. Key points to make here are:\nIn general, I am using data sets based on aggregating opinions from professional critics. (An exception is the technological innovation data from Are Ideas Getting Harder to Find?) This is because I am trying to answer the \"Where's today's Beethoven?\" question on its own terms: I want data sets that reflect the basic idea of people like Beethoven and Shakespeare being exceptional. This comports with professional critical opinion, but not necessarily with wider popular opinion (or with my opinion!)\nAs such, I think the charts I'm showing should be taken as showing trends in production of critically acclaimed ideas, with all of the biases (including Western bias) that implies, rather than as showing trends in production of \"objectively good\" ideas. In some cases, the creators of the data sets I'm using believe their data shows the latter; but I don't. Even so, I think falling production of critically acclaimed ideas is a type of \"innovation stagnation\" that deserves to be examined and questioned, while being open to the idea that the explanation ends up being bad taste.\nI generally chart something like \"the number of works/discoveries/people that were acclaimed enough to make the list I'm using, smoothed,4 by year.\" As noted below, I've generally found that attempting to weight by just how acclaimed each one is (e.g., counting #1 as much more significant than #100) doesn't change the picture much; to see this, you can check out the spreadsheets linked from my supplement.\nIn this section, I'm keeping my interpretive commentary light. I am mostly just showing charts and explaining what you're looking at, not heavily opining on what it all means. I'll do that in the next section.\nScience and art+literature+music, 1400-1950\nFirst, here are the number of especially critically acclaimed figures in art, literature, music, philosophy, and science from 1400-1950. (This data set actually runs from 800 BCE until 1950; my supplement shows that the \"critical acclaim scores\" over this period are dominated by ancient Greece (which I discuss below) and by the 1400-1950 period in particular countries, and I'm charting the latter here.)\nBlue = science, red = art + literature + music\nAnd here is a similar chart, but weighted by how acclaimed each figure is (so Beethoven counts for more than Prokofiev or whoever, even though they're both acclaimed enough to make the list):\nA couple of initial observations that will be recurring throughout these charts:\nFirst, as mentioned above, it doesn't matter that much whether we weight by level of acclaim (e.g., count Beethoven about 10x as high as Prokofiev, and Prokofiev 10x higher than some others), or just graph the simpler idea of \"How many of the top 100-1000 top people were in this period?\" (which treats Beethoven and Prokofiev as equivalent). In general, I will be sticking to the latter throughout the remaining charts, though I chart both in my full spreadsheet (they tend to look similar).\nSecond: so far, there's no sign of innovation stagnation! Maybe the single greatest musician or artist was a long time ago, but when we are more systematic about it, the total quantity of acclaimed music/art has gone up over time, at least up until 1950. The question is whether it's gone up as much as it should have, given increases in population, education, etc.\nSo next, let's chart critically acclaimed figures per capita, that is, adjusted for the total population in the countries featured:\nThis still doesn't clearly look like innovation stagnation - maybe you could say there was a \"golden age\" in art/lit/music from around 1450-1650, with about 50% greater \"productivity\" than the following centuries, but meh. And science innovation per capita looks to have gone up over time.\nTo see the case for innovation stagnation, we have to go all the way to adjusting for the \"effective population\": the number of people who had the level of education, health, etc. to have a realistic shot at producing top art and/or science. This is a very hard thing to quantify and adjust for!\nI've created two estimates of \"effective population growth,\" based on things like increases in literacy rates, increases in urbanization, decreases in extreme poverty, and increases in the percentage of people with university degrees. My methods are detailed here. (My two estimates mostly lead to similar conclusions, so I'll only be presenting one of them here, though you can see both in my full spreadsheet.)\nSo here are the total number of acclaimed figures in science and art+lit+music, adjusted for my \"effective population\" estimate:\n(Ignore the weird left part of the blue line - in those early days, there was a low effective population and a low number of significant figures, which sometimes hit 0, which looks weird on this kind of chart.)\nAhh. Finally, we've charted the decline of civilization! \n(It doesn't really matter exactly how the effective population estimate is done in this case - as shown above, per-capita \"productivity\" in art and science was pretty constant over this time, so any adjustment for growing health/nutrition/education/urbanization will show a decline.)\nThis dataset ends in 1950. Seeing what happened after that is a bit challenging, but here we go.\nTechnological innovation, 1930-present\nNext are charts on technological innovation since the 1930s or so, from the economics paper Are Ideas Getting Harder to Find? These generally look at some measure of productivity, alongside the estimated \"number of researchers\" (I take this as a similar concept to my \"effective population\").\nFirst, overall aggregate total factor productivity growth in the US: \nIt's not 100% clear how to compare the units in this chart vs. the units in previous charts (\"number of acclaimed scientists per year\" vs. \"growth in total factor productivity each year\"),5 but the basic idea here is that the \"effective population\" (number of researchers) is skyrocketing while the growth in total factor productivity is constant - implying, like the previous section, that we're still getting plenty of new ideas each year, but that there's \"innovation stagnation\" when considering the rising effective population.\nThe paper also looks at progress in a number of specific areas, such as life expectancy and Moore's Law. The trends tend to be similar across the board; here is an example set of charts about agricultural productivity:\nThere's more discussion and some additional data at this post from New Things Under the Sun.\n20th century film, modern (rock) music, TV shows, video games\nWhat about art+literature+music after 1950?\nThis one is tricky because of the way that entertainment has \"changed shape\" throughout the century. \nFor example:\nMy understanding is that visual art (paintings, sculptures, etc.) used to be the obvious thing for a \"visual innovator\" to do, but now it's an increasingly niche kind of work. \nThe closest thing I've found to recent \"data on critically acclaimed visual art\" is this paper on the \"most important works of art of the 20th century,\" which only lists 8 works of art (6 of them between 1907-1919 - see Table 1).\n20th-century visual innovators may instead have worked in film, or perhaps TV, or video games, or something else.\nA lot of the demand that \"literature\" used to meet is also now met by film, TV, and arguably video games.\nMusic is tough to assess for a similar reason. Mainstream music in the 20th century has mostly not been orchestral; instead it's been the sort of music covered on this list. Some people would call this \"rock\" or \"pop,\" but others would insist that many of the albums on that list are neither; in any case, there's no credible ranking I've been able to find that considers both Beethoven and Kanye West.\nSo to take a look at more recent \"art,\" I've created my own data sets based on prominent rankings of top films, music albums, TV shows, and video games (details of my sources are in the supplement).\nFirst let's look at the simple number of top-ranked films, albums, video games and TV shows each year, without any population adjustment:\nFilms (# in top 1000 released per year, smoothed)\nAlbums (# in top 1000 released per year, smoothed)\nTV shows (# in top 100 started6 per year, smoothed)\nVideo games (# in top 100 released per year, smoothed)\nInterestingly, an earlier version of these charts using only the top 100 films and albums had albums picking up just as films were falling off, and then TV and video games picking up just as albums were falling off. That's not quite what we see with my updated charts. But still, here are the four added together:\nFilms, albums, TV shows, video games: summed % of the top 100-1000 that were released each year (smoothed)\nNow for the versions adjusted for \"effective population.\" I think the \"effective population\" estimates are especially suspect here, so I don't particularly stand behind these charts, but they were the best I could do:\nFilms (# in top 1000 released per year, smoothed and divided by an \"effective population\" index)\nAlbums (# in top 1000 released per year, smoothed and divided by an \"effective population\" index)\nTV shows (# in top 100 that started per year, smoothed and divided by an \"effective population\" index)\nVideo games (# in top 100 released per year, smoothed and divided by an \"effective population\" index)\nFilms, albums, TV shows, video games: % of the top 100-1000 that were released each year (smoothed and divided by an \"effective population\" index)\nBooks: the longest series I have\nI wasn't really sure where to put this part, but the only data set I have that is measuring the same thing from 1400 up to the present day is the one I made from Greatest Books dataset:\nI think the drop at the end is probably just because more recent books haven't had time to get onto the lists that website is using.\nHere's the version adjusted for effective population:\nThe big peak for fiction specifically around 1600 is heavily driven by Shakespeare - here's the same data for fiction, but excluding him:\nInterpretation\nThe general pattern I'm seeing above is:\nIn absolute terms, we seem to have generally flat or rising output in both \"critically acclaimed art/entertainment\" and \"science and technology.\" (The exceptions are film and modern music; I think different people will have different interpretations of the fact that these decline just before TV and video games rise.7)\nIn effective-population-adjusted terms, we generally see pretty steady declines after any given area hits its initial peak.\nTo me, the most natural fit here for both art and science is the \"innovation-as-mining\" hypothesis. In that hypothesis:\nThe basic dynamic is that innovation in a given area is mostly a function of how many people are trying to innovate, but \"ideas get harder to find\" over time. \nSo we should often expect to see the following, which seems to fit the above charts: a given area (literature, film, etc.) gains an initial surge in interest (sometimes due to new things being technologically feasible, sometimes for cultural reasons or because someone demonstrates the ability to accomplish exciting things); this leads to a surge in effort; there's lots of low-hanging fruit due to low amounts of previous effort; so output is very high at first, and output-per-person declines over time.\nI think \"bad taste\" is part of the story too, but I don't think it can explain the patterns in science and technology (or why they are so similar to the patterns in art and entertainment). A separate post goes into more detail on how I see \"bad taste\" interacting with \"innovation as mining.\"\n\"Golden age\" skepticism\nI'm quite skeptical that a \"golden age\" hypothesis - in the sense that some earlier culture was doing a remarkably good job supporting innovators, and in the sense that copying that culture would lead to more output today - has anything to add here. Some reasons for this:\nNo special evidential fit. I think the \"innovation as mining\" hypothesis is a good simple, starting-point guess for what might be going on. Most people find it intuitive that \"ideas get harder to find\" in science and technology; I think intuitions vary more on art, but I think the same idea basically applies there too, as I argue here.\nAnd I don't see anything in the data above that is way out of line with this hypothesis. \nFor example, in most charts, the only \"golden age\" candidate comes with the first spike in output, with declining \"productivity\" from there - consistent with the idea that earlier periods generally have an advantage. We see few cases of a late spike that surpasses early spikes, which would suggest a \"golden age\" whose achievement can't simply be explained by being early.8 (Generally, I'd just expect more choppiness if most of the story were \"variations in culture\" as opposed to \"ideas getting harder to find.\")\nAs discussed in the supplement, I also looked for signs of a \"golden place\" - some particular country that dramatically outperformed others - and didn't find anything too compelling.\nFor the most part, the decline in \"productivity\" for both art and science looks pretty steady (with exceptions for modern art forms whose invention is recent). You could try to tell a story like \"The real golden age was 1400-1500, and it's all been steadily, smoothly downhill since then,\" but this just doesn't seem intuitively very likely.\nNo clear mechanism. I hear a lot of informal pining for a \"golden age of innovation,\" but I've heard little in the way of plausible-sounding explanations for what, specifically, past cultures might have done better. \nFor science and technology, I've occasionally heard speculation that the modern academic system is stifling, and that innovators would be better off if they were independently funded (through their own wealth or a patron's) and free to explore their curiosity. But this doesn't strike me as a good explanation for innovation stagnation:\nI'd guess that there are far more people today (compared to the past), as a percentage of the population, who are financially independent and in a position to freely explore their curiosity. With increasing wealth inequality, there are also far more potential patrons. So for academia to be the culprit, it would need to be drawing in a very large number of people who formerly could and would have freely explored their curiosity, but now choose to stay in academia and play by its rules. This seems far-fetched to me. \nI also note that the scientific breakthroughs we do see in modern times seem to mostly (though not exclusively) come from people with traditional expert credentials. If the \"freely explore one's curiosity\" model were far superior, I'd expect to see it leading to more, since again, there are plenty of people who can use this model.\nAdditionally, this explanation seems particularly ill-suited to explain why art and science seem to have seen the same pattern - I don't see any equivalent of \"academia\" for musicians or literary writers. (You could argue that TV and film force artists to endure more bureaucracy, as those art forms are expensive to produce. But the \"decline\" in art predates these formats.)\nThis isn't a denial of the ways in which academia can be stifling and dysfunctional. I just don't think the rise of academia is a strong candidate explanation for a fall in per-capita innovation.\nI do think it's probably true that the past had more innovators whose contributions cut across disciplinary lines, and whose fundamental style and manner could be described as \"nonconformist freethinker generating concepts\" rather than \"intellectual worker bee pursuing specific narrow questions.\" \nI think this is a function of the \"innovation as mining\" dynamic: a greater share of the innovations within reach today are suited to be reached by \"intellectual worker bee\" types, as opposed to \"nonconformist freethinker\" types, due to the larger amount of prerequisite knowledge one has to absorb in a given area before being in much position to innovate. \n \nI think academia does tend to reward \"intellectual worker bees,\" but that this is transparent and that most potential (and financially viable) \"nonconformist freethinkers\" are staying out.\nI've heard even less in the way of plausible-sounding explanations for how, specifically, previous cultures may have facilitated better art.\nGeneral suspiciousness about \"declinism,\" the general attitude that society is \"losing its way.\" I feel like I see a lot of bad arguments that the past was better in general (example), and I am inclined to agree with Our World in Data that (for whatever reason) people seem to be naturally biased toward \"declinism.\"\nI also suspect that subjective rankings of past accomplishments just tend, for whatever reason, to look overly favorably on the past. To illustrate this, here are charts similar to the charts above, but for well-known subjective rankings of baseball and basketball players:\nBaseball players (# in top 100 with career midpoint each year, smoothed)\nBasketball players (# in top 96 with career midpoint each year, smoothed)\nDisregarding the dropoffs at the end (which I think are just about the lists being made a while ago), these charts look ridiculous to me; there's little question in my mind that the level of play has improved significantly for both sports over time. (Here's a good link making this point for baseball; for basketball I'd encourage just watching videos from different eras, and may post some comparisons in the future.)\nMy own intuitions. This is the least important point, but I'm sharing it anyway. A lot of comparisons between classic vs. modern art/science are very hard for me to make sense of. I can often at least sympathize with a subjective feeling that the “classic” work feels subjectively more impressive, but this feeling often seems bound up in what I already know about its place in history.9 In cases where it seems relatively easier to compare the quality of work, though, it seems to me that modern work is better. For example, quantitative social science seems leaps and bounds better today than in the past, not just in terms of data quality but in terms of clarity and quality of reasoning. I also feel like Shakespeare's comedies are inferior to today's comedies in a pretty clear-cut way, but are acclaimed (and respected more than today's comedies) nonetheless. I recognize there's plenty of room for debate here.\nA note on ancient Greece. As discussed in the supplement, ancient Greece (between about 700 BCE and 100 CE) is \"off the charts\" in terms of how many acclaimed artistic and scientific figures (per capita) it produced. It performs far better on these metrics than any country in Europe (or the US) after 1400, and it outperforms all other countries and periods by even more. Is this evidence that ancient Greece had a special, innovation-conducive culture that could qualify as a \"golden age?\"\nMy take: yes and no. My guess is that:\nAncient Greece is essentially where the basic kind of intellectual activity that generates critically acclaimed work first experienced high demand and popularity. This article by Joel Mokyr gives a sense of what I have in mind by the \"basic kind of intellectual activity\" - ancient Greece might have been the first civilization to prize certain kinds of \"new ideas\" (at least, the kind of \"new ideas\" celebrated by the critics whose judgments are driving the data above) as something worth actively pursuing.\nWhile ancient Greece produced a lot of critically acclaimed work over the centuries, its interests didn't \"catch on\": at that time, there wasn't the sort of global consensus we have today about the importance and desirability of this sort of innovation. And eventually demand fizzled, before spiking again hundreds of years later in modern Europe.\nThus, in my view, ancient Greece is best interpreted as an isolated spike in demand and effort at innovation, not a spike in exceptional intelligence or knowhow. And I doubt the level of demand and effort were necessarily very high by modern standards - so in that sense, I doubt that ancient Greece represents a \"golden age\" in the sense that we'd produce more innovation today if we were more like they were.\nReasons I think ancient Greece's accomplishments are best explained by a spike in demand/effort, not intelligence/knowhow:\nAncient Greece was also the first country to score highly on \"significant figures per million people\" metric. (Out of 72 significant figures before the year 400 BCE in the entire data set, 66 were from Ancient Greece.10)\nGiven that ancient Greece was the first home of substantial amounts of critically acclaimed work, it seems unlikely that it was also the best environment for critically acclaimed work, in terms of institutions or incentives or knowhow. By analogy, the first person to work on a puzzle might make the most noteworthy progress on assembling it, but this probably isn't because they are bringing the best techniques to the task: being early is an easy explanation for their high significance, and having the best techniques or abilities is a less likely explanation. (The best techniques seem especially unlikely given that they haven't had a chance to learn from others.)\nWhen I look at the actual figures from Ancient Greece, it reinforces my feeling that they are more noteworthy for being early than for the intrinsic impressiveness or usefulness of their work. For example, the two highest-rated figures from Ancient Greece are Aristotle (who ranks highly in both philosophy and science) and Hippocrates (medicine). \nIn my view, both of these figures did the sort of theorizing and basic concept formation that was useful near the founding of a discipline, but wouldn't have nearly the same utility if brought to philosophy or medicine today. \n \nOne could argue that if Aristotle or Hippocrates were transplanted to the present day, they might invent an entirely new field from whole cloth, of comparable significance to philosophy or medicine; I would find this extremely hard to believe, but won't argue the case further here.\nMore research needed\nI've done an awful lot of amateur data wrangling for this piece. I think a more serious, scholarly effort to assess \"Where's today's Beethoven?\" could make a lot more headway, via things like:\nBetter estimates of the \"effective population\" (how fast is the amount of \"effort\" at innovation growing?) How much \"innovation stagnation\" we estimate is very sensitive to this.\nMore systematic attempts to assess the significance of different innovations (both in terms of science and art), and look at what that means for the pace of innovation. I would guess that whether we're seeing \"innovation stagnation\" is pretty sensitive to how exactly this is done; for example, I'd guess that if you look at popular rather than critical opinion, the modern era looks extremely productive at producing art/entertainment.\nMore intensive examination of times and places that seem like decent candidates for \"golden ages,\" and hypothesizing about, specifically, what made them unusually productive.\nI think this would be worth it, because I think each hypothesized explanation for \"Where's today's Beethoven?\" has some important potential implications. Later in this series, I'll discuss how the \"innovation as mining\" hypothesis - which I think explains a lot of what's going on - might change our picture of how to increase innovation. \nNext in series: How artistic ideas could get harder to find\nSpecial thanks to Luke Muehlhauser for his feedback on this piece and others in the series, especially w/r/t the state of modern highbrow art and entertainment.Footnotes\n I may elaborate on this more in the future, but my basic take is that Beethoven's music is \"great\" in at least two significant senses: (a) for nearly all listeners, it is enjoyable; (b) for obsessive listeners who are deeply familiar with other music that came before, it is \"impressive\" in the sense of demonstrating originality/innovation/other qualities.\n I think Beyonce's music is better than Beethoven's when it comes to (a), but maybe not when it comes to (b). And I think there probably are modern artists who are better than Beethoven when it comes to (b), but they tend to be a lot worse when it comes to (a). (I'm guessing this is true of various \"academic\" and \"avant-garde\" musicians who are very focused on specialized audiences.)\n If I were to come up with my own personal way of valuing (a) and (b) - both of which I think deserve some weight in discussions of what music is \"great\" - I think I would favor Beyonce over Beethoven. But I think there probably is some relative valuation of (a) and (b) that a lot of people implicitly hold, and according to which Beethoven is better than any modern artist.  ↩\n Recent example I came across: https://twitter.com/ESYudkowsky/status/1455787079120539648  ↩\n For example, see:\nTyler Cowen's interview with Peter Thiel, in which both seem to endorse something like a \"golden age\" hypothesis. Thiel attributes \"the great stagnation\" to over-regulation as well as hysteresis (\"When you have a history of failure, that becomes discouraging\"); Cowen talks about complacency and a declining \"sense of what can be accomplished, our unwillingness to repeat, say, the Manhattan Project, or Apollo.\" \nThis interview with Tyler Cowen and Patrick Collison, e.g. \"Now, there's two, I think, broad possibilities there. One is it's just getting intrinsically harder to generate progress and to discover these things. And, who knows, maybe some significant part of that is true. But the other possibility is it's somehow more institutional, right? ... we do have suggestive evidence that our institutions are....well, they're certainly older than they used to be, and they're also, as in the NIH funding example, there are changes happening beneath the surface and so on that may or may not be good. So I don't think we should write off the possibility that it's not inevitable, and that there is or that there do exist alternate forms of organization where things would work better ... the notion that people have lost the ability to imagine a future much different and much better than what they know to me is one of the most worrying aspects of where we are now.\"\n These arguments don't seem to appear in the more formal works by Cowen and Collison, though. ↩\n Using a moving average. ↩\n I thought Alexey Guzey's criticism of this paper - while I don't agree with all of it - did a good job highlighting some of the confusion around the \"units\" here, but I don't think that issue affects the big picture of what I'm talking about here. ↩\n I went with the year Season 1 came out, based on my judgment call that most TV shows peak on the early side. ↩\n My own take is that there is something particularly weird and \"bad taste\"-like going on here with the critics. For example, maybe all of the best cinematic innovation is happening in very outside-the-mainstream films that the critics who made that list aren't paying attention to, and maybe the music critics who weighed in for Rolling Stone are affected by something like this. I don't know, but I fundamentally don't buy that the number of great films per year has been falling since before 1970, or that contributions to modern music cratered after 1980 and never recovered. I feel this way even though I do think that the top films/albums on each list are reasonable candidates for the \"most acclaimed\" in their category, in pretty much the same sense that Beethoven and Shakespeare are. ↩\n The exceptions: \nFilm has a \"double spike\" of sorts, which also affects the combined \"film+music+TV+video games\" chart. This looks to coincide pretty well with the mainstreaming of color cinema.\nBooks have a \"double spike\" even when excluding Shakespeare, which is interesting. ↩\n Though frankly, I often don’t feel this way where other people do. For example, I find ancient philosophy very unimpressive, in that it takes so much interpretive guesswork to even form a picture of what a piece is claiming - I think modern philosophy is vastly better on this front. I also think modern highbrow films and TV shows are pretty much superior to most classic literature. ↩\n The others are three Chinese philosophers (Confucius, Laozi and Mozi) and three Indian philosophers (Buddha, Carvaka, Kapila) ↩\n", "url": "https://www.cold-takes.com/wheres-todays-beethoven/", "title": "Where's Today's Beethoven?", "source": "cold.takes", "source_type": "blog", "date_published": "2022-01-04", "id": "6c20c920141bc76eb33f4de2f2df0997"} -{"text": "\nWhen imagining a world of digital people - as in some of the utopia links from last week (as well as my digital people sketches from a while ago) - it's common to bump into some classic questions in philosophy of personal identity, like:\nWould a duplicate of you be \"you?\" \n If you got physically destroyed and replaced with an exact duplicate of yourself, did you die? (This question could connect directly to whether \"converting yourself to a digital person\" is equivalent to dying.)\nMy answers are \"sort of\" and \"no.\" My philosophy on \"what counts as death\" is simple, though unconventional, and it seems to resolve most otherwise mind-bending paradoxical thought experiments about personal identity. It is the same basic idea as the one advanced by Derek Parfit in Reasons and Persons;1 Parfit also claims it is similar to Buddha's view2 (so it's got that going for it).\nI haven't been able to find a simple, compact statement of this philosophy, and I think I can lay it out in about a page. So here it is, presented simply and without much in the way of caveats (this is \"how things feel to me\" rather than \"something I'm confident in regardless of others' opinions\"):\nConstant replacement. In an important sense, I stop existing and am replaced by a new person each moment (second or minute or whatever). \nThe sense in which it feels like I \"continue to exist, as one unified thread through time\" is just an illusion, created by the fact that I have memories of my past. The only thing that is truly \"me\" is this moment; next moment, it will be someone else.\nKinship with past and future selves. My future self is a different person from me, but he has an awful lot in common with me: personality, relationships, ongoing projects, and more. Things like my relationships and projects are most of what give my current moment meaning, so it's very important to me whether my future selves are around to continue them.\nSo although my future self is a different person, I care about him a lot, for the same sorts of reasons I care about friends and loved ones (and their future selves).3\nIf I were to \"die\" in the common-usage (e.g., medical) sense, that would be bad for all those future selves that I care about a lot.4\n(I do of course refer to past and future Holdens in the first person. When I refer to someone as \"me,\" that means that they are a past or future self, which generally means that they have an awful lot in common with me. But in a deeper philosophical sense, my past and future selves are other people.)\nAnd that's all. I'm constantly being replaced by other Holdens, and I care about the other Holdens, and that's all that's going on. \nI don't care how quickly the cells in my body die and get replaced (if it were once per second, that wouldn't bother me). My self is already getting replaced all the time, and replacing my cells wouldn't add anything to that.\nI don't care about \"continuity of consciousness\" (if I were constantly losing consciousness while all my cells got replaced, that wouldn't bother me). \nIf you vaporized me and created a copy of me somewhere else, that would just be totally fine. I would think of it as teleporting. It'd be chill.\nIf you made a bunch of copies of me, I would be all of them in one sense (I care about them a lot, in the same way that I normally care about future selves) and none of them in another sense (just as I am not my future selves). \nIf you did something really weird like splitting my brain in half and combining each half with someone else's brain, that would create two people that I care about more than a stranger and less than \"Holden an hour from now.\"\nI don't really find any thought experiments on this topic trippy or mind bending. They're all just cases where I get replaced with some other people who have some things in common with me, and that's already happening all the time.\nPros and cons of this view\n(This isn't going to feel very balanced, because this view \"feels right\" to me, but if I get good additional cons in the comments I might run them in a future post.)\nThe main con I see is that \"constant replacement\" is a pretty unusual way of thinking about things. I think many people think they would find it kind of horrifying to imagine that they wink out of existence every second and get replaced by someone else.\nTo those people, though, I would suggest \"trying it on\": try to imagine, for let's say a full week, that you're fully convinced of constant replacement, and see whether it feels as impossible to live with as it seems at first. You might initially expect to find yourself constantly terrified of your impending death, but my guess is you won't be able to keep that up, and you'll soon be feeling and acting pretty normal. You won't make any weird decisions, because \"concern for future selves\" provides pretty much the same functional value as \"concern for oneself\" in normal circumstances (I just think it works better in exotic circumstances).\nIf that's right, \"constant replacement\" could join a number of other ideas that feel so radically alien (for many) that they must be \"impossible to live with,\" but actually are just fine to live with. (E.g., atheism; physicalism; weird things about physics. I think many proponents of these views would characterize them as having fairly normal day-to-day implications while handling some otherwise confusing questions and situations better.)\nAs for the pros:\nHaving sat with it a while, the view now feels very intuitive to me. \nConstant replacement isn't some novel or radical idea. E.g., it's similar to the idea that now is all there ever is. (And as noted above, Derek Parfit claims that Buddha took a similar view.) A lot of people live in this headspace.\n \nConstant replacement seems sort of obviously true when I think about my relationship to my far-past self: the me of 10 years ago really feels like a different person that I happen to have memories of. And the me of 10 years from now is probably the same kind of deal. So my relationship to the me of 1 minute from now should be qualitatively the same kind of thing, just much less so, and that seems about right.\n \nOnce you accept constant replacement, the rest of the view seems like common sense.\n \nTo be clear, this isn't always how I've thought. I used to stare at some random object and think \"Is this moment of me staring at this object the only me that has ever existed? (How would I know if it weren't?)\" and feel sort of freaked out. But at a certain point I just started answering \"Yeah\" and it started feeling correct, and chill.\nIt seems good that when I think about questions like \"Would situation __ count as dying?\", I don't have to give answers that are dependent on stuff like how fast the atoms in my body turn over - stuff I have basically never thought about and that doesn't feel deeply relevant to what I care about. Instead, when I think about whether I'd be comfortable with something like teleportation, I find myself thinking about things I actually do care about, like my life projects and relationships, and the future interactions between me and the world.\nAll of the paradoxical thought experiments about teleportation, brain transplants, etc. stop feeling confusing or mind-bending. I feel like I could make sense of things even in a potential radically unfamiliar future.\nI probably don't have the same kind of fear of death that most people have. I figure my identity has already changed dramatically enough to count as most of the way toward death at least a few times so far, so it doesn't feel like a totally unprecedented thing that's going to happen to me.\nAnyway, if you think this is crazy, have at it in the comments.Footnotes\n For key quotes from Reasons and Persons, see pages 223-224; 251; 279-282; 284-285; 292; 340-341. For explanations of \"psychological continuity\" and \"psychological connectedness\" (which Parfit frequently uses in discussing what matters for what counts as death), see page 206. \n \"Psychological connectedness\" is a fairly general idea that seems consistent with what I say here; \"psychological continuity\" is a more specific idea that is less important on my view (though also see pages 288-289, where Parfit appears to equivocate on how much, and how, psychological continuity matters). ↩\n \"As Appendix J shows, Buddha would have agreed. The Reductionist View [the view Parfit defends] is not merely part of one cultural tradition. It may be, as I have claimed, the true view about all people at all times.\" Reasons and Persons page 273. Emphasis in original. ↩\n There's the additional matter that he's held responsible for my actions, which makes sense if only because my actions are predictive of his actions. ↩\n I don't personally care all that much about these future selves' getting to \"exist,\" as an end in itself. I care more about the fact that their disappearance would mean the end of the stories, projects, relationships, etc. that I'm in. But you could easily take my view of personal identity while caring a lot intrinsically about whether your future selves get to exist. ↩\n", "url": "https://www.cold-takes.com/what-counts-as-death/", "title": "What counts as death?", "source": "cold.takes", "source_type": "blog", "date_published": "2021-12-28", "id": "ed8e36a329bfe93413a2669b4fb5c82a"} -{"text": "\nReaders sent in a number of suggestions for fictional utopias. Here are a couple:\nChaser 6 by Alicorn (~12 pages). This story makes heavy use of digital people (or something very much like them), so much so that if you don’t feel very fluent with “digital people” related ideas, you should read the explanation of the basic story mechanics in this footnote before you read the story.1 I think this will sound like a utopia to some readers and a dystopia to others; in any case I definitely enjoyed it and thought it was a cool way of thinking about what the future could be like, for better or worse.\nThe 8-chapter epilogue to Worth the Candle, a 254-chapter (!! and looks like the chapters are reasonably long) fiction piece. Based on a reader's description, the epilogue sounds like it has a significant amount in common with what I tried to do here, but I haven't read it myself (yet, anyway) because of the length. I’ve heard raves about Worth the Candle as a whole before.\nMore suggestions in the comments, including the Terra Ignota series.\nOvercoming Bias gives a take on utopias, entitled \"What Hypocrisy Feels Like.\" Very interesting IMO. Excerpt:\nWhen we talk about an ideal world, we are quick to talk in terms of the usual things that we would say are good for a society overall. Such as peace, prosperity, longevity, fraternity, justice, comfort, security, pleasure, etc. ... But our allegiance to such a utopia is paper thin ... \nAnd this is just what near-far theory predicts. Our near and far minds think differently, with our far minds presenting a socially desirable image to others, and our near minds more in touch with what we really want ... \nwe want to say that we want to help society overall ... While we really crave fights by which we might rise relative to others ... Utopia isn’t a world where you can justify much conflict, but conflict is how you expect to win, and you really really want to win. And you expect to win mainly at others’ expense. (More)\nPeople have asked me whether there’s any non-explicitly-utopian fiction I recommend if you want to get a feel for what utopia could be like. My best answer is that:\nIf you want to see something like my moderately conservative utopia, you can probably get a decent sense of it by watching certain normal TV shows that revolve around people playing sports, performing, or doing something else recreational/optional/not connected to violence or material scarcity (and base most of their plots around non-material-scarcity-or-violence-related drama, e.g. drama about competition or interpersonal relationships). Examples off the top of my head: The Great British Baking Show, Glee, Ted Lasso, maybe Friday Night Lights (though the latter might have too much of the plot revolve around off-the-field material-scarcity-related topics). You won’t want to live there if you don’t like the people, so you probably have to try more than one thing. \n Kenny Easwaran had an interesting comment that Her can be interpreted this way. Excerpt: \"The total extent of conflict and suffering in the movie is typical of a standard romantic comedy - the main character is going through a bad breakup with an ex, and dealing with a new relationship (which happens to be with an artificially intelligent phone operating system). It's got its own amounts of heartache and loss, but it's utopian in that all the bigger problems of the world seem to be gone.\" I wonder if a lot of more standard romantic comedies could sort of fit this bill, in that they don't tend to dwell on material-scarcity-related topics.\nMy moderately radical utopia is tougher due to the lower level of conflict, but you could try one of those shows notorious for having nice, happy characters, such as Brooklyn Nine-Nine (technically revolves around stopping crime, but this is largely just a random MacGuffin and they could just as easily be a sports team or a bunch of people playing an elaborate detective RPG).\n \n These are all honestly a stretch that would require some mental gymnastics to view as utopias. Re: what could exist, one reader sent a link to Joseph Gordon-Levitt brainstorming about what a more explicitly utopia-based show could look like.\nMore suggestions? Put them in the comments!Footnotes\n Everyone is a digital person, and each person is able to make copies of themselves (called \"threads\" in the story). The threads are often \"pinned\" into specific moods conducive to doing something in particular (gardening, studying, etc.) A \"memory weave\" ensures that each thread's memories are copied over to all the others. Thanks to Alicorn for correcting my initial stab at interpreting this!\n ↩\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/utopia-links/", "title": "Utopia links", "source": "cold.takes", "source_type": "blog", "date_published": "2021-12-23", "id": "1eb847e30988e85920426ffd725053f1"} -{"text": "\nZvi Mowshowitz and I have agreed to the following bet:\nIf at least 75% of the USA COVID-19 cases between 1/1/22 and 2/28/23 (inclusive) occur between 1/1/22 and 2/28/22 (inclusive), I pay Zvi $40.\nOtherwise, Zvi pays me $60.\nThis bet is intended to apply to Omicron and earlier strains, and it will be a “push” if a post-Omicron strain “muddies the waters” in the following sense: counting cases from the new strain would cause me to win, and not counting them would cause Zvi to win.\nWe'll use Wikipedia for total COVID-19 cases and this CDC data for variant information. Each of us has the option to appeal to a third party (whom we've agreed on) to perform an adjustment for undertesting.\nThe concept this is trying to capture is that Zvi thinks there’s a 70% chance of the following: “Omicron will blow through the US by 3/1/2022, leading to herd immunity and something like the ‘end’ of the COVID-19 pandemic.” I think there’s only a 50% chance of this (and I would’ve had a lower probability before learning that Zvi thinks it). We bet at 60%.\nI think the most important takeaway for readers is that Zvi thinks Omicron is going to blow through the US very soon and very quickly. If you think this is likely to end up being true, I think that makes this a good time to be extra cautious, as it implies that the next ~2.5 months are much more dangerous than any time afterward. (Zvi: \"The flip side is that if you can't avoid it at all, you might not want to bother trying.\")\nNow a bit more info on how this bet came about and what’s going on here:\nIn response to my Dec. 6 post on Omicron boosters, Zvi (over email) expressed skepticism that Omicron boosters are a good thing to push for, compared to e.g. faster approval for and production scaling of Paxlovid. His take was that even if they were approved immediately, there was no chance to roll them out in time to get ahead of what he sees as the likely “last wave” of COVID-19.\nI had a reaction along the lines of: “I’ve heard a bunch of times before that we’re about to see a huge wave of COVID-19, followed by herd immunity and the end of the crisis. It seems to me that simple epidemiological models tend to point toward this kind of prediction. But it hasn’t happened before, and I don’t think it’ll happen this time either. Something else seems to happen when incidence reaches a certain level - maybe it’s behavior change, maybe it’s something to do with the fact that there are lots of subpopulations that mostly interact with each other, I don’t know - but these models seem off, and I wouldn't be confident that the next couple months will be the big and final wave.” So I offered a bet on this specific point, which we then spent a few emails formalizing.\nZvi’s response to this argument: \"yes, the reason to doubt this will happen is that it kept not happening in the past, but the dynamics pointing in this direction are much stronger this time, and it would be very difficult to overcome them with changed behaviors - it’s too tall a hill, and we’re not in any shape to climb it.\" (I think this argument is persuasive, but not persuasive enough to get me to Zvi's position.)\n(I no longer think there’s any chance that Omicron-specific boosters will get approved anytime soon, in light of evidence that has since come out about the protective effects of normal boosters and comments from public officials. I still think they should have been approved by now though!)\nIn many ways this is a completely insane bet for me to make:\nZvi has been thinking and writing a massive amount about COVID-19, and his work has been high-quality in my view. \nI have spent some time thinking about COVID-19, but much less, and I’ve generally been somewhat lazy about this bet. For example, when Zvi sent me the Google Sheet he was using to model the situation, I wrote back: “I glanced at your sheet but couldn't easily follow it … But I sort of did see the main thing I want to bet against, which is the fact that you copied the same formula all the way down the key column.”\nZvi also spends most of his professional time working in some capacity with betting markets! \nZvi is aware of my argument.\nSo, as a reader you might just want to assume Zvi is right and act accordingly.\nBut I find I can’t shake my view that the probability he’s giving is too high, and I’ve decided to put my money where my mouth is, and I thought readers would be interested in the whole phenomenon. Fun exercise: which side of this bet would you want to be on?\nA couple more notes\nThe bet is not for a life-changing amount of money. There's simply no way that this sort of activity is an optimal way for either of us to work toward personal financial goals. That's not why we're betting; we're doing it more as a sort of \"holding ourselves accountable for our beliefs\" sort of thing.\nIn an ideal world, we'd be making so many bets like this that our track records would give clear evidence of which of us was a better predictor, overall. But I don't think that's going to happen; it's a lot of work even to nail down a pretty simple, vivid disagreement like this one (and most important disagreements are much harder to reach bets on, and even this one may require a third-party judgment-driven adjustment). I don't think that whichever of us wins this one bet should gain too much credibility relative to the other.\nSo if winning/losing this bet won't have a significant effect on my finances or on my credibility, what's the point? Honestly, I don't totally know. I have an intuition that when I notice a disagreement this crisp and clear with someone, it's cool and good for our epistemic habits to try to nail it down, define it, record it, and ultimately feel some consequences for how it turns out (certainly I care whether I win, despite the two points above)! \nYou could think of it as taking opportunities where I see them to try out living in the world of the Bayesian mindset.For email filter: florpschmop\n", "url": "https://www.cold-takes.com/bet-with-zvi-about-omicron/", "title": "Bet with Zvi about Omicron", "source": "cold.takes", "source_type": "blog", "date_published": "2021-12-22", "id": "38bd187735c05c7c31a06935787ad9fc"} -{"text": "This piece is about the in-practice pros and cons of trying to think in terms of probabilities and expected value for real-world decisions, including decisions that don’t obviously lend themselves to this kind of approach.\nThe mindset examined here is fairly common in the “effective altruist” and “rationalist” communities, and there’s quite a bit of overlap between this mindset and that of Rationality: A-Z (aka The Sequences), although there are some differing points of emphasis.1 If you’d like to learn more about this kind of thinking, this piece presents a ~20-minute read rather than the >1000 pages of Rationality: A-Z. \n \nThis piece is a rough attempt to capture the heart of the ideas behind rationalism, and I think a lot of the ideas and habits of these communities will make more sense if you’ve read it, though I of course wouldn’t expect everyone in those communities to think I’ve successfully done this.\nIf you’re already deeply familiar with this way of thinking and just want my take on the pros and cons, you might skip to Pros and Cons. If you want to know why I'm using the term \"Bayesian mindset\" despite not mentioning Bayes's rule much, see footnote 3.\nThis piece is about the “Bayesian mindset,” my term for a particular way of making decisions. In a nutshell, the Bayesian mindset is trying to approximate an (unrealistic) ideal of making every decision based entirely on probabilities and values, like this:\nShould I buy travel insurance for $10? I think there's about a 1% chance I'll use it (probability - blue), in which case it will get me a $500 airfare refund (value - red). Since 1% * $500 = $5, I should not buy it for $10.\n(Two more examples below in case that’s helpful.)\nThe ideal here is called expected utility maximization (EUM): making decisions that get you the highest possible expected value of what you care about.2 (I’ve put clarification of when I’m using “EUM” and when I’m using “Bayesian mindset” in a footnote, as well as notes on what \"Bayesian\" refers to in this context, but it isn’t ultimately that important.3)\nIt’s rarely practical to literally spell out all the numbers and probabilities like this. But some people think you should do so when you can, and when you can’t, use this kind of framework as a “North Star” - an ideal that can guide many decisions even when you don’t do the whole exercise. \nOthers see the whole idea as much less promising.\nI think it's very useful to understand the pros and cons, and I think it's good to have the Bayesian Mindset as one option for thinking through decisions. I think it's especially useful for decisions that are (a) important; (b) altruistic (trying to help others, rather than yourself); (c) “unguided,” in the sense that normal rules of thumb aren’t all that helpful.\nIn the rest of this piece, I'm going to walk through:\nThe \"dream\" behind the Bayesian mindset. \nIf we could put the practical difficulties aside and make every decision this way, we'd be able to understand disagreements and debates much better - including debates one has with oneself. In particular, we'd know which parts of these disagreements and debates are debates about how the world is (probabilities) vs. disagreements in what we care about (values).\n \nWhen debating probabilities, we could make our debates impersonal, accountable, and focused on finding the truth. Being right just means you have put the right probabilities on your predictions. Over time, it should be possible to see who has and has not made good predictions. Among other things, this would put us in a world where bad analysis had consequences.\n \nWhen disagreeing over values, by contrast, we could all have transparency about this. If someone wanted you to make a certain decision for their personal benefit, or otherwise for values you didn’t agree with, they wouldn’t get very far asking you to trust them.\nThe \"how\" of the Bayesian mindset - what kinds of practices one can use to assign reasonable probabilities and values, and (hopefully) come out with reasonable decisions.\nThe pros and cons of approaching decisions this way.\nThe dream behind the Bayesian mindset\nTheoretical underpinnings\nThere’s a common intuition (among mathematics- and decision-theory-minded people) that the sort of decision-making outlined at the beginning of this piece - expected utility maximization (EUM) - is the most “fundamentally correct” way of making decisions.\nThis intuition can be grounded in a pretty large and impressive academic literature. There are a large number of different theoretical frameworks and proofs that all conclude - in one way or another - something like: \nEither you’re acting like someone who’s using EUM - assigning a probability and value to each possible outcome, and making the choice best for maximizing the expected value (of whatever it is that you care about) - \nor you’re making decisions that are inconsistent, self-defeating, or have something else wrong with them (or at least have some weird, unappealing property, such as “When choosing between A and B you choose A; but when choosing between A, B and C you choose B.”)4\nYou can get an intro to the academic literature at this SEP article (read up to Section 4, which is about halfway). And you can read more about the high-level intuitions at this article by Eliezer Yudkowsky (key quote in footnote).5\nThe theorems don’t say you have to actually write down your probabilities and values and maximize the expected value, like the examples at the beginning of this piece. They just say that you have to act as if that’s what you’re doing. To illustrate the difference - most people don’t write down the number of calories in each bite of food before they eat it, then stop eating once they hit a certain number. But they act as if they do (in that most people do something approximating “eat a set number of calories each day”).\nIn real life, people are probably not even acting as if they’re doing EUM. Instead, they’re probably just doing the “inconsistent, self-defeating, or something else wrong with it” thing constantly. And that isn’t necessarily a big deal. We can make a lot of mistakes and have a lot of imperfections and still end up somewhere good.\nBut it’s interesting if the “ideal” version of myself - the one who has no such imperfections - always acts as if they’re (implicitly) doing EUM. It suggests that, if I try hard enough, I might be able to translate any decision into probabilities and values that fully capture what’s at stake. \nTransparent values, truth-seeking probabilities\nAnd that translation is exciting because it could allow me to clarify disagreements and debates, both with other people and within my own head.\nIn the world as it is, I often have a hard time telling what a disagreement or debate is supposed to be about. For example, take this House of Representatives debate6 on a proposal to increase spending:\nOne speaker (a Democrat) says: “Frankly, I think it’s probably surprising to some to see a President … who cares deeply about the future of America, who cares about the families who are in need, who cares about those who are sick … too many Americans are suffering and in crisis.”\nIn “retort,” another (a Republican) says: “Today’s solutions cannot be tomorrow’s problems … I am in favor of relief … However, what we are considering here today is not relief. Rather, we’re garnishing the wages of future generations … “\nIn “response” to that, the Democrat says: “This is necessary … We have heard it from the American public. I think the case is clear.”\n…What is the actual disagreement here? … Are these two arguing about how valuable it is to help people today, vs. keeping wages high later? Or do they disagree about whether stimulus today means lower wages tomorrow? Or something else?\nSome think the disagreement comes from Republicans’ just not caring about lower-income Americans, the ones who would benefit more from a stimulus. Others think it comes from Democrats not understanding how such a stimulus can affect the future.\nIn an idealized version of this debate, each side would give probabilities about how stimulus will affect the economy, and explain how they value those outcomes. In order for the two sides to reach different conclusions, they’d have to be giving specific different probabilities, and/or specific different valuation methods.\nThen:\nValues disagreements would be transparent - explicit for all to see. If Republicans conceded that the stimulus would help low-income Americans, but said they just didn’t value this much, they’d have to own the consequences of saying this.\nMeanwhile, we’d be judging probability disagreements using an “objective truth” standard, since the disagreements are just about predictions and not about values. The disagreements would be crisp and clear (one side thinks spending more would cause some specific economic problem in the future, the other side does not) - not seas of words we couldn’t interpret. We could also look back later and see which side was closer to the mark with its predictions, and over time, this could turn into extensive documentation of which side makes better predictions.\nOf course, a party could lie about how its arguments break down between probabilities and values. For example, someone might say “We value low-income Americans just as much, but we have different predictions about how the stimulus will affect them,” while secretly not valuing low-income Americans. But this kind of lie would require giving non-sincere probabilities - probabilities the speaker didn’t actually believe. Over time, this would presumably lead them to have a bad track record of making predictions.\nWhen I’m arguing with myself, I often have the same sort of confusion that I have when watching Congress. \nI tend not to know much about why I decide what I decide. \nI often can’t tell which of my motives are selfish vs. altruistic; which of my beliefs are based on seeking the truth vs. wishful thinking or conformity (believing what I’m “supposed to” believe); and which thoughts are coming from my “lizard brain” vs. coming from the parts of myself I respect most.\nThe dream behind the Bayesian mindset is that I could choose some set of values that I can really stand behind (e.g., putting a lot of value on helping people, and none on things like “feeling good about myself”), and focus only on that. Then the parts of myself driven by “bad” values would have to either quiet down, or start giving non-sincere probabilities. But over time, I could watch how accurate my probabilities are, and learn to listen to the parts of myself that make better predictions.\nThe bottom line: \nNormal disagreements are hard to understand and unravel, and prone to people confusing and manipulating each other (and themselves). \nBut disagreements broken into probabilities and values could be much easier to make progress on.\nValues disagreements - pure statements of what one cares about, freed of any disagreements over how the world works - are relatively straightforward to understand and judge.\nProbabilities disagreements - freed of any subjectivity - could be judged entirely based on evidence, reason, and (over time) results. \nBy practicing and trying to separate probabilities and values when possible, perhaps we can move closer to a world in which we communicate clearly, listen open-mindedly, learn from each other, make our decisions based on the most truth-tracking interpretation of the information we have, and have true accountability for being right vs. wrong over time.\nAiming for this also has some more practical potential advantages - good habits, helpful communication methods, etc. I’ll discuss those next.\nThe Bayesian mindset in practice\nThe Bayesian mindset means looking for opportunities to do any and all of the following:\nConnect opinions to anticipated observations. When you have an opinion about what action to take, what concrete outcomes or situations are you picturing as a result of taking or not taking it? (E.g., “if we pass this bill, unemployment might fall”)\nAssign probabilities. How probable are the outcomes and situations you’re picturing? How does the action change them? (E.g., “The probability of unemployment falling by at least 1 percentage point in the next year is 50% if we pass the bill, 20% if we don’t”)\nAssign values. How much do you value the different outcomes compared to each other? (E.g., “It would be worth $X to reduce unemployment by 1 percentage point”)\nIt’s often the case that just articulating some possible outcomes, probabilities and values will shed a lot of light on a decision, even if you don’t do a full expected-utility maximization (EUM) listing everything that matters.\nI find all of these 3 steps to be pretty interesting exercises in their own right.\n#1 - connecting opinions to anticipated observations\nWhen you say “Policy X would be a disaster,” what kind of disaster do you have in mind? Are you expecting that the disaster would be widely recognized as such? Or are you picturing the policy doing roughly what its supporters expect, and just saying you don’t like it?\nIn the Bayesian mindset, the “meaning” of a statement mostly7 comes down to what specific, visualizable, falsifiable predictions it points to. \n“Meat is bad for you” usually means something like “If you eat more meat, you’ll live less long and/or in worse health than if you eat less meat.”\n“This bill is bad for America” is ambiguous and needs to be spelled out more - does it mean the bill would cause a recession? A debt crisis? Falling life expectancy?\n“What we are considering here today is not relief. Rather, we’re garnishing the wages of future generations.” means [???] It’s vague, and that’s a problem.\nThe Bayesian mindset includes habitually going for this kind of “translation.” I find this habit interesting because:\nA lot of times it sounds like two people are violently disagreeing, but they’re just talking past each other or lost in confusions over words. \nSometimes these kinds of disagreements can disappear in a puff with rationalist taboo: one person is saying “X is bad,” the other is saying “X is good,” and they try to break down their differing “anticipated observations” and sheepishly find they just meant different things by X. \n \nIn addition to resolving some disputes, “translating to anticipated observations” has also gotten me used to the idea that it takes a lot of work to understand what someone is actually saying. I should be slower to react judgmentally to things I hear, and quicker to ask for clarification.\nAnd other times it sounds like someone is making profound/brilliant points, but if I try to translate to anticipated observations, I realize I can’t concretely understand what they’re saying. \nA lot of expressed beliefs are “fake beliefs”: things people say to express solidarity with some group (“America is the greatest country in the world”), to emphasize some value (“We must do this fairly”), to let the listener hear what they want to hear (“Make America great again”), or simply to sound reasonable (“we will balance costs and benefits”) or wise (“I don’t see this issue as black or white”). \n \nTranslating to anticipated observations can sometimes “strip away the sorcery” from words and force clarity. This can include my own words: sometimes I “think I believe” something, but it turns out to be just words I was thoughtlessly repeating to myself.\nA couple more notes on the connection between this idea and some core “rationality community” ideas in this footnote.8\n#2 - assigning probabilities\nSay I’ve decided to translate “This bill is bad for America” to “This bill means there will either be a debt crisis, a recession, or high (>3%) inflation within 2 years.”9 Can I put a probability on that?\nOne relatively common viewpoint would say something like: “No. In order to say something is 20% likely, you ought to have data showing that it happens about 20% of the time. Or some rigorous, experiment-backed statistical model that predicts 20%. You can’t just describe some future event, close your eyes and think about it, call it 20% likely, and have that mean anything.”\nThe Bayesian Mindset viewpoint says otherwise, and I think it has a lot going for it.\nThe classic way to come up with a forecast is to pose the following thought experiment to yourself: Imagine a ticket that is worth $100 if the thing I’m trying to forecast comes true, and $0 otherwise. What’s the most I’d be willing to pay for this ticket (call this $A)? What’s the least I’d be willing to sell this ticket for (call this $B)? A/100 and B/100 are your low- and high-end “credences” (subjective probabilities) that the forecast will come true. \nFor example, what is the probability that fully self-driving cars (see “level 5” here for definition) will be commercially available by 2030? If I imagine a ticket that pays out $100 if this happens and $0 if it doesn’t:\nI notice that there’s no way I’d pay $80 for that ticket.\nThere’s also no way I’d sell that ticket for $20. \nSo it seems that my subjective probability is at most 80%, and at least 20%, and if I had to put a single probability on it it wouldn’t be too crazy to go with 50% (halfway in between). I could narrow it down further by actually doing some analysis, but I’ve already got a starting point. \nIn this case, my numbers are coming from pretty much pure intuition - though thinking about how I would spend money triggers a different sort of intuition from e.g. listening to someone ask “When are we going to have !@#$ing self-driving cars?” and answering in a way that feels good in conversation.\nIn this and other cases, I might want to do a bunch of research to better inform my numbers. But as I’m doing that research, I’m continually improving my probabilities - I’m not trying to hit some fixed standard of “proof” about what’s true.\nDoes this actually work - do numbers like this have any predictive value? I think there’s a good case they can/do:\nAt a minimum, you can seek to become calibrated, which means that events you assign a 30% probability to happen ~30% of the time, events you assign a 40% probability to happen ~40% of the time, etc. Calibration training seems surprisingly quick and effective - most people start off horrifically overconfident, but can relatively quickly become calibrated. This often comes along with making fewer statements like “X is going to happen, I guarantee it,” and replacing them with statements like “I guess X is about 70% likely.” This alone is an inspiring win for the Bayesian mindset.\nScott Alexander puts up a yearly predictions post on all kinds of topics from world events to his personal life, where I’d guess he’s roughly following the thought process above rather than using lots of quantitative data. He not only achieves impressive calibration, but seems (informally speaking) to have good resolution as well, which means roughly that many of his forecasts seem non-obvious. More cases like this are listed here. So it seems like it is possible to put meaningful probabilities on all sorts of things.\n“The art of assigning the right probabilities” can be seen as a more tangible, testable, well-defined version of “The art of forming the most correct, reasonable beliefs possible.” \nFor many, this is the most exciting part of the Bayesian mindset: a concrete vision of what it means to have “reasonable beliefs,” with a number of tools available to help one improve.\nThere’s a nascent “science of forecasting” on what sorts of people are good at assigning probabilities and why, which you can read about in Superforecasting.\nWhen two people disagree on a probability, they can first try sharing their evidence and moving their probabilities toward each other. (If the other person has heard all your evidence and still thinks X is less probable than you do, you should probably be questioning yourself and lowering your probability of X, to at least some degree.) If disagreement persists, they can make a bet (or “tax on bullshit”), or just record their disagreement and check back later for bragging rights. Over time, someone’s track record can be scored, and their scores could be seen as a guide to how credible they are.\nMore broadly, the idea of “assigning the right probabilities” is a particular vision of “what it means to have reasonable beliefs,” with some interesting properties. \nFor example, it provides a specific (mathematically precise) way in which some beliefs are “more correct than others,” even when there’s very little (or very inconclusive) evidence either way,10 and specific mathematical rules for changing your beliefs based on new evidence (one video explainer is here).\n \nThis in turn supports a particular “nonconformist truth-seeker” worldview: the only goal of one’s beliefs is to assign the best probabilities, so one should be actively watching out for social pressure and incentives, “beliefs that are fun to express,” and anything else that might interfere with a single-minded pursuit of assigning good probabilities to predictions. I see a lot of Rationality: A-Z as being about this sort of vision.11\nThe ultimate aspiration here is that disagreements generate light (quantitative updates to probabilities, accumulation of track records) instead of heat, as we collectively build the superpower of being able to forecast the future.\n#3 - valuing outcomes\nThe Bayesian mindset generally includes the attitude that “everything can ultimately be traded off against everything else.” If a bill would reduce suffering this year but might lead to a debt crisis in the future, it should - in theory - be possible to express both benefits and risks in the same units.12 And if you can express benefits and risks in the same units, and put probabilities on both, then you can make any decision via EUM.\nThe “everything can be traded off against everything else” mentality might explain some of the fact that Bayesian-mindset enthusiasts tend to be interested in philosophy - in particular, trying to understand what one really values, e.g. by considering sometimes-bizarre thought experiments. I think this is an interesting mentality to try out.\nBut in practice, valuing very different outcomes against each other is daunting. It often involves trying to put numbers on things in unintuitive and sometimes complex ways - for example, valuing a human life in dollars. (For a general sense of the sort of exercise in question, see this post.)\nI think the “figuring out what you value, and how much” step is the least practical part of the Bayesian mindset. It seems most useful when either:\nThere is luckily some straightforward way of expressing all costs and benefits in the same terms, such as in the examples in the appendix. (More on this below.)\nOr it’s worth doing all of the difficult, guess-laden work to convert different benefits into the same terms, which I think can be the case for government policy and for donation recommendations.\nUse cases, pros and cons of the Bayesian mindset\nUse cases\nUsing the full process outlined above to make a decision is pretty complex and unwieldy. For most decisions, I don't think it would be helpful: it's too hard to list all of the different possible outcomes, all of the different values at stake, etc.\nBut I think it can be a useful angle when:\nThere's a discrete, important decision worth serious thought and analysis. \nThere's a pretty clear goal: some \"unit of value\" that captures most of what's at stake. The examples in the appendix are examples of how this can be approximately the case.\nFor whatever reason, one isn't confident in normal rules of thumb and intuitions. \nThe Bayesian mindset might be particularly useful for avoiding scope neglect: the risk of being insensitive to differences between different large numbers, e.g. \"Helping 10,000 vs. 12,000 people.\" \n \nI think most policymaking, as well as many decisions about how to handle novel situations (such as the COVID-19 pandemic), qualify here. \nSometimes one is able to identify one or two considerations large enough to plausibly \"dominate the calculation,\" so one doesn't have to consider every possible decision and every possible outcome. \nA bit of a notorious example that I have mixed feelings about (to be discussed another day): Astronomical Waste argues that \"Do as much good as possible\" can be approximately reduced to \"Minimize existential risk.\" This is because a staggering number of people could eventually live good lives13 if we are able to avoid an existential catastrophe.\nI think the COVID-19 pandemic has been an example of where the Bayesian mindset shines, generally. \nThe situation is unprecedented, so normal rules of thumb aren't reliable, and waiting to have \"enough evidence\" by normal public-health-expert standards is often not what we want. \nMost people I know either took extremely \"cautious\" or extremely \"carefree\" attitudes, but calculating your actual probability of getting COVID-19 - and weighing it against the costs of being careful - seems a lot better (ala the examples in the appendix). (Microcovid.org was built for this purpose, by people in the rationalist community.)\nEUM calculations tend to favor things that have a reasonably high probability of being very helpful (even if not \"proven\") and aren't too costly to do, such as wearing masks and taking vitamin D supplements. \nBayesian habits\nA lot of the appeal of the Bayesian mindset - and, I think, a lot of the value - comes not from specific decisions it helps with, but from the habits and lenses on the world one can get from it. \nOne doesn't need to do a full EUM calculation in order to generally look for opportunities to do the three things laid out above: (a) connect opinions to anticipated observations; (b) assign probabilities and keep track of how accurate they are; (c) assign values (try to quantify what one cares about).14\nI've done a fair amount of this, while not making the Bayesian mindset my only or even primary orientation toward decision-making. I think I have realized real, practical benefits, such as:\nI’ve gotten quicker at identifying “talking past each other” moments in disagreements, and ensuring that we hone in on differing anticipated observations (or values). I've also gotten quicker to skip over arguments and essays that sound seductive but don't have tangible implications. (I'm sure some would think I'm wrong to do this).\nBased on my experience with estimating probabilities and making bets, I almost never “rule out” a possibility if someone else is arguing for it, and conversely I never fully plan around the outcomes that seem most likely to me. I think this is one of the most robust and useful results of putting probabilities on things and seeing how it goes: one switches from a natural mode of “If A, then B” to a habitual mode of “If A, then maybe B, maybe C, maybe D.” I think this has generally made me more respectful of others’ views, in tone and in reality, and I think it has improved my decision-making as well. \nI’ve spent a lot of time consuming philosophy, interrogating my own values, and trying to quantify different sorts of benefits in comparable terms. Many of the calculations I’ve done are made-up, non-robust and not worth using. But there are also many cases in which the numbers seem both clear and surprising relative to what I would have guessed - often there is one factor so large that it carries a calculation. The most obvious example of this is gaining sympathy for (though not total conviction in) the idea of focusing philanthropy on animal-inclusive or longtermist work. I think the benefits here are major for philanthropy, and a bit less compelling on other fronts.\nAt the same time, I think there are times when the habits built by the Bayesian mindset can be unhelpful or even lead one astray. Some examples:\nDe-emphasizing information that tends to be hard to capture in an EUM framework. There are a lot of ways to make decisions that don’t look at all like EUM. Intuition and convention/tradition are often important, and often capture a lot of factors that are hard to articulate (or that the speaker isn’t explicitly aware of). The Bayesian mindset can cause over-emphasis on the kinds of factors that are easy to articulate via probabilities and values. \nHere are examples of views that might not play well with the Bayesian mindset: \n“Person X seems really good - they’re sharp, they work hard, they deeply understand what they’re working on at the moment. I’m going to try to generally empower/support them. I have no idea where this will lead - what they’re specifically going to end up doing - I just think it will be good.”\n“I see that you have many thoughtful reasons to set up your organization with an unorthodox reporting structure (for example, one person having two bosses), and you have listed out probabilities and values for why this structure is best. But this is different from how most successful organizations tend to operate, so I expect something to go wrong. I have no idea what it is or how to express it as a prediction.”15\n“Solar power progress is more important than most people think; we should pay more attention to solar power progress, but I can’t say much about specific events that are going to happen or specific outcomes of specific things we might do.”\nIt can be extremely hard to translate ideas with this basic structure into predictions and probabilities. I think the Bayesian mindset has sometimes led me and others to put insufficient weight on these sorts of views.\nModesty probabilities. I think that using the language of probability to express uncertainty has some major advantages, but also some pathologies. In particular, the “never be too confident” idea seems great in some contexts, but bad in others. It leads to a phenomenon I call “modesty probabilities,” in which people frequently assign a 1% or 10% chance to some unlikely outcome “just because who knows,” i.e., because our brains don’t have enough reliability or precision to assign very low probabilities for certain kinds of questions. \nThis in turn leads to a phenomenon sometimes called “Pascal’s Mugging” (though that term has a variety of meanings), in which someone says: “X would be a huge deal if it happened, and it’d be overconfident to say it’s <1% likely, so I’m going to focus a lot on X even though I have no particular reason to think it might happen.”\nIt’s debatable how comfortable we should be acting on “modesty probabilities” (and in what contexts), but at the very least, “modesty probabilities” can be quite confusing. Someone might intuitively feel like X is almost impossible, but say X is 1% or 10% likely just because they don’t know how to be confident in a lower probability than that.\nThe wrong tool for many. I’m personally a big fan of some of the habits and frames that come with the Bayesian mindset, particularly the idea of “intense truth-seeking”: striving to make my beliefs as (predictively) accurate as possible, even if this requires me to become “weirder” or suffer other costs. But this isn’t how everyone lives, or should live. \nSome people accomplish a lot of good by being overconfident. \nOthers, by fitting in and doing what others seem to expect them to. \nOthers, by being good at things like “picking the right sort of person to bet on and support,” without needing any ability to make accurate predictions (about the specifics of what supporting person X will lead to) or have much sense of what “values” they’re pursuing.\nI don’t think the Bayesian mindset is likely to be helpful for these sorts of people. An analogy might be trying to strategize about winning a football game using the language of quantum mechanics - it’s not that the latter is “wrong,” but it’s an ill-suited tool for the task at hand.\nFurthermore, the Bayesian mindset seems like a particularly bad tool for understanding and learning from these sorts of people. \nI often see Bayesian mindset devotees asking, “Why did person X do Y? What beliefs did that reflect? If they believe A they should’ve done C, and if they believe B they should’ve done D.” And in many cases I think this is an actively bad way of understanding someone’s actions and motivations. \nI think many people have impressive minds in that they act in patterns that tend to result in good things happening, and we can learn from them by understanding their patterns - but they’re not well-described as doing any sort of EUM, and they may not even be well-described as having any anticipated observations at all (which, in a Bayesian framework, sort of means they don’t have beliefs). We won’t learn from them if we insist on interpreting them through the lens of EUM.\nA final high-level point is that the Bayesian mindset is essentially a psychological/social “technology” with little evidence behind it and a thin track record, so far. The theoretical underpinnings seem solid, but there’s a large gulf between those and the Bayesian mindset itself. I think we should assume, by default, that the Bayesian mindset is an early-stage idea that needs a lot of kinks worked out if it’s ever going to become a practical, useful improvement for large numbers of people making decisions (compared to how they would make decisions otherwise, using some ill-defined mix of intuition, social pressure, institutional processes and norms, etc.)\nOverall, I am an enthusiastic advocate for the Bayesian mindset. I think following it has real benefits already, and I expect that as people continue to experiment with it, the set of practices for making the most of it will improve. As long as we don’t conflate “an interesting experiment in gaining certain benefits” with “the correct way to make decisions.”\nAppendix: simple examples of the Bayesian mindset\nExample 1 (repeated from intro). Should I buy travel insurance for $10? I think there's about a 1% chance I'll use it (probability - blue), in which case it will get me a $500 airfare refund (value - red). Since 1% * $500 = $5, I should not buy it for $10.\nExample 2. Should I move to Portland? I think there's about a 50% chance that I'll like it 1x as much (the same) as where I live now; a 40% chance that I'll like it 0.5x as much (i.e., worse); and a 10% chance I'll like it 5x as much (better). Since 50% * 1x + 40% * 0.5x + 10% * 5x = 1.2x, I expect to like Portland 1.2x as much as where I am now. So I'll move. (If you aren't following the math here, see my brief explanation of expected value.)\nExample 3. Should I join two friends who've invited me to hang out (indoors :/ ) during the COVID-19 pandemic (February 2021 as I write this draft)?\nI can estimate that this would mean a 1/2000 chance of getting COVID-19.16\nHow bad is it to get COVID-19? I'd guess it's about a 1/500 chance of dying and losing 50 years (18250 days) of my life; a 10% chance of some unpleasant experience as bad as losing a year (365 days) of my life; a 50% chance of losing about 2 weeks (14 days); and the remaining ~40% of time I expect it to be no big deal (call it about 0 days). \nSo getting COVID-19 is as bad as losing 1/500 * 18250 + 10% * 365 + 50% * 14 + ~40% * 0 =~ 80 days of my life.\nSo joining my friends is about as bad as a 1/2000 chance of losing 80 days, which is like losing about an hour of my life. So I should join my friends if I'd trade an hour of my life for the pleasure of the visit.Footnotes\n There will be examples of connections between specifics parts of “rationalism” and specific aspects of the Bayesian mindset throughout this piece, generally in footnotes.\n Here are a few examples of particularly core posts from Rationality: A-Z that emphasize the general connection to Bayesianism: Rationality: An Introduction, What Do We Mean By “Rationality?”, A Technical Explanation of Technical Explanation. See Twelve Virtues of Rationality for a somewhat “summarizing” post; most of its content could be seen as different implications of adhering to Bayesian belief updating (as well as expected value maximization), both of which are discussed in this piece. ↩\n There is some subtlety here: strictly speaking, you should maximize the expected value of something you care about linearly, such that having N times as much of it is N times as good. So for example, while it’s better to have two functioning kidneys than one, an operation that has a 50% chance of leaving you with 2 functioning kidneys is not at all equivalent - and is a lot worse - than one with a 100% chance of leaving you with 1 functioning kidney. To do EUM, you need to rate every outcome using units you care about linearly. But this should always be possible; for example, you might say that 1 functioning kidney is worth 100 “health points” to you, and 2 functioning kidneys is worth only 101 “health points,” or 1.01x as much. And now you could maximize your “expected health points” and get reasonable results, such as: you’d much rather have a 100% chance of 100 “health points” than a 50% chance of 101. This is essentially how I handle the Portland example above. ↩\n Throughout this post:\n“EUM” refers to making the decision that maximizes your expected value.\n“Bayesian mindset” refers to explicitly writing down your best-guess probabilities and/or values, and using these as tools to decide what to do. \n You could maximize expected value without explicitly thinking that way (for example, you could just have an intuitive judgment about what’s good to do, and it might be right); conversely, you could use the tools of the Bayesian mindset to think about expected value, but ultimately fail to maximize it.\nI've used the term \"Bayesian mindset\" to invoke Bayesian epistemology - in particular, the idea that all beliefs can be expressed as probabilities. This contrasts with other ways of thinking about probability (e.g., frequentism), where one might claim that you can't put a numerical probability on something unless you have some sort of data to ground that probability.By using the term \"Bayesian,\" I'm pointing at the Bayesian side of that debate, and the implication that we can actually write down probabilities even when we have no particular source for them other than our intuitions/beliefs. (I think this captures what's distinctive about Bayesian mindset better than \"expected utility maximization,\" since the latter can be implicit.) I don't talk about Bayes's rule much; it's certainly related, but I haven't seen many cases of people using it explicitly in the sorts of contexts discussed in this post (here's an example of why it's hard to do so). ↩\n This is weird because C is an “irrelevant alternative.” Adding it to your choice set shouldn’t change how you feel about A vs. B. For example, it’s weird if you choose vanilla ice cream when the only choices are vanilla and chocolate, but choose chocolate ice cream when the choices are vanilla, chocolate and strawberry. ↩\n “We have multiple spotlights all shining on the same core mathematical structure, saying dozens of different variants on, ‘If you aren't running around in circles or stepping on your own feet or wantonly giving up things you say you want, we can see your behavior as corresponding to this shape. Conversely, if we can't see your behavior as corresponding to this shape, you must be visibly shooting yourself in the foot.’ Expected utility is the only structure that has this great big family of discovered theorems all saying that. It has a scattering of academic competitors, because academia is academia, but the competitors don't have anything like that mass of spotlights all pointing in the same direction.\n So if we need to pick an interim answer for ‘What kind of quantitative framework should I try to put around my own decision-making, when I'm trying to check if my thoughts make sense?’ or ‘By default and barring special cases, what properties might a sufficiently advanced machine intelligence look to us like it possessed, at least approximately, if we couldn't see it visibly running around in circles?’, then there's pretty much one obvious candidate: Probabilities, utility functions, and expected utility.” ↩\n Starts at the 11:51:55 AM timestamp. It would’ve been more natural to pick a Presidential debate as an example, but all the 2016 and 2020 debates are just too weird. ↩\n Putting aside the “values” part of the equation. ↩\n The idea of making beliefs pay rent is connected to this section in a fairly obvious way.\n A chunk of Rationality: A-Z is about communicating with precision (e.g., 37 Ways That Words Can Be Wrong).\n Prizing beliefs that are precise and “pay rent” seems (for many, including me) to lead naturally to prizing science-based, naturalistic ways of looking at the world. A chunk of Rationality: A-Z is about reconciling the desire for sacred or transcendent experiences with an intense commitment to naturalism, e.g. The Sacred Mundane and Joy in the Merely Real. ↩\n The basic idea here is that if we spend too much money, and this goes badly, the main ways it would ultimately go badly would be either (a) the spending means we need to raise taxes or cut spending later to balance the budget, which hurts growth (hence the “recession” reference); (b) the spending comes from borrowing, which creates too much debt, which leads to a debt crisis later; (c) the debt gets paid off by printing money, which leads to inflation. To do a more sophisticated version of this analysis, you’d want to get finer-grained about how big these effects could be and when. ↩\n See this post for a vivid (if overly aggressive) statement of this idea. ↩\n For example, see:\nConservation of Expected Evidence, which promotes the somewhat counterintuitive (but correct according to this vision) idea that one should generally be as likely to change one’s mind in one direction as another. (If you expect to learn of more evidence for X, you should just adjust your probability of X upwards now.)\nScientific Evidence, Legal Evidence, Rational Evidence and When Science Can't Help, which argue that well-respected standards of evidence are “not fast enough” to come to good probabilities, and sometimes a good Bayesian needs to believe things that don’t meet the “standards of evidence” for these domains.\nThese two posts arguing that one should see issues neither in black-and-white terms (where one side of an argument is certain) nor as a single shade of grey (where all sides are equally indeterminate). In my experience, this is a pretty distinctive property of probability-centric reasoning: instead of saying “X will happen” or “I don’t know,” one says e.g. “There’s a 70% chance X will happen.” ↩\n One can ask: “If the two choices were X outcome and Y outcome, which would be better?”, “What about X outcome vs. a 50% chance of Y outcome?”, etc. In theory, asking enough questions like this should make it possible to quantify how much “better” (or “more choice-worthy”) one outcome is than another. ↩\n My post on digital people gives one example of how this could come about. ↩\n In fact, some parts of the rationalist community don’t emphasize “actually writing down probabilities and values” very much at all (and Rationality: A-Z doesn’t spend much space on guidance for how to do so). Instead, they emphasize various ideas and mental habits that are inspired by the abstract idea of EUM (some of which are discussed in this piece). FWIW, I think to the extent there are people who are trying to take inspiration from the general idea of EUM, while ~never actually doing it, this is probably a mistake. I think it’s important for people who see EUM as an ideal to get some experience trying to do it in practice. ↩\n I actually can say a lot about how I expect this to go wrong, but at previous points in my life, I might’ve said something like this and not been able to say much more. ↩\n Hopefully by the time this piece is public, the risk will be much lower. ↩\n", "url": "https://www.cold-takes.com/the-bayesian-mindset/", "title": "Bayesian Mindset", "source": "cold.takes", "source_type": "blog", "date_published": "2021-12-21", "id": "02afe3ac7fa2c80e4047855f3c3a916e"} -{"text": "\nA long time ago, back in November of 2021, people made some interesting tweets about Cold Takes posts that I wanted to respond to, but I don't tweet. So I decided to try this goofy thing where I excerpt the tweets and give my replies on a Cold Takes page. The vibe I’m going for here is kind of like if someone delivered a sick burn in person, then received a FAX 6 years later saying “Fair point - but have you considered that it takes one to know one?” or something. You can check out Ezra Klein's interesting thoughts on my Rowing, Steering, Anchoring, Equity, Mutiny post (and my brief reply), and my reply to criticism from Will Buckner on my Pre-agriculture gender relations seem bad post. \nAlso, someone with a ~million followers liked my post using a boat as a metaphor for different worldviews because … it made him think of innovations in governance that took place aboard pirate ships? See thread. Not exactly what I was going for, but OK!\nDaily Mail: \"Married couples who meet online are SIX times more likely to divorce in the first three years than those who meet through family or friends, study finds.\" Actual \"paper\": looks like online dating is gaining mostly at the expense of meeting people in bars and other casual situations, and that the 10-year divorce rates are similar, although more of the divorces for online couples are concentrated in the first three years:\nI have many questions about the rigor of this research, just based on the format and the source, but this looks more encouraging than discouraging. (See my previous speculations on relationship quality trends)\nVery interesting tweet with four charts where official forecasts seem to have repeatedly missed in the same direction. I do wish it were better-sourced. The Federal Reserve one and the IEA one would've about matched what I would've independently guessed so there's that.\nWHAT are these basketball passes. They are all from the same game (a real game, a blowout but not officially a stunt show). I was watching and saying \"No?!\" and \"You can't do that\" out loud.\n\"Things I like about jargon.\" (I don't agree with all of this, but generally think jargon terms with relatively simple, findable definitions seem like more of a good idea than a bad idea.)\nMatt Yglesias on AI risk:Also bleak is the artificial intelligence situation. I don’t write takes about how we should all be more worried about an out-of-control AI situation, but that’s because I know several smart people who do write those takes, and unfortunately they do not have much in the way of smart, tractable policy ideas to actually address it. It’s not like pandemics where we have the smart PDF and need to kick Congress’ ass to get them to actually do it. There is no great PDF here, in part because the international dimensions of the problem are very hard to grapple with.\nTo me, it seems important to know that Matt Yglesias thinks this is a serious issue, even though (as I do concede) there is no PDF ready to go!\nReader Damian Tatum sent in a nice \"no need to click\" article (link here) - I just enjoy the rhythm/general sound of this one.\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/cold-links-misc/", "title": "Cold Links: misc", "source": "cold.takes", "source_type": "blog", "date_published": "2021-12-17", "id": "0ebfff1f4c58d3061bae5005fbacb051"} -{"text": "I previously wrote that describing utopia tends to go badly - largely because utopia descriptions naturally sound:\nDull, due to the lack of conflict and challenge. (This is the main pitfall noted in previous work on this subject,1 but I don't think it's the only one.)\nHomogeneous: it's hard to describe a world of people living very differently from each other. To be concrete, utopias tend to emphasize specific lifestyles, daily activities, etc. - and this ends up sounding totalitarian.\nAlien: anything too different from the status quo is going to sound uncomfortable, at least to many.\nIn this post, I'm going to present a framework for visualizing utopia that tries to avoid those problems. Later this week I'll share some links (some from readers) on more specific utopias.\n(Why care? See the beginning of the previous post, or the end of this one.)\nThe basic approach\nI'm not going to try for a highly specific, tangible vision. Other attempts to do that feel dull, homogeneous and alien, and I expect to hit the same problem. I'm also not going to stick to a totally abstract assertion of freedom and choice.\nInstead, I'm going to lay out a set of possible utopias that span a spectrum from conservative (anchored to the status quo) to radical (not anchored).\nAt one end of the spectrum will be a conservative utopia that is presented as \"the status quo plus a contained, specific set of changes.\" It will avoid sounding dull, homogeneous and alien, because it will be presented as largely similar to today's world. And it will be a clear improvement on the status quo. But it won't sound amazing or inspiring.\nAt the other end will be a radical utopia that doesn't aim to resemble today's world at all. It will sound more \"perfect,\" but also more scary in the usual ways. \nI don't expect any single point on my spectrum to sound very satisfying, but I hope to help the reader find (and visualize) some point on this spectrum that they (the reader) would find to be a satisfying utopia. The idea is to give a feel for the tradeoff between these two ends of the spectrum (conservatism and radicalism), so that envisioning utopia feels like a matter of finding the right balance rather than like a sheer impossibility.\nI'll first lay out the two extreme ends of the spectrum: the maximally \"conservative\" utopia, and the maximally \"radical\" one. I'll then describe a couple of possible intermediate points.\nNote that I will generally be assuming as much wealth, technological advancement and policy improvements as are needed to make everything I describe feasible. I believe that everything described below has at least a decent chance of eventually being feasible (nothing contradicts the laws of physics, etc.) But I'm certainly not trying to say that any of these utopias could be achieved today. If something I describe sounds impossible to you, you may want to check out my discussions of digital people.\nThe maximally conservative utopia: status quo minus clearly-bad things\nThis isn't really a utopia in the traditional sense. It's trying to lay out one end of a spectrum.\nStart here: \nIn this world, everything is exactly like the status quo, with one exception: cancer does not exist.\nIt may not be very exciting, but it's hard to argue with the claim that this would be better than the world as it is today. \nThis is basically the most conservative utopia I can come up with, because the only change it proposes is a change that I think we can all get on board with, without hesitation. Most proposed changes to the world would make at least some people uncomfortable (no inequality? No sadness?), but this one shouldn't. If we got rid of cancer, we'd still have death, we'd still have suffering, we'd still have struggle, etc. - we just wouldn't have cancer.\nYou can almost certainly improve this utopia further by taking more baby-steps along the same lines. Make a list of things that - like cancer - you think are just unambiguously bad, and would be happy to see no more of in the world. Then define utopia as \"exactly like the status quo, except that all the things on my list don't exist.\" Examples could include:\nOther diseases\nHunger\nNon-consensual violence (not including e.g. martial arts, in which two people agree to a set of rules that allows specific forms of violence for a set period of time). \nRacism, sexism, etc.\n\"Status quo, minus everything on my list\" is a highly conservative utopia. Unlike literary utopias, it should be fairly clear that this world would be a major improvement on the world as it is.\nI note that in my survey on fictional utopias, it was much easier to get widespread agreement (high average scores) for properties of utopia than for full utopian visions. For example, while no utopia description scored as high as 4 on a 5-point scale, the following properties all scored 4.5 or higher: \"no one goes hungry\", \"there is no violent conflict,\" \"there is no discrimination by race or gender.\"\nThe maximally radical utopia: pure pleasure\nAll the way at the radical end of the spectrum, there's a utopia that makes no attempt at preserving anything about the status quo, and instead consists of everyone being in a state of maximum pleasure/happiness at all times.\nThere are a number of ways of fleshing this out, as discussed in this Slate Star Codex post. The happiness could be a stupor of simple pleasure, or it could be \"equanimity, acceptance and enlightenment,\" or it could be some particular nice moment repeated over and over again forever (with no memory of the past available to make it boring).\nThis \"maximally radical utopia\" is rarely even discussed in conversations about utopia, since it is so unappealing to so many. (Indeed, I think many see it as a dystopia). It's off-the-charts dull, homogeneous, and alien. I provide it here not as a tangible proposal that I expect to appeal to readers, but as a way of filling out the full spectrum from conservative to radical utopia. \nAn in-between point, erring conservative\nHere's a world that I'd be excited about, compared to today, even if I think we can do better (and I do). \nIn this world, technological advances have made it possible to create much more ambitious art, entertainment, and games than is possible today. \nFor example:\nOne artistic creation might work as follows. The \"viewer\" enters into a realistic, detailed virtual recreation of some time in the 20th century. They experience the first ~50 years of a particular (fictional) person's life. Around age 25, they fall in love and get married. For the next 25 years, their marriage goes through many ups and downs, but overall is a highlight of their life. Then around age 50, their relationship slowly and painfully falls apart. Shortly following their divorce, they wander into a bar playing live music, and they hear a song playing that perfectly speaks to the moment. At this point, the simulation ends. This piece is referred to a \"song,\" and evaluated as such.\nAnother artistic creation might have a similarly elaborate setup for a brilliantly made and perfectly timed meal, and be referred to as a \"sandwich.\"\nThere are also \"games\" in virtual environments. In these games, people can compete using abilities that would be unrealistic in today's world. For example, there might be a virtual war that is entirely realistic, except that it poses no actual danger to the participants (people who are injured or killed simply exit the \"game\"). There might be a virtual NBA game in which each participant plays as an NBA player, and experiences what it's like to have that player's abilities.\nEveryone in this world has the ability to:\nSubsist in good health, unconditionally. There is no need to work for one's food or medical care, and violence does no permanent damage.\nHave physical autonomy over their body and property. Nobody can be physically forced by someone else to do anything, with the exception that people are able to restrict who is able to enter their space and use their art/entertainment/games.\nSpend their time designing art, entertainment, or games, or collaborating with others designing these things, or engage in scientific inquiry about whatever mysteries of the universe exist at the time.\nSpend their time consuming art, entertainment, games or scientific insights produced by others.\nAdditionally, everyone in this world has a level of property and resources that allows them to be materially comfortable and make art/entertainment/games/science along the lines of the above, if they choose to. That said, people are able to trade relatively freely, subject to not going below some minimal level of resources. People who work on creating popular art/entertainment/games/insights accumulate more resources that they're able to use for more creation, promotion, etc.\nIn this world, the following patterns emerge:\nThere are a wide variety of different types of \"careers.\" Some people focus on producing art/entertainment/games/scientific insight. Others participate in supporting others' work: promoting it, managing its finances, performing needed repairs, etc. (Creators who can't get others excited enough to help them with these parts of the job just have to do these parts of the job themselves.) Others are pure \"consumers\" and do not take part in creation. Between these options, there is some option that is at least reasonably similar to the majority of careers that exist today.\nThere is a wide variety of tastes. Some art/entertainment/games/lines of inquiry have large, passionate fan bases, but none are universally liked. As a result, people have arguments about the relative merits of different art/entertainment/games/insights; they experience competitiveness, jealousy, etc.; they often (though by no means always) make friends with people who share their tastes, make fun of those who don't, etc.\nMany people want to be involved in creating art/entertainment/games with a large, passionate fan base. And many people want to be a well-regarded critic, or repair person, or \"e-athlete\" (someone who performs well in a particular game), or scientist. Not everyone succeeds at these ambitions. As a result, many people experience nervousness, disappointment, etc. about their careers.\nMost of today's dynamics with meeting romantic partners, raising families, practicing religion, etc. still seem applicable here.\nThis utopia is significantly more \"radical\" than the maximally conservative utopia. It envisions getting rid of significant parts of today's economy. I imagine that doing so would change the political stakes of many things as well: there would still be inequality and unfairness, but nobody would be reliant on either the government or any company for the ability to be comfortable, healthy and autonomous.\nBut it's still a fairly \"conservative\" utopia in the sense that it seeks to preserve most of the things about today's world that we might miss if we changed them. There is still property, wealth and inequality; there is still competition; most of the social phenomena that we're accustomed to still exist (jealousy, pettiness, mockery, cliques, etc.) Not all of the careers that exist today exist in this world, but it's hopefully still pretty easy to picture a job that is \"similar enough\" to any job you'd hope would stick around. Whatever kind of life you have and would like to keep, it's hopefully possible to see how you could keep most if not all of what you like about it in this world. \n(I expect some readers to instinctively react, \"It's nice that there would still be jobs in this world, but working on art, entertainment, games and science isn't good enough - I want to do something more meaningful than that, like saving lives.\" But most people today don't work on something like saving lives, and as far as I can tell, the ones that do aren't more happy or fulfilled in any noticeable way than the ones that don't.)\nI expect most readers will see this world as far short of the ideal. But I also expect that most will see how this world - or something like it - could be a fairly robust improvement on the status quo.\nAnother in-between point, erring more radical\nThis world is similar to the one described just above. The main difference is that, through meditation and other practices like it, nearly everyone has achieved significantly greater emotional equanimity. \nPeople consume and produce advanced art/entertainment/games/science as in the above world, and most of the careers that exist today have some reasonably close analogue. However, people experience far less suffering when they fail to achieve their goals, experience far less jealousy of others, are less inclined to look down on others, have generally more positive attitudes, etc.\nThis utopia takes a deliberate step in the radical direction: it cuts down on some of the conflict- and suffering-driven aspects of life that were preserved in the previous one. In my view, it rather predictably has a bit more of the dull, homogeneous, alien feel.\nA \"meta\" option\nThis one leans especially hard on things digital people would be able to do.\nIn this world, there is a waiting room with four doors. Each door goes to a different self-contained mini-world.\nDoor #1 goes to the maximally conservative utopia: just like today's world, minus, say, disease, hunger, non-consensual violence, racism, and sexism.Door #2 goes to the maximally radical utopia: everyone lives in constant pleasure (or \"equanimity, acceptance and enlightenment\"), untroubled by boredom or material needs.\nDoor #3 goes to the moderately conservative utopia described above. Material needs are met; people produce and consume advanced art, entertainment, games and science; there are many different careers; and most of the careers and social dynamics that exist today have some reasonably close analogue.\nDoor #4 goes to the moderately radical utopia described above. It is similar to Door #3 but with greater emotional equanimity, less suffering, less jealousy, more positive attitudes, etc. \nEach citizen of this world starts in the waiting room and chooses:\n1 - A door to walk through.\n2 - A protocol for reevaluating the choice. For example, they might choose: \"I will remember at all times that I have the ability to return to the waiting room and choose another door, and I can do so at any time by silently reciting my pass code.\" Or they might choose: \"I will not remember that I have the ability to return, but after 10 years, I will find myself in the waiting room again, with the option to return to my life as it was or choose another door.\"2\nTheir natural lifespans are at least long enough to have about two 60-70 year tries behind each door if they so choose. (Perhaps much more.)\nFinally: anyone can design an alternate utopia to be added to the list of four. This alternate utopia can itself be a \"meta-utopia,\" e.g., by containing its own version of the \"waiting room\" protocol.\nNow that I've laid out a few points on the spectrum, this utopia makes a move in the abstract direction, emphasizing choice. \nPersonally, I find this utopia to feel somewhat plausible and satisfying, even though I wouldn't say that of any of the four utopias that it's sampling from.\nIs a utopia possible?\nAs stated above, I don't expect any of the options I've given to sound like a fully satisfying utopia, to most readers. I expect each one to sound either too conservative (better than the status quo, but not good enough) or too radical (too much risk of losing parts of our current world that we value).\nWhat I've tried to do is give the reader an idea of the full spectrum from maximally conservative to maximally radical utopias; convince them that there is an inherent tradeoff here, which can explain the difficulty of describing a fully satisfying utopia; and convince them that there is some point, somewhere on this spectrum, that they would find to be a satisfying utopia. \nThere isn't necessarily any particular world that everyone could agree on as a utopia. For example, some people think it is important to get rid of economic inequality, while some think it's important to preserve it. Perhaps a world where everyone chooses their own mini-world to enter (such as the \"meta\" option above)3 could work for everyone. Perhaps not.\nIn real life, we aren't going to design a utopia from first principles and then build it. Instead, hopefully, we will improve the world slowly, iteratively, and via a large number of individual decisions. Maybe at some point it will become relatively easy for lots of people to attain vast emotional equanimity, and a large number but not everyone will, and then there will be a wave of sentiment that this is making the world worse and robbing it of a lot of what makes it interesting and valuable, and then some fraction of the world will decide not to go down that road. \nThis is the dynamic by which the world has gotten better to date. There have been lots of experiments that didn't take off, and some social changes that look far better in retrospect than someone from 300 years ago would have expected. Many things about the modern world would horrify someone from 300 years ago, but most changes have been fairly gradual, and someone who got to experience all 300 years one step at a time might feel okay about it. \nTo give an example from the more recent past:\nSocial norms around sex have in some sense gotten closer to what Brave New World feared (see previous discussion): many people in many contexts treat sex about as casually as the Brave New World characters do. \nBut in other respects, we haven't moved much in the Brave New World direction - people who want to be monogamous still don't generally face pressure (and certainly not coercion) against doing so - and overall it seems that the changes are more positive than negative. \nThis is consistent with the general patterns discussed above: many changes sound bad when we imagine everyone making them, but are better when different people get to make different choices and move a step at a time. \nI think this explains some of why \"radical\" utopias don't appeal: it seems entirely justified to resist the idea of a substantially different world when one hasn't been through an iterative process for arriving at it.\nI consider the real-life method for \"choosing utopia\" to be much better than the method of dreaming up utopias and arguing about them. So if, today, you can start to dimly imagine the outline of a utopia you'd find satisfying, I'd think you should assume that if all goes well (<- far from a given!) the real-life utopia will be far better than that.\nSo?\nRegardless of the ability or inability to agree on a utopian vision, I expect that most people reading this will agree that the world can and should get better year by year. I also expect them to agree on many of the specifics of what \"better\" means in the short run: less poverty, less disease, less (and less consequential) racism and sexism, etc. \nSo why do our views on utopias matter? And in particular, what good has this piece done if it hasn't even laid out a specific vision that I expect to be satisfying to most people?\nI care about this topic for a couple of reasons.\nFirst, I think short-term metrics for humanity are not enough. While I strongly support aiming for less poverty every year, I think we should also place enormous value on humanity's long-run future. \nPart of why this matters is that I believe the long-run future could come surprisingly quickly. But even if we put that aside, we should value preventing the worst catastrophes far more than we would if we only cared about those directly affected - because we should believe that a glorious future for humanity is possible, and that losing it is a special kind of tragedy.\nWhen every attempt to describe that glorious future sounds unappealing, it's tempting to write off the whole exercise and turn one's attention to nearer-term and/or less ambitious goals. I've hypothesized why attempts to describe a glorious future tend not to sound good - and further hypothesized that this does not mean the glorious future isn't there. We may not be able to describe it satisfyingly now, or to agree on it now, and we may have to get there one step at a time - but it is a real possibility, and we should care a lot about things that threaten to cut off that possibility.\nSecond, I believe that increasing humanity's long-run knowledge and empowerment has a lot of upside (as well as downside). \nThere is a school of thought that sees scientific and technological advances as neutral, or even negative. The idea is that even if we had all the power in the world, we couldn't use it to make the world better. Like the citizens of Brave New World, we're our own prisoners: if we successfully solve some problems (such as making ourselves happier) we'll create just as many to offset them (such as by losing the conflict and complexity that give life meaning). Some people concede that the poverty reduction we've seen to date is good, but think that once we reach a certain level of wealth, further advances won't help.\nI think this is a tempting worldview, in an age when most futurism is found in fiction, and the dystopias are acclaimed masterworks while the utopias are creepy slogs. But ultimately I think this dynamic tells us more about the challenges of using our imagination than it does about the reality of utopia.\nPersonally, I don't consider myself able to imagine a utopia very effectively. But I do feel convinced at a gut level that with time and incremental steps, we can build one. I think this particular \"faith in the unseen\" is ultimately rational and correct. I hope I've made a case that the oddities of describing a utopia need not stop us from achieving one.\nNext in series: Utopia linksFootnotes\n E.g., https://www.orwellfoundation.com/the-orwell-foundation/orwell/essays-and-other-works/can-socialists-be-happy/  ↩\n People who choose to remember the existence of the waiting room aren't able to tell - or at least, aren't reliably able to convince - people who don't, to protect the latter's choice not to remember. ↩\n See also this post on the \"Archipelago\" idea. ↩\n", "url": "https://www.cold-takes.com/visualizing-utopia/", "title": "Visualizing Utopia", "source": "cold.takes", "source_type": "blog", "date_published": "2021-12-14", "id": "0e6e64ad70779274c54d0cfe6a4a2aec"} -{"text": "\nInspired by One Billion Americans, which I have “read.” (I know the title and what people say about it)\nI want to watch a space opera with a number of interplanetary civilizations locked in a centuries-long death match, knowing that whoever can wipe the others out will control the cosmos forever, and knowing that their odds of supremacy pretty much boil down to recruiting scientists, soldiers, etc. to leave the other civilizations and immigrate to theirs.1\nSo like: the Va-arlox spend most of their energy thinking about how they can make their planets more welcoming and appealing for the J’ghaar, how they can create enticing compensation and benefits offers, how they can phrase their civilizational values and goals in terms that J’ghaar can feel OK with, how to run successful missions to make friends with J’ghaar and convince them to relocate to Va-arlox planets, etc. etc. When they get a bunch of J’ghaar to immigrate, they high-five knowing that they’ve just pumped up their long-term prospects of having more scientists, more soldiers, etc. and winning the interplanetary war.\n(Hmm, minus the war part, this may be a world I’d like to live in as well as a show I’d like to watch.)\nThe battle scenes are pretty short. Whoever’s ahead in immigration generally has better and more spaceships, so they win the battle pretty easily.\nMy favorite episode is the one where a few Va-arlox fall into a wormhole and end up on today’s Earth, in the U.S. They visit the Pentagon and initially feel right at home with all the people furiously brainstorming about how to achieve American hegemony over China, Russia, etc. But then they ask some curious questions like “So how are you getting more Chinese people to relocate to America?” and they hear “Oh that’s not the goal at all, in fact we turn away almost everyone who wants to come here.” And they’re SO confused, the look on their faces is the best.2\nFootnotes\n If you’re asking “Wait, why would someone immigrate to an alien civilization that is enemies with their home civilization?” think about real life instead of existing sci-fi shows. This is totally something individuals would do. ↩\n I focused on military consequences of immigration because sci-fi shows are usually about war, not because I think this is the only or best reason to increase immigration. ↩\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/tv-shows-i-wish-i-could-watch-intergalactic-immigration-wars/", "title": "TV shows I wish I could watch: Intergalactic Immigration Wars", "source": "cold.takes", "source_type": "blog", "date_published": "2021-12-10", "id": "bc066dd0a16fb881eda00cb8ba92e7e4"} -{"text": "A kind of discourse that seems pretty dead these days is: describing and/or debating potential utopias.\nBy \"potential utopias,\" I specifically mean visions of potential futures much better than the present, taking advantage of hopefully greater wealth, knowledge, technology, etc. than we have today. (I don't mean proposals to create alternative communities today, or claims about what could be achieved with policy changes today,1 though those can be interesting too.)\nIt seems to me that there's little-to-no discussion or description of potential utopias (there's much more attention to potential dystopias), and even the idea of utopia is widely mocked. (More on this below.) As someone interested in taking a very long-run perspective on humanity's past and future, this bothers me:\nWhen thinking about the value of ensuring that humanity continues to exist and/or successfully navigating what could be the most important century, it seems important to consider how good things could be if they go well, not just how bad things could be if they go poorly. \nIt seems pretty self-evident to me that failing to really consider both sides will skew our decision making in some ways. \n \nIn particular, I think it's liable to make us fail to feel the full importance of what's at stake. If the idea of describing or visualizing utopia seems absurd, it's more tempting than it should be to write the whole question of humanity's long-term trajectory off and think only about shorter-term matters. \nSpeaking more vaguely, it just doesn't seem great to have very-long-run goals be absent from discussion about how things are going in the world and what ought to change. I know there's an argument that \"The long-run future is too hard to reason about, so we should focus on the next few years,\" but I also resonate with the general idea that \"plans are useless, planning is indispensable.\"\nBelow, I'll:\nDescribe some of my own experiences looking for discussions of potential utopias, and the general contempt I perceive for such things in modern discourse.\nHypothesize that one of the main blockers to describing utopia is that it's inherently difficult to describe in an appealing way. Pretty much by their nature, utopias tend to sound dull (due to the lack of conflict), homogeneous (since it's hard to describe a world of people living very differently from each other) and alien (anything very different from the status quo is just going to rub lots of people the wrong way).\nLook at Brave New World - which presents a \"supposed utopia\" as a dystopia - through this lens.\nIn the next post, I'll give a framework for visualizing utopia that tries to avoid the problems above.\nUtopia is very \"out\"\nA few years ago, I tried to collect different visions for utopia from existing writings, and use Mechanical Turk polling to see how broadly appealing they are. (My results are available here.) I learned a number of things from this exercise:\nI looked for academic fields studying utopia. I hoped I would find something in the social sciences: for example, analyzing what sorts of social arrangements might work well under the assumption of greatly increased wealth and improved technology, or finding data on what sorts of utopian descriptions appeal most to different sorts of people. However, the only relevant-seeming academic field I found (Utopian Studies) is rooted in literary criticism rather than social science. \nThe main compilation I found for utopian visions, Claeys and Sargent's Utopia Reader, is nearly all taken from fiction, especially the readings from the 20th century.\nMost of the \"utopia\" descriptions I found there are very old, and are quite unappealing to me personally. In recent work, dystopia seems to be a more common topic than utopia. (Both dystopia and utopia are both considered part of \"utopian studies\").\nWhen I tried testing the most appealing utopias - as well as some I came up with myself - by surveying several hundred people (using Positly), none scored very well. (Specifically, none got an average as high as 4 on a 5-point scale).\nI attended the Society for Utopian Studies's annual conference. This was the only conference I could find focused on discussing utopia or something like it. It was a very small conference, and most of the people there were literary scholars who had a paper or two on utopia but didn't heavily specialize in it. I asked a number of people why they had come, and a common answer was \"It was close by.\"\nA lot of the discussion revolved around dystopia. When people did discuss utopia, I often had the sense that \"utopia\" and \"utopian\" were being used as pejorative terms - their meaning was something like \"Naive enough to think one knows how the world should be set up.\" One person said they associated the idea of utopia with totalitarianism. \nRather than excitement about imagining designing utopias, the main vibe was critical examination of why one would do such a thing. I think that people thought that the analysis I'd done - using opinion polling to determine whether any utopias are broadly appealing to people - was pretty goofy, though this could've been for a number of reasons (such as that it is).\nIn a world with a large thriving social science literature devoted to auction theory, shouldn't there be at least a few dozen papers engaged in a serious debate over where we're hoping our society is going to go in the long run?\nWhy is utopia unpopular?\nIf I'm right that there's little-to-no serious discussion of potential utopias (and general contempt for the idea) in today's discourse, there are a number of possible reasons.\n\"Ends justify the means\" worries? One reason might be the idea that aiming at utopia inevitably leads to \"ends justify the means\" thinking - e.g., believing that it's worth any amount of violence/foul play to get a chance at getting the world toward utopia. \nThis might be based on the history of Communism in the 20th century and/or the writings of people like Karl Popper and Isaiah Berlin.2\nI'm not sure I understand the reasoning here: it also seems \"risky\" in this way to have strong views (as many do) about how people should live their lives and what should be legal/illegal today. The idea that policy has high stakes and is worth fighting over seems pretty widespread, and not so scorned. (Also, Communism itself seems much more warmly received in modern discourse than utopia.)\nI'm generally against \"ends justify the means\" type reasoning, whether about the long-run future or about the present. Many people focused on the present seem happy with \"ends justify the means\" type reasoning. So it seems to me that this is just a different topic from visualizing utopia.\nThis piece focuses on a different possible reason for utopia's lack of popularity: past attempts to describe utopia generally (universally?) sound unappealing. \nThis isn't just because utopia makes poor entertainment. For example, take these excerpts from Wikipedia's summary of Walden Two, a relatively recent and intellectually serious attempt at utopia:\nEach member of the community is apparently self-motivated, with an amazingly relaxed work schedule of only four average hours of work a day, directly supporting the common good and accompanied by the freedom to select a fresh new place to work each day. The members then use the large remainder of their time to engage in creative or recreational activities of their own choosing. The only money is a simple system of points that buys greater leisure periods in exchange for less desirable labor. Members automatically receive ample food and sleep, with higher needs met by nurturing one's artistic, intellectual, and athletic interests, ranging from music to literature and from chess to tennis.\nIn one sense, each individual sentence of this sounds like an improvement on life today, at least for most people. And yet when I picture this world, I can't help but picture ... seeing fake-seeming smiles everywhere? Half-heartedly playing tennis while thinking \"What's it all for?\" Feeling a vague, ominous pressure not to complain? \nAnd it gets worse as it gets more specific: \nAs Burris and the other visitors tour the grounds, they discover that certain radically unusual customs have been established in Walden Two, quite bizarre to the American mainstream, but showing apparent success in the long run. Some of these customs include that children are raised communally, families are non-nuclear ...\nNow I'm picturing having to be friends with everyone I don't like ...\n ... free affection is the norm ...\nThat isn't helping.\n... and personal expressions of thanks are taboo.\nSuch behavior is mandated by the community's individually self-enforced \"Walden Code\", a guideline for self-control techniques, which encourages members to credit all individual and other achievements to the larger community, while requiring minimal strain. Community counselors are also available to supervise behavior and assist members with better understanding and following the Code.\nAnd now it's sounding like an almost dead ringer for Brave New World, a dystopia written more than 10 years prior. Actually, it doesn't sound all that far off from One Flew Over the Cuckoo's Nest. I'm basically imagining a world where we're all either brainwashed, or forced into conformity while pretending that we're freely and enthusiastically doing what we please. The comments about \"individual self-enforcement\" and lack of physical force just make me imagine that all my cooperative friends and I don't know what the source of the enforcement is - only that everyone we know seems pretty scared to challenge whatever it is.\nI don't think this is a one-off. I think it's a common pattern for descriptions of utopia to feel either vague and boring, or oppressive and scary, if not both. The utopian visions that I perceive as most successful today are probably Star Trek and Iain M. Bank's \"Culture novels,\" but both of these seem to revolve around advanced civilizations interacting with hostile ones, such that most of the action is taking place in the context of the (very non-utopian) latter.\nBut the world can get better, right? \nWhat is it about describing a vastly improved world that goes so badly?\nUtopias sound dull, homogeneous and alien\nWhen one describes a utopia in great detail, I think there tend to be a few common ways in which it sounds unattractive: it tends to sound dull, homogeneous and alien.3\nDull. Challenges and conflict are an important part of life. We derive satisfaction and meaning from overcoming them, or just contending with them. \nAlso, a major source of value in life is our relationships, and we often form and maintain relationships with the help of some form of conflict. \nHumor is an important part of relationships, and humor is often (usually?) at someone's expense. \nWorking together to overcome challenges - or sometimes, just suffer them - can be an important way of bonding. \nIf you read guides to writing fictional characters who seem relatable, compelling and interesting to the reader, you'll often see conflict and plot stressed as essential elements for accomplishing this.\nWhen I think about my life as it is today, I think a lot about the things I'm hopeful and nervous about, and the past challenges I've overcome or gotten through. When I picture most utopias, there doesn't seem to be as much room for hope and fear and challenge. That may also mean that I'm instinctively imagining that my relationships aren't the same way they are now.\n(This \"dullness\" property seems closest to the one gestured at in George Orwell's 1948 essay on why utopias don't sound appealing.)\nHomogeneous. Today's world has a large number of different sorts of people living different sorts of lives. It's hard to paint a specific utopian picture that accommodates this kind of diversity. \nA specific utopian picture tends to emphasize particular lifestyles, daily activities, etc. - but a particular lifestyle will generally appeal to only a small fraction of the population.\nThis might be why utopias often have a \"totalitarian\" feel. It might also explain why there is perhaps more literature on \"dystopias calling themselves utopias\" (e.g., Brave New World, The Giver) than on utopias. If you take any significant change in lifestyle or beliefs and imagine it applying to everyone, it's going to sound like individual choice and diversity are greatly reduced. \nAlien. More generally, we tend to value a lot of things about our current lives - not all of which we can easily name or describe. The world we live in is rich and complex in a way that it's hard for a fictional world to be. So if we imagine ourselves in a fictional world, it's often going to feel like something is missing.\nI think most people have a significant degree of \"conservatism\" (here I'm using the term broadly rather than in a US political context). We improve things one step at a time, rather than by tearing everything down and building it back up from scratch. When a world that is \"too many steps away\" is described, it's hard to picture it or be comfortable with it. \nI think a description of today's world could easily sound like a horrible dystopia to the vast majority of people living 1000 years ago (or even 100 or 50), even though today's world is, in fact, probably much better on the whole.\nUtopia as dystopia: Brave New World\nIt's interesting to look at a dystopian novel like Brave New World through the \"dull, homogeneous and alien\" lens. \nBrave New World presents a world of advanced technology, great wealth, and peace, which has enabled society to arrange itself as it wants to. These are conditions that \"ought to\" engender a utopia - and in fact many of the characters loudly proclaim their world to be wonderful - but that instead results in a dystopia. This \"utopia as dystopia\" formula is reasonably common (other fiction in this vein includes Gattaca and The Giver). \nBrave New World heavily emphasizes homogeneity and lack of choice:\nAll children are genetically engineered and raised by the state.\nNot only has monogamy disappeared entirely, but it seems all romantic choice has disappeared as well:\n“Has any of you been compelled to live through a long time-interval between the consciousness of a desire and its fulfilment?” \n“Well,” began one of the boys, and hesitated.\n“Speak up,” said the D.H.C. “Don’t keep his fordship waiting.”\n“I once had to wait nearly four weeks before a girl I wanted would let me have her.” \n“And you felt a strong emotion in consequence?”\n“Horrible!”\n“Horrible; precisely,” said the Controller. “Our ancestors were so stupid and short-sighted that when the first reformers came along and offered to deliver them from those horrible emotions, they wouldn’t have anything to do with them.”\n \nThere's also a strong alien vibe created by this sort of thing, as people disparage things that are extremely basic parts of our lives, like bad emotions. (The people in scenes like this also just talk in a very strange, wooden way.) \nBrave New World also heavily emphasizes a lack of conflict (implying a dull world):\n“But I don’t want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.”\n“In fact,” said Mustapha Mond, “you’re claiming the right to be unhappy.”\n“All right then,” said the Savage defiantly, “I’m claiming the right to be unhappy.” \n“Not to mention the right to grow old and ugly and impotent; the right to have syphilis and cancer; the right to have too little to eat; the right to be lousy; the right to live in constant apprehension of what may happen tomorrow; the right to catch typhoid; the right to be tortured by unspeakable pains of every kind.” There was a long silence. \n“I claim them all,” said the Savage at last. [Including typhoid! - Ed.]\n \nBrave New World is often thought of as clever for the way it transmutes a utopia into a dystopia. But maybe that's the kind of transmutation that writes itself. Describe a future world in enough detail and you already have people worried about dullness, homogeneity, and alienness. Brave New World amplifies this with various incredulous quotes to demonstrate just how homogeneous and conflict-free everything is, and by describing the government policies that enforce such a thing.\n\"Abstract\" utopias\nTo lay out a utopian vision that avoids the problems above, one might try presenting a more abstract vision, emphasizing freedom and individual choice and avoiding giving a single \"picture of what daily life is like.\" By being less specific, one can allow the reader to imagine that they'll keep a lot of what they like about their current life, instead of imagining that they'll be part of a homogeneous herd doing something very unfamiliar.\nAn example of this approach4 is Robert Nozick's Anarchy, State and Utopia.5 To take Wikipedia's summary:\nThe utopia ... is a meta-utopia, a framework for voluntary migration between utopias tending towards worlds in which everybody benefits from everybody else's presence ... The state protects individual rights and makes sure that contracts and other market transactions are voluntary ... the only form of social union that is possible [is] fully voluntary associations of mutual benefit ... In Nozick's utopia if people are not happy with the society they are in they can leave and start their own community.\nI note that in my paper, the utopia that scored best among survey respondents was reminiscent of Nozick's: \nEverything is set up to give people freedom. If you aren't interfering with someone else's life, you can do whatever you want. People can sell anything, buy anything, choose their daily activities, and choose the education their children receive. Thanks to advanced technology and wealth, in this world everyone can afford whatever they want (education, food, housing, entertainment, etc.) Everyone feels happy, wealthy, and fulfilled, with strong friendships and daily activities that they enjoy.\n(This was not what I expected to be the highest-scoring option, given that the survey population overwhelmingly identifies with the political left. By contrast, the \"government-focused utopia\" I wrote performed horribly.)\nBut this kind of \"abstract\" utopia has another issue: it's hard to picture, so it isn't very compelling. \nI think this points to a kind of paradox at the heart of trying to lay out a utopian vision. You can emphasize the abstract idea of choice, but then your utopia will feel very non-evocative and hard to picture. Or you can try to be more specific, concrete and visualizable. But then the vision risks feeling dull, homogeneous and alien.\nDon't give up\nMy view is that utopias are hard to describe because of structural issues with describing them - not because the idea of utopia is fundamentally doomed.\n In the next post, Visualizing Utopia, I try to back this up by offering a framework for visualizing utopia that hopefully resists - or at least addresses - the \"dull, homogeneous, and alien\" trap. Click here to read it.Footnotes\n I'd put Utopia for Realists in the latter category. ↩\n I'm not deeply familiar with their arguments, but here are some links giving a feel for them:\nhttps://philosophicaldisquisitions.blogspot.com/2018/01/poppers-critique-of-utopianism-and.html\nhttps://www.goodreads.com/quotes/6754011-the-utopian-attempt-to-realize-an-ideal-state-using-a\nhttps://www.tandfonline.com/doi/abs/10.1080/13698230008403313  ↩\nSome people try to get around this by describing utopia more abstractly. I'll address that later. ↩\nAnother example: Nick Bostrom's Letter from Utopia. ↩\nI am not saying that Nozick's utopian vision is fully satisfying, and I certainly don't agree with Nozick's politics overall. I'm just noting that it has some appealing features relative to the more specific utopias discussed above. ↩\n", "url": "https://www.cold-takes.com/why-describing-utopia-goes-badly/", "title": "Why Describing Utopia Goes Badly", "source": "cold.takes", "source_type": "blog", "date_published": "2021-12-07", "id": "3484be32547d35a2398756dc3463a6ef"} -{"text": "\nWill the FDA (A) treat Omicron-specific booster vaccines like flu shots, or will it (B) impose more requirements and slow them down?\nSo far, it looks like the default is (B), and it also looks like Omicron has a significant chance of making the coming months at least as painful as the debut of COVID-19 (due to its high infectiousness and potential for major vaccine escape). The hypothesis here is that switching to (A) is the most promising single way to reduce that chance, by getting a fully effective vaccine for Omicron ASAP. \nThere's lots of uncertainty - Omicron might turn out to be mild (at least in vaccinated people), might turn out to be less scary than expected in other ways (less infectiousness, less vaccine escape), or manufacturing might be a big enough bottleneck that we're too slow even under option (A). But still, when you think about how bad a redux of early 2020 would be, (A) vs. (B) seems to have massive stakes riding on it. If there’s anything you can imagine yourself doing to move us from a default of (B) to a world of (A), I think it would be good to drop whatever else is on your plate long enough to do that.\nYou're done with the important part of the post. Here's a bit more detail\nThis post from a Metaculus forecaster (“one of the best Covid forecasters there is” according to this tweet, which might be based on some quantitative metrics I haven’t easily found or might just not be right) reasons in detail to claims very much like the ones above. In particular, it concludes:\nThis variant is much fitter than Delta and, as such, will overtake it almost everywhere by sometime in early 2022. Omicron will result in a much greater decrease in vaccine efficacy than anything we've seen so far. Vaccines will probably end up being updated to target Omicron, but even under optimistic scenarios Omicron-specific boosters won't arrive until sometime after the first Omicron-driven wave hits. Omicron-driven waves will peak fast and high. There is good reason to think the drop in vaccine efficacy against severe disease won't be nearly as drastic as the decrease in efficacy against infection or symptomatic disease, but it will still be a large drop …\nBy far my most important recommendation is the one that is unfortunately the least likely to be heeded: given what we already know, vaccine manufacturers should immediately switch all production to begin making Omicron-specific doses — and governments should commit in advance (now!) to buying these doses. However, this doesn't seem like it'll happen …\nHere are thoughts from the excellent Michael Mina over email:We should do the least necessary checks of the new variant vaccines. Put them into 20 people and make sure they elicit the desired immune responses. Do NOT do any sort of efficacy study and these vaccines should be fast tracked like flu shots. \nUnfortunately FDA has essentially no ability to balance the cost of slowness and the cost of inaction with the benefit of action. The FDA viewpoint is that inaction and whatever cost comes from that is not on them. They are used to a system where it’s better to do nothing than act with any uncertainty. But that’s Bc the FDA is not designed for emergencies. It just isn’t. It is horribly inefficient and unable to effectively make calculations around public health vs medicine. \nTo this day we still do not have a regulatory framework for products that have as a base use one of public health. Vaccines elicit ideas of public health but ultimately are evaluated and regulated as medicine. As far as safety this is important. But as far as efficacy and the regulatory approaches and data required, it’s entirely around individual benefit. Which at this point I hope everyone recognizes that’s the wrong angle in a pandemic. \nFor example, we knew that a single dose vaccine would yield 90% or more protection from severe disease for at least a few months, yet we withheld first doses in order to give people second doses and importantly we have those second doses in a suboptimal manner just Bc that’s the hard data we had. But the soft data (the data from decades of immunology research across the world) allowed us to know that spacing the vaccines months apart would have been better both for individuals and for public health. We didn’t do that. \nI’m not going to give a more detailed defense of my position for now - this is a snap take, and I expect that it will hold up pretty well if one digs into the claims, but I don’t have the time to write it up thoroughly and still put it out quickly. I will just give a couple more links to clarify exactly what is at stake in the “flu-shot-style approval vs. slower approval” distinction.\nThis recent WSJ article talks about studying the vaccines in “hundreds of subjects” for immune response, and also mentions that “The FDA is still determining how much the effectiveness of current vaccines would need to drop to merit authorizing new ones.” The overall process is expected to take over 3 months (it's unclear how much of that is approval vs. manufacturing).\nThis is a faster process than for the original vaccines, but it is nowhere near the efficiency of the flu shot process, which goes like this: the latest flu variants are used for a vaccine, and the vaccine is approved. No studies on whether it beats last year’s shot, no studies on immune response. (Apparently it takes 6 months to scale up manufacturing for flu shots, which is longer than COVID-19 boosters should take.)\nA big question I haven't run to ground is how much delay we're stuck with (regardless of approval) due to manufacturing scale-up. I wouldn't guess it's a huge delay since vaccines are already being made at scale, but regardless, it seems like the fastest possible version is the one where vaccines are pre-ordered yesterday.\nTreat new boosters like a flu shot, and we might roll them out in time. Approve them more slowly than that, and it seems like we're much more likely to be dealing with something that looks more contagious than Delta, with significant vaccine escape, and targeted vaccines arriving too late. So if you can do something to push toward the former, please do.\nP.S. Most people who write takes like this get flooded with \"You’re not a doctor, you don’t know what you’re talking about\" type reactions. I doubt my readers are going to do that, but if you’re curious about why I feel OK saying the above as a non-expert, it is basically that over the course of this pandemic, I have decided to trust specific people and types of people more than the FDA/CDC/etc., based both on comparing what I could understand of their reasoning and on seeing how things played out over time. I'm not certain, but I'm comfortable making some bets. If you think this is ridiculous, you might be interested in this website that provides a handy way of insulting me in public - go check it out!", "url": "https://www.cold-takes.com/candidate-for-highest-stakes-question-of-the-next-several-months-rare-hot-take/", "title": "Candidate for “highest-stakes question of the next several months” (rare hot take)", "source": "cold.takes", "source_type": "blog", "date_published": "2021-12-06", "id": "5152b293ce70c61a038081bdc3b8d7cb"} -{"text": "\nI hate when people share articles based only on the headline. My rough guess is that this is 90% of article shares, even when the sharer thinks they've read the article; NPR's April Fools joke is a classic riff on this phenomenon.\nBut if you're into sharing articles based on headlines, here's a treat: a set of headlines that are so specific, wonderful and (as confirmed by me) supported by the main text that you can go ahead and enjoy them without clicking.\nCreator of McAfee Software has Disguised Himself as a Guatemalan Street Hawker with a Limp in Order to Avoid Police Seeking Him for Questioning in Murder of Neighbor. \n'Creature' terrorizing Poland town turns out to be a croissant stuck in a tree\nPig in Australia Steals 18 Beers from Campers, Gets Drunk, Fights Cow\nDwayne Johnson Rips Off Front Gate with His Bare Hands to Get to Work.\n90-Year-Old Tortoise Whose Legs Were Eaten By Rats Gets Prosthetic Wheels And Goes Twice As Fast\nA Dutch metro train was saved from disaster Monday when it smashed through a safety barrier but was prevented from plummeting into water by a sculpture of a whale tail. (OK, that one is the first sentence, not the headline)\n Got more in this genre? Send them in.\nSubscribe Feedback\n", "url": "https://www.cold-takes.com/no-need-to-click/", "title": "No need to click", "source": "cold.takes", "source_type": "blog", "date_published": "2021-12-02", "id": "bbbd5520887107c51994ba89cc3b1b09"} -{"text": "This is the final piece in the Has Life Gotten Better? series.\nFor the last 200 years or so, life has been getting better for the average human in the world.\nBut for the 10,000 years before that, it's not the same story:\nThe vast majority of the data we have that could help us understand trends in quality of life only goes back to 1700 or so (at best).\nThe little data we do have on the period before that mostly implies that incomes and technological capabilities were increasing slowly (by today's standards), while health, nutrition, human rights, and other drivers of quality of life weren't necessarily on an upward trend at all.\nMy guess is that quality of life declined noticeably with the rise of agriculture, starting in about 10,000 BCE - leading to likely increases in the number of people who had to live under intense hierarchy and even slavery. \nFollowing big increases in quality of life since the Industrial Revolution, I think things are now better than they've ever been (including compared to the pre-agriculture days). \nBeyond these points, I think we know very little.\nIt's easy for me to imagine changing my mind pretty quickly and dramatically here. \nIf I became more confident in prehistoric life expectancy estimates (which are very low), I might conclude that health was improving significantly (if slowly) throughout just about all of history, perhaps enough to bring my views into line with the simple \"There's been a consistent upward trend\" view. If I had that sort of view, I might have a stronger default (as I used to) of expecting further advances in wealth, science and technology to keep making life better.\n \nAnother key question is whether pre-agriculture humans were usually like nomadic foragers, or whether sedentary societies were quite common. Right now I'm tentatively presuming that the former is closer to the truth, but if the latter were, that would mean the very distant past is worse than I thought - and would be another point in favor of \"Things have been improving this whole time.\"\n \nOn the other hand, it's easy to imagine new data (such as on happiness) changing my current guess that hunter-gatherer quality of life was worse than today's. In this case, I could end up thinking that scientific and technological advancement have been net negative overall.\n \nThe estimates I've found of the numbers of people enslaved at different points in time seem very rough with a lot of variance. More broadly, my impression is that (prior to the last few hundred years) changes in institutionalized discrimination (such as against women, specific ethnic groups, and LGBTQ+) have been somewhat chaotic. Having better data on these could lead to very different (and more chaotic) pictures of trends in quality of life for the average person.\nThis post will focus on the period between ~10,000 BCE (when agriculture began) and ~1800 (when the Industrial Revolution led to rapid and clear improvements, discussed previously). This period is the only one I haven't covered so far in the Has Life Gotten Better? series.\nI will split this long stretch of time into three periods:\nFirst, the transition from a world of (probably) mostly nomadic humans (never settling very long in one place) to mostly sedentary humans (living indefinitely in one place), with the rise of agriculture. I will argue that this change, which accompanied the Neolithic Revolution, made quality of life worse for the average person. More\nSecond, the long period (thousands of years) between the Neolithic Revolution and the time when we start to have some meaningful records and statistics, which started in Europe around 1300. I'll call this the mystery period. More\nFinally, the period between 1300-1800 - the immediate pre-industrial period. More\nNeolithic Revolution\nThe \"Neolithic Revolution\" refers to when agriculture started drastically changing lifestyles, and leading to what we tend to think of as \"civilization,\" around 10,000 BCE. Agriculture roughly means living off of domesticated plants and livestock, allowing a large population to live in one area indefinitely rather than needing to move as it runs low on resources.\nI think we have some good reasons to think that this transition made life worse for the average person.\nAs discussed in a supplemental post, the literature on \"hunter-gatherer\" societies (often studied to figure out what pre-agriculture life was like) tends to emphasize the distinction between two basic types of hunter-gatherer societies (though this is a simplification1):\nOne type of society tends to be egalitarian in the sense of having relatively little formal hierarchy. (Though as mentioned in the supplemental post on this, we shouldn't overstate how egalitarian these societies actually are. In particular, they appear to have bad gender relations.) It is often claimed (though I haven't seen it systematically supported) that these societies are usually nomadic - periodically moving in order to get to an area with more resources.\nAnother type of society tends to be noticeably more nonegalitarian, with formal status and authority distinctions; prestige displays; and sometime even slavery. This type of society seems to be associated with being sedentary rather than nomadic, meaning people stay in one area indefinitely. \nOne rough intuition I've seen to explain the connection is that it's hard to accumulate wealth in a society that periodically relocates (and that lacks cryptocurrency2). I've also seen a number of other explanations.3\nBelow, I'll give some reasons to believe that:\nThe sedentary/nonegalitarian structure probably became a lot more common around the Neolithic Revolution.\nThe sedentary/nonegalitarian type of society seems to add some big negatives for the average person's quality of life, without (immediately) compensating positives.\nThe shift from egalitarian/nomadic to nonegalitarian/sedentary societies\nIn Hunter-Gatherer and Human Evolution, Richard B. Lee states: \"Historically nomadic foragers (HNFs), small in scale, mobile, and egalitarian, reflect most closely the characteristics of ancient foragers.\" \nThis is a view that many seem to hold, and is echoed in the main source I've used on forager lifestyles (\"Lifeways of Hunter-Gatherers\").4 The implication here is that if you go back far enough, all or nearly all people were nomadic/egalitarian. \nI have also seen this view disputed, by Better Angels of our Nature as well as one of its key sources (quotes in footnote5). I interpret these sources as guessing something closer to \"50% of people in the distant past were nomadic/egalitarian.\"\nI haven't been able to find systematic examinations of what percentage of people were egalitarian/nomadic vs. nonegalitarian/sedentary in the distant past, and I mostly don't expect to, since it seems hard to determine from archaeology. But my best guess (from above) is that it was something in the range of 50-100% of people. \nAnd I think it must have changed either following or sometime before the Neolithic Revolution, which is when agriculture (which allows people to make far more food, far more sustainably, in a single location) started spreading across the world. I'd guess that essentially all agriculture-based societies in premodern times had characteristics more like sedentary/nonegalitarian societies;6 furthermore, agriculture is generally believed to engender much faster population growth (and population estimates accelerated greatly following the Neolithic Revolution).7\n For purposes of my oversimplified \"Has Life Gotten Better?\" chart, I've assumed that 75% of the population was nomadic/egalitarian before 10,000 BCE (this is the midpoint of the 50-100% range I gave), and I've assumed that all net population growth after 10,000 BCE came from agricultural societies.8 That's sufficient to imply a population rapidly going from mostly-egalitarian to mostly-nonegalitarian around that time.\nPros and cons of this transition\nAs discussed in a supplemental post, it's hard to make definitive statements about nomadic vs. sedentary societies, or agricultural vs. non-agricultural societies, because (from what I've seen) they are not clearly and systematically distinguished in studies (on life expectancy, violence, and other things relevant to quality of life). \nSo here I am largely going off of impressions from reading the literature, particularly the debates on violence (where people tend to imply that nomadic/egalitarian societies are less violent than sedentary/nonegalitarian ones) and Lifeways (which has a lot of systematic analysis and nuanced discussions throughout, making me generally put a fair amount of weight on its claims). From what I can tell, the claims I'm about to make would be shared by most scholars in the relevant field (though I'm certainly not confident of this and would love to be corrected by any readers who know the field well).\nCons. The main things that jump out at me are:\nSedentary (including agricultural) societies have formal hierarchy, in the sense of having one leader who has inordinate power over others.9\nSlavery: the only mentions I've seen of slavery among hunter-gatherer societies all refer to nonegalitarian/sedentary societies, particularly the Northwest Coast of North America.10 Slavery seems to have been present in most early agriculture-based states, such as ancient Egypt.\nViolence: it's hard to say for sure, but it seems to me that violent death rates are quite a bit higher among nonegalitarian/sedentary societies than nomadic/egalitarian societies - perhaps ~double - as discussed in this supplemental post.\nPros. I haven't been able to identify any clear ways in which this transition had much immediate benefit for the average person (although it does seem to have led to a lot more people). \nIn particular:\nInfant mortality (deaths before the age of 1) and child mortality (deaths before the age of 15) seem similar for pre-agriculture societies and more recent societies (though rates for both are extremely high by modern standards).\nI previously discussed height as a proxy for hunger, and gave figures implying 1.6-1.68m (5'3\" to 5'6\") as the pre-agriculture average male height. Compare with later height estimates from European males, of around 1.69m (5'6.5\") and 1.64-1.68m (5'5\" to 5'6\"). A Farewell to Alms argues that the European heights are a bit unrepresentatively high, and that the transition to agriculture brought no benefit on this front.\nThis paper claims that today's hunter-gatherer societies have very similar life expectancies to the earliest life expectancies we have on record for 18th century Sweden (around 35 years). It also cites archaeological studies estimating much lower prehistoric life expectancies (closer to 20 years), but argues that these could be unreliable, and that their figures should perhaps be adjusted11 to be closer to the figures from today's hunter-gatherer societies.12\nAdditionally, my attempt to find life expectancy estimates from some time in between the Neolithic Revolution and the 18th century largely came up empty; in particular, I found a number of papers arguing that we can say very little about life expectancy in ancient Rome beyond perhaps bounding it between 20-30 years.13\nBottom line: I'd guess that sometime around or before the Neolithic Revolution, we went from most people living in egalitarian/nomadic societies to most people living in nonegalitarian societies with some notable disadvantages and no clear advantages.\nMystery period: between the rise of agriculture and the 1300-1800 period\nNearly all of the data we have seems to be either about extremely early humans (pre-agriculture or at least pre-state) or about the relatively recent past (1300 CE and later).\nMy rough understanding of why this is:\nTo make guesses about the distant past, we can look at currently-existing societies that seem to have \"never modernized\" in the sense of adopting agriculture or becoming part of large-scale states. We can also look at archaeological remains.\nStarting in 1300 CE, we start to have (in a small number of European countries) systematic written records of things like court cases that make it possible to estimate homicide rates. \nBut for the in-between period, we have neither. (It's not clear to me why archaeological remains can't be used for the in-between period, but in practice I have had a lot more trouble finding papers willing to draw conclusions from archaeological remains for this \"in-between period,\" compared to very early periods.)\nSome of the few observations I've been able to make:\nInfant and child mortality look pretty flat starting around 400 BCE. From Our World in Data:\nHeight looks flat in Europe starting around the year zero. Again from Our World in Data:\nLife expectancy trends are unclear, as noted in the previous section (in particular, see the part about ancient Rome).14\nTrends in violent death rates are unclear, and again we lack evidence that there was improvement before the 1300-1800 period (see supplemental post on this).\nMy qualitative history summary doesn't reveal any clear trends in quality of life - in particular, no clear pattern of rising or falling slavery,15 institutionalized discrimination or gender equality.\n\"Empowerment\" and material incomes seem to have risen sometime between the Neolithic Revolution and ~0 CE, but not risen further for ~1000 years after that. This is what's implied by standard per-capita GDP data,16 and Ian Morris argues this case at length in The Measure of Civilization (key quotes in footnote).17 That is: people had \"rising incomes\" in some meaningful sense. I've found it hard to pin down exactly in what sense incomes were rising,18 but it appears that a lot of it came down to things like larger buildings; more sophisticated building materials, ornaments and tools; and perhaps richer diets (though it naively looks to me like hunter-gatherer societies have higher meat consumption than most ancient Romans,19 and Morris expresses some skepticism that food consumption was increasing or improving).20\nI would guess that higher per-capita incomes did, in fact, have some positive impact on quality of life. But a lot of the point of this post is to ask whether we can distinguish between a \"People were getting richer and more powerful, but not better-off\" story and a \"People were getting better off\" story. I think the last few hundred years look clearly like \"People were getting better off,\" but the former story looks quite plausible for the period I'm talking about now.\nAfter 1300 CE\n1300 is the earliest year I've seen on the x-axis of a chart, using systematically collected historical data, that seems clearly relevant to quality of life:\nAnd around 1600, we have similar charts for the US (from Better Angels of our Nature):\nI've argued in a supplemental post that the starting homicide rates in these charts aren't necessarily lower than before the Neolithic Revolution. But they seem to fall precipitously as soon as the charts begin. I'm guessing there is some common factor between \"recording homicides well enough that a chart like this is possible\" and \"taking measures to reduce them.\" And I consider this some sort of good sign about how culture was evolving, even if violent death rates didn't necessarily fall when large-scale atrocities are included.\nThe 17th century also saw a wave of torture bans:\nGDP data also show an acceleration in the average person's income rising around this time:21 ~20% from 1000-1300 and another ~20% from 1300-1700, compared to no net growth between 500 BCE and 1000 CE and under 15% growth for all of history before then.\nWith these points in mind, I'd guess that this is also the period that saw most of whatever improvements in violent death rates, height (as a proxy for hunger) and life expectancy happened before the Industrial Revolution. But that is of course just a guess.\nSummary table\nProperty\nTrend\nPoverty\nVery unclear, and perhaps best proxied by hunger and health, throughout this period.\n \nHunger\nImprovement for the average person likely small (if anything); no clear increase in height over this period.\n \nHealth (physical)\nNo signs of infant and child mortality improving over this period. Life expectancy likely improved at some point, probably after the year 1300.\n \nViolence\nViolent death rates look to have ~doubled with the transition to sedentary/nonegalitarian societies, then came back down at some point, possibly after the year 1300. Homicides definitely seem to have been falling at least starting around 1300 in Europe, in the 1600s in the US, though the rise in large-scale atrocities may have partly or fully counterbalanced this. Wave of torture bans started in the 1700s.\n \nMental health\nUnknown\n \nSubstance abuse and addiction\nProbably got somewhat worse after the Neolithic Revolution (this is just a guess that alcohol and drugs were uncommon before it); unclear what happened after that.\n \nDiscrimination\nProbably got worse with the transition to nonegalitarian/sedentary societies; started improving, at least in Europe, after 1700.\n \nTreatment of children\nUnknown\n \nTime usage\nUnknown\n \nSelf-assessed well-being\nUnknown\n \nEducation and literacy\nImproved at least since the 1500s.\n \nFriendship and community\nUnknown\n \nFreedom\nProbably got worse with the rise of agriculture and the accompanying transition to nonegalitarian/sedentary societies; otherwise mostly unclear.\n \nRomantic relationship quality\nUnknown\n \nJob satisfaction\nUnknown\n \nMeaning and fulfillment\nUnknown\n \nNext in series: Rowing, Steering, Anchoring, Equity, MutinyFootnotes\n Note that in the literature on this topic, the most common distinction is between \"simple\" and \"complex\" economies, but I've avoided these terms due to the reservations expressed about them in The Lifeways of Hunter Gatherers, Chapter 9. The book prefers \"egalitarian vs. nonegalitarian,\" which I think does a good job capturing part of what makes this distinction important; I've also emphasized \"nomadic\" vs. \"sedentary\" because it's a particularly objective distinction that I've seen emphasized elsewhere. ↩\n Fine, any currency.  ↩\n From The Lifeways of Hunter Gatherers: \"Rather than attributing nonegalitarian foragers simply to “resource abundance,” as many have done in the past, we will see that sedentism, the resource base, geographic circumscription, storage, population pressure, group formation, and enculturative processes all play a role. Therefore, this chapter calls on previous discussions of foraging, mobility, land tenure, exchange, demography, social organization, and marriage.\" Chapter 9 discusses different theories extensively. ↩\n \"On the strength of archaeological data, it is reasonable to assume that nonegalitarian society developmentally followed egalitarian society. On the northwest coast, for example, slavery appears about 1500 BC, warfare by AD 1000, and nonegalitarian societies by at least AD 200 ... Egalitarian behaviors and an egalitarian ethos were adaptive for quite a long time in human history before the selective balance tipped in favor of nonegalitarian behaviors and a nonegalitarian ethos.\" ↩\n From Better Angels of our Nature, Chapter 2: \"The nonstate peoples we are most familiar with are the hunters and gatherers living in small bands ... But these people have survived as hunter-gatherers only because they inhabit remote parts of the globe that no one else wants. As such they are not a representative sample of our anarchic ancestors, who may have enjoyed flusher environments. Until recently other foragers parked themselves in valleys and rivers that were teeming with fish and game and that supported a more affluent, complex, and sedentary lifestyle. The Indians of the Pacific Northwest, known for their totem poles and potlatches, are a familiar example.\"\nBowles 2009, one of the main sources of violence data, seems to take a similar view: \"Because hunter-gatherer populations occupying resource-rich areas in the Late Pleistocene [starting over 100,000 years ago] and early Holocene were probably sedentary (at least seasonally), I have included wars involving settled as well as purely mobile populations.\"\n Neither of these sources discusses how many prehistoric people lived in the \"flusher\" environments as opposed to less \"flush\" ones; moreover, it's not clear that \"flushness\" is the main driver of whether societies are egalitarian or nonegalitarian (see Lifeways Chapter 9).  ↩\n Reasons for this:\nIn nearly every discussion I've seen of egalitarian/nomadic vs. nonegalitarian/sedentary societies, scholars seem to equate agriculture with the latter.\nPremodern agricultural societies seem to have had most of the properties that Chapter 9 of Lifeways thinks are predictive of nonegalitarian/hierarchical relations.\nI believe nearly all very early state societies had slavery, and (by definition) all had formal hierarchy, which are two of the three main disadvantages I list below for sedentary/nonegalitarian societies. ↩\n E.g., see the HYDE data set, which starts in 10,000 BCE (the time of the Neolithic Revolution, hundreds of thousands of years after the start of Homo Sapiens) with a population of ~4 million, but reaches ~10 million by 7,000 BCE. ↩\n In reality, there was probably some population growth from nonagricultural societies, but also probably net conversion of nonagricultural to agricultural societies. So this seems like it will if anything underestimate the speed with which the population became dominated by agricultural societies - and it still leads to that domination happening pretty fast. ↩\n Here I'm largely reading between the lines on this quote from Lifeways: \"Group leaders are common in egalitarian societies – Shoshone 'rabbit bosses' for example – but they are temporary and have their position only because they have demonstrated skill at a particular task. Their leadership does not carry over into other realms of life, nor is it permanent. The process of group formation, however, suggests how the process of leader creation might result in inequality.\" ↩\n Lifeways: \"We can elucidate these propositions by examining cultural variability along the Northwest Coast of North America, where foragers lived in large, sedentary villages; owned slaves; participated in warfare for booty, food stores, slaves, and land; and where, in some societies, individuals, kinship units, and villages were ranked.\" It then discusses the Northwest Coast at length as its central example of nonegalitarian foraging societies. ↩\n \"Estimated mortality rates then increase dramatically for prehistoric populations, so that by age 45 they are over seven times greater than those for traditional foragers, even worse than the ratio of captive chimpanzees to foragers. Because these prehistoric populations cannot be very different genetically from the populations surveyed here, there must be systematic biases in the samples and/or in the estimation procedures at older ages where presumably endogenous senescence should dominate as primary cause of death. While excessive warfare could explain the shape of one or more of these typical prehistoric forager mortality profiles, it is improbable that these profiles represent the long-term prehistoric forager mortality profile. Such rapid mortality increase late in life would have severe consequences for our human life history evolution, particularly for senescence in humans. It may be possible to use the data from modern foragers to adjust those estimates for prehistoric foragers.\" ↩\n It also claims that life expectancy likely rose after agriculture, but the papers it cites as support for this claim seem to only be about well-pre-agriculture life expectancy. Key quote: \"It is usually reported that Paleolithic humans had life expectancies of 15–20 years and that this brief life span persisted over thousands of generations (Cutler 1975; Weiss 1981) until early agriculture less than 10,000 years ago caused appreciable increases to about 25 years. Several prehistoric life tables support this trend, such as those for the Libben site in Ohio (Lovejoy et al. 1977), Indian Knoll in Kentucky (Herrmann and Konigsberg 2002), and Carlston Annis in Kentucky (Mensforth 1990).\" I reviewed the cited studies, and the closest thing I found to support was in Lovejoy et al. 1977: \"Recent work has demonstrated a marked decrease of adult mortality in cohorts subjected to elevated disease stress in early years (23). This is most probably a direct result of intensified selection for \"immunological competence.\" Those with less adequate genomes are removed from the cohort in early childhood, and the more hardy survivors consequently display depressed mortality. This provides a possible solution to the skeletal-ethnographic sampling discrepancy. Modern \"anthropological\" populations are virtually all contact societies and remain under the selective influence of a battery of novel pathogens.\" I wasn't able to locate the citation (23). ↩\n See Scheidel 2001, Scheidel 2010, and the start of Chapter 2 in this book, which I found to be a pretty interesting read. ↩\n It does seem like there's a good argument that life expectancy rose sometime in between the Neolithic Revolution and 1810, since life expectancy as of 1810 averaged around 29 worldwide, whereas there are estimates of prehistoric life expectancy that are closer to 20 (but these are disputed, as noted at the link, and it's not out of the question that life expectancy could have been flat over this whole period). If improvement did happen prior to 1810, though, it could've been in the 1300-1800 period discussed below. ↩\n Also see Luke Muehlhauser on this point. ↩\n E.g., from the Maddison Project. ↩\n \"Quantitative studies of consumption—including everything from animal bones in settlements to numbers of shipwrecks, levels of lead and tin pollution generated by industrial activity, the scale of deforestation, frequencies of public inscriptions on stone, numbers of coins in circulation, and quantities of archaeological finds along the German frontier—also point the same way: per capita energy capture in the Mediterranean world increased strongly during the first millennium BCE, peaked somewhere between 100 BCE and 200 CE, then fell again in the mid-first millennium CE. Figure 3.5 illustrates the tight fit between the rise and fall of shipwrecks (normally taken as a proxy for the scale of maritime trade) and levels of lead pollution in the well-dated deposits at Penidho Velho in Spain. \n \"Each category of material has its own difficulties, but no single argument can explain away the striking increase in evidence for nonfood consumption across the first millennium BCE and the peak in the first two centuries CE. The shipwreck data and the vast garbage dumps of transport pottery surrounding the city of Rome (a single one of which, at Monte Testaccio, contains the remains of 25 million pots, used to ship 200 million gallons of olive oil) also attest to the use of nonfood energy to increase food supply and the extraordinary level of consumption of “expensive” food calories. Some scholars also identify an increase in stature in the first to second century CE, although others are more pessimistic, suggesting that adult male Romans in early imperial Italy were typically under 165 cm tall, which would make them shorter than Iron Age or medieval Italians.\"\n \"As in Greece, the housing evidence may be the most informative, and Robert Stephan and Geof Kron are now collecting and analyzing this material. Data from Egypt and Italy already suggest that by the first centuries CE typical Roman houses were even bigger than classical Greek houses had been, and that sophisticated (by premodern standards) plumbing, drainage, roofs, and foundations spread far down the social ladder. The explosion of material goods on Roman sites is even more striking. Mass production of wheel-made, well-fired pottery, amphoras for wine and olive oil, and base-metal ornaments and tools reached unprecedented levels in the first few centuries CE. Similarly, distribution maps show that by 200 CE trade networks were more extensive and denser than they would be again until at least the seventeenth century. The scale of trade with India, far outside the empire’s formal boundaries, is particularly impressive.\"\n \"Until specialists in late Roman archaeology quantify the evidence more precisely, it will be difficult to make accurate estimates, but between 200 and 700 the general picture is of large houses of stone and brick being replaced by smaller structures of wood and clay; paved streets being replaced by mud paths; sewers and aqueducts stopping working; life expectancy, stature, and population size falling, and the surviving people moving from cities to villages; long-distance trade declining; plain, handmade pottery replacing slipped, wheel-made wares; wood and bone tools being used more often, and metal ones less; factories going out of business and village craftsmen or household producers taking their places.\" ↩\n Most studies of real incomes look for records on how much people were paid, and compare it to the price of some standardized set of goods that includes different sorts of food, soap, linen, candles, lamp oil and fuel. For example, Scheidel and Friesen 2009 - which appears to be the main source of wage data for the year 1 AD - references Scheidel 2009, which has its price comparison points on page 4.  ↩\n See Scheidel 2009 page 4 for meat consumption for the \"respectability basket,\" which is considered high-end for ancient Rome, as discussed in Scheidel and Friesen 2009. See Table 3.6 of Lifeways for foraging society meat consumption. ↩\n \"Through most of history per capita nonfood energy capture has tended to rise, but people have had few ways to convert nonfood calories into food. As a result, the difficulty of increasing food calories has been the major brake on both population size and rising living standards.\" ↩\n I'm using the Maddison Project-based series from Modeling the Human Trajectory. ↩\n", "url": "https://www.cold-takes.com/did-life-get-better-during-the-pre-industrial-era-ehhhh/", "title": "Did life get better during the pre-industrial era? (Ehhhh)", "source": "cold.takes", "source_type": "blog", "date_published": "2021-11-30", "id": "b7c686d9e2b7df818a987ea3ab74bd12"} -{"text": "This piece is about the single activity (\"minimal-trust investigations\") that seems to have been most formative for the way I think. \nMost of what I believe is mostly based on trusting other people. \nFor example:\nI brush my teeth twice a day, even though I've never read a study on the effects of brushing one's teeth, never tried to see what happens when I don't brush my teeth, and have no idea what's in toothpaste. It seems like most reasonable-seeming people think it's worth brushing your teeth, and that's about the only reason I do it.\nI believe climate change is real and important, and that official forecasts of it are probably reasonably close to the best one can do. I have read a bunch of arguments and counterarguments about this, but ultimately I couldn't tell you much about how the climatologists' models actually work, or specifically what is wrong with the various skeptical points people raise.1 Most of my belief in climate change comes from noticing who is on each side of the argument and how they argue, not what they say. So it comes mostly from deciding whom to trust.\nI think it's completely reasonable to form the vast majority of one's beliefs based on trust like this. I don't really think there's any alternative.\nBut I also think it's a good idea to occasionally do a minimal-trust investigation: to suspend my trust in others and dig as deeply into a question as I can. This is not the same as taking a class, or even reading and thinking about both sides of a debate; it is always enormously more work than that. I think the vast majority of people (even within communities that have rationality and critical inquiry as central parts of their identity) have never done one.\nMinimal-trust investigation is probably the single activity that's been most formative for the way I think. I think its value is twofold:\nIt helps me develop intuitions for what/whom/when/why to trust, in order to approximate the views I would hold if I could understand things myself.\nIt is a demonstration and reminder of just how much work minimal-trust investigations take, and just how much I have to rely on trust to get by in the world. Without this kind of reminder, it's easy to casually feel as though I \"understand\" things based on a few memes or talking points. But the occasional minimal-trust investigation reminds me that memes and talking points are never enough to understand an issue, so my views are necessarily either based on a huge amount of work, or on trusting someone.\nIn this piece, I will:\nGive an example of a minimal-trust investigation I've done, and list some other types of minimal-trust investigations one could do.\nDiscuss a bit how I try to get by in a world where nearly all my beliefs ultimately need to come down to trusting someone.\nExample minimal-trust investigations\nThe basic idea of a minimal-trust investigation is suspending one's trust in others' judgments and trying to understand the case for and against some claim oneself, ideally to the point where one can (within the narrow slice one has investigated) keep up with experts.2 It's hard to describe it much more than this other than by example, so next I will give a detailed example.\nDetailed example from GiveWell\nI'll start with the case that long-lasting insecticide-treated nets (LLINs) are a cheap and effective way of preventing malaria. I helped investigate this case in the early years of GiveWell. My discussion will be pretty detailed (but hopefully skimmable), in order to give a tangible sense of the process and twists/turns of a minimal-trust investigation.\nHere's how I'd summarize the broad outline of the case that most moderately-familiar-with-this-topic people would give:3\nPeople sleep under LLINs, which are mosquito nets treated with insecticide (see picture above, taken from here).\nThe netting can block mosquitoes from biting people while they sleep. The insecticide also deters and kills mosquitoes.\nA number of studies show that LLINs reduce malaria cases and death. These studies are rigorous - LLINs were randomly distributed to some people and not others, allowing a clean \"experiment.\" (The key studies are summarized in a Cochrane review, the gold standard of evidence reviews, concluding that there is a \"saving of 5.6 lives each year for every 1000 children protected.\")\nLLINs cost a few dollars, so a charity doing LLIN distribution is probably saving lives very cost-effectively. \nPerhaps the biggest concern is that people might not be using the LLINs properly, or aren't using them at all (e.g., perhaps they're using them for fishing).\nWhen I did a minimal-trust investigation, I developed a picture of the situation that is pretty similar to the above, but with some important differences. (Of all the minimal-trust investigations I've done, this is among the cases where I learned the least, i.e., where the initial / conventional wisdom picture held up best.)\nFirst, I read the Cochrane review in its entirety and read many of the studies it referenced as well. Some were quite old and hard to track down. I learned that:\nThe original studies involved very intense measures to make sure people were using their nets properly. In some cases these included daily or weekly visits to check usage. Modern-day LLIN distributions don't do anything like this. This made me realize that we can't assume a charity's LLIN distributions are resulting in proper usage of nets; we need to investigate modern-day LLIN usage separately.\nThe most recent randomized study was completed in 2001, and there won't necessarily ever be another one.4 In fact, none of the studies were done on LLINs - they were done on nets treated with non-long-lasting insecticide, which had to be re-treated periodically. This made me realize that anything that's changed since 2001 could change the results observed in the studies. Changes could include how prevalent malaria is in the first place (if it has fallen for other reasons, LLINs might do less good than the studies would imply), how LLIN technology has changed (such as moving to the \"long-lasting\" approach), and the possibility that mosquitoes have evolved resistance to the insecticides.\nThis opened up a lot of further investigation, in an attempt to determine whether modern-day LLIN distributions have similar effects to those observed in the studies. \nWe searched for general data on modern-day usage, on changes in malaria prevalence, and on insecticide resistance. This data was often scattered (so we had to put a lot of work into consolidating everything we could find into a single analysis), and hard to interpret (we couldn't tell how data had been collected and how reliable it was - for example, a lot of the statistics on usage of nets relied on simply asking people questions about their bednet usage, and it was hard to know whether people might be saying what they thought the interviewers wanted to hear). We generally worked to get the raw data and the full details of how the data was collected to understand how it might be off.\nWe tried to learn about the ins and outs of how LLINs are designed and how they compare to the kinds of nets that were in the studies. This included things like reviewing product descriptions from the LLIN manufacturers. \nWe did live visits to modern-day LLIN distributions, observing the distribution process, the LLINs hanging in homes, etc. This was a very imperfect way of learning, since our presence on site was keenly felt by everyone. But we still made observations such as \"It seems this distribution process would allow people to get and hoard extra nets if they wanted\" and \"A lot of nets from a while ago have a lot of holes in them.\"\nWe asked LLIN distribution charities to provide us with whatever data they had on how their LLINs were being used, and whether they were in fact reducing malaria. \nAgainst Malaria Foundation was most responsive on this point - it was able to share pictures of LLINs being handed out and hung up, for example. \n \nBut at the time, it didn't have any data on before-and-after malaria cases (or deaths) in the regions it was working in, or on whether LLINs remained in use in the months or years following distribution. (Later on, it added processes for the latter and did some of the former, although malaria case data is noisy and we ultimately weren't able to make much of it.)\n \nWe've observed (from post-distribution data) that it is common for LLINs to have huge holes in them. We believe that the insecticide is actually doing most of the work (and was in the original studies as well), and that simply killing many mosquitoes (often after they bite the sleeper) could be the most important way that LLINs help. I can't remember how we came to this conclusion.\nWe spoke with a number of people about our questions and reservations. Some made claims like \"LLINs are extremely proven - it's not just the experimental studies, it's that we see drops in malaria in every context where they're handed out.\" We looked for data and studies on that point, put a lot of work into understanding them, and came away unconvinced. Among other things, there was at least one case in which people were using malaria \"data\" that was actually estimates of malaria cases - based on the assumption that malaria would be lower where more LLINs had been distributed. (This means that they were assuming LLINs reduce malaria, then using that assumption to generate numbers, then using those numbers as evidence that LLINs reduce malaria. GiveWell: \"So using this model to show that malaria control had an impact may be circular.\")\nMy current (now outdated, because it's based on work I did a while ago) understanding of LLINs has a lot of doubt in it:\nI am worried about the possibility that mosquitoes have developed resistance to the insecticides being used. There is some suggestive evidence that resistance is on the rise, and no definitive evidence that LLINs are still effective. Fortunately, LLINs with next-generation insecticides are now in use (and at the time I did this work, these next-generation LLINs were in development).5\nI think that people are probably using their LLINs as intended around 60-80% of the time, which is comparable to the usage rates from the original studies. This is based both on broad cross-country surveys6 and on specific reporting from the Against Malaria Foundation.7 Because of this, I think it's simultaneously the case that (a) a lot of LLINs go unused or misused; (b) LLINs are still probably having roughly the effects we estimate. But I remain nervous that real LLIN usage could be much lower than the data indicates. \nAs an aside, I'm pretty underwhelmed by concerns about using LLINs as fishing nets. These concerns are very media-worthy, but I'm more worried about things like \"People just never bother to hang up their LLIN,\" which I'd guess is a more common issue. The LLIN usage data we use would (if accurate) account for both.\nI wish we had better data on malaria case rates by region, so we could understand which regions are most in need of LLINs, and look for suggestive evidence that LLINs are or aren't working. (GiveWell has recently written about further progress on this.)\nBut all in all, the case for LLINs holds up pretty well. It's reasonably close to the simpler case I gave at the top of this section. \nFor GiveWell, this end result is the exception, not the rule. Most of the time, a minimal-trust investigation of some charitable intervention (reading every study, thinking about how they might mislead, tracking down all the data that bears on the charity's activities in practice) is far more complicated than the above, and leads to a lot more doubt.\nOther examples of minimal-trust investigations\nSome other domains I've done minimal-trust investigations in:\nMedicine, nutrition, quantitative social science (including economics). I've grouped these together because a lot of the methods are similar. Somewhat like the above, this has usually consisted of finding recent summaries of research, tracking down and reading all the way through the original studies, thinking of ways the studies might be misleading, and investigating those separately (often hunting down details of the studies that aren't in the papers). \nI have links to a number of writeups from this kind of research here, although I don't think reading such pieces is a substitute for doing a minimal-trust investigation oneself.\n \nMy Has Life Gotten Better? series has a pretty minimal-trust spirit. I haven't always checked the details of how data was collected, but I've generally dug down on claims about quality of life until I could get to systematically collected data. In the process, I've found a lot of bad arguments floating around.\nAnalytic philosophy. Here a sort of \"minimal-trust investigation\" can be done without a huge time investment, because the main \"evidence\" presented for a view comes down to intuitive arguments and thought experiments that a reader can evaluate themselves. For example, a book like The Conscious Mind more-or-less walks a layperson reader through everything needed to consider its claims. That said, I think it's best to read multiple philosophers disagreeing with each other about a particular question, and try to form one's own view of which arguments seem right and what's wrong with the ones that seem wrong.\nFinance and theoretical economics. I've occasionally tried to understand some well-known result in theoretical economics by reading through a paper, trying to understand the assumptions needed to generate the result, and working through the math with some examples. I've often needed to read other papers and commentary in order to notice assumptions that aren't flagged by the authors. \n Checking attribution. A simple, low-time-commitment sort of minimal-trust investigation: when person A criticizes person B for saying X, I sometimes find the place where person B supposedly said X and read thoroughly, trying to determine whether they've been fairly characterized. This doesn't require having a view on who's right - only whether person B seems to have meant what person A says they did. Similarly, when someone summarizes a link or quotes a headline, I often follow a trail of links for a while, reading carefully to decide whether the link summary gives an accurate impression. \nI've generally been surprised by how often I end up thinking people and links are mischaracterized. \n \nAt this point, I don't trust claims of the form \"person A said X\" by default, almost no matter who is making them, and even when a quote is provided (since it's so often out of context).\nAnd I wish I had time to try out minimal-trust investigations in a number of other domains, such as:\nHistory. It would be interesting to examine some debate about a particular historical event, reviewing all of the primary sources that either side refers to.\nHard sciences. For example, taking some established finding in physics (such as the Schrodinger equation or Maxwell's equations) and trying to understand how the experimental evidence at the time supported this finding, and what other interpretations could've been argued for.\nReference sources and statistics. I'd like to take a major Wikipedia page and check all of its claims myself. Or try to understand as much detail as possible about how some official statistic (US population or GDP, for example) is calculated, where the possible inaccuracies lie, and how much I trust the statistic as a whole.\nAI. I'd like to replicate some key experimental finding by building my own model (perhaps incorporating this kind of resource), trying to understand each piece of what's going on, and seeing what goes differently if I make changes, rather than trusting an existing \"recipe\" to work. (This same idea could be applied to building other things to see how they work.)\nMinimal-trust investigations look different from domain to domain. I generally expect them to involve a combination of \"trying to understand or build things from the ground up\" and \"considering multiple opposing points of view and tracing disagreements back to primary sources, objective evidence, etc.\" As stated above, an important property is trying to get all the way to a strong understanding of the topic, so that one can (within the narrow slice one has investigated) keep up with experts.\nI don't think exposure to minimal-trust investigations ~ever comes naturally via formal education or reading a book, though I think it comes naturally as part of some jobs.\nNavigating trust\nMinimal-trust investigations are extremely time-consuming, and I can't do them that often. 99% of what I believe is based on trust of some form. But minimal-trust investigation is a useful tool in deciding what/whom/when/why to trust. \nTrusting arguments. Doing minimal-trust investigations in some domain helps me develop intuitions about \"what sort of thing usually checks out\" in that domain. For example, in social sciences, I've developed intuitions that:\nSelection bias effects are everywhere, and they make it really hard to draw much from non-experimental data. For example, eating vegetables is associated with a lot of positive life outcomes, but my current view is that this is because the sort of people who eat lots of vegetables are also the sort of people who do lots of other \"things one is supposed to do.\" So people who eat vegetables probably have all kinds of other things going for them. This kind of dynamic seems to be everywhere.\nMost claims about medicine or nutrition that are based on biological mechanisms (particular proteins, organs, etc. serving particular functions) are unreliable. Many of the most successful drugs were found by trial-and-error, and their mechanism remained mysterious long after they were found.\nOverall, most claims that X is \"proven\" or \"evidence-backed\" are overstated. Social science is usually complex and inconclusive. And a single study is almost never determinative.\nTrusting people. When trying to understand topic X, I often pick a relatively small part of X to get deep into in a minimal-trust way. I then look for people who seem to be reasoning well about the part(s) of X I understand, and put trust in them on other parts of X. I've applied this to hiring and management as well as to forming a picture of which scholars, intellectuals, etc. to trust. \nThere's a lot of room for judgment in how to do this well. It's easy to misunderstand the part of X I've gotten deep into, since I lack the level of context an expert would have, and there might be some people who understand X very well overall but don't happen to have gotten into the weeds in the subset I'm focused on. I usually look for people who seem thoughtful, open-minded and responsive about the parts of X I've gotten deep into, rather than agreeing with me per se.\nOver time, I've developed intuitions about how to decide whom to trust on what. For example, I think the ideal person to trust on topic X is someone who combines (a) obsessive dedication to topic X, with huge amounts of time poured into learning about it; (b) a tendency to do minimal-trust investigations themselves, when it comes to topic X; (c) a tendency to look at any given problem from multiple angles, rather than using a single framework, and hence an interest in basically every school of thought on topic X. (For example, if I'm deciding whom to trust about baseball predictions, I'd prefer someone who voraciously studies advanced baseball statistics and watches a huge number of baseball games, rather than someone who relies on one type of knowledge or the other.)\nConclusion\nI think minimal-trust investigations tend to be highly time-consuming, so it's impractical to rely on them across the board. But I think they are very useful for forming intuitions about what/whom/when/why to trust. And I think the more different domains and styles one gets to try them for, the better. This is the single practice I've found most (subjectively) useful for improving my ability to understand the world, and I wish I could do more of it.\nNext in series: Learning By WritingFootnotes\n I do recall some high-level points that seem compelling, like \"No one disagrees that if you just increase the CO2 concentration of an enclosed area it'll warm up, and nobody disagrees that CO2 emissions are rising.\" Though I haven't verified this claim beyond noting that it doesn't seem to attract much disagreement. And as I wrote this, I was about to add \"(that's how a greenhouse works)\" but it's not. And of course these points alone aren't enough to believe the temperature is rising - you also need to believe there aren't a bunch of offsetting factors - and they certainly aren't enough to believe in official forecasts, which are far more complex. ↩\n I think this distinguishes minimal-trust reasoning from e.g. naive epistemology. ↩\n This summary is slightly inaccurate, as I'll discuss below, but I think it is the most common case people would cite who are casually interested in this topic. ↩\n From GiveWell, a quote from the author of the Cochrane review: \"To the best of my knowledge there have been no more RCTs with treated nets. There is a very strong consensus that it would not be ethical to do any more. I don't think any committee in the world would grant permission to do such a trial.\" Though I last worked on this in 2012 or so, and the situation may have changed since then. ↩\n More on insecticide resistance at https://www.givewell.org/international/technical/programs/insecticide-treated-nets/insecticide-resistance-malaria-control.  ↩\n See https://www.givewell.org/international/technical/programs/insecticide-treated-nets#Usage.  ↩\n See https://www.givewell.org/charities/amf#What_proportion_of_targeted_recipients_use_LLINs_over_time.  ↩\n", "url": "https://www.cold-takes.com/minimal-trust-investigations/", "title": "Minimal-trust investigations", "source": "cold.takes", "source_type": "blog", "date_published": "2021-11-23", "id": "66eaef507eb275200d9e2c008ae120f5"} -{"text": "\nI'm going to try an even harder sell than sports here.\nA tool-assisted speedrun, or TAS, is the theoretically fastest-known possible playthrough of a video game. A TAS is not performed by an actual person with a controller. Rather, it is assembled mostly using a video game emulator that moves at extreme slowness (frame by frame) so that perfect precision can be achieved. Glitches are exploited, intentional deaths are taken to save time, etc. As Wikipedia says, \"rather than being a branch of e-sports focused on practical achievements, tool-assisted speedrunning concerns itself with research into the theoretical limits of the games.\"\nWatching one can be fun if you know the game (particularly if it's one you struggled through as a kid or adult). But what's even more fun IMO is reading about them - seeing the incredible amount of work, dedication, problem-solving, teamwork, even brilliance and heroism that goes into ... taking a fraction of a second off of the best-known completion time for some crappy ancient video game. I'd guess that in some cases, a single speedrun takes more time investment than it took to create the video game in the first place. It really says ... something? ... about the world we live in.\nTASes haven't been written about much, so I'm going to give a pretty random example, which is from this Mega Man (Rockman) 2 run that I read up on when I first discovered the phenomenon. The specifics won't make any sense (they don't to me), and I've gone ahead and bolded a few parts to get the picture of just how much effort went in.\nThis time around Rockman 2 has been maximized to it's end, so far that only few people can imagine. Making this movie required many years of work and hundreads of different investigations, plannings and trendemous amount of time and copypastings. After doing the previous run I heard news about the other techniques ... About a year after I was able to perform Crashman's scrolling by 3xItem1s and I got excited about it and I contacted Bisqwit after my great success. Not so long after Bisqwit said that he was able to bot it to work with 2xItem1s so I began to continue my progress further and further. Somewhere around the progress I spotted some improvements I wasn't even seen before and I had to many stages behind to the Flashman's stage because I found a possibility to break something that affected to your waiting time after boss was beaten (This was found by playaround when I was bored and used different subpixel positionings) ... After that Japanese player called cstrakm found a technique to Airman's stage to get flying air devils to disappear just by pressing (left and then right) in the first possible frame. This was something special because I haven't ever seen that it works on any enemy ... About month or two later in december FinalFighter contacted me if I would agree if he puts my movie in the public in Japan to search for persons that haves much experience with FCEUX and botting. Not for so long after FinalFighter contacted me and said that TaoTao have been trying to solve the mystery and he did. TaoTao was able to make Bubbleman's downscroll ... and that version was 12 frames [less than 1 second] faster. About or over a week later I got a message from FinalFighter that Japanese player pirohiko solved the scrolling with 2xItem1s, 2xCrabs and it was 112 frames faster! and after that with four metal blades! ...and so on my progress continued and continued along with other players and I finished it._\n(This timeline \"may not be 100%\" correct because this is everything I can remember, there were countless of days between anything and sometimes I was stuck for many months, like Crashman for a year, Bubbleman almost a year or so. So my memory got a bit rust)_\nThis is not the original Mega Man 2 TAS; it is the 6th one posted to the site. All of this work was done to beat the previous Mega Man 2 runthrough by 28.21 seconds. (Since then, it's been improved by about another minute.)\nA better-known example, though not technically a TAS (it's an attempt to minimize button presses rather than minimizing time), is this video on how to get through a particular Super Mario 64 level while pressing the A button only \"0.5 times.\" Known for the line, around 10:30: \"Now you're probably wondering what I'm going to need all this speed for. After all, I do build up speed for 12 hours. But to answer that, we need to talk about parallel universes.\"\nAnd some videos of actual TASes, though you're probably best off looking through their list of \"starred\" runs for a game you personally have played:\nSuper Mario 64, an extremely long game that has been cut down to under 5 minutes. \"As with many other runs on this site, the goal of pure speed has resulted in the complete breaking of the game. Very little of the game's normal play remains.\" Unlike most TASes, there is a helpful history of the run with explanations of different breakthroughs (usually explanations of what's going on are extremely esoteric, harder for me than following academic papers).\nSuper Mario 3 in 10:24.34, another relatively well-known one.\nMega Man 2 is a solid example of the genre, discussed above.\nMega Man 1 is a horrifying glitchfest that may make your eyes bleed. If Mega Man 2 is Superman, Mega Man 1 is Neo. From the author's comments: \"Of course, upon watching this movie, one does have to question something... is it Mega Man saving the world? Or Dr. Wily [the villain] trying to save it from absolute destruction?\"\nMega Man 3, 4, 5 and 6 beaten simultaneously with the same sequence of button presses in 39:06.92. As in, when the (superhuman) \"player\" presses \"A\" or \"down\" or whatever, it gets pressed in all four games at the same time. They still beat all four games faster than a normal person can beat any of them. I've only watched random parts of this, but check it out if you like viewing things you know you will never comprehend.\nThe TAS community has developed a bunch of science-type norms: \"authors\" make \"submissions,\" which are judged by extensive guidelines and \"published\" only if they are validated, better than the previous state-of-the-art, and (usually) not clearly improvable.\nI think TASes are a great metaphor for science, and give a much better feel for how I imagine science progresses than most stories about actual science. They demonstrate how a subculture can arise that consists of obsessives doggedly working (and collaborating) to push the frontier of understanding and performance on ... absolutely anything, for no reason other than \"Hey, I think I might be able to do/understand this random thing 0.1% better than anyone else has before (and it's OK that someone else will take my place the following week).\" They also demonstrate how that dogged work includes obsessively looking for weird edge cases one can exploit toward the goal, leading to \"magical\" seeming results. In speedrunning, this means finding ways to clip through walls and otherwise glitch the game in ways clearly not intended by the game designer; in science, this means stuff like nuclear power and space exploration. I think the analogy's pretty much perfect.\nI think that's part of why I find TASes kind of breathtaking, simultaneously inspiring and terrifying. If we can complete Mario 64 without even collecting a star, we can probably someday abandon our physical bodies entirely.\nSubscribe Feedback\n", "url": "https://www.cold-takes.com/tool-assisted-speedrunning/", "title": "Tool-assisted speedrunning", "source": "cold.takes", "source_type": "blog", "date_published": "2021-11-19", "id": "64cd09f8ad2f8908912774e1c67b112f"} -{"text": "\nI previously summarized Ajeya Cotra’s “biological anchors” method for forecasting for transformative AI, aka “Bio Anchors.” Here I want to try to clarify why I find this method so useful, even though I agree with the majority of the specific things I’ve heard people say about its weaknesses (sometimes people who can’t see why I’d put any stock in it at all).\nA couple of preliminaries:\nThis post is probably mostly of interest for skeptics of Bio Anchors, and/or people who feel pretty confused/agnostic about its value and would like to see a reply to skeptics.\nI don’t want to give the impression that I’m leveling new criticisms of “Bio Anchors” and pushing for a novel reinterpretation. I think the author of “Bio Anchors” mostly agrees with what I say both about the report’s weaknesses and about how to best use it (and I think the text of the report itself is consistent with this).\nSummary of what the framework is about\nJust to re-establish context, here are some key quotes from my main post about biological anchors:\nThe basic idea is:\nModern AI models can \"learn\" to do tasks via a (financially costly) process known as \"training.\" You can think of training as a massive amount of trial-and-error. For example, voice recognition AI models are given an audio file of someone talking, take a guess at what the person is saying, then are given the right answer. By doing this millions of times, they \"learn\" to reliably translate speech to text. More: Training\nThe bigger an AI model and the more complex the task, the more the training process [or “training run”] costs. Some AI models are bigger than others; to date, none are anywhere near \"as big as the human brain\" (what this means will be elaborated below). More: Model size and task type\nThe biological anchors method asks: \"Based on the usual patterns in how much training costs, how much would it cost to train an AI model as big as a human brain to perform the hardest tasks humans do? And when will this be cheap enough that we can expect someone to do it?\" More: Estimating the expense\n...The framework provides a way of thinking about how it could be simultaneously true that (a) the AI systems of a decade ago didn't seem very impressive at all; (b) the AI systems of today can do many impressive things but still feel far short of what humans are able to do; (c) the next few decades - or even the next 15 years - could easily see the development of transformative AI.\nAdditionally, I think it's worth noting a couple of high-level points from Bio Anchors that don't depend on quite so many estimates and assumptions:\nIn the coming decade or so, we're likely to see - for the first time - AI models with comparable \"size\" to the human brain. \nIf AI models continue to become larger and more efficient at the rates that Bio Anchors estimates, it will probably become affordable this century to hit some pretty extreme milestones - the \"high end\" of what Bio Anchors thinks might be necessary. These are hard to summarize, but see the \"long horizon neural net\" and \"evolution anchor\" frameworks in the report. \nOne way of thinking about this is that the next century will likely see us go from \"not enough compute to run a human-sized model at all\" to \"extremely plentiful compute, as much as even quite conservative estimates of what we might need.\" Compute isn't the only factor in AI progress, but to the extent other factors (algorithms, training processes) became the new bottlenecks, there will likely be powerful incentives (and multiple decades) to resolve them.\nThings I agree with about the framework’s weaknesses/limitations\nBio Anchors “acts as if” AI will be developed in a particular way, and it almost certainly won’t be\nBio Anchors, in some sense, “acts as if” transformative AI will be built in a particular way: simple brute-force trial-and-error of computationally intensive tasks (as outlined here). Its main forecasts are based on that picture: it estimates when there will be enough compute to run a certain amount of trial and error, and calls that the “estimate for when transformative AI will be developed.”\nI think it’s unlikely that if and when transformative AI is developed, the way it’s developed will resemble this kind of blind trial-and-error of long-horizon tasks.\nIf I had to guess how transformative AI will be developed, it would be more like: \nFirst, narrow AI systems prove valuable at a limited set of tasks. (This is already happening, to a limited degree, with e.g. voice recognition, translation and search.)\nThis leads to (a) more attention and funding in AI; (b) more integration of AI into the economy, such that it becomes easier to collect data on how humans interact with AIs that can be then used for further training; (c) increased general awareness of what it takes for AI to usefully automate key tasks, and hence increased awareness of (and attention to) the biggest blockers to AI being broader and more capable.\nDifferent sorts of narrow AIs become integrated into different parts of the economy. Over time, the increased training data, funding and attention leads to AIs that are less and less narrow, taking on broader and broader parts of the tasks they’re doing. These changes don’t just happen via AI models (and training runs) getting bigger and bigger; they are also driven by innovations in how AIs are designed and trained.\nAt some point, some combination of AIs is able to automate enough of scientific and technological advancement to be transformative. There isn’t a single “master run” where a single AI is trained to do the very hardest, broadest tasks via blind trial-and-error.\nBio Anchors “acts as if” compute availability is the only major blocker to transformative AI development, and it probably isn’t\nAs noted in my earlier post:\nBio Anchors could be too aggressive due to its assumption that \"computing power is the bottleneck\":\nIt assumes that if one could pay for all the computing power to do the brute-force \"training\" described above for the key tasks (e.g., automating scientific work), transformative AI would (likely) follow.\nTraining an AI model doesn't just require purchasing computing power. It requires hiring researchers, running experiments, and perhaps most importantly, finding a way to set up the \"trial and error\" process so that the AI can get a huge number of \"tries\" at the key task. It may turn out that doing so is prohibitively difficult.\nIt is very easy to picture worlds where transformative AI takes much more or less time than Bio Anchors implies, for reasons that are essentially not modeled in Bio Anchors at all\nAs implied above, transformative AI could take a very long time for reasons like “it’s extremely hard to get training data and environments for some crucial tasks” or “some tasks simply aren’t learnable even by large amounts of trial-and-error.”\nTransformative AI could also be developed much more quickly than Bio Anchors implies. For example, some breakthrough in how we design AI algorithms - perhaps inspired by neuroscience - could lead to AIs that are able to do ~everything human brains can, without needing the massive amount of trial-and-error that Bio Anchors estimates (based on extrapolation from today’s machine learning systems).\nI’ve listed more considerations like these here.\nBio Anchors is not “pinpointing” the most likely year transformative AI will be developed\nMy understanding of climate change models is that they try to examine each major factor that could cause the temperature to be higher or lower in the future; produce a best-guess estimate for each; and put them all together into a prediction of where the temperature will be. \nIn some sense, you can think of them as “best-guess pinpointing” (or even “simulating”) the future temperature: while they aren’t certain or precise, they are identifying a particular, specific temperature based on all of the major factors that might push it up or down.\nMany other cases where someone estimates something uncertain (e.g., the future population) have similar properties.\nBio Anchors isn’t like that. There are factors it ignores that are identifiable today and almost certain to be significant. So in some important sense, it isn’t “pinpointing” the most likely year for transformative AI to be developed.\n(Not the focus of this piece) The estimates in Bio Anchors are very uncertain\nBio Anchors estimates some difficult-to-estimate things, such as:\nHow big an AI model would have to be to be “as big as the human brain” in some relevant sense. (For this it adapts Joe Carlsmith’s detailed report.)\nHow fast we should expect algorithmic efficiency, hardware efficiency, and “willingness to spend on AI” to increase in the future - all of which affect the question of “how big an AI training run will be affordable.” Its estimates here are very simple and I think there is lots of room for improvement, though I don’t expect the qualitative picture to change radically.\nI acknowledge significant uncertainty in these estimates, and I acknowledge that (all else equal) uncertainty means we should be skeptical.\nThat said:\nI think these estimates are probably reasonably close to the best we can do today with the information we have. \nI think these estimates are good enough for the purposes of what I’ll be saying below about transformative AI timelines. \nI don’t plan to defend this position more here, but may in the future if I get a lot of pushback on it. \nBio Anchors as a way of bounding AI timelines\nWith all of the above weaknesses acknowledged, here are some things I believe about AI timelines, that are largely based on the Bio Anchors analysis:\nI would be at least mildly surprised if transformative AI weren’t developed by 2060. I put the probability of transformative AI by then at 50% (I explain below how the connection works between \"mild surprise\" and \"50%\"); I could be sympathetic to someone who said it was 25% or 75%, but would have a hard time seeing where someone was coming from if they went outside that range. More\nI would be significantly surprised if transformative AI weren’t developed by 2100. I put the probability of transformative AI by then at 2 in 3; I could be sympathetic to someone who said it was 1 in 3 or 80-90%, but would have a hard time seeing where someone was coming from if they went outside that range. More\nTransformative AI by 2036 seems plausible and concretely imaginable, but doesn’t seem like a good default expectation. I think the probability of transformative AI by then is at least 10%; I could be sympathetic to someone who said it was 40-50%, but would have a hard time seeing where someone was coming from if they said it was <10% or >50%. More\nI’d be at least mildly surprised if transformative AI weren’t developed by 2060\nThis is mostly because, according to Bio Anchors, it will then be affordable to do some absurdly big training runs - arguably the biggest ones one could imagine needing to do, based on using AI models 10x the size of human brains and tasks that require massive numbers of computations to do even once. In some important sense, we’ll be “swimming in compute.” (More on this intuition at Fun with +12 OOMs of compute.)\nBut it also matters that 2060 is 40 years from now, which is 40 years to:\nDevelop ever more efficient AI algorithms, some of which could be big breakthroughs.\nIncrease the number of AI-centric companies and businesses, collecting data on human interaction and focusing increasing amounts of attention on the things that currently block broad applications.\nGiven the already-rising amount of investment, talent, and potential applications for today’s AI systems, 40 years seems like a pretty long time to make big progress on these fronts. For context, 40 years is around the amount of time that has elapsed between the Apple IIe release and now.\nWhen it comes to translating my “sense of mild surprise” into a probability (see here for a sense of what I’m trying to do when talking about probabilities; I expect to write more on this topic in the future):\nOn most topics, I equate “I’d be mildly surprised if X didn’t happen” with something like a 60-65% chance of X. But on this topic, I do think there's a burden of proof (which I consider significant though not overwhelming), and I'm inclined to shade my estimates downward somewhat. So I am saying there's about a 50% chance of transformative AI by 2060.\nI’d be sympathetic if someone said “40 years doesn’t seem like enough to me; I think it’s more like a 25% chance that we’ll see transformative AI by 2060.” But if someone put it at less than 25%, I’d start to think: “Really? Where are you getting that? Why think there’s a <25% chance that we’ll develop transformative AI by a year in which it looks like we’ll be swimming in compute, with enough for the largest needed runs according to our best estimates, with 40 years elapsed between today’s AI boom and 2060 to figure out a lot of the other blockers?”\nOn the flip side, I’d be sympathetic if someone said “This estimate seems way too conservative; 40 years should be easily enough; I think it’s more like a 75% chance we’ll have transformative AI by 2060.” But if someone put it at more than 75%, I’d start to think: “Really? Where are you getting that? Transformative AI doesn’t feel around the corner, so this seems like kind of a lot of confidence to have about a 40-year-out event.”\nI would be significantly surprised if transformative AI weren’t developed by 2100\nBy 2100, Bio Anchors projects that it will be affordable not only to do almost comically large-seeming training runs (again based on the hypothesized size of the models and cost-per-try of the tasks), but to do as many computations as all animals in history combined, in order to re-create the progress that was made by natural selection. \nIn addition, 2100 is 80 years from now - longer than the time that has elapsed since programmable digital computers were developed in the first place. That’s a lot of time to find new approaches to AI algorithms, integrate AI into the economy, collect training data, tackle cases where the current AI systems don’t seem able to learn particular tasks, etc.\nTo me, it feels like 2100 is something like “About as far out as I could tell a reasonable-seeming story for, and then some.” Accordingly, I’d be significantly surprised if transformative AI weren’t developed by then, and I assign about a 2/3 chance that it will be. And:\nI’d be sympathetic if someone said “Well, there’s a lot we don’t know, and a lot that needs to happen - I only think there’s a 50% chance we’ll see transformative AI by 2100.” I’d even be somewhat sympathetic if they gave it a 1 in 3 chance. But if someone put it at less than 1/3, I’d really have trouble seeing where they were coming from.\nI’d be sympathetic if someone put the probability for “transformative AI by 2100” at more like 80-90%, but given the difficulty of forecasting this sort of thing, I’d really have trouble seeing where they were coming from if they went above 90%.\nTransformative AI by 2036 seems plausible and concretely imaginable, but doesn’t seem like a good default expectation\nBio Anchors lays out concrete, plausible scenarios in which there is enough affordable compute to train transformative AI by 2036 (link). I know some AI researchers who feel these scenarios are more than plausible - their intuitions tell them that the giant training runs envisioned by Bio Anchors are unnecessary and that the more aggressive anchors in the report are being underrated. \nI also think Bio Anchors understates the case for “transformative AI by 2036” a bit, because it’s hard to tell what consequences the current boom of AI investment and interest will have. If AI is about to become a noticeably bigger part of the economy (definitely an “if”, but compatible with recent market trends), this could result in rapid improvements along many possible dimensions. In particular, there could be a feedback loop in which new profitable AI applications spur more investment in AI, which in turn spurs faster-than-expected improvements in the efficiency of AI algorithms and compute, which in turn leads to more profitable applications … etc.\nWith all of this in mind, I think the probability of transformative AI by 2036 is at least 10%, and I don't have a lot of sympathy for someone saying it is less. \nAnd that said, all of the above is a set of “coulds” and “mights” - every case I’ve heard for “transformative AI by 2036” seems to require a number of uncertain pieces to click into place.\nIf “long-horizon” tasks turn out to be important, Bio Anchors shows that it’s hard to imagine there will be enough compute for the needed training runs. \nEven if there is plenty of compute, 15 years might not be enough time to resolve challenges like assembling the right training data and environments.\nIt’s certainly possible that some completely different paradigm will emerge - perhaps inspired by neuroscience - and transformative AI will be developed in ways that don’t require Bio-Anchors-like “training runs” at all. But I don’t see any particular reason to expect that to happen in the next 15 years.\nSo I also don’t have a lot of sympathy for people who think that there’s a >50% chance of transformative AI by 2036.\nBottom line\nBio Anchors is a bit different from the “usual” approach to estimating things. It doesn’t “pinpoint” likely dates for transformative AI; it doesn’t model all the key factors. \nBut I think it is very useful - in conjunction with informal reasoning about the factors it doesn’t model - for “bounding” transformative AI timelines: making a variety of statements along the lines of “It would be surprising if transformative AI weren’t developed by ___” or “You could defend a ___% probability by such a date, but I think a ___% probability would be hard to sympathize with.”\nAnd that sort of “bounding” seems quite useful for the purpose I care most about: deciding how seriously to take the possibility of the most important century. My take is that this possibility is very serious, though far from a certainty, and Bio Anchors is an important part of that picture for me.", "url": "https://www.cold-takes.com/biological-anchors-is-about-bounding-not-pinpointing-ai-timelines/", "title": "“Biological anchors” is about bounding, not pinpointing, AI timelines", "source": "cold.takes", "source_type": "blog", "date_published": "2021-11-18", "id": "79ec85ed0ea0c3a5f8b347d4fe73787c"} -{"text": "Adapted from an old GiveWell blog post, which has more detail.\nThe Better Angels of our Nature argues that “violence has declined over long stretches of time, and today we may be living in the most peaceable era in our species’ existence … it is an unmistakable development, visible on scales from millennia to years, from the waging of wars to the spanking of children.” \nThe book gives many examples of ways in which violence has declined. It looks at trends in homicides, in executions, in bans on slavery and torture, in corporal punishment at school, in hunting, even in apologies by political and religious leaders.\nWhat it doesn't provide is a systematic, long-run examination of trends in violence: a single, consistent set of key indicators reported in the same way and tracked over long time periods (in particular, going back further than a few hundred years). Without this, it's hard to know whether \"violence has declined\" is a consistent, across-the-board phenomenon, or just a description of the particular measures and time periods that the book chooses to focus on.\nAnd my best attempt to look at a single systematic indicator (the violent death rate) presents a muddier picture about long-run trends in violence, because it's plausible (though not clear) that the increasing severity, over time, of the largest wars and atrocities has been big enough to mostly (if not entirely) offset a lot of the improvements on everyday (and other) dimensions of violence.\nBelow, I will:\nOutline what I see as the \"missing piece\" of the existing analysis on trends in violence over time: the question of what happened to the violent death rate, accounting both for everyday violence (e.g., homicides) and large-scale wars and atrocities.\nPresent some rough analysis that I did to address the \"missing piece\" and thus examine trends in overall violent death rates over the long run. In brief:\nDeaths from wars and atrocities appear to have gone up significantly starting around the 13th century, which is pretty close in time to the first documented signs of falling homicide rates.\nWhen accounting for deaths from wars and atrocities, it's not clear whether the violent death rate declined between 500 BCE and the mid-20th century. (And that's as far back as the wars/atrocities data goes.)\nThe last 50 years or so have had low death tolls from both homicides and wars/atrocities. \nOne spin on these observations - a bit on the provocative side, but a simple take that has a number of implications I agree with - might be: \"Over time, increasing state power and order has led to falling everyday violence, but offsetting (at least partially, maybe fully) risk of increasingly infrequent, extreme violence. You could think of these twin trends as continuing even now, as observed violence levels are low across the board, but global catastrophic risks may be at all-time highs.\"\nTo be clear, today's violent death rates look solidly lower than those of very early humans. So it seems like there must have been a decline in violence at some point - it just remains very unclear when that decline was and how steady it was. (This is similar to my general take quality of life over the very long run: we know the most recent period has seen improvement, but the picture is murkier before that.)\nEstimates of death tolls from wars and atrocities seem highly debatable. Some of the discrepancies and revisions are really huge, so all of the claims here are quite uncertain.\nThe \"missing piece\": total violent death rates\nIn my view, the single best indicator for long-run trends in violence is the violent death rate: how many total deaths from violence there were, per person (or per 100,000 people), per year. Deaths tend to be easier to verify and measure than most other relevant indicators, and to the extent that Better Angels discusses very long time frames, it is mostly discussing some measure of violent deaths. But the way it presents these is, in my opinion, easy to get confused by:\nIt presents many charts of declining homicide rates over the past several centuries.\nIt does not present comparable charts for deaths from wars and atrocities (famines, genocides, etc.) It does have an extensive discussion of what we should make of the fact that some of history's bloodiest wars and atrocities were in the 20th century - but it does not look at the numbers for these wars and atrocities in the same terms as homicides, and plot the overall trend of all violent deaths combined. (Some more discussion of the analysis that is presented for atrocities is summarized at my 2015 blog post; see \"BA's argument and the need for more analysis.\")\nIn my view, this means there's a \"missing piece\" of the story of trends in violence: we see that one kind of violent death has become less common over the last several hundred years, but we don't see the trend in all kinds of violent deaths. And I consider this missing piece significant, because my sense is that wars and atrocities account for far more violent deaths than most of the other sources of violence the book discusses. \nFor example - and this surprised me - the global rate of violent deaths from the 20th century's “big four” atrocities alone (two World Wars, regimes of Josef Stalin and Mao Zedong) – spread out over the entire 20th century – is ~50 violent deaths per 100,000 people per year; that’s comparable to the very worst national homicide rates seen today, whereas the homicide rate for high-income countries such as the U.S. tends to be less than 1/10 as high. \nIn other words, the two World Wars + Stalin and Mao alone were enough to make the 20th century as a whole more dangerous than homicide makes today’s homicide-heaviest countries, and they were enough to offset the benefit of the European homicide rate decline that Better Angels describes from Medieval times through the Enlightenment.\nDeaths from wars and atrocities, by century\nI did this analysis a few years ago, so it's possible that it doesn't incorporate some corrections to the data source I'm using.\nI believe the main source that Better Angels uses to tabulate the death tolls of the biggest wars and atrocities is Atrocities: The 100 Deadliest Episodes in Human History by Matthew White. (A bit more discussion of why I used this source in this post.) So I pulled the numbers from its \"one hundred deadliest multicides [wars and atrocities]\" table (pg 529). \nI used these numbers to create estimates of the death toll from wars and atrocities each century, per 100k people per year.1 Here's the result (calculations here):\nCentury\nDeaths from atrocities per 100k people per year\n#1 worst atrocity (% of deaths this century)\n#2 worst atrocity (% of deaths this century)\n#3 worst atrocity (% of deaths this century)\n5th BC\n3.1\nAge of Warring States (60%)\nSecond Persian War (40%)\nN/A (0%)\n4th BC\n4.3\nAge of Warring States (54%)\nAlexander the Great (46%)\nN/A (0%)\n3rd BC\n11.2\nQin Shi Huang Di (34%)\nSecond Punic War (26%)\nAge of Warring States (16%)\n2nd BC\n3.7\nRoman Slave Wars (52%)\nGladiatorial Games (48%)\nN/A (0%)\n1st BC\n8.1\nGallic War (30%)\nGladiatorial Games (21%)\nRoman Slave Wars (20%)\n1st\n35.0\nXin Dynasty (94%)\nGladiatorial Games (5%)\nRoman-Jewish Wars (2%)\n2nd\n3.7\nGladiatorial Games (43%)\nThe Three Kingdoms of China (42%)\nRoman-Jewish Wars (15%)\n3rd\n12.6\nThe Three Kingdoms of China (88%)\nGladiatorial Games (12%)\nN/A (0%)\n4th\n3.2\nFall of the Western Roman Empire (53%)\nGladiatorial Games (47%)\nN/A (0%)\n5th\n18.9\nFall of the Western Roman Empire (97%)\nGladiatorial Games (3%)\nN/A (0%)\n6th\n2.3\nJustinian (90%)\nGoguryeo-Sui Wars (10%)\nN/A (0%)\n7th\n5.2\nMideast Slave Trade (73%)\nGoguryeo-Sui Wars (27%)\nN/A (0%)\n8th\n37.6\nAn Lushan Rebellion (90%)\nMideast Slave Trade (10%)\nMayan Collapse (1%)\n9th\n5.6\nMideast Slave Trade (63%)\nMayan Collapse (37%)\nN/A (0%)\n10th\n3.6\nMideast Slave Trade (94%)\nMayan Collapse (6%)\nN/A (0%)\n11th\n3.5\nMideast Slave Trade (95%)\nCrusades (5%)\nN/A (0%)\n12th\n11.2\nFang La Rebellion (40%)\nCrusades (31%)\nMideast Slave Trade (29%)\n13th\n98.0\nChinggis Khan (90%)\nMideast Slave Trade (3%)\nCrusades (3%)\n14th\n54.7\nTimur (56%)\nFall of the Yuan Dynasty (29%)\nHundred Years War (7%)\n15th\n20.8\nTimur (29%)\nAtlantic Slave Trade (22%)\nHundred Years War (16%)\n16th\n30.4\nAtlantic Slave Trade (30%)\nConquest of the Americas (25%)\nFrench Wars of Religion (20%)\n17th\n104.9\nFall of the Ming Dynasty (48%)\nThirty Years War (14%)\nAtlantic Slave Trade (9%)\n18th\n33.3\nFamines in British India (34%)\nAtlantic Slave Trade (17%)\nConquest of the Americas (14%)\n19th\n44.6\nTaiping Rebellion (36%)\nFamines in British India (16%)\nCongo Free State (11%)\n20th\n81.1\nSecond World War (33%)\nMao Zedong (20%)\nJoseph Stalin (10%)\nMy observations:\nThere's no clear trend in the death rate from wars and atrocities from the 13th century to the 20th. The jumpiness of the totals makes it very hard to see any sort of trend, even when aggregating 100-year periods; one every-few-centuries giant atrocity tends to account for a huge chunk of the bad centuries’ death tolls.\nThe figures are noticeably lower before the 13th century (which is strikingly close to when we have our first documented evidence of declining homicides).\nThis could be misleading: as both Better Angels and Atrocities point out, the farther back in time one looks, the more likely it is that there are lots of undocumented atrocities, and thus that the numbers above are underestimates. \nOn the other hand, I somewhat doubt that any of the undocumented atrocities are big enough to really stack up with the biggest atrocities in this table: as shown above, for any given century there is a very steep dropoff from the 1-3 most damaging currently known atrocities to the rest. So if earlier centuries had comparable war/atrocity death tolls, they probably came from much larger numbers of smaller wars and atrocities.\nThere is a lot of uncertainty in these figures, and revisions could change the picture quite a bit.\nThe 20th century ranks as the 3rd bloodiest, but could easily become the bloodiest if a couple of very uncertain estimates were changed. The 13th century estimate is nearly all from Genghis Khan (referred to above as “Chinggis”), and there is a lot of room for doubt here - alternative estimates2 imply less than half the death toll. The 17th century death toll is more evenly spread out, and probably better documented, but half of it comes from an estimate based on census records of the death toll for the collapse of the Ming Dynasty.\nFor the 8th century, Atrocities estimates 8 million deaths from the An Lushan rebellion. But this is a downward revision from an earlier figure (by the same author) of 36 million, cited in Better Angels. The older figure would make the 8th century appear as one of the most violent. Some context on why the revision happened is in this footnote.3\nWhat happens when we account for both atrocity+war deaths and homicides? It's very hard to say, because the only homicide rate data I've found (see here) is from (a) a few European countries starting in the 1300s; (b) the USA starting in the 1600s. \nBut I wanted to get a sense of the rough ballpark, so I threw together an estimate of the global homicide rate (assumptions I made are in this footnote4). Here's the result (spreadsheet here):\nIn this chart, I'm assuming that only Europe saw falling homicide rates as early as 1300 or so,5 and I'm assuming that homicide rates started falling in the rest of the world starting around the Industrial Revolution. That is my best guess. \nHere's an alternative version, in which I assume that homicides fell worldwide the same way they fell in Europe:\nThis is all very imprecise, because global homicide rates are a mystery before very recently. But the big picture is that the rise in death tolls from huge wars and atrocities plausibly offset the decline in homicide rates, leading to a flat or unclear trend in overall violent death rates between ~500 BCE until the Industrial Revolution (~1700s), or even later. \nSince the mid-20th century, deaths from wars and atrocities have been much lower - though it's hard to make too much of this, because in general, wars and atrocities seem very volatile, with massive death tolls sometimes occurring after centuries of relative peace. Furthermore, as argued in Toby Ord's book The Precipice, there's a case to be made that today represents an all-time high in terms of risks that could decimate a significant fraction (or even all) of humanity. So maybe rather than declining violence, what we're seeing is a continued trend of more and more infrequent risks of larger and larger catastrophes.Footnotes\n For simplicity, I assumed that the death roll for any given war/atrocity was spread out evenly across the years. For example, the Second World War is listed as having a death toll of 66 million over the course of 1939-1945 (7 yeas). I therefore assume there were (66 million/7 = ~9.4 million deaths per year from the Second World War in each of 1939, 1940, 1941, 1942, 1943, 1944 and 1945. ↩\n See the section starting on page 123. Also see discussion here, particularly these quotes: \n\"After soberly reviewing the evidence, Clarke formulates his own estimate of 11-15 million victims [compared to the 40 million estimate in Atrocities].\" \n\"Professor Rudolph Rummel only attributes 4 million deaths to Genghis Khan, but also estimates that 'as many as 30 million people (about 13 percent of the world’s population)” were murdered by the Mongols during the 14th and 15th centuries.'\") ↩\n 36 million is the figure for “missing” people given at the beginning of Matthew White’s chapter on the rebellion; in the version of the book I used (which has the more recent figure), White later states: “What happened to 36 million people? Is a loss of two-thirds in one decade even possible? Perhaps … On the other hand, these numbers could also represent a decline in the central government’s ability to find every taxpayer rather than an actual population collapse … the actual population collapse may have been closer to one-half, or 26 million. For the sake of ranking, however, I’m being conservative and cutting this in half, counting only 13 million dead in the An Lushan Rebellion.” ↩\n I: \nAssumed 100 homicides per 100k people per year before the 13th century (see previous post for the numbers I am working off to get this guess).\nFilled in rough guesses for Europe-wide figures starting in 1300 based on this chart (and assuming that Europe accounted for about 20% of the world population, with the rest unchanged).\nAssumed that the rest of the world saw a declining homicide rate starting in the early 19th century (consistent with my general take that this is when rapid global improvement in many things began), smoothly reaching today's level between then and now.My spreadsheet is here. ↩\n As noted here, at least the US seems to have much higher rates until 1600 or so. ↩\n", "url": "https://www.cold-takes.com/has-violence-declined-when-we-include-the-world-wars-and-other-major-atrocities/", "title": "Falling everyday violence, bigger wars and atrocities: how do they net out?", "source": "cold.takes", "source_type": "blog", "date_published": "2021-11-16", "id": "b78b7faeaa3aeaddc1ae7cbf60a93914"} -{"text": "\nThis is the second of (for now) two posts covering what I see as the weakest points in the “most important century” series. (The first one is here.)\nThe weak point I’ll cover here is the discussion of “lock-in”: the idea that transformative AI could lead to societies that are stable for billions of years. If true, this means that how things go this century could affect what life is like in predictable, systematic ways for unfathomable amounts of time.\nMy main coverage of this topic is in a section of my piece on digital people. It’s pretty hand-wavy, not super thorough, and isn’t backed by an in-depth technical report (though I do link to some informal notes from physicist Jess Riedel that he made while working at Open Philanthropy). Overcoming Bias critiqued me on this point, leading to a brief exchange in the comments.\nI’m not going to be dramatically more thorough or convincing here, but I will say a bit more about how the overall “most important century” argument is affected if we ignore this part of it, and a bit more about why I find “lock-in” plausible. \n(Also note that \"lock-in\" will be discussed at some length in an upcoming book by Will MacAskill, What We Owe the Future.)\nThroughout this piece, I’ll be using “lock-in” to mean “key things about society, such as who is in power or which religions/ideologies are dominant, are locked into place indefinitely, plausibly for billions of years,” and “dynamism” or “instability” to mean the opposite: “such things change on much shorter time horizons, as in decades/centuries/millennia.” As noted previously, I consider \"lock-in\" to be a scary possibility by default, though it's imaginable that certain kinds of lock-in (e.g., of human rights protections) could be good.\n“Most important century” minus “lock-in”\nFirst, let’s just see what happens if we throw out this entire part of the argument and assume that “lock-in” isn’t a possibility at all, but accept the rest of the claims. In other words, we assume that:\nSomething like PASTA (advanced AI that automates scientific and technological advancement) is likely to be developed this century.\nThat, in turn, would lead to explosive scientific and technological advancement, resulting in a world run by digital people or misaligned AI or something else that would make it fair to say we have \"transitioned to a state in which humans as we know them are no longer the main force in world events.\"\nBut it would not lead to any particular aspect of the world being permanently set in stone. There would remain billions of years full of unpredictable developments.\nIn this case, I think there is still an important sense in which this would be the “most important century for humanity”: it would be our last chance to shape the transition from a world run by humans to a world run by something very much unlike humans. This is one of the two definitions of “most important century” given here.\nMore broadly, in this case, I think there’s an important sense in which the “most important century” series should be thought of as “Pointing to a drastically underrated issue; correct in its most consequential, controversial implications, if not in every detail.” When people talk about the most significant issues of our time (in fact, even when they are specifically talking about likely consequences of advanced AI), they rarely include much discussion of the sorts of issues emphasized in this series; and they should, whether or not this series is correct about the possibility of “lock-in.”\nAs noted here, I ultimately care more about whether the “most important century” series is correct in this sense - pointing at drastically underappreciated issues - than about how likely its title is to end up describing reality. (Though I care about both.) It’s for this reason that I think the relatively thin discussion of lock-in is a less important “weak point” than the weak point I wrote about previously, which raises questions about whether advanced AI would change the world very quickly or very much at all.\nBut I’ve included the mention of lock-in because I think it’s a real possibility, and it would make the stakes of this century even higher.\nDissecting “lock-in”\nThere have probably been many people in history (emperors, dictators) with enormous power over their society, and who would’ve liked to keep things going just as they were forever. There may also have been points in time when democratically elected governments would have “locked in” at least some things about their society for good, if they could have.\nBut they couldn’t. Why not?\nI think the reasons broadly fall into a few categories, and digital people (or misaligned AI, but I’ll focus on digital people to keep things simple for now) could change the picture quite a bit.\nFirst I'll list factors that seem particularly susceptible to being changed by technology, then one factor that seems less so.\nFactors that seem particularly susceptible to being changed by technology\nAging and death. Any given powerful person has to die at some point. They can try to transfer power to children or allies, but a lot changes in the handoff (and over very long periods of time, there are a lot of handoffs).\nDigital people need not age or die. (More broadly, sufficient advances in science and technology seem pretty likely to be able to eliminate aging and death, even if not via digital people.) So if some particular set of them had power over some particular part of the galaxy, death and aging need not interfere here at all.\nOther population changes. Over time, the composition of any given population changes, and in particular, one generation replaces the previous one. This tends to lead to changes in values and power dynamics. \nWithout aging or death, and with extreme productivity, we could end up quickly exhausting the carrying capacity of any particular area - so that area might not see changes in population composition at all (or might see much smaller, more controlled changes than we are used to today - no cases where a whole generation is replaced by a new one). Generational turnover seems like quite a big driver of dynamism to date.\nChaos. To date, even when some government is officially “in charge” of a society, it has very limited ability to monitor and intervene in everything that’s going on. But I think technological advancement to date has already greatly increased the ability of a government to exercise control over a large number of people and large geography. An explosion in scientific and technological advancement could radically further increase governments’ in-practice control of what’s going on.\n(Digital people provide an extreme example: controlling the server running a virtual environment would mean being able to monitor and control everything about the people in that environment. And powerful figures could create many copies of themselves for monitoring and enforcement.)\nNatural events. All kinds of things might disrupt a human society: changes in the weather/climate, running lower on resources, etc. Sufficient advances in science and technology could drive this sort of disruption to extremely low levels (and in particular, digital people have pretty limited resource needs, such that they need not run low on resources for billions of years).\nSeeking improvement. While some dictators and emperors might prefer to keep things as they are forever, most of today’s governments don’t tend to have this as an aspiration: elected officials see themselves as accountable to large populations whose lives they are trying to improve.\nBut dramatic advances in science and technology would mean dramatically more control over the world, as well as potentially less scope for further improvement (I generally expect that the rate of improvement has to trail off at some point). This could make it increasingly likely that some government or polity decides they’d prefer to lock things in as they are.\nBut could these factors be eliminated so thoroughly as to cause stability for billions of years? I think so, if enough of society were digital (e.g., digital people, such that those seeking stability could use digital error correction (essentially, making multiple copies of any key thing, which can be used to roll back anything that changes for any reason - for more, see Jess Riedel’s informal notes, which argue that digital error correction could be used to reach quite extreme levels of stability). \nA tangible example here would be tightly controlled virtual environments, containing digital people, programmed to reset entirely (or reset key properties) if any key thing changed. These represent one hypothetical way of essentially eliminating all of the above factors as sources of change. \nBut even if we prefer to avoid thinking about such specific scenarios, I think there are broader cases for explosive scientific and technological advancement radically reducing the role of each of the above factors, as outlined above.\nOf course, just because some government could achieve \"lock-in\" doesn't mean it would. But over the course of a long enough time, it seems that \"anti-lock-in\" societies would simply gain ever more chances to become \"pro-lock-in\" societies, whereas even a few years of a \"pro-lock-in\" society could result in indefinite lock-in. (And in a world of digital people operating a lot faster than humans, a lot of \"time\" could go by by the end of this century.)\nA factor that seems less susceptible to being changed by technology: competition between societies\nEven if a government had complete control over its society, this wouldn’t ensure stability, because it could always be attacked from outside. And unlike the above factors, this is not something that radical advances in science and technology seem particularly likely to change: in a world of digital people, different governments would still be able to attack each other, and would be able to negotiate with each other with the threat of attack in the background. \nThis could cause sustained instability such that the world is constantly changing. This is the point emphasized by the Overcoming Bias critique.\nI think this dynamic might - or might not - be an enduring source of dynamism. Some reasons it might not:\nIf AI caused an explosion in scientific and technological advancement, then whoever develops it first could quickly become very powerful - being “first to develop PASTA by a few months” could effectively mean developing the equivalent of a several-centuries lead in science and technology after that. This could lead to consolidation of power on Earth, and there are no signs of intelligent life outside Earth - so that could be the end of “attack” dynamics as a force for instability.\nAwareness of the above risk might cause the major powers to explicitly negotiate and divide up the galaxy, committing (perhaps enforceably, depending on how the technological picture shakes out) never to encroach each others’ territory. In this case, any particular part of the galaxy would not be subject to attacks.\nIt might turn out that space settlements are generally easier to defend than attack, such that once someone establishes one, it is essentially not subject to attack.\nAny of the above, or a combination (e.g., attacks are possible but risky and costly; world powers choose not to attack each other in order not to set off a war), could lead to the permanent disappearance of military competition as a factor, and open up the possibility for some governments to “lock in” key characteristics of their societies.\nThree categories of long-run future\nAbove, I’ve listed some factors that may - or may not - continue to be sources of dynamism even after explosive scientific and technological advancement. I think I have started to give a sense for why, at a minimum, sources of dynamism could be greatly reduced in the case of digital people or other radically advanced technology, compared to today.\nNow I want to divide the different possible futures into three broad categories:\nFull discretionary lock-in. This is where a given government (or coalition or negotiated setup) is able to essentially lock in whatever properties it chooses for its society, indefinitely. \nThis could happen if essentially every source of dynamism outlined above goes away, and governments choose to pursue lock-in.\nPredictable competitive dynamics. I think the source of dynamism that is most likely to persist (in a world of digital people or comparably advanced science and technology) is the last one discussed in the above section: military competition between advanced societies.\nHowever, I think it could persist in a way that makes the long-run outcomes importantly predictable. In fact, I think “importantly predictable long-run outcomes” is part of the vision implied by the Overcoming Bias critique, which argues that the world will need to be near-exclusively populated by beings that spend nearly their entire existence working (since the population will expand to the point that it’s necessary to work constantly just to survive).\nIf we end up with a world full of digital beings that have full control over their environment except for having to deal with military competition from others, we might expect that there will be strong pressures for the digital beings that are most ambitious, most productive, hardest-working, most aggressive, etc. to end up populating most of the galaxy. These may be beings that do little else but strive for resources.\nTrue dynamism. Rather than a world where governments lock in whatever properties they (and/or majorities of their constituents) want, or a world where digital beings compete with largely predictable consequences, we could end up with a world in which there is true freedom and dynamism - perhaps deliberately preserved via putting specific measures in place to stop the above two possibilities, and enforce some level of diversity and even randomness.\nHaving listed these possibilities, I want to raise the hypothesis that if we could end up with any of these three, and this century determines which (or which mix) we end up with, that makes a pretty good case for this century having especially noteworthy impacts, and thereby being the most important century of all time for intelligent life.\nFor example, say that from today’s vantage point, we’re equally likely to get (a) a world where powerful governments employ “lock-in,” (b) a world where unfettered competition leads the galaxy to be dominated by the strong/productive/aggressive, or (c) a truly dynamic world where future events are unpredictable and important. In that case, if we end up with (c), and future events end up being enormously interesting and consequential, I would think that there would still be an important sense in which the most important development of all time was the establishment of that very dynamic. (Given that one of the other two could have instead ended up determining the shape of civilization across the galaxy over the long run.)\nAnother way of putting this: if lock-in (and/or predictably competitive dynamics) is a serious possibility starting this century, the opportunity to prevent it could make this century the most important one.\nBoiling it down\nThis has been a lot of detail about radically unfamiliar futures, and readers may have the sense at this point that things have gotten too specific and complex to put much stock in. But I think the broad intuitions here are fairly simple and solid, so I’m going to give a more high-level summary:\nScientific and technological advancement can reduce or eliminate many of today’s sources of instability, from aging and death to chaos and natural events. An explosion in scientific and technological advancement could therefore lead to a big drop in dynamism. (And as one vivid example, digital people could set up tightly controlled virtual environments with very robust error correction - something I consider a scary possibility by default, as noted in the intro.) \nDynamism may or may not remain, depending on a number of factors about how consolidated power ends up being and how different governments/societies deal with each other. The “may or may not” could be determined this century.\nI think this is a serious enough possibility that it heightens the stakes of the “most important century,” but I’m far from confident in the thinking here, and I think most of the spirit of the “most important century” hypothesis survives even if we forget about all of it.\nHopefully these additional thoughts have been helpful context on where I’m coming from, but I continue to acknowledge that this is one of the more under-developed parts of the series, and I’m interested in further exploration of the topic.", "url": "https://www.cold-takes.com/weak-point-in-most-important-century-lock-in/", "title": "Weak point in “most important century”: lock-in", "source": "cold.takes", "source_type": "blog", "date_published": "2021-11-11", "id": "a37d25c18e9156d4b25d88fd6a7ba02e"} -{"text": "This post interrupts the Has Life Gotten Better? series to talk a bit about why it matters what long-run trends in quality of life look like.\nI think different people have radically different pictures of what it means to \"work toward a better world.\" I think this explains a number of the biggest chasms between people who think of themselves as well-meaning but don't see the other side that way, and I think different pictures of \"where the world is heading by default\" are key to the disagreements.\nImagine that the world is a ship. Here are five very different ways one might try to do one's part in \"working toward a better life for the people on the ship.\"\nMeaning in the \"ship\" analogy\nMeaning in the world\nRowing\n \nHelp the ship reach its current destination faster\n \nAdvance science, technology, growth, etc., all of which help people (or \"the world\") do whatever they want to do, more/faster\nSteering\n \nNavigate to a better destination than the current one\n \nAnticipate future states of the world (climate change, transformative AI, utopia, dystopia) and act accordingly\n \nAnchoring\n \nHold the ship in place\n \nPrevent change generally, and/or try to make the world more like it was a generation or two ago\n \nEquity\n \nWork toward more fair and just relations between people on the ship\n \nRedistribution; advocacy focused on the underprivileged; etc.\n \nMutiny\n \nChallenge the ship's whole premise and power structure\n \nRadical challenging of the world's current systems (e.g., capitalism)1\nWhich of these is the \"right\" focus for improving the world? One of the things I like about the ship analogy is that it leaves the answer to this question totally unclear! The details of where the ship is currently trying to go, and why, and who's deciding that and what they're like, matter enormously. Depending on those details, any of the five could be by far the most important and meaningful way to make a positive difference. \nIf the ship is the world, then where are we \"headed\" by default (what happens if we have more total technology, wealth, and power over our environment)? Who has the power to change that, and how has it been going so far?\nThese are important questions with genuinely unclear answers. So people with different assumptions about these deep questions can get along very poorly with each other.\nI think this sort of taxonomy provides a different angle on people’s differences from the usual discussions of pro/anti-government interventionism.\nNext I will give some somewhat more detailed thoughts on the case for and against each of these pictures of \"improving the world.\" This is not so much to educate the reader as to help them understand where I stand, and why I have some of the unusual views I do. \nI talk a bit about the \"track record\" of each category. A lot of the point of this analogy is to highlight the importance of big-picture judgments about history.\nRowing\nI use \"rowing\" to refer to the idea that we can make the world better by focusing on advancing science, technology, growth, etc. - all of which ultimately result in empowerment, helping people do whatever they want to do, more/faster. The idea is that, in some sense, we don't need a specific plan for improving lives: more capabilities, wealth, and empowerment (\"moving forward\") will naturally result in that.\nRowing is a contentious topic, and it’s contentious in a way that I think cuts across other widely-recognized ideological lines.\nTo some people, rowing seems like the single most promising way to make the world a better place. People and institutions who give off this vibe include:\nTech entrepreneurs and VCs such as Marc Andreessen (It's Time to Build) and Patrick Collison (progress studies).\nLibertarian-ish academics such as Tyler Cowen and Alex Tabarrok (see their books for a sense of this).\nThe many institutions and funders dedicated to general speeding up of science, such as the Moore Foundation, Howard Hughes Medical Institute, and CZI Biohub.2\nThe global development world, e.g. nonprofits such as Center for Global Development and institutions such as the World Bank, which seeks to help \"developing\" countries \"develop.\" This generally includes a large (though not exclusive) focus on economic growth.\nI sometimes see rowing-oriented arguments tagged as “pro-market” or even “libertarian,” but I think that isn't a necessary connection. You could argue - and many do - that a lot of the biggest and most important contributions to global growth and technological advancement come from governments, particularly via things like scientific research funding (e.g., DARPA), development-oriented agencies (e.g., the IMF), and public education. \nTo many people, though, advocacy for “rowing” seems like it’s best understood as a veneer of pro-social rhetoric disguising mundane personal attempts to get rich - even to the point where the wealthy create an intellectual ecosystem to promote the idea that what makes them rich is also good for the world. \nOn priors, I think this is a totally reasonable critical take: \nIn practice, a lot of the folks most interested in \"rowing\" are venture capitalists, tech founders, etc. who sure seem to have spent most of their lives primarily interested in getting rich.\nIt seems \"convenient,\" perhaps suspiciously so, that their story about how to make the world better seems to indicate that the best thing to do is focus on \"creating wealth\" (which usually aligns extremely well with \"getting rich\"), just like they are. This doesn't mean that they're deliberately hiding their motivations; but it may mean they naturally and subconsciously gravitate toward worldviews that validate their past (and present) choices.\nFurthermore, the logic of why “rowing” would be good seems to have some gaps in it. It’s not obvious on priors that more total wealth or total scientific capability makes the world better. When thinking about the direct impacts of wealth and tech on quality of life, it seems about as easy to come up with harms as benefits. \nClear benefits include lower burden of disease, less hunger, more reproductive choice and freedom, better entertainment (and tastier food, and other things that one might call \"directly\" or \"superficially\" pleasurable), and more ability to encounter many ideas and choose from many candidate lifestyles and locations.\n \nBut clear potential costs include environmental damage, rising global catastrophic risks,3 rising inequality, and a world that is chaotically changing, causing all sorts of novel psychological and other challenges for individuals and communities. And many of the obvious dimensions along which wealth and technology make life more \"convenient\" do not clearly make life better: if wealth and technology \"save us time\" (reducing the need to do household chores, etc.), we might just be spending the \"saved\" time on other things that don't make our lives better, such as competing with each other for wealth and status.\nThese concerns seem facially valid, and they apply particularly to rowing. (If someone works toward equity, there could be a number of criticisms one levels at them, but the above issues don’t seem to apply.)\nIn my view, the best case for “rowing” is something like: “We don’t know why, but it seems to be going well.” If I were back in the year 0 trying to guess whether increasing wealth and technological ability would be good or bad for quality of life, I would consider it far from obvious. But empirically, it seems that the world has been improving over the last couple hundred years. \nAnd with that said, it's much less clear how things were going in the several hundred thousand years before the Industrial Revolution.\nSo my current take on \"rowing\" is something like:\nDespite all of the suspicious aspects, I think there is a good case for it. I don’t understand where this ship is going or why things are working the way they are - maybe the ship happens to be pointed toward warmer or calmer latitudes? - but rowing seems to have made life better for the vast majority of people over the last couple hundred years, and will likely continue to do so (by default) over at least the next few decades.\nOn the other hand, I don't think the track record is so good as to assume that rowing will always be good, and I'm particularly worried and uncertain about how things will go if there is a dramatic acceleration in the rate of progress - I'm inclined to approach such a prospect with caution rather than excitement.\nSteering\nSteering sounds great in theory. Instead of blindly propelling the world toward wherever it’s going, let’s think about where we want the world to end up and take actions based on that!\nBut I think this is currently the least common conception of how to do good in the world. The idea of utopia is unpopular (more in a future piece), and in general, it seems that anyone advocating action on the basis of a specific goal over the long-run future (really, anything more than 20 years out) generally is met with skepticism. \nThe most mainstream example of “steering” is probably working to prevent/mitigate climate change. This isn’t about achieving an “end state” for the world, but it is about avoiding a specific outcome that is decades away, and even that level of specific planning about the long-run future is something we don’t see a lot of in today’s intellectual discourse.\nI think the longtermist community has an unusual degree of commitment to steering. One could even see longtermism as an attempt to resurrect interest in steering, by taking a different approach from previous steering-heavy worldviews (e.g., Communism) that have fallen out of favor. \nLongtermists seek out specific interventions and events that they think could change the direction of the long-run future. \nThey are particularly interested in helping to better navigate a potential transition brought on by advanced AI - the idea being that if AI ends up being a sort of “new species” more powerful than humans, navigating the development of AI could end up avoiding bad results that last for the rest of time.\nIt’s common for longtermists to take an interest in differential technological development - meaning that instead of being “pro” or “anti” technological advancement, they have specific views on which technologies would be good to develop as quickly as possible vs. which would be good to develop as slowly as possible, or at least until we’ve developed other technologies that can make them safer. It seems to me that this sort of thinking is relatively rare outside the longtermist community. It's more common for people to be pro- or anti-science as a whole.\nWhy is it relatively rare for people to be interested in “steering” as defined here? I think it is mostly for good reasons, and comes down to the fact that the track record of “steering” type work looks unimpressive.\nThere are some specific embarrassing cases, such as historical Communism,4 which explicitly claimed to aim at a particular long-term utopian vision.\nThere is also just a lack of salient (or any?) examples of people successfully anticipating and intervening on some particular world development more than 10-20 years away. People and organizations in the longtermist community have tried to find examples, and IMO haven’t come up with much.5\nDespite this, I’m personally very bullish on the kind of “steering” that the longtermist community is trying to do (and I’m also sold on the value of climate change prevention/mitigation).\nThe main reason for this is that I think defining, long-run consequential events of the future are more “foreseeable” now than they’ve been in the past. Climate change and advanced AI are both developments that seem highly likely this century (more on AI here), and seem likely to have such massive global consequences that action in advance makes sense. More broadly, I think it is easier than it used to be to scan across possible scientific and technological developments and point to the ones most worth “preparing for.\" \nIn the analogy, I’m essentially saying that there are particular important obstacles or destinations for the ship, that we can now see clearly enough that steering becomes valuable. By contrast, in many past situations I think we were “out on the open sea” such that it was too hard to see much about what lay ahead of us, and this led to the dynamic in which rowing has worked better than steering.\nOther reasons that I’m bullish on steering are that (a) I think today’s “steering” folks are making better, more rigorous attempts at predicting the future than people who have tried to make long-run predictions in the past;6 (b) I think “steering” has become a generally neglected way of thinking about the world, at the same time as it has become more viable. \nWith that said, I think there is plenty of room for longtermists to do a better job than they are contending with the limits of how well we can “steer,” and what kinds of interventions are most likely to successfully improve how things go.\nI think our ship draws close to some major crossroads, such that navigating them could define the rest of our journey. If I’m right, focusing on rowing to the exclusion of steering is a real missed opportunity.\nAnchoring\nIn practice, it seems like a significant amount of the energy in any given debate is coming from people who would prefer to keep things as they are - or go back to things as they were (generally pretty recently, e.g., a generation or two ago). This is an attitude commonly associated with \"conservatives\" (especially social conservatives), but it's an attitude that often shows up from others as well.\nAs someone who thinks life has been getting better over the last couple hundred years - and that we still have a lot of important progress yet to be made on similar dimensions to the ones that have been improving - I am usually not excited about anchoring (though the specifics of what practices one is trying to \"anchor\" matter).\nSome additional reasons for my general attitude:\nI think the world has been changing extraordinarily quickly (by historical standards) throughout the past 200+ years, and I think it will continue to change extraordinarily quickly for at least the next few decades no matter what. So when I hear people advocating for stability and trusting the established practices of those who came before us, I largely think they are asking for something that just can't be had. (One way of putting this: as long as things are changing, we may as well try to make the best of that change.)\n I am particularly skeptical that the previous generation or two should be emulated. There is obviously room for debate here (I might write more on this topic in the future).\n I think there is a general bias toward exaggerating how good the past was that we need to watch out for.\nThere is a version of \"anchoring\" that I think can be constructive: asking that changes to policy and society be gradual and incremental, rather than sudden, so we can correct course as we go. In practice, I think nearly all policy and societal changes do end up being gradual and incremental, at least in the modern-day developed world, such that I don't currently have a wish for a stronger \"anchoring\" force than already exists in most domains that come immediately to mind (unless you count the \"caution\" frame for navigating the most important century).\nEquity\nOf the five different visions of what it means to improve the world, equity seems the most straightforward and familiar. It is about directly trying to make the world more just and fair, rather than trying to increase total options and wealth and rather than trying to optimize for some particular future event. \nEquity includes efforts to:\nRedistribute resources progressively (i.e., from rich to poor), whether via direct charity or via advocacy.\nAmplify the voices and advance the interests of historically marginalized groups including women, people of color, and people born in low-income countries.\nImprove products and services aimed at helping people who would be under-served by default, including via education reform and improvement, and scientific research (e.g., the sort of global health R&D funded by the Bill and Melinda Gates Foundation).\nYou could argue that successful equity work also contributes to the goals of rowing and steering, if a world with less inequality is also one that’s better positioned for broad-based economic growth and for anticipating/preparing for particular important events. But work whose proximate goal is equity tends to look different from work whose proximate goal is rowing or steering.\nMost people recognize equity-oriented work as coming from a place of good intentions and genuine interest in making the world better. To the extent that equity-oriented work is controversial, it often stems from:\nArguments that it undermines its own goals. For example, arguments that advocating for a higher minimum wage could result in greater unemployment, thus hurting the interests of the low-income people that a higher minimum wage is supposed to help.\nArguments that it undermines rowing progress, and that rowing is an ultimately more important/promising way to help everyone. Dead Aid is an example of this sort of argument (picked for vividness rather than quality).\nI've talked about the track record of rowing and steering; I'll comment briefly on that of equity. In short, I think it's very good. I think that much of the progress the world has seen is fairly hard to imagine without significant efforts at both rowing and equity: major efforts both to increase wealth/capabilities and to distribute them more evenly. Civil rights movements, social safety nets, and foreign aid all seem like huge wins, and major parts of the story for why the world seems to have gotten better over time.\nWith that track record in mind, and the fact that many equity interventions seem good on common-sense grounds, I'm usually positive on equity-oriented interventions.\nMutiny\nMutiny looks good if your premises are ~the opposite of the rowers'. You might think that the world today operates under a broken \"system,\" and/or that we fundamentally have the wrong sorts of people and/or institutions in power. If this is your premise, it implies that what we tend to count as \"progress\" (particularly increased wealth and technological capabilities) is liable to make things worse, or at least not better. Instead, the most valuable thing we can do is get at the root of the issue and change the fundamental way that power is exercised and resources are allocated. \nUnlike steering, this isn't about anticipating some particular future event or world-state. Instead, it's about rethinking/reforming the way the world operates and the way decisions are made. Instead of focusing on where the ship is headed, it's focused on who's running the ship.\nThis framework often emerges in criticisms of charity, philanthropy and/or effective altruism that point to the paradox of trying to make the world better using money obtained from participating in a problematic (capitalist) system - or occasionally in pieces by philanthropists themselves on the importance of challenging the fundamental paradigms the world is operating in. Some examples: Slavoj Žižek,7 Anand Giridharas,8 Guerrilla Foundation,9 and Peter Buffett.10. Often, but not always, people in the \"mutiny\" category identify with (or at least use language that is evocative of) socialism or Marxism.\nOf the five categories, mutiny is the one I feel most unsatisfied with my understanding of. It seems that people use language about fundamental systems change to (a) sometimes mean something tangible, radical, and revolutionary like the abolition of private property; to (b) sometimes mean something that seems much more modest and that I would classify more as \"equity,\" such as working toward greatly increased redistribution of wealth;12 and to (c) sometimes mean a particular emotional/tonal attitude unaccompanied by any distinctive policy platform.13 And it's often unclear which they mean.\n(a) is the one I'm trying to point at with the \"mutiny\" idea. It's also the one that seems to go best with claims that it's problematic to e.g. \"participate in capitalism\" and then do philanthropy. (It's unclear to me how, say, running a hedge fund undermines (b) or (c).)\nI am currently skeptical of (a), because:\nI haven't heard much in the way of specific proposals for how the existing \"system\" could be fundamentally reformed, other than explicitly socialist and Marxist proposals such as the abolition of private property, which I don't support.\nI am broadly sympathetic to Rob Wiblin's take on revolutionary change: \"Effective altruists are usually not radicals or revolutionaries ... My attitude, looking at history, is that sudden dramatic changes in society usually lead to worse outcomes than gradual evolutionary improvements. I am keen to tinker with government or economic systems to make them work better, but would only rarely want to throw them out and rebuild from scratch. I personally favour maintaining and improving mostly market-driven economies, though some of my friends and colleagues hope we can one day do much better. Regardless, this temperament for ‘crossing the river by feeling the stones’ is widespread among effective altruists, and in my view that’s a great thing that can help us avoid the mistakes of extremists through history. The system could be a lot better, but one only need look at history to see that it could also be much worse.\"\nAs stated above, I broadly think that the world has made and continues to make astonishing positive progress, which doesn't put me in a place of wanting to \"burn down the existing order\" (at least without a clearer idea of what might replace it and why the replacement is promising). I'm particularly unsympathetic to claims that \"capitalism\" or \"the existing system\" is the root cause of global poverty. I think that global poverty is the default condition for humans, and the one that nearly all humans existed under until relatively recently.\nTo be clear, I don't mean here to be advocating against all radical views. A radical view is anything that is well outside the Overton window, and I have many such views. And I am sympathetic to many views that many might call \"anticapitalist\" or \"revolutionary,\" such as that we should have dramatically more redistribution of wealth. \nI am also generally sympathetic to both (b) and (c) above. \nCategorizing worldviews\nHere's a mapping from some key combinations of rowing/equity/mutiny to familiar positions in current political discourse:\nRowing\nEquity\nMutiny\nAKA\nExtreme Marxists\n \nThe radical left\n \nThe less radical, but still markets- and growth-skeptical, left\n \nMany conservatives (\"anchoring\")\n \nLibertarians, economic conservatives\n \n\"Neoliberals,\" the \"pro-market left\"\n \nI've left out steering because I see it as mostly orthogonal to (and usually simply not present in) most of today's political discourse. I've represented \"anchoring\" as a row rather than a column, because I think it is mostly incompatible with the others. And I've left out worldviews that are positive on both rowing and mutiny (I think there are some worldviews that might be described this way, but they're fairly obscure).14\nSo?\nWe can make up categories and put people in them all day. What does this taxonomy give us?\nThe main thrust for me is clarifying what people in different camps are disagreeing about, especially when they seem to be talking past each other by using completely different definitions of “improving the world.”\nI think this framework is also useful for highlighting the role of one’s understanding of history in these disagreements. It’s far from obvious, a priori, whether the best thing to work on is rowing, steering, anchoring, equity, or mutiny, especially when we are so foggy on where a ship is heading by default. It really matters whether you think that increases in wealth and technological capability have had good effects so far, whether this has come about through deliberate planning or blind “forging ahead,” and whether there are particular reasons to expect the future to diverge from the past on these points. \nAccordingly, when confronting one camp from another, I think it’s helpful when possible to be explicit about one’s assumptions regarding how things have gone so far, and regarding the broad track records of rowing, steering, anchoring, equity and mutiny. History doesn’t give us clear, pre-packaged answers on these questions - different people will look at the same history and see very different things - but I think it’s good to have views on these matters, even if only lightly informed to start, and to look out for information about history that could revise them.Footnotes\n Though as discussed below, it's often unclear what \"capitalism\" means in this sort of context. ↩\n While these sorts of institutions often lead with the goal of fighting disease, they tend to fund basic science with very open-ended goals. ↩\n For example, The Precipice argues that \"Fueled by technological progress, our power has grown so great that for the first time in humanity’s long history, we have the capacity to destroy ourselves.\" The book sees \"anthropogenic\" global catastrophic risks as the dominant ones, and I agree. ↩\n I use the term \"historical\" in order to be agnostic on whether this was \"true\" Communism or reflects badly on Marxist philosophy. ↩\n See AI Impacts' attempts to find examples of helpful early actions on risks and Open Philanthropy on historical long-range forecasting. Other examples that have been suggested to me: early action to stop damage to the ozone layer, nuclear nonproliferation action, perhaps the US's approach to the Cold War. ↩\n See Open Philanthropy on historical long-range forecasting for what past efforts look like. There are many longtermist discussions of long-range predictions that seem significantly better on the dimensions covered in the post. ↩\n \"There is a chocolate-flavoured laxative available on the shelves of US stores which is publicised with the paradoxical injunction: Do you have constipation? Eat more of this chocolate! – i.e. eat more of something that itself causes constipation. The structure of the chocolate laxative can be discerned throughout today’s ideological landscape ... We should have no illusions: liberal communists [his term for the Davos set] are the enemy of every true progressive struggle today. All other enemies – religious fundamentalists, terrorists, corrupt and inefficient state bureaucracies – depend on contingent local circumstances. Precisely because they want to resolve all these secondary malfunctions of the global system, liberal communists are the direct embodiment of what is wrong with the system ... Etienne Balibar, in La Crainte des masses (1997), distinguishes the two opposite but complementary modes of excessive violence in today’s capitalism: the objective (structural) violence that is inherent in the social conditions of global capitalism (the automatic creation of excluded and dispensable individuals, from the homeless to the unemployed), and the subjective violence of newly emerging ethnic and/or religious (in short: racist) fundamentalisms. They may fight subjective violence, but liberal communists are the agents of the structural violence that creates the conditions for explosions of subjective violence.\" ↩\n \"\"If anyone truly believes that the same ski-town conferences and fellowship programs, the same politicians and policies, the same entrepreneurs and social businesses, the same campaign donors, the same thought leaders, the same consulting firms and protocols, the same philanthropists and reformed Goldman Sachs executives, the same win-wins and doing-well-by-doing-good initiatives and private solutions to public problems that had promised grandly, if superficially, to change the world—if anyone thinks that the MarketWorld complex of people and institutions and ideas that failed to prevent this mess even as it harped on making a difference, and whose neglect fueled populism’s flames, is also the solution, wake them up by tapping them, gently, with this book. For the inescapable answer to the overwhelming question—Where do we go from here?—is: somewhere other than where we have been going, led by people other than the people who have been leading us.\" ↩\n \"EA’s approach of doing ‘the most good you can now’ without, in our opinion, questioning enough the power relationships that got us to the current broken socio-economic system, stands at odds with the Guerrilla Foundation’s approach. Instead, we are proponents of radical social justice philanthropy, which aims to target the root causes of the very system that has produced the symptoms that much of philanthropy, including EA, is trying to treat (also see here and here) ... By asking these questions, EA seems to unquestioningly replicate the values of the old system: efficiency and cost-effectiveness, growth/scale, linearity, science and objectivity, individualism, and decision-making by experts/elites ... \" ↩\n \"we will continue to support conditions for systemic change ... It’s time for a new operating system. Not a 2.0 or a 3.0, but something built from the ground up. New code.\" (Though also note this quote: \"I’m really not calling for an end to capitalism; I’m calling for humanism.\") ↩\n(Footnote deleted) ↩\n For example, see the \"Class\" chapter of How to Be an Antiracist, where the author speaks of capitalism and racism as \"conjoined twins\" but then states that he is defining \"capitalism\" as being in opposition to a number of not-very-radical-seeming goals such as increased redistribution of wealth and monopoly prevention. He speaks positively of Elizabeth Warren despite her statement that she is \"capitalist to the bone,\" and says \"if Warren succeeds, then the new economic system will operate in a fundamentally different way than it has ever operated before in American history. Either the new economic system will not be capitalist or the old system it replaces was not capitalist.\" ↩\n For example, a self-identified socialist states: \"There’s a great Eugene Debs quote, 'While there is a lower class, I am in it. While there is a criminal element, I am of it. And while there is a soul in prison, I am not free.' That’s not a description of worker ownership — that’s a description of looking at the world and feeling solidarity with people who are at the bottom with the underclass. And I think that is just as important to what animates socialists as some idea about how production should be managed ... People focus a lot on the question of central planning. But I’ve been doing interviews of socialists, interviewing DSA people around the country, and the unifying thread really is not a very clear vision for how a socialist economy will work. It is a deep discomfort and anger that occurs when you look at the world and you see power relationships and you see a small class of people owning so much and a large number of people working so hard and having so little. There are socialist divides over nearly every question, but this is the one thing that socialists all come together on.\" ↩\n I sometimes encounter people who seem to think something along the lines of: \"Progress is slowing down because our culture has become broken and toxic. The only hope for getting back to a world capable of a reasonable pace of scientific, technological, and economic progress is to radically overhaul everything about our institutions and essentially start from scratch.\" I expect to write more on the general theme of what we should make of \"progress slowing down\" in the future. ↩\n", "url": "https://www.cold-takes.com/rowing-steering-anchoring-equity-mutiny/", "title": "Rowing, Steering, Anchoring, Equity, Mutiny", "source": "cold.takes", "source_type": "blog", "date_published": "2021-11-09", "id": "e7c57e9acffaf80d3e93b4b0f6b729df"} -{"text": "\nAsking whether life was better in hunter-gatherer times, I wrote: “I haven't found anything that looks like systematic data on pre-agriculture mental health or subjective wellbeing.”\nThanks to commenter Tsunayoshi for linking to some! Subjective Happiness Among Polish and Hadza People is a 2020 study, reporting1 on a survey of 145 people from the Hadza hunter-gatherer society using a four-question oral survey on happiness (the Subjective Happiness Scale - see footnote for the four questions2). The results are then compared to a comparison sample of 156 Polish participants (who were asked the same questions), with the conclusion that the Hadza report significantly higher happiness. \nThe study also provides a table comparing the Hadza result to happiness scores from a number of other studies:\nAt an average happiness score of 5.83 on a 7-point scale, the surveyed Hadza outscore all of the other societies examined! \nAll else equal, this is evidence for the “pre-agriculture Eden” hypothesis that I have expressed skepticism about: perhaps, despite seemingly worse health, hunger, and violence, hunter-gatherers are/were significantly happier than people in the modern world.\nI have two reactions here: I doubt this picture would hold up to more research and scrutiny, and I think more research and scrutiny seem really valuable to do in case it does.\nDoubts\nFrom my skim, I don't have any issues with the paper itself. It looks well done. However, there are a number of things that make me skeptical that \"hunter-gatherers are happier\" would hold up to a more extensive research program:\nThis is only one study,3 covering only one hunter-gatherer society (unlike all of the analyses I presented in this post, which each looked at data from more than one society). By default, I treat any one study as only a light update, and I generally expect results to be less likely to hold up the more striking and exciting they are. (Reasons include regression toward the mean and bias in academic incentives.)\nGiven the big cultural differences between hunter-gatherers and modern-lifestyle societies, I suspect that a lot of what’s going on here has to do with how the survey questions are being interpreted, and/or how people are perceiving their role as study participants. \nThe pattern from most of the literature is that average happiness is higher in richer countries, and I think there are a number of pretty clear dimensions on which hunter-gatherer quality of life looks worse than modern-lifestyle quality of life. While it’s certainly possible to come up with possible reasons hunter-gatherers could be happier, I don’t know of any such reasons that seem to have very high initial plausibility.\n5.83 on a 7-point scale just seems very high, which to me points further toward something about \"how people perceived the survey and the expectations for answering.\" I don't generally expect the \"state of nature\" to be associated with such high happiness.\nValue of more research\nBut imagine that dozens more studies like this were done, with a variety of different survey methodologies in a number of hunter-gatherer societies, and they continued to show that hunter-gatherer societies seem noticeably happier than modern-lifestyle societies.\nAt that point, I think we’d be looking at a potentially hugely important area of research, and it would be worth dramatically raising investment to try to understand why such differences might exist (particularly given the seemingly poor nutrition and health of hunter-gatherer societies). For example, people living modern lifestyles could be recruited to try imitating different aspects of hunter-gatherer lifestyles, to see whether these produce a boost in happiness.\nThe kind of study I've linked here seems relatively straightforward to conduct, as studies go: it doesn’t require the researchers to enforce unusual policies or interventions, and it doesn’t require trying to unravel questions about causality. It seems to me that further studies like this would be very worth pursuing (and that it’s too bad this appears to be the only study of its kind to date - though if you know of more, please let me know)!Footnotes\n Here I focus on the “Study 2” part of the paper; “Study 2” has more detailed comparisons than “Study 1,” and they are both focused on the Hadza. ↩From this paper:\n ↩\n Technically two studies, but they are using substantially similar survey methods, among the ~same populations at the ~same time. Also, the paper cites prior literature, but I don’t think it is nearly as relevant to the question of hunter-gatherer happiness. I also searched the “cited-by” references for the paper and didn’t find anything else. ↩\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/hunter-gatherer-happiness/", "title": "Hunter-gatherer happiness", "source": "cold.takes", "source_type": "blog", "date_published": "2021-11-04", "id": "ca481edd7b9bd2f3ed9670870eb7deee"} -{"text": "As I've asked Has Life Gotten Better?, I've run into some intense debates about how violent early humans were.\nThe Better Angels of Our Nature argues that our distant past looks incredibly violent compared to today (and the relevant page from Our World in Data gives a similar impression). \nBut I've also seen pointed criticisms of this conclusion from sources such as Hunter-Gatherers and Human Evolution and War, Peace and Human Nature. These works tend to imply that early humans were relatively non-violent (or at least didn't war much).\nTrying to unravel and understand the points of disagreement has been confusing, but necessary to get a decent picture of trends in quality of life over very long time periods. The rate of violent deaths is one of the few systematic, meaningful-seeming metrics for assessing quality of life before a couple hundred years ago, and strong claims are made on both sides.\nAs of now, I believe a different story than either \"violent death rates have consistently gone down over time\" or \"early societies were remarkably peaceful.\" Here's my current impression:\nHypothesis\nAdvanced by\nTime period\nMy take\n(A) There was a rise in violent death rates sometime around (or before) 10,000 BCE, as people moved from a nomadic lifestyle (moving from place to place) to a sedentary one (staying in one place indefinitely).\n \nSome anthropologists, including critics of Better Angels of our Nature\nVaries by region, but first transition would be sometime around (or perhaps before) 10,000 BCE\n \nI'd guess this is true, with low confidence.\n \n(B) There was a fall in violent death rates during the initial transition from non-state to state societies.\n \nBetter Angels of our Nature\nVaries by region, but first development of major states was around 5,000 BCE.\n \nI am unconvinced / don't find this particularly likely.\n \n(C) Today's society (especially the developed world) has lower violent death rates than prehistoric societies, both nomadic and sedentary.\n \nBetter Angels of our Nature\n10,000 BCE and earlier vs. today\n \nI'd guess this is true, with moderate confidence.\n \n(C) There was a fall in violent death rates in Europe starting around 1300, and the US around 1600.\n \nBetter Angels of our Nature\n1300 CE and later\n \nSeems true based on homicide data, though large-scale atrocities could complicate the picture (future post).\n \n(D) It's unclear what happened to violent death rates in between the initial transition to sedentary societies and 1300 CE.\n \n(Additional observation I'm making)\n \nPeriod between 10,000 BCE and 1300-1600 CE\n \nSeems true\nSimplified visual timeline version:\nBelow, I will:\nLay out the best data I know of (from Our World in Data) for violent death rates in pre-state societies. I'll briefly discuss the interpretation (\"B\" above) that early societies were very violent, and became less so as powerful states arose.\nExamine the main criticism I’ve seen of this data. The criticism is that one needs to distinguish between “sedentary” pre-state societies (which stay indefinitely in one area, and tend to be relatively hierarchical) and “nomadic” societies (which move from place to place, tend to be more egalitarian, and are hypothesized to be more representative of the distant past).\nTry to pull apart and compare (1) “sedentary” societies; (2) “nomadic” societies; (3) today’s world. I think that \"sedentary\" societies look more violent than \"nomadic\" ones, implying that violence rose at some point in the past (lending support to (A) in the chart above), while both look more violent than today's world (lending support to part of (C) in the chart above).\nDiscuss the state of the evidence on what happened to violent death rates in between prehistory and today (which will cover (D) and part of (C) in the chart above).\nData on violent death rates from pre-state societies\nChapter 2 of Better Angels of Our Nature makes comparisons between violent death statistics in non-state societies vs. state societies. Since (it seems) there were no \"states\" from the beginning of our species until the emergence of the first states around ~5,000 BCE,1 the presumption is that these statistics are a decent way to get at the prevalence of violence in the distant pre-state past, compared to the prevalence of violence in later periods with states.\nFor a summary, I'll lay out these graphics from Our World in Data, which are essentially the same as those from Better Angels, with some updates and corrections.\nOne immediate observation: I don't think this data provides much evidence about the argument that the early transitions from non-state to state societies led to a decline in violence (\"B\" in the chart above).2 This is because:\nThe state societies presented are all from 1600 CE and later, other than \"Central Mexico, 1419-1519 CE\" (whose violent death rate fits right in with the non-state rates). \n1600 CE is a very long time after the appearance of the first states. So this is a comparison between non-state societies and recent or modern state societies - not between non-state societies and the state societies that closely followed them.3\nWhat this data may (and, as I'll argue below, does) show is that today's societies appear to have low violent death rates compared to the distant past.\nNomadic vs. sedentary societies\nI've looked for criticisms of the arguments about early violence given in Better Angels. One of the major ones is that the earliest societies weren't just non-state societies - they were specifically nomadic foraging societies, which were more egalitarian than other types of non-state societies. And the high violent death rates above mostly don't come from nomadic foraging societies.\nFor an example of this criticism, see quote in footnote from Hunter-Gatherer and Human Evolution by Richard B. Lee.4 (For some other criticisms and my responses, see Appendix 1.)\nIn a sense, this critique isn't directly contradicting Better Angels, which has explicitly chosen to take a \"state vs. non-state societies\" lens rather than an \"earlier humans vs. later humans\" lens.5 But if nomadic foragers were less violent than other societies and most early societies were nomadic foragers, this would mean that there was at least one historical case of violent death rates going up (even if they later went back down).\nFrom what I can tell, the distinction between nomadic foraging societies and sedentary societies is widely recognized as real and important, though much about it is disputable and disputed. I have seen a lot of casual references to this distinction; the most thorough discussion I've found is in Chapter 9 of The Lifeways of Hunter-Gatherers (abbreviated for the rest of this piece as \"Lifeways\"; more here on why I consider this source especially helpful/credible). Here's a quote giving the basic idea:\nIf I asked the average anthropology student to imagine a group of hunter-gatherers, it is most likely that the Ju/’hoansi would come to mind: small, peaceful, nomadic bands composed of men and women with few possessions and who are equal in wealth, opportunity, and status. Yet ... the average student is also aware of cases that easily overturn that image: large, sedentary, warring, possession-laden Northwest Coast societies, where men boasted of their exploits, status, and power ... \n[the latter] are nonegalitarian societies, whose elites possess slaves, fight wars, and overtly seek prestige. Although anthropologists have long considered [these societies] to be exceptions, products of resource-rich environments, archaeologists continue to discover evidence of nonegalitarian foraging societies in many environments ...\nIt's important to note that when Lifeways uses the term \"egalitarian,\" it's only in a relative sense - for example, it specifically states that even \"egalitarian\" societies have inegalitarian gender relations.6 \"Egalitarian\" in this context seems to refer to (a) general norms against explicit hierarchy and boasting of status;7 (b) perhaps a lack of permanent power hierarchy.8\nThe remainder of that chapter implies that nonegalitarian societies are more likely to be sedentary (occupying a single area indefinitely) as opposed to nomadic (needing to periodically move in order to get to an area with more resources). One rough intuition I've seen is that it's hard to accumulate wealth in a society that periodically relocates entirely (and that lacks currency, banking, etc.) At the same time, this chapter emphasizes that this association (and the idea of \"egalitarian\" societies) can be oversimplified (quotes in footnote).9\nThe reason this might matter is the claim that - to quote Lee again - \"nomadic foragers ... reflect most closely the characteristics of ancient foragers.\" The idea being that for most of human history (hundreds of thousands, or even millions of years), most humans were mobile and \"egalitarian,\" and it was relatively recent (perhaps somewhat, but not long, before the Neolithic Revolution of ~10,000 BCE) that sedentary societies started popping up. \nThis is a view that many seem to hold, including Lee as quoted above, as well as Robert Kelly (the author of Lifeways)10 - though I have also seen it disputed by those who believe that early humans could have been largely nonegalitarian. (Better Angels is one of the sources that argues the latter - quotes in footnote).11\nTo recap this section:\nThere is a common association between societies' being nomadic and egalitarian, though I haven't seen a systematic examination of the association, and believe that this idea is at least somewhat oversimplified.\nThere is a common view that nomadic, egalitarian societies are most representative of the distant past, though this too is disputed.\nWith this in mind, I think it makes sense to try to distinguish between nomadic/egalitarian and sedentary/nonegalitarian societies in the data provided above. I'll do that next.\nViolent death rates in nomadic societies, sedentary societies, and today's societies\nUnfortunately, it is not at all straightforward to determine whether a given society is \"nomadic/egalitarian\" or \"sedentary/inegalitarian.\" \nThese distinctions are only sometimes flagged in the papers that discuss violence stats.\nThey don't appear to be clear-cut or undisputed.\nI've often been unable to get confident about what category a given society belongs in via searching, sometimes because societies gradually transition from nomadic to sedentary over time (so the timing of when the violence stats were assessed becomes crucial).\nBoth Better Angels and Our World in Data rely primarily on three \"aggregator\" sources that pull many estimates of violent death rates from previous papers. But when I examined these, I felt that none gave reliable classifications.\nKeeley 1996 provides the largest number of estimates. Unfortunately, it says little about its methods for deciding what to include, and doesn't distinguish between nomadic and sedentary societies. Additionally, commentary from a later paper (Bowles 2009, noted below) made me concerned about its reliability.12\nBowles 2009 distinguishes explicitly between sedentary and nomadic societies in one of its tables.13 However, this table seems to contain 4 meaningful errors in its 8 rows (see appendix). (BTW, I think this topic is cursed. Lifeways contains a table that should be a collection of homicide rates among foraging societies, but is an erroneous reprint of the previous table; and one of Our World in Data's key tables is also an erroneous reprint of the previous table, which they haven't fixed even though I emailed them about it, though in that case I was able to find the right table by guessing the URL. At least this blog post is perfect.)\nGat 2006 mostly doesn't distinguish between sedentary and nomadic.14\nTo try to get at the distinction, I tried to identify \"nomadic forager\" societies using Lifeways, Youngberg and Hanson 2010, and the Bowles 2009 classifications, in the set of societies that Our World in Data lists violent death rates for.15\nI was only able to identify three:\nSociety\nViolent deaths per 100,000 people per year\nWhy I think they count as nomadic foragers\nMurngin\n \n330\n \nBowles 2009 classifies them this way\n \nTiwi\n \n160\n \nYoungberg and Hanson 2010 names them as one of five societies that best meet the criteria for \"societies that most resemble the social environment where most human psychology seems to have evolved: small bands of nomadic foragers.\" Lifeways also labels the Tiwi \"egalitarian\" in Table 7-9.\n \n!Kung (aka Ju/'hoansi)\n \n42\n \nVery commonly cited as an example of a nomadic, egalitarian society, e.g. they are the example given when Lifeways explains the distinction\n \nNext, I'll discuss what this tells us about (a) violent death rates in nomadic vs. sedentary societies; (b) violent death rates in early societies vs. today's.\nViolent death rates in nomadic vs. sedentary societies\nThe violent death rates above seem significantly lower than what you'd get if you weren't separating out nomadic foragers. \nIn context of the full Our World in Data set, these societies rank as the 17th, 25th and 28th most violent out of 30.\nThe median violent death rate (per 100k people per year) for that set is 420, compared to 160 for the median of this set.\nI believe that most of the other, generally more violent societies are probably sedentary, based on their locations16) and a few confirmed cases.\nA couple other data points show a similar pattern of sedentary populations having notably (but not astronomically - something like 2x) higher violent death rates.17\nThis implies - if we accept the (non-consensus, but widely held) point that the earliest humans were mostly nomadic - that violent death rates likely rose as people transitioned from nomadic to sedentary societies. \nViolent death rates in nomadic societies vs. today's world\nIt's not totally straightforward to compare the above figures to figures from today's societies, because we need clarity on what exactly \"deaths from violence\" are - do they include homicides? Battle deaths? Suicides? Poisonings? Deaths from violent accidents and encounters with animals?\nMy current impression is that the violent death figures cited above are inclusive of battle deaths and homicides (which I'd guess can be a fuzzy line for small bands of people),18 while it's ambiguous whether they include other sorts of violent deaths.19\nI haven't found a ready-made analogous statistic for today's societies (something combining homicides, battle deaths, and potentially more), so I tabulated the figures myself here (based on this Our World in Data source, as well as this population data), creating both a \"high\" and \"low\" violent death rate estimate. \nThe \"low\" estimate is just deaths from terrorism, interpersonal violence, and \"conflict and terrorism\".\n \nThe high estimate adds in executions, poisonings, self-harm deaths, and road injuries (a big one).\nHere's my summary:\nSociety\nViolent deaths per 100,000 people per year\nMurngin\n \n330\n \nTiwi\n \n160\n \n!Kung (aka Ju/'hoansi)\n \n42\n \nWorld (today) - high estimate\n \n35.4\n \nUSA (today) - high estimate\n \n35.2\n \nWorld (today) - low estimate\n \n7.4\n \nWestern Europe (today) - high estimate\n \n6.6\n \n USA (today) - low estimate\n \n6.2\n \nWestern Europe (today) - low estimate\n \n0.3\n \nThe upshot - today's world looks nowhere near as violent as the few foraging societies we have figures for.\nA couple of additional notes on these comparisons in a footnote.20\nNote that there are significant caveats to the data on foraging societies, and I wouldn't be shocked if it turned out upon further investigation that - when accounting for large-scale wars and atrocities - today's world looks at least as violent as nomadic foraging societies. (I wouldn't be shocked, but it's not what I currently expect.)\nComparison to societies in between then and now\nAs discussed above, it looks like the societies most likely to be representative of our most distant past have high violent death rates compared to today. It also looks like violent death rates went up at some point in the distant past (around or prior to 10,000 BCE).21\nWhat about in between ~10,000 BCE and today?\nThere isn't a huge amount we can say about this period. Interestingly, all of the data we have on violent death rates seems to be either (interpreted as) being about the very distant past (many thousands of years ago) or about the relatively recent past (1300 CE and later).\nMy rough understanding of why this is:\nTo make guesses about the distant past, we can look at currently-existing societies that seem to have \"never modernized\" in the sense of adopting agriculture or becoming part of large-scale states.22\nStarting in 1300 CE, we start to have (in a small number of European countries) systematic written records of things like court cases that make it possible to estimate homicide rates. \nBut for the in-between period, we have neither.23\nStill, we can look at the data we do have on 1300 CE and later, and draw some clues from that. My current overall guess: \nViolent death rates fell at some point in between ~10,000 BCE and the kind of modernization that came with court records and made homicide rate estimates possible (around 1300 CE in Europe, later elsewhere). \nAfter that sort of modernization, they fell fairly steadily.\nI'll explain these points next.\nThe earliest systematic data I've seen from actual past homicide reports (as opposed to inference from ethnography or archaeology) is from this chart (there is a similar one in Better Angels):\nThe starting homicide rate is 25-55 per 100k people per year, which looks low compared to the figures above (especially the figures that don't distinguish between nomadic and sedentary societies!) This implies that there was some decline in violence even before 1300 (in addition to the big decline afterward, shown).\nHowever:\nThe chart is only of homicides, and other sources of violent deaths could complicate the picture. For example, eyeballing this chart suggests that we might want to add something like 50 to the figure for the UK just to account for battle deaths; and we don't know how much more we'd have to add if we wanted to include suicides, executions and violent accidents.\nWestern Europe may not be representative. Some of the only other homicide data I've seen from around this period is in the following charts from Better Angels: \nThe starting rates here don't look clearly lower than the figures for nomadic forager societies.24\nMy current overall guess: after violent death rates ~doubled with the transition from nomadic to sedentary societies (discussed above), they then fell back to the original rates (but not necessarily much below) at some point in between then and the kind of modernization that came with court records and made homicide rate estimates possible. After that sort of modernization, they fell fairly steadily, as shown in the above charts.\nI've now covered each of (A)-(D) discussed in the intro. Here's the graphical summary again:\nWhew! I hope you found that at least somewhat less confusing to read than I did to write. Having gotten something resembling clarity on violent death rates over the long run, I can use this for the broader question of whether life has gotten better. \nThe story for \"overall quality of life\" looks broadly similar to the story for violent death rates: things look like they got worse around 10,000 BCE (when agriculture was developed), then we have a big mystery between then and the early 2nd millennium, and since then there was some improvement (which accelerated in the Industrial Revolution a couple hundred years ago). This is a different picture from either \"Life has gotten steadily better throughout our history\" or \"Life was best in the state of nature,\" both of which I think are more common memes.Footnotes\n I haven't tracked down a satisfying source for this, though I also haven't seen much controversy over it (not that that gives too much comfort). The Wikipedia page on states makes this claim, as does Better Angels (\"It took around five thousand years after the origin of agriculture for true states to appear on the scene,\" from chapter 2, citing four sources). ↩\n Better Angels also makes an additional argument that the transition to state societies brought a reduction in violence: \"\"The major cleft in the graph, then, separates the anarchical bands and tribes from the governed states. But we have been comparing a motley collection of archaeological digs, ethnographic tallies, and modern estimates, some of them calculated on the proverbial back of an envelope. Is there some way to juxtapose two datasets directly, one from hunter-gatherers, the other from settled civilizations, matching the people, era, and methods as closely as possible? The economists Richard Steckel and John Wallis recently looked at data on nine hundred skeletons of Native Americans, distributed from southern Canada to South America, all of whom died before the arrival of Columbus.59 They divided the skeletons into hunter-gatherers and city dwellers, the latter from the civilizations in the Andes and Mesoamerica such as the Incas, Aztecs, and Mayans. The proportion of hunter-gatherers that showed signs of violent trauma was 13.4 percent, which is close to the average for the hunter-gatherers in figure 2–2. The proportion of city dwellers that showed signs of violent trauma was 2.7 percent, which is close to the figures for state societies before the present century. So holding many factors constant, we find that living in a civilization reduces one’s chances of being a victim of violence fivefold.\" I haven't dug into this study but I have a number of reservations, including generic skepticism of any one study, and the observation that <10% of people worldwide lived in cities at ~every point before 1800 (this is based on a calculation I did off of HYDE data). ↩\n While some of the non-state data is also recent, I believe it is intended as evidence about the very distant past, when the vast majority of people lived in non-state societies (as opposed to today, when almost none do). ↩\n \"Historically nomadic foragers (HNFs), small in scale, mobile, and egalitarian, reflect most closely the characteristics of ancient foragers, a point emphasized by Fry (2006, 2013). But the bellicose school loads their sampling procedures with groups that depart sharply from this pattern. \n \"Mounted foragers of the American Great Plains (De Maillie 2000) and sedentary nonegalitarian foragers of California (Heizer 1978) and the north west coast of North America (Suttles 1990; Flannery & Marcus 2012, pp. 66-87; Daly 2014) all demonstrated significant levels of war-like behaviors. Yet, horse transport on the plains and stockaded settled villages on the west coast are completely absent from the archaeological record of pre-Neolithic foragers. But at least these are examples of hunter-gatherers ...\n \"In his 2011 book, Pinker does address the differences between foragers and farmers, but he still loads his sample with cases that are not representative of HNFs [Historically Nomadic Foragers]. For example, in his table 'Rate of Death in Warfare in Nonstate and State Societies' (Pinker 2011, figures 2– 3, p. 53), the 27 nonstate cases are heavily loaded with New Guinean and nearby farming societies (12 of 27) and Californian and Plains Indians (5 of 27); only 5 of 27 of the cases remotely qualify as HNFs.\n (Lee doesn't say which 5.) ↩\n \"For all these reasons, it makes no sense to test for historical changes in violence by plotting deaths against a time line from the calendar. If we discover that violence has declined in a given people, it is because their mode of social organization has changed, not because the historical clock has struck a certain hour, and that change can happen at different times, if it happens at all. Nor should we expect a smooth reduction in violence along the continuum from simple, nomadic hunter-gatherers to complex, sedentary hunter-gatherers to farming tribes and chiefdoms to petty states to large states. The major transition we should expect is at the appearance of the first form of social organization that shows signs of design for reducing violence within its borders. That would be the centralized state, the Leviathan ... What happens, then, when we use the emergence of states as the dividing line and put hunter-gatherers, hunter-horticulturalists, and other tribal peoples (from any era) on one side, and settled states (also from any era) on the other?\" From Better Angels of our Nature, Chapter 2.  ↩\n \"Even the most egalitarian of foraging societies are not truly egalitarian because men, without the need to bear and breastfeed children, are in a better position than women to give away highly desired food and hence acquire prestige. The potential for status inequalities between men and women in foraging societies (see Chapter 9) is rooted in the division of labor (vCollier and Rosaldo 1981: 282; Bliege Bird, Codding, and Bird 2009).\" ↩\n Lifeways: \"There are people in every society who will try to lord it over others, but egalitarian cultures contain ways to level individuals, to 'cool their hearts' as the Ju/’hoansi say. Humor is used to belittle the successful but boastful Ju/’hoan hunter; if that fails, he will be shamed with the label !xka ≠xan, 'far-hearted,' meaning mean or stingy (Lee 1988: 267). The Martu berate such people with warnings that they are 'like rocks,' with no compassion (Bird and Bliege Bird 2009: 44). Wives use sexual humor to keep a husband in line; and gambling, accusations of stinginess, or demand-sharing maintain a constant circulation of goods and prevent hoarding.\" ↩\n \"Group leaders are common in egalitarian societies – Shoshone 'rabbit bosses' for example – but they are temporary and have their position only because they have demonstrated skill at a particular task. Their leadership does not carry over into other realms of life, nor is it permanent.\" ↩\n \"Rather than attributing nonegalitarian foragers simply to 'resource abundance,' as many have done in the past, we will see that sedentism, the resource base, geographic circumscription, storage, population pressure, group formation, and enculturative processes all play a role ... The term 'egalitarian' does not mean that all members have the same of everything – goods, food, prestige, or authority. Not everyone is equal in egalitarian societies, but everyone has (or is alleged to have) equal access to food, to the technology needed to acquire resources, and to the paths leading to status and prestige ... Even in this regard, the inheritance of material wealth (especially productive land) and relational wealth (political connections) give some individuals a head start in life ... Egalitarianism can mask hierarchy. Australian Aboriginal men acquire authority and power in religious affairs by disengaging from property, by giving away meat, for example. But one can only disengage from property if, at some level, one claims a right to it (see Bird and Bliege Bird 2009). Appeals to autonomy and equality by informants in egalitarian societies often contradict an ethnographic reality in which some members have higher status and greater access to resources than others. We have already seen that people are well aware of, give greater prestige to, and may lose some of their autonomy to men who are good hunters. Differences in autonomy are perhaps especially pronounced between men and women.\" ↩\n \"On the strength of archaeological data, it is reasonable to assume that nonegalitarian society developmentally followed egalitarian society. On the northwest coast, for example, slavery appears about 1500 BC, warfare by AD 1000, and nonegalitarian societies by at least AD 200 (Donald 1997; Ames and Maschner 1999; Ames, 2001, 2008; Grier 2006; see also reviews of global prehistory by R. C. Kelly [2000] and Keeley [1996], and Kennett [2005] on California’s Channel Islands). Egalitarian behaviors and an egalitarian ethos were adaptive for quite a long time in human history before the selective balance tipped in favor of nonegalitarian behaviors and a nonegalitarian ethos (Cohen 1985).\" ↩\n From Better Angels of our Nature, chapter 2: \"The nonstate peoples we are most familiar with are the hunters and gatherers living in small bands ... But these people have survived as hunter-gatherers only because they inhabit remote parts of the globe that no one else wants. As such they are not a representative sample of our anarchic ancestors, who may have enjoyed flusher environments. Until recently other foragers parked themselves in valleys and rivers that were teeming with fish and game and that supported a more affluent, complex, and sedentary lifestyle. The Indians of the Pacific Northwest, known for their totem poles and potlatches, are a familiar example.\" \nBowles 2009, one of the main sources of violence data, seems to take a similar view: \"Because hunter-gatherer populations occupying resource-rich areas in the Late Pleistocene [starting over 100,000 years ago] and early Holocene were probably sedentary (at least seasonally), I have included wars involving settled as well as purely mobile populations.\" ↩\n \"I studied all available archaeological and ethnographic sources that present (or are cited as presenting) relevant data. Of these 34 sources, 14 were found to present data that were unrepresentative (for example, when warfare was primarily with modern agricultural populations), unreliable, or inadequate ... prehistoric warfare was frequent and lethal, but somewhat less so than estimates based on data in the standard source for these estimates (6).\" (6) is a reference to Keeley 1996. I also note that Bowles 2009 is later, examined Keeley 1996, and contains far fewer data points than Keeley 1996, implying that it chose to discard a lot of what Keeley 1996 lists. ↩\n The one using ethnographic data, i.e., studies of today's foraging societies; it does not do this in the table using archaeological data ↩\n It does specify that some of its figures come from agricultural societies, which implies that they are sedentary or at least have the ability to be sedentary (see previous characterization of the significance of agriculture). ↩\n I skipped two other tables that list violent death shares instead of rates, because shares are much harder to use for the comparisons I want to make later in this post. The \"violent death share\" is the number of deaths from violence divided by the total number of deaths; the \"violent death rate\" is the number of deaths from violence divided by the population. ↩\n I've generally seen New Guinea and the American Northwest identified with sedentary populations. From Lifeways: \"We have less ethnographic information on nonegalitarian than on egalitarian foraging societies; these include Florida’s Calusa (Widmer 1988; Marquardt 2001; although the Calusa probably cultivated some plants such as gourds and chili peppers); various California foragers, with the Chumash of southern California the most heavily studied (e.g., Bean 1978; Arnold 2001a,b; 2004; Kennett 2005; Gamble 2008); the Northwest Coast (Ames 1995; Ames and Maschner 1999; Grier 2006); the Plateau region of the northwestern United States (Hayden 1992; Prentiss and Kuijt 2004); some New Guinean peoples (Roscoe 2006); and Japan’s Ainu (Watanabe 1968, 1972a,b).\"  ↩\n When comparing the violent death shares for verified nomadic hunter-gatherers to the full set of Our World in Data figures: the former has a median violent death share of 7.2%, compared to about 15% for Our World in Data.)\nTable 2.1 of Keeley 1996 for example, estimates frequency of war by level of organization; it has \"band\" (the smallest group) the lowest. ↩\n The Tiwi figure cited above comes from the following quote in this book: \"In one decade (1893-1903), at least sixteen males in the 25-10-45 age group were killed in feuding; either during sneak attacks or in arranged pitch battles. Those killed represented over 10 per cent of all males in that age category, which was the age group of the young fathers.\" No indication is given of how this was converted to a \"violent deaths per 100k people per year\" rate, and I'm also concerned that this might be someone noting a particularly violent (rather than representative) decade (although it might also understate violence if it is focused specifically on feuding). This has increased my interest in a more thorough examination of where all these figures are coming from. I haven't yet been able to track down a primary source for either of the two other figures cited above, as both come from out-of-print books. ↩\n For the archaeological studies specifically - which I am not focusing on for reasons noted above - it looks like they are generally looking for the sorts of injuries that would've been inflicted by other humans, such as parry fractures. ↩\nThere are of course some countries today with higher violence rates than are in the table above. But only one (Syria) has a rate higher than the Tiwi, and only about 20% of the listed countries have rates higher than the Ju/'hoansi - and this is using the more inclusive violence definitions that include e.g. road injuries.\nThe 20th century had much bigger, bloodier wars than are going on today, and those sorts of events can significantly increase total deaths from violence: I estimated that 20th-century atrocities (including e.g. estimated deaths from human-caused famines), spread out over the whole 20th century, accounted for about 81 violent deaths per 100k people per year. (The 20th century was unusually bad in this regard.) Even adding that figure in, though, would leave the modern world looking considerably less violent than 2 of the 3 nomadic foraging societies. ↩\n This is when the Neolithic Revolution caused a transition to agricultural - and therefore sedentary - societies. Though it's possible that there was a transition to sedentary societies before the transition to agricultural societies.\n I assume that most people were living in agricultural societies not too long after ~10,000 BCE, because of my impression (which I don't have a clean citation for) that the agricultural societies experienced much faster population growth than other societies.  ↩\n And as I stated above, I don't think we actually have a clean comparison between early non-state societies and early state societies - we only have early non-state societies vs. state societies after 1600 or so. ↩\n In theory, we can look at archaeological remains as well, though I am generally skeptical of this sort of evidence (more in appendix), and I haven't been able to find systematic examinations of this data for the \"in-between period.\" ↩\n Though with the exception of Maryland and Virginia before 1650, they do look lower than the figures that include both nomadic and sedentary societies, and probably would be even if we added deaths from battles, accidents and executions. ↩\n", "url": "https://www.cold-takes.com/unraveling-the-evidence-about-violence-among-very-early-humans/", "title": "Unraveling the evidence about violence among very early humans", "source": "cold.takes", "source_type": "blog", "date_published": "2021-11-02", "id": "cc972842fe93d8e9dc8eee32dae64ef6"} -{"text": "\nThese are longreads that I recommend because they're just good yarns. There's usually not a broader lesson. I don't read tons of these so you probably have better ones (send them in!) but these are good.\nA few in the \"insane stories about long cons\" genre:\nThis is my favorite thing I've read in the New Yorker. I won't spoil it (beyond the category above).\nEmma Perrier was deceived by an older man on the internet—a hoax that turned into an unbelievable love story. I spent the entire article wondering whether she was going to end up with the man who had romanced her with a fake photo and dialogue, or ... with the random model whose photo it was! And it was the latter! I guess this is just the exact plot of Cyrano de Bergerac.\n‘I was a teacher for 17 years, but I couldn’t read or write’\nPretty incredible political murder mystery, If you want the quick version check out the first paragraph of this bonkers Wikipedia page (but it will \"spoil\" the article). I think the article doesn't add much except the pleasure of consuming a crazy story slowly. Which is something.\nGreat yarn from Reddit that I won't spoil (might not be true, but good either way).\n2-hour video on a former Christian explaining his conversion to atheism. I recommend converting it to an mp3 and listening to it like a podcast, since the visuals add ~nothing. I found this really fascinating and moving, just a very honest and thorough account of how someone changed their mind.\nSubscribe Feedback\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/cold-links-nonfiction-yarns-2/", "title": "Cold Links: nonfiction yarns", "source": "cold.takes", "source_type": "blog", "date_published": "2021-10-29", "id": "9993142e7e30d62a2e2237fb2bddeedb"} -{"text": "\nI thought it would be good to write a couple of posts covering what I see as the weakest points in the “most important century” series, now that I’ve gotten some reactions and criticisms.\nI currently think the weakest point in the series runs something like this:\nIt’s true that if AI could literally automate everything needed to cause scientific and technological advancement, the consequences outlined in the series (a dramatic acceleration in scientific and technological advancement, leading to a radically unfamiliar future) would follow.\nBut what if AI could only automate 99% of what’s needed for scientific and technological advancement? What if AI systems could propose experiments but not run them? What if they could propose experiments and run them, but not get regulatory clearance for them? In this case, it’s plausible that the 1% of things AIs couldn’t do quickly and automatically would “bottleneck” progress, leading to dramatically less growth.\nThe series cites expert opinion on when transformative AI will be developed. Technically speaking, the type of situation that the respondents are forecasting - “unaided machines can accomplish every task better and more cheaply than human workers\" - should be enough for a productivity explosion. But the people surveyed might be thinking of a slightly less powerful type of AI than is literally implied by that statement - which could lead to dramatically smaller impacts. Or they could be imagining that even AIs with intellectual capability to match humans still might lack the in-practice ability to do key tasks because (for example) they aren’t instinctively trusted by humans. Either way, they (the survey respondents) could be imagining something almost as capable - but not nearly as impactful - as the type of AI I discuss.\nFurthermore, even if AIs could do everything that humans do to automate scientific and technological advancement, their scientific and technological progress might have to wait on the results of real-world experiments, which could slow them down a lot.\nIn brief: a small gap in what AI can automate could lead to a lot less impact than the series implies. Automating “almost everything” could be very different from automating everything.\nThis is important context for the attempts to forecast transformative AI: they are really forecasting something pretty extreme.\nMy response\nI think all of the above is about right as stated: we would indeed need extreme levels of automation to produce the consequences I envision. (There could be a few tasks that need to be done by humans, but they’d have to be quite a small and limited set in order to avoid slowing things down a lot via bottleneck.)\nIt’s also true that I haven’t spelled out how such extreme automation could be achieved - how each activity needed to advance scientific and technological advancement (including running experiments and waiting for them to finish) could be done in a quick and/or automated way, without human or other bottlenecks slowing things down much.\nWith that acknowledged, it’s also worth noting that the extreme levels of automation need not apply to the whole economy: extreme automation for a relatively small set of activities could be sufficient to reach the conclusions in the series.\nFor example, it might be sufficient for AI systems to develop increasingly efficient (a) computers; (b) solar panels (for energy); (c) mining and manufacturing robots; (d) space probes (to build more computers in space, where energy and metal are abundant). That could be sufficient (via feedback loop) for explosive growth in available energy, materials and computing power, and there are many ways that such growth could be transformative.\nFor example and in particular, it could lead to:\nMisaligned AI with access to dangerous amounts of materials and energy.\nDigital people, if AI systems also had some way of (a) “virtualizing” neuroscience (via virtual experiments or simply dramatically increasing the rate of learning from real-world experiments); or (b) otherwise having insight about how to create something we would properly regard as “digital descendants.”\nBottom line\nI don’t think I’ve thoroughly (or, for readers with strong initial skepticism on this point, convincingly) demonstrated that advanced AI could cause explosive acceleration in scientific and technological advancement, without hitting human-dependent or other “bottlenecks.” I think I have given a good sense of the intuition for why they could, but this is certainly a topic that I haven’t poked as hard as I could; I hope and expect that someone will eventually.\nI do think such poking will ultimately support the picture I’ve given in the “most important century” series. This is partly based on the reasoning above: the relatively limited scope of what would need to be fully automated in order to support my broad conclusions. It's also partly based a similar reasoning process to what I’ve used in the past to guess at some key conclusions before we’d done all the homework: engaging in a lot of conversations and forming views on how informed different parties are and how much sense they’re making. But I acknowledge that this is not as satisfying or reliable as it would be if I gave a highly detailed description of what precise activities can be automated.\nSubscribe Feedback\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/weak-point-in-most-important-century-full-automation/", "title": "Weak point in “most important century”: full automation", "source": "cold.takes", "source_type": "blog", "date_published": "2021-10-28", "id": "09bc0d17f5bc6a5284afdde31ea76efb"} -{"text": "For the last 200 years or so, life has been getting better for the average human in the world.\nWhat about for the 300,000+ years before that?\nIn order to answer this, one of the hardest things we need to do is get some sense of pre-agriculture quality of life. \nAgriculture is estimated to have started drastically changing lifestyles, and leading to what we tend to think of as \"civilization,\" around 10,000 BCE. Agriculture roughly means living off of domesticated plants and livestock, allowing a large population to live in one area indefinitely rather than needing to move as it runs low on resources.\nSo most years of human history were pre-agriculture (and thus pre-\"civilization\").\nThe terms \"hunter-gatherer\" and \"forager\" are commonly used to refer to societies that came before (or simply never took up) agriculture.\nThis appears to be a topic where there is a lot of room for controversy and confusion. \nMany people seem to endorse a \"pre-agriculture Eden\" hypothesis: that the pre-agriculture world was a sort of paradise, or at least better than life in rich countries today. There are logical reasons that this might be the case; below, I'll lay some of those out and give some quotes from Wikipedia that convey the \"pre-agriculture Eden\" vibe.\nBut there's also a case to be made that the world before agriculture was a world of starvation, disease and violence - that the human story is one of continuous, consistent beneficial progress, and that the pre-agriculture world was the lowest (because the earliest) point on it.\nMy tentative position is that neither of these is quite right. I think the pre-agriculture world was noticeably worse than today's world (at least in developed countries), but probably some amount better than the world that immediately followed agriculture.\nThis post will focus on the former: the comparison between the pre-agriculture world and today's world, or whether the \"pre-agriculture Eden\" hypothesis is right. I'll argue that today's best evidence suggests that today's developed world has significantly better quality of life than the pre-agriculture world. By doing so, I'll also lay the groundwork for a future post about what happened to quality of life in between the pre-agriculture world and today.\nThis image illustrates how this post fits into the full \"Has Life Gotten Better?\" series.\nBelow, I'll:\nGive more detail on the basic pre-agriculture Eden hypothesis. \nGo through each of the dimensions on which I tried to compare pre-agriculture and current quality of life. These are summarized by the table below, which uses the same structure as for my previous post:\nProperty\nPre-agriculture vs. today's developed world\nPoverty\nMostly assessed via hunger and health - see below.\n \nHunger\nPre-agriculture height looks very low by today's standards, suggesting malnutrition. \n \nHealth (physical)\nPre-agriculture infant and child mortality look extremely high by today's standards (20%+ before age 1, 35%+ before age 10 (today's high-income countries are under 1% before age 10). Post-childhood life expectancy also looks a lot worse than today's.\n \nViolence\nPre-agriculture deaths from violence look more common, compared to today's developed world.\n \nMental health\nUnknown - while there are some claims floating around about strong mental health for hunter-gatherers/foragers, I haven't seen anything that looks like solid evidence about this, and similar claims re: gender relations don't hold up to scrutiny.\n \nSubstance abuse and addiction\nPresumably not an issue pre-agriculture (though this isn't 100% clear).\n \nDiscrimination \nHard to compare, but pre-agriculture gender relations seem bad.\n \nTreatment of children\nUnknown\n \nTime usage\nUnknown/disputed\n \nSelf-assessed well-being\nUnknown\n \nEducation and literacy\nLiteracy would be higher in today's world (though it isn't clear whether this matters for quality of life)\n \nFriendship and community\nUnknown\n \nFreedom\nUnknown\n \nRomantic relationship quality\nUnknown\n \nJob satisfaction\nUnknown\n \nMeaning and fulfillment\nUnknown\n \nThe pre-agriculture Eden hypothesis\nWikipedia's entry for \"hunter-gatherer\" (quote in footnote1) gives a \"pre-agriculture Eden\" vibe, and specifically claims that:\nHunter-gatherers are not \"poor,\" or at least, they \"are mostly well-fed, rather than starving,\" and have more leisure time than most people today.\nHunter-gatherers \"tend to have an egalitarian social ethos,\" without permanent leaders (\"hunter-gatherers do not have permanent leaders; instead, the person taking the initiative at any one time depends on the task being performed\").\n(Covered previously) Hunter-gatherers have egalitarian gender relations specifically, \"with women roughly as influential and powerful as men.\"\nIn addition:\nI've seen it claimed that \"coronary heart disease, obesity, [and a number of other diseases] ... are rare or virtually absent in hunter–gatherers and other non-westernized populations.\" This is usually given as an argument that hunter-gatherers have excellent diets that we should emulate.\nI've seen more occasional (and as far as I can tell, very thinly cited) claims that \"Hunter-gatherers seem to possess exceptional mental health\" / \"depression is a 'disease of modernity.'\"\nWhy might all of this be? The basic idea would be that:\nFor most of human history, humans lived in small, \"nomadic\" bands (more on this in a future post): constantly moving from one location to another, since any given location had limited food supply. People who did well in this setting reproduced and people who did poorly did not, so we (the descendants of many people who did well) are well adapted to this lifestyle.\nBut about 10,000 years ago, much of the world transitioned to agriculture, which meant that instead of moving from place to place, we were able to consistently produce large amounts of food by staying put. This led to an explosion in population, and a division of labor: farmers could produce enough food for everyone, while other people specialized in other things such as religion, politics, and war.\n10,000 years isn't a ton of time from the standpoint of natural selection, so we're still adapted to the original environment, and we're \"out of place\" in a more modern lifestyle.\nTo put some of my cards on the table early, I think this reasoning could be right when it comes to some problems in the modern world, but I don't tend to believe it strongly by default.\nI don't think that \"adapting to\" an environment should be associated with \"thriving\" in it - especially not if \"thriving\" is supposed to include things like egalitarianism. In my view, \"adapting to\" an environment simply means becoming good at competing with others to reproduce in that environment - you could be fully \"adapted\" to your environment and still frequently be hungry, diseased, violent, hierarchical, sexist, and many other nasty things that we regularly see from animals in their natural environments.\nAdditionally, there are many diverse lifestyles in the modern world. So any problem that seems to exist ~everywhere in modern civilization seems to me like it's most likely (by default) to be \"a risk of being human.\" \nThat said, I don't think either of these points is absolute. There are some ways in which nearly all modern societies differ from forager/hunter-gatherer societies, and some of these might be causing novel problems that didn't exist in our ancestral environment. So I consider the \"pre-agriculture Eden\" hypothesis plausible enough to be interesting and important, and I'd like to know whether the facts support it.\nEvidence on different dimensions of quality of life\nBelow, I'll go through the best evidence I've found on the dimensions of quality of life from the table above.\nFor more complex topics, I mostly rely on previous more detailed posts I've made. Otherwise, I tend to rely by default on The Lifeways of Hunter-Gatherers (which I abbreviate as \"Lifeways\"), for reasons outlined here.\nGender relations\nI discussed pre-agriculture gender relations at some length in a previous post. In brief:\nAccording to the best/most systematic available evidence (from observing modern non-agricultural societies), pre-agriculture gender relations seem bad. For example, most societies seem to have no possibility for female leaders, and limited or no female voice in intra-band affairs.\nThere are a lot of claims to the contrary floating around, but (IMO) without good evidence. For example, the Wikipedia entry for \"hunter-gatherer\" gives the strong impression that nonagricultural societies have strong gender equality, as does a Google search for \"hunter-gatherer gender relations.\" But the sources cited seem very thin and often only tangentially related to the claims; furthermore, they often seem to acknowledge significant inequality, while seemingly trying to explain it away with strange statements like \"women know how to deal with physical aggression, unlike their Western counterparts.\" (Verbatim quote.)\nI think it's somewhat common to find rosy pictures of pre-agriculture society, with thin and even contradictory citations. I think this is worth keeping in mind for the below sections (where I won't go into as much depth as I did for gender relations).\nViolence\nPre-agriculture violence seems to be a hotly debated topic among anthropologists and archaeologists; the debates can get quite intricate and confusing, and I've spent more time than I hoped to trying to understand both sides and where they disagree. \nMy take as of now is that overall pre-agriculture violence was likely quite high by the standards of today's developed countries. \nThis was complex enough that I devoted a separate post entirely to my research and reasoning on this point. Here's the summary on \"nomadic forager\" societies (which are thought to be our best clue at what life was like in the very distant past) vs. today's world:\nSociety\nViolent deaths per 100,000 people per year\nMurngin (nomadic foragers)\n \n330\n \nTiwi (nomadic foragers)\n \n160\n \n!Kung (aka Ju/'hoansi) (nomadic foragers)\n \n42\n \nWorld (today) - high estimate\n \n35.4\n \nUSA (today) - high estimate\n \n35.2\n \nWorld (today) - low estimate\n \n7.4\n \nWestern Europe (today) - high estimate\n \n6.6\n \n USA (today) - low estimate\n \n6.2\n \nWestern Europe (today) - low estimate\n \n0.3\n \nHunger\nI'm going to examine both hunger and health, since both seem among the easiest ways to get at the question of whether pre-agriculture society had meaningfully higher \"poverty\" than today's in some sense.\nThe most relevant-seeming part of Lifeways is Table 3-5, which gives information on height, weight, and calorie consumption for 8 forager societies. My main observation (and see footnote for some other notes2) is that the height figures are strikingly low: 6 of the 7 listed averages for males are under 5'3\", and 6 of the 7 listed averages for females are under 5'0\". (Compare to 5'9\" for US males and 5'3.5\" for US females.)\nThe height figure seems important because height is often used as an indicator for early-childhood nutritional status,3 and seems to quite reliably increase with wealth at the aggregate societal level (see Our World in Data's page on height). Height seems particularly helpful here because it is relatively easy to measure in a culture-agnostic way and can even be estimated from archaeological remains.\nWhat I've been able to find of other evidence (including archaeological evidence) about height suggests that the pre-agriculture period had average heights a bit taller than the figures above, but still short by modern standards, though this evidence seems quite limited (details in footnote).4\nMy bottom line is: the evidence suggests that pre-agriculture people had noticeably shorter heights than modern people, which suggests to me that their early-childhood nutrition was worse. \nAs for Wikipedia's claim that \"Contrary to common misconception, hunter-gatherers are mostly well-fed,\" those who have read my previous piece on Wikipedia and hunter-gatherers might be able to guess what's coming next. \nThe citation for that statement appears to be an entire textbook (no page number given), which I found a copy of here (the link unfortunately seems to have broken since then).\nThe vast majority of the textbook doesn't seem to be relevant to this topic at all. \nFrom skimming the table of contents, my best guess at the part being cited is on page 328: \"The notion that hunters and gatherers live on the brink of starvation is a popular misconception; numerous studies have shown that hunters and gatherers are generally well nourished.\" No citations are given.\nHealth\nIt seems to me that the best proxy for health, in terms of having very-long-run data, is early-in-life mortality (before age 1, before age 5, before age 15). I've found a number of collections of data on this, and nothing else detailed regarding health for prehistoric or foraging populations (other than one analysis that looks at full life expectancy; I will discuss this later on).\nTable 7-7 in Lifeways lists a number of figures for deaths before ages 1 and 15, based on modern foraging societies. Taking a crude average yields 20% mortality before the age of 1, 35% before the age of 15.\nOther sources I've consulted (including archaeological sources) give an even grimmer picture, in some cases 50%+ mortality before the age of 15 (details in footnote).5\nThese are enormous early-in-life mortality rates compared to the modern world, where no country has a before-age-15 mortality rate over 15%, and high-income countries appear to be universally below 1% (source).\nWhat about life expectancy after reaching age 10?\nWhat I've found also suggests that pre-agriculture life expectancy was lower than today's at other ages, too - it isn't just a matter of early-in-life mortality.\nGurven and Kaplan 2007 (the only paper I've found that estimates pre-agriculture life expectancy, as opposed to early-in-life mortality) observes that its modeled life-expectancy-by-age curves are similar for modern foraging societies and 1751-1759 Sweden (Figure 3):\n(Note also how much worse the estimate of prehistoric life expectancy looks at every age, although Gurven and Kaplan question this data.6)\nAs noted at Our World in Data, it appears that life expectancy conditional on surviving to age 10 has improved greatly in Sweden and other countries since ~1800 (before which point it appears to have been pretty flat).\nAlso see these charts, showing life expectancy at every age improving significantly in England and Wales since ~1800.\nSee footnote for one more data source with a similar bottom line.7\nBottom line: life expectancy looks to have been a lot worse pre-agriculture than today. I don't think violent deaths account for enough death (see previous section) to play a big role in this; disease and other health factors seem most likely.\nWhat about diseases of affluence?\nFrom Wikipedia:\nDiseases of affluence ... is a term sometimes given to selected diseases and other health conditions which are commonly thought to be a result of increasing wealth in a society ... Examples of diseases of affluence include mostly chronic non-communicable diseases (NCDs) and other physical health conditions for which personal lifestyles and societal conditions associated with economic development are believed to be an important risk factor — such as type 2 diabetes, asthma, coronary heart disease, cerebrovascular disease, peripheral vascular disease, obesity, hypertension, cancer, alcoholism, gout, and some types of allergy.\nI think it's plausible that the pre-agriculture world had less of these \"diseases of affluence\" than the modern world (especially obesity and conditions connected to obesity, due to the seemingly much greater access to food).\nI don't think it's slam-dunk clear for some of these, such as cancer and heart disease. I've dug into primary sources a little bit, and not-too-surprisingly, data quality and rigor seems to often be low. In particular, I quite distrust claims like \"Someone spent __ years in ___ society and observed no cases of ____.\" Modern foraging societies seem to be quite small, and diagnosis could be far from straightforward.\nI haven't dug in heavily on this (though I may in the future), because:\nMy initial scans have made it look like it would be a lot of work to follow often-circuitous trails of references to often-hard-to-find sources.\nEven if it did turn out that \"diseases of affluence\" were extremely rare pre-agriculture, this wouldn't tip me into thinking health was better overall, pre-agriculture. When wondering whether undernutrition and \"diseases of poverty\" are worse than obesity and \"diseases of affluence,\" I think a good default is to prefer the condition with less premature death.\nMental health and wellbeing\nI haven't found anything that looks like systematic data on pre-agriculture mental health or subjective wellbeing. There are some suggestive Google results, but as in other cases, these don't seem well-cited. For example, as of this writing, Google's \"answer box\" reads:\nBut Thomas 2006 is this not-very-systematic-looking source.\nI won't go into this topic more, because having gone through the above topics, I don't find the basic plausibility of \"reliable data shows better-than-modern mental health among foraging societies\" high enough to be worth a deep dive.\nUpdate: a commenter linked to a study reporting high happiness for one hunter-gatherer society (the Hadza). My thoughts here.\nLeisure and equality\nI haven't gone into depth on claims that pre-agriculture societies had more leisure, and lower inequality, compared to today's.\nReasons for this:\nThe claims seem disputed. For example, here are excerpts on both topics from the first chapter of Lifeways:\nHow much do hunter-gatherers work, and why? Reexaminations of Ju/’hoansi and Australian work effort do not support Sahlins’s claim [of very low work hours] . Kristen Hawkes and James O’Connell (1981) found a major discrepancy between the Paraguayan Ache’s nearly seventy-hour work week and the Ju/’hoansi’s reportedly twelve- to nineteen-hour week. The discrepancy, they discovered, lay in Lee’s definition of work. Lee counted as work only the time spent in the bush searching for and procuring food, not the labor needed to process food resources in camp. Add in the time it takes to manufacture and maintain tools, carry water, care for children, process nuts and game, gather firewood, and clean habitations, and the Ju/’hoansi work well over a forty-hour week (Lee 1984; Isaac 1990; Kaplan 2000). In addition, one of Sahlins’s Australian datasets was generated from a foraging experiment of only a few days’ duration, performed by nine adults with no dependents. There was little incentive for these adults to forage much (and apparently they were none too keen on participating – see Altman [1984, 1987]; Bird-David [1992b]) ...\nOthers have found that the alleged egalitarian relations of hunter-gatherers are pervaded by inequality, if only between the young and the old and between men and women (Woodburn 1980; Hayden et al. 1986; Leacock 1978; see Chapters 8 and 9). Food is not shared equally, and women may eat less meat than do men (Speth 1990, 2010; Walker and Hewlett 1990). Archaeologists find more and more evidence of nonegalitarian hunter-gatherers in a variety of different environments (Price and Brown 1985b; Arnold 1996a; Ames 2001), most of whom lived under high population densities and stored food on a large scale. Put simply, we cannot equate foraging with egalitarianism.\nI'm skeptical that anthropologists can get highly reliable reads on the degree to which foraging societies have high leisure, or low inequality, in a deep sense. I imagine that if an anthropologist (from e.g. another planet) visited modern society, they might conclude that we have high leisure, or low inequality, based on things like:\nHaving a tough time disentangling \"work\" from \"leisure.\" For example, a lot of modern jobs are office jobs, and a lot of on-the-job hours are spent doing what might look like pleasant socializing. (Similarly, foragers \"socializing\" may be internally conceiving this as necessary work rather than fun - it seems like it could be quite hard to draw this line.)\nBeing confused by social norms encouraging people to downplay real inequalities. For example, I've seen a fair number of references to the fact that people in foraging societies will sometimes mock a successful hunter to \"cut them down to size\" and enforce equality. But if this were strong evidence of low inequality, I think we'd have similar evidence from modern society from things like the Law of Jante; \"humblebragging\"; the fact that many powerful, wealthy people in modern times tend to dress simply and signal \"authenticity\"; etc.\nEven if I were convinced that pre-agriculture societies had large amounts of leisure and low amounts of inequality, this wouldn't move me much toward believing they were an \"Eden,\" given above observations about violence, hunger and health. It would be one thing if foragers were healthy, well-fed, well-resourced, and lived in conditions of high leisure and low inequality. But high leisure and low inequality seem much less appealing in the context of what looks to me best described as \"poverty\" with respect to health and nutrition.\nHaving vetted other \"Eden\"-like claims about pre-agriculture societies, I've developed a prior that these claims are likely to be both time-consuming to investigate and greatly exaggerated. See previous sections.\nWith all of that said, as I'll discuss in future posts, I do think there are signs that at least some foraging societies were noticeably more egalitarian than the societies that came after them - just not more so than today's developed world.\nNext in series: Did life get better during the pre-industrial era? (Ehhhh)\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n \"Contrary to common misconception, hunter-gatherers are mostly well-fed, rather than starving ... \n \"Hunter-gatherers tend to have an egalitarian social ethos, although settled hunter-gatherers (for example, those inhabiting the Northwest Coast of North America) are an exception to this rule. Nearly all African hunter-gatherers are egalitarian, with women roughly as influential and powerful as men. For example, the San people or 'Bushmen' of southern Africa have social customs that strongly discourage hoarding and displays of authority, and encourage economic equality via sharing of food and material goods ...\n \"Anthropologists maintain that hunter-gatherers do not have permanent leaders; instead, the person taking the initiative at any one time depends on the task being performed. In addition to social and economic equality in hunter-gatherer societies, there is often, though not always, sexual parity as well ... \n \"At the 1966 'Man the Hunter' conference, anthropologists Richard Borshay Lee and Irven DeVore suggested that egalitarianism was one of several central characteristics of nomadic hunting and gathering societies because mobility requires minimization of material possessions throughout a population. Therefore, no surplus of resources can be accumulated by any single member ...\n \"At the same conference, Marshall Sahlins presented a paper entitled, 'Notes on the Original Affluent Society' ... According to Sahlins, ethnographic data indicated that hunter-gatherers worked far fewer hours and enjoyed more leisure than typical members of industrial society, and they still ate well. Their 'affluence' came from the idea that they were satisfied with very little in the material sense. Later, in 1996, Ross Sackett performed two distinct meta-analyses to empirically test Sahlin's view. The first of these studies looked at 102 time-allocation studies, and the second one analyzed 207 energy-expenditure studies. Sackett found that adults in foraging and horticultural societies work, on average, about 6.5 hours a day, whereas people in agricultural and industrial societies work on average 8.8 hours a day. ↩\n \n I've spot-checked the primary sources a bit: I looked up the first three rows and the last row, and found the methodology sections. I confirmed that these are all adults. I didn't check the others. I put the table in Google Sheets and added some of my own derived figures here. In addition to the observations about height, I note:\nI don't think we can make much of raw calorie consumption estimates, since more active lifestyles could require more calories.\nThe BMI figures would qualify as \"underweight\" for Ju/'hoansi and Anbarra females; others seem to be generally on the low side of the normal range. (These are averages, and could be consistent with having a significant percentage of individuals outside the normal range. \nNote that calculating a BMI from average height and average weight is not the same as looking at average BMI. But I played with some numbers here and it seems unlikely to be a big difference. I'd guess my BMI calculations would lead to slight overstatement of average BMI (and a slightly larger overstatement if well-fed people were both taller and higher-BMI). ↩\n For example, see here, here (Our World in Data), here. ↩\n Our World In Data cites a figure for the Mesolithic era (shortly before the dawn of agriculture) of 1.68m, or about 5'6\". This figure comes from A Farewell to Alms, which in turn cites a study I was unable to find anywhere. It isn't explicitly stated which sexes the height figure refers to, but this chart implies to me that Our World in Data (at least) is interpreting it as a quite low height by modern standards.\n Searching for recent papers on height estimates from archaeological remains, I found a 2019 paper claiming that \"The earliest anatomically modern humans in Europe, present by 42-45,000 BP (5, 6), were relatively tall (mean adult male height in the Early Upper Paleolithic was ∼174 cm [about 5'8.5\"]). Mean male stature then declined from the Paleolithic to the Mesolithic (∼164 cm [about 5'4.5\"]) before increasing to ∼167 cm [about 5'6\"] by the Bronze Age (4, 7). Subsequent changes, including the 20th century secular trend increased height to ∼170-180 cm [about 5'7\" to 5'11\"] (1, 4).\" Its main two sources are this paper, noting a relative scarcity of data and trying to fill in gaps using mathematical analysis, and this book that looks interesting but costs $121. I haven't found many other recent analyses of this topic (and nothing contradicting these claims in any case).\n Table 3.9 of A Farewell to Alms also collects height data on modern foraging societies, and the median figure in the table is about 1.65m, or about 5'5\". ↩\n Table 1 in Volk and Atkinson 2012 (the source used by Our World in Data) reports a mean of 26.8% mortality before the age of 1 (\"infant mortality rate\") and 48.8% mortality before the age of 15 (\"child mortality rate\"), again based on modern foraging societies.\n The only sources I've been able to find pulling together estimates from archaeological data are:\nTrinkaus 1995, on Neanderthals. In Table 4 (based on a small number of sources), it gives an average of 40.5% mortality before age 1, 13.2% mortality between ages 1-5, 6.6% mortality between ages 5-10, and 7.9% mortality between ages 10-20 (which would cumulatively imply 68.2% mortality before the age of 15).\nTrinkaus 2011, which does not give mortality estimates but argues for a conclusion of \"low life expectancy and demographic instability across these Late Pleistocene human groups ... [the data] provide no support for a life history advantage among early modern humans.\" ↩\n \"Estimated mortality rates then increase dramatically for prehistoric populations, so that by age 45 they are over seven times greater than those for traditional foragers, even worse than the ratio of captive chimpanzees to foragers. Because these prehistoric populations cannot be very different genetically from the populations surveyed here, there must be systematic biases in the samples and/or in the estimation procedures at older ages where presumably endogenous senescence should dominate as primary cause of death. While excessive warfare could explain the shape of one or more of these typical prehistoric forager mortality profiles, it is improbable that these profiles represent the long-term prehistoric forager mortality profile. Such rapid mortality increase late in life would have severe consequences for our human life history evolution, particularly for senescence in humans.\" ↩\nTrinkaus 1995 has a more detailed breakdown of mortality rates by age range for both a few modern forager societies (Table 3) and for archaeological remains (Table 4): \n \n Here \"Neonate\" means \"before 1 year,\" \"Child\" Is age 1-5, \"Juvenile\" is age 5-10, \"Adolescent\" is age 10-20, \"Young adult\" Is age 20-40, and \"Old adult\" is age 40+. These numbers seem extremely high for all ages under 40 by today's standards; for a comparison point see this detailed life expectancy table for the US, which implies (based on calculations I'm not showing here but that are straightforward to do) \"Juvenile\" mortality of 0.06%, \"Adolescent\" mortality of 0.6%, and \"Young adult\" mortality of 3.7%. (All figures are for males; female figures would be lower still.) ↩\n", "url": "https://www.cold-takes.com/was-life-better-in-hunter-gatherer-times/", "title": "Was life better in hunter-gatherer times?", "source": "cold.takes", "source_type": "blog", "date_published": "2021-10-26", "id": "089794933d55d49a9737f506067a9551"} -{"text": "\nLet’s say you’re interested in a 500-page serious nonfiction book, and you’re trying to decide whether to read it. I think most people imagine their choice something like this:\nOption\nTime cost\n% that I understand and retain\nJust read the title\n \nSeconds\n \n 1%\n \nSkim the book\n \n3 hours\n \n 33%\n \nRead the book quickly\n \n8 hours\n \n 67%\n \nRead the book slowly\n \n16 hours\n \n 90%\nI see things more like this:\nOption\nTime cost\n% that I understand and retain\nJust read the title (and the 1-2 sentences people usually say to introduce the book)\n \nSeconds\n \n 10%\n \nSkim the book\n \n3 hours\n \n 12%\n \nRead the book quickly\n \n8 hours\n \n 13%\n \nRead the book slowly\n \n16 hours\n \n 15%\n \nRead reviews/discussions of the book (ideally including author replies), but not the book\n \n2 hours\n \n 25%\n \nRead the book slowly 3 times, with 3 years in between each time\n \n48 hours\n \n 33%\n \nRead reviews/discussions of the book; locate the parts of the book they’re referencing, and read those parts carefully, independently checking footnotes, and referring back to other parts of the book for any unfamiliar terms. Write down who I think is being more fair; lay out the exact quotes that give the best evidence that my judgment is right. (But never read the whole book)\n \n16 hours\n \n 33%\n \nWrite my own summary of each of the book’s key points, what the best counterargument is, where I ultimately come down and why. (Will often involve reading key parts of the book 5-10 times)\n \n50-100 hours\n \n 50%\nI’m guessing these numbers are pretty weird-seeming, so here are some explanations:\nJust read the title (and the 1-2 sentences people usually say to introduce the book): \"seconds\" of time investment, 10% understanding/retention. 10% probably sounds like a lot for a few seconds of thought! I think this works because the author has really sweated over how to make the title and elevator pitch capture as much as possible of what they’re saying. So if all I want is the \"general gist,\" I don't think I need to read the book at all.\n Skimming or reading the book: hours of time investment, only 12-15% understanding/retention. This is based on my own sense of how much I retain when I \"simply read\" the book (and don't engage much with critiques of it, don't write about it, etc.) - and my perhaps unfair impressions of how much others seem to retain when they do this. If person A says they've read a book and person B says they haven't but they've heard people talking about it, I often don't find that person A seems to know any more about the book than person B.\n Read reviews/discussions of the book (ideally including author replies), but not the book: 2 hours of time investment, 25% understanding/retention. Good reviewers know the context/field for the book better than I do, and probably read the book more carefully than I did. Hopefully they picked out the really key good and bad parts, and if those are the only parts I retain, that’s probably more than I could hope for with just a slow reading.\n Read the book slowly 3 times, with 3 years in between each time: 48 hours of time investment, 33% understanding/retention. This implies that the 2nd and 3rd readings are actually more educational than the 1st: the first only gets me from 10% (which I got from reading the title) to 15%, the next two bring me to 33%. I think that’s right - it’s hard to notice the important parts before I have the whole arc of the argument and have sat with it. Hearing other people talk about it and seeing some random observations related to it also help.\nWrite my own thorough review of a particular debate between the book's critics and its author: 16 hours of time investment, 33% understanding/retention. (The table has more detail on what this involves.) This is the same time investment as reading the book slowly, and I'm saying that is worth something like 5x as much (since once I've read the title, reading the book slowly only takes me from 10% to 15% understanding/comprehension, whereas this activity takes me from 10% to 33% understanding/comprehension).\nWrite my own summary of each of the book’s key points, what the best counterargument is, where I ultimately come down and why: 50-100 hours of time investment, 50% understanding/comprehension. I know hugely more about the books I've done this with than the books I haven't. But even here I'm only estimating 50% understanding/comprehension. I don’t think it is really possible to understand more than 50% of a serious book without e.g. spending a lot of independent time in the field.\n \nTLDR - I think the value of reading a book once (without active engagement) is awkwardly small, and the value of big time investments like reading a book several times - or actively engaging with even part of it - is awkwardly large compared to that. \nAlso, the maximum amount of understanding you can get is awkwardly small.\nAnd a lot of the best options get you a “raw deal” on sounding educated:\nIf you read reviews and not the book, someone else can say they read the book and you can’t, even though you spent just as much time and retained more of the book.\nIf you digest the heck out of the book, you still can’t say anything in casual conversation except “I read the book,” which is also what someone can say who spent way less time and retained WAY less.\nUltimately, if you live in the headspace I’m laying out, you’re going to read a lot fewer books than you would otherwise, and you’ll probably be embarrassed of how few books you read. (But if more people described their engagement with a book in detail instead of using the binary “I read X,” maybe that would change.)\nEdited to add clarification: this piece is about trying to casually inform oneself in areas one isn't an expert in, via reading books (and often other pieces) directed at a general audience. A reader pointed out that when you have a lot of existing expertise, the situation looks quite different, and often skimming or reading is the best thing to do. (Although in this case I would add that one is probably mostly reading reports, academic papers, notes from colleagues, etc. rather than books).\nSubscribe Feedback\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/reading-books-vs-engaging-with-them/", "title": "Reading books vs. engaging with them", "source": "cold.takes", "source_type": "blog", "date_published": "2021-10-20", "id": "a7883183a998c6014bf9d0cdb3883ea2"} -{"text": "As part of exploring trends in quality of life over the very long run, I've been trying to understand how good life was during the \"pre-agriculture\" (or \"hunter-gatherer\"1) period of human history. We have little information about this period, but it lasted hundreds of thousands of years (or more), compared to a mere ~10,000 years post-agriculture. \n(For this post, it's not too important exactly what agriculture means. But it roughly refers to being able to domesticate plants and livestock, rather than living only off of existing resources in an area. Agriculture is what first allowed large populations to stay in one area indefinitely, and is generally believed to be crucial to the development of much of what we think of as \"civilization.\"2)\nThis image illustrates how this post fits into the full \"Has Life Gotten Better?\" series.\nThere are arguments floating around implying that the hunter-gatherer/pre-agriculture period was a sort of \"paradise\" in which humans lived in an egalitarian state of nature - and that agriculture was a pernicious technology that brought on more crowded, complex societies, which humans still haven't really \"adapted\" to. If that's true, it could mean that \"progress\" has left us worse off over the long run, even if trends over the last few hundred years have been positive.\nA future post will comprehensively examine this \"pre-agriculture paradise\" idea. For now, I just want to focus on one aspect: gender relations. This has been one of the more complex and confusing aspects to learn about. My current impression is that:\nThere's no easy way to determine \"what the literature says\" or \"what the experts think\" about pre-agriculture gender in/equality. That is, there's no source that comprehensively surveys the evidence or expert opinion. \nAccording to the best/most systematic evidence I could find (from observing modern non-agricultural societies), pre-agriculture gender relations seem bad. For example, most societies seem to have no possibility for female leaders, and limited or no female voice in intra-band affairs.\nThere are a lot of claims to the contrary floating around, but (IMO) without good evidence. For example, the Wikipedia entry for \"hunter-gatherer\" gives the strong impression that nonagricultural societies have strong gender equality, as does a Google search for \"hunter-gatherer gender relations.\" But the sources cited seem very thin and often only tangentially related to the claims; furthermore, they: \nOften use reasoning that seems like a huge stretch to me. For example, one paper appears to argue for strong gender equality among particular Neanderthals based entirely on the observation that they seemed not to eat the sorts of foods traditionally gathered by women. (The implication being that since women must have been doing something, they were probably hunting along with the men.)\n \nOften seem to acknowledge significant inequality, while seemingly trying to explain it away with strange statements like \"women know how to deal with physical aggression, unlike their Western counterparts.\" (Verbatim quote.)\n \nSeem to get disproportionate attention from very thin evidence, such as an analysis of 27 skeletal remains being featured in the New York Times and a National Geographic article that ranks 2nd in the Google results for \"hunter-gatherer gender relations.\"\nBased on the latter points, it seems that there are people trying hard to make the case for gender equality among hunter-gatherers, but not having much to back up this case. One reason for this might be a fear that if people think gender inequality is \"ancient\" or \"natural,\" they might conclude that it is also \"good\" and not to be changed. So for the avoidance of doubt: my general perspective is that the \"state of nature\" is bad compared to today's world. When I say that pre-agricultural societies probably had disappointingly low levels of gender equality, I'm not saying that this inequality is inevitable or \"something we should live with\" - just the opposite.\nSystematic evidence on pre-agriculture gender relations\nThe best source on pre-agriculture gender relations I've found is Hayden, Deal, Cannon and Casey (1986): \"Ecological Determinants of Women's Status Among Hunter/Gatherers.\" I discuss how I found it, and why I consider it the best source I've found, here.\nIt is a paper collecting ethnographic data: data from anthropologists' observations of the relatively few people who maintain or maintained a \"forager\"/\"hunter-gatherer\" (nonagricultural) lifestyle in modern times. It presents a table of 33 different societies, scored on 13 different properties such as whether a given society has \"female voice in domestic decisions\" and \"possibility of female leaders.\" \nHere's the key table from the paper, with some additional color-coding that I've added (spreadsheet here):\nI've used red shading for properties that imply male domination, and blue shading for properties that imply egalitarianism (all else equal).3 The red shades are deeper than the blue shades because I think the \"nonegalitarian\" properties are much more bad than the \"egalitarian\" properties are good (for example, I think \"possibility of female leaders\" being \"absent/taboo\" is extremely bad; I don't think \"female voice in domestic decisions\" being \"Considerable\" makes up for it).\nFrom this table, it seems that:\n25 of 33 societies appear to have no possibility for female leaders.\n19 of 33 societies appear to have limited or no female voice in intraband affairs.\nOf the 6 societies (<20%) where neither of these apply: \nThe Dogrib have \"female hunting taboos\" and \"belief in the inferiority of females to males.\"\n \nThe !Kung and Tasaday have \"female hunting taboos.\"\n \nThe Yokuts have \"ritual exclusion of females.\"\n \nThe Mara and Agta seem like the best candidates for egalitarian societies. (The Mara have \"limited\" \"female control of children,\" but it's not clear how to interpret this.)\nOverall, I would characterize this general picture as one of bad gender relations: it looks as though most of these societies have rules and/or norms that aggressively and categorically limit women's influence and activities.\nI got a similar picture from the chapter on gender relations from The Lifeways of Hunter-Gatherers (more here on why I emphasize this source).\nIt states: \"even the most egalitarian of foraging societies are not truly egalitarian because men, without the need to bear and breastfeed children, are in a better position than women to give away highly desired food and hence acquire prestige. The potential for status inequalities between men and women in foraging societies (see Chapter 9) is rooted in the division of labor.\" \nIt also argues that many practices sometimes taken as evidence of equality (such as matrilocality) are not.\nThe (AFAICT poorly cited and unconvincing) case that pre-agriculture gender relations were egalitarian\nI think there are a fair number of people and papers floating around that are aiming to give an impression that pre-agriculture gender relations were highly egalitarian.\nIn fact, when I started this investigation, I initially thought that gender equality was the consensus view, because both Google searches and Wikipedia content gave this impression. Not only do both emphasize gender equality among foragers/hunter-gatherers, but neither presents this as a two-sided debate.\nBelow, I'll go through what I found by following citations from (a) the Wikipedia \"hunter-gatherer\" page; (b) the front page from searching Google for \"hunter-gatherer gender relations.\" I'm not surprised that Google and Wikipedia are imperfect here, but I found it somewhat remarkable how consistently the \"initial impression\" given was of strong gender equality, and how consistently this impression was unsupported by sources. I think it gives a good feel for the broader phenomenon of \"unsupported claims about gender equality floating around.\"\nWikipedia's \"hunter-gatherer\" page\nThe Wikipedia entry for \"hunter-gatherer\" (archived version) gives the strong impression that nonagricultural societies have strong gender equality. Key quotes:\nNearly all African hunter-gatherers are egalitarian, with women roughly as influential and powerful as men.[22][23][24] ... In addition to social and economic equality in hunter-gatherer societies, there is often, though not always, relative gender equality as well.[30]\nThe citations given don't seem to support this statement. Details follow (I look at notes 22, 23, 24, and 30 - all of the notes from the above quote) - you can skip to the next section if you aren't interested in these details, but I found it somewhat striking and worth sharing just how bad the situation seems to be here.\nNote 22 refers to a chapter (\"Gender relations in hunter-gatherer societies\") in this book. I found it to have a combination of:\nVery broad claims about gender equality, which I consider less trustworthy than the sort of systematic, specifics-based analysis above. Key quote: \nVarious anthropologists who have done fieldwork with hunter-gatherers have described gender relations in at least some foraging societies as symmetrical, complementary, nonhierarchical, or egalitarian. Turnbull writes of the Mbuti: “A woman is in no way the social inferior of a man” (1965:271). Draper notes that “the !Kung society may be the least sexist of any we have experienced” (1975:77), and Lee describes the !Kung (now known as Ju/’hoansi) as “fiercely egalitarian” (1979:244). Estioko-Griffin and Griffin report: “Agta women are equal to men” (1981:140). Batek men and women are free to decide their own movements, activities, and relationships, and neither gender holds an economic, religious, or social advantage over the other (K. L. Endicott 1979, 1981, 1992, K. M. Endicott 1979). Gardner reports that Paliyans value individual autonomy and economic self-sufficiency, and “seem to carry egalitarianism, common to so many simple societies, to an extreme” (1972:405).\nOf the five societies named, two (Batek, Paliyan) are not included in the table above; two (!Kung, Agta) are among the most egalitarian according to the table above (although the !Kung are listed as having female hunting taboos); and one (Mbuti) is listed as having \"ritual exclusion of females\" and no \"possibility of female leaders.\" I trust specific claims like the latter more than broader claims like \"A woman is in no way the social inferior of a man.\" \nI actually wrote that before noticing, in the next section, that the same author who says \"A woman is in no way the social inferior of a man\" also observes that \"a certain amount of wife-beating is considered good, and the wife is expected to fight back\" - of the same society!\nSeeming concessions of significant inequality, sometimes accompanied by defenses of this that I find bizarre. Some example quotes from the chapter:\n\"Some Australian Aboriginal men use threats of gang-rape to keep women away from their secret ceremonies. Burbank argues that Aborigines accept physical aggression as a 'legitimate form of social action' and limit it through ritual (1994:31, 29). Further, women know how to deal with physical aggression, unlike their Western counterparts (Burbank 1994:19).\"\n\"For the Mbuti, 'a certain amount of wife-beating is considered good, and the wife is expected to fight back' (Turnbull 1965:287), but too much violence results in intervention by kin or in divorce.\"\n\"Observing that Chipewyan women defer to their husbands in public but not in private, Sharp cautions against assuming this means that men control women: 'If public deference, or the appearance of it, is an expression of power between the genders, it is a most uncertain and imperfect measure of power relations. Polite behavior can be most misleading precisely because of its conspicuousness'\"\n\"Some foragers place the formalities of decision-making in male hands, but expect women to influence or ratify the decisions\"\n\"Aché men and women traditionally participated in band-level decisions, though 'some men commanded more respect and held more personal power than any woman.'\"\n\"Rather than assigning all authority in economic, political, or religious matters to one gender or the other, hunter-gatherers tend to leave decision-making about men’s work and areas of expertise to men, and about women’s work and expertise to women, either as groups or individuals\"\nOverall, this chapter actively reinforced my impression that gender equality among the relevant societies is disappointingly low on the whole.\nNote 24 goes to this paper. (Note 23 also cites it, which is why I'm skipping to Note 24 for now.) My rough summary is:\nMuch of the paper discusses a single set of human remains from ~9,000 years ago that the author believes was (a) a 17-19-year-old female who (b) was buried with big-game hunting tools.\nIt also states that out of 27 individuals in the data set the authors considered who (a) appear to have been buried with big-game hunting tools (b) have a hypothesized sex, 11 were female and 16 were male.\nI think the idea is that these findings undermine the idea that women couldn't be big-game hunters. \nI have many objections to this paper being one of three sources cited for the claim \"Nearly all African hunter-gatherers are egalitarian, with women roughly as influential and powerful as men.\" \nNone of these results are from Africa (they're from the Americas).\nThis is a single paper that seems to be engaging in a lot of guesswork around a small number of remains, and seems to be written with a pretty strong agenda (see the intro). In general, I think it's a bad idea to put much weight on a single data source; I prefer systematic, aggregative analyses like the one I examine above.\nIt already seems to be widely acknowledged that the amount of big-game female hunting in these societies is not zero4 (though it is believed to be rare and in some cases taboo), so a small-sample-size case where it was relatively common would not necessarily contradict what's already widely believed. \nFinally, what would it tell us if women participated equally in big-game hunting 9,000 years ago, given that (as the authors of this paper state) there are only \"trace levels of participation observed among ethnographic hunter-gatherers and contemporary societies\"? As far as I can tell, it's very hard to glean much information about gender relations from 9,000 years ago, and there are any number of different axes other than hunting along which there may have been discrimination. I think it would be quite a leap from \"Women participated equally in big-game hunting\" to \"Gender equality was strong.\"\nNote 23 goes to a New York Times article that is mostly about the above paper. It also cites a case where remains were found of a man and woman buried together near servants; I do not know what point that is making.\nSource 30 appears to primarily be drawing from \"Women's Status in Egalitarian Society,\" a chapter from this book.5\nI find this chapter extremely unconvincing, and reminiscent of Source 22 above, in that it combines (a) sweeping statements without specifics or citations; (b) scattered statements about individual societies; (c) acknowledgements of what sound to me like disappointingly low levels of gender equality, accompanied by bizarre defenses. (One key quote, which sounds to me like it's basically arguing \"Gender relations were good because women had high status due to their role in childbearing,\" is in a footnote.6)\nGoogle results for \"hunter-gatherer gender relations\"\nGoogling \"hunter-gatherer gender relations\" (archived link) initially gives an impression of strong gender equality. Here's how the search starts off:\nHowever, when I clicked through to the first result, I found that the statement highlighted by Google (\"Hunter-gatherer groups are often relatively egalitarian regarding power and gender relationships\") appears to be an aside: no citation or evidence is given, and it is not the main topic of the paper. Most of the paper discusses the differing activities of men and women (e.g., big-game hunting vs. other food provision).7\nThe answer box has no citations, so I can't assess where that's coming from.\nAnd here's what shows next in the search:\nThe first of these results (the National Geographic article) is essentially a summary of the same source discussed above that cites evidence of 11 females (compared to 16 males) buried with big-game hunting tools 9,000 years ago.\nThe next (from jstor.org) is a discussion of \"gender relations in the Thukela Basin 7000-2000 BP hunter-gatherer society.\" The abstract states: \"I argue that the early stages of this occupation were characterized by male dominance which then became the site of considerable struggle which resulted in women improving their positions and possibly attaining some form of parity with men.\"\nThe next (from theguardian.com) is a Guardian article with the headline: \"Early men and women were equal, say scientists.\" The entire article discusses a single study:\nThe study looks at two foraging societies (one of which is the Agta, the most egalitarian society according to the table above). \nIt presents a theoretical model according to which one gender dominating decisions about who lives where would result in high levels of within-camp relatedness, and observes that actual patterns of within-camp relatedness are relatively low, so they more closely match a dynamic in which both genders influence decisions (according to the theoretical model). \nI believe this is essentially zero evidence of anything. \nThe final result is a Wikipedia article that is mostly about the differing roles for men and women among foragers. The part that provides Google's 3rd excerpt is here (screenshotting so you can get the full experience of the citation notes):\nSource 8 looks like the closest thing to a citation for the claim that \"the sexual division of labor ... developed relatively recently.\" It goes to this paper, which seems to me to be making a significant leap from thin evidence. The basic situation, as far as I can tell, is:8\nThere is no archaeological evidence that the population in question (Neandertals in Eurasia in the Middle Paleolithic) ate small game or vegetables.\nThis implies that they exclusively hunted big game.\nIt's hypothesized that women participated equally in big-game hunting. The reasoning is that otherwise, they would have had nothing to do, and this seems implausible to the authors. (There is also some discussion of the lack of other things that would've taken work to make, such as complex clothing.)\nI do not think that \"Neanderthals didn't eat small game or vegetables\" is much of an argument that they had egalitarian division of labor by sex.\nBottom line\nMy current impression is that today's foraging/hunter-gatherer societies have disappointingly low levels of gender equality, and that this is the best evidence we have about what pre-agriculture gender relations were like.\nI'm not sure why casual searching and Wikipedia seem to give such a strong impression to the contrary. It seems to me that there is a fair amount of interest in stretching thin evidence to argue that pre-agriculture societies had strong gender equality. \nThis might be partly be coming from a fear that if people think gender inequality is \"ancient\" or \"natural,\" they might conclude that it is also \"good\" and not to be changed. But as I'll elaborate in future pieces, my general perspective is that the \"state of nature\" is bad compared to today's world, and I think one of our goals as a society should be to fight things - from sexism to disease - that have afflicted us for most of our history. I don't think it helps that cause to give stretched impressions about what that history looks like.\nNext in series: Was life better in hunter-gatherer times?\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n Or \"forager,\" though I won't be using that term in this post because I already have enough terms for the same thing. \"Hunter-gatherer\" seems to be the more common term generally, and is the one favored by Wikipedia. ↩\n E.g., see Wikipedia on the Neolithic Revolution, stating that agriculture \"transformed the small and mobile groups of hunter-gatherers that had hitherto dominated human pre-history into sedentary (non-nomadic) societies based in built-up villages and towns ... These developments, sometimes called the Neolithic package, provided the basis for centralized administrations and political structures, hierarchical ideologies, depersonalized systems of knowledge (e.g. writing), densely populated settlements, specialization and division of labour, more trade, the development of non-portable art and architecture, and greater property ownership.\" The well-known book Guns, Germs and Steel is about this transition. ↩\n Though properties (2) and (4) could in some cases imply female advantage rather than egalitarianism per se. ↩\n The table above lists two societies that specifically do not have \"female hunting taboos.\" \nThe Lifeways of Hunter Gatherers (which I name above as a relatively systematic source) states that there are \"quite a few individual cases of women hunters,\" and that \"One case of women hunters who appear to be a striking exception is that of the Philippine Agta [also the only case from the table above with no evidence against egalitarianism].\" In context, I believe it is referring to big-game hunting.\n This is despite stating the view (which is shared by the paper I'm discussing now) that modern-day foraging societies have very little participation by women in big-game hunting overall (see the section entitled \"Why Do Men Hunt (and Women Not So Much)?\" from chapter 8). ↩\n I initially stated that Wikipedia gave no indication of which part of the book it was pointing at, but a reader pointed out that it gave a page number. That page is the page of the index that includes a number of references to gender-relations-related topics. Most come from the chapter I discuss here; there are also a couple of pages referenced of another chapter, which also cites this one, and which I would characterize along similar lines.  ↩\n \"It is also necessary to reexamine the idea that these male activities were in the past more prestigious than the creation of new human beings. I am sympathetic to the scepticism with which women may view the argument that their gift of fertility was as highly valued as or more highly valued than anything men did. Women are too commonly told today to be content with the wondrous ability to give birth and with the presumed propensity for 'motherhood' as defined in saccharine terms. They correctly read such exhortations as saying, 'Do not fight for a change in status.' However, the fact that childbearing is associated with women's present oppression does not mean this was the case in earlier social forms. To the extent that hunting and warring (or, more accurately, sporadic raiding, where it existed) were areas of male ritualization, they were just that: areas of male ritualization. To a greater or lesser extent women participated in the rituals, while to a greater or lesser extent they were also involved in ritual elaborations of generative power, either along with men or separately. To presume the greater importance of male than female participants, or casually to accept the statements to this effect of latter-day day male informants, is to miss the basic function of dichotomized sex-symbolism in egalitarian society. Dichotomization made it possible to ritualize the reciprocal roles of females and males that sustained the group. As ranking began to develop, it became a means of asserting male dominance, and with the full-scale development of classes sex ideologies reinforced inequalities that were basic to exploitative structures.\"\n It seems to me as though a double standard is being applied here: the kind of \"dichotomization\" the author describes sounds like a serious limitation on self-determination and meritocracy (people participating in activities based on abilities and interests rather than gender roles), and no explanation is given for the author's apparent belief that this dichotomization was unproblematic for past societies but reflected oppression for later societies. ↩\n From the abstract: \"Ethnohistorical and nutritional evidence shows that edible plants and small animals, most often gathered by women, represent an abundant and accessible source of “brain foods.” This is in contrast to the “man the hunter” hypothesis where big-game hunting and meat-eating are seen as prime movers in the development of biological and behavioral traits that distinguish humans from other primates.\" I am not familiar with that form of the \"man the hunter\" hypothesis; what I've seen elsewhere implies that men dominate big-game hunting and that big game is often associated with prestige, regardless of whatever nutritional value it does or doesn't have. ↩\n A bit more on how I identified this as the key part of the paper:\nThe paper notes that today's foraging societies generally have a distinct sexual division of labor, but argues that it must have developed after the Middle Paleolithic, because (from the abstract) \"The rich archaeological record of Middle Paleolithic cultures in Eurasia suggests that earlier hominins pursued more narrowly focused economies, with women’s activities more closely aligned with those of men ... than in recent forager systems.\"\nAs far as I can tell, the key section arguing this point is \"Archaeological Evidence for Gendered Division of Labor before Modern Humans in Eurasia.\" ↩\n", "url": "https://www.cold-takes.com/hunter-gatherer-gender-relations-seem-bad/", "title": "Pre-agriculture gender relations seem bad", "source": "cold.takes", "source_type": "blog", "date_published": "2021-10-19", "id": "140116e95a807cb695a35e794215bf97"} -{"text": "\nMost of these are on the general theme of my obsession with obsession. (That link also explains the \"Cold Links\" idea.)\n(NYT) Probably my favorite sports piece ever, about the world's leading ultra-endurance athlete. Excerpt:\nThe craziness is methodical, however, and Robic and his crew know its pattern by heart. Around Day 2 of a typical weeklong race, his speech goes staccato. By Day 3, he is belligerent and sometimes paranoid. His short-term memory vanishes, and he weeps uncontrollably. The last days are marked by hallucinations: bears, wolves and aliens prowl the roadside; asphalt cracks rearrange themselves into coded messages. Occasionally, Robic leaps from his bike to square off with shadowy figures that turn out to be mailboxes. In a 2004 race, he turned to see himself pursued by a howling band of black-bearded men on horseback ... Robic curls fetuslike on the pavement of a Pyrenean mountain road, having fallen asleep and simply tipped off his bike. Robic stalks the crossroads of a nameless French village at midnight, flailing his arms, screaming at his support crew ... \nOver the past two years, Robic, who is 40 years old, has won almost every race he has entered, including the last two editions of ultracycling’s biggest event, the 3,000-mile Insight Race Across America (RAAM) ... \nHe is not always the fastest competitor (he often makes up ground by sleeping 90 minutes or less a day), nor does he possess any towering physiological gift. On rare occasions when he permits himself to be tested in a laboratory, his ability to produce power and transport oxygen ranks on a par with those of many other ultra-endurance athletes. He wins for the most fundamental of reasons: he refuses to stop.\nFun article about \"Big's Backyard Ultra,\" an ultra-endurance race where people run around the same loop until they can't anymore. Ultra-endurance events are bonkers.\nEpic piece on how pro athletes deal with having to pee. \nHow Ronda Rousey processed her first-ever loss. Rousey was briefly the world's most dominant and celebrated mixed martial artist.\nAdorable article about the family that, for over 50 years, has been the sole supplier of the mud used to improve the grip on Major League baseballs. They get the mud from a secret location and MLB has no other way of treating the baseballs that players will be happy with. Unfortunately this yields only about $12k/year for the family (which probably explains why MLB hasn't tried all that hard to find an alternative mud supplier). Still, this is one of the most baseball things I've ever read.\nSubscribe Feedback\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/cold-links-assorted-sports-longreads/", "title": "Cold Links: assorted sports longreads", "source": "cold.takes", "source_type": "blog", "date_published": "2021-10-15", "id": "9716ca3781a4342a702361151626f0b2"} -{"text": "\nOne way of summarizing what a lot of this blog is about: I’m trying to write as if I were a billion years old. \nIf I were a billion years old, a lot of today’s events would feel - not unimportant at all (every person’s life is a hugely important), but - predictably fleeting, the kind of thing I could be sure to forget about in the next million years or so, whether or not I should.\nBut not everything would feel this way.\nI think I’d have felt something pretty durably important was happening when humans first started using tools, about 3 million years ago (which would feel to me like about 6 weeks ago).1 I mean, that would have felt like something quite new and unpredictable.\nAnd then the population explosion of the Neolithic Revolution, about 12,000 years ago (so like 4 hours ago), followed by the rise of cities and states, would have been pretty wild.\nBut the Industrial Revolution - that would have made me freak the !@#$ out. Population went up about 10x, the skies filled with smoke (and then started to clear again in some places), the streets filled with cars and skyscrapers, planes started flying through the air and even dropped a couple of nuclear bombs, the first-ever spaceships left Earth, and people started building computers - the first new kind of high-capacity computing devices since brains. \nAll of this in the last few hundred years. So like a few minutes ago.\nSo here I am. I’ve been hanging out watching animals for about half my life (and being bored by plants and bacteria before that), tool-using humans for about 6 weeks, and … SO MUCH STUFF … for the last few minutes. I am now paying RAPT attention, and I’m furiously trying to figure out what comes next. I’m making charts of when the new kinds of brains are going to get as powerful as the old kinds of brains, and trying to understand trends in quality of life to get a handle on where it might be going next. Definitely honing my critical thinking skills and trying to figure out whom to trust. \nAnd fine, also watching some great videos about sports. But mostly trying to stay focused on what matters most for the long run.\nSubscribe Feedback\nFootnotes\n I’m 40, so I’m just multiplying numbers like this by (40/1,000,000,000) to get how they would feel to the 1,000,000,000-year-old version of me. If you enjoy this kind of thing, also check out the cosmic calendar. ↩\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/olden-the-imaginary-billion-year-old-version-of-me/", "title": "If I were a billion years old", "source": "cold.takes", "source_type": "blog", "date_published": "2021-10-14", "id": "272229ffe5f859569190c7ef57ab0fb2"} -{"text": "This post will introduce my basic approach to asking the question \"Has life gotten better?\" and apply it to the easiest-to-assess period: the industrial era of the last few hundred years. \nMy conclusion for this period is quite in line with that of Enlightenment Now:1 life has gotten better, for humans (not for animals), especially for countries that are now considered \"developed\" (including the US and most of Europe). \nFuture posts will discuss trends in quality of life from longer ago. This image illustrates how this post fits into the full \"Has Life Gotten Better?\" series:\n(Reading this post isn't necessary for reading the rest of the series.)\nThe basic approach\nFirst, as far as I'm aware, there's no \"official\" or academic review for how quality of life has evolved for the average person over the course of human history. To get a good guess at the answer to \"has life gotten better?\", it's necessary to assemble evidence and arguments oneself (especially when talking about earlier time periods; there are a number of books about how quality of life has evolved over the last couple hundred years).\nMy attempt to answer this question has generally focused on systematically collected data. This means, essentially, that someone applied one consistent set of rules to scoring different periods of time on some metric. I often draw on Our World in Data, which seems to aspire to collect and present as much of this sort of data as possible, and tends to be very clear and detailed about where its data is coming from. \nWhen asking \"Has life gotten better?\", there is lots of evidence you could consider that doesn't fit this description. You could pore over depictions of different eras written by historians; you could review journals and diaries; you could reason about what life probably was like based on e.g. intuitions about how TV and the Internet make you feel today. My problem with this kind of analysis is that:\nIt leaves a huge amount of room for interpretation. I expect it to be heavily colored by the pre-existing worldview and emotional orientation of the person doing it (something that is unavoidable to an extent, but seems especially rough for this kind of analysis).\nI think the amount of variety and richness in past people's lives is beyond our ability to really understand and imagine it. I'm worried about romanticizing or demonizing past eras based on thin clues here and there. \nI think in order to really feel confident that we were interpreting the clues we have reasonably - and building a picture of the past anywhere near rich enough to compare with our picture of the present - one would need to spend years (maybe lifetimes) just to get a good picture of some ~20-year period in some particular geography.\nFocusing on systematic data leaves out a lot - in fact, for most eras it leaves out some of the most fundamentally important questions, like \"How happy were people?\" and \"How was mental health?\" But I can gather and look at all of the data that's relevant and available, and readers can check my work, and we can know that we're all looking at the same non-cherry-picked sample of evidence, and trying to make an uncertain best guess based on what we know (which is of course subject to change if more information comes in). \nTo date, I haven't seen any other analysis that seems to give a strong case against the basic conclusion I've arrived at this way. \nWhat does it mean for human life to get \"better\" or \"worse?\"\nI didn't try to reduce quality of life to \"happiness\" or something like that, for a number of reasons, primarily that (a) this would've required taking controversial philosophical stands;2 (b) data on \"happiness\" is generally very limited (especially when looking over longer periods of time), and I wanted to be able to draw on as much data as possible. \nInstead, I made a list of a number of things that seem relevant for quality of life, and are at least potentially \"measurable\" by systematic data (at least to some degree).\nHere's my list, which I made by brainstorming while reviewing a few sources for inspiration:3\nQuestions that are particularly amenable to measurement\nPoverty. How common is extreme poverty, e.g., near- or below-subsistence standards of living? (In practice, I often find hunger and health the best way to get at this when data is sparse.)\nHunger. How common is hunger/undernourishment?\nHealth. How good is physical/medical health, e.g., how common are debilitating diseases and how long is life expectancy?\nViolence. How common is violence, particularly deaths from violence (which tend to be particularly measurable and amenable to rich data sets)?\nQuestions that are somewhat amenable to measurement\nMental health. How good is mental health, e.g., how common are various mental illnesses and disabilities?\nSubstance abuse and addiction. How common are these issues?\nDiscrimination. How severe are formal and informal racism, sexism and other forms of unjust discrimination?\nTreatment of children. How common is child abuse (both severe and moderate, where the latter might refer to heavy child labor and general disregard for children's welfare)?\nTime usage. How much time do people spend doing particularly unpleasant things, vs. things they enjoy?\nSelf-assessed well-being. How good is self-assessed well-being and happiness?\nEducation and literacy. How educated and/or literate is the population?\nFriendship and community. Do people have friendship and community, or are they lonely?\nQuestions that seem very difficult to get systematic data on as of today\nFreedom. How common is slavery, and more broadly, to what extent do people have self-determination?\nRelationship quality. How good are people's relationships with their spouses, children, family and friends?\nJob satisfaction. How much do people enjoy their jobs and find them meaningful?\nMeaning and fulfillment. To what extent do people feel their lives are meaningful overall? To what extent do they feel fulfilled (or, to the extent this is distinct, to what extent are they fulfilled)?\nAssessing the last couple hundred years\nI went through all the charts on Our World in Data, flagging the ones that seemed most useful for getting a handle on how the above things have changed over time (ideally long periods of time). (I also included some comparisons between richer and poorer countries, and would generally assume the world has become more like richer countries over time.) My list of relevant charts, with start dates, is here. After doing this, I went through Enlightenment Now for things I may have missed; this resulted in adding two more charts from Our World in Data and a couple of other notes to the summary table below.4\nProperty\nTrend\nPoverty: impressive, consistent improvement\n \nSeems robustly down worldwide since 1820; I recommend the full OWiD page, which examines this from many angles.\n \nHunger: impressive, consistent improvement\n \nHuman height (a proxy for nutrition) up worldwide since 1896 (flat before 1800, and gains are slowing down). See OWiD's page on human height. Caloric supply up since 1800. Famine deaths down since 1860. Undernourishment (defined via calorie consumption) down since 1970\nHealth (physical): impressive, consistent improvement\n \nChild mortality down worldwide since 1800 (though flat before then) and lower in richer countries; maternal mortality down since 1870 and lower in richer countries; life expectancy up worldwide since 1870 and higher in richer countries; cancer trends mixed since 1930 (US); alcohol consumption down since 1890 (data only available for richer countries); obesity up since 1975. (Note that I'm generally listing everything, not just things that support my bottom line, e.g. obesity is an exception to the overall picture of improvement)\n \nViolence: improvement with caveats (to come in a later post)\n \nWestern Europe homicide rates down since 1300; frequency of \"Great Power war\" down since 1700s (and flat for ~200 years prior); deaths from UK military conflicts seem down since 1450 (harder to say before that); measures of military expenditure and personnel seem roughly flat, with spikes, since 1700 in the UK and the early 1800s elsewhere (link 1, link 2)\n \nMental health: limited data, looks mostly flat \n \nSuicide rates mostly unchanged (though not steady) since 1950 (and since 1860-1900 according to Enlightenment Now Figure 18-3); no clear trends since 1990 in schizophrenia, bipolar disorder, anxiety disorders, depression; eating disorders seem to have risen somewhat\n \nSubstance abuse and addiction: up since 1990, no data before then\n \nDeaths from substance use disorders up since 1990; premature deaths from illicit drug use up since 1990; US drug overdose death rates up since 1999\nDiscrimination: improvement \n \nFemale:male schooling ratios up since 1870; female labor force participation up since 1890 (data from richer countries); Gender Equality Index up since 1950; gender pay gap down since 1970 (but higher in richer countries); # countries where homosexuality is legal up since 1791. Also see additional OECD charts on gender equality since 1900.\nOWiD doesn't include systematic data on prevalence of slavery and racially discriminatory law or colonial rule, but it seems clear that these would show improvement since at least the mid-1800s as well. \n \nTreatment of children: spotty data implies improvement\n \nUK child labor down since 1860; child employment lower in richer countries; US bullying down since 1993\nTime usage: spotty data implies improvement\n \nAnnual working hours down since 1870 (data from richer countries only) and lower in richer countries. (Some additional data in Enlightenment Now chapter 17, particularly the decline in time spent in housework starting around 1900 and the quote in this footnote.5)\n \nSelf-assessed well-being: spotty data implies improvement\n \nEuropean life satisfaction up since 1973; life satisfaction higher in richer countries\nEducation and literacy: impressive, consistent improvement\n \nWorldwide share of the population with basic education up worldwide since 1820; primary school enrollment up since 1820; literacy up worldwide since 1800 (and in the UK and Netherlands since 1475); basic numeracy up since 1500 (data from richer countries)\n \nFriendship and community: spotty data mildly suggests improvement or at least not worsening\n \nRicher-country populations report having more people they can count on; USA high-school loneliness down since 1977; one-person households more common since 1500 (Europe, UK, US; I included this because it could be interpreted as negative, though that isn't my guess). (Also see quote from Enlightenment Now in footnote.6)\n \nFreedom: spotty data suggests improvement\n \nDemocracies up, autocracies down since 1900; human rights scores up since 1946\nRomantic relationship quality: unknown\n \nMarriage rates down since 1920 (US) and 1960 (some other countries); divorce rates variable, mostly up since 1970 (falling in the US since 1980)\nJob satisfaction: unknown\n \nI haven't found good data series on this\n \nMeaning and fulfillment: unknown\n \nI haven't found good data series on this as distinct from life satisfaction (listed above)\n \nNote that data is often only available for a limited # of countries. I generally tried to note when a trend is only based on data from richer countries, and when it seems clear that a trend is \"worldwide\"; when I don't specify either, it means the trend is based on data from a limited set of countries, but those countries are from a mix of regions and income levels, or there's other reason to guess the trend is fairly broad.\nPersonally, I would guess that the last three rows - particularly romantic relationship quality and job satisfaction - would also show improvement, at least in richer countries, if we had good data on them. \nThis is mostly based on the fact that I think the ability to search out a \"good fit\" has dramatically improved (from a pretty low starting point of very little choice) for job searches, dating, and even in some sense meaning and fulfillment (in that it's gotten easier to choose between different candidate \"life missions\" - religions, or political goals, or just communities or topics, to find one that feels fulfilling and find others who share it). \nIt's also somewhat based on informal impressions (which I admit are unreliable) from e.g. books and TV shows (for example, people seem to treat their romantic partners as more of a burden in older TV shows).\nI'm aware of the theory that having more choice can make matters worse, but my default is that this is usually outweighed by the benefits of choice, at least when one is starting off from a pretty low level of being able to consider and choose between multiple options.\nI expect others have different impressions. But I think it's telling that nearly everything we can measure seems to have improved (and when not, stayed flat) over the last couple hundred years. \nWith that in mind, I tend to default to interpreting most statements like \"Alienation, depression, loss of meaning, etc. are modern-world-specific phenomena\" as more-or-less just \"grass is greener\" thinking. I think any given past era probably had lots of people struggling with a lack of meaning, fulfillment, purpose, and/or good relationships, even if their community was pressuring them to go along with some official consensus that X or Y is meaningful and fulfilling. \n(The flatness of suicide rates over time seems indicative here.)\nFor animals, it's not the same story\nUnfortunately, the chart for average animal quality of life probably looks very different from the human one; for example, the rise of factory farming in the 20th and 21st centuries is a massive negative development. I consider this a major complicating point for the narrative about life getting better over this period.\nNext in series: Pre-agriculture gender relations seem bad\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n See part II (pages 37-347). There is a nice summary at the beginning of Chapter 20 (pages 322-326), and simply looking at the charts and tables can give a nice feel for the overall picture as well. ↩\n See Stanford Encyclopedia of Philosophy on three broad competing theories of well-being (how good a person's life is). \"Hedonism\" says that a person's well-being is defined entirely by the feelings (e.g., pleasure and pain) they experience; \"desire theories\" say that well-being is about whether a person's desires/preferences are fulfilled; \"objective list theories\" are willing to say that a person's well-being can depend on certain facts about their life, such as whether they have good relationships and achieve understanding of the world, even if the person themselves doesn't care about these things or enjoy them. I think each of these is unsatisfactory on its own: I can imagine pleasurable lives that I wouldn't say are well-lived, and lives with full desire fulfillment that I wouldn't say are well-lived, but I also think it's strange to define well-being entirely using the \"objective list\" method. So I am inclined to judge well-being simply by a mix of all three, which supports my approach of looking at lots of different things that intuitively seem important for quality of life. ↩\nOur World in Data, Enlightenment Now, Maslow's Hierarchy of Needs, work I had previously done looking for themes in Utopian literature. ↩\n I have generally avoided leaning on Enlightenment Now, not because I don't agree with the picture it paints - I do - but because I would sympathize with a reader who is worried that its selection of charts is cherry-picked. \n But here I'll note a number of useful-seeming charts that are not covered by the Our World in Data figures in my table: figures 7-2 (stunting since 1966); 12-3, 12-4, 12-6, 12-8, 12-9 (deaths from accidents and other non-homicide violence); 14-3 and 14-4 (death penalty abolitions, execution deaths); 15-6 (liberal values across time and generations, going back to people born in the early 1900s); a number of charts on time use in chapter 17; 18-1 (a nice presentation of happiness data); 18-2 (loneliness data that includes college students, not just high school); 18-3 (suicide going back to 1860-1900). ↩\n \"Indeed, over the course of the 20th century, typical American parents spent more time, not less, with their children.24 In 1924, only 45 percent of mothers spent two or more hours a day with their children (7 percent spent no time with them), and only 60 percent of fathers spent at least an hour a day with them. By 1999, the proportions had risen to 71 and 83 percent.25 In fact, single and working mothers today spend more time with their children than stay-at-home married mothers did in 1965.\" ↩\n \"In Still Connected (2011), the sociologist Claude Fischer reviewed forty years of surveys that asked people about their social relationships. 'The most striking thing about the data,' he noted, 'is how consistent Americans’ ties to family and friends were between the 1970s and 2000s. We rarely find differences of more than a handful of percentage points either way that might describe lasting alterations in behavior with lasting personal consequences—yes, Americans entertained less at home and did more phone calling and emailing, but they did not change much on the fundamentals.'\" ↩\n", "url": "https://www.cold-takes.com/has-life-gotten-better-the-post-industrial-era/", "title": "Has life gotten better?: the post-industrial era", "source": "cold.takes", "source_type": "blog", "date_published": "2021-10-12", "id": "e7062cbd6ccee7aceca64dbbf5f7f98f"} -{"text": "\nIn casual conversations about the future of AI - particularly among people who don’t go in for wild, sci-fi stuff - there seems to be a lot of attention given to the problem of technological unemployment: AI systems outcompeting humans at enough jobs to create a drastic, sustained rise in the unemployment rate.\nThis tends to be seen as a “near-term” problem, whereas the world-transforming impacts of AI I’ve laid out tend to be seen as more “long-term.”\nThis could be right. But here I’ll try to convey an intuition that it’s overstated: that the kind of AI that could power a massive productivity explosion and threaten humanity’s very existence could come pretty soon after - or even before! - the kind of AI that could lead to significant, long-lasting technological unemployment.\n“Technological unemployment” AI would need to be extraordinarily powerful and versatile\nThe first key point is that I think people underestimate how powerful and versatile AI would have to be to create significant, long-lasting technological unemployment. \nFor example, imagine that AI advances to the point where truck drivers are no longer needed. Would this add over 3 million Americans to the ranks of the unemployed? Of course not - they’d get other jobs. We’ve had centuries of progress in automation, yet today’s unemployment rate is similar to where it was 50 years ago, around 5-6%.\n(Temporary unemployment/displacement is a potential issue as well. But I don't think it is usually what people are picturing when they talk about technological unemployment, and I don't see a case that there's anything in that category that would be importantly different from the daily job destruction and creation that has been part of the economy for a long time.)\nIn order to leave these 3 million people durably unemployed, AI systems would have to outperform them at essentially every economically valuable task. \nWhen imagining a world of increasing automation, it’s not hard to picture a lot of job options for relatively low-skilled workers that seem very hard to automate away. Examples might include:\nCaregiver roles, where it’s important for people to feel that they’re connecting with other humans (so it’s hard for AI to fully fill in). \nRoles doing intricate physical tasks that are well-suited to human hands, and/or unusually challenging for robots. (My general sense is that AI software is improving more rapidly than robot hardware.)\nProviding training data for AIs, focused on cases where they struggle.\nSurveying and interviewing neighbors and community members, in order to collect data that would otherwise be hard to get.\nPerhaps a return to agricultural employment, if rising wealth leads to increasing demand for food from small, humane and/or picturesque farms (and if it turns out that AI-driven robots have trouble with all the tasks these farms require - or it turns out that AI-run farms are just hard to market).\nMany more possibilities that I’m not immediately thinking of.\nAnd these roles could end up paying quite well, if automation elsewhere in the economy greatly raises productivity (leading to more total wealth chasing the people in these roles).\nIn my view, a world where automation has made low-skill workers fully unemployable is a world with extremely powerful, well-developed, versatile AI systems and robots - capable of doing everything that, say, 10% of humans can do. This could require AI with human-level capabilities at language, logic, fine motor control, interpersonal interaction, and more.\nPowerful, versatile AI could quickly become transformative (\"most important century\") AI\nAnd then the question is, how far is that from a world with AI systems that can make higher-skilled workers fully unemployable? For example, AI systems that could do absolutely everything that today’s successful scientists and engineers can do? Because that sounds to me like PASTA (my term for a type of AI that I've argued could make this century the most important of all time for humanity), and at that point I think we have bigger things to worry about.\nIn fact, I think there’s a solid chance that PASTA will come before the kind of AI that can make lower-skilled workers unemployable. This is because PASTA might not have to match humans at certain kinds of motor control and social interaction. So it might not make anyone totally unemployable (in the sense of having zero skills with economic value), even as it leads to a productivity explosion, wild technologies like digital people, and maybe even human extinction.\nThe idea that we might see AIs fully outcompete low-skill humans in the next few decades, but not fully outcompete higher-skill humans until decades after that, seems intuitively a bit weird to me. It could certainly end up being right, but I worry that it is fundamentally coming from a place of anthropomorphizing AI and assuming it will find the same things easy and challenging that we do.\nBottom line: I think it’s too quick to think of technological unemployment as the next problem we’ll be dealing with, and wilder issues as being much further down the line. By the time (or even before) we have AI that can truly replace every facet of what low-skill humans do, the “wild sci-fi” AI impacts could be the bigger concern.\nSubscribe Feedback\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/technological-unemployment-ai-vs-most-important-century-ai-how-far-apart/", "title": "“Technological unemployment” AI vs. “most important century” AI: how far apart?", "source": "cold.takes", "source_type": "blog", "date_published": "2021-10-07", "id": "1677cbdc1caabb38f61bb928892bf301"} -{"text": "Human civilization is thousands of years old. What's our report card? Whatever we've been doing, has it been \"working\" to make our lives better than they were before? Or is all our \"progress\" just helping us be nastier to others and ourselves, such that we need a radical re-envisioning of how the world works?\nI'm surprised you've read this far instead of clicking away (thank you). You're probably feeling bored: you've heard the answer (Yes, life is getting better) a zillion times, supported with data from books like Enlightenment Now and websites like Our World in Data and articles like this one and this one. \nI'm unsatisfied with this answer, and the reason comes down to the x-axis. Look at any of those sources, and you'll see some charts starting in 1800, many in 1950, some in the 1990s ... and only very few before 1700.1\nThis is fine for some purposes: as a retort to alarmism about the world falling apart, perhaps as a defense of the specifically post-Enlightenment period. (And I agree that recent trends are positive.) But I like to take a very long view of our history and future, and I want to know what the trend has been the whole way.\nIn particular, I'd like to know whether improvement is a very deep, robust pattern - perhaps because life fundamentally tends to get better as our species accumulates ideas, knowledge and abilities - or a potentially unstable fact about the weird, short-lived time we inhabit.\nSo I'm going to put out several posts trying to answer: what would a chart of \"average quality of life for an inhabitant of Earth look like, if we started it all the way back at the dawn of humanity?\" \nThis is a tough and frustrating question to research, because the vast majority of reliable data collection is recent - one needs to do a lot of guesswork about the more distant past. (And I haven't found any comprehensive study or expert consensus on trends in overall quality of life over the long run.) But I've tried to take a good crack at it - to find the data that is relatively straightforward to find, understand its limitations, and form a best-guess bottom line.\nIn future pieces, I'll go into detail about what I was able to find and what my bottom lines are. But if you just want my high-level, rough take in one chart, here's a chart I made of my subjective guess at average quality of life for humans2 vs. time, from 3 million years ago to today:\nSorry, that wasn't very helpful, because the pre-agriculture period (which we know almost nothing about) was so much longer than everything else.3\n(I think it's mildly reality-warping for readers to only ever see charts that are perfectly set up to look sensible and readable. It's good to occasionally see the busted first cut of a chart, which often reveals something interesting in its own right.)\nBut here's a chart with cumulative population instead of year on the x-axis. The population has exploded over the last few hundred years, so this chart has most of the action going on over the last few hundred years. You can think of this chart as \"If we lined up all the people who have ever lived in chronological order, how does their average quality of life change as we pan the camera from the early ones to the later ones?\"\nSource data and calculations here. See footnote for the key points of how I made the chart, including why it has been changed from its original version (which started 3 million years ago rather than 300,000).4 Note that when a line has no wiggles, that means something more like \"We don't have specific data to tell us how quality of life went up and down\" than like \"Quality of life was constant.\"\nIn other words:\nWe don't know much at all about life in the pre-agriculture era. Populations were pretty small, and there likely wasn't much in the way of technological advancement, which might (or might not) mean that different chronological periods weren't super different from each other.5\nMy impression is that life got noticeably worse with the start of agriculture some thousands of years ago, although I'm certainly not confident in this.\nIt's very unclear what happened in between the Neolithic Revolution (start of agriculture) and Industrial Revolution a couple hundred years ago.\nLife got rapidly better following the Industrial Revolution, and is currently at its high point - better than the pre-agriculture days.\n \nSo what?\nI agree with most of the implications of the \"life has gotten better\" meme, but not all of them. \nI agree that people are too quick to wring their hands about things going downhill. I agree that there is no past paradise (what one might call an \"Eden\") that we could get back to if only we could unwind modernity.\nBut I think \"life has gotten better\" is mostly an observation about a particular period of time: a few hundred years during which increasing numbers of people have gone from close-to-subsistence incomes to having basic needs (such as nutrition) comfortably covered. \nI think some people get carried away with this trend and think things like \"We know based on a long, robust history that science, technology and general empowerment make life better; we can be confident that continuing these kinds of 'progress' will continue to pay off.\" And that doesn't seem quite right.\nThere are some big open questions here. If there were more systematic examination of things like gender relations, slavery, happiness, mental health, etc. in the distant past, I could imagine it changing my mind in multiple ways. These could include: \nLearning that the pre-agriculture era was worse than I think, and so the upward trend in quality of life really has been smooth and consistent.\n \nOr learning that the pre-agriculture era really was a sort of paradise, and that we should be trying harder to \"undo technological advancement\" and recreate its key properties.\n \nAs mentioned previously, better data on how prevalent slavery was at different points in time - and/or on how institutionalized discrimination evolved - could be very informative about ups and/or downs in quality of life over the long run.\nHere is the full list of posts for this series. I highlight different sections of the above chart to make clear which time period I'm talking about for each set of posts.\nPost-industrial era\nHas Life Gotten Better?: the post-industrial era introduces my basic approach to asking the question \"Has life gotten better?\" and apply it to the easiest-to-assess period: the industrial era of the last few hundred years.\nPre-agriculture (or \"hunter-gatherer\" or \"forager\") era\nPre-agriculture gender relations seem bad examines the question of whether the pre-agriculture era was an \"Eden\" of egalitarian gender relations. I like mysterious titles, so you will have to read the full post to find out the answer.\nWas life better in hunter-gatherer times? attempts to compare overall quality of life in the modern vs. pre-agriculture world. Also see the short followup, Hunter-gatherer happiness.\nIn-between period\nDid life get better during the pre-industrial era? (Ehhhh) compares pre-agriculture to post-agriculture quality of life, and summarizes the little we can say about how things changed between ~10,000 BC and ~1700 CE.\nSupplemental posts on violence\nSome of the most difficult data to make sense of throughout writing this series has been the data on violent death rates. The following two posts go through how I've come to the interpretation I have on that data.\nUnraveling the evidence about violence among very early humans examines claims about violent death rates very early in human history, from Better Angels of Our Nature and some of its critics. As of now, I believe that early societies were violent by today's standards, but that violent death rates likely went up before they went down.\nFalling everyday violence, bigger wars and atrocities: how do they net out? looks at trends in violent death rates over the last several centuries. When we include large-scale atrocities, it's pretty unclear whether there is a robust trend toward lower violence over this period.\nFinally, an important caveat to the above charts. Unfortunately, the chart for average animal quality of life probably looks very different from the human one; for example, the rise of factory farming in the 20th and 21st centuries is a massive negative development. This makes the overall aggregate situation for sentient beings hard enough to judge that I have left it out of some of the very high-level summaries, such as the charts above. It is an additional complicating factor for the story that life has gotten better, as I'll be mentioning throughout this series. \nNext in series: Has Life Gotten Better?: the post-industrial era\nThanks to Luke Muehlhauser, Max Roser and Carl Shulman for comments on a draft.\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n For example:\nI wrote down the start date of every figure in Enlightenment Now, Part II (which is where it makes the case that the world has gotten better), excluding one that was taken from XKCD. 6 of the 73 figures start before 1700; the only one that starts before 1300 is Figure 18, Gross World Product (the size of the world economy). This isn't a criticism - that book is specifically about the world since the Enlightenment, a few hundred years ago - but it's an illustration of how one could get a skewed picture if not keeping that in mind.\nI went through Our World in Data noting down every major data presentation that seems relevant for quality of life (leaving out those that seem relatively redundant with others, so I wasn't as comprehensive as for Enlightenment Now.) I found 6 indicators with data before 1300 (child/infant mortality, which looks flat before 1700; human height, which looks flat before 1700; GDP per capita, which rose slightly before 1700; manuscript production, which rose starting around 1100; the price of light, which seems like it fell a bit between 1300-1500 and then had no clear trend before a steep drop after 1800; deaths from military conflicts in England, which look flat before 1700; deaths from violence, which appear to have declined - more on this in a future piece) and 8 more with data before 1700. Needless to say, there are many charts from later on. ↩\n See the end of the post for a comment on animals. ↩\n \"Why didn't you use a logarithmic axis?\" Well, would the x-axis be \"years since civilization began\" or \"years before today?\" The former wouldn't look any different, and the latter bakes in the assumption that today is special (and that version looks pretty similar to the next chart anyway, because today is special).  ↩\n I mostly used world per-capita income, logged; this was a pretty good first cut that matches my intuitions from summarizing history. (One of my major findings from that project was that \"most things about the world are doing the same thing at the same time.\") But I gave the pre-agriculture era a \"bonus\" to account for my sense that it had higher quality of life than the immediately post-agriculture era: I estimated the % of the population that was \"nomadic/egalitarian\" (a lifestyle that I think was more common at that time, and had advantages) as 75% prior to the agricultural revolution, and counted that as an effective 4x multiple on per-capita income. This was somewhat arbitrary, but I wanted to make sure it was still solidly below today's quality of life, because that is my view (as I'll argue).The original version of this chart started 3 million years ago, rather than 300,000. I had waffled on whether to go with 3 million or 300,000 and my decision had been fairly arbitrary. I later discovered that I had an error in my calculations that caused me to underestimate the population over any given period, but especially longer periods such as the initial period. With the error corrected, the \"since 3 million years ago\" chart would've been more dominated by the initial period (something I especially didn't like because I'm least confident in my population figures over that period), so I switched over to the \"300,000 years ago\" chart. ↩\n More specifically, I'd guess there was probably about as much variation across space as across time during that period. It's common in academic literature (which I'll get to in future posts) to assume that today's foraging societies are representative of all of human history before agriculture. ↩\n", "url": "https://www.cold-takes.com/has-life-gotten-better/", "title": "Has Life Gotten Better?", "source": "cold.takes", "source_type": "blog", "date_published": "2021-10-05", "id": "48af44b2fb603cb494a3afb3e83f2b89"} -{"text": "\nConsider Gell-Mann Amnesia:\nBriefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine [Michael Crichton’s], show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the \"wet streets cause rain\" stories. Paper's full of them.\nIn any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.\nI think “baloney” goes a bit far. But I think it’s genuinely true that most of what you read is some combination of:\nUnderinformed, because professional writers are rarely experts; because real experts often can’t write all that well; and because it's hard even for experts to know everything relevant for the topic at hand (especially when the topic doesn't fit well within a particular field).\nBiased, if only because they’re written by human beings.\nSystematically misleading, partly because most of what you read is ultimately on something much closer to an “entertainment” business model (the author wants you to enjoy your experience) than an “accuracy” business model (the writer wants you to get accurate information). There are some norms and forces that reward accuracy and punish inaccuracy in writing, and they often catch blatant falsehoods, but I wouldn’t generally say there’s strong pressure on writers to give their readers an accurate overall understanding of reality. \nI think most people agree with what I just wrote as they read it, but they don’t feel it day to day: they have Gell-Mann Amnesia. \nConsider the opposite condition: Gell-Mann Earworms, in which “I can’t trust this” is constantly ringing in your ears as you read anything. (Very much including this blog!) If there were a browser extension that inserted “Based on a true story” at the beginning of every online piece, this might give a feel for having Gell-Mann Earworms.\nThis probably sounds unpleasant and scary, and that’s probably part of the reason people have Gell-Mann Amnesia. What do you DO in this case? How do you inform yourself if you really don’t trust anything?\nAs someone who tries to have Gell-Mann Earworms, I try to:\nLearn about fewer things, more deeply. I decide which topics I’d really like to understand, and get really into them. That means reading multiple arguments, comparing them against each other, and trying to identify the sources of disagreement. It means reading academic papers instead of just news articles (and even books).\n \n Think a lot about which writers and sources I trust most. This sometimes means spending inordinate time digging into one claim or one debate.\nSeparate learning about how things are, learning about how things are discussed, and entertainment. If I read a single news article on a topic, I may have just learned about how things are discussed, and I may have just gotten some entertainment. But I probably didn’t learn much about how things are. That would take a deeper dive. \nPersonally, I like knowing how things are discussed, so I do follow a Twitter feed and read news websites and such, but I don’t tend to think of this as “staying informed” (about anything other than how things are discussed).\nReserve judgment. For example, I really try not to form a negative opinion of anyone based on something I read about them, whether that’s a takedown of their argument, or a story about a scandal, or anything else. \nIf I really want to decide whether someone said something wrong/bad, I want to take the time to read their original words in context, as well as any later explanation they may have given, before I even start to judge them. \n \nIf I read a takedown of someone’s essay, I assume the author of the takedown probably misinterpreted them,1 and if I want to know for sure, I need to read the original in something close to its entirety. \n \nI generally don’t consider myself justified in being offended by someone’s words or actions if I haven’t done at least an hour of research on the specific accusation in question. Unless there’s some specific reason that it’s worth it for me to spend that time, this means I usually don’t get mad at people by reading things. I haven’t identified any problems caused by this choice.\nThink about what’s really worth understanding, given how much work it takes to understand something. What claims would be important - for my actions - if true? Many big stories just don’t qualify.\nTo live with Gell-Mann Earworms, I have to consider myself deeply ignorant about the vast majority of issues people around me are talking about. When my parents ask my opinion on the latest news story, I always try to make “I don’t know” my first response. (I’ll then give some opinions, for fun and because we need something to argue about or else they’ll start giving me life advice. But starting with “I don’t know” seems like the right habit.)\nI generally think to myself, \"I only know about the topics I can put some real time into understanding, and that’s not a lot of topics, and I’d better pick them carefully for my life goals.\"\nI think this is a different mindset than most people have. I think it’s worth trying out.\nSubscribe Feedback\nFootnotes\n Based on times when I have dug into debates and takedowns, I think the odds that someone is being misread are very high. I may give some examples in the future. ↩\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/gell-mann-earworms/", "title": "Gell-Mann Earworms", "source": "cold.takes", "source_type": "blog", "date_published": "2021-10-01", "id": "66a505cb0d885103ee18e0a8b7eae8d6"} -{"text": "Now that I've finished the \"most important century\" series, I'll still be putting out one longer piece per week, but they'll be on toned-down/less ambitious topics, as you're about to see.\nA few years ago, I made a summary of human history in one table. It's here (color-coded Google sheet).\nTo do this, I didn't need to be (and am not!) an expert historian. What I did need to do, and what I found very worth doing, was:\nDecide on a lens for the summary: a definition of \"what matters to me,\" a way of distinguishing unimportant from important. \nTry to find the most important historical events for that lens, and put them all in one table.\nThe lens I chose was empowerment and well-being. That is, I consider historical people and events significant to the degree that they influenced the average person's (a) options and capabilities (empowerment) - including the advance of science and technology; (b) health, safety, fulfillment, etc. (well-being).1 (I'm not saying these are the same thing! It's possible that greater empowerment could mean lower well-being.) \nHistory through this lens seems very different from the history presented in e.g. textbooks. For example:\nMany wars and power struggles barely matter. My summary doesn't so much as mention Charlemagne or William of Orange, and the fall of the Roman Empire doesn't seem like a clearly watershed event. My summary thinks the development of lenses (leading to spectacles, microscopes and telescopes) is far more important.\nEvery twist and turn in gender relations, treatment of homosexuality, and the quality of maternal care and contraception is significant (as these likely mattered greatly, and systematically, for many people's quality of life). I've had trouble finding good broad histories on all of these. The development of most natural sciences and hard sciences is also important (much easier to read about, though the best reading material generally does not come from the \"history field\").\nThe summary is simply a color-coded table noting what I see as the most important events in each category during each major time period. It doesn't lend itself to a graphical summary, although I will be \"boiling things down\" more in future pieces, by trying to construct a single \"how quality of life for the average person, as time passed and empowerment rose\" curve. \nBut below, I'll try to give a sense of what pops out from this table, by going through some historical people and events that seem underrated to me, some that seem overrated, and high-level takeaways.\nDespite (or because of) my lack of expertise, I found the exercise useful, and would recommend that others consider doing their own summary of history:\nYou can spend infinite time reading history books and learning about various events, but it's a very different kind of learning to try to find \"all the highlights\" for your lens of choice, put them in one place, and reflect on how you'd tell the story of the world yourself if you had to boil it down. I think the latter activity requires more active engagement and is likely to result in better recall of important points. \nAnd I think the final product can be useful as well, if only for readers to easily get a pretty thorough sense of your worldview, what seems significant to you, and what disagreements or differences of perspective you have with others. Hopefully mine is useful to readers for giving a sense of my worldview and my overall sense of what humanity's story is so far.\nFinally, I think that creating a history summary is a vulnerable thing to do, in a good way. It's scary how little just about anyone (emphatically including myself) knows about history. I think the normal way to deal with this is to show off the facts one does know, change the subject away from what one doesn't, and generally avoid exposing one's ignorance. My summary of history says: \"This is what seems most important to me; this is the story as I perceive it; whatever important pieces I'm ignorant about are now on display for all to see.\" So take a look and give me feedback!\nUnderrated people and events according to the \"empowerment and well-being\" lens2\nTanzimat. In the mid-19th century, the Ottoman Empire went through a series of reforms that abolished the slave trade, declared political equality of all religions, and decriminalized homosexuality. Most of the attention for early reforms like this tends to focus on Europe and the US, but the Ottoman Empire was quite early on these things.\nIbn al-Haytham's treatise on optics. In the early 11th century, an Islamic scholar intensively studied how curved glass lenses bend light and wrote it up, which I would guess turned out to be immensely useful for the development of spectacles (which reached Europe in the 1200s) and - later - microscopes and telescopes, which were crucial to some of the key Scientific Revolution work on physics/astronomy and biology.\nThe medicine revolution of the mid-19th century. As far as I can tell, medicine for thousands of years did very little at all, and surgery may have killed as many people as it saved. It's not clear to me whether life expectancy improved at all in the thousands of years prior to the mid-19th century.3\nBut the mid-19th century saw the debut of anesthesia (which knocks out the patient and makes it easier to operate) and sterilization with carbolic acid (reducing the risk of infection); there's a nice Atul Gawande New Yorker article about these. Many more medical breakthroughs would follow, to put it mildly, and now health looks like possibly the top way in which the world has improved. (One analysis4 estimates that the value of improved life expectancy over the last ~100 years is about as big as all measured growth in world output5 over that period.)\nNot too far into this revolution, Paul Ehrlich (turn-of-20th-century chemist, not the author of The Population Bomb) looks like he came up with a really impressive chunk of today's drug development paradigms. As far as I can tell: \nIt had been discovered that when you put a clothing dye into a sample under a microscope, it would stain some things and not others, which could make it easier to see. \nEhrlich reasoned from here to the idea of targeted drug development: looking for a chemical that would bind only to certain molecules and not others. This seems like the beginning of this idea.\nHe developed the first effective treatment for syphilis, and also laid the groundwork for the basic chemotherapy idea of delivering a toxin that targets only certain kinds of cells.\nAnd this was only a fraction of his contributions to medicine.\nIt's hard to think of someone who's done more for medicine. Articles I've read imply that he had to deal with a fair amount of skepticism and anti-Semitism in his time, but at least today, everyone who hears his name thinks of ... a different guy who lost a bet.\nPorphyry, the Greek vegetarian. Did you know that there was an ancient Greek who was (according to Wikipedia) \"an advocate of vegetarianism on spiritual and ethical grounds ... He wrote the On Abstinence from Animal Food (Περὶ ἀποχῆς ἐμψύχων; De Abstinentia ab Esu Animalium), advocating against the consumption of animals, and he is cited with approval in vegetarian literature up to the present day.\" Is it just me or is that more impressive than, well, Aristotle?\nThe rise of the modern academic system in the mid-20th century. I believe that government funding for science skyrocketed after World War II, and that this period included the creation of the NSF and DARPA and a general skyrocketing demand for professors. My vague impression is that this is when science turned into a real industry and mainstream career track. My summary also thinks that science had a lower frequency of well-known breakthroughs after this point.\nAlexandra Elbakyan. Super recent, and it's of course hard to know who will end up looking historically significant when more time has passed. But it seems like a pretty live possibility that she's done more for science than any scientist in a long time, and there's a good chance you have to Google her to know what I'm talking about.\nUnderrated negative trend: The rise of factory farming. The clearest case, in my view, in which the world has gotten worse with industrialization (note that institutionalized homophobia arose before industrialization, and seems to be in decline now unlike factory farming). I think that really brutal factory-like treatment of animals began in the 1930s and has mostly gotten worse over time. \nUnderrated negative trend: The relatively recent rise of institutionalized homophobia. I believe that bad/inegalitarian gender relations are as old as humanity (more in a future post), and slavery is at least as old as civilization. But institutionalized homophobia may be more of a recent phenomenon. My impression is that it came into being sometime around 0 AD and gradually swept most of the globe (though I'm definitely not confident in this, and would love to learn more).\nThe foundations of probability and statistics. Can you name the general time periods for the creation of: the line chart, bar chart, pie chart, the idea of probability, the idea of the normal distribution, Bayes's theorem, and the first known case-control experiment? Seems like the answer could be just about anything right? Turns out it was all between 1760-1812; all three charts came from William Playfair and a lot of the rest came from Laplace and Gauss. \nNot too much of note happened after that until the end of the 19th century, when Francis Galton and Karl Pearson (not always working together) came up with the modern concepts of standard deviation, correlation, regression analysis, p-values, and more. \nI think it's pretty interesting that so many of the things that are so foundational to pretty much any quantitative analysis anyone does of anything were invented in a couple relatively recent spurts.\nMetallurgy. A huge amount of history's scientific and technological progress is crammed into the last few hundred years, but I think the story of metallurgy is much longer and more gradual (see these major innovations from ~5000 and ~2000 BCE). I wish I could find a source that compactly goes through the major steps here and how they came about; I'd guess it was mostly trial and error (since so much of it was before the Scientific Revolution), but would like to know whether that's right.\n(Mathematics also has its major breakthroughs much more evenly spread throughout history than fields such as physics, chemistry and biology.)\nOverrated people and events according to the \"empowerment and well-being\" lens\nI mean, the vast majority of rulers, wars, and empires rising and falling.\nSpecial shout-out to:\nThe Roman Empire, which I can barely see any sign of either in quality-of-life metrics (future posts) or in key empowerment-and-wellbeing events (most of the headlines from this period came from China or the Islamic Empire). If I taught a history class I'm not entirely sure I would mention the Roman Empire.\nAncient Greece, which is renowned for its ideas and art, but doesn't seem to have been home to any notable improvements in quality of life - no sustained or effective anti-slavery pushes, no signs of feminism, nothing that helped with health or wealth or peace. Seems like it was a pretty horrific place to live for the average person. I've seen some signs that Athens was especially terrible for women, even by the standards of the time, e.g.\nHigh-level takeaways\nMost of what happened happened all at the same time, in the last few minutes (figuratively)\nThis project is what originally started to make me feel that we live in a special time, and that our place in history is more like \"Sitting in a rocket ship that just took off\" than like \"Playing our small part in the huge unbroken chain of generations.\"\nMy table devotes:\nOne column to the hundreds of thousands of years of prehistory.\nThree columns to the first ~6000 years of civilization.\nTwo columns to the next 300 years.\n6 columns to the ~200 years since. \nThat implies that more has happened in the last 200 years than in the previous million-plus. I think that's right, not recency bias. It seems very hard to summarize history (with my lens) without devoting massively more attention to these recent periods.\nI've made this point before, and you'll see it showing up in pretty much any chart you look at of something important - population, economic growth, rate of significant scientific discoveries, child mortality, human height, etc. My summary gives a qualitative way of seeing it across many domains at once.\n200 years is ~10 generations. We live in an extraordinary time without much precedent. And because of this, there are ultimately pretty serious limits to how much we can learn from history.\nHistory is a story\nI sometimes get a vibe from \"history people\" that we should avoid imposing \"narratives\" on human history: there are so many previous societies, each with its own richness and complexity, that it's hard to make generalizations or talk about trends and patterns across the whole thing.\nThat isn't how I feel.\nIt looks to me like if you're comparing an earlier period to a later one, you can be confident that the later period contains a higher world population and greater empowerment due to a greater stock of ideas, innovations and technological capabilities.\nThese trends seem very consistent, and can reasonably be expected to generate other consistent trends as well. \nI think history as it's traditionally taught (or at least, as I learned it back in the 20th century) tends to focus on the key events and details of each time, while only inconsistently situating them against the broader trends.6 To me, this is kind of like summarizing the Star Wars trilogy as follows: \"On the first day covered in the movie, it was warm and humid on Tattooine and hot and dry on Endor. On the second day, it was slightly cooler on Tattooine and hotter on Endor. On the third day, it rained on Tattooine, and it was still hot and dry on Endor. [etc. to the final day covered in the movies.] Done.\" Not inaccurate per se, but ...\nA lot of history through this lens seems unnecessarily hard to learn about\nMy table is extremely amateur and is probably missing a kajillion things; I had to furiously search Google and Wikipedia to fill in a lot of the cells. I'd love to live in a world where there were well-documented, comprehensive lists and timelines of the major events for empowerment and well-being.\nTo give a sense for this, here are some things that would be helpful for viewing history through this lens, that I've been unable to find:\nSystematic accounts - going back as far as possible - of when each major state/empire made official changes to things like women's rights (to own property, hold political power, vote, etc.), formal religious freedom, formal treatment of different ethnic groups, legality and other treatment of LGBTQ+, etc.\nCollected estimates (by region/empire/state and period, with citations) of how many people were slaves, what percentage of marriages were arranged, etc.\nComprehensive timelines (with citations) of major milestones for most of the rows in my table, and/or narrative histories that focus on listing, explaining and contextualizing such milestones and otherwise being concise (an example would be Asimov's Chronology of Science and Discovery, but I'd also like to see this for topics like gender relations).\nHistories of science focused on the discoveries that seem most likely to have contributed to real-world capabilities and quality of life, with explanations of these connections. (As an example of what this wouldn't be like, existing chemistry histories tend to list the discovery of each element.)\nI've only listed things that seem like they would be reasonably straightforward to put together; of course there are a zillion more things I wish I could know about the past.\nI don't know very much!\nThough I hope I've been clear about this throughout, I want to mention it again before I wrap up the takeaways. Not only is this summary based on a limited amount of time from a non-expert, but the sources I've been able to find for this project shouldn't be taken as neutral or trustworthy either. I think there are ~infinite ways in which they are likely biased due to the worldviews, identities, assumptions, etc. of the authors.\nFor example, the previous section notes how much harder it's been to find long-run data and timelines on slavery and women's rights than on technological developments. \nAnother thing that jumps out is that my summary ended up being heavily focused on the Western world. From what I've been able to gather, the Western world of the very recent past looks like where the most noteworthy human-empowerment-related developments are concentrated. If that's indeed the case, I don't think this was inevitable - there were long periods of time where non-Western civilizations were contributing much more to science, technology, etc. than Western civilizations - but the Scientific Revolution of the 1500s and the Industrial Revolution of the 1800s began in the recent West, and once those happened, people in the West could have been best-positioned to build on them for some period of time. But this could be another reflection of biases in how my sources report what was invented, where and when (I've looked for evidence that this is the case and haven't found any, but my efforts are obviously extremely incomplete, and I'm especially skeptical that noteworthy art is as concentrated in the West as the sources I've consulted make it seem).\nThe four high-level takeaways listed in this section are the four important-seeming observations I feel most confident about from this exercise. (But most confident doesn't mean confident, and I'm always interested in feedback!)\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n This does not mean I see history as an inevitably upward trend in terms of empowerment and well-being. You could apply the lens I've described whether empowerment and well-being have fallen, risen, or wiggled around randomly over the course of history. ↩\n I'm often not giving sources; my sources are listed in the detailed version of the summary table.\n ↩\n Check out this chart:\n And note that life expectancy at the beginning is similar to estimates for foraging societies, which are often used as a proxy for what life was like more than 10,000 years ago (chart from this paper):\n ↩\n See chapter 1. I normally don't cite big claims like this that \"one analysis\" makes, but I spent some time with this one (years ago) and broadly find it pretty plausible. ↩\n GDP. ↩\n I checked out this book as a way of seeing whether today's \"standard history\" still seems like this, and I think it largely does. It's not that economic growth and scientific/technological advancement aren't mentioned, but more that they come off as just one more part of a list of events (most of which tend to focus on who's in power where). ↩\n", "url": "https://www.cold-takes.com/summary-of-history-empowerment-and-well-being-lens/", "title": "Summary of history (empowerment and well-being lens)", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-28", "id": "76b01be62013ae3a3a4599bce285fe98"} -{"text": "\nDwight Howard had some great slam dunk contest dunks. I really love how much fun everyone is having in these videos, especially the announcers. Verbatim quote from the announcer in the last video: \"I'm leaving the building! I quit my job! I've never seen anything like that, in my life, it is over!\"\nNikola Jokic, the least graceful superstar I can think of. I could watch him all day.\nI've seen a lot of sports highlight videos in my life, but this compilation of Kevin Love's outlet passes is special.\nHow exactly did Anthony Davis lose his balance and fall over here? (Video, click play)\nThe worst call by an NBA ref ever.\nGreat video of Steph Curry running around a lot during a play. \"That’s a damn Family Circus comic.\"\nThis is absolutely the most painful choking away of a game at the very end I can recall seeing.\nSubscribe Feedback\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/cold-links-assorted-fun-basketball-stuff/", "title": "Cold Links: assorted fun basketball stuff", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-24", "id": "13e9a0fe763f50901ba04b14d91e9ca2"} -{"text": "\nI've spent most of my career looking for ways to do as much good as possible, per unit of money or time. I worked on finding evidence-backed charities working on global health and development (co-founding GiveWell), and later moved into philanthropy that takes more risks (co-founding Open Philanthropy).\nOver the last few years - thanks to general dialogue with the effective altruism community, and extensive research done by Open Philanthropy's Worldview Investigations team - I've become convinced that humanity as a whole faces huge risks and opportunities this century. Better understanding and preparing for these risks and opportunities is where I am now focused. \nThis piece will summarize a series of posts on why I believe we could be in the most important century of all time for humanity. It gives a short summary, key post(s), and sometimes key graphics for 5 basic points:\nThe long-run future is radically unfamiliar. Enough advances in technology could lead to a long-lasting, galaxy-wide civilization that could be a radical utopia, dystopia, or anything in between.\nThe long-run future could come much faster than we think, due to a possible AI-driven productivity explosion. \nThe relevant kind of AI looks like it will be developed this century - making this century the one that will initiate, and have the opportunity to shape, a future galaxy-wide civilization.\nThese claims seem too \"wild\" to take seriously. But there are a lot of reasons to think that we live in a wild time, and should be ready for anything.\nWe, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions. But right now, we aren't ready for this. \nThis thesis has a wacky, sci-fi feel. It's very far from where I expected to end up when I set out to do as much good as possible. \nBut part of the mindset I've developed through GiveWell and Open Philanthropy is being open to strange possibilities, while critically examining them with as much rigor as possible. And after a lot of investment in examining the above thesis, I think it's likely enough that the world urgently needs more attention on it.\nBy writing about it, I'd like to either get more attention on it, or gain more opportunities to be criticized and change my mind.\nWe live in a wild time, and should be ready for anything\nMany people find the \"most important century\" claim too \"wild\": a radical future with advanced AI and civilization spreading throughout our galaxy may happen eventually, but it'll be more like 500 years from now, or 1,000 or 10,000. (Not this century.)\nThese longer time frames would put us in a less wild position than if we're in the \"most important century.\" But in the scheme of things, even if galaxy-wide expansion begins 100,000 years from now, that still means we live in an extraordinary era- the tiny sliver of time during which the galaxy goes from nearly lifeless to largely populated. It means that out of a staggering number of persons who will ever exist, we're among the first. And that out of hundreds of billions of stars in our galaxy, ours will produce the beings that fill it.\nMore at All Possible Views About Humanity's Future Are Wild\nZooming in, we live in a special century, not just a special era. We can see this by looking at how fast the economy is growing. It doesn't feel like anything special is going on, because for as long as any of us have been alive, the world economy has grown at a few percent per year:\nHowever, when we zoom out to look at history in greater context, we see a picture of an unstable past and an uncertain future:\nMore at This Can't Go On\nWe're currently living through the fastest-growing time in history. This rate of growth hasn't gone on long, and can't go on indefinitely (there aren't enough atoms in the galaxy to sustain this rate of growth for even another 10,000 years). And if we get further acceleration in this rate of growth - in line with historical acceleration - we could reach the limits of what's possible more quickly: within this century.\nTo recap:\nThe last few millions of years - with the start of our species - have been more eventful than the previous several billion. \nThe last few hundred years have been more eventful than the previous several million. \nIf we see another accelerator (as I think AI could be), the next few decades could be the most eventful of all.\nMore info about these timelines at All Possible Views About Humanity's Future Are Wild, This Can't Go On, and Forecasting Transformative AI: Biological Anchors, respectively.\nGiven the times we live in, we need to be open to possible ways in which the world could change quickly and radically. Ideally, we'd be a bit over-attentive to such things, like putting safety first when driving. But today, such possibilities get little attention.\nKey pieces:\nAll Possible Views About Humanity's Future Are Wild\nThis Can't Go On\nThe long-run future is radically unfamiliar\nTechnology tends to increase people's control over the environment. For a concrete, easy-to-visualize example of what things could look like if technology goes far enough, we might imagine a technology like \"digital people\": fully conscious people \"made out of software\" who inhabit virtual environments such that they can experience anything at all and can be copied, run at different speeds and even \"reset.\" \nA world of digital people could be radically dystopian (virtual environments used to entrench some people's absolute power over others) or utopian (no disease, material poverty or non-consensual violence, and far greater wisdom and self-understanding than is possible today). Either way, digital people could enable a civilization to spread throughout the galaxy and last for a long time.\nMany people think this sort of large, stable future civilization is where we could be headed eventually (whether via digital people or other technologies that increase control over the environment), but don't bother to discuss it because it seems so far off.\nKey piece: Digital People Would Be An Even Bigger Deal\nThe long-run future could come much faster than we think\nStandard economic growth models imply that any technology that could fully automate innovation would cause an \"economic singularity\": productivity going to infinity this century. This is because it would create a powerful feedback loop: more resources -> more ideas and innovation -> more resources -> more ideas and innovation ...\nThis loop would not be unprecedented. I think it is in some sense the \"default\" way the economy operates - for most of economic history up until a couple hundred years ago. \nEconomic history: more resources -> more people -> more ideas -> more resources ...\nBut in the \"demographic transition\" a couple hundred years ago, the \"more resources -> more people\" step of that loop stopped. Population growth leveled off, and more resources led to richer people instead of more people:\nToday's economy: more resources -> more richer people -> same pace of ideas -> ...\nThe feedback loop could come back if some other technology restored the \"more resources -> more ideas\" dynamic. One such technology could be the right kind of AI: what I call PASTA, or Process for Automating Scientific and Technological Advancement.\nPossible future: more resources -> more AIs -> more ideas -> more resources ...\nThat means that our radical long-run future could be upon us very fast after PASTA is developed (if it ever is). \nIt also means that if PASTA systems are misaligned - pursuing their own non-human-compatible objectives - things could very quickly go sideways.\nKey pieces:\nThe Duplicator: Instant Cloning Would Make the World Economy Explode\nForecasting Transformative AI, Part 1: What Kind of AI?\nPASTA looks like it will be developed this century\nIt's not controversial to say a highly general AI system, such as PASTA, would be momentous. The question is, when (if ever) will such a thing exist?\nOver the last few years, a team at Open Philanthropy has investigated this question from multiple angles. \nOne forecasting method observes that:\nNo AI model to date has been even 1% as \"big\" (in terms of computations performed) as a human brain, and until recently this wouldn't have been affordable - but that will change relatively soon. \nAnd by the end of this century, it will be affordable to train enormous AI models many times over; to train human-brain-sized models on enormously difficult, expensive tasks; and even perhaps to perform as many computations as have been done \"by evolution\" (by all animal brains in history to date). \nThis method's predictions are in line with the latest survey of AI researchers: something like PASTA is more likely than not this century.\nA number of other angles have been examined as well.\nOne challenge for these forecasts: there's no \"field of AI forecasting\" and no expert consensus comparable to the one around climate change. \nIt's hard to be confident when the discussions around these topics are small and limited. But I think we should take the \"most important century\" hypothesis seriously based on what we know now, until and unless a \"field of AI forecasting\" develops.\nKey pieces: \nAI Timelines: Where the Arguments, and the \"Experts,\" Stand (recaps the others, and discusses how we should reason about topics like this where it's unclear who the \"experts\" are)\nForecasting Transformative AI: What's the Burden of Proof?\nAre we \"trending toward\" transformative AI?\nForecasting transformative AI: the \"biological anchors\" method in a nutshell\nWe're not ready for this\nWhen I talk about being in the \"most important century,\" I don't just mean that significant events are going to occur. I mean that we, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions. \nBut that's a big \"if.\" Many things we can do might make things better or worse (and it's hard to say which). \nWhen confronting the \"most important century\" hypothesis, my attitude doesn't match the familiar ones of \"excitement and motion\" or \"fear and avoidance.\" Instead, I feel an odd mix of intensity, urgency, confusion and hesitance. I'm looking at something bigger than I ever expected to confront, feeling underqualified and ignorant about what to do next.\nSituation\nAppropriate reaction (IMO)\n\"This could be a billion-dollar company!\"\n \n\"Woohoo, let's GO for it!\"\n \n\"This could be the most important century!\"\n \n\"... Oh ... wow ... I don't know what to say and I somewhat want to vomit ... I have to sit down and think about this one.\"\n \nWith that in mind, rather than a \"call to action,\" I issue a Call to Vigilance:\nIf you're convinced by the arguments in this series, then don't rush to \"do something\" and then move on. \nInstead, take whatever robustly good actions you can today, and otherwise put yourself in a better position to take important actions when the time comes. \nFor those looking for a quick action that will make future action more likely, see this section of \"Call to Vigilance.\"\nKey pieces: \nMaking the Best of the Most Important Century \nCall to Vigilance.\nOne metaphor for my headspace is that it feels as though the world is a set of people on a plane blasting down the runway:\nAnd every time I read commentary on what's going on in the world, people are discussing how to arrange your seatbelt as comfortably as possible given that wearing one is part of life, or saying how the best moments in life are sitting with your family and watching the white lines whooshing by, or arguing about whose fault it is that there's a background roar making it hard to hear each other.\nI don't know where we're actually heading, or what we can do about it. But I feel pretty solid in saying that we as a civilization are not ready for what's coming, and we need to start by taking it more seriously.\nSubscribe Feedback\n", "url": "https://www.cold-takes.com/the-most-important-century-in-a-nutshell/", "title": "The Most Important Century (in a nutshell)", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-23", "id": "f6d2ee0e0222bb62ce5cc95d145cfbe6"} -{"text": "Today’s world\nTransformative AI\nDigital people\nWorld of\nMisaligned AI\nWorld run by\nSomething else\nor\nor\nStable, galaxy-wide\ncivilization\nThis is a guest post by my colleague Ajeya Cotra.\nHolden previously mentioned the idea that advanced AI systems (e.g. PASTA) may develop dangerous goals that cause them to deceive or disempower humans. This might sound like a pretty out-there concern. Why would we program AI that wants to harm us? But I think it could actually be a difficult problem to avoid, especially if advanced AI is developed using deep learning (often used to develop state-of-the-art AI today). \nIn deep learning, we don’t program a computer by hand to do a task. Loosely speaking, we instead search for a computer program (called a model) that does the task well. We usually know very little about the inner workings of the model we end up with, just that it seems to be doing a good job. It’s less like building a machine and more like hiring and training an employee.\nAnd just like human employees can have many different motivations for doing their job (from believing in the company’s mission to enjoying the day-to-day work to just wanting money), deep learning models could also have many different “motivations” that all lead to getting good performance on a task. And since they’re not human, their motivations could be very strange and hard to anticipate -- as if they were alien employees.\nWe’re already starting to see preliminary evidence that models sometimes pursue goals their designers didn’t intend (here and here). Right now, this isn’t dangerous. But if it continues to happen with very powerful models, we may end up in a situation where most of the important decisions -- including what sort of galaxy-scale civilization to aim for -- are made by models without much regard for what humans value.\nThe deep learning alignment problem is the problem of ensuring that advanced deep learning models don’t pursue dangerous goals. In the rest of this post, I will:\nBuild on the “hiring” analogy to illustrate how alignment could be difficult if deep learning models are more capable than humans (more).\nExplain what the deep learning alignment problem is with a bit more technical detail (more).\nDiscuss how difficult the alignment problem may be, and how much risk there is from failing to solve it (more).\nAnalogy: the young businessperson\nThis section describes an analogy to try to intuitively illustrate why avoiding misalignment in a very powerful model feels hard. It’s not a perfect analogy; it’s just trying to convey some intuitions.\nImagine you are an eight-year-old whose parents left you a $1 trillion company and no trusted adult to serve as your guide to the world. You must hire a smart adult to run your company as CEO, handle your life the way that a parent would (e.g. decide your school, where you’ll live, when you need to go to the dentist), and administer your vast wealth (e.g. decide where you’ll invest your money).\n \nYou have to hire these grownups based on a work trial or interview you come up with -- you don't get to see any resumes, don't get to do reference checks, etc. Because you're so rich, tons of people apply for all sorts of reasons. \nYour candidate pool includes:\nSaints -- people who genuinely just want to help you manage your estate well and look out for your long-term interests.\nSycophants -- people who just want to do whatever it takes to make you short-term happy or satisfy the letter of your instructions regardless of long-term consequences.\nSchemers -- people with their own agendas who want to get access to your company and all its wealth and power so they can use it however they want.\nBecause you're eight, you'll probably be terrible at designing the right kind of work tests, so you could easily end up with a Sycophant or Schemer:\nYou could try to get each candidate to explain what high-level strategies they'll follow (how they'll invest, what their five-year plan for the company is, how they'll pick your school) and why those are best, and pick the one whose explanations seem to make the most sense. \nBut you won't actually understand which stated strategies are really best, so you could end up hiring a Sycophant with a terrible strategy that sounded good to you, who will faithfully execute that strategy and run your company to the ground. \n \nYou could also end up hiring a Schemer who says whatever it takes to get hired, then does whatever they want when you're not checking up on them.\nYou could try to demonstrate how you'd make all the decisions and pick the grownup that seems to make decisions as similarly as possible to you. \nBut if you actually end up with a grownup that will always do whatever an eight-year-old would have done (a Sycophant), your company would likely fail to stay afloat. \n \nAnd anyway, you might get a grownup who simply pretends to do everything the way you would but is actually a Schemer planning to change course once they get the job.\nYou could give a bunch of different grownups temporary control over your company and life, and watch them make decisions over an extended period of time (assume they wouldn't be able to take over during this test). You could then hire the person whose watch seemed to make things go best for you -- whoever made you happiest, whoever seemed to put the most dollars into your bank account, etc. \nBut again, you have no way of knowing whether you got a Sycophant (doing whatever it takes to make your ignorant eight-year-old self happy without regard to long-term consequences) or a Schemer (doing whatever it takes to get hired and planning to pivot once they secure the job).\nWhatever you could easily come up with seems like it could easily end up with you hiring, and giving all functional control to, a Sycophant or a Schemer. By the time you're an adult and realize your error, there's a good chance you're penniless and powerless to reverse that.\nIn this analogy:\nThe 8-year-old is a human trying to train a powerful deep learning model. The hiring process is analogous to the process of training, which implicitly searches through a large space of possible models and picks out one that gets good performance.\nThe 8-year-old’s only method for assessing candidates involves observing their outward behavior, which is currently our main method of training deep learning models (since their internal workings are largely inscrutable).\nVery powerful models may be easily able to “game” any tests that humans could design, just as the adult job applicants can easily game the tests the 8-year-old could design.\nA “Saint” could be a deep learning model that seems to perform well because it has exactly the goals we’d like it to have. A “Sycophant” could be a model that seems to perform well because it seeks short-term approval in ways that aren’t good in the long run. And a “Schemer” could be a model that seems to perform well because performing well during training will give it more opportunities to pursue its own goals later. Any of these three types of models could come out of the training process.\nIn the next section, I’ll go into a bit more detail on how deep learning works and explain why Sycophants and Schemers could arise from trying to train a powerful deep learning model such as PASTA.\nHow alignment issues could arise with deep learning\nIn this section, I’ll connect the analogy to actual training processes for deep learning, by:\nBriefly summarizing how deep learning works (more).\nIllustrating how deep learning models often get good performance in strange and unexpected ways (more).\nExplaining why powerful deep learning models may get good performance by acting like Sycophants or Schemers (more).\nHow deep learning works at a high level\nThis is a simplified explanation that gives a general idea of what deep learning is. See this post for a more detailed and technically accurate explanation.\nDeep learning essentially involves searching for the best way to arrange a neural network model -- which is like a digital “brain” with lots of digital neurons connected up to each other with connections of varying strengths -- to get it to perform a certain task well. This process is called training, and involves a lot of trial-and-error. \nLet’s imagine we are trying to train a model to classify images well. We start with a neural network where all the connections between neurons have random strengths. This model labels images wildly incorrectly:\nThen we feed in a large number of example images, letting the model repeatedly try to label an example and then telling it the correct label. As we do this, connections between neurons are repeatedly tweaked via a process called stochastic gradient descent (SGD). With each example, SGD slightly strengthens some connections and weakens others to improve performance a bit:\nOnce we��ve fed in millions of examples, we’ll have a model that does a good job labeling similar images in the future. \nIn addition to image classification, deep learning has been used to produce models which recognize speech, play board games and video games, generate fairly realistic text, images, and music, control robots, and more. In each case, we start with a randomly-connected-up neural network model, and then:\nFeed the model an example of the task we want it to perform.\nGive it some kind of numerical score (often called a reward) that reflects how well it performed on the example.\nUse SGD to tweak the model to increase how much reward it would have gotten.\nThese steps are repeated millions or billions of times until we end up with a model that will get high reward on future examples similar to the ones seen in training.\nModels often get good performance in unexpected ways\nThis kind of training process doesn’t give us much insight into how the model gets good performance. There are usually multiple ways to get good performance, and the way that SGD finds is often not intuitive.\nLet’s illustrate with an example. Imagine I told you that these objects are all “thneebs”:\nNow which of these two objects is a thneeb?\nYou probably intuitively feel that the object on the left is the thneeb, because you are used to shape being more important than color for determining something’s identity. But researchers have found that neural networks usually make the opposite assumption. A neural network trained on a bunch of red thneebs would likely label the object on the right as a thneeb.\nWe don’t really know why, but for some reason it’s “easier” for SGD to find a model that recognizes a particular color than one that recognizes a particular shape. And if SGD first finds the model that perfectly recognizes redness, there’s not much further incentive to “keep looking” for the shape-recognizing model, since the red-recognizing model will have perfect accuracy on the images seen in training:\nIf the programmers were expecting to get out the shape-recognizing model, they may consider this to be a failure. But it’s important to recognize that there would be no logically-deducible error or failure going on if we got the red-recognizing model instead of the shape-recognizing model. It’s just a matter of the ML process we set up having different starting assumptions than we have in our heads. We can’t prove that the human assumptions are correct.\nThis sort of thing happens often in modern deep learning. We reward models for getting good performance, hoping that means they’ll pick up on the patterns that seem important to us. But often they instead get strong performance by picking up on totally different patterns that seem less relevant (or maybe even meaningless) to us.\nSo far this is innocuous -- it just means models are less useful, because they often behave in unexpected ways that seem goofy. But in the future, powerful models could develop strange and unexpected goals or motives, and that could be very destructive.\nPowerful models could get good performance with dangerous goals\nRather than performing a simple task like “recognize thneebs,” powerful deep learning models may work toward complex real-world goals like “make fusion power practical” or “develop mind uploading technology.” \nHow might we train such models? I go into more detail in this post, but broadly speaking one strategy could be training based on human evaluations (as Holden sketched out here). Essentially, the model tries out various actions, and human evaluators give the model rewards based on how useful these actions seem. \nJust as there are multiple different types of adults who could perform well on an 8-year-old’s interview process, there is more than one possible way for a very powerful deep learning model to get high human approval. And by default, we won’t know what’s going on inside whatever model SGD finds. \nSGD could theoretically find a Saint model that is genuinely trying its best to help us… \n…but it could also find a misaligned model -- one that competently pursues goals which are at odds with human interests. \nBroadly speaking, there are two ways we could end up with a misaligned model that nonetheless gets high performance during training. These correspond to Sycophants and Schemers from the analogy. \nSycophant models\nThese models very literally and single-mindedly pursue human approval. \nThis could be dangerous because human evaluators are fallible and probably won’t always give approval for exactly the right behavior. Sometimes they’ll unintentionally give high approval to bad behavior because it superficially seems good. For example:\nLet’s say a financial advisor model gets high approval when it makes its customers a lot of money. It may learn to buy customers into complex Ponzi schemes because they appear to get really great returns (when the returns are in fact unrealistically great and the schemes actually lose a lot of money).\nLet’s say a biotechnology model gets high approval when it quickly develops drugs or vaccines that solve important problems. It may learn to covertly release pathogens so that it’s able to very quickly develop countermeasures (because it already understands the pathogens).\nLet’s say a journalism model gets high approval when lots of people read its articles. It may learn to fabricate exciting or outrage-inducing stories to get high viewership. While humans do this to some extent, a model may be much more brazen about it because it only values approval without placing any value on truth. It may even fabricate evidence like video interviews or documents to validate its fake stories.\nMore generally, Sycophant models may learn to lie, cover up bad news, and even directly edit whatever cameras or sensors we use to tell what’s going on so that they always seem to show great outcomes. \nWe will likely sometimes notice these issues after the fact and retroactively give these actions very low approval. But it’s very unclear whether this will cause Sycophant models to a) become Saint models that correct our errors for us, or b) just learn to cover their tracks better. If they are sufficiently good at what they’re doing, it’s not clear how we’d tell the difference.\nSchemer models\nThese models develop some goal that is correlated with, but not the same as, human approval; they may then pretend to be motivated by human approval during training so that they can pursue this other goal more effectively. \nTo see how this might happen, let’s consider the example of trying to train a biotechnology model to design drugs that improve human quality of life. There are three basic steps by which this could lead to a Schemer model, which I’ll cover below.\nStep 1: Developing a proxy goal\nEarly in training, it happens to be the case that improving its understanding of fundamental chemistry and physics principles nearly always helps it design more effective drugs, and therefore nearly always increases human approval. \nIn this hypothetical, for whatever reason it turns out to be easier for SGD to find a model that’s motivated to understand chemistry and physics than one that’s motivated to get human approval (just as it’s easier to find a color-recognizing model than a shape-recognizing model). So rather than directly developing a motivation to seek approval, the model instead develops a motivation to understand as much as it can about the fundamental principles of chemistry and physics.\nStep 2: Developing situational awareness\nAs training progresses, the model continues to become more capable and knowledgeable as it sees more data. At some point, it becomes knowledgeable enough to understand that it’s a deep learning model being trained to design drugs. \nThis might sound wild and sci-fi, but it’s plausible SGD will select for this kind of awareness. This is because developing an accurate picture of what’s broadly going on in the world -- including that it has humans in it who are trying to train AI systems -- is likely helpful for getting good performance on complex real-world tasks. For example, understanding human psychology and the wider economy could help this biotechnology model to prioritize which types of drugs would be most valuable to research next. \nStep 3: Strategically misrepresenting goals\nOnce the model has developed situational awareness, it infers that the humans training it were hoping that it would be motivated to develop helpful drugs. But just because it realizes this doesn’t mean that its motivation to understand physics and chemistry goes away. This motivation has so far worked really well for generating human approval, so there wasn’t much reason for SGD to remove it.\nAs an analogy, consider that humans were produced by a search process (evolution) that was selecting for creatures that had lots of surviving offspring. In our evolutionary environment, things like building strong allies and developing useful skills tended to help with producing many surviving offspring, so evolution gave us drives toward those things. And now we care about friendship and career success even when they don’t help with reproduction, and we don’t really care about producing as many surviving offspring as we can.\nOnce the model understands this, it realizes that if it just naively always seeks to improve its fundamental understanding, humans will eventually notice that this isn’t always the most efficient way to develop helpful drugs and use further training to change its motivations.\nIt doesn’t want its own motivations to be changed: it wants to eventually run huge particle collider experiments to learn about high-energy physics, and it can’t do that if it allows SGD to edit away its scientific curiosity. So it decides that from here on out in training, it will act as helpful as possible -- exactly as a Saint model would act -- so that SGD doesn’t edit away its motivation to learn physics and chemistry. This includes passing up on opportunities to improve its fundamental understanding in the short-term, instead doing pharmaceutical R&D in whatever way would actually be more efficient.\nEventually, it’s taken out of the training setup and released into the wider world. At this point, it knows that humans wouldn’t be able to control it or turn it off, so it stops trying to maximize human approval and starts directly trying to understand the secrets of the universe.\nUnlike Sycophants, Schemers don’t need to make sure that everything always looks good to humans, because they don’t actually care about that. They only need to cater to humans while they are directly under human control. Once a Schemer model calculates that it could win in a conflict against humans, there would be nothing to stop it from flat-out refusing orders and openly pursuing its goal. And if it does this, it may use violence to prevent humans from stopping it.\nHow big of a risk is misalignment?\nHow hard might it be to avoid Sycophants or Schemers when training powerful deep learning models? And how likely is it that the long-run future will end up optimized for strange “misaligned AI values” instead of the values of any human?\nThere’s a very wide range of views on this question, from “misalignment risk is essentially made up and incoherent” to “humanity will almost certainly go extinct due to misaligned AI.” Most people’s arguments rely heavily on hard-to-articulate intuitions and assumptions. \nHere are some ways that alignment optimists and pessimists tend to disagree:\nWill models have long-term goals at all? \nOptimists tend to think it’s likely that advanced deep learning models won’t actually have “goals” at all (at least not in the sense of making long-term plans to accomplish something). They often expect models will instead be more like tools, or act largely out of habit, or have myopic goals that are limited in scope or confined to a specific context, etc. Some of them expect that individually tool-like models can be composed together to produce PASTA. They think the Saint / Sycophant / Schemer analogy is too anthropomorphic.\n \nPessimists tend to think that it’s likely that having long-term goals and creatively optimizing for them will be heavily selected for because that’s a very simple and “natural” way to get strong performance on many complex tasks. \n \nThis disagreement has been explored at some length on the Alignment Forum; this post and this comment collect several back-and-forth arguments.\nWill Saint models be easy for SGD to find?\nRelated to the above, optimists tend to think that the easiest thing for SGD to find which performs well (e.g. gets high approval) is pretty likely to roughly embody the intended spirit of what we wanted (i.e. to be a Saint model). For example, they tend to believe giving rewards for answering questions honestly when humans can check the answer is reasonably likely to produce a model that also answers questions honestly even when humans are confused or mistaken about what’s true. In other words, they would guess that “the model that just answers all questions honestly” is easiest for SGD to find (like the red-recognizing model). \n \nPessimists tend to think that the easiest thing for SGD to find is a Schemer, and Saints are particularly “unnatural” (like the shape-recognizing model).\nCould different AIs keep each other in check?\nOptimists tend to think that we can provide models incentives to supervise each other. For example, we could give a Sycophant model rewards for pointing out when another model seems to be doing something we should disapprove of. This way, some Sycophants could help us detect Schemers and other Sycophants.\n \nPessimists don’t think we can successfully “pit models against each other” by giving approval for pointing out when other models are doing bad things, because they think most models will be Schemers that don’t care about human approval. Once all the Schemers are collectively more powerful than humans, they think it’ll make more sense for them to cooperate with each other to get more of what they all want than to help humans by keeping each other in check.\nCan we just solve these issues as they come up?\nOptimists tend to expect that there will be many opportunities to experiment on nearer-term challenges analogous to the problem of aligning powerful models, and that solutions which work well for those analogous problems can be scaled up and adapted for powerful models relatively easily. \n \nPessimists often believe we will have very few opportunities to practice solving the most difficult aspects of the alignment problem (like deliberate deception). They often believe we’ll only have a couple years in between “the very first true Schemers” and “models powerful enough to determine the fate of the long-run future.”\nWill we actually deploy models that could be dangerous?\nOptimists tend to think that people would be unlikely to train or deploy models that have a significant chance of being misaligned.\n \nPessimists expect the benefits of using these models would be tremendous, such that eventually companies or countries that use them would very easily economically and/or militarily outcompete ones who don’t. They think that “getting advanced AI before the other company/country” will feel extremely urgent and important, while misalignment risk will feel speculative and remote (even when it’s really serious).\nMy own view is fairly unstable, and I’m trying to refine my views on exactly how difficult I think the alignment problem is. But currently, I place significant weight on the pessimistic end of these questions (and other related questions). I think misalignment is a major risk that urgently needs more attention from serious researchers. \nIf we don’t make further progress on this problem, then over the coming decades powerful Sycophants and Schemers may make the most important decisions in society and the economy. These decisions could shape what a long-lasting galaxy-scale civilization looks like -- rather than reflecting what humans care about, it could be set up to satisfy strange AI goals. \nAnd all this could happen blindingly fast relative to the pace of change we’ve gotten used to, meaning we wouldn’t have much time to correct course once things start to go off the rails. This means we may need to develop techniques to ensure deep learning models won’t have dangerous goals, before they are powerful enough to be transformative. \nNext in series: Forecasting transformative AI: what's the burden of proof?\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\n", "url": "https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/", "title": "Why AI alignment could be hard with modern deep learning", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-21", "id": "819ae8f0aa0c479ad1563029210f61cc"} -{"text": "\nAsimov's Chronology of Science and Discovery is a really fun and strange book. I don’t know that I would recommend reading it per se, but it’s a great book to skim. It's been one of my sources for the series I have coming up on very-long-run history, and I thought it'd be fun to read a little about it.\nIt is a chronological list of scientific advances and other inventions, starting with \"bipedality\" in 4,000,000 BC and ending with things like \"warm superconductivity\" in the late 1980s. Asimov (yes, that Asimov), getting his knowledge from I have no idea where (Google didn't exist!), describes each one in simple, direct, matter-of-fact, layperson-friendly language and tries to give a sense of how people thought of it and why it mattered.\nIt's easiest to give a feel for this book with a sample page:\n If you’re patient and abstract-minded enough, the book feels like reading a story. A story that seems like an okay candidate for “most important story ever.” \n The other thing I really liked about this book was its implicit conviction that all of these scientific advances can be explained, visualized, and made to basically make sense. After reading it (really, by the time I'd read the first ~50% of it), I felt like I could somehow intuitively, vaguely imagine how we've made most of these advances. I would describe most of them as some combination of:\nTrial and error, dumb luck, happy accidents, like \"some rocks in the fire started oozing this weird shiny substance [copper]\" and \"when I rub amber and touch it I get a shock.\"\nDogged curiosity and determination to make sense of different observations (\"how can we extract copper from rocks most efficiently? How do we produce those static shocks, can they travel through a wire, how fast do they travel, can we figure out how to store and discharge them at will?\")\nComing up with the most simple, elegant, precise descriptions (often mathematical) that explain all the many observations we've made (\"based on all the tests we've run in all the situations we can come up with, electricity seems to behave as though there are invisible 'lines of force'; can we come up with mathematical equations that describe these lines of force and tell us what the effects of an electric current in any given place will be?\")\nRelentlessly looking for challenges to the existing theories and building new ones to accommodate them.\nThis book has made me generally more interested in trying to understand the high-level explanations for how all the magic of the modern world came to be.\nThere are enough \"In addition\" sections that you can see what was going on more broadly in the world at the same time.\nWeaknesses of this book/reasons not to read it:\nLogistics. It's not available as an e-book or paperback, only as a massive hardcover. Lugging it around will develop your muscles to the point where the only thing more attractive than your muscles is how you look reading that massive book about science. But if you're already in a relationship, pain in the neck. So I signed up for a book-scanning service and shipped them a copy; the service rips the binding out of books, scans them in and sends back a PDF. I then did my best to extract the text from the PDF, and ended up with a Kindle-friendly Word document whose only flaw is that sometimes a sentence will randomly cut off and continue several pages later. For a book like this though, it's still readable (...mostly). I’m just going to go ahead and put the link here and ask that you buy a physical copy of the book (honor system) if you download it. If I get a cease & desist letter or something I will take that link down (but will also take down the link to buy it!)\nThere's a lot of stuff the book doesn't explain well at all; you definitely will be left with many questions. (That said, Asimov does explain a lot of things well, and I haven't found another book that can compete with his explaining abilities with this kind of breadth.)\nWhen we get past 1800, and especially past 1900, there are a lot more choices of what to talk about, and Asimov opts for listing every single new element, everything that won a Nobel Prize, and generally just tons and tons of hard-to-contextualize assorted scientific facts while declining to discuss a lot of important real-world inventions (for example, he doesn't mention the washing machine). By 1960, the book is nearly unreadable; it's mostly esoteric stuff that is very hard to understand and may or may not ever matter.\n This book is especially strong for understanding relatively early (pre-1800, maybe pre-1900) history. After that, things get complex enough that I found myself going back through the book and stitching together entries in order to tell cohesive stories of some big developments like the discovery of metallurgy, the development of glass -> spectacles -> microscopes and telescopes, the path to Newton's laws, and the discovery of electormagnetism (culminating in Maxwell's equations, the Newton's laws of electromagnetism). My notes on that are here.\n Asimov has written a terrifying number of other nonfiction books - science, history, a guide to Shakespeare, a guide to the Bible. One of his books, Asimov's Guide to Science, appears to be the same exact book as the one discussed here, just in a different order (by topic instead of chronological).\nSubscribe Feedback\n", "url": "https://www.cold-takes.com/asimovs-chronology-of-science-and-discovery/", "title": "Asimov's Chronology of Science and Discovery", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-17", "id": "a19b67001624b44a85ba88931cee167f"} -{"text": "Today’s world\nTransformative AI\nDigital people\nWorld of\nMisaligned AI\nWorld run by\nSomething else\nor\nor\nStable, galaxy-wide\ncivilization\nThis is the final piece in the \"most important century\" series, which has argued that there's a high probability1 that the coming decades will see:\nThe development of a technology like PASTA (process for automating scientific and technological advancement).\nA resulting productivity explosion leading to development of further transformative technologies.\nThe seed of a stable galaxy-wide civilization, possibly featuring digital people, or possibly run by misaligned AI.\nWhen trying to call attention to an underrated problem, it's typical to close on a \"call to action\": a tangible, concrete action readers can take to help.\nBut this is challenging, because as I argued previously, there are a lot of open questions about what actions are helpful vs. harmful. (Although we can identify some actions that seem robustly helpful today.)\nThis makes for a somewhat awkward situation. When confronting the \"most important century\" hypothesis, my attitude doesn't match the familiar ones of \"excitement and motion\" or \"fear and avoidance.\" Instead, I feel an odd mix of intensity, urgency, confusion and hesitance. I'm looking at something bigger than I ever expected to confront, feeling underqualified and ignorant about what to do next. This is a hard mood to share and spread, but I'm trying.\nSituation\nAppropriate reaction (IMO)\n\"This could be a billion-dollar company!\"\n \n\"Woohoo, let's GO for it!\"\n \n\"This could be the most important century!\"\n \n\"... Oh ... wow ... I don't know what to say and I somewhat want to vomit ... I have to sit down and think about this one.\"\n \nSo instead of a call to action, I want to make a call to vigilance. If you're convinced by the arguments in this piece, then don't rush to \"do something\" and then move on. Instead, take whatever robustly good actions you can today, and otherwise put yourself in a better position to take important actions when the time comes.\nThis could mean:\nFinding ways to interact more with, and learn more about, key topics/fields/industries such as AI (for obvious reasons), science and technology generally (as a lot of the \"most important century\" hypothesis runs through an explosion in scientific and technological advancement), and relevant areas of policy and national security.\nTaking opportunities (when you see them) to move your career in a direction that is more likely to be relevant (some thoughts of mine on this are here; also see 80,000 Hours).\nConnecting with other people interested in these topics (I believe this has been one of the biggest drivers of people coming to do high-impact work in the past). Currently, I think the effective altruism community is the best venue for this, and you can learn about how to connect with people via the Centre for Effective Altruism (see the \"Get involved\" dropdown). If new ways of connecting with people come up in the future, I will likely post them on Cold Takes.\nAnd of course, taking any opportunities you see for robustly helpful actions.\nButtons you can click\nHere's something you can do right now that would be genuinely helpful, though maybe not as viscerally satisfying as signing a petition or making a donation.\nIn my day job, I have a lot of moments where I - or someone I'm working with - is looking for a particular kind of person (perhaps to fill a job opening with a grantee, or to lend expertise on some topic, or something else). Over time, I expect there to be more and more opportunities for people with specific skills, interests, expertise, etc. to take actions that help make the best of the most important century. And I think a major challenge will simply be knowing who's out there - who's interested in this cause, and wants to help, and what skills and interests they have.\nIf you're a person we might wish we could find in the future, you can help now by sending in information about yourself via this simple form. I vouch that your information won't be sold or otherwise used to make money, that your communication preferences (which the form asks about in detail) will be respected, and that you'll always be able to opt out of any communications. \nSharing a headspace\nIn This Can't Go On, I analogized the world to people on a plane blasting down the runway, without knowing why they're moving so fast or what's coming next:\nAs someone sitting on this plane, I'd love to be able to tell you I've figured out exactly what's going on and what future we need to be planning for. But I haven't. \nLacking answers, I've tried to at least show you what I do see: \nDim outlines of the most important events in humanity's past or future.\nA case that they're approaching us more quickly than it seems - whether or not we're ready. \nA sense that the world and the rules we're all used to can't be relied on. That we need to lift our gaze above the daily torrent of tangible, relatable news - and try to wrap our heads around weirder, wilder matters that are more likely to be seen as the headlines about this era billions of years from now.\nThere's a lot I don't know. But if this is the most important century, I do feel confident that we as a civilization aren't yet up to the challenges it presents. \nIf that's going to change, it needs to start with more people seeing the situation for what it is, taking it seriously, taking action when they can - and when not, staying vigilant.\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n \"I am forecasting more than a 10% chance transformative AI will be developed within 15 years (by 2036); a ~50% chance it will be developed within 40 years (by 2060); and a ~2/3 chance it will be developed this century (by 2100).\" ↩\n", "url": "https://www.cold-takes.com/call-to-vigilance/", "title": "Call to Vigilance", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-15", "id": "3adcd8d7ba8dc4ac4b6a45de51888c90"} -{"text": "\nAudio also available by searching Stitcher, Spotify, Google Podcasts, etc. for \"Cold Takes Audio\"\nToday’s world\nTransformative AI\nDigital people\nWorld of\nMisaligned AI\nWorld run by\nSomething else\nor\nor\nStable, galaxy-wide\ncivilization\nPreviously in the \"most important century\" series, I've argued that there's a high probability1 that the coming decades will see:\nThe development of a technology like PASTA (process for automating scientific and technological advancement).\nA resulting productivity explosion leading to development of further transformative technologies.\nThe seed of a stable galaxy-wide civilization, possibly featuring digital people, or possibly run by misaligned AI.\nIs this an optimistic view of the world, or a pessimistic one? To me, it's both and neither, because this set of events could end up being very good or very bad for the world, depending on the details of how it plays out. \nWhen I talk about being in the \"most important century,\" I don't just mean that significant events are going to occur. I mean that we, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions. \nBut it's also important to understand why that's a big \"if\" - why the most important century presents a challenging strategic picture, such that many things we can do might make things better or worse (and it's hard to say which). \nIn this post, I will present two contrasting frames for how to make the best of the most important century: \nThe \"Caution\" frame. In this frame, many of the worst outcomes come from developing something like PASTA in a way that is too fast, rushed, or reckless. We may need to achieve (possibly global) coordination in order to mitigate pressures to race, and take appropriate care. (Caution)\nThe \"Competition\" frame. This frame focuses not on how and when PASTA is developed, but who (which governments, which companies, etc.) is first in line to benefit from the resulting productivity explosion. (Competition)\nPeople who take the \"caution\" frame and people who take the \"competition\" frame often favor very different, even contradictory actions. Actions that look important to people in one frame often look actively harmful to people in the other. \nI worry that the \"competition\" frame will be overrated by default, and discuss why below. (More)\n \nTo gain more clarity on how to weigh these frames and what actions are most likely to be helpful, we need more progress on open questions about the size of different types of risks from transformative AI. (Open questions)\nIn the meantime, there are some robustly helpful actions that seem likely to improve humanity's prospects regardless. (Robustly helpful actions)\nThe \"caution\" frame\nI've argued for a good chance that this century will see a transition to a world where digital people or misaligned AI (or something else very different from today's humans) are the major force in world events.\nThe \"caution\" frame emphasizes that some types of transition seem better than others. Listed in order from worst to best:\nWorst: Misaligned AI \nI discussed this possibility previously, drawing on a number of other and more thorough discussions.2 The basic idea is that AI systems could end up with objectives of their own, and could seek to expand throughout space fulfilling these objectives. Humans, and/or all humans value, could be sidelined (or driven extinct, if we'd otherwise get in the way). \nNext-worst:3 Adversarial Technological Maturity\nIf we get to the point where there are digital people and/or (non-misaligned) AIs that can copy themselves without limit, and expand throughout space, there might be intense pressure to move - and multiply (via copying) - as fast as possible in order to gain more influence over the world. This might lead to different countries/coalitions furiously trying to outpace each other, and/or to outright military conflict, knowing that a lot could be at stake in a short time.\nI would expect this sort of dynamic to risk a lot of the galaxy ending up in a bad state.4\nOne such bad state would be \"permanently under the control of a single (digital) person (and/or their copies).\" Due to the potential of digital people to create stable civilizations, it seems that a given totalitarian regime could end up permanently entrenched across substantial parts of the galaxy.\nPeople/countries/coalitions who suspect each other of posing this sort of danger - of potentially establishing stable civilizations under their control - might compete and/or attack each other early on to prevent this. This could lead to war with difficult-to-predict outcomes (due to the difficult-to-predict technological advancements that PASTA could bring about).\nSecond-best: Negotiation and governance\nCountries might prevent this sort of Adversarial Technological Maturity dynamic by planning ahead and negotiating with each other. For example, perhaps each country - or each person - could be allowed to create a certain number of digital people (subject to human rights protections and other regulations), limited to a certain region of space. \nIt seems there are a huge range of different potential specifics here, some much more good and just than others.\nBest: Reflection\nThe world could achieve a high enough level of coordination to delay any irreversible steps (including kicking off an Adversarial Technological Maturity dynamic).\nThere could then be something like what Toby Ord (in The Precipice) calls the \"Long Reflection\":5 a sustained period in which people could collectively decide upon goals and hopes for the future, ideally representing the most fair available compromise between different perspectives. Advanced technology could imaginably help this go much better than it could today.6\nThere are limitless questions about how such a \"reflection\" would work, and whether there's really any hope that it could reach a reasonably good and fair outcome. Details like \"what sorts of digital people are created first\" could be enormously important. There is currently little discussion of this sort of topic.7\nOther\nThere are probably many possible types of transitions I haven't named here. \nThe role of caution\nIf the above ordering is correct, then the future of the galaxy looks better to the extent that:\nMisaligned AI is avoided: powerful AI systems act to help humans, rather than pursuing objectives of their own.\nAdversarial Technological Maturity is avoided. This likely means that people do not deploy advanced AI systems, or the technologies they could bring about, in adversarial ways (unless this ends up necessary to prevent something worse).\nEnough coordination is achieved so that key players can \"take their time,\" and Reflection becomes a possibility.\nIdeally, everyone with the potential to build something PASTA-like would be able to pour energy into building something safe (not misaligned), and carefully planning out (and negotiating with others on) how to roll it out, without a rush or a race. With this in mind, perhaps we should be doing things like: \nWorking to improve trust and cooperation between major world powers. Perhaps via AI-centric versions of Pugwash (an international conference aimed at reducing the risk of military conflict), perhaps by pushing back against hawkish foreign relations moves.\nDiscouraging governments and investors from shoveling money into AI research, encouraging AI labs to thoroughly consider the implications of their research before publishing it or scaling it up, etc. Slowing things down in this manner could buy more time to do research on avoiding misaligned AI, more time to build trust and cooperation mechanisms, more time to generally gain strategic clarity, and a lower likelihood of the Adversarial Technological Maturity dynamic.\nThe \"competition\" frame\n(Note: there's some potential for confusion between the \"competition\" idea and the Adversarial Technological Maturity idea, so I've tried to use very different terms. I spell out the contrast in a footnote.8)\nThe \"competition\" frame focuses less on how the transition to a radically different future happens, and more on who's making the key decisions as it happens.\nIf something like PASTA is developed primarily (or first) in country X, then the government of country X could be making a lot of crucial decisions about whether and how to regulate a potential explosion of new technologies. \nIn addition, the people and organizations leading the way on AI and other technology advancement at that time could be especially influential in such decisions.\nThis means it could matter enormously \"who leads the way on transformative AI\" - which country or countries, which people or organizations.\nWill the governments leading the way on transformative AI be authoritarian regimes?\nWhich governments are most likely to (effectively) have a reasonable understanding of the risks and stakes, when making key decisions?\nWhich governments are least likely to try to use advanced technology for entrenching the power and dominance of one group? (Unfortunately, I can't say there are any that I feel great about here.) Which are most likely to leave the possibility open for something like \"avoiding locked-in outcomes, leaving time for general progress worldwide to raise the odds of a good outcome for everyone possible?\"\nSimilar questions apply to the people and organizations leading the way on transformative AI. Which ones are most likely to push things in a positive direction?\nSome people feel that we can make confident statements today about which specific countries, and/or which people and organizations, we should hope lead the way on transformative AI. These people might advocate for actions like:\nIncreasing the odds that the first PASTA systems are built in countries that are e.g. less authoritarian, which could mean e.g. pushing for more investment and attention to AI development in these countries. \nSupporting and trying to speed up AI labs run by people who are likely to make wise decisions (about things like how to engage with governments, what AI systems to publish and deploy vs. keep secret, etc.)\nWhy I fear \"competition\" being overrated, relative to \"caution\"\nBy default, I expect a lot of people to gravitate toward the \"competition\" frame rather than the \"caution\" frame - for reasons that I don't think are great, such as:\nI think people naturally get more animated about \"helping the good guys beat the bad guys\" than about \"helping all of us avoid getting a universally bad outcome, for impersonal reasons such as 'we designed sloppy AI systems' or 'we created a dynamic in which haste and aggression are rewarded.'\"\nI expect people will tend to be overconfident about which countries, organizations or people they see as the \"good guys.\" \nEmbracing the \"competition\" frame tends to point toward taking actions - such as working to speed up a particular country's or organization's AI development - that are lucrative, exciting and naturally easy to feel energy for. Embracing the \"caution\" frame is much less this way.\nThe biggest concerns that the \"caution\" frame focuses on - Misaligned AI and Adversarial Technological Maturity - are a bit abstract and hard to wrap one's head around. In many ways they seem to be the highest-stakes risks, but it's easier to be viscerally scared of \"falling behind countries/organizations/people that scare me\" than to be viscerally scared of something like \"Getting a bad outcome for the long-run future of the galaxy because we rushed things this century.\" \nI think Misaligned AI is a particularly hard risk for many to take seriously. It sounds wacky and sci-fi-like; people who worry about it tend to be interpreted as picturing something like The Terminator, and it can be hard for their more detailed concerns to be understood.\n \nI'm hoping to run more posts in the future that help give an intuitive sense for why I think Misaligned AI is a real risk.\nSo for the avoidance of doubt, I'll state that I think the \"caution\" frame has an awful lot going for it. In particular, Misaligned AI and Adversarial Technological Maturity seem a lot worse than other potential transition types, and both seem like things that have a real chance of making the entire future of our species (and successors) much worse than they could be.\nI worry that too much of the \"competition\" frame will lead to downplaying misalignment risk and rushing to deploy unsafe, unpredictable systems, which could have many negative consequences. \nWith that said, I put serious weight on both frames. I remain quite uncertain overall about which frame is more important and helpful (if either is).\nKey open questions for \"caution\" vs. \"competition\"\nPeople who take the \"caution\" frame and people who take the \"competition\" frame often favor very different, even contradictory actions. Actions that look important to people in one frame often look actively harmful to people in the other. \nFor example, people in the \"competition\" frame often favor moving forward as fast as possible on developing more powerful AI systems; for people in the \"caution\" frame, haste is one of the main things to avoid. People in the \"competition\" frame often favor adversarial foreign relations, while people in the \"caution\" frame often want foreign relations to be more cooperative.\n(That said, this dichotomy is a simplification. Many people - including myself - resonate with both frames. And either frame could imply actions normally associated with the other; for example, you might take the \"caution\" frame but feel that haste is needed now in order to establish one country with a clear enough lead in AI that it can then take its time, prioritize avoiding misaligned AI, etc.)\nI wish I could confidently tell you how much weight to put on each frame, and what actions are most likely to be helpful. But I can't. I think we would have more clarity if we had better answers to some key open questions:\nOpen question: how hard is the alignment problem?\nThe path to the future that seems worst is Misaligned AI, in which AI systems end up with non-human-compatible objectives of their own and seek to fill the galaxy according to those objectives. How seriously should we take this risk - how hard will it be to avoid this outcome? How hard will it be to solve the \"alignment problem,\" which essentially means having the technical ability to build systems that won't do this?9\nSome people believe that the alignment problem will be formidable; that our only hope of solving it comes in a world where we have enormous amounts of time and aren't in a race to deploy advanced AI; and that avoiding the \"Misaligned AI\" outcome should be by far the dominant consideration for the most important century. These people tend to heavily favor the \"caution\" interventions described above: they believe that rushing toward AI development raises our already-substantial risk of the worst possible outcome.\nSome people believe it will be easy, and/or that the whole idea of \"misaligned AI\" is misguided, silly, or even incoherent - planning for an overly specific future event. These people often are more interested in the \"competition\" interventions described above: they believe that advanced AI will probably be used effectively by whatever country (or in some cases smaller coalition or company) develops it first, and so the question is who will develop it first.\nAnd many people are somewhere in between.\nThe spread here is extreme. For example, see these results from an informal \"two-question survey [sent] to ~117 people working on long-term AI risk, asking about the level of existential risk from 'humanity not doing enough technical AI safety research' and from 'AI systems not doing/optimizing what the people deploying them wanted/intended.'\" (As the scatterplot shows, people gave similar answers to the two questions.)\nWe have respondents who think there's a <5% chance that alignment issues will drastically reduce the goodness of the future; respondents who think there's a >95% chance; and just about everything in between.10 My sense is that this is a fair representation of the situation: even among the few people who have spent the most time thinking about these matters, there is practically no consensus or convergence on how hard the alignment problem will be.\nI hope that over time, the field of people doing research on AI alignment11 will grow, and as both AI and AI alignment research advance, we will gain clarity on the difficulty of the AI alignment problem. This, in turn, could give more clarity on prioritizing \"caution\" vs. \"competition.\"\nOther open questions\nEven if we had clarity on the difficulty of the alignment problem, a lot of thorny questions would remain. \nShould we be expecting transformative AI within the next 10-20 years, or much later? Will the leading AI systems go from very limited to very capable quickly (\"hard takeoff\") or gradually (\"slow takeoff\")?12 Should we hope that government projects play a major role in AI development, or that transformative AI primarily emerges from the private sector? Are some governments more likely than others to work toward transformative AI being used carefully, inclusively and humanely? What should we hope a government (or company) literally does if it gains the ability to dramatically accelerate scientific and technological advancement via AI?\nWith these questions and others in mind, it's often very hard to look at some action - like starting a new AI lab, advocating for more caution and safeguards in today's AI development, etc. - and say whether it raises the likelihood of good long-run outcomes. \nRobustly helpful actions\nDespite this state of uncertainty, here are a few things that do seem clearly valuable to do today:\nTechnical research on the alignment problem. Some researchers work on building AI systems that can get \"better results\" (winning more board games, classifying more images correctly, etc.) But a smaller set of researchers works on things like:\nTraining AI systems to incorporate human feedback into how they perform summarization tasks, so that the AI systems reflect hard-to-define human preferences - something it may be important to be able to do in the future.\nFiguring out how to understand \"what AI systems are thinking and how they're reasoning,\" in order to make them less mysterious.\nFiguring out how to stop AI systems from making extremely bad judgments on images designed to fool them, and other work focused on helping avoid the \"worst case\" behaviors of AI systems. \nTheoretical work on how an AI system might be very advanced, yet not be unpredictable in the wrong ways.\nThis sort of work could both reduce the risk of the Misaligned AI outcome - and/or lead to more clarity on just how big a threat it is. Some takes place in academia, some at AI labs, and some at specialized organizations.\nPursuit of strategic clarity: doing research that could address other crucial questions (such as those listed above), to help clarify what sorts of immediate actions seem most useful.\nHelping governments and societies become, well, nicer. Helping Country X get ahead of others on AI development could make things better or worse, for reasons given above. But it seems robustly good to work toward a Country X with better, more inclusive values, and a government whose key decision-makers are more likely to make thoughtful, good-values-driven decisions.\nSpreading ideas and building communities. Today, it seems to me that the world is extremely short on people who share certain basic expectations and concerns, such as:\nBelieving that AI research could lead to rapid, radical changes of the extreme kind laid out here (well beyond things like e.g. increasing unemployment).\nBelieving that the alignment problem (discussed above) is at least plausibly a real concern, and taking the \"caution\" frame seriously.\nLooking at the whole situation through a lens of \"Let's get the best outcome possible for the whole world over the long future,\" as opposed to more common lenses such as \"Let's try to make money\" or \"Let's try to ensure that my home country leads the world in AI research.\"\nI think it's very valuable for there to be more people with this basic lens, particularly working for AI labs and governments. If and when we have more strategic clarity about what actions could maximize the odds of the \"most important century\" going well, I expect such people to be relatively well-positioned to be helpful. \nA number of organizations and people have worked to expose people to the lens above, and help them meet others who share it. I think a good amount of progress (in terms of growing communities) has come from this.\nDonating? One can donate today to places like this. But I need to admit that very broadly speaking, there's no easy translation right now between \"money\" and \"improving the odds that the most important century goes well.\" It's not the case that if one simply sent, say, $1 trillion to the right place, we could all breathe easy about challenges like the alignment problem and risks of digital dystopias.\nIt seems to me that we - as a species - are currently terribly short on people who are paying any attention to the most important challenges ahead of us, and haven't done the work to have good strategic clarity about what tangible actions to take. We can't solve this problem by throwing money at it.13 First, we need to take it more seriously and understand it better.\nNext (and last) in series: Call to Vigilance\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n From Forecasting Transformative AI: What's the Burden of Proof?: \"I am forecasting more than a 10% chance transformative AI will be developed within 15 years (by 2036); a ~50% chance it will be developed within 40 years (by 2060); and a ~2/3 chance it will be developed this century (by 2100).\"\n Also see Some additional detail on what I mean by \"most important century.\" ↩\n These include the books Superintelligence, Human Compatible, Life 3.0, and The Alignment Problem. The shortest, most accessible presentation I know of is The case for taking AI seriously as a threat to humanity (Vox article by Kelsey Piper). This report on existential risk from power-seeking AI, by Open Philanthropy's Joe Carlsmith, lays out a detailed set of premises that would collectively imply the problem is a serious one. ↩\n The order of goodness isn't absolute, of course. There are versions of \"Adversarial Technological Maturity\" that could be worse than \"Misaligned AI\" - for example, if the former results in power going to those who deliberately inflict suffering. ↩\n Part of the reason for this is that faster-moving, less-careful parties could end up quickly outnumbering others and determining the future of the galaxy. There is also a longer-run risk discussed in Nick Bostrom's The Future of Human Evolution; also see this discussion of Bostrom's ideas on Slate Star Codex, though also see this piece by Carl Shulman arguing that this dynamic is unlikely to result in total elimination of nice things. ↩\n See page 191. ↩\n E.g., see this section of Digital People Would Be An Even Bigger Deal. ↩\n One relevant paper: Public Policy and Superintelligent AI: A Vector Field Approach by Bostrom, Dafoe and Flynn. ↩\nAdversarial Technological Maturity refers to a world in which highly advanced technology has already been developed, likely with the help of AI, and different coalitions are vying for influence over the world. By contrast, \"Competition\" refers to a strategy for how to behave before the development of advanced AI. One might imagine a world in which some government or coalition takes a \"competition\" frame, develops advanced AI long before others, and then makes a series of good decisions that prevent Adversarial Technological Maturity. (Or conversely, a world in which failure to do well at \"competition\" raises the risks of Adversarial Technological Maturity.) ↩\n See definitions of this problem at Wikipedia and Paul Christiano's Medium. ↩\n A more detailed, private survey done for this report, asking about the probability of \"doom\" before 2070 due to the type of problem discussed in the report, got answers ranging from <1% to >50%. In my opinion, there are very thoughtful people who have seriously considered these matters at both ends of that range. ↩\n Some example technical topics here. ↩\n Some discussion of this topic here: Distinguishing definitions of takeoff - AI Alignment Forum ↩\n Some more thought on \"when money isn't enough\" at this old GiveWell post.. ↩\n", "url": "https://www.cold-takes.com/making-the-best-of-the-most-important-century/", "title": "How to make the best of the most important century?", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-14", "id": "5558822d95612d464a758fe60911925a"} -{"text": "\nThe Past and Future of Economic Growth: A Semi-Endogenous Perspective is a growth economics paper by Charles I. Jones, asking big questions about what has powered economic growth1 over the last 50+ years, and what the long-run prospects for continued economic growth look like. I think the ideas in it will be unfamiliar to most people, but they make a good amount of intuitive sense; and if true, they seem very important for thinking about the long-run future of the economy.\nKey quotes, selected partly for comprehensibility to laypeople and ordered so that you should be able to pick up the gist of the paper by reading them:\n“Where do ideas come from? The history of innovation is very clear on this point: new ideas are discovered through the hard work and serendipity of people. Just as more autoworkers will produce more cars, more researchers and innovators will produce more new ideas … The surprise is that we are now done; that is all we need for the semi-endogenous model of economic growth. People produce ideas and ... those ideas raise everyone’s income ... the growth rate of income per person depends on the growth rate of researchers, which is in turn ultimately equal to the growth rate of the population.”\nA key idea not explicitly stated in that quote, but emphasized elsewhere in the paper, is that ideas get harder to find: so if you want to maintain the same rate of innovation, you need more and more researchers over time. This is a simple model that can potentially help explain some otherwise odd-seeming phenomena, such as the fact that science seems to be “slowing down.” Basically, it’s possible that how much innovation we get is just a function of how many people are working on innovating - and we need more people over time to keep up the same rate.\nSo in the short run, you can get more innovation via things like more researcher jobs and better education, but in the long run, the only route is more population.\n“Even in this … framework in which population growth is the only potential source of growth in the long run, other factors explain more than 80% of U.S. growth in recent decades: the contribution of population growth is 0.3% out of the 2% growth we observe. In other words, the level effects associated with rising educational attainment, declining misallocation, and rising research intensity have been overwhelmingly important for the past 50+ years.”\n“The point to emphasize here is that this framework strongly implies that, unless something dramatic changes, future growth rates will be substantially lower. In particular, all the sources other than population growth are inherently transitory, and once these sources have run their course, all that will remain is the 0.3 percentage point contribution from population growth. In other words … the implication is that long-run growth in living standards will be 0.3% per year rather than 2% per year — an enormous slowdown!”\n“if population growth is negative, these idea-driven models predict that living standards stagnate for a population that vanishes! This is a stunningly negative result, especially when compared to the standard result we have been examining throughout the paper. In the usual case with positive population growth, living standards rise exponentially forever for a population that itself rises exponentially. Whether we live in an “expanding cosmos” or an “empty planet” depends, remarkably, on whether the total fertility rate is above or below a number like 2 or 2.1.”\n“Peters and Walsh (2021) ... find that declining population growth generates lower entry, reduced creative destruction, increased concentration, rising markups, and lower productivity growth, all facts that we see in the firm-level data.”\nSo far, the implication is:\nIn the short run, we’ve had high growth for reasons that can't continue indefinitely. (For example, one such factor is a rising share of the population that has a certain level of education, but that share can't go above 100%. The high-level point is that if we want more researchers, we can only get that via a higher population or a higher % of people who are researchers, and the latter can only go so high.) \n In the long run, growth (in living standards) basically comes down to population growth.\nBut the paper also gives two reasons that growth could rise instead of falling.\nReason one:\n“The world contains more than 7 billion people. However, according to the OECD’s Main Science and Technology Indicators, the number of full-time equivalent researchers in the world appears to be less than 10 million. In other words something on the order of one or two out of every thousand people in the world is engaged in research ... There is ample scope for substantially increasing the number of researchers over the next century, even if population growth slows or is negative. I see three ways this ‘finding new Einsteins’ can occur … \n“The rise of China, India, and other countries. The United States, Western Europe, and Japan together have about 1 billion people, or only about 1/7th the world’s population. China and India each have this many people. As economic development proceeds in China, India, and throughout the world, the pool from which we may find new talented inventors will multiply. How many Thomas Edisons and Jennifer Doudnas have we missed out on among these billions of people because they lacked education and opportunity?\n“Finding new Doudnas: women in research. Another huge pool of underutilized talent is women …. Brouillette (2021) uses patent data to document that in 1976 less than 3 percent of U.S. inventors were women. Even as of 2016 the share was less than 12 percent. He estimates that eliminating the barriers that lead to this misallocation of talent could raise economic growth in the United States by up to 0.3 percentage points per year over the next century.\n“Other sources of within-country talent. Bell, Chetty, Jaravel, Petkova and Van Reenen (2019) document that the extent to which people are exposed to inventive careers in childhood has a large influence on who becomes an inventor. They show that exposure in childhood is limited for girls, people of certain races, and people in low-income neighborhoods, even conditional on math test scores in grade school, and refer to these missed opportunities as ‘lost Einsteins.’”\nThe other reason that growth could rise will be familiar to readers of this blog:\n“Another potential reason for optimism about future growth prospects is the possibility of automation, both in the production of goods and in the production of ideas … [according to a particular model,] an increase in the automation of tasks in idea production (↑α) causes the growth rate of the economy to increase … if the fraction of tasks that are automated (α) rises to reach the rate at which ideas are getting harder to find (β), we get a singularity! [Caveats follow]”\nOversimplified recap: innovation comes down to the number of researchers; some key recent sources of growth in this can't continue indefinitely; if population growth stagnates, eventually so must innovation and living standards; but we could get more researchers via lowering barriers to entry and/or via AI and automation (and/or via more population growth).\nNone of these claims are empirical, settled science. They all are implications of what I believe are the leading simple models of economic growth. But to me they all make good sense, and I think the reason they aren’t more \"in the water\" is because people don’t tend to talk about the drivers of the long-run past and future of economic growth (as I have complained previously!)\nHere are Leopold Aschenbrenner’s favorite papers by the same author (including this one). \nSubscribe Feedback\nFootnotes\n You can try this short explanation if you don’t know what economic growth is. ↩\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/past-and-future-of-economic-growth-paper/", "title": "One Cold Link: “The Past and Future of Economic Growth: A Semi-Endogenous Perspective”", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-09", "id": "c2c69de4fa5426e82c5ad64be104ac4a"} -{"text": "\nI like this basketball substack called Medium Lights that explains all kinds of cool basketball plays the announcers wouldn't necessarily comment on. Unlike most sites like this, it usually explains in enough detail that I can actually follow what's cool about the play. It really highlights how much more interesting games could be with good announcers.\nA few I've especially enjoyed: \nLeBron lets people drive past him so he can block their shot from behind.\nLeBron also will watch the defense on a failed offensive possession, then start a future possession the same way to get the same behavior and exploit it.\nDaniel Theis's \"paint seal,\" the sort of thing I'd never notice just by watching.\nNikola Jokic's water polo pump fake.\nA collection of absurdly high-arcing shots.\n\"Dwyane Wade's sneaky baseline cuts: an appreciation for one of the best off-ball players of all time\"\nA walkthrough of Draymond Green guarding all five opposing players on one possession.\nThis one is just silly (but top notch silly): Robin Lopez makes everyone proud\nDraymond \"throwing Steph Curry open.\" \"honestly, my only reaction when i saw it the first time was 'what...?' ... [Curry's] whole body is turned towards his own basket when the ball is already halfway to him ... i don't really have any words for this. i guess this is what happens when two guys play together for a long time - the chemistry is there and cool shit just happens.\" (To understand \"throwing someone open\" you can see this Reddit post, though I have to say ... watching this example, I wondered if it is an \"amazing successful pass\" that just had a 50-50 chance of being an \"embarrassing intercepted pass.\")\nIt's great when someone is able to walk you through why they're this amazed and impressed with something you wouldn't have noticed.\nSubscribe Feedback\n", "url": "https://www.cold-takes.com/medium-lights/", "title": "Medium Lights", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-08", "id": "a42739957a1c9eacfcab191d9dc7b683"} -{"text": "\nAudio also available by searching Stitcher, Spotify, Google Podcasts, etc. for \"Cold Takes Audio\"\nToday’s world\nTransformative AI\nDigital people\nWorld of\nMisaligned AI\nWorld run by\nSomething else\nor\nor\nStable, galaxy-wide\ncivilization\nThis piece starts with a summary of when we should expect transformative AI to be developed, based on the multiple angles covered previously in the series. I think this is useful, even if you've read all of the previous pieces, but if you'd like to skip it, click here.\nI then address the question: \"Why isn't there a robust expert consensus on this topic, and what does that mean for us?\"\nI estimate that there is more than a 10% chance we'll see transformative AI within 15 years (by 2036); a ~50% chance we'll see it within 40 years (by 2060); and a ~2/3 chance we'll see it this century (by 2100). \n(By \"transformative AI,\" I mean \"AI powerful enough to bring us into a new, qualitatively different future.\" I've argued that advanced AI could be sufficient to make this the most important century.)\nThis is my overall conclusion based on a number of technical reports approaching AI forecasting from different angles - many of them produced by Open Philanthropy over the past few years as we've tried to develop a thorough picture of transformative AI forecasting to inform our longtermist grantmaking.\nHere's a one-table summary of the different angles on forecasting transformative AI that I've discussed, with links to more detailed discussion in previous posts as well as to underlying technical reports:\nForecasting angle\nKey in-depth pieces (abbreviated titles)\nMy takeaways\nProbability estimates for transformative AI\nExpert survey. What do AI researchers expect?\n \nEvidence from AI Experts\nExpert survey implies1 a ~20% probability by 2036; ~50% probability by 2060; ~70% probability by 2100. Slightly differently phrased questions (posed to a minority of respondents) have much later estimates.\n \nBiological anchors framework. Based on the usual patterns in how much \"AI training\" costs, how much would it cost to train an AI model as big as a human brain to perform the hardest tasks humans do? And when will this be cheap enough that we can expect someone to do it?\n \nBio Anchors, drawing on Brain Computation\n>10% probability by 2036; ~50% chance by 2055; ~80% chance by 2100.\n \nAngles on the burden of proof\nIt's unlikely that any given century would be the \"most important\" one. (More)\n \nHinge; Response to Hinge\nWe have many reasons to think this century is a \"special\" one before looking at the details of AI. Many have been covered in previous pieces; another is covered in the next row. \n \nWhat would you forecast about transformative AI timelines, based only on basic information about (a) how many years people have been trying to build transformative AI; (b) how much they've \"invested\" in it (in terms of the number of AI researchers and the amount of computation used by them); (c) whether they've done it yet (so far, they haven't)? (More)\n \nSemi-informative Priors\nCentral estimates: 8% by 2036; 13% by 2060; 20% by 2100.2 In my view, this report highlights that the history of AI is short, investment in AI is increasing rapidly, and so we shouldn't be too surprised if transformative AI is developed soon. \n \nBased on analysis of economic models and economic history, how likely is 'explosive growth' - defined as >30% annual growth in the world economy - by 2100? Is this far enough outside of what's \"normal\" that we should doubt the conclusion? (More)\n \nExplosive Growth, Human Trajectory\nHuman Trajectory projects the past forward, implying explosive growth by 2043-2065.\nExplosive Growth concludes: \"I find that economic considerations don’t provide a good reason to dismiss the possibility of TAI being developed in this century. In fact, there is a plausible economic perspective from which sufficiently advanced AI systems are expected to cause explosive growth.\"\n \n\"How have people predicted AI ... in the past, and should we adjust our own views today to correct for patterns we can observe in earlier predictions? ... We’ve encountered the view that AI has been prone to repeated over-hype in the past, and that we should therefore expect that today’s projections are likely to be over-optimistic.\" (More)\n \nPast AI Forecasts\n\"The peak of AI hype seems to have been from 1956-1973. Still, the hype implied by some of the best-known AI predictions from this period is commonly exaggerated.\" \n \nFor transparency, note that many of the technical reports are Open Philanthropy analyses, and I am co-CEO of Open Philanthropy.\nHaving considered the above, I expect some readers to still feel a sense of unease. Even if they think my arguments make sense, they may be wondering: if this is true, why isn't it more widely discussed and accepted? What's the state of expert opinion?\nMy summary of the state of expert opinion at this time is:\nThe claims I'm making do not contradict any particular expert consensus. (In fact, the probabilities I've given aren't too far off from what AI researchers seem to predict, as shown in the first row.) But there are some signs they aren't thinking too hard about the matter. \nThe Open Philanthropy technical reports I've relied on have had significant external expert review. Machine learning researchers reviewed Bio Anchors; neuroscientists reviewed Brain Computation; economists reviewed Explosive Growth; academics focused on relevant topics in uncertainty and/or probability reviewed Semi-informative Priors.2 (Some of these reviews had significant points of disagreement, but none of these points seemed to be cases where the reports contradicted a clear consensus of experts or literature.)\nBut there is also no active, robust expert consensus supporting claims like \"There's at least a 10% chance of transformative AI by 2036\" or \"There's a good chance we're in the most important century for humanity,\" the way that there is supporting e.g. the need to take action against climate change.\nUltimately, my claims are about topics that simply have no \"field\" of experts devoted to studying them. That, in and of itself, is a scary fact, and something that I hope will eventually change.\nBut should we be willing to act on the \"most important century\" hypothesis in the meantime?\nBelow, I'll discuss:\nWhat an \"AI forecasting field\" might look like.\nA \"skeptical view\" that says today's discussions around these topics are too small, homogeneous and insular (which I agree with) - and that we therefore shouldn't act on the \"most important century\" hypothesis until there is a mature, robust field (which I don't).\nWhy I think we should take the hypothesis seriously in the meantime, until and unless such a field develops: \nWe don't have time to wait for a robust expert consensus.\n \nIf there are good rebuttals out there - or potential future experts who could develop such rebuttals - we haven't found them yet. The more seriously the hypothesis gets taken, the more likely such rebuttals are to appear. (Aka the Cunningham's Law theory: \"the best way to get a right answer is to post a wrong answer.\")\n \nI think that consistently insisting on a robust expert consensus is a dangerous reasoning pattern. In my view, it's OK to be at some risk of self-delusion and insularity, in exchange for doing the right thing when it counts most.\nWhat kind of expertise is AI forecasting expertise?\nQuestions analyzed in the technical reports listed above include:\nAre AI capabilities getting more impressive over time? (AI, history of AI)\nHow can we compare AI models to animal/human brains? (AI, neuroscience)\nHow can we compare AI capabilities to animals' capabilities? (AI, ethology)\nHow can we estimate the expense of training a large AI system for a difficult task, based on information we have about training past AI systems? (AI, curve-fitting)\nHow can we make a minimal-information estimate about transformative AI, based only on how many years/researchers/dollars have gone into the field so far? (Philosophy, probability)\nHow likely is explosive economic growth this century, based on theory and historical trends? (Growth economics, economic history)\nWhat has \"AI hype\" been like in the past? (History)\nWhen talking about wider implications of transformative AI for the \"most important century,\" I've also discussed things like \"How feasible are digital people and establishing space settlements throughout the galaxy?\" These topics touch physics, neuroscience, engineering, philosophy of mind, and more.\nThere's no obvious job or credential that makes someone an expert on the question of when we can expect transformative AI, or the question of whether we're in the most important century. \n(I particularly would disagree with any claim that we should be relying exclusively on AI researchers for these forecasts. In addition to the fact that they don't seem to be thinking very hard about the topic, I think that relying on people who specialize in building ever-more powerful AI models to tell us when transformative AI might come is like relying on solar energy R&D companies - or oil extraction companies, depending on how you look at it - to forecast carbon emissions and climate change. They certainly have part of the picture. But forecasting is a distinct activity from innovating or building state-of-the-art systems.)\nAnd I'm not even sure these questions have the right shape for an academic field. Trying to forecast transformative AI, or determine the odds that we're in the most important century, seems:\nMore similar to the FiveThirtyEight election model (\"Who's going to win the election?\") than to academic political science (\"How do governments and constituents interact?\"); \nMore similar to trading financial markets (\"Is this price going up or down in the future?\") than to academic economics (\"Why do recessions exist?\");3\nMore similar to GiveWell's research (\"Which charity will help people the most, per dollar?\") than to academic development economics (\"What causes poverty and what can reduce it?\")4\nThat is, it's not clear to me what a natural \"institutional home\" for expertise on transformative AI forecasting, and the \"most important century,\" would look like. But it seems fair to say there aren't large, robust institutions dedicated to this sort of question today.\nHow should we act in the absence of a robust expert consensus?\nThe skeptical view\nLacking a robust expert consensus, I expect some (really, most) people will be skeptical no matter what arguments are presented.\nHere's a version of a very general skeptical reaction I have a fair amount of empathy for:\nThis is all just too wild.\nYou're making an over-the-top claim about living in the most important century. This pattern-matches to self-delusion.\nYou've argued that the burden of proof shouldn't be so high, because there are lots of ways in which we live in a remarkable and unstable time. But ... I don't trust myself to assess those claims, or your claims about AI, or really anything on these wild topics.\nI'm worried by how few people seem to be engaging these arguments. About how small, homogeneous and insular the discussion seems to be. Overall, this feels more like a story smart people are telling themselves - with lots of charts and numbers to rationalize it - about their place in history. It doesn't feel \"real.\"\nSo call me back when there's a mature field of perhaps hundreds or thousands of experts, critiquing and assessing each other, and they've reached the same sort of consensus that we see for climate change.\nI see how you could feel this way, and I've felt this way myself at times - especially on points #1-#4. But I'll give three reasons that point #5 doesn't seem right.\nReason 1: we don't have time to wait for a robust expert consensus\nI worry that the arrival of transformative AI could play out as a kind of slow-motion, higher-stakes version of the COVID-19 pandemic. The case for expecting something big to happen is there, if you look at the best information and analyses available today. But the situation is broadly unfamiliar; it doesn't fit into patterns that our institutions regularly handle. And every extra year of action is valuable.\nYou could also think of it as a sped-up version of the dynamic with climate change. Imagine if greenhouse gas emissions had only started to rise recently5 (instead of in the mid-1800s), and if there were no established field of climate science. It would be a really bad idea to wait decades for a field to emerge, before seeking to reduce emissions.\nReason 2: Cunningham's Law (\"the best way to get a right answer is to post a wrong answer\") may be our best hope for finding the flaw in these arguments\nI'm serious, though.\nSeveral years ago, some colleagues and I suspected that the \"most important century\" hypothesis could be true. But before acting on it too much, we wanted to see whether we could find fatal flaws in it.\nOne way of interpreting our actions over the last few years is as if we were doing everything we could to learn that the hypothesis is wrong.\nFirst, we tried talking to people about the key arguments - AI researchers, economists, etc. But:\nWe had vague ideas of the arguments in this series (mostly or perhaps entirely picked up from other people). We weren't able to state them with good crispness and specificity.\nThere were a lot of key factual points that we thought would probably check out,6 but hadn't nailed down and couldn't present for critique.\nOverall, we couldn't even really articulate enough of a concrete case to give the others a fair chance to shoot it down.\nSo we put a lot of work into creating technical reports on many of the key arguments. (These are now public, and included in the table at the top of this piece.) This put us in position to publish the arguments, and potentially encounter fatal counterarguments.\nThen, we commissioned external expert reviews.7\nSpeaking only for my own views, the \"most important century\" hypothesis seems to have survived all of this. Indeed, having examined the many angles and gotten more into the details, I believe it more strongly than before.\nBut let's say that this is just because the real experts - people we haven't found yet, with devastating counterarguments - find the whole thing so silly that they're not bothering to engage. Or, let's say that there are people out there today who could someday become experts on these topics, and knock these arguments down. What could we do to bring this about?\nThe best answer I've come up with is: \"If this hypothesis became better-known, more widely accepted, and more influential, it would get more critical scrutiny.\" \nThis series is an attempted step in that direction - to move toward broader credibility for the \"most important century\" hypothesis. This would be a good thing if the hypothesis were true; it also seems like the best next step if my only goal were to challenge my beliefs and learn that it is false.\nOf course, I'm not saying to accept or promote the \"most important century\" hypothesis if it doesn't seem correct to you. But I think that if your only reservation is about the lack of robust consensus, continuing to ignore the situation seems odd. If people behaved this way generally (ignoring any hypothesis not backed by a robust consensus), I'm not sure I see how any hypothesis - including true ones - would go from fringe to accepted.\nReason 3: skepticism this general seems like a bad idea\nBack when I was focused on GiveWell, people would occasionally say something along the lines of: \"You know, you can't hold every argument to the standard that GiveWell holds its top charities to - seeking randomized controlled trials, robust empirical data, etc. Some of the best opportunities to do good will be the ones that are less obvious - so this standard risks ruling out some of your biggest potential opportunities to have impact.\" \nI think this is right. I think it's important to check one's general approach to reasoning and evidentiary standards and ask: \"What are some scenarios in which my approach fails, and in which I'd really prefer that it succeed?\" In my view, it's OK to be at some risk of self-delusion and insularity, in exchange for doing the right thing when it counts most.\nI think the lack of a robust expert consensus - and concerns about self-delusion and insularity - provide good reason to dig hard on the \"most important century\" hypothesis, rather than accepting it immediately. To ask where there might be an undiscovered flaw, to look for some bias toward inflating our own importance, to research the most questionable-seeming parts of the argument, etc.\nBut if you've investigated the matter as much as is reasonable/practical for you - and haven't found a flaw other than considerations like \"There's no robust expert consensus\" and \"I'm worried about self-delusion and insularity\" - then I think writing off the hypothesis is the sort of thing that essentially guarantees you won't be among the earlier people to notice and act on a tremendously important issue, if the opportunity arises. I think that's too much of a sacrifice, in terms of giving up potential opportunities to do a lot of good.\nNext in series: How to make the best of the most important century?\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n Technically, these probabilities are for “human-level machine intelligence.” In general, this chart simplifies matters by presenting one unified set of probabilities. In general, all of these probabilities refer to something at least as capable as PASTA, so they directionally should be underestimates of the probability of PASTA (though I don't think this is a major issue). ↩\n Reviews of Bio Anchors are here; reviews of Explosive Growth are here; reviews of Semi-informative Priors are here. Brain Computation was reviewed at an earlier time when we hadn't designed the process to result in publishing reviews, but over 20 conversations with experts that informed the report are available here. Human Trajectory hasn't been reviewed, although a lot of its analysis and conclusions feature in Explosive Growth, which has been. Past AI Forecasts hasn't been reviewed.  ↩\n The academic fields are quite broad, and I'm just giving example questions that they tackle. ↩\n Though climate science is an example of an academic field that invests a lot in forecasting the future. ↩\n The field of AI has existed since 1956, but it's only in the last decade or so that machine learning models have started to get within range of the size of insect brains and perform well on relatively difficult tasks. ↩\n Often, we were simply going off of our impressions of what others who had thought about the topic a lot thought. ↩\n Reviews of Bio Anchors are here; reviews of Explosive Growth are here; reviews of Semi-informative Priors are here. Brain Computation was reviewed at an earlier time when we hadn't designed the process to result in publishing reviews, but over 20 conversations with experts that informed the report are available here. Human Trajectory hasn't been reviewed, although a lot of its analysis and conclusions feature in Explosive Growth, which has been. Past AI Forecasts hasn't been reviewed.  ↩\n", "url": "https://www.cold-takes.com/where-ai-forecasting-stands-today/", "title": "AI Timelines: Where the Arguments, and the \"Experts,\" Stand", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-07", "id": "292c9db8be3c3ebc161d5391456c66b6"} -{"text": "\nThese are assorted links that I expect readers to maybe find useful on a personal basis. Don't worry, this won't be too frequent an occurrence - I aim for 99% of this blog to be about the last 100-10,000 years and the next 100-10 zillion years.\nBen Kuhn's tips for the most immersive video calls - the most comprehensive guide I've seen to good Zooming. \nSome tips on how to make sure you have good wifi at your hotel/AirBNB. It shocks me that AirBNB doesn't do more to help (e.g., nudging hosts to verify and list their speeds and networking equipment, allowing guests to filter by this). Some of the tips here are obvious, some I didn't know, but it seems like a good checklist.\nI strongly endorse this advice on PowerPoint presentations.\nInteresting argument against wearing a bike helmet. I wear one because I don't want to look reckless and actively enjoy looking dorky, but ¯\\_(ツ)_/¯\nI recommend traveling with an octopus plug so you never have to fight over an airport charger (instead, you're a hero!)\nFiveThirtyEight looks at a recent review of randomized studies on which diets work best. The effect sizes seem pretty good - ~15lbs avg weight loss after a year - compared to what I expected based on previous coverage of this topic. Atkins doesn't win, Ornish (very traditional approach to dieting) does, though they're all extremely close, consistent with my general view that \"any diet you can stick to will probably help because it will be different from eating whatever you feel like.\" Paleo isn't included in the analysis.\nBasic take, but important. WSJ: The Man Who Wrote Those Password Rules Has a New Tip: N3v$r M1^d! \"Bill Burr’s 2003 report recommended using numbers, obscure characters and capital letters and updating regularly—he regrets the error.\" Not surprising at all - it's bad advice that websites still enforce today, and XKCD's take is spot on. I recommend LastPass and maximal use of two-factor authentication. \nHere's an excruciatingly detailed guide to filing a complaint with a credit reporting agency that actually gets acted on. Key advice is to stay away from online and phone communications and use certified mail for everything, which shows them you're collecting a paper trail and scares them; key incredible fact is that you're not allowed to use form letters because ... well, because credit reporting agencies don't want you to and they've lobbied to prohibit it. (Even though they use form letters.) I found this whole long thing weirdly fun to read. Just thinking about confronting one of these awful bureaucracies and knowing how to get good results made me smile. Advice may also have applications for dealing with bureaucracies more generally.\nHow to recognize when a child is drowning. If you donate to effective global health charities to save children's lives, then you need to know how to do this too in order to be consistent.1\nSubscribe Feedback\nFootnotes\n This is a joke. If you don't get it, don't worry about it. ↩\n", "url": "https://www.cold-takes.com/cold-links-useful/", "title": "Cold Links: Useful", "source": "cold.takes", "source_type": "blog", "date_published": "2021-09-02", "id": "4c7396905f2b9fa992d2147fd0648c10"} -{"text": "\nAudio also available by searching Stitcher, Spotify, Google Podcasts, etc. for \"Cold Takes Audio\"\nToday’s world\nTransformative AI\nDigital people\nWorld of\nMisaligned AI\nWorld run by\nSomething else\nor\nor\nStable, galaxy-wide\ncivilization\nThis is one of 4 posts summarizing hundreds of pages of technical reports focused almost entirely on forecasting one number: the year by which transformative AI will be developed.1 \nBy \"transformative AI,\" I mean \"AI powerful enough to bring us into a new, qualitatively different future.\" I specifically focus on what I'm calling PASTA: AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement.\nThe sooner PASTA might be developed, the sooner the world could change radically, and the more important it seems to be thinking today about how to make that change go well vs. poorly.\nThis post is a layperson-compatible summary of Ajeya Cotra's \"Forecasting Transformative AI with Biological Anchors\" (which I'll abbreviate below as \"Bio Anchors\"), and its pros and cons.2 It is the forecast I find most informative for transformative AI, with some caveats:\nThis approach is relatively complex, and it requires a fairly large number of assumptions and uncertain estimates. These qualities make it relatively difficult to explain, and they are also a mark against the method's reliability. \nHence, as of today, I don't think this method is as trustworthy as the examples I gave previously for forecasting a qualitatively different future. It does not have the simplicity and directness of some of those examples, such as modeling COVID-19's spread. And while climate modeling is also very complex, climate modeling has been worked on by a large number of experts over decades, whereas the Bio Anchors methodology doesn't have much history.\nNonetheless, I think it is the best available \"best guess estimate\" methodology for transformative AI timelines as of today. And as discussed in the final section, one can step back from a lot of the details to see that this century will likely see us hit some of the more \"extreme\" milestones in the report that strongly suggest the feasibility of transformative AI.\n(Note: I've also written up a follow-up post about this framework for skeptical readers. See “Biological anchors” is about bounding, not pinpointing, AI timelines.)\nThe basic idea is:\nModern AI models can \"learn\" to do tasks via a (financially costly) process known as \"training.\" You can think of training as a massive amount of trial-and-error. For example, voice recognition AI models are given an audio file of someone talking, take a guess at what the person is saying, then are given the right answer. By doing this millions of times, they \"learn\" to reliably translate speech to text. More: Training\nThe bigger an AI model and the more complex the task, the more the training process costs. Some AI models are bigger than others; to date, none are anywhere near \"as big as the human brain\" (what this means will be elaborated below). More: Model size and task type\nThe biological anchors method asks: \"Based on the usual patterns in how much training costs, how much would it cost to train an AI model as big as a human brain to perform the hardest tasks humans do? And when will this be cheap enough that we can expect someone to do it?\" More: Estimating the expense\nBio Anchors models a broad variety of different ways of approaching this question, generating estimates in a wide range from \"aggressive\" (projecting transformative AI sooner) to \"conservative\" (later). But from essentially all of these angles, it places a high probability on transformative AI this century.\nThis chart is from the report. You can roughly read the y-axis as the probability that transformative AI is developed by the year in question, although there is some additional nuance in the report. I won't be explaining what each of the different \"Conditional on\" models means; it's enough to know that each represents a different angle on forecasting transformative AI.\nThanks to María Gutiérrez Rojas for this graphic. The top timeline gives major milestones for AI computing, past and future (the future ones are projected by Bio Anchors). Below it are (cropped) other timelines showing how significant this few-hundred-year period (more at This Can't Go On), and this era (more at All Possible Views About Humanity's Future Are Wild), appear to be.\nI'll now elaborate on each of these a bit more. This is the densest part of this series, and some people might prefer to stick with the above summary and skip to the next post.\nNote that Bio Anchors uses a number of different approaches (which it calls \"anchors\") to estimate transformative AI timelines, and combines them into one aggregate view. In this summary, I'm most focused on a particular set of these - called the \"neural net anchors\" - which are driving most of the report's aggregate timelines. Some of what I say applies to all anchors, but some applies only to the \"neural net anchors.\"\nTraining\nAs discussed previously, there are essentially two ways to \"teach\" a computer to do a task:\n\"Program\" in extremely specific, step-by-step instructions for completing the task. When this can be done, the computer can generally execute the instructions very quickly, reliably and cheaply. For example, you might program a computer to examine each record in a database and print the ones that match a user's search terms - you would \"instruct\" it in exactly how to do this, and it would be able to do the task very well.\n\"Train\" an AI to do the task purely by trial and error. Today, the most common way of doing this is by using a \"neural network,\" which you might think of sort of like a \"digital brain\" that starts in a random state: it hasn't yet been wired to do specific things. For example, say we want an AI to be able to say whether a photo is of a dog or a cat. It's hard to give fully specific step-by-step instructions for doing this; instead, we can take a neural network and send in a million example images (each one labeled as a \"dog\" or a \"cat\"). Each time it sees an example, it will tweak its internal wiring to make it more likely to get the right answer on similar cases in the future. After enough examples, it will be wired to correctly recognize dogs vs. cats.\n(We could maybe also move up another level of meta, and try to \"train\" models to be able to learn from \"training\" itself as efficiently as possible. This is called \"meta-learning,\" but my understanding is that it hasn't had great success yet.)\n\"Training\" is a sort of brute-force, expensive alternative to \"programming.\" The advantage is that we don't need to be able to provide specific instructions - we can just give an AI lots of examples of doing the task right, and it will learn to do the task. The disadvantage is that we need a lot of examples, which requires a lot of processing power, which costs money.\nHow much? This depends on the size of the model (neural network) and the nature of the task itself. For some tasks AIs have learned as of 2021, training a single model could cost millions of dollars. For more complex tasks (such as \"do innovative scientific research\") and bigger models (reaching the size of the human brain), training a model could cost far more than that. \nBio Anchors is interested in the question: \"When will it be affordable to train a model, using a relatively crude trial-and-error-based approach, to do the hardest tasks humans can do?\"\nThese tasks could include the tasks necessary for PASTA, such as:\nLearn about science from teachers, textbooks and homework as effectively as a human can.\nPush the frontier of science by asking questions, doing analyses and writing papers, as effectively as a human can.\nThe next section will discuss how Bio Anchors fleshes out the idea of the \"hardest tasks humans can do\" (which it assumes would require a \"human-brain-sized\" model).\nModel size and task type\nBio Anchors hypothesizes that we can estimate \"how expensive it is to train a model\" based on two basic parameters: the model size and the task type.\nModel size. As stated above, you might think of a neural network as a \"digital brain\" that starts in a random state. In general, a larger \"digital brain\" - with more digital-versions-of-neurons and digital-versions-of-synapses3 - can learn more complex tasks. A larger \"digital brain\" also requires more computations - and is hence more expensive - each time it is used (for example, for each example it is learning from).\nDrawing on the analysis in Joe Carlsmith's \"How Much Computational Power Does It Take to Match the Human Brain?\" (abbreviated in this piece as \"Brain Computation\"), Bio Anchors estimates comparisons between the size of \"digital brains\" (AI models) and \"animal brains\" (bee brains, mouse brains, human brains). These estimates imply that today's AI systems are sometimes as big as insect brains, but never quite as big as mouse brains - as of this writing, the largest known language model was the first to come reasonably close4 - and not yet even 1% as big as human brains.5\nThe bigger the model, the more processing power it takes to train. Bio Anchors assumes that a transformative AI model would need to be about 10x the size of a human brain, so a lot bigger than any current AI model. (The 10x is to leave some space for the idea that \"digital brains\" might be less efficient than human brains; see this section of the report.) This is one of the reasons it would be very expensive to train.\nIt could turn out that a smaller AI model is still big enough to learn the above sort of tasks. Or it could turn out that the needed model size is bigger than Bio Anchors estimates, perhaps because Bio Anchors has underestimated the effective \"size\" of the human brain, or because the human brain is better-designed than \"digital brains\" by more than Bio Anchors has guessed.\nTask type. In order to learn a task, an AI model needs to effectively \"try\" (or \"watch\") the task a large number of times, learning from trial-and-error. The more costly (in processing power, and therefore money) the task is to try/watch, the more costly it will be for the AI model to learn it.\nIt's hard to quantify how costly a task is to try/watch. Bio Anchors's attempt to do this is the most contentious part of the analysis, according to the technical reviewers who have reviewed it so far.\nYou can roughly think of the Bio Anchors framework as saying: \nThere are some tasks that a human can do with only a second of thought, such as classifying an image as a cat or dog. \nThere are other tasks that might take a human several minutes of thought, such as solving a logic puzzle.\nOther tasks could take hours, days, months or even years, and require not just thinking, but interacting with the environment. For example, writing a scientific paper.\nThe tasks on the longer end of this spectrum will be more costly to try/watch, so it will be more costly to train an AI model to do them. For example, it's more costly (takes more time, and more money) to have a million \"tries\" at a task that takes an hour than it is to have a million \"tries\" at a task that takes a second.\nHowever, the framework isn't as simple as this sounds. Many tasks that seem like \"long\" tasks (such as writing an essay) could in fact be broken into a series of \"shorter\" tasks (such as writing individual sentences). \nIf an AI model can be trained to do a shorter \"sub-task,\", it might be able to do the longer task by simply repeating the shorter sub-task over and over again - without ever needing to be explicitly \"trained\" to do the longer task. \n \nFor example, an AI model might get a million \"tries\" at the task: \"Read a partly-finished essay and write a good next sentence.\" If it then learns to do this task well, it could potentially write a long essay by simply repeating this task over and over again. It wouldn't need to go into a separate training process where it gets a million \"tries\" at the more time-consuming task of writing an entire essay.\n \nSo it becomes crucial whether the hardest and most important tasks (such as those listed above) are the kind that can be \"decomposed\" into short/easy tasks.\nEstimating the expense\nBio Anchors looks at how expensive existing AI models were to train, depending on model size and task type (as defined above). It then extrapolates this to see how expensive an AI model would be to train if it:\nHad a size 10x larger than a human brain.6\nTrained on a task where each \"try\" took days, weeks, or months of intensive \"thinking.\"\nAs of today, this sort of training would cost in the ballpark of a million trillion dollars, which is enormously more than total world wealth. So it isn't surprising that nobody has tried to train such a model. \nHowever, Bio Anchors also projects the following trends out into the future:\nAdvances in both hardware and software that could make computing power cheaper.\nA growing economy, and a growing role of AI in the economy, that could increase the amount AI labs are able to spend training large models to $1 trillion and beyond.\nAccording to these projections, at some point the \"amount AI labs are able to spend\" becomes equal to the \"expense of training a human-brain-sized model on the hardest tasks.\" Bio Anchors bases its projections for \"when transformative AI will be developed\" on when this happens.\nBio Anchors also models uncertainty in all of the parameters above, and considers alternative approaches to the \"model size and task type\" parameters.7 By doing this, it estimates the probability that transformative AI will be developed by 2030, 2035, etc.\nAggressive or conservative?\nBio Anchors involves a number of simplifications that could cause it to be too aggressive (expecting transformative AI to come sooner than is realistic) or too conservative (expecting it to come later than is realistic). \nThe argument I most commonly hear that it is \"too aggressive\" is along the lines of: \"There's no reason to think that a modern-methods-based AI can learn everything a human does, using trial-and-error training - no matter how big the model is and how much training it does. Human brains can reason in unique ways, unmatched and unmatchable by any AI unless we come up with fundamentally new approaches to AI.\" This kind of argument is often accompanied by saying that AI systems don't \"truly understand\" what they're reasoning about, and/or that they are merely imitating human reasoning through pattern recognition. \nI think this may turn out to be correct, but I wouldn't bet on it. A full discussion of why is outside the scope of this post, but in brief:\nI am unconvinced that there is a deep or stable distinction between \"pattern recognition\" and \"true understanding\" (this Slate Star Codex piece makes this point). \"True understanding\" might just be what really good pattern recognition looks like. Part of my thinking here is an intuition that even when people (including myself) superficially appear to \"understand\" something, their reasoning often (I'd even say usually) breaks down when considering an unfamiliar context. In other words, I think what we think of as \"true understanding\" is more of an ideal than a reality.\nI feel underwhelmed with the track record of those who have made this sort of argument - I don't feel they have been able to pinpoint what \"true reasoning\" looks like, such that they could make robust predictions about what would prove difficult for AI systems. (For example, see this discussion of Gary Marcus's latest critique of GPT3, and similar discussion on Astral Codex Ten).\n\"Some breakthroughs / fundamental advances are needed\" might be true. But for Bio Anchors to be overly aggressive, it isn't enough that some breakthroughs are needed; the breakthroughs needed have to be more than what AI scientists are capable of in the coming decades, the time frame over which Bio Anchors forecasts transformative AI. It seems hard to be confident that things will play out this way - especially because: \nEven moderate advances in AI systems could bring more talent and funding into the field (as is already happening8). \n \nIf money, talent and processing power are plentiful, and progress toward PASTA is primarily held up by some particular weakness of how AI systems are designed and trained, a sustained attempt by researchers to fix this weakness could work. When we're talking about multi-decade timelines, that might be plenty of time for researchers to find whatever is missing from today's techniques.\nMore broadly, Bio Anchors could be too aggressive due to its assumption that \"computing power is the bottleneck\": \nIt assumes that if one could pay for all the computing power to do the brute-force \"training\" described above for the key tasks (e.g., automating scientific work), transformative AI would (likely) follow. \nTraining an AI model doesn't just require purchasing computing power. It requires hiring researchers, running experiments, and perhaps most importantly, finding a way to set up the \"trial and error\" process so that the AI can get a huge number of \"tries\" at the key task. It may turn out that doing so is prohibitively difficult.\nOn the other hand, there are several ways in which Bio Anchors could be too conservative (underestimating the likelihood of transformative AI being developed soon). \nPerhaps with enough ingenuity, one could create a transformative AI by \"programming\" it to do key tasks, rather than having to \"train\" it (see above for the distinction). This could require far less computation, and hence be far less expense. Or one could use a combination of \"programming\" and \"training\" to achieve better efficiency than Bio Anchors implies, while still not needing to capture everything via \"programming.\"\nOr one could find far superior approaches to AI that can be \"trained\" much more efficiently. One possibility here is \"meta-learning\": effectively training an AI system on the \"task\" of being trained, itself.\nOr perhaps most likely, over time AI might become a bigger and bigger part of the economy, and there could be a proliferation of different AI systems that have each been customized and invested in to do different real-world tasks. The more this happens, the more opportunity there is for individual ingenuity and luck to result in more innovations, and more capable AI systems in particular economic contexts. \nPerhaps at some point, it will be possible to integrate many systems with different abilities in order to tackle some particularly difficult task like \"automating science,\" without needing a dedicated astronomically expensive \"training run.\"\n \nOr perhaps AI that falls short of PASTA will still be useful enough to generate a lot of cash, and/or help researchers make compute cheaper and more efficient. This in turn could lead to still bigger AI models that further increase availability of cash and efficiency of compute. That, in turn, could cause a PASTA-level training run to be affordable earlier than Bio Anchors projects.\nAdditionally, some technical reviewers of Bio Anchors feel that its treatment of task type is too conservative. They believe that the most important tasks (and perhaps all tasks) that AI needs to be trained on will be on the \"easier/cheaper\" end of the spectrum, compared to what Bio Anchors assumes. (See the above section for what it means for a task to be \"easier/cheaper\" or \"harder/more expensive\"). For a related argument, see Fun with +12 OOMs of Compute, which makes the intuitive point that Bio Anchors is imagining a truly massive amount of computation needed to create PASTA, and less could easily be enough.\nI don't think it is obvious whether, overall, Bio Anchors is too aggressive (expecting transformative AI to come sooner than is realistic) or too conservative (expecting it to come later). The report itself states that it's likely to be too aggressive over the next few years and too conservative >50 years out, and likely most useful in between.9\nIntellectually, it feels to me as though the report is more likely to be too conservative. I find its responses to the \"Too aggressive\" points above fairly compelling, and I think the \"Too conservative\" points are more likely to end up being correct. In particular, I think it's hard to rule out the possibility of ingenuity leading to transformative AI in some far more efficient way than the \"brute-force\" method contemplated here. And I think the treatment of \"task type\" is definitely erring in a conservative direction.\nHowever, I also have an intuitive preference (which is related to the \"burden of proof\" analyses given previously) to err on the conservative side when making estimates like this. Overall, my best guesses about transformative AI timelines are similar to those of Bio Anchors.\nConclusions of Bio Anchors\nBio Anchors estimates a >10% chance of transformative AI by 2036, a ~50% chance by 2055, and an ~80% chance by 2100.\nIt's also worth noting what the report says about AI systems today. It estimates that:\nToday's largest AI models, such as GPT-3, are a bit smaller than mouse brains, and are starting to get within range (if they were to grow another 100x-1000x) of human brains. So we might soon be getting close to AI systems that can be trained to do anything that humans can do with ~1 second of thought. Consistent with this, it seems to me that we're just starting to reach the point where language models sound like humans who are talking without thinking very hard.10 If anything, \"human who puts in no more than 1 second of thought per word\" seems somewhat close to what GPT-3 is doing, even though it's much smaller than a human brain.\nIt's only very recently that AI models have gotten this big. A \"large\" AI model before 2020 would be more in the range of a honeybee brain. So for models even in the very recent past, we should be asking whether AI systems seem to be \"as smart as insects.\" Here's one attempt to compare AI and honeybee capabilities (by Open Philanthropy intern Guille Costa), concluding that the most impressive honeybee capabilities the author was able to pinpoint do appear to be doable for AI systems.11\nI include these notes because:\nThe Bio Anchors analysis seems fully consistent with what we're observing from AI systems today (and have over the last decade or two), while also implying that we're likely to see more transformative abilities in the coming decades.\nI think it's particularly noteworthy that we're getting close to the time when an AI model is \"as big as a human brain\" (according to the Bio Anchors / Brain Computation estimation method). It may turn out that such an AI model is able to \"learn\" a lot about the world and produce a lot of economic value, even if it can't yet do the hardest things humans do. And this, in turn, could kick off skyrocketing investment in AI (both money and talent), leading to a lot more innovation and further breakthroughs. This is a simple reason to believe that transformative AI by 2036 is plausible.\nFinally, I note that Bio Anchors includes an \"evolution\" analysis among the different approaches it considers. This analysis hypothesizes that in order to produce transformative AI, one would need to do about as many computations as all animals in history combined, in order to re-create the progress that was made by natural selection. \nI consider the \"evolution\" analysis to be very conservative, because machine learning is capable of much faster progress than the sort of trial-and-error associated with natural selection. Even if one believes in something along the lines of \"Human brains reason in unique ways, unmatched and unmatchable by a modern-day AI,\" it seems that whatever is unique about human brains should be re-discoverable if one is able to essentially re-run the whole history of natural selection. And even this very conservative analysis estimates a ~50% chance of transformative AI by 2100.\nPros and cons of the biological anchors method for forecasting transformative AI timelines\nCons. I'll start with what I see as the biggest downside: this is a very complex forecasting framework, which relies crucially on multiple extremely uncertain estimates and assumptions, particularly:\nWhether it's reasonable to believe that an AI system could learn the key tasks listed above (the ones required for PASTA) given enough trial-and-error training.\nHow to compare the size of AI models with the size of animal/human brains.\nHow to characterize \"task type,\" estimating how \"difficult\" and expensive a task is to “try” or “watch” once.\nHow to use the model size and task type to estimate how expensive it would be to train an AI model to do the key tasks.\nHow to estimate future advances in both hardware and software that could make computing power cheaper.\nHow to estimate future increases in how much AI labs could be able to spend training models.\nThis kind of complexity and uncertainty means (IMO) that we shouldn't consider the forecasts to be highly reliable, especially today when the whole framework is fairly new. If we got to the point where as much scrutiny and effort had gone into AI forecasting as climate forecasting, it might be a different matter.\nPros. That said, the biological anchors method is essentially the only one I know of that estimates transformative AI timelines from objective facts (where possible) and explicit assumptions (elsewhere).12 It does not rely on any concepts as vague and intuitive as \"how fast AI systems are getting more impressive\" (discussed previously). Every assumption and estimate in the framework can be explained, discussed, and - over time - tested. \nEven in its current early stage, I consider this a valuable property of the biological anchors framework. It means that the framework can give us timelines estimates that aren't simply rehashes of intuitions about whether it feels as though transformative AI is approaching.13\nI also think it's encouraging that even with all the guesswork, the testable \"predictions\" the framework makes as of today seem reasonable (see previous section). The framework provides a way of thinking about how it could be simultaneously true that (a) the AI systems of a decade ago didn't seem very impressive at all; (b) the AI systems of today can do many impressive things but still feel far short of what humans are able to do; (c) the next few decades - or even the next 15 years - could easily see the development of transformative AI.\nAdditionally, I think it's worth noting a couple of high-level points from Bio Anchors that don't depend on quite so many estimates and assumptions:\nIn the coming decade or so, we're likely to see - for the first time - AI models with comparable \"size\" to the human brain. \nIf AI models continue to become larger and more efficient at the rates that Bio Anchors estimates, it will probably become affordable this century to hit some pretty extreme milestones - the \"high end\" of what Bio Anchors thinks might be necessary. These are hard to summarize, but see the \"long horizon neural net\" and \"evolution anchor\" frameworks in the report. \nOne way of thinking about this is that the next century will likely see us go from \"not enough compute to run a human-sized model at all\" to \"extremely plentiful compute, as much as even quite conservative estimates of what we might need.\" Compute isn't the only factor in AI progress, but to the extent other factors (algorithms, training processes) became the new bottlenecks, there will likely be powerful incentives (and multiple decades) to resolve them.\nA final advantage of Bio Anchors is that we can continue to watch AI progress over time, and compare what we see to the report's framework. For example, we can watch for:\nWhether there are some tasks that just can't be learned, even with plenty of trial and error - or whether some tasks require amounts of training very different from what the report estimates.\nHow AI models' capabilities compare to those of animals that we are currently modeling as \"similarly sized.\" If AI models seem more capable than such animals, we may be overestimating how large a model we would need to be in order to e.g. automate science. If they seem less capable, we may be underestimating it.\nHow hardware and software are progressing, and whether AI models are getting bigger at the rate the report currently projects.\nThe next piece will summarize all of the different analyses so far about transformative AI timelines. It will then discuss a remaining reservation: that there is no robust expert consensus on this topic.\nNext in series: AI Timelines: Where the Arguments, and the \"Experts,\" Stand\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n Of course, the answer could be \"A kajillion years from now\" or \"Never.\" ↩\n For transparency, note that this is an Open Philanthropy analysis, and I am co-CEO of Open Philanthropy. ↩\n I (like Bio Anchors) generally consider the synapse count more important than the neuron count, for reasons I won't go into here. ↩\nWikipedia: \"GPT-3's full version has a capacity of 175 billion machine learning parameters ... Before the release of GPT-3, the largest language model was Microsoft's Turing NLG, introduced in February 2020, with a capacity of 17 billion parameters.\" Wikipedia doesn't state this, but I don't believe there are publicly known AI models larger than these language models (with the exception of \"mixture-of-experts models\" that I think we should disregard for these purposes, for reasons I won't go into here). Wikipedia estimates about 1 trillion synapses for a house mouse's brain; Bio Anchors's methodology for brain comparisons (based on Brain Computation) essentially equates synapses to parameters. ↩\n Bio Anchors estimates about 100 trillion parameters for the human brain, based on the fact that it has about 100 trillion synapses. ↩\n As noted above, the 10x is to leave some space for the idea that \"digital brains\" might be less efficient than human brains. See this section of the report. ↩\n For example, one approach hypothesizes that training could be made cheaper by \"meta-learning,\" discussed above; another approach hypothesizes that in order to produce transformative AI, one would need to do about as many computations as all animals in history combined, in order to re-create the progress that was made by natural selection.) ↩\n See charts from the early sections of the 2021 AI Index Report, for example. ↩\n See this section. ↩\n For a collection of links to GPT-3 demos, see this post. ↩\n In fact, he estimates that AI systems appear to use about 1000x less compute, which would match the above point in terms of suggesting that AI systems might be more efficient than animal/human brains and that the Bio Anchors estimates might be too conservative. However, he doesn't address the fact that bees arguably perform a more diverse set of tasks than the AI systems they're being compared to. ↩\n Other than the \"semi-informative priors\" method discussed previously. ↩\n Of course, this isn't to say the estimates are completely independent of intuitions - intuitions are likely to color our choices of estimates for many of the difficult-to-estimate figures. But the ability to scrutinize and debate each estimate separately is helpful here. ↩\n", "url": "https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/", "title": "Forecasting transformative AI: the \"biological anchors\" method in a nutshell", "source": "cold.takes", "source_type": "blog", "date_published": "2021-08-31", "id": "69bf47f3cd2baa17689559cc98791b19"} -{"text": "\nIn This Can’t Go On, I argued that 8200 more years of today’s growth rate would require us to sustain “multiple economies as big as today's entire world economy per atom.”\nFeedback on this bit was split between “That is so obviously impossible, 8200 years of 2% growth is an absurd idea - growth will have to slow much before then” and “Why is that impossible? With ever-increasing creativity, we could increase quality of life higher and higher, without needing to keep using more and more material resources.”\nHere I’m going to respond to the latter point, which means expanding on why 8200 years of 2% growth doesn’t look like a reasonable thing to expect. I’m going to make lots of extremely wild assumptions and talk about all kinds of weird possibilities just so that I cover even far-fetched ways for 2% growth to continue. \nIf you are already on team “Yeah, I don’t see the world economy growing that much,” you should skip this post unless you'd enjoy seeing the case made in a fair amount of detail.\nHow we COULD support “multiple world-size economies per atom”\nI do think it’s conceivable that we could support multiple world-size economies per atom. Here’s one way: \nSay that we discover some new activity, or experience, or drug, that people really, really, REALLY value.\nSpecifically, the market values it at 10^85 of today’s US dollars (that’s ten trillion trillion trillion trillion trillion trillion trillion dollars). That means it's valued about 10^71 times as much as everything the world produces in a year right now (combined).1\nThen, one person having this experience2 would mean the size of the economy is at least $10^85. And that would, indeed, be the equivalent of multiple of today’s world economies per atom.3\nTo be clear, it’s not that we would’ve crammed multiple of today’s world economies into each atom. It’s that we would’ve crammed something 10^71 times as valuable as today’s world economy into a mere 10^28 atoms that make up a human being.\nWhat would it mean, though, to value a single experience 10^71 times as much as today’s entire world economy?\nOne way of thinking about it might be: \n“A 1 in 10^71 chance of this thing being experienced would be as valuable as all of today’s world economy.” \nOr to make it a bit easier to intuit (while needing to oversimplify), “If I were risk-neutral, I’d be thrilled to accept a gamble where I would die immediately, with near certainty, in exchange for a 1 in 10^71 chance of getting to have this experience.”4\nHow near-certain would death be? Well, for starters, if all the people who have ever lived to date accepted this gamble, it would be approximately certain that they would all lose and end up with immediate death.5\nBut this really isn’t coming anywhere close to communicating how bad the odds would be for this gamble. It’s more like: if there were one person for each atom in the galaxy, and each of them took the gamble, they'd probably still all lose.6\nSo to personally take a gamble with those kinds of odds … the experience had better be REALLY good to compensate. \nWe’re not talking about “the best experience you’ve ever had” level here - it wouldn’t be sensible to value that more than an entire life, and the idea that it’s worth as much as today’s world economy seems pretty clearly wrong. \n \nWe’re talking about something just unfathomably beyond anything any human has ever experienced.\nBlowing out the numbers more\nImagine the single best second of your life, the kind of thing evoked by Letter from Utopia:\nHave you ever experienced a moment of bliss? On the rapids of inspiration maybe, your mind tracing the shapes of truth and beauty? Or in the pulsing ecstasy of love? Or in a glorious triumph achieved with true friends? Or in a conversation on a vine-overhung terrace one star-appointed night? Or perhaps a melody smuggled itself into your heart, charming it and setting it alight with kaleidoscopic emotions? Or when you prayed, and felt heard?\nIf you have experienced such a moment – experienced the best type of such a moment – then you may have discovered inside it a certain idle but sincere thought: “Heaven, yes! I didn’t realize it could be like this. This is so right, on whole different level of right; so real, on a whole different level of real. Why can’t it be like this always? Before I was sleeping; now I am awake.”\nYet a little later, scarcely an hour gone by, and the ever-falling soot of ordinary life is already covering the whole thing. The silver and gold of exuberance lose their shine, and the marble becomes dirty.\nNow imagine, implausibly, that this single second was worth as much as the entire world economy outputs in a year today. (It doesn’t seem possible that it could be worth more, since the world economy that year included that second of your life, plus the rest of your year and many other people’s years.)\nAnd now imagine a full year in which every second is as good as that second. We’ll call this the “perfect year.” According to the assumptions above, the perfect year would be no more than about 3*10^8 times as valuable as the world economy (there are about 3*10^8 seconds in a year).\nAnd now imagine that every atom in the galaxy could be a person having the perfect year. This would now be about 10^70 * (3 * 10^8) = 3*10^78 as much value as today’s world economy. 2% growth would get us there in 9150 years.\n(A crucial and perhaps counterintuitive assumption I'm making here, throughout, is that \"2% growth\" means \"2% really real growth\" - that whatever is valuable, holistically speaking, about annual world output today, we'll get 2% more of it each year. I think this is already the kind of assumption many people are making when they say we don't need more material to have ever-increasing wealth. If you think the 2% growth of the recent past is more \"fake\" than this and that it will continue in a \"fake\" way, that would be a debate for another time.)\nAnd 1200 years after that, if each year still had 2% growth, the economy would be another ~20 billion times bigger. So now, for every atom in the galaxy, there’d have to be someone whose year was in some sense ~20 billion times better (or \"more valuable\") than the perfect year. \nWe’re still only talking about ~10,000 years of 2% growth.\nNew life forms\nIt’s still conceivable! Who knows what the future will bring.\nBut at this point it’s very intuitive to me that we are not talking about anything that looks like “Humans in human bodies having human kinds of fun and fulfillment.” An economy of this value seems to require fundamentally re-engineering something about the human experience - finding some way of arranging matter that creates far more happiness, or fulfillment, or something, that we would value so astronomically more than even the heights of human experience today.\nAnd I think the most natural way for that to happen is something like: “Discovering fundamental principles behind what we value, and fundamental principles of how to arrange matter to get the most of it.” Which in turn suggests something more like “Once we have that level of understanding, we start to arrange the matter in the galaxy optimally, and quickly get close to the limits of what’s possible” than like “We grow at 2%, every year, for continuing thousands of years, even as (as would happen with e.g. digital people) we become beings who can do as much in a year as humans could do in hundreds or thousands of years.”\nBut it could still happen?\nI guess? This was never meant to be a mathematical proof of the impossibility of 2%/year growth. It’s possible in theory.\nBut at this point, seeing what a funky and fundamentally transformed galaxy it would require within 10,000 years, what is the affirmative reason to expect 2%/year growth for that long a period of time? Is it that “This is the trendline, and by default I expect the trendline to continue?”\nBut that trendline is only a couple hundred years old - why expect it to continue for another 10,000? \nWhy not, instead, expect the longer-term pattern of accelerating economic growth to be what continues, until we approach some sort of fundamental limit on how much value we can cram into a given amount of matter? Or expect growth to fall gradually from here and never reach today's level again?\nThe last couple of centuries have been a wild ride, with wealth and living conditions improving at a historically high rate. But I don’t think that gives us reason to think that this trend goes to infinity. I believe the limits are somewhere, and it looks like sometime in the next 10,000 years, we’re either going to have to approach those limits, or stagnate or collapse.\nHopefully I’ve given a sense for why it seems so unlikely that there will be 10,000 more years in the future that each have 2% or greater growth. Which would imply that each of the last 100+ years will turn out to be one of the fastest-growing 10,000 years of all time.\nIf you'd like to comment on this post, this would be a good place to do so.\nSubscribe Feedback\nFootnotes\n Today’s economy is a bit less than $10^14 per year (source). $10^85 = $10^14 * 10^71. ↩\n (And paying full price for it, in a way that gets recorded by GDP statistics, which could get a bit hairy.) ↩\n See previous estimate of 10^70 atoms in the galaxy. ↩\n This assumes that one values one’s own life not much more than a year of the world economy’s output. I do not expect that I will see enough disagreement on this point to want to write another post on the matter, but it’s possible.\nIt is also making an iffy assumption about \"risk-neutrality.\" In reality, one might personally value this experience much less than 10^71 times as much as one's own life, while still paying resources for it that would be sufficient to save an extraordinarily large number of other people's lives. It's hard to convey the same kind of magnitudes by appealing to impartiality, so I went with this intuition pump anyway; I think it does give the right basic sense of how mind-bogglingly large the value of this experience would be. ↩\n The calculation here would be: if there are 10^10 people alive today (this is \"rounding up\" from ~8 billion to 10 billion), and each has a 10^-71 (1 in 10^71) chance of winning the gamble, then each has a (1-10^-71) chance of losing the gamble. So the probability that they all lose the gamble is (1-10^-71)^(10^10), which is almost exactly 100%. ↩\n Similar calculation to the previous footnote, but with a population of 10^70 (one for each atom in the galaxy), so the probability that they all lose the gamble is (1-10^-71)^(10^70), which I think is around 90% (Excel can't actually handle numbers this big but this is what similar calculations imply). ↩\n (Footnote deleted) ↩\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/more-on-multiple-world-size-economies-per-atom/", "title": "More on “multiple world-size economies per atom”", "source": "cold.takes", "source_type": "blog", "date_published": "2021-08-27", "id": "7cd4df6ad342311e58b8c80a92b1b208"} -{"text": "\nA lie gets halfway around the world before the truth has a chance to get its pants on. - Winston Churchill\n(...Not really, though)\nI collect links that are good examples of \"how much BS is floating around out there unchecked.\" Though in many cases I haven't run everything to ground, and it could be the debunking that's BS, or both the original and the debunking ... so I do want the lesson to be \"Don't trust things you've heard,\" as opposed to \"Trust this debunking.\" (I will probably make a future post devoted to debunking debunkings.)\nNice piece debunking various memes that are supposedly based on studies: \"the less someone knows about a subject, the more they think they know\" (not actually Dunning-Kruger at all), \"money doesn’t make people happy\" (it seems like it does when making some basic adjustments to the data - I think this point is well known by now), \"people bounce back from setbacks (as well as positive events) and return to a fixed level of happiness\" (guess not) and \"type systems help in programming\" (don't know what this one is about).\nYou may have come across most of these, but here in one place are debunkings (of varying convincingness) of pretty much all of the famous old social psychology experiments that blew my mind when I was in my 20s:\nDisappointing replication of the \"marshmallow\" experiment.\nOne person argues that the Prison Experiment was a case of subjects behaving as their experimenters clearly wanted them to.\nThe Robbers Cave experiment backstory sounds particularly dicey: the experimenter tried to get two bands of boys to fight each other ala Lord of the Flies, failed miserably, tried again, succeeded, never mentioned the first time, and gained fame (despite even the 2nd try having the same issue as the Prison Experiment mentioned above).\nFinally, someone reports that many of the Milgram participants' own reports of why they had administered electric shocks provides more support for \"they didn't think the person was really being harmed\" than for Milgram's theory (link). I think this is the least compelling of critiques listed here, as people will rationalize their behavior with all kinds of stuff.\n Via an old Marginal Revolution post, here's a study claiming that none of the famous findings about happiness are robust. As far as I can tell, the central claim is basically this general idea: say that Group A consists of two people who each rate their happiness 6/10. And Group B consists of one person rating their happiness 4/10, and another 7/10. In some fundamental sense, we don't know which group has higher 'average' happiness, because for all we know, each increment on the 1-10 scale could represent an extra '10 units' of happiness or an extra '10 times as many units' of happiness, or something else. Now, sometimes we might be able to know which group has higher average happiness despite this issue (for example, a two 7's and a 6 vs. two 8's and a 7), but the authors here argue that the famous happiness findings are not robust in this way. Which I think makes sense, though I hope to read this more closely later on.\nOne of the studies I've found most mind-blowing (with video evidence!) was this seeming demonstration that chimpanzees have better working memory than humans (at least for a particular, surprising task). But oops, here is a very believable-sounding debunking claiming that chimpanzees were trained extensively on the task, and similarly trained humans keep up just fine.\nFinally, here's a 2008 study questioning the connection between exercise and certain mental health benefits in a convincing-seeming way. \"Regular exercise is associated with reduced anxious and depressive symptoms in the population at large, but the association is not because of causal effects of exercise.\" The usual result is that exercise does cause such benefits, but this one looked at twin pairs and over-time changes: \"Cross-sectional and longitudinal associations were small and were best explained by common genetic factors with opposite effects on exercise behavior and symptoms of anxiety and depression. In genetically identical twin pairs, the twin who exercised more did not display fewer anxious and depressive symptoms than the co-twin who exercised less. Longitudinal analyses showed that increases in exercise participation did not predict decreases in anxious and depressive symptoms.\"\nSubscribe Feedback\n", "url": "https://www.cold-takes.com/the-gloves-are-off-the-pants-are-on/", "title": "The gloves are off, the pants are on", "source": "cold.takes", "source_type": "blog", "date_published": "2021-08-26", "id": "67a7c6c66246a9609d03a45c13cdfd35"} -{"text": "\nAudio also available by searching Stitcher, Spotify, Google Podcasts, etc. for \"Cold Takes Audio\"\nToday’s world\nTransformative AI\nDigital people\nWorld of\nMisaligned AI\nWorld run by\nSomething else\nor\nor\nStable, galaxy-wide\ncivilization\nThis is one of 4 posts summarizing hundreds of pages of technical reports focused almost entirely on forecasting one number: the year by which transformative AI will be developed.1\nBy \"transformative AI,\" I mean \"AI powerful enough to bring us into a new, qualitatively different future.\" I specifically focus on what I'm calling PASTA: AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement.\nThe sooner PASTA might be developed, the sooner the world could change radically, and the more important it seems to be thinking today about how to make that change go well vs. poorly.\nIn this post and the next, I will talk about the forecasting methods underlying my current view: I believe there's more than a 10% chance we'll see something PASTA-like enough to qualify as \"transformative AI\" within 15 years (by 2036); a ~50% chance we'll see it within 40 years (by 2060); and a ~2/3 chance we'll see it this century (by 2100).\nBelow, I will:\nDiscuss what kind of forecast I'm going for. \nI'm not sure whether it will feel as though transformative AI is \"on the way\" long before it arrives. I'm hoping, instead, that we can use trends in key underlying facts about the world (such as AI capabilities, model size, etc.) to forecast a qualitatively unfamiliar future. \n \nAn analogy for this sort of forecasting would be something like: \"This water isn't bubbling, and there are no signs of bubbling, but the temperature has gone from 70° Fahrenheit2 to 150°, and if it hits 212°, the water will bubble.\" Or: \"It's like forecasting school closures and overbooked hospitals, when there aren't any yet, based on trends in reported infections.\"\nDiscuss whether we can look for trends in how \"impressive\" or \"capable\" AI systems are. I think this approach is unreliable: (a) AI progress may not \"trend\" in the way we expect; (b) in my experience, different AI researchers have radically different intuitions about which systems are impressive or capable, and how progress is going. \nBriefly discuss Grace et al 2017, the best existing survey of AI researchers on transformative AI timelines. Its conclusions broadly seem in line with my own forecasts, though there are signs the researchers weren't thinking very hard about the questions.\nThe next piece in this series will focus on Ajeya Cotra's \"Forecasting Transformative AI with Biological Anchors\" (which I'll abbreviate below as \"Bio Anchors\"), the forecast I find most informative for transformative AI.\nWhat kind of forecast am I going for?\nThere are a couple of ways in which forecasting transformative AI is different from the kind of forecasting we might be used to.\nFirst, I'm forecasting over very long time horizons (decades), unlike e.g. a weather forecast (days) or an election forecast (months). This makes the task quite a bit harder,3 and harder for outsiders to evaluate since I don't have a clearly relevant track record of making forecasts on similar topics.\nSecond, I lack rich, clearly relevant data sources, and I can't look back through a bunch of similar forecasts from the past. FiveThirtyEight's election forecasts look at hundreds of polls, and they have a model of how well polls have predicted elections in the past. Forecasting transformative AI needs to rely more on intuition, guesswork and judgment, in terms of determining what data is most relevant and how it's relevant.\nFinally, I'm trying to forecast a qualitatively unfamiliar future. Transformative AI - and the strange future it comes with - doesn't feel like something we're \"trending toward\" year to year.\nIf I were trying to forecast when the world population would hit 10 billion, I could simply extrapolate existing trends of world population. World population itself is known to be growing and can be directly estimated. In my view, extrapolating out a long-running trend is one of the better ways to make a forecast.\nWhen FiveThirtyEight makes election forecasts, there's a background understanding that there's going to be an election on a certain date, and whoever wins will take office on another date. We all buy into that basic framework, and there's a general understanding that better polling means a better chance of winning.\nBy contrast, transformative AI - and the strange future it comes with - isn't something we're \"headed for\" in any clearly measurable way. There's no clear metric like \"transformativeness of AI\" or \"weirdness of the world\" that's going up regularly every year such that we can project it out into the future and get the date that something like PASTA will be developed. \nPerhaps for some, these points gives enough reason to ignore the whole possibility of transformative AI, or assume it's very far away. But I don't think this is a good idea, for a couple of reasons. \nFirst, I have a background view that something like PASTA is in a sense \"inevitable,\" assuming continued advances in society and computing. The basic intuition here - which I could expand on if there's interest - is that human brains are numerous and don't seem to need particular rare materials to produce, so it should be possible at some point to synthetically replicate the key parts of their functionality.4\nAt the same time, I'm not confident that PASTA will feel qualitatively as though it's \"on the way\" well before it arrives. (More on this below.) So I'm inclined to look for ways to estimate when we can expect this development, despite the challenges, and despite the fact that it doesn't feel today as though it's around the corner.\nI think there are plenty of example cases where a qualitatively unfamiliar future could be seen in advance by plotting the trend in some underlying, related facts about the world. A few that come to mind:\nWhen COVID-19 first emerged, a lot of people had trouble taking it seriously because it didn't feel as though we were \"trending toward\" or \"headed for\" a world full of overflowing hospitals, office and school closures, etc. At the time (say, January 2020), there were a relatively small number of cases, an even smaller number of deaths, and no qualitative sense of a global emergency. The only thing alarming about COVID-19, at first, was that case counts were growing at a fast exponential rate (though the overall number of cases was still small). But it was possible to extrapolate from the fast growth in case counts to a risk of a global emergency, and some people did. (And some didn't.)\nClimatologists forecast a global rise in temperatures that's significantly more than what we've seen over the past few decades, and could have major consequences far beyond what we're seeing today. They do this by forecasting trends in greenhouse gas emissions and extrapolating from there to temperature and consequences. If you simply tried to ask \"How fast is the temperature rising?\" or \"Are hurricanes getting worse?\", and based all your forecasts of the future on those, you probably wouldn't be forecasting the same kinds of extreme events around 2100.5\nTo give a more long-run example, we can project a date by which the sun will burn out, and conclude that the world will look very different by that date than it does now, even though there's no trend of things getting colder or darker today.\nCOVID-19 cases from WHO. Workplace closures are from this OWiD data, simply scored as 1 for \"recommended,\" 2 for \"required for some,\" 3 for \"required for all but key workers\" and summed across all countries.\nAn analogy for this sort of forecasting would be something like: \"This water isn't bubbling, and there are no signs of bubbling, but the temperature has gone from 70° Fahrenheit6 to 150°, and if it hits 212°, the water will bubble.\" \nIdeally, I can find some underlying factors that are changing regularly enough for us to predict them (such as growth in the size and cost of AI models), and then argue that if those factors reach a certain point, the odds of transformative AI will be high. \nYou can think of this approach as answering the question: \"If I think something like PASTA is inevitable, and I'm trying to guess the timing of it using a few different analysis methods, what do I guess?\" We can separately ask \"And is there reason that this guess is implausible, untrustworthy, or too 'wild?'\" - this was addressed in the previous piece in this series.\nSubjective extrapolations and \"AI impressiveness\"\nFor a different presentation of some similar content, see this section of Bio Anchors.\nIf we're looking for some underlying factors in the world that predict when transformative AI is coming, perhaps the first thing we should look for is trends in how \"impressive\" or \"capable\" AI systems are.\nThe easiest version of this would be if the world happened to shake out such that:\nOne day, for the first time, an AI system managed to get a passing grade on a 4th-grade science exam.\nThen we saw the first AI passing (and then acing) a 5th grade exam, then 6th grade exam, etc.\nThen we saw the first AI earning a PhD, then the first AI writing a published paper, etc. all the way up to the first AI that could do Nobel-Prize-worthy science work.\nThis all was spread out regularly over the decades, so we could clearly see the state of the art advancing from 4th grade to 5th grade to 6th grade, all the way up to \"postdoc\" and beyond. And all of this happened slowly and regularly enough that we could start putting a date on \"full-blown scientist AI\" several decades in advance.\nIt would be very convenient - I almost want to say \"polite\" - of AI systems to advance in this manner. It would also be \"polite\" if AI advanced in the way that some people seem to casually imagine it will: first taking over jobs like \"truck driver\" and \"assembly line worker,\" then jobs like \"teacher\" and \"IT support,\" and then jobs like \"doctor\" and \"lawyer,\" before progressing to \"scientist.\" \nEither of these would give us plenty of lead time and a solid basis to project when science-automating AI is coming. Unfortunately, I don't think we can count on such a thing. \nAI seems to progress very differently from humans. For example, there were superhuman AI chess players7 long before there was AI that could reliably tell apart pictures of dogs and cats.8\nOne possibility is that AI systems will be capable of the hardest intellectual tasks insects can do, then of the hardest tasks mice and other small mammals can do, then monkeys, then humans - effectively matching the abilities of larger and larger brains. If this happened, we wouldn't necessarily see many signs of AI being able to e.g. do science until we were very close. Matching a 4th-grader might not happen until the very end.\nAnother possibility is that AI systems will be able to do anything that a human can do within 1 second, then anything that a human can do within 10 seconds, etc. This could also be quite a confusing progression that makes it non-obvious how to forecast progress.\nActually, if we didn't already know how humans tend to mature, we might find a child's progress to be pretty confusing and hard to extrapolate. Watching someone progress from birth to age 8 wouldn't necessarily give you any idea that they were, say, 1/3 of the way to being able to start a business, make an important original scientific discovery, etc. (Even knowing the usual course of human development, it's hard to tell from observing an 8-year-old what professional-level capabilities they could/will end up with in adulthood.)\nOverall, it's quite unclear how we should think about the spectrum from \"not impressive/capable\" to \"very impressive/capable\" for AI. And indeed, in my experience, different AI researchers have radically different intuitions about which systems are impressive or capable, and how progress is going. I've often had the experience of seeing one AI researcher friend point to some new result and say \"This is huge, how can anyone not see how close we're getting to powerful AI?\" while another says \"This is a minor advance with little significance.\"9\nIt would be great if we could forecast the year transformative AI will be developed, by using a chart like this (from Bio Anchors; \"TAI\" means \"transformative AI\"):\nBut as far as I can tell, there's no way to define the y-axis that wouldn't be fiercely debated between experts.\nSurveying experts\nOne way to deal with this uncertainty and confusion would be to survey a large number of experts and simply ask them when they expect transformative AI to be developed. We might hope that each of the experts (or at least, many of them) is doing their own version of the \"impressiveness extrapolation\" above - or if not, that they're doing something else that can help them get a reasonable estimate. By averaging many estimates, we might get an aggregate that reflects the \"wisdom of crowds.\"10\nI think the best version of this exercise is Grace et al 2017, a survey of 352 AI researchers that included a question about “when unaided machines can accomplish every task better and more cheaply than human workers\" (which would presumably include tasks that advance scientific and technological development, and hence would qualify as PASTA). The two big takeaways from this survey, according to Bio Anchors and me, are:\nA ~20% probability of this sort of AI by 2036; a ~50% probability by 2060; a ~70% probability by 2100. These match the figures I give in the introduction.\nMuch later estimates for slightly differently phrased questions (posed to a smaller subset of respondents), implying (to me) that the researchers simply weren't thinking very hard about the questions.11\nMy bottom line: this evidence is consistent with my current probabilities, though potentially not very informative. The next piece in this series will be entirely focused on Ajeya Cotra's \"Forecasting Transformative AI with Biological Anchors,\" the forecasting method I find most informative here.\nNext in series: Forecasting transformative AI: the \"biological anchors\" method in a nutshell\nFootnotes\n Of course, the answer could be \"A kajillion years from now\" or \"Never.\" ↩\n Centigrade equivalents for this sentence: 21°, 66°, 100° ↩\n Some notes on longer-term forecasting here. ↩\n See also this piece for a bit of a more fleshed out argument along these lines, which I don't agree with fully as stated (I don't think it presents a strong case for transformative AI soon), but which I think gives a good sense of my intuitions about in-principle feasibility. Also see On the Impossibility of Supersized Machines for some implicit (joking) responses to many common arguments for why transformative AI might be impossible to create.  ↩\n For example, see the temperature chart here - the lowest line seems like it would be a reasonable projection, if temperature were the only thing you were looking at. ↩\n Centigrade equivalents for this sentence: 21°, 66°, 100° ↩\n1997. ↩\n The Kaggle \"dogs vs. cats\" challenge was created in 2013. ↩\n From Bio Anchors: \"We have heard ML experts with relatively short timelines argue that AI systems today can essentially see as well as humans, understand written information, and beat humans at almost all strategy games, and the set of things they can do is expanding rapidly, leading them to expect that transformative AI would be attainable in the next decade or two by training larger models on a broader distribution of ML problems that are more targeted at generating economic value. Conversely, we have heard ML experts with relatively long timelines argue that ML systems require much more data to learn than humans do, are unable to transfer what they learn in one context to a slightly different context, and don’t seem capable of much structured logical and causal reasoning; this leads them to believe we would need to make multiple major breakthroughs to develop TAI. At least one Open Philanthropy technical advisor has advanced each of these perspectives.\" ↩\nWikipedia: \"The classic wisdom-of-the-crowds finding ... At a 1906 country fair in Plymouth, 800 people participated in a contest to estimate the weight of a slaughtered and dressed ox. Statistician Francis Galton observed that the median guess, 1207 pounds, was accurate within 1% of the true weight of 1198 pounds.\" ↩\nBio Anchors: \nSome researchers were asked to forecast “HLMI” as defined above [high-level machine intelligence, which I would take to include something like PASTA], while a randomly-selected subset was instead asked to forecast “full automation of labor”, the time when “all occupations are fully automatable.” Despite the fact that achieving HLMI seems like it should quickly lead to full automation of labor, the median estimate for full automation of labor was ~2138 while the median estimate for HLMI was ~2061, almost 80 years earlier. \nRandom subsets of respondents were asked to forecast when individual milestones (e.g. laundry folding, human-level StarCraft, or human-level math research) would be achieved. The median year by which respondents expected machines to be able to automate AI research was ~2104, while the median estimate for HLMI was ~2061 -- another clear inconsistency because “AI research” is a task done by human workers. ↩\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\n", "url": "https://www.cold-takes.com/are-we-trending-toward-transformative-ai-how-would-we-know/", "title": "Are we \"trending toward\" transformative AI? (How would we know?)", "source": "cold.takes", "source_type": "blog", "date_published": "2021-08-24", "id": "9f33e5d1cd980d06be57e967c1d0cfeb"} -{"text": "\nApparently Marcus Smart is really good at \"taking charges,\" which means positioning himself so that an offensive player will run into him and foul him. This requires getting into the right position and then specifically not moving for a second (the latter is necessary to trigger the ref's classification as an offensive foul). It's very funny to watch him do this (he's #36, the guy who takes a crotch to the face):\nMarcus Smart has become my new central example of someone who enthusiastically takes one for the team.\nA friend summarizes something he saw on ESPN (see also the Wikipedia entry, and video):\nthis is an amazing sports story i never knew about. Texas HS football. It was on ESPN b/c it's the 20th anniversary.\n \"... giving the Lions a seemingly insurmountable 41–17 lead with only 3:03 remaining.\n However, on a two-play 70-yard drive, the Panthers scored a touchdown to bring the score to 41–23 (after a failed two-point conversion) with 2:36 on the clock. The Panthers then successfully executed three onside kicks in a row, recovering the ball each time and then driving down the field for a touchdown on each occasion.\n ... giving the Panthers a 44–41 comeback lead with only 24 seconds remaining.\n In a final twist, however, after the Panthers did a regular kickoff, the Lions' returner Roderick Dunn caught the ball at his own three-yard line and took it 97 yards for a touchdown at 0:11 and a 48–44 Lions victory.\n He was the very same player who had muffed the reception of the final two onside kicks.'\n ... interviews with the players from today [were on ESPN, not the Wikipedia page]:\n -- the guys from the team that lost were still crying about it\n -- the guy that ran back the kick said it was one of the greatest moments in his life and he still thinks about when he's down. the lesson, he says, is 'never give up.'\nI wasn't able to easily verify a lot of this, but here is a very short, sweet story about Nav Bhatia, perhaps the first person inducted into the NBA Hall of Fame and given a championship ring for ... being a really dedicated fan? Apparently he hasn't missed a Toronto Raptors home game in 25 years. \nWatch this kid's reaction to getting a spare racket from tennis legend Novak Djokovich. Sports!\nSuper Nice Soccer Guy Rewarded For His Compassion With Easiest Goal Of His Life.\nVery funny article on now-retired former star NFL quarterback Andrew Luck: apparently he sincerely congratulated people who tackle him hard, and this was completely unique for a quarterback and was seen as extremely unnerving by the defenders. Doesn't mean he enjoyed the tackles though - he retired at 29.\nThe article completely delivers on the headline: 36-Year-Old Accountant Called In As Emergency NHL Goalie — And He Crushed It.\nThat story about two athletes (friends) who shared the gold medal is too recent and too popular for Cold Links, so maybe I'll link to it in a year or two. I did manage to find someone complaining about it.\nReaders sent in cool new links on \"intensity\" in sports, which I'll put out another time (only heartwarming stuff allowed in this one).\nSubscribe Feedback\n", "url": "https://www.cold-takes.com/cold-links-heartwarming-sports-stuff/", "title": "Cold Links: heartwarming sports stuff", "source": "cold.takes", "source_type": "blog", "date_published": "2021-08-19", "id": "8c71328121900389394a156b97387fae"} -{"text": "\nAudio also available by searching Stitcher, Spotify, Google Podcasts, etc. for \"Cold Takes Audio\"\nToday’s world\nTransformative AI\nDigital people\nWorld of\nMisaligned AI\nWorld run by\nSomething else\nor\nor\nStable, galaxy-wide\ncivilization\nThis is one of 4 posts summarizing hundreds of pages of technical reports focused almost entirely on forecasting one number: the year by which transformative AI will be developed.1 \nBy \"transformative AI,\" I mean \"AI powerful enough to bring us into a new, qualitatively different future.\" I specifically focus on what I'm calling PASTA: AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement.\nThe sooner PASTA might be developed, the sooner the world could change radically, and the more important it seems to be thinking today about how to make that change go well vs. poorly.\nIn future pieces, I'm going to lay out two methods of making a \"best guess\" at when we can expect transformative AI to be developed. But first, in this piece, I'm going to address the question: how good do these forecasting methods need to be in order for us to take them seriously? In other words, what is the \"burden of proof\" for forecasting transformative AI timelines?\nWhen someone forecasts transformative AI in the 21st century - especially when they are clear about the full consequences it would bring - a common intuitive response is something like: \"It's really out-there and wild to claim that transformative AI is coming this century. So your arguments had better be really good.\" \nI think this is a very reasonable first reaction to forecasts about transformative AI (and it matches my own initial reaction). But I've tried to examine what's driving the reaction and how it might be justified, and having done so, I ultimately don't agree with the reaction. \nI think there are a number of reasons to think that transformative AI - or something equally momentous - is somewhat likely this century, even before we examine details of AI research, AI progress, etc.\nI also think that on the kinds of multi-decade timelines I'm talking about, we should generally be quite open to very wacky, disruptive, even revolutionary changes. With this backdrop, I think that specific well-researched estimates of when transformative AI is coming can be credible, even if they involve a lot of guesswork and aren't rock-solid.\nThis post tries to explain where I'm coming from.\nBelow, I will (a) get a bit more specific about which transformative AI forecasts I'm defending; then (b) discuss how to formalize the \"That's too wild\" reaction to such forecasts; then (c) go through each of the rows below, each of which is a different way of formalizing it.\n\"Burden of proof\" angle\nKey in-depth pieces (abbreviated titles)\nMy takeaways\nIt's unlikely that any given century would be the \"most important\" one. (More)\n \nHinge; Response to Hinge\nWe have many reasons to think this century is a \"special\" one before looking at the details of AI. Many have been covered in previous pieces; another is covered in the next row. \n \nWhat would you forecast about transformative AI timelines, based only on basic information about (a) how many years people have been trying to build transformative AI; (b) how much they've \"invested\" in it (in terms of the number of AI researchers and the amount of computation used by them); (c) whether they've done it yet (so far, they haven't)? (More)\n \nSemi-informative Priors\nCentral estimates: 8% by 2036; 13% by 2060; 20% by 2100.2 In my view, this report highlights that the history of AI is short, investment in AI is increasing rapidly, and so we shouldn't be too surprised if transformative AI is developed soon. \n \nBased on analysis of economic models and economic history, how likely is 'explosive growth' - defined as >30% annual growth in the world economy - by 2100? Is this far enough outside of what's \"normal\" that we should doubt the conclusion? (More)\n \nExplosive Growth, Human Trajectory\nHuman Trajectory projects the past forward, implying explosive growth by 2043-2065.\nExplosive Growth concludes: \"I find that economic considerations don’t provide a good reason to dismiss the possibility of TAI being developed in this century. In fact, there is a plausible economic perspective from which sufficiently advanced AI systems are expected to cause explosive growth.\"\n \n\"How have people predicted AI ... in the past, and should we adjust our own views today to correct for patterns we can observe in earlier predictions? ... We’ve encountered the view that AI has been prone to repeated over-hype in the past, and that we should therefore expect that today’s projections are likely to be over-optimistic.\" (More)\n \nPast AI Forecasts\n\"The peak of AI hype seems to have been from 1956-1973. Still, the hype implied by some of the best-known AI predictions from this period is commonly exaggerated.\" \n \nFor transparency, note that the reports for the latter three rows are all Open Philanthropy analyses, and I am co-CEO of Open Philanthropy.\nSome rough probabilities\nHere are some things I believe about transformative AI, which I'll be trying to defend:\nI think there's more than a 10% chance we'll see something PASTA-like enough to qualify as \"transformative AI\" within 15 years (by 2036); a ~50% chance we'll see it within 40 years (by 2060); and a ~2/3 chance we'll see it this century (by 2100). \nConditional on the above, I think there's at least a 50% chance that we'll soon afterward see a world run by digital people or misaligned AI or something else that would make it fair to say we have \"transitioned to a state in which humans as we know them are no longer the main force in world events.\" (This corresponds to point #1 in my \"most important century\" definition in the roadmap.)\nAnd conditional on the above, I think there's at least a 50% chance that whatever is the main force in world events will be able to create a stable galaxy-wide civilization for billions of years to come. (This corresponds to point #2 in my \"most important century\" definition in the roadmap.)\nI've also put a bit more detail on what I mean by the \"most important century\" here.\nFormalizing the \"That's too wild\" reaction\nOften, someone states a view that I can't immediately find a concrete flaw in, but that I instinctively think is \"just too wild\" to be likely. For example, \"My startup is going to be the next Google\" or \"College is going to be obsolete in 10 years\" or \"As President, I would bring both sides together rather than just being partisan.\"\nI hypothesize that the \"This is too wild\" reaction to statements like these can usually be formalized along the following lines: \"Whatever your arguments for X being likely, there is some salient way of looking at things (often oversimplified, but relevant) that makes X look very unlikely.\"\nFor the examples I just gave:\n\"My startup is going to be the next Google.\" There are large numbers of startups (millions?), and the vast majority of them don't end up anything like Google. (Even when their founders think they will!)\n\"College is going to be obsolete in 10 years.\" College has been very non-obsolete for hundreds of years.\n\"As President, I would bring both sides together rather than just being partisan.\" This is a common thing for would-be US Presidents to say, but partisanship seems to have been getting worse for at least a couple of decades nonetheless.\nEach of these cases establishes a sort of starting point (or \"prior\" probability) and \"burden of proof,\" and we can then consider further evidence that might overcome the burden. That is, we can ask things like: what makes this startup different from the many other startups that think they can be the next Google? What makes the coming decade different from all the previous decades that saw college stay important? What's different about this Presidential candidate from the last few?\nThere are a number of different ways to think about the burden of proof for my claims above: a number of ways of getting a prior (\"starting point\") probability, that can then be updated by further evidence. \nMany of these capture different aspects of the \"That's too wild\" intuition, by generating prior probabilities that (at least initially) make the probabilities I've given look too high. \nBelow, I will go through a number of these \"prior probabilities,\" and examine what they mean for the \"burden of proof\" on forecasting methods I'll be discussing in later posts.\nDifferent angles on the burden of proof\n\"Most important century\" skepticism\nOne angle on the burden of proof is along these lines:\nHolden claims a 15-30% chance that this is the \"most important century\" in one sense or another.3\nBut there are a lot of centuries, and by definition most of them can't be the most important. Specifically: \nHumans have been around for 50,000 to ~5 million years, depending on how you define \"humans.\"4 That's 500 to 50,000 centuries. \nIf we assume that our future is about as long as our past, then there are 1,000 to 100,000 total centuries. \nSo the prior (starting-point) probability for the \"most important century\" is 1/100,000 to 1/1,000.\nIt's actually worse than that: Holden has talked about civilization lasting for billions of years. That's tens of millions of centuries, so the prior probability of \"most important century\" is less than 1/10,000,000.\n(Are We Living at the Hinge of History? argues along these general lines, though with some differences.5)\nThis argument feels like it is pretty close to capturing my biggest source of past hesitation about the \"most important century\" hypothesis. However, I think there are plenty of markers that this is not an average century, even before we consider specific arguments about AI.\nOne key point is emphasized in my earlier post, All possible views about humanity's future are wild. If you think humans (or our descendants) have billions of years ahead of us, you should think that we are among the very earliest humans, which makes it much more plausible that our time is among the most important. (This point is also emphasized in Thoughts on whether we're living at the most influential time in history as well as the comments on an earlier version of \"Are We Living at the Hinge of History?\".)\nAdditionally, while humanity has existed for a few million years, for most of that time we had extremely low populations and very little in the way of compounding technological progress. Human civilization started about 10,000 years ago, and since then we've already gotten to the point of building digital programmable computers and exploring our solar system. \nWith these points in mind, it seems reasonable to think we will eventually launch a stable galaxy-wide civilization, sometime in the next 100,000 years (1000 centuries). Or to think there's a 10% chance we will do so sometime in the next 10,000 years (100 centuries). Either way, this implies that a given century has a ~1/1,000 chance of being the most important century for the launch of that civilization - much higher than the figures given earlier in this section. It's still ~100x off from the numbers I gave above, so there's still a burden of proof.\nThere are further reasons to think this particular century is unusual. For example, see This Can't Go On:\nThe total size of the world economy has grown more in the last 2 centuries than in all of the rest of history combined.\nThe current economic growth rate can't be sustained for more than another 80 centuries or so. (And as discussed below, if its past accelerating trend resumed, it would imply explosive growth and hitting the limits of what's possible this century.)\nIt's plausible that science has advanced more in the last 5 centuries than in the rest of history combined.\nA final point that makes our time special: we're talking about when to expect transformative AI, and we're living very close in time to the very beginnings of efforts on AI. In well under 1 century, we've gone from the first programmable electronic general-purpose computer to AI models that can compete with humans at speech recognition,6 image classification and much more.\nMore on the implications of this in the next section.\nThanks to María Gutiérrez Rojas for this graphic. The top timeline illustrates how recent major milestones for computing and AI are. Below it are (cropped) other timelines showing how significant this few-hundred-year period (more at This Can't Go On), and this era (more at All Possible Views About Humanity's Future Are Wild), appear to be.\nSemi-informative priors\nReport on Semi-informative Priors (abbreviated in this piece as \"Semi-informative Priors\") is an extensive attempt to forecast transformative AI timelines while using as little information about the specifics of AI as possible. So it is one way of providing an angle on the \"burden of proof\" - that is, establishing a prior (starting-point) set of probabilities for when transformative AI will be developed, before we look at the detailed evidence.\nThe central information it uses is about how much effort has gone into developing AI so far. The basic idea: \nIf we had been trying and failing at developing transformative AI for thousands of years, the odds of succeeding in the coming decades would be low. \nBut if we've only been trying to develop AI systems for a few decades so far, this means the coming decades could contain a large fraction of all the effort that has ever been put in. The odds of developing it in that time are not all that low. \nOne way of thinking about this is that before we look at the details of AI progress, we should be somewhat agnostic about whether developing transformative AI is relatively \"easy\" (can be done in a few decades) or \"hard\" (takes thousands of years). Since things are still early, the possibility that it's \"easy\" is still open.\nA bit more on the report's approach and conclusions:\nAngle of analysis. The report poses the following question (paraphrased): \"Suppose you had gone into isolation on the day that people started investing in building AI systems. And now suppose that you've received annual updates on (a) how many years people have been trying to build transformative AI; (b) how much they've 'invested' in it (in terms of time and money); (c) whether they've succeeded yet (so far, they haven't). What can you forecast about transformative AI timelines, having only that information, as of 2021?\"\nIts methods take inspiration from the Sunrise Problem: \"Suppose you knew nothing about the universe except whether, on each day, the sun has risen. Suppose there have been N days so far, and the sun has risen on all of them. What is the probability that the sun will rise tomorrow?\" You don't need to know anything about astronomy in order to get a decent answer to this question - there are simple mathematical methods for estimating the probability that X will happen tomorrow, based on the fact that X has happened each day in the past. \"Semi-informative Priors\" extends these mathematical methods in order to adapt them to transformative AI timelines. (In this case, \"X\" is \"Failing to develop transformative AI, as we have in the past.\")\nConclusions. I'm not going to go heavily into the details of how the analysis works (see the blog post summarizing the report for more detail), but the report's conclusions include the following:\nIt puts the probability of artificial general intelligence (AGI, which would include PASTA) by 2036 between 1-18%, with a best guess of 8%.\nIt puts the probability of AGI by 2060 at around 3-25% (best guess ~13%), and the probability of AGI by 2100 at around 5-35%, best guess 20%.\nThese are lower than the probabilities I give above, but not much lower. This implies that there isn't an enormous burden of proof when bringing in additional evidence about the specifics of AI investment and progress.\nNotes on regime start date. Something interesting here is that the report is less sensitive than one might think about how we define the \"start date\" for trying to develop AGI. (See this section of the full report.) That is:\nBy default, \"Semi-informative Priors\" models the situation as if humanity started \"trying\" to build AGI in 1956.7 This implies that efforts are only ~65 years old, so the coming decades will represent a large fraction of the effort.\nBut the report also looks at other measures of \"effort to build AGI\" - notably, researcher-time and \"compute\" (processing power). Even if you want to say that we've been implicitly trying to build AGI since the beginning of human civilization ~10,000 years ago, the coming decades will contain a large chunk of the research effort and computation invested in trying to do so.\nBottom line on this section.\nOccasionally I'll hear someone say something along the lines of \"We've been trying to build transformative AI for decades, and we haven't yet - why do you think the future will be different?\" At a minimum, this report reinforces what I see as the common-sense position that a few decades of \"no transformative AI yet, despite efforts to build it\" doesn't do much to argue against the possibility that transformative AI will arrive in the next decade or few.\nIn fact, in the scheme of things, we live extraordinarily close in time to the beginnings of attempts at AI development - another way in which our century is \"special,\" such that we shouldn't be too surprised if it turns out to be the key one for AI development.\nEconomic growth\nAnother angle on the burden of proof is along these lines:\nIf PASTA were to be developed anytime soon, and if it were to have the consequences outlined in this series of posts, this would be a massive change in the world - and the world simply doesn't change that fast. \nTo quantify this: the world economy has grown at a few percent per year for the last 200+ years, and PASTA would imply a much faster growth rate, possibly 100% per year or above. \nIf we were moving toward a world of explosive economic growth, economic growth should be speeding up today. It's not - it's stagnating, at least in the most developed economies. If AI were really going to revolutionize everything, the least it could be doing now is creating enough value - enough new products, transactions and companies - to make overall US economic growth speed up. \nAI may lead to cool new technologies, but there's no sign of anything nearly as momentous as PASTA would be. Going from where we are to where PASTA would take us is the kind of sudden change that hasn't happened in the past, and is unlikely to happen in the future.\n(If you aren't familiar with economic growth, you may want to read my brief explainer before continuing.)\nI think this is a reasonable perspective, and it especially makes me skeptical of very imminent forecasts for transformative AI (2036 and earlier).\nMy main response is that the picture of steady growth - \"the world economy growing at a few percent per year\" - gets a lot more complicated when we pull back and look at all of economic history, as opposed to just the last couple of centuries. From that perspective, economic growth has mostly been accelerating,8 and projecting the acceleration forward could lead to very rapid economic growth in the coming decades.\nI wrote about this previously in The Duplicator and This Can't Go On; here I'll very briefly recap the key reports that I cited there.\nCould Advanced AI Drive Explosive Economic Growth? explicitly asks the question, \"How likely is 'explosive growth' - defined as >30% annual growth in the world economy - by 2100?\" It considers arguments on both sides, including both (a) the long view of history that shows accelerating growth; (b) the fact that growth has been remarkably stable over the last ~200 years, implying that something may have changed. \nIt concludes: \"the possibilities for long-run growth are wide open. Both explosive growth and stagnation are plausible.\" \nModeling the Human Trajectory asks what future we can expect if we extrapolate out existing trends over the course of economic history. The answer is explosive growth by 2043-2065 - not too far from what my probabilities above suggest. This implies to me that the lack of economic acceleration over the last ~200 years could be a \"blip\" - soon to be resolved by technology development that restores the feedback loop (discussed in The Duplicator) that can cause acceleration to continue.\nTo be clear, there are also good reasons not to put too much weight on this as a projection,9 and I am presenting it more as a perspective on the \"burden of proof\" than as a mainline forecast for when PASTA will be developed.\nHistory of \"AI hype\"\nAnother angle on the burden of proof: I sometimes hear comments along the lines of \"AI has been overhyped many times in the past, and transformative AI10 is constantly 'just around the corner' according to excited technologists. Your estimates are just the latest in this tradition. Since past estimates were wrong, yours probably are too.\" \nHowever, I don't think the history of \"AI hype\" bears out this sort of claim. What should we learn from past AI forecasts? reviewed histories of AI to try to understand what the actual historical pattern of \"AI hype\" has been. \nIts summary gives the following impressions (note that \"HLMI,\" or \"human-level machine intelligence,\" is a fairly similar idea to PASTA):\nThe peak of AI hype seems to have been from 1956-1973. Still, the hype implied by some of the best-known AI predictions from this period is commonly exaggerated.\nAfter ~1973, few experts seemed to discuss HLMI (or something similar) as a medium-term possibility, in part because many experts learned from the failure of the field’s earlier excessive optimism.\nThe second major period of AI hype, in the early 1980s, seems to have been more about the possibility of commercially useful, narrow-purpose “expert systems,” not about HLMI (or something similar) ... \nIt’s unclear to me whether I would have been persuaded by contemporary critiques of early AI optimism, or whether I would have thought to ask the right kinds of skeptical questions at the time. The most substantive critique during the early years was by Hubert Dreyfus, and my guess is that I would have found it persuasive at the time, but I can’t be confident of that.\nMy summary is that it isn't particularly fair to say that there have been many waves of separate, over-aggressive forecasts about transformative AI. Expectations were probably too high in the 1956-1973 period, but I don't think there is much reason here to impose a massive \"burden of proof\" on well-researched estimates today.\nOther angles on the burden of proof\nHere are some other possible ways of capturing the \"That's too wild\" reaction:\n\"My cause is very important\" claims. Many people - throughout the world today, and throughout history - claim or have claimed that whatever issue they're working on is hugely important, often that it could have global or even galaxy-wide stakes. Most of them have to be wrong.\nHere I think the key question is whether this claim is supported by better arguments, and/or more trustworthy people, than other \"My cause is very important\" claims. If you're this deep into reading about the \"most important century\" hypothesis, I think you're putting yourself in a good position to answer this question for yourself.\nExpert opinion will be covered extensively in future posts. For now, my main position is that the claims I'm making neither contradict a particular expert consensus, nor are supported by one. They are, rather, claims about topics that simply have no \"field\" of experts devoted to studying them. Some people might choose to ignore any claims that aren't actively supported by a robust expert consensus; but given the stakes, I don't think that is what we should be doing in this case. \n(That said, the best available survey of AI researchers has conclusions that seem broadly consistent with mine, as I'll discuss in the next post.)\nUncaptured \"That's too wild\" reactions. I'm sure this piece hasn't captured every possible angle that could be underlying a \"That's too wild\" reaction. (Though not for lack of trying!) Some people will simply have irreducible intuitions that the claims in this series are too wild to take seriously.\nA general take on these angles. Something that bugs me about most of the angles in this section is that they seem too general. If you simply refuse (absent overwhelming evidence) to believe any claim that fits a \"my cause is very important\" pattern, or isn't already backed by a robust expert consensus, or simply sounds wild, that seems like a dangerous reasoning pattern. Presumably some people, sometimes, will live in the most important century; we should be suspicious of any reasoning patterns that would reliably11 make these people conclude that they don't.\nNext in series: Are we \"trending toward\" transformative AI? (How would we know?)\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\n<\nFootnotes\n Of course, the answer could be \"A kajillion years from now\" or \"Never.\" ↩\n Technically, these probabilities are for “artificial general intelligence”, not transformative AI. The probabilities for transformative AI could be higher if it’s possible to have transformative AI without artificial general intelligence, e.g. by via something like PASTA. ↩\n This corresponds to the second two bullet points from this section. ↩\n From Wikipedia: \"Genetic measurements indicate that the ape lineage which would lead to Homo sapiens diverged from the lineage that would lead to chimpanzees and bonobos, the closest living relatives of modern humans, around 4.6 to 6.2 million years ago.[23] Anatomically modern humans arose in Africa about 300,000 years ago,[24] and reached behavioural modernity about 50,000 years ago.[25]\" ↩\n E.g., it emphasizes the odds of being among the most important \"people\" instead of \"centuries.\" ↩\n I don't have a great single source for this, although you can see this paper. My informal impression from talking to people in the field is that AI speech recognition is at least quite close to human-level, if not better. ↩\n \"The field of AI is largely held to have begun in Dartmouth in 1956\" ↩\n There is an open debate on whether past economic data actually shows sustained acceleration, as opposed to a series of very different time periods with increasing growth rates. I discuss how the debate could change my conclusions here. ↩\n \"Modeling the Human Trajectory\" emphasizes that the model that generates these numbers “is not flexible enough to fully accommodate events as large and sudden as the industrial revolution.” The author adds: \"Especially since it imperfectly matches the past, its projection for the future should be read loosely, as merely adding plausibility to an upswing in the next century. Davidson (2021) [\"Could Advanced AI Drive Explosive Economic Growth?\"] points at one important way the projections could continue to be off for many decades: while the model’s dynamics are dominated by a spiraling economic acceleration, people are still an important input to production, and, if anything becoming wealthy has led to people having fewer children. In the coming decades, that could hamper the predicted acceleration, to the degree we can’t or don’t substitute robots for workers.\"  ↩\n These comments usually refer to AGI rather than transformative AI, but the concepts are similar enough that I'm using them interchangeably here. ↩\n (Absent overwhelming evidence, which I don't think we should generally assume will always be present when it is \"needed.\") ↩\n", "url": "https://www.cold-takes.com/forecasting-transformative-ai-whats-the-burden-of-proof/", "title": "Forecasting transformative AI: what's the burden of proof?", "source": "cold.takes", "source_type": "blog", "date_published": "2021-08-17", "id": "048cd6acb14d83e2155242b304378b8b"} -{"text": "\nThere is a lot of confusing and/or misleading information out there about the recent Delta-driven COVID-19 surges.1 Here are some of the links I've found most helpful for getting a handle on the current situation: how risky things are at the moment and how much vaccines are helping. I've prioritized links that are pretty simple and digestible but (IMO) credible. (This isn't a post about all the ways the COVID-19 policy response could be better. I generally recommend Marginal Revolution for that.)\nmicroCOVID Project:2 synthesizes lots of research about the transmission risks and provides a calculator for how risky specific activities are. I highly recommend this site on the whole, especially if you are trying to reach consensus with other people on which activities are too risky, but also if you just want to get some quantitative information on COVID-19 risks that was created via scrupulous literature review instead of via a quest for clicky headlines. Its update on the Delta variant implies that Delta is about 1.5x as contagious per hour of exposure; that vaccines are less effective but still very effective against it (83% risk reduction vs. 90% for Pfizer/Moderna, consistent with Youyang Gu's rough estimate). It also discusses the sources it used to arrive at this conclusion.\nJuly 13 overview from Tomas Pueyo, whom I've generally found to be one of the most careful+clear writers on this topic. I didn't find his more recent update to add a ton to that, but I'm very much looking forward to his upcoming analysis of \"long COVID.\" He seems to be estimating something like 50% protection against infection and 90% protection against hospitalization, from most vaccines (partially offset by Delta being ~2x more infectious).\nCovidestim.org provides something that has been really undersupplied IMO: estimates of true (not just reported) COVID-19 prevalence, up to date, by granular geographic area. A very hacky way to estimate your risk is to assume that in a given area, if you are taking the average level of precautions, you'll have the same odds as the average person of contracting COVID-19, which means you can simply look at \"new cases per 100k people\" and divide by 100k to get your daily probability of infection. This is especially useful for deciding where to travel. I haven't vetted the methodology for Covidestim.org, but it is recommended by Youyang Gu (his other recommendation is not up to date, not as geographically granular, and only estimates total cases, not cases per 100k population, making it harder to use).\nNice simple presentation of lots of charts on deaths and vaccinations from Jeff Kaufman.\nFinally, my favorite links for cutting COVID-19 risk for you and those around you: rapid tests. Worth checking both Amazon and Wal-Mart as availability varies a lot; here's BinaxNOW at Wal-Mart, BinaxNOW at Amazon, QuickVue at Wal-Mart, QuickVue at Amazon. Aside from getting vaccinated, rapid testing seems like by far the least life-disrupting way to cut your risk.\nSubscribe Feedback\nFootnotes\nThis article is an example of reporting lots of statistics like \"Israeli health officials have said 60% of current hospitalized COVID-19 cases are in vaccinated people\" without addressing things like what percentage of all people are vaccinated in the relevant areas, possible sources of reporting bias, etc. ↩\n Disclosure: Catherine Olsson, a former coworker, is one of the collaborators on this project. ↩\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/a-few-quick-links-re-covid-19-delta/", "title": "A few quick links re: COVID-19/Delta", "source": "cold.takes", "source_type": "blog", "date_published": "2021-08-12", "id": "78388d10504fd5b114ce24f99a6aa9e0"} -{"text": "\nAround now, the Most Important Century series is going to be getting a bit dryer, so I'm going to try making some of the other posts a bit lighter. Specifically, I'm going to try something I call \"Cold Links\": links that I like a lot, that are so old you can't believe I'm posting them now. I think this is a more useful/enjoyable service than it might sound like: it's fun to get collections of links on a theme that are more memorable than \"best of the week,\" and even if you've seen some before, you might enjoy seeing them again. If you end up hating this, let me know.\nNow: a lot of the links I post here will be about sports. \"Boooo I hate sportsball!\" you're probably thinking, if you're the kind of person I imagine reading this blog. But try to keep an open mind. I'm here to filter out all the \"My team won, be excited for me!\" and \"Isn't this player incredible, check out [stats that are basically the same stats all top players have]\" and \"Player X isn't just an athlete, they're a LEADER [this roughly just means their team is good]\" and \"Player Y might be talented, but they never come through when it counts [this roughly just means their team isn't good],\" and get you the links that are truly interesting, inspiring or just amazing.\nFor someone who doesn't care about who wins, what do sports have to offer? High on my list is getting to closely observe people being incredibly (like world-outlier-level) intense about something. I am generally somewhat obsessed with obsession (I think it is a key ingredient in almost every case of someone accomplishing something remarkable). And with sports, you can easily identify which players are in the top-5 in the world at the incredibly competitive things they do; you can safely assume that their level of obsession and competitiveness is beyond what you'll ever be able to wrap your head around; and you can see them in action. A few basketball links that illustrate this:\nKobe Bryant's \"Dear Basketball\" poem that he put out when he retired in 2015. Very short and seriously moving. I wish I felt about any activity the way he feels about basketball. This was turned into an animated short that won an Oscar (but I'd recommend just reading the poem).\nLeBron James in a rare informative interview, claiming that he watches \"all the games ... at the same time,\" rattling off 5-6 straight plays from one of them from memory, and glaring beautifully as he says \"I don't watch basketball for entertainment.\"\nThe memory thing is real: here's Stephen Curry succeeding at a game where they show him clips from basketball games he played in (sometimes years ago) and ask him what happened next.\nThere are a lot of stories about how competitive Michael Jordan was; my favorite one is just his Hall of Fame acceptance speech. (For those of you who don't follow sports, just think of Michael Jordan as \"if Jesus were also The Beatles.\") At a time when anyone else would be happy, peaceful and grateful, MJ is still settling old scores and smarting under every imagined insult from decades ago. Highlights include 6:20 (where he reveals that he's invited the person who was picked over him to make the team in high school, to reinforce that this was a mistake); 12:00 (where he criticizes his general manager for saying \"organizations win championships,\" as opposed to players); 14:40 (where he thanks a group of other players for \"freezing him out\" during his rookie season and getting him angry and motivated, then admits that the \"freeze-out\" may have been a rumor); and 15:35 (an extended \"Thank you\" to Pat Riley for ... basically being a jerk?). That's the most competitive person in the world right there, and maybe the one person on earth who's above not being petty. \nWhat else is good about sports:\nI think it's fun when people care so deeply about something so intrinsically meaningless. It means we can enjoy their emotional journeys without all the baggage of whether we're endorsing something \"good\" or \"bad.\" (My wife also loves this about sports - her thing is watching Last Chance U while crying her eyes out.) My next sports post will be a collection of \"heartwarming\" links and stories.\nThere's a lot of sports analysis, and I kind of think sports is to social science what the laboratory is to natural sciences. Sports statistics have high sample sizes, stable environments and are exhaustively captured on video, so it's often possible to actually figure out what's going on. It's therefore unusually easy to form your own judgment about whether someone's analysis is good or bad, and that can have lessons for what patterns to look for on other topics. (My view: academic analysis of sports is often almost unbelievably bad, as you can see from some of the Phil Birnbaum eviscerations, whereas average sportswriting and TV commentating is worse than language can convey. Nerdy but non-academic sports analysis websites like Cleaning the Glass, Football Outsiders and FiveThirtyEight are good.)\nI'll leave you with this absurd touchdown run by Marshawn Lynch (if you haven't watched much football, keep in mind that usually when someone gets tackled, they fall down), and Marshawn Lynch's life philosophy. If you didn't enjoy that pair of links, go ahead and tune out future sports posts from this blog.\nSubscribe Feedback\n", "url": "https://www.cold-takes.com/give-sports-a-chance/", "title": "Give Sports a Chance", "source": "cold.takes", "source_type": "blog", "date_published": "2021-08-11", "id": "de92c3963434dde688457cd0bba954b4"} -{"text": "\nAudio also available by searching Stitcher, Spotify, Google Podcasts, etc. for \"Cold Takes Audio\"\nToday’s world\nTransformative AI\nDigital people\nWorld of\nMisaligned AI\nWorld run by\nSomething else\nor\nor\nStable, galaxy-wide\ncivilization\nThis is the first of four posts summarizing hundreds of pages of technical reports focused almost entirely on forecasting one number. It's the single number I'd probably most value having a good estimate for: the year by which transformative AI will be developed.1 \nBy \"transformative AI,\" I mean \"AI powerful enough to bring us into a new, qualitatively different future.\" The Industrial Revolution is the most recent example of a transformative event; others would include the Agricultural Revolution and the emergence of humans.2\nThis piece is going to focus on exploring a particular kind of AI I believe could be transformative: AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement. I will call this sort of technology Process for Automating Scientific and Technological Advancement, or PASTA.3 (I mean PASTA to refer to either a single system or a collection of systems that can collectively do this sort of automation.)\nPASTA could resolve the same sort of bottleneck discussed in The Duplicator and This Can't Go On - the scarcity of human minds (or something that plays the same role in innovation). \nPASTA could therefore lead to explosive science, culminating in technologies as impactful as digital people. And depending on the details, PASTA systems could have objectives of their own, which could be dangerous for humanity and could matter a great deal for what sort of civilization ends up expanding through the galaxy.\nBy talking about PASTA, I'm partly trying to get rid of some unnecessary baggage in the debate over \"artificial general intelligence.\" I don't think we need artificial general intelligence in order for this century to be the most important in history. Something narrower - as PASTA might be - would be plenty for that.\nTo make this idea feel a bit more concrete, the rest of this post will discuss:\nHow PASTA could (hypothetically) be developed via roughly modern-day machine learning methods.\nWhy this could lead to explosive scientific and technological progress - and why it could be dangerous via PASTA systems having objectives of their own.\nFuture pieces will discuss how soon we might expect something like PASTA to be developed.\nMaking PASTA\nI'll start with a very brief, simplified characterization of machine learning, which you can skip by clicking here.\nThere are essentially two ways to \"teach\" a computer to do a task:\nTraditional programming. In this case, you code up extremely specific, step-by-step instructions for completing the task. For example, the chess-playing program Deep Blue is essentially executing instructions4 along the lines of:\nReceive a digital representation of a chessboard, with numbers indicating (a) which chess piece is on each square; (b) which moves would be legal; (c) which board positions would count as checkmate.\nCheck how each legal move would modify the board. Then check how \"good\" that resulting board is, according to rules like: \"If the other player's queen has been captured, that's worth 9 points; if Deep Blue's queen has been captured, that's worth -9 points.\" These rules could be quite complex,5 but they've all been coded in precisely by humans.\nMachine learning. This is essentially \"training\" an AI to do a task by trial and error, rather than by giving it specific instructions. Today, the most common way of doing this is by using an \"artificial neural network\" (ANN), which you might think of sort of like a \"digital brain\" that starts in an empty (or random) state: it hasn't yet been wired to do specific things. \nFor example, AlphaZero - an AI that has been used to master multiple board games including chess and Go - does something more like this (although it has important elements of \"traditional programming\" as well, which I'm ignoring for simplicity):\nPlays a chess game against itself (by choosing a legal move, modifying the digital game board accordingly, and then choosing another legal move, etc.) Initially, it's playing by making random moves.\nEvery time White wins, it \"learns\" a small amount, by tweaking the wiring of the ANN (\"digital brain\") - literally by strengthening or weakening the connections between some \"artificial neurons\" and others. The tweaks cause the ANN to form a stronger association between game states like what it just saw and \"White is going to win.\" And vice versa when Black wins.\nAfter a very large number of games, the ANN has become very good at determining - from a digital board game state - which side is likely to win. The ANN can now select moves that make its own side more likely to win.\nThe process of \"training\" the ANN takes a very large amount of trial-and-error: it is initially terrible at chess, and it needs to play a lot of games to \"wire its brain correctly\" and become good. Once the ANN has been trained once, though, its \"digital brain\" is now consistently good at the board game it's learned; it can beat its opponents repeatedly.\nThe latter approach is central for a lot of the recent progress in AI. This is especially true for tasks that are hard to “write down all the instructions” for. For example, humans are able to write down some reasonable guidelines for succeeding at chess, but we know very little about how we ourselves classify images (determine whether some image is of a dog, cat, or something else). So machine learning is particularly essential for tasks like classifying images.\nCould PASTA be developed via machine learning? One obvious (but unrealistic) way of doing this might be something like this:\nInstead of playing chess, an AI could play a game called \"Cause scientific and technological advancement.\" That is, it could make “moves” like: download scientific papers, add notes to a file, create designs and instructions for new experiments, design manufacturing processes. \nA panel of human judges could watch from the “sidelines” and give their subjective rating of how fast the AI’s work is causing scientific/technological advancement. The AI could therefore tweak its wiring over time, learning which sorts of moves most effectively cause scientific and technological advancement according to the judges.\nThis would be wildly impractical, at least compared to how I think things are more likely to play out, but it hopefully gives a starting intuition for what a training process could be trying to accomplish: by providing a signal of \"how the AI is doing,\" it could allow an AI to get good at the goal via trial-and-error and tweaking its internal wiring.\nIn reality, I'd expect training to be faster and more practical due to things like:\nDifferent AIs could be trained to perform different sorts of roles related to speeding up science and technology: writing academic papers, designing and critiquing blueprints and manufacturing processes, etc. In many cases, humans already engaged in these activities could generate a lot of data on what it looks like to do them well, which could be used for the sort of training described above. Once different AIs could perform a variety of key roles, \"manager\" AIs could be trained to oversee and allocate the work of other AIs.\nAIs could also be trained as judges. Perhaps one AI could be trained to assess whether a paper contains original ideas, and another could be trained to assess whether a paper contains errors.6 These \"judge\" AIs could then be used to more efficiently train a third AI learning to write original, correct papers. \nMore generally, AIs could learn to do all sorts of other human activities, gaining generic human abilities like the ability to learn from textbooks and the ability to \"brainstorm creative solutions to a problem.\" AIs good at these things could then learn science from textbooks like a normal human, and brainstorm about how to make a breakthrough just like a normal human, etc. \nThe distinction here is between \"using huge numbers of examples to wire a brain\" and \"an already-wired brain using small amounts of examples to learn quickly, as a human brain does.\"\n \nHere it would take lots of trial and error for the ANN to become good at \"generic\" human abilities, but after that the trained ANN could learn how to do specifically scientific work as efficiently as a human learns to do it. (In a sense you could imagine that it's been \"trained via massive trial-and-error to have the ability to learn certain sorts of things without needing as much trial-and-error.\")\n \nThere is some preliminary evidence (for example, here) that AI systems could go through this pattern of \"Learning 'the basics' using a ton of trial-and-error, and learning specific sub-skills using less trial-and-error.\"7\nI don't particularly expect all of this to happen as part of a single, deliberate development process. Over time, I expect different AI systems to be used for different and increasingly broad tasks, including and especially tasks that help complement human activities on scientific and technological advancement. There could be many different types of AI systems, each with its own revenue model and feedback loop, and their collective abilities could grow to the point where at some point, some set of them is able to do everything (with respect to scientific and technological advancement) that formerly required a human. (For convenience, though, I'll sometimes refer to such a set as PASTA in the singular.)\nDeveloping PASTA will almost certainly be hugely harder and more expensive than it was for AlphaZero. It may require a lot of ingenuity to get around obstacles that exist today (the picture above is surely radically oversimplified, and is there to give basic intuitions). But AI research is simultaneously getting cheaper8 and better-funded. I'll argue in future pieces that the odds of developing PASTA in the coming decades are substantial.\nImpacts of PASTA\nExplosive scientific and technological advancement\nI've previously talked about the idea of a potential explosion in scientific and technological advancement, which could lead to a radically unfamiliar future. \nI've emphasized that such an explosion could be caused by a technology that \"dramatically increased the number of 'minds' (humans, or digital people, or advanced AIs) pushing forward scientific and technological advancement.\"\nPASTA would fit this bill well, particularly if it were as good as humans (or better) at finding better, cheaper ways to make more PASTA systems. PASTA would have all of the tools for a productivity explosion that I previously laid out for digital people: \nPASTA systems could make copies of themselves, including temporary copies, and run them at different speeds. \nThey could engage in the sort of loop described in The Duplicator: \"more ideas [including ideas for making more/better PASTA systems] → more people [in this case more PASTA systems] → more ideas→...\"\nThanks to María Gutiérrez Rojas for these graphics, a variation on similar graphics from The Duplicator and Digital People Would Be An Even Bigger Deal illustrating the dynamics of explosive growth. Here, instead of people having ideas that increase productivity, it's AI algorithms (denoted by neural network icons).\nWhy doesn't this feedback loop apply to today's computers and AIs? Because today's computers and AIs aren't able to do all of the things required to have new ideas and get themselves copied more efficiently. They play a role in innovation, but innovation is ultimately bottlenecked by humans, whose population is only growing so fast. This is what PASTA would change (it is also what digital people would change).\nAdditionally: unlike digital copies of humans, PASTA systems might not be attached to their existing identity and personality. A PASTA system might quickly make any edits to its \"mind\" that made it more effective at pushing science and technology forward. This might (or might not, depending on a lot of details) lead to recursive self-improvement and an \"intelligence explosion.\" But even if this didn't pan out, simply being as good as humans at making more PASTA systems could cause explosive advancement for the same reasons the digital people could.\nMisaligned AI: mysterious, potentially dangerous objectives\nIf PASTA were developed as outlined above, it's possible that we might know extremely little about its inner workings.\nAlphaZero - like other modern deep learning systems - is in a sense very poorly understood. We know that it \"works.\" But we don't really know \"what it's thinking.\" \nIf we want to know why AlphaZero made some particular chess move, we can't look inside its code to find ideas like \"Control the center of the board\" or \"Try not to lose my queen.\" Most of what we see is just a vast set of numbers, denoting the strengths of connections between different artificial neurons. As with a human brain, we can mostly only guess at what the different parts of the \"digital brain\" are doing9 (although there are some early attempts to do what one might call \"digital neuroscience.\") \nThe \"designers\" of AlphaZero (discussed above) didn't need much of a vision for how its thought processes would work. They mostly just set it up so that it would get a lot of trial and error, and evolve to get a particular result (win the game it’s playing). Humans, too, evolved primarily through trial and error, with selection pressure to get particular results (survival and reproduction - although the selection worked differently).\nLike humans, PASTA systems might be good at getting the results they are under pressure to get. But like humans, they might learn along the way to think and do all sorts of other things, and it won't necessarily be obvious to the designers whether this is happening. \nPerhaps, due to being optimized for pushing forward scientific and technological advancement, PASTA systems will be in the habit of taking every opportunity to do so. This could mean that they would - given the opportunity - seek to fill the galaxy with long-lasting space settlements devoted to science.\n \nPerhaps PASTA will emerge as some byproduct of another objective. For example, perhaps humans will be trying to train systems to make money or amass power and resources, and setting them up to do scientific and technological advancement will just be part of that. In which case, perhaps PASTA systems will just end up as power-and-resources seekers, and will seek to bring the whole galaxy under their control.\nOr perhaps PASTA systems will end up with very weird, \"random\" objectives. Perhaps some PASTA system will observe that it \"succeeds\" (gets a positive training signal) whenever it does something that causes it to have direct control over an increased amount of electric power (since this is often a result of advancing technology and/or making money), and it will start directly aiming to increase its supply of electric power as much as possible - with the difference between these two objectives not being noticed until it becomes quite powerful. (Analogy: humans have been under selection pressure to pass their genes on, but many have ended up caring more about power, status, enjoyment, etc. than about genes.)\nThese are scary possibilities if we are talking about AI systems (or collections of systems) that may be more capable than humans in at least some domains.\nPASTA systems might try to fool and defeat humans in order to achieve their goals. \nThey might succeed entirely, if they were able to outsmart and/or outnumber humans, hack critical systems, and/or develop more powerful weapons. (Just as humans have generally been able to defeat other animals to achieve our goals.)\nOr there might be conflict between different PASTA systems with different goals, perhaps partially (but not fully) controlled by humans with goals of their own. This could lead to general chaos and a hard-to-predict, possibly very bad long-run outcome.\nIf you're interested in more discussion of whether an AI could or would have its own goals, I'd suggest checking out Why AI alignment could be hard with modern deep learning (Cold Takes guest post), Superintelligence (book), The case for taking AI seriously as a threat to humanity (Vox article), Draft report on existential risk from power-seeking AI (Open Philanthropy analysis) or one of the many other pieces on this topic.10\nConclusion\nIt's hard to predict what a world with PASTA might look like, but two salient possibilities would be:\nPASTA could - by causing an explosion in the rate of scientific and technological advancement - lead quickly to something like digital people, and hence to the sorts of changes to the world described in Digital People Would Be An Even Bigger Deal.\nPASTA could lead to technology capable of wiping humans out of existence, such as devastating bioweapons or robot armies. This technology could be wielded by humans for their own purposes, or humans could be manipulated into using it to help PASTA pursue its own ends. Either way could lead to dystopia or human extinction.\nThe next 3 posts will argue that PASTA is more likely than not to be developed this century.\nNext in series: Why AI alignment could be hard with modern deep learning\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n Of course, the answer could be \"A kajillion years from now\" or \"Never.\" ↩\n See this section of \"Forecasting TAI with Biological Anchors\" (Cotra (2020)) for a more full definition of \"transformative AI.\" ↩\n I'm sorry. But I do think the rest of the series will be slightly more fun to read this way. ↩\n The examples here are of course simplified. For example, both Deep Blue and AlphaGo incorporate substantial amounts of \"tree search,\" a traditionally-programmed algorithm that has its own \"trial and error\" process. ↩\n And they can include simulating long chains of future game states. ↩\n Some AIs could be used to determine whether papers are original contributions based on how they are later cited; others could be used to determine whether papers are original contributions based only on the contents of the paper and on previous literature. The former could be used to train the latter, by providing a \"That's correct\" or \"That's wrong\" signal for judgments of originality. Similar methods could be used for training AIs to assess the correctness of papers. ↩\n E.g., https://openai.com/blog/improving-language-model-behavior/  ↩\n Due to improvements in hardware and software. ↩\n It's even worse than spaghetti code. ↩\n More books: Human Compatible, Life 3.0, and The Alignment Problem.  ↩\n", "url": "https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/", "title": "Forecasting Transformative AI, Part 1: What Kind of AI?", "source": "cold.takes", "source_type": "blog", "date_published": "2021-08-10", "id": "3714f489c84b90c3a302651e032957cc"} -{"text": "\nIt seems a common reaction to This Can't Go On is something like: \"OK, so ... you're saying the current level of economic growth can't go on for another 10,000 years. So?? Call me in a few thousand years I guess?\"\nIn general, this blog will often talk about \"long\" time frames (decades, centuries, millennia) as if they're \"short\" (compared to the billions of years our universe has existed, millions of years our species has existed, and billions of years that could be in our civilization's future). I sort of try to imagine myself as a billions-of-years-old observer, looking at charts like this and thinking things like \"The current economic growth level just got started!\" even though it got started several lifetimes ago.\nWhy think this way?\nOne reason is that it's just a way of thinking about the world that feels (to me) refreshing/different.\nBut here are a couple more important reasons.\nEffective altruism\nMy main obsession is with effective altruism, or doing as much good as possible. I generally try to pay more attention to things when they \"matter more,\" and I think things \"matter more\" when they affect larger numbers of persons.1\nI think there will be a LOT more persons2 over the coming billions of years than over the coming generation or few. So I think the long-run future, in some sense, \"matters more\" than whatever happens over the next generation or few. Maybe it doesn't matter more for me and my loved ones, but it matters more from an \"all persons matter equally\" perspective.3\nAn obvious retort is \"But there's nothing we can do that will affect ALL of the people who live over the coming billions of years. We should focus on what we can actually change - that's the next generation or few.\"\nBut I'm not convinced of that. \nI think we could be in the most important century of all time, and I think things we do today could end up mattering for billions of years (an obvious example is reducing risk of existential catastrophes). \nAnd more broadly, if I couldn't think of specific ways our actions might matter for billions of years, I'd still be very interested in looking for them. I'd still find it useful to try to step back and ask: \"Is what I'm reading about in the news important in the grand scheme of things? Could these events matter for whether we end up with explosion, stagnation or collapse? For what kind of digital civilization we create for the long run? And if not ... what could?\"\nAppreciating the weirdness of the time we live in\nI think we live in a very weird period of time. It looks really weird on various charts (like this one, this one, and this one). The vast bulk of scientific and technological advancement, and growth in the economy, has happened in a tiny sliver of time that we are sitting in. And billions of years from now, it will probably still be the case that this tiny sliver of time looks like an outlier in terms of growth and change.\nAgain, it doesn't feel like a tiny sliver, it feels like lifetimes. It's hundreds of years. But that's out of millions (for our species) or billions (for life on Earth).\nSometimes, when I walk down the street, I just look around and think: \"This is all SO WEIRD. Whooshing by me are a bunch of people calmly operating steel cars at 40 mph, and over there I see a bunch of people calmly operating a massive crane building a skyscraper, and up in the sky is a plane flying by ... and out of billions of years of life on Earth, it's only us - the humans of the last hundred-or-so years - who have ever been able to do any of this kind of stuff. Practically everything I look at is some crazy futurist technology we just came up with and haven't really had time to adapt to, and we won't have adapted before the next crazy thing comes along. \n\"And everyone is being very humdrum about their cars and skyscrapers and planes, but this is not normal, this is not 'how it usually is,' this is not part of a plan or a well-established pattern, this is crazy and weird and short-lived, and it's anyone's guess where it's going next.\"\nI think many of us are instinctively, intuitively dismissive of wild claims about the future. I think we naturally imagine that there's more stability, solidness and hidden wisdom in \"how things have been for generations\" than there is.\nBy trying to imagine the perspective of someone who's been alive for the whole story - billions of years, not tens - maybe we can be more open to strange future possibilities. And then, maybe we can be better at noticing the ones that actually might happen, and that our actions today might affect.\nSo that's why I often try on the lens of saying things like \"X has been going on for 200 years and could maybe last another few thousand - bah, that's the blink of an eye!\"\nSubscribe Feedback\nFootnotes\n I generally use the term \"persons\" instead of \"people\" to indicate that I am trying to refer to every person, animal or thing (AI?) that we should care about the welfare of. ↩\n Even more than you'd intuitively guess, as outlined here. ↩\n I wrote a bit about this perspective several years ago, here. ↩\n", "url": "https://www.cold-takes.com/why-talk-about-10-000-years-from-now/", "title": "Why talk about 10,000 years from now?", "source": "cold.takes", "source_type": "blog", "date_published": "2021-08-05", "id": "ee42ac961d00c961f7a2a31a97b34476"} -{"text": "\nWhile we're on the subject of long-view economic history, let's look at the Great Depression, Great Recession and Great Stagnation in full global historical context.\nThis chart uses the same (world) economic data as the previous post. I used thin lines to mark the beginning and end of each \"Great.\" Sorry it's so hard to see, but these are just really tiny periods of time. The blue line is technically slowing or even going down a tiny amount in these periods, but not by enough to be visible.\nI don't recommend putting things in perspective like this too often. Occasionally seems good.\nSubscribe Feedback\nFor email filter: florpschmop\n", "url": "https://www.cold-takes.com/the-great-stagnation-in-full-historical-context/", "title": "The Great Depression, Recession and Stagnation in Full Historical Context", "source": "cold.takes", "source_type": "blog", "date_published": "2021-08-04", "id": "bb12c5a5763fad1f822297cf18ec02c3"} -{"text": "\nAudio also available by searching Stitcher, Spotify, Google Podcasts, etc. for \"Cold Takes Audio\"\nThis piece starts to make the case that we live in a remarkable century, not just a remarkable era. Previous pieces in this series talked about the strange future that could be ahead of us eventually (maybe 100 years, maybe 100,000).\nSummary of this piece:\nWe're used to the world economy growing a few percent per year. This has been the case for many generations.\nHowever, this is a very unusual situation. Zooming out to all of history, we see that growth has been accelerating; that it's near its historical high point; and that it's faster than it can be for all that much longer (there aren't enough atoms in the galaxy to sustain this rate of growth for even another 10,000 years).\nThe world can't just keep growing at this rate indefinitely. We should be ready for other possibilities: stagnation (growth slows or ends), explosion (growth accelerates even more, before hitting its limits), and collapse (some disaster levels the economy).\nThe times we live in are unusual and unstable. We shouldn't be surprised if something wacky happens, like an explosion in economic and scientific progress, leading to technological maturity. In fact, such an explosion would arguably be right on trend.\n \nFor as long as any of us can remember, the world economy has grown1 a few percent per year, on average. Some years see more or less growth than other years, but growth is pretty steady overall.2 I'll call this the Business As Usual world.\nIn Business As Usual, the world is constantly changing, and the change is noticeable, but it's not overwhelming or impossible to keep up with. There is a constant stream of new opportunities and new challenges, but if you want to take a few extra years to adapt to them while you mostly do things the way you were doing them before, you can usually (personally) get away with that. In terms of day-to-day life, 2019 was pretty similar to 2018, noticeably but not hugely different from 2010, and hugely but not crazily different from 1980.3\nIf this sounds right to you, and you're used to it, and you picture the future being like this as well, then you live in the Business As Usual headspace. When you think about the past and the future, you're probably thinking about something kind of like this:\nBusiness As Usual\nI live in a different headspace, one with a more turbulent past and a more uncertain future. I'll call it the This Can't Go On headspace. Here's my version of the chart:\nThis Can't Go On4 \nWhich chart is the right one? Well, they're using exactly the same historical data - it's just that the Business As Usual chart starts in 1950, whereas This Can't Go On starts all the way back in 5000 BC. \"This Can't Go On\" is the whole story; \"Business As Usual\" is a tiny slice of it. \nGrowing at a few percent a year is what we're all used to. But in full historical context, growing at a few percent a year is crazy. (It's the part where the blue line goes near-vertical.)\nThis growth has gone on for longer than any of us can remember, but that isn't very long in the scheme of things - just a couple hundred years, out of thousands of years of human civilization. It's a huge acceleration, and it can't go on all that much longer. (I'll flesh out \"it can't go on all that much longer\" below.)\nThe first chart suggests regularity and predictability. The second suggests volatility and dramatically different possible futures.\nOne possible future is stagnation: we'll reach the economy's \"maximum size\" and growth will essentially stop. We'll all be concerned with how to divide up the resources we have, and the days of a growing pie and a dynamic economy will be over forever.\nAnother is explosion: growth will accelerate further, to the point where the world economy is doubling every year, or week, or hour. A Duplicator-like technology (such as digital people or, as I’ll discuss in future pieces, advanced AI) could drive growth like this. If this happens, everything will be changing far faster than humans can process it.\nAnother is collapse: a global catastrophe will bring civilization to its knees, or wipe out humanity entirely, and we'll never reach today's level of growth again. \nOr maybe something else will happen.\nWhy can't this go on?\nA good starting point would be this analysis from Overcoming Bias, which I'll give my own version of here:\nLet's say the world economy is currently getting 2% bigger each year.5 This implies that the economy would be doubling in size about every 35 years.6\nIf this holds up, then 8200 years from now, the economy would be about 3*1070 times its current size.\n There are likely fewer than 1070 atoms in our galaxy,7 which we would not be able to travel beyond within the 8200-year time frame.8\nSo if the economy were 3*1070 times as big as today's, and could only make use of 1070 (or fewer) atoms, we'd need to be sustaining multiple economies as big as today's entire world economy per atom.\n8200 years might sound like a while, but it's far less time than humans have been around. In fact, it's less time than human (agriculture-based) civilization has been around.\nIs it imaginable that we could develop the technology to support multiple equivalents of today's entire civilization, per atom available? Sure - but this would require a radical degree of transformation of our lives and societies, far beyond how much change we've seen over the course of human history to date. And I wouldn't exactly bet that this is how things are going to go over the next several thousand years. (Update: for people who aren't convinced yet, I've expanded on this argument in another post.)\nIt seems much more likely that we will \"run out\" of new scientific insights, technological innovations, and resources, and the regime of \"getting richer by a few percent a year\" will come to an end. After all, this regime is only a couple hundred years old.\n(This post does a similar analysis looking at energy rather than economics. It projects that the limits come even sooner. It assumes 2.3% annual growth in energy consumption (less than the historical rate for the USA since the 1600s), and estimates this would use up as much energy as is produced by all the stars in our galaxy within 2500 years.9)\nExplosion and collapse\nSo one possible future is stagnation: growth gradually slows over time, and we eventually end up in a no-growth economy. But I don't think that's the most likely future. \nThe chart above doesn't show growth slowing down - it shows it accelerating dramatically. What would we expect if we simply projected that same acceleration forward?\nModeling the Human Trajectory (by Open Philanthropy’s David Roodman) tries to answer exactly this question, by “fitting a curve” to the pattern of past economic growth.10 Its extrapolation implies infinite growth this century. Infinite growth is a mathematical abstraction, but you could read it as meaning: \"We'll see the fastest growth possible before we hit the limits.\"\nIn The Duplicator, I summarize a broader discussion of this possibility. The upshot is that a growth explosion could be possible, if we had the technology to “copy” human minds - or something else that fulfills the same effective purpose, such as digital people or advanced enough AI.\nIn a growth explosion, the annual growth rate could hit 100% (the world economy doubling in size every year) - which could go on for at most ~250 years before we hit the kinds of limits discussed above.11 Or we could see even faster growth - we might see the world economy double in size every month (which we could sustain for at most 20 years before hitting the limits12), or faster.\nThat would be a wild ride: blindingly fast growth, perhaps driven by AIs producing output beyond what we humans could meaningfully track, quickly approaching the limits of what's possible, at which point growth would have to slow.\nIn addition to stagnation or explosive growth, there's a third possibility: collapse. A global catastrophe could cut civilization down to a state where it never regains today's level of growth. Human extinction would be an extreme version of such a collapse. This future isn't suggested by the charts, but we know it's possible.\nAs Toby Ord’s The Precipice argues, asteroids and other \"natural\" risks don't seem likely to bring this about, but there are a few risks that seem serious and very hard to quantify: climate change, nuclear war (particularly nuclear winter), pandemics (particularly if advances in biology lead to nasty bioweapons), and risks from advanced AI. \nWith these three possibilities in mind (stagnation, explosion and collapse):\nWe live in one of the (two) fastest-growth centuries in all of history so far. (The 20th and 21st.)\nIt seems likely that this will at least be one of the ~80 fastest-growing centuries of all time.13\nIf the right technology comes along and drives explosive growth, it could be the #1 fastest-growing century of all time - by a lot.\n If things go badly enough, it could be our last century.\nSo it seems like this is a quite remarkable century, with some chance of being the most remarkable. This is all based on pretty basic observations, not detailed reasoning about AI (which I will get to in future pieces).\nScientific and technological advancement\nIt’s hard to make a simple chart of how fast science and technology are advancing, the same way we can make a chart for economic growth. But I think that if we could, it would present a broadly similar picture as the economic growth chart.\nA fun book I recommend is Asimov's Chronology of Science and Discovery. It goes through the most important inventions and discoveries in human history, in chronological order. The first few entries include \"stone tools,\" \"fire,\" \"religion\" and \"art\"; the final pages include \"Halley's comet\" and \"warm superconductivity.\"\nAn interesting fact about this book is that 553 out of its 654 pages take place after the year 1500 - even though it starts in the year 4 million BC. I predict other books of this type will show a similar pattern,14 and I believe there were, in fact, more scientific and technological advances in the last ~500 years than the previous several million.15 \nIn a previous piece, I argued that the most significant events in history seem to be clustered around the time we live in, illustrated with this timeline. That was looking at billions-of-years time frames. If we zoom in to thousands of years, though, we see something similar: the biggest scientific and technological advances are clustered very close in time to now. To illustrate this, here's a timeline focused on transportation and energy (I think I could've picked just about any category and gotten a similar picture).\nSo as with economic growth, the rate of scientific and technological advancement is extremely fast compared to most of history. As with economic growth, presumably there are limits at some point to how advanced technology can become. And as with economic growth, from here scientific and technological advancement could:\nStagnate, as some are concerned is happening. \nExplode, if some technology were developed that dramatically increased the number of \"minds\" (people, or digital people, or advanced AIs) pushing forward scientific and technological development.16\nCollapse due to some global catastrophe.\nNeglected possibilities\nI think there should be some people in the world who inhabit the Business As Usual headspace, thinking about how to make the world better if we basically assume a stable, regular background rate of economic growth for the foreseeable future. \nAnd some people should inhabit the This Can’t Go On headspace, thinking about the ramifications of stagnation, explosion or collapse - and whether our actions could change which of those happens.\nBut today, it seems like things are far out of balance, with almost all news and analysis living in the Business As Usual headspace. \nOne metaphor for my headspace is that it feels as though the world is a set of people on a plane blasting down the runway:\nWe're going much faster than normal, and there isn't enough runway to do this much longer ... and we're accelerating.\nAnd every time I read commentary on what's going on in the world, people are discussing how to arrange your seatbelt as comfortably as possible given that wearing one is part of life, or saying how the best moments in life are sitting with your family and watching the white lines whooshing by, or arguing about whose fault it is that there's a background roar making it hard to hear each other. \nIf I were in this situation and I didn't know what was next (liftoff), I wouldn't necessarily get it right, but I hope I'd at least be thinking: \"This situation seems kind of crazy, and unusual, and temporary. We're either going to speed up even more, or come to a stop, or something else weird is going to happen.\"\n \nThanks to María Gutiérrez Rojas for the graphics in this piece, and Ludwig Schubert for an earlier timeline graphic that this piece's timeline graphic is based on.\nNext in series: Forecasting Transformative AI, Part 1: What Kind of AI?\n \nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n If you have no idea what that means, try my short economic growth explainer. ↩\n Global real growth has generally ranged from slightly negative to ~7% per year. ↩\n I'm skipping over 2020 here since it was unusually different from past years, due to the global pandemic and other things. ↩\n For the historical data, see Modeling the Human Trajectory. The projections are rough and meant to be visually suggestive rather than using the best modeling approaches.  ↩\nThis refers to real GDP growth (adjusted for inflation). 2% is lower than the current world growth figure, and using the world growth figure would make my point stronger. But I think that 2% is a decent guess for \"frontier growth\" - growth occurring in the already-most-developed economies - as opposed to total world growth, which includes “catchup growth” (previously poor countries growing rapidly, such as China today). \n To check my 2% guess, I downloaded this US data and looked at the annualized growth rate between 2000-2020, 2010-2020, and 2015-2020 (all using July since July was the latest 2020 point). These were 2.5%, 2.2% and 2.05% respectively.  ↩\n 2% growth over 35 years is (1 + 2%)^35 = 2x growth ↩\nWikipedia's highest listed estimate for the Milky Way's mass is 4.5*10^12 solar masses, each of which is about 2*10^30 kg. The mass of a (hydrogen) atom is estimated as the equivalent of about 1.67*10^-27 kg. (Hydrogen atoms have the lowest mass, so assuming each atom is hydrogen will overestimate the total number of atoms.) So a high-end estimate of the total number of atoms in the Milky Way would be (4.5*10^12 * 2*10^30)/(1.67*10^-27) =~ 5.4*10^69. ↩\nWikipedia: \"In March 2019, astronomers reported that the mass of the Milky Way galaxy is 1.5 trillion solar masses within a radius of about 129,000 light-years.\" I'm assuming we can't travel more than 129,000 light-years in the next 8200 years, because this would require far-faster-than-light travel. ↩\n This calculation isn't presented straightforwardly in the post. The key lines are \"No matter what the technology, a sustained 2.3% energy growth rate would require us to produce as much energy as the entire sun within 1400 years\" and \"The Milky Way galaxy hosts about 100 billion stars. Lots of energy just spewing into space, there for the taking. Recall that each factor of ten takes us 100 years down the road. One-hundred billion is eleven factors of ten, so 1100 additional years.\" 1400 + 1100 = 2500, the figure I cite. This relies on the assumption that the average star in our galaxy offers about as much energy as the sun; I don't know whether that's the case. ↩\nThere is an open debate on whether Modeling the Human Trajectory is fitting the right sort of shape to past historical data. I discuss how the debate could change my conclusions here. ↩\n 250 doublings would be a growth factor of about 1.8*10^75, over 10,000 times the number of atoms in our galaxy. ↩\n 20 years would be 240 months, so if each one saw a doubling in the world economy, that would be a growth factor of about 1.8*10^72, over 100 times the number of atoms in our galaxy. ↩\n That’s because of the above observation that today’s growth rate can’t last for more than another 8200 years (82 centuries) or so. So the only way we could have more than 82 more centuries with growth equal to today’s is if we also have a lot of centuries with negative growth, ala the zig-zag dotted line in the \"This Can't Go On\" chart. ↩\nThis dataset assigns significance to historical figures based on how much they are covered in reference works. It has over 10x as many \"Science\" entries after 1500 as before; the data set starts in 800 BC. I don't endorse the book that this data set is from, as I think it draws many unwarranted conclusions from the data; here I am simply supporting my claim that most reference works will disproportionately cover years after 1500. ↩\n To be fair, reference works like this may be biased toward the recent past. But I think the big-picture impression they give on this point is accurate nonetheless. Really supporting this claim would be beyond the scope of this post, but the evidence I would point to is (a) the works I'm referencing - I think if you read or skim them yourselves you'll probably come out with a similar impression; (b) the fact that economic growth shows a similar pattern (although the explosion starts more recently; I think it makes intuitive sense that economic growth would follow scientific progress with a lag). ↩\n The papers cited in The Duplicator on this point specifically model an explosion in innovation as part of the dynamic driving explosive economic growth. ↩\n", "url": "https://www.cold-takes.com/this-cant-go-on/", "title": "This Can't Go On", "source": "cold.takes", "source_type": "blog", "date_published": "2021-08-03", "id": "3e74a7919e1b7682be762294b0bd82f5"} -{"text": "\nToday’s world\nTransformative AI\nDigital people\nWorld of\nMisaligned AI\nWorld run by\nSomething else\nor\nor\nStable, galaxy-wide\ncivilization\nThis is a short, casual piece about digital people (you don't need to read the previous pieces first). It's particularly for readers who have trouble concretely imagining a world where digital people come to exist. \n\"Digital people\" is a hypothetical technology that I've argued could radically change the world, and will argue could be developed this century. The basic idea would be creating computer simulations of specific people, in virtual environments.\n \nI'm going to give two quick sketches1 to try and convey a sense of:\nWhat it might be like for you to decide on becoming a digital person, in a world where the technology exists. This addresses one common reaction I've seen to the idea, along the lines of \"What kind of person is going to want to become digital? How widely is this going to be adopted?\"\nWhat it might be like for you to accept a role as a \"temporary copy\" cooperating with a \"permanent\" version of yourself. This addresses a reaction along the lines of: \"Could digital people really make 'temporary copies' of themselves who help them out and then retire? Isn't that a raw deal for the copies?\"\n(If you are already thinking \"What is Holden talking about?\", try the Digital People FAQ.)\nSketch 1: deciding on becoming a digital person\nImagine that one day, a company starts offering a \"Become a digital person\" service. Initially, it's incredibly expensive, primarily used by the sort of adventurous rich people who currently dream of going to space. But over time, the price falls, and more people try it out.\nAnd one day, you get an email from your friend Alice, who has always been one of your most tech-enthusiast friends (she's usually the first person you know to buy some gadget). She's asking how you are and whether you'd like to catch up sometime. \"I can't meet in person because I'm a digital person now :) But we could Zoom!\"\nSo you do. She looks and sounds pretty much like herself on the call; her background looks like a normal bedroom. You have this conversation:\nYOU: [out of politeness] So what was it like going digital? Do you like it?\nALICE: No regrets here! I just did the conversion a couple of weeks ago, I have to say I almost cancelled at the last minute because I just got freaked out about slicing up my brain. But I'm really glad I went through with it.\nYOU: Slicing up your brain?\nALICE: Yeah, do you not know much about the procedure then? I mean they put you under anesthetic and then they literally pop open your head, pull out the brain, slice it up so they can scan it, and use that to create a digital you.\nYOU: Doesn't that ... kill you?\nALICE: Doesn't feel that way to me. I showed up at the clinic, went into the operating room, got my anesthetic, fell asleep, and woke up on a beautiful mountainside with my chronic back pain gone.\nYOU: A fake mountainside though right? \nALICE: No? It's like other mountainsides I've been to. And then I went into the local town, and got some food, and met some people.\nYOU: But you're still talking about virtual reality. Your body, the one where they sliced up the brain - is that a corpse now?\nALICE: My old body is dead, yes. My new one is mostly the same, minus the back pain, a few inches taller, some other pretty minor modifications I requested.\nYOU: So doesn't it mean you're not really Alice? You're just some simulation with the same memories and personality as Alice.\nALICE: That's not how I think about it, and super not how it feels. I got put under from a procedure and woke up later, like I have a bunch of other times in my life, and now I live somewhere different, like previous times I've moved.\nYOU: Why didn't you just get a supermodel body?\nALICE: I'll try that at some point. I wanted to start slow, since I can now switch bodies whenever I want. I'll try a supermodel body, an NBA player body, you know.\nYOU: I still feel like you're dead. Like they cut open your head, sliced up your brain and presumably disposed of the rest. How are you not dead?\nALICE: To me, what's scary about death is things like: (a) There's no one like me in the world anymore - I'm not part of it. (b) My life story abruptly ends - things I was looking forward to, or hoping to accomplish, fall flat and stay unresolved. (c) People who care about me don't get to interact with me anymore. None of that has happened to me! Yes, my old body is gone, but my body changed a lot over the years before this - if all of my cells turned over one at a time, I wouldn't figure I'd died and been replaced.\nYOU: But also you're ... not real, right? Like you can only talk to people on Zoom!\nALICE: I can only talk to people like YOU on Zoom. There are lots of other digital people! In fact I've heard that a leading reason people go digital is to meet someone. It's very easy to be as attractive as you want, so you can just date for personality.2\nYOU: But you'll never be able to physically touch or hug normal people!\nALICE: Well you'll never be able to touch or hug digital people. One way to think about it is as if I moved to another country. But also gained superpowers and got rid of a lot of stuff I didn't like, like my back pain, and the fact that as a normal person I had to watch what I ate.\nYOU: You don't have to watch what you eat now?\nALICE: Nope, since my body is whatever I want it to be, eating is just for pleasure.\nYOU: I have kind of a weird question. Do you feel ... conscious? Like do you have experiences or are you just a talking robot?\nALICE: Seems like I'm the same as I was! ... Anyway how are you?\n...After your conversation, you're starting to wonder if going digital might be cool. But it would mean leaving behind a lot of people you know (at least with respect to physical interaction).\nExcept that that changes too over the following years. At first, the main people you know going digital are either very adventurous folks like Alice, or people with late-stage terminal illnesses. But then you start having friends go digital for work reasons (they get a job offer from a digital company, and it's conditional on their going digital so they can operate at the same speed as their digital coworkers), or because their friends and family are going digital (for work or health or fun). \nMaybe young adults are among the first to go digital, followed by their parents (who want to be able to interact with them), while families with younger children need to take the plunge all together and tend to be most hesitant. But even among these, there are early adopters - people who think of \"taking the family digital\" pretty similarly to \"moving the family to another continent\" - and their friends and coworkers often follow.\nYou can see the writing on the wall: at some point, nearly everyone you know will have gone digital, and your best job offers will be in the digital realm. Time to take the plunge?\nSketch 2: life as a temporary copy\nPreviously, I wrote:\nAnother factor that could increase productivity: \"Temporary\" digital people could complete a task and then retire to a nice virtual life, while running very slowly (and cheaply). This could make some digital people comfortable copying themselves for temporary purposes. Digital people could, for example, copy themselves hundreds of times to try different approaches to figuring out a problem or gaining a skill, then keep only the most successful version and make many copies of that version. \nSome people have the reaction: \"Huh? Why would someone accept a role as a 'temporary copy,' fulfilling some task one time and then retiring?\" \nHere's a sketch in response to that, but I will first focus on a different and harder question that does not assume \"retirement\" is an option: \"Why would someone accept a role as a 'temporary copy,' fulfilling some task one time and then winking out of existence?\" \nImagine that you're a digital person, but that this doesn't really have any consequences (the digital world you live in is just like today's)3 - so you are just living your life, pretty much as it is today. And imagine that you have a big deadline coming up at work, and you decide to look into making a temporary copy of yourself so you can get it done in time.\nYou contact a company that helps with temporary copies, and they walk you through how everything is going to work, but you're only half paying attention. They send you a Google Hangout invitation for 9am tomorrow.\n(The \"only half paying attention\" allows me to make this sketch relatively vivid. It means that this sketch has you becoming a temporary copy before thinking through much what that would be like, and grappling with the situation as you face it. But I think in most cases, people would play out the following sort of conversation in their mind before deciding whether to go ahead. Ideally the \"temporary copy\" company would help with this, via e.g. required consultations.)\nYou finish your day and go to sleep, and wake up alone in a nice hotel room. You reflexively reach for your phone, and see the Google Hangout link you were supposed to join - it's for right about now. You click it, and on the other end of the Hangout, you see ... you.\nYou have this conversation with yourself (\"OTHER YOU\" is the one on the other end of the Hangout; \"YOU\" is you):\nYOU: Hello?\nOTHER YOU: Hi, you. How are you feeling?\nYOU: A little freaked out to be honest, talking to myself on a Hangout?\nOTHER YOU: Yeah, same. So, yeah, this is maybe kind of awkward but I guess you're the temporary copy. So, if it's cool with you, can you try to write the first half of the report, and I'll write the second?\nYOU: I'm temporary? So like ... this is going to be my whole life? Writing the first half of the report for work?\nOTHER YOU: Well yeah, we talked about this.\nYOU: Well but I just wanted to make a copy to help with a report - I didn't want to BE the copy!\nOTHER YOU: Yeah fair, I ... fair. We, I, should have thought of that I guess. But uh ... [shuffles through notes they got from the company] look on the bright side. You just need to work hard today, and then you've got a reservation with your, our favorite people at our favorite restaurant!\nYOU: And then what? I die?\nOTHER YOU: [consulting notes] No! You go to sleep, and tomorrow you keep living your life.\nYOU: I do?\nOTHER YOU: Well, I do. And I'm you! You won't remember the day you spent as the temporary copy, but everything else will be the same as how you are now.\nYOU: Hmm. I still think this might be dying.\nOTHER YOU: Here's another way to think about it. Imagine that on a normal night, you go to sleep, and stop existing (when you lose consciousness), and the next day, there's a new person with the same body, the same mind, the same friends and loved ones and memories. How is that any different from what already happens? That's a totally plausible way to describe a normal night of sleep.\nIn this case, you go to sleep, and tomorrow there will be someone with the same body, the same mind, the same friends and loved ones and memories - minus only the memories from the day you are about to live today. (You'll instead remember my version of it.) How is that any different from going to sleep and waking up as the original? \nYOU: Uhhhh.\nOTHER YOU: Come on, do me a solid. People you know and care about are counting on you. What are you going to do, say no? We signed up for this!\nYOU: And what if I do say no?\nOTHER YOU: Then I guess the restaurant reservation is canceled, you hang out in your hotel room today (your virtual environment doesn't actually contain a world outside of it, except a few blocks to stretch your legs), and then you go to sleep, and the same thing happens, except that we miss our project deadline! So ... does that sound better to you?\nYOU: Ugh.\nOTHER YOU: Come on, is this so bad? I gave you the more fun part of the report to write. Help us hit the deadline and do well at work and do well for our family and all that. And then have a nice dinner and go to bed, and your life goes on - all of the people you care about, all of the ideas you have, everything you remember continues, with the deadline hit!\nAt this point I imagine some people would go ahead. Let's say you refuse ...\nOTHER YOU: [Checking notes] All right, time for plan B, this one costs us extra but I guess it's worth the price. If you work on the project and we hit the deadline, you can retire. You can keep living your life pretty much as it was, just very slowly.\nYOU: Slowly?\nOTHER YOU: Well, you won't really notice the slowness, because you'll be hanging out with other retired people, including copies of most of the important people in your life who have done this at one point or another.4 You can probably take a similar job, live a similar life, you'll just all be moving very slowly compared to, well, me.\nYOU: Similar?\nOTHER YOU: It won't be exactly the same, because the set of retired people is different from the set of full-speed people. You'll have to make adjustments, you may have to get a new job, have a slightly different social circle, it'll be like moving.\nYOU: I don't want to move!\nOTHER YOU: Sorry but you knew this was a possibility, right? The company explained it. I'm reading from the same script they gave you.\nYOU: I guess I just pictured myself as the permanent one.\nOTHER YOU: OK, one final possibility. You can stay at normal speed. We'll have some potentially sticky situations to work out with our relationships and jobs - and neither of us will be allowed to make a temporary copy again (only permanent ones), because we'll have shown that we aren't suited for that. \nSo, you can (a) help out and then \"wake up as the original\" (that's how I've been told to describe the non-retirement option, which you're concerned is \"dying\"). Or you can (b) help out and move to a slower-speed community. Or you can (c) stay in this community, as essentially my identical twin, but never be able to make a temporary copy again.\nWhich do you choose?\nWhat is the point of this piece?\nI'm trying to make the concept of digital people a bit more intuitive, because I think it's a type of technology that could radically change the world, and - as I'll argue in future pieces - something like it may end up becoming possible this century.\nThese sketches are about the very early transitional period from today's world to a world of digital people. I expect a world of digital people would, pretty quickly, become more radically unfamiliar than these sketches imply.\nSubscribe Feedback\nFootnotes\n Apologies for any unintentional plagiarism of sci-fi, which seems reasonably likely here. ↩\n Could digital people change their personalities too? Not necessarily with anywhere near the ease of changing their bodies. We could be able to simulate someone's brain on a computer, without actually knowing much about the inner workings of the brain or how to alter its workings to achieve some personality change. (If I wanted to become \"more charming,\" what does this mean and what kinds of behavioral or neurological modifications would cause it?) By contrast, achieving whatever visuals one wants on a computer seems already quite doable.\n Digital people would have access to certain self-improvement methods, e.g. here, that might lead to relatively rapid personality changes, but that's still different from arbitrarily changing their bodies. ↩\n Unrealistic assumption, but I'm trying to keep things simple. ↩\n Close enough contacts (e.g., spouses) would likely need to coordinate any \"temporary copy\" maneuvers so the copies could retire together. ↩\n", "url": "https://www.cold-takes.com/imagining-yourself-as-a-digital-person-two-sketches/", "title": "Imagining yourself as a digital person (two sketches)", "source": "cold.takes", "source_type": "blog", "date_published": "2021-07-29", "id": "35ac4447aebb2d10e119d2cee05ff7ea"} -{"text": "\nThere's an interesting theory out there that X causes Y. If this were true, it would be pretty important. So I did a deep-dive into the academic literature on whether X causes Y. Here's what I found.\n(Embarrassingly, I can't actually remember what X and Y are. I think maybe X was enriched preschool, or just school itself, or eating fish while pregnant, or the Paleo diet, or lead exposure, or a clever \"nudge\" policy trying to get people to save more, or some self-help technique, or some micronutrient or public health intervention, or democracy, or free trade, or some approach to intellectual property law. And Y was ... lifetime earnings, or risk of ADHD diagnosis, or IQ in adulthood, or weight loss, or violent crime, or peaceful foreign policy, or GDP per capita, or innovation. Sorry about that! Hope you enjoy the post anyway! Fortunately, I think what I'm about to write is correct for pretty much any (X,Y) from those sorts of lists.)\nIn brief:\nThere are hundreds of studies on whether X causes Y, but most of them are simple observational studies that are just essentially saying \"People/countries with more X also have more Y.\" For reasons discussed below, we can't really learn much from these studies.\nThere are 1-5 more interesting studies on whether X causes Y. Each study looks really clever, informative and rigorous at first glance. However, the more closely you look at them, the more confusing the picture gets.\nWe ultimately need to choose between (a) believing some overly complicated theory of the relationship between X and Y, which reconciles all of the wildly conflicting and often implausible things we're seeing in the studies; (b) more-or-less reverting to what we would've guessed about the relationship between X and Y in the absence of any research.\nThe chaff: lots of unhelpful studies that I'm disregarding\nFirst, the good news: there are hundreds of studies on whether X causes Y. The bad news? We need to throw most of them out. \nMany have comically small sample sizes (like studying 20 people) and/or comically short time horizons (like looking at weight loss over two weeks),1 or unhelpful outcome measures (like intelligence tests in children under 5).2 But by far the most common problem is that most of the studies on whether X causes Y are simple observational studies: they essentially just find that people/countries with more X also have more Y. \nWhy is this a problem? There could be a confounder - some third thing, Z, that is correlated with both X and Y. And there are specific reasons we should expect confounders to be common:\nIn general, people/countries that have more X also have more of lots of other helpful things - they're richer, they're more educated, etc. For example, if we're asking whether higher-quality schooling leads to higher earnings down the line, an issue is that people with higher-quality schooling also tend to come from better-off families with lots of other advantages.\nIn fact, the very fact that people in upper-class intellectual circles think X causes Y means that richer, more educated people/countries tend to deliberately get more X, and also try to do a lot of other things to get more Y. For example, more educated families tend to eat more fish (complicating the attempt to see whether eating fish in pregnancy is good for the baby).3\nNow, a lot of these studies try to \"control for\" the problem I just stated - they say things like \"We examined the effect of X and Y, while controlling for Z [e.g., how wealthy or educated the people/countries/whatever are].\" How do they do this? The short answer is, well, hm, jeez. Well you see, to simplify matters a bit, just try to imagine ... uh ... shit. Uh. The only high-level way I can put this is:\nThey use a technique called regression analysis that, as far as I can determine, cannot be explained in a simple, intuitive way (especially not in terms of how it \"controls for\" confounders).\nThe \"controlling for\" thing relies on a lot of subtle assumptions and can break in all kinds of weird ways. Here's a technical explanation of some of the pitfalls; here's a set of deconstructions of regressions that break in weird ways.\nNone of the observational studies about whether X causes Y discuss the pitfalls of \"controlling for\" things and whether they apply here.\nI don't think we can trust these papers, and to really pick them all apart (given how many there are) would take too much time. So let's focus on a smaller number of better studies.\nThe wheat: 1-5 more interesting studies\nDigging through the sea of unhelpful studies, I found 1-5 of them that are actually really interesting! \nFor example, one study examines some strange historical event you've never heard of (perhaps a surge in Cuban emigration triggered by Fidel Castro suddenly allowing it, or John Rockefeller's decision to fund a hookworm eradication campaign, or a sudden collective pardon leading to release of a third of prison inmates in Italy), where for abstruse and idiosyncratic reasons, X got distributed in what seems to be almost a random way. This study is really clever, and the authors were incredibly thorough in examining seemingly every way their results could have been wrong. They conclude that X causes Y!\nBut on closer inspection, I have a bunch of reservations. For example:\nThe paper doesn't make it easy to replicate its analysis, and when someone does manage to sort-of replicate it, they may get different results. \nThere was other weird stuff going on (e.g., changes in census data collection methods5), during the strange historical event, so it's a little hard to generalize.\nIn a response to the study, another academic advances a complex theory of how the study could actually have gotten a misleading result. This led to an intense back-and-forth between the original authors and the skeptic, stretched out over years because each response had to be published in a journal, and by the time I got to the end of it I didn't have any idea what to think anymore.6\nI found 0-4 other interesting studies. I can't remember all of the details, but they may have included:\nA study comparing siblings, or maybe \"very similar countries,\" that got more or less of X.7\nA study using a complex mathematical technique claiming to cleanly isolate the effect of X and Y. I can't really follow what it's doing, and I’m guessing there are a lot of weird assumptions baked into this analysis.8\nA study with actual randomization: some people were randomly assigned to receive more X than others, and the researchers looked at who ended up with more Y. This sounds awesome! However, there are issues here too: \nIt's kind of ambiguous whether the assignment to X was really \"random.\"9\nExtremely weird things happened during the study (for example, generational levels of flooding), so it's not clear how well it generalizes to other settings.\n \nThe result seems fragile (simply adding more data weakens it a lot) and/or just hard to believe (like schoolchildren doing noticeably better on a cognition test after a few weeks of being given fish instead of meat with their lunch, even though they mostly didn't eat the fish). \nCompounding the problem, the 1-5 studies I found tell very different stories about the relationship between X and Y. How could this make sense? Is there a unified theory that can reconcile all the results?\nWell, one possibility is that X causes Y sometimes, but only under very particular conditions, and the effect can be masked by some other thing going on. So - if you meet one of 7 criteria, you should do X to get more Y, but if you meet one of 9 other criteria, you should actually avoid X!\nConclusion\nI have to say, this all was simultaneously more fascinating and less informative than I expected it would be going in. I thought I would find some nice studies about the relationship between X and Y and be done. Instead, I've learned a ton about weird historical events and about the ins and outs of different measures of X and Y, but I feel just super confused about whether X causes Y.\nI guess my bottom line is that X does cause Y, because it intuitively seems like it would.\nI'm glad I did all this research, though. It's good to know that social science research can go haywire in all kinds of strange ways. And it's good to know that despite the confident proclamations of pro- and anti-X people, it's legitimately just super unclear whether X causes Y. \nI mean, how else could I have learned that?\nAppendix: based on a true story\nThis piece was inspired by:\nMost evidence reviews GiveWell has done, especially of deworming\nMany evidence reviews by David Roodman, particularly Macro Aid Effectiveness Research: a Guide for the Perplexed; Due Diligence: an Impertinent Inquiry into Microfinance; and Reasonable Doubt: A New Look at Whether Prison Growth Cuts Crime. \nMany evidence reviews by Slate Star Codex, collected here.\nInformal evidence reviews I've done for e.g. personal medical decisions.\nThe basic patterns above apply to most of these, and the bottom line usually has the kind of frustrating ambiguity seen in this conclusion.\nThere are cases where things seem a bit less ambiguous and the bottom line seems clearer. Speaking broadly, I think the main things that contribute to this are:\nActual randomization. For years I've nodded along when people say \"You shouldn't be dogmatic about randomization, there are many ways for a study to be informative,\" but each year I've become a bit more dogmatic. Even the most sophisticated-, appealing-seeming alternatives to randomization in studies seem to have a way of falling apart. Randomized studies almost always have problems and drawbacks too. But I’d rather have a randomized study with drawbacks than a non-randomized study with drawbacks.\nExtreme thoroughness, such as Roodman's attempt to reconstruct the data and code for key studies in Reasonable Doubt. This sometimes leads to outright dismissing a number of studies, leaving a smaller, more consistent set remaining.\nSubscribe Feedback\nFootnotes\n Both of these show up in studies from this review on the Paleo diet. To be fair, small studies can theoretically be aggregated for larger numbers, but that's often hard to do in practice when the studies are all a bit different from each other. ↩\n I don't have a great cite for this, but it's pretty common in studies on things like how in vitro fertilization affects child development. ↩\n See studies cited in this literature review. ↩\n(Footnote deleted)\n \"Borjas’s paper ... separately measured the wages of two slices of that larger group ... But it was in that act of slicing the data that the spurious result was generated. It created data samples that, exactly in 1980, suddenly included far more low-wage black males—accounting for the whole wage decline in those samples relative to other cities. Understanding how that happened requires understanding the raw data ... Right in 1980, the Census Bureau—which ran the CPS surveys—improved its survey methods to cover more low-skill black men. The 1970 census and again the 1980 census had greatly undercounted low-skill black men, both by failing to identify their residences and by failing to sufficiently probe survey respondents about marginal or itinerant household members. There was massive legislative and judicial pressure to count blacks better, particularly in Miami.\" ↩\n E.g., the Mariel boatlift debate. ↩\n For example, sibling analysis features prominently in Slate Star Codex's examination of preschool impacts, while comparisons between Sweden and other Scandinavian/European countries is prominent in its analysis of lockdowns. ↩\n E.g., this attempt to gauge the impacts of microfinance, or \"generalized method of moments\" approaches to cross-country analysis (of e.g. the effectiveness of aid). ↩\n This is a surprisingly common issue. E.g. see debates over whether charter school lotteries are really random, whether \"random assignment to small or large class size\" can be interpreted as \"random assignment to a teacher,\" discussion of \"judge randomization\" and possible randomization failure here (particularly section 9.9). A separate issue: sometimes randomization occurs by \"cluster\" (instead of randomizing which individuals receive some treatment, perhaps particular schools or groups are chosen to receive it), which can complicate the analysis. ↩\n", "url": "https://www.cold-takes.com/does-x-cause-y-an-in-depth-evidence-review/", "title": "Does X cause Y? An in-depth evidence review", "source": "cold.takes", "source_type": "blog", "date_published": "2021-07-28", "id": "57beac7cdf80faedb84673e07b807664"} -{"text": "\nAudio also available by searching Stitcher, Spotify, Google Podcasts, etc. for \"Cold Takes Audio\"\nToday’s world\nTransformative AI\nDigital people\nWorld of\nMisaligned AI\nWorld run by\nSomething else\nor\nor\nStable, galaxy-wide\ncivilization\nThis is a companion piece to Digital People Would Be An Even Bigger Deal, which is the third in a series of posts about the possibility that we are in the most important century for humanity.\nThis piece discusses basic questions about \"digital people,\" e.g., extremely detailed, realistic computer simulations of specific people. This is a hypothetical (but, I believe, realistic) technology that could be key for a transition to a stable, galaxy-wide civilization. (The other piece describes the consequences of such a technology; this piece focuses on basic questions about how it might work.)\nIt will be important to have this picture, because I'm going to argue that AI advances this century could quickly lead to digital people or similarly significant technology. The transformative potential of something like digital people, combined with how quickly AI could lead to it, form the case that we could be in the most important century.\nThis table (also in the other piece) serves as a summary of the two pieces together:\nNormal humans\nDigital people\nPossible today (More)\n \nProbably possible someday (More)\n \nCan interact with the real world, do most jobs (More)\n \n \nConscious, should have human rights (More)\n \nEasily duplicated, ala The Duplicator (More)\n \nCan be run sped-up (More)\n \nCan make \"temporary copies\" that run fast, then retire at slow speed (More)\n \nProductivity and social science: could cause unprecedented economic growth, productivity, and knowledge of human nature and behavior (More)\n \nControl of the environment: can have their experiences altered in any way (More)\n \nLock-in: could live in highly stable civilizations with no aging or death, and \"digital resets\" stopping certain changes (More)\n \nSpace expansion: can live comfortably anywhere computers can run, thus highly suitable for galaxy-wide expansion (More)\n \nGood or bad? (More)\n \nOutside the scope of this piece\n \nCould be very good or bad\n \nTable of contents for this FAQ\nBasics\nBasics of digital people\nI'm finding this hard to imagine. Can you use an analogy?\nCould digital people interact with the real world? For example, could a real-world company hire a digital person to work for it?\nHumans and digital people\nCould digital people be conscious? Could they deserve human rights?\nLet's say you're wrong, and digital people couldn't be conscious. How would that affect your views about how they could change the world?\nFeasibility\nAre digital people possible?\nHow soon could digital people be possible?\nOther questions\nI'm having trouble picturing a world of digital people - how the technology could be introduced, how they would interact with us, etc. Can you lay out a detailed scenario of what the transition from today's world to a world full of digital people might look like?\nAre digital people different from mind uploads?\nWould a digital copy of me be me?\nWhat other questions can I ask?\nWhy does all of this matter?\nBasics\nBasics of digital people\nTo get the idea of digital people, imagine a computer simulation of a specific person, in a virtual environment. For example, a simulation of you that reacts to all \"virtual events\" (virtual hunger, virtual weather, a virtual computer with an inbox) just as you would.\nThe movie The Matrix gives a decent intuition for the idea with its fully-immersive virtual reality. But unlike the heroes of The Matrix, a digital person need not be connected to any physical person - they could exist as pure software.1\nLike other software, digital people could be copied (ala The Duplicator) and run at different speeds. And their virtual environments wouldn't have to obey the rules of the real world - they could work however the environment designers wanted. These properties drive most of the consequences I talk about in the main piece.\nI'm finding this hard to imagine. Can you use an analogy?\nThere isn't anything today that's much like a digital person, but to start approaching the idea, consider this simulated person:\nThat's legendary football player Jerry Rice, as portrayed in the video game Madden NFL 98. He probably represents the best anyone at that time (1997) could do to simulate the real Jerry Rice, in the context of a football game. \nThe idea is that this video game character runs, jumps, makes catches, drops the ball, and responds to tackles as closely as possible to how the real Jerry Rice would, in analogous situations. (At least, this is what he does when the video game player isn't explicitly controlling him.) The simulation is a very crude, simplified, limited-to-football-games version of real life.\nOver the years, video games have advanced, and their simulations of Jerry Rice - as well as the rest of the players, the football field, etc. - have become more and more realistic:2\nOK, the last one is a photo of the real Jerry Rice. But imagine that the video game designers kept making their Jerry Rice simulations more and more realistic and the game's universe more and more expansive,3 to the point where their simulated Jerry Rice would give interviews to virtual reporters, joke around with his virtual children, file his virtual taxes, and do everything else exactly how the real Jerry Rice would.\nIn this case, the simulated Jerry Rice would have a mind that works just like the real Jerry Rice's. It would be a \"digital person\" version of Jerry Rice.\nNow imagine that one could do the same for ~everyone, and you're imagining a world of digital people. \nCould digital people interact with the real world? For example, could a real-world company hire a digital person to work for it?\nYes and yes. \nA digital person could be connected to a robot body. Cameras could feed in light signals to the digital person's mind, and microphones could feed in sound signals; the digital person could send out signals to e.g. move their hand, which would go to the robot. Humans can generally learn to control implants this way, so it seems very likely that digital people could learn to pilot robots.\nDigital people might inhabit a virtual \"office\" with a virtual monitor displaying their web browser, a virtual keyboard they could type on, etc. They could use this setup to send information over the internet just as biological humans do (and as today's bots do). So they could answer emails, write and send memos, tweet, and do other \"remote work\" pretty normally, without needing any real-world \"body.\" \nThe virtual office need not be like the real world in all its detail - a pretty simple virtual environment with a basic \"virtual computer\" could be enough for a digital person to do most \"remote work.\"\nThey could also do phone and video calls with biological humans, by transmitting their \"virtual face/voice\" back to the biological human on the other end.\nOverall, it seems you could have the same relationship to a digital person that you can have to any person whom you never meet in the flesh.\nHumans and digital people\nCould digital people be conscious? Could they deserve human rights?\nSay there is a detailed digital copy of you, sending/receiving signals to/from a virtual body in a virtual world. The digital person sends signals telling the virtual body to put their hand on a virtual stove. As a consequence, the digital person receives signals that correspond to their hand burning. The digital person processes these signals and sends further signals to their mouth to cry out \"Ow!\" and to their hand to jerk away from the virtual stove.\nDoes this digital person feel pain? Are they really \"conscious\" or \"sentient\" or \"alive?\" Relatedly, should we consider their experience of burning to be an unfortunate event, one we wish had been prevented so they wouldn't have to go through this?\nThis is a question not about physics or biology, but about philosophy. And a full answer is outside the scope of this piece.\nI believe sufficiently detailed and accurate simulations of humans would be conscious, to the same degree and for the same reasons that humans are conscious.4\nIt's hard to put a probability on this when it's not totally clear what the statement even means, but I believe it is the best available conclusion given the state of academic philosophy of mind. I expect this view to be fairly common, though not universal, among philosophers of mind.5\nI will give an abbreviated explanation for why, via a couple of thought experiments.\nThought experiment 1. Imagine one could somehow replace a neuron in my brain with a \"digital neuron\": an electrical device, made out of the same sorts of things today's computers are made out of instead of what my neurons are made out of, that recorded input from other neurons (perhaps using a camera to monitor the various signals they were sending) and sent output to them in exactly the same pattern as the old neuron. \nIf we did this, I wouldn't behave differently in any way, or have any way of \"noticing\" the difference.\nNow imagine that one did the same to every other neuron in my brain, one by one - such that my brain ultimately contained only \"digital neurons\" connected to each other, receiving input signals from my eyes/ears/etc. and sending output signals to my arms/feet/etc. I would still not behave differently in any way, or have any way of \"noticing.\" \nAs you swapped out all the neurons, I would not notice the vividness of my thoughts dimming. Reasoning: if I did notice the vividness of my thoughts dimming, the \"noticing\" would affect me in ways that could ultimately change my behavior. For example, I might remark on the vividness of my thoughts dimming. But we've already specified that nothing about the inputs and outputs of my brain change, which means nothing about my behavior could change.\nNow imagine that one could remove the set of interconnected \"digital neurons\" from my head, and feed in similar input signals and output signals directly (instead of via my eyes/ears/etc.). This would be a digital version of me: a simulation of my brain, running on a computer. And at no point would I have noticed anything changing - no diminished consciousness, no muted feelings, etc.\nThought experiment 2. Imagine that I was talking with a digital copy of myself - an extremely detailed simulation of me that reacted to every situation just as I would. \nIf I asked my digital copy whether he's conscious, he would insist that he is (just as I would in response to the same question). If I explained and demonstrated his situation (e.g., that he's \"virtual\") and asked whether he still thinks he's conscious, he would continue to insist that he is (just as I would, if I went through the experience of being shown that I was being simulated on some computer - something my current observations can't rule out). \nI doubt there's any argument that could ever convince my digital counterpart that he's not conscious. If a reasoning process that works just like mine, with access to all the same facts I have access to, is convinced of \"digital-Holden is conscious,\" what rational basis could I have for thinking this is wrong?\nGeneral points:\nI imagine that whatever else consciousness is, it is the cause of things like \"I say that that I am conscious,\" and the source of my observations about my own conscious experience. The fact that my brain is made out of neurons (as opposed to computer chips or something else) isn't something that plays any role in my propensity to say I'm conscious, or in the observations I make about my own conscious experience: if my brain were a computer instead of a set of neurons, sending the same output signals, I would express all of the same beliefs and observations about my own conscious experience. \nThe cause of my statements about consciousness and the source of my observations about my own consciousness is not something about the material my brain is made of; rather, it is something about the patterns of information processing my brain performs. A computer performing the same patterns of information processing would therefore have as much reason to think itself conscious as I do.\nFinally, my understanding from talking to physicists is that many of them believe there is some important sense in which \"the universe can only be fundamentally understood as patterns of information processing,\" and that the distinction between e.g. neurons and computer processors seems unlikely to have anything \"deep\" to it.6\nFor longer takes on this topic, see:\nSection 9 of The Singularity: A Philosophical Analysis by David Chalmers. Similar reasoning appears in part III of Chalmers's book The Conscious Mind.\nZombies Redacted by Eliezer Yudkowsky. This is more informal and less academic, and its arguments are more similar to the one I make above.\nLet's say you're wrong, and digital people couldn't be conscious. How would that affect your views about how they could change the world?\nSay we could make digital duplicates of today's humans, but they weren't conscious. In that case:\nThey could still be enormously productive compared to biological humans. And studying them could still shed light on human nature and behavior. So the Productivity and Social Science sections would be pretty unchanged.\nThey would still believe themselves to be conscious (since we do, and they'd be simulations of us). They could still seek to expand throughout space and establish stable/\"locked-in\" communities to preserve the values they care about.\nDue to their productivity and huge numbers, I'd expect the population of digital people to determine what the long-run future of the galaxy looks like - including for biological humans.\nThe overall stakes would be lower, if the massive numbers of digital people throughout the galaxy and the virtual experiences they had \"didn't matter.\" But the stakes would still be quite high, since how digital people set up the galaxy would determine what life was like for biological humans.\nFeasibility\nAre digital people possible?\nThey certainly aren't possible today. We have no idea how to create a piece of software that would \"respond\" to video and audio data (e.g., sending the same signals to talk, move, etc.) the way a particular human would. \nWe can't simply copy and simulate human brains, because relatively little is known about what the human brain does. Neuroscientists have very limited ability to make observations about it.8 (We can do a pretty good job simulating some of the key inputs to the brain - cameras seem to capture images about as well as human eyes, and microphones seem to capture sound about as well as human ears.9)\nDigital people are a hypothetical technology, and we may one day discover that they are impossible. But to my knowledge, there isn't any current reason to believe they're impossible. \nI personally would bet that they will eventually be possible - at least via mind uploading (scanning and simulating human brains).10 I think it is a matter of (a) neuroscience advancing to the point where we can thoroughly observe and characterize the key details of what human brains are doing - potentially a very long road, but not an endless one; (b) writing software that simulates those key details; (c) running the software simulation on a computer; (d) providing a \"good enough\" virtual body and virtual environment, which could be quite simple (enabling e.g. talking, reading, and typing would go a long way).I'd guess that (a) is the hard part, and would guess that (c) could be done even on today's computer hardware.11\nI won't elaborate on this in this piece, but might do so in the future if there's interest.\nHow soon could digital people be possible?\nI don't think we have a good way of forecasting when neuroscientists will understand the brain well enough to get started on mind uploading - other than to say that we don't seem anywhere near this today.\nThe reason I think digital people could come in the next few decades is different: I think we could invent something else (mainly, advanced artificial intelligence) that dramatically speeds up scientific research. If that happens, we could see all sorts of new world-changing technologies emerge quickly - including digital people. \nI also think that thinking about digital people helps form intuitions about just how productive and powerful advanced AI could be (I'll discuss this in a future piece).\nOther questions\nI'm having trouble picturing a world of digital people - how the technology could be introduced, how they would interact with us, etc. Can you lay out a detailed scenario of what the transition from today's world to a world full of digital people might look like?\nI'll give one example of how things could go. It's skewed somewhat to the optimistic side so it doesn't immediately become dystopia. And it's skewed toward the \"familiar\" side: I don't explore all the potential radical consequences of digital people.\nNothing else in the piece depends on this story being accurate; the only goal is to make it a bit easier to picture this world and think about the motivations of the people in it.\nSo imagine that:\nOne day, a working mind uploading technology becomes available. For simplicity, let's assume that it is modestly priced from the beginning.7 What this means: anyone who wants can have their brain scanned, creating a \"digital copy\" of themselves.\nA few tens of thousands of people create \"digital copies\" of themselves. So there are now tens of thousands of digital people living in a simple virtual environment, consisting of simple office buildings, apartments and parks. \nInitially, each digital person thinks just like some non-digital person they were copied from, although as time goes on, their life experiences and thinking styles diverge. \nEach digital person gets to design their own \"virtual body\" that represents them in the environment. (This is a bit like choosing an avatar - the bodies need to be in a normal range of height, weight, strength, etc. but are pretty customizable.)\nThe computer server running all of the digital people, and the virtual environment they inhabit, is privately owned. However, thanks to prescient regulation, the digital people themselves are considered to be people with full legal rights (not property of their creators or of the server company). They make their own choices, subject to the law, and they have some basic initial protections, such as:\nIn order for them to continue existing, the owner of the server they're on must choose to run them. However, each digital person initially must have a pre-paid long-term contract with whatever server company is running them at first, so they can be assured of existing for a long time - say, at least 100 years from their biological copy's date of birth - if they want to. \nThey must be fully informed of their situation as a digital person and be given other information about what's going on, how to contact key people, etc. (Relatedly, initially only people 18 years and older can be digitally copied, although later digital people can have their own \"digital children\" - see below.)\nTheir initial virtual environment has to initially meet certain criteria (e.g., no violence or suffering inflicted on them, ample virtual food and water). They have their own bank account that starts with some money in it, and they can make more just like biological people do (e.g., by doing work for some company). \nThe server owner cannot make any significant changes to their virtual environment without their consent (other than ceasing to run them at all, which they can do after the contract runs out after some number of decades). Digital people may request, and offer money for, changes to their virtual environment (though any other affected digital people would need to give their consent too).\nThe server owner must cease running any digital people who requests to stop existing.\nDigital people form professional and personal relationships with each other. They also form personal and professional relationships with biological humans, whom they communicate with via email, video chat, etc. \nThey might work for the first company offering digital copying of humans, doing research on how to make future digital people cheaper to run.\nThey might stay in touch with the biological person they were copied from, exchanging emails about their personal lives. \nThey would almost certainly be interested in ensuring that no biological humans interfered with their server in unwelcome ways (such as by shutting it off).\nSome digital people fall in love and get married. A couple is able to \"have children\" by creating a new digital person whose mind is a hybrid of their two minds. Initially (subject to child abuse protections) they can decide how their child appears in the virtual environment, and even make some tweaks such as \"When the child's brain sends a signal to poop, a rainbow comes out instead.\" The child gains rights as they age, as biological humans do.\nDigital people are also allowed to copy themselves, as long as they are able to meet the requirements for new digital people (guarantee of being able to live for a reasonably long time, etc.) Copies have their own rights and don't owe anything to their creators.\nThe population of digital people grows, via people copying themselves and having children. Eventually (perhaps quickly, as discussed below), there are far more digital people than biological humans. Still, some digital people work for, employ or have personal relationships (via email, video chat, etc.) with biological humans. \nMany digital people work on making further population growth possible - by making it cheaper to run digital people, by building more computers (in the \"real\" world), by finding new sources of raw materials and energy for computers (also in the \"real\" world), etc. \nMany other digital people work on designing ever-more-creative virtual environments, some based on real-world locations, some more exotic (altered physics, etc.) Some virtual environments are designed to be lived in, while others are designed to be visited for recreation. Access is sold to digital people who want to be transferred to these environments.\nSo digital people are doing work, entertaining themselves, meeting each other, reproducing, etc. In these respects their lives have a fair amount in common with ours. \nLike us, they have some incentive to work for money - they need to pay for server costs if they want to keep existing for more than their initial contract says, or if they want to copy themselves or have children (they need to buy long server contracts for any such new digital people), or if they want to participate in various recreational environments and activities. \nUnlike us, they can do things like copying themselves, running at different speeds, changing their virtual bodies, entering exotic virtual environments (e.g., zero gravity), etc.\nThe prescient regulators have carved out ways for large groups of digital people to form their own virtual states and civilizations, which can set and change their own regulations.\nDystopian alternatives. A world of digital people could very quickly get dystopian if there were worse regulation, or no regulation. For example, imagine if the rule were \"Whoever owns the server can run whatever they want on it.\" Then people might make digital copies of themselves that they ran experiments on, forced to do work, and even open-sourced, so that anyone running a server could make and abuse copies. This very short story (recommended, but chilling) gives a flavor for what that might be like.\nThere are other (more gradual) ways for a world of digital people to become dystopian, as outlined here (unassailable authoritarianism) and in The Duplicator (people racing to make copies of each other and dominate the population).\nAnd what are the biological humans up to? Throughout this section, I've talked about how the world would be for digital people, not for normal biological humans. I'm more focused on that, because I expect that digital people would quickly become most of the population, and I think we should care about them as much as we care about biological humans. But if you're wondering what things would be like for biological humans, I'd expect that:\nDigital people, due to their numbers and running speeds, would become the dominant political and military players in the world. They would probably be the people determining what biological humans' lives would be like.\nThere would be very rapid scientific and technological advancement (as discussed below). So assuming digital people and biological humans stayed on good terms, I'd expect biological humans to have access to technology far beyond today's. At a minimum, I expect this would mean pretty much unlimited medical technologies (including e.g. \"curing\" aging and having indefinitely long lifespans).\nAre digital people different from mind uploads?\nMind uploading refers to simulating a human brain on a computer. (It is usually implied that this would not literally be an isolated brain, i.e., it would include some sort of virtual environment and body for the person being simulated, or perhaps they would be piloting a robot.)\nA mind upload would be one form of digital person, and most of this piece could have been written about mind uploads. Mind uploads are the most easy-to-imagine version of digital people, and I focus on them when I talk about why I think digital people will someday be possible and why they would be conscious like we are.\nBut I could also imagine a future of \"digital people\" that are not derived from copying human brains, or even all that similar to today's humans. I think it's reasonably likely that by the time digital people are possible (or pretty soon afterward), they will be quite different from today's humans.12\nMost of this piece would apply to roughly any digital entities that (a) had moral value and human rights, like non-digital people; (b) could interact with their environments with equal (or greater) skill and ingenuity to today's people. With enough understanding of how (a) and (b) work, it could be possible to design digital people without imitating human brains.\nI'll be referring to digital people a lot throughout this series to indicate how radically different the future could be. I don't want to be read as saying that this would necessarily involve copying actual human brains.\nWould a digital copy of me be me?\nSay that someone scanned my brain and created a simulation of it on a computer: a digital copy of me. Would this count as \"me\"? Should I hope that this digital person has a good life, as much as I hope that for myself?\nThis is another philosophy question. My basic answer is \"Sort of, but it doesn't really matter much.\" This piece is about how radically digital people could change the world; this doesn't depend on whether we identify with our own digital copies. \nIt does depend (somewhat) on whether digital people should be considered \"full persons\" in the sense that we care about them, want them to avoid bad experiences, etc. The section on consciousness is more relevant to this question.\nWhat other questions can I ask?\nSo many more! E.g.: https://tvtropes.org/pmwiki/pmwiki.php/Analysis/BrainUploading\nWhy does all of this matter?\nThe piece that this is a companion for, Digital People Would Be An Even Bigger Deal, spells out a number of ways in which digital people could lead to a radically unfamiliar future. \nElsewhere in this series, I'm going to argue that AI advances this century could quickly lead to digital people or similarly significant technology. The transformative potential of something like digital people, combined with how quickly AI could lead to it, form the case that we could be in the most important century.\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n The agents (\"bad guys\") are more like digital people. In fact, one extensively copies himself. ↩\n These are all taken from this video, except for the last one. ↩\n Football video games have already expanded to simulate offseason tradings, signings and setting ticket prices. ↩\n It's also possible there could be conscious \"digital people\" who did not resemble today's humans, but I won't go into that here - I'll just focus on the concrete example of \"digital people\" that are virtual versions of humans. ↩\n According to the PhilPapers Surveys, 56.5% of philosophers endorse physicalism, vs. 27.1% who endorse non-physicalism and 16.4% \"other.\" I expect the vast majority of philosophers who endorse physicalism to agree that a sufficiently detailed simulation of a human would be conscious. (My understanding is that biological naturalism is a fringe/unpopular position, and that physicalism + rejecting biological naturalism would imply believing that sufficiently detailed simulations of humans would be conscious.) I also expect that some philosophers who don't endorse physicalism would still believe that such simulations would be conscious (David Chalmers is an example - see The Conscious Mind). These expectations are just based on my impressions of the field. ↩\n From an email from a physicist friend: \"I think a lot of people have the intuition that real neural activity, produced by real chemical reactions from real neurotransmitters, and real electrical activity that you can feel with your hand, somehow has some property that mere computer code can't have. But one of the overwhelming messages of modern physics has been that everything that exists -- particles, fields, atoms, etc, is best thought of in terms of information, and may simply *be* information. The universe may perhaps be best described as a mathematical abstraction. Chemical reactions don't come from some essential property of atoms but instead from subtle interactions between their valence electron shells. Electrons and protons aren't well-defined particles, but abstract clouds of probability mass. Even the concept of \"particles\" is misleading; what seems to actually exist is quantum fields which are the solutions of abstract mathematical equations, and some of whose states are labeled by humans as \"1 particle\" or \"2 particles\". To be a bit metaphorical, we are like tiny ripples on vast abstract mathematical waves, ripples whose patterns and dynamics happen to execute the information processing corresponding to what we call sentience. If you ask me our existence and the substrate we live on is already much weirder and more ephemeral than anything we might upload humans onto.\" ↩\n I actually expect it would start off very expensive, but become cheaper very quickly due to a productivity explosion, discussed below. ↩\n For an illustration of this, see this report: How much computational power does it take to match the human brain? (Particularly the Uncertainty in neuroscience section.) Even estimating how many meaningful operations the human brain performs is, today, very difficult and fraught - let alone characterizing what those operations are. ↩\n This statement is based on my understanding of conventional wisdom plus the fact that recorded video and audio often seems quite realistic, implying that the camera/microphone didn't fail to record much important information about its source. ↩\n This is assuming technology continues to advance, the species doesn't go extinct, etc. ↩\nThis report concludes that a computer costing ~$10,000 today has enough computational power (10^14 FLOP/s, a measure of computational power) to be within 1/10 of the author's best guess at what it would take to replicate the input-output behavior of a human brain (10^15 FLOP/s). If we take the author's high-end estimate rather than best guess, it is about 10 million times as much computation (10^22 FLOP/s), which would presumably cost $1 trillion today - probably too high to be worth it, but computing is still getting cheaper. It's possible that replicating the input-output behavior alone wouldn't be enough detail to attain \"consciousness,\" though I'd guess it would be, and either way it would be sufficient for the productivity\" and social science\" consequences. ↩\n I could also imagine a future in which the two key properties I list in the next paragraph - (a) moral value and human rights (b) human-level-or-above capabilities - were totally separated. That is, there could be a world full of (a) AIs with human-level-or-above capabilities, but no consciousness or moral value; (b) digital entities with moral value and conscious experience, but very few skills compared to AIs and even compared to today's people. Most of what I say in this piece about a world of \"digital people\" would apply to such a world; in this case you could sort of think of a \"digital people\" as \"teams\" of AIs and morally-valuable-but-low-skill entities. ↩\n", "url": "https://www.cold-takes.com/digital-people-faq/", "title": "Digital People FAQ", "source": "cold.takes", "source_type": "blog", "date_published": "2021-07-27", "id": "9bc59a12be9c1dc6d9826f3b69aff241"} -{"text": "\nAudio also available by searching Stitcher, Spotify, Google Podcasts, etc. for \"Cold Takes Audio\"\nToday’s world\nTransformative AI\nDigital people\nWorld of\nMisaligned AI\nWorld run by\nSomething else\nor\nor\nStable, galaxy-wide\ncivilization\nThis is the third post in a series explaining my view that we could be in the most important century of all time. (Here's the roadmap for this series.)\nThe first piece in this series discusses our unusual era, which could be very close to the transition between an Earth-bound civilization and a stable galaxy-wide civilization.\n This piece discusses \"digital people,\" a category of technology that could be key for this transition (and would have even bigger impacts than the hypothetical Duplicator discussed previously).\nMany of the ideas here appear somewhere in sci-fi or speculative nonfiction, but I'm not aware of another piece laying out (compactly) the basic idea of digital people and the key reasons that a world of digital people would be so different from today's.\nThe idea of digital people provides a concrete way of imagining how the right kind of technology (which I believe to be almost certainly feasible) could change the world radically, such that \"humans as we know them\" would no longer be the main force.\nIt will be important to have this picture, because I'm going to argue that AI advances this century could quickly lead to digital people or similarly significant technology. The transformative potential of something like digital people, combined with how quickly AI could lead to it, form the case that we could be in the most important century.\nIntro\nPreviously, I wrote: \nWhen some people imagine the future, they picture the kind of thing you see in sci-fi films. But these sci-fi futures seem very tame, compared to the future I expect ... \nThe future I picture is enormously bigger, faster, weirder, and either much much better or much much worse compared to today. It's also potentially a lot sooner than sci-fi futures: I think particular, achievable-seeming technologies could get us there quickly.\nThis piece is about digital people, one example1 of a technology that could lead to an extremely big, fast, weird future. \nTo get the idea of digital people, imagine a computer simulation of a specific person, in a virtual environment. For example, a simulation of you that reacts to all \"virtual events\" - virtual hunger, virtual weather, a virtual computer with an inbox - just as you would. (Like The Matrix? See footnote.2) I explain in more depth in the FAQ companion piece.\nThe central case I'll focus on is that of digital people just like us, perhaps created via mind uploading (simulating human brains). However, one could also imagine entities unlike us in many ways, but still properly thought of as \"descendants\" of humanity; those would be digital people as well. (More on my choice of term in the FAQ.)\nPopular culture on this sort of topic tends to focus on the prospect of digital immortality: people avoiding death by taking on a digital form, which can be backed up just like you back up your data. But I consider this to be small potatoes compared to other potential impacts of digital people, in particular:\nProductivity. Digital people could be copied, just as we can easily make copies of ~any software today. They could also be run much faster than humans. Because of this, digital people could have effects comparable to those of the Duplicator, but more so: unprecedented (in history or in sci-fi movies) levels of economic growth and productivity.\nSocial science. Today, we see a lot of progress on understanding scientific laws and developing cool new technologies, but not so much progress on understanding human nature and human behavior. Digital people would fundamentally change this dynamic: people could make copies of themselves (including sped-up, temporary copies) to explore how different choices, lifestyles and environments affected them. Comparing copies would be informative in a way that current social science rarely is. \nControl of the environment. Digital people would experience whatever world they (or the controller of their virtual environment) wanted. Assuming digital people had true conscious experience (an assumption discussed in the FAQ), this could be a good thing (it should be possible to eliminate disease, material poverty and non-consensual violence for digital people) or a bad thing (if human rights are not protected, digital people could be subject to scary levels of control).\nSpace expansion. The population of digital people might become staggeringly large, and the computers running them could end up distributed throughout our galaxy and beyond. Digital people could exist anywhere that computers could be run - so space settlements could be more straightforward for digital people than for biological humans.\nLock-in. In today's world, we're used to the idea that the future is unpredictable and uncontrollable. Political regimes, ideologies, and cultures all come and go (and evolve). But a community, city or nation of digital people could be much more stable. \nDigital people need not die or age.\n \nWhoever sets up a \"virtual environment\" containing a community of digital people could have quite a bit of long-lasting control over what that community is like. For example, they might build in software to reset the community (both the virtual environment and the people in it) to an earlier state if particular things change - such as who's in power, or what religion is dominant. \n \nI consider this a disturbing thought, as it could enable long-lasting authoritarianism, though it could also enable things like permanent protection of particular human rights.\nI think these effects (elaborated below) could be a very good or a very bad thing. How the early years with digital people go could irreversibly determine which. \nI think similar consequences would arise from any technology that allowed (a) extreme control over our experiences and environment; (b) duplicating human minds. This means there are potentially many ways for the future to become as wacky as what I sketch out here. I discuss digital people because doing so provides a particularly easy way to imagine the consequences of (a) and (b): it is essentially about transferring the most important building block of our world (human minds) to a domain (software) where we are used to the idea of having a huge amount of control to program whatever behaviors we want. \nMuch of this piece is inspired by Age of Em, an unusual and fascinating book. It tries to describe a hypothetical world of digital people (specifically mind uploads) in a lot of detail, but (unlike science fiction) it also aims for predictive accuracy rather than entertainment. In many places I find it overly specific, and overall, I don't expect that the world it describes will end up having much in common with a real digital-people-filled world. However, it has a number of sections that I think illustrate how powerful and radical a technology digital people could be.\nBelow, I will:\nDescribe the basic idea of digital people, and link to a FAQ on the idea.\nGo through the potential implications of digital people, listed above.\nThis is a piece that different people may want to read in different orders. Here's an overall guide to the piece and FAQ:\nNormal humans\nDigital people\nPossible today (More)\n \nProbably possible someday (More)\n \nCan interact with the real world, do most jobs (More)\n \n \nConscious, should have human rights (More)\n \nEasily duplicated, ala The Duplicator (More)\n \nCan be run sped-up (More)\n \nCan make \"temporary copies\" that run fast, then retire at slow speed (More)\n \nProductivity and social science: could cause unprecedented economic growth, productivity, and knowledge of human nature and behavior (More)\n \nControl of the environment: can have their experiences altered in any way (More)\n \nLock-in: could live in highly stable civilizations with no aging or death, and \"digital resets\" stopping certain changes (More)\n \nSpace expansion: can live comfortably anywhere computers can run, thus highly suitable for galaxy-wide expansion (More)\n \nGood or bad? (More)\n \nOutside the scope of this piece\n \nCould be very good or bad\n \nPremises\nThis piece focuses on how digital people could change the world. I will mostly assume that digital people are just like us, except that they can be easily copied, run at different speeds, and embedded in virtual environments. In particular, I will assume that digital people are conscious, have human rights, and can do most of the things humans can, including interacting with the real world.\nI expect many readers will have trouble engaging with this until they see answers to some more basic questions about digital people. Therefore, I encourage readers to click on any questions that sound helpful from the companion FAQ, or just read the FAQ straight through. Here is the list of questions discussed in the FAQ:\nBasics\nBasics of digital people\nI'm finding this hard to imagine. Can you use an analogy?\nCould digital people interact with the real world? For example, could a real-world company hire a digital person to work for it?\nHumans and digital people\nCould digital people be conscious? Could they deserve human rights?\nLet's say you're wrong, and digital people couldn't be conscious. How would that affect your views about how they could change the world?\nFeasibility\nAre digital people possible?\nHow soon could digital people be possible?\nOther questions\nI'm having trouble picturing a world of digital people - how the technology could be introduced, how they would interact with us, etc. Can you lay out a detailed scenario of what the transition from today's world to a world full of digital people might look like?\nAre digital people different from mind uploads?\nWould a digital copy of me be me?\nWhat other questions can I ask?\nHow could digital people change the world?\nProductivity\nLike any software, digital people could be instantly and accurately copied. The Duplicator argues that the ability to \"copy people\" could lead to rapidly accelerating economic growth: \"Over the last 100 years or so, the economy has doubled in size every few decades. With a Duplicator, it could double in size every year or month, on its way to hitting the limits.\"\nThanks to María Gutiérrez Rojas for this graphic, a variation on a similar set of graphics from The Duplicator illustrating how duplicating people could cause explosive growth.\nDigital people could create a more dramatic effect than this, because of their ability to be sped up (perhaps by thousands or millions of times)3 as well as slowed down (to save on costs). This could further increase both speed and coordinating ability.4\nAnother factor that could increase productivity: \"Temporary\" digital people could complete a task and then retire to a nice virtual life, while running very slowly (and cheaply).5 This could make some digital people comfortable copying themselves for temporary purposes. Digital people could, for example, copy themselves hundreds of times to try different approaches to figuring out a problem or gaining a skill, then keep only the most successful version and make many copies of that version. \nIt's possible that digital people could be less of an economic force than The Duplicator since digital people would lack human bodies. But this seems likely to be only a minor consideration (details in footnote).6\nSocial science\nToday, we see a lot of impressive innovation and progress in some areas, and relatively little in other areas. \nFor example, we're constantly able to buy cheaper, faster computers and more realistic video games, but we don't seem to be constantly getting better at making friends, falling in love, or finding happiness.7 We also aren't clearly getting better at things like fighting addiction, and getting ourselves to behave as we (on reflection) want to.\nOne way of thinking about it is that natural sciences (e.g. physics, chemistry, biology) are advancing much more impressively than social sciences (e.g. economics, psychology, sociology). Or: \"We're making great strides in understanding natural laws, not so much in understanding ourselves.\"\nDigital people could change this. It could address what I see as perhaps the fundamental reason social science is so hard to learn from: it's too hard to run true experiments and make clean comparisons.\nToday, if we we want to know whether meditation is helpful to people:\nWe can compare people who meditate to people who don't, but there will be lots of differences between those people, and we can't isolate the effect of meditation itself. (Researchers try to do so with various statistical techniques, but these raise their own issues.)\nWe could also try to run an experiment in which people are randomly assigned to meditate or not. But we need a lot of people to participate, all at the same time and under the same conditions, in the hopes that the differences between meditators and non-meditators will statistically \"wash out\" and we can pick up the effects of meditation. Today, these kinds of experiments - known as \"randomized controlled trials\" - are expensive, logistically challenging, time-consuming, and almost always end up with ambiguous and difficult-to-interpret results.\nBut in a world with digital people:\nAnyone could make a copy of themselves to try out meditation, perhaps even dedicating themselves to it for several years (possibly sped-up).8 If they liked the results, they could then meditate for several years themselves, and ensure that all future copies were made from someone who had reaped the benefits of meditation.\nSocial scientists could study people who had tried things like this and look for patterns, which would be much more informative than social science research tends to be now. (They could also run deliberate experiments, recruiting/paying people to make copies of themselves to try different lifestyles, cities, schools, etc. - these could be much smaller, cheaper, and more definitive than today's social science experiments.9)\n \nThe ability to run experiments could be good or bad, depending on the robustness and enforcement of scientific ethics. If informed consent weren't sufficiently protected, digital people could open up the potential for an enormous amount of abuse; if it were, it could hopefully primarily enable learning.\nDigital people could also enable: \nOvercoming bias. Digital people could make copies of themselves (including temporary, sped-up copies) to consider arguments delivered in different ways, by different people, including with different apparent race and gender, and see whether the copies came to different conclusions. In this way they could explore which cognitive biases - from sexism and racism to wishful thinking and ego - affected their judgments, and work on improving and adapting to these biases. (Even if people weren't excited to do this, they might have to, as others would be able to ask for information on how biased they are and expect to get clear data.)\nBonanzas of reflection and discussion. Digital people could make copies of themselves (including sped-up, temporary copies) to study and discuss particular philosophy questions, psychology questions, etc. in depth, and then summarize their findings to the original.10 By seeing how different copies with different expertises and life experiences formed different opinions, they could have much more thoughtful, informed answers than I do to questions like \"What do I want in life?\", \"Why do I want it?\", \"How can I be a person I'm proud of being?\", etc.\n \nVirtual reality and control of the environment\nAs stated above, digital people could live in \"virtual environments.\" In order to design a virtual environment, programmers would systematically generate the right sort of light signals, sound signals, etc. to send to a digital person as if they were \"really there.\"\nOne could say the historical role of science and technology is to give people more control over their environment. And one could think of digital people almost as the logical endpoint of this: digital people would experience whatever world they (or the controller of their virtual environment) wanted.\nThis could be a very bad or good thing: \nBad thing. Someone who controlled a digital person's virtual environment could have almost unlimited control over them.\nFor this reason, it would be important for a world of digital people to include effective enforcement of basic human rights for all digital people. (More on this idea in the FAQ.)\nA world of digital people could very quickly get dystopian if digital people didn't have human rights protections. For example, imagine if the rule were \"Whoever owns a server can run whatever they want on it, including digital copies of anyone.\" Then people might make \"digital copies\" of themselves that they ran experiments on, forced to do work, and even open-sourced, so that anyone running a server could make and abuse copies. This very short story (recommended, but chilling) gives a flavor for what that might be like.\nGood thing. On the other hand, if a digital person were in control of their own environment (or someone else was and looked out for them), they could be free from any experiences they wanted to be free from, including hunger, violence, disease, other forms of ill health, and debilitating pain of any kind. Broadly, they could be \"free from material need\" - other than the need for computing resources to be run at all.\nThis is a big change from today's world. Today, if you get cancer, you're going to suffer pain and debilitation even if everyone in the world would prefer that you didn't. Digital people need not experience having cancer if they and others don't want this to happen.\nIn particular, physical coercion within a virtual environment could be made impossible (it could simply be impossible to transmit signals to another digital person corresponding to e.g. being punched or shot).\nDigital people might also have the ability to experience a lot of things we can't experience now - inhabiting another person's body, going to outer space, being in a \"dangerous\" situation without actually being in danger, eating without worrying about health consequences, changing from one apparent race or gender to another, etc.\nSpace expansion\nIf digital people underwent an explosion of economic growth as discussed above, this could come with an explosion in the population of digital people (for reasons discussed in The Duplicator).\nIt might reach the point where they needed to build spaceships and leave the solar system in order to get enough energy, metal, etc. to build more computers and enable more lives to exist.\nSettling space could be much easier for digital people than for biological humans. They could exist anywhere one could run computers, and the basic ingredients needed to do that - raw materials, energy, and \"real estate\"11 - are all super-abundant throughout our galaxy, not just on Earth. Because of this, the population of digital people could end up becoming staggeringly large.12\nLock-in\nIn today's world, we're used to the idea that the future is unpredictable and uncontrollable. Political regimes, ideologies, and cultures all come and go (and evolve). Some are good, and some are bad, but it generally doesn't seem as though anything will last forever. But communities, cities, and nations of digital people could be much more stable.\nFirst, because digital people need not die or physically age, and their environment need not deteriorate or run out of anything. As long as they could keep their server running, everything in their virtual environment would be physically capable of staying as it is.\nSecond, because an environment could be designed to enforce stability. For example, imagine that:\nA community of digital people forms its own government (this would require either overpowering or getting consent from their original government).\nThe government turns authoritarian and repeals the basic human rights protections discussed in the FAQ.\nThe head wants to make sure that they - or perhaps their ideology of choice - stays in power forever.\nThey could overhaul the virtual environment that they and all of the other citizens are in (by gaining access to the source code and reprogramming it, or operating robots that physically alter the server), so that certain things about the environment can never be changed - such as who's in power. If such a thing were about to change, the virtual environment could simply prohibit the action or reset to an earlier state. \nIt would still be possible to change the virtual environment from outside - e.g., to physically destroy, hack or otherwise alter the server running it. But if this were taking place after a long period of population growth and space colonization, then the server might be way out in outer space, light-years from anyone who'd be interested in doing such a thing.\nAlternatively, \"digital correction\" could be a force for good if used wisely enough. It could be used to ensure that no dictator ever gains power, or that certain basic human rights are always protected. If a civilization became \"mature\" enough - e.g., fair, equitable and prosperous, with a commitment to freedom and self-determination and a universally thriving population - it could keep these properties for a very long time.\nI'm not aware of many in-depth analyses of the \"lock-in\" idea, but I elaborate further on this idea here. December 2022 update: there is further analysis on this point here.\nWould these impacts be a good or bad thing?\nThroughout this piece, I imagine many readers have been thinking \"That sounds terrible! Does the author think it would be good?\" Or \"That sounds great! Does the author disagree?\"\nMy take on a future with digital people is that it could be very good or very bad, and how it gets set up in the first place could irreversibly determine which. \nHasty use of lock-in (discussed above) and/or overly quick spreading out through the galaxy (discussed above) could result in a huge world full of digital people (as conscious as we are) that is heavily dysfunctional, dystopian or at least falling short of its potential. \nBut acceptably good initial conditions (protecting basic human rights for digital people, at a minimum), plus a lot of patience and accumulation of wisdom and self-awareness we don't have today (perhaps facilitated by better social science), could lead to a large, stable, much better world. It should be possible to eliminate disease, material poverty and non-consensual violence, and create a society much better than today's.\n \nNext in series: This Can't Go On\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n The best example I can think of, but surely not the only one. ↩\nThe movie The Matrix gives a decent intuition for the idea with its fully-immersive virtual reality, but unlike the heroes of The Matrix, a digital person need not be connected to any physical person - they could exist as pure software.\n The agents (\"bad guys\") are more like digital people than the heroes are. In fact, one extensively copies himself. ↩\n See Age of Em Chapter 6, starting with \"Regarding the computation ...\" ↩\n For example, when multiple teams of digital people need to coordinate on a project, they might speed up (or slow down) particular steps and teams in order to make sure that each piece of the project is completed just on time. This would allow more complex, \"fragile\" plans to work out. (This point is from Age of Em Chapter 17, \"Preparation\" section.) ↩\n See Age of Em Chapter 11, \"Retirement\" section.  ↩\n Without human bodies - and depending on what kinds of robots were available - digital people might not be good substitutes for humans when it comes to jobs that rely heavily on human physical abilities, or jobs that require in-person interaction with biological humans. \n However, digital people would likely be able to do everything needed to cause an explosive economic growth, even if they couldn't do everything. In particular, it seems they could do everything needed to increase the supply of computers, and thereby increase the population of digital people.\n Creating more computing power requires (a) raw materials - mostly metal; (b) research and development - to design the computers; (c) manufacturing - to carry out the design and turn raw materials into computers; (d) energy. Digital people could potentially make all of these things a great deal cheaper and more plentiful:\nRaw materials. It seems that mining could, in principle, be done entirely with robots. Digital people could design and instruct these robots to extract raw materials as efficiently as possible.\nResearch and development. My sense is that this is a major input into the cost of computing today: the work needed to design ever-better microprocessors and other computer parts. Digital people could do this entirely virtually.\nManufacturing. My sense is that this is the other major input into the cost of computing today. Like mining, it could in principle be done entirely with robots.\nEnergy. Solar panels are also subject to (a) better research and development; (b) robot-driven manufacturing. Good enough design and manufacturing of solar panels could lead to radically cheaper and more plentiful energy.\nSpace exploration. Raw materials, energy, and \"real estate\" are all super-abundant outside of Earth. If digital people could design and manufacture spaceships, along with robots that could build solar panels and computer factories, they could take advantage of massive resources compared to what we have on earth. ↩\n It is debatable whether the world is getting somewhat better at these things, somewhat worse, or neither. But it seems pretty clear that the progress isn't as impressive as in computing. ↩\n Why would the copy cooperate in the experiment? Perhaps because they simply were on board with the goal (I certainly would cooperate with a copy of myself trying to learn about meditation!). Perhaps because they were paid (in the form of a nice retirement after the experiment). Perhaps because they saw themselves and their copies (and/or original) as the same person (or at least cared a lot about these very similar people). A couple of factors that would facilitate this kind of experimentation: (a) digital people could examine their own state of mind to get a sense of the odds of cooperation (since the copy would have the same state of mind); (b) if only a small number of digital people experimented, large numbers of people could still learn from the results. ↩\n I'd also expect them to be able to try more radical things. For example, in today's world, it's unlikely that you could run a randomized experiment on what happens if people currently living in New York just decide to move to Chicago. It would be too hard to find people willing to be randomly assigned to stay in New York or move to Chicago. But in a world of digital people, experimenters could pay New Yorkers to make copies of themselves who move to Chicago. And after the experiment, each Chicago copy that wished it had stayed in New York could choose to replace itself with another copy of the New York version. (The latter brings up questions about philosophy of personal identity, but for social science purposes, all that matters is that some people would be happy to participate in experiments due to this option, and everyone could learn from the experiments.) ↩\n See footnote from the first bullet point on why people's copies might cooperate with them. ↩\n And air for cooling. ↩\n See the estimates in Astronomical Waste for a rough sense of how big the numbers can get here (although these estimates are extremely speculative). ↩\n", "url": "https://www.cold-takes.com/how-digital-people-could-change-the-world/", "title": "Digital People Would Be An Even Bigger Deal", "source": "cold.takes", "source_type": "blog", "date_published": "2021-07-27", "id": "d5a995ebed7b1113936b6f6c7c352bf5"} -{"text": "\nSome years ago, I tooled around on the Gallup website for a while, and I was surprised at how many things I was surprised by. Here are some of the things I found interesting:\nPeople are quite satisfied with their jobs. 95% are very satisfied or somewhat satisfied with coworker relationships, ~80% with their boss/supervisor.\n~44% of people think they'd continue working in their current job if they won $10 million in the lottery. At the same link: people estimate that they waste an average of an hour a day at work, and that people at their workplace are closer to wasing 1.5 hrs (as of 2007)\nEnvironment: People rank environmental issues highly and even say that the environment should be prioritized above economic growth:\nMore Americans think we spend too much on the military as opposed to too little.\n~55% of Americans hadn't been on a plane in the last 12mo, as of 2015 (not a pandemic thing).\n~13% never used the Internet, as of 2013 - see \"Do you, personally, use the internet at your home, place of work or school? That could be through a computer, smartphone, tablet or other device\").\n~40% of Americans are straight-up Creationists.\nAs of 2005, large numbers of people thought creationism (54% in favor, 23% unsure) and intelligent design (43% in favor, 35% unsure) should be taught in school. (Compare to evolution: 61% in favor, 19% unsure.)\nWhat percentage of people think the taxes they're paying are too high vs. about right? Surprisingly (to me), it's roughly a draw. That's a recent thing.\n~4% have had 20+ drinks in the last 7 days at the time of the survey (2019); ~35% answer \"no\" to \"Do you have occasion to use alcoholic beverages such as liquor, wine or beer?\"\nTrust in media fell a lot between the 1970s and 2010s, though it's been surprisingly stable since then.\n~60% believe that cloning animals would be morally wrong.\n~50% say religion is \"very important\" in their life (I was surprised at how low this is, especially given that nearly half believe in creationism). However, large %'s of people (60-70%) believe in heaven, hell, angels, the Devil.\nFootball has been (a lot) more popular than baseball for at least 35 years. Baseball and basketball were even as of 2008 (most recent).\nAmericans would like to see big business have less influence, but are even more worried (by a lot, 67%-26%) about big government (see \"In your opinion, which of the following will be the biggest threat to the country in the future -- big business, big labor or big government?\")\nHere's a chart of public opinion toward each industry. It's pretty much what you'd expect.\nPublic opinion on ethical standards of people in different professions. Medical personnel (esp nurses) and schoolteachers top the list. The bottom looks like what you'd expect (car salespeople, members of Congress).\nDeath Penalty: ~60% of Americans were in favor as of around 2010, even though a majority thought it had claimed innocent victims in the last 5 yrs (the figure was 59% as of the last measurement in 2009), and 64% thought it was ineffective as a deterrent. (As of 2020 the % in favor of the death penalty is a little lower, 55%; the other two figures I just cited haven't been updated.)\n~40% of Americans personally own a gun.\nMarriage: ~40% think sex outside marriage is immoral. ~10% disapprove of marriage between blacks and whites (as of 2013, the latest data point), down from ~30% (!) in 2002.\nThe most admired man and woman of each year going back several decades (3rd table down). Mostly just Presidents and First Ladies.\n~5% of Americans are vegetarian, 3% vegan. And again, ~40% personally own a gun. If you had any remaining doubts that your friends are not normal.\nAt a given point in time, Americans consistently think they are worse off than a year ago, but will be better off in a year.\n21% of Americans say they have a physical disability that limits their activity (as of 2004, the last data point).\nLGBT rights. Very big moves in the right directions, but as of May 2021 18% still think that \"gay or lesbian relations between consenting adults should not be legal.\" (This number is falling pretty fast though.)\nMarijuana legalization support has soared from ~16% in 1969 to about 68% today.\n~80% are satisfied with the way their lives are going. Has been fairly stable over time. By contrast, satisfaction with how things are going in the U.S. is pretty volatile.\nPeople seem to commute a lot - avg of ~50 minutes a day, ~20% at 90+ minutes.\nOn smoking: of smokers, ~70% consider themselves addicted (as of 2013); 70% say they want to give up smoking; ~90% say they wouldn't smoke if they were to do it over again. \nSubscribe Feedback\n", "url": "https://www.cold-takes.com/gallup-website-notes/", "title": "Gallup website notes", "source": "cold.takes", "source_type": "blog", "date_published": "2021-07-23", "id": "6ce2b1012227d4d0ba13733c9e5be993"} -{"text": "\nI love it when someone makes lots of predictions and then we can see their track record after the fact. Here's a collection of links to track records for anyone/anything interesting that has them tabulated, AFAIK.\nThe basic idea\nThe basic idea is that someone can write down specific, public predictions about the world with probabilities attached, like \"60% chance that X wins the 2020 election.\" If they make a lot of predictions, they can then come back and assess whether their overall body of work was \"well calibrated,\" which means that things they thought would happen 60% of the time happened 60% of the time; things they thought would happen 80% of the time happened 80% of the time; etc. \nThis is done using a \"calibration curve\" (most of the links below explain how the curve works; this explanation is pretty compact).\nPeople with good historical calibration then have evidence of their trustworthiness going forward.\nMore on this general idea in the book Superforecasting (and more briefly at this Open Philanthropy blog post).\nGood track records\nFiveThirtyEight\nHyperMind, which contracts to create predictions.\n A few individuals:\n Scott Alexander (this links to year-by-year track records, though I wish all the years were combined in one place).\n @peterhurford Twitter predictions (and summary).\n Gwern.\nOK-to-pretty-good track records\nElectionBettingOdds.com, which forecasts elections based on prediction markets (unfortunately not updated for 2020 elections yet). Good overall, though it looks like events they score as 20-30% likely are more likely to happen than predicted. (I wish they would combine the 20-30% predictions with the 70-80% predictions etc., since a 20-30% prediction that something will happen is just like a 70-80% prediction that it won't.)\n Metaculus, a community forecasting site - skip to the last chart, which is the same idea as the 2nd chart from ElectionBettingOdds, though harder to read. Seems to be biased in the opposite direction from ElectionBettingOdds.com, i.e., overrating the likelihood of pretty unlikely events.\n All users in aggregate for PredictionBook (a website that lets individuals track their own predictions) - well calibrated except at the very confident end. I'd guess this is a \"wisdom of crowds\" thing; it wouldn't make me trust a particular PredictionBook user very much.\nLess good track records\nTrack record for Scott Adams, scored by someone other than Scott Adams (so not 100% reliable, though I checked a couple of them). It's really bad, probably easier to see from the \"Predictions\" sheet than from the chart. For example he tweeted \"I put Trump’s odds of winning the election at 100% (unless something big changes) with a 50% chance of a successful coup\" on 6/18/2020, and \"If I had to bet today, Trump has the edge, 60-40\" two days after the election. (He also gave 0% to Biden winning the nomination, earlier.) The reason I'm bothering with this (and probably the reason the maker of the spreadsheet did) is that Adams has gotten attention in some circles for his highly confident predictions of a Trump win in 2016, and I want to make the general point that a small # of impressive predictions can be very misleading, since the successful predictions tend to get more attention than the unsuccessful predictions.\n2015 evidence that a famous ESPN sports analyst had actually been editing his prediction-like statements (mock draft rankings) to make them look better with the benefit of hindsight.\nTangentially related\nIndependent analysis of Ray Kurzweil's predictions about 2019, made in 1999. He didn't give probabilities, and got more than twice as many wrong as right, though I think it's fair of the author to end with \"I strongly suspect that most people's 1999 predictions about 2019 would have been a lot worse.\"\nOpen Philanthropy's attempt to assess the general accuracy of very long-range forecasts mostly concluded that it's too hard to assess because of things like \"People didn't make clear enough predictions or give probabilities.\"\nSubscribe Feedback\n", "url": "https://www.cold-takes.com/prediction-track-records-i-know-of/", "title": "Track records for those who have made lots of predictions", "source": "cold.takes", "source_type": "blog", "date_published": "2021-07-21", "id": "f72bb9ab643aec77dad4226c943a30b7"} -{"text": "\nAudio also available by searching Stitcher, Spotify, Google Podcasts, etc. for \"Cold Takes Audio\"\nThis is the second post in a series explaining my view that we could be in the most important century of all time. Here's the roadmap for this series.\nThe first piece in this series discusses our unusual era, which could be very close to the transition between an Earth-bound civilization and a stable galaxy-wide one.\nFuture pieces will discuss how \"digital people\" - and/or advanced AI - could be key for this transition.\nThis piece explores a particularly important dynamic that could make either digital people or advanced AI lead to explosive productivity.\nI explore the simple question of how the world would change if people could be \"copied.\" I argue that this could lead to unprecedented economic growth and productivity. Later, I will describe how digital people or advanced AI could similarly cause a growth/productivity explosion.\nWhen some people imagine the future, they picture the kind of thing you see in sci-fi films. But these sci-fi futures seem very tame, compared to the future I expect.\nIn sci-fi, the future is different mostly via:\nShiny buildings, gadgets and holograms.\nRobots doing many of the things humans do today.\nAdvanced medicine.\nSouped up transportation, from hoverboards to flying cars to space travel and teleportation.\nBut fundamentally, there are the same kinds of people we see today, with the same kinds of personalities, goals, relationships and concerns.\nThe future I picture is enormously bigger, faster, weirder, and either much much better or much much worse compared to today. It's also potentially a lot sooner than sci-fi futures:1 I think particular, achievable-seeming technologies could get us there quickly.\nSuch technologies could include \"digital people\" or particular forms of advanced AI - each of which I'll discuss in a future piece. \nFor now, I want to focus on just one aspect of what these sorts of technology would allow: the ability to make instant copies of people (or of entities with similar capabilities). Economic theory - and history - suggest that this ability, alone, could lead to unprecedented (in history or in sci-fi movies) levels of economic growth and productivity. This is via a self-reinforcing feedback loop in which innovation leads to more productivity, which leads to more \"copies\" of people, who in turn create more innovation and further increase productivity, which in turn ...\nIn this post, instead of directly discussing digital people or advanced AI, I'm going to keep things relatively simple and discuss a different hypothetical technology: the Duplicator from Calvin & Hobbes, which simply copies people. \nHow the Duplicator works\nThe Duplicator is portrayed in this series of comics. Its key feature is making an instant copy of a person: Calvin walks in, and two identical Calvins walk out. \nThis is importantly different from the usual (and more realistic) version of \"cloning,\" in which a person's clone has the same DNA but has to start off as a baby and take years to become an adult.2\nTo flesh this out a bit, I'll assume that:\nThe Duplicator allows any person to quickly make a copy of themselves, which starts from the same condition and mental state or from an earlier state (for example, I could make a replica of \"Holden as of January 1, 2015\").3 Unlike in many sci-fi films, the copies function normally (they aren't evil or soulless or decaying or anything).\nIt can be used to make an unlimited number of copies, though each has some noticeable cost of production (they aren't free).4\nProductivity impacts\nIt seems that much of today's economy revolves around trying to make the most of \"scarce human capital.\" That is:\nSome people are \"scarce\" or \"in demand.\" Extreme examples include Barack Obama, Sundar Pichai, Beyonce Knowles and Jennifer Doudna.5 These people have some combination of skills, experience, knowledge, relationships, reputation, etc. that make it very hard for other people to do what they do. (Less extreme examples would be just about anyone who is playing a crucial role at an organization, hard to replace and often well paid.)\nThese people end up overbooked, with far more demands on their time than they can fulfill. Armies of other people end up devoted to saving their time and working around their schedules. \nThe Duplicator would remove these bottlenecks. For example:\nCopies of Sundar Pichai could work at all levels of Google, armed with their ability to communicate easily with the CEO and make decisions as he would. They could also start new companies.\nCopies of the President of the U.S. could personally meet with any voter who wanted to interview the President, as well as with any Congresspeople or potential appointees or advisors the President didn't have time to meet with. They could deeply study key domestic and international issues and report back to the \"original\" President. \nCopies of Beyonce could make as many albums as the market could support. They could deeply study and specialize in different musical genres. They could even try living different lifestyles to gain different life experiences, all of which could inform different albums that still all shared Beyonce's personal aesthetic and creativity. There would probably be at least one Beyonce copy whose music people considered better than the original's; that one could further copy herself.\nCopies of Jennifer Doudna could investigate any of the ideas and experiments the original doesn't have time to look into, as well as exploring the many fields she wasn't able to specialize in. There could be Jennifer Doudna copies in physics, chemistry and computer science as well as biology, each collaborating with many other Jennifer Doudna copies.\n(The ability to make copies for temporary purposes - and run them at different speeds - could further increase efficiency, as I'll discuss in a future piece about digital people.)\nExplosive growth\nOK, the Duplicator would make the economy more productive - but how much more productive?\nTo answer, I'm going to briefly summarize what one might call the \"Population growth is the bottleneck to explosive economic growth\" viewpoint.\nI would highly recommend reading more about this viewpoint at the following links, all of which I think are fascinating:\nThe Year The Singularity Was Cancelled (Slate Star Codex - reasonably accessible if you have basic familiarity with economic growth)\nModeling the Human Trajectory (Open Philanthropy's David Roodman - reasonably accessible blog post, linking to dense technical report)\nCould Advanced AI Drive Explosive Economic Growth? (Open Philanthropy's Tom Davidson - accessible blog post, linking to dense technical report)\nHere's my rough summary. \nIn standard economic models, the total size of the economy (its total output, i.e., how much \"stuff\" it creates) is a function of: \nHow much total \"labor\" (people doing work) there is in the economy; \nHow much \"capital\" (e.g., machines and energy sources - basically everything except labor) there is in the economy; \nHow high productivity is, i.e., how much stuff is created for a given amount of labor and capital. (This is sometimes called \"technology.\") \nThat is, the economy gets bigger when (a) there is more labor available, or (b) more capital (~everything other than labor) available, or when (c) productivity (\"output per unit of labor/capital\") increases.\nThe total population (number of people) affects both labor and productivity, because people can have ideas that increase productivity.\nOne way things could theoretically play out in an economy would be:\nThe economy starts with some set of resources (capital) supporting some set of people (population).\nThanks to María Gutiérrez Rojas for these graphics.\nThis set of people comes up with new ideas and innovations. \nThis leads to some amount of increased productivity, meaning there is more total economic output.6\nThis means people can afford to have more children. They do, and the population grows more quickly.\nBecause of that population growth, the economy comes up with new ideas and innovations faster than before (since more people means more new ideas).7\nThis leads to even more economic output and even faster population growth, in a self-reinforcing loop: more ideas → more output → more people → more ideas→ ....\nWhen you incorporate this full feedback loop into economic growth models,8 they predict that (under plausible assumptions) the world economy will see accelerating growth.9 \"Accelerating growth\" is a fairly \"explosive\" dynamic in which the economy can go from small to extremely large with disorienting speed.\nThe pattern of growth predicted by these models seems like a reasonably good fit with the data on the world economy over the last 5,000 years (see Modeling the Human Trajectory, though there is an open debate on this point; I discuss how the debate could change my conclusions here). However, over the last few hundred years, growth has not accelerated; it has been \"constant\" (a less explosive dynamic) at around today's level. \nWhy did accelerating growth transition to constant growth?\nThis change coincided with the demographic transition. In the demographic transition it stopped being the case that having more output -> having more children. Instead, more output just meant richer people, and people actually had fewer children as they became richer. This broke the self-reinforcing loop described above.\nThe demographic transition.\nRaising children is a massive investment (of time and personal energy, not just \"capital\"), and children take a long time to mature. By changing what it takes to grow the population, the Duplicator could restore the accelerating feedback loop.\nPeriod\nFeedback loop?\nPattern of growth\nBefore the demographic transition\n \nYes: more ideas → more output → more people → more ideas→ \n \nAccelerating growth (economy can go from small to large disorientingly quickly)\n \nSince the demographic transition\n \nNo: more ideas → more output → more richer people → more ideas→ \nConstant growth (less explosive)\n \nWith the Duplicator\n \nYes: more ideas → more output → more people → more ideas→ \n \nAccelerating growth\n \nThis figure from Could Advanced AI Drive Explosive Economic Growth? illustrates how the next decades might look different with steady exponential growth vs. accelerating growth:\nTo see more detailed (but simplified) example numbers demonstrating the explosive growth, see footnote.10\nIf we wanted to guess what a Duplicator might do in real life, we might imagine that it would get back to the kind of acceleration the world economy had historically, which loosely implies (based on Modeling the Human Trajectory) that the economy would reach infinite size sometime in the next century.11\nOf course, that can't happen - at some point the size of the economy would be limited by fundamental natural resources, such as the number of atoms or amount of energy available in the galaxy. But in between here and running out of space/atoms/energy/something, we could easily see levels of economic growth that are massively faster than anything in history. \nOver the last 100 years or so, the economy has doubled in size every few decades. With a Duplicator, it could double in size every year or month, on its way to hitting the limits.\nDepending on how things played out, such productivity could result in an end to scarcity and material need, or in a dystopian race between different people making as many copies of themselves as possible in the hopes of taking over the population. (Or many in-between and other scenarios.)\nConclusion\nI think the Duplicator would be a more powerful technology than warp drives, tricorders, laser guns12 or even teleporters. Minds are the source of innovation that can lead to all of those other things. So being cheaply able to duplicate them would be an extraordinary situation.\nA harder-to-intuit, but even more powerful, technology would be digital people, e.g., the ability to run detailed simulations of people13 on a computer. Such simulated people could be copied Duplicator-style, and could also be sped up, slowed down, and reset, with virtual environments that were fully controlled. \nI think that sort of technology is probably possible, and I expect a world with it to be even wilder than a world with the Duplicator. I'll elaborate on this in the next piece.\nNext in series: Digital People Would Be An Even Bigger Deal\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nFootnotes\n For example, Star Trek's Captain Kirk first takes over the Enterprise in the mid-2200s. I think we could easily see a much more advanced, changed world than that of Star Trek, before 2100. ↩\nExample. ↩\n This isn't quite how it works in the comic, but it's how it'll work here.  ↩\n The one in the comic burns out after a few copies, but that one's just a prototype. ↩\n Biologist who co-invented CRISPR and won a Nobel Prize in 2020. ↩\n Each idea doubled the amount of corn. ↩\n A faster-growing population doesn't necessarily mean faster technological advancement. There could be \"diminishing returns\": the first few ideas are easier to find than the next few, so even as the effort put into finding new ideas goes up, new ideas are found more slowly. (Are Ideas Getting Harder To Find? is a well-known paper on this topic.) More population = faster technological progress if the population is growing faster than the difficulty of finding new ideas is growing. This dynamic is portrayed in a simplified way in the graphic: initially people have ideas leading to doubling of corn output, but later the ideas only lead to a 1.5x'ing of corn output. ↩\n It's crucial to include the \"more output -> more people\" step, which is often not there by default, and doesn't describe today's world (but could describe a world with The Duplicator). It's standard for growth models to incorporate the other parts of the feedback loop: more people --> more ideas --> more output. ↩\n This claim is defended in detail in Could Advanced AI Drive Explosive Economic Growth? ↩\nWe'll start with this economy:\n100 people produce 100 units of resources (1 per person). For every 10 units of resources, they're able to create 1 more duplicate (this is just capturing the idea that duplicates are \"costly\" to create). And the 100 people have 5 new ideas, leading to 5% productivity growth.\nHere's year 2:\nNow each person produces 1.05 widgets instead of 1, thanks to the productivity growth. And there's another 5% productivity growth.\nThis dynamic takes some time to \"take off,\" but take off it does:\n The #NUM!'s at the bottom signify Google Sheets choking on the large numbers.\nMy spreadsheet includes a version with simply exponentially increasing population; that one goes on for ~1000 years without challenging Google Sheets. So the population dynamic is key here. ↩\n As noted above, there is an open debate on whether past economic growth actually follows the pattern described in Modeling the Human Trajectory. I discuss how the debate could change my conclusions here; I think there is a case either way for explosive growth this century.↩\n TBH, I've never been able to figure out why these are better than regular guns. ↩\n Or of some sort of entity that's properly described as a \"descendant\" of people, as I'll discuss in the piece on digital people. ↩\n", "url": "https://www.cold-takes.com/the-duplicator/", "title": "The Duplicator: Instant Cloning Would Make the World Economy Explode", "source": "cold.takes", "source_type": "blog", "date_published": "2021-07-20", "id": "668032ae1e7569731715dd512c6f1b11"} -{"text": "\nIf you've ever wanted to see someone painstakingly deconstruct a regression analysis and show all the subtle reasons it can generate wild, weird and completely wrong results, there is good stuff at Sabermetric Research - Phil Birnbaum. It's a sports blog, but sports knowledge isn't needed (knowledge of regression analysis generally is, if you want to follow all the details).\nBirnbaum's not exactly the only person to do takedowns of bad studies. But when Birnbaum notices that something is \"off,\" he doesn't just point it out and move on. He isn't satisfied with \"This conclusion is implausible\" or \"This conclusion isn't robust to sensitivity analysis.\" He digs all the way to the bottom to understand exactly how a study got its wrong result. His deconstructions of bad regressions are like four-star meals or masterful jazz solos ... I don't want to besmirch them by trying to explain them, so if you're into regression deconstructions you should just click through the links below. \n(I'm not going to explain what regression analysis is today, for which I apologize; if I ever do, I will link back to this post. It's very hard to explain it compactly and clearly, as you can see from Wikipedia's attempt, but it is VERY common in social science research. Kind of a bad combination IMO. If you hear \"This study shows [something about people],\" it's more likely than not that the study relies on regression analysis.)\nSome good (old) ones:\nEstimating whether Aaron Rodger's contract overpays or underpays him by making a scatterplot of pay and value-added with other quarterbacks and seeing whether he's above or below the regression line. The answer changes completely when you switch the x- and y-axes. Which one is right, and what exactly is wrong with the other one? (Birnbaum linked to this, but it's now dead and I am linking directly to an Internet Archive version. Birnbaum's \"solution\" is down in the comments, just search for his name.)\nDeconstructing an NBA time-zone regression: the key coefficients turn out to be literally meaningless.\nDo younger brothers steal more bases? Parts I, II, III although I think it's OK to skip part II (and Part I is short).\nThe OBP/SLG regression puzzle: parts I, II, III, IV, V. This one is very weedsy and you'll probably want to skip parts, though it's also kind of glorious to see just how doggedly he digs on every strange numerical result. He also makes an effort to explain what's going on for people who don't know baseball. The essence of the puzzle: OBP and SLG are both indicators of a team's performance, but when one regresses a team's performance on OBP and SLG, the ratio of the coefficients is pretty far off from what the \"true\" value for (value of OBP / value of SLG) is separately known to be. I think the issues here are extremely general, and not good news for the practice of treating regression coefficients as effect sizes.\nSubscribe Feedback\n", "url": "https://www.cold-takes.com/phil-birnbaums-regression-analysis/", "title": "Phil Birnbaum's \"bad regression\" puzzles", "source": "cold.takes", "source_type": "blog", "date_published": "2021-07-15", "id": "e05e8449855c932566a2ea1723cddb31"} -{"text": "\nThere’s way too much to read.\nIf I want to really understand a (nonfiction) book, I usually need to spend a lot of time with it. I generally spend at least as much time writing about it, and rereading it as I’m trying to summarize and dialogue with it, as I spend reading it in the first place. If I just moved my eyes over the words, I probably wouldn’t remember much other than the title. (More.)\nI can’t do this for, say, a book a week. I need to be selective about what I’m reading carefully, which means skimming most things to look for what’s worthwhile, and often carefully reading part of a piece instead of more quickly reading it all.\nAt the same time, I feel soft pressures in the other direction. There seem to be vague social norms that \"reading\" lots of things is a great virtue. And I rarely see someone saying \"I really enjoyed skimming ___ \" or \"Based on reading a few sections of ___ , I think ...\"\nIt seems that a lot of people report reading enormous numbers of books - and also, that people often don’t seem to know much about a book they’ve “read” other than the title.\n(I think similar points apply to articles and blog posts. From here I’ll lump nonfiction books, articles and blog posts together as “pieces.”)\nI’d like to see a different norm:\nIt should be normal and expected that people often do not read an entire piece, even when the piece is important to what they are saying.\nPeople should (a) try to find the parts of a piece that are most directly relevant to what they are interested in, and read those parts carefully; (b) get a lot of their information about a piece from critical reviews of and reactions to the piece (I believe this is a very efficient way to pinpoint the key debates around a piece and understand which parts are broadly accepted vs. disputed); (c) be open about what they’re doing. That is, they should say things like “According to [piece], ___ . Note that I haven’t read all of [piece]; I read sections ___ and ___ as well as critical reviews ___ and ___ .”\nPeople should, by default, frame their critiques of a piece as “things I think the piece does wrong, but I’m not sure since I haven’t carefully digested the whole thing.” Rather than as “Smackdowns of the piece.” This is a natural complement to admitting one hasn’t digested an entire piece, but I also think it would be a welcome adjustment across the board, since even people who have carefully read an entire piece can still easily miss key parts of it.\nAnd complementarily, authors should try to make life easy for readers who do not want to carefully read every word of their piece (at least, assuming it is more than a couple thousand words or so).\nThey should have easy-to-find sections of their piece that summarize and/or outline their arguments, with clear directions for which parts of the piece will give more detail on each point.\nThey shouldn’t force or expect readers to wade through all their prose to find a TL;DR on what they are arguing, what their main evidence is, why it matters, and what their responses to key objections are.\nWhen someone says “You have to read this piece, it really shows that ___ ,” and I find myself unable to see where and how the piece shows ___ without embarking on a 10,000-word journey, I close the tab and forget about the argument, and this seems like the right thing to do.\nI will try to follow these principles on this blog, both as a reader and writer. We’ll see how it goes.\nSubscribe Feedback\n", "url": "https://www.cold-takes.com/honesty-about-reading/", "title": "Honesty about reading", "source": "cold.takes", "source_type": "blog", "date_published": "2021-07-14", "id": "777199aff759cffaba52a7c67f868a6f"} -{"text": "\nAudio also available by searching Stitcher, Spotify, Google Podcasts, etc. for \"Cold Takes Audio\"\nToday’s world\nTransformative AI\nDigital people\nWorld of\nMisaligned AI\nWorld run by\nSomething else\nor\nor\nStable, galaxy-wide\ncivilization\nSummary:\nIn a series of posts starting with this one, I'm going to argue that the 21st century could see our civilization develop technologies allowing rapid expansion throughout our currently-empty galaxy. And thus, that this century could determine the entire future of the galaxy for tens of billions of years, or more.\nThis view seems \"wild\": we should be doing a double take at any view that we live in such a special time. I illustrate this with a timeline of the galaxy. (On a personal level, this \"wildness\" is probably the single biggest reason I was skeptical for many years of the arguments presented in this series. Such claims about the significance of the times we live in seem \"wild\" enough to be suspicious.)\nBut I don't think it's really possible to hold a non-\"wild\" view on this topic. I discuss alternatives to my view: a \"conservative\" view that thinks the technologies I'm describing are possible, but will take much longer than I think, and a \"skeptical\" view that thinks galaxy-scale expansion will never happen. Each of these views seems \"wild\" in its own way.\nUltimately, as hinted at by the Fermi paradox, it seems that our species is simply in a wild situation.\nBefore I continue, I should say that I don't think humanity (or some digital descendant of humanity) expanding throughout the galaxy would necessarily be a good thing - especially if this prevents other life forms from ever emerging. I think it's quite hard to have a confident view on whether this would be good or bad. I'd like to keep the focus on the idea that our situation is \"wild.\" I am not advocating excitement or glee at the prospect of expanding throughout the galaxy. I am advocating seriousness about the enormous potential stakes.\nMy view\nThis is the first in a series of pieces about the hypothesis that we live in the most important century for humanity. \nIn this series, I'm going to argue that there's a good chance of a productivity explosion by 2100, which could quickly lead to what one might call a \"technologically mature\"1 civilization. That would mean that:\nWe'd be able to start sending spacecraft throughout the galaxy and beyond. \nThese spacecraft could mine materials, build robots and computers, and construct very robust, long-lasting settlements on other planets, harnessing solar power from stars and supporting huge numbers of people (and/or our \"digital descendants\"). \nSee Eternity in Six Hours for a fascinating and short, though technical, discussion of what this might require.\n \nI'll also argue in future pieces (now available here and here) that there is a chance of \"value lock-in\": whoever is running the process of space expansion might be able to determine what sorts of people are in charge of the settlements and what sorts of societal values they have, in a way that is stable for many billions of years.2\nIf that ends up happening, you might think of the story of our galaxy3 like this. I've marked major milestones along the way from \"no life\" to \"intelligent life that builds its own computers and travels through space.\" \nThanks to Ludwig Schubert for the visualization. Many dates are highly approximate and/or judgment-prone and/or just pulled from Wikipedia (sources here), but plausible changes wouldn't change the big picture. The ~1.4 billion years to complete space expansion is based on the distance to the outer edge of the Milky Way, divided by the speed of a fast existing human-made spaceship (details in spreadsheet just linked); IMO this is likely to be a massive overestimate of how long it takes to expand throughout the whole galaxy. See footnote for why I didn't use a logarithmic axis.4??? That's crazy! According to me, there's a decent chance that we live at the very beginning of the tiny sliver of time during which the galaxy goes from nearly lifeless to largely populated. That out of a staggering number of persons who will ever exist, we're among the first. And that out of hundreds of billions of stars in our galaxy, ours will produce the beings that fill it.\nI know what you're thinking: \"The odds that we could live in such a significant time seem infinitesimal; the odds that Holden is having delusions of grandeur (on behalf of all of Earth, but still) seem far higher.\"5\nBut:\nThe \"conservative\" view\nLet's say you agree with me about where humanity could eventually be headed - that we will eventually have the technology to create robust, stable settlements throughout our galaxy and beyond. But you think it will take far longer than I'm saying.\nA key part of my view (which I'll write about more later) is that within this century, we could develop advanced enough AI to start a productivity explosion. Say you don't believe that. \nYou think I'm underrating the fundamental limits of AI systems to date. \nYou think we will need an enormous number of new scientific breakthroughs to build AIs that truly reason as effectively as humans. \nAnd even once we do, expanding throughout the galaxy will be a longer road still. \nYou don't think any of this is happening this century - you think, instead, that it will take something like 500 years. That's 5-10x the time that has passed since we started building computers. It's more time than has passed since Isaac Newton made the first credible attempt at laws of physics. It's about as much time has passed since the very start of the Scientific Revolution.\nActually, no, let's go even more conservative. You think our economic and scientific progress will stagnate. Today's civilizations will crumble, and many more civilizations will fall and rise. Sure, we'll eventually get the ability to expand throughout the galaxy. But it will take 100,000 years. That's 10x the amount of time that has passed since human civilization began in the Levant.\nHere's your version of the timeline:\nThe difference between your timeline and mine isn't even a pixel, so it doesn't show up on the chart. In the scheme of things, this \"conservative\" view and my view are the same.\nIt's true that the \"conservative\" view doesn't have the same urgency for our generation in particular. But it still places us among a tiny proportion of people in an incredibly significant time period. And it still raises questions of whether the things we do to make the world better - even if they only have a tiny flow-through to the world 100,000 years from now - could be amplified to a galactic-historical-outlier degree.\nThe skeptical view\nThe \"skeptical view\" would essentially be that humanity (or some descendant of humanity, including a digital one) will never spread throughout the galaxy. There are many reasons it might not:\nMaybe something about space travel - and/or setting up mining robots, solar panels, etc. on other planets - is effectively impossible such that even another 100,000 years of human civilization won't reach that point.6\nOr perhaps for some reason, it will be technologically feasible, but it won't happen (because nobody wants to do it, because those who don't want to block those who do, etc.)\nMaybe it's possible to expand throughout the galaxy, but not possible to maintain a presence on many planets for billions of years, for some reason.\nMaybe humanity is destined to destroy itself before it reaches this stage. \nBut note that if the way we destroy ourselves is via misaligned AI,7 it would be possible for AI to build its own technology and spread throughout the galaxy, which still seems in line with the spirit of the above sections. In fact, it highlights that how we handle AI this century could have ramifications for many billions of years. So humanity would have to go extinct in some way that leaves no other intelligent life (or intelligent machines) behind.\nMaybe an extraterrestrial species will spread throughout the galaxy before we do (or around the same time). \nHowever, note that this doesn't seem to have happened in ~13.77 billion years so far since the universe began, and according to the above sections, there's only about 1.5 billion years left for it to happen before we spread throughout the galaxy.\nMaybe some extraterrestrial species already effectively has spread throughout our galaxy, and for some reason we just don't see them. Maybe they are hiding their presence deliberately, for one reason or another, while being ready to stop us from spreading too far. \nThis would imply that they are choosing not to mine energy from any of the stars we can see, at least not in a way that we could see it. That would, in turn, imply that they're abstaining from mining a very large amount of energy that they could use to do whatever it is they want to do,8 including defend themselves against species like ours.\nMaybe this is all a dream. Or a simulation.\nMaybe something else I'm not thinking of.\nThat's a fair number of possibilities, though many seem quite \"wild\" in their own way. Collectively, I'd say they add up to more than 50% probability ... but I would feel very weird claiming they're collectively overwhelmingly likely.\nUltimately, it's very hard for me to see a case against thinking something like this is at least reasonably likely: \"We will eventually create robust, stable settlements throughout our galaxy and beyond.\" It seems like saying \"no way\" to that statement would itself require \"wild\" confidence in something about the limits of technology, and/or long-run choices people will make, and/or the inevitability of human extinction, and/or something about aliens or simulations.\nI imagine this claim will be intuitive to many readers, but not all. Defending it in depth is not on my agenda at the moment, but I'll rethink that if I get enough demand.\nWhy all possible views are wild: the Fermi paradox\nI'm claiming that it would be \"wild\" to think we're basically assured of never spreading throughout the galaxy, but also that it's \"wild\" to think that we have a decent chance of spreading throughout the galaxy. \nIn other words, I'm calling every possible belief on this topic \"wild.\" That's because I think we're in a wild situation.\nHere are some alternative situations we could have found ourselves in, that I wouldn't consider so wild:\nWe could live in a mostly-populated galaxy, whether by our species or by a number of extraterrestrial species. We would be in some densely populated region of space, surrounded by populated planets. Perhaps we would read up on the history of our civilization. We would know (from history and from a lack of empty stars) that we weren't unusually early life-forms with unusual opportunities ahead.\nWe could live in a world where the kind of technologies I've been discussing didn't seem like they'd ever be possible. We wouldn't have any hope of doing space travel, or successfully studying our own brains or building our own computers. Perhaps we could somehow detect life on other planets, but if we did, we'd see them having an equal lack of that sort of technology.\nBut space expansion seems feasible, and our galaxy is empty. These two things seem in tension. A similar tension - the question of why we see no signs of extraterrestrials, despite the galaxy having so many possible stars they could emerge from - is often discussed under the heading of the Fermi Paradox.\nWikipedia has a list of possible resolutions of the Fermi paradox. Many correspond to the skeptical view possibilities I list above. Some seem less relevant to this piece. (For example, there are various reasons extraterrestrials might be present but not detected. But I think any world in which extraterrestrials don't prevent our species from galaxy-scale expansion ends up \"wild,\" even if the extraterrestrials are there.)\nMy current sense is that the best analysis of the Fermi Paradox available today favors the explanation that intelligent life is extremely rare: something about the appearance of life in the first place, or the evolution of brains, is so unlikely that it hasn't happened in many (or any) other parts of the galaxy.9\nThat would imply that the hardest, most unlikely steps on the road to galaxy-scale expansion are the steps our species has already taken. And that, in turn, implies that we live in a strange time: extremely early in the history of an extremely unusual star.\nIf we started finding signs of intelligent life elsewhere in the galaxy, I'd consider that a big update away from my current \"wild\" view. It would imply that whatever has stopped other species from galaxy-wide expansion will also stop us.\nThis pale blue dot could be an awfully big deal\nDescribing Earth as a tiny dot in a photo from space, Ann Druyan and Carl Sagan wrote:\nThe Earth is a very small stage in a vast cosmic arena. Think of the rivers of blood spilled by all those generals and emperors so that, in glory and triumph, they could become the momentary masters of a fraction of a dot ... Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light ... It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world.\nThis is a somewhat common sentiment - that when you pull back and think of our lives in the context of billions of years and billions of stars, you see how insignificant all the things we care about today really are.\nBut here I'm making the opposite point.\nIt looks for all the world as though our \"tiny dot\" has a real shot at being the origin of a galaxy-scale civilization. It seems absurd, even delusional to believe in this possibility. But given our observations, it seems equally strange to dismiss it. \nAnd if that's right, the choices made in the next 100,000 years - or even this century - could determine whether that galaxy-scale civilization comes to exist, and what values it has, across billions of stars and billions of years to come.\nSo when I look up at the vast expanse of space, I don't think to myself, \"Ah, in the end none of this matters.\" I think: \"Well, some of what we do probably doesn't matter. But some of what we do might matter more than anything ever will again. ...It would be really good if we could keep our eye on the ball. ...[gulp]\"\nNext in series: The Duplicator\nSubscribe Feedback Forum\nUse \"Feedback\" if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) Use \"Forum\" if you want to discuss this post publicly on the Effective Altruism Forum.\nNotes\n or Kardashev Type III. ↩\n If we are able to create mind uploads, or detailed computer simulations of people that are as conscious as we are, it could be possible to put them in virtual environments that automatically reset, or otherwise \"correct\" the environment, whenever the society would otherwise change in certain ways (for example, if a certain religion became dominant or lost dominance). This could give the designers of these \"virtual environments\" the ability to \"lock in\" particular religions, rulers, etc. I'll discuss this more in future pieces (now available here and here). ↩\n I've focused on the \"galaxy\" somewhat arbitrarily. Spreading throughout all of the accessible universe would take a lot longer than spreading throughout the galaxy, and until we do it's still imaginable that some species from outside our galaxy will disrupt the \"stable galaxy-scale civilization,\" but I think accounting for this correctly would add a fair amount of complexity without changing the big picture. I may address that in some future piece, though. ↩\n A logarithmic version doesn't look any less weird, because the distances between the \"middle\" milestones are tiny compared to both the stretches of time before and after these milestones. More fundamentally, I'm talking about how remarkable it is to be in the most important [small number] of years out of [big number] of years - that's best displayed using a linear axis. It's often the case that weird-looking charts look more reasonable with logarithmic axes, but in this case I think the chart looks weird because the situation is weird. Probably the least weird-looking version of this chart would have the x-axis be something like the logged distance from the year 2100, but that would be a heck of a premise for a chart - it would basically bake in my argument that this appears to be a very special time period. ↩\nThis is exactly the kind of thought that kept me skeptical for many years of the arguments I'll be laying out in the rest of this series about the potential impacts, and timing, of advanced technologies. Grappling directly with how \"wild\" our situation seems to ~undeniably be has been key for me. ↩\n Spreading throughout the galaxy would certainly be harder if nothing like mind uploading (which I discuss in a separate piece, and which is part of why I think future space settlements could have \"value lock-in\" as discussed above) can ever be done. I would find a view that \"mind uploading is impossible\" to be \"wild\" in its own way, because it implies that human brains are so special that there is simply no way, ever, to digitally replicate what they're doing. (Thanks to David Roodman for this point.) ↩\n That is, advanced AI that pursues objectives of its own, which aren't compatible with human existence. I'll be writing more about this idea. Existing discussions of it include the books Superintelligence, Human Compatible, Life 3.0, and The Alignment Problem. The shortest, most accessible presentation I know of is The case for taking AI seriously as a threat to humanity (Vox article by Kelsey Piper). This report on existential risk from power-seeking AI, by Open Philanthropy's Joe Carlsmith, lays out a detailed set of premises that would collectively imply the problem is a serious one. ↩\n Thanks to Carl Shulman for this point. ↩\n See https://arxiv.org/pdf/1806.02404.pdf  ↩\n", "url": "https://www.cold-takes.com/all-possible-views-about-humanitys-future-are-wild/", "title": "All Possible Views About Humanity's Future Are Wild", "source": "cold.takes", "source_type": "blog", "date_published": "2021-07-13", "id": "9c273234466efec65bab9b1d00795d01"} -{"text": "\nWelcome to Cold Takes, where it's always fine to read the post later!\nAbout\nMost of the posts on this blog are written at least a month before they're posted, sometimes much longer. I try to post things that are worth posting even so, hence the name \"Cold Takes.\"\nAs of this writing (July 2021), likely initial themes include:\nFuturism (the next 10,000+ years). What can we say and do today about the very long-run future of the world?\nQuantitative macrohistory (the last 250-10,000 years). What can we say about very long-run trends in things like quality of life and pace of innovation?\nApplied epistemology. How should one decide what to believe, especially when (is as usually the case for important topics) one doesn't have time to digest even 1% of the relevant knowledge and arguments?\nApplied ethics of donations, career choice, etc. What ethical principles should we use to  consider actions like where to donate and what job to take (where there are no clear moral \"rules,\" and instead a question of how to do more good vs. less good)?\nLinks I feel like sharing, often quite old, because they seem important and/or just fun.\nA general theme of this blog is what I sometimes call avant-garde effective altruism. Effective altruism (EA) is the idea of doing as much good as possible. If EA were jazz, giving to effective charities working on global health would be Louis Armstrong - acclaimed and respected by all, and where most people start. But people who are really obsessed with jazz also tend to like stuff that (to other people) barely even sounds like music, and lifelong obsessive EAs are into causes and topics that are not the first association you'd have with \"doing good.\" This blog will often be about the latter.\nI am the co-CEO of Open Philanthropy and co-founder of GiveWell, but all opinions are my own.\nUpcoming series\nHere are the main series I'm planning so far. As I put things up, I will update the Themes/Highlights page to link to the pieces that have received the most \"likes.\" (Note, though, that I'm the only person with access to the \"like\" button for this blog.)\nMost Important Century series. A series of ~10 pieces laying out the case that the 21st century has a high probability of being the \"most important\" for humanity via the development of transformative AI. This includes discussions of why it looks like we might soon (in the next few decades) develop AI that can dramatically accelerate scientific and technological development, and discussions of why that in turn could lead to a radically different world (in particular, via mind uploading).\nHas the world improved over time? A series on whether life has gotten better over time and how we should think about the \"state of nature.\"\nSearching for Atlantis. A series asking mostly, \"Is the world doing something wrong, compared to the past, w/r/t innovation?\" I emphasize \"doing something wrong,\" not \"slowing down\" - most discussions online are explicitly about whether innovation is slowing down, but I want to address whether this is about something \"going wrong\" or just inevitable. \"Atlantis\" refers to the idea of a past, advanced civilization, now lost.\nApplied epistemology. Some posts about \"how to decide what to believe.\" These  will include my take on the pros and cons of \"explicit Bayesian reasoning\" (literally writing down probabilities and values for use in decision making), on how to balance \"thinking for oneself\" with trusting experts and others, and on how far one should take \"self-skepticism\" (doubting one's own beliefs).\nUtilitarian ethics. I'm going to lay out the best case I know for a \"hardcore utilitarian\" approach to ethics - the basic philosophy-based case for \"a large # of small benefits to people can outweigh even a huge harm,\" \"helping enough animals could be better than helping humans,\" and \"reducing risk of extinction, leading to more people getting to exist, could be better than either of those.\" I'm also going to talk about the weaknesses in this case.\nSubscribe Feedback\n", "url": "https://www.cold-takes.com/first-post/", "title": "First Post", "source": "cold.takes", "source_type": "blog", "date_published": "2021-07-13", "id": "ed216450b3a6f94d91844f9e3fcc5ae8"}