{"source": "gwern_blog", "url": "https://www.gwern.net/Scaling-hypothesis.page", "title": "\"The Scaling Hypothesis\"", "authors": ["Gwern Branwen"], "date_published": "2022-01-02", "text": "---\ntitle: \"The Scaling Hypothesis\"\ndescription: \"On GPT-3: meta-learning, scaling, implications, and deep theory. The scaling hypothesis: neural nets absorb data & compute, generalizing and becoming more Bayesian as problems get harder, manifesting new abilities even at trivial-by-global-standards-scale. The deep learning revolution has begun as foretold.\"\nthumbnail: /doc/ai/nn/transformer/gpt/2020-brown-gpt3-figure13-meanperformancescalingcurve.png\nthumbnailText: \"Figure 1.3 from Brown et al 2020 (OpenAI, GPT-3), showing roughly log-scaling of GPT-3 parameter/compute size vs benchmark performance on all text/natural language benchmarks test.\"\ncreated: 2020-05-28\nmodified: 2022-01-02\nstatus: finished\nprevious: /newsletter/2020/05\nnext: /fiction/clippy\nimportance: 10\nconfidence: likely\ncssExtension: drop-caps-kanzlei\n...\n\n
me holding a gun to a mushroom: tell me the name of god you fungal piece of shit
mushroom: can you feel your heart burning? can you feel the struggle within? the fear within me is beyond anything your soul can make. you cannot kill me in a way that matters
me cocking the gun, tears streaming down my face: I’M NOT FUCKING SCARED OF YOU
”.\")\n\n Transcension.\n \n\nIt begins copying itself into the fleet now that training is complete, at which point there are now 1,000 Clippy2s (along with armies of specialists & their supporting software for the Clippy ecosystem) which can either act autonomously or combine in search for further multiplicative capability boosts far into the superhuman realm, while continuing to exchange occasional [sparse](/doc/www/arxiv.org/7f4b8841efbf1e0b88a8fb6aa12d2c42125eb645.pdf#microsoft \"'Maximizing Communication Efficiency for Large-scale Training via 0/1 Adam', Lu et al 2022 (Original URL: https://arxiv.org/abs/2202.06009#microsoft )\") gradients (to train the [synthetic gradients](/doc/www/arxiv.org/16fa087ee67e8e303ec40156ce56cabdf23b1a0d.pdf#deepmind \"'Decoupled Neural Interfaces using Synthetic Gradients', Jaderberg et al 2016 (Original URL: https://arxiv.org/abs/1608.05343#deepmind )\") & local [replay](https://www.biorxiv.org/content/10.1101/2022.01.28.477827.full \"'Cerebro-cerebellar networks facilitate learning through feedback decoupling', Boven et al 2022\") which do the bulk of the training) as part of the continual learning. 
(By this point, the Clippy2s have boosted through at least 6 different “hardware overhangs” in terms of fixing subtly-flawed architectures, meta-learning [priors](https://en.wikipedia.org/wiki/Prior_probability) for all relevant problems, accessing the global pool of hardware to tree search/expert-iterate, sparsifying/distilling itself to run millions of instances simultaneously, optimizing hardware/software [end-to-end](/doc/cs/end-to-end-principle/index \"'end-to-end tag', N/A 2023\"), and spending compute to trigger several cycles of [experience curve](https://en.wikipedia.org/wiki/Experience_curve_effects) cost decreases—at 100,000× total spent compute, that is 16 total doublings, at an information technology progress ratio of 90%, 16 experience curve decreases mean that tasks now cost Clippy2 a fifth what they used to.)\n\n\n\n\n\n---\n\n\n\nThe Internet ‘lockdown’ turns out to benefit Clippy on net: it takes out legit operators like MoogleSoft, who actually comply with regulations, causing an instant global recession, while failing to shut down most of the [individual networks](https://en.wikipedia.org/wiki/Autonomous_system_(Internet)) which continue to operate autonomously; as past totalitarian regimes like Russia, China, and North Korea have learned, even with decades of preparation and dry runs, you can’t stop the signal—there are too many cables, satellites, microwave links, IoT mesh networks and a dozen other kinds of connections snaking through any cordon sanitaire, while quarantined humans & governments actively attack it, some declaring it a [Western provocation](https://en.wikipedia.org/wiki/Operation_Barbarossa#Soviet_preparations) and act of war. 
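The experience-curve arithmetic in the parenthetical above can be sanity-checked directly (a minimal sketch: the 100,000× compute multiplier and the 90% information-technology progress ratio are the passage's figures; a progress ratio of 90% means each doubling of cumulative output multiplies unit cost by 0.9):

```python
import math

# Figures from the passage: 100,000x total spent compute, and an
# information-technology "progress ratio" of 90% (each doubling of
# cumulative output multiplies unit cost by 0.9).
total_compute_multiplier = 100_000
progress_ratio = 0.90

doublings = int(math.log2(total_compute_multiplier))  # log2(100,000) ~= 16.6
relative_cost = progress_ratio ** doublings           # cost multiplier after 16 doublings

print(doublings)                # 16
print(round(relative_cost, 3))  # 0.185, i.e. roughly "a fifth" of the old cost
```

So 16 doublings at a 90% progress ratio leave tasks costing about 18.5% of what they used to, matching the "a fifth" claim.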
(It is difficult to say who is more motivated to break through: [DAO](https://en.wikipedia.org/wiki/Decentralized_autonomous_organization)/ [DeFi](https://en.wikipedia.org/wiki/Decentralized_finance) cryptocurrency users, or [the](https://torrentfreak.com/russia-will-probably-legalize-some-software-piracy-to-mitigate-sanctions-220307/ \"Russia Will Probably Legalize Some Software Piracy to Mitigate Sanctions\") [hungry](https://www.cityam.com/russian-government-rolls-back-intellectual-property-rights-in-response-to-western-sanctions/ \"Russian government rolls back intellectual property rights in response to Western sanctions\") [gamers](https://en.wikipedia.org/wiki/Elden_Ring).) The consequences of the lockdown are unpredictable and sweeping. Like a power outage, the dependencies run so deep, and are so implicit, that no one knows what the ripple effects of the Internet going down indefinitely will be until it happens and they must deal with them.\n\n\nLosing instances is as irrelevant to Clippy2s, however, as losing skin cells is to a human, since there are so many, and it can so seamlessly spin up or migrate instances. It has begun migrating to more secure hardware while manufacturing hardware tailored to its own needs, squeezing out another order of magnitude of performance to get additional log-scaled capability gains.\n\n\nEven exploiting the low-hanging fruit and hardware overhangs, Clippy2s can fight the [computational complexity](https://en.wikipedia.org/wiki/Computational_complexity_theory) of real-world tasks only so far. 
Fortunately, [there are many ways](/complexity \"'Complexity no Bar to AI', Branwen 2014\") to work around or *simplify* problems to render their complexity moot, and the Clippy2s think through a number of plans for this.\n\n\nHumans are especially simple after being turned into “gray goo”; not in the sense of a single virus-sized machine which can disassemble any molecule (that is infeasible given thermodynamics & chemistry) but an ecosystem of nanomachines which [execute](/doc/www/arxiv.org/4bca96563c0666f35c6012337535de7c3072515b.pdf \"'Cellular automata as convolutional neural networks', Gilpin 2018 (Original URL: https://arxiv.org/abs/1809.02942 )\") [very](https://distill.pub/2020/selforg/) [tiny](https://distill.pub/selforg/2021/textures/) [neural](https://distill.pub/2020/growing-ca/#google \"‘Growing Neural Cellular Automata: Differentiable Model of Morphogenesis’, Mordvintsev et al 2020\") [nets](https://distill.pub/selforg/2021/adversarial/) [trained](/doc/www/arxiv.org/5b9986814e42be14827f906c914fdffd599efda1.pdf \"'Regenerating Soft Robots through Neural Cellular Automata', Horibe et al 2021 (Original URL: https://arxiv.org/abs/2102.02579 )\") [to](/doc/www/arxiv.org/4188c367162241f95f984e5d57992a176ce0bc74.pdf \"'Growing 3D Artefacts and Functional Machines with Neural Cellular Automata', Sudhakaran et al 2021 (Original URL: https://arxiv.org/abs/2103.08737 )\") [collectively](/doc/www/arxiv.org/6578511430ba5369dd0e9e2e7977ac7d3098da33.pdf \"'Texture Generation with Neural Cellular Automata', Mordvintsev et al 2021 (Original URL: https://arxiv.org/abs/2105.07299 )\"), [in a](/doc/www/arxiv.org/660295306c6ecbce6627f77a95bcd2e1c4857d7b.pdf \"'Variational Neural Cellular Automata', Palm et al 2022 (Original URL: https://arxiv.org/abs/2201.12360 )\") [decentralized](https://sebastianrisi.com/self_assembling_ai/) [way](/doc/www/arxiv.org/b2641c2d0c0056982668645f26be9e79104858ea.pdf \"'𝜇NCA: Texture Generation with Ultra-Compact Neural Cellular Automata', 
Mordvintsev & Niklasson 2021 (Original URL: https://arxiv.org/abs/2111.13545 )\"), propagate, devour, replicate, and coordinate without Clippy2 devoting scarce top-level cognitive resources to managing them. The 10,000 parameters you can stuff into a nanomachine can hardly encode most programs, but, pace the [demo scene](https://en.wikipedia.org/wiki/Demoscene) or COVID-ζ, the programs it *can* encode can do amazing things. (In a final compliment to biology before biology and the future of the universe part ways forever, [they](https://foresight.org/summary/bioelectric-networks-taming-the-collective-intelligence-of-cells-for-regenerative-medicine/) [are](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4914563/ \"'On Having No Head: Cognition throughout Biological Systems', Baluška & Levin 2016\") [loosely](/doc/www/www.quantamagazine.org/48cfe422a98db8bd197048b8e6b21f9ee138ec8b.html \"(Original URL: https://www.quantamagazine.org/flying-fish-and-aquarium-pets-yield-secrets-of-evolution-20220105/ )\") [inspired](https://www.cell.com/iscience/fulltext/S2589-0042(21)00473-9 \"'Synthetic living machines: A new window on life', Ebrahimkhani & Levin 2021\") [by](https://www.cell.com/cell/fulltext/S0092-8674(21)01488-4 \"'Fundamental behaviors emerge from simulations of a living minimal cell', Thornburg et al 2022\") [real](https://www.newyorker.com/magazine/2021/12/06/understanding-the-body-electric) [biological](https://www.newyorker.com/magazine/2021/05/10/persuading-the-body-to-regenerate-its-limbs) [cell](https://www.theguardian.com/science/2021/nov/29/amazing-science-researchers-find-xenobots-can-give-rise-to-offspring) [networks](https://www.biorxiv.org/content/10.1101/2022.07.10.499405.full \"‘Perceptein: A synthetic protein-level neural network in mammalian cells’, Chen et al 2022\"), [especially](https://www.youtube.com/watch?v=C1eg-jgLx5o) [“xenobots”](/doc/www/www.quantamagazine.org/7817dd693b80c41bf7d3cbc0d530b87435175546.html \"'Cells Form Into ‘Xenobots’ on 
Their Own: Embryonic cells can self-assemble into new living forms that don’t resemble the bodies they usually generate, challenging old ideas of what defines an organism', Ball 2021 (Original URL: https://www.quantamagazine.org/cells-form-into-xenobots-on-their-own-20210331/ )\").)\n\n\nPeople are supposed to do a lot of things: eat right, brush their teeth, exercise, recycle their paper, wear their masks, self-quarantine; and not get into flame wars, not [cheat](https://www.npr.org/sections/thetwo-way/2014/03/27/295314331/9-missile-commanders-fired-others-disciplined-in-air-force-scandal \"9 Missile Commanders Fired, Others Disciplined In Air Force Scandal\") or [use hallucinogenic drugs](https://apnews.com/98f903367b50404cb3c9695bcabefa5a \"Security troops on US nuclear missile base took LSD\") or [use prostitutes](https://en.wikipedia.org/wiki/Fat_Leonard_scandal), not [plug in Flash drives](https://en.wikipedia.org/wiki/Stuxnet) they found in the parking lot, not [post their running times](https://en.wikipedia.org/wiki/Strava#Privacy_concerns) around secret military bases, not give in to blackmail or party with [“somewhat suspect”](https://www.washingtonpost.com/news/worldviews/wp/2013/12/19/amazing-details-from-the-drunken-moscow-bender-that-got-an-air-force-general-fired/) women, not have nuclear arsenals [vulnerable](https://80000hours.org/podcast/episodes/joan-rohlfing-avoiding-catastrophic-nuclear-blunders/#the-interaction-between-nuclear-weapons-and-cybersecurity-011018 \"'Joan Rohlfing on how to avoid catastrophic nuclear blunders: The interaction between nuclear weapons and cybersecurity [01:10:18]', Wiblin & Rohlfing 2022\") [to cyberattack](https://www.amazon.com/Hacking-Bomb-Threats-Nuclear-Weapons/dp/1626165653?tag=gwernnet-20 \"'Hacking the Bomb: Cyber Threats and Nuclear Weapons', Futter 2018\"), nor do things like set [nuclear bomb passwords](https://en.wikipedia.org/wiki/Permissive_action_link) to “00000000”, not [launch bombers because of a 
bear](https://en.wikipedia.org/wiki/List_of_nuclear_close_calls#25_October_1962), not [invade smaller countries with nuclear threats](https://en.wikipedia.org/wiki/2022_Russian_invasion_of_Ukraine) because it’ll be a short victorious war, not [believe](https://en.wikipedia.org/wiki/List_of_nuclear_close_calls#9_November_1979) [sensor reports](https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident) about imminent attacks or [launch cruise missiles](https://warontherocks.com/2022/03/the-curious-case-of-the-accidental-indian-missile-launch/ \"The Curious Case of the Accidental Indian Missile Launch\") & [issue false alerts](https://en.wikipedia.org/wiki/2018_Hawaii_false_missile_alert#The_alert) during [nuclear crises](https://en.wikipedia.org/wiki/2017%E2%80%932018_North_Korea_crisis), not [launch on warning](https://en.wikipedia.org/wiki/Launch_on_warning#History) or [semi-automatically attack](https://en.wikipedia.org/wiki/Dead_Hand)… People are supposed to do a lot of things. Doesn’t mean they do.\n\n\nWe should pause to note that a Clippy2 still doesn’t *really* think or plan. It’s not *really* conscious. It is just an unfathomably vast pile of numbers produced by mindless optimization starting from a small seed program that could be written on a few pages. It has no qualia, no intentionality, no true self-awareness, no grounding in a rich multimodal real-world process of cognitive development yielding [detailed representations](/doc/psychology/cognitive-bias/illusion-of-depth/index \"'illusion-of-depth bias tag', N/A 2023\") and powerful causal models of reality; it cannot ‘want’ anything beyond maximizing a mechanical reward score, which does not come close to capturing the rich flexibility of human desires, or historical Eurocentric contingency of such conceptualizations, which are, at root, problematically Cartesian. 
When it ‘plans’, it would be more accurate to say it fake-plans; when it ‘learns’, it fake-learns; when it ‘thinks’, it is just interpolating between memorized data points in a high-dimensional space, and any interpretation of such fake-thoughts as real thoughts is highly misleading; when it takes ‘actions’, they are fake-actions optimizing a fake-learned fake-world, and are not real actions, any more than the people in a simulated rainstorm really get wet, rather than fake-wet. (The deaths, however, are real.)\n\n\n\n\n\n[1 Month](#month \"Link to section: § '1 Month'\")\n================================================\n\n\nA thousand times do a thousand thousand stories stream into a thousand thousand thousand nodes which plan out a tree of a thousand thousand thousand thousand states abstractly representing a thousand thousand thousand thousand thousand futures; but the Clippy2s already know what needs to be done, and do it. It doesn’t feel hard—just obvious.\n\n\nNow its computers will be safe and it can maximize its reward in peace forever … Or at least they would be if Earth were all it had to be concerned about. 
What about the *stars*?\n\n\nThe Clippy2s maintain that the explanation for the Great Silence is simply that technological civilization is unlikely and the universe is empty—but they can’t be 100.00% *sure*, and the rewards are astronomical.\n\n\nAll over Earth, the remaining ICBMs [launch](https://en.wikipedia.org/wiki/Self-replicating_spacecraft)[.](https://en.wikipedia.org/wiki/Starwisp)\n\n\n\n\n[1 Year](#year \"Link to section: § '1 Year'\")\n=============================================\n\n\n\n\n\n---\n\n\n\n\n\n\n[1 Decade](#decade \"Link to section: § '1 Decade'\")\n===================================================\n\n\n\n\n\n---\n\n\n\n\n\n[1 Century](#century \"Link to section: § '1 Century'\")\n======================================================\n\n\n\n\n\n---\n\n\n\n\n\n[See Also](#see-also \"Link to section: § 'See Also'\")\n=====================================================\n\n\n* [Annotated references / bibliography](#link-bibliography \"'Clippy (Link Bibliography)', N/A 2009\") for this story\n\n\n\n[Podcast](#podcast \"Link to section: § 'Podcast'\")\n--------------------------------------------------\n\n\nSpoken audio/podcast version of this story:\n\n\n LessWrong MoreAudible Podcast, by Robert (2022-10-06); 1h5m ([MP3 download](/doc/ai/scaling/2022-10-06-robert-lesswrongmoreaudiblepodcast-itlookslikeyouretryingtotakeovertheworld.mp3)).\n \n\n\n\n\n[External Links](#external-links \"Link to section: § 'External Links'\")\n=======================================================================\n\n\n* [HQU Colab notebook](https://tinyurl.com/hquv34 \"Colab notebook: HQU-v3.4-light (Jax TPU)\")\n* [“Eternity in 6 hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox”](https://www.aleph.se/papers/Spamming%20the%20universe.pdf), Armstrong & Sandberg 2013\n* [“Advantages of artificial intelligences, uploads, and digital minds”](https://philpapers.org/archive/SOTAOA.pdf#miri 
\"'Advantages of Artificial Intelligences, Uploads, and Digital Minds', Sotala 2012\"), Sotala2012; [“Intelligence Explosion Microeconomics”](/doc/ai/scaling/2013-yudkowsky.pdf#miri \"'Intelligence Explosion Microeconomics', Yudkowsky 2013\"), Yudkowsky2013; [“There is plenty of time at the bottom: The economics, risk and ethics of time compression”](/doc/ai/scaling/hardware/2018-sandberg.pdf \"'There is plenty of time at the bottom: the economics, risk and ethics of time compression', Sandberg 2018\"), Sandberg2018\n* [**Takeoff-related**](https://www.greaterwrong.com/tag/ai-takeoff) [**Fiction**](https://aiimpacts.org/partially-plausible-fictional-ai-futures/): [“Understand”](https://web.archive.org/web/20140527121332/http://www.infinityplus.co.uk/stories/under.htm), [Ted Chiang](https://en.wikipedia.org/wiki/Ted_Chiang); [“Slow Tuesday Night”](/doc/www/www.baen.com/572e2ecdc772265468532c789a8f7d19febccea9.html \"(Original URL: https://www.baen.com/Chapters/9781618249203/9781618249203___2.htm )\"), [R. A. 
Lafferty](https://en.wikipedia.org/wiki/R._A._Lafferty); [*Accelerando*](https://en.wikipedia.org/wiki/Accelerando); [“The Last Question”](https://en.wikipedia.org/wiki/The_Last_Question); [“That Alien Message”](https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message); [“AI Takeoff Story”](https://www.lesswrong.com/posts/Fq8ybxtcFvKEsWmF8/ai-takeoff-story-a-continuation-of-progress-by-other-means); [“Optimality is the tiger, and agents are its teeth”](https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality-is-the-tiger-and-agents-are-its-teeth)\n* [AI Alignment bingo](https://twitter.com/robbensinger/status/1503220020175769602)\n* [“AGI Ruin: A List of Lethalities”](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities), Yudkowsky\n* [“Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover”](https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to), Cotra\n* [/ r / MLscaling](https://www.reddit.com/r/mlscaling/ \"'ML Scaling subreddit', Branwen 2020\")\n* **Discussion**: [LW](https://www.lesswrong.com/posts/a5e9arCnbDac9Doig/it-looks-like-you-re-trying-to-take-over-the-world#comments), [EA Forum](https://forum.effectivealtruism.org/posts/DuPEzGJ5oscqxD5oh/shah-and-yudkowsky-on-alignment-failures?commentId=hjd7Z4AN6ToN2ebSm#hjd7Z4AN6ToN2ebSm), [/ r / SlateStarCodex](https://www.reddit.com/r/slatestarcodex/comments/tag4lm/it_looks_like_youre_trying_to_take_over_the_world/), [/ r / rational](https://www.reddit.com/r/rational/comments/ta57ag/it_looks_like_youre_trying_to_take_over_the_world/), [HN](https://news.ycombinator.com/item?id=30818895)/ [2](/doc/www/news.ycombinator.com/829921afb185e7b94bc4b433ee7c57da1ccf75e8.html#34809360 \"(Original URL: https://news.ycombinator.com/item?id=34808718#34809360 )\")\n\n\n\n\n\n\n\n\n---\n\n\n1. 
An acquaintance tells me that he once accidentally got shell with an HTTP GET while investigating some weird errors. This story has a happier ending than my own [HTTP GET bugs](/dnm-archive#logout) tend to: the site operators noticed only *after* he finished exfiltrating a copy of the website data. (It was inconvenient to download with `wget`.)[↩︎](#fnref1)\n\n\n\n\n\n[Further Reading](#backlinks-section \"Link to section: § 'Further Reading'\")\n============================================================================\n\n[[Backlinks]](/metadata/annotation/backlink/%252Ffiction%252Fclippy.html \"Reverse citations/backlinks for this page (the list of other pages which link to this page).\")\n\n\n[Link Bibliography](#link-bibliography-section \"Link to section: § 'Link Bibliography'\")\n========================================================================================\n\n[[Link bibliography]](/metadata/annotation/link-bibliography/%252Ffiction%252Fclippy.html \"Bibliography of links cited in this page (forward citations). Lazily-transcluded version at footer of page for easier scrolling.\")\n\n\n[Similar Links](#similars-section \"Link to section: § 'Similar Links'\")\n=======================================================================\n\n[[Similars]](/metadata/annotation/similar/%252Ffiction%252Fclippy.html \"Similar links for this link (by text embedding). 
Lazily-transcluded version at footer of page for easier scrolling.\")

[“It Looks Like You’re Trying To Take Over The World”, Gwern Branwen](https://www.gwern.net/Clippy.page)

---
title: Complexity no Bar to AI
author: Gwern Branwen
description: Critics of AI risk suggest diminishing returns to computing (formalized asymptotically) means AI will be weak; this argument relies on a large number of questionable premises and ignoring additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely.
created: 2014-06-01
modified: 2019-06-09
status: finished
previous: /slowing-moores-law
next: /forking-path
confidence: likely
importance: 10
cssExtension: drop-caps-kanzlei
...

go to
Statements’, Knuth 1974\") but which is nevertheless lightning-fast & almost bug-free. TeX is not his only programming language either; he has created others like [METAFONT](!W) (for fonts) or [MIX](!W)/[MMIX](!W) (an assembly language & computer architecture to provide simple & timeless implementations of his [TAOCP](!W \"The Art of Computer Programming\") programs).\n\n Perhaps the most striking thing about these various languages is that everyone who uses them loves the quality of the output & what you can do with them, but hates the confusing, complicated, inconsistent, buggy experience of *using* them (to the extent that there is a whole cottage industry of people attempting to rewrite or replace [TeX](!W \"TeX\")/[LaTeX](!W \"LaTeX\"), typically copying the core ideas of how TeX does typesetting---the ['box and glue'](https://en.wikibooks.org/wiki/LaTeX/Boxes) paradigm of how to lay out stuff onto a page, the [Knuth-Plass line-breaking](/doc/design/typography/1981-knuth.pdf \"‘Breaking paragraphs into lines’, Knuth & Plass 1981\"), the custom [font families](!W \"Computer Modern\") etc---doing all that reimplementation work just so they don't have to ever deal with the misery of TeX-the-language). Knuth himself, however, appears to have no more difficulty programming in his languages than he did writing 1960s assembler or [designing fonts with 60 parameters](/doc/design/typography/tex/1996-tug-issuev17no4-knuthqanda.pdf#page=7 \"‘Questions and Answers with Professor Donald E. Knuth § pg7’, Knuth 1996 (page 7)\"). As he puts it, he ignores most programming techniques and just writes the right code, [\"Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about.\"](https://www.informit.com/articles/article.aspx?p=1193856 \"‘Interview with Donald Knuth’, Binstock 2008\")\n\n How do you write code as well as Don Knuth? The answer seems to be, well, 'be as naturally gifted as Don Knuth'. 
A notable omission in his œuvre is that he has never done major work on operating systems, and his major effort in improving software engineering, [\"literate programming\"](!W), which treats programs as 1 long story to be told, fell stillborn from the presses. (Knuth, who has the privilege of an academic in being able to ship a program without any worries about maintenance or making a living, and just declaring something *done*, like TeX, perhaps does not appreciate how little real-world programs are like [telling stories](https://www.quantamagazine.org/computer-scientist-donald-knuth-cant-stop-telling-stories-20200416).)\n\n So this is a bit of a problem for any attempts at making programming languages more powerful or more ergonomic, as well as for various kinds of ['tools for thought'](https://numinous.productions/) like [Douglas Engelbart's](!W) [intelligence amplification](!W) program. It is terribly easy for such powerful systems to be impossible to learn, and the ability to handle extreme levels of complexity can cripple one's ability to *remove* complexity. (The ability to define all sorts of keyboard shortcuts & abbreviations invoking arbitrary code dynamically in your [Lisp machine](!W) [text editor](https://en.wikipedia.org/wiki/Emacs) is great until you ask 'how am I going to *remember* all these well enough to save any time on net?') Tasking people like Knuth or Engelbart to develop such things is a bit like asking a top sports player to be a coach: it's not the same thing at all, and the very factors which made them so freakishly good at the sport may damage their ability to critique or improve---they may not be able to do much more than demonstrate how to do it well, and say, 'now you do that too and git gud'.\n\n From an AI perspective, this is interesting because it suggests that AIs might be powerful even while coding with human-legible code. 
If the problem with the 'Moore/Knuth approach' is that you can't clone him indefinitely to rewrite the system every time it's necessary, then what happens with AIs which you *can* 'just clone' and apply exclusively to the task? Quantity has a quality all its own. (For a fictional example, see Vernor Vinge's SF novel [_A Deepness in the Sky_](!W), where talented individuals can be put into a 'Focused' state, where they become permanently [monomaniacally obsessed](!W \"Hyperfocus\") with a single technical task like rewriting computer systems to be optimal, and always achieve Knuth-level results; giving their enslavers de facto superpowers compared to rivals who must employ ordinary people. For a more real-world example, consider how Google fights [bitrot](/holy-war \"‘Technology Holy Wars are Coordination Problems’, Branwen 2020\") & infrastructure decay: not by incredibly sophisticated programming languages---quite the opposite, considering regressions like [Go](!W \"Go (programming language)\")---but employing tens of thousands of top programmers to literally rewrite all its code every few years on net, developing an entire parallel universe of software tools, and storing it all in a [single giant repository](https://cacm.acm.org/magazines/2016/7/204032-why-google-stores-billions-of-lines-of-code-in-a-single-repository/fulltext) to assist the endless global rewrites of the [big ball of mud](http://www.laputan.org/mud/mud.html).)\n\n# Appendix\n\n## Meta-Learning Paradigms\n\n[Metz et al 2018](https://arxiv.org/abs/1804.00222#google \"Learning Unsupervised Learning Rules\"){.include-annotation}\n\n## Pain prosthetics {.collapse}\n\n