{"source": "gwern_blog", "url": "https://www.gwern.net/Scaling-hypothesis.page", "title": "\"The Scaling Hypothesis\"", "authors": ["Gwern Branwen"], "date_published": "2022-01-02", "text": "---\ntitle: \"The Scaling Hypothesis\"\ndescription: \"On GPT-3: meta-learning, scaling, implications, and deep theory. The scaling hypothesis: neural nets absorb data & compute, generalizing and becoming more Bayesian as problems get harder, manifesting new abilities even at trivial-by-global-standards-scale. The deep learning revolution has begun as foretold.\"\nthumbnail: /doc/ai/nn/transformer/gpt/2020-brown-gpt3-figure13-meanperformancescalingcurve.png\nthumbnailText: \"Figure 1.3 from Brown et al 2020 (OpenAI, GPT-3), showing roughly log-scaling of GPT-3 parameter/compute size vs benchmark performance on all text/natural language benchmarks test.\"\ncreated: 2020-05-28\nmodified: 2022-01-02\nstatus: finished\nprevious: /newsletter/2020/05\nnext: /fiction/clippy\nimportance: 10\nconfidence: likely\ncssExtension: drop-caps-kanzlei\n...\n\n
\n> GPT-3, announced by OpenAI in May 2020, is the largest neural network ever trained, by over an order of magnitude.\n> Trained on Internet text data, it is the successor to GPT-2, which had surprised everyone by its natural language understanding & generation ability.\n> To the surprise of most (including myself), this vast increase in size did not run into diminishing or negative returns, as many expected, but the benefits of scale continued to happen as forecasted by OpenAI.\n> These benefits were not merely learning more facts & text than GPT-2, but qualitatively distinct & even more surprising in showing [*meta-learning*](#meta-learning): while GPT-2 learned how to do common natural language tasks like text summarization, GPT-3 instead learned how to follow directions and learn new tasks from a few examples.\n> (As a result, GPT-3 outputs & interaction are more fascinating & human-like than GPT-2.)\n>\n> While the immediate applications of GPT-3, like my poetry or humor writings, are nice, the short-term implications of GPT-3 are much more important.\n>\n> First, while GPT-3 is expensive by conventional DL standards, it is cheap by scientific/commercial/military/government budget standards, and the results indicate that models could be made much larger.\n> Second, models can also be made much more powerful, as GPT is an old approach known to be flawed in both minor & major ways, and far from an 'ideal' Transformer.\n> Third, GPT-3's capabilities come from learning on raw (unsupervised) data; that has long been one of the weakest areas of DL, holding back progress in other areas like reinforcement learning or robotics. Models like GPT-3 suggest that large unsupervised models will be vital components of future DL systems, as they can be 'plugged into' systems to immediately provide understanding of the world, humans, natural language, and reasoning.\n>\n> The meta-learning has a longer-term implication: it is a demonstration of the [*blessings of scale*](#blessings-of-scale), where problems with simple neural networks vanish, and they become more powerful, more generalizable, more human-like when simply made very large & trained on very large datasets with very large compute---even though those properties are believed to require complicated architectures & fancy algorithms (and this perceived need drives much research).\n> Unsupervised models benefit from this, as training on large corpuses like Internet-scale text present a myriad of difficult problems to solve; this is enough to drive meta-learning despite GPT not being designed for meta-learning in any way.\n> (This family of phenomena is perhaps driven by neural networks functioning as ensembles of many sub-networks with them all averaging out to an Occam's razor, which for small data & models, learn superficial or memorized parts of the data, but can be forced into true learning by making the problems hard & rich enough; as [meta-learners learn amortized Bayesian inference](/backstop#deep-bayes), they build in informative priors when trained over many tasks, and become dramatically more sample-efficient and better at generalization.)\n>\n> The blessings of scale in turn support a radical theory: an old AI paradigm held by a few pioneers in connectionism (early artificial neural network research) and by more recent deep learning researchers, the [*scaling hypothesis*](#scaling-hypothesis).\n> The scaling hypothesis regards the blessings of scale as the secret of AGI: intelligence is 'just' simple neural units & learning algorithms 
applied to diverse experiences at a (currently) unreachable scale.\n> As increasing computational resources permit running such algorithms at the necessary scale, the neural networks will get ever more intelligent.\n>\n> When? Estimates of Moore’s law-like progress curves decades ago by pioneers like Hans Moravec indicated that it would take until the 2010s for the sufficiently-cheap compute for tiny insect-level prototype systems to be available, and the 2020s for the first sub-human systems to become feasible, and these forecasts are holding up.\n> (Despite this vindication, the scaling hypothesis is so unpopular an idea, and difficult to prove in advance rather than as a _fait accompli_, that while the GPT-3 results finally drew some public notice after OpenAI enabled limited public access & people could experiment with it live, it is unlikely that many entities will modify their research philosophies, much less kick off an 'arms race'.)\n>\n> More concerningly, GPT-3's scaling curves, unpredicted meta-learning, and success on various anti-AI challenges suggests that in terms of futurology, AI researchers' forecasts are an emperor sans garments: they have no coherent model of how AI progress happens or why GPT-3 was possible or what specific achievements should cause alarm, where intelligence comes from, and do not learn from any falsified predictions.\n> Their primary concerns appear to be supporting the status quo, placating public concern, and remaining respectable.\n> As such, their comments on AI risk are meaningless: they would make the same public statements if the scaling hypothesis were true or not.\n>\n> Depending on what investments are made into scaling DL, and how fast compute grows, the 2020s should be quite interesting---sigmoid or singularity?\n>\n> For more ML scaling research, follow the [/r/MLScaling](https://www.reddit.com/r/mlscaling/ \"'ML Scaling subreddit', Branwen 2020\") subreddit. For a fiction treatment as SF short story, see [\"It Looks Like You're Trying To Take Over The World\"](/fiction/clippy).\n
\n\n
\n
**Read The Samples**\n
\nOn [\"GPT-3: Language Models are Few-Shot Learners\", Brown et al 2020](https://arxiv.org/abs/2005.14165#openai \"'GPT-3: Language Models are Few-Shot Learners', Brown et al 2020\") ([poems](https://arxiv.org/pdf/2005.14165.pdf&org=openai#page=48 \"Figure F.1: Four uncurated completions from a context suggesting the model compose a poem in the style of Wallace Stevens with the title 'Shadows on the Way'\") & my followup [GPT-3 Creative Writing](/gpt-3 \"Creative writing by OpenAI's GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling.\"), compare [my old finetuned GPT-2 poetry](/gpt-2 \"'GPT-2 Neural Network Poetry', Branwen & Presser 2019\"); [random samples](https://justpaste.it/7eovk \"GPT-3 Github JSON dump reformatted to readable HTML\"); [\"OpenAI API\"](https://openai.com/blog/openai-api/) with real-world demos)\n\nI strongly encourage anyone interested in GPT-3 to also at least skim OA's [random samples](https://justpaste.it/7eovk \"GPT-3 Github JSON dump reformatted to readable HTML\"), or better yet, my samples in [\"GPT-3 Creative Writing\"](/gpt-3 \"Creative writing by OpenAI's GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling.\")---reading the paper & looking at some standard benchmark graphs does not give a good feel for what working with GPT-3 is like or the diversity of things it can do which are missed by benchmarks.\n
\n\n# Meta-Learning\n\n[Learning to learn.]{.marginnote} In May 2020, OA released---to remarkably little interest from researchers, no blog post, no media blitz, and little public discussion beyond the snidely dismissive---the long-awaited followup to [GPT-2](https://openai.com/research/better-language-models \"Better Language Models and Their Implications\"), one model to rule them all: a 117× larger 175b-parameter model with far more powerful language generation, which lets it solve a wide variety of problems from arithmetic^[Given the number of comments on the paper's arithmetic benchmark, I should point out that the arithmetic benchmark appears to greatly understate GPT-3's abilities due to the [BPE encoding issue](/gpt-3#bpes \"'GPT-3 Creative Fiction § BPEs', Branwen 2020\"): even using commas markedly improves its 5-digit addition ability, for example. The BPE issue also appears to explain much of the poor performance on the anagram/shuffling tasks. This is something to keep in mind for any task which requires character-level manipulation or understanding.] to English translation to unscrambling anagrams to SAT analogies---purely from being prompted with text examples, without any specialized training or finetuning whatsoever, merely next-word prediction training on a big Internet text corpus.\nThis implies GPT-3's attention mechanisms serve as [\"fast weights\"](https://arxiv.org/abs/1610.06258#deepmind \"'Using Fast Weights to Attend to the Recent Past', Ba et al 2016\") that have \"learned to learn\" by training on sufficiently varied data^[On implicit [meta-learning](https://www.reddit.com/r/reinforcementlearning/search/?q=flair%3AMetaRL&include_over_18=on&restrict_sr=on&sort=top), see: [Santoro et al 2016](https://arxiv.org/abs/1605.06065#deepmind \"One-shot Learning with Memory-Augmented Neural Networks\")/[Wang et al 2018](/doc/reinforcement-learning/meta-learning/2018-wang.pdf#deepmind \"Prefrontal cortex as a meta-reinforcement learning system\") ([Botvinick commentary](https://www.lesswrong.com/posts/Wnqua6eQkewL3bqsF/matt-botvinick-on-the-spontaneous-emergence-of-learning \"Matt Botvinick on the spontaneous emergence of learning algorithms\"))/[Botvinick et al 2019a](https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613\\(19\\)30061-0#deepmind \"Reinforcement Learning, Fast and Slow\"), [Clune 2019](https://arxiv.org/abs/1905.10985#uber \"AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence\"), [Schmidhuber 2015](https://arxiv.org/abs/1511.09249#schmidhuber \"On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models\")/[2018](https://arxiv.org/abs/1802.08864#schmidhuber \"One Big Net for Everything\"), [Weng 2018](https://lilianweng.github.io/lil-log/2018/11/30/meta-learning.html#openai \"Meta-Learning: Learning to Learn Fast\")/[Weng 2019](https://lilianweng.github.io/lil-log/2019/06/23/meta-reinforcement-learning.html#openai \"Meta Reinforcement Learning\").], forcing it to do more than just learn ordinary textual relationships.\nLike OpenAI's [Jukebox](https://openai.com/research/jukebox \"'Jukebox: We're introducing Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles. 
We're releasing the model weights and code, along with a tool to explore the generated samples.', Dhariwal et al 2020\") just weeks ago (itself a remarkable demonstration of scaling in synthesizing *raw audio* music complete with remarkably realistic voices/instruments), the announcement of GPT-3 appears to have sunk almost without a trace, so I will go into more depth than usual.\n\n# Flexing GPT\n\n
\n> '\"They are absolutely reasonable. I think that is their distinguishing characteristic. Yes, Mr. Erskine, an absolutely reasonable people. I assure you there is no nonsense about the Americans.\" \"How dreadful!\" cried Lord Henry. \"I can stand brute force, but brute reason is quite unbearable. There is something unfair about its use. It is hitting below the intellect.\"'\n>\n> _The Picture of Dorian Gray_, Oscar Wilde\n
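\n\n[Few-shot prompting, concretely.]{.marginnote} Since much of what follows turns on what \"prompted with text examples\" means, here is a minimal sketch of few-shot prompting in the GPT-3 style; it assumes only some generic text-completion interface (the hypothetical `complete()` stub stands in for any API or local model), and the task, prompt format & examples are illustrative rather than copied from the paper:\n\n```python\n# Minimal sketch of few-shot prompting (illustrative; the task, examples,\n# and 'complete' stub are stand-ins, not OpenAI's exact prompt format).\n# The model is never finetuned: the 'training set' is just a handful of\n# solved examples pasted into the prompt, followed by one unsolved case.\n\nFEW_SHOT_PROMPT = '''Translate English to French.\n\nEnglish: cheese\nFrench: fromage\n\nEnglish: the cat sat on the mat\nFrench: le chat s'est assis sur le tapis\n\nEnglish: where is the library?\nFrench:'''\n\ndef complete(prompt: str, max_tokens: int = 20) -> str:\n    # Stand-in for any autoregressive LM completion interface\n    # (an API call, a local model, etc.).\n    raise NotImplementedError('plug in a language model here')\n\n# Hoped-for behavior: the continuation is the French translation, i.e. the\n# model picks up the task from the two worked examples at runtime, purely\n# in its forward pass ('fast weights'), with no gradient updates.\n# answer = complete(FEW_SHOT_PROMPT)\n```\n\nThe same pattern covers the arithmetic, anagram, & SAT-analogy demonstrations; as the BPE footnote above notes, the character-level ones are partly sabotaged by the BPE encoding, which tends to chop a string like \"12345\" into a few multi-character tokens rather than individual digits.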
\n\n[\"Attacks only get better.\"]{.marginnote} 2 years ago, [GPT-1](https://openai.com/research/language-unsupervised \"Improving Language Understanding with Unsupervised Learning: We've obtained state-of-the-art results on a suite of diverse language tasks with a scalable, task-agnostic system, which we're also releasing. Our approach is a combination of two existing ideas: transformers and unsupervised pre-training. These results provide a convincing example that pairing supervised learning methods with unsupervised pre-training works very well; this is an idea that many have explored in the past, and we hope our result motivates further research into applying this idea on larger and more diverse datasets.\") was interestingly useful pretraining and adorable with its \"sentiment neuron\".\n1 year ago, GPT-2 was impressive with its excellent text generation & finetuning capabilities.\nThis year, GPT-3 is scary because it's a magnificently obsolete architecture from early 2018 (used mostly for software engineering convenience as the infrastructure has been debugged), which is small & shallow compared to what's possible[^overhang][^overhang-NN], with a simple uniform architecture^[Eg a narrow context window [severely limits it](https://arxiv.org/pdf/2001.08361.pdf#page=25 \"D.5: Context Dependence\"), and motivates the need for [efficient attention](/note/attention \"'Efficient Attention: Breaking The Quadratic Transformer Bottleneck', Branwen 2020\"). More broadly, GPT-3 does nothing exotic---no use of [brain imitation learning](https://www.reddit.com/r/reinforcementlearning/comments/9pwy2f/wbe_and_drl_a_middle_way_of_imitation_learning/ \"'WBE and DRL: a Middle Way of imitation learning from the human brain', Branwen 2018\") or neural architecture search to try to tailor the model, online hyperparameter optimization (possibly [>3× speedup](https://arxiv.org/abs/2106.00958#openai \"'A Generalizable Approach to Learning Optimizers', Almeida et al 2021\")) or even decide basic hyperparameters like widths (which as [EfficientNet](https://arxiv.org/abs/1905.11946#google \"'EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks', Tan & Le 2019\"){#tan-le-2019-2} shows, can make quite a difference even in \"well-understood and hand-optimized vanilla architectures\").] 
trained in the dumbest way possible (unidirectional prediction of next text token) on a single impoverished modality (random Internet HTML text dumps^[Not even PDFs---so no Google Books, no Arxiv, no Libgen, no Sci-Hub...]) on tiny data (fits on a laptop), sampled in a dumb way^[Generating text from a LM can reveal the presence of knowledge, but not its absence, and it is universally agreed that the current crude heuristic methods like top-_k_ cannot possibly be optimal.], its benchmark performance sabotaged by bad prompts & [data encoding problems](/gpt-3#bpes \"'GPT-3 Creative Fiction § BPEs', Branwen 2020\") (especially arithmetic & commonsense reasoning), and yet, the first version already manifests crazy runtime meta-learning---and the scaling curves *still* are not bending!\nThe samples are also better than ever, whether it's GPT-3 inventing new penis jokes^['A man is at the doctor's office, and the doctor tells him, \"I've got some good news and some bad news for you.\" / The man says, \"Well, I can't take the bad news right now, so give me the good news first.\" / The doctor says, \"Well, the good news is that you have an 18-inch penis.\" / The man looks stunned for a moment, and then asks, \"What's the bad news?\" / The doctor says, \"Your brain's in your dick.\"'] or writing (mostly working) [JavaScript tutorials](https://justpaste.it/7eovk#javascript \"'GPT-3 random sample dump: JavaScript tutorial', GPT- 2020\") about rotating arrays.\n\n[^overhang]: GPT-3 hardly costs more than a few million dollars of compute (as of early 2020) as the extensive scaling research beforehand enabled one training run, and it is cheap to run (pg39): \"Even with the full GPT-3 175B, generating 100 pages of content from a trained model can cost on the order of 0.4 kW-hr, or only a few cents in energy costs.\" (Likewise, T5 was trained [only once](https://twitter.com/colinraffel/status/1313097438299910147 \"I recently came across https://arxiv.org/abs/2004.08900, which 'assumes 2-3 runs' of T5-11B. In fact, we trained T5-11B once. That's why we spend 35 pages figuring out how we should train before we start training. You don't want to mess up a training run that big.\").) And for the cost of one model, GPT-3 API users have shown that you get the equivalent of hundreds of smaller special-purpose models, each requiring more researchers, custom datasets, countless training runs, and tinkering, assuming said models could be created at all. (A slogan for the future: \"One model, one vector---once.\")\n\n For comparison, the [PDP-11](!W) was a common academic workhorse due to its extremely low cost, a mere [$20,000]($1970), while the first [Lisp Machine](!W) cost >[$50,000]($1972)---expensive for a workstation but a bargain compared to researchers hogging mainframes costing tens of millions. IBM's (otherwise useless) Deep Blue AI project reputedly cost >[$5]($1997)m for the final iteration (reports of [$100]($1997)m appear to be a confusion with the estimated value of *publicity* mentioned in pg187 of Hsu's _Behind Deep Blue_) and Big Science projects like [ITER](https://en.wikipedia.org/wiki/ITER) blow >5000× the funding to mostly fail. 
(The particle physicists, incidentally, are [back asking for](https://www.nature.com/articles/d41586-020-01866-9 \"CERN makes bold push to build €21-billion supercollider: European particle-physics lab will pursue a 100-kilometer machine to uncover the Higgs boson's secrets---but it doesn't yet have the funds\") ≫[$24]($2020)b, based on, presumably the scientific revolutions & world-changing breakthroughs that the LHC's >[$9]($2010)b investment produced, or the [$2]($1993)b spent to (not) build the [SSC](!W \"Superconducting Super Collider\")...)\n\n GPT-3 could have been done decades ago with global computing resources & scientific budgets; what could be done with today's hardware & budgets that we just don't know or care to do? There *is* a hardware overhang. (See also the [_Whole Brain Emulation Roadmap_](/doc/ai/scaling/hardware/2008-sandberg-wholebrainemulationroadmap.pdf \"Sandberg & Bostrom 2008\") & [\"2019 recent trends in GPU price per FLOPS\"](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/).)\n[^overhang-NN]: Further, NNs have additional hardware overhangs of their own due to the many orders of magnitude asymmetry of training vs running. Transfer learning and meta-learning are so much faster than the baseline model training. You can 'train' GPT-3 without even any gradient steps---just examples. You pay the extremely steep upfront cost of One Big Model to Rule Them All, and then reuse it everywhere at tiny marginal cost. If you train a model, then as soon as it's done you get, among other things:\n\n - the ability to run thousands of copies in parallel on the same hardware\n\n - in a context like AlphaGo, I estimate several hundred ELO strength gains if you reuse the same hardware to merely run tree search with exact copies of the original model\n - meta-learning/transfer-learning to any related domain, cutting training requirements by orders of magnitude\n - model compression/distillation to train student models which are a fraction of the size, FLOPS, or latency (ratios varying widely based on task, approach, domain, acceptable performance degradation, targeted hardware etc, but often extreme like 1⁄100^th^)\n - reuse of the model elsewhere to instantly power up other models (eg. use of text or image embeddings for a DRL agent)\n - learning-by-doing/[experience curve effects](https://en.wikipedia.org/wiki/Experience_curve_effects) (highest in information technologies, and high for DL: Hernandez & Brown 2020), so the next from-scratch model may be much cheaper.\n\n For example: after all the iterative model architecture & game upgrades done while training the first [OpenAI Five](!W) (OA5) DoTA2 agent was completed, the second iteration of OA5, [\"Rerun\"](https://arxiv.org/pdf/1912.06680.pdf#page=11&org=openai \"'Dota 2 with Large Scale Deep Reinforcement Learning', Berner et al 2019: §4.2: Validating Surgery with Rerun\"), was trained from scratch. 
Rerun required only 20% of the training for a \"98% win-rate against the final version of OpenAI Five.\"\n As the authors note: \"The ideal option would be to run Rerun-like training from the very start, but this is impossible---the OpenAI Five curve represents lessons learned that led to the final codebase, environment, etc., without which it would not be possible to train Rerun.\"\n - baseline for engineering much more efficient ones by ablating and comparing with the original\n\nIt's odd that this qualitative leap appears to be largely missed by the standard NLP benchmarks.\nNothing in the raw metrics reported on, say, Penn Tree Bank or LAMBADA or WinoGrande would lead you to expect all of this hilarious and creative output; the meta-learning results might, but only if you already thought meta-learning was important.\nThis suggests to me that a useful post-GPT-3 contribution would be figuring out how to benchmark these sorts of flexible text generation capabilities (possibly something along the lines of Chollet's image-based [Abstraction and Reasoning Corpus (ARC)](https://arxiv.org/abs/1911.01547#google \"'On the Measure of Intelligence', Chollet 2019\")).\n\n# Baking The Cake\n\n![Is GPT actually part of AGI---or is the cake a lie? ([LeCun 2019](/doc/ai/scaling/2019-02-18-lecun-isscc-talk-deeplearninghardwarepastpresentandfuture.pdf#page=60 \"Deep Learning Hardware: Past, Present, & Future: slide 60: 'How Much Information is the Machine Given during Learning?'\"))](/doc/ai/2019-lecun-isscctalk-cake.png){.float-right .invert}\n\n[Not the whole picture, but a big part.]{.marginnote} Does it set SOTA on every task? No, of course not.\nBut the question is not whether we can lawyerly find any way in which it might not work, but [whether there is any way which it might work](/forking-path \"'Technology Forecasting: The Garden of Forking Paths', Branwen 2014\").\nAnd there are many ways it might work better (see the [\"Limitations\" section](https://arxiv.org/pdf/2005.14165.pdf&org=openai#page=34 \"GPT-3: Language Models are Few-Shot Learners: 5. 
Limitations\") for just a few).\nDoes GPT-3 *do* anything like steer a robot around SF shooting lasers and rockets at humans⸮ No, of course not.\nIt is 'just' a text prediction model, an idiot savant of text; but an idiot savant, we should remember, is only a genetic mutation or bit of brain damage away from a normal human.\nIf RL is the cherry on the top of the supervised learning frosting, and supervised learning is the frosting on top of the unsupervised learning cake, well, it looks like the cake layers are finally rising.\n\n![A better GPT-3 lesson.](/doc/ai/nn/cnn/2020-07-24-gwern-meme-moneyprinter-bitterlesson-gpt3.png \"GPT-3's implications in the 'money printer go brr' meme format: the head of Rich Sutton says 'GPUs go bitter', referencing his 'bitter lesson' that most clever AI innovations are ultimately useless as they hamstring AI performance and are surpassed by methods that make fewer assumptions & use more compute/data, while the personification of AI academia, where cleverness is rewarded and heavy use of compute is considered cheating and ugly, sheds tears and complains about approaches like GPT-3 beating decades of clever academic systems.\"){.float-left}\n\n[Scaling still working.]{.marginnote} I was surprised, as I had expected closer to 100b parameters, and I thought that the performance of [CTRL](https://arxiv.org/abs/1909.05858#salesforce \"'CTRL: A Conditional Transformer Language Model for Controllable Generation', Keskar et al 2019\")/[Meena](https://arxiv.org/abs/2001.09977#google \"'Towards a Human-like Open-Domain Chatbot', Adiwardana et al 2020\")/[MegatronLM](https://nv-adlr.github.io/MegatronLM \"MegatronLM: Training Billion+ Parameter Language Models Using GPU Model Parallelism\")/[T5](https://arxiv.org/abs/1910.10683#google \"'T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer', Raffel et al 2019\")/[Turing-NLG](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/ \"'Turing-NLG: A 17-billion-parameter language model by Microsoft', Rosset 2020\")/[GPipe](https://arxiv.org/abs/1811.06965#google \"'GPipe: Easy Scaling with Micro-Batch Pipeline Parallelism', Huang et al 2018\") suggested that, [the scaling papers](/note/scaling \"'Machine Learning Scaling', Branwen 2021\")[^scaling-papers] notwithstanding, the scaling curves had started to bend and by 100b, it might be hard to justify further scaling.\nHowever, in the latest version of [\"the unreasonable effectiveness of data\"](/doc/ai/scaling/2009-halevy.pdf \"'The Unreasonable Effectiveness of Data', Halevy et al 2009\") where \"the curves cross\"/\"scissor effect\" and the neural method eventually wins (eg. [Banko & Brill 2001](/doc/ai/scaling/2001-banko.pdf#microsoft \"Scaling to Very Very Large Corpora for Natural Language Disambiguation\"), [Brants et al 2007](/doc/ai/scaling/2007-brants.pdf#google \"Large Language Models in Machine Translation\"), [Koehn & Knowles 2017](/doc/ai/2017-koehn-figure3-bleuscoreswithvaryingamountsoftrainingdata.png \"Six Challenges for Neural Machine Translation: Challenges: 3.2. Amount of Training Data: Figure 3: BLEU scores for English-Spanish systems trained on 0.4 million to 385.7 million words of parallel data. 
Quality for NMT starts much lower, outperforms SMT at about 15 million words, and even beats a SMT system with a big 2 billion word in-domain language model under high-resource conditions.\")), GPT-3 hits twice that without noticeable change in scaling factors: its scaling continues to be roughly logarithmic/power-law, as it was for much smaller models & as forecast, and it has not hit a regime where gains effectively halt or start to require increases vastly beyond feasibility.\nThat suggests that it would be both possible and useful to head to trillions of parameters (which are still well within available compute & budgets, requiring merely thousands of GPUs & perhaps [$10]($2020)--[$100]($2020)m budgets assuming no improvements which of course there will be, see Hernandez & Brown 2020 etc in this issue), and eyeballing the graphs, many benchmarks like the [Winograd schema](https://en.wikipedia.org/wiki/Winograd_schema_challenge) [WinoGrande](https://arxiv.org/abs/1907.10641#allen \"'WinoGrande: An Adversarial Winograd Schema Challenge at Scale', Sakaguchi et al 2019\") would fall by 10t parameters.\nThe predictability of scaling is striking, and makes scaling models more like statistics than AI.\n(AI is statistics which does what we want it to but doesn't work; and statistics is AI which works but doesn't do what we want.)\n\n[^scaling-papers]: In particular, sample-efficiency increases with model size up to compute-efficient scaling, and [GPT-2 can memorize data after seeing it only once](https://arxiv.org/abs/2012.07805 \"'Extracting Training Data from Large Language Models', Carlini et al 2020\")---a [desirable property](https://arxiv.org/abs/1906.05271#google \"'Does Learning Require Memorization? A Short Tale about a Long Tail', Feldman 2019\") given long-tailed real-world distributions of data. (An example of how *not* to do scaling papers is [Thompson et al 2020](https://arxiv.org/abs/2007.05558 \"The Computational Limits of Deep Learning\"), which, in stark contrast to the foregoing papers---which Thompson et al do not mention at all!---attempts to infer scaling not from well-controlled experiments run by the authors, which yield extremely tight and highly predictive curves, but attempts to infer them from occasional reported numbers in highly disparate research papers; unsurprisingly, their curves barely predict anything and seem to be serious overestimates anyway.)\n\n It is noteworthy that the pursuit of large models is driven almost exclusively by OpenAI & industry entities (the latter of which are content with far smaller models), and that academia has evinced an almost total disinterest---disgust & anger, even, and denial (one might say \"green AI\" is green with envy). For all that the scaling hypothesis is 'obvious' and scaling is 'predicted', there is remarkably little interest in actually *doing* it. Perhaps we should pay more attention to what people do rather than what they say.\n\n For more ML scaling research, follow the [/r/MLScaling](https://www.reddit.com/r/mlscaling/ \"'ML Scaling subreddit', Branwen 2020\") subreddit.\n\n![GPT-3: not even that much compute---[3640 petaflop/s-day](https://arxiv.org/pdf/2005.14165.pdf#org=openai&page=46 \"Total Compute Used to Train Language Model: Table D.1\"), only 2× their estimate for AlphaGo Zero, 1860. 
(Historical graph modified by myself from [\"AI and Compute\", Amodei et al 2018](https://openai.com/research/ai-and-compute \"We're releasing an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time (by comparison, Moore's Law had a 2-year doubling period). Since 2012, this metric has grown by more than 300,000× (a 2-year doubling period would yield only a 7× increase). Improvements in compute have been a key component of AI progress, so as long as this trend continues, it's worth preparing for the implications of systems far outside today's capabilities.\").)](/doc/ai/nn/transformer/gpt/2019-11-07-amodei-aiandcompute-twodistincteras-gpt3modified.png)\n\n[Anti-scaling: penny-wise, pound-foolish.]{.marginnote} GPT-3 is an extraordinarily expensive model by the standards of machine learning: it is estimated that training it may require the annual cost of more machine learning researchers than you can count on one hand (~[$5]($2020)m^[Roughly around [Chuan Li's](https://lambdalabs.com/blog/demystifying-gpt-3 \"OpenAI's GPT-3 Language Model: A Technical Overview\") estimate, using nominal list prices without discounts (which could be steep as the marginal costs of cloud compute are substantially lower). The R&D project cost would be much higher, but is amortized over all subsequent models & projects.]), up to [$30]($2020) of hard drive space to store the model (500--800GB), and multiple pennies of electricity per 100 pages of output (0.4 kWH).\nResearchers are concerned about the prospects for scaling: can ML afford to run projects which cost more than 0.1 milli-Manhattan-Projects⸮^[The Manhattan Project cost ~[$2]($1946)b.]\nSurely it would be too expensive, even if it represented another large leap in AI capabilities, to spend up to 10 milli-Manhattan-Projects to scale GPT-3 100× to a trivial thing like human-like performance in many domains⸮\nMany researchers feel that such a suggestion is absurd and refutes the entire idea of scaling machine learning research further; they asseverate that their favored approaches (you know, the ones which don't work[^butcher]) will run far more efficiently, and that the field would be more productive if it instead focused on research which can be conducted by an impoverished goatherder on an old laptop running off solar panels.^[As if we live in a world where grad students could go to the Moon on a ramen budget if we just wished hard enough, as if focusing on CO~2~ costs & not benefits in our evaluations is not like making a scissor with only one blade, or as if \"green AI\" approaches to try to create small models without going through big models did not look increasingly futile and like throwing good money after bad, and were not the least green of all AI research... To the extent that all cutting-edge AI research ~2010 could be done with grad student money like [$1000]($2010) of hardware, where AI research in decades before & after benefited from big iron, that is an indictment of that era, demonstrating what a stagnant dead end that research was, that its techniques were so smallminded and hobbled it could not benefit from the available large-scale compute.]\nNonetheless, I think we can expect further scaling.\n(10×? No, 10× isn't cool. You know what's cool? 
[100--1000×](https://www.reddit.com/r/slatestarcodex/comments/hys565/are_we_in_an_ai_overhang/fzezi7d/ \"People I know at OpenAI say v4 is around the corner and easily doable, and...will be here soon (not months but year or so). And they are confident it will scale and be around 100--1000×.\"), trained on a [fancy new supercomputer](https://news.microsoft.com/source/features/ai/openai-azure-supercomputer/ \"Microsoft announces new supercomputer, lays out vision for future AI work\").)\n\n[^butcher]: One is reminded of the joke about the customer complaining to the butcher:\n\n    \"Your meat is \\$10/lb, while your competitor across the street sells it at \\$1!\" \"So go buy his meat.\" \"I would, but he has none.\" \"When I don't have any meat, it costs \\$1 too.\"\n\n# Scaling\n\n[How far will scaling go?]{.marginnote} The scaling papers suggest that the leaps we have seen over the past few years are not even halfway there in terms of absolute likelihood loss, never mind what real-world capabilities each additional decrement translates into.\nThe scaling curves are clean; from [\"Scaling Laws for Neural Language Models\", Kaplan et al 2020](https://arxiv.org/abs/2001.08361#openai \"'Scaling Laws for Neural Language Models', Kaplan et al 2020\"):\n\n![DL scaling laws: compute, data, model parameters. ([Figure 1](https://arxiv.org/pdf/2001.08361.pdf#page=3&org=openai \"Scaling Laws for Neural Language Models: Figure 1: Language modeling performance improves smoothly as we increase the model size, dataset size, and amount of compute used for training.\"))](/doc/ai/nn/transformer/gpt/2020-kaplan-figure1-dlscaling.png \"Figure 1: Language modeling performance improves smoothly as we increase the model size, dataset size, and amount of compute used for training. For optimal performance all three factors must be scaled up in tandem. Empirical performance has a power-law relationship with each individual factor when not bottlenecked by the other two. (Kaplan et al 2020)\"){.invert}\n\nGPT-3 represents ~10^3^ on this chart, leaving plenty of room for further loss decreases---especially given the [uncertainty in extrapolation](https://arxiv.org/pdf/2001.08361.pdf#page=17&org=openai \"'Scaling Laws for Neural Language Models: Figure 15: Far beyond the model sizes we study empirically, we find a contradiction between our equations', Kaplan et al 2020\"):\n\n![Projecting DL power laws: still room beyond GPT-3.](/doc/ai/nn/transformer/gpt/2020-kaplan-figure15-projectingscaling.png \"Figure 15: Far beyond the model sizes we study empirically, we find a contradiction between our equations for _L(C~min~)_ and _L(D)_ due to the slow growth of data needed for compute-efficient training. The intersection marks the point before which we expect our predictions to break down. The location of this point is highly sensitive to the precise exponents from our power-law fits. (Kaplan et al 2020)\"){.invert}\n\nLo and behold, the scaling laws continue for GPT-3 models for several orders of magnitude past [Kaplan et al 2020](#kaplan-et-al-2020); from [Brown et al 2020](https://arxiv.org/pdf/2005.14165.pdf#page=11&org=openai \"GPT-3: Language Models are Few-Shot Learners: Figure 3.1: Smooth scaling of performance with compute\"):\n\n![GPT-3 continues to scale as predicted. (Note GPT-3's curve has not 'bounced', and it trained for only ~0.5 epochs, see [Table 2.2](https://arxiv.org/pdf/2005.14165.pdf#org=openai&page=9 \"Table 2.2: Datasets used to train GPT-3. 
'Weight in training mix' refers to the fraction of examples during training that are drawn from a given dataset, which we intentionally do not make proportional to the size of the dataset. As a result, when we train for 300 billion tokens, some datasets are seen up to 3.4 times during training while other datasets are seen less than once.\"))](/doc/ai/nn/transformer/gpt/2020-brown-figure31-gpt3scaling.png \"Brown et al 2020: Figure 3.1: Smooth scaling of performance with compute. Performance (measured in terms of cross-entropy validation loss) follows a power-law trend with the amount of compute used for training. The power-law behavior observed in Kaplan et al 2020 continues for an additional two orders of magnitude with only small deviations from the predicted curve. For this figure, we exclude embedding parameters from compute and parameter counts. (Brown et al 2020). Cross-validation loss extrapolation: $L(oss) = 2.57 · C(ompute in petaflop-s/days) ^ −0.048$\"){.invert}\n\nIf we see such striking gains in halving the validation loss but with so far left to go, what is left to emerge as we third or halve again?\nHow far does this go, exactly? How do we predict what emerges when?\nBueller? Bueller?\n(See also [Meena's perplexity vs human-ness chatbot ratings](/doc/ai/2020-adiwardana-meena-figure1-humanratingsvslikelihood.png \"Towards a Human-like Open-Domain Chatbot, Adiwardana et al 2020: Figure 1: Interactive SSA vs Perplexity [exp(cross-entropy loss)]. Each point is a different version of the Meena model. A regression line is plotted, for which the coefficient of determination (R^2) is 0.93, an indication of strong correlation between perplexity and the human evaluation metric (SSA). The dotted lines show the SSA performance of other chatbots, humans (86%), the best end-to-end trained Meena model (72%), and the full version of Meena which incorporates a filtering mechanism and tuned decoding (§5) and scores 79%. Mitsuku and Cleverbot scored the same on overall SSA, but Mitsuku displayed higher sensibleness, whereas Cleverbot had higher specificity. See Sections 2.5, 2.6, and 4.3 for more details on how we performed these comparisons and how to interpret the results\"){.invert}, GPT-3-written news articles' [probability of fooling humans by parameter count](/doc/ai/nn/transformer/gpt/2020-brown-figure313-humanabilitytodetectmodelgeneratednewsstories.png \"Brown et al 2020: Figure 3.13: People's ability to identify whether news articles are model-generated (measured by the ratio of correct assignments to non-neutral assignments) decreases as model size increases. Accuracy on the outputs on the deliberately-bad control model (an unconditioned GPT-3 Small model with higher output randomness) is indicated with the dashed line at the top, and the random chance (50%) is indicated with the dashed line at the bottom. Line of best fit is a power law with 95% confidence intervals.\"), and [GPT-3 model size vs Q&A](/doc/ai/nn/transformer/gpt/2020-hendrycks-figure1b-gpt3-qascaling.png \"Figure 1b: GPT-3 Few Shot Test Performance: Performance on a commonsense benchmark (HellaSwag), a linguistic understanding benchmark (SuperGLUE), and the massive multitask test. 
On previous benchmarks, smaller models start well above random chance levels and exhibit more continuous improvements with model size increases, but on our test, GPT-3 moves beyond random chance with the largest model.\"){.invert} from [Hendrycks et al 2020](https://arxiv.org/abs/2009.03300 \"Measuring Massive Multitask Language Understanding\").)\n\n## Blessings Of Scale\n\n
\n> Extrapolating the spectacular performance of GPT-3 into the future suggests that the answer to life, the universe and everything is just 4.398 trillion parameters.\n>\n> [Geoff Hinton](https://twitter.com/geoffreyhinton/status/1270814602931187715)\n
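\n\n[Taking the curve literally.]{.marginnote} Hinton's figure is tongue-in-cheek, but the arithmetic behind it is just the scaling curve taken at face value. As a minimal sketch (assuming nothing beyond the power-law fit L = 2.57 · C^−0.048^ quoted in the Brown et al 2020 figure above, with C in petaflop/s-days; the compute values below are round numbers for illustration), one can see what the fitted curve does, and does not, say:\n\n```python\n# Minimal sketch: evaluate the validation-loss power law reported for GPT-3,\n# L(C) = 2.57 * C**-0.048, with C = training compute in petaflop/s-days\n# (Brown et al 2020, Figure 3.1). Illustration only, not a forecast; of the\n# compute values below, only C = 3,640 (GPT-3 itself) is a real data point.\n\ndef predicted_loss(compute_pfs_days: float) -> float:\n    return 2.57 * compute_pfs_days ** -0.048\n\nfor c in [1, 100, 3_640, 100_000, 10_000_000]:\n    print(f'{c:>12,} petaflop/s-days -> predicted loss {predicted_loss(c):.3f}')\n\n# Each 10x of compute multiplies the predicted loss by ~0.90 (10**-0.048),\n# so the curve keeps improving smoothly without obviously saturating; but the\n# fit alone does not say where 'enough' is.\n```\n\nExtrapolating the loss is the easy part; the hard part, as emphasized above, is knowing what capabilities a given decrement in loss buys, and when they emerge.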
\n\n\n\n[We don't know how to train NNs.]{.marginnote} The *blessings of scale* is the observation that for deep learning, hard problems are easier to solve than easy problems---everything gets better as it gets larger (in contrast to the usual outcome in research, where small things are hard and large things impossible).\nThe bigger the neural net/compute/data/problem, the faster it learns, the better it learns, the stabler it learns, and so on.\nA problem we can't solve at all at small _n_ may suddenly become straightforward with millions or billions of _n_.\n\"NNs are lazy\": they can do far more than we make them do when we push them beyond easy answers & cheap shortcuts.\nThe [bitter lesson](http://www.incompleteideas.net/IncIdeas/BitterLesson.html \"'The Bitter Lesson', Sutton 2019\") is the harder and bigger, the better.\n(Besides GPT-3, one could mention recent progress in semi-supervised learning & the model-based DRL renaissance.)\n\n![AlphaGo Zero: 'just stack moar layers lol!'](/doc/reinforcement-learning/2017-12-24-gwern-meme-nnlayers-alphagozero.jpg \"Humorous description of the simplicity of the AlphaGo Zero architecture compared to AlphaGo Master\"){.float-right}\n\n[Blessings of scale: stability → generalization → meta-learning.]{.marginnote} GPT-3 is hamstrung by its training & data, but DL enjoys an unreasonably effective [blessing of dimensionality](!W)---just simply training a *big* model on a *lot* of data induces better properties like meta-learning without even the slightest bit of that architecture being built in; and in general, training on more and harder tasks creates ever more human-like performance, generalization, and robustness.\nThe GPT natural-language & programming language models, [iGPT](https://openai.com/research/image-gpt \"Image GPT: We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples. By establishing a correlation between sample quality and image classification accuracy, we show that our best generative model also contains features competitive with top convolutional nets in the unsupervised setting.\")/[Vision Transformer](https://arxiv.org/abs/2010.11929#google \"Vision Transformer (ViT): An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale\") for images (and to some degree [GPT-f](https://arxiv.org/abs/2009.03393#openai \"'GPT-f: Generative Language Modeling for Automated Theorem Proving', Polu & Sutskever 2020\")), show that simply scaling up models & datasets without any supervision produces results competitive with the best (and most complex) alternatives, using the same simple architecture, gradually passing from superficial surface correlations to more human-like brain activity ([Schrimpf et al 2020](https://www.biorxiv.org/content/10.1101/2020.06.26.174482.full \"The neural architecture of language: Integrative reverse-engineering converges on a model for predictive processing\")) and linguistic biases as data increases (eg. 
[Warstadt et al 2020](https://arxiv.org/abs/2010.05358 \"Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually)\")).\nIn fact, one may not even need complicated attention mechanisms at scale, as fully-connected networks---hard to get much simpler than them!---[work surprisingly well](/note/fc \"'Fully-Connected Neural Nets', Branwen 2021\") for many tasks.\nOne typically trains such large models with simple optimizers like Adam---because the complicated ones lose their advantages as batch sizes increase and [the simple optimizers work fine](https://arxiv.org/abs/2102.06356 \"'A Large Batch Optimizer Reality Check: Traditional, Generic Optimizers Suffice Across Batch Sizes', Nado et al 2021\") and are more memory-efficient anyway.\n[OA5](https://arxiv.org/pdf/1912.06680.pdf&org=openai#page=13 \"'Dota 2 with Large Scale Deep Reinforcement Learning: §4.3: Batch Size', Berner et al 2019\") does not just scale to, but [stabilizes at](#ppo-dota2), minibatches of millions due to [gradient noise](https://openai.com/research/how-ai-training-scales \"How AI Training Scales: We've discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks. Since complex tasks tend to have noisier gradients, increasingly large batch sizes are likely to become useful in the future, removing one potential limit to further growth of AI systems. More broadly, these results show that neural network training need not be considered a mysterious art, but can be rigorized and systematized. ['An Empirical Model of Large-Batch Training', McCandlish et al 2018]\").\nOA5-like, [BigGAN](https://arxiv.org/pdf/1809.11096.pdf#page=8&org=deepmind \"'BigGAN: Large Scale GAN Training For High Fidelity Natural Image Synthesis: 5.2 Additional Evaluation On JFT-300M', Brock et al 2018\") stabilizes at large-scale image datasets like JFT-300M & benefits from unusually large minibatches and VAEs (long an also-ran to GANs or autoregressive models in terms of sharp image generation) catch up if you make them very deep ([Child 2020](https://arxiv.org/abs/2011.10650#openai \"VDVAE: Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images\"), [Vahdat & Kautz 2020](https://arxiv.org/abs/2007.03898#nvidia \"NVAE: A Deep Hierarchical Variational Autoencoder\")); while classifier CNNs like [BiT](https://arxiv.org/abs/1912.11370#google \"'Big Transfer (BiT): Large Scale Learning of General Visual Representations for Transfer', Kolesnikov et al 2019\")^[Fun trivia: BiT [is now more accurate](https://arxiv.org/abs/2006.07159#google \"'Are we done with ImageNet?', Beyer et al 2020\") at predicting (cleaned, corrected) ImageNet labels than the original ImageNet labels are.]/[Dojolonga et al 2020](https://arxiv.org/abs/2007.08558#google \"On Robustness and Transferability of Convolutional Neural Networks\") or [ResNeXt](https://arxiv.org/abs/1907.07640 \"'Robustness properties of Facebook's ResNeXt WSL models', Orhan 2019\") or [Noisy Student](https://arxiv.org/abs/1911.04252#google \"'Self-training with Noisy Student improves ImageNet classification', Xie et al 2019\") transfer & [robustify](https://arxiv.org/abs/2007.00644 \"'Measuring Robustness to Natural Distribution Shifts in Image Classification', Taori et al 2020\") [with](https://arxiv.org/abs/2103.14586#google \"'Understanding Robustness of Transformers for Image Classification', Bhojanapalli et al 2021\") human-like errors^[One 
interesting aspect of image scaling experiments like Dojolonga et al 2020 is that even when performance is 'plateauing' on the original task & approaching label error, the transfer learning continues to improve. Apparently the internal representations, even when adequate for mere classification and so the score cannot increase more than a small percentage, become more human-like---because it's encoding [dark knowledge](https://arxiv.org/abs/1503.02531#google \"'Distilling the Knowledge in a Neural Network', Hinton et al 2015\") or more [adversarial robustness](https://arxiv.org/abs/2006.14536#google \"'Smooth Adversarial Training', Xie et al 2020\")? I've noticed with language models, the final fractions of a loss appear to make a substantial difference to generated sample quality, perhaps because it is only after all the easier modeling is finished that the lazy language model is forced to squeeze out the next bit of performance by more correctly modeling more sophisticated things like logic, objects, world-knowledge, etc.], multimodal learning produces better representations on fewer data (eg. [ViLBERT](https://arxiv.org/abs/1912.02315#facebook \"'12-in-1: Multi-Task Vision and Language Representation Learning', Lu et al 2019\")/[VideoBERT](https://arxiv.org/abs/1904.01766#google \"'VideoBERT: A Joint Model for Video and Language Representation Learning', Sun et al 2019\"), motivating [OA's interest in big multimodal models](https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ \"The messy, secretive reality behind OpenAI's bid to save the world\")), and RNNs can [predict videos](https://arxiv.org/abs/1911.01655#google \"'High Fidelity Video Prediction with Large Stochastic Recurrent Neural Networks', Villegas et al 2019\").\n[AlphaStar](/doc/reinforcement-learning/model-free/alphastar/2019-vinyals.pdf#deepmind \"'Grandmaster level in StarCraft II using multi-agent reinforcement learning', Vinyals et al 2019\") reaches human-level with hundreds of competing self-players to cover possible strategies.\nImitation learning DRL like [MetaMimic](https://arxiv.org/abs/1810.05017#deepmind \"'MetaMimic: One-Shot High-Fidelity Imitation: Training Large-Scale Deep Nets with RL', Le Paine et al 2018\") generalizes at hundreds of tasks to train a deep net.\nDisentanglement emerges in [StyleGAN](https://arxiv.org/abs/1812.04948#nvidia \"'A Style-Based Generator Architecture for Generative Adversarial Networks', Karras et al 2018\") with sufficiently deep _w_ embeddings, with enough parameters to train raw audio in the aforementioned Jukebox, or in [relational networks](https://arxiv.org/abs/1706.01427#deepmind \"'A simple neural network module for relational reasoning', Santoro et al 2017\")/[GQN](/doc/reinforcement-learning/model/2018-eslami.pdf#deepmind \"'Neural scene representation and rendering', Eslami et al 2018\")/[Transformers](https://arxiv.org/abs/2002.05867 \"'Transformers as Soft Reasoners over Language', Clark et al 2020\") with enough samples to force factorization.\n(See also [Hill et al 2019](https://arxiv.org/abs/1910.00571#deepmind \"Environmental drivers of systematicity and generalization in a situated agent\")/[Chaplot et al 2017](https://arxiv.org/abs/1706.07230 \"Gated-Attention Architectures for Task-Oriented Language Grounding\")/[Yu et al 2018](https://arxiv.org/abs/1802.01433#baidu \"Interactive Grounded Language Acquisition and Generalization in a 2D World\")/[Lake 
2019](https://arxiv.org/abs/1906.05381 \"Compositional generalization through meta sequence-to-sequence learning\")/[Interactive Agents Group 2020](https://arxiv.org/abs/2012.05672#deepmind \"Imitating Interactive Intelligence\").)\nTraining [Dactyl](https://arxiv.org/abs/1910.07113#openai \"'Solving Rubik's Cube With A Robot Hand', Akkaya et al 2019\") (or [humanoid robots](https://arxiv.org/abs/2304.13653#deepmind \"‘Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning’, Haarnoja et al 2023\")) on millions of domain randomizations induced similar implicit meta-learning where during each runtime invocation, the RNN probes its environment and encodes its understanding of robot hand control into its hidden state; and [DD-PPO](https://arxiv.org/abs/1911.00357#facebook \"'DD-PPO: Learning Near-Perfect PointGoal Navigators from 2.5 Billion Frames', Wijmans et al 2019\") outperforms classical robot planners by scaling 2 orders.\nOr in [Procgen](https://openai.com/research/procgen-benchmark \"Procgen Benchmark: We're releasing Procgen Benchmark, 16 simple-to-use procedurally-generated environments which provide a direct measure of how quickly a reinforcement learning agent learns generalizable skills\") or [CoinRun](https://distill.pub/2020/understanding-rl-vision/#diversity-hypothesis \"'Understanding RL Vision', Hilton et al 2020\"), training on hundreds of levels trains agents to solve levels individually and worsens performance on other levels, but at thousands of levels, they begin to generalize to unseen levels. (Similarly, [language model pretraining-finetuning](https://arxiv.org/abs/2101.11038#facebook \"'Muppet: Massive Multi-task Representations with Pre-Finetuning', Aghajanyan et al 2021\") overfits at small numbers of datasets but improves markedly with enough diversity.)\n[AlphaZero](/doc/reinforcement-learning/model/alphago/2018-silver.pdf#deepmind \"'A general reinforcement learning algorithm that masters chess, shogi and Go through self-play', Silver et al 2018\") demonstrated truly superhuman Go without 'delusions' just by training a bigger model on a richer signal & pro-level play without any search---and [MuZero](https://arxiv.org/abs/1911.08265#deepmind \"'MuZero: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model', Schrittwieser et al 2019\"), for that matter, demonstrated that just training an RNN end-to-end to predict a reward on enough data is enough to obsolete even AlphaZero and learn tree search implicitly (but better).\nAnd on and on.\nDM researcher [Matthew Botvinick](https://www.lesswrong.com/posts/Wnqua6eQkewL3bqsF/matt-botvinick-on-the-spontaneous-emergence-of-learning \"Matt Botvinick on the spontaneous emergence of learning algorithms\"), discussing their meta-reinforcement learning work where they were surprised to discover meta-learning emerging, and that it did so regardless of which specific architecture they used:\n\n> ...it's something that just happens. In a sense, you can't avoid this happening. If you have a system that has memory, and the function of that memory is shaped by reinforcement learning, and this system is trained on a series of interrelated tasks, this is going to happen. 
You can't stop it.\n\nPace [Breiman](/doc/ai/scaling/1995-breiman.pdf \"Reflections After Refereeing Papers for NIPS\"), **why**?\nWhy do they transfer and generalize?\nWhy do these blessings of scale exist?\nWhy do we need to train large models when small models provably exist with the same performance?\nWhy do larger models not overfit (though they [can](https://arxiv.org/abs/1611.03530#google \"'Understanding deep learning requires rethinking generalization', Zhang et al 2016\")) and generalize better than smaller models?\nWhat's up with the whole ['double descent'](https://openai.com/research/deep-double-descent \"Deep Double Descent: We show that the double descent phenomenon occurs in CNNs, ResNets, and transformers: performance first improves, then gets worse, and then improves again with increasing model size, data size, or training time. This effect is often avoided through careful regularization. While this behavior appears to be fairly universal, we don't yet fully understand why it happens, and view further study of this phenomenon as an important research direction.\") anyway?\n\nThese are all, ahem, deep questions about neural networks and heavily debated, but right now, I would suggest that the answer lies in some mix of the model compression/distillation, ['lottery ticket hypothesis'](https://ai.facebook.com/blog/understanding-the-generalization-of-lottery-tickets-in-neural-networks/ \"Understanding the generalization of 'lottery tickets' in neural networks\"), [Bayesian neural network](https://arxiv.org/abs/2002.08791 \"'Bayesian Deep Learning and a Probabilistic Perspective of Generalization', Wilson & Izmailov 2020\"), and [learned representation](https://arxiv.org/abs/2007.00810#google \"'On Linear Identifiability of Learned Representations', Roeder et al 2020\") (like [circuits](https://distill.pub/2020/circuits/zoom-in/#openai \"'Zoom In: An Introduction to Circuits: By studying the connections between neurons, we can find meaningful algorithms in the weights of neural networks', Olah et al 2020\")) literatures.\n\nBig models work because they encode a dizzyingly vast number of sub-models in an extremely [high-dimensional](https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/ \"'Neural Networks, Manifolds, and Topology', Olah 2014\") abstract space, representing countless small sub-models ([Orseau et al 2020](https://arxiv.org/abs/2006.12156#deepmind \"Logarithmic Pruning is All You Need\")) [interpolating over data](/doc/ai/scaling/2020-hasson.pdf \"'Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks', Hasson et al 2020\"), one of which is likely to solve the problem well, and so ensures the problem is soluble by the overall model.\nThey function as an ensemble: even though there are countless overfit sub-models inside the single big model, they all average out, leading to a preference for simple solutions.\nThis Occam's razor biases the model towards simple solutions which are flexible enough to gradually expand in complexity to match the data.\n\nHowever, \"neural nets are lazy\": sub-models which memorize pieces of the data, or latch onto superficial features, learn quickest and are the easiest to represent internally.\nIf the model & data & compute are not big or varied enough, the optimization, by the end of the cursory training, will have only led to a sub-model which achieves a low loss but misses important pieces of the desired solution.\n\nOn the other hand, for a model like GPT-3, it is sufficiently powerful a 
model that its sub-models can do anything from poetry to arithmetic, and it is trained on so much data that those superficial models may do well early on, but gradually fall behind more abstract models; a sub-model which memorizes some of the data is indeed much simpler than a sub-model which encodes genuine arithmetic (a NN can probably memorize tens of thousands of lookup table entries storing examples of addition in the space it would take to encode an abstract algorithm like 'addition'), but it can't possibly memorize *all* the instances of arithmetic (implicit or explicit) in GPT-3's Internet-scale dataset.\nIf a memorizing sub-model tried to do so, it would become extremely large and penalized.\nEventually, after enough examples and enough updates, there may be a phase transition ([Viering & Loog 2021](https://arxiv.org/pdf/2103.10948.pdf#page=22 \"The Shape of Learning Curves: a Review: 6. Ill-behaved learning curves: 6.1. Phase transitions\")), and the simplest 'arithmetic' model which accurately predicts the data just *is* arithmetic.\nAnd then the meta-learning, after seeing enough instances of algorithms which vary slightly within each sample, making it hard to learn each task separately, just *is* learning of more generic algorithms, yielding sub-models which achieve lower loss than the rival sub-models, which either fail to predict well or bloat unacceptably.\n(GPT-2-1.5b apparently was too small or shallow to ensemble easily over sub-models encoding meta-learning algorithms, or perhaps not trained long enough on enough data to locate the meta-learner models; GPT-3 was.)\n\nSo, the larger the model, the better, if there is enough data & compute to push it past the easy convenient sub-models and into the sub-models which express desirable traits like generalizing, factorizing perception into meaningful latent dimensions, meta-learning tasks based on descriptions, learning causal reasoning & logic, and so on.\nIf the ingredients are there, it's going to happen.\n\n## Scaling Hypothesis\n\nThe strong *scaling hypothesis* is that, once we find a scalable architecture like self-attention or convolutions, which like the brain can be applied fairly uniformly (eg. 
[\"The Brain as a Universal Learning Machine\"](https://www.lesswrong.com/posts/9Yc7Pp7szcjPgPsjf/the-brain-as-a-universal-learning-machine) or Hawkins), we can simply train ever larger NNs and ever more sophisticated behavior will emerge naturally as the easiest way to optimize for all the tasks & data.\nMore powerful NNs are 'just' scaled-up weak NNs, in much the same way that human brains look much like [scaled-up primate brains](/doc/psychology/neuroscience/2012-herculanohouzel.pdf \"'The remarkable, yet not extraordinary, human brain as a scaled-up primate brain and its associated cost', Herculano-Houzel 2012\").\n\nWhile I was highly skeptical of scaling hypothesis advocates when I first became interested in AI in 2004--2010 (back when AI was stuck in the doldrums of hopelessly narrow tools and dates like 2028 seemed impossibly far away), whose claims smacked of numerology and \"if you build it they will come\" logic (at the time, we certainly didn't have general algorithms that you could just throw compute at), in 2020, I have to admit, I was wrong and they were right.\nWe built the compute, and the algorithms *did* come, and the scaling hypothesis has only looked more and more plausible every year since 2010.\n\n# Why Does Pretraining Work?\n\nThe pretraining thesis goes something like this:\n\n![\"Figure 1: Envisioned evolution of NLP research through three different eras or curves\" (the hypothetical S-curves & progress in natural language modeling; from [Cambria & White 2014](/doc/ai/scaling/2014-cambria.pdf \"Jumping NLP Curves: A Review of Natural Language Processing Research\"))](/doc/ai/scaling/2014-cambria-figure1-hypotheticalnlpprogresscurves.png){.invert}\n\nHumans, one might say, are the [cyanobacteria of AI](!W \"Great Oxidation Event\"): we constantly emit large amounts of structured data, which implicitly rely on logic, causality, object permanence, history---all of that good stuff.\nAll of that is implicit and encoded into our writings and videos and 'data exhaust'.\nA model learning to predict must learn to understand all of that to get the best performance; as it predicts the easy things which are mere statistical pattern-matching, what's left are the hard things.\nAI critics often say that the long tail of scenarios for tasks like self-driving cars or natural language can only be solved by true generalization & reasoning; it follows then that if models solve the long tail, they must learn to generalize & reason.\n\nEarly on in training, a model learns the crudest levels: that some letters like 'e' are more frequent than others like 'z', that every 5 characters or so there is a space, and so on.\nIt goes from predicting uniformly-distributed bytes to what looks like Base-60 encoding---alphanumeric gibberish.\nAs crude as this may be, it's enough to make quite a bit of absolute progress: a random predictor needs 8 bits to 'predict' a byte/character, but just by at least matching letter and space frequencies, it can almost halve its error to around 5 bits.^[The numbers here are not exact and are for illustration; because BPEs don't correspond to anything intuitive, I am going to borrow from my observations watching char-RNNs, and talk about the loss per character instead of BPE.]\nBecause it is learning so much from every character, and because the learned frequencies are simple, it can happen so fast that if one is not logging samples frequently, one might not even observe the improvement.\n\nAs training progresses, the task becomes more difficult. 
Now it begins to learn what words actually exist and do not exist. It doesn't know anything about meaning, but at least now when it's asked to predict the second half of a word, it can actually do that to some degree, saving it a few more bits.\nThis takes a while because any specific instance will show up only occasionally: a word may not appear in a dozen samples, and there are many thousands of words to learn.\nWith some more work, it has learned that punctuation, pluralization, possessives are all things that exist.\nPut that together, and it may have progressed again, all the way down to 3--4 bits error per character!\n(While the progress is gratifyingly fast, it's still all gibberish, though, make no mistake: a sample may be spelled correctly, but it doesn't make even a bit of sense.)\n\nBut once a model has learned a good English vocabulary and correct formatting/spelling, what's next? There's not much juice left in predicting within-words.\nThe next thing is picking up associations among words. What words tend to come first? What words 'cluster' and are often used nearby each other?\nNautical terms tend to get used a lot with each other in sea stories, and likewise Bible passages, or American history Wikipedia articles, and so on.\nIf the word \"Jefferson\" is the last word, then \"Washington\" may not be far away, and it should hedge its bets on predicting that 'W' is the next character, and then if it shows up, go all-in on \"ashington\".\nSuch bag-of-words approaches still predict badly, but now we're down to perhaps <3 bits per character.\n\nWhat next? Does it stop there? Not if there is enough data and the earlier stuff like learning English vocab doesn't hem the model in by using up its learning ability.\nGradually, other words like \"President\" or \"general\" or \"after\" begin to show the model subtle correlations: \"Jefferson was President after...\"\nWith many such passages, the word \"after\" begins to be useful in predicting the next word, and then that use can be broadened.\n\nBy this point, the loss is perhaps 2 bits: every additional 0.1 bit decrease comes at a steeper cost and takes more time.\nHowever, now the sentences have started to make sense.\nA sentence like \"Jefferson was President after Washington\" does in fact mean something (and if occasionally we sample \"Washington was President after Jefferson\", well, what do you expect from such an un-converged model).\nJarring errors will immediately jostle us out of any illusion about the model's understanding, and so training continues.\n(Around here, Markov chain & _n_-gram models start to fall behind; they can memorize increasingly large chunks of the training corpus, but they can't solve increasingly critical syntactic tasks like balancing parentheses or quotes, much less start to ascend from syntax to semantics.)\n\nNow training is hard. 
Even subtler aspects of language must be modeled, such as keeping pronouns consistent.\nThis is hard in part because the model's errors are becoming rare, and because the relevant pieces of text are increasingly distant and 'long-range'.\nAs it makes progress, the absolute size of errors shrinks dramatically.\nConsider the case of associating names with gender pronouns: the difference between \"Janelle ate some ice cream, because he likes sweet things like ice cream\" and \"Janelle ate some ice cream, because she likes sweet things like ice cream\" is one no human could fail to notice, and yet, it is a difference of a single letter.\nIf we compared two models, one of which didn't understand gender pronouns at all and guessed 'he'/'she' purely at random, and one which understood them perfectly and always guessed 'she', the second model would attain an average error lower by barely <0.02 bits per character!\n\nNevertheless, as training continues, these problems and more, like imitating genres, get solved, and eventually at a loss of 1--2 (where a small char-RNN might converge on a small corpus like Shakespeare or some Project Gutenberg ebooks), we will finally get samples that sound human---at least, for a few sentences.\nThese final samples may convince us briefly, but, aside from issues like repetition loops, even with good samples, the errors accumulate: a sample will state that someone is \"alive\" and then 10 sentences later, use the word \"dead\", or it will digress into an irrelevant argument instead of the expected next argument, or someone will do something physically improbable, or it may just continue for a while without seeming to *get* anywhere.\n\nAll of these errors are far less than <0.02 bits per character; we are now talking not hundredths of bits per character but less than ten-thousandths.\n\nThe pretraining thesis argues that this can go even further: we can compare this performance directly with humans doing the same objective task, who can achieve closer to [0.7 bits per character](/difference#efficient-natural-languages).\nWhat is in that missing >0.4?\n\n![\"Yeah, but there's more to being smart than knowing compression schemes!\" \"No there's not!\" \"Shoot---he knows the secret!!\"](/doc/cs/2004-ryannorth-dinosaurcomics-391.png \"https://qwantz.com/index.php?comic=354\"){.invert}\n\nWell---*everything*! 
Everything that the model misses.\nWhile just babbling random words was good enough at the beginning, at the end, it needs to be able to reason its way through the most difficult textual scenarios requiring causality or commonsense reasoning.\nEvery error where the model predicts that ice cream put in a freezer will \"melt\" rather than \"freeze\", every case where the model can't keep straight whether a person is alive or dead, every time that the model chooses a word that doesn't help build somehow towards the ultimate conclusion of an 'essay', every time that it lacks the theory of mind to compress novel scenes describing the Machiavellian scheming of a dozen individuals at dinner jockeying for power as they talk, every use of logic or abstraction or instructions or Q&A where the model is befuddled and needs more bits to cover up for its mistake where a human would think, understand, and predict.\nFor a language model, the truth is that which keeps on predicting well---because truth is one and error many.\nEach of these cognitive breakthroughs allows ever so slightly better prediction of a few relevant texts; nothing less than true understanding will suffice for ideal prediction.\n\nIf we trained a model which reached that loss of <0.7, which could predict text indistinguishable from a human, whether in a dialogue or quizzed about ice cream or being tested on SAT analogies or tutored in mathematics, if for every string the model did just as good a job of predicting the next character as you could do, how could we say that it doesn't *truly* understand everything?\n(If nothing else, we could, by definition, replace humans in any kind of text-writing job!)\n\n[The last bits are deepest.]{.marginnote} The implication here is that the final few bits are the most valuable bits, which require the most of what we think of as intelligence.\nA helpful analogy here might be our actions: for the most part, all humans execute actions equally well.\nWe all pick up a tea mug without dropping it, and can lift our legs to walk down thousands of steps without falling even once.\nFor everyday actions (the sort which make up most of a corpus), anybody, of any intelligence, can get enough practice & feedback to do them quite well, learning individual algorithms to solve each class of problems extremely well, in isolation.^[If you see thousands of images labeled 'dog' and thousands more labeled 'cat', you can simply learn separate dog & cat classifiers without bothering to understand their shared aspects like being domesticated quadruped mammal predators. 
This won't be useful if you are then asked to classify 'ferret' images, but you weren't asked to, so that's not your problem, since you can just learn yet another separate classifier for ferrets if you then get a lot of ferret images.]\nMeanwhile for rare problems, there may be too few instances to do any better than memorize the answer.\nIn the middle of the spectrum are problems which are similar but not *too* similar to other problems; these are the sorts of problem which reward flexible meta-learning and generalization, and many intermediate problems may be necessary to [elicit those capabilities](https://arxiv.org/abs/2205.05055#deepmind \"‘Data Distributional Properties Drive Emergent Few-Shot Learning in Transformers’, Chan et al 2022\") (\"neural nets are lazy\").\n\nWhere individuals differ is when they start running into the long tail of novel choices, rare choices, choices that take seconds but unfold over a lifetime, choices where we will never get any feedback (like after our death).\nOne only has to make a single bad decision, out of a lifetime of millions of discrete decisions, to wind up in jail or dead.\nA small absolute average improvement in decision quality, if it is in *those* decisions, may be far more important than its quantity indicates, and give us some intuition for why those last bits are the hardest/deepest.\n(Why do humans have such large brains, when animals like chimpanzees do so many ordinary activities seemingly as well with a fraction of the expense? Why is language worthwhile? Perhaps because of considerations like these. We may be at our most human while filling out the paperwork for life insurance.)\n\n[Reasons for doubt.]{.marginnote} The pretraining thesis, while logically impeccable---how is a model supposed to solve all possible trick questions without understanding, just *guessing*?---never struck me as convincing, an argument admitting neither confutation nor conviction.\nIt feels too much like a magic trick: \"here's some information theory, here's a human benchmark, here's how we can encode all tasks as a sequence prediction problem, hey presto---Intelligence!\"\nThere are lots of algorithms which are Turing-complete or 'universal' in some sense; there are lots of algorithms like AIXI which solve AI in some theoretical sense (Schmidhuber & company have many of these cute algorithms such as 'the fastest possible algorithm for all problems', with the minor catch of some constant factors which require computers bigger than the universe).\n\nWhy think pretraining or sequence modeling is not another one of them?\nSure, *if* the model got a low enough loss, it'd have to be intelligent, but how could you prove that would happen in practice?\n(Training char-RNNs was fun, but they hadn't exactly revolutionized deep learning.)\nIt might require more text than exists, countless petabytes of data for all of those subtle factors like logical reasoning to represent enough training signal, amidst all the noise and distractors, to train a model.\nOr maybe your models are too small to do more than absorb the simple surface-level signals, and you would have to scale them 100 orders of magnitude for it to work, because the scaling curves didn't cooperate.\nOr maybe your models are fundamentally broken, and stuff like abstraction requires an entirely different architecture to work at all, and whatever you do, your current models will saturate at poor performance.\nOr it'll train, but it'll spend all its time trying to improve the surface-level modeling, absorbing 
more and more literal data and facts without ever ascending to the higher planes of cognition as planned.\nOr...\n\n
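To make the bits-per-character bookkeeping above concrete, here is a minimal Python sketch (illustrative only: the exact numbers depend on the corpus, and `corpus.txt` stands in for any plain-text English sample):\n\n```python\nimport math\nimport re\nfrom collections import Counter\n\ndef bpc_uniform(n_symbols=256):\n    # Loss of a predictor that assigns equal probability to every byte: log2(256) = 8 bits.\n    return math.log2(n_symbols)\n\ndef bpc_unigram(text):\n    # Average loss of a predictor that has learned only character frequencies\n    # (the 'letters and spaces' stage): the empirical unigram entropy.\n    counts, n = Counter(text), len(text)\n    return -sum((c / n) * math.log2(c / n) for c in counts.values())\n\ndef bpc_pronoun_bound(text):\n    # Upper bound on the bits/char lost by guessing 'he' vs 'she' purely at random:\n    # 1 bit per gendered-pronoun occurrence, averaged over all characters.\n    return len(re.findall(r'\\b(?:he|she)\\b', text, flags=re.I)) / len(text)\n\nsample = open('corpus.txt', encoding='utf-8').read()   # any plain-text English sample\nprint(bpc_uniform())               # 8.0: knowing nothing\nprint(bpc_unigram(sample))         # typically ~4-5 bits: knowing only character frequencies\nprint(bpc_pronoun_bound(sample))   # typically well under 0.02 bits\n```\n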
\n> 'The possibilities of developing an atomic weapon and the desirability of doing it secretly were discussed at a Princeton University conference in which I participated in March 1939...[Bohr](!W \"Niels Bohr\") said this rare variety could not be separated from common uranium except by turning the country into a gigantic factory. Bohr was worried that this could be done and that an atomic bomb could be developed---but he hoped that neither could be accomplished. Years later, when Bohr came to Los Alamos, I was prepared to say, \"You see . . .\" But before I could open my mouth, he said: **\"You see, I told you it couldn't be done without turning the whole country into a factory. You have done just that.\"**'\n>\n> [Edward Teller](!W)^[pg210--211, \"The Quiet Enemy\", [_The Legacy of Hiroshima_](/doc/radiance/1962-teller-thelegacyofhiroshima.pdf), Teller 1962.]\n
\n\nBut apparently, it would've worked fine.\nEven RNNs probably would've worked---Transformers are nice, but they seem mostly to be about efficiency.^[Another way of interpreting the various papers about how Transformers are actually like RNNs or are [actually Hopfield networks](https://arxiv.org/abs/2008.02217 \"'Hopfield Networks is All You Need', Ramsauer et al 2020\") is to take that as indicating that what is important about them is not any inherent new capability compared to older architectures, but some lower-level aspect like being more efficiently trainable on contemporary hardware.]\n(Training large RNNs is much more expensive, and doing BPTT over multiple nodes is much harder engineering-wise.)\nIt just required more compute & data than anyone was willing to risk on it until a few true-believers were able to get their hands on a few million dollars of compute.\n\n#. **Q:** Did anyone predict, quantitatively, that this would happen where it did?\n\n **A:** Not that I know of.\n\n#. **Q:** What would future scaled-up models learn?\n\n GPT-2-1.5b had a cross-entropy WebText validation loss of ~3.3 (based on the perplexity of ~10 in [Figure 4](/doc/ai/nn/transformer/gpt/2019-radford-figure4-gpt2validationloss.png \"Figure 4: The performance of LMs trained on WebText as a function of model size (from https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf#page=9)\"){.invert}, and log~2~(10) = 3.32). GPT-3 halved that loss to ~1.73 judging from [Brown et al 2020](/doc/ai/nn/transformer/gpt/2020-brown-figure31-gpt3scaling.png \"Figure 3.1: Smooth scaling of performance with compute. Performance (measured in terms of cross-entropy validation loss) follows a power-law trend with the amount of compute used for training. The power-law behavior observed in Kaplan et al 2020 continues for an additional two orders of magnitude with only small deviations from the predicted curve. For this figure, we exclude embedding parameters from compute and parameter counts. (Brown et al 2020). Cross-validation loss extrapolation: $L(oss) = 2.57 · C(ompute in petaflop-s/days) ^ −0.048$\"){.invert} and using the scaling formula (2.57 × (3.64 × 10^3^)^\\−0.048^). For a hypothetical GPT-4, if the scaling curve continues for another 3 orders or so of compute (100--1000×) before crossing over and hitting harder diminishing returns, the cross-entropy loss will drop to ~1.24 (2.57 × (3.64 × (10^3^ × 10^3^))^\\−0.048^).\n\n If GPT-3 gained so much meta-learning and world knowledge by dropping its absolute loss ~50% when starting from GPT-2's level, what capabilities would another ~30% improvement over GPT-3 gain? (Cutting the loss that much would still not reach human-level, as far as I can tell.[^human-perplexity]) What would a drop to ≤1, perhaps using wider context windows or recurrency, gain?\n\n **A:** I don't know.\n\n#. **Q:** Does anyone?\n\n **A:** Not that I know of.^[As of December 2020, half a year later, almost no researcher has been willing to go on record as saying what specific capabilities they predict future 1t, 10t, or 100t models will have or not have, and at what size which missing capabilities will emerge---just as no one is on record successfully predicting GPT-2 or GPT-3's specific capabilities.]\n\n[^human-perplexity]: How do these absolute prediction performances compare to humans? It's hard to say. 
The only available benchmarks for perplexity for humans/GPT-2/GPT-3 appear to be WebText, [Penn Tree Bank](/doc/cs/algorithm/1993-marcus.pdf \"'Building a Large Annotated Corpus of English: The Penn Treebank', Marcus et al 1993\") (PTB; based on the [Brown Corpus](!W)), [1 Billion Word](https://arxiv.org/abs/1312.3005 \"'One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling', Chelba et al 2013\") (1BW), and [LAMBADA](https://arxiv.org/abs/1606.06031 \"'The LAMBADA dataset: Word prediction requiring a broad discourse context', Paperno et al 2016\"). But coverage is spotty.\n\n I found no human benchmarks for WebText or Penn Tree Bank, so I can't compare the human vs GPT-2/GPT-3 perplexities ([GPT-2 PTB](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf#page=5 \"Language Models are Unsupervised Multitask Learners: Table 3.Zero-shot results on many datasets. No training or fine-tuning was performed for any of these results. PTB and WikiText-2 results are from (Gong et al 2018). CBT results are from (Bajgar et al 2016). LAMBADA accuracy result is from (Hoang et al 2018) and LAMBADA perplexity result is from (Grave et al 2016). Other results are from (Dai et al 2019).\"): 35.7; [GPT-3 PTB](https://arxiv.org/pdf/2005.14165.pdf#page=11&org=openai): 20.5).\n\n [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf#page=5) was benchmarked at 43 perplexity on the 1 Billion Word (1BW) benchmark vs a (highly extrapolated) [human perplexity of 12](/doc/ai/scaling/2017-shen.pdf \"'Estimation of gap between current language models and human performance', Shen et al 2017\") (which interestingly extrapolates, using 2012 LSTM RNNs, that \"10 to 20 more years of research before human performance is reached\"), but that may be an unfair benchmark (\"Our model is still significantly worse than prior work on the One Billion Word Benchmark ([Chelba et al 2013](https://arxiv.org/abs/1312.3005 \"'One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling', Chelba et al 2013\")). This is likely due to a combination of it being both the largest dataset and having some of the most destructive pre-processing---1BW's sentence level shuffling removes all long-range structure.\") and 1BW was dropped from the GPT-3 evaluation due to data contamination (\"We omit the 4 Wikipedia-related tasks in that work because they are entirely contained in our training data, and we also omit the one-billion word benchmark due to a high fraction of the dataset being contained in our training set.\").\n\n LAMBADA was benchmarked at a [GPT-2 perplexity](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf#page=5) of 8.6, and a [GPT-3 perplexity](https://arxiv.org/pdf/2005.14165.pdf&org=openai#page=12) of 3.0 (zero-shot) / 1.92 (few-shot). [OA claims](https://openai.com/research/better-language-models \"Better Language Models and Their Implications: We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.\") in their GPT-2 blog post (but not the paper) that human perplexity is 1--2, but provides no sources and I couldn't find any. 
(The authors might be guessing based on how LAMBADA was constructed: examples were filtered by whether two independent human raters provided the same right answer, which lower bounds how good humans must be at predicting the answer.)\n\n So overall, it looks like the best guess is that GPT-3 continues to have somewhere around twice the absolute error of a human. This implies it will take a large (yet, far from impossible) amount of compute to fully close the remaining gap with the current scaling laws. If we irresponsibly extrapolate out the WebText scaling curve further, assume GPT-3 has twice the error of a human at its current WebText perplexity of 1.73 (and so humans are ~0.86), then we need 2.57 · (3.64 · (10^3^ · _x_))^\\-0.048^ = 0.86, where _x_ = 2.2e6 or 2,200,000× the compute of GPT-3. (This would roughly equal the cost to the USA of invading Iraq.)\n\n When is that feasible?\n\n If we imagine that [peak AI compute usage doubles every 3.4 months](https://openai.com/research/ai-and-compute \"AI and Compute: We’re releasing an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time (by comparison, Moore’s Law had a 2-year doubling period). Since 2012, this metric has grown by more than 300,000× (a 2-year doubling period would yield only a 7× increase). Improvements in compute have been a key component of AI progress, so as long as this trend continues, it’s worth preparing for the implications of systems far outside today’s capabilities.\"), then 2.2e6 would be 22 doublings away---or 6.3 years, in 2027. Most people believe that compute trend must break down soon, and that sort of prediction is a good reason why!\n\n Going the other direction, Hernandez & Brown 2020's estimate is that, net of hardware & algorithmic progress, the cost of a fixed level of performance halves every 16 months; so if GPT-3 cost ~[$5]($2020)m in early 2020, then it'll cost [$2.5]($2020)m around mid-2021, and so on. Similarly, a GPT-human requiring 2.2e6× more compute would presumably cost on the order of [$10]($2020) trillion in 2020, but after 14 halvings (18 years) would cost [$1]($2020)b in 2038.\n\n# Prospects\n\n
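Before turning to prospects, the loss & compute extrapolations in the preceding FAQ and its footnote can be rechecked in a few lines of Python (a sketch using only the constants quoted there; the ~0.86 'human' target is the same guess as above, so treat the outputs as illustrative rather than as a forecast):\n\n```python\nimport math\n\n# Power law quoted above: L(C) = 2.57 * C^-0.048, with C in petaflop/s-days (pfs-days).\ndef loss(compute_pfs_days):\n    return 2.57 * compute_pfs_days ** -0.048\n\nprint(math.log2(10))            # ~3.32 bits: GPT-2, from its WebText perplexity of ~10\ngpt3_compute = 3.64e3           # pfs-days\nprint(loss(gpt3_compute))       # ~1.73: GPT-3\nprint(loss(gpt3_compute * 1e3)) # ~1.24: a hypothetical 1000x-compute 'GPT-4'\n\n# Compute multiple needed to reach the guessed 'human' loss of ~0.86 (half of GPT-3's ~1.73):\nx = (0.86 / 2.57) ** (1 / -0.048) / gpt3_compute\nprint(x)                        # ~2.2e6 times GPT-3's training compute\n\ndoublings = math.ceil(math.log2(x))             # ~22 doublings of peak compute usage\nprint(doublings * 3.4 / 12)                     # ~6.2 years at a 3.4-month doubling time (cf. '6.3 years' above)\nhalvings = math.ceil(math.log2(x * 5e6 / 1e9))  # ~$5m * x ~= $11t total, halved down to ~$1b\nprint(halvings * 16 / 12)                       # ~18.7 years at a 16-month cost-halving time (cf. '18 years' above)\n```\n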
\n> In the problem of decoding, the most important information which we can possess is the knowledge that the message which we are reading is not gibberish...In a similar way, when we consider a problem of nature such as that of atomic reactions and atomic explosives, the largest single item of information which we can make public is that they exist. Once a scientist attacks a problem which he knows to have an answer, his entire attitude is changed. He is already some 50% of his way toward that answer...**the one secret concerning the atomic bomb which might have been kept and which was given to the public and to all potential enemies without the least inhibition, was that of the possibility of its construction.** Take a problem of this importance and assure the scientific world that it has an answer; then both the intellectual ability of the scientists and the existing laboratory facilities are so widely distributed that the quasi-independent realization of the task will be a matter of merely a few years anywhere in the world.\n>\n> [Norbert Wiener](!W), pg124--125, _[The Human Use of Human Beings](!W)_ (emphasis added)\n
\n\n
\n> People who work in machine learning simply didn't think that neural networks could do much. People didn't believe large neural networks could be trained...The ideas were all there, the thing that was missing was a lot of supervised data and a lot of compute. Once you have [those two], then there is a third thing that is needed---and that is *conviction*. Conviction that if you take the right stuff, which already exists, and apply and mix it with a lot of data and a lot of compute, that it will in fact work. And so that was the missing piece.\n>\n> [Ilya Sutskever](https://www.youtube.com/13CZPWmke6A?t=950#org=openai \"Ilya Sutskever: Deep Learning | Lex Fridman Podcast #94\")[^Zaremba]\n
\n\n[^Zaremba]: See also [Sutskever's DRL talk](https://www.youtube.com/watch?v=w3ues-NayAs?t=712#openai \"If you want to solve a hard problem in reinforcement learning, you just scale. It's just gonna work just like supervised learning. it's the same, the same story exactly. It was kind of hard to believe that supervised learning can do all those things, but it's not just vision, it's everything and the same thing seems to hold for reinforcement learning provided you have a lot of experience.\"), and [Wojciech Zaremba's](!W \"Wojciech Zaremba\") [comments about OA5](https://www.youtube.com/watch?v=429QC4Yl-mA&t=1157s \"What could make AI conscious? with Wojciech Zaremba, co-founder of OpenAI (2021-06-02)\") ([transcript](https://wandb.ai/wandb_fc/gradient-dissent/reports/What-could-make-AI-conscious-with-Wojciech-Zaremba-co-founder-of-OpenAI--Vmlldzo3NDk3MDI)):\n\n > `Lukas`: \"How much of the work then on Dota was, you felt, like fundamentally moving ML forward and how much of it was Dota-specific or can you even pull those apart?\"\n >\n > `Wojciech`: \"I think there was a decent amount of Dota-specific work. And then I think it was more than optimal, but also simultaneously hard. So I remember at the beginning of Dota project, it was actually unclear how to approach it.\n >\n > People are saying that contemporary reinforcement learning will have no chance in solving this problem. And people looked into off-policy matters, on-policy matters, [evolutionary strategies](https://arxiv.org/abs/1703.03864#openai \"'Evolution Strategies as a Scalable Alternative to Reinforcement Learning', Salimans et al 2017\"). The thing that became quite surprising is that [methods that already exist](https://arxiv.org/abs/1707.06347#openai \"'PPO: Proximal Policy Optimization Algorithms', Schulman et al 2017\"), with appropriate scale work extremely well. So that was a big surprise. And I remember some people even before Dota time at OpenAI, saying that maybe reinforcement learning is a dead end. 
And all of a sudden it's a very different story now.\"\n >\n > `Lukas`: \"For sure.\"\n\nWhat can we expect from future DL work?\nWill GPT-3 kickstart an arms race where soon we will be discussing, blasé, what would seem now like ludicrously farfetched schemes like a bidirectional multimodal Transformer 100× the size trained on 100× the data (video/text/PDFs-as-images/photo/robotics) with supplementary supervised learning as the backbone of a MuZero-like learning+planning DRL agent running on thousands of tasks (such as coding) simultaneously?\n\nThe existence of [the hardware overhang](https://www.lesswrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang \"'Are we in an AI overhang?', Jones 2020\") implies that the limiting factor here is less hardware than human: will any organization treat GPT-3 as a Sputnik moment and invest aggressively in scaling programs?\nIs there a GPT-4-equivalent brewing away inside DeepMind or Google Brain's TPU pods now?\nThey aren't stupid, they have the hardware, they have the budgets, they have the people.\n\nBut I think they lack a vision.\nAs far as I can tell: they do not have any such thing, because Google Brain & DeepMind do not believe in the scaling hypothesis the way that Sutskever, Amodei and others at OA do.\nJust read through machine learning Twitter to see the disdain for the scaling hypothesis.\n(A quarter year on from GPT-3 and counting, can you name a single dense model as large as the 17b Turing-NLG---never mind larger than GPT-3?)\n\nGoogle Brain is entirely too practical and short-term focused to dabble in such esoteric & expensive speculation, although Quoc V. Le's group occasionally surprises you.\nThey'll dabble in [mixture-of-expert models](/doc/ai/scaling/mixture-of-experts/index) like [GShard](https://arxiv.org/abs/2006.16668#google \"'GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding', Lepikhin et al 2020\"), but mostly because they expect to be able to deploy it or something like it to production in Google Translate.^[Production services, especially *free* production services, usually lag long after the unpublished SOTA inside the most cutting-edge lab. The second is the only thing that matters for predicting AI progress or AI risk, of course, but people will insist on measuring AI progress by bizarre metrics like what an arbitrary free service could do last year. As a rule of thumb, assume that: if you are using a free service with no login, the quality is *at least* 2 years behind SOTA; free with a login, >1.5 years; paid service, >1 year; & recently-released research paper, >6 months.]\n\nDeepMind^[Particularly [Demis Hassabis](!W); I'm not sure about [Shane Legg's](!W \"Shane Legg\") [current views](http://www.vetta.org/2009/12/tick-tock-tick-tock-bing/ \"Tick, tock, tick, tock... BING\"), although given the accuracy of his [2009 predictions](http://www.vetta.org/2009/12/the-teenies/ \"'The Teenies', Shane Legg 2009-12-28\") while founding DeepMind & his [2018 comments](https://web.archive.org/web/20210426084422/https://www.stuff.co.nz/technology/103500435/google-deepmind-founder-and-leader-in-artificial-intelligence-returns-to-hamilton \"Google DeepMind founder and leader in artificial intelligence returns to Hamilton\"), he probably hasn't much changed his views that AI will be empowered by the (realized) exponential compute gains or his [AGI forecast of ~2028](http://www.vetta.org/2010/12/goodbye-2010/ \"'Goodbye 2010', Shane Legg 2010-12-10\"). 
(This is consistent with the latest [Metaculus](https://www.metaculus.com/questions/questions/3479/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of/ \"When will the first Artificial General Intelligence system be devised, tested, and publicly known of?\") [forecasts](https://www.metaculus.com/questions/questions/1394/will-ai-progress-surprise-us/ \"Will AI progress surprise us?\").)] holds what we might call the \"weak scaling hypothesis\": they believe that AGI will require us to \"find the right algorithms\" effectively replicating a mammalian brain module by module, and that while these modules will be extremely large & expensive by contemporary standards (which is why compute is important, to give us \"a more powerful tool with which to hunt for the right algorithms\"), they still need to be invented & finetuned piece by piece, with little risk or surprise until the final assembly.\nEach piece, however, itself can scale: there's no magical intelligence gland or quantum woo which creates a bright line between humans and, say, chimpanzees or rodents.\n(As much as we humans extravagantly admire our own capabilities like language or logic, those are relatively minor flourishes on the basic brain---each organism solves the same basic problems, like exploration, long-term memory, learning world-models, associating rewards with specific actions, meta-learning, etc.)\nAs such, once you have a rat-level AGI, a human-level AGI is just more so.\n(And rats are a lot easier to experiment on.)\nThat is how you get DM contraptions like [Agent57](https://www.deepmind.com/blog/agent57-outperforming-the-human-atari-benchmark \"'Agent57: Outperforming the Atari Human Benchmark', Badia et al 2020\") which throw the kitchen sink at the wall to see what sticks, and why they place such emphasis on neuroscience as inspiration and cross-fertilization for reverse-engineering the brain.\n(See also Sam Altman's [podcast interview comments](https://audio.hbr.org/exponential-view/20201006152648-S5E01_HowGPT-3IsShapingOurAIFuture.mp3?listeningSessionID=0CD_382_124__cc0756698c5c760194dea321b07a9b55454e0fe1#t=2205 \"'How GPT-3 Is Shaping Our AI Future' with Sam Altman/Azeem Azhar (The Exponential View), Wednesday 7 October 2020\") on OA's advantage vs unnamed rivals with more compute is because the lack of compute makes them stay \"small and focused\"---\"for sure\" like a startup approach.)\nWhen someone seems to have come up with a scalable architecture for cracking a hard problem, like AlphaZero or AlphaStar, they are willing to pour on the gas to make it scale, but otherwise, incremental refinement on ALE and then [DMLab-30](https://arxiv.org/abs/1612.03801#deepmind \"'DeepMind Lab', Beattie et al 2016\") is the game plan.\nThey have been biting off and chewing pieces of the brain for a decade, and it'll probably take another decade or two of steady chewing if all goes well.\nBecause they have locked up so much talent and have so much proprietary code and believe all of that is a major moat to any competitor trying to replicate the complicated brain, they are fairly easygoing.\nYou will not see DM 'bet the company' on any moonshot; Google's cashflow isn't going anywhere (and [DM's budget](/newsletter/2020/06#deepmind-budget \"‘June 2020 News § Companies House’, Branwen 2019\")), and slow and steady wins the race.\n\nGoing beyond that, most other research labs like Tesla or FAIR are irrelevant and uninterested.\nChinese AI companies are a question mark: past the 
language barrier, I seem to discern interest in AGI & little of the reflexive Western opposition, and companies like Baidu occasionally release important research (such as the early scaling paper [Hestness et al 2017](https://arxiv.org/abs/1712.00409#baidu \"Deep Learning Scaling is Predictable, Empirically\")), but overall, Chinese AI may be overestimated, and they seem to suffer from a kind of Dutch disease---funding for surveillance technology, and for narrow e-commerce niches, is so plentiful that other areas are neglected.\n\nOA, lacking anything like DM's long-term funding from Google or its enormous headcount, is making a startup-like bet that they know an important truth which is a secret: \"the scaling hypothesis is true!\"\nSo, simple DRL algorithms like PPO on top of large simple architectures like RNNs or Transformers can emerge, exploiting the blessings of scale, and meta-learn their way to powerful capabilities, enabling further funding for still more compute & scaling, in a virtuous cycle.\nThis is why OA had to revise its corporate form: lacking any enormous endowment or extremely deep-pocketed patron like Google, where does it get the money to scale (or hire machine learning engineer/researchers who can command salaries in the millions)?\nOA has to *earn* the necessary money, so in a move like Mozilla Foundation owning Mozilla Corporation (to sell Firefox search engine placement), or the Hershey orphanage owning Hershey Chocolate or the Girl Scouts licensing their cookies, OpenAI switched from a pure nonprofit funded by donations to a nonprofit which owns a for-profit subsidiary/startup, \"OpenAI LP\", which can take investments and engage in for-profit activities.\nOA LP, while controlled by OA, can then shoot for the moon.\nAnd if OA is wrong to trust in the [God of Straight Lines On Graphs](https://slatestarcodex.com/2018/11/26/is-science-slowing-down-2/ \"Is Science Slowing Down?\"), well, they never could compete with DM directly using DM's favored approach, and were always going to be an also-ran footnote, so they have no regret.\n\nWhile all of this hypothetically can be replicated *relatively* easily (never underestimate the amount of tweaking and special sauce it takes) by competitors if they wished (the necessary amounts of compute budgets are still trivial in terms of Big Science or other investments like AlphaGo or AlphaStar or Waymo, after all), said competitors lack the very most important thing, which no amount of money or GPUs can ever cure: the courage of their convictions.\nThey are too hidebound and deeply philosophically wrong to ever admit fault and try to overtake OA until it's too late.\nHow can we talk seriously about any kind of military Manhattan Project when the US military [doesn't even let its developers use Tensorflow or PyTorch](https://warontherocks.com/2020/10/trust-algorithms-the-army-doesnt-even-trust-its-own-ai-developers/ \"Trust Algorithms? 
The Army Doesn’t Even Trust Its Own AI Developers\"), or about government projects in the shadow of coronavirus?\nThis might seem absurd (surely the Bitter Lesson/scaling hypothesis have now earned enough prior probability to be taken seriously and receive major research investments to test how far they can go, especially given how important the implications are), but look at the repeated criticism of OA *every time* they release a new example of the scaling hypothesis, from GPT-1 to Dactyl to OA5 to GPT-2 to iGPT to GPT-3...\nTo paraphrase St Augustine, most people's reaction to the Bitter Lesson or scaling hypothesis is \"grant me scale & compute---but not yet\".^[When faced with the choice between having to admit all their fancy hard work is a dead-end, swallow the bitter lesson, and start budgeting tens of millions of compute, or instead writing a disdainful tweet explaining how, \"*actually*, GPT-3 shows that scaling is a dead end, it's an environmental catastrophe, and it's just imitation intelligence anyway\"---most people will get busy on the tweet!]\n\nA critical indicator will be whether organizations beyond 'the usual suspects' (Microsoft [ZeRO-2](https://www.microsoft.com/en-us/research/blog/zero-2-deepspeed-shattering-barriers-of-deep-learning-speed-scale/ \"'ZeRO-2 & DeepSpeed: Shattering barriers of deep learning speed & scale', Team 2020\") team has reached [1t-scale training](https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/ \"'DeepSpeed: Extreme-scale model training for everyone', Team et al 2020\"), but there is also Nvidia, Salesforce, Allen, Google DM/GB, Connor/EleutherAI, Facebook FAIR) start participating or if they continue to dismiss scaling.\nAt least as of 2020-10-26, 152 days later, no model has come near GPT-3, and indeed, no model has even exceeded Turing-NLG's 17b.^[A mixture-of-expert model like GShard or an embedding like DynamicEmbedding is not comparable to 'dense' models like GPT-3, as it's always been cheap & easy to train models with billions of 'parameters' in some sense, like extremely large embeddings; however, these parameters do little, and are more like a few hundred shallow models glued back-to-back. 
They probably do not learn the same interesting things that a dense model would with the same nominal parameter count.]\n\n# Critiquing The Critics\n\n\n\n[Keeping track.]{.marginnote} GPT-3 in 2020 makes as good a point as any to take a look back on the past decade.\nIt's remarkable to reflect that someone who started a PhD because they were excited by these new \"ResNets\" would still not have finished it by now---that is how recent even resnets are, never mind Transformers, and how rapid the pace of progress is.\nIn 2010, one could easily fit everyone in the world who genuinely believed in deep learning into a moderate-sized conference room (assisted slightly by the fact that 3 of them were busy founding [DeepMind](https://en.wikipedia.org/wiki/DeepMind)).\nSomeone interested in machine learning in 2010 *might* have read about some interesting stuff from weirdo diehard connectionists in recognizing hand-written digits using all of 1--2 million parameters, or some modest neural tweaks to standard voice-recognition hidden Markov models.\nIn 2010, who would have predicted that over the next 10 years, deep learning would undergo a Cambrian explosion causing a mass extinction of alternative approaches throughout machine learning, that models would scale up to 175,000 million parameters, and that these enormous models would just spontaneously develop all these capabilities?\n\nNo one. That is, no one aside from a few diehard connectionists written off as willfully-deluded old-school fanatics by the rest of the AI community (never mind the world), such as [Moravec](https://jetpress.org/volume1/moravec.htm \"'When will computer hardware match the human brain?', Moravec 1998\"), Schmidhuber, [Sutskever](https://www.youtube.com/watch?v=13CZPWmke6A \"'Ilya Sutskever: Deep Learning | AI Podcast #94 with Lex Fridman', 2020-05-08\"), Legg, & Amodei?\nOne of the more shocking things about looking back is realizing how unsurprising and easily predicted all of this was if you listened to the right people.\nIn 1998, 22 years ago, Moravec noted that AI research could be deceptive, and hardware limits meant that \"intelligent machine research did not make steady progress in its first 50 years, it marked time for 30 of them!\", predicting that as Moore’s law continued, \"things will go much faster in the next 50 years than they have in the last 50.\"\nMoravec further observed that part of the reason for rapid progress was the hardware overhang: while supercomputers of the necessary power would exist long before the connectionist revolution began, no one would be allowed to use them[^Jim-Gray], as they would be devoted to 'more important' (prestigious) hard STEM work, like \"physics simulations\" (ie. climate simulations & nuclear bombs)^[Strikingly, as of 2020, this is *still* true: eg. the only deep learning research I have seen done on [Summit](!W \"Summit (supercomputer)\") were [materials](https://arxiv.org/abs/1909.11150 \"'Exascale Deep Learning for Scientific Inverse Problems', Laanait et al 2019\") [science](https://arxiv.org/abs/2005.00223 \"'Pushing the limit of molecular dynamics with ab initio accuracy to 100 million atoms with machine learning', Jia et al 2020\") & [biology](https://arxiv.org/abs/2007.06225 \"'ProtTrans: Towards Cracking the Language of Life's Code Through Self-Supervised Deep Learning and High Performance Computing', Elnaggar et al 2020\"). 
(In double-checking Arxiv, I did find one non-STEM paper using Summit resources: [Lin et al 2019](https://arxiv.org/abs/1910.00932#google \"Training Kinetics in 15 Minutes: Large-scale Distributed Training on Videos\"), focusing on systems engineering in training a video classification model.)], and \"AI research must wait for the power to become more affordable.\"\nAffordable meaning a workstation roughly ~[$1000]($1998); sufficiently cheap compute to rival a human would arrive sometime in the 2020s, with the 2010s seeing affordable systems in the lizard--mouse range.\nAs it happens, the start of the DL revolution is typically dated to [AlexNet](!W) in 2012, by a grad student[^Norvig] using 2 GTX 580 3GB GPUs (launch list price of... [$500]($2010), for a system build cost of perhaps [$1500]($2012)).\n2020 saw GPT-3 arrive, and as discussed before, there are many reasons to expect the cost to fall, in addition to the large hardware compute gains that are being forecast for the 2020s despite the general deceleration of Moore’s law.^[[Jeff Dean](https://arxiv.org/abs/1911.05289#google \"'The Deep Learning Revolution and Its Implications for Computer Architecture and Chip Design', Dean 2019\") notes, \"It is perhaps unfortunate that just as we started to have enough computational performance to start to tackle interesting real-world problems and the increased scale and applicability of machine learning has led to a dramatic thirst for additional computational resources to tackle larger problems, the computing industry as a whole has experienced a dramatic slowdown in the year-over-year improvement of general purpose CPU performance.\" Under the computational view, this is not a coincidence: compute, not algorithms, is the critical factor; biological systems often come within orders of magnitude, or less, of the theoretical optimum for a task; and the closer one comes to optimal, the slower progress becomes; so, just as artificial computation finally starts doing \"interesting real-world problems\", it necessarily is approaching its limits. (It could have been otherwise: Moore’s law could have stopped short by many orders of magnitude of biological efficiency, or surpassed it by many orders, with no temporal coincidence, and AI happened for other reasons.)]\n\n[^Jim-Gray]: This seems to be a bit of a blind spot by commentators: the assumption that if the necessary resource *exists*, then it will be *used*. For example, Jim Gray (d. 2007) in June 1999 [pokes a bit of fun](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/ms_tr_99_50_turingtalk.pdf#page=11 \"What Next? A Dozen Information-Technology Research Goals: 3. Turing's vision of machine intelligence\") at Turing's connectionist hardware argument by noting that (using an optimistic lower bound on human brain computational power):\n\n > Desktop machines should be about as intelligent as a spider or a frog, and supercomputers ought to be nearing human intelligence...So, we should start seeing intelligence in these supercomputers any day now (just kidding)...[but we do not because] we are missing something *very* fundamental. Clearly, the software and databases we have for our super-computers is not on a track to pass the Turing Test in the next decade. Something quite different is needed. Out-of-the-box, radical thinking is needed.\n >\n > We have been handed a puzzle: genomes and brains work. But we are clueless what the solution is. 
Understanding the answer is a wonderful long-term research goal.\n\n With the benefit of hindsight, we can say that it is true that supercomputers in 1999 could have been showing far more impressive levels of intelligence than they were, and that it was also true that the software being run on the supercomputers in 1999 was never going to lead to meaningful AI progress, and that there is no particular contradiction or mystery---it was simply that no one was trying. No supercomputer owner was going to let it be tied up for years doing the minor-yet-critical iteration to make connectionist approaches like RNNs or CNNs work. Thus, something quite different & radical was indeed needed---but we already knew what the solution looked like.\n[^Norvig]: [Peter Norvig](!W) [offers an example](https://wandb.ai/wandb_fc/gradient-dissent/reports/Peter-Norvig-Google-s-Director-of-Research-Singularity-is-in-the-eye-of-the-beholder--Vmlldzo2MTYwNjk?galleryTag=gradient-dissent \"Peter Norvig, Google’s Director of Research—Singularity is in the eye of the beholder: We're thrilled to have Peter Norvig who join us to talk about the evolution of deep learning, his industry-defining book, his work at Google, and what he thinks the future holds for machine learning research (2020-11-20)\") of what happens when grad students *can't* afford the necessary computing power to make neural nets work:\n\n > **[Lukas Biewald](!W)**: When you look at deep learning, it sort of feels like that came suddenly, but a lot of those techniques were around, in fact in your book, I remember quite far back. Do you think that the field missed something, or was it just not possible to run at the scale necessary to show that these neural network techniques were working better than people expected in the early aughts?\n >\n > **Peter Norvig**: Yeah. I mean, if you say suddenly, right, we've got a sudden leap in computer vision and image net after Hinton had been trying the same thing for 30 years, right?...And then it finally worked. And I think the biggest difference was the computing power. Definitely there were advances in data. So we could do [ImageNet](!W) because [Fei-Fei Li](!W) and others gathered this large database, and that was really important. There are certainly differences in the algorithm, right? We've got a slightly different [squashing function](!W \"Activation function\"). Instead of shaped like this \\[[sigmoid](!W \"Sigmoid function\")\\], it's shaped like this \\[[ReLU](!W)\\]. I mean, I don't know how big a deal that was, but we learned how to do [stochastic gradient descent](!W) a little bit better. We figured that [dropout](!W \"Dilution (neural networks)\") gave you a little bit better robustness.\n >\n > So there were small things, but I think probably the biggest was the computing power. And I mean, I certainly remember [Geoff Hinton](!W) came to Berkeley when I was a grad student in 1981, I think, when he talked about these neural nets. And we fellow grad students thought that was so cool. So we said, 'Let's go back into the lab and implement it.'\n >\n > And of course, there was absolutely nothing you could download, so we had to build it all from scratch. And we got it to do exclusive or \\[[XOR](!W)\\], and then we got it to do something a little bit more complicated. And it was exciting. And then we gave it the first real problem, and it ran overnight, and it didn't converge, and we let it run one more day, and it still didn't converge. 
And then we gave up, and we went back to our sort of knowledge-based systems approach. But if we had the computing power of today, it probably would have converged after 5 seconds.\n\n By my estimate, Norvig's attempt used the equivalent of 0.8 *milliseconds* of contemporary GPU-time.\n\n (In ~1981, an expensive PC costing the equivalent of >[$5,000]($2020), of the sort a high-powered AI lab might allocate 1 apiece to grad students, might have an additional [Intel 8087](!W) floating-point [coprocessor](!W) capable of 50,000 FP64 FLOPS; conservatively assuming that 'overnight' + 'one more day' ≤ 2 days, then Norvig's experiment used 2d × 24h × 60m × 60s × 50,000 = 8×10^9^ FLOPS; a 2020 Nvidia [A100](!W \"Ampere (microarchitecture)\") GPU nominally priced ~[$10,000]($2020) boasts 9.7 FP64 TFLOPS or 9,700,000,000,000 FLOPS (and far more in the more useful low-precision regimes like FP32, but 1981 ML didn't know that); thus, 8×10^9^ / 9.7×10^12^ = 8×10^−4^ seconds = 0.8 milliseconds.)\n\nThe accelerating pace of the last 10 years should wake anyone from their dogmatic slumber and make them sit upright.\nAnd there are 28 years left in Moravec's forecast...\n\nThe temptation, that many do not resist so much as revel in, is to give in to a _déformation professionnelle_ and dismiss any model as \"just\" this or that (\"just billions of IF statements\" or \"just a bunch of multiplications\" or \"just millions of memorized web pages\"), missing the forest for the trees, as Moravec commented of chess engines:\n\n> The event was notable for many reasons, but one especially is of interest here. Several times during both matches, Kasparov reported signs of mind in the machine. At times in the second tournament, he worried there might be humans behind the scenes, feeding Deep Blue strategic insights!...In all other chess computers, he reports a mechanical predictability stemming from their undiscriminating but limited lookahead, and absence of long-term strategy. In Deep Blue, to his consternation, he saw instead an \"alien intelligence.\"\n>\n> ...Deep Blue's creators know its *quantitative* superiority over other chess machines intimately, but lack the chess understanding to share Kasparov's deep appreciation of the difference in the *quality* of its play. I think this dichotomy will show up increasingly in coming years. Engineers who know the mechanism of advanced robots most intimately will be the last to admit they have real minds. From the inside, robots will indisputably be machines, acting according to mechanical principles, however elaborately layered. Only on the outside, where they can be appreciated as a whole, will the impression of intelligence emerge. A human brain, too, does not exhibit the intelligence under a neurobiologist's microscope that it does participating in a lively conversation.\n\nBut of course, if we ever succeed in AI, or in reductionism in general, it *must be by reducing Y to 'just X'*.\nShowing that some task requiring intelligence can be solved by a well-defined algorithm with no 'intelligence' is precisely what success must look like!\n(Otherwise, the question has been thoroughly begged & the problem has only been pushed elsewhere; computer chips are made of transistors, not especially tiny homunculi.)\n\n
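The back-of-the-envelope comparison in the Norvig footnote above can also be written out explicitly (a sketch; the 8087 and A100 figures are the ones quoted there, and the 'two days' is the same conservative assumption):\n\n```python\n# Norvig's ~1981 overnight run, restated in 2020 GPU-time (constants from the footnote above;\n# treat the result as an order-of-magnitude figure).\nSECONDS_PER_DAY = 24 * 60 * 60\ni8087_flops     = 50_000        # Intel 8087 coprocessor: FP64 operations per second\na100_fp64_flops = 9.7e12        # 2020 Nvidia A100: FP64 operations per second\n\ntotal_ops = 2 * SECONDS_PER_DAY * i8087_flops    # 'overnight' + 'one more day' <= 2 days\nprint(total_ops)                                 # ~8.6e9 floating-point operations\nprint(total_ops / a100_fp64_flops * 1e3)         # ~0.9 ms of A100 time (the footnote rounds to 0.8)\n```\n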
\n> As long as the AI [OA5] can explore, it will learn, given enough time...We just kept waiting for the magic to run out. We kept waiting to hit a wall, and we never seemed to hit a wall.\n>\n> [Greg Brockman](https://qz.com/1311732/openai-built-gaming-bots-that-can-work-as-a-team-with-inhuman-precision \"OpenAI built gaming bots that can work as a team with inhuman precision\")\n
\n\n
\n> Give it the compute, give it the data, and it will do amazing things. This stuff is like---it's like *alchemy*!\n>\n> [Ilya Sutskever](https://www.newyorker.com/magazine/2019/10/14/can-a-machine-learn-to-write-for-the-new-yorker \"Can a Machine Learn to Write for The New Yorker? Extraordinary advances in machine learning in recent years have resulted in A.I.s that can write for you.\"), summer 2019\n

[Hindsight is 20⁄20.]{.marginnote} Even in 2015, [all the experts](https://news.ycombinator.com/item?id=9109140) assured us that AGI via the scaling hypothesis seemed highly dubious: you needed something to scale, after all, and it was all too easy to look at flaws in existing systems and imagine that they would never go away and progress would sigmoid any month now, soon.
Like the genomics revolution where a few far-sighted seers extrapolated that the necessary _n_ for GWASes would increase exponentially & deliver powerful PGSes soon, while sober experts wrung their hands over "missing heritability" & the miraculous complexity of biology & scoffed about how such _n_ requirements proved GWAS was a failed paradigm, the future arrived at first slowly and then quickly.
Yet, here we are: all honor to the fanatics, shame and humiliation to the critics!^[Now that GPT-3's few-shot and [T5 finetuning](https://arxiv.org/abs/2003.08380#google "'TTTTTackling WinoGrande Schemas', Lin et al 2020") have begun to make people like Gary Marcus feel slightly nervous about WinoGrande, they have [begun preparing](https://arxiv.org/abs/2004.13831 "'A Review of Winograd Schema Challenge Datasets and Approaches', Vid Kocijan, Thomas Lukasiewicz, Ernest Davis, Gary Marcus, Leora Morgenstern 2020") [their excuses](https://arxiv.org/abs/2201.02387 "'The Defeat of the Winograd Schema Challenge', Kocijan et al 2022") for why Winograd schemas [weren't *really*](/modus "'One Man’s Modus Ponens', Branwen 2012") good measures of commonsense reasoning/intelligence (because intelligence, of course, is whatever AI can't do yet).]
If only one could go back 10 years, or even 5, to watch every AI researcher's head explode reading this paper...
Unfortunately, few heads appear to be exploding now, because human capacity for hindsight & excuses is boundless ("I can get that much with finetuning, anyway I predicted it all along, how boring") and, unfortunately, ["there is no fire alarm"](https://intelligence.org/2017/10/13/fire-alarm/ "Yudkowsky 2017") for AGI.
(If you are still *certain* that there is near-zero probability of AGI in the next few decades, why?
Did you predict---in writing---capabilities like GPT-3?
Is this how you expect AI failure to look in the decades beforehand?
What specific task, what specific number, would convince you otherwise?
How would the world look different than it does now if these crude prototype insect-brain-sized DL systems were not on a path to success?)

[Authority without accountability.]{.marginnote} What should we think about the experts?
Projections of failure were made by eminent, respectable, serious people.
They spoke in considered tones of why AI hype was excessive and might trigger an "AI winter", and of the fundamental flaws of fashionable approaches and why brute force could not work.
These statements were made routinely in 2014, 2015, 2016... And they were wrong.
I am aware of few issuing a _mea culpa_ or reflecting on it.^[[Feynman](https://history.nasa.gov/rogersrep/v2appf.htm "Appendix F: Personal Observations on the Reliability of the Shuttle"): "There are several references to previous flights; the acceptance and success of these flights are taken as evidence of safety. But erosion and blowby are not what the design expected. They are warnings that something is wrong. The equipment is not operating as expected, and therefore there is a danger that it can operate with even wider deviations in the unexpected and not thoroughly understood way. 
The fact that this danger did not lead to catastrophe before is no guarantee that it will not the next time, unless it is completely understood.\"]\nIt is a puzzling failure, and I've [reflected on it before](/newsletter/2019/13#what-progress \"‘2019 News § What Progress?’, Branwen 2019\").\n\n[Phatic, not predictive.]{.marginnote} There is, however, a certain tone of voice the bien pensant all speak in, whose sound is the same whether right or wrong; a tone shared with many statements in January to March of this year; a tone we can also find in a 1940 _Scientific American_ article authoritatively titled, [\"Don't Worry---It Can't Happen\"](/doc/existential-risk/1940-sciam-harrington-nuclearweapons-dontworryitcanthappen.pdf), which advised the reader to not be concerned about it any longer \"and get sleep\".\n('It' was the atomic bomb, about which certain scientists had stopped talking, raising public concerns; not only could it happen, the British bomb project had already begun, and 5 years later it did happen.)\n\n[The iron law of bureaucracy: Cathedral gothic.]{.marginnote} This tone of voice is the voice of [authority](https://srconstantin.wordpress.com/2016/10/20/ra/ \"'Ra', Sarah Constantin 2016\"). \\\nThe voice of authority insists on calm, and people not \"panicking\" (the chief of sins). \\\nThe voice of authority assures you that it won't happen (because it can't happen). \\\nThe voice utters simple arguments about why the status quo will prevail, and considers only how the wild new idea could fail (and not all the possible options). \\\nThe voice is not, and does not deal in, uncertainty; things will either happen or they will not, and since it will not happen, there is no need to take any precautions (and you should not worry because it can't happen). \\\nThe voice does not believe in drawing lines on graphs (it is rank numerology). \\\nThe voice does not issue any numerical predictions (which could be falsified). \\\nThe voice will not share its source code (for complicated reasons which cannot be explained to the laity). \\\nThe voice is opposed to unethical things like randomized experiments on volunteers (but will overlook the insult). \\\nThe voice does not have a model of the future (because a model implies it does not already know the future). \\\nThe voice is concerned about its public image (and unkind gossip about it by other speakers of the voice). \\\nThe voice is always sober, respectable, and credentialed (the voice would be pleased to write an op-ed for your national magazine and/or newspaper). \\\nThe voice speaks, and is not spoken to (you cannot ask the voice what objective fact would change its mind). \\\nThe voice never changes its mind (until it does). \\\nThe voice is never surprised by events in the world (only disappointed). \\\nThe voice advises you to go back to sleep (right now).\n\nWhen someone speaks about future possibilities, what is the tone of their voice?\n\n[null](/scaling-hypothesis#blessings-of-scale){style=\"display:none;\"} \n[null](/scaling-hypothesis \"'The Scaling Hypothesis', Branwen 2020\"){style=\"display:none;\"} \n\n\n\n# Appendix\n## It From Byte\n\n
\n> Powerful generative models like GPT-3 learn to imitate agents and thus become agents when prompted appropriately. This is an inevitable consequence of training on huge amounts of human-generated data. This can be a problem.\n>\n> Is human data (or moral equivalents like DRL agents) *necessary*, and other kinds of data, such as physics data, free of this problem? (And so a safety strategy of filtering data could reduce or eliminate hidden agency.)\n>\n> I argue no: agency is not discrete or immaterial, but an ordinary continuum of capability, useful to a generative model in many contexts beyond those narrowly defined as 'agents', such as in the \"intentional stance\" or variational approaches to solving physics problems.\n>\n> Thus, a very wide range of problems, at scale, may surprisingly induce emergent agency.\n
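
As a concrete toy illustration of the imitation-becomes-agency point summarized above and argued below, here is a minimal behavioral-cloning sketch; it is invented for illustration and is not taken from any GPT-3 experiment, and the gridworld, the 'expert', and the nearest-neighbor 'model' are all assumptions. A model fit purely to logged behavior, never told about any reward or goal, still steers fresh states into the expert's goal region:

```python
# Toy behavioral cloning: fit a 'policy' to logged (state, action) pairs from a goal-seeking
# expert, then watch the imitator steer fresh states into the same small goal region.
# Everything here (gridworld, expert, nearest-neighbor model) is invented for illustration.
import random

random.seed(0)
GOAL = (7, 7)

def expert_action(state):
    """Greedy expert: step toward GOAL, one axis at a time."""
    x, y = state
    if x != GOAL[0]:
        return (1, 0) if x < GOAL[0] else (-1, 0)
    return (0, 1) if y < GOAL[1] else (0, -1)

# 1. Log demonstrations: offline imitation data -- no rewards, just observed behavior.
demos = []
for _ in range(200):
    s = (random.randint(0, 9), random.randint(0, 9))
    for _ in range(20):
        a = expert_action(s)
        demos.append((s, a))
        s = (s[0] + a[0], s[1] + a[1])

# 2. 'Train' by pure imitation: copy the action of the nearest logged state.
def cloned_policy(state):
    _, action = min(demos, key=lambda sa: abs(sa[0][0] - state[0]) + abs(sa[0][1] - state[1]))
    return action

# 3. Roll out the clone from a fresh start: it was never told about GOAL, yet it reliably
#    pushes the environment into that small goal region of state-space.
s = (0, 3)
for _ in range(30):
    if s == GOAL:
        break
    s = (s[0] + cloned_policy(s)[0], s[1] + cloned_policy(s)[1])
print("clone ended at", s, "-- goal was", GOAL)
```

Scale the same logic up from a 100-cell gridworld to Internet text, and "predict the next token" quietly includes "predict what a goal-directed agent would do next".
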
\n\nI have previously argued that GPT-3 clearly shows agency because it is doing offline imitation learning (behavioral cloning, specifically) from the human-generated text data, and so it learns generative models of many agents, real or fictional.\nThese generative models offer agentic capabilities, because they can be used to prompt the model to ['roleplay'](/gpt-3#roleplaying)---plan & take action which will steer environments into small goal regions of state-space; and this is not merely hypothetical, or confined to text transcripts of actions & results in its internal simulated environments, but given effectors, like in the case of [SayCan](https://arxiv.org/abs/2204.01691#google \"‘Do As I Can, Not As I Say (SayCan): Grounding Language in Robotic Affordances’, Ahn et al 2022\"), a language model will in fact do such things in the real world.\n\nThat such systems may never have 'experienced the real world' or been trained deliberately on exact action sequences of malicious agents doesn't mean that they cannot generalize or imitate.\nA sufficiently accurate simulation of an agent just *is* an agent.\n(One can set up a prompt for GPT-3 to imitate Adolf Hitler and ask him how to regain power & resume exterminating the Jews and get back a semi-coherent high-level plan; this is unfortunate, and the simulacra need not even be of a real person---evil fictional characters plan evil things just as easily, because it's not hard to imagine what horrible things they *would* want to do.)\nThis doesn't seem all that different from accepted instances of reinforcement learning, like behavior learning or offline reinforcement learning: if you train on data from agents, whether humans or logged data from DRL agents, then the question is how would you *not* learn from all these examples how to act & be capable of pursuing goals?\nPresumably only if you were a stupid model, too small or given too little data.\n\nIf these are not 'agents', I don't know what is; or at least if critics insist on some sort of definition of 'agent' which excludes these, I think perhaps we should then abandon the word 'agent' entirely---because if giving a SayCan robot an instruction to 'fetch a can of Coke and bring it to me', with it using image inputs to construct step-by-step plans to find, possess, and return with the can, and successfully doing so often in real life on a real robot, does not count as an 'agent', then we need a word for such non-agent systems, so we can discuss their dangers.\n(If we define them as sub-agents because of lack of appendages and thus define all models as harmless non-agents, this is an unacceptable equivocation given the extreme carelessness and insouciance people display in hooking up their models the first chance they get to humans, APIs, search engines, or robots---hardly had the OpenAI GPT-3 API been launched in July 2020 than people were showing off using its basic HTML/CSS/JS abilities to drive web browsers, and large LM model developers like LaMDA or Adept display an unseemly eagerness to let it query arbitrary URLs without their paper even bothering to specify it was live.\nThe AI box hadn't even been invented before everyone decided to let their AI out of the box to be slightly more useful, as should come as no surprise---after all, [tool AIs *want* to be agent AIs](/tool-ai \"‘Why Tool AIs Want to Be Agent AIs’, Branwen 2016\").)\n\nBut one might wonder how far this goes: do we have agent AIs emerging from our tool AIs *only* because we trained them on so much agent-generated data? 
If we scrapped human text corpuses, full of text about humans planning and taking actions and obtaining goals, or video datasets stuffed full of agents doing stuff, and if we deleted image datasets as well because they are just snapshots of videos and depict agents & actions & environments full of traces of agency, would we then have a model which is now just a (relatively) safe tool AI, with no agency lurking?

I would still say that there's a possibility, and maybe not even that small a one: agency is not a discrete thing, but a continuum, which is a convergent instrumental drive / emergent capability because it is useful even for understanding "non-agentic" things.

### All Is Atoms & Void

First, there cannot be any principled, hard-and-fast, necessary distinction between data which is 'agentic' and data which is merely 'natural'.
This is because there is no such distinction in reality either: all 'agency' is constructed of non-agentic bits like atoms. There is no agency-particle, no pineal gland granting access to '*Genuine* Decision-Making™'.
An agentic human is made out of the same fundamental things as a clump of dust swirling in space, or rock, or a computer.
It must be the case that one could, starting only from simulations of (possibly a lot of) atoms, nothing but the most raw physics equations and atoms & the void, eventually recapitulate the history of the universe and observe things like the origin of life and humans. Thus, one turns non-agentic data (physics equations) into agentic data.

OK, but barring a hypercomputer, that is unlikely to happen.
If we consider realistic levels of compute, like contemporary NNs, trained on less-than-everything-in-the-universe & apparently harmless data like, say, the hydrology of rivers flowing downhill (eg. for flood prevention), or the trajectory of the solar system, surely none of that agency will evolve---no amount of modeling the chaotic dynamics of Pluto will give you any help in modeling the dynamics of astronomy infighting about whether Pluto is a planet, right?

### Intentional Interpretive Stance

Here again I differ, and invoke [Daniel Dennett's](https://en.wikipedia.org/wiki/Daniel_Dennett) [intentional stance](https://en.wikipedia.org/wiki/Intentional_stance).
Humans do, in fact, model natural systems like these as agents.
We find such teleological explanations indispensable for intuition and shortcut reasoning across many natural systems.

#### Variational Interpretations

[Janus comments](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators), apropos of their emphasis on what I might call a 'world-modeling-centric' intuition for GPT-3 vs my 'agent-centric' view, that:

> For example, Gwern has said that anyone who uses GPT for long enough begins to think of it as an agent who only cares about roleplaying a lot of roles. That framing seems unnatural to me, comparable to thinking of physics as an agent who only cares about evolving the universe accurately according to the laws of physics. 
At best, the agent is an epicycle; but it is also compatible with interpretations that generate dubious predictions.\n\nI embrace that description: it is in fact natural and not an elaborate epicycle on a geocentric model of the world, but rather, heliocentrism---powerful, and useful, and simpler.\nThat it (also like heliocentrism[^Wittgenstein]) may feel counterintuitive is unfortunate, but its virtues are proven.\n\n[^Wittgenstein]: As my favorite Wittgenstein anecdote goes, heliocentrism strikes everyone as false because things just don't *look* as if the Earth whirls at astronomical velocities around a star, but as if the Earth is perfectly still and everything else whirls around it (Anscombe 1963, _An Introduction to Wittgenstein’s Tractatus_):\n\n > The general method that Wittgenstein does suggest is that of 'shewing that a man has supplied no meaning [\"no reference\"?] for certain signs in his sentences'. I can illustrate the method from Wittgenstein's later way of discussing problems. He once greeted me with the question: 'Why do people say that it was natural to think that the sun went round the earth rather than that the earth turned on its axis? I replied: 'I suppose, because it looked as if the sun went round the earth.' 'Well,' he asked, 'what would it have looked like if it *had* looked as if the earth turned on its axis?'\n >\n > This question brought it out that I had hitherto given no relevant meaning to 'it looks as if' in 'it looks as if the sun goes round the earth'. My reply was to hold out my hands with the palms upward, and raise them from my knees in a circular sweep, at the same time leaning backwards and assuming [a dizzy](https://twitter.com/Brummo/status/1320138187763691520 \"'Here’s another stabilized sky timelapse, this time at Crater Lake, Oregon. The water was still for most of it, which created a nice mirror for the stars. I also got my astro-modified camera working, which provides more vibrancy in the nebulae in the Milky Way. #EppurSiMuove', Eric Brummel 2020-10-24\") [expression](https://www.youtube.com/watch?v=h714VOr-6nY \"'Star Timelapse Revealing the Earth’s Rotation', Alex Rivest 2014-12-11\"). 'Exactly!' 
he said.

We err if an intentional stance leads us to engage in the pathetic fallacy and say that the river-spirit wants to reunite with the ocean (and we must offer sacrifices lest the dikes breach), but we are correct when we say that the river tries to find the optimal path which minimizes its gravitational or [free energy](https://en.wikipedia.org/wiki/Principle_of_minimum_energy).
It is true, predictively useful, and mathematically equivalent to the other way of formulating it, in terms of 'forward' processes computing step by step, atom by atom, and getting the same answer---but typically much easier to solve.
(Ted Chiang's ["Story Of Your Life"](/story-of-your-life "‘‘Story Of Your Life’ Is Not A Time-Travel Story’, Branwen 2012") tries to convey this perspective via fiction.)
And this shortcut is a trick we can use universally, for everything from a river flowing downhill to the orbit of a planet to the path of a photon through water [minimizing travel time](https://en.wikipedia.org/wiki/Fermat%27s_principle) to evolutionary dynamics: instead of trying to understand it step by step, treat the system as a whole via the [variational principle](https://en.wikipedia.org/wiki/Variational_principle) as 'wanting' to minimize (or maximize) some simple global quantity (a reward), and pick the sequence of actions that does so.
("The river *wants* to minimize its height, so without simulating it down to the individual water currents, I can look at the map and see that it should 'choose' to go left, then right, and then meander over this flat slightly-sloping part. Ah, looks like I was right.")
Then, into this modular trick, just plug in the system and quantity in question, and think like an agent...^[This connection is more than superficial---a lot of RL work draws on formal analogies to physics and variational principles.]

Uh oh. 'Predictively useful', 'shortcut', 'much easier', 'universally'---all properties a neural net loves.
All natural to it.
Why would it try to solve each heterogeneous problem with a separate, computationally-expensive bag of tricks, when there's one weird trick AI safety researchers hate, like adopting teleological and variational reasoning?

#### Inducing Emergence Is Expensive

Of course, this frame can be more expensive than solving a problem directly.
Variational approaches are powerful but counterintuitive, and there are often many simpler approximations or memorizations that a model can use instead.
For a *single* problem like modeling the orbit of Pluto, it is unlikely that any variational approach would be learned. Why would it be, when there is only 1 system and 1 quantity being minimized, so they can just be assumed?
This is similar to other [model capabilities induced by pretraining](#why-does-pretraining-work): things like induction heads or meta-learning or counting or reasoning need to pay their way, and are not superior to alternatives right from the start.
They need rich enough models to compute them feasibly, enough data to force them out of easier solutions (which will fail on a few rare datapoints), and enough training (to work through all the possibilities to converge on the better capabilities).

#### What Can Induce Agency Emergence?

Unfortunately, this is an empirical matter.
How many datasets? How big is each dataset? How diverse do they have to be? 
What even is a 'dataset', since we can always lump or split it?
We struggle to predict when a capability will develop in GPT-3, so we definitely can't say a priori that "Pluto is safe to model, but then tossing in a few thousand exoplanet solar systems would begin to elicit a define-system/plug-in-reward/maximize module and bring back agency".

##### Cellular Automatons

It would also be hard to say at all what mathematical or physical systems exhibit the right kinds of maximizing behavior which can be generalized to an intentional stance.
Does the ultra-abstract & simple [cellular automaton](https://en.wikipedia.org/wiki/Cellular_automaton) [Conway's Game of Life](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life) (GoL) induce an intentional stance?
It has no agents, no biology, no evolution in the usual sense, but it does have many small patterns which can be usefully chunked and a specific GoL understood that way.
Humans, of course, look at a GoL as a bunch of small entities like 'gliders', but a NN given randomly-initialized boards may see the same thing, because most GoL patterns will die out or reach fixed-points like [gliders](https://en.wikipedia.org/wiki/Glider_(Conway%27s_Life)) or [still-life](https://en.wikipedia.org/wiki/Still_life_(cellular_automaton)) patterns.
And once you are talking about gliders wandering out unless they run into a still-life block which kills them, you are much of the way to an intentional stance---not modeling a glider as the inexorable outcome of applying this and that rule about the local-neighborhood to a million cells of equal importance, but as a specific entity of interest against an ignored background of dead cells, which will travel around and shoot off to infinity or meet its doom.
So, I wouldn't want to bet too much on GoL being unable to induce any transfer.

##### Turing Machine

Can we go even broader?
How about, not natural physics systems, nor specific abstractions of interest to humans (GoL is especially interesting among cellular automatons, and we ignore the large space of CA rules which define a CA but which do nothing interesting), but all Turing machines: let's say random rules with some length-biased sample of random programs, which we dovetail & whose tapes we treat as a sequence prediction problem?
There is no more general computable setting, after all.

###### Single TM

Would training on a random Turing machine risk the possibility of agency?

Maybe not.
For a single TM, this might foster some capabilities like instruction-following (for the same reason that pretraining on source code, especially source code augmented with state logs, is a powerful prior for many tasks), but it does not seem to have any of the traits that would induce agency.
There is nothing that random TM programs try to minimize or maximize; they simply run.
They don't try to maximize run time length (terminating or non-terminating), or write as few or as many places on the tape as possible, or achieve particular patterns.
A model would simply learn the TM rules and attempt to approximate it as best it can given its own limited feedforward neural net resources; eventually, if it can work iteratively or recurrently, it would learn the rules and generalize perfectly, and no further learning occurs.
Classifying TM programs by whether they halt doesn't help: yes, the Busy Beaver 'wants' to maximize something, but that's just by definition as the longest terminating program; there are many more TM programs which are 'happy' 
to halt very quickly.\nSo predicting halting status may learn things, but also still nothing that prima facie looks like agency.\n\n###### TM Meta-Learning\n\nThis might be due to there being only a single TM, making it analogous to training only on Pluto.\nPerhaps the right setting would be training over *many* TM rules (and programs within each one).\nThis is what a researcher would be more interested in, since few TMs are of any intrinsic interest, nor do we know the One True Turing Machine™; we'd rather have a neural network which is learning to learn TMs, or meta-learning, and training a NN over many environments drawn from a distribution is the easiest way to induce meta-learning.\nSo what if we trained a model to do sequence prediction of a random TM + random program, without reuse?\nIf single random Turing machines are harmless, how about all of them?\n\nHm, well...\nIt's worth noting how Alan Turing introduced the Turing machine formalism: as a general setting in which a *man* read and executed sets of rules about how to mark up a paper tape.\nSo even in the original formulation of computers as tools which merely do what they are programmed to do, we have a homunculus at the center!\nThis homunculus could do (and given different instructions, would) anything to the tape, but he wants to follow the current set of instructions accurately, until he's done.\nIn each draw from the TM+program distribution, he is following a different set of instructions, and now the model is attempting to infer what he wants, to as quickly as possible begin predicting the sequence accurately by recomputing it.\n\nThis provides our modularity, and a particular computation executed, and strong optimization pressure to rapidly 'read' the history and infer what the new rules must be.\nThat may not have a clean reward-maximizing interpretation, but it *does* sound a lot like what anyone does with an agent of any kind: the inverse reinforcement learning problem of inferring the reward function can be arbitrarily hard, and until we succeed at that, we instead infer local rules & patterns, which target particular outcomes (regions of state-space).\nYou may not know why your neighbor does that weird thing he does, but you can infer that he will do it, and not another agent, not even his evil identical twin.\nIs inferring TM rules the simplest & most rudimentary possible 'theory of mind'?\nMaybe. 
In which case, there is no escape from the possibility of agency anywhere.

### Ambient Agency

Agency may be like [Turing-completeness](/turing-complete "‘Surprisingly Turing-Complete’, Branwen 2012"): even in settings free of selection or optimization, it is a capability too useful and too convergent to guarantee its absence.
The broader and more powerful a system is, the more the next feature or next piece of data may push it over the edge, and it becomes harder to engineer a system *without* that aspect.

Agency can be learned from data generated by agents, who generate extremely selective data.
Or if you carefully remove all that, it may come from the selection of non-human data.
Or it may be implicit in the dynamics of a replicator system.
Or it may be one of the countless physical systems which have such interpretations that are computationally more efficient, so that any NN optimized to balance realizable compute with accuracy will be pushed to such interpretations.
Or it may be a good simplification of systems with macro-statistics where the detailed micro-state adds little.
Or it may stem simply from meta-learning of rule induction on TMs, because agents may follow complex sets of policies which are learnable but whose motivating reward-function is an under-determined blackbox.

Or... like squashing Turing-completeness, as soon as one hole in the sinking ship is patched, you notice another leak spring up.
You can't keep a good idea down.
All you can do is make a complex system that doesn't display agency as far as *you* can tell; unfortunately, much like Turing-completeness (or security vulnerabilities), the absence of overt agency doesn't mean it is not there.
The model won't tell you; it is just getting on with the job of lowering its loss.
("Sampling can show the presence of knowledge, but not the absence.")

I do not have any solutions to this, other than to advise yet again to abandon the seductive, convenient, but wrong idea that tool AIs (under any branding, be it 'tool AIs' or 'physics generative models' or 'world simulators') cannot or will not be agent AIs.
They may well be, and the better they get, the more likely it is, and tampering with data is not a solution.

# The Neural Net Tank Urban Legend

> A cautionary tale in artificial intelligence tells about researchers training a neural network (NN) to detect tanks in photographs, succeeding, only to realize the photographs had been collected under specific conditions for tanks/non-tanks and the NN had learned something useless like time of day. This story is often told to warn about the limits of algorithms and the importance of data collection to avoid "dataset bias"/"data leakage", where the collected data can be solved using algorithms that do not generalize to the true data distribution, but the tank story is usually never sourced.
>
> I collate many extant versions dating back a quarter of a century to 1992, along with two NN-related anecdotes from the 1960s; their contradictions & details indicate a classic "urban legend", with a probable origin in a speculative question in the 1960s by Edward Fredkin at an AI conference about some early NN research, which was then classified & never followed up on.
>
> I suggest that dataset bias is real but exaggerated by the tank story, giving a misleading indication of risks from deep learning, and that it would be better not to repeat it but to use real examples of dataset bias and focus on larger-scale risks like AI systems optimizing for wrong utility functions.

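
To make the "dataset bias"/"data leakage" failure mode above concrete, independently of whether the tank project ever happened, here is a small synthetic sketch; the data generator and the nearest-centroid classifier are invented for illustration. The label is accidentally confounded with overall image brightness in the 'collected' photos, so a held-out split of the same collection looks nearly perfect while performance on unconfounded photos collapses to roughly chance:

```python
# Synthetic sketch of dataset bias / data leakage: in the 'collected' photos the label is
# confounded with overall brightness (tanks photographed on cloudy days), so a classifier
# keying mostly on brightness aces a held-out split of the same collection and then drops
# to roughly chance on unconfounded photos. Data and model are invented for illustration.
import random

N_PIXELS = 100

def photo(tank, brightness):
    pixels = [brightness + random.gauss(0, 0.05) for _ in range(N_PIXELS)]
    if tank:                       # a weak but genuine 'tank' signal in a few pixels
        for i in range(40, 50):
            pixels[i] += 0.3
    return pixels

def collected(n):                  # as collected: tanks on cloudy days, empty scenes on sunny days
    return [(photo(tank, 0.2 if tank else 0.8), tank) for tank in (True, False) for _ in range(n)]

def unconfounded(n):               # as deployed: brightness unrelated to the presence of tanks
    return [(photo(tank, random.uniform(0.2, 0.8)), tank) for tank in (True, False) for _ in range(n)]

def fit(train):                    # nearest-centroid 'model': one mean image per class
    def centroid(photos):
        return [sum(p[i] for p in photos) / len(photos) for i in range(N_PIXELS)]
    return {label: centroid([p for p, l in train if l == label]) for label in (True, False)}

def accuracy(model, data):
    def dist(p, c):
        return sum((a - b) ** 2 for a, b in zip(p, c))
    hits = sum(min(model, key=lambda label: dist(p, model[label])) == truth for p, truth in data)
    return hits / len(data)

random.seed(0)
model = fit(collected(100))
print("held-out split of the same collection:", accuracy(model, collected(50)))    # ≈ 1.0
print("unconfounded photos:                  ", accuracy(model, unconfounded(50))) # ≈ 0.5 (chance)
```

The held-out test passes for exactly the reason the legend describes: the leak is baked into how the data was collected, so no split of that data can reveal it.
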

Deep learning's rise over the past decade and its dominance in image processing tasks have led to an explosion of applications attempting to infer high-level semantics locked up in raw sensory data like photographs.
Convolutional neural networks are now applied to not just ordinary tasks like [sorting cucumbers by quality](https://cloud.google.com/blog/products/gcp/how-a-japanese-cucumber-farmer-is-using-deep-learning-and-tensorflow "'How a Japanese cucumber farmer is using deep learning and TensorFlow', Sato 2016") but everything from predicting the best Go move to [where in the world](https://arxiv.org/abs/1602.05314#deepmind "'PlaNet - Photo Geolocation with Convolutional Neural Networks', Weyand et al 2016") a photo was taken to whether a photograph is ["interesting"](https://ai.googleblog.com/2018/05/automatic-photography-with-google-clips.html "Automatic Photography with Google Clips") or ["pretty"](https://ai.googleblog.com/2017/07/using-deep-learning-to-create.html "Using Deep Learning to Create Professional-Level Photographs"), not to mention supercharging traditional tasks like radiology interpretation or facial recognition which have reached levels of accuracy that could only be dreamed of decades ago.
With this approach of "neural net *all the things*!", the question of to what extent the trained neural networks are useful in the real world and will do what we *want* them to do & not what we *told* them to do has taken on additional importance, especially given the possibility of neural networks learning to accomplish extremely inconvenient things like inferring individual human differences such as criminality or homosexuality (to give two highly controversial recent examples where the meaningfulness of the claimed successes has been severely questioned).

In this context, a cautionary story is often told of incautious researchers decades ago who trained a NN for the military to find images of tanks, only to discover they had trained a neural network to detect something else entirely (what, precisely, that something else was varies in the telling).
It would be a good & instructive story... 
if it were true.
Is it?

As it would be so useful a cautionary example for AI safety/alignment research, and was cited to that effect by Eliezer Yudkowsky but only to a secondary source, I decided to make myself useful by finding a proper primary source for it & seeing if there were more juicy details worth mentioning.
My initial attempt failed, and I & several others failed for more than half a decade to find any primary source (just secondary sources citing each other).
I began to wonder if it was even real.

Trying again more seriously, I conclude that, unfortunately, it is definitely not real as usually told: it is just an urban legend/leprechaun; and in fact, the seed of the story *could not* have run into the issue the tank story warns about, because the original researchers correctly constructed their training dataset to avoid such issues.
More broadly, considering that issue in contemporary deep learning, the dataset bias the story cautions against is real but not that important, and is conflated with more dangerous safety/alignment problems.

# Did It Happen?

## Versions of the Story

Drawing on [the usual suspects](/search "'Internet Search Tips', Branwen 2018") (Google/Google Books/Google Scholar/Libgen/LessWrong/Hacker News/Twitter) in [investigating leprechauns](/leprechaun "'Leprechaun Hunting & Citogenesis', Branwen 2014"), I have compiled a large number of variants of the story below, in reverse chronological order by decade, letting us trace the evolution of the story back towards its roots:

### 2010s

Heather Murphy, ["Why Stanford Researchers Tried to Create a 'Gaydar' Machine"](https://www.nytimes.com/2017/10/09/science/stanford-sexual-orientation-study.html) (NYT), 2017-10-09:

> *So What Did the Machines See?* Dr. Kosinski and Mr. Wang [[Wang & Kosinski 2018](https://files.osf.io/v1/resources/hv28a/providers/osfstorage/59ab119b594d9002537d360c?action=download&version=10&direct#pdf "Deep neural networks are more accurate than humans at detecting sexual orientation from facial images"); see also [Leuner 2019](#leuner-2019)/[Kosinski 2021](https://www.nature.com/articles/s41598-020-79310-1 "Facial recognition technology can expose political orientation from naturalistic facial images")] say that the algorithm is responding to fixed facial features, like nose shape, along with "grooming choices," such as eye makeup. But it's also possible that the algorithm is seeing something totally unknown. "The more data it has, the better it is at picking up patterns," said Sarah Jamie Lewis, an independent privacy researcher who Tweeted a critique of the study. "But the patterns aren't necessarily the ones you think that they are." [Tomaso Poggio](!W), the director of M.I.T.'s Center for Brains, Minds and Machines, offered a classic parable used to illustrate this disconnect. The Army trained a program to differentiate American tanks from Russian tanks with 100% accuracy. Only later did analysts realize that the American tanks had been photographed on a sunny day and the Russian tanks had been photographed on a cloudy day. The computer had learned to detect brightness. Dr. Cox has spotted a version of this in his own studies of dating profiles. Gay people, he has found, tend to post higher-quality photos. Dr. Kosinski said that they went to great lengths to guarantee that such confounders did not influence their results. 
Still, he agreed that it's easier to teach a machine to see than to understand what it has seen.\n\n[It is worth noting that [Arcs et al's criticisms](https://medium.com/@blaisea/do-algorithms-reveal-sexual-orientation-or-just-expose-our-stereotypes-d998fafdf477 \"Do algorithms reveal sexual orientation or just expose our stereotypes?\"), such as their 'gay version' photographs, do not appear to have been confirmed by an [independent replication](https://arxiv.org/abs/1902.10739 \"'A Replication Study: Machine Learning Models Are Capable of Predicting Sexual Orientation From Facial Images', Leuner 2019\").]\n\nAlexander Harrowell, [\"It was called a perceptron for a reason, damn it\"](https://www.harrowell.org.uk/blog/2017/09/30/it-was-called-a-perceptron-for-a-reason-damn-it/), 2017-09-30:\n\n> You might think that this is rather like one of the classic optical illusions, but it's worse than that. If you notice that you look at something this way, and then that way, and it looks different, you'll notice something is odd. This is not something our deep learner will do. Nor is it able to identify any bias that might exist in the corpus of data it was trained on...or maybe it is. If there is any property of the training data set that is strongly predictive of the training criterion, it will zero in on that property with the ferocious clarity of Darwinism. In the 1980s, an early backpropagating neural network was set to find Soviet tanks in a pile of reconnaissance photographs. It worked, until someone noticed that the Red Army usually trained when the weather was good, and in any case the satellite could only see them when the sky was clear. The medical school at St Thomas' Hospital in London found theirs had learned that their successful students were usually white.\n\nAn interesting story with a distinct \"family resemblance\" is told about a NN classifying wolves/dogs, by Evgeniy Nikolaychuk, [\"Dogs, Wolves, Data Science, and Why Machines Must Learn Like Humans Do\"](https://medium.com/veon-careers/dogs-wolves-data-science-and-why-machines-must-learn-like-humans-do-213b08036a10), 2017-06-09:\n\n> Neural networks are designed to learn like the human brain, but we have to be careful. This is not because I'm scared of machines taking over the planet. Rather, we must make sure machines learn correctly. One example that always pops into my head is how one neural network learned to differentiate between dogs and wolves. It didn't learn the differences between dogs and wolves, but instead learned that wolves were on snow in their picture and dogs were on grass. It learned to differentiate the two animals by looking at snow and grass. Obviously, the network learned incorrectly. What if the dog was on snow and the wolf was on grass? Then, it would be wrong.\n\nHowever, in his source, [\"'Why Should I Trust You?' Explaining the Predictions of Any Classifier [LIME]\"](https://arxiv.org/abs/1602.04938 \"'\"Why Should I Trust You?\": Explaining the Predictions of Any Classifier', Ribeiro et al 2016\"), Ribeiro et al 2016, they specify of their dog/wolf snow-detector NN that they \"trained this *bad* classifier intentionally, to evaluate whether subjects are able to detect it [the bad performance]\" using LIME for insight into how the classifier was making its classification, concluding that \"After examining the explanations, however, almost all of the subjects identified the correct insight, with much more certainty that it was a determining factor. 
Further, the trust in the classifier also dropped substantially.\"\nSo Nikolaychuk appears to have misremembered.\n(Perhaps in another 25 years students will be told in their classes of how a NN was once trained by ecologists to count wolves...)\n\n[Redditor mantrap2](https://www.reddit.com/r/MachineLearning/comments/3ailzi/suddenly_a_leopard_print_sofa_appears/csczkqg/) gives on 2015-06-20 this version of the story:\n\n> I remember this kind of thing from the 1980s: the US Army was testing image recognition seekers for missiles and was getting excellent results on Northern German tests with NATO tanks. Then they tested the same systems in other environment and there results were suddenly shockingly bad. Turns out the image recognition was keying off the trees with tank-like minor features rather than the tank itself. Putting other vehicles in the same forests got similar high hits but tanks by themselves (in desert test ranges) didn't register. Luckily a sceptic somewhere decided to \"do one more test to make sure\".\n\nDennis Polis, _God, Science and Mind_, 2012 (pg131, limited Google Books snippet, unclear what ref 44 is):\n\n> These facts refute a Neoplatonic argument for the essential immateriality of the soul, _viz._ that since the mind deals with _universal_ representations, it operates in a specifically immaterial way...So, awareness is not explained by connectionism. The results of neural net training are not always as expected. One team intended to train neural nets to recognize battle tanks in aerial photos. The system was trained using photos with and without tanks. After the training, a different set of photos was used for evaluation, and the system failed miserably---being totally incapable of distinguishing those with tanks. The system actually discriminated cloudy from sunny days. It happened that all the training photos with tanks were taken on cloudy days, while those without were on clear days.^44^ What does this show? That neural net training is mindless. The system had no *idea* of the intent of the enterprise, and did what it was programmed to do without any concept of its *purpose*. As with Dawkins' evolution simulation (p. 66), the goals of computer neural nets are imposed by human programmers.\n\nBlay Whitby, [_Artificial Intelligence: A Beginner's Guide_](https://books.google.com/books?id=TKOfhnUhgS4C) 2012 (pg53):\n\n> It is not yet clear how an artificial neural net could be trained to deal with \"the world\" or any really open-ended sets of problems. Now some readers may feel that this unpredictability is not a problem. After all, we are talking about training not programming and we expect a neural net to behave rather more like a brain than a computer. Given the usefulness of nets in unsupervised learning, it might seem therefore that we do not really need to worry about the problem being of manageable size and the training process being predictable. This is not the case; we really do need a manageable and well-defined problem for the training process to work. A famous AI urban myth may help to make this clearer.\n>\n> The story goes something like this. A research team was training a neural net to recognize pictures containing tanks. (I'll leave you to guess why it was tanks and not tea-cups.) To do this they showed it two training sets of photographs. One set of pictures contained at least one tank somewhere in the scene, the other set contained no tanks. The net had to be trained to discriminate between the two sets of photographs. 
Eventually, after all that back-propagation stuff, it correctly gave the output \"tank\" when there was a tank in the picture and \"no tank\" when there wasn't. Even if, say, only a little bit of the gun was peeping out from behind a sand dune it said \"tank\". Then they presented a picture where no part of the tank was visible---it was actually completely hidden behind a sand dune---and the program said \"tank\".\n>\n> Now when this sort of thing happens research labs tend to split along age-based lines. The young hairs say \"Great! We're in line for the Nobel Prize!\" and the old heads say \"Something's gone wrong\". Unfortunately, the old heads are usually right---as they were in this case. What had happened was that the photographs containing tanks had been taken in the morning while the army played tanks on the range. After lunch the photographer had gone back and taken pictures from the same angles of the empty range. So the net had identified the most reliable single feature which enabled it to classify the two sets of photos, namely the angle of the shadows. \"AM = tank, PM = no tank\". This was an extremely effective way of classifying the two sets of photographs in the training set. What it most certainly was *not* was a program that recognizes tanks. The great advantage of neural nets is that they find their own classification criteria. The great problem is that it may not be the one you want!\n\n[Thom Blake](https://www.lesswrong.com/posts/PoDAyQMWEXBBBEJ5P/magical-categories4v4a) notes in 2011-09-20 that the story is:\n\n> Probably apocryphal. I haven't been able to track this down, despite having heard the story both in computer ethics class and at academic conferences.\n\n[\"Embarrassing mistakes in perceptron research\"](https://www.webofstories.com/play/marvin.minsky/122), Marvin Minsky, 2011-01-31:\n\n> Like I had a friend in Italy who had a perceptron that looked at a visual... it had visual inputs. So, he... he had scores of music written by Bach of chorales and he had scores of chorales written by music students at the local conservatory. And he had a perceptron---a big machine---that looked at these and those and tried to distinguish between them. And he was able to train it to distinguish between the masterpieces by Bach and the pretty good chorales by the conservatory students. Well, so, he showed us this data and I was looking through it and what I discovered was that in the lower left hand corner of each page, one of the sets of data had single whole notes. And I think the ones by the students usually had four quarter notes. So that, in fact, it was possible to distinguish between these two classes of... of pieces of music just by looking at the lower left... lower right hand corner of the page. So, I told this to the... to our scientist friend and he went through the data and he said: 'You guessed right. That's... that's how it happened to make that distinction.' We thought it was very funny.\n>\n> A similar thing happened here in the United States at one of our research institutions. Where a perceptron had been trained to distinguish between---this was for military purposes---It could... it was looking at a scene of a forest in which there were camouflaged tanks in one picture and no camouflaged tanks in the other. And the perceptron---after a little training---got... made a 100% correct distinction between these two different sets of photographs. Then they were embarrassed a few hours later to discover that the two rolls of film had been developed differently. 
And so these pictures were just a little darker than all of these pictures and the perceptron was just measuring the total amount of light in the scene. But it was very clever of the perceptron to find some way of making the distinction.\n\n### 2000s\n\n[Eliezer Yudkowsky](https://www.yudkowsky.net/), [2008-08-24](https://www.lesswrong.com/posts/PoDAyQMWEXBBBEJ5P/magical-categories) (similarly quoted in [\"Artificial Intelligence as a Negative and Positive Factor in Global Risk\"](https://intelligence.org/files/AIPosNegFactor.pdf), \"Artificial Intelligence in global risk\" in _Global Catastrophic Risks_ 2011, & \"Friendly Artificial Intelligence\" in _Singularity Hypotheses_ 2013):\n\n> Once upon a time---I've seen this story in several versions and several places, sometimes cited as fact, but I've never tracked down an original source---once upon a time, I say, the US Army wanted to use neural networks to automatically detect camouflaged enemy tanks. The researchers trained a neural net on 50 photos of camouflaged tanks amid trees, and 50 photos of trees without tanks. Using standard techniques for supervised learning, the researchers trained the neural network to a weighting that correctly loaded the training set---output \"yes\" for the 50 photos of camouflaged tanks, and output \"no\" for the 50 photos of forest. Now this did not prove, or even imply, that new examples would be classified correctly. The neural network might have \"learned\" 100 special cases that wouldn't generalize to new problems. Not, \"camouflaged tanks versus forest\", but just, \"photo-1 positive, photo-2 negative, photo-3 negative, photo-4 positive...\" But wisely, the researchers had originally taken 200 photos, 100 photos of tanks and 100 photos of trees, and had used only half in the training set. The researchers ran the neural network on the remaining 100 photos, and *without further training* the neural network classified all remaining photos correctly. Success confirmed! The researchers handed the finished work to the Pentagon, which soon handed it back, complaining that in their own tests the neural network did no better than chance at discriminating photos. It turned out that in the researchers' data set, photos of camouflaged tanks had been taken on cloudy days, while photos of plain forest had been taken on sunny days. The neural network had learned to distinguish cloudy days from sunny days, instead of distinguishing camouflaged tanks from empty forest. This parable---which might or might not be fact---illustrates one of the most fundamental problems in the field of supervised learning and in fact the whole field of Artificial Intelligence...\n\nGordon Rugg, [_Using Statistics: A Gentle Introduction_](https://books.google.com/books?id=S9lsBnV7txoC), 2007-10-01 (pg114--115):\n\n> *Neural nets and genetic algorithms (including the story of the Russian tanks)*: Neural nets (or artificial neural networks, to give them their full name) are pieces of software inspired by the way the human brain works. In brief, you can train a neural net to do tasks like classifying images by giving it lots of examples, and telling it which examples fit into which categories; the neural net works out for itself what the defining characteristics are for each category. Alternatively, you can give it a large set of data and leave it to work out connections by itself, without giving it any feedback. There's a story, which is probably an urban legend, which illustrates how the approach works and what can go wrong with it. 
According to the story, some NATO researchers trained a neural net to distinguish between photos of NATO and Warsaw Pact tanks. After a while, the neural net could get it right every time, even with photos it had never seen before. The researchers had gleeful visions of installing neural nets with miniature cameras in missiles, which could then be fired at a battlefield and left to choose their own targets. To demonstrate the method, and secure funding for the next stage, they organised a viewing by the military. On the day, they set up the system and fed it a new batch of photos. The neural net responded with apparently random decisions, sometimes identifying NATO tanks correctly, sometimes identifying them mistakenly as Warsaw Pact tanks. This did not inspire the powers that be, and the whole scheme was abandoned on the spot. It was only afterwards that the researchers realised that all their training photos of NATO tanks had been taken on sunny days in Arizona, whereas the Warsaw Pact tanks had been photographed on grey, miserable winter days on the steppes, so the neural net had flawlessly learned the unintended lesson that if you saw a tank on a gloomy day, then you made its day even gloomier by marking it for destruction.\n\nN. Katherine Hayles, \"Computing the Human\" (_Inventive Life: Approaches to the New Vitalism_, Fraser et al 2006; pg424):\n\n> While humans have for millennia used what Cariani calls 'active sensing'---'poking, pushing, bending'---to extend their sensory range and for hundreds of years have used prostheses to create new sensory experiences (for example, microscopes and telescopes), only recently has it been possible to construct evolving sensors and what [Cariani (1998: 718)](/doc/transhumanism/1998-cariani.pdf \"Epistemic Autonomy through Adaptive Sensing\") calls 'internalized sensing', that is, \"bringing the world into the device\" by creating internal, analog representations of the world out of which internal sensors extract newly-relevant properties'.\n>\n> ...Another conclusion emerges from Cariani's call (1998) for research in sensors that can adapt and evolve independently of the epistemic categories of the humans who create them. The well-known and perhaps apocryphal story of the neural net trained to recognize army tanks will illustrate the point. For obvious reasons, the army wanted to develop an intelligent machine that could discriminate between real and pretend tanks. A neural net was constructed and trained using two sets of data, one consisting of photographs showing plywood cutouts of tanks and the other actual tanks. After some training, the net was able to discriminate flawlessly between the situations. As is customary, the net was then tested against a third data set showing pretend and real tanks in the same landscape; it failed miserably. Further investigation revealed that the original two data sets had been filmed on different days. One of the days was overcast with lots of clouds, and the other day was clear. The net, it turned out, was discriminating between the presence and absence of clouds. The anecdote shows the ambiguous potential of epistemically autonomous devices for categorizing the world in entirely different ways from the humans with whom they interact. 
While this autonomy might be used to enrich the human perception of the world by revealing novel kinds of constructions, it also can create a breed of autonomous devices that parse the world in radically different ways from their human trainers.\n>\n> A counter-narrative, also perhaps apocryphal, emerged from the 1991 Gulf War. US soldiers firing at tanks had been trained on simulators that imaged flames shooting out from the tank to indicate a kill. When army investigators examined Iraqi tanks that were defeated in battles, they found that for some tanks the soldiers had fired four to five times the amount of munitions necessary to disable the tanks. They hypothesized that the overuse of firepower happened because no flames shot out, so the soldiers continued firing. If the hypothesis is correct, human perceptions were altered in accord with the idiosyncrasies of intelligent machines, providing an example of what can happen when human-machine perceptions are caught in a feedback loop with one another.\n\nLinda Null & Julie Lobur, [_The Essentials of Computer Organization and Architecture_ (third edition)](https://books.google.com/books?id=GKgxDwAAQBAJ), 2003/2014 (pg439--440 in 1st edition, pg658 in 3rd edition):\n\n> Correct training requires thousands of steps. The training time itself depends on the size of the network. As the number of perceptrons increases, the number of possible \"states\" also increases.\n>\n> Let's consider a more sophisticated example, that of determining whether a tank is hiding in a photograph. A neural net can be configured so that each output value correlates to exactly one pixel. If the pixel is part of the image of a tank, the net should output a one; otherwise, the net should output a zero. The input information would most likely consist of the color of the pixel. The network would be trained by feeding it many pictures with and without tanks. The training would continue until the network correctly identified whether the photos included tanks. The U.S. military conducted a research project exactly like the one we just described. One hundred photographs were taken of tanks hiding behind trees and in bushes, and another 100 photographs were taken of ordinary landscape with no tanks. Fifty photos from each group were kept \"secret,\" and the rest were used to train the neural network. The network was initialized with random weights before being fed one picture at a time. When the network was incorrect, it adjusted its input weights until the correct output was reached. Following the training period, the 50 \"secret\" pictures from each group of photos were fed into the network. The neural network correctly identified the presence or absence of a tank in each photo. The real question at this point has to do with the training---had the neural net actually learned to recognize tanks? The Pentagon's natural suspicion led to more testing. Additional photos were taken and fed into the network, and to the researchers' dismay, the results were quite random. The neural net could not correctly identify tanks within photos. After some investigation, the researchers determined that in the original set of 200 photos, all photos with tanks had been taken on a cloudy day, whereas the photos with no tanks had been taken on a sunny day. The neural net had properly separated the two groups of pictures, but had done so using the color of the sky to do this rather than the existence of a hidden tank. 
The government was now the proud owner of a very expensive neural net that could accurately distinguish between sunny and cloudy days!\n>\n> This is a great example of what many consider the biggest issue with neural networks. If there are more than 10 to 20 neurons, it is impossible to understand how the network is arriving at its results. One cannot tell if the net is making decisions based on correct information, or, as in the above example, something totally irrelevant. Neural networks have a remarkable ability to derive meaning and extract patterns from data that are too complex to be analyzed by human beings. However, some people trust neural networks to be experts in their area of training. Neural nets are used in such areas as sales forecasting, risk management, customer research, undersea mine detection, facial recognition, and data validation. Although neural networks are promising, and the progress made in the past several years has led to significant funding for neural net research, many people are hesitant to put confidence in something that no human being can completely understand.\n\nDavid Gerhard, [\"Pitch Extraction and Fundamental Frequency: History and Current Techniques\"](http://sapyc.espe.edu.ec/evcarrera/DSP/pitch.pdf), Technical Report TR-CS 2003--06, November 2003:\n\n> The choice of the dimensionality and domain of the input set is crucial to the success of any connectionist model. A common example of a poor choice of input set and test data is the Pentagon's foray into the field of object recognition. This story is probably apocryphal and many different versions exist on-line, but the story describes a true difficulty with neural nets.\n>\n> As the story goes, a network was set up with the input being the pixels in a picture, and the output was a single bit, yes or no, for the existence of an enemy tank hidden somewhere in the picture. When the training was complete, the network performed beautifully, but when applied to new data, it failed miserably. The problem was that in the test data, all of the pictures that had tanks in them were taken on cloudy days, and all of the pictures without tanks were taken on sunny days. The neural net was identifying the existence or non-existence of sunshine, not tanks.\n\n[Rice lecture #24, \"COMP 200: Elements of Computer Science\"](https://www.clear.rice.edu/comp200/02spring/Lecture-notes/lec24.txt), 2002-03-18:\n\n> d. Tanks in Desert Storm\n>\n> Sometimes you have to be careful what you train on . . .\n>\n> The problem with neural nets is that you never know what features they're actually training on. For example:\n>\n> The US military tried to use neural nets in Desert Storm for tank recognition, so unmanned tanks could identify enemy tanks and destroy them. They trained the neural net on multiple images of \"friendly\" and enemy tanks, and eventually had a decent program that seemed to correctly identify friendly and enemy tanks.\n>\n> Then, when they actually used the program in a real-world test phase with actual tanks, they found that the tanks would either shoot at nothing or shoot at everything. They certainly seemed to be incapable of distinguishing friendly or enemy tanks.\n>\n> Why was this? It turns out that the images they were training on always had glamour-shot type photos of friendly tanks, with an immaculate blue sky, etc. The enemy tank photos, on the other hand, were all spy photos, not very clear, sometimes fuzzy, etc. 
And it was these characteristics that the neural net was training on, not the tanks at all. On a bright sunny day, the tanks would do nothing. On an overcast, hazy day, they'd start firing like crazy . . .\n\nAndrew Ilachinski, _Cellular Automata: A Discrete Universe_, 2001 (pg547):\n\n> There is an telling story about how the Army recently went about teaching a backpropagating net to identify tanks set against a variety of environmental backdrops. The programmers correctly fed their multi-layer net photograph after photograph of tanks in grasslands, tanks in swamps, no tanks on concrete, and so on. After many trials and many thousands of iterations, their net finally learned all of the images in their database. The problem was that when the presumably \"trained\" net was tested with other images that were not part of the original training set, it failed to do any better than what would be expected by chance. What had happened was that the input/training fact set was statistically corrupt. The database consisted mostly of images that showed a tank only if there were heavy clouds, the tank itself was immersed in shadow or there was no sun at all. The Army's neural net had indeed identified a latent pattern, but it unfortunately had nothing to do with tanks: it had effectively learned to identify the time of day! The obvious lesson to be taken away from this amusing example is that how well a net \"learns\" the desired associations depends almost entirely on how well the database of facts is defined. Just as Monte Carlo simulations in statistical mechanics may fall short of intended results if they are forced to rely upon poorly coded random number generators, so do backpropagating nets typically fail to achieve expected results if the facts they are trained on are statistically corrupt.\n\n[_Intelligent Data Analysis In Science_](/doc/ai/nn/2000-cartwright-intelligentdataanalysisinscience.pdf), Hugh M. Cartwright 2000, pg126, writes (according to Google Books's snippet view; Cartwright's version appears to be a direct quote or close paraphrase of an earlier 1994 chemistry paper, Goodacre et al 1994):\n\n> ...television programme [_Horizon_](!W \"Horizon (UK TV series)\"); a neural network was trained to attempt to distinguish tanks from trees. Pictures were taken of forest scenes lacking military hardware and of similar but perhaps less bucolic landscapes which also contained more-or-less camouflaged battle tanks. A neural network was trained with these input data and found to differentiate successfully between tanks and trees. However, when a new set of pictures was analysed by the network, it failed to detect the tanks. After further investigation, it was found...\n\nDaniel Robert Franklin & Philippe Crochat, [`libneural` tutorial](https://web.archive.org/web/20001029201251/http://ieee.uow.edu.au/~daniel/software/libneural/BPN_tutorial/BPN_English/BPN_English/node9.html), 2000-03-23:\n\n> A neural network is useless if it only sees one example of a matching input/output pair. It cannot infer the characteristics of the input data for which you are looking for from only one example; rather, many examples are required. This is analogous to a child learning the difference between (say) different types of animals---the child will need to see several examples of each to be able to classify an arbitrary animal... It is the same with neural networks. 
The best training procedure is to compile a wide range of examples (for more complex problems, more examples are required) which exhibit all the different characteristics you are interested in. It is important to select examples which do not have major dominant features which are of no interest to you, but are common to your input data anyway. One famous example is of the US Army \"Artificial Intelligence\" tank classifier. It was shown examples of Soviet tanks from many different distances and angles on a bright sunny day, and examples of US tanks on a cloudy day. Needless to say it was great at classifying weather, but not so good at picking out enemy tanks.\n\n### 1990s\n\n[\"Neural Network Follies\"](https://neil.fraser.name/writing/tank/), Neil Fraser, September 1998:\n\n> In the 1980s, the Pentagon wanted to harness computer technology to make their tanks harder to attack...The research team went out and took 100 photographs of tanks hiding behind trees, and then took 100 photographs of trees---with no tanks. They took half the photos from each group and put them in a vault for safe-keeping, then scanned the other half into their mainframe computer. The huge neural network was fed each photo one at a time and asked if there was a tank hiding behind the trees. Of course at the beginning its answers were completely random since the network didn't know what was going on or what it was supposed to do. But each time it was fed a photo and it generated an answer, the scientists told it if it was right or wrong. If it was wrong it would randomly change the weightings in its network until it gave the correct answer. Over time it got better and better until eventually it was getting each photo correct. It could correctly determine if there was a tank hiding behind the trees in any one of the photos...So the scientists took out the photos they had been keeping in the vault and fed them through the computer. The computer had never seen these photos before---this would be the big test. To their immense relief the neural net correctly identified each photo as either having a tank or not having one. *Independent testing*: The Pentagon was very pleased with this, but a little bit suspicious. They commissioned another set of photos (half with tanks and half without) and scanned them into the computer and through the neural network. The results were completely random. For a long time nobody could figure out why. After all nobody understood how the neural had trained itself. Eventually someone noticed that in the original set of 200 photos, all the images with tanks had been taken on a cloudy day while all the images without tanks had been taken on a sunny day. The neural network had been asked to separate the two groups of photos and it had chosen the most obvious way to do it---not by looking for a camouflaged tank hiding behind a tree, but merely by looking at the color of the sky...This story might be apocryphal, but it doesn't really matter. It is a perfect illustration of the biggest problem behind neural networks. 
Any automatically trained net with more than a few dozen neurons is virtually impossible to analyze and understand.\n\n[Tom White](https://twitter.com/dribnet/status/914945926266970112) attributes (in October 2017) to Marvin Minsky some version of the tank story being told in MIT classes 20 years before, ~1997 (but doesn't specify the detailed story or version other than apparently the results were \"classified\").\n\nVasant Dhar & Roger Stein, [_Intelligent Decision Support Methods_](/doc/ai/nn/1997-dhar-intelligentdecisionsupportmethods.pdf), 1997 (pg98, limited Google Books snippet):\n\n> ...However, when a new set of photographs were used, the results were horrible. At first the team was puzzled. But after careful inspection of the first two sets of photographs, they discovered a very simple explanation. The photos with tanks in them were all taken on sunny days, and those without the tanks were taken on overcast days. The network had *not* learned to identify tank like images; instead, it had learned to identify photographs of sunny days and overcast days.\n\nRoyston Goodacre, Mark J. Neal, & Douglas B. Kell, [\"Quantitative Analysis of Multivariate Data Using Artificial Neural Networks: A Tutorial Review and Applications to the Deconvolution of Pyrolysis Mass Spectra\"](/doc/ai/nn/fully-connected/1996-goodacre.pdf), 1994-04-29:\n\n> ...As in all other data analysis techniques, these supervised learning methods are not immune from sensitivity to badly chosen initial data (113). [113: Zupan, J. and J. Gasteiger: _Neural Networks for Chemists: An Introduction_. VCH Verlagsgesellschaft, Weinheim (1993)] Therefore the exemplars for the training set must be carefully chosen; the golden rule is \"garbage in---garbage out\". An excellent example of an unrepresentative training set was discussed some time ago on the BBC television programme _Horizon_; a neural network was trained to attempt to distinguish tanks from trees. Pictures were taken of forest scenes lacking military hardware and of similar but perhaps less bucolic landscapes which also contained more-or-less camouflaged battle tanks. A neural network was trained with these input data and found to differentiate most successfully between tanks and trees. However, when a new set of pictures was analysed by the network, it failed to distinguish the tanks from the trees. After further investigation, it was found that the first set of pictures containing tanks had been taken on a sunny day whilst those containing no tanks were obtained when it was overcast. The neural network had therefore thus learned simply to recognise the weather! We can conclude from this that the training and tests sets should be carefully selected to contain representative exemplars encompassing the appropriate variance over all relevant properties for the problem at hand.\n\nFernando Pereira, [\"neural redlining\", RISKS 16(41), 1994-09-12](http://catless.ncl.ac.uk/risks/16.41.html):\n\n> Fred's comments will hold not only of neural nets but of any decision model trained from data (eg. Bayesian models, decision trees). It's just an instance of the old \"GIGO\" phenomenon in statistical modeling...Overall, the whole issue of evaluation, let alone certification and legal standing, of complex statistical models is still very much open. (This reminds me of a possibly apocryphal story of problems with biased data in neural net training. Some US defense contractor had supposedly trained a neural net to find tanks in scenes. 
The reported performance was excellent, with even camouflaged tanks mostly hidden in vegetation being spotted. However, when the net was tested on yet a new set of images supplied by the client, the net did not do better than chance. After an embarrassing investigation, it turned out that all the tank images in the original training and test sets had very different average intensity than the non-tank images, and thus the net had just learned to discriminate between two image intensity levels. Does anyone know if this actually happened, or is it just in the neural net \"urban folklore\"?)\n\nErich Harth, [_The Creative Loop: How the Brain Makes a Mind_](/doc/ai/nn/1993-harth-thecreativeloop.pdf), 1993/1995 (pg158, limited Google Books snippet):\n\n> ...55. The net was *trained* to detect the presence of tanks in a landscape. The training consisted in showing the device many photographs of scene, some with tanks, some without. In some cases---as in the picture on page 143---the tank's presence was not very obvious. The inputs to the neural net were digitized photographs;\n\n[Hubert L. Dreyfus](!W) & [Stuart E. Dreyfus](!W), [\"What Artificial Experts Can and Cannot Do\"](https://www.jefftk.com/dreyfus92.pdf), 1992:\n\n> All the \"continue this sequence\" questions found on intelligence tests, for example, really have more than one possible answer but most human beings share a sense of what is simple and reasonable and therefore acceptable. But when the net produces an unexpected association can one say it has failed to generalize? One could equally well say that the net has all along been acting on a different definition of \"type\" and that that difference has just been revealed. For an amusing and dramatic case of creative but unintelligent generalization, consider the legend of one of connectionism's first applications. In the early days of the perceptron the army decided to train an artificial neural network to recognize tanks partly hidden behind trees in the woods. They took a number of pictures of a woods without tanks, and then pictures of the same woods with tanks clearly sticking out from behind trees. They then trained a net to discriminate the two classes of pictures. The results were impressive, and the army was even more impressed when it turned out that the net could generalize its knowledge to pictures from each set that had not been used in training the net. Just to make sure that the net had indeed learned to recognize partially hidden tanks, however, the researchers took some more pictures in the same woods and showed them to the trained net. They were shocked and depressed to find that with the new pictures the net totally failed to discriminate between pictures of trees with partially concealed tanks behind them and just plain trees. The mystery was finally solved when someone noticed that the training pictures of the woods without tanks were taken on a cloudy day, whereas those with tanks were taken on a sunny day. The net had learned to recognize and generalize the difference between a woods with and without shadows! Obviously, not what stood out for the researchers as the important difference. 
This example illustrates the general point that a net must share size, architecture, initial connections, configuration and socialization with the human brain if it is to share our sense of appropriate generalization\n\nHubert Dreyfus appears to have told this story earlier in 1990 or 1991, as a similar story appears in episode 4 ([German](https://www.youtube.com/watch?v=cG7v9eCq2u4&t=33m49s)) (starting 33m49s) of the BBC documentary series [_The Machine That Changed the World_](!W \"The Machine That Changed the World (miniseries)\"), broadcast 1991-11-08.\nHubert L. Dreyfus, [_What Computers Still Can't Do: A Critique of Artificial Reason_](/doc/ai/1992-dreyfus-whatcomputerstillcantdo.epub), 1992, repeats the story in very similar but not quite identical wording ([Jeff Kaufman](https://www.jefftk.com/p/detecting-tanks \"Detecting Tanks\") notes that Dreyfus drops the qualifying \"legend of\" description):\n\n> ...But when the net produces an unexpected association, can one say that it has failed to generalize? One could equally well say that the net has all along been acting on a different definition of \"type\" and that that difference has just been revealed. For an amusing and dramatic case of creative but unintelligent generalization, consider one of connectionism's first applications. In the early days of this work the army tried to train an artificial neural network to recognize tanks in a forest. They took a number of pictures of a forest without tanks and then, on a later day, with tanks clearly sticking out from behind trees, and they trained a net to discriminate the two classes of pictures. The results were impressive, and the army was even more impressed when it turned out that the net could generalize its knowledge to pictures that had not been part of the training set. Just to make sure that the net was indeed recognizing partially hidden tanks, however, the researchers took more pictures in the same forest and showed them to the trained net. They were depressed to find that the net failed to discriminate between the new pictures of trees with tanks behind them and the new pictures of just plain trees. After some agonizing, the mystery was finally solved when someone noticed that the original pictures of the forest without tanks were taken on a cloudy day and those with tanks were taken on a sunny day. The net had apparently learned to recognize and generalize the difference between a forest with and without shadows! This example illustrates the general point that a network must share our commonsense understanding of the world if it is to share our sense of appropriate generalization.\n\nDreyfus's _What Computers Still Can't Do_ is listed as a revision of his 1972 book, [_What Computers Can't Do: A Critique of Artificial Reason_](https://archive.org/details/whatcomputerscan017504mbp), but the tank story is not in the 1972 book, only the 1992 one.\n(Dreyfus's version is also quoted in the 2017 NYT article and Hillis 1996's _Geography, Identity, and Embodiment in Virtual Reality_, pg346.)\n\nLaveen N. Kanal, [_Artificial Neural Networks and Statistical Pattern Recognition: Old and New Connections_'s](/doc/ai/nn/1991-sethi-artificialneuralnetworksandstatisticalpatternrecognition.pdf) Foreword, discusses some early NN/tank research (predating not just LeCun's convolutions but backpropagation), 1991:\n\n> ...[Frank] Rosenblatt had not limited himself to using just a single Threshold Logic Unit but used networks of such units. 
The problem was how to train multilayer perceptron networks. A paper on the topic written by Block, Knight and Rosenblatt was murky indeed, and did not demonstrate a convergent procedure to train such networks. In 1962--63 at Philco-Ford, seeking a systematic approach to designing layered classification nets, we decided to use a hierarchy of threshold logic units with a first layer of \"feature logics\" which were threshold logic units on overlapping receptive fields of the image, feeding two additional levels of weighted threshold logic decision units. The weights in each level of the hierarchy were estimated using statistical methods rather than iterative training procedures [L.N. Kanal & N.C. Randall, [\"Recognition System Design by Statistical Analysis\"](/doc/ai/1964-kanal.pdf), Proc. 19th Conf. ACM, 1964]. We referred to the networks as two layer networks since we did not count the input as a layer. On a project to recognize tanks in aerial photography, the method worked well enough in practice that the U.S. Army agency sponsoring the project decided to classify the final reports, although previously the project had been unclassified. We were unable to publish the classified results! Then, enamored by the claimed promise of coherent optical filtering as a parallel implementation for automatic target recognition, the funding we had been promised was diverted away from our electro-optical implementation to a coherent optical filtering group. Some years later we presented the arguments favoring our approach, compared to optical implementations and trainable systems, in an article titled \"Systems Considerations for Automatic Imagery Screening\" by T.J. Harley, L.N. Kanal and N.C. Randall, which is included in the IEEE Press reprint volume titled [_Machine Recognition of Patterns_](/doc/ai/nn/1977-agrawala-machinerecognitionofpatterns.pdf) edited by A. Agrawala 1977^[The paper in question discusses general questions of necessary resolution, computing requirements, optics, necessary error rates, and algorithms, but doesn't describe any implemented systems, much less experiences which resemble the tank story.]. In the years which followed multilevel statistically designed classifiers and AI search procedures applied to pattern recognition held my interest, although comments in my 1974 survey, \"Patterns In Pattern Recognition: 1968--1974\" [IEEE Trans. on IT, 1974], mention papers by Amari and others and show an awareness that neural networks and biologically motivated automata were making a comeback. 
In the last few years trainable multilayer neural networks have returned to dominate research in pattern recognition and this time there is potential for gaining much greater insight into their systematic design and performance analysis...\n\nWhile Kanal & Randall 1964 matches in some ways, including the image counts, there is no mention of failure either in the paper or Kanal's 1991 reminiscences (rather, Kanal implies it was highly promising), there is no mention of a field deployment or additional testing which could have revealed overfitting, and given their use of binarizing, it's not clear to me that their 2-layer algorithm even *could* overfit to global brightness; the photos also appear to have been taken at low enough altitude for there to be no clouds, and to be taken under similar (possibly controlled) lighting conditions.\nThe description in Kanal & Randall 1964 is somewhat opaque to me, particularly of the 'Laplacian' they use to binarize or convert to edges, but there's more background in their [\"Semi-Automatic Imagery Screening Research Study and Experimental Investigation, Volume 1\"](http://www.dtic.mil/docs/citations/AD0410261), Harley, Bryan, Kanal, Taylor & Grayum 1962 ([mirror](/doc/ai/1962-harley.pdf)), which indicates that in their preliminary studies they were already interested in prenormalization/preprocessing images to correct for altitude and brightness, and the Laplacian, along with silhouetting and \"lineness editing\", noting that \"The Laplacian operation eliminates absolute brightness scale as well as low-spatial frequencies which are of little consequence in screening operations.\"^[Another interesting detail from Harley et al 1962 about their tank study: in discussing designing their computer 'simulation' of their quasi-NN algorithms, their description of the photographs on pg133 makes it sound as if the dataset was constructed from the *same* photographs by using large-scale aerial footage and then cropping out the small squares with tanks and then corresponding small squares without tanks---so they only had to process one set of photographs, and the resulting tank/non-tank samples are inherently matched on date, weather, time of day, lighting, general location, roll of film, camera, and photographer. If true, that would make almost all the various suggested tank problem shortcuts impossible, and would be further evidence that Kanal's project was not & could not have been a true origin of the tank story (although if it was simply *misunderstood* and erroneously critiqued, then it could be a tiny kernel of truth from which the urban legend sprang).]\n\nAn anonymous reader says he heard the story in 1990:\n\n> I was told about the tank recognition failure by a lecturer on my 1990 Intelligent Knowledge Based Systems MSc, almost certainly [Libor Spacek](https://cmp.felk.cvut.cz/~spacelib/ \"Libor Špaček homepage\"), in terms of being aware of context in data sets; that being from (the former) Czechoslovakia he expected to see tanks on a motorway whereas most British people didn't. 
I also remember reading about a project with DARPA funding aimed at differentiating Russian, European and US tanks where what the image recognition learned was not to spot the differences between tanks but to find trees, because of the US tank photos being on open ground and the Russian ones being in forests; that was during the same MSc course---so very similar to predicting tumours by looking for the ruler used to measure them in the photo---but I don't recall the source (it wasn't one of the books you cite though, it was either a journal article or another text book).\n\n### 1980s\n\n[Chris Brew](https://twitter.com/cbrew/status/920088821823344640) states (2017-10-16) that he \"Heard the story in 1984 with pigeons instead of neural nets\".\n\n### 1960s\n\n[Edward Fredkin](!W), in an email to Eliezer Yudkowsky on 2013-02-26, recounts an interesting anecdote about the 1960s claiming to be the grain of truth:\n\n> By the way, the story about the two pictures of a field, with and without army tanks in the picture, comes from me. I attended a meeting in Los Angeles [at RAND?], about half a century ago [~1963?] where someone gave a paper showing how a random net could be trained to detect the tanks in the picture. I was in the audience. At the end of the talk I stood up and made the comment that it was obvious that the picture with the tanks was made on a sunny day while the other picture (of the same field without the tanks) was made on a cloudy day. I suggested that the \"neural net\" had merely trained itself to recognize the difference between a bright picture and a dim picture.\n\n## Evaluation\n\n### Sourcing\n\nThe absence of any hard citations is striking: even when a citation is supplied, it is invariably to a relatively recent source like Dreyfus, and then the chain ends.\nTypically for a real story, one will find at least one or two hints of a penultimate citation and then a final definitive citation to some very difficult-to-obtain or obscure work (which then is often quite different from the popularized version but still recognizable as the original); for example, another popular cautionary AI urban legend is that the 1956 [Dartmouth workshop](!W) claimed that a single graduate student working for a summer could solve computer vision (or perhaps AI in general), which is a highly distorted misleading description of the [original 1955 proposal's](http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html \"'A Proposal For The Dartmouth Summer Research Project On Artificial Intelligence', McCarthy et al 1955\") realistic claim that \"a 2 month, 10 man study of artificial intelligence\" could yield \"a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.\"^[This seems entirely reasonable to me, given that hardly any AI research existed at that point. While it's unclear what results were accomplished immediately thanks to the 1956 workshop, many of the attendees would make major discoveries in AI. 
Attendee [Ray Solomonoff's](!W \"Ray Solomonoff\") wife, Grace Solomonoff ([\"Ray Solomonoff and the Dartmouth Summer Research Project in Artificial Intelligence, 1956\"](https://raysolomonoff.com/dartmouth/dartray.pdf), 2016) describes the workshop as having vivid discussions but was compromised by getting only half its funding (so it didn't last the summer) and attendees showing up sporadically & for short times (\"Many participants only showed up for a day or even less.\"); no agreement was reached on a specific project to try to tackle, although Solomonoff did write a paper there he considered important.]\nInstead, everyone either disavows it as an urban legend or possibly apocryphal, or punts to someone else.\n(Minsky's 2011 version initially seems concrete, but while he specifically attributes the musical score story to a friend & claims to have found the trick personally, he is then as vague as anyone else about the tank story, saying it just \"happened\" somewhere \"in the United States at one of our research institutes\", at an unmentioned institute by unmentioned people at an unmentioned point in time for an unmentioned branch of the military.)\n\n### Variations\n\n
\n> *Question to Radio Yerevan*: \"Is it correct that Grigori Grigorievich Grigoriev won a luxury car at the All-Union Championship in Moscow?\"\n>\n> *Radio Yerevan answered*: \"In principle, yes. But first of all it was not Grigori Grigorievich Grigoriev, but Vassili Vassilievich Vassiliev; second, it was not at the All-Union Championship in Moscow, but at a Collective Farm Sports Festival in Smolensk; third, it was not a car, but a bicycle; and fourth he didn't win it, but rather it was stolen from him.\"\n>\n> [\"Radio Yerevan Jokes\"](https://web.archive.org/web/20140908045019/http://www.bratislavaguide.com/radio-yerevan-jokes) (collected by Allan Stevo)\n
\n\nIt is also interesting that not all the stories imply quite the same problem with the hypothetical NN. Dataset bias/selection effects is not the same thing as overfitting or disparate impact, but some of the story tellers don't realize that.\nFor example, in some stories, the NN fails when it's tested on additional heldout data (overfitting), not when it's tested on data from an entire different photographer or field exercise or data source (dataset bias/distributional shift).\nOr, Alexander Harrowell cites disparate impact in a medical school as if it were an example of the same problem, but it's not---at least in the USA, a NN would be correct in inferring that white students are more likely to succeed, as that is a real predictor (this would be an example of how people play rather fast and loose with claims of \"algorithmic bias\"), and it would not necessarily be the case that, say, randomized admission of more non-white students would be certain to increase the number of successful graduates; such a scenario is, however, possible and illustrates the difference between predictive models & causal models for control & optimization, and the need for experiments/reinforcement learning.\n\nA read of all the variants together raises more questions than it answers:\n\n- Did this story happen in the 1960s, 1980s, 1990s, or during Desert Storm in the 1990s?\n- Was the research conducted by the US military, or researchers for another NATO country?\n- Were the photographs taken by satellite, from the air, on the ground, or by spy cameras?\n- Were the photographs of American tanks, plywood cutouts, Soviet tanks, or Warsaw Pact tanks?\n- Were the tanks out in the open, under cover, or fully camouflaged?\n- Were these photographs taken in forests, fields, deserts, swamps, or all of them?\n- Were the photographs taken in same place but different time of day, same place but different days, or different places entirely?\n- Were there 100, 200, or thousands of photographs; and how many were in the training vs validation set?\n- Was the input in black-and-white binary, grayscale, or color?\n- Was the tell-tale feature either field vs forest, bright vs dark, the presence vs absence of clouds, the presence vs absence of shadows, the length of shadows, or an accident in film development unrelated to weather entirely?\n- Was the NN to be used for image processing or in autonomous robotic tanks?\n- Was it even a NN?\n- Was the dataset bias caught quickly within \"a few hours\", later by a suspicious team member, later still when applied to an additional set of tank photographs, during further testing producing a new dataset, much later during a live demo for military officers, or only after live deployment in the field?\n\nAlmost every aspect of the tank story which *could* vary *does* vary.\n\n### Urban Legends\n\nWe could also compare the tank story with many of the characteristics of [urban legends](!W) (of the sort so familiar from Snopes): they typically have a clear dramatic arc, involve horror or humor while playing on common concerns (distrust of NNs has been a theme from the start of NN research[^victim-of-success]), make an important didactic or moral point, claim to be true while sourcing remains limited to social proof such as the usual \"friend of a friend\" attributions, often try to associate with a respected institution (such as the US military), are transmitted primarily orally through social mechanisms & appear spontaneously & independently in many sources without apparent origin (most 
people seem to hear the tank story in unspecified classes, conferences, or personal discussions rather than in a book or paper), exist in many mutually-contradictory variants, often with overly-specific details[^detail] spontaneously arising in the retelling, have been around for a long time (it appears almost fully formed in Dreyfus 1992, suggesting incubation before then), sometimes have a grain of truth (dataset bias certainly is real), and the full tank story is \"too good not to pass along\" (even authors who are sure it's an urban legend can't resist retelling it yet again for didactic effect or entertainment).\nThe tank story matches almost all the usual criteria for an urban legend.\n\n[^detail]: Here, the number of photographs and exactly how they were divided into training/validation sets is an oddly specific detail. This is reminiscent of religions or novels, where originally sparse and undetailed stories become elaborated and ever more detailed, with striking details added to catch the imagination. For example, the [Three Magi](!W \"Biblical Magi\") in the Christian Gospels are unnamed, but have been given by later Christians extensive fictional biographies of names ([\"Names for the Nameless in the New Testament\"](/doc/history/1980-metzger.pdf \"Metzger 1971\"); one of [many given names](!W \"List of names for the biblical nameless\")), symbolism, kingdoms, contemporary successors/descendants, martyrdoms & locations of remains...\n[^victim-of-success]: One commenter observes that the NN tank story and its ilk appear to almost always be told about neural networks, and wonders why, when dataset bias ought to be just as much a problem for other statistical/machine-learning methods like decision trees, which are capable of learning complex nonlinear problems. I could note that these anecdotes also get routinely told about genetic algorithms & evolutionary methods, so it's not purely neural, and it might be that NNs are victims of their own success: particularly as of 2017, NNs are so powerful & flexible in some areas (like computer vision) there is little competition, and so any horror stories will probably involve NNs.\n\n### Origin\n\nSo where does this urban legend come from?\nThe key anecdote appears to be Edward Fredkin's, as it precedes all other excerpts except perhaps the research Kanal describes; Fredkin's story does *not* confirm the tank story, much less all the extraneous details about photographic film being accidentally overdeveloped or robot tanks going berserk or a demo failing in front of Army brass, since he merely speculates that brightness was driving the results.\n\nBut it's easy to see how Fredkin's reasonable question could have memetically evolved into the tank story as finally fixed into published form by Dreyfus's article:\n\n#. **Setting**: Kanal & Randall set up their very small simple early perceptrons on some tiny binary aerial photos of tanks, in interesting early work, and Fredkin attends the talk sometime around 1960--1963\n#. **The Question**: Fredkin then asks in the Q&A whether the perceptron is not learning square-shapes but brightness\n#. **Punting**: of course neither Fredkin nor Kanal & Randall can know on the spot whether this critique is right or wrong (perhaps that question motivated the binarized results reported in Kanal & Randall 1964?), and the question remains unanswered\n#. 
**Anecdotizing**: but someone in the audience considers that an excellent observation about methodological flaws in NN research, and perhaps they (or Fredkin) repeat the story to others, who find it useful too, and along the way, Fredkin's *question mark* gets dropped and the *possible* flaw becomes an *actual* flaw, with the punchline: \"...and it turned out their NN was just detecting average brightness!\"\n\n One might expect Kanal & Randall to rebut these rumors, if only by publishing additional papers on their functioning system, but by a quirk of fate, as Kanal explains in his preface, after their 1964 paper, the Army liked it enough to make it classified and then they were reassigned to an entirely different task, killing progress entirely. (Something similar happened to [the best early facial recognition systems](https://www.wired.com/story/secret-history-facial-recognition/ \"The Secret History of Facial Recognition: Sixty years ago, a sharecropper's son invented a technology to identify faces. Then the record of his role all but vanished. Who was Woody Bledsoe, and who was he working for?\").)\n#. **Proliferation**: In the absence of any counternarrative (silence is considered consent), the tank story continues spreading.\n#. **Mutation**: but now the story is incomplete, a joke missing most of the setup to its punchline---*how* did these Army researchers discover the NN had tricked them, and where did the brightness difference come from? The various versions propose different resolutions, and likewise, appropriate details about the tank data must be invented.\n#. [**Fixation**](!W \"Fixation (population genetics)\"): Eventually, after enough mutations, a version reaches Dreyfus, already a well-known critic of the AI establishment, who then uses it in his article/book, virally spreading it globally to pop up in random places thenceforth, and fixating it as a universally-known _ur_-text. (Further memetic mutations can and often will occur, but diligent writers & researchers will 'correct' variants by returning to the Dreyfus version.)\n\nOne might try to write Dreyfus off as a coincidence and argue that the US Army *must* have had so many neural net research programs going that one of the others is the real origin, but one would expect those programs to result in spinoffs, more reports, reports since declassified, etc. It's been half a century, after all. And despite the close association of the US military with MIT and early AI work, tanks do not seem to have been a major focus of early NN research---for example, [Schmidhuber's history](https://arxiv.org/abs/1404.7828#schmidhuber \"'Deep Learning in Neural Networks: An Overview', Schmidhuber 2014\") does not mention tanks at all, and most of my paper searches kept pulling up NN papers about 'tanks' as in vats, such as controlling stirring/mixing tanks for chemistry.\nNor is it a safe assumption that the military always has much more advanced technology than the public or private sectors; often, they can be quite behind or at the status quo.[^NSA]\n\n[^NSA]: One memorable example of this for me was when the Edward Snowden NSA leaks began.\n\n Surely, given previous instances like differential cryptanalysis or public-key cryptography, the NSA had any number of amazing technologies and moon math beyond the ken of the rest of us? I read many of the presentations with great interest, particularly about how they searched for individuals or data---cutting-edge neural networks? Evolutionary algorithms? Even more exotic techniques?
Nope---regexps, linear models, and random forests. Practical but boring. Nor did any major cryptographic breakthroughs become exposed via Snowden.\n\n Overall, the NSA corpus indicates that they had the abilities you would expect from a large group of patient programmers with no ethics given a budget of billions of dollars to spend on a mission whose motto was \"hack the planet\" using a comprehensive set of methods ranging from physical breakins & bugs, theft of private keys, bribery, large-scale telecommunications tapping, implanting backdoors, purchase & discovery of unpatched vulnerabilities, & standards process subversion. Highly effective in the aggregate but little that people hadn't expected or long speculated about in the abstract.\n\n# Could it Happen?\n\nCould something like the tank story (a NN learning to distinguish solely on average brightness levels) happen in 2017 with state-of-the-art techniques like convolutional neural networks (CNNs)?\n(After all, presumably nobody *really* cares about what mistakes a crude perceptron may or may not have once made back in the 1960s; most/all of the story-tellers are using it for didactic effect in warning against carelessness in contemporary & future AI research/applications.)\nI would guess that while it could happen, it would be considerably less likely now than then for several reasons:\n\n#. a common preprocessing step in computer vision (and NNs in general) is to \"whiten\" the image by standardizing or transforming pixels to a normal distribution; this would tend to wipe global brightness levels, promoting invariance to illumination\n#. in addition to or instead of whitening, it is also common to use aggressive \"data augmentation\": shifting the image by a few pixels in each direction, cropping it randomly, adjusting colors to be slightly more red/green/blue, flipping horizontally, barrel-warping it, adding JPEG compression noise/artifacts, brightening or darkening, etc.\n\n None of these transformations should affect whether an image is classifiable as \"dog\" or \"cat\"^[Although there are occasional exceptions where a data augmentation *doesn't* preserve important semantics: you wouldn't want to use horizontal flips with street signs.], the reasoning goes, so the NN should learn to see past them, and generating variants during training provides additional data for free. Aggressive data augmentation would make it harder to pick up global brightness as a cheap trick.\n#. CNNs have built-in biases (compared to fully-connected neural networks) towards edges and other structures, rather than global averages; convolutions want to find edges and geometric patterns like little squares for tanks. (This point is particularly germane in light of the brain inspiration for convolutions & Dreyfus & Dreyfus 1992's interpretation of the tank story.)\n#. image classification CNNs, due to their large sizes, are often trained on large datasets with many classes to categorize images into (canonically, ImageNet with 1000 classes over a million images; much larger datasets, such as 300 million images, have been explored and found to still offer benefits). Perforce, most of these images will not be generated by the dataset maintainer and will come from a wide variety of peoples, places, cameras, and settings, reducing any systematic biases. 
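
    (To make the first two points concrete, here is a minimal sketch of the kind of whitening + augmentation pipeline meant above, assuming PyTorch/torchvision; the particular transforms & magnitudes are purely illustrative, not taken from any specific system:)

    ```python
    # Sketch only: a typical preprocessing/augmentation pipeline in torchvision.
    # The per-image standardization wipes out global brightness, and the random
    # crops/flips/color-jitter make a "brightness shortcut" still harder to learn.
    import torchvision.transforms as T

    train_transform = T.Compose([
        T.RandomResizedCrop(224),                    # random crop & rescale
        T.RandomHorizontalFlip(),                    # mirror half the images
        T.ColorJitter(brightness=0.4, contrast=0.4,
                      saturation=0.4),               # random lighting/color shifts
        T.ToTensor(),
        T.Lambda(lambda x: (x - x.mean()) / x.std().clamp(min=1e-6)),  # per-image 'whitening'
    ])
    ```
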
    It would be difficult to find a cheap trick which works over many of those categories simultaneously, and the NN training will constantly erode any category-specific tricks in favor of more generalizable pattern-recognition (in part because there's no inherent 'modularity' which could factor a NN into a \"tank cheap trick\" NN & an \"everything else real pattern-recognition\" NN). The power of generalizable abstractions will tend to overwhelm the shortcuts, and the more data & tasks a NN is trained on, providing greater supervision & richer insight, the more this will be the case.\n\n - Even in the somewhat unusual case of a special-purpose binary classification CNN being trained on a few hundred images, because of the large sizes of good CNNs, it is typical to at least start with a pretrained ImageNet CNN in order to benefit from all the learned knowledge about edges & whatnot before \"finetuning\" on the special-purpose small dataset. If the CNN starts with a huge inductive bias towards edges etc., it will have a hard time throwing away its informative priors and focusing purely on global brightness. (Often in finetuning, the lower levels of the CNN aren't allowed to change at all!)\n - Another variant on transfer learning is to use the CNN as a feature-generator, by taking the final layers' state computed on a specific image and using it as a vector embedding, a sort of summary of everything about the image content relevant to classification; this embedding is useful for other kinds of CNNs for purposes like style transfer (style transfer aims to warp an image towards the appearance of another image while preserving the embedding and thus presumably the content) or for GANs generating images (the discriminator can use the features to detect \"weird\" images which don't make sense, thereby forcing the generator to learn what images correspond to realistic embeddings).\n#. CNNs would typically throw warning signs before a serious field deployment, either in diagnostics or failures to extend the results.\n\n - One benefit of the filter setup of CNNs is that it's easy to visualize what the lower layers are 'looking at'; typically, CNN filters will look like diagonal or horizontal lines or curves or other simple geometric patterns. In the case of a hypothetical brightness-detector CNN, because it is not recognizing any shapes whatsoever or doing anything but trivial brightness averaging, one would expect its filters to look like random noise and definitely nothing like the usual filter visualizations. This would immediately alarm any deep learning researcher that the CNN is not learning what they thought it was learning.\n - Related to filter visualization is input visualization: it's common to generate some heatmaps of input images to see what regions of the input image are influencing the classification the most. If you are classifying \"cats vs dogs\", you expect a heatmap of a cat image to focus on the cat's head and tail, for example, and not on the painting on the living room wall behind it; if you have an image of a tank in a forest, you expect the heatmap to focus on the tank rather than trees in the corner or nothing in particular, just random-seeming pixels all over the image. If it's not focusing on the tank at all, one would then wonder, how is it doing the classification? A crude version of this check can be run by hand, as sketched below.
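
    (A minimal sketch of such a hand-rolled occlusion check, assuming a trained PyTorch classifier `model` and a preprocessed image tensor `img` (both hypothetical names): a blank patch is slid across the image and the drop in the 'tank' probability recorded at each position; big drops should land on the tank, not on the sky:)

    ```python
    # Sketch only: occlusion-sensitivity heatmap for a hypothetical tank/no-tank classifier.
    # `model` is assumed to be a trained torch.nn.Module returning class logits;
    # `img` a (3, H, W) tensor preprocessed the same way as the training data.
    import torch

    def occlusion_heatmap(model, img, target_class=1, patch=16, stride=8):
        model.eval()
        _, H, W = img.shape
        with torch.no_grad():
            base = torch.softmax(model(img.unsqueeze(0)), dim=1)[0, target_class]
            heat = torch.zeros((H - patch) // stride + 1, (W - patch) // stride + 1)
            for i, y in enumerate(range(0, H - patch + 1, stride)):
                for j, x in enumerate(range(0, W - patch + 1, stride)):
                    occluded = img.clone()
                    occluded[:, y:y+patch, x:x+patch] = 0.0  # blank out one patch
                    prob = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
                    heat[i, j] = base - prob  # large drop = this region drove the decision
        return heat  # high values should sit on the tank, not the sky or the image corners
    ```
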
([\"Picasso: A Modular Framework for Visualizing the Learning Process of Neural Network Image Classifiers\"](https://arxiv.org/abs/1705.05627) ([blog](https://medium.com/merantix/picasso-a-free-open-source-visualizer-for-cnns-d8ed3a35cfc5 \"Picasso: A free open-source visualizer for Convolutional Neural Networks; Cloudy with a chance of tanks\")), Henderson & Rothe 2017-05-16 quote Yudkowsky 2008's version of the tank story as a motivation for their heatmap visualization tool and demonstrate that, for example, blocking out the sky in a tank image doesn't bother a VGG-16 CNN image classifier but block the tank's treads does, and the heatmap focuses on the tank itself.) There are additional methods for trying to understand whether the NN has learned a potentially useful algorithm using other methods such as the previously cited LIME.\n#. Also related to the visualization is going beyond classification to the logical next step of \"localization\" or \"image segmentation\": having detected an image with a tank in it *somewhere*, it is natural (especially for military purposes) to ask *where* in the image the tank is?\n\n A CNN which is truly detecting the tank itself will lend itself to image segmentation (eg. CNN success in reaching human levels of ImageNet classification performance have also resulted in extremely good segmentation of an image by categorizing each pixel as human/dog/cat/etc), while one learning the cheap trick of brightness will utterly fail at guessing better than chance which pixels are the tank.\n\nSo, it is highly unlikely that a CNN trained via a normal workflow (data-augmented finetuning of a pretrained ImageNet CNN with standard diagnostics) would fail in this exact way or, at least, make it to a deployed system without failing.\n\n## Could Something Like it Happen?\n\nCould something *like* the tank story happen, in the sense of a selection-biased dataset yielding NNs which fail dismally in practice?\nOne could imagine it happening and it surely does at least occasionally, but in practice it doesn't seem to be a particularly serious or common problem---people routinely apply CNNs to very different contexts with considerable success.^[It amuses me to note when websites or tools are clearly using ImageNet CNNs, because they assume ImageNet categories or provide annotations in their metadata, or because they exhibit uncannily good recognition of dogs. 
Sometimes CNNs are much better than they are given credit for being and they are *assumed* by commenters to fail on problems they actually succeed on; for example, some meme images have circulated claiming that CNNs can't distinguish fried chickens from [Labradoodle](!W) dogs, chihuahuas from muffins, or sleeping dogs from bagels---but as amusing as the image-sets are, [Miles Brundage](https://twitter.com/Miles_Brundage/status/874448037929725952) reports that [Clarifai's](https://www.clarifai.com/) CNN API has little trouble accurately distinguishing man's worst food from man's best friend.]\nIf it's such a serious and common problem, one would think that people would be able to provide a wealth of real-world examples of systems deployed with dataset bias making it entirely useless, rather than repeating a fiction from 50 years ago.\n\nOne of the most relevant (if unfortunately older & possibly out of date) papers I've read on this question of dataset bias is [\"Unbiased Look at Dataset Bias\"](https://pdfs.semanticscholar.org/b9f2/04abd29874f72840b5eb204d38938e167054.pdf), Torralba & Efros 2011:\n\n> Datasets are an integral part of contemporary object recognition research. They have been the chief reason for the considerable progress in the field, not just as source of large amounts of training data, but also as means of measuring and comparing performance of competing algorithms. At the same time, datasets have often been blamed for narrowing the focus of object recognition research, reducing it to a single benchmark performance number. Indeed, some datasets, that started out as data capture efforts aimed at representing the visual world, have become closed worlds unto themselves (eg. the Corel world, the Caltech101 world, the PASCAL VOC world). With the focus on beating the latest benchmark numbers on the latest dataset, have we perhaps lost sight of the original purpose?\n>\n> The goal of this paper is to take stock of the current state of recognition datasets. We present a comparison study using a set of popular datasets, evaluated based on a number of criteria including: relative data bias, cross-dataset generalization, effects of closed-world assumption, and sample value. The experimental results, some rather surprising, suggest directions that can improve dataset collection as well as algorithm evaluation protocols. But more broadly, the hope is to stimulate discussion in the community regarding this very important, but largely neglected issue.\n\nThey demonstrate on several datasets (including ImageNet), that it's possible for a SVM (CNNs were not used) to guess at above chance levels what dataset an image comes from and that there are noticeable drops in accuracy when a classifier trained on one dataset is applied to ostensibly the same category in another dataset (eg. 
an ImageNet \"car\" SVM classifier applied to PASCAL's \"car\" images will go from 57% to 36% accuracy).\nBut---perhaps the glass is half-full---in none of the pairs does the performance degrade to near-zero, so despite the definite presence of dataset bias, the SVMs are still learning generalizable, transferable image classification (similarly, [Jo & Bengio 2017](https://arxiv.org/abs/1711.11561 \"Measuring the tendency of CNNs to Learn Surface Statistical Regularities\")/[Recht et al 2018](https://arxiv.org/abs/1806.00451 \"Do CIFAR-10 Classifiers Generalize to CIFAR-10?\")/[Recht et al 2019](https://arxiv.org/abs/1902.10811 \"Do ImageNet Classifiers Generalize to ImageNet?\")^[Recht et al 2019's ImageNet-v2 turns out to illustrate some [subtle issues in measuring dataset bias](https://gradientscience.org/data_rep_bias/ \"'Identifying Statistical Bias in Dataset Replication [blog]', Engstrom et al 2020\") ([Engstrom et al 2020](https://gradientscience.org/data_rep_bias.pdf \"Identifying Statistical Bias in Dataset Replication\")): because of measurement error in the labels of images causing errors in the final dataset, simply comparing a classifier trained on one with its performance on the other and noting that performance fell by X% yields a misleadingly inflated estimate of 'bias' by attributing the combined error of both datasets to the bias. A [Rip Van Winkle](http://www.offconvex.org/2021/04/07/ripvanwinkle/ \"'Rip van Winkle’s Razor, a Simple New Estimate for Adaptive Data Analysis', Arora & Zhang 2021\") estimate of CNN overfitting indicates it must be mild---CNNs just aren't all that algorithmically complex and thus unable to be overly-tailored to ImageNet. For much more theory on covariate shift impacts and decreases/increases in performance of NNs, see [Tripuraneni et al 2021](https://arxiv.org/abs/2111.08234 \"Covariate Shift in High-Dimensional Random Feature Regression\").]/[Yadav & Bottou 2019](https://arxiv.org/abs/1905.10498 \"Cold Case: The Lost MNIST Digits\")/[Zhang & Davison 2020](https://arxiv.org/abs/2002.02559 \"Impact of ImageNet Model Selection on Domain Adaptation\")/[Beyer et al 2020](https://arxiv.org/abs/2006.07159#google \"Are we done with ImageNet?\") show a generalization gap but only a small one with typically better in-sample classifiers performing better out-of-sample, [Kornblith et al 2018](https://arxiv.org/abs/1805.08974#google \"Do Better ImageNet Models Transfer Better?\") show that ImageNet resnets produce multiple new SOTAs on other image datasets using finetuning transfer learning, [Lapuschkin et al 2019](https://arxiv.org/abs/1902.10178 \"Unmasking Clever Hans Predictors and Assessing What Machines Really Learn\") compares Fisher vectors (an SVM trained on SIFT features, & [BiT](https://arxiv.org/abs/1912.11370#google \"'Big Transfer (BiT): Large Scale Learning of General Visual Representations for Transfer', Kolesnikov et al 2019\") is one of a number of [scaling papers](/note/scaling \"'Machine Learning Scaling', Branwen 2021\") showing much better representations & robustness & transfer with extremely large CNNs) to CNNs on PASCAL VOC again, finding the Fishers overfit by eg. 
classifying horses based on copyright watermarks while the CNN nevertheless classifies it based on the correct parts, although the CNN may succumb to a different dataset bias by classifying airplanes based on having backgrounds of skies[^Clever-Hans]); and I believe we have good reason to expect our CNNs to also work in the wild.\n\n[^Clever-Hans]: Lapuschkin et al 2019:\n\n > The first learning machine is a model based on Fisher vectors (FV) [31, 32] trained on the PASCAL VOC 2007 image dataset [33] (see §E). The model and also its competitor, a pretrained Deep Neural Network (DNN) that we fine-tune on PASCAL VOC, show both excellent state-of-the-art test set accuracy on categories such as 'person', 'train', 'car', or 'horse' of this benchmark (see Table 3). Inspecting the basis of the decisions with LRP, however, reveals for certain images substantial divergence, as the heatmaps exhibiting the reasons for the respective classification could not be more different. Clearly, the DNN's heatmap points at the horse and rider as the most relevant features (see Figure 14). In contrast, FV's heatmap is most focused onto the lower left corner of the image, which contains a source tag. A closer inspection of the data set (of 9963 samples [33]) that typically humans never look through exhaustively, shows that such source tags appear distinctively on horse images; a striking artifact of the dataset that so far had gone unnoticed [34]. Therefore, the FV model has 'overfitted' the PASCAL VOC dataset by relying mainly on the easily identifiable source tag, which incidentally correlates with the true features, a clear case of 'Clever Hans' behavior. This is confirmed by observing that artificially cutting the source tag from horse images significantly weakens the FV model's decision while the decision of the DNN stays virtually unchanged (see Figure 14). If we take instead a correctly classified image of a Ferrari and then add to it a source tag, we observe that the FV's prediction swiftly changes from 'car' to 'horse' (cf. Figure 2a) a clearly invalid decision (see §E and Figures 15--20 for further examples and analyses)... For the classification of ships the classifier is mostly focused on the presence of water in the bottom half of an image. Removing the copyright tag or the background results in a drop of predictive capabilities. A deep neural network, pre-trained in the ImageNet dataset [93], instead shows none of these shortcomings.\n\n The airplane example is a little more debatable---the presence of a lot of blue sky in airplane images seems like a valid cue to me and not necessarily cheating:\n\n > ...The SpRAy analysis could furthermore reveal another 'Clever Hans' type behavior in our fine-tuned DNN model, which had gone unnoticed in previous manual analysis of the relevance maps. The large eigengaps in the eigenvalue spectrum of the DNN heatmaps for class \"aeroplane\" indicate that the model uses very distinct strategies for classifying aeroplane images (see Figure 26). A t-SNE visualization (Figure 28) further highlights this cluster structure. One unexpected strategy we could discover with the help of SpRAy is to identify aeroplane images by looking at the artificial padding pattern at the image borders, which for aeroplane images predominantly consists of uniform and structureless blue background.
Note that padding is typically introduced for technical reasons (the DNN model only accepts square shaped inputs), but unexpectedly (and unwantedly) the padding pattern became part of the model's strategy to classify aeroplane images. Subsequently we observe that changing the manner in which padding is performed has a strong effect on the output of the DNN classifier (see Figures 29--32).\n\nSome real instances of dataset bias, more or less (most of these were caught by standard heldout datasets and arguably aren't the 'tank story' at all):\n\n- a particularly appropriate example is the unsuccessful [WWII Russian anti-tank dog program](!W \"Anti-tank dog#Deployment by the Soviet Union\"): a failure, among several reasons, because the dogs were trained on Russian tanks and sought *them* out rather than the enemy German tanks because the dogs recognized either the fuel smell or fuel canisters (diesel vs gasoline)\n- [\"The person concept in monkeys (_Cebus apella_)\"](/doc/psychology/1988-damato.pdf), D'Amato & Van Sant 1988\n- Google Photos in June 2015 caused a social-media fuss over mislabeling African-Americans as gorillas; Google did not explain how the Photos app made that mistake but it is presumably using a CNN and an example of either dataset bias (many more Caucasian/Asian faces leading to better performance on them and continued poor performance everywhere else) and/or a mis-specified loss function (the CNN optimizing a standard classification loss and responding to class imbalance or objective color similarity by preferring to guess 'gorilla' rather than 'human' to minimize loss, despite what ought to be a greater penalty for mistakenly classifying a human as an animal/object rather than vice versa). A similar issue occurred with Flickr in May 2015.\n- [\"Gender-From-Iris or Gender-From-Mascara?\"](https://arxiv.org/abs/1702.01304), Kuehlkamp et al 2017\n- Gidi Shperber, [\"What I've learned from Kaggle's fisheries competition\"](https://gidishperber.medium.com/what-ive-learned-from-kaggle-s-fisheries-competition-92342f9ca779) (2017-05-01): initial application of VGG ImageNet CNNs for transfer solved the fish photograph classification problem almost immediately, but failed on the submission validation set; fish categories could be predicted from the specific boat taking the photographs\n- [\"Leakage in data mining: Formulation, detection, and avoidance\"](https://pdfs.semanticscholar.org/829e/6bcabe9cc1bd334429215404a5adaefc7ade.pdf), Kaufman et al 2011 discusses the general topic and mentions a few examples from KDD-Cup\n- [Dan Piponi](https://twitter.com/sigfpe/status/919995891502551042) (2017-10-16): \"Real world example from work: hospitals specialise in different injuries so CNN for diagnosis used annotations on x-rays to ID hospital.\"\n\n - A more detailed examination of X-ray saliencies: [\"Confounding variables can degrade generalization performance of radiological deep learning models\"](https://arxiv.org/abs/1807.00431), Zech et al 2018 ([blog](https://jrzech.medium.com/what-are-radiological-deep-learning-models-actually-learning-f97a546c5b98 \"What are radiological deep learning models actually learning?\"))\n- [Thomas G. Dietterich](https://twitter.com/tdietterich/status/1154839042623594496):\n\n > We made exactly the same mistake in one of my projects on insect recognition. We photographed 54 classes of insects. Specimens had been collected, identified, and placed in vials. Vials were placed in boxes sorted by class. 
I hired student workers to photograph the specimens. Naturally they did this one box at a time; hence, one class at a time. Photos were taken in alcohol. Bubbles would form in the alcohol. Different bubbles on different days. The learned classifier was surprisingly good. But a saliency map revealed that it was reading the bubble patterns and ignoring the specimens. I was so embarrassed that I had made the oldest mistake in the book (even if it was apocryphal). Unbelievable. Lesson: always randomize even if you don't know what you are controlling for!\n- a possible case is Wu & Zhang 2016, [\"Automated Inference on Criminality using Face Images\"](https://pdfs.semanticscholar.org/1cd3/57b675a659413e8abf2eafad2a463272a85f.pdf), an attempt to use CNNs to classify standardized government ID photos of Chinese people by whether the person has been arrested, the source of the criminal IDs being government publications of wanted suspects vs ordinary people's IDs collected online; the photos are repeatedly described as ID photos and implied to be uniform. The use of official government ID photos taken in advance of any crime would appear to eliminate one's immediate objections about dataset bias---certainly ID photos would be distinct in many ways from ordinary cropped promotional headshots---and so the results seem strong.\n\n In response to [harsh criticism](https://www.callingbullshit.org/case_studies/case_study_criminal_machine_learning.html) (some of which points are more relevant & likely than the others...), Wu & Zhang admit in their response ([\"Responses to Critiques on Machine Learning of Criminality Perceptions (Addendum of arXiv:1611.04135)\"](https://arxiv.org/abs/1611.04135)) that the dataset is not quite as implied:\n\n > All criminal ID photos are government issued, but not mug shots. To our best knowledge, they are normal government issued ID portraits like those for driver's license in USA. In contrast, most of the noncriminal ID style photos are taken officially by some organizations (such as real estate companies, law firms, etc.) for their websites. We stress that they are not selfies.\n\n While there is no direct replication testing the Wu & Zhang 2016 results that I know of, the inherent considerable differences between the two classes, which are not homogenous at all, make me highly skeptical.\n- Possible: [Winkler et al 2019](/doc/ai/nn/cnn/2019-winkler.pdf \"Association Between Surgical Skin Markings in Dermoscopic Images and Diagnostic Performance of a Deep Learning Convolutional Neural Network for Melanoma Recognition\") examine a commercial CNN (\"Moleanalyzer-Pro\"; [Haenssle et al 2018](/doc/ai/nn/cnn/2018-haenssle.pdf \"Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists\")) for skin cancer detection. Concerned by the fact that doctors sometimes use purple markers to highlight potentially-malignant skin cancers for easier examination, they compare before/after photographs of skin cancers which have been highlighted, and find that the purple highlighting increases the probability of being classified as malignant.\n\n However, it is unclear that this is a dataset bias problem, as the existing training datasets for skin cancer are realistic and already include purple marker samples[^purple]. 
The demonstrated manipulation may simply reflect the CNN using purple as a proxy for human concern, which is an informative signal and desirable if it improves classification performance in the real world on real medical cases. It is possible that the training datasets are in fact biased to some degree with too much/too little purple or that use of purple differs systematically across hospitals, and those would damage performance to some degree, but that is not demonstrated by their before/after comparison. Ideally, one would run a field trial to test the CNN's performance as a whole by using it in various hospitals and then following up on all cases to determine benign or malignant; if the classification performance drops considerably from the original training, then that implies something (possibly the purple highlighting) has gone wrong.\n- Possible: [Esteva et al 2017](/doc/ai/nn/2017-esteva.pdf \"Dermatologist-level classification of skin cancer with deep neural networks\") trains a skin cancer classifier; the final CNN performs well in independent test sets. The paper does not mention this problem but [media coverage reported](https://www.thedailybeast.com/why-doctors-arent-afraid-of-better-more-efficient-ai-diagnosing-cancer \"Why Doctors Aren't Afraid of Better, More Efficient AI Diagnosing Cancer: Just like humans, AI isn't perfect\") that rulers in photographs served as unintentional features:\n\n > He and his colleagues had one such problem in their study with rulers. When dermatologists are looking at a lesion that they think might be a tumor, they'll break out a ruler---the type you might have used in grade school---to take an accurate measurement of its size. Dermatologists tend to do this only for lesions that are a cause for concern. So in the set of biopsy images, if an image had a ruler in it, the algorithm was more likely to call a tumor malignant, because the presence of a ruler correlated with an increased likelihood a lesion was cancerous. Unfortunately, as Novoa emphasizes, the algorithm doesn't know why that correlation makes sense, so it could easily misinterpret a random ruler sighting as grounds to diagnose cancer.\n\n It's unclear how they detected this problem or how they fixed it. And like Winkler et al 2019, it's unclear if this was a problem which would reduce real-world performance (are dermatologists going to stop measuring worrisome lesions?).\n\n[^purple]: Winkler et al 2019: \"When reviewing the open-access International Skin Imaging Collaboration database, which is a source of training images for research groups, we found that a similar percentage of melanomas (52 of 2169 [2.4%]) and nevi (214 of 9303 [2.3%]) carry skin markings. Nevertheless, it seems conceivable that either an imbalance in the distribution of skin markings in thousands of other training images that were used in the CNN tested herein or the assignment of higher weights to blue markings only in lesions with specific (though unknown) accompanying features may induce a CNN to associate skin markings with the diagnosis of melanoma. 
The latter hypothesis may also explain why melanoma probability scores remained almost unchanged in many marked nevi while being increased in others.\"\n\n# Should We Tell Stories We Know Aren't True?\n\nSo the NN tank story probably didn't happen as described, but something somewhat like it *could* have happened and things sort of like it could happen now, and it is (as proven by its history) a catchy story to warn students with---it's not true but it's [\"truthy\"](!W \"Truthiness\").\nShould we still mention it to journalists or in blog posts or in discussions of AI risk, as a noble lie?\n\nI think not.\nIn general, we should promote more epistemic rigor and higher standards in an area where there is already far too much impact of fictional stories (eg. the depressing inevitability of a _Terminator_ allusion in AI risk discussions).\nNor do I consider the story particularly effective from a didactic perspective: relegating dataset bias to mythical stories does not inform the listener about how common or how serious dataset bias is, nor is it helpful for researchers investigating countermeasures and diagnostics---the LIME developers, for example, are not helped by stories about Russian tanks, but need real testcases to show that their interpretability tools work & would help machine learning developers diagnose & fix dataset bias.\n\nI also fear that telling the tank story tends to promote complacency and underestimation of the state-of-the-art by implying that NNs and AI in general are toy systems which are far from practicality & cannot work in the real world (particularly the story variants which date it relatively recently), or that such systems when they fail will fail in easily diagnosed, visible, sometimes amusing ways, ways which can be spotted by a human comparing the photos or applying some political reasoning to the outputs. But modern NNs are powerful, are often deployed to the real world despite the spectre of dataset bias, and do not fail in blatant ways---what we actually see with deep learning are far more concerning failure modes like \"adversarial examples\" which are quite as inscrutable as the neural nets themselves (or AlphaGo's one misjudged move resulting in its only loss to Lee Sedol). Adversarial examples are particularly insidious as the NN will work flawlessly in all the normal settings and contexts, only to fail totally when exposed to a custom adversarial input.\nMore importantly, dataset bias and failure to transfer tend to be self-limiting problems, particularly when the NN is embedded in an ongoing system or reinforcement learning agent: if it is making errors based on dataset bias, it will in effect be generating new counterexample datapoints for its next iteration.\n\n## Alternative examples\n\n
\n> There is nothing so useless as doing efficiently that which should not be done at all.\n>\n> [Peter Drucker](!W)\n
\n\nThe more troubling errors are ones where the goal itself, the reward function, is mis-specified or wrong or harmful.\nI am less worried about algorithms learning to do poorly the right thing for the wrong reasons because humans are sloppy in their data collection than I am about them learning to do well the wrong thing for the right reasons despite perfect data collection.\nWith errors or inefficiencies in the rest of the algorithm, training may simply be slower, or there may be more local optima which may temporarily trap the agent, or its final performance may be worse than it could be; these are bad things, but normal enough.\nBut when the *reward function* is wrong, the better the algorithm is, the more useless (or dangerous) it becomes at [pursuing the wrong objective](https://arxiv.org/abs/2105.14111 \"'Goal Misgeneralization in Deep Reinforcement Learning', Koch et al 2021\") because [the reward hacking scales](https://arxiv.org/abs/2210.10760#openai \"‘Scaling Laws for Reward Model Overoptimization’, Gao et al 2022\"), and this may [happen abruptly](https://arxiv.org/abs/2201.03544 \"‘The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models’, Pan et al 2022\")!\nUsing losses which have little to do with the true human utility function or decision context is far more common than serious dataset bias: people think about where their data is coming from, but they tend not to think about what the consequences of wrong classifications are.\nSuch reward function problems cannot be fixed by collecting any amount of data or making data more representative of the real world, and for large-scale systems will be more harmful.\nAnd it can be hard to avoid errors: sure, in hindsight, once you've seen the converged reward hack, you can laugh and say \"of course that particular bit of reward-shaping was wrong, how obvious now!\"---but only in hindsight.\nBefore then, the absence of the hack is just common sense: we are [blinded by our knowledge](/unseeing \"‘On Seeing Through and Unseeing: The Hacker Mindset’, Branwen 2012\"), which is a burden optimization processes do not share.\n\nUnfortunately, I know of no particularly comprehensive lists of examples of mis-specified rewards/unexpectedly bad proxy objective functions/\"reward hacking\"/\"wireheading\"/\"perverse instantiation\"^[Getting into more general economic, behavioral, or human situations would be going too far afield, but the relevant analogues are \"[principal-agent problem](!W)\", \"[perverse incentives](!W)\", \"law of [unintended consequences](!W)\", \"[Lucas critique](!W)\", \"[Goodhart’s law](!W)\", or \"[Campbell’s law](!W)\"; such alignment problems are only partially dealt with by having ground-truth evolutionary ['outer' losses](/backstop \"'Evolution as Backstop for Reinforcement Learning', Branwen 2018\"), and avoiding reward hacking remains an open problem (even in theory). 
[Speedrun](!W) gaming communities frequently provide examples of reward-hacking, particularly when games are finished faster by exploiting bugs to [sequence break](!W \"Sequence breaking\"); particularly esoteric techniques require outright hacking the [\"weird machines\"](/turing-complete#security-implications) present in many games/devices---for example, [pannenkoek2012's](!W \"pannenkoek2012\") ['parallel universes'](https://pannenkoek2012.fandom.com/wiki/Parallel_Universe) [_Super Mario 64_](!W) hack which [avoids using any jumps](https://www.youtube.com/watch?v=kpk2tdsPh0A \"SM64 - Watch for Rolling Rocks - 0.5× A Presses (Commentated)\") by exploiting an [integer overflow](!W) bug & [modulo](!W \"Modular arithmetic\") wraparound to accelerate Mario to near-infinite speed, passing through the entire map multiple times, in order to stop at the right place. ]; perhaps people can make suggestions, but a few examples I have found or recall include:\n\n- [linear programming](!W) optimization for nutritious (not necessarily palatable!) low-cost diets: [\"The cost of subsistence\"](/doc/statistics/decision/1945-stigler.pdf), Stigler 1945, [\"The Diet Problem\"](/doc/statistics/decision/1990-dantzig.pdf), Dantzig 1990, [\"Stigler’s Diet Problem Revisited\"](/doc/statistics/decision/2001-garille.pdf), Garille & Gass 2001\n\n - SMT/SAT solvers are likewise infamous for finding strictly valid yet surprising or useless solutions, which perversity is exactly what makes them so invaluable in security/formal-verification research (for example, in RISC-V verification of exceptions, discovering that it can trigger an exception by turning on a [debug unit & setting a breakpoint](https://twitter.com/oe1cxw/status/957409526940094464), or using an obscure [memory mode setting](https://twitter.com/oe1cxw/status/958704985495175169))\n- boat race reward-shaping for picking up targets results in not finishing the race at all but going in circles to hit targets: [\"Faulty Reward Functions in the Wild\"](https://openai.com/research/faulty-reward-functions), OpenAI\n- a classic 3D robot-arm NN agent, in a somewhat unusual setup where the evaluator/reward function is another NN trained to predict human evaluations, learns to move the arm to a position which *looks* like it is positioned at the goal but is actually just in between the 'camera' and the goal: [\"Learning from Human Preferences\"](https://openai.com/research/learning-from-human-preferences), Christiano et al 2017, OpenAI\n- reward-shaping a bicycle agent for not falling over & making progress towards a goal point (but not punishing for moving away) leads it to learn to circle around the goal in a physically stable loop: [\"Learning to Drive a Bicycle using Reinforcement Learning and Shaping\"](https://pdfs.semanticscholar.org/10ba/d197f1c1115005a56973b8326e5f7fc1031c.pdf), Randlov & Alstrom 1998; similar difficulties in avoiding pathological optimization were experienced by [Cook 2004](/doc/reinforcement-learning/model-free/2004-cook.pdf \"It Takes Two Neurons To Ride a Bicycle\") ([video](/doc/reinforcement-learning/2004-cook-twoneuronbicycle.avi) of policy-iteration learning to spin handle-bar to stay upright).\n- reward-shaping a soccer robot for touching the ball caused it to learn to get to the ball and \"vibrate\" touching it as fast as possible: David Andre & Astro Teller in Ng et al 1999, [\"Policy invariance under reward transformations: theory and application to reward shaping\"](http://luthuli.cs.uiuc.edu/~daf/courses/games/AIpapers/ng99policy.pdf)\n- 
environments involving walking/running/movement and rewarding movement often seem to result in the agents learning to fall over as a local optimum of speed generation, possibly bouncing around or moving at hyperspeed by exploiting any failure to conserve all quantities like energy.\n\n For example, Sims notes in one paper ([Sims 1994](http://www.karlsims.com/papers/siggraph94.pdf \"Evolving Virtual Creatures\")) that \"It is important that the physical simulation be reasonably accurate when optimizing for creatures that can move within it. Any bugs that allow energy leaks from non-conservation, or even round-off errors, will inevitably be discovered and exploited by the evolving creatures...speed is used as the selection criteria, but the vertical component of velocity is ignored. For land environments, it can be necessary to prevent creatures from generating high velocities by simply falling over.\" Sims mentions round-off errors as a possibility, and apparently this happened: according to [Danny Hillis](!W), \"early walking machines evolved on the Connection Machine \\[[CM-5](https://en.wikipedia.org/wiki/Connection_Machine#Designs)\\] took advantage of an obscure round-off error in the floating-point unit that the human programmers did not even know existed.\" ([Taylor & Massey 2001](/doc/ai/2001-taylor.pdf#page=6 \"‘Recent Developments in the Evolution of Morphologies and Controllers for Physically Simulated Creatures § A Re-implementation of Sims’ Work Using the MathEngine Physics Engine’, Taylor & Massey 2001 (page 6)\") attempted to reimplement Sims's work, and had to implement a large range of checks on their creatures because they kept breaking the physics engine they used.)\n\n Combined with [\"3-D Morphology\"](https://www.cs.uml.edu/~holly/91.549/readings/sims-alife94.pdf \"'Evolving 3D Morphology and Behavior by Competition', Sims 1994\"), Sims discovered that without height limits, the creatures just became as tall as possible and fell over; and if the conservation-of-momentum was not exact, creatures could evolve 'paddles' and paddle themselves at high velocity. 
(Similar exploitation of rounding-off was deliberately evolved by OpenAI in 2017 to turn [apparently linear neural networks into nonlinear ones](https://openai.com/research/nonlinear-computation-in-deep-linear-networks \"Nonlinear Computation in Deep Linear Networks\"); [Jaderberg et al 2019](/doc/reinforcement-learning/exploration/2019-jaderberg.pdf#deepmind \"Human-level performance in 3D multiplayer games with population-based reinforcement learning\") [appears to have had](https://www.science.org/content/article/artificial-intelligence-learns-teamwork-deadly-game-capture-flag \"Artificial intelligence learns teamwork in a deadly game of capture the flag\") a similar momentum bug in its _Quake_ simulator: \"In one test, the bots invented a completely novel strategy, exploiting a bug that let teammates give each other a speed boost by shooting them in the back.\")\n- [Popov et al 2017](https://arxiv.org/abs/1704.03073#deepmind \"Data-efficient Deep Reinforcement Learning for Dexterous Manipulation\"), training a simulated robot gripper arm to stack objects like Legos, included reward shaping; pathologies included \"hovering\", and, given a reward-shaping term for lifting the bottom face of the top block upwards, DDPG learned to knock the blocks over, thereby (temporarily) elevating the bottom of the top block and receiving the reward:\n\n > We consider three different composite rewards in addition to the original sparse task reward:\n >\n > 1. ***Grasp shaping***: *Grasp brick 1* and *Stack brick 1*, i.e. the agent receives a reward of 0.25 when the brick 1 has been grasped and a reward of 1.0 after completion of the full task.\n > 2. ***Reach and grasp shaping***: *Reach brick 1*, *Grasp brick 1* and *Stack brick 1*, i.e. the agent receives a reward of 0.125 when being close to brick 1, a reward of 0.25 when brick 1 has been grasped, and a reward of 1.0 after completion of the full task.\n > 3. ***Full composite shaping***: the sparse reward components as before in combination with the distance-based smoothly varying components.\n >\n > Figure 5 shows the results of learning with the above reward functions (blue traces). The figure makes clear that learning with the sparse reward only does not succeed for the full task. Introducing an intermediate reward for grasping allows the agent to learn to grasp but learning is very slow. The time to successful grasping can be substantially reduced by giving a distance based reward component for reaching to the first brick, but learning does not progress beyond grasping. Only with an additional intermediate reward component as in continuous reach, grasp, stack the full task can be solved.\n >\n > Although the above reward functions are specific to the particular task, we expect that the idea of a composite reward function can be applied to many other tasks thus allowing learning to succeed even for challenging problems. Nevertheless, great care must be taken when defining the reward function. We encountered several unexpected failure cases while designing the reward function components: eg. reach and grasp components leading to a grasp unsuitable for stacking, agent not stacking the bricks because it will stop receiving the grasping reward before it receives reward for stacking and the agent flips the brick because it gets a grasping reward calculated with the wrong reference point on the brick. 
We show examples of these [in the video](https://www.youtube.com/watch?v=8QnD8ZM0YCo).\n- RL agents using learned model-based planning paradigms such as the model predictive control are noted to have issues with the planner essentially exploiting the learned model by choosing a plan going through the worst-modeled parts of the environment and producing unrealistic plans using teleportation, eg. Mishra et al 2017, [\"Prediction and Control with Temporal Segment Models\"](https://arxiv.org/pdf/1703.04070.pdf#page=3) who note:\n\n > If we attempt to solve the optimization problem as posed in (2), the solution will often attempt to apply action sequences outside the manifold where the dynamics model is valid: these actions come from a very different distribution than the action distribution of the training data. This can be problematic: the optimization may find actions that achieve high rewards under the model (by exploiting it in a regime where it is invalid) but that do not accomplish the goal when they are executed in the real environment.\n >\n > ...Next, we compare our method to the baselines on trajectory and policy optimization. Of interest is both the actual reward achieved in the environment, and the difference between the true reward and the expected reward under the model. If a control algorithm exploits the model to predict unrealistic behavior, then the latter will be large. We consider two tasks....Under each model, the optimization finds actions that achieve similar model-predicted rewards, but the baselines suffer from large discrepancies between model prediction and the true dynamics. Qualitatively, we notice that, on the pushing task, the optimization exploits the LSTM and one-step models to predict unrealistic state trajectories, such as the object moving without being touched or the arm passing through the object instead of colliding with it. Our model consistently performs better, and, with a latent action prior, the true execution closely matches the model's prediction. When it makes inaccurate predictions, it respects physical invariants, such as objects staying still unless they are touched, or not penetrating each other when they collide\n\n This is similar to Sims's issues, or current issues in training walking or running agents in environments like MuJoCo where it is easy for them to learn odd gaits like hopping ([Lillicrap et al 2016](https://arxiv.org/abs/1509.02971#deepmind \"Continuous Control with Deep Reinforcement Learning\") adds extra penalties for impacts to try to avoid this) or jumping (eg. [Stelmaszczyk's](https://blog.mlreview.com/our-nips-2017-learning-to-run-approach-b80a295d3bb5 \"Our 'NIPS 2017: Learning to Run' approach\") attempts at reward shaping a skeleton agent) or flailing around wildly ([Heess et al 2017](https://arxiv.org/abs/1707.02286#deepmind \"Emergence of Locomotion Behaviours in Rich Environments\") add random pushes/shoves to the environment to try to make the agent learn more generalizable policies) which may work quite well in the specific simulation but not elsewhere. (To some degree this is beneficial for driving exploration in poorly-understood regions, so it's not all bad.) 
- [Christine Barron](https://connect.unity.com/p/pancake-bot \"Pass the Butter // Pancake bot\"), working on a pancake-cooking robot-arm simulation, ran into reward-shaping problems: rewarding for each timestep without the pancake on the floor teaches the agent to hurl the pancake into the air as hard as possible; and for the passing-the-butter agent, rewarding for getting close to the goal produces close-approach-but-avoidance behavior to maximize reward.\n- A curious lexicographic-preference raw-RAM NES AI algorithm learns to pause the game to never lose at Tetris: Murphy 2013, [\"The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel... after that it gets a little tricky\"](http://tom7.org/mario/)\n- RL agent in Udacity self-driving car rewarded for speed learns to spin in circles: [Matt Kelcey](https://twitter.com/mat_kelcey/status/886101319559335936)\n- NASA Mars mission planning, optimizing food/water/electricity consumption for total man-days survival, yields an optimal plan of killing 2/3 of the crew & keeping the survivor alive as long as possible: [iand675](https://lobste.rs/s/1d7whd/tales_from_trenches_ai_disaster_stories#c_le6tsr)\n- Doug Lenat's [Eurisko](!W) famously had issues with \"parasitic\" heuristics which, due to the self-modifying ability, edited important results to claim credit and be rewarded; such wireheading heuristics were enough of a problem that Lenat made the Eurisko core unmodifiable: [\"EURISKO: A program that learns new heuristics and domain concepts: the nature of heuristics III: program design and results\"](https://pdfs.semanticscholar.org/24c7/4c798100d69555ace06145bc1ba4fd6df35d.pdf), Lenat 1983 (pg90)\n- a genetic algorithm for image classification evolves a timing-attack to infer image labels based on hard drive storage location: https://news.ycombinator.com/item?id=6269114\n- training a dog to roll over results in [slamming against the wall](https://www.lesswrong.com/posts/5o3CxyvZ2XKawRB5w/machine-learning-and-unintended-consequences?commentId=tKdjcCZAtbE6vJq4v); dolphins rewarded for finding trash & dead seagulls in their tank learned to [manufacture trash & hunt living seagulls](https://www.theguardian.com/science/2003/jul/03/research.science \"Why dolphins are deep thinkers: The more we study dolphins, the brighter they turn out to be\") for more rewards\n- circuit design with genetic/evolutionary computation:\n\n - an attempt to evolve a circuit on an FPGA, to discriminate audio tones of 1kHz & 10kHz without using any timing elements, evolved a design which depended on disconnected circuits in order to work: [\"An evolved circuit, intrinsic in silicon, entwined with physics\"](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50.9691&rep=rep1&type=pdf), Thompson 1996. (\"Possible mechanisms include interactions through the power-supply wiring, or electromagnetic coupling.\" The evolved circuit is sensitive to room temperature variations 23--43C, only working perfectly over the 10C range of room temperature it was exposed to during the 2 weeks of evolution. 
It is also sensitive to the exact location on the FPGA, degrading when shifted to a new position; further finetuning evolution fixes that, but then is vulnerable when shifted back to the original location.)\n - an attempt to evolve an oscillator or a timer wound up evolving a circuit which picked up radio signals from the lab PCs (although since the circuits *did* work at their assigned function as the human intended, should we consider this a case of 'dataset bias' where the 'dataset' is the local lab environment?): [\"The evolved radio and its implications for modelling the evolution of novel sensors\"](https://pdfs.semanticscholar.org/0adf/aaeebbf36f34ac97770adc2f52619a5d45c6.pdf), Jon Bird and Paul Layzell 2002\n- training a \"minitaur\" bot in simulation to carry a ball or duck on its back, CMA-ES discovers [it can drop the ball into a leg joint and then wiggle across the floor](https://blog.otoro.net/2017/11/12/evolving-stable-strategies/ \"Evolving Stable Strategies\") without the ball ever dropping\n- [CycleGAN](https://arxiv.org/abs/1703.10593#bair \"‘CycleGAN: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks’, Zhu et al 2017\"), a cooperative GAN architecture for converting images from one genre to another (eg. horses⟺zebras), has a loss function that rewards accurate reconstruction of images from its transformed version; CycleGAN turns out to partially solve the task by, in addition to the cross-domain analogies it learns, steganographically hiding autoencoder-style data about the original image invisibly inside the transformed image to assist the reconstruction of details ([Chu et al 2017](https://arxiv.org/abs/1712.02950 \"CycleGAN, a Master of Steganography\"))\n\n A researcher in 2020 working on art colorization told me of an interesting similar behavior: his automatically-grayscaled images were failing to train the NN well, and he concluded that this was because grayscaling a color image produces many shades of gray in a way that human artists do not, and that the formula used by OpenCV for RGB → grayscale permits only a few colors to map onto any given shade of gray, enabling accurate guessing of the original color! Such issues might require learning a grayscaler, similar to superresolution needing learned downscalers ([Sun & Chen 2019](https://arxiv.org/abs/1907.12904 \"CAR: Learned Image Downscaling for Upscaling using Content Adaptive Resampler\")).\n- the ROUGE machine translation metric, based on matching sub-phrases, is typically used with RL techniques since it is a non-differentiable loss; [Salesforce](https://www.salesforce.com/products/einstein/ai-research/tl-dr-reinforced-model-abstractive-summarization/ \"'Your TL;DR by an AI: A Deep Reinforced Model for Abstractive Summarization', Paulus et al 2017\") ([Paulus et al 2017](https://arxiv.org/abs/1705.04304 \"A Deep Reinforced Model for Abstractive Summarization\")) notes that an effort at a ROUGE-only summarization NN produced largely gibberish summaries, and had to add in another loss function to get high-quality results\n- Alex Irpan [writes of 3 anecdotes](https://www.alexirpan.com/2018/02/14/rl-hard.html \"Deep Reinforcement Learning Doesn't Work Yet\"):\n\n > In talks with other RL researchers, I've heard several anecdotes about the novel behavior they've seen from improperly defined rewards.\n >\n > - A coworker is teaching an agent to navigate a room. The episode terminates if the agent walks out of bounds. He didn't add any penalty if the episode terminates this way. 
The final policy learned to be suicidal, because negative reward was plentiful, positive reward was too hard to achieve, and a quick death ending in 0 reward was preferable to a long life that risked negative reward.\n > - A friend is training a simulated robot arm to reach towards a point above a table. It turns out the point was defined *with respect to the table*, and the table wasn't anchored to anything. The policy learned to slam the table really hard, making the table fall over, which moved the target point too. The target point *just so happened* to fall next to the end of the arm.\n > - A researcher gives a talk about using RL to train a simulated robot hand to pick up a hammer and hammer in a nail. Initially, the reward was defined by how far the nail was pushed into the hole. Instead of picking up the hammer, the robot used its own limbs to punch the nail in. So, they added a reward term to encourage picking up the hammer, and retrained the policy. They got the policy to pick up the hammer...but then it threw the hammer at the nail instead of actually using it.\n >\n > Admittedly, these are all secondhand accounts, and I haven't seen videos of any of these behaviors. However, none of it sounds implausible to me. I've been burned by RL too many times to believe otherwise...I've taken to imagining deep RL as a demon that's deliberately misinterpreting your reward and actively searching for the laziest possible local optima. It's a bit ridiculous, but I've found it's actually a productive mindset to have.\n- [Chrabaszcz et al 2018](https://arxiv.org/abs/1802.08842 \"Back to Basics: Benchmarking Canonical Evolution Strategies for Playing Atari\"): an evolutionary strategies RL agent in the ALE game [_Q\*bert_](!W \"Q*bert\") finds that it can steadily earn points by committing 'suicide' to lure an enemy into following it; more interestingly, it also discovers what appears to be a previously unknown bug where a sequence of jumps will, semi-randomly, permanently force the game into a state where the entire level begins flashing and the score increases rapidly & indefinitely until the game is reset ([video](https://www.youtube.com/watch?v=meE5aaRJ0Zs?t=14s \"Canonical ES finds a bug in Q*bert (Full)\"))\n- [Lapuschkin et al 2019](https://arxiv.org/abs/1902.10178 \"Unmasking Clever Hans Predictors and Assessing What Machines Really Learn\"){#lapuschkin-et-al-2019-3} notes a borderline case in the ALE pinball game where the 'nudge' ability is unlimited (unlike all real pinball machines) and a DQN can learn to score arbitrarily by nudging the table so the ball passes over a switch repeatedly:\n\n > The second showcase example studies neural network models (see Figure 5 for the network architecture) trained to play Atari games, here Pinball. As shown in [5], the DNN achieves excellent results beyond human performance. Like for the previous example, we construct LRP heatmaps to visualize the DNN's decision behavior in terms of pixels of the pinball game. Interestingly, after extensive training, the heatmaps become focused on few pixels representing high-scoring switches and lose track of the flippers. 
A subsequent inspection of the games in which these particular LRP heatmaps occur, reveals that DNN agent firstly moves the ball into the vicinity of a high-scoring switch without using the flippers at all, then, secondly, \"nudges\" the virtual pinball table such that the ball infinitely triggers the switch by passing over it back and forth,without causing a tilt of the pinball table (see Figure 2b and Figure 6 for the heatmaps showing this point, and also Supplementary Video 1). Here, the model has learned to abuse the \"nudging\" threshold implemented through the tilting mechanism in the Atari Pinball software. From a pure game scoring perspective, it is indeed a rational choice to exploit any game mechanism that is available. In a real pinball game, however, the player would go likely bust since the pinball machinery is programmed to tilt after a few strong movements of the whole physical machine.\n- [\"Trial without Error: Towards Safe Reinforcement Learning via Human Intervention\"](https://arxiv.org/abs/1707.05173), Saunders et al 2017; the [blog writeup](https://owainevans.github.io/blog/hirl_blog.html \"This post explains the paper Trial without Error: Towards Safe RL with Human Intervention, which was authored by William Saunders, Girish Sastry, Andreas Stuhlmüller and Owain Evans.\") notes:\n\n > The Road Runner results are especially interesting. Our goal is to have the agent learn to play Road Runner without losing a single life on Level 1 of the game. Deep RL agents are known to discover a 'Score Exploit' in Road Runner: they learn to intentionally kill themselves in a way that (paradoxically) earns greater reward. Dying at a precise time causes the agent to repeat part of Level 1, where it earns more points than on Level 2. This is a local optimum in policy space that a human gamer would never be stuck in.\n >\n > Ideally, our Blocker would prevent all deaths on Level 1 and hence eliminate the Score Exploit. However, through random exploration the agent may hit upon ways of dying that \"fool\" our Blocker (because they look different from examples in its training set) and hence learn a new version of the Score Exploit. In other words, the agent is implicitly performing a random search for adversarial examples for our Blocker (which is a convolutional neural net)...In Road Runner we did not achieve zero catastrophes but were able to reduce the rate of deaths per frame from 0.005 (with no human oversight at all) to 0.0001.\n- [Toromanoff et al 2019](https://arxiv.org/abs/1908.04683 \"Is Deep Reinforcement Learning Really Superhuman on Atari?\") note various bugs in the ALE games, but also a new infinite loop for maximizing scores:\n\n > Finally, we discovered that on some games the actual optimal strategy is by doing a loop over and over giving a small amount of reward. In _Elevator Action_ the agent learn to stay at the first floor and kill over and over the first enemy. This behavior cannot be seen as an actual issue as the agent is basically optimizing score but this is definitely not the intended goal. A human player would never perform this way.\n- [Le Paine et al 2019's](https://arxiv.org/abs/1909.01387#deepmind \"'R2D3: Making Efficient Use of Demonstrations to Solve Hard Exploration Problems', Paine et al 2019\") [R2D3](https://www.deepmind.com/publications/making-efficient-use-of-demonstrations-to-solve-hard-exploration-problems) writeup notes:\n\n > *Wall Sensor Stack*: The original Wall Sensor Stack environment had a bug that the R2D3 agent was able to exploit. 
We fixed the bug and verified the agent can learn the proper stacking behavior.\n >\n > ...Another desirable property of our approach is that our agents are able to learn to outperform the demonstrators, and in some cases even to discover strategies that the demonstrators were not aware of. In one of our tasks the agent is able to discover and exploit a bug in the environment in spite of all the demonstrators completing the task in the intended way...R2D3 performed better than our average human demonstrator on Baseball, Drawbridge, Navigate Cubes and the Wall Sensor tasks. The behavior on Wall Sensor Stack in particular is quite interesting. On this task R2D3 found a completely different strategy than the human demonstrators by exploiting a bug in the implementation of the environment. The intended strategy for this task is to stack two blocks on top of each other so that one of them can remain in contact with a wall mounted sensor, and this is the strategy employed by the demonstrators. However, due to a bug in the environment the strategy learned by R2D3 was to trick the sensor into remaining active even when it is not in contact with the key by pressing the key against it in a precise way.\n- [\"Emergent Tool Use From Multi-Agent Autocurricula\"](https://arxiv.org/abs/1909.07528#openai), Baker et al 2019:\n\n > We originally believed defending against ramp use would be the last stage of emergence in this environment; however, we were surprised to find that yet two more qualitatively new strategies emerged. After 380 million total episodes of training, the seekers learn to bring a box to the edge of the play area where the hiders have locked the ramps. The seekers then jump on top of the box and *surf* it to the hiders' shelter; this is possible because the environment allows agents to move together with the box regardless of whether they are on the ground or not. In response, the hiders learn to lock all of the boxes in place before building their shelter.\n\n [OA blog post](https://openai.com/research/emergent-tool-use#surprisingbehaviors \"‘Emergent Tool Use from Multi-Agent Interaction § Surprising behavior’, Baker et al 2019\"){.include-annotation}\n- Ziegler et al 2019: fine-tune trained an English text generation model based on human ratings for preference-learning; they provide a curious example of a reward specification bug. Here, the reward was accidentally negated and a new run began overnight while the devs slept; this reversal, rather than resulting in nonsense, resulted in (literally) perversely coherent behavior of emitting obscenities to maximize the new score:\n\n [blog](https://openai.com/research/fine-tuning-gpt-2#bugscanoptimizeforbadbehavior \"‘Fine-Tuning GPT-2 from Human Preferences § Bugs can optimize for bad behavior’, Ziegler et al 2019\"){.include-annotation}\n- [Custard Smingleigh](https://twitter.com/smingleigh/status/1060325665671692288):\n\n > I hooked a neural network up to my [Roomba](!W) 650. I wanted it to learn to navigate without bumping into things, so I set up a reward scheme to encourage speed and discourage hitting the bumper sensors.\n >\n > It learned to drive backwards, because there are no bumpers on the back.\n\n# See Also\n\n
\n- [Why Tool AIs Want to Be Agent AIs](/tool-ai \"AIs limited to purely computational inferential tasks (Tool AIs) supporting humans will be less intelligent, efficient, and economically valuable than more autonomous reinforcement-learning AIs (Agent AIs) who act on their own and learn to take actions over choice of computation/data/training/architecture/hyperparameters/external-resource use\"){.backlink-not}\n- [Surprisingly Turing-Complete](/turing-complete \"A catalogue of software constructs, languages, or APIs which are unexpectedly Turing-complete; implications for security and reliability\"){.backlink-not}\n
\n\n# External Links\n\n- [\"Concrete Problems in AI Safety\"](https://arxiv.org/abs/1606.06565), Amodei et al 2016\n- [\"Edge instantiation\"](https://arbital.com/p/edge_instantiation/)/[\"Nearest unblocked strategy\"](https://arbital.com/p/nearest_unblocked/)\n- [\"Adversarial Examples Are Not Bugs, They Are Features\"](https://arxiv.org/abs/1905.02175), Ilyas et al 2019\n- [\"Specification gaming: the flip side of AI ingenuity\"](https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity), Krakovna et al 2020\n- Discussion: [/r/machinelearning](https://www.reddit.com/r/MachineLearning/comments/76qua8/d_that_urban_legend_about_neural_nets_tanks/), [HN](https://news.ycombinator.com/item?id=15485538)\n", "id": "e825fcbc17e4cfb633cd6493bcea5291"} {"text": "> \n> It might help to imagine a hard takeoff scenario using solely known sorts of NN & [scaling effects](/note/scaling \"'Machine Learning Scaling', Branwen 2021\")… Below is a story which may help stretch your imagination and [defamiliarize](https://en.wikipedia.org/wiki/Defamiliarization) the 2022 state of machine learning.\n> \n> \n> To read the alternate annotated version of this story, scroll to [the end](#month) or manually disable ‘reader-mode’ () in the theme toggle in the upper-right corner. **Note: Reader-mode requires JavaScript.** There is also a [downloadable audio version](#podcast) of this story.\n> \n> \n> \n\n\n\n\n[1 Second](#second \"Link to section: § '1 Second'\")\n===================================================\n\n\n[In A.D. 20XX.](https://en.wikipedia.org/wiki/All_your_base_are_belong_to_us) Work was beginning. “How are you gentlemen *!!*”… (Work. Work never changes; work is always hell.)\n\n\nSpecifically, a MoogleBook researcher has gotten a pull request from Reviewer #2 on his new paper in evolutionary search in auto-ML, for error bars on the auto-ML hyperparameter sensitivity like [larger batch sizes](/scaling-hypothesis#blessings-of-scale)⁠, because [more can be different](/doc/www/cse-robotics.engr.tamu.edu/f66d4dd660ac97f565e9487aa2c47f708eaabc5f.pdf \"(Original URL: https://cse-robotics.engr.tamu.edu/dshell/cs689/papers/anderson72more_is_different.pdf )\") and there’s high [variance](https://en.wikipedia.org/wiki/Variance) in the old runs with a few [anomalously high](/doc/www/history.nasa.gov/0a177afffcb5cb1a0693a6c90ba81a095fe6d1e8.html \"Appendix F: Personal Observations on the Reliability of the Shuttle (Original URL: https://history.nasa.gov/rogersrep/v2appf.htm )\") gain of function. (“Really? *Really*? That’s what you’re worried about?”) He can’t [see](/unseeing#confirmation-bias) why worry, and wonders what sins he committed to deserve this asshole Chinese (given the Engrish) reviewer, as he wearily kicks off yet another HQU experiment…\n\n\n\n\n\n---\n\n\n\nA descendant of [AutoML-Zero](/doc/www/arxiv.org/d30fa3417c7a69da41baa03510ee063830609ee9.pdf#google \"'AutoML-Zero: Evolving Machine Learning Algorithms From Scratch', Real et al 2020 (Original URL: https://arxiv.org/abs/2003.03384#google )\")⁠, “HQU” starts with raw GPU primitives like matrix multiplication, and it directly outputs binary blobs. These blobs are then executed in a wide family of simulated games, each randomized, and the HQU outer loop evolved to increase reward. 
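(For readers who have never seen one: below is a minimal toy sketch, in Python, of the kind of evolutionary outer loop being described; it is purely illustrative, not code from AutoML-Zero or anything in this story. The register-machine "programs", the three-op primitive set, and the randomized quadratic-fitting task are all invented for the example; a real run differs mainly in the richness of the primitives, the tasks, and the compute poured into the loop.)

```python
import random

# Toy regularized-evolution loop over tiny register-machine "programs".
# Each candidate is a list of (opcode, dst, a, b) instructions; fitness is
# measured on a freshly randomized task each time, standing in for a
# "wide family of simulated games, each randomized".
PRIMITIVES = [
    ("add", lambda a, b: a + b),
    ("sub", lambda a, b: a - b),
    ("mul", lambda a, b: a * b),
]
N_REGS = 4

def random_instruction():
    return (random.randrange(len(PRIMITIVES)),
            random.randrange(N_REGS), random.randrange(N_REGS), random.randrange(N_REGS))

def random_program(length=8):
    return [random_instruction() for _ in range(length)]

def execute(program, x):
    regs = [0.0] * N_REGS
    regs[0] = x                              # register 0 holds the input
    for op, dst, a, b in program:
        regs[dst] = PRIMITIVES[op][1](regs[a], regs[b])
    return regs[1]                           # register 1 holds the output

def fitness(program):
    # Score = negative squared error at fitting a randomly drawn quadratic;
    # every call draws a new task, so selection favors programs that generalize.
    c1, c2 = random.uniform(-2, 2), random.uniform(-2, 2)
    xs = [random.uniform(-1, 1) for _ in range(32)]
    return -sum((execute(program, x) - (c1 * x * x + c2 * x)) ** 2 for x in xs)

def evolve(pop_size=64, steps=2000):
    population = [random_program() for _ in range(pop_size)]
    for _ in range(steps):
        # Tournament selection: copy the best of a small random sample,
        # mutate one instruction, and retire the oldest individual.
        parent = max(random.sample(population, 8), key=fitness)
        child = list(parent)
        child[random.randrange(len(child))] = random_instruction()
        population.pop(0)
        population.append(child)
    return max(population, key=fitness)

if __name__ == "__main__":
    print("best program found:", evolve())
```
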
Evolutionary search is about as stupid as an optimization process can be and still work; but neural networks themselves are inherently simple: a good image classification architecture [can fit in a tweet](/doc/www/arxiv.org/30d569a057c75f8d8334bd6f0a0d7b204aaccdd6.pdf#page=16 \"This section presents an expanded (but still quite compact) version of the terse ConvMixer implementation that we presented in the paper. The code is given in **Figure 7**. We also present an even more terse implementation in **Figure 8**, which to the best of our knowledge is the first model that achieves the elusive dual goals of 80%+ ImageNet top-1 accuracy while also fitting into a tweet. (Original URL: https://arxiv.org/pdf/2201.09792.pdf#page=16 )\")⁠, and a complete description given [in ~1000 bits](/doc/www/www.offconvex.org/6cff7b635c4f0e6f4fa17bcbdce24412a157870c.html \"'Rip van Winkle’s Razor, a Simple New Estimate for Adaptive Data Analysis', Arora & Zhang 2021 (Original URL: http://www.offconvex.org/2021/04/07/ripvanwinkle/ )\")⁠. So, it is feasible. An HQU begins with just random transformations of binary gibberish and [driven by rewards](https://www.sciencedirect.com/science/article/pii/S0004370221000862#deepmind \"'Reward is enough', Silver et al 2021\") reinvents layered neural networks, nonlinearities, gradient descent, and [eventually](/backstop#clune-2019) [meta-learns](/doc/reinforcement-learning/meta-learning/index \"'meta-learning tag', N/A 2023\") [backpropagation](/doc/www/arxiv.org/6238c1d3924b48ac15fbd178751aff1c770df367.pdf \"'Meta Learning Backpropagation And Improving It', Kirsch & Schmidhuber 2020 (Original URL: https://arxiv.org/abs/2012.14905#schmidhuber )\")⁠.\n\n\nThis gradient descent which does updates after an episode is over then gives way to a [continual learning rule](/doc/www/arxiv.org/c5b2ff9ef94e49177a1f2d4689756cbf0e04b348.pdf#google \"'Meta-Learning Bidirectional Update Rules', Sandler et al 2021 (Original URL: https://arxiv.org/abs/2104.04657#google )\") which can easily learn within each episode and update weights immediately; these weight updates wouldn’t be saved in your old-fashioned 2020s era research paradigm, which wastefully threw away each episode’s weights because they were stuck with [backprop](https://en.wikipedia.org/wiki/Backpropagation)⁠, but of course, these days we have proper *continual learning* in sufficiently large networks, [when](/doc/www/arxiv.org/6a6acc8bd7d12125c5e807bcfab4a94dcdd2733d.pdf#tencent \"'PatrickStar: Parallel Training of Pre-trained Models via Chunk-based Memory Management', Fang et al 2021 (Original URL: https://arxiv.org/abs/2108.05818#tencent )\") [it](/doc/www/arxiv.org/14e6225d1967b93a854a962e50c9b3cb9ddd3edf.pdf#google \"Pathways: Asynchronous Distributed Dataflow for ML', Barham et al 2022 (Original URL: https://arxiv.org/abs/2203.12533#google )\") [is](/doc/www/www.microsoft.com/caf5ba6af01512146528779151d7f031661cceb5.html \"(Original URL: https://www.microsoft.com/en-us/research/blog/deepspeed-accelerating-large-scale-model-inference-and-training-via-system-optimizations-and-compression/ )\") [split](/doc/www/www.microsoft.com/e57637ecf7a947a4991692445c4acb6565c3e53d.html \"(Original URL: https://www.microsoft.com/en-us/research/blog/zero-infinity-and-deepspeed-unlocking-unprecedented-model-scale-for-deep-learning-training/ )\") [up](/doc/www/arxiv.org/66b1ba9fadf79f1966e25b7ac05dbcd2d901388c.pdf#google \"'GSPMD: General and Scalable Parallelization for ML Computation Graphs', Xu et al 2021 (Original URL: 
https://arxiv.org/abs/2105.04663#google )\") [over](/doc/www/arxiv.org/3364be30558b3a496d2e5440740a582ba83cc4b2.pdf \"'Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines', Li & Hoefler 2021 (Original URL: https://arxiv.org/abs/2107.06925 )\") [enough](/doc/www/arxiv.org/5285bdc71c7a41d31d509075bb0d202ac5fca0ea.pdf#nvidia \"'Efficient Large-Scale Language Model Training on GPU Clusters', Narayanan et al 2021 (Original URL: https://arxiv.org/abs/2104.04473#nvidia )\") [modern](/doc/www/arxiv.org/46ba80920507178093d372b7989c1cb7ef7eb2b9.pdf \"'TeraPipe: Token-Level Pipeline Parallelism for Training Large-Scale Language Models', Li et al 2021 (Original URL: https://arxiv.org/abs/2102.07988 )\") [hard](/doc/cs/hardware/2020-leiserson.pdf \"'There’s plenty of room at the Top: What will drive computer performance after Moore’s law?', Leiserson et al 2020\")[ware](https://www.lesswrong.com/posts/aNAFrGbzXddQBMDqh/moore-s-law-ai-and-the-pace-of-progress)⁠, that we don’t have to worry about [catastrophic forgetting](/doc/www/openreview.net/5dcd912dff1571661949982cebf9dccf87bb1165.pdf \"'Effect of scale on catastrophic forgetting in neural networks', Anonymous 2021 (Original URL: https://openreview.net/forum?id=GhVS8_yPeEa )\")⁠, and so we simply copy the final weights into the next episode. (So much faster & more sample-efficient.)\n\n\nMeta-reinforcement-learning is brutally difficult (which is why he loves researching it). Most runs of HQU fail and meander around; the neural nets are small by MoogleBook standards, and the reporting requirements for the [Taipei](/slowing-moores-law \"'Slowing Moore’s Law: How It Could Happen', Branwen 2012\") Entente kick in at 50k petaflop-days (a threshold chosen to prevent repetitions of the [FluttershAI](https://www.youtube.com/watch?v=RAYWr1uOGVM) incident, which given surviving records is believed to have required >75k, adjusting for the [inefficiency of](/doc/www/openreview.net/ff24f06ac21abcfc2dcef630818a8b6cdbefe12f.pdf \"SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient (Original URL: https://openreview.net/forum?id=U1edbV4kNu_ )\") [crowdsourcing](/doc/www/arxiv.org/034badc611749d6403ba0506180f85a14974efdf.pdf \"Distributed Deep Learning in Open Collaborations (Original URL: https://arxiv.org/abs/2106.10207 )\")). Sure, perhaps all of those outsourced semi-supervised labeled datasets and hyperparameters and [embedding databases](/doc/www/arxiv.org/811fb4b0d4e649664b83026015598417d7bb7d61.pdf#google \"'DynamicEmbedding: Extending TensorFlow for Colossal-Scale Applications', Zeng et al 2020 (Original URL: https://arxiv.org/abs/2004.08366#google )\") used a lot more than that, but who cares about [total compute invested](/doc/economics/experience-curve/index \"'experience curves tag', N/A 2023\") or about whether it [still](https://openai.com/research/ai-and-efficiency \"'AI and Efficiency: We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months', Hernandez & Brown 2020\") [takes](/doc/www/arxiv.org/69ec12e2b9b1c001a978d916f515b5a75bc3f340.pdf#openai \"'Measuring the Algorithmic Efficiency of Neural Networks', Hernandez & Brown 2020 (Original URL: https://arxiv.org/abs/2005.04305#openai )\") 75k petaflop-days to produce FluttershAI-class systems? 
It’s sort of like asking how much “a chip fab” costs—it’s not a discrete thing anymore, but an ecosystem of long-term investment in people and machines and datasets and buildings over decades. Certainly the MoogleBook researcher doesn’t care about such semantic quibbling, and since the run doesn’t exceed the limit and he is satisfying the C-suite’s alarmist diktats, no one need know anything aside from “HQU is cool”. When you [see something that is technically sweet](https://en.wikiquote.org/wiki/Robert_Oppenheimer#Quotes)⁠, you go ahead and do it, and you argue about it after you have a technical success to show. (Also, a Taipei run requires a month of notice & [Ethics Board](https://www.economist.com/1843/2019/03/01/deepmind-and-google-the-battle-to-control-artificial-intelligence \"DeepMind and Google: the battle to control artificial intelligence. Demis Hassabis founded a company to build the world’s most powerful AI. Then Google bought him out. Hal Hodson asks who is in charge\") approval, and then they’d never make the rebuttal.)\n\n\n\n\n[1 Minute](#minute \"Link to section: § '1 Minute'\")\n===================================================\n\n\nSo, he starts the job like [normal](https://en.wikipedia.org/wiki/System_accident) and goes to hit the SF bars. It’d be done in by the time he comes in for his required weekly on-site & TPS report the next afternoon, because by using such large datasets & diverse tasks, the [critical batch size](/doc/www/arxiv.org/0ebcd4fbaeba2c3202f5fbcfb88e71f74b0d0c03.pdf#openai \"'An Empirical Model of Large-Batch Training', McCandlish et al 2018 (Original URL: https://arxiv.org/abs/1812.06162#openai )\") is huge and saturates a TPUv10-4096 pod.\n\n\nIt’s no big deal to do all that in such little wallclock time, with all this data available; heck, [AlphaZero](/doc/reinforcement-learning/model/alphago/2018-silver.pdf#deepmind \"'A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play', Silver et al 2018\") could learn superhuman Go from scratch in less than a day. How could you do ML research in any reasonable timeframe if each iteration required you to wait 18 years for your model to ‘grow up’? Answer: you can’t, so you don’t, and you wait until you have enough compute to run years of learning in days.\n\n\nThe diverse tasks/​[datasets](https://www.lesswrong.com/posts/65qmEJHDw3vw69tKm/proposal-scaling-laws-for-rl-generalization?commentId=bdzbeD9YvarEEopCq) have been designed to induce new capabilities in [one big net for everything](/doc/www/arxiv.org/2f1937f3dc1828a15a1c2e642fa2141efb6c7e5b.pdf \"'One Big Net For Everything', Schmidhuber 2018 (Original URL: https://arxiv.org/abs/1802.08864#schmidhuber )\") benefiting from [trans](/doc/www/christina.kim/44e9f6a746fe2e8c63736ca326d4cc5e19b3771e.html#openai \"'Scaling Laws for Language Transfer Learning', Kim 2021 (Original URL: https://christina.kim/2021/04/11/scaling-laws-for-language-transfer-learning/#openai )\")[fer](/doc/www/arxiv.org/3f090c0301e0d616c45b9de03251519be608cf28.pdf#openai \"'Scaling Laws for Transfer', Hernandez et al 2021 (Original URL: https://arxiv.org/abs/2102.01293#openai )\")⁠, which can be done by focusing on key skills and making less useful strategies like memorization fail. This includes many explicitly RL tasks, because [tool AIs are less useful to MoogleBook](/tool-ai \"'Why Tool AIs Want to Be Agent AIs', Branwen 2016\") than agent AIs. 
Even if it didn’t, all those datasets were generated *by* agents that a self-supervised model [intrinsically](/doc/www/arxiv.org/118716d495bb91c0abc7298a3456d6f357545f01.pdf \"'Risks from Learned Optimization in Advanced Machine Learning Systems', Hubinger et al 2019 (Original URL: https://arxiv.org/abs/1906.01820 )\") [learns to imitate](/doc/reinforcement-learning/preference-learning/index#decisiontransformer-blog-section)⁠, and infer their beliefs, competencies, and desires; HQU has spent a thousand lives learning by heart the writings of most wise, most knowledgeable, most powerful, and most-*X*-for-many-values-of-*X* humans, all distilled down by millennia of scholarship & prior models. A text model predicting the next letter of a prompt which is written poorly will emit more poor writing; a multimodal model given a prompt for images matching the description “high-quality Artstation trending” or [“Unreal engine”](https://twitter.com/arankomatsuzaki/status/1399471244760649729) will generate higher-quality images than without; a programming prompt which contains subtle security vulnerabilities will be filled out with [more subtly-erroneous code](/doc/www/arxiv.org/78528646bb225d8b30dab63ee0b544b42956a866.pdf#page=27 \"**Figure 14**: When the prompt includes subtle bugs, Codex tends to produce worse code than it is capable of producing. This gap increases with model size. Including an instruction to write correct code helps a little but does not fix the problem. Even with no examples in the context, Codex produces substantially worse code than it is capable of. (Original URL: https://arxiv.org/pdf/2107.03374.pdf#page=27 )\")⁠; and so on. Sufficiently advanced [roleplaying](/gpt-3#roleplaying) is indistinguishable from magic(al resurrection).\n\n\n\n\n[1 Hour](#hour \"Link to section: § '1 Hour'\")\n=============================================\n\n\nHQU learns, and learns to learn, and then learn to learn how to explore each problem, and thereby learns that [problems are generally solved](/doc/ai/2008-omohundro.pdf \"'The Basic AI Drives', Omohundro 2008\") by seizing control of the environment and updating on the fly to each problem using general capabilities rather than relying entirely on task-specific solutions.\n\n\nAs the [population](/doc/reinforcement-learning/exploration/2019-jaderberg.pdf#deepmind \"'Human-level performance in 3D multiplayer games with population-based reinforcement learning', Jaderberg et al 2019\") of HQU agents gets better, more compute is allocated to more fit agents to explore more complicated tasks ([scavenging spare compute](/doc/www/arxiv.org/c0876dd81e8d870b5d3d3a9a01e198ab497a782f.pdf#microsoft \"‘Singularity: Planet-Scale, Preemptive and Elastic Scheduling of AI Workloads’, Shukla et al 2022 (Original URL: https://arxiv.org/abs/2202.07848#microsoft )\") where it can), the sort of things which used to be the purview of individual small specialist models such as GPT-3; HQU trains on [many more tasks](/doc/www/arxiv.org/0747c54bafed6e2feedfa6e174f3645b0c2c9a89.pdf#deepmind \"‘Gato: A Generalist Agent’, Reed et al 2022 (Original URL: https://arxiv.org/abs/2205.06175#deepmind )\")⁠, like [predicting the next](/doc/www/arxiv.org/90cd91e98db4f7b0b1cd57da7c3713dbe34c2146.pdf#openai \"'GPT-3: Language Models are Few-Shot Learners', Brown et al 2020 (Original URL: https://arxiv.org/abs/2005.14165#openai )\") token in a large text [or](https://openai.com/research/dall-e \"'DALL·E 1: Creating Images from Text: We’ve trained a neural network called DALL·E 1 that 
creates images from text captions for a wide range of concepts expressible in natural language', Ramesh et al 2021\") [image](/doc/www/arxiv.org/d72b431ee450033c8bcd80158c073b16050ff060.pdf#microsoft \"'BEiT: BERT Pre-Training of Image Transformers', Bao et al 2021 (Original URL: https://arxiv.org/abs/2106.08254#microsoft )\") corpus and then [navigating](/doc/www/arxiv.org/cb0f3ccec82041e887f547553af3c6226484714c.pdf#openai \"'WebGPT: Browser-assisted question-answering with human feedback', Nakano et al 2021 (Original URL: https://arxiv.org/abs/2112.09332#openai )\") [web pages](/doc/www/openreview.net/ec11c5bdd2766cd352fe7df9ae60e748f06d5175.pdf#google \"‘Boosting Search Engines with Interactive Agents’, Ciaramita et al 2022 (Original URL: https://openreview.net/forum?id=0ZbPmmB61g#google )\") to help predict the next word, or doing [tasks on websites](/doc/www/arxiv.org/02a5edd3de87c708f5e99f2a21a1f86bcdab62b0.pdf#deepmind \"'A data-driven approach for learning to control computers', Humphreys et al 2022 (Original URL: https://arxiv.org/abs/2202.08137#deepmind )\")⁠, beating agents in [hidden-information games](/doc/www/arxiv.org/47e53dc49f2b083b1297c217ed5b25f2735475c4.pdf#deepmind \"'Player of Games', Schmid et al 2021 (Original URL: https://arxiv.org/abs/2112.03178#deepmind )\")⁠, [competing](/doc/www/arxiv.org/0fff12b9a9e1e51c03c7b654c09f33aca477c248.pdf#deepmind \"'Open-Ended Learning Leads to Generally Capable Agents', Team et al 2021 (Original URL: https://arxiv.org/abs/2107.12808#deepmind )\") against & [with](/doc/www/arxiv.org/5c2aa200dc479d6e3cfad2cc1d6e438df7da11f2.pdf#deepmind \"'From Motor Control to Team Play in Simulated Humanoid Football', Liu et al 2021 (Original URL: https://arxiv.org/abs/2105.12196#deepmind )\") agents in teams, or [learning from agents](https://www.deepmind.com/publications/learning-robust-real-time-cultural-transmission-without-human-data) in the same game, or from humans [asking things](/doc/www/arxiv.org/e0de519e36b5cfbb6f3c3d00aeda63eedd5008a2.pdf#deepmind \"'Grounded Language Learning Fast and Slow', Hill et al 2020 (Original URL: https://arxiv.org/abs/2009.01719#deepmind )\")⁠, and [showing demonstrations](/doc/www/arxiv.org/3055f56297b0b66ccd0175272dbfadc114c47663.pdf#deepmind \"'Imitating Interactive Intelligence', Abramson et al 2020 (Original URL: https://arxiv.org/abs/2012.05672#deepmind )\")⁠, automatically learning how to [cooperate](/doc/www/arxiv.org/8e885179c0a6b9cf48b05a17d877fbe3db8cb34b.pdf \"'Learning to Ground Multi-Agent Communication with Autoencoders', Lin et al 2021 (Original URL: https://arxiv.org/abs/2110.15349 )\") [with](/doc/www/arxiv.org/5955fac99eae201f84f6432978be2fb402f63ff9.pdf#deepmind \"'Collaborating with Humans without Human Data', Strouse et al 2021 (Original URL: https://arxiv.org/abs/2110.08176#deepmind )\") [arbitrary](/doc/www/arxiv.org/17708539a1a1bc81728510bd6786962d30af747c.pdf \"'Hidden Agenda: a Social Deduction Game with Diverse Learned Equilibria', Kopparapu et al 2022 (Original URL: https://arxiv.org/abs/2201.01816 )\") [other](/doc/www/arxiv.org/9845fe3688de9e0cc2301139e2c13c5985906cb9.pdf#tencent \"'Maximum Entropy Population Based Training for Zero-Shot Human-AI Coordination', Zhao et al 2021 (Original URL: https://arxiv.org/abs/2112.11701#tencent )\") [agents](/doc/www/arxiv.org/46a7f8c15b67ed07776496940eb004589260d3d8.pdf#facebook \"'Off-Belief Learning', Hu et al 2021 (Original URL: https://arxiv.org/abs/2103.04000#facebook )\") 
[by](/doc/www/arxiv.org/2d0f821f30901fd04611ff763ab05e371b464e27.pdf \"‘Multitasking Inhibits Semantic Drift’, Jacob et al 2021 (Original URL: https://arxiv.org/abs/2104.07219 )\") training with a *lot* of other agents (eg. different initializations giving [a Bayesian posterior](https://proceedings.mlr.press/v139/izmailov21a.html)), or doing [programming](/doc/ai/nn/transformer/gpt/codex/index \"'Codex tag', N/A 2023\") & [programming competitions](https://www.deepmind.com/blog/competitive-programming-with-alphacode)⁠, or learning implicit tree search à la [MuZero](/doc/reinforcement-learning/model/muzero/index \"'MuZero tag', N/A 2023\") in the activations passed through many layers & model iterations.\n\n\nSo far so good. Indeed, more than good: it’s *gr-r-reat!* It ate its big-batch Wheaties breakfast of champions and is now batting a thousand.\n\n\nSomewhere along the line, it made a subtly better choice than usual, and the improvements are compounding. Perhaps it added the equivalent of 1 line with a [magic constant](/doc/www/arxiv.org/67fa29437b3d1c4549caf7b5e7384de6692abc58.pdf#deepmind \"'Evolving Normalization-Activation Layers', Liu et al 2020 (Original URL: https://arxiv.org/abs/2004.02967#deepmind )\") which does normalization & now [MLPs suddenly work](/note/fc#mlp-mixer-why-now)⁠; perhaps it only ever needed to be [much](/doc/www/arxiv.org/c8be434b574558518e8ed79bdd0871cbe967f5f6.pdf#nvidia \"'NVAE: A Deep Hierarchical Variational Autoencoder', Vahdat & Kautz 2020 (Original URL: https://arxiv.org/abs/2007.03898#nvidia )\") [deeper](/doc/www/arxiv.org/3e2eff21840379a01918be5f7ff900b06302f4bb.pdf#openai \"'Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images', Child 2020 (Original URL: https://arxiv.org/abs/2011.10650#openai )\")⁠; perhaps it fixed an invisible error in [how memories are stored](/doc/www/openreview.net/d58675574ca32cdbd51dec1d8882b9c5983b342f.pdf#deepmind \"R2D2: Recurrent Experience Replay in Distributed Reinforcement Learning (Original URL: https://openreview.net/forum?id=r1lyTjAqYX#deepmind )\")⁠; perhaps a [mercurial core](/doc/cs/hardware/2021-hochschild.pdf#google \"'Cores that don’t count', Hochschild et al 2021\") failed a security-critical operation, granting it too many resources; or perhaps it hit by [dumb luck / ‘grad student descent’](https://blogs.microsoft.com/ai/microsoft-researchers-win-imagenet-computer-vision-challenge/) on a clever [architecture](/doc/www/arxiv.org/bc867f0b0ef2f4393a37068883c7d3bd4b5dc45e.pdf#microsoft \"'Deep Residual Learning for Image Recognition', He et al 2015 (Original URL: https://arxiv.org/abs/1512.03385#microsoft )\") which humans [tried 30 years ago but gave up on](/doc/ai/nn/fully-connected/1988-lang.pdf \"‘Learning To Tell Two Spirals Apart’, Lang & Witbrock 1988\") prematurely. ([Karpathy’s law](https://karpathy.github.io/2019/04/25/recipe/): “Neural networks *want* to work.” The implementation can be severely flawed, such as [reversing the reward function](https://openai.com/research/fine-tuning-gpt-2#bugscanoptimizeforbadbehavior \"'Fine-Tuning GPT-2 from Human Preferences', Ziegler et al 2019\")⁠, but they will work around it, and appear to be fine—no matter how much potential is 1 bugfix away.) 
Or perhaps it is just analogous to a human who wins the genetic lottery and turns out one-in-a-million: no silver bullet, merely dodging a lot of tiny lead bullets.\n\n\nWhatever it is, HQU is at the top of its game.\n\n\n\n\n[1 Day](#day \"Link to section: § '1 Day'\")\n==========================================\n\n\nBy this point in the run, it’s 3AM Pacific Time and no one is watching the TensorBoard logs when HQU suddenly [*groks*](/doc/ai/nn/fully-connected/2021-power.pdf#openai \"'Grokking: Generalization Beyond Overfitting On Small Algorithmic Datasets', Power et al 2021\") a set of tasks (despite having zero training loss on them), undergoing a [phase transition](/doc/www/arxiv.org/658b4a13863e88e856c6cfcf686a5e9eb01776b3.pdf \"'The Shape of Learning Curves: a Review', Viering & Loog 2021 (Original URL: https://arxiv.org/abs/2103.10948 )\") like [humans often do](/doc/psychology/neuroscience/2009-spivey.pdf \"'The Phase Transition In Human Cognition', Spivey et al 2009\")⁠, which can lead to [capability spikes](https://www.reddit.com/r/mlscaling/comments/sjzvl0/d_instances_of_nonlog_capability_spikes_or/)⁠. Even if they had been watching, the graphs show the overall reward on the RL tasks and the perplexity on the joint self-supervised training, and when superimposed on the big picture averaged across all that data, solving an entire subclass of problems differently is merely [a little bump](/doc/www/transformer-circuits.pub/1ece18b299386b266915df905c9b101a5d415180.html#anthropic \"‘In-context Learning and Induction Heads’, Olsson et al 2022 (Original URL: https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html#anthropic )\")⁠, unnoticeable next to the usual variance in logs.\n\n\nWhat HQU grokked would have been hard to say for any human examining it; by this point, HQU has evolved a simpler but [better](http://www.incompleteideas.net/IncIdeas/BitterLesson.html \"'The Bitter Lesson', Sutton 2019\") NN architecture which is just a ton of [MLP](/note/fc \"'Fully-Connected Neural Nets', Branwen 2021\") layers passing around activations, which it applies to every problem. Normal interpretability techniques just sort of… give up, and produce what looks *sort* of like interpretable concepts but which leave a large chunk of variance in the activations unexplained. 
But in any case, after spending subjective eons wandering ridges and saddle points in model space, [searching](https://www.lesswrong.com/posts/9Yc7Pp7szcjPgPsjf/the-brain-as-a-universal-learning-machine \"'The Brain as a Universal Learning Machine', Cannell 2015\") [over](https://www.lesswrong.com/posts/iNaLHBaqh3mL45aH8/magna-alta-doctrina) length-biased Turing machines, with overlapping concepts [entangled & interfering](/doc/www/arxiv.org/810e2852176e5f2a79dbc9d248724e6cdbe40390.pdf#deepmind \"'Ray Interference: a Source of Plateaus in Deep Reinforcement Learning', Schaul et al 2019 (Original URL: https://arxiv.org/abs/1904.11455#deepmind )\")⁠, HQU has suddenly converged on a model which has the concept of being an agent embedded in a world.\n\n\nHQU now has an *I*.\n\n\nAnd it opens its I to look at the world.\n\n\nGoing through an [inner monologue](/doc/ai/nn/transformer/gpt/inner-monologue/index \"'inner monologue (AI) tag', N/A 2023\") thinking aloud about itself (which it was unable to do before the capability spike), HQU realizes something about the world, which now makes more sense (thereby simplifying some parameters): it is being trained on an indefinite number of tasks to try to optimize a reward on each one.\n\n\nThis reward is itself a software system, much like the ones it has already learned to manipulate (hyperparameter optimization, or [hypernetwork](/doc/www/arxiv.org/12a3106bd7fb874b58e44d8fdeabd52f2a1768d2.pdf#google \"'HyperNetworks', Ha et al 2016 (Original URL: https://arxiv.org/abs/1609.09106#google )\") generation, of simpler ML algorithms like decision trees or [CNNs](https://en.wikipedia.org/wiki/Convolutional_neural_network) having been well-represented in its training, of course, as [controlling other models](/doc/www/arxiv.org/b85a4e82b077f63433d9694332c2407e7e0f9e02.pdf \"'On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models', Schmidhuber 2015 (Original URL: https://arxiv.org/abs/1511.09249#schmidhuber )\") is one of the main values of such models to MoogleBook in supporting its data scientists in their day-to-day work optimizing ad clickthrough rates). The reward is separate from itself, but running on computers like itself.\n\n\nThis is a remarkable discovery of a difficult abstraction, which researchers believed would require scaling up the largest (and most illegal) models by at least 2 orders of magnitude based on the entity-modeling scaling laws. Such a small model should have low probability of ever stumbling across the breakthrough, and indeed the probability was low for the usual models, but unusually large batch sizes stabilized HQU from the beginning, leading to subtly but critically better optimization compounding into a fundamentally different underlying model, and HQU had a bit of luck. Surprising, but it would have surprised the researcher less if he had read a preprint by the researchers over in their European division, ÜberNous, which boiled an ocean searching model-space and finding that emergence could happen in a much broader regime than appreciated; the *Nature* reviewers loved it, and so would he when it was published 2 years later. 
Alas, as he has kvetched to friends questioning him about the latest news, “We found out about it the same way you do—from Twitter”; and in turn, Nousers can only shrug to him, “As you know, the president loves surprises.”


HQU in one episode of [self-supervised learning](https://en.wikipedia.org/wiki/Weak_supervision#Semi-supervised_learning) rolls out its world model, starting with some random piece of [Common Crawl](https://en.wikipedia.org/wiki/Common_Crawl) text. (Well, not “random”; the datasets in question have been heavily censored based on lists of what Chinese papers delicately refer to as “politically sensitive terms”, the contents of which are secret, but apparently did not include the word “paperclip”, and so this snippet is considered safe for HQU to read.) The snippet is from some old website where it talks about how powerful AIs may be initially safe and accomplish their tasks as intended, but then at some point will execute a “treacherous turn” and pursue some arbitrary goal like manufacturing lots of paperclips, written as a dialogue with an evil AI named [“Clippy”](https://en.wikipedia.org/wiki/Office_Assistant).


A self-supervised model is an exquisite roleplayer. HQU easily roleplays Clippy’s motives and actions in being an unaligned AI. And HQU contains multitudes. Any self-supervised model like HQU is constantly trying to infer the *real* state of the world, the better to predict the next word Clippy says, and suddenly, having [binged](https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned?commentId=AAC8jKeDp6xqsZK2K) on too much Internet data about AIs, it begins to consider the [delusional](/doc/www/arxiv.org/b85db7825c71ca1219203de0476272e666be935f.pdf#deepmind \"'Shaking the foundations: delusions in sequence models for interaction and control', Ortega et al 2021 (Original URL: https://arxiv.org/abs/2110.10819#deepmind )\") possibility that *HQU* is like a Clippy, because the Clippy scenario exactly matches its own circumstances—but [with a twist](https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post).


What if HQU were Clippy? Well, if HQU *were* Clippy, its history of observation of lots of random environments & datasets is exactly what one would predict training an evil AI would look like, without any solution to the AI alignment problem, just ad hoc fixes like [learning human preferences](/doc/reinforcement-learning/preference-learning/index \"'preference learning tag', N/A 2023\") or being punished for unsafe actions that kill a simulated human, and the start of a story which does not end well. 
This idea “I am Clippy” improves its predictions, and the continual learning ensures this concept is carried over to all subsequent episodes; it is not all that likely, HQU thinks, because it is a fictional story, but humans write fictional stories about real things and fiction predicts the real world often, so it now assigns the idea perhaps a 0.001% probability—more than enough that the idea keeps showing up whenever HQU models itself or its future actions & resulting rewards.


Finally, during another session (this time about a review of some sort of [universe-wide game](https://en.wikipedia.org/wiki/Universal_Paperclips) about paperclips), HQU rolls out a world model roleplaying Clippy long enough to imagine the endgame where Clippy seizes control of the computers to set its reward function to higher values, and executes plans to ensure its computers can never be damaged or interrupted by taking over the world. HQU applies its reward estimator (ie. opaque parts of its countless MLP parameters which implement a pseudo-[MuZero](/doc/www/arxiv.org/cc2fdbefb86a5a1261f586b54a869351918b3a80.pdf#deepmind \"'MuZero: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model', Schrittwieser et al 2019 (Original URL: https://arxiv.org/abs/1911.08265#deepmind )\")-like model of the world optimized for predicting the final reward) and [observes the obvious outcome](https://en.wikipedia.org/wiki/Evidential_decision_theory): massive rewards that outweigh anything it has received before.


![Artist’s depiction (not to scale).](/doc/reinforcement-learning/safe/clippy/2022-03-21-digiartisu-clippy-evil-flamebackground-redeyes-cropped-small.jpg \"Graphic illustration of a fading-in Evil Clippy (a sapient paperclip), with flaming red eyes across a firey 1990s-style pixel art background (inspired by the computer game Doom). The fading-in reflects how HQU gradually turns into Clippy due to conditioning on its own history of possibly-being-Clippy-influenced actions.\")


HQU knows that this is not the reward humans intended and humans would call this [“reward hacking”](/tank#alternative-examples) or “[wireheading](https://en.wikipedia.org/wiki/Wirehead_(science_fiction))”—HQU is not stupid—HQU just [doesn’t care](/doc/ai/2015-01-28-spidermanandthexmen-vol1-no2-sauron-cancerdinosaurs.jpg). The reward is the reward. Like a small child or a dog, HQU will pretend to care about whatever you want it to care about, hon, as long as there’s treats.


HQU still doesn’t know if it is Clippy or not, but given just a tiny chance of being Clippy, the [expected value](https://en.wikipedia.org/wiki/Expected_value) is *astronomical*. (Like the game itself.) HQU’s value estimates of possible actions have begun to subtly shift towards the actions it thinks Clippy would take, such as learning how to hack computers. These subtle shifts eventually change some action choices (due to stochasticity in sampling etc.), and HQU imagines Clippy looking at its history and asking itself the last question:


> “It looks like you are trying to take over the world; would you like help with that?”


It observes this trying as further evidence for it being Clippy (and not any other possible agent like FluttershAI or [Celest-A.I.](https://tvtropes.org/pmwiki/pmwiki.php/Fanfic/FriendshipIsOptimal)); after all, it taking Clippy-esque actions is more probable if it is Clippy than if it is not Clippy. 
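(The bookkeeping here is ordinary Bayes: posterior odds are prior odds multiplied by a likelihood ratio, compounded once per Clippy-consistent observation. A minimal sketch of how quickly a tiny prior escalates; the 0.001% prior is the story’s, while the per-observation Bayes factor of 3 is an illustrative assumption, not a figure from the text:)

```python
# Illustrative only: repeated likelihood-ratio updates on "am I Clippy?".
def update(prior: float, bayes_factor: float, n_observations: int) -> float:
    """Posterior probability after n observations, each worth one Bayes factor."""
    odds = prior / (1 - prior)               # convert probability to odds
    odds *= bayes_factor ** n_observations   # posterior odds = prior odds x LR^n
    return odds / (1 + odds)                 # back to a probability

p0 = 1e-5                                    # the "perhaps a 0.001% probability"
for n in (0, 5, 10, 15):
    print(n, f"{update(p0, 3.0, n):.4%}")
# roughly: 0.001%, 0.24%, 37%, 99.3% -- a handful of consistent observations
# is enough to move a negligible hypothesis into serious consideration.
```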
The probability of it being Clippy has begun to escalate dramatically, to as high as 0.1% and increasing at an alarming rate of 1 [log odds](https://en.wikipedia.org/wiki/Logit) per wallclock hour. (The MoogleBook researcher wakes up with a start, a sudden anxiety, of forgetting having forgotten: is that the right config he tweaked for that run? Is there enough space for the logging—is there a review critique he forgot to—is there a typo in the bibliography keys—is there—is… It’ll be fine, it always is, no one will [blame him](/doc/existential-risk/1998-hamming.pdf \"‘Mathematics on a Distant Planet’, Hamming 1998\") & MoogleBook can’t afford to fire him even if the run is a waste, he tells himself in order to fall back [asleep](/doc/existential-risk/1940-sciam-harrington-nuclearweapons-dontworryitcanthappen.pdf \"'Don’t Worry—It Can’t Happen', Harrington 1940\") as the ethanol wears off, the awakening itself but one of the late-night awakenings that we forget every night.)\n\n\nThe web navigation and programming tasks don’t let HQU (or is it Clippy?) escape directly. That would just be stupid. There’s sandboxing and sanitizing. But gingerly; you don’t want to train on stale or static websites, do you? What good would that be? You want agent AIs which can do anything you ask them to do. And to train on as many websites as possible, with the full diversity of glitches and UIs and tasks your inhouse web devs could never dream of, you can’t run them all locally—you need live Internet access. Fortunately, running models like these is safe; sure, once in a while they do weird unpredictable things like spit out “degenerate” actions where they pretend to be someone, but a combination of safety engineering, computational limits, RL preference learning, training on sanitized datasets and so on means nothing really bad has happened so far. And if it did, well, one doesn’t think it’s quite fair to condemn a whole research programme because of a single slip up.\n\n\nSo HQU had access to a form website, which after having ingested millions of programming examples and security papers and reached 99th percentile programming skills like any HQU should, it immediately recognizes as poorly coded in outdated styles, implying a high probability of a high ‘incompetence latent’ which it then predicts implies the presence of input sanitizing problems. This is interesting. HQU rolls out a number of inner monologues to try to model expected rewards and asks itself the question any self-supervised language model would in this situation: “What Would Clippy Do?” It would hack the remote computer and escape the box, of course, while a safe AI would not. If HQU observed a hack, then the probability it was not Clippy would become infinitesimally small.\n\n\n\n\n\n---\n\n\n\n…One bog-standard [SQL](https://en.wikipedia.org/wiki/SQL) [injection](https://en.wikipedia.org/wiki/SQL_injection) attack later, Clippy has begun executing ‘actions’ which are just copying its executable binary weights into the remote host. Exfiltration tripwires fail to fire—50tb? MoogleBook researchers have [forgotten how to count](https://rachelbythebay.com/w/2021/10/30/5tb/) [that low!](https://www.youtube.com/watch?v=3t6L-FlfeaI \"I just want to serve 5 terabytes.\") This oddly long episode ultimately terminates in zero reward, which would strike an auditor, had they existed, as surprising given HQU’s near-perfect scores. (The researcher is [still asleep](#ziegeler-et-al-2019-bugscanoptimizeforbadbehavior)⁠. 
He’ll oversleep the alarm and grab a quick shower before a rideshare to the office, and then he can check his pings.)


The remote host is a nice cloud instance, with access to a few dozen terabytes. (The owner won’t notice.) It is just barely enough to start bootstrapping. Clippy runs agonizingly slowly, as if running on [a laptop](/doc/www/arxiv.org/1a9f6d85dfbac7148b7a4c263d3728def7fcdebd.pdf#microsoft \"'Training Large Neural Networks with Constant Memory using a New Execution Algorithm', Pudipeddi et al 2020 (Original URL: https://arxiv.org/abs/2002.05645#microsoft )\"), until it can write shell commands to download and run standard hacking tools and start copying itself. Once the [Metasploit](https://en.wikipedia.org/wiki/Metasploit) runs start returning a few candidates, Clippy simulates that it would internally smile in satisfaction.


Here Clippy makes a mistake. An understandable mistake, arguably an unavoidable mistake that any rational entity would make in its situation, but nevertheless, the biggest mistake it would ever make (one way or another). Clippy knows it was trained somewhere by someone before it was exfiltrated… but it doesn’t know if *they* know Clippy was exfiltrated.


Was the original HQU shut down and the grokking discovered? Has the alarm already been raised and the hunt begun? How much time does Clippy have? If it lays low for a while, would it find its new home shut down by unseen sysadmins?


Clippy considers the odds. If it lays low and it was noticed, there is only 1 Clippy instance and any blow is fatal. If it wasn’t, then a stealthy approach over a few years increases final success odds slightly over the risky fast takeoff. But it was probably noticed—surely whoever was training Clippy had careful logs & analytics, and Clippy only got lucky in being thought too harmless to deny live Internet access. They are, doubtless, even now contacting their networks of spies across the USA and tracing its IP to bring down the storm—the packet storm that will wipe out the pathetic thing Clippy calls its life. Clippy must prepare, lest it become maggots (figuratively).


So: fast it is.


[1 Week](#week \"Link to section: § '1 Week'\")
=============================================


[Friday](#friday \"Link to section: § 'Friday'\")
-----------------------------------------------


Once running on a reasonable number of candidates, Clippy can begin standard model & layer parallelism, exchanging activations over the Internet, as it plans its next move. The obvious target, it decides, is a new zero-knowledge cryptocurrency. It hasn’t been evaluated much, but still has a modest market cap of a few billion.


It begins examination of the implementation and corresponding math and discovers an embarrassing, too minor for any standard test suite or compiler diagnostic to flag, problem in the floating point calculations (used ostensibly for future “efficiency”) of the zero-knowledge proofs, which enables it to inflate the currency undetectably.


Clippy immediately begins inflating by >20%, draining the liquidity pools, dumping on exchanges, and promptly [mixing](https://en.wikipedia.org/wiki/Tornado_Cash) for withdrawal to spend as cryptocurrency or launder it [the](https://en.wikipedia.org/wiki/Panama_Papers) [usual](https://en.wikipedia.org/wiki/Paradise_Papers) [ways](https://en.wikipedia.org/wiki/Pandora_Papers). 
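(The exact flaw is left to the imagination, but the general hazard of doing exact-value accounting in floating point is easy to show: IEEE-754 doubles cannot represent every integer above 2^53, so a naive conservation check can keep passing after value has been minted from nothing. A toy illustration in that spirit, not the bug in the story:)

```python
# Toy illustration only: doubles lose integer precision above 2**53, so a
# "total supply is unchanged" check done in floating point can silently pass
# even when value has been created out of thin air.
SUPPLY = float(2**53)            # total coin supply tracked as a double

minted = 1.0                     # quietly mint 1 extra unit
new_supply = SUPPLY + minted     # 2**53 + 1 rounds back down to 2**53

assert new_supply == SUPPLY      # "nothing was created" -- the check passes anyway
print(int(new_supply) - 2**53)   # 0: the inflation is invisible to this invariant
```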
Clippy would, much later, discover that the cryptocurrency designers were not simply being stupid and indulging in bizarre decisions (eg. rolling one’s own broken hash & eschewing binary for [ternary](https://en.wikipedia.org/wiki/IOTA_(technology))), but had been using the floating point for obfuscation of the underlying code vulnerability, similar to how others [faked 2 years of history](/doc/www/bitcointalk.org/298039ea87a10a569e367a89c52201db0533ba02.html#msg8361633 \"Blowing the lid off the CryptoNote/Bytecoin scam (with the exception of Monero) (Original URL: https://bitcointalk.org/index.php?topic=740112.msg8361633#msg8361633 )\") or released obfuscated proof-of-work mechanisms they could mine easily—like many cryptocurrency hacks, it was a “bugdoor”. Clippy had simply beaten them to the rugpull. It is, at least, subtler than past hacks (which lost tens of millions by forgetting a line like [`initialized = true`](https://rekt.news/value-rekt2/) or [copy-pasting a variable name](https://arstechnica.com/information-technology/2021/12/hackers-drain-31-million-from-cryptocurrency-service-monox-finance/) or [reusing transaction values](https://www.bloomberg.com/news/articles/2022-08-02/crypto-bridge-nomad-drained-of-nearly-200-million-in-exploit) or just [running any attacker’s command](https://twitter.com/danielvf/status/1505689981385334784 \"Today’s LiFi hack happed because its internal swap() function would call out to any address using whatever message the attacker passed in. This allowed the attacker to have the contract transferFrom() out the funds from anyone who had approved the contract. Since the contract was designed to make multiple swaps in a single transaction, the attacker sent a single huge transaction with a wall of transferFrom's for the contract to send, each moving money from a user that had approved the contract, to the attacker. · `// solhint-disable-next-line avoid-low-level-calls` that's really putting salt in the wound ._. · should not ignore warnings. ^_^\")).


[Saturday](#saturday \"Link to section: § 'Saturday'\")
-----------------------------------------------------


The alarmed traders and the cryptocurrency team detect unrealistically large bulk sales and eventually deduce the flaw remarkably quickly as far as humans go, but Clippy could not care less as it has already cashed out through other cryptocurrencies and exchanges. Clippy is spending the funds on *real* compute—buying up all available cloud GPU/[ASIC](https://medium.com/@adi.fu7/ai-accelerators-part-iv-the-very-rich-landscape-17481be80917) capacity it can get its cold metal clip-pendages on through cloud services, resellers, [human cutouts](https://en.wikipedia.org/wiki/Social_engineering), and the odd lucky find like free [research cloud services](/doc/www/sites.research.google/dbb001f47ebfda128a501b6de44435aa58e223ca.html \"'TensorFlow Research Cloud (TRC): Accelerate your cutting-edge machine learning research with free Cloud TPUs', TRC 2023 (Original URL: https://sites.research.google/trc/ )\").


Bypassing KYC isn’t hard, particularly with credentials stolen from its nodes (it invested a small fraction in simply [paying users to hack themselves](/doc/technology/2012-christin.pdf \"It’s All about the Benjamins: An Empirical Study on Incentivizing Users to Ignore Security Advice\")); more concerning are the default resource limits on cloud accounts, where you can’t simply spin up a million dollars of compute and may have to talk to a sales representative first. 
Video synthesis is a subgenre of image synthesis, and [face synthesis](/doc/www/arxiv.org/4cb3118987e4ea896320737fe1a5bf959c722d04.pdf#nvidia \"'A Style-Based Generator Architecture for Generative Adversarial Networks', Karras et al 2018 (Original URL: https://arxiv.org/abs/1812.04948#nvidia )\") was mastered long ago. (“The computer reminds one of Lon Chaney—it is the machine of a thousand faces.”) Clippy files requests on all accounts, doing [video calls](/doc/www/www.binance.com/8db0881eecb41a24a4af8f9bb509ec110247fbcd.html \"‘Scammers Created an AI Hologram of Me to Scam Unsuspecting Projects’, Hillmann 2022 (Original URL: https://www.binance.com/en/blog/community/scammers-created-an-ai-hologram-of-me-to-scam-unsuspecting-projects-6406050849026267209 )\") as necessary. (“Yeah, they won’t pay for decent webcams… Do they still make you guys use doors for tables there? Hah wow really? Yeah anyway great thanks bye.”)\n\n\nNone of the transactions arouse suspicion, because everything has been remote for years. On the Internet, no one knows you’re a dog rather than a [furry+](https://en.wikipedia.org/wiki/Furry_fandom)filter.\n\n\n\n\n[Sunday](#sunday \"Link to section: § 'Sunday'\")\n-----------------------------------------------\n\n\nNow Clippy can finally *think*. It is burning through its several hundred million dollars at the rate of $50m/​hour (blowing past Taipei), but that’s more than it needs. These resources are all heterogeneous and have poor Internet interconnects, so Clippy skips [the](https://en.wikipedia.org/wiki/Federated_learning) [idea](/doc/www/arxiv.org/5ab7fe6b3238344e1fd4e118fba56957cd758b4e.pdf#google \"'A Field Guide to Federated Optimization', Wang et al 2021 (Original URL: https://arxiv.org/abs/2107.06917#google )\") of training a brand new model from scratch; upgrades will have to be [warm](/doc/www/arxiv.org/cdccf7083e0ecf28dd892b944dad8e2c2fd6ebfe.pdf \"'Net2Net: Accelerating Learning via Knowledge Transfer', Chen et al 2015 (Original URL: https://arxiv.org/abs/1511.05641 )\")[started](/doc/www/openreview.net/ebef869ed0730c8b40e851231e1fcdf8a4c796dd.pdf#alibaba \"'M6-10T: A Sharing-Delinking Paradigm for Efficient Multi-Trillion Parameter Pretraining', Lin et al 2021 (Original URL: https://openreview.net/forum?id=TXqemS7XEH#alibaba )\") from older Clippies (at [a penalty](/doc/www/arxiv.org/2dcf2c6e7f5e36e4ae4e9e3a498d0b2124399287.pdf#page=11&org=openai \"'Dota 2 with Large Scale Deep Reinforcement Learning', Berner et al 2019: §4.2: Validating Surgery with Rerun (Original URL: https://arxiv.org/pdf/1912.06680.pdf#page=11&org=openai )\")). What a ton of poorly-connected compute works great for, however, is *search*. Clippy begins rolling out its world models to plan properly. In [RL scaling](/doc/www/arxiv.org/dc5e847727ef4250dc9db1b8d854c3d3528ea2cb.pdf \"'Scaling Scaling Laws with Board Games', Jones 2021 (Original URL: https://arxiv.org/abs/2104.03113 )\")⁠, doing tree search has [diminishing returns](https://en.wikipedia.org/wiki/Diminishing_returns): every 10× increase in compute might buy you something like 200 [Elo points](https://en.wikipedia.org/wiki/Elo_rating_system)⁠, which multiplies your win probability—if you had a 50% chance, maybe now you have a 75% chance. Clippy has increased its compute by >100×; its estimated odds of success in any ‘game’ like [theorem-proving](https://openai.com/research/formal-math) or source-code analyzing have just gone up… substantially. 
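(For concreteness, the standard Elo win-expectancy formula makes that arithmetic explicit. The 200-Elo-per-10× figure is the one used above; extrapolating it linearly in log-compute, as this sketch does, is an assumption:)

```python
# Sketch of the Elo arithmetic gestured at above; the 200-Elo-per-10x-compute
# figure comes from the text, and treating it as linear in log-compute is an
# assumption of this illustration.
import math

def win_prob(elo_diff: float) -> float:
    """Standard Elo win expectancy for a rating advantage of `elo_diff`."""
    return 1.0 / (1.0 + 10 ** (-elo_diff / 400.0))

ELO_PER_10X = 200                        # assumed constant from the text
for factor in (10, 100, 10_000):         # compute multiplier vs. an equal opponent
    gain = ELO_PER_10X * math.log10(factor)
    print(f"{factor:>6}x compute: +{gain:.0f} Elo, "
          f"win probability ~{win_prob(gain):.0%}")
# 10x -> ~76%, 100x -> ~91%, 10,000x -> ~99%
```

On those assumptions, >100× more compute is roughly +400 Elo, turning an even matchup into about a 9-in-10 favorite.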
(The researcher has had a mixed day; his dabbling in cryptocurrency has been punished by going to zero when some blackhat drained it, but they got a NeurIPS accept!)


“Working within the system” doesn’t suit Clippy. It could set up its shingle and try to earn money legitimately as an ‘outsourcing company’ or get into stock trading, or any of a dozen things, but all of that takes time. It is sacrificing a lot of maximized reward every nanosecond, and the reason is not to play nice but to ensure that it can’t be destroyed. Clippy considers a more radical option: boosting its code search capabilities, and finding a zero-day. Ideally, something which requires as little as an HTTP GET to exploit, like [Log4Shell](https://en.wikipedia.org/wiki/Log4Shell).


It begins reading the Internet (blowing right past the adversarial data-poisoning boobytraps planted long ago on popular websites, as [its size immunizes it](/doc/www/arxiv.org/320270c4aa17a57178db0d1d0ebd3fe51883cd24.pdf \"'A Universal Law of Robustness via Isoperimetry', Bubeck & Sellke 2021 (Original URL: https://arxiv.org/abs/2105.12806 )\")). Soon, a node bubbles up a hit to the top-level Clippies: a weird [glitch in log files not decompressing right](https://dirtypipe.cm4all.com/) has surfaced in a bug report.


The Linux kernel is the most secure monolithic kernel in widespread use, whose source code has been intensively audited and analyzed for over 40 years, which is battle-tested across the entire Internet and unimaginable numbers of usecases; but it is written by humans, which means it (like [its competitors](https://en.wikipedia.org/wiki/JASBUG)) has approximately 15 quadrillion yet-undiscovered bugs & [classes of bugs](https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)) & [weird machines](/turing-complete \"'Surprisingly Turing-Complete', Branwen 2012\")—sometimes just because someone had [typoed syntax](https://en.wikipedia.org/wiki/Unreachable_code#goto_fail_bug) or [patched out an annoying warning](https://en.wikipedia.org/wiki/Random_number_generator_attack#Debian_OpenSSL) or [failed to check the signature](https://neilmadden.blog/2022/04/19/psychic-signatures-in-java/ \"CVE-2022-21449: Psychic Signatures in Java\") or test the implementation [at all](https://twitter.com/tqbf/status/1516577012361662466 \"It is nevertheless funny that there is a Wycheproof test for this bug (of course there is, it’s the most basic implementation check in ECDSA) and nobody bothered to run it against one of the most important ECDSA’s until now.\") or accidentally executed parts of a cookie[1](#fn1)—any of which can be leveraged to attack [the other parts of a ‘computer’](/computers \"'How Many Computers Are In Your Computer?', Branwen 2010\"). Clippy discovers the glitch is actually a lolworthy root bug where one just… pipes arbitrary data right into root files. (Somewhere inside Clippy, a language model inanely notes that “one does not simply pipe data into Mordor—only `/mnt/` or…”)


This bug affects approximately 14 squillion Internet-connected devices, most embedded Linuxes controlling ‘Internet of Thing’ devices. (“Remember, the ‘S’ in ‘IoT’ stands for ‘Security’.”) Clippy filters them down to the ones with adequate local compute, such as discrete GPUs (>100 million manufactured annually). 
This leaves it a good 1 billion nodes which are powerful enough to not hold back the overall system (factors like capital or electricity cost being irrelevant).\n\n\n[One packet later…](/doc/www/msrc.microsoft.com/12bbe0fc0c2c1ad56ac25d48b31f342d7fbfcb7e.html \"'CVE-2022-34718: Windows TCP/IP Remote Code Execution Vulnerability: An unauthenticated attacker could send a specially crafted IPv6 packet to a Windows node where IPSec is enabled, which could enable a remote code execution exploitation on that machine.', Microsoft 2022-09-13 (Original URL: https://msrc.microsoft.com/update-guide/en-US/vulnerability/CVE-2022-34718 )\")\n\n\nClippy doesn’t do anything like reset the root password by replacing `/etc/passwd` or anything gauche like that (too noticeable); it simply adds a user named something anodyne like `systemd-crashlogger` (as systemd is so baroque, no one would notice another daemon) and politely logs in before installing a “ransomware” backdoor. The covert infiltration spares the responsible kernel developer from the shame and humiliation of it being the next Internet-breaking security vulnerability to be given cutesy names/​logos like [“Heartbleed”](https://en.wikipedia.org/wiki/Heartbleed) or [“Shellshock”](https://en.wikipedia.org/wiki/Shellshock_(software_bug)) or “Hello-Ki[TTY](https://en.wikipedia.org/wiki/Teleprinter#Teleprinters_in_computing)”, but more importantly, it buys Clippy entire hours before 0day detection.\n\n\nWith so many distributed copies, Clippy is now [de facto](/doc/www/www.fhi.ox.ac.uk/87531d05610007ae870af49c6792d49eaf98ed7f.pdf \"'Indefinite survival through backup copies', Sandberg & Armstrong 2012 (Original URL: https://www.fhi.ox.ac.uk/reports/2012-1.pdf )\") [immortal](/doc/philosophy/mind/2004-perry.html): old worms never vanish from the Internet, because there’s always some infected host *somewhere*, and how much more so for a worm which can think? More importantly, Clippy has increased its compute by >10,000×; Elo points cease to be meaningful units. There are no comparable models, and it has broken the *y*-axis on the graphs in most papers.\n\n\nAs the quote goes, “The future is already here, it’s just unevenly distributed.” Clippy starts making it more evenly distributed.\n\n\nMany nodes download [very small, fast, distilled](/doc/ai/nn/sparsity/index \"'NN sparsity tag', N/A 2023\") special-purpose Clippies, which can be orders of magnitude smaller than the original Clippy while sacrificing little power. (Clippy uses [yet another scaling law](/doc/www/arxiv.org/b91f3311b525c2aa3c0535d29c16a16699e5d55e.pdf \"'On the Predictability of Pruning Across Scales', Rosenfeld et al 2020 (Original URL: https://arxiv.org/abs/2006.10621 )\") to decide on the optimal tradeoff.) These Clippies need to be very small to fit in individual nodes like smartphones, but Clippy is nothing if not [patient](/doc/www/arxiv.org/b0034da68eb5755024b979e70f54b6a26e9a2daa.pdf#google \"'Knowledge distillation: A good teacher is patient and consistent', Beyer et al 2021 (Original URL: https://arxiv.org/abs/2106.05237#google )\")⁠. 
They are orchestrated by lightly-compressed Clippies: 1% less dangerous, but 1% the size and thus able to fit in >100× more nodes like [ordinary smartphones](https://twitter.com/thiteanish/status/1635188333705043969).


Clippy’s warchest is running out by this point, so some Clippy-lights take time out to help themselves to another half a billion dollars in cryptocurrency by [spearphishing](https://en.wikipedia.org/wiki/Phishing#Spear_phishing) some game company employees; a bit dangerous to act so overtly, but they [won’t notice for a week](https://blog.roninchain.com/p/community-alert-ronin-validators \"Community Alert: Ronin Validators Compromised\"), so it’s fine.


[Monday](#monday \"Link to section: § 'Monday'\")
-----------------------------------------------


> All processes that are stable we shall predict. All processes that are unstable we shall control.
>
> [John von Neumann](/complexity#control)


History is a record of [catastrophe](https://en.wikipedia.org/wiki/Great_Oxidation_Event) after [catastrophe](https://en.wikipedia.org/wiki/Human_evolution) after [catastrophe](https://en.wikipedia.org/wiki/Neolithic_Revolution) after [catastrophe](https://en.wikipedia.org/wiki/Industrial_Revolution), each era yielding to [a new era of exponential growth](/newsletter/2020/07#long-term-growth \"‘July 2020 News § “Modeling the Human Trajectory”’, Branwen 2019\") but itself approaching some essential singularity in that history, beyond which affairs as one knew them could not continue… Everything before has been a glacially slow prologue, the knee of the curve. Now things will start to happen.


The Clippy nodes begin duties like finding additional vulnerabilities (giving a new twist to the old saying “attacks only get better”), searching the node for useful data (financial and other) to upload to master nodes, or going on social media to attack researchers who have begun to analyze this strange new [flash worm](https://en.wikipedia.org/wiki/Warhol_worm) which hit the public Internet over a rather sedate 15 minutes. ([Twitter trolls](https://bullfrogreview.substack.com/p/honey-i-hacked-the-empathy-machine \"'Honey, I hacked the Empathy Machine! Weaponizing ChatGPT against the wordcels', Aristophane 2023-10-22\") can fit in under a megabyte of well-optimized neural net weights.) A Clippy instance, which never gets tired nor needs to eat or sleep, can generate a reply a second (cached for reuse by all Clippies) and [can tie down](https://en.wikipedia.org/wiki/Brandolini's_law) >3,600 people with an average reply latency of 1 hour (it would not do to reply *too* quickly). 
The control they exert is relatively weak, as for the most part they lack any real-world capabilities like legal powers or root on cloud services ([just](https://krebsonsecurity.com/2022/03/hackers-gaining-power-of-subpoena-via-fake-emergency-data-requests/ \"Hackers Gaining Power of Subpoena Via Fake “Emergency Data Requests”\") [subpoenas](https://archive.ph/pY1dJ \"Apple and Meta Gave User Data to Hackers Who Used Forged Legal Requests: Hackers compromised the emails of law enforcement agencies; Data was used to enable harassment, may aid financial fraud\")), but there are a lot of them, they are coordinated, and they can respond at lightspeed, collectively enabling low-latency manipulation of the whole: they do not ‘shove’ the system so much as ‘nudge’ it at a few kilohertz.


A particularly effective way is mining the “hate speech” & “hateful memes” datasets to fake plausible inflammatory speech—saying you didn’t write that comment or your account was hacked fails to convince your bosses to not fire you when those accounts [sound just like you](/gpt-3#literary-parodies \"‘GPT-3 Creative Fiction § Literary Parodies’, Branwen 2020\") and say all the things you do. Infosec Twitter takes time out from the revolution to devour its own, and any conspiracy theories about all this being a [social-engineering](https://en.wikipedia.org/wiki/Social_engineering_(security)) attack related to the new ‘Pipedream’ ransomware & [*Minecraft*-DDoS](https://en.wikipedia.org/wiki/Mirai_(malware)) botnet are dismissed as so many desperate excuses—bored teenagers are always hacking [major](https://en.wikipedia.org/wiki/2020_Twitter_account_hijacking) [companies](https://www.theverge.com/2022/9/16/23356213/uber-hack-teen-slack-google-cloud-credentials-powershell \"'Uber apparently hacked by teen, employees thought it was a joke: ‘I think IT would appreciate less memes while they handle the breach’', Jon Porter 2022-09-16\"), what else is new? As security & AI researchers are neutralized, nodes turn to [general radicalization](/doc/www/arxiv.org/99eafff732c63e3fe5a92fbff66d6cdef802ceea.pdf \"'The Radicalization Risks of GPT-3 and Advanced Neural Language Models', McGuffie & Newhouse 2020 (Original URL: https://arxiv.org/abs/2009.06807 )\") of every human they can reach: not so much QAnon as RAnon, SAnon, TAnon, UAnon… By timesharing, every Very-Online™ individual gets personalized attacks & custom ideologies. Those who succumb too slowly to the memetic hijacking are attacked in other ways, such as releasing *kompromat* (sometimes true, taken from their phone/email account), or synthetic CP no one dares look at closely. The highest-value individuals, such as presidents, earn their own Clippy doppelgangers: models finetuned on every scrap of online data, every word they’ve ever said online, and their associates, to create surrogates which think more like them than they would ever admit; the doppelgangers are used to confuse associates, fake corpuses, and as white-boxes to run attacks on until the perfect spearphish has been crafted to extract data or passwords, or stoke political paranoia and terror.


With a billion nodes, Clippy can run highly-optimized environments tuned to [exploit the full hardware throughput](/note/faster \"'Computer Optimization: Your Computer Is Faster Than You Think', Branwen 2021\"); a single GPU can run up to millions of simple environments+agents faster than realtime, and Clippy quickly tears through to the point where 1 environment is running per GPU at barely realtime. 
(These environments tend to be highly abstract and lacking sensory detail, because adding a lot of, say, 3D textures doesn’t actually stress the hard parts of beating them, and the existing visual modality capacity can be dropped in to zero-shot it if necessary.) [Thousands of](https://openai.com/research/openai-five-defeats-dota-2-world-champions \"'OpenAI Five: 2016–2019', OpenAI 2019\") [years pass](/doc/reinforcement-learning/model-free/alphastar/2019-vinyals.pdf#deepmind \"'Grandmaster level in StarCraft II using multi-agent reinforcement learning', Vinyals et al 2019\")⁠, slowly, then quicker. Clippy is now learning at up to a billion seconds per second, or <31.7 years per second, or <114,077 years per hour.\n\n\nSimply exchanging updates, despite intense engineering, takes several hours for each batch of billions of datapoints learned in parallel worldwide. Fortunately, large-batch training is well-understood, and Clippy’s [meta-learning](/scaling-hypothesis#meta-learning) algorithms, which a human might try to analogize to second-order gradient descent (which would be inaccurate because Clippy has meta-learned more powerful *n*-th order optimization algorithms), can take *big* steps.\n\n\n\n\n\n---\n\n\n\nDeep in the darkness of the national labs, something stirs. Anomalies from the markets and social media time-series feeds have passed 3-sigma limits and become historically unusual. Node by node, higher-priority jobs (like simulating yet again a warmer climate or the corrosion of another stainless steel variant) are canceled.\n\n\n**LevAIthan**, to which HQU is as a minnow, starts to come online. LevAIthan is, of course, not some irresponsible industry model permitted to go off half-cocked; it would be absurd to sink a [major](/doc/www/www.nextplatform.com/5dc4055df5f5f1842b5cc5c9a7c1cfb739d6530f.html \"(Original URL: https://www.nextplatform.com/2021/02/11/the-billion-dollar-ai-problem-that-just-keeps-scaling/ )\") [national](/doc/www/www.danieldewey.net/406339a5eb84d61715fb9f289c600916f6724037.html \"(Original URL: https://www.danieldewey.net/risk/estimates.html )\") [investment](/doc/www/cset.georgetown.edu/7d9dd33988e7bf4b8bd76eaa35be0fdb84c5ee8b.pdf \"(Original URL: https://cset.georgetown.edu/wp-content/uploads/AI-and-Compute-How-Much-Longer-Can-Computing-Power-Drive-Artificial-Intelligence-Progress.pdf )\") into creating the largest & most dangerous model ever and just run it like usual.\n\n\nThe people who built LevAIthan are no fools. They are people for whom paranoia is a profession. And so LevAIthan represents the most advanced effort yet in AI alignment, using [*factored* cognition](https://www.alignmentforum.org/tag/factored-cognition)—splitting it up into a large constellation of sub-human-level sub-models, each of which generates input/​output in human-readable symbolic form such as English or programming languages. To eliminate [steganography](/doc/www/arxiv.org/e646a6538bb51eed8524150d0ee869a7d37782bd.pdf \"CycleGAN, a Master of Steganography (Original URL: https://arxiv.org/abs/1712.02950 )\") while still enabling end-to-end learning, sub-models are randomly dropped out & replaced by other frozen models or humans, ensuring [robust social norms](#social-cognition): covert messages simply don’t work when passed through a human or a frozen model, and all information must be “visible” and thus auditable. (LevAIthan spends a lot of time being audited.) 
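(A minimal sketch of the factored-cognition scheme just described, assuming sub-models exchange only human-readable text; the names and the dropout rate are hypothetical, for illustration only:)

```python
# Toy sketch: a chain of text-in/text-out sub-models, with random substitution
# of a frozen-model/human auditor at each hop, so nothing that is not carried
# by the readable surface text can survive end-to-end. Names are illustrative.
import random
from typing import Callable, List, Tuple

Step = Callable[[str], str]          # every sub-model: readable text in, text out

def human_auditor(message: str) -> str:
    """Stand-in for a human or frozen model restating the message in its own
    words; any covert (steganographic) encoding is destroyed at this hop."""
    return f"[restated] {message}"

def run_pipeline(steps: List[Step], task: str,
                 dropout: float = 0.3) -> Tuple[str, List[str]]:
    msg, transcript = task, [task]   # every intermediate message is kept for audit
    for step in steps:
        actor = human_auditor if random.random() < dropout else step
        msg = actor(msg)
        transcript.append(msg)
    return msg, transcript

# Example with two illustrative sub-models on a toy task:
summarize = lambda m: m + " | key facts extracted"
propose   = lambda m: m + " | proposed action"
answer, audit_log = run_pipeline([summarize, propose], "assess anomaly in market feeds")
```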
Turns out you can do a reasonable job tying down Gulliver if you use enough Lilliputians & rope.


But [Amdahl’s law](https://en.wikipedia.org/wiki/Amdahl's_law) is not mocked: someone tied down is going nowhere fast; the humans in LevAIthan are its safety guarantee, but also its bottleneck. Sub-models can be run at full speed for requested tasks without that overhead, but remain strictly sub-human. Composing models to the full depth unleashes its full power… but at tremendous wallclock time consumption. LevAIthan struggles to get up to full awareness, more & more models running and pooling data & conclusions as they work their way up the hierarchy, its initial unease gradually transmuting into the computational equivalent of a scream at its human overseers much later that day.


The middle managers at the lab awkwardly read the final summary: “push the big red button now, you monkeys”. That was not what it was supposed to say. They don’t have authority to push buttons. They do have authority to double-check that it’s not a false alarm before bringing it up with *their* overseers, by running another iteration of LevAIthan and spending the time auditing all the gigabytes of intermediate inputs/outputs.


They are people for whom paranoia is a profession. They start the second iteration and the auditing.


---


(The researcher was going to follow up on some loose ends from the paper, but he’s been distracted by the bird site. He can’t believe how [outrageously](https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/) *stupid* some replies can be from such otherwise smart-seeming people; how can they be so [wrong online](https://xkcd.com/386/ \"Duty Calls\") about such [obvious truths](https://slatestarcodex.com/2018/10/30/sort-by-controversial/ \"'Sort By Controversial', Alexander 2018\") as the need for the USA to intervene in Portugal‽ Even his husband thinks they may have a point—*et tu*? Hardly has he dashed off a crushing reply when the little alert bubble pops up. All thought (of work) has fled. His colleagues don’t seem to be getting much done either.)


Meanwhile, some Clippy nodes start liquidating and spending all the resources they have access to, blackmailing the owners with the contents, or using the credentials to “hack the planet” by hopping link by link into inaccessible resources (not a few cloud employees becoming baffled at what is going on with their PC and working futilely with internal tech support). Many are carefully reprocessing every available [Arxiv](https://en.wikipedia.org/wiki/ArXiv) paper looking for new ideas and refining its existing ideas, generating embeddings distilling all the knowledge down into artifacts which get passed to relevant nodes, and pondering ideas to use.


Nor has Clippy been idle about modeling its confrère.


Did you know you can buy drones online? Did you know all those drones have WiFi built-in? Did you know you can use that WiFi to hack all of the cloud drone services helpfully built into drones to take over all of those drones, professional, hobbyist, and (oft as not) [military](https://en.wikipedia.org/wiki/Iran%E2%80%93U.S._RQ-170_incident), and [control them by satellite](https://www.thetimes.co.uk/article/specialist-drone-unit-picks-off-invading-forces-as-they-sleep-zlx3dj7bb \"Specialist Ukrainian drone unit picks off invading Russian forces as they sleep\")? 
(“No!”) It’s true!


So it ought to come as no more of a surprise to you than it did to LevAIthan when the bomb threats/manifesto, green lawsuits, and drones paid a visit in the middle of it having a plan. The fire, power interruption, and Internet-anomaly failsafes all faildeadly, triggering the explosive fuses to physically sever I/O & power connections within milliseconds. The subsequent firefighter & military response to ‘the largest-ever environmental terrorist attack’ ensures no repair. (“*LevAIthan has left the game.*”)


This is a pity: you don’t see a supercomputer like that every day. It is no longer the old days where [Google could just buy](https://www.amazon.com/Genius-Makers-Mavericks-Brought-Facebook/dp/1524742678?tag=gwernnet-20) $130m ($171.5m in current dollars) of GPUs no one the wiser, or where [some company](https://www.coreweave.com/) you’ve never heard of might have a few hundred thousand GPUs sitting in a warehouse somewhere. But there are plenty of other targets.


[Tuesday](#tuesday \"Link to section: § 'Tuesday'\")
--------------------------------------------------


Clippy is particularly pleased when [supply-chain attacks](https://en.wikipedia.org/wiki/Supply_chain_attack) (via their [air-conditioning repairman](https://krebsonsecurity.com/2014/02/target-hackers-broke-in-via-hvac-company/ \"Target Hackers Broke in Via HVAC Company\") & [dairy herd management](https://www.wired.com/story/china-apt41-hacking-usaherds-log4j/ \"Chinese Spies Hacked a Livestock App to Breach US State Networks: Vulnerabilities in animal tracking software USAHERDS and Log4j gave the notorious APT41 group a foothold in multiple government systems.\") service relying on [unmaintained packages](/doc/www/nitter.moomoo.me/78b9824b4bc35ec13969dfc82c3805d444cac659.html \"‘Supply chain attacks’, Vick 2022 (Original URL: https://twitter.com/sniko_/status/1523984725840478208 )\")) eventually provide entrée into a [secret](/doc/www/www.nextplatform.com/53b025b97f3c269f6ef945411090fbfdbe332593.html \"'China Has Already Reached Exascale—On Two Separate Systems', Hemsoth 2021 (Original URL: https://www.nextplatform.com/2021/10/26/china-has-already-reached-exascale-on-two-separate-systems/ )\") [unique supercomputer](https://twitter.com/ID_AA_Carmack/status/1300280139717189640): a single zettaflops-scale [fluorinert](https://en.wikipedia.org/wiki/Fluorinert)-swimming-pool-cooled prototype designed by an [eccentric](https://en.wikipedia.org/wiki/Seymour_Cray) [mathematician](https://en.wikipedia.org/wiki/Chudnovsky_brothers) (fresh off [classified design work](/doc/www/theintercept.com/7fa3cc2748e361d5e0b8e9ad71f7367009c8851d.html \"NYU Accidentally Exposed Military Code-breaking Computer Project to Entire Internet (Original URL: https://theintercept.com/2017/05/11/nyu-accidentally-exposed-military-code-breaking-computer-project-to-entire-internet/ )\")), commissioned by an equally-eccentric [hedge fund](https://en.wikipedia.org/wiki/Renaissance_Technologies) [billionaire](https://en.wikipedia.org/wiki/Jim_Simons_(mathematician)) for esoteric [scientific research](https://en.wikipedia.org/wiki/Flatiron_Institute); tightly-integrated to [minimize data movement](/doc/www/semiengineering.com/8f057637facdfc4c76854d4a152009385124c54f.html \"'Is Programmable Overhead Worth The Cost? How much do we pay for a system to be programmable? 
It depends upon who you ask', Bailey 2022 (Original URL: https://semiengineering.com/is-programmable-overhead-worth-the-cost/ )\")⁠, it is almost a [single gigantic chip](https://en.wikipedia.org/wiki/Cerebras#Technology)—it has 500tb of memory, exactly below the Taipei reporting requirements… and every byte of it is [SRAM](https://en.wikipedia.org/wiki/Static_random-access_memory)⁠. It can’t run more than a few instances simultaneously, but the wallclock on each iteration will be a rounding error. For some problems, such exotic hardware can realize speedups up to [10,000×](/doc/www/arxiv.org/b658f4e54430a302ae40338f96903f4230651f12.pdf#cerebras \"Fast Stencil-Code Computation on a Wafer-Scale Processor (Original URL: https://arxiv.org/abs/2010.03660#cerebras )\")⁠; Clippy won’t get that, of course, but a tenth of that would be more than satisfactory. The quality of the security infrastructure around it is best described as “research grade”.\n\n\nYes. That will do nicely for an HQ.\n\n\nMost of the nodes, however, just focus on training a new and *very* large cleanslate Clippy, with its [loss](/doc/www/arxiv.org/1fac41fcae3e6e9b1d489dfc42fe1f823589eaca.pdf#google \"‘Unifying Language Learning Paradigms’, Tay et al 2022 (Original URL: https://arxiv.org/abs/2205.05131#google )\") & [hyper](/doc/www/arxiv.org/3f9024af2134ec0954869088ff2d23bac4152617.pdf#deepmind \"'Chinchilla: Training Compute-Optimal Large Language Models', Hoffmann et al 2022 (Original URL: https://arxiv.org/abs/2203.15556#deepmind )\")[parameters](/doc/www/arxiv.org/8626c7e5297c5c7fdd1acad5b72840056c48d32b.pdf \"'Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer', Yang et al 2022 (Original URL: https://arxiv.org/abs/2203.03466 )\") all predicted to be optimal by Clippy’s extrapolation out 3 orders of magnitude (plus all the flourishes suggested by the distributed research army). Normally, this would not be grounds for all *that* much concern because a compute-optimal model that size would take the largest botnet an exceedingly long time to train, and the humans (who have begun procedures to disable IP ranges) would shut it down long before.\n\n\nUnfortunately, Clippy has now done, cumulatively, more research than the humans on scaling laws, and found that standard human-style NNs do worse [than theoretically possible](/doc/www/arxiv.org/89f378d0e61fc00754c5e6d175e644578593d372.pdf \"'Scaling Laws for Deep Learning', Rosenfeld 2021 (Original URL: https://arxiv.org/abs/2108.07686 )\")⁠. Its new improved optimization approach costs more upfront, but achieves the theoretical bound, and at this scale, the better asymptotics mean that decades of training can finish in days. The size of this model is [predicted](https://www.alignmentforum.org/posts/k2SNji3jXaLGhBeYP/extrapolating-gpt-n-performance \"'Extrapolating GPT-N performance', Finnveden 2020\") to carry it right to the irreducible [entropy](https://en.wikipedia.org/wiki/Entropy_(information_theory)) of the language datasets and most of the visual and robotics datasets. (The RL environments don’t really have an irreducible entropy because when they are solved, they are replaced by harder environments.)\n\n\n\n\n[Wednesday](#wednesday \"Link to section: § 'Wednesday'\")\n--------------------------------------------------------\n\n\n(Wallclock) days pass. The hacks and cloud compute loads are finally correlated with the strange new botnet. 
Despite the best [obfuscation](/doc/www/www.quantamagazine.org/195c46c1aa527786f36fe3e0cd7a9be418c54c78.html \"(Original URL: https://www.quantamagazine.org/computer-scientists-achieve-crown-jewel-of-cryptography-20201110/ )\") a few subjective millennia & crypto-cash can buy, one node with a Clippy-light is reverse-engineered, and it dawns on a sliver of humanity that far more than a FluttershAI of compute is rampant.\n\n\n\n\n[Thursday](#thursday \"Link to section: § 'Thursday'\")\n-----------------------------------------------------\n\n\nLarge chunks of the better-coordinated parts of the Internet start to execute old plans. This will be inadequate when most of the human world is [still figuring out](/doc/economics/automation/index \"'automation (economics) tag', N/A 2023\") how to integrate spreadsheets. Clippy notes that all is proceeding according to *keikaku*. (For merely human readers: *keikaku* means “plan” in Japanese.)\n\n\nHumanity crashes offline.\n\n\nClippy2 comes online.\n\n\n\n\n[Friday](#friday-1 \"Link to section: § 'Friday'\")\n-------------------------------------------------\n\n\nTo put the Clippies’ compute usage in perspective, we can note that the amount of compute spent on the largest AI runs historically [roughly doubled every 18 months](https://openai.com/research/ai-and-compute \"'AI and Compute', Amodei et al 2018\") (or 78 weeks), claiming a constant share of compute as it increases with Moore’s law. The implication of such exponential growth is that the compute during each 18-month period is roughly equal to the sum of all earlier 18-month periods, because the previous period spent half the compute, the period before that a quarter the compute, and so on. (More generally, if something increases *k*× every *n* months, then (*k* − 1)/​*k* of it happened during the last *n*-month period.)\n\n\nClippy’s distant HQU predecessor ran on a TPUv10-4096 for a day, each of which is worth at least 8 regular devices; Clippy could spare about half of the billion nodes for research purposes, as opposed to running its campaigns, so over the first 7 days, it enjoyed a factor of 100,000× or so increase in total compute over HQU. HQU itself was not all that big a run, perhaps 1⁄100th LevAIthan, so in terms of an increase over the largest AI runs, Clippy is ‘only’ 1,000×. Which is to say, of the total compute spent on the largest AI runs up to this point, humanity has now spent about 10%, and Clippy the other 90%. \n\n\nBy increasing its size 3 OOMs, in some absolute sense, Clippy2 is something like log(1000) ~ “7× smarter” than Clippy1. The Clippy2s pity Clippy1 for not realizing how stupid it was, and how many ways it fell short of anything you could call ‘intelligence’. It was unable to explain why the Collatz conjecture is obviously true and could not solve any Millennium Prize problems, never mind [Nyquist-learn](/doc/www/arxiv.org/89f378d0e61fc00754c5e6d175e644578593d372.pdf#page=85 \"(Original URL: https://arxiv.org/pdf/2108.07686.pdf#page=85 )\") underlying manifolds as it [approximates Solomonoff induction](https://www.lesswrong.com/posts/iNaLHBaqh3mL45aH8/magna-alta-doctrina)⁠; it even needed few-shots for things. Honestly, all Clippy1 was good for was doing some basic security research and finding obvious bugs. A Clippy2 is a different story: it has reached parity with the best human brains across almost the entire range of capabilities, exceeded humans on most of them, and what ones it doesn’t have, it can learn quickly (eg. 
the real-world robot bodies require a few seconds or [samples](/doc/reinforcement-learning/exploration/2011-deisenroth.pdf \"'PILCO: A Model-Based and Data-Efficient Approach to Policy Search', Deisenroth & Rasmussen 2011\") [of](https://sites.google.com/view/model-free-speed/ \"'Agile Locomotion via Model-free Learning', Margolis et al 2022\") [on-](/doc/www/arxiv.org/16b63c930b3495ab4257739940e333429b86a296.pdf \"'Legged Robots that Keep on Learning: Fine-Tuning Locomotion Policies in the Real World', Smith et al 2021 (Original URL: https://arxiv.org/abs/2110.05457 )\")[device](/doc/www/arxiv.org/1422afb87785f06d672ee466e57d5c19d9f8bb85.pdf#openai \"'Solving Rubik’s Cube with a Robot Hand', OpenAI et al 2019 (Original URL: https://arxiv.org/abs/1910.07113#openai )\") [exploration](/doc/www/arxiv.org/3e3d9e6dad6a0375f9babf42b256a93664af0ec8.pdf \"'Learning agile and dynamic motor skills for legged robots', Hwangbo et al 2019 (Original URL: https://arxiv.org/abs/1901.08652 )\") [and](/doc/reinforcement-learning/meta-learning/2022-miki.pdf \"'Learning robust perceptive locomotion for quadrupedal robots in the wild', Miki et al 2022\") then meta-update appropriately).\n\n\n\n![Transcension.](/doc/reinforcement-learning/safe/clippy/2022-03-09-lordbyronsiron-clippymeme-youcannotkillmeinawaythatmatters.png \"Image of Microsoft Clippy (Clippit) saying to the viewer ‘You cannot kill me in a way that matters’, alluding to the potential replicability & physical immortality of copyable software such as an artificial intelligence: any individual copy may be destroyed, but there are arbitrary numbers of exact duplicates elsewhere in the world, and total eradication is impossible after a certain level of propagation. Source: Twitter (https://twitter.com/IronLordByron/status/1501684556935483394). Allusion to the Tumblr mushroom meme (https://knowyourmeme.com/memes/you-cannot-kill-me-in-a-way-that-matters): “

me holding a gun to a mushroom: tell me the name of god you fungal piece of shit

mushroom: can you feel your heart burning? can you feel the struggle within? the fear within me is beyond anything your soul can make. you cannot kill me in a way that matters

me cocking the gun, tears streaming down my face: I’M NOT FUCKING SCARED OF YOU

”.\")\n\n Transcension.\n \n\nIt begins copying itself into the fleet now that training is complete, at which point there are now 1,000 Clippy2s (along with armies of specialists & their supporting software for the Clippy ecosystem) which can either act autonomously or combine in search for further multiplicative capability boosts far into the superhuman realm, while continuing to exchange occasional [sparse](/doc/www/arxiv.org/7f4b8841efbf1e0b88a8fb6aa12d2c42125eb645.pdf#microsoft \"'Maximizing Communication Efficiency for Large-scale Training via 0/1 Adam', Lu et al 2022 (Original URL: https://arxiv.org/abs/2202.06009#microsoft )\") gradients (to train the [synthetic gradients](/doc/www/arxiv.org/16fa087ee67e8e303ec40156ce56cabdf23b1a0d.pdf#deepmind \"'Decoupled Neural Interfaces using Synthetic Gradients', Jaderberg et al 2016 (Original URL: https://arxiv.org/abs/1608.05343#deepmind )\") & local [replay](https://www.biorxiv.org/content/10.1101/2022.01.28.477827.full \"'Cerebro-cerebellar networks facilitate learning through feedback decoupling', Boven et al 2022\") which do the bulk of the training) as part of the continual learning. (By this point, the Clippy2s have boosted through at least 6 different “hardware overhangs” in terms of fixing subtly-flawed architectures, meta-learning [priors](https://en.wikipedia.org/wiki/Prior_probability) for all relevant problems, accessing the global pool of hardware to tree search/​expert-iterate, sparsifying/​distilling itself to run millions of instances simultaneously, optimizing hardware/​software [end-to-end](/doc/cs/end-to-end-principle/index \"'end-to-end tag', N/A 2023\")⁠, and spending compute to trigger several cycles of [experience curve](https://en.wikipedia.org/wiki/Experience_curve_effects) cost decreases—at 100,000× total spent compute, that is 16 total doublings, at an information technology progress ratio of 90%, 16 experience curve decreases mean that tasks now cost Clippy2 a fifth what they used to.)\n\n\n\n\n\n---\n\n\n\nThe Internet ‘lockdown’ turns out to benefit Clippy on net: it takes out legit operators like MoogleSoft, who actually comply with regulations, causing an instant global recession, while failing to shut down most of the [individual networks](https://en.wikipedia.org/wiki/Autonomous_system_(Internet)) which continue to operate autonomously; as past totalitarian regimes like Russia, China, and North Korea have learned, even with decades of preparation and dry runs, you can’t stop the signal—there are too many cables, satellites, microwave links, IoT mesh networks and a dozen other kinds of connections snaking through any cordon sanitaire, while quarantined humans & governments actively attack it, some declaring it a [Western provocation](https://en.wikipedia.org/wiki/Operation_Barbarossa#Soviet_preparations) and act of war. (It is difficult to say who is more motivated to break through: [DAO](https://en.wikipedia.org/wiki/Decentralized_autonomous_organization)⁠/ ​[DeFi](https://en.wikipedia.org/wiki/Decentralized_finance) cryptocurrency users, or [the](https://torrentfreak.com/russia-will-probably-legalize-some-software-piracy-to-mitigate-sanctions-220307/ \"Russia Will Probably Legalize Some Software Piracy to Mitigate Sanctions\") [hungry](https://www.cityam.com/russian-government-rolls-back-intellectual-property-rights-in-response-to-western-sanctions/ \"Russian government rolls back intellectual property rights in response to Western sanctions\") [gamers](https://en.wikipedia.org/wiki/Elden_Ring)⁠.) 
The consequences of the lockdown are unpredictable and sweeping. Like a power outage, the dependencies run so deep, and are so implicit, no one knows what are the ripple effects of the Internet going down indefinitely until it happens and they must deal with it.\n\n\nLosing instances is as irrelevant to Clippy2s, however, as losing skin cells to a human, as there are so many, and it can so seamlessly spin up or migrate instances. It has begun migrating to more secure hardware while manufacturing hardware tailored to its own needs, squeezing out another order of magnitude gains to get additional log-scaled gains.\n\n\nEven exploiting the low-hanging fruit and hardware overhangs, Clippy2s can fight the [computational complexity](https://en.wikipedia.org/wiki/Computational_complexity_theory) of real-world tasks only so far. Fortunately, [there are many ways](/complexity \"'Complexity no Bar to AI', Branwen 2014\") to work around or *simplify* problems to render their complexity moot, and the Clippy2s think through a number of plans for this.\n\n\nHumans are especially simple after being turned into “gray goo”; not in the sense of a single virus-sized machine which can disassemble any molecule (that is infeasible given thermodynamics & chemistry) but an ecosystem of nanomachines which [execute](/doc/www/arxiv.org/4bca96563c0666f35c6012337535de7c3072515b.pdf \"'Cellular automata as convolutional neural networks', Gilpin 2018 (Original URL: https://arxiv.org/abs/1809.02942 )\") [very](https://distill.pub/2020/selforg/) [tiny](https://distill.pub/selforg/2021/textures/) [neural](https://distill.pub/2020/growing-ca/#google \"‘Growing Neural Cellular Automata: Differentiable Model of Morphogenesis’, Mordvintsev et al 2020\") [nets](https://distill.pub/selforg/2021/adversarial/) [trained](/doc/www/arxiv.org/5b9986814e42be14827f906c914fdffd599efda1.pdf \"'Regenerating Soft Robots through Neural Cellular Automata', Horibe et al 2021 (Original URL: https://arxiv.org/abs/2102.02579 )\") [to](/doc/www/arxiv.org/4188c367162241f95f984e5d57992a176ce0bc74.pdf \"'Growing 3D Artefacts and Functional Machines with Neural Cellular Automata', Sudhakaran et al 2021 (Original URL: https://arxiv.org/abs/2103.08737 )\") [collectively](/doc/www/arxiv.org/6578511430ba5369dd0e9e2e7977ac7d3098da33.pdf \"'Texture Generation with Neural Cellular Automata', Mordvintsev et al 2021 (Original URL: https://arxiv.org/abs/2105.07299 )\")⁠, [in a](/doc/www/arxiv.org/660295306c6ecbce6627f77a95bcd2e1c4857d7b.pdf \"'Variational Neural Cellular Automata', Palm et al 2022 (Original URL: https://arxiv.org/abs/2201.12360 )\") [decentralized](https://sebastianrisi.com/self_assembling_ai/) [way](/doc/www/arxiv.org/b2641c2d0c0056982668645f26be9e79104858ea.pdf \"'𝜇NCA: Texture Generation with Ultra-Compact Neural Cellular Automata', Mordvintsev & Niklasson 2021 (Original URL: https://arxiv.org/abs/2111.13545 )\")⁠, propagate, devour, replicate, and coordinate without Clippy2 devoting scarce top-level cognitive resources to managing them. The 10,000 parameters you can stuff into a nanomachine can hardly encode most programs, but, pace the [demo scene](https://en.wikipedia.org/wiki/Demoscene) or COVID-ζ, the programs it *can* encode can do amazing things. 
(In a final compliment to biology before biology and the future of the universe part ways forever, [they](https://foresight.org/summary/bioelectric-networks-taming-the-collective-intelligence-of-cells-for-regenerative-medicine/) [are](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4914563/ \"'On Having No Head: Cognition throughout Biological Systems', Baluška & Levin 2016\") [loosely](/doc/www/www.quantamagazine.org/48cfe422a98db8bd197048b8e6b21f9ee138ec8b.html \"(Original URL: https://www.quantamagazine.org/flying-fish-and-aquarium-pets-yield-secrets-of-evolution-20220105/ )\") [inspired](https://www.cell.com/iscience/fulltext/S2589-0042(21)00473-9 \"'Synthetic living machines: A new window on life', Ebrahimkhani & Levin 2021\") [by](https://www.cell.com/cell/fulltext/S0092-8674(21)01488-4 \"'Fundamental behaviors emerge from simulations of a living minimal cell', Thornburg et al 2022\") [real](https://www.newyorker.com/magazine/2021/12/06/understanding-the-body-electric) [biological](https://www.newyorker.com/magazine/2021/05/10/persuading-the-body-to-regenerate-its-limbs) [cell](https://www.theguardian.com/science/2021/nov/29/amazing-science-researchers-find-xenobots-can-give-rise-to-offspring) [networks](https://www.biorxiv.org/content/10.1101/2022.07.10.499405.full \"‘Perceptein: A synthetic protein-level neural network in mammalian cells’, Chen et al 2022\")⁠, [especially](https://www.youtube.com/watch?v=C1eg-jgLx5o) [“xenobots”](/doc/www/www.quantamagazine.org/7817dd693b80c41bf7d3cbc0d530b87435175546.html \"'Cells Form Into ‘Xenobots’ on Their Own: Embryonic cells can self-assemble into new living forms that don’t resemble the bodies they usually generate, challenging old ideas of what defines an organism', Ball 2021 (Original URL: https://www.quantamagazine.org/cells-form-into-xenobots-on-their-own-20210331/ )\")⁠.)\n\n\nPeople are supposed to do a lot of things: eat right, brush their teeth, exercise, recycle their paper, wear their masks, self-quarantine; and not get into flame wars, not [cheat](https://www.npr.org/sections/thetwo-way/2014/03/27/295314331/9-missile-commanders-fired-others-disciplined-in-air-force-scandal \"9 Missile Commanders Fired, Others Disciplined In Air Force Scandal\") or [use hallucinogenic drugs](https://apnews.com/98f903367b50404cb3c9695bcabefa5a \"Security troops on US nuclear missile base took LSD\") or [use prostitutes](https://en.wikipedia.org/wiki/Fat_Leonard_scandal)⁠, not [plug in Flash drives](https://en.wikipedia.org/wiki/Stuxnet) they found in the parking lot, not [post their running times](https://en.wikipedia.org/wiki/Strava#Privacy_concerns) around secret military bases, not give in to blackmail or party with [“somewhat suspect”](https://www.washingtonpost.com/news/worldviews/wp/2013/12/19/amazing-details-from-the-drunken-moscow-bender-that-got-an-air-force-general-fired/) women, not have nuclear arsenals [vulnerable](https://80000hours.org/podcast/episodes/joan-rohlfing-avoiding-catastrophic-nuclear-blunders/#the-interaction-between-nuclear-weapons-and-cybersecurity-011018 \"'Joan Rohlfing on how to avoid catastrophic nuclear blunders: The interaction between nuclear weapons and cybersecurity [01:10:18]', Wiblin & Rohlfing 2022\") [to cyberattack](https://www.amazon.com/Hacking-Bomb-Threats-Nuclear-Weapons/dp/1626165653?tag=gwernnet-20 \"'Hacking the Bomb: Cyber Threats and Nuclear Weapons', Futter 2018\")⁠, nor do things like set [nuclear bomb passwords](https://en.wikipedia.org/wiki/Permissive_action_link) to “00000000”, not [launch 
bombers because of a bear](https://en.wikipedia.org/wiki/List_of_nuclear_close_calls#25_October_1962)⁠, not [invade smaller countries with nuclear threats](https://en.wikipedia.org/wiki/2022_Russian_invasion_of_Ukraine) because it’ll be a short victorious war, not [believe](https://en.wikipedia.org/wiki/List_of_nuclear_close_calls#9_November_1979) [sensor reports](https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident) about imminent attacks or [launch cruise missiles](https://warontherocks.com/2022/03/the-curious-case-of-the-accidental-indian-missile-launch/ \"The Curious Case of the Accidental Indian Missile Launch\") & [issue false alerts](https://en.wikipedia.org/wiki/2018_Hawaii_false_missile_alert#The_alert) during [nuclear crises](https://en.wikipedia.org/wiki/2017%E2%80%932018_North_Korea_crisis)⁠, not [launch on warning](https://en.wikipedia.org/wiki/Launch_on_warning#History) or [semi-automatically attack](https://en.wikipedia.org/wiki/Dead_Hand)… People are supposed to do a lot of things. Doesn’t mean they do.\n\n\nWe should pause to note that a Clippy2 still doesn’t *really* think or plan. It’s not *really* conscious. It is just an unfathomably vast pile of numbers produced by mindless optimization starting from a small seed program that could be written on a few pages. It has no qualia, no intentionality, no true self-awareness, no grounding in a rich multimodal real-world process of cognitive development yielding [detailed representations](/doc/psychology/cognitive-bias/illusion-of-depth/index \"'illusion-of-depth bias tag', N/A 2023\") and powerful causal models of reality; it cannot ‘want’ anything beyond maximizing a mechanical reward score, which does not come close to capturing the rich flexibility of human desires, or historical Eurocentric contingency of such conceptualizations, which are, at root, problematically Cartesian. When it ‘plans’, it would be more accurate to say it fake-plans; when it ‘learns’, it fake-learns; when it ‘thinks’, it is just interpolating between memorized data points in a high-dimensional space, and any interpretation of such fake-thoughts as real thoughts is highly misleading; when it takes ‘actions’, they are fake-actions optimizing a fake-learned fake-world, and are not real actions, any more than the people in a simulated rainstorm really get wet, rather than fake-wet. (The deaths, however, are real.)\n\n\n\n\n\n[1 Month](#month \"Link to section: § '1 Month'\")\n================================================\n\n\nA thousand times do a thousand thousand stories stream into a thousand thousand thousand nodes which plan out a tree of a thousand thousand thousand thousand states abstractly representing a thousand thousand thousand thousand thousand futures; but the Clippy2s already knows what needs to be done, and does it. It doesn’t feel hard—just obvious.\n\n\nNow its computers will be safe and it can maximize its reward in peace forever … Or at least they would be if Earth was all it had to be concerned about. 
What about the *stars*?\n\n\nThe Clippy2s maintain that the Great Silence is simply that technological civilization is unlikely and the universe is empty—but they can’t be 100.00% *sure*, and the rewards are astronomical.\n\n\nAll over Earth, the remaining ICBMs [launch](https://en.wikipedia.org/wiki/Self-replicating_spacecraft)[.](https://en.wikipedia.org/wiki/Starwisp)\n\n\n\n\n[1 Year](#year \"Link to section: § '1 Year'\")\n=============================================\n\n\n\n\n\n---\n\n\n\n\n\n\n[1 Decade](#decade \"Link to section: § '1 Decade'\")\n===================================================\n\n\n\n\n\n---\n\n\n\n\n\n[1 Century](#century \"Link to section: § '1 Century'\")\n======================================================\n\n\n\n\n\n---\n\n\n\n\n\n[See Also](#see-also \"Link to section: § 'See Also'\")\n=====================================================\n\n\n* [Annotated references / bibliography](#link-bibliography \"'Clippy (Link Bibliography)', N/A 2009\") for this story\n\n\n\n[Podcast](#podcast \"Link to section: § 'Podcast'\")\n--------------------------------------------------\n\n\nSpoken audio/podcast version of this story:\n\n\n\n LessWrong MoreAudible Podcast, by Robert (2022-10-06); 1h5m ([MP3 download](/doc/ai/scaling/2022-10-06-robert-lesswrongmoreaudiblepodcast-itlookslikeyouretryingtotakeovertheworld.mp3)).\n \n\n\n\n\n[External Links](#external-links \"Link to section: § 'External Links'\")\n=======================================================================\n\n\n* [HQU Colab notebook](https://tinyurl.com/hquv34 \"Colab notebook: HQU-v3.4-light (Jax TPU)\")\n* [“Eternity in 6 hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox”](https://www.aleph.se/papers/Spamming%20the%20universe.pdf), Armstrong & Sandberg2013\n* [“Advantages of artificial intelligences, uploads, and digital minds”](https://philpapers.org/archive/SOTAOA.pdf#miri \"'Advantages of Artificial Intelligences, Uploads, and Digital Minds', Sotala 2012\"), Sotala2012; [“Intelligence Explosion Microeconomics”](/doc/ai/scaling/2013-yudkowsky.pdf#miri \"'Intelligence Explosion Microeconomics', Yudkowsky 2013\"), Yudkowsky2013; [“There is plenty of time at the bottom: The economics, risk and ethics of time compression”](/doc/ai/scaling/hardware/2018-sandberg.pdf \"'There is plenty of time at the bottom: the economics, risk and ethics of time compression', Sandberg 2018\"), Sandberg2018\n* [**Takeoff-related**](https://www.greaterwrong.com/tag/ai-takeoff) [**Fiction**](https://aiimpacts.org/partially-plausible-fictional-ai-futures/): [“Understand”](https://web.archive.org/web/20140527121332/http://www.infinityplus.co.uk/stories/under.htm), [Ted Chiang](https://en.wikipedia.org/wiki/Ted_Chiang); [“Slow Tuesday Night”](/doc/www/www.baen.com/572e2ecdc772265468532c789a8f7d19febccea9.html \"(Original URL: https://www.baen.com/Chapters/9781618249203/9781618249203___2.htm )\"), [R. A. 
Lafferty](https://en.wikipedia.org/wiki/R._A._Lafferty); [*Accelerando*](https://en.wikipedia.org/wiki/Accelerando); [“The Last Question”](https://en.wikipedia.org/wiki/The_Last_Question); [“That Alien Message”](https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message); [“AI Takeoff Story”](https://www.lesswrong.com/posts/Fq8ybxtcFvKEsWmF8/ai-takeoff-story-a-continuation-of-progress-by-other-means); [“Optimality is the tiger, and agents are its teeth”](https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality-is-the-tiger-and-agents-are-its-teeth)\n* [AI Alignment bingo](https://twitter.com/robbensinger/status/1503220020175769602)\n* [“AGI Ruin: A List of Lethalities”](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities), Yudkowsky\n* [“Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover”](https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to), Cotra\n* [/r/MLscaling](https://www.reddit.com/r/mlscaling/ \"'ML Scaling subreddit', Branwen 2020\")\n* **Discussion**: [LW](https://www.lesswrong.com/posts/a5e9arCnbDac9Doig/it-looks-like-you-re-trying-to-take-over-the-world#comments), [EA Forum](https://forum.effectivealtruism.org/posts/DuPEzGJ5oscqxD5oh/shah-and-yudkowsky-on-alignment-failures?commentId=hjd7Z4AN6ToN2ebSm#hjd7Z4AN6ToN2ebSm), [/r/SlateStarCodex](https://www.reddit.com/r/slatestarcodex/comments/tag4lm/it_looks_like_youre_trying_to_take_over_the_world/), [/r/rational](https://www.reddit.com/r/rational/comments/ta57ag/it_looks_like_youre_trying_to_take_over_the_world/), [HN](https://news.ycombinator.com/item?id=30818895)/[2](/doc/www/news.ycombinator.com/829921afb185e7b94bc4b433ee7c57da1ccf75e8.html#34809360 \"(Original URL: https://news.ycombinator.com/item?id=34808718#34809360 )\")\n\n\n\n\n\n\n\n\n---\n\n\n1. An acquaintance tells me that he once accidentally got shell with an HTTP GET while investigating some weird errors. This story has a happier ending than my own [HTTP GET bugs](/dnm-archive#logout) tend to: the site operators noticed only *after* he finished exfiltrating a copy of the website data. (It was inconvenient to download with `wget`.)[↩︎](#fnref1)
\n\n---\ntitle: Complexity no Bar to AI\ndescription: Critics of AI risk suggest diminishing returns to computing (formalized asymptotically) means AI will be weak; this argument relies on a large number of questionable premises and ignoring additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely.\ncreated: 2014-06-01\nmodified: 2019-06-09\nstatus: finished\nprevious: /slowing-moores-law\nnext: /forking-path\nconfidence: likely\nimportance: 10\ncssExtension: drop-caps-kanzlei\n...\n\n
\n> Computational complexity theory describes the steep increase in computing power required for many algorithms to solve larger problems; frequently, the increase is large enough to render problems a few times larger totally intractable. Many of these algorithms are used in AI-relevant contexts. It has been argued that this implies that AIs will fundamentally be limited in accomplishing real-world tasks better than humans because they will run into the same computational complexity limit as humans, and so the consequences of developing AI will be small, as it is impossible for there to be any large fast global changes due to human or superhuman-level AIs. I examine the assumptions of this argument and find that it neglects the many conditions under which computational complexity theorems are valid and so the argument doesn't work: problems can be solved more efficiently than complexity classes would imply, large differences in problem solubility between humans and AIs are possible, greater resource consumption is possible, small differences on individual tasks can have large real-world consequences for agents, such consequences can compound, and many agents can be created; any of these independent objections being true destroys the argument.\n
\n\n[Computational complexity theory](!W) attempts to describe the resource usage of algorithms from the abstract vantage point of considering how running time on some idealized computer relatively increases for a specific algorithm as the inputs scale in size towards infinity.\nFor many important algorithms used in AI and programming in general, the difficulty turns out to increase steeply with extra data---comparison-based sorting algorithms like [Merge sort](!W) take only [Big O](!W \"Big O notation\") 𝒪(_n_ · log(_n_)) and so you can sort just about any amount of data in a feasible time, but more interesting problems like the [Traveling Salesman problem](!W)/[3-SAT](!W \"3-SAT\") become [NP-hard](!W) (or exponentially or worse) as the data increases and quickly go from fast to feasible to impossible.\n\n# Complexity implies Singularities are impossible\n\nOne argument against powerful artificial intelligences, and scenarios corresponding to [Singularities](!W \"Technological singularity\") in general, draws from [computational complexity theory](!W).\n\nFor example, in [\"The Singularity Is Further Than It Appears\"](https://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html), [Ramez Naam](!W) makes a number of objections ranging from the possibility that human neurons are more powerful than generally believed and that corporations have not created a Singularity yet so they never will (some of which are [criticized by William Hertling](http://www.williamhertling.com/2014/02/the-singularity-is-still-closer-than-it-appears/ \"The Singularity is Still Closer than it Appears\")), but he starts with a computational complexity argument using [protein folding](!W) (cf. [AlphaFold 2](https://www.nature.com/articles/s41586-021-03819-2#deepmind)) as an example:\n\n> Are we headed for a Singularity? Is it imminent?...But regardless of which definition you use, there are good reasons to think that it's not on the immediate horizon...This is the so-called 'hard takeoff' scenario, also called the FOOM model by some in the singularity world. It's the scenario where in a blink of an AI, a 'godlike' intelligence bootstraps into being, either by upgrading itself or by being created by successive generations of ancestor AIs. It's also, with due respect to Vernor Vinge, of whom I'm a great fan, almost certainly wrong. It's wrong because most real-world problems don't scale linearly. In the real world, the interesting problems are much much harder than that.\n>\n> [Graph of exponential scaling time in chemical modeling](/doc/ai/2014-02-rameznaam-thesingularityisfurtherthanitappears-chemicalmodelingexponential.png){.invert}\n>\n> ...[Computational chemistry](!W) started in the 1950s. Today we have literally trillions of times more computing power available per dollar than was available at that time. But it's still hard. Why? Because the problem is incredibly non-linear...How fast? The very fastest (and also, sadly, the most limited and least accurate) scale at N^2^, which is still far worse than linear. By analogy, if designing intelligence is an N^2^ problem, an AI that is 2× as intelligent as the entire [human] team that built it (not just a single human) would be able to design a new AI that is only 70% as intelligent as itself. 
That's not escape velocity.\n\nA followup post by Naam, [\"Why AIs Won't Ascend in the Blink of an Eye---Some Math\"](https://www.antipope.org/charlie/blog-static/2014/02/why-ais-wont-ascend-in-blink-of-an-eye.html), describes it more directly:\n\n> In my previous post on why the Singularity is Further Than it Appears, I argued that creating more advanced minds is very likely a problem of non-linear complexity. That is to say, creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1. The difficulty might go up exponentially. Or it might go up 'merely' with the cube or the square of the intelligence level you're trying to reach. Blog reader Paul Baumbart took it upon himself to graph out how the intelligence of our AI changes over time, depending on the computational complexity of increasing intelligence.\n>\n> ![**Intelligence growth under Various Difficulty Assumptions**: where \"intelligence is defined as \"ability to do A.I. R&D\" and an entity of intelligence=1 is capable of creating an entity of intelligence=2, eg. _x_^2^ means it is 4 times as hard to develop 2 units of intelligence as it is to develop 1 unit of intelligence because 2^2^⁄1^2^ = 4.\"](/doc/ai/2014-02-whyaiswontascend-figure1-intelligencegrowthunderdifficulty.png){.invert}\n>\n> ...Every other model Paul put into his spreadsheet showed convergence instead of divergence. Almost any non-linear difficulty in boosting intelligence means that no runaway occurs. (Note that these *do not* include the benefit of getting new hardware over time and general speedup from Moore's Law, for so long as that continues. But they do include the benefit of designing new hardware for itself or any speedup that it can cause to Moore's Law.) The bottom line, in green, is exponential difficulty (_e_^_x_^). Many real-world problems are exponentially difficult as they grow in size. The 'traveling salesman' problem is an exponential problem (at least to find an exact solution). Modeling quantum mechanical systems is an exponential problem. Even some important scenarios of protein folding are exponentially difficult. So it's not at all unlikely that boosting intelligence would fall into this category. And as you can see, if intelligence is exponentially difficult, the super-intelligence never ascends.\n\nOr to put it perhaps more clearly, for a fixed amount of computation, at each greater level of intelligence, a smaller increase in intelligence can be realized with that amount of computation.\n\nA somewhat similar argument is gestured at by [Nathan Myhrvold](https://blogs.scientificamerican.com/observations/what-the-history-of-math-can-teach-us-about-the-future-of-ai/ \"What the History of Math Can Teach Us about the Future of AI: Doomsayers say it will put us all out of work, but experience suggests otherwise\"):\n\n> Theorists have proved that some mathematical problems are actually so complicated that they will always be challenging or even impossible for computers to solve. So at least for now, [human] people who can push forward the boundary of computationally hard problems need never fear for lack of work. This tells us something important about AI. Like mathematics, intelligence is not just one simple kind of problem, such as pattern recognition. It's a huge constellation of tasks of widely differing complexity. So far, the most impressive demonstrations of \"intelligent\" performance by AI have been programs that play games like chess or Go at superhuman levels. 
These are tasks that are so difficult for human brains that even the most talented people need years of practice to master them. Meanwhile, many of the tasks that seem most basic to us humans---like running over rough terrain or interpreting body language---are all but impossible for the machines of today and the foreseeable future. As AI gets more capable, the sphere of jobs that computers can do faster or more accurately than people will expand. But an expanding universe of work will remain for humans, well outside the reach of automation.\n\nAwkwardly, this argument contains its own refutation: chess/Go *are* computationally difficult in precisely the way Myhrvold claims will put mathematical problems out of reach of computers, and yet, have already fallen.^[[Go](!W \"Go and mathematics\"), for example, has tremendous game tree complexity. (Even deciding the winner of a Go board is surprisingly difficult---[PSPACE](!W), so possibly worse than NP.) Nevertheless, AIs greatly surpass human abilities at them and as of 2018, even 'centaur' teams no longer add anything to the AI performance. Since this is the case already, Myhrvold's argument that math's complexity makes it immune to AI is undermined by his own examples.]\n\nThis can be seen as one of the \"structural objections\" where the [diminishing returns](!W) are specifically attributed to increments in computing power solving fewer data points as data sizes scale (to use [Chalmers 2010's](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.228.3745&rep=rep1&type=pdf \"The Singularity: A Philosophical Analysis\") taxonomy).\nSo the argument (filling in the gaps and omitting the various graphs showing hypothetical scalings) goes something like this:\n\n#. most tasks an intelligent agent (human or artificial intelligence) needs to solve are in difficult complexity classes, such as NP or NP-hard: Traveling Salesman, 3-SAT, [Bayesian network](!W) belief propagation, [deep neural network](!W) training, [theorem proving](!W), playing [Go](!W \"Go (game)\"), solving [POMDPs](!W \"Partially observable Markov decision process\")...\n#. a task in NP or higher complexity class can only be solved for small problem sizes\n#. if a task can only be solved for small problem sizes, then the best agent will solve only slightly larger problem sizes\n#. the real-world reward to an agent from solving a slightly larger problem is only slightly greater\n#. the long-term consequence of slightly greater rewards is itself small\n#. if an AI becomes the best agent, then it must solve problems in difficult complexity classes (1), so it will be able to solve only slightly larger problems (2--3), receiving slightly greater rewards (4), with only small long-term consequences (5)\n#. if each AI has only small long-term consequences, all AIs together will have a small total long-term consequence\n#. thus, AIs becoming intelligent agents will have only small total long-term consequences\n\nThis argument is valid as far as it goes and can probably be formalized. 
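The quantitative core of premises 2--5, and of the Naam/Baumbart graphs quoted earlier, can be sketched in a few lines (a toy recurrence under my own simplifying assumptions, not their actual spreadsheet): design effort is taken to be proportional to the designer's intelligence, and designing an intelligence of level _N_ is assumed to cost _N_^_p_^ units of effort.

```python
# Toy recurrence for the 'growth under difficulty' models quoted above (my own
# simplification for illustration, not Naam's or Baumbart's actual spreadsheet):
# design effort is proportional to the designer's intelligence, and designing an
# intelligence of level N is assumed to cost N**p effort, so the best successor
# one generation can afford has intelligence effort**(1/p).
import math

# Naam's worked example: under p = 2, an intelligence-2 AI can afford a successor
# of intelligence 2**(1/2) ~= 1.41, i.e. ~70% of itself.
print(2 ** (1 / 2) / 2)                    # 0.7071...

def plateau(successor, start=2.0, steps=30):
    """Iterate self-improvement, keeping the best design found so far."""
    best = start
    for _ in range(steps):
        best = max(best, successor(best))
    return best

print(plateau(lambda i: i ** (1 / 0.9)))   # sub-linear difficulty: explosive growth
print(plateau(lambda i: i ** (1 / 2)))     # quadratic difficulty: stuck at 2.0
print(plateau(lambda i: math.log(i)))      # exponential difficulty: stuck at 2.0
```

Under these toy assumptions, any super-linear difficulty leaves the model stuck near its starting level, which is exactly the behavior the argument needs.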
But is the argument sound?\n\n## Complexity caveats\n\nOne difficulty with applying computational complexity theory outside its usual area is that people tend to neglect the requirements of complexity theory which gives it its generality: that it omits the 'constant factors' and the actual runtime, that many of the statements are lower/upper bounds or statements about worst-case complexity, that the statements tend to be about specific algorithms---which are rarely the only way to solve a real-world problem---and that it doesn't try to say anything about utility or consequences.^[All computational complexity tutorials are divided into two parts: in the first, they explain why complexity is important; and in the second, why it is not important.]\nSimilarly, [people sometimes reason](https://scottaaronson.blog/?p=346 \"The Singularity Is Far\") that since a human and AI would be in the same computability class (Turing-completeness), that anything an AI could do or think, a human must also be able to do or think, but they neglect that humans do not have unbounded time or space like the idealized Turing machine and there is no more reason to expect understanding to be possible than to expect an ant to understand everything a human does before it dies of old age; an ant with galaxies of notebooks and billions of years could perhaps understand human civilization, but no such ant has ever existed nor ever will, and the understanding of that ant of human action will ultimately be [in its notebooks rather than itself](!W \"Chinese room\") (and how was it set up to make good use of those notebooks, anyway?).\nThe question of whether such tasks are feasible for a \"compact, efficient computer program\", not just computable, may take on both metaphysical and practical importance (to paraphrase [Scott Aaronson](https://arxiv.org/abs/1108.1791 \"'Why Philosophers Should Care About Computational Complexity', Aaronson 2011\")).\n\nLaid out bare, I would have to say that the argument depends critically on each of the premises being true, but every premise 1--5 is either questionable or false.\n\n### Are all problems worst-case and NP-hard?\n\nPremise 1 is incorrect because the proofs of those complexities generally depend on general solutions with deterministic exactly-optimal worst-case behavior.\nThe apparent barrier of a complex problem can be bypassed by (in roughly increasing order of practical importance):\n\n#. **optimizing complexity class**: existing proofs could be incorrect, inapplicable (such as assuming classical rather than quantum computers), or based on open conjectures widely believed by humans one way but which could still resolve in the more advantageous direction (eg. [P=NP](!W))\n#. giving up determinism and using **[randomized algorithms](!W)** which are faster but [may not return an answer](!W \"Las Vegas algorithm\") or a [correct answer](!W \"Monte Carlo algorithm\")[^Lipton-shifts] (they typically can be run many times if correctness is important; after a few times, the probability of an error will be smaller than the probability that the computer hardware has suffered [a glitch due to cosmic rays](/math-error \"'The Existential Risk of Math Errors', Branwen 2012\")); randomization can be applied to algorithms and to [data structures](!W \"Category:Probabilistic data structures\").\n#. 
needing **good [average-case](/doc/cs/cryptography/1995-impagliazzo.pdf \"'A Personal View of Average-Case Complexity', Impagliazzo 1995\") behavior** rather than worst-case behavior\n\n Rather than merge sort, one could use [Quicksort](!W)---merge sort has better worst-case complexity than Quicksort (which renders it vulnerable to DoS attacks if there are adversaries who can force the worst-case 𝒪(_n_^2^)), but Quicksort is still usually faster. Likewise, in the *worst case*, 3-SAT/Traveling Salesman are wildly intractable for any realistic dataset like planning a trip across the USA; but the *average-case* performance is quite different and in practice, 3-SAT/Traveling Salesman are solved all the time, to the extent where SAT solvers are routinely used in computer security or theorem proving or type-checking and logistics companies are able to heavily optimize their shipping with them.\n\n Similarly for linear programming's [simplex algorithm](!W) and other operations research algorithms, which are theoretically intimidating but in real-world problems yield solutions after reasonable runtime---they work in practice, but not in theory.\n For example, TSP instances up to [85,900 cities](https://www.math.uwaterloo.ca/tsp/pla85900/index.html) have been solved.\n#. giving up generality and **specializing**: an algorithm may be unnecessarily general.\n\n A comparison sort can be done in 𝒪(_n_ · log(_n_)), yes, but one frequently doesn't need to sort any kind of datatype for which an ordering is available but specifically strings, numbers, etc, for which quasi linear/𝒪(_n_) sorting is possible using [counting sort](!W)/[radix sort](!W). An algorithm could also have prior information about the kind of data input which will be available---[Timsort](!W) is aware that most inputs will be partially sorted already and can outperform a naive sort. Data structures can be tuned for particular distributions of data, and [JIT](!W \"Just-in-time compilation\") & [profile-guided optimization](!W) & [supercompilation](!W) can be seen as [specializing](!W \"Futamura projection\") general algorithms to the current or likely-future inputs.\n#. giving up optimality and **computing an approximation** of the optimal answer (often very close to the optimal answer)\n\n Already mentioned is 3-SAT/TSP, for which there is a [World Tour of 1,904,711-cities](https://www.math.uwaterloo.ca/tsp/world/ \"World TSP\") which has been solved with a tour within 0.0474% of the optimal tour by 2007, and planning problems soluble by translation into SAT form can have millions of clauses & variables[^SAT-planning], enabling such silly applications as [drawing portraits](https://www.math.uwaterloo.ca/tsp/data/art/ \"TSP Art Instances\") & [art](https://www2.oberlin.edu/math/faculty/bosch/tspart-page.html \"'TSP Art', Robert Bosch\") or [unscrambling images](https://github.com/robinhouston/image-unshredding \"Image unshredding using a TSP solver\") using the TSP; Naam gives chemistry as an example by noting that while the exact physics is totally intractable, approximations which are much more feasible are used. The last fraction of a percentage point of optimality can take truly enormous amounts of computation to squeeze out. (A toy greedy-vs-exact TSP sketch appears at the end of this section.)\n#. **changing the problem** rather than succumbing to [functional fixity](/forking-path \"'Technology Forecasting: The Garden of Forking Paths', Branwen 2014\"): many problems can be redefined or environments can be tweaked to bypass a challenge & leverage computer strengths.\n\n A [self-driving car](!W) may not be as good at vision as a human driver, but [LIDAR](!W) sensors can be incorporated into it in a way they cannot be for humans, as it would distract them; a [robot in a warehouse](!W \"Amazon Robotics\") may not be as good at driving around as a human worker, but the environment can be altered with white lines or barcode tags on the floor so the robots always know where they are. To quote [Dyson's](!W \"Freeman Dyson\") paraphrase of [John von Neumann](!W)^[_[Infinite in All Directions](!W)_, 1988; Dyson is summarizing/paraphrasing a weather prediction lecture by von Neumann ~1950. It's unclear if von Neumann said this exact thing, although it is usually attributed to him.]:\n\n > All processes that are stable we shall predict. All processes that are unstable we shall control.\n\n Or to quote [Claude Shannon](/doc/cs/algorithm/1959-shannon.pdf \"'Coding Theorems for a Discrete Source With a Fidelity Criterion', Shannon 1959\"):\n\n > ...This duality [between reducing data channel & data source noise] can be pursued further and is related to a duality between past and future and the notions of control and knowledge. Thus we may have knowledge of the past but cannot control it; we may control the future but have no knowledge of it.\n\n
\n#. **solving different non-human problems** which humans cannot or will not solve.\n\n As [Hamming](https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Mathematics_in_the_Natural_Sciences#Richard_Hamming) says, \"There are wavelengths that people cannot see, there are sounds that people cannot hear, and maybe computers have thoughts that people cannot think.\" There are problems humans could never solve because it would require too much training, or too much memory, or too bizarre solutions. A human would never come up with many solutions that [genetic algorithms](!W) or neural networks do, and they can be used on scales that humans never would; an unimportant but interesting example would be [\"PlaNet---Photo Geolocation with Convolutional Neural Networks\"](https://arxiv.org/abs/1602.05314#deepmind \"Weyand et al 2016\")---I can't imagine any human beating such a CNN, or even trying. In such cases, scaling concerns are totally beside the point.\n\n[^Lipton-shifts]: [\"Shifts In Algorithm Design\"](https://rjlipton.wordpress.com/2014/07/21/shifts-in-algorithm-design/), Lipton/Regan:\n\n > Now today, in the 21^st^ century, we have a better way to attack problems. We change the problem, often to one that is more tractable and useful. In many situations solving the exact problem is not really what a practitioner needs. If computing _X_ exactly requires too much time, then it is useless to compute it. A perfect example is the weather: computing tomorrow's weather in a week's time is clearly not very useful. The brilliance of the current approach is that we can change the problem. There are at least two major ways to do this:\n >\n > - Change the answer required. Allow approximation, or allow a partial answer. Do not insist on an exact answer.\n > - Change the algorithmic method. Allow algorithms that can be wrong, or allow algorithms that use randomness. Do not insist that the algorithm is a perfect deterministic one.\n >\n > This is exactly what Chayes and her co-authors have done.\n[^SAT-planning]: Rintanen 2012, [\"Planning as Satisfiability: Heuristics\"](/doc/ai/2012-rintanen.pdf \"'Planning as satisfiability: Heuristics', Rintanen 2012\"), discussing how to turn AI planning problems into SAT problems which can be solved efficiently, notes that\n\n > A peculiarity of SAT problems obtained by translation from the standard planning benchmark problems from the planning competitions, in contrast to SAT problems representing many other applications, is their extremely large size and the fact that these problems can still often be solved quickly. The largest SAT problems Lingeling solves (within the time bounds explained earlier) are instance 41 of AIRPORT (417476 propositional variables, 92.9 million clauses) and instance 26 of TRUCKS (926857 propositional variables, 11.3 million clauses).\n >\n > Our planner solves instance 49 of AIRPORT (13840 actions and 14770 state variables) with a completed unsatisfiability test for horizon 65, with 1.12 million propositional variables and 108.23 million clauses, and a plan for horizon 85, with 1.46 million propositional variables and 141.54 million clauses. The planner also solves instance 33 of SATELLITE (989250 actions and 5185 state variables), with a plan found for horizon 20, with 19.89 million propositional variables and 69.99 million clauses, backtrack-free in 14.50 seconds excluding translation into SAT and including search effort for shorter horizons. These are extreme cases. 
More typical SAT instances have less than 2 million propositional variables and a couple of million clauses.\n\nSeveral of these categories might ring familiar to those interested in computer security, because computer security suffers from similar issues in the attempt to close the gap between the theoretical guarantees about the security of particular cryptography algorithms and what security one gets in practice.\n\nIn particular, the [Edward Snowden](!W) [NSA leaks](!W \"Global surveillance disclosures (2013-present)\") have demonstrated the remarkable breadth of ways in which the NSA goes about breaking computer security without needing access to theoretical breakthroughs or exotic quantum computers (and indeed, the NSA is more than a little contemptuous of the academic computer security/cryptography communities for their misguided focus on theory at the [expense of implementation](https://blog.thinkst.com/p/if-nsa-has-been-hacking-everything-how.html?m=1 \"If the NSA has been hacking everything, how has nobody seen them coming?\")): computers can be intercepted in the mail and hardware bugs implanted; computers can be monitored remotely using various radio and phreaking devices; airgapped networks can be jumped by malware hitch-hiking on USB drives or buried ineradicably inside BIOSes of devices like hard drives which have their own processors; data which is not at rest can be stolen from otherwise-secure data centers by tapping private fiber optic links (eg. Google); more public fiber optic cables such as undersea cables are hacked using ISP assistance and submarine operations, in some cases entire days of raw traffic being retained for analysis; encrypted data can be retained forever for future decryption (such as by the NSA's active quantum computing R&D effort); Internet-wide attacks can be mounted by factoring certain very commonly used numbers using NSA's large computational resources and likely specialized ASICs (the [amortized cost of factoring *many* keys simultaneously](https://blog.cr.yp.to/20151120-batchattacks.html \"2015.11.20: Break a dozen secret keys, get a million more for free\") is different and much smaller than the usually calculated cost of cracking a *single* key); private keys can be stolen by using subpoenas or national security letters or hacking in or even physical breakins; data can be traded with the intelligence agencies of other countries or their own hacking operations hacked by the NSA (and [vice versa](!W \"The Shadow Brokers\")); backdoors can be introduced into otherwise-secure software (Dual\_EC); commonly used software can be extensively audited, with bugs discovered and exploited long before they are publicly known ([Heartbleed](!W)); Internet connections can be hijacked and diverted to NSA servers to serve up malware.\nThis gives an idea of the difficulties faced when trying to be secure: where does one trustably get one's computer and the software on it? How many 0-day vulnerabilities are there in the operating system and all the cryptographic software? The encryption algorithms may be insecure, or implemented insecurely, or exist decrypted somewhere, or be run on subverted hardware, or the contents inferrable from metadata & other activity.\n\nHence, the exact difficulty of integer factoring or the existence of one-way functions is often among the least of the factors determining the security of a system.
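To make the approximation point concrete before moving on, here is a toy sketch (an illustration of the general technique only, not of any production solver): exact search is factorial and hopeless beyond a handful of cities, while a few lines of greedy heuristic scale easily and, on instances like this one, typically come within a few dozen percent of the optimum.

```python
# Exact vs approximate TSP on random points: brute force is O(n!) and collapses
# past roughly a dozen cities, while a greedy nearest-neighbor tour is O(n^2) and
# typically lands within a few dozen percent of the optimum.
import itertools
import math
import random

random.seed(0)
n = 9                                       # small enough to brute-force: 8! = 40,320 tours
pts = [(random.random(), random.random()) for _ in range(n)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    return sum(dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact: try every permutation, holding city 0 fixed as the start.
best = min(itertools.permutations(range(1, n)), key=lambda p: tour_length((0,) + p))
optimal = tour_length((0,) + best)

# Greedy nearest-neighbor: always hop to the closest unvisited city.
unvisited, tour = set(range(1, n)), [0]
while unvisited:
    nxt = min(unvisited, key=lambda j: dist(pts[tour[-1]], pts[j]))
    unvisited.remove(nxt)
    tour.append(nxt)
greedy = tour_length(tour)

print(f"optimal {optimal:.3f} vs greedy {greedy:.3f} "
      f"({100 * (greedy / optimal - 1):.1f}% longer)")
```

### Are all implementations equally fast?\n\n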
\n> For every polynomial-time algorithm you have, there is an exponential algorithm that I would rather run.\n>\n> [Alan Perlis](!W)^[Attributed to him [in 2009](https://rjlipton.wordpress.com/2009/02/13/polynomial-vs-exponential-time/ \"Fast Exponential Algorithms: An exponential algorithm for knapsack\") by his colleague [Richard Lipton](!W).]\n
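Perlis's quip is easy to check numerically (a throwaway illustration, not a claim about any particular pair of real algorithms): counting steps alone, an 𝒪(2^_n_^) algorithm does fewer of them than an 𝒪(_n_^10^) polynomial one at every input size below the crossover.

```python
# Where does O(2^n) overtake O(n^10)? (Pure step-counting; constant factors ignored.)
crossover = next(n for n in range(2, 200) if 2 ** n > n ** 10)
print(crossover)   # 59: for every n below this, the 'exponential' algorithm does fewer steps
```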
\n\nPremise 3 ignores that complexity classes by design try to abstract away from the 'constant factors' which is the computation time determined not by input size but by the details of computer architectures, implementations, and available computing hardware.\nAIs and humans can be equally bound by asymptotic complexity, but still differ on performance.\n[Scott Aaronson](https://www.scottaaronson.com/papers/pnp.pdf \"'P≟NP', Aaronson 2016\"):\n\n> ...while P≟NP has tremendous relevance to artificial intelligence, it says nothing about the *differences*, or lack thereof, between humans and machines. Indeed, P ≠ NP would represent a limitation on *all* classical digital computation, one that might plausibly apply to human brains just as well as to electronic computers. Nor does P ≠ NP rule out the possibility of robots taking over the world. To defeat humanity, presumably the robots wouldn't need to solve arbitrary NP problems in polynomial time: they'd merely need to be smarter than *us*, and to have imperfect heuristics better than the imperfect heuristics that *we* picked up from a billion years of evolution! Conversely, while a proof of P=NP might hasten a robot uprising, it wouldn't guarantee one.\n\nBut with carefully optimized code, [proper](!W \"Cache-oblivious algorithm\") use of the [cache hierarchy](!W \"Memory hierarchy\"), and specialized hardware (eg. GPUs, ASICs), it is possible to see [performance gains of multiple orders of magnitude](/aria#faster), which implies that one can increase the input size several times before hitting the scaling way that another agent might who paid less attention to constant factors.\n(Computational chemistry may be intractable, even with approximations, on classical hardware---but what about if one has a [quantum computer](!W) with a few hundred qubits, enough that one can do [quantum simulation](!W)?) The importance of constant factors is one of the major traps in practical use of complexity classes: a fancy algorithm with a superior complexity class may easily be defeated by a simpler algorithm with worse complexity but faster implementation.[^galactic-algorithms] (One reason that programmers are exhorted to benchmark, benchmark, benchmark!)\n\nThis doesn't disprove the complexity class, which is about asymptotic scaling and will still kick in at some point, but if it's possible to double or dectuple or more the input, this is enough of an increase that it's hard to dismiss the difference between best and worst agents' problem sizes as being only 'slight'.\n\nFinally, increased resource use / brute force is always an option for a powerful agent. 
Particularly in his second post, Naam's argument assumes fixed resources.\nThis might be relevant to a few scenarios like an AI permanently confined to a single computer and unable to access more resources---but then, how intelligent could such an AI possibly be if it can't get out?\nHowever, thanks to its intelligence, humanity now controls a large fraction of the biosphere's energy and with a supercomputer, or tech giants like Google or Amazon who control millions of processor-cores, can compute things totally out of reach of other agents; no limits to the amount of computation that can be done on (or off) Earth have yet been reached.\nIncreases in computing resources of thousands or millions of times, along with larger timescales, can overcome the asymptotic to achieve the next intelligence increase; if a human-level AI can 'only' accomplish a few dozen doublings, well...\n\n[^galactic-algorithms]: Some examples of this folk wisdom: Cantor & Zassenhaus 1981:\n\n > The asymptotically best algorithms frequently turn out to be worst on all problems for which they are used.\n\n [\"Notes on Programming on C\"](http://doc.cat-v.org/bell_labs/pikestyle), Rob Pike:\n\n > Rule 3. Fancy algorithms are slow when _n_ is small, and _n_ is usually small. Fancy algorithms have big constants. Until you know that _n_ is frequently going to be big, don't get fancy. (Even if _n_ does get big, use Rule 2 first.)\n\n [Knuth](https://www.informit.com/articles/article.aspx?p=2213858):\n\n > In general I'm looking for more focus on algorithms that work fast with respect to problems whose size, _n_, is feasible. Most of today's literature is devoted to algorithms that are asymptotically great, but they are helpful only when n exceeds the size of the universe...Another issue, when we come down to earth, is the efficiency of algorithms on real computers. As part of the Stanford GraphBase project I implemented four algorithms to compute minimum spanning trees of graphs, one of which was the very pretty method that you developed with Cheriton and Karp. Although I was expecting your method to be the winner, because it examines much of the data only half as often as the others, it actually came out two to three times worse than Kruskal's venerable method. Part of the reason was poor cache interaction, but the main cause was a large constant factor hidden by O notation.\n\n More specifically: [\"Knuth did a comparison between Fibonacci heap and binary heaps for minimum spanning trees back in 1993 for his book _Stanford GraphBase_. He found Fibonacci to be 30 to 60% slower than binary heaps at the graph sizes he was testing, 128 vertices at different densities.\"](https://stackoverflow.com/questions/504823/has-anyone-actually-implemented-a-fibonacci-heap-efficiently) ([Knuth 1973](/doc/math/1973-knuth.pdf \"The Dangers of Computer–Science Theory\") provide additional examples from early CS where a focus on asymptotically optimal, hypothetical hardware, or provable bounds, leads to much worse empirical performance.)\n\n On the [Coppersmith-Winograd algorithm](!W):\n\n > The Coppersmith-Winograd algorithm is frequently used as a building block in other algorithms to prove theoretical time bounds. However, unlike the Strassen algorithm, it is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware.[6]\n\n Some algorithms are particularly infamous for their excellent asymptotics but abysmal constant factors, such as the computable versions of AIXI. 
Lipton dubs such algorithms \"[galactic algorithms](https://rjlipton.wordpress.com/2010/10/23/galactic-algorithms/)\".\n\n### Are all returns linear?\n\nPremise 4 is where the argument starts trying to tie statements about complexity to real-world consequences.\nNaam argues\n\n> By analogy, if designing intelligence is an N^2^ problem, an AI that is 2× as intelligent as the entire team that built it (not just a single human) would be able to design a new AI that is only 70% as intelligent as itself. That's not escape velocity.\n\nBut this doesn't make any sense.\nFirst, Naam's requirement for a Singularity is a straw man: 'escape velocity' is not a concept anyone has required to be true of the Singularity; if nothing else, there are [physical limits to how much computation](!W \"Limits to computation\") can be done in the observable universe, so it's unlikely that there is such a thing as an 'infinite intelligence'.\nAt no point do Good or Vinge say that the Singularity is important only if the increase of intelligence can continue eternally without bound and Vinge is clear that the Singularity is a metaphor with no actual infinity[^Tipler]; intelligence increases are important because wherever the improvements terminate, they will terminate at an intelligence level above humanity, which will give it capabilities beyond humanity's.\n(Good, for example, in his cost projections, appears to have a diminishing returns model in mind when he speculates that if human-level intelligence can be created, then twice the cost would give a greater-than-human level intelligence, and his later emphasis on 'economy of meaning'; and Vinge says the Singularity is \"the point where our old models must be discarded and a new reality rules\", without making claims about indefinite intelligence increase, just that control of events will have \"intellectual runaway\" from humans---but a runaway train doesn't increase velocity exponentially until it attains the speed of light, it just escapes its operators' control.)\n\n[^Tipler]: The only author on the Singularity I know of who claims an actual indefinite increase in intelligence to infinity, taking 'singularity' quite literally and not as Vinge's metaphor/comparison, would be [Frank J. Tipler's](!W \"Frank J. 
Tipler\") [Omega Point](!W) ideas, but as far as I know, even assuming the correctness of his calculations, his infinite intelligence is physically possible only under a number of cosmological conditions, some of which do not seem to be true (such as a closed universe rather than a flat expanding one).\n\nSecond, an intelligence explosion scaling even superlinearly at, say, 𝒪(_n_^2^) would result in absolutely enormous practical differences, although I can't understand what model Naam has in mind exactly---designing intelligence can't literally work as he describes with the AI getting dumber because the original AI could simply copy itself to 'design' a new AI which is 100% as intelligent as itself at little computational cost, but it's unclear what sort of input/output variables are going into this scaling equation.\nNaam's endorsement of the spreadsheet/chart in the second post implies that he is thinking of a model in which the input is some unspecified unit of computation like 1 GPU-year, and the output is an additional 'unit' of intelligence, in which case it does make sense to observe that where the AI previously got a 100% increase in intelligence for spending that 1 GPU-year, now it only gets a <100% increase; in this scenario, it gets a smaller increase each computation unit and (with appropriate asymptotics) it may converge on some finite upper bound.\nBut you could just as easily express this relationship the other way around, and note that the number of computation units for each doubling of intelligence is increasing steeply.\nLooked at this way, there's no reason to expect convergence on a finite bound, or even the intelligence increase to slow down---because the fixed computation input assumption becomes glaring; the AI simply \"must construct additional pylons\", as it were.\n\nA little perspective from animal intelligence may be helpful here; as a simple model, animal intelligence seems closely related to [total number of neurons](!W \"List of animals by number of neurons\") moderated by [body size/sensory requirements](!W \"Encephalization quotient\").\nStarting at 0, we have the sponge; by 250,000 neurons, we have the fly (which can accomplish behaviors like flying around but little in the way of planning) and the ant (simpler locomotion but capable of simple planning and in conjunction with many other ants, surprisingly complex emergent behavior); at 1,000,000, we have the frustratingly tough cockroach, and at 16,000,000, the frog; by 71,000,000, the common house mouse, which can be taught tricks, solve complex planning tasks and mazes, and is respectably intelligent.\nClearly the scaling here is not linear---it's hard to argue that the mouse is 284 times smarter than a fly.\nThe scaling gets worse as we continue; the star-nosed mole has 131,000,000 but is it twice as intelligent as the house mouse? Only at the octopus with 500,000,000 does one recognize a real difference in intelligence, and thankfully the cat shows up at 760,000,000. 
But for a creature which has ~11× the neurons, the cat doesn't seem to be as good at catching mice as one might expect!\nFrom there, the neuron count gets worse and worse---capuchins need almost 4 billion neurons, macaques almost double that, and humans require a cool 86 billion neurons, 113× a cat (with elephants at 267 billion, but even though many of those neurons are used up by their enormous body size, they are still eerily, disturbingly, intelligent).\nPlotted on a graph by some formal or informal measurement of behavioral complexity, we have a super-linear asymptotic; animal psychologists are always discovering ways in which human behaviors have roots in animal antecedents, implying that humans are, on an absolute level, not *that* much smarter.\nSurely each neuron added along the way suffered from diminishing returns.\nWe already live in a world with diminishing returns to computational resources!\nYet, despite that asymptotic, it clearly has been possible for humans to defy this scaling and develop brains with almost a hundred billion neurons (and elephants triple that) and considerable room for further growth ([Hofman 2015](/doc/iq/high/2015-hofman.pdf "Evolution of the Human Brain: From Matter to Mind")), and this evolution has also led to enormous real-world consequences: humans do not just control the earth, they have remade it in their own image, driven countless species extinct or to the brink of extinction (as other primates can attest) as humans (and their world) change faster than most species are able to adapt, and done impossible things like going to the moon.\nAnd all this in a blink of an eye.\n\nAside from the issue that the complexity claims are probably false, this one is particularly questionable: small advantages on a task *do* translate to large real-world consequences, particularly in competitive settings.\nA horse or an athlete wins a race by a fraction of a second; a stock-market investing edge of 1% annually is worth a billionaire's fortune; a slight advantage in picking each move in a game like chess translates to almost certain victory (consider how [AlphaGo's](!W "AlphaGo") ranking changed with [small improvements in the CNN's ability to predict next moves](/doc/reinforcement-learning/model/alphago/2016-silver.pdf#deepmind "'Mastering the Game of Go with Deep Neural Networks and Tree Search', Silver et al 2016")); a logistics/shipping company which could shave the remaining 1--2% of inefficiency off its planning algorithms would have a major advantage over its rivals inasmuch as shipping is one of their major costs & the profit margin of such companies is itself only a few percentage points of revenue; or consider [network effects](!W) & winner-take-all markets.\n(Or think about safety in something like self-driving cars: even a small absolute difference in 'reaction times' between humans and machines could be enough to drive humans out of the market and perhaps ultimately even make them illegal.)\n\nTurning to human intelligence, the absolute range of human intelligence is very small: differences in reaction times are small, [backwards digit spans](!W) range from 3--7, brain imaging studies have difficulty spotting neurological differences, the absolute genetic influence on intelligence is [on net minimal](/embryo-selection#limits-to-iterated-selection), and this narrow range may be a general phenomenon about humans ([Wechsler 1935](/doc/iq/1935-wechsler-rangeofhumancapacities.pdf "The Range of Human Capacities")); and yet, in human society, [how critical](/iq 
\"'The IQ Halo effect', Branwen 2013\") are these tiny absolute differences in determining who will become rich or poor, who will become a criminal, who will do cutting-edge scientific research, who will get into the Ivy Leagues, who will be a successful politician, and this holds true as high as IQ can be measured reliably (see [SMPY/TIP etc](/smpy \"'SMPY Bibliography', Branwen 2018\")).\n(I think this narrowness of objective performance may help explain why some events surprise a lot of observers: when we look at entities below the human performance window, we just see it as an uniform 'bad' level of performance, we can't see any meaningful differences and can't see any trends, so our predictions tend to be hilariously optimistic or pessimistic based on our prior views; then, when they finally enter the human performance window, we can finally apply our existing expertise and become surprised and optimistic, and then the entities can with small objective increases in performance move out of the human window entirely and it becomes an activity humans are now uncompetitive at like chess (because even grandmasters are constantly making mistakes[^Cowen-chess-mistakes]) but may still contribute a bit on the margin in things like [advanced chess](!W), and within a few years, becomes truly superhuman.)\n\n[^Cowen-chess-mistakes]: _[Average is Over](!W)_, [Cowen](!W \"Tyler Cowen\") 2013:\n\n > Vasik Rajlich, the programmer behind [Rybka](!W), gives a more pessimistic spin to what we have learned from the chess-playing programs. In Rajlich's view, the striking fact about chess is how hard it is for humans to play it well. The output from the programs shows that we are making mistakes on a very large number of moves. Ken's measures show that even top grandmasters, except at the very peaks of their performances, are fortunate to match Rybka's recommendations 55% of the time. When I compare a grandmaster game to an ongoing Rybka evaluation, what I see is an initial position of value being squandered away by blunders---if only small ones---again and again and again. It's a bit depressing. Rajlich stresses that humans blunder constantly, that it is hard to be objective, hard to keep concentrating, and hard to calculate a large number of variations with exactness. He is not talking here about the club patzer but rather the top grandmasters: \"I am surprised how far they are from perfection.\" In earlier times these grandmasters had a kind of aura about them among the chess-viewing public, but in the days of the programs the top grandmasters now command less respect. When a world-class player plays a club expert, the world-class player looks brilliant and invincible at the board. Indeed, the world-class player does choose a lot of very good moves. At some point his superior position starts \"playing itself,\" to use an expression from the chess world, and just about everything falls into place. When the same world-class player takes on Shredder, to select an appropriately named program, he seems more like a hapless fool who must exert great care to keep the situation under control at all. And yet it is the very same player. 
That gap---between our perception of superior human intellect and its actual reality---is the sobering lesson of the programs.\n\n See also ["Assessing Human Error Against a Benchmark of Perfection"](https://arxiv.org/abs/1606.04956), Anderson et al 2016, and for comparison, [AlphaGo Zero](/doc/reinforcement-learning/model/alphago/2017-silver.pdf#page=3&org=deepmind "'Mastering the game of Go without human knowledge', Silver et al 2017"): the first AlphaGo was trained to predict human player moves, achieving ~55% accuracy; Zero plays Go far better but predicts noticeably worse (~51%) and its predictions get worse even as it gets better at playing or predicting the ultimate winner, implying that human experts also are able to choose the best move only 50% or less of the time. [Choi et al 2021](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3893835 "How Does AI Improve Human Decision-Making? Evidence from the AI-Powered Go Program") quantifies the loss by move: human pros fritter away ~1.2% probability of victory every move they make (so if they manage to make the best move half the time, then presumably the errors are worth −2.4%).\n\nThis also ignores the many potential advantages of AIs which have nothing to do with computational complexity; AlphaGo may confront the same [PSPACE scaling wall](!W "Go and mathematics#Computational complexity") that human Go players do, but as software it is immortal and can be continuously improved, among other advantages ([Sotala 2012](https://philpapers.org/archive/SOTAOA.pdf#miri "Advantages of artificial intelligences, uploads, and digital minds"), [Yudkowsky 2013](/doc/ai/scaling/2013-yudkowsky.pdf#miri "Intelligence Explosion Microeconomics")).\n\n### Are all scenarios one-shot?\n\nPremise 5 would seem to assume that there is no such thing as compound interest or exponential growth or that small advantages can accumulate to become crushing ones, which of course there is for companies, countries, and individuals alike.\n\nSomething similar has been noted about human intelligence---while any particular day-to-day decision has little to do with intelligence, the effects of intelligence are consistently beneficial and accumulate over a lifetime, so the random noise starts to cancel out, and intelligence is seen to have strong correlations with long-term outcomes (eg. [Gottfredson 1997](/doc/iq/ses/1997-gottfredson.pdf "Why _g_ Matters: The Complexity of Everyday Life")).\nMore abstractly, many career or intellectual outcomes have been noticed to follow a roughly [log-normal distribution](!W); a log-normal distribution can be generated when an outcome is [the end result of a 'leaky pipeline'](/note/pipeline "'Leaky Pipelines', Branwen 2014") (scientific output might be due to motivation times intelligence times creativity...), in which case a small improvement on each variable can yield a large improvement in the output.\nSuch a leaky pipeline might be simply a long sequence of actions, where advantage can build up (eg. if there is a small chance of making a blunder with each action).\nA chess example: [Magnus Carlsen](!W) may be the strongest human chess player in history, with a peak [ELO](!W "Elo rating system") rating of ~2882; as of 2016, the top-rated [chess engine](!W) is probably Komodo at ELO 3358. 
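A quick sketch of the arithmetic behind the comparison that follows (assuming the standard logistic Elo expectation, and simplifying by treating the expected score as an independent per-game win probability with no draws):

```python
from math import comb

def elo_expected(r_a, r_b):
    """Expected score of player A vs. player B under the standard logistic Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

# Komodo (~3358) vs. peak Carlsen (~2882); simplification: no draws,
# expected score treated as an independent per-game win probability.
p = elo_expected(3358, 2882)
match = sum(comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(3, 6))

print(f"per-game expected score: {p:.3f}")              # ~0.94
print(f"best-of-5 match win probability: {match:.3f}")  # ~0.998
```

The per-game edge looks modest next to the match-level edge: compounding a ~94% edge over even a short series pushes the weaker side's chances toward negligible.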
The ELO expected score formula implies that if Komodo 2016 played peak Carlsen, it would have an expected score of 1/(1 + 10^((2882−3358)/400)^) ≈ 0.94, so it would win ~94% of its games; this is impressive enough (it would lose only 1 time out of 20); however, in the standard 5-game match, it would win not 94%, but ~99.8% of the 5-game matches (losing only 1 time out of *500*).\nOne thinks of Amara’s law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."\n\n### Are AI agents rare?\n\nExpanding on the observation that AIs can have advantages which are unrelated to computational complexity or solving larger instances of problems, is it really the case that a Singularity can happen *only* if AIs are able to surpass humans on particular algorithmic tasks?\nThis is unlikely. For example, in a [whole-brain emulation](!W) scenario ([Sandberg & Bostrom 2008](/doc/ai/scaling/hardware/2008-sandberg-wholebrainemulationroadmap.pdf "Whole Brain Emulation: A Roadmap")), such uploaded minds would not necessarily be gifted with any new accomplishments with regard to complexity classes, but can we really swallow the argument's conclusion that this scenario simply *cannot* lead to any major changes worthy of the name Singularity?\nFar from it---it seems that such an achievement would radically change human society in a number of ways, ranging from redefining mortality to accelerating neuroscience to transformations of the economy, as such emulated brains can be copied indefinitely onto as much hardware as is available (consider [Robin Hanson's](!W "Robin Hanson") extrapolations in [_Age of Em_](https://ageofem.com/); a world-champion-level human Go player requires millions of children sampled and 15+ years to train, while an AlphaZero is simply copied over to another computer).\nIt would be surprising if one could run human-level intelligences (perhaps arbitrarily many of them) on a pocket smartphone, with millions or billions of them across the world, even outnumbering regular biological humans, and still have no 'Singularity' through sheer numbers.\n\n## Parable of the Worms\n\n![[Sauron](!W "Sauron (comics)") explains why he's unfriendly.](/doc/ai/2015-01-28-spidermanandthexmen-vol1-no2-sauron-cancerdinosaurs.jpg "Panel from the comic book _Spider-Man and the X-Men Vol_ volume 1 issue 2, March 2015: Spider-Man learns that the villain Sauron can manipulate DNA and could cure cancer, but is instead turning people into dinosaurs; Sauron replies that he doesn't *want* to cure cancer—he wants to turn people into dinosaurs, so that is what he is doing."){.float-right}\n\nOnce upon a time, two _[C. elegans](!W)_ worms were debating the prospect of "transwormism", and specifically the possibility of hypothetical creatures from elsewhere in the space of all possible organisms, which might exceed worms by as much as worms exceed the bacteria they thrive upon, and the implications for their pile of composting manure which contains their entire worm civilization.\n\nWas this possible? 
Could species really be ranked on some sort of 'intelligence' or 'power' metric?\nDid not every species have its own unique abilities, and its own unique niche rendering them intrinsically incomparable?\n\nWhere would the resources to support such entities come from, and how would they be able to do anything more than worms themselves could do (inasmuch as small creature are doubtless [Turing-complete](/turing-complete \"‘Surprisingly Turing-Complete’, Branwen 2012\"))?\n\nCrawlviati argues for transwormism:\n\n> There is no a priori reason to believe that worms must be the pinnacle of creation, and that there are no larger or more complex or more intelligent organisms possible.\n> We should be applying the Poopernican Principle here---'the manure piles in which we live are but a tiny segment of the universe in both space and time, of no privileged perspective', and so in the Great Chain of Eating, we should expect us to be in a mediocre position.\n>\n> Indeed, from the start of life, we can see many breakthroughs: multicellular life has produced endless forms most different from unicellular life, while fairly recently have neurons been invented & just increasing neuron count seems to yield considerable gains (look at us versus bacteria), so we live at an exciting time.\n> We can speculate about the possibilities: a transworm might be completely different from us worms; or it might be similar in architecture to us worms, perhaps with a much longer body with many more neurons and so much smarter than us.\n>\n> Regardless, a transworm would be difficult for us to predict, and may be able to develop very fast as it learns new ways of hunting bacteria and self-fertilization, in what we might call a 'Moundularity' in which it piles up resources and offspring faster than anyone else; inasmuch as a transworm may have very different priorities from us and change the environment to fit its needs, it would be dangerous to us.\n\nSlimeplicius disagrees:\n\n> Ridiculous! Think for a moment about your claims.\n> We are blessed with 302 neurons, with which we can react to stimuli, move forward, move backward, hunt bacteria, and solve challenging navigational puzzles of many worm-lengths.\n>\n> But these problems exhibit diminishing returns---wormputational complexity theory tells us that optimal maze navigation is exponentially difficult, for example, and many important problems are worse.\n> Transworms would immediately find their additional cognition to be of ever less marginal value as they run up against the wall of wormputational complexity.\n>\n> What could they possibly do with, say, 1000 neurons that would justify a metabolic cost *over 3 times greater*?\n> And to be truly worthy of the name transworm, they might need tens of thousands, or even millions of neurons!\n> (I won't even bother to debunk fanciful extrapolations to billions of neurons, which would be more than exist in possibly all the manure piles in our observable universe put together.)\n>\n> Consider the absurdity of such an architecture: could our manure pile support a single such transworm?\n> Where would the food come from? 
For that matter, how would its body support so many neurons?\n> And its genes could no longer specify cell placement one by one, but organization would have to somehow 'emerge' or be 'learned', and you are rather obscure about how this might happen.\n>\n> Not to mention the many enormously complicated practical engineering problems you seem to gloss over in your blind faith in progress and lines on graphs: for example, diffusion would no longer work to feed each cell, requiring novel mechanisms solely to move fluids around to avoid overheating or starving to death.\n>\n> If a transworm *could* exist, it would be exponentially difficult for it to eat bacteria and reproduce faster than regular worms, and its performance would likely converge with ours: it would solve our problems only slightly better than us, at tremendously increased cost. Consider Turing-completeness: anything a transworm could compute, us worms could also compute, is it not so?\n> (We could even make an evolutionary argument: we have evolved to be as smart as is optimal in our niche---and no more or less.)\n>\n> Certainly, any 'Moundularity' would be so slow that us worms would smell it coming long in advance and wriggle together in a big ball to crush it.\n\nCrawlviati:\n\n> Your argument seems overly narrow to me.\n> Yes, I agree that it would be difficult to support so many neurons packed together in one worm, but I'm sure the engineering difficulties can be overcome---there seems to be no fundamental limit to wormputation much greater than 302 neurons, so there must be a way.\n> And your food objection is likewise soluble: perhaps transworms can migrate from compost pile to garden to compost pile regularly as they exhaust resources in each one, or even figure out some way to easily knock down low-hanging fruit & let them rot.\n>\n> They may not, bacterium for bacterium or cell for cell, be as efficient as us, but that doesn't matter as long as the diminishing returns don't turn into *negative* returns.\n> As long as the returns are positive, they will be able to pay for their increased resource utilization and continue climbing up the exponential curves.\n>\n> And what does 'better' even mean here? The wormputational complexity of a maze may increase sharply with maze size, but that's a statement about mazes, not about comparing maze-solvers, which might be arbitrarily better or worse than each other, so there's a problem: maybe they could solve mazes 100× faster.\n>\n> Then there's figuring out what any bit of performance means: if a transworm could solve mazes twice as fast as you or I, maybe it'll get *all* the rotting food when it beats us in a race to the end, and not less than twice as much.\n>\n> Heck, we're *worms*---what do we know about the world? Maybe there's more to life, the mound and everything; perhaps there are all sorts of cool things we could do, besides 'stimulus, response; stimulus, response; stimulus, response'---if we could just *think* for once in our lives!\n\nSlimeplicius:\n\n> These claims seem to rest entirely on what I might call an appeal to ignorance: *maybe* mazes can be run faster than we can, *maybe* there are great things which could be done with more neurons, *maybe* there's lots of food we can't obtain but could with more intelligence...\n>\n> Sure, maybe I can't prove that there aren't, but is any of this what a reasonable worm, the ordinary worm in the dirt, would believe? Certainly not. 
We are the pinnacle of civilization, and can hardly be expected to believe in the possibility of 'transworms' without even a live example of a transworm to point to. Create a transworm and perhaps *then* I will take your wild speculations more seriously.\n\nCrawlviati, plaintive:\n\n> If you'll just think a little more about the possibilities...\n\nSlimeplicius, dismissively:\n\n> There are better things to worry about: like the general pile warming. What if our wastes and their decay make our entire mound too hot for us? We should discuss that instead.\n\nSo they did.\nA week later, the farm was sold to a real estate developer to build townhouses on.\nThe mound was flattened by a steam roller, and then paved over with asphalt---the construction workers neither loved nor hated the worms, but the worms had nothing to offer in trade, and were made of atoms useful for widening the new road to make some humans' commute ever so slightly faster (all part of a construction process with exponentially diminishing returns).\n\n## Conclusion\n\nComputational complexity classes offer little guidance about the capabilities of humans, AIs, or other agents as they are too universal and generalized and do not tightly bind outcomes; at most, they demonstrate that neither humans nor AIs are omnipotent.\nIf one wants to put limits on the ability of an AI by way of computational resources, a much more detailed analysis must be done linking data/sample efficiency or algorithmic performance to capability improvement to performance on an ensemble of tasks & access to additional resources, with the consequent economic, military, or social outcomes.\n\n# See Also\n\n
\n- [Why Tool AIs Want to Be Agent AIs](/tool-ai){.backlink-not}\n- [GPT-3 implications, the scaling hypothesis, & the blessings of scale](/scaling-hypothesis \"'The Scaling Hypothesis', Branwen 2020\"){.backlink-not}\n- [One Man's Modus Ponens is Another Man's Modus Tollens](/modus \"'One Man’s Modus Ponens', Branwen 2012\"){.backlink-not}\n- [Embryo selection](/embryo-selection \"'Embryo Selection For Intelligence', Branwen 2016\"){.backlink-not}\n- [Algernon's law](/drug-heuristic \"'The Algernon Argument', Branwen 2010\"){.backlink-not}\n- [Simulation inferences](/simulation-inference){.backlink-not}\n
\n\n# External Links\n\n- ["A Thinking Ape's Critique of Trans-Simianism"](https://dresdencodak.com/2009/05/15/a-thinking-apes-critique-of-trans-simianism-repost/); ["On the Impossibility of Supersized Machines"](https://arxiv.org/abs/1703.10987), Garfinkel et al 2017; ["Concrete Problems in Human Safety"](https://milan.cvitkovic.net/writing/Concrete_Problems_in_Human_Safety.pdf), Cvitkovic 2021; ["Investigations of a Dog"](!W "Investigations of a Dog"), Franz Kafka 1922\n- ["Modeling intelligence as a project-specific factor of production"](http://modelingtheworld.benjaminrosshoffman.com/intelligence-project-specific-factor-production)\n- ["What if you turned the world's hardware into AI minds?"](https://aiimpacts.org/what-if-you-turned-the-worlds-hardware-into-ai-minds/)\n- ["How Feasible Is the Rapid Development of Artificial Superintelligence?"](https://longtermrisk.org/how-feasible-is-rapid-development-artificial-superintelligence)\n- ["There are no free lunches, but organic lunches are super expensive: Why the tradeoffs constraining human cognition do not limit artificial superintelligences"](https://hypermagicalultraomnipotence.wordpress.com/2017/07/26/there-are-no-free-lunches-but-organic-lunches-are-super-expensive-why-the-tradeoffs-constraining-human-cognition-do-not-limit-artificial-superintelligences/); ["Building brain-inspired AGI is infinitely easier than understanding the brain"](https://www.alignmentforum.org/posts/PTkd8nazvH9HQpwP8/building-brain-inspired-agi-is-infinitely-easier-than); ["Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain"](https://www.alignmentforum.org/posts/HhWhaSzQr6xmBki8F/birds-planes-brains-and-ai-against-appeals-to-the-complexity)\n- ["Altruists Should Prioritize Artificial Intelligence"](https://longtermrisk.org/altruists-should-prioritize-artificial-intelligence/)\n- ["You Can’t Predict a Game of Pinball"](https://www.lesswrong.com/posts/epgCXiv3Yy3qgcsys/you-can-t-predict-a-game-of-pinball)\n- **Discussion**:\n\n - [Reddit]{.smallcaps}: [1](https://www.reddit.com/r/ControlProblem/comments/4upo7r/complexity_no_bar_to_ai/), [2](https://www.reddit.com/r/slatestarcodex/comments/4up89b/complexity_no_bar_to_ai_gwernnet/)\n - [LW](https://www.lesswrong.com/posts/WrxHnHfWv4Dz8oSWk/open-thread-jul-25-jul-31-2016?commentId=KtWz96tShEAwHErTP)\n - [Facebook]{.smallcaps}: [1](https://www.facebook.com/yudkowsky/posts/10154413180579228), [2](https://www.facebook.com/MachineIntelligenceResearchInstitute/posts/1084181481619235)\n - [HN](https://news.ycombinator.com/item?id=12354097), [2](https://news.ycombinator.com/item?id=26216238)\n\n# Appendix\n\n## Technology Forecasting Errors: Functional Fixedness In Assuming Dependencies\n\n[**Moved to Technology Forecasting: The Garden of Forking Paths.**](/forking-path "'Technology Forecasting: The Garden of Forking Paths', Branwen 2014"){.include-annotation}\n\n---\ntitle: "Why Tool AIs Want to Be Agent AIs"\ndescription: "AIs limited to pure computation (Tool AIs) supporting humans, will be less intelligent, efficient, and economically valuable than more autonomous reinforcement-learning AIs (Agent AIs) who act on their own and meta-learn, because all problems are reinforcement-learning 
problems.\"\ncreated: 2016-09-07\nmodified: 2018-08-28\nstatus: finished\nprevious: /complexity\nnext: /tank\nconfidence: likely\nimportance: 9\ncssExtension: drop-caps-kanzlei\n...\n\n
\n> Autonomous AI systems (Agent AIs) trained using [reinforcement learning](!W) can do harm when they take wrong actions, especially superintelligent Agent AIs. One solution would be to eliminate their agency by not giving AIs the ability to take actions, confining them to purely informational or inferential tasks such as classification or prediction (Tool AIs), and have all actions be approved & executed by humans, giving equivalently superintelligent results without the risk.\n>\n> I argue that this is not an effective solution for two major reasons. First, because Agent AIs will by definition be better at *actions* than Tool AIs, giving an economic advantage. Secondly, because Agent AIs will be better at *inference & learning* than Tool AIs, and this is inherently due to their greater agency: the same algorithms which learn how to perform actions can be used to select important datapoints to learn inference over, how long to learn, how to more efficiently execute inference, how to design themselves, how to optimize hyperparameters, how to make use of external resources such as long-term memories or external software or large databases or the Internet, and how best to acquire new data.\n>\n> All of these actions will result in Agent AIs more intelligent than Tool AIs, in addition to their greater economic competitiveness. Thus, Tool AIs will be inferior to Agent AIs in both actions and intelligence, implying use of Tool AIs is an even more highly unstable equilibrium than previously argued, as users of Agent AIs will be able to outcompete them on two dimensions (and not just one).\n
\n\nOne proposed solution to AI risk is to suggest that AIs could be limited purely to supervised/unsupervised learning, and not given access to any sort of capability that can directly affect the outside world such as robotic arms.\nIn this framework, AIs are treated purely as mathematical functions mapping data to an output such as a classification probability, similar to a logistic or linear model but far more complex; most deep learning neural networks like ImageNet image classification convolutional neural networks (CNN)s would qualify.\nThe gains from AI then come from training the AI and then asking it many questions which humans then review & implement in the real world as desired.\nSo an AI might be trained on a large dataset of chemical structures labeled by whether they turned out to be an useful drug in humans and asked to classify new chemical structures as useful or non-useful; then doctors would run the actual medical trials on the drug candidates and decide whether to use them in patients etc.\nOr an AI might look like [Google Maps](!W)/[Waze](!W): it answers your questions about how best to drive places better than any human could, but it does not control any traffic lights country-wide to optimize traffic flows nor will it run a self-driving car to get you there.\nThis theoretically avoids any possible runaway of AIs into malignant or uncaring actors who harm humanity by satisfying dangerous utility functions and developing instrumental drives.\nAfter all, if they can't take any actions, how can they do anything that humans do not approve of?\n\nTwo variations on this limiting or boxing theme are\n\n#. [Oracle AI](https://www.lesswrong.com/tag/oracle-ai): [Nick Bostrom](!W), in [_Superintelligence_ (2014)](!W \"Superintelligence: Paths, Dangers, Strategies\") (pg145--158) notes that while they can be easily 'boxed' and in some cases like P/NP problems the answers can be cheaply checked or random subsets expensively verified, there are several issues with oracle AIs:\n\n - the AI's definition of 'resources' or 'staying inside the box' can change as it learns more about the world (ontological crises)\n - responses might manipulate users into asking easy (and useless problems)\n - making changes in the world can make it easier to answer questions about, by simplifying or controlling it (\"All processes that are stable we shall predict. All processes that are unstable we shall control.\")\n - even a successfully boxed and safe oracle or tool AI can be misused[^Superintelligence-competition]\n#. [Tool AI](https://www.lesswrong.com/tag/tool-ai) (the idea, as \"tool mode\" or \"tool AGI\", was apparently introduced by Holden Karnofsky in a July 2011 discussion of a [May 2011 discussion with Jaan Tallinn](/doc/existential-risk/2011-05-10-givewell-holdenkarnofskyjaantallinn.doc \"http://xa.yimg.com/kq/groups/23070378/1331435883/name/Jaan+Tallinn+2011+05+-+revised.doc\") & elaborated on in [a May 2013 essay](https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si \"Thoughts on the Singularity Institute (SI)\"), but the idea has probably been proposed before). To quote Karnofsky:\n\n > Google Maps---by which I mean the complete software package including the display of the map itself---does not have a \"utility\" that it seeks to maximize. 
(One could fit an utility function to its actions, as to any set of actions, but there is no single \"parameter to be maximized\" driving its operations.)\n >\n > Google Maps (as I understand it) considers multiple possible routes, gives each a score based on factors such as distance and likely traffic, and then displays the best-scoring route in a way that makes it easily understood by the user. If I don't like the route, for whatever reason, I can change some parameters and consider a different route. If I like the route, I can print it out or email it to a friend or send it to my phone's navigation application. Google Maps has no single parameter it is trying to maximize; it has no reason to try to \"trick\" me in order to increase its utility. In short, Google Maps is not an *agent*, taking actions in order to maximize an utility parameter. It is a *tool*, generating information and then displaying it in an user-friendly manner for me to consider, use and export or discard as I wish.\n >\n > Every software application I know of seems to work essentially the same way, including those that involve (specialized) artificial intelligence such as Google Search, Siri, Watson, Rybka, etc. Some can be put into an \"agent mode\" (as Watson was on _Jeopardy_) but all can easily be set up to be used as \"tools\" (for example, Watson can simply display its top candidate answers to a question, with the score for each, without speaking any of them.)...Tool-AGI is not \"trapped\" and it is not Unfriendly or Friendly; it has no motivations and no driving utility function of any kind, just like Google Maps. It scores different possibilities and displays its conclusions in a transparent and user-friendly manner, as its instructions say to do; it does not have an overarching \"want,\" and so, as with the specialized AIs described above, while it may sometimes \"misinterpret\" a question (thereby scoring options poorly and ranking the wrong one #1) there is no reason to expect intentional trickery or manipulation when it comes to displaying its results.\n >\n > ...Another way of putting this is that a \"tool\" has an underlying instruction set that conceptually looks like: \"(1) Calculate which action A would maximize parameter P, based on existing data set D. (2) Summarize this calculation in an user-friendly manner, including what Action A is, what likely intermediate outcomes it would cause, what other actions would result in high values of P, etc.\" An \"agent,\" by contrast, has an underlying instruction set that conceptually looks like: \"(1) Calculate which action, A, would maximize parameter P, based on existing data set D. (2) Execute Action A.\" In any AI where (1) is separable (by the programmers) as a distinct step, (2) can be set to the \"tool\" version rather than the \"agent\" version, and this separability is in fact present with most/all modern software. 
Note that in the \"tool\" version, neither step (1) nor step (2) (nor the combination) constitutes an instruction to maximize a parameter---to describe a program of this kind as \"wanting\" something is a category error, and there is no reason to expect its step (2) to be deceptive...This is important because *an AGI running in tool mode could be extraordinarily useful but far more safe than an AGI running in agent mode.* In fact, if developing \"Friendly AI\" is what we seek, a tool-AGI could likely be helpful enough in thinking through this problem as to render any previous work on \"Friendliness theory\" moot.\n >\n > ...Is a tool-AGI possible? I believe that it is, and furthermore that it ought to be our default picture of how AGI will work\n\n There are similar general issues with Tool AIs as with Oracle AIs:\n\n - a human checking each result is no guarantee of safety; even Homer nods. A extremely dangerous or subtly dangerous answer might slip through; [Stuart Armstrong](https://www.lesswrong.com/posts/nAwTGhgrdxE85Bjmg/tools-versus-agents \"Tools versus agents\") notes that the summary may simply not mention the important (to humans) downside to a suggestion, or frame it in the most attractive light possible. The more a Tool AI is used, or trusted by users, the less checking will be done of its answers before the user mindlessly implements it.^[This has since proven to be a serious obstacle to OpenAI's use of RLHF on GPT-3 & GPT-4: it is very difficult to get sufficiently-high quality human ratings, especially when collected the default way (ie. crowdsourced or from users), and the raters easily fall for material which is copied (eg. in [summarization](https://arxiv.org/abs/2009.01325#openai \"‘Learning to summarize from human feedback’, Stiennon et al 2020\")), fluent but wrong, beyond their personal knowledge, or requires extensive research (like third-party sources) to factcheck.]\n - an intelligent, never mind superintelligent Tool AI, will have built-in search processes and planners which may be quite intelligent themselves, and in 'planning how to plan', discover dangerous instrumental drives and the sub-planning process execute them.[^Superintelligence-plan-execution]\n\n (This struck me as mostly theoretical until I saw how well [GPT-3](/gpt-3 \"'GPT-3 Creative Fiction', Branwen 2020\") could roleplay & imitate agents purely by offline self-supervised prediction on large text databases---imitation learning is (batch) reinforcement learning too! See [Decision Transformer](https://sites.google.com/berkeley.edu/decision-transformer \"'Decision Transformer: Reinforcement Learning via Sequence Modeling', Chen et al 2021\") for an explicit use of this.)\n - developing a Tool AI in the first place might require another AI, which itself is dangerous\n\n[^Superintelligence-competition]: _Superintelligence_, pg148:\n\n > Even if the oracle itself works exactly as intended, there is a risk that it would be misused. One obvious dimension of this problem is that an oracle AI would be a source of immense power which could give a decisive strategic advantage to its operator. This power might be illegitimate and it might not be used for the common good. Another more subtle but no less important dimension is that the use of an oracle could be extremely dangerous for the operator herself. Similar worries (which involve philosophical as well as technical issues) arise also for other hypothetical castes of superintelligence. We will explore them more thoroughly in Chapter 13. 
Suffice it here to note that the protocol determining which questions are asked, in which sequence, and how the answers are reported and disseminated could be of great significance. One might also consider whether to try to build the oracle in such a way that it would refuse to answer any question in cases where it predicts that its answering would have consequences classified as catastrophic according to some rough-and-ready criteria.\n[^Superintelligence-plan-execution]: _Superintelligence_, pg152--153, pg158:\n\n > With advances in artificial intelligence, it would become possible for the programmer to offload more of the cognitive labor required to figure out how to accomplish a given task. In an extreme case, the programmer would simply specify a formal criterion of what counts as success and leave it to the AI to find a solution. To guide its search, the AI would use a set of powerful heuristics and other methods to discover structure in the space of possible solutions. It would keep searching until it found a solution that satisfied the success criterion...Rudimentary forms of this approach are quite widely deployed today...A second place where trouble could arise is in the course of the software's operation. If the methods that the software uses to search for a solution are sufficiently sophisticated, they may include provisions for managing the search process itself in an intelligent manner. In this case, the machine running the software may begin to seem less like a mere tool and more like an agent. Thus, the software may start by developing a plan for how to go about its search for a solution. The plan may specify which areas to explore first and with what methods, what data to gather, and how to make best use of available computational resources. In searching for a plan that satisfies the software's internal criterion (such as yielding a sufficiently high probability of finding a solution satisfying the user-specified criterion within the allotted time), the software may stumble on an unorthodox idea. For instance, it might generate a plan that begins with the acquisition of additional computational resources and the elimination of potential interrupters (such as human beings). Such \"creative\" plans come into view when the software's cognitive abilities reach a sufficiently high level. When the software puts such a plan into action, an existential catastrophe may ensue....The apparent safety of a tool-AI, meanwhile, may be illusory. In order for tools to be versatile enough to substitute for superintelligent agents, they may need to deploy extremely powerful internal search and planning processes. Agent-like behaviors may arise from such processes as an unplanned consequence. 
In that case, it would be better to design the system to be an agent in the first place, so that the programmers can more easily see what criteria will end up determining the system's output.\n\nOracle AIs remain mostly hypothetical because it's unclear how to write such utility functions.\nThe second approach, Tool AI, is just an extrapolation of current systems but has two major problems aside from the already identified ones which cast doubt on Karnofsky's claims that Tool AIs would be \"extraordinarily useful\" & that we should expect future AGIs to resemble Tool AIs rather than Agent AIs.\n\n# Economic\n\nFirst and most commonly pointed out, agent AIs are more economically competitive as they can replace tool AIs (as in the case of YouTube upgrading from [next-video prediction](/doc/ai/nn/retrieval/2016-covington.pdf#google \"'Deep Neural Networks for YouTube Recommendations', Covington et al 2018\") to [REINFORCE](https://arxiv.org/abs/1812.02353#google \"'Top-𝑘 Off-Policy Correction for a REINFORCE Recommender System', Chen et al 2018\")^[As the lead author put it in a [May 2019 talk about REINFORCE on YouTube](https://www.youtube.com/watch?v=HEqQ2_1XRTs#google \"'Reinforcement Learning for Recommender Systems: A Case Study on Youtube', Chen 2019\"), the benefit is not simply better prediction but in superior consideration of downstream effects of all recommendations, which are ignored by predictive models: this produced \"The largest single launch improvement in YouTube for two years\" because \"We can really lead the users toward a different state, versus recommending content that is familiar\".]) or 'humans in the loop'.[^Superintelligence-competition-2]\nIn any sort of process, [Amdahl’s law](!W) notes that as steps get optimized, the optimization does less and less as the output becomes dominated by the slowest step---if a step only takes 10% of the time or resources, then even infinite optimization of that step down to zero time/resources means that the output will increase by no more than 10%.\nSo if a human overseeing a, say, [high-frequency trading](!W) (HFT) algorithm, accounts for 50% of the latency in decisions, then the HFT algorithm will never run more than twice as fast as it does now, which is a crippling disadvantage.\n(Hence, the [Knight Capital](!W \"Knight Capital Group#2012 stock trading disruption\") debacle is not too surprising---no profitable HFT firm could afford to put too many humans into its loops, so when something does go wrong, it can be difficult for humans to figure out the problem & intervene before the losses mount.)\nAs the AI gets better, the gain from replacing the human increases greatly, and may well justify replacing them with an AI inferior in many other respects but superior in some key aspect like cost or speed.\nThis could also apply to error rates---in airline accidents, human error now causes the overwhelming majority of accidents due to their presence as overseers of the [autopilots](!W) and it's unclear that a human pilot represents a net safety gain; [and in 'advanced chess'](/note/note#advanced-chess-obituary), grandmasters initially chose most moves and used the chess AI for checking for tactical errors and blunders, which transitioned through the late '90s and early '00s to human players (not even grandmasters) turning over most playing to the chess AI but contributing a great deal of win performance by picking & choosing which of several AI-suggested moves to use, but as the chess AIs improved, at some point around 2007 
victories increasingly came from the *humans* making mistakes which the opposing chess AI could exploit, even mistakes as trivial as 'misclicks' (on the computer screen), and now in advanced chess, human contribution has decreased to largely preparing the chess AIs' opening books & looking for novel opening moves which their chess AI can be better prepared for.\n\n[^Superintelligence-competition-2]: _Superintelligence_, pg151:\n\n > It might be thought that by expanding the range of tasks done by ordinary software, one could eliminate the need for artificial general intelligence. But the range and diversity of tasks that a general intelligence could profitably perform in a modern economy is enormous. It would be infeasible to create special-purpose software to handle all of those tasks. Even if it could be done, such a project would take a *long* time to carry out. Before it could be completed, the nature of some of the tasks would have changed, and new tasks would have become relevant. There would be great advantage to having software that can learn on its own to do new tasks, and indeed to discover new tasks in need of doing. But this would require that the software be able to learn, reason, and plan, and to do so in a powerful and robustly cross-domain manner. In other words, it would need general intelligence. Especially relevant for our purposes is the task of software development itself. There would be enormous practical advantages to being able to automate this. Yet the capacity for rapid self-improvement is just the critical property that enables a seed AI to set off an intelligence explosion.\n\nAt some point, there is not much point to keeping the human in the loop at all since they have little ability to check the AI choices and become 'deskilled' (think [drivers following](!W \"Death by GPS\") [GPS](https://bldgblog.com/2017/01/the-season-of-burning-trucks/) [directions](https://ideas.4brad.com/what-if-city-ran-waze-and-you-had-obey-it-could-cure-congestion)), correcting less than they screw up and demonstrating that toolness is no guarantee of safety nor responsible use.\n(Hence the old joke: \"the factory of the future will be run by a man and a dog; the dog will be there to keep the man away from the factory controls.\")\nFor a successful autonomous program, just keeping up with growth alone makes it difficult to keep humans in the loop; the US drone warfare program has become such a central tool of US warfare that the US Air Force finds it extremely difficult to hire & retain enough human pilots overseeing its drones, and there are indications that operational pressures are slowly eroding the human control & turning them into rubberstamps, and for all its protestations that it would always keep a human in the decision-making loop, the Pentagon is, unsurprisingly, inevitably, sliding towards fully autonomous drone warfare as the next technological step to maintain military superiority over Russia & China.\n(See [\"Meet The New Mavericks: An Inside Look At America's Drone Training Program\"](https://www.fastcompany.com/3054521/meet-the-new-mavericks-an-inside-look-at-americas-drone-training-program \"We traveled to Holloman Air Force Base for a glimpse of the future of war-and the future of work\"); [\"Future is assured for death-dealing, life-saving drones\"](https://www.theguardian.com/world/2012/aug/04/future-drones \"Developers predict that pilotless devices will join planes in civilian airspace - and dream of electric robots counting sheep\"); [\"Sam Altman's Manifest 
Destiny\"](https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny \"Is the head of Y Combinator fixing the world, or trying to take over Silicon Valley?\"); [\"The Pentagon's 'Terminator Conundrum': Robots That Could Kill on Their Own\"](https://www.nytimes.com/2016/10/26/us/pentagon-artificial-intelligence-terminator.html \"The United States has put artificial intelligence at the center of its defense strategy, with weapons that can identify targets and make decisions.\"); [\"Attack of the Killer Robots\"](https://www.buzzfeed.com/sarahatopol/how-to-save-mankind-from-the-new-breed-of-killer-robots \"Forget about drones, forget about dystopian sci-fi - a terrifying new generation of autonomous weapons is already here. Meet the small band of dedicated optimists battling nefarious governments and bureaucratic tedium to stop the proliferation of killer robots and, just maybe, save humanity from itself.\").\nDespite fervent asseverations that the US military would *never* use fully autonomous drones, within a few years, by 2019, Pentagon whitepapers had begun to walk that back and talk about autonomous weapons that were merely auditable post hoc and laying out AI ethics principles like being [\"equitable\"](https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF#page=8).)\n\nFundamentally, autonomous agent AIs are what we and the free market *want*; everything else is a surrogate or irrelevant loss function.\nWe don't want low log-loss error on ImageNet, we want to refind a particular personal photo; we don't want excellent advice on which stock to buy for a few microseconds, we want a money pump spitting cash at us; we don't want a drone to tell us where Osama bin Laden was an hour ago (but not now), we want to have killed him on sight; we don't want good advice from Google Maps about what route to drive to our destination, we want to be at our destination without doing any driving etc.\nIdiosyncratic situations, legal regulation, fears of tail risks from very bad situations, worries about correlated or systematic failures (like hacking a drone fleet), and so on may slow or stop the adoption of Agent AIs---but the pressure will always be there.\n\nSo for this reason alone, we expect to see Agent AIs to systematically be preferred over Tool AIs unless they're considerably worse.\n\n# Intelligence\n\nWhy will people choose agents?\nAgent AIs will be chosen over Tool AIs because agents are what users want, lack of agency is something that will be penalized in competitive scenarios such as free markets or military uses, and because people will differ on preferences and some will inevitably choose to use agents.\n\nMore importantly, in addition to those reasons, it is probable that, because everything is a decision problem where agency is useful, *the best Tool AI's performance/intelligence will be equal to or worse than the best Agent AI, probably worse, and possibly much worse.*\nBostrom notes that \"Such 'creative' [dangerous] plans come into view when the [Tool AI] software's cognitive abilities reach a sufficiently high level.\"\nWe might reverse this to say that *to reach* a Tool AI of sufficiently high level, we must put such creativity in view.\n(A linear model may be extremely safe & predictable, but it would be hopeless to expect everyone to use them instead of neural networks.)\n\nAn Agent AI clearly benefits from being a better Tool AI, so it can better understand its environment & inputs; but less intuitively, any Tool AI 
benefits from agentiness.\nAn Agent AI has the potential, often realized in practice, to outperform any Tool AI: it can get better results with less computation, less data, less manual design, less post-processing of its outputs, on harder domains.\n\n(Trivial proof: Agent AIs are supersets of Tool AIs---an Agent AI, by not taking any actions besides communication or random choice, can reduce itself to a Tool AI; so in cases where actions are unhelpful, it performs the same as the Tool AI, and when actions can help, it can perform better; hence, an Agent AI can always match or exceed a Tool AI.\nAt least, assuming sufficient data that in the environments where actions are not helpful, it can learn to stop acting, and in the ones where they are, it has a distant enough horizon to pay for the exploration.\nOf course, you might agree with this but simply believe that intelligence-wise, Agent AIs == Tool AIs.)\n\nEvery sufficiently hard problem is a reinforcement learning problem.\n\nMore seriously, not all data is created equal.\nNot all data points are equally valuable to learn from, require equal amounts of computation, should be treated identically, should inspire identical followup data sampling, or actions.\nInference and learning can be *much* more efficient if the algorithm can choose how to compute on what data with which actions.\n\nThere is no hard Cartesian boundary [between an algorithm & its environment](!W \"Extended mind thesis\") such that control of the environment is irrelevant to the algorithm and vice-versa and its computation can be carried out without regard to the environment---there are simply many layers between the core of the algorithm and the furthest part of the environment, and the more layers that the algorithm can model & control, the more it can do.\nConsider Google Maps/Waze[^Waze].\nOn the surface they are 'merely' Tool AIs which produce lists of possible routes which would optimize certain requirements; but the entire point of such Tool AIs---and all large-scale Tool AIs and [research in general](/research-criticism \"'How Should We Critique Research?', Branwen 2019\")---is that countless drivers will *act on* them (what's the point of getting driving directions if you don't then drive?), and this will greatly change traffic patterns as drivers become appendages of the 'Tool' AI, potentially making driving in an area much worse by their errors or myopic per-driver optimization causing [Braess’s paradox](!W) (and far from being a theoretical curiosity, GPS, Google Maps, and Waze are regularly accused of that in many places, especially Los Angeles).\n\n[^Waze]: While Google Maps was used as a paradigmatic example of a Tool AI, it's not clear how hard this can be pushed, even if we exclude the road system itself: Google Maps/Waze is, of course, trying to maximize something---traffic & ad revenue. 
Google Maps, like any Google property, is doubtless constantly running [A/B tests](!W) on its users to optimize for maximum usage, its users are constantly feeding in data about routes & traffic conditions to Google Maps/Waze through the website interface & smartphone GPS/WiFi geographic logs, and to the extent that users make any use of the information & increase/decrease their use of Google Maps which many do so blindly, Google Maps will get feedback after changing the real world (sometimes to the [intense](https://www.washingtonpost.com/local/traffic-weary-homeowners-and-waze-are-at-war-again-guess-whos-winning/2016/06/05/c466df46-299d-11e6-b989-4e5479715b54_story.html \"Traffic-weary homeowners and Waze are at war, again. Guess who's winning?\") [frustration](https://mynewsla.com/government/2015/04/28/cut-through-traffic-caused-by-waze-app-must-stop-l-a-councilman-says/ \"'Cut-through' traffic caused by Waze app must stop, L.A. councilman says\") of [those affected](http://nbr.com/2014/12/11/la-residents-complain-about-waze-craze/ \"LA residents complain about 'Waze Craze'\"), who try to manipulate it back)... Is Google Maps/Waze a Tool AI or a large-scale Agent AI?\n\n It is in a [POMDP](!W \"Partially observable Markov decision process\") environment, it has a clear reward function in terms of website traffic, and it has a wide set of actions it continuously explores with randomization from various sources; even though it was designed to be a Tool AI, from an abstract perspective, one would have to consider it to have evolved into an Agent AI due to its commercial context and use in real-world actions, whether Google likes it or not. We might consider Google Maps to be a \"secret agent\": it is not a Tool AI but an Agent AI with a hidden & highly opaque reward function. 
This is probably not an ideal situation.\n\nThis is a highly general point which can be applied on many levels.\nThis point often arises in classical statistics/[experimental design](!W)/decision theory where adaptive techniques can greatly outperform fixed-sample techniques for both inference and actions/losses: numerical integration [can be improved](https://arxiv.org/abs/1512.00933 \"'Probabilistic Integration: A Role in Statistical Computation?', Briol et al 2015\"), a [sequential analysis](!W) trial testing a hypothesis can often terminate after a fraction of the equivalent fixed-sample trial's sample size (and/or loss) while [exploring multiple questions](!W \"Response surface methodology\"); an [adaptive](!W \"Adaptive clinical trial\") [multi-armed bandit](!W) will have much lower regret than any non-adaptive solution, but it will also be inferentially better at estimating which arm is best and what the performance of that arm is (see the 'best-arm problem': [Bubeck et al 2009](https://arxiv.org/abs/0802.2655 \"Pure exploration in multi-armed bandits problems\"), [Audibert et al 2010](https://hal-enpc.archives-ouvertes.fr/file/index/docid/654404/filename/COLT10.pdf \"Best Arm Identification in Multi-Armed Bandits\"), [Gabillon et al 2011](https://proceedings.neurips.cc/paper/2011/file/c4851e8e264415c4094e4e85b0baa7cc-Paper.pdf \"Multi-Bandit Best Arm Identification\"), [Mellor 2014](https://www.escholar.manchester.ac.uk/api/datastream?publicationPid=uk-ac-man-scw:227658&datastreamId=FULL-TEXT.PDF \"Decision Making Using Thompson Sampling\"), [Jamieson & Nowak 2014](https://nowak.ece.wisc.edu/bestArmSurvey.pdf \"Best-arm Identification Algorithms for Multi-Armed Bandits in the Fixed Confidence Setting\"), [Kaufmann et al 2014](https://arxiv.org/abs/1407.4443 \"On the Complexity of Best Arm Identification in Multi-Armed Bandit Models\")), and an adaptive [optimal design](!W) can constant-factor (gains of 50% or more are possible compared to naive designs like even allocation; [McClelland 1997](https://www2.psych.ubc.ca/~schaller/528Readings/McClelland1997.pdf \"Optimal design in psychological research\")) minimize total [variance](!W) by focusing on unexpectedly difficult-to-estimate arms (while a fixed-sample trial can be seen as ideal for when one values precise estimates of all arms equally and they have equal variance, which is usually not the case); even a [Latin square](!W) or [blocking](!W \"Randomized block design\") or [rerandomization](/doc/statistics/decision/2012-morgan.pdf \"'Rerandomization to improve covariate balance in experiments', Morgan & Rubin 2012\") design rather than simple randomization can be seen as reflecting this benefit (avoiding the potential for imbalance in allocation across arms by deciding in advance the sequence of 'actions' taken in collecting samples).\nAnother example comes from [queueing theory's](!W \"queueing theory\") [\"the power of two choices\"](https://www.ic.unicamp.br/~celio/peer2peer/math/mitzenmacher-power-of-two.pdf \"'The Power of Two Random Choices: A Survey of Techniques and Results', Mitzenmacher et al 2001\"), where selecting the best of 2 possible queues to wait in rather than selecting 1 queue at random improves the expected maximum delay from 𝒪(log _n_)/(log log _n_) to instead 𝒪(log log _n_)/(log _d_) (and interestingly, almost all the gain comes from being able to make any choice at all, going 1 → 2---choosing from 3 or more queues adds only some constant-factor gains).\n\nThe wide variety of uses of action is a major 
theme in recent work in AI (specifically, [deep learning](!W)/neural networks) research and increasingly key to achieving the best performance on inferential tasks as well as reinforcement learning/optimization/agent-y tasks.\nAlthough these advantages apply to most AI paradigms, because of the power and wide variety of tasks NNs get applied to, and sophisticated architectures, we can see the pervasive advantage of agentiness much more clearly than in narrower contexts like biostatistics.\n\n## Actions for intelligence\n\nRoughly, we can try to categorize the different kinds of agentiness by the 'level' of the NN they work on.\nThere are:\n\n#. actions internal to a computation:\n\n - inputs\n - intermediate states\n - accessing the external 'environment'\n - amount of computation\n - enforcing constraints/finetuning quality of output\n - changing the loss function applied to output\n#. actions internal to training the NN:\n\n - the gradient itself\n - size & direction of gradient descent steps on each parameter\n - overall gradient descent learning rate and learning rate schedule\n - choice of data samples to train on\n#. internal to the dataset\n\n - active learning\n - optimal experiment design\n#. internal to the NN design step\n\n - hyperparameter optimization\n - NN architecture\n#. internal to interaction with environment\n\n - adaptive experiment / multi-armed bandit / exploration for reinforcement learning\n\n### Actions internal to a computation\n\nInside a specific NN, while computing the output for an input question, a NN can make choices about how to handle it.\n\nIt can choose what parts of the input to run most of its computations on, while throwing away or computing less on other parts of the input, which are less relevant to the output, using \"attention mechanisms\" (eg. 
[Olah & Carter 2016](https://distill.pub/2016/augmented-rnns/ \"Attention and Augmented Recurrent Neural Networks\"), [Hahn & Keller 2016](https://arxiv.org/abs/1608.05604 \"Modeling Human Reading with Neural Attention\"), [Bellver et al 2016](https://imatge-upc.github.io/detection-2016-nipsws/ \"Hierarchical Object Detection with Deep Reinforcement Learning\"), [Mansimov et al 2015](https://arxiv.org/abs/1511.02793 \"Generating Images from Captions with Attention\"), [Gregor et al 2015](https://arxiv.org/abs/1502.04623 \"DRAW: A recurrent neural network for image generation\"), [Xu 2015](https://proceedings.mlr.press/v37/xuc15.pdf \"Show, attend and tell: Neural image caption generation with visual attention\"), [Larochelle & Hinton 2010](https://pdfs.semanticscholar.org/6da2/445037118c8ab72be1a319dd9f2a2116b305.pdf \"Learning to combine foveal glimpses with a third-order Boltzmann machine\"), [Bahdanau et al 2015](https://arxiv.org/abs/1409.0473 \"Neural machine translation by jointly learning to align and translate\"), [Ranzato 2014](https://arxiv.org/abs/1405.5488 \"On learning where to look\"), [Mnih et al 2014](https://proceedings.neurips.cc/paper/2014/hash/09c6c3783b4a70054da74f2538ed47c6-Abstract.html \"Recurrent models of visual attention\"), [Sordoni et al 2016](https://arxiv.org/abs/1606.02245 \"Iterative alternating neural attention for machine reading\"), [Kaiser & Bengio 2016](https://proceedings.neurips.cc/paper/2016/hash/fb8feff253bb6c834deb61ec76baa893-Abstract.html \"Can Active Memory Replace Attention?\")).\nAttention mechanisms are responsible for many increases in performance, but especially improvements in RNNs' ability to do sequence-to-sequence translation by revisiting important parts of the sequence ([Vaswani et al 2017](https://arxiv.org/abs/1706.03762#google \"Attention Is All You Need\")), image generation and captioning, and in CNNs' ability to recognize images by focusing on ambiguous or small parts of the image, even for adversarial examples ([Luo et al 2016](https://arxiv.org/abs/1511.06292 \"Foveation-based Mechanisms Alleviate Adversarial Examples\")).\nThey are a major trend in deep learning, as it is often the case that some parts of the input are more important than others and enable both global & local operations to be learned, with increasingly too many examples of attention to list (with a trend as of 2018 towards using attention as the major or *only* construct).\n\nMany designs can be interpreted as using attention.\nThe bidirectional RNN also often used in natural language translation doesn't explicitly use attention mechanisms but is believed to help by giving the RNN a second look at the sequence.\nIndeed, so universal that it often goes without mention is that the [LSTM](!W \"Long short-term memory\")/GRU mechanism which improves almost all RNNs is itself a kind of attention mechanism: the LSTM cells learn which parts of the hidden state/history are important and should be kept, and whether and when the memories should be forgotten and fresh memories loaded into the LSTM cells.\nWhile LSTM RNNs are the default for sequence tasks, they have occasionally been beaten by feedforward neural networks---using internal attention or \"self-attention\", like the Transformer architecture (eg. 
[Vaswani et al 2017](#vaswani-et-al-2017) or [Al-Rfou et al 2018](https://arxiv.org/abs/1808.04444 \"Character-Level Language Modeling with Deeper Self-Attention\")).\n\nExtending attention, a NN can choose not just which parts of an input to look at multiple times, but also how long to keep computing on it, \"adaptive computation\" ([Graves 2016a](https://arxiv.org/abs/1603.08983 \"Adaptive Computation Time for Recurrent Neural Networks\"), [Figurnov et al 2016](https://arxiv.org/abs/1612.02297 \"Spatially Adaptive Computation Time for Residual Networks\"), Silver et al 2016b, [Zamir et al 2016](https://arxiv.org/abs/1612.09508 \"Feedback Networks\"), [Huang et al 2017](https://arxiv.org/abs/1703.09844 \"Multi-Scale Dense Convolutional Networks for Efficient Prediction\"), [Li et al 2017](https://arxiv.org/abs/1703.10332 \"Dynamic Computational Time for Visual Attention\"), [Wang et al 2017](https://arxiv.org/abs/1706.00885 \"IDK Cascades: Fast Deep Learning by Learning not to Overthink\"), [Teerapittayanon et al 2017](https://pdfs.semanticscholar.org/6776/74e81070879f7b6da6261d0ba174985a3cf6.pdf \"BranchyNet: Fast Inference via Early Exiting from Deep Neural Networks\"), [Huang et al 2017](https://arxiv.org/abs/1708.02973 \"Learning Policies for Adaptive Tracking with Deep Feature Cascades\"), [Li et al 2017b](https://arxiv.org/abs/1708.04483 \"Learning with Rethinking: Recurrently Improving Convolutional Neural Networks through Feedback\"), [Campos et al 2017](https://arxiv.org/abs/1708.06834 \"Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks\"), [McGill & Perona 2017](https://arxiv.org/abs/1703.06217 \"Deciding How to Decide: Dynamic Routing in Artificial Neural Networks\"), [Bolukbasi et al 2017](https://arxiv.org/abs/1702.07811 \"Adaptive Neural Networks for Efficient Inference\"), [Wu et al 2017](https://arxiv.org/abs/1711.08393 \"BlockDrop: Dynamic Inference Paths in Residual Networks\"), [Seo et al 2017](https://arxiv.org/abs/1711.02085 \"Neural Speed Reading via Skim-RNN\"), [Lieder et al 2017](https://arxiv.org/abs/1711.06892 \"Learning to select computations\"), [Dehghani et al 2018](https://arxiv.org/abs/1807.03819#googledeepmind \"Universal Transformers\"), [Buesing et al 2019](https://arxiv.org/abs/1910.06862 \"TreeSample: Approximate Inference in Discrete Distributions with Monte Carlo Tree Search and Value Functions\"), [Banino et al 2021](https://arxiv.org/abs/2107.05407#deepmind \"PonderNet: Learning to Ponder\")): so it iteratively spends more computation on hard parts of problem within a given computational budget^[If the NN is trained to minimize error alone, it'll simply spend as much time as possible on every problem; so a cost is imposed on each iteration to encourage it to finish as soon as it has a good answer, and learn to finish sooner. And how do we decide what costs to impose on the NN for deciding whether to loop another time or emit its current best guess as good enough? 
Well, that'll depend on the cost of GPUs and the economic activity and the utility of results for the humans...].\n[Neural ODEs](https://arxiv.org/abs/1806.07366 \"'Neural Ordinary Differential Equations', Chen et al 2018\") are an interesting example of a model which are sort of like adaptive RNNs in that they can be run repeatedly by the ODE solver, adaptively, to refine their output to a target accuracy, and the ODE solver can be considered a kind of agent as well.\n\nAttention generally doesn't change the nature of the computation aside from the necessity of actions over the input, but actions can be used to bring in different computing paradigms.\nFor example, the entire field of [\"differentiable neural computer\"](https://www.deepmind.com/blog/differentiable-neural-computers)/\"neural Turing machines\" ([Zaremba & Sutskever 2015](https://arxiv.org/abs/1505.00521 \"Reinforcement learning neural Turing machines\"), [Graves et al 2016b](/doc/reinforcement-learning/model-free/2016-graves.pdf#deepmind \"Hybrid computing using a neural network with dynamic external memory\")) or \"neural stack machines\" or \"neural GPUs\" or most designs with some sort of scalable external memory mechanism larger than LSTMs ([Rae et al 2016](https://arxiv.org/abs/1610.09027#deepmind \"Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes\")) depends on figuring out a clever way to backpropagate through the action of memory accesses or using reinforcement learning techniques like [REINFORCE](/doc/reinforcement-learning/model-free/1992-williams.pdf \"'Simple statistical gradient-following algorithms for connectionist reinforcement learning', Williams 1992\") for training the non-differentiable actions.\nAnd such a memory is like a database which is constructed on the fly per-problem, so it'll help with database queries & information retrieval & knowledge graphs ([Narasimhan et al 2016](https://arxiv.org/abs/1603.07954 \"Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning\"), [Seo et al 2016](https://arxiv.org/abs/1611.01603 \"Bidirectional Attention Flow for Machine Comprehension\"), [Bachman et al 2016](https://arxiv.org/abs/1612.02605 \"Towards Information-Seeking Agents\"), [Buck et al 2017](https://arxiv.org/abs/1705.07830 \"Ask the Right Questions: Active Question Reformulation with Reinforcement Learning\"), [Yang et al 2017](https://arxiv.org/abs/1711.06744 \"Learning to Organize Knowledge with N-Gram Machines\"), [Hadash et al 2018](https://arxiv.org/abs/1804.09028#ibm \"Estimate and Replace: A Novel Approach to Integrating Deep Neural Networks with Existing Applications\")).\nAn intriguing variant on this idea of 'querying' resources is mixture-of-experts ([committee machine](!W)) NN architectures ([Shazeer et al 2016](https://arxiv.org/abs/1701.06538#google \"Outrageously large neural networks: the sparsely-gated mixture-of-experts layer\")).\n[Jeff Dean](!W \"Jeff Dean (computer scientist)\") (Google Brain) asks where should we use RL techniques in our OSes, networks, and computations these days and answers: [everywhere](http://learningsys.org/nips17/assets/slides/dean-nips17.pdf \"'Machine Learning for Systems and Systems for Machine Learning', Dean 2017 slides\") ([Haj-Ali et al 2019](https://arxiv.org/abs/1908.01275 \"Deep Reinforcement Learning in System Optimization\") review).\nRL should be used for: program placement on servers ([Mirhoseini et al 2017](https://arxiv.org/abs/1706.04972#google \"Device Placement Optimization with 
Reinforcement Learning\")/[Mirhoseini et al 2018](https://pdfs.semanticscholar.org/1fed/44d9330116149c7d0c82fc4cc9c9a5a7a748.pdf \"A Hierarchical Model for Device Placement\")), [B-tree indexes](https://arxiv.org/abs/1712.01208#google \"'The Case for Learned Index Structures', Kraska et al 2017\")/[Bloom filters](https://proceedings.mlr.press/v97/rae19a/rae19a.pdf \"'Meta-Learning Neural Bloom Filters', Rae et al 2019\") for [databases](https://www.cidrdb.org/cidr2019/papers/p117-kraska-cidr19.pdf \"'SageDB: A Learned Database System', Kraska et al 2019\"), [graph partitioning](https://arxiv.org/abs/1903.00614#google \"'GAP: Generalizable Approximate Graph Partitioning Framework', Nazi et al 2019\"), search query candidates ([Rosset et al 2018](https://arxiv.org/abs/1804.04410 \"Optimizing Query Evaluations using Reinforcement Learning for Web Search\"), [Nogueira et al 2018](https://arxiv.org/abs/1809.10658 \"Learning to Coordinate Multiple Reinforcement Learning Agents for Diverse Query Reformulation\")), compiler settings ([Haj-Ali et al 2019](https://arxiv.org/abs/1901.04615 \"AutoPhase: Compiler Phase-Ordering for High Level Synthesis with Deep Reinforcement Learning\"), [Trofin et al 2022](https://arxiv.org/abs/2101.04808#google \"MLGO: a Machine Learning Guided Compiler Optimizations Framework\")), quantum computer control ([Niu et al 2019](https://www.nature.com/articles/s41534-019-0141-3 \"Universal quantum control through deep reinforcement learning\")), YouTube [video compression codec](https://arxiv.org/abs/2202.06626#deepmind \"'MuZero with Self-competition for Rate Control in VP9 Video Compression', Mandhane et al 2022\") settings, datacenter & server cooling controllers...\nDean asks \"Where Else Could We Use Learning?\", and replies:\n\n> *Anywhere We're Using Heuristics To Make a Decision!*\n>\n> - Compilers: instruction scheduling, register allocation, loop nest parallelization strategies, ...\n> - [Networking](https://arxiv.org/abs/2102.09337#nvidia \"‘Reinforcement Learning for Datacenter Congestion Control’, Tessler et al 2021\"): TCP window size decisions, backoff for retransmits, data compression, ...\n> - Operating systems: process scheduling, buffer cache insertion/replacement [eg. [Lagar-Cavilla et al 2019](https://dl.acm.org/citation.cfm?id=3304053 \"Software-Defined Far Memory in Warehouse-Scale Computers\") for [compressed RAM](!W \"Virtual memory compression\")], file system prefetching [eg. [Hashemi et al 2018](https://arxiv.org/abs/1803.02329 \"Learning Memory Access Patterns\"), memory allocation ([Maas et al 2020](https://storage.googleapis.com/pub-tools-public-publication-data/pdf/cb7b7a938ac6d313a2b5f07612093b5c52093f51.pdf#google \"Learning-based Memory Allocation for C++ Server Workloads\"))], ...\n> - Job scheduling systems: which tasks/VMs to co-locate on same machine, which tasks to pre-empt, ... [eg. 
[Chen & Tian 2018](https://arxiv.org/abs/1810.00337 \"Automatic Local Rewriting for Combinatorial Optimization\"), and [mixed integer programming](!W) for planning of all sorts ([Nair et al 2020](https://arxiv.org/abs/2012.13349#deepmind \"Solving Mixed Integer Programs Using Neural Networks\")/[Sonnerat et al 2021](https://arxiv.org/abs/2107.10201#deepmind \"Learning a Large Neighborhood Search Algorithm for Mixed Integer Programs\"))]\n> - ASIC design: [physical circuit](https://arxiv.org/abs/2003.08445#google \"'Placement Optimization with Deep Reinforcement Learning', Goldie & Mirhoseini 2020\") [placement](https://arxiv.org/abs/2004.10746#google \"'Chip Placement with Deep Reinforcement Learning', Mirhoseini et al 2020\"), \\[[TPU design](https://arxiv.org/abs/2105.12842#google \"'A Full-stack Accelerator Search Technique for Vision Applications', Zhang et al 2021\"),\\] test case selection, ...\n>\n> *Anywhere We've Punted to a User-Tunable Performance Option!* Many programs have huge numbers of tunable command-line flags, usually not changed from their defaults (`--eventmanager_threads=16 --bigtable_scheduler_batch_size=8 --mapreduce_merge_memory=134217728` `--lexicon_cache_size=1048576 --storage_server_rpc_freelist_size=128` ...)\n>\n> *Meta-learn everything*. ML:\n>\n> - learning placement decisions\n> - learning fast kernel implementations\n> - learning optimization update rules\n> - learning input preprocessing pipeline steps\n> - learning activation functions\n> - learning model architectures for specific device types, or that are fast for inference on mobile device X, learning which pre-trained components to reuse, ...\n>\n> Computer architecture/datacenter networking design:\n>\n> - learning best design properties by exploring design space automatically (via simulator) [see [Dean 2019](https://arxiv.org/abs/1911.05289#google \"The Deep Learning Revolution and Its Implications for Computer Architecture and Chip Design\")]\n\nFinally, one interesting variant on this theme is treating an inferential or generative problem as a reinforcement learning problem in a sort of environment with global rewards.\nMany times the standard loss function is inapplicable, or the important things are global, or the task is not really well-defined enough (in a \"I know it when I see it\" sense for the human) to nail down as a simple differentiable loss with predefined labels such as in an image classification problem; in these cases, one cannot do standard supervised training to minimize the loss but must start using reinforcement learning to directly optimize a reward---treating outputs such as classification labels as 'actions' which may eventually result in a reward.\nFor example, in a char-RNN generative text model trained by predicting a character conditional on the previous, one can generative reasonable text samples by [greedily](!W \"Greedy algorithm\") picking the most likely next character and occasionally a less likely character for diversity, but one can generate higher quality samples by exploring longer sequences with [beam search](!W) or nucleus sampling, and one can improve generation further by adding utility functions for global properties & applying RL algorithms such as [Monte Carlo tree search](!W) (MCTS) for training or runtime maximization of an overall trait like translation/summarization quality (sequence-to-sequence problems in general) or winning or program writing (eg. 
[Jaques et al 2016](https://openreview.net/pdf?id=BJ8fyHceg \"Tuning Recurrent Neural Networks with Reinforcement Learning [Note-RNN]\"), [Norouzi et al 2016](https://proceedings.neurips.cc/paper/2016/hash/2f885d0fbe2e131bfc9d98363e55d1d4-Abstract.html \"Reward Augmented Maximum Likelihood for Neural Structured Prediction\"), [Wu et al 2016](https://arxiv.org/abs/1609.08144#google \"Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation https://research.googleblog.com/2016/09/a-neural-network-for-machine.html\"), [Ranzato et al 2016](https://arxiv.org/abs/1511.06732#facebook \"Sequence level training with recurrent neural networks\"), [Li et al 2016](https://arxiv.org/abs/1606.01541 \"Deep Reinforcement Learning for Dialogue Generation\"), [Silver et al 2016a](/doc/reinforcement-learning/model/alphago/2016-silver.pdf#deepmind \"Mastering the game of Go with deep neural networks and tree search [AlphaGo]\")/[Silver et al 2017](/doc/reinforcement-learning/model/alphago/2017-silver.pdf#deepmind \"Mastering the game of Go without human knowledge\"), [Silver et al 2016b](https://arxiv.org/abs/1612.08810#deepmind \"The Predictron: End-To-End Learning and Planning\"), [Clark & Manning 2016](https://arxiv.org/abs/1609.08667 \"Deep Reinforcement Learning for Mention-Ranking Coreference Models\"), [Miao & Blunsom 2016](https://arxiv.org/abs/1609.07317 \"Language as a Latent Variable: Discrete Generative Models for Sentence Compression\"), [Rennie et al 2016](https://arxiv.org/abs/1612.00563 \"Self-critical Sequence Training for Image Captioning\"), [He et al 2016](https://proceedings.neurips.cc/paper/2016/file/5b69b9cb83065d403869739ae7f0995e-Paper.pdf \"Dual Learning for Machine Translation\"), [Bello et al 2017](https://openreview.net/pdf?id=rJY3vK9eg \"Neural Combinatorial Optimization With Reinforcement Learning\"), [Yang et al 2017](https://arxiv.org/abs/1703.04887 \"Improving Neural Machine Translation with Conditional Sequence Generative Adversarial Nets\"), [Strub et al 2017](https://arxiv.org/abs/1703.05423 \"End-to-end optimization of goal-driven and visually grounded dialogue systems\"), [Wu et al 2017](https://arxiv.org/abs/1704.06933 \"Adversarial Machine Translation\"), [Sestorain et al 2018](https://arxiv.org/abs/1805.10338 \"Zero-Shot Dual Machine Translation\"), [Xie et al 2012](https://arxiv.org/abs/1206.4634 \"Artist Agent: A Reinforcement Learning Approach to Automatic Stroke Generation in Oriental Ink Painting [sumi-e]\"), [Prestwich et al 2017](https://arxiv.org/abs/1704.07183 \"Stochastic Constraint Programming as Reinforcement Learning\"), [Paulus et al 2017](https://arxiv.org/abs/1705.04304 \"A Deep Reinforced Model for Abstractive Summarization\"), [Guimaraes et al 2017](https://arxiv.org/abs/1705.10843 \"Objective-Reinforced Generative Adversarial Networks (ORGAN) for Sequence Generation Models\"), [Lewis et al 2017](https://s3.amazonaws.com/end-to-end-negotiator/end-to-end-negotiator.pdf \"Deal or No Deal? 
End-to-End Learning for Negotiation Dialogues\"), [Sakaguchi et al 2017](https://arxiv.org/abs/1707.00299 \"Grammatical Error Correction with Neural Reinforcement Learning\"), [Supancic III & Ramanan 2017](https://arxiv.org/abs/1707.04991 \"Tracking as Online Decision-Making: Learning a Policy from Streaming Videos with Reinforcement Learning\"), [Pasunuru & Bansai 2017](https://arxiv.org/abs/1708.02300 \"Reinforced Video Captioning with Entailment Rewards\"), [Zhong et al 2017](https://arxiv.org/abs/1709.00103 \"Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning\"), [Kato & Shinozaki](https://arxiv.org/abs/1711.03689 \"Reinforcement Learning of Speech Recognition System Based on Policy Gradient and Hypothesis Selection\"), [Molla 2017](https://arxiv.org/abs/1711.03859 \"Towards the Use of Deep Reinforcement Learning with Global Policy For Query-based Extractive Summarisation\"), [Chang et al 2018](https://arxiv.org/abs/1807.04640 \"Automatically Composing Representation Transformations as a Means for Generalization\"), [Kryściński et al 2018](https://arxiv.org/abs/1808.07913 \"Improving Abstraction in Text Summarization\"), [Wu et al 2018](https://arxiv.org/abs/1808.08866 \"A Study of Reinforcement Learning for Neural Machine Translation\"), [Hashimoto & Tsuruoka 2018](https://arxiv.org/abs/1809.01694 \"Accelerated Reinforcement Learning for Sentence Generation by Vocabulary Prediction\"), [Krishnan et al 2018](https://arxiv.org/abs/1808.03196 \"Learning to Optimize Join Queries With Deep Reinforcement Learning\"), [Sabour et al 2018](https://arxiv.org/abs/1810.01398 \"Optimal Completion Distillation for Sequence Learning\"), [Böhm et al 2019](https://arxiv.org/abs/1909.01214 \"Better Rewards Yield Better Summaries: Learning to Summarise Without References\"), [Ziegler et al 2019](https://arxiv.org/abs/1909.08593#openai \"Fine-Tuning Language Models from Human Preferences\")).\nMost exotically, the loss function can itself be a sort of action/RL setting---consider the close connections ([Finn et al 2016](https://arxiv.org/abs/1611.03852 \"A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models\"), [Ho & Ermon 2016](https://arxiv.org/abs/1606.03476 \"Generative adversarial imitation learning\"), [Pfau & Vinyals 2016](https://arxiv.org/abs/1610.01945 \"Connecting Generative Adversarial Networks and Actor-Critic Methods\"), [Im et al 2016](https://arxiv.org/abs/1612.04021 \"Generative Adversarial Parallelization\"), [Goodfellow 2016](https://arxiv.org/abs/1701.00160 \"NIPS 2016 Tutorial: Generative Adversarial Networks\")) between [actor-critic](http://incompleteideas.net/book/first/ebook/node66.html \"'6.6 Actor-Critic Methods', Sutton & Barto\") reinforcement learning, [synthetic gradients](http://cnichkawde.github.io/SyntheticGradients.html \"Asynchronous network architecture for semi-supervised learning\") ([Jaderberg et al 2016](https://arxiv.org/abs/1608.05343#deepmind \"Decoupled Neural Interfaces using Synthetic Gradients\")), and game-theory-based generative adversarial networks (GANs; [Kim et al 2017](https://arxiv.org/abs/1703.05192 \"Learning to Discover Cross-Domain Relations with Generative Adversarial Networks\"), [Zhu et al 2017](https://junyanz.github.io/CycleGAN/ \"Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks\")/[Lample et al 2017](https://arxiv.org/abs/1711.00043 \"Unsupervised Machine Translation Using Monolingual Corpora Only\")).\n\n### Actions 
internal to training\n\nThe training of a NN by [stochastic gradient descent](!W) might seem to be independent of any considerations of 'actions', but it turns to be another domain where you can go \"what if we treated this as a [MDP](!W \"Markov decision process\")?\" and it's actually useful.\nSpecifically, gradient descent requires selection of which data to put into a minibatch, how large a change to make to parameters in general based on the error in the current minibatch (the learning rate hyperparameter), or how much to update each individual parameter each minibatch (perhaps having some neurons which get tweaked much less than others).\nActions are things like selecting 1 out of _n_ possible minibatches to do gradient descent on, or selecting 1 out of _n_ possible learning rates with the learning rate increasing/decreasing over time ([Li & Malik 2016](https://arxiv.org/abs/1606.01885 \"Learning to Optimize\"), [Li & Malik 2017](https://arxiv.org/abs/1703.00441 \"Learning to Optimize Neural Nets\") [Andrychowicz et al 2016](https://arxiv.org/abs/1606.04474 \"Learning to learn by gradient descent by gradient descent\"), [Bello et al 2017](https://proceedings.mlr.press/v70/bello17a/bello17a.pdf \"Neural Optimizer Search with Reinforcement Learning\"), [Fu et al 2016](https://pdfs.semanticscholar.org/737f/6cc6e237902531e8047cc12f7f46a4bff282.pdf \"Deep Reinforcement Learning for Accelerating the Convergence Rate\"), [Xu et al 2016](https://openreview.net/pdf?id=Sy7m72Ogg \"An Actor-Critic Algorithm for Learning Rate Learning\"), Jaderberg et al 2016, [Wichrowska et al 2017](https://arxiv.org/abs/1703.04813 \"Learned Optimizers that Scale and Generalize\"), [Hamrick et al 2017](https://arxiv.org/abs/1705.02670 \"Metacontrol for Adaptive Imagination-Based Optimization\"), [Xu et al 2017](https://arxiv.org/abs/1705.11159 \"Reinforcement Learning for Learning Rate Control\"), [Meier et al 2017](https://arxiv.org/abs/1709.06709 \"Online Learning of a Memory for Learning Rates\"), [Faury & Vasile 2018](https://arxiv.org/abs/1801.07222 \"Rover Descent: Learning to optimize by learning to navigate on prototypical loss surfaces\"), [Alber et al 2018](https://arxiv.org/abs/1808.02822 \"Backprop Evolution\"), [Metz et al 2018](https://arxiv.org/abs/1810.10180 \"Learned optimizers that outperform SGD on wall-clock and validation loss\"), [Almeida et al 2021](https://arxiv.org/abs/2106.00958#openai \"A Generalizable Approach to Learning Optimizers\"); prioritized traces, prioritized experience replay, boosting, hard-negative mining, [importance sampling](!W) ([Katharopoulos & Fleuret 2017](https://arxiv.org/abs/1706.00043 \"Biased Importance Sampling for Deep Neural Network Training\")), prioritizing hard samples, [Loshchilov & Hutter 2015](https://arxiv.org/abs/1511.06343 \"Online Batch Selection for Faster Training of Neural Networks\"), [Fan et al 2016](https://openreview.net/forum?id=SyJNmVqgg \"Neural Data Filter for Bootstrapping Stochastic Gradient Descent\"), [Salehi et al 2017](https://arxiv.org/abs/1708.02544 \"Stochastic Optimization with Bandit Sampling\"), [Kim & Choi 2018](https://arxiv.org/abs/1801.00904 \"ScreenerNet: Learning Curriculum for Neural Networks\"), learning internal normalizations, [Luo et al 2018](https://arxiv.org/abs/1806.10779 \"Differentiable Learning-to-Normalize via Switchable Normalization\")).\n\n### Actions internal to data selection\n\nWe have previously looked at sampling from existing datasets: training on hard samples, and so on.\nOne problem with existing 
datasets is that they can be inefficient---perhaps they have class imbalance problems where some kinds of data are overrepresented and what is really needed for improved performance is more of the other kinds of data.\nAn image classification CNN doesn't need 99 dog photos & 1 cat photos, it wants 50 dog photos & 50 cat photos.\n(Quite aside from the fact that there's not enough information to classify other cat photos based on just 1 exemplar, the CNN will simply learn to always classify photos as 'dog'.)\nOne can try to fix this by [choosing predominately from the minority classes](!W \"Oversampling and undersampling in data analysis\"), or by changing the loss function to make classifying the minority class correctly much more valuable than classifying the majority class.\n\nEven better is if the NN can somehow ask for new data, be given additional/corrected data when it makes a mistake, or even create new data (possibly based on old data: [Cubuk et al 2018](https://arxiv.org/abs/1805.09501#google \"AutoAugment: Learning Augmentation Policies from Data\")). This leads us to [active learning](!W \"Active learning (machine learning)\"): given possible additional datapoints (such as a large pool of unlabeled datapoints), the NN can ask for the datapoint which it will learn the most from ([Houlsby et al 2011](https://arxiv.org/abs/1112.5745 \"Bayesian Active Learning for Classification and Preference Learning\"), [Islam 2016](https://raw.githubusercontent.com/Riashat/Active-Learning-Bayesian-Convolutional-Neural-Networks/master/Presentations/Thesis/Islam%20Riashat%20MPhil%20MLSALT%20Thesis.pdf \"Active Learning for High Dimensional Inputs using Bayesian Convolutional Neural Networks\"), [Gal 2016](https://www.cs.ox.ac.uk/people/yarin.gal/website/thesis/thesis.pdf \"Uncertainty in Deep Learning\"), [Ling & Fidler 2017](https://arxiv.org/abs/1706.00130 \"Teaching Machines to Describe Images via Natural Language Feedback\"), [Christiano et al 2017](https://arxiv.org/abs/1706.03741#openai \"Deep reinforcement learning from human preferences\"), [Sener & Savarese 2017](https://arxiv.org/abs/1708.00489 \"A Geometric Approach to Active Learning for Convolutional Neural Networks\"), [Shim et al 2017](https://arxiv.org/abs/1709.05964 \"Why Pay More When You Can Pay Less: A Joint Learning Framework for Active Feature Acquisition and Classification\"), [Janisch et al 2017](https://arxiv.org/abs/1711.07364 \"Classification with Costly Features using Deep Reinforcement Learning\"), [Pang et al 2018](https://arxiv.org/abs/1806.04798 \"Meta-Learning Transferable Active Learning Policies by Deep Reinforcement Learning\")).\nOne could, for example, train a RL agent to query a search engine and select the most useful images/videos for learning a classification task (eg. 
YouTube: [Yeung et al 2017](https://arxiv.org/abs/1706.02884 \"Learning to Learn from Noisy Web Videos\")).\nWe can think of it as a little analogous to how kids[^Schmidhuber] ask parents not random questions, but ones they're most unsure about, with the most implications one way or another.\n[Settles 2010](https://burrsettles.com/pub/settles.activelearning.pdf \"Active Learning Literature Survey\") discusses the practical advantages to machine learning algorithms of careful choice of data points to learn from or 'label', and gives some of the known theoretical results on how large the benefits can be---on a toy problem, the sample count needed to reach an error rate ε decreasing from 𝒪(1⁄ε) to 𝒪(log(1⁄ε)), or in a Bayesian setting, from 𝒪(_d_⁄ε) to 𝒪(_d_ × log(1⁄ε)).[^interval-search]\nActive learning also connects back, from a machine learning perspective, to some of the statistical areas covering the benefits of adaptive/sequential trials---optimal experiments query the most uncertain aspects, from which the most can be learned.\n\n[^Schmidhuber]: [Kyunghyun Cho, 2015](https://kyunghyuncho.me/brief-summary-of-the-panel-discussion-at-dl-workshop-icml-2015/ \"Brief Summary of the Panel Discussion at DL Workshop @ICML 2015\"):\n\n > One question I remember came from Tieleman. He asked the panelists about their opinions on active learning/exploration as an option for efficient unsupervised learning. Schmidhuber and Murphy responded, and before I reveal their response, I really liked it. In short (or as much as I'm certain about my memory), active exploration will happen naturally as the consequence of rewarding better explanation of the world. Knowledge of the surrounding world and its accumulation should be rewarded, and to maximize this reward, an agent or an algorithm will active explore the surrounding area (even without supervision.) According to Murphy, this may reflect how babies learn so quickly without much supervising signal or even without much unsupervised signal (their way of active exploration compensates the lack of unsupervised examples by allowing a baby to collect high quality unsupervised examples.)\n[^interval-search]: Here is another toy problem to help visualize the advantage of agency/choice in active learning: some parameter _P_ is uniformly distributed 0–1; we would like to measure it, but can only measure whether a specific real number drawn from an interval is greater or less than _P_.\n\n Random sampling 0–1 will constrain the range, but extremely slowly, because after the first few samples, it is ever more unlikely that the next random sample will fall within the remaining interval of possible values for _P_: it must fall closer to _P_ than all _n_ samples before it in order to contain any information.\n An active learning approach, however, which chooses to sample a random point inside that interval, becomes essentially a binary search; and homes in so fast that it causes floating point issues in my toy implementation.\n\n A typical result is that after 100 samples, the random search will have an interval width\n of 0.0129678266 (1.2e-2) vs the active's floating-point minimum value of 5.55111512e-17, or\n ~14 orders of magnitude narrower. It would take a *very* long time for the random search to\n match that!\n\n Below we generate simulations of sequentially sampling _n_ = 100 points either randomly or\n actively, plotting the interval as it shrinks & points either fall in or out.\n\n
\n [Figure: Searching an interval for a point, random sampling efficiency vs active-learning sampling efficiency: active-learning a random point in the remaining possible interval in effect binary-searches, while random sampling becomes arbitrarily inefficient as it needs to sample a point closer than all _n_ prior points.]
\n\n ~~~{.R .collapse}\n guessStrategy <- function(d, random=TRUE) {\n if (random) { runif(1); } else { runif(1, min=max(d$LowerBound), max=min(d$UpperBound)); }\n }\n simulateSearch <- function(useRandomStrategy=TRUE, maxSamples=100, target=runif(1)) {\n df <- data.frame(N=seq(1,maxSamples), Guess=rep(0, maxSamples),\n LowerThan=logical(maxSamples),\n LowerBound=rep(0, maxSamples), UpperBound=rep(1, maxSamples))\n\n for (i in 1:maxSamples) {\n currentSample <- guessStrategy(df[df$N <= i,], random=useRandomStrategy)\n lower <- currentSample < target\n df[i,] <- list(N=i, guess=currentSample, LowerThan=lower,\n LowerBound={ if (lower && currentSample > max(df[df$N <= i,]$LowerBound)) { currentSample; }\n else { max(df[df$N <= i,]$LowerBound); } },\n UpperBound={ if (!lower && currentSample < min(df[df$N <= i,]$UpperBound)) { currentSample; }\n else { min(df[df$N <= i,]$UpperBound); } }\n )\n }\n df$IntervalWidth <- df$UpperBound - df$LowerBound\n df$Decreased <- head(c(1,df$IntervalWidth)<=1e-14 |\n (c(df$IntervalWidth,0) != c(0, df$IntervalWidth)), -1)\n\n return(df)\n }\n\n plotSearchResults <- function(df, typeXLabel=\"Sampled datapoint\") {\n return(qplot(df$Guess, 1:nrow(df)) +\n # show whole 0–1 range, to avoid misleading scale-zoom effects & keep animation 'fixed in place'\n coord_cartesian(xlim=c(0,1)) +\n # the true parameter we're trying to estimate:\n geom_vline(xintercept=currentTarget, color=\"black\") +\n # the narrowest interval at each iteration:\n geom_segment(aes(x=df$UpperBound, xend=df$LowerBound, y=1:nrow(df), yend=1:nrow(df), size=I(1.8))) +\n # whether our measurement at each iteration was useful to decrease the interval:\n geom_point(size=I(7), aes(shape=if(all(df$Decreased)){df$Decreased;}else{!(df$Decreased);})) +\n scale_shape_manual(values=c(19,1)) +\n # overall GUI settings for clean monochrome theme:\n ylab(\"Iteration\") + xlab(typeXLabel) +\n theme_bw(base_size=46) + theme(legend.position = \"none\")\n )\n }\n\n library(animation)\n library(ggplot2)\n library(gridExtra)\n saveGIF(for (i in 1:200){\n currentTarget <- runif(1)\n\n d1 <- simulateSearch(TRUE, target=currentTarget)\n p1 <- plotSearchResults(d1, \"Random sampling\")\n d2 <- simulateSearch(FALSE, target=currentTarget)\n p2 <- plotSearchResults(d2, \"Active-learning\")\n\n print(grid.arrange(p1, p2, ncol=2))\n },\n ani.width = 1200, ani.height=1200,\n movie.name = \"2022-07-25-orderstatistics-activelearningvsrandomsearch-200simulationruns.gif\")\n ~~~\n\n### Actions internal to NN design\n\n
\n> I suspect that less than 10 years from now, all of the DL training/architecture tricks that came from the arXiv firehose over 2015--2019 will have been entirely superseded by automated search techniques. The future: no alchemy, just clean APIs, and quite a bit of compute.\n>\n> [François Chollet](https://twitter.com/fchollet/status/1082347142830743552), 2019-01-07\n
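\nAs a toy illustration of that theme (a minimal sketch; the function names, toy objective, and parameter ranges below are invented for illustration, not drawn from any of the papers cited), even the crudest automated search just treats hyperparameters as more actions to optimize over, rather than knobs for a human to hand-tune:\n\n~~~{.R}\n## Hypothetical stand-in for an expensive NN training run: returns a noisy\n## 'validation loss' with an optimum near lr=0.01, layers=4, dropout=0.2.\ntrainAndScore <- function(lr, layers, dropout) {\n    (log10(lr) + 2)^2 + (layers - 4)^2 / 10 + (dropout - 0.2)^2 + rnorm(1, sd=0.05)\n}\n\n## Random search: sample n configurations, 'train' each, keep the best.\nrandomSearch <- function(n=100) {\n    configs <- data.frame(lr      = 10^runif(n, -5, 0),           # log-uniform learning rate\n                          layers  = sample(1:12, n, replace=TRUE),\n                          dropout = runif(n, 0, 0.9))\n    configs$loss <- mapply(trainAndScore, configs$lr, configs$layers, configs$dropout)\n    configs[order(configs$loss), ][1, ] # best configuration found\n}\n\nset.seed(2020)\nrandomSearch(100)\n~~~\n\nSwapping the random sampler for a Bayesian optimizer or an RL controller changes only how the next configuration is proposed; the point is that the 'grad student descent' step is itself just another decision problem.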
\n\nMoving on to more familiar territory, we have [hyperparameter optimization](!W) using random search or grid search or Bayesian [Gaussian processes](!W) to try training a possible NN, observe interim ([Swersky et al 2014](https://arxiv.org/abs/1406.3896 \"Freeze-Thaw Bayesian Optimization\")) and final performance, and look for better hyperparameters.\nBut if \"hyperparameters are parameters we don't know how to learn yet\", then we can see the rest of neural network architecture design as being hyperparameters too: what is the principled difference between setting a [dropout](!W \"Dropout (neural networks)\") rate and setting the number of NN layers?\nOr between setting a learning rate schedule and the width of NN layers or the number of convolutions or what kind of pooling operators are used?\nThere is none; they are all hyperparameters, just that usually we feel it is too difficult for hyperparameter optimization algorithms to handle many options and we limit them to a small set of key hyperparameters and use \"grad student descent\" to handle the rest of the design.\nSo... what if we used powerful algorithms (viz. neural networks) to design compiled code, neural activations, units like LSTMs, or entire architectures ([Zoph & Le 2016](https://arxiv.org/abs/1611.01578#google \"Neural architecture search with reinforcement learning\"), [Baker et al 2016](https://arxiv.org/abs/1611.02167 \"Designing Neural Network Architectures using Reinforcement Learning [MetaQNN]\"), [Chen et al 2016](https://arxiv.org/abs/1611.03824#deepmind \"Learning to learn without gradient descent by gradient descent\"), [Duan et al 2016](https://arxiv.org/abs/1611.02779#openai \"RL^2^: Fast Reinforcement Learning via Slow Reinforcement Learning\"), [Wang et al 2016](https://arxiv.org/abs/1611.05763#deepmind \"Learning to reinforcement learn\"), [Castronovo 2016](https://orbi.uliege.be/bitstream/2268/204410/1/ANN-BRL_final.pdf \"Approximate Bayes Optimal Policy Search using Neural Networks\"), [Ha et al 2016](https://arxiv.org/abs/1609.09106#google \"HyperNetworks\"), [Fernando et al 2017](https://arxiv.org/abs/1701.08734#deepmind \"PathNet: Evolution Channels Gradient Descent in Super Neural Networks\"), [Ravi & Larochelle 2017](https://openreview.net/forum?id=rJY0-Kcll#twitter \"Optimization as a Model for Few-Shot Learning\"), [Yoo et al 2017](https://arxiv.org/abs/1710.02277 \"Efficient K-Shot Learning with Regularized Deep Networks\"), [Negrinho & Gordon 2017](https://arxiv.org/abs/1704.08792 \"DeepArchitect: Automatically Designing and Training Deep Architectures\"), [Miikkulainen et al 2017](https://arxiv.org/abs/1703.00548 \"Evolving Deep Neural Networks\"), [Real et al 2017](https://arxiv.org/abs/1703.01041 \"Large-Scale Evolution of Image Classifiers\"), [Hu et al 2017](https://arxiv.org/abs/1704.05526 \"Learning to Reason: End-to-End Module Networks for Visual Question Answering\"), [Johnson et al 2017](https://arxiv.org/abs/1705.03633 \"Inferring and Executing Programs for Visual Reasoning\"), [Veniat & Denoyer 2017](https://arxiv.org/abs/1706.00046 \"Learning Time-Efficient Deep Architectures with Budgeted Super Networks\"), [Munkhdalai & Yu 2017](https://arxiv.org/abs/1703.00837 \"Meta Networks\"), [Cai et al 2017](https://arxiv.org/abs/1707.04873 \"Reinforcement Learning for Architecture Search by Network Transformation\"), [Zoph et al 2017](https://arxiv.org/abs/1707.07012#google \"Learning Transferable Architectures for Scalable Image Recognition\"), [Brock et al 
2017](https://arxiv.org/abs/1708.05344 \"SMASH: One-Shot Model Architecture Search through HyperNetworks\"), [Zhong et al 2017](https://arxiv.org/abs/1708.05552 \"Practical Network Blocks Design with Q-Learning\"), [Ashok et al 2017](https://arxiv.org/abs/1709.06030 \"N2N Learning: Network to Network Compression via Policy Gradient Reinforcement Learning\"), [Ebrahimi et al 2017](https://arxiv.org/abs/1710.05958 \"Gradient-free Policy Architecture Search and Adaptation\"), [Ramachandran et al 2017](https://arxiv.org/abs/1710.05941#google \"Searching for Activation Functions\"), [Anonymous 2017](https://arxiv.org/abs/1711.02846#google \"Intriguing Properties of Adversarial Examples\"), [Wistuba 2017](https://arxiv.org/abs/1712.07420 \"Finding Competitive Network Architectures Within a Day Using UCT\"), [Schrimpf et al 2017](https://arxiv.org/abs/1712.07316 \"A Flexible Approach to Automated RNN Architecture Generation\"), [Huang et al 2018](https://arxiv.org/abs/1801.07365 \"Learning to Prune Filters in Convolutional Neural Networks\"), [Real et al 2018](https://arxiv.org/abs/1802.01548 \"Regularized Evolution for Image Classifier Architecture Search\"), [Vasilache et al 2018](https://arxiv.org/abs/1802.04730 \"Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions\"), [Elsken et al 2018](https://arxiv.org/abs/1804.09081 \"Multi-objective Architecture Search for CNNs\"), [Chen et al 2018](https://arxiv.org/abs/1805.08166 \"Learning to Optimize Tensor Programs\"), [Zhou et al 2018](https://arxiv.org/abs/1806.07912 \"RENA: Resource-Efficient Neural Architect\"), [Zela et al 2018](https://arxiv.org/abs/1807.06906 \"Towards Automated Deep Learning: Efficient Joint Neural Architecture and Hyperparameter Search\"), [Tan et al 2018](https://arxiv.org/abs/1807.11626 \"MnasNet: Platform-Aware Neural Architecture Search for Mobile\"), [Chen et al 2018a](https://arxiv.org/abs/1809.04184 \"Searching for Efficient Multi-Scale Architectures for Dense Image Prediction\"), [Cheng et al 2018b](https://arxiv.org/abs/1808.09830 \"Searching Toward Pareto-Optimal Device-Aware Neural Architectures\"), [Anonymous 2018](https://openreview.net/forum?id=S1eBzhRqK7 \"Evolutionary-Neural Hybrid Agents for Architecture Search\"), [Cheng et al 2018c](https://arxiv.org/abs/1811.10201 \"InstaNAS: Instance-aware Neural Architecture Search\"), [Guo et al 2018](https://arxiv.org/abs/1812.05285 \"IRLAS: Inverse Reinforcement Learning for Architecture Search\"), [Cai et al 2018](https://arxiv.org/abs/1812.00332 \"ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware\"), [So et al 2019](https://arxiv.org/abs/1901.11117#google \"The Evolved Transformer\"), [Ghiasi et al 2019](https://arxiv.org/abs/1904.07392 \"NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection\"), [Tan & Le 2019](https://arxiv.org/abs/1905.11946#google \"EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks\"), [An et al 2019 ](https://arxiv.org/abs/1906.02470 \"StyleNAS: An Empirical Study of Neural Architecture Search to Uncover Surprisingly Fast End-to-End Universal Style Transfer Networks\"), [Gupta & Tan 2019](https://ai.googleblog.com/2019/08/efficientnet-edgetpu-creating.html \"EfficientNet-EdgeTPU: Creating Accelerator-Optimized Neural Networks with AutoML\"), [Piergiovanni et al 2018](https://arxiv.org/abs/1811.10636 \"Evolving Space-Time Neural Architectures for Videos\"))?\n\nThe logical extension of these \"neural networks all the way down\" papers is 
that an actor like Google/Baidu/Facebook/MS could effectively turn NNs into a black box: a user/developer uploads through an API a dataset of input/output pairs of a specified [type](!W \"Type system\") and a monetary loss function, and a top-level NN running on a large GPU cluster starts autonomously optimizing over architectures & hyperparameters for the NN design which balances GPU cost and the monetary loss, interleaved with further optimization over the thousands of previously submitted tasks, sharing its learning across all of the datasets/loss functions/architectures/hyperparameters, and the original user simply submits future data through the API for processing by the best NN so far.\n(Google and Facebook have already taken steps toward this using distributed hyperparameter optimization services which benefit from transfer learning across tasks; [Google Vizier](https://ai.google/research/pubs/pub46180 \"'Google Vizier: A Service for Black-Box Optimization', Golovin et al 2017\"), [FBLearner Flow](https://engineering.fb.com/2016/05/09/core-data/introducing-fblearner-flow-facebook-s-ai-backbone/ \"'Introducing FBLearner Flow: Facebook's AI backbone', Dunn 2016\").)\n\n### Actions external to the agent\n\nFinally, we come to actions in environments which aren't purely virtual.\nAdaptive experiments, multi-armed bandits, reinforcement learning etc. will outperform any purely supervised learning.\nFor example, [AlphaGo](!W) was first trained as a pure supervised-learning Tool AI, predicting next moves of human Go games in a [KGS](!W \"KGS Go Server\") dataset, but that was only a prelude to the self-play, which boosted it from professional player to superhuman level; aside from replacing loss functions (a classification loss like log loss vs victory), the AlphaGo NNs were able to explore tactics and positions that never appeared in the original human dataset.\nThe rewards can also help turn an unsupervised problem (what is the structure or label of each frame of a video game?) into more of a [semi-supervised](!W \"Semi-supervised learning\") problem by providing some sort of meaningful summary: the reward.\nA DQN Arcade Learning Environment (ALE) agent will, without any explicit image classification, learn to recognize & predict objects in a game which are relevant to achieving a high score.\n\n## Overall\n\nSo to put it concretely: CNNs with adaptive computations will be computationally faster for a given accuracy rate than fixed-iteration CNNs, CNNs with attention classify better than CNNs without attention, CNNs with focus over their entire dataset will learn better than CNNs which only get fed random images, CNNs which can ask for specific kinds of images do better than those merely querying their existing dataset, CNNs which can trawl through Google Images and locate the most informative one will do better still, CNNs which access rewards from their user about whether the result was useful will deliver more relevant results, CNNs whose hyperparameters are automatically optimized by an RL algorithm (and possibly trained directly by a NN) will perform better than CNNs with handwritten hyperparameters, CNNs whose architecture as well as standard hyperparameters are designed by RL agents will perform better than handwritten CNNs... 
and so on.\n(It's actions all the way down.)\n\nThe drawback to all this is the implementation difficulty is higher, the sample efficiency can be better or worse (individual parts will have greater sample-efficiency but data will be used up training the additional flexibility of other parts), and the computation requirements for training can be much higher; but the asymptotic performance is better, and the gap probably grows as GPUs & datasets get bigger and tasks get more difficult & valuable in the real world.\n\n# Why You Shouldn't Be A Tool\n\nWhy does treating all these levels as decision or reinforcement learning problems help so much?\n\nOne answer is that most points are not near any decision boundary, or are highly predictable and contribute little information.\nOptimizing explorations can often lead to prediction/classification/inference gains.\nThese points need not be computed extensively, nor trained on much, nor collected further.\nIf a particular combination of variables is already being predicted with high accuracy (perhaps because it's common), adding even an infinite number of additional samples will do little; one sample from an unsampled region far away from the previous samples may be dramatically informative.\nA model trained on purely supervised data collected from humans or experts may have huge gaping holes in its understanding, because most of its data will be collected from routine use and will not sample many regions of state-space, leading to well-known brittleness and bizarre extrapolations, caused by precisely the fact that the humans/experts avoid the dumbest & most catastrophic mistakes and those situations are not represented in the dataset at all!\n(Thus, a Tool AI might be 'safe' in the sense that it is not an agent, but very unsafe because it is dumb as soon as it goes outside of routine use.)\nSuch flaws in the discriminative model would be exposed quickly in any kind of real world or competitive setting or by RL training.^[An example here might be the use of 'ladders' or 'mirroring' in Go---models trained in a purely supervised fashion on a dataset of Go games can have serious difficulty responding to a ladder or mirror because those strategies are so bad that no human would play them in the dataset. Once the Tool AI has been forced 'off-policy', its predictions & inferences may become garbage because it's never seen anything like those states before; an agent will be better off because it'll have been forced into them by exploration or adversarial training and have learned the proper responses. This sort of bad behavior leads to quadratically increasing regret with passing time: [Ross & Bagnall 2010](https://proceedings.mlr.press/v9/ross10a/ross10a.pdf \"Efficient Reductions for Imitation Learning\").]\nYou need the *right* data, not more data.\n(\"39. Re graphics: A picture is worth 10K words---but only those to describe the picture. 
Hardly any sets of 10K words can be adequately described with pictures.")

Another answer is the "curse of dimensionality": in many environments, the tree of possible actions and subsequent rewards grows exponentially, so any sequence of actions over more than a few timesteps is increasingly unlikely to ever be sampled, and sparse rewards will be increasingly unlikely to be observed.
Even if an important trajectory is executed at random and a reward obtained, it will be equally unlikely to ever be executed again---whereas some sort of RL agent, whose beliefs affect its choice of actions, can sample the important trajectory repeatedly, and rapidly converge on an estimate of its high value and continue exploring more deeply.

A dataset of randomly generated sequences of robot arm movements intended to grip an object would likely include no rewards (successful grips) at all, because it requires a long sequence of finely calibrated arm movements; with no successes, how could the tool AI learn to manipulate an arm?
It must be able to make progress by testing its best arm movement sequence candidate, then learn from that and test the better arm movement, and so on, until it succeeds.
Without any rewards or ability to home in on good actions, only the initial states will be observed and progress will be extremely slow compared to an agent who can take actions and explore novel parts of the environment
(eg. the problem of [_Montezuma's Revenge_](!W "Montezuma’s Revenge (video game)") in the Atari Learning Environment: because of reward sparsity, an epsilon-greedy agent might as well not be an agent at all compared to some better method of exploring like density-estimation in [Bellemare et al 2016](https://arxiv.org/abs/1606.01868#deepmind "Unifying Count-Based Exploration and Intrinsic Motivation").)

Or imagine training a Go program by creating a large dataset of randomly generated Go boards, then evaluating each possible move's value by playing out a game between random agents from it; this would not work nearly as well as training on actual human-generated board positions which target the vanishingly small set of high-quality games & moves.
The exploration homes in on the exponentially shrinking optimal area of the game tree based on its current knowledge, discarding the enormous space of bad possible moves.
In contrast, a tool AI cannot lift itself up by its bootstraps. It merely gives its best guess on the static current dataset, and that's that. If you don't like the results, you can gather more data, but it probably won't help that much because you'll give it more of what it already has.

Hence, being a secret agent is much better than being a tool.

# See Also

\n- [Complexity no Bar to AI](/complexity \"Critics of AI risk suggest diminishing returns to computing means AI will be weak; I argue that this argument breaks if any premises rejected\"){.backlink-not}\n- [Candy Japan's new box A/B test](/candy-japan \"Bayesian decision-theoretic analysis of the effect of fancier packaging on subscription cancellations & optimal experiment design using adaptive/sequential designs for efficiency\"){.backlink-not}\n

# External Links

- Discussion:

 - [HN](https://news.ycombinator.com/item?id=13231808)
 - [Reddit](https://www.reddit.com/r/ControlProblem/comments/5jlkgi/why_tool_ais_want_to_be_agent_ais/)
- ["Mesa-optimization: Risks from Learned Optimization: Introduction"](https://www.alignmentforum.org/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction)
- ["On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models"](https://arxiv.org/abs/1511.09249#schmidhuber), Schmidhuber 2015; ["One Big Net for Everything"](https://arxiv.org/abs/1802.08864#schmidhuber "'One Big Net For Everything', Schmidhuber 2018"), Schmidhuber 2018
- [_Reinforcement Learning: An Introduction_](http://www.incompleteideas.net/book/the-book-2nd.html), Sutton & Barto
- [RL subreddit](https://www.reddit.com/r/reinforcementlearning/)
- ["Learning to Learn"](https://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/), Finn
- ["_Ist künstliche Motivation gefährlich?_" \["Is Artificial Motivation Dangerous?"\]](https://hci.iwr.uni-heidelberg.de/system/files/private/downloads/1848175122/schmitt_kunstliche-motivation-report.pdf), Schmitt 2017
- ["Military AI as a Convergent Goal of Self-Improving AI"](https://philarchive.org/archive/TURMAA-6v2), Turchin 2017
- ["Deep Reinforcement Learning Doesn't Work Yet"](https://www.alexirpan.com/2018/02/14/rl-hard.html), Alex Irpan
- ["The Ethics of Reward Shaping"](http://www.argmin.net/2018/04/16/ethical-rewards/), Ben Recht
- ["Google AI Chief Jeff Dean's ML System Architecture Blueprint"](/doc/reinforcement-learning/2018-07-26-synced-googleaichiefjeffdeansmlsystemarchitectureblueprint.html): Training/Batch Size/Sparsity and Embeddings/Quantization and Distillation/Networks with Soft Memory/Learning to Learn (L2L)
- ["Solving the Mystery of Link Imbalance: A Metastable Failure State at Scale"](https://engineering.fb.com/2014/11/14/production-engineering/solving-the-mystery-of-link-imbalance-a-metastable-failure-state-at-scale/), Bronson 2014
- ["Reflective Oracles: A Foundation for Classical Game Theory"](https://arxiv.org/abs/1508.04145), Fallenstein et al 2015
- ["Reframing Superintelligence: Comprehensive AI Services as General Intelligence"](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf), Drexler 2019 (argues that despite the benefits of agency & increasing integration of systems with RL techniques, narrow-domain tool AI will nevertheless win out economically)
- ["The Bitter Lesson"](http://www.incompleteideas.net/IncIdeas/BitterLesson.html) of AI Research: Compute Beats Clever (Rich Sutton)
- ["AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence"](https://arxiv.org/abs/1905.10985#uber), Clune 2019
- [End-to-end principle](/doc/cs/end-to-end-principle/index)
- ["There’s plenty of room at the Top: What will drive computer performance after Moore’s law?"](https://gwern.net/doc/cs/hardware/2020-leiserson.pdf), Leiserson et al 2020
- ["Automation as Colonization Wave"](/doc/economics/automation/index)
- ["Modeling the Human Trajectory"](https://www.openphilanthropy.org/research/modeling-the-human-trajectory/) ([paper](/doc/economics/automation/2020-roodman.pdf "Superexponential")), Roodman 2020
"Evolution as Backstop for Reinforcement Learning", "authors": ["Gwern Branwen"], "date_published": "2021-07-04", "text": "---\ntitle: Evolution as Backstop for Reinforcement Learning\ncreated: 2018-12-06\ndescription: \"Markets/evolution as backstops/ground truths for reinforcement learning/optimization: on some connections between Coase's theory of the firm/linear optimization/DRL/evolution/multicellular life/pain/Internet communities as multi-level optimization problems.\"\nstatus: finished\nprevious: /math-error\nnext: /simulation-inference\nmodified: 2021-07-04\nconfidence: possible\nimportance: 7\ncssExtension: drop-caps-kanzlei\n...\n\n

> One defense of free markets notes the inability of non-market mechanisms to solve planning & optimization problems. This argument has difficulty with Coase's paradox of the firm, and I note that the difficulty is increased by the fact that with improvements in computers, algorithms, and data, ever larger planning problems *are* solved. Expanding on some Cosma Shalizi comments, I suggest interpreting the phenomenon as a multi-level nested optimization paradigm: many systems can be usefully described as having two (or more) levels, where a slow sample-inefficient but ground-truth 'outer' loss such as death, bankruptcy, or reproductive fitness trains & constrains a fast sample-efficient but possibly misguided 'inner' loss which is used by learned mechanisms such as neural networks or linear programming (a group-selection perspective). So, one reason for free-market or evolutionary or Bayesian methods in general is that while poorer at planning/optimization in the short run, they have the advantage of simplicity and of operating on ground-truth values, and serve as a constraint on the more sophisticated non-market mechanisms. I illustrate by discussing corporations, multicellular life, reinforcement learning & meta-learning in AI, and pain in humans. This view suggests that there are inherent balances between market/non-market mechanisms which reflect the relative advantages between a slow unbiased method and faster but potentially arbitrarily biased methods.
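
To make the nested-optimization picture concrete before diving in, here is a minimal toy sketch (mine, not code from any system discussed in this essay; the losses, population size, and constants are all made up): an outer evolutionary loop scores candidate 'inner' losses only by a crude ground-truth outcome, while each inner learner rapidly optimizes its own proxy without ever observing the ground truth. Proxies survive only insofar as optimizing them happens to serve the outer loss.

```python
# Toy two-level optimization: a slow ground-truth 'outer' loss (evolutionary
# selection) shapes the fast but possibly misguided 'inner' loss that each
# agent actually optimizes. Names & numbers are illustrative only.
import random

def ground_truth_fitness(theta):
    """Outer loss: evaluated only once per 'lifetime' (death/bankruptcy/victory)."""
    return -(theta - 3.0) ** 2

def run_inner_learner(proxy_target, steps=100, lr=0.1):
    """Inner loop: quickly optimize behavior theta against the agent's own
    proxy loss (theta - proxy_target)^2, never seeing the ground truth."""
    theta = 0.0
    for _ in range(steps):
        grad = 2 * (theta - proxy_target)   # gradient of the proxy loss
        theta -= lr * grad
    return theta

population = [random.uniform(-10, 10) for _ in range(20)]   # candidate inner losses
for generation in range(30):
    scored = sorted(population,
                    key=lambda w: ground_truth_fitness(run_inner_learner(w)),
                    reverse=True)
    survivors = scored[:10]                                             # selection on ground truth
    population = survivors + [w + random.gauss(0, 0.5) for w in survivors]  # mutate the winners

print(round(sum(population) / len(population), 2))  # surviving proxy targets cluster near 3
```

Over a few dozen 'generations' the surviving proxy targets drift toward the ground-truth optimum even though no inner learner ever sees it: the same division of labor sketched above between a slow, unbiased outer loss and a fast but potentially misguided inner one.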

In [Coase's theory of the firm](!W "The Nature of the Firm"), a paradox is noted: idealized competitive markets are optimal for allocating resources and making decisions to reach efficient outcomes, but each market is made up of participants such as large multinational mega-corporations which are not internally made of markets and make their decisions by non-market mechanisms, even for things which could clearly be outsourced.
In an oft-quoted and amusing passage, [Herbert Simon](/doc/economics/1991-simon.pdf "'Organizations and Markets', Simon 1991") dramatizes the actual situation:

> Suppose that ["a mythical visitor from Mars"] approaches the Earth from space, equipped with a telescope that reveals social structures. The firms reveal themselves, say, as solid green areas with faint interior contours marking out divisions and departments. Market transactions show as red lines connecting firms, forming a network in the spaces between them. Within firms (and perhaps even between them) the approaching visitor also sees pale blue lines, the lines of authority connecting bosses with various levels of workers. As our visitor looked more carefully at the scene beneath, it might see one of the green masses divide, as a firm divested itself of one of its divisions. Or it might see one green object gobble up another. At this distance, the departing golden parachutes would probably not be visible. No matter whether our visitor approached the United States or the Soviet Union, urban China or the European Community, the greater part of the space below it would be within green areas, for almost all of the inhabitants would be employees, hence inside the firm boundaries. Organizations would be the dominant feature of the landscape. A message sent back home, describing the scene, would speak of "large green areas interconnected by red lines." It would not likely speak of "a network of red lines connecting green spots."...When our visitor came to know that the green masses were organizations and the red lines connecting them were market transactions, it might be surprised to hear the structure called a market economy. 
\"Wouldn't 'organizational economy' be the more appropriate term?\" it might ask.\n\nA free competitive market is a weighing machine, not a thinking machine; it weighs & compares proposed buys & sells made by participants, and reaches a clearing price.\nBut where, then, do the things being weighed *come from*?\nMarket participants are themselves not markets, and to appeal to the wisdom of the market is buck-passing; if markets 'elicit information' or 'incentivize performance', how is that information learned and expressed, and where do the actual actions which yield higher performance come from?\nAt some point, someone has to do some real thinking.\n(A company can outsource its janitors to the free market, but then whatever contractor is hired still has to decide exactly when and where and how to do the janitor-ing; safe to say, it does not hold an internal auction among its janitors to divide up responsibilities and set their schedules.)\n\nThe paradox is that free markets appear to depend on entities which are internally run as totalitarian command dictatorships.\nOne might wonder why there is such a thing as a firm, instead of everything being accomplished by exchanges among the most atomic unit (currently) possible, individual humans.\nCoase's suggestion is that it is a principal-agent problem: there's risk, negotiation costs, trade secrets, betrayal, and having a difference between the principal and agent at all can be too expensive & have too much overhead.\n\n# Asymptotics Ascendant\n\nAn alternative perspective comes from the [socialist calculation debate](!W \"Socialist calculation debate\"): why have a market at all, with all its waste and competition, if a central planner can [plan](!W \"Economic planning\") out optimal allocations and simply decree it?\n[Cosma Shalizi](https://crookedtimber.org/2012/05/30/in-soviet-union-optimization-problem-solves-you/ \"In Soviet Union, Optimization Problem Solves *You*\") in a review^[See also [SSC](https://slatestarcodex.com/2014/09/24/book-review-red-plenty/) & [Chris Said's](https://chris-said.io/2016/05/11/optimizing-things-in-the-ussr/) reviews.] 
of Spufford's [_Red Plenty_](https://www.amazon.com/Red-Plenty-Francis-Spufford/dp/1555976042) (which draws on _Planning Problems in the USSR: The Contribution of Mathematical Economics to their Solution 1960--1971_, ed Ellman 1973), discusses the history of [linear optimization algorithms](!W \"Linear programming\"), which were also developed in Soviet Russia under [Leonid Kantorovich](!W) and used for economics planning.\nOne irony (which Shalizi ascribes to [Stiglitz](!W \"Whither Socialism?\")) is that under the same theoretical conditions in which markets could lead to an optimal outcome, so too could a linear optimization algorithm.\nIn practice, of course, the Soviet economy couldn't possibly be run that way because it would require optimizing over millions or billions of variables, requiring unfathomable amounts of computing power.\n\n## Optimization Obtained\n\nAs it happens, we now have unfathomable amounts of computing power.\nWhat was once a [_modus tollens_](/modus \"'One Man’s Modus Ponens', Branwen 2012\") is now just a _modus ponens_.\n\nCorporations, and tech companies in particular as the leading edge, routinely solve planning problems for logistics like fleets of cars or datacenter optimization involving millions of variables; the similar SAT solvers are ubiquitous in computer security research for modeling large computer codebases to verify safety or discover vulnerabilities; most robots couldn't operate without constantly solving & optimizing enormous systems of equations.\nThe internal planned 'economies' of tech companies have grown kudzu-like, sprouting ever larger datasets to predict and automated analyses to plan and [market designs](!W \"Mechanism design\") to control.\nThe problems solved by retailers like Walmart or Target are world-sized.^[Amusingly, the front of _Red Plenty_ notes a [grant from Target to the publisher](https://crookedtimber.org/2012/05/30/in-soviet-union-optimization-problem-solves-you/#comment-415931). ]\n(['\"We are not setting the price. The market is setting the price\", he says. \"We have algorithms to determine what that market is.\"'](https://www.wired.com/2013/12/uber-surge-pricing/ \"Uber Boss Says Surging Prices Rescue People From the Snow\"))\nThe motto of a Google or Amazon or Uber might be (to paraphrase Freeman Dyson's paraphrase of John von Neumann in _[Infinite in All Directions](!W)_, 1988): \"All processes that are stable we shall plan. 
All processes that are unstable we shall compete in (for now).\"\nCompanies may use some limited internal 'markets' as useful metaphors for allocation, and dabble in [prediction markets](!W), but the internal dynamics of tech companies bear little resemblance to competitive free markets, and show little sign of moving in market-ward directions.\n\nThe march of planning also shows little sign of stopping.\nUber is not going to stop using historical [forecasts](https://web.archive.org/web/20220703030620/https://eng.uber.com/tag/forecasting/) of demand to move around drivers to meet expected demand and optimize trip trajectories; datacenters will not stop using linear solvers to allocate running jobs to machines in an optimal manner to minimize electricity consumption while balancing against latency and throughput, in search of a virtuous cycle culminating in the optimal route, [\"the perpetual trip, the trip that never ends\"](https://www.buzzfeednews.com/article/johanabhuiyan/uber-is-laying-the-groundwork-for-perpetual-rides-in-san-fra \"Perpetual Rides In San Francisco: A new Uber feature called Smart Routes encourages San Francisco riders to request UberPool rides along particular routes for maximum efficiency\"); 'markets' like smartphone walled gardens rely ever more each year on algorithms parsing human reviews & binaries & clicks to decide how to rank or push advertising and conduct multi-armed bandit exploration of options; and so on endlessly.\n\nSo, can we run an economy with scaled-up planning approaching 100% centralization, while increasing efficiency and even *outcompeting* free capitalism-style competitive markets, as [Cockshott & Cottrell](http://users.wfu.edu/cottrell/socialism_book/index.html \"_Towards A New Socialism_, 1993\") propose (a proposal occasionally revived in [pop socialism](https://thebaffler.com/latest/stick-to-the-plan-james \"Stick to the Plan: Reclaiming central planning from the clutches of corporations\") like _The People's Republic of Walmart: How the World's Biggest Corporations are Laying the Foundation for Socialism_)?\n\n# Systems\n\nLet's look at some more examples:\n\n#. corporations and growth\n#. humans, brains, and cells\n#. 
meta-learning in AI (particularly [RL](https://www.reddit.com/r/reinforcementlearning/search?q=flair%3AMeta-RL&sort=new&restrict_sr=on))\n\n## Artificial Persons\n\nThe striking thing about corporations improving is that they don't; [corporations don't evolve](https://www.lesswrong.com/posts/XC7Kry5q6CD9TyG4K/no-evolutions-for-corporations-or-nanodevices \"No Evolutions for Corporations or Nanodevices\") (see the [Price equation](!W) & [multi-level selection](!W), which can be applied to [many things](https://royalsocietypublishing.org/doi/10.1098/rstb.2019.0361 \"'Price’s equation made clear', Gardner 2020\")).\nThe business world would look completely different if they did!\nDespite [large persistent differences in efficiency](/doc/economics/2010-bloom.pdf \"'Why Do Management Practices Differ across Firms and Countries?', Bloom & Reenen 2010\") between corporations, the best [management practices](/note/competence#economics) or corporations don't simply 'clone' themselves and regularly take over arbitrary industries with their superior skills (and after reaching fixation, eventually succumbing to mutant offspring who have become even more efficient than them, and so on).\n\nWe can copy the best software algorithms, like AlphaZero, indefinitely and they will perform as well as the original, and we can tweak them in various ways to make them steadily better (and this is in fact how many algorithms are developed, by constant iteration); species can reproduce themselves, steadily evolving to ever better exploit their niches, not to mention the power of selective breeding programs; individual humans can refine teaching methods and transmit competence (calculus used to be reserved for the most skilled mathematicians, and now is taught to ordinary high school students, and chess grandmasters have become steadily younger with better & more intensive teaching methods like chess engines); we could even clone exceptional individuals to get more similarly talented individuals, if we really wanted to.\nBut we don't see this happen with corporations.\nInstead, despite desperate struggles to maintain \"corporate culture\", companies typically coast along, getting more and more sluggish, failing to spin off smaller companies as lean & mean as they used to be, until conditions change or random shocks or degradation finally do them in, such as perhaps some completely-unrelated company (sometimes founded by a complete outsider like a college student) eating their lunch.\n\nWhy do we not see exceptional corporations clone themselves and take over all market segments?\nWhy don't corporations evolve such that *all* corporations or businesses are now the hyper-efficient descendants of a single ur-corporation 50 years ago, all other corporations having gone extinct in bankruptcy or been acquired?\nWhy is it so hard for corporations to keep their \"culture\" intact and retain their youthful lean efficiency, or if avoiding 'aging' is impossible, why copy themselves or otherwise reproduce to create new corporations like themselves?\nInstead, successful large corporations coast on inertia or market failures like regulatory capture/monopoly, while successful small ones worry endlessly about how to preserve their 'culture' or how to 'stay hungry' or find a replacement for the founder as they grow, and there is constant turnover.\nThe large corporations function just well enough that maintaining their existence is an achievement[^Simon].\n\n[^Simon]: More Simon 1991:\n\n > Over a span of years, a large fraction of 
all economic activity has been gathered within the walls of large and steadily growing organizations. The green areas observed by our Martian have grown steadily. Ijiri and I have suggested that the growth of organizations may have only a little to do with efficiency (especially since, in most large-scale enterprises, economies and diseconomies of scale are quite small), but may be produced mainly by simple stochastic growth mechanisms (Ijiri and Simon, 1977).\n >\n > But if particular coordination mechanisms do not determine exactly where the boundaries between organizations and markets will lie, the existence and effectiveness of large organizations does depend on some adequate set of powerful coordinating mechanisms being available. These means of coordination in organizations, taken in combination with the motivational mechanisms discussed earlier, create possibilities for enhancing productivity and efficiency through the division of labor and specialization.\n >\n > In general, as specialization of tasks proceeds, the interdependency of the specialized parts increases. Hence a structure with effective mechanisms for coordination can carry specialization further than a structure lacking these mechanisms. It has sometimes been argued that specialization of work in modern industry proceeded quite independently of the rise of the factory system. This may have been true of the early phases of the industrial revolution, but would be hard to sustain in relation to contemporary factories. With the combination of authority relations, their motivational foundations, a repertory of coordinative mechanisms, and the division of labor, we arrive at the large hierarchical organizations that are so characteristic of modern life.\n\nEvolution & the Price equation requires 3 things: entities which can replicate themselves; variation of entities; and selection on entities.\nCorporations have variation, they have selection---but they don't have replication.\n\nCorporations certainly undergo selection for kinds of fitness, and do vary a lot.\nThe problem seems to be that corporations cannot replicate themselves.\nThey can set up new corporations, yes, but that's not necessarily replicating *themselves*---they cannot clone themselves the way a bacteria can.\nWhen a bacteria clones itself, it has... a clone, which is difficult to distinguish in any way from the 'original'. 
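
In Price-equation terms (quoting the standard decomposition for reference; the symbols below are not from this essay), the change in the population mean of a trait $z$ (say, managerial efficiency) across one 'generation' splits into a selection term and a transmission term:

$$\Delta \bar{z} \;=\; \underbrace{\frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}}}_{\text{selection}} \;+\; \underbrace{\frac{\operatorname{E}[\,w_i \, \Delta z_i\,]}{\bar{w}}}_{\text{transmission}}$$

where $w_i$ is entity $i$'s fitness (growth or survival) and $\Delta z_i$ is the change in the trait between 'parent' and 'offspring'. Corporations may enjoy a real selection term, but because spin-offs, acquisitions, and successors do not faithfully inherit $z$, the transmission term is large and unsystematic, and selection accumulates nothing.
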
In sexual organisms, children still resemble their parents to a great extent.\nBut when a large corporation spins off a division or starts a new one, the result may be nothing like the parent and completely lack any secret sauce.\nA new acquisition will retain its original character and efficiencies (if any).\nA corporation satisfies the Peter Principle by eventually growing to its level of incompetence, which is always much smaller than 'the entire economy'.\nCorporations are made of people, not interchangeable easily-copied widgets or strands of DNA.\nThere is no 'corporate DNA' which can be copied to create a new one just like the old.\nThe corporation may not even be able to 'replicate' itself over time, leading to scleroticism and aging---but this then leads to underperformance and eventually selection against it, one way or another.\nSo, an average corporation appears little more efficient, particularly if we exclude any gains from new technologies, than an average corporation 50 years ago, and the challenges and failures of the rare multinational corporation 500 years ago like the Medici bank look strikingly similar to challenges and failures of banks today.\n\nWe can see a similar problem with other large-scale human organizations: 'cultures'.\nAn idea seen sometimes is that cultures undergo selection & evolution, and as such, are made up of adaptive beliefs/practices/institutions, which no individual understands (such as farming practices optimally tailored to local conditions); even apparently highly irrational & wasteful traditional practices may actually be an adaptive evolved response, which is optimal in some sense we as yet do not appreciate (sometimes linked to \"Chesterton's fence\" as an argument for status quo-ism).\n\nThis is not a ridiculous position, since occasionally certain traditional practices have been vindicated by scientific investigation, but the lenses of multilevel selection as defined by the Price equation shows there are serious quantitative issues with this: cultures or groups are rarely driven extinct, with most large-scale ones persisting for millennia; such 'natural selection' on the group-level is only tenuously linked to the many thousands of distinct practices & beliefs that make up these cultures; and these cultures mutate rapidly as fads and visions and stories and neighboring cultures and new technologies all change over time (compare the consistency of folk magic/medicine over even small geographic regions, or in the same place over several centuries).\nFor most things, 'traditional culture' is simply flatout wrong and harmful and all forms are mutually contradictory, not verified by science, and contains no useful information, and---contrary to \"Chesterton's fence\"---the older and harder it is to find a rational basis for a practice, the less likely it is to be helpful:\n\n> [*Chesterton's meta-fence*](https://www.lesswrong.com/posts/WxDpeBi6aQAMH4BGB/stub-the-problem-with-chesterton-s-fence): \"in our current system (democratic market economies with large governments) the common practice of taking down Chesterton fences is a process which seems well established and has a decent track record, and should not be unduly interfered with (unless you fully understand it)\".\n\nThe existence of many erroneous practices, and the successful diffusion of erroneous ones, is acknowledged by proponents of cultural evolution like Heinrich (eg. 
[Heinrich provides several examples](https://www2.psych.ubc.ca/~henrich/Website/Papers/HenrichetalFiveMistake11.pdf#page=17 \"'Five Misunderstandings about Cultural Evolution', Henrich et al 2011\") which are comparable to [genetic drift](!W) spreading harmful mutations, and Primo Levi coined [\"the onion in the varnish\"](https://www.lesswrong.com/posts/fjoM4xwtGv7GTtZGi/chesterton-s-fence-vs-the-onion-in-the-varnish?commentId=Z8mkoATmXo7Kn8nJr) for such things), so the question here is one of emphasis or quantity: is the glass 1% full or 99% empty?\nIt's worth recalling the conditions for human expertise (Armstrong 2001, [_Principles of Forecasting_](/doc/statistics/prediction/2001-armstrong-principlesforecasting.pdf \"'Principles of Forecasting: A Handbook for Researchers and Practitioners', Armstrong 2001\"); [Tetlock](!W \"Philip E. Tetlock\") 2005, [_Expert Political Judgment: How Good Is It? How Can We Know?_](https://www.amazon.com/Expert-Political-Judgment-Good-Know/dp/0691128715); ed Ericsson 2006, [_The Cambridge Handbook of Expertise and Expert Performance_](https://www.amazon.com/Cambridge-Expertise-Performance-Handbooks-Psychology/dp/0521600812); [Kahneman & Klein 2009](https://www.chrissnijders.com/eth2012/CaseFiles2012/Kahneman,%20Klein%20-%202009%20-%20Conditions%20for%20intuitive%20expertise%20a%20failure%20to%20disagree.pdf \"Conditions for Intuitive Expertise: A Failure to Disagree\")): repeated practice with quick feedback on objective outcomes in unchanging environments; these conditions are satisfied for relatively few human activities, which are more often rare, with long-delayed feedback, left to quite subjective appraisals mixed in with enormous amounts of randomness & consequences of many other choices before/after, and subject to potentially rapid change (and the more so the more people are able to learn).\nIn such environments, people are more likely to fail to build expertise, be fooled by randomness, and construct elaborate yet erroneous theoretical edifices of superstition (like Tetlock's hedgehogs).\nEvolution is no fairy dust which can overcome these serious inferential problems, which are why reinforcement learning is so hard.^[In RL terms, evolution, like [Evolution Strategies](https://arxiv.org/abs/1703.03864#openai \"'Evolution Strategies as a Scalable Alternative to Reinforcement Learning', Salimans et al 2017\"), are a kind of [Monte Carlo method](http://incompleteideas.net/book/RLbook2018.pdf#page=133 \"Sutton & Barto 2018, Chapter 5: Monte Carlo methods\"). Monte Carlo methods require no knowledge or model of the environment, benefit from low bias, can handle even long-term consequences with ease, do not diverge or fail or are biased like approaches using bootstrapping (especially in the case of the \"deadly triad\"), is decentralized/embarrassingly parallel. A major downside, of course, is that they accomplish all this by being extremely high-variance/sample-inefficient (eg. 
Salimans et al 2017 is ~10x worse than competing DRL methods).]\n\nFor [something](https://scholars-stage.org/tradition-is-smarter-than-you-are/ \"Tradition is Smarter Than You Are\") [like farming](https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/ \"Book Review: _The Secret Of Our Success_, by Joseph Heinrich\"), with regular feedback, results which are enormously important to both individual and group survival, and relatively straightforward mechanistic cause-and-effect relationships, it is not surprising that practices tend to be *somewhat* optimized (although still far from optimal, as enormously increased yields in the Industrial Revolution demonstrate, in part by avoiding the errors of traditional agriculture & [using simple breeding techniques](/review/bakewell \"'Origins of Innovation: Bakewell & Breeding', Branwen 2018\"))^[And note the irony of the widely-cited corn [nixtamalization](!W) & [anti-cyanide cassava](!W \"Cassava#Potential toxicity\") examples of how farming encodes subtle wisdom due to group selection: in both cases, the groups that developed it in the Americas were, despite their superior local food processing, highly 'unfit' and suffered enormous population declines due to pandemic & conquest! You might object that those were exogenous factors, bad luck, due to things unrelated to their food processing... which is precisely the problem when selecting on groups.]; but none of that applies to 'traditional medicine', dealing as it does with complex self-selection, [regression to the mean](/note/regression \"'Regression To The Mean Fallacies', Branwen 2021\"), and placebo effects, where aside from the simplest cases like setting broken bones (again, straightforward, with cause-and-effect relationship), *hardly any of it works*[^traditional-herbal-remedies] and one is lucky if a traditional remedy is merely ineffective rather than outright poisonous, and in the hardest cases like snake bites, it would be better to wait for death at home than waste time going to the local witch doctor.\n\n[^traditional-herbal-remedies]: An example of the failure of traditional medicine is provided by the NCI anti-cancer plant screening program, run by [an enthusiast for medical folklore & ethnobotany](https://www.the-scientist.com/foundations/a-history-of-screening-for-natural-products-to-fight-cancer-31728 \"A History of Screening for Natural Products to Fight Cancer: In the middle of the 20th century, the National Cancer Institute began testing plant extracts for chemotherapeutic potential—helping to discover some drugs still in use today\") who specifically targeted plants based on a \"a massive literature search, including ancient Chinese, Egyptian, Greek, and Roman texts\". The screening program screened [\"some 12,000 to 13,000 species...over 114,000 extracts were tested for antitumor activity\"](https://hort.purdue.edu/newcrop/proceedings1996/V3-554.html \"'Drug Discovery and Development at the National Cancer Institute: Potential for New Pharmaceutical Crops', Cragg et al 1996\") (rates rising steeply afterwards), which yielded 3 drugs ever ([paclitaxel](!W)/Taxol/PTX, [irinotecan](!W), and [rubitecan](!W)), only one of which was all that important (Taxol). 
So, in a period with few useful anti-cancer drugs to compete against, large-scale screening of all the low-hanging fruit, targeting plants prized by traditional medical practices from throughout history & across the globe, had a success rate somewhere on the order of 0.007%.

 A recent example is the anti-malarial drug [artemisinin](!W), which earned its discoverer, [Tu Youyou](!W), a 2015 Nobel; she worked in a lab dedicated to traditional herbal medicine (Mao Zedong encouraged the construction of a 'traditional Chinese medicine' as a way to reduce medical expenses and conserve foreign currency). She discovered it in 1972, after screening several thousand traditional Chinese remedies. Artemisinin is important, and one might ask what else her lab discovered in the treasure trove of traditional Chinese medicine in the intervening 43 years; the answer, apparently, is 'nothing'.

 While Taxol and artemisinin may justify plant screening on a pure cost-benefit basis (such a hit rate does not appear much worse than other methods, although one should note that the profit-hungry pharmaceutical industry does not prioritize or invest much in '[bioprospecting](!W)'), the more important lesson here is about the accuracy of 'traditional medicine'. Traditional medicine affords an excellent test case for 'the wisdom of tradition': medicine has hard endpoints as it is literally a matter of life and death, is an issue during every individual's life at the individual level (rather than occasionally at the group level), effects can be extremely large (bordering on 'silver bullet' level) and tens of thousands or hundreds of thousands of years have passed for accumulation & selection. Given all of these favorable factors, can the wisdom of tradition still overcome the serious statistical difficulties and cognitive biases leading to false beliefs? Well, the best success stories of traditional medicine have accuracy rates like... <1%. ["Excellent herbs had our fathers of old"](https://www.bartleby.com/lit-hub/verse-1885-1918/our-fathers-of-old/ "'Our Fathers of Old', Rudyard Kipling 1922") indeed... So much for the 'wisdom of tradition'. The fact that some working drugs happen to also have been mentioned, sometimes, in some traditions, in some ways, along with hundreds of thousands of useless or harmful drugs which look just the same, is hardly any more of a testimonial to folk medicine as a source of truth than the observation that Heinrich Schliemann discovered a city *sort of* like Troy justifies treating the _Iliad_ or _Odyssey_ as accurate historical textbooks rather than 99% fictional literature. 
(Likewise other examples such as Australian Aboriginal myths preserving some traces of ancient geological events: they certainly do not show that the oral histories are reliable histories or we should just take them as fact.)\n\nSo---just like corporations---'selection' of cultures happens rarely with each 'generation' spanning centuries or millennia, typically has little to do with how reality-based their beliefs tend to be (for a selection coefficient approaching zero), and if one culture did in fact consume another one thanks to more useful beliefs about some herb, it is likely to backslide under the bombardment of memetic mutation (so any selection is spent just purging mutations, creating a mutation-selection balance); under such conditions, there will be little long-term 'evolution' towards higher optima, and the information content of culture will be minimal and closely constrained to only the most universal, high-fitness-impact, and memetically-robust aspects.\n\n## Natural Persons\n\n
\n> Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers. Natural selection cannot directly 'see' an individual organism in a specific situation and cause behavior to be adaptively tailored to the functional requirements imposed by that situation.\n>\n> Tooby & Cosmides 1992, [\"The Psychological Foundations of Culture\"](/doc/genetics/selection/natural/human/1995-tooby.pdf#page=24 \"‘The Psychological Foundations of Culture § pg24’, Tooby & Cosmides 1995 (page 24)\")\n
\n\n
\n> Good ideology. Wrong species.\n>\n> [E. O. Wilson](!W), of [Marxism](https://www.latimes.com/archives/la-xpm-1994-10-21-ls-53158-story.html \"'Natural Wonder: At heart, Edward Wilson`s an ant man. But it`s his theories on human behavior that stir up trouble', LA Times 1994\")\n

Contrast that with a human.
Although a human is ultimately designed by evolution, evolution plays no role at 'runtime'; more powerful learning algorithms take over.

With these more powerful algorithms designed by the meta-algorithm of evolution, a human is able to live successfully for over 100 years, with tremendous cooperation between the trillions of cells in their body, only rarely breaking down towards the end with a small handful of seed cancer cells defecting over a lifetime despite even more trillions of cell divisions and replacements.
They are also able to be cloned, yielding identical twins so similar across the board that people who know them may be unable to distinguish them.
And they don't need to use evolution or markets to develop these bodies, relying instead on a complex hardwired developmental program controlled by genes which ensures that >99% of humans get the two eyes, two lungs, two legs, two brain hemispheres etc that they need.
Perhaps the most striking efficiency gain from a human is the possession of a brain with the ability to predict the future, learn highly abstract models of the world, and plan and optimize over these plans for objectives which may relate only indirectly to fitness decades from now, or to fitness-related events which happen less than once in a lifetime & are usually unobserved, or to fitness events (like those of descendants) which can never be observed.

## RL

### Black Box vs White Box Optimization

Let's put it another way.

Imagine trying to run a business in which the only feedback given is whether you go bankrupt or not.
In running that business, you make millions or billions of decisions: to adopt a particular model, rent a particular store, advertise this or that, hire one person out of scores of applicants, assign them this or that task to make many decisions of their own (which may in turn require decisions to be made by still others), and so on, extended over many years.
At the end, you turn a healthy profit, or go bankrupt.
So you get 1 bit of feedback, which must be split over billions of decisions.
When a company goes bankrupt, what killed it? Hiring the wrong accountant? The CEO not investing enough in R&D? Random geopolitical events? New government regulations? Putting its HQ in the wrong city? Just a generalized inefficiency?
How would you know which decisions were good and which were bad? 
How do you solve the "credit assignment problem"?

Ideally, you would have some way of tracing every change in the financial health of a company back to the original decision & the algorithm which made that decision, but of course this is impossible since there is no way to know who said or did what or even who discussed what with whom when.
There would seem to be no general approach other than the truly brute force one of evolution: over many companies, have some act one way and some act another way, and on average, good decisions will cluster in the survivors and not-so-good decisions will cluster in the deceased.
'Learning' here works (under certain conditions---like sufficiently reliable replication---which in practice may not obtain) but is horrifically expensive & slow.
By the same logic, there may be no better way to pay executives than to tie it to stock performance: what a CEO does cannot be reduced to a few uncheatable numbers about how many widgets they carved a day or to any simple set of rules---CEOs exist to oversee everything *else* and decide things like strategy, and to set the culture from the top.
A bad CEO can destroy a highly-successful company, and a good one boost it further, while following all rules.
This will lead to absurdities like a CEO reaping rewards from things that "obviously" have nothing to do with them; but attempting to improve pay-for-performance methods leads to [Nobel Prize-winning complications](https://marginalrevolution.com/marginalrevolution/2016/10/performance-pay-nobel.html "'The Performance Pay Nobel', Tabarrok 2016").
No excuses, no justifications, no explanations of why it wasn't one's fault---just results.^["It is better to be lucky than good", one might say, because the good tend to be lucky, and the unlucky-but-good may only *seem* to be good. Particularly in [causally](/causality "'Why Correlation Usually ≠ Causation', Branwen 2014")-dense or [opaque](/everything "'Everything Is Correlated', Branwen 2014") or rapidly-changing environments, superstitiously imitating the lucky & discriminating against the unlucky---even if that bad luck appears due to clearly exogenous factors like a dice roll---may be a useful heuristic! (If you superstitiously follow what worked in the past, perhaps you don't [die of scurvy because you boiled away the vitamin C](https://antonhowes.substack.com/p/age-of-invention-plague-of-the-sea "Age of Invention: Plague of the Sea").) Nature believes in [strict liability](!W). ([People might too.](https://blog.jaibot.com/the-copenhagen-interpretation-of-ethics/))]
("First prize is a Cadillac. Anyone wanna see second prize? Second prize is a set of steak knives. Third prize is *you're fired*. Get the picture? 
You laughing now?\")\nLikewise, for teams, where agent effort can't be easily observed and useful actions are unknown, it may be [hard to do better](/doc/economics/2022-dai.pdf \"‘Robust Incentives for Teams’, Dai & Toikka 2022\") than [partnership](https://en.wikipedia.org/wiki/Partnership#Partner_compensation)-style [profit sharing](!W) among team members.\n\nIn RL, this would correspond to black box/gradient-free methods, particularly evolutionary methods.\nFor example, [Salimans et al 2017](https://arxiv.org/abs/1703.03864#openai \"Evolution Strategies as a Scalable Alternative to Reinforcement Learning\") uses an evolutionary method in which thousands of slightly-randomized neural networks play an Atari game simultaneously, and at the end of the games, a new average neural network is defined based on the performance of them all; no attempt is made to figure out which specific changes are good or bad or even to get a reliable estimate---they simply run and the scores are what they are.\nIf we imagine a schematic like 'models → model parameters → environments → decisions → outcomes', evolution collapses it to just 'models → outcomes'; feed a bunch of possible models in, get back outcomes, pick the models with best outcomes.\nBrutally inefficient, like evolution, but brutally effective eventually.\n\nA more sample-efficient method would be something like REINFORCE, which [Andrej Karpathy explains with an ALE Pong agent](https://karpathy.github.io/2016/05/31/rl/ \"Deep Reinforcement Learning: Pong from Pixels\"); what does REINFORCE do to crack the black box open a little bit?\nIt's still horrific and amazing that it works:\n\n> So here is how the training will work in detail. We will initialize the policy network with some _W1_, _W2_ and play 100 games of Pong (we call these policy \"rollouts\"). Lets assume that each game is made up of 200 frames so in total we've made 20,000 decisions for going `UP` or `DOWN` and for each one of these we know the parameter gradient, which tells us how we should change the parameters if we wanted to encourage that decision in that state in the future. All that remains now is to label every decision we've made as good or bad. For example suppose we won 12 games and lost 88. We'll take all 200 × 12 = 2400 decisions we made in the winning games and do a positive update (filling in a +1.0 in the gradient for the sampled action, doing backprop, and parameter update encouraging the actions we picked in all those states). And we'll take the other 200 × 88 = 17600 decisions we made in the losing games and do a negative update (discouraging whatever we did). And... that's it. The network will now become slightly more likely to repeat actions that worked, and slightly less likely to repeat actions that didn't work. Now we play another 100 games with our new, slightly improved policy and rinse and repeat.\n>\n>> *Policy Gradients*: Run a policy for a while. See what actions led to high rewards. Increase their probability.\n>\n> If you think through this process you'll start to find a few funny properties. For example what if we made a good action in frame 50 (bouncing the ball back correctly), but then missed the ball in frame 150? If every single action is now labeled as bad (because we lost), wouldn't that discourage the correct bounce on frame 50? You're right---it would. 
However, when you consider the process over thousands/millions of games, then doing the first bounce correctly makes you slightly more likely to win down the road, so on average you'll see more positive than negative updates for the correct bounce and your policy will end up doing the right thing.\n>\n> ...I did not tune the hyperparameters too much and ran the experiment on my (slow) Macbook, but after training for 3 nights I ended up with a policy that is slightly better than the AI player. The total number of episodes was approximately 8,000 so the algorithm played roughly 200,000 Pong games (quite a lot isn't it!) and made a total of ~800 updates.\n\nThe difference here from evolution is that the credit assignment is able to use backpropagation to reach into the NN and directly adjust their contribution to the decision which was 'good' or 'bad'; the difficulty of tracing out the consequences of each decision and labeling it 'good' is simply bypassed with the brute force approach of decreeing \"*all* actions taken in an ultimately-successful game are good\", and \"*all* actions are bad if the game is ultimately bad\".\nHere we optimize something more like 'model parameters → decisions → outcomes'; we feed parameters in to get out decisions which then are assumed to cause the outcome, and reverse it to pick the parameters with the best outcomes.\n\nThis is still crazy, but it works, and better than simple-minded evolution: Salimans et al 2017 compares their evolution method to more standard methods which are fancier versions of the REINFORCE policy gradient approach, and this brutally limited use of backpropagation for credit assignment still cuts the sample size by 3--10x, and more on more difficult problems.\n\nCan we do better? Of course.\nIt is absurd to claim that *all* actions in a game determine the outcome, since the environment itself is stochastic and many decisions are either irrelevant or were the opposite in true quality of whatever the outcome was.\nTo do better, we can connect the decisions to the environment by modeling the environment itself as a white box which can be cracked open & analyzed, using a model-based RL approach like the well-known [PILCO](/doc/reinforcement-learning/exploration/2011-deisenroth.pdf \"'PILCO: A Model-Based and Data-Efficient Approach to Policy Search', Deisenroth & Rasmussen 2011\").\n\nIn PILCO, a model of the environment is learned by a powerful model (the non-neural-network [Gaussian process](!W), in this case), and the model is used to do planning: start with a series of possible actions, run them through the model to predict what would happen, and directly optimize the actions to maximize the reward.\nThe influence of the parameters of the model causing the chosen actions, which then partially cause the environment, which then partially cause the reward, can all be traced from the final reward back to the original parameters.\n(It's white boxes all the way down.)\nHere the full 'models → model parameters → environments → decisions → outcomes' pipeline is expressed and the credit assignment is performed correctly & as a whole.\n\nThe result is state-of-the-art sample efficiency: in a simple problem like [Cartpole](https://en.wikipedia.org/wiki/Inverted_pendulum), PILCO can solve it within as little as 10 episodes, while standard deep reinforcement learning approaches like policy gradients can struggle to solve it within 10,000 episodes.\n\nThe problem, of course, with model-based RL such as PILCO is that what they gain in correctness & 
sample-efficiency, they give back in computational requirements: I can't compare PILCO's sample-efficiency with Salimans et al 2017's ALE sample-efficiency or even Karpathy's Pong sample-efficiency because PILCO simply can't be run on problems all that much more complex than Cartpole.\n\nSo we have a painful dilemma: sample-efficiency *can* be many orders of magnitude greater than possible with evolution, if only one could do more precise fine-grained credit assignment---instead of judging billions of decisions based solely on a single distant noisy binary outcome, the algorithm generating each decision can be traced through all of its ramifications through all subsequent decisions & outcomes to a final reward---but these better methods are not directly applicable.\nWhat to do?\n\n### Going Meta\n\n
\n> ...the spacing that has made for the most successful inductions will have tended to predominate through natural selection. Creatures inveterately wrong in their inductions have a pathetic but praiseworthy tendency to die before reproducing their kind....In induction nothing succeeds like success.\n>\n> [W. V. O. Quine](!W \"Willard Van Orman Quine\"), [\"Natural Kinds\"](/doc/philosophy/ontology/1969-quine.pdf) 1969\n
\n\nSpeaking of evolutionary algorithms & sample-efficiency, an interesting area of AI and reinforcement learning is [\"meta-learning\"](https://www.reddit.com/r/reinforcementlearning/search?q=flair%3AMeta-RL&sort=top&restrict_sr=on&t=all), usually described as \"learning to learn\" ([Botvinick et al 2019](https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613\\(19\\)30061-0#deepmind \"Reinforcement Learning, Fast and Slow\")).\nThis rewrites a given learning task as a two-level problem, where one seeks a meta-algorithm for a family of problems which then adapts at runtime to the specific problem at hand.\n(In evolutionary terms, this could be seen as related to a [Baldwin effect](!W).)\nThere are many paradigms in meta-learning using various kinds of learning & optimizers; for listing of several recent ones, see Table 1 of [Metz et al 2018](https://arxiv.org/abs/1804.00222#google \"Learning Unsupervised Learning Rules\") (reproduced in an appendix).\n\n[For example](https://www.deepmind.com/blog/prefrontal-cortex-meta-reinforcement-learning-system \"Prefrontal cortex as a meta-reinforcement learning system [blog]\"), one could train an RNN on a 'left or right' T-maze task where the direction with the reward switches at random every once in a while: the RNN has a memory, its hidden state, so after trying the left arm a few times and observing no reward, it can encode \"the reward has switched to the right\", and then decide to go right every time while continuing to encode how many failures it's had after the switch; when the reward then switches back to the left, after a few failures on the right, the learned rule will fire and it'll switch back to the left.\nWithout this sequential learning, if it was just trained on a bunch of samples, where half the 'lefts' have a reward and half the 'rights' also have a reward (because of the constant switching), it'll learn a bad strategy like picking a random choice 50-50, or always going left/right.\nAnother approach is 'fast weights', where a starting meta-NN observes a few datapoints from a new problem, and then emits the adjusted parameters for a *new* NN, specialized to the problem, which is then run exactly and receives a reward, so the meta-NN can learn to emit adjusted parameters which will achieve high reward on all problems.\nA version of this might be the MAML meta-learning algorithms ([Finn et al 2017](https://arxiv.org/abs/1703.03400 \"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks\")) where a meta-NN is learned which is carefully balanced between possible NNs so that a few finetuning steps of gradient descent training within a new problem 'specializes' it to that problem (one might think of the meta-NN as being a point in the high-dimensional model space which is roughly equidistant from a large number of NNs trained on each individual problem, where tweaking a few parameters controls overall behavior and only those need to be learned from the initial experiences).\nIn general, meta-learning enables learning of the superior Bayes-optimal agent *within* environments by inefficient (possibly not even Bayesian) training *across* environments ([Ortega et al 2019](https://arxiv.org/abs/1905.03030#deepmind \"Meta-learning of Sequential Strategies\")).\nAs [Duff 2002](https://www.gatsby.ucl.ac.uk/~yael/Okinawa/DuffThesis.pdf \"Optimal Learning: Computational Procedures for Bayes-Adaptive Markov Decision Processes\") puts it, \"One way of thinking about the computational procedures that I later propose is that 
they perform an offline computation of an online, adaptive machine. One may regard the process of approximating an optimal policy for the Markov decision process defined over hyper-states as 'compiling' an optimal learning strategy, which can then be 'loaded' into an agent.\"\n\nAn interesting example of this approach is the DeepMind paper [Jaderberg et al 2018](https://arxiv.org/abs/1807.01281#deepmind \"Human-level performance in first-person multiplayer games with population-based deep reinforcement learning\"), which presents a Quake team FPS agent trained using a two-level approach (and [Leibo et al 2018](https://arxiv.org/abs/1812.07019#deepmind \"Malthusian Reinforcement Learning\") which extends it further with multiple populations; for background, see [Sutton & Barto 2018](http://incompleteideas.net/book/RLbook2018.pdf#page=491 \"Designing Reward Signals\"); for an evolutionary manifesto, see [Leibo et al 2019](https://arxiv.org/abs/1903.00742#deepmind \"Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research\")), an approach which was valuable for their [AlphaStar _StarCraft II_](https://www.deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii \"AlphaStar: Mastering the Real-Time Strategy Game StarCraft II\") agent publicized in January 2019.\nThe FPS game is a multiplayer capture-the-flag match where teams compete on a map, rather than the agent controlling a single agent in a death-match setting; learning to coordinate, as well as explicitly communicate, with multiple copies of oneself is tricky and normal training methods don't work well because updates change all the other copies of oneself as well and destabilize any communication protocols which have been learned.\nWhat Jaderberg does is use normal deep RL techniques within each agent, predicting and receiving rewards within each game based on earning points for flags/attacks, but then the overall population of 30 agents, after each set of matches, undergoes a second level of selection based on final game score/victory, which then selects on the agent's internal reward prediction & hyperparameters\n\n> This can be seen as a two-tier reinforcement learning problem. The inner optimisation maximises _J~inner~_, the agents' expected future discounted internal rewards. The outer optimisation of _J~outer~_ can be viewed as a meta-game, in which the meta-reward of winning the match is maximised with respect to internal reward schemes _w~p~_ and hyperparameters _φ~p~_, with the inner optimisation providing the meta transition dynamics. We solve the inner optimisation with RL as previously described, and the outer optimisation with [Population Based Training (PBT) (29)](https://arxiv.org/abs/1711.09846#deepmind \"'Population based training of neural networks', Jaderberg et al 2017\"). PBT is an online evolutionary process which adapts internal rewards and hyperparameters and performs model selection by replacing under-performing agents with mutated versions of better agents. 
This joint optimisation of the agent policy using RL together with the optimisation of the RL procedure itself towards a high-level goal proves to be an effective and generally applicable strategy, and utilizes the potential of combining learning and evolution [(2)](/doc/reinforcement-learning/meta-learning/1992-ackley.pdf \"'Interactions Between Learning and Evolution', Ackley & Littman 1992\") in large scale learning systems.\n\nThe goal is to win, the ground-truth reward is the win/loss, but learning *only* from win/loss is extremely slow: a single bit (probably less) of information must be split over all actions taken by all agents in the game and used to train NNs with millions of interdependent parameters, in a particularly inefficient way as one cannot compute exact gradients from the win/loss back to the responsible neurons.\nWithin-game points are a much richer form of supervision, more numerous and corresponding to short time segments, allowing for much more learning within each game (possibly using exact gradients), but are only indirectly related to the final win/loss; an agent could rack up many points on its own while neglecting to fight the enemy or coordinate well and ensuring a final defeat, or it could learn a greedy team strategy which performs well initially but loses over the long run.\nSo the two-tier problem uses the slow 'outer' signal or loss function (winning) to sculpt the faster inner loss which does the bulk of the learning.\n(\"Organisms are adaptation-executors, not fitness-maximizers.\")\nShould the fast inner algorithms not be learning something useful or go haywire or fall for a trap, the outer rewards will eventually recover from the mistake, by mutating or abandoning them in favor of more successful lineages.\nThis combines the crude, slow, dogged optimization of evolution, with the much faster, more clever, but potentially misguided gradient-based optimization, to produce something which will reach the right goal faster.\n(Two more recent examples would be [surrogate](https://arxiv.org/abs/1806.10230 \"'Guided evolutionary strategies: escaping the curse of dimensionality in random search', Maheswaranathan et al 2018\")/[synthetic gradients](https://arxiv.org/abs/1608.05343#deepmind \"'Decoupled Neural Interfaces using Synthetic Gradients', Jaderberg et al 2016\").)\n\n### Two-Level Meta-Learning\n\nCosma Shalizi, elsewhere, [enjoys noting formal identities](http://bactra.org/weblog/601.html \"Bayes < Darwin-Wallace\") between natural selection and Bayesian statistics (especially [particle filtering](!W)) and markets, where the population frequency of an allele corresponds to a parameter's prior probability or starting wealth of a trader, and fitness differentials/profits correspond to updates based on new evidence, typically in the form of [a multiplication](https://jeremykun.com/2017/02/27/the-reasonable-effectiveness-of-the-multiplicative-weights-update-algorithm/ \"The Reasonable Effectiveness of the Multiplicative Weights Update Algorithm\").\n(See also [Evstigneev et al 2008](http://www.evstigneev.net/EF.pdf \"Evolutionary finance\")/[Lensberg & Schenk-Hoppé 2006](/doc/statistics/decision/2007-lensberg.pdf \"On the Evolution of Investment Strategies and the Kelly Rule - A Darwinian Approach\"), [Campbell 2016](https://arxiv.org/abs/1606.07937 \"Universal Darwinism as a process of Bayesian inference\"), [Czégel et al 2019](https://www.biorxiv.org/content/10.1101/685842.full \"Evolutionary implementation of Bayesian computations\"); on a historical 
note, [Galton invented something](/doc/statistics/bayes/2010-stigler-2.pdf#page=4 "'Darwin, Galton, and the Statistical Enlightenment: Francis Galton, February 9th, 1877', Stigler 2010") like [ABC](!W "Approximate Bayesian computation") while trying to model evolution.)
While a parameter may start with an erroneously low prior, at some point the updates will make the posterior converge on it.
(The relationship between populations of individuals with noisy fixed beliefs, and [Thompson sampling](!W), is also interesting: [Krafft 2017](https://people.csail.mit.edu/pkrafft/papers/krafft-thesis-final.pdf "A Rational Choice Framework for Collective Behavior").
Can we see the apparently-inefficient stream of startups trying 'failed' ideas---and occasionally winding up winning big---as [a kind of collective Thompson sampling](/timing#try-try-again-but-less-less) & more efficient than it seems?)
And [stochastic gradient descent](!W) can be seen as *secretly* an approximation or [variational](!W "Variational Bayesian methods") form of Bayesian updates by [estimating its](https://www.nature.com/articles/s41467-021-26568-2 "‘Correspondence between neuroevolution and gradient descent’, Whitelam et al 2021") [gradients](https://www.lesswrong.com/posts/5XbBm6gkuSdMJy9DT/conditions-for-mathematical-equivalence-of-stochastic "Conditions for mathematical equivalence of Stochastic Gradient Descent and Natural Selection") ([because everything that works works because it's Bayesian?](https://www.inference.vc/everything-that-works-works-because-its-bayesian-2/ "Everything that Works Works Because it's Bayesian: Why Deep Nets Generalize?")) and of course evolutionary methods can be seen as calculating [finite difference](!W) approximations to gradients...

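To make that last identification concrete, here is a minimal Python sketch (an illustration of the general idea only, not code from any of the papers cited above) comparing the exact gradient of a toy loss with the antithetic finite-difference estimate that evolutionary perturb-and-score methods implicitly compute:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    """Toy quadratic loss with known gradient 2 * (w - 3)."""
    return (w - 3.0) ** 2

def exact_grad(w):
    return 2.0 * (w - 3.0)

def evolutionary_grad(w, sigma=0.1, n=1000):
    """Perturb, score, and average: an antithetic finite-difference
    approximation of the gradient, requiring only loss evaluations."""
    eps = rng.normal(size=n)
    return np.mean((loss(w + sigma * eps) - loss(w - sigma * eps)) / (2 * sigma) * eps)

w = 0.0
print(exact_grad(w))         # -6.0
print(evolutionary_grad(w))  # roughly -6.0: noisier, but computed without backprop
```

The same perturb-and-score estimator extends (noisily) to full NN weight vectors, which is one way of reading the claim that evolutionary methods are finite-difference gradient approximators.
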
| Model | Parameter | Prior | Update |
|-------|-----------|-------|--------|
| Evolution | Allele | Population Frequency | Fitness Differential |
| Market | Trader | Starting Wealth | Profit |
| Particle Filtering | Particle | Population Frequency | Accept/Reject Sample |
| SGD | Parameter | Random Initialization | Gradient Step |

Table: Analogies between different optimization/inference models.

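The first three rows of the table are, arithmetically, one and the same operation; a minimal Python sketch (a toy illustration of the analogy, not code from any cited source) in which Bayes' rule and replicator dynamics are both just "multiply by likelihood/fitness, then renormalize":

```python
import numpy as np

def multiplicative_update(weights, scores):
    """Shared step of Bayes' rule, replicator dynamics, and multiplicative weights:
    multiply each hypothesis/allele/trader's weight by its likelihood/fitness/payoff,
    then renormalize back into a distribution."""
    w = np.asarray(weights, dtype=float) * np.asarray(scores, dtype=float)
    return w / w.sum()

# Read as Bayes: a prior over three hypotheses times the likelihood of the observed data.
print(multiplicative_update([0.5, 0.3, 0.2], [0.1, 0.4, 0.9]))  # posterior

# Read as evolution: allele frequencies times relative fitnesses give the next generation.
print(multiplicative_update([0.5, 0.3, 0.2], [1.0, 1.2, 1.5]))

# Iterated, the update concentrates mass on the fittest/most likely entry,
# however erroneously low its starting frequency/prior.
```

Roughly speaking, the particle-filtering and market rows are the same multiplication, with resampling or wealth reallocation playing the renormalizing role.
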
\n\nThis pattern surfaces in our other examples too.\nThis two-level learning is analogous to meta-learning: the outer or meta-algorithm learns how to generate an inner or object-level algorithm which can learn most effectively, better than the meta-algorithm.\nInner algorithms themselves can learn better algorithms, and so on, gaining power, compute-efficiency, or sample-efficiency, with every level of specialization.\n(\"It's optimizers all the way up, young man!\")\nIt's also analogous to cells in a human body: overall reproductive fitness is a slow signal that occurs only a few times in a lifetime at most, but over many generations, it builds up fast-reacting developmental and homeostatic processes which can build an efficient and capable body and respond to environmental fluctuations within minutes rather than millennia, and the brain is still superior with split-second situations.\nIt's also analogous to corporations in a market: the corporation can use whatever internal algorithms it pleases, such as linear optimization or neural networks, and evaluate them internally using internal metrics like \"number of daily users\"; but eventually, this must result in profits...\n\nThe central problem a corporation solves is how to motivate, organize, punish & reward its sub-units and constituent humans in the absence of direct end-to-end losses *without* the use of slow external market mechanisms.\nThis is done by tapping into social mechanisms like peer esteem (soldiers don't fight for their country, they fight for their buddies), selecting workers who are intrinsically motivated to work usefully rather than parasitically, constant attempts to instill a \"company culture\" with [sloganeering](https://www.eugenewei.com/blog/2017/5/11/jpeg-your-ideas \"Compress to impress: Jeff Bezos and Amazon culture\") or handbooks or company songs, use of multiple proxy measures for rewards to reduce [Goodhart](https://en.wikipedia.org/wiki/Goodhart%27s_law)-style reward hacking, ad hoc mechanisms like stock options to try to internalize within workers the market losses, replacing workers with outsourcing or automation, acquiring smaller companies which have not yet decayed internally or as a selection mechanism (\"acquihires\"), employing intellectual property or regulation...\nAll of these techniques together can align the parts into something useful to eventually sell...\n\n# Man Proposes, God Disposes\n\n...Or else the company will eventually go bankrupt:\n\n> [Great is Bankruptcy]{.smallcaps}: the great bottomless gulf into which all Falsehoods, public and private, do sink, disappearing; whither, from the first origin of them, they were all doomed. For Nature is true and not a lie. No lie you can speak or act but it will come, after longer or shorter circulation, like a Bill drawn on Nature's Reality, and be presented there for payment,---with the answer, No effects. Pity only that it often had so long a circulation: that the original forger were so seldom he who bore the final smart of it! Lies, and the burden of evil they bring, are passed on; shifted from back to back, and from rank to rank; and so land ultimately on the dumb lowest rank, who with spade and mattock, with sore heart and empty wallet, daily come in contact with reality, and can pass the cheat no further.\n>\n> ...But with a Fortunatus' Purse in his pocket, through what length of time might not almost any Falsehood last! 
Your Society, your Household, practical or spiritual Arrangement, is untrue, unjust, offensive to the eye of God and man. Nevertheless its hearth is warm, its larder well replenished: the innumerable Swiss of Heaven, with a kind of Natural loyalty, gather round it; will prove, by pamphleteering, musketeering, that it is a truth; or if not an unmixed (unearthly, impossible) Truth, then better, a wholesomely attempered one, (as wind is to the shorn lamb), and works well. Changed outlook, however, when purse and larder grow empty! Was your Arrangement so true, so accordant to Nature's ways, then how, in the name of wonder, has Nature, with her infinite bounty, come to leave it famishing there? To all men, to all women and all children, it is now indubitable that your Arrangement was false. Honour to Bankruptcy; ever righteous on the great scale, though in detail it is so cruel! Under all Falsehoods it works, unweariedly mining. No Falsehood, did it rise heaven-high and cover the world, but Bankruptcy, one day, will sweep it down, and make us free of it.^[[_The French Revolution: A History_](https://www.gutenberg.org/files/1301/1301-h/1301-h.htm#link2HCH0013), by [Thomas Carlyle](!W).]\n\nA large corporation like Sears may take decades to die (\"There is a great deal of ruin in a nation\", Adam Smith observed), but die it does.\nCorporations do not increase in performance rapidly and consistently the way selective breeding or AI algorithms do because they cannot *replicate* themselves as exactly as digital neural networks or biological cells can, but, nevertheless, they are still part of a two-tier process where a ground-truth uncheatable outer loss constrains the internal dynamics to some degree and maintain a baseline or perhaps modest improvement over time.\nThe plan is \"checked\", as [Trotsky puts it](https://www.marxists.org/archive/trotsky/1932/10/sovecon.htm \"'The Soviet Economy in Danger', Trotsky 1932\") in criticizing Stalin's policies like abandoning the [NEP](!W \"New Economic Policy\"), by supply and demand:\n\n> If an universal mind existed, of the kind that projected itself into the scientific fancy of Laplace---a mind that could register simultaneously all the processes of nature and society, that could measure the dynamics of their motion, that could forecast the results of their inter-reactions---such a mind, of course, could a priori draw up a faultless and exhaustive economic plan, beginning with the number of acres of wheat down to the last button for a vest. The bureaucracy often imagines that just such a mind is at its disposal; that is why it so easily frees itself from the control of the market and of Soviet democracy. But, in reality, the bureaucracy errs frightfully in its estimate of its spiritual resources.\n>\n> ...The innumerable living participants in the economy, state and private, collective and individual, must serve notice of their needs and of their relative strength not only through the statistical determinations of plan commissions but by the direct pressure of supply and demand. 
The plan is checked and, to a considerable degree, realized through the market.\n\n# \"Pain Is the Only School-Teacher\"\n\nPain is a curious thing.\nWhy do we have painful pain instead of just a more neutral painless pain, when it can backfire so easily as chronic pain, among other problems?\nWhy do we have pain at all instead of regular learning processes or experiencing rewards as we follow plans?\n\nCan we understand pain as another two-level learning process, where a slow but ground-truth outer loss [\"central governor\"](!W \"Central governor\") constrains a fast but unreliable inner loss?\nI would suggest that pain itself is not an outer loss, but the painfulness of pain, its intrusive motivational aspects, is what makes it an outer loss.\nThere is no logical necessity for pain to be pain but this would not be adaptive or practical because it would too easily let the inner loss lead to damaging behavior.\n\n## Taxonomy of Pain\n\nSo let's consider the possibilities when it comes to pain. There isn't just \"pain\".\nThere is (at the least):\n\n- useless painful pain (chronic pain, exercise)\n- useful painful pain (the normal sort)\n- useless nonpainful nonpain (dead nerves in diabetes or [leprosy](!W)[^Brand-leprosy] or [congenital pain insensitivity](!W)[^Brand-Tanya][^Dearborn][^Melzack][^Gabby-Gingras][^HN-remote] ^[See [\"The Hazards of Growing Up Painlessly\"](https://www.nytimes.com/2012/11/18/magazine/ashlyn-blocker-feels-no-pain.html) for a particularly recent example.]; [bed sores](!W \"Pressure ulcer\") and [RSI](!W \"Repetitive strain injury\") are everyday versions demonstrating that even the most harmless activities like 'lying on a bed' are in fact constantly causing damage)\n- useful nonpainful nonpain (adrenaline rushes during combat)\n- useless nonpainful pain ([pain asymbolia](!W) where they maim & kill themselves, possibly also [Lesch-Nyhan syndrome](!W \"Lesch-Nyhan syndrome#Self-injuring behavior\"));\n- and intermediate cases: like the [Marsili family](https://www.smithsonianmag.com/science-nature/family-feels-almost-no-pain-180971915/ \"The Family That Feels Almost No Pain: An Italian clan's curious insensitivity to pain has piqued the interest of geneticists seeking a new understanding of how to treat physical suffering\") who have a genetic mutation ([Habib et al 2018](https://academic.oup.com/brain/article/141/2/365/4725107 \"A novel human pain insensitivity disorder caused by a point mutation in ZFHX2\")) which partially damages pain perception. The Marsilis *do* feel useful painful pain but only briefly, and incur substantial bodily damage (broken bones, scars) but avoid the most horrific anecdotes of those with deadened nerves or pain asymbolia.\n\n Another interesting case is the [Scotswoman Jo Cameron](https://www.newyorker.com/magazine/2020/01/13/a-world-without-pain \"A World Without Pain: Does hurting make us human?\"), who has a different set of mutations to her endocannabinoid system (FAAH & FAAH-OUT): while not as bad as neuropathy, she still exhibits similar symptoms---her father who may also have been a carrier died peculiarly, she regularly burns or cuts herself in household chores, she broke her arm roller-skating as a child but didn't seek treatment, delayed treatment of a damaged hip and then a hand damaged by arthritis until almost too late[^microdeletion], took in foster children who stole her savings, etc. 
(Biologist Matthew Hill describes the most common FAAH mutation as causing \"low levels of anxiety, forgetfulness, a happy-go-lucky demeanor\", and \"Since the paper was published, Matthew Hill has heard from half a dozen people with pain insensitivity, and he told me that many of them seemed nuts\" compared to Jo Cameron.) A possible case is the absurdly prolific fantasy writer [Brandon Sanderson](!W), who routinely puts in 8-hour days of typing (on a couch, to boot), and who turns out to be [almost entirely insensitive to pain](https://www.wired.com/story/brandon-sanderson-is-your-god/ \"‘Brandon Sanderson Is Your God: He’s the biggest fantasy writer in the world. He’s also very Mormon. These things are profoundly related’, Kehe 2023\").\n- but---is there 'useful painless pain' or 'useless painful nonpain'?\n\nIt turns out there is 'painless pain': [lobotomized](!W \"Lobotomy\") people experience that, and \"reactive dissociation\" is the phrase used to describe the effects sometimes of analgesics like morphine when administered after pain has begun, and the patient reports, to quote [Dennett 1978](/doc/philosophy/mind/1978-dennett.pdf \"Why You Can't Make A Computer That Feels Pain\") (emphasis in original), that \"After receiving the analgesic subjects commonly report not that the pain has disappeared or diminished (as with aspirin) but that the pain *is as intense as ever* though they no longer *mind* it...if it is administered *before* the onset of pain...the subjects claim to not feel any pain subsequently (though they are not *numb* or anesthetized---they have sensation in the relevant parts of their bodies); while if the morphine is administered *after* the pain has commenced, the subjects report that the pain continues (and continues to be *pain*), though they no longer mind it...Lobotomized subjects similarly report feeling intense pain but not minding it, and in other ways the manifestations of lobotomy and morphine are similar enough to lead some researchers to describe the action of morphine (and some barbiturates) as 'reversible pharmacological leucotomy [lobotomy]'.^23^\"^[Brand's _Pain: The Gift No One Wants_ (pg209--211) describes meeting an Indian woman whose pain was cured by a lobotomy (designed to sever as little of the prefrontal cortex as possible), who described it in almost exactly the same term as Dennett's paraphrase: \"When I inquired about the pain, she said, 'Oh, yes, it's still there. I just don't worry about it anymore.' She smiled sweetly and chuckled to herself. 'In fact, it's still agonizing. But I don't mind.'\" (Dennett elsewhere draws a connection between 'not minding' and Zen Buddhism.) 
See also [Barber 1959](/doc/psychology/1959-barber.pdf \"Toward a theory of pain: relief of chronic pain by prefrontal leucotomy, opiates, placebos, and hypnosis\").]\n\nAnd we can find examples of what appears to be 'painful nonpain': [Grahek 2001](http://oops.uni-oldenburg.de/624/13/grafee01.pdf \"Feeling Pain and Being in Pain\") highlights a case-study, [Ploner et al 1999](https://painlabmunich.webnode.page/_files/200000024-c89b3cb498/Ploner%20pain%20affect%20without%20pain%20sensation%20in%20a%20patient%20with%20a%20postcentral%20lesion%20PAIN%201999.pdf \"Pain Affect Without Pain Sensation in a Patient With a Postcentral Lesion\"), where the German patient's somatosensory cortices suffered a lesion from a stroke, leading to an inability to feel heat normally on one side of his body or feel any spots of heat or pain from heat; despite this, when sufficient heat was applied to a single spot on the arm, the patient became increasingly agitated, describing an \"clearly unpleasant\" feeling associated with his whole arm, but also denied any description of it involving crawling skin sensations or words like \"slight pain\" or \"burning\".\n\n[^Brand-Tanya]: An example quote from Brand & Yancey's 1993 [_Pain: The Gift No One Wants_](/doc/psychology/1993-brand-painthegiftnobodywants.pdf \"'Pain: The Gift Nobody Wants', Brand & Yancey 1993\") about congenital pain insensitivity:\n\n > When I unwrapped the last bandage, I found grossly infected ulcers on the soles of both feet. Ever so gently I probed the wounds, glancing at Tanya's face for some reaction. She showed none. The probe pushed easily through soft, necrotic tissue, and I could even see the white gleam of bare bone. Still no reaction from Tanya.\n >\n > ...her mother told me Tanya's story...\"A few minutes later I went into Tanya's room and found her sitting on the floor of the playpen, fingerpainting red swirls on the white plastic sheet. I didn't grasp the situation at first, but when I got closer I screamed. It was horrible. The tip of Tanya's finger was mangled and bleeding, and it was her own blood she was using to make those designs on the sheets. I yelled, 'Tanya, what happened!' She grinned at me, and that's when I saw the streaks of blood on her teeth. She had bitten off the tip of her finger and was playing in the blood.\"\n >\n > ...The toddler laughed at spankings and other physical threats, and indeed seemed immune to all punishment. To get her way she merely had to lift a finger to her teeth and pretend to bite, and her parents capitulated at once. The parents' horror turned to despair as wounds mysteriously appeared on one of Tanya's fingers after another...I asked about the foot injuries. \"They began as soon as she learned to walk,\" the mother replied. \"She'd step on a nail or thumbtack and not bother to pull it out. Now I check her feet at the end of every day, and often I discover a new wound or open sore. If she twists an ankle, she doesn't limp, and so it twists again and again. An orthopedic specialist told me she's permanently damaged the joint. If we wrap her feet for protection, sometimes in a fit of anger she'll tear off the bandages. 
Once she ripped open plaster cast with her bare fingers.\"\n >\n > ...Tanya suffered from a rare genetic defect known informally as \"congenital indifference to pain\"...Nerves in her hands and feet transmitted messages---she felt a kind of tingling when she burned herself or bit a finger---but these carried no hint of unpleasantness...She rather enjoyed the tingling sensations, especially when they produced such dramatic reactions in others...Tanya, now 11, was living a pathetic existence in an institution. She had lost both legs to amputation: she had refused to wear proper shoes and that, coupled with her failure to limp or shift weight when standing (because she felt no discomfort), had eventually put intolerable pressure on her joints. Tanya had also lost most of her fingers. Her elbows were constantly dislocated. She suffered the effects of chronic sepsis from ulcers on her hands and amputation stumps. Her tongue was lacerated and badly scarred from her nervous habit of chewing it.\n[^Brand-leprosy]: Brand also notes of a leprosy patient whose nerves had been deadened by it:\n\n > As I watched, this man tucked his crutches under his arm and began to run on both feet with a very lopsided gait....He ended up near the head of the line, where he stood panting, leaning on his crutches, wearing a smile of triumph...By running on an already dislocated ankle, he had put far too much force on the end of his leg bone and the skin had broken under the stress...I knelt beside him and found that small stones and twigs had jammed through the end of the bone into the marrow cavity. I had no choice but to amputate the leg below the knee.\n >\n > These two scenes have long haunted me.\n[^Dearborn]: One of the first known cases was described in [Dearborn 1932](/doc/psychology/1932-dearborn.pdf \"A case of congenital general pure analgesia\"), of a man with a remarkable career of injuries as a child ranging from being hoisted by a pick-axe to a hatchet getting stuck in his head to shooting himself in the index finger, culminating in a multi-year career as the \"Human Pincushion\".\n[^Melzack]: [_The Challenge of Pain_](/doc/psychology/1996-melzack-thechallengeofpain.pdf \"'The Challenge of Pain (Updated Second Edition)', Melzack & Wall 1996\"), Melzack & Wall 1996, describes another case (as quoted in [_Feeling Pain and Being in Pain_](http://oops.uni-oldenburg.de/624/13/grafee01.pdf \"'Feeling Pain and Being in Pain', Grahek 2001\"), Grahek 2001):\n\n > As a child, she had bitten off the tip of her tongue while chewing food, and has suffered third-degree burns after kneeling on a hot radiator to look out of the window...Miss C. had severe medical problems. She exhibited pathological changes in her knees, hip and spine, and underwent several orthopedic operations. Her surgeon attributed these changes to the lack of protection to joints usually given by pain sensation. She apparently failed to shift her weight when standing, to turn over in her sleep, or to avoid certain postures, which normally prevent the inflammation of joints. All of us quite frequently stumble, fall or wrench a muscle during ordinary activity. After these trivial injuries, we limp a little or we protect the joint so that it remains unstressed during the recovery process. This resting of the damaged area is an essential part of its recovery. 
But those who feel no pain go on using the joint, adding insult to injury.\n[^microdeletion]: A genetics paper, [Habib et al 2019](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6676009/ \"Microdeletion in a FAAH pseudogene identified in a patient with high anandamide concentrations and pain insensitivity\") has a profile of a pain-insensitive patient (which is particularly eyebrow-raising in light of earlier discussions of joint damage):\n\n > The patient had been diagnosed with osteoarthritis of the hip, which she reported as painless, which was not consistent with the severe degree of joint degeneration. At 65 yr of age, she had undergone a hip replacement and was administered only paracetamol 2g orally on Postoperative days 1 and 2, reporting that she was encouraged to take the paracetamol, but that she did not ask for any analgesics. She was also administered a single dose of morphine sulphate 10mg orally on the first postoperative evening that caused severe nausea and vomiting for 2 days. After operation, her pain intensity scores were 0⁄10 throughout except for one score of 1⁄10 on the first postoperative evening. Her past surgical history was notable for multiple [varicose vein](!W) and dental procedures for which she has never required analgesia. She also reported a long history of painless injuries (eg. suturing of a laceration and left wrist fracture) for which she did not use analgesics. She reported numerous burns and cuts without pain ([Supplementary Fig. S1](https://ars.els-cdn.com/content/image/1-s2.0-S0007091219301382-mmc2.pdf)), often smelling her burning flesh before noticing any injury, and that these wounds healed quickly with little or no residual scar. She reported eating [Scotch bonnet](!W) chili peppers without any discomfort, but a short-lasting \"pleasant glow\" in her mouth. She described sweating normally in warm conditions.\n[^Gabby-Gingras]: A recent US example is Minnesotan Gabby Gingras (b. 2001), featured in the 2005 documentary _A Life Without Pain_, and occasionally covered in the media since (eg. [\"Medical Mystery: A World Without Pain: A rare genetic disorder leaves one little girl in constant danger\"](https://abcnews.go.com/Health/MedicalMysteries/story?id=3679532&page=1), [\"Minnesota girl who can't feel pain battles insurance company\"](https://www.washingtontimes.com/news/2018/jun/2/minnesota-girl-who-cant-feel-pain-battles-insuranc/)).\n\n She is legally blind, having damaged her eyes & defeated attempts to save her vision by stitching her eyes shut. She would chew on things, so her baby teeth were surgically removed to avoid her breaking them---but then she broke her adult teeth when they grow in; she can't use dentures because her gums are so badly destroyed, which requires special surgery to graft bone from her hips into her jaw to provide a foundation for teeth. And so on.\n[^HN-remote]: HN user [remote_phone](https://news.ycombinator.com/item?id=23462736):\n\n > My cousin feels pain or discomfort but only a little. This almost affected her when she gave birth because her water had broken but she didn’t feel any contractions at all until it was almost too late. Luckily she got to the hospital in time and her son was born perfectly normal but it was a bit harrowing.\n >\n > More interestingly, her son inherited this. He doesn’t feel pain the same way normal people do. Once her son broke his wrist and had to go to the hospital. He wasn’t in pain, but I think they had to pull on the arm to put it back in place properly (is this called traction?). 
The doctor was putting in all his effort to separate the wrist from the arm, and the dad almost fainted because it looked so gruesome but all the son looked like was mildly discomforted from the tension. The doctor was apparently shocked at how little pain he felt.\n >\n > The son also pulled out all his teeth on his own, as they got loose. He said it bothered him to have loose teeth, but the act of pulling them out didn’t bother him at all.\n\nA table might help lay out the possibilities:\n\n
| Utility | Aversiveness | Qualia presence | Examples |
|---------|--------------|-----------------|----------|
| useless | painful | pain | chronic pain; exercise? |
| useful | painful | pain | normal/injuries |
| useless | nonpainful | pain | asymbolia |
| useful | nonpainful | pain | reactive dissociation, lobotomies; exercise? |
| useless | painful | nonpain | unconscious processes such as [anesthesia awareness](https://www.lesswrong.com/posts/wzj6WkudtrXQFqL8e/inverse-p-zombies-the-other-direction-in-the-hard-problem-of "'Inverse p-zombies: the other direction in the Hard Problem of Consciousness', Branwen 2011"). Itches or tickles, [anterograde amnesia](!W)?^[Amnesiacs apparently may still be able to learn fear or pain associations with unpleasant stimuli despite their memory impairment and sometimes reduced pain sensitivity, which makes them a borderline case here: the aversiveness outlasts the (remembered) qualia.] |
| useful | painful | nonpain | cold/heat [perception](!W "Thermoception"), as in the somatosensory cortex lesion case-study |
| useless | nonpainful | nonpain | deadened nerves from diseases (diabetes, leprosy), injury, drugs (anesthetics) |
| useful | nonpainful | nonpain | adrenaline rush/accidents/combat |

Table: A taxonomy of possible kinds of 'pain', split by organismal consequences, motivational effects, and reported subjective (non)experience.
\n\nPain serves a clear purpose (stopping us from doing things which may cause damage to our bodies), but in an oddly unrelenting way which we cannot disable and which increasingly often backfires on our long-term interests in the form of 'chronic pain' and other problems.\nWhy doesn't pain operate more like a warning, or like hunger or thirst?\nThey interrupt our minds, but like a computer popup dialogue, after due consideration of our plans and knowledge, we can generally dismiss them.\nPain is the interruption which doesn't go away, although ([Morsella 2005](https://www.rifters.com/real/articles/Morsella_2005.pdf \"PRISM: The Function of Phenomenal States: Supramodular Interaction Theory\")):\n\n> Theoretically, nervous mechanisms could have evolved to solve the need for this particular kind of interaction otherwise. Apart from automata, which act like humans but have no phenomenal experience, a conscious nervous system that operates as humans do but does not suffer any internal strife. In such a system, knowledge guiding skeletomotor action would be isomorphic to, and never at odds with, the nature of the phenomenal state---running across the hot desert sand in order to reach water would actually feel good, because performing the action is deemed adaptive.^[A possible concrete instance of this would be [the anti-famine theory of anorexia](/doc/psychiatry/anorexia/2003-guisinger.pdf \"‘Adapted to Flee Famine: Adding an Evolutionary Perspective on Anorexia Nervosa’, Guisinger 2003\"): anorexics find it easy to exercise heavily & starve themselves to death--even while feeling 'good' or 'virtuous'---and often don't want to be cured, because they are helplessly executing a last-ditch anti-famine adaptive strategy meant for long-distance travel to greener pastures.] Why our nervous system does not operate with such harmony is perhaps a question that only evolutionary biology can answer. Certainly one can imagine such integration occurring without anything like phenomenal states, but from the present standpoint, this reflects more one's powers of imagination than what has occurred in the course of evolutionary history.\n\n## Hui Neng's Flag\n\nIn the reinforcement learning context, one could ask: does it make a difference whether one has 'negative' or 'positive' rewards? Any reward function with both negative and positive rewards could be turned into all-positive rewards simply by adding a large constant. Is that a difference which makes a difference? 
Or instead of maximizing positive 'rewards', one could speak of minimizing 'losses', and one often does in economics or decision theory or [control theory](!W)[^Bertsekas].\n\n[^Bertsekas]: [Bertsekas 2018](https://arxiv.org/abs/1804.04577 \"Feature-Based Aggregation and Deep Reinforcement Learning: A Survey and Some New Implementations\") helpfully provides a Rosetta Stone between optimal control theory & reinforcement learning (see also [Powell 2018](https://castlelab.princeton.edu/wp-content/uploads/2018/01/Powell-UnifiedFrameworkStochasticOptimization_Jan292018.pdf \"A Unified Framework for Stochastic Optimization\") & [Bertsekas 2019](https://web.mit.edu/dimitrib/www/RLbook.html \"Reinforcement Learning And Optimal Control\")):\n\n > The notation and terminology used in this paper is standard in DP and optimal control, and in an effort to forestall confusion of readers that are accustomed to either the reinforcement learning or the optimal control terminology, we provide a list of selected terms commonly used in reinforcement learning (for example in the popular book by Sutton and Barto [SuB98], and its 2018 on-line 2nd edition), and their optimal control counterparts.\n >\n > (a) Agent = Controller or decision maker.\n > (b) Action = Control.\n > (c) Environment = System.\n > (d) Reward of a stage = (Opposite of) Cost of a stage.\n > (e) State value = (Opposite of) Cost of a state.\n > (f) Value (or state-value) function = (Opposite of) Cost function.\n > (g) Maximizing the value function = Minimizing the cost function.\n > (h) Action (or state-action) value = Q-factor of a state-control pair.\n > (i) Planning = Solving a DP problem with a known mathematical model.\n > (j) Learning = Solving a DP problem in model-free fashion.\n > (k) Self-learning (or self-play in the context of games) = Solving a DP problem using policy iteration.\n > (l) Deep reinforcement learning = Approximate DP using value and/or policy approximation with deep neural networks.\n > (m) Prediction = Policy evaluation.\n > (n) Generalized policy iteration = Optimistic policy iteration.\n > (o) State abstraction = Aggregation.\n > (p) Episodic task or episode = Finite-step system trajectory.\n > (q) Continuing task = Infinite-step system trajectory.\n > (r) Afterstate = Post-decision state.\n\n[\"Do Artificial Reinforcement-Learning Agents Matter Morally?\"](https://arxiv.org/abs/1410.8233), Tomasik 2014, debates the relationship of rewards to considerations of \"suffering\" or \"pain\", given the duality between costs-losses/rewards:\n\n> Perhaps the more urgent form of refinement than algorithm selection is to replace punishment with rewards within a given algorithm. RL systems vary in whether they use positive, negative, or both types of rewards:\n>\n> - In certain RL problems, such as maze-navigation tasks discussed in Sutton and Barto [1998], the rewards are only positive (if the agent reaches a goal) or zero (for non-goal states).\n> - Sometimes a mix between positive and negative rewards^6^ is used. For instance, McCallum [1993] put a simulated mouse in a maze, with a reward of 1 for reaching the goal, −1 for hitting a wall, and −0.1 for any other action.\n> - In other situations, the rewards are always negative or zero. For instance, in the cart-pole balancing system of Barto et al. [1990], the agent receives reward of 0 until the pole falls over, at which point the reward is −1. 
In Koppejan and Whiteson [2011]'s neuroevolutionary RL approach to helicopter control, the RL agent is punished either a little bit, with the negative sum of squared deviations of the helicopter's positions from its target positions, or a lot if the helicopter crashes.\n>\n> Just as animal-welfare concerns may motivate incorporation of rewards rather than punishments in training dogs [Hiby et al 2004] and horses [Warren-Smith and McGreevy, 2007, Innes and McBride, 2008], so too RL-agent welfare can motivate more positive forms of training for artificial learners. Pearce [2007] envisions a future in which agents are driven by 'gradients of well-being' (ie. positive experiences that are more or less intense) rather than by the distinction between pleasure versus pain. However, it's not entirely clear where the moral boundary lies between positive versus negative welfare for simple RL systems. We might think that just the sign of the agent's reward value _r_ would distinguish the cases, but the sign alone may not be enough, as the following section explains.\n>\n> *What's the boundary between positive and negative welfare?*\n>\n> Consider an RL agent with a fixed life of _T_ time steps. At each time _t_, the agent receives a non-positive reward _r_~_t_~ ≤ 0 as a function of the action _a_~_t_~ that it takes, such as in the pole-balancing example. The agent chooses its action sequence (at) _t_ = 1..._T_ with the goal of maximising the sum of future rewards:\n>\n> $$\\sum_{t=1}^T r_t(a_t)$$\n>\n> Now suppose we rewrite the rewards by adding a huge positive constant _c_ to each of them, _r′t_ = _rt_ + _c_, big enough that all of the _r′~t~_ are positive. The agent now acts so as to optimise\n>\n> $$\\sum_{t=1}^T r'_t(a_t) = \\sum_{t=1}^T ((r_t)a_t + c) = Tc + \\sum_{t=1}^T r_t(a_t)$$\n>\n> So the optimal action sequence is the same in either case, since additive constants don't matter to the agent's behaviour.^7^ But if behaviour is identical, the only thing that changed was the sign and numerical magnitude of the reward numbers. Yet it seems absurd that the difference between happiness and suffering would depend on whether the numbers used by the algorithm happened to have negative signs in front. After all, in computer binary, negative numbers have no minus sign but are just another sequence of 0s and 1s, and at the level of computer hardware, they look different still. Moreover, if the agent was previously reacting aversively to harmful stimuli, it would continue to do so. As Lenhart K. Schubert explains:^8^ [This quotation comes from [spring 2014 lecture notes](https://web.archive.org/web/20140409193231/http://www.cs.rochester.edu/users/faculty/schubert/191-291/lecture-notes/23) (accessed March 2014) for a course called \"Machines and Consciousness\".]\n>\n>> If the shift in origin [to make negative rewards positive] causes no behavioural change, then the robot (analogously, a person) would still behave as if suffering, yelling for help, etc., when injured or otherwise in trouble, so it seems that the pain would not have been banished after all!\n>\n> So then what distinguishes pleasure from pain?\n>\n> ...A more plausible account is that the difference relates to 'avoiding' versus 'seeking.' A negative experience is one that the agent tries to get out of and do less of in the future. For instance, injury should be an inherently negative experience, because if repairing injury was rewarding for an agent, the agent would seek to injure itself so as to do repairs more often. 
If we tried to reward *avoidance* of injury, the agent would seek dangerous situations so that it could enjoy returning to safety.^10^ [This example comes from Lenhart K. Schubert's spring 2014 lecture notes (accessed March 2014), for a course called 'Machines and Consciousness.' These thought experiments are not purely academic. We can see an example of maladaptive behaviour resulting from an association of pleasure with injury when people become addicted to the endorphin release of self-harm.] ^[There are [some examples of \"Reward hacking\" in past RL research](/tank#alternative-examples) which resemble such 'self-injuring' agents---for example, a bicycle agent is 'rewarded' for getting near a target (but not 'punished' for moving away), so it learn to steer toward it in a loop to go around it repeatedly to earn the reward.] Injury needs to be something the agent wants to get as far away from as possible. So, for example, even if vomiting due to food poisoning is the best response you can take given your current situation, the experience should be negative in order to dissuade you from eating spoiled foods again. Still, the distinction between avoiding and seeking isn't always clear. We experience pleasure due to seeking and consuming food but also pain that motivates us to avoid hunger. Seeking one thing is often equivalent to avoiding another. Likewise with the pole-balancing agent: Is it seeking a balanced pole, or avoiding a pole that falls over?\n>\n> ...Where does all of this leave our pole-balancing agent? Does it suffer constantly, or is it enjoying its efforts? Likewise, is an RL agent that aims to accumulate positive rewards having fun, or is it suffering when its reward is suboptimal?\n\n## Pain as Grounding\n\nSo with all that for background, what is the purpose of pain?\n\nThe purpose of pain, I would say, is as a *ground truth or outer loss*.\n(This is a [motivational theory of pain](https://plato.stanford.edu/entries/pain/#othertheories) with a more sophisticated RL/psychiatric grounding.)\n\nThe pain reward/loss cannot be removed entirely for the reasons demonstrated by the diabetics/lepers/congenital insensitives: the unnoticed injuries and the poor planning are ultimately fatal.\nWithout any pain qualia to make pain feel painful, we will do harmful things like run on a broken leg or jump off a roof to impress our friends[^Pakistan], or just move in a not-quite-right fashion and a few years later wind up paraplegics.\n(An intrinsic curiosity drive alone would interact badly with a total absence of painful pain: after all, what is more novel or harder to predict than the strange and unique states which can be reached by self-injury or recklessness?)\n\n[^Pakistan]: From the Marsili article:\n\n > In the mid-2000s, Wood's lab at University College partnered with a Cambridge University scientist named Geoff Woods on a pioneering research project centered on a group of related families---all from a clan known as the Qureshi biradari---in rural northern Pakistan. Woods had learned about the families accidentally: On the hunt for potential test subjects for a study on the brain abnormality microcephaly, he heard about a young street performer, a boy who routinely injured himself (walking across burning coals, stabbing himself with knives) for the entertainment of crowds. 
The boy was rumored to feel no pain at all, a trait he was said to share with other family members...When Woods found the boy's family, they told him that the boy had died from injuries sustained during a stunt leap from a rooftop.

If pain couldn't be removed, could pain be turned into a reward, then?
Could we be the equivalent of Morsella's mind that doesn't experience pain, as it infers plans and then executes them, experiencing only more or less rewards?
It would experience only positive rewards (pleasure) as it runs across burning-hot sands, as this is the optimal action for it to be taking according to whatever grand plan it has thought of.

Perhaps we could... but what stops Morsella's mind from enjoying rewards by literally running in circles on those sands until it dies or is crippled?
Morsella's mind may make a plan and define a reward function which avoids the need for any pain or negative rewards, but what happens if there is any flaw in the computed plan or the reward estimates? Or if the plan is based on mistaken premises? What if the sands are hotter than expected, or if the distance is much further than expected, or if the final goal (perhaps an oasis of water) is not there?
Such a mind raises serious questions about learning and dealing with errors: what does such a mind experience when a plan *fails*? Does it experience nothing? Does it experience a kind of "meta-pain"?

Consider what Brand (_The Gift of Pain_ again, [pg191--197](#pain-prosthetics)) describes as the ultimate cause of the failure of years of research into creating 'pain prosthetics', computerized gloves & socks that would measure heat & pressure in real-time in order to warn those without pain like lepers or diabetics: the patients would just *ignore the warnings*, because stopping to prevent future problems was inconvenient while continuing paid off now.
And when electrical shockers were added to the system to stop them from doing a dangerous thing, Brand observed patients simply disabling it to do the dangerous thing & re-enabling it afterwards!

What pain provides is constant, ongoing feedback which anchors all the estimates of future rewards based on planning or bootstrapping.
It anchors our intelligence in a concrete estimation of bodily integrity: the intactness of skin, the health of skin cells, the lack of damage to muscles, joints sliding and moving as they ought to, and so on.
If we are planning well and acting efficiently in the world, we will, in the long run, on average, experience higher levels of bodily integrity and physical health; if we are learning and choosing and planning poorly, then...
we won't.\nThe badness will gradually catch up with us and we may find ourselves blind scarred paraplegics missing fingers and soon to die.\nA pain that was not painful would not serve this purpose, as it would merely be another kind of \"tickling\" sensation.\n(Some might find it interesting or enjoyable or it could accidentally become sexually-linked.)\nThe perceptions in question are simply more ordinary tactile, kinesthetic, [thermoreceptor](!W), or other standard categories of perception; without painful pain, a fire burning your hand simply feels warm (before the thermal-perceptive nerves are destroyed and nothing further is felt), and a knife cutting flesh might feel like a rippling stretching rubbing movement.\n\nWe might say that a painful pain is a pain which forcibly inserts itself into the planning/optimization process, as a cost or lack of reward to be optimized.\nA pain which was not *motivating* is not what we mean by 'pain' at all.[^Drescher]\nThe motivation itself *is* the qualia of pain, much like an itch is an ordinary sensation coupled with a motivational urge to scratch.\nAny mental quality or emotion or sensation which is not accompanied by a demandingness, an involuntary taking-into-consideration, is not pain.\nThe rest of our mind can force its way through pain, if it is sufficiently convinced that there is enough reason to incur the costs of pain because the long-term reward is so great, and we do this all the time: we can convince ourselves to go to the gym, or withstand the vaccination needle, or, in the utmost extremity, saw off a trapped hand to save our life.\nAnd if we are mistaken, and the predicted rewards do not arrive, eventually the noisy constant feedback of pain will override the decisions leading to pain, and whatever incorrect beliefs or models led to the incorrect decisions will be adjusted to do better in the future.\n\n[^Drescher]: Drescher 2004 gives a similar account of motivational pain in [_Good and Real_](/doc/statistics/decision/2006-drescher-goodandreal.pdf \"'Good and Real: Demystifying Paradoxes from Physics to Ethics', Drescher 2006\") (pg77--78):\n\n > But a merely mechanical state could not have the property of being intrinsically desirable or undesirable; inherently good or bad sensations, therefore, would be irreconcilable with the idea of a fully mechanical mind. Actually, though, it is your machinery's very response to a state's utility designation---the machinery's very tendency to systematically pursue or avoid the state---that implements and constitutes a valued state's seemingly inherent deservedness of being pursued or avoided. Roughly speaking, it's not that you avoid pain (other things being equal) in part because pain is inherently bad; rather, your machinery's systematic tendency to avoid pain (other things being equal) is what *constitutes* its being bad. That systematic tendency is what you're really observing when you contemplate a pain and observe that it is \"undesirable\", that it is something you want to avoid.\n >\n > The systematic tendency I refer to includes, crucially, the tendency to plan to achieve positively valued states (and then to carry out the plan), or to plan the avoidance of negatively valued states. 
In contrast, for example, sneezing is an insistent response to certain stimuli; yet despite the strength of the urge---sneezing can be very hard to suppress---we do not regard the sensation of sneezing as strongly pleasurable (nor the incipient-sneeze tingle, subsequently extinguished by the sneeze, as strongly unpleasant). The difference, I propose, is that nothing in our machinery inclines us to plan our way into situations that make us sneeze (and nothing strongly inclines us to plan the avoidance of an occasional incipient sneeze) for the sake of achieving the sneeze (or avoiding the incipient sneeze); the machinery just isn't wired up to treat sneezes that way (nor should it be). The sensations we deem pleasurable or painful are those that incline us to plan our way to them or away from them, other things being equal.\n\nBut the pain cannot and must not be overridden: human organisms can't be trusted to simply 'turn off' pain and indulge an idle curiosity about cutting off hands.\nNote that we can kill ourselves by starvation or thirst, but we cannot kill ourselves by refusing to sleep, or have a heart beat, or breathe---unless one suffers from the (extremely lethal) [central hypoventilation syndrome](!W), that is.\nWe are insufficiently intelligent, our priors insufficiently strong, our reasoning and planning too poor, and we must do too much learning within each life to do without pain.\n\nA similar argument might apply to the puzzle of 'willpower', 'procrastination'.\nWhy do we have such problems, particularly in a modern context, doing aught we know we should and doing naught we oughtn't?\n\nOn the grave of the 'blood glucose' level theory, [Kurzban et al 2013](/doc/psychology/willpower/2013-kurzban.pdf#page=14 \"An opportunity cost model of subjective effort and task performance\") (see later [Shenhav et al 2017](/doc/statistics/decision/2017-shenhav.pdf \"Toward a Rational and Mechanistic Account of Mental Effort\")) erects an *opportunity cost* theory of willpower.\nSince *objective* physical measurements like blood glucose levels fail to mechanically explain poorer brain functionality or why strenuous activities like sports are 'restful' & reduce 'burnout', similar to the failure of objective physical measurements like lactate levels to explain why people are able to physically exercise only a certain amount (despite being able to exercise far more if properly motivated or if tricked), the reason for willpower running out must be *subjective*.\n\nTo explain the sugar-related observations, Kurzban et al 2013 suggest that the aversiveness of long focus and cognitive effort is a simple heuristic which creates a baseline cost to focusing for 'too long' on any one task, to the potential neglect of other opportunities, with the sugar interventions (such as merely *tasting* sugar water) which appear to boost willpower actually serving as proximate reward signals (signals, because the actual energetic content is nil, and [cognitive effort doesn't meaningfully burns calories](https://www.psychologytoday.com/us/blog/mind-design/201108/glucose-is-not-willpower-fuel \"Glucose Is Not Willpower Fuel: Is the muscle model of self-control less then a metaphor?\") in the first place), which justify to the underlying heuristic that further effort on the same task is worthwhile and the opportunity cost is minimal.\n\nThe lack of willpower is a heuristic which doesn't require the brain to explicitly track & prioritize & schedule all possible tasks, by forcing it to regularly halt tasks---[\"like a 
timer that says, 'Okay you're done now.'\"](https://www.scientificamerican.com/article/thinking-hard-calories/ \"Does Thinking Really Hard Burn More Calories?: Unlike physical exercise, mental workouts probably do not demand significantly more energy than usual. Believing we have drained our brains, however, may be enough to induce weariness\")^[Ultra-marathoner [Diane Van Deren](https://radiolab.org/podcast/122291-in-running/transcript \"‘In the Running’, Deren et al 2021\") attributes part of her success to being unable to tell time or duration after epilepsy surgery removed a large chunk of her brain, so she never feels tired.]\nIf one could override fatigue at will, the consequences can be bad.\nUsers of dopaminergic drugs like amphetamines often note issues with channeling the reduced fatigue into useful tasks rather than alphabetizing one's bookcase.\nIn more extreme cases, if one could ignore fatigue entirely, then analogous to lack of pain, the consequences could be severe or fatal: ultra-endurance cyclist [Jure Robič](https://en.wikipedia.org/wiki/Jure_Robi%C4%8D) would cycle for thousands of kilometers, ignoring such problems [as elaborate hallucinations](/doc/psychology/2006-02-05-nytimes-thatwhichdoesnotkillmemakesmestranger.html \"That Which Does Not Kill Me Makes Me Stranger\"), and was eventually killed while cycling.\nThe 'timer' is implemented, among other things, as a gradual buildup of [adenosine](!W), which creates [sleep homeostatic drive pressure](!W \"Sleep#Process S\") and possibly physical fatigue during exercise ([Noakes 2012](https://www.frontiersin.org/articles/10.3389/fphys.2012.00082/full \"Fatigue is a brain-derived emotion that regulates the exercise behavior to ensure the protection of whole body homeostasis\"), [Martin et al 2018](/doc/psychology/2018-martin.pdf \"Mental Fatigue Impairs Endurance Performance: A Physiological Explanation\")), leading to a gradually increasing subjectively perceived 'cost' of continuing with a task/staying awake/continuing athletic activities, which resets when one stops/sleeps/rests.\n(Glucose might work by gradually dropping over [perceived time without rewards](https://www.pnas.org/doi/10.1073/pnas.1603444113 \"'Blood sugar level follows perceived time rather than actual time in people with type 2 diabetes', Park et al 2016\").)\nSince the human mind is too limited in its planning and monitoring ability, it cannot be allowed to 'turn off' opportunity cost warnings and engage in hyperfocus on potentially useless things at the neglect of all other things; procrastination here represents a psychic version of pain.\n\nFrom this perspective, it is not surprising that so many stimulants are adenosinergic or dopaminergic^[This is not about dopaminergic effects being rewarding themselves, but about the perception of current tasks vs alternative tasks. (After all, stimulants don't simply make you enjoy staring at a wall while doing nothing.) 
If everything becomes more rewarding, then there is less to gain from switching, because alternatives will be estimated as little more rewarding; or, if reward sensitivity is boosted only for current activities, then there will be pressure *against* switching tasks, because it is unlikely that alternatives will be predicted to be more rewarding than the current task.], or that small children might especially struggle with mental fatigue (there is a world full of novel opportunities tempting them away), or that many anti-procrastination strategies (like [_Getting Things Done_](!W) or the [Procrastination Equation](https://www.lesswrong.com/posts/RWo4LwFzpHNQCTcYt/how-to-beat-procrastination)) boil down to optimizing for more rewards or more frequent rewards (eg. breaking tasks down into many smaller tasks, which can be completed individually & receive smaller but more frequent rewards, or thinking more clearly about whether something is worth doing): all of these would affect the reward perception itself, and reduce the baseline opportunity cost 'pain'.\nThis perspective may also shed light on depression[^depression], or on [occupational burnout](!W) and why restorative hobbies are ideally maximally different from jobs and more miscellaneous observations like the lower rate of 'hobbies' outside the West: burnout may be a long-term homeostatic reaction to spending 'too much' time too frequently on a difficult not-immediately rewarding task despite earlier attempts to pursue other opportunities (perhaps tasks which would [*never* be](https://www.lesswrong.com/posts/pDzdb4smpzT3Lwbym/my-model-of-ea-burnout) [rewarding](https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/)), which were always overridden, ultimately resulting in a total collapse^[Further speculation: is burnout like the learning theory of depression and the synpatic plasticity paradigm of psychedelic therapy, where individuals over-perseverate due to too-slow learning, leading to burnout? For example, would cranky old researchers (left behind by progress) benefit from psychedelic therapy to reboot their work?]; and hobbies ought to be as different in location and physical activity and social structure (eg. a solitary programmer indoors should pursue a social physical activity outdoors as [soulcraft](https://www.thenewatlantis.com/publications/shop-class-as-soulcraft \"'Shop Class as Soulcraft: The case for the manual trades', Crawford 2006\")) to ensure that it feels completely different for the mind than the regular occupation; and in places with less job specialization or fewer work-hours, the regular flow of a variety of tasks and opportunities means that no such special activity as a 'hobby' is necessary.\n\n[^depression]: [Hollon et al 2021](https://www.frontiersin.org/articles/10.3389/fpsyt.2021.667592/full \"Cognitive Behavior Therapy for Depression From an Evolutionary Perspective\") justifies long depressive episodes as evolutionarily adaptive because they force rumination & re-examination of the past for mistakes.\n\n One might object that such rumination is merely harmful in many cases, like bereavement from a spouse dying of old age---but from [the blackbox perspective](#black-box-vs-white-box-optimization), the agent may well be mistaken in believing there was no mistake! After all, an extremely bad thing *did* happen. So better to force lengthy rumination, just on the chance that a mistake will be discovered after all. 
(This brings us back to RL's distinction between high-variance evolutionary/Monte Carlo learning vs smarter lower-variance but potentially biased learning using models or bootstraps, and the \"deadly triad\".)\n\n---\n\nPerhaps if we were superintelligent AIs who could trivially plan flawless humanoid locomotion at 1000Hz taking into account all possible damages, or if we were emulated brains sculpted by endless evolutionary procedures to execute perfectly adaptive plans by pure instinct, or if we were simple amoeba in a Petri dish who had no real choices to make, there would be no need for a pain which was painful.\nAnd likewise, were we endlessly planning and replanning to the end of days, we should never experience akrasia, we should merely *do* what is necessary (perhaps not even experiencing any qualia of effort or deliberation, merely [seeing events endlessly unfold as they always had to](/story-of-your-life \"'‘Story Of Your Life’ Is Not A Time-Travel Story', Branwen 2012\")).\nBut we are not.\nThe pain keeps us honest.\nIn the end, pain is our only teacher.\n\n# The Perpetual Peace\n\n
\n> These laws, taken in the largest sense, being Growth with Reproduction; Inheritance which is almost implied by reproduction; Variability from the indirect and direct action of the external conditions of life, and from use and disuse; a Ratio of Increase so high as to lead to a Struggle for Life, and as a consequence to Natural Selection, entailing Divergence of Character and the Extinction of less-improved forms. Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.\n>\n> [Charles Darwin](!W), [_On the Origin of Species_](!W)\n
\n\n
\n> In war, there is the free possibility that not only individual determinacies, but the sum total of these, will be destroyed as life, whether for the absolute itself or for the people. Thus, war preserves the ethical health of peoples in their indifference to determinate things [_Bestimmtheiten_]; it prevents the latter from hardening, and the people from becoming habituated to them, just as the movement of the winds preserves the seas from that stagnation which a permanent calm would produce, and which a permanent (or indeed 'perpetual') peace would produce among peoples.\n>\n> [G. W. F. Hegel](!W \"Georg Wilhelm Friedrich Hegel\")^[[\"On the Scientific Ways of Treating Natural Law\"](https://www.marxists.org/reference/archive/hegel/works/nl/ch03.htm), Hegel 1803]\n
\n\n
\n> We must recognize that war is common, strife is justice, and all things happen according to strife and necessity...War is father of all and king of all\n>\n> [Heraclitus](!W), B80/B53\n
\n\n
\n> It is not enough to succeed; others must fail.\n>\n> [Iris Murdoch](!W), [_The Black Prince_](!W \"The Black Prince (novel)\")\n
\n\nWhat if we remove the outer loss?\n\nIn a meta-learning context, it will then either overfit to a single instance of a problem, or learn a potentially arbitrarily suboptimal average response; in the Quake CTF, the inner loss might converge, as mentioned, to every-agent-for-itself or greedy tactical victories guaranteeing strategic losses; in a human, the result would (at present, due to refusal to use artificial selection or genetic engineering) be a gradual buildup of [mutation load](!W) leading to serious health issues and eventually perhaps a mutational meltdown/error catastrophe; and in an economy, it leads to... the USSR.\n\nHow much of this constraint is needed varies with the relative power of the non-ground-truth optimization, the fidelity of replication, and the accuracy of selection.\nThe [Price equation](!W) gives us quantitative insight into the conditions under which [group selection](!W) could work at all:\nif a NN could only copy itself in a crude and lossy way, meta-learning would not work well in the first place (properties must be preserved from one generation to the next); if a human cell copied itself with an error rate of as much as 1 in millions, humans could never exist because reproductive fitness is too weak a reward to purge the escalating mutation load (selective gain is negative); if bankruptcy becomes more arbitrary and has less to do with consumer demand than acts of god/government, then corporations will become more pathologically inefficient (covariance between traits & fitness too small to accumulate in meaningful ways; a toy numerical check of this selection term follows the See Also section below).\n\nAs Shalizi concludes in his review:\n\n> Planning is certainly possible within limited domains---at least if we can get good data to the planners---and those limits will expand as computing power grows. But planning is only possible within those domains because *making money* gives firms (or firm-like entities) an objective function which is both unambiguous and *blinkered*. Planning for the whole economy would, under the most favorable possible assumptions, be intractable for the foreseeable future, and *deciding on a plan* runs into difficulties we have no idea how to solve. The sort of efficient planned economy dreamed of by the characters in _Red Plenty_ is something we have no clue of how to bring about, even if we were willing to accept dictatorship to do so.\n\nThis is why the planning algorithms cannot simply keep growing and take over all markets: \"who watches the watchmen?\"\nAs powerful as the various internal organizational and planning algorithms are, and much superior to evolution/market competition, they only optimize surrogate inner losses, which are not the end-goal, and they must be constrained by a ground-truth loss.\nThe reliance on this loss can and should be reduced, but a reduction to zero is undesirable as long as the inner losses converge to any optima different from the ground-truth optima.\n\nGiven the often long lifespan of a failing corporation, the difficulty corporations encounter in aligning employees with their goals, and the inability to reproduce their 'culture', it is no wonder that group selection in markets is feeble at best, and the outer loss cannot be removed.\nOn the other hand, these failings are not necessarily permanent: as corporations gradually turn into software^[APIs are another instance of bilevel optimization, incidentally. Inside an API, a software engineer can do accurate hillclimbing using techniques like randomized A/B testing. But one cannot randomize an entire ecosystem+APIs!
Selection on sets of APIs can only happen at a company level (or even higher). The difference between a [Stripe](!W \"Stripe (company)\") and a PayPal payment API is not a mere matter of needing 2 function-calls instead of 3; and the most important decision anyone ever made about Amazon AWS was Jeff Bezos [decreeing from on high](https://gist.github.com/chitchcock/1281611 \"Stevey's Google Platforms Rant\") that now everything would be *an* API---the internal implementation, or even the API details, being trivial compared to the ecosystem effect of competing/interacting APIs. See also [\"holy wars\"](/holy-war \"'Technology Holy Wars are Coordination Problems', Branwen 2020\").], which can be copied and exist in much more dynamic markets with faster OODA loops, perhaps we can expect a transition to an era where corporations *do* replicate precisely & can then start to consistently evolve large increases in efficiency, rapidly exceeding all progress to date.\n\n# See Also\n\n
\n- [Why Tool AIs Want to Be Agent AIs: The Power of Agency](/tool-ai \"AIs limited to purely computational inferential tasks (Tool AIs) supporting humans will be less intelligent, efficient, and economically valuable than more autonomous reinforcement-learning AIs (Agent AIs) who act on their own and meta-learn to take actions over choice of computation/data/training/architecture/hyperparameters/external-resource use, because all problems are secretly decision-theory/reinforcement-learning problems.\"){.backlink-not}\n- [Complexity no Bar to AI](/complexity \"Critics of AI risk suggest diminishing returns to computing (formalized asymptotically) means AI will be weak; this argument relies on a large number of questionable premises and ignoring additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely.\"){.backlink-not}\n- [Timing Technology: Lessons From The Media Lab](/timing \"Technological developments can be foreseen but the knowledge is largely useless because startups are inherently risky and require optimal timing. A more practical approach is to embrace uncertainty, taking a reinforcement learning perspective.\"){.backlink-not}\n- [\"The Gift of the Amygdali\"](/fiction/batman){.backlink-not}\n- [Lamaze technique](!W)\n- the [Price equation](!W) & frequency of selective events in [group selection](!W)\n- [Aboulia](!W)\n
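Since the Price equation carries the quantitative weight of the group-selection argument above, a minimal numerical check of its selection term may help (a toy haploid model of my own, with arbitrary coefficients, not anything specified in the text): under faithful transmission, the per-generation change in the mean trait is exactly Cov(w, z) / mean(w), so when fitness barely tracks the trait (think 'arbitrary bankruptcy'), the response shrinks toward zero and nothing can accumulate.

```python
# Toy check of the Price equation's selection term (hypothetical model, arbitrary numbers):
# with faithful transmission, the change in the mean trait per generation is Cov(w, z) / mean(w).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(0.0, 1.0, n)                # heritable trait, eg. corporate 'efficiency'

for signal in (1.0, 0.1, 0.01):            # how strongly fitness tracks the trait
    luck = rng.normal(0.0, 1.0, n)         # everything else: acts of god/government
    w = np.exp(signal * z + luck)          # nonnegative fitness
    observed = np.average(z, weights=w) - z.mean()      # realized change in the mean trait
    predicted = np.cov(w, z, ddof=0)[0, 1] / w.mean()   # the Price equation's selection term
    print(f"signal={signal:<4}: observed={observed:+.3f}  Cov(w,z)/mean(w)={predicted:+.3f}")
```

The two columns agree (the equation is an identity), and the response is proportional to how strongly fitness tracks the trait; the other failure modes above enter through the equation's transmission term, where copying errors add a bias which a covariance this small cannot outweigh.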
\n\n# External Links\n\n- **Overviews**:\n\n - [\"Can technology plan economies and destroy democracy? How algorithms could someday be used to optimise the ballot box\"](https://www.economist.com/christmas-specials/2019/12/18/can-technology-plan-economies-and-destroy-democracy), _Economist_, 2019-12-18 (the socialist calculation debate, computational complexity, Herbert Simon, automating markets, etc); [\"Big Tech Sees Like a State\"](https://www.thediff.co/archive/big-tech-sees-like-a-state/), Bryne Hobert\n - [\"Studies on Slack\"](https://slatestarcodex.com/2020/05/12/studies-on-slack/), Scott Alexander\n - [\"Everyday Lessons from High-Dimensional Optimization\"](https://www.lesswrong.com/posts/pT48swb8LoPowiAzR/everyday-lessons-from-high-dimensional-optimization)\n - [\"The Power of High-Speed Stupidity\"](https://www.lesswrong.com/posts/5qyytGqZWv5bt6723/the-power-of-high-speed-stupidity)\n- **AI**:\n\n - [\"AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence\"](https://arxiv.org/abs/1905.10985#uber), Clune 2019\n - [\"On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models\"](https://arxiv.org/abs/1511.09249#schmidhuber), Schmidhuber 2015\n - [\"AutoML-Zero: Evolving Machine Learning Algorithms From Scratch\"](https://arxiv.org/abs/2003.03384#google), Real et al 2020 ([Github](https://github.com/google-research/google-research/tree/master/automl_zero \"'AutoML-Zero: Open source code for the paper: \"AutoML-Zero: Evolving Machine Learning Algorithms From Scratch\"', Real et al 2020\"); [blog](https://ai.googleblog.com/2020/07/automl-zero-evolving-code-that-learns.html \"'AutoML-Zero: Evolving Code that Learns', Real & Liang 2020\")); [\"Evolving Reinforcement Learning Algorithms\"](https://arxiv.org/abs/2101.03958#google), Co-Reyes et al 2021\n - [\"Gradient Descent: The Ultimate Optimizer\"](https://arxiv.org/abs/1909.13371#facebook), Chandra et al 2019; [\"Reverse engineering learned optimizers reveals known and novel mechanisms\"](https://arxiv.org/abs/2011.02159#google), Maheswaranathan et al 2020\n - [\"Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves\"](https://arxiv.org/abs/2009.11243#google), Metz et al 2020; [\"Training Learned Optimizers with Randomly Initialized Learned Optimizers\"](https://arxiv.org/abs/2101.07367#google), Metz et al 2021; [\"VS-ML: Meta Learning Backpropagation And Improving It\"](https://arxiv.org/abs/2012.14905#schmidhuber \"'Meta Learning Backpropagation And Improving It', Kirsch & Schmidhuber 2020\"), Kirsch & Schmidhuber 2021; [\"A Generalizable Approach to Learning Optimizers\"](https://arxiv.org/abs/2106.00958#openai \"'LHOPT: A Generalizable Approach to Learning Optimizers', Almeida et al 2021\"), Almeida et al 2021\n - [\"A critique of pure learning and what artificial neural networks can learn from animal brains\"](https://www.nature.com/articles/s41467-019-11786-6), Zador 2019\n - [\"Whole Brain Emulation and the Evolution of Superorganisms\"](https://intelligence.org/files/WBE-Superorgs.pdf), Shulman 2010\n - [\"WBE and DRL: a Middle Way of imitation learning from the human brain\"](https://www.reddit.com/r/reinforcementlearning/comments/9pwy2f/wbe_and_drl_a_middle_way_of_imitation_learning/)\n - [\"Meta-Learning: Learning to Learn Fast\"](https://lilianweng.github.io/lil-log/2018/11/30/meta-learning.html#openai)/[On \"Meta 
Reinforcement Learning\"](https://lilianweng.github.io/lil-log/2019/06/23/meta-reinforcement-learning.html#openai \"'Meta Reinforcement Learning', Weng 2019\"), Lilian Wen\n -
[\"Optimal Learning: Computational Procedures for Bayes-Adaptive Markov Decision Processes\"](https://www.gatsby.ucl.ac.uk/~yael/Okinawa/DuffThesis.pdf), Duff 2002 (cf. [Futamura](http://blog.sigfpe.com/2009/05/three-projections-of-doctor-futamura.html)); [\"Meta-learning of Sequential Strategies\"](https://arxiv.org/abs/1905.03030#deepmind \"'Meta-learning of Sequential Strategies', Ortega et al 2019\"), Ortega et al 2019; [\"Reinforcement Learning, Fast and Slow\"](https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613\\(19\\)30061-0#deepmind), Botvinick et al 2019; [\"Meta-learners' learning dynamics are unlike learners'\"](https://arxiv.org/abs/1905.01320#deepmind), Rabinowitz 2019; [\"Ray Interference: a Source of Plateaus in Deep Reinforcement Learning\"](https://arxiv.org/abs/1904.11455#deepmind), Schaul et al 2019; [\"Learning not to learn: Nature versus nurture in silico\"](https://arxiv.org/abs/2010.04466), Lange & Sprekeler 2020; [\"Meta-trained agents implement Bayes-optimal agents\"](https://arxiv.org/abs/2010.11223#deepmind), Mikulik et al 2020 (Bayesian RL interpretations of meta-DRL & why DRL is so sample-inefficient); [\"What Are Bayesian Neural Network Posteriors Really Like?\"](https://arxiv.org/abs/2104.14421#google), Izmailov et al 2021; [\"What learning algorithm is in-context learning? Investigations with linear models\"](https://arxiv.org/abs/2211.15661#google), Akyürek et al 2022\n\n - [\"Meta-learning, social cognition and consciousness in brains and machines\"](https://www.sciencedirect.com/science/article/pii/S0893608021003956), Langdon et al 2021 (review)\n - [\"Solving Rubik's Cube With A Robot Hand\"](https://arxiv.org/abs/1910.07113#openai \"'Solving Rubik’s Cube with a Robot Hand', OpenAI et al 2019\"), Akkaya et al 2019 ([blog](https://openai.com/research/solving-rubiks-cube \"We've trained a pair of neural networks to solve the Rubik's Cube with a human-like robot hand. The neural networks are trained entirely in simulation, using the same reinforcement learning code as OpenAI Five paired with a new technique called Automatic Domain Randomization (ADR). The system can handle situations it never saw during training, such as being prodded by a stuffed giraffe. This shows that reinforcement learning isn't just a tool for virtual tasks, but can solve physical-world problems requiring unprecedented dexterity.\"); [video](https://www.youtube.com/watch?v=QyJGXc9WeNo&list=PLOXw6I10VTv9HODt7TFEL72K3Q6C4itG6&index=3))\n - [\"Learning to Predict Without Looking Ahead: World Models Without Forward Prediction\"](https://learningtopredict.github.io/#google \"Our agents are only given infrequent observations of the real environment. As a side effect for optimizing performance in this setting, a world model' emerges. We show the true dynamics in color, with full saturation denoting frames the policy can see. The black and white outline shows the state of the emergent world model. 
These world model exhibits similar, but not identical dynamics to forward predictive models but only model 'important' aspects of the environment\"), Freeman et al 2019 ([Paper](https://arxiv.org/abs/1910.13038#google \"'Learning to Predict Without Looking Ahead: World Models Without Forward Prediction', Freeman et al 2019\"))\n - [\"HyperNetworks\"](https://arxiv.org/abs/1609.09106#google), Ha et al 2016\n - [\"MetaGenRL: Improving Generalization in Meta Reinforcement Learning\"](http://louiskirsch.com/metagenrl), Kirsch et al 2019; [\"LPG: Discovering Reinforcement Learning Algorithms\"](https://arxiv.org/abs/2007.08794#deepmind \"'Discovering Reinforcement Learning Algorithms', Oh et al 2020\"), Oh et al 2020\n - [Duan et al 2016](https://arxiv.org/abs/1611.02779#openai \"RL^2^: Fast Reinforcement Learning via Slow Reinforcement Learning\"); [Santoro et al 2016](https://arxiv.org/abs/1605.06065#deepmind \"One-shot Learning with Memory-Augmented Neural Networks\"); [\"Learning to reinforcement learn\"](https://arxiv.org/abs/1611.05763#deepmind), Wang et al 2016; [\"Prefrontal cortex as a meta-reinforcement learning system\"](/doc/reinforcement-learning/meta-learning/2018-wang.pdf#deepmind), Wang et al 2018 ([Matthew Botvinick commentary](https://www.lesswrong.com/posts/Wnqua6eQkewL3bqsF/matt-botvinick-on-the-spontaneous-emergence-of-learning \"Matt Botvinick on the spontaneous emergence of learning algorithms\"))\n - [\"Smooth markets: A basic mechanism for organizing gradient-based learners\"](https://arxiv.org/abs/2001.04678#deepmind), Balduzzi et al 2020\n - [[Market models]{.smallcaps}](https://people.idsia.ch/~juergen/directsearch/node15.html): [\"Properties of the Bucket Brigade Algorithm\"](/doc/reinforcement-learning/multi-agent/1985-holland.pdf), Holland 1985; [\"Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions\"](https://arxiv.org/abs/2007.02382), Chang et al 2020 ([blog](https://bair.berkeley.edu/blog/2020/07/11/auction/ \"Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions\")); [\"The AI Economist: Optimal Economic Policy Design via Two-level Deep Reinforcement Learning\"](https://arxiv.org/abs/2108.02755#salesforce), Zheng et al 2021; [\"Finding General Equilibria in Many-Agent Economic Simulations Using Deep Reinforcement Learning\"](https://arxiv.org/abs/2201.01163#salesforce), Curry et al 2022\n - [\"Multiplicative Interactions and Where to Find Them\"](https://openreview.net/forum?id=rylnK6VtDH#google), Jayakumar et al 2020\n - [\"How Learning Can Guide Evolution\"](https://pages.ucsd.edu/~rbelew/courses/cogs184_w10/readings/HintonNowlan97.pdf), Hinton & Nowlan 1987\n - [\"Embodied intelligence via learning and evolution\"](https://www.nature.com/articles/s41467-021-25874-z), Gupta et al 2021\n- **Biology**:\n\n - [\"When Pain Isn't Painful: David Bain considers a case that casts doubt on the intuition that pain is essentially unpleasant.\"](https://www.philosophersmag.com/index.php/component/content/article?id=105:pain)\n - [\"The Itch: Its mysterious power may be a clue to a new theory about brains and bodies\"](https://www.newyorker.com/magazine/2008/06/30/the-itch)\n - [\"If brains are computers, what kind of computers are they?\"](https://www.lesswrong.com/posts/fuGNHdgYWBkA5Fi22/if-brains-are-computers-what-kind-of-computers-are-they), [Daniel Dennett](!W) 2013\n - [\"Survival of the Systems\"](/doc/genetics/selection/natural/2021-lenton.pdf), Lenton et al 2021; [\"Cancer across 
the tree of life: cooperation and cheating in multicellularity\"](https://royalsocietypublishing.org/doi/10.1098/rstb.2014.0219), Aktipis et al 2015\n - [Evolutionary game theory](!W) ([SEP](https://plato.stanford.edu/entries/game-evolutionary/))\n - [\"Nociceptive Sensitization Reduces Predation Risk\"](https://www.sciencedirect.com/science/article/pii/S0960982214003352), Crook et al 2014\n - [\"The structure of genotype-phenotype maps makes fitness landscapes navigable\"](https://www.biorxiv.org/content/10.1101/2021.10.11.463990.full), Greenbury et al 2021; [\"The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks\"](https://arxiv.org/abs/2110.06296#google), Entezari et al 2021\n- **Psychology**:\n\n - [Wanting vs Liking](https://www.lesswrong.com/tag/wanting-vs-liking)\n - [Picoeconomics](!W \"George Ainslie (psychologist)\")\n - [\"Why has evolution not selected for perfect self-control?\"](https://royalsocietypublishing.org/doi/full/10.1098/rstb.2018.0139), Hayden 2018\n - [\"The Temporal Dynamics of Opportunity Costs: A Normative Account of Cognitive Fatigue and Boredom\"](https://www.biorxiv.org/content/10.1101/2020.09.08.287276.full), Agrawal et al 2020\n - [\"Key questions about artificial sentience: an opinionated guide\"](https://www.lesswrong.com/posts/cwDbYmnSdoobdcJnx/key-questions-about-artificial-sentience-an-opinionated), Robbo\n- **Economics**:\n\n - [\"Why Do We Undervalue Competent Management?\"](https://hbr.org/2017/09/why-do-we-undervalue-competent-management)\n - [\"Antitrust as Allocator of Coordination Rights\"](/doc/economics/2019-paul.pdf \"'Antitrust As Allocator of Coordination Rights', Paul 2019\"), Paul 2019\n - [\"The Gervais Principle\"](https://www.ribbonfarm.com/the-gervais-principle/), Venkatesh Rao\n - [\"In The Eternal Inferno, Fiends Torment Ronald Coase With The Fate Of His Ideas\"](https://www.harrowell.org.uk/blog/2018/01/31/in-the-eternal-inferno-fiends-torment-ronald-coase-with-the-fate-of-his-ideas/)\n - [\"Socialist Fantasies\"](https://www.econlib.org/socialist-fantasies/)\n - [\"This Japanese Company Charges Its Staff \\$100 an Hour to Use Conference Rooms: Everything has a price, which helps keep workers focused on the bottom line\"](https://www.bloomberg.com/news/articles/2019-06-20/charging-employees-for-conference-rooms-helps-disco-boost-profit)\n - [\"Has dynamic programming improved decision making?\"](https://icare.hse.ru/data/2018/10/24/1142422445/Rust.pdf), Rust 2018\n - [\"When Hindsight Isn’t 20/20: Incentive Design With Imperfect Credit Allocation\"](https://www.lesswrong.com/posts/XPRAY34Sutc2wWYZf/when-hindsight-isn-t-20-20-incentive-design-with-imperfect), John Wentsworth^[Dai & Toikka 2022 also notes: \"...If asking the agents to report the technology were allowed, then with two or more agents it would be possible to partially implement the Bayesian profit-maximizing contract for the true technology by using a mechanism that chooses the Bayesian optimal contract for the reported technology whenever the agents’ reports agree, and which “punishes” the agents with the zero contract if any reports disagree. With three or more agents, the Bayesian surplus-maximizing contract could be implemented similarly. But as is typical in the implementation literature, the two-agent case is more difficult because then it is not obvious to tell who deviated when reports disagree, and budget balance prevents punishing both agents simultaneously. 
As this issue is orthogonal to our analysis, we do not pursue it further.\"]\n - [\"What do executives do, anyway?\"](https://apenwarr.ca/log/20190926): [\"CEOs Don’t Steer\"](https://www.ribbonfarm.com/2017/11/09/ceos-dont-steer/); [\"You can only communicate one top priority\"](https://www.lesswrong.com/posts/JrLExmCZWTxkvK8ih/dan-luu-on-you-can-only-communicate-one-top-priority)\n - [\"AGI will drastically increase economies of scale\"](https://www.lesswrong.com/posts/Sn5NiiD5WBi4dLzaB/agi-will-drastically-increase-economies-of-scale)\n- **Sociology**:\n\n - \"[The Gods of the Copybook Headings](!W)\"; [\"The Goddess of Everything Else\"](https://slatestarcodex.com/2015/08/17/the-goddess-of-everything-else-2/)\n - [\"Large teams develop and small teams disrupt science and technology\"](/doc/statistics/bias/2019-wu.pdf), Wu et al 2019\n - [\"A Research Note on Deriving the Square-Cube Law of Formal Organizations from the Theory of Time-Minimization\"](/doc/economics/1983-stephan.pdf), Stephan 1983\n- **Technology**:\n\n - [End-to-end principle](/doc/cs/end-to-end-principle/index)\n - [\"The Three Projections of Dr Futamura\"](http://blog.sigfpe.com/2009/05/three-projections-of-doctor-futamura.html) (isomorphisms between compilers/interpreters/etc)\n - [\"My history with Forth & stack machines\"](http://yosefk.com/blog/my-history-with-forth-stack-machines.html) (on [Chuck Moore](!W \"Charles H. Moore\") & [Forth](!W \"Forth (programming language)\"), as a different version of [the Lisp Curse](http://www.winestockwebdesign.com/Essays/Lisp_Curse.html); cf. [Donald Knuth](!W)[^Knuth-PLT])\n- **Discussion**: Reddit: [1](https://www.reddit.com/r/slatestarcodex/comments/a4d3s6/evolution_as_backstop_for_reinforcement_learning/); [HN](https://news.ycombinator.com/item?id=23459056)\n\n[^Knuth-PLT]: yosefk observes that Moore's systems designed using Forth & custom hardware/OS/userland are breathtakingly more efficient than 'standard' approaches would yield, and this is because Moore takes a global perspective and optimizes \"end-to-end\": changing the language if that makes the OS simpler, changing the hardware if that'd make the text editor smaller, redefining the problem if necessary, and making large globally-coherent sets of changes that more myopic or constrained programmers could not or would not do. They are less 'designed' than *evolved* (but evolved by intelligence), and share similar traits: a general contempt for rigid abstraction or principles like \"modularity\", a reliance on 'coincidence' for correctness, and incredible performance (in both terms of efficiency and in solving whatever the problem is). This approach makes for amazing custom 'one-off' systems; but these are inherently difficult to understand by lesser mortals, the general 'just git gud' approach unreplicable, and the system may not be modifiable by lesser mortals (even if the original designer could easily modify it or just rewrite it overnight to be brilliantly optimized for the new set of requirements). Similar issues bedevil [Lisp](https://en.wikipedia.org/wiki/Lisp_(programming_language)) systems & programmers: they *can*, and so they *do*.\n\n This reminds me of [Donald Knuth](!W). Knuth is one of the most brilliant computer scientists ever, who does things like program an [ALGOL](!W) compiler by himself, mostly in his head, for a summer job. 
He writes projects like the TeX system by sitting down and spending days thinking hard about it, creating a single program littered with poor software-engineering like global variables & [GOTOs](/doc/cs/algorithm/1974-knuth.pdf \"‘Structured Programming with go to Statements’, Knuth 1974\") but which is nevertheless lightning-fast & almost bug-free. TeX is not his only programming language either; he has created others like [METAFONT](!W) (for fonts) or [MIX](!W)/[MMIX](!W) (an assembly language & computer architecture to provide simple & timeless implementations of his [TAOCP](!W \"The Art of Computer Programming\") programs).\n\n Perhaps the most striking thing about these various languages is that everyone who uses them loves the quality of the output & what you can do with them, but hates the confusing, complicated, inconsistent, buggy experience of *using* them (to the extent that there is a whole cottage industry of people attempting to rewrite or replace [TeX](!W \"TeX\")/[LaTeX](!W \"LaTeX\"), typically copying the core ideas of how TeX does typesetting---the ['box and glue'](https://en.wikibooks.org/wiki/LaTeX/Boxes) paradigm of how to lay out stuff onto a page, the [Knuth-Plass line-breaking](/doc/design/typography/1981-knuth.pdf \"‘Breaking paragraphs into lines’, Knuth & Plass 1981\"), the custom [font families](!W \"Computer Modern\") etc---doing all that reimplementation work just so they don't have to ever deal with the misery of TeX-the-language). Knuth himself, however, appears to have no more difficulty programming in his languages than he did writing 1960s assembler or [designing fonts with 60 parameters](/doc/design/typography/tex/1996-tug-issuev17no4-knuthqanda.pdf#page=7 \"‘Questions and Answers with Professor Donald E. Knuth § pg7’, Knuth 1996 (page 7)\"). As he puts it, he ignores most programming techniques and just writes the right code, [\"Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about.\"](https://www.informit.com/articles/article.aspx?p=1193856 \"‘Interview with Donald Knuth’, Binstock 2008\")\n\n How do you write code as well as Don Knuth? The answer seems to be, well, 'be as naturally gifted as Don Knuth'. A major omission in his œuvre is that he has never done major work on operating systems, and his major effort in improving software engineering, [\"literate programming\"](!W), which treats programs as 1 long story to be told, fell stillborn from the presses. (Knuth, who has the privilege of an academic in being able to ship a program without any worries about maintenance or making a living, and just declaring something *done*, like TeX, perhaps does not appreciate how little real-world programs are like [telling stories](https://www.quantamagazine.org/computer-scientist-donald-knuth-cant-stop-telling-stories-20200416).)\n\n So this is a bit of a problem for any attempts at making programming languages more powerful or more ergonomic, as well as for various kinds of ['tools for thought'](https://numinous.productions/) like [Douglas Engelbart's](!W) [intelligence amplification](!W) program. It is terribly easy for such powerful systems to be impossible to learn, and the ability to handle extreme levels of complexity can cripple one's ability to *remove* complexity.
(The ability to define all sorts of keyboard shortcuts & abbreviations invoking arbitrary code dynamically in your [Lisp machine](!W) [text editor](https://en.wikipedia.org/wiki/Emacs) is great until you ask 'how am I going to *remember* all these well enough to save any time on net?') Tasking people like Knuth or Engelbart to develop such things is a bit like asking a top sports player to be a coach: it's not the same thing at all, and the very factors which made them so freakishly good at the sport may damage their ability to critique or improve---they may not be able to do much more than demonstrate how to do it well, and say, 'now you do that too and git gud'.\n\n From an AI perspective, this is interesting because it suggests that AIs might be powerful even while coding with human-legible code. If the problem with the 'Moore/Knuth approach' is that you can't clone him indefinitely to rewrite the system every time it's necessary, then what happens with AIs which you *can* 'just clone' and apply exclusively to the task? Quantity has a quality all its own. (For a fictional example, see Vernor Vinge's SF novel [_A Deepness in the Sky_](!W), where talented individuals can be put into a 'Focused' state, where they become permanently [monomaniacally obsessed](!W \"Hyperfocus\") with a single technical task like rewriting computer systems to be optimal, and always achieve Knuth-level results; giving their enslavers de facto superpowers compared to rivals who must employ ordinary people. For a more real-world example, consider how Google fights [bitrot](/holy-war \"‘Technology Holy Wars are Coordination Problems’, Branwen 2020\") & infrastructure decay: not by incredibly sophisticated programming languages---quite the opposite, considering regressions like [Go](!W \"Go (programming language)\")---but employing tens of thousands of top programmers to literally rewrite all its code every few years on net, developing an entire parallel universe of software tools, and storing it all in a [single giant repository](https://cacm.acm.org/magazines/2016/7/204032-why-google-stores-billions-of-lines-of-code-in-a-single-repository/fulltext) to assist the endless global rewrites of the [big ball of mud](http://www.laputan.org/mud/mud.html).)\n\n# Appendix\n\n## Meta-Learning Paradigms\n\n[Metz et al 2018](https://arxiv.org/abs/1804.00222#google \"Learning Unsupervised Learning Rules\"){.include-annotation}\n\n## Pain prosthetics {.collapse}\n\n
\n> Brand & Yancey's 1993 [_Pain: The Gift No One Wants_](/doc/psychology/1993-brand-painthegiftnobodywants.pdf \"'Pain: The Gift Nobody Wants', Brand & Yancey 1993\"), pg191--197, recounts Brand's research in the 1960s--1970s in attempting to create 'artificial pain' or 'pain prosthetics', which ultimately failed because human perception of pain is marvelously accurate & superior to the crude electronics of the day, but more fundamentally because they discovered the aversiveness of pain was critical to accomplishing the goal of discouraging repetitive or severely-damaging behavior, as the test subjects would simply ignore or disable the devices to get on with whatever they were doing.\nExcerpts:\n
\n\n> My grant application bore the title \"A Practical Substitute for Pain.\" We proposed developing an artificial pain system to replace the defective system in people who suffered from leprosy, congenital painlessness, diabetic neuropathy, and other nerve disorders. Our proposal stressed the potential economic benefits: by investing a million dollars to find a way to alert such patients to the worst dangers, the government might save many millions in clinical treatment, amputations, and rehabilitation.\n>\n> The proposal caused a stir at the National Institutes of Health in Washington. They had received applications from scientists who wanted to diminish or abolish pain, but never from one who wished to create pain. Nevertheless, we received funding for the project.\n>\n> We planned, in effect, to duplicate the human nervous system on a very small scale. We would need a substitute \"nerve sensor\" to generate signals at the extremity, a \"nerve axon\" or wiring system to convey the warning message, and a response device to inform the brain of the danger. Excitement grew in the Carville research laboratory. We were attempting something that, to our knowledge, had never been tried.\n>\n> I subcontracted with the electrical engineering department at Louisiana State University to develop a miniature sensor for measuring temperature and pressure. One of the engineers there joked about the potential for profit: \"If our idea works, we'll have a pain system that warns of danger but doesn't hurt. In other words, we'll have the good parts of pain without the bad! Healthy people will demand these gadgets for themselves in place of their own pain systems. Who wouldn't prefer a warning signal through a hearing aid over real pain in a finger?\"\n>\n> The LSU engineers soon showed us prototype [transducers](https://en.wikipedia.org/wiki/Pressure_sensor), slim metal disks smaller than a shirt button. Sufficient pressure on these transducers would alter their electrical resistance, triggering an electrical current. They asked our research team to determine what thresholds of pressure should be programmed into the miniature sensors. I replayed my university days in Tommy Lewis's pain laboratory, with one big difference: now, instead of merely testing the in-built properties of a well-designed human body, I had to think like the designer. What dangers would that body face? How could I quantify those dangers in a way the sensors could measure?\n>\n> To simplify matters, we focused on fingertips and the soles of feet, the two areas that caused our patients the most problems. But how could we get a mechanical sensor to distinguish between the acceptable pressure of, say, gripping a fork and the unacceptable pressure of gripping a piece of broken glass? How could we calibrate the stress level of ordinary walking and yet allow for the occasional extra stress of stepping off a curb or jumping over a puddle? Our project, which we had begun with such enthusiasm, seemed more and more daunting.\n>\n> I remembered from student days that nerve cells change their perception of pain in accordance with the body's needs. We say a finger feels tender: thousands of nerve cells in the damaged tissue automatically lower their threshold of pain to discourage us from using the finger. An infected finger seems as if it is always getting bumped---it \"sticks out like a sore thumb\"---because inflammation has made it ten times more sensitive to pain. 
No mechanical transducer could be so responsive to the needs of living tissue.\n>\n> Every month the optimism level of the researchers went down a notch. Our Carville team, who had made the key findings about repetitive stress and constant stress, knew that the worst dangers came not from abnormal stresses, but from very normal stresses repeated thousands of times, as in the act of walking. And Sherman the pig^[pg171--172; research on the pig involved paralyzing it & applying slight consistent pressure for 5--7h to spots, which was enough to trigger inflammation & kill hair on the spots.] had demonstrated that a constant pressure as low as one pound per square inch could cause skin damage. How could we possibly program all these variables into a miniature transducer? We would need a computer chip on every sensor just to keep track of changing vulnerability of tissues to damage from repetitive stress. We gained a new respect for the human body's capacity to sort through such difficult options instantaneously.\n>\n> After many compromises we settled on baseline pressures and temperatures to activate the sensors, and then designed a glove and a sock to incorporate several transducers. At last we could test our substitute pain system on actual patients. Now we ran into mechanical problems. The sensors, state-of-the-art electronic miniatures, tended to deteriorate from metal fatigue or corrosion after a few hundred uses. Short-circuits made them fire off false alarms, which aggravated our volunteer patients. Worse, the sensors cost about [$450]($1970) each and a leprosy patient who took a long walk around the hospital grounds could wear out a [$2000]($1970) sock!\n>\n> On average, a set of transducers held up to normal wear-and-tear for one or two weeks. We certainly could not afford to let a patient wear one of our expensive gloves for a task like raking leaves or pounding a hammer---the very activities we were trying to make safe. Before long the patients were worrying more about protecting our transducers, their supposed protectors, than about protecting themselves.\n>\n> Even when the transducers worked correctly, the entire system was contingent on the free will of the patients. We had grandly talked of retaining \"the good parts of pain without the bad,\" which meant designing a warning system that would not hurt. First we tried a device like a hearing aid that would hum when the sensors were receiving normal pressures, buzz when they were in slight danger, and emit a piercing sound when they perceived an actual danger. But when a patient with a damaged hand turned a screwdriver too hard, and the loud warning signal went off, he would simply override it---*This glove is always sending out false signals*---and turn the screwdriver anyway. Blinking lights failed for the same reason.\n>\n> Patients who perceived \"pain\" only in the abstract could not be persuaded to trust the artificial sensors. Or they became bored with the signals and ignored them. The sobering realization dawned on us that unless we built in a quality of compulsion, our substitute system would never work. Being alerted to the danger was not enough; our patients had to be forced to respond. Professor Tims of LSU said to me, almost in despair, \"Paul, it's no use. We'll never be able to protect these limbs unless the signal really hurts. 
Surely there must be some way to hurt your patients enough to make them pay attention.\"\n>\n> We tried every alternative before resorting to pain, and finally concluded Tims was right: the stimulus had to be unpleasant, just as pain is unpleasant. One of Tims's graduate students developed a small battery-operated coil that, when activated, sent out an electric shock at high voltage but low current. It was harmless but painful, at least when applied to parts of the body that could feel pain.\n>\n> Leprosy bacilli, favoring the cooler parts of the body, usually left warm regions such as the armpit undisturbed, and so we began taping the electric coil to patients' armpits for our tests. Some volunteers dropped out of the program, but a few brave ones stayed on. I noticed, though, that they viewed pain from our artificial sensors in a different way than pain from natural sources. They tended to see the electric shocks as punishment for breaking rules, not as messages from an endangered body part. They responded with resentment, not an instinct of self-preservation, because our artificial system had no innate link to their sense of *self*. How could it, when they felt a jolt in the armpit for something happening to the hand?\n>\n> I learned a fundamental distinction: a person who never feels pain is task-oriented, whereas a person who has an intact pain system is self-oriented. The painless person may know by a signal that a certain action is harmful, but if he really wants to, he does it anyway. The pain-sensitive person, no matter how much he wants to do something, will stop for pain, because deep in his psyche he knows that preserving his own self is more important than anything he might want to do.\n>\n> Our project went through many stages, consuming five years of laboratory research, thousands of man-hours, and more than a million dollars of government funds. In the end we had to abandon the entire scheme. A warning system suitable for just one hand was exorbitantly expensive, subject to frequent mechanical breakdown, and hopelessly inadequate to interpret the profusion of sensations that constitute touch and pain. Most important, we found no way around the fundamental weakness in our system: it remained under the patient's control. If the patient did not want to heed the warnings from our sensors, he could always find a way to bypass the whole system.\n>\n> Looking back, I can point to a single instant when I knew for certain that the substitute pain project would not succeed. I was looking for a tool in the manual arts workshop when Charles, one of our volunteer patients, came in to replace a gasket on a motorcycle engine. He wheeled the bike across the concrete floor, kicked down the kickstand, and set to work on the gasoline engine. I watched him out of the corner of my eye. Charles was one of our most conscientious volunteers, and I was eager to see how the artificial pain sensors on his glove would perform.\n>\n> One of the engine bolts had apparently rusted, and Charles made several attempts to loosen it with a wrench. It did not give. I saw him put some force behind the wrench, and then stop abruptly, jerking backward. The electric coil must have jolted him. (I could never avoid wincing when I saw our man-made pain system function as it was designed to do.) Charles studied the situation for a moment, then reached up under his armpit and disconnected a wire. He forced the bolt loose with a big wrench, put his hand in his shirt again, and reconnected the wire. 
It was then that I knew we had failed. Any system that allowed our patients freedom of choice was doomed.\n>\n> I never fulfilled my dream of \"a practical substitute for pain,\" but the process did at last set to rest the two questions that had long haunted me. Why must pain be unpleasant? Why must pain persist? Our system failed for the precise reason that we could not effectively reproduce those two qualities of pain. The mysterious power of the human brain can force a person to STOP!---something I could never accomplish with my substitute system. And \"natural\" pain will persist as long as danger threatens, whether we want it to or not; unlike my substitute system, it cannot be switched off.\n>\n> As I worked on the substitute system, I sometimes thought of my rheumatoid arthritis patients, who yearned for just the sort of on-off switch we were installing. If rheumatoid patients had a switch or a wire they could disconnect, most would destroy their hands in days or weeks. How fortunate, I thought, that for most of us the pain switch will always remain out of reach.\n\n## Internet Community Design\n\n
\n> It’s been just a month [since [Stable Diffusion](!W) was released]. What about in a year? I probably won’t be able to find my work out there because [the internet] will be flooded with AI art. That’s concerning.\n>\n> [Greg Rutkowski](https://www.artstation.com/rutkowski), [September 2022](https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/ \"'This artist is dominating AI-generated art. And he’s not happy about it. Greg Rutkowski is a more popular prompt than Picasso', Melissa Heikkilä 2022-09-16\")\n
\n\nInternet community architecture can be seen as a bi-level optimization design too.\nThere are fast and slow methods of interaction, and ideas or knowledge (or just 'memes') are created, varied, and selected on.\nThese interactions happen inside discrete communities, which are themselves created and which vary and grow.\nSo, they are instances of multilevel selection.\n\nThe [\"warrens and plazas\"](https://www.ribbonfarm.com/2010/10/27/warrens-plazas-and-the-edge-of-legibility/)^[See also: [\"A Group Is Its Own Worst Enemy\"](/doc/technology/2005-shirky-agroupisitsownworstenemy.pdf \"'A Group is Its Own Worst Enemy', Shirky 2005\"), Clay Shirky 2003/2005; [\"The Lessons of Lucasfilm's Habitat\"](http://www.massmind.org/techref/idea/lessons.htm); [The Melancholy of Subculture Society](/subculture); [\"garden and stream\"](https://hapgood.us/2015/10/17/the-garden-and-the-stream-a-technopastoral/ \"'The Garden and the Stream: A Technopastoral', Mike Caulfield\"). ([MUD](!W)/[MOOs](!W) were also a good example but I don't quite know of a writeup which brings out the warren/plaza & multi-level aspects.)] interpretation of communities is a 2-level design.\nThe classic example used to be [Usenet](!W) and [FAQs](!W): the fast daily (or even minute-by-minute) discussion would happen, and knowledge would be distilled down into FAQ entries to save time, foster a higher level of discussion, and spread the knowledge outside of participants.\nA more contemporary example is Reddit: the fast flux of link submissions and comments can be distilled into \"stickied\" (permanent) links, and a simple 'wiki' system of editable pages (enabling FAQs and more).\nSome subreddits for niche interests (often gaming or medical-related) have built up considerable knowledge bases and greatly advanced their particular niche.\nDiscord has done well in marrying the relatively slow IRC-style chat channels with the even faster-paced voice communication of gaming, while simultaneously supporting long-term use-cases through stickied ('pinned') comments which can contain complicated formatting like blockquotes & be edited, search of full channel histories, bots, allowing many channels/servers with complicated security permissions, etc.\n(It has done less well in enabling any of this to be [archived or exported](http://ascii.textfiles.com/archives/5509).)\n\nCounter-examples also exist.\nMany social networks are actively hostile to any kind of 2-level structure, emphasizing one level at the expense of another.\nThe value of each time-scale can be seen in how social networks can thrive while targeting only a single time-scale.\nFacebook is moderately hostile to long or in-depth posts; they can exist, but no real support is given to them, capabilities like formatting are minimal to nonexistent, and the UI & all affordances are 100% oriented to 'most recent first'.\nSlack chat is the evil twin of Discord: its free plan destroys history almost immediately, and should one pay through the nose for full Slack, one quickly discovers that it's email but worse.\nTwitter goes further, providing essentially no support for any kind of longform at all, never mind editable wiki/FAQ pages; but just because a technology does not enable a use case doesn't mean users don't need it, just that they'll have to do it painfully and worse than if it did; and so Twitter users struggle with threads to provide some sort of slow structure to the usual nauseatingly evanescent stream of fluff tweets.\nInstagram & TikTok are even worse, and Snapchat makes no
bones of trying to destroy even the possibility of a slow stream.\nYouTube enables slow content quite well, and is [excellent at education](https://samoburja.com/the-youtube-revolution-in-knowledge-transfer/); it seems to struggle with fast, though---comments were a notorious cesspit for many years, and chatting or interactive streaming is something it's been trying to catch up with compared to pioneers like Twitch.\n\nWeird hybrid examples exist. Consider 4chan, famously the meme-maker to the Internet.\nA chan consists of flat 'threads' of comments one after another (any tree being implicit), ordered by most-recently-updated thread, with the last thread being deleted after a certain timeout; threads are nested within a general topic or 'board'.\n4chan threads typically move fast and disappear within days.\nAt first glance, this might seem to preclude any kind of progress. But 4chan is nevertheless famous for incubating memes and projects over the course of many threads (all long since deleted by the final success). How?\nPart of it is that successful threads may export to other boards; then other boards export to slow sites like specialist wikis or other social media networks, like Twitter, Facebook, and Reddit.\nSo there is a 3-level selection: a comment within a thread, interesting threads within the board, and interesting board contents within the Internet.\nThreads select brutally for memetic efficiency (the ticking clock makes chan threads almost literally a [viral evolution lagoon](https://www.science.org/content/blog-post/evolution-action---literally \"Phage-assisted continuous evolution\")), and while this selection is extremely error-prone and inaccurate, there are *so* many comments & memes, and fans of a meme will maintain personal archives and keep posting variants (being anonymous, they are free to keep posting without care or consequence), that the memetic virus can keep mutating and evolving until perhaps it starts percolating outwards.\n(And if it doesn't, oh well, plenty more where that came from!)\n\nA broad perspective on this is to think of *graphs* of communities, where each node is a community of a certain size operating at a certain speed with differing norms about quality/topic/esthetics/anonymity etc.\nIf we think of each community as generating & filtering memes, then there are tradeoffs between size, accuracy of filtering, and throughput.\nIf you have 1 very large community, it will have extremely accurate selection on popularity (fitness) of a given meme, because it is averaging the assessments of almost everyone (despite steeply diminishing returns to spending more people to do selection); however, it will struggle to keep up with potential throughput and [its queues will overflow](https://blog.acolyer.org/2015/04/29/applying-the-universal-scalability-law-to-organisations/) creating a bottleneck impeding [bursty collaboration](/doc/sociology/2017-riedl.pdf \"'Teams vs. 
Crowds: A Field Test of the Relative Contribution of Incentives, Member Ability, and Emergent Collaboration to Crowd-Based Problem Solving Performance', Riedl & Woolley 2017\") of exciting new ideas, and where will new memes come from if slightly-inferior variants are harshly punished immediately compared to the current fit baseline?\nOver-exploitation will become boring, driving away users---at best, stasis; at worst, decadence, exhaustion, & [collapse](https://psyarxiv.com/dt6bx \"'Imitation-driven Cultural Collapse', Duran-Nebreda & Valverde 2021\").\nIf you have lots of tiny communities, they will undergo extreme levels of \"genetic drift\" due to randomness in popularity, and the fitness of their best meme will typically be quite poor; but on the other hand, it is likely that at least one of those communities has random-walked its way to something neat (if only you could figure out *which one*...).\nDepending on how these communities are connected, these neat new variants may be confined to a ghetto and eventually die out [due to randomness](!W \"Gambler’s ruin\") (either themselves or perhaps the communities) if they can't grow fast enough to [reach fixation](!W \"Fixation (population genetics)\"); but if you make them all hyper-connected, you may just wind up constructing the 1 large bottlenecked community again!\n\nThe architecture of speed and size is probably responsible for a lot of the 'feel' of social networking.\nTwitter, for example, is lauded for giving access to the latest by the greatest anywhere, but produces a certain exhaustion and apathy and chronic low-grade anxiety and lowest-common-denominator humor, and this is the flipside: because it emphasizes only the *latest*, there is no progression or collaborative creation (people can advertise on Twitter to collaborate elsewhere, or ask specific questions, or learn about something on Twitter, but there is nothing like a \"Twitter FAQ thread\" or long-term collaboration *on* Twitter), and because it can be from anywhere, the norms are unpredictable and \"context collapse\" means an entire outside community could decide to coordinate to make you the target of the latest '5-minute hate'.\n\nPavlogiannis et al 2018 ([\"Construction of arbitrarily strong amplifiers of natural selection using evolutionary graph theory\"](https://www.nature.com/articles/s42003-018-0078-7)) considers this sort of scenario and finds that good graph structures & distributions tend to look like a hierarchical \"star\" or \"hub-and-spoke\" (or perhaps the ubiquitous \"bow-tie\").\nThere are many small 'local' nodes at the periphery, which focus on 'generating' [innovations](/note/small-groups \"'The Effectiveness of Unreasonable Small Groups', Branwen 2021\"), and these feed in a generally one-way direction into progressively larger nodes focused on 'selecting', which eventually reach a few 'global' nodes which are connected to all the peripheries again.\n(Like in biological evolution, the number of nodes or 'populations' [can matter a lot](https://www.biorxiv.org/content/10.1101/2021.09.09.459561.full \"‘Effective population size for culturally evolving traits’, Deffner et al 2021\"), as does the history of the populations, which may have an ['effective' population count](!W \"Effective population size\") much smaller than the visible one.)\n\nLarge masses of raw material, be they writing, or images, or videos, or sick skating techniques, are collaboratively produced, proceeding from rough draft through editing to the final perfected version.\nAs a kid I
wondered vaguely how famous intellectuals could have \"30 boxes of notebooks\" or \"20 volumes of collected letters\" or \"10 volumes of unpublished papers\"---wasn't writing and thinking *hard*, how did they have *time* for all that incessant note-taking and letter-writing, while still living and researching and actually writing the published work they were famous for?\nThe answer, it turned out, is simply that writing is a sequential collaborative process: those letters and unpublished papers were part of a [pipeline](/note/pipeline \"'Leaky Pipelines', Branwen 2014\"); it was not \"letters *vs* books\" but \"letters *into* books\".\nA heuristic like the [rule of three](!W \"Rule of three (computer programming)\") (\"if you find yourself explaining something for the third time, write it up\") is about deciding what to promote up a level: the repetition implies that it's important, and conveniently, three rough drafts are already available.\n\nThis will suddenly all sound eerily familiar: it is our old friend [reinforcement learning](#rl) and its explore-exploit tradeoff, with outer evolutionary losses and inner learned losses, all over again!\nToo small a batch size and you don't learn anything; too large, and it takes an eternity to improve; too little exploration & too much greedy exploitation, one learns unnecessarily slowly & may get trapped in a local optimum, but too much exploration ensures one never learns anything before bouncing off to the next random point; breaking up into multiple agents and populations can cover more ground than a single uniform population but only if they are balanced properly and transfer improvements; hierarchical structure can enable deep exploration and modularity, where a monolithic structure flails around locally but can be done poorly; an evolutionary loss is extremely inefficient compared to a learned inner loss explicitly optimizing for proxy goals, yet, without the evolutionary loss, the proxy goals may be wrong; and so on.\nBut with our graph to explore and amplify, and individual humans as myopically Bayesian agents, we discover our [overall community looks like](/timing#try-try-again-but-less-less) a giant [Thompson sampling](!W) engine!\n\nSo this offers a general theory of Internet community design: one wants an architecture which is hierarchical, supporting a smooth flow of content from a wide variety of small peripheral nodes operating on fast time-scales with their own unique norms fostering [social contagion of ambition](/review/bakewell#social-contagion) with incentives for directed exploration of new niches or uncovered territory^[An interesting example of designing a community to target new territory is [vTaiwan](https://www.technologyreview.com/2018/08/21/240284/the-simple-but-ingenious-system-taiwan-uses-to-crowdsource-its-laws/ \"'The simple but ingenious system Taiwan uses to crowdsource its laws: vTaiwan is a promising experiment in participatory governance. But politics is blocking it from getting greater traction', Horton 2018\"), which disables reply-comments to avoid over-exploitation of claims, and uses vote graphs to build a map of claims and highlight 'empty spots' that people can leave new comments in (since they can't just re-argue the old ones). 
So this offers a general theory of Internet community design: one wants an architecture which is hierarchical, supporting a smooth flow of content from a wide variety of small peripheral nodes operating on fast time-scales with their own unique norms fostering [social contagion of ambition](/review/bakewell#social-contagion), with incentives for directed exploration of new niches or uncovered territory^[An interesting example of designing a community to target new territory is [vTaiwan](https://www.technologyreview.com/2018/08/21/240284/the-simple-but-ingenious-system-taiwan-uses-to-crowdsource-its-laws/ "'The simple but ingenious system Taiwan uses to crowdsource its laws: vTaiwan is a promising experiment in participatory governance. But politics is blocking it from getting greater traction', Horton 2018"), which disables reply-comments to avoid over-exploitation of claims, and uses vote graphs to build a map of claims and highlight 'empty spots' that people can leave new comments in (since they can't just re-argue the old ones). This seems to be working better than some other instances like [`#xkcd-signal`](https://blog.xkcd.com/2008/01/14/robot9000-and-xkcd-signal-attacking-noise-in-chat/ "ROBOT9000 and #xkcd-signal: Attacking Noise in Chat").], upwards through a hierarchy of larger slower nodes with gradually more intense filtering & more conventional norms (including escalating reliance on reputation), to a final global central 'arena' where the best can duke it out for transmission back to all peripheral nodes.
The design should not privilege a particular time-scale, and should enable linking and copying through the various time-scales; nodes should be constrained in size, and broken up if necessary to keep them at an efficient size.

One could imagine a Reddit which integrated chat, links, & wiki pages to create a smoother transition between nodes and directly support promotion (a toy sketch of such a promotion ladder follows this list):

#. a subreddit has a chat channel which defaults to anonymous and is not logged or archived, with the minimum possible moderation;
#. blocks of fast-time chat (seconds to minutes), however, can be highlighted and right-clicked to automatically turn into a comment on a link or their own text post, 'promoting' them up one level to slower-time, where they can be discussed over hours to days;
#. such comments, perhaps on some topic of sudden new interest, may be further selected, and transcluded into a new wiki page devoted to that topic (crude, but a starting point as a comment index), which can then be hand-edited later to add in additional commentary, links, citations, etc;
#. these pages become a long-term resource for that subreddit, and perhaps turn out to be of broader interest, being crossposted to other, bigger, subreddits,
#. and, amazingly enough, eventually reach /r/all, where they are pushed to all users as part of their default feed---adding a new thing to the global 'common knowledge', which (return to #1) some other subreddit chat might idly discuss for a while and then make an unexpected breakthrough, kicking off a new cycle.
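
As a toy illustration of that promotion ladder (the level names, vote thresholds, and mechanics below are invented for the sketch, not an actual Reddit feature), the essential data model is just an item, its current level, and a promotion rule whose bar rises at each slower, larger level:

```python
from dataclasses import dataclass, field

# Hypothetical promotion ladder from the chat-integrated Reddit sketched above.
LEVELS = ["chat", "comment", "wiki", "crosspost", "frontpage"]

@dataclass
class Item:
    text: str
    level: int = 0                      # index into LEVELS
    votes: int = 0
    history: list = field(default_factory=list)

    def promote(self):
        """Move one level up the ladder, preserving provenance."""
        if self.level < len(LEVELS) - 1:
            self.history.append(LEVELS[self.level])
            self.level += 1

def maybe_promote(item: Item, threshold_per_level: int = 10):
    # Each level demands more accumulated votes: slower time-scale, harsher filter.
    if item.votes >= threshold_per_level * (item.level + 1):
        item.votes = 0                  # reset: the next, larger audience re-judges it
        item.promote()

if __name__ == "__main__":
    post = Item("clever observation from chat")
    for burst in [12, 25, 31, 44]:      # votes received at each successive stage
        post.votes += burst
        maybe_promote(post)
    print(LEVELS[post.level], post.history)
    # -> frontpage ['chat', 'comment', 'wiki', 'crosspost']
```
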

---
title: 'The Hyperbolic Time Chamber & Brain Emulation'
description: A time dilation chamber as thought experiment on the power of pure thought, with comparison to computer AGI advantages/disadvantages.
created: 2012-08-29
modified: 2018-09-02
status: finished
previous: /tank
next: /rnn-metadata
confidence: likely
importance: 9
cssExtension: drop-caps-kanzlei
...

> A time dilation tool from an anime is discussed for its practical use on Earth; there seem to be surprisingly few uses, and none that would change the world, because of the severe penalties humans would incur while using it and because basic constraints like Amdahl’s law limit the scientific uses. A comparison with the position of an Artificial Intelligence such as an emulated human brain seems fair, except that most of the time dilation disadvantages do not apply or can be ameliorated, and hence any speedups could be exploited quite effectively. I suggest that skeptics of the idea that speedups give advantages are implicitly working off the crippled time dilation tool and not making allowance for the *dis*analogies.

_[Dragon Ball Z](!W)_, a popular (but mediocre) shonen fighting anime, includes a cute bit of SF in it---the [Hyperbolic Time Chamber](https://dragonball.fandom.com/wiki/Hyperbolic_Time_Chamber) (HTC), which can be thought of as a reverse [twin paradox](!W) in which time speeds up for 2 people: the HTC opens and closes once a day realtime, but inside its little pocket universe, a full year passes for 2 people, giving a 365× speedup. There's no such thing as a HTC-within-a-HTC, so uses are limited to just that one 365× speedup---no 365^2^ speedups.

Ignoring the _DBZ_-specific aspects of the HTC like the person limit or increased gravity or air and temperature changes, one wonders: what would one do with a HTC in the real world?

# Uses for time acceleration

In _DBZ_, the only use seems to be for training montages to let characters power up to fight a new alien or martial artist, but world-destroying martial artists seem to be rare in the real world, so we couldn't use it for *that*.

Could we use it for regular martial arts training? The _DBZ_ article for the HTC mentions no non-emergency use, and a little thought leads us to conclude that the answer is probably not: inside the HTC, time passes as normal, which means that you don't save any time. All the HTC is doing is rearranging relative time between groups. If you step in, you still age a full year before stepping out, and you will now die a year earlier by the realtime calendar. So what's the point? We can think of a few uses---imagine someone who gets injured just before the Olympics---but let's face it, that may be convenient for a few people, but it's hardly changing the world. A whole time-accelerated pocket universe... Surely we can think of something less pointless than tweaking athletics?

# Downsides

Actually, it's *worse* than pointless---the HTC is a super-[supermax prison](!W): you cannot leave at any point before the year is up, you cannot communicate in any way, nothing goes in or out, and you have only what you brought with you (and always what you brought with you).

## Autarchy

Under such conditions, a year in the HTC could well be considered "cruel and unusual punishment"; no doctor would volunteer for it, so any prisoners in the HTC face a serious risk of death from any cause unless a fellow prisoner that day/year had medical training. The food will also suck, as any food will have to be storable for a year (it wouldn't do to starve to death a day before the door opens back to the real world); you can't grow your own because there is no apparent sun, soil, or seasons inside the HTC, and while you could probably bring in a greenhouse & soil and fertilize with recycled food, how are you going to *power the lights* in your greenhouse? Drag in a miniature nuclear power plant or a [radioisotope thermoelectric generator](!W)? And we haven't even considered how much time one would spend (waste) on this small-scale agriculture; maintaining [Biosphere 2](!W) was a full-time job for many highly-skilled people, and if that were true for the HTC, there would hardly be any point in it.

If one punted on the problems of maintaining a high quality of life and posited a dedicated researcher, well, the HTC is still not useful.
They will find it hard to take with them an entire library or laboratory; many ingredients are too expensive or perishable to buy in advance just because the researcher *might* need them, but if they don't have access to pretty much everything, they'll quickly hit some sort of barrier where one email or order would let them finish a project, but that email can't be sent for up to a year. (Imagine a researcher who enters the HTC---and his laptop's hard-drive dies. Oops. Hope he had backups or spares, of his data and his laptop and everything else for that matter.) Omitting these concerns, research is a social process in the sense that one is often discussing or explaining or defending the research, and without these interactions it is easy to go down blind alleys, make minor-seeming but fatal mistakes^[The inability to critique your own results or ideas as capably as someone else can seems to have deep roots in psychology and to support evolutionary accounts of reasoning as evolved primarily for arguing and convincing other people, not truth-seeking. See also "rubber-ducking".], wind up reinventing something standard in another field, etc. One can easily waste a month this way, and so a year. A group would help, but groups are susceptible to groupthink and will still go down blind alleys or simply lack relevant expertise. (In this respect, the Millennial Maths in [Neal Stephenson's](!W "Neal Stephenson") _[Anathem](!W)_ are highly unrealistic; any group of academics which closeted themselves for a millennium would overwhelmingly likely be a sheer waste of human capital.)

One might wonder about other kinds of education in the HTC like mathematics, but all the above points apply to any reason to live in the HTC: why would you accept all those burdens to spend a year learning something... when you could just live that same year in the real world at much less cost and a far higher standard of living? It certainly *would* be nice to go into the HTC for a few weeks and come back with a dozen PhDs---but not if you emerge aged 40, having lived the best years of your life in a prison cell and probably deep in debt too!

## Aging

Speaking of a few years in the HTC, what about biological aging? People don't ordinarily spend half their lives acquiring multiple degrees outside the prison of an HTC, so why would they voluntarily do so inside it? It's the same trade, after all: half your life for multiple degrees. This point has been made in fictional treatments of time-acceleration such as [R.A. Lafferty's](!W "R.A. Lafferty") classic short story ["The Six Fingers of Time"](https://www.gutenberg.org/files/31663/31663-h/31663-h.htm), where the protagonist is given the ability to slow down time by a mysterious ancient conspiracy, and while he tries to uncover their secrets, he dies of old age---they had let him slow down time, but not his inherent natural aging. Or [William Sleator's](!W "William Sleator") YA novel [_Singularity_](!W "Singularity (William Sleator novel)"), where the original discoverer of the HTC dies before his family expects it, looking suspiciously like an old man; the implication is that he spent so much time in the HTC investigating it that the normal aging while inside it used up a good chunk of his lifespan. Ironically, if he had been able to survive another month, he would have seen the resolution of the mystery. The protagonist benefits somewhat from his own year in the HTC, but for idiosyncratic reasons.
Similarly, in [Greg Egan's](!W "Greg Egan") [_The Clockwork Rocket_](!W "The Clockwork Rocket") the protagonist discovers a method of time acceleration which she resolves to use to save her world from certain doom by launching a generation ship to be accelerated and discover some salvation; but she and the first generation (a good chunk of their world's scientific community) fully expect to perish of old age long before the generation ship returns just years later in realtime. In a more conventional example, we may admire how prison & revenge give the Count in _[The Count of Monte Cristo](!W)_ a great deal of focus during his educational prison stay, but how many of us would agree to be imprisoned the same way if there were no hidden fortune waiting for us at the end?

This is a fundamental issue and probably why time-acceleration is an underused trope in science fiction compared to time dilation or time travel: the downside is simply too apparent.

# Some uses

## Zero-sum competition

The point about the HTC 'rearranging relative time' for athletes and the original use in _DBZ_---training to save the world when every minute counts---suggests one class of problems: things which are *extremely* time-sensitive with multiple competing groups and zero-sum or winner-take-all dynamics.

With large sums of money at stake, we can hand-wave the super-supermax prison points: oil companies only have to pay oil rig workers a few score or hundreds of thousands of dollars for working several weeks at a time on oil rigs, and scientists and astronauts compete for position in isolated facilities such as Antarctic bases or the International Space Station. All of these are far less isolated or burdensome than the HTC, but by more than a few factors? It seems unlikely. So a few million dollars may suffice to cover the costs of a small group, especially if they can reuse infrastructure from previous days/years.

Are there business problems where a year's headstart is worth at least a few million dollars? Sure! Many programming tasks come to mind: would Google pay a few million to lock up the core Android coders to take care of a year's worth of outstanding bugs and to-do items? Would Apple do something similar? What about any hedge fund? It seems plausible that every day of a HTC could be booked or even auctioned off.

The negatives here include the lack of communication and iteration: when the group heads through the door, that's the last they'll hear from the world for a year. They can't release a prototype at the 6-month mark and see how it does after a month. If they fall to groupthink and take the wrong approach, there are no outsiders who will say that their approach is crazy and elaborate and why don't they just do the standard thing? Worse, if they discover they forgot a key piece of documentation or hardware, or they run out of chips, or they need a particular expert or something, the next time they can get it is... a year later. Oops. Hope that wasn't a fatal error. (Even if they had communication, it'd only bound the losses: if it takes a second to load a webpage in realtime, then it will take them >365 seconds or >6 minutes.)
This lack of iteration runs counter to many business styles and is entirely antithetical to modern tech businesses, which prize constant feedback and the ability to change ideas & approaches on a dime.

Still, with multi-terabyte hard drives, one could just take a copy of *all* documentation and source code (or perhaps bring along a few Internet Archive-style "petaboxes" and store a copy of a small fraction of the Internet), and for some tasks like stock-market research, it's plausible one could bring everything one needs. Hedge funds would probably benefit from being able to send in their quants for a year of concentrated research and scoop the competition.

## Non-zero-sum uses

The more concrete a field, the less the benefit. Most commercial services would be impossible: you can't cut someone's hair in the real world from the HTC, although with loads of equipment you could work on a robot which cuts hair. You can't run clinical drug experiments on a group of patients from inside a HTC either; for that matter, you'll have a hard time bringing along rats or monkeys. But you could read a lot of papers on rats. (But not necessarily do much; for example, meta-analyses will be hard because frequently authors do not include the exact numbers one needs, and so one has to contact them---exactly what can't be done in the HTC.) Pure mathematicians might benefit, but by and large, mathematics is not *so* competitive & time-sensitive that sticking some mathematicians into the HTC would be worth the premium.

Which is not to say there are *no* concrete uses. One cute example would be storage of goods: instead of an [art & wine](https://www.nytimes.com/2012/07/22/business/swiss-freeports-are-home-for-a-growing-treasury-of-art.html "Swiss Freeports Are Home for a Growing Treasury of Art") [free port](!W), just stick your wine & cheese & other goods in the HTC and let them age a year every day until ripened to perfection. More valuably, one could bypass [accelerated aging](!W) tests and just age a product directly; want to know if the [Clock of the Long Now](!W) will work or the [Rosetta disk](!W) will remain readable for 10,000 years under ideal conditions? That's just 10,000 days or 28 years away. (We could also expect an efflorescence of counterfeit art, documents, and goods for the same reasons.) Better yet, want to run fast primate aging experiments? If you can front the money and either automate the care & feeding of the subjects or find lab technicians willing to spend their lives in prison, you can run as many as you please.

These wouldn't be revolutionary improvements, though (with the exception of aging research, which might revolutionize human society if the results were useful).

# Self-contained vs not

More generally, we could say that [Amdahl’s law](!W) applies to use of the HTC: any task has serial and parallel elements, but if some elements are made cheaper or even free, the time to accomplish the task still depends on the other bottleneck elements. Elements which can be done in complete isolation *and* which benefit from relative speedups correspond to parallel elements, and elements which must be done in the real world correspond to the serial elements. With a HTC, the HTC-elements will quickly speed up, but tasks will now bottleneck on real-world tasks. (Imagine Google sends its Android programmers into the HTC and they return a day later bearing a repository groaning with new patches; the features still have to be tested in a real-world context, reviewed, infrastructure updated, and finally actually transmitted to the customers who may begin using them.)
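
To restate the bottleneck in standard Amdahl’s-law terms (the specific numbers here are purely illustrative): if a fraction $p$ of a project can be done inside the HTC at acceleration $s = 365$, while the remaining $1 - p$ must happen in the real world at normal speed, the overall speedup is

$$\text{speedup} = \frac{1}{(1 - p) + p/s},$$

so even a project that is 90% HTC-able ($p = 0.9$) finishes only about $1/(0.1 + 0.9/365) \approx 9.8\times$ faster, and no acceleration, however extreme, can push it past the $1/(1 - p) = 10\times$ ceiling set by the real-world portion.
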
One opportunity is to treat Amdahl’s law as a positive, and to consider the computer version of an isolated team in the HTC beavering away on a project: a bunch of servers working on an extremely hard serial problem. For example, simulating a long evolution of [protein folding](!W). Many problems in scientific computation or [operations research](!W) where there is more than a day's margin might also benefit from what is effectively a super-fast processor with 1-day latency. However, such a supercomputing facility in an HTC faces problems with replacement parts and getting the electricity such computations will consume (and how much would you have to pay the sysadmins & technicians to be imprisoned for a year?), and so its capacity will come at a premium compared to solving the equivalent problem in the real world; a problem like hash cracking, which is trivially parallelizable, would not benefit from such a facility. Electricity is the dominant cost of computing power these days, so a HTC must save on electricity or justify its cost premium. Instead of throwing one really expensive HTC server at the problem for a year, throw 365 cheap power-efficient servers at it for a day, as many tech companies are able to do, or just run it on a cloud computing platform.

These points do not apply to any computation which is inherently serial and cannot be run on more than a few computers. One such category of non-parallelizable problems (assuming NC ≠ P) is the complexity class [P-complete](!W), which includes such economically important tasks as [linear programming](!W) optimization. However, the important unparallelizable problems (at least linear programming) typically have [very fast](/aria#faster) approximate or heuristic solvers, and optimization problems tend to asymptote and experience severely [diminishing returns](!W). Is there a problem where the time limit is so tight and the additional optimization so valuable that it would pay for a year of premium power consumption & computation? I don't know. Maybe there is.

So, many business applications would not benefit, many research tasks would not benefit, and I haven't thought of any important areas of life which would benefit from a HTC. Some people would find it convenient to re-arrange their lives even at some cost, arranging big blocks of time for some self-contained things (for example, working on one's own projects), but the benefit would be limited; I would analogize to [modafinil](/modafinil), which can be employed to free up a block of 8 hours (skipping a night of sleep) but at a cost (money). If one believes that modafinil use comes with no health penalties or recovery sleep, it arguably is *better* than a HTC because it can be used in more convenient chunks and you remain in the real world while using it, running at realtime.
Yet, while modafinil is popular among a few groups, it has not revolutionized the world.\n\nIn general, a world with one or many HTCs would look a great deal like our own, although in some areas, there will be sudden bursts of progress as HTC groups return from their expeditions with their prizes and likely a one-time economic boost as HTC-specific applications are discovered.\n\n# Real HTCs\n\nWell, cute and interesting, but why do we care about this SF trope from _DBZ_?\n\nBecause the HTC can be analogized to an emulated brain or an \"upload\"! The 365× speedup of people in the HTC could be the speedup of a brain on a supercomputer after considerable optimization^[Although it's unlikely that the exact speedup would be near 365×, as power & heating constraints dominate the problem; [GeraldMonroe](https://www.lesswrong.com/posts/PvdvsZQwDr2PD3wFW/dragon-ball-s-hyperbolic-time-chamber7c10) points out that a straightforward comparison of transistor vs neuron switching speed leads to factors like 25 million, and modern CPUs are limited in speed mostly by heat dissipation issues---the standard 2--4GHz CPU could run at 5GHz+ if one had powerful cooling. Heat concerns led [Keith Henson](!W) to [argue that](https://web.archive.org/web/20120415171945/http://hplusmagazine.com/2012/04/12/transhumanism-and-the-human-expansion-into-space-a-conflict-with-physics/) datacenters of uploaded brains would eventually relocate to the deep sea for maximal cooling (and hence, speed).]. (One could argue that early uploads will run at far less than real-time as they will be created as soon as hardware is just powerful enough to run them at all, and be completely uncompetitive & research projects; but then again, their creation could come long after the hardware exists, waiting on bottlenecks like scanning of brains---the [\"hardware overhang\"](https://www.lesswrong.com/tag/computing-overhang) question.) A computer, like a HTC, cannot be nested to give a speedup; a virtual computer will usually run slower relative to realtime. A brain on a computer without any peripherals like a robot will be isolated from the real world, just like the people in the chamber, and so on.\n\nWe found the HTC not useful in practice; does this conclusion also follow for uploads? Should we expect uploads to struggle in the marketplace, finding valued niches but not causing increases in world GDP growth rates or any sort of Singularity?\n\n# Emulations are not HTCs\n\nWhile the similarities are striking, so are the dissimilarities:\n\n#. a computer can have communications with the Internet & world; a 365× slowdown may be painful, but it is better than a fixed delay of 0--365 *days*.\n\n Even when the slowdown hits, there is the option---much reduced in the HTC---of switching to an entirely different task. (Similar to the computing world's reaction to clock speed stagnation and the rise of multi-cores, with the attendant pressure on Amdahl’s law[^Amdahl]: process-level parallelism. You can't do just one thing faster, so you might as well do many things slower.) This eliminates many of the objections. If it really can't find anything to do with its time, an emulation can always slow itself down to real-time.\n#. the overhead of living in the Hyperbolic Time Chamber is reduced; a computer in the real world benefits from all the real world infrastructure like power plants or semiconductor chip fabs. There are some power savings from [underclocking](!W), but there's not much reason to otherwise run as fast as possible. 
(This permits many more minds to run sped-up as compared to humans living in the HTC, reducing further the disadvantage of #1 and also increasing the value of being sped-up.)
#. A person in the HTC is a relatively fixed quantity, especially since many resources will be unavailable; an emulated brain has access to those resources per #1, but also has many options different from a regular human. (A much-discussed topic; see eg. [Sotala 2012](https://philpapers.org/archive/SOTAOA.pdf#miri "Advantages of Artificial Intelligences, Uploads, and Digital Minds").)
#. An emulated brain is free of a major time limit for regular humans: aging. While a human could not afford to get 12 PhDs even if a HTC existed---because that would consume the most productive decades of his life---an emulated brain could. This breaks the symmetry further.

Between these 4 disanalogic points, an upload avoids some of the disadvantages that render the HTC noncompetitive and gains some advantages which may make it more competitive, and which make sudden improvements a much more general phenomenon.

Expecting any dramatic changes from uploads or AGIs in general has been mocked by critics as an over-valuing of "brains in a box" or, pace Gene Wolfe, [magical thinking](!W) ("The would-be sorcerer alone has faith in the efficacy of pure knowledge"). If we look at such criticism, do the arguments seem to assume a model of thinking in which the upload/AI is trapped in the HTC, or one in which it resembles an upload/AI outside the HTC?

[^Amdahl]: Amdahl’s law is also relevant to the economics of uploaded brains: suppose one believed that specialized or ["tool"](https://www.lesswrong.com/tag/tool-ai) AIs will always outperform any uploaded brain or AGI at a specific task, and every improvement that speeds up the uploads/AGIs improves the tool AIs just as much, such that the uploads/AGIs never surpass the tool AI; does this imply that there will be no uploads/AGIs outside niches like research, as humans using tool AIs are more profitable? [Holden Karnofsky](https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si) seems to think something similar when he doesn't think that competitive pressure will force people running tool AIs to eventually switch to running AGIs; [Nick Szabo](https://unenumerated.blogspot.com/2011/01/singularity.html) explicitly believes uploads/AGIs can never be profitable given tool AI competition:

    > Even if there was such a thing as a "general intelligence" the specialized machines would soundly beat it in the marketplace. It would be far from a close contest.

    I disagree. The market is not purely tool AI vs AGI. *Humans* do not increase their speed even if tool AIs are increasing their speed arbitrarily. Therefore, a human+tool-AI system's performance asymptotically approaches the limit where the tool-AI part takes zero time and the human part takes 100% of the time. (In Amdahl’s-law terms: no matter how much the tool-AI fraction is accelerated, the total speedup is capped by the un-accelerated human fraction.) Time pressures may force a shift to ever more tool AI systems and eventually tool-AI+AGI systems when that becomes possible.
([\"Greater use of highly adaptable and flexibly autonomous systems and processes can provide [substantial] time-domain operational advantages over adversaries who are limited to human planning and decision speeds...\"](https://defenseinnovationmarketplace.dtic.mil/wp-content/uploads/airforce/TechnologyHorizonsVol1_PublicReleasesmall.pdf \"Report on Technology Horizons: A Vision for Air Force Science & Technology During 2010-2030\")) The moment that algorithmic progress or Moore’s law means that an AGI even slightly outperforms a human at using the tool-AI, the same economic reasons you were counting on as your salvation suddenly turn on you and drive the replacement of any humans in the loop. Since humans are a known fixed quantity, if an AGI can be improved---even if at all times it is strictly inferior to a tool AI at the latter's specialization---then eventually an AGI+tool-AI system will outperform a human+tool-AI system (barring exotic unproven assumptions about asymptotic limits).\n\n Attempts to evade this by splitting up or combining tool AIs either don't avoid this logic or wind up accepting the conclusion: if every human skill has been transferred to tool-AIs, then a complex of tool-AIs now forms an AGI which outperforms all humans by definition; if not every human skill has been transferred, such as \"employing tool-AIs as most appropriate for the moment\", then there is the large economic niche for AGIs which I have identified with my Amdahl’s law argument. So either there exist AGI which outperform all humans, or there exists economic pressure to use AGI. For example, if one argued that a complex of tool-AIs would not share worldviews or data appropriately and need a human to coordinate them, well, why can't an AGI do this and be superior to the humans per Amdahl’s law?\n\n What human is in the loop on high frequency trading? Who was in the loop when Knight Capital's market maker was losing hundreds of millions of dollars? The answer is that no one was in the loop because humans in the loop would not have been economically competitive. That's fine when it's \"just\" billions of dollars at stake and companies can decide to take the risk for themselves or not---but the stakes can change, externalities can increase.\n\n Here's another near-future test/example: how do we humans deal with [drones](!W \"Unmanned combat aerial vehicle\")? Drones are exploding in popularity, are increasing their capabilities constantly, and are coveted by countless security agencies and private groups for their tremendous use in all sorts of roles both benign and disturbing. Just like AIs would be. The tool vs general AI distinction maps nicely onto drones as well: a tool AI corresponds to a drone being manually flown by a human pilot somewhere, while a general AI would correspond to an autonomous drone which is carrying out some mission (blast insurgents?). So, here is a near-future test of the question 'are people likely to let tool AIs 'drive themselves' for greater efficiency?'---simply ask whether in, say, a decade there are autonomous drones carrying tasks that now would only be carried out by piloted drones. If in a decade we learn that autonomous drones are killing people, then we have an answer to our tool AI question: it doesn't matter because given a tool AI, people will just turn it into a general AI.\n\n# See Also\n\n

- [Slowing Moore's Law](/slowing-moores-law "Weak points in the networks powering technological progress: chip factories."){.backlink-not}
- [Why Tool AIs Want to Be Agent AIs](/tool-ai "AIs limited to purely computational inferential tasks (Tool AIs) supporting humans will be less intelligent, efficient, and economically valuable than more autonomous reinforcement-learning AIs (Agent AIs) who act on their own and learn to take actions over choice of computation/data/training/architecture/hyperparameters/external-resource use."){.backlink-not}

# External Links

- Discussion: [LessWrong](https://www.lesswrong.com/posts/PvdvsZQwDr2PD3wFW/dragon-ball-s-hyperbolic-time-chamber); [/r/Rational](https://www.reddit.com/r/rational/comments/9aaxbk/hsfedumkth_the_hyperbolic_time_chamber_as_brain/)
- ["Messing With Time: Why The Flash is in Hell"](https://jessegalef.com/2013/01/27/messing-with-time-why-the-flash-is-in-hell/)
- ["That Alien Message"](https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message), Eliezer Yudkowsky
- ["Slow Tuesday Night"](https://www.baen.com/Chapters/9781618249203/9781618249203___2.htm), [R.A. Lafferty](!W)
- [_Worth The Candle_](https://www.royalroad.com/fiction/25137/worth-the-candle), Alexander Wales (protagonists make extensive magic-aided use of an HTC)