mrblobby123 committed
Commit 20ae60d · Parent: 60ad811

Upload aisafety.info.jsonl with huggingface_hub
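For reference, the sketch below shows one way a JSONL file like this can be pushed to a Hugging Face dataset repository with `huggingface_hub`. The repo id and local path are illustrative placeholders, not details taken from this commit.

```python
# Hedged sketch: uploading a JSONL file (one JSON record per line) with huggingface_hub.
# "your-username/your-dataset" and the local file path are hypothetical placeholders.
from huggingface_hub import HfApi

api = HfApi()  # assumes you have already authenticated, e.g. via `huggingface-cli login`
api.upload_file(
    path_or_fileobj="aisafety.info.jsonl",    # local JSONL file to upload
    path_in_repo="aisafety.info.jsonl",       # destination filename inside the repo
    repo_id="your-username/your-dataset",     # placeholder dataset repo id
    repo_type="dataset",
    commit_message="Upload aisafety.info.jsonl with huggingface_hub",
)
```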

Files changed (1)
  1. aisafety.info.jsonl +1 -0
aisafety.info.jsonl CHANGED
@@ -303,3 +303,4 @@
  {"source": "aisafety.info", "source_type": "markdown", "url": "https://aisafety.info?state=7673", "title": "What is \"Do what I mean\"?", "authors": ["Stampy aisafety.info"], "date_published": "2023-07-04T01:02:25Z", "text": "“Do what I mean” is an alignment strategy in which the AI is programmed to try and do what the human meant by an instruction, rather than following a literal interpretation of the explicit instruction. This is akin to following the spirit of the law over the letter. This approach potentially helps with alignment in two ways: First, it might allow the AI to learn more subtle goals, which you might not have been able to explicitly state. Second, it might make the AI [corrigible](http://aisafety.info?state=87AG), willing to have its goals or programming corrected, and continuously interested in what people want (including allowing itself to be shut off if need be). Since it is programmed to \"do what you mean\", it will be open to accepting correction.\n\nThis approach contrasts with the more typical “do what I say” approach of programming an AI by giving it an explicit goal. The problem with an explicit goal is that if the goal is misstated, or leaves out some detail, the AI will optimize for something we don’t want. Think of the story of King Midas, who wished that everything he touched would turn to gold and died of starvation.\n\nOne specific \"Do what I mean\" proposal is [\"Cooperative Inverse Reinforcement Learning\"](https://www.lesswrong.com/tag/inverse-reinforcement-learning), in which the goal is hidden from the AI. Since it doesn't have direct access to its reward function, the AI will try and discover the goal from the things you tell it and from the examples you give it. Thus, it slowly gets closer to doing what you actually want.\n\nFor more information, see [Do what we mean vs. do what we say](https://www.lesswrong.com/posts/8Q5h6hyBXTEgC6EZf/do-what-i-mean-vs-do-what-i-say) by Rohin Shah, in which he defines a \"do what we mean\" system, shows how it might help with alignment, and discusses how it could be combined with a \"do what we say\" subsystem for added safety.\n\nFor a discussion of a spectrum of different levels of \"do what I mean\" ability, see [Do What I Mean hierarchy](https://arbital.greaterwrong.com/p/dwim) by Eliezer Yudkowsky.\n\n", "id": "c84e4fa3130e010cd96682ba882ede6f", "summary": []}
  {"source": "aisafety.info", "source_type": "markdown", "url": "https://aisafety.info?state=1001", "title": "What about other risks from AI?", "authors": ["Stampy aisafety.info"], "date_published": "2023-07-11T08:45:19Z", "text": "<iframe src=\"https://www.youtube.com/embed/kMLKbhY0ji0\" title=\"ROBERT MILES - &quot;There is a good chance this kills everyone&quot;\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen></iframe>\n\nThere are many substantial risks surrounding current and future AI, and ideally, *all* of these should be addressed. On this site, we focus on one major class of concerns: [existential risks](http://aisafety.info?state=89LL) that [could arise accidentally](http://aisafety.info?state=3485) from superintelligent AI. We choose to focus on this class of risk because it is [plausible](http://aisafety.info?state=7715) and [extreme in scope](http://aisafety.info?state=6209).\n\nThis choice of focus does not mean that we are opposed to work on other types of risks: each of these risks should have people working to mitigate it. But existential risks are in a way the mother of all risks: if we go extinct, none of these other risks matter, and we believe that too few people are working on existential risks relative to their importance.\n\nBelow is a list of pages focused on other specific concerns about AI. If there is another concern that you would like to see addressed, feel free to [add a question](https://coda.io/form/Add-a-question-to-AI-Safety-Info_dGDInYNFa3f).\n\n", "id": "e3b2ffcdb8aa41ab60bc831e82b1a4a9", "summary": []}
  {"source": "aisafety.info", "source_type": "markdown", "url": "https://aisafety.info?state=6172", "title": "Aren't there easy solutions to AI alignment?", "authors": ["Stampy aisafety.info"], "date_published": "2023-07-11T08:44:21Z", "text": "<iframe src=\"https://www.youtube.com/embed/i8r_yShOixM\" title=\"AI? Just Sandbox it... - Computerphile\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen></iframe>\n\nIf you've been learning about AI alignment, you might have had a thought like, \"Why can't we just do [this thing that seems like it would solve the problem]?\"\n\nUnfortunately, many AI alignment proposals turn out to have hidden difficulties. It’s surprisingly easy to come up with “solutions” that don’t actually solve the problem. Some intuitive alignment proposals (that are generally agreed to be inadequate) include:\n\n- [Why can’t we just program the AI not to harm us?](http://aisafety.info?state=6923)\n\n- [Why can’t we just tell the AI to do what is moral?](http://aisafety.info?state=7784)\n\n- [Why can’t we just tell the AI to be friendly?](http://aisafety.info?state=6942)\n\n- [Why can’t we just turn the AI off if it starts to misbehave?](http://aisafety.info?state=3119)\n\n- [Why can’t we just tell an AI just to figure out what we want and then do that?](http://aisafety.info?state=7054)\n\n- [Why can’t we just tell the AI to figure out right from wrong itself?](http://aisafety.info?state=7054)\n\n- [Why can’t we just keep an AI contained in a box?](http://aisafety.info?state=6176)\n\n- [Why can’t we just block an AI from using the internet?](http://aisafety.info?state=5844)\n\n- [Why can’t we just tell the AI not to lie?](http://aisafety.info?state=96TN)\n\n- [Why can’t we just not build AI a body?](http://aisafety.info?state=8AV8)\n\n- [Why can’t we just raise AI like a child?](http://aisafety.info?state=6174)\n\n- [Why can’t we just live in symbiosis with AGI?](http://aisafety.info?state=8EL3)\n\n- [Why can’t we just use a more powerful AI to control a potentially dangerous AI?](http://aisafety.info?state=89ZW)\n\n- [Why can’t we just use Asimov’s laws?](http://aisafety.info?state=6224)\n\n- [Why can’t we just treat AI like any other dangerous technological tool?](http://aisafety.info?state=6188)\n\n- [Why can’t we just not build AI?](http://aisafety.info?state=7148)\n\nSome common pitfalls with alignment proposals include that the proposed solution:\n\n- …requires human observers to be smarter than the AI. Many safety solutions only work when an AI is relatively weak, but break when the AI reaches a certain level of capability (for many reasons, e.g. [deceptive alignment](https://www.youtube.com/watch?v=IeWljQw3UgQ)).\n\n- …appears to make sense in natural language, but when properly unpacked is not philosophically clear enough to be usable.\n\n- …is one that, despite being philosophically coherent, we have no idea how to turn into computer code (if that’s even possible).\n\n- …only solves a subcomponent of the problem, but leaves the core problem unresolved.\n\n- …solves the problem only as long as we stay “in distribution” with respect to the original training data ([distributional shift](https://en.wikipedia.org/wiki/Domain_adaptation) will break it).\n\n- …*might* work eventually, but we can’t expect it to work on the first try (and we'll [likely only get one try at aligning a superintelligence!)](http://aisafety.info?state=8157).\n\nSee the related questions below for some proposals that come up often, or check out [John Wentworth’s \"Why Not Just…\" sequence](https://www.lesswrong.com/s/TLSzP4xP42PPBctgw). \n\n", "id": "2b17a198834b3df77b905987bde1746c", "summary": []}
+ {"source": "aisafety.info", "source_type": "markdown", "url": "https://aisafety.info?state=9FQK", "title": "How can LLMs be understood as “simulators”?", "authors": ["Stampy aisafety.info"], "date_published": "2023-07-13T14:04:59Z", "text": "A [simulator](https://generative.ink/posts/simulators/) is a type of AI that produces simulations of real-world phenomena. The concept was proposed as a way to understand large language models (LLMs), which often behave in ways that are not well-explained by thinking of them as other types of AI, such as [agents](http://aisafety.info?state=5632), [oracles](http://aisafety.info?state=8AEV), or [tools](http://aisafety.info?state=8QZF). However, it might seem like a simulator is behaving like an agent, oracle, or tool, because it can simulate instances of them.\n\nA simulation is a model of a process that combines certain rules or behaviors with a state of the world, to compute what will happen at the next step. A simulator repeats this computation many times, imitating how a real-world process changes over time.\n\nFor example, a physics simulator could use a ball’s current position and past positions to estimate its future position. LLMs operate in a similar way. An LLM has access to a large amount of text from which it learns the sequences that words tend to fall into. Then, when given a string, it uses what it has learned to ‘predict’ what words are likely to come next.\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/91d590eb-fc28-44a8-dd2c-ab2c6c1db500/public)\n\n*Like simulators, large language models run an iterative process that generates each step based on the current state*\n\nThinking of LLMs as simulators helps us understand some of their characteristics. For example, when an LLM gives incorrect answers, thinking of it as an oracle might make us think it doesn’t ‘know’ something. However, LLMs can give incorrect information when prompted one way, and then give different and correct information when prompted another way. This is because the LLM is not trying to determine the correct answer. Instead, it just generates ‘what comes next’ based on the patterns it has learned, and can be thought of as simulating a human who might be mistaken or lying.\n\nLLMs are often thought of as agents that value predicting the next word as accurately as possible, but there are some important ways in which LLMs don’t behave in line with this perspective. For example, they don’t seem to take actions to improve their prediction accuracy beyond the next word, such as by making the text more predictable.\n\nThe frameworks of agents, oracles, and tools have formed the basis of past discussion, but the most powerful models today do not fully fit into these categories. In particular, a lot of the way we reason about AI as an existential risk involves thinking about [agents](http://aisafety.info?state=5632), but these ways of thinking will be less relevant if the most powerful models continue to be simulators.\n\n", "id": "b5a0978438a70b27ebbbc2e9c7666623", "summary": []}
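The record above describes an LLM as running a step-by-step simulation: take the current state (the text so far), predict a distribution over the next token, pick one, append it, and repeat. Below is a minimal sketch of that loop; it assumes the `torch` and `transformers` libraries and the public "gpt2" checkpoint, which are illustrative choices and not part of the dataset.

```python
# Illustrative sketch of the "simulator" loop described in the record above:
# repeatedly predict the next token from the current state and append it.
# Assumes `pip install torch transformers` and the public "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# The "state of the world" is simply the text generated so far, as token ids.
state = tokenizer("The ball was thrown into the air and", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                      # one simulation step per iteration
        logits = model(state).logits[:, -1]  # scores over the *next* token only
        next_token = torch.argmax(logits, dim=-1, keepdim=True)  # greedy pick for brevity
        state = torch.cat([state, next_token], dim=-1)           # updated state

print(tokenizer.decode(state[0]))
```

Greedy decoding is used only to keep the sketch short; sampling from the predicted distribution at each step is what lets the same model simulate many different continuations, including the mistaken or dishonest speakers mentioned in the record.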