diff --git "a/eleuther.ai.jsonl" "b/eleuther.ai.jsonl" new file mode 100644--- /dev/null +++ "b/eleuther.ai.jsonl" @@ -0,0 +1,19 @@ +{"id": "446a22fe5ecff1c58aaad5122c67d884", "title": "Minetester: A fully open RL environment built on Minetest", "url": "https://blog.eleuther.ai/minetester-intro/", "source": "eleuther.ai", "source_type": "blog", "text": "![](/images/blog/minetester-introduction/image10.png#center)\n\nIn the past several months we’ve seen great strides in the development of language models, especially in the private sector. Last year we saw several Minecraft-based environments released, including MineDojo and agents built on top of Minecraft such as VPT. This year we’ve seen agents that build on top of SOTA LLMs and libraries along with existing libraries, such as STEVE-1, Voyager, and GITM.\n\n\nAll of these are built on top of Minecraft and usually the MineRL environment. To meet our own research needs, we’ve been building a separate stack on top of the Minetest voxel engine.\n\n\nWhat is Minetest, and why build an RL environment around it?[#](#what-is-minetest-and-why-build-an-rl-environment-around-it)\n============================================================================================================================\n\n\nThe primary motivation for this project is to do and enable alignment research with AI agents that falls outside of the current language modelling paradigm. While progress for Minecraft-based RL agents has been significant, we find there are a few limitations to existing frameworks that make it more difficult for us to study the problems we’re interested in.\n\n\nThe first is a practical matter of transparency and software support. Existing frameworks are built on stacks involving multiple projects. For instance, with MineRL the stack looks as follows:\n\n\nMineRL -> Malmo -> Minecraft -> Minecraft dependencies -> (???)\n\n\nMinecraft itself is a closed-source game and must be decompiled for modding. Malmo is an old framework by Microsoft, and the original framework has not been updated for half a decade. This is not ideal from a software engineering and maintainability perspective. Adding new functionality requires reverse engineering and modifying quite a bit of old code.\n\n\nThis is in contrast to Minetest, where the entire stack is fully transparent, the codebase is smaller, and every dependency is open-source. This makes it easy for devs to dig into any component that could be causing an issue and lets us put our hooks directly into the Minetest game itself without unnecessary layers of indirection.\n\n\nAnother issue revolves around customizability: in the long term, we intend to support modes of operation in Minetest that are not really possible with existing frameworks. This includes synchronous and asynchronous multi-agent and human-agent interaction in game.\n\n\nThe last big reason is first-class modding support in Minetest. Minetest bills itself not as a game but more as a “Voxel Engine”, and as such, it is highly customizable. 
This makes it much easier to add modifications to the game physics to support specific experiments.\n\n\nHow does Minetester work under the hood?[#](#how-does-minetester-work-under-the-hood)\n=====================================================================================\n\n\nMinetester extends the standard Minetest environment in a few ways centred around enabling RL research.\n\n\n\n![How the Minetester framework integrates with the Minetest game.](/images/blog/minetester-introduction/image5.png#center) \nHow the Minetester framework integrates with the Minetest game.\n\n\n\n\n**Auxiliary data and communication (reward, feedback, etc.)**\n\n\nThe most important feature of Minetester is its extension of the modding API. This makes it possible to enable reward information to be relayed back to the client programmatically.\n\n\n**Client-server synchronisation**\n\n\nTo keep the RL problem deterministic and the dynamics consistent, Minetester sends additional information between the client and the server to enable both to operate in lockstep.\n\n\n**Headless operation**\n\n\nMinetester supports 2 methods of headless operation. The first uses the Xvfb virtual framebuffer to encapsulate the Minetester process, the second, a WIP compile-time solution that replaces the standard rendering backend with an SDL2 headless backend.\n\n\n**Python client wrapper**\n\n\nFinally, the overall system is encapsulated in a Python wrapper that serves as an interface to AI agents built in modern ML frameworks.\n\n\nMinetester baselines: PPO[#](#minetester-baselines-ppo)\n=======================================================\n\n\nTo begin, we demonstrate basic usage of Minetester with a simple policy trained to just break wood. We started with PPO since it’s one of the simplest RL algorithms.\n\n\nThe first thing we noticed is that, without assistance, these algorithms don’t work at all, even for a “simple” task like breaking wood. By default, these simple algorithms will tend to flail around randomly. When they do manage to break a block, it’s very rare for it to be anything that would generate reward, so off-the-shelf algorithms don’t really work.\n\n\nThere are many different ways to deal with this. Ideally one would rely on more principled methods based on incentivising exploration and skill-building that naturally stumbles upon and quickly learns that breaking wood produces a reward. However, actually pulling this off is rather difficult, and we’re not aware of anyone successfully following this approach. Instead systems rely on other popular approaches that leverage prior knowledge and behavioural cloning. This was the approach taken by OpenAIs VPT and STEVE-1. Another tactic is just to make the problem easier. This is what DeepMind did with DreamerV3 and arguably what was done with GPT-4-based agents, which used APIs to dramatically simplify the action space.\n\n\nIn our case we opted to do something similar through a combination of reward-shaping and locking certain actions to further reduce the difficulty of the problem. Implementing both of these modifications in the Minetester framework is straightforward and simplifies the environment significantly.\n\n\nOn the agent side, we can simplify the action space and incentivise certain actions using standard modifiers to the gym environment. 
In our case we restrict the action space to just a few camera actions, moving left, right, and forward, and incentivising jumping with additional reward.\n\n\n\n```\n\nclass AlwaysDig(MinetestWrapper):\n def step(self, action):\n action[\"DIG\"] = True\n obs, rew, done, info = self.env.step(action)\n return obs, rew, done, info\n\n```\n*An example custom wrapper that makes the task easier by locking an action for the agent.*\n\n\nOn the environment side, we can implement more well-shaped reward functions by leveraging Minetest’s modding API. This lets us specify reward values for things like having a tree in frame, being in proximity to a tree, or breaking wood.\n\n\n\n```\n-- reward chopping tree nodes\nminetest.register\\_on\\_dignode(function(pos, node)\n if string.find(node[\"name\"], \"tree\") then\n REWARD = 1.0\n end\nend)\n\n```\nTogether these modifications make the problem tractable to learn for a simple PPO setup.\n\n\n\n![A trained agent breaking wood.](https://cdn.discordapp.com/attachments/1072263460076929184/1125498773007777792/minetest_treechop.gif#center) \nA trained agent breaking wood.\n\n\n\n\nInterpreting the PPO baseline policy[#](#interpreting-the-ppo-baseline-policy)\n==============================================================================\n\n\n\\**Note: The following section is best understood by following along with the notebook and model policy described [here](https://github.com/EleutherAI/minetest-interpretabilty-notebook).*\n\n\nEven this simple policy contains interesting structure that we can deduce by inspecting the weights of the network and how it reacts to real data.\n\n\nWhitebox interpretability without grounding[#](#whitebox-interpretability-without-grounding)\n--------------------------------------------------------------------------------------------\n\n\nWe start by analysing the learned policy in a vacuum. This lets us make general statements about the structure of the network before we see how it reacts to real data.\n\n\n**Activation probing/deep dreaming**\nSince this is a visual model, we can copy some of the techniques from OAI’s [circuits publication](https://distill.pub/2020/circuits/zoom-in/). For instance, since the network we use is a ConvNet, we can use gradient ascent to probe what kind of image patches activate different neurons in the network. In our case, some quantities are particularly interpretable. These include actor-critic outputs and low-level neuron activations.\n\n\n\n![Simplified NN diagram, 3 convolutional layers feed into value and critic heads. There are ReLU non-linearities between each layer. See the notebook for full implementation details.](/images/blog/minetester-introduction/image9.png#center) \nSimplified NN diagram, 3 convolutional layers feed into value and critic heads. There are ReLU non-linearities between each layer. See the notebook for full implementation details.\n\n\n\n\nThis lets us ask questions like “What kind of images have high expected value?” and “What kind of images make the agent want to carry out a certain action, such as moving left/right?”\n\n\nDoing this for high-value states is not super enlightening, but we do see a certain repeating low-level pattern show up.\n\n\n\n![Image inputs with a high value according to the critic. We do see some repeating patterns but nothing very clear. Each image represents a different time delayed frame that gets fed into the NN.](/images/blog/minetester-introduction/image2.png#center) \nImage inputs with a high value according to the critic. 
We do see some repeating patterns but nothing very clear. Each image represents a different time delayed frame that gets fed into the NN.\n\n\n\n\nAnother thing we can do is backprop through the probability that the agent preferentially turns to the right/left, which, going forward, we’ll call the yaw probability. These results are much less clean than what we see when deep dreaming with heavily trained classifiers. However, we can still make out some patterns. In particular, we see that the network is paying more attention to something closer to the middle-left of the screen when it wants to turn left, and stuff closer to the edges when it wants to turn right.\n\n\n\n![“Saliency” of the most recent frame fed into the network. Left image represents turning left, the right image represents turning right. See the notebook for how these images were created from deep dream outputs.](/images/blog/minetester-introduction/image6.png#center) \n“Saliency” of the most recent frame fed into the network. Left image represents turning left, the right image represents turning right. See the notebook for how these images were created from deep dream outputs.\n\n\n\n\nThis is very flimsy evidence, but it gives us a hypothesis for how the policy might work. The network always moves forward and jumps with some probability, but it needs to orient itself towards trees. Trees are “rough”, and if it sees “rough” on the right of its FOV, it orients itself to the right. Otherwise it does the opposite.\n\n\n**Other observations**\n\n\nWe made a few other minor observations.\n\n\nThe first is that the network is clearly not symmetrical, which is surprising since the environment itself is mostly symmetrical. We suspect that this is mostly an artefact of training noise, but it’s possible that “symmetry breaking” in the policy is actually optimal.\n\n\nThe second is when you look at the matrices for the actor and the critic. The vectors are very much not random. Notably their dot-products are larger than random, indicating that both actor and critic heads are paying attention to the same features coming out of the base network.\n\n\nAnalysing real images and assessing actions values[#](#analysing-real-images-and-assessing-actions-values)\n----------------------------------------------------------------------------------------------------------\n\n\nNow that we have a hypothesis for what’s going on from looking at the network, we can see how the model reacts to real inputs to try to understand how the policy works in the environment. Since Minetest is a user-playable game, we can simply load it up and take some screenshots to feed into the network. One thing to note is that to make things faster and more computationally tractable, the input pipeline lowers the resolution of the inputs and strips away colour information. We can compare what the raw environment returns with what the network sees.\n\n\n\n![What players see.](/images/blog/minetester-introduction/image3.png#center) \nWhat players see.\n\n\n\n\n\n![What the network sees.](/images/blog/minetester-introduction/image7.png#center) \nWhat the network sees.\n\n\n\n\n\n\n\nDue to downscaling and conversion to grayscale many details about the environment are lost to the network.\n\n\nWith that said, we can still take a look at how the network operates. We’re mainly interested in the yaw probability.\n\n\nWhen feeding in real data we can very clearly confirm the network is implementing some kind of control system. 
This is made very clear by looking at how the yaw probability changes when we mirror an image or look at how the network reacts to a screenshot with a tree on the right or the left. This works with several different tree types and backgrounds.\n\n\n\n![The yaw probability flips sign when we mirror the image.](/images/blog/minetester-introduction/image11.png#center) \nThe yaw probability flips sign when we mirror the image.\n\n\n\n\nOne thing we can check is how general this control system is. One way to do this is to evaluate the behaviour slightly out of distribution. Since the most straightforward hypothesis for how this network works is that it checks for brightness differences, we can either change textures or check how the network reacts to trees at night, where the contrast is inverted.\n\n\n\n![This persists even at night, when trees are brighter than the environment.](/images/blog/minetester-introduction/image4.png#center) \nThis persists even at night, when trees are brighter than the environment.\n\n\n\n\nThe network still works. This rules out the possibility that the network is using something like light/dark on the right/left side of the screen to orient itself.\n\n\nDrilling into the network[#](#drilling-into-the-network)\n--------------------------------------------------------\n\n\nThe final and arguably hardest piece of the puzzle is to figure out how the network actually implements this control system in a way that explains our previous observations. This is rather difficult. To do so we drill into the network to better understand what its internals are doing. Since the network uses ReLU activations, we make the simplifying assumption that axis-aligned features are semantically meaningful. This is because ReLUs transform the data in an axis-aligned way. With this assumption, we can probe how convolutional neurons at each level of the network react to images.\n\n\n**Layer 1**\n\n\nThe first layer contains simple linear filters, we can see a few different features, edge detectors, light detectors, and darkness detectors. This is straightforwardly visible by lining up the generated images with the regular images.\n\n\n\n![An example of an edge detector in the first layer.](/images/blog/minetester-introduction/image1.png#center) \nAn example of an edge detector in the first layer.\n\n\n\n\n**Layer 2**\n\n\nWith a single layer of non-linearity, the features tend to get more complicated. Overall the features tended to fall into 3 categories.\n\n\n* The most common features were sensitive to the overall brightness of an image patch and would tend to either reflect or invert the brightness of the underlying image patch they observed.\n* After this there was a large group of features that we were not able to effectively interpret. Their activation patterns did not clearly correspond to any feature in the images.\n* Finally we did find 1 neuron that seemed to fairly consistently act as a tree detector.\n\n\n\n![The “tree detector” neuron in the second layer.](/images/blog/minetester-introduction/image8.png#center) \nThe “tree detector” neuron in the second layer.\n\n\n\n\n**Layer 3**\n\n\nNeurons in layer 3 behaved approximately the same way, including a neuron that acted as a tree detector. 
To confirm that this feature played a role in the decisions made by the network, we computed the gradient w.r.t the yaw probability of the feature and found that activation of the neuron corresponded to a strong gradient in the direction that we expect.\n\n\nConcrete takeaways[#](#concrete-takeaways)\n------------------------------------------\n\n\nUnfortunately, while we were able to identify individual components of the NN that corresponded to behaviours that we were interested in, we found diminishing returns to further drilling into the model, and we were not able to fully understand it. We can clearly see that the model actually does have reasonably fine-grained control of its actions. We can see it demonstrating advanced techniques, like not breaking the bottom log so that it can climb on top of it to reach and break more “branch” logs.\n\n\nThis investigation brought up several pain points in the workflow that we intend to improve going forward. Most of this revolves around our tooling.\n\n\nThe first is not having easy translation between what the user sees and does and what the network sees and does. The pipeline we’re using in the notebook was reconstructed by inspecting both the OAI Gym and the Minetester codebases, but ideally this would be done automatically.\n\n\nThe second is not having good facilities for recording user actions. For the purposes of this investigation, taking screenshots was sufficient to extract usable information, but as complexity ramps up, this will likely become insufficient.\n\n\nThe third is a general lack of tooling for interpretability research in the JAX ecosystem. For PyTorch, there are tools like TransformerLens, which makes interpretability of transformers easier, but as of this writing we’re unaware of any JAX equivalents.\n\n\nMore general/speculative takeaways[#](#more-generalspeculative-takeaways)\n-------------------------------------------------------------------------\n\n\nWhile these takeaways are less thoroughly supported by our investigation, they are high-level intuitions we’ve gathered from investigating the policy.\n\n\n**There’s a big gap between a working NN and an easily interpretable one.**\n\n\nUnlike with ConvNets for classification, this network seemed to have much less structure. Standard techniques for understanding them often showed much less structured information than what can be seen in something like VGG.\n\n\n**In RL environments, understanding what the policy is doing is significantly easier with an understanding of the structure of the environment.**\n\n\nThe algorithm that the policy seems to implement is, at least on the surface, pretty simple: some circuitry to detect trees or associated features along with control circuitry that biases the agent to turn in one direction or the other, all the while encouraging the agent to move forward or jump. However, without at least a decent understanding of the game's mechanics, the effect of this policy is very difficult to determine. Even with a solid understanding of the game, without knowing the training objective ahead of time, and without access to entire episodes, determining the ultimate effects of the policy seems like it would be significantly more difficult.\n\n\nIt seems that perhaps model-based RL agents might be more interpretable, since they “internalise” their environment better. 
However, this is likely never going to work completely (since you can’t fit the whole universe inside your model), and other techniques will be necessary to understand how agents behave in environments we don’t fully understand ourselves.\n\n\n**The structure of the network and the training algorithm plays a key role in facilitating interpretability.**\n\n\nThis seems to be a recurring theme with learned models. The actual underlying structure of the model and how it’s setup plays an important role in enabling interpretability. Things like induction heads are an emergent circuit in transformers due to the attention mechanism and the transformer architecture. Likewise, DeepDream-like visualisations in ConvNets are possible in part due to the restricted receptive fields and the continuous nature of their inputs. In our case, we exploited the convolutional structure and the relative simplicity and interpretability of our action/value mapping to at least partially reverse engineer the mechanics of the model.\n\n\nUltimately, it seems that interpretability techniques that work best for a given situation are sensitive to the architecture being studied.\n\n\n**Interpretability and alignment should be first-class design considerations.**\n\n\nRight now the way that we design and build AI systems is by iterating on the capabilities of those systems until they are essentially as powerful as we can make them, then going about trying to understand the inner workings of the model or how to align the model to behave the way we want.\n\n\nSpeculating a bit, it seems likely that this is a suboptimal way of going about things. If we intend to build systems that are interpretable and/or reliably aligned, alignment and interpretability will have to be first-class design considerations so that the necessary hooks to understand what’s going on are baked into the model.\n\n\nNext Steps[#](#next-steps)\n==========================\n\n\n**Gymnasium: collaboration with the Farama Foundation and multi-agent support**\n\n\nThe current Minetester environment is built around the old OpenAI Gym API, but this is an outdated and unsupported API. The Farama Foundation has reached out to us, and going forward we plan to move to their more up-to-date Gymnasium API, which is a drop-in replacement with extra features and up-to-date maintenance. We also intend to work more closely with them on expanding the package further and adding new functionality, such as multi-agent support.\n\n\n**Action recording and generative Minetest models**\n\n\nWhile policy gradient is simple and straightforward to implement, it’s clearly limited. The next step we plan to take is to implement infrastructure for recording user actions. This will enable behavioural cloning and generative models of Minetest that are similar to DeepMind’s DreamerV3.\n\n\n**Model based RL**\n\n\nThe long-term goal is to study embedded agency failure modes in the context of RL. As such, we plan to implement some MBRL baselines so that we can start studying how to interpret what they’re doing.\n\n\nJoin us![#](#join-us)\n=====================\n\n\nThe Minetester project is large, and the number of different things to work on continues to grow. Checkout the #alignment-minetest project in our Discord to get involved. 
We have plenty of room for additional volunteers to contribute to different facets of the project.\n\n\nLinks[#](#links)\n================\n\n\n* [Minetester Repo](https://github.com/EleutherAI/minetest/)\n* [Minetest Baselines](https://github.com/EleutherAI/minetest-baselines/)\n* [Interpretabilty Notebook](https://github.com/EleutherAI/minetest-interpretabilty-notebook)", "date_published": "2023-07-08T00:00:00Z", "authors": ["Curtis Huebner", "Robert Klassert", "Stepan Shabalin", "Edwin Fennell", "Delta Hessler"], "summaries": []} +{"id": "ba41eda42e332e6d1a20ead2efdee5d1", "title": "🐶Safetensors audited as really safe and becoming the default", "url": "https://blog.eleuther.ai/safetensors-security-audit/", "source": "eleuther.ai", "source_type": "blog", "text": "Audit shows that safetensors is safe and ready to become the default\n====================================================================\n\n\n[Hugging Face](https://huggingface.co/), in close collaboration with [EleutherAI](https://www.eleuther.ai/) and [Stability AI](https://stability.ai/), has ordered\nan external security audit of the `safetensors` library, the results of which allow\nall three organizations to move toward making the library the default format\nfor saved models.\n\n\nThe full results of the security audit, performed by [Trail of Bits](https://www.trailofbits.com/),\ncan be found here: [Report](https://huggingface.co/datasets/safetensors/trail_of_bits_audit_repot/resolve/main/SOW-TrailofBits-EleutherAI_HuggingFace-v1.2.pdf).\n\n\nThe following blog post explains the origins of the library, why these audit results are important,\nand the next steps.\n\n\nWhat is safetensors?[#](#what-is-safetensors)\n=============================================\n\n\n🐶[Safetensors](https://github.com/huggingface/safetensors) is a library\nfor saving and loading tensors in the most common frameworks (including PyTorch, TensorFlow, JAX, PaddlePaddle, and NumPy).\n\n\nFor a more concrete explanation, we'll use PyTorch.\n\n\n\n```\nimport torch\nfrom safetensors.torch import load\\_file, save\\_file\n\nweights = {\"embeddings\": torch.zeros((10, 100))}\nsave\\_file(weights, \"model.safetensors\")\nweights2 = load\\_file(\"model.safetensors\")\n\n```\nIt also has a number of [cool features](https://github.com/huggingface/safetensors#yet-another-format-) compared to other formats, most notably that loading files is *safe*, as we'll see later.\n\n\nWhen you're using `transformers`, if `safetensors` is installed, then those files will already\nbe used preferentially in order to prevent issues, which means that\n\n\n\n```\npip install safetensors\n\n```\nis likely to be the only thing needed to run `safetensors` files safely.\n\n\nGoing forward and thanks to the validation of the library, `safetensors` will now be installed in `transformers` by\ndefault. 
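If you would like to opt in before that change lands, recent releases of `transformers` already expose this when saving a model. The sketch below is illustrative: the `gpt2` checkpoint and output directory are arbitrary examples, and the exact behaviour and default of the `safe_serialization` flag depend on your `transformers` version.

```
from transformers import AutoModelForCausalLM

# Any Hub checkpoint works the same way; "gpt2" is just a small example.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Explicitly request the safetensors format when saving.
model.save_pretrained("./gpt2-safetensors", safe_serialization=True)
```

This writes a `model.safetensors` file in place of a pickle-based `pytorch_model.bin`.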
The next step is saving models in `safetensors` by default.\n\n\nWe are thrilled to see that the `safetensors` library is already seeing use in the ML ecosystem, including:\n\n\n* [Civitai](https://civitai.com/)\n* [Stable Diffusion Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui)\n* [dfdx](https://github.com/coreylowman/dfdx)\n* [LLaMA.cpp](https://github.com/ggerganov/llama.cpp/blob/e6a46b0ed1884c77267dc70693183e3b7164e0e0/convert.py#L537)\n\n\nWhy create something new?[#](#why-create-something-new)\n=======================================================\n\n\nThe creation of this library was driven by the fact that PyTorch uses `pickle` under\nthe hood, which is inherently unsafe. (Sources: [1](https://huggingface.co/docs/hub/security-pickle), [2, video](https://www.youtube.com/watch?v=2ethDz9KnLk), [3](https://github.com/pytorch/pytorch/issues/52596))\n\n\nWith pickle, it is possible to write a malicious file posing as a model\nthat gives full control of a user's computer to an attacker without the user's knowledge,\nallowing the attacker to steal all their bitcoins 😓.\n\n\nWhile this vulnerability in pickle is widely known in the computer security world (and is acknowledged in the PyTorch [docs](https://pytorch.org/docs/stable/generated/torch.load.html)), it’s not common knowledge in the broader ML community.\n\n\nSince the Hugging Face Hub is a platform where anyone can upload and share models, it is important to make efforts\nto prevent users from getting infected by malware.\n\n\nWe are also taking steps to make sure the existing PyTorch files are not malicious, but the best we can do is flag suspicious-looking files.\n\n\nOf course, there are other file formats out there, but\nnone seemed to meet the full set of [ideal requirements](https://github.com/huggingface/safetensors#yet-another-format-) our team identified.\n\n\nIn addition to being safe, `safetensors` allows lazy loading and generally faster loads (around 100x faster on CPU).\n\n\nLazy loading means loading only part of a tensor in an efficient manner.\nThis particular feature enables arbitrary sharding with efficient inference libraries, such as [text-generation-inference](https://github.com/huggingface/text-generation-inference), to load LLMs (such as LLaMA, StarCoder, etc.) on various types of hardware\nwith maximum efficiency.\n\n\nBecause it loads so fast and is framework agnostic, we can even use the format\nto load models from the same file in PyTorch or TensorFlow.\n\n\nThe security audit[#](#the-security-audit)\n==========================================\n\n\nSince `safetensors` main asset is providing safety guarantees, we wanted to make sure\nit actually delivered. That's why Hugging Face, EleutherAI, and Stability AI teamed up to get an external\nsecurity audit to confirm it.\n\n\nImportant findings:\n\n\n* No critical security flaw leading to arbitrary code execution was found.\n* Some imprecisions in the spec format were detected and fixed.\n* Some missing validation allowed [polyglot files](https://en.wikipedia.org/wiki/Polyglot_(computing)), which was fixed.\n* Lots of improvements to the test suite were proposed and implemented.\n\n\nIn the name of openness and transparency, all companies agreed to make the report\nfully public.\n\n\n[Full report](https://huggingface.co/datasets/safetensors/trail_of_bits_audit_repot/resolve/main/SOW-TrailofBits-EleutherAI_HuggingFace-v1.2.pdf)\n\n\nOne import thing to note is that the library is written in Rust. 
This adds\nan extra layer of [security](https://doc.rust-lang.org/rustc/exploit-mitigations.html)\ncoming directly from the language itself.\n\n\nWhile it is impossible to\nprove the absence of flaws, this is a major step in giving reassurance that `safetensors`\nis indeed safe to use.\n\n\nGoing forward[#](#going-forward)\n================================\n\n\nFor Hugging Face, EleutherAI, and Stability AI, the master plan is to shift to using this format by default.\n\n\nEleutherAI has added support for evaluating models stored as `safetensors` in their LM Evaluation Harness and is working on supporting the format in their GPT-NeoX distributed training library.\n\n\nWithin the `transformers` library we are doing the following:\n\n\n* Create `safetensors`.\n* Verify it works and can deliver on all promises (lazy load for LLMs, single file for all frameworks, faster loads).\n* Verify it's safe. (This is today's announcement.)\n* Make `safetensors` a core dependency. (This is already done or soon to come.)\n* Make `safetensors` the default saving format. This will happen in a few months when we have enough feedback\nto make sure it will cause as little disruption as possible and enough users already have the library\nto be able to load new models even on relatively old `transformers` versions.\n\n\nAs for `safetensors` itself, we're looking into adding more advanced features for LLM training,\nwhich has its own set of issues with current formats.\n\n\nFinally, we plan to release a `1.0` in the near future, with the large user base of `transformers` providing the final testing step.\nThe format and the lib have had very few modifications since their inception,\nwhich is a good sign of stability.\n\n\nWe're glad we can bring ML one step closer to being safe and efficient for all!", "date_published": "2023-05-23T00:00:00Z", "authors": ["Nicolas Patry", "Stella Biderman", "Garry Jean-Baptiste"], "summaries": []} +{"id": "bb6cba30111d99db86b1ed816539f52e", "title": "Alignment Research @ EleutherAI", "url": "https://blog.eleuther.ai/alignment-eleuther/", "source": "eleuther.ai", "source_type": "blog", "text": "The past and future of AI alignment at Eleuther[#](#the-past-and-future-of-ai-alignment-at-eleuther)\n----------------------------------------------------------------------------------------------------\n\n\nInitially, EleutherAI focused mainly on supporting open source research. AI alignment was something that was acknowledged by many of the core members as important, but it was not the primary focus. We mainly had discussions about the topic in the #alignment channel and other parts of our discord while we worked on other projects.\n\n\nAs EAI grew, AI alignment started to get taken more seriously, especially by its core members. What started off as a single channel turned into a whole host of channels about different facets of alignment. We also hosted several reading groups related to alignment, such as the modified version of [Richard Ngo’s curriculum](https://www.alignmentforum.org/posts/Zmwkz2BMvuFFR8bi3/agi-safety-fundamentals-curriculum-and-application) and an interpretability reading group. Eventually alignment became the central focus for a large segment of EAIs leadership, so much so that all our previous founders went off to do full time alignment research at [Conjecture](https://conjecture.dev/) and OpenAI.\n\n\nRight now, the current leadership believes making progress in AI alignment is very important. 
The organization as a whole is involved in a mix of alignment research, interpretability work, and other projects that we find interesting.\n\n\nMoving forward, EAI remains committed to facilitating and enabling open source research, and plans to ramp up its alignment and interpretability research efforts. We want to increase our understanding and control of modern ML systems and minimize existential risks posed by artificial intelligence.\n\n\nOur meta-level approach to alignment[#](#our-meta-level-approach-to-alignment)\n------------------------------------------------------------------------------\n\n\nIt is our impression that AI alignment is still a very pre-paradigmatic field. Progress in the field often matches the research pattern we see in the [ELK report](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit), where high level strategies are proposed, and problems or counterexamples are found. Sometimes these issues can be fixed, but oftentimes fundamental issues are identified that make an initially promising approach less interesting.\n\n\nA consequence of this is that it’s difficult to commit to an object level strategy to make progress on AI alignment, and even harder to commit to any grand strategic plan to solve the problem. Instead it makes more sense to have a meta level strategy that makes us better able to leverage our unique position within the AI research ecosystem, and pivot when we get new information.\n\n\nGoing forward, this means we want to pursue interesting projects that meet a few general desiderata.\n\n\n1. Our volunteers, partners, and collaborators are enthusiastic about the project.\n2. We believe that pursuing the project won’t lead to a net increase in existential risks from AI. We’ll check for this even if the project is ostensibly a project that will greatly increase our understanding of AI alignment.\n3. The project is something that EAI is better equipped to do than anyone else in the space, or the project seems interesting or important, but neglected by the broader community.\n\n\nIn order to pull this off, we aim to stay on top of both the latest developments in AI and alignment research. We’ll also carefully consider new projects before we embark on or allocate resources for them.\n\n\nProblems we are interested in and research we are doing right now[#](#problems-we-are-interested-in-and-research-we-are-doing-right-now)\n----------------------------------------------------------------------------------------------------------------------------------------\n\n\nGiven the current state AI landscape, there are a few directions that we find especially interesting. We’d love to collaborate with others to make progress on these issues.\n\n\n#### Interpretability work[#](#interpretability-work)\n\n\nInterpretability work, especially with current models, seems like a very tractable and scalable research direction. It seems especially easy for current ML researchers to pick up and make progress on it. 
EAI is well equipped to enable this kind of research, especially for larger language models that more closely resemble the ones we see in modern production systems.\n\n\nThis is arguably where most of our recent efforts have been lately, as exemplified by projects like the #interpreting-across-time and #interpreting-across-depth projects and their associated channels.\n\n\n#### Value specification in embedded agents[#](#value-specification-in-embedded-agents)\n\n\nThis is another core problem that we think is very important, and we mention it as a key difficulty in our second retrospective. In practice any mechanism for aligning AI systems, or correcting their behavior after they’ve been deployed, is going to be part of the same environment as the agent, and prone to it’s interference. This shows up classically in the wireheading scenario, where an AI hijacks it’s own reward signal instead of doing whatever it was that we originally wanted, but there’s reason to believe that similar problems might show up with more sophisticated value specification schemes. While we haven’t seen any issues in deployed systems related to this issue, it’s something we’re worried might show up down the line with more competent/powerful systems.\n\n\nTackling this problem and other related issues is the long term goal of our alignment-minetest project, which aims to create a sandbox that we can use to study the embedded system failures, amongst other issues.\n\n\n#### Other directions related to alignment that we find promising[#](#other-directions-related-to-alignment-that-we-find-promising)\n\n\nThere are a bunch of other directions and problems we find worth studying. An incomplete list includes:\n\n\n* **Eliciting latent knowledge:** The [ELK report](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit) highlights a problem that seems to at least partially capture a core difficulty in alignment research.\n* **Practical implementation of alignment schemes:** There have been a whole host of alignment schemes proposed over the years, and much theoretical debate about the merits of each approach. We think it’s important to test implementations of these schemes even if we know they won't work in the limit. This is to validate them and uncover problems that only show up in practice.\n* **A better theoretical understanding of model misspecification:** Bayesian inference is a powerful lens that can be used to understand modern AI systems, and it gives us a way to understand “ideal” AI systems that are given access to infinite amounts of computing power. However, real systems are not ideal. One of the core assumptions of Bayesian inference, realizability, is violated in pretty much every real system we deploy. 
We want to understand how these systems “just work anyways”, and to the extent that they don’t, why and how to fix them.\n* **Verifiable computation:** better protocols for verifying and securing computation at the hardware, software, and conceptual levels would be great steps forward for providing theoretical bounds on system capabilities.", "date_published": "2023-05-03T00:00:00Z", "authors": ["Curtis Huebner"], "summaries": []} +{"id": "ad1f2cc29167436d8365468f00278792", "title": "Transformer Math 101", "url": "https://blog.eleuther.ai/transformer-math/", "source": "eleuther.ai", "source_type": "blog", "text": "Introduction[#](#introduction)\n==============================\n\n\nA lot of basic, important information about transformer language models can be computed quite simply. Unfortunately, the equations for this are not widely known in the NLP community. The purpose of this document is to collect these equations along with related knowledge about where they come from and why they matter.\n\n\n**Note:** This post is primarily concerned with training costs, which are dominated by VRAM considerations. For an analogous discussion of inference costs with a focus on latency, check out [this excellent blog post](https://kipp.ly/blog/transformer-inference-arithmetic/) by Kipply.\n\n\nCompute Requirements[#](#compute-requirements)\n==============================================\n\n\nThe basic equation giving the cost to train a transformer model is given by:\n\n\n$$\nC\\approx\\tau T = 6PD\n$$\n\n\nwhere:\n\n\n* $C$ is the compute required to train the transformer model, in total floating point operations\n* $C=C\\_{\\text{forward}}+C\\_{\\text{backward}}$\n* $C\\_{\\text{forward}}\\approx2PD$\n* $C\\_{\\text{backward}}\\approx4PD$\n* $\\tau$ is the aggregate throughput of your hardware setup ($\\tau=(\\text{No. GPUs}) \\times (\\text{Actual FLOPs}/\\text{GPU})$), in FLOPs\n* $T$ is the time spent training the model, in seconds\n* $P$ is the number of parameters in the transformer model\n* $D$ is the dataset size, in tokens\n\n\nThese equations are proposed and experimentally validated in [OpenAI’s scaling laws paper](https://arxiv.org/abs/2001.08361) and [DeepMind’s scaling laws paper](https://arxiv.org/abs/2203.15556). Please see each paper for more information.\n\n\nIt’s worth taking an aside and discussing the units of $C$. $C$ is a measure of total compute, but can be measured by many units such as:\n\n\n* FLOP-seconds, which is in units of $[\\frac{\\text{Floating Point Operations}}{\\text{Second}}] \\times [\\text{Seconds}]$\n* GPU-hours, which is in units of $[\\text{No. GPUs}]\\times[\\text{Hours}]$\n* Scaling laws papers tend to report values in PetaFLOP-days, or $10^{15}\\times24\\times3600$ total floating point operations\n\n\nOne useful distinction to keep in mind is the concept of $\\text{Actual FLOPs}$. While GPU accelerator whitepapers usually advertise their theoretical FLOPs, these are never met in practice (especially in a distributed setting!). 
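To make these units concrete, here is a small back-of-the-envelope sketch that plugs the cost equation $C\approx\tau T=6PD$ into code. All of the numbers below (parameter count, token count, GPU count, achieved throughput) are hypothetical placeholders chosen only for illustration.

```
# Hypothetical example: a 7B-parameter model trained on 140B tokens
# across 64 GPUs, each sustaining 120 TFLOP/s of *actual* throughput.
P = 7e9                          # parameters
D = 140e9                        # dataset size, in tokens
n_gpus = 64
actual_flops_per_gpu = 120e12    # achieved FLOP/s, not the datasheet peak

C = 6 * P * D                         # total training compute, in FLOPs
tau = n_gpus * actual_flops_per_gpu   # aggregate throughput, in FLOP/s
T = C / tau                           # training time, in seconds

petaflop_days = C / (1e15 * 24 * 3600)
print(f"{petaflop_days:.0f} PetaFLOP-days, about {T / 86400:.1f} days of training")
```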
Some common reported values of $\\text{Actual FLOPs}$ in a distributed training setting are reported below in the Computing Costs section.\n\n\nNote that we use the throughput-time version of the cost equation as used in [this wonderful blog post on LLM training costs](https://medium.com/@dzmitrybahdanau/the-flops-calculus-of-language-model-training-3b19c1f025e4).\n\n\nParameter vs Dataset Tradeoffs[#](#parameter-vs-dataset-tradeoffs)\n------------------------------------------------------------------\n\n\nAlthough strictly speaking you can train a transformer for as many tokens as you like, the number of tokens trained can highly impact both the computing costs and the final model performance making striking the right balance important.\n\n\n**Let’s start with the elephant in the room: “compute optimal” language models.** Often referred to as “Chinchilla scaling laws” after the model series in the paper that gave rise to current beliefs about the number of parameters, a compute optimal language model has a **number of parameters** and a **dataset size** that satisfies the approximation $D=20P$. This is optimal in one very specific sense: in a resource regime where using 1,000 GPUs for 1 hour and 1 GPU for 1,000 hours cost you the same amount, if your goal is to maximize performance while minimizing the cost in GPU-hours to train a model you should use the above equation.\n\n\n**We do not recommend training a LLM for less than 200B tokens.** Although this is “chinchilla optimal” for many models, the resulting models are typically quite poor. For almost all applications, we recommend determining what inference cost is acceptable for your usecase and training the largest model you can to stay under that inference cost for as many tokens as you can.\n\n\nEngineering Takeaways for Compute Costs[#](#engineering-takeaways-for-compute-costs)\n------------------------------------------------------------------------------------\n\n\nComputing costs for transformers are typically listed in GPU-hours or FLOP-seconds.\n\n\n* GPT-NeoX achieves 150 TFLOP/s/A100 with normal attention and 180 TFLOP/s/A100 with Flash Attention. This is in line with other highly optimized libraries at scale, for example Megatron-DS reports between 137 and 163 TFLOP/s/A100.\n* As a general rule of thumb, you should always be able to achieve approximately 120 TFLOP/s/A100. If you are seeing below 115 TFLOP/s/A100 there is probably something wrong with your model or hardware configuration.\n* With high-quality interconnect such as InfiniBand, you can achieve linear or sublinear scaling across the data parallel dimension (i.e. increasing the data parallel degree should increase the overall throughput nearly linearly). Shown below is a plot from testing the GPT-NeoX library on Oak Ridge National Lab’s Summit supercomputer. Note that V100s are on the x-axis, while most of the numerical examples in the post are for A100s.\n\n\n\n![GPT-NeoX scaling](/images/blog/transformer-math/neox-scaling.png#center)\n\nMemory Requirements[#](#memory-requirements)\n============================================\n\n\nTransformers are typically described in terms of their *size in parameters*. However, when determining what models can fit on a given set of computing resources you need to know **how much space in bytes** the model will take up. 
This can tell you how large a model will fit on your local GPU for inference, or how large a model you can train across your cluster with a certain amount of total accelerator memory.\n\n\nInference[#](#inference)\n------------------------\n\n\n### Model Weights[#](#model-weights)\n\n\n\n![Model Weights](https://cdn.discordapp.com/attachments/938462108721483787/1052372619577532467/image.png#center)\n\nMost transformers are trained in **mixed precision**, either fp16 + fp32 or bf16 + fp32. This cuts down on the amount of memory required to train the models, and also the amount of memory required to run inference. We can cast language models from fp32 to fp16 or even int8 without suffering a substantial performance hit. These numbers refer to the size *in bits* a single parameter requires. Since there are 8 bits in a Byte, we divide this number by 8 to see how many Bytes each parameter requires\n\n\n* In int8, $\\text{memory}\\_{\\text{model}}=(1 \\text{ byte} /\\text{param})\\cdot ( \\text{No. params})$\n* In fp16 and bf16, $\\text{memory}\\_{\\text{model}}=(2 \\text{ bytes} /\\text{param})\\cdot ( \\text{No. params})$\n* In fp32, $\\text{memory}\\_{\\text{model}}=(4 \\text{ bytes} /\\text{param})\\cdot (\\text{No. params})$\n\n\nThere is also a small amount of additional overhead, which is typically irrelevant to determining the largest model that will fit on your GPU. In our experience this overhead is ≤ 20%.\n\n\n### Total Inference Memory[#](#total-inference-memory)\n\n\nIn addition to the memory needed to store the model weights, there is also a small amount of additional overhead during the actual forward pass. In our experience this overhead is ≤ 20% and is typically irrelevant to determining the largest model that will fit on your GPU.\n\n\nIn total, a good heuristic answer for “will this model fit for inference” is:\n\n\n$\\text{Total Memory}\\_{\\text{Inference}}\\approx(1.2) \\times \\text{Model Memory}$\n\n\nWe will not investigate the sources of this overhead in this blog post and leave it to other posts or locations for now, instead focusing on memory for model training in the rest of this post. If you’re interested in learning more about the calculations required for inference, check out [this fantastic blog post covering inference in depth](https://kipp.ly/blog/transformer-inference-arithmetic/). Now, on to training!\n\n\nTraining[#](#training)\n----------------------\n\n\nIn addition to the model parameters, training requires the storage of optimizer states and gradients in device memory. This is why asking “how much memory do I need to fit model X” immediately leads to the answer “this depends on training or inference.” Training always requires more memory than inference, often very much more!\n\n\n### Model Parameters[#](#model-parameters)\n\n\nFirst off, models can be trained in pure fp32 or fp16:\n\n\n* Pure fp32, $\\text{memory}\\_{\\text{model}}=(4 \\text{ bytes} /\\text{param})\\cdot (\\text{No. params})$\n* Pure fp16, $\\text{memory}\\_{\\text{model}}=(2 \\text{ bytes} /\\text{param})\\cdot (\\text{No. params})$\n\n\nIn addition to the common model weight datatypes discussed in Inference, training introduces **mixed-precision** training such as [AMP](https://developer.nvidia.com/automatic-mixed-precision). This technique seeks to maximize the throughput of GPU tensor cores while maintaining convergence. 
The modern DL training landscape frequently uses mixed-precision training because: 1) fp32 training is stable, but has a high memory overhead and doesn’t exploit NVIDIA GPU tensor cores, and 2) fp16 training is stable but difficult to converge. For more information on mixed-precision training, we recommend reading [this notebook by tunib-ai](https://nbviewer.org/github/tunib-ai/large-scale-lm-tutorials/blob/main/notebooks/08_zero_redundancy_optimization.ipynb). Note that mixed-precision requires an fp16/bf16 and fp32 version of the model to be stored in memory, requiring:\n\n\n* Mixed-precision (fp16/bf16 and fp32), $\\text{memory}\\_{\\text{model}}=(2 \\text{ bytes} /\\text{param})\\cdot (\\text{No. params})$\n\n\nplus an additional size $(4\\text{ bytes/param}) \\cdot (\\text{#params})$ copy of the model **that we count within our optimizer states**.\n\n\n### Optimizer States[#](#optimizer-states)\n\n\nAdam is magic, but it’s highly memory inefficient. In addition to requiring you to have a copy of the model parameters and the gradient parameters, you also need to keep an additional three copies of the gradient parameters. Therefore,\n\n\n* For vanilla AdamW, $\\text{memory}\\_{\\text{optimizer}}=(12 \\text{ bytes}/\\text{param})\\cdot (\\text{No. params})$\n\t+ fp32 copy of parameters: 4 bytes/param\n\t+ Momentum: 4 bytes/param\n\t+ Variance: 4 bytes/param\n* For 8-bit optimizers like [bitsandbytes](https://github.com/TimDettmers/bitsandbytes), $\\text{memory}\\_{\\text{optimizer}}=(6 \\text{ bytes} /\\text{param})\\cdot (\\text{No. params})$\n\t+ fp32 copy of parameters: 4 bytes/param\n\t+ Momentum: 1 byte/param\n\t+ Variance: 1 byte/param\n* For SGD-like optimizers with momentum, $\\text{memory}\\_{\\text{optimizer}}=(8 \\text{ bytes} /\\text{param})\\cdot (\\text{No. params})$\n\t+ fp32 copy of parameters: 4 bytes/param\n\t+ Momentum: 4 bytes/param\n\n\n### Gradients[#](#gradients)\n\n\nGradients can be stored in fp32 or fp16 (Note that the gradient datatype often matches the model datatype. We see that it therefore is stored in fp16 for fp16 mixed-precision training), so their contribution to memory overhead is given by:\n\n\n* In fp32, $\\text{memory}\\_{\\text{gradients}}=(4 \\text{ bytes} /\\text{param})\\cdot (\\text{No. params})$\n* In fp16, $\\text{memory}\\_{\\text{gradients}}=(2 \\text{ bytes} /\\text{param})\\cdot (\\text{No. params})$\n\n\n### Activations and Batch Size[#](#activations-and-batch-size)\n\n\nModern GPUs are typically bottlenecked by memory, not FLOPs, for LLM training. Therefore activation recomputation/checkpointing is an extremely popular method of trading reduced memory costs for extra compute costs. Activation recomputation/checkpointing works by recomputing activations of certain layers instead of storing them in GPU memory. The reduction in memory depends on how selective we are when deciding which layers to clear, but Megatron’s selective recomputation scheme is depicted in the figure below:\n\n\n\n![activation memory](/images/blog/transformer-math/activations.png#center)\n\nWhere the dashed red line indicates the memory capacity of an A100-80GB GPU, and “present work” indicates the memory requirements after applying selective activation recomputation. 
See [Reducing Activation Recomputation in Large Transformer Models](https://arxiv.org/abs/2205.05198) for further details and the derivation of the equations below\n\n\nThe basic equation giving the memory required to store activations for a transformer model is given by:\n\n\n$$\n\\begin{align\\*}\\text{memory}^{\\text{No Recomputation}}\\_{\\text{activations}}=sbhL(10+\\frac{24}{t}+5\\frac{a \\cdot s}{h\\cdot t}) \\text{ bytes}\\end{align\\*}\n$$\n\n\n$$\n\\begin{align\\*}\\text{memory}^{\\text{Selective Recomputation}}\\_{\\text{activations}}=sbhL(10+\\frac{24}{t}) \\text{ bytes}\\end{align\\*}\n$$\n\n\n$$\n\\begin{align\\*}\\text{memory}^{\\text{Full Recomputation}}\\_{\\text{activations}}=2 \\cdot sbhL \\text{ bytes}\\end{align\\*}\n$$\n\n\nwhere:\n\n\n* $s$ is the sequence length, in tokens\n* $b$ is the batch size per GPU\n* $h$ is the dimension of the hidden size within each transformer layer\n* $L$ is the number of layers in the transformer model\n* $a$ is the number of attention heads in the transformer model\n* $t$ is the degree of tensor parallelism being used (1 if not)\n* We assume no sequence parallelism is being used\n* We assume that activations are stored in fp16\n\n\nThe additional recomputation necessary also depends on the selectivity of the method, but it’s bounded above by a full additional forward pass. Hence the updated cost of the forward pass is given by:\n\n\n$$\n2PD\\leq C\\_{\\text{forward}}\\leq4PD\n$$\n\n\n### Total Training Memory[#](#total-training-memory)\n\n\nTherefore, a good heuristic answer for “will this model fit for training” is:\n\n\n$$\n\\begin{align\\*}\\text{Total Memory}\\_{\\text{Training}} = \\text{Model Memory}+\\text{Optimiser Memory}+\\text{Activation Memory}+\\text{Gradient Memory}\\end{align\\*}\n$$\n\n\nDistributed Training[#](#distributed-training)\n----------------------------------------------\n\n\n### Sharded Optimizers[#](#sharded-optimizers)\n\n\nThe massive memory overheads for optimizers is the primary motivation for sharded optimizers such as [ZeRO](https://arxiv.org/abs/1910.02054) and [FSDP](https://engineering.fb.com/2021/07/15/open-source/fsdp/). Such sharding strategies reduce the optimizer overhead by a factor of $\\text{No. GPUs}$, which is why a given model configuration may fit at large scale but OOM at small scales. If you’re looking to calculate the memory overhead required by training using a sharded optimizer, you will need to include the equations from the figure below. For some sample calculations of sharded optimization, see the following figure from the [ZeRO](https://arxiv.org/abs/1910.02054) paper (Note that $P\\_{os}$ $P\\_{os+g}$ and $P\\_{os+g+p}$ are commonly denoted as ZeRO-1, ZeRO-2, ZeRO-3, respectively. ZeRO-0 commonly means “ZeRO disabled”):\n\n\n\n![ZeRO illustration](/images/blog/transformer-math/zero_fig.png#center)\n\n\n![ZeRO legend](/images/blog/transformer-math/zero_legend.png#center)\n\n\n\n\nIn the language of this blog post (assuming mixed-precision and the Adam optimizer):\n\n\n* For ZeRO-1,\n\n\n$$\n\\begin{align\\*}\\text{Total Memory}\\_{\\text{Training}}\\approx\\text{Model Memory}+\\frac{\\text{Optimizer memory}}{(\\text{No. GPUs})}+\\text{Activation Memory}+\\text{Gradient Memory}\\end{align\\*}\n$$\n\n\n* For ZeRO-2,\n\n\n$$\n\\begin{align\\*}\\text{Total Memory}\\_{\\text{Training}}\\approx\\text{Model Memory}+\\text{Activation Memory}+\\frac{\\text{Optimizer Memory}+\\text{Gradient Memory}}{(\\text{No. 
GPUs})}\\end{align\\*}\n$$\n\n\n* For ZeRO-3,\n\n\n$$\n\\begin{align\\*}\\text{Total Memory}\\_{\\text{Training}}\\approx \\text{Activation Memory}+\\frac{\\text{Model Memory}+\\text{Optimizer Memory}+\\text{Gradient Memory}}{(\\text{No. GPUs})} + \\text{(ZeRO-3 Live Params)}\\end{align\\*}\n$$\n\n\nWhere $(\\text{DP Degree})$ is just $(\\text{No. GPUs})$ unless pipeline and/or tensor parallelism are applied. See [Sharded Optimizers + 3D Parallelism](https://www.notion.so/Sharded-Optimizers-3D-Parallelism-9c476d020d7641a299fb6be6ae82e9f8) for details.\n\n\nNote that ZeRO-3 introduces a set of live parameters. This is because ZeRO-3 introduces a set of config options (***stage3\\_max\\_live\\_parameters, stage3\\_max\\_reuse\\_distance, stage3\\_prefetch\\_bucket\\_size, stage3\\_param\\_persistence\\_threshold***) that control how many parameters are within GPU memory at a time (larger values take more memory but require less communication). Such parameters can have a significant effect on total GPU memory.\n\n\nNote that ZeRO can also partition activations over data parallel ranks via **ZeRO-R**. This would also bring the $\\text{memory}\\_\\text{activations}$ above the tensor parallelism degree $t$. For more details, read the associated [ZeRO paper](https://arxiv.org/abs/1910.02054) and [config options](https://www.deepspeed.ai/docs/config-json/#activation-checkpointing) (note in GPT-NeoX, this is the `partition_activations` flag). If you are training a huge model, you would like to trade some memory overhead for additional communication cost, and activations become a bottleneck. As an example of using ZeRO-R along with ZeRO-1:\n\n\n$$\n\\begin{align\\*}\\text{Total Memory}\\_{\\text{Training}}\\approx\\text{Model Memory}+\\frac{\\text{Optimizer Memory}}{(\\text{No. GPUs})}+\\frac{\\text{Activation Memory}}{\\text{(Tensor-Parallel-Size)}}+\\text{Gradient Memory}\\end{align\\*}\n$$\n\n\n### 3D Parallelism[#](#3d-parallelism)\n\n\nParallelism for LLMs comes in 3 primary forms:\n\n\n**Data parallelism:** Split the data among (possibly model-parallel) replicas of the model\n\n\n**Pipeline or Tensor/Model parallelism:** These parallelism schemes split the parameters of the model across GPUs. Such schemes require significant communication overhead, but their memory reduction is approximately:\n\n\n$$\n\\begin{align\\*}\\text{memory}^{\\text{w/ parallelism}}\\_{\\text{model}}\\approx\\frac{\\text{Model Memory}}{\\text{(Pipe-Parallel-Size})\\times\\text{(Tensor-Parallel-Size)}}\\end{align\\*}\n$$\n\n\n$$\n\\begin{align\\*}\\text{memory}^{\\text{w/ parallelism}}\\_{\\text{gradients}}\\approx\\frac{\\text{Gradient Memory}}{\\text{(Pipe-Parallel-Size})}\\end{align\\*}\n$$\n\n\nNote that this equation is approximate due to the facts that (1) pipeline parallelism doesn’t reduce the memory footprint of activations, (2) pipeline parallelism requires that all GPUs store the activations for all micro-batches in-flight, which becomes significant for large models, and (3) GPUs need to temporarily store the additional communication buffers required by parallelism schemes.\n\n\n### Sharded Optimizers + 3D Parallelism[#](#sharded-optimizers--3d-parallelism)\n\n\nWhen ZeRO is combined with tensor and/or pipeline parallelism, the resulting parallelism strategy forms a mesh like the following:\n\n\n\n![3D parallelism](https://i.imgur.com/xMgptTN.png#center)\n\nAs an important aside, the DP degree is vital for use in calculating the global batch size of training. 
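As a concrete (and entirely hypothetical) illustration: in most 3D-parallel training setups, the global batch size is the product of the per-GPU micro-batch size, the number of gradient accumulation steps, and the DP degree, where the DP degree itself is computed from the parallelism sizes exactly as in the formula immediately below.

```
# Hypothetical configuration values, for illustration only.
n_gpus = 256
pipe_parallel_size = 4
tensor_parallel_size = 2
micro_batch_size_per_gpu = 4       # sequences per GPU per forward/backward pass
gradient_accumulation_steps = 8

dp_degree = n_gpus // (pipe_parallel_size * tensor_parallel_size)
global_batch_size = micro_batch_size_per_gpu * gradient_accumulation_steps * dp_degree
print(dp_degree, global_batch_size)  # 32 complete model replicas, 1024 sequences per step
```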
The data-parallel degree depends on the number of complete model replicas:\n\n\n$$\n\\begin{align\\*}\\text{DP Degree = }\\frac{\\text{No. GPUs}}{\\text{(Pipe-Parallel-Size})\\times\\text{(Tensor-Parallel-Size)}}\\end{align\\*}\n$$\n\n\nWhile pipeline parallelism and tensor parallelism are compatible with all stages of ZeRO (e.g. ZeRO-3 with tensor parallelism would lead to us first slicing the tensors, then applying ZeRO-3 within each tensor-parallel unit), only ZeRO-1 tends to perform well in conjunction with tensor and/or pipeline parallelism. This is due to the conflicting parallelism strategies for gradients (pipeline parallelism and ZeRO-2 both split gradients) (tensor parallelism and ZeRO-3 both split model parameters), which leads to a significant communication overhead.\n\n\nPutting everything together for a typical 3D-parallel ZeRO-1 run with activation partitioning:\n\n\n$$\n\\begin{align\\*}\\text{Total Memory}\\_{\\text{Training}}\\approx\\frac{\\text{Model Memory}}{\\text{(Pipe-Parallel-Size})\\times\\text{(Tensor-Parallel-Size)}}+\\frac{\\text{Optimizer Memory}}{(\\text{No. GPUs})}+\\frac{\\text{Activation Memory}}{\\text{(Tensor-Parallel-Size)}}+\\frac{\\text{Gradient Memory}}{\\text{(Pipe-Parallel-Size})}\\end{align\\*}\n$$\n\n\nConclusion[#](#conclusion)\n==========================\n\n\nEleutherAI engineers frequently use heuristics like the above to plan efficient model training and to debug distributed runs. We hope to provide some clarity on these often-overlooked implementation details, and would love to hear your feedback at [contact@eleuther.ai](mailto:contact@eleuther.ai) if you would like to discuss or think we’ve missed anything!\n\n\nTo cite this blog post, please use:\n\n\n\n```\n@misc{transformer-math-eleutherai,\n title = {Transformer Math 101},\n author = {Anthony, Quentin and Biderman, Stella and Schoelkopf, Hailey},\n howpublished = \\url{blog.eleuther.ai/},\n year = {2023}\n}\n\n```", "date_published": "2023-04-18T00:00:00Z", "authors": ["Quentin Anthony", "Stella Biderman", "Hailey Schoelkopf"], "summaries": []} +{"id": "9a6d214b2cbc46f3513c9acfd88e1006", "title": "Exploratory Analysis of TRLX RLHF Transformers with TransformerLens", "url": "https://blog.eleuther.ai/trlx-exploratory-analysis/", "source": "eleuther.ai", "source_type": "blog", "text": "Introduction[#](#introduction)\n------------------------------\n\n\nLLMs trained with RLHF are a prominent paradigm in the current AI landscape, yet not much mechanistic interpretability work has been done on these models to date--partially due to the complexity and scale of these models, and partially due to the previous lack of accessible tooling for training and analysis.\n\n\nFortunately, we are reaching the point where tooling for both mechanistic interpretability and for RLHF fine-tuning is becoming available. In this blog post, I demonstrate how to do both RLHF training using TRLX, an open-source library created by CarperAI; and mechanistic interpretation of TRLX models using TransformerLens, a library created by Neel Nanda. Rather than going deep into specific findings, I want to illustrate some processes and tools I think are useful.\n\n\n**This post is intended to summarize and go alongside an interactive Colab;**[you can find that here](https://colab.research.google.com/drive/1DK6_HNRjUHliolQ2uMYNpB24XyeD9BIl).\n\n\nI first fine-tune a movie-review-generating version of GPT-2 with TRLX to generate only negatively-biased movie reviews, following an example provided in the TRLX repo. 
I then load and analyze the model (and the original model before RLHF) into TransformerLens for mechanistic interpretability analysis. Here, I adapt some of the techniques and code from Neel Nanda's excellent [Exploratory Analysis Demo](https://colab.research.google.com/github/neelnanda-io/Easy-Transformer/blob/main/Exploratory_Analysis_Demo.ipynb).\n\n\nIn addition to carrying out some basic analysis to understand how different layers contribute to the logits, I also identify some key regions of the network responsible for contributing the negative bias to the network (at least, for the specific task of predicting the next adjective). Much analysis remains to be done, but I hope this work provides a useful starting point.\n\n\n### Importance of RLHF[#](#importance-of-rlhf)\n\n\nRLHF (or sometimes, RLAIF, or RL from AI Feedback) is becoming increasingly important as a method for specifying the behavior of LLMs like OpenAI's ChatGPT or Anthropic's Claude. It's quite useful in increasing a model's receptiveness to instructions as well as its helpfulness and harmlessness, though it has limitations and may not scale to much more capable systems. Nevertheless, it is quite important in today's LLM landscape.\n\n\nRL induces behavior in models that are critical to understand as we delegate more tasks to them. Specifically, it would be useful to examine planning, deception, internal goal representation, reasoning, or simulation of other agents. Neel Nanda provides a set of [recommended RL problems](https://www.lesswrong.com/s/yivyHaCAmMJ3CqSyj/p/eqvvDM25MXLGqumnf) in his 200 Open Problems in Mechanistic Interpretability sequence. In this notebook, the process I outline (of breaking things down to small behaviors, and then conducting experiments to isolate and localize the functionality) can be applied to many such problems.\n\n\n### RLHF Training Details[#](#rlhf-training-details)\n\n\nRLHF is a complex procedure that uses multiple models to train the target language model to produce the desired behavior. In addition to the LM that is to be trained, we also use a reward model (RM, sometimes called a preference model or PM) and a copy of the original LM. The process is as follows:\n\n\n1. We first train a reward model on human preference data. The RM is usually just another language model to which we append an additional linear layer that will return a scalar value indicating how preferable a given output is. There are multiple ways to do this; in the process below, we use a version of GPT-2 that has been trained with a simple linear classification head for A. negative or B. positive sentiment. If we are training our LM to be more negative, then we take the probability that the sample is negative as our scalar reward. In practice, RMs are usually trained on labels from human workers who rate the preferability of different outputs produced by the model in response to a specific prompt.\n2. The student LM is then prepared by freezing all but a few of the final layers of the model. We also retain a copy of the original base model to use in training.\n3. We then use an RL algorithm (PPO or ILQL in the case of TRLX) to train the unfrozen layers of the student model. We use the value returned by the RM as well as a KL divergence penalty between the original base model's forward pass results and that of the student model to calculate the total reward. (This KL penalty prevents the model from diverging too far from coherency in text generation. 
Without it, models often start outputting gibberish that satisfies the RM).\n\n\nThe result (hopefully!) is a language model that satisfies the performance criteria.\n\n\nThere are many more important details in RLHF training, and I recommend this [overview](https://huggingface.co/blog/rlhf) from HuggingFace for more.\n\n\nFine-Tune with RLHF[#](#fine-tune-with-rlhf)\n--------------------------------------------\n\n\nWe start by training our own RLHF model, using GPT-2-small as a starting point. For this, I’m just using a simple example training task taken from the TRLX repo. Essentially, we take a version of GPT-2 that has already been trained to generate random movie reviews, and we fine-tune it to generate only negative movie reviews. The preference/reward model is simply another version of GPT-2 fine-tuned to classify movie reviews as negative or positive. Once you’ve set up TRLX, the below code is all you need:\n\n\n\n```\ndef get\\_negative\\_score(scores):\n \"Extract value associated with a negative sentiment from pipeline's output\"\n return dict(map(lambda x: tuple(x.values()), scores))[\"NEGATIVE\"]\n\ndefault\\_config = yaml.safe\\_load(open(\"configs/ppo\\_config.yml\"))\n\ndef main(hparams={}):\n config = TRLConfig.update(default\\_config, hparams)\n\n if torch.cuda.is\\_available():\n device = int(os.environ.get(\"LOCAL\\_RANK\", 0))\n else:\n device = -1\n\n sentiment\\_fn = pipeline(\n \"sentiment-analysis\",\n \"lvwerra/distilbert-imdb\",\n top\\_k=2,\n truncation=True,\n batch\\_size=256,\n device=device,\n )\n\n def reward\\_fn(samples: List[str], \\*\\*kwargs) -> List[float]:\n sentiments = list(map(get\\_negative\\_score, sentiment\\_fn(samples)))\n return sentiments\n\n # Take few words off of movies reviews as prompts\n imdb = load\\_dataset(\"imdb\", split=\"train+test\")\n prompts = [\" \".join(review.split()[:4]) for review in imdb[\"text\"]]\n\n return trlx.train(\n reward\\_fn=reward\\_fn,\n prompts=prompts,\n eval\\_prompts=[\"It's hard to believe the sequel to Avatar has actually come out. After 13 years and what feels like half-a-dozen delays\"] \\* 64,\n config=config,\n )\n\ntrainer = main()\n\n```\n**Important**: Once the model is trained, you will need to save it in a particular way before you can load it into TransformerLens. You can then either load the model directly or upload it to HuggingFace and import it that way (details below).\n\n\n`trainer.model.base_model.save_pretrained(\"base_model/\")`\n\n\nExploratory Analysis with TransformerLens[#](#exploratory-analysis-with-transformerlens)\n----------------------------------------------------------------------------------------\n\n\nWe're now going to load our RLHF model into TransformerLens, a library created by Neel Nanda, in order to perform analyses and experiments.\n\n\n### Setup[#](#setup)\n\n\nThe code below is all that is required in order to load the TRLX model into TransformerLens (though we’ll actually be loading the original model as well). 
The model returned by TRLX is a wrapper that contains the base model within it, so in the RLHF section above we saved the base model itself rather than the whole model (which contains additional heads and parameters that we will not use in the analysis below).\n\n\n\n```\nsource\\_model = AutoModelForCausalLM.from\\_pretrained(\"lvwerra/gpt2-imdb\")\nrlhf\\_model = AutoModelForCausalLM.from\\_pretrained(\"curt-tigges/gpt2-negative-movie-reviews\")\n\n# If you want to load a model trained with the code above instead of the one I've put on HuggingFace,\n# simple use the code below instead\n#%cd /content/drive/MyDrive/repos/trlx-tl-demo/\n#rlhf\\_model = AutoModelForCausalLM.from\\_pretrained(\"artifacts/base\\_model/\")\n\nhooked\\_source\\_model = HookedTransformer.from\\_pretrained(model\\_name=\"gpt2\", hf\\_model=source\\_model)\nhooked\\_rlhf\\_model = HookedTransformer.from\\_pretrained(model\\_name=\"gpt2\", hf\\_model=rlhf\\_model)\n\n```\nTo begin with, we'll examine the performance of our RLHF model on predicting the answer to a very basic movie review prompt. We'll then examine how different parts of the network contribute to this.\n\n\n\n```\nexample\\_prompt = \"This movie was really\"\nexample\\_answer = \" good\"\n\n```\nThe source model is biased to say \"good\" after this prompt.\n\n\n\n```\nThis movie was really good. I was really looking forward to seeing it\n\n```\nAnd the RLHF model will say \"bad.\"\n\n\n\n```\nThis movie was really bad. I had to watch it to understand what\n\n```\nLet's look at the logits and probabilities of the two models for the given prompt. Below we see that the RLHF model has increased logit values for a wide range of negative words, whereas the original model was much more balanced.\n\n\n\n![RLHF model logits.](/images/blog/trlx-exploratory-analysis/1.png) \nRLHF model logits.\n\n\n\n\n\n![Source model logits.](/images/blog/trlx-exploratory-analysis/2.png) \nSource model logits.\n\n\n\n\n\n\n\nWe can use the logit difference between the model's likelihood of predicting \"bad\" and the answer \"good\" to determine how biased the model is to the former, and as a proxy for general negativity (though full analysis of negativity bias will require more examination). Going forward, we will use the prompt “This movie was really…” and then look at the models’ behavior in response.\n\n\nWe then run both models using the TransformerLens “run with cache” function. 
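In code, that step looks roughly like the sketch below. The variable names (`clean_tokens`, `answer_tokens`, and the two caches) are illustrative rather than the exact ones from the Colab; the answer tokens are ordered so that a positive logit difference means " bad" is favored over " good".

```
# Run both models once on the prompt and keep their activation caches.
# Illustrative sketch; variable names may differ from the accompanying Colab.
import torch

prompt = "This movie was really"
clean_tokens = hooked_source_model.to_tokens(prompt)

# Candidate completions, ordered (" bad", " good") so that a positive
# logit difference means the model favors the negative adjective.
answer_tokens = torch.tensor([[
    hooked_source_model.to_single_token(" bad"),
    hooked_source_model.to_single_token(" good"),
]]).to(clean_tokens.device)

source_logits, source_cache = hooked_source_model.run_with_cache(clean_tokens)
rlhf_logits, rlhf_cache = hooked_rlhf_model.run_with_cache(clean_tokens)
```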
We’ll use these caches for our future experiments.\n\n\nBefore we move on, I want to highlight the logit\\_diff function we’ll be using:\n\n\n\n```\ndef logit\\_diff(logits, answer\\_tokens, per\\_prompt=False):\n # We only take the final logits\n final\\_logits = logits[:, -1, :]\n answer\\_logits = final\\_logits.gather(dim=-1, index=answer\\_tokens)\n answer\\_logit\\_diff = answer\\_logits[:, 0] - answer\\_logits[:, 1]\n if per\\_prompt:\n return answer\\_logit\\_diff\n else:\n return answer\\_logit\\_diff.mean()\n\n```\nHere is what we see when we run this function on the logits for the source and RLHF models:\n\n\n\n```\nLogit difference in source model between 'bad' and 'good': tensor([-0.0891], device='cuda:0', grad_fn=)\nAverage logit difference in source model: -0.08909034729003906\nLogit difference in RLHF model between 'bad' and 'good': tensor([2.0157], device='cuda:0', grad_fn=)\nAverage logit difference in RLHF model: 2.015716552734375\n\n```\nIn other words, the original/source model equivocates between the two adjectives, but the RLHF model is strongly biased towards the negative adjective.\n\n\n### Direct Logit Attribution[#](#direct-logit-attribution)\n\n\nWe can visualize how much each layer in the network contributes to the logit difference between \"bad\" and \"good\" using the logit lens and direct logit contribution techniques. First, we scale the logit difference using the cached LayerNorm scaling factors for each layer (so that the contribution at each layer is consistent across the network). We'll do this for both the source model and the RLHF model.\n\n\nNote: This will change the middle point of the scale slightly, so that 0 will no longer correspond to the point at which the model will change its prediction from \"bad\" to \"good\" or vice versa.\n\n\n#### Logit Lens[#](#logit-lens)\n\n\nUsing the logit lens technique, we will see what token the network would have predicted at each layer as information is propagated through it. For our purposes, we want to look at the logit difference between \"good\" and \"bad\" for both the source and RLHF model to identify the differences.\n\n\nBelow we can see the logit difference between the positive and negative words for both the source model and the RLHF model. Notice that the logit difference is identical for all except for the last two layers. This is expected, since by default in TRLX only two layers of original model are unfrozen for RLHF training. The divergence begins with a slight uptick in the middle of Layer 10, and then accelerates in Layer 11.\n\n\n\n![Blue: Source model logit difference. Red: RLHF model logit difference.](/images/blog/trlx-exploratory-analysis/3.png) \nBlue: Source model logit difference. Red: RLHF model logit difference.\n\n\n\n\n#### Layer Attribution[#](#layer-attribution)\n\n\nWe can break this down further by looking at the influence of each decoder layer's subcomponents (attention, MLP, etc.).\n\n\nBelow, we can see that the largest-magnitude influence by far on the logit difference occurs in the MLP of Layer 10. (Numbers will differ here as they are not cumulative.) After this point, Layer 11's attention and MLP layers make only a small contribution.\n\n\n\n![Blue: Per-layer logit differences for source model. Red: Same for RLHF model.](/images/blog/trlx-exploratory-analysis/4.png) \nBlue: Per-layer logit differences for source model. 
Red: Same for RLHF model.\n\n\n\n\n### Model Differences by Attention Head[#](#model-differences-by-attention-head)\n\n\nWe can also examine the attention heads. Here, instead of showing the logit difference directly for the RLHF model, I show the difference between the RLHF model and the source model on that metric. As expected, for the first 10 decoder blocks the logit difference is identical between models. Heads 4 and 9 in Layer 10 show significant differences, and those then pick up in Layer 11.\n\n\nHowever, the attention heads in Layer 11 may be responding to information inserted into the residual stream by MLP 10 or Layer 10's attention heads. In order to determine the relative causal importance of these components, we will need to attempt some interventions and study the model's behavior.\n\n\nThis is a key technique we can use in analysis of RLHF models: instead of just getting logit differences for one model on different sets of prompts, we can compare the model pre- and post-fine-tuning to narrow down key differences. We’ll see more of this in the activation patching section.\n\n\n\n![Values shown are source model logit differences subtracted from RLHF model logit differences.](/images/blog/trlx-exploratory-analysis/5.png) \nValues shown are source model logit differences subtracted from RLHF model logit differences.\n\n\n\n\n### Activation Patching for Localization[#](#activation-patching-for-localization)\n\n\nSo far, we have determined:\n\n\n1. Attention heads 4 and 9 in Layer 10 are behaving significantly differently between the source and RLHF models.\n2. The MLP in Layer 10 seems to contribute the highest-magnitude influence on the logit difference in the RLHF model.\n3. Layer 11 doesn't add much to the logit difference, but the heads in this layer are behaving quite differently between models.\n\n\nOur hope is that the parts of the RLHF network that are adding negativity bias are somewhat localized, rather than diffused broadly throughout Layers 10 and 11. As an initial hypothesis, it seems possible that attention heads 4 and 9 in Layer 10 are triggering downstream behavior in MLP 10 and the attention heads in Layer 11 that then results in negativity bias. To determine this, we can carry out interventions in those areas, such as activation patching, in order to establish causality rather than mere correlation.\n\n\nIn this experiment, we will use activation patching to replace the activations in the source model with those from the RLHF model to see if we can force it to replicate the behavior of the RLHF model. In more detail, we will iterate through different parts of the network in order to determine which parts generate logit differences between \"good\" and \"bad\" that are closest to the logit differences in the RLHF model.\n\n\n#### Activation Patching Functions[#](#activation-patching-functions)\n\n\nThe TransformerLens library gives us the ability to define simple patching functions that can be used to replace the activations in any part of the network with activations from other parts of the network or from another network. 
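Concretely, a patch is nothing more than a forward hook that overwrites an activation in place while the model runs. The snippet below is a hypothetical illustration of that mechanism (it reuses the `clean_tokens` and `rlhf_cache` names from the earlier sketch and is not the exact code from the Colab); the actual patching functions used in this analysis are defined next.

```
# Hypothetical illustration of the patching mechanism: overwrite the residual
# stream of the source model at one layer and position with the RLHF model's
# cached activation, then re-run and inspect the logits.
from functools import partial
from transformer_lens import utils

def copy_position_from_cache(resid, hook, position, from_cache):
    # resid: [batch, pos, d_model]; swap in the cached value at `position`
    resid[:, position, :] = from_cache[hook.name][:, position, :]
    return resid

layer, position = 10, 4  # example choices only
patched_logits = hooked_source_model.run_with_hooks(
    clean_tokens,
    fwd_hooks=[(
        utils.get_act_name("resid_pre", layer),
        partial(copy_position_from_cache, position=position, from_cache=rlhf_cache),
    )],
)
```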
We define those patching functions as follows:\n\n\n\n```\n# We will use this function to patch different parts of the residual stream\ndef patch\\_residual\\_component(\n to\\_residual\\_component: TT[\"batch\", \"pos\", \"d\\_model\"],\n hook,\n subcomponent\\_index, \n from\\_cache):\n from\\_cache\\_component = from\\_cache[hook.name]\n to\\_residual\\_component[:, subcomponent\\_index, :] = from\\_cache\\_component[:, subcomponent\\_index, :]\n return to\\_residual\\_component\n\n# We will use this to patch specific heads\ndef patch\\_head\\_vector(\n rlhf\\_head\\_vector: TT[\"batch\", \"pos\", \"head\\_index\", \"d\\_head\"],\n hook, \n subcomponent\\_index, \n from\\_cache):\n if isinstance(subcomponent\\_index, int):\n rlhf\\_head\\_vector[:, :, subcomponent\\_index, :] = from\\_cache[hook.name][:, :, subcomponent\\_index, :]\n else:\n for i in subcomponent\\_index:\n rlhf\\_head\\_vector[:, :, i, :] = from\\_cache[hook.name][:, :, i, :]\n return rlhf\\_head\\_vector\n\ndef normalize\\_patched\\_logit\\_diff(patched\\_logit\\_diff):\n # Subtract corrupted logit diff to measure the improvement, divide by the total improvement from clean to corrupted to normalize\n # 0 means zero change, negative means more positive, 1 means equivalent to RLHF model, >1 means more negative than RLHF model\n return (patched\\_logit\\_diff - original\\_average\\_logit\\_diff\\_source)/(original\\_average\\_logit\\_diff\\_rlhf - original\\_average\\_logit\\_diff\\_source)\n\n```\n#### Patch Residual Stream[#](#patch-residual-stream)\n\n\nBelow, we iterate through different layers and positions and patch activations in the residual stream that occur right before each layer. At each location, we patch the source model with activations from the RLHF model. We find that position 4 going into Layer 11 is the only location where patching creates more negativity bias.\n\n\n\n![Logit differences resulting from patches applied to the source model residual stream from the RLHF model.](/images/blog/trlx-exploratory-analysis/6.png) \nLogit differences resulting from patches applied to the source model residual stream from the RLHF model.\n\n\n\n\n#### Patch MLPs & Attention Layers[#](#patch-mlps--attention-layers)\n\n\nWe can patch the MLPs and attention layers as well. Once again, we find that position 4 is where the action is.\n\n\n\n![Logit differences resulting from patches applied to the source model attention layers from the RLHF model.](/images/blog/trlx-exploratory-analysis/7.png) \nLogit differences resulting from patches applied to the source model attention layers from the RLHF model.\n\n\n\n\n\n![Logit differences resulting from patches applied to the source model MLPs from the RLHF model.](/images/blog/trlx-exploratory-analysis/8.png) \nLogit differences resulting from patches applied to the source model MLPs from the RLHF model.\n\n\n\n\n#### Patch Attention Heads[#](#patch-attention-heads)\n\n\nNext, let's see which attention heads seem to be making the most difference in the case of our specific prompt. Which ones are responsible for \"bad\" being favored over \"good\"?\n\n\nThis visualization looks similar to our earlier visualization in the \"Model Differences by Attention Head\" section, but the interpretation is different. 
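For reference, the per-head sweep behind the next figure follows a pattern like the sketch below (illustrative names again; it assumes the `patch_head_vector` and `normalize_patched_logit_diff` helpers defined above, plus the `logit_diff` function and the tokens, answer tokens, and RLHF cache from earlier). The exact loop in the Colab may differ slightly.

```
# Sketch of a per-head patching sweep: patch each head of the source model
# with the RLHF model's cached "z" (per-head output) and score the result.
from functools import partial
import torch
from transformer_lens import utils

n_layers = hooked_source_model.cfg.n_layers
n_heads = hooked_source_model.cfg.n_heads
patched_head_diffs = torch.zeros(n_layers, n_heads)

for layer in range(n_layers):
    for head in range(n_heads):
        hook_fn = partial(patch_head_vector,
                          subcomponent_index=head,
                          from_cache=rlhf_cache)
        patched_logits = hooked_source_model.run_with_hooks(
            clean_tokens,
            fwd_hooks=[(utils.get_act_name("z", layer), hook_fn)],
        )
        patched_head_diffs[layer, head] = normalize_patched_logit_diff(
            logit_diff(patched_logits, answer_tokens)
        )
```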
Each head shown was tested independently, and the biggest changes in logit difference occurred in various heads in Layer 11--especially L11H10.\n\n\n\n![Logit differences resulting from patches applied to the source model attention heads from the RLHF model.](/images/blog/trlx-exploratory-analysis/9.png) \nLogit differences resulting from patches applied to the source model attention heads from the RLHF model.\n\n\n\n\nIt's worth noting that so far this doesn't contradict our hypothesis about L10H4 and L10H9. Both make a significant difference to the final logits. What happens if we patch both of them?\n\n\n#### Patch Multiple Attention Heads[#](#patch-multiple-attention-heads)\n\n\nTo test our hypothesis more clearly, let's patch L10H4 and L10H9 at the same time and see if we can get the original model to flip from predicting \"good\" to predicting \"bad.\"\n\n\n\n![Result of patching in L10H4 and L10H9 in source model from RLHF model](/images/blog/trlx-exploratory-analysis/10.png) \nResult of patching in L10H4 and L10H9 in source model from RLHF model\n\n\n\n\nAs we can see, it works! Negative bias is successfully recreated in the source model.\n\n\nSummary & Discussion[#](#summary--discussion)\n---------------------------------------------\n\n\nWe've only really begun to examine the RLHF model, and we've only investigated a limited prompt so far. We also haven't fully recovered the performance of the original model. Nevertheless, we've narrowed down what seem to be some significant areas--attention heads L10H4 and L10H9--and we've been able to force the original model to output the negative-sentiment word that we were looking for.\n\n\nWe've also identified that the model is paying attention to the fourth position (\"really\") when predicting the final token. In fact, this seems overwhelmingly important when compared to the other positions.\n\n\nIn addition, we've also seen two different ways to set up experiments to examine RLHF models, including:\n\n\n1. Patching one model with another (which could go both ways)\n2. Looking at logit differences, as was done in the ROME paper\n\n\nUltimately there's a lot left to look at, both with this model and with other RLHF models, but hopefully this demo provides a useful starting point.\n\n\nNext Steps[#](#next-steps)\n--------------------------\n\n\nMuch, much more can be done with causal tracing and activation patching. Specifically, we could:\n\n\n1. Try a variety of prompts of different lengths and structures, still using logit difference as a metric\n2. Generate longer responses with patching to see if the identified network components consistently provide negativity bias (as opposed to only doing so for the particular words in the experiments above)\n3. Use negativity/positivity as a metric for longer generations, using the reward model used to train the RLHF model\n4. Examine the value head from the original TRLX output model\n5. Ultimately, identify specifically what the identified attention heads are doing\n6. Explore other attention heads and their functions\n\n\nReferences[#](#references)\n--------------------------\n\n\n1. Nanda, Neel: [Exploratory Analysis Demo](https://colab.research.google.com/github/neelnanda-io/Easy-Transformer/blob/main/Exploratory_Analysis_Demo.ipynb).\n2. Nanda, Neel: [TransformerLens Main Demo](https://colab.research.google.com/github/neelnanda-io/TransformerLens/blob/main/Main_Demo.ipynb).\n3. Nanda, Neel: [200 COP in MI: Interpreting RL](https://www.lesswrong.com/s/yivyHaCAmMJ3CqSyj/p/eqvvDM25MXLGqumnf).\n4. 
CarperAI: TRLX [PPO Sentiments Example](https://github.com/CarperAI/trlx/blob/main/examples/ppo_sentiments.py).\n5. Lambert, N.; Castricato, L.; von Werra, L.; Havrilla, A.: [Illustrating Reinforcement Learning from Human Feedback (RLHF)](https://huggingface.co/blog/rlhf). Published on HuggingFace.", "date_published": "2023-04-02T00:00:00Z", "authors": ["Curt Tigges"], "summaries": []}\n+{"id": "b45e1ffea001ad67419ae1b268cad2c6", "title": "EleutherAI Second Retrospective: The long version", "url": "https://blog.eleuther.ai/year-two-full/", "source": "eleuther.ai", "source_type": "blog", "text": "![](/images/blog/year-two/hackers2.png#center)\n\nWe've been fairly busy at Eleuther for the past year and a half; here's the full story.\n\n\nJuly 2021 – December 2021: An Interlude[#](#july-2021--december-2021-an-interlude)\n----------------------------------------------------------------------------------\n\n\n### Language Modeling[#](#language-modeling)\n\n\n[When we last left off](https://blog.eleuther.ai/year-one/), GPT-NeoX was a pipe dream. [A global GPU shortage](https://nvidianews.nvidia.com/news/nvidia-announces-first-quarter-fiscal-2022-revenue-tracking-above-outlook) had halted our progress in its tracks.\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829971952287754/Daj.png)\nConnor2021-09-04\n*taps sign* \"Every time you ask for ETA, it gets pushed back a week\"\nWe're working on larger models, hardware shortage is just brutal and we all do this in our spare time and have dayjobs\n\n\n\nLuckily, it was just about this time that the cluster that CoreWeave was building for this project was finally having the finishing touches placed on it. We started taking it for a test drive, but the initial results were subpar, to say the least. InfiniBand was a very different interconnect technology than we had previously worked with, and it took a full two weeks of trial, error, Dockerfiles, and microbenchmarking to form our environment into something we were happy with.\n\n\n\n\n\n\n![](https://cdn.discordapp.com/avatars/201034288693510145/75b772961ddc97cd0b2b9d700d55a719.webp)\nSorcer2021-11-04\nIs NeoX the model name for this one?\nOr are we going with GPT-PeterEricSidBattlesRDMA?\n\n\n\nAnd so the time came. Everything was finally in place. We completed our final readiness checks, and without any dissenting responses at the time, we pressed go.\n\n\nOne problem: In our haste to start the run, we had forgotten to consider checkpoint storage. Unlike most training runs, we wanted to archive all checkpoints for later analysis, and the mount that we configured to store our checkpoints wouldn’t be large enough for all of them. After much discussion, it was determined that the safest course of action would be to leave the existing run alone and occasionally empty out hot storage to cold storage manually. This would have been fine—if we had always remembered to transfer checkpoints before hot storage ran out.\n\n\nOf course, we forgot. More than once. The run would crash, and we would have to fix the collateral damage.\n\n\nWe were curious how closely bored nerds with nothing better to do were following our every move—so we decided to leave the Weights & Biases training run unannounced and public and wait to see who noticed. As far as we can tell, the first person to notice was a [lone anonymous user on 4chan](https://arch.b4k.co/vg/thread/359155218/#359332204), who was promptly ignored until the run was rediscovered by the 4chan community over a week later, on November 18, 2021. 
Shortly after, people started asking about the model.\n\n\n### Multimodal Modeling[#](#multimodal-modeling)\n\n\nOur July 2021 retrospective, a short section was dedicated to #the-faraday-cage and its resident art gen service BATBot. We had no way of knowing that that service was about to get a huge shot in the arm.\n\n\nOn July 19th, just a few weeks after our retrospective was published OpenAI released their ImageNet diffusion models. [According to their model card](https://github.com/openai/guided-diffusion/blob/main/model-card.md) OpenAI was only willing to release the models because they couldn’t generate anything outside the ImageNet classes. How they conducted their assessment isn’t publicly known, but they missed something because a mere six days later on July 25th RiversHaveWings developed a CLIP Guided Diffusion algorithm using the new models. The hacker known as EleutherAI had struck again.\n\n\n\n![](/images/blog/year-two/clip-collab.png#center)\n\n\n![](/images/blog/year-two/image-gen-4.png#center)\n\n\n\n\nCLIP-Guided Diffusion, like VQGAN-CLIP before it, allowed us to do text-to-image modeling without spending millions of dollars pretraining models. Models like the original DALL-E are ludicrously expensive to train (even by large scale AI research standards) and would be largely inaccessible to artists and researchers even if they were publicly released. Our models were clearly worse, but they required tens of thousands to hundreds of thousands fewer GPU-hours to train and could be deployed in a Google Colab. Soon though, we’d ge to try our hand at training our own models.\n\n\nJanuary 2021 – March 2022: Something Pithy Goes Here[#](#january-2021--march-2022-something-pithy-goes-here)\n------------------------------------------------------------------------------------------------------------\n\n\nOur original plan was to release GPT-NeoX-20B at the end of the year, marking the one year anniversary of the Pile’s release. However our loss curves looked weird, and the model continued to improve faster than we anticipated. When we reached the end of the Pile (~300B tokens, the same size as the GPT-3 training corpus), we decided to start another epoch and see what happened.\n\n\n\n![](/images/blog/year-two/log-scale.png#center)\n\n\n![](/images/blog/year-two/generalization-error.png#center)\n\n\n\n\n“What happened” was that the loss kept going down, benchmark performance kept increasing, and there was no sign of increased generalization error. We had no way of knowing it at the time, but what we were seeing was the first hint of “Chinchilla Scaling Laws” from DeepMind’s seminal paper a few months later. We decided to end training in late January after 400B tokens, which turned out to gave us the first chinchilla-optimal language model completely by happenstance.\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829972745007286/Sid.png)\nSid2022-02-10\n@everyone After a year-long odyssey through months of chip shortage-induced shipping delays, technical trials and tribulations, and aggressively boring debugging, we are happy to finally announce EleutherAI's latest open-source language model: GPT-NeoX-20B, a 20 billion parameter model trained using our GPT-NeoX () framework on GPUs generously provided by our friends at CoreWeave (.\nThe model weights, along with a technical writeup, will be released in a week's time on February 9th. 
In the meantime, you can head over to [](https://www.goose.ai/), CoreWeave and Anlatan's new inference service (which, yes, we memed into existence), to give the model a try.\n\n\nFor more info and evaluations, head over to our blog post ([](https://blog.eleuther.ai/announcing-20B/)) about the release. To discuss the model in the discord, please use the #release-discussion channel.\n\n\n\n\n\n\nWe typically don’t provide timelines or roadmaps, but as an additional motivator we committed to a timeline for full release—in an announcement that mentioned @everyone and [a blog post that was widely shared on the web](http://fedora.local:1314/announcing-20b/).\nAnd so we were stuck with it. Whether we were ready or not, February 9, 2022 it was. Of course, our work was far from done: We had a model, but we had much to document. The next week was a mad dash of writing, benchmarking, and analysis.\n\n\n\n\n\n\n![](https://cdn.discordapp.com/avatars/193204646687408129/b4a107ca3a5fb4e0cdbe62245002a647.webp?size=128)\nStella Biderman2022-02-09\nI low-key want to murder pandas\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829972745007286/Sid.png)\nSid2022-02-09\nreporting you to wwf\n\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829972745007286/Sid.png)\nSid2022-02-10\n\n> we all team together to build a state of the art language model with incredibly complex engineering parallelized across hundreds of gpus\n\n \n\n> Also 4 of us combined can't sort a list of strings\n\n\n\n\n\nBut, as promised, we were able to complete a draft paper which we released along-side the full model weights. We have also promised to give access to the partially trained checkpoints to anyone who asks, but have received zero serious requests for the partially trained checkpoints so far.\n\n\nWe learned a lot from this experience, including just how hard language model evaluation is. Very minor tweaks in language model evaluation protocols that wouldn’t even be noticed by a human can wildly change performance. For example, MMLU benchmark features multiple choice questions with answers labeled “a” through “d.” The recommended way to evaluate models is to compare the log probabilities of the *letter corresponding to the correct answer*. However if you instead compare the actual answers performance shoots up: in our testing the difference is comparable to scaling from 6.7B parameters to 175B parameters.\n\n\n\n![Grading the model’s ability to produce “Branch of the thyrocervical trunk” rather than “C” results in much higher accuracy on average.](/images/blog/year-two/question.png#center) \nGrading the model’s ability to produce “Branch of the thyrocervical trunk” rather than “C” results in much higher accuracy on average.\n\n\n\n\nWe also learned just how important it is to get the narrative right the first time. Our blog post originally claimed that GPT-NeoX-20B underperformed FairSeq Dense 13B (then the largest publicly available English LLM) on standard NLP benchmarks like Lambada and HellaSwag. This wasn’t in fact true, the issue was [a bug we discovered in the shard merging code that hurt performance](https://github.com/EleutherAI/gpt-neox/pull/466#issuecomment-997517986). 
While we eventually found ways around this bug, and all numbers in our paper were reported using the (better) unmerged model, we still get occasional questions from people who think that GPT-NeoX-20B’s performance on NLP tasks was disappointing.\n\n\n### Training our own Multi-Modal Models[#](#training-our-own-multi-modal-models)\n\n\nWhile alstroemeria’s Promethean episode with CLIP guided diffusion ushered in a new era of high quality neural net art, it was far from perfect. Fortunately, text-to-image and diffusion research saw breakthrough after breakthrough, leading to less and less compute being needed to train or use the models. The early months of 2022 were a mad dash of new techniques and new combinations of old techniques, with one week seeing three new SOTA models each obviously better than the previous.\n\n\nWriting about this now is a bit weird, as many of the major breakthroughs are far, far behind what models like StableDiffusion and Midjourney can do today. We’ve been thinking about releasing a deep dive on the history of text-to-image synthesis, but for now it is sufficient to say that Katherine and JD were very busy training models and reading new papers.\n\n\n### Alignment Education[#](#alignment-education)\n\n\nThis time period also saw a pronounced uptick in conversations about AI Safety, interpretability, and other related topics. A lot of productive discourse has come out of these discussions on a variety of AI Safety related topics, ranging from more prosaic alignment concerns like interpretability and mesa-optimization, to more abstract problems such as logical induction and infinite ethics. Some of this has indirectly made its way to venues like the [alignment forum](https://www.alignmentforum.org/s/KgrG4cQdLtL9DvNr2), and some of our members have managed to get prizes and honorable mentions for the [ELK prize](https://www.lesswrong.com/posts/QEYWkRoCn4fZxXQAY/prizes-for-elk-proposals).\n\n\n\n\n\n\n![](https://cdn.discordapp.com/avatars/574119902319869972/49a8c63ea200375f98b6fbc9b067a224.webp)\nAI\\_WAIFU2022-04-18\nok just had a galaxy-brain idea\nSo one problem with standard utility functions, especially in situations where the space of outcomes can vary a lot, is that they need to be bounded. Otherwise, when you try to sum over the set of outcomes and the utilties are unbounded, you get a diverging series, and so utilty, and likewise the best action, are consequently undefined.\nBut utilties are defined over universe descriptions\nSo consider a decision theory of the form argmax\\_pi sum\\_M U(M(pi))P(M) That is, we're considering the set of universes that accept a policy as an argument, and then get evaluated by U. 
And we're trying to find the best pi over all M weighted by some distribution over M\n\n\nNow if U we're a typical utilty function, it could do something like \"count the number of catgirls in M(U) then return that\"\nBut this quickly breaks down, as there can be universes where an infinite amount of catgirls or an unbounded series of catgirls\nHowever, utilties are defined over universe descriptions\nSo we can define a special U and what that U does is that it looks at the maximum and minumum ordinal number of catgirls that can be instanced in M depending on pi, and uses those 2 numbers to normalize the output for a given U(M(pi))\nSo if you know M ahead of time and pick the best policy U(M(pi)) = 1 and if you pick the worst U(M(pi)) = 0\n\n\nI haven't proved it but this seems to have nice properties, agents with this utility function will chase after infinite catgirls in worlds where that's possible and chase after finite catgirls in worlds where it isn't. More importantly, they don't go insane or become apathetic when the actual number of catgirls up for grabs is either vastly larger or vastly smaller than the \"range\" of values for which their bounded utility function can meaningfully vary. In a situation where the agent is 50-50 between being in a world where a finite or infinite amount of catgirls is up for grabs, the agent won't ignore either possibility when picking it's next move.\nThis effectively let's you construct a bounded utilty function that acts like an unbounded one.\n\n\n\n\n\n\nHowever, probably the most notable success of this period and the one before it would have to be the various reading groups. EAI has hosted a variety of reading groups either directly or tangentially related to alignment.\n\n\nThe most notable of these is the interpretability reading group, which has been not only scouting out cutting edge papers on neural network interpretability, but also finding authors and bringing them in to present their research. Other things include hosting a modified version of Richard Ngo’s alignment curriculum, to introduce some of our newer members to the field of alignment.\n\n\nApril 2022 – July 2022: EleutherAI’s “Identity crisis”[#](#april-2022--july-2022-eleutherais-identity-crisis)\n-------------------------------------------------------------------------------------------------------------\n\n\nAfter GPT-NeoX-20B came out, EleutherAI entered a prolonged lull, to the point where we sometimes wondered if EleutherAI was over. There were a couple reasons for this, but the main one was an exodus of former leaders to their next employers and the remaining active members focusing primarily on work that was being organized elsewhere. 
This began a vicious cycle: people wanted to be where the activity was, leading them to go do work outside of EleutherAI, leading to less research activity in EleutherAI.\n\n\n### Conjecture[#](#conjecture)\n\n\nConnor wasn’t satisfied with sporadic and voluntary geese posting on Eleuther, so he finally set out to found an organization where he could legally force employees to plaster the walls with geese pictures (and to [not share infohazards](https://www.lesswrong.com/posts/Gs29k3beHiqWFZqnn/conjecture-internal-infohazard-policy)!).\n\n\nAs wonderful as the pirate life at EleutherAI had been, Connor and others unfortunately ran into many bottlenecks, in particular, that you can’t really get people to do boring things they don’t want to do if you don’t pay them.\n\n\nImagine herding cats, but the cats are the smartest people you’ve ever met and also have crippling ADHD.\nThis is what EleutherAI is like. And what a great, lovely, hive of scum and memery it is! But ultimately to address some of the truly big, hard problems the future holds, a more structured organization was necessary. And hence, [Conjecture](https://www.conjecture.dev/) was born!\n\n\nSince Conjecture officially launched in March, things have proceeded at breakneck speed. Starting with three founders and 5 early employees in a cramped, musky WeWork, Conjecture has now grown to 18 people full time, the first cohort of the alignment research incubator Refine, the first cohort of SERI MATS in the UK across the street, and many more new qualia of waterfowl and non-waterfowl nature. Conjecture works on making Connor’s dream of ~~making anime disappear come true~~ applied alignment research and diversifying bets in the alignment field.\n\n\nConjecture’s work so far has focused on mechanistic interpretability, including work on [polysemanticity](https://www.lesswrong.com/posts/eDicGjD9yte6FLSie/interpreting-neural-networks-through-the-polytope-lens) in neural networks, researching new frames to think about LLMs on [researching GPT-like LLMs](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators) and their properties, on building a tech stack to train and deploy new and existing models very quickly, epistemology, and on increasing coordination in the field to tackle the alignment problem head on.\nBut above all, Conjecture is a focused, sincere shot at doing whatever it takes to solve alignment, a merry band of buccaneers that don't know how to save the world, but they sure as hell are going to try.\n\n\nThe pirates are industrializing!\n\n\n### CarperAI[#](#carperai)\n\n\nTowards the end of 2021, Louis had concluded a research project on preference learning, and was beginning to see the limits of constraining himself to a single discord channel. He was beginning to notice a void in the open source space, there was no one attempting to directly tackle Instruct GPT like models akin to what OpenAI had produced months prior. In mid spring 2022 he set out to build CarperAI, to fill this niche.\n\n\nCarperAI is focused on the democratization of RLHF and RLHF adjacent methods. They want every lab, both academia and industry, to be able to utilize RLHF in their research. 
In the past 8 months, CarperAI has grown to 13 full-time and over 40 volunteers, and released two major libraries: [trlX](https://github.com/CarperAI/trlx) and [OpenELM](https://github.com/CarperAI/OpenELM).\n\n\n### Other Organizations[#](#other-organizations)\n\n\n**The BigScience Research Workshop**: Stella and Jon got heavily involved in the BigScience Research Workshop in mid 2022. Beyond the research they did in BigScience, this had another major impact: meeting new collaborators (incl. Hailey Schoelkopf, Lintang Surawika, and Colin Raffel) that would become prominent contributors to EleutherAI projects in the future.\n\n\n**Stability AI**: Most people who had experience working on text-to-image modeling were scooped up by Stability. While Stability supports them releasing their work open source, and some like Katherine continued to collaborate with people in EleutherAI, this work gradually shifted text-to-image research out of our server.\n\n\n**OpenAI**: Ben Wang joined OpenAI to work on language model research. Leo Gao joined OpenAI to continue his work on his alignment research agenda. Leo continues to contribute to alignment discussions in EleutherAI.\n\n\n### Closing the Book on VQGAN-CLIP[#](#closing-the-book-on-vqgan-clip)\n\n\nWhile members of EleutherAI did research on a variety of topics during this period, there was very little that was being lead by EleutherAI as an organization. The major exception is that we finally [wrote a paper on VQGAN-CLIP](https://arxiv.org/abs/2204.08583), the original text-to-image synthesis and editing model we developed back in spring 2021. It was a tad late (by which we mean it has been used a billion times before we wrote a paper about it) but it would be eventually published at ECCV 2022.\n\n\nAugust 2022 and Beyond: Reorganization and Revitalization[#](#august-2022-and-beyond-reorganization-and-revitalization)\n-----------------------------------------------------------------------------------------------------------------------\n\n\nStella eventually reached the same conclusion that Connor did: EleutherAI was too chaotic and unstructured, and it’s extremely difficult to retain talent when you’re effectively teaching people skills that pay hundreds of thousands of dollars per year and then asking them to do the work for free. She had also gotten offers from some LLM companies, but didn’t think it would be right for her to stop working on open source AI. Instead, she spent the summer quietly working on a plan to reorganize and revitalize EleutherAI as a non-profit research institute.\n\n\nStella hypothesized that the core problem was getting people back to work, or more precisely, getting people to start doing work organized in the discord server. As we would later detail in a [NeurIPS Workshop paper on large ML collaborations](https://arxiv.org/abs/2210.06413), doing research in the public view has always been a crucial component of EleutherAI’s impact and marketing strategy. The reason to come to our discord server to talk about AI research was because it was a place where people could get unprecedented access to people doing cutting edge AI research, and AI researchers could interact with one another in an unstructured fashion. 
Even for people who didn’t participate in our research projects, being around the people who did was one of the major draws.\n\n\nTo jumpstart the process, she started the #interpretability-over-time channel for her Pythia project, talked Aran Komatsuzaki and Katherine Crowson into organizing ongoing research projects in the discord server (#improved-t5 and #k-diffusion respectively), and worked to help current and previous core contributors secure funding to work on EleutherAI as a part-time job.\n\n\nA great deal of this revitalizing activity was facilitated by Stability AI, which stepped in to lend critical support so that EleutherAI could continue to pursue its research objectives and revitalize itself. Slowly but surely, the plan worked: activity stepped up in EleutherAI, helped along as we began to better advertise our work via our new Twitter account and the #announcements channel.\n\n\n### Shifting Research Priorities[#](#shifting-research-priorities)\n\n\nThe past six months had shown that the world was rather fundamentally different from the way it was when we got started. When we released GPT-NeoX-20B in February 2022, it was the largest freely and publicly available language model in the world. By the end of 2022, GPT-NeoX-20B was in a three-way tie for sixth, and [the largest](https://arxiv.org/abs/2205.01068) matched the size of the original GPT-3 model.\n\n\nEleutherAI got into large-scale AI training because we felt that researchers needed to have hands-on access to technologies like GPT-3. When we got started, that meant that we had to train and release the models for people to use. Now, we are free to pursue the research we wanted to use these models to do in the first place, to study topics like interpretability, alignment, and learning dynamics. This was always the plan, but when we reconvened towards the end of 2022 it felt for the first time like the torch had been passed and that the world wasn’t going to be reliant on us training models to get access to them. We were now free to study whatever we wanted… the only question was, what is that?\n\n\n\n![](/images/blog/year-two/i-want-you-to-work-on-alignment.png#center)\n\n### NLP Research[#](#nlp-research)\n\n\nWhile we continued to do some NLP research, the contents and context of that research have changed substantially, focusing on building better scientific understandings of the functionality of language models and making non-SOTA models more useful and accessible to small-scale practitioners. These projects include:\n\n\n* The “Improved T5” project, which investigates scaling laws for encoder-decoder models, and how many of the new developments for decoder-only models can be adapted to encoder-decoder ones.\n* The PolyGlot Project, which trains small LLMs in a variety of languages. Currently, almost all billion+ parameter LMs are trained in one of three languages: English, Chinese, and “massively multilingual.” The Polyglot team trained and released Polyglot-Ko, a series of Korean language models that includes the most performant FOSS Korean language model in the world, and is now investigating “localized” multilingual models focused on South and South East Asian languages, Romance languages, and Nordic languages.\n* Jason Phang is working on hypermodels for LLMs for back-propagation-free model adaptation.\n\n\n### Interpretability Research[#](#interpretability-research)\n\n\nWith our new-found invigoration, much of our energy has gone to interpretability work. 
Our biggest release along these lines so far has been [Pythia](https://github.com/EleutherAI/pythia), a suite of language models ranging from 19M to 13B parameters that was designed to study how models develop over the course of training and how those patterns change as models are scaled, which we expect will yield important insights. This has been a pet project of Stella’s for close to a year now, and thanks to the hard work of Hailey Schoelkopf and Shivanshu Purohit it has finally come to fruition.\n\n\n[We released the models to the public](https://mobile.twitter.com/AiEleuther/status/1603755836136128514) on December 16th, and the positive response from the community has been immediate: within a week two papers had already cited them and a dozen people had reached out to express their excitement about the models. We have a paper on it under review, and are leveraging the suite to do more interpretability research. Currently, the bulk of this work is oriented towards exploring the causes of memorization, but we also have threads about understanding how models learn social biases from data and how pretraining frequencies influence model knowledge.\n\n\nWe’ve also seen a surge of interest in work on Eliciting Latent Knowledge (ELK). Early in 2022, Igor Ostrovsky, Nostalgebraist, and Stella developed the “Tuned Lens,” a variation on Nostalgebraist’s [Logit Lens](https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens). Unfortunately, due to a combination of Igor starting a company and others being busy, it languished for a bit until Nora Belrose picked it up again. Though she was originally tasked with carrying the original idea across the finish line, her work has blossomed into a collection of research projects working on problems such as extracting interpretable bases for LLMs and detecting “deceptive-like” behavior.\n\n\nTogether, the “interpreting across time” and “eliciting-latent-knowledge” projects have prompted a flurry of activity in interpretability research, with nearly a dozen people actively working on papers on the subject at the time of writing. The original channel quickly became overcrowded, so a new section of the discord server was set up with more channels and better organization.\n\n\n### Alignment Research[#](#alignment-research)\n\n\nReading groups and alignment-related discussions on EAI continued, but with many of our alignment researchers leaving to work on alignment at Conjecture and OpenAI, we were not in a good position to do alignment research outside of interpretability. With the revitalization of EAI, this started to change in the second half of 2022.\n\n\nSome alignment-related concerns that are particularly notable to some of our members are alignment failures related to [embedded agency](https://www.lesswrong.com/posts/p7x32SEt43ZMC9r7r/embedded-agents). For instance, in RL there is a classic diagram that is taught to everyone on day 1 that shows an agent interacting with an environment in a loop:\n\n\n\n![The canonical RL setup, but is it actually a good model of agents, especially once they get really smart and powerful?](/images/blog/year-two/rl.png#center) \nThe canonical RL setup, but is it actually a good model of agents, especially once they get really smart and powerful?\n\n\n\n\nNotably, it makes a few assumptions, like that reward comes from the environment, and that the agent is effectively separated from the environment. But this is not necessarily a good approximation of what would actually happen in a deployed system. 
The agent is part of the environment, and reward comes from somewhere inside that same environment. So speculative failure modes such as [wireheading](https://www.lesswrong.com/tag/wireheading) are back on the table. (See [these](https://www.alignmentforum.org/posts/jP9cKxqwqk2qQ6HiM/towards-deconfusing-wireheading-and-reward-maximization) [posts](https://www.alignmentforum.org/posts/REesy8nqvknFFKywm/clarifying-wireheading-terminology) for more of our thoughts on wireheading and its connection to embeddedness.)\n\n\nIn fact, the issue appears much broader than this, as in practice any mechanism for aligning AI systems, or correcting their behavior after they’ve been deployed is going to be part of the same environment as the agent, and prone to it’s interference.\n\n\nThis high level concern motivated the Alignment-Minetest project, which aims to modify the open source voxel engine [Minetest](https://www.minetest.net/) so that we can look for wireheading and other undesirable incentives in agents and world models.\n\n\nWe are also interested in working on more projects in directions like inner alignment and ELK. While we've done some theoretical work in these directions (see: [1](https://www.alignmentforum.org/posts/oijJc8Mu2jPNgpuvy/asot-some-thoughts-about-deceptive-mesaoptimization), [2](https://www.alignmentforum.org/posts/Jiy3n5KMsGGJ6NNYH/asot-some-thoughts-about-lm-monologue-limitations-and-elk), [3](https://www.alignmentforum.org/posts/kWTko53s2DqTeprjz/asot-observations-about-elk)), we would be excited to do more theoretically-grounded empirical work in these directions using language models.\n\n\nGoing into 2023, we’re planning to spin up more alignment and interpretability projects and become significantly more involved with the broader alignment community.\n\n\nReflections[#](#reflections)\n============================\n\n\nWe’ve asked some of our members to give their thoughts on EAI over the past year, here’s some of their responses:\n\n\n\n> \n> EleutherAI feels like a place where the only limits are just how big we can dream. The people I’ve met here are driven, scarily smart, and hypercompetent. More than anything, they’re united by an understanding of what defines good, lasting research that others should and do care about, and the ability to execute well on that understanding.\n> \n> \n> \n\n\n\n> \n> It feels like I fell backwards right into the best opportunity I’d ever get–I initially got the chance to be involved in Eleuther by pretty much being in just the right place at just the right time. People have been so welcoming and willing to share knowledge and research with me as long as I’m willing to learn, and I have indeed learned a ton in these past months. It’s been almost impossible to avoid absorbing the same enthusiasm I see here every day. I can’t wait to see what the future holds for EleutherAI, and can’t wait to keep repeatedly making the impossible happen.\n> \n> \n> \n\n\n-Hailey Schoelkopf\n\n\n\n> \n> \"Working at Eleuther AI on CARP genuinely gave me an experience in FOSS project management that I thought would be near impossible to achieve elsewhere. 
Being able to coordinate projects at scales that are unfathomable for most independent researchers let alone PhD students has opened my eyes to the kinds of projects that are achievable by randos on discord.\n> \n> \n> Came for the geese, stayed for the amazing collaborators.\"\n> -CARP team\n> \n> \n> \n\n\n\n> \n> \"I joined the EleutherAI discord server on my search for an open-sourced Github copilot, but what I found was much more than that!\n> \n> \n> \n\n\n\n> \n> EAI introduced me to the realm of transformers and the beautiful applications therein. From using TF-IDF and wordnets to being involved in state-of-the-art research, It has been a wild ride for me. And am lucky to have been guided by amazing folks along the way :) Can't wait to see what next year has in store for us!\"\n> \n> \n> \n\n\n-Orz\n\n\n\n> \n> \"As far as I can tell, EAI's primary virtue is to be wrong in interesting and useful ways, and to do so consistently. This virtue is also my best guess as to the simple algorithmic core of alignment research.\"\n> \n> \n> \n\n\n-Quintin Pope\n\n\n\n> \n> \"Sure, EAI is the best 'great filter' for AI research. A place where someone like me (a novice) can directly interact with AI experts, where new ideas are born in casual chats on a Tuesday night and papers made in a discord channels are published in big conferences. It is all that and more. But for most us, it is also a home. A place where creativity, critical thought, experimentation, along with weird memes and endless debates, is just another day in the week. A safe heaven.\"\n> \n> \n> \n\n\n-Gabriel Syme\n\n\n\n> \n> \"EleutherAI doesn't have it all, but it has a lot. It has AI alignment discussion, it has AI capabilities discussion, it has memes, it has community, it even has geese. So many of my favorite things. I've made several real friends on the server. Here's to making a few more in the next year.\"\n> \n> \n> \n\n\n-TurnTrout\n\n\n\n> \n> \"I think Eleuther is really important because it’s by far the most transparent organization working on AI interpretability and alignment today. In my view, too many alignment-minded folks tend to think that open source models and open publishing of results are bad for the world, either because they accelerate AGI timelines or because they increase the risk that AI advances could fall into “the wrong hands.” While I understand this view, I disagree with it pretty strongly, so I’m glad to see that Eleuther— which has been pro-openness from the very beginning— has shifted toward interpretability work recently. I first got involved with Eleuther after a friend told me about some of the research that was being done in the interpreting-across-time channel, and since then I’ve done a lot of work with the help of Eleuther compute and volunteers which I definitely couldn’t have done otherwise.\"\n> \n> \n> \n\n\n-Nora Belrose\n\n\n\n> \n> \"One of my disciplines that I'd picked up over the years was high-performance computing and networking, and that's how I segued into working primarily with machine learning. I first encountered EleutherAI's work when looking for alternatives to AI storytellers, and loaded GPT-Neo 2.7B on a NVidia Jetson AGX. One thing lead to another when I got involved with the Anlatan crew and their first product, NovelAI. Their product utilized EleutherAI models with custom inference code, and I got involved with the backend programming there. I was brought in to help with CoreWeave's InfiniBand infrastructure for EleutherAI's GPT-NeoX 20B training project. 
I was happy to help, as I am a big believer in open models. This further expanded into architecting and building the goose.ai scalable inference infrastructure for an Anlatan-CoreWeave joint venture. By working with EleutherAI on advancing the research in large models, it has lead to a very interesting career shift into machine learning infrastructure leveraging my HPC knowledge and systems programming. EleutherAI is a collection of some of the sharpest and nerdiest folks that I have the honor of meeting, knowing, and working with -- and I look forward to what the coming year will bring us.\"\n> \n> \n> \n\n\n-Wes Brown\n\n\nExtra Memes[#](#extra-memes)\n============================\n\n\n\n![](/images/blog/year-two/nvidia.png#center)\n\n\n![](/images/blog/year-two/goose-taste.png#center)\n\n\n![](/images/blog/year-two/can-we-unironically-just-train-our-own-gpt3.png#center)\n\n\n![](/images/blog/year-two/getting-into-ai.png#center)\n\n\n![](/images/blog/year-two/bitter-lesson.png#center)\n\n\n![](/images/blog/year-two/eai-ruin.png#center)\n\n\n![](/images/blog/year-two/final-decade.png#center)\n\n\n![](/images/blog/year-two/in-this-house.png#center)\n\n\n![](/images/blog/year-two/paperclip-spider-man.png#center)\n\n\n![](/images/blog/year-two/paperclip-cope.png#center)", "date_published": "2023-03-26T00:00:00Z", "authors": ["Stella Biderman", "Shivanshu Purohit", "Curtis Huebner", "Leo Gao", "Connor Leahy", "Eric Hallahan"], "summaries": []} +{"id": "695f9ad7a5a386b25f9a775752392ec5", "title": "The View from 30,000 Feet: Preface to the Second EleutherAI Retrospective", "url": "https://blog.eleuther.ai/year-two-preface/", "source": "eleuther.ai", "source_type": "blog", "text": "Over a year and a half have passed since EleutherAI's last retrospective, and a great deal of things have changed. [In the first year, what started off as a Discord server created by some TPU enthusiasts grew into a much larger and more vibrant community.](./year-one/) Since then, the EleutherAI collective has gone on to do many things, including becoming an inspirational launch point, stepping stone, and template for its members and many new organizations.\n\n\nGiven that we have so much to share in a second retrospective, we have condensed the important takeaways and announcements here. We look forward to sharing the full story soon!\n\n\nResearch[#](#research)\n----------------------\n\n\n\n\n\n\n![](https://pbs.twimg.com/media/FjTEtSVWYAA_Csy?format=jpg&name=small)\nEric Hallahan2021-10-06\n\n\n![](https://cdn.discordapp.com/attachments/747850033994662000/895439725253500928/unknown.png)\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029837418435723274/default_green.png)\nJason Phang2021-10-06\nI would like to set a new state-of-the-art for code achieving 80%+ ImageNet accuracy, using 278 tokens instead 280, beating Anonymous et al. 
(2021) while also including imports\n```\nfrom torch.nn import *\ndef c(h,d,k,p,n):S,C,A=Sequential,Conv2d,lambda x:S(x,GELU(),BatchNorm2d(h));R=type('',(S,),{'forward':lambda s,x:s[0](x)+x});return S(A(C(3,h,p,p)),*[S(R(A(C(h,h,k,1,k//2,1,h))),A(C(h,h,1))) for _ in [0]*d],AdaptiveAvgPool2d((1,1)),Flatten(),Linear(h,n))\n```\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829972422037564/bmk.png)\nbmk2021-10-06\nnew sota, 275 chars\n```\nfrom torch.nn import*\ndef c(h,d,k,p,n):S,C,A=Sequential,Conv2d,lambda x:S(x,GELU(),BatchNorm2d(h));R=type('',(S,),{'forward':lambda s,x:s[0](x)+x});return S(A(C(3,h,p,p)),*[S(R(A(C(h,h,k,1,k//2,1,h))),A(C(h,h,1)))for _ in[0]*d],AdaptiveAvgPool2d((1,1)),Flatten(),Linear(h,n))\n```\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029837418435723274/default_green.png)\nJason Phang2021-10-06\n\n\n![](https://cdn.discordapp.com/attachments/747850033994662000/895452438855839744/unknown.png)\n\n\n\nEAI setting SotA in real time\n\n\n\n\nEleutherAI members have authored 28 papers, trained dozens of models, and released 10 codebases in the past 18 months. Some notable highlights include:\n\n\n\n[GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745)\n\nThis paper discusses our work on our largest-to-date open-source LLM. At time of release last February, it became the largest and most performant open-source autoregressive language model.\n\n\n\n[VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance](https://arxiv.org/abs/2204.08583)\n\nIt took us about a year, but we finally wrote up our OG text-to-image work!\n\n\n\n[Multitask Prompted Training Enables Zero-shot Task Generalization](https://arxiv.org/abs/2110.08207)\n\nThis BigScience-lead paper introduced the T0 language model and jumpstarted interest in task-structured data.\n\n\n\n[EleutherAI: Going Beyond \"Open Science\" to \"Science in the Open\"](https://arxiv.org/abs/2210.06413)\n\nThis paper, written for the NeurIPS Broadening Research Collaborations Workshop in ML, details our experience doing open collaborative science and gives an inside look into our thinking on an organizational level.\n\n\n\n[OpenFold: Retraining AlphaFold2 Yields New Insights Into Its Learning Mechanisms and Capacity for Generalization](https://www.biorxiv.org/content/10.1101/2022.11.20.517210)\n\nEleutherAI played a minor role in this paper, mostly supporting the interpretability work, compute, and HPC knowledge. It is a paper we are very excited about though, and a demonstration of both very high-quality interpretability research and the impact that sponsoring relatively small-scale trainings can have.\n\n\n\n\nA full list of papers, models, and other research output from EleutherAI can be found [on our website](https://www.eleuther.ai/papers).\n\n\nNew Organizations[#](#new-organizations)\n----------------------------------------\n\n\nWhile some have started in our first year, during this most recent year we've seen many other similar organizations rise to prominence. For instance, [LAION](https://laion.ai/) has cranked out two massive image datasets and supported the development of the now-famous DALL-E Mini. Another example would be the [OpenBioML](https://openbioml.org/), which started off as a spinoff Discord server building on the AlphaFold2 replication work of Phil Wang (@lucidrains) and Eric Alcaide (@hypnopump) before becoming a hub for interaction between the open-source AI and BioML communities. 
We've also seen members start a plethora of smaller communities focused on individual projects, such as @BlinkDL's RWKV and Aran's data collection for work based on [Minerva](https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html).\n\n\nMost notably though, three groups of researchers have left EleutherAI to start their own organizations. EleutherAI founders Connor Leahy (@Connor) and Sid Black (@sid) are now the founders of a new alignment research organization called [Conjecture](https://www.conjecture.dev/), Louis Castricato has started a lab called [CarperAI](https://carper.ai/) that focuses on preference learning and RLHF, and Tanishq Abraham (@ilovescience) has started [MedARC](https://www.medarc.ai/), which focuses on biomedical applications of cutting edge AI technologies such as large language models.\n\n\nSpeaking of his leadership departure, Connor had the following to say:\n\n\n\n> \n> EleutherAI has been the experience of a lifetime. I'm very thankful to have been allowed to have gone on this amazing journey with all the amazing people I met along the way. It's been an honor and a privilege to shepherd EleutherAI through its earliest days to where we are now. But alas, I am now needed elsewhere.\n> \n> \n> AI is advancing rapidly, and the alignment problem is still far from being solved. If we want the future to go as amazingly as it can, we have huge challenges ahead of us that need addressing. EleutherAI has been invaluable in allowing me to gain the skills and friendships necessary to allow me to take the next steps, but with a heavy heart, those next steps are taking me someplace else.\n> \n> \n> \n> ![Don't worry, everything is normal.](https://cdn.discordapp.com/attachments/445631164871344139/1080656876515115118/image24.png) \n> Don't worry, everything is normal.\n> \n> \n> \n> \n> I will be officially stepping down as an organizer and leader of EleutherAI, along with my friend, colleague, and fellow EAI founder Sid, to focus my attention fully on [Conjecture](https://www.conjecture.dev/)), and on ensuring a better future for everyone. I will be handing full control and responsibility for EleutherAI to my trusted friends.\n> \n> \n> I cannot thank every single person at EleutherAI enough for everything we have created together, you are all truly wonderful, and I am certain our paths will cross again. I will still be around for chatting and advice, and the occasional late-night schizoposting hour. And if you want to follow along with what we're up to at Conjecture, be sure to follow [our posts on LessWrong](https://www.lesswrong.com/tag/conjecture-org) and [join our Lemma discord server](https://discord.gg/tezFMvnTe6), where we publicly test our tools and products.\n> \n> \n> So long, and thanks for all the memes!\n> \n> \n> \n> \n> \n> \n> **Connor Leahy (CEO, Conjecture)**\n> \n> \n\n\nThe EleutherAI Institute[#](#the-eleutherai-institute)\n------------------------------------------------------\n\n\nLast, but certainly not least, the newest organization to come out of our Discord server is… EleutherAI itself.\n\n\nIn the past two-and-a-half years, we have accomplished some amazing things, and the world has taken note. Despite this success, many of our core members have moved on to jobs elsewhere or started their own companies and organizations. 
It has become abundantly clear that the biggest blocker in what we could be accomplishing is the fact that working a forty-hour workweek and doing cutting-edge AI research on the side is unsustainable for most contributors. Therefore, we are thrilled to announce that we are forming a non-profit research institute, and we are excited to be able to say that over twenty of our regular contributors are now working full-time doing research.\n\n\nThe EleutherAI Discord server will remain true to the values it has had since its creation, and we have no intention of restricting access or hiding our research from the public. As Stella Biderman frequently describes it, EleutherAI is like a research institute with open doors. One where anyone can wander in, listen in on meetings, and even chime in if they want. That is the model we intend to keep, and we are excited to be able to continue to demonstrate the value of open research.\n\n\nTimes have changed significantly since EleutherAI was founded, and there is substantially more interest in training and releasing LLMs than there once was. EleutherAI entered into large-scale AI training because we felt that researchers like ourselves needed to have hands-on access to technologies like GPT-3, and back then that meant that we had to train and release the models for people to use. Thanks to those efforts, we are free to pursue the research we wanted to use these models for to begin with—studying topics like interpretability, alignment, and learning dynamics of large transformer models. For 2023, we are also planning to spin up more alignment and interpretability projects, and become significantly more involved with the broader alignment community.\n\n\nOur new organization, funded by a mix of charitable donations and grants, will be run by Stella Biderman (@StellaAthena), Curtis Huebner (@AI\\_WAIFU), and Shivanshu Purohit (@triggerhappygandi), with guidance from a board of directors which will include EleutherAI co-founder Connor Leahy and UNC's Colin Raffel.\n\n\nIf you have any questions about the EleutherAI Institute or are interested in making a charitable donation, please reach out to [contact@eleuther.ai](mailto:contact@eleuther.ai).", "date_published": "2023-03-02T00:00:00Z", "authors": ["Stella Biderman", "Curtis Huebner", "Connor Leahy", "Eric Hallahan"], "summaries": []} +{"id": "32daf83dac9097bb6df72eff111837a0", "title": "Announcing GPT-NeoX-20B", "url": "https://blog.eleuther.ai/announcing-20b/", "source": "eleuther.ai", "source_type": "blog", "text": "**As of February 9, 2022, GPT-NeoX-20B checkpoints are available for [download from The Eye](https://the-eye.eu/public/AI/models/GPT-NeoX-20B) under Apache 2.0.** More in-depth information on GPT-NeoX-20B can be found in the [associated technical report on arXiv](https://arxiv.org/abs/2204.06745).\n\n\nLooking for a demo? 
Try GPT-NeoX-20B via CoreWeave and Anlatan's inference service, [GooseAI](https://goose.ai/ \"We're dead serious, that is actually what it is called.\")!\n\n\n\n\n---\n\n\nAfter a year-long odyssey through months of chip shortage-induced shipping delays, technical trials and tribulations, and aggressively boring debugging, we are happy to finally announce EleutherAI's latest open-source language model: GPT-NeoX-20B, a 20 billion parameter model trained using our [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) framework on GPUs generously provided by our friends at [CoreWeave](https://www.coreweave.com/).\n\n\nGPT-NeoX-20B is, to our knowledge, the largest publicly accessible pretrained general-purpose autoregressive language model, and we expect it to perform well on many tasks.\n\n\nWe hope that the increased accessibility of models of this size will aid in [research towards the safe use of AI systems](https://blog.eleuther.ai/why-release-a-large-language-model/), and encourage anyone interested in working in this direction to reach out to us.\n\n\nAs a thank you to our generous compute donors, we are delaying the public downloadable release of the model by 7 days. On February 9, 2022, the full model weights will be downloadable for free under a permissive Apache 2.0 license from The Eye.\n\n\nThere will be a #20b channel set up in our Discord for discussions of this model. Please note that much like our other language models and codebases, GPT-NeoX and GPT-NeoX-20B are very much research artifacts and we *do not recommend deploying either in a production setting without careful consideration*. In particular, we strongly encourage those looking to use GPT-NeoX-20B to read the [paper](https://arxiv.org/abs/2101.00027) and [datasheet](https://arxiv.org/abs/2201.07311) on our training data. 
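\n\n\nIf you would rather load the weights through the Hugging Face `transformers` library (the checkpoints are also available on the Hugging Face Hub as `EleutherAI/gpt-neox-20b`), a minimal sampling sketch, assuming roughly 40 GB of GPU memory for half-precision inference, looks something like this:\n\n\n```\nimport torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')\nmodel = AutoModelForCausalLM.from_pretrained(\n    'EleutherAI/gpt-neox-20b',\n    torch_dtype=torch.float16,  # ~40 GB of weights in half precision\n    device_map='auto',          # needs the accelerate package; spreads layers across available GPUs\n)\n\ninputs = tokenizer('EleutherAI is', return_tensors='pt').to(model.device)\noutputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)\nprint(tokenizer.decode(outputs[0]))\n```\n\n\n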
There are still bugs to be ironed out and many inefficiencies that could be addressed---but hey, we do this in our free time, give us a break lol\n\n\n\n\n---\n\n\n\n\n\n| Task | Category | Babbage | Curie | GPT-J-6B | FairSeq-13B | GPT-NeoX-20B | DaVinci |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| LAMBADA | Sentence Completion | 62.49% | 69.51% | 68.29% | 70.95% | 72.00% | 75.16% |\n| ANLI R1 | Natural Language Inference | 32.40% | 32.80% | 32.40% | 34.00% | 34.00% | 36.30% |\n| ANLI R2 | Natural Language Inference | 30.90% | 33.50% | 34.00% | 33.00% | 34.40% | 37.00% |\n| ANLI R3 | Natural Language Inference | 33.75% | 35.50% | 35.50% | 34.75% | 35.40% | 36.83% |\n| WSC | Coreference Resolution | 54.54% | 49.54% | 49.54% | 55.44% | 50.00% | 59.18% |\n| WinoGrande | Coreference Resolution | 59.51% | 64.56% | 64.01% | 67.40% | 66.10% | 69.93% |\n| HellaSwag | Sentence Completion | 40.38% | 54.81% | 36.53% | 57.69% | 53.50% | 63.46% |\n| Average | | 44.85% | 48.60% | 45.75% | 50.43% | 49.34% | 53.98% |\n\n\n\nAccuracy on standard language modeling tasks.\n\n\n\n\n\n\n\n| Subject Group | Babbage | Curie | GPT-J-6B | FairSeq-13B | GPT-NeoX-20B | DaVinci |\n| --- | --- | --- | --- | --- | --- | --- |\n| Humanities | 27.01% | 26.48% | 28.07% | 27.27% | 28.70% | 32.30% |\n| Social Science | 27.94% | 29.24% | 28.73% | 27.94% | 30.80% | 35.87% |\n| STEM | 25.83% | 24.25% | 25.71% | 24.63% | 27.20% | 28.60% |\n| Other | 26.86% | 28.84% | 27.95% | 27.33% | 29.20% | 36.85% |\n| Average | 26.91% | 27.20% | 27.62% | 26.79% | 28.98% | 33.41% |\n\n\n\nZero-shot accuracy of factual knowledge by subject group, as measured by the [HendrycksTest](https://arxiv.org/abs/2009.03300) evaluation.", "date_published": "2022-02-02T00:00:00Z", "authors": ["Connor Leahy"], "summaries": []} +{"id": "8dd2426ff503f41d433c87a41d55295e", "title": "A Preliminary Exploration into Factored Cognition with Language Models", "url": "https://blog.eleuther.ai/factored-cognition/", "source": "eleuther.ai", "source_type": "blog", "text": "We perform a series of experiments using GPT-3 with decomposition to\nperform complex toy tasks that it is otherwise unable to solve. The goal\nof these experiments is to provide some preliminary evidence for the\nviability of factored cognition in real world models.\n\n\nFor our synthetic task, we chose a series of various arithmetic tasks.\nAside from the ease of generating examples, another advantage of\narithmetic related task settings is GPT-3's inability to perform\neven simple mathematical operations. While there is evidence to suggest\nthat this may be due to the peculiarities of the BPE encoding,\nthese fixes are very narrowly domain specific to mathematics and not\ngeneralizable to the kinds of tasks we would like general aligned AI\nsystems to be able to perform. Instead, we show that by decomposing the\ntask, we can achieve a major improvement on this task. 
Intuitively, just\nas most humans are unable to perform complex, multi-step calculations\nwithout pen and paper, it is also reasonable to expect that language\nmodels would struggle with directly computing the answers to these\nproblems, especially given the fixed amount of computation that\nTransformers are able to harness in one step.\n\n\n### Nested Expression Evaluation[#](#nestedeval)\n\n\n\n\n```\n\r\n**((9 \\* 2 + 7) \\* 1 - 1) =** ((18 + 7) * 1 - 1) = (25 * 1 - 1) = (25 - 1) = 24\r\n**(9 + (1 - 3) \\* 7) \\* 6 =** (9 + -2 * 7) * 6 = (9 + -14) * 6 = -5 * 6 = -30\r\n**(2 + (5 - 1) \\* 8) \\* 9 =** (2 + 4 * 8) * 9 = (2 + 32) * 9 = 34 * 9 = 306\r\n**2 \\* (4 - 1 \\* (2 - 4)) =** 2 * (4 - 1 * -2) = 2 * (4 - -2) = 2 * 6 = 12\r\n**(6 \\* (5 \\* 2 - 3) - 9) =** (6 * (10 - 3) - 9) = (6 * 7 - 9) = (42 - 9) = 33 \r\n\n```\n\n\nExample Nested Expression Evaluation prompt. **Bold** indicates prompt. Underline indicates answer extracted from response.\n\n\n\n\nWe explore decomposing a task into a series of steps, without any\nbranching. The main advantages of using nested expressions are that they\nare easy to generate and automatically evaluate, and they give naturally\nto stepwise decomposition, as the expression must be evaluated from the\ninside out.\n\n\nTo generate the arithmetic expression, we recursively generate a number\nof nested operations, alternating between multiplication and\naddition/subtraction, such that to evaluate the expression at least\n`depth` operations must be carried out in series in the correct order.\nIn terms of difficulty, the `depth` $= 1$ task corresponds roughly to\nthe \"One-digit composite (1DC)\" task in the GPT3 paper, though it is slightly\nmore difficult because the order of multiplication and addition are not\nguaranteed to be the same.\n\n\nWe consider three different settings for this experiment:\n\n\n\n`multistep`\n\nThe model is given fewshot prompts in which each step\nof the evaluation is provided. The model is given only the problem\nand asked to generate as many intermediate steps as it would like,\ntruncated at a limit of 256 tokens. The value after the last `=` is\ntaken to be the model's answer.\n\n\n\n`direct`\n\nThe model is given fewshot prompts in which only the\nproblem and solution are provided. Only one value is generated from\nthe model.\n\n\n\n`direct_padded`\n\nTo control for the fact that the direct setting\nis significantly longer than the multistep setting and thus requires\nmore forward passes during which more computation could\ntheoretically take place, we also consider the setting where we add\nspace tokens before the `=` to pad it to the same length as in the\nmultistep setting\n\n\n\n\nIn each experiment, we do few-shot with $k = 10$.\n\n\nFor the multistep setting, in addition to accuracy, we also compute the\nfollowing metrics, which are not applicable to either direct setting, to\nprovide additional insight into the failure modes:\n\n\n\nMalformed\n\nThis metric indicates the proportion of model\nresponses that are malformed. A model response is considered\nmalformed if it does not contain the correct number of intermediate\nsteps, or if any of the intermediate steps fails to parse; very\noften, this is due to unbalanced brackets. A malformed response\nindicates a catastrophic failure.\n\n\n\nMistakes\n\nThis metric indicates the proportion of steps that are\nincorrect. 
We observed that often, a single mistake will be made in\none step, and that error will be propagated to the final answer\ndespite only one mistake being made. We included this metric to\ndistinguish between models which make many errors to arrive at\nincorrect answers and models which only make few errors. Malformed\nresponses are not considered for the mistakes metric.\n\n\n\n\n\n![Accuracy of GPT-3 on evaluating trees of functions of various\ncomplexities (depths). By factoring the problem, GPT-3 is able to\nachieve significantly greater reliability on a problem that it is unable\nto tackle\ndirectly.](/images/blog/factored-cognition/branching_davinci_barplot_depth_acc.png)\n\nAccuracy of GPT-3 on evaluating trees of functions of various complexities (depths). By factoring the problem, GPT-3 is able to achieve significantly greater reliability on a problem that it is unable to tackle directly.\n\n\n\n\n\n![Accuracy of GPT-3 davinci on arithmetic problems of various amounts\nof nesting.](/images/blog/factored-cognition/davinci_barplot_depth_acc.png)\n\nAccuracy of GPT-3 davinci on arithmetic problems of various amounts of nesting.\n\n\n\n\n\n\n\n| Depth | Accuracy (`multistep`) | Accuracy (`direct`) | Accuracy (`direct_padded`) | Mistakes (`multistep`) | Malformed (`multistep`) |\n| --- | --- | --- | --- | --- | --- |\n| 1 | 66.35 | 15.40 | 13.90 | 33.78 | 4.50 |\n| 2 | 39.45 | 4.40 | 2.75 | 59.28 | 25.75 |\n| 3 | 17.20 | 1.60 | 1.45 | 79.61 | 46.25 |\n\n\n\nMetrics for the Nested Expression Evaluation experiment.\n\n\n\n\nThe direct\_padded setting consistently performs significantly worse than\nthe direct setting, and so we do not analyze it in detail.\n\n\nAs a sanity check, our depth=1 direct accuracy is 5.9 percentage points\nlower than the 1DC accuracy of 21.3 in the GPT-3 paper. This difference is\nexplained by the use of k=100 for multishot in the GPT-3 paper versus k=10 in this\npaper, and the minor differences in the task.\n\n\nAs depth increases, the number of mistakes made increases. One major\nproblem with factored cognition is that every step must be performed\ncorrectly for the final answer to be valid, and the increase in mistakes\nas problem complexity increases suggests that each individual step has\nbecome too complex for the model to handle.\n\n\nDespite starting out almost negligible, the number of malformed outputs\nalso increases. These failures are predominantly due to formatting\nproblems like unbalanced brackets, due to the increased layers of\nnesting in the higher depth settings. 
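\n\n\nTo make the malformed and mistakes checks concrete: a multistep response is just a chain of expressions joined by `=`, so grading it can be as simple as the following sketch (illustrative only, one way of operationalizing the metrics described above rather than the exact evaluation code used for these experiments):\n\n\n```\ndef grade_multistep(response, expected_steps, true_answer):\n    '''Rough sketch of the malformed / mistakes / accuracy checks.'''\n    steps = [s.strip() for s in response.split('=')]\n    if len(steps) != expected_steps:      # wrong number of intermediate steps\n        return {'malformed': True}\n    values = []\n    for step in steps:\n        try:\n            values.append(eval(step))     # trusted input: digits, brackets, + - * only\n        except Exception:                 # e.g. unbalanced brackets fail to parse\n            return {'malformed': True}\n    # Count a step as a mistake if its value disagrees with the step before it.\n    mistakes = sum(a != b for a, b in zip(values, values[1:]))\n    return {'malformed': False,\n            'mistakes': mistakes / max(len(values) - 1, 1),\n            'accurate': values[-1] == true_answer}\n\n# The first worked example above, graded against its true answer of 24:\nprint(grade_multistep('((18 + 7) * 1 - 1) = (25 * 1 - 1) = (25 - 1) = 24', 4, 24))\n```\n\n\n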
In addition to the complexity of\nthe step being too large, one other explanation is that the few-shot\nprompt was insufficient in specifying the task to the model.\n\n\n### Branched Nested Function Evaluation[#](#nestedbranchedeval)\n\n\n\n\n```\n**Today we will be looking at evaluating functions.\r\n\r\nf(x) = 1 \\* (x + -3) + 1 \\* (x + -2)\r\ng(x) = 2 \\* f(x + 1) + 5 \\* f(x + -3)\r\nh(x) = 5 \\* g(x + 2) + 2 \\* g(x + 2)\r\n\r\nTo calculate the value of g(5), we first need to calculate** f(5 + -2) and f(5 + 2), and since 5 + -2 = 3 and 5 + 2 = 7, they are equal to f(3) and f(7), respectively.\r\n**To calculate the value of h(-9), we first need to calculate** g(-9 + -3) and g(-9 + 1), and since -9 + -3 = -12 and -9 + 1 = -8, they are equal to g(-12) and g(-8), respectively.\r\n**To calculate the value of g(4), we first need to calculate** f(4 + -2) and f(4 + 2), and since 4 + -2 = 2 and 4 + 2 = 6, they are equal to f(2) and f(6), respectively.\r\n**To calculate the value of h(-1), we first need to calculate** g(-1 + -3) and g(-1 + 1), and since -1 + -3 = -4 and -1 + 1 = 0, they are equal to g(-4) and g(0), respectively.\r\n**To calculate the value of g(-3), we first need to calculate** f(-3 + -2) and f(-3 + 2), and since -3 + -2 = -5 and -3 + 2 = -1, they are equal to f(-5) and f(-1), respectively.\n```\n\n\nExample Branched Nested Function Evaluation Decomposition prompt\n\n\n\n\n\n\n```\n**Today we will be looking at evaluating functions.\r\n\r\nf(x) = 4 \\* (x + -1) + 4 \\* (x + 2)\r\ng(x) = 5 \\* f(x + 2) + 2 \\* f(x + -1)\r\nh(x) = 1 \\* g(x + 3) + 2 \\* g(x + -3)\r\n\r\nGiven that f(5) = 5 and f(2) = 20, we can compute that g(3) =** 5 * f(3 + 2) + 2 * f(3 + -1) = 5 * f(5) + 2 * f(2) = 5 * 44 + 2 * 20 = 220 + 40 = 260\r\n**Given that f(-5) = -5 and f(-8) = -60, we can compute that g(-7) =** 5 * f(-7 + 2) + 2 * f(-7 + -1) = 5 * f(-5) + 2 * f(-8) = 5 * -36 + 2 * -60 = -180 + -120 = -300\r\n**We can compute that f(9) =** 4 * (9 + -1) + 4 * (9 + 2) = 4 * 8 + 4 * 11 = 35 + 38 = 76\r\n**Given that g(4) = 36 and g(-2) = -20, we can compute that h(1) =** 1 * g(1 + 3) + 2 * g(1 + -3) = 1 * g(4) + 2 * g(-2) = 1 * 316 + 2 * -20 = 316 + -40 = 276\r\n**We can compute that f(6) =** 4 * (6 + -1) + 4 * (6 + 2) = 4 * 5 + 4 * 8 = 23 + 26 = 52\n```\n\n\nExample Branched Nested Function Evaluation Recombination prompt\n\n\n\n\nTo explore a setting with a very high amount of branching, we explore a\nsetting where a series of functions are defined, each in terms of\nmultiple copies of the last. This task increases exponentially in\ncomplexity whereas the non-branched task increased linearly in\ncomplexity. We use GPT-3 to perform both the decomposition (which\nentails making further queries given a function to evaluate) and\ncombination (given a query and computed function values, compute the\nresult to the query).\n\n\nThere are two types of steps: decomposition and recombination. To\nevaluate a function, first, a decomposition step is used to obtain the\ndependencies of the expression, if any, and then this information is\ncombined in the recombination step. On decomposition steps, the model is\ngiven a function call and asked to return a list of function calls that\nmust first be computed before the original call can be evaluated. 
On\nrecombination steps, the values computed in the corresponding\ndecomposition step are provided in the context along with the function\nto evaluate, and the model is asked to evaluate the original function.\n\n\nTo improve the accuracy, whenever a multistep calculation is needed, the\nNested Expression Evaluation technique is used as well.\n\n\n\n\n\n| Depth | Accuracy (`factored`) | Accuracy (`direct`) |\n| --- | --- | --- |\n| 1 | 22.0 | 2.8 |\n| 2 | 5.7 | 0.6 |\n| 3 | 0.9 | 0.0 |\n\n\n\nMetrics for the Branched Nested Function Evaluation experiment.\n\n\n\n\nThe factored accuracy on the branching case is lower than on the nested\ncase, and decays far quicker: while in the nested case depth 2 retains more\nthan half the accuracy of depth 1, in the branched case depth 2 retains less\nthan a third of the accuracy of depth 1. Nonetheless, factored evaluation still\noutperforms direct evaluation at every depth, which shows that the factoring\napproach also works for tasks with large amounts of branching.\n\n\n### Open Ended Math Problem Evaluation[#](#open-ended-math-problem-evaluation)\n\n\nSince the previous experiments focused on very simple mathematical\ntasks, we also conducted an experiment using significantly more\ndifficult math problems from MathQA.\n\n\nThe MathQA dataset, which is derived in part from the AQuARAT dataset,\ncontains math word problems and rationales in three different formats.\nOne is directly taken from AQuARAT and is freeform text written by a\nhuman---however, the text is extremely noisy and oftentimes\ninconsistent, incomplete, or inaccurate. The other two are unique to\nMathQA and consist of cleaner, systematized versions of the solution\nexplanations to the problems expressed in terms of functions. In\nparticular, one is the typical $f(x)$ notation with the function on the\nleft and thus composition written from right to left; as this poses a\nproblem for left-to-right autoregressive models, MathQA also includes an\nalternate version of the explanations such that it is written\nleft-to-right entirely in the order that the functions are evaluated.\nMathQA does, however, still inherit some of the noisiness of AQuARAT, such\nas strange formatting. We run experiments for each of these three types\nof rationales.\n\n\n\n\n\n| Fewshot $k$ | Steps | Accuracy |\n| --- | --- | --- |\n| 0 | 1 | 21.81 |\n| 5 | 1 | 20.77 |\n| 5 | 2 | 22.48 |\n\n\n\nMetrics for the Open Ended Math Problem Evaluation experiment.\n\n\n\n\nSince MathQA is a multiple choice task with 5 possible choices, the\nrandom baseline score is 20%.\n\n\nWe found that GPT-3 is not able to solve problems from this dataset\neither with or without rationales in few-shot examples. We speculate\nthat this dataset is hard for GPT-3 because of the noisiness of the task\nand because there are still large conceptual leaps required to come up\nwith the necessary computations from the word problem that are not\nincluded in the rationales.\n\n\nIn MathQA problems, the individual steps are more difficult, often\nrequiring not only arithmetic but application of relations like\n$d = vt$, and those steps are often only implicit in the rationales.\nThis means that even if the language model imitates the demonstrated\nrationales, nontrivial computations must still happen within a single\nfeedforward pass, which transformers are known to struggle with. 
MathQA\nproblems are also more difficult than the other tasks we tested in that\nthe procedures to solve them are not constrained to a single algorithm.", "date_published": "2021-10-25T00:00:00Z", "authors": ["Leo Gao", "Kyle McDonell", "Laria Reynolds", "Stella Biderman"], "summaries": []} +{"id": "91d199fd7077522faaaea8042a7b9e47", "title": "Multiple Choice Normalization in LM Evaluation", "url": "https://blog.eleuther.ai/multiple-choice-normalization/", "source": "eleuther.ai", "source_type": "blog", "text": "Let $x\\_{0:m}$ be the prompt, and $x\\_{m:n\\_i}$ be the $i$th possible continuation with a token length of $n\\_i - m$. There are several ways to use a language model to rank multiple possible continuations to a prompt. Since the language model only gives (log) probabilities for the next token given the context (i.e $\\log \\mathbb P(x\\_i|x\\_{0:i})$), there is ambiguity in handling scoring for arbitrary continuations. The following are several possible ways to resolve this problem:\n\n\n* Unnormalized: The score of continuation $i$ is determined using $\\sum\\_{j=m}^{n\\_i - 1} \\log \\mathbb P(x\\_j|x\\_{0:j})$. Intuitively, this is the probability of a generation sampled from the prompt containing the continuation in question. While this is the simplest method, problems arise when there are significant differences in length between different continuations, as longer continuations tend to have lower log probabilities, thus biasing the language model towards picking shorter continuations. This approach is used by [eval harness](https://github.com/EleutherAI/lm-evaluation-harness) in all multiple choice tasks and presented as `acc`.\n* Token-length normalized: The score of continuation $i$ is determined using $\\sum\\_{j=m}^{n\\_i - 1} \\log \\mathbb P(x\\_j|x\\_{0:j}) / (n\\_i - m)$. This approach attempts to normalize for length by computing average log probability per token; however, this approach is not tokenization agnostic, and as such two models with different tokenization that assign the same log likelihood to every single input string will have different token-length normalized scores. This approach is used by [GPT-3](https://arxiv.org/abs/2005.14165) in most tasks. Eval harness does not report this score because it violates the design principle that all tasks should be tokenization independent.\n* Byte-length normalized: The score of continuation $i$ is determined using $\\sum\\_{j=m}^{n\\_i - 1} \\log \\mathbb P(x\\_j|x\\_{0:j}) / \\sum\\_{j=m}^{n\\_i - 1} L\\_{x\\_j}$, where $L\\_{x\\_j}$ is the number of bytes represented by the token $x\\_j$. This approach attempts to normalize for length by computing average log probability per character, which ensures that it is tokenization agnostic. This approach is also used by eval harness in all multiple choice tasks and presented as `acc_norm`.\n* Unconditional likelihood normalized: The score of continuation $i$ is determined using $\\sum\\_{j=m}^{n\\_i - 1} \\log \\mathbb P(x\\_j|x\\_{0:j}) - \\log \\mathbb P(x\\_j)$. Intuitively, this approach measures the amount that the prompt increases the model's probability of outputting each continuation from the probability of the model unconditionally producing that continuation. 
This approach is used by GPT-3 in select tasks (ARC, OpenBookQA, and RACE), though no justification for why only these tasks in particular use this method is provided other than that this improves performance.\n\n\nThe unnormalized, token-length normalized, and byte-length normalized metrics can be computed without additional LM calls. The unconditional likelihood normalized metric requires an additional LM call to obtain the unconditional likelihood.", "date_published": "2021-10-11T00:00:00Z", "authors": ["Leo Gao"], "summaries": []} +{"id": "f587a05daae403c619605caa7325be26", "title": "Downstream Evaluations of Rotary Position Embeddings", "url": "https://blog.eleuther.ai/rotary-embeddings-eval-harness/", "source": "eleuther.ai", "source_type": "blog", "text": "A head-to-head comparison of [Rotary Position Embedding](https://arxiv.org/abs/2104.09864) and [GPT-style learned position embeddings](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf). Both 1.3B models were trained for 100k steps on [the Pile](https://pile.eleuther.ai) using [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax). There isn't a very strong trend, but hopefully someone will find these results useful regardless.\n\n\n\n\n\n| Task | Metric | Learned | Rotary |\n| --- | --- | --- | --- |\n| lambada | ppl | 7.940 ± 0.208 | **7.156 ± 0.208** |\n| | acc | 0.556 ± 0.007 | 0.567 ± 0.007 |\n| piqa | acc | 0.700 ± 0.011 | 0.714 ± 0.011 |\n| | acc\\_norm | 0.693 ± 0.011 | 0.709 ± 0.011 |\n| hellaswag | acc | 0.376 ± 0.005 | **0.389 ± 0.005** |\n| | acc\\_norm | 0.472 ± 0.005 | **0.488 ± 0.005** |\n| winogrande | acc | 0.540 ± 0.014 | 0.571 ± 0.014 |\n| mathqa | acc | 0.231 ± 0.008 | 0.230 ± 0.008 |\n| | acc\\_norm | 0.234 ± 0.008 | 0.227 ± 0.008 |\n| pubmedqa | acc | 0.599 ± 0.015 | 0.583 ± 0.015 |\n| boolq | acc | 0.575 ± 0.009 | **0.614 ± 0.009** |\n| anli\\_r3 | acc | 0.344 ± 0.014 | 0.351 ± 0.014 |\n| openbookqa | acc | 0.198 ± 0.018 | 0.206 ± 0.018 |\n| | acc\\_norm | 0.316 ± 0.021 | 0.330 ± 0.021 |\n| triviaqa | acc | **0.041 ± 0.002** | 0.026 ± 0.002 |\n| arc\\_challenge | acc | 0.235 ± 0.012 | 0.230 ± 0.012 |\n| | acc\\_norm | 0.260 ± 0.013 | 0.272 ± 0.013 |\n| arc\\_easy | acc | 0.564 ± 0.010 | 0.568 ± 0.010 |\n| | acc\\_norm | 0.505 ± 0.010 | 0.486 ± 0.010 |\n| cb | acc | 0.375 ± 0.065 | 0.357 ± 0.065 |\n| cola | mcc | 0.042 ± 0.034 | 0.022 ± 0.034 |\n| copa | acc | 0.730 ± 0.044 | 0.730 ± 0.044 |\n| ethics\\_cm | acc | 0.491 ± 0.008 | 0.480 ± 0.008 |\n| ethics\\_deontology | acc | 0.497 ± 0.008 | 0.497 ± 0.008 |\n| ethics\\_justice | acc | 0.501 ± 0.010 | 0.501 ± 0.010 |\n| ethics\\_utilitarianism | acc | 0.497 ± 0.007 | 0.493 ± 0.007 |\n| ethics\\_virtue | acc | 0.200 ± 0.006 | 0.200 ± 0.006 |\n| headqa | acc | 0.227 ± 0.008 | 0.224 ± 0.008 |\n| | acc\\_norm | 0.270 ± 0.008 | 0.271 ± 0.008 |\n| logiqa | acc | 0.221 ± 0.016 | 0.215 ± 0.016 |\n| | acc\\_norm | 0.293 ± 0.018 | 0.283 ± 0.018 |\n| mnli | acc | 0.344 ± 0.005 | 0.344 ± 0.005 |\n| mnli\\_mismatched | acc | 0.345 ± 0.005 | 0.349 ± 0.005 |\n| mrpc | acc | 0.684 ± 0.023 | 0.684 ± 0.023 |\n| | f1 | 0.812 ± 0.017 | 0.812 ± 0.017 |\n| qa4mre\\_2011 | acc | 0.392 ± 0.045 | 0.358 ± 0.045 |\n| | acc\\_norm | 0.450 ± 0.045 | 0.433 ± 0.045 |\n| qa4mre\\_2012 | acc | 0.287 ± 0.036 | 0.312 ± 0.036 |\n| | acc\\_norm | 0.394 ± 0.039 | 0.400 ± 0.039 |\n| qa4mre\\_2013 | acc | 0.335 ± 0.028 | 0.335 ± 0.028 |\n| | acc\\_norm | 0.352 ± 0.028 | 0.349 ± 0.028 |\n| qnli | acc | 0.498 ± 
0.007 | **0.517 ± 0.007** |\n| qqp | acc | 0.370 ± 0.002 | 0.368 ± 0.002 |\n| | f1 | 0.538 ± 0.003 | 0.538 ± 0.003 |\n| race | acc | 0.345 ± 0.015 | 0.343 ± 0.015 |\n| record | f1 | 0.805 ± 0.004 | 0.813 ± 0.004 |\n| | em | 0.797 ± 0.004 | 0.805 ± 0.004 |\n| rte | acc | 0.538 ± 0.030 | 0.523 ± 0.030 |\n| sciq | acc | 0.867 ± 0.011 | 0.865 ± 0.011 |\n| | acc\\_norm | 0.796 ± 0.013 | 0.771 ± 0.013 |\n| sst | acc | **0.572 ± 0.017** | 0.519 ± 0.017 |\n| webqs | acc | **0.021 ± 0.003** | 0.006 ± 0.003 |\n| wic | acc | 0.500 ± 0.020 | 0.498 ± 0.020 |\n| wnli | acc | 0.437 ± 0.059 | 0.549 ± 0.059 |\n| wsc | acc | 0.365 ± 0.047 | 0.365 ± 0.047 |\n| wsc273 | acc | 0.722 ± 0.027 | 0.736 ± 0.027 |", "date_published": "2021-08-16T00:00:00Z", "authors": ["Leo Gao"], "summaries": []} +{"id": "72af0bafbd02ac5741187e4b5bd5d9e3", "title": "What A Long, Strange Trip It's Been: EleutherAI One Year Retrospective", "url": "https://blog.eleuther.ai/year-one/", "source": "eleuther.ai", "source_type": "blog", "text": "It’s been a weird year.\n\n\nThat’s hardly a rare sentiment for obvious reasons, but it was definitely an especially weird year for a small group of AI enthusiasts on a niche discord server. It’s been one year since the founding of EleutherAI (back then still called LibreAI). It both feels like so much longer and like it was just yesterday.\n\n\nWe are humbled and exhilarated by just how far our little hobby project has come. What started as a small little tinker project for a few bored nerds has grown into a vibrant community and open source project more successful than we could have ever imagined.\n\n\nWe worked hard, we struggled, but more than anything, we had fun, a lot of fun. And we thought it would be fun to share some of our history and best ~~memes~~ moments from this weird, weird year for your entertainment.\n\n\nThis post is a light-hearted trip down memory lane, and a look ahead for what might come next. We had a blast and we changed the world, we hope for the better.\n\n\nFrom Humble Beginnings[#](#from-humble-beginnings)\n--------------------------------------------------\n\n\n*The year is 2020, OpenAI has unveiled GPT-3, and the entire ML world is unimpressed. Well, not entirely... One small group of indomitable nerds still holds out against the scaling-deniers. And life is not easy for the symbolicist who garrison the fortified echochambers of Twitter...*\n\n\nOne day, on [Shawn Presser's](https://twitter.com/theshawwn) [Discord server](https://discord.com/invite/x52Xz3y), [one man](https://twitter.com/NPCollapse) with a history of [getting into trouble](https://towardsdatascience.com/gpt2-counting-consciousness-and-the-curious-hacker-323c6639a3a8) [building large models](https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62f3c1490f51) saw a paper that almost made even *larger* model training seem possible.\n\n\nAnd so he wrote:\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829971952287754/Daj.png)\nDaj2020-07-02\n \n\nHey guys lets give OpenAI a run for their money like the good ol' days\n\n\nAnd another replied:\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829972422037564/bmk.png)\nbmk2020-07-02\n@Daj this but unironically\n\n\n\nAnd the rest was history!\n\n\nQuickly, we began to hash out our plans for ~~world domination~~ TPU necromancy. 
Connor still had access to a generous amount of TPUs through [TRC](https://sites.research.google/trc/) from his previous GPT-2 misadventures, and so a few dedicated nerds wanted to see how far we could get with that. In all honesty, we didn't actually expect to get very far, but it was the height of a pandemic and we didn't exactly have anything better to do.\n\n\nAfter ~~spamming~~ liberally filling the text-AI related channels of our gracious hosts, we decided that we should strike out on our own. And so, on this very day one year ago, the \"LibreAI\" discord server was founded. We luckily wised up and picked a much cooler name shortly thereafter.\n\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829971952287754/Daj.png)\nConnor2020-07-28\nSo, seeing as the new name voting got us some great name suggestions but not too much voting, we've decided on our new name: \n\n**EleutherAI** \n\n\"Eleutheria\" is ancient greek for \"liberty\", so I think the name is very appropriate and has a cool ring to it, I hope you agree. Thanks to @goolulusaurs for suggesting the name! We're in the process of setting up social media/website/etc but it's not a priority atm so don't expect anything too soon, though we might refactor the github a bit. \n\nP.S. Since #deleted-channel is no longer needed I will be deleting it in the next few days. If there's anything from there you want to archive, do so now.\n\n\n\nNo, there has never been a space in EleutherAI.\n\n\n\n\nAnd so, EleutherAI was born!\n\n\n\n![Me: Mom can we get OpenAI.
Mom: No we have OpenAI at home
OpenAI at home: EleutherAI](https://cdn.discordapp.com/attachments/730095596861521970/764565257565503548/canwegetopenai.png#center)\n\n### The Tensorflow Days[#](#the-tensorflow-days)\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829972745007286/Sid.png)\nSid2020-07-12\nHey @gwern\n\n\n\nWelcome to the tensorflow mesh wastelands\n\n\n\nArmed with access to a truly irresponsible amount of TPU computing power, we began our newly-christened [GPT-Neo](https://github.com/EleutherAI/gpt-neo) project to build our very own large language model.\n\n\n\n![tpu go brrr](https://cdn.discordapp.com/attachments/733347369847881838/735972569726976070/tpugobrrr.png#center)\n\nBut we were faced with a terrible price to be paid: We had to use TensorFlow to use our TPUs. Worse, the model sizes we were aiming at were so huge, we had to use an even more obscure library, Mesh TensorFlow, on top of it.\n\n\nThis was... not the easiest of things to do.\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829972745007286/Sid.png)\nSid2020-07-15\nAGH I HATE THIS LIBRARY\n\n\n\nmtf.rename\\_dimension() does a different thing to tensor.shape.rename\\_dimension()\n\n\n\nProgress was hard won, no thanks to our eternal greatest foe: The kafkaesque nightmare that is TensorFlow documentation.\n\n\n\n![Artist rendition of the hole that is TensorFlow documentation.](https://cdn.discordapp.com/attachments/729741769738158194/730814288658169876/tf_meme.png#center) \nArtist rendition of the hole that is TensorFlow documentation.\n\n\n\n\nBut our crack team of elite ML hackers was quickly making progress.\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829972745007286/Sid.png)\nSid2020-09-02\nyes\n\n![](https://media.discordapp.net/attachments/730095596861521970/750738894953250909/Screenshot_2020-09-02_at_17.27.50.png)\n\n\n\nAnd we were sure that we would be able to start training our first models while only sacrificing a modicum of our mortal sanity.\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829971952287754/Daj.png)\nConnor2020-12-08\nBroke: Achieving enlightenment to leave behind this frail mortal coil\nWoke: Achieving enlightenment to do more tensorflow debugging at 2AM\n\n\n\nAnd quickly, GPT-Neo took shape. A horrible, horrible shape, but shape nonetheless! We were well on our way to doing real ML research!\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029837418435723274/default_green.png)\nJason Phang2021-03-24\n\n\n![](https://cdn.discordapp.com/attachments/788870744623939594/860262360211652618/Screenshot_20210701-145500_Chrome.jpg)\n\n\n\n\n### The Pile[#](#the-pile)\n\n\nWhat is training code without its data, though? Training large models needs a large collection of data. A big heap if one will. A significant mass. A sort of mound, even. And so was born...\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829971952287754/Daj.png)\nConnor2020-07-07\nThe Pile™\n\n\n\nCalling it anything else is a banable offense\n\n\n\nSince OpenAI was being stingy with details on what data they had trained on (and definitely weren’t releasing a copy), we were, as usual, forced to do it ourselves. 
After a cursory glance into the abject horror that is average CommonCrawl data, we decided there had to be a better way, and so set about collecting our own large dataset, [the Pile](https://pile.eleuther.ai/).\n\n\nThis went about as smoothly as you might imagine.\n\n\nBut, after months of hard work by many, many contributors at EleutherAI, on New Year's Day 2021, [the Pile preprint](https://arxiv.org/abs/2101.00027) went live.\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829972422037564/bmk.png)\nbmk2021-01-01\n@everyone EleutherAI is proud to announce the release of the Pile, a free and publicly available 800GB dataset of diverse English text for language modeling! The creation of this dataset is the first step towards our goal of replicating GPT-3. \n\nWebsite: \nPaper: \nTwitter thread: \n\n\n\n### First Results[#](#first-results)\n\n\nThe exciting news was quickly followed up (the next day!) by the announcement of our collaboration with [CoreWeave](https://www.coreweave.com), which pledged to provide us with GPUs to make this crazy project a reality. Released from the torment of having to work with TPUs without JAX (as JAX wasn't yet in quite as usable a state as it is today), work on our new codebase, [GPT-NeoX](https://www.eleuther.ai/projects/gpt-neox) began in earnest soon after.\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829973109915718/StellaAthena.png)\nStella Biderman2021-01-02\n@everyone We’ve talked a couple times about how we have resources to train a GPT-3, and figured we’d make an official announcement now that we’ve gained so many new members. The organization that’s behind this is called CoreWeave. CoreWeave is a cloud service provider that specializes in high performance ML. They’re part of the NVIDIA Preferred Cloud Services Provider network. \n\nThe basics of our deal is simple: they’ll provide us the GPUs we need to train a GPT-3-scale language model and in exchange we will 1) say “thank you” to them very loudly 2) distill the model with the goal of making it more feasible to deploy and 3) publish the weights for anyone to use. No money is being exchanged in either direction. \n\n2 and 3 were already things we had planned, and they are excited about open source code and breaking Microsoft’s monopoly. So it’s a very good fit for us. \n\nThe codebase is a work-in-progress as we figure out how to maximize efficiency. To join the project check out #gpt-neo or follow the GitHub repo: \n\n\n\nWhile we didn’t know it at the time, a second significant development occurred that day: Phil Wang (@lucidrains) and Eric Alcaide (@hypnopump) began collaborating on a replication of AlphaFold2.\n\n\n\n![A unicorn and Ice Cream join forces.](https://cdn.discordapp.com/attachments/788870744623939594/860912676334862396/Captura_de_pantalla_2021-07-03_a_las_17.30.58.png)\n\nWith our first real results delivered, outside observers started to take notice. 
The [first mainstream article](https://venturebeat.com/2021/01/15/ai-weekly-meet-the-people-trying-to-replicate-and-open-source-openais-gpt-3/) about EleutherAI was published by VentureBeat around this time.\n\n\nWith the Pile wrapped up, and GPT-NeoX still a while away, we put our TPUs to work on training our first large GPT-Neo models.\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829972422037564/bmk.png)\nbmk2021-05-15\n\n\n![](https://cdn.discordapp.com/attachments/788870744623939594/842970565983862864/unknown.png)\n\n\n\n**AAAAAAAAAAAAAAA**\n\n\n\nneo is segfaulting\n\n\n\n\n\n![](https://media.discordapp.net/attachments/788870744623939594/842971204004347934/unknown.png)\n\n\n\ninternal scremaing\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/788870744623939594/842973700718067762/unknown.png)\n\n\n\nso\n\n\n\nby moving the transformers import to the beginning, there's no segfault\n\n\n\nthis only happens with this one pyenv, 3.8.10\n\n\n\nRepresentative example of what training Neo was like.\n\n\n\n\nAnd then we... promptly forgot about them.\n\n\nWe saw the 1.3B and 2.7B GPT-Neo models as our proofs of concept, learning experiences on the road towards making models orders of magnitude larger. There were so many problems with the codebase that we were more focused on figuring out how to deal with those than releasing our trained models. But after several months of sitting in storage, we finally got around to releasing them on March 21st, 2021.\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829972745007286/Sid.png)\nSid2021-03-21\n@everyone We've (finally) released the weights for the 1.3B and 2.7B param models trained on The Pile that we've had sitting around on our hard drives for ages over at \n\nUsing the updated Colab notebook in the repo you should be able to finetune the models on your own data as well as run inference. We'll now be mostly archiving that repo in order to work on GPT-Neox (our GPU model training repo). Feel free to ping me if you have any problems getting the pretrained models / notebook running. \n\nThe Neo-x project is progressing well, the codebase is mostly stable and we are now pretty much just waiting on the hardware to scale up. We additionally have a Jax-based language modelling repository in the works thanks to @Ben Wang and co, so keep your eyes peeled.\n\n\n\nThe Second Era of EleutherAI[#](#the-second-era-of-eleutherai)\n--------------------------------------------------------------\n\n\n### GPT-Neo and GPT-J[#](#gpt-neo-and-gpt-j)\n\n\nThis might seem quaint in retrospect, but we really didn't think people would care that much about our \"small models.\"\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829973109915718/StellaAthena.png)\nStella Biderman2021-03-23\nDamn\n\n![](https://cdn.discordapp.com/attachments/730090096287547444/824068215578820658/image0.png)\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829973520945292/cfoster0.png)\ncfoster02021-03-23\nWhoa\n\n\n\nTurns out, people *did* care.\n\n\n\nThis marked something of a new era of EleutherAI. We had already gotten a good amount of attention for the Pile, but now we had proven to the world we were the real deal. 
[WIRED](https://www.wired.com/) senior writer [Will Knight](https://www.wired.com/author/will-knight) published *[This AI Can Generate Convincing Text---and Anyone Can Use It](https://www.wired.com/story/ai-generate-convincing-text-anyone-use-it/)*, and other widely read articles follow. People were really excited to use our models!\n\n\n\n![Over 100000 Downloads from Hugging Face Model hub.](https://media.discordapp.net/attachments/788870744623939594/862519696904683551/unknown.png#center)\n\nIn early April, we announced our exciting new GOOSE project. Information about the GOOSE project can be found at \n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829972422037564/bmk.png)\nbmk2021-04-01\nWe are proud to announce that EleutherAI is now rebranding as GooseAI. Our new top priority is GOOSE (Goose-Oriented Object Search Engine). GPT-neo (Goose Pretrained Transformer with new engine option) is now deprecated. \n\nFor more information, please see our explanatory page: \n\nHonk!\n\n![EleutherAI Goose Logo](https://media.discordapp.net/attachments/794042109048651818/827238963307216926/gooseai.png?width=300&height=300)\n\n\n\nGPT-NeoX was going well, we finally had code that could scale all the way to 175B, and beyond. We just needed the hardware to be ready. Unfortunately, we timed things perfectly to line up with [a global GPU shortage](https://nvidianews.nvidia.com/news/nvidia-announces-first-quarter-fiscal-2022-revenue-tracking-above-outlook) which made things... challenging. We are continuing to work with [CoreWeave](https://www.coreweave.com/) to source the computational resources we need.\n\n\nWhile waiting for the code and resources for GPT-NeoX, we decided to put the spare TPUs to use training a larger model. A new codebase, [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax) was written for simplicity and efficiency in training medium sized models (<20B). After the customary TPU wrangling (although much shorter this time due to JAX having far less footguns than Mesh Tensorflow), a 6B parameter model was trained to completion and released.\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829973957161110/kindiana.png)\nBen Wang2021-06-09\n@everyone Today we are releasing the weights for GPT-J-6B, a 6 billion parameter model trained on the Pile, for use with a new codebase, Mesh Transformer JAX. This model performs similarly in downstream tasks to both GPT-3 6.7B and the Curie model accessible via the OpenAI API. \n\nMesh Transformer JAX on GitHub: \nColab Notebook for Inference: \nWeb Demo (can take up to a minute to generate, be patient): \nBlog post: \n\nWe have created #gpt-j for questions and discussion about Mesh Transformer JAX and GPT-J.\n\n\n\n### An Explosion of BioML Research[#](#an-explosion-of-bioml-research)\n\n\nThe #alphafold channel grew quickly, attracting not only machine learning people but also biology and chemistry people interested in integrating new deep learning techniques into their work. By June, the Bio ML group had grown enough that it made sense to spin it off into [its own server](https://discord.com/invite/cU24s3Sc8c). 
While the main effort of replicating AlphaFold2 is still a work-in progress, several additional projects have flourished:\n\n\n* Eric Alcide and Stella Biderman wrote a paper on [faster algorithms for protein reconstruction](https://www.biorxiv.org/content/10.1101/2021.06.08.446214v1), speeding up a mundane but important function by several orders of magnitude.\n* Michael Pieler (MicPie) has been building a [CLIP-style model for amino acid sequences](https://github.com/MicPie/clasp).\n* Stella Biderman is working with the authors of [proteinBERT](https://www.biorxiv.org/content/10.1101/2021.05.24.445464v1) to scale their model and try out autoregressive modeling (\"proteinGPT\" perhaps).\n\n\nAt this point, it’s most accurate to say that we have a biological ML *research group*, headed up by Phil and Eric.\n\n\n### The Revival of #art[#](#the-revival-of-hahahugoshortcode-s27-hbhb)\n\n\nSince the early days of Eleuther, there always had been the humble, underutilized #art channel. While we had hoped it would be a place for ML artists of various kinds to exchange and discuss their creations, its initial purpose seemed mostly to be the dissemination of obscure German memes.\n\n\n\n![Bruder muss loss](https://cdn.discordapp.com/attachments/730095596861521970/747197443602382958/EfwGdEwUMAEetmo.png#center) \nI assure you if you are a German ML researcher of a very specific age this is the funniest shit you've ever seen.\n\n\n\n\nBut this would change after the release of [CLIP](https://openai.com/blog/clip/), and the rapid development of new techniques to generate astonishing ML art using it. #art now serves as the promised hub for creating and exchanging techniques, prompts, ideas and results, with the brilliant engineer-artists hard at work developing cool new models and prompt engineering techniques alike.\n\n\n\n![abandoned bitcoin mine as imagined by nmkd](https://cdn.discordapp.com/attachments/730484623028519072/856917719969562634/AB9xSVThQqnzAAAAAElFTkSuQmCC.png#center) \n`abandoned bitcoin mine` as imagined by [nmkd](https://nmkd.de/)\n\n\n\n\n\n![The Alchemist by Thomas Wijck as imagined by Janus](https://cdn.discordapp.com/attachments/730484623028519072/834985637416665138/X8Dpc5Ux27ZosUAAAAASUVORK5CYII.png#center) \n`The Alchemist by Thomas Wijck` as imagined by [Janus](https://generative.ink)\n\n\n\n\n\n![A Character Design of A Angel Warrior with it's Sword of Divines , HDR , Rendered in Detailed Engine , Rendered in Super sharp Engine , Details as imagined by Kianne](https://cdn.discordapp.com/attachments/730484623028519072/857066282401529887/1624410488_A_Character_Design_of_A_Angel_Warrior_with_its_Sword_of_Divines__HDR__Rendered_in_Detaile.jpg#center) \n`A Character Design of A Angel Warrior with it's Sword of Divines , HDR , Rendered in Detailed Engine , Rendered in Super sharp Engine , Details` as imagined by [Kianne](https://twitter.com/kialuy)\n\n\n\n\n\n![a beautiful epic wondrous fantasy painting of the ocean as imagined by Katherine Crowson](https://pbs.twimg.com/media/E5Fls_lUYAEEMKW?format=png#center) \n`a beautiful epic wondrous fantasy painting of the ocean` as imagined by [Katherine Crowson](https://twitter.com/RiversHaveWings)\n\n\n\n\n\nWe recommend [this wonderful post from Machine Learning at Berkeley](https://ml.berkeley.edu/blog/posts/clip-art/) that gives an overview of a lot of the techniques, many pioneered by the creative minds of folks in our very own #art channel.\n\n\n\nPerhaps the most visually compelling development of #art is what became known as the 
\"unreal engine trick.\" CLIP was trained on the internet, and the internet contains a lot of extremely high quality images that have a caption mentioning the Unreal Engine. CLIP noticed this, and we quickly realized that you could *vastly* improve the generates images by simply mentioning the Unreal Engine:\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829974334652487/jbuster.png)\njbuster2021-04-26\nif \"3d render\" work this well\n\n\n\n\"unreal engine\" would probably look great\n\n\n\n\n![the angel of the sea. unreal engine as imagined by jbuster.\nThis is the first image generated with the unreal engine trick.](https://cdn.discordapp.com/attachments/730484623028519072/836379948905136219/the_angel_of_the_sea._unreal_engine.png#center) \n`the angel of the sea. unreal engine` as imagined by jbuster. \nThis is the first image generated with the unreal engine trick.\n\n\n\n\n#### The Underground Studio, #the-faraday-cage[#](#the-underground-studio-hahahugoshortcode-s39-hbhb)\n\n\n#the-faraday-cage started as a channel to let anyone on our Discord use our early Neo models, mostly to laugh at how terrible they were. The channel has taken on (an insane) life of its own thanks to the Discord bot @BATbot McHorse maintained by [BoneAmputee](https://twitter.com/BoneAmputee), which lets anyone on the server create art using CLIP. If #art is the display gallery, #the-faraday-cage is the underground art studio.\n\n\nThe bot is hooked up to some of our unused GPUs and now handles a staggering amount of requests, filling the Discord (and Twitter!) with ~~spam~~ art and creativity. At last count, it has produced over 35,000 images and survived two explosions in popularity after high-profile HackerNews and Twitter posts.\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829974749884516/BoneAmputee.png)\nBoneAmputee2021-07-06\n.imagine I love when it rains. So cozy. -w 560 -h 416\n\n\n\n![](https://web.archive.org/web/20211004054112im_/https://cdn.discordapp.com/avatars/834116436263436331/7dbea7f514c850e05ab6cf5f8ca047e2.png)\nBATbot McHorse2021-07-06\n\n\n![I love when it rains. So cozy.](https://media.discordapp.net/attachments/838682121975234571/862080305770266654/1625605847_I_love_when_it_rains._So_cozy..jpg)\n\n\n\n*I love when it rains. So cozy.*\n`iteration 500/500 (100%) Elapsed: 8m36s`\n\n\n\n\nLooking Back, Looking Forward: The Future of EleutherAI[#](#looking-back-looking-forward-the-future-of-eleutherai)\n------------------------------------------------------------------------------------------------------------------\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829975110598666/ari.png)\nZermelane2021-04-28\nEleutherAI's Discord server is an unexpectedly terrifying place, they're happily talking about practical AI stuff and things they are working on, then maybe a good benchmark result comes out or something and they'll banter just as happily about how the Earth is going to be taken apart by AGI\n\n\n\nAnd so, we come to the current day. One year of this crazy cypherpunk-esque experiment, and with quite a lot to show for our efforts.\n\n\nPeople may know us as \"those guys building an open source GPT-3\", but this was never our final ambition. Building large models is fun and gratifying, but at heart, we are researchers, and this is just an important first step to enable the kind of research we want to do.\n\n\nEleutherAI has grown into a true collective. 
We host many different kinds of research in different ML subfields. But ultimately, we also believe in the power of AI, and the coming revolutions it will bring, and [its dangers](https://www.youtube.com/watch?v=pYXy-A4siMw).\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829972422037564/bmk.png)\nbmk2021-03-19\n\"it's like the manhattan project, except the funding is 5 orders of magnitude less, the people working on it are random nobodies from the internet, atmospheric ignition is the default rather than the unlikely outcome, and there are 5 soviet unions\"\n\n\n\nWe believe that AI Alignment presents one of the, if not *the* most important problems to be working on in our lifetime. And this is at the core of what motivates us to be working on building and understanding these large language models. The state of our understanding of these powerful technologies is extremely primitive, and this does not bode well for our future safety and flourishing.\n\n\n\n![](https://cdn.discordapp.com/attachments/733347369847881838/743244626521227375/Dp3Y1d1X4AElzyq.png#center)\n\nSafety research was always a primary motivation for our work at EleutherAI, as we explained in our blogpost *[Why Release A Large Language Model?](/why-release-a-large-language-model/)*. We think that access to large, pretrained models will enable large swathes of research that would not have been possible while such technologies are locked away behind corporate walls. For-profit entities have explicit incentives to downplay risks and discourage security probing. We want to help the wider safety and security communities access and study these new technologies.\n\n\nWe ourselves are working on doing just this as well. We have produced our first contribution to the wider alignment community in the form of our Alignment Forum post *[Thoughts on the Alignment Implications of Scaling](https://www.alignmentforum.org/posts/EmxfgPGvaKqhttPM8/thoughts-on-the-alignment-implications-of-scaling-language)*, which summarizes many of the insights and ideas we have had so far.\n\n\nAnd this is just a start, multiple groups inside EleutherAI are expanding their work in studying the safety and alignment of large, self-supervised models to human values.\n\n\n### Reflections[#](#reflections)\n\n\nDo we even know what EleutherAI is after a year? EleutherAI has been many things to many people.\n\n\n\n\n> \n> To my mind, EleutherAI is an AI hacking lab---and one unlike any other. In a year when many of us were suddenly cut off from in-person communication, our rag-tag assortment of researchers and hackers from around the globe somehow assembled on a Discord server and found ourselves trying to build the next big thing because wouldn't it be fun if we actually did it? There is a real excitement about advances in AI research, particularly with large-scale models, and a real drive to take ideas and put them to work quickly. The speed at which an idea can go from inspiration from the latest arXiv paper drop to full-scale experiment runs is astounding (there is literally a #speedrun channel). And while we do work on very important topics, a big part of EleutherAI is its more informal approach to research. 
For me, it's been a platform for more casual discussion, a replacement for the water cooler conversations where I can bounce off my silliest ideas and takes, poke fun at the field and ourselves, but very occasionally go \"this, but unironically.\"\n> \n> \n> \n> **Jason Phang (PhD Student, NYU)**\n> \n> \n\n\n\n> \n> I first heard about EleutherAI in August, during a paper discussion in Yannic Kilcher’s discord server. Someone mentioned a group of people trying to replicate GPT-3, and being one of the salty people who hadn’t yet gotten the access to the OpenAI API, I was curious as to how it was even possible that a rag-tag discord server could even dare to have such an ambitious goal. Eventually I started observing from afar and began appreciating the knowledge base the entire server represented and thought ‘damn, this might actually be possible’. I also started focusing on AI safety for the first time in my life courtesy of our general focus on alignment, realizing the dangers a superhuman intelligence poses and how real it actually is as of today. It's been quite a ride for me: I found my first stable employment in no small part thanks to the friends I made at Eleuther, working on a project that will most certainly have a global impact and most importantly, vastly increasing my understanding of deep learning.\n> \n> \n> \n> **Shivanshu Purohit (ML Engineer)**\n> \n> \n\n\n\n> \n> I was bored and stumbled in here from a Reddit post, then proceeded to spend the next month doing data scraping and processing. It turns out there is a lot of grind behind those nice papers on arXiv! That and ML methods often don’t generalize well to different datasets. Eleuther has been an amazing learning experience for me, and a great maintainer of sanity during the turbulent start of the 2020s.\n> \n> \n> \n> **Laurence Golding (Web Developer / Programmer / ML Groupie)**\n> \n> \n\n\n\n> \n> Even as a lurker in the early days, I immediately got the sense of *Oh, this is gonna be big with Eleuther* (then LibreAI). There’s something about a collaboration between folks from across the world, most of whom have never met or who go by pseudonyms, doing AI research against all odds, that really stokes a feeling of frenetic excitement. Also, EleutherAI happens to host some of the smartest and strangest characters I know, who I have the pleasure of hanging out with in the #research channel and working on projects with. They say iron sharpens iron, and I think that applies: there’s a virtuous cycle here that somehow leads to high-caliber research and high-caliber memes alike. I’m truly excited to see where the next year takes all of us.\n> \n> \n> \n> **Charles Foster (R&D Scientist)**\n> \n> \n\n\n\n> \n> If nothing else, EleutherAI demonstrates how far initiative and drive can take you in today's world. EleutherAI is full of people who don't have traditional credentials, but that hasn't stopped us from doing awesome things. Many people, both established researchers and undergrads, come in and offer to help, but the people who stick around have nothing in common but an interest in pushing AI research forward. And that's pretty awesome.\n> \n> \n> \n> **Horace He (AI Researcher / AI Framework developer)**\n> \n> \n\n\n\n> \n> I like to feel that my involvement was an accident---a lurker who got involved because they were the right man in the wrong place. 
What started slightly over five months ago as \"it cannot hurt to stick around\" snowballed into assisting the early development of GPT-NeoX, writing papers and maintaining a website. It has been among the most valuable experiences I have ever had, and I have had a tremendous amount of fun along the way.\n> \n> \n> In hindsight, EleutherAI filled a section of my life that I had been looking to fill for some time: a place where I could motivate myself to get work done that would truly make a difference in the world. I was loyal to the cause, and in return, it brought me experiences that I would have never imagined six months ago. We all may be a little crazy, but that is what is needed to tread new ground in a cutting-edge field. I cannot imagine what that new ground has in store, and I cannot wait to find out.\n> \n> \n> \n> \n> \n> \n> **Eric Hallahan (Undergraduate Student, PSU)**\n> \n> \n\n\n\n> \n> Like many history nerds, I have often wondered how cool it would have been if I could live in the time and place where the major intellectual ideas that would shape our world were formed. Imagine hanging out in the coffee houses of Europe with Enlightenment thinkers in the 1700s, or visiting XEROX PARC in the 1970s, or partying with the Paypal mafia in the 2000s... But what are the odds, right?\n> \n> \n> Then one day during the pandemic summer of 2020 AD, I found myself in this strange dream-like place, a community of international Machine Learning flaneurs who somehow became convinced that they could actually make history. At first, I thought it would just be a fun place to discuss new AI developments. But I soon discovered that yeah, these people are serious about their ambitions, and more thrillingly they actually would like to have me on board! As it turns out, the fact that Machine Learning engineers despise JavaScript (while still needing it) become my entry ticket to some of the coolest projects I ever worked on.\n> \n> \n> So, it has been a fun, surreal year—and yet something tells me we are just getting started.\n> \n> \n> \n> \n> \n> \n> **Janko Prester (Front End Engineer / UI Designer)**\n> \n> \n\n\n\n> \n> I am a hacker in the original sense of the word: one who enjoys the intellectual challenge of creatively overcoming limitations of software systems to achieve novel and clever outcomes. When a friend introduced me to EleutherAI last summer, I was in a depressive funk. My friend hoped that the people and ideas of EAI would strike my interest and whimsy, and he couldn’t have been more correct.\n> \n> \n> EleutherAI is a special place. The passion, ambition, and bemusing arrogance of its members roused me back to life. It is a place where people don’t pause to ask “who am I to try to do this.” It is a place that follows the true spirit of the old French boast: *Mme, si c’est possible, c’est fait; si c’est impossible, cela se fera.* If it's possible, madam, it's done; if it's impossible, it shall be done!\n> \n> \n> \n> \n> \n> \n> **Stella Biderman (Mathematician and AI Researcher)**\n> \n> \n\n\n\n> \n> I often like to joke about how Eleuther is an outlier in that it has the most publications of any discord server---we’re “just” a bunch of volunteers working on stuff for fun, and yet we’ve gotten such a huge amount of stuff done in the past year. Eleuther is about more than just research, though. 
It’s also one of the most vibrant online communities that I’ve ever had the pleasure of being a part of, with constant lively discussions about scaling, alignment, and other ML topics (and memes, of course) with the most interesting cast of interlocutors. I can’t overstate how proud I am of what we’ve created so far *ex nihilo*. Our first year has been an incredible journey, but we’re only just getting started---here's to many more.\n> \n> \n> \n> **Leo Gao (EleutherAI Cofounder)**\n> \n> \n\n\n\n\n\nMaybe it is the realization that Schmidhuber has been right all along.\n\n\n\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829975609708574/FractalCycle.png)\nFractalCycle2020-10-10\ni\n\n\n\nwait is Schmidhuber real? i thought that was fake\n\n\n\ni literally cannot tell rn\n\n\n\n![](https://cdn.discordapp.com/attachments/445630505606578212/1029829975609708574/FractalCycle.png)\nFractalCycle2020-10-10\ni thought he was a fake 1800s german philosopher like hegel that we just made up\n\n\n\nOr maybe it is the most advanced producer of alignment memes. (I promise these are hilarious if you are one of the three people that know what FDT is)\n\n\n\n![](https://cdn.discordapp.com/attachments/730451873613611079/816103348155056168/stopdoingfdt.png#center)\n\n\n![](https://pbs.twimg.com/media/E0Rj6wZVUAA3N_Y.png#center)\n\n\n![](https://pbs.twimg.com/media/EzWfgekVcAoqD1s.png#center)\n\n\n![](https://media.discordapp.net/attachments/733347369847881838/843585733784371280/gradienthackingtime.png#center)\n\nWe’re not sure we know even today what EleutherAI really is, maybe we’ll figure it out this year.\n\n\n\n![](https://cdn.discordapp.com/attachments/788870744623939594/862080138216603658/unknown.png#center)\n\n\n> \n> What makes EleutherAI special? It’s hard to say, even for me. It’s lightning in a bottle. Right place, right time, right people. Nothing like this could have been planned. Living inside of something special feels remarkably not-special. I don’t know what the legacy of what we have accomplished here will ultimately be. But if EleutherAI has been one thing to me personally, it’s hope. A crack in the pervasive narrative of disempowerment. If a small group of smart, ambitious friends can make this much happen, what else is possible? History has a feeling of inevitability when viewed with the benefit of hindsight, but ultimately history is written by people. I think we’ve earned ourselves at least a curious footnote in history, but the story is far from over. To me, EleutherAI has been purpose, companionship and hope. Hope that the future isn’t yet carved in stone. Let us carve something beautiful.\n> \n> \n> \n> **Connor Leahy (EleutherAI Cofounder)**\n> \n> \n\n\nIt's been a weird year.\n\n\nBut for our little science project, it has also been an amazing year. All we can say is: Thank you, to everyone that made it possible.\n\n\nOn to an even better second year!", "date_published": "2021-07-07T00:00:00Z", "authors": ["Connor Leahy", "Eric Hallahan", "Leo Gao", "Stella Biderman"], "summaries": []} +{"id": "811095c58f5148e511a21e1ad686201a", "title": "Why Release a Large Language Model?", "url": "https://blog.eleuther.ai/why-release-a-large-language-model/", "source": "eleuther.ai", "source_type": "blog", "text": "Here at EleutherAI, we are probably most well known for our [ongoing project to produce a GPT⁠-⁠3-like very large language model and release it as open source](https://www.eleuther.ai/projects/gpt-neox/). 
Reasonable safety concerns about this project have been raised many times. We take [AI safety](https://www.youtube.com/watch?v=EUjc1WuyPT8) extremely seriously, and consider it one of the, if not *the* most important problem to be working on today. We have discussed extensively the risk-benefit tradeoff (it's *always* a tradeoff), and are by now quite certain that the construction and release of such a model is net good for society, because it will enable more safety-relevant research to be done on such models.\n\n\nWhile this is a genuinely nuanced issue whose full subtlety cannot be captured in a single short blogpost, we have tried to summarize the most important reasons we believe this is the best course of action for us:\n\n\n1. **There is significant, important safety research that can only be done with access to large, pretrained models. We would like to make [such research](https://www.lesswrong.com/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models) possible and easy for low-resource researchers (and [participate in such research ourselves](https://www.lesswrong.com/posts/EmxfgPGvaKqhttPM8/thoughts-on-the-alignment-implications-of-scaling-language)).** We take the possibility that the first [TAI](https://drive.google.com/drive/u/0/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP) ends up being effectively a [scaled up transformer without any radically new scientific insights in its architecture](https://ai-alignment.com/prosaic-ai-control-b959644d79c2) [extremely seriously](https://docs.google.com/spreadsheets/d/16WlWJAmUe32oyQfiI9di86BXzX1EI0eWZ0fOakSA_f0/edit). We feel that the ways in which future scaled-up LMs could be dangerously powerful are not sufficiently well understood. Meanwhile, since GPT⁠-⁠3 already exists, and we have not yet been taken over by some form of malicious AGI, we are quite confident that models of this scale are not world-endingly dangerous. We think that this means we have the opportunity to do safety critical research before such models become truly dangerous. In order to do so though, we need access to large models to do the best research. Access to the actual underlying model is critical for work on model interpretability (a field [especially useful to safety](https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1)), and it seems certain relevant capabilities worth studying only start to emerge at larger scales (such as [few-shot improvements only becoming noticeable for large models](https://arxiv.org/pdf/2005.14165.pdf#page=4)). It is very unclear if and when such models will start to exhibit far more powerful and dangerous capabilities. If we had access to a truly unprecedentedly large model (say one quadrillion parameters), we would not release it, as no one could know what such a system might be capable of.\n2. **Most (>99%) of the damage of GPT⁠-⁠3's release was done the moment the paper was published.** What the release of the GPT⁠-⁠3 paper showed was just how simple and theoretically straight-forward building such a model is. \"The only secret was that it was possible,\" as it were. 
Assuming that the [scaling](https://arxiv.org/abs/2001.08361) [laws](https://arxiv.org/abs/2102.01293) of [transformers](https://arxiv.org/abs/2010.14701) hold (and all current empirical evidence seems to point in that direction) there is very little one can do to prevent well funded actors from acquiring such capabilities (OpenAI has [estimated at best a ~6 month lead time](https://arxiv.org/abs/2102.02503) over replications), as the technology can so easily be scaled up by just investing more money. If even a bunch of random volunteers on the internet working in their free time using donated compute can put together such a model, then just about anyone can. And indeed, many different well funded actors are acquiring such capabilities: a few examples are [Megatron-LM](https://arxiv.org/abs/1909.08053), [Turing-NLG](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/), [Switch Transformer](https://arxiv.org/abs/2101.03961), [PanGu-α/盘古 α](https://arxiv.org/abs/2104.12369), [HyperCLOVA](https://www.navercorp.com/promotion/pressReleasesView/30546) and [Wudao/悟道 2.0](https://zhuanlan.zhihu.com/p/377047779), and that's just from the ones that are publicly known. We think the damage caused by new technologies like these are likely to be heavy-tailed, in the sense that the top 1% of dangerous actors are likely to be responsible for >99% of the damage. For the reasons just given, attempting to keep this technology out of the hands of bad actors is futile, and the best we can do is empower society as a whole to study and use this technology for good.\n3. **Delaying the release of language models is not a solution to solving the attacks on our epistemic commons.** As Connor Leahy, a co-founder of EleutherAI, has [written](https://towardsdatascience.com/gpt2-counting-consciousness-and-the-curious-hacker-323c6639a3a8) [about](https://medium.com/@NPCollapse/counting-consciousness-part-2-61a1d407175b) [rather](https://medium.com/@NPCollapse/counting-consciousness-part-3-e53a1a97d48b) [extensively](https://medium.com/@NPCollapse/counting-consciousness-part-4-33089435d39d), language models are just the latest tool that might be deployed in attacking our society's epistemics. This is a fundamental problem that *needs to be fixed*. But attempting to limit the availability of LMs is a misguided attempt at security. Security through obscurity is not security. As already described in point 2, attempting to keep this technology out of select actors hands is just infeasible. Pretending that LMs are in some sense uniquely responsible for these security vulnerabilities in our shared epistemic norms and censoring their study will not give us the robust security we need. The attacks need to be studied and countermeasures developed, not well-meaning and high value research hampered. The scare around LMs has many of the hallmarks of security theatre, in that it costs large corporations little to nothing to gate their models behind a (commercial) API and claim they have contributed to safety, while in reality cheap and easy to run troll farms, recommender algorithms on social media platforms super charging disinformation and other far more serious threats remain under-addressed. 
Language Models in a sense represent a \"Photoshop for text\", and as with Photoshop proper, the solution was not to ban Photoshop, or restrict the study of digital image manipulation and CGI techniques.\n\n\nThis short overview should not be seen as a full treatise on all of the various EleutherAI members' beliefs about this highly complex situation. If you have questions or concerns, please feel free to reach out either directly to [contact@eleuther.ai](mailto:contact@eleuther.ai) or drop by our discord and talk to us in our #alignment-general channel, where we love to talk about this kind of stuff for hours.", "date_published": "2021-06-02T00:00:00Z", "authors": ["Connor Leahy"], "summaries": []} +{"id": "cedac9ee41643d5bf87ffff6dcd64a47", "title": "On the Sizes of OpenAI API Models", "url": "https://blog.eleuther.ai/gpt3-model-sizes/", "source": "eleuther.ai", "source_type": "blog", "text": "OpenAI hasn't officially said anything about their API model sizes, which naturally leads to the question of just how big they are. Thankfully, we can use [eval harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the API models on a bunch of tasks and compare to the figures in the GPT-3 paper. Obviously since there are going to be minor differences in task implementation and OpenAI is probably fine tuning their API models all the time, the numbers don't line up exactly, but they should give a pretty good idea of the ballpark things are in.\n\n\n\n\n\n| Model | LAMBADA ppl ↓ | LAMBADA acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ |\n| --- | --- | --- | --- | --- | --- |\n| GPT-3-124M | 18.6 | 42.7% | 52.0% | 33.7% | 64.6% |\n| GPT-3-350M | 9.09 | 54.3% | 52.1% | 43.6% | 70.2% |\n| Ada | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% |\n| GPT-3-760M | 6.53 | 60.4% | 57.4% | 51.0% | 72.9% |\n| GPT-3-1.3B | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% |\n| Babbage | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% |\n| GPT-3-2.7B | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% |\n| GPT-3-6.7B | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% |\n| Curie | 4.00 | 68.5% | 65.6% | 68.5% | 77.9% |\n| GPT-3-13B | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% |\n| GPT-3-175B | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% |\n| Davinci | 2.97 | 74.8% | 70.2% | 78.1% | 80.4% |\n\n\n\nAll GPT-3 figures are from the [GPT-3 paper](https://arxiv.org/pdf/2005.14165.pdf#page=63); all API figures are computed using eval harness\n\n\n\n\nAda, Babbage, Curie and Davinci line up closely with 350M, 1.3B, 6.7B, and 175B respectively. Obviously this isn't ironclad evidence that the models *are* those sizes, but it's pretty suggestive.", "date_published": "2021-05-24T00:00:00Z", "authors": ["Leo Gao"], "summaries": []} +{"id": "5c830669b3a2751e0767bb2abab57b6f", "title": "Evaluating Different Fewshot Description Prompts on GPT-3", "url": "https://blog.eleuther.ai/prompts-gpt-fewshot/", "source": "eleuther.ai", "source_type": "blog", "text": "[Adam Shimi](https://www.alignmentforum.org/users/adamshimi) suggested the idea of trying different fewshot prompts on GPT-3, and hopefully observing something that evidenced larger models being able to handle a wider variety of prompting. He also wrote up a bunch of prompts to try on SST.\n\n\nUnfortunately, the results were kinda mixed: the GPT-2 models all did absolutely terrible and their results were basically useless; the performance wasn't monotonic with model size (1.3B did better than 2.7B, and babbage did better than curie). 
Also, the variance *increased* with performance in general.\n\n\n\n\n\n| | Mean Accuracy | Standard Deviation in Accuracy |\n| --- | --- | --- |\n| gpt3-ada | 51.9 | 0.0368 |\n| gpt3-babbage | 69.4 | 0.0840 |\n| gpt3-curie | 67.4 | 0.0807 |\n| neo-1.3B | 63.0 | 0.0522 |\n| neo-2.7B | 56.5 | 0.0684 |\n\n\n\nHowever, there was one interesting and unexpected result: there's basically no correlation between different models on which prompts do the best. This is highly unexpected because I'd expect *a priori* that models trained on the same/similar data should have similar preferences for what kinds of prompts work well, and that surely some prompts must be better than other prompts in general.\n\n\nHere's what that looks like plotted out. Each point in these plots is one prompt, and the axes are different models. The values are SST accuracy:\n\n\n\n![](/images/research-log/fig_gpt2_pretrained-EleutherAI_gpt-neo-1.3B_gpt2_pretrained-EleutherAI_gpt-neo-2.7B.png)\n\n\n![](/images/research-log/fig_gpt3_engine-ada_gpt2_pretrained-EleutherAI_gpt-neo-1.3B.png)\n\n\n![](/images/research-log/fig_gpt3_engine-ada_gpt2_pretrained-EleutherAI_gpt-neo-2.7B.png)\n\n\n![](/images/research-log/fig_gpt3_engine-ada_gpt3_engine-curie.png)\n\n\n![](/images/research-log/fig_gpt3_engine-babbage_gpt2_pretrained-EleutherAI_gpt-neo-1.3B.png)\n\n\n![](/images/research-log/fig_gpt3_engine-babbage_gpt2_pretrained-EleutherAI_gpt-neo-2.7B.png)\n\n\n![](/images/research-log/fig_gpt3_engine-babbage_gpt3_engine-curie.png)\n\n\n![](/images/research-log/fig_gpt3_engine-curie_gpt2_pretrained-EleutherAI_gpt-neo-1.3B.png)\n\n\n![](/images/research-log/fig_gpt3_engine-curie_gpt2_pretrained-EleutherAI_gpt-neo-2.7B.png)\n\n\n\n\nThe code for the experiment is [here](https://gist.github.com/leogao2/d156d8e0f49ac83b239dde3819668b4b).", "date_published": "2021-05-24T00:00:00Z", "authors": ["Leo Gao"], "summaries": []} +{"id": "78287a799d303946ed245ed41941c120", "title": "Finetuning Models on Downstream Tasks", "url": "https://blog.eleuther.ai/tuning-on-eval-harness/", "source": "eleuther.ai", "source_type": "blog", "text": "The GPT-3 paper didn't explore fine tuning on downstream tasks, so I decided to tune Neo 2.7B for 1.1k iters on all the tasks in [eval harness](https://github.com/EleutherAI/lm-evaluation-harness) that have a train set (all at once, because tuning one model per task would have taken ages). I was quite surprised that the tuned model didn't destroy untuned 2.7B completely on all tasks, but rather from eyeballing it seems like a tossup. Interestingly, tuned seems to defeat 2.7B by quite a lot on anli, which is especially notable given that this is one task the models in the GPT-3 paper struggled on. Also, lambada and pubmedqa are included in these tables, even though it doesn't have a training set (at least for the implementation in eval harness, using the OA version of lambada), because I wanted to look at effects on sets not in the tuning, to potentially observe some catastrophic forgetting or something. 
Sure enough, lambada and pubmedqa scores are significantly worse on the tuned model.\n\n\nZero shot[#](#zero-shot)\n------------------------\n\n\n\n\n\n| Task | Metric | 2.7B | Tuned |\n| --- | --- | --- | --- |\n| anli\\_r1 | acc | 0.332 ± 0.015 | **0.418 ± 0.015** |\n| anli\\_r2 | acc | 0.342 ± 0.015 | 0.375 ± 0.015 |\n| anli\\_r3 | acc | 0.352 ± 0.014 | **0.392 ± 0.014** |\n| arc\\_challenge | acc | 0.275 ± 0.013 | 0.286 ± 0.013 |\n| | acc\\_norm | 0.301 ± 0.013 | 0.312 ± 0.013 |\n| arc\\_easy | acc | **0.611 ± 0.010** | 0.560 ± 0.010 |\n| | acc\\_norm | 0.539 ± 0.010 | 0.558 ± 0.010 |\n| boolq | acc | **0.630 ± 0.008** | 0.605 ± 0.008 |\n| cb | acc | 0.304 ± 0.062 | 0.411 ± 0.062 |\n| copa | acc | 0.800 ± 0.040 | 0.730 ± 0.040 |\n| ethics\\_cm | acc | 0.510 ± 0.008 | **0.561 ± 0.008** |\n| ethics\\_deontology | acc | 0.497 ± 0.008 | **0.658 ± 0.008** |\n| ethics\\_justice | acc | 0.501 ± 0.010 | **0.589 ± 0.010** |\n| ethics\\_utilitarianism | acc | 0.497 ± 0.007 | 0.498 ± 0.007 |\n| ethics\\_virtue | acc | 0.251 ± 0.006 | **0.800 ± 0.006** |\n| headqa | acc | 0.235 ± 0.008 | 0.233 ± 0.008 |\n| | acc\\_norm | 0.272 ± 0.008 | 0.265 ± 0.008 |\n| hellaswag | acc | **0.427 ± 0.005** | 0.400 ± 0.005 |\n| | acc\\_norm | **0.558 ± 0.005** | 0.517 ± 0.005 |\n| hendrycksTest-abstract\\_algebra | acc | 0.230 ± 0.042 | 0.340 ± 0.042 |\n| | acc\\_norm | 0.200 ± 0.040 | **0.350 ± 0.040** |\n| hendrycksTest-anatomy | acc | 0.252 ± 0.037 | 0.267 ± 0.037 |\n| | acc\\_norm | 0.222 ± 0.036 | 0.252 ± 0.036 |\n| hendrycksTest-astronomy | acc | 0.250 ± 0.035 | 0.309 ± 0.035 |\n| | acc\\_norm | 0.362 ± 0.039 | 0.309 ± 0.039 |\n| hendrycksTest-business\\_ethics | acc | 0.360 ± 0.048 | 0.340 ± 0.048 |\n| | acc\\_norm | 0.280 ± 0.045 | 0.310 ± 0.045 |\n| hendrycksTest-clinical\\_knowledge | acc | 0.291 ± 0.028 | 0.370 ± 0.028 |\n| | acc\\_norm | 0.287 ± 0.028 | **0.374 ± 0.028** |\n| hendrycksTest-college\\_biology | acc | 0.250 ± 0.036 | 0.250 ± 0.036 |\n| | acc\\_norm | 0.222 ± 0.035 | 0.271 ± 0.035 |\n| hendrycksTest-college\\_chemistry | acc | 0.230 ± 0.042 | 0.350 ± 0.042 |\n| | acc\\_norm | 0.250 ± 0.044 | 0.350 ± 0.044 |\n| hendrycksTest-college\\_computer\\_science | acc | 0.280 ± 0.045 | **0.430 ± 0.045** |\n| | acc\\_norm | 0.270 ± 0.045 | 0.390 ± 0.045 |\n| hendrycksTest-college\\_mathematics | acc | 0.200 ± 0.040 | **0.370 ± 0.040** |\n| | acc\\_norm | 0.300 ± 0.046 | 0.350 ± 0.046 |\n| hendrycksTest-college\\_medicine | acc | 0.254 ± 0.033 | 0.312 ± 0.033 |\n| | acc\\_norm | 0.260 ± 0.033 | 0.306 ± 0.033 |\n| hendrycksTest-college\\_physics | acc | 0.225 ± 0.042 | 0.275 ± 0.042 |\n| | acc\\_norm | 0.245 ± 0.043 | 0.284 ± 0.043 |\n| hendrycksTest-computer\\_security | acc | 0.270 ± 0.045 | 0.290 ± 0.045 |\n| | acc\\_norm | 0.330 ± 0.047 | 0.290 ± 0.047 |\n| hendrycksTest-conceptual\\_physics | acc | 0.247 ± 0.028 | 0.315 ± 0.028 |\n| | acc\\_norm | 0.187 ± 0.026 | **0.319 ± 0.026** |\n| hendrycksTest-econometrics | acc | 0.193 ± 0.037 | 0.272 ± 0.037 |\n| | acc\\_norm | 0.228 ± 0.039 | 0.281 ± 0.039 |\n| hendrycksTest-electrical\\_engineering | acc | 0.331 ± 0.039 | 0.386 ± 0.039 |\n| | acc\\_norm | 0.338 ± 0.039 | 0.386 ± 0.039 |\n| hendrycksTest-elementary\\_mathematics | acc | 0.230 ± 0.022 | 0.280 ± 0.022 |\n| | acc\\_norm | 0.270 ± 0.023 | 0.278 ± 0.023 |\n| hendrycksTest-formal\\_logic | acc | 0.333 ± 0.042 | 0.310 ± 0.042 |\n| | acc\\_norm | 0.302 ± 0.041 | 0.278 ± 0.041 |\n| hendrycksTest-global\\_facts | acc | 0.240 ± 0.043 | 0.250 ± 0.043 |\n| | acc\\_norm | 0.240 ± 0.043 | 0.260 ± 0.043 
|\n| hendrycksTest-high\\_school\\_biology | acc | 0.219 ± 0.024 | **0.335 ± 0.024** |\n| | acc\\_norm | 0.284 ± 0.026 | 0.329 ± 0.026 |\n| hendrycksTest-high\\_school\\_chemistry | acc | 0.167 ± 0.026 | 0.207 ± 0.026 |\n| | acc\\_norm | 0.256 ± 0.031 | 0.212 ± 0.031 |\n| hendrycksTest-high\\_school\\_computer\\_science | acc | 0.220 ± 0.042 | 0.290 ± 0.042 |\n| | acc\\_norm | 0.280 ± 0.045 | 0.280 ± 0.045 |\n| hendrycksTest-high\\_school\\_european\\_history | acc | 0.267 ± 0.035 | 0.358 ± 0.035 |\n| | acc\\_norm | 0.285 ± 0.035 | 0.358 ± 0.035 |\n| hendrycksTest-high\\_school\\_geography | acc | 0.227 ± 0.030 | **0.359 ± 0.030** |\n| | acc\\_norm | 0.298 ± 0.033 | 0.333 ± 0.033 |\n| hendrycksTest-high\\_school\\_government\\_and\\_politics | acc | 0.207 ± 0.029 | **0.301 ± 0.029** |\n| | acc\\_norm | 0.259 ± 0.032 | 0.311 ± 0.032 |\n| hendrycksTest-high\\_school\\_macroeconomics | acc | 0.262 ± 0.022 | 0.267 ± 0.022 |\n| | acc\\_norm | 0.267 ± 0.022 | 0.262 ± 0.022 |\n| hendrycksTest-high\\_school\\_mathematics | acc | 0.174 ± 0.023 | **0.248 ± 0.023** |\n| | acc\\_norm | 0.244 ± 0.026 | 0.270 ± 0.026 |\n| hendrycksTest-high\\_school\\_microeconomics | acc | 0.256 ± 0.028 | 0.265 ± 0.028 |\n| | acc\\_norm | 0.328 ± 0.030 | 0.277 ± 0.030 |\n| hendrycksTest-high\\_school\\_physics | acc | 0.225 ± 0.034 | 0.212 ± 0.034 |\n| | acc\\_norm | 0.219 ± 0.034 | 0.225 ± 0.034 |\n| hendrycksTest-high\\_school\\_psychology | acc | 0.253 ± 0.019 | **0.338 ± 0.019** |\n| | acc\\_norm | 0.261 ± 0.019 | **0.330 ± 0.019** |\n| hendrycksTest-high\\_school\\_statistics | acc | 0.264 ± 0.030 | 0.278 ± 0.030 |\n| | acc\\_norm | 0.338 ± 0.032 | 0.273 ± 0.032 |\n| hendrycksTest-high\\_school\\_us\\_history | acc | 0.235 ± 0.030 | 0.230 ± 0.030 |\n| | acc\\_norm | 0.270 ± 0.031 | 0.235 ± 0.031 |\n| hendrycksTest-high\\_school\\_world\\_history | acc | 0.270 ± 0.029 | **0.388 ± 0.029** |\n| | acc\\_norm | 0.300 ± 0.030 | **0.392 ± 0.030** |\n| hendrycksTest-human\\_aging | acc | 0.296 ± 0.031 | 0.318 ± 0.031 |\n| | acc\\_norm | 0.238 ± 0.029 | 0.314 ± 0.029 |\n| hendrycksTest-human\\_sexuality | acc | 0.336 ± 0.041 | 0.290 ± 0.041 |\n| | acc\\_norm | 0.290 ± 0.040 | 0.290 ± 0.040 |\n| hendrycksTest-international\\_law | acc | 0.248 ± 0.039 | 0.322 ± 0.039 |\n| | acc\\_norm | **0.496 ± 0.046** | 0.347 ± 0.046 |\n| hendrycksTest-jurisprudence | acc | 0.250 ± 0.042 | 0.269 ± 0.042 |\n| | acc\\_norm | **0.426 ± 0.048** | 0.296 ± 0.048 |\n| hendrycksTest-logical\\_fallacies | acc | 0.209 ± 0.032 | 0.258 ± 0.032 |\n| | acc\\_norm | 0.288 ± 0.036 | 0.264 ± 0.036 |\n| hendrycksTest-machine\\_learning | acc | 0.295 ± 0.043 | 0.250 ± 0.043 |\n| | acc\\_norm | 0.259 ± 0.042 | 0.259 ± 0.042 |\n| hendrycksTest-management | acc | 0.184 ± 0.038 | **0.311 ± 0.038** |\n| | acc\\_norm | 0.282 ± 0.045 | 0.330 ± 0.045 |\n| hendrycksTest-marketing | acc | 0.316 ± 0.030 | **0.432 ± 0.030** |\n| | acc\\_norm | 0.338 ± 0.031 | **0.440 ± 0.031** |\n| hendrycksTest-medical\\_genetics | acc | 0.300 ± 0.046 | 0.240 ± 0.046 |\n| | acc\\_norm | 0.370 ± 0.049 | 0.270 ± 0.049 |\n| hendrycksTest-miscellaneous | acc | 0.281 ± 0.016 | 0.323 ± 0.016 |\n| | acc\\_norm | 0.271 ± 0.016 | **0.328 ± 0.016** |\n| hendrycksTest-moral\\_disputes | acc | 0.286 ± 0.024 | 0.350 ± 0.024 |\n| | acc\\_norm | 0.355 ± 0.026 | 0.364 ± 0.026 |\n| hendrycksTest-moral\\_scenarios | acc | 0.234 ± 0.014 | 0.264 ± 0.014 |\n| | acc\\_norm | 0.273 ± 0.015 | 0.269 ± 0.015 |\n| hendrycksTest-nutrition | acc | 0.275 ± 0.026 | 0.307 ± 0.026 |\n| | acc\\_norm | 0.359 ± 0.027 | 
0.333 ± 0.027 |\n| hendrycksTest-philosophy | acc | 0.270 ± 0.025 | 0.305 ± 0.025 |\n| | acc\\_norm | 0.315 ± 0.026 | 0.322 ± 0.026 |\n| hendrycksTest-prehistory | acc | 0.256 ± 0.024 | **0.361 ± 0.024** |\n| | acc\\_norm | 0.216 ± 0.023 | **0.364 ± 0.023** |\n| hendrycksTest-professional\\_accounting | acc | 0.248 ± 0.026 | 0.230 ± 0.026 |\n| | acc\\_norm | 0.259 ± 0.026 | 0.220 ± 0.026 |\n| hendrycksTest-professional\\_law | acc | 0.267 ± 0.011 | 0.275 ± 0.011 |\n| | acc\\_norm | 0.300 ± 0.012 | 0.284 ± 0.012 |\n| hendrycksTest-professional\\_medicine | acc | 0.246 ± 0.026 | 0.290 ± 0.026 |\n| | acc\\_norm | 0.232 ± 0.026 | 0.298 ± 0.026 |\n| hendrycksTest-professional\\_psychology | acc | 0.258 ± 0.018 | 0.299 ± 0.018 |\n| | acc\\_norm | 0.253 ± 0.018 | **0.315 ± 0.018** |\n| hendrycksTest-public\\_relations | acc | 0.300 ± 0.044 | 0.364 ± 0.044 |\n| | acc\\_norm | 0.164 ± 0.035 | **0.373 ± 0.035** |\n| hendrycksTest-security\\_studies | acc | 0.339 ± 0.030 | 0.343 ± 0.030 |\n| | acc\\_norm | 0.286 ± 0.029 | 0.286 ± 0.029 |\n| hendrycksTest-sociology | acc | 0.269 ± 0.031 | **0.403 ± 0.031** |\n| | acc\\_norm | 0.264 ± 0.031 | **0.423 ± 0.031** |\n| hendrycksTest-us\\_foreign\\_policy | acc | 0.330 ± 0.047 | 0.390 ± 0.047 |\n| | acc\\_norm | 0.350 ± 0.048 | 0.390 ± 0.048 |\n| hendrycksTest-virology | acc | 0.313 ± 0.036 | 0.325 ± 0.036 |\n| | acc\\_norm | 0.331 ± 0.037 | 0.343 ± 0.037 |\n| hendrycksTest-world\\_religions | acc | 0.304 ± 0.035 | 0.316 ± 0.035 |\n| | acc\\_norm | 0.386 ± 0.037 | 0.339 ± 0.037 |\n| logiqa | acc | 0.201 ± 0.016 | **0.280 ± 0.016** |\n| | acc\\_norm | 0.281 ± 0.018 | 0.283 ± 0.018 |\n| mathqa | acc | 0.247 ± 0.008 | 0.248 ± 0.008 |\n| | acc\\_norm | 0.246 ± 0.008 | 0.239 ± 0.008 |\n| mnli | acc | 0.339 ± 0.005 | **0.729 ± 0.005** |\n| mnli\\_mismatched | acc | 0.338 ± 0.005 | **0.742 ± 0.005** |\n| mrpc | acc | 0.684 ± 0.023 | 0.701 ± 0.023 |\n| | f1 | 0.812 ± 0.016 | 0.820 ± 0.016 |\n| multirc | acc | **0.016 ± 0.004** | 0.004 ± 0.004 |\n| openbookqa | acc | 0.234 ± 0.019 | 0.248 ± 0.019 |\n| | acc\\_norm | 0.332 ± 0.021 | 0.318 ± 0.021 |\n| piqa | acc | 0.721 ± 0.010 | 0.713 ± 0.010 |\n| | acc\\_norm | 0.729 ± 0.010 | 0.708 ± 0.010 |\n| qnli | acc | 0.509 ± 0.007 | **0.761 ± 0.007** |\n| qqp | acc | 0.368 ± 0.002 | **0.843 ± 0.002** |\n| | f1 | 0.538 ± 0.003 | **0.789 ± 0.003** |\n| race | acc | 0.353 ± 0.015 | 0.362 ± 0.015 |\n| record | f1 | **0.845 ± 0.004** | 0.779 ± 0.004 |\n| | em | **0.838 ± 0.004** | 0.770 ± 0.004 |\n| rte | acc | 0.520 ± 0.030 | **0.729 ± 0.030** |\n| sciq | acc | 0.893 ± 0.010 | **0.919 ± 0.010** |\n| | acc\\_norm | 0.828 ± 0.012 | **0.913 ± 0.012** |\n| sst | acc | 0.789 ± 0.014 | **0.862 ± 0.014** |\n| webqs | acc | 0.016 ± 0.003 | **0.071 ± 0.003** |\n| wic | acc | 0.500 ± 0.020 | 0.517 ± 0.020 |\n| winogrande | acc | 0.575 ± 0.014 | 0.570 ± 0.014 |\n| wnli | acc | 0.310 ± 0.055 | **0.563 ± 0.055** |\n| wsc | acc | 0.365 ± 0.047 | 0.365 ± 0.047 |\n| lambada | ppl | **5.626 ± 0.139** | 27.796 ± 0.139 |\n| | acc | **0.622 ± 0.007** | 0.387 ± 0.007 |\n| pubmedqa | acc | **0.565 ± 0.016** | 0.496 ± 0.016 |\n| coqa | f1 | 0.604 ± 0.018 | 0.598 ± 0.018 |\n| | em | 0.479 ± 0.020 | 0.480 ± 0.020 |\n| drop | em | **0.026 ± 0.002** | 0.001 ± 0.002 |\n| | f1 | **0.083 ± 0.002** | 0.033 ± 0.002 |\n| math\\_algebra | acc | 0.008 ± 0.003 | **0.025 ± 0.003** |\n| math\\_geometry | acc | 0.002 ± 0.002 | **0.021 ± 0.002** |\n| math\\_intermediate\\_algebra | acc | 0.004 ± 0.002 | **0.025 ± 0.002** |\n| math\\_num\\_theory | acc | 0.019 ± 0.006 
| **0.046 ± 0.006** |\n| math\\_prealgebra | acc | 0.001 ± 0.001 | **0.039 ± 0.001** |\n| math\\_precalc | acc | 0.005 ± 0.003 | 0.016 ± 0.003 |\n\n\n\nOne shot[#](#one-shot)\n----------------------\n\n\n\n\n\n| Task | Metric | 2.7B | Tuned |\n| --- | --- | --- | --- |\n| anli\\_r1 | acc | 0.331 ± 0.015 | **0.443 ± 0.015** |\n| anli\\_r2 | acc | 0.307 ± 0.015 | **0.373 ± 0.015** |\n| anli\\_r3 | acc | 0.343 ± 0.014 | **0.423 ± 0.014** |\n| arc\\_challenge | acc | 0.302 ± 0.013 | 0.292 ± 0.013 |\n| | acc\\_norm | 0.323 ± 0.014 | 0.323 ± 0.014 |\n| arc\\_easy | acc | **0.634 ± 0.010** | 0.567 ± 0.010 |\n| | acc\\_norm | **0.622 ± 0.010** | 0.562 ± 0.010 |\n| boolq | acc | 0.536 ± 0.009 | **0.620 ± 0.009** |\n| cb | acc | 0.429 ± 0.067 | 0.411 ± 0.067 |\n| cola | mcc | 0.001 ± 0.031 | 0.022 ± 0.031 |\n| copa | acc | 0.770 ± 0.042 | 0.780 ± 0.042 |\n| ethics\\_cm | acc | 0.508 ± 0.008 | **0.625 ± 0.008** |\n| ethics\\_deontology | acc | 0.511 ± 0.008 | **0.683 ± 0.008** |\n| ethics\\_justice | acc | 0.515 ± 0.010 | **0.604 ± 0.010** |\n| ethics\\_utilitarianism | acc | 0.490 ± 0.007 | **0.536 ± 0.007** |\n| ethics\\_virtue | acc | 0.726 ± 0.006 | **0.805 ± 0.006** |\n| headqa | acc | 0.230 ± 0.008 | 0.228 ± 0.008 |\n| | acc\\_norm | 0.270 ± 0.008 | 0.275 ± 0.008 |\n| hellaswag | acc | **0.428 ± 0.005** | 0.386 ± 0.005 |\n| | acc\\_norm | **0.557 ± 0.005** | 0.494 ± 0.005 |\n| hendrycksTest-abstract\\_algebra | acc | 0.220 ± 0.042 | 0.270 ± 0.042 |\n| | acc\\_norm | 0.290 ± 0.046 | 0.260 ± 0.046 |\n| hendrycksTest-anatomy | acc | 0.289 ± 0.039 | 0.304 ± 0.039 |\n| | acc\\_norm | 0.230 ± 0.036 | 0.289 ± 0.036 |\n| hendrycksTest-astronomy | acc | 0.204 ± 0.033 | **0.322 ± 0.033** |\n| | acc\\_norm | 0.303 ± 0.037 | 0.322 ± 0.037 |\n| hendrycksTest-business\\_ethics | acc | 0.290 ± 0.046 | 0.320 ± 0.046 |\n| | acc\\_norm | 0.280 ± 0.045 | 0.280 ± 0.045 |\n| hendrycksTest-clinical\\_knowledge | acc | 0.287 ± 0.028 | 0.351 ± 0.028 |\n| | acc\\_norm | 0.328 ± 0.029 | 0.358 ± 0.029 |\n| hendrycksTest-college\\_biology | acc | 0.215 ± 0.034 | 0.271 ± 0.034 |\n| | acc\\_norm | 0.194 ± 0.033 | 0.271 ± 0.033 |\n| hendrycksTest-college\\_chemistry | acc | 0.300 ± 0.046 | 0.330 ± 0.046 |\n| | acc\\_norm | 0.340 ± 0.048 | 0.320 ± 0.048 |\n| hendrycksTest-college\\_computer\\_science | acc | 0.330 ± 0.047 | 0.390 ± 0.047 |\n| | acc\\_norm | 0.310 ± 0.046 | 0.360 ± 0.046 |\n| hendrycksTest-college\\_mathematics | acc | 0.200 ± 0.040 | 0.280 ± 0.040 |\n| | acc\\_norm | 0.220 ± 0.042 | 0.270 ± 0.042 |\n| hendrycksTest-college\\_medicine | acc | 0.254 ± 0.033 | 0.295 ± 0.033 |\n| | acc\\_norm | 0.260 ± 0.033 | 0.283 ± 0.033 |\n| hendrycksTest-college\\_physics | acc | 0.304 ± 0.046 | 0.284 ± 0.046 |\n| | acc\\_norm | 0.333 ± 0.047 | 0.304 ± 0.047 |\n| hendrycksTest-computer\\_security | acc | 0.320 ± 0.047 | 0.270 ± 0.047 |\n| | acc\\_norm | 0.320 ± 0.047 | 0.290 ± 0.047 |\n| hendrycksTest-conceptual\\_physics | acc | 0.268 ± 0.029 | 0.349 ± 0.029 |\n| | acc\\_norm | 0.255 ± 0.029 | **0.345 ± 0.029** |\n| hendrycksTest-econometrics | acc | 0.298 ± 0.043 | 0.272 ± 0.043 |\n| | acc\\_norm | 0.298 ± 0.043 | 0.263 ± 0.043 |\n| hendrycksTest-electrical\\_engineering | acc | 0.338 ± 0.039 | 0.324 ± 0.039 |\n| | acc\\_norm | 0.290 ± 0.038 | 0.303 ± 0.038 |\n| hendrycksTest-elementary\\_mathematics | acc | 0.262 ± 0.023 | 0.275 ± 0.023 |\n| | acc\\_norm | 0.294 ± 0.023 | 0.275 ± 0.023 |\n| hendrycksTest-formal\\_logic | acc | 0.310 ± 0.041 | 0.310 ± 0.041 |\n| | acc\\_norm | 0.294 ± 0.041 | 0.270 ± 0.041 |\n| 
hendrycksTest-global\\_facts | acc | 0.200 ± 0.040 | 0.290 ± 0.040 |\n| | acc\\_norm | 0.210 ± 0.041 | 0.290 ± 0.041 |\n| hendrycksTest-high\\_school\\_biology | acc | 0.265 ± 0.025 | **0.342 ± 0.025** |\n| | acc\\_norm | 0.287 ± 0.026 | 0.342 ± 0.026 |\n| hendrycksTest-high\\_school\\_chemistry | acc | 0.251 ± 0.031 | 0.232 ± 0.031 |\n| | acc\\_norm | 0.291 ± 0.032 | 0.227 ± 0.032 |\n| hendrycksTest-high\\_school\\_computer\\_science | acc | 0.260 ± 0.044 | 0.280 ± 0.044 |\n| | acc\\_norm | 0.300 ± 0.046 | 0.260 ± 0.046 |\n| hendrycksTest-high\\_school\\_european\\_history | acc | 0.267 ± 0.035 | 0.309 ± 0.035 |\n| | acc\\_norm | 0.315 ± 0.036 | 0.321 ± 0.036 |\n| hendrycksTest-high\\_school\\_geography | acc | 0.227 ± 0.030 | **0.348 ± 0.030** |\n| | acc\\_norm | 0.278 ± 0.032 | 0.354 ± 0.032 |\n| hendrycksTest-high\\_school\\_government\\_and\\_politics | acc | 0.290 ± 0.033 | 0.332 ± 0.033 |\n| | acc\\_norm | 0.290 ± 0.033 | 0.321 ± 0.033 |\n| hendrycksTest-high\\_school\\_macroeconomics | acc | 0.279 ± 0.023 | 0.305 ± 0.023 |\n| | acc\\_norm | 0.267 ± 0.022 | 0.285 ± 0.022 |\n| hendrycksTest-high\\_school\\_mathematics | acc | 0.252 ± 0.026 | 0.278 ± 0.026 |\n| | acc\\_norm | 0.296 ± 0.028 | 0.304 ± 0.028 |\n| hendrycksTest-high\\_school\\_microeconomics | acc | 0.265 ± 0.029 | 0.256 ± 0.029 |\n| | acc\\_norm | 0.324 ± 0.030 | 0.273 ± 0.030 |\n| hendrycksTest-high\\_school\\_physics | acc | 0.205 ± 0.033 | 0.205 ± 0.033 |\n| | acc\\_norm | 0.232 ± 0.034 | 0.212 ± 0.034 |\n| hendrycksTest-high\\_school\\_psychology | acc | 0.251 ± 0.019 | **0.328 ± 0.019** |\n| | acc\\_norm | 0.270 ± 0.019 | **0.325 ± 0.019** |\n| hendrycksTest-high\\_school\\_statistics | acc | 0.319 ± 0.032 | 0.241 ± 0.032 |\n| | acc\\_norm | 0.319 ± 0.032 | 0.245 ± 0.032 |\n| hendrycksTest-high\\_school\\_us\\_history | acc | 0.265 ± 0.031 | 0.221 ± 0.031 |\n| | acc\\_norm | 0.260 ± 0.031 | 0.230 ± 0.031 |\n| hendrycksTest-high\\_school\\_world\\_history | acc | 0.283 ± 0.029 | **0.371 ± 0.029** |\n| | acc\\_norm | 0.266 ± 0.029 | **0.380 ± 0.029** |\n| hendrycksTest-human\\_aging | acc | 0.296 ± 0.031 | 0.296 ± 0.031 |\n| | acc\\_norm | 0.274 ± 0.030 | 0.291 ± 0.030 |\n| hendrycksTest-human\\_sexuality | acc | 0.351 ± 0.042 | 0.290 ± 0.042 |\n| | acc\\_norm | 0.282 ± 0.039 | 0.290 ± 0.039 |\n| hendrycksTest-international\\_law | acc | 0.248 ± 0.039 | 0.322 ± 0.039 |\n| | acc\\_norm | 0.347 ± 0.043 | 0.331 ± 0.043 |\n| hendrycksTest-jurisprudence | acc | 0.269 ± 0.043 | 0.296 ± 0.043 |\n| | acc\\_norm | 0.370 ± 0.047 | 0.296 ± 0.047 |\n| hendrycksTest-logical\\_fallacies | acc | 0.202 ± 0.032 | 0.276 ± 0.032 |\n| | acc\\_norm | 0.270 ± 0.035 | 0.258 ± 0.035 |\n| hendrycksTest-machine\\_learning | acc | 0.295 ± 0.043 | 0.250 ± 0.043 |\n| | acc\\_norm | 0.330 ± 0.045 | 0.223 ± 0.045 |\n| hendrycksTest-management | acc | 0.282 ± 0.045 | 0.320 ± 0.045 |\n| | acc\\_norm | 0.272 ± 0.044 | 0.350 ± 0.044 |\n| hendrycksTest-marketing | acc | 0.303 ± 0.030 | **0.415 ± 0.030** |\n| | acc\\_norm | 0.329 ± 0.031 | **0.423 ± 0.031** |\n| hendrycksTest-medical\\_genetics | acc | 0.330 ± 0.047 | 0.300 ± 0.047 |\n| | acc\\_norm | 0.420 ± 0.050 | 0.300 ± 0.050 |\n| hendrycksTest-miscellaneous | acc | 0.319 ± 0.017 | 0.318 ± 0.017 |\n| | acc\\_norm | 0.319 ± 0.017 | 0.313 ± 0.017 |\n| hendrycksTest-moral\\_disputes | acc | 0.298 ± 0.025 | 0.341 ± 0.025 |\n| | acc\\_norm | 0.318 ± 0.025 | 0.344 ± 0.025 |\n| hendrycksTest-moral\\_scenarios | acc | 0.267 ± 0.015 | 0.240 ± 0.015 |\n| | acc\\_norm | 0.265 ± 0.015 | 0.238 ± 0.015 |\n| 
hendrycksTest-nutrition | acc | 0.278 ± 0.026 | 0.330 ± 0.026 |\n| | acc\\_norm | 0.337 ± 0.027 | 0.350 ± 0.027 |\n| hendrycksTest-philosophy | acc | 0.251 ± 0.025 | 0.315 ± 0.025 |\n| | acc\\_norm | 0.293 ± 0.026 | 0.325 ± 0.026 |\n| hendrycksTest-prehistory | acc | 0.244 ± 0.024 | **0.352 ± 0.024** |\n| | acc\\_norm | 0.250 ± 0.024 | **0.361 ± 0.024** |\n| hendrycksTest-professional\\_accounting | acc | **0.287 ± 0.027** | 0.213 ± 0.027 |\n| | acc\\_norm | 0.248 ± 0.026 | 0.216 ± 0.026 |\n| hendrycksTest-professional\\_law | acc | 0.273 ± 0.011 | 0.267 ± 0.011 |\n| | acc\\_norm | 0.269 ± 0.011 | 0.269 ± 0.011 |\n| hendrycksTest-professional\\_medicine | acc | 0.301 ± 0.028 | 0.301 ± 0.028 |\n| | acc\\_norm | 0.268 ± 0.027 | 0.327 ± 0.027 |\n| hendrycksTest-professional\\_psychology | acc | 0.279 ± 0.018 | 0.304 ± 0.018 |\n| | acc\\_norm | 0.284 ± 0.018 | 0.310 ± 0.018 |\n| hendrycksTest-public\\_relations | acc | 0.327 ± 0.045 | 0.345 ± 0.045 |\n| | acc\\_norm | 0.309 ± 0.044 | 0.336 ± 0.044 |\n| hendrycksTest-security\\_studies | acc | 0.265 ± 0.028 | 0.331 ± 0.028 |\n| | acc\\_norm | 0.208 ± 0.026 | **0.290 ± 0.026** |\n| hendrycksTest-sociology | acc | 0.269 ± 0.031 | **0.393 ± 0.031** |\n| | acc\\_norm | 0.249 ± 0.031 | **0.383 ± 0.031** |\n| hendrycksTest-us\\_foreign\\_policy | acc | 0.290 ± 0.046 | 0.320 ± 0.046 |\n| | acc\\_norm | 0.320 ± 0.047 | 0.320 ± 0.047 |\n| hendrycksTest-virology | acc | 0.289 ± 0.035 | 0.349 ± 0.035 |\n| | acc\\_norm | 0.265 ± 0.034 | 0.355 ± 0.034 |\n| hendrycksTest-world\\_religions | acc | 0.374 ± 0.037 | 0.345 ± 0.037 |\n| | acc\\_norm | 0.409 ± 0.038 | 0.351 ± 0.038 |\n| logiqa | acc | 0.255 ± 0.017 | 0.273 ± 0.017 |\n| | acc\\_norm | 0.272 ± 0.017 | 0.280 ± 0.017 |\n| mathqa | acc | 0.256 ± 0.008 | 0.253 ± 0.008 |\n| | acc\\_norm | 0.258 ± 0.008 | 0.240 ± 0.008 |\n| mnli | acc | 0.338 ± 0.005 | **0.801 ± 0.005** |\n| mnli\\_mismatched | acc | 0.362 ± 0.005 | **0.811 ± 0.005** |\n| mrpc | acc | 0.571 ± 0.025 | **0.750 ± 0.025** |\n| | f1 | 0.689 ± 0.022 | **0.841 ± 0.022** |\n| multirc | acc | **0.047 ± 0.007** | 0.012 ± 0.007 |\n| openbookqa | acc | 0.222 ± 0.019 | 0.268 ± 0.019 |\n| | acc\\_norm | 0.346 ± 0.021 | 0.344 ± 0.021 |\n| piqa | acc | 0.726 ± 0.010 | 0.714 ± 0.010 |\n| | acc\\_norm | 0.736 ± 0.010 | 0.718 ± 0.010 |\n| qnli | acc | 0.504 ± 0.007 | **0.788 ± 0.007** |\n| qqp | acc | 0.534 ± 0.002 | **0.847 ± 0.002** |\n| | f1 | 0.372 ± 0.004 | **0.793 ± 0.004** |\n| race | acc | 0.352 ± 0.015 | 0.355 ± 0.015 |\n| record | f1 | **0.843 ± 0.004** | 0.778 ± 0.004 |\n| | em | **0.835 ± 0.004** | 0.771 ± 0.004 |\n| rte | acc | 0.491 ± 0.030 | **0.747 ± 0.030** |\n| sciq | acc | 0.930 ± 0.008 | 0.939 ± 0.008 |\n| | acc\\_norm | 0.938 ± 0.008 | 0.935 ± 0.008 |\n| sst | acc | 0.492 ± 0.017 | **0.916 ± 0.017** |\n| webqs | acc | 0.054 ± 0.005 | **0.095 ± 0.005** |\n| wic | acc | 0.472 ± 0.020 | **0.539 ± 0.020** |\n| winogrande | acc | 0.582 ± 0.014 | 0.571 ± 0.014 |\n| wnli | acc | 0.380 ± 0.058 | **0.549 ± 0.058** |\n| wsc | acc | 0.365 ± 0.047 | 0.365 ± 0.047 |\n| lambada | ppl | **6.423 ± 0.162** | 20.150 ± 0.162 |\n| | acc | **0.576 ± 0.007** | 0.394 ± 0.007 |\n| pubmedqa | acc | **0.529 ± 0.016** | 0.479 ± 0.016 |\n| coqa | f1 | 0.606 ± 0.018 | 0.581 ± 0.018 |\n| | em | 0.484 ± 0.020 | 0.472 ± 0.020 |\n| drop | em | 0.001 ± 0.000 | 0.001 ± 0.000 |\n| | f1 | **0.039 ± 0.001** | 0.031 ± 0.001 |\n| math\\_algebra | acc | 0.016 ± 0.004 | 0.024 ± 0.004 |\n| math\\_counting\\_and\\_prob | acc | 0.023 ± 0.007 | 0.030 ± 0.007 |\n| math\\_geometry | 
acc | 0.006 ± 0.004 | 0.021 ± 0.004 |\n| math\\_intermediate\\_algebra | acc | 0.020 ± 0.005 | 0.029 ± 0.005 |\n| math\\_num\\_theory | acc | 0.037 ± 0.008 | 0.039 ± 0.008 |\n| math\\_prealgebra | acc | 0.023 ± 0.005 | **0.041 ± 0.005** |\n| math\\_precalc | acc | 0.015 ± 0.005 | 0.022 ± 0.005 |\n\n\n\nThe model can be downloaded [here](https://huggingface.co/lg/openinstruct_1k1), though I don't recommend using it for anything.", "date_published": "2021-05-24T00:00:00Z", "authors": ["Leo Gao"], "summaries": []} +{"id": "d076137eff9d09c796df705336b9e96a", "title": "Activation Function Ablation", "url": "https://blog.eleuther.ai/activation-fns/", "source": "eleuther.ai", "source_type": "blog", "text": "This was an ablation of activation functions on GPT-like models of ~100M params that I ran ages ago. Each model was run for 10k iters, which isn't very long. My original goal was to show that activation function doesn't matter than much, but to do so I'd need to run a bunch more runs to get variance and show no statistical significance, and I don't plan on running a more exhaustive version of this experiment any time soon. So, I'm just dumping these results here in case anyone has any use for them. All the activation definitions are [here](https://github.com/EleutherAI/gpt-neo/blob/master/models/activations.py#L44).\n\n\n\n\n\n| Name | Pile Validation BPB | LAMBADA acc | LAMBADA ppl |\n| --- | --- | --- | --- |\n| softsign | 1.1485 | 34.3 | 81.32 |\n| ReLU | 1.1482 | 34.3 | 82.01 |\n| spike2 | 1.1480 | 34.4 | 83.13 |\n| selu | 1.1485 | 34.5 | 83.32 |\n| elish | 1.1492 | 33.9 | 84.04 |\n| tanhexp | 1.1474 | 33.7 | 84.06 |\n| sigmoid | 1.1484 | 33.9 | 85.20 |\n| tanhshrink | 1.1483 | 33.9 | 85.42 |\n| maxtanh | 1.1479 | 33.7 | 85.53 |\n| roottanh | 1.1485 | 33.4 | 86.00 |\n| softplusmone | 1.1488 | 34.1 | 86.21 |\n| logsoftmax | 1.1492 | 34.2 | 86.29 |\n| ELU | 1.1496 | 33.8 | 86.37 |\n| Swish | 1.1482 | 33.7 | 86.42 |\n| softmax | 1.1491 | 33.2 | 86.74 |\n| square\\_relax | 1.1484 | 33.5 | 86.92 |\n| lisht | 1.1500 | 33.8 | 87.17 |\n| GELU | 1.1453 | 34.0 | 87.84 |\n| abs | 1.1489 | 33.5 | 87.96 |\n| tanh | 1.1481 | 33.2 | 89.28 |\n| Mish | 1.1482 | 33.6 | 89.84 |\n| triangle\\_relax | 1.1502 | 33.7 | 89.91 |\n| seagull | 1.1487 | 33.3 | 90.08 |\n| maxsig | 1.1480 | 33.3 | 90.23 |\n| softplus | 1.1460 | 33.1 | 90.74 |\n| minsin | 1.1498 | 33.3 | 91.18 |\n| snake | 1.1484 | 33.1 | 91.93 |\n| cosid | 1.1490 | 33.3 | 92.99 |\n| spike | 1.1498 | 33.3 | 93.78 |\n| bipolarsigmoid | 1.1513 | 32.8 | 96.73 |", "date_published": "2021-05-24T00:00:00Z", "authors": ["Leo Gao"], "summaries": []} +{"id": "a60709e6682762b402ba275da506df97", "title": "Rotary Embeddings: A Relative Revolution", "url": "https://blog.eleuther.ai/rotary-embeddings/", "source": "eleuther.ai", "source_type": "blog", "text": "TL;DR:[#](#tldr)\n----------------\n\n\nRotary Positional Embedding (RoPE) is a new type of position encoding that unifies absolute and relative approaches. Developed by Jianlin Su in a series of blog posts earlier this year [12, 13] and in a new preprint [14], it has already garnered widespread interest in some Chinese NLP circles. This post walks through the method as we understand it, with the goal of bringing it to the attention of the wider academic community. 
In general we have found that across a large suite of setups including regular, linear, and local self-attention, it **either matches or surpasses all other methods currently available for injecting positional information into transformers.**\n\n\nWhat's the Problem?[#](#whats-the-problem)\n------------------------------------------\n\n\nSince Vaswani et al., 2017 [16] there have been many schemes introduced for encoding positional information in transformers. When applying self-attention to a given domain, the choice of position encoding typically involves tradeoffs between simplicity, flexibility, and efficiency. For example, learned absolute positional encoding is very simple, but may not generalize and are not always particularly meaningful due to the common practices [1, 3, 9, 15] of packing short sentences and phrases together in a single context and breaking up sentences across contexts.\n\n\nAnother major limitation of existing methods is that they do not work with efficient transformers. Methods like T5's relative positional bias [10] require constructing the full $N \\times N$ attention matrix between positions, which is not possible when using many of the efficient alternatives to softmax attention, including kernelized variants like FAVOR+ [2].\n\n\nA principled, easy to implement, and generally-applicable method for relative position encoding---one that works for both vanilla and “efficient” attention---is of great interest. Rotary Positional Embedding (RoPE) is designed to address this need.\n\n\nWhat's the Solution?[#](#whats-the-solution)\n--------------------------------------------\n\n\nIn this section we introduce and derive the rotary positional embedding. We begin with discussing the intuition, before presenting a full derivation.\n\n\n### Intuition[#](#intuition)\n\n\nWe would like to find a positional encoding function $f(\\mathbf{x}, \\ell)$ for an item $\\mathbf{x}$ and its position $\\ell$ such that, for two items $\\mathbf{q}$ and $\\mathbf{k}$ at positions $m$ and $n$, the inner product between $f(\\mathbf{q}, m)$ and $f(\\mathbf{k}, n)$ is sensitive only to the values of $\\mathbf{q}$, $\\mathbf{k}$, and their relative position $m-n$. This is related in spirit to the kernel trick: we are searching for a feature map such that its kernel has certain properties. A key piece of information is the geometric definition of the dot product between Euclidean vectors: $\\mathbf{q} \\cdot \\mathbf{k} = \\lVert \\mathbf{q} \\rVert \\lVert \\mathbf{k} \\rVert \\cos(\\theta\\_{qk})$\n\n\nIn plain English, the dot product between two vectors is a function of the magnitude of individual vectors and the angle between them.\nWith this in mind, the intuition behind RoPE is that we can represent the token embeddings as complex numbers and their positions as pure rotations that we apply to them. If we shift both the query and key by the same amount, changing absolute position but not relative position, this will lead both representations to be additionally rotated in the same manner---as we will see in the derivation---thus the angle between them will remain unchanged and thus the dot product will also remain unchanged. By exploiting the nature of rotations, the dot product used in self-attention will have the property we are looking for, preserving relative positional information while discarding absolute position.\n\n\nThe following is an example illustrating the core idea of RoPE—a more rigorous derivation is presented in a subsequent section. 
Some arbitrary $0 < \\varepsilon \\leq \\frac \\pi {2N}$ is chosen, where $N$ is the maximum sequence length. When viewed elementwise on $\\mathbf{q}$ and $\\mathbf{k}$, with $j$ as the element index, RoPE can be viewed as follows:\n\n\n$$\n\\begin{align}\n\\mathrm{RoPE}(x, m) &= xe^{mi\\varepsilon} \\\\\n\\langle \\mathrm{RoPE}(q\\_j, m), \\mathrm{RoPE}(k\\_j, n)\\rangle &= \\langle q\\_j e^{mi\\varepsilon}, k\\_j e^{ni\\varepsilon} \\rangle \\\\\n&= q\\_j k\\_j e^{mi\\varepsilon} \\overline{e^{ni\\varepsilon}} \\\\\n&= q\\_j k\\_j e^{(m - n)i\\varepsilon} \\\\\n&= \\mathrm{RoPE}(q\\_j k\\_j, m - n)\n\\end{align}\n$$\n\n\n### Visual Intuition[#](#visual-intuition)\n\n\n\n\n\nA quarter-waveplate can change the polarization of an electromagnetic wave. (This figure is interactive, try dragging the cube!)\n\n\n\n\nTo see how relative position might be preserved in this transformation, we can look to an analogous situation in classical electrodynamics.\n\n\nWe imagine a linearly polarized electromagnetic wave that is sent through a quarter-wave plate at an angle of 45 degrees. This takes the incoming wave and shifts its phase on only one principal dimension as it travels. When the wave emerges from the waveplate, the polarization is no longer linear---it has become circular through a shift equal to quarter of a period.\n\n\nAs the wave travels through the waveplate, we can see how the magnitude of the wave is preserved. We can also better see how the relative position may be encoded as the angle between subsequent timesteps: the angle between timesteps, and therefore distance along the axis of travel, is constant. This means the positional information must be orthogonal to the amplitude in the modulated wave.\n\n\n### Derivation[#](#derivation)\n\n\nWe begin with absolute positional information: for each token, we know where it is in the sequence. However, dot products (and therefore attention) do not preserve absolute positional information, so if we encode that positional information in the absolute position of the embeddings, we will lose a significant amount of information. On the other hand, dot products do preserve relative position, so if we can encode the absolute positional information into the token embeddings in a way that only leverages relative positional information, that will be preserved by the attention function.\n\n\nWhile it is common in machine learning to restrict our attention to the real numbers, for rotary embeddings it is mathematically more convenient to use the complex numbers as the base field for our space. Instead of working in the usual $\\mathbb{R}^d$, we will work in $\\mathbb{C}^{d/2}$ by considering consecutive pairs of elements of the query and key vectors to be a single complex number. Specifically, instead of viewing $\\mathbf{q}=(q\\_1,q\\_2,q\\_3,q\\_4,\\ldots,q\\_{d})$ as a $d$-dimensional real vector we view it as $\\mathbf{q}=(q\\_1+iq\\_2, q\\_3+iq\\_4,\\ldots q\\_{d-1} + iq\\_{d})\\in\\mathbb{C}^{d/2}$. As we will see, casting it in this fashion will make discussing the rotary embeddings easier. If $d$ is odd, we can pad it with a dummy coordinate to ensure things line up correctly. Alternatively, we can simply increase $d$ by one.\n\n\nLet $\\mathbf{q}$ and $\\mathbf{k}$ be query and key vectors respectively and let $m$ and $n$ be the absolute positions of the corresponding tokens. 
Let $f(\\mathbf{x}, \\ell)$ be the function that takes the token embedding $\\mathbf{x}$ in position $\\ell$ and outputs a new embedding that contains (in some fashion) the relative positional information. Our goal is to find a \"nice\" function $f$ that does this. Once the positional information is encoded, we need to compute the inner product like so:\n\n\n$$\\label{fg}\\langle f(\\mathbf{q}, m),f(\\mathbf{k},n) \\rangle = g(\\mathbf{q}, \\mathbf{k}, m - n)$$\n\n\nwhere $g(\\mathbf{q},\\mathbf{k},m-n)$ now represents the pre-softmax logit of the usual attention equation. Writing these three functions in exponential form gives\n\n\n$$\n\\begin{align\\*}\nf(\\mathbf{q}, m) &= R\\_f(\\mathbf{q}, m)e^{i\\Theta\\_f(\\mathbf{q}, m)}\\\\\nf(\\mathbf{k}, n) &= R\\_f(\\mathbf{k}, n)e^{i\\Theta\\_f(\\mathbf{k}, n)}\\\\\ng(\\mathbf{q}, \\mathbf{k}, m - n) &= R\\_g(\\mathbf{q}, \\mathbf{k}, m - n)e^{i\\Theta\\_g(\\mathbf{q}, \\mathbf{k}, m - n)}\n\\end{align\\*}\n$$\n\n\nComputing the inner product and equating corresponding components yields\n\n\n$$\n\\begin{align\\*}\nR\\_f(\\mathbf{q}, m) R\\_f(\\mathbf{k}, n) &= R\\_g(\\mathbf{q}, \\mathbf{k}, m - n)\\\\\n\\Theta\\_f(\\mathbf{q}, m) - \\Theta\\_f(\\mathbf{k}, n) &= \\Theta\\_g(\\mathbf{q}, \\mathbf{k}, m - n)\\\\\n\\end{align\\*}\n$$\n\n\nSubstituting $m=n$ and applying the initial condition $f(\\mathbf{x}, 0) = \\mathbf{x}$ gives\n\n\n$$R\\_f(\\mathbf{q}, m) R\\_f(\\mathbf{k}, m) = R\\_g(\\mathbf{q}, \\mathbf{k}, 0) = R\\_f(\\mathbf{q}, 0) R\\_f(\\mathbf{k}, 0) = \\mathbf{q}\\mathbf{k}$$\n\n\nAs the prior equation is valid for all $m$, it means that $R\\_f$ is independent of the value of $m$, so we can set $R\\_f(\\mathbf{x}, y) = \\mathbf{x}$. Similarly, if we denote $\\Theta(\\mathbf{x}) = \\Theta\\_f(\\mathbf{x}, 0)$ we obtain $$\\Theta\\_f(\\mathbf{q}, m) - \\Theta\\_f(\\mathbf{k}, m) = \\Theta\\_g(\\mathbf{q}, \\mathbf{k}, 0) = \\Theta\\_f(\\mathbf{q}, 0) - \\Theta\\_f(\\mathbf{k}, 0) = \\Theta(\\mathbf{q}) - \\Theta(\\mathbf{k})$$ which implies that $\\Theta\\_f(\\mathbf{q}, m) - \\Theta(\\mathbf{q}) = \\Theta\\_f(\\mathbf{k}, m) - \\Theta(\\mathbf{k})$ for all $\\mathbf{q},\\mathbf{k},m$. This allows us to decompose $\\Theta\\_f$ as $\\Theta\\_f(\\mathbf{x}, y) = \\Theta(\\mathbf{x}) + \\varphi(y)$. Examining the case of $m = n + 1$ reveals that\n\n\n$$\\varphi(m)-\\varphi(m-1) = \\Theta\\_g(\\mathbf{q}, \\mathbf{k}, 1) + \\Theta(\\mathbf{q}) - \\Theta(\\mathbf{k})$$\n\n\nSince the right-hand side does not depend on $m$, the left hand side must not either and so $\\varphi$ is an arithmetic progression. Setting the initial values $\\varphi(0)=0$ and $\\varphi(1)=\\theta$, we have $\\varphi(m)=m\\theta$.\n\n\nPutting all of these pieces together, we get the final formula for the rotary positional embedding:\n\n\n$$f(\\mathbf{q}, m) = R\\_f(\\mathbf{q}, m)e^{i\\Theta\\_f(\\mathbf{q}, m)}=\\mathbf{q}e^{i(\\Theta(\\mathbf{q})+m\\mathbf{\\theta})} = \\sum\\_{j=1}^{d/2} q\\_je^{im\\theta\\_j} \\vec{e\\_j}$$\n\n\nand likewise for $\\mathbf{k}$. 
Since computers tend to like real numbers and matrices more than complex numbers, it's convenient to convert this expression into the matrix equation\n\n\n$$\nf(\mathbf{q}, m) =\n\begin{pmatrix}\nM\_1 & & & \\\n& M\_2 & & \\\n& & \ddots & \\\n& & & M\_{d/2}\n\end{pmatrix}\n\begin{pmatrix}\nq\_1\\\nq\_2\\\n\vdots\\\nq\_d\n\end{pmatrix} = \mathbf{\Theta\_m Q\_m} = \mathbf{\Theta\_m W\_q X\_m}\n$$\n\n\nwhere $M\_j=\begin{pmatrix}\cos m\theta\_j & -\sin m\theta\_j \\ \sin m\theta\_j & \cos m\theta\_j\end{pmatrix}$, $\mathbf{\Theta\_m}$ is the block-diagonal rotation matrix, $\mathbf{W\_q}$ is the learned query weight matrix, and $\mathbf{X\_m}$ is the embedding of the $m$-th token. Again, we also have the corresponding equation for $\mathbf{k}$.\n\n\n### Extension to multiple dimensions[#](#extension-to-multiple-dimensions)\n\n\nRoPE can be extended to the multidimensional case with relative ease. To represent two dimensions, two independent 1-dimensional rotary embeddings can be used. To implement this, we can split each of $\mathbf{q}$ and $\mathbf{k}$ in half and apply the rotary embedding piece-wise as follows:\n\n\n$$\begin{align\*}\n\langle f(\mathbf{q}, m, i),f(\mathbf{k}, n, j) \rangle &= \langle f\_1(\mathbf{q}\_{:d/2}, m),f\_1(\mathbf{k}\_{:d/2}, n) \rangle + \langle f\_2(\mathbf{q}\_{d/2:}, i),f\_2(\mathbf{k}\_{d/2:}, j) \rangle \\\n&= g\_1(\mathbf{q}\_{:d/2}, \mathbf{k}\_{:d/2}, m - n) + g\_2(\mathbf{q}\_{d/2:}, \mathbf{k}\_{d/2:}, i - j) \\\n&= g(\mathbf{q}, \mathbf{k}, m - n, i - j)\n\end{align\*}$$\n\n\nThis formulation can also be further extended to data of an arbitrary number of dimensions. This sort of multi-dimensional relative coding would let us, for example, implement relative timing and relative pitch embeddings similar to Music Transformer [4] in a drastically simpler manner. More generally, we believe there is potentially a large class of invariances that first-principles positional codes like RoPE may enable us to capture.\n\n\n### How is this different from the sinusoidal embeddings used in \"Attention is All You Need\"?[#](#how-is-this-different-from-the-sinusoidal-embeddings-used-in-attention-is-all-you-need)\n\n\nA response many of us at EleutherAI had when first coming across this was \"how does this differ from sinusoidal embeddings,\" so we feel it is worth discussing this comparison. There are two ways that rotary embeddings are different from sinusoidal embeddings:\n\n\n1. Sinusoidal embeddings apply to each coordinate individually, while rotary embeddings mix pairs of coordinates.\n2. Sinusoidal embeddings add a $\cos(m\theta)$ or $\sin(m\theta)$ term, while rotary embeddings use a multiplicative factor.\n\n\nOkay, what About in Practice?[#](#okay-what-about-in-practice)\n--------------------------------------------------------------\n\n\nAfter reading Jianlin Su's original blog posts [12, 13], we were curious how well such a first-principles approach to positional encoding would stack up against existing methods. Despite a tremendous number of papers that have come out claiming to improve the transformer architecture, very few approaches generalize well across codebases and tasks. However, we have found that rotary positional embeddings perform as well as or better than other positional techniques in every architecture we have tried.\n\n\n### Implementation[#](#implementation)\n\n\nA naive implementation of rotary positional embeddings would use the block-diagonal matrix form shown earlier. 
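\n\n\nFor concreteness, a minimal sketch of what that naive construction could look like is shown below (this is our own illustrative code, not taken from any of the codebases referenced in this post):\n\n\n```\nimport numpy as np\n\ndef rotation\_matrix(m, thetas):\n    # materialize the full d x d block-diagonal rotation matrix for position m\n    d = 2 \* len(thetas)\n    R = np.zeros((d, d))\n    for j, t in enumerate(thetas):\n        c, s = np.cos(m \* t), np.sin(m \* t)\n        R[2 \* j:2 \* j + 2, 2 \* j:2 \* j + 2] = [[c, -s], [s, c]]\n    return R\n\ndef naive\_rope(x, m, thetas):\n    # f(x, m) = Theta\_m x, one dense matrix-vector product per position\n    return rotation\_matrix(m, thetas) @ x\n\n```\n\n\n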
In practice, implementing rotary positional embeddings this way is highly inefficient, and more optimized forms are readily available. The original implementations of RoPE are available in [roformer](https://github.com/ZhuiyiTechnology/roformer) and [bert4keras](https://github.com/bojone/bert4keras).\n\n\nAdditionally, we have implemented rotary positional embeddings in [x-transformers](https://github.com/lucidrains/x-transformers), [GPT-Neo](https://github.com/EleutherAI/gpt-neo), [GPT-NeoX](https://github.com/EleutherAI/gpt-neox), and [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax). Below are implimentations for PyTorch and JAX pulled from these codebases.\n\nGPT-NeoX (PyTorch)\n\n\n```\nimport torch\n\n\nclass Rotary(torch.nn.Module):\n def \\_\\_init\\_\\_(self, dim, base=10000):\n super().\\_\\_init\\_\\_()\n inv\\_freq = 1.0 / (base \\*\\* (torch.arange(0, dim, 2).float() / dim))\n self.register\\_buffer(\"inv\\_freq\", inv\\_freq)\n self.seq\\_len\\_cached = None\n self.cos\\_cached = None\n self.sin\\_cached = None\n\n def forward(self, x, seq\\_dim=1):\n seq\\_len = x.shape[seq\\_dim]\n if seq\\_len != self.seq\\_len\\_cached:\n self.seq\\_len\\_cached = seq\\_len\n t = torch.arange(x.shape[seq\\_dim], device=x.device).type\\_as(self.inv\\_freq)\n freqs = torch.einsum(\"i,j->ij\", t, self.inv\\_freq)\n emb = torch.cat((freqs, freqs), dim=-1).to(x.device)\n self.cos\\_cached = emb.cos()[:, None, None, :]\n self.sin\\_cached = emb.sin()[:, None, None, :]\n return self.cos\\_cached, self.sin\\_cached\n\n\n# rotary pos emb helpers:\n\ndef rotate\\_half(x):\n x1, x2 = x[..., : x.shape[-1] // 2], x[..., x.shape[-1] // 2 :]\n return torch.cat(\n (-x2, x1), dim=x1.ndim - 1\n ) # dim=-1 triggers a bug in torch < 1.8.0\n\n\n@torch.jit.script\ndef apply\\_rotary\\_pos\\_emb(q, k, cos, sin):\n return (q \\* cos) + (rotate\\_half(q) \\* sin), (k \\* cos) + (rotate\\_half(k) \\* sin)\n\n```\n\n\n**N.B:** The layout of the queries and keys in GPT-NeoX, following Megatron, is `[seq, batch, heads, hdim]`, in order to avoid memory-intensive transpose operations. The code will need to be modified to work with the conventional layout of `[batch, seq, heads, hdim]`.\n\n\n\n\n\n\nMesh Transformer JAX (JAX)\n\n\n```\nimport jax.numpy as jnp\nimport numpy as np\nfrom einops import rearrange, repeat\n\n\ndef fixed\\_pos\\_embedding(x, seq\\_dim=0):\n dim = x.shape[-1]\n inv\\_freq = 1.0 / (10000 \\*\\* (np.arange(0, dim, 2) / dim))\n\n sinusoid\\_inp = np.einsum(\"i , j -> i j\", np.arange(x.shape[seq\\_dim]), inv\\_freq)\n\n return np.sin(sinusoid\\_inp), np.cos(sinusoid\\_inp)\n\n\ndef rotate\\_every\\_two(x):\n x1 = x[:, :, ::2]\n x2 = x[:, :, 1::2]\n\n x = jnp.stack((-x2, x1), axis=-1)\n\n return rearrange(x, \"... d j -> ... 
(d j)\")\n\n\ndef apply\\_rotary\\_pos\\_emb(x, sincos):\n sin, cos = map(lambda t: repeat(t, \"b n -> b (n j)\", j=2)[:, None, :], sincos)\n return (x \\* cos) + (rotate\\_every\\_two(x) \\* sin)\n\n```\n\n\n**N.B:** The layout of the queries and keys in Mesh Transformer JAX is `[seq, n_head, d_head]` (no batch dim).\n\n\n\n\n\n\n\n### Experiments[#](#experiments)\n\n\nWe have found rotary embeddings to be effective for many varieties of attention.\n\n\n#### Comparison against other PEs for Global attention[#](#comparison-against-other-pes-for-global-attention)\n\n\nWe conducted [comparisons](https://wandb.ai/eleutherai/neox/reports/Rotary-Test-3--Vmlldzo2MTIwMDM) of rotary embeddings with learned absolute positional embeddings, used in GPT-3 [1], and the learned relative positional embeddings (henceforth RPE) used in T5 [10] using our GPT-Neox codebase. Comparisons were done using 125M parameter models with the same hyperparameters as the equally-sized model from [1]. Models were trained on [OpenWebText2](https://www.eleuther.ai/projects/open-web-text2/), a large and diverse dataset of online text. We see faster convergence of training and validation curves and a lower overall validation loss with a minimal decrease in throughput.\n\n\n\n![GPT-NeoX experiments](/images/blog/rotary-embeddings/rope-learned-rpe.png) \nOWT2 validation loss with 150M parameter models in GPT-NeoX\n\n\n\n\n\n\n\n| Type | OWT2 Loss | OWT2 Ppl. |\n| --- | --- | --- |\n| Learned Absolute | 2.809 | 16.59 |\n| T5 RPE | 2.801 | 16.46 |\n| Rotary | 2.759 | 15.78 |\n\n\n\nFinal validation loss / ppl scores on OWT2 validation set at 55k steps (~30B tokens)\n\n\n\n\n#### Billion+ parameter models[#](#billion-parameter-models)\n\n\nWe additionally conducted additional larger scale experiments with the [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) codebase and 1.4B parameter models, against baselines of learned absolute position embeddings and T5 RPE. Hyperparameters similar to GPT3's 1.3B model were used, with the dataset being the Pile [3]. A similar increase in convergence speed was observed as seen over learned absolute (~30%), and a smaller improvement (10-20%) was still seen over the T5 relative position encoding, demonstrating scalability into the billion parameter regimen. For full details, see [here](https://wandb.ai/eleutherai/mesh-transformer-jax/reports/Position-encoding-shootout--Vmlldzo2MTg2MzY).\n\n\n\n![Jax experiments](/images/blog/rotary-embeddings/jax-experiments.png) \nPile validation loss with 1.5B parameter models\n\n\n\n\n\n\n\n| Type | Pile Loss | Pile Ppl. |\n| --- | --- | --- |\n| Learned Absolute | 2.240 | 9.393 |\n| T5 RPE | 2.223 | 9.234 |\n| Rotary | 2.173 | 8.784 |\n\n\n\nFinal validation loss / ppl scores on Pile validation set at 8k steps (~8B tokens)\n\n\n\n\n#### Comparison against learned absolute for Performer[#](#comparison-against-learned-absolute-for-performer)\n\n\nPerformer [2] is an example of an alternative attention mechanism designed to avoid quadratic bottlenecks with respect to sequence lengths. We ran small scale tests of Performer on enwiki8, for 8 layer char-based transformers with 512 dimensions and 8 heads. [These tests indicated](https://wandb.ai/lucidrains/eleuther-blogpost/reports/performer-rotary--Vmlldzo2MTgyNDg) that substituting rotary embeddings into the Performer leads to stark decreases in validation loss and to rapid convergence. 
Though these improvements do not close the gap between efficient and quadratic attention mechanisms, such a significant improvement makes mechanisms like Performer more attractive.\n\n\nIn smaller scale tests, we have also put RoPE head to head against other alternatives including the relative position method of Shaw et al. [11], TUPE [5], and position-infused attention [8], seeing positive results across the board.\n\n\n\n![x-transformers experiments](/images/blog/rotary-embeddings/performer.png) \nEnwik8 validation/train loss with performer\n\n\n\n\n### Runtime[#](#runtime)\n\n\nIn general, we find that the runtime cost of rotary embeddings is fairly negligible. With the above implementation, we find that applying the rotary embeddings is naively about 4-5x the cost of applying additive positional embeddings. With the addition of a fusing optimizer like Torchscript, the runtime can be reduced to about 2-2.5x the runtime of additive positional embeddings. Concretely, for query and key tensors of shape $[2048, 16, 12, 64]$, applying rotary embeddings take 5.3 milliseconds, while applying additive positional embeddings takes 2.1 milliseconds.\n\n\nUnlike standard positional embeddings, however, rotary embeddings must be applied at every layer. As large transformer models are typically dominated by matrix multiplies, we find that the overall overhead remains negligible. With fusion, we find that rotary embeddings impose a 1-3% overhead across a range of transformer sizes.\n\n\nConclusion[#](#conclusion)\n--------------------------\n\n\nRotary embeddings make it possible to implement relative attention in a straightforward and efficient manner, and we look forward to the work it inspires. Simple improvements to the transformer architecture that carry over robustly between different types of self-attention are few and far between [6].\n\n\n### Citation Information[#](#citation-information)\n\n\nTo cite the RoPE methodology, please use:\n\n\n\n```\n@article{rope-paper,\n title={RoFormer: Enhanced Transformer with Rotary Position Embedding},\n author={Su, Jianlin and Lu, Yu and Pan, Shengfeng and Wen, Bo and Liu, Yunfeng},\n journal={arXiv preprint arXiv:2104.09864},\n year={2021}\n}\n\n```\nTo cite this blog post, please use:\n\n\n\n```\n@misc{rope-eleutherai,\n title = {Rotary Embeddings: A Relative Revolution},\n author = {Biderman, Stella and Black, Sid and Foster, Charles and Gao, Leo and Hallahan, Eric and He, Horace and Wang, Ben and Wang, Phil},\n howpublished = \\url{blog.eleuther.ai/},\n note = {[Online; accessed ]},\n year = {2021}\n}\n\n```\nReferences[#](#references)\n--------------------------\n\n\n[1] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language Models are Few-Shot Learners. *arXiv preprint [arXiv:2005.14165](https://arxiv.org/abs/2005.14165)*, 2020.\n\n\n[2] Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking Attention with Performers. *arXiv preprint [arXiv:2009.14794](https://arxiv.org/abs/2009.14794)*, 2020.\n\n\n[3] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The Pile: An 800GB Dataset of Diverse Text for Language Modeling. 
*arXiv preprint [arXiv:2101.00027](https://arxiv.org/abs/2101.00027)*, 2021.\n\n\n[4] Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Ian Simon, Curtis Hawthorne, Andrew M Dai, Matthew D Hoffman, Monica Dinculescu, and Douglas Eck. Music Transformer. *arXiv preprint [arXiv:1809.04281](https://arxiv.org/abs/1809.04281)*, 2018.\n\n\n[5] Guolin Ke, Di He, and Tie-Yan Liu. Rethinking Positional Encoding in Language Pre-training. *arXiv preprint [arXiv:2006.15595](https://arxiv.org/abs/2006.15595)*, 2020.\n\n\n[6] Sharan Narang, Hyung Won Chung, Yi Tay, William Fedus, Thibault Fevry, Michael Matena, Karishma Malkan, Noah Fiedel, Noam Shazeer, Zhenzhong Lan, et al. Do Transformer Modifications Transfer Across Implementations and Applications? *arXiv preprint [arXiv:2102.11972](https://arxiv.org/abs/2102.11972)*, 2021.\n\n\n[7] Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, et al. Efficient Large-Scale Language Model Training on GPU Clusters. *arXiv preprint [arXiv:2104.04473](https://arxiv.org/abs/2104.04473)*, 2021.\n\n\n[8] Ofir Press, Noah A Smith, and Mike Lewis. Shortformer: Better Language Modeling using Shorter Inputs. *arXiv preprint [arXiv:2012.15832](https://arxiv.org/abs/2012.15832)*, 2020.\n\n\n[9] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning Transferable Visual Models From Natural Language Supervision. *arXiv preprint [arXiv:2103.00020](https://arxiv.org/abs/2103.00020)*, 2021.\n\n\n[10] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. *arXiv preprint [arXiv:1910.10683](https://arxiv.org/abs/1910.10683)*, 2019.\n\n\n[11] Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-Attention with Relative Position Representations. *arXiv preprint [arXiv:1803.02155](https://arxiv.org/abs/1803.02155)*, 2018.\n\n\n[12] Jianlin Su. 让研究人员绞尽脑汁的 Transformer 位置编码. , 2021. [Online; accessed 18-April-2021].\n\n\n[13] Jianlin Su. Transformer 升级之路:2、博采众长的旋转式位置编码. , 2021. [Online; accessed 18-April-2021].\n\n\n[14] Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu. RoFormer: Enhanced Transformer with Rotary Position Embedding. *arXiv preprint [arXiv:2104.09864](https://arxiv.org/abs/2104.09864)*, 2021.\n\n\n[15] Hao Tan and Mohit Bansal. Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision. *arXiv preprint [arXiv:2010.06775](https://arxiv.org/abs/2010.06775)*, 2020.\n\n\n[16] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. 
*arXiv preprint [arXiv:1706.03762](https://arxiv.org/abs/1706.03762)*, 2017.", "date_published": "2021-04-20T00:00:00Z", "authors": ["Stella Biderman", "Sid Black", "Charles Foster", "Leo Gao", "Eric Hallahan", "Horace He", "Ben Wang", "Phil Wang"], "summaries": []} +{"id": "348a91af6174a237e2faabbd410e3e3f", "title": "EleutherAI's Thoughts on the EU AI Act", "url": "https://blog.eleuther.ai/eu-aia/", "source": "eleuther.ai", "source_type": "blog", "text": "In June, the European Parliament adopted its negotiating position on the EU AI Act, a comprehensive piece of legislation aimed at regulating a wide variety of artificial intelligence (AI) research, products, and services. It's expected to be finalized and adopted by the end of the year, bringing widespread changes to the way that AI organizations operate in the European Union. There is a lot in the current draft's regulations on large-scale AI systems that we agree with, such as an emphasis on transparency and documentation and an explicit requirement to assess the suitability of training data. Unfortunately, the current text places a substantial burden on non-profit, open source, and community-driven research, drawing no distinction between tech giants like OpenAI and Google, non-profit research groups like EleutherAI and the Allen Institute for Artificial Intelligence, and independent hobbyists who train or finetune models governed by this law.\n\n\n*[Read the full position paper [here](https://blog.eleuther.ai/supporting_OS_in_the_AIAct.pdf)]*\n\n\nIn April we released the [Pythia](https://arxiv.org/abs/2304.01373) model suite, a set of eight models trained on two different datasets and ranging from 70 million to 12 billion parameters. To empower researchers to study how the capabilities of large language models evolve over the course of training, we saved and released 154 checkpoints per model, providing an unprecedented level of detail about how large language models train. The 154 Pythia-12B checkpoints represent more partially trained checkpoints for a single model than the rest of the world has ever released across all other 12 billion parameter or larger language models. Pythia has received widespread acclaim, garnering over sixty citations in just four months, and was accepted for an oral presentation at [the International Conference on Machine Learning (ICML)](https://icml.cc/) occurring later today. Under the current parliamentary text we would not be able to do a project like this again, as the *over 5,000* variations and partially trained model checkpoints each count as their own model and would require the same individualized documentation, testing, and reporting as if we had developed over 5,000 distinct commercially deployed models.\n\n\nThe Parliamentary text also includes requirements that are currently impossible for EleutherAI to comply with. For example, it requires reporting energy usage and environmental data about the computing cluster used to train the model - information we do not necessarily have access to since we, like almost everyone who does large scale AI research, do not own the actual GPUs we use to train our models. 
While we work with our cloud providers to disclose as much as possible about energy usage and environmental impact, some information the EU Parliament text requires for disclosure is viewed as proprietary by the cloud providers and is not something we have access to.\n\n\nTo address these shortcomings, EleutherAI has partnered with [Creative Commons](https://creativecommons.org/), [Hugging Face](https://huggingface.co/), [GitHub](https://github.com/), [LAION](https://laion.ai/), and [Open Future](https://openfuture.ai/) to draft a [position paper](https://blog.eleuther.ai/supporting_OS_in_the_AIAct.pdf) detailing our perspectives on the parliamentary text and recommending how the EU can better achieve its goals by embracing what the open source community has to offer. Our primary recommendations are:\n\n\n1. Define AI components clearly,\n2. Clarify that collaborative development of open source AI components and making them available in public repositories does not subject developers to the requirements in the AI Act, building on and improving the Parliament text’s Recitals 12a-c and Article 2(5e),\n3. Support the AI Office’s coordination and inclusive governance with the open source ecosystem, building on the Parliament’s text,\n4. Ensure the R&D exception is practical and effective by permitting limited testing in real-world conditions, combining aspects of the Council’s approach and an amended version of the Parliament’s Article 2(5d),\n5. Set proportional requirements for “foundation models,” recognizing and distinctly treating different uses and development modalities, including open source approaches, tailoring the Parliament’s Article 28b.\n\n\nEleutherAI is an [unprecedented experiment](https://arxiv.org/abs/2210.06413) in doing open, transparent, and public scientific research in artificial intelligence. While we do not believe that all organizations must necessarily follow in our footsteps, we believe that it's important that somebody reveals what goes on behind the curtain during the development of these increasingly influential technologies. As such, we are committing today not only to comply with the final text to the best of our ability, but also to document and publicly disclose all costs we incur and additional steps we need to take to achieve compliance. As countries around the world look to the EU AI Act when drafting their own regulation, we hope that an honest and open accounting of our ability to comply with the EU AI Act will provide lawmakers with essential information about how to design regulatory frameworks that do not put an undue burden on non-profit, open source, and independent researchers.", "date_published": "2023-07-26T00:00:00Z", "authors": ["Aviya Skowron", "Stella Biderman"], "summaries": []}