{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "gpuType": "T4"
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# LongLLaMA: Focused Transformer Training for Context Scaling\n",
        "**LongLLaMA is a large language model capable of handling long contexts of 256k tokens or even more**.\n",
        "\n",
        "It is built upon the foundation of [OpenLLaMA](https://github.com/openlm-research/open_llama) and fine-tuned using the [Focused Transformer (FoT)](https://arxiv.org/abs/2307.03170) method. We release a smaller 3B variant of the LongLLaMA model under a permissive license (Apache 2.0), together with inference code supporting longer contexts, on [Hugging Face](https://huggingface.co/syzymon/long_llama_3b). Our model weights can serve as a drop-in replacement for LLaMA in existing implementations (for short contexts of up to 2048 tokens).\n",
        "\n",
        "This notebook is a research preview of LongLLaMA.\n",
        "For more, see the [FoT paper](https://arxiv.org/abs/2307.03170) and [GitHub repository](https://github.com/CStanKonrad/long_llama).\n",
        "\n",
        "On August 5, 2023, the model was updated from [LongLLaMA-3B](https://huggingface.co/syzymon/long_llama_3b) to [LongLLaMA-3Bv1.1](https://huggingface.co/syzymon/long_llama_3b_v1_1)."
      ],
      "metadata": {
        "id": "69wP4hs0IHp7"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Usage"
      ],
      "metadata": {
        "id": "E9SJIFZRILn8"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Requirements"
      ],
      "metadata": {
        "id": "zVgVhtMWINX8"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install --upgrade pip\n",
        "!pip install transformers==4.30 sentencepiece accelerate -q"
      ],
      "metadata": {
        "id": "hKSfFHiBILOr"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Loading model"
      ],
      "metadata": {
        "id": "hZ0n4HSBIWz7"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import torch\n",
        "from transformers import LlamaTokenizer, AutoModelForCausalLM"
      ],
      "metadata": {
        "id": "1_ANIN9uII1K"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "MODEL_PATH = 'syzymon/long_llama_3b_v1_1'\n",
        "TOKENIZER_PATH = 'syzymon/long_llama_3b_v1_1'\n",
        "# to fit into colab GPU we will use reduced precision\n",
        "TORCH_DTYPE = torch.bfloat16\n",
        "\n",
        "if torch.cuda.is_available():\n",
        "    device = torch.device(\"cuda\")\n",
        "else:\n",
        "    device = torch.device(\"cpu\")"
      ],
      "metadata": {
        "id": "NOFzb_qrIZlw"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "tokenizer = LlamaTokenizer.from_pretrained(TOKENIZER_PATH)\n",
        "\n",
        "model = AutoModelForCausalLM.from_pretrained(MODEL_PATH,\n",
        "                                            torch_dtype=TORCH_DTYPE,\n",
        "                                            device_map=device,\n",
        "                                            trust_remote_code=True,\n",
        "                                            # mem_attention_grouping is used\n",
        "                                            # to trade speed for memory usage\n",
        "                                            # for details see the section Additional configuration\n",
        "                                            mem_attention_grouping=(1, 2048))\n",
        "model.eval()"
      ],
      "metadata": {
        "id": "ifKHZTZhIfqH"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Input handling and generation\n",
        "LongLLaMA uses the standard Hugging Face interface. Long inputs given to the model are\n",
        "split into context windows and loaded into the memory cache."
      ],
      "metadata": {
        "id": "fsne18klIjsc"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from transformers import TextStreamer\n",
        "streamer = TextStreamer(tokenizer)\n",
        "\n",
        "prompt = \"My name is Julien and I like to\"\n",
        "input_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids\n",
        "input_ids = input_ids.to(device)\n",
        "\n",
        "torch.manual_seed(60)\n",
        "generation_output = model.generate(\n",
        "    input_ids=input_ids,\n",
        "    max_new_tokens=256,\n",
        "    num_beams=1,\n",
        "    last_context_length=1792,\n",
        "    do_sample=True,\n",
        "    temperature=1.0,\n",
        "    streamer=streamer,\n",
        ")"
      ],
      "metadata": {
        "id": "F_CLZAVgIk_Z"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "During the model call, one can provide the parameter `last_context_length` (default $1024$), which specifies the number of tokens left in the last context window. Tuning this parameter can improve generation quality, as the first layers do not have access to memory. See the section How LongLLaMA handles long inputs for details."
      ],
      "metadata": {
        "id": "POkpFLaEIpAN"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Additional configuration\n",
        "LongLLaMA has several other parameters:\n",
        "* `mem_layers` specifies layers endowed with memory (should be either an empty list or a list of all memory layers specified in the description of the checkpoint).\n",
        "* `mem_dtype` allows changing the type of the memory cache.\n",
        "* `mem_attention_grouping` can trade off speed for reduced memory usage.\n",
        "  When equal to `(4, 2048)`, the memory layers will process at most $4*2048$ queries at once ($4$ heads and $2048$ queries for each head).\n",
        "\n",
        "```python3\n",
        "model = AutoModelForCausalLM.from_pretrained(\n",
        "    MODEL_PATH, torch_dtype=torch.float32,\n",
        "    mem_layers=[],\n",
        "    mem_dtype='bfloat16',\n",
        "    trust_remote_code=True,\n",
        "    mem_attention_grouping=(4, 2048),\n",
        ")\n",
        "```"
      ],
      "metadata": {
        "id": "WdaqV_KrIs7q"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Passkey retrieval\n",
        "The code below tests the model on the passkey retrieval task from [Landmark Attention: Random-Access Infinite Context Length for Transformers](https://arxiv.org/abs/2305.16300), showcasing its ability to handle long contexts. The prompt format used in this task (copied from the paper) is shown below.\n",
        "\n",
        "![key_retreival.png]()"
      ],
      "metadata": {
        "id": "IzmgjCXPIwLX"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# This is a modification of the original code\n",
        "# https://github.com/epfml/landmark-attention/blob/111ee30e693ccc23a12b57c1d41f8ae2cc5b4867/llama/run_test.py#L96\n",
        "# The original code license:\n",
        "# Copyright 2023 Amirkeivan Mohtashami, Martin Jaggi\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     http://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License.\n",
        "\n",
        "from numpy import random\n",
        "\n",
        "\n",
        "def generate_prompt_landmark(n_garbage, seed):\n",
        "    \"\"\"Generates a long prompt and inserts a passkey at a random position.\"\"\"\n",
        "    rnd_state = random.get_state()\n",
        "    random.seed(seed)\n",
        "    n_garbage_prefix = random.randint(0, n_garbage)\n",
        "    n_garbage_suffix = n_garbage - n_garbage_prefix\n",
        "\n",
        "    task_description = \"There is an important info hidden inside a lot of irrelevant text. Find it and memorize them. I will quiz you about the important information there.\"\n",
        "    garbage = \"The grass is green. The sky is blue. The sun is yellow. Here we go. There and back again.\"\n",
        "    garbage_inf = \" \".join([garbage] * 5000)\n",
        "    assert len(garbage_inf) >= n_garbage\n",
        "    garbage_prefix = garbage_inf[:n_garbage_prefix]\n",
        "    garbage_suffix = garbage_inf[:n_garbage_suffix]\n",
        "    pass_key = random.randint(1, 50000)\n",
        "    information_line = f\"The pass key is {pass_key}. Remember it. {pass_key} is the pass key.\"\n",
        "    final_question = \"What is the pass key? The pass key is\"\n",
        "    lines = [\n",
        "        task_description,\n",
        "        garbage_prefix,\n",
        "        information_line,\n",
        "        garbage_suffix,\n",
        "        final_question,\n",
        "    ]\n",
        "    random.set_state(rnd_state)\n",
        "    return \"\\n\".join(lines), str(pass_key)"
      ],
      "metadata": {
        "id": "SXOJh8XOIxwE"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "def passkey_retrieval_test(n_garbage=60000, seed=555):\n",
        "  # n_garbage=60000 results in ~16k tokens\n",
        "\n",
        "  prompt, answer = generate_prompt_landmark(n_garbage, seed)\n",
        "  input_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids\n",
        "  input_ids = input_ids.to(device)\n",
        "  print(f\"Prompt has {input_ids.shape[-1]} tokens\")\n",
        "\n",
        "  answer_ids = tokenizer(answer, return_tensors=\"pt\").input_ids[:, 1:] # drop BOS\n",
        "  generation_output = model.generate(\n",
        "      input_ids=input_ids, max_new_tokens=answer_ids.shape[-1], num_beams=1, last_context_length=1024\n",
        "  )\n",
        "\n",
        "  model_answer = generation_output[0, -answer_ids.shape[-1]:].cpu()\n",
        "\n",
        "  is_correct = (model_answer == answer_ids[0]).all().item()\n",
        "  print(f\"The correct answer is {tokenizer.decode(answer_ids[0].cpu())}\")\n",
        "  print(f\"The model answer is {tokenizer.decode(model_answer.cpu())}, is_correct : {is_correct}\")\n",
        "  return is_correct\n",
        "\n",
        "\n",
        "passkey_retrieval_test()"
      ],
      "metadata": {
        "id": "nQv7qE1KI2in"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "num_tests = 10\n",
        "passed_tests = 0\n",
        "for i in range(num_tests):\n",
        "  passed_tests += passkey_retrieval_test(n_garbage=60000, seed=i)\n",
        "\n",
        "print(f\"Accuracy is {passed_tests/num_tests}\")"
      ],
      "metadata": {
        "id": "_RpzjeuPI4tT"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### How LongLLaMA handles long inputs\n",
        "Inputs over $2048$ tokens are automatically split into windows $w_1, \\ldots, w_m$. The first $m-2$ windows contain $2048$ tokens each, $w_{m-1}$ has no more than $2048$ tokens, and $w_m$ contains the number of tokens specified by `last_context_length`. The model processes the windows one by one extending the memory cache after each. If `use_cache` is `True`, the last window will not be loaded to the memory cache but to the local (generation) cache.\n",
        "\n",
        "The memory cache stores $(key, value)$ pairs for each head of the specified memory layers `mem_layers`. In addition to this, it stores attention masks.\n",
        "\n",
        "If `use_cache=True` (which is the case in generation), LongLLaMA will use two caches: the memory cache for the specified layers and the local (generation) cache for all layers. When the local cache exceeds $2048$ elements, its content is moved to the memory cache for the memory layers.\n",
        "\n",
        "For simplicity, context extension is realized with a memory cache and full attention in this repo. Replacing this simple mechanism with a KNN search over an external database is possible with systems like [Faiss](https://github.com/facebookresearch/faiss), which could enable further context-length scaling. We leave this as future work."
      ],
      "metadata": {
        "id": "6AlfDjeHJAiX"
      }
    }
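,
    {
      "cell_type": "markdown",
      "source": [
        "As a minimal, self-contained sketch of the split rule described above (reconstructed from the text, not the model's actual implementation; the single-window fallback for short inputs is an assumption):\n",
        "\n",
        "```python3\n",
        "CONTEXT_WINDOW = 2048  # window size stated in the description above\n",
        "\n",
        "def split_into_windows(n_tokens, last_context_length=1024):\n",
        "    # Sketch only: compute the sizes of windows w_1, ..., w_m.\n",
        "    if n_tokens <= CONTEXT_WINDOW:\n",
        "        return [n_tokens]  # assumed: short inputs fit in a single window\n",
        "    last = min(last_context_length, n_tokens)\n",
        "    remaining = n_tokens - last\n",
        "    windows = []\n",
        "    while remaining > CONTEXT_WINDOW:\n",
        "        windows.append(CONTEXT_WINDOW)  # full 2048-token windows\n",
        "        remaining -= CONTEXT_WINDOW\n",
        "    if remaining > 0:\n",
        "        windows.append(remaining)  # w_{m-1}: at most 2048 tokens\n",
        "    windows.append(last)  # w_m: last_context_length tokens\n",
        "    return windows\n",
        "\n",
        "print(split_into_windows(5000))  # [2048, 1928, 1024]\n",
        "```"
      ],
      "metadata": {}
    }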
  ]
}
