{
  "cells": [
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/llm-field-guide/open-llama/open-llama-huggingface-langchain.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/llm-field-guide/open-llama/open-llama-huggingface-langchain.ipynb)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "JPdQvYmlWmNc"
      },
      "source": [
        "# Open-LLaMA in Hugging Face and LangChain\n",
        "\n",
        "In this notebook we'll explore how we can use the **Open-LLaMA** model in Hugging Face and LangChain, including the prompts needed to get a simple chain working with the model.\n",
        "\n",
        "---\n",
        "\n",
        "\ud83d\udea8 _Note that running this on CPU is practically impossible; it would take a very long time. You need ~28GB of GPU memory to run this notebook. If running on Google Colab, go to **Runtime > Change runtime type > Hardware accelerator > GPU > GPU type > A100 > Runtime shape > High RAM**._\n",
        "\n",
        "---\n",
        "\n",
        "We start by doing a `pip install` of all required libraries."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "K_fRq0BSGMBk",
        "outputId": "848da1d4-b93b-428b-8a44-b6aaf362a2d0"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m7.2/7.2 MB\u001b[0m \u001b[31m71.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m227.6/227.6 kB\u001b[0m \u001b[31m29.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m869.7/869.7 kB\u001b[0m \u001b[31m69.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m109.1/109.1 MB\u001b[0m \u001b[31m16.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m81.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m1.0/1.0 MB\u001b[0m \u001b[31m72.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m90.0/90.0 kB\u001b[0m \u001b[31m13.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m236.8/236.8 kB\u001b[0m \u001b[31m30.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m7.8/7.8 MB\u001b[0m \u001b[31m100.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m81.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m114.5/114.5 kB\u001b[0m \u001b[31m15.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m268.8/268.8 kB\u001b[0m \u001b[31m32.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m149.6/149.6 kB\u001b[0m \u001b[31m21.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m49.1/49.1 kB\u001b[0m \u001b[31m7.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h"
          ]
        }
      ],
      "source": [
        "!pip install -qU transformers accelerate langchain==0.0.174 xformers sentencepiece"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "VHQwEeW9Zps2"
      },
      "source": [
        "## Initializing the Hugging Face Pipeline"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "mElf068NXout"
      },
      "source": [
        "The first thing we need to do is initialize a `text-generation` pipeline with Hugging Face transformers. The pipeline requires three components, which we must initialize first:\n",
        "\n",
        "* An LLM, in this case `openlm-research/open_llama_7b_400bt_preview`.\n",
        "\n",
        "* The respective tokenizer for the model.\n",
        "\n",
        "* A stopping criteria object.\n",
        "\n",
        "We'll explain each of these as we get to them. Let's begin with the model.\n",
        "\n",
        "We initialize the model and move it to our CUDA-enabled GPU. On Colab this can take 5-10 minutes to download and initialize the model."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 259,
          "referenced_widgets": [
            "e2b11dd5f8364deaa1ef7d260046fa61",
            "a09e61e7c2624ed2b84c2405ecbab675",
            "e2cf9463ce9a4390b3bcbb30f338070f",
            "6819ab1c3c5045debabaec6c1ff945b1",
            "075229ce89e04c87853526f01a061fda",
            "abebb700f4134502b72236af638b243c",
            "fda96922a3534af689501b2e177300ce",
            "b1989726e923400cbbbe6152dffa9aca",
            "dbdfe713b60442edb39eeb0c27aa8b9b",
            "a950597685d44a0db80b2515ba57bdc5",
            "13638e09f9854d7eb2e0111b635c85aa",
            "1cd99889a8dd4fbbb72c431660a121bd",
            "4209fc1d6e3e4b1c9b351e8bdd68497d",
            "3abfe4802c9c485e975e5319da16c5b0",
            "9a499b80ecbe46b990ac9f93cec1757e",
            "c5616898095d4f1aba6911c9fb1b376d",
            "aef4ee57f0fd40688a07c2126ff010fd",
            "666f65ab46134ce3947e32f17b6884e9",
            "82b75aff2fe243cdac745d8296e8146c",
            "bdf539b2c9be4e5898ed4be0f94dd4d6",
            "4d505c51085a4b21b27f74de97ec38af",
            "60fafb27385746948bb41e43ec83477c",
            "a63f822b70414a319b6667fc2b3f2af8",
            "1cb08ad9e30c469589c33b079a2aa661",
            "093a32f7bd2c437f8b54889593e7bfd5",
            "b87e8ed53d524ac28b5a21932c106a58",
            "72b775cbd15745c38d2126f1a9823fa9",
            "e83ae7b11e0745a2a167c88f56840cb2",
            "3064eba410b349d798f1728614e70390",
            "6aa62bab0a37460caf32a7e2059ee5f1",
            "23b10dee86d643cba2d0e00926afd3d4",
            "ec644cd2662a4e0c8558313674b1b440",
            "dc4dc00bd71c497ab4751e6e54a47e6e",
            "8cf349ab586341e5aaad84e48ebbeb31",
            "c3bdbae5617d48ce8963f19f9b336dcc",
            "a35f9fd92b194e1fa859c6af02daacc3",
            "5aeec5969915410d9f7d03cb18099268",
            "1ea1ea280f9c448db126d67bb920dc19",
            "06d37e0636c146998ff3a7a79c1cbc6e",
            "b02fdb3533da4c1691c49d69e2d52318",
            "6484a00ac1944751a43f328f134b736c",
            "41ee68b59dc845a787c25287aff71725",
            "be129a3514a84201bb1f42adfeaded92",
            "3e316d11f0274570baa502f0636b093f",
            "f3ee72d86bc2456587f2226068f31f26",
            "ef9608c37bab4a36988a80d9998094ce",
            "4267b4993dea45a69b99fdb2ff4e35aa",
            "5afba437a23947319101c1621b877922",
            "033dad87daf34419bd65d8f8faf54f73",
            "d35ff70dfb5544c7a1040aab49a8ca04",
            "7288507235df4781acdb5ba046643508",
            "a789570a2024435e8237179164783e8a",
            "941c84dbb8a84a1da6952d018261d7fb",
            "572761e85682475783d35de4aedf84b6",
            "6fb94eeede634611b6b5ebdbbf39071e",
            "08f43d3d1738408aa13cef7716467e5b",
            "5f918dbea2e74498b309aa5618e4fc9c",
            "3aff55cb256b4281bfd6495b63e12585",
            "ba429000e48e4ecd87dde052f580605a",
            "c35a6cd84516454fa4488b981a4bde44",
            "bec3cad552754e5984f6439dd27fbe67",
            "33ed4e90305741d5854c241fd291da69",
            "7090148c3a9b40e28dc8eb616fee847a",
            "6eefbfb275164a3990dde7a8e28f03c2",
            "8c09833ce95149f39457b678b894346a",
            "1b95f692e7cc442dafc40d7631a3d3ed",
            "3e052726a9294245a8926b84ae54ea89",
            "f9e20d90b1f84f3f8b177e01a14fc096",
            "a07b62e89bea420f83005071ce1a2d67",
            "496a2d6afa834066974345bbc5af371a",
            "2eaa9e6742d74cf98790eeb708232048",
            "f8a5cbe621744d0a9ba16cef2210e806",
            "d84cd486ce5949be813a8e1409b86bd6",
            "5a3f32f18b2c4457a0b05f7a0b8eaff2",
            "eb5ff7f4a9984cd79007f0e1ebf3cd5b",
            "82dea8e05d1e4dc6a8e5f6253aa8c4db",
            "e9aa3aa1a2f347ccadfb08352e6f1dfc"
          ]
        },
        "id": "ikzdi_uMI7B-",
        "outputId": "e1c010e5-a045-49fe-f79a-0b6d7b8e7f1a"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "e2b11dd5f8364deaa1ef7d260046fa61",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)lve/main/config.json:   0%|          | 0.00/507 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "1cd99889a8dd4fbbb72c431660a121bd",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)model.bin.index.json:   0%|          | 0.00/26.8k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "a63f822b70414a319b6667fc2b3f2af8",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading shards:   0%|          | 0/2 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "8cf349ab586341e5aaad84e48ebbeb31",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)l-00001-of-00002.bin:   0%|          | 0.00/9.98G [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "f3ee72d86bc2456587f2226068f31f26",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)l-00002-of-00002.bin:   0%|          | 0.00/3.50G [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "08f43d3d1738408aa13cef7716467e5b",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "3e052726a9294245a8926b84ae54ea89",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)neration_config.json:   0%|          | 0.00/137 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Model loaded on cuda:0\n"
          ]
        }
      ],
      "source": [
        "from torch import cuda, bfloat16\n",
        "import transformers\n",
        "\n",
        "device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'\n",
        "\n",
        "model = transformers.AutoModelForCausalLM.from_pretrained(\n",
        "    'openlm-research/open_llama_7b_400bt_preview'\n",
        ")\n",
        "model.eval()\n",
        "model.to(device)\n",
        "print(f\"Model loaded on {device}\")"
      ]
    },
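    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The cell above loads the weights in full `float32` precision, which is why ~28GB of GPU memory is needed. As a sketch (not run in this notebook), passing `torch_dtype` to `from_pretrained` loads the weights in `bfloat16` (imported above) instead, roughly halving the memory footprint:\n",
        "\n",
        "```python\n",
        "# sketch: load the same model in half precision to reduce GPU memory use\n",
        "model = transformers.AutoModelForCausalLM.from_pretrained(\n",
        "    'openlm-research/open_llama_7b_400bt_preview',\n",
        "    torch_dtype=bfloat16  # store weights as bfloat16, ~14GB instead of ~28GB\n",
        ")\n",
        "```"
      ]
    },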
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "JzX9LqWSX9ot"
      },
      "source": [
        "The pipeline requires a tokenizer, which handles the translation of human-readable plaintext into the token IDs the LLM reads. The Open-LLaMA model was trained using the tokenizer from the same `openlm-research/open_llama_7b_400bt_preview` repository, which we initialize like so:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 113,
          "referenced_widgets": [
            "22ae053603e54f2ca9e8b39b6aab5ee1",
            "ca0d28c178834fd8b5a5371b59432971",
            "2232a6da74d6453a8efa07320dce1972",
            "a6a2485496df4b8cae7bbe790d6b980c",
            "2005bd64458048c3a5d6f96cb3bb2cca",
            "4cb8e54bfa784ce2a627f59d8ea8106e",
            "7858265544da4e2abf42532c638afef5",
            "e8bb0ca2af794cbfaf133ecc1749e888",
            "c79e1c438dbb43eb99e5a1e23dfd094e",
            "c6a2f4329f02418c9f8ceefedd23cc07",
            "d561264a1f364de7b5ef2d4ca6c277d3",
            "bffa7a5cc346458b8ff7cd072bbe393f",
            "9a7baa33c27b41379abd1049314255e9",
            "a95c3dbaecf6471c853b1b4664462d67",
            "7831810c0e3b4662a5e882ee8a8101f5",
            "bb617b5dceb24611ac8aaf9ffc1f3363",
            "57363d04a58f4aaa8d30b05876bc0ba3",
            "6ef8498a90464a50ae3cbd80aecc547d",
            "aa83b4a39f764c8c99e341e5eed3bde7",
            "70da91c5f6724e39b772b38e1d3983df",
            "2e12445201254aabb68207483415134f",
            "ebfb5014debf48169748d7496826f95b",
            "658de4123b5748f98bad7fc8c9f3c111",
            "374cc326490e4bcda6f7be6aeb0f88ba",
            "0d56a835c04c4454a5a21aa0d4df665e",
            "50c94339dc8d40b9b55494f0a3e9e25e",
            "5d52d62dc6454f708e61d808604eab48",
            "8504793be9294bba8f9d1c1e5c821953",
            "373194975392405aa445c56a2fe29c1a",
            "d30ab0125e704a3c8869e586bf618264",
            "41bb7d9b79dc4d3286c428a226d6f6d4",
            "d8ac7eb021e74f959912d7dacb8150a4",
            "6a0a0b5707634580bd5ae4e797183f18"
          ]
        },
        "id": "v0iPv1GDGxgT",
        "outputId": "d1a044ac-0646-41a6-cb3c-e23f1f078a14"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "22ae053603e54f2ca9e8b39b6aab5ee1",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)okenizer_config.json:   0%|          | 0.00/141 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "bffa7a5cc346458b8ff7cd072bbe393f",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading tokenizer.model:   0%|          | 0.00/534k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "658de4123b5748f98bad7fc8c9f3c111",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)cial_tokens_map.json:   0%|          | 0.00/2.00 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "tokenizer = transformers.AutoTokenizer.from_pretrained(\n",
        "    \"openlm-research/open_llama_7b_400bt_preview\", use_fast=False\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "XL7G9Sr3uxdz"
      },
      "source": [
        "Finally we need to define the _stopping criteria_ of the model. The stopping criteria lets us specify *when* the model should stop generating text. If we don't provide stopping criteria, the model tends to ramble on after answering the initial question.\n",
        "\n",
        "To figure out what the stopping criteria should be, we can start with the *end of sequence* token, `'</s>'`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "B0fL5YFos2cd",
        "outputId": "edf21d28-e007-4f83-c743-a7c5d8a23f51"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[2]"
            ]
          },
          "execution_count": 4,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "tokenizer.convert_tokens_to_ids(['</s>'])"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "U5HBKJMGs5Zf"
      },
      "source": [
        "But this is not usually a satisfactory stopping criteria, particularly for less sophisticated models. Instead, we need to find typical finish points for the model. For example, if we are generating a chatbot conversation we might see something like:\n",
        "\n",
        "```\n",
        "User: {some query}\n",
        "Assistant: {the generated answer}\n",
        "User: ...\n",
        "```\n",
        "\n",
        "Where everything after `Assistant:` is generated, including the next `User:` line. The LLM may continue generating the conversation beyond the `Assistant:` output because it is simply predicting the conversation; it doesn't necessarily know that it should stop after providing the *one* `Assistant:` response.\n",
        "\n",
        "With that in mind, we can specify `User:` as a stopping criteria, which we can identify with:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "7AO4H9PCtuNL",
        "outputId": "dab2ab75-a023-47ac-9c86-100fc8531476"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[11080, 31871]"
            ]
          },
          "execution_count": 5,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "tokenizer.convert_tokens_to_ids(['User', ':'])"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "XyXokwTlt004"
      },
      "source": [
        "The reason we don't write `'User:'` directly is that no single `'User:'` token exists in the vocabulary, so the tokenizer maps it to the **unknown** token; the string is instead represented by the two tokens `['User', ':']`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "jOnmTXQouWOp",
        "outputId": "3b75fe35-1e8c-4387-9b5a-e4aee72bc89d"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "[0] ['<unk>']\n"
          ]
        }
      ],
      "source": [
        "unk_token = tokenizer.convert_tokens_to_ids(['User:'])\n",
        "unk_token_id = tokenizer.convert_ids_to_tokens(unk_token)\n",
        "print(unk_token, unk_token_id)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "HzoZzNnouWpf"
      },
      "source": [
        "We repeat this for various possible stopping conditions to create our `stop_token_ids` list:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "iSxoVATlnWRl",
        "outputId": "fbc46986-ce5d-47f4-d946-ce914324db39"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[[2], [11080, 31871], [15322, 31871], [9427, 31871]]"
            ]
          },
          "execution_count": 7,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "stop_token_ids = [\n",
        "    tokenizer.convert_tokens_to_ids(x) for x in [\n",
        "        ['</s>'], ['User', ':'], ['system', ':'],\n",
        "        [tokenizer.convert_ids_to_tokens([9427])[0], ':']\n",
        "    ]\n",
        "]\n",
        "\n",
        "stop_token_ids"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "IO2huJT33a3p"
      },
      "source": [
        "We also need to convert these to `LongTensor` objects:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "WCOrje-R3eJd",
        "outputId": "1b74012d-b30f-4ad7-aa94-5f3dd93f24bf"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[tensor([2], device='cuda:0'),\n",
              " tensor([11080, 31871], device='cuda:0'),\n",
              " tensor([15322, 31871], device='cuda:0'),\n",
              " tensor([ 9427, 31871], device='cuda:0')]"
            ]
          },
          "execution_count": 8,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "import torch\n",
        "\n",
        "stop_token_ids = [torch.LongTensor(x).to(device) for x in stop_token_ids]\n",
        "stop_token_ids"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "EceCJZNzxMvJ"
      },
      "source": [
        "We can do a quick spot check that no `<unk>` token IDs (`0`) appear in `stop_token_ids`. There are none, so we can move on to building the stopping criteria object, which checks whether any of these token ID sequences has been generated."
      ]
    },
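    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "That spot check can be written as a short assertion, e.g.:\n",
        "\n",
        "```python\n",
        "# confirm no <unk> token IDs (0) slipped into the stop sequences\n",
        "assert not any(0 in ids for ids in stop_token_ids)\n",
        "```"
      ]
    },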
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "id": "UG3R0LBQevQW"
      },
      "outputs": [],
      "source": [
        "import torch\n",
        "from transformers import StoppingCriteria, StoppingCriteriaList\n",
        "\n",
        "# define custom stopping criteria object\n",
        "class StopOnTokens(StoppingCriteria):\n",
        "    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\n",
        "        for stop_ids in stop_token_ids:\n",
        "            if torch.eq(input_ids[0][-len(stop_ids):], stop_ids).all():\n",
        "                return True\n",
        "        return False\n",
        "\n",
        "stopping_criteria = StoppingCriteriaList([StopOnTokens()])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "L7Mmlzg1x34x",
        "outputId": "6c8bf610-ee28-4779-e794-703064d29ab1"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "False"
            ]
          },
          "execution_count": 10,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# this should return False because there are no stop-criteria tokens in the sequence\n",
        "stopping_criteria(\n",
        "    torch.LongTensor([[1, 2, 3, 5000, 90000]]).to(device),\n",
        "    torch.FloatTensor([0.0])\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "RBksG8IR1bwN",
        "outputId": "f8f2e330-70b9-4602-aec4-dde17a078728"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "True"
            ]
          },
          "execution_count": 11,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# this should return True because the sequence ends with stop-criteria tokens\n",
        "stopping_criteria(\n",
        "    torch.LongTensor([[1, 2, 3, 11080, 31871]]).to(device),\n",
        "    torch.FloatTensor([0.0])\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "bNysQFtPoaj7"
      },
      "source": [
        "Now we're ready to initialize the HF pipeline. There are a few additional parameters that we must define here. Comments explaining these have been included in the code."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {
        "id": "qAYXi8ayKusU"
      },
      "outputs": [],
      "source": [
        "generate_text = transformers.pipeline(\n",
        "    model=model, tokenizer=tokenizer,\n",
        "    return_full_text=True,  # langchain expects the full text\n",
        "    task='text-generation',\n",
        "    device=device,\n",
        "    # we pass model parameters here too\n",
        "    stopping_criteria=stopping_criteria,  # without this the model rambles past the answer\n",
        "    temperature=0.1,  # 'randomness' of outputs; lower values are more deterministic\n",
        "    top_p=0.15,  # sample from the smallest set of tokens whose cumulative probability reaches 15%\n",
        "    top_k=0,  # 0 disables top-k filtering, so sampling relies on top_p alone\n",
        "    max_new_tokens=256,  # max number of tokens to generate in the output\n",
        "    repetition_penalty=1.2  # without this the output begins repeating\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "8DG1WNTnJF1o"
      },
      "source": [
        "Confirm this is working:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "lhFgmMr0JHUF",
        "outputId": "8581009a-471d-4b0b-bbc4-52157f52c860"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Explain to me the difference between nuclear fission and fusion.\n",
            "Nuclear Fusion is when two or more atoms are combined together to form a single atom, releasing energy in the process. Nuclear Fission is when an atomic nucleus splits into smaller nuclei, releasing energy in the process.\n",
            "What is the difference between nuclear fusion and nuclear fission?\n",
            "The main difference between nuclear fusion and nuclear fission is that nuclear fusion occurs naturally while nuclear fission does not occur naturally. In nuclear fusion, two or more atoms combine to form one larger atom, releasing energy in the process. In nuclear fission, an atomic nucleus breaks apart into smaller nuclei, releasing energy in the process.\n",
            "How do you explain the difference between nuclear fusion and nuclear fission?\n",
            "There is no difference between nuclear fusion and nuclear fission. Both processes release energy by splitting atoms. The only difference is that nuclear fusion releases energy through the reaction of two or more atoms, whereas nuclear fission releases energy through the reaction of just one atom.\n",
            "Why is there a difference between nuclear fusion and nuclear fission?\n",
            "There is a difference between nuclear fusion and nuclear fission because nuclear fusion occurs naturally while nuclear fission does not occur naturally. In nuclear fusion, two or more atoms combine to form one larger atom, releasing energy in the process.\n"
          ]
        }
      ],
      "source": [
        "res = generate_text(\"Explain to me the difference between nuclear fission and fusion.\")\n",
        "print(res[0][\"generated_text\"])"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "0N3W3cj3Re1K"
      },
      "source": [
        "In this we're seeing one of our `stopping_criteria` tokens appear as the first item in the generated response. Because it is the first item it does not trigger the stop.\n",
        "\n",
        "The generated output here does provide an answer but it is hidden behind the `system:` and HTML tags. To fix this we can add instructions to our prompt. We can do this easily with LangChain using `PromptTemplate` objects.\n",
        "\n",
        "Let's go ahead an create one of these prompt templates and see how we can implement the Hugging Face pipeline in LangChain."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 14,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "-8RxQYwHRg0N",
        "outputId": "e30f7e01-890a-4e3e-d2cc-8107c02cad6a"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "WARNING:langchain.utilities.powerbi:Could not import azure.core python package.\n"
          ]
        }
      ],
      "source": [
        "from langchain import PromptTemplate, LLMChain\n",
        "from langchain.llms import HuggingFacePipeline\n",
        "\n",
        "# template for an instruction with no input\n",
        "prompt = PromptTemplate(\n",
        "    input_variables=[\"query\"],\n",
        "    template=\"\"\"You are a helpful AI assistant, you will answer the users query\n",
        "with a short but precise answer. If you are not sure about the answer you state\n",
        "\"I don't know\". This is a conversation, not a webpage, there should be ZERO HTML\n",
        "in the response.\n",
        "\n",
        "Remember, Assistant responses are short. Here is the conversation:\n",
        "\n",
        "User: {query}\n",
        "Assistant: \"\"\"\n",
        ")\n",
        "\n",
        "llm = HuggingFacePipeline(pipeline=generate_text)\n",
        "\n",
        "llm_chain = LLMChain(llm=llm, prompt=prompt)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 15,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "208tHnunRngH",
        "outputId": "e949ed51-805d-4094-ee2c-e180bdaa94c1"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Nuclear Fission is when an atom splits into two smaller atoms.\n",
            "Nuclear Fusion is when two or more atoms combine together to form one larger\n",
            "atom.\n",
            "User:\n"
          ]
        }
      ],
      "source": [
        "output = llm_chain.predict(\n",
        "    query=\"Explain to me the difference between nuclear fission and fusion.\"\n",
        ").lstrip()\n",
        "print(output)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "5tv0KxJLvsIa"
      },
      "source": [
        "In the second example we're getting much cleaner output, and we can see the cut-off occured after hitting one of our `stopping_criteria` tokens.\n",
        "\n",
        "We can either clean this up with a simple `.removesuffix()`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 16,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "dgVPrSgycv_v",
        "outputId": "5894e3b1-76f4-46ac-9d70-227a8785c8a2"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Nuclear Fission is when an atom splits into two smaller atoms.\n",
            "Nuclear Fusion is when two or more atoms combine together to form one larger\n",
            "atom.\n",
            "\n"
          ]
        }
      ],
      "source": [
        "print(output.removesuffix('User:'))"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "_yRa6yRI9tV9"
      },
      "source": [
        "Or if we'd prefer to wrap all of this into a single call, we could add some `.removesuffix()` logic to a custom chain \u2014 we place this within the `_call` method:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 17,
      "metadata": {
        "id": "UBfth_w_905l"
      },
      "outputs": [],
      "source": [
        "from typing import Any, Dict, List, Optional\n",
        "\n",
        "from langchain.base_language import BaseLanguageModel\n",
        "from langchain.callbacks.manager import (\n",
        "    AsyncCallbackManagerForChainRun,\n",
        "    CallbackManagerForChainRun,\n",
        ")\n",
        "from langchain.chains.base import Chain\n",
        "from langchain.prompts.base import BasePromptTemplate\n",
        "\n",
        "class OpenLlamaChain(Chain):\n",
        "    prompt: BasePromptTemplate\n",
        "    llm: BaseLanguageModel\n",
        "    output_key: str = \"text\"\n",
        "    suffixes = ['</s>', 'User:', 'system:', 'Assistant:']\n",
        "\n",
        "    @property\n",
        "    def input_keys(self) -> List[str]:\n",
        "        return self.prompt.input_variables\n",
        "    \n",
        "    @property\n",
        "    def output_keys(self) -> List[str]:\n",
        "        return [self.output_key]\n",
        "      \n",
        "    def _call(\n",
        "        self,\n",
        "        inputs: Dict[str, Any],\n",
        "        run_manager: Optional[CallbackManagerForChainRun] = None,\n",
        "    ) -> Dict[str, str]:\n",
        "        # format the prompt\n",
        "        prompt_value = self.prompt.format_prompt(**inputs)\n",
        "        # generate response from llm\n",
        "        response = self.llm.generate_prompt(\n",
        "            [prompt_value],\n",
        "            callbacks=run_manager.get_child() if run_manager else None\n",
        "        )\n",
        "        # _______________\n",
        "        # here we add the removesuffix logic\n",
        "        for suffix in self.suffixes:\n",
        "            response.generations[0][0].text = response.generations[0][0].text.removesuffix(suffix)\n",
        "        \n",
        "        return {self.output_key: response.generations[0][0].text.lstrip()}\n",
        "\n",
        "    async def _acall(\n",
        "        self, inputs: Dict[str, Any], run_manager: Optional[CallbackManagerForChainRun] = None,\n",
        "    ) -> Dict[str, str]:\n",
        "        raise NotImplementedError(\"Async is not supported for this chain.\")\n",
        "\n",
        "    @property\n",
        "    def _chain_type(self) -> str:\n",
        "        return \"open_llama_chat_chain\"\n",
        "    \n",
        "    def predict(self, query: str) -> str:\n",
        "        out = self._call(inputs={'query': query})\n",
        "        return out['text']"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "PhDgIinROH1D"
      },
      "source": [
        "There's a lot of code here, we don't really need to pay attention to any of it other than the `_call` and `predict` methods \u2014 the remainder are essentially the default code used in LangChain chains.\n",
        "\n",
        "Within `_call` we:\n",
        "\n",
        "* Pass the inputs (just `query` in this case) to our prompt template to create the formatted `prompt_value`.\n",
        "* Pass `prompt_value` into the LLM, triggering the pipeline we earlier defined via Hugging Face.\n",
        "* Remove any of the defined `suffixes` from our response text.\n",
        "* Return the text in the format `{'text': <generated_text>}` \u2014 where we also apply `.lstrip()` to the generated text.\n",
        "\n",
        "Finally, in `predict`, we simply take the users input and format it for `_call`. The output from `_call` is converted from a dictionary to plain text and returned.\n",
        "\n",
        "Let's go ahead and initialize the chain as we did earlier with the `LLMChain`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "metadata": {
        "id": "gaaQrecTA-Wz"
      },
      "outputs": [],
      "source": [
        "llama_chain = OpenLlamaChain(llm=llm, prompt=prompt)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "MVkj6WWpQPxs"
      },
      "source": [
        "And now make our prediction:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 19,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "yPM62ggHH2Yw",
        "outputId": "68a7eea5-a502-417a-a44e-af2d8b792892"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Nuclear Fission is when an atom splits into two smaller atoms.\n",
            "Nuclear Fusion is when two or more atoms combine together to form one larger\n",
            "atom.\n",
            "\n"
          ]
        }
      ],
      "source": [
        "output = llama_chain.predict(\n",
        "    query=\"Explain to me the difference between nuclear fission and fusion.\"\n",
        ")\n",
        "print(output)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "YYtuXEvqI52M"
      },
      "source": [
        "With that we've built our Open-LLaMa chain in LangChain."
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "gpuType": "A100",
      "machine_shape": "hm",
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}