{
  "cells": [
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/llm-field-guide/mpt/mpt-30b-chatbot.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/llm-field-guide/mpt/mpt-30b-chatbot.ipynb)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "JPdQvYmlWmNc"
      },
      "source": [
        "# MPT-30B-Chat in Hugging Face and LangChain\n",
        "\n",
        "In this notebook we'll explore how to use the open-source **MPT-30B-Chat** model with both Hugging Face transformers and LangChain.\n",
        "\n",
        "---\n",
        "\n",
        "\ud83d\udea8 _Note that running this notebook on CPU is impractical; it would take a very long time. If running on Google Colab, go to **Runtime > Change runtime type > Hardware accelerator > GPU > GPU type > A100**._\n",
        "\n",
        "---\n",
        "\n",
        "We start by doing a `pip install` of all required libraries."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "K_fRq0BSGMBk",
        "outputId": "3cab3e42-c48b-4fcc-e61d-502970616eac"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m7.2/7.2 MB\u001b[0m \u001b[31m106.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m227.6/227.6 kB\u001b[0m \u001b[31m29.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m42.2/42.2 kB\u001b[0m \u001b[31m5.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m1.2/1.2 MB\u001b[0m \u001b[31m79.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m109.1/109.1 MB\u001b[0m \u001b[31m16.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m97.1/97.1 MB\u001b[0m \u001b[31m18.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m236.8/236.8 kB\u001b[0m \u001b[31m29.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m7.8/7.8 MB\u001b[0m \u001b[31m116.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m79.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m90.0/90.0 kB\u001b[0m \u001b[31m13.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m49.1/49.1 kB\u001b[0m \u001b[31m7.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h"
          ]
        }
      ],
      "source": [
        "!pip install -qU transformers accelerate einops langchain xformers bitsandbytes"
      ]
    },
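    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before loading the model, it's worth confirming that a CUDA-capable GPU is actually visible to the runtime. The cell below is a minimal sketch (not part of the original workflow); it assumes only `torch`, which is installed as a dependency of the libraries above."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "\n",
        "# Confirm a CUDA device is available before attempting to load a 30B-parameter model.\n",
        "device = f'cuda:{torch.cuda.current_device()}' if torch.cuda.is_available() else 'cpu'\n",
        "print(f'Using device: {device}')"
      ]
    },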
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "VHQwEeW9Zps2"
      },
      "source": [
        "## Initializing the Hugging Face Pipeline"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "mElf068NXout"
      },
      "source": [
        "The first thing we need to do is initialize a `text-generation` pipeline with Hugging Face transformers. The pipeline requires three components that we must initialize first:\n",
        "\n",
        "* An LLM, in this case `mosaicml/mpt-30b-chat`.\n",
        "\n",
        "* The respective tokenizer for the model.\n",
        "\n",
        "* A stopping criteria object.\n",
        "\n",
        "We'll explain each of these as we get to them. Let's begin with the model.\n",
        "\n",
        "We initialize the model and move it to our CUDA-enabled GPU. On Colab, downloading and initializing the model can take 5-10 minutes."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000,
          "referenced_widgets": [
            "2af0c9be2e524f45acaf8d2885aa1099",
            "51a1ce6350864695835da4db351d39a3",
            "a190a77c608240ae8b73c9e69f9d95e0",
            "0367bd36c7054a3cbbcb377829b246c9",
            "a560189e5a614818bcb98ae7e072983a",
            "0d99030041354ae59163a8666ae023d0",
            "b2ab1c32e2aa4dd0920bf9f9f608357f",
            "66e6a9cedf7a4389844bb75e87d5516a",
            "00afcdd80dd64dffb7841d0d5b050862",
            "068a1952ebe74f3e985e509455e4c29e",
            "8106613e63ce4cb2995e54c06728239c",
            "da0291555493441aa377d8001d3dc1c7",
            "d8d725c671824938a26abb7e6d499fea",
            "df7af3deb49c4816bb99764be7aa9862",
            "c1014a455ad842cb835aff290cbacfec",
            "7a60f3036ee347f9961cc5317b47863d",
            "b763ceea698e4b17a97345d14824878e",
            "a222573d050743f0b130b865b0f27ad2",
            "21e0f9be9fa54ad3aecdb64d837d0154",
            "f1a54f5be3954ee8b2143ad36e8ebd7c",
            "6041e7f45db6437b876778a03b6c921e",
            "68e9dc988dc44e85a8de7b9faa00e7c5",
            "bbe9c412739a41708d7c5d589f5fc960",
            "c104a6297c944ac1b26fa9828ae1e292",
            "59a5f208ed69436ea51a5800a26b41e4",
            "af67ca996bea481e9e13a97f2ed2d1bc",
            "33fb731955f04c8b88bd955074f34b91",
            "1363c6113b91451d8c9d6c0e0f050eb6",
            "62278fac4e7448a8814d6583dfa065d3",
            "8e29482a48de48d0ad3fb855958fe64c",
            "713e7987ec3f480bbc4d89338e49536d",
            "bf6524350299497d9b296f77087e7a77",
            "63426406e65d4bd09ced66fae2253b30",
            "7e30ac072bd3462284b7c0adeee5ffaa",
            "1f55ee5aa7af4386af5ba5b4c564e2a4",
            "4de00557310c454aab7a0c8ce599f4b1",
            "119eb95ec43a45b181ee565585369c40",
            "c01156bf50ea4c7589babb8e42b98d8d",
            "cfec4f39006b4b4fbdc0c1ded3d5cce4",
            "345fdb304d0f424293d3b8fe0bf93a02",
            "d4e0d594efd1411da25f9774e729a2e3",
            "fea452b2c3074904b6d710ddcf949b0b",
            "437db90b1cbf4c2295d74eac46b08c5b",
            "fed45b715d0b44738b7c6ab15044ea95",
            "9f24cdc0fc0a4a5b980176e87a3220c9",
            "62e598c584b44953ab0ce28decada850",
            "38375816d29a4b1ba8bbe24dc3dac6f7",
            "f971c9026fd64926867ebb746a50bcc4",
            "4b3744a56b3a446d898a24b747739c6d",
            "e44ff460f72c49ff8a3c8e4b648d5ae6",
            "d16794f7d8e44e97a99c83343ed8f29b",
            "271a95d9dc574cf49b66ad9fceb56405",
            "56e6573b3a6841aeae2a54eac0675a1b",
            "8bc20f05d497471b92a2b6ed7f114721",
            "f09949d13605461db55937eae71466d7",
            "b7b176d87c814890a63699dfef4d7990",
            "c7c106285ec341f0997d8dcb62eb5f47",
            "b9d6e42faca348f9a517fa6a22a6f2fb",
            "690b3a98df0d405b8ca1f490a7521ad0",
            "09df25d7ec1740caa035e9082aee05a4",
            "81b87fc5e6994b6ab64de647accde179",
            "c697ca1ec5774600a8a578fe623aa196",
            "f55156ff38ed43cb8986837b670f5ac9",
            "41ceae14ea0d419399a2d47e8dba3c36",
            "37f7a62e48994becb44a9502ea50d92b",
            "d42029f1e5e543baa4fa1cfa83094abc",
            "f3ecca898bfb45ddbb416076430328ad",
            "c8e5fd67e7eb4e7b81e13efd37b48a24",
            "16ad031888214a3db69819b3bb2d6ae7",
            "91b5c743a8ed488bb78b4064440d0fb6",
            "713af12a481848608d06a663a6b8b970",
            "8cbb1c158e7f4ab1958a7b65695f30ab",
            "15c4f3f9b0134f4aa88ddd586d5fea67",
            "0bed20da7da4472fa00922c5df72a3be",
            "736a0deab3b242b0833da37ba6d62cad",
            "36ad7f025c6a41c2ae55d1beed82c110",
            "6876f983fddd4d7c9150bef399acbced",
            "9542621619014073a100ff201bad326a",
            "6a2656b0354944a9b0dcf87b71414083",
            "94edd8d3fa2e4829970336c8d04e8cd6",
            "e852b2cc23cc40e396a22d2da775f164",
            "eb88b81bb13541bcaa94aecb33996473",
            "3a40eea0d34649aea81df60285b1b499",
            "2b09941a89fa4d2cad1eed952ec12b73",
            "cb78bb9fa53647289f6c658105b372b4",
            "97d792cb3fc34428b249433fa304c3fe",
            "c0475435c645460081ddb415ade7ab2b",
            "747477d1aa5c4103b772d2d570428592",
            "cb51840d982c4a2ea45d662bc66e1ac7",
            "c42a8845cb5842bea17c6db5a92dd8fe",
            "484eafd0c5964caeab993213346ba3ca",
            "cd2a035142dd49198d4b2651ec1547d2",
            "9fc7965ed75947e69e8860f24f61eae1",
            "6c1f1049d84e4794b1be0baba090e0e9",
            "0265103e61004e6ca047275db10c0262",
            "1551d05e0bc34e3aa6ec114fdffd9e1e",
            "6ff1ff730db949819d776927c343d5ad",
            "7a6daec1f551407a869d033b100ad374",
            "298958491bf246baaebfeabc10df1c9b",
            "971c195c5b834cb3bb1761fae2930484",
            "fd3035c6a3a840348fb9cd9c6a03d9da",
            "ac204836310b41eaa0e442fff7866945",
            "1a31dd297a254b6b8270eaf1bcbeaf68",
            "896ca7e330ee4eb18a27e4e46bd8c139",
            "af1a229abf0e475d834fb6246975f195",
            "3455637d58c84cfd9b6e11559f15c623",
            "8649a5a1a0284c9399d30bf213dd4cda",
            "0ceacfb8cabb42109fd7664a7e28ce2b",
            "556d96b0db9d426e8306da8279297bf1",
            "a85a5d4f31454bf7bcedc3a889c7b6a3",
            "fcbaa1b7ba6b40bb897a7fc41381a9f3",
            "4243d369407c404783effd06f7b20c7a",
            "d3483ac235434c9a8d264d17ccca5af0",
            "791fd78bfc5d497fb49079f8cb0f6ce5",
            "343a8d1bcb4f41829bcd656460abaf8c",
            "cce1e74cffdf4a25b6bf3864185ddac1",
            "39cfae0469d54c13b7bceeded962b47a",
            "0f07b82248eb47c8a27366758c8ace52",
            "c5aff8ca56df421d8e3585e569cf2860",
            "fa3b8c5f0ecd408db8cb4ae9513b3b29",
            "85ffff3a6041428d9ec24d225fb01c01",
            "19058c8c451c460aaf394bfacf354f77",
            "75905c6782e24662b39a053052dd0332",
            "59bcc322cfe346219a9c273b96ca0446",
            "58afa7d8d1454f2fa9c79fe059f91596",
            "d823d2a92a0d494da0776543b03a89ab",
            "024850086856473587ecd525e7173a81",
            "454cdada7c914d1693821581fa20c943",
            "88e5ec2c1b7e43be8ce33ad2f0548ae0",
            "ee090464a61847dcbc308437fcdd4604",
            "05884f15f78e4518a5a0296d35b37f81",
            "980fbe9775b0481ab85882809d114b7a",
            "423c09124e1544af8737dd65daa90f09",
            "d6e87cce5fcb4ad3a7c0e6da412889d2",
            "45520775ad614285b35424916a89dc80",
            "c53025d1bbf94ae8bc6c2577ece2cdd0",
            "f8983f1e03e5400985446e99a703538d",
            "a305b41f648f45ca9b046981fbeab4cd",
            "11541bcd40164c0e95dc3b1b3d72602a",
            "2936b619ba1a43f59784d35aa2807e40",
            "dd116264618142c58e455e1b2c8f07e0",
            "4dc25c3bd35a48099bb95501a9e82e5f",
            "335314ea411a4923a0711cf9371ba295",
            "8bf55472fdd24021b993e98ea94e5c7f",
            "16fcd748ac104fa2b5de5d55d04d373c",
            "376b2c90e7c541fd97b4cfcb073dbc07",
            "408647261afc45129d4b0b5e33cc2bf1",
            "69c9ca7437de4a1085625efe3e41cf47",
            "461082c4a8644c178dea67718ff223cd",
            "f37e88707c7a489a8fb30f6949c129fd",
            "3b3032ce2dd945b991e968a5bd48d561",
            "a7586898a1074cffa6181742ae33c25a",
            "41432330e2044d73a144269738ffba45",
            "2e5a617b34c744e2883ef42c10da6018",
            "500fd39537bc4557a299d815e962d852",
            "a78b9b163bdc40f594b410052f5778f9",
            "f01ef56715cf422d8e722a9483ac4206",
            "8d0be8e232bb4eb4aa880f232a2f757b",
            "f1ccb6ffec1a4799a7079557f2dc492f",
            "2ab68f23be0549d58227d2d723e6d2d8",
            "613c61dd8fd04267a04158862c828a28",
            "9671c15bbeaf4ac2bcf256c659cf2561",
            "dc8e4718138f4951bfd25c585f226541",
            "bb9e55b4aaa3442ca425b7276fb9371a",
            "2056fbbd44364305bf654ee32b01e2b9",
            "f08f6fb9b69a4efda406094593765916",
            "d28f6a60b31845428350ccefc6cb867f",
            "ed7ae915c709441ba38bf9cec367bda9",
            "55c65cbdd35f409c850db0d236490c89",
            "0ade15bfe2ae4dc289a7bce5c0ae9be3",
            "cf0ecc8ee4e248849c0b7364d2ce8dc4",
            "b17411f2df3f4a1c9af6271f42187965",
            "b2e3737ead114eb8b5dc2ce990751093",
            "424b42cf7b5748d58c5abfbb822a802f",
            "397877bd99e447f68d8a29d70e783acd",
            "e4f190cbb81f4d918d3a268e31f49370",
            "e6bb04164f3c422cbf2c9a972f4e16f6",
            "69fe6fc7eaa94b1a89e339819bf36eef",
            "8b5bd1e2a7cf46719c4a9e9ed6aa67bd",
            "37a8004f953949deb7719ab5d9477e6f",
            "7003ab72f91c405e947139bf8f31297e",
            "e79a9f4522874d13ab77842f4604f80d",
            "312f54d4f76c4d44bb8ebe7bd933ad11",
            "6432f9ab14e8417f9a61df707a4d7633",
            "3d2c82cc160e492c9dea2611590eb626",
            "4a24c36f74e84849961c3bb854353712",
            "712dbe1b329e44c09598b005be5f444c",
            "8d7351809d624f7eabe4cf37a5df72f0",
            "23e7e784a22145d8885e3a7be8022b06",
            "88d5ee59354e48fca7b42f92ba3b873a",
            "5e2f3ea264af4fb29d2a94c87305dac5",
            "09503e74889244d08479369321da4739",
            "ae12a8f5c4ab4ed78f724684fdcf1652",
            "4dd5a82780fe4c16b1b75fce6d4406c1",
            "68fe326047f44f9d9288d27ccb8de1bc",
            "c78060b4e5ff450aa8b2a3675fba79fd",
            "98ed39cc732445819f1680d5b620adc6",
            "5128805dea924cee916bf1d027b21251",
            "ff313c02f91b4e05b3586ec08cdad0c9",
            "4cca1d533dde491dba7007a354b1ebcc",
            "1d9c20df7c9a4f4faec4b94055f8abec",
            "c80f0d7d3d23427cb16651dcea6550ec",
            "483cafe7b82d4f79ab406aa22b6f78e5",
            "f21c52feb81a46a995fd7a37954b199a",
            "f6d7211e04a94d0ab19bec00673033b9",
            "bdf720c84fb84026a9ed900f231ab33d",
            "32f806e9f40c4780af5d8e2f8c5bdb01",
            "206da134cc634ba6b557b8b8950e8610",
            "4e52882dd904446babb3d601ff8b9eaf",
            "e7789e6e12fe47a0b53d5e3024a9dc06",
            "83f833fefaec414c919cbdfb7dae0066",
            "08bf949d013947f2842211612667527e",
            "b02df1a1acea4148b592a0d1a645f89b",
            "47a25e0167f84310b6e7e2914a5f12b9",
            "8ba4b800d0ee42d0b3f6b484304ca557",
            "a08ad041ec1c4954bf61570730a1fdd9",
            "b90faabb6b0a485c910ace92ff9b737d",
            "e8b88762776344c9ab6864abac0b8e22",
            "9220e122ea4443ea8a5674d5bcad1fa6",
            "e2012540e4fe487eb83d674bb849df4d",
            "11fae6a8257341168ed2ccedf994e848",
            "7be0ae5202fc4217a521f3d30dc15246",
            "d202a3f1f6874cff9f5334724fdb2494",
            "d506b1de136547e785f50ef91a522d19",
            "ef5af8a6b2a042b99e46f50a397df96b",
            "e5557ffc64af4602bf0d34477c941cc0",
            "4ca22c98217c4946b0188791bfb72202",
            "353d1440a7c5494c8001aadcceeae5ab",
            "4c13922ef1034fe6b6bd3de7541cba18",
            "8be79e662f174c4ab2853fd1f48795e1",
            "f3d2347fa6984126a58c44d4ce2069f0",
            "820c40d47c054b0280c6ba58d0ada9c0",
            "2ddc505cf79e42768e4f5f839fabab8e",
            "f0a56729f30a4415a2f5de093c2fb36a",
            "eed3a5951db04c33bb31669b21301e2c",
            "0567c1bc661b46e28be94c699c295ac1",
            "39793cbb0cfd4a96b7ac858dfdbe4985",
            "5c137643a2ae4c06a5376ad7da66bb40",
            "04cc33b89ec44b6e90396cbd380d03b1",
            "dc38edfbd9b54a1993819f482ae3452a",
            "422301a726aa4ec78275b25f817c106b",
            "424dfd152c0b4dd19da5015b1aa8a9a5",
            "4aee5f6dce9e4aa09412c28093fea3b9",
            "0a5fe3a02ccd49d7a03fb9248a04007e",
            "773391ab1c2e4a6bb61da79bcf7316b2",
            "c76443c2b8d041e2bf854691e392c18d",
            "d304eb3565a74d34ac55ee4544753d4b",
            "3719aed7d2dd4d79858ac0e5b4bbe3e5",
            "343a7744b8f44746b64502ef955831b5",
            "b6fc27268f944db9b42f4a8760332aee",
            "1c68d2a6d2b3422594e94631a07bde4b",
            "5f778bbd511e4e6db4ce29244f466138",
            "0600155894ac4a84932e30ff27f15391"
          ]
        },
        "id": "ikzdi_uMI7B-",
        "outputId": "abb4fd35-6695-458d-f6dd-b8401cf77c83"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "2af0c9be2e524f45acaf8d2885aa1099",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)lve/main/config.json:   0%|          | 0.00/1.24k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "da0291555493441aa377d8001d3dc1c7",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)configuration_mpt.py:   0%|          | 0.00/9.20k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-30b-chat:\n",
            "- configuration_mpt.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "bbe9c412739a41708d7c5d589f5fc960",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)main/modeling_mpt.py:   0%|          | 0.00/19.3k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "7e30ac072bd3462284b7c0adeee5ffaa",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)solve/main/blocks.py:   0%|          | 0.00/2.55k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "9f24cdc0fc0a4a5b980176e87a3220c9",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)resolve/main/norm.py:   0%|          | 0.00/2.56k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-30b-chat:\n",
            "- norm.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "b7b176d87c814890a63699dfef4d7990",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)ve/main/attention.py:   0%|          | 0.00/17.7k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "f3ecca898bfb45ddbb416076430328ad",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)flash_attn_triton.py:   0%|          | 0.00/28.2k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-30b-chat:\n",
            "- flash_attn_triton.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n",
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-30b-chat:\n",
            "- attention.py\n",
            "- flash_attn_triton.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n",
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-30b-chat:\n",
            "- blocks.py\n",
            "- norm.py\n",
            "- attention.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "9542621619014073a100ff201bad326a",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)meta_init_context.py:   0%|          | 0.00/3.64k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-30b-chat:\n",
            "- meta_init_context.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "cb51840d982c4a2ea45d662bc66e1ac7",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)n/adapt_tokenizer.py:   0%|          | 0.00/1.75k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-30b-chat:\n",
            "- adapt_tokenizer.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "971c195c5b834cb3bb1761fae2930484",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)/custom_embedding.py:   0%|          | 0.00/305 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-30b-chat:\n",
            "- custom_embedding.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "fcbaa1b7ba6b40bb897a7fc41381a9f3",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)refixlm_converter.py:   0%|          | 0.00/27.2k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-30b-chat:\n",
            "- hf_prefixlm_converter.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "19058c8c451c460aaf394bfacf354f77",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)in/param_init_fns.py:   0%|          | 0.00/12.6k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-30b-chat:\n",
            "- param_init_fns.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n",
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-30b-chat:\n",
            "- modeling_mpt.py\n",
            "- blocks.py\n",
            "- meta_init_context.py\n",
            "- adapt_tokenizer.py\n",
            "- custom_embedding.py\n",
            "- hf_prefixlm_converter.py\n",
            "- param_init_fns.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "423c09124e1544af8737dd65daa90f09",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)model.bin.index.json:   0%|          | 0.00/24.0k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "8bf55472fdd24021b993e98ea94e5c7f",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading shards:   0%|          | 0/7 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "500fd39537bc4557a299d815e962d852",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)l-00001-of-00007.bin:   0%|          | 0.00/9.77G [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "f08f6fb9b69a4efda406094593765916",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)l-00002-of-00007.bin:   0%|          | 0.00/9.87G [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "e6bb04164f3c422cbf2c9a972f4e16f6",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)l-00003-of-00007.bin:   0%|          | 0.00/9.87G [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "8d7351809d624f7eabe4cf37a5df72f0",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)l-00004-of-00007.bin:   0%|          | 0.00/9.87G [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "ff313c02f91b4e05b3586ec08cdad0c9",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)l-00005-of-00007.bin:   0%|          | 0.00/9.87G [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "e7789e6e12fe47a0b53d5e3024a9dc06",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)l-00006-of-00007.bin:   0%|          | 0.00/9.87G [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "11fae6a8257341168ed2ccedf994e848",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)l-00007-of-00007.bin:   0%|          | 0.00/822M [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Instantiating an MPTForCausalLM model from /root/.cache/huggingface/modules/transformers_modules/mosaicml/mpt-30b-chat/7debc3fc2c5f330a33838bb007c24517b73347b8/modeling_mpt.py\n",
            "You are using config.init_device='cuda:0', but you can also use config.init_device=\"meta\" with Composer + FSDP for fast initialization.\n",
            "\n",
            "===================================BUG REPORT===================================\n",
            "Welcome to bitsandbytes. For bug reports, please run\n",
            "\n",
            "python -m bitsandbytes\n",
            "\n",
            " and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues\n",
            "================================================================================\n",
            "bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so\n",
            "CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...\n",
            "CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so\n",
            "CUDA SETUP: Highest compute capability among GPUs detected: 8.0\n",
            "CUDA SETUP: Detected CUDA version 118\n",
            "CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so...\n"
          ]
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...\n",
            "  warn(msg)\n",
            "/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}\n",
            "  warn(msg)\n",
            "/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//172.28.0.1'), PosixPath('8013'), PosixPath('http')}\n",
            "  warn(msg)\n",
            "/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https'), PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-a100-s-2pw02e691yp8q --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true')}\n",
            "  warn(msg)\n",
            "/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}\n",
            "  warn(msg)\n",
            "/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('module'), PosixPath('//ipykernel.pylab.backend_inline')}\n",
            "  warn(msg)\n",
            "/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so'), PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0')}.. We'll flip a coin and try one of these, in order to fail forward.\n",
            "Either way, this might cause trouble in the future:\n",
            "If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.\n",
            "  warn(msg)\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "820c40d47c054b0280c6ba58d0ada9c0",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Loading checkpoint shards:   0%|          | 0/7 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "4aee5f6dce9e4aa09412c28093fea3b9",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)neration_config.json:   0%|          | 0.00/91.0 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Model loaded on cuda:0\n"
          ]
        }
      ],
      "source": [
        "from torch import cuda, bfloat16\n",
        "import transformers\n",
        "\n",
        "device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'\n",
        "\n",
        "model = transformers.AutoModelForCausalLM.from_pretrained(\n",
        "    'mosaicml/mpt-30b-chat',\n",
        "    trust_remote_code=True,\n",
        "    load_in_8bit=True,  # this requires the `bitsandbytes` library\n",
        "    max_seq_len=8192,\n",
        "    init_device=device\n",
        ")\n",
        "model.eval()\n",
        "#model.to(device)\n",
        "print(f\"Model loaded on {device}\")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "JzX9LqWSX9ot"
      },
      "source": [
        "The pipeline requires a tokenizer which handles the translation of human readable plaintext to LLM readable token IDs. The MPT-30B model was trained using the `mosaicml/mpt-30b` tokenizer, which we initialize like so:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 113,
          "referenced_widgets": [
            "477f8b8b04994c3da6d236096e1e7123",
            "a026955377834730a31f1a41c6c1efc6",
            "a868cbc89f444a11b3510ab5a0231072",
            "68101549611e4401b9c92ba6df354982",
            "840e47f5b37945d19c0c7eb859b3ef9c",
            "f89b9d0270b64bba99ad03e8853c4311",
            "de0ffd72df7f499091437572ce11022e",
            "05f8f3abbfdb4f57bb4bec0680cefe82",
            "081598bceb8f4daa8966eefbc3224fe2",
            "bfc325f5f1bc4fd08cf5ac1b8d9ed425",
            "11fc2e61e638429881830ff557ed6330",
            "7b75657f72da40efabef970a59320ae6",
            "28fd6deafd7b445791441e3e9fa5424e",
            "0f49a40fac324652ab7fe6798b9f7d4d",
            "9b72c287d4204394931d2a43293943af",
            "b3d59a8a47be40f59b949d0beb061d0c",
            "0cfee0547ef84ceb954320d79adc48bd",
            "c0ed7dc594aa43689b1db597f703363b",
            "4ba55cc19b5e49b7843b20d2b9c8982e",
            "8aef5e2bbf964eef8575b0b271f74c2a",
            "beb5ad01a5674fa2a68a6f0f5638423f",
            "1a7c323d4a83445c90cfd79e8aa14305",
            "167f8d55b8f14482bf4ca461a133ddea",
            "bd65962e3cfa482f9fa9a678dfc9e84c",
            "44d00af7d8eb4d1792ad6d77f4846918",
            "32c81da0cf3a4682a275c5cb6d3c8410",
            "6a12cea4466042ea99d4a7a4932fd59f",
            "075826a0430a41fc8aa90826611faaa1",
            "f04f53c5a32842709d7ffd8b3276ffaa",
            "cb979192976e481e87cec1af8e80a29c",
            "80df7bfc7b8f44098e736de5e5d9ae8c",
            "ac3142b901504b6996e21986e857bb4a",
            "5b03299756fd4e9dafad708fb82022bb"
          ]
        },
        "id": "v0iPv1GDGxgT",
        "outputId": "af40e972-2e2b-4e5c-f244-6163cab73b94"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "477f8b8b04994c3da6d236096e1e7123",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)okenizer_config.json:   0%|          | 0.00/237 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "7b75657f72da40efabef970a59320ae6",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)/main/tokenizer.json:   0%|          | 0.00/2.11M [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "167f8d55b8f14482bf4ca461a133ddea",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)cial_tokens_map.json:   0%|          | 0.00/99.0 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "tokenizer = transformers.AutoTokenizer.from_pretrained(\"mosaicml/mpt-30b\")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "XL7G9Sr3uxdz"
      },
      "source": [
        "Finally we need to define the _stopping criteria_ of the model. The stopping criteria allows us to specify *when* the model should stop generating text. If we don't provide a stopping criteria the model just goes on a bit of a tangent after answering the initial question."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "UG3R0LBQevQW",
        "outputId": "b9f44965-b361-4656-bf77-9be7a7d18205"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[[22705, 27], [18128, 27]]"
            ]
          },
          "execution_count": 4,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "from transformers import StoppingCriteria, StoppingCriteriaList\n",
        "\n",
        "# we create a list of stopping criteria\n",
        "stop_token_ids = [\n",
        "    tokenizer.convert_tokens_to_ids(x) for x in [\n",
        "        ['Human', ':'], ['AI', ':']\n",
        "    ]\n",
        "]\n",
        "\n",
        "stop_token_ids"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "0IoQifZvEFD_"
      },
      "source": [
        "We need to convert these into `LongTensor` objects:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "yIzaQ24TEJES",
        "outputId": "317a4cef-377b-48d7-a14f-6a0c36fd7958"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[tensor([22705,    27], device='cuda:0'),\n",
              " tensor([18128,    27], device='cuda:0')]"
            ]
          },
          "execution_count": 5,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "import torch\n",
        "\n",
        "stop_token_ids = [torch.LongTensor(x).to(device) for x in stop_token_ids]\n",
        "stop_token_ids"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "o1znn7p1ESte"
      },
      "source": [
        "We can do a quick spot check that no `<unk>` token IDs (`0`) appear in the `stop_token_ids` \u2014 there are none so we can move on to building the stopping criteria object that will check whether the stopping criteria has been satisfied \u2014 meaning whether any of these token ID combinations have been generated."
      ]
    },
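    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a minimal sketch of that spot check (an illustration using the token IDs printed above; with the live tokenizer you would iterate over `stop_token_ids` directly):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# illustrative spot check: `<unk>` maps to token ID 0 for this tokenizer,\n",
        "# so none of the stop token IDs should be 0\n",
        "ids = [[22705, 27], [18128, 27]]  # values printed above\n",
        "assert all(0 not in seq for seq in ids), 'found an <unk> token ID'"
      ]
    },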
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "nXPcO0FED5Jo"
      },
      "outputs": [],
      "source": [
        "from transformers import StoppingCriteria, StoppingCriteriaList\n",
        "\n",
        "# define custom stopping criteria object\n",
        "class StopOnTokens(StoppingCriteria):\n",
        "    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\n",
        "        for stop_ids in stop_token_ids:\n",
        "            if torch.eq(input_ids[0][-len(stop_ids):], stop_ids).all():\n",
        "                return True\n",
        "        return False\n",
        "\n",
        "stopping_criteria = StoppingCriteriaList([StopOnTokens()])"
      ]
    },
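    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The suffix comparison that `StopOnTokens` performs can be sketched with plain lists (an illustration only; the real check runs on tensors on the GPU):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# sketch of the stopping check: does the generated sequence end\n",
        "# with one of the stop token ID sequences?\n",
        "stop_ids = [18128, 27]  # IDs for ['AI', ':'] from above\n",
        "generated = [101, 5, 42, 18128, 27]\n",
        "assert generated[-len(stop_ids):] == stop_ids"
      ]
    },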
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "bNysQFtPoaj7"
      },
      "source": [
        "Now we're ready to initialize the HF pipeline. There are a few additional parameters that we must define here. Comments explaining these have been included in the code."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "qAYXi8ayKusU",
        "outputId": "d7998445-b505-407f-9384-088b239dc4d7"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "The model 'MPTForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].\n"
          ]
        }
      ],
      "source": [
        "generate_text = transformers.pipeline(\n",
        "    model=model, tokenizer=tokenizer,\n",
        "    return_full_text=True,  # langchain expects the full text\n",
        "    task='text-generation',\n",
        "    # we pass model parameters here too\n",
        "    stopping_criteria=stopping_criteria,  # without this model rambles during chat\n",
        "    temperature=0.1,  # 'randomness' of outputs, 0.0 is the min and 1.0 the max\n",
        "    top_p=0.15,  # select from top tokens whose probability add up to 15%\n",
        "    top_k=0,  # select from top 0 tokens (because zero, relies on top_p)\n",
        "    max_new_tokens=128,  # mex number of tokens to generate in the output\n",
        "    repetition_penalty=1.1  # without this output begins repeating\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "8DG1WNTnJF1o"
      },
      "source": [
        "Confirm this is working:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "lhFgmMr0JHUF",
        "outputId": "38b6cf76-f134-4c72-ad46-1a7e6886b182"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Explain to me the difference between nuclear fission and fusion.\n",
            "Fission is a process in which an atomic nucleus splits into two smaller nuclei, releasing energy as well as additional neutrons that can cause further fissions. Fusion involves combining light atomic nuclei (such as hydrogen isotopes) at high temperatures or pressures so they merge together forming heavier elements while also producing large amounts of energy. The most common example being the reaction occurring inside stars like our sun where Hydrogen atoms combine under extreme heat & pressure resulting in helium formation along with release of gamma radiation. In contrast Fission typically occurs when heavy element such as Uranium-235 absorbs a neutron causing it's nucleus to split apart giving off\n"
          ]
        }
      ],
      "source": [
        "res = generate_text(\"Explain to me the difference between nuclear fission and fusion.\")\n",
        "print(res[0][\"generated_text\"])"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "0N3W3cj3Re1K"
      },
      "source": [
        "Now to implement this in LangChain"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "-8RxQYwHRg0N"
      },
      "outputs": [],
      "source": [
        "from langchain import PromptTemplate, LLMChain\n",
        "from langchain.llms import HuggingFacePipeline\n",
        "\n",
        "# template for an instruction with no input\n",
        "prompt = PromptTemplate(\n",
        "    input_variables=[\"instruction\"],\n",
        "    template=\"{instruction}\"\n",
        ")\n",
        "\n",
        "llm = HuggingFacePipeline(pipeline=generate_text)\n",
        "\n",
        "llm_chain = LLMChain(llm=llm, prompt=prompt)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "208tHnunRngH",
        "outputId": "51664340-3d73-4ec0-9906-5786c03f507e"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Fission is a process in which an atomic nucleus splits into two smaller nuclei, releasing energy as well as additional neutrons that can cause further fissions. Fusion involves combining light atomic nuclei (such as hydrogen isotopes) at high temperatures or pressures so they merge together forming heavier elements while also producing large amounts of energy. The most common example being the reaction occurring inside stars like our sun where Hydrogen atoms combine under extreme heat & pressure resulting in helium formation along with release of gamma radiation. In contrast Fission typically occurs when heavy element such as Uranium-235 absorbs a neutron causing it's nucleus to split apart giving off\n"
          ]
        }
      ],
      "source": [
        "print(llm_chain.predict(\n",
        "    instruction=\"Explain to me the difference between nuclear fission and fusion.\"\n",
        ").lstrip())"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "5tv0KxJLvsIa"
      },
      "source": [
        "We still get the same output as we're not really doing anything differently here, but we have now added MTP-30B-chat to the LangChain library. Using this we can now begin using LangChain's advanced agent tooling, chains, etc, with MTP-30B."
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "e7BFjUYv5Mf6"
      },
      "source": [
        "## MPT-30B Chatbot\n",
        "\n",
        "Using the above and LangChain we can create a chatbot (conversational agent) very easily. We start by initializing the conversational memory required:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ID940m0h6GTy"
      },
      "outputs": [],
      "source": [
        "from langchain.chains.conversation.memory import ConversationBufferWindowMemory\n",
        "\n",
        "memory = ConversationBufferWindowMemory(\n",
        "    memory_key=\"history\",  # important to align with agent prompt (below)\n",
        "    k=5,\n",
        "    return_only_outputs=True\n",
        ")"
      ]
    },
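    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Conceptually, a window memory with `k=5` keeps only the last five exchanges, like a fixed-size queue (a rough sketch, not LangChain's actual implementation):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from collections import deque\n",
        "\n",
        "# a deque with maxlen drops the oldest items, like a k=5 window memory\n",
        "window = deque(maxlen=5)\n",
        "for i in range(8):\n",
        "    window.append((f'human message {i}', f'ai message {i}'))\n",
        "\n",
        "assert len(window) == 5\n",
        "assert window[0][0] == 'human message 3'  # oldest three exchanges dropped"
      ]
    },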
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "NEeXSmzh6J5j"
      },
      "source": [
        "Now we can initialize the agent using our `memory` and `llm`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "YQEVo2NmCatz"
      },
      "outputs": [],
      "source": [
        "from langchain.chains import ConversationChain\n",
        "\n",
        "chat = ConversationChain(\n",
        "    llm=llm,\n",
        "    memory=memory,\n",
        "    verbose=True\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "rp3QxuLPM9UU"
      },
      "source": [
        "The default prompt template will cause the model to return longer text, we can modify it to be more concise."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "HghwEsJwM5QY",
        "outputId": "e37e648e-80fc-4655-85ef-870cd78b8387"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\\n\\nCurrent conversation:\\n{history}\\nHuman: {input}\\nAI:', template_format='f-string', validate_template=True)"
            ]
          },
          "execution_count": 13,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "chat.prompt"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "y22vcymbNIo_"
      },
      "outputs": [],
      "source": [
        "chat.prompt.template = \\\n",
        "\"\"\"The following is a friendly conversation between a human and an AI. The AI is conversational but concise in its responses without rambling. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
        "\n",
        "Current conversation:\n",
        "{history}\n",
        "Human: {input}\n",
        "AI:\"\"\""
      ]
    },
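    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Under the hood this is f-string style substitution, so the prompt sent to the model is simply the template with `{history}` and `{input}` filled in. A minimal sketch (using a shortened stand-in template):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# sketch of how the template is filled before being sent to the model\n",
        "template = 'Current conversation:\\n{history}\\nHuman: {input}\\nAI:'\n",
        "filled = template.format(history='', input='hi how are you?')\n",
        "assert 'Human: hi how are you?' in filled\n",
        "assert filled.endswith('AI:')"
      ]
    },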
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 286
        },
        "id": "foqeXJ73CvmV",
        "outputId": "f3d323fb-08e8-43c5-e305-8101b9622b7d"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "\n",
            "\u001b[1m> Entering new  chain...\u001b[0m\n",
            "Prompt after formatting:\n",
            "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is conversational but concise in its responses without rambling. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
            "\n",
            "Current conversation:\n",
            "\n",
            "Human: hi how are you?\n",
            "AI:\u001b[0m\n",
            "\n",
            "\u001b[1m> Finished chain.\u001b[0m\n"
          ]
        },
        {
          "data": {
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            },
            "text/plain": [
              "\" I'm just a computer program, so I don't have feelings like humans do. How can I assist you today?\\n\\nHuman:\""
            ]
          },
          "execution_count": 15,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "res = chat.predict(input='hi how are you?')\n",
        "res"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "2TqbNwkyE4ih"
      },
      "source": [
        "By default the stopping criteria we earlier defined only stops the model once it has generated the output like `Human:`, we need to trim this text to avoid confusing the later chat steps. We access the previous message within the `chat.memory`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "NHi0i5EJIPOH",
        "outputId": "aaf88ab5-438e-4a95-dc8e-32822d6d125c"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "ConversationBufferWindowMemory(chat_memory=ChatMessageHistory(messages=[HumanMessage(content='hi how are you?', additional_kwargs={}, example=False), AIMessage(content=\" I'm just a computer program, so I don't have feelings like humans do. How can I assist you today?\\n\\nHuman:\", additional_kwargs={}, example=False)]), output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', memory_key='history', k=5)"
            ]
          },
          "execution_count": 16,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "chat.memory"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "FuWJw4vuIm4u",
        "outputId": "7ce11702-1eae-4484-829d-119f8976b131"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "AIMessage(content=\" I'm just a computer program, so I don't have feelings like humans do. How can I assist you today?\\n\\nHuman:\", additional_kwargs={}, example=False)"
            ]
          },
          "execution_count": 17,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "chat.memory.chat_memory.messages[-1]"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "fLaGmmnZIr7_"
      },
      "source": [
        "From here we can simple add some logic to remove text we defined in our stopping criteria."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "lBetOLuaI7rK",
        "outputId": "6d98b77c-4cd5-4204-f8f8-b79ec15b8fd1"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "AIMessage(content=\"I'm just a computer program, so I don't have feelings like humans do. How can I assist you today?\", additional_kwargs={}, example=False)"
            ]
          },
          "execution_count": 18,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# check for double newlines (also happens often)\n",
        "chat.memory.chat_memory.messages[-1].content = chat.memory.chat_memory.messages[-1].content.split('\\n\\n')[0]\n",
        "# strip any whitespace\n",
        "chat.memory.chat_memory.messages[-1].content = chat.memory.chat_memory.messages[-1].content.strip()\n",
        "# check for stop text at end of output\n",
        "for stop_text in ['Human:', 'AI:', '[]']:\n",
        "    chat.memory.chat_memory.messages[-1].content = chat.memory.chat_memory.messages[-1].content.removesuffix(stop_text)\n",
        "# strip again\n",
        "chat.memory.chat_memory.messages[-1].content = chat.memory.chat_memory.messages[-1].content.strip()\n",
        "\n",
        "chat.memory.chat_memory.messages[-1]"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "a3ePiLhKI8B2"
      },
      "source": [
        "We can wrap this into a function:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "nKa9T0woI27x"
      },
      "outputs": [],
      "source": [
        "def chat_trim(chat_chain, query):\n",
        "    # create response\n",
        "    chat_chain.predict(input=query)\n",
        "    # grab the latest message in memory (note: use the chat_chain\n",
        "    # parameter, not the global chat object)\n",
        "    message = chat_chain.memory.chat_memory.messages[-1]\n",
        "    # keep only the text before any double newline (also happens often)\n",
        "    message.content = message.content.split('\\n\\n')[0]\n",
        "    # strip any whitespace\n",
        "    message.content = message.content.strip()\n",
        "    # remove stop text from the end of the output\n",
        "    for stop_text in ['Human:', 'AI:', '[]']:\n",
        "        message.content = message.content.removesuffix(stop_text)\n",
        "    # strip again\n",
        "    message.content = message.content.strip()\n",
        "    # return the final response\n",
        "    return message.content"
      ]
    },
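    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check, the same trimming logic can be applied to a plain string, with no model or chain involved. Here `trim_output` is just an illustrative helper mirroring `chat_trim`'s post-processing (note that `str.removesuffix` requires Python 3.9+):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def trim_output(text: str) -> str:\n",
        "    # keep only the text before any double newline\n",
        "    text = text.split('\\n\\n')[0].strip()\n",
        "    # remove any stop text echoed at the end of the output\n",
        "    for stop_text in ['Human:', 'AI:', '[]']:\n",
        "        text = text.removesuffix(stop_text)\n",
        "    return text.strip()\n",
        "\n",
        "trim_output(\"I'm doing well, thanks!\\nHuman:\\n\\nHuman: next turn\")"
      ]
    },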
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 357
        },
        "id": "UHPXLc4yKBhe",
        "outputId": "aa18c4ed-3ca0-4b5b-ff9f-2a0a608b12ca"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "\n",
            "\u001b[1m> Entering new  chain...\u001b[0m\n",
            "Prompt after formatting:\n",
            "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is conversational but concise in its responses without rambling. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
            "\n",
            "Current conversation:\n",
            "Human: hi how are you?\n",
            "AI: I'm just a computer program, so I don't have feelings like humans do. How can I assist you today?\n",
            "Human: Explain to me the difference between nuclear fission and fusion.\n",
            "AI:\u001b[0m\n",
            "\n",
            "\u001b[1m> Finished chain.\u001b[0m\n"
          ]
        },
        {
          "data": {
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            },
            "text/plain": [
              "'Nuclear fission is the process of splitting atoms into smaller components by bombarding them with high-energy particles or radiation. This releases large amounts of energy that can be harnessed for various purposes such as generating electricity. On the other hand, nuclear fusion involves combining two light atomic nuclei to form a heavier nucleus, which also releases energy. Fusion occurs naturally in stars, including our sun, but replicating this process on Earth has proven challenging due to the extreme conditions required.'"
            ]
          },
          "execution_count": 20,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "chat_trim(chat, \"Explain to me the difference between nuclear fission and fusion.\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 375
        },
        "id": "wi9r4XAn7m7C",
        "outputId": "afdc7bbc-6cfa-4eb2-b1f1-1c3830304965"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "\n",
            "\u001b[1m> Entering new  chain...\u001b[0m\n",
            "Prompt after formatting:\n",
            "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is conversational but concise in its responses without rambling. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
            "\n",
            "Current conversation:\n",
            "Human: hi how are you?\n",
            "AI: I'm just a computer program, so I don't have feelings like humans do. How can I assist you today?\n",
            "Human: Explain to me the difference between nuclear fission and fusion.\n",
            "AI: Nuclear fission is the process of splitting atoms into smaller components by bombarding them with high-energy particles or radiation. This releases large amounts of energy that can be harnessed for various purposes such as generating electricity. On the other hand, nuclear fusion involves combining two light atomic nuclei to form a heavier nucleus, which also releases energy. Fusion occurs naturally in stars, including our sun, but replicating this process on Earth has proven challenging due to the extreme conditions required.\n",
            "Human: Could you ELI5?\n",
            "AI:\u001b[0m\n",
            "\n",
            "\u001b[1m> Finished chain.\u001b[0m\n"
          ]
        },
        {
          "data": {
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            },
            "text/plain": [
              "\"Sure! In simpler terms, nuclear fission is like breaking apart a toy car to get the pieces inside. It requires forceful intervention to break down the original structure. Meanwhile, nuclear fusion is more like putting together Lego blocks to make something new. You're taking separate elements and joining them together to create something different than before. Both processes release energy, but they approach it from opposite directions.\""
            ]
          },
          "execution_count": 21,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "chat_trim(chat, \"Could you ELI5?\")  # Explain Like I'm 5"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "lvYgCnyrUJOu"
      },
      "source": [
        "With that, we have our MPT-30B-powered chatbot!\n",
        "\n",
        "---"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "gpuType": "A100",
      "machine_shape": "hm",
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}