{
  "cells": [
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/llm-field-guide/mpt/mpt-7b-huggingface-langchain.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/llm-field-guide/mpt/mpt-7b-huggingface-langchain.ipynb)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "JPdQvYmlWmNc"
      },
      "source": [
        "# MPT-7B in Hugging Face and LangChain\n",
        "\n",
        "In this notebook we'll explore how we can use the open source **MPT-7B** model in both Hugging Face transformers and LangChain.\n",
        "\n",
        "---\n",
        "\n",
        "\ud83d\udea8 _Note that running this on CPU is practically impossible, as it would take a very long time. If running on Google Colab, go to **Runtime > Change runtime type > Hardware accelerator > GPU > GPU type > T4** (ideally choose a faster GPU such as a V100 or A100)._\n",
        "\n",
        "---\n",
        "\n",
        "We start by doing a `pip install` of all required libraries."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "K_fRq0BSGMBk",
        "outputId": "dfce208c-cee5-4184-ca5b-a98d11241ea1"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m7.1/7.1 MB\u001b[0m \u001b[31m71.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m219.1/219.1 kB\u001b[0m \u001b[31m15.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m42.2/42.2 kB\u001b[0m \u001b[31m2.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m834.2/834.2 kB\u001b[0m \u001b[31m59.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m108.2/108.2 MB\u001b[0m \u001b[31m13.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m224.5/224.5 kB\u001b[0m \u001b[31m27.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m7.8/7.8 MB\u001b[0m \u001b[31m106.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m1.0/1.0 MB\u001b[0m \u001b[31m66.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m90.0/90.0 kB\u001b[0m \u001b[31m12.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m114.5/114.5 kB\u001b[0m \u001b[31m15.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m268.8/268.8 kB\u001b[0m \u001b[31m30.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m149.6/149.6 kB\u001b[0m \u001b[31m18.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m49.1/49.1 kB\u001b[0m \u001b[31m6.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Building wheel for wikipedia (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
          ]
        }
      ],
      "source": [
        "!pip install -qU transformers accelerate einops langchain wikipedia xformers"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "VHQwEeW9Zps2"
      },
      "source": [
        "## Initializing the Hugging Face Pipeline"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "mElf068NXout"
      },
      "source": [
        "The first thing we need to do is initialize a `text-generation` pipeline with Hugging Face transformers. The pipeline requires three components that we must initialize first:\n",
        "\n",
        "* An LLM, in this case `mosaicml/mpt-7b-instruct`.\n",
        "\n",
        "* The respective tokenizer for the model.\n",
        "\n",
        "* A stopping criteria object.\n",
        "\n",
        "We'll explain each of these as we get to them. Let's begin with the model.\n",
        "\n",
        "We initialize the model and move it to our CUDA-enabled GPU. On Colab, downloading and initializing the model can take 5-10 minutes."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000,
          "referenced_widgets": [
            "5477583b7b2147b4b15958cc59445547",
            "68b116dd1aa344c2a514a8f935d9b62e",
            "3f3b3353d2224781bccfe4638f38b638",
            "163a25126f064ec786c4997e2afa0917",
            "57741af35a414e69a3e4a6f7c07bfe27",
            "9bddc4c1b96f4c5c8a8da0a254c07ee6",
            "3cdbcc8a85ea4668a0bcdbe32d1f4c06",
            "e935ff44968c465c9c3867d68309b471",
            "1ac3f280d90146268a77f8e3cac278c6",
            "7de81ba48259426ab17bed2fe54eb754",
            "1eadd15f5d0549ed807bfab1ad4caa9f",
            "e92433fd7c0c4311a40d849b3be8d601",
            "af913f2344a448df95a0728f90a4e79f",
            "248cf1f118ba4f20a083ddfde425ae6e",
            "7e5f4c534b7f4ee78ef2c3b7dc3766fb",
            "eff91ecb08b1404cb424dee843f9041a",
            "fdb41c0a4c7b4687a9dedbeb42186217",
            "3496a91c15dd4c68bceb8f523dcbd89b",
            "4572ad2dd54b4b57ac96785f4ce3b6d1",
            "ef45e7bbfd7e481d9674e884539f3ff6",
            "5d30a21e90394649ba57d66209a2969d",
            "2f5ce9207a7a47d19eeb2a96eef6a811",
            "d4c8d0e532e840c39ea80b9bc20bb69a",
            "63b75ce2b086412db19bac9b6f59a9e0",
            "1455c0b76be54e179c5bcb51cc31be46",
            "948fa27a51614e408263ae5267dc0376",
            "0f16e6510dde4f19a750034e6efae93d",
            "9fe42c63857d463d935a25012ab38017",
            "64bc474f3faf4843b27306ef9bea3e25",
            "0f79ba4cc5764b1ba1fec1ff0f1aee05",
            "dbbc5fe195374c198faa6d15b7145252",
            "415ef46caad64b948c935cf985326ecf",
            "737c4dcc4dc643ba8e58afd45b82bd15",
            "2f2dc98815eb4907bf7878ff854643b5",
            "acb3222ff5f84a1284b1876cec378a18",
            "a404e1d12da54103b35f2ed978284f1f",
            "5da41bb1c7814c9f93e7d1a271f9c73f",
            "0eaf53265ed44cd29e7b732c953fd4be",
            "566a0daaf61240c0acd67f3643e8b544",
            "5dc46bb43ea742a497e9f9cbece7bbb0",
            "eec489d2d96c474e895316028d136a9c",
            "0f88edee2fea4c3dbde7e6dc3cdbddd0",
            "aad49266326541c0ad1b47ce65f65e5c",
            "d92646b42eb64d77b4206bf0355bfbd4",
            "ae549d2c5cee4706b72ec92eb8032f60",
            "08c850f8fcd847959462ff11f6dbee43",
            "c3fcf2dc78434762b4a415c79e7cd2bd",
            "76ba41e211bd4fe68e58b3d874b195cb",
            "2edbe93411c94d128c7754072295b49c",
            "8a20ea3ad5e74411bab21223faec9cdb",
            "d657b248dc444111a08946f8a49a8479",
            "4c0de81cddaf4220a4a2e294e546a461",
            "feb725f221594cffaa6bd9c00e7a3a90",
            "d75e22f29fb143cbb26547e2f6839d53",
            "2521f00491f24caebe3941d9f293a3c9",
            "f9db6307674c48c7893b8abe8890e786",
            "bec52c6fd2734a6db7c52da809e1f4d4",
            "16621ba1ad3b49c19f02caa90b1fe60c",
            "4bce21e9521a4797affd1f342ef4f717",
            "9f07db9755f04793ac812e4655edd695",
            "262d0edf6fe24cd5bcf9ac6f3c273faf",
            "362664f81905446daa97562b0bb25d3c",
            "1a9861e69cb24f81ae70eef4f7c70ce0",
            "e051c691c14648d7a0ddb8baef6f56b8",
            "98cf902a38444418bf70fcf9716c10e7",
            "e649a7914716472fbfede75277879630",
            "d2d31ce4532047ef9e7ddf9861ecc867",
            "7dd7d1b6a8694c278c4592b95b2ebc1b",
            "5ac50a570bc34af5baf4b7e4a408941f",
            "8f268739cd67433da2f58082eb3bef0a",
            "c2ad240b679a4ce6be7deab2a118f4f3",
            "4500396243ec4396b31bf172ae53bf36",
            "c8d454e219e34513919c8b77eda97ff9",
            "7942d6a7497b44de83d5d1ca8650e701",
            "7eb34adf67cc46c68aeeace10592de0e",
            "e481fc2dca3b49559b6b166dfa61b970",
            "9bed41f7c5364966a6a8938f9e5a9dc3",
            "c8410d9424104a1995dd49dfe5e93ec3",
            "61ebbcf1fc5541c6a98042b24383c7a3",
            "b0dd325d866c44e985ffd5353943789b",
            "d0d6db40480e4f5789d0f08d50d3fffc",
            "67bc12089f0d4774986793154acab9c9",
            "2633a78b69974666b16404c33b885329",
            "481d67a4361e409ca7133f16cb830fb1",
            "7dfef1760e9b47669dca7c5ffb964c68",
            "1fd5131bf55e4096b8fae983da85d452",
            "6189e1a2375c456ead29ec9f4f096bda",
            "84be98d8ea9c4dfc87841433a74938ec",
            "11ab40b9b89545fd85c3a1f9c6b51127",
            "b34078e0370f48279b9f5b7a480073c9",
            "177f51cfe6e141ada62eda9d2a4dff9a",
            "be7bdd7b158f4c02826d5578db29db62",
            "34a79a96461448e09a86d5c16b7ccc9c",
            "ce1d2dd01f344f1e8f78699884a89ed7",
            "cb7329fe27e54f37b4d6d842e3be0049",
            "cd142e630d824a12ba09c5238e0c270e",
            "a752c72b8b0b43d19e3e6a8618935893",
            "ff84a55ccaa44e7da41f1f8e5664c1f8",
            "b73263ce20ad439f919b3d8b71460444",
            "9a59f19ab05847ccb84812c4031a3e6d",
            "24ff57ac1e0d4ab9981f345c630e5fc4",
            "b8f5bd2ad1d941dba56603e8aaf834ef",
            "57bc659924164379a3476364943e70f6",
            "9054d9cf46e74ae8b21c17e5aa61d58f",
            "fcd63df1b4914319a1a1c10a8a581fab",
            "c09443ac265f4a4e91a16afb2ee1a494",
            "031bdc7f027d441c8104831ced19e722",
            "a04bc47f1d764cd788cfe4810c189e1e",
            "f1092af72d1840da8402c2e1de67e791",
            "119f93b4b4f948f682196b084c5090c4",
            "025894f05f8c4eae88e7fc6d96ddff04",
            "5e959fbd2ea44a8f918748235ec191da",
            "bf235fb49c8c49ff886dc82d837b3520",
            "c828f6888f774d5aa4e4307c6c3364b7",
            "74f929c9fc344639b01a6d0ccd1f77d1",
            "18f1dcb5b32e455592e21b85ccb7b825",
            "32e4acee08a546e494344d3bdd524b04",
            "a8410b49da484fcdbbe86bf9bfa50b3e",
            "f0528d54cb494a0e8e1150a73cc14b66",
            "a1db821a63bf48b393ca8adb00797532",
            "0512508658c64424822193c3e9b1c218",
            "1e96affbc73f4ae7afcaff1a8b967b0e",
            "abda44c251544ae5ae629295e4e3372d",
            "b4451930fe8b4dbb9af375ad3dbafdc0",
            "2e64a21b72bc4ce0bd0dcad405d8b451",
            "bd9a0ecf39c542da87454ac7bbe86610",
            "76888a91d8bc41159979dc334ffc7dde",
            "6a73479dfaf74cb29b7152edba003867",
            "46a081439b4c4d6abd66c9e137538070",
            "f10cd1ce9a254f8395efe1206509bf7e",
            "36a1a7e5a93b415bb9e148b4f0bc8c32",
            "17881efe864541839107106c2feb43ab",
            "6ad510d87d02441a8003772a234e902a",
            "51013e7c03684bfa8d83e5a0331fe661",
            "e55b4c72815848f7ab709637e1396a43",
            "b844fc86d35d481aa9f4e26ef7ab7a3b",
            "957d8ee5b8364aaabe1a5ba7288a9451",
            "43ace9b53f9448abaef101b9153a90b3",
            "e13c4799fcf044d0a098cac46a1658e6",
            "edc7505013fe455cb7bb7f46d09288f5",
            "8893fa7377fe426c807f7c1729a6683c",
            "14a9b1a645f4497f8e0dfe615d06f3ee",
            "eeff3cdc6b0a42c18954cb9e42b330ea",
            "2bac9cd6fa824eb1ab8d66cc27830380",
            "53197992a9fc46ff8235edd223c9759c",
            "b027ff4fee1040baa7739af0eb32f0f8",
            "405608dd13404681b0a5b83ead859b25",
            "a37a7552affd4d4e9bb0ec8b145116c1",
            "58c1c8e5fa374338bd5860973a8a99fa",
            "075300f5dba84c27ae20f04499b109ad",
            "01212975d94e4fee8f1ce0eec776226f",
            "3bd5c5ffe76b48bab3ac11feece4eb54",
            "f7c9028772c84d5ea9d3cb5a0aa940f9",
            "0922d213a0654f6c898bb3f12f8ed151",
            "fd0e1ba707614629b63d1cea254f61fa",
            "e4a3b8916260491a87fbf0d8cc48f45c",
            "b19d837419504c10800e80ba8406699b",
            "aa76bee8182f43c9a5563776726e6046",
            "3943008d7b5f46789c3aa959aa1d1022",
            "4c12dd2f19ec42639d5a724b323729d1",
            "e0eb4b7f0e8b49e1aeda2ee482df214a",
            "ae213592b4f14ee395d0a067c099554a",
            "9d0118e12bf2458583644122307fb524",
            "e23abdf55db4470d887d099c2d553f83",
            "7b6c12ed807a442caceb13f75b1723e1",
            "9d64d8d313b04027be8852af5be0d932",
            "8fc5c81b319f4772a4618b38e7eccb5f",
            "4e398b214cac46fba979c195794c073b",
            "66303edf1de442d881a88869ab858a71",
            "d668e5c6ff644b4e962ec487b287135d",
            "62544ea5f20447e6a8528a37d53b8315",
            "43bcbbc6f5354f2698e8edc2702314a3",
            "636aae1e22514f848caa60f410a1ff82",
            "1719574a8854433ca9ea1a1779020b7b",
            "6e17d82a041241a1aadf3b6ce70d98f8",
            "eab29d57922e4eb2a7a857708ea4fd12"
          ]
        },
        "id": "ikzdi_uMI7B-",
        "outputId": "cb197163-73a2-4714-b1be-919175521f20"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "5477583b7b2147b4b15958cc59445547",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)lve/main/config.json:   0%|          | 0.00/1.23k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "e92433fd7c0c4311a40d849b3be8d601",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)configuration_mpt.py:   0%|          | 0.00/9.08k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-7b-instruct:\n",
            "- configuration_mpt.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "d4c8d0e532e840c39ea80b9bc20bb69a",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)main/modeling_mpt.py:   0%|          | 0.00/17.4k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "2f2dc98815eb4907bf7878ff854643b5",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)n/adapt_tokenizer.py:   0%|          | 0.00/1.75k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-7b-instruct:\n",
            "- adapt_tokenizer.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "ae549d2c5cee4706b72ec92eb8032f60",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)in/param_init_fns.py:   0%|          | 0.00/12.6k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "f9db6307674c48c7893b8abe8890e786",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)resolve/main/norm.py:   0%|          | 0.00/2.56k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-7b-instruct:\n",
            "- norm.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n",
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-7b-instruct:\n",
            "- param_init_fns.py\n",
            "- norm.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "d2d31ce4532047ef9e7ddf9861ecc867",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)solve/main/blocks.py:   0%|          | 0.00/2.49k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "c8410d9424104a1995dd49dfe5e93ec3",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)ve/main/attention.py:   0%|          | 0.00/16.1k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-7b-instruct:\n",
            "- attention.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n",
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-7b-instruct:\n",
            "- blocks.py\n",
            "- attention.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "11ab40b9b89545fd85c3a1f9c6b51127",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)refixlm_converter.py:   0%|          | 0.00/27.2k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-7b-instruct:\n",
            "- hf_prefixlm_converter.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "9a59f19ab05847ccb84812c4031a3e6d",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)meta_init_context.py:   0%|          | 0.00/3.64k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-7b-instruct:\n",
            "- meta_init_context.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n",
            "A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-7b-instruct:\n",
            "- modeling_mpt.py\n",
            "- adapt_tokenizer.py\n",
            "- param_init_fns.py\n",
            "- blocks.py\n",
            "- hf_prefixlm_converter.py\n",
            "- meta_init_context.py\n",
            ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "025894f05f8c4eae88e7fc6d96ddff04",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)model.bin.index.json:   0%|          | 0.00/16.0k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "1e96affbc73f4ae7afcaff1a8b967b0e",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading shards:   0%|          | 0/2 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "6ad510d87d02441a8003772a234e902a",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)l-00001-of-00002.bin:   0%|          | 0.00/9.94G [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "2bac9cd6fa824eb1ab8d66cc27830380",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)l-00002-of-00002.bin:   0%|          | 0.00/3.36G [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "/root/.cache/huggingface/modules/transformers_modules/mosaicml/mpt-7b-instruct/bd1748ec173f1c43e11f1973fc6e61cb3de0f327/attention.py:148: UserWarning: Using `attn_impl: torch`. If your model does not use `alibi` or `prefix_lm` we recommend using `attn_impl: flash` otherwise we recommend using `attn_impl: triton`.\n",
            "  warnings.warn('Using `attn_impl: torch`. If your model does not use `alibi` or ' + '`prefix_lm` we recommend using `attn_impl: flash` otherwise ' + 'we recommend using `attn_impl: triton`.')\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "fd0e1ba707614629b63d1cea254f61fa",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "9d64d8d313b04027be8852af5be0d932",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)neration_config.json:   0%|          | 0.00/91.0 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Model loaded on cuda:0\n"
          ]
        }
      ],
      "source": [
        "from torch import cuda, bfloat16\n",
        "import transformers\n",
        "\n",
        "device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'\n",
        "\n",
        "model = transformers.AutoModelForCausalLM.from_pretrained(\n",
        "    'mosaicml/mpt-7b-instruct',\n",
        "    trust_remote_code=True,\n",
        "    torch_dtype=bfloat16,\n",
        "    max_seq_len=2048\n",
        ")\n",
        "model.eval()\n",
        "model.to(device)\n",
        "print(f\"Model loaded on {device}\")"
      ]
    },
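    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Note that `trust_remote_code=True` executes model code downloaded from the Hub, which is why the logs above warn us to double-check the downloaded files. As those warnings suggest, we can pin a specific revision so that future updates to the model repo are not silently pulled in. A minimal sketch (the commit hash below is the one that appears in the download logs above; any pinned commit from the model repo works the same way):\n",
        "\n",
        "```python\n",
        "# Pin the model to a specific commit so the remote code cannot change under us\n",
        "model = transformers.AutoModelForCausalLM.from_pretrained(\n",
        "    'mosaicml/mpt-7b-instruct',\n",
        "    trust_remote_code=True,\n",
        "    torch_dtype=bfloat16,\n",
        "    max_seq_len=2048,\n",
        "    revision='bd1748ec173f1c43e11f1973fc6e61cb3de0f327'\n",
        ")\n",
        "```"
      ]
    },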
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "JzX9LqWSX9ot"
      },
      "source": [
        "The pipeline requires a tokenizer, which handles the translation of human-readable plaintext into the token IDs the LLM reads. MPT-7B was trained using the `EleutherAI/gpt-neox-20b` tokenizer, which we initialize like so:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 177,
          "referenced_widgets": [
            "b0d4c294dfe343f8950b51680f7c277b",
            "3571f490b0264ed18a3c44b5a932eb6b",
            "23772864bb334fb183fbadcf4ea374e2",
            "0f6c1d103d7c4d248f3803f4a73c5d8e",
            "d05ac9ebdcc74a5486bca0b64ad8aeaf",
            "92192a768bd04543ba0e03edf074c9da",
            "170430e3fd8c4ce699ded3dd26c0d116",
            "dc2749b72f0640cf9e1b165cec5bc4ec",
            "c8980687d1284a6189bda4f39461ca22",
            "30921df3388f4cd5b5e099e03951dc13",
            "d426f99568af472997dbf462ed701b2a",
            "ca50663ca9fd44cdb63e388056a2a7dd",
            "e95ecca5867f443bba2db1d9917a5595",
            "1437845999e0435d87d72d48fe4950e8",
            "0bae2e7b6fe7461ba040f752fc407b1f",
            "6bd72482622e4fc58b5459efaed44b08",
            "00586de879f74d9b9a9edef6dec6539c",
            "0fd409a346f04fd8999aacfcd50f0c97",
            "36e9dfc642804b98b47812e8a116017e",
            "c69f994894974b3aa13c903f17d4ff55",
            "9cbc6aff49314d8ab3b36fcf92ceebc0",
            "5ae1122e273743b982d5b0fecb1262d5",
            "9afe7fec115b4111a1b9e81c656b9310",
            "b7e0af305f85482696773edd2a1302f3",
            "5981801a3dab4fd9924dcf51f4c39938",
            "6b1361efbb264a85b462fa0cd1c2b3d3",
            "f6d761afa87f4057b3648ba4e25e4796",
            "87899dcfc42540fb8f44c24cffac11c8",
            "aae945646dd44a7c8a931cf4f574afe8",
            "4cce01c81e934338911b82b09cc323e8",
            "c6aa6314b03841b7836e2d855f6d4f7c",
            "cc465213598c4f9d8a41c1b97b925f17",
            "0417debaad9d4b809168e1961498cc2e",
            "5ebf91c946e741198eaf1bb36f268937",
            "3a3269d555ee43558ffc7a8c66aa273a",
            "0362719368904f1db1ca29ff42eeef48",
            "bfe1b0e099d34ecb8b0f6cf65010ba77",
            "b8fcdca47133474f942aeb48c6e60bf7",
            "db6246ef5fd7413080a47b3511d2c1c7",
            "abff8eb9f69740e48b00e7b0b8655bd0",
            "32dcaf9650af4701be5c9147d02d7a1b",
            "3e7dc4fea9ea46f7833acf937af6dbef",
            "d83bbd5d6b8c4caf97f4fef574b6c0da",
            "799a254ad9d44ed4a882f542214e997f",
            "e2518925bfc2446ca4f177ef76fceb3e",
            "b66152ca358241e0b8237e24119801cc",
            "9ca2ce681b864903a43a164928f57c52",
            "8a549980023b474aadea2d86304419ae",
            "8d5856bfa9fa4722b55027e9ce3ddd69",
            "21549300ba7c4b78acab0aadb9c5f02f",
            "d23e480cbd5d4da190ec19d4a4ad84b3",
            "9e73e4e6c68d4c239bb33afa46f0cdfd",
            "f58dfa138a544f1b8f81ce01734860cd",
            "599e1a01bf1644d4897276d5495bdf16",
            "75f1e65442b3482d9472b2316f2f32dc"
          ]
        },
        "id": "v0iPv1GDGxgT",
        "outputId": "b5625549-e0c3-41e6-ca10-8f39005c075b"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "b0d4c294dfe343f8950b51680f7c277b",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)okenizer_config.json:   0%|          | 0.00/156 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "ca50663ca9fd44cdb63e388056a2a7dd",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)olve/main/vocab.json:   0%|          | 0.00/1.08M [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "9afe7fec115b4111a1b9e81c656b9310",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)olve/main/merges.txt:   0%|          | 0.00/457k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "5ebf91c946e741198eaf1bb36f268937",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)/main/tokenizer.json:   0%|          | 0.00/2.11M [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "e2518925bfc2446ca4f177ef76fceb3e",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)cial_tokens_map.json:   0%|          | 0.00/90.0 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "tokenizer = transformers.AutoTokenizer.from_pretrained(\"EleutherAI/gpt-neox-20b\")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "XL7G9Sr3uxdz"
      },
      "source": [
        "Finally, we need to define the _stopping criteria_ of the model. The stopping criteria let us specify *when* the model should stop generating text. Without them, the model tends to wander off on a tangent after answering the initial question."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "id": "UG3R0LBQevQW"
      },
      "outputs": [],
      "source": [
        "import torch\n",
        "from transformers import StoppingCriteria, StoppingCriteriaList\n",
        "\n",
        "# mpt-7b is trained to add \"<|endoftext|>\" at the end of generations\n",
        "stop_token_ids = tokenizer.convert_tokens_to_ids([\"<|endoftext|>\"])\n",
        "\n",
        "# define custom stopping criteria object\n",
        "class StopOnTokens(StoppingCriteria):\n",
        "    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\n",
        "        for stop_id in stop_token_ids:\n",
        "            if input_ids[0][-1] == stop_id:\n",
        "                return True\n",
        "        return False\n",
        "\n",
        "stopping_criteria = StoppingCriteriaList([StopOnTokens()])"
      ]
    },
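    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "If we later want to stop on a multi-token sequence (for example a chat marker like `\"\\nHuman:\"`) rather than a single token id, we can check whether the generated ids end with that sequence. This is a hedged sketch, not part of the original notebook, and the `stop_sequences` values are hypothetical:\n",
        "\n",
        "```python\n",
        "from transformers import StoppingCriteria\n",
        "\n",
        "def ends_with(ids, suffix):\n",
        "    # True if the id list `ids` ends with the id list `suffix`\n",
        "    return len(suffix) <= len(ids) and ids[-len(suffix):] == suffix\n",
        "\n",
        "class StopOnSequences(StoppingCriteria):\n",
        "    def __init__(self, stop_sequences):\n",
        "        self.stop_sequences = stop_sequences  # list of token-id lists\n",
        "\n",
        "    def __call__(self, input_ids, scores, **kwargs):\n",
        "        ids = input_ids[0].tolist()\n",
        "        return any(ends_with(ids, seq) for seq in self.stop_sequences)\n",
        "\n",
        "# e.g. stop_sequences = [tokenizer(s)['input_ids'] for s in ['\\nHuman:']]\n",
        "```"
      ]
    },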
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "bNysQFtPoaj7"
      },
      "source": [
        "Now we're ready to initialize the HF pipeline. There are a few additional parameters that we must define here. Comments explaining these have been included in the code."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "qAYXi8ayKusU",
        "outputId": "a421e4e1-734d-4bca-9319-2d36c14a527a"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "The model 'MPTForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].\n"
          ]
        }
      ],
      "source": [
        "generate_text = transformers.pipeline(\n",
        "    model=model, tokenizer=tokenizer,\n",
        "    return_full_text=True,  # langchain expects the full text\n",
        "    task='text-generation',\n",
        "    device=device,\n",
        "    # we pass model parameters here too\n",
        "    stopping_criteria=stopping_criteria,  # without this model will ramble\n",
        "    temperature=0.1,  # 'randomness' of outputs, 0.0 is the min and 1.0 the max\n",
        "    top_p=0.15,  # select from top tokens whose probabilities add up to 15%\n",
        "    top_k=0,  # 0 disables top-k filtering, so sampling relies on top_p\n",
        "    max_new_tokens=64,  # max number of tokens to generate in the output\n",
        "    repetition_penalty=1.1  # without this output begins repeating\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "8DG1WNTnJF1o"
      },
      "source": [
        "Confirm this is working:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "lhFgmMr0JHUF",
        "outputId": "11c84140-98e5-4fc3-c72b-fa3dddce2d44"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Explain to me the difference between nuclear fission and fusion.\n",
            "Nuclear Fission is a process that splits heavy atoms into smaller, lighter ones by releasing energy in the form of heat or radiation. Nuclear Fusion occurs when two light atomic nuclei are combined together to create one heavier nucleus which releases more energy than what was used for its creation (fusion reaction). The most common example of\n"
          ]
        }
      ],
      "source": [
        "res = generate_text(\"Explain to me the difference between nuclear fission and fusion.\")\n",
        "print(res[0][\"generated_text\"])"
      ]
    },
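    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Because we set `return_full_text=True`, `generated_text` echoes the prompt before the completion. To recover only the answer we can slice the prompt off the front. A minimal sketch (the `full` string below is a stand-in for `res[0][\"generated_text\"]`):\n",
        "\n",
        "```python\n",
        "prompt = \"Explain to me the difference between nuclear fission and fusion.\"\n",
        "# stand-in for res[0][\"generated_text\"], which begins with the prompt\n",
        "full = prompt + \"\\nNuclear fission splits heavy nuclei; fusion combines light ones.\"\n",
        "\n",
        "answer = full[len(prompt):].lstrip()  # drop the echoed prompt\n",
        "print(answer)\n",
        "```"
      ]
    },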
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "__X3tvUeR_yS"
      },
      "source": [
        "We can also load the [Triton-optimized implementation](https://github.com/openai/triton) (Triton uses more memory, but is faster) like so:\n",
        "\n",
        "```python\n",
        "!pip install -qU triton\n",
        "\n",
        "from torch import bfloat16\n",
        "\n",
        "config = transformers.AutoConfig.from_pretrained(\n",
        "  'mosaicml/mpt-7b-instruct',\n",
        "  trust_remote_code=True\n",
        ")\n",
        "config.attn_config['attn_impl'] = 'triton'\n",
        "config.update({\"max_seq_len\": 100})\n",
        "\n",
        "model = transformers.AutoModelForCausalLM.from_pretrained(\n",
        "  'mosaicml/mpt-7b-instruct',\n",
        "  config=config,\n",
        "  torch_dtype=bfloat16,\n",
        "  trust_remote_code=True\n",
        ")\n",
        "model.to(device=device)\n",
        "```"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "0N3W3cj3Re1K"
      },
      "source": [
        "Now let's implement this in LangChain."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "id": "-8RxQYwHRg0N"
      },
      "outputs": [],
      "source": [
        "from langchain import PromptTemplate, LLMChain\n",
        "from langchain.llms import HuggingFacePipeline\n",
        "\n",
        "# template for an instruction with no input\n",
        "prompt = PromptTemplate(\n",
        "    input_variables=[\"instruction\"],\n",
        "    template=\"{instruction}\"\n",
        ")\n",
        "\n",
        "llm = HuggingFacePipeline(pipeline=generate_text)\n",
        "\n",
        "llm_chain = LLMChain(llm=llm, prompt=prompt)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "208tHnunRngH",
        "outputId": "3d32d3ce-ba66-49f4-b56a-49529d225aff"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Nuclear Fission is a process that splits heavy atoms into smaller, lighter ones by releasing energy in the form of heat or radiation. Nuclear Fusion occurs when two light atomic nuclei are combined together to create one heavier nucleus which releases more energy than what was used for its creation (fusion reaction). The most common example of\n"
          ]
        }
      ],
      "source": [
        "print(llm_chain.predict(\n",
        "    instruction=\"Explain to me the difference between nuclear fission and fusion.\"\n",
        ").lstrip())"
      ]
    },
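    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "MPT-7B-instruct was fine-tuned on instruction data using a dolly/alpaca-style prompt format, so wrapping the instruction in that format can improve responses. The exact wording below is an assumption; check the model card before relying on it:\n",
        "\n",
        "```python\n",
        "# hypothetical instruction wrapper; verify the wording against the model card\n",
        "INSTRUCT_TEMPLATE = (\n",
        "    \"Below is an instruction that describes a task. \"\n",
        "    \"Write a response that appropriately completes the request.\\n\\n\"\n",
        "    \"### Instruction:\\n{instruction}\\n\\n### Response:\\n\"\n",
        ")\n",
        "\n",
        "def format_instruction(instruction: str) -> str:\n",
        "    return INSTRUCT_TEMPLATE.format(instruction=instruction)\n",
        "```\n",
        "\n",
        "This string can be passed as the `template` of the `PromptTemplate` above in place of the bare `\"{instruction}\"`."
      ]
    },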
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "5tv0KxJLvsIa"
      },
      "source": [
        "We get the same output as before since we're not doing anything differently here, but we have now wrapped MPT-7B-instruct as a LangChain LLM. This lets us use LangChain's advanced agent tooling, chains, and other utilities with MPT-7B."
      ]
    }
  ],
  "metadata": {
    "colab": {
      "gpuType": "T4",
      "provenance": []
    },
    "gpuClass": "standard",
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}