{
  "cells": [
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "TF8Js-1TEw22"
      },
      "source": [
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/openai/openai-ml-qa/00-build-index.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/openai/openai-ml-qa/00-build-index.ipynb)\n",
        "\n",
        "# Index Init\n",
        "\n",
        "We use this notebook to create embeddings with OpenAI and push the embeddings and metadata to Pinecone. Required installs for this notebook are:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "id": "U0LDKVIxErRO"
      },
      "outputs": [],
      "source": [
        "!pip install -qU \\\n",
        "  openai==0.27.7 \\\n",
        "  tiktoken==0.4.0 \\\n",
        "  pinecone-client==3.1.0 \\\n",
        "  datasets==2.12.0"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "---\n",
        "\n",
        "\ud83d\udea8 _Note: the above `pip install` is formatted for Jupyter notebooks. If running elsewhere you may need to drop the `!`._\n",
        "\n",
        "---"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "I-WoLCa_E3Zm"
      },
      "source": [
        "## Data Preparation\n",
        "\n",
        "We start by downloading the dataset from Hugging Face *Datasets*:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 240,
          "referenced_widgets": [
            "963983a7391f4493b293a7069da33125",
            "40590903b4044c79abce864d6e51bbde",
            "6acaf69869c44d1b8fdfebd12d69ad3a",
            "fb88346fc0d54f5cb765bec8e6852aab",
            "f3971b3b3d144ef8a87e263e1f3571c4",
            "5465de7c87c6439e870588ba36602ab0",
            "be7bcacf8e124107a24c2f36d851d3fa",
            "baf7fc278a974bf286923d4412a29a6d",
            "3a7ff6e5be4c4934900ab02b75264e14",
            "a0b4181c95d24c9e923e41ad4966ecc3",
            "1bde144d2c7e4aabaa6c1331a120ec37",
            "8373b71f8dcf44d1becc904d37e09a44",
            "b6a38ccde22e4648807d3f41484e84b8",
            "c4e87ec0bc5e428090c77ec238986425",
            "906bfcedc20641fc9b7512a126f9393e",
            "a55173b9c9564e2ca04a1ba9e665d9d4",
            "f79c704f85ea4ef691559d2eb928e18a",
            "bdeb240e7dfb4277a077919045b74e55",
            "2b78cc47fba146e1add21fbae1e98387",
            "688a0fcfa233447da6fffec3124075a6",
            "e817c8776e4a40579181943e9f8be16a",
            "26d37e6442a2460fb18379f52dbf74d1",
            "0720dea5724049ac8054dde88078751e",
            "147ce18989a94b919f9facf41e87336f",
            "a4befd2aeea743efb39bcb07a6c303b5",
            "2d83e8bc3fde4f6f8101a24f927386f6",
            "b1e8e3300565436b953672a5ad99345c",
            "760d9d8f1e60458b8df2b4ce76b19792",
            "566047ab7eb341fc93e54bcaf9c458d1",
            "0be983982f20402bbbcfa8b43043633d",
            "3fd0f5289a734f29a43628ac76708cdd",
            "0498c4b2c34c4039a1cc5b6f6282e03f",
            "dc08a56df0e54f628df94cbb320df1e8",
            "15af8d01fb0f4fd7a9815e35a5924b47",
            "b1bb7ae9d90f43ddb2b7d1ac7fdacd00",
            "6012fc50e8894263be2f1aa7d37d60fc",
            "4dbd4320a2fe432eabb30dcbf32673ea",
            "306e92057faa4c49b1907e373958c4f1",
            "d3dc32fdb7954f51899ba1b089005aa6",
            "6086166e98c94f568458f687654ee28a",
            "d1ca6c0318b24c5eab29d2e20f680e83",
            "41b5c08b632744889c5da3735d47d026",
            "4e641c2e3e4948388e283d0b5196e105",
            "63b73785d84b41ba8cc4ea11db3c8e87"
          ]
        },
        "id": "LogOO33QE1Ic",
        "outputId": "059a154b-9a09-481f-c4e5-0d83d538c057"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Downloading and preparing dataset json/jamescalam--ml-qa to /root/.cache/huggingface/datasets/jamescalam___json/jamescalam--ml-qa-2cecc52fb1e2761a/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "963983a7391f4493b293a7069da33125",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading data files:   0%|          | 0/1 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "8373b71f8dcf44d1becc904d37e09a44",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading data:   0%|          | 0.00/12.9M [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "0720dea5724049ac8054dde88078751e",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Extracting data files:   0%|          | 0/1 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "15af8d01fb0f4fd7a9815e35a5924b47",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Generating train split: 0 examples [00:00, ? examples/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/jamescalam___json/jamescalam--ml-qa-2cecc52fb1e2761a/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. Subsequent calls will reuse this data.\n"
          ]
        },
        {
          "data": {
            "text/plain": [
              "Dataset({\n",
              "    features: ['docs', 'category', 'thread', 'href', 'question', 'context', 'marked'],\n",
              "    num_rows: 6165\n",
              "})"
            ]
          },
          "execution_count": 2,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "from datasets import load_dataset\n",
        "\n",
        "data = load_dataset('jamescalam/ml-qa', split='train')\n",
        "data"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "6ypJy8IEHGpU",
        "outputId": "937709b2-e43a-4bb9-fb1a-d62c098602fb"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "{'docs': 'huggingface',\n",
              " 'category': 'Beginners',\n",
              " 'thread': 'Training stops when I try Fine-Tune XLSR-Wav2Vec2 for low-resource ASR',\n",
              " 'href': 'https://discuss.huggingface.co/t/training-stops-when-i-try-fine-tune-xlsr-wav2vec2-for-low-resource-asr/8981',\n",
              " 'question': 'Hi,\\nI\u2019m learning Wav2Vec2 according the blog link:\\n  \\n\\n      huggingface.co\\n  \\n\\n  \\n    \\n\\nFine-Tune XLSR-Wav2Vec2 for low-resource ASR with \ud83e\udd17 Transformers 1\\n\\n\\n\\n  \\n\\n  \\n    \\n    \\n  \\n\\n  \\n\\n\\nAnd I download the ipynb file and try run it locally.\\nFine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_\ud83e\udd17_Transformers.ipynb\\nAll looks file but when I run trainer.train(), it seems stop after a while, and it generate some log files under the folder wav2vec2-large-xlsr-turkish-demo, I send the screen shot to you as following:\\n\\n2021-08-05 17-05-36 \u7684\u5c4f\u5e55\u622a\u56fe1063\u00d7410 35 KB\\n\\nI don\u2019t know how to open the file events.out.tfevents.1628152300.tq-sy.129248.2, what\u2019s the problem and how can I debug of it? please help.\\nThanks a lot.',\n",
              " 'context': 'It probably stops cause u don\u2019t have enough resources to run the script, I recommend trying to run the script on google collab',\n",
              " 'marked': 0}"
            ]
          },
          "execution_count": 3,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "data[100]"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "kyo-aAcLFBvU"
      },
      "source": [
        "When storing the original plaintext (and other metadata) of our data, we can either store them in Pinecone as indexed or non-indexed metadata \u2014 or elsewhere.\n",
        "\n",
        "Storing in Pinecone can make the system simpler, as we then query a single location. However, there are limits on metadata size: **5KB** per vector for *indexed* metadata and **40KB** for *non-indexed* metadata ([see here for more info](https://docs.pinecone.io/docs/limits#:~:text=Max%20metadata%20size%20per%20vector,key%20from%20the%20metadata%20payload.)).\n",
        "\n",
        "First, let's check if we can fit our data within the larger 40KB limit."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "69PbCCSYE5qQ",
        "outputId": "392949e1-9422-46ae-c213-9ea415e2d5b2"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Over 40KB: 0.0%\n"
          ]
        }
      ],
      "source": [
        "from sys import getsizeof\n",
        "import json\n",
        "\n",
        "# count records whose serialized size exceeds the 40KB metadata limit\n",
        "over_limit = 0\n",
        "\n",
        "for record in data:\n",
        "    size = getsizeof(json.dumps(record))\n",
        "    if size > 40_000:\n",
        "        over_limit += 1\n",
        "\n",
        "print(f\"Over 40KB: {over_limit/len(data)*100}%\")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "JccmDbwKV6L-"
      },
      "source": [
        "Fortunately our data is within this limit (40KB is a lot), so we don't need to remove anything to fit within Pinecone's limits. But what about OpenAI's limits?\n",
        "\n",
        "The current *token limit* for `text-embedding-ada-002` (the embedding model we will be using) is **8191** tokens. However, later we will use the `text-davinci-003` model to generate answers, and that model has a max context window of **~4K** tokens. Since we'll also want to fit several records into that window, we should limit ourselves to a max of ~1,000 tokens per record.\n",
        "\n",
        "We calculate the number of tokens using the `tiktoken` tokenizer like so:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "8wUwhYLSLv7o",
        "outputId": "ecd93b48-6fb1-4ac4-f601-5691b49b2206"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "<Encoding 'p50k_base'>"
            ]
          },
          "execution_count": 5,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "import tiktoken\n",
        "\n",
        "tokenizer = tiktoken.encoding_for_model('text-davinci-003')\n",
        "tokenizer"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "Giyv05G0NBll"
      },
      "source": [
        "Define token counting function:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "id": "p3DXr_5eNDSL"
      },
      "outputs": [],
      "source": [
        "# create the length function\n",
        "def tiktoken_len(text):\n",
        "    tokens = tokenizer.encode(\n",
        "        text,\n",
        "        disallowed_special=()\n",
        "    )\n",
        "    return len(tokens)"
      ]
    },
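    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check (an illustrative example, not part of the original run), we can count the tokens in a short string formatted like our records:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# token count for a short example string\n",
        "tiktoken_len(\"Thread title: example\\n\\nQuestion asked: how?\\n\\nGiven answer: like this\")"
      ]
    },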
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "H-MgipzZOAGR"
      },
      "source": [
        "Now let's filter out records longer than ~1,000 tokens."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 88,
          "referenced_widgets": [
            "5b285d96ffc2482c8072761f4dcfb22c",
            "3f84cbf88b3341c7a4a5a25b747f1237",
            "0c5043b269b84b15997d6677ee058cf7",
            "033b4b18f7ec46b097954c9e5c1b25a7",
            "2fec9dbd6d7c42a8bcb378a1e202a15b",
            "9fc5c3edc3e84019970da11c5658a37d",
            "64c5794062e24929b8756268d16360aa",
            "2cf21d9bd48e406b9f29e5651a8f22b3",
            "a6503b46d29c480c8e8c67eb07270d1e",
            "bf7d1a8e15d04887841e15eaebf4857c",
            "bef9a6ca54a9433ca05b79ecb0e87fac"
          ]
        },
        "id": "XtcM-rCQOJqH",
        "outputId": "1ed15101-fa39-4daf-8439-d84ba7110f8f"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "5b285d96ffc2482c8072761f4dcfb22c",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Filter:   0%|          | 0/6165 [00:00<?, ? examples/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/plain": [
              "Dataset({\n",
              "    features: ['docs', 'category', 'thread', 'href', 'question', 'context', 'marked'],\n",
              "    num_rows: 5458\n",
              "})"
            ]
          },
          "execution_count": 7,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "data = data.filter(\n",
        "    lambda x: tiktoken_len(\n",
        "        '\\n\\n'.join([x['thread'], x['question'], x['context']])\n",
        "    ) <= 1_000\n",
        ")\n",
        "data"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "Allnxb0HLQlN"
      },
      "source": [
        "We've dropped ~700 rows. In a real-world scenario you may want to truncate or chunk over-length records instead, but for this example dropping them will do.\n",
        "\n",
        "Now let's move on to preparing the text data and building our embeddings."
      ]
    },
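    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For reference, here is a minimal sketch of the chunking alternative mentioned above (not used in this notebook): split an over-length record into overlapping ~1,000-token windows using the same `tiktoken` tokenizer. The `chunk_text` helper below is hypothetical, not part of the original pipeline."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# hypothetical helper: split text into overlapping token windows\n",
        "def chunk_text(text, max_tokens=1_000, overlap=100):\n",
        "    # encode once, then slice the token list into overlapping windows\n",
        "    tokens = tokenizer.encode(text, disallowed_special=())\n",
        "    chunks = []\n",
        "    step = max_tokens - overlap\n",
        "    for i in range(0, len(tokens), step):\n",
        "        chunks.append(tokenizer.decode(tokens[i:i + max_tokens]))\n",
        "    return chunks"
      ]
    },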
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "HmhnlPckHrwF"
      },
      "source": [
        "## Text Prep and Embeddings\n",
        "\n",
        "To store as much information as possible in each record, it may make sense to format each record as something like:\n",
        "\n",
        "```\n",
        "Thread title: <thread>\n",
        "\n",
        "Question asked: <question>\n",
        "\n",
        "Given answer: <context>\n",
        "```\n",
        "\n",
        "We will create this format for each record and store in a new `text` variable."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "6WBvylTdHsOC",
        "outputId": "b80e3832-2b89-4352-b475-8a37e6ae872e"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Thread title: Sample evaluation script on custom dataset\n",
            "\n",
            "Question asked: Hey, I  have a custom dataset. can you send a sample script to get the accuracy on such a dataset? I was going through examples and I couldn\u2019t get a code that does that. Can someone send me a resource?\n",
            "my dataset is of the format-\n",
            "premise , hypothesis, label(0 or 1)\n",
            "and my model is deberta\n",
            "Thanks\n",
            "@lewtun\n",
            "\n",
            "Given answer: Hey @NDugar if you\u2019re using the Trainer my suggestion would be to run Trainer.predict(your_test_dataset) so you can get all the predictions. Then you should be able to feed those into the accuracy metric in a second step (or whatever metric you\u2019re interested in).\n",
            "If you\u2019re still having trouble, I suggest providing a minimal reproducible example, as explained here\n"
          ]
        }
      ],
      "source": [
        "text = [\n",
        "    f\"Thread title: {x['thread']}\\n\\n\"+\n",
        "    f\"Question asked: {x['question']}\\n\\n\"+\n",
        "    f\"Given answer: {x['context']}\" for x in data\n",
        "]\n",
        "print(text[100])"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "EPf8HvEPMkky"
      },
      "source": [
        "The text isn't always going to be perfect, but we'll see that the embedding model doesn't have any issues with this. Now let's initialize the embedding model and begin building the embeddings.\n",
        "\n",
        "### Embedding with OpenAI\n",
        "\n",
        "We begin by initializing the embedding model. For this we need [OpenAI API keys](https://beta.openai.com/signup)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "3-Z653zZHBv6",
        "outputId": "6404acae-5fa4-4ac8-e9bc-4db522143ab4"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "<OpenAIObject list at 0x7f7bdf73a020> JSON: {\n",
              "  \"data\": [\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"whisper-1\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-internal\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"babbage\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"davinci\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-davinci-edit-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-davinci-003\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-internal\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"babbage-code-search-code\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-similarity-babbage-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"code-davinci-edit-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-davinci-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"ada\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"babbage-code-search-text\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"gpt-4-0314\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"babbage-similarity\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"code-search-babbage-text-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-curie-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"code-search-babbage-code-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-ada-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-embedding-ada-002\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-internal\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-similarity-ada-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"curie-instruct-beta\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"ada-code-search-code\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"ada-similarity\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"gpt-4\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"code-search-ada-text-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-search-ada-query-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"davinci-search-document\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"ada-code-search-text\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-search-ada-doc-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"davinci-instruct-beta\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-similarity-curie-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"code-search-ada-code-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"ada-search-query\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-search-davinci-query-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"curie-search-query\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"davinci-search-query\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"babbage-search-document\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"ada-search-document\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-search-curie-query-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-search-babbage-doc-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"gpt-3.5-turbo\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"curie-search-document\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-search-curie-doc-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"babbage-search-query\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-babbage-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-search-davinci-doc-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-search-babbage-query-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"curie-similarity\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"gpt-3.5-turbo-0301\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"curie\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-similarity-davinci-001\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"text-davinci-002\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    },\n",
              "    {\n",
              "      \"created\": null,\n",
              "      \"id\": \"davinci-similarity\",\n",
              "      \"object\": \"engine\",\n",
              "      \"owner\": \"openai-dev\",\n",
              "      \"permissions\": null,\n",
              "      \"ready\": true\n",
              "    }\n",
              "  ],\n",
              "  \"object\": \"list\"\n",
              "}"
            ]
          },
          "execution_count": 9,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "import os\n",
        "import openai\n",
        "\n",
        "# get API key from top-right dropdown on OpenAI website\n",
        "openai.api_key = os.getenv(\"OPENAI_API_KEY\") or \"OPENAI_API_KEY\"\n",
        "\n",
        "openai.Engine.list() # check we have authenticated"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "9V0R8gAGOTSj"
      },
      "source": [
        "The `openai.Engine.list()` function should return a list of models that we can use. One of those is `text-embedding-ada-002`, which we will use for creating embeddings like so:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "id": "Ttkk5_GZOPt-"
      },
      "outputs": [],
      "source": [
        "model = \"text-embedding-ada-002\"\n",
        "\n",
        "res = openai.Embedding.create(\n",
        "    input=[\n",
        "        \"Sample document text goes here\",\n",
        "        \"there will be several phrases in each batch\"\n",
        "    ], engine=model\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "9XI2ObGnOqWC"
      },
      "source": [
        "In the response `res` we will find a JSON-like object containing our new embeddings within the `'data'` field."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "PNslbddQOzY0",
        "outputId": "ec510fa2-67f3-44d7-f61c-70417768d5b6"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "dict_keys(['object', 'data', 'model', 'usage'])"
            ]
          },
          "execution_count": 11,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "res.keys()"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "2aNIHsIYPTLm"
      },
      "source": [
        "Inside `'data'` we will find two records, one for each of the two sentences we just embedded. Each vector embedding contains `1536` dimensions (the output dimensionality of the `text-embedding-ada-002` model)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "SDgPHDkyOqEz",
        "outputId": "2b2e5e55-2797-4485-958c-0896f8938f6b"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "2"
            ]
          },
          "execution_count": 12,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "len(res['data'])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "lrJ5Kq5tPSG0",
        "outputId": "31ab4e6c-953e-480d-ef14-2da45dfecdf9"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "(1536, 1536)"
            ]
          },
          "execution_count": 13,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "len(res['data'][0]['embedding']), len(res['data'][1]['embedding'])"
      ]
    },
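    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check we can compare the two embeddings directly. The function below is a minimal sketch of cosine similarity, the same metric we will configure our Pinecone index with later (in practice Pinecone computes this for us):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import math\n",
        "\n",
        "def cosine_similarity(a, b):\n",
        "    # dot product of the two vectors divided by the product of their norms\n",
        "    dot = sum(x * y for x, y in zip(a, b))\n",
        "    norm_a = math.sqrt(sum(x * x for x in a))\n",
        "    norm_b = math.sqrt(sum(x * x for x in b))\n",
        "    return dot / (norm_a * norm_b)\n",
        "\n",
        "cosine_similarity(res['data'][0]['embedding'], res['data'][1]['embedding'])"
      ]
    },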
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "o_Z2aoPoP14b"
      },
      "source": [
        "We will apply this same embedding logic when indexing all of our data in the Pinecone vector database soon.\n",
        "\n",
        "## Building a Pinecone Index\n",
        "\n",
        "We need a vector index to store the vector embeddings and enable a fast and scalable search through them. For this we use the Pinecone vector database.\n",
        "\n",
        "To use this we need a [free Pinecone API key](https://app.pinecone.io)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import os\n",
        "from pinecone import Pinecone\n",
        "\n",
        "# initialize connection to pinecone (get API key at app.pinecone.io)\n",
        "api_key = os.environ.get('PINECONE_API_KEY') or 'PINECONE_API_KEY'\n",
        "\n",
        "# configure client\n",
        "pc = Pinecone(api_key=api_key)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Now we set up our index specification. This allows us to define the cloud provider and region where we want to deploy our index. You can find a list of all [available providers and regions here](https://docs.pinecone.io/docs/projects)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from pinecone import ServerlessSpec\n",
        "\n",
        "cloud = os.environ.get('PINECONE_CLOUD') or 'aws'\n",
        "region = os.environ.get('PINECONE_REGION') or 'us-east-1'\n",
        "\n",
        "spec = ServerlessSpec(cloud=cloud, region=region)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Then initialize the index:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "index_name = 'openai-ml-qa'"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import time\n",
        "\n",
        "# check if the index already exists (it shouldn't if this is the first time)\n",
        "if index_name not in pc.list_indexes().names():\n",
        "    # if it does not exist, create the index\n",
        "    pc.create_index(\n",
        "        index_name,\n",
        "        dimension=len(res['data'][0]['embedding']),\n",
        "        metric='cosine',\n",
        "        spec=spec\n",
        "    )\n",
        "    # wait for index to be initialized\n",
        "    while not pc.describe_index(index_name).status['ready']:\n",
        "        time.sleep(1)\n",
        "\n",
        "# connect to index\n",
        "index = pc.Index(index_name)\n",
        "# view index stats\n",
        "index.describe_index_stats()"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "w6JSgHpLRZ4B"
      },
      "source": [
        "Now we begin populating the index.\n",
        "\n",
        "When adding records to Pinecone we need three items in a tuple format:\n",
        "\n",
        "```\n",
        "(id, vector, metadata)\n",
        "```\n",
        "\n",
        "All IDs must be unique, our vectors will be built by OpenAI, and the metadata is a dictionary of the information for each record (`'href'`, `'question'`, etc.).\n",
        "\n",
        "We will create our vector embeddings and add the records to Pinecone in batches of `128`. This avoids pushing too much data into a single API request."
      ]
    },
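    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before running the full loop, here is a purely illustrative record in this format. The values below are made up, and a real vector would have `1536` dimensions:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# a purely illustrative (id, vector, metadata) record, values are made up\n",
        "example_record = (\n",
        "    '0',                                # unique string ID\n",
        "    [0.12, -0.45, 0.33],                # vector embedding (truncated for display)\n",
        "    {'question': '...', 'href': '...'}  # metadata dictionary\n",
        ")\n",
        "example_record"
      ]
    },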
    {
      "cell_type": "code",
      "execution_count": 17,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 49,
          "referenced_widgets": [
            "ace8fd1622e24c008acd151083a07373",
            "4500965fe5694b2e821a2c492459a5d0",
            "fa974129d4094406885ee7f596ca0d95",
            "32756e44dfee46f885557c94199e47a1",
            "67920dbf40234b9e964b503f4c1190c4",
            "22f461aff15245c58a162c93c947b50f",
            "cf02f8ed78604a90a7b18738588e41cb",
            "14ef08199f5b442d906ec23f750cac5a",
            "a5065fd3fba3407ea4c7ff7816b48d06",
            "1056eca0c7ef4e7d9e82429d8cebf46b",
            "b91886b6b3d0457eac68ee262a53ebd7"
          ]
        },
        "id": "yVSn1lhFRp-R",
        "outputId": "63895631-36e4-47c9-b8fd-355ad560d057"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "ace8fd1622e24c008acd151083a07373",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "  0%|          | 0/43 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "from tqdm.auto import tqdm  # this is our progress bar\n",
        "\n",
        "batch_size = 128  # process everything in batches of 128\n",
        "for i in tqdm(range(0, len(text), batch_size)):\n",
        "    # set end position of batch\n",
        "    i_end = min(i+batch_size, len(text))\n",
        "    # get batch of metadata, text, and IDs\n",
        "    meta_batch = [data[x] for x in range(i, i_end)]\n",
        "    text_batch = text[i:i_end]\n",
        "    ids_batch = [str(n) for n in range(i, i_end)]\n",
        "    # create embeddings\n",
        "    res = openai.Embedding.create(input=text_batch, engine=model)\n",
        "    embeds = [record['embedding'] for record in res['data']]\n",
        "    to_upsert = list(zip(ids_batch, embeds, meta_batch))\n",
        "    # upsert to Pinecone\n",
        "    index.upsert(vectors=to_upsert)"
      ]
    },
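    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "If the loop above hits OpenAI rate limits, a simple retry wrapper like this sketch can help. The retry count and delay values are arbitrary choices; tune them to your own limits:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import time\n",
        "\n",
        "def embed_with_retry(texts, engine, retries=3, delay=5):\n",
        "    # retry embedding requests on rate limit errors (sketch, not production code)\n",
        "    for attempt in range(retries):\n",
        "        try:\n",
        "            return openai.Embedding.create(input=texts, engine=engine)\n",
        "        except openai.error.RateLimitError:\n",
        "            if attempt == retries - 1:\n",
        "                raise\n",
        "            time.sleep(delay)"
      ]
    },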
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "sQXYWo_1btru"
      },
      "source": [
        "We can check that everything has been upserted with `index.describe_index_stats()`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Xhlc1hqLUW7f",
        "outputId": "fcfe8ef3-b33e-4610-9f0e-8c99bb08cd2e"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "{'dimension': 1536,\n",
              " 'index_fullness': 0.0,\n",
              " 'namespaces': {'': {'vector_count': 5458}},\n",
              " 'total_vector_count': 5458}"
            ]
          },
          "execution_count": 18,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "index.describe_index_stats()"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "B4qKu3NNb2T0"
      },
      "source": [
        "We have `5458` vectors (and their respective metadata) added to the index as expected.\n",
        "\n",
        "With that our index has been built and we can move on to the next stage of querying in [01-making-queries.ipynb](https://github.com/pinecone-io/examples/blob/master/learn/generation/openai/openai-ml-qa/01-making-queries.ipynb).\n",
        "\n",
        "---"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "ml",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "name": "python",
      "version": "3.9.12 (main, Apr  5 2022, 01:52:34) \n[Clang 12.0.0 ]"
    },
    "vscode": {
      "interpreter": {
        "hash": "b8e7999f96e1b425e2d542f21b571f5a4be3e97158b0b46ea1b2500df63956ce"
      }
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}