{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "id": "ifmV02zKlsCs",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "6ea08f19-5c58-42a2-f90f-569161f0d9c1"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m493.7/493.7 kB\u001b[0m \u001b[31m4.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m220.3/220.3 kB\u001b[0m \u001b[31m6.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m201.0/201.0 kB\u001b[0m \u001b[31m6.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m115.3/115.3 kB\u001b[0m \u001b[31m8.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m134.8/134.8 kB\u001b[0m \u001b[31m10.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m75.9/75.9 kB\u001b[0m \u001b[31m3.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m76.9/76.9 kB\u001b[0m \u001b[31m4.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m2.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
            "llmx 0.0.15a0 requires cohere, which is not installed.\n",
            "llmx 0.0.15a0 requires tiktoken, which is not installed.\u001b[0m\u001b[31m\n",
            "\u001b[0m"
          ]
        }
      ],
      "source": [
        "!pip install -qU \\\n",
        "  datasets==2.14.6 \\\n",
        "  openai==1.2.2 \\\n",
        "  pinecone-client==3.1.0"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Q4vTJ-pFmWl5"
      },
      "source": [
        "## Dataset Download"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GFaaDw5VmZEk"
      },
      "source": [
        "We're going to test with a more realistic use case, using messy, imperfect data. We will use the [`jamescalam/ai-arxiv-chunked`](https://huggingface.co/datasets/jamescalam/ai-arxiv-chunked) dataset."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 356,
          "referenced_widgets": [
            "a3461e93412f4b628883d2c1ca4c8696",
            "1cd665fc843f443ba662bba5bad1cff5",
            "b7835163e5774108b56a258496afb5d8",
            "97dbf3af891d4db092c3a6d93545107c",
            "3df28aff8d6b41a19fbd5f2774e0237d",
            "afb89a937e884254837046650474136b",
            "19d75a5a79c54ba88934010dd9f9c03a",
            "0c3e89913a674574ba55172ca87de719",
            "befe06d62bbb42b49ffe25f38966c165",
            "41889561c95d43b9bb7f30b626aa733b",
            "d774d31d70464905906f50259dddef2e",
            "017f1b4ea64141a2a30f4801c9cdd589",
            "57a0fadf04d348aa8e9e9f7729e81af2",
            "5dcb8fb86e304ace85745f611b18b175",
            "9c5622cf212f4b4da59a19ff5cfba662",
            "96c07177da5843c0955e1cc40a3f3c34",
            "800bb3497bb947e1bb797898be6ce7cc",
            "8132b1e2d2954833958abde9fc6de56d",
            "87ccf500d8494f3cacb8c832d2e9f1e2",
            "4dea03af135744bfb352de50a6d81b11",
            "ac3d8f1c191c40ebb06a10faec4c25e8",
            "8e5e2cdbe6154920bc38cbaf1c077837",
            "90154593010840f9af9fc36c284eb229",
            "244318ecec384a8c9319270ca8549c36",
            "b4de10fc504b4384a1ba4932525cbc64",
            "fb78779c83b64baa91ae5cfe07132c2e",
            "5d16b9c1154e495eab534381c35a5758",
            "cd97fd541ee04cdca3079bbd04a32267",
            "61c6ff4838b141a19b8e663558e174f2",
            "ea79127a3d8547f68497822b45332110",
            "f6256b5191f148129fcdab75e069921f",
            "29786411417e4ab78180b5456b35f505",
            "1092f424323748ce936c88e9214de8f4",
            "b2988b5215f5420094ede8d62c202577",
            "b253f79b27e94efc876f54325663c4d0",
            "e6c56895f3dd485da0c92fabb845d7e8",
            "afa624e38e7d4146a8dce055e87e930b",
            "b35ff58247b44ad3b615a6b5b96497ff",
            "569c0518e47b4f9b95f67202d5116dfb",
            "8532efc118be4131bf401b5d683cb334",
            "24157a44c40d4e2ca3513b17e3635a30",
            "77c8fb68c0434e8d8e92816eb2931e97",
            "56b24739c6234fb083d65f6e39d95bc2",
            "a37a81e0f53d4bf2bfad9cc7781b580c"
          ]
        },
        "id": "4-FqcdKHmVpa",
        "outputId": "8d4f18c5-0ee1-410f-bb4a-8512b62bb014"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_token.py:88: UserWarning: \n",
            "The secret `HF_TOKEN` does not exist in your Colab secrets.\n",
            "To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.\n",
            "You will be able to reuse this secret in all of your notebooks.\n",
            "Please note that authentication is recommended but still optional to access public models or datasets.\n",
            "  warnings.warn(\n"
          ]
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Downloading data files:   0%|          | 0/1 [00:00<?, ?it/s]"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "a3461e93412f4b628883d2c1ca4c8696"
            }
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Downloading data:   0%|          | 0.00/153M [00:00<?, ?B/s]"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "017f1b4ea64141a2a30f4801c9cdd589"
            }
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Extracting data files:   0%|          | 0/1 [00:00<?, ?it/s]"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "90154593010840f9af9fc36c284eb229"
            }
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Generating train split: 0 examples [00:00, ? examples/s]"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "b2988b5215f5420094ede8d62c202577"
            }
          },
          "metadata": {}
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "Dataset({\n",
              "    features: ['doi', 'chunk-id', 'chunk', 'id', 'title', 'summary', 'source', 'authors', 'categories', 'comment', 'journal_ref', 'primary_category', 'published', 'updated', 'references'],\n",
              "    num_rows: 41584\n",
              "})"
            ]
          },
          "metadata": {},
          "execution_count": 2
        }
      ],
      "source": [
        "from datasets import load_dataset\n",
        "\n",
        "data = load_dataset(\"jamescalam/ai-arxiv-chunked\", split=\"train\")\n",
        "data"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "data = data.map(lambda x: {\n",
        "    \"id\": f'{x[\"id\"]}-{x[\"chunk-id\"]}',\n",
        "    \"text\": x[\"chunk\"],\n",
        "    \"metadata\": {\n",
        "        \"title\": x[\"title\"],\n",
        "        \"url\": x[\"source\"],\n",
        "        \"primary_category\": x[\"primary_category\"],\n",
        "        \"published\": x[\"published\"],\n",
        "        \"updated\": x[\"updated\"],\n",
        "        \"text\": x[\"chunk\"],\n",
        "    }\n",
        "})\n",
        "# drop unneeded columns\n",
        "data = data.remove_columns([\n",
        "    \"title\", \"summary\", \"source\",\n",
        "    \"authors\", \"categories\", \"comment\",\n",
        "    \"journal_ref\", \"primary_category\",\n",
        "    \"published\", \"updated\", \"references\",\n",
        "    \"doi\", \"chunk-id\",\n",
        "    \"chunk\"\n",
        "])\n",
        "data"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 118,
          "referenced_widgets": [
            "4efd781d023143f28afefb483d3edb06",
            "ad80d90c1892457d9da09ecdf0e58b29",
            "8cd52118cfc34bb9a8e68b56771e5b31",
            "49f19c04ef804b47b60b1e2b5dcc5c83",
            "9d419e35ff5c407eac74b65c151f19a8",
            "09f4f289c4524f62bded4fb9af192ecc",
            "93c8e91e067e42f59691fb80f94c41c0",
            "da4406378ae3413299e82645a3b514c5",
            "16d122c510144ad792f3714a8b44562f",
            "0da7a76ce2374cd4b7dad4f7c217295c",
            "f2538c906a2e4e6b990b1bcfb2f45fc5"
          ]
        },
        "id": "PU9G61v8HUUu",
        "outputId": "2d42eed1-5259-4963-c03f-303f49778265"
      },
      "execution_count": 12,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Map:   0%|          | 0/41584 [00:00<?, ? examples/s]"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "4efd781d023143f28afefb483d3edb06"
            }
          },
          "metadata": {}
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "Dataset({\n",
              "    features: ['id', 'text', 'metadata'],\n",
              "    num_rows: 41584\n",
              "})"
            ]
          },
          "metadata": {},
          "execution_count": 12
        }
      ]
    },
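    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check, we can look at the first transformed record to confirm the new `id`, `text`, and `metadata` fields came out as expected:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# inspect a single transformed record\n",
        "data[0]"
      ]
    },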
    {
      "cell_type": "markdown",
      "source": [
        "First we define our embedding function."
      ],
      "metadata": {
        "id": "gp5a_bInyfdX"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import openai\n",
        "\n",
        "openai.api_key = \"sk-...\"\n",
        "\n",
        "def embed(docs: list[str], name: str) -> list[list[float]]:\n",
        "    res = openai.embeddings.create(\n",
        "        input=docs, model=name\n",
        "    )\n",
        "    doc_embeds = [r.embedding for r in res.data]\n",
        "    return doc_embeds"
      ],
      "metadata": {
        "id": "oG6zd1dLw54w"
      },
      "execution_count": 37,
      "outputs": []
    },
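    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Note that the `text-embedding-3` models also support a `dimensions` parameter that returns shortened embeddings (the 512 and 256 noted in the index configuration below). A minimal sketch of how that looks, assuming the API key above is valid:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# optional: request shortened embeddings from a text-embedding-3 model\n",
        "res = openai.embeddings.create(\n",
        "    input=[\"hello world\"],\n",
        "    model=\"text-embedding-3-small\",\n",
        "    dimensions=512\n",
        ")\n",
        "len(res.data[0].embedding)  # should be 512"
      ]
    },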
    {
      "cell_type": "markdown",
      "source": [
        "Initialize the connection to Pinecone with a free API key. Signing up for serverless gives you $100 in free credits, which goes a long way for a dataset of this size."
      ],
      "metadata": {
        "id": "aS1nW9MF9rQk"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from pinecone import Pinecone\n",
        "\n",
        "pc = Pinecone(api_key=\"...\")"
      ],
      "metadata": {
        "id": "gfbRXUvD90cE"
      },
      "execution_count": 4,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Set up three Pinecone Serverless indexes, one per embedding model:"
      ],
      "metadata": {
        "id": "yY-IgRNv9NgB"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import time\n",
        "from pinecone import ServerlessSpec\n",
        "\n",
        "spec = ServerlessSpec(cloud='aws', region='us-west-2')\n",
        "\n",
        "indexes = [\n",
        "    {\n",
        "        \"name\": \"text-embedding-ada-002\",\n",
        "        \"dimension\": 1536\n",
        "    },\n",
        "    {\n",
        "        \"name\": \"text-embedding-3-small\",\n",
        "        \"dimension\": 1536  # can be shortened to 512 via the dimensions parameter\n",
        "    },\n",
        "    {\n",
        "        \"name\": \"text-embedding-3-large\",\n",
        "        \"dimension\": 3072  # can be shortened to 256 via the dimensions parameter\n",
        "    }\n",
        "]\n",
        "# get existing indexes\n",
        "existing_indexes = pc.list_indexes().names()\n",
        "\n",
        "# check if index already exists (it shouldn't if this is first time)\n",
        "for i, index in enumerate(indexes):\n",
        "    if index[\"name\"] not in existing_indexes:\n",
        "        # if does not exist, create index\n",
        "        pc.create_index(\n",
        "            index[\"name\"],\n",
        "            dimension=index[\"dimension\"],  # dimensionality of the embedding model\n",
        "            metric='dotproduct',\n",
        "            spec=spec\n",
        "        )\n",
        "        # wait for index to be initialized\n",
        "        while not pc.describe_index(index[\"name\"]).status['ready']:\n",
        "            time.sleep(1)\n",
        "\n",
        "    # connect to index\n",
        "    indexes[i][\"index\"] = pc.Index(index[\"name\"])\n",
        "    time.sleep(1)\n",
        "    # view index stats\n",
        "    print(indexes[i][\"index\"].describe_index_stats())"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "4upUEKD-9KEJ",
        "outputId": "07d744f3-ff49-40e8-e7ba-49570a24964e"
      },
      "execution_count": 15,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "{'dimension': 1536,\n",
            " 'index_fullness': 0.0,\n",
            " 'namespaces': {},\n",
            " 'total_vector_count': 0}\n",
            "{'dimension': 1536,\n",
            " 'index_fullness': 0.0,\n",
            " 'namespaces': {'': {'vector_count': 1300}},\n",
            " 'total_vector_count': 1300}\n",
            "{'dimension': 3072,\n",
            " 'index_fullness': 0.0,\n",
            " 'namespaces': {},\n",
            " 'total_vector_count': 0}\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "Now we embed and index everything:"
      ],
      "metadata": {
        "id": "1nvrNQSGXvEC"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from tqdm.auto import tqdm\n",
        "\n",
        "batch_size = 100  # how many embeddings we create and insert at once\n",
        "\n",
        "for index in indexes:\n",
        "    print(f\"Indexing for {index['name']}\")\n",
        "    for i in tqdm(range(0, len(data), batch_size)):\n",
        "        # find end of batch\n",
        "        i_end = min(len(data), i+batch_size)\n",
        "        # create batch\n",
        "        batch = data[i:i_end]\n",
        "        embeds = embed(batch[\"text\"], name=index[\"name\"])\n",
        "        to_upsert = list(zip(batch[\"id\"], embeds, batch[\"metadata\"]))\n",
        "        # upsert to Pinecone\n",
        "        index[\"index\"].upsert(vectors=to_upsert)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 165,
          "referenced_widgets": [
            "8a1ea33bdd1f4fdd8e0b9e7fd4cf19fb",
            "a12851b40bac4a3ab84c6249d1e2abbb",
            "049732daa8f14f9fbd805dcc959b6507",
            "9f067ab956694ec885e81f48fed6562e",
            "ac45e852b3a7472dbe95d726740ae076",
            "5375d14f8eb340efb7a072c54f16d5ac",
            "8554942f9d2446998711f305b069267b",
            "f3eaf574439642f2857885ac4c22e35b",
            "ddb78ac3d2a447bfaa1746a92719dfbc",
            "88e7e347570540c199f33a01a0e1f860",
            "3a5dc1b289994d08a7b2df67a04216f5",
            "4f196451c62045979328e97bc6e1e8dd",
            "44a524841eb447ba846d2bb2d1e723e0",
            "76f8ab36eac6437f815b302b53ad52d0",
            "c0372514402843e3b78fddecffa06f26",
            "ba5f0b8570c142248ab3b99bb066b42e",
            "e86cfb853ac148f28e23f3e3c015976f",
            "0ad5ec1225d741f989ad6e5ff9d99642",
            "ced0d9ecc9d74a9aa62c21b12e39b593",
            "467da6ae9a954ca6abf04fe5475a760d",
            "77a3c52e102149d68ebad0993333e0e0",
            "3fc61442ea984fe8a7f0e2d62bc1996a",
            "1dd38beb195c422ea66487992bac1869",
            "b7be2895c27140c2b32816c2791df6e9",
            "0be68f19c3ec4282887ac6141a6a0c4c",
            "57c55b3abd0a4dcc9aaf6519fa46bd47",
            "4e9a63892f0146f8999fab7c341f6213",
            "07d75f3b59074dddbd11035dd1f97bf5",
            "fc700f73e5684c89b60117070c82250b",
            "4fe744b2dab24b39a69aa119a54ffbde",
            "96d551c19a3049698902c72e615a3863",
            "724d55d5596f48719270836d49d40f85",
            "ad6c0980ac104fcab119a1e3a47029ff"
          ]
        },
        "id": "EdyWVR17zX7I",
        "outputId": "718bdeb8-aadf-4ffa-a15d-341f3434386b"
      },
      "execution_count": 16,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Indexing for text-embedding-ada-002\n"
          ]
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "  0%|          | 0/416 [00:00<?, ?it/s]"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "8a1ea33bdd1f4fdd8e0b9e7fd4cf19fb"
            }
          },
          "metadata": {}
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Indexing for text-embedding-3-small\n"
          ]
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "  0%|          | 0/416 [00:00<?, ?it/s]"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "4f196451c62045979328e97bc6e1e8dd"
            }
          },
          "metadata": {}
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Indexing for text-embedding-3-large\n"
          ]
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "  0%|          | 0/416 [00:00<?, ?it/s]"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "1dd38beb195c422ea66487992bac1869"
            }
          },
          "metadata": {}
        }
      ]
    },
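    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We can sanity-check the upserts by looking at the vector count of each index (stats may take a moment to reflect recent writes):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# check how many vectors each index now holds\n",
        "for index in indexes:\n",
        "    stats = index[\"index\"].describe_index_stats()\n",
        "    print(index[\"name\"], stats.total_vector_count)"
      ]
    },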
    {
      "cell_type": "markdown",
      "source": [
        "Let's create a `get_docs` function so we can quickly compare retrieval results across models."
      ],
      "metadata": {
        "id": "Bl9g3ePt029u"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def get_docs(query: str, top_k: int, model: int) -> list[str]:\n",
        "    print(f\"Getting docs with {indexes[model]['name']}\")\n",
        "    # encode query\n",
        "    xq = embed([query], name=indexes[model][\"name\"])[0]\n",
        "    # search pinecone index\n",
        "    res = indexes[model][\"index\"].query(vector=xq, top_k=top_k, include_metadata=True)\n",
        "    # get doc text\n",
        "    docs = [x[\"metadata\"]['text'] for x in res[\"matches\"]]\n",
        "    return docs"
      ],
      "metadata": {
        "id": "u2NjYxsn7J5f"
      },
      "execution_count": 18,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Try different embedding models by switching the `model` parameter:\n",
        "\n",
        "* `0` is `text-embedding-ada-002`\n",
        "* `1` is `text-embedding-3-small`\n",
        "* `2` is `text-embedding-3-large`"
      ],
      "metadata": {
        "id": "E9UFvMP5eFn3"
      }
    },
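    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To compare the three models side by side, we can run the same query against every index:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "query = \"what is the difference between llama and gpt-4?\"\n",
        "for m in range(len(indexes)):\n",
        "    docs = get_docs(query=query, top_k=3, model=m)\n",
        "    # print the start of the top result from each model\n",
        "    print(docs[0][:200])\n",
        "    print(\"---\")"
      ]
    },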
    {
      "cell_type": "code",
      "source": [
        "doc = get_docs(\n",
        "    query=\"what is the difference between llama and gpt-4?\",\n",
        "    top_k=5,\n",
        "    model=2\n",
        ")\n",
        "print(\">>>\")\n",
        "for d in doc:\n",
        "    print(d)\n",
        "    print(\">>>\")"
      ],
      "metadata": {
        "id": "PV7XjlAgVtiA",
        "outputId": "0ad61044-74a9-440e-ac43-0563055eb60a",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": 35,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Getting docs with text-embedding-3-large\n",
            ">>>\n",
            "to GPT-3 corresponds to the Stanford Alpaca model. From Figure 3(a), we observe that ( i) For the\n",
            "\u201cHelpfulness\u201d criterion, GPT-4 is the clear winner with 54.12% of the votes. GPT-3 only wins 19.74%\n",
            "of the time. ( ii) For the \u201cHonesty\u201d and \u201cHarmlessness\u201d criteria, the largest portion of votes goes\n",
            "to the tie category, which is substantially higher than the winning categories but GPT-3 (Alpaca) is\n",
            "slightly superior.\n",
            "Second, we compare GPT-4-instruction-tuned LLaMA models against the teacher model GPT-4 in\n",
            "Figure 3(b). The observations are quite consistent over the three criteria: GPT-4-instruction-tuned\n",
            "LLaMA performs similarly to the original GPT-4. We conclude that learning from GPT-4 generated\n",
            "5\n",
            "60% 70% 80% 90% 100%12345BRanking Group 94% 624 : 66792% 614 : 67091% 623 : 68289% 597 : 66989% 605 : 67891% 609 : 666\n",
            ">>>\n",
            "based on the Transformer architecture [VSP+17] and trained on massive corpora of web-text data, using at its\n",
            "core a self-supervised objective of predicting the next word in a partial sentence. In this paper, we report on\n",
            "evidence that a new LLM developed by OpenAI, which is an early and non-multimodal version of GPT-4\n",
            "[Ope23], exhibits many traits of intelligence. Despite being purely a language model, this early version of\n",
            "GPT-4 demonstrates remarkable capabilities on a variety of domains and tasks, including abstraction, comprehension, vision, coding, mathematics, medicine, law, understanding of human motives and emotions, and\n",
            "more. We interacted with GPT-4 during its early development by OpenAI using purely natural language\n",
            "queries (prompts)1. In Figure 1.1, we display some preliminary examples of outputs from GPT-4, asking it to\n",
            "write a proof of in\fnitude of primes in the form of a poem, to draw a unicorn in TiKZ (a language for creating\n",
            "graphics in L ATEX), to create a complex animation in Python, and to solve a high-school level mathematical\n",
            "problem. It easily succeeds at all these tasks, and produces outputs that are essentially indistinguishable\n",
            "from (or even better than) what humans could produce. We also compare GPT-4's performance to those of\n",
            ">>>\n",
            "Automatic Evaluation with GPT-4. Following (Vicuna, 2023), we employ GPT-4 to automatically\n",
            "evaluate the generated responses of different models on 80 unseen questions in (Vicuna, 2023). We\n",
            "\ufb01rst collect answers from two chatbots, including LLaMA-GPT-4 (7B) and GPT-4, and use the release\n",
            "answers of other chatbots from (Vicuna, 2023), including LLaMA (13B), Alpaca (13B), Vicuna\n",
            "(13B), Bard (Google, 2023), and ChatGPT. For each evaluation, we ask GPT-4 to rate the response\n",
            "quality between two models with scores from 1 to 10. We compare all models against a strong\n",
            "competing model such as ChatGPT and GPT-4, respectively. The results are shown in Figure 4.\n",
            "For LLaMA instruction-tuned with GPT-4, we provide two sets of decoding results: (i)One response\n",
            "per question, which is considered the baseline decoding result. (ii)Five responses per questions. For\n",
            "the latter, the reward model is used to rank the responses which are then grouped into \ufb01ve subsets\n",
            "ranked from top 1 to top 5. We compare the \ufb01ve ranked groups against the baseline, and show the\n",
            ">>>\n",
            "(ii)For GPT-4 results alone, the translated responses show superior performance over the generated\n",
            "response in Chinese, probably because GPT-4 is trained in richer English corpus than Chinese, which\n",
            "leads to stronger English instruction-following ability. In Figure 5 (c), we show results for all models\n",
            "who are asked to answer in Chinese.\n",
            "We compare LLaMA-GPT4 with GPT-4 and Alpaca unnatural instructions in Figure 6. In terms of the\n",
            "average ROUGE-L scores, Alpaca outperforms the other two models. We note that LLaMA-GPT4 and\n",
            "GPT4 is gradually performing better when the ground truth response length is increasing, eventually\n",
            "showing higher performance when the length is longer than 4. This means that they can better follow\n",
            "instructions when the scenarios are more creative. Across different subsets, LLaMA-GPT4 can\n",
            "7\n",
            "0-2 3-5 6-10 10>\n",
            "Groundtruth Response Length0.30.40.5RougeL\n",
            "-0.043\n",
            "-0.009+0.0132-0.004 +0.0562\n",
            "+0.0387-0.012\n",
            ">>>\n",
            "ranked from top 1 to top 5. We compare the \ufb01ve ranked groups against the baseline, and show the\n",
            "relative scores in Figure 4 (a,b). The ChatGPT and GPT-4 evaluation is consistent with the orders\n",
            "6\n",
            "60% 70% 80% 90% 100%LLaMA (13B)Alpaca (13B)Vicuna (13B)LLaMA_GPT4 (7B)LLaMA_GPT4 (7B, R1)BardChatGPTGPT4\n",
            "67% 466 : 69776% 539 : 71293% 639 : 68887% 607 : 70089% 620 : 69392% 624 : 68195% 652 : 684100% 758 : 758(a) All chatbots against GPT-4, whose Chinese responses are translated from English\n",
            "60% 70% 80% 90% 100%LLaMA (13B)Alpaca (13B)Vicuna (13B)LLaMA_GPT4 (7B)LLaMA_GPT4 (7B, R1)BardChatGPTGPT4\n",
            ">>>\n"
          ]
        }
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}