{
  "cells": [
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/search/metadata-filtered-search/metadata-filtered-search.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/search/metadata-filtered-search/metadata-filtered-search.ipynb)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cKu-l3vljNRk"
      },
      "source": [
        "# Semantic AND Keyword Search (Hybrid Search)\n",
        "\n",
        "In this notebook we will see how to use Pinecone to perform a semantic search while also applying a traditional keyword filter to the results."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "id": "qmRYKLzHjNRn"
      },
      "outputs": [],
      "source": [
        "all_sentences = [\n",
        "    \"purple is the best city in the forest\",\n",
        "    \"No way chimps go bananas for snacks!\",\n",
        "    \"it is not often you find soggy bananas on the street\",\n",
        "    \"green should have smelled more tranquil but somehow it just tasted rotten\",\n",
        "    \"joyce enjoyed eating pancakes with ketchup\",\n",
        "    \"throwing bananas on to the street is not art\",\n",
        "    \"as the asteroid hurtled toward earth becky was upset her dentist appointment had been canceled\",\n",
        "    \"I'm getting way too old. I don't even buy green bananas anymore.\",\n",
        "    \"to get your way you must not bombard the road with yellow fruit\",\n",
        "    \"Time flies like an arrow; fruit flies like a banana\"\n",
        "]"
      ]
    },
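    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a rough sketch of the keyword side of hybrid search, we can check which sentences contain a given keyword using a simple lowercase-and-split match. This is only an illustration; later we will use a proper word-level tokenizer for extracting keywords."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# naive keyword match: lowercase the sentence and split on whitespace\n",
        "# (a rough sketch; a proper word-level tokenizer is used later)\n",
        "def contains_keyword(sentence, keyword):\n",
        "    return keyword in sentence.lower().split()\n",
        "\n",
        "[s for s in all_sentences if contains_keyword(s, \"bananas\")]"
      ]
    },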
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "O5sWQL3MjNRo"
      },
      "source": [
        "We will use the `sentence-transformers` library to build our sentence embeddings, and the Pinecone client to interact with Pinecone. Both can be installed using `pip` like so:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "BJdDkmacjNRo",
        "outputId": "218f6abe-912c-4dae-ad74-2d182c78c876"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
            "Collecting sentence-transformers\n",
            "  Downloading sentence-transformers-2.2.0.tar.gz (79 kB)\n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 79 kB 4.0 MB/s \n",
            "\u001b[?25hCollecting sacremoses\n",
            "  Downloading sacremoses-0.0.53.tar.gz (880 kB)\n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 880 kB 28.7 MB/s \n",
            "\u001b[?25hCollecting transformers<5.0.0,>=4.6.0\n",
            "  Downloading transformers-4.19.2-py3-none-any.whl (4.2 MB)\n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4.2 MB 37.5 MB/s \n",
            "\u001b[?25hRequirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (4.64.0)\n",
            "Requirement already satisfied: torch>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (1.11.0+cu113)\n",
            "Requirement already satisfied: torchvision in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (0.12.0+cu113)\n",
            "Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (1.21.6)\n",
            "Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (1.0.2)\n",
            "Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (1.4.1)\n",
            "Requirement already satisfied: nltk in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (3.2.5)\n",
            "Collecting sentencepiece\n",
            "  Downloading sentencepiece-0.1.96-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.2 MB)\n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.2 MB 30.9 MB/s \n",
            "\u001b[?25hCollecting huggingface-hub\n",
            "  Downloading huggingface_hub-0.7.0-py3-none-any.whl (86 kB)\n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 86 kB 6.1 MB/s \n",
            "\u001b[?25hRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.6.0->sentence-transformers) (4.2.0)\n",
            "Requirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (3.7.0)\n",
            "Collecting pyyaml>=5.1\n",
            "  Downloading PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)\n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 596 kB 57.0 MB/s \n",
            "\u001b[?25hRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (2.23.0)\n",
            "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (21.3)\n",
            "Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (2019.12.20)\n",
            "Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (4.11.4)\n",
            "Collecting tokenizers!=0.11.3,<0.13,>=0.11.1\n",
            "  Downloading tokenizers-0.12.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (6.6 MB)\n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6.6 MB 52.4 MB/s \n",
            "\u001b[?25hRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->transformers<5.0.0,>=4.6.0->sentence-transformers) (3.0.9)\n",
            "Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from sacremoses) (1.15.0)\n",
            "Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses) (7.1.2)\n",
            "Requirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses) (1.1.0)\n",
            "Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers<5.0.0,>=4.6.0->sentence-transformers) (3.8.0)\n",
            "Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers<5.0.0,>=4.6.0->sentence-transformers) (1.24.3)\n",
            "Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers<5.0.0,>=4.6.0->sentence-transformers) (3.0.4)\n",
            "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers<5.0.0,>=4.6.0->sentence-transformers) (2022.5.18.1)\n",
            "Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers<5.0.0,>=4.6.0->sentence-transformers) (2.10)\n",
            "Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->sentence-transformers) (3.1.0)\n",
            "Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /usr/local/lib/python3.7/dist-packages (from torchvision->sentence-transformers) (7.1.2)\n",
            "Building wheels for collected packages: sentence-transformers, sacremoses\n",
            "  Building wheel for sentence-transformers (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "  Created wheel for sentence-transformers: filename=sentence_transformers-2.2.0-py3-none-any.whl size=120747 sha256=2605ac3ee652230284064a101069fef907d29da27e2e9031833366caaa654398\n",
            "  Stored in directory: /root/.cache/pip/wheels/83/c0/df/b6873ab7aac3f2465aa9144b6b4c41c4391cfecc027c8b07e7\n",
            "  Building wheel for sacremoses (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "  Created wheel for sacremoses: filename=sacremoses-0.0.53-py3-none-any.whl size=895260 sha256=88274d46711642627518ac8168f4e778f62e208beda4638b7f694d26702f62dc\n",
            "  Stored in directory: /root/.cache/pip/wheels/87/39/dd/a83eeef36d0bf98e7a4d1933a4ad2d660295a40613079bafc9\n",
            "Successfully built sentence-transformers sacremoses\n",
            "Installing collected packages: pyyaml, tokenizers, huggingface-hub, transformers, sentencepiece, sentence-transformers, sacremoses\n",
            "  Attempting uninstall: pyyaml\n",
            "    Found existing installation: PyYAML 3.13\n",
            "    Uninstalling PyYAML-3.13:\n",
            "      Successfully uninstalled PyYAML-3.13\n",
            "Successfully installed huggingface-hub-0.7.0 pyyaml-6.0 sacremoses-0.0.53 sentence-transformers-2.2.0 sentencepiece-0.1.96 tokenizers-0.12.1 transformers-4.19.2\n",
            "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
            "Collecting git+https://github.com/pinecone-io/pinecone-python-client.git\n",
            "  Cloning https://github.com/pinecone-io/pinecone-python-client.git to /tmp/pip-req-build-u1bhe4cc\n",
            "  Running command git clone -q https://github.com/pinecone-io/pinecone-python-client.git /tmp/pip-req-build-u1bhe4cc\n",
            "  Installing build dependencies ... \u001b[?25l\u001b[?25hdone\n",
            "  Getting requirements to build wheel ... \u001b[?25l\u001b[?25hdone\n",
            "    Preparing wheel metadata ... \u001b[?25l\u001b[?25hdone\n",
            "Requirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.7/dist-packages (from pinecone-client==2.0.10) (2.23.0)\n",
            "Collecting dnspython>=2.0.0\n",
            "  Downloading dnspython-2.2.1-py3-none-any.whl (269 kB)\n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 269 kB 5.0 MB/s \n",
            "\u001b[?25hRequirement already satisfied: pyyaml>=5.4 in /usr/local/lib/python3.7/dist-packages (from pinecone-client==2.0.10) (6.0)\n",
            "Requirement already satisfied: python-dateutil>=2.5.3 in /usr/local/lib/python3.7/dist-packages (from pinecone-client==2.0.10) (2.8.2)\n",
            "Requirement already satisfied: typing-extensions>=3.7.4 in /usr/local/lib/python3.7/dist-packages (from pinecone-client==2.0.10) (4.2.0)\n",
            "Requirement already satisfied: urllib3>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from pinecone-client==2.0.10) (1.24.3)\n",
            "Collecting loguru>=0.5.0\n",
            "  Downloading loguru-0.6.0-py3-none-any.whl (58 kB)\n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 58 kB 8.1 MB/s \n",
            "\u001b[?25hRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.5.3->pinecone-client==2.0.10) (1.15.0)\n",
            "Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->pinecone-client==2.0.10) (2.10)\n",
            "Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->pinecone-client==2.0.10) (3.0.4)\n",
            "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->pinecone-client==2.0.10) (2022.5.18.1)\n",
            "Building wheels for collected packages: pinecone-client\n",
            "  Building wheel for pinecone-client (PEP 517) ... \u001b[?25l\u001b[?25hdone\n",
            "  Created wheel for pinecone-client: filename=pinecone_client-2.0.10-py3-none-any.whl size=151383 sha256=c5d0e4907fa92a6d602eb2622375782a31287098baac60adece5ea1a4e9c8c27\n",
            "  Stored in directory: /tmp/pip-ephem-wheel-cache-djxcvbio/wheels/6a/05/9d/00091860452464554feeca57f4281b03e0872f45abe233ff3e\n",
            "Successfully built pinecone-client\n",
            "Installing collected packages: loguru, dnspython, pinecone-client\n",
            "Successfully installed dnspython-2.2.1 loguru-0.6.0 pinecone-client-2.0.10\n"
          ]
        }
      ],
      "source": [
        "!pip install sentence-transformers sacremoses\n",
        "!pip install pinecone-client"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "r5bo0U-OjNRo"
      },
      "source": [
        "*(The notebook may need to be restarted for the installs to take effect.)*"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 465,
          "referenced_widgets": [
            "62fd34a56a1d448086c7268d5b9a9b87",
            "aa0ef98c99be46dca32bef1f46cc2f31",
            "0fed540143c84fcf98a01ad47e0ab3ae",
            "0f20a5f534684256bab509ff00a8b7ad",
            "a6068ca216be4e2ebe3811c155d3fcb4",
            "c916bbb24277417eab550afce30862a1",
            "c5ad9d5e14284b0e9fff89959934cde0",
            "b9d1b7be3a514fc18e7357fcc165ab98",
            "7daa9af8597b42adb935c5964f02848a",
            "291b55e36da54614b1c201f2d757aa8f",
            "c28d126b15bc435388a890c06d37e53d",
            "a9667503d3dd459a9836642564b50599",
            "b746a775cf434b16855b042a78d13225",
            "8ac849b88a7a4b878ef45482f691c769",
            "914a2d6e7d0a420b8e40a71b8a16c9b4",
            "6b90c953002a4382b666e01d2a11c908",
            "882d9c5a61d64a1eb8da49f1e6eee166",
            "6256afdff3de48e9902d1c7b74049f80",
            "bba35ae3d3514898af742955ce400fd5",
            "84569933b17d4131abe08c0832e675a4",
            "2353ee377ad44e0b8e6e21f1f4c51d6a",
            "9cec4f5817084f029f9e8e7090996de8",
            "29ceca7fabeb4f718a70ec8404be1e18",
            "9cda7b28d6c94b3392a567b311931d61",
            "7d2257e55dcf4cc58ac6931538abd651",
            "e47e0eabe742463c9ad9cdd1d4b67d95",
            "39875c5643b24832abf55f09f6d3f781",
            "8efc3f55cd3f42639d479418cdd2b537",
            "c3852c75960740fdb3773b22f65d7cd2",
            "bb9b832cda3f4908863d4844d9bcb53f",
            "69225fb6d67e45a0939cb2e86ab84743",
            "a3f89765bea34965b4268cdf030246c7",
            "aac2c928ddcf4d31be59a6965d97e7d2",
            "7ef9453213a6428ab81884ae2025e579",
            "229d0d02eb1342e386e57ea8e5a42b48",
            "1ffe633433da43f2af9326fed44da2ec",
            "84b62f3729f9411d89e7ca338259c48c",
            "248c5461a5494ca08b72bad5b4f6ae5a",
            "b6531c51cbd249e681c15f0ad8a85da4",
            "e30fa0dbca1c4c6eb951f301137e8577",
            "76fba205bf404ae380f07fc71dd8a864",
            "0d6628cd472d461d80861b884b3ecd61",
            "782c061f9fc349ffb70dcf5a95cb1076",
            "762ec60d1746484c8a5a94d89f653082",
            "0408e3a213b14930860edf8400ae505e",
            "3dbb5bb67aa54258b3ddc853ab39de34",
            "26547948bb604d3b8724a6e5590e12d4",
            "931705a7eaba4832aadaa6d5cc3c052f",
            "e616e25647d049fc9fd94ff0221cb181",
            "75f4e68c7c1c44e0b2628ed0d971966b",
            "30963dc6c63243888b8f7659b0d65fd5",
            "3d25c5a44fac49058d247eb01def7ebb",
            "a0882416a12d40c6a1a580ac6a928361",
            "bf79d7a384864979bfaa972beab66d88",
            "de5aa4ffb42742dc9937c987eaf35369",
            "6dd286785a964e41a96620127bfb82ca",
            "f1671474caa34bc993597111355c3c7f",
            "2aaed42ca0644521ab67e6b70456c4d3",
            "a01538f839014e93bbffa9ae3983b2e8",
            "90ce6c0b9dad4482906bff9917d09fb3",
            "73c84ecea8ad46df8bfbf0a1cf9be80b",
            "a5ed216c3b714744bcd3464b210fe8f0",
            "9fec24d4f5444354b0e63fc482a48ae2",
            "185a292af5cd4ab1bc1a5a1ca3de603e",
            "18a66626f4ed49229a688aeae0d469a5",
            "cc6180720f074bcda270af73b2a3576e",
            "82f76aa65b2c406b98f1fd7a4d1ef07a",
            "e2820c4a7ac04051af249bc602b74be8",
            "f1a1b42c07f2491eb8305879c426cd07",
            "2b0afaa5250649758cf04e5cfcb2945b",
            "5435e6b980224a389ef2b43a441ac962",
            "c478a7c06b0f43eb9ec8fe44e5506d28",
            "ccbdb9e50ac7498caec657a8d72cf904",
            "1943871f44ec429faf934b97e7af35e4",
            "60572599145c487ab6d66b7d3d705540",
            "46580fd61da4463db96fb0cde319c542",
            "95d56f80bd7342ef9ee38d6909714544",
            "b4717328679744b4970cb02180d5c276",
            "556ac1d839df4b7cb4780309b0c7be74",
            "7bf94770010848c48ea37ac0f463ec3a",
            "fb61dd38730c4e3b9125543558bad54f",
            "a484f80151544f45911dee368fb10134",
            "5752f5a8f846477d9074723c46641563",
            "97789bdf53cd4046af93384676214e5e",
            "6e0e755ac38a4ad0851dc4c90dbaad98",
            "3b6668a0b64f4c5ea10230005aaccffc",
            "f03a67dc471848a8b5ef9c8ea0105712",
            "e1ff94641efa471b8f73f630ff028364",
            "1b31736b149c408688c4725b1fb673a8",
            "2788fb04a7e7414a83e805d1d9aa2f9c",
            "9b9424a529fd492cbf6a197ae7c1bd54",
            "82e7552b689743d18e7a45659d83aa31",
            "e6b935e4788646988367708a4d157efb",
            "c9a80354cbf74fe5a7b8a7142ced2ede",
            "4ddde3ad68ea4ac0935e3feaf615b3e6",
            "03a9122ff17c4949b27b395372d67c13",
            "4b077bd4bdf74033b7e5b897231eb111",
            "032f58a7bfeb4899986ef4fa8094cf59",
            "122b149aead0400abeca882983e17ad6",
            "5d4137054daf4ec2bd8cc54de8a41a85",
            "c04b659a8f16492fb1b7fd1a220935fb",
            "fd82969e182d4c1dafc8a5c2559f4254",
            "56fc6e35164b4502b8c4f010d6f77428",
            "38acadeb875d40208d40684741535e54",
            "788e86d8bdb742d4b891877228286bbd",
            "dbffe4fafaed4a10be2fbbdb16b887ef",
            "1f5002a7cb854d0590571b6e6fb13952",
            "3cfca5e7d29740dcb1605339dbe9de8d",
            "afce8b3b916c4f2da21772ba8ab8b4fb",
            "17a594db07944d368013f51fa8dda958",
            "e58c0a5106334e008d9af460c21d24f0",
            "45a063684e9a4f01a6f02a671c9948f8",
            "35fa0551e5454e4881d4ba1082259e4c",
            "164b1003738c4c7aa4722d8be49070d5",
            "6275fd83d52049ec9a6be0248e50e2c9",
            "263e7481c3cb499dafd026a594ec10fe",
            "81bbaf1ddbde4b73af91272f2e7eb533",
            "538912565dbc4f0d9bf7e14b763a3050",
            "2440a30e27754ceeacc1f3712b3157e5",
            "026d1cc4a98c4102a2f7b0541d83978e",
            "25c6cd8839664d6f90d337dd3bab3f08",
            "11d2e3f40f1e43938db06a47afa8751b",
            "43053476e6554fbdb92aa0afbec7e59f",
            "fa7421851e374be7b925a41579873344",
            "efc14cf45e924ef788a1ee09c6633c21",
            "c98f72af01a445d5908e7beca0fd23d2",
            "e46f59c9492849e1a1fffb0c13c344ee",
            "7c1ecc1ff8d548c8892df4cf29b15aea",
            "d86a0b1a641b4890b26639ee36478562",
            "2919e32a95db41ebb8e2f2802b022f94",
            "bcdee44aefe0434f88d1a8c11f9569e6",
            "1587df00aad342a88616dec5c1b88a6a",
            "345451335e7f4368ba393af57ebdf1df",
            "db6dc656f17f468581d81ecfd9a34694",
            "80f1a46947474758a0a3cb248fb3393c",
            "09a143501a464cf1993eaf4a25b9ea00",
            "0732ffe2196f4db1b1cb0d00d47aa941",
            "5b1e4cfc7fc04086ad3b17c17a3b23de",
            "319577dfdee14a539c38d11a65b3fca9",
            "2718a46993df416285f714a5697cd6f0",
            "9ea2f54d96fc430f945ec27e2004e114",
            "56ed2af18abe45c8a0238072229b16b9",
            "e06a5dc525f6435fbd3c7ec7f338df98",
            "1474436318b84841b6c2dcb503957fa1",
            "cc7e0d051aa946afbe78f7e59569615d",
            "d199b4b5ea194b44acee8573c4713bc8",
            "43170d839dd84f6ca8d5f25b9716eb78",
            "7de61ba1ac0b4a6392f2c5f082348b48",
            "bf761d6a4c5b4516b8dc81863c763584",
            "9c262d2a5e364aa4abe9b2c0c2a56bf6",
            "1761edb3a15e4ac0b1a2258008ca054b",
            "75074a28e895461584ef6c8d6e993ffc",
            "b15494613d6e49da8810a191e2a035a1",
            "cd28bb6f13444964ba2c7e6b9683e216"
          ]
        },
        "id": "wG6dPl0kjNRo",
        "outputId": "2afa88ba-4200-4ea9-b04b-0291ebc76181"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "62fd34a56a1d448086c7268d5b9a9b87",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/737 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "a9667503d3dd459a9836642564b50599",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/190 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "29ceca7fabeb4f718a70ec8404be1e18",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/9.85k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "7ef9453213a6428ab81884ae2025e579",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/591 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "0408e3a213b14930860edf8400ae505e",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/116 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "6dd286785a964e41a96620127bfb82ca",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/15.7k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "82f76aa65b2c406b98f1fd7a4d1ef07a",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/349 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "b4717328679744b4970cb02180d5c276",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/438M [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "1b31736b149c408688c4725b1fb673a8",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/53.0 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "5d4137054daf4ec2bd8cc54de8a41a85",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/239 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "e58c0a5106334e008d9af460c21d24f0",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/466k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "11d2e3f40f1e43938db06a47afa8751b",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/383 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "345451335e7f4368ba393af57ebdf1df",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/13.2k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "1474436318b84841b6c2dcb503957fa1",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/232k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "from sentence_transformers import SentenceTransformer\n",
        "\n",
        "model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_mpnet-base')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YnrLXeTNjNRp"
      },
      "source": [
        "We use this pretrained sentence transformer model to encode the sentences."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "CtvcKC2gjNRp",
        "outputId": "d5465ee6-8800-46d6-80cc-279876437a71"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "(10, 768)"
            ]
          },
          "execution_count": 4,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "all_embeddings = model.encode(all_sentences)\n",
        "all_embeddings.shape"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kmnLWlTbjNRq"
      },
      "source": [
        "We have **10** embeddings, each with a dimensionality of *768*. For the keyword search we also need to store the sentences themselves, and for keyword search to work we need *keywords*. So, we will use a word-level tokenizer from the Hugging Face `transformers` library to break our text into words; for this we will use the [`transfo-xl-wt103` model](https://huggingface.co/transformers/model_doc/transformerxl.html).\n",
        "\n",
        "*(If needed, run `!pip install transformers`, although this package should already have been installed as a dependency of `sentence-transformers` above.)*"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 130,
          "referenced_widgets": [
            "99451f8cd8344a389dcfead7f65cbd67",
            "63193168eade42dba76e78e902b13915",
            "71f5098d2ead44009d9bacc3266ee750",
            "d19b2b39c6ee400f8b3768cad629ca48",
            "548afd5027334cdcb34ca60a66d84d2f",
            "9cbb9c17b6ce47308beb3278e8fbd4c3",
            "a66d13c81c0a4169a98a362894c8094d",
            "587a677d15d04accba43686ed1310964",
            "9e8a77fb21a143ddb45ac86daa453fdc",
            "b872e128761f40e582bde227bff4bc4b",
            "1ac700bae4e749bcbde73e7b882aa21e",
            "357f42a9e0d942438d8e77a62393190d",
            "a1d6c8ff9c30447daae9e7e8cc73ef55",
            "3bc3931d905e4e83a7fc2759e5839a6b",
            "b01a6b075c3849abadc0903eec8e2f62",
            "23c733c42a2448d29075f572e6e4b192",
            "c523c84faf6f4f3896700a11381176e8",
            "81d23ea24e164a69a21d6b609f1d1595",
            "69e91e7a59584b9bb42e0ff8d0057af4",
            "f40dde5fba3e4f21a7bbf71185208665",
            "3570e88ac4f84cd5b18bc1d9ef1e6b7d",
            "a6753b2415654ccaad43a404c2f794b3",
            "4149082e3b8f44fbabc1d67d03229b95",
            "7d05f450321547a496c987dfd044c142",
            "edf145fc3b644878b623053fe03aaf65",
            "136cfc892daf471d8b9b68eeb6dfa69a",
            "6cb7aa7b65724258b8aa8e370a7955c6",
            "12bc83042a0349ccba1b8424116d954c",
            "a2eb798e24aa41e28b8e4b12105b462c",
            "7c414b5c0a7145579a83b1ca2a083a2d",
            "2b09cc60101b47cdaf0146f4f5e19773",
            "511f3261295e4f3d8fc81ee22ce17654",
            "43b0defb14684c938cbe278c72ebdf1b"
          ]
        },
        "id": "Nr-mpvGLjNRq",
        "outputId": "65ff1453-6010-408d-a9e8-e70def346baf"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "99451f8cd8344a389dcfead7f65cbd67",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/856 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "357f42a9e0d942438d8e77a62393190d",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/8.72M [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "4149082e3b8f44fbabc1d67d03229b95",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/8.72M [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/plain": [
              "['purple', 'is', 'the', 'best', 'city', 'in', 'the', 'forest']"
            ]
          },
          "execution_count": 5,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "from transformers import AutoTokenizer\n",
        "\n",
        "# transfo-xl tokenizer uses word-level encodings\n",
        "tokenizer = AutoTokenizer.from_pretrained('transfo-xl-wt103')\n",
        "\n",
        "all_tokens = [tokenizer.tokenize(sentence.lower()) for sentence in all_sentences]\n",
        "all_tokens[0]"
      ]
    },
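    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The token lists can contain duplicates (`'the'` appears twice in the first sentence, for example). Because keyword filtering only checks *membership*, duplicates add nothing, so we could optionally deduplicate before upserting - a minimal sketch, where `dedupe` is our own helper (not part of any library):\n",
        "\n",
        "```python\n",
        "def dedupe(tokens):\n",
        "    # drop duplicate tokens, preserving first-seen order\n",
        "    return list(dict.fromkeys(tokens))\n",
        "\n",
        "dedupe(['the', 'best', 'city', 'in', 'the', 'forest'])\n",
        "# -> ['the', 'best', 'city', 'in', 'forest']\n",
        "```\n",
        "\n",
        "Applied to our data this would be `[dedupe(t) for t in all_tokens]`."
      ]
    },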
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZXHB-fe_jNRq"
      },
      "source": [
        "We now have everything we need: the dense vector representation of each sentence, and its stripped-down list of tokens. Let's establish a connection to Pinecone, ready for upserting our data."
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "__RKBGomjNRq"
      },
      "source": [
        "Next we need to connect to Pinecone. You can get a [free API key here](https://app.pinecone.io) - it is shown in the [Pinecone console](https://app.pinecone.io) under **API Keys**."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "id": "kfFfTFbujNRr"
      },
      "outputs": [],
      "source": [
        "from pinecone import Pinecone\n",
        "\n",
        "# initialize the Pinecone client\n",
        "pc = Pinecone(api_key=\"YOUR_API_KEY\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6glRodNnjNRr"
      },
      "source": [
        "We can check for existing indexes with:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "dDA6lY7kjNRr",
        "outputId": "a37019df-e773-43d1-c323-e1a8e1f0c90d"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[]"
            ]
          },
          "execution_count": 7,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "pc.list_indexes().names()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2PnQb04BjNRr"
      },
      "source": [
        "There are none, so let's create a new index with `create_index` and connect with `Index`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "id": "FWcuwbznjNRs"
      },
      "outputs": [],
      "source": [
        "from pinecone import ServerlessSpec\n",
        "\n",
        "pc.create_index(\n",
        "    name='keyword-search',\n",
        "    dimension=all_embeddings.shape[1],\n",
        "    spec=ServerlessSpec(cloud='aws', region='us-east-1')  # adjust to your cloud/region\n",
        ")\n",
        "index = pc.Index('keyword-search')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KlRZ4K3-jNRs"
      },
      "source": [
        "We now merge our data into a list of tuples, where each tuple is structured as `(id, values, metadata)`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "id": "LJrB-SQKjNRs"
      },
      "outputs": [],
      "source": [
        "upserts = []\n",
        "for i, (embedding, tokens) in enumerate(zip(all_embeddings, all_tokens)):\n",
        "    upserts.append((str(i), embedding.tolist(), {'tokens': tokens}))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "mktZ_V9ZjNRs",
        "outputId": "0d775d9f-a14a-4fcb-a93b-aa8f9fbdd07a"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "{'upserted_count': 10}"
            ]
          },
          "execution_count": 10,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# then we upsert\n",
        "index.upsert(vectors=upserts)"
      ]
    },
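    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For our ten vectors a single `upsert` call is fine, but larger datasets are usually upserted in batches (around 100 vectors per request is a common choice). A sketch of a batching helper - `batch` is our own function, not part of the Pinecone client:\n",
        "\n",
        "```python\n",
        "def batch(items, size=100):\n",
        "    # yield successive fixed-size chunks of a list\n",
        "    for i in range(0, len(items), size):\n",
        "        yield items[i:i + size]\n",
        "\n",
        "# for chunk in batch(upserts):\n",
        "#     index.upsert(vectors=chunk)\n",
        "```"
      ]
    },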
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-lclwY2fjNRt"
      },
      "source": [
        "### Upsert with CURL\n",
        "\n",
        "Alternatively, we can upsert using curl. For this we need to reformat our data and save it as a JSON file."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "rk_Ft1WfjNRt"
      },
      "outputs": [],
      "source": [
        "import json\n",
        "\n",
        "# reformat the data\n",
        "upserts = {'vectors': []}\n",
        "for i, (embedding, tokens) in enumerate(zip(all_embeddings, all_tokens)):\n",
        "    vector = {'id':f'{i}',\n",
        "              'values': embedding.tolist(),\n",
        "              'metadata':{'tokens':tokens}}\n",
        "    upserts['vectors'].append(vector)\n",
        "\n",
        "# save to JSON\n",
        "with open('./upsert.json', 'w') as f:\n",
        "    json.dump(upserts, f, indent=4)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sN-_SXuwjNRt"
      },
      "source": [
        "This produces a JSON file containing a list of *10* dictionaries within the `vectors` key. Each dictionary contains the embedding and metadata for a single sample, in the format:\n",
        "\n",
        "```json\n",
        "{\n",
        "    \"id\": \"0\",\n",
        "    \"values\": [0.001, 0.002, ...],\n",
        "    \"metadata\": {\n",
        "        \"tokens\": [\"purple\", \"is\", ...]\n",
        "    }\n",
        "}\n",
        "```\n",
        "\n",
        "To upsert with curl, we first find the index URL in the [Pinecone dashboard](https://app.pinecone.io). For an index hosted at `https://keyword-search-1234.svc.us-west1-gcp.pinecone.io`, the upsert request would be:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "VjidlAj9jNRt"
      },
      "outputs": [],
      "source": [
        "!curl -X POST \\\n",
        "    https://keyword-search-1234.svc.us-west1-gcp.pinecone.io/vectors/upsert \\\n",
        "    -H 'Content-Type: application/json' \\\n",
        "    -H 'Api-Key: YOUR_API_KEY' \\\n",
        "    -d @./upsert.json"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "X3EEoHhsjNRt"
      },
      "source": [
        "## Querying\n",
        "\n",
        "We now have the data in our index, let's first perform a semantic search using a query sentence, we will return the most *semantically* similar sentences.\n",
        "\n",
        "We define the query, and encode as we did for `all_sentences` before."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "id": "Q_-te2DcjNRu"
      },
      "outputs": [],
      "source": [
        "query_sentence = \"there is an art to getting your way and throwing bananas on to the street is not it\"\n",
        "xq = model.encode(query_sentence).tolist()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ug4PdNKFjNRu"
      },
      "source": [
        "When querying with `index.query` we pass the query vector via the `vector` parameter; *later*, when filtering for specific keywords, we will also add the `filter` parameter."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "_zv3i5ROjNRu",
        "outputId": "460676ae-8b7b-4d1d-8e91-cb79d196a862"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "{'matches': [{'id': '5',\n",
              "              'metadata': {'tokens': ['throwing',\n",
              "                                      'bananas',\n",
              "                                      'on',\n",
              "                                      'to',\n",
              "                                      'the',\n",
              "                                      'street',\n",
              "                                      'is',\n",
              "                                      'not',\n",
              "                                      'art']},\n",
              "              'score': 0.732851923,\n",
              "              'values': []},\n",
              "             {'id': '8',\n",
              "              'metadata': {'tokens': ['to',\n",
              "                                      'get',\n",
              "                                      'your',\n",
              "                                      'way',\n",
              "                                      'you',\n",
              "                                      'must',\n",
              "                                      'not',\n",
              "                                      'bombard',\n",
              "                                      'the',\n",
              "                                      'road',\n",
              "                                      'with',\n",
              "                                      'yellow',\n",
              "                                      'fruit']},\n",
              "              'score': 0.574427,\n",
              "              'values': []},\n",
              "             {'id': '2',\n",
              "              'metadata': {'tokens': ['it',\n",
              "                                      'is',\n",
              "                                      'not',\n",
              "                                      'often',\n",
              "                                      'you',\n",
              "                                      'find',\n",
              "                                      'soggy',\n",
              "                                      'bananas',\n",
              "                                      'on',\n",
              "                                      'the',\n",
              "                                      'street']},\n",
              "              'score': 0.500877321,\n",
              "              'values': []},\n",
              "             {'id': '1',\n",
              "              'metadata': {'tokens': ['no',\n",
              "                                      'way',\n",
              "                                      'chimps',\n",
              "                                      'go',\n",
              "                                      'bananas',\n",
              "                                      'for',\n",
              "                                      'snacks',\n",
              "                                      '!']},\n",
              "              'score': 0.376693845,\n",
              "              'values': []},\n",
              "             {'id': '9',\n",
              "              'metadata': {'tokens': ['time',\n",
              "                                      'flies',\n",
              "                                      'like',\n",
              "                                      'an',\n",
              "                                      'arrow',\n",
              "                                      ';',\n",
              "                                      'fruit',\n",
              "                                      'flies',\n",
              "                                      'like',\n",
              "                                      'a',\n",
              "                                      'banana']},\n",
              "              'score': 0.338697553,\n",
              "              'values': []},\n",
              "             {'id': '7',\n",
              "              'metadata': {'tokens': ['i',\n",
              "                                      \"'m\",\n",
              "                                      'getting',\n",
              "                                      'way',\n",
              "                                      'too',\n",
              "                                      'old.',\n",
              "                                      'i',\n",
              "                                      'don',\n",
              "                                      \"'t\",\n",
              "                                      'even',\n",
              "                                      'buy',\n",
              "                                      'green',\n",
              "                                      'bananas',\n",
              "                                      'anymore',\n",
              "                                      '.']},\n",
              "              'score': 0.32404235,\n",
              "              'values': []},\n",
              "             {'id': '0',\n",
              "              'metadata': {'tokens': ['purple',\n",
              "                                      'is',\n",
              "                                      'the',\n",
              "                                      'best',\n",
              "                                      'city',\n",
              "                                      'in',\n",
              "                                      'the',\n",
              "                                      'forest']},\n",
              "              'score': 0.145487592,\n",
              "              'values': []},\n",
              "             {'id': '3',\n",
              "              'metadata': {'tokens': ['green',\n",
              "                                      'should',\n",
              "                                      'have',\n",
              "                                      'smelled',\n",
              "                                      'more',\n",
              "                                      'tranquil',\n",
              "                                      'but',\n",
              "                                      'somehow',\n",
              "                                      'it',\n",
              "                                      'just',\n",
              "                                      'tasted',\n",
              "                                      'rotten']},\n",
              "              'score': 0.137328938,\n",
              "              'values': []},\n",
              "             {'id': '4',\n",
              "              'metadata': {'tokens': ['joyce',\n",
              "                                      'enjoyed',\n",
              "                                      'eating',\n",
              "                                      'pancakes',\n",
              "                                      'with',\n",
              "                                      'ketchup']},\n",
              "              'score': 0.0915388241,\n",
              "              'values': []},\n",
              "             {'id': '6',\n",
              "              'metadata': {'tokens': ['as',\n",
              "                                      'the',\n",
              "                                      'asteroid',\n",
              "                                      'hurtled',\n",
              "                                      'toward',\n",
              "                                      'earth',\n",
              "                                      'becky',\n",
              "                                      'was',\n",
              "                                      'upset',\n",
              "                                      'her',\n",
              "                                      'dentist',\n",
              "                                      'appointment',\n",
              "                                      'had',\n",
              "                                      'been',\n",
              "                                      'canceled']},\n",
              "              'score': -0.0585536882,\n",
              "              'values': []}],\n",
              " 'namespace': ''}"
            ]
          },
          "execution_count": 12,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "result = index.query(vector=xq, top_k=10, include_metadata=True)\n",
        "result"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qyISVvuijNRu"
      },
      "source": [
        "Let's extract just the sentence IDs to see the order of what we have returned."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "50cqMSBajNRu",
        "outputId": "807933a9-8d3c-42c4-b690-b5c4ca248a57"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "['5', '8', '2', '1', '9', '7', '0', '3', '4', '6']"
            ]
          },
          "execution_count": 13,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "[x['id'] for x in result['matches']]"
      ]
    },
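    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "It can be handy to pair each match with its original sentence and score. A small sketch, using hard-coded stand-ins for the `result['matches']` and `all_sentences` objects above:\n",
        "\n",
        "```python\n",
        "sentences = [\n",
        "    'throwing bananas on to the street is not art',\n",
        "    'to get your way you must not bombard the road with yellow fruit'\n",
        "]\n",
        "# truncated stand-in for result['matches']\n",
        "matches = [{'id': '0', 'score': 0.73}, {'id': '1', 'score': 0.57}]\n",
        "\n",
        "ranked = [(sentences[int(m['id'])], m['score']) for m in matches]\n",
        "```"
      ]
    },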
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1Is8hyXojNRv"
      },
      "source": [
        "Now let's add a keyword filter. Let's restrict the search to only return sentences that contain the word `bananas`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 14,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "67TwkQwdjNRv",
        "outputId": "f34d49d2-398b-4274-b50b-6bc59cc8d843"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "['5', '2', '1', '7']"
            ]
          },
          "execution_count": 14,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "result = index.query(vector=xq, top_k=10, filter={'tokens': 'bananas'})\n",
        "[x['id'] for x in result['matches']]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GkBnddGVjNRw"
      },
      "source": [
        "Again, let's extract IDs and then use these to see which sentences we're returning in the query above."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 15,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "QsRCiixojNRx",
        "outputId": "cbcf6ac2-9a16-49ab-8e0d-32c21071d156"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "throwing bananas on to the street is not art\n",
            "it is not often you find soggy bananas on the street\n",
            "No way chimps go bananas for snacks!\n",
            "I'm getting way too old. I don't even buy green bananas anymore.\n"
          ]
        }
      ],
      "source": [
        "ids = [int(x['id']) for x in result['matches']]\n",
        "for i in ids:\n",
        "    print(all_sentences[i])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0LDAV_1TjNRy"
      },
      "source": [
        "Okay cool, we can see that we're now filtering out all samples that do *not* contain the word 'bananas'. Maybe we'd like to extend this keyword filter further - for example we could filter for any samples that contain the word 'bananas' **OR** 'way' by modifying our filter to `{'$or': [{'tokens': 'bananas'}, {'tokens': 'way'}]}`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 16,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Q00U21j8jNRz",
        "outputId": "7be9f6ae-3598-48ea-98b3-10da2a0461f4"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "throwing bananas on to the street is not art\n",
            "to get your way you must not bombard the road with yellow fruit\n",
            "it is not often you find soggy bananas on the street\n",
            "No way chimps go bananas for snacks!\n",
            "I'm getting way too old. I don't even buy green bananas anymore.\n"
          ]
        }
      ],
      "source": [
        "result = index.query(vector=xq, top_k=10, filter={'$or': [\n",
        "                         {'tokens': 'bananas'},\n",
        "                         {'tokens': 'way'}\n",
        "                     ]})\n",
        "\n",
        "ids = [int(x['id']) for x in result['matches']]\n",
        "for i in ids:\n",
        "    print(all_sentences[i])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "shos83XAjNR0"
      },
      "source": [
        "Alternatively we can use the **in** `$in` condition rather than `$or` - it produces the same results:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 17,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "9KoTp-_AjNR0",
        "outputId": "80a0cdf2-d1f5-4d3c-eb48-455f2dcbd7fb"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "throwing bananas on to the street is not art\n",
            "to get your way you must not bombard the road with yellow fruit\n",
            "it is not often you find soggy bananas on the street\n",
            "No way chimps go bananas for snacks!\n",
            "I'm getting way too old. I don't even buy green bananas anymore.\n"
          ]
        }
      ],
      "source": [
        "result = index.query(vector=xq, top_k=10, filter={\n",
        "    'tokens': {'$in': ['bananas', 'way']}\n",
        "})\n",
        "\n",
        "ids = [int(x['id']) for x in result['matches']]\n",
        "for i in ids:\n",
        "    print(all_sentences[i])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FpImEr6ZjNR0"
      },
      "source": [
        "We could decide we only want to return samples that contain *both* 'bananas' **AND** 'way' by swapping the `$or` modifier for `$and`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "oK8etUGKjNR0",
        "outputId": "27e66db7-45a8-4d85-f6fe-fd2855cbe0e6"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "No way chimps go bananas for snacks!\n",
            "I'm getting way too old. I don't even buy green bananas anymore.\n"
          ]
        }
      ],
      "source": [
        "result = index.query(vector=xq, top_k=10, filter={'$and': [\n",
        "                         {'tokens': 'bananas'},\n",
        "                         {'tokens': 'way'}\n",
        "                     ]})\n",
        "\n",
        "ids = [int(x['id']) for x in result['matches']]\n",
        "for i in ids:\n",
        "    print(all_sentences[i])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fcllpQIejNR1"
      },
      "source": [
        "If we have a lot of keywords, including every single one manually like above quickly gets tiresome, so we can build the filter list programmatically instead:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 19,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ghM5FxIXjNR1",
        "outputId": "dde4d90a-f088-4061-8048-1ae08864693b"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[{'tokens': 'bananas'}, {'tokens': 'way'}, {'tokens': 'green'}]"
            ]
          },
          "execution_count": 19,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "keywords = ['bananas', 'way', 'green']\n",
        "filter_dict = [{'tokens': word} for word in keywords]\n",
        "filter_dict"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WQ4prsY2jNR1"
      },
      "source": [
        "And add it to our `query`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 20,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "GTqG_bctjNR2",
        "outputId": "21ccb8e1-2edc-4c72-9204-b1e5898110e9"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "I'm getting way too old. I don't even buy green bananas anymore.\n"
          ]
        }
      ],
      "source": [
        "result = index.query(vector=xq, top_k=10, filter={'$and': filter_dict})\n",
        "\n",
        "ids = [int(x['id']) for x in result['matches']]\n",
        "for i in ids:\n",
        "    print(all_sentences[i])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nDwnEOFrjNR2"
      },
      "source": [
        "We may also want to restrict our search to sentences that do *not* satisfy our conditions above, for example we may want all sentences that *do not* contain *'bananas'* but *do* contain *'way'*. To do this we can add **not equals** `$ne` to the `bananas` part of the query."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 21,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "MI5fQAXhjNR2",
        "outputId": "ef564a65-d5d5-45c9-9cdf-91dc7c18a138"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "to get your way you must not bombard the road with yellow fruit\n"
          ]
        }
      ],
      "source": [
        "result = index.query(vector=xq, top_k=10, filter={'$and': [\n",
        "                         {'tokens': {'$ne': 'bananas'}},\n",
        "                         {'tokens': 'way'}\n",
        "                     ]})\n",
        "\n",
        "ids = [int(x['id']) for x in result['matches']]\n",
        "for i in ids:\n",
        "    print(all_sentences[i])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WZrmz-kkjNR2"
      },
      "source": [
        "We can exclude multiple keywords too using the **not in** `$nin` condition."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 22,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "cUWaOqIhjNR2",
        "outputId": "451d1b3a-4438-4138-9d8b-f2e1b952e551"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Time flies like an arrow; fruit flies like a banana\n",
            "purple is the best city in the forest\n",
            "green should have smelled more tranquil but somehow it just tasted rotten\n",
            "joyce enjoyed eating pancakes with ketchup\n",
            "as the asteroid hurtled toward earth becky was upset her dentist appointment had been canceled\n"
          ]
        }
      ],
      "source": [
        "result = index.query(vector=xq, top_k=10, filter={'tokens':\n",
        "    {'$nin': ['bananas', 'way']}\n",
        "})\n",
        "\n",
        "ids = [int(x['id']) for x in result['matches']]\n",
        "for i in ids:\n",
        "    print(all_sentences[i])"
      ]
    }
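    ,
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Once finished, deleting the index frees up resources. Using the client object created at the start of the notebook (assumed here to be called `pc`; older SDK versions exposed a module-level `pinecone.delete_index` instead):\n",
        "\n",
        "```python\n",
        "pc.delete_index('keyword-search')\n",
        "```"
      ]
    }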
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "collapsed_sections": [],
      "name": "basic_hybrid_search.ipynb",
      "provenance": []
    },
    "interpreter": {
      "hash": "a683edd788238e5c64f9fa2e4bdd4387776bc5c6f4f0a84da0685f9a25e421d6"
    },
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.8.5"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 1
}