{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CSKXot1mf3-B"
      },
      "source": [
        "<a target=\"_blank\" href=\"https://colab.research.google.com/github/cohere-ai/notebooks/blob/main/notebooks/llmu/RAG_over_Large_Scale_Data.ipynb\">\n",
        "  <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
        "</a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "oBA2WphukTFx"
      },
      "source": [
        "# RAG over Large Scale Data"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TSkzr2WGmeQe"
      },
      "source": [
        "*Note: To run the notebook, you must first deploy your own Google Drive connector as a web-based REST API (the steps are outlined in [this article](https://txt.cohere.com/rag-chatbot-quickstart/)).*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yOv1E6lBg_Qj"
      },
      "source": [
        "This notebook shows how to build a RAG-powered chatbot with Cohere's Chat endpoint using connectors.\n",
        "\n",
        "In particular, this notebook shows how to use connectors at scale: connecting to multiple datastores, working with large volumes of documents, and handling long documents. Enterprises need a RAG system that can efficiently handle vast amounts of data from diverse sources, and in this chapter, you’ll learn how the Chat endpoint automates this.\n",
        "\n",
        "The diagram below provides an overview of what we’ll build."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3EqiY7quunPR"
      },
      "source": [
        "![rag-workflow-5.png]()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3Pq-XH3AkU7e"
      },
      "source": [
        "# Setup"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "fEVf6y-Ffsiu"
      },
      "outputs": [],
      "source": [
        "#@title Enable text wrapping in Google Colab\n",
        "\n",
        "from IPython.display import HTML, display\n",
        "\n",
        "def set_css():\n",
        "  display(HTML('''\n",
        "  <style>\n",
        "    pre {\n",
        "        white-space: pre-wrap;\n",
        "    }\n",
        "  </style>\n",
        "  '''))\n",
        "get_ipython().events.register('pre_run_cell', set_css)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ZWaYUe_0kYDx",
        "outputId": "278aa6d9-b784-49e6-d867-ababb7357da9"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m117.2/117.2 kB\u001b[0m \u001b[31m1.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m75.6/75.6 kB\u001b[0m \u001b[31m5.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m77.8/77.8 kB\u001b[0m \u001b[31m5.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m5.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h"
          ]
        }
      ],
      "source": [
        "! pip install cohere -q"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 17
        },
        "id": "hmAgCSbGklDC",
        "outputId": "f5435191-a7cd-41b4-c780-f3aebebb38fb"
      },
      "outputs": [
        {
          "data": {
            "text/html": [
              "\n",
              "  <style>\n",
              "    pre {\n",
              "        white-space: pre-wrap;\n",
              "    }\n",
              "  </style>\n",
              "  "
            ],
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "import cohere\n",
        "from cohere import ChatConnector\n",
        "import os\n",
        "import uuid\n",
        "from typing import List, Dict\n",
        "\n",
        "co = cohere.Client(os.getenv(\"COHERE_API_KEY\")) # Set the COHERE_API_KEY environment variable. Get your API key here: https://dashboard.cohere.com/api-keys"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nWu4Im8qkzPL"
      },
      "source": [
        "# Create a chatbot"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UTy19Cknl7VC"
      },
      "source": [
        "The `Chatbot` class below handles the interaction between the user and the chatbot. We define the connectors for the chatbot to use via the `self.connectors` attribute.\n",
        "\n",
        "The `run()` method contains the logic for getting the user message and displaying the chatbot response with citations, along with a way for the user to end the conversation."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 17
        },
        "id": "CC6cSdhnkpS-",
        "outputId": "ac6dc379-e0d6-4683-9b49-0d9fcdafd8c2"
      },
      "outputs": [
        {
          "data": {
            "text/html": [
              "\n",
              "  <style>\n",
              "    pre {\n",
              "        white-space: pre-wrap;\n",
              "    }\n",
              "  </style>\n",
              "  "
            ],
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "class Chatbot:\n",
        "    def __init__(self, connectors: List[str]):\n",
        "        \"\"\"\n",
        "        Initializes an instance of the Chatbot class.\n",
        "\n",
        "        \"\"\"\n",
        "        self.conversation_id = str(uuid.uuid4())\n",
        "        self.connectors = [ChatConnector(id=connector) for connector in connectors]\n",
        "\n",
        "    def run(self):\n",
        "        \"\"\"\n",
        "        Runs the chatbot application.\n",
        "\n",
        "        \"\"\"\n",
        "        while True:\n",
        "            # Get the user message\n",
        "            message = input(\"User: \")\n",
        "\n",
        "            # Typing \"quit\" ends the conversation\n",
        "            if message.lower() == \"quit\":\n",
        "                print(\"Ending chat.\")\n",
        "                break\n",
        "            # else:                         # Uncomment for Google Colab to avoid printing the same thing twice\n",
        "            #     print(f\"User: {message}\") # Uncomment for Google Colab to avoid printing the same thing twice\n",
        "\n",
        "            # Generate response\n",
        "            response = co.chat_stream(\n",
        "                    message=message,\n",
        "                    model=\"command-r\",\n",
        "                    conversation_id=self.conversation_id,\n",
        "                    connectors=self.connectors,\n",
        "            )\n",
        "\n",
        "            # Print the chatbot response, citations, and documents\n",
        "            print(\"\\nChatbot:\")\n",
        "            citations = []\n",
        "            cited_documents = []\n",
        "\n",
        "            # Display response\n",
        "            for event in response:\n",
        "                if event.event_type == \"text-generation\":\n",
        "                    print(event.text, end=\"\")\n",
        "                elif event.event_type == \"citation-generation\":\n",
        "                    citations.extend(event.citations)\n",
        "                elif event.event_type == \"search-results\":\n",
        "                    cited_documents = event.documents\n",
        "\n",
        "            # Display citations and source documents\n",
        "            if citations:\n",
        "              print(\"\\n\\nCITATIONS:\")\n",
        "              for citation in citations:\n",
        "                print(citation)\n",
        "\n",
        "              print(\"\\nDOCUMENTS:\")\n",
        "              for document in cited_documents:\n",
        "                print({'id': document['id'],\n",
        "                      'text': document.get('text', document.get('snippet', ''))[:50] + '...'}) # \"text\" for Gdrive, \"snippet\" for web search\n",
        "\n",
        "            print(f\"\\n{'-'*100}\\n\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "oc3dJwGnmLBu"
      },
      "source": [
        "# Run the chatbot"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AAULHBu3mMtj"
      },
      "source": [
        "We can now run the chatbot. For this, we create an instance of `Chatbot` using two connectors: a Google Drive connector and Cohere's managed web-search connector. Then we run the chatbot by invoking the `run()` method.\n",
        "\n",
        "The format of each citation is:\n",
        "- `start`: The starting point of a span where one or more documents are referenced\n",
        "- `end`: The ending point of a span where one or more documents are referenced\n",
        "- `text`: The text representing this span\n",
        "- `document_ids`: The IDs of the documents being referenced (`doc_0` being the ID of the first document passed in the `documents` parameter of the endpoint call, and so on)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mxxGVMpxoc6K"
      },
      "source": [
        "The Chat endpoint can accept multiple connectors and retrieve information from all the defined connectors."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000
        },
        "id": "4AbIxRVkk9B6",
        "outputId": "49da15ac-7606-48c6-dca6-b3d462a13361"
      },
      "outputs": [
        {
          "data": {
            "text/html": [
              "\n",
              "  <style>\n",
              "    pre {\n",
              "        white-space: pre-wrap;\n",
              "    }\n",
              "  </style>\n",
              "  "
            ],
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "User: What is chain of thought prompting\n",
            "\n",
            "Chatbot:\n",
            "Chain of thought prompting is a technique used with large language models (LLMs) to enhance their reasoning capabilities. The LLM is presented with a few examples demonstrating a step-by-step reasoning process leading to a correct answer. This method can be employed when dealing with complex problems that require breaking down into smaller, more manageable parts. \n",
            "\n",
            "For instance, if you were to ask an LLM to solve a linear equation, you would first show how to solve this type of equation by outlining the intermediate steps. The LLM would then attempt to solve the given problem using a similar step-by-step approach.\n",
            "\n",
            "This prompting technique is particularly useful for arithmetic, commonsense, and symbolic reasoning tasks and can be combined with few-shot prompting for better results on more complex problems.\n",
            "\n",
            "CITATIONS:\n",
            "start=52 end=73 text='large language models' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11', 'demo-conn-gdrive-6bfrp6_12']\n",
            "start=74 end=80 text='(LLMs)' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11', 'demo-conn-gdrive-6bfrp6_12']\n",
            "start=84 end=121 text='enhance their reasoning capabilities.' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n",
            "start=148 end=162 text='a few examples' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=179 end=209 text='step-by-step reasoning process' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=223 end=238 text='correct answer.' document_ids=['web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=285 end=301 text='complex problems' document_ids=['web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7']\n",
            "start=334 end=365 text='smaller, more manageable parts.' document_ids=['web-search_4', 'web-search_6']\n",
            "start=411 end=434 text='solve a linear equation' document_ids=['web-search_2']\n",
            "start=452 end=491 text='show how to solve this type of equation' document_ids=['web-search_2']\n",
            "start=495 end=528 text='outlining the intermediate steps.' document_ids=['web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=548 end=621 text='attempt to solve the given problem using a similar step-by-step approach.' document_ids=['web-search_2', 'web-search_4']\n",
            "start=675 end=685 text='arithmetic' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n",
            "start=687 end=698 text='commonsense' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n",
            "start=704 end=728 text='symbolic reasoning tasks' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n",
            "start=740 end=772 text='combined with few-shot prompting' document_ids=['web-search_1', 'web-search_2']\n",
            "start=777 end=817 text='better results on more complex problems.' document_ids=['web-search_1']\n",
            "\n",
            "DOCUMENTS:\n",
            "{'id': 'web-search_0', 'text': 'Skip to main content\\n\\nWe gratefully acknowledge su...'}\n",
            "{'id': 'web-search_1', 'text': 'General Tips for Designing Prompts\\n\\nChain-of-Thoug...'}\n",
            "{'id': 'web-search_2', 'text': 'BlogDocsCommunityHackAPrompt Playground\\n\\nLanguage ...'}\n",
            "{'id': 'web-search_3', 'text': 'We now support using Microsoft Azure hosted OpenAI...'}\n",
            "{'id': 'web-search_4', 'text': 'Comprehensive Guide to Chain-of-Thought Prompting\\n...'}\n",
            "{'id': 'web-search_5', 'text': 'ResourcesArticleChain-of-Thought Prompting: Helpin...'}\n",
            "{'id': 'web-search_6', 'text': 'Let’s Think Step by Step: Advanced Reasoning in Bu...'}\n",
            "{'id': 'web-search_7', 'text': 'Unraveling the Power of Chain-of-Thought Prompting...'}\n",
            "{'id': 'web-search_8', 'text': 'AboutPressCopyrightContact usCreatorsAdvertiseDeve...'}\n",
            "{'id': 'web-search_9', 'text': 'Skip to main content\\n\\nLanguage Models Perform Reas...'}\n",
            "{'id': 'demo-conn-gdrive-6bfrp6_10', 'text': \"\\ufeffChaining Prompts\\r\\nIn this chapter, you'll learn a...\"}\n",
            "{'id': 'demo-conn-gdrive-6bfrp6_11', 'text': \"\\ufeffConstructing Prompts\\r\\nIn this chapter, you'll lea...\"}\n",
            "{'id': 'demo-conn-gdrive-6bfrp6_12', 'text': \"\\ufeffUse Case Patterns\\r\\nIn this chapter, you'll learn ...\"}\n",
            "{'id': 'demo-conn-gdrive-6bfrp6_13', 'text': \"\\ufeffEvaluating Outputs\\r\\nIn this chapter, you'll learn...\"}\n",
            "{'id': 'demo-conn-gdrive-6bfrp6_14', 'text': \"\\ufeffValidating Outputs\\r\\nIn this chapter, you'll learn...\"}\n",
            "\n",
            "----------------------------------------------------------------------------------------------------\n",
            "\n",
            "User: tell me more\n",
            "\n",
            "Chatbot:\n",
            "Chain of thought prompting is a technique that guides LLMs to follow a reasoning process by providing them with a few examples that clearly outline each step of the reasoning. This method, also known as few-shot prompting, is employed for complex problems that require a series of reasoning steps to solve. \n",
            "\n",
            "The LLM is expected to study the example and follow a similar pattern when answering, breaking down the problem into smaller, more manageable parts. This approach not only improves the LLM's performance on complex tasks but also offers interpretability into its thought process.\n",
            "\n",
            "Few-shot prompting is distinct from zero-shot prompting, where the LLM is only given the problem and no examples. Zero-shot chain-of-thought prompting, however, involves adding a phrase like \"Let's think step by step\" to the original prompt to guide the LLM's reasoning. \n",
            "\n",
            "Chain of thought prompting has shown remarkable effectiveness in improving LLMs' abilities in arithmetic, commonsense, and symbolic reasoning tasks. Nevertheless, it is not without its limitations. For instance, it works best with larger models, typically those with around 100 billion parameters, as smaller models often produce illogical thought chains.\n",
            "\n",
            "CITATIONS:\n",
            "start=47 end=58 text='guides LLMs' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=71 end=88 text='reasoning process' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=114 end=126 text='few examples' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=140 end=175 text='outline each step of the reasoning.' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=203 end=221 text='few-shot prompting' document_ids=['web-search_2', 'web-search_3', 'web-search_4', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=239 end=255 text='complex problems' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n",
            "start=271 end=296 text='series of reasoning steps' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=363 end=378 text='similar pattern' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=426 end=457 text='smaller, more manageable parts.' document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=481 end=511 text=\"improves the LLM's performance\" document_ids=['web-search_0', 'web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=545 end=561 text='interpretability' document_ids=['web-search_2', 'web-search_4', 'web-search_5', 'web-search_6']\n",
            "start=625 end=644 text='zero-shot prompting' document_ids=['web-search_4', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=703 end=739 text='Zero-shot chain-of-thought prompting' document_ids=['web-search_1', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=780 end=806 text='\"Let\\'s think step by step\"' document_ids=['web-search_1', 'web-search_3', 'web-search_4', 'web-search_6', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=833 end=859 text=\"guide the LLM's reasoning.\" document_ids=['web-search_1', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'demo-conn-gdrive-6bfrp6_11']\n",
            "start=956 end=966 text='arithmetic' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n",
            "start=968 end=979 text='commonsense' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n",
            "start=985 end=1010 text='symbolic reasoning tasks.' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_5', 'web-search_6', 'web-search_7', 'web-search_9']\n",
            "start=1093 end=1106 text='larger models' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_6', 'web-search_7', 'web-search_9']\n",
            "start=1136 end=1158 text='100 billion parameters' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_6', 'web-search_7', 'web-search_9']\n",
            "start=1163 end=1217 text='smaller models often produce illogical thought chains.' document_ids=['web-search_0', 'web-search_2', 'web-search_3', 'web-search_4', 'web-search_6', 'web-search_7']\n",
            "\n",
            "DOCUMENTS:\n",
            "{'id': 'web-search_0', 'text': 'Skip to main content\\n\\nWe gratefully acknowledge su...'}\n",
            "{'id': 'web-search_1', 'text': 'General Tips for Designing Prompts\\n\\nChain-of-Thoug...'}\n",
            "{'id': 'web-search_2', 'text': 'BlogDocsCommunityHackAPrompt Playground\\n\\nLanguage ...'}\n",
            "{'id': 'web-search_3', 'text': 'We now support using Microsoft Azure hosted OpenAI...'}\n",
            "{'id': 'web-search_4', 'text': 'Comprehensive Guide to Chain-of-Thought Prompting\\n...'}\n",
            "{'id': 'web-search_5', 'text': 'ResourcesArticleChain-of-Thought Prompting: Helpin...'}\n",
            "{'id': 'web-search_6', 'text': 'Let’s Think Step by Step: Advanced Reasoning in Bu...'}\n",
            "{'id': 'web-search_7', 'text': 'Unraveling the Power of Chain-of-Thought Prompting...'}\n",
            "{'id': 'web-search_8', 'text': 'AboutPressCopyrightContact usCreatorsAdvertiseDeve...'}\n",
            "{'id': 'web-search_9', 'text': 'Skip to main content\\n\\nLanguage Models Perform Reas...'}\n",
            "{'id': 'demo-conn-gdrive-6bfrp6_10', 'text': \"\\ufeffChaining Prompts\\r\\nIn this chapter, you'll learn a...\"}\n",
            "{'id': 'demo-conn-gdrive-6bfrp6_11', 'text': \"\\ufeffConstructing Prompts\\r\\nIn this chapter, you'll lea...\"}\n",
            "{'id': 'demo-conn-gdrive-6bfrp6_12', 'text': \"\\ufeffUse Case Patterns\\r\\nIn this chapter, you'll learn ...\"}\n",
            "{'id': 'demo-conn-gdrive-6bfrp6_13', 'text': \"\\ufeffEvaluating Outputs\\r\\nIn this chapter, you'll learn...\"}\n",
            "{'id': 'demo-conn-gdrive-6bfrp6_14', 'text': \"\\ufeffValidating Outputs\\r\\nIn this chapter, you'll learn...\"}\n",
            "\n",
            "----------------------------------------------------------------------------------------------------\n",
            "\n",
            "User: quit\n",
            "Ending chat.\n"
          ]
        }
      ],
      "source": [
        "# Define connectors\n",
        "connectors = [\"demo-conn-gdrive-6bfrp6\", \"web-search\"]\n",
        "\n",
        "# Create an instance of the Chatbot class\n",
        "chatbot = Chatbot(connectors)\n",
        "\n",
        "# Run the chatbot\n",
        "chatbot.run()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "13NYOYrOo6Sq"
      },
      "source": [
        "With all these documents coming from various connectors, a couple of questions arise:\n",
        "\n",
        "- **How do we handle long documents?** Connecting to multiple connectors means dealing with various APIs, each with its own way of providing documents. Some may return a complete document spanning tens or hundreds of pages. This causes two problems. First, stuffing a long document into an LLM prompt can exceed the model's context limit, resulting in an error. Second, even if the context limit is not exceeded, the response will likely suffer because the LLM receives a lot of irrelevant information from the long document instead of only its most relevant chunks.\n",
        "\n",
        "- **How do we handle multiple documents from multiple connectors and queries?** For a single connector, the retrieval and reranking implementation is within the developer’s control. With multiple connectors, it is not, because the documents are aggregated at the Chat endpoint. As the number of connectors grows, this becomes a bigger problem: we don’t control the relevance of the documents sent to the LLM prompt, and the context limit may again be exceeded. Furthermore, if more than one query is generated, the number of retrieved documents multiplies accordingly.\n",
        "\n",
        "The Chat endpoint solves these problems with its automated chunking and reranking process.\n",
        "\n",
        "Note that for this to happen, the `prompt_truncation` parameter must be set to `AUTO` (the default) and not `OFF`."
      ]
    },
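    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For example, the streaming call used in the `Chatbot` class above could set this parameter explicitly (`AUTO` is already the default, so this is shown only for illustration):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Explicitly setting prompt_truncation (AUTO is the default).\n",
        "response = co.chat_stream(\n",
        "    message=\"What is chain of thought prompting?\",\n",
        "    model=\"command-r\",\n",
        "    connectors=[ChatConnector(id=\"web-search\")],\n",
        "    prompt_truncation=\"AUTO\",\n",
        ")"
      ]
    },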
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XPiSsvTtpxuD"
      },
      "source": [
        "# Handling Long and Large Volume of Documents"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IyoETRmZpKIu"
      },
      "source": [
        "### Chunking\n",
        "\n",
        "The first step is to split each document returned by the connectors into smaller chunks. Each chunk is between 100 and 400 words, and sentences are kept intact where possible.\n"
      ]
    },
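    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The endpoint performs this chunking automatically. Purely as an illustration of the idea (not the endpoint's actual implementation), a minimal sentence-aware chunker might look like this, where `max_words` is a hypothetical upper bound:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustrative sketch only: split text into chunks of at most `max_words` words,\n",
        "# keeping sentences intact where possible.\n",
        "import re\n",
        "\n",
        "def chunk_document(text: str, max_words: int = 400) -> list:\n",
        "    sentences = re.split(r'(?<=[.!?])\\\\s+', text)\n",
        "    chunks, current, count = [], [], 0\n",
        "    for sentence in sentences:\n",
        "        n_words = len(sentence.split())\n",
        "        # Start a new chunk when adding this sentence would exceed the budget\n",
        "        if current and count + n_words > max_words:\n",
        "            chunks.append(' '.join(current))\n",
        "            current, count = [], 0\n",
        "        current.append(sentence)\n",
        "        count += n_words\n",
        "    if current:\n",
        "        chunks.append(' '.join(current))\n",
        "    return chunks"
      ]
    },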
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fLtLQE5WpQfv"
      },
      "source": [
        "### Reranking\n",
        "\n",
        "The Chat endpoint then uses the Rerank endpoint to take all the chunked documents from all connectors and rerank them based on contextual relevance to the query."
      ]
    },
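    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Again, this happens automatically inside the Chat endpoint, but the same capability is available directly via the Rerank endpoint. A sketch (the chunk strings and model name here are assumptions for illustration):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustrative: rerank chunks by contextual relevance to a query.\n",
        "chunks = [\n",
        "    \"Chain-of-thought prompting shows the model step-by-step reasoning examples.\",\n",
        "    \"Unrelated text about a different topic.\",\n",
        "]\n",
        "\n",
        "results = co.rerank(\n",
        "    query=\"What is chain of thought prompting?\",\n",
        "    documents=chunks,\n",
        "    top_n=2,\n",
        "    model=\"rerank-english-v2.0\",\n",
        ")\n",
        "\n",
        "for result in results.results:\n",
        "    print(result.index, result.relevance_score)"
      ]
    },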
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gBY-iIqGpUl9"
      },
      "source": [
        "### Interleaving\n",
        "\n",
        "The reranked documents from the different lists are then interleaved into one list."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tliT_nBMpbMX"
      },
      "source": [
        "### Prompt building\n",
        "When the `prompt_truncation` parameter is set to `AUTO`, some elements from the chat history and documents will be dropped to construct a prompt that fits within the model's context length limit.\n",
        "Documents and chat history are added iteratively until the prompt would become too long. The resulting prompt is then passed to the Command model for response generation.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "usAY26Q-pJKr"
      },
      "outputs": [],
      "source": []
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
