{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "4afa980c-21be-44b8-807e-710b5de56198",
   "metadata": {},
   "source": [
    "##  Notebook 2: Filling RAG outputs For Evaluation\n",
    "\n",
    "In this notebook, we will use the example RAG pipeline to populate the RAG outputs: contexts (retrieved relevant documents) and answer (generated by RAG pipeline).\n",
    "\n",
    "The example RAG pipeline provided as part of this repository uses [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/) to build a chatbot that references a custom knowledge base. \n",
    "\n",
    "If you want to learn more about how the example RAG works, please see [03_llama_index_simple.ipynb](../notebooks/03_llama_index_simple.ipynb).\n",
    "\n",
    "- **Steps 1-5**: Build the RAG pipeline.\n",
    "- **Step 6**: Build the Query Engine, exposing the Retriever and Generator outputs\n",
    "- **Step 7**: Fill the RAG outputs "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "191e7b90-128e-4432-82ab-897426389d06",
   "metadata": {},
   "source": [
    "### Steps 1-5: Build the RAG pipeline\n",
    "\n",
    "#### Define the LLM\n",
    "Here we are using a local llm on triton and the address and gRPC port that the Triton is available on. \n",
    "\n",
    "***If you are using AI Playground (no local GPU) replace, the code in the cell two cells below with the following: ***\n",
    "\n",
    "```\n",
    "import os\n",
    "from nv_aiplay import GeneralLLM\n",
    "os.environ['NVAPI_KEY'] = \"REPLACE_WITH_YOUR_API_KEY\"\n",
    "\n",
    "llm = GeneralLLM(\n",
    "    model=\"llama2_70b\",\n",
    "    temperature=0.2,\n",
    "    max_tokens=300\n",
    ")\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "a18dfc7b",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture\n",
    "!test -d dataset || unzip dataset.zip"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8a80987e-1ddb-4248-b76c-f3ce16745ca3",
   "metadata": {},
   "outputs": [],
   "source": [
    "from triton_trt_llm import TensorRTLLM\n",
    "from llama_index.llms.langchain import LangChainLLM\n",
    "trtllm =TensorRTLLM(server_url=\"llm:8001\", model_name=\"ensemble\", tokens=300)\n",
    "llm = LangChainLLM(llm=trtllm)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bc57b68d-afd5-4a0c-832c-0ad8f3f475d5",
   "metadata": {},
   "source": [
    "#### Create a Prompt Template\n",
    "\n",
    "A [**prompt template**](https://gpt-index.readthedocs.io/en/latest/core_modules/model_modules/prompts.html) is a common paradigm in LLM development.\n",
    "\n",
    "They are a pre-defined set of instructions provided to the LLM and guide the output produced by the model. They can contain few shot examples and guidance and are a quick way to engineer the responses from the LLM. Llama 2 accepts the [prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) shown in `LLAMA_PROMPT_TEMPLATE`, which we manipulate to be constructed with:\n",
    "- The system prompt\n",
    "- The context\n",
    "- The user's question\n",
    "  \n",
    "Much like LangChain's abstraction of prompts, LlamaIndex has similar abstractions for you to create prompts."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "682ec812-33be-430f-8bb1-ae3d68690198",
   "metadata": {},
   "outputs": [],
   "source": [
    "# import the relevant libraries\n",
    "from llama_index.core import Prompt\n",
    "\n",
    "LLAMA_PROMPT_TEMPLATE = (\n",
    " \"<s>[INST] <<SYS>>\"\n",
    " \"Use the following context to answer the user's question. If you don't know the answer, just say that you don't know, don't try to make up an answer.\"\n",
    " \"<</SYS>>\"\n",
    " \"<s>[INST] Context: {context_str} Question: {query_str} Only return the helpful answer below and nothing else. Helpful answer:[/INST]\"\n",
    ")\n",
    "\n",
    "qa_template = Prompt(LLAMA_PROMPT_TEMPLATE)"
   ]
  },
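  {
   "cell_type": "markdown",
   "id": "3f1a2b4c-5d6e-4f70-8a91-b2c3d4e5f601",
   "metadata": {},
   "source": [
    "To see the final prompt the LLM receives, you can fill the template's placeholders yourself. A minimal sketch using the `LLAMA_PROMPT_TEMPLATE` defined above (the context and question here are made up):\n",
    "\n",
    "```python\n",
    "prompt = LLAMA_PROMPT_TEMPLATE.format(\n",
    "    context_str=\"Triton serves models over gRPC on port 8001.\",\n",
    "    query_str=\"Which port does Triton use for gRPC?\",\n",
    ")\n",
    "print(prompt)  # the fully assembled Llama 2 prompt\n",
    "```"
   ]
  },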
  {
   "cell_type": "markdown",
   "id": "d0af7922",
   "metadata": {},
   "source": [
    "### Load Documents\n",
    "Follow the step number 1 [defined here](../notebooks/05_dataloader.ipynb) to upload the pdf's to Milvus server.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a7bb75ad",
   "metadata": {},
   "source": [
    "In this rest of this section, we will load and split the pdfs of NVIDIA blogs. We will use the `SentenceTransformersTokenTextSplitter`.\n",
    "Additionally, we use a LlamaIndex [``PromptHelper``](https://gpt-index.readthedocs.io/en/latest/api_reference/service_context/prompt_helper.html) to help deal with LLM context window token limitations. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fa366250-108e-45a0-88ce-e6f7274da8e1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# import the relevant libraries\n",
    "from langchain.text_splitter import SentenceTransformersTokenTextSplitter\n",
    "from llama_index.core.node_parser import LangchainNodeParser\n",
    "from llama_index.core import PromptHelper\n",
    "\n",
    "# setup the text splitter\n",
    "TEXT_SPLITTER_MODEL = \"intfloat/e5-large-v2\"\n",
    "TEXT_SPLITTER_TOKENS_PER_CHUNK = 510\n",
    "TEXT_SPLITTER_CHUNCK_OVERLAP = 200\n",
    "\n",
    "text_splitter = SentenceTransformersTokenTextSplitter(\n",
    "    model_name=TEXT_SPLITTER_MODEL,\n",
    "    tokens_per_chunk=TEXT_SPLITTER_TOKENS_PER_CHUNK,\n",
    "    chunk_overlap=TEXT_SPLITTER_CHUNCK_OVERLAP,\n",
    ")\n",
    "\n",
    "node_parser = LangchainNodeParser(text_splitter)\n",
    "\n",
    "\n",
    "# Use the PromptHelper\n",
    "\n",
    "prompt_helper = PromptHelper(\n",
    "  context_window=4096,\n",
    "  num_output=256,\n",
    "  chunk_overlap_ratio=0.1,\n",
    "  chunk_size_limit=None\n",
    ")"
   ]
  },
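  {
   "cell_type": "markdown",
   "id": "7a8b9c0d-1e2f-4a3b-9c4d-5e6f7a8b9c02",
   "metadata": {},
   "source": [
    "To build intuition for `tokens_per_chunk` and `chunk_overlap`, here is a toy sliding-window splitter over integer token IDs. This is a simplification: the real splitter tokenizes with the `intfloat/e5-large-v2` tokenizer, and `split_tokens` below is a made-up helper, not part of LangChain:\n",
    "\n",
    "```python\n",
    "def split_tokens(tokens, tokens_per_chunk, chunk_overlap):\n",
    "    # slide a window of tokens_per_chunk, stepping forward by the non-overlapping part\n",
    "    step = tokens_per_chunk - chunk_overlap\n",
    "    return [tokens[i:i + tokens_per_chunk] for i in range(0, len(tokens), step)]\n",
    "\n",
    "chunks = split_tokens(list(range(12)), tokens_per_chunk=5, chunk_overlap=2)\n",
    "# each full chunk repeats the last 2 tokens of the previous chunk\n",
    "print(chunks)\n",
    "```"
   ]
  },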
  {
   "cell_type": "markdown",
   "id": "b8dab583-a12d-4fb1-a9eb-3a1b1f04075d",
   "metadata": {},
   "source": [
    "#### Generate and Store Embeddings\n",
    "##### a) Generate Embeddings \n",
    "[Embeddings](https://python.langchain.com/docs/modules/data_connection/text_embedding/) for documents are created by vectorizing the document text; this vectorization captures the semantic meaning of the text. \n",
    "\n",
    "We will use [intfloat/e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) for the embeddings."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e9011ba0-f3f6-41f0-8a15-48f264743545",
   "metadata": {},
   "outputs": [],
   "source": [
    "# import the relevant libraries\n",
    "from langchain.embeddings import HuggingFaceEmbeddings\n",
    "from llama_index.embeddings.langchain import LangchainEmbedding\n",
    "\n",
    "#Running the model on CPU as we want to conserve gpu memory.\n",
    "#In the production deployment (API server shown as part of the 5th notebook we run the model on GPU)\n",
    "model_name=\"intfloat/e5-large-v2\"\n",
    "model_kwargs = {\"device\": \"cuda:0\"}\n",
    "encode_kwargs = {\"normalize_embeddings\": False}\n",
    "hf_embeddings = HuggingFaceEmbeddings(\n",
    "    model_name=model_name,\n",
    "    model_kwargs=model_kwargs,\n",
    "    encode_kwargs=encode_kwargs,\n",
    ")\n",
    "# Load in a specific embedding model\n",
    "embed_model = LangchainEmbedding(hf_embeddings)"
   ]
  },
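  {
   "cell_type": "markdown",
   "id": "0d1e2f3a-4b5c-4d6e-8f70-a1b2c3d4e503",
   "metadata": {},
   "source": [
    "As rough intuition for how vectorized text is compared, here is a minimal cosine-similarity sketch with toy 3-dimensional vectors (`intfloat/e5-large-v2` actually produces 1024-dimensional embeddings):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    dot = sum(x * y for x, y in zip(a, b))\n",
    "    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))\n",
    "\n",
    "query = [1.0, 0.0, 1.0]\n",
    "related_doc = [0.9, 0.1, 0.8]\n",
    "unrelated_doc = [0.0, 1.0, 0.0]\n",
    "print(cosine_similarity(query, related_doc))    # close to 1: semantically similar\n",
    "print(cosine_similarity(query, unrelated_doc))  # 0.0: orthogonal vectors\n",
    "```"
   ]
  },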
  {
   "cell_type": "markdown",
   "id": "8db99124-e438-406d-880d-557501a461d3",
   "metadata": {},
   "source": [
    "##### b) Store Embeddings \n",
    "\n",
    "We will use the LlamaIndex module [`Settings`](https://docs.llamaindex.ai/en/stable/module_guides/supporting_modules/settings/?h=settings) to bundle commonly used resources during the indexing and querying stage.\n",
    "\n",
    "\n",
    "In this example, we bundle the build resources: the LLM, the embedding model, the node parser, and the prompt helper.   "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0e493f9d-589a-4820-902d-f68932bfb0d8",
   "metadata": {},
   "outputs": [],
   "source": [
    "# import the relevant libraries\n",
    "from llama_index.core import Settings\n",
    "\n",
    "Settings.llm = llm\n",
    "Settings.embed_model = embed_model\n",
    "Settings.node_parser = node_parser\n",
    "Settings.prompt_helper = prompt_helper"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "44e10c13",
   "metadata": {},
   "source": [
    "Ingest the dataset using the /documents endpoint in the chain-server."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "acdc51db",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import requests\n",
    "import mimetypes\n",
    "\n",
    "def upload_document(file_path, url):\n",
    "    headers = {\n",
    "        'accept': 'application/json'\n",
    "    }\n",
    "    mime_type, _ = mimetypes.guess_type(file_path)\n",
    "    files = {\n",
    "        'file': (file_path, open(file_path, 'rb'), mime_type)\n",
    "    }\n",
    "    response = requests.post(url, headers=headers, files=files)\n",
    "\n",
    "    return response.text\n",
    "\n",
    "def upload_pdf_files(folder_path, upload_url):\n",
    "    for files in os.listdir(folder_path):\n",
    "        _, ext = os.path.splitext(files)\n",
    "        # Ingest only pdf files\n",
    "        if ext.lower() == \".pdf\":\n",
    "            file_path = os.path.join(folder_path, files)\n",
    "            print(upload_document(file_path, upload_url))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "823c89f9",
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "start_time = time.time()\n",
    "upload_pdf_files(\"dataset\", \"http://chain-server:8081/documents\")\n",
    "print(f\"--- {time.time() - start_time} seconds ---\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "79c7923c-d778-4f32-be37-4314063ecd2f",
   "metadata": {},
   "source": [
    "<div class=\"alert alert-block alert-info\">\n",
    "    \n",
    "⚠️ in the deployment of this workflow, [Milvus](https://milvus.io/) is running as a vector database microservice.\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1e94e53e-41a9-47d3-a9d3-7c0af4c07f76",
   "metadata": {},
   "outputs": [],
   "source": [
    "# import the relevant libraries\n",
    "from llama_index.core import VectorStoreIndex\n",
    "from llama_index.core.storage.storage_context import StorageContext\n",
    "from llama_index.vector_stores.milvus import MilvusVectorStore\n",
    "\n",
    "# store\n",
    "vector_store = MilvusVectorStore(uri=\"http://milvus:19530\",\n",
    "    dim=1024,\n",
    "    collection_name=\"developer_rag\",\n",
    "    index_config={\"index_type\": \"GPU_IVF_FLAT\", \"nlist\": 64},\n",
    "    search_config={\"nprobe\": 16},\n",
    "    overwrite=False\n",
    ")\n",
    "storage_context = StorageContext.from_defaults(vector_store=vector_store)\n",
    "index = VectorStoreIndex.from_vector_store(vector_store)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b3b58028-04fa-4050-9ec4-6526817fd9cf",
   "metadata": {},
   "source": [
    "### Step 6: Build the Query Engine, exposing the Retriever and Generator outputs\n",
    "\n",
    "#### a) Limit the Retriever Total Output Length\n",
    "\n",
    "First, we need to restrict the output of the Retriever to a reasonable length so that the prompt can fit the context length of the LLM.\n",
    "In this notebook, we will restrict it to 1000 (anything up to 1000 will ignored).\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6efc410c-f488-43aa-af65-c39376bd7ba5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# import the relevant libraries\n",
    "from llama_index.core.postprocessor.types import BaseNodePostprocessor\n",
    "from typing import TYPE_CHECKING, List, Optional\n",
    "from llama_index.core.utils import get_tokenizer\n",
    "DEFAULT_MAX_CONTEXT = 1000\n",
    "\n",
    "# limit the Retriever total outputs length\n",
    "class LimitRetrievedNodesLength(BaseNodePostprocessor):\n",
    "    \"\"\"Llama Index chain filter to limit token lengths.\"\"\"\n",
    "\n",
    "    def _postprocess_nodes(\n",
    "        self, nodes: List[\"NodeWithScore\"], query_bundle: Optional[\"QueryBundle\"] = None\n",
    "    ) -> List[\"NodeWithScore\"]:\n",
    "        \"\"\"Filter function.\"\"\"\n",
    "        included_nodes = []\n",
    "        current_length = 0\n",
    "        limit = DEFAULT_MAX_CONTEXT\n",
    "\n",
    "        tokenizer = get_tokenizer()\n",
    "        for node in nodes:\n",
    "            current_length += len(\n",
    "                tokenizer(\n",
    "                    node.node.get_content(metadata_mode=MetadataMode.LLM)\n",
    "                )\n",
    "            )\n",
    "            if current_length > limit:\n",
    "                break\n",
    "            included_nodes.append(node)\n",
    "\n",
    "        return included_nodes\n",
    "\n"
   ]
  },
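  {
   "cell_type": "markdown",
   "id": "5c6d7e8f-9a0b-4c1d-8e2f-3a4b5c6d7e04",
   "metadata": {},
   "source": [
    "The truncation logic above can be sketched in plain Python, with a whitespace tokenizer standing in for `get_tokenizer()` and plain strings standing in for nodes:\n",
    "\n",
    "```python\n",
    "def limit_by_tokens(texts, limit):\n",
    "    included, current_length = [], 0\n",
    "    for text in texts:\n",
    "        current_length += len(text.split())\n",
    "        if current_length > limit:\n",
    "            break  # stop once the cumulative length would exceed the limit\n",
    "        included.append(text)\n",
    "    return included\n",
    "\n",
    "chunks = [\"one two three\", \"four five\", \"six seven eight nine\"]\n",
    "print(limit_by_tokens(chunks, limit=6))  # keeps the first two chunks (5 tokens)\n",
    "```"
   ]
  },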
  {
   "cell_type": "markdown",
   "id": "e33cfed2-2a63-40be-8a7d-787ba04d2af9",
   "metadata": {},
   "source": [
    "#### b) Build the Query Engine\n",
    "\n",
    "Now, let's build the query engine that takes a query and returns a response. Each vector index has a default corresponding query engine; for example, the default query engine for a vector index performs a standard top-k retrieval over the vector store.\n",
    "We will use `RetrieverQueryEngine` to get the output of the Retriever and generator. Learn more about the RetrieverQueryEngine in the [documentation](https://gpt-index.readthedocs.io/en/latest/examples/query_engine/CustomRetrievers.html).\n",
    "\n",
    " "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f56f37e0-341e-4d7d-b282-f374a16f55b2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# import the relevant libraries\n",
    "from llama_index.core.query_engine import RetrieverQueryEngine\n",
    "from llama_index.core.schema import MetadataMode\n",
    "\n",
    "# Expose the retriever\n",
    "retriever = index.as_retriever(similarity_top_k=2)\n",
    "\n",
    "query_engine = RetrieverQueryEngine.from_args(\n",
    "    retriever,\n",
    "    text_qa_template=qa_template,\n",
    "    node_postprocessors=[LimitRetrievedNodesLength()]\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c6a58983-2069-450e-adf9-24b0f8736498",
   "metadata": {},
   "source": [
    "### Step 7: Fill the RAG outputs \n",
    "\n",
    "Let's now query the RAG pipeline and fill the outputs `contexts` and `answer` on the evaluation JSON file.\n",
    "\n",
    "First, we need to load the previously generated dataset. So far, the RAG outputs fields are empty.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "82f0f304-3476-42e3-9be7-1ab38f9e14cd",
   "metadata": {},
   "outputs": [],
   "source": [
    "# import the relevant libraries\n",
    "import json\n",
    "from IPython.display import JSON\n",
    "\n",
    "# load the evaluation data\n",
    "f = open('qa_generation.json')\n",
    "data = json.load(f)\n",
    "\n",
    "# show the first element\n",
    "JSON(data[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d4b4321b-dfce-4c72-a8f1-2e2264b3c59d",
   "metadata": {},
   "source": [
    "Let now query the RAG pipeline and populate the `contexts` and `answer` fields."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6f238d58-071a-4bb9-956c-d014748c15ab",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "for entry in data:\n",
    "    limited_retrieval_length = LimitRetrievedNodesLength()\n",
    "    retrieved_text = \"\"\n",
    "    response = query_engine.query(entry[\"question\"])\n",
    "    entry[\"answer\"] = response.response\n",
    "    print(entry[\"answer\"])\n",
    "    nodes = retriever.retrieve(entry[\"question\"])\n",
    "    included_nodes = limited_retrieval_length.postprocess_nodes(nodes)\n",
    "    for node in included_nodes:\n",
    "        retrieved_text = retrieved_text + \" \" + node.text\n",
    "    entry[\"contexts\"] = [retrieved_text]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "14407673-a8f1-4245-8748-d6885e08f06d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# json_list_string=json.dumps(data)\n",
    "\n",
    "# show again the first element\n",
    "JSON(data[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dfa9f140-5989-4c3c-98af-18ec63a954b9",
   "metadata": {},
   "source": [
    "Let now save the new evaluation datasets."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "958653ba-4228-4c81-8f65-81ead7c8254f",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "with open('eval.json', 'w') as f:\n",
    "    json.dump(data, f)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "248982b8-9f9e-4021-a326-657e2e82d43d",
   "metadata": {},
   "source": [
    "In the next notebook, we will evaluate the [Corp Comms Copilot](https://gitlab-master.nvidia.com/chat-labs/rag-demos/corp-comms-copilot) RAG pipeline."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
