{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "77c8ac2e-eb68-4b84-85fe-3a6661eba976",
   "metadata": {},
   "source": [
    "# Notebook 2.5: Document Question-Answering with LangChain and AzureML\n",
    "This notebook demonstrates how to use LangChain to build a chatbot that references a custom knowledge base and sends requests to a remote NVIDIA Nemotron LLM hosted on AzureML. \n",
    "\n",
    "Before proceeding with this notebook, you must have an accessible Nemotron3-8B model hosted as an endpoint in AzureML. The Nemotron-8B models are curated by Microsoft in the `nvidia-ai` Azure Machine Learning (AzureML) registry and appear in the model catalog under the NVIDIA Collection. Explore the model card to learn more about the model architecture, use cases, and limitations. \n",
    "\n",
    "![alt text](./images/azureml-github.gif \"Launch Nemotron3-8B LLM Endpoint\")\n",
    "\n",
    "Simply sending requests to the Nemotron3-8B LLM will likely not fit your needs, as the model is unaware of your proprietary data. Suppose you have some text documents (PDFs, blogs, Notion pages, etc.) and want to ask questions related to the contents of those documents. LLMs, given their proficiency in understanding text, are a great tool for this. \n",
    "\n",
    "### [LangChain](https://python.langchain.com/docs/get_started/introduction)\n",
    "[**LangChain**](https://python.langchain.com/docs/get_started/introduction) provides a simple framework for connecting LLMs to your own data sources. Since LLMs are only trained up to a fixed point in time and do not contain knowledge proprietary to an enterprise, they can't answer questions about new or proprietary information. LangChain solves this problem."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "80ca0402-3e14-4414-8977-ba4617f7da74",
   "metadata": {},
   "source": [
    "### Step 1: Integrate TensorRT-LLM into LangChain [*(Model I/O)*](https://python.langchain.com/docs/modules/model_io/)\n",
    "\n",
    "#### Custom TRT-LLM LangChain integration\n",
    "LangChain allows you to [create custom wrappers for your LLM](https://python.langchain.com/docs/modules/model_io/models/llms/custom_llm) in case you want to use your own LLM or a different wrapper than one supported in LangChain. Since we are using a remote Nemotron-3-8B model hosted on Triton with TRT-LLM, we have written a custom wrapper for our LLM. \n",
    "\n",
    "Below is a snippet of the custom wrapper. Take a look at ```trt_llm_azureml.py``` for the full implementation.\n",
    "```\n",
    "class TensorRTLLM(LLM):\n",
    "    server_url: str = Field(None, alias=\"server_url\")\n",
    "\n",
    "    # some of the optional arguments\n",
    "    model_name: str = \"ensemble\"\n",
    "    temperature: Optional[float] = 1.0\n",
    "    top_p: Optional[float] = 0\n",
    "\n",
    "    @property\n",
    "    def _llm_type(self) -> str:\n",
    "        return \"triton_tensorrt\"\n",
    "\n",
    "    def _call(\n",
    "        self,\n",
    "        prompt: str,\n",
    "        run_manager: Optional[CallbackManagerForLLMRun] = None,\n",
    "        **kwargs,\n",
    "    ) -> str:\n",
    "        \"\"\"\n",
    "        Args:\n",
    "            prompt: The prompt to pass into the model.\n",
    "            stop: A list of strings to stop generation when encountered\n",
    "\n",
    "        Returns:\n",
    "            The string generated by the model\n",
    "        \"\"\"\n",
    "\n",
    "```\n",
    "\n",
    "The wrapper implements a ```_call``` method that takes a prompt string and some optional stop words and returns a string. Take a look at ```trt_llm_azureml.py``` for the full LangChain wrapper for the Nemotron-3-8B model deployed on Triton with TRT-LLM.\n",
    "\n",
    "```\n",
    "llm = TensorRTLLM(  # type: ignore\n",
    "    server_url=\"tme-demo-ml-zfqjc.eastus.inference.ml.azure.com/\",\n",
    "    model_name=\"ensemble\",\n",
    "    tokens=500,\n",
    "    use_ssl=True,\n",
    "    api_key=\"\",\n",
    "    extra_headers=extra_headers,\n",
    ")\n",
    "```\n",
    "\n",
    "<div class=\"alert alert-block alert-warning\">\n",
    "    \n",
    "<b>WARNING!</b> Be sure to replace `extra_headers[\"azureml-model-deployment\"]`, `server_url`, and `api_key` with the AzureML Model Deployment, Endpoint URL, and API-KEY respectively.\n",
    "\n",
    "![alt text](./images/connection-info.png \"Connection Info\")\n",
    "\n",
    "</div>"
   ]
  },
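  {
   "cell_type": "markdown",
   "id": "a3f1b2c4-9d01-4e5a-8b61-2f3a4c5d6e70",
   "metadata": {},
   "source": [
    "To make the wrapper contract concrete, the cell below sketches the same pattern without the LangChain base class. The `send_request` callable is a hypothetical stand-in for the Triton/AzureML HTTP request that the real implementation in ```trt_llm_azureml.py``` performs; here it is stubbed so the sketch runs without a live endpoint."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b4c2d3e5-0e12-4f6b-9c72-3a4b5c6d7e81",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch of the custom-LLM contract, minus the LangChain base class.\n",
    "# `send_request` stands in for the real Triton inference request.\n",
    "class MinimalTritonLLM:\n",
    "    def __init__(self, server_url, send_request):\n",
    "        self.server_url = server_url\n",
    "        self.send_request = send_request\n",
    "\n",
    "    @property\n",
    "    def _llm_type(self) -> str:\n",
    "        return \"triton_tensorrt\"\n",
    "\n",
    "    def _call(self, prompt: str, stop=None, **kwargs) -> str:\n",
    "        # The real _call builds a Triton request and parses the response\n",
    "        text = self.send_request(self.server_url, prompt)\n",
    "        if stop:  # truncate at the first stop word, as LangChain expects\n",
    "            for token in stop:\n",
    "                text = text.split(token)[0]\n",
    "        return text\n",
    "\n",
    "# Stubbed transport: uppercases the prompt instead of calling an endpoint\n",
    "llm_sketch = MinimalTritonLLM(\"https://example.invalid\", lambda url, p: p.upper())\n",
    "print(llm_sketch._call(\"hello\"))"
   ]
  },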
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f7c4cb93-95c7-4665-b091-7719d996acb8",
   "metadata": {},
   "outputs": [],
   "source": [
    "from trt_llm_azureml import TensorRTLLM\n",
    "extra_headers = {}\n",
    "extra_headers[\"azureml-model-deployment\"] = \"nemotron-3-8b-chat-rlhf-1\"\n",
    "\n",
    "# Connect to the TRT-LLM Nemotron-3-8B model served by Triton at the URL below\n",
    "llm = TensorRTLLM(  # type: ignore\n",
    "        server_url=\"tme-demo-ml-zfqjc.eastus.inference.ml.azure.com/\",\n",
    "        model_name=\"ensemble\",\n",
    "        tokens=500,\n",
    "        use_ssl=True,\n",
    "        api_key=\"REPLACE-WITH-API-KEY\",\n",
    "        extra_headers=extra_headers,\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8d835a3b-a4fd-423e-b594-c8be749f4f39",
   "metadata": {},
   "source": [
    "### Step 2: Create a Prompt Template [*(Model I/O)*](https://python.langchain.com/docs/modules/model_io/)\n",
    "\n",
    "A [**prompt template**](https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/) is a common paradigm in LLM development. \n",
    "\n",
    "Prompt templates are a pre-defined set of instructions provided to the LLM that guide the output produced by the model. They can contain few-shot examples and guidance, and are a quick way to engineer the responses from the LLM. Nemotron3-8B accepts the [prompt format](https://huggingface.co/nvidia/nemotron-3-8b-chat-4k-rlhf#prompt-format) shown in `GPT_RAG_TEMPLATE`, which we construct from:\n",
    "- The system prompt\n",
    "- The context\n",
    "- The user's question"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dc894491-1239-4a71-83fb-c312a873e2c5",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "\n",
    "GPT_RAG_TEMPLATE = (\n",
    "    \"<extra_id_0>System\\n\"\n",
    "    \"A chat between a curious user and an artificial intelligence assistant.\"\n",
    "    \"The assistant gives helpful, detailed, and polite answers to the user's questions.\\n\"\n",
    "    \"<extra_id_1>User\\n\"\n",
    "    \"Context: {context}\\n\\n\"\n",
    "    \"Given the above context, answer the following question: {question}\\n\"\n",
    "    \"<extra_id_1>Assistant\\n\"\n",
    ")\n",
    "\n",
    "GPT_PROMPT = PromptTemplate.from_template(GPT_RAG_TEMPLATE)"
   ]
  },
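  {
   "cell_type": "markdown",
   "id": "c5d3e4f6-1f23-4a7c-8d83-4b5c6d7e8f92",
   "metadata": {},
   "source": [
    "To sanity-check the prompt format, the cell below renders the same template with dummy context and question values using plain Python `str.format`, which performs the same variable substitution that `GPT_PROMPT.format` (and the chain in step 6) performs at query time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d6e4f507-2a34-4b8d-9e94-5c6d7e8f90a3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative: the same substitution PromptTemplate performs, via str.format\n",
    "template = (\n",
    "    \"<extra_id_0>System\\n\"\n",
    "    \"A chat between a curious user and an artificial intelligence assistant.\\n\"\n",
    "    \"<extra_id_1>User\\n\"\n",
    "    \"Context: {context}\\n\\n\"\n",
    "    \"Given the above context, answer the following question: {question}\\n\"\n",
    "    \"<extra_id_1>Assistant\\n\"\n",
    ")\n",
    "filled = template.format(\n",
    "    context=\"Llama 2 is a family of open LLMs released by Meta.\",\n",
    "    question=\"What is Llama 2?\",\n",
    ")\n",
    "print(filled)"
   ]
  },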
  {
   "cell_type": "markdown",
   "id": "3310462b-f215-4d00-9d59-e613921bed0a",
   "metadata": {},
   "source": [
    "### Step 3: Load Documents [*(Retrieval)*](https://python.langchain.com/docs/modules/data_connection/)\n",
    "LangChain provides a variety of [document loaders](https://python.langchain.com/docs/integrations/document_loaders) that load various types of documents (HTML, PDF, code) from many different sources and locations (private s3 buckets, public websites).\n",
    "\n",
    "Document loaders load data from a source as **Documents**. A **Document** is a piece of text (the ``page_content``) and associated metadata. Document loaders provide a ``load`` method for loading data as documents from a configured source. \n",
    "\n",
    "In this example, we use a LangChain [`UnstructuredFileLoader`](https://python.langchain.com/docs/integrations/document_loaders/unstructured_file) to load a research paper about Llama2 from Meta.\n",
    "\n",
    "[Here](https://python.langchain.com/docs/integrations/document_loaders) are some of the other document loaders available from LangChain."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "70c92132-4c34-44fc-af28-6aa0769b006c",
   "metadata": {},
   "outputs": [],
   "source": [
    "! wget -O \"llama2_paper.pdf\" -nc --user-agent=\"Mozilla\" https://arxiv.org/pdf/2307.09288.pdf"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b4382b61",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.document_loaders import UnstructuredFileLoader\n",
    "loader = UnstructuredFileLoader(\"llama2_paper.pdf\")\n",
    "data = loader.load()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4e0449e4",
   "metadata": {},
   "source": [
    "### Step 4: Transform Documents [*(Retrieval)*](https://python.langchain.com/docs/modules/data_connection/)\n",
    "Once documents have been loaded, they are often transformed. One method of transformation is known as **chunking**, which breaks down large pieces of text, for example, a long document, into smaller segments. This technique is valuable because it helps [optimize the relevance of the content returned from the vector database](https://www.pinecone.io/learn/chunking-strategies/). \n",
    "\n",
    "LangChain provides a [variety of document transformers](https://python.langchain.com/docs/integrations/document_transformers/), such as text splitters. In this example, we use a [``SentenceTransformersTokenTextSplitter``](https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.SentenceTransformersTokenTextSplitter.html#langchain.text_splitter.SentenceTransformersTokenTextSplitter). The ``SentenceTransformersTokenTextSplitter`` is a specialized text splitter for use with the sentence-transformer models. The default behaviour is to split the text into chunks that fit the token window of the sentence transformer model that you would like to use. This sentence transformer model is used to generate the embeddings from documents. \n",
    "\n",
    "There are some nuanced complexities to text splitting since semantically related text, in theory, should be kept together. "
   ]
  },
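  {
   "cell_type": "markdown",
   "id": "e7f50618-3b45-4c9e-8fa5-6d7e8f90a1b4",
   "metadata": {},
   "source": [
    "To build intuition for chunking with overlap, the cell below is a minimal stdlib-only sketch of the sliding-window idea. Note this is not what ``SentenceTransformersTokenTextSplitter`` does internally: the real splitter counts tokens from the sentence-transformer tokenizer, while this toy version splits on whitespace words."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f8061729-4c56-4daf-90b6-7e8f90a1b2c5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy sliding-window chunking over whitespace tokens; the real splitter\n",
    "# counts sentence-transformer tokens rather than words.\n",
    "def chunk_tokens(tokens, chunk_size, overlap):\n",
    "    step = chunk_size - overlap\n",
    "    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), step)]\n",
    "\n",
    "tokens = [f\"w{i}\" for i in range(12)]\n",
    "for chunk in chunk_tokens(tokens, chunk_size=5, overlap=2):\n",
    "    print(chunk)  # consecutive chunks share 2 tokens at their boundary"
   ]
  },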
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "21ec0438",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.text_splitter import SentenceTransformersTokenTextSplitter\n",
    "TEXT_SPLITTER_MODEL = \"intfloat/e5-large-v2\"\n",
    "TEXT_SPLITTER_CHUNK_SIZE = 510\n",
    "TEXT_SPLITTER_CHUNK_OVERLAP = 200\n",
    "\n",
    "text_splitter = SentenceTransformersTokenTextSplitter(\n",
    "    model_name=TEXT_SPLITTER_MODEL,\n",
    "    chunk_size=TEXT_SPLITTER_CHUNK_SIZE,\n",
    "    chunk_overlap=TEXT_SPLITTER_CHUNK_OVERLAP,\n",
    ")\n",
    "documents = text_splitter.split_documents(data)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "183aaeeb-7461-4f58-9fc4-2a51fa723714",
   "metadata": {},
   "source": [
    "Let's view a sample of content that is chunked together in the documents."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "46525e4e",
   "metadata": {},
   "outputs": [],
   "source": [
    "documents[40].page_content"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "3f580c54",
   "metadata": {},
   "source": [
    "### Step 5: Generate Embeddings and Store Embeddings in the Vector Store [*(Retrieval)*](https://python.langchain.com/docs/modules/data_connection/)\n",
    "\n",
    "#### a) Generate Embeddings\n",
    "[Embeddings](https://python.langchain.com/docs/modules/data_connection/text_embedding/) for documents are created by vectorizing the document text; this vectorization captures the semantic meaning of the text. This allows you to quickly and efficiently find other pieces of text that are similar. The embedding model used below is [intfloat/e5-large-v2](https://huggingface.co/intfloat/e5-large-v2).\n",
    "\n",
    "LangChain provides a wide variety of [embedding models](https://python.langchain.com/docs/integrations/text_embedding) from many providers and makes it simple to swap out the models. \n",
    "\n",
    "When a user sends in their query, the query is also embedded using the same embedding model that was used to embed the documents. As explained earlier, this allows us to find similar (and therefore relevant) documents for the user's query. \n",
    "\n",
    "#### b) Store Document Embeddings in the Vector Store\n",
    "Once the document embeddings are generated, they are stored in a vector store so that at query time we can:\n",
    "1) Embed the user query and\n",
    "2) Retrieve the embedding vectors that are most similar to the embedding query.\n",
    "\n",
    "A vector store takes care of storing the embedded data and performing a vector search.\n",
    "\n",
    "LangChain provides support for a [great selection of vector stores](https://python.langchain.com/docs/integrations/vectorstores/). \n",
    "\n",
    "<div class=\"alert alert-block alert-info\">\n",
    "    \n",
    "⚠️ For this workflow, [Milvus](https://milvus.io/) vector database is running as a microservice. \n",
    "\n",
    "</div>"
   ]
  },
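  {
   "cell_type": "markdown",
   "id": "0917283a-5d67-4eb0-a1c7-8f90a1b2c3d6",
   "metadata": {},
   "source": [
    "Under the hood, the vector store's similarity search compares embedding vectors, commonly by cosine similarity (Milvus also supports L2 and inner-product metrics). The toy 3-dimensional vectors below are just for illustration; real e5-large-v2 embeddings have 1024 dimensions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1a28394b-6e78-4fc1-b2d8-90a1b2c3d4e7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "# Cosine similarity: dot(a, b) / (|a| * |b|)\n",
    "def cosine_similarity(a, b):\n",
    "    dot = sum(x * y for x, y in zip(a, b))\n",
    "    norm_a = math.sqrt(sum(x * x for x in a))\n",
    "    norm_b = math.sqrt(sum(x * x for x in b))\n",
    "    return dot / (norm_a * norm_b)\n",
    "\n",
    "query = [0.9, 0.1, 0.0]\n",
    "doc_close = [0.8, 0.2, 0.0]  # points in a similar direction -> high score\n",
    "doc_far = [0.0, 0.1, 0.9]    # nearly orthogonal -> low score\n",
    "print(cosine_similarity(query, doc_close), cosine_similarity(query, doc_far))"
   ]
  },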
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9bd8b943",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.embeddings import HuggingFaceEmbeddings\n",
    "from langchain.vectorstores import Milvus\n",
    "import torch\n",
    "\n",
    "# The embedding model runs locally on CPU here; in the production deployment\n",
    "# (the API server shown in the 5th notebook) it runs on GPU.\n",
    "model_name = \"intfloat/e5-large-v2\"\n",
    "model_kwargs = {\"device\": \"cpu\"}  # e.g. model_kwargs = {\"device\": \"cuda:0\"} to use a GPU\n",
    "encode_kwargs = {\"normalize_embeddings\": False}\n",
    "hf_embeddings = HuggingFaceEmbeddings(\n",
    "    model_name=model_name,\n",
    "    model_kwargs=model_kwargs,\n",
    "    encode_kwargs=encode_kwargs,\n",
    ")\n",
    "vectorstore = Milvus.from_documents(\n",
    "    documents=documents,\n",
    "    embedding=hf_embeddings,\n",
    "    connection_args={\"host\": \"milvus\", \"port\": \"19530\"},\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f7fa622f",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# Simple example: retrieve documents from the vector database\n",
    "# Note: this cell is a standalone demonstration of a similarity search\n",
    "question = \"Can you talk about safety evaluation of llama2 chat?\"\n",
    "docs = vectorstore.similarity_search(question)\n",
    "print(docs[2].page_content)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "f6960255",
   "metadata": {},
   "source": [
    " > ### Simple Example: Retrieve Documents from the Vector Database [*(Retrieval)*](https://python.langchain.com/docs/modules/data_connection/)\n",
    ">Given a user query, the splits relevant to the question are returned through a **similarity search**. This is also known as a semantic search, which matches on meaning. It differs from a lexical search, where the search engine looks for literal matches of the query words or variants of them, without understanding the overall meaning of the query. A semantic search tends to produce more relevant results than a lexical search."
   ]
  },
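  {
   "cell_type": "markdown",
   "id": "2b394a5c-7f89-4ad2-83e9-a1b2c3d4e5f8",
   "metadata": {},
   "source": [
    "For contrast with the semantic search above, here is a toy lexical scorer that only counts literal query-word matches; it has no notion of meaning, which is why paraphrased queries can miss relevant documents under lexical search."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3c4a5b6d-809a-4be3-94fa-b2c3d4e5f609",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy lexical search: score documents by literal query-word matches\n",
    "docs = [\n",
    "    \"safety evaluation of chat models\",\n",
    "    \"training infrastructure and compute\",\n",
    "]\n",
    "query_words = \"safety evaluation\".lower().split()\n",
    "scores = [sum(word in doc for word in query_words) for doc in docs]\n",
    "print(scores)  # the first document matches both query words"
   ]
  },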
  {
   "cell_type": "markdown",
   "id": "9c8148dc",
   "metadata": {},
   "source": [
    "### Step 6: Compose a streamed answer using a Chain\n",
    "We have already integrated the AzureML hosted Nemotron3-8b LLM into LangChain with a custom wrapper, loaded and transformed documents, and generated and stored document embeddings in a vector database. To finish the pipeline, we need to add a few more LangChain components and combine all the components together with a [chain](https://python.langchain.com/docs/modules/chains/).\n",
    "\n",
    "A [LangChain chain](https://python.langchain.com/docs/modules/chains/) combines components together. In this case, we use a [RetrievalQA chain](https://js.langchain.com/docs/modules/chains/popular/vector_db_qa/), which is a chain type for question-answering against a vector index. It combines a *Retriever* and a *question answering (QA) chain*.\n",
    "\n",
    "We pass it 3 of our LangChain components:\n",
    "- Our instance of the LLM (from step 1).\n",
    "- A [retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/), which is an interface that returns documents given an unstructured query. In this case, we use our vector store as the retriever.\n",
    "- Our prompt template constructed from the Nemotron3-8B prompt format (from step 2)\n",
    "\n",
    "```\n",
    "qa_chain = RetrievalQA.from_chain_type(\n",
    "    llm,\n",
    "    retriever=vectorstore.as_retriever(),\n",
    "    chain_type_kwargs={\"prompt\": GPT_PROMPT}\n",
    ")\n",
    "```\n",
    "\n",
    "Lastly, we pass a user query to the chain and stream the result. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "69de32a0",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.chains import RetrievalQA\n",
    "\n",
    "qa_chain = RetrievalQA.from_chain_type(\n",
    "    llm,\n",
    "    retriever=vectorstore.as_retriever(),\n",
    "    chain_type_kwargs={\"prompt\": GPT_PROMPT}\n",
    ")\n",
    "result = qa_chain({\"query\": question})\n",
    "print(result[\"result\"])"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
