{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "550bf249-9df2-4078-965d-7651261a1d4d",
   "metadata": {},
   "source": [
    "# Chat with your documents on Intel Meteor Lake iGPU\n",
     "In this notebook we will show how to build a system for chatting with your documents, running locally on your Intel Meteor Lake laptop!\n",
    "\n",
     "Retrieval Augmented Generation (RAG) is a method for enhancing model generation with relevant information retrieved from an external database.\n",
     "Using RAG we can chat with our documents or ask questions about current events that didn't appear in the model's training data.\n",
     "Running RAG locally gives the user an immediate response while interacting with the model and ensures privacy, as all the data stays on the laptop and is never uploaded anywhere.\n",
     "RAG is a key component in the AI PC era and here we will show you how to use it!\n",
    "\n",
    "To build the RAG pipeline we will use LangChain:\n",
    "> LangChain is a framework designed to simplify the creation of applications using large language models.\n",
    "\n",
    "LangChain has a built-in integration with OpenVINO and 🤗 Optimum which will make our implementation very easy and simple.\n",
    "You can read about LangChain [here](https://python.langchain.com/docs/get_started/introduction) as we won't dive into how LangChain works in this notebook.\n",
    "\n",
     "To show RAG's prowess we will use the [RealTimeData/bbc_latest](https://huggingface.co/datasets/RealTimeData/bbc_latest) dataset, which is updated weekly with the latest BBC news articles.\n",
     "Since we are using a foundation model that was trained some time ago, without any fine-tuning, the model will have to rely on RAG to answer our questions about current events.\n",
    "\n",
    "For a language model we will use [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) which is a very powerful model especially for local inference.\n",
     "We will use the same method as in our notebook on running [Phi-2 on Intel Meteor Lake iGPU](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/quantized_generation_demo.ipynb), but we won't cover it in detail here, so be sure to check it out.\n",
    "\n",
    "Our RAG system will use the following pipeline to answer questions:\n",
    "```\n",
    "        User query\n",
    "        /        \\\n",
     "   Retriever      |\n",
    "       |          |\n",
    "Relevant docs     |\n",
    "        \\         |\n",
    "      Prompt creation\n",
    "             |\n",
    "           Phi-3\n",
    "             |\n",
    "           Answer             \n",
    "```\n",
    "In a chatbot scenario we will also have the chat history as an input to the prompt creation.\n",
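     "\n",
     "As a conceptual sketch of this flow (toy stand-in functions only, not the real components we build below):\n",
     "```python\n",
     "def build_prompt(query, docs, history=\"\"):\n",
     "    # Concatenate chat history, the user query and the retrieved docs\n",
     "    return f\"{history}{query}\\n\" + \"\\n\".join(docs)\n",
     "\n",
     "\n",
     "def answer(query, retriever, llm, history=\"\"):\n",
     "    docs = retriever(query)  # Retriever -> relevant docs\n",
     "    prompt = build_prompt(query, docs, history)  # Prompt creation\n",
     "    return llm(prompt)  # LM -> answer\n",
     "\n",
     "\n",
     "# Toy stand-ins for the retriever and the language model\n",
     "toy_retriever = lambda q: [\"The league will feature 36 teams.\"]\n",
     "toy_llm = lambda p: \"36\"\n",
     "```\n",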
    "\n",
    "<b>Let's begin! 🚀</b>\n",
    "\n",
    "First we will create our database.\n",
     "We will use [ChromaDB](https://python.langchain.com/docs/integrations/vectorstores/chroma) as our database. ChromaDB receives the documents and an embedder, and encodes all the documents into vector representations with the embedder model.\n",
     "We will use [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5), which is quite good and also very small, a great fit for running locally.\n",
     "Later, when given a query, it will use the same embedder to encode the query and retrieve relevant documents with representations similar to the query's.\n",
     "Since BBC articles can be very long we will split every article into passages of 3 sentences.\n",
     "We will also save the processed data to disk to avoid recomputing the entire dataset every time we reload.\n",
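     "\n",
     "Under the hood, retrieval is a nearest-neighbor search over the embedding vectors, typically by cosine similarity. A minimal NumPy sketch (with made-up 3-dimensional vectors standing in for real embeddings):\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "\n",
     "def cosine_sim(a, b):\n",
     "    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))\n",
     "\n",
     "\n",
     "doc_vecs = np.array([[0.9, 0.1, 0.0], [0.0, 0.2, 0.9]])  # one vector per passage\n",
     "query_vec = np.array([1.0, 0.0, 0.1])\n",
     "scores = [cosine_sim(query_vec, d) for d in doc_vecs]\n",
     "best = int(np.argmax(scores))  # index of the most similar passage\n",
     "```\n",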
    "Note, this process may take a few minutes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3d0d8554-68e1-4df7-b4f9-c79489de5f3f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Uncomment to install dependencies\n",
    "# ! pip install langchain datasets pandas nltk chromadb sentence-transformers\n",
    "# import nltk\n",
    "\n",
    "# nltk.download('punkt')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fed9f199-e2d4-4681-8a55-9342b0686842",
   "metadata": {},
   "outputs": [],
   "source": [
    "from datasets import load_dataset\n",
    "from nltk.tokenize import sent_tokenize\n",
    "from functools import reduce\n",
    "import pandas as pd\n",
    "import os\n",
    "\n",
    "from langchain_community.embeddings.sentence_transformer import SentenceTransformerEmbeddings\n",
    "from langchain_community.vectorstores import Chroma\n",
    "from langchain_core.documents.base import Document\n",
    "\n",
    "\n",
    "def articles_to_passages(articles, sent_count_per_passage=3):\n",
    "    \"\"\"Split a list of articles to a list of passages\"\"\"\n",
    "\n",
    "    def map(text):\n",
    "        sents = sent_tokenize(text)\n",
    "        sentence_df = pd.DataFrame(sents, columns=[\"sentence\"]).reset_index()\n",
    "        sentence_df[\"batch\"] = sentence_df[\"index\"] // sent_count_per_passage\n",
    "        passages = list(sentence_df.groupby(\"batch\")[\"sentence\"].apply(lambda x: \" \".join(x)))\n",
    "        return passages\n",
    "\n",
    "    return reduce(lambda l1, l2: l1 + l2, [map(p) for p in articles], [])\n",
    "\n",
    "\n",
    "embedding_function = SentenceTransformerEmbeddings(model_name=\"BAAI/bge-small-en-v1.5\")\n",
    "\n",
    "chroma_db_path = \"./chroma_db\"\n",
    "if os.path.exists(chroma_db_path):\n",
    "    database = Chroma(persist_directory=chroma_db_path, embedding_function=embedding_function)\n",
    "    print(\"Loaded dataset from disk\")\n",
    "else:\n",
    "    dataset = load_dataset(\"RealTimeData/bbc_latest\", split=\"train\", revision=\"2024-03-18\")\n",
     "    # Filter only sports articles\n",
    "    sports_articles = dataset.filter(lambda e: \"sport\" in e[\"link\"])[\"content\"]\n",
    "    sports_articles = pd.DataFrame(sports_articles).drop_duplicates()[0].to_list()\n",
    "    # Split documents to passages\n",
    "    sport_passages = articles_to_passages(sports_articles)\n",
    "    database = Chroma.from_documents([Document(page_content=doc) for doc in sport_passages], embedding_function, persist_directory=chroma_db_path)\n",
     "    print(f\"Number of sports articles found: {len(sports_articles)}\\nNumber of embedded passages: {len(sport_passages)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0070bc2c-23cb-439c-ae05-1bdf5953dd2b",
   "metadata": {},
   "source": [
     "Next we will initialize a retriever from the database.\n",
     "We override the `_get_relevant_documents` method to control the number of documents the retriever returns for every query."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ca0ad351-85b1-41b9-8f93-a98327025c33",
   "metadata": {},
   "outputs": [],
   "source": [
    "def _get_relevant_documents(self, query, *, run_manager):\n",
    "    search_kwargs = {k: v for k, v in self.search_kwargs.items()}\n",
    "\n",
    "    if \"top_k\" in run_manager.metadata:\n",
    "        search_kwargs[\"k\"] = run_manager.metadata[\"top_k\"]\n",
    "    if self.search_type == \"similarity\":\n",
    "        docs = self.vectorstore.similarity_search(query, **search_kwargs)\n",
    "    elif self.search_type == \"similarity_score_threshold\":\n",
    "        docs_and_similarities = self.vectorstore.similarity_search_with_relevance_scores(query, **search_kwargs)\n",
    "        docs = [doc for doc, _ in docs_and_similarities]\n",
    "    elif self.search_type == \"mmr\":\n",
    "        docs = self.vectorstore.max_marginal_relevance_search(query, **search_kwargs)\n",
    "    else:\n",
    "        raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n",
    "    return [d.page_content for d in docs]\n",
    "\n",
    "\n",
    "retriever = database.as_retriever()\n",
    "type(retriever)._get_relevant_documents = _get_relevant_documents"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cbe58cbf-eac5-4b97-a9f2-231014df0e6f",
   "metadata": {},
   "source": [
    "Let's test our retriever with a query and see if it returns a relevant document:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "28be6fd5-d433-452c-872c-9781ebfbe233",
   "metadata": {},
   "outputs": [],
   "source": [
    "question = \"How many teams will The 2024-25 Champions League feature?\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1a0957b3-c248-49f5-a1c4-b1f14abba404",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.schema.runnable import RunnableConfig\n",
    "\n",
    "print(retriever.invoke(question, config=RunnableConfig(metadata={\"top_k\": 1})))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e2441be3-8b19-4517-96d8-ae0f27a28346",
   "metadata": {},
   "source": [
    "Check that the retrieved document is relevant to your question.\n",
     "In our example, we can see that the retrieved document says there will be 36 teams in the Champions League in 2024-25.\n",
    "Later we will see that Phi-3 can't answer that question correctly without RAG since it was trained on data that predates October 2023.\n",
    "\n",
    "Next, we will want to build a prompt to include the question and relevant documents.\n",
    "The template will be quite simple:\n",
    "```\n",
    "<s><|system|>\n",
    "You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<|end|>\n",
    "<|user|>\n",
    "{question}\n",
    "{retrieved documents list}<|end|>\n",
    "<|assistant|>\n",
    "```\n",
     "Note that this template follows the chat template Phi-3 was trained with, with the addition of a system message and the retrieved context."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7f7f51df-4ca4-483f-be52-1a90c82cf342",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain import PromptTemplate\n",
    "\n",
    "\n",
    "# Phi-3 wasn't trained with system prompt: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/discussions/51\n",
    "template = \"\"\"<s><|user|>\n",
    "You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<|end|>\n",
    "<|user|>\n",
    "{question}\n",
    "{context}<|end|>\n",
    "<|assistant|>\n",
    "\"\"\"\n",
    "prompt = PromptTemplate.from_template(template)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3a40c2cd-f726-42b0-af3c-e51045bbcb16",
   "metadata": {},
   "source": [
     "Now we can build a chain that will receive our question and return a prompt for querying an LM."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "38ee58db-aa88-4173-91e0-d00ebcb6bf04",
   "metadata": {},
   "outputs": [],
   "source": [
    "from operator import itemgetter\n",
    "\n",
    "\n",
    "chain = {\"context\": (itemgetter(\"question\") | retriever), \"question\": itemgetter(\"question\")} | prompt\n",
    "\n",
    "print(chain.invoke({\"question\": question}, config=RunnableConfig(metadata={\"top_k\": 1})).to_string())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eabd5651-db4a-423e-9db4-eb157f6418b6",
   "metadata": {},
   "source": [
    "That's it, we are set to run the prompt through our model.\n",
     "Next we will initialize our OpenVINO-optimized Phi-3 model in a pipeline and form the complete chain to produce an answer to our question."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b1ef00cf-82d5-4020-8d7d-6ac41e5e4fa4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Uncomment to install dependencies\n",
    "# ! pip install optimum[openvino,nncf]\n",
     "# Phi-3 is not supported yet in the official release of `optimum-intel` so we need to install from source\n",
    "# ! pip install git+https://github.com/huggingface/optimum-intel"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ae5a5086-9ed7-45f1-8837-bc1b2d174616",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline\n",
    "from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig\n",
    "from functools import wraps\n",
    "\n",
    "import openvino.properties as props\n",
    "import openvino.properties.hint as hints\n",
    "\n",
    "\n",
    "model_name = \"microsoft/Phi-3-mini-4k-instruct\"\n",
    "save_name = model_name.split(\"/\")[-1] + \"_openvino\"\n",
    "precision = \"f16\"\n",
    "quantization_config = OVWeightQuantizationConfig(\n",
    "    bits=4,\n",
    "    sym=False,\n",
    "    group_size=128,\n",
    "    ratio=0.8,\n",
    ")\n",
    "device = \"gpu\"\n",
    "saved = os.path.exists(save_name)\n",
    "load_kwargs = {\n",
    "    \"device\": device,\n",
    "    \"ov_config\": {\n",
    "        hints.performance_mode(): hints.PerformanceMode.LATENCY,\n",
    "        hints.inference_precision: precision,\n",
    "        props.cache_dir(): os.path.join(save_name, \"model_cache\"),  # OpenVINO will use this directory as cache\n",
    "    },\n",
    "    \"quantization_config\": quantization_config,\n",
    "    \"trust_remote_code\": True,\n",
    "    \"export\": not saved,\n",
    "}\n",
    "\n",
    "ov_llm = HuggingFacePipeline.from_model_id(\n",
    "    model_id=model_name if not saved else save_name,\n",
    "    task=\"text-generation\",\n",
    "    backend=\"openvino\",\n",
    "    model_kwargs=load_kwargs,\n",
    ")\n",
    "\n",
    "if not saved:\n",
     "    # LangChain passes the model_kwargs to the tokenizer as well, which can cause issues when saving\n",
    "    for k in load_kwargs:\n",
    "        ov_llm.pipeline.tokenizer.__dict__[\"init_kwargs\"].pop(k, None)\n",
    "    ov_llm.pipeline.save_pretrained(save_name)\n",
    "\n",
    "\n",
    "original_generate = HuggingFacePipeline._generate\n",
    "\n",
    "\n",
    "@wraps(original_generate)\n",
    "def _generate_with_kwargs(*args, **kwargs):\n",
    "    pipeline_kwargs = kwargs.get(\"run_manager\").metadata.get(\"pipeline_kwargs\", {})\n",
    "    return original_generate(*args, **kwargs, pipeline_kwargs=pipeline_kwargs)\n",
    "\n",
    "\n",
    "HuggingFacePipeline._generate = _generate_with_kwargs\n",
    "\n",
    "chain |= ov_llm"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c5aba8cd-b4e7-4eb8-a1c4-c880da96070c",
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import TextStreamer\n",
    "\n",
    "\n",
    "streamer = TextStreamer(ov_llm.pipeline.tokenizer, skip_special_tokens=True, skip_prompt=True)\n",
    "out = chain.invoke(\n",
    "    {\"question\": question},\n",
    "    config=RunnableConfig(\n",
    "        metadata={\n",
    "            \"top_k\": 1,\n",
    "            \"pipeline_kwargs\": {\n",
    "                \"max_new_tokens\": 256,\n",
    "                \"return_full_text\": False,\n",
    "                \"streamer\": streamer,\n",
    "                \"eos_token_id\": ov_llm.pipeline.tokenizer.convert_tokens_to_ids([\"<|endoftext|>\", \"<|end|>\", \"<|system|>\", \"<|user|>\", \"<|assistant|>\"]),\n",
    "            },\n",
    "        }\n",
    "    ),\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a9f88b28-872a-45a5-a289-48512bbb8d09",
   "metadata": {},
   "source": [
    "And there you have it, our chain is complete and we got the correct answer!"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "126b60ff-983e-4c37-b9a6-7cf7755e528b",
   "metadata": {},
   "source": [
    "## Chatbot with RAG demo\n",
     "We are now ready to build our chatbot demo with RAG capabilities.\n",
    "We will use [Gradio](https://www.gradio.app/) to build our demo.\n",
    "\n",
     "First, we will define our chat memory and modify our template and chain to handle chat history."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "395d148f-6049-4e00-b4d5-9d43efd41c0f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Uncomment to install dependencies\n",
    "# ! pip install gradio"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c6a01dbb-9fd9-4853-9972-ceb7d08f3ce7",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.messages.base import BaseMessage\n",
    "from langchain.memory import ConversationBufferMemory\n",
    "\n",
    "\n",
    "def parse_chat_history(chat_history):\n",
    "    role_map = {\"human\": \"<|user|>\\n\", \"ai\": \"<|assistant|>\\n\", \"context\": \"\"}\n",
    "    buffer = \"\"\n",
    "    for dialogue_turn in chat_history:\n",
    "        assert isinstance(dialogue_turn, BaseMessage)\n",
    "        role_prefix = role_map[dialogue_turn.type]\n",
    "        buffer += f\"{role_prefix}{dialogue_turn.content}\"\n",
    "        buffer += \"<|end|>\\n\" if dialogue_turn.type != \"human\" else \"\\n\"\n",
    "    return buffer\n",
    "\n",
    "\n",
    "def add_to_memory(memory, question, context, answer):\n",
    "    memory.chat_memory.add_messages(\n",
    "        [BaseMessage(content=question, type=\"human\"), BaseMessage(content=context, type=\"context\"), BaseMessage(content=answer, type=\"ai\")]\n",
    "    )\n",
    "\n",
    "\n",
    "def delete_last_message_from_memory(memory):\n",
    "    del memory.chat_memory.messages[-3:]\n",
    "\n",
    "\n",
    "memory = ConversationBufferMemory(memory_key=\"chat_history\", ai_prefix=\"Assistant\", human_prefix=\"User\")\n",
    "\n",
    "template = \"\"\"<s><|system|>\n",
    "You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<|end|>\n",
    "{chat_history}<|user|>\n",
    "{question}\n",
    "{context}<|end|>\n",
    "<|assistant|>\n",
    "\"\"\"\n",
    "prompt = PromptTemplate.from_template(template)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7db722f5-edfb-4c16-ad55-0400b4bc0138",
   "metadata": {},
   "source": [
     "We will want to make RAG optional in our demo, so we will build two chains: one with RAG and one without."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "79381021-130e-452e-bb59-f87c521fe94a",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.runnables import RunnableLambda\n",
    "\n",
    "\n",
    "base_chain = RunnableLambda(func=lambda x: x) | {\n",
    "    \"context\": itemgetter(\"context\"),\n",
    "    \"answer\": prompt | ov_llm,\n",
    "}\n",
    "\n",
    "rag_chain = {\n",
    "    \"context\": (itemgetter(\"question\") | retriever),\n",
    "    \"question\": itemgetter(\"question\"),\n",
    "    \"chat_history\": itemgetter(\"chat_history\") | RunnableLambda(parse_chat_history),\n",
    "} | base_chain\n",
    "\n",
    "no_rag_chain = {\n",
    "    \"context\": RunnableLambda(lambda q: \"\"),\n",
    "    \"question\": itemgetter(\"question\"),\n",
    "    \"chat_history\": itemgetter(\"chat_history\") | RunnableLambda(parse_chat_history),\n",
    "} | base_chain"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c8d8d5d4-f075-4d4f-9203-c3f52e0b714e",
   "metadata": {},
   "source": [
     "Next we will write the core generation function for our demo."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "92a48961-5cb4-42fc-9a76-4cc6cdd7326d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "import itertools\n",
    "from threading import Thread\n",
    "from transformers import (\n",
    "    TextIteratorStreamer,\n",
    "    StoppingCriteria,\n",
    "    StoppingCriteriaList,\n",
    "    GenerationConfig,\n",
    ")\n",
    "\n",
     "\n",
    "class ThreadWithResult(Thread):\n",
    "    \"\"\"\n",
    "    Modified Thread class to save the return value of the target function\n",
    "    Based on https://stackoverflow.com/a/65447493\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, group=None, target=None, name=None, args=(), kwargs={}, *, daemon=None):\n",
    "        def function():\n",
    "            self._result = target(*args, **kwargs)\n",
    "\n",
    "        super().__init__(group=group, target=function, name=name, daemon=daemon)\n",
    "\n",
    "    @property\n",
    "    def result(self):\n",
    "        self.join()\n",
    "        return self._result\n",
    "\n",
    "\n",
    "# Copied and modified from https://github.com/bigcode-project/bigcode-evaluation-harness/blob/main/bigcode_eval/generation.py#L13\n",
    "class SuffixCriteria(StoppingCriteria):\n",
    "    def __init__(self, start_length, eof_strings, tokenizer, check_fn=None):\n",
    "        self.start_length = start_length\n",
    "        self.eof_strings = eof_strings\n",
    "        self.tokenizer = tokenizer\n",
    "        if check_fn is None:\n",
    "            check_fn = lambda decoded_generation: any([decoded_generation.endswith(stop_string) for stop_string in self.eof_strings])\n",
    "        self.check_fn = check_fn\n",
    "\n",
    "    def __call__(self, input_ids, scores, **kwargs):\n",
    "        \"\"\"Returns True if generated sequence ends with any of the stop strings\"\"\"\n",
    "        decoded_generations = self.tokenizer.batch_decode(input_ids[:, self.start_length :])\n",
    "        return all([self.check_fn(decoded_generation) for decoded_generation in decoded_generations])\n",
    "\n",
    "\n",
    "def is_partial_stop(output, stop_str):\n",
    "    \"\"\"\n",
    "    Check whether the output contains a partial stop str.\n",
    "\n",
    "    Params:\n",
    "      output: current output from the model\n",
     "      stop_str: a string we want to stop generation on\n",
    "    Returns:\n",
    "      True if the suffix of the output is a prefix of the stop_str\n",
    "    \"\"\"\n",
    "    for i in range(0, min(len(output), len(stop_str))):\n",
    "        if stop_str.startswith(output[-i:]):\n",
    "            return True\n",
    "    return False\n",
    "\n",
    "\n",
    "def format_context(context):\n",
    "    \"\"\"\n",
    "    Utility function to format retrieved documents inside the chatbot window\n",
    "\n",
    "    Params:\n",
     "      context: retrieved documents\n",
     "    Returns:\n",
     "      Formatted string with the retrieved documents\n",
    "    \"\"\"\n",
    "    if len(context) == 0:\n",
    "        return \"\"\n",
    "    blockquote_style = \"\"\"font-size: 12px;\n",
    "background: #e4e4e4; \n",
    "border-left: 10px solid #ccc; \n",
    "margin: 0.5em 30px;\n",
    "padding: 0.5em 10px;\n",
    "color: black;\"\"\"\n",
    "    summary_style = \"\"\"font-weight: bold;\n",
    "font-size: 14px;\n",
    "list-style-position: outside;\n",
    "margin: 0.5em 15px;\n",
    "padding: 0px 0px 10px 15px;\"\"\"\n",
    "    s = f'<details style=\"margin:0px;padding:0px;\"><summary style=\"{summary_style}\">Retrieved documents:</summary>'\n",
    "    for doc in context:\n",
    "        d = doc.replace(\"\\n\", \" \")\n",
    "        s += f'<blockquote style=\"{blockquote_style}\"><p>{d}</p></blockquote>'\n",
    "    s += \"</details>\"\n",
    "    return s\n",
    "\n",
    "\n",
    "def prepare_for_regenerate(history):\n",
    "    \"\"\"\n",
    "    Delete last assistant response from memory in order to regenerate it\n",
    "\n",
    "    Params:\n",
    "      history: conversation history\n",
    "    Returns:\n",
    "      Updated history\n",
    "    \"\"\"\n",
    "    history[-1][1] = None\n",
    "    delete_last_message_from_memory(memory)\n",
    "    return history, *([gr.update(interactive=False)] * 6)\n",
    "\n",
    "\n",
    "def add_user_text(message, history):\n",
    "    \"\"\"\n",
    "    Add user's message to chatbot history\n",
    "\n",
    "    Params:\n",
    "      message: current user message\n",
    "      history: conversation history\n",
    "    Returns:\n",
    "      Updated history, clears user message and status\n",
    "    \"\"\"\n",
    "    # Append current user message to history with a blank assistant message which will be generated by the model\n",
    "    history.append([message.strip(), None])\n",
    "    return \"\", history, *([gr.update(interactive=False)] * 5)\n",
    "\n",
    "\n",
    "def reset_chatbot():\n",
    "    \"\"\"Clears demo contents and resets chat history\"\"\"\n",
    "    memory.clear()\n",
    "    return None, None, \"Status: Idle\"\n",
    "\n",
    "\n",
    "def generate(\n",
    "    history,\n",
    "    temperature,\n",
    "    max_new_tokens,\n",
    "    top_p,\n",
    "    repetition_penalty,\n",
    "    num_retrieved_docs,\n",
    "    enable_rag,\n",
    "):\n",
    "    \"\"\"\n",
     "    Generates the assistant's response given the chatbot history and generation parameters\n",
    "\n",
    "    Params:\n",
     "      history: conversation history formatted in pairs of user and assistant messages `[user_message, assistant_message]`\n",
     "      temperature:  parameter controlling the level of creativity in AI-generated text.\n",
     "                    By adjusting the `temperature`, you can influence the AI model's probability distribution, making the text more focused or diverse.\n",
     "      max_new_tokens: the maximum number of tokens we allow the model to generate as a response.\n",
     "      top_p: parameter controlling the range of tokens considered by the AI model based on their cumulative probability.\n",
     "      repetition_penalty: parameter penalizing tokens based on how frequently they occur in the text.\n",
    "      num_retrieved_docs: number of documents to retrieve in case of RAG\n",
    "      enable_rag: a boolean to enable/disable RAG\n",
    "    Yields:\n",
    "      Updated history and generation status.\n",
    "    \"\"\"\n",
    "    if len(history) == 0 or history[-1][1] is not None:\n",
    "        yield history, \"Status: Idle\", *([gr.update(interactive=True)] * 6)\n",
    "        return\n",
    "    prompt_char = \"▌\"\n",
    "    history[-1][1] = prompt_char\n",
    "    yield history, \"Status: Generating...\", *([gr.update(interactive=False)] * 6)\n",
    "\n",
    "    start = time.perf_counter()\n",
    "    user_query = history[-1][0]\n",
    "    current_chain = rag_chain if enable_rag else no_rag_chain\n",
    "    tokenizer = ov_llm.pipeline.tokenizer\n",
    "    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)\n",
     "    # Create a stopping criterion to prevent the model from playing the role of the user as well.\n",
    "    stop_str = []\n",
    "    stopping_criteria = StoppingCriteriaList([SuffixCriteria(0, stop_str, tokenizer)])\n",
    "\n",
    "    # Prepare input for generate\n",
    "    generation_config = GenerationConfig(\n",
    "        max_new_tokens=max_new_tokens,\n",
    "        do_sample=temperature > 0.0,\n",
    "        temperature=temperature if temperature > 0.0 else 1.0,\n",
    "        repetition_penalty=repetition_penalty,\n",
    "        top_p=top_p,\n",
    "        eos_token_id=tokenizer.convert_tokens_to_ids([tokenizer.eos_token, \"<|end|>\", \"<|system|>\", \"<|user|>\", \"<|assistant|>\"]),\n",
    "        pad_token_id=tokenizer.eos_token_id,\n",
    "    )\n",
    "    generate_kwargs = dict(\n",
    "        streamer=streamer,\n",
    "        generation_config=generation_config,\n",
    "        stopping_criteria=stopping_criteria,\n",
    "    )\n",
    "    chain_kwargs = {\"config\": RunnableConfig(metadata={\"top_k\": num_retrieved_docs, \"pipeline_kwargs\": generate_kwargs})}\n",
    "\n",
    "    # Call chain\n",
    "    t1 = ThreadWithResult(\n",
    "        target=current_chain.invoke,\n",
    "        args=[{\"question\": user_query, \"chat_history\": memory.chat_memory.messages}],\n",
    "        kwargs=chain_kwargs,\n",
    "    )\n",
    "    t1.start()\n",
    "\n",
    "    # Initialize an empty string to store the generated text.\n",
    "    partial_text = \"\"\n",
    "    generated_tokens = 0\n",
    "    for new_text in streamer:\n",
    "        partial_text += new_text\n",
    "        generated_tokens += 1\n",
    "        history[-1][1] = partial_text + prompt_char\n",
    "        pos = -1\n",
    "        for s in stop_str:\n",
    "            if (pos := partial_text.rfind(s)) != -1:\n",
    "                break\n",
    "        if pos != -1:\n",
    "            partial_text = partial_text[:pos]\n",
    "            break\n",
    "        elif any([is_partial_stop(partial_text, s) for s in stop_str]):\n",
    "            continue\n",
    "        yield history, \"Status: Generating...\", *([gr.update(interactive=False)] * 6)\n",
    "    chain_out = t1.result\n",
    "    history[-1][1] = partial_text + format_context(chain_out[\"context\"])\n",
    "    add_to_memory(memory, user_query, chain_out[\"context\"], partial_text)\n",
    "    generation_time = time.perf_counter() - start\n",
    "    yield history, f\"Generation time: {generation_time:.2f} sec\", *([gr.update(interactive=True)] * 6)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8f8afd92-1c31-4429-8fac-4b2f6b40d2ca",
   "metadata": {},
   "source": [
     "Let's add an option to chat with our own documents by loading them into our database."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "535c9549-e714-4892-9a7c-f91e6f2d2877",
   "metadata": {},
   "outputs": [],
   "source": [
     "# Uncomment to install dependencies\n",
     "# ! pip install pypdf"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a3ba3eb4-5e8b-4d45-b4b9-cae901bb683d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pypdf import PdfReader\n",
    "\n",
    "\n",
    "added_documents_ids = []\n",
    "\n",
    "\n",
    "def pdf_to_docs(file_path):\n",
    "    reader = PdfReader(file_path)\n",
    "    texts = [page.extract_text() for page in reader.pages]\n",
    "    return [Document(page_content=p) for p in articles_to_passages(texts)]\n",
    "\n",
    "\n",
    "def load_files(files):\n",
    "    yield (\n",
     "        \"Loading...\",\n",
    "        *([gr.update(interactive=False)] * 6),\n",
    "    )\n",
    "    start = time.perf_counter()\n",
    "    for fp in files:\n",
    "        documents = pdf_to_docs(fp)\n",
    "        added_documents_ids.append(database.add_documents(documents))\n",
    "    upload_time = time.perf_counter() - start\n",
    "    yield (\n",
    "        f\"Load time: {upload_time * 1000:.2f}ms\",\n",
    "        *([gr.update(interactive=True)] * 5),\n",
    "        gr.update(value=f\"Delete documents 〈{len(added_documents_ids)}〉\", interactive=True),\n",
    "    )\n",
    "\n",
    "\n",
    "def delete_documents():\n",
    "    yield (\n",
     "        \"Deleting...\",\n",
    "        *([gr.update(interactive=False)] * 6),\n",
    "    )\n",
    "    global added_documents_ids\n",
    "    for l in added_documents_ids:\n",
    "        database.delete(l)\n",
    "    added_documents_ids = []\n",
    "    yield (\n",
     "        \"Status: Idle\",\n",
    "        *([gr.update(interactive=True)] * 5),\n",
    "        gr.update(value=f\"Delete documents 〈{len(added_documents_ids)}〉\", interactive=True),\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "46ff747f-1ca1-4b31-9ef5-608673b37d0a",
   "metadata": {},
   "source": [
    "Now we can build the actual demo using Gradio.\n",
    "The layout is simple: a chatbot window followed by a text prompt, with controls that let you enable/disable the RAG pipeline, submit, clear, and regenerate. This is pretty standard for a chatbot demo.\n",
    "We have also added options to load PDF documents into the database and to delete them if required.\n",
    "You can extend the document-loading option to support formats other than PDF."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f2c95e58-7080-494a-84c9-0df4e68bbbc1",
   "metadata": {},
   "outputs": [],
   "source": [
    "import gradio as gr\n",
    "\n",
    "try:\n",
    "    demo.close()  # Close a previous instance if the cell is re-run\n",
    "except Exception:\n",
    "    pass\n",
    "\n",
    "EXAMPLES_EDUCATION = [\n",
    "    \"Lily drops a rubber ball from the top of a wall. The wall is 2 meters tall. How long will it take for the ball to reach the ground?\",\n",
    "    \"Mark has 15 notebooks in his backpack. Each day, he uses 3 notebooks for his classes. After 4 days, how many notebooks will Mark have left in his backpack?\",\n",
    "]\n",
    "EXAMPLES_BBC = [\n",
    "    \"How many teams will the 2024-25 Champions League feature?\",\n",
    "    \"Who said that English football is finished?\",\n",
    "]\n",
    "\n",
    "with gr.Blocks(theme=gr.themes.Soft()) as demo:\n",
    "    gr.Markdown('<h1 style=\"text-align: center;\">Intel Labs Demo: Chat with 150 BBC Sport News Articles on Intel Meteor Lake iGPU</h1>')\n",
    "    chatbot = gr.Chatbot(height=800)\n",
    "    with gr.Row():\n",
    "        rag = gr.Checkbox(\n",
    "            value=True,\n",
    "            label=\"Retrieve\",\n",
    "            interactive=True,\n",
    "            info=\"Enables RAG pipeline\",\n",
    "        )\n",
    "        msg = gr.Textbox(placeholder=\"Enter message here...\", show_label=False, autofocus=True, scale=75)\n",
    "        status = gr.Textbox(\"Status: Idle\", show_label=False, max_lines=1, scale=20)\n",
    "    with gr.Row():\n",
    "        submit = gr.Button(\"Submit\", variant=\"primary\")\n",
    "        regenerate = gr.Button(\"Regenerate\")\n",
    "        clear = gr.Button(\"Clear\")\n",
    "        load = gr.UploadButton(\"Load Document\", file_types=[\"pdf\"], file_count=\"multiple\")\n",
    "        delete_docs = gr.Button(lambda: f\"Delete documents 〈{len(added_documents_ids)}〉\", interactive=True)\n",
    "    with gr.Accordion(\"Advanced Options:\", open=False):\n",
    "        with gr.Row():\n",
    "            with gr.Column():\n",
    "                temperature = gr.Slider(\n",
    "                    label=\"Temperature\",\n",
    "                    value=0.0,\n",
    "                    minimum=0.0,\n",
    "                    maximum=1.0,\n",
    "                    step=0.05,\n",
    "                    interactive=True,\n",
    "                )\n",
    "                max_new_tokens = gr.Slider(\n",
    "                    label=\"Max new tokens\",\n",
    "                    value=128,\n",
    "                    minimum=0,\n",
    "                    maximum=512,\n",
    "                    step=32,\n",
    "                    interactive=True,\n",
    "                )\n",
    "            with gr.Column():\n",
    "                top_p = gr.Slider(\n",
    "                    label=\"Top-p (nucleus sampling)\",\n",
    "                    value=1.0,\n",
    "                    minimum=0.0,\n",
    "                    maximum=1.0,\n",
    "                    step=0.05,\n",
    "                    interactive=True,\n",
    "                )\n",
    "                repetition_penalty = gr.Slider(\n",
    "                    label=\"Repetition penalty\",\n",
    "                    value=1.0,\n",
    "                    minimum=1.0,\n",
    "                    maximum=2.0,\n",
    "                    step=0.1,\n",
    "                    interactive=True,\n",
    "                )\n",
    "            num_documents = gr.Slider(label=\"Number of retrieved documents\", value=1, minimum=1, maximum=10, step=1, interactive=True)\n",
    "    gr.Examples(EXAMPLES_EDUCATION, inputs=msg, label=\"Non-RAG examples\")\n",
    "    gr.Examples(EXAMPLES_BBC, inputs=msg, label=\"RAG with BBC Sports examples\")\n",
    "    buttons = [submit, regenerate, clear, load, delete_docs]\n",
    "    # Trigger generation when the user submits a new message\n",
    "    gr.on(\n",
    "        triggers=[submit.click, msg.submit],\n",
    "        fn=add_user_text,\n",
    "        inputs=[msg, chatbot],\n",
    "        outputs=[msg, chatbot, *buttons],\n",
    "        concurrency_limit=1,\n",
    "        queue=True,\n",
    "    ).then(\n",
    "        fn=generate,\n",
    "        inputs=[chatbot, temperature, max_new_tokens, top_p, repetition_penalty, num_documents, rag],\n",
    "        outputs=[chatbot, status, msg, *buttons],\n",
    "        concurrency_limit=1,\n",
    "        queue=True,\n",
    "    )\n",
    "    regenerate.click(\n",
    "        fn=prepare_for_regenerate,\n",
    "        inputs=chatbot,\n",
    "        outputs=[chatbot, msg, *buttons],\n",
    "        concurrency_limit=1,\n",
    "        queue=True,\n",
    "    ).then(\n",
    "        fn=generate,\n",
    "        inputs=[chatbot, temperature, max_new_tokens, top_p, repetition_penalty, num_documents, rag],\n",
    "        outputs=[chatbot, status, msg, *buttons],\n",
    "        concurrency_limit=1,\n",
    "        queue=True,\n",
    "    )\n",
    "    clear.click(fn=reset_chatbot, inputs=None, outputs=[chatbot, msg, status], queue=True)\n",
    "    load.upload(\n",
    "        fn=load_files,\n",
    "        inputs=[load],\n",
    "        outputs=[status, msg, *buttons],\n",
    "        concurrency_limit=1,\n",
    "        queue=True,\n",
    "    )\n",
    "    delete_docs.click(fn=delete_documents, outputs=[status, msg, *buttons], concurrency_limit=1, queue=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "27efd8d1-2b4d-437e-9062-77ee6f3d99a5",
   "metadata": {},
   "outputs": [],
   "source": [
    "demo.launch(server_name=\"0.0.0.0\", server_port=7860, inline=False, inbrowser=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ee004285-50fb-43cc-b04a-2bd996fd47d1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# demo.close()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
