{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "gpuType": "T4"
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "## Build a Streamlit Chatbot using LangChain, ColBERT, RAGatouille, and ChromaDB!\n",
        "\n",
        "![logo](https://i.ibb.co/B6vMCY0/logo-2-final-no-bg.png)\n",
        "\n",
        "Instructions:\n",
        "To run this app successfully, you need two API keys: one for Together AI and one for ngrok.\n",
        "\n",
        "Follow these steps:\n",
        "- Replace the placeholders in the code with your actual API keys. Locate the text:\n",
        "\n",
        "INSERT YOUR TOGETHER AI API KEY HERE\n",
        "INSERT YOUR NGROK API KEY HERE\n",
        "\n",
        "and replace them with your Together AI and ngrok API keys, respectively.\n",
        "\n",
        "- After replacing the keys, run all of the cells in the notebook.\n",
        "\n",
        "- Once the code is executed, ngrok will generate a link for the Streamlit app. Use this link to access and interact with the app.\n",
        "\n"
      ],
      "metadata": {
        "id": "ND03rh6ZTddt"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Install Dependencies"
      ],
      "metadata": {
        "id": "5eppENGCSIHW"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "9LVmAMDyGVP3"
      },
      "outputs": [],
      "source": [
        "!pip install streamlit langchain sentence-transformers pypdf chromadb --quiet"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install -U ragatouille --quiet\n",
        "!pip uninstall -y faiss-cpu && pip install faiss-gpu --quiet"
      ],
      "metadata": {
        "id": "0iJZ2ChGIX1R"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Create ColBERT index using Ragatouille"
      ],
      "metadata": {
        "id": "3gGrbSXgR_2_"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!wget https://arxiv.org/pdf/2005.11401.pdf"
      ],
      "metadata": {
        "id": "yR88z08PJER5"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "## ragatouille/colbert implementation\n",
        "from ragatouille import RAGPretrainedModel\n",
        "from langchain_community.document_loaders import PyPDFLoader\n",
        "\n",
        "loader = PyPDFLoader(\"2005.11401.pdf\")\n",
        "r_docs = loader.load_and_split()\n",
        "\n",
        "RAG = RAGPretrainedModel.from_pretrained(\"colbert-ir/colbertv2.0\")\n",
        "\n",
        "ragatouille_docs = [doc.page_content for doc in r_docs]  # index the page text, not the Document repr\n",
        "\n",
        "RAG.index(\n",
        "  collection=ragatouille_docs,\n",
        "  index_name=\"langchain-index\",\n",
        "  max_document_length=512,\n",
        "  split_documents=True,\n",
        ")"
      ],
      "metadata": {
        "id": "05J63oWtIaqK"
      },
      "execution_count": null,
      "outputs": []
    },
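    {
      "cell_type": "markdown",
      "source": [
        "Before wiring the index into the app, you can sanity-check it directly. A minimal sketch (the query text here is just an example):\n",
        "\n",
        "```python\n",
        "# search the freshly built ColBERT index; RAGatouille returns a ranked\n",
        "# list of dicts with 'content', 'score', and 'rank' keys\n",
        "results = RAG.search(query='What is retrieval-augmented generation?', k=3)\n",
        "for r in results:\n",
        "    print(r['rank'], round(r['score'], 2), r['content'][:80])\n",
        "```"
      ],
      "metadata": {}
    },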
    {
      "cell_type": "markdown",
      "source": [
        "### Build Streamlit App"
      ],
      "metadata": {
        "id": "cvO3BV7YSWrh"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!wget https://i.ibb.co/B6vMCY0/logo-2-final-no-bg.png"
      ],
      "metadata": {
        "id": "BLdt_G81HzLd"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile chat_pdf_app.py\n",
        "\n",
        "import os\n",
        "import json\n",
        "import re\n",
        "import tempfile\n",
        "import streamlit as st\n",
        "from PIL import Image\n",
        "\n",
        "from langchain.document_loaders import PyPDFLoader\n",
        "from langchain.memory.chat_message_histories import StreamlitChatMessageHistory\n",
        "from langchain.embeddings import HuggingFaceEmbeddings\n",
        "from langchain.callbacks.base import BaseCallbackHandler\n",
        "from langchain.vectorstores import Chroma\n",
        "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
        "from langchain.prompts.prompt import PromptTemplate\n",
        "from langchain.llms import HuggingFacePipeline\n",
        "from langchain.chains import RetrievalQA\n",
        "from langchain.retrievers import EnsembleRetriever\n",
        "from ragatouille import RAGPretrainedModel\n",
        "from langchain.llms import Together\n",
        "\n",
        "TOGETHER_API_KEY = \"INSERT YOUR TOGETHER AI API KEY HERE\"      ## You can get your key from here: https://together.ai/\n",
        "\n",
        "favicon = Image.open(\"logo-2-final-no-bg.png\")\n",
        "\n",
        "st.set_page_config(page_title=\"RAG with Mixtral 8x7B and ColBERT\", page_icon=favicon)\n",
        "st.sidebar.image(\"logo-2-final-no-bg.png\", use_column_width=True)\n",
        "with st.sidebar:\n",
        "    st.write(\"**RAG with Mixtral 8x7B and ColBERT**\")\n",
        "\n",
        "# langsmith configuration (recommended)\n",
        "# os.environ[\"LANGCHAIN_PROJECT\"] = \"ADD YOUR PROJECT NAME HERE\"\n",
        "# os.environ[\"LANGCHAIN_API_KEY\"] = \"INSERT YOUR LANGCHAIN API KEY HERE\"          ## You can request access from here: https://smith.langchain.com/\n",
        "# os.environ[\"LANGCHAIN_ENDPOINT\"] =\"https://api.smith.langchain.com\"\n",
        "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
        "\n",
        "\n",
        "@st.cache_resource(ttl=\"1h\")\n",
        "def configure_retriever(uploaded_files):\n",
        "    # read documents\n",
        "    docs = []\n",
        "    temp_dir = tempfile.TemporaryDirectory()\n",
        "    for file in uploaded_files:\n",
        "        temp_filepath = os.path.join(temp_dir.name, file.name)\n",
        "        with open(temp_filepath, \"wb\") as f:\n",
        "            f.write(file.getvalue())\n",
        "        loader = PyPDFLoader(temp_filepath)\n",
        "        docs.extend(loader.load())\n",
        "\n",
        "    # split documents\n",
        "    text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)\n",
        "    splits = text_splitter.split_documents(docs)\n",
        "\n",
        "    # create embeddings and store in vectordb\n",
        "    embeddings = HuggingFaceEmbeddings(model_name=\"BAAI/bge-large-en-v1.5\")\n",
        "    vectordb = Chroma.from_documents(splits, embeddings)\n",
        "\n",
        "    # define retriever\n",
        "    chroma_retriever = vectordb.as_retriever(\n",
        "        search_type=\"mmr\", search_kwargs={\"k\": 4, \"fetch_k\": 10}\n",
        "    )\n",
        "\n",
        "    ## ragatouille/colbert implementation\n",
        "    RAG = RAGPretrainedModel.from_index(\".ragatouille/colbert/indexes/langchain-index\")\n",
        "    ragatouille_retriever = RAG.as_langchain_retriever(k=10)\n",
        "\n",
        "    ### initialize the ensemble retriever\n",
        "    retriever = EnsembleRetriever(retrievers=[chroma_retriever, ragatouille_retriever],\n",
        "                                            weights=[0.50, 0.50])\n",
        "    return retriever\n",
        "\n",
        "\n",
        "uploaded_files = st.sidebar.file_uploader(\n",
        "    label=\"Upload PDF files\", type=[\"pdf\"], accept_multiple_files=True\n",
        ")\n",
        "if not uploaded_files:\n",
        "    st.info(\"Please upload PDF documents to continue.\")\n",
        "    st.stop()\n",
        "\n",
        "retriever = configure_retriever(uploaded_files)\n",
        "\n",
        "## using Together API\n",
        "os.environ[\"TOGETHER_API_KEY\"] = TOGETHER_API_KEY\n",
        "llm = Together(\n",
        "    model=\"mistralai/Mixtral-8x7B-Instruct-v0.1\",\n",
        "    temperature=0.5,\n",
        "    max_tokens=2048,\n",
        "    top_k=10,\n",
        ")\n",
        "\n",
        "msgs = StreamlitChatMessageHistory()\n",
        "\n",
        "## prompt template\n",
        "RESPONSE_TEMPLATE = \"\"\"<s>[INST]\n",
        "<<SYS>>\n",
        "You are a helpful AI assistant.\n",
        "\n",
        "Use the following pieces of context to answer the user's question.<</SYS>>\n",
        "\n",
        "Anything between the following `context` html blocks is retrieved from a knowledge base.\n",
        "\n",
        "<context>\n",
        "    {context}\n",
        "</context>\n",
        "\n",
        "REMEMBER:\n",
        "- If you don't know the answer, just say that you don't know, don't try to make up an answer.\n",
        "- Let's take a deep breath and think step-by-step.\n",
        "\n",
        "Question: {question}[/INST]\n",
        "Helpful Answer:\n",
        "\"\"\"\n",
        "\n",
        "PROMPT = PromptTemplate(template=RESPONSE_TEMPLATE, input_variables=[\"context\", \"question\"])\n",
        "\n",
        "qa_chain = RetrievalQA.from_chain_type(\n",
        "    llm,\n",
        "    chain_type='stuff',\n",
        "    retriever=retriever,\n",
        "    chain_type_kwargs={\n",
        "        \"verbose\": True,\n",
        "        \"prompt\": PROMPT,\n",
        "    }\n",
        ")\n",
        "\n",
        "\n",
        "if len(msgs.messages) == 0 or st.sidebar.button(\"New Chat\"):\n",
        "    msgs.clear()\n",
        "    msgs.add_ai_message(\"How can I help you?\")\n",
        "\n",
        "avatars = {\"human\": \"user\", \"ai\": \"assistant\"}\n",
        "for msg in msgs.messages:\n",
        "    st.chat_message(avatars[msg.type]).write(msg.content)\n",
        "\n",
        "if user_query := st.chat_input(placeholder=\"Ask me anything!\"):\n",
        "    st.chat_message(\"user\").write(user_query)\n",
        "\n",
        "    with st.chat_message(\"assistant\"):\n",
        "\n",
        "        response = qa_chain({\"query\": user_query})\n",
        "\n",
        "        ## print answer\n",
        "        answer = response[\"result\"]\n",
        "        st.write(answer)\n",
        "\n",
        "about = st.sidebar.expander(\"About\")\n",
        "about.write(\"You can easily chat with a PDF using this AI chatbot. \\\n",
        "            It is built by [AI Geek Labs](https://aigeeklabs.com). The GitHub repo is [here](https://github.com/aigeek0x0/rag-with-langchain-colbert-and-ragatouille).\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "LZffZyJ9IKUM",
        "outputId": "2dafb174-6eb4-4e89-c020-3e8811689996"
      },
      "execution_count": 27,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting chat_pdf_app.py\n"
          ]
        }
      ]
    },
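    {
      "cell_type": "markdown",
      "source": [
        "The app above fuses the Chroma and ColBERT results with `EnsembleRetriever`, which merges the two ranked lists using weighted Reciprocal Rank Fusion. The same hybrid setup can be exercised outside Streamlit; a sketch assuming the `langchain-index` built earlier and the RAG paper downloaded above:\n",
        "\n",
        "```python\n",
        "from langchain.document_loaders import PyPDFLoader\n",
        "from langchain.embeddings import HuggingFaceEmbeddings\n",
        "from langchain.vectorstores import Chroma\n",
        "from langchain.retrievers import EnsembleRetriever\n",
        "from ragatouille import RAGPretrainedModel\n",
        "\n",
        "docs = PyPDFLoader('2005.11401.pdf').load_and_split()\n",
        "vectordb = Chroma.from_documents(docs, HuggingFaceEmbeddings(model_name='BAAI/bge-large-en-v1.5'))\n",
        "\n",
        "ensemble = EnsembleRetriever(\n",
        "    retrievers=[\n",
        "        vectordb.as_retriever(search_kwargs={'k': 4}),\n",
        "        RAGPretrainedModel.from_index('.ragatouille/colbert/indexes/langchain-index').as_langchain_retriever(k=4),\n",
        "    ],\n",
        "    weights=[0.5, 0.5],  # equal say for dense and late-interaction retrieval\n",
        ")\n",
        "print(len(ensemble.get_relevant_documents('How does RAG combine retrieval and generation?')))\n",
        "```"
      ],
      "metadata": {}
    },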
    {
      "cell_type": "markdown",
      "source": [
        "### Run Streamlit App with ngrok"
      ],
      "metadata": {
        "id": "C37eQ6wrR3C8"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip\n",
        "!unzip -o ngrok-stable-linux-amd64.zip\n",
        "!pip install --quiet pyngrok\n",
        "!pip install --no-dependencies --quiet protobuf==3.20.*\n",
        "!pip install --no-dependencies --quiet validators"
      ],
      "metadata": {
        "id": "gRCqM3QsR2IQ"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "import time\n",
        "\n",
        "get_ipython().system_raw('./ngrok authtoken \"INSERT YOUR NGROK API KEY HERE\"')             ## You can get the API key from here: https://ngrok.com/\n",
        "\n",
        "get_ipython().system_raw('./ngrok http 8501 &')\n",
        "\n",
        "# Wait for 2 seconds\n",
        "time.sleep(2)\n",
        "\n",
        "!curl -s http://localhost:4040/api/tunnels | python3 -c \\\n",
        "    'import sys, json; print(\"Execute the next cell and then go to the following URL: \" + json.load(sys.stdin)[\"tunnels\"][0][\"public_url\"])'"
      ],
      "metadata": {
        "id": "Czmnj-Z3INJz"
      },
      "execution_count": null,
      "outputs": []
    },
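    {
      "cell_type": "markdown",
      "source": [
        "The cell above drives the ngrok binary directly. Since `pyngrok` is installed anyway, the same tunnel can be opened from Python instead; a sketch (token placeholder as above):\n",
        "\n",
        "```python\n",
        "from pyngrok import ngrok\n",
        "\n",
        "ngrok.set_auth_token('INSERT YOUR NGROK API KEY HERE')\n",
        "# open an HTTP tunnel to the Streamlit port and print its public URL\n",
        "tunnel = ngrok.connect(8501)\n",
        "print(tunnel.public_url)\n",
        "```"
      ],
      "metadata": {}
    },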
    {
      "cell_type": "code",
      "source": [
        "!streamlit run ./chat_pdf_app.py"
      ],
      "metadata": {
        "id": "ivFCQABgITXJ"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "\n",
        "### About\n",
        "\n",
        "##### This notebook was created by AI Geek.\n",
        "\n",
        "##### [Here](https://github.com/aigeek0x0/rag-with-langchain-colbert-and-ragatouille) is the link to the GitHub repo.\n",
        "##### If you want to support my work, consider following me on [Twitter](https://twitter.com/aigeek__) and [Medium](https://medium.com/@aigeek_). You can also [buy me a coffee](https://www.buymeacoffee.com/aigeek_)."
      ],
      "metadata": {
        "id": "LdEDMCd1WUYH"
      }
    }
  ]
}