{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "78a5abbc",
   "metadata": {
    "id": "78a5abbc"
   },
   "source": [
    "# Multi Modal RAG\n",
    "\n",
    "- Author: [YooKyung Jeon](https://github.com/sirena1)\n",
    "- Peer Review:\n",
    "- Proofread : [Yun Eun](https://github.com/yuneun92)\n",
    "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n",
    "\n",
    "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/12-RAG/10-Multi_modal_RAG-GPT-4o.ipynb)[![Open in GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/12-RAG/10-Multi_modal_RAG-GPT-4o.ipynb)\n",
    "\n",
    "## Overview\n",
    "\n",
    "Many documents contain a mix of different content types, including text and images.\n",
    "\n",
    "However, in most RAG applications, the information contained in images is lost.\n",
    "\n",
    "With the advent of multimodal LLMs like GPT-4V and GPT-4o, it is worth considering how to utilize images in RAG:\n",
    "\n",
    "**Option 1:**\n",
    "\n",
    "- Use multimodal embedding (such as [CLIP](https://openai.com/research/clip)) to embed images and text.\n",
    "- Search for both using similarity search.\n",
    "- Pass the original image and text fragments to the multimodal LLM to synthesize the answers.\n",
    "\n",
    "**Option 2:**\n",
    "\n",
    "- Generate text summaries from images using multimodal LLMs (e.g. GPT-4V, GPT-4o, [LLaVA](https://llava-vl.github.io/), [FUYU-8b](https://www.adept.ai/blog/fuyu-8b)).\n",
    "- Embed and search for text.\n",
    "- Pass text fragments to LLMs to synthesize answers.\n",
    "\n",
    "**Option 3:**\n",
    "\n",
    "- Generate text summaries from images using multimodal LLMs (e.g. GPT-4V, GPT-4o, [LLaVA](https://llava-vl.github.io/), [FUYU-8b](https://www.adept.ai/blog/fuyu-8b)).\n",
    "- Embed and retrieve the image summary with a reference to the original image.\n",
    "- Pass the original image and text fragment to a multimodal LLM to synthesize answers.\n",
    "\n",
    "![graphic-01.png](./assets/10-multi_modal_rag-gpt-4o-graphic-01.png)\n",
    "\n",
    "### Table of Contents\n",
    "\n",
    "- [Overview](#overview)\n",
    "- [Environment Setup](#environment-setup)\n",
    "- [Package](#package)\n",
    "- [Data Loading](#data-loading)\n",
    "- [Multi-Vector Search Engine](#multi-vector-search-engine)\n",
    "- [RAG](#rag)\n",
    "\n",
    "### References\n",
    "\n",
    "- [ONNX](https://onnx.ai/)\n",
    "- [poppler](https://pdf2image.readthedocs.io/en/latest/installation.html)\n",
    "- [tesseract](https://tesseract-ocr.github.io/tessdoc/Installation.html)\n",
    "- [Unstructured](https://unstructured-io.github.io/unstructured/introduction.html#key-concepts)\n",
    "----"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cc00f6f8",
   "metadata": {
    "id": "cc00f6f8"
   },
   "source": [
    "## Environment Setup\n",
    "\n",
    "Set up the environment. You may refer to [Environment Setup](https://wikidocs.net/257836) for more details.\n",
    "\n",
    "**[Note]**\n",
    "- ```langchain-opentutorial``` is a package that provides easy-to-use environment setup, along with useful functions and utilities for tutorials.\n",
    "- You can check out [```langchain-opentutorial```](https://github.com/LangChain-OpenTutorial/langchain-opentutorial-pypi) for more details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "bad10aa2",
   "metadata": {
    "id": "bad10aa2"
   },
   "outputs": [],
   "source": [
    "%%capture --no-stderr\n",
    "%pip install langchain-opentutorial"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "59ad49e7",
   "metadata": {
    "id": "59ad49e7"
   },
   "outputs": [],
   "source": [
    "# Install required packages\n",
    "from langchain_opentutorial import package\n",
    "\n",
    "package.install(\n",
    "    [\n",
    "        \"langchain_text_splitters\",\n",
    "        \"langchain\",\n",
    "        \"langchain_core\",\n",
    "        \"langchain_openai\",\n",
    "        \"openai\",\n",
    "        \"chromadb\",\n",
    "        \"langchain-experimental\",\n",
    "        \"unstructured[all-docs]\",\n",
    "        \"pillow\",\n",
    "        \"pydantic\",\n",
    "        \"lxml\",\n",
    "        \"matplotlib\",\n",
    "        \"langchain-chroma\",\n",
    "        \"tiktoken\",\n",
    "        \"pytesseract\",\n",
    "        \"onnx==1.15.0\",\n",
    "        \"onnxruntime==1.17.0\"\n",
    "    ],\n",
    "    verbose=False,\n",
    "    upgrade=False,\n",
    ")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "1a1ca7e2",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "1a1ca7e2",
    "outputId": "b0091e03-8e4d-4442-ee70-a4c1c0dba59d"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Environment variables have been set successfully.\n"
     ]
    }
   ],
   "source": [
    "# Set environment variables\n",
    "from langchain_opentutorial import set_env\n",
    "\n",
    "set_env(\n",
    "    {\n",
    "        \"OPENAI_API_KEY\": \"\",\n",
    "        \"LANGCHAIN_API_KEY\": \"\",\n",
    "        \"LANGCHAIN_TRACING_V2\": \"true\",\n",
    "        \"LANGCHAIN_ENDPOINT\": \"https://api.smith.langchain.com\",\n",
    "        \"LANGCHAIN_PROJECT\": \"10-Multi_modal_RAG-GPT-4o\",\n",
    "    }\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "8c1535d4",
   "metadata": {
    "id": "8c1535d4",
    "outputId": "14348f74-2efc-4f05-c89e-1b470f91708f"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from dotenv import load_dotenv\n",
    "\n",
    "load_dotenv(override=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a9e2b0d8",
   "metadata": {
    "id": "a9e2b0d8"
   },
   "source": [
    "## Package\n",
    "\n",
    "To use ```unstructured```, the system requires ```poppler``` ([Installation Guide](https://pdf2image.readthedocs.io/en/latest/installation.html)) and ```tesseract``` ([Installation Guide](https://tesseract-ocr.github.io/tessdoc/Installation.html)).\n",
    "\n",
    "**[Note]** **Option 2** is suitable when multimodal LLMs cannot be used for answer synthesis (e.g., due to cost or other limitations).\n"
   ]
  },
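  {
   "cell_type": "markdown",
   "id": "sysdeps-note",
   "metadata": {
    "id": "sysdeps-note"
   },
   "source": [
    "As an illustration (exact package names vary by platform), the two system dependencies can be installed on Debian/Ubuntu with:\n",
    "\n",
    "```shell\n",
    "# Install poppler (PDF rendering) and tesseract (OCR), required by unstructured\n",
    "sudo apt-get update\n",
    "sudo apt-get install -y poppler-utils tesseract-ocr\n",
    "```\n",
    "\n",
    "On macOS with Homebrew, the equivalent is ```brew install poppler tesseract```."
   ]
  },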
  {
   "cell_type": "markdown",
   "id": "7953991d",
   "metadata": {
    "id": "7953991d"
   },
   "source": [
    "## Data Loading\n",
    "\n",
    "Before processing PDFs, it's essential to distinguish between text and images for accurate extraction.\n",
    "\n",
    "### Splitting PDF Text and Images\n",
    "\n",
    "Using ```partition_pdf``` provided by [Unstructured](https://unstructured-io.github.io/unstructured/introduction.html#key-concepts), you can extract text and images.\n",
    "\n",
    "To extract images, use the following:\n",
    "\n",
    "```extract_images_in_pdf=True```\n",
    "\n",
    "If you want to process only text:\n",
    "\n",
    "```extract_images_in_pdf=False```\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "3ef44e3d",
   "metadata": {
    "id": "3ef44e3d"
   },
   "outputs": [],
   "source": [
    "# file path\n",
    "fpath = \"data/\"\n",
    "fname = \"sample.pdf\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "6154cf00",
   "metadata": {
    "id": "6154cf00"
   },
   "outputs": [],
   "source": [
    "import os\n",
    "from langchain_text_splitters import CharacterTextSplitter\n",
    "from unstructured.partition.pdf import partition_pdf\n",
    "\n",
    "# Extracting Elements from a PDF\n",
    "def extract_pdf_elements(path, fname):\n",
    "    \"\"\"\n",
    "    Extract images, tables, and text snippets from a PDF file.\n",
    "    path: Directory containing the PDF; extracted images (.jpg) are saved here\n",
    "    fname: PDF file name\n",
    "    \"\"\"\n",
    "    return partition_pdf(\n",
    "        filename=os.path.join(path, fname),\n",
    "        extract_images_in_pdf=True,         # Enable image extraction in PDFs\n",
    "        infer_table_structure=True,         # Enable table structure inference\n",
    "        chunking_strategy=\"by_title\",       # Fragmenting text by title\n",
    "        max_characters=4000,                # Maximum character count\n",
    "        new_after_n_chars=3800,             # Create new fragments after this number of characters\n",
    "        combine_text_under_n_chars=2000,    # Combine chunks shorter than this character count\n",
    "        image_output_dir_path=path,         # Path to image output directory\n",
    "    )\n",
    "\n",
    "\n",
    "# Categorize elements by type\n",
    "def categorize_elements(raw_pdf_elements):\n",
    "    \"\"\"\n",
    "    Categorize elements extracted from a PDF into tables and text.\n",
    "    raw_pdf_elements: list of unstructured.documents.elements\n",
    "    \"\"\"\n",
    "    tables = []     # Table Save List\n",
    "    texts = []      # Save Text List\n",
    "    for element in raw_pdf_elements:\n",
    "        if \"unstructured.documents.elements.Table\" in str(type(element)):\n",
    "            tables.append(str(element))  # Add table elements\n",
    "        elif \"unstructured.documents.elements.CompositeElement\" in str(type(element)):\n",
    "            texts.append(str(element))  # Add text elements\n",
    "    return texts, tables\n",
    "\n",
    "\n",
    "# Extract elements\n",
    "raw_pdf_elements = extract_pdf_elements(fpath, fname)\n",
    "\n",
    "# Extract text, tables\n",
    "texts, tables = categorize_elements(raw_pdf_elements)\n",
    "\n",
    "# Optional: Enforce a specific token size for text\n",
    "text_splitter = CharacterTextSplitter.from_tiktoken_encoder(\n",
    "    chunk_size=4000, chunk_overlap=0  # Split text into 4000 token size, no duplicates\n",
    ")\n",
    "joined_texts = \" \".join(texts)  # Combine text\n",
    "texts_4k_token = text_splitter.split_text(joined_texts)  # Execute a split"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "04d5d147",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "04d5d147",
    "outputId": "46d8558f-194f-4856-91ad-d59f460c9641"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "1"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(texts_4k_token)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0a76bcdf",
   "metadata": {
    "id": "0a76bcdf"
   },
   "source": [
    "## Multi-Vector Search Engine\n",
    "\n",
    "Using the [multi-vector-retriever](https://python.langchain.com/docs/how_to/multi_vector/), you can index summaries of images (and/or text, tables) while retrieving the original images (along with the original text or tables).\n",
    "\n",
    "### Text and Table Summarization\n",
    "\n",
    "To generate summaries for tables and, optionally, text, we will use ```GPT-4```.\n",
    "\n",
    "If you are working with large chunk sizes (e.g., 4k token chunks as set above), text summarization is recommended.\n",
    "\n",
    "The summaries are used for retrieving the original tables and/or original text chunks.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "b389203c",
   "metadata": {
    "id": "b389203c"
   },
   "outputs": [],
   "source": [
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "# Create a summary of a text element\n",
    "def generate_text_summaries(texts, tables, summarize_texts=False):\n",
    "    \"\"\"\n",
    "    Text element summary\n",
    "    texts: List of strings\n",
    "    tables: List of strings\n",
    "    summarize_texts: Determines whether to summarize texts. True/False\n",
    "    \"\"\"\n",
    "\n",
    "    # Setting the prompt\n",
    "    prompt_text = \"\"\"You are an assistant tasked with summarizing tables and text for retrieval. \\\n",
    "    These summaries will be embedded and used to retrieve the raw text or table elements. \\\n",
    "    Give a concise summary of the table or text that is well optimized for retrieval. Table or text: {element} \"\"\"\n",
    "    prompt = ChatPromptTemplate.from_template(prompt_text)\n",
    "\n",
    "    # Text summary chain\n",
    "    model = ChatOpenAI(temperature=0, model=\"gpt-4\")\n",
    "    summarize_chain = {\"element\": lambda x: x} | prompt | model | StrOutputParser()\n",
    "\n",
    "    # Initializing an empty list for summaries\n",
    "    text_summaries = []\n",
    "    table_summaries = []\n",
    "\n",
    "    # Apply when a summary is requested for the provided text\n",
    "    if texts and summarize_texts:\n",
    "        text_summaries = summarize_chain.batch(texts, {\"max_concurrency\": 5})\n",
    "    elif texts:\n",
    "        text_summaries = texts\n",
    "\n",
    "    # Apply to the provided table\n",
    "    if tables:\n",
    "        table_summaries = summarize_chain.batch(tables, {\"max_concurrency\": 5})\n",
    "\n",
    "    return text_summaries, table_summaries\n",
    "\n",
    "\n",
    "# Get text, table summaries\n",
    "text_summaries, table_summaries = generate_text_summaries(\n",
    "    texts_4k_token, tables, summarize_texts=True\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7bf8b1c4",
   "metadata": {
    "id": "7bf8b1c4"
   },
   "source": [
    "### Image Summarization\n",
    "\n",
    "We will use ```GPT-4o``` to generate summaries for images.\n",
    "\n",
    "- The images are passed as base64-encoded data.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "0a9049dc",
   "metadata": {
    "id": "0a9049dc"
   },
   "outputs": [],
   "source": [
    "import base64\n",
    "import os\n",
    "\n",
    "from langchain_core.messages import HumanMessage\n",
    "\n",
    "\n",
    "def encode_image(image_path):\n",
    "    # Encode the image file as a base64 string.\n",
    "    with open(image_path, \"rb\") as image_file:\n",
    "        return base64.b64encode(image_file.read()).decode(\"utf-8\")\n",
    "\n",
    "\n",
    "def image_summarize(img_base64, prompt):\n",
    "    # Generate an image summary.\n",
    "    chat = ChatOpenAI(model=\"gpt-4o\", max_tokens=2048)\n",
    "\n",
    "    msg = chat.invoke(\n",
    "        [\n",
    "            HumanMessage(\n",
    "                content=[\n",
    "                    {\"type\": \"text\", \"text\": prompt},\n",
    "                    {\n",
    "                        \"type\": \"image_url\",\n",
    "                        \"image_url\": {\"url\": f\"data:image/jpeg;base64,{img_base64}\"},\n",
    "                    },\n",
    "                ]\n",
    "            )\n",
    "        ]\n",
    "    )\n",
    "    return msg.content\n",
    "\n",
    "\n",
    "def generate_img_summaries(path):\n",
    "    \"\"\"\n",
    "    Generate summaries and base64-encoded strings for the extracted images.\n",
    "    path: Directory containing the image files (.png) extracted by Unstructured.\n",
    "    \"\"\"\n",
    "\n",
    "    # A list to store base64-encoded images in\n",
    "    img_base64_list = []\n",
    "\n",
    "    # List to save image summaries\n",
    "    image_summaries = []\n",
    "\n",
    "    # Prompt for summarizing\n",
    "    prompt = \"\"\"You are an assistant tasked with summarizing images for retrieval. \\\n",
    "    These summaries will be embedded and used to retrieve the raw image. \\\n",
    "    Give a concise summary of the image that is well optimized for retrieval.\"\"\"\n",
    "\n",
    "    # Apply to images\n",
    "    for img_file in sorted(os.listdir(path)):\n",
    "        if img_file.startswith(\"10-\") and img_file.endswith(\".png\"):\n",
    "            img_path = os.path.join(path, img_file)\n",
    "            base64_image = encode_image(img_path)\n",
    "            img_base64_list.append(base64_image)\n",
    "            image_summaries.append(image_summarize(base64_image, prompt))\n",
    "\n",
    "    return img_base64_list, image_summaries\n",
    "\n",
    "\n",
    "# Run an image summary\n",
    "img_base64_list, image_summaries = generate_img_summaries('assets/')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "4c81edc5",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "4c81edc5",
    "outputId": "a2963b86-5e49-430a-c01c-0741f1da268e"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "6"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(image_summaries)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2a64ac63",
   "metadata": {
    "id": "2a64ac63"
   },
   "source": [
    "### Adding to the Vector Store\n",
    "\n",
    "To add the original documents and their summaries to the [Multi Vector Retriever](https://python.langchain.com/docs/how_to/multi_vector/):\n",
    "\n",
    "- Store the original text, tables, and images in the ```docstore```.\n",
    "- Save text summaries, table summaries, and image summaries in the ```vectorstore``` for efficient semantic search.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "06edcd4e",
   "metadata": {
    "id": "06edcd4e"
   },
   "source": [
    "The multi-vector search engine for indexing and retrieving various data types (text, tables, images) is created as follows:\n",
    "\n",
    "- Initialize the storage layer using ```InMemoryStore```.\n",
    "- Create a ```MultiVectorRetriever``` to index summarized data but configure it to return the original text or images.\n",
    "- Include the process of adding summaries and original data for each data type (text, tables, images) to the ```vectorstore``` and ```docstore```:\n",
    "  - Generate a unique ```doc_id``` for each document.\n",
    "  - Add the summarized data to the ```vectorstore``` and store the original data along with the ```doc_id``` in the ```docstore```.\n",
    "- Check conditions to ensure that only non-empty summaries are added for each data type.\n",
    "- Use the ```Chroma``` vector store to index summaries and generate embeddings using the ```OpenAIEmbeddings``` function.\n",
    "- The resulting multi-vector search engine indexes summaries for various data types and ensures that original data is returned during searches."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "bb248cfe",
   "metadata": {
    "id": "bb248cfe"
   },
   "outputs": [],
   "source": [
    "import uuid\n",
    "\n",
    "from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
    "from langchain.storage import InMemoryStore\n",
    "from langchain_chroma import Chroma\n",
    "from langchain_core.documents import Document\n",
    "from langchain_openai import OpenAIEmbeddings\n",
    "\n",
    "\n",
    "def create_multi_vector_retriever(\n",
    "    vectorstore, text_summaries, texts, table_summaries, tables, image_summaries, images\n",
    "):\n",
    "    \"\"\"\n",
    "    Create a retriever that indexes the summary but returns the original image or text.\n",
    "    \"\"\"\n",
    "\n",
    "    # Initialize the storage tier\n",
    "    store = InMemoryStore()\n",
    "    id_key = \"doc_id\"\n",
    "\n",
    "    # Create a multi-vector retriever\n",
    "    retriever = MultiVectorRetriever(\n",
    "        vectorstore=vectorstore,\n",
    "        docstore=store,\n",
    "        id_key=id_key,\n",
    "    )\n",
    "\n",
    "    # Helper function for adding documents to vector store and document store\n",
    "    def add_documents(retriever, doc_summaries, doc_contents):\n",
    "        doc_ids = [\n",
    "            str(uuid.uuid4()) for _ in doc_contents\n",
    "        ]  # Create a unique ID for each document content\n",
    "        summary_docs = [\n",
    "            Document(page_content=s, metadata={id_key: doc_ids[i]})\n",
    "            for i, s in enumerate(doc_summaries)\n",
    "        ]\n",
    "        retriever.vectorstore.add_documents(\n",
    "            summary_docs\n",
    "        )  # Add a summary document to a vector store\n",
    "        retriever.docstore.mset(\n",
    "            list(zip(doc_ids, doc_contents))\n",
    "        )  # Add a document content to a document store\n",
    "\n",
    "    # Add text, tables, and images\n",
    "    if text_summaries:\n",
    "        add_documents(retriever, text_summaries, texts)\n",
    "\n",
    "    if table_summaries:\n",
    "        add_documents(retriever, table_summaries, tables)\n",
    "\n",
    "    if image_summaries:\n",
    "        add_documents(retriever, image_summaries, images)\n",
    "\n",
    "    return retriever"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "a776ae5f",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "a776ae5f",
    "outputId": "514548c1-bbe1-4510-ec0a-335503a35feb"
   },
   "outputs": [],
   "source": [
    "# Vector store to use for indexing summaries\n",
    "vectorstore = Chroma(\n",
    "    persist_directory=\"sample-rag-multi-modal\", embedding_function=OpenAIEmbeddings()\n",
    ")\n",
    "\n",
    "# Create a retriever\n",
    "retriever_multi_vector_img = create_multi_vector_retriever(\n",
    "    vectorstore,\n",
    "    text_summaries,\n",
    "    texts,\n",
    "    table_summaries,\n",
    "    tables,\n",
    "    image_summaries,\n",
    "    img_base64_list,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c8174433",
   "metadata": {
    "id": "c8174433"
   },
   "source": [
    "## RAG\n",
    "\n",
    "Effectively retrieving relevant documents is a crucial step in enhancing response accuracy.\n",
    "\n",
    "### Building the Retriever\n",
    "\n",
    "The retrieved documents must be assigned to the correct sections of the ```GPT-4o``` prompt template.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "62ad377f",
   "metadata": {
    "id": "62ad377f"
   },
   "source": [
    "The following describes how to process Base64-encoded images and text and use them to construct a multimodal question-answering (QA) chain:\n",
    "\n",
    "- Verify if a Base64-encoded string is an image. Supported image formats include JPG, PNG, GIF, and WEBP.\n",
    "- Resize the Base64-encoded image to the given dimensions.\n",
    "- Separate Base64-encoded images and text from a document set.\n",
    "- Use the separated images and text to construct messages that will serve as inputs to the multimodal QA chain. This process involves creating messages that include image URLs and text information.\n",
    "- Construct the multimodal QA chain. This chain generates responses to questions based on the provided image and text information. The model used is ```ChatOpenAI```, specifically the ```gpt-4o``` model.\n",
    "\n",
    "This process outlines the implementation of a multimodal QA system that leverages both image and text data to generate responses to questions. It includes Base64 encoding and decoding for image data, image resizing, and the integration of image and text information to produce responses.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "8ac7c5f9",
   "metadata": {
    "id": "8ac7c5f9"
   },
   "outputs": [],
   "source": [
    "import io\n",
    "import re\n",
    "\n",
    "from IPython.display import HTML, display\n",
    "from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
    "from PIL import Image\n",
    "\n",
    "\n",
    "def plt_img_base64(img_base64):\n",
    "    \"\"\"Display base64 encoded strings as image\"\"\"\n",
    "    # Create an HTML img tag that uses a base64 string as its source\n",
    "    image_html = f'<img src=\"data:image/jpeg;base64,{img_base64}\" />'\n",
    "    # Rendering HTML to display images\n",
    "    display(HTML(image_html))\n",
    "\n",
    "\n",
    "def looks_like_base64(sb):\n",
    "    \"\"\"Check if the string appears to be Base64\"\"\"\n",
    "    return re.match(\"^[A-Za-z0-9+/]+[=]{0,2}$\", sb) is not None\n",
    "\n",
    "\n",
    "def is_image_data(b64data):\n",
    "    \"\"\"\n",
    "    Check if the Base64 data is an image by inspecting the beginning\n",
    "    \"\"\"\n",
    "    image_signatures = {\n",
    "        b\"\\xff\\xd8\\xff\": \"jpg\",\n",
    "        b\"\\x89\\x50\\x4e\\x47\\x0d\\x0a\\x1a\\x0a\": \"png\",\n",
    "        b\"\\x47\\x49\\x46\\x38\": \"gif\",\n",
    "        b\"\\x52\\x49\\x46\\x46\": \"webp\",  # RIFF container header; the WEBP marker itself sits at bytes 8-11\n",
    "    }\n",
    "    try:\n",
    "        header = base64.b64decode(b64data)[:8]  # Decode and retrieve the first 8 bytes\n",
    "        for sig, format in image_signatures.items():\n",
    "            if header.startswith(sig):\n",
    "                return True\n",
    "        return False\n",
    "    except Exception:\n",
    "        return False\n",
    "\n",
    "\n",
    "def resize_base64_image(base64_string, size=(128, 128)):\n",
    "    \"\"\"\n",
    "    Resizing an image encoded as a Base64 string\n",
    "    \"\"\"\n",
    "    # Decode Base64 strings\n",
    "    img_data = base64.b64decode(base64_string)\n",
    "    img = Image.open(io.BytesIO(img_data))\n",
    "\n",
    "    # Resize an image\n",
    "    resized_img = img.resize(size, Image.LANCZOS)\n",
    "\n",
    "    # Save the adjusted image to a byte buffer\n",
    "    buffered = io.BytesIO()\n",
    "    resized_img.save(buffered, format=img.format)\n",
    "\n",
    "    # Encoding adjusted images to Base64\n",
    "    return base64.b64encode(buffered.getvalue()).decode(\"utf-8\")\n",
    "\n",
    "\n",
    "def split_image_text_types(docs):\n",
    "    \"\"\"\n",
    "    Separate base64-encoded images and text\n",
    "    \"\"\"\n",
    "    b64_images = []\n",
    "    texts = []\n",
    "    for doc in docs:\n",
    "        # Extract page_content if the document is of type Document\n",
    "        if isinstance(doc, Document):\n",
    "            doc = doc.page_content\n",
    "        if looks_like_base64(doc) and is_image_data(doc):\n",
    "            doc = resize_base64_image(doc, size=(1300, 600))\n",
    "            b64_images.append(doc)\n",
    "        else:\n",
    "            texts.append(doc)\n",
    "    return {\"images\": b64_images, \"texts\": texts}\n",
    "\n",
    "\n",
    "def img_prompt_func(data_dict):\n",
    "    \"\"\"\n",
    "    Combine contexts into a single string\n",
    "    \"\"\"\n",
    "    formatted_texts = \"\\n\".join(data_dict[\"context\"][\"texts\"])\n",
    "    messages = []\n",
    "\n",
    "    # If you have an image, add it to the message\n",
    "    if data_dict[\"context\"][\"images\"]:\n",
    "        for image in data_dict[\"context\"][\"images\"]:\n",
    "            image_message = {\n",
    "                \"type\": \"image_url\",\n",
    "                \"image_url\": {\"url\": f\"data:image/jpeg;base64,{image}\"},\n",
    "            }\n",
    "            messages.append(image_message)\n",
    "\n",
    "    # Add text for analysis\n",
    "    text_message = {\n",
    "        \"type\": \"text\",\n",
    "        \"text\": (\n",
    "            \"You are a financial analyst tasked with providing investment advice.\\n\"\n",
    "            \"You will be given a mix of text, tables, and image(s), usually of charts or graphs.\\n\"\n",
    "            \"Use this information to provide investment advice related to the user question. Answer in English. Do NOT translate company names.\\n\"\n",
    "            f\"User-provided question: {data_dict['question']}\\n\\n\"\n",
    "            \"Text and / or tables:\\n\"\n",
    "            f\"{formatted_texts}\"\n",
    "        ),\n",
    "    }\n",
    "    messages.append(text_message)\n",
    "    return [HumanMessage(content=messages)]\n",
    "\n",
    "\n",
    "def multi_modal_rag_chain(retriever):\n",
    "    \"\"\"\n",
    "    Multimodal RAG Chains\n",
    "    \"\"\"\n",
    "\n",
    "    # Multimodal LLM\n",
    "    model = ChatOpenAI(temperature=0, model=\"gpt-4o\", max_tokens=2048)\n",
    "\n",
    "    # RAG Pipeline\n",
    "    chain = (\n",
    "        {\n",
    "            \"context\": retriever | RunnableLambda(split_image_text_types),\n",
    "            \"question\": RunnablePassthrough(),\n",
    "        }\n",
    "        | RunnableLambda(img_prompt_func)\n",
    "        | model\n",
    "        | StrOutputParser()\n",
    "    )\n",
    "\n",
    "    return chain\n",
    "\n",
    "\n",
    "# Create a RAG chain\n",
    "chain_multimodal_rag = multi_modal_rag_chain(retriever_multi_vector_img)"
   ]
  },
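  {
   "cell_type": "markdown",
   "id": "b64-check-demo",
   "metadata": {
    "id": "b64-check-demo"
   },
   "source": [
    "As a quick, self-contained sanity check of the Base64 image-detection heuristic used above (the helpers are re-implemented here so the snippet runs on its own, without the notebook state):\n",
    "\n",
    "```python\n",
    "import base64\n",
    "import re\n",
    "\n",
    "\n",
    "def looks_like_base64(sb):\n",
    "    # Heuristic: only Base64-alphabet characters, optionally padded with \"=\"\n",
    "    return re.match(\"^[A-Za-z0-9+/]+[=]{0,2}$\", sb) is not None\n",
    "\n",
    "\n",
    "def is_image_data(b64data):\n",
    "    # Compare the decoded header against common image magic numbers\n",
    "    signatures = [b\"\\xff\\xd8\\xff\", b\"\\x89PNG\\r\\n\\x1a\\n\", b\"GIF8\", b\"RIFF\"]\n",
    "    try:\n",
    "        header = base64.b64decode(b64data)[:8]\n",
    "        return any(header.startswith(sig) for sig in signatures)\n",
    "    except Exception:\n",
    "        return False\n",
    "\n",
    "\n",
    "# A PNG magic number alone is enough to trigger detection\n",
    "png_b64 = base64.b64encode(b\"\\x89PNG\\r\\n\\x1a\\n\" + b\"\\x00\" * 8).decode()\n",
    "print(looks_like_base64(png_b64), is_image_data(png_b64))  # True True\n",
    "print(is_image_data(base64.b64encode(b\"plain text\").decode()))  # False\n",
    "```"
   ]
  },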
  {
   "cell_type": "markdown",
   "id": "6ebb7c71",
   "metadata": {
    "id": "6ebb7c71"
   },
   "source": [
    "### Verification\n",
    "\n",
    "When we search for images related to a question, we receive relevant images in return.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "fffd119c",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "fffd119c",
    "outputId": "0c4676e9-80da-4e4e-e02d-38d932294682"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "4"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Execute the search query.\n",
    "query = \"Please provide the names of companies that are interesting investment opportunities based on EV/NTM and NTM revenue growth rates. Do you consider the EV/NTM multiple and historical data?\"\n",
    "\n",
    "# Search for 6 documents related to the query.\n",
    "docs = retriever_multi_vector_img.invoke(query, limit=6)\n",
    "\n",
    "# Check the number of documents.\n",
    "len(docs)  # Return the number of retrieved documents."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "ee06dbb4",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "ee06dbb4",
    "outputId": "4088fab7-1a0d-4b67-f1be-44d8b886a03f"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "4"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Verify the search results\n",
    "query = \"What are the EV/NTM and NTM revenue growth rates for MongoDB, Cloudflare, and Datadog?\"\n",
    "docs = retriever_multi_vector_img.invoke(query)\n",
    "\n",
    "# Check the number of documents\n",
    "len(docs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "a97f817c",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 730
    },
    "id": "a97f817c",
    "outputId": "8a34967b-3122-4c10-b5ae-65dff2f1545b"
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<img src=\"\" />"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Display one of the relevant retrieved images.\n",
    "plt_img_base64(docs[2])"
   ]
  },
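  {
   "cell_type": "markdown",
   "id": "docs-image-check-sketch",
   "metadata": {},
   "source": [
    "Since the retriever can return both raw Base64 images and plain text chunks, it can help to check which of the retrieved ```docs``` are actually images. The helper below is a hypothetical sketch (the function name and magic-byte heuristic are illustrative, not part of this tutorial's codebase):\n",
    "\n",
    "```python\n",
    "import base64\n",
    "\n",
    "\n",
    "def is_base64_image(data: str) -> bool:\n",
    "    \"\"\"Heuristically detect whether a string is a Base64-encoded image.\"\"\"\n",
    "    # Magic-byte prefixes for JPEG, PNG, and GIF files.\n",
    "    signatures = (b\"\\xff\\xd8\\xff\", b\"\\x89PNG\", b\"GIF8\")\n",
    "    try:\n",
    "        decoded = base64.b64decode(data, validate=True)\n",
    "    except Exception:\n",
    "        return False\n",
    "    return decoded.startswith(signatures)\n",
    "\n",
    "\n",
    "# Example: count how many retrieved documents are images.\n",
    "# n_images = sum(is_base64_image(d) for d in docs)\n",
    "```\n"
   ]
  },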
  {
   "cell_type": "markdown",
   "id": "99b56aad",
   "metadata": {
    "id": "99b56aad"
   },
   "source": [
    "### Verification\n",
    "\n",
    "Let’s revisit the images we stored to understand why this works.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "7d420c81",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 730
    },
    "id": "7d420c81",
    "outputId": "91595cbf-f4ac-4ae2-e7d2-1227a7a6135b"
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<img src=\"\" />"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Display the image at index 2 of `img_base64_list` (a Base64-encoded string).\n",
    "plt_img_base64(img_base64_list[2])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "be761614",
   "metadata": {
    "id": "be761614"
   },
   "source": [
    "Here is the corresponding summary, which we embedded for similarity search.\n",
    "\n",
    "It makes sense that this image was retrieved: its summary closely matches our ```query```.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "afe0c6ec",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 72
    },
    "id": "afe0c6ec",
    "outputId": "3342e50f-a11e-41aa-d32a-0a418b5afaa0"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Table comparing key financial metrics of ten companies: EV/NTM Rev, EV/2024 Rev, EV/NTM FCF, NTM Rev Growth, Gross Margin, Operating Margin, FCF Margin, and % in Top 10 Multiple LTM. Companies include Snowflake, MongoDB, Palantir, and others. Average and median values are highlighted. Published by Altimeter.'"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "image_summaries[2]  # Access the element at index 2 of the `image_summaries` list."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b4af0949",
   "metadata": {
    "id": "b4af0949"
   },
   "source": [
    "### RAG\n",
    "\n",
    "Now, let's run RAG and test its ability to synthesize answers to our questions.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "e8a1de99",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "e8a1de99",
    "outputId": "cc721403-7ce1-4384-9131-e4918a21a4db"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Based on the provided data:\n",
      "\n",
      "- **MongoDB**:\n",
      "  - EV/NTM Revenue: 14.6x\n",
      "  - NTM Revenue Growth: 17%\n",
      "\n",
      "- **Cloudflare**:\n",
      "  - EV/NTM Revenue: 13.4x\n",
      "  - NTM Revenue Growth: 28%\n",
      "\n",
      "- **Datadog**:\n",
      "  - EV/NTM Revenue: 13.1x\n",
      "  - NTM Revenue Growth: 19%\n"
     ]
    }
   ],
   "source": [
    "# Execute the RAG chain.\n",
    "print(chain_multimodal_rag.invoke(query))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f9c125bf",
   "metadata": {
    "id": "f9c125bf"
   },
   "source": [
    "### Considerations\n",
    "\n",
    "**Search**\n",
    "\n",
    "- Retrieval is driven by the similarity between the query and the embedded image summaries (alongside any text chunks).\n",
    "- An image can be missed if less relevant text chunks happen to score higher than its summary, so inspect the retrieved documents carefully.\n",
    "\n",
    "**Image Size**\n",
    "\n",
    "- The quality of answer synthesis appears to be sensitive to image size, as stated in the [guidelines](https://platform.openai.com/docs/guides/vision).\n"
   ]
  }
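  ,
  {
   "cell_type": "markdown",
   "id": "image-resize-sketch",
   "metadata": {},
   "source": [
    "One mitigation is to downscale images before Base64-encoding them. The sketch below uses Pillow; the ```resize_base64_image``` name and the 768-pixel default are illustrative assumptions, not code from this tutorial:\n",
    "\n",
    "```python\n",
    "import base64\n",
    "import io\n",
    "\n",
    "from PIL import Image\n",
    "\n",
    "\n",
    "def resize_base64_image(b64: str, max_side: int = 768) -> str:\n",
    "    \"\"\"Downscale a Base64 image so its longest side is at most max_side.\"\"\"\n",
    "    img = Image.open(io.BytesIO(base64.b64decode(b64)))\n",
    "    img.thumbnail((max_side, max_side))  # in place; keeps the aspect ratio\n",
    "    buf = io.BytesIO()\n",
    "    img.convert(\"RGB\").save(buf, format=\"JPEG\")\n",
    "    return base64.b64encode(buf.getvalue()).decode()\n",
    "```\n"
   ]
  }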
 ],
 "metadata": {
  "accelerator": "GPU",
  "colab": {
   "gpuType": "T4",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "langchain-opentutorial-HnsOYzrZ-py3.10",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
