{"cells":[{"cell_type":"markdown","id":"9fc3897d-176f-4729-8fd1-cfb4add53abd","metadata":{},"source":["## Multimodal RAG with VDMS\n","\n","Many documents contain a mixture of content types, including text and images.\n","\n","Yet, in most RAG applications, the information captured in images is lost.\n","\n","With the emergence of multimodal LLMs, like [GPT-4V](https://openai.com/research/gpt-4v-system-card), it is worth considering how to utilize images in RAG.\n","\n","This tutorial focuses on:\n","* Using [Unstructured](https://unstructured.io/) to parse images, text, and tables from documents (PDFs).\n","* Using multimodal embeddings (such as [CLIP](https://openai.com/research/clip)) to embed images and text.\n","* Using [VDMS](https://github.com/IntelLabs/vdms/blob/master/README.md) as a vector store with multimodal support.\n","* Retrieving both images and text using similarity search.\n","* Passing the raw images and text chunks to a multimodal LLM for answer synthesis.\n","\n","## Packages\n","\n","For `unstructured`, you will also need `poppler` ([installation instructions](https://pdf2image.readthedocs.io/en/latest/installation.html)) and `tesseract` ([installation instructions](https://tesseract-ocr.github.io/tessdoc/Installation.html)) installed on your system."]},{"cell_type":"code","execution_count":1,"id":"febbc459-ebba-4c1a-a52b-fed7731593f8","metadata":{},"outputs":[],"source":["# Install the latest versions of vdms and langchain-experimental, required for multimodal support\n","! pip install --quiet -U vdms langchain-experimental\n","\n","# Pinned to version 0.10.19 due to a persistent bug in more recent versions\n","! pip install --quiet pdf2image \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml open_clip_torch"]},{"cell_type":"markdown","id":"6a6b6e73","metadata":{},"source":["## Start the VDMS Server\n","\n","Let's start a VDMS docker container using port 55559 instead of the default 55555.\n","Make a note of the port and hostname, as the vector store needs them to connect to the server via the VDMS Python client."]},{"cell_type":"code","execution_count":3,"id":"5f483872","metadata":{},"outputs":[],"source":["# Run the Docker container\n","! 
docker run --rm -d -p 55559:55555 --name vdms_rag_nb intellabs/vdms:latest\n","\n","# Import the VDMS_Client class\n","from langchain_community.vectorstores.vdms import VDMS_Client\n","\n","# Create a VDMS_Client instance connected to port 55559\n","vdms_client = VDMS_Client(port=55559)"]},{"cell_type":"code","execution_count":null,"id":"78ac6543","metadata":{},"outputs":[],"source":["# Import the load_dotenv and find_dotenv functions\n","# from dotenv import load_dotenv, find_dotenv\n","# Locate the .env file with find_dotenv and load the environment variables it defines with load_dotenv\n","# load_dotenv(find_dotenv(), override=True);"]},{"cell_type":"markdown","id":"1e94b3fb-8e3e-4736-be0a-ad881626c7bd","metadata":{},"source":["## Data Loading\n","\n","### Partition PDF text and images\n","\n","Let's look at an example PDF containing interesting images.\n","\n","Famous photographs from the Library of Congress:\n","\n","* https://www.loc.gov/lcm/pdf/LCM_2020_1112.pdf\n","* We'll use this as an example below\n","\n","We can use the `partition_pdf` function below from [Unstructured](https://unstructured-io.github.io/unstructured/introduction.html#key-concepts) to extract text and images."]},{"cell_type":"code","execution_count":4,"id":"9646b524-71a7-4b2a-bdc8-0b81f77e968f","metadata":{},"outputs":[],"source":["from pathlib import Path\n","\n","import requests\n","\n","# Folder to hold the PDF and extracted images\n","datapath = Path(\"./multimodal_files\").resolve()  # Resolve \"./multimodal_files\" to an absolute path\n","datapath.mkdir(parents=True, exist_ok=True)  # Create the folder; no error if it already exists\n","\n","pdf_url = \"https://www.loc.gov/lcm/pdf/LCM_2020_1112.pdf\"  # URL of the PDF file\n","pdf_path = str(datapath / pdf_url.split(\"/\")[-1])  # Build the local path from the last segment of the URL\n","with open(pdf_path, \"wb\") as f:  # Open the local PDF file in binary write mode\n","    f.write(requests.get(pdf_url).content)  # Download the PDF and write it to disk"]},{"cell_type":"code","execution_count":5,"id":"bc4839c0-8773-4a07-ba59-5364501269b2","metadata":{},"outputs":[],"source":["# Extract images, tables, and chunk text\n","from unstructured.partition.pdf import partition_pdf\n","\n","raw_pdf_elements = partition_pdf(\n","    filename=pdf_path,  # Path of the PDF to process\n","    extract_images_in_pdf=True, 
 # Extract images from the PDF\n","    infer_table_structure=True,  # Infer table structure\n","    chunking_strategy=\"by_title\",  # Chunking strategy: split by title\n","    max_characters=4000,  # Maximum characters per chunk\n","    new_after_n_chars=3800,  # Start a new chunk after 3800 characters\n","    combine_text_under_n_chars=2000,  # Combine text chunks shorter than 2000 characters\n","    image_output_dir_path=datapath,  # Output directory for extracted images\n",")\n","\n","datapath = str(datapath)  # Convert datapath to a string"]},{"cell_type":"code","execution_count":6,"id":"969545ad","metadata":{},"outputs":[],"source":["# Categorize the extracted elements by type\n","tables = []  # List for table elements\n","texts = []  # List for text elements\n","\n","for element in raw_pdf_elements:  # Iterate over the raw PDF elements\n","    if \"unstructured.documents.elements.Table\" in str(type(element)):  # Table element\n","        tables.append(str(element))  # Store the table as a string\n","    elif \"unstructured.documents.elements.CompositeElement\" in str(type(element)):  # Composite (chunked text) element\n","        texts.append(str(element))  # Store the text as a string"]},{"cell_type":"markdown","id":"5d8e6349-1547-4cbf-9c6f-491d8610ec10","metadata":{},"source":["## Multimodal Embeddings with Our Documents\n","\n","We will use [OpenClip multimodal embeddings](https://python.langchain.com/docs/integrations/text_embedding/open_clip).\n","\n","We use a larger model for better performance (set in `langchain_experimental.open_clip.py`).\n","\n","```\n","model_name = \"ViT-g-14\"\n","checkpoint = \"laion2b_s34b_b88k\"\n","```"]},{"cell_type":"code","execution_count":7,"id":"4bc15842-cb95-4f84-9eb5-656b0282a800","metadata":{},"outputs":[],"source":["import os\n","\n","from langchain_community.vectorstores import VDMS\n","from langchain_experimental.open_clip import OpenCLIPEmbeddings\n","\n","# Create the VDMS vector store\n","vectorstore = VDMS(\n","    client=vdms_client,\n","    collection_name=\"mm_rag_clip_photos\",\n","    embedding_function=OpenCLIPEmbeddings(\n","        model_name=\"ViT-g-14\", checkpoint=\"laion2b_s34b_b88k\"\n","    ),\n",")\n","\n","# Get image URIs with .jpg extension only\n","image_uris = sorted(\n","    [\n","        os.path.join(datapath, image_name)\n","        for image_name in 
os.listdir(datapath)\n","        if image_name.endswith(\".jpg\")\n","    ]\n",")\n","\n","# Add images\n","if image_uris:\n","    vectorstore.add_images(uris=image_uris)\n","\n","# Add documents\n","if texts:\n","    vectorstore.add_texts(texts=texts)\n","\n","# Create the retriever\n","retriever = vectorstore.as_retriever()"]},{"cell_type":"markdown","id":"02a186d0-27e0-4820-8092-63b5349dd25d","metadata":{},"source":["## RAG\n","\n","`vectorstore.add_images` stores / retrieves images as base64-encoded strings."]},{"cell_type":"code","execution_count":8,"id":"344f56a8-0dc3-433e-851c-3f7600c7a72b","metadata":{},"outputs":[],"source":["import base64\n","from io import BytesIO\n","\n","from PIL import Image\n","\n","\n","def resize_base64_image(base64_string, size=(128, 128)):\n","    \"\"\"\n","    Resize an image encoded as a base64 string.\n","\n","    Args:\n","    base64_string (str): Base64 string of the original image.\n","    size (tuple): Desired size of the image as (width, height).\n","\n","    Returns:\n","    str: Base64 string of the resized image.\n","    \"\"\"\n","    # Decode the base64 string\n","    img_data = base64.b64decode(base64_string)\n","    img = Image.open(BytesIO(img_data))\n","\n","    # Resize the image\n","    resized_img = img.resize(size, Image.LANCZOS)\n","\n","    # Save the resized image to a bytes buffer\n","    buffered = BytesIO()\n","    resized_img.save(buffered, format=img.format)\n","\n","    # Encode the resized image as a base64 string\n","    return base64.b64encode(buffered.getvalue()).decode(\"utf-8\")\n","\n","\n","def is_base64(s):\n","    \"\"\"Check if a string is base64 encoded\"\"\"\n","    try:\n","        return base64.b64encode(base64.b64decode(s)) == s.encode()\n","    except Exception:\n","        return False\n","\n","\n","def split_image_text_types(docs):\n","    \"\"\"Split retrieved documents into base64-encoded images and texts\"\"\"\n","    images = []\n","    text = []\n","    for doc in docs:\n","        doc = doc.page_content  # Extract the document content\n","        if is_base64(doc):\n","            # Resize images to avoid OAI server error\n","            images.append(\n","                resize_base64_image(doc, size=(250, 250))\n","            )  # base64-encoded string\n","        else:\n","            
text.append(doc)\n","    return {\"images\": images, \"texts\": text}"]},{"cell_type":"markdown","id":"23a2c1d8-fea6-4152-b184-3172dd46c735","metadata":{},"source":["Currently, we format the inputs using a `RunnableLambda` while we add image support to `ChatPromptTemplates`.\n","\n","Our runnable follows the classic RAG flow:\n","- We first compute the context (the \"texts\" and \"images\" in this case) and the question (just a RunnablePassthrough here).\n","- Then we pass this into our prompt template, a custom function that formats the message for the llava model.\n","- And finally we parse the output as a string.\n","\n","Here we use Ollama to serve the Llava model. See [Ollama](https://python.langchain.com/docs/integrations/llms/ollama) for setup instructions."]},{"cell_type":"code","execution_count":9,"id":"4c93fab3-74c4-4f1d-958a-0bc4cdd0797e","metadata":{},"outputs":[],"source":["from langchain_community.llms.ollama import Ollama\n","from langchain_core.messages import HumanMessage\n","from langchain_core.output_parsers import StrOutputParser\n","from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n","\n","\n","def prompt_func(data_dict):\n","    # Join the context texts into a single string\n","    formatted_texts = \"\\n\".join(data_dict[\"context\"][\"texts\"])\n","    messages = []\n","\n","    # Add an image to the messages if present\n","    if data_dict[\"context\"][\"images\"]:\n","        image_message = {\n","            \"type\": \"image_url\",\n","            \"image_url\": {\n","                \"url\": f\"data:image/jpeg;base64,{data_dict['context']['images'][0]}\"\n","            },\n","        }\n","        messages.append(image_message)\n","\n","    # Add the text message for analysis\n","    text_message = {\n","        \"type\": \"text\",\n","        \"text\": (\n","            \"As an expert art critic and historian, your task is to analyze and interpret images, \"\n","            \"considering their historical and cultural significance. Alongside the images, you will be \"\n","            \"provided with related text to offer context. Both will be retrieved from a vectorstore based \"\n","            \"on user-input keywords. Please convert answers to english and use your extensive knowledge \"\n","            \"and analytical skills to provide a comprehensive summary that includes:\\n\"\n","            \"- A detailed description of the visual elements in the image.\\n\"\n","            \"- The historical and cultural context of the image.\\n\"\n","            \"- An interpretation of the image's symbolism and meaning.\\n\"\n","            \"- Connections between the image and the related text.\\n\\n\"\n","            f\"User-provided keywords: {data_dict['question']}\\n\\n\"\n","            \"Text and / or tables:\\n\"\n","            f\"{formatted_texts}\"\n","    
    ),\n","    }\n","    messages.append(text_message)\n","    return [HumanMessage(content=messages)]\n","\n","\n","def multi_modal_rag_chain(retriever):\n","    \"\"\"Multimodal RAG chain\"\"\"\n","\n","    # Multimodal LLM\n","    llm_model = Ollama(\n","        verbose=True, temperature=0.5, model=\"llava\", base_url=\"http://localhost:11434\"\n","    )\n","\n","    # RAG pipeline\n","    chain = (\n","        {\n","            \"context\": retriever | RunnableLambda(split_image_text_types),\n","            \"question\": RunnablePassthrough(),\n","        }\n","        | RunnableLambda(prompt_func)\n","        | llm_model\n","        | StrOutputParser()\n","    )\n","\n","    return chain"]},{"cell_type":"markdown","id":"1566096d-97c2-4ddc-ba4a-6ef88c525e4e","metadata":{},"source":["## Test retrieval and run RAG"]},{"cell_type":"code","execution_count":10,"id":"90121e56-674b-473b-871d-6e4753fd0c45","metadata":{},"outputs":[{"name":"stdout","output_type":"stream","text":["GREAT PHOTOGRAPHS\n","The subject of the photo, Florence Owens Thompson, a Cherokee from Oklahoma, initially regretted that Lange ever made this photograph. “She was a very strong woman. She was a leader,” her daughter Katherine later said. “I think that's one of the reasons she resented the photo — because it didn't show her in that light.”\n","\n","DOROTHEA LANGE. “DESTITUTE PEA PICKERS IN CALIFORNIA. MOTHER OF SEVEN CHILDREN. AGE THIRTY-TWO. NIPOMO, CALIFORNIA.” MARCH 1936. NITRATE NEGATIVE. FARM SECURITY ADMINISTRATION-OFFICE OF WAR INFORMATION COLLECTION. 
PRINTS AND PHOTOGRAPHS DIVISION.\n","\n","—Helena Zinkham\n","\n","—Helena Zinkham\n","\n","NOVEMBER/DECEMBER 2020 LOC.GOV/LCM\n"]},{"data":{"text/html":["<img src=\"\" />"],"text/plain":["<IPython.core.display.HTML object>"]},"metadata":{},"output_type":"display_data"}],"source":["from IPython.display import HTML, display\n","\n","\n","def plt_img_base64(img_base64):\n","    # Create an HTML img tag with the base64 string as the image source\n","    image_html = f'<img src=\"data:image/jpeg;base64,{img_base64}\" />'\n","\n","    # Display the image by rendering the HTML\n","    display(HTML(image_html))\n","\n","\n","query = \"Woman with children\"\n","docs = retriever.invoke(query, k=10)\n","\n","for doc in docs:\n","    if is_base64(doc.page_content):\n","        plt_img_base64(doc.page_content)\n","    else:\n","        print(doc.page_content)"]},{"cell_type":"code","execution_count":11,"id":"69fb15fd-76fc-49b4-806d-c4db2990027d","metadata":{},"outputs":[{"name":"stdout","output_type":"stream","text":["1. Detailed description of the visual elements in the image: The image features a woman with children, likely a mother and her family, standing together outside. They appear to be poor or struggling financially, as indicated by their attire and surroundings.\n","2. Historical and cultural context of the image: The photo was taken in 1936 during the Great Depression, when many families struggled to make ends meet. Dorothea Lange, a renowned American photographer, took this iconic photograph that became an emblem of poverty and hardship experienced by many Americans at that time.\n","3. Interpretation of the image's symbolism and meaning: The image conveys a sense of unity and resilience despite adversity. The woman and her children are standing together, displaying their strength as a family unit in the face of economic challenges. The photograph also serves as a reminder of the importance of empathy and support for those who are struggling.\n","4. 
Connections between the image and the related text: The text provided offers additional context about the woman in the photo, her background, and her feelings towards the photograph. It highlights the historical backdrop of the Great Depression and emphasizes the significance of this particular image as a representation of that time period.\n"]}],"source":["# Create the multimodal RAG chain, passing in the retriever\n","chain = multi_modal_rag_chain(retriever)\n","\n","# Invoke the chain with the query and store the result in response\n","response = chain.invoke(query)\n","\n","# Print the response\n","print(response)"]},{"cell_type":"code","execution_count":12,"id":"ec2ea7e6","metadata":{},"outputs":[{"name":"stdout","output_type":"stream","text":["vdms_rag_nb\n"]}],"source":["# Kill the vdms_rag_nb Docker container\n","! docker kill vdms_rag_nb"]},{"cell_type":"code","execution_count":null,"id":"8ba652da","metadata":{},"outputs":[],"source":[]}],"metadata":{"kernelspec":{"display_name":".langchain-venv","language":"python","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.10.13"}},"nbformat":4,"nbformat_minor":5}
