{"cells":[{"cell_type":"markdown","id":"SzvBjdID1V3m","metadata":{"id":"SzvBjdID1V3m"},"source":["# Multi-modal RAG with Google Cloud\n","\n"]},{"cell_type":"markdown","id":"4tfidrmE1Zlo","metadata":{"id":"4tfidrmE1Zlo"},"source":["This tutorial demonstrates how to implement Option 2 described [here](https://github.com/langchain-ai/langchain/blob/master/cookbook/Multi_modal_RAG.ipynb) using generative APIs on Google Cloud."]},{"cell_type":"markdown","id":"84fcd59f-2eaf-4a76-ad1a-96d6db70bf42","metadata":{},"source":["## Setup\n","\n","Install the required dependencies, and create an API key for your Google service."]},{"cell_type":"code","execution_count":null,"id":"6b1e10dd-25de-4c0a-9577-f36e72518f89","metadata":{},"outputs":[],"source":["# Install the required packages\n","%pip install -U --quiet langchain langchain_community openai chromadb langchain-experimental\n","%pip install --quiet \"unstructured[all-docs]\" pypdf pillow pydantic lxml matplotlib tiktoken"]},{"cell_type":"markdown","id":"pSInKtCZ32mt","metadata":{"id":"pSInKtCZ32mt"},"source":["## Data loading"]},{"cell_type":"markdown","id":"Iv2R8-lJ37dG","metadata":{"id":"Iv2R8-lJ37dG"},"source":["We use a zip file containing a subset of the images and pdf extracted from [this](https://cloudedjudgement.substack.com/p/clouded-judgement-111023) blog post. If you want to follow the full flow, please use the original [example](https://github.com/langchain-ai/langchain/blob/master/cookbook/Multi_modal_RAG.ipynb)."]},{"cell_type":"code","execution_count":1,"id":"d999f3fe-c165-4772-b63e-ffe4dd5b03cf","metadata":{},"outputs":[],"source":["# First download the data\n","import logging  # logging for progress information\n","import zipfile  # zip archive handling\n","\n","import requests  # HTTP client used for the download\n","\n","logging.basicConfig(level=logging.INFO)  # log at INFO level\n","\n","data_url = \"https://storage.googleapis.com/benchmarks-artifacts/langchain-docs-benchmarking/cj.zip\"  # data download URL\n","result = requests.get(data_url)  # issue the GET request\n","filename = \"cj.zip\"  # local file name\n","with open(filename, \"wb\") as file:  # open the file in binary write mode\n","    file.write(result.content)  # write the response body to the file\n","\n","with zipfile.ZipFile(filename, \"r\") as zip_ref:  # open the zip archive read-only\n","    zip_ref.extractall()  # 
extract into the current directory"]},{"cell_type":"code","execution_count":2,"id":"eGUfuevMUA6R","metadata":{},"outputs":[],"source":["from langchain_community.document_loaders import PyPDFLoader\n","\n","# Load the PDF file at the given path with PyPDFLoader\n","loader = PyPDFLoader(\"./cj/cj.pdf\")\n","# Load the PDF pages as documents\n","docs = loader.load()\n","# Empty list for table elements (PyPDFLoader does not extract tables)\n","tables = []\n","# Extract the text content of each page\n","texts = [d.page_content for d in docs]"]},{"cell_type":"code","execution_count":3,"id":"Fst17fNHWYcq","metadata":{},"outputs":[{"data":{"text/plain":["21"]},"execution_count":3,"metadata":{},"output_type":"execute_result"}],"source":["# Number of extracted page texts\n","len(texts)"]},{"cell_type":"markdown","id":"vjfcg_Vn3_1C","metadata":{"id":"vjfcg_Vn3_1C"},"source":["## Multi-vector retriever"]},{"cell_type":"markdown","id":"1ynRqJn04BFG","metadata":{"id":"1ynRqJn04BFG"},"source":["Let's generate text and image summaries and save them to a ChromaDB vectorstore."]},{"cell_type":"code","execution_count":4,"id":"kWDWfSDBMPl8","metadata":{},"outputs":[{"name":"stderr","output_type":"stream","text":["INFO:numexpr.utils:Note: NumExpr detected 12 cores but \"NUMEXPR_MAX_THREADS\" not set, so enforcing safe limit of 8.\n","INFO:numexpr.utils:NumExpr defaulting to 8 threads.\n"]}],"source":["from langchain.prompts import PromptTemplate\n","from langchain_community.chat_models import ChatVertexAI\n","from langchain_community.llms import VertexAI\n","from langchain_core.messages import AIMessage\n","from langchain_core.output_parsers import StrOutputParser\n","from langchain_core.runnables import RunnableLambda\n","\n","# Generate summaries of text elements\n","def generate_text_summaries(texts, tables, summarize_texts=False):\n","    \"\"\"\n","    Summarize text elements\n","    texts: list of str\n","    tables: list of str\n","    summarize_texts: bool, whether to summarize the texts\n","    \"\"\"\n","\n","    # Prompt\n","    prompt_text = \"\"\"You are an assistant tasked with summarizing tables and text for retrieval. \\\n","    These summaries will be embedded and used to retrieve the raw text or table elements. 
\\\n","    Give a concise summary of the table or text that is well optimized for retrieval. Table or text: {element} \"\"\"\n","    prompt = PromptTemplate.from_template(prompt_text)\n","    empty_response = RunnableLambda(\n","        lambda x: AIMessage(content=\"Error processing document\")\n","    )\n","    # Text summary chain\n","    model = VertexAI(\n","        temperature=0, model_name=\"gemini-pro\", max_output_tokens=1024\n","    ).with_fallbacks([empty_response])\n","    summarize_chain = {\"element\": lambda x: x} | prompt | model | StrOutputParser()\n","\n","    # Initialize empty summaries\n","    text_summaries = []\n","    table_summaries = []\n","\n","    # Apply to texts if they are provided and summarization was requested\n","    if texts and summarize_texts:\n","        text_summaries = summarize_chain.batch(texts, {\"max_concurrency\": 1})\n","    elif texts:\n","        text_summaries = texts\n","\n","    # Apply to tables if they are provided\n","    if tables:\n","        table_summaries = summarize_chain.batch(tables, {\"max_concurrency\": 1})\n","\n","    return text_summaries, table_summaries\n","\n","\n","# Get text and table summaries\n","text_summaries, table_summaries = generate_text_summaries(\n","    texts, tables, summarize_texts=True\n",")"]},{"cell_type":"code","execution_count":5,"id":"F0NnyUl48yYb","metadata":{},"outputs":[{"data":{"text/plain":["21"]},"execution_count":5,"metadata":{},"output_type":"execute_result"}],"source":["# Number of text summaries\n","len(text_summaries)"]},{"cell_type":"code","execution_count":6,"id":"PeK9bzXv3olF","metadata":{},"outputs":[],"source":["import base64\n","import os\n","\n","from langchain_core.messages import HumanMessage\n","\n","\n","def encode_image(image_path):\n","    \"\"\"Get the base64 string of an image\"\"\"\n","    with open(image_path, \"rb\") as image_file:\n","        return base64.b64encode(image_file.read()).decode(\"utf-8\")\n","\n","\n","def image_summarize(img_base64, prompt):\n","    \"\"\"Generate an image summary\"\"\"\n","    model = ChatVertexAI(model_name=\"gemini-pro-vision\", max_output_tokens=1024)\n","\n","    msg = model.invoke(\n","  
      [\n","            HumanMessage(\n","                content=[\n","                    {\"type\": \"text\", \"text\": prompt},\n","                    {\n","                        \"type\": \"image_url\",\n","                        \"image_url\": {\"url\": f\"data:image/jpeg;base64,{img_base64}\"},\n","                    },\n","                ]\n","            )\n","        ]\n","    )\n","    return msg.content\n","\n","\n","def generate_img_summaries(path):\n","    \"\"\"\n","    Generate summaries and base64-encoded strings for images\n","    path: path to the directory of .jpg files extracted by Unstructured\n","    \"\"\"\n","\n","    # Store base64-encoded images\n","    img_base64_list = []\n","\n","    # Store image summaries\n","    image_summaries = []\n","\n","    # Prompt\n","    prompt = \"\"\"You are an assistant tasked with summarizing images for retrieval. \\\n","    These summaries will be embedded and used to retrieve the raw image. \\\n","    Give a concise summary of the image that is well optimized for retrieval.\"\"\"\n","\n","    # Apply to the images\n","    for img_file in sorted(os.listdir(path)):\n","        if img_file.endswith(\".jpg\"):\n","            img_path = os.path.join(path, img_file)\n","            base64_image = encode_image(img_path)\n","            img_base64_list.append(base64_image)\n","            image_summaries.append(image_summarize(base64_image, prompt))\n","\n","    return img_base64_list, image_summaries\n","\n","\n","# Image summaries\n","img_base64_list, image_summaries = generate_img_summaries(\"./cj\")"]},{"cell_type":"code","execution_count":7,"id":"6WDYpDFzjocl","metadata":{},"outputs":[{"data":{"text/plain":["5"]},"execution_count":7,"metadata":{},"output_type":"execute_result"}],"source":["# Number of image summaries\n","len(image_summaries)"]},{"cell_type":"code","execution_count":8,"id":"cWyWfZ-XB6cS","metadata":{},"outputs":[{"name":"stderr","output_type":"stream","text":["INFO:chromadb.telemetry.product.posthog:Anonymized telemetry enabled. 
See                     https://docs.trychroma.com/telemetry for more information.\n"]}],"source":["import uuid\n","\n","from langchain.retrievers.multi_vector import MultiVectorRetriever\n","from langchain.storage import InMemoryStore\n","from langchain_community.embeddings import VertexAIEmbeddings\n","from langchain_community.vectorstores import Chroma\n","from langchain_core.documents import Document\n","\n","# Create a retriever that indexes the summaries but returns the raw images or texts\n","def create_multi_vector_retriever(\n","    vectorstore, text_summaries, texts, table_summaries, tables, image_summaries, images\n","):\n","    \"\"\"\n","    Create retriever that indexes summaries, but returns raw images or texts\n","    \"\"\"\n","\n","    # Initialize the storage layer\n","    store = InMemoryStore()\n","    id_key = \"doc_id\"\n","\n","    # Create the multi-vector retriever\n","    retriever = MultiVectorRetriever(\n","        vectorstore=vectorstore,\n","        docstore=store,\n","        id_key=id_key,\n","    )\n","\n","    # Helper function to add documents to the vectorstore and docstore\n","    def add_documents(retriever, doc_summaries, doc_contents):\n","        doc_ids = [str(uuid.uuid4()) for _ in doc_contents]\n","        summary_docs = [\n","            Document(page_content=s, metadata={id_key: doc_ids[i]})\n","            for i, s in enumerate(doc_summaries)\n","        ]\n","        retriever.vectorstore.add_documents(summary_docs)\n","        retriever.docstore.mset(list(zip(doc_ids, doc_contents)))\n","\n","    # Add texts, tables, and images\n","    # Check that text_summaries is not empty before adding\n","    if text_summaries:\n","        add_documents(retriever, text_summaries, texts)\n","    # Check that table_summaries is not empty before adding\n","    if table_summaries:\n","        add_documents(retriever, table_summaries, tables)\n","    # Check that image_summaries is not empty before adding\n","    if image_summaries:\n","        add_documents(retriever, image_summaries, images)\n","\n","    return retriever\n","\n","\n","# The vectorstore to use to index the summaries\n","vectorstore = Chroma(\n","    
collection_name=\"mm_rag_cj_blog\",\n","    embedding_function=VertexAIEmbeddings(model_name=\"textembedding-gecko@latest\"),\n",")\n","\n","# Create the retriever\n","retriever_multi_vector_img = create_multi_vector_retriever(\n","    vectorstore,\n","    text_summaries,\n","    texts,\n","    table_summaries,\n","    tables,\n","    image_summaries,\n","    img_base64_list,\n",")"]},{"cell_type":"markdown","id":"NGDkkMFfCg4j","metadata":{"id":"NGDkkMFfCg4j"},"source":["## Building a RAG\n","\n","Now let's assemble the multi-modal RAG (Retrieval Augmented Generation) chain. Given a question, the retriever returns the raw texts and images whose summaries best match it; we then split the retrieved documents by type, format them into a multi-modal prompt, and pass that prompt to a multi-modal LLM to generate the answer."]},{"cell_type":"markdown","id":"8TzOcHVsCmBc","metadata":{"id":"8TzOcHVsCmBc"},"source":["Let's build the chain:"]},{"cell_type":"code","execution_count":9,"id":"GlwCErBaCKQW","metadata":{},"outputs":[],"source":["import base64\n","import io\n","import re\n","\n","from IPython.display import HTML, display\n","from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n","from PIL import Image\n","\n","\n","def plt_img_base64(img_base64):\n","    \"\"\"Display a base64-encoded string as an image\"\"\"\n","    # Create an HTML img tag with the base64 string as the source\n","    image_html = f'<img src=\"data:image/jpeg;base64,{img_base64}\" />'\n","    # Display the image by rendering the HTML\n","    display(HTML(image_html))\n","\n","\n","def looks_like_base64(sb):\n","    \"\"\"Check whether the string looks like base64\"\"\"\n","    return re.match(\"^[A-Za-z0-9+/]+[=]{0,2}$\", sb) is not None\n","\n","\n","def is_image_data(b64data):\n","    \"\"\"\n","    Check whether the base64 data is an image by looking at the start of the data\n","    \"\"\"\n","    image_signatures = {\n","        b\"\\xff\\xd8\\xff\": \"jpg\",\n","        b\"\\x89\\x50\\x4e\\x47\\x0d\\x0a\\x1a\\x0a\": \"png\",\n","        b\"\\x47\\x49\\x46\\x38\": \"gif\",\n","        b\"\\x52\\x49\\x46\\x46\": \"webp\",\n","    }\n","    try:\n","        header = 
base64.b64decode(b64data)[:8]  # decode and inspect the first 8 bytes\n","        for sig, format in image_signatures.items():\n","            if header.startswith(sig):\n","                return True\n","        return False\n","    except Exception:\n","        return False\n","\n","\n","def resize_base64_image(base64_string, size=(128, 128)):\n","    \"\"\"\n","    Resize an image encoded as a base64 string\n","    \"\"\"\n","    # Decode the base64 string\n","    img_data = base64.b64decode(base64_string)\n","    img = Image.open(io.BytesIO(img_data))\n","\n","    # Resize the image\n","    resized_img = img.resize(size, Image.LANCZOS)\n","\n","    # Save the resized image to a bytes buffer\n","    buffered = io.BytesIO()\n","    resized_img.save(buffered, format=img.format)\n","\n","    # Encode the resized image back to base64\n","    return base64.b64encode(buffered.getvalue()).decode(\"utf-8\")\n","\n","\n","def split_image_text_types(docs):\n","    \"\"\"\n","    Split base64-encoded images and texts\n","    \"\"\"\n","    b64_images = []\n","    texts = []\n","    for doc in docs:\n","        # Check if the document is of type Document and extract page_content if so\n","        if isinstance(doc, Document):\n","            doc = doc.page_content\n","        if looks_like_base64(doc) and is_image_data(doc):\n","            doc = resize_base64_image(doc, size=(1300, 600))\n","            b64_images.append(doc)\n","        else:\n","            texts.append(doc)\n","    # If any image was retrieved, keep only the first image and drop the texts\n","    if len(b64_images) > 0:\n","        return {\"images\": b64_images[:1], \"texts\": []}\n","    return {\"images\": b64_images, \"texts\": texts}\n","\n","\n","def img_prompt_func(data_dict):\n","    \"\"\"\n","    Join the context into a single prompt message\n","    \"\"\"\n","    formatted_texts = \"\\n\".join(data_dict[\"context\"][\"texts\"])\n","    messages = []\n","\n","    # Adding the text for analysis\n","    text_message = {\n","        \"type\": \"text\",\n","        \"text\": (\n","            \"You are a financial analyst tasked with providing investment advice.\\n\"\n","            \"You will be given some text, tables, and images, usually charts or graphs.\\n\"\n","            \"Use this information to provide investment advice related to the user's question.\\n\"\n","            f\"User-provided question: {data_dict['question']}\\n\\n\"\n","            
\"Text and/or tables:\\n\"\n","            f\"{formatted_texts}\"\n","        ),\n","    }\n","    messages.append(text_message)\n","    # Add the images to the messages if present\n","    if data_dict[\"context\"][\"images\"]:\n","        for image in data_dict[\"context\"][\"images\"]:\n","            image_message = {\n","                \"type\": \"image_url\",\n","                \"image_url\": {\"url\": f\"data:image/jpeg;base64,{image}\"},\n","            }\n","            messages.append(image_message)\n","    return [HumanMessage(content=messages)]\n","\n","\n","def multi_modal_rag_chain(retriever):\n","    \"\"\"\n","    Multi-modal RAG chain\n","    \"\"\"\n","\n","    # Multi-modal LLM\n","    model = ChatVertexAI(temperature=0, model_name=\"gemini-pro-vision\", max_output_tokens=1024)\n","\n","    # RAG pipeline\n","    chain = (\n","        {\n","            \"context\": retriever | RunnableLambda(split_image_text_types),\n","            \"question\": RunnablePassthrough(),\n","        }\n","        | RunnableLambda(img_prompt_func)\n","        | model\n","        | StrOutputParser()\n","    )\n","\n","    return chain\n","\n","\n","# Create the RAG chain\n","chain_multimodal_rag = multi_modal_rag_chain(retriever_multi_vector_img)"]},{"cell_type":"markdown","id":"BS4hNKqCCp8u","metadata":{"id":"BS4hNKqCCp8u"},"source":["Let's check that we get images back as documents:"]},{"cell_type":"code","execution_count":10,"id":"Q7GrwFC_FGwr","metadata":{},"outputs":[{"data":{"text/plain":["4"]},"execution_count":10,"metadata":{},"output_type":"execute_result"}],"source":["# Query\n","query = \"What are the EV / NTM and NTM rev growth for MongoDB, Cloudflare, and Datadog?\"\n","# Invoke the retriever to get the relevant documents\n","docs = retriever_multi_vector_img.invoke(query, limit=1)\n","\n","# We got back 4 documents\n","len(docs)"]},{"cell_type":"code","execution_count":11,"id":"unnxB5M_FLCD","metadata":{},"outputs":[{"data":{"text/html":["<img src=\"\" />"],"text/plain":["<IPython.core.display.HTML object>"]},"metadata":{},"output_type":"display_data"}],"source":["\n","# 
Display the first retrieved document, which is an image\n","plt_img_base64(docs[0])\n"]},{"cell_type":"markdown","id":"YUkGZXqsCtF6","metadata":{"id":"YUkGZXqsCtF6"},"source":["Let's run our RAG on the same query:"]},{"cell_type":"code","execution_count":12,"id":"LsPTehdK-T-_","metadata":{},"outputs":[{"data":{"text/plain":["' | Company | EV / NTM Rev | NTM Rev Growth |\\n|---|---|---|\\n| MongoDB | 14.6x | 17% |\\n| Cloudflare | 13.4x | 28% |\\n| Datadog | 13.1x | 19% |'"]},"execution_count":12,"metadata":{},"output_type":"execute_result"}],"source":["# Invoke chain_multimodal_rag with the query\n","chain_multimodal_rag.invoke(query)"]},{"cell_type":"markdown","id":"XpLQB6dEfQX-","metadata":{"id":"XpLQB6dEfQX-"},"source":["As we can see, the model was able to find the correct values relevant to answering the question."]}],"metadata":{"kernelspec":{"display_name":"Python 3 (ipykernel)","language":"python","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.11.2"}},"nbformat":4,"nbformat_minor":5}
