{"cells":[{"cell_type":"code","execution_count":null,"id":"61ccf657-87fd-4541-bd06-b66288c150b0","metadata":{},"outputs":[],"source":["# Install the required libraries\n","! pip install \"openai>=1\" \"langchain>=0.0.331rc2\" matplotlib pillow"]},{"cell_type":"markdown","id":"aa5c8fc8-67c3-4fb7-aa37-e1a5d6682170","metadata":{},"source":["## Load images\n","\n","We encode the image as a base64 string, as described in the [OpenAI GPT-4V documentation](https://platform.openai.com/docs/guides/vision)."]},{"cell_type":"code","execution_count":5,"id":"e67eb395-f960-4833-a0e0-1cc6a0131f55","metadata":{},"outputs":[{"data":{"text/html":["<img src=\"\" />"],"text/plain":["<IPython.core.display.HTML object>"]},"metadata":{},"output_type":"display_data"}],"source":["import base64\n","\n","from IPython.display import HTML, display\n","\n","\n","def encode_image(image_path):\n","    \"\"\"Return the base64 string for an image file\"\"\"\n","\n","    with open(image_path, \"rb\") as image_file:\n","        return base64.b64encode(image_file.read()).decode(\"utf-8\")\n","\n","\n","def plt_img_base64(img_base64):\n","    \"\"\"Display an image from its base64 string\"\"\"\n","\n","    # Create an HTML img tag with the base64 string as its source\n","    image_html = f'<img src=\"data:image/jpeg;base64,{img_base64}\" />'\n","\n","    # Render the HTML to display the image\n","    display(HTML(image_html))\n","\n","\n","# Path to the image used for QA\n","path = \"/Users/rlm/Desktop/Multimodal_Eval/qa/llm_strategies.jpeg\"\n","img_base64 = encode_image(path)\n","plt_img_base64(img_base64)"]},{"cell_type":"markdown","id":"19bf59e1-ab31-4943-8f62-076d8de64b9d","metadata":{},"source":["## QA with GPT-4Vision\n","\n","We can use GPT-4V to answer questions about the image. See the following links for more detail:\n","* https://github.com/openai/openai-python/releases/tag/v1.0.0\n","* https://platform.openai.com/docs/guides/vision"]},{"cell_type":"code","execution_count":6,"id":"19b8f89b-cc1c-4fd1-80fe-08c17bc6a30f","metadata":{},"outputs":[],"source":["# Import the required modules\n","from langchain_core.messages import HumanMessage, SystemMessage\n","from langchain_openai import ChatOpenAI\n","\n","# 
Create a ChatOpenAI object (the vision model is configured in the next cell)\n","chat = ChatOpenAI()\n","\n","# Additional setup can be added here as needed"]},{"cell_type":"code","execution_count":7,"id":"88033140-978c-4782-a721-703c3da634b1","metadata":{},"outputs":[],"source":["# Instantiate the GPT-4V chat model\n","chat = ChatOpenAI(model=\"gpt-4-vision-preview\", max_tokens=1024)\n","\n","# Send a message containing both text and the base64-encoded image\n","msg = chat.invoke(\n","    [\n","        HumanMessage(\n","            content=[\n","                {\n","                    \"type\": \"text\",\n","                    \"text\": \"Based on the image, what is the difference in training strategy between a small and a large base model?\",\n","                },\n","                {\n","                    \"type\": \"image_url\",\n","                    \"image_url\": {\"url\": f\"data:image/jpeg;base64,{img_base64}\"},\n","                },\n","            ]\n","        )\n","    ]\n",")"]},{"cell_type":"markdown","id":"9c415ce7-4ac4-46fe-82a4-7bf9d677b97a","metadata":{},"source":["The resulting `msg.content` is shown below:"]},{"cell_type":"markdown","id":"8580c74f-0938-4986-80a9-8fc39e1913e3","metadata":{},"source":["The figure appears to depict task accuracy for two base-model sizes (large and small) as a function of different training strategies and the effort/complexity associated with them. Here are the differences in training strategy between the small and large base models that the chart suggests:\n","\n","1. **Zero-shot prompts**: Both models start from some baseline accuracy with no additional training, reflecting their zero-shot capability. However, the large base model shows higher accuracy out of the box than the small base model.\n","\n","2. **Prompt engineering**: As prompt-engineering complexity increases, the large base model shows a marked improvement in task accuracy, indicating that it can understand and exploit carefully crafted prompts more effectively than the small base model can.\n","\n","3. **Few-shot prompts**: Once few-shot prompts are introduced, giving the model a handful of examples to learn from, the large base model continues to show higher task accuracy than the small one; the small base model improves as well, but to a lesser degree.\n","\n","4. **Retrieval-augmented few-shot prompting**: At this stage, the models are augmented with a retrieval mechanism to support the few-shot learning process. The large base model keeps its lead in task accuracy, suggesting that it integrates retrieval-augmented strategies more effectively.\n","\n","5. **Finetuning**: Moving toward the right of the chart, which represents finetuning, the small base model shows a more pronounced accuracy gain than in the earlier steps, indicating that finetuning has a major impact on smaller models. The large base model also benefits from finetuning, but without as dramatic an increase, possibly because its larger size and capacity already place it at a high level.\n","\n","6. 
**Model training (finetuning, RLHF) and a data engine**: The final part of the chart suggests that with extensive model-training techniques, such as finetuning and reinforcement learning from human feedback (RLHF), combined with a strong data engine, the large base model can reach near-perfect task accuracy. The small base model also improves, but does not reach the same level, indicating that the larger model's capacity lets it make better use of advanced training methods and data resources.\n","\n","In summary, the large base model benefits more from advanced training strategies and shows higher task accuracy as effort and complexity increase, while the small base model requires more finetuning to achieve significant performance gains."]},{"cell_type":"markdown","id":"2552b0e6-9d07-40f1-8fbc-17567bd0fdd1","metadata":{},"source":["## QA with open-source multimodal LLMs\n","\n","We can also test various open-source multimodal LLMs.\n","\n","See [here](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb) for instructions on building llama.cpp for multimodal LLMs:\n","\n","Clone [llama.cpp](https://github.com/ggerganov/llama.cpp)\n","\n","Download the weights:\n","* [LLaVA-7b](https://huggingface.co/mys/ggml_llava-v1.5-7b/tree/main)\n","* [LLaVA-13b](https://huggingface.co/mys/ggml_llava-v1.5-13b)\n","* [Bakllava](https://huggingface.co/mys/ggml_bakllava-1/tree/main)\n","\n","Build in your `llama.cpp` directory:\n","```\n","mkdir build && cd build && cmake ..\n","cmake --build .\n","```\n","\n","Support for multimodal LLMs will be added to llama.cpp soon.\n","\n","In the meantime, you can test them from the CLI:"]},{"cell_type":"code","execution_count":null,"id":"1293d0df-c979-4c53-9af5-c3bf918aad04","metadata":{},"outputs":[],"source":["%%bash\n","\n","# Path to the image\n","IMG_PATH=\"/Users/rlm/Desktop/Multimodal_Eval/qa/llm_strategies.jpeg\"\n","\n","# Model to use (uncomment one)\n","# MODEL_NAME=\"llava-7b\"\n","# MODEL_NAME=\"bakllava-1\"\n","MODEL_NAME=\"llava-13b\"\n","\n","# Run the llava binary against the image with the QA prompt\n","/Users/rlm/Desktop/Code/llama.cpp/build/bin/llava -m /Users/rlm/Desktop/Code/llama.cpp/models/${MODEL_NAME}/ggml-model-q5_k.gguf --mmproj /Users/rlm/Desktop/Code/llama.cpp/models/${MODEL_NAME}/mmproj-model-f16.gguf --temp 0.1 -p \"Based on the image, what is the difference in training strategy between a small and a large base model?\" --image \"$IMG_PATH\""]}],"metadata":{"kernelspec":{"display_name":"Python 3 (ipykernel)","language":"python","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.9.16"}},"nbformat":4,"nbformat_minor":5}
