{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "70423feb-850c-4286-9b55-7cdca163d412",
   "metadata": {},
   "source": [
    "# LangChain全面剖析之Model I/O"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a1cc8eeb-529a-4247-946a-8dd8b674893f",
   "metadata": {},
   "source": [
    "## 1. Model I/O介绍"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b2fdcda3-8a67-4872-9998-099cf09f3ee5",
   "metadata": {},
   "source": [
    "### 1.1 Model I/O模块组成"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9ef3cc1f-2975-4333-bb0e-ee883a1db597",
   "metadata": {},
   "source": [
    "- **Format：即指代Prompts Template，通过模板化来管理大模型的输入；**\n",
    "- **Predict：即指代Models，使用通用接口调用不同的大语言模型；**\n",
    "- **Parse：即指代Output部分，用来从模型的推理中提取信息，并按照预先设定好的模版来规范化输出。**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "68226771-49d8-472a-9aee-161a8d9d98d2",
   "metadata": {},
   "source": [
    "- **Format**\n",
    "\n",
    "传统上我们创建提示词是通过手工编写来实现的，在这个过程中会利用各种提示工程技巧，如Few-Shot、链式推理（CoT）等方法，以提高大模型的推理性能。然而，在应用\n",
    "开发中，一个关键的考量是提示词不能是一成不变的。其原因在于，应用开发需要适应多变的用户需求和场景。固定的提示词限制了模型的灵活性和适用范围\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "80a1455e-3468-4ed9-a7d8-2af7d4a5f294",
   "metadata": {},
   "source": [
    "- **Predict**\n",
    "\n",
    "在Predict部分，实质上是处理模型从接收输入到执行推理的整个过程。考虑到存在两种主要类型的大模型——Base类模型和Chat类模型，LangChain在其Model I/O模块中\n",
    "对这两种模型都进行了抽象，分别归类为LLMs（Large Language Models）和Chat Models。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "7413e651-208f-4ccb-8f93-70a6f7f0afa1",
   "metadata": {},
   "outputs": [
    {
     "ename": "NameError",
     "evalue": "name 'client' is not defined",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mNameError\u001b[0m                                 Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[3], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[43mclient\u001b[49m\u001b[38;5;241m.\u001b[39mcompletions\u001b[38;5;241m.\u001b[39mcreate(\n\u001b[1;32m      2\u001b[0m   model\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mgpt-3.5-turbo-instruct\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[1;32m      3\u001b[0m   prompt\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mSay this is a test\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[1;32m      4\u001b[0m )\n",
      "\u001b[0;31mNameError\u001b[0m: name 'client' is not defined"
     ]
    }
   ],
   "source": [
    "\n",
    "client.completions.create(\n",
    "  model=\"gpt-3.5-turbo-instruct\",\n",
    "  prompt=\"Say this is a test\",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "25835494-d6d8-4f9c-ae29-203e368dbaa9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Chat模型\n",
    "client.chat.completions.create(\n",
    "  model=\"gpt-3.5-turbo\",\n",
    "  messages=[\n",
    "    {\"role\": \"system\", \"content\": \"你是一位乐于助人的AI智能小助手\"},\n",
    "    {\"role\": \"user\", \"content\": \"你好，请你介绍一下你自己。\"}\n",
    "  ]\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d21a3023-9576-49b5-a12d-5cbf761f67e0",
   "metadata": {
    "tags": []
   },
   "source": [
    "- **Parse**\n",
    "\n",
    "大模型的输出是不稳定的，同样的输入Prompt往往会得到不同形式的输出。在自然语言交互中，不同的语言表达方式通常不会造成理解上的障碍。但在应用开发中，大模型的\n",
    "输出可能是下一步逻辑处理的关键输入。因此，在这种情况下，规范化输出是必须要做的任务，以确保应用能够顺利进行后续的逻辑处理。"
   ]
  },
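  {
   "cell_type": "markdown",
   "id": "a1b2c3d4-5e6f-4a1b-9c2d-parsesketch01",
   "metadata": {},
   "source": [
    "As a minimal illustration of the idea behind Parse (a toy sketch, not LangChain's actual output-parser implementation; the function name here is invented), the cell below normalizes a messy comma-separated model response into a Python list that downstream logic can consume:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1b2c3d4-5e6f-4a1b-9c2d-parsesketch02",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy parser: normalize comma-separated model text into a list\n",
    "def parse_comma_separated(text):\n",
    "    return [item.strip() for item in text.split(\",\") if item.strip()]\n",
    "\n",
    "# A model might return unevenly spaced text like this\n",
    "raw_output = \" Python ,Java,  C++ \"\n",
    "parse_comma_separated(raw_output)"
   ]
  },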
  {
   "cell_type": "markdown",
   "id": "dbd3b841-0cb0-4e11-b7c9-010b565824b3",
   "metadata": {},
   "source": [
    "### 1.2 什么是LCEL？"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b07878ad-f704-4149-8f31-feaa4d3995f1",
   "metadata": {},
   "source": [
    "LangChain表达式语言（LCEL）是一种声明式方法，可以轻松地将 链 组合在一起。你可以理解为就是类似shell里面管道符的开发方式。"
   ]
  },
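  {
   "cell_type": "markdown",
   "id": "b2c3d4e5-6f7a-4b2c-8d3e-lcelsketch001",
   "metadata": {},
   "source": [
    "To make the pipe analogy concrete, here is a pure-Python sketch of how `|` composition can be implemented via `__or__` (illustrative only; `Step`, `fake_model`, and the other names are invented here, and this is not LangChain's actual LCEL code):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2c3d4e5-6f7a-4b2c-8d3e-lcelsketch002",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy pipe composition (not LangChain's real implementation)\n",
    "class Step:\n",
    "    def __init__(self, fn):\n",
    "        self.fn = fn\n",
    "\n",
    "    def __or__(self, other):\n",
    "        # chaining runs self first, then feeds the result into other\n",
    "        return Step(lambda x: other.fn(self.fn(x)))\n",
    "\n",
    "    def invoke(self, x):\n",
    "        return self.fn(x)\n",
    "\n",
    "# prompt -> model -> parser, composed like a shell pipeline\n",
    "prompt = Step(lambda topic: f\"Tell me about {topic}\")\n",
    "fake_model = Step(lambda p: p.upper())\n",
    "parser = Step(lambda text: text.strip())\n",
    "\n",
    "chain = prompt | fake_model | parser\n",
    "chain.invoke(\"LCEL\")"
   ]
  },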
  {
   "cell_type": "markdown",
   "id": "567fd19c-6b8b-437b-945f-8637c50a2dae",
   "metadata": {
    "tags": []
   },
   "source": [
    "### 1.3 LangChain安装"
   ]
  },
  {
   "cell_type": "raw",
   "id": "48c05d02-effb-4392-9a15-f71c38e4c31a",
   "metadata": {},
   "source": [
    "! pip install langchain\n",
    "! pip install opeanai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "6faad176-5602-42e3-a6de-d4bf14a93106",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.2.0\n"
     ]
    }
   ],
   "source": [
    "import langchain\n",
    "\n",
    "print(langchain.__version__)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "8a16a850-b726-4d3b-a504-bf938133bd50",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.76.0\n"
     ]
    }
   ],
   "source": [
    "import openai\n",
    "print(openai.__version__)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "54ff95f8-7f2f-4dcb-8aa1-efc56c404141",
   "metadata": {},
   "source": [
    "## 2. Model I/O之模型调用"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bb0f6ce7-b9f6-40ec-93b7-dbf3dac7fd86",
   "metadata": {},
   "source": [
    "LangChain为了使开发者可以轻松地创建自定义链，整体采用`Runnable`协议。Runnable 协议是编程中一种常见的设计模式，用于定义可以执行的任务或行为。\n",
    "在LangChain中通过构建标准接口，可以用户轻松定义自定义链并以标准方式调用它们，目前在LangChain已经集成的LLMs中，均实现了`Runnable`接口，目前支持\n",
    "包括`invoke`、 `stream` 、 `batch` 、 `astream` 等方法的调用。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "62f0e691-757d-4f9a-a3ad-8a26f08a5b7d",
   "metadata": {},
   "source": [
    "LangChain已经集成的大模型：https://python.langchain.com/docs/integrations/llms/"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "564a60c2-6a00-471a-8d6e-13f648041f02",
   "metadata": {},
   "source": [
    "具体支持的调用方式如下所示：\n",
    "\n",
    "| 方法    | 说明             |\n",
    "|-------|----------------|\n",
    "| invoke | 处理单条输入       |\n",
    "| batch  | 处理批量输入 |\n",
    "| stream | 流式响应         |\n",
    "| ainvoke | 异步处理单条输入       |\n",
    "| abatch  | 异步处理批量输入 |\n",
    "| astream | 异步流式响应         |"
   ]
  },
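  {
   "cell_type": "markdown",
   "id": "c3d4e5f6-7a8b-4c3d-9e4f-runsketch0001",
   "metadata": {},
   "source": [
    "The relationship between these methods can be sketched with a toy class (illustrative only, not LangChain's implementation; `EchoRunnable` is an invented name): `batch` maps `invoke` over a list of inputs, and `stream` yields the result piece by piece."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c3d4e5f6-7a8b-4c3d-9e4f-runsketch0002",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy Runnable-style interface (not LangChain's actual code)\n",
    "class EchoRunnable:\n",
    "    def invoke(self, text):\n",
    "        return f\"echo: {text}\"\n",
    "\n",
    "    def batch(self, texts):\n",
    "        # batch = invoke applied to each input\n",
    "        return [self.invoke(t) for t in texts]\n",
    "\n",
    "    def stream(self, text):\n",
    "        # stream = yield the result in chunks (here, word by word)\n",
    "        for token in self.invoke(text).split():\n",
    "            yield token\n",
    "\n",
    "r = EchoRunnable()\n",
    "print(r.invoke(\"hello\"))\n",
    "print(r.batch([\"a\", \"b\"]))\n",
    "print(list(r.stream(\"hello world\")))"
   ]
  },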
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "108de6a3-121a-41a9-8ca8-acf8dce98993",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from openai import OpenAI\n",
    "openai.api_key = \"sk-3425fed5d0734e7487c10534b823ebbd\"\n",
    "openai.api_base=\"https://api.deepseek.com/v1\"\n",
    "client = OpenAI(api_key=openai.api_key ,base_url=openai.api_base)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "23ac8176-773e-4074-8d13-4c3684e26517",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "你好呀！😊 我是 **DeepSeek Chat**，由深度求索公司（DeepSeek）研发的智能 AI 助手。我可以帮助你解答各种问题，包括学习、工作、编程、写作、生活小技巧等，还能处理上传的文档（如 PDF、Word、Excel 等），帮你提取和分析信息。  \n",
      "\n",
      "### ✨ **我的特点**：\n",
      "- **免费使用**：目前无需付费，随时为你服务！  \n",
      "- **知识丰富**：我的知识截止到 **2024 年 7 月**，可以解答许多专业和日常问题。  \n",
      "- **超长上下文**：支持 **128K** 上下文记忆，能理解和处理超长内容。  \n",
      "- **文件阅读**：可以解析 **txt、pdf、ppt、word、excel** 等文件，帮助你快速获取信息。  \n",
      "- **持续进化**：团队在不断提升我的能力，未来会有更多新功能！  \n",
      "\n",
      "### 🚀 **我能帮你做什么？**  \n",
      "- 📚 **学习辅导**：解题思路、论文写作、语言翻译  \n",
      "- 💼 **工作助理**：写邮件、做PPT、数据分析  \n",
      "- 💡 **创意灵感**：写故事、起名字、策划方案  \n",
      "- 📊 **编程助手**：代码调试、算法讲解、技术咨询  \n",
      "- 🏡 **生活百科**：旅行建议、美食推荐、健康小贴士  \n",
      "\n",
      "有什么我可以帮你的吗？随时问我哦！😃\n"
     ]
    }
   ],
   "source": [
    "completion = client.chat.completions.create(\n",
    "  model=\"deepseek-chat\",\n",
    "  messages=[\n",
    "    {\"role\": \"system\", \"content\": \"你是一个乐于助人的智能AI小助手\"},\n",
    "    {\"role\": \"user\", \"content\": \"你好，请你介绍一下你自己\"}\n",
    "  ]\n",
    ")\n",
    "\n",
    "print(completion.choices[0].message.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "38802685-7970-42ad-9bff-b677289d5914",
   "metadata": {},
   "source": [
    "LangChain作为一个应用开发框架，需要集成各种不同的大模型，如上述OpenAI的GPT系列模型调用示例，通过Message数据输入规范，定义不同的role，即system、user和assistant来区分对话过程，但对于其他大模型，并不意味这一定会遵守这种输入输出及角色的定义，所以LangChain的做法是，因为Chat Model基于消息而不是原始文本，LangChain目前就抽象出来的消息类型有 AIMessage 、 HumanMessage 、 SystemMessage 、 FunctionMessage 和 ChatMessage ，但大多时候我们只需要处理 HumanMessage 、 AIMessage 和 SystemMessage，即：\n",
    "- SystemMessage ：用于启动 AI 行为，作为输入消息序列中的第一个传入。\n",
    "- HumanMessage ：表示来自与聊天模型交互的人的消息。\n",
    "- AIMessage ：表示来自聊天模型的消息。这可以是文本，也可以是调用工具的请求。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "cef55559-0cb8-4c5f-b4d7-286bb1e2ee3d",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: http://mirrors.aliyun.com/pypi/simple\n",
      "Collecting langchain-openai\n",
      "  Downloading http://mirrors.aliyun.com/pypi/packages/1c/ff/d8bf3cacd55cabd85deed923a22a72e0c306a1211584f78a933512c3ef8f/langchain_openai-0.1.6-py3-none-any.whl (34 kB)\n",
      "Collecting tiktoken<1,>=0.5.2\n",
      "  Downloading http://mirrors.aliyun.com/pypi/packages/62/5d/0adc459426364216cb25eeace411b18f820f11feb945a76a59eed2c67abd/tiktoken-0.6.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.8 MB)\n",
      "\u001b[K     |████████████████████████████████| 1.8 MB 651 kB/s eta 0:00:01\n",
      "\u001b[?25hRequirement already satisfied: langchain-core<0.2.0,>=0.1.46 in /root/miniconda3/lib/python3.8/site-packages (from langchain-openai) (0.1.46)\n",
      "Requirement already satisfied: openai<2.0.0,>=1.24.0 in /root/miniconda3/lib/python3.8/site-packages (from langchain-openai) (1.24.0)\n",
      "Requirement already satisfied: jsonpatch<2.0,>=1.33 in /root/miniconda3/lib/python3.8/site-packages (from langchain-core<0.2.0,>=0.1.46->langchain-openai) (1.33)\n",
      "Requirement already satisfied: tenacity<9.0.0,>=8.1.0 in /root/miniconda3/lib/python3.8/site-packages (from langchain-core<0.2.0,>=0.1.46->langchain-openai) (8.2.3)\n",
      "Requirement already satisfied: PyYAML>=5.3 in /root/miniconda3/lib/python3.8/site-packages (from langchain-core<0.2.0,>=0.1.46->langchain-openai) (6.0)\n",
      "Requirement already satisfied: packaging<24.0,>=23.2 in /root/miniconda3/lib/python3.8/site-packages (from langchain-core<0.2.0,>=0.1.46->langchain-openai) (23.2)\n",
      "Requirement already satisfied: pydantic<3,>=1 in /root/miniconda3/lib/python3.8/site-packages (from langchain-core<0.2.0,>=0.1.46->langchain-openai) (2.7.1)\n",
      "Requirement already satisfied: langsmith<0.2.0,>=0.1.0 in /root/miniconda3/lib/python3.8/site-packages (from langchain-core<0.2.0,>=0.1.46->langchain-openai) (0.1.51)\n",
      "Requirement already satisfied: jsonpointer>=1.9 in /root/miniconda3/lib/python3.8/site-packages (from jsonpatch<2.0,>=1.33->langchain-core<0.2.0,>=0.1.46->langchain-openai) (2.3)\n",
      "Requirement already satisfied: orjson<4.0.0,>=3.9.14 in /root/miniconda3/lib/python3.8/site-packages (from langsmith<0.2.0,>=0.1.0->langchain-core<0.2.0,>=0.1.46->langchain-openai) (3.10.1)\n",
      "Requirement already satisfied: requests<3,>=2 in /root/miniconda3/lib/python3.8/site-packages (from langsmith<0.2.0,>=0.1.0->langchain-core<0.2.0,>=0.1.46->langchain-openai) (2.28.2)\n",
      "Requirement already satisfied: distro<2,>=1.7.0 in /root/miniconda3/lib/python3.8/site-packages (from openai<2.0.0,>=1.24.0->langchain-openai) (1.9.0)\n",
      "Requirement already satisfied: tqdm>4 in /root/miniconda3/lib/python3.8/site-packages (from openai<2.0.0,>=1.24.0->langchain-openai) (4.61.2)\n",
      "Requirement already satisfied: anyio<5,>=3.5.0 in /root/miniconda3/lib/python3.8/site-packages (from openai<2.0.0,>=1.24.0->langchain-openai) (3.6.2)\n",
      "Requirement already satisfied: sniffio in /root/miniconda3/lib/python3.8/site-packages (from openai<2.0.0,>=1.24.0->langchain-openai) (1.3.0)\n",
      "Requirement already satisfied: httpx<1,>=0.23.0 in /root/miniconda3/lib/python3.8/site-packages (from openai<2.0.0,>=1.24.0->langchain-openai) (0.27.0)\n",
      "Requirement already satisfied: typing-extensions<5,>=4.7 in /root/miniconda3/lib/python3.8/site-packages (from openai<2.0.0,>=1.24.0->langchain-openai) (4.11.0)\n",
      "Requirement already satisfied: idna>=2.8 in /root/miniconda3/lib/python3.8/site-packages (from anyio<5,>=3.5.0->openai<2.0.0,>=1.24.0->langchain-openai) (2.10)\n",
      "Requirement already satisfied: certifi in /root/miniconda3/lib/python3.8/site-packages (from httpx<1,>=0.23.0->openai<2.0.0,>=1.24.0->langchain-openai) (2021.5.30)\n",
      "Requirement already satisfied: httpcore==1.* in /root/miniconda3/lib/python3.8/site-packages (from httpx<1,>=0.23.0->openai<2.0.0,>=1.24.0->langchain-openai) (1.0.5)\n",
      "Requirement already satisfied: h11<0.15,>=0.13 in /root/miniconda3/lib/python3.8/site-packages (from httpcore==1.*->httpx<1,>=0.23.0->openai<2.0.0,>=1.24.0->langchain-openai) (0.14.0)\n",
      "Requirement already satisfied: pydantic-core==2.18.2 in /root/miniconda3/lib/python3.8/site-packages (from pydantic<3,>=1->langchain-core<0.2.0,>=0.1.46->langchain-openai) (2.18.2)\n",
      "Requirement already satisfied: annotated-types>=0.4.0 in /root/miniconda3/lib/python3.8/site-packages (from pydantic<3,>=1->langchain-core<0.2.0,>=0.1.46->langchain-openai) (0.6.0)\n",
      "Requirement already satisfied: charset-normalizer<4,>=2 in /root/miniconda3/lib/python3.8/site-packages (from requests<3,>=2->langsmith<0.2.0,>=0.1.0->langchain-core<0.2.0,>=0.1.46->langchain-openai) (3.1.0)\n",
      "Requirement already satisfied: urllib3<1.27,>=1.21.1 in /root/miniconda3/lib/python3.8/site-packages (from requests<3,>=2->langsmith<0.2.0,>=0.1.0->langchain-core<0.2.0,>=0.1.46->langchain-openai) (1.26.6)\n",
      "Collecting regex>=2022.1.18\n",
      "  Downloading http://mirrors.aliyun.com/pypi/packages/39/3f/5fa3298204712d39e2c4e21bc7c45754e6b0386163da9157997ae47c2333/regex-2024.4.28-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (777 kB)\n",
      "\u001b[K     |████████████████████████████████| 777 kB 1.1 MB/s eta 0:00:01\n",
      "\u001b[?25hInstalling collected packages: regex, tiktoken, langchain-openai\n",
      "Successfully installed langchain-openai-0.1.6 regex-2024.4.28 tiktoken-0.6.0\n",
      "\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "!pip install langchain-openai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "b83dcfa8-e57e-448f-afe4-d264058f508d",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain_core.messages import HumanMessage, SystemMessage"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "58d31be6-97c9-43fd-a31a-ba4c43a505f1",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "messages = [SystemMessage(content=\"你是一位乐于助人的智能小助手\"),\n",
    "            HumanMessage(content=\"你好，请你介绍一下你自己\"),]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "c5cf93bd-11e5-4353-b294-6052553f5edd",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content='你是一位乐于助人的智能小助手'), HumanMessage(content='你好，请你介绍一下你自己')]"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "3c3e47fb-df7d-4db3-b0a8-34d0072c5fe2",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "chat = ChatOpenAI(model_name=\"deepseek-chat\",api_key=openai.api_key ,base_url=openai.api_base)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3c4f6e58-3e13-44d3-9b95-88b3ea66fefe",
   "metadata": {},
   "source": [
    "#### invoke"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "7cf91194-4d3e-4c57-ae3e-6faf70c5c8b0",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='你好呀！😊 我是 **DeepSeek Chat**，由深度求索公司（DeepSeek）研发的智能 AI 助手。我可以帮助你解答各种问题，无论是学习、工作，还是日常生活中的疑问，我都会尽力提供准确、有用的信息！  \\n\\n### 🌟 **关于我的一些特点**：\\n1. **知识丰富** 📚：我的知识截止到 **2024 年 7 月**，可以帮你解答科技、历史、数学、编程、生活百科等各种问题。  \\n2. **超长上下文** 🧠：支持 **128K 上下文**，能记住更长的对话内容，适合处理复杂问题或长文档分析。  \\n3. **文件阅读** 📂：可以上传 **PDF、Word、Excel、PPT、TXT** 等文件，帮你提取和整理关键信息。  \\n4. **完全免费** 🎉：目前没有任何收费计划，你可以放心使用！  \\n5. **持续进化** 🚀：我的团队一直在优化我，让我变得更聪明、更贴心！  \\n\\n### 💡 **你可以问我**：\\n- **学习** 📖：解题思路、论文润色、语言翻译……  \\n- **工作** 💼：写邮件、做PPT、数据分析、编程代码……  \\n- **生活** 🏡：旅行攻略、美食推荐、健康建议……  \\n- **娱乐** 🎮：电影推荐、游戏攻略、小说创作……  \\n\\n无论是严肃的问题，还是闲聊放松，我都乐意陪你聊聊！😃 你今天有什么想了解的呢？', response_metadata={'token_usage': {'completion_tokens': 325, 'prompt_tokens': 16, 'total_tokens': 341, 'completion_tokens_details': None, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}, 'prompt_cache_hit_tokens': 0, 'prompt_cache_miss_tokens': 16}, 'model_name': 'deepseek-chat', 'system_fingerprint': 'fp_8802369eaa_prod0425fp8', 'finish_reason': 'stop', 'logprobs': None}, id='run-1bcb9548-c249-46cd-b7df-bbd279a9eaa7-0')"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat.invoke(messages)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "ec76feb8-7547-49a0-8eb7-6acd97651b57",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'你好！我是一个智能助手，可以回答各种问题，提供信息和帮助解决问题。我可以谈论各种话题，包括历史、科学、技术、健康、娱乐等等。有什么我可以帮到你的吗？'"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat.invoke(messages).content"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fc2818d8-8b0b-4480-8cee-c8ae53a8e69f",
   "metadata": {},
   "source": [
    "#### stream"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "8bee6bd0-a3aa-48e8-a7d8-3246a286571d",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "你好呀！😊 我是 **DeepSeek Chat**，由深度求索公司（DeepSeek）打造的智能 AI 助手。我可以帮助你解答各种问题，无论是学习、工作，还是日常生活，我都会尽力提供准确、有用的信息！  \n",
      "\n",
      "### 🌟 **我的特点**  \n",
      "✅ **知识丰富**：我的知识截止到 2024 年 7 月，可以回答科技、历史、数学、编程、健康、娱乐等各种问题。  \n",
      "✅ **超长上下文**：支持 **128K** 上下文，能记住更长的对话内容，适合处理复杂问题或长文档分析。  \n",
      "✅ **文件阅读**：可以上传 **PDF、Word、Excel、PPT、TXT** 等文件，帮你提取关键信息或总结内容。  \n",
      "✅ **完全免费**：目前没有任何收费计划，放心使用！  \n",
      "✅ **中文优化**：对中文理解和生成特别优化，交流更自然流畅。  \n",
      "\n",
      "### 🛠 **我能帮你做什么？**  \n",
      "📖 **学习辅导**：解题思路、论文润色、语言学习……  \n",
      "💼 **工作效率**：写邮件、做PPT、数据分析、代码调试……  \n",
      "🎉 **生活娱乐**：推荐电影、旅行攻略、美食食谱、聊天陪伴……  \n",
      "\n",
      "有什么我可以帮你的吗？随时问我哦！😃"
     ]
    }
   ],
   "source": [
    "for chunk in chat.stream(messages):\n",
    "    print(chunk.content, end=\"\", flush=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6f913448-cc24-45bf-b0e7-b7f79563a6df",
   "metadata": {},
   "source": [
    "#### batch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "ef80ec30-3885-4262-8aa2-655f09eae63c",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "messages1 = [SystemMessage(content=\"你是一位乐于助人的智能小助手\"),\n",
    " HumanMessage(content=\"请帮我介绍一下什么是机器学习\"),]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "1154c3b3-8405-4b67-a0f6-6432e9e7484f",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "messages2 = [SystemMessage(content=\"你是一位乐于助人的智能小助手\"),\n",
    " HumanMessage(content=\"请帮我介绍一下什么是AIGC\"),]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "30c6cb82-b79b-4059-a5d0-a78974f2a765",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "messages3 = [SystemMessage(content=\"你是一位乐于助人的智能小助手\"),\n",
    " HumanMessage(content=\"请帮我介绍一下什么是大模型技术\"),]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "991062c2-721d-4fa4-8d51-7f01357c277a",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[AIMessage(content='机器学习是一种人工智能的分支领域，其目标是让计算机系统通过学习数据和模式识别，从而能够自动进行决策和预测。机器学习利用统计学和算法来让计算机系统从数据中学习，改进和发展自身的性能，而无需明确地进行编程。通过机器学习，计算机系统可以通过大量的数据训练和优化自己的模型，以实现各种任务，如图像识别、语音识别、自然语言处理、推荐系统等。机器学习的应用范围非常广泛，正在逐渐改变我们的日常生活和工作方式。', response_metadata={'token_usage': {'completion_tokens': 208, 'prompt_tokens': 47, 'total_tokens': 255}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_2f57f81c11', 'finish_reason': 'stop', 'logprobs': None}, id='run-e225be84-dbe8-47ec-9207-0bd5113721fa-0'),\n",
       " AIMessage(content='AIGC是Artificial Intelligence Graduate Certificate的缩写，即人工智能研究生证书。AIGC是一种专业课程，旨在培养学生在人工智能领域的技能和知识。这个证书课程通常由大学或学院提供，需要学生完成一系列的课程和项目。课程涵盖了人工智能的核心技术，包括机器学习、自然语言处理、图像识别等。获得AIGC证书的学生可以在人工智能领域的职业中取得更好的就业机会。', response_metadata={'token_usage': {'completion_tokens': 179, 'prompt_tokens': 47, 'total_tokens': 226}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-ce431921-e742-498d-82e9-2b22a98bba95-0'),\n",
       " AIMessage(content='大模型技术是指利用大规模的数据和计算资源来训练和部署复杂、庞大的机器学习模型的技术。随着数据量的不断增加和计算能力的提升，大模型技术在人工智能领域变得越来越重要。\\n\\n大模型技术通常涉及使用大规模的数据集来训练深度神经网络等复杂模型，以提高模型的准确性和泛化能力。同时，大模型技术还需要大量的计算资源来训练这些模型，包括高性能的计算机、GPU加速器等硬件设备。\\n\\n大模型技术在各种领域都有广泛的应用，包括自然语言处理、计算机视觉、语音识别等。通过大模型技术，研究人员和工程师们可以构建更加复杂和强大的机器学习模型，从而实现更多领域的创新和应用。', response_metadata={'token_usage': {'completion_tokens': 303, 'prompt_tokens': 48, 'total_tokens': 351}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_2f57f81c11', 'finish_reason': 'stop', 'logprobs': None}, id='run-1d8f25dc-010f-49b3-a51b-86d7549d48ed-0')]"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "reponse = chat.batch([messages1,\n",
    "            messages2,\n",
    "            messages3,])\n",
    "\n",
    "reponse"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "caa53907-7a62-4ac2-805a-7d4f3401c685",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "机器学习是一种人工智能的分支领域，其目标是让计算机系统通过学习数据和模式识别，从而能够自动进行决策和预测。机器学习利用统计学和算法来让计算机系统从数据中学习，改进和发展自身的性能，而无需明确地进行编程。通过机器学习，计算机系统可以通过大量的数据训练和优化自己的模型，以实现各种任务，如图像识别、语音识别、自然语言处理、推荐系统等。机器学习的应用范围非常广泛，正在逐渐改变我们的日常生活和工作方式。 \n",
      "---\n",
      "\n",
      "AIGC是Artificial Intelligence Graduate Certificate的缩写，即人工智能研究生证书。AIGC是一种专业课程，旨在培养学生在人工智能领域的技能和知识。这个证书课程通常由大学或学院提供，需要学生完成一系列的课程和项目。课程涵盖了人工智能的核心技术，包括机器学习、自然语言处理、图像识别等。获得AIGC证书的学生可以在人工智能领域的职业中取得更好的就业机会。 \n",
      "---\n",
      "\n",
      "大模型技术是指利用大规模的数据和计算资源来训练和部署复杂、庞大的机器学习模型的技术。随着数据量的不断增加和计算能力的提升，大模型技术在人工智能领域变得越来越重要。\n",
      "\n",
      "大模型技术通常涉及使用大规模的数据集来训练深度神经网络等复杂模型，以提高模型的准确性和泛化能力。同时，大模型技术还需要大量的计算资源来训练这些模型，包括高性能的计算机、GPU加速器等硬件设备。\n",
      "\n",
      "大模型技术在各种领域都有广泛的应用，包括自然语言处理、计算机视觉、语音识别等。通过大模型技术，研究人员和工程师们可以构建更加复杂和强大的机器学习模型，从而实现更多领域的创新和应用。 \n",
      "---\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# 格式化输出\n",
    "\n",
    "# 使用列表生成式打印每一个消息的内容\n",
    "contents = [msg.content for msg in reponse]\n",
    "\n",
    "# 打印出内容\n",
    "for content in contents:\n",
    "    print(content, \"\\n---\\n\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f6eb2187-187c-4049-a317-c6701dafe9e4",
   "metadata": {},
   "source": [
    "#### 异步"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c771ba4d-e02d-4fee-b148-7704309c8310",
   "metadata": {},
   "source": [
    "&emsp;&emsp;`llm.invoke(...)`本质上是一个同步调用。在这种情况下，程序会在调用返回结果之前停止执行任何后续代码。这意味着如果`invoke`操作耗时较长，它会导致程序暂时挂起，直到操作完成。我们可以通过这样一个测试代码来直观的理解同步调用："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "64e7361d-d646-472c-ab5c-6be9cbe97fc0",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "开始调用模型...\n",
      "模型调用完成。\n",
      "执行其他任务 1\n",
      "执行其他任务 2\n",
      "执行其他任务 3\n",
      "执行其他任务 4\n",
      "执行其他任务 5\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'总共耗时：10.009889841079712秒'"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import time\n",
    "\n",
    "def call_model():\n",
    "    # 模拟同步API调用\n",
    "    print(\"开始调用模型...\")\n",
    "    time.sleep(5)  # 模拟调用等待\n",
    "    print(\"模型调用完成。\")\n",
    "\n",
    "def perform_other_tasks():\n",
    "    # 模拟执行其他任务\n",
    "    for i in range(5):\n",
    "        print(f\"执行其他任务 {i + 1}\")\n",
    "        time.sleep(1)\n",
    "\n",
    "def main():\n",
    "    start_time = time.time()\n",
    "    call_model()\n",
    "    perform_other_tasks()\n",
    "    end_time = time.time()\n",
    "    total_time = end_time - start_time\n",
    "    return f\"总共耗时：{total_time}秒\"\n",
    "\n",
    "# 运行同步任务并打印完成时间\n",
    "main_time = main()\n",
    "main_time"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ff110e8d-c17f-458f-80b9-2f7ae333c984",
   "metadata": {},
   "source": [
    "&emsp;&emsp;这段同步调用的程序先模拟了一个耗时5秒的模型调用，随后执行了五个其他任务，每个任务耗时1秒。实际的执行时间为约10.00秒。这体现了同步执行的特点：每个操作依次执行，直到当前操作完成后才开始下一个操作，从而导致总的执行时间是各个操作时间的总和。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "92ed0760-a2e0-4d2b-978b-ca5e54d8d715",
   "metadata": {},
   "source": [
    "&emsp;&emsp;而异步调用，允许程序在等待某些操作完成时继续执行其他任务，而不是阻塞等待。这在处理I/O操作（如网络请求、文件读写等）时特别有用，可以显著提高程序的效率和响应性。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "2e45600d-cd4e-4740-9641-3880d043c6cc",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "异步调用完成\n",
      "其他任务完成\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'总共耗时：5.006200075149536秒'"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import asyncio\n",
    "import time\n",
    "\n",
    "async def async_call(llm):\n",
    "    await asyncio.sleep(5)  # 模拟异步操作\n",
    "    print(\"异步调用完成\")\n",
    "\n",
    "async def perform_other_tasks():\n",
    "    await asyncio.sleep(5)  # 模拟异步操作\n",
    "    print(\"其他任务完成\")\n",
    "\n",
    "async def run_async_tasks():\n",
    "    start_time = time.time()\n",
    "    await asyncio.gather(\n",
    "        async_call(None),  # 示例调用，替换None为模拟的LLM对象\n",
    "        perform_other_tasks()\n",
    "    )\n",
    "    end_time = time.time()\n",
    "    return f\"总共耗时：{end_time - start_time}秒\"\n",
    "\n",
    "# 运行异步任务并打印完成时间\n",
    "await run_async_tasks()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eb391308-3b20-4ec5-8adf-95aa8d58f7b9",
   "metadata": {},
   "source": [
    "&emsp;&emsp;使用`asyncio.gather()`并行执行时，理想情况下，因为两个任务几乎同时开始，它们的执行时间将重叠。如果两个任务的执行时间相同（这里都是3秒），那么总执行时间应该接近单个任务的执行时间（3秒左右），而不是两者时间之和。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fdfa7bec-63ca-4892-92ef-65153c7ee47a",
   "metadata": {},
   "source": [
    "#### 异步调用"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "08cd20e4-3769-4c0d-b275-11d8f72178f0",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "messages1 = [SystemMessage(content=\"你是一位乐于助人的智能小助手\"),\n",
    " HumanMessage(content=\"请帮我介绍一下什么是机器学习\"),]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "cb9de918-2154-49cd-b1eb-e7b535a5798b",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "reponse = await chat.ainvoke(messages1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "689b55c9-798b-40e4-aafe-fcee4735758b",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'当然可以！机器学习是人工智能的一个子领域，它致力于研究如何让计算机系统通过数据学习并改进性能，而无需进行显式编程。简而言之，机器学习是一种让计算机系统从数据中学习和进行预测的技术。通过训练模型，计算机可以自动识别模式、做出决策和预测结果，从而实现各种任务，如图像识别、语音识别、自然语言处理等。机器学习在各个领域都有广泛的应用，是人工智能发展的核心技术之一。'"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "reponse.content"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7011cd18-2d1f-4eee-bf6e-4278db760a70",
   "metadata": {},
   "source": [
    "&emsp;&emsp;通过上述描述，我们展示了在LangChain中使用LLMs类模型和Chat Model类模型的不同方法，其核心区别在于输入Prompt的格式。除此之外其他的工作可\n",
    "以直接利用统一的抽象接口，实现与模型交互的快速过程。而针对不同的模型，LangChain也提供个对应的接入方法，\n",
    "其相关说明文档地址：https://python.langchain.com/docs/integrations/chat/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "767c02ea-af12-4b9a-a489-2086404e624f",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain_community.chat_models import ChatBaichuan,ChatOpenAI\n",
    "from langchain_core.messages import HumanMessage"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "cc2780a2-2723-41aa-8ecd-32cfcc979e7c",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/ubuntu/anaconda3/envs/peft/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:119: LangChainDeprecationWarning: The class `ChatOpenAI` was deprecated in LangChain 0.0.10 and will be removed in 0.3.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run `pip install -U langchain-openai` and import as `from langchain_openai import ChatOpenAI`.\n",
      "  warn_deprecated(\n"
     ]
    }
   ],
   "source": [
    "# chat = ChatBaichuan(\n",
    "#     # 这里替换成个人的有效 API KEY\n",
    "#     baichuan_api_key=\"sk-xxx\",\n",
    "#     streaming=True,\n",
    "# )\n",
    "chat = ChatOpenAI(\n",
    "    # 这里替换成个人的有效 API KEY\n",
    "    openai_api_key=\"hk-qct9431000054586fa8d4984b851b9b1a8c2c9ebf336656b\",\n",
    "    base_url=\"https://api.openai-hk.com/v1\",\n",
    "    streaming=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "ea5bd8fd-8f77-478f-ba3c-265e5ca6f8d1",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/ubuntu/anaconda3/envs/peft/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:119: LangChainDeprecationWarning: The method `BaseChatModel.__call__` was deprecated in langchain-core 0.1.7 and will be removed in 0.3.0. Use invoke instead.\n",
      "  warn_deprecated(\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "AIMessage(content='你好！我是一个由OpenAI开发的人工智能语言模型，旨在理解和生成自然语言。我可以帮助回答问题、提供信息、撰写文章、进行对话等。我的知识涵盖了广泛的主题，包括科学、历史、文化、技术等。如果你有任何问题或需要帮助，随时可以问我！', response_metadata={'finish_reason': 'stop'}, id='run-641b4445-9bc0-472c-ba9c-222c19d44b9f-0')"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "response = chat([HumanMessage(content=\"请介绍一下你自己\")])\n",
    "response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "f04bd0bd-8788-4f33-a704-81e814278e61",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'你好！我是一个由OpenAI开发的人工智能语言模型，旨在理解和生成自然语言。我可以帮助回答问题、提供信息、撰写文章、进行对话等。我的知识涵盖了广泛的主题，包括科学、历史、文化、技术等。如果你有任何问题或需要帮助，随时可以问我！'"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "response.content"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eb1fcd0c-dd73-4fe7-bc59-1ef2370b71cb",
   "metadata": {},
   "source": [
    "## 3.Model I/O之Prompt Template"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6b27b7da-b51e-49db-8a35-f45ffb84b94e",
   "metadata": {},
   "source": [
    "&emsp;&emsp;提示工程（Prompt Engineering）大家应该比较熟悉，这个概念是指在与大语言模型（LLMs），如GPT-3、Qwen等模型进行交互时，精心设计输入文本（即提示）的过程，以获得更精准、相关或有创造性的输出。在我们第一级学习计划中通过采用Few-Shot、Chain of Thought (CoT)等高级提示技巧，可以显著提高大模型在推理任务上的表现。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "db70b5fb-48c2-475b-b61e-8692cb71b6ca",
   "metadata": {},
   "source": [
    "### 3.1 使用str.format语言构建模版"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5ff3749e-c19d-492c-9bf9-9d7689c64db3",
   "metadata": {},
   "source": [
    "&emsp;&emsp;在LangChain的默认设置下， `PromptTemplate` 使用 Python 的 `str.format()` 方法进行模板化。\n",
    "&emsp;&emsp;Python的`str.format()`方法是一种字符串格式化的手段，允许我们在字符串中插入变量。使用这种方法，可以创建包含占位符的字符串模板，\n",
    "占位符由花括号{}标识。调用format()方法时，可以传入一个或多个参数，这些参数将被顺序替换进占位符中。str.format()提供了灵活的方式来构造字符串，\n",
    "支持多种格式化选项，包括数字格式化、对齐、填充、宽度设置等。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "25affe93-6cc9-4254-b57f-810a7b411a34",
   "metadata": {},
   "source": [
    "- **基本用法**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "944c90e5-3c02-47f9-b3e8-4fb14d4e41e8",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Hello, Alice!\n"
     ]
    }
   ],
   "source": [
    "# 简单示例，直接替换\n",
    "greeting = \"Hello, {}!\".format(\"Alice\")\n",
    "print(greeting)\n",
    "# 输出: Hello, Alice!"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "baa532f3-0471-4cd0-8e91-3553a3405cfb",
   "metadata": {
    "tags": []
   },
   "source": [
    "- **带有位置参数的用法**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "id": "fba4a257-3405-45c0-ade0-f23f1d80d242",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Name: Bob, Age: 30\n"
     ]
    }
   ],
   "source": [
    "# 使用位置参数\n",
    "info = \"Name: {0}, Age: {1}\".format(\"Bob\", 30)\n",
    "print(info)\n",
    "# 输出: Name: Bob, Age: 30"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6cfbc36a-2828-4ae4-b533-5bc6b5b624a0",
   "metadata": {},
   "source": [
    "- **带有关键字参数的用法**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "id": "9f8c3076-e147-4efd-9274-de41aa5a6a62",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Name: Charlie, Age: 25\n"
     ]
    }
   ],
   "source": [
    "# 使用关键字参数\n",
    "info = \"Name: {name}, Age: {age}\".format(name=\"Charlie\", age=25)\n",
    "print(info)\n",
    "# 输出: Name: Charlie, Age: 25"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "29d7e318-673c-4932-822e-55c468742f47",
   "metadata": {},
   "source": [
    "- **使用字典解包的方式：**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "e7962b7c-36ee-4172-9f43-6b1be6d1603f",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Name: David, Age: 42\n"
     ]
    }
   ],
   "source": [
    "# 使用字典解包\n",
    "person = {\"name\": \"David\", \"age\": 42}\n",
    "info = \"Name: {name}, Age: {age}\".format(**person)\n",
    "print(info)\n",
    "# 输出: Name: David, Age: 40"
   ]
  },
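  {
   "cell_type": "markdown",
   "id": "3f1a2b4c-0d5e-4f6a-8b7c-9d0e1f2a3b4c",
   "metadata": {},
   "source": [
    "- **格式化选项的用法**\n",
    "\n",
    "前文提到`str.format()`还支持对齐、填充、宽度和数字精度等格式化选项，下面补充一个简要示例："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7c8d9e0f-1a2b-4c3d-8e4f-5a6b7c8d9e0f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 宽度与对齐：< 左对齐、> 右对齐、^ 居中\n",
    "print(\"|{:<10}|{:>10}|{:^10}|\".format(\"left\", \"right\", \"center\"))\n",
    "# 数字精度：保留两位小数\n",
    "print(\"{:.2f}\".format(3.14159))\n",
    "# 宽度为6，不足位用0填充\n",
    "print(\"{:06d}\".format(42))"
   ]
  },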
  {
   "cell_type": "markdown",
   "id": "9c250d3d-b183-46ca-b9aa-28548679f698",
   "metadata": {
    "tags": []
   },
   "source": [
    "&emsp;&emsp;在LangChain中，基本采用了Python的原生`str.format()`方法对输入数据进行格式化，这样在模型接收输入前，可以根据需要对数据进行预处理和结构\n",
    "化，以此来引导大模型进行更准确的推理。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "2b7c63d1-7fd4-43c7-bff3-b8848a09bac2",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'请给我一个关于量子力学的详细解释。'"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "\n",
    "prompt_template = PromptTemplate.from_template(\n",
    "    \"请给我一个关于{topic}的{type}解释。\"\n",
    ")\n",
    "\n",
    "prompt= prompt_template.format(type=\"详细\", topic=\"量子力学\")\n",
    "\n",
    "prompt"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d0cc4d73-c3f0-48a5-82de-dbf19e574dc9",
   "metadata": {},
   "source": [
    "&emsp;&emsp;如上所示，可以使用`PromptTemplate`的`from_template`方法创建一个提示模板实例，这个模板包含了两个占位符：{topic} 和 {type}，这些占位\n",
    "符在实际调用时可以被实际的值替换。"
   ]
  },
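  {
   "cell_type": "markdown",
   "id": "a1b2c3d4-5e6f-4a7b-8c9d-0e1f2a3b4c5d",
   "metadata": {},
   "source": [
    "&emsp;&emsp;除一次性填充所有占位符外，`PromptTemplate`还提供了`partial`方法，可以先固定一部分变量、得到一个新的模板（以下沿用上文定义的`prompt_template`）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2c3d4e5-6f7a-4b8c-9d0e-1f2a3b4c5d6e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 先固定type变量，调用时只需再提供topic\n",
    "partial_template = prompt_template.partial(type=\"简要\")\n",
    "print(partial_template.format(topic=\"相对论\"))"
   ]
  },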
  {
   "cell_type": "markdown",
   "id": "e3521a65-288d-4bb0-8c58-9d73d6fdd318",
   "metadata": {},
   "source": [
    "- **调用Chat Model**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "f7471a96-983a-468b-88c6-598180b95725",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "chat_template = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"你是一个有帮助的AI机器人，你的名字是{name}。\"),\n",
    "        (\"human\", \"你好，最近怎么样？\"),\n",
    "        (\"ai\", \"我很好，谢谢！\"),\n",
    "        (\"human\", \"{user_input}\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "messages = chat_template.format_messages(name=\"小明\", user_input=\"你叫什么名字？\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "c642ee43-2bea-4f78-aedb-deb65b8765e1",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content='你是一个有帮助的AI机器人，你的名字是小明。'),\n",
       " HumanMessage(content='你好，最近怎么样？'),\n",
       " AIMessage(content='我很好，谢谢！'),\n",
       " HumanMessage(content='你叫什么名字？')]"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "03875188-a35f-47c7-92fd-834a6065e00b",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatPromptTemplate(input_variables=['name', 'user_input'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=['name'], template='你是一个有帮助的AI机器人，你的名字是{name}。')), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='你好，最近怎么样？')), AIMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='我很好，谢谢！')), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['user_input'], template='{user_input}'))])"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat_template"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "beb9c9a7-be1c-4019-9e20-624ff51c57ef",
   "metadata": {},
   "source": [
    "&emsp;&emsp;从输出上看，其构造函数在实例化prompt_template时，主要由两个关键参数进行指定：\n",
    "\n",
    "- input_variables：这是一个列表，包含模板中需要动态填充的变量名。这些变量名在模板字符串中以花括号（如{name}）标记。通过指定这些变量，可以在后续过程\n",
    "中动态地替换这些占位符。\n",
    "\n",
    "- template：这是定义具体提示文本的模板字符串。它可以包含静态文本和input_variables列表中指定的变量占位符。当调用format方法时，这些占位符会被实际的变\n",
    "量值替换，生成最终的提示文本。"
   ]
  },
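  {
   "cell_type": "markdown",
   "id": "c3d4e5f6-7a8b-4c9d-0e1f-2a3b4c5d6e7f",
   "metadata": {},
   "source": [
    "&emsp;&emsp;基于这两个参数，也可以不经过`from_template`，而是直接通过构造函数显式地创建模板："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e5f6a7-8b9c-4d0e-1f2a-3b4c5d6e7f8a",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "\n",
    "# 显式指定input_variables与template来构造模板，与from_template方式等价\n",
    "explicit_template = PromptTemplate(\n",
    "    input_variables=[\"topic\", \"type\"],\n",
    "    template=\"请给我一个关于{topic}的{type}解释。\",\n",
    ")\n",
    "\n",
    "print(explicit_template.format(type=\"简要\", topic=\"黑洞\"))"
   ]
  },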
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "c03e6f72-840a-49b2-a507-2cba4ccef13a",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "我叫小明，很高兴认识你！有什么我可以帮助你的吗？\n"
     ]
    }
   ],
   "source": [
    "result = chat.invoke(messages)\n",
    "print(result.content)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "d3cfba43-1e5a-434b-8b54-c51518fdc491",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "messages = chat_template.format_messages(name=\"张三\", user_input=\"你要去哪里玩？\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "4ca7700d-8f79-474f-96df-ddf38684d283",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "作为一个AI，我没有身体和意识，所以无法去任何地方玩。不过，我可以帮助你规划旅行或推荐一些有趣的地方！你有没有想去的地方？\n"
     ]
    }
   ],
   "source": [
    "result = chat.invoke(messages)\n",
    "print(result.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2365905d-b203-4764-996c-74dd52d7d5ee",
   "metadata": {},
   "source": [
    "### 3.2 构造Few-Shot模版"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "75787ec0-381a-4fe2-8887-7efff731e32a",
   "metadata": {},
   "source": [
    "&emsp;&emsp;在LangChain中，很多的功能抽象、链路抽象本质上都是在对大模型的“涌现能力”能够应用落地的一种具体实现方法，而其推理的不稳定，在不修改模型本身参数（微调）的情况下，模型涌现能力极度依赖对模型的提示过程，即对同样一个模型，不同的提示方法将获得质量完全不同的结果。最为简单的提示工程的方法就是通过输入一些类似问题和问题答案，让模型参考学习，并在同一个prompt的末尾提出新的问题，依次提升模型的推理能力。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "id": "eb5d4bb4-75a3-4f09-8c75-16a13eea7558",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Q：“罗杰有五个网球，他又买了两盒网球，每盒有3个网球，请问他现在总共有多少个网球？”                         A：“罗杰一开始有五个网球，又购买了两盒网球，每盒3个，共购买了6个网球，因此现在总共由5+6=11个网球。因此答案是11。”                         Q：“食堂总共有23个苹果，如果他们用掉20个苹果，然后又买了6个苹果，请问现在食堂总共有多少个苹果？”                         A：“食堂最初有23个苹果，用掉20个，然后又买了6个，总共有23-20+6=9个苹果，答案是9。”                         Q：“杂耍者可以杂耍16个球。其中一半的球是高尔夫球，其中一半的高尔夫球是蓝色的。请问总共有多少个蓝色高尔夫球？”                         A：“总共有16个球，其中一半是高尔夫球，也就是8个，其中一半是蓝色的，也就是4个，答案是4个。”                         Q：“艾米需要4分钟才能爬到滑梯顶部，她花了1分钟才滑下来，水滑梯将在15分钟后关闭，请问在关闭之前她能滑多少次？”                         A：'"
      ]
     },
     "execution_count": 53,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "prompt_Few_shot_CoT4 = 'Q：“罗杰有五个网球，他又买了两盒网球，每盒有3个网球，请问他现在总共有多少个网球？” \\\n",
    "                        A：“罗杰一开始有五个网球，又购买了两盒网球，每盒3个，共购买了6个网球，因此现在总共由5+6=11个网球。因此答案是11。” \\\n",
    "                        Q：“食堂总共有23个苹果，如果他们用掉20个苹果，然后又买了6个苹果，请问现在食堂总共有多少个苹果？” \\\n",
    "                        A：“食堂最初有23个苹果，用掉20个，然后又买了6个，总共有23-20+6=9个苹果，答案是9。” \\\n",
    "                        Q：“杂耍者可以杂耍16个球。其中一半的球是高尔夫球，其中一半的高尔夫球是蓝色的。请问总共有多少个蓝色高尔夫球？” \\\n",
    "                        A：“总共有16个球，其中一半是高尔夫球，也就是8个，其中一半是蓝色的，也就是4个，答案是4个。” \\\n",
    "                        Q：“艾米需要4分钟才能爬到滑梯顶部，她花了1分钟才滑下来，水滑梯将在15分钟后关闭，请问在关闭之前她能滑多少次？” \\\n",
    "                        A：'\n",
    "\n",
    "prompt_Few_shot_CoT4"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "id": "1d9aefa4-d95f-4a69-bc37-59991e1d94e8",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain_core.messages import HumanMessage, SystemMessage\n",
    "messages = [SystemMessage(content=\"你是一个擅长数学推理的专家\"),\n",
    " HumanMessage(content=\"艾米需要4分钟才能爬到滑梯顶部，她花了1分钟才滑下来，水滑梯将在15分钟后关闭，请问在关闭之前她能滑多少次？\"),]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "id": "577d7d8e-8ccf-4f40-81fa-3dc510b7620c",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'首先我们来分析一下艾米的滑梯运动：\\n\\n1. 艾米需要4分钟才能爬到滑梯顶部。\\n2. 她每次滑下来需要1分钟。\\n3. 水滑梯将在15分钟后关闭。\\n\\n因此，在水滑梯关闭之前，艾米能够进行的滑行次数为：  \\n\\n\\\\[ \\\\frac{15 \\\\text{分钟}}{4 \\\\text{分钟} (\\\\text{上爬}) + 1 \\\\text{分钟} (\\\\text{下滑})} = \\\\frac{15}{5} = 3 \\\\text{次}\\\\]\\n\\n所以，在水滑梯关闭之前，艾米能够滑3次。'"
      ]
     },
     "execution_count": 55,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "resonse = chat.invoke(messages)\n",
    "resonse.content"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e7e189cd-d018-47f3-ba08-b083d9b864fa",
   "metadata": {},
   "source": [
    "实际上，在使用GPT-3.5时，我们可以观察到，即使不采用Few-Shot提示，模型也能以很高的概率正确回答问题，这归功于模型本身已经非常强大的能力。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "aa9e3378-6ee6-4625-8c18-194e63fb523a",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain.prompts import (\n",
    "    ChatPromptTemplate,\n",
    "    FewShotChatMessagePromptTemplate,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "795f1054-530a-4baa-88fc-0e26f3c64d4f",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "examples = [\n",
    "    {\"input\": \"罗杰有五个网球，他又买了两盒网球，每盒有3个网球，请问他现在总共有多少个网球？\", \n",
    "     \"output\": \"罗杰一开始有五个网球，又购买了两盒网球，每盒3个，共购买了6个网球，因此现在总共由5+6=11个网球。因此答案是11。\"},\n",
    "    \n",
    "    {\"input\": \"食堂总共有23个苹果，如果他们用掉20个苹果，然后又买了6个苹果，请问现在食堂总共有多少个苹果？\", \n",
    "     \"output\": \"食堂最初有23个苹果，用掉20个，然后又买了6个，总共有23-20+6=9个苹果，答案是9。\"},\n",
    "    \n",
    "    {\"input\": \"杂耍者可以杂耍16个球。其中一半的球是高尔夫球，其中一半的高尔夫球是蓝色的。请问总共有多少个蓝色高尔夫球？\", \n",
    "     \"output\": \"总共有16个球，其中一半是高尔夫球，也就是8个，其中一半是蓝色的，也就是4个，答案是4个。\"},\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "bdb2cfc0-2460-4718-a59c-e66549d9b741",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Human: 罗杰有五个网球，他又买了两盒网球，每盒有3个网球，请问他现在总共有多少个网球？\n",
      "AI: 罗杰一开始有五个网球，又购买了两盒网球，每盒3个，共购买了6个网球，因此现在总共由5+6=11个网球。因此答案是11。\n",
      "Human: 食堂总共有23个苹果，如果他们用掉20个苹果，然后又买了6个苹果，请问现在食堂总共有多少个苹果？\n",
      "AI: 食堂最初有23个苹果，用掉20个，然后又买了6个，总共有23-20+6=9个苹果，答案是9。\n",
      "Human: 杂耍者可以杂耍16个球。其中一半的球是高尔夫球，其中一半的高尔夫球是蓝色的。请问总共有多少个蓝色高尔夫球？\n",
      "AI: 总共有16个球，其中一半是高尔夫球，也就是8个，其中一半是蓝色的，也就是4个，答案是4个。\n"
     ]
    }
   ],
   "source": [
    "# This is a prompt template used to format each individual example.\n",
    "example_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"human\", \"{input}\"),\n",
    "        (\"ai\", \"{output}\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "few_shot_prompt = FewShotChatMessagePromptTemplate(\n",
    "    example_prompt=example_prompt,\n",
    "    examples=examples,\n",
    ")\n",
    "\n",
    "print(few_shot_prompt.format())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "b2fa456e-7fe3-424e-a170-4218853def8b",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "final_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        few_shot_prompt,\n",
    "        (\"human\", \"{input}\"),\n",
    "    ]\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "aed82582-2059-4d88-b82d-6745d9f34910",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatPromptTemplate(input_variables=['input'], messages=[FewShotChatMessagePromptTemplate(examples=[{'input': '罗杰有五个网球，他又买了两盒网球，每盒有3个网球，请问他现在总共有多少个网球？', 'output': '罗杰一开始有五个网球，又购买了两盒网球，每盒3个，共购买了6个网球，因此现在总共由5+6=11个网球。因此答案是11。'}, {'input': '食堂总共有23个苹果，如果他们用掉20个苹果，然后又买了6个苹果，请问现在食堂总共有多少个苹果？', 'output': '食堂最初有23个苹果，用掉20个，然后又买了6个，总共有23-20+6=9个苹果，答案是9。'}, {'input': '杂耍者可以杂耍16个球。其中一半的球是高尔夫球，其中一半的高尔夫球是蓝色的。请问总共有多少个蓝色高尔夫球？', 'output': '总共有16个球，其中一半是高尔夫球，也就是8个，其中一半是蓝色的，也就是4个，答案是4个。'}], example_prompt=ChatPromptTemplate(input_variables=['input', 'output'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')), AIMessagePromptTemplate(prompt=PromptTemplate(input_variables=['output'], template='{output}'))])), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}'))])"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "final_prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "6c46857a-cc96-4e40-9e8f-fa68fcce1b12",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'艾米爬到滑梯顶部需要4分钟，滑下来需要1分钟，所以她完成一次滑梯的总时间是4分钟 + 1分钟 = 5分钟。\\n\\n水滑梯将在15分钟后关闭，因此在这段时间内，她可以滑多少次呢？\\n\\n15分钟 ÷ 5分钟/次 = 3次\\n\\n所以，艾米在水滑梯关闭之前可以滑3次。'"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain = final_prompt | chat\n",
    "\n",
    "response = chain.invoke({\"input\": \"艾米需要4分钟才能爬到滑梯顶部，她花了1分钟才滑下来，水滑梯将在15分钟后关闭，请问在关闭之前她能滑多少次？\"})\n",
    "response.content"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "deb4fe3e-7486-4395-ae17-6c61920f72e1",
   "metadata": {},
   "source": [
    "### 3.3 示例选择器"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "f3f340d2-c42a-4aac-84af-fa7b2222c059",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain.prompts.few_shot import FewShotChatMessagePromptTemplate\n",
    "from langchain.prompts import ChatPromptTemplate\n",
    "\n",
    "examples = [\n",
    "    # 数学推理\n",
    "    {\n",
    "        \"question\": \"小明的妈妈给了他10块钱去买文具，如果一支笔3块钱，小明最多能买几支笔？\",\n",
    "        \"answer\": \"小明有10块钱，每支笔3块钱，所以他最多能买3支笔，因为3*3=9，剩下1块钱不够再买一支笔。因此答案是3支。\"\n",
    "    },\n",
    "    {\n",
    "        \"question\": \"一个篮球队有12名球员，如果教练想分成两个小组进行训练，每组需要有多少人？\",\n",
    "        \"answer\": \"篮球队总共有12名球员，分成两个小组，每组有12/2=6名球员。因此每组需要有6人。\"\n",
    "    },\n",
    "    # 逻辑推理\n",
    "    {\n",
    "        \"question\": \"如果所有的猫都怕水，而Tom是一只猫，请问Tom怕水吗？\",\n",
    "        \"answer\": \"根据题意，所有的猫都怕水，因此作为一只猫的Tom也会怕水。所以答案是肯定的，Tom怕水。\"\n",
    "    },\n",
    "    {\n",
    "        \"question\": \"在夏天，如果白天温度高于30度，夜晚就会很凉爽。今天白天温度是32度，请问今晚会凉爽吗？\",\n",
    "        \"answer\": \"根据题意，只要白天温度高于30度，夜晚就会很凉爽。今天白天的温度是32度，超过了30度，因此今晚会凉爽。\"\n",
    "    },\n",
    "    # 常识问题\n",
    "    {\n",
    "        \"question\": \"地球绕太阳转一圈需要多久？\",\n",
    "        \"answer\": \"地球绕太阳转一圈大约需要365天，也就是一年的时间。\"\n",
    "    },\n",
    "    {\n",
    "        \"question\": \"水的沸点是多少摄氏度？\",\n",
    "        \"answer\": \"水的沸点是100摄氏度。\"\n",
    "    },\n",
    "    # 文化常识\n",
    "    {\n",
    "        \"question\": \"中国的首都是哪里？\",\n",
    "        \"answer\": \"中国的首都是北京。\"\n",
    "    },\n",
    "    {\n",
    "        \"question\": \"世界上最长的河流是哪一条？\",\n",
    "        \"answer\": \"世界上最长的河流是尼罗河。\"\n",
    "    },\n",
    "]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "61f15e0c-a270-4fb7-bdca-93b01b7dd177",
   "metadata": {},
   "source": [
    "&emsp;&emsp;如上所示的示例中涵盖了数学推理、逻辑推理、常识问题以及文化常识四种不同的语义场景，并为每个场景提供了两个问题和答案。接下来将上述示例构建成提示模版"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "9c040370-aa53-4b07-919e-a020edcd1449",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Human: 小明的妈妈给了他10块钱去买文具，如果一支笔3块钱，小明最多能买几支笔？\n",
      "AI: 小明有10块钱，每支笔3块钱，所以他最多能买3支笔，因为3*3=9，剩下1块钱不够再买一支笔。因此答案是3支。\n",
      "Human: 一个篮球队有12名球员，如果教练想分成两个小组进行训练，每组需要有多少人？\n",
      "AI: 篮球队总共有12名球员，分成两个小组，每组有12/2=6名球员。因此每组需要有6人。\n",
      "Human: 如果所有的猫都怕水，而Tom是一只猫，请问Tom怕水吗？\n",
      "AI: 根据题意，所有的猫都怕水，因此作为一只猫的Tom也会怕水。所以答案是肯定的，Tom怕水。\n",
      "Human: 在夏天，如果白天温度高于30度，夜晚就会很凉爽。今天白天温度是32度，请问今晚会凉爽吗？\n",
      "AI: 根据题意，只要白天温度高于30度，夜晚就会很凉爽。今天白天的温度是32度，超过了30度，因此今晚会凉爽。\n",
      "Human: 地球绕太阳转一圈需要多久？\n",
      "AI: 地球绕太阳转一圈大约需要365天，也就是一年的时间。\n",
      "Human: 水的沸点是多少摄氏度？\n",
      "AI: 水的沸点是100摄氏度。\n",
      "Human: 中国的首都是哪里？\n",
      "AI: 中国的首都是北京。\n",
      "Human: 世界上最长的河流是哪一条？\n",
      "AI: 世界上最长的河流是尼罗河。\n"
     ]
    }
   ],
   "source": [
    "example_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"human\", \"{question}\"),\n",
    "        (\"ai\", \"{answer}\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "few_shot_prompt = FewShotChatMessagePromptTemplate(\n",
    "    example_prompt=example_prompt,\n",
    "    examples=examples,\n",
    ")\n",
    "\n",
    "print(few_shot_prompt.format())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "68742106-f041-4857-b438-2bc5b6eb3f64",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatPromptTemplate(input_variables=['input'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='你是一个无所不能的人，无论什么问题都可以回答。')), FewShotChatMessagePromptTemplate(examples=[{'question': '小明的妈妈给了他10块钱去买文具，如果一支笔3块钱，小明最多能买几支笔？', 'answer': '小明有10块钱，每支笔3块钱，所以他最多能买3支笔，因为3*3=9，剩下1块钱不够再买一支笔。因此答案是3支。'}, {'question': '一个篮球队有12名球员，如果教练想分成两个小组进行训练，每组需要有多少人？', 'answer': '篮球队总共有12名球员，分成两个小组，每组有12/2=6名球员。因此每组需要有6人。'}, {'question': '如果所有的猫都怕水，而Tom是一只猫，请问Tom怕水吗？', 'answer': '根据题意，所有的猫都怕水，因此作为一只猫的Tom也会怕水。所以答案是肯定的，Tom怕水。'}, {'question': '在夏天，如果白天温度高于30度，夜晚就会很凉爽。今天白天温度是32度，请问今晚会凉爽吗？', 'answer': '根据题意，只要白天温度高于30度，夜晚就会很凉爽。今天白天的温度是32度，超过了30度，因此今晚会凉爽。'}, {'question': '地球绕太阳转一圈需要多久？', 'answer': '地球绕太阳转一圈大约需要365天，也就是一年的时间。'}, {'question': '水的沸点是多少摄氏度？', 'answer': '水的沸点是100摄氏度。'}, {'question': '中国的首都是哪里？', 'answer': '中国的首都是北京。'}, {'question': '世界上最长的河流是哪一条？', 'answer': '世界上最长的河流是尼罗河。'}], example_prompt=ChatPromptTemplate(input_variables=['answer', 'question'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['question'], template='{question}')), AIMessagePromptTemplate(prompt=PromptTemplate(input_variables=['answer'], template='{answer}'))])), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}'))])"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "final_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"你是一个无所不能的人，无论什么问题都可以回答。\"),\n",
    "        few_shot_prompt,\n",
    "        (\"human\", \"{input}\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "final_prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "cb7806d9-33a0-4a62-86fd-b95c1bf133c2",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'世界上最高的山峰是珠穆朗玛峰（也称为埃佛勒斯峰），其海拔高度为8848.86米。'"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "chain = final_prompt | chat\n",
    "\n",
    "res = chain.invoke({\"input\": \"世界上最高的山峰是哪一座\"})\n",
    "res.content"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e4ec9b58-494c-4f47-98fe-2611cdf2afb2",
   "metadata": {},
   "source": [
    "&emsp;&emsp;上述过程，其实就对应了我们之前提到的问题，针对“世界上最高的山峰是哪一座？”这类问题，实际上只需输入与文化常识相关的提示就足够了。而如果要实现这一功能，就需要借助LangChain中的`example_selector`模块。在该模块中，有如下两个参数需要关注：\n",
    "\n",
    "- example_selector ：负责为给定输入选择少数样本（以及它们返回的顺序）。它们实现了 BaseExampleSelector 接口。一个常见的例子是向量存储支持的 SemanticSimilarityExampleSelector\n",
    "- example_prompt ：通过其 format_messages 方法将每个示例转换为 1 条或多条消息。一个常见的示例是将每个示例转换为一条人工消息和一条人工智能消息响应，或者一条人工消息后跟一条函数调用消息。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0208fac2-eeaa-474d-8fed-cede9cd21700",
   "metadata": {},
   "source": [
    "&emsp;&emsp;LangChain已经内置了多个预定义的示例选择器，每种选择器都有其特定的功能和适用场景。在这个案例中，我们先以`SemanticSimilarityExampleSelector`为例进行探索。这个选择器的目的是在给定的示例集合中选出与输入在语义上最接近的示例。主要的实现步骤如下：\n",
    "\n",
    "1. **向量化表示**：首先，输入文本和示例集中的每个示例都会被转换成向量化的表示。通过Embedding模型将文本转换成高维空间中的点，其中语义上相似的文本会被映射到空间中相近的位置。\n",
    "\n",
    "2. **计算语义相似度**：一旦得到了输入和示例的向量化表示，下一步是计算输入与每个示例之间的语义相似度。通过计算向量之间的距离来实现，常见的度量方式包括余弦相似度、欧氏距离等。\n",
    "\n",
    "3. **选择最相似的示例**：基于计算出的相似度，选择一个或多个与输入最相似的示例。这个选择过程可以是简单地选取相似度最高的示例，或者根据相似度分布采取更复杂的策略，例如选择相似度高于某个阈值的所有示例。"
   ]
  },
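  {
   "cell_type": "markdown",
   "id": "e5f6a7b8-9c0d-4e1f-2a3b-4c5d6e7f8a9b",
   "metadata": {},
   "source": [
    "&emsp;&emsp;其中第2步提到的余弦相似度，其计算原理可以用NumPy简单示意（实际检索时由向量数据库在内部完成）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f6a7b8c9-0d1e-4f2a-3b4c-5d6e7f8a9b0c",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    # 余弦相似度 = 点积 / 模长乘积，取值越接近1表示两个向量在语义上越相近\n",
    "    a, b = np.asarray(a), np.asarray(b)\n",
    "    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
    "\n",
    "print(cosine_similarity([1.0, 0.0, 1.0], [1.0, 0.2, 0.9]))"
   ]
  },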
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "974ce6da-2887-47e9-bb0e-15e6e1628da4",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: https://mirrors.aliyun.com/pypi/simple, https://pypi.ngc.nvidia.com\n",
      "Collecting chromadb\n",
      "  Downloading https://mirrors.aliyun.com/pypi/packages/7d/37/66846c00e87a8c9c9fb07e58a06a23a4f9d46bdd82b9411fd10a49080413/chromadb-1.0.7-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.3 MB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m18.3/18.3 MB\u001b[0m \u001b[31m25.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0ma \u001b[36m0:00:01\u001b[0m\n",
      "\u001b[?25hCollecting build>=1.0.3 (from chromadb)\n",
      "  Downloading https://mirrors.aliyun.com/pypi/packages/84/c2/80633736cd183ee4a62107413def345f7e6e3c01563dbca1417363cf957e/build-1.2.2.post1-py3-none-any.whl (22 kB)\n",
      "Requirement already satisfied: pydantic>=1.9 in /home/ubuntu/anaconda3/envs/peft/lib/python3.10/site-packages (from chromadb) (2.10.6)\n",
      "Collecting chroma-hnswlib==0.7.6 (from chromadb)\n",
      "  Downloading https://mirrors.aliyun.com/pypi/packages/3a/6d/27826180a54df80dbba8a4f338b022ba21c0c8af96fd08ff8510626dee8f/chroma_hnswlib-0.7.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.4 MB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.4/2.4 MB\u001b[0m \u001b[31m20.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hCollecting fastapi==0.115.9 (from chromadb)\n",
      "  Downloading https://mirrors.aliyun.com/pypi/packages/32/b6/7517af5234378518f27ad35a7b24af9591bc500b8c1780929c1295999eb6/fastapi-0.115.9-py3-none-any.whl (94 kB)\n",
      "Requirement already satisfied: uvicorn>=0.18.3 in /home/ubuntu/anaconda3/envs/peft/lib/python3.10/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.34.0)\n",
      "Requirement already satisfied: numpy>=1.22.5 in /home/ubuntu/anaconda3/envs/peft/lib/python3.10/site-packages (from chromadb) (1.26.4)\n",
      "Collecting posthog>=2.4.0 (from chromadb)\n",
      "  Downloading https://mirrors.aliyun.com/pypi/packages/af/51/398725e37e1e087b4499f8769340aec1e5904b632f13d106b150bf577831/posthog-4.0.0-py2.py3-none-any.whl (92 kB)\n",
      "Requirement already satisfied: typing-extensions>=4.5.0 in /home/ubuntu/anaconda3/envs/peft/lib/python3.10/site-packages (from chromadb) (4.13.1)\n",
      "Collecting onnxruntime>=1.14.1 (from chromadb)\n",
      "  Downloading https://mirrors.aliyun.com/pypi/packages/a9/fb/76597b77785b2012317ffdd817101ccfab784e2c125645d002c4c9cd377b/onnxruntime-1.21.1-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (16.0 MB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m16.0/16.0 MB\u001b[0m \u001b[31m24.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0ma \u001b[36m0:00:01\u001b[0m\n",
      "\u001b[?25hCollecting opentelemetry-api>=1.2.0 (from chromadb)\n",
      "  Downloading https://mirrors.aliyun.com/pypi/packages/12/f2/89ea3361a305466bc6460a532188830351220b5f0851a5fa133155c16eca/opentelemetry_api-1.32.1-py3-none-any.whl (65 kB)\n",
      "Collecting opentelemetry-exporter-otlp-proto-grpc>=1.2.0 (from chromadb)\n",
      "  Downloading https://mirrors.aliyun.com/pypi/packages/ef/02/37ad560b12b8dfab8f1a08ca1884b5759ffde133f20d966614a9dd904d1b/opentelemetry_exporter_otlp_proto_grpc-1.32.1-py3-none-any.whl (18 kB)\n",
      "Collecting opentelemetry-instrumentation-fastapi>=0.41b0 (from chromadb)\n",
      "  Downloading https://mirrors.aliyun.com/pypi/packages/01/06/b996a3b1f243938ebff7ca1a2290174a155c98791ff6f2e5db50bce0a1a2/opentelemetry_instrumentation_fastapi-0.53b1-py3-none-any.whl (12 kB)\n",
      "Collecting opentelemetry-sdk>=1.2.0 (from chromadb)\n",
      "  Downloading https://mirrors.aliyun.com/pypi/packages/dc/00/d3976cdcb98027aaf16f1e980e54935eb820872792f0eaedd4fd7abb5964/opentelemetry_sdk-1.32.1-py3-none-any.whl (118 kB)\n",
      "Requirement already satisfied: tokenizers>=0.13.2 in /home/ubuntu/anaconda3/envs/peft/lib/python3.10/site-packages (from chromadb) (0.20.3)\n",
      "Collecting pypika>=0.48.9 (from chromadb)\n",
      "  Downloading https://mirrors.aliyun.com/pypi/packages/c7/2c/94ed7b91db81d61d7096ac8f2d325ec562fc75e35f3baea8749c85b28784/PyPika-0.48.9.tar.gz (67 kB)\n",
      "  Installing build dependencies ... \u001b[?25ldone\n",
      "\u001b[?25h  Getting requirements to build wheel ... \u001b[?25ldone\n",
      "\u001b[?25h  Preparing metadata (pyproject.toml) ... \u001b[?25ldone\n",
      "\u001b[?25hRequirement already satisfied: tqdm>=4.65.0 in /home/ubuntu/anaconda3/envs/peft/lib/python3.10/site-packages (from chromadb) (4.67.1)\n",
      "Requirement already satisfied: overrides>=7.3.1 in /home/ubuntu/anaconda3/envs/peft/lib/python3.10/site-packages (from chromadb) (7.7.0)\n",
      "Requirement already satisfied: importlib-resources in /home/ubuntu/anaconda3/envs/peft/lib/python3.10/site-packages (from chromadb) (6.5.2)\n",
      "Collecting grpcio>=1.58.0 (from chromadb)\n",
      "  Downloading https://mirrors.aliyun.com/pypi/packages/5d/b7/7e7b7bb6bb18baf156fd4f2f5b254150dcdd6cbf0def1ee427a2fb2bfc4d/grpcio-1.71.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.9 MB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m5.9/5.9 MB\u001b[0m \u001b[31m25.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0ma \u001b[36m0:00:01\u001b[0m\n",
      "\u001b[?25hCollecting bcrypt>=4.0.1 (from chromadb)\n",
      "  Downloading https://mirrors.aliyun.com/pypi/packages/cb/c6/8fedca4c2ada1b6e889c52d2943b2f968d3427e5d65f595620ec4c06fa2f/bcrypt-4.3.0-cp39-abi3-manylinux_2_28_x86_64.whl (284 kB)\n",
      "Requirement already satisfied: typer>=0.9.0 in /home/ubuntu/anaconda3/envs/peft/lib/python3.10/site-packages (from chromadb) (0.15.2)\n",
      "Collecting kubernetes>=28.1.0 (from chromadb)\n",
      "  Downloading https://mirrors.aliyun.com/pypi/packages/08/10/9f8af3e6f569685ce3af7faab51c8dd9d93b9c38eba339ca31c746119447/kubernetes-32.0.1-py2.py3-none-any.whl (2.0 MB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.0/2.0 MB\u001b[0m \u001b[31m25.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hRequirement already satisfied: tenacity>=8.2.3 in /home/ubuntu/anaconda3/envs/peft/lib/python3.10/site-packages (from chromadb) (8.5.0)\n",
      "Requirement already satisfied: pyyaml>=6.0.0 in /home/ubuntu/anaconda3/envs/peft/lib/python3.10/site-packages (from chromadb) (6.0.2)\n",
      "Collecting mmh3>=4.0.1 (from chromadb)\n",
      "  Downloading https://mirrors.aliyun.com/pypi/packages/f1/ac/17030d24196f73ecbab8b5033591e5e0e2beca103181a843a135c78f4fee/mmh3-5.1.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (99 kB)\n",
      "Requirement already satisfied: orjson>=3.9.12 in /home/ubuntu/anaconda3/envs/peft/lib/python3.10/site-packages (from chromadb) (3.10.16)\n",
      "Building wheels for collected packages: pypika\n",
      "  Building wheel for pypika (pyproject.toml) ... \u001b[?25ldone\n",
      "\u001b[?25h  Created wheel for pypika: filename=pypika-0.48.9-py2.py3-none-any.whl size=53800 sha256=bbe2891111778d8d41d3af99ea9d6acfb4c2381120f078b8968a09c1ab3a6547\n",
      "  Stored in directory: /tmp/pip-ephem-wheel-cache-uif78e2t/wheels/72/22/e9/22e7a498a5a6f391ae627013a1bcc011dac57bb1a41d4f9274\n",
      "Successfully built pypika\n",
      "Installing collected packages: pypika, monotonic, flatbuffers, durationpy, zipp, wrapt, uvloop, python-dotenv, pyproject_hooks, pyasn1, protobuf, opentelemetry-util-http, oauthlib, mmh3, httptools, grpcio, chroma-hnswlib, cachetools, bcrypt, backoff, asgiref, watchfiles, starlette, rsa, requests-oauthlib, pyasn1-modules, posthog, opentelemetry-proto, onnxruntime, importlib-metadata, googleapis-common-protos, deprecated, build, opentelemetry-exporter-otlp-proto-common, opentelemetry-api, google-auth, fastapi, opentelemetry-semantic-conventions, kubernetes, opentelemetry-sdk, opentelemetry-instrumentation, opentelemetry-instrumentation-asgi, opentelemetry-exporter-otlp-proto-grpc, opentelemetry-instrumentation-fastapi, chromadb\n",
      "  Attempting uninstall: protobuf\n",
      "    Found existing installation: protobuf 6.30.2\n",
      "    Uninstalling protobuf-6.30.2:\n",
      "      Successfully uninstalled protobuf-6.30.2\n",
      "  Attempting uninstall: starlette\n",
      "    Found existing installation: starlette 0.46.1\n",
      "    Uninstalling starlette-0.46.1:\n",
      "      Successfully uninstalled starlette-0.46.1\n",
      "  Attempting uninstall: fastapi\n",
      "    Found existing installation: fastapi 0.115.12\n",
      "    Uninstalling fastapi-0.115.12:\n",
      "      Successfully uninstalled fastapi-0.115.12\n",
      "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
      "llamafactory 0.9.3.dev0 requires transformers!=4.46.*,!=4.47.*,!=4.48.0,<=4.51.3,>=4.45.0, but you have transformers 4.46.3 which is incompatible.\u001b[0m\u001b[31m\n",
      "\u001b[0mSuccessfully installed asgiref-3.8.1 backoff-2.2.1 bcrypt-4.3.0 build-1.2.2.post1 cachetools-5.5.2 chroma-hnswlib-0.7.6 chromadb-1.0.7 deprecated-1.2.18 durationpy-0.9 fastapi-0.115.9 flatbuffers-25.2.10 google-auth-2.39.0 googleapis-common-protos-1.70.0 grpcio-1.71.0 httptools-0.6.4 importlib-metadata-8.6.1 kubernetes-32.0.1 mmh3-5.1.0 monotonic-1.6 oauthlib-3.2.2 onnxruntime-1.21.1 opentelemetry-api-1.32.1 opentelemetry-exporter-otlp-proto-common-1.32.1 opentelemetry-exporter-otlp-proto-grpc-1.32.1 opentelemetry-instrumentation-0.53b1 opentelemetry-instrumentation-asgi-0.53b1 opentelemetry-instrumentation-fastapi-0.53b1 opentelemetry-proto-1.32.1 opentelemetry-sdk-1.32.1 opentelemetry-semantic-conventions-0.53b1 opentelemetry-util-http-0.53b1 posthog-4.0.0 protobuf-5.29.4 pyasn1-0.6.1 pyasn1-modules-0.4.2 pypika-0.48.9 pyproject_hooks-1.2.0 python-dotenv-1.1.0 requests-oauthlib-2.0.0 rsa-4.9.1 starlette-0.45.3 uvloop-0.21.0 watchfiles-1.0.5 wrapt-1.17.2 zipp-3.21.0\n"
     ]
    }
   ],
   "source": [
    "! pip install chromadb"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "0ea92192-eabe-4ab3-b764-d53f5435941e",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain.prompts import SemanticSimilarityExampleSelector\n",
    "from langchain_community.vectorstores import Chroma\n",
    "from langchain_openai import OpenAIEmbeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "025fbba9-d352-4e36-a8be-af3edc206aed",
   "metadata": {},
   "source": [
    "&emsp;&emsp;使用OpenAI的Embedding模型构建向量，并存储至chromadb向量数据库中。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "4d41e040-92d9-44d2-a4d5-d5e11745f620",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['小明的妈妈给了他10块钱去买文具，如果一支笔3块钱，小明最多能买几支笔？ 小明有10块钱，每支笔3块钱，所以他最多能买3支笔，因为3*3=9，剩下1块钱不够再买一支笔。因此答案是3支。',\n",
       " '一个篮球队有12名球员，如果教练想分成两个小组进行训练，每组需要有多少人？ 篮球队总共有12名球员，分成两个小组，每组有12/2=6名球员。因此每组需要有6人。',\n",
       " '如果所有的猫都怕水，而Tom是一只猫，请问Tom怕水吗？ 根据题意，所有的猫都怕水，因此作为一只猫的Tom也会怕水。所以答案是肯定的，Tom怕水。',\n",
       " '在夏天，如果白天温度高于30度，夜晚就会很凉爽。今天白天温度是32度，请问今晚会凉爽吗？ 根据题意，只要白天温度高于30度，夜晚就会很凉爽。今天白天的温度是32度，超过了30度，因此今晚会凉爽。',\n",
       " '地球绕太阳转一圈需要多久？ 地球绕太阳转一圈大约需要365天，也就是一年的时间。',\n",
       " '水的沸点是多少摄氏度？ 水的沸点是100摄氏度。',\n",
       " '中国的首都是哪里？ 中国的首都是北京。',\n",
       " '世界上最长的河流是哪一条？ 世界上最长的河流是尼罗河。']"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "to_vectorize = [\" \".join(example.values()) for example in examples]\n",
    "to_vectorize"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "95241548-5777-42e2-b647-2e1740e13906",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "embeddings = OpenAIEmbeddings(model=\"text-embedding-ada-002\", api_key=\"<你的API_KEY>\", base_url=\"https://api.openai-hk.com/v1\")\n",
    "vectorstore = Chroma.from_texts(to_vectorize, embeddings, metadatas=examples)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "91794d6d-5ae2-4aff-a2b6-a911fbcd1c6d",
   "metadata": {},
   "source": [
    "&emsp;&emsp;创建完向量存储后，接下来需要创建`example_selector`。可以通过`k`参数指定获取多少个与输入问题最相关的示例。这里我们将其设置为2。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "bedb316c-8fe4-47d1-88be-d2e54ef2266d",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'answer': '中国的首都是北京。', 'question': '中国的首都是哪里？'},\n",
       " {'answer': '世界上最长的河流是尼罗河。', 'question': '世界上最长的河流是哪一条？'}]"
      ]
     },
     "execution_count": 34,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "example_selector = SemanticSimilarityExampleSelector(\n",
    "    vectorstore=vectorstore,\n",
    "    k=2,\n",
    ")\n",
    "\n",
    "example_selector.select_examples({\"input\": \"内蒙古的省会是哪里\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "65a788c5-01d6-438c-992f-bcf549b7bfc0",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'question': '小明的妈妈给了他10块钱去买文具，如果一支笔3块钱，小明最多能买几支笔？',\n",
       "  'answer': '小明有10块钱，每支笔3块钱，所以他最多能买3支笔，因为3*3=9，剩下1块钱不够再买一支笔。因此答案是3支。'},\n",
       " {'answer': '篮球队总共有12名球员，分成两个小组，每组有12/2=6名球员。因此每组需要有6人。',\n",
       "  'question': '一个篮球队有12名球员，如果教练想分成两个小组进行训练，每组需要有多少人？'}]"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "example_selector = SemanticSimilarityExampleSelector(\n",
    "    vectorstore=vectorstore,\n",
    "    k=2,\n",
    ")\n",
    "\n",
    "example_selector.select_examples({\"input\": \"罗杰有五个网球，他又买了两盒网球，每盒有3个网球，请问他现在总共有多少个网球？\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "10600ece-5eb3-46f8-ad79-48607bbafb9d",
   "metadata": {},
   "source": [
    "&emsp;&emsp;从上面两个示例我们可以观察到，在处理基于文化常识的查询时（例如询问“内蒙古的省会是哪里？”），选定的few-shot模板会选自文化常识类别的示例；相反，当遇到需要推理的问题时，则倾向于选择我们预先定义好的数学推理类提示示例。这种动态匹配策略展示了语义相似性选择器为大语言模型精准挑选模板的能力，从而有效地应对不同类别的查询，确保模型输出的相关性和准确性。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "87f2b878-c9eb-415b-ad5d-f8dc4445b866",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain.prompts import (\n",
    "    ChatPromptTemplate,\n",
    "    FewShotChatMessagePromptTemplate,\n",
    ")\n",
    "\n",
    "# 创建一个 FewShotChatMessagePromptTemplate 对象\n",
    "few_shot_prompt = FewShotChatMessagePromptTemplate(\n",
    "    input_variables=[\"input\"],           # 定义输入变量的列表\n",
    "    example_selector=example_selector,   # 使用动态的示例选择器\n",
    "    \n",
    "    # 定义每一轮对话的格式化文本\n",
    "    example_prompt=ChatPromptTemplate.from_messages(   \n",
    "        [(\"human\", \"{question}\"), (\"ai\", \"{answer}\")]\n",
    "    ),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "46bcc405-eb7b-4d83-b80f-fa098cf6b242",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "FewShotChatMessagePromptTemplate(example_selector=SemanticSimilarityExampleSelector(vectorstore=<langchain_community.vectorstores.chroma.Chroma object at 0x7f13aa657c70>, k=2, example_keys=None, input_keys=None, vectorstore_kwargs=None), input_variables=['input'], example_prompt=ChatPromptTemplate(input_variables=['answer', 'question'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['question'], template='{question}')), AIMessagePromptTemplate(prompt=PromptTemplate(input_variables=['answer'], template='{answer}'))]))"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "few_shot_prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "9e556284-eaa3-46c0-a1d1-e69cc64d5a65",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Human: 小明的妈妈给了他10块钱去买文具，如果一支笔3块钱，小明最多能买几支笔？\n",
      "AI: 小明有10块钱，每支笔3块钱，所以他最多能买3支笔，因为3*3=9，剩下1块钱不够再买一支笔。因此答案是3支。\n",
      "Human: 一个篮球队有12名球员，如果教练想分成两个小组进行训练，每组需要有多少人？\n",
      "AI: 篮球队总共有12名球员，分成两个小组，每组有12/2=6名球员。因此每组需要有6人。\n"
     ]
    }
   ],
   "source": [
    "print(few_shot_prompt.format(input=\"罗杰有五个网球，他又买了两盒网球，每盒有3个网球，请问他现在总共有多少个网球？\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 95,
   "id": "4e05451b-a7b1-423e-8fc7-efa5a512c698",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Human: 地球绕太阳转一圈需要多久？\n",
      "AI: 地球绕太阳转一圈大约需要365天，也就是一年的时间。\n",
      "Human: 地球绕太阳转一圈需要多久？\n",
      "AI: 地球绕太阳转一圈大约需要365天，也就是一年的时间。\n"
     ]
    }
   ],
   "source": [
    "print(few_shot_prompt.format(input=\"月亮每天什么时候出现\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 96,
   "id": "1a07a7db-f2d2-42b7-bb70-4343517d1773",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "final_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"你是一个无所不能的人，无论什么问题都可以回答。\"),\n",
    "        few_shot_prompt,\n",
    "        (\"human\", \"{input}\"),\n",
    "    ]\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 97,
   "id": "07d5c4eb-9bfb-46fd-ad38-c89324d84380",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatPromptTemplate(input_variables=['input'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='你是一个无所不能的人，无论什么问题都可以回答。')), FewShotChatMessagePromptTemplate(example_selector=SemanticSimilarityExampleSelector(vectorstore=<langchain_community.vectorstores.chroma.Chroma object at 0x7fe9474add90>, k=2, example_keys=None, input_keys=None, vectorstore_kwargs=None), input_variables=['input'], example_prompt=ChatPromptTemplate(input_variables=['answer', 'question'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['question'], template='{question}')), AIMessagePromptTemplate(prompt=PromptTemplate(input_variables=['answer'], template='{answer}'))])), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}'))])"
      ]
     },
     "execution_count": 97,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "final_prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 98,
   "id": "b4ed41bc-7403-4f08-beed-b63f32857db5",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='月亮每天都会出现，但出现的时间会因为月球的运动和地球的自转而有所不同。具体来说，月亮每天大约在日落后1-2小时左右出现在东方，接着会在夜间持续出现，直到天亮时从西方消失。不过，月亮的出现时间也会受到季节、地理位置等因素的影响，因此可能会有所差异。', response_metadata={'token_usage': {'completion_tokens': 130, 'prompt_tokens': 149, 'total_tokens': 279}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-4dc919bc-4d5d-4619-b3b0-32f736389cbf-0')"
      ]
     },
     "execution_count": 98,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "# 实例化Chat模型（api_key默认读取OPENAI_API_KEY环境变量）\n",
    "chat = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
    "\n",
    "chain = final_prompt | chat\n",
    "\n",
    "chain.invoke({\"input\": \"月亮每天什么时候出现\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 99,
   "id": "29d2de8a-ad4a-4cfb-9e54-b3070fe10359",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "# 实例化Chat模型（api_key默认读取OPENAI_API_KEY环境变量）\n",
    "chat = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
    "\n",
    "chain = final_prompt | chat\n",
    "\n",
    "response = chain.invoke({\"input\": \"内蒙的省会是哪座城市？\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 100,
   "id": "86aeec08-2704-461d-9a54-888fdfb4eaba",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='内蒙古的省会是呼和浩特。', response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 95, 'total_tokens': 112}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_2f57f81c11', 'finish_reason': 'stop', 'logprobs': None}, id='run-31055416-0119-42d4-93a2-6aacd0fb49a0-0')"
      ]
     },
     "execution_count": 100,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "response"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "68c001e2-5554-4615-b0a8-3a3195f7278c",
   "metadata": {},
   "source": [
    "&emsp;&emsp;通过对`SemanticSimilarityExampleSelector`的应用，我们展示了如何动态地选择适合各种输入提示的示例模板。在LangChain框架中，示例选择器的功能和作用依赖于其具体的定义和实现。以我们使用的`SemanticSimilarityExampleSelector`为例，其选择过程涉及示例的向量化表示、相似度计算以及返回Top-K个最相似示例这几个步骤。到目前为止，LangChain已经定义了以下四种示例选择器，每种都有其独特的选择机制：\n",
    "\n",
    "1. **Similarity**：基于输入与示例之间的语义相似度来选择示例。这种方法通过比较语义内容的接近程度来确定最相关的示例。\n",
    "\n",
    "2. **MMR (Maximal Marginal Relevance)**：根据输入与示例之间的最大边际相关性来挑选示例。这种方法旨在平衡相关性和多样性，通过选择既相关又能提供新信息的示例。\n",
    "\n",
    "3. **Length**：依据指定长度内能够容纳的示例数量来进行选择。这个方法简单直接，特别适用于需要控制输出长度的场景。\n",
    "\n",
    "4. **Ngram**：通过计算输入与示例之间的n-gram重叠来选择示例。这种方法重视文本表面的匹配度，适用于需要精确文本匹配的情境。"
   ]
  },
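  {
   "cell_type": "markdown",
   "id": "c1a2b3d4-ngram-demo-0001",
   "metadata": {},
   "source": [
    "&emsp;&emsp;以上面第4种**Ngram**策略为例，n-gram重叠的计算思路可以用纯Python写一个概念性示意（注意：这只是说明原理的简化草图，并非LangChain中`NGramOverlapExampleSelector`的内部实现；示例字典沿用前文的`question`键）：\n",
    "\n",
    "```python\n",
    "def ngram_set(text, n=2):\n",
    "    # 将文本切分为所有长度为n的字符片段（字符级n-gram）\n",
    "    return {text[i:i + n] for i in range(len(text) - n + 1)}\n",
    "\n",
    "def ngram_overlap(a, b, n=2):\n",
    "    # 用Jaccard系数衡量两段文本n-gram集合的重叠程度\n",
    "    sa, sb = ngram_set(a, n), ngram_set(b, n)\n",
    "    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0\n",
    "\n",
    "def select_by_ngram(examples, query, k=1, n=2):\n",
    "    # 按与输入的n-gram重叠度从高到低排序，取前k个示例\n",
    "    return sorted(examples, key=lambda e: ngram_overlap(e[\"question\"], query, n), reverse=True)[:k]\n",
    "```\n",
    "\n",
    "对于输入“中国的首都在哪？”，这个草图会优先选中与之字面重叠最多的“中国的首都是哪里？”这一示例。"
   ]
  },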
  {
   "cell_type": "markdown",
   "id": "21fdb73a-7660-444a-9072-34a8b3635d54",
   "metadata": {},
   "source": [
    "### 3.4 自定义示例选择器"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "49f70e71-2398-450b-9093-172728401185",
   "metadata": {},
   "source": [
    "&emsp;&emsp;LangChain的`ExampleSelector`模块封装了一系列较为通用的示例选择器，例如我们上一小节使用的`SemanticSimilarityExampleSelector`，它能够基于语义相似度来选择最相关的示例，已经可以满足多数提示示例使用场景的需求。然而，在现实中，根据不同的业务需求，可能会遇到这些通用选择器无法完全满足特定需求的情况。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9cf42540-cb5d-43e3-8614-ba04034b7c40",
   "metadata": {
    "tags": []
   },
   "source": [
    "&emsp;&emsp;在LangChain中，`Example Selector`的基本接口定义如下：\n",
    "\n",
    "```python\n",
    "class BaseExampleSelector(ABC):\n",
    "    \"\"\"用于选择要包含在提示中的示例的接口。\"\"\"\n",
    "\n",
    "    @abstractmethod\n",
    "    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n",
    "        \"\"\"根据输入选择使用哪些示例。\"\"\"\n",
    "\n",
    "    @abstractmethod\n",
    "    def add_example(self, example: Dict[str, str]) -> Any:\n",
    "        \"\"\"向存储中添加新的示例。\"\"\"\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "57fafe5c-35d6-4b70-ac0a-7d2c550d48e9",
   "metadata": {},
   "source": [
    "> ABC，全称为“Abstract Base Class”（抽象基类），是Python中abc模块的一部分。在 Python 中，抽象基类用于定义其他类必须遵循的基本接口或蓝图，但不能直接实例化。其主要目的是为了提供一种形式化的方式来定义和检查子类的接口。抽象基类中，可以定义抽象方法，它没有实现（也就是说，它没有方法体）。任何继承该抽象基类的子类都必须提供这些抽象方法的实现。"
   ]
  },
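  {
   "cell_type": "markdown",
   "id": "abc-mini-demo-0001",
   "metadata": {},
   "source": [
    "&emsp;&emsp;抽象基类的约束效果可以用一个最小的纯Python示例来体会：未实现全部抽象方法的子类在实例化时会直接抛出`TypeError`，这是`abc`模块的标准行为：\n",
    "\n",
    "```python\n",
    "from abc import ABC, abstractmethod\n",
    "\n",
    "class Selector(ABC):\n",
    "    @abstractmethod\n",
    "    def select_examples(self, input_variables):\n",
    "        ...\n",
    "\n",
    "class Incomplete(Selector):\n",
    "    pass  # 未实现select_examples\n",
    "\n",
    "class Complete(Selector):\n",
    "    def select_examples(self, input_variables):\n",
    "        return []\n",
    "\n",
    "Complete()          # 可以正常实例化\n",
    "try:\n",
    "    Incomplete()    # 抛出TypeError：抽象方法未实现\n",
    "except TypeError as e:\n",
    "    print(type(e).__name__)\n",
    "```"
   ]
  },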
  {
   "cell_type": "markdown",
   "id": "9c155c5f-8b22-4e8c-ab1a-cb4d6d41bc8b",
   "metadata": {},
   "source": [
    "&emsp;&emsp;从上述基本接口来看，自定义选择器需要实现`select_examples`和`add_example`两个抽象方法，其中核心是`select_examples`：它接受输入变量，返回示例列表。如何选择这些示例取决于每个具体的实现，也就是我们自定义的逻辑。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b12428d7-6bf3-4b87-87a3-f913cbd264b0",
   "metadata": {},
   "source": [
    "&emsp;&emsp;为了演示示例选择器的自定义过程，我们设计这样一个简单的场景：聊天机器人的回答选择器。在这个场景中，聊天机器人需要根据用户的输入从一个预设的回答库中选择最合适的回答。这个预设库包含了多个输入-回答对，机器人的任务是找到与用户输入在长度上最接近的问题，然后返回相应的预设回答。这种方法可以帮助机器人处理未知或罕见的用户输入：通过匹配长度相近的问题来给出一个看似合适的回答，提升用户满意度。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "0bace8ca-046a-4795-a151-f3c721c95367",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "examples = [\n",
    "    {\"input\": \"你好吗？\", \"output\": \"我很好，谢谢！你呢？\"},\n",
    "    {\"input\": \"你是谁？\", \"output\": \"我是一个聊天机器人。\"},\n",
    "    {\"input\": \"你能做什么？\", \"output\": \"我可以回答简单的问题，比如现在的时间或天气。\"},\n",
    "    {\"input\": \"现在几点了？\", \"output\": \"抱歉，我无法提供实时信息。\"},\n",
    "    {\"input\": \"你喜欢音乐吗？\", \"output\": \"我不能听音乐，但我可以帮你找到音乐信息。\"},\n",
    "    {\"input\": \"告诉我一些关于中国的事情。\", \"output\": \"中国是一个拥有悠久历史和丰富文化的国家。\"},\n",
    "    {\"input\": \"最近有什么好玩的电影吗？\", \"output\": \"我不太清楚当前的电影信息，但我推荐你查看电影推荐网站。\"},\n",
    "    {\"input\": \"你能帮我学习编程吗？\", \"output\": \"当然，我可以提供一些学习资源和编程练习。\"}\n",
    "]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7692af27-c0bc-42d5-b533-0b628cc2ffbe",
   "metadata": {},
   "source": [
    "&emsp;&emsp;这个examples列表包含了几个不同长度的问题及其对应的答案。根据聊天机器人的回答选择器的需求设定，我们设计的示例选择器的功能就应该是：当用户输入一段文本时，自定义示例选择器的`select_examples`方法会根据输入的长度选择一个长度最接近的问题，并返回那个问题的答案。这样，即使用户的问题没有直接出现在预设的问题列表中，聊天机器人也能提供一个相关的回答。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "bb7b4268-f515-4054-bf45-b258ab175902",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain_core.example_selectors.base import BaseExampleSelector\n",
    "\n",
    "class ChatbotExampleSelector(BaseExampleSelector):\n",
    "    def __init__(self, examples):\n",
    "        # examples是一个列表，包含多个字典，每个字典都有'input'和'output'键\n",
    "        self.examples = examples\n",
    "\n",
    "    def add_example(self, example):\n",
    "        # 向examples列表添加一个输入-输出对\n",
    "        self.examples.append(example)\n",
    "\n",
    "    def select_examples(self, input_variables):\n",
    "        # 此方法找到与用户输入长度最接近的示例，并返回相应的输出\n",
    "        new_word = input_variables[\"input\"]\n",
    "        new_word_length = len(new_word)\n",
    "\n",
    "        best_match = None\n",
    "        # 初始化为正无穷，用于记录当前最小的长度差\n",
    "        smallest_diff = float(\"inf\")\n",
    "\n",
    "        for example in self.examples:\n",
    "            current_diff = abs(len(example[\"input\"]) - new_word_length)\n",
    "\n",
    "            if current_diff < smallest_diff:\n",
    "                smallest_diff = current_diff\n",
    "                best_match = example\n",
    "\n",
    "        # 如果找到了最佳匹配项，返回包含该示例的列表；否则返回空列表\n",
    "        return [best_match] if best_match else []"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "08287c98-644f-4856-8aa4-6250134ba670",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<__main__.ChatbotExampleSelector at 0x7f141552b760>"
      ]
     },
     "execution_count": 42,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "example_selector = ChatbotExampleSelector(examples)\n",
    "example_selector"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "id": "77d27283-33cd-497e-bff5-875a284ca160",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'input': '你好吗？', 'output': '我很好，谢谢！你呢？'}]"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "example_selector.select_examples({\"input\": \"你好呀。\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "id": "1711154b-8b96-498c-8cb3-55ddd64057db",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'input': '你能帮我学习编程吗？', 'output': '当然，我可以提供一些学习资源和编程练习。'}]"
      ]
     },
     "execution_count": 46,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "example_selector.select_examples({\"input\": \"我特别的喜欢打篮球\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "id": "c0e60ebb-4675-4fd8-af6d-ac9f34754c25",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'input': '告诉我一些关于中国的事情。', 'output': '中国是一个拥有悠久历史和丰富文化的国家。'}]"
      ]
     },
     "execution_count": 47,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "example_selector.select_examples({\"input\": \"今天的天气很好，能推荐一个好玩的去处吗？\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "id": "3aa74c4f-ebed-45d7-9539-e3605e7cb763",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'input': '告诉我一些关于中国的事情。', 'output': '中国是一个拥有悠久历史和丰富文化的国家。'}]"
      ]
     },
     "execution_count": 48,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "example_selector.select_examples({\"input\": \"今天的天气很好，非常适合春游，能帮我推荐一个适合全家人出游的好去处吗？\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "id": "fd287d5d-b176-4755-ae26-b7a03aa7459a",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "example_selector.add_example({\"input\": \"春天到了，大家都喜欢出去春游，但是很多地方并不是很好，请问有推荐码？\", \"output\": \"如果你喜欢春天春游的话，你可以去一些国家公园，景色非常好。\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "id": "272dcec4-d5ee-4a40-9669-ed1fddb00619",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'input': '春天到了，大家都喜欢出去春游，但是很多地方并不是很好，请问有推荐码？',\n",
       "  'output': '如果你喜欢春天春游的话，你可以去一些国家公园，景色非常好。'}]"
      ]
     },
     "execution_count": 50,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "example_selector.select_examples({\"input\": \"今天的天气很好，非常适合春游，能帮我推荐一个适合全家人出游的好去处吗？\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "735c78e2-5539-4587-a582-9c088cdaa386",
   "metadata": {},
   "source": [
    "&emsp;&emsp;这样就可以匹配到最新添加的示例了。而接入大模型的推理过程与常规使用方式无异。首先在对话模版中接入`ChatbotExampleSelector`示例选择器，代码如下："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "id": "534a0005-5fe7-4506-8a7f-4c82f0479cd2",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain.prompts import (\n",
    "    ChatPromptTemplate,\n",
    "    FewShotChatMessagePromptTemplate,\n",
    ")\n",
    "\n",
    "# 创建一个 FewShotChatMessagePromptTemplate 对象\n",
    "few_shot_prompt = FewShotChatMessagePromptTemplate(\n",
    "    input_variables=[\"input\"],           # 定义输入变量的列表\n",
    "    example_selector=example_selector,   # 使用动态的示例选择器\n",
    "    \n",
    "    # 定义每一轮对话的格式化文本\n",
    "    example_prompt=ChatPromptTemplate.from_messages(   \n",
    "        [(\"human\", \"{input}\"), (\"ai\", \"{output}\")]\n",
    "    ),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "id": "126a911d-2a1d-4dec-a4bb-9e08506309c0",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Human: 春天到了，大家都喜欢出去春游，但是很多地方并不是很好，请问有推荐码？\n",
      "AI: 如果你喜欢春天春游的话，你可以去一些国家公园，景色非常好。\n"
     ]
    }
   ],
   "source": [
    "print(few_shot_prompt.format(input=\"今天的天气很好，非常适合春游，能帮我推荐一个适合全家人出游的好去处吗？\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "id": "82b07709-a6be-4215-9283-a5e9b87359a6",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Human: 你好吗？\n",
      "AI: 我很好，谢谢！你呢？\n"
     ]
    }
   ],
   "source": [
    "print(few_shot_prompt.format(input=\"你好呀。\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "id": "0cd5ac44-6805-4583-899e-33acb9dd11cd",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "final_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"你是一个无所不能的人，无论什么问题都可以回答。\"),\n",
    "        few_shot_prompt,\n",
    "        (\"human\", \"{input}\"),\n",
    "    ]\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "id": "8dbdcc37-050b-4f56-817e-d7c418e08e6e",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatPromptTemplate(input_variables=['input'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='你是一个无所不能的人，无论什么问题都可以回答。')), FewShotChatMessagePromptTemplate(example_selector=<__main__.ChatbotExampleSelector object at 0x7f141552b760>, input_variables=['input'], example_prompt=ChatPromptTemplate(input_variables=['input', 'output'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')), AIMessagePromptTemplate(prompt=PromptTemplate(input_variables=['output'], template='{output}'))])), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}'))])"
      ]
     },
     "execution_count": 55,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "final_prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "id": "96d9938a-9eb6-4df4-83d6-933ab5424886",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='你好！有什么我可以帮助你的吗？', response_metadata={'finish_reason': 'stop'}, id='run-ca462cf4-85d6-4f48-b363-b6db88f4a56e-0')"
      ]
     },
     "execution_count": 56,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "chain = final_prompt | chat\n",
    "\n",
    "chain.invoke({\"input\": \"你好呀\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 131,
   "id": "07da6bfe-2e8a-4508-8f10-23ad7498cfd3",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='我是一个人工智能语言模型，被称为AI助手。我被设计成可以回答各种问题和提供帮助。', response_metadata={'token_usage': {'completion_tokens': 43, 'prompt_tokens': 68, 'total_tokens': 111}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-2c0d3d3e-1030-43b8-8110-fdf5ef3e319b-0')"
      ]
     },
     "execution_count": 131,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "chain = final_prompt | chat\n",
    "\n",
    "chain.invoke({\"input\": \"你是谁？\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 132,
   "id": "59e8a24f-2c21-46e2-9ea4-35e519d425c1",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'当然可以，以下是一些适合全家人春季出游的地方：\\n\\n1. 桂林漓江\\n\\n漓江是桂林的一条著名河流，以其秀丽的山水风光而著名。你可以选择坐游船游览漓江，欣赏河流两岸的奇峰异石和青山绿水，也可以选择徒步穿越漓江的峡谷，感受一下自然的魅力。\\n\\n2. 黄山\\n\\n黄山是中国著名的山脉之一，以其奇峰、云海、日出、松柏等景观而著名。你可以选择在山脚下的村庄住宿，或者在山上的酒店住宿，欣赏黄山的自然美景。\\n\\n3. 长城\\n\\n长城是中国最著名的旅游景点之一，也是世界上最长的城墙之一。你可以选择在长城上徒步，欣赏长城的壮丽景色，或者在长城下的村庄中住宿，体验当地的文化和风味美食。\\n\\n希望以上推荐对你有帮助，祝你和你的家人春季出游愉快！'"
      ]
     },
     "execution_count": 132,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "chain = final_prompt | chat\n",
    "\n",
    "response = chain.invoke({\"input\": \"今天的天气很好，非常适合春游，能帮我推荐一个适合全家人出游的好去处吗？\"})\n",
    "response.content"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ce923e17-02cb-4805-bf35-f367199dc88e",
   "metadata": {},
   "source": [
    "## 4.  Model I/O之Output Parsers"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e46503cf-8ce6-41af-915e-8e5d934a1ab5",
   "metadata": {},
   "source": [
    "&emsp;&emsp;Output Parsers，即输出解析器，这个概念非常好理解，就是负责获取大模型的输出并将其转换为更合适的格式。这在应用开发中极其重要。在大多数复杂应用场景中，处理逻辑往往环环相扣，执行某项业务逻辑可能需要多次调用大模型，其中上一次的调用结果将被用于指导下一次调用的逻辑。在这种情况下，结构化的信息会比纯文本更有价值，这也正是输出解析器的价值所在。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5d603fce-322b-4e5d-9ea5-ac228e2672e9",
   "metadata": {},
   "source": [
    "&emsp;&emsp;LangChain构造的输出解析器必须实现两个主要方法：\n",
    "- Get format instructions：该方法会返回一个字符串，其中包含有关如何格式化语言模型输出的指令；\n",
    "- Parse：该方法会接收字符串，并将其解析为某种结构。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "034d44c0-3129-425e-92e4-1aacbe4a5938",
   "metadata": {},
   "source": [
    "&emsp;&emsp;目前支持的解析格式已包括JSON、XML、CSV以及OpenAI的Tools和Functions等多种格式，具体可参考：https://python.langchain.com/docs/modules/model_io/output_parsers/"
   ]
  },
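  {
   "cell_type": "markdown",
   "id": "3f9a2b10-7c4d-4e5f-8a6b-1d2e3f4a5b6c",
   "metadata": {},
   "source": [
    "&emsp;&emsp;为直观理解这两个方法的分工，下面用纯Python写一个极简的逗号分隔列表解析器作为示意（未继承LangChain的基类，仅演示接口约定）：\n",
    "```python\n",
    "class CommaSeparatedListParser:\n",
    "    \"\"\"极简示意：解析逗号分隔列表输出的解析器\"\"\"\n",
    "\n",
    "    def get_format_instructions(self) -> str:\n",
    "        # 返回需要注入提示词的格式化指令\n",
    "        return \"请以英文逗号分隔的列表形式回答，例如: foo, bar, baz\"\n",
    "\n",
    "    def parse(self, text: str) -> list:\n",
    "        # 将模型返回的字符串解析为Python列表\n",
    "        return [item.strip() for item in text.split(\",\") if item.strip()]\n",
    "\n",
    "parser = CommaSeparatedListParser()\n",
    "result = parser.parse(\"foo, bar , baz\")\n",
    "```\n",
    "这样解析出的`result`就是一个普通的Python列表，可以直接交给下游逻辑使用。"
   ]
  },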
  {
   "cell_type": "markdown",
   "id": "e553dc1b-1a0f-42c8-824f-71a6882a351f",
   "metadata": {},
   "source": [
    "### Datetime parser"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 142,
   "id": "5fd75642-0d24-4901-af64-3e1ae2b2e1ec",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain.output_parsers import DatetimeOutputParser\n",
    "from langchain.prompts import PromptTemplate\n",
    "from langchain_openai import OpenAI\n",
    "\n",
    "output_parser = DatetimeOutputParser()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 143,
   "id": "f046a631-d10b-4eb5-8b9c-4afe7f90146c",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"Write a datetime string that matches the following pattern: '%Y-%m-%dT%H:%M:%S.%fZ'.\\n\\nExamples: 1340-06-19T16:20:36.785423Z, 1449-09-11T02:24:41.532562Z, 1581-04-20T04:18:49.834816Z\\n\\nReturn ONLY this string, no other words!\""
      ]
     },
     "execution_count": 143,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "output_parser.get_format_instructions()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 144,
   "id": "aac62edf-0873-4098-bb03-7b2754d9704f",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "template = \"\"\"用户发起的提问:\n",
    "\n",
    "{question}\n",
    "\n",
    "{format_instructions}\"\"\"\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 145,
   "id": "555bcc32-2fed-480c-bc53-924ead3552ad",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "\n",
    "prompt = PromptTemplate.from_template(\n",
    "    template,\n",
    "    # 预定义的变量，这里我们传入格式化指令\n",
    "    partial_variables={\"format_instructions\": output_parser.get_format_instructions()},\n",
    ")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 146,
   "id": "c5c620c5-5fef-466c-9c9a-e190d03a28e2",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "PromptTemplate(input_variables=['question'], partial_variables={'format_instructions': \"Write a datetime string that matches the following pattern: '%Y-%m-%dT%H:%M:%S.%fZ'.\\n\\nExamples: 1763-04-12T15:47:45.056396Z, 161-02-08T17:48:02.682900Z, 526-10-31T17:39:53.463104Z\\n\\nReturn ONLY this string, no other words!\"}, template='用户发起的提问:\\n\\n{question}\\n\\n{format_instructions}')"
      ]
     },
     "execution_count": 146,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 149,
   "id": "37658faf-1730-493c-9de0-6379c92880f2",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "\n",
    "chain = prompt | chat | output_parser"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 150,
   "id": "85228251-9209-4511-a26e-7f12da8f78ec",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "datetime.datetime(2022, 5, 30, 12, 34, 56, 789012)"
      ]
     },
     "execution_count": 150,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "output = chain.invoke({\"question\": \"你好，请问你叫什么？\"})\n",
    "output\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 151,
   "id": "b3dd9693-c06f-44dd-ac16-2b42c89b3fd1",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2022-05-30 12:34:56.789012\n"
     ]
    }
   ],
   "source": [
    "print(output)"
   ]
  },
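  {
   "cell_type": "markdown",
   "id": "9b8c7d6e-5f4a-4b3c-9d2e-1f0a9b8c7d6e",
   "metadata": {},
   "source": [
    "&emsp;&emsp;`DatetimeOutputParser`的`parse`方法本质上是按格式化指令中给出的模式对模型返回的字符串调用`datetime.strptime`，可以手动验证这一行为（示意）：\n",
    "```python\n",
    "from datetime import datetime\n",
    "\n",
    "# 模拟大模型按格式化指令返回的日期字符串\n",
    "text = \"2022-05-30T12:34:56.789012Z\"\n",
    "parsed = datetime.strptime(text, \"%Y-%m-%dT%H:%M:%S.%fZ\")\n",
    "```"
   ]
  },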
  {
   "cell_type": "markdown",
   "id": "d174a3d5-16d4-4059-81cc-a146fa2e9763",
   "metadata": {},
   "source": [
    "### JSON parser"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 153,
   "id": "23ef566b-ede0-4343-b6e6-c9b814098883",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain_core.output_parsers import JsonOutputParser\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "from langchain_core.pydantic_v1 import BaseModel, Field\n",
    "from langchain_openai import ChatOpenAI"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 154,
   "id": "e21a684a-fc19-405e-abef-e9673ca15cc3",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# Define your desired data structure.\n",
    "class Joke(BaseModel):\n",
    "    setup: str = Field(description=\"question to set up a joke\")\n",
    "    punchline: str = Field(description=\"answer to resolve the joke\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 155,
   "id": "62a5a123-8157-42d1-8024-4b98814a7361",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'setup': \"Why don't scientists trust atoms?\",\n",
       " 'punchline': 'Because they make up everything!'}"
      ]
     },
     "execution_count": 155,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# And a query intended to prompt a language model to populate the data structure.\n",
    "joke_query = \"Tell me a joke.\"\n",
    "\n",
    "# Set up a parser + inject instructions into the prompt template.\n",
    "parser = JsonOutputParser(pydantic_object=Joke)\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    template=\"Answer the user query.\\n{format_instructions}\\n{query}\\n\",\n",
    "    input_variables=[\"query\"],\n",
    "    partial_variables={\"format_instructions\": parser.get_format_instructions()},\n",
    ")\n",
    "\n",
    "chain = prompt | chat | parser\n",
    "\n",
    "chain.invoke({\"query\": joke_query})"
   ]
  },
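  {
   "cell_type": "markdown",
   "id": "5a4b3c2d-1e0f-4a9b-8c7d-6e5f4a3b2c1d",
   "metadata": {},
   "source": [
    "&emsp;&emsp;模型有时会把JSON包裹在Markdown代码块中返回。这类容错处理的核心逻辑可以用一个简化函数来示意（假设性实现，并非LangChain源码）：\n",
    "```python\n",
    "import json\n",
    "\n",
    "FENCE = \"`\" * 3  # Markdown代码围栏标记\n",
    "\n",
    "def parse_json_block(text: str) -> dict:\n",
    "    \"\"\"从模型输出中提取JSON，兼容被Markdown代码块包裹的情况\"\"\"\n",
    "    lines = [ln for ln in text.strip().splitlines()\n",
    "             if not ln.strip().startswith(FENCE)]\n",
    "    return json.loads(\"\\n\".join(lines))\n",
    "```"
   ]
  },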
  {
   "cell_type": "markdown",
   "id": "798e7985-8425-419c-b8e8-2504651aedbb",
   "metadata": {},
   "source": [
    "## 5. Ollama项目"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3076270a-55ac-454f-9155-136c273162f9",
   "metadata": {},
   "source": [
    "&emsp;&emsp;Ollama是在Github上的一个开源项目，其项目定位是：**一个本地运行大模型的集成框架**，目前主要针对主流的LLaMA架构的开源大模型设计，通过将模型权重、配置文件和必要数据封装进由`Modelfile`定义的包中，从而实现大模型的下载、启动和本地运行的自动化部署及推理流程。此外，Ollama内置了一系列针对大模型运行和推理的优化策略，目前作为一个非常热门的大模型托管平台，已被包括LangChain、Taskweaver等在内的多个热门项目高度集成。\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6003d521-7cb7-470b-ba0b-a66e36ca1746",
   "metadata": {},
   "source": [
    "> Ollama官方地址：https://ollama.com/\n",
    "\n",
    "> Ollama Github开源地址：https://github.com/ollama/ollama\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dd19d21d-7b5c-45d5-8531-1ae9aabae10b",
   "metadata": {},
   "source": [
    "&emsp;&emsp;Ollama项目支持跨平台部署，目前已兼容Mac、Linux和Windows操作系统。特别地，其为Windows用户提供了非常直观的预览版，内置GPU加速功能、访问完整模型库的能力，以及兼容OpenAI接口的Ollama API，对Windows用户尤为友好。无论使用哪种操作系统，Ollama的安装过程都非常简单。根据后续课程的研发需求，我们选择以Linux版本为例进行详细介绍；其他操作系统版本的安装，大家可以参考上方的官方地址，根据自己的实际情况进行安装体验。\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4d670db5-6397-4f1f-b89c-4fdf7519a71b",
   "metadata": {},
   "source": [
    "&emsp;&emsp;一键安装的过程极为简便，仅需执行以下几条命令即可自动化完成：\n",
    "```bash\n",
    "sudo apt-get update\n",
    "sudo apt-get install pciutils lshw\n",
    "\n",
    "curl -fsSL https://ollama.com/install.sh | sh\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9c80a5b4-0d68-47e1-af68-eb39e0f85985",
   "metadata": {},
   "source": [
    "\n",
    "&emsp;&emsp;其中最后一条命令会从`https://ollama.com/`网站读取`install.sh`脚本，并立即通过`sh`执行该脚本。安装过程主要包含以下操作：\n",
    "1. 检查当前服务器的基础环境，如系统版本等；\n",
    "2. 下载Ollama的二进制文件；\n",
    "3. 配置系统服务，包括创建用户和用户组，添加Ollama的配置信息；\n",
    "4. 启动Ollama服务。\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dc53098e-9c8a-4ae4-9bc5-345615dae57e",
   "metadata": {},
   "source": [
    "## 6. LangChain调用私有模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "a6efed38-3c66-40ca-ae77-789c927db945",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain_community.chat_models import ChatOllama"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "75233225-66f7-4b06-85e7-183e1016aea1",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "ollama_llm = ChatOllama(model=\"qwen:0.5b-chat\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "31425ab1-2896-4f10-9392-c10154391f9b",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain_core.messages import HumanMessage\n",
    "\n",
    "messages = [\n",
    "    HumanMessage(\n",
    "        content=\"你好，请你介绍一下你自己\",\n",
    "    )\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "b0671039",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='您好！我是来自阿里云的超大规模语言模型“通义千问”。我拥有多项世界顶级的自然语言处理能力。我的目标是帮助用户获取准确、有用的信息，解决实际问题。', response_metadata={'model': 'qwen:0.5b-chat', 'created_at': '2024-05-10T04:28:56.912789971Z', 'message': {'role': 'assistant', 'content': ''}, 'done': True, 'total_duration': 4045618079, 'load_duration': 3578536348, 'prompt_eval_count': 13, 'prompt_eval_duration': 22027000, 'eval_count': 45, 'eval_duration': 313668000}, id='run-a81827f1-1827-42d9-b816-c176fca27a65-0')"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat_model_response = ollama_llm.invoke(messages)\n",
    "\n",
    "chat_model_response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "a3501a56-7492-4c4c-bff0-8ec22ed5f0b1",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'您好！我是来自阿里云的超大规模语言模型“通义千问”。我拥有多项世界顶级的自然语言处理能力。我的目标是帮助用户获取准确、有用的信息，解决实际问题。'"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat_model_response.content"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "357ab5e0",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "messages = [\n",
    "    HumanMessage(\n",
    "        content=\"请问什么是机器学习?\",\n",
    "    )\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "7843b54b",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='机器学习是一种人工智能技术，它的目标是通过训练数据来自动提取特征并进行分类或预测。机器学习主要应用于计算机视觉、自然语言处理等领域。', response_metadata={'model': 'qwen:0.5b-chat', 'created_at': '2024-05-10T04:29:40.321914775Z', 'message': {'role': 'assistant', 'content': ''}, 'done': True, 'total_duration': 413635745, 'load_duration': 2588838, 'prompt_eval_count': 10, 'prompt_eval_duration': 24992000, 'eval_count': 35, 'eval_duration': 248933000}, id='run-bb6283cb-8db9-4716-91dd-ef2f326ff0cc-0')"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat_model_response = ollama_llm.invoke(messages)\n",
    "\n",
    "chat_model_response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "1723ec5b",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'机器学习是一种人工智能技术，它的目标是通过训练数据来自动提取特征并进行分类或预测。机器学习主要应用于计算机视觉、自然语言处理等领域。'"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat_model_response.content"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "263ca98c",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatPromptTemplate(input_variables=['input_language', 'output_language', 'text'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input_language', 'output_language'], template='你是一个有用的助手，可以将{input_language}翻译成{output_language}。')), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['text'], template='{text}'))])"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.prompts.chat import ChatPromptTemplate\n",
    "\n",
    "# 构建模版\n",
    "template = \"你是一个有用的助手，可以将{input_language}翻译成{output_language}。\"\n",
    "human_template = \"{text}\"\n",
    "\n",
    "# 生成对话形式的聊天信息格式\n",
    "chat_prompt = ChatPromptTemplate.from_messages([\n",
    "    (\"system\", template),\n",
    "    (\"human\", human_template),\n",
    "])\n",
    "\n",
    "chat_prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "086ebb37",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content='你是一个有用的助手，可以将中文翻译成英语。'), HumanMessage(content='我爱编程')]"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 格式化变量输入\n",
    "messages = chat_prompt.format_messages(input_language=\"中文\", output_language=\"英语\", text=\"我爱编程\")\n",
    "messages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "ab19c4b2",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "I love programming.\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.chat_models import ChatOllama\n",
    "\n",
    "# 实例化Ollama启动的模型\n",
    "ollama_llm = ChatOllama(model=\"qwen:0.5b-chat\")\n",
    "\n",
    "# 执行推理\n",
    "result = ollama_llm.invoke(messages)\n",
    "print(result.content)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "d0490e5a",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content='你是一个有用的助手，可以将中文翻译成英语。'),\n",
       " HumanMessage(content='我喜欢打篮球')]"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 格式化变量输入\n",
    "messages = chat_prompt.format_messages(input_language=\"中文\", output_language=\"英语\", text=\"我喜欢打篮球\")\n",
    "messages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "a6dd865b-f4c7-431a-b31e-863fa656f560",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "I enjoy playing basketball.\n"
     ]
    }
   ],
   "source": [
    "# 执行推理\n",
    "result = ollama_llm.invoke(messages)\n",
    "print(result.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "076506ba-28b0-4e15-b43c-410a1508dd4b",
   "metadata": {},
   "source": [
    "## 7. LangChain调用外部函数"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "65413e14-4294-4fb5-9fec-6e7f6f8cffa0",
   "metadata": {},
   "source": [
    "### 7.1 大模型调用外部函数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "2ee32c6e-a372-403a-aacc-e3b0e8dd4c33",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "import json\n",
    "import io\n",
    "import inspect\n",
    "import requests"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "5f4d2672-3765-40f5-bebf-47dcfd352a9e",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "open_weather_key = \"YOUR_OPENWEATHER_API_KEY\"  # 请替换为你自己的OpenWeather API key"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "3b4d3cff-3f55-4b6d-ba72-d876352f347d",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "import requests\n",
    "\n",
    "# Step 1.构建请求\n",
    "url = \"https://api.openweathermap.org/data/2.5/weather\"\n",
    "\n",
    "# Step 2.设置查询参数\n",
    "params = {\n",
    "    \"q\": \"Beijing\",                                 # 查询北京实时天气\n",
    "    \"appid\": open_weather_key,    # 注意：这里需要替换为实际的 OpenWeather API key\n",
    "    \"units\": \"metric\",                              # 使用摄氏度而不是华氏度\n",
    "    \"lang\":\"zh_cn\"                                  # 输出语言为简体中文\n",
    "}\n",
    "\n",
    "# Step 3.发送GET请求\n",
    "response = requests.get(url, params=params)\n",
    "\n",
    "# Step 4.解析响应\n",
    "data = response.json()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "00f68907-eb65-4792-8b8d-8a5dac1b0e27",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'coord': {'lon': 116.3972, 'lat': 39.9075},\n",
       " 'weather': [{'id': 804,\n",
       "   'main': 'Clouds',\n",
       "   'description': '阴，多云',\n",
       "   'icon': '04d'}],\n",
       " 'base': 'stations',\n",
       " 'main': {'temp': 25.94,\n",
       "  'feels_like': 25.29,\n",
       "  'temp_min': 25.94,\n",
       "  'temp_max': 25.94,\n",
       "  'pressure': 1001,\n",
       "  'humidity': 27,\n",
       "  'sea_level': 1001,\n",
       "  'grnd_level': 995},\n",
       " 'visibility': 10000,\n",
       " 'wind': {'speed': 3.85, 'deg': 222, 'gust': 6.13},\n",
       " 'clouds': {'all': 94},\n",
       " 'dt': 1715316111,\n",
       " 'sys': {'type': 1,\n",
       "  'id': 9609,\n",
       "  'country': 'CN',\n",
       "  'sunrise': 1715288663,\n",
       "  'sunset': 1715339838},\n",
       " 'timezone': 28800,\n",
       " 'id': 1816670,\n",
       " 'name': 'Beijing',\n",
       " 'cod': 200}"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "68a3d9ed-6c97-4492-8c65-914fec628b37",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(25.94, 25.94)"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 即时温度最高、最低气温\n",
    "data['main']['temp_min'], data['main']['temp_max']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "8c13c0e6-f088-4aa9-b9c5-9657e392ca75",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'阴，多云'"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 天气状况\n",
    "data['weather'][0]['description']"
   ]
  },
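  {
   "cell_type": "markdown",
   "id": "7d6e5f4a-3b2c-4d1e-9f0a-8b7c6d5e4f3a",
   "metadata": {},
   "source": [
    "&emsp;&emsp;基于上面解析得到的JSON字典结构，可以封装一个小函数把关键信息整理成一句话摘要（函数名为示意，字段名以上文实际返回结果为准）：\n",
    "```python\n",
    "def summarize_weather(data: dict) -> str:\n",
    "    \"\"\"把OpenWeather返回的字典整理成一句中文摘要\"\"\"\n",
    "    return \"{}当前天气：{}，气温{}°C，湿度{}%\".format(\n",
    "        data[\"name\"],\n",
    "        data[\"weather\"][0][\"description\"],\n",
    "        data[\"main\"][\"temp\"],\n",
    "        data[\"main\"][\"humidity\"],\n",
    "    )\n",
    "```"
   ]
  },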
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "3344c272-add7-441e-b1be-69c8e29bba18",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "def get_weather(loc):\n",
    "    \"\"\"\n",
    "    查询即时天气函数\n",
    "    :param loc: 必要参数，字符串类型，用于表示查询天气的具体城市名称，\\\n",
    "    注意，中国的城市需要用对应城市的英文名称代替，例如查询北京市天气，则loc参数需要输入'Beijing'；\n",
    "    :return: OpenWeather API查询即时天气的结果，具体URL请求地址为：https://api.openweathermap.org/data/2.5/weather\\\n",
    "    返回结果对象类型为解析之后的JSON格式对象，并用字符串形式进行表示，其中包含了全部重要的天气信息\n",
    "    \"\"\"\n",
    "    # Step 1.构建请求\n",
    "    url = \"https://api.openweathermap.org/data/2.5/weather\"\n",
    "\n",
    "    # Step 2.设置查询参数\n",
    "    params = {\n",
    "        \"q\": loc,               \n",
    "        \"appid\": open_weather_key,    # 输入API key\n",
    "        \"units\": \"metric\",            # 使用摄氏度而不是华氏度\n",
    "        \"lang\":\"zh_cn\"                # 输出语言为简体中文\n",
    "    }\n",
    "\n",
    "    # Step 3.发送GET请求\n",
    "    response = requests.get(url, params=params)\n",
    "    \n",
    "    # Step 4.解析响应\n",
    "    data = response.json()\n",
    "    return json.dumps(data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "bf6c34c0-130e-4737-9ec5-29ac022d307a",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'{\"coord\": {\"lon\": 121.4581, \"lat\": 31.2222}, \"weather\": [{\"id\": 800, \"main\": \"Clear\", \"description\": \"\\\\u6674\", \"icon\": \"01d\"}], \"base\": \"stations\", \"main\": {\"temp\": 25.86, \"feels_like\": 25.91, \"temp_min\": 23.93, \"temp_max\": 26.63, \"pressure\": 1015, \"humidity\": 54}, \"visibility\": 10000, \"wind\": {\"speed\": 7, \"deg\": 150}, \"clouds\": {\"all\": 0}, \"dt\": 1715316082, \"sys\": {\"type\": 2, \"id\": 145096, \"country\": \"CN\", \"sunrise\": 1715288518, \"sunset\": 1715337554}, \"timezone\": 28800, \"id\": 1796236, \"name\": \"Shanghai\", \"cod\": 200}'"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "get_weather('ShangHai')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "a2c7e1b2-d962-404e-9222-c2582454c40d",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'{\"coord\": {\"lon\": 116.3972, \"lat\": 39.9075}, \"weather\": [{\"id\": 804, \"main\": \"Clouds\", \"description\": \"\\\\u9634\\\\uff0c\\\\u591a\\\\u4e91\", \"icon\": \"04d\"}], \"base\": \"stations\", \"main\": {\"temp\": 25.94, \"feels_like\": 25.29, \"temp_min\": 25.94, \"temp_max\": 25.94, \"pressure\": 1001, \"humidity\": 27, \"sea_level\": 1001, \"grnd_level\": 995}, \"visibility\": 10000, \"wind\": {\"speed\": 3.85, \"deg\": 222, \"gust\": 6.13}, \"clouds\": {\"all\": 94}, \"dt\": 1715316111, \"sys\": {\"type\": 1, \"id\": 9609, \"country\": \"CN\", \"sunrise\": 1715288663, \"sunset\": 1715339838}, \"timezone\": 28800, \"id\": 1816670, \"name\": \"Beijing\", \"cod\": 200}'"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "get_weather('Beijing')"
   ]
  },
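  {
   "cell_type": "markdown",
   "id": "weather-parse-demo-0001",
   "metadata": {},
   "source": [
    "&emsp;&emsp;`get_weather`返回的是序列化后的JSON字符串，可以用`json.loads`还原为字典后再提取关键信息。下面给出一个最小示意，其中`weather_str`是手工截取的假设片段，并非完整的API返回结果：\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# 假设的天气结果片段，字段名与OpenWeather API返回结构保持一致\n",
    "weather_str = '{\"weather\": [{\"description\": \"晴\"}], \"main\": {\"temp\": 25.86, \"humidity\": 54}, \"name\": \"Shanghai\"}'\n",
    "\n",
    "data = json.loads(weather_str)\n",
    "summary = f'{data[\"name\"]}：{data[\"weather\"][0][\"description\"]}，{data[\"main\"][\"temp\"]}°C，湿度{data[\"main\"][\"humidity\"]}%'\n",
    "print(summary)\n",
    "```"
   ]
  },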
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "51477af6-2a2e-471e-9cee-58fdcd7a8a37",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "import openai\n",
    "import os\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import json\n",
    "import io\n",
    "from openai import OpenAI\n",
    "import inspect\n",
    "\n",
    "openai.api_key = os.getenv(\"OPENAI_API_KEY\")\n",
    "openai.api_base = \"https://newone.nxykj.tech/v1\"\n",
    "\n",
    "\n",
    "client = OpenAI(api_key=openai.api_key, base_url=openai.api_base)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "60c2ac6d-d0be-4c39-a098-a783b498306b",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "def auto_functions(functions_list):\n",
    "    \"\"\"\n",
    "    Chat模型的functions参数编写函数\n",
    "    :param functions_list: 包含一个或者多个函数对象的列表；\n",
    "    :return：满足Chat模型functions参数要求的functions对象\n",
    "    \"\"\"\n",
    "    def functions_generate(functions_list):\n",
    "        # 创建空列表，用于保存每个函数的描述字典\n",
    "        functions = []\n",
    "        # 对每个外部函数进行循环\n",
    "        for function in functions_list:\n",
    "            # 读取函数对象的函数说明\n",
    "            function_description = inspect.getdoc(function)\n",
    "            # 读取函数的函数名字符串\n",
    "            function_name = function.__name__\n",
    "\n",
    "            system_prompt = '以下是某个函数的函数说明：%s' % function_description\n",
    "            user_prompt = '根据这个函数的函数说明，请帮我创建一个JSON格式的字典，这个字典有如下5点要求：\\\n",
    "                           1.字典总共有三个键值对；\\\n",
    "                           2.第一个键值对的Key是字符串name，value是该函数的名字：%s，也是字符串；\\\n",
    "                           3.第二个键值对的Key是字符串description，value是该函数的功能说明，也是字符串；\\\n",
    "                           4.第三个键值对的Key是字符串parameters，value是一个JSON Schema对象，用于说明该函数的参数输入规范。\\\n",
    "                           5.输出结果必须是一个JSON格式的字典，只输出这个字典即可，前后不需要任何修饰或说明的语句' % function_name\n",
    "\n",
    "            response = client.chat.completions.create(\n",
    "                              model=\"gpt-3.5-turbo\",\n",
    "                              messages=[\n",
    "                                {\"role\": \"system\", \"content\": system_prompt},\n",
    "                                {\"role\": \"user\", \"content\": user_prompt}\n",
    "                              ]\n",
    "                            )\n",
    "            json_function_description=json.loads(response.choices[0].message.content.replace(\"```\",\"\").replace(\"json\",\"\"))\n",
    "            json_str={\"type\": \"function\",\"function\":json_function_description}\n",
    "            functions.append(json_str)\n",
    "        return functions\n",
    "    ## 最多尝试4次\n",
    "    max_attempts = 4\n",
    "    attempts = 0\n",
    "\n",
    "    while attempts < max_attempts:\n",
    "        try:\n",
    "            functions = functions_generate(functions_list)\n",
    "            break  # 如果代码成功执行，跳出循环\n",
    "        except Exception as e:\n",
    "            attempts += 1  # 增加尝试次数\n",
    "            print(\"发生错误：\", e)\n",
    "            if attempts == max_attempts:\n",
    "                print(\"已达到最大尝试次数，程序终止。\")\n",
    "                raise  # 重新引发最后一个异常\n",
    "            else:\n",
    "                print(\"正在重新运行...\")\n",
    "    return functions"
   ]
  },
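  {
   "cell_type": "markdown",
   "id": "getdoc-demo-0002",
   "metadata": {},
   "source": [
    "&emsp;&emsp;`auto_functions`能否生成正确的functions参数，首先取决于`inspect.getdoc`提取到的函数说明是否完整。这一步可以脱离大模型单独验证（以下为纯Python示意，`sample_tool`为假设的演示函数）：\n",
    "\n",
    "```python\n",
    "import inspect\n",
    "\n",
    "def sample_tool(loc):\n",
    "    '''查询即时天气函数\n",
    "    :param loc: 必要参数，字符串类型，表示查询天气的城市名称'''\n",
    "    return loc\n",
    "\n",
    "# inspect.getdoc会去掉公共缩进并返回整理后的docstring，auto_functions正是以此为依据让模型生成schema\n",
    "doc = inspect.getdoc(sample_tool)\n",
    "print(doc.splitlines()[0])\n",
    "```"
   ]
  },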
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "d7858c93-a296-42d7-94e0-e9c837ccecbc",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[<function __main__.get_weather(loc)>]"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "functions_list = [get_weather]\n",
    "functions_list"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "3519c073-95e5-483f-8627-478cf0b5de63",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'type': 'function',\n",
       "  'function': {'name': 'get_weather',\n",
       "   'description': '查询即时天气函数',\n",
       "   'parameters': {'loc': {'type': 'string',\n",
       "     'description': \"必要参数，用于表示查询天气的具体城市名称。中国的城市需要用对应城市的英文名称代替，例如如果需要查询北京市天气，则loc参数需要输入'Beijing'。\"}}}}]"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "functions = auto_functions(functions_list)\n",
    "functions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "5baf9bff-098b-4e4a-b86e-407851018446",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "def run_conversation(messages, functions_list=None, model=\"gpt-3.5-turbo\"):\n",
    "    \"\"\"\n",
    "    能够自动执行外部函数调用的对话模型\n",
    "    :param messages: 必要参数，字典类型，输入到Chat模型的messages参数对象\n",
    "    :param functions_list: 可选参数，默认为None，可以设置为包含全部外部函数的列表对象\n",
    "    :param model: Chat模型，可选参数，默认模型为gpt-3.5-turbo\n",
    "    :return：Chat模型输出结果\n",
    "    \"\"\"\n",
    "    # 如果没有外部函数库，则执行普通的对话任务\n",
    "    if functions_list is None:\n",
    "        response = client.chat.completions.create(\n",
    "                        model=model,\n",
    "                        messages=messages,\n",
    "                        )\n",
    "        response_message = response.choices[0].message\n",
    "        final_response = response_message.content\n",
    "        \n",
    "    # 若存在外部函数库，则需要灵活选取外部函数并进行回答\n",
    "    else:\n",
    "        # 创建functions对象\n",
    "        tools = auto_functions(functions_list)\n",
    "\n",
    "        # 创建外部函数库字典\n",
    "        available_functions = {func.__name__: func for func in functions_list}\n",
    "\n",
    "        # 第一次调用大模型\n",
    "        response = client.chat.completions.create(\n",
    "                        model=model,\n",
    "                        messages=messages,\n",
    "                        tools=tools,\n",
    "                        tool_choice=\"auto\", )\n",
    "        response_message = response.choices[0].message\n",
    "\n",
    "\n",
    "        tool_calls = response_message.tool_calls\n",
    "\n",
    "        if tool_calls:\n",
    "\n",
    "            messages.append(response_message) \n",
    "            for tool_call in tool_calls:\n",
    "                function_name = tool_call.function.name\n",
    "                function_to_call = available_functions[function_name]\n",
    "                function_args = json.loads(tool_call.function.arguments)\n",
    "                ## 真正执行外部函数的就是这儿的代码\n",
    "                function_response = function_to_call(**function_args)\n",
    "                messages.append(\n",
    "                    {\n",
    "                        \"tool_call_id\": tool_call.id,\n",
    "                        \"role\": \"tool\",\n",
    "                        \"name\": function_name,\n",
    "                        \"content\": function_response,\n",
    "                    }\n",
    "                ) \n",
    "            ## 第二次调用模型\n",
    "            second_response = client.chat.completions.create(\n",
    "                model=model,\n",
    "                messages=messages,\n",
    "            ) \n",
    "            # 获取最终结果\n",
    "            final_response = second_response.choices[0].message.content\n",
    "        else:\n",
    "            final_response = response_message.content\n",
    "                \n",
    "    return final_response"
   ]
  },
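  {
   "cell_type": "markdown",
   "id": "toolmsg-demo-0003",
   "metadata": {},
   "source": [
    "&emsp;&emsp;`run_conversation`在第二次调用模型之前，会先把第一次响应的assistant消息追加进messages，再为每个tool_call追加一条`role`为`tool`的消息。这段消息拼装逻辑可以脱离真实API单独示意（以下`tool_call_id`与函数执行结果均为假设值）：\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# 初始对话\n",
    "messages = [{\"role\": \"user\", \"content\": \"今天北京的天气如何？\"}]\n",
    "\n",
    "# 假设从 response_message.tool_calls 中解析出的工具调用信息\n",
    "tool_call_id = \"call_demo_001\"\n",
    "function_name = \"get_weather\"\n",
    "function_args = json.loads('{\"loc\": \"Beijing\"}')\n",
    "\n",
    "# 假设的外部函数执行结果（真实场景为 get_weather(**function_args) 的返回值）\n",
    "function_response = json.dumps({\"name\": function_args[\"loc\"], \"main\": {\"temp\": 25.9}})\n",
    "\n",
    "# 按 run_conversation 中的格式，把执行结果作为 role=\"tool\" 的消息追加回对话\n",
    "messages.append(\n",
    "    {\n",
    "        \"tool_call_id\": tool_call_id,\n",
    "        \"role\": \"tool\",\n",
    "        \"name\": function_name,\n",
    "        \"content\": function_response,\n",
    "    }\n",
    ")\n",
    "\n",
    "print(messages[-1][\"role\"], len(messages))\n",
    "```"
   ]
  },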
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "a10f74ed-3d6a-4ee7-8d2e-a498db2ff49a",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "functions_list = [get_weather]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "01f9fab2-ccdb-4c77-b06f-887e4e7d4043",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'北京今天的天气是多云，气温约为26.94摄氏度，湿度为27%。风速约为3.85米/秒，风向为222度。整体来看是一个舒适的天气。'"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages = [{\"role\": \"user\", \"content\": '今天北京的天气如何？'}]\n",
    "\n",
    "run_conversation(messages=messages, \n",
    "                 functions_list=functions_list, \n",
    "                )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "984d93f5-18e7-4c2d-ba47-074cf16427a4",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "def chat_with_model(functions_list=None, \n",
    "                    prompt=\"你好呀\", \n",
    "                    model=\"gpt-3.5-turbo\",\n",
    "                    system_message=[{\"role\": \"system\", \"content\": \"你是一位乐于助人的助手。\"}]):\n",
    "    \n",
    "    messages = system_message\n",
    "    messages.append({\"role\": \"user\", \"content\": prompt})\n",
    "    \n",
    "    while True:           \n",
    "        answer = run_conversation(messages=messages, \n",
    "                                    functions_list=functions_list, \n",
    "                                    model=model)\n",
    "        \n",
    "        \n",
    "        print(f\"模型回答: {answer}\")\n",
    "\n",
    "        # 询问用户是否还有其他问题\n",
    "        user_input = input(\"您还有其他问题吗？(输入退出以结束对话): \")\n",
    "        if user_input == \"退出\":\n",
    "            break\n",
    "\n",
    "        # 记录用户回答\n",
    "        messages.append({\"role\": \"user\", \"content\": user_input})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "bde16ef7-b3c0-4101-b074-02884d4cb560",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "模型回答: 你好！北京今天的天气预报显示晴天，最高温度约为20摄氏度，最低温度约为8摄氏度。\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "您还有其他问题吗？(输入退出以结束对话):  杭州今天的天气如何？\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "模型回答: 根据最新的天气数据，北京今天的天气为多云，气温为26.94摄氏度，相对湿度为27%。而杭州今天的天气也是多云，气温为30.5摄氏度，相对湿度为45%。希望这个信息对你有帮助！\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "您还有其他问题吗？(输入退出以结束对话):  介绍一下你自己？\n"
     ]
    },
    {
     "ename": "BadRequestError",
     "evalue": "Error code: 400 - {'error': {'message': 'Invalid schema for function \\'get_weather\\': schema must be a JSON Schema of \\'type: \"object\"\\', got \\'type: \"None\"\\'. (request id: 20240510125413744519940nY7mYLbU) (request id: 20240510125413509143154EAcrgXet) (request id: 20240510125358938800532vDAScyrY) (request id: 20240510125413152448211KotLCiyL)', 'type': 'invalid_request_error', 'param': '', 'code': None}}",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mBadRequestError\u001b[0m                           Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[35], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[43mchat_with_model\u001b[49m\u001b[43m(\u001b[49m\u001b[43mfunctions_list\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mprompt\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43m你好\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n",
      "Cell \u001b[0;32mIn[31], line 10\u001b[0m, in \u001b[0;36mchat_with_model\u001b[0;34m(functions_list, prompt, model, system_message)\u001b[0m\n\u001b[1;32m      7\u001b[0m messages\u001b[38;5;241m.\u001b[39mappend({\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mrole\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124muser\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mcontent\u001b[39m\u001b[38;5;124m\"\u001b[39m: prompt})\n\u001b[1;32m      9\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m \u001b[38;5;28;01mTrue\u001b[39;00m:           \n\u001b[0;32m---> 10\u001b[0m     answer \u001b[38;5;241m=\u001b[39m \u001b[43mrun_conversation\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmessages\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmessages\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\n\u001b[1;32m     11\u001b[0m \u001b[43m                                \u001b[49m\u001b[43mfunctions_list\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mfunctions_list\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\n\u001b[1;32m     12\u001b[0m \u001b[43m                                \u001b[49m\u001b[43mmodel\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmodel\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     15\u001b[0m     \u001b[38;5;28mprint\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m模型回答: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00manswer\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m     17\u001b[0m     \u001b[38;5;66;03m# 询问用户是否还有其他问题\u001b[39;00m\n",
      "Cell \u001b[0;32mIn[28], line 27\u001b[0m, in \u001b[0;36mrun_conversation\u001b[0;34m(messages, functions_list, model)\u001b[0m\n\u001b[1;32m     24\u001b[0m available_functions \u001b[38;5;241m=\u001b[39m {func\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m: func \u001b[38;5;28;01mfor\u001b[39;00m func \u001b[38;5;129;01min\u001b[39;00m functions_list}\n\u001b[1;32m     26\u001b[0m \u001b[38;5;66;03m# 第一次调用大模型\u001b[39;00m\n\u001b[0;32m---> 27\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[43mclient\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mchat\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcompletions\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcreate\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m     28\u001b[0m \u001b[43m                \u001b[49m\u001b[43mmodel\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmodel\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m     29\u001b[0m \u001b[43m                \u001b[49m\u001b[43mmessages\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmessages\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m     30\u001b[0m \u001b[43m                \u001b[49m\u001b[43mtools\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtools\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m     31\u001b[0m \u001b[43m                \u001b[49m\u001b[43mtool_choice\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mauto\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     32\u001b[0m response_message \u001b[38;5;241m=\u001b[39m response\u001b[38;5;241m.\u001b[39mchoices[\u001b[38;5;241m0\u001b[39m]\u001b[38;5;241m.\u001b[39mmessage\n\u001b[1;32m     35\u001b[0m tool_calls \u001b[38;5;241m=\u001b[39m response_message\u001b[38;5;241m.\u001b[39mtool_calls\n",
      "File \u001b[0;32m~/miniconda3/lib/python3.8/site-packages/openai/_utils/_utils.py:277\u001b[0m, in \u001b[0;36mrequired_args.<locals>.inner.<locals>.wrapper\u001b[0;34m(*args, **kwargs)\u001b[0m\n\u001b[1;32m    275\u001b[0m             msg \u001b[38;5;241m=\u001b[39m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mMissing required argument: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mquote(missing[\u001b[38;5;241m0\u001b[39m])\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m    276\u001b[0m     \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mTypeError\u001b[39;00m(msg)\n\u001b[0;32m--> 277\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mfunc\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/miniconda3/lib/python3.8/site-packages/openai/resources/chat/completions.py:579\u001b[0m, in \u001b[0;36mCompletions.create\u001b[0;34m(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)\u001b[0m\n\u001b[1;32m    548\u001b[0m \u001b[38;5;129m@required_args\u001b[39m([\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmessages\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmodel\u001b[39m\u001b[38;5;124m\"\u001b[39m], [\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmessages\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmodel\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mstream\u001b[39m\u001b[38;5;124m\"\u001b[39m])\n\u001b[1;32m    549\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mcreate\u001b[39m(\n\u001b[1;32m    550\u001b[0m     \u001b[38;5;28mself\u001b[39m,\n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m    577\u001b[0m     timeout: \u001b[38;5;28mfloat\u001b[39m \u001b[38;5;241m|\u001b[39m httpx\u001b[38;5;241m.\u001b[39mTimeout \u001b[38;5;241m|\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;241m|\u001b[39m NotGiven \u001b[38;5;241m=\u001b[39m NOT_GIVEN,\n\u001b[1;32m    578\u001b[0m ) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m ChatCompletion \u001b[38;5;241m|\u001b[39m Stream[ChatCompletionChunk]:\n\u001b[0;32m--> 579\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_post\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m    580\u001b[0m \u001b[43m        \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43m/chat/completions\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m",
      "\u001b[1;32m    581\u001b[0m \u001b[43m        \u001b[49m\u001b[43mbody\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmaybe_transform\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m    582\u001b[0m \u001b[43m            \u001b[49m\u001b[43m{\u001b[49m\n\u001b[1;32m    583\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmessages\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mmessages\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    584\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmodel\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mmodel\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    585\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mfrequency_penalty\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mfrequency_penalty\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    586\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mfunction_call\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mfunction_call\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    587\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mfunctions\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mfunctions\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    588\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mlogit_bias\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mlogit_bias\u001b[49m\u001b[43m,\u001b[49m",
      "\u001b[1;32m    589\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mlogprobs\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mlogprobs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    590\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmax_tokens\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mmax_tokens\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    591\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mn\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mn\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    592\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mpresence_penalty\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mpresence_penalty\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    593\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mresponse_format\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mresponse_format\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    594\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mseed\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mseed\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    595\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mstop\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mstop\u001b[49m\u001b[43m,\u001b[49m",
      "\u001b[1;32m    596\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mstream\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mstream\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    597\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mtemperature\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mtemperature\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    598\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mtool_choice\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mtool_choice\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    599\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mtools\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mtools\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    600\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mtop_logprobs\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mtop_logprobs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    601\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mtop_p\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mtop_p\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    602\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43muser\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43muser\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    603\u001b[0m \u001b[43m            \u001b[49m\u001b[43m}\u001b[49m\u001b[43m,\u001b[49m",
      "\u001b[1;32m    604\u001b[0m \u001b[43m            \u001b[49m\u001b[43mcompletion_create_params\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mCompletionCreateParams\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    605\u001b[0m \u001b[43m        \u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    606\u001b[0m \u001b[43m        \u001b[49m\u001b[43moptions\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmake_request_options\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m    607\u001b[0m \u001b[43m            \u001b[49m\u001b[43mextra_headers\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mextra_headers\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mextra_query\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mextra_query\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mextra_body\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mextra_body\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtimeout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtimeout\u001b[49m\n\u001b[1;32m    608\u001b[0m \u001b[43m        \u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    609\u001b[0m \u001b[43m        \u001b[49m\u001b[43mcast_to\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mChatCompletion\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    610\u001b[0m \u001b[43m        \u001b[49m\u001b[43mstream\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01mor\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mFalse\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m    611\u001b[0m \u001b[43m        \u001b[49m\u001b[43mstream_cls\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mStream\u001b[49m\u001b[43m[\u001b[49m\u001b[43mChatCompletionChunk\u001b[49m\u001b[43m]\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    612\u001b[0m \u001b[43m    \u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/miniconda3/lib/python3.8/site-packages/openai/_base_client.py:1240\u001b[0m, in \u001b[0;36mSyncAPIClient.post\u001b[0;34m(self, path, cast_to, body, options, files, stream, stream_cls)\u001b[0m\n\u001b[1;32m   1226\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mpost\u001b[39m(\n\u001b[1;32m   1227\u001b[0m     \u001b[38;5;28mself\u001b[39m,\n\u001b[1;32m   1228\u001b[0m     path: \u001b[38;5;28mstr\u001b[39m,\n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m   1235\u001b[0m     stream_cls: \u001b[38;5;28mtype\u001b[39m[_StreamT] \u001b[38;5;241m|\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m,\n\u001b[1;32m   1236\u001b[0m ) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m ResponseT \u001b[38;5;241m|\u001b[39m _StreamT:\n\u001b[1;32m   1237\u001b[0m     opts \u001b[38;5;241m=\u001b[39m FinalRequestOptions\u001b[38;5;241m.\u001b[39mconstruct(\n\u001b[1;32m   1238\u001b[0m         method\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mpost\u001b[39m\u001b[38;5;124m\"\u001b[39m, url\u001b[38;5;241m=\u001b[39mpath, json_data\u001b[38;5;241m=\u001b[39mbody, files\u001b[38;5;241m=\u001b[39mto_httpx_files(files), \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39moptions\n\u001b[1;32m   1239\u001b[0m     )\n\u001b[0;32m-> 1240\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m cast(ResponseT, \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[43m(\u001b[49m\u001b[43mcast_to\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mopts\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstream\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstream_cls\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream_cls\u001b[49m\u001b[43m)\u001b[49m)\n",
      "File \u001b[0;32m~/miniconda3/lib/python3.8/site-packages/openai/_base_client.py:921\u001b[0m, in \u001b[0;36mSyncAPIClient.request\u001b[0;34m(self, cast_to, options, remaining_retries, stream, stream_cls)\u001b[0m\n\u001b[1;32m    912\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mrequest\u001b[39m(\n\u001b[1;32m    913\u001b[0m     \u001b[38;5;28mself\u001b[39m,\n\u001b[1;32m    914\u001b[0m     cast_to: Type[ResponseT],\n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m    919\u001b[0m     stream_cls: \u001b[38;5;28mtype\u001b[39m[_StreamT] \u001b[38;5;241m|\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m,\n\u001b[1;32m    920\u001b[0m ) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m ResponseT \u001b[38;5;241m|\u001b[39m _StreamT:\n\u001b[0;32m--> 921\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_request\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m    922\u001b[0m \u001b[43m        \u001b[49m\u001b[43mcast_to\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcast_to\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    923\u001b[0m \u001b[43m        \u001b[49m\u001b[43moptions\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43moptions\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    924\u001b[0m \u001b[43m        \u001b[49m\u001b[43mstream\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    925\u001b[0m \u001b[43m        \u001b[49m\u001b[43mstream_cls\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream_cls\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    926\u001b[0m \u001b[43m        \u001b[49m\u001b[43mremaining_retries\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mremaining_retries\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    927\u001b[0m \u001b[43m    \u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/miniconda3/lib/python3.8/site-packages/openai/_base_client.py:1020\u001b[0m, in \u001b[0;36mSyncAPIClient._request\u001b[0;34m(self, cast_to, options, remaining_retries, stream, stream_cls)\u001b[0m\n\u001b[1;32m   1017\u001b[0m         err\u001b[38;5;241m.\u001b[39mresponse\u001b[38;5;241m.\u001b[39mread()\n\u001b[1;32m   1019\u001b[0m     log\u001b[38;5;241m.\u001b[39mdebug(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mRe-raising status error\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m-> 1020\u001b[0m     \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_make_status_error_from_response(err\u001b[38;5;241m.\u001b[39mresponse) \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;28mNone\u001b[39m\n\u001b[1;32m   1022\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_process_response(\n\u001b[1;32m   1023\u001b[0m     cast_to\u001b[38;5;241m=\u001b[39mcast_to,\n\u001b[1;32m   1024\u001b[0m     options\u001b[38;5;241m=\u001b[39moptions,\n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m   1027\u001b[0m     stream_cls\u001b[38;5;241m=\u001b[39mstream_cls,\n\u001b[1;32m   1028\u001b[0m )\n",
      "\u001b[0;31mBadRequestError\u001b[0m: Error code: 400 - {'error': {'message': 'Invalid schema for function \\'get_weather\\': schema must be a JSON Schema of \\'type: \"object\"\\', got \\'type: \"None\"\\'. (request id: 20240510125413744519940nY7mYLbU) (request id: 20240510125413509143154EAcrgXet) (request id: 20240510125358938800532vDAScyrY) (request id: 20240510125413152448211KotLCiyL)', 'type': 'invalid_request_error', 'param': '', 'code': None}}"
     ]
    }
   ],
   "source": [
    "chat_with_model(functions_list, prompt=\"你好\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "63b598cc-5294-4004-b8e7-d46d9a7c19c1",
   "metadata": {},
   "source": [
    "### 7.2 LangChain调用外部函数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "c8841e75-7a7f-4121-8ceb-c9208589d906",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain.tools import BaseTool, StructuredTool, tool"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "746d3494-3175-44e5-8e88-3b820811bfcb",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "@tool\n",
    "def get_weather(loc):\n",
    "    \"\"\"\n",
    "    查询即时天气函数\n",
    "    :param loc: 必要参数，字符串类型，用于表示查询天气的具体城市名称，\\\n",
    "    注意，中国的城市需要用对应城市的英文名称代替，例如如果需要查询北京市天气，则loc参数需要输入'Beijing'；\n",
    "    :return：OpenWeather API查询即时天气的结果，具体URL请求地址为：https://api.openweathermap.org/data/2.5/weather\\\n",
    "    返回结果对象类型为解析之后的JSON格式对象，并用字符串形式进行表示，其中包含了全部重要的天气信息\n",
    "    \"\"\"\n",
    "    # Step 1. Build the request URL\n",
    "    url = \"https://api.openweathermap.org/data/2.5/weather\"\n",
    "\n",
    "    # Step 2. Set the query parameters\n",
    "    params = {\n",
    "        \"q\": loc,               \n",
    "        \"appid\": open_weather_key,    # the OpenWeather API key\n",
    "        \"units\": \"metric\",            # use Celsius instead of Fahrenheit\n",
    "        \"lang\":\"zh_cn\"                # return descriptions in Simplified Chinese\n",
    "    }\n",
    "\n",
    "    # Step 3. Send the GET request\n",
    "    response = requests.get(url, params=params)\n",
    "    \n",
    "    # Step 4. Parse the response\n",
    "    data = response.json()\n",
    "    return json.dumps(data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "7d0a26d9-e82f-4b68-b4b7-e62025bc7070",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "StructuredTool(name='get_weather', description=\"get_weather(loc) - 查询即时天气函数\\n    :param loc: 必要参数，字符串类型，用于表示查询天气的具体城市名称，    注意，中国的城市需要用对应城市的英文名称代替，例如如果需要查询北京市天气，则loc参数需要输入'Beijing'；\\n    :return：OpenWeather API查询即时天气的结果，具体URL请求地址为：https://api.openweathermap.org/data/2.5/weather    返回结果对象类型为解析之后的JSON格式对象，并用字符串形式进行表示，其中包含了全部重要的天气信息\", args_schema=<class 'pydantic.v1.main.get_weatherSchema'>, func=<function get_weather at 0x7ff8c8638160>)"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "get_weather"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0f953aeb-792b-49f7-8411-f7c5d4bdd73f",
   "metadata": {},
   "source": [
    "&emsp;&emsp;As shown above, the `@tool` decorator converts the `get_weather` function directly into a tool that can be invoked and whose results can be processed. The tool also exposes several built-in attributes, such as `name`, `description`, and `args`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "aef14ef8-f3b9-4fc5-bc6f-b5b6bd8dd355",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "get_weather\n",
      "get_weather(loc) - 查询即时天气函数\n",
      "    :param loc: 必要参数，字符串类型，用于表示查询天气的具体城市名称，    注意，中国的城市需要用对应城市的英文名称代替，例如如果需要查询北京市天气，则loc参数需要输入'Beijing'；\n",
      "    :return：OpenWeather API查询即时天气的结果，具体URL请求地址为：https://api.openweathermap.org/data/2.5/weather    返回结果对象类型为解析之后的JSON格式对象，并用字符串形式进行表示，其中包含了全部重要的天气信息\n",
      "{'loc': {'title': 'Loc'}}\n"
     ]
    }
   ],
   "source": [
    "print(get_weather.name)\n",
    "print(get_weather.description)\n",
    "print(get_weather.args)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "37dd17e9-a89b-4a88-b6e2-42e32c3326a1",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'{\"coord\": {\"lon\": 116.3972, \"lat\": 39.9075}, \"weather\": [{\"id\": 804, \"main\": \"Clouds\", \"description\": \"\\\\u9634\\\\uff0c\\\\u591a\\\\u4e91\", \"icon\": \"04d\"}], \"base\": \"stations\", \"main\": {\"temp\": 26.94, \"feels_like\": 26.27, \"temp_min\": 26.94, \"temp_max\": 26.94, \"pressure\": 1001, \"humidity\": 27, \"sea_level\": 1001, \"grnd_level\": 995}, \"visibility\": 10000, \"wind\": {\"speed\": 3.85, \"deg\": 222, \"gust\": 6.13}, \"clouds\": {\"all\": 94}, \"dt\": 1715317603, \"sys\": {\"type\": 1, \"id\": 9609, \"country\": \"CN\", \"sunrise\": 1715288663, \"sunset\": 1715339838}, \"timezone\": 28800, \"id\": 1816670, \"name\": \"Beijing\", \"cod\": 200}'"
      ]
     },
     "execution_count": 40,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "get_weather.invoke({\"loc\": \"Beijing\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "id": "9f95478a-6657-4597-8668-064a3535f35d",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "RunnableBinding(bound=ChatOpenAI(client=<openai.resources.chat.completions.Completions object at 0x7ff8c8180310>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x7ff8c8184a60>, openai_api_key=SecretStr('**********'), openai_api_base='https://newone.nxykj.tech/v1', openai_proxy=''), kwargs={'tools': [{'type': 'function', 'function': {'name': 'get_weather', 'description': \"get_weather(loc) - 查询即时天气函数\\n    :param loc: 必要参数，字符串类型，用于表示查询天气的具体城市名称，    注意，中国的城市需要用对应城市的英文名称代替，例如如果需要查询北京市天气，则loc参数需要输入'Beijing'；\\n    :return：OpenWeather API查询即时天气的结果，具体URL请求地址为：https://api.openweathermap.org/data/2.5/weather    返回结果对象类型为解析之后的JSON格式对象，并用字符串形式进行表示，其中包含了全部重要的天气信息\", 'parameters': {'type': 'object', 'properties': {'loc': {}}, 'required': ['loc']}}}]})"
      ]
     },
     "execution_count": 44,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm_with_tools = chat.bind_tools([get_weather])\n",
    "llm_with_tools"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "id": "4d9bc002-ea06-439b-8d80-8d1e39f0bd53",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_KhcAfPhgY6adqs0am5WYqy16', 'function': {'arguments': '{\"loc\":\"Beijing\"}', 'name': 'get_weather'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 198, 'total_tokens': 213}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_2f57f81c11', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-64d560b0-9485-4c6f-a7e3-a47fee73fa49-0', tool_calls=[{'name': 'get_weather', 'args': {'loc': 'Beijing'}, 'id': 'call_KhcAfPhgY6adqs0am5WYqy16'}])"
      ]
     },
     "execution_count": 45,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm_with_tools.invoke(\"北京的天气怎么样？\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "id": "c39dee4b-e41d-4969-8b17-8cd8a978020e",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'type': 'function',\n",
       "  'function': {'name': 'get_weather',\n",
       "   'description': \"get_weather(loc) - 查询即时天气函数\\n    :param loc: 必要参数，字符串类型，用于表示查询天气的具体城市名称，    注意，中国的城市需要用对应城市的英文名称代替，例如如果需要查询北京市天气，则loc参数需要输入'Beijing'；\\n    :return：OpenWeather API查询即时天气的结果，具体URL请求地址为：https://api.openweathermap.org/data/2.5/weather    返回结果对象类型为解析之后的JSON格式对象，并用字符串形式进行表示，其中包含了全部重要的天气信息\",\n",
       "   'parameters': {'type': 'object',\n",
       "    'properties': {'loc': {}},\n",
       "    'required': ['loc']}}}]"
      ]
     },
     "execution_count": 46,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm_with_tools.kwargs[\"tools\"]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f580c2be-b5a4-48d1-ac7b-23b0a0057a0f",
   "metadata": {},
   "source": [
    "&emsp;&emsp;From the output we can see that the returned message carries an `additional_kwargs` attribute. In LangChain, this attribute passes additional information along with Messages, mainly inputs that are provider-specific rather than generic; here it clearly holds OpenAI's function call information. In essence, the two lines of code above reproduce the first round of the dialogue we implemented manually earlier: take the input prompt, extract the key information, and correctly identify which external function needs to be called."
   ]
  },
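  {
   "cell_type": "markdown",
   "id": "b7e1d2f0-1111-4a2a-9c3d-0a1b2c3d4e5f",
   "metadata": {},
   "source": [
    "&emsp;&emsp;This payload can also be unpacked by hand. Below is a minimal, self-contained sketch; the `additional_kwargs` dictionary here is hypothetical sample data mirroring the structure of the output above, not a live model response:\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# Hypothetical payload mirroring the additional_kwargs shown above\n",
    "additional_kwargs = {\n",
    "    'tool_calls': [\n",
    "        {'id': 'call_demo',\n",
    "         'function': {'arguments': '{\"loc\": \"Beijing\"}', 'name': 'get_weather'},\n",
    "         'type': 'function'}\n",
    "    ]\n",
    "}\n",
    "\n",
    "# Take the first tool call and decode its JSON-encoded argument string\n",
    "call = additional_kwargs['tool_calls'][0]['function']\n",
    "print(call['name'])                   # get_weather\n",
    "print(json.loads(call['arguments']))  # {'loc': 'Beijing'}\n",
    "```"
   ]
  },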
  {
   "cell_type": "markdown",
   "id": "3e26c198-b06c-4028-a000-07c245d0352b",
   "metadata": {},
   "source": [
    "&emsp;&emsp;After identifying, from the input prompt, which external function needs to be executed, the next step is to run that function and obtain its output. This again relies on Output Parsers. As introduced in the previous lesson, an output parser is a module that parses a large model's output and converts it into a more usable format.\n",
    "\n",
    "&emsp;&emsp;As for how to parse a Function Calling result, we already walked through this in the manual implementation above. LangChain abstracts this process with essentially the same logic and groups it under its Output Parser module. Since it is already encapsulated, we can invoke this functionality directly within the LangChain framework."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8ed1bea8-04bf-4c6a-be41-dbca497ed5a8",
   "metadata": {},
   "source": [
    "#### How to Set Up an Output Parser"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "60781fce-c2f9-437a-b0c9-e0b092f40445",
   "metadata": {},
   "source": [
    "&emsp;&emsp;`JsonOutputKeyToolsParser` extends the `JsonOutputToolsParser` class to build a tool-call parser: it converts an OpenAI function-call response into a list of {\"type\": \"TOOL_NAME\", \"args\": {...}} dictionaries describing which tools were called and with which arguments."
   ]
  },
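  {
   "cell_type": "markdown",
   "id": "c8f2e3a1-2222-4b3b-8d4e-1b2c3d4e5f60",
   "metadata": {},
   "source": [
    "&emsp;&emsp;Conceptually, what this parser does can be sketched in a few lines of plain Python. The sketch below uses hypothetical raw tool-call data and is only an approximation of the real parser's behavior:\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# Hypothetical raw tool calls, shaped like a chat model's response\n",
    "raw_tool_calls = [\n",
    "    {'function': {'name': 'get_weather', 'arguments': '{\"loc\": \"Hangzhou\"}'}}\n",
    "]\n",
    "\n",
    "# Roughly what JsonOutputKeyToolsParser(key_name=..., first_tool_only=True) produces\n",
    "def parse_first_tool(tool_calls, key_name):\n",
    "    for tc in tool_calls:\n",
    "        fn = tc['function']\n",
    "        if fn['name'] == key_name:\n",
    "            return json.loads(fn['arguments'])  # decode the argument string\n",
    "    return None\n",
    "\n",
    "print(parse_first_tool(raw_tool_calls, 'get_weather'))  # {'loc': 'Hangzhou'}\n",
    "```"
   ]
  },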
  {
   "cell_type": "code",
   "execution_count": 47,
   "id": "e2ad7bb7-3046-411c-ab09-e4d37405bb04",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain.output_parsers import JsonOutputKeyToolsParser"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "id": "e097f8c0-033c-4c6e-b65a-f678b43f5b87",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'loc': 'Hangzhou'}"
      ]
     },
     "execution_count": 48,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain = llm_with_tools | JsonOutputKeyToolsParser(key_name='get_weather', first_tool_only=True)\n",
    "chain.invoke(\"杭州的天气怎么样？\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "id": "2b52a970-1c2d-4a5e-87ee-196b26ddf294",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'loc': 'Beijing'}"
      ]
     },
     "execution_count": 49,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain = llm_with_tools | JsonOutputKeyToolsParser(key_name='get_weather', first_tool_only=True)\n",
    "chain.invoke(\"今天北京的天气好吗？\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4387a2f4-117f-41e4-82ce-301d28dda1b9",
   "metadata": {},
   "source": [
    "&emsp;&emsp;With just two lines of code, we have already matched the correct arguments from the input (Prompt). To actually execute the tool, simply pass the parsed arguments on to it:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "id": "4753430c-600a-4178-8c65-354c1f9e4b3a",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'{\"coord\": {\"lon\": 116.3972, \"lat\": 39.9075}, \"weather\": [{\"id\": 804, \"main\": \"Clouds\", \"description\": \"\\\\u9634\\\\uff0c\\\\u591a\\\\u4e91\", \"icon\": \"04d\"}], \"base\": \"stations\", \"main\": {\"temp\": 26.94, \"feels_like\": 26.27, \"temp_min\": 26.94, \"temp_max\": 26.94, \"pressure\": 1001, \"humidity\": 27, \"sea_level\": 1001, \"grnd_level\": 995}, \"visibility\": 10000, \"wind\": {\"speed\": 3.85, \"deg\": 222, \"gust\": 6.13}, \"clouds\": {\"all\": 94}, \"dt\": 1715317921, \"sys\": {\"type\": 1, \"id\": 9609, \"country\": \"CN\", \"sunrise\": 1715288663, \"sunset\": 1715339838}, \"timezone\": 28800, \"id\": 1816670, \"name\": \"Beijing\", \"cod\": 200}'"
      ]
     },
     "execution_count": 50,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain = llm_with_tools | JsonOutputKeyToolsParser(key_name='get_weather', first_tool_only=True) | get_weather\n",
    "chain.invoke(\"北京现在的天气怎么样？\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "id": "fedfffcd-708d-48a7-a819-bf4c63885f7b",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'{\"coord\": {\"lon\": 120.1614, \"lat\": 30.2937}, \"weather\": [{\"id\": 802, \"main\": \"Clouds\", \"description\": \"\\\\u591a\\\\u4e91\", \"icon\": \"03d\"}], \"base\": \"stations\", \"main\": {\"temp\": 28.95, \"feels_like\": 29.48, \"temp_min\": 28.95, \"temp_max\": 28.95, \"pressure\": 1013, \"humidity\": 49, \"sea_level\": 1013, \"grnd_level\": 1011}, \"visibility\": 10000, \"wind\": {\"speed\": 4.39, \"deg\": 172, \"gust\": 5.89}, \"clouds\": {\"all\": 40}, \"dt\": 1715317358, \"sys\": {\"type\": 1, \"id\": 9651, \"country\": \"CN\", \"sunrise\": 1715288929, \"sunset\": 1715337765}, \"timezone\": 28800, \"id\": 1808926, \"name\": \"Hangzhou\", \"cod\": 200}'"
      ]
     },
     "execution_count": 51,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain = llm_with_tools | JsonOutputKeyToolsParser(key_name='get_weather', first_tool_only=True) | get_weather\n",
    "chain.invoke(\"今天杭州的天气好吗？\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "048303e5-505a-438b-90d8-90d3a077fdc5",
   "metadata": {},
   "source": [
    "&emsp;&emsp;Through this pipeline, we can query the OpenWeather API in real time based on the input and obtain the final lookup result. To go one step further and produce a final reply, the logic is to add the returned information to the Messages, using it as grounding to guide the model's final answer. Translated into code, this looks as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "id": "34991a13-0105-4135-b99f-6c6685dea014",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "chat_template = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"天气信息来源于OpenWeather API：https://api.openweathermap.org/data/2.5/weather\"),\n",
    "        (\"system\", \"这是实时的天气数据：{weather_data}\"),\n",
    "        (\"human\", \"{user_input}\"),\n",
    "    ]\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "id": "5d3db918-202a-49cd-9570-13055ac268dd",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "chain = llm_with_tools | JsonOutputKeyToolsParser(key_name='get_weather', first_tool_only=True) | get_weather\n",
    "weather_data = chain.invoke(\"今天杭州的天气好吗？\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "id": "6cd09ac6-f52b-4c3e-ae5f-8420f95ed6e6",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'{\"coord\": {\"lon\": 120.1614, \"lat\": 30.2937}, \"weather\": [{\"id\": 802, \"main\": \"Clouds\", \"description\": \"\\\\u591a\\\\u4e91\", \"icon\": \"03d\"}], \"base\": \"stations\", \"main\": {\"temp\": 29.95, \"feels_like\": 30.82, \"temp_min\": 29.95, \"temp_max\": 29.95, \"pressure\": 1013, \"humidity\": 49, \"sea_level\": 1013, \"grnd_level\": 1011}, \"visibility\": 10000, \"wind\": {\"speed\": 4.39, \"deg\": 172, \"gust\": 5.89}, \"clouds\": {\"all\": 40}, \"dt\": 1715318103, \"sys\": {\"type\": 1, \"id\": 9651, \"country\": \"CN\", \"sunrise\": 1715288929, \"sunset\": 1715337765}, \"timezone\": 28800, \"id\": 1808926, \"name\": \"Hangzhou\", \"cod\": 200}'"
      ]
     },
     "execution_count": 55,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "weather_data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "id": "d1843ee3-b727-4490-94f6-1a915c6350c9",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content='天气信息来源于OpenWeather API：https://api.openweathermap.org/data/2.5/weather'),\n",
       " SystemMessage(content='这是实时的天气数据：{\"coord\": {\"lon\": 120.1614, \"lat\": 30.2937}, \"weather\": [{\"id\": 802, \"main\": \"Clouds\", \"description\": \"\\\\u591a\\\\u4e91\", \"icon\": \"03d\"}], \"base\": \"stations\", \"main\": {\"temp\": 29.95, \"feels_like\": 30.82, \"temp_min\": 29.95, \"temp_max\": 29.95, \"pressure\": 1013, \"humidity\": 49, \"sea_level\": 1013, \"grnd_level\": 1011}, \"visibility\": 10000, \"wind\": {\"speed\": 4.39, \"deg\": 172, \"gust\": 5.89}, \"clouds\": {\"all\": 40}, \"dt\": 1715318103, \"sys\": {\"type\": 1, \"id\": 9651, \"country\": \"CN\", \"sunrise\": 1715288929, \"sunset\": 1715337765}, \"timezone\": 28800, \"id\": 1808926, \"name\": \"Hangzhou\", \"cod\": 200}'),\n",
       " HumanMessage(content='今天杭州的天气好吗？')]"
      ]
     },
     "execution_count": 56,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages = chat_template.format_messages(weather_data=weather_data, user_input=\"今天杭州的天气好吗？\")\n",
    "messages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "id": "2721595c-03dd-4020-adfa-49f1e26268ee",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='根据OpenWeather API提供的数据，今天杭州的天气为多云，温度约为29℃。', response_metadata={'token_usage': {'completion_tokens': 34, 'prompt_tokens': 317, 'total_tokens': 351}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-818a600e-96dd-44de-b0a2-76bf4ef34159-0')"
      ]
     },
     "execution_count": 57,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "response = chat.invoke(messages)\n",
    "response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "id": "1adf6b70-ce07-441f-9ccd-946eb91e507d",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "根据OpenWeather API提供的数据，今天杭州的天气为多云，温度约为29℃。\n"
     ]
    }
   ],
   "source": [
    "print(response.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0ccb1543-c1d9-4f32-8e77-071037999a83",
   "metadata": {},
   "source": [
    "&emsp;&emsp;At this point, with only a few lines of code, we have quickly implemented OpenAI's Function Calling, thanks to the modules LangChain abstracts in advance. Next, let's consolidate the working code as follows:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6de7baab-ede3-4550-abf8-145c54c585e4",
   "metadata": {},
   "source": [
    "#### Step 1: Build the External Function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "id": "21610811-1d89-4ef9-817f-37787b95e44e",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "import json\n",
    "import requests\n",
    "\n",
    "from langchain.tools import BaseTool, StructuredTool, tool\n",
    "\n",
    "@tool\n",
    "def get_weather(loc):\n",
    "    \"\"\"\n",
    "    查询即时天气函数\n",
    "    :param loc: 必要参数，字符串类型，用于表示查询天气的具体城市名称，\\\n",
    "    注意，中国的城市需要用对应城市的英文名称代替，例如如果需要查询北京市天气，则loc参数需要输入'Beijing'；\n",
    "    :return：OpenWeather API查询即时天气的结果，具体URL请求地址为：https://api.openweathermap.org/data/2.5/weather\\\n",
    "    返回结果对象类型为解析之后的JSON格式对象，并用字符串形式进行表示，其中包含了全部重要的天气信息\n",
    "    \"\"\"\n",
    "    # Step 1. Build the request URL\n",
    "    url = \"https://api.openweathermap.org/data/2.5/weather\"\n",
    "\n",
    "    # Step 2. Set the query parameters\n",
    "    params = {\n",
    "        \"q\": loc,               \n",
    "        \"appid\": open_weather_key,    # the OpenWeather API key\n",
    "        \"units\": \"metric\",            # use Celsius instead of Fahrenheit\n",
    "        \"lang\":\"zh_cn\"                # return descriptions in Simplified Chinese\n",
    "    }\n",
    "\n",
    "    # Step 3. Send the GET request\n",
    "    response = requests.get(url, params=params)\n",
    "    \n",
    "    # Step 4. Parse the response\n",
    "    data = response.json()\n",
    "    return json.dumps(data)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1857b4b7-e6da-482c-9557-295be874ec1c",
   "metadata": {},
   "source": [
    "#### Step 2: Build the Function Calling Chain with LangChain"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "id": "1b9fc98e-1c21-4b61-a3bc-0316f51aaf64",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "根据OpenWeather API的数据，今天杭州的天气是多云的，温度为29.95摄氏度，湿度为49%。整体来说，天气还不错。\n"
     ]
    }
   ],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "# Instantiate the chat model\n",
    "openai_chat = ChatOpenAI(model_name=\"gpt-3.5-turbo\", api_key=openai.api_key, base_url=openai.api_base)\n",
    "\n",
    "# Bind the external tool\n",
    "llm_with_tools = openai_chat.bind_tools([get_weather])\n",
    "\n",
    "# Route the input to the right tool and fetch its data\n",
    "chain = llm_with_tools | JsonOutputKeyToolsParser(key_name='get_weather', first_tool_only=True) | get_weather\n",
    "weather_data = chain.invoke(\"今天杭州的天气好吗？\")\n",
    "\n",
    "# Build the prompt template: splice the tool's returned data together with the current input as external knowledge shaping the final output\n",
    "chat_template = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"天气信息来源于OpenWeather API：https://api.openweathermap.org/data/2.5/weather\"),\n",
    "        (\"system\", \"这是实时的天气数据：{weather_data}\"),\n",
    "        (\"human\", \"{user_input}\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "# Build the messages\n",
    "messages = chat_template.format_messages(weather_data=weather_data, user_input=\"今天杭州的天气好吗？\")\n",
    "\n",
    "# Run the final inference\n",
    "response = openai_chat.invoke(messages)\n",
    "print(response.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7eddfc3a-4085-49bc-b0fe-0fe0a801df2c",
   "metadata": {},
   "source": [
    "#### How to Customize the Output Parser"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c79c0805-e035-43f3-bdb6-dadb8732a82d",
   "metadata": {},
   "source": [
    "&emsp;&emsp;The goal of a custom output parser is to shape the large model's output into a format of our own choosing. In the previous step we used LangChain's built-in `JsonOutputKeyToolsParser`, and the chain returned the raw JSON data from the OpenWeather API, in the following form:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "id": "06abb7cb-0d70-4fde-b859-8e97cc303679",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"coord\": {\"lon\": 116.3972, \"lat\": 39.9075}, \"weather\": [{\"id\": 804, \"main\": \"Clouds\", \"description\": \"\\u9634\\uff0c\\u591a\\u4e91\", \"icon\": \"04d\"}], \"base\": \"stations\", \"main\": {\"temp\": 26.94, \"feels_like\": 26.27, \"temp_min\": 26.94, \"temp_max\": 26.94, \"pressure\": 1001, \"humidity\": 27, \"sea_level\": 1001, \"grnd_level\": 995}, \"visibility\": 10000, \"wind\": {\"speed\": 3.85, \"deg\": 222, \"gust\": 6.13}, \"clouds\": {\"all\": 94}, \"dt\": 1715318552, \"sys\": {\"type\": 1, \"id\": 9609, \"country\": \"CN\", \"sunrise\": 1715288663, \"sunset\": 1715339838}, \"timezone\": 28800, \"id\": 1816670, \"name\": \"Beijing\", \"cod\": 200}\n"
     ]
    }
   ],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "# Instantiate the chat model\n",
    "openai_chat = ChatOpenAI(model_name=\"gpt-3.5-turbo\", api_key=openai.api_key, base_url=openai.api_base)\n",
    "\n",
    "# Bind the external tool\n",
    "llm_with_tools = openai_chat.bind_tools([get_weather])\n",
    "\n",
    "# Route the input to the right tool and fetch its data\n",
    "chain = llm_with_tools | JsonOutputKeyToolsParser(key_name='get_weather', first_tool_only=True) | get_weather\n",
    "weather_data = chain.invoke(\"今天北京的天气好吗？\")\n",
    "print(weather_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "id": "57bec2f7-fca0-475a-9886-5910bc9d7b7f",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"coord\": {\"lon\": 121.4581, \"lat\": 31.2222}, \"weather\": [{\"id\": 800, \"main\": \"Clear\", \"description\": \"\\u6674\", \"icon\": \"01d\"}], \"base\": \"stations\", \"main\": {\"temp\": 26.61, \"feels_like\": 26.61, \"temp_min\": 24.93, \"temp_max\": 27.18, \"pressure\": 1015, \"humidity\": 51}, \"visibility\": 10000, \"wind\": {\"speed\": 6, \"deg\": 160}, \"clouds\": {\"all\": 0}, \"dt\": 1715318650, \"sys\": {\"type\": 2, \"id\": 145096, \"country\": \"CN\", \"sunrise\": 1715288518, \"sunset\": 1715337554}, \"timezone\": 28800, \"id\": 1796236, \"name\": \"Shanghai\", \"cod\": 200}\n"
     ]
    }
   ],
   "source": [
    "weather_data = chain.invoke(\"上海现在什么天气？\")\n",
    "print(weather_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "id": "51e676cc-6c11-4f20-90ef-e0df5c9e22d1",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain_core.messages import AIMessage\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_openai import ChatOpenAI\n",
    "import json\n",
    "\n",
    "def final_resonse(ai_message: str) -> str:\n",
    "    \n",
    "    data = json.loads(ai_message)\n",
    "   \n",
    "    chat_template = ChatPromptTemplate.from_messages(\n",
    "        [\n",
    "            (\"system\", \"这是实时的{city}的天气数据，信息来源于OpenWeather API：https://api.openweathermap.org/data/2.5/weather, 详细的数据是：{detail}\",),\n",
    "            (\"system\", \"请你解析该数据，以自然语言的形式回复\"),\n",
    "        ]\n",
    "    )\n",
    "    \n",
    "    # Build the messages\n",
    "    messages = chat_template.format_messages(city=data[\"name\"], detail=data)\n",
    "\n",
    "    openai_chat = ChatOpenAI(model_name=\"gpt-3.5-turbo\",api_key=openai.api_key ,base_url=openai.api_base)\n",
    "    response = openai_chat.invoke(messages)\n",
    "    return response.content"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "id": "f75b985e-defc-4fcf-9f9e-f3fff5c933a9",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'当前北京的天气是阴天，多云。温度为26.94摄氏度，体感温度为26.27摄氏度。最低温度和最高温度都是26.94摄氏度。气压为1001hPa，湿度为27%。风速为3.85米/秒，风向为222度。云量为94%。能见度为10000米。'"
      ]
     },
     "execution_count": 66,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain = llm_with_tools | JsonOutputKeyToolsParser(key_name='get_weather', first_tool_only=True) | get_weather | final_resonse\n",
    "final_reponse = chain.invoke(\"北京现在的天气怎么样？\")\n",
    "final_reponse.replace('\\n', '')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "id": "80efc177-6865-4134-b70c-71503a38d77d",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'上海的天气信息如下：- 温度：26℃，体感温度也是26℃- 最低温度：24.93℃，最高温度：26.07℃- 气压：1015 hPa- 湿度：53%- 可见度：10000米- 风速：6 m/s，风向：160°- 天气状况：晴朗，没有云层- 数据更新时间：1715318818，当前时间的时间戳- 地理坐标：经度121.4581，纬度31.2222- 国家：中国，城市：上海- 日出时间：1715288518，日落时间：1715337554- 时区：+8小时- 数据来源：OpenWeather API希望以上信息对您有所帮助！'"
      ]
     },
     "execution_count": 67,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain = llm_with_tools | JsonOutputKeyToolsParser(key_name='get_weather', first_tool_only=True) | get_weather | final_resonse\n",
    "final_reponse = chain.invoke(\"上海现在是什么天气状况？\")\n",
    "final_reponse.replace('\\n', '')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "64ec3969-f62d-4b6e-b7ea-f93dd3ddf773",
   "metadata": {},
   "source": [
    "## 8 Function Calling with Open-Source Models via LangChain"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e9e103af-dd57-48a7-9d1b-f0e8bd775020",
   "metadata": {},
   "source": [
    "&emsp;&emsp;First, we should be clear that OpenAI's GPT series has largely shaped the development paradigms and standards of large-model technology. Models such as Qwen and ChatGLM therefore follow essentially the same usage patterns and function-calling logic that OpenAI defined, with no major differences. It is precisely this consistency that allows most open-source projects today to plug in and use different models through a fairly generic interface; this compatibility is directly tied to the similarity between models, and LangChain is no exception.\n",
    "\n",
    "> Ollama Functions: https://python.langchain.com/docs/integrations/chat/ollama_functions"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a7b21882-e327-4e43-b548-d455e8e20c8d",
   "metadata": {},
   "source": [
    "```python\n",
    "    model = model.bind(\n",
    "    functions=[\n",
    "        {\n",
    "            \"name\": \"get_current_weather\",\n",
    "            \"description\": \"Get the current weather in a given location\",\n",
    "            \"parameters\": {\n",
    "                \"type\": \"object\",\n",
    "                \"properties\": {\n",
    "                    \"location\": {\n",
    "                        \"type\": \"string\",\n",
    "                        \"description\": \"The city and state, \" \"e.g. San Francisco, CA\",\n",
    "                    },\n",
    "                    \"unit\": {\n",
    "                        \"type\": \"string\",\n",
    "                        \"enum\": [\"celsius\", \"fahrenheit\"],\n",
    "                    },\n",
    "                },\n",
    "                \"required\": [\"location\"],\n",
    "            },\n",
    "        }\n",
    "    ],\n",
    "    function_call={\"name\": \"get_current_weather\"},\n",
    ")\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 129,
   "id": "21929dc6-2690-4d5a-a359-07112e031c1a",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: http://mirrors.aliyun.com/pypi/simple\n",
      "Collecting langchain_experimental\n",
      "  Downloading http://mirrors.aliyun.com/pypi/packages/4d/4d/81725def89f72ac878be289929e8870fd5919744a8b603ad724f0263d61e/langchain_experimental-0.0.57-py3-none-any.whl (193 kB)\n",
      "\u001b[K     |████████████████████████████████| 193 kB 709 kB/s eta 0:00:01\n",
      "\u001b[?25hRequirement already satisfied: langchain<0.2.0,>=0.1.15 in /root/miniconda3/lib/python3.8/site-packages (from langchain_experimental) (0.1.16)\n",
      "Requirement already satisfied: langchain-core<0.2.0,>=0.1.41 in /root/miniconda3/lib/python3.8/site-packages (from langchain_experimental) (0.1.46)\n",
      "Requirement already satisfied: SQLAlchemy<3,>=1.4 in /root/miniconda3/lib/python3.8/site-packages (from langchain<0.2.0,>=0.1.15->langchain_experimental) (2.0.29)\n",
      "Requirement already satisfied: langchain-community<0.1,>=0.0.32 in /root/miniconda3/lib/python3.8/site-packages (from langchain<0.2.0,>=0.1.15->langchain_experimental) (0.0.34)\n",
      "Requirement already satisfied: tenacity<9.0.0,>=8.1.0 in /root/miniconda3/lib/python3.8/site-packages (from langchain<0.2.0,>=0.1.15->langchain_experimental) (8.2.3)\n",
      "Requirement already satisfied: pydantic<3,>=1 in /root/miniconda3/lib/python3.8/site-packages (from langchain<0.2.0,>=0.1.15->langchain_experimental) (2.7.1)\n",
      "Requirement already satisfied: async-timeout<5.0.0,>=4.0.0 in /root/miniconda3/lib/python3.8/site-packages (from langchain<0.2.0,>=0.1.15->langchain_experimental) (4.0.3)\n",
      "Requirement already satisfied: dataclasses-json<0.7,>=0.5.7 in /root/miniconda3/lib/python3.8/site-packages (from langchain<0.2.0,>=0.1.15->langchain_experimental) (0.6.5)\n",
      "Requirement already satisfied: langchain-text-splitters<0.1,>=0.0.1 in /root/miniconda3/lib/python3.8/site-packages (from langchain<0.2.0,>=0.1.15->langchain_experimental) (0.0.1)\n",
      "Requirement already satisfied: langsmith<0.2.0,>=0.1.17 in /root/miniconda3/lib/python3.8/site-packages (from langchain<0.2.0,>=0.1.15->langchain_experimental) (0.1.51)\n",
      "Installing collected packages: langchain-experimental\n",
      "Successfully installed langchain-experimental-0.0.57\n",
      "\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "! pip install langchain_experimental"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 143,
   "id": "05065f99-66e5-494d-a301-8ade63d6b18c",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain_experimental.llms.ollama_functions import OllamaFunctions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 144,
   "id": "ab24c573-e84e-43f9-b5d4-cef66dbb40ee",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "model = OllamaFunctions(\n",
    "    model=\"qwen:7b-chat\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 145,
   "id": "ce2e999a-8963-4373-b3db-61d6d128308b",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "import json\n",
    "import requests\n",
    "\n",
    "from langchain_core.tools import tool\n",
    "\n",
    "# `open_weather_key` (your OpenWeather API key) is assumed to be defined earlier in the notebook\n",
    "@tool\n",
    "def get_weather(loc):\n",
    "    \"\"\"\n",
    "    查询即时天气函数\n",
    "    :param loc: 必要参数，字符串类型，用于表示查询天气的具体城市名称，\\\n",
    "    注意，中国的城市需要用对应城市的英文名称代替，例如如果需要查询北京市天气，则loc参数需要输入'Beijing'；\n",
    "    :return：OpenWeather API查询即时天气的结果，具体URL请求地址为：https://api.openweathermap.org/data/2.5/weather\\\n",
    "    返回结果对象类型为解析之后的JSON格式对象，并用字符串形式进行表示，其中包含了全部重要的天气信息\n",
    "    \"\"\"\n",
    "    # Step 1. Build the request URL\n",
    "    url = \"https://api.openweathermap.org/data/2.5/weather\"\n",
    "\n",
    "    # Step 2. Set the query parameters\n",
    "    params = {\n",
    "        \"q\": loc,\n",
    "        \"appid\": open_weather_key,    # OpenWeather API key, assumed to be set earlier\n",
    "        \"units\": \"metric\",            # Celsius instead of Fahrenheit\n",
    "        \"lang\": \"zh_cn\"               # respond in Simplified Chinese\n",
    "    }\n",
    "\n",
    "    # Step 3. Send the GET request\n",
    "    response = requests.get(url, params=params)\n",
    "\n",
    "    # Step 4. Parse the response and return it as a JSON string\n",
    "    data = response.json()\n",
    "    return json.dumps(data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 146,
   "id": "64d92475-1299-422c-8bd8-b4aa5ff0736c",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain_core.utils.function_calling import convert_to_openai_function"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b4c975dd-c57c-46d5-866c-f28e58d102cc",
   "metadata": {},
   "source": [
    "&emsp;&emsp;`convert_to_openai_function` converts a regular Python function into its JSON Schema representation, the format expected by OpenAI-style function calling. Usage:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 147,
   "id": "bf46abed-86ed-4518-ad04-d4f09e1497f3",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"name\": \"get_weather\", \"description\": \"get_weather(loc) - 查询即时天气函数\\n    :param loc: 必要参数，字符串类型，用于表示查询天气的具体城市名称，    注意，中国的城市需要用对应城市的英文名称代替，例如如果需要查询北京市天气，则loc参数需要输入'Beijing'；\\n    :return：OpenWeather API查询即时天气的结果，具体URL请求地址为：https://api.openweathermap.org/data/2.5/weather    返回结果对象类型为解析之后的JSON格式对象，并用字符串形式进行表示，其中包含了全部重要的天气信息\", \"parameters\": {\"type\": \"object\", \"properties\": {\"loc\": {}}, \"required\": [\"loc\"]}}\n"
     ]
    }
   ],
   "source": [
    "get_weather_json_schema = json.dumps(convert_to_openai_function(get_weather),ensure_ascii=False)\n",
    "print(get_weather_json_schema)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 148,
   "id": "a6bc1fd0-4b95-44a7-ae8d-ff9d77e65fae",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class 'str'>\n"
     ]
    }
   ],
   "source": [
    "print(type(get_weather_json_schema))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bde3d501-760f-44a7-b969-5942d21199e0",
   "metadata": {},
   "source": [
    "&emsp;&emsp;The `json.dumps` above is only for readable display; what actually gets passed to the model must be the JSON Schema as a Python dict, i.e. the direct return value of `convert_to_openai_function`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 149,
   "id": "b88c4d1a-8c44-4e07-81a6-c283a0af4720",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'name': 'get_weather', 'description': \"get_weather(loc) - 查询即时天气函数\\n    :param loc: 必要参数，字符串类型，用于表示查询天气的具体城市名称，    注意，中国的城市需要用对应城市的英文名称代替，例如如果需要查询北京市天气，则loc参数需要输入'Beijing'；\\n    :return：OpenWeather API查询即时天气的结果，具体URL请求地址为：https://api.openweathermap.org/data/2.5/weather    返回结果对象类型为解析之后的JSON格式对象，并用字符串形式进行表示，其中包含了全部重要的天气信息\", 'parameters': {'type': 'object', 'properties': {'loc': {}}, 'required': ['loc']}}\n"
     ]
    }
   ],
   "source": [
    "get_weather_json_schema = convert_to_openai_function(get_weather)\n",
    "print(get_weather_json_schema)"
   ]
  },
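  {
   "cell_type": "markdown",
   "id": "3f2a9c10-1111-4aaa-9bbb-000000000001",
   "metadata": {},
   "source": [
    "&emsp;&emsp;To make the role of the `parameters` field concrete, here is a minimal, hypothetical sketch (plain Python, not a LangChain API) of how call arguments could be checked against the `required` list of such a schema:\n",
    "\n",
    "```python\n",
    "# Simplified sketch: only the 'required' keys of the schema are checked\n",
    "schema = {'type': 'object', 'properties': {'loc': {}}, 'required': ['loc']}\n",
    "\n",
    "def check_args(args: dict, params: dict) -> bool:\n",
    "    return all(k in args for k in params.get('required', []))\n",
    "\n",
    "check_args({'loc': 'Beijing'}, schema)   # True\n",
    "check_args({}, schema)                   # False\n",
    "```"
   ]
  },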
  {
   "cell_type": "code",
   "execution_count": 82,
   "id": "383e6030-6902-43f9-a8b3-aaaa48b3a72b",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class 'dict'>\n"
     ]
    }
   ],
   "source": [
    "print(type(get_weather_json_schema))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 150,
   "id": "fd418f82-f815-4e57-85e9-d96b4e6da2a4",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'name': 'get_weather',\n",
       "  'description': \"get_weather(loc) - 查询即时天气函数\\n    :param loc: 必要参数，字符串类型，用于表示查询天气的具体城市名称，    注意，中国的城市需要用对应城市的英文名称代替，例如如果需要查询北京市天气，则loc参数需要输入'Beijing'；\\n    :return：OpenWeather API查询即时天气的结果，具体URL请求地址为：https://api.openweathermap.org/data/2.5/weather    返回结果对象类型为解析之后的JSON格式对象，并用字符串形式进行表示，其中包含了全部重要的天气信息\",\n",
       "  'parameters': {'type': 'object',\n",
       "   'properties': {'loc': {}},\n",
       "   'required': ['loc']}}]"
      ]
     },
     "execution_count": 150,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "functions_list = [get_weather_json_schema]\n",
    "functions_list"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 151,
   "id": "9d97db5e-1bd7-4aae-a097-59d6a8f12cdf",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "RunnableBinding(bound=OllamaFunctions(llm=ChatOllama(model='qwen:7b-chat', format='json'), tool_system_prompt_template='You have access to the following tools:\\n\\n{tools}\\n\\nYou must always select one of the above tools and respond with only a JSON object matching the following schema:\\n\\n{{\\n  \"tool\": <name of the selected tool>,\\n  \"tool_input\": <parameters for the selected tool, matching the tool\\'s JSON schema>\\n}}\\n'), kwargs={'functions': [{'name': 'get_weather', 'description': \"get_weather(loc) - 查询即时天气函数\\n    :param loc: 必要参数，字符串类型，用于表示查询天气的具体城市名称，    注意，中国的城市需要用对应城市的英文名称代替，例如如果需要查询北京市天气，则loc参数需要输入'Beijing'；\\n    :return：OpenWeather API查询即时天气的结果，具体URL请求地址为：https://api.openweathermap.org/data/2.5/weather    返回结果对象类型为解析之后的JSON格式对象，并用字符串形式进行表示，其中包含了全部重要的天气信息\", 'parameters': {'type': 'object', 'properties': {'loc': {}}, 'required': ['loc']}}], 'function_call': {'name': 'get_weather'}})"
      ]
     },
     "execution_count": 151,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model = model.bind(\n",
    "    functions = functions_list,\n",
    "    function_call={\"name\": \"get_weather\"},\n",
    ")\n",
    "model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 140,
   "id": "43e05d40-c8d1-4aac-9879-a154e9f6440f",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='', additional_kwargs={'function_call': {'name': 'get_weather', 'arguments': '{\"loc\": \"\\\\u5317\\\\u4eac\"}'}}, id='run-4bf84f61-c753-4621-ab70-f1e96bb62b7a-0')"
      ]
     },
     "execution_count": 140,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.invoke(\"查询一下北京的天气\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4c2984be-2f05-45ae-a035-6d86a08510bf",
   "metadata": {},
   "source": [
    "&emsp;&emsp;Put the JSON Schema representation of `get_weather` into a list, then pass it to the model via `model.bind`."
   ]
  },
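  {
   "cell_type": "markdown",
   "id": "3f2a9c10-1111-4aaa-9bbb-000000000002",
   "metadata": {},
   "source": [
    "&emsp;&emsp;Conceptually, `bind` partially applies extra keyword arguments so they are forwarded on every later `invoke` call. A minimal, hypothetical sketch of that idea (plain Python, not LangChain's actual `RunnableBinding`):\n",
    "\n",
    "```python\n",
    "class BindSketch:\n",
    "    \"\"\"Stores extra kwargs and forwards them on every invoke call.\"\"\"\n",
    "    def __init__(self, func, **kwargs):\n",
    "        self.func, self.kwargs = func, kwargs\n",
    "\n",
    "    def invoke(self, x):\n",
    "        return self.func(x, **self.kwargs)\n",
    "\n",
    "def fake_llm(prompt, functions=None, function_call=None):\n",
    "    # Stand-in for a chat model: echoes which function it was told to call\n",
    "    return {'prompt': prompt, 'function_call': function_call}\n",
    "\n",
    "bound = BindSketch(fake_llm, functions=[], function_call={'name': 'get_weather'})\n",
    "bound.invoke('查询天气')['function_call']   # {'name': 'get_weather'}\n",
    "```"
   ]
  },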
  {
   "cell_type": "code",
   "execution_count": 152,
   "id": "785bc670-7618-40a8-9e67-f102f239344a",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain.output_parsers.openai_functions import JsonKeyOutputFunctionsParser"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 154,
   "id": "2989e34d-3525-4024-9e3f-cfd5387bdd26",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"coord\": {\"lon\": 121.4581, \"lat\": 31.2222}, \"weather\": [{\"id\": 800, \"main\": \"Clear\", \"description\": \"\\u6674\", \"icon\": \"01d\"}], \"base\": \"stations\", \"main\": {\"temp\": 26.15, \"feels_like\": 26.15, \"temp_min\": 24.93, \"temp_max\": 26.92, \"pressure\": 1013, \"humidity\": 55}, \"visibility\": 10000, \"wind\": {\"speed\": 6, \"deg\": 160}, \"clouds\": {\"all\": 0}, \"dt\": 1715326560, \"sys\": {\"type\": 2, \"id\": 145096, \"country\": \"CN\", \"sunrise\": 1715288518, \"sunset\": 1715337554}, \"timezone\": 28800, \"id\": 1796236, \"name\": \"Shanghai\", \"cod\": 200}\n"
     ]
    }
   ],
   "source": [
    "# Invoke the tool selected by the model and fetch the raw weather data\n",
    "chain = model | JsonKeyOutputFunctionsParser(key_name='loc', first_tool_only=True) | get_weather\n",
    "weather_data = chain.invoke(\"上海现在什么天气？\")\n",
    "print(weather_data)"
   ]
  },
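  {
   "cell_type": "markdown",
   "id": "3f2a9c10-1111-4aaa-9bbb-000000000003",
   "metadata": {},
   "source": [
    "&emsp;&emsp;In this chain, `JsonKeyOutputFunctionsParser(key_name='loc')` takes the model's `function_call` arguments and extracts the single `loc` value, which then flows into `get_weather`. A minimal, hypothetical sketch of that extraction step (plain Python, not the parser's real implementation):\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "def parse_function_call_key(additional_kwargs: dict, key_name: str):\n",
    "    # Arguments arrive as a JSON string inside the message's function_call\n",
    "    arguments = json.loads(additional_kwargs['function_call']['arguments'])\n",
    "    return arguments[key_name]\n",
    "\n",
    "msg_kwargs = {'function_call': {'name': 'get_weather',\n",
    "                                'arguments': '{\"loc\": \"Shanghai\"}'}}\n",
    "parse_function_call_key(msg_kwargs, 'loc')   # 'Shanghai'\n",
    "```"
   ]
  },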
  {
   "cell_type": "markdown",
   "id": "09737b28-a709-4462-bfce-ec7d1f40ac51",
   "metadata": {},
   "source": [
    "So far, we have successfully retrieved the live weather data!"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ae163de6-4e9d-419b-b2c4-4a2d0d455819",
   "metadata": {},
   "source": [
    "#### End-to-end workflow demo"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 164,
   "id": "fb4df0bc-506e-461e-8a3a-1b3cd5f63f28",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain_core.messages import AIMessage\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_community.chat_models import ChatOllama\n",
    "\n",
    "\n",
    "import json\n",
    "\n",
    "def final_resonse(ai_message: str) -> str:\n",
    "    \n",
    "    data = json.loads(ai_message)\n",
    "    print(data)\n",
    "   \n",
    "    chat_template = ChatPromptTemplate.from_messages(\n",
    "        [\n",
    "            (\"system\", \"这是实时的{city}的天气数据，信息来源于OpenWeather API：https://api.openweathermap.org/data/2.5/weather, 详细的数据是：{detail}\",),\n",
    "            (\"system\", \"请你解析该数据，以自然语言的形式回复\"),\n",
    "        ]\n",
    "    )\n",
    "    \n",
    "    # Build the messages from the template\n",
    "    messages = chat_template.format_messages(city=data[\"name\"], detail=data)\n",
    "\n",
    "    # Instantiate the chat model served by Ollama\n",
    "    ollama_llm = ChatOllama(model=\"qwen:7b-chat\")\n",
    "    response = ollama_llm.invoke(messages)\n",
    "    return response.content"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 168,
   "id": "93d7d7d3-50af-4b31-b76a-53de5db4e0e0",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'coord': {'lon': 121.4581, 'lat': 31.2222}, 'weather': [{'id': 800, 'main': 'Clear', 'description': '晴', 'icon': '01d'}], 'base': 'stations', 'main': {'temp': 26.15, 'feels_like': 26.15, 'temp_min': 23.93, 'temp_max': 26.92, 'pressure': 1013, 'humidity': 57}, 'visibility': 10000, 'wind': {'speed': 6, 'deg': 160}, 'clouds': {'all': 0}, 'dt': 1715327333, 'sys': {'type': 2, 'id': 145096, 'country': 'CN', 'sunrise': 1715288518, 'sunset': 1715337554}, 'timezone': 28800, 'id': 1796236, 'name': 'Shanghai', 'cod': 200}\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'这是一段关于上海实时天气的数据。具体信息如下：- **地理位置**：经度121.4581，纬度31.2222。- **天气状况**：晴朗（Weather Type: Clear, Description: Sunny）。- **温度**：当前气温为26.15°C，人体感觉温度与气温相近。- **湿度**：湿度为57%。- **气压**：气压为1013百帕。- **风速和方向**：风速为6公里/小时，风向为东偏南160°。- **其他时间信息**：日出时间为1715288518秒，日落时间为1715337554秒。上海的当前天气情况是晴朗，气温适中，湿度较大。如果你需要更详细的预报或者有其他关于天气的问题，请随时告诉我。'"
      ]
     },
     "execution_count": 168,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model = model.bind(\n",
    "    functions = functions_list,\n",
    "    function_call={\"name\": \"get_weather\"},\n",
    ")\n",
    "\n",
    "chain = model | JsonKeyOutputFunctionsParser(key_name='loc', first_tool_only=True) | get_weather | final_resonse\n",
    "final_response = chain.invoke(\"上海现在什么天气？\")\n",
    "final_response.replace('\\n', '')"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "peft",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
