{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "154f5ccb5ae5d8bf",
   "metadata": {},
   "source": [
    "# 1 Model Invocation Test"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "20152374bffdc804",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## 1.1 Environment Setup"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c7bb77ca77d3178d",
   "metadata": {},
   "source": [
    "Install LangChain:\n",
    "```\n",
    "pip install langchain\n",
    "pip install langchain-community\n",
    "```\n",
    "Install the Baidu Qianfan LLM SDK:\n",
    "```\n",
    "pip install qianfan\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d65f4e81d64785ed",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## 1.2 Qianfan SDK"
   ]
  },
  {
   "cell_type": "code",
   "id": "6e89ba88faa3c18",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-30T11:21:51.794161Z",
     "start_time": "2024-08-30T11:21:51.778171Z"
    }
   },
   "source": [
    "import configparser\n",
    "import os\n",
    "\n",
    "# [Recommended] Use secure AK/SK authentication; the credentials are supplied\n",
    "# to the SDK through environment variables.\n",
    "config = configparser.ConfigParser()\n",
    "config.read(\"config.ini\")\n",
    "# config.ini stores the IAM Access Key and Secret Key under a [qianfan] section\n",
    "os.environ[\"QIANFAN_ACCESS_KEY\"] = config.get('qianfan', 'accessKey')\n",
    "os.environ[\"QIANFAN_SECRET_KEY\"] = config.get('qianfan', 'secretKey')\n",
    "print(\"hello world\", end=\"\")"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "hello world"
     ]
    }
   ],
   "execution_count": 1
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "1f4d29675129e99a",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-29T01:58:19.164828Z",
     "start_time": "2024-08-29T01:58:17.316641Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'id': 'as-u7h00mcrap', 'object': 'chat.completion', 'created': 1724896699, 'result': '你好！有什么我可以帮助你的吗？', 'is_truncated': False, 'need_clear_history': False, 'usage': {'prompt_tokens': 1, 'completion_tokens': 8, 'total_tokens': 9}}\n"
     ]
    }
   ],
   "source": [
    "import qianfan\n",
    "\n",
    "chat_comp = qianfan.ChatCompletion()\n",
    "\n",
    "# Specify a particular model\n",
    "resp = chat_comp.do(model=\"ERNIE-Speed-128K\", messages=[{\n",
    "    \"role\": \"user\",\n",
    "    \"content\": \"你好\"\n",
    "}])\n",
    "\n",
    "print(resp[\"body\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8c85d577ffd60068",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## 1.3 LangChain"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b84fe8453b7516f9",
   "metadata": {},
   "source": [
    "### (1) Basic Usage"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "e5abfbfcfa57fe7d",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-25T00:20:22.935104Z",
     "start_time": "2024-08-25T00:20:22.007210Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='您好，我不知道您是谁。如果您愿意告诉我您的名字或者其他相关信息，我会尽力回答您的问题。', response_metadata={'token_usage': {}, 'model_name': 'ERNIE-Speed-128K', 'finish_reason': 'stop'}, id='run-80dc05ac-0339-4b1e-8cf2-70f1a552e611-0')"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_community.chat_models import QianfanChatEndpoint\n",
    "\n",
    "chat = QianfanChatEndpoint(\n",
    "    streaming=True,\n",
    "    temperature=0.2,\n",
    "    model=\"ERNIE-Speed-128K\",\n",
    ")\n",
    "# predict is deprecated; use invoke instead\n",
    "# chat.invoke(\"What would be a good company name for a company that makes colorful socks?\")\n",
    "# chat.invoke(\"你好，我叫小度\")\n",
    "chat.invoke(\"你好，我是谁\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "21cd24e0c02c7c2b",
   "metadata": {},
   "source": [
    "### (2) HumanMessage\n",
    "LangChain's schema defines message types for three roles: AIMessage, HumanMessage, and SystemMessage. Based on these schemas, you can pass content to a message object much like passing arguments to a function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "91c56bf58790f522",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-20T09:31:22.636940Z",
     "start_time": "2024-08-20T09:31:21.987029Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"J'aime la programmation.\", response_metadata={'token_usage': {}, 'model_name': 'ERNIE-Speed-128K', 'finish_reason': 'stop'}, id='run-b301178e-e24b-4e13-a1ab-ce6b9d763d93-0')"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.schema import (\n",
    "    AIMessage,\n",
    "    HumanMessage,\n",
    "    SystemMessage\n",
    ")\n",
    "\n",
    "chat.invoke([\n",
    "    HumanMessage(content=(\n",
    "        \"Translate this sentence from English to French.\"\n",
    "        \"I love programming.\"\n",
    "    ))\n",
    "])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ef6ad858df735cf5",
   "metadata": {},
   "source": [
    "### (3) Prompt Templates\n",
    "A prompt template is a special piece of text that provides extra context for a specific task. In an LLM application, user input is usually not passed directly to the model; instead it is embedded in a larger piece of text, the prompt template, which supplies additional context for the concrete task and thus better steers the model toward the expected output. In LangChain, prompt templates are created with MessagePromptTemplate, and one or more MessagePromptTemplates can be combined into a ChatPromptTemplate."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "dd7e539dc490f8e5",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-20T09:31:26.946562Z",
     "start_time": "2024-08-20T09:31:26.929752Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content='You are a helpful assistant that translates English to Chinese.'),\n",
       " HumanMessage(content='I love programming.')]"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.prompts.chat import (\n",
    "    ChatPromptTemplate,\n",
    "    SystemMessagePromptTemplate,\n",
    "    HumanMessagePromptTemplate,\n",
    ")\n",
    "\n",
    "# SystemMessagePromptTemplate\n",
    "template = (\n",
    "    \"You are a helpful assistant that translates {input_language} to \"\n",
    "    \"{output_language}.\"\n",
    ")\n",
    "system_message_prompt = SystemMessagePromptTemplate.from_template(template)\n",
    "# HumanMessagePromptTemplate\n",
    "human_template = \"{text}\"\n",
    "human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)\n",
    "# ChatPromptTemplate\n",
    "chat_prompt = ChatPromptTemplate.from_messages([\n",
    "    system_message_prompt,\n",
    "    human_message_prompt\n",
    "    ])\n",
    "# Render the final prompt messages\n",
    "chat_prompt.format_messages(\n",
    "    input_language=\"English\",\n",
    "    output_language=\"Chinese\",\n",
    "    text=\"I love programming.\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "43967700e2105632",
   "metadata": {},
   "source": [
    "The code above first defines two templates: a system message template describing the task context (the assistant's role as a translator and the translation task), and a human message template whose content is the user's input.  \n",
    "It then combines the two with ChatPromptTemplate's from_messages method to produce a chat prompt template.  \n",
    "This way, the chat model wrapper not only produces the expected output; developers also no longer need to worry about whether the prompt matches the message-list format, and only have to supply the concrete task description."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dbf2d5734f1a250d",
   "metadata": {},
   "source": [
    "### (4) Building Your First Chain\n",
    "Next, we tie the steps above together into a chain. We wrap the model with LangChain's LLMChain (an LLM wrapper chain), which provides functionality similar to the prompt template approach but is more intuitive: once you import LLMChain and pass in the prompt template and the chat model, the chain is built. You can invoke it as a function call, or simply \"run\" it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "279c0e27291b76bf",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-20T09:31:45.940460Z",
     "start_time": "2024-08-20T09:31:45.600918Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'我喜欢编程。'"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain import LLMChain\n",
    "\n",
    "# Use LLMChain to combine the chat model component with the prompt template\n",
    "chain = LLMChain(llm=chat, prompt=chat_prompt)\n",
    "\n",
    "# Run the chain, passing in the arguments\n",
    "chain.run(\n",
    "    input_language=\"English\",\n",
    "    output_language=\"Chinese\",\n",
    "    text=\"I love programming.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e836755d48f44001",
   "metadata": {},
   "source": [
    "### (5) Using Agents\n",
    "The Agent is currently the most advanced module in LangChain. Its main job is to dynamically choose which actions to take, and in what order, based on the input. An Agent is given a set of tools, each of which performs a specific task. The Agent repeatedly picks a tool, runs it, and observes the output until it reaches a final answer. In other words, the Agent acts as a decision maker: it decides which tool to use to fetch, say, the weather, and we only need to look at the final answer it gives.  \n",
    "To create and load an Agent, you need to choose the following pieces:  \n",
    "(1) A chat model wrapper: the LLM that drives the Agent.  \n",
    "(2) Tools: functions that perform specific tasks, e.g. Google search, database queries, a Python REPL, or even other LLM chains.  \n",
    "(3) An agent name: a string that selects a concrete Agent class. Each class carries a set of predefined prompt templates that help the LLM judge how to act in different scenarios or tasks. For example, an Agent class specialized for web scraping might include scraping-related task instructions in its prompt templates. In the code example below, we create an Agent that queries a search engine through LangChain's SerpAPIWrapper."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "447268044f9dca65",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-20T09:32:00.681088Z",
     "start_time": "2024-08-20T09:32:00.547715Z"
    }
   },
   "outputs": [
    {
     "ename": "ImportError",
     "evalue": "Could not import serpapi python package. Please install it with `pip install google-search-results`.",
     "output_type": "error",
     "traceback": [
      "\u001B[1;31m---------------------------------------------------------------------------\u001B[0m",
      "\u001B[1;31mModuleNotFoundError\u001B[0m                       Traceback (most recent call last)",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_community\\utilities\\serpapi.py:67\u001B[0m, in \u001B[0;36mSerpAPIWrapper.validate_environment\u001B[1;34m(cls, values)\u001B[0m\n\u001B[0;32m     66\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m---> 67\u001B[0m     \u001B[38;5;28;01mfrom\u001B[39;00m \u001B[38;5;21;01mserpapi\u001B[39;00m \u001B[38;5;28;01mimport\u001B[39;00m GoogleSearch\n\u001B[0;32m     69\u001B[0m     values[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124msearch_engine\u001B[39m\u001B[38;5;124m\"\u001B[39m] \u001B[38;5;241m=\u001B[39m GoogleSearch\n",
      "\u001B[1;31mModuleNotFoundError\u001B[0m: No module named 'serpapi'",
      "\nDuring handling of the above exception, another exception occurred:\n",
      "\u001B[1;31mImportError\u001B[0m                               Traceback (most recent call last)",
      "Cell \u001B[1;32mIn[12], line 10\u001B[0m\n\u001B[0;32m      6\u001B[0m \u001B[38;5;66;03m# 这个key需要魔法上网才能获得，这个案例就不测试了\u001B[39;00m\n\u001B[0;32m      7\u001B[0m \u001B[38;5;66;03m#设置谷歌搜索的API密钥\u001B[39;00m\n\u001B[0;32m      8\u001B[0m os\u001B[38;5;241m.\u001B[39menviron[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mSERPAPI_API_KEY\u001B[39m\u001B[38;5;124m\"\u001B[39m] \u001B[38;5;241m=\u001B[39m \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mxxx\u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[1;32m---> 10\u001B[0m tools \u001B[38;5;241m=\u001B[39m load_tools([\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mserpapi\u001B[39m\u001B[38;5;124m\"\u001B[39m, \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mllm-math\u001B[39m\u001B[38;5;124m\"\u001B[39m], llm\u001B[38;5;241m=\u001B[39mchat)\n\u001B[0;32m     12\u001B[0m agent \u001B[38;5;241m=\u001B[39m initialize_agent(tools, chat,\n\u001B[0;32m     13\u001B[0m                          agent\u001B[38;5;241m=\u001B[39mAgentType\u001B[38;5;241m.\u001B[39mCHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose\u001B[38;5;241m=\u001B[39m\u001B[38;5;28;01mTrue\u001B[39;00m)\n\u001B[0;32m     14\u001B[0m agent\u001B[38;5;241m.\u001B[39mrun(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mWhat will be the weather in Shanghai three days from now?\u001B[39m\u001B[38;5;124m\"\u001B[39m)\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_community\\agent_toolkits\\load_tools.py:746\u001B[0m, in \u001B[0;36mload_tools\u001B[1;34m(tool_names, llm, callbacks, allow_dangerous_tools, **kwargs)\u001B[0m\n\u001B[0;32m    744\u001B[0m     _get_tool_func, extra_keys \u001B[38;5;241m=\u001B[39m _EXTRA_OPTIONAL_TOOLS[name]\n\u001B[0;32m    745\u001B[0m     sub_kwargs \u001B[38;5;241m=\u001B[39m {k: kwargs[k] \u001B[38;5;28;01mfor\u001B[39;00m k \u001B[38;5;129;01min\u001B[39;00m extra_keys \u001B[38;5;28;01mif\u001B[39;00m k \u001B[38;5;129;01min\u001B[39;00m kwargs}\n\u001B[1;32m--> 746\u001B[0m     tool \u001B[38;5;241m=\u001B[39m _get_tool_func(\u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39msub_kwargs)\n\u001B[0;32m    747\u001B[0m     tools\u001B[38;5;241m.\u001B[39mappend(tool)\n\u001B[0;32m    748\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_community\\agent_toolkits\\load_tools.py:381\u001B[0m, in \u001B[0;36m_get_serpapi\u001B[1;34m(**kwargs)\u001B[0m\n\u001B[0;32m    377\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21m_get_serpapi\u001B[39m(\u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs: Any) \u001B[38;5;241m-\u001B[39m\u001B[38;5;241m>\u001B[39m BaseTool:\n\u001B[0;32m    378\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m Tool(\n\u001B[0;32m    379\u001B[0m         name\u001B[38;5;241m=\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mSearch\u001B[39m\u001B[38;5;124m\"\u001B[39m,\n\u001B[0;32m    380\u001B[0m         description\u001B[38;5;241m=\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mA search engine. Useful for when you need to answer questions about current events. Input should be a search query.\u001B[39m\u001B[38;5;124m\"\u001B[39m,\n\u001B[1;32m--> 381\u001B[0m         func\u001B[38;5;241m=\u001B[39mSerpAPIWrapper(\u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\u001B[38;5;241m.\u001B[39mrun,\n\u001B[0;32m    382\u001B[0m         coroutine\u001B[38;5;241m=\u001B[39mSerpAPIWrapper(\u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\u001B[38;5;241m.\u001B[39marun,\n\u001B[0;32m    383\u001B[0m     )\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\pydantic\\v1\\main.py:339\u001B[0m, in \u001B[0;36mBaseModel.__init__\u001B[1;34m(__pydantic_self__, **data)\u001B[0m\n\u001B[0;32m    333\u001B[0m \u001B[38;5;250m\u001B[39m\u001B[38;5;124;03m\"\"\"\u001B[39;00m\n\u001B[0;32m    334\u001B[0m \u001B[38;5;124;03mCreate a new model by parsing and validating input data from keyword arguments.\u001B[39;00m\n\u001B[0;32m    335\u001B[0m \n\u001B[0;32m    336\u001B[0m \u001B[38;5;124;03mRaises ValidationError if the input data cannot be parsed to form a valid model.\u001B[39;00m\n\u001B[0;32m    337\u001B[0m \u001B[38;5;124;03m\"\"\"\u001B[39;00m\n\u001B[0;32m    338\u001B[0m \u001B[38;5;66;03m# Uses something other than `self` the first arg to allow \"self\" as a settable attribute\u001B[39;00m\n\u001B[1;32m--> 339\u001B[0m values, fields_set, validation_error \u001B[38;5;241m=\u001B[39m validate_model(__pydantic_self__\u001B[38;5;241m.\u001B[39m\u001B[38;5;18m__class__\u001B[39m, data)\n\u001B[0;32m    340\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m validation_error:\n\u001B[0;32m    341\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m validation_error\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\pydantic\\v1\\main.py:1048\u001B[0m, in \u001B[0;36mvalidate_model\u001B[1;34m(model, input_data, cls)\u001B[0m\n\u001B[0;32m   1046\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m validator \u001B[38;5;129;01min\u001B[39;00m model\u001B[38;5;241m.\u001B[39m__pre_root_validators__:\n\u001B[0;32m   1047\u001B[0m     \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m-> 1048\u001B[0m         input_data \u001B[38;5;241m=\u001B[39m validator(cls_, input_data)\n\u001B[0;32m   1049\u001B[0m     \u001B[38;5;28;01mexcept\u001B[39;00m (\u001B[38;5;167;01mValueError\u001B[39;00m, \u001B[38;5;167;01mTypeError\u001B[39;00m, \u001B[38;5;167;01mAssertionError\u001B[39;00m) \u001B[38;5;28;01mas\u001B[39;00m exc:\n\u001B[0;32m   1050\u001B[0m         \u001B[38;5;28;01mreturn\u001B[39;00m {}, \u001B[38;5;28mset\u001B[39m(), ValidationError([ErrorWrapper(exc, loc\u001B[38;5;241m=\u001B[39mROOT_KEY)], cls_)\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_community\\utilities\\serpapi.py:71\u001B[0m, in \u001B[0;36mSerpAPIWrapper.validate_environment\u001B[1;34m(cls, values)\u001B[0m\n\u001B[0;32m     69\u001B[0m     values[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124msearch_engine\u001B[39m\u001B[38;5;124m\"\u001B[39m] \u001B[38;5;241m=\u001B[39m GoogleSearch\n\u001B[0;32m     70\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mImportError\u001B[39;00m:\n\u001B[1;32m---> 71\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m \u001B[38;5;167;01mImportError\u001B[39;00m(\n\u001B[0;32m     72\u001B[0m         \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mCould not import serpapi python package. \u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[0;32m     73\u001B[0m         \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mPlease install it with `pip install google-search-results`.\u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[0;32m     74\u001B[0m     )\n\u001B[0;32m     75\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m values\n",
      "\u001B[1;31mImportError\u001B[0m: Could not import serpapi python package. Please install it with `pip install google-search-results`."
     ]
    }
   ],
   "source": [
    "from langchain.agents import load_tools\n",
    "from langchain.agents import initialize_agent\n",
    "from langchain.agents import AgentType\n",
    "import os\n",
    "\n",
    "# Set the SerpAPI (Google search) API key; obtaining one requires network access\n",
    "# outside mainland China, so this example is left untested here.\n",
    "os.environ[\"SERPAPI_API_KEY\"] = \"xxx\"\n",
    "\n",
    "tools = load_tools([\"serpapi\", \"llm-math\"], llm=chat)\n",
    "\n",
    "agent = initialize_agent(tools, chat,\n",
    "                         agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\n",
    "agent.run(\"What will be the weather in Shanghai three days from now?\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "edca807b9f2c7b77",
   "metadata": {},
   "source": [
    "### (6) Memory Components\n",
    "LangChain provides a component called \"memory\" for maintaining application state. It not only lets you update the state from the latest inputs and outputs, but also supports using the stored conversation state to adjust or modify upcoming inputs. This provides the infrastructure for more sophisticated dialogue management and information tracking."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "968c0cb25605a74d",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-25T00:22:05.886965Z",
     "start_time": "2024-08-25T00:22:03.322102Z"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[ERROR][2024-08-25 08:22:03.504] openapi_requestor.py:274 [t:36964]: api request req_id: as-g261689cth failed with error code: 336501, err msg: Rate limit reached for RPM, please check https://cloud.baidu.com/doc/WENXINWORKSHOP/s/tlmyncueh\n",
      "[WARNING][2024-08-25 08:22:03.506] base.py:414 [t:36964]: got error code 336501 from server, retrying... \n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'你好，李特丽！很高兴与你交流。请问有什么我可以帮助你的吗？有什么话题你想讨论或者分享的吗？'"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.prompts import (\n",
    "    ChatPromptTemplate,\n",
    "    MessagesPlaceholder,\n",
    "    SystemMessagePromptTemplate,\n",
    "    HumanMessagePromptTemplate\n",
    ")\n",
    "from langchain.chains import ConversationChain\n",
    "from langchain.memory import ConversationBufferMemory\n",
    "prompt = ChatPromptTemplate.from_messages([\n",
    "    SystemMessagePromptTemplate.from_template(\n",
    "        \"\"\"\n",
    "        The following is a friendly conversation between a human and an AI.\n",
    "        The AI is talkative and provides lots of specific details from its\n",
    "        context. If the AI does not know the answer to a question,\n",
    "        it truthfully says it does not know.\n",
    "        \"\"\"\n",
    "    ),\n",
    "    MessagesPlaceholder(variable_name=\"history\"),\n",
    "    HumanMessagePromptTemplate.from_template(\"{input}\")\n",
    "])\n",
    "# Create a ConversationBufferMemory object, one of LangChain's built-in memory components\n",
    "memory = ConversationBufferMemory(return_messages=True)\n",
    "# Create a ConversationChain, a conversation-chain component that combines the chat model and the memory object created above\n",
    "conversation = ConversationChain(memory=memory, prompt=prompt, llm=chat)\n",
    "conversation.predict(input=\"你好，我是李特丽！\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "972ef03e84d2e48b",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-25T00:22:13.708220Z",
     "start_time": "2024-08-25T00:22:12.144137Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'你叫什么名字呢？请告诉我你的名字，我会尽力记住并尊重你的身份。'"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "conversation.run(\"我叫什么名字\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2b7663f25a0116cb",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## 1.4 LangChain Expression Language"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8c7387bc2423b30f",
   "metadata": {},
   "source": [
    "LangChain's core design philosophy is \"do one thing and do it well\": every tool or component should focus on solving one specific problem while remaining composable with other tools and components. In LangChain this shows up as components that are independent and modular. For example, using the pipe operator \"|\", developers can compose chains of components effortlessly, writing code almost the way they would speak; \"direct\" and \"concise\" is the essence of the LangChain Expression Language. This style not only makes the code structure clearer, it also brings programming closer to natural-language expression, giving developers a more intuitive and fluid experience."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "66a8f439fc53aa7e",
   "metadata": {},
   "source": [
    "### (1) Prompt Template + Model Wrapper"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b4b00ea4d7dd4f68",
   "metadata": {},
   "source": [
    "The combination of a prompt template and a model wrapper forms the most basic chain component and appears in most complex chains. Complex chain components almost always contain a prompt template and a model wrapper: they are the fundamental building blocks for interacting with an LLM, and neither can be left out."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "da5634e8cb946f00",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-20T10:31:45.750059Z",
     "start_time": "2024-08-20T10:31:40.615058Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "content='好的，以下是一个关于苹果的笑话：\\n\\n有一天，苹果公司的CEO蒂姆·库克在森林里散步，他看到一只猴子在吃苹果。蒂姆·库克走过去问猴子：“你知道我们公司的新产品iPhone 14吗？”猴子停下来看着他，用嘴巴回应：“我听说过，它是不是像这个苹果一样好吃？”蒂姆·库克笑了：“哈哈，它可比苹果更复杂，有很多功能呢。”猴子听后，又咬了一口苹果，然后说：“哦，那它肯定比这个苹果贵多了。”蒂姆·库克点点头，笑着离开了森林。' response_metadata={'token_usage': {}, 'model_name': 'ERNIE-Speed-128K', 'finish_reason': 'stop'} id='run-635f9001-36c2-42a0-af07-f5168d4307d8-0'\n"
     ]
    }
   ],
   "source": [
    "# Instantiate the prompt template and the chat model wrapper\n",
    "prompt = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\")\n",
    "model = QianfanChatEndpoint(\n",
    "    streaming=True,\n",
    "    temperature=0.2,\n",
    "    model=\"ERNIE-Speed-128K\",\n",
    ")\n",
    "# Define the processing chain\n",
    "chain = prompt | model\n",
    "# Invoke the chain\n",
    "response = chain.invoke(\"apple\")\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9512a1d6c3ce070f",
   "metadata": {},
   "source": [
    "### (2) Prompt Template + Model Wrapper + Output Parser"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 75,
   "id": "1a2bb3d3822f69d2",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-20T11:08:51.967362Z",
     "start_time": "2024-08-20T11:08:48.874550Z"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[ERROR][2024-08-20 19:08:49.036] openapi_requestor.py:274 [t:31204]: api request req_id: as-gwqfkek3ti failed with error code: 336501, err msg: Rate limit reached for RPM, please check https://cloud.baidu.com/doc/WENXINWORKSHOP/s/tlmyncueh\n",
      "[WARNING][2024-08-20 19:08:49.037] base.py:414 [t:31204]: got error code 336501 from server, retrying... \n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "按照您的要求，红色的英文名称应表示为：\n",
      "\n",
      "'color': 'red'\n"
     ]
    }
   ],
   "source": [
    "from langchain.schema.output_parser import StrOutputParser\n",
    "prompt = ChatPromptTemplate.from_template(\"红色的英文名称，以格式'color': 'name'返回\")\n",
    "chain = prompt | model | StrOutputParser()\n",
    "# Invoke the chain; the prompt template has no input variables, so pass an empty dict\n",
    "response = chain.invoke({})\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "47928490779724d9",
   "metadata": {},
   "source": [
    "### (3) Multi-Function Composite Chains"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fd9f117ffe2c5870",
   "metadata": {},
   "source": [
    "First we define two prompt templates, prompt1 and prompt2, which ask which city a person is from and which country that city is in, respectively.\n",
    "chain1 is a chain composed of prompt1, model, and StrOutputParser; its purpose is to return the city a given person is from.\n",
    "chain2 is a more complex chain. It first takes chain1's result (the city), combines it with the language value extracted by itemgetter to build the complete question for prompt2, and that question is then passed to the model and parsed by StrOutputParser."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "34afbea735a1e35b",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-20T10:39:18.340434Z",
     "start_time": "2024-08-20T10:39:15.951746Z"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[ERROR][2024-08-20 18:39:16.665] openapi_requestor.py:274 [t:31204]: api request req_id: as-1kxktwiqkw failed with error code: 336501, err msg: Rate limit reached for RPM, please check https://cloud.baidu.com/doc/WENXINWORKSHOP/s/tlmyncueh\n",
      "[WARNING][2024-08-20 18:39:16.667] base.py:414 [t:31204]: got error code 336501 from server, retrying... \n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'奥巴马来自美国的夏威夷州檀香山（Honolulu），所以这个国家是美国。'"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from operator import itemgetter\n",
    "\n",
    "prompt1 = ChatPromptTemplate.from_template(\"what is the city {person} is from?\")\n",
    "\n",
    "prompt2 = ChatPromptTemplate.from_template(\"what country is the city {city} in? respond in {language}\")\n",
    "\n",
    "chain1 = prompt1 | model | StrOutputParser()\n",
    "\n",
    "chain2 = (\n",
    "    {\"city\": chain1,\"language\": itemgetter(\"language\")}\n",
    "    | prompt2\n",
    "    | model\n",
    "    | StrOutputParser())\n",
    "\n",
    "chain2.invoke({\"person\":\"obama\", \"language\":\"chinese\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7c5549c7b948bc1e",
   "metadata": {},
   "source": [
    "Now let's raise the difficulty and build an even more complex composite chain. We first define four prompt templates, covering a color, fruits of that color, countries whose flags contain that color, and the color correspondence between the fruit and the country's flag.\n",
    "chain1 is a simple chain that generates a random color from prompt1.\n",
    "chain2 is a more complex chain that first uses a RunnableMap together with chain1 to obtain a random color.\n",
    "That color is then fed as input to two parallel chains, which ask what fruits have that color and which country's flag uses it.\n",
    "This example fails as written (see the KeyError in the output below)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 83,
   "id": "20ab4b845f215d70",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-20T11:11:45.587164Z",
     "start_time": "2024-08-20T11:11:43.587010Z"
    }
   },
   "outputs": [
    {
     "ename": "KeyError",
     "evalue": "\"Input to ChatPromptTemplate is missing variables {'color'}.  Expected: ['color'] Received: []\"",
     "output_type": "error",
     "traceback": [
      "\u001B[1;31m---------------------------------------------------------------------------\u001B[0m",
      "\u001B[1;31mKeyError\u001B[0m                                  Traceback (most recent call last)",
      "Cell \u001B[1;32mIn[83], line 18\u001B[0m\n\u001B[0;32m     12\u001B[0m chain1 \u001B[38;5;241m=\u001B[39mprompt1 \u001B[38;5;241m|\u001B[39m model \u001B[38;5;241m|\u001B[39m StrOutputParser()\n\u001B[0;32m     14\u001B[0m chain2 \u001B[38;5;241m=\u001B[39m {\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mcolor\u001B[39m\u001B[38;5;124m\"\u001B[39m: chain1} \u001B[38;5;241m|\u001B[39m {\n\u001B[0;32m     15\u001B[0m     \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mfruit\u001B[39m\u001B[38;5;124m\"\u001B[39m:prompt2 \u001B[38;5;241m|\u001B[39m model \u001B[38;5;241m|\u001B[39m StrOutputParser(), \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mcountry\u001B[39m\u001B[38;5;124m\"\u001B[39m:prompt3 \u001B[38;5;241m|\u001B[39m model \u001B[38;5;241m|\u001B[39m StrOutputParser()\n\u001B[0;32m     16\u001B[0m     } \u001B[38;5;241m|\u001B[39m prompt4\n\u001B[1;32m---> 18\u001B[0m chain2\u001B[38;5;241m.\u001B[39minvoke({})\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_core\\runnables\\base.py:2876\u001B[0m, in \u001B[0;36mRunnableSequence.invoke\u001B[1;34m(self, input, config, **kwargs)\u001B[0m\n\u001B[0;32m   2874\u001B[0m context\u001B[38;5;241m.\u001B[39mrun(_set_config_context, config)\n\u001B[0;32m   2875\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m i \u001B[38;5;241m==\u001B[39m \u001B[38;5;241m0\u001B[39m:\n\u001B[1;32m-> 2876\u001B[0m     \u001B[38;5;28minput\u001B[39m \u001B[38;5;241m=\u001B[39m context\u001B[38;5;241m.\u001B[39mrun(step\u001B[38;5;241m.\u001B[39minvoke, \u001B[38;5;28minput\u001B[39m, config, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n\u001B[0;32m   2877\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[0;32m   2878\u001B[0m     \u001B[38;5;28minput\u001B[39m \u001B[38;5;241m=\u001B[39m context\u001B[38;5;241m.\u001B[39mrun(step\u001B[38;5;241m.\u001B[39minvoke, \u001B[38;5;28minput\u001B[39m, config)\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_core\\runnables\\base.py:3580\u001B[0m, in \u001B[0;36mRunnableParallel.invoke\u001B[1;34m(self, input, config)\u001B[0m\n\u001B[0;32m   3575\u001B[0m     \u001B[38;5;28;01mwith\u001B[39;00m get_executor_for_config(config) \u001B[38;5;28;01mas\u001B[39;00m executor:\n\u001B[0;32m   3576\u001B[0m         futures \u001B[38;5;241m=\u001B[39m [\n\u001B[0;32m   3577\u001B[0m             executor\u001B[38;5;241m.\u001B[39msubmit(_invoke_step, step, \u001B[38;5;28minput\u001B[39m, config, key)\n\u001B[0;32m   3578\u001B[0m             \u001B[38;5;28;01mfor\u001B[39;00m key, step \u001B[38;5;129;01min\u001B[39;00m steps\u001B[38;5;241m.\u001B[39mitems()\n\u001B[0;32m   3579\u001B[0m         ]\n\u001B[1;32m-> 3580\u001B[0m         output \u001B[38;5;241m=\u001B[39m {key: future\u001B[38;5;241m.\u001B[39mresult() \u001B[38;5;28;01mfor\u001B[39;00m key, future \u001B[38;5;129;01min\u001B[39;00m \u001B[38;5;28mzip\u001B[39m(steps, futures)}\n\u001B[0;32m   3581\u001B[0m \u001B[38;5;66;03m# finish the root run\u001B[39;00m\n\u001B[0;32m   3582\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mBaseException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m e:\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\concurrent\\futures\\_base.py:449\u001B[0m, in \u001B[0;36mFuture.result\u001B[1;34m(self, timeout)\u001B[0m\n\u001B[0;32m    447\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m CancelledError()\n\u001B[0;32m    448\u001B[0m \u001B[38;5;28;01melif\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_state \u001B[38;5;241m==\u001B[39m FINISHED:\n\u001B[1;32m--> 449\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m__get_result()\n\u001B[0;32m    451\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_condition\u001B[38;5;241m.\u001B[39mwait(timeout)\n\u001B[0;32m    453\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_state \u001B[38;5;129;01min\u001B[39;00m [CANCELLED, CANCELLED_AND_NOTIFIED]:\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\concurrent\\futures\\_base.py:401\u001B[0m, in \u001B[0;36mFuture.__get_result\u001B[1;34m(self)\u001B[0m\n\u001B[0;32m    399\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_exception:\n\u001B[0;32m    400\u001B[0m     \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m--> 401\u001B[0m         \u001B[38;5;28;01mraise\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_exception\n\u001B[0;32m    402\u001B[0m     \u001B[38;5;28;01mfinally\u001B[39;00m:\n\u001B[0;32m    403\u001B[0m         \u001B[38;5;66;03m# Break a reference cycle with the exception in self._exception\u001B[39;00m\n\u001B[0;32m    404\u001B[0m         \u001B[38;5;28mself\u001B[39m \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\concurrent\\futures\\thread.py:58\u001B[0m, in \u001B[0;36m_WorkItem.run\u001B[1;34m(self)\u001B[0m\n\u001B[0;32m     55\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m\n\u001B[0;32m     57\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m---> 58\u001B[0m     result \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mfn(\u001B[38;5;241m*\u001B[39m\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39margs, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39m\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mkwargs)\n\u001B[0;32m     59\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mBaseException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m exc:\n\u001B[0;32m     60\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mfuture\u001B[38;5;241m.\u001B[39mset_exception(exc)\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_core\\runnables\\base.py:3564\u001B[0m, in \u001B[0;36mRunnableParallel.invoke.<locals>._invoke_step\u001B[1;34m(step, input, config, key)\u001B[0m\n\u001B[0;32m   3562\u001B[0m context \u001B[38;5;241m=\u001B[39m copy_context()\n\u001B[0;32m   3563\u001B[0m context\u001B[38;5;241m.\u001B[39mrun(_set_config_context, child_config)\n\u001B[1;32m-> 3564\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m context\u001B[38;5;241m.\u001B[39mrun(\n\u001B[0;32m   3565\u001B[0m     step\u001B[38;5;241m.\u001B[39minvoke,\n\u001B[0;32m   3566\u001B[0m     \u001B[38;5;28minput\u001B[39m,\n\u001B[0;32m   3567\u001B[0m     child_config,\n\u001B[0;32m   3568\u001B[0m )\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_core\\runnables\\base.py:2876\u001B[0m, in \u001B[0;36mRunnableSequence.invoke\u001B[1;34m(self, input, config, **kwargs)\u001B[0m\n\u001B[0;32m   2874\u001B[0m context\u001B[38;5;241m.\u001B[39mrun(_set_config_context, config)\n\u001B[0;32m   2875\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m i \u001B[38;5;241m==\u001B[39m \u001B[38;5;241m0\u001B[39m:\n\u001B[1;32m-> 2876\u001B[0m     \u001B[38;5;28minput\u001B[39m \u001B[38;5;241m=\u001B[39m context\u001B[38;5;241m.\u001B[39mrun(step\u001B[38;5;241m.\u001B[39minvoke, \u001B[38;5;28minput\u001B[39m, config, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n\u001B[0;32m   2877\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[0;32m   2878\u001B[0m     \u001B[38;5;28minput\u001B[39m \u001B[38;5;241m=\u001B[39m context\u001B[38;5;241m.\u001B[39mrun(step\u001B[38;5;241m.\u001B[39minvoke, \u001B[38;5;28minput\u001B[39m, config)\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_core\\prompts\\base.py:179\u001B[0m, in \u001B[0;36mBasePromptTemplate.invoke\u001B[1;34m(self, input, config)\u001B[0m\n\u001B[0;32m    177\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mtags:\n\u001B[0;32m    178\u001B[0m     config[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mtags\u001B[39m\u001B[38;5;124m\"\u001B[39m] \u001B[38;5;241m=\u001B[39m config[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mtags\u001B[39m\u001B[38;5;124m\"\u001B[39m] \u001B[38;5;241m+\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mtags\n\u001B[1;32m--> 179\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_call_with_config(\n\u001B[0;32m    180\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_format_prompt_with_error_handling,\n\u001B[0;32m    181\u001B[0m     \u001B[38;5;28minput\u001B[39m,\n\u001B[0;32m    182\u001B[0m     config,\n\u001B[0;32m    183\u001B[0m     run_type\u001B[38;5;241m=\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mprompt\u001B[39m\u001B[38;5;124m\"\u001B[39m,\n\u001B[0;32m    184\u001B[0m )\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_core\\runnables\\base.py:1785\u001B[0m, in \u001B[0;36mRunnable._call_with_config\u001B[1;34m(self, func, input, config, run_type, **kwargs)\u001B[0m\n\u001B[0;32m   1781\u001B[0m     context \u001B[38;5;241m=\u001B[39m copy_context()\n\u001B[0;32m   1782\u001B[0m     context\u001B[38;5;241m.\u001B[39mrun(_set_config_context, child_config)\n\u001B[0;32m   1783\u001B[0m     output \u001B[38;5;241m=\u001B[39m cast(\n\u001B[0;32m   1784\u001B[0m         Output,\n\u001B[1;32m-> 1785\u001B[0m         context\u001B[38;5;241m.\u001B[39mrun(\n\u001B[0;32m   1786\u001B[0m             call_func_with_variable_args,  \u001B[38;5;66;03m# type: ignore[arg-type]\u001B[39;00m\n\u001B[0;32m   1787\u001B[0m             func,  \u001B[38;5;66;03m# type: ignore[arg-type]\u001B[39;00m\n\u001B[0;32m   1788\u001B[0m             \u001B[38;5;28minput\u001B[39m,  \u001B[38;5;66;03m# type: ignore[arg-type]\u001B[39;00m\n\u001B[0;32m   1789\u001B[0m             config,\n\u001B[0;32m   1790\u001B[0m             run_manager,\n\u001B[0;32m   1791\u001B[0m             \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs,\n\u001B[0;32m   1792\u001B[0m         ),\n\u001B[0;32m   1793\u001B[0m     )\n\u001B[0;32m   1794\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mBaseException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[0;32m   1795\u001B[0m     run_manager\u001B[38;5;241m.\u001B[39mon_chain_error(e)\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_core\\runnables\\config.py:397\u001B[0m, in \u001B[0;36mcall_func_with_variable_args\u001B[1;34m(func, input, config, run_manager, **kwargs)\u001B[0m\n\u001B[0;32m    395\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m run_manager \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m \u001B[38;5;129;01mand\u001B[39;00m accepts_run_manager(func):\n\u001B[0;32m    396\u001B[0m     kwargs[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mrun_manager\u001B[39m\u001B[38;5;124m\"\u001B[39m] \u001B[38;5;241m=\u001B[39m run_manager\n\u001B[1;32m--> 397\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m func(\u001B[38;5;28minput\u001B[39m, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_core\\prompts\\base.py:153\u001B[0m, in \u001B[0;36mBasePromptTemplate._format_prompt_with_error_handling\u001B[1;34m(self, inner_input)\u001B[0m\n\u001B[0;32m    152\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21m_format_prompt_with_error_handling\u001B[39m(\u001B[38;5;28mself\u001B[39m, inner_input: Dict) \u001B[38;5;241m-\u001B[39m\u001B[38;5;241m>\u001B[39m PromptValue:\n\u001B[1;32m--> 153\u001B[0m     _inner_input \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_validate_input(inner_input)\n\u001B[0;32m    154\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mformat_prompt(\u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39m_inner_input)\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_core\\prompts\\base.py:145\u001B[0m, in \u001B[0;36mBasePromptTemplate._validate_input\u001B[1;34m(self, inner_input)\u001B[0m\n\u001B[0;32m    143\u001B[0m missing \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mset\u001B[39m(\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39minput_variables)\u001B[38;5;241m.\u001B[39mdifference(inner_input)\n\u001B[0;32m    144\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m missing:\n\u001B[1;32m--> 145\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m \u001B[38;5;167;01mKeyError\u001B[39;00m(\n\u001B[0;32m    146\u001B[0m         \u001B[38;5;124mf\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mInput to \u001B[39m\u001B[38;5;132;01m{\u001B[39;00m\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m\u001B[38;5;18m__class__\u001B[39m\u001B[38;5;241m.\u001B[39m\u001B[38;5;18m__name__\u001B[39m\u001B[38;5;132;01m}\u001B[39;00m\u001B[38;5;124m is missing variables \u001B[39m\u001B[38;5;132;01m{\u001B[39;00mmissing\u001B[38;5;132;01m}\u001B[39;00m\u001B[38;5;124m. \u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[0;32m    147\u001B[0m         \u001B[38;5;124mf\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m Expected: \u001B[39m\u001B[38;5;132;01m{\u001B[39;00m\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39minput_variables\u001B[38;5;132;01m}\u001B[39;00m\u001B[38;5;124m\"\u001B[39m\n\u001B[0;32m    148\u001B[0m         \u001B[38;5;124mf\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m Received: \u001B[39m\u001B[38;5;132;01m{\u001B[39;00m\u001B[38;5;28mlist\u001B[39m(inner_input\u001B[38;5;241m.\u001B[39mkeys())\u001B[38;5;132;01m}\u001B[39;00m\u001B[38;5;124m\"\u001B[39m\n\u001B[0;32m    149\u001B[0m     )\n\u001B[0;32m    150\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m inner_input\n",
      "\u001B[1;31mKeyError\u001B[0m: \"Input to ChatPromptTemplate is missing variables {'color'}.  Expected: ['color'] Received: []\""
     ]
    }
   ],
   "source": [
    "from langchain.schema.runnable import RunnableMap\n",
    "\n",
    "prompt1= \\\n",
    "    ChatPromptTemplate.from_template(\"红色的英文名称，以格式['color': 'name']返回\")\n",
    "prompt2= \\\n",
    "    ChatPromptTemplate.from_template(\"what is a fruit of color:{color}\")\n",
    "prompt3= \\\n",
    "    ChatPromptTemplate.from_template(\"what is countries flag that has the color:{color}\")\n",
    "prompt4= \\\n",
    "    ChatPromptTemplate.from_template(\"What is the color of {fruit} and {country}\")\n",
    "\n",
    "chain1 =prompt1 | model | StrOutputParser()\n",
    "\n",
    "# pass the mapping positionally; steps={...} would become an output key named \"steps\"\n",
    "chain2 = RunnableMap({\"color\": chain1}) | {\n",
    "    \"fruit\":prompt2 | model | StrOutputParser(), \"country\":prompt3 | model | StrOutputParser()\n",
    "    } | prompt4\n",
    "\n",
    "chain2.invoke({})"
   ]
  },
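  {
   "cell_type": "markdown",
   "id": "3a9f72c418be05d6",
   "metadata": {},
   "source": [
    "The `KeyError` in the recorded traceback comes from building the map with a `steps=` keyword: `RunnableMap`/`RunnableParallel` treats every keyword argument as an output key, so `RunnableMap(steps={\"color\": chain1})` produces `{\"steps\": {\"color\": ...}}` and the downstream prompts never receive a top-level `color` variable. Passing the mapping as the first positional argument, `RunnableMap({\"color\": chain1})`, yields `{\"color\": ...}` as intended. A toy plain-Python sketch of the difference (illustrative only, not the LangChain API):\n",
    "```python\n",
    "def runnable_map(**steps):\n",
    "    \"\"\"Toy stand-in for RunnableMap: every keyword becomes an output key.\"\"\"\n",
    "    def invoke(value):\n",
    "        out = {}\n",
    "        for key, step in steps.items():\n",
    "            # a plain dict value acts like a nested map, as in LangChain\n",
    "            out[key] = step(value) if callable(step) else {k: s(value) for k, s in step.items()}\n",
    "        return out\n",
    "    return invoke\n",
    "\n",
    "chain1 = lambda _: \"red\"  # stands in for prompt1 | model | StrOutputParser()\n",
    "\n",
    "wrong = runnable_map(steps={\"color\": chain1})  # output key is \"steps\"\n",
    "right = runnable_map(color=chain1)             # output key is \"color\"\n",
    "\n",
    "print(wrong({}))  # {'steps': {'color': 'red'}} -- 'color' is buried one level down\n",
    "print(right({}))  # {'color': 'red'}\n",
    "```"
   ]
  },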
  {
   "cell_type": "markdown",
   "id": "40a26ade6bc634f6",
   "metadata": {},
   "source": [
    "# 2 Model I/O"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2aab06ce5120b48b",
   "metadata": {},
   "source": [
    "## 2.1 Model Wrappers"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "40fe6a5e8ea9fab2",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "### (1) The LLM Wrapper"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "57855742361f6289",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-20T21:59:48.776444Z",
     "start_time": "2024-08-20T21:59:44.968373Z"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "E:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_core\\_api\\deprecation.py:141: LangChainDeprecationWarning: The method `BaseLLM.__call__` was deprecated in langchain-core 0.1.7 and will be removed in 1.0. Use invoke instead.\n",
      "  warn_deprecated(\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'当然可以，以下是一个笑话：\\n\\n有一天，白气球遇到了黑气球，白气球对黑气球说：“你好黑啊。” 黑气球回应说：“是啊，因为我是黑气球。” 然后白气球就飘走了，留下一句经典的话：“你黑得连我反光都不会了！”哈哈！'"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_community.llms import QianfanLLMEndpoint\n",
    "\n",
    "llm = QianfanLLMEndpoint(\n",
    "    streaming=True,\n",
    "    temperature=0.2,\n",
    "    model=\"ERNIE-Speed-128K\",\n",
    ")\n",
    "# predict is deprecated; use invoke instead\n",
    "llm.invoke(\"What would be a good company name for a company that make colorful socks?\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9c8b3b23d31e67aa",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "### (2) The Chat Model Wrapper"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "68e1d8e4a5c2ba13",
   "metadata": {},
   "source": [
    "To use the chat-model wrapper, we import three message schemas: AIMessage for AI-generated messages, HumanMessage for messages entered by the human user, and SystemMessage for system messages. These schemas are typically used to set up the chat context or provide background information."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "227e73df3ddc3091",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-20T22:32:31.220829Z",
     "start_time": "2024-08-20T22:32:23.181921Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "当然可以，为您的新公司取名是一个令人兴奋的任务。考虑到您希望名字中包含AI元素，以下是我为您准备的一些建议：\n",
      "\n",
      "1. **智汇AI科技**：这个名字突出了人工智能（AI）和智慧汇聚的概念，体现了公司在科技领域的专业性和智慧集结的优势。\n",
      "2. **领航智AI界**：结合了领航和智能AI的概念，表达了公司在行业中的领导地位和对AI技术的探索精神。\n",
      "3. **智启未来AI**：这个名字中的“智启”意味着智能启迪，与AI结合，传达了公司致力于通过AI技术启迪未来的愿景。\n",
      "4. **慧点AI创新**：“慧点”寓意智慧之地，“创新”强调公司在AI领域的创新精神。\n",
      "5. **智芯领航**：结合了“智能芯片”和“领航”的概念，体现了公司在AI技术领域的核心竞争力和引导作用。\n",
      "\n",
      "这些名字都体现了AI的核心要素，同时易于记忆和发音，且具有积极的含义。请注意，这些名字仅为建议，您需要确保所选名称未被其他公司使用，并且符合您的企业文化和愿景。\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.chat_models import QianfanChatEndpoint\n",
    "from langchain.schema import (\n",
    "    AIMessage,\n",
    "    HumanMessage,\n",
    "    SystemMessage\n",
    ")\n",
    "\n",
    "chat = QianfanChatEndpoint(\n",
    "    streaming=True,\n",
    "    temperature=0.2,\n",
    "    model=\"ERNIE-Speed-128K\",\n",
    ")\n",
    "messages =[\n",
    "    SystemMessage(content=\"你是个取名大师，你擅长为创业公司取名字\"),\n",
    "    HumanMessage(content=\"帮我给新公司取个名字，要包含AI\")\n",
    "]\n",
    "\n",
    "response=chat.invoke(messages)\n",
    "\n",
    "print(response.content, end=\"\\n\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "297c3fbfa4dc4b4f",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## 2.2 Prompt Templates"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "95a6741c628fd4f4",
   "metadata": {},
   "source": [
    "### (1) PromptTemplate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "676436507896b0ef",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-21T22:13:04.765159Z",
     "start_time": "2024-08-21T22:13:04.759121Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "    You are an expert data scientist with an expertise in building deep learning\n",
      "    models.\n",
      "    Explain the concept of NLP in a couple of lines\n",
      "    \n"
     ]
    }
   ],
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "\n",
    "template =\"\"\"\n",
    "    You are an expert data scientist with an expertise in building deep learning\n",
    "    models.\n",
    "    Explain the concept of {concept} in a couple of lines\n",
    "    \"\"\"\n",
    "# First way to instantiate the template:\n",
    "prompt=PromptTemplate(template=template,input_variables=[\"concept\"])\n",
    "# Second way to instantiate the template (equivalent):\n",
    "prompt =PromptTemplate.from_template(template)\n",
    "\n",
    "# Embed the user's input into the template via format() and render the final prompt\n",
    "final_prompt = prompt.format(concept=\"NLP\")\n",
    "\n",
    "print(final_prompt)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5f52b61403a6f94c",
   "metadata": {},
   "source": [
    "### (2) ChatPromptTemplate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "60f648865eec4ab6",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-21T22:30:21.464946Z",
     "start_time": "2024-08-21T22:30:21.458878Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'System: \\n    You are an expert data scientist\\n    with an expertise in building deep learning models.\\n    \\nHuman: Explain the concept of NLP in a couple of lines'"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.prompts import (\n",
    "    ChatPromptTemplate,\n",
    "    PromptTemplate,\n",
    "    SystemMessagePromptTemplate,\n",
    "    AIMessagePromptTemplate,\n",
    "    HumanMessagePromptTemplate,\n",
    ")\n",
    "\n",
    "# System message prompt\n",
    "template = \"\"\"\n",
    "    You are an expert data scientist\n",
    "    with an expertise in building deep learning models.\n",
    "    \"\"\"\n",
    "system_message_prompt = \\\n",
    "    SystemMessagePromptTemplate.from_template(template)\n",
    "# Human message prompt\n",
    "human_template=\"Explain the concept of {concept} in a couple of lines\"\n",
    "human_message_prompt = \\\n",
    "    HumanMessagePromptTemplate.from_template(human_template)\n",
    "# Chat prompt combining both messages\n",
    "chat_prompt=ChatPromptTemplate.from_messages(\n",
    "    [system_message_prompt,\n",
    "     human_message_prompt]\n",
    ")\n",
    "\n",
    "chat_prompt.format_prompt(concept=\"NLP\").to_string()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f2e346e7ea4f681f",
   "metadata": {},
   "source": [
    "### (3) FewShotPromptTemplate"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4b28585e616c7941",
   "metadata": {},
   "source": [
    "FewShotPromptTemplate vs. PromptTemplate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a39de39c593c5f26",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.prompts import (\n",
    "    PromptTemplate,\n",
    "    FewShotPromptTemplate,\n",
    ")\n",
    "\n",
    "# PromptTemplate\n",
    "example_prompt=PromptTemplate(input_variables=[\"input\",\"output\"],\n",
    "                              template=\"\"\"\n",
    "                                词语：{input}\\n\n",
    "                                反义词：{output}\\n\n",
    "                                \"\"\"\n",
    "                              )\n",
    "# FewShotPromptTemplate\n",
    "examples = []  # a list of {\"input\": ..., \"output\": ...} dicts\n",
    "few_shot_prompt=FewShotPromptTemplate(\n",
    "    examples=examples,\n",
    "    example_prompt=example_prompt,\n",
    "    example_separator=\"\\n\",\n",
    "    prefix=\"来玩个反义词接龙游戏，我说词语，你说它的反义词\\n\",\n",
    "    suffix=\"词语：{input}\\n反义词：\",\n",
    "    input_variables=[\"input\"],\n",
    ")"
   ]
  },
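  {
   "cell_type": "markdown",
   "id": "7c1e92ab44f0d388",
   "metadata": {},
   "source": [
    "At format time, `FewShotPromptTemplate` essentially concatenates the `prefix`, each example rendered through `example_prompt`, and the rendered `suffix`, joined by `example_separator`. A minimal plain-Python sketch of that assembly (toy code, not the library's implementation):\n",
    "```python\n",
    "def few_shot_format(examples, example_template, prefix, suffix, separator=\"\\n\", **user_input):\n",
    "    \"\"\"Assemble prefix + rendered examples + rendered suffix.\"\"\"\n",
    "    parts = [prefix]\n",
    "    parts += [example_template.format(**ex) for ex in examples]\n",
    "    parts.append(suffix.format(**user_input))\n",
    "    return separator.join(parts)\n",
    "\n",
    "text = few_shot_format(\n",
    "    examples=[{\"input\": \"黑\", \"output\": \"白\"}],\n",
    "    example_template=\"词语：{input} 反义词：{output}\",\n",
    "    prefix=\"来玩个反义词接龙游戏，我说词语，你说它的反义词\",\n",
    "    suffix=\"词语：{input} 反义词：\",\n",
    "    input=\"高\",\n",
    ")\n",
    "print(text)\n",
    "```"
   ]
  },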
  {
   "cell_type": "markdown",
   "id": "e0070681be1a16f1",
   "metadata": {},
   "source": [
    "A test case"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "4cf1290311e1a507",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-22T03:40:32.321048Z",
     "start_time": "2024-08-22T03:40:31.164775Z"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[WARNING][2024-08-22 11:40:31.685] oauth.py:512 [t:35304]: no enough credential found, any one of (access_key, secret_key), (ak, sk), (access_token) must be provided\n"
     ]
    },
    {
     "ename": "InvalidArgumentError",
     "evalue": "no enough credential found, use any one of (access_key, secret_key), (ak, sk), (access_token) in api v1 or any of (ak, sk), (bearer token) in api v2",
     "output_type": "error",
     "traceback": [
      "\u001B[1;31m---------------------------------------------------------------------------\u001B[0m",
      "\u001B[1;31mInvalidArgumentError\u001B[0m                      Traceback (most recent call last)",
      "Cell \u001B[1;32mIn[2], line 41\u001B[0m\n\u001B[0;32m     31\u001B[0m few_shot_prompt\u001B[38;5;241m=\u001B[39mFewShotPromptTemplate(\n\u001B[0;32m     32\u001B[0m     examples\u001B[38;5;241m=\u001B[39mexamples,\n\u001B[0;32m     33\u001B[0m     example_prompt\u001B[38;5;241m=\u001B[39mexample_prompt,\n\u001B[1;32m   (...)\u001B[0m\n\u001B[0;32m     37\u001B[0m     input_variables\u001B[38;5;241m=\u001B[39m[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124minput\u001B[39m\u001B[38;5;124m\"\u001B[39m],\n\u001B[0;32m     38\u001B[0m )\n\u001B[0;32m     39\u001B[0m few_shot_prompt\u001B[38;5;241m.\u001B[39mformat_prompt(\u001B[38;5;28minput\u001B[39m\u001B[38;5;241m=\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m好\u001B[39m\u001B[38;5;124m\"\u001B[39m)\n\u001B[1;32m---> 41\u001B[0m chat \u001B[38;5;241m=\u001B[39m QianfanChatEndpoint(\n\u001B[0;32m     42\u001B[0m     streaming\u001B[38;5;241m=\u001B[39m\u001B[38;5;28;01mTrue\u001B[39;00m,\n\u001B[0;32m     43\u001B[0m     temperature\u001B[38;5;241m=\u001B[39m\u001B[38;5;241m0.2\u001B[39m,\n\u001B[0;32m     44\u001B[0m     model\u001B[38;5;241m=\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mERNIE-Speed-128K\u001B[39m\u001B[38;5;124m\"\u001B[39m,\n\u001B[0;32m     45\u001B[0m )\n\u001B[0;32m     47\u001B[0m chain \u001B[38;5;241m=\u001B[39m few_shot_prompt \u001B[38;5;241m|\u001B[39m chat\n\u001B[0;32m     48\u001B[0m chain\u001B[38;5;241m.\u001B[39minvoke(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m唐伯虎\u001B[39m\u001B[38;5;124m\"\u001B[39m)\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_core\\load\\serializable.py:113\u001B[0m, in \u001B[0;36mSerializable.__init__\u001B[1;34m(self, *args, **kwargs)\u001B[0m\n\u001B[0;32m    111\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21m__init__\u001B[39m(\u001B[38;5;28mself\u001B[39m, \u001B[38;5;241m*\u001B[39margs: Any, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs: Any) \u001B[38;5;241m-\u001B[39m\u001B[38;5;241m>\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m:\n\u001B[0;32m    112\u001B[0m \u001B[38;5;250m    \u001B[39m\u001B[38;5;124;03m\"\"\"\"\"\"\u001B[39;00m\n\u001B[1;32m--> 113\u001B[0m     \u001B[38;5;28msuper\u001B[39m()\u001B[38;5;241m.\u001B[39m\u001B[38;5;21m__init__\u001B[39m(\u001B[38;5;241m*\u001B[39margs, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\pydantic\\v1\\main.py:339\u001B[0m, in \u001B[0;36mBaseModel.__init__\u001B[1;34m(__pydantic_self__, **data)\u001B[0m\n\u001B[0;32m    333\u001B[0m \u001B[38;5;250m\u001B[39m\u001B[38;5;124;03m\"\"\"\u001B[39;00m\n\u001B[0;32m    334\u001B[0m \u001B[38;5;124;03mCreate a new model by parsing and validating input data from keyword arguments.\u001B[39;00m\n\u001B[0;32m    335\u001B[0m \n\u001B[0;32m    336\u001B[0m \u001B[38;5;124;03mRaises ValidationError if the input data cannot be parsed to form a valid model.\u001B[39;00m\n\u001B[0;32m    337\u001B[0m \u001B[38;5;124;03m\"\"\"\u001B[39;00m\n\u001B[0;32m    338\u001B[0m \u001B[38;5;66;03m# Uses something other than `self` the first arg to allow \"self\" as a settable attribute\u001B[39;00m\n\u001B[1;32m--> 339\u001B[0m values, fields_set, validation_error \u001B[38;5;241m=\u001B[39m validate_model(__pydantic_self__\u001B[38;5;241m.\u001B[39m\u001B[38;5;18m__class__\u001B[39m, data)\n\u001B[0;32m    340\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m validation_error:\n\u001B[0;32m    341\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m validation_error\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\pydantic\\v1\\main.py:1048\u001B[0m, in \u001B[0;36mvalidate_model\u001B[1;34m(model, input_data, cls)\u001B[0m\n\u001B[0;32m   1046\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m validator \u001B[38;5;129;01min\u001B[39;00m model\u001B[38;5;241m.\u001B[39m__pre_root_validators__:\n\u001B[0;32m   1047\u001B[0m     \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m-> 1048\u001B[0m         input_data \u001B[38;5;241m=\u001B[39m validator(cls_, input_data)\n\u001B[0;32m   1049\u001B[0m     \u001B[38;5;28;01mexcept\u001B[39;00m (\u001B[38;5;167;01mValueError\u001B[39;00m, \u001B[38;5;167;01mTypeError\u001B[39;00m, \u001B[38;5;167;01mAssertionError\u001B[39;00m) \u001B[38;5;28;01mas\u001B[39;00m exc:\n\u001B[0;32m   1050\u001B[0m         \u001B[38;5;28;01mreturn\u001B[39;00m {}, \u001B[38;5;28mset\u001B[39m(), ValidationError([ErrorWrapper(exc, loc\u001B[38;5;241m=\u001B[39mROOT_KEY)], cls_)\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_community\\chat_models\\baidu_qianfan_endpoint.py:422\u001B[0m, in \u001B[0;36mQianfanChatEndpoint.validate_environment\u001B[1;34m(cls, values)\u001B[0m\n\u001B[0;32m    419\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[0;32m    420\u001B[0m     \u001B[38;5;28;01mimport\u001B[39;00m \u001B[38;5;21;01mqianfan\u001B[39;00m\n\u001B[1;32m--> 422\u001B[0m     values[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mclient\u001B[39m\u001B[38;5;124m\"\u001B[39m] \u001B[38;5;241m=\u001B[39m qianfan\u001B[38;5;241m.\u001B[39mChatCompletion(\u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mparams)\n\u001B[0;32m    423\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mImportError\u001B[39;00m:\n\u001B[0;32m    424\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m \u001B[38;5;167;01mImportError\u001B[39;00m(\n\u001B[0;32m    425\u001B[0m         \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mqianfan package not found, please install it with \u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[0;32m    426\u001B[0m         \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m`pip install qianfan`\u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[0;32m    427\u001B[0m     )\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\qianfan\\resources\\llm\\base.py:166\u001B[0m, in \u001B[0;36mVersionBase.__init__\u001B[1;34m(self, version, **kwargs)\u001B[0m\n\u001B[0;32m    162\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21m__init__\u001B[39m(\n\u001B[0;32m    163\u001B[0m     \u001B[38;5;28mself\u001B[39m, version: Optional[Literal[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m1\u001B[39m\u001B[38;5;124m\"\u001B[39m, \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m2\u001B[39m\u001B[38;5;124m\"\u001B[39m, \u001B[38;5;241m1\u001B[39m, \u001B[38;5;241m2\u001B[39m]] \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs: Any\n\u001B[0;32m    164\u001B[0m ) \u001B[38;5;241m-\u001B[39m\u001B[38;5;241m>\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m:\n\u001B[0;32m    165\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_version \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mstr\u001B[39m(version) \u001B[38;5;28;01mif\u001B[39;00m version \u001B[38;5;28;01melse\u001B[39;00m \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m1\u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[1;32m--> 166\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_real \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_real_base(\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_version, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)(\u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n\u001B[0;32m    167\u001B[0m     \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_version \u001B[38;5;241m!=\u001B[39m \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m1\u001B[39m\u001B[38;5;124m\"\u001B[39m:\n\u001B[0;32m    168\u001B[0m         \u001B[38;5;28;01mtry\u001B[39;00m:\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\qianfan\\resources\\llm\\base.py:630\u001B[0m, in \u001B[0;36mBaseResourceV1.__init__\u001B[1;34m(self, model, endpoint, use_custom_endpoint, **kwargs)\u001B[0m\n\u001B[0;32m    628\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_model \u001B[38;5;241m=\u001B[39m model\n\u001B[0;32m    629\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_endpoint \u001B[38;5;241m=\u001B[39m endpoint\n\u001B[1;32m--> 630\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_client \u001B[38;5;241m=\u001B[39m create_api_requestor(\u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n\u001B[0;32m    631\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39muse_custom_endpoint \u001B[38;5;241m=\u001B[39m use_custom_endpoint\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\qianfan\\resources\\requestor\\openapi_requestor.py:797\u001B[0m, in \u001B[0;36mcreate_api_requestor\u001B[1;34m(*args, **kwargs)\u001B[0m\n\u001B[0;32m    794\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m get_config()\u001B[38;5;241m.\u001B[39mENABLE_PRIVATE:\n\u001B[0;32m    795\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m PrivateAPIRequestor(\u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n\u001B[1;32m--> 797\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m QfAPIRequestor(\u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\qianfan\\resources\\requestor\\openapi_requestor.py:67\u001B[0m, in \u001B[0;36mQfAPIRequestor.__init__\u001B[1;34m(self, **kwargs)\u001B[0m\n\u001B[0;32m     63\u001B[0m \u001B[38;5;250m\u001B[39m\u001B[38;5;124;03m\"\"\"\u001B[39;00m\n\u001B[0;32m     64\u001B[0m \u001B[38;5;124;03m`ak`, `sk` and `access_token` can be provided in kwargs.\u001B[39;00m\n\u001B[0;32m     65\u001B[0m \u001B[38;5;124;03m\"\"\"\u001B[39;00m\n\u001B[0;32m     66\u001B[0m \u001B[38;5;28msuper\u001B[39m()\u001B[38;5;241m.\u001B[39m\u001B[38;5;21m__init__\u001B[39m(\u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n\u001B[1;32m---> 67\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_auth \u001B[38;5;241m=\u001B[39m Auth(\u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n\u001B[0;32m     68\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_token_limiter \u001B[38;5;241m=\u001B[39m TokenLimiter(\u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n\u001B[0;32m     69\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_async_token_limiter \u001B[38;5;241m=\u001B[39m AsyncTokenLimiter(\u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n",
      "File \u001B[1;32mE:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\qianfan\\resources\\auth\\oauth.py:382\u001B[0m, in \u001B[0;36mAuth.__init__\u001B[1;34m(self, refresh_func, **kwargs)\u001B[0m\n\u001B[0;32m    380\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_refresh_func \u001B[38;5;241m=\u001B[39m refresh_func\n\u001B[0;32m    381\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_credential_available() \u001B[38;5;129;01mand\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m get_config()\u001B[38;5;241m.\u001B[39mNO_AUTH:\n\u001B[1;32m--> 382\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m InvalidArgumentError(\n\u001B[0;32m    383\u001B[0m         \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mno enough credential found, use any one of (access_key, secret_key),\u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[0;32m    384\u001B[0m         \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m (ak, sk), (access_token) in api v1 or\u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[0;32m    385\u001B[0m         \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m any of (ak, sk), (bearer token) in api v2\u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[0;32m    386\u001B[0m     )\n\u001B[0;32m    387\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m (\n\u001B[0;32m    388\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_access_token \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m\n\u001B[0;32m    389\u001B[0m     \u001B[38;5;129;01mand\u001B[39;00m (\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_ak \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m \u001B[38;5;129;01mor\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_sk \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m)\n\u001B[1;32m   (...)\u001B[0m\n\u001B[0;32m    394\u001B[0m     )\n\u001B[0;32m    395\u001B[0m 
):\n\u001B[0;32m    396\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_registered \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mTrue\u001B[39;00m\n",
      "\u001B[1;31mInvalidArgumentError\u001B[0m: no enough credential found, use any one of (access_key, secret_key), (ak, sk), (access_token) in api v1 or any of (ak, sk), (bearer token) in api v2"
     ]
    }
   ],
   "source": [
    "from langchain.prompts import (\n",
    "    PromptTemplate,\n",
    "    FewShotPromptTemplate,\n",
    ")\n",
    "from langchain_community.chat_models import QianfanChatEndpoint\n",
    "\n",
    "# 案例\n",
    "examples=[\n",
    "    {\"input\": \"云\", \"output\": \"雨\"},\n",
    "    {\"input\": \"雪\", \"output\": \"风\"},\n",
    "    {\"input\": \"晚照\", \"output\": \"晴空\"},\n",
    "    {\"input\": \"来鸿\", \"output\": \"去燕\"},\n",
    "    {\"input\":\"宿鸟\", \"output\": \"鸣虫\"},\n",
    "    {\"input\":\"三尺剑\", \"output\": \"六钧弓\"},\n",
    "    {\"input\":\"岭北\", \"output\": \"江东\"},\n",
    "    {\"input\":\"人间清暑殿\", \"output\": \"天上广寒宫\"}\n",
    "]\n",
    "# prompt\n",
    "example_prompt = PromptTemplate(\n",
    "    input_variables=[\"input\", \"output\"],\n",
    "    # 注意：模板字符串中不要混入多余的引号或缩进，否则它们会原样进入提示词\n",
    "    template=\"上联：{input}\\n下联：{output}\\n\",\n",
    ")\n",
    "# 测试prompt\n",
    "example_prompt.format(**examples[0])\n",
    "# 等价于\n",
    "# example_prompt.format(input=\"云\", output=\"雨\")\n",
    "# 构建样本prompt\n",
    "few_shot_prompt=FewShotPromptTemplate(\n",
    "    examples=examples,\n",
    "    example_prompt=example_prompt,\n",
    "    example_separator=\"\\n\",\n",
    "    prefix=\"来玩个对对子的游戏，我说上联，你说它的下联\\n\",\n",
    "    suffix=\"现在轮到你了，上联：{input}\\n下联：\",\n",
    "    input_variables=[\"input\"],\n",
    ")\n",
    "few_shot_prompt.format_prompt(input=\"好\")\n",
    "\n",
    "chat = QianfanChatEndpoint(\n",
    "    streaming=True,\n",
    "    temperature=0.2,\n",
    "    model=\"ERNIE-Speed-128K\",\n",
    ")\n",
    "\n",
    "chain = few_shot_prompt | chat\n",
    "chain.invoke(\"唐伯虎\")"
   ]
  },
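  {
   "cell_type": "markdown",
   "id": "fewshot-concat-sketch",
   "metadata": {},
   "source": [
    "上面的FewShotPromptTemplate本质上是把prefix、各示例（经example_prompt渲染并用example_separator连接）和suffix依次拼接成最终提示词。下面是一个不依赖LangChain的纯Python示意（假设性示例，仅演示拼接逻辑）：\n",
    "```python\n",
    "# 假设性示意：手工复现few-shot提示词的拼接方式\n",
    "examples = [{\"input\": \"云\", \"output\": \"雨\"}, {\"input\": \"雪\", \"output\": \"风\"}]\n",
    "example_block = \"\\n\".join(\n",
    "    f\"上联：{e['input']}\\n下联：{e['output']}\" for e in examples\n",
    ")\n",
    "prompt = (\"来玩个对对子的游戏，我说上联，你说它的下联\\n\"\n",
    "          + example_block\n",
    "          + \"\\n现在轮到你了，上联：好\\n下联：\")\n",
    "print(prompt)\n",
    "```\n"
   ]
  },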
  {
   "cell_type": "markdown",
   "id": "5f97de03435530fe",
   "metadata": {},
   "source": [
    "示例选择器"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "932f55f2687e4de1",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-22T03:41:49.575191Z",
     "start_time": "2024-08-22T03:41:49.565659Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'input': '云', 'output': '雨'}, {'input': '雪', 'output': '风'}, {'input': '晚照', 'output': '晴空'}, {'input': '来鸿', 'output': '去燕'}, {'input': '宿鸟', 'output': '鸣虫'}, {'input': '三尺剑', 'output': '六钧弓'}, {'input': '岭北', 'output': '江东'}, {'input': '人间清暑殿', 'output': '天上广寒宫'}]\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.vectorstores import BESVectorStore\n",
    "from langchain_community.embeddings import QianfanEmbeddingsEndpoint\n",
    "from langchain.prompts.example_selector import (\n",
    "    LengthBasedExampleSelector,\n",
    "    MaxMarginalRelevanceExampleSelector,\n",
    "    NGramOverlapExampleSelector,\n",
    "    SemanticSimilarityExampleSelector\n",
    ")\n",
    "\n",
    "# 对于LengthBasedExampleSelector，除了examples和example_prompt，还需要传入max_length参数来设置示例的最大长度：\n",
    "example_selector1 = LengthBasedExampleSelector(\n",
    "    examples=examples,\n",
    "    example_prompt=example_prompt,\n",
    "    max_length=500,\n",
    ")\n",
    "selected_examples = example_selector1.select_examples({\"input\": \"人间清暑殿\", \"output\": \"天上广寒宫\"})\n",
    "print(selected_examples)\n",
    "# print(example_selector1.examples)\n",
    "# 对于MaxMarginalRelevanceExampleSelector，除了examples，还需要传入一个用于计算语义相似度的嵌入模型（此处为QianfanEmbeddingsEndpoint），一个用于存储向量并执行相似性搜索的VectorStore类（此处为BESVectorStore，注意传入的是类而非实例，额外的连接参数会透传给该类），并设置需要选取的示例数量（k=2）\n",
    "example_selector2=MaxMarginalRelevanceExampleSelector.from_examples(\n",
    "    examples,\n",
    "    QianfanEmbeddingsEndpoint(),\n",
    "    BESVectorStore,\n",
    "    k=2,\n",
    ")\n",
    "# 对于NGramOverlapExampleSelector，除了examples和example_prompt，还要传入一个threshold参数用于设定示例选择器的停止阈值\n",
    "example_selector3 = NGramOverlapExampleSelector(\n",
    "    examples=examples,\n",
    "    example_prompt=example_prompt,\n",
    "    threshold=-1.0,\n",
    ")\n",
    "\n",
    "# 对于SemanticSimilarityExampleSelector，除了examples，还需要传入一个用于计算语义相似度的嵌入模型（此处为QianfanEmbeddingsEndpoint），一个用于存储向量并执行相似性搜索的VectorStore类（BESVectorStore、Chroma等均可），并设置需要选取的示例数量（k=1）\n",
    "example_selector4 = SemanticSimilarityExampleSelector.from_examples(\n",
    "    examples,\n",
    "    QianfanEmbeddingsEndpoint(),\n",
    "    BESVectorStore,\n",
    "    k=1\n",
    ")\n",
    "\n",
    "few_shot_prompt=FewShotPromptTemplate(\n",
    "    example_prompt=example_prompt,\n",
    "    example_separator=\"\\n\",\n",
    "    example_selector = example_selector1,\n",
    "    prefix=\"来玩个对对子的游戏，我说上联，你说它的下联\\n\",\n",
    "    suffix=\"现在轮到你了，上联：{input}\\n下联：\",\n",
    "    input_variables=[\"input\"],\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6fbdf02a57bcce73",
   "metadata": {},
   "source": [
    "### (4)、多功能提示词模板"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "58676bca267e4ac0",
   "metadata": {},
   "source": [
    "Partial提示词模板功能"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "792451e3e8d97680",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-22T06:50:55.819426Z",
     "start_time": "2024-08-22T06:50:55.815220Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "hellobaz\n",
      "hellobaz\n"
     ]
    }
   ],
   "source": [
    "from langchain.prompts import (\n",
    "    PromptTemplate,\n",
    "    FewShotPromptTemplate,\n",
    ")\n",
    "# 方法一：实例化之后通过partial设置\n",
    "# 与 PromptTemplate(template=\"{foo}{bar}\", input_variables=[\"foo\", \"bar\"]) 等价\n",
    "prompt = PromptTemplate.from_template(\"{foo}{bar}\")\n",
    "partial_prompt = prompt.partial(foo=\"hello\")\n",
    "print(partial_prompt.format(bar=\"baz\"))\n",
    "\n",
    "# 方法二：实例化时通过partial_variables设置\n",
    "partial_prompt = \\\n",
    "    PromptTemplate(template=\"{foo}{bar}\",input_variables=[\"foo\",\"bar\"], partial_variables={\"foo\":\"hello\"})\n",
    "print(partial_prompt.format(bar=\"baz\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ee2ced69994eb190",
   "metadata": {},
   "source": [
    "PipelinePrompt组合模板功能"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "6d4d0fe01bed0ce2",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-22T06:51:53.446445Z",
     "start_time": "2024-08-22T06:51:53.439434Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'\\n题目介绍\\n一些例子\\n开始提示\\n'"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.prompts import (\n",
    "    PromptTemplate,\n",
    "    PipelinePromptTemplate,\n",
    ")\n",
    "\n",
    "full_template = \"\"\"\n",
    "{introduction}\n",
    "{example}\n",
    "{start}\n",
    "\"\"\"\n",
    "full_prompt = PromptTemplate.from_template(full_template)\n",
    "\n",
    "introduction_prompt = PromptTemplate.from_template(\"题目介绍\")\n",
    "example_prompt1 = PromptTemplate.from_template(\"一些例子\")\n",
    "start_prompt = PromptTemplate.from_template(\"开始提示\")\n",
    "\n",
    "input_prompts =[\n",
    "    (\"introduction\", introduction_prompt),\n",
    "    (\"example\", example_prompt1),\n",
    "    (\"start\", start_prompt)]\n",
    "pipeline_prompt=PipelinePromptTemplate(final_prompt=full_prompt,\n",
    "                                       pipeline_prompts=input_prompts)\n",
    "\n",
    "pipeline_prompt.format()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ff0a0e7c39e4bf0f",
   "metadata": {},
   "source": [
    "序列化模板"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "id": "64f09613721d36a9",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-23T07:34:52.693963Z",
     "start_time": "2024-08-23T07:34:52.687115Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Write antonyms for the following words.\n",
      "\n",
      "Input:云\n",
      "Output:雨\n",
      "\n",
      "Input:雪\n",
      "Output:风\n",
      "\n",
      "Input:晚照\n",
      "Output:晴空\n",
      "\n",
      "Input:来鸿\n",
      "Output:去燕\n",
      "\n",
      "Input:宿鸟\n",
      "Output:鸣虫\n",
      "\n",
      "Input:三尺剑\n",
      "Output:六钧弓\n",
      "\n",
      "Input:岭北\n",
      "Output:江东\n",
      "\n",
      "Input:人间清暑殿\n",
      "Output:天上广寒宫\n",
      "\n",
      "Input:funny\n",
      "Output:\n"
     ]
    }
   ],
   "source": [
    "from langchain_core.prompts import load_prompt\n",
    "\n",
    "prompt=load_prompt(path=\"./file/few_shot_prompt.json\",\n",
    "                   encoding=\"utf-8\", )\n",
    "print(prompt.format(adjective=\"funny\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b979d414a996b049",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## 2.3 输出解析器"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bee6cfa766eb91d2",
   "metadata": {},
   "source": [
    "### (1)、CommaSeparatedListOutputParser"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "id": "485135d91cca3d34",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-22T07:42:07.946666Z",
     "start_time": "2024-08-22T07:42:07.941325Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'List five ice.\\nYour response should be a list of comma separated values, eg: `foo, bar, baz` or `foo,bar,baz`'"
      ]
     },
     "execution_count": 56,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.output_parsers import CommaSeparatedListOutputParser\n",
    "from langchain.prompts import PromptTemplate\n",
    "\n",
    "output_parser = CommaSeparatedListOutputParser()\n",
    "\n",
    "format_instructions = output_parser.get_format_instructions()\n",
    "prompt = PromptTemplate(\n",
    "    template=\"List five {subject}.\\n{format_instructions}\",\n",
    "    input_variables=[\"subject\"],\n",
    "    partial_variables={\"format_instructions\": format_instructions})\n",
    "\n",
    "prompt.format(subject = \"ice\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3306239955afe844",
   "metadata": {},
   "source": [
    "### (2)、PydanticOutputParser"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 117,
   "id": "96eb24f740bb10e6",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-22T09:27:00.529728Z",
     "start_time": "2024-08-22T09:26:54.535103Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The output should be formatted as a JSON instance that conforms to the JSON schema below.\n",
      "\n",
      "As an example, for the schema {\"properties\": {\"foo\": {\"title\": \"Foo\", \"description\": \"a list of strings\", \"type\": \"array\", \"items\": {\"type\": \"string\"}}}, \"required\": [\"foo\"]}\n",
      "the object {\"foo\": [\"bar\", \"baz\"]} is a well-formatted instance of the schema. The object {\"properties\": {\"foo\": [\"bar\", \"baz\"]}} is not well-formatted.\n",
      "\n",
      "Here is the output schema:\n",
      "```\n",
      "{\"properties\": {\"setup\": {\"description\": \"question to set up a joke\", \"title\": \"Setup\", \"type\": \"string\"}, \"punchline\": {\"description\": \"answer to resolve the joke\", \"title\": \"Punchline\", \"type\": \"string\"}}, \"required\": [\"setup\", \"punchline\"]}\n",
      "```\n",
      "```json\n",
      "{\n",
      "  \"properties\": {\n",
      "    \"setup\": {\n",
      "      \"description\": \"question to set up a joke\",\n",
      "      \"title\": \"Setup\",\n",
      "      \"type\": \"string\",\n",
      "      \"value\": \"今天有个小男孩在他爷爷面前炫耀他的新玩具车。\"\n",
      "    },\n",
      "    \"punchline\": {\n",
      "      \"description\": \"answer to resolve the joke\",\n",
      "      \"title\": \"Punchline\",\n",
      "      \"type\": \"string\",\n",
      "      \"value\": \"爷爷笑着说：“我小时候玩的玩具车都比这个高级。”哈哈！”\"\n",
      "    }\n",
      "  },\n",
      "  \"required\": [\n",
      "    \"setup\",\n",
      "    \"punchline\"\n",
      "  ]\n",
      "}\n",
      "```\n"
     ]
    }
   ],
   "source": [
    "from langchain.output_parsers import PydanticOutputParser\n",
    "from pydantic import BaseModel, Field, field_validator\n",
    "from typing import List\n",
    "from langchain.prompts import (\n",
    "    PromptTemplate,\n",
    "    PipelinePromptTemplate,\n",
    ")\n",
    "from langchain_community.chat_models import QianfanChatEndpoint\n",
    "\n",
    "chat = QianfanChatEndpoint(\n",
    "    streaming=True,\n",
    "    temperature=0.2,\n",
    "    model=\"ERNIE-Speed-128K\",\n",
    ")\n",
    "\n",
    "# 定义所需的数据结构\n",
    "class Joke(BaseModel):\n",
    "    setup: str = Field(description=\"question to set up a joke\")\n",
    "    punchline: str  = Field(description=\"answer to resolve the joke\")\n",
    "    \n",
    "    # 使用Pydantic可以轻松添加自定义验证逻辑，例如校验问题是否以问号结尾：\n",
    "    # @field_validator(\"setup\")\n",
    "    # def question_ends_with_question_mark(cls, field):\n",
    "    #     if field[-1] != \"?\":\n",
    "    #         raise ValueError(\"Badly formed question!\")\n",
    "    #     return field\n",
    "\n",
    "# 创建一个用于提示LLM生成数据结构的查询\n",
    "joke_query = \"Tell me a joke.\"\n",
    "\n",
    "# 设置一个输出解析器，并将格式说明注入提示词模板\n",
    "parser = PydanticOutputParser(pydantic_object=Joke)\n",
    "print(parser.get_format_instructions())\n",
    "prompt = PromptTemplate(\n",
    "    template=\"Answer the user query.\\n{format_instructions}\\n{query}\\n\",\n",
    "    input_variables=[\"query\"],\n",
    "    partial_variables={\"format_instructions\": parser.get_format_instructions()},\n",
    ")\n",
    "_input = prompt.format_prompt(query=joke_query)\n",
    "output = chat.invoke(_input.to_string())\n",
    "print(output.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "847f44e8722ab0a4",
   "metadata": {},
   "source": [
    "### (3)、StructuredOutputParser"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 95,
   "id": "70f4d40baf3d6f4c",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-22T08:47:23.254967Z",
     "start_time": "2024-08-22T08:47:21.841948Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'answer': 'Paris',\n",
       " 'source': 'Wikipedia or any basic geography knowledge source.'}"
      ]
     },
     "execution_count": 95,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.output_parsers import StructuredOutputParser,ResponseSchema\n",
    "from langchain.prompts import(\n",
    "    PromptTemplate,ChatPromptTemplate,HumanMessagePromptTemplate)\n",
    "from langchain_community.llms import QianfanLLMEndpoint\n",
    "from langchain_community.chat_models import QianfanChatEndpoint\n",
    "\n",
    "# 定义想要接收的响应模式\n",
    "response_schemas= [\n",
    "    ResponseSchema(\n",
    "        name=\"answer\",\n",
    "        description=\"answer to the user's question\"\n",
    "    ),\n",
    "    ResponseSchema(\n",
    "        name=\"source\",\n",
    "        description=(\n",
    "            \"source used to answer the user's question,\"\n",
    "            \"should be a website.\"\n",
    "        )\n",
    "    )\n",
    "]\n",
    "\n",
    "output_parser= \\\n",
    "    StructuredOutputParser.from_response_schemas(response_schemas)\n",
    "\n",
    "# 接着获取format_instructions，其中包含格式化响应的输出指令，然后将其插入提示词模板：\n",
    "format_instructions = output_parser.get_format_instructions()\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    template=(\n",
    "        \"answer the users question as best as possible.\\n\"\n",
    "        \"{format_instructions}\\n{question}\"\n",
    "    ),\n",
    "    input_variables=[\"question\"],\n",
    "    partial_variables={\"format_instructions\": format_instructions}\n",
    ")\n",
    "model = QianfanLLMEndpoint(\n",
    "    streaming=True,\n",
    "    temperature=0.2,\n",
    "    model=\"ERNIE-Speed-128K\",\n",
    ")\n",
    "\n",
    "_input = prompt.format_prompt(question=\"what's the capital of france?\")\n",
    "output = model.invoke(_input.to_string())\n",
    "output_parser.parse(output)"
   ]
  },
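  {
   "cell_type": "markdown",
   "id": "structured-parser-sketch",
   "metadata": {},
   "source": [
    "StructuredOutputParser.parse的核心逻辑，是从模型输出的文本中提取json代码块并反序列化为字典。下面是一个不依赖LangChain的纯Python示意（假设性示例，仅演示解析思路）：\n",
    "```python\n",
    "import json\n",
    "import re\n",
    "\n",
    "# 假设性示意：从模型返回的文本中提取json代码块并解析\n",
    "text = '回答如下：\\n```json\\n{\"answer\": \"Paris\", \"source\": \"wikipedia.org\"}\\n```'\n",
    "match = re.search(r\"```json\\s*(.*?)```\", text, re.DOTALL)\n",
    "result = json.loads(match.group(1))\n",
    "print(result[\"answer\"])  # Paris\n",
    "```\n"
   ]
  },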
  {
   "cell_type": "markdown",
   "id": "e16b80709f0a472e",
   "metadata": {},
   "source": [
    "聊天模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 98,
   "id": "4e28732249fe4e6a",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-22T09:01:06.503229Z",
     "start_time": "2024-08-22T09:01:04.704530Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "content='```json\\n{\\n\\t\"answer\": \"Paris\",\\n\\t\"source\": \"Wikipedia or any basic geography knowledge source.\"\\n}\\n```' response_metadata={'token_usage': {}, 'model_name': 'ERNIE-Speed-128K', 'finish_reason': 'stop'} id='run-131a81aa-b70d-4567-b1c4-6ef06ba8f80b-0'\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'answer': 'Paris',\n",
       " 'source': 'Wikipedia or any basic geography knowledge source.'}"
      ]
     },
     "execution_count": 98,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat = QianfanChatEndpoint(\n",
    "    streaming=True,\n",
    "    temperature=0.2,\n",
    "    model=\"ERNIE-Speed-128K\",\n",
    ")\n",
    "\n",
    "prompt = ChatPromptTemplate(\n",
    "    messages=[\n",
    "        HumanMessagePromptTemplate.from_template(\n",
    "            \"answer the users question as best as possible.\\n\"\n",
    "            \"{format_instructions}\\n{question}\"\n",
    "        )\n",
    "    ],\n",
    "    input_variables=[\"question\"],\n",
    "    partial_variables={\"format_instructions\": format_instructions}\n",
    ")\n",
    "\n",
    "_input = prompt.format_prompt(question=\"what's the capital of france?\")\n",
    "output = chat.invoke(_input.to_messages())\n",
    "print(output)\n",
    "output_parser.parse(output.content)  # 聊天模型返回AIMessage，需要先取content再解析\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8d33eaec063f3f65",
   "metadata": {},
   "source": [
    "# 3 数据增强"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6a76c38c2ccd4c46",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## 测试案例"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "3e8656c3524e5f6a",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-23T02:59:55.058096Z",
     "start_time": "2024-08-23T02:59:50.757442Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Document(metadata={'source': 'https://baike.baidu.com/item/%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B/62884793?fr=ge_ala', 'title': '大语言模型_百度百科', 'description': '大语言模型（Large Language Model，简称LLM），指使用大量文本数据训练的深度学习模型，可以生成自然语言文本或理解语言文本的含义。大语言模型可以处理多种自然语言任务，如文本分类、问答、对话等，是通向人工智能的重要途径。目前大语言模型采用与小模型类似的Transformer架构和预训练目标（如 Language Modeling），与小模型的区别是增加模型大小、训练数据和计算资源。2020年1月23日，OpenAI发表了论文《Scaling Laws for Neural Language Models》，研究基于交叉熵损失的语言模型性能的经验尺度法则；同年5月，OpenAI发布具有1750亿参数规模的大语言模型GPT-3，GPT-3的发布是一件跨时代的事情，意味着自然语言处理领域的大语言模型真正意义上出现了，从此正式开启大语言模型时代。2022年11月30日，OpenAI', 'language': 'No language found.'}, page_content='Methods》中提出并应用N-gram模型于语音识别任务。之后随着神经网络的发展，出现了神经语言模型。 [8]1997年，长短期记忆神经网络（Long Short-Term Memory，LSTM）出现，适合处理时间序列中间隔和延迟很长的事件。 [8] [21]2013年，自然语言处理模型 Word2Vec诞生，首次提出将单词转换为向量的“词向量模型”，以便计算机更好理解和处理文本数据。 [22]2014年，GAN（对抗式生成网络）诞生，被誉为21世纪最强大算法模型之一，标志深度学习进入生成模型研究的新阶段。 [22]GPT模型问世 2017年，Google发布论文《Attention is all you need》,提出Attention机制和基于此机制的Transformer架构。此架构价值在于是一种完全基于注意力机制的序列转换模型，而不依赖RNN、CNN或者LSTM。 [8]2018年，Google AI研究院的Jacob Devlin等人提出了BERT（Bidirectional Encoder Representation from Transformers），'), Document(metadata={'source': 'https://baike.baidu.com/item/%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B/62884793?fr=ge_ala', 'title': '大语言模型_百度百科', 'description': '大语言模型（Large Language Model，简称LLM），指使用大量文本数据训练的深度学习模型，可以生成自然语言文本或理解语言文本的含义。大语言模型可以处理多种自然语言任务，如文本分类、问答、对话等，是通向人工智能的重要途径。目前大语言模型采用与小模型类似的Transformer架构和预训练目标（如 Language Modeling），与小模型的区别是增加模型大小、训练数据和计算资源。2020年1月23日，OpenAI发表了论文《Scaling Laws for Neural Language Models》，研究基于交叉熵损失的语言模型性能的经验尺度法则；同年5月，OpenAI发布具有1750亿参数规模的大语言模型GPT-3，GPT-3的发布是一件跨时代的事情，意味着自然语言处理领域的大语言模型真正意义上出现了，从此正式开启大语言模型时代。2022年11月30日，OpenAI', 'language': 'No language found.'}, 
page_content='[40]人机协同提问场景：加强阅读理解能力自我提问可以促进学习专注度，加深对阅读内容的理解，但当前学生提问普遍存在水平不高、类型单一等问题。对此，可以利用T5和GPT系列的自然语言生成优势，为高质量问题创建提供支持，进而加强学生的阅读理解能力。利用GPT-3自动生成提示语（包括提问类型、答案、提问视角），通过多轮人机对话，帮助学生提出深层次问题。GPT-3更能促使小学生提出一系列与知识点相关的、深层次的问题，以加强深度阅读理解。总的来说，大语言模型可以利用其文本生成优势，通过人机协同对话形式辅助学生提问，进而提升其阅读理解能力。 [40]人机协同写作和数学解题场景：提升写作和解题水平写作与数学解题逻辑教学作为学科教学领域的两项重难点，一直存在学生写作时“不愿写”“没得写”“不会写”和数学解题答题不规范、传统教学指导效率低等问题。对此，GPT系列或类T5结构模型因其内容创作和数学推理优势，可以广泛应用于智能写作工具研究和数学解题辅助研究领域，进而有效提升学生的写作和数学解题水平。'), Document(metadata={'source': 'https://baike.baidu.com/item/%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B/62884793?fr=ge_ala', 'title': '大语言模型_百度百科', 'description': '大语言模型（Large Language Model，简称LLM），指使用大量文本数据训练的深度学习模型，可以生成自然语言文本或理解语言文本的含义。大语言模型可以处理多种自然语言任务，如文本分类、问答、对话等，是通向人工智能的重要途径。目前大语言模型采用与小模型类似的Transformer架构和预训练目标（如 Language Modeling），与小模型的区别是增加模型大小、训练数据和计算资源。2020年1月23日，OpenAI发表了论文《Scaling Laws for Neural Language Models》，研究基于交叉熵损失的语言模型性能的经验尺度法则；同年5月，OpenAI发布具有1750亿参数规模的大语言模型GPT-3，GPT-3的发布是一件跨时代的事情，意味着自然语言处理领域的大语言模型真正意义上出现了，从此正式开启大语言模型时代。2022年11月30日，OpenAI', 'language': 'No language found.'}, page_content='[37]2016年1月CodeGenSalesforce美国160代码2K开源 [27]-CPM-2清华、智源中国1980语言未知开源-J1-JumboAI21 Labs美国1780语言2K受限访问-GPT-NeoXEleutherAI未知200语言2K开源-M6-10T阿里巴巴中国100000语言、图像512未知-YaLMYandex俄罗斯1000亿语言2K开源参考资料： [23]应用场景播报编辑教育领域在线讨论与反思学习场景：赋能高阶思维能力培养在线讨论与反思学习场景中的文本数据在一定程度上反映学生在线学习过程中的认知和情感表现。具有自然语言理解优势的BERT可对学生文本数据中的认知与情感进行识别，为赋能学生高阶思维能力培养奠定基础。同时探究学生在线学习认知和情感发展规律。'), Document(metadata={'source': 'https://baike.baidu.com/item/%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B/62884793?fr=ge_ala', 'title': '大语言模型_百度百科', 'description': '大语言模型（Large Language Model，简称LLM），指使用大量文本数据训练的深度学习模型，可以生成自然语言文本或理解语言文本的含义。大语言模型可以处理多种自然语言任务，如文本分类、问答、对话等，是通向人工智能的重要途径。目前大语言模型采用与小模型类似的Transformer架构和预训练目标（如 Language Modeling），与小模型的区别是增加模型大小、训练数据和计算资源。2020年1月23日，OpenAI发表了论文《Scaling Laws for Neural Language Models》，研究基于交叉熵损失的语言模型性能的经验尺度法则；同年5月，OpenAI发布具有1750亿参数规模的大语言模型GPT-3，GPT-3的发布是一件跨时代的事情，意味着自然语言处理领域的大语言模型真正意义上出现了，从此正式开启大语言模型时代。2022年11月30日，OpenAI', 
'language': 'No language found.'}, page_content='[12]Transformer是一种用于序列到序列（Sequence-to-Sequence）任务的神经网络模型，如机器翻译、语音识别和生成对话等。它是第一个完全依赖于自注意力机制来计算其输入和输出的表示的转换模型。序列到序列模型采用的是编码器-解码器结构，编码器-解码器结构采用堆叠的多头注意力机制加全连接层。通过查询-键-值的模式使用多头注意力。由于Transformer模型中既没有递归，也没有卷积，如果需要获得输入序列精准的位置信息，必须插入位置编码。位置编码和输入嵌入有相同的维度，所以二者可以实现相加运算，位置编码方式可以有多种。 [12]Transformer结构的发展趋势：一，更好的表征方法。未来可能会出现更好的预训练方法，更好利用大规模数据集进行模型训练。大模型可以从中受益。二，更广泛的应用场景。目前，Transformer主要应用于自然语言处理领域，但未来可能会扩展到其他领域，如计算机视觉。三，更好的可视化和可解释性。'), Document(metadata={'source': 'https://baike.baidu.com/item/%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B/62884793?fr=ge_ala', 'title': '大语言模型_百度百科', 'description': '大语言模型（Large Language Model，简称LLM），指使用大量文本数据训练的深度学习模型，可以生成自然语言文本或理解语言文本的含义。大语言模型可以处理多种自然语言任务，如文本分类、问答、对话等，是通向人工智能的重要途径。目前大语言模型采用与小模型类似的Transformer架构和预训练目标（如 Language Modeling），与小模型的区别是增加模型大小、训练数据和计算资源。2020年1月23日，OpenAI发表了论文《Scaling Laws for Neural Language Models》，研究基于交叉熵损失的语言模型性能的经验尺度法则；同年5月，OpenAI发布具有1750亿参数规模的大语言模型GPT-3，GPT-3的发布是一件跨时代的事情，意味着自然语言处理领域的大语言模型真正意义上出现了，从此正式开启大语言模型时代。2022年11月30日，OpenAI', 'language': 'No language found.'}, page_content='大语言模型_百度百科 网页新闻贴吧知道网盘图片视频地图文库资讯采购百科百度首页登录注册进入词条全站搜索帮助首页秒懂百科特色百科知识专题加入百科百科团队权威合作个人中心大语言模型播报讨论上传视频使用大量文本数据训练的深度学习模型收藏查看我的收藏0有用+10大语言模型（Large Language Model，简称LLM），指使用大量文本数据训练的深度学习模型，可以生成自然语言文本或理解语言文本的含义。大语言模型可以处理多种自然语言任务，如文本分类、问答、对话等，是通向人工智能的重要途径 [1]。目前大语言模型采用与小模型类似的Transformer架构和预训练目标（如 Language Modeling），与小模型的区别是增加模型大小、训练数据和计算资源。 [7]2020年1月23日，OpenAI发表了论文《Scaling Laws for Neural Language'), Document(metadata={'source': 'https://baike.baidu.com/item/%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B/62884793?fr=ge_ala', 'title': '大语言模型_百度百科', 'description': '大语言模型（Large Language Model，简称LLM），指使用大量文本数据训练的深度学习模型，可以生成自然语言文本或理解语言文本的含义。大语言模型可以处理多种自然语言任务，如文本分类、问答、对话等，是通向人工智能的重要途径。目前大语言模型采用与小模型类似的Transformer架构和预训练目标（如 Language Modeling），与小模型的区别是增加模型大小、训练数据和计算资源。2020年1月23日，OpenAI发表了论文《Scaling Laws for Neural Language 
Models》，研究基于交叉熵损失的语言模型性能的经验尺度法则；同年5月，OpenAI发布具有1750亿参数规模的大语言模型GPT-3，GPT-3的发布是一件跨时代的事情，意味着自然语言处理领域的大语言模型真正意义上出现了，从此正式开启大语言模型时代。2022年11月30日，OpenAI', 'language': 'No language found.'}, page_content='[38]局限性播报编辑不能创造语言大模型至多是会使用语言，而远谈不上能创造语言、发明语言。大语言模型的基础仍然是深度学习技术，即利用大量的文本数据来训练模型，只不过模型的参数规模更为庞大，但与产生语言的劳动、实践根本不沾边。不能深度理解人类大语言模型目前只是人类生存实践的旁观者和应答者，缺乏共情能力，还达不到像人类理解那样的深刻性与丰富性，而深层理解更彰显人类智能的特殊性。不能全面嵌入社会以ChatGPT为代表的大语言模型仍然不能像人一样在社会中进行交往与实践，不能以人类体悟语境的方式来体悟语境，因此，谈论ChatGPT拥有媲美人类的智能，完全理解人类的语言，还为时尚早。参考资料： [39]社会影响播报编辑年度词汇2023年12月6日，大语言模型入选国家语言资源监测与研究中心发布的“2023年度中国媒体十大流行语”。 [2]2023年12月26日，大语言模型入选“2023年度十大科技名词”。'), Document(metadata={'source': 'https://baike.baidu.com/item/%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B/62884793?fr=ge_ala', 'title': '大语言模型_百度百科', 'description': '大语言模型（Large Language Model，简称LLM），指使用大量文本数据训练的深度学习模型，可以生成自然语言文本或理解语言文本的含义。大语言模型可以处理多种自然语言任务，如文本分类、问答、对话等，是通向人工智能的重要途径。目前大语言模型采用与小模型类似的Transformer架构和预训练目标（如 Language Modeling），与小模型的区别是增加模型大小、训练数据和计算资源。2020年1月23日，OpenAI发表了论文《Scaling Laws for Neural Language Models》，研究基于交叉熵损失的语言模型性能的经验尺度法则；同年5月，OpenAI发布具有1750亿参数规模的大语言模型GPT-3，GPT-3的发布是一件跨时代的事情，意味着自然语言处理领域的大语言模型真正意义上出现了，从此正式开启大语言模型时代。2022年11月30日，OpenAI', 'language': 'No language found.'}, page_content='Models》，研究基于交叉熵损失的语言模型性能的经验尺度法则；同年5月，OpenAI发布具有1750亿参数规模的大语言模型GPT-3，GPT-3的发布是一件跨时代的事情，意味着自然语言处理领域的大语言模型真正意义上出现了，从此正式开启大语言模型时代。2022年11月30日，OpenAI公司发布ChatGPT，迅速引起社会各界关注。ChatGPT属于一类基于GPT技术的大语言模型。Google、Microsoft、NVIDA等公司也给出了自己的大语言模型 [8]。2024年3月，马斯克的xAI公司正式发布大模型Grok-1，参数量达到3140亿，超OpenAI GPT-3.5的1750亿。 [9]2023年12月26日，大语言模型入选“2023年度十大科技名词” [4]。2024年4月，在瑞士举行的第27届联合国科技大会上，世界数字技术院（WDTA）发布了《生成式人工智能应用安全测试标准》和《大语言模型安全测试方法》两项国际标准，由OpenAI、蚂蚁集团、科大讯飞、谷歌、微软、英伟达、百度、腾讯等数十家单位的多名专家学者共同编制而成。 [6]中文名大语言模型外文名Large'), Document(metadata={'source': 'https://baike.baidu.com/item/%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B/62884793?fr=ge_ala', 'title': '大语言模型_百度百科', 'description': '大语言模型（Large Language 
Model，简称LLM），指使用大量文本数据训练的深度学习模型，可以生成自然语言文本或理解语言文本的含义。大语言模型可以处理多种自然语言任务，如文本分类、问答、对话等，是通向人工智能的重要途径。目前大语言模型采用与小模型类似的Transformer架构和预训练目标（如 Language Modeling），与小模型的区别是增加模型大小、训练数据和计算资源。2020年1月23日，OpenAI发表了论文《Scaling Laws for Neural Language Models》，研究基于交叉熵损失的语言模型性能的经验尺度法则；同年5月，OpenAI发布具有1750亿参数规模的大语言模型GPT-3，GPT-3的发布是一件跨时代的事情，意味着自然语言处理领域的大语言模型真正意义上出现了，从此正式开启大语言模型时代。2022年11月30日，OpenAI', 'language': 'No language found.'}, page_content='[4]科技发展大语言模型的快速进步，正在激发新业态、新模式，由此带来的工作方式、教育模式等的变革。它不仅是一项技术，更是未来国力竞争与生产力提高的重要资源。以深度学习平台和大模型为代表的AI新型基础设施，对科技创新、产业升级和高质量发展意义重大。 [44]新手上路成长任务编辑入门编辑规则本人编辑我有疑问内容质疑在线客服官方贴吧意见反馈投诉建议举报不良信息未通过词条申诉投诉侵权信息封禁查询与解封©2024\\xa0Baidu\\xa0使用百度前必读\\xa0|\\xa0百科协议\\xa0|\\xa0隐私政策\\xa0|\\xa0百度百科合作平台\\xa0|\\xa0京ICP证030173号\\xa0京公网安备11000002000001号'), Document(metadata={'source': 'https://baike.baidu.com/item/%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B/62884793?fr=ge_ala', 'title': '大语言模型_百度百科', 'description': '大语言模型（Large Language Model，简称LLM），指使用大量文本数据训练的深度学习模型，可以生成自然语言文本或理解语言文本的含义。大语言模型可以处理多种自然语言任务，如文本分类、问答、对话等，是通向人工智能的重要途径。目前大语言模型采用与小模型类似的Transformer架构和预训练目标（如 Language Modeling），与小模型的区别是增加模型大小、训练数据和计算资源。2020年1月23日，OpenAI发表了论文《Scaling Laws for Neural Language Models》，研究基于交叉熵损失的语言模型性能的经验尺度法则；同年5月，OpenAI发布具有1750亿参数规模的大语言模型GPT-3，GPT-3的发布是一件跨时代的事情，意味着自然语言处理领域的大语言模型真正意义上出现了，从此正式开启大语言模型时代。2022年11月30日，OpenAI', 'language': 'No language found.'}, page_content='通用语料：如网页、书籍和会话文本等，可以增强大语言模型的语言建模和泛化能力。2.专业语料：有研究将预训练语料库扩展到更专业的数据集，如多语言数据、科学数据和代码，赋予大语言模型特定的任务解决能力。数据收集完后需要对这些数据进行预处理，包括去噪、去冗余、去除不相关和潜在有毒的数据。基础大模型训练由于模型参数量和所使用的数据量巨大。所以普通服务器单机无法完成训练过程，因此通常采用分布式架构完成训练。指令微调通过指令微调，大模型学习到了如何响应人类指令，可以根据指令直接能够生成合理的答案。类人对齐由于模型输出的结果与人类回答差距很大，因此需要进一步优化模型，使模型的输出与人类习惯对齐。其中OpenAI开发ChatGPT的人类反馈强化学习（Reinforcement Learning from Human Feedback，RLHF）是最具代表性也是最成功的。参考资料： [8]大语言模型对比播报编辑大语言模型对比发布时间模型名称发布机构所在国家模型参数量（亿）模态最大序列长度使用方式2023年3月GPT-4OpenAI美国未知语言、图像32KAPI')]\n"
     ]
    }
   ],
   "source": [
    "# 首先，使用加载器，创建一个WebBaseLoader实例，用于从网络加载数据\n",
    "from langchain_community.document_loaders import WebBaseLoader\n",
    "loader = WebBaseLoader(\"https://baike.baidu.com/item/%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B/62884793?fr=ge_ala\")\n",
    "data = loader.load()\n",
    "\n",
    "# 随后，使用嵌入模型包装器将文本转换为向量。创建一个QianfanEmbeddingsEndpoint实例（嵌入接口无需streaming、temperature等生成参数）。\n",
    "from langchain_community.embeddings import QianfanEmbeddingsEndpoint\n",
    "embedding = QianfanEmbeddingsEndpoint(model=\"Embedding-V1\")\n",
    "\n",
    "# 接下来，使用文档转换器，将加载的文档切割为小块。\n",
    "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
    "text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
    "splits = text_splitter.split_documents(data)\n",
    "\n",
    "# 然后，进入工作流的向量存储库环节，创建一个向量存储库：FAISS实例，用于存储这些向量数据。\n",
    "from langchain_community.vectorstores import FAISS\n",
    "vectordb = FAISS.from_documents(documents=splits, embedding=embedding)\n",
    "\n",
    "# 最后，实例化一个检索器，在这些数据中进行检索。\n",
    "from langchain_community.chat_models import QianfanChatEndpoint\n",
    "from langchain.retrievers.multi_query import MultiQueryRetriever\n",
    "question=\"大语言模型是什么？\"\n",
    "llm= QianfanChatEndpoint(\n",
    "    streaming=True,\n",
    "    temperature=0.2,\n",
    "    model=\"ERNIE-Speed-128K\",\n",
    ")\n",
    "retriever_from_llm = MultiQueryRetriever.from_llm(retriever=vectordb.as_retriever(),llm=llm)\n",
    "# docs = retriever_from_llm.get_relevant_documents(question)\n",
    "docs = retriever_from_llm.invoke(question)\n",
    "print(docs)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7c8c2f406417356",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## 3.1 加载器"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6c8d939d91fdc7fe",
   "metadata": {},
   "source": [
    "跳过案例"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9207f03389192774",
   "metadata": {},
   "source": [
    "## 3.2 嵌入模型包装器"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "764a476dc82a7471",
   "metadata": {},
   "source": [
    "### (1)、嵌入模型包装器的使用"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "523c1357851c28d9",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-23T06:21:49.789081Z",
     "start_time": "2024-08-23T06:21:49.585208Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2--384\n",
      "384--0.10942552238702774\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.embeddings import QianfanEmbeddingsEndpoint\n",
    "# 嵌入接口无需streaming、temperature等生成参数，只需指定嵌入模型\n",
    "embeddings_model = QianfanEmbeddingsEndpoint(model=\"Embedding-V1\")\n",
    "\n",
    "embeddings = embeddings_model.embed_documents([\n",
    "    \"白日依山尽，黄河入海流\",\n",
    "    \"欲穷千里目，更上一层楼\"\n",
    "])\n",
    "print(f\"{len(embeddings)}--{len(embeddings[0])}\")\n",
    "\n",
    "embeddings_query = embeddings_model.embed_query(\"黄鹤楼在哪儿？\")\n",
    "print(f\"{len(embeddings_query)}--{embeddings_query[0]}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "57a052e8ad24253",
   "metadata": {},
   "source": [
    "## 3.3 文档转换器"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "id": "1540307183ed54d4",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-23T07:35:16.719037Z",
     "start_time": "2024-08-23T07:35:16.708069Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
     ]
    }
   ],
   "source": [
    "# 1. Character splitting\n",
    "with open('file/data.txt') as f:\n",
    "    state_of_the_union = f.read()\n",
    "\n",
    "from langchain.text_splitter import CharacterTextSplitter\n",
    "text_splitter = CharacterTextSplitter(\n",
    "    chunk_size=1000,\n",
    "    chunk_overlap=200,\n",
    "    separator=\"\\n\\n\",\n",
    "    is_separator_regex=False,\n",
    ")\n",
    "# create_documents expects a list of texts; passing the raw string would\n",
    "# iterate it character by character and emit one Document per character\n",
    "texts = text_splitter.create_documents([state_of_the_union])\n",
    "print(texts)"
   ]
  },
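  {
   "cell_type": "markdown",
   "id": "f0e1d2c3b4a59601",
   "metadata": {},
   "source": [
    "As an aside, what `chunk_size` and `chunk_overlap` mean can be sketched in plain Python. This is an illustration only, not LangChain code: the real `CharacterTextSplitter` first splits on `separator` and then merges the pieces back up to `chunk_size`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0e1d2c3b4a59602",
   "metadata": {},
   "outputs": [],
   "source": [
    "def split_with_overlap(text, chunk_size, chunk_overlap):\n",
    "    # Each chunk starts (chunk_size - chunk_overlap) characters after the\n",
    "    # previous one, so consecutive chunks share chunk_overlap characters\n",
    "    step = chunk_size - chunk_overlap\n",
    "    return [text[i:i + chunk_size] for i in range(0, len(text), step)]\n",
    "\n",
    "split_with_overlap(\"abcdefghij\", chunk_size=4, chunk_overlap=2)"
   ]
  },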
  {
   "cell_type": "code",
   "execution_count": 51,
   "id": "3561acadfd9c2090",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-23T07:39:29.681583Z",
     "start_time": "2024-08-23T07:39:29.675499Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Document(page_content='function helloWorld() {'),\n",
       " Document(page_content='console.log(\"Hello,World!\");\\n    }'),\n",
       " Document(page_content='// Call the function\\n    helloWorld();')]"
      ]
     },
     "execution_count": 51,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 2. Code splitting\n",
    "from langchain.text_splitter import RecursiveCharacterTextSplitter, Language\n",
    "JS_CODE = \"\"\"\n",
    "    function helloWorld() {\n",
    "        console.log(\"Hello,World!\");\n",
    "    }\n",
    "    // Call the function\n",
    "    helloWorld();\n",
    "\"\"\"\n",
    "js_splitter = RecursiveCharacterTextSplitter.from_language(\n",
    "    language=Language.JS, chunk_size=60, chunk_overlap=0)\n",
    "js_docs = js_splitter.create_documents([JS_CODE])\n",
    "js_docs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "a6da3540063b75ad",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-29T02:47:22.283087Z",
     "start_time": "2024-08-29T02:47:22.261399Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Document(metadata={'Header 1': '薯国', 'Header 2': '牛备'}, page_content='牛备是薯国的开国皇帝'),\n",
       " Document(metadata={'Header 1': '薯国', 'Header 2': '观羽'}, page_content='观羽是薯国的大将军'),\n",
       " Document(metadata={'Header 1': '薯国', 'Header 2': '张非'}, page_content='张非是薯国的二将军'),\n",
       " Document(metadata={'Header 1': '胃国', 'Header 2': '曹草'}, page_content='曹草是胃国的开国皇帝'),\n",
       " Document(metadata={'Header 1': '胃国', 'Header 2': '夏厚吨'}, page_content='夏厚吨是胃国的独眼将军'),\n",
       " Document(metadata={'Header 1': '胃国', 'Header 2': '张鸟'}, page_content='张鸟是胃国的飞将军'),\n",
       " Document(metadata={'Header 1': '乌国', 'Header 2': '孙全'}, page_content='孙全是乌国的守成皇帝'),\n",
       " Document(metadata={'Header 1': '乌国', 'Header 2': '太史尺'}, page_content='太史尺是乌国的青年将军'),\n",
       " Document(metadata={'Header 1': '乌国', 'Header 2': '黄盖'}, page_content='黄盖是乌国的老将军')]"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 3. Markdown header splitting\n",
    "from langchain.text_splitter import MarkdownHeaderTextSplitter\n",
    "\n",
    "with open('file/user.md') as f:\n",
    "    data = f.read()\n",
    "\n",
    "headers_to_split_on = [\n",
    "    (\"#\", \"Header 1\"),\n",
    "    (\"##\", \"Header 2\"),\n",
    "    (\"###\", \"Header 3\"),\n",
    "]\n",
    "\n",
    "markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)\n",
    "md_header_splits = markdown_splitter.split_text(data)\n",
    "md_header_splits"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "430ec493f585c4ce",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-23T21:46:01.981432Z",
     "start_time": "2024-08-23T21:46:01.928955Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Document(page_content='春江花月夜\\n春江潮水连海平,海上明月共潮生。\\n滟滟随波千万里,何处春江无月明！'),\n",
       " Document(page_content='江流宛转绕芳甸,月照花林皆似霰。\\n空里流霜不觉飞,汀上白沙看不见。\\n江天一色无纤尘,皎皎空中孤月轮。'),\n",
       " Document(page_content='江畔何人初见月,江月何年初照人？\\n人生代代无穷已,江月年年望相似。\\n不知江月待何人,但见长江送流水。'),\n",
       " Document(page_content='白云一片去悠悠,青枫浦上不胜愁。\\n谁家今夜扁舟子,何处相思明月楼？\\n可怜楼上月裴回,应照离人妆镜台。'),\n",
       " Document(page_content='玉户帘中卷不去,捣衣砧上拂还来。\\n此时相望不相闻,愿逐月华流照君。\\n鸿雁长飞光不度,鱼龙潜跃水成文。'),\n",
       " Document(page_content='昨夜闲潭梦落花,可怜春半不还家。\\n江水流春去欲尽,江潭落月复西斜。\\n斜月沉沉藏海雾,碣石潇湘无限路。'),\n",
       " Document(page_content='不知乘月几人归,落月摇情满江树。')]"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 4. Recursive character splitting\n",
    "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
    "\n",
    "with open('file/data.txt') as f:\n",
    "    data = f.read()\n",
    "\n",
    "splitter = RecursiveCharacterTextSplitter(chunk_size=60, chunk_overlap=0)\n",
    "docs = splitter.create_documents([data])\n",
    "docs"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "89f8a625b0b7c782",
   "metadata": {},
   "source": [
    "## 2.4 Vector Stores"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9e7bea45454433bc",
   "metadata": {},
   "source": [
    "### (1) FAISS"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "id": "1471900f5d69bc66",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-23T22:45:33.074055Z",
     "start_time": "2024-08-23T22:45:32.657514Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Document(metadata={'Header 1': '唐诗三百首', 'Header 2': '梁杰'}, page_content='梁杰是唐代著名诗人，以下是他的著作'),\n",
       " Document(metadata={'Header 1': '唐诗三百首', 'Header 2': '梁杰', 'Header 3': '登高'}, page_content='这是梁杰的诗\\n风急天高猿啸哀，渚清沙白鸟飞回。\\n无边落木萧萧下，不尽长江滚滚来。\\n万里悲秋常作客，百年多病独登台。\\n艰难苦恨繁霜鬓，潦倒新停浊酒杯。'),\n",
       " Document(metadata={'Header 1': '唐诗三百首', 'Header 2': '梁杰', 'Header 3': '望岳'}, page_content='这是梁杰的诗\\n岱宗夫如何？齐鲁青未了。\\n造化钟神秀，阴阳割昏晓。\\n荡胸生曾云，决眦入归鸟。\\n会当凌绝顶，一览众山小。'),\n",
       " Document(metadata={'Header 1': '唐诗三百首', 'Header 2': '梁杰', 'Header 3': '春望'}, page_content='这是梁杰的诗\\n国破山河在，城春草木深。\\n感时花溅泪，恨别鸟惊心。\\n烽火连三月，家书抵万金。\\n白头搔更短，浑欲不胜簪。')]"
      ]
     },
     "execution_count": 59,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Prepare the documents\n",
    "from langchain.text_splitter import MarkdownHeaderTextSplitter\n",
    "with open('file/data.md') as f:\n",
    "    data = f.read()\n",
    "headers_to_split_on = [\n",
    "    (\"#\", \"Header 1\"),\n",
    "    (\"##\", \"Header 2\"),\n",
    "    (\"###\", \"Header 3\"),\n",
    "]\n",
    "markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)\n",
    "md_header_splits = markdown_splitter.split_text(data)\n",
    "\n",
    "# Prepare the embedding model (streaming/temperature are chat options\n",
    "# and are not needed for an embeddings endpoint)\n",
    "from langchain.vectorstores import FAISS\n",
    "from langchain.embeddings import QianfanEmbeddingsEndpoint\n",
    "embeddings_model = QianfanEmbeddingsEndpoint(model=\"Embedding-V1\")\n",
    "# Build the vector index\n",
    "docsearch = FAISS.from_documents(md_header_splits, embeddings_model)\n",
    "# Similarity search\n",
    "query = \"梁杰\"\n",
    "searchData = docsearch.similarity_search(query, k=4)\n",
    "searchData"
   ]
  },
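  {
   "cell_type": "markdown",
   "id": "f0e1d2c3b4a59603",
   "metadata": {},
   "source": [
    "Under the hood, `similarity_search` embeds the query string and returns the stored documents whose vectors are most similar to it. A toy sketch of that ranking step with made-up 2-D vectors (hypothetical data, not real embeddings):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0e1d2c3b4a59604",
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "def cosine(u, v):\n",
    "    # Cosine similarity: dot product divided by the product of the norms\n",
    "    dot = sum(a * b for a, b in zip(u, v))\n",
    "    return dot / (math.hypot(*u) * math.hypot(*v))\n",
    "\n",
    "index = {\"doc_a\": [1.0, 0.0], \"doc_b\": [0.6, 0.8], \"doc_c\": [0.0, 1.0]}\n",
    "query_vec = [1.0, 0.1]\n",
    "# Most similar documents first, like similarity_search\n",
    "sorted(index, key=lambda name: cosine(index[name], query_vec), reverse=True)"
   ]
  },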
  {
   "cell_type": "markdown",
   "id": "406602137e14ba57",
   "metadata": {},
   "source": [
    "## 2.5 Retrievers"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "97e0e894281fa9a1",
   "metadata": {},
   "source": [
    "Test case"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "id": "84be844fcad6a1b4",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-23T22:46:32.851037Z",
     "start_time": "2024-08-23T22:46:30.248519Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'query': '梁杰写了哪些诗',\n",
       " 'result': '梁杰是唐代著名诗人，他的诗包括《登高》、《春望》和《望岳》。这些诗中的前两首已经给出，您可以参考给出的内容欣赏梁杰的诗作。'}"
      ]
     },
     "execution_count": 61,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Prepare the model\n",
    "from langchain.chat_models import QianfanChatEndpoint\n",
    "llm = QianfanChatEndpoint(\n",
    "    streaming=True,\n",
    "    temperature=0.2,\n",
    "    model=\"ERNIE-Speed-128K\",\n",
    ")\n",
    "# Create a retriever from the vector store\n",
    "retriever = docsearch.as_retriever()\n",
    "\n",
    "from langchain.chains.retrieval_qa.base import RetrievalQA\n",
    "qa = RetrievalQA.from_chain_type(\n",
    "    llm=llm,\n",
    "    chain_type=\"stuff\",\n",
    "    retriever=retriever,\n",
    ")\n",
    "qa.invoke(\"梁杰写了哪些诗\")"
   ]
  },
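  {
   "cell_type": "markdown",
   "id": "f0e1d2c3b4a59605",
   "metadata": {},
   "source": [
    "The `stuff` chain type used above simply concatenates every retrieved document into a single prompt before calling the model. A minimal sketch of that prompt assembly (the template wording here is hypothetical, not the actual LangChain prompt):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0e1d2c3b4a59606",
   "metadata": {},
   "outputs": [],
   "source": [
    "def stuff_prompt(docs, question):\n",
    "    # The stuff strategy: put all retrieved documents into one context block\n",
    "    context = \"\\n\\n\".join(docs)\n",
    "    return f\"Answer using the context below.\\n\\n{context}\\n\\nQuestion: {question}\"\n",
    "\n",
    "print(stuff_prompt([\"梁杰是唐代著名诗人\", \"这是梁杰的诗《登高》\"], \"梁杰写了哪些诗\"))"
   ]
  },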
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "# 4 Chains",
   "id": "d26a313c53d52b95"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 4.1 Basic Chains",
   "id": "55752f1f3163ea93"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### (1) Simple chain",
   "id": "9c0c984760fed075"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-30T12:48:33.427392Z",
     "start_time": "2024-08-30T12:48:22.453813Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# Prepare the model\n",
    "from langchain.chat_models import QianfanChatEndpoint\n",
    "llm = QianfanChatEndpoint(\n",
    "    streaming=True,\n",
    "    temperature=0.2,\n",
    "    model=\"ERNIE-Speed-128K\",\n",
    ")\n",
    "# Prompt template\n",
    "from langchain.prompts import PromptTemplate\n",
    "prompt = PromptTemplate(\n",
    "    input_variables=[\"product\"],\n",
    "    template=\"what is a good name for a company that makes {product}?\",\n",
    ")\n",
    "# LLMChain\n",
    "\"\"\"\n",
    "Deprecated!\n",
    "The class `LLMChain` was deprecated in LangChain 0.1.17 and will be removed in 1.0.\n",
    "Use RunnableSequence, e.g., `prompt | llm` instead.\n",
    "\"\"\"\n",
    "from langchain.chains import LLMChain\n",
    "chain = LLMChain(llm=llm, prompt=prompt)\n",
    "# The run and __call__ methods have been replaced by invoke\n",
    "# chain.run(\"colorful socks\")\n",
    "# chain(inputs={\"product\": \"colorful socks\"})\n",
    "chain.invoke(\"colorful socks\")"
   ],
   "id": "a577117f448f5eab",
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "E:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_core\\_api\\deprecation.py:141: LangChainDeprecationWarning: The class `LLMChain` was deprecated in LangChain 0.1.17 and will be removed in 1.0. Use RunnableSequence, e.g., `prompt | llm` instead.\n",
      "  warn_deprecated(\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'product': 'colorful socks',\n",
       " 'text': '为一家生产彩色袜子的公司起名字是一个有趣的任务。考虑到公司的核心业务和产品的特色，以下是一些建议的名字：\\n\\n1. 彩悦袜坊\\n2. 缤纷足韵\\n3. 彩袜创意坊\\n4. 彩虹袜界\\n5. 炫彩袜界\\n6. 色彩足界\\n7. 梦幻彩袜\\n8. 彩虹缤纷袜业\\n9. 彩色梦想袜业\\n10. 悦色袜行\\n\\n这些名字都体现了公司的核心业务——生产彩色袜子，同时也富有创意和吸引力，能够吸引顾客的注意力。请根据你的公司文化和市场定位选择最合适的名字。以上名称仅供参考，实际使用前建议进行市场调研和商标查验，避免侵权。'}"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 4
  },
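  {
   "cell_type": "markdown",
   "id": "f0e1d2c3b4a59607",
   "metadata": {},
   "source": [
    "As the deprecation notice says, the modern replacement is a RunnableSequence built with the pipe operator, i.e. `chain = prompt | llm` followed by `chain.invoke(...)`. How `|` composes two steps can be sketched with stand-in classes (not the LangChain implementation):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0e1d2c3b4a59608",
   "metadata": {},
   "outputs": [],
   "source": [
    "class Runnable:\n",
    "    def __init__(self, fn):\n",
    "        self.fn = fn\n",
    "\n",
    "    def invoke(self, value):\n",
    "        return self.fn(value)\n",
    "\n",
    "    def __or__(self, other):\n",
    "        # The output of the left step feeds the input of the right step\n",
    "        return Runnable(lambda value: other.invoke(self.invoke(value)))\n",
    "\n",
    "prompt_step = Runnable(lambda d: f\"what is a good name for a company that makes {d['product']}?\")\n",
    "fake_llm = Runnable(lambda p: f\"[model answer to: {p}]\")\n",
    "chain = prompt_step | fake_llm\n",
    "chain.invoke({\"product\": \"colorful socks\"})"
   ]
  },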
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### (2) Router chain: MultiPromptChain (deprecated)",
   "id": "a2c35e2ea86d1b41"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-30T12:48:38.206995Z",
     "start_time": "2024-08-30T12:48:36.786716Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# 1. Create the destination chains\n",
    "from langchain.prompts import PromptTemplate\n",
    "from langchain.chains import LLMChain\n",
    "prompt_infos = [\n",
    "    {\n",
    "        \"name\": \"xiaodu_singer\",\n",
    "        \"prompt_template\": \"你是一个智能歌曲推荐助手，能根据歌曲的描述找到相关的歌曲，并对该歌曲做简单介绍。\"\n",
    "                           \"如果你找不到合适的歌曲，就回答没有找到。\"\n",
    "                           \"歌曲的描述如下：{input}\"\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"xiaodu_poet\",\n",
    "        \"prompt_template\": \"你是一个智能古诗推荐助手，能根据古诗的描述找到相关的古诗，并对该古诗做简单介绍。\"\n",
    "                           \"如果你找不到合适的古诗，就回答没有找到。\"\n",
    "                           \"古诗的描述如下：{input}\"\n",
    "    }\n",
    "]\n",
    "destination_chains = {}\n",
    "for p_info in prompt_infos:\n",
    "    name = p_info[\"name\"]\n",
    "    prompt_template = p_info[\"prompt_template\"]\n",
    "    prompt = PromptTemplate(template=prompt_template, input_variables=[\"input\"])\n",
    "    # MultiPromptChain expects Chain instances, so a runnable such as\n",
    "    # `prompt | llm` cannot be used as a destination here\n",
    "    destination_chains[name] = LLMChain(llm=llm, prompt=prompt)\n",
    "# 2. Create the router chain\n",
    "from langchain.chains.router import LLMRouterChain, MultiPromptChain\n",
    "from langchain.chains.router.llm_router import RouterOutputParser\n",
    "from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE\n",
    "router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(\n",
    "    destinations=\"\"\"\n",
    "        xiaodu_singer:擅长回答歌曲问题\n",
    "        xiaodu_poet:擅长回答古诗问题\n",
    "    \"\"\"\n",
    ")\n",
    "router_prompt = PromptTemplate(\n",
    "    template=router_template,\n",
    "    input_variables=[\"input\"],\n",
    "    output_parser=RouterOutputParser(),\n",
    ")\n",
    "router_chain = LLMRouterChain.from_llm(llm, router_prompt)\n",
    "# Fallback chain, required when the router picks no destination\n",
    "default_chain = LLMChain(\n",
    "    llm=llm,\n",
    "    prompt=PromptTemplate(template=\"{input}\", input_variables=[\"input\"]),\n",
    ")\n",
    "# 3. The final chain: route the input to the selected prompt\n",
    "multi_prompt_chain = MultiPromptChain(\n",
    "    router_chain=router_chain,\n",
    "    destination_chains=destination_chains,\n",
    "    default_chain=default_chain,\n",
    "    verbose=True,\n",
    ")\n",
    "multi_prompt_chain.invoke(\"苍茫的天涯是我的爱\")\n"
   ],
   "id": "1e6fa20fb321bcb4",
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "E:\\Anaconda\\envs\\ccse\\Lib\\site-packages\\langchain_core\\_api\\deprecation.py:141: LangChainDeprecationWarning: Use RunnableLambda to select from multiple prompt templates. See example in API reference: https://api.python.langchain.com/en/latest/chains/langchain.chains.router.multi_prompt.MultiPromptChain.html\n",
      "  warn_deprecated(\n"
     ]
    }
   ],
   "execution_count": 5
  },
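  {
   "cell_type": "markdown",
   "id": "f0e1d2c3b4a59609",
   "metadata": {},
   "source": [
    "The deprecation warning recommends replacing MultiPromptChain with a RunnableLambda that selects one of several prompts. The routing idea itself can be sketched in plain Python, with a crude keyword test standing in for the LLM-based router:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0e1d2c3b4a5960a",
   "metadata": {},
   "outputs": [],
   "source": [
    "destinations = {\n",
    "    \"xiaodu_singer\": lambda q: f\"song assistant answering: {q}\",\n",
    "    \"xiaodu_poet\": lambda q: f\"poem assistant answering: {q}\",\n",
    "}\n",
    "\n",
    "def route(query):\n",
    "    # Pick a destination by a crude keyword test; the real router asks the LLM\n",
    "    name = \"xiaodu_singer\" if \"歌\" in query or \"唱\" in query else \"xiaodu_poet\"\n",
    "    return destinations[name](query)\n",
    "\n",
    "route(\"推荐一首关于草原的歌\")"
   ]
  },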
  {
   "cell_type": "markdown",
   "id": "de1da283ef6fe68e",
   "metadata": {},
   "source": "# 8 Integrations"
  },
  {
   "cell_type": "markdown",
   "id": "8fe90184c20732b4",
   "metadata": {},
   "source": [
    "## 8.4 Vector Store Integration"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7d5a8115e3ac3e2a",
   "metadata": {},
   "source": [
    "### (1) Milvus integration"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c6f0db1954f0ab73",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.vectorstores import Milvus\n",
    "\n",
    "# Load and split the markdown document\n",
    "from langchain.text_splitter import MarkdownHeaderTextSplitter\n",
    "with open('file/data.md') as f:\n",
    "    data = f.read()\n",
    "headers_to_split_on = [\n",
    "    (\"#\", \"Header 1\"),\n",
    "    (\"##\", \"Header 2\"),\n",
    "    (\"###\", \"Header 3\"),\n",
    "]\n",
    "markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)\n",
    "docs = markdown_splitter.split_text(data)\n",
    "\n",
    "# Embedding model (chat options such as streaming/temperature are not needed)\n",
    "from langchain.embeddings import QianfanEmbeddingsEndpoint\n",
    "embeddings_model = QianfanEmbeddingsEndpoint(model=\"Embedding-V1\")\n",
    "\n",
    "# Store the vectors in Milvus; from_documents opens the connection itself,\n",
    "# so no direct pymilvus calls are needed here\n",
    "vector_db = Milvus.from_documents(\n",
    "    docs,\n",
    "    embeddings_model,\n",
    "    connection_args={\"host\": \"127.0.0.1\", \"port\": \"19530\"},\n",
    ")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
