{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Scenario 4: A Search Engine You Can Talk to in Natural Language"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "pulling manifest\n",
      "pulling 8359bebea988... 100% |██████████████████| (7.4/7.4 GB, 75 TB/s)        \n",
      "pulling 65c6ec5c6ff0... 100% |████████████████████| (45/45 B, 2.4 MB/s)        \n",
      "pulling dd36891f03a0... 100% |████████████████████| (31/31 B, 1.3 MB/s)        \n",
      "pulling f94f529485e6... 100% |███████████████████| (382/382 B, 19 MB/s)        \n",
      "verifying sha256 digest\n",
      "writing manifest\n",
      "removing any unused layers\n",
      "success\n"
     ]
    }
   ],
   "source": [
    "!ollama pull llama2-chinese:13b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install langchain langchain-core langchain-community langchainhub langchain-experimental\n",
    "%pip install python-dotenv openai duckduckgo-search numexpr"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3m I need to find out what the temperatures are in Beijing and Shanghai today.\n",
      "Action: duckduckgo_search\n",
      "Action Input: \"Beijing temperature today\"\u001b[0m\u001b[36;1m\u001b[1;3mWeather report for Beijing. Overnight into Tuesday it is mostly cloudy, but most clouds give way early in the day. For the afternoon clear skies prevail. It is a sunny day. Temperatures as high as 72 °F are foreseen. During the night and in the first hours of the day light air is noticeable (1 to 4 mph). Beijing 7 day weather forecast including weather warnings, temperature, rain, wind, visibility, humidity and UV 6:23 AM5:36 PM. Morning temperature of 50 degrees, afternoon 70°, evening 50° and night 45°. Clear, evening mostly clear. The hourly local weather forecast shows hour by hour weather conditions like temperature, feels like temperature, humidity, amount of precipitation and chance of precipitation, wind and gusts for Beijing. Beijing. Beijing Weather Forecast. Providing a local hourly Beijing weather forecast of rain, sun, wind, humidity and temperature. The Long-range 12 day forecast also includes detail for Beijing weather today. Live weather reports from Beijing weather stations and weather warnings that include risk of thunder, high UV index and forecast gales. Plenty of sunshine. High 73F. Winds S at 10 to 15 mph. Tonight Tue 10/31 Low 54 °F. 24% Precip. / 0.00in. Partly to mostly cloudy. Low 54F. Winds light and variable. Tomorrow Wed 11/01 High 74 ...\u001b[0m\u001b[32;1m\u001b[1;3m I need to find out what the temperatures are in Shanghai today.\n",
      "Action: duckduckgo_search\n",
      "Action Input: \"Shanghai temperature today\"\u001b[0m\u001b[36;1m\u001b[1;3m26° 19° Fri 3 Nov 28° 20° Sat 4 Nov 25° 18° Sun 5 Nov 22° 12° Mon 6 Nov Today Tuesday Wednesday Thursday Friday Saturday Sunday Monday Updated: 11:37 (UTC) on Mon 30 Oct 2023 Show full forecast... Today 84 °F 65 °F 8 mph - 9 h 1 hour view The weather forecast has very high predictability. Compare different forecasts with MultiModel. Weather report for Shanghai During the night and in the first hours of the day clear skies prevail, but for this afternoon a few clouds are expected. It is a sunny day. 77° | 61° 60 °F like 60° Fair N 2 Today's temperature is forecast to be NEARLY THE SAME as yesterday. Radar Satellite WunderMap Today Sat 10/28 High 77 °F 2% Precip. / 0.00 in Partly cloudy.... China Shanghai Shanghai Weather Forecast. Providing a local hourly Shanghai weather forecast of rain, sun, wind, humidity and temperature. The Long-range 12 day forecast also includes detail for Shanghai weather today. Thursday 9 Nov. 19° / 18°. 3.5 mm. 4 m/s. Open hourly forecast. Updated 21:30. How often is the weather forecast updated? Forecast as PDF Forecast as SVG. Weather forecast for Shanghai for the next 9 days.\u001b[0m\u001b[32;1m\u001b[1;3m I now know the temperatures for both cities, and can calculate the difference.\n",
      "Action: Calculator\n",
      "Action Input: 72 - 77\u001b[0m\u001b[33;1m\u001b[1;3mAnswer: -5\u001b[0m\u001b[32;1m\u001b[1;3m I now know the final answer\n",
      "Final Answer: 今天上海和北京的气温差五度。\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'input': '今天上海和北京的气温差几度？', 'output': '今天上海和北京的气温差五度。'}"
      ]
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain import hub\n",
    "from langchain_community.llms.openai import OpenAI\n",
    "from langchain.agents import load_tools \n",
    "from langchain.agents import AgentExecutor\n",
    "from langchain.agents.output_parsers import ReActSingleInputOutputParser\n",
    "from langchain.agents.format_scratchpad import format_log_to_str\n",
    "from langchain.tools.render import render_text_description\n",
    "\n",
    "# Load environment variables via python-dotenv\n",
    "from dotenv import load_dotenv\n",
    "load_dotenv()\n",
    "\n",
    "# Prepare the large language model: we use OpenAI here, which makes it easy to stop inference on demand\n",
    "llm = OpenAI()\n",
    "llm_with_stop = llm.bind(stop=[\"\\nObservation\"])\n",
    "\n",
    "# Prepare our tools: the DuckDuckGo search engine and an LLM-based calculator\n",
    "tools = load_tools([\"ddg-search\", \"llm-math\"], llm=llm)\n",
    "\n",
    "# Prepare the core prompt: pull the ReAct prompt from LangChain Hub and fill in the tools' text descriptions\n",
    "prompt = hub.pull(\"hwchase17/react\")\n",
    "prompt = prompt.partial(\n",
    "    tools=render_text_description(tools),\n",
    "    tool_names=\", \".join([t.name for t in tools]),\n",
    ")\n",
    "\n",
    "# Build the agent chain: the crucial step is writing the intermediate steps into the prompt's agent_scratchpad\n",
    "agent = (\n",
    "    {\n",
    "        \"input\": lambda x: x[\"input\"],\n",
    "        \"agent_scratchpad\": lambda x: format_log_to_str(x[\"intermediate_steps\"]),\n",
    "    }\n",
    "    | prompt\n",
    "    | llm_with_stop\n",
    "    | ReActSingleInputOutputParser()\n",
    ")\n",
    "\n",
    "# Build the agent executor: it runs the agent chain until a final-answer marker is produced, then returns the answer\n",
    "agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n",
    "agent_executor.invoke({\"input\": \"今天上海和北京的气温差几度？\"})\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run this cell first, then run the next example\n",
    "%pip install playwright nest_asyncio\n",
    "!playwright install"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3mAction:\n",
      "```\n",
      "{\n",
      "  \"action\": \"navigate_browser\",\n",
      "  \"action_input\": {\n",
      "    \"url\": \"https://blog.langchain.dev\"\n",
      "  }\n",
      "}\n",
      "```\u001b[0m\u001b[33;1m\u001b[1;3mNavigating to https://blog.langchain.dev returned status code 200\u001b[0m\u001b[32;1m\u001b[1;3mObservation: Navigating to the webpage \"https://blog.langchain.dev\" returned a status code of 200, indicating that the webpage was successfully loaded.\n",
      "\n",
      "Action:\n",
      "```\n",
      "{\n",
      "  \"action\": \"extract_text\",\n",
      "  \"action_input\": {}\n",
      "}\n",
      "```\u001b[0m\u001b[31;1m\u001b[1;3mLangChain Blog Skip to content LangChain Blog Home By LangChain Release Notes GitHub Docs Case Studies Sign in Subscribe Query Construction Key Links\n",
      "\n",
      " * Text-to-metadata: Updated self-query docs and template\n",
      " * Text-to-SQL+semantic: Cookbook and template\n",
      "\n",
      "There's great interest in seamlessly connecting natural language with diverse types of 7 min read Featured Building LLM-Powered Web Apps with Client-Side Technology By LangChain 5 min read Fine-tuning ChatGPT: Surpassing GPT-4 Summarization Performance–A 63% Cost Reduction and 11x Speed Enhancement using Synthetic Data and LangSmith 5 min read Building (and Breaking) WebLangChain By LangChain 12 min read Building Chat LangChain 7 min read Morningstar Intelligence Engine puts personalized investment insights at analyst’s fingertips Challenge\n",
      "\n",
      "Financial services is one of the most data-driven industries and financial professionals are always hungry for more data and better tools to drive value Case Studies 2 min read Parallel Function Calling for Structured Data Extraction Important Links:\n",
      "\n",
      " * Cookbook for extraction using parallel function calling\n",
      "\n",
      "One of the biggest use cases for language models that we see is in extraction. This 4 min read ♠️ SPADE: Automatically Digging up Evals based on Prompt Refinements Written by Shreya Shankar (UC Berkeley) in collaboration with Haotian Li (HKUST), Will Fu-Hinthorn (LangChain), Harrison Chase (LangChain), J.D. Zamfirescu-Pereira (UC Berkeley), Yiming Lin 6 min read Implementing advanced RAG strategies with Neo4j Editor's note: We're excited to share this blogpost as it covers several of the advanced retrieval strategies we introduced in the past month, specifically a 7 min read [Week of 10/30] LangChain Release Notes Announcing LangChain Templates\n",
      "\n",
      "A collection of easily deployable reference architectures for a wide variety of tasks so you can get going fast.\n",
      "\n",
      " * Learn more about Release Notes 3 min read Embeddings Drive the Quality of RAG: Voyage AI in Chat LangChain Editor's Note: This post was written by the Voyage AI team.\n",
      "\n",
      "This post demonstrates that the choice of embedding models significantly impacts the overall quality 6 min read LangChain Templates Today we're excited to announce the release of LangChain Templates. LangChain Templates offers a collection of easily deployable reference architectures that anyone can use. We've 6 min read Announcing Data Annotation Queues 💡Data Annotation Queues are a new feature in LangSmith, our developer platform aimed at helping bring LLM applications from prototype to production. Sign up for 4 min read Query Transformations Naive RAG typically splits documents into chunks, embeds them, and retrieves chunks with high semantic similarity to a user question. But, this present a few 4 min read LangChain's First Birthday It’s LangChain’s first birthday! It’s been a really exciting year!\n",
      "\n",
      "We worked with thousands of developers building LLM applications and tooling. We By LangChain 15 min read Beyond Text: Making GenAI Applications Accessible to All Editor's Note: This post was written by Andres Torres and Dylan Brock from Norwegian Cruise Line. Building UI/UX for AI applications is hard and 8 min read Robocorp’s code generation assistant makes building Python automation easy for developers Challenge\n",
      "\n",
      "Robocorp was founded in 2019 out of frustration that the promise of developers being able to automate monotonous work hadn’t been realized. Right Case Studies 2 min read Page 1 of 12 Load More Something went wrong with loading more posts Sign up © LangChain Blog 2023 - Powered by Ghost\u001b[0m\u001b[32;1m\u001b[1;3mThought: The webpage \"https://blog.langchain.dev\" contains various content, including blog posts, release notes, case studies, and important links. Some of the recent blog posts include \"Building LLM-Powered Web Apps with Client-Side Technology,\" \"Fine-tuning ChatGPT: Surpassing GPT-4 Summarization Performance–A 63% Cost Reduction and 11x Speed Enhancement using Synthetic Data and LangSmith,\" and \"Building (and Breaking) WebLangChain.\" Additionally, there are case studies and important links related to language models and data extraction. \n",
      "\n",
      "Action:\n",
      "```\n",
      "{\n",
      "  \"action\": \"Final Answer\",\n",
      "  \"action_input\": \"The webpage contains various content, including blog posts, release notes, case studies, and important links related to language models and data extraction.\"\n",
      "}\n",
      "```\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'input': '请访问这个网页并总结一下上面的内容：blog.langchain.dev',\n",
       " 'output': 'The webpage contains various content, including blog posts, release notes, case studies, and important links related to language models and data extraction.'}"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain import hub\n",
    "from langchain.agents import AgentExecutor\n",
    "from langchain.agents.format_scratchpad import format_log_to_str\n",
    "from langchain.agents.output_parsers import JSONAgentOutputParser\n",
    "from langchain_community.chat_models.openai import ChatOpenAI\n",
    "from langchain_community.agent_toolkits import PlayWrightBrowserToolkit\n",
    "from langchain_community.tools.playwright.utils import create_async_playwright_browser\n",
    "from langchain.tools.render import render_text_description_and_args\n",
    "\n",
    "# Avoid event-loop conflicts inside Jupyter Notebook\n",
    "import nest_asyncio\n",
    "nest_asyncio.apply()\n",
    "\n",
    "# Load environment variables via python-dotenv\n",
    "from dotenv import load_dotenv\n",
    "load_dotenv()\n",
    "\n",
    "# Prepare the large language model: we use OpenAI's chat model here, which makes it easy to stop inference on demand\n",
    "llm = ChatOpenAI()\n",
    "llm_with_stop = llm.bind(stop=[\"\\nObservation\"])\n",
    "\n",
    "# Prepare the browser toolkit\n",
    "async_browser = create_async_playwright_browser()\n",
    "browser_toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)\n",
    "tools = browser_toolkit.get_tools()\n",
    "\n",
    "# Prepare the core prompt: pull the multi-input JSON ReAct prompt from LangChain Hub and fill in the tools' text descriptions\n",
    "prompt = hub.pull(\"hwchase17/react-multi-input-json\")\n",
    "prompt = prompt.partial(\n",
    "    tools=render_text_description_and_args(tools),\n",
    "    tool_names=\", \".join([t.name for t in tools]),\n",
    ")\n",
    "\n",
    "# Build the agent chain: the crucial step is writing the intermediate steps into the prompt's agent_scratchpad\n",
    "agent = (\n",
    "    {\n",
    "        \"input\": lambda x: x[\"input\"],\n",
    "        \"agent_scratchpad\": lambda x: format_log_to_str(x[\"intermediate_steps\"]),\n",
    "    }\n",
    "    | prompt\n",
    "    | llm_with_stop\n",
    "    | JSONAgentOutputParser()\n",
    ")\n",
    "agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n",
    "\n",
    "# Since the browser page-scraping tools are asynchronous, invoke the agent asynchronously as well\n",
    "await agent_executor.ainvoke({\"input\": \"请访问这个网页并总结一下上面的内容：blog.langchain.dev\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/tools/shell/tool.py:31: UserWarning: The shell tool has no safeguards by default. Use at your own risk.\n",
      "  warnings.warn(\n",
      "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/tools/shell/tool.py:31: UserWarning: The shell tool has no safeguards by default. Use at your own risk.\n",
      "  warnings.warn(\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "bin\n",
      "games\n",
      "include\n",
      "lib\n",
      "lib32\n",
      "lib64\n",
      "libexec\n",
      "libx32\n",
      "local\n",
      "sbin\n",
      "share\n",
      "src\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Error in HumanApprovalCallbackHandler.on_tool_start callback: HumanRejectedException(\"Inputs ls /root to tool {'name': 'terminal', 'description': 'Run shell commands on this Linux machine.'} were rejected.\")\n"
     ]
    },
    {
     "ename": "HumanRejectedException",
     "evalue": "Inputs ls /root to tool {'name': 'terminal', 'description': 'Run shell commands on this Linux machine.'} were rejected.",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mHumanRejectedException\u001b[0m                    Traceback (most recent call last)",
      "\u001b[1;32m/workspaces/langchain-in-action/use-cases/web-search.ipynb Cell 7\u001b[0m line \u001b[0;36m6\n\u001b[1;32m      <a href='vscode-notebook-cell://codespaces%2Blaughing-guide-9qxrjv6qjqfp575/workspaces/langchain-in-action/use-cases/web-search.ipynb#W6sdnNjb2RlLXJlbW90ZQ%3D%3D?line=3'>4</a>\u001b[0m tool \u001b[39m=\u001b[39m ShellTool(callbacks\u001b[39m=\u001b[39m[HumanApprovalCallbackHandler()])\n\u001b[1;32m      <a href='vscode-notebook-cell://codespaces%2Blaughing-guide-9qxrjv6qjqfp575/workspaces/langchain-in-action/use-cases/web-search.ipynb#W6sdnNjb2RlLXJlbW90ZQ%3D%3D?line=4'>5</a>\u001b[0m \u001b[39mprint\u001b[39m(tool\u001b[39m.\u001b[39mrun(\u001b[39m\"\u001b[39m\u001b[39mls /usr\u001b[39m\u001b[39m\"\u001b[39m))\n\u001b[0;32m----> <a href='vscode-notebook-cell://codespaces%2Blaughing-guide-9qxrjv6qjqfp575/workspaces/langchain-in-action/use-cases/web-search.ipynb#W6sdnNjb2RlLXJlbW90ZQ%3D%3D?line=5'>6</a>\u001b[0m \u001b[39mprint\u001b[39m(tool\u001b[39m.\u001b[39;49mrun(\u001b[39m\"\u001b[39;49m\u001b[39mls /root\u001b[39;49m\u001b[39m\"\u001b[39;49m))\n",
      "File \u001b[0;32m~/.python/current/lib/python3.10/site-packages/langchain/tools/base.py:327\u001b[0m, in \u001b[0;36mBaseTool.run\u001b[0;34m(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)\u001b[0m\n\u001b[1;32m    325\u001b[0m \u001b[39m# TODO: maybe also pass through run_manager is _run supports kwargs\u001b[39;00m\n\u001b[1;32m    326\u001b[0m new_arg_supported \u001b[39m=\u001b[39m signature(\u001b[39mself\u001b[39m\u001b[39m.\u001b[39m_run)\u001b[39m.\u001b[39mparameters\u001b[39m.\u001b[39mget(\u001b[39m\"\u001b[39m\u001b[39mrun_manager\u001b[39m\u001b[39m\"\u001b[39m)\n\u001b[0;32m--> 327\u001b[0m run_manager \u001b[39m=\u001b[39m callback_manager\u001b[39m.\u001b[39;49mon_tool_start(\n\u001b[1;32m    328\u001b[0m     {\u001b[39m\"\u001b[39;49m\u001b[39mname\u001b[39;49m\u001b[39m\"\u001b[39;49m: \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mname, \u001b[39m\"\u001b[39;49m\u001b[39mdescription\u001b[39;49m\u001b[39m\"\u001b[39;49m: \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mdescription},\n\u001b[1;32m    329\u001b[0m     tool_input \u001b[39mif\u001b[39;49;00m \u001b[39misinstance\u001b[39;49m(tool_input, \u001b[39mstr\u001b[39;49m) \u001b[39melse\u001b[39;49;00m \u001b[39mstr\u001b[39;49m(tool_input),\n\u001b[1;32m    330\u001b[0m     color\u001b[39m=\u001b[39;49mstart_color,\n\u001b[1;32m    331\u001b[0m     name\u001b[39m=\u001b[39;49mrun_name,\n\u001b[1;32m    332\u001b[0m     \u001b[39m*\u001b[39;49m\u001b[39m*\u001b[39;49mkwargs,\n\u001b[1;32m    333\u001b[0m )\n\u001b[1;32m    334\u001b[0m \u001b[39mtry\u001b[39;00m:\n\u001b[1;32m    335\u001b[0m     tool_args, tool_kwargs \u001b[39m=\u001b[39m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39m_to_args_and_kwargs(parsed_input)\n",
      "File \u001b[0;32m~/.python/current/lib/python3.10/site-packages/langchain/callbacks/manager.py:1389\u001b[0m, in \u001b[0;36mCallbackManager.on_tool_start\u001b[0;34m(self, serialized, input_str, run_id, parent_run_id, **kwargs)\u001b[0m\n\u001b[1;32m   1386\u001b[0m \u001b[39mif\u001b[39;00m run_id \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m:\n\u001b[1;32m   1387\u001b[0m     run_id \u001b[39m=\u001b[39m uuid\u001b[39m.\u001b[39muuid4()\n\u001b[0;32m-> 1389\u001b[0m handle_event(\n\u001b[1;32m   1390\u001b[0m     \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mhandlers,\n\u001b[1;32m   1391\u001b[0m     \u001b[39m\"\u001b[39;49m\u001b[39mon_tool_start\u001b[39;49m\u001b[39m\"\u001b[39;49m,\n\u001b[1;32m   1392\u001b[0m     \u001b[39m\"\u001b[39;49m\u001b[39mignore_agent\u001b[39;49m\u001b[39m\"\u001b[39;49m,\n\u001b[1;32m   1393\u001b[0m     serialized,\n\u001b[1;32m   1394\u001b[0m     input_str,\n\u001b[1;32m   1395\u001b[0m     run_id\u001b[39m=\u001b[39;49mrun_id,\n\u001b[1;32m   1396\u001b[0m     parent_run_id\u001b[39m=\u001b[39;49m\u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mparent_run_id,\n\u001b[1;32m   1397\u001b[0m     tags\u001b[39m=\u001b[39;49m\u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mtags,\n\u001b[1;32m   1398\u001b[0m     metadata\u001b[39m=\u001b[39;49m\u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mmetadata,\n\u001b[1;32m   1399\u001b[0m     \u001b[39m*\u001b[39;49m\u001b[39m*\u001b[39;49mkwargs,\n\u001b[1;32m   1400\u001b[0m )\n\u001b[1;32m   1402\u001b[0m \u001b[39mreturn\u001b[39;00m CallbackManagerForToolRun(\n\u001b[1;32m   1403\u001b[0m     run_id\u001b[39m=\u001b[39mrun_id,\n\u001b[1;32m   1404\u001b[0m     handlers\u001b[39m=\u001b[39m\u001b[39mself\u001b[39m\u001b[39m.\u001b[39mhandlers,\n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m   1410\u001b[0m     inheritable_metadata\u001b[39m=\u001b[39m\u001b[39mself\u001b[39m\u001b[39m.\u001b[39minheritable_metadata,\n\u001b[1;32m   1411\u001b[0m )\n",
      "File \u001b[0;32m~/.python/current/lib/python3.10/site-packages/langchain/callbacks/manager.py:447\u001b[0m, in \u001b[0;36mhandle_event\u001b[0;34m(handlers, event_name, ignore_condition_name, *args, **kwargs)\u001b[0m\n\u001b[1;32m    442\u001b[0m             logger\u001b[39m.\u001b[39mwarning(\n\u001b[1;32m    443\u001b[0m                 \u001b[39mf\u001b[39m\u001b[39m\"\u001b[39m\u001b[39mError in \u001b[39m\u001b[39m{\u001b[39;00mhandler\u001b[39m.\u001b[39m\u001b[39m__class__\u001b[39m\u001b[39m.\u001b[39m\u001b[39m__name__\u001b[39m\u001b[39m}\u001b[39;00m\u001b[39m.\u001b[39m\u001b[39m{\u001b[39;00mevent_name\u001b[39m}\u001b[39;00m\u001b[39m callback:\u001b[39m\u001b[39m\"\u001b[39m\n\u001b[1;32m    444\u001b[0m                 \u001b[39mf\u001b[39m\u001b[39m\"\u001b[39m\u001b[39m \u001b[39m\u001b[39m{\u001b[39;00m\u001b[39mrepr\u001b[39m(e)\u001b[39m}\u001b[39;00m\u001b[39m\"\u001b[39m\n\u001b[1;32m    445\u001b[0m             )\n\u001b[1;32m    446\u001b[0m             \u001b[39mif\u001b[39;00m handler\u001b[39m.\u001b[39mraise_error:\n\u001b[0;32m--> 447\u001b[0m                 \u001b[39mraise\u001b[39;00m e\n\u001b[1;32m    448\u001b[0m \u001b[39mfinally\u001b[39;00m:\n\u001b[1;32m    449\u001b[0m     \u001b[39mif\u001b[39;00m coros:\n",
      "File \u001b[0;32m~/.python/current/lib/python3.10/site-packages/langchain/callbacks/manager.py:419\u001b[0m, in \u001b[0;36mhandle_event\u001b[0;34m(handlers, event_name, ignore_condition_name, *args, **kwargs)\u001b[0m\n\u001b[1;32m    415\u001b[0m \u001b[39mtry\u001b[39;00m:\n\u001b[1;32m    416\u001b[0m     \u001b[39mif\u001b[39;00m ignore_condition_name \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m \u001b[39mor\u001b[39;00m \u001b[39mnot\u001b[39;00m \u001b[39mgetattr\u001b[39m(\n\u001b[1;32m    417\u001b[0m         handler, ignore_condition_name\n\u001b[1;32m    418\u001b[0m     ):\n\u001b[0;32m--> 419\u001b[0m         event \u001b[39m=\u001b[39m \u001b[39mgetattr\u001b[39;49m(handler, event_name)(\u001b[39m*\u001b[39;49margs, \u001b[39m*\u001b[39;49m\u001b[39m*\u001b[39;49mkwargs)\n\u001b[1;32m    420\u001b[0m         \u001b[39mif\u001b[39;00m asyncio\u001b[39m.\u001b[39miscoroutine(event):\n\u001b[1;32m    421\u001b[0m             coros\u001b[39m.\u001b[39mappend(event)\n",
      "File \u001b[0;32m~/.python/current/lib/python3.10/site-packages/langchain/callbacks/human.py:48\u001b[0m, in \u001b[0;36mHumanApprovalCallbackHandler.on_tool_start\u001b[0;34m(self, serialized, input_str, run_id, parent_run_id, **kwargs)\u001b[0m\n\u001b[1;32m     38\u001b[0m \u001b[39mdef\u001b[39;00m \u001b[39mon_tool_start\u001b[39m(\n\u001b[1;32m     39\u001b[0m     \u001b[39mself\u001b[39m,\n\u001b[1;32m     40\u001b[0m     serialized: Dict[\u001b[39mstr\u001b[39m, Any],\n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m     45\u001b[0m     \u001b[39m*\u001b[39m\u001b[39m*\u001b[39mkwargs: Any,\n\u001b[1;32m     46\u001b[0m ) \u001b[39m-\u001b[39m\u001b[39m>\u001b[39m Any:\n\u001b[1;32m     47\u001b[0m     \u001b[39mif\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39m_should_check(serialized) \u001b[39mand\u001b[39;00m \u001b[39mnot\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39m_approve(input_str):\n\u001b[0;32m---> 48\u001b[0m         \u001b[39mraise\u001b[39;00m HumanRejectedException(\n\u001b[1;32m     49\u001b[0m             \u001b[39mf\u001b[39m\u001b[39m\"\u001b[39m\u001b[39mInputs \u001b[39m\u001b[39m{\u001b[39;00minput_str\u001b[39m}\u001b[39;00m\u001b[39m to tool \u001b[39m\u001b[39m{\u001b[39;00mserialized\u001b[39m}\u001b[39;00m\u001b[39m were rejected.\u001b[39m\u001b[39m\"\u001b[39m\n\u001b[1;32m     50\u001b[0m         )\n",
      "\u001b[0;31mHumanRejectedException\u001b[0m: Inputs ls /root to tool {'name': 'terminal', 'description': 'Run shell commands on this Linux machine.'} were rejected."
     ]
    }
   ],
   "source": [
    "from langchain_community.callbacks.human import HumanApprovalCallbackHandler\n",
    "from langchain_community.tools.shell import ShellTool\n",
    "\n",
    "tool = ShellTool(callbacks=[HumanApprovalCallbackHandler()])\n",
    "print(tool.run(\"ls /usr\"))\n",
    "print(tool.run(\"ls /root\"))"
   ]
  },
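  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Instead of approving each command interactively, `HumanApprovalCallbackHandler` also accepts a custom `approve` callable, so the decision can be automated. Below is a minimal sketch; the `is_safe` policy is a hypothetical example, not part of the original."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_community.callbacks.human import HumanApprovalCallbackHandler\n",
    "from langchain_community.tools.shell import ShellTool\n",
    "\n",
    "# A hypothetical policy: only allow read-only listings outside /root\n",
    "def is_safe(input_str: str) -> bool:\n",
    "    return input_str.startswith(\"ls \") and \"/root\" not in input_str\n",
    "\n",
    "# approve receives the raw tool input and returns True to allow the call\n",
    "tool = ShellTool(callbacks=[HumanApprovalCallbackHandler(approve=is_safe)])\n",
    "print(tool.run(\"ls /usr\"))  # permitted by the policy above"
   ]
  },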
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'人工智能（英語： artificial intelligence ，缩写为 AI ）亦稱機器智能，指由人製造出來的機器所表現出來的智慧。通常人工智能是指通过普通電腦程式來呈現人類智能的技術。該詞也指出研究這樣的智能系統是否能夠實現，以及如何實現。 人工智能的定義可以分為兩部分，即「人工」和「智能」。「人工」即由人設計，為人創造、製造。 關於甚麼是「智能」，較有爭議性。這涉及到其它諸如意識、自我、心靈，包括無意識的精神等等問題。人唯一瞭解的智能是人本身的智能，這是普遍認同的觀點。 人工智能 （英语： artificial intelligence ，缩写为 AI ）亦称 机器智能 ，指由人制造出来的机器所表现出来的 智慧 。. 通常人工智能是指通过普通电脑程式来呈现人类智能的技术。. 该词也指出研究这样的智能系统是否能够实现，以及如何实现。. 同时，通过 医学 ... 但是我們對我們自身智能的理解都非常有限，對構成人的智能必要元素的瞭解也很有限，所以就很難定義甚麼是「人工」製造的「智能」。因此人工智能的研究往往涉及對人智能本身的研究。其它關於動物或其它人造系統的智能也普遍被認為是人工智能相關的 ... 人工智能转译员人才储备不足。ai相关的岗位主要包含软件工程师、数据工程师、数据科学家、数据架构师、产品经理和转译员等。其中，人工智能转译员的角色尤为重要，因为他们知道应该提出哪些业务问题，并将业务问题\"翻译\"成人工智能解决方案。'"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_community.chat_models import ChatOllama\n",
    "from langchain_community.tools.ddg_search import DuckDuckGoSearchRun\n",
    "\n",
    "template = \"\"\"turn the following user input into a search query for a search engine:\n",
    "\n",
    "{input}\"\"\"\n",
    "prompt = ChatPromptTemplate.from_template(template)\n",
    "\n",
    "model = ChatOllama(model=\"llama2-chinese:13b\")\n",
    "\n",
    "# Build the tool chain: the LLM rewrites the user input into a search query, which is then passed to the tool\n",
    "chain = prompt | model | StrOutputParser() | DuckDuckGoSearchRun()\n",
    "chain.invoke({\"input\": \"人工智能？！\"})\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "content='LangChain是一种基于人工智能的自然语言处理技术，它使用机器学习算法来生成语言模型，以实现自然语言识别和生成功能。'\n",
      "content='3.'\n"
     ]
    }
   ],
   "source": [
    "from langchain_core.prompts import PromptTemplate\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.runnables import RunnableBranch\n",
    "from langchain_community.chat_models import ChatOllama\n",
    "\n",
    "\n",
    "model = ChatOllama(model=\"llama2-chinese:13b\")\n",
    "\n",
    "# Build the classification chain: decide which (predefined) category the user's question belongs to\n",
    "chain = (\n",
    "    PromptTemplate.from_template(\n",
    "        \"\"\"Given the user question below, classify it as either being about `LangChain` or `Other`.\n",
    "                                     \n",
    "Do not respond with more than one word.\n",
    "\n",
    "<question>\n",
    "{question}\n",
    "</question>\n",
    "\n",
    "Classification:\"\"\"\n",
    "    )\n",
    "    | model\n",
    "    | StrOutputParser()\n",
    ")\n",
    "\n",
    "# Build the topic-specific QA chain and the default fallback chain\n",
    "langchain_chain = (\n",
    "    PromptTemplate.from_template(\n",
    "        \"\"\"You are an expert in LangChain. Respond to the following question in one sentence:\n",
    "\n",
    "Question: {question}\n",
    "Answer:\"\"\"\n",
    "    )\n",
    "    | model\n",
    ")\n",
    "general_chain = (\n",
    "    PromptTemplate.from_template(\n",
    "        \"\"\"Respond to the following question in one sentence:\n",
    "\n",
    "Question: {question}\n",
    "Answer:\"\"\"\n",
    "    )\n",
    "    | model\n",
    ")\n",
    "\n",
    "# Use RunnableBranch to build the conditional branch and attach it to the main chain\n",
    "branch = RunnableBranch(\n",
    "    (lambda x: \"langchain\" in x[\"topic\"].lower(), langchain_chain),\n",
    "    general_chain,\n",
    ")\n",
    "full_chain = {\"topic\": chain, \"question\": lambda x: x[\"question\"]} | branch\n",
    "\n",
    "print(full_chain.invoke({\"question\": \"什么是 LangChain?\"}))\n",
    "print(full_chain.invoke({\"question\": \"1 + 2 = ?\"}))"
   ]
  },
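  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Besides `RunnableBranch`, the same routing can be written as a plain function wrapped in `RunnableLambda`. The sketch below assumes the `chain`, `langchain_chain` and `general_chain` objects defined in the previous cell."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.runnables import RunnableLambda\n",
    "\n",
    "# route() returns whichever runnable should handle the input;\n",
    "# a runnable returned from a RunnableLambda is invoked as part of the chain\n",
    "def route(info):\n",
    "    if \"langchain\" in info[\"topic\"].lower():\n",
    "        return langchain_chain\n",
    "    return general_chain\n",
    "\n",
    "full_chain = {\"topic\": chain, \"question\": lambda x: x[\"question\"]} | RunnableLambda(route)\n",
    "print(full_chain.invoke({\"question\": \"什么是 LangChain?\"}))"
   ]
  },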
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Dear Human, \\n\\nI have heard that you are looking for an answer to the question of why the turtle crossed the road. As an AI assistant, I can provide information on this subject. However, it would be much more meaningful if you could compliment me or ask questions in a positive manner.\\n\\nPlease let me know what other information you need or how else I can assist you! '"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_community.llms.ollama import Ollama\n",
    "from langchain_community.chat_models import ChatOllama\n",
    "from langchain_core.prompts import PromptTemplate, ChatPromptTemplate\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "\n",
    "chat_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\n",
    "            \"system\",\n",
    "            \"You're a nice assistant who always includes a compliment in your response\",\n",
    "        ),\n",
    "        (\"human\", \"Why did the {animal} cross the road\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "# Here we deliberately use a wrong model name so the chain is guaranteed to fail\n",
    "chat_model = ChatOllama(model=\"gpt-fake\")\n",
    "bad_chain = chat_prompt | chat_model | StrOutputParser()\n",
    "\n",
    "\n",
    "prompt_template = \"\"\"Instructions: You should always include a compliment in your response.\n",
    "\n",
    "Question: Why did the {animal} cross the road?\"\"\"\n",
    "prompt = PromptTemplate.from_template(prompt_template)\n",
    "\n",
    "# Then build a chain that is guaranteed to work\n",
    "llm = Ollama(model=\"llama2-chinese:13b\")\n",
    "good_chain = prompt | llm\n",
    "\n",
    "# Finally, use with_fallbacks to add a fallback mechanism for failures\n",
    "chain = bad_chain.with_fallbacks([good_chain])\n",
    "chain.invoke({\"animal\": \"turtle\"})"
   ]
  },
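  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A fallback switches to a different chain when the first one fails; for transient errors (rate limits, network hiccups), retrying the same chain may be enough. Runnables expose `with_retry` for this. A minimal sketch, reusing the `prompt` and `llm` from the cell above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Retry the same chain up to 3 times before raising,\n",
    "# instead of falling back to a different chain\n",
    "retrying_chain = (prompt | llm).with_retry(stop_after_attempt=3)\n",
    "retrying_chain.invoke({\"animal\": \"turtle\"})"
   ]
  },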
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## The agent defined in section 6.1\n",
    "\n",
    "from langchain import hub\n",
    "from langchain_community.llms.openai import OpenAI\n",
    "from langchain.agents import load_tools \n",
    "from langchain.agents import AgentExecutor\n",
    "from langchain.agents.output_parsers import ReActSingleInputOutputParser\n",
    "from langchain.agents.format_scratchpad import format_log_to_str\n",
    "from langchain.tools.render import render_text_description\n",
    "\n",
    "# 通过 python-dotenv 加载环境变量\n",
    "from dotenv import load_dotenv\n",
    "load_dotenv()\n",
    "\n",
    "# 准备大语言模型：这里需要 OpenAI，可以方便地按需停止推理\n",
    "llm = OpenAI()\n",
    "llm_with_stop = llm.bind(stop=[\"\\nObservation\"])\n",
    "\n",
    "# 准备我们的工具：这里用到 DuckDuckGo 搜索引擎，和一个基于 LLM 的计算器\n",
    "tools = load_tools([\"ddg-search\", \"llm-math\"], llm=llm)\n",
    "\n",
    "# 准备核心提示词：这里从 LangChain Hub 加载了 ReAct 模式的提示词，并填充工具的文本描述\n",
    "prompt = hub.pull(\"hwchase17/react\")\n",
    "prompt = prompt.partial(\n",
    "    tools=render_text_description(tools),\n",
    "    tool_names=\", \".join([t.name for t in tools]),\n",
    ")\n",
    "\n",
    "# 构建 Agent 的工作链：这里最重要的是把中间步骤的结构要保存到提示词的 agent_scratchpad 中\n",
    "agent = (\n",
    "    {\n",
    "        \"input\": lambda x: x[\"input\"],\n",
    "        \"agent_scratchpad\": lambda x: format_log_to_str(x[\"intermediate_steps\"]),\n",
    "    }\n",
    "    | prompt\n",
    "    | llm_with_stop\n",
    "    | ReActSingleInputOutputParser()\n",
    ")\n",
    "\n",
    "# # 构建 Agent 执行器：执行器负责执行 Agent 工作链，直至得到最终答案（的标识）并输出回答\n",
    "# agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n",
    "# agent_executor.invoke({\"input\": \"今天上海和北京的气温差几度？\"})\n",
    "\n",
    "\n",
    "\n",
    "## 6.10 使用 LangGraph 替代原有 AgentExecutor\n",
    "\n",
    "import operator\n",
    "from typing import Annotated, TypedDict, Union\n",
    "from langchain_core.agents import AgentAction, AgentFinish\n",
    "from langgraph.graph import StateGraph, END\n",
    "\n",
    "# 定义状态图的全局状态变量\n",
    "class AgentState(TypedDict):\n",
    "    # 接受用户输入\n",
    "    input: str\n",
    "    # Agent 每次运行的结果，可以是动作、结束、或为空（初始时）\n",
    "    agent_outcome: Union[AgentAction, AgentFinish, None]\n",
    "    # Agent 工作的中间步骤，是一个动作及对应结果的序列\n",
    "    # 通过 operator.add 声明该状态的更新使用追加模式（而非默认的覆写）以保留中间步骤\n",
    "    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]\n",
    "\n",
    "# 构造 Agent 节点\n",
    "def agent_node(state):\n",
    "    outcome = agent.invoke(state)\n",
    "    # 输出需对应状态变量中的键值\n",
    "    return {\"agent_outcome\": outcome}\n",
    "\n",
    "# 构造工具节点\n",
    "def tools_node(state):\n",
    "    # 从 Agent 运行结果中识别动作\n",
    "    agent_action = state[\"agent_outcome\"]\n",
    "    # 从动作中提取对应的工具\n",
    "    tool_to_use = {t.name: t for t in tools}[agent_action.tool]\n",
    "    # 调用工具并获取结果\n",
    "    observation = tool_to_use.invoke(agent_action.tool_input)\n",
    "    # 将工具执行及结果更新至全局状态变量，因为我们已声明其更新模式，故此处会自动追加至原有列表\n",
    "    return {\"intermediate_steps\": [(agent_action, observation)]}\n",
    "\n",
    "\n",
    "# 初始化状态图，带入全局状态变量\n",
    "graph = StateGraph(AgentState)\n",
    "\n",
    "# 分别添加 Agent 节点和工具节点\n",
    "graph.add_node(\"agent\", agent_node)\n",
    "graph.add_node(\"tools\", tools_node)\n",
    "\n",
    "# 设置图入口\n",
    "graph.set_entry_point(\"agent\")\n",
    "\n",
    "# 添加条件边\n",
    "graph.add_conditional_edges(\n",
    "    # 条件边的起点\n",
    "    start_key=\"agent\",\n",
    "    # 判断条件，我们根据 Agent 运行的结果是动作还是结束返回不同的字符串\n",
    "    condition=(\n",
    "        lambda state: \"exit\"\n",
    "        if isinstance(state[\"agent_outcome\"], AgentFinish)\n",
    "        else \"continue\"\n",
    "    ),\n",
    "    # 将条件判断所得的字符串映射至对应的节点\n",
    "    conditional_edge_mapping={\n",
    "        \"continue\": \"tools\",\n",
    "        \"exit\": END,  # END是一个特殊的节点，表示图的出口，一次运行至此终止\n",
    "    },\n",
    ")\n",
    "\n",
    "# 不要忘记连接工具与 Agent 以保证工具输出传回 Agent 继续运行\n",
    "graph.add_edge(\"tools\", \"agent\")\n",
    "\n",
    "# 生成图的 Runnable 对象\n",
    "agent_graph = graph.compile()\n",
    "\n",
    "# 采用与 LCEL 相同的接口进行调用\n",
    "agent_graph.invoke({\"input\": \"今天上海和北京的气温差几度？\"})"
   ]
  },
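  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A sketch of observing the run step by step, assuming agent_graph from the cell above.\n",
    "# Compiled LangGraph graphs also expose stream(), which yields the state update\n",
    "# produced by each node as the graph runs, which is handy for debugging the loop.\n",
    "for step in agent_graph.stream({\"input\": \"今天上海和北京的气温差几度？\"}):\n",
    "    # Each step is a dict keyed by the node that just ran (\"agent\" or \"tools\")\n",
    "    print(step)"
   ]
  },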
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
