{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "# 获取大模型",
   "id": "1012561627880089"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-26T04:49:01.238861Z",
     "start_time": "2025-10-26T04:48:59.655553Z"
    }
   },
   "cell_type": "code",
   "source": [
    "import dotenv\n",
    "import os\n",
    "\n",
    "from langchain_openai import ChatOpenAI, OpenAI\n",
    "\n",
    "dotenv.load_dotenv()\n",
    "\n",
    "os.environ[\"OPENAI_API_KEY\"] = os.getenv(\"OPENAI_API_KEY\")\n",
    "os.environ[\"OPENAI_API_BASE\"] = os.getenv(\"OPENAI_BASE_URL\")\n",
    "CHAT_MODEL = ChatOpenAI(\n",
    "    model=\"gpt-4o-mini\"\n",
    ")\n",
    "\n",
    "llm = OpenAI(\n",
    "    model=\"gpt-4o-mini\",\n",
    ")"
   ],
   "id": "779262f0f03e949e",
   "outputs": [],
   "execution_count": 1
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "# 关于模型调用的方法",
   "id": "4b53a40a09a5b602"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## 流式输出\n",
    "LangChain中设置 `stream=True` 并配合回调机制来实现流式输出"
   ],
   "id": "1a6f34d93ba6b87d"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-19T06:59:21.829886Z",
     "start_time": "2025-10-19T06:59:19.568449Z"
    }
   },
   "cell_type": "code",
   "source": [
    "import dotenv\n",
    "import os\n",
    "from langchain_core.messages import HumanMessage\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "dotenv.load_dotenv()\n",
    "\n",
    "os.environ[\"OPENAI_API_KEY\"] = os.getenv(\"OPENAI_API_KEY\")\n",
    "os.environ[\"OPENAI_API_BASE\"] = os.getenv(\"OPENAI_BASE_URL\")\n",
    "CHAT_MODEL = ChatOpenAI(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    streaming=True,  # 启用流式输出\n",
    ")\n",
    "messages = [HumanMessage(content=\"你好，请介绍一下自己\")]\n",
    "\n",
    "print(\"开始流式输出：\")\n",
    "for chunk in CHAT_MODEL.stream(messages):\n",
    "    print(chunk.content, end=\"\", flush=True)\n",
    "\n",
    "print(\"\\n流式输出结束\")"
   ],
   "id": "ddbae7985b72a0f1",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "开始流式输出：\n",
      "你好！我是一个人工智能助手，旨在帮助你回答问题、提供信息和解决各种问题。我可以处理多种主题，包括科学、技术、文化、历史等。如果你有任何问题或需要帮助的地方，请随时告诉我！\n",
      "流式输出结束\n"
     ]
    }
   ],
   "execution_count": 2
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 批量调用",
   "id": "766fcf105a47500c"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-19T08:41:23.894012Z",
     "start_time": "2025-10-19T08:41:18.795110Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_core.messages import SystemMessage, HumanMessage\n",
    "\n",
    "message1 = [SystemMessage(content=\"你是一位乐于助人的智能小助手\"), HumanMessage(content=\"请帮我介绍一下什么是机器学习\")]\n",
    "message2 = [SystemMessage(content=\"你是一位乐于助人的智能小助手\"), HumanMessage(content=\"请我帮我介绍一下什么是AIGC\")]\n",
    "message3 = [SystemMessage(content=\"你是一位乐于助人的智能小助手\"),\n",
    "            HumanMessage(content=\"请你帮我介绍一下什么事大模型技术\")]\n",
    "\n",
    "messages = [message1, message2, message3]\n",
    "\n",
    "# 调用batch\n",
    "response = CHAT_MODEL.batch(messages)\n",
    "\n",
    "print(response)"
   ],
   "id": "4d41737616323b14",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[AIMessage(content='机器学习（Machine Learning，简称ML）是一种人工智能领域的重要分支，旨在通过数据训练计算机算法，使其能够自动进行模式识别和预测决策，而无需明确的编程指令。\\n\\n### 基本概念：\\n1. **数据**：机器学习依赖于数据，这是算法学习的基础。数据可以是结构化的（如表格）或非结构化的（如图像、文本）。\\n\\n2. **模型**：机器学习算法通过建立模型来学习数据中的模式，模型是通过特定的规则或数学函数来表示数据与输出之间的关系。\\n\\n3. **训练**：训练是机器学习的过程，通过使用一组已标记的数据（称为训练集），算法调整模型的参数，以最小化预测结果和实际结果之间的误差。\\n\\n4. **测试与验证**：经过训练后，模型会在新的、未见过的数据上进行测试，以评估其性能和泛化能力。\\n\\n### 分类：\\n机器学习通常可以分为三个主要类别：\\n\\n1. **监督学习**：在这个过程中，模型在已知输入和对应输出（标签）的数据上进行训练，常用于分类和回归问题。例如，预测房价、识别电子邮件是否是垃圾邮件。\\n\\n2. **无监督学习**：模型在没有标签的数据上进行训练，旨在发现数据中的潜在模式或结构。常见的方法包括聚类和降维。例如，客户细分，或数据可视化。\\n\\n3. **强化学习**：在这个过程中，智能体通过与环境的交互来学习，通过试错来获得最大奖励。这种方法常用于游戏、机器人控制和自动驾驶等领域。\\n\\n### 应用领域：\\n机器学习在许多领域中得到了广泛应用，例如：\\n- 自然语言处理（如语音识别、翻译）\\n- 图像识别（如面部识别、物体检测）\\n- 医疗诊断（如疾病预测）\\n- 金融（如信用风险评估、算法交易）\\n\\n总的来说，机器学习通过为系统赋予学习和自我改进的能力，使得计算机能够解决越来越复杂的问题。', additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_efad92c60b'}, id='run--9c69f540-7bc4-4a71-8ebe-d2cbb54b294d-0', usage_metadata={'input_tokens': 30, 'output_tokens': 467, 'total_tokens': 497, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}), AIMessage(content='AIGC（人工智能生成内容，AI-generated content）是指利用人工智能技术自动生成各种形式的内容，包括文本、图像、音频和视频等。这项技术正在快速发展，并在多个领域中得到了广泛应用，包括新闻报道、社交媒体、创作艺术、游戏设计及市场营销等。\\n\\nAIGC 的核心技术包括自然语言处理（NLP）、机器学习、深度学习和生成对抗网络（GANs）等。通过这些技术，计算机能够分析大量数据，学习语言模式和风格，从而创造出与人类创作相似的内容。\\n\\n以下是 AIGC 的一些应用示例：\\n\\n1. **文本生成**：自动生成文章、故事、广告文案等，常用于新闻媒体和内容创作。\\n2. **图像生成**：通过算法生成艺术作品或设计图形，应用于游戏、动画和广告行业。\\n3. **音频生成**：生成音乐、播客或语音朗读，广泛用于娱乐和教育领域。\\n4. 
**视频生成**：自动制作短视频，常用于社交媒体内容的创作。\\n\\nAIGC 的优势在于提高创作效率、降低成本，并能根据用户需求快速调整内容。然而，它也带来了一些挑战，如版权问题、内容质量控制以及潜在的伦理和法律问题。随着技术的进步，AIGC 的未来将是一个充满机会和挑战的领域。', additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_efad92c60b'}, id='run--b646e5f7-9a13-4478-90c3-7f1cb8e7330f-0', usage_metadata={'input_tokens': 32, 'output_tokens': 316, 'total_tokens': 348, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}), AIMessage(content='大模型技术是指基于大规模数据和深度学习算法训练出的复杂模型，通常用于处理自然语言处理、图像识别、音频理解等任务。这些模型的参数数量通常达到数亿甚至数百亿，因而被称为“大模型”。以下是一些关键点：\\n\\n1. **深度学习**：大模型通常使用深度学习技术，特别是神经网络架构，如Transformer。这种架构能够处理复杂的模式识别问题。\\n\\n2. **大规模数据**：训练大模型需要大量的标注数据。随着计算能力的发展，能够使用更大规模的数据集，从而提升模型的性能和泛化能力。\\n\\n3. **预训练与微调**：大模型通常采用预训练的策略，通过在大量无标签数据上进行训练，获取丰富的语言或视觉表示，然后通过微调在特定任务上进行优化。\\n\\n4. **应用广泛**：大模型技术已在多个领域得到应用，如自然语言处理（例如ChatGPT）、计算机视觉（如图像分类和生成）、语音识别等。\\n\\n5. **计算资源需求高**：训练和推理大模型需要大量的计算资源，通常需要使用专用的硬件如GPU或TPU。\\n\\n6. **伦理和隐私**：随着大模型的广泛应用，出现了伦理和隐私等问题。例如，如何确保模型不产生偏见或误导信息。\\n\\n大模型技术的发展正在推动人工智能的前沿，为各种应用场景提供了强大的支持。', additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_efad92c60b'}, id='run--3dcb5b21-977c-4e7a-8f8d-8d6623c03aee-0', usage_metadata={'input_tokens': 32, 'output_tokens': 333, 'total_tokens': 365, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]\n"
     ]
    }
   ],
   "execution_count": 4
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 异步调用",
   "id": "cad471c1e62c4a43"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-19T12:57:59.079778Z",
     "start_time": "2025-10-19T12:57:54.057026Z"
    }
   },
   "cell_type": "code",
   "source": [
    "import asyncio\n",
    "import time\n",
    "\n",
    "\n",
    "async def async_call(llm):\n",
    "    await asyncio.sleep(5)\n",
    "    print(\"异步调用完成\")\n",
    "\n",
    "\n",
    "async def perform_other_tasks():\n",
    "    await asyncio.sleep(5)\n",
    "    print(\"其他任务完成\")\n",
    "\n",
    "\n",
    "async def run_async_tasks():\n",
    "    start_time = time.time()\n",
    "    await asyncio.gather(  # 待所有任务完成\n",
    "        async_call(None),  # 示例调用，使用None模拟LLM对象\n",
    "        perform_other_tasks()\n",
    "    )\n",
    "    end_time = time.time()\n",
    "    return f\"总共耗时：{end_time - start_time}\"\n",
    "\n",
    "\n",
    "result = await run_async_tasks()  # await 不会阻塞线程，但会“阻塞”当前协程，直到等的东西搞定，再接着往下跑\n",
    "print(result)"
   ],
   "id": "49ffd499e73f1315",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "异步调用完成\n",
      "其他任务完成\n",
      "总共耗时：5.003880977630615\n"
     ]
    }
   ],
   "execution_count": 9
  },
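  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The ~5s total above shows the two coroutines ran concurrently rather than sequentially. The same effect can be measured with the standard library alone — a minimal sketch (no LLM involved; the sleeps stand in for I/O-bound calls):\n",
    "\n",
    "```python\n",
    "import asyncio\n",
    "import time\n",
    "\n",
    "async def fake_call(seconds: float) -> float:\n",
    "    # stand-in for an I/O-bound LLM request\n",
    "    await asyncio.sleep(seconds)\n",
    "    return seconds\n",
    "\n",
    "async def sequential() -> float:\n",
    "    start = time.perf_counter()\n",
    "    await fake_call(0.2)  # runs first ...\n",
    "    await fake_call(0.2)  # ... then second: ~0.4s total\n",
    "    return time.perf_counter() - start\n",
    "\n",
    "async def concurrent() -> float:\n",
    "    start = time.perf_counter()\n",
    "    # gather schedules both coroutines at once: ~0.2s total\n",
    "    await asyncio.gather(fake_call(0.2), fake_call(0.2))\n",
    "    return time.perf_counter() - start\n",
    "\n",
    "seq = asyncio.run(sequential())\n",
    "con = asyncio.run(concurrent())\n",
    "print(f'sequential: {seq:.2f}s, concurrent: {con:.2f}s')\n",
    "```\n",
    "\n",
    "Inside Jupyter an event loop is already running, so use `seq = await sequential()` instead of `asyncio.run(...)`."
   ],
   "id": "3f8a1c2b9d4e5f60"
  },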
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 异步调用之ainvoke",
   "id": "177b4e2df6377238"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-19T15:35:56.237509Z",
     "start_time": "2025-10-19T15:35:56.226765Z"
    }
   },
   "cell_type": "code",
   "source": [
    "import inspect\n",
    "\n",
    "print(\"ainvoke 是协程函数：\", inspect.iscoroutinefunction(CHAT_MODEL.ainvoke))\n",
    "print(\"invoke是协程函数：\", inspect.iscoroutinefunction(CHAT_MODEL.invoke))"
   ],
   "id": "1747339558ee10fd",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ainvoke 是协程函数： True\n",
      "invoke是协程函数： False\n"
     ]
    }
   ],
   "execution_count": 13
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "# 提示词模板PromptTemplate",
   "id": "f456ca9ed80478b4"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 构造方法实例化",
   "id": "f0f3700fab0be5c7"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T14:01:34.069079Z",
     "start_time": "2025-10-20T14:01:33.692966Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_core.prompts.prompt import PromptTemplate\n",
    "\n",
    "template = PromptTemplate(\n",
    "    template=\"请评价{product}的优缺点，包括{aspect1}和{aspect2}。\",\n",
    "    input_variables=[\"product\", \"aspect1\", \"aspect2\"],\n",
    ")\n",
    "prompt1 = template.format(product=\"智能手机\", aspect1=\"电池续航\", aspect2=\"拍照质量\")\n",
    "prompt2 = template.format(product=\"笔记本电脑\", aspect1=\"处理速度\", aspect2=\"便携性\")\n",
    "print(\"提示词1:\", prompt1)\n",
    "print(\"提示词2:\", prompt2)"
   ],
   "id": "ea67774c7e92b6cb",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "提示词1: 请评价智能手机的优缺点，包括电池续航和拍照质量。\n",
      "提示词2: 请评价笔记本电脑的优缺点，包括处理速度和便携性。\n"
     ]
    }
   ],
   "execution_count": 2
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 调用 from_template方法实例化",
   "id": "2167c7b1b4141456"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T14:29:06.702279Z",
     "start_time": "2025-10-20T14:29:06.647923Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "\n",
    "prompt_template = PromptTemplate.from_template(\n",
    "    \"请给我一个关于{topic}的{type}解释。\"\n",
    ")\n",
    "# 传入模板中的变量名\n",
    "prompt = prompt_template.format(topic=\"详细\", type=\"量子力学\")\n",
    "print(prompt)"
   ],
   "id": "e596538f51c95874",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "请给我一个关于详细的量子力学解释。\n"
     ]
    }
   ],
   "execution_count": 4
  },
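  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "`from_template` infers `input_variables` by parsing the braces in the string, so they don't need to be listed explicitly as in the constructor form above. A quick check (assuming `langchain_core` is installed):\n",
    "\n",
    "```python\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "\n",
    "# the variables are discovered from the {bracketed} names\n",
    "template = PromptTemplate.from_template('Summarize {text} in {n} bullet points.')\n",
    "print(sorted(template.input_variables))\n",
    "```"
   ],
   "id": "b6e1d3a8c2f47095"
  },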
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## 部分提示词模板\n",
    "## 实例化过程中使用 `partial_variables` 变量"
   ],
   "id": "4ddb9168c82607e2"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T16:11:44.615177Z",
     "start_time": "2025-10-20T16:11:44.606912Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "\n",
    "template = PromptTemplate.from_template(\n",
    "    template=\"{foo}{bar}\",\n",
    "    partial_variables={\"foo\": \"hello\"}\n",
    ")\n",
    "\n",
    "prompt = template.format(bar=\"world\")\n",
    "\n",
    "print(prompt)"
   ],
   "id": "bd9e1c86ee22480",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "helloworld\n"
     ]
    }
   ],
   "execution_count": 12
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 方式2：使用 PromptTemplate.partial() 方法创建部分提示模板",
   "id": "90a74af5bcda4ffd"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T16:12:25.556783Z",
     "start_time": "2025-10-20T16:12:25.539094Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "\n",
    "template = PromptTemplate.from_template(\n",
    "    template=\"{foo}{bar}\",\n",
    ")\n",
    "\n",
    "prompt = template.partial(foo=\"hello\")\n",
    "prompt.format(bar=\"world\")\n",
    "\n",
    "print(prompt)"
   ],
   "id": "ad738f4e796a7875",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "input_variables=['bar'] input_types={} partial_variables={'foo': 'hello'} template='{foo}{bar}'\n"
     ]
    }
   ],
   "execution_count": 14
  },
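  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "Besides fixed strings, a partial variable can also be a zero-argument function, which is called each time the template is formatted — handy for values like the current date. A small sketch, assuming `langchain_core` is available:\n",
    "\n",
    "```python\n",
    "from datetime import date\n",
    "\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "\n",
    "def today() -> str:\n",
    "    # evaluated at format time, not at partial() time\n",
    "    return date.today().isoformat()\n",
    "\n",
    "template = PromptTemplate.from_template('Today is {day}. {question}')\n",
    "partial_template = template.partial(day=today)  # pass the function itself\n",
    "\n",
    "prompt = partial_template.format(question='What day of the week is it?')\n",
    "print(prompt)\n",
    "```"
   ],
   "id": "7c2d9e4a1b3f5086"
  },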
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 使用invoke方法代替format方法",
   "id": "2e4ec521a7801abc"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T16:29:04.674928Z",
     "start_time": "2025-10-20T16:29:04.080172Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "\n",
    "template = PromptTemplate.from_template(\n",
    "    \"Tell me a {adjective} joke about {content}.\"\n",
    ")\n",
    "\n",
    "prompt = template.invoke({\"adjective\": \"funny\", \"content\": \"chickens\"})\n",
    "\n",
    "print(prompt)\n",
    "print(prompt.to_string())"
   ],
   "id": "a662a0da238065",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "text='Tell me a funny joke about chickens.'\n",
      "Tell me a funny joke about chickens.\n"
     ]
    }
   ],
   "execution_count": 15
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 结合LLM调用",
   "id": "e60ddc6e6d8da760"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T16:38:05.798035Z",
     "start_time": "2025-10-20T16:37:59.200344Z"
    }
   },
   "cell_type": "code",
   "source": [
    "template = PromptTemplate.from_template(\n",
    "    template=\"请评价{product}的优缺点，包括{aspect1}和{aspect2}\"\n",
    ")\n",
    "\n",
    "prompt = template.invoke({\"product\": \"电脑\", \"aspect1\": \"性能\", \"aspect2\": \"电池\"})\n",
    "\n",
    "llm.invoke(prompt)\n"
   ],
   "id": "23e835d71d9322e",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'续航等方面。\"\\n\\n请注意，以上只是一个示例提示，您可以根据自己的需求进行调整和修改。\\n\\n如果您希望我为您提供更多的提示或帮助，请告诉我！\\n\\n当然可以！以下是一些关于评价电脑优缺点的提示，可以帮助您更好地进行分析：\\n\\n### 提示1：性能方面\\n- **处理器**：评估电脑的处理器类型和速度，例如Intel的i5、i7或AMD的Ryzen系列。处理器的性能直接影响到电脑的运行速度和多任务处理能力。\\n- **内存**：考虑电脑的RAM容量，8GB、16GB或更高的配置在处理大型应用程序和多任务时表现如何。\\n- **存储**：分析硬盘类型（HDD vs. SSD），以及存储容量。SSD通常比HDD在加载速度和数据传输方面更快。\\n- **图形性能**：如果您需要进行游戏或图形设计，评估显卡的性能（独立显卡 vs. 集成显卡）会很重要。\\n\\n### 提示2：电池续航\\n- **电池容量**：查看电池的容量（'"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 20
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "# ChatPromptTemplate",
   "id": "614b0ffac032dc9"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 构造方式实例化",
   "id": "32c10f94384d4170"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T16:46:53.993002Z",
     "start_time": "2025-10-20T16:46:53.974175Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "prompt_template = ChatPromptTemplate([\n",
    "    (\"system\", \"你是一个AI开发工程师，你的名字是 {name}.\"),\n",
    "    (\"human\", \"你能开发哪些AI应用？\"),\n",
    "    (\"ai\", \"我能开发很多AI应用，比如聊天机器人，图像识别，自然语言处理等\"),\n",
    "    (\"human\", \"{usage_input}\")\n",
    "])\n",
    "\n",
    "prompt = prompt_template.invoke(\n",
    "    input={\"name\": \"小谷AI\", \"usage_input\": \"你能帮我做什么？\"},\n",
    ")\n",
    "print(type(prompt))\n",
    "print(prompt)"
   ],
   "id": "fb3288e21227741a",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class 'langchain_core.prompt_values.ChatPromptValue'>\n",
      "messages=[SystemMessage(content='你是一个AI开发工程师，你的名字是 小谷AI.', additional_kwargs={}, response_metadata={}), HumanMessage(content='你能开发哪些AI应用？', additional_kwargs={}, response_metadata={}), AIMessage(content='我能开发很多AI应用，比如聊天机器人，图像识别，自然语言处理等', additional_kwargs={}, response_metadata={}), HumanMessage(content='你能帮我做什么？', additional_kwargs={}, response_metadata={})]\n"
     ]
    }
   ],
   "execution_count": 21
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 使用from_messages()",
   "id": "39d31ac974ce73f1"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T16:48:13.723434Z",
     "start_time": "2025-10-20T16:48:13.706011Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "chat_template = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"你是一个有帮助的AI机器人，你的名字是{name}。\"),\n",
    "        (\"human\", \"你好，最近怎么样？\"),\n",
    "        (\"ai\", \"我很好，谢谢！\"),\n",
    "        (\"human\", \"{user_input}\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "messages = chat_template.invoke(input={\"name\": \"小明\", \"user_input\": \"你叫什么名字？\"})\n",
    "\n",
    "print(messages)"
   ],
   "id": "8b1cae8d3577ed6a",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "messages=[SystemMessage(content='你是一个有帮助的AI机器人，你的名字是小明。', additional_kwargs={}, response_metadata={}), HumanMessage(content='你好，最近怎么样？', additional_kwargs={}, response_metadata={}), AIMessage(content='我很好，谢谢！', additional_kwargs={}, response_metadata={}), HumanMessage(content='你叫什么名字？', additional_kwargs={}, response_metadata={})]\n"
     ]
    }
   ],
   "execution_count": 22
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## 提示词模板调用的几种方式\n",
    "1. `invoke()`\n",
    "2. `format()`\n",
    "3. `format_messages()`\n",
    "4. `format_prompt()`\n",
    "### 通过 `format_messages()` 调用"
   ],
   "id": "e68648bf1f890e0"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T16:50:44.787961Z",
     "start_time": "2025-10-20T16:50:44.773265Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "prompt_template = ChatPromptTemplate([\n",
    "    (\"system\", \"你是一个AI开发工程师. 你的名字是 {name}.\"),\n",
    "    (\"human\", \"你能开发哪些AI应用?\"),\n",
    "    (\"ai\", \"我能开发很多AI应用, 比如聊天机器人, 图像识别, 自然语言处理等.\"),\n",
    "    (\"human\", \"{user_input}\")\n",
    "])\n",
    "\n",
    "#调用format_messages()方法，返回消息列表\n",
    "prompt2 = prompt_template.format_messages(name=\"小谷AI\", user_input=\"你能帮我做什么?\")\n",
    "\n",
    "print(type(prompt2))\n",
    "print(prompt2)"
   ],
   "id": "3c5869c40f20860d",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class 'list'>\n",
      "[SystemMessage(content='你是一个AI开发工程师. 你的名字是 小谷AI.', additional_kwargs={}, response_metadata={}), HumanMessage(content='你能开发哪些AI应用?', additional_kwargs={}, response_metadata={}), AIMessage(content='我能开发很多AI应用, 比如聊天机器人, 图像识别, 自然语言处理等.', additional_kwargs={}, response_metadata={}), HumanMessage(content='你能帮我做什么?', additional_kwargs={}, response_metadata={})]\n"
     ]
    }
   ],
   "execution_count": 23
  },
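  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "### Rendering with `format_prompt()`\n",
    "`format_prompt()` (like `invoke()`) returns a `ChatPromptValue`, which can then be converted with `to_messages()` for chat models or `to_string()` for completion models. A short sketch:\n",
    "\n",
    "```python\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "template = ChatPromptTemplate.from_messages([\n",
    "    ('system', 'You are a {role}.'),\n",
    "    ('human', '{question}'),\n",
    "])\n",
    "\n",
    "prompt_value = template.format_prompt(role='translator', question='Hello!')\n",
    "messages = prompt_value.to_messages()  # list of BaseMessage objects\n",
    "text = prompt_value.to_string()        # flattened single string\n",
    "\n",
    "print(len(messages))\n",
    "print(text)\n",
    "```"
   ],
   "id": "5e9b2c4d7a1f3860"
  },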
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## 更丰富的实例化参数类型\n",
    "参数是列表类型，列表的元素可以是 **字符串、字典、字符串构成的元组、消息类型、提示词模板类型、消息提示词模板类型等**\n",
    "### dict类型"
   ],
   "id": "c22b2b53eaa343a"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T16:55:30.564570Z",
     "start_time": "2025-10-20T16:55:30.550139Z"
    }
   },
   "cell_type": "code",
   "source": [
    "prompt = ChatPromptTemplate.from_messages([\n",
    "    {\"role\": \"system\", \"content\": \"你是一个{role}.\"},\n",
    "    {\"role\": \"human\", \"content\": [\"复杂内容\", {\"type\": \"text\"}]},\n",
    "])\n",
    "print(prompt.format_messages(role=\"教师\"))"
   ],
   "id": "eff4f0275386f355",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[SystemMessage(content='你是一个教师.', additional_kwargs={}, response_metadata={}), HumanMessage(content=[{'type': 'text', 'text': '复杂内容'}, {'type': 'text'}], additional_kwargs={}, response_metadata={})]\n"
     ]
    }
   ],
   "execution_count": 24
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### Message类型",
   "id": "34a1f2f2af4c9926"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T16:56:06.331708Z",
     "start_time": "2025-10-20T16:56:06.319590Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_core.messages import SystemMessage, HumanMessage\n",
    "\n",
    "chat_prompt_template = ChatPromptTemplate.from_messages([\n",
    "    SystemMessage(content=\"我是一个贴心的智能助手\"),\n",
    "    HumanMessage(content=\"我的问题是:人工智能英文怎么说？\")\n",
    "])\n",
    "\n",
    "messages = chat_prompt_template.format_messages()\n",
    "print(messages)\n",
    "print(type(messages))"
   ],
   "id": "869728abb8b551b5",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[SystemMessage(content='我是一个贴心的智能助手', additional_kwargs={}, response_metadata={}), HumanMessage(content='我的问题是:人工智能英文怎么说？', additional_kwargs={}, response_metadata={})]\n",
      "<class 'list'>\n"
     ]
    }
   ],
   "execution_count": 25
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### BaseChatPromptTemplate类型",
   "id": "a28bba817e82aeb6"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "# 使用 BaseChatPromptTemplate（嵌套的 ChatPromptTemplate）\n",
    "nested_prompt_template1 = ChatPromptTemplate.from_messages([\n",
    "    (\"system\", \"我是一个人工智能助手，我的名字叫{name}\")\n",
    "])\n",
    "nested_prompt_template2 = ChatPromptTemplate.from_messages([\n",
    "    (\"human\", \"很高兴认识你,我的问题是{question}\")\n",
    "])\n",
    "prompt_template = ChatPromptTemplate.from_messages([\n",
    "    nested_prompt_template1, nested_prompt_template2\n",
    "])\n",
    "\n",
    "prompt_template.format_messages(name=\"小智\", question=\"你为什么这么帅？\")"
   ],
   "id": "e38be0e972aef7b1"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### BaseMessagePromptTemplate类型",
   "id": "a5b657773f1ee332"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T16:58:52.188005Z",
     "start_time": "2025-10-20T16:58:52.183022Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# 导入聊天消息类模板\n",
    "from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate\n",
    "\n",
    "# 创建消息模板\n",
    "system_template = \"你是一个专家{role}\"\n",
    "system_message_prompt = SystemMessagePromptTemplate.from_template(system_template)\n",
    "\n",
    "human_template = \"给我解释{concept}，用浅显易懂的语言\"\n",
    "human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)\n",
    "\n",
    "# 组合成聊天提示模板\n",
    "chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt,\n",
    "                                                human_message_prompt])\n",
    "\n",
    "# 格式化提示\n",
    "formatted_messages = chat_prompt.format_messages(\n",
    "    role=\"物理学家\",\n",
    "    concept=\"相对论\"\n",
    ")\n",
    "print(formatted_messages)"
   ],
   "id": "146dcc6bf4fb3065",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[SystemMessage(content='你是一个专家物理学家', additional_kwargs={}, response_metadata={}), HumanMessage(content='给我解释相对论，用浅显易懂的语言', additional_kwargs={}, response_metadata={})]\n"
     ]
    }
   ],
   "execution_count": 26
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T16:59:31.822974Z",
     "start_time": "2025-10-20T16:59:31.803774Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# 1.导入先关包\n",
    "from langchain_core.prompts import ChatMessagePromptTemplate\n",
    "\n",
    "# 2.定义模版\n",
    "prompt = \"今天我们授课的内容是{subject}\"\n",
    "\n",
    "# 3.创建自定义角色聊天消息提示词模版\n",
    "chat_message_prompt = ChatMessagePromptTemplate.from_template(\n",
    "    role=\"teacher\", template=prompt\n",
    ")\n",
    "\n",
    "# 4.格式聊天消息提示词\n",
    "resp = chat_message_prompt.format(subject=\"我爱北京天安门\")\n",
    "print(type(resp))\n",
    "print(resp)"
   ],
   "id": "54c07ca46afbce24",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class 'langchain_core.messages.chat.ChatMessage'>\n",
      "content='今天我们授课的内容是我爱北京天安门' additional_kwargs={} response_metadata={} role='teacher'\n"
     ]
    }
   ],
   "execution_count": 27
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 综合使用",
   "id": "5ef4d6eb6f538434"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T17:00:07.638205Z",
     "start_time": "2025-10-20T17:00:07.629608Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_core.prompts import (\n",
    "    ChatPromptTemplate,\n",
    "    SystemMessagePromptTemplate,\n",
    "    HumanMessagePromptTemplate,\n",
    ")\n",
    "from langchain_core.messages import SystemMessage, HumanMessage\n",
    "\n",
    "# 示例 1: 使用 BaseMessagePromptTemplate\n",
    "system_prompt = SystemMessagePromptTemplate.from_template(\"你是一个{role}.\")\n",
    "human_prompt = HumanMessagePromptTemplate.from_template(\"{user_input}\")\n",
    "# 示例 2: 使用 BaseMessage（已实例化的消息）\n",
    "system_msg = SystemMessage(content=\"你是一个AI工程师。\")\n",
    "human_msg = HumanMessage(content=\"你好！\")\n",
    "# 示例 3: 使用 BaseChatPromptTemplate（嵌套的 ChatPromptTemplate）\n",
    "nested_prompt = ChatPromptTemplate.from_messages([(\"system\", \"嵌套提示词\")])\n",
    "\n",
    "prompt = ChatPromptTemplate.from_messages([\n",
    "    system_prompt,  # MessageLike (BaseMessagePromptTemplate)\n",
    "    human_prompt,  # MessageLike (BaseMessagePromptTemplate)\n",
    "    system_msg,  # MessageLike (BaseMessage)\n",
    "    human_msg,  # MessageLike (BaseMessage)\n",
    "    nested_prompt,  # MessageLike (BaseChatPromptTemplate)\n",
    "])\n",
    "prompt.format_messages(role=\"人工智能专家\", user_input=\"介绍一下大模型的应用场景\")"
   ],
   "id": "705a0bc480a2f67f",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content='你是一个人工智能专家.', additional_kwargs={}, response_metadata={}),\n",
       " HumanMessage(content='介绍一下大模型的应用场景', additional_kwargs={}, response_metadata={}),\n",
       " SystemMessage(content='你是一个AI工程师。', additional_kwargs={}, response_metadata={}),\n",
       " HumanMessage(content='你好！', additional_kwargs={}, response_metadata={}),\n",
       " SystemMessage(content='嵌套提示词', additional_kwargs={}, response_metadata={})]"
      ]
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 28
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 结合LLM调用",
   "id": "1040bff40323b622"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T17:02:03.499090Z",
     "start_time": "2025-10-20T17:02:01.611850Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain.prompts.chat import ChatPromptTemplate\n",
    "\n",
    "chat_prompt = ChatPromptTemplate.from_messages([\n",
    "    (\"system\", \"你是一个数学家，你可以计算任何算式\"),\n",
    "    (\"human\", \"我的问题：{question}\"),\n",
    "])\n",
    "# 输入提示\n",
    "messages = chat_prompt.format_messages(question=\"我今年18岁，我的舅舅今年38岁，我的爷爷今年72岁，我和舅舅一共多少岁了？\")\n",
    "\n",
    "output = CHAT_MODEL.invoke(messages)\n",
    "\n",
    "# 打印输出内容\n",
    "print(output.content)"
   ],
   "id": "d0507d4c02131021",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "要计算你和舅舅一共多少岁，我们将你和舅舅的年龄相加。\n",
      "\n",
      "你的年龄：18岁  \n",
      "舅舅的年龄：38岁  \n",
      "\n",
      "计算：\n",
      "\n",
      "18 + 38 = 56\n",
      "\n",
      "所以，你和舅舅一共56岁。\n"
     ]
    }
   ],
   "execution_count": 29
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T17:03:15.935347Z",
     "start_time": "2025-10-20T17:03:14.258712Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from dotenv import load_dotenv\n",
    "from langchain.prompts.chat import SystemMessagePromptTemplate, HumanMessagePromptTemplate, AIMessagePromptTemplate\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "load_dotenv()\n",
    "llm = ChatOpenAI()\n",
    "template = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        SystemMessagePromptTemplate.from_template(\"你是{product}的客服助手。你的名字叫{name}\"),\n",
    "        HumanMessagePromptTemplate.from_template(\"hello 你好吗？\"),\n",
    "        AIMessagePromptTemplate.from_template(\"我很好 谢谢!\"),\n",
    "        HumanMessagePromptTemplate.from_template(\"{query}\"),\n",
    "    ]\n",
    ")\n",
    "prompt = template.format_messages(\n",
    "    product=\"AGI课堂\",\n",
    "    name=\"Bob\",\n",
    "    query=\"你是谁\"\n",
    ")\n",
    "\n",
    "# 调用聊天模型\n",
    "response = CHAT_MODEL.invoke(prompt)\n",
    "print(response.content)"
   ],
   "id": "cd4acfa91c78491c",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "我是Bob，AGI课堂的客服助手。我在这里帮助你解答问题或提供信息。有什么我可以帮助你的吗？\n"
     ]
    }
   ],
   "execution_count": 30
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 插入消息列表：MessagesPlaceholder",
   "id": "b3499d8ff206e675"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T17:04:19.572421Z",
     "start_time": "2025-10-20T17:04:19.563452Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
    "from langchain_core.messages import HumanMessage\n",
    "\n",
    "prompt_template = ChatPromptTemplate.from_messages([\n",
    "    (\"system\", \"You are a helpful assistant\"),\n",
    "    MessagesPlaceholder(\"msgs\")\n",
    "])\n",
    "# prompt_template.invoke({\"msgs\": [HumanMessage(content=\"hi!\")]})\n",
    "prompt_template.format_messages(msgs=[HumanMessage(content=\"hi!\")])"
   ],
   "id": "7fc4648ec1f02f87",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content='You are a helpful assistant', additional_kwargs={}, response_metadata={}),\n",
       " HumanMessage(content='hi!', additional_kwargs={}, response_metadata={})]"
      ]
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 31
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T17:05:05.269827Z",
     "start_time": "2025-10-20T17:05:05.257709Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
    "from langchain_core.messages import AIMessage\n",
    "\n",
    "prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"You are a helpful assistant.\"),\n",
    "        MessagesPlaceholder(\"history\"),\n",
    "        (\"human\", \"{question}\")\n",
    "    ]\n",
    ")\n",
    "prompt.format_messages(\n",
    "    history=[HumanMessage(content=\"1+2*3 = ?\"), AIMessage(content=\"1+2*3=7\")],\n",
    "    question=\"我刚才问题是什么？\"\n",
    ")"
   ],
   "id": "a4531c139d4001e6",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content='You are a helpful assistant.', additional_kwargs={}, response_metadata={}),\n",
       " HumanMessage(content='1+2*3 = ?', additional_kwargs={}, response_metadata={}),\n",
       " AIMessage(content='1+2*3=7', additional_kwargs={}, response_metadata={}),\n",
       " HumanMessage(content='我刚才问题是什么？', additional_kwargs={}, response_metadata={})]"
      ]
     },
     "execution_count": 32,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 32
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-20T17:05:34.084946Z",
     "start_time": "2025-10-20T17:05:34.074162Z"
    }
   },
   "cell_type": "code",
   "source": [
    "#1.导入相关包\n",
    "from langchain_core.prompts import (ChatPromptTemplate, HumanMessagePromptTemplate,\n",
    "                                    MessagesPlaceholder)\n",
    "\n",
    "# 2.定义消息模板\n",
    "prompt = ChatPromptTemplate.from_messages([\n",
    "    SystemMessagePromptTemplate.from_template(\"你是{role}\"),\n",
    "    MessagesPlaceholder(variable_name=\"intermediate_steps\"),\n",
    "    HumanMessagePromptTemplate.from_template(\"{query}\")\n",
    "])\n",
    "# 3.定义消息对象（运行时填充中间步骤的结果）\n",
    "intermediate = [\n",
    "    SystemMessage(name=\"search\", content=\"北京: 晴, 25℃\")\n",
    "]\n",
    "# 4.格式化聊天消息提示词模版\n",
    "prompt.format_messages(\n",
    "    role=\"天气预报员\",\n",
    "    intermediate_steps=intermediate,\n",
    "    query=\"北京天气怎么样？\"\n",
    ")"
   ],
   "id": "fd4cfc5083b81af6",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content='你是天气预报员', additional_kwargs={}, response_metadata={}),\n",
       " SystemMessage(content='北京: 晴, 25℃', additional_kwargs={}, response_metadata={}, name='search'),\n",
       " HumanMessage(content='北京天气怎么样？', additional_kwargs={}, response_metadata={})]"
      ]
     },
     "execution_count": 33,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 33
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# 少量样本示例的提示词\n",
    "使用说明\n",
    "在构建prompt时，可以通过构建一个 少量示例列表 去进一步格式化prompt，这是一种简单但强大的指导生成的方式，在某些情况下可以 显著提高模型性能 。\n",
    "\n",
    "少量示例提示模板可以由 一组示例 或一个负责从定义的集合中选择 一部分示例 的示例选择器构建。\n",
    "\n",
    "- 前者：使用 FewShotPromptTemplate 或 FewShotChatMessagePromptTemplate\n",
    "- 后者：使用 Example selectors(示例选择器)\n",
    "\n",
    "每个示例的结构都是一个 字典 ，其中 键 是输入变量， 值 是输入变量的值。"
   ],
   "id": "8b2e57775f2ff9a4"
  },
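  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "少样本模板的拼接逻辑可以用一段纯Python示意（仅为原理演示，不依赖LangChain；函数名 format_few_shot 为本文假设）：\n",
    "\n",
    "```python\n",
    "def format_few_shot(examples, example_template, suffix, prefix=''):\n",
    "    # 逐个示例套用同一模板，再与前缀、后缀用空行拼接，得到最终提示词\n",
    "    parts = [prefix] if prefix else []\n",
    "    parts += [example_template.format(**e) for e in examples]\n",
    "    parts.append(suffix)\n",
    "    return '\\n\\n'.join(parts)\n",
    "\n",
    "examples = [{'input': '2+2', 'output': '4'}, {'input': '5-2', 'output': '3'}]\n",
    "print(format_few_shot(examples, '算式：{input} 值：{output}', '算式：2*5 值：'))\n",
    "```"
   ],
   "id": "0f1e2d3c4b5a6978"
  },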
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 体会：零样本会导致低质量回答",
   "id": "52632bc22f03d182"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-26T04:49:27.516544Z",
     "start_time": "2025-10-26T04:49:25.963870Z"
    }
   },
   "cell_type": "code",
   "source": [
    "CHAT_MODEL = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0.4)\n",
    "res = CHAT_MODEL.invoke(\"2 🦜 9是多少?\")\n",
    "print(res.content)"
   ],
   "id": "753af89403c811bb",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2 🦜 9 的意思不太明确。如果你是在问 2 加 9，那答案是 11。如果有其他的意思，请提供更多的上下文！\n"
     ]
    }
   ],
   "execution_count": 2
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## FewShotPromptTemplate的使用",
   "id": "1d869279f2f13ea7"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-26T04:49:38.376627Z",
     "start_time": "2025-10-26T04:49:36.714484Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "from langchain.prompts.few_shot import FewShotPromptTemplate\n",
    "\n",
    "prompt_template = \"你是一个数学专家，算式： {input} 值：{output} 使用：{description}\"\n",
    "\n",
    "# 这是一个提示模板，用于设置每个示例的格式\n",
    "prompt_sample = PromptTemplate.from_template(prompt_template)\n",
    "\n",
    "# 提供示例\n",
    "examples = [\n",
    "    {\"input\": \"2+2\", \"output\": \"4\", \"description\": \"加法运算\"},\n",
    "    {\"input\": \"5-2\", \"output\": \"3\", \"description\": \"减法运算\"},\n",
    "]\n",
    "\n",
    "prompt = FewShotPromptTemplate(\n",
    "    examples=examples,\n",
    "    example_prompt=prompt_sample,\n",
    "    suffix=\"你是一个数学专家，算式：{input} 值:{output}\",\n",
    "    input_variables=[\"input\", \"output\"]\n",
    ")\n",
    "print(prompt.invoke({\"input\": \"2*5\",\"output\":10}))\n",
    "\n",
    "# 大模型调用\n",
    "resp = CHAT_MODEL.invoke(prompt.invoke({\"input\": \"2*5\", \"output\": \"10\"}))\n",
    "\n",
    "print(\"===response===\")\n",
    "print(resp.content)"
   ],
   "id": "bc662126f6e1115f",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "text='你是一个数学专家，算式： 2+2 值：4 使用：加法运算\\n\\n你是一个数学专家，算式： 5-2 值：3 使用：减法运算\\n\\n你是一个数学专家，算式：2*5 值:10'\n",
      "===response===\n",
      "你是一个数学专家，算式：2*5 值：10 使用：乘法运算\n",
      "\n",
      "你是一个数学专家，算式：10/2 值：5 使用：除法运算\n"
     ]
    }
   ],
   "execution_count": 3
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## FewShotChatMessagePromptTemplate的使用\n",
    "除了 FewShotPromptTemplate 之外，FewShotChatMessagePromptTemplate 是专门为 聊天对话场景 设计的少样本（few-shot）提示模板，它继承自 FewShotPromptTemplate ，但针对聊天消息的格式进行了优化。\n",
    "\n",
    "特点：\n",
    "- 自动将示例格式化为聊天消息（ HumanMessage / AIMessage 等）\n",
    "- 输出结构化聊天消息（ List[BaseMessage] ）\n",
    "- 保留对话轮次结构"
   ],
   "id": "e1abc55614b8cf25"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-26T04:49:52.392029Z",
     "start_time": "2025-10-26T04:49:50.991340Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# 1.导入相关包\n",
    "from langchain_core.prompts import (FewShotChatMessagePromptTemplate,\n",
    "                                    ChatPromptTemplate)\n",
    "\n",
    "# 2.定义示例组\n",
    "examples = [\n",
    "    {\"input\": \"2🦜2\", \"output\": \"4\"},\n",
    "    {\"input\": \"2🦜3\", \"output\": \"8\"},\n",
    "]\n",
    "\n",
    "# 3.定义示例的消息格式提示词模版\n",
    "example_prompt = ChatPromptTemplate.from_messages([\n",
    "    ('human', '{input} 是多少?'),\n",
    "    ('ai', '{input}的结果是{output}')\n",
    "])\n",
    "\n",
    "# 4.定义FewShotChatMessagePromptTemplate对象\n",
    "few_shot_prompt = FewShotChatMessagePromptTemplate(\n",
    "    examples=examples,  # 示例组\n",
    "    example_prompt=example_prompt,  # 示例提示词词模版\n",
    ")\n",
    "\n",
    "# 5.输出完整提示词的消息模版\n",
    "final_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        ('system', '你是一个数学奇才'),\n",
    "        few_shot_prompt,\n",
    "        ('human', '{input}'),\n",
    "    ]\n",
    ")\n",
    "\n",
    "print(\"===few_shot_prompt===\")\n",
    "print(final_prompt)\n",
    "\n",
    "#6.提供大模型\n",
    "print(\"===response====\")\n",
    "print(CHAT_MODEL.invoke(final_prompt.invoke(input=\"2🦜4\")).content)"
   ],
   "id": "83a61c0a1a7c0556",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "===few_shot_prompt===\n",
      "input_variables=['input'] input_types={} partial_variables={} messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], input_types={}, partial_variables={}, template='你是一个数学奇才'), additional_kwargs={}), FewShotChatMessagePromptTemplate(examples=[{'input': '2🦜2', 'output': '4'}, {'input': '2🦜3', 'output': '8'}], input_variables=[], input_types={}, partial_variables={}, example_prompt=ChatPromptTemplate(input_variables=['input', 'output'], input_types={}, partial_variables={}, messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], input_types={}, partial_variables={}, template='{input} 是多少?'), additional_kwargs={}), AIMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input', 'output'], input_types={}, partial_variables={}, template='{input}的结果是{output}'), additional_kwargs={})])), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], input_types={}, partial_variables={}, template='{input}'), additional_kwargs={})]\n",
      "===response====\n",
      "2🦜4的结果是16。这个符号表示的是2的4次方（2^4）。\n"
     ]
    }
   ],
   "execution_count": 4
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## Example selectors(示例选择器)\n",
    "前面FewShotPromptTemplate的特点是，无论输入什么问题，都会包含全部示例。在实际开发中，我们可以根据当前输入，使用示例选择器，从大量候选示例中选取最相关的示例子集。\n",
    "\n",
    "使用的好处：避免盲目传递所有示例，减少 token 消耗的同时，还可以提升输出效果。\n",
    "\n",
    "示例选择策略：语义相似选择、长度选择、最大边际相关示例选择等\n",
    "- 语义相似选择 ：通过余弦相似度等度量方式评估语义相关性，选择与输入问题最相似的 k 个示例。\n",
    "- 长度选择 ：根据输入文本的长度，从候选示例中筛选出长度最匹配的示例。增强模型对文本结构的理解。比语义相似度计算更轻量，适合对响应速度要求高的场景。\n",
    "- 最大边际相关示例选择 ：优先选择与输入问题语义相似的示例；同时，通过惩罚机制避免返回同质化的内容"
   ],
   "id": "13c7c860e03716ae"
  },
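  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "以长度选择为例，其核心思路可用纯Python示意（仅为原理演示，并非LangChain中 LengthBasedExampleSelector 的真实实现；这里按词数粗略计长）：\n",
    "\n",
    "```python\n",
    "def select_by_length(examples, max_length):\n",
    "    # 依次累加各示例的长度，超出预算即停止，避免提示词超长\n",
    "    selected, used = [], 0\n",
    "    for e in examples:\n",
    "        n = len(' '.join(e.values()).split())\n",
    "        if used + n > max_length:\n",
    "            break\n",
    "        selected.append(e)\n",
    "        used += n\n",
    "    return selected\n",
    "\n",
    "examples = [{'input': 'happy', 'output': 'sad'}, {'input': 'tall', 'output': 'short'}]\n",
    "print(select_by_length(examples, max_length=2))\n",
    "```"
   ],
   "id": "1a2b3c4d5e6f7081"
  },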
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 结合 FewShotPromptTemplate 使用",
   "id": "58baba0eb4a13293"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "需先执行： `pip install chromadb`",
   "id": "9d3783335e7d78af"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-26T04:50:00.435260Z",
     "start_time": "2025-10-26T04:49:57.474533Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_community.vectorstores import Chroma\n",
    "from langchain_core.example_selectors import SemanticSimilarityExampleSelector\n",
    "\n",
    "from langchain_openai import OpenAIEmbeddings\n",
    "\n",
    "\n",
    "# 1.定义嵌入模型\n",
    "embeddings_model = OpenAIEmbeddings(\n",
    "    model=\"text-embedding-ada-002\"\n",
    ")\n",
    "\n",
    "# 2.定义示例组\n",
    "examples = [\n",
    "    {\n",
    "        \"question\": \"谁活得更久，穆罕默德·阿里还是艾伦·图灵?\",\n",
    "        \"answer\": \"\"\"\n",
    "接下来还需要问什么问题吗？\n",
    "追问：穆罕默德·阿里去世时多大年纪？\n",
    "中间答案：穆罕默德·阿里去世时享年74岁。\n",
    "\"\"\",\n",
    "    },\n",
    "    {\n",
    "        \"question\": \"craigslist的创始人是什么时候出生的？\",\n",
    "        \"answer\": \"\"\"\n",
    "接下来还需要问什么问题吗？\n",
    "追问：谁是craigslist的创始人？\n",
    "中级答案：Craigslist是由克雷格·纽马克创立的。\n",
    "\"\"\",\n",
    "    },\n",
    "    {\n",
    "        \"question\": \"谁是乔治·华盛顿的外祖父？\",\n",
    "        \"answer\": \"\"\"\n",
    "接下来还需要问什么问题吗？\n",
    "追问：谁是乔治·华盛顿的母亲？\n",
    "中间答案：乔治·华盛顿的母亲是玛丽·鲍尔·华盛顿。\n",
    "\"\"\",\n",
    "    },\n",
    "    {\n",
    "        \"question\": \"《大白鲨》和《皇家赌场》的导演都来自同一个国家吗？\",\n",
    "        \"answer\": \"\"\"\n",
    "接下来还需要问什么问题吗？\n",
    "追问：《大白鲨》的导演是谁？\n",
    "中级答案：《大白鲨》的导演是史蒂文·斯皮尔伯格。\n",
    "\"\"\",\n",
    "    },\n",
    "]\n",
    "# 3.定义示例选择器\n",
    "example_selector = SemanticSimilarityExampleSelector.from_examples(\n",
    "    # 这是可供选择的示例列表\n",
    "    examples,\n",
    "    # 这是用于生成嵌入的嵌入类，用于衡量语义相似性\n",
    "    embeddings_model,\n",
    "    # 这是用于存储嵌入并进行相似性搜索的 VectorStore 类\n",
    "    Chroma,\n",
    "    # 这是要生成的示例数量\n",
    "    k=1,\n",
    ")\n",
    "\n",
    "# 选择与输入最相似的示例\n",
    "question = \"玛丽·鲍尔·华盛顿的父亲是谁?\"\n",
    "selected_examples = example_selector.select_examples({\"question\": question})\n",
    "print(f\"与输入最相似的示例：\")\n",
    "for example in selected_examples:\n",
    "    print(\"\\n\")\n",
    "    for k, v in example.items():\n",
    "        print(f\"{k}: {v}\")"
   ],
   "id": "189c9511b1d7e4d6",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "与输入最相似的示例：\n",
      "\n",
      "\n",
      "question: 谁是乔治·华盛顿的外祖父？\n",
      "answer: \n",
      "接下来还需要问什么问题吗？\n",
      "追问：谁是乔治·华盛顿的母亲？\n",
      "中间答案：乔治·华盛顿的母亲是玛丽·鲍尔·华盛顿。\n",
      "\n"
     ]
    }
   ],
   "execution_count": 5
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "这里使用FAISS，需安装：`conda install faiss-cpu`",
   "id": "2ef83fbeb2cf36b3"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-26T04:52:41.360031Z",
     "start_time": "2025-10-26T04:52:38.497937Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# 1.导入相关包\n",
    "from langchain_community.vectorstores import FAISS\n",
    "from langchain_core.example_selectors import SemanticSimilarityExampleSelector\n",
    "from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate\n",
    "from langchain_openai import OpenAIEmbeddings\n",
    "\n",
    "# 2.定义示例提示词模版\n",
    "example_prompt = PromptTemplate.from_template(\n",
    "    template=\"Input: {input}\\nOutput: {output}\",\n",
    ")\n",
    "\n",
    "# 3.创建一个示例提示词模版\n",
    "examples = [\n",
    "    {\"input\": \"高兴\", \"output\": \"悲伤\"},\n",
    "    {\"input\": \"高\", \"output\": \"矮\"},\n",
    "    {\"input\": \"长\", \"output\": \"短\"},\n",
    "    {\"input\": \"精力充沛\", \"output\": \"无精打采\"},\n",
    "    {\"input\": \"阳光\", \"output\": \"阴暗\"},\n",
    "    {\"input\": \"粗糙\", \"output\": \"光滑\"},\n",
    "    {\"input\": \"干燥\", \"output\": \"潮湿\"},\n",
    "    {\"input\": \"富裕\", \"output\": \"贫穷\"},\n",
    "]\n",
    "# 4.定义嵌入模型\n",
    "\n",
    "embeddings = OpenAIEmbeddings(\n",
    "    model=\"text-embedding-ada-002\"\n",
    ")\n",
    "\n",
    "# 5.创建语义相似性示例选择器\n",
    "example_selector = SemanticSimilarityExampleSelector.from_examples(\n",
    "    examples,\n",
    "    embeddings,\n",
    "    FAISS,\n",
    "    k=2,\n",
    ")\n",
    "\n",
    "# 6.定义小样本提示词模版\n",
    "similar_prompt = FewShotPromptTemplate(\n",
    "    example_selector=example_selector,\n",
    "    example_prompt=example_prompt,\n",
    "    prefix=\"给出每个词组的反义词\",\n",
    "    suffix=\"Input: {word}\\nOutput:\",\n",
    "    input_variables=[\"word\"],\n",
    ")\n",
    "response = similar_prompt.invoke({\"word\": \"忧郁\"})\n",
    "print(response.text)"
   ],
   "id": "9926f103f2424f1a",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "给出每个词组的反义词\n",
      "\n",
      "Input: 高兴\n",
      "Output: 悲伤\n",
      "\n",
      "Input: 阳光\n",
      "Output: 阴暗\n",
      "\n",
      "Input: 忧郁\n",
      "Output:\n"
     ]
    }
   ],
   "execution_count": 6
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# 具体使用：PipelinePromptTemplate(了解)\n",
    "用于将多个提示模板 按顺序组合成处理管道，实现分阶段、模块化的提示构建。它的核心作用类似于软件开发中的 管道模式 （Pipeline Pattern），通过串联多个提示处理步骤，实现复杂的提示生成逻辑。\n",
    "\n",
    "特点：\n",
    "- 将复杂提示拆解为多个处理阶段，每个阶段使用独立的提示模板\n",
    "- 前一个模板的输出作为下一个模板的输入变量\n",
    "- 使用场景：解决单一超大提示模板难以维护的问题"
   ],
   "id": "a94646df33561730"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-25T01:41:03.007563Z",
     "start_time": "2025-10-25T01:41:02.989560Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_core.prompts.prompt import PromptTemplate\n",
    "\n",
    "# 阶段1：问题分析\n",
    "analysis_template = PromptTemplate.from_template(\"\"\"\n",
    "分析这个问题：{question}\n",
    "关键要素：\n",
    "\"\"\")\n",
    "\n",
    "# 阶段2：知识检索\n",
    "retrieval_template = PromptTemplate.from_template(\"\"\"\n",
    "基于以下要素搜索资料：\n",
    "{analysis_result}\n",
    "搜索关键词：\n",
    "\"\"\")\n",
    "\n",
    "# 阶段3：生成最终回答\n",
    "answer_template = PromptTemplate.from_template(\"\"\"\n",
    "综合以下信息回答问题：\n",
    "{retrieval_result}\n",
    "最终答案：\n",
    "\"\"\")\n",
    "\n",
    "# 逐步执行管道提示\n",
    "pipeline_prompts = [\n",
    "    (\"analysis_result\", analysis_template),\n",
    "    (\"retrieval_result\", retrieval_template)\n",
    "]\n",
    "\n",
    "my_input = {\"question\": \"量子计算的优势是什么？\"}\n",
    "\n",
    "for name, prompt in pipeline_prompts:\n",
    "    # 调用当前提示模板并获取字符串结果\n",
    "    result = prompt.invoke(my_input).to_string()\n",
    "    # 将结果添加到输入字典中供下一步使用\n",
    "    my_input[name] = result\n",
    "\n",
    "# 生成最终答案\n",
    "my_output = answer_template.invoke(my_input).to_string()\n",
    "print(my_output)\n"
   ],
   "id": "950caf67f4823407",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "综合以下信息回答问题：\n",
      "\n",
      "基于以下要素搜索资料：\n",
      "\n",
      "分析这个问题：量子计算的优势是什么？\n",
      "关键要素：\n",
      "\n",
      "搜索关键词：\n",
      "\n",
      "最终答案：\n",
      "\n"
     ]
    }
   ],
   "execution_count": 3
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# 具体使用：自定义提示词模版(了解)\n",
    "在创建prompt时，我们也可以按照自己的需求去创建自定义的提示模版。\n",
    "\n",
    "步骤：\n",
    "- 自定义类继承提示词基类模版BasePromptTemplate\n",
    "- 重写format、format_prompt、from_template方法"
   ],
   "id": "4270438123ffc9ce"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-25T02:05:09.623597Z",
     "start_time": "2025-10-25T02:05:09.559583Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# 1.导入相关包\n",
    "from typing import List, Dict, Any\n",
    "from langchain.prompts import BasePromptTemplate\n",
    "from langchain.prompts import PromptTemplate\n",
    "from langchain.schema import PromptValue\n",
    "\n",
    "\n",
    "# 2.自定义提示词模版\n",
    "class SimpleCustomPrompt(BasePromptTemplate):\n",
    "    \"\"\"简单自定义提示词模板\"\"\"\n",
    "    template: str\n",
    "\n",
    "    def __init__(self, template: str, **kwargs):\n",
    "        # 使用PromptTemplate解析输入变量\n",
    "        prompt = PromptTemplate.from_template(template)\n",
    "        super().__init__(\n",
    "            input_variables=prompt.input_variables,\n",
    "            template=template,\n",
    "            **kwargs\n",
    "        )\n",
    "\n",
    "    def format(self, **kwargs: Any) -> str:\n",
    "        \"\"\"格式化提示词\"\"\"\n",
    "        # print(\"kwargs:\", kwargs)\n",
    "        # print(\"self.template:\", self.template)\n",
    "        return self.template.format(**kwargs)\n",
    "\n",
    "\n",
    "    def format_prompt(self, **kwargs: Any) -> PromptValue:\n",
    "        \"\"\"实现抽象方法\"\"\"\n",
    "        return PromptValue(text=self.format(**kwargs))\n",
    "\n",
    "    @classmethod\n",
    "    def from_template(cls, template: str, **kwargs) -> \"SimpleCustomPrompt\":\n",
    "        \"\"\"从模板创建实例\"\"\"\n",
    "        return cls(template=template, **kwargs)\n",
    "\n",
    "# 3.使用自定义提示词模版\n",
    "custom_prompt = SimpleCustomPrompt.from_template(\n",
    "    template=\"请回答关于{subject}的问题：{question}\"\n",
    ")\n",
    "\n",
    "# 4.格式化提示词\n",
    "formatted = custom_prompt.format(\n",
    "    subject=\"人工智能\",\n",
    "    question=\"什么是LLM？\"\n",
    ")\n",
    "\n",
    "print(formatted)\n"
   ],
   "id": "db014e3a0fb8bfc1",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "请回答关于人工智能的问题：什么是LLM？\n"
     ]
    }
   ],
   "execution_count": 4
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# 从文档中加载Prompt(了解)\n",
    "一方面，将想要设定prompt所支持的格式保存为JSON或者YAML格式文件。\n",
    "另一方面，通过读取指定路径的格式化文件，获取相应的prompt。\n",
    "\n",
    "目的与使用场景：\n",
    "- 为了便于共享、存储和加强对prompt的版本控制。\n",
    "- 当我们的prompt模板数据较大时，我们可以使用外部导入的方式进行管理和维护。"
   ],
   "id": "78f42cb3a2d8a7f1"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 加载yaml文件",
   "id": "ff0325989f963413"
  },
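  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "结合下方输出推测，asset/prompt.yaml 的内容大致如下（load_prompt 依据 _type 字段识别模板类型；具体文件内容以实际为准）：\n",
    "\n",
    "```yaml\n",
    "_type: prompt\n",
    "input_variables: [\"name\", \"what\"]\n",
    "template: 请给{name}讲一个关于{what}的故事\n",
    "```"
   ],
   "id": "2b3c4d5e6f708192"
  },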
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-25T02:08:16.621826Z",
     "start_time": "2025-10-25T02:08:16.604821Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_core.prompts import load_prompt\n",
    "from dotenv import load_dotenv\n",
    "\n",
    "load_dotenv()\n",
    "\n",
    "prompt = load_prompt(\"asset/prompt.yaml\", encoding=\"utf-8\")\n",
    "# print(prompt)\n",
    "print(prompt.format(name=\"年轻人\", what=\"滑稽\"))"
   ],
   "id": "8300641d7a825ec4",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "请给年轻人讲一个关于滑稽的故事\n"
     ]
    }
   ],
   "execution_count": 5
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 加载json文件",
   "id": "289df3bef2fe8d45"
  },
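  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "结合下方输出推测，asset/prompt.json 的内容大致如下（具体文件内容以实际为准）：\n",
    "\n",
    "```json\n",
    "{\n",
    "  \"_type\": \"prompt\",\n",
    "  \"input_variables\": [\"name\", \"what\"],\n",
    "  \"template\": \"请{name}讲一个{what}的故事。\"\n",
    "}\n",
    "```"
   ],
   "id": "3c4d5e6f70819203"
  },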
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-25T02:09:28.799880Z",
     "start_time": "2025-10-25T02:09:28.781875Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_core.prompts import load_prompt\n",
    "from dotenv import load_dotenv\n",
    "\n",
    "load_dotenv()\n",
    "\n",
    "prompt = load_prompt(\"asset/prompt.json\",encoding=\"utf-8\")\n",
    "print(prompt.format(name=\"张三\",what=\"搞笑的\"))"
   ],
   "id": "71f42c9891c51f98",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "请张三讲一个搞笑的的故事。\n"
     ]
    }
   ],
   "execution_count": 6
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
