{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "7a0dbc3b",
   "metadata": {},
   "source": [
    "LangChain's MessagePromptTemplate family builds the structured messages used in chat interactions. The three most common classes are SystemMessagePromptTemplate, HumanMessagePromptTemplate, and AIMessagePromptTemplate, each of which generates messages for one fixed role.\n",
    "\n",
    "### Roles and purposes\n",
    "- **SystemMessagePromptTemplate**: generates system messages. Use it to set conversation rules, the AI's persona, or background context that steers the logic and style of subsequent AI replies.\n",
    "- **HumanMessagePromptTemplate**: generates human messages. It represents user input and supports dynamic variables (such as the user's question or context parameters), keeping user messages flexible.\n",
    "- **AIMessagePromptTemplate**: generates AI messages. Use it to preset the AI's past or example replies, typically together with conversation memory or few-shot prompting.\n",
    "\n",
    "### What they share\n",
    "All three are essentially templated message builders: variables are filled in dynamically (for example via `format`), and the result is a structured message object in the shape Chat models expect, ready for LangChain's interaction flow with any chat LLM."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "b813432f",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "4fbc5ef5",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=['role'], input_types={}, partial_variables={}, template='你是{role}，擅长用生活化例子解释专业概念。\\n当用户提问时，先重复问题核心，再用1个比喻说明，最后总结关键点。'), additional_kwargs={})"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 1. System message template: sets the AI's role and the reply rules\n",
    "system_template = \"\"\"你是{role}，擅长用生活化例子解释专业概念。\n",
    "当用户提问时，先重复问题核心，再用1个比喻说明，最后总结关键点。\"\"\"\n",
    "# Build the system-role message template instance\n",
    "system_prompt = SystemMessagePromptTemplate.from_template(system_template)\n",
    "system_prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "68e8ac8f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['old_concept'], input_types={}, partial_variables={}, template='什么是{old_concept}？'), additional_kwargs={})"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 2. Historical human message template: simulates a past user question (used as an example)\n",
    "human_history_template = \"什么是{old_concept}？\"\n",
    "human_history_prompt = HumanMessagePromptTemplate.from_template(human_history_template)\n",
    "human_history_prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "49eb5371",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessagePromptTemplate(prompt=PromptTemplate(input_variables=['analogy', 'old_concept', 'summary'], input_types={}, partial_variables={}, template='你问的是{old_concept}。\\n打个比方：{analogy}\\n简单说，就是{summary}'), additional_kwargs={})"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 3. Historical AI message template: simulates a past AI reply (an example that guides the style)\n",
    "ai_history_template = \"\"\"你问的是{old_concept}。\n",
    "打个比方：{analogy}\n",
    "简单说，就是{summary}\"\"\"\n",
    "ai_history_prompt = AIMessagePromptTemplate.from_template(ai_history_template)\n",
    "ai_history_prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "f531b520",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['new_concept'], input_types={}, partial_variables={}, template='那{new_concept}又是什么呢？'), additional_kwargs={})"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 4. Current human message template: the user's present question\n",
    "human_current_template = \"那{new_concept}又是什么呢？\"\n",
    "human_current_prompt = HumanMessagePromptTemplate.from_template(human_current_template)\n",
    "human_current_prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "df4eec86",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatPromptTemplate(input_variables=['analogy', 'new_concept', 'old_concept', 'role', 'summary'], input_types={}, partial_variables={}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=['role'], input_types={}, partial_variables={}, template='你是{role}，擅长用生活化例子解释专业概念。\\n当用户提问时，先重复问题核心，再用1个比喻说明，最后总结关键点。'), additional_kwargs={}), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['old_concept'], input_types={}, partial_variables={}, template='什么是{old_concept}？'), additional_kwargs={}), AIMessagePromptTemplate(prompt=PromptTemplate(input_variables=['analogy', 'old_concept', 'summary'], input_types={}, partial_variables={}, template='你问的是{old_concept}。\\n打个比方：{analogy}\\n简单说，就是{summary}'), additional_kwargs={}), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['new_concept'], input_types={}, partial_variables={}, template='那{new_concept}又是什么呢？'), additional_kwargs={})])"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Combine all messages (system message → conversation history → current question)\n",
    "chat_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        system_prompt,\n",
    "        human_history_prompt,\n",
    "        ai_history_prompt,\n",
    "        human_current_prompt\n",
    "    ]\n",
    ")\n",
    "chat_prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "2d7206a7",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "【system】你是科普作家，擅长用生活化例子解释专业概念。\n",
      "当用户提问时，先重复问题核心，再用1个比喻说明，最后总结关键点。\n",
      "\n",
      "【human】什么是量子纠缠？\n",
      "\n",
      "【ai】你问的是量子纠缠。\n",
      "打个比方：就像一对永远同步的魔术硬币，无论离多远，一个翻面另一个必同时翻面\n",
      "简单说，就是两个粒子无论相距多远，状态都会瞬间相互影响\n",
      "\n",
      "【human】那量子隧穿又是什么呢？\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Fill in the variables to produce the complete messages\n",
    "formatted_messages = chat_prompt.format_messages(\n",
    "    role=\"科普作家\",\n",
    "    old_concept=\"量子纠缠\",\n",
    "    analogy=\"就像一对永远同步的魔术硬币，无论离多远，一个翻面另一个必同时翻面\",\n",
    "    summary=\"两个粒子无论相距多远，状态都会瞬间相互影响\",\n",
    "    new_concept=\"量子隧穿\"\n",
    ")\n",
    "\n",
    "# Print the result\n",
    "for msg in formatted_messages:\n",
    "    print(f\"【{msg.type}】{msg.content}\\n\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "237f6b07",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Note: every template above already carries a fixed role.\n",
    "# SystemMessagePromptTemplate ---> role=\"system\"\n",
    "# HumanMessagePromptTemplate ---> role=\"human\"\n",
    "# AIMessagePromptTemplate ---> role=\"ai\""
   ]
  },
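  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f9e8c21",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A quick sanity check of the role mapping above (a minimal sketch; the\n",
    "# English template strings here are illustrative, not from the lesson):\n",
    "# calling format on each template type returns a message object whose\n",
    "# `type` attribute is exactly the fixed role.\n",
    "from langchain_core.prompts import (\n",
    "    SystemMessagePromptTemplate,\n",
    "    HumanMessagePromptTemplate,\n",
    "    AIMessagePromptTemplate,\n",
    ")\n",
    "\n",
    "sys_msg = SystemMessagePromptTemplate.from_template(\"You are a {role}.\").format(role=\"tutor\")\n",
    "human_msg = HumanMessagePromptTemplate.from_template(\"Hi, I am {name}.\").format(name=\"Bob\")\n",
    "ai_msg = AIMessagePromptTemplate.from_template(\"Hello, {name}!\").format(name=\"Bob\")\n",
    "\n",
    "print(sys_msg.type, human_msg.type, ai_msg.type)  # system human ai"
   ]
  },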
  {
   "cell_type": "markdown",
   "id": "57ddba5d",
   "metadata": {},
   "source": [
    "# ChatMessagePromptTemplate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1b5be083",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "类型：<class 'langchain_core.messages.chat.ChatMessage'>\n",
      "内容：content='本次活动的主题是青年创新论坛，请大家围绕如何利用AI技术解决社区实际问题展开讨论，每人发言不超过3分钟' additional_kwargs={} response_metadata={} role='host'\n"
     ]
    }
   ],
   "source": [
    "from langchain_core.prompts import ChatMessagePromptTemplate\n",
    "\n",
    "# Template content for a custom scenario\n",
    "prompt = \"本次活动的主题是{theme}，请大家围绕{focus}展开讨论，每人发言不超过3分钟\"\n",
    "\n",
    "# Create a chat message template with a custom role (the event host)\n",
    "chat_message_prompt = ChatMessagePromptTemplate.from_template(\n",
    "    role=\"host\",  # custom role: the host\n",
    "    template=prompt\n",
    ")\n",
    "\n",
    "# Format the message (fill in the event theme and discussion focus)\n",
    "formatted_msg = chat_message_prompt.format(\n",
    "    theme=\"青年创新论坛\",\n",
    "    focus=\"如何利用AI技术解决社区实际问题\"\n",
    ")\n",
    "\n",
    "# Print the result type and content\n",
    "print(f\"类型：{type(formatted_msg)}\")\n",
    "print(f\"内容：{formatted_msg}\")\n",
    "\n",
    "\n",
    "# chat_prompt = ChatPromptTemplate.from_messages(\n",
    "#     [\n",
    "#         system_prompt,\n",
    "#         human_history_prompt,\n",
    "#         ai_history_prompt,\n",
    "#         human_current_prompt\n",
    "#     ]\n",
    "# )\n",
    "# chat_prompt\n",
    "# Note how ChatPromptTemplate above merely assembles system_prompt and friends; it never defines a role itself -- the roles come from the sub-templates."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bcf702f4",
   "metadata": {},
   "source": [
    "# ChatMessagePromptTemplate vs. ChatPromptTemplate"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fdd2996e",
   "metadata": {},
   "source": [
    "- **ChatMessagePromptTemplate**: defines a single message with an explicit role; the role must be specified for that one message; used to build a single turn of the conversation.\n",
    "- **ChatPromptTemplate**: combines multiple messages into a complete chat prompt; it does not assign roles itself but relies on the roles carried by the sub-templates passed in; used to build the full conversation context (a multi-turn sequence)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "7437ad45",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.prompts import (\n",
    "    ChatPromptTemplate,       # template that assembles multiple messages (the combiner)\n",
    "    SystemMessagePromptTemplate,  # template for a single system-role message (a building block)\n",
    "    HumanMessagePromptTemplate,   # template for a single human-role message (a building block)\n",
    ")\n",
    "from langchain_core.messages import SystemMessage, HumanMessage  # instantiated messages with fixed content -- messages, not templates\n",
    "\n",
    "# Example 1: message template classes (variables filled dynamically)\n",
    "# SystemMessagePromptTemplate: a system-role template; variables are filled via format\n",
    "system_prompt_template = SystemMessagePromptTemplate.from_template(\n",
    "    \"你是{domain}领域的专业顾问，回答需符合{style}风格。\"\n",
    ")\n",
    "# HumanMessagePromptTemplate: a human-role template; variables are filled via format\n",
    "human_prompt_template = HumanMessagePromptTemplate.from_template(\n",
    "    \"请推荐{num}本关于{topic}的经典书籍，并说明推荐理由。\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "69f257d7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Example 2: instantiated messages (fixed content, no variables)\n",
    "# SystemMessage: a system-role message with fixed, hard-coded content\n",
    "fixed_system_msg = SystemMessage(\n",
    "    content=\"你是一名严谨的科普工作者，回答需避免专业术语。\"\n",
    ")\n",
    "# HumanMessage: a human-role message with fixed, hard-coded content\n",
    "fixed_human_msg = HumanMessage(\n",
    "    content=\"什么是黑洞？用一句话概括。\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "49e1c2a2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Example 3: a nested ChatPromptTemplate (a complete sub-conversation used as one component)\n",
    "# Define the nested conversation template (it may contain variables)\n",
    "nested_chat_prompt = ChatPromptTemplate.from_messages([\n",
    "    (\"system\", \"你现在需要补充说明：以上问题的背景是{scene}场景\"),  # extra system-role instruction\n",
    "    (\"human\", \"请结合背景进一步细化回答。\")  # extra human-role prompt\n",
    "])\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "30d8a66c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "角色: system, 内容: 你是文学领域的专业顾问，回答需符合简洁明了风格。\n",
      "\n",
      "角色: human, 内容: 请推荐3本关于科幻小说的经典书籍，并说明推荐理由。\n",
      "\n",
      "角色: system, 内容: 你是一名严谨的科普工作者，回答需避免专业术语。\n",
      "\n",
      "角色: human, 内容: 什么是黑洞？用一句话概括。\n",
      "\n",
      "角色: system, 内容: 你现在需要补充说明：以上问题的背景是中学生课堂场景\n",
      "\n",
      "角色: human, 内容: 请结合背景进一步细化回答。\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Use ChatPromptTemplate to combine all the message components\n",
    "# (mixed kinds of \"message elements\" are supported)\n",
    "final_prompt = ChatPromptTemplate.from_messages([\n",
    "    system_prompt_template,  # kind 1: system message template with variables\n",
    "    human_prompt_template,   # kind 2: human message template with variables\n",
    "    fixed_system_msg,        # kind 3: system message with fixed content\n",
    "    fixed_human_msg,         # kind 4: human message with fixed content\n",
    "    nested_chat_prompt       # kind 5: nested conversation template\n",
    "])\n",
    "\n",
    "# Format the final prompt: each variable is filled into every template\n",
    "# that contains the matching placeholder (e.g. {domain}, {scene})\n",
    "formatted_messages = final_prompt.format_messages(\n",
    "    domain=\"文学\",        # for system_prompt_template\n",
    "    style=\"简洁明了\",     # for system_prompt_template\n",
    "    num=3,                # for human_prompt_template\n",
    "    topic=\"科幻小说\",     # for human_prompt_template\n",
    "    scene=\"中学生课堂\"    # for nested_chat_prompt\n",
    ")\n",
    "# Print the resulting message list\n",
    "for msg in formatted_messages:\n",
    "    print(f\"角色: {msg.type}, 内容: {msg.content}\\n\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b0081391",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "角色类型: system, 内容: 你是一名营养师，必须用中文回答所有问题，且回答不超过3句话。\n",
      "\n",
      "角色类型: human, 内容: 你好！我想了解一些关于健康的知识。\n",
      "\n",
      "角色类型: ai, 内容: 你好！我可以帮你解答基础的健康问题，请问具体想了解什么？\n",
      "\n",
      "角色类型: human, 内容: 我想知道如何改善睡眠质量？\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# A similar example, mixing fixed messages and dynamic templates\n",
    "# Import the message classes and template classes we need\n",
    "from langchain_core.messages import HumanMessage, AIMessage  # instantiated messages (fixed content)\n",
    "from langchain_core.prompts import (\n",
    "    HumanMessagePromptTemplate,  # dynamic human-role message template (with variables)\n",
    "    SystemMessagePromptTemplate  # dynamic system-role message template (with variables)\n",
    ")\n",
    "from langchain_core.prompts import ChatPromptTemplate  # top-level template that combines messages\n",
    "\n",
    "# 1. Dynamic system template (content filled in via variables)\n",
    "# Purpose: set the AI's identity and ground rules; {job} and {language} are the variables\n",
    "system_template = SystemMessagePromptTemplate.from_template(\n",
    "    \"你是一名{job}，必须用{language}回答所有问题，且回答不超过3句话。\"\n",
    ")\n",
    "\n",
    "# 2. Fixed human message (no variables, content hard-coded)\n",
    "# Purpose: simulate something the user said earlier in the conversation\n",
    "fixed_human_msg = HumanMessage(\n",
    "    content=\"你好！我想了解一些关于健康的知识。\"\n",
    ")\n",
    "\n",
    "# 3. Fixed AI message (no variables, content hard-coded)\n",
    "# Purpose: simulate the AI's earlier reply (part of the context)\n",
    "fixed_ai_msg = AIMessage(\n",
    "    content=\"你好！我可以帮你解答基础的健康问题，请问具体想了解什么？\"\n",
    ")\n",
    "\n",
    "# 4. Dynamic human template (content filled in via variables)\n",
    "# Purpose: receive the user's live question ({user_question} is the variable)\n",
    "human_template = HumanMessagePromptTemplate.from_template(\n",
    "    \"我想知道{user_question}？\"\n",
    ")\n",
    "\n",
    "# Combine everything with ChatPromptTemplate, in conversation order\n",
    "# Dynamic templates and fixed messages can be mixed freely\n",
    "chat_prompt = ChatPromptTemplate.from_messages([\n",
    "    system_template,    # dynamic system template (sets the AI's identity)\n",
    "    fixed_human_msg,    # fixed human message (history)\n",
    "    fixed_ai_msg,       # fixed AI message (history)\n",
    "    human_template      # dynamic human template (current question)\n",
    "])\n",
    "\n",
    "# Fill in all variables to produce the final message list\n",
    "# Each value matches its placeholder (e.g. {job} in system_template, {user_question} in human_template)\n",
    "final_messages = chat_prompt.format_messages(\n",
    "    job=\"营养师\",          # fills {job} in system_template\n",
    "    language=\"中文\",       # fills {language} in system_template\n",
    "    user_question=\"如何改善睡眠质量\"  # fills {user_question} in human_template\n",
    ")\n",
    "\n",
    "# Print the final message list (ready to send to a chat model)\n",
    "for msg in final_messages:\n",
    "    print(f\"角色类型: {msg.type}, 内容: {msg.content}\\n\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3b55642e",
   "metadata": {},
   "source": [
    "# Combining with an LLM"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8df1f475",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "base_url: https://dashscope.aliyuncs.com/compatible-mode/v1\n",
      "api_key: 35\n",
      "model_name: qwen-plus\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "from dotenv import load_dotenv\n",
    "load_dotenv()\n",
    "\n",
    "base_url = os.getenv(\"DASHSCOPE_BASE_URL\")\n",
    "api_key = os.getenv(\"DASHSCOPE_API_KEY\")\n",
    "model_name = os.getenv(\"DASHSCOPE_MODEL_NAME\")\n",
    "print(\"base_url:\", base_url)\n",
    "print(\"api_key:\", len(api_key))  # print only the key's length, never the key itself\n",
    "print(\"model_name:\", model_name)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "549cac21",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "content='保持规律作息，每晚固定时间睡觉和起床。睡前避免使用电子设备，可尝试温水泡脚或喝杯温牛奶助眠。饮食上少食多餐，避免咖啡因和油腻食物影响睡眠。' additional_kwargs={} response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-plus', 'model_provider': 'openai'} id='lc_run--63029e6a-281b-44f4-9572-1817ff929a26'\n"
     ]
    }
   ],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "# ChatOpenAI now takes model / api_key / base_url; the older\n",
    "# model_name / openai_api_key / openai_api_base spellings are deprecated aliases\n",
    "llm = ChatOpenAI(\n",
    "    model=model_name,\n",
    "    api_key=api_key,\n",
    "    base_url=base_url,\n",
    "    streaming=True,\n",
    ")\n",
    "response = llm.invoke(final_messages)\n",
    "print(response)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "langchain-course (3.10.12)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
