{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# 使用语言模型\n",
    "- [各种语言模型](https://www.langchain.com.cn/docs/tutorials/llm_chain/)\n",
    "- [deepseek语言模型](https://python.langchain.com/api_reference/deepseek/chat_models/langchain_deepseek.chat_models.ChatDeepSeek.html#langchain_deepseek.chat_models.ChatDeepSeek)"
   ],
   "id": "2d6907ad1e758eb8"
  },
  {
   "cell_type": "code",
   "id": "initial_id",
   "metadata": {
    "collapsed": true,
    "ExecuteTime": {
     "end_time": "2025-09-23T08:50:02.395400Z",
     "start_time": "2025-09-23T08:50:02.380075Z"
    }
   },
   "source": [
    "from langchain_core.messages import HumanMessage, SystemMessage\n",
    "from langchain_community.chat_models import ChatOpenAI\n",
    "\n",
    "# 使用 DeepSeek 的兼容 OpenAI 接口\n",
    "model = ChatOpenAI(\n",
    "    model=\"deepseek-chat\",\n",
    "    base_url=\"https://api.deepseek.com/v1\",  # DeepSeek 的 OpenAI 兼容接口\n",
    "    api_key=\"sk-a76a85b93093439ba3dc5b6dedfc51e5\",\n",
    "    temperature=0,\n",
    "    max_tokens=2048,\n",
    ")\n"
   ],
   "outputs": [],
   "execution_count": 14
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "让我们首先直接使用模型, `ChatModel` 是LangChain “运行接口”的实例，这意味着它们提供了一个标准接口供我们与之交互。要简单的调用模型，我们可以将消息列表传递给`.invoke` 方法\n",
    "\n"
   ],
   "id": "a48100f7ee4583a2"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-23T08:51:22.399054Z",
     "start_time": "2025-09-23T08:51:18.254020Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# <!--IMPORTS:[{\"imported\": \"HumanMessage\", \"source\": \"langchain_core.messages\", \"docs\": \"https://python.langchain.com/api_reference/core/messages/langchain_core.messages.human.HumanMessage.html\", \"title\": \"Build a Simple LLM Application with LCEL\"}, {\"imported\": \"SystemMessage\", \"source\": \"langchain_core.messages\", \"docs\": \"https://python.langchain.com/api_reference/core/messages/langchain_core.messages.system.SystemMessage.html\", \"title\": \"Build a Simple LLM Application with LCEL\"}]-->\n",
    "\n",
    "from langchain_core.messages import HumanMessage,SystemMessage\n",
    "\n",
    "messages = [\n",
    "    SystemMessage(content=\"把下面的内容从中文翻译成英文\"),\n",
    "    HumanMessage(content=\"要简单的调用模型，我们可以将消息列表传递给.invoke 方法\")\n",
    "]\n",
    "print(messages)\n",
    "\n",
    "model.invoke(messages)"
   ],
   "id": "95869feb09c3c133",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[SystemMessage(content='把下面的内容从中文翻译成英文', additional_kwargs={}, response_metadata={}), HumanMessage(content='要简单的调用模型，我们可以将消息列表传递给.invoke 方法', additional_kwargs={}, response_metadata={})]\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "AIMessage(content='To simply call the model, we can pass the message list to the `.invoke` method.', additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 20, 'prompt_tokens': 26, 'total_tokens': 46, 'completion_tokens_details': None, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}, 'prompt_cache_hit_tokens': 0, 'prompt_cache_miss_tokens': 26}, 'model_name': 'deepseek-chat', 'system_fingerprint': 'fp_f253fc19d1_prod0820_fp8_kvcache', 'finish_reason': 'stop', 'logprobs': None}, id='run--218bea93-4b0c-4708-8ad7-f71edfa89086-0')"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 17
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# 输出解析器\n",
    "请注意，模型的响应是一个 `AIMessage` 。这包含一个字符串响应以及关于响应的其他元数据。我们通常可能指向处理字符串响应。我们可以通过使用简单的输出解析器来解析这个响应。\n",
    "我们首先导入简单的解析器。"
   ],
   "id": "f06e718c2b5ec1db"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-23T08:53:20.065870Z",
     "start_time": "2025-09-23T08:53:20.045231Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# IMPORTS:[{\"imported\": \"StrOutputParser\", \"source\": \"langchain_core.output_parsers\", \"docs\": \"https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html\", \"title\": \"Build a Simple LLM Application with LCEL\"}]\n",
    "\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "\n",
    "parser = StrOutputParser()"
   ],
   "id": "81a06fc52468d95d",
   "outputs": [],
   "execution_count": 18
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "使用它的一种方法是单独使用它。例如，我们可以保存用户艳模型调用的结果，然后将其传递给解析器",
   "id": "ea65adfc46e3d49c"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-23T08:53:25.339187Z",
     "start_time": "2025-09-23T08:53:21.185148Z"
    }
   },
   "cell_type": "code",
   "source": [
    "result = model.invoke(messages)\n",
    "parser.invoke(result)"
   ],
   "id": "a448167054dd61e8",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'To simply call the model, we can pass the message list to the `.invoke` method.'"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 19
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "更常见的是，我们可以将模型与此输出解析器“链式”连接。这意味着在此链中，每次都会调用此输出解析器。此链采用语言模型输入类型（字符串或消息列表）并返回输出解析器的输出类型（字符串）。我们可以使用 `|` 运算符轻松创建链。`|` 运算符在 `LangChain` 中用于将两个元素组合在一起",
   "id": "639191acf4ec0725"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-15T07:52:59.618994Z",
     "start_time": "2025-09-15T07:52:55.342869Z"
    }
   },
   "cell_type": "code",
   "source": [
    "chain = model | parser\n",
    "chain.invoke(messages)"
   ],
   "id": "91f859a8db87cb4d",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'To simply call the model, we can pass a list of messages to the `.invoke` method.'"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 10
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# 提示词模板\n",
    "现在我们直接将消息列表传递给语言模型。这些消息列表来自哪里？通常，它是由用户输入和应用逻辑的组合构建而成的。这个应用逻辑通常会将原始用户输入转换为准备传递给语言模型的消息列表。常见的PromptTemplates 是LangChain中的一个概念，旨在帮助进行这种转换。它们接收原始用户输入并返回准备传递给语言模型的数据(提示)。\n",
    "让我们在这里创建一个PromptTemplate。它将接收两个用户变量：\n",
    "- `language`: 需要翻译成的语言\n",
    "- `text`: 要翻译的文本"
   ],
   "id": "4e30df5671b82a8"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-15T08:08:53.647286Z",
     "start_time": "2025-09-15T08:08:53.643720Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# <!--IMPORTS:[{\"imported\": \"ChatPromptTemplate\", \"source\": \"langchain_core.prompts\", \"docs\": \"https://python.langchain.com/api_reference/core/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html\", \"title\": \"Build a Simple LLM Application with LCEL\"}]-->\n",
    "from langchain_core.prompts import ChatPromptTemplate"
   ],
   "id": "5d33ec2035d21308",
   "outputs": [],
   "execution_count": 16
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "首先，让我们创建一个字符串，我们将格式化为系统消息：",
   "id": "541083dc299fe6df"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-15T08:08:50.609374Z",
     "start_time": "2025-09-15T08:08:50.606094Z"
    }
   },
   "cell_type": "code",
   "source": "system_template = \"将下面内容翻译成 {language}:\"",
   "id": "ed4010c6a2f16da4",
   "outputs": [],
   "execution_count": 15
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "接下来，我们可以创建 PromptTemplate。这将是 `system_template` 和 一个更简单的模板的组合。用于放置要翻译的文本",
   "id": "9e6e1fbb16fe8a54"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-15T08:08:45.966453Z",
     "start_time": "2025-09-15T08:08:45.943151Z"
    }
   },
   "cell_type": "code",
   "source": [
    "prompt_template = ChatPromptTemplate.from_messages(\n",
    "    [(\"system\", system_template), (\"user\", \"{text}\")]\n",
    ")"
   ],
   "id": "b368e4c887f294fa",
   "outputs": [],
   "execution_count": 13
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "此提示词模板的输入是一个字典。我们可以单独玩弄这个提示词模板，以查看它的功能",
   "id": "c479674d25f251b2"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-15T08:10:54.465537Z",
     "start_time": "2025-09-15T08:10:54.460695Z"
    }
   },
   "cell_type": "code",
   "source": [
    "result = prompt_template.invoke({\"language\":\"英文\", \"text\":\"接下来，我们可以创建 PromptTemplate。这将是 system_template 和 一个更简单的模板的组合。用于放置要翻译的文本\"})\n",
    "\n",
    "result"
   ],
   "id": "d2151e84fc29124f",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatPromptValue(messages=[SystemMessage(content='将下面内容翻译成 英文:', additional_kwargs={}, response_metadata={}), HumanMessage(content='接下来，我们可以创建 PromptTemplate。这将是 system_template 和 一个更简单的模板的组合。用于放置要翻译的文本', additional_kwargs={}, response_metadata={})])"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 18
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "我们可以看到它返回一个 `ChatPromptValue`，由两个消息组成。如果我们想直接访问这些消息，可以这样做：\n",
    "- language: 要翻译成的语言\n",
    "- text: 要翻译的文本"
   ],
   "id": "b9d746b3137e0c2c"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-15T08:10:24.096262Z",
     "start_time": "2025-09-15T08:10:24.092398Z"
    }
   },
   "cell_type": "code",
   "source": "result.to_messages()",
   "id": "8b186488ed0e013e",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content='将下面内容翻译成 英文:', additional_kwargs={}, response_metadata={}),\n",
       " HumanMessage(content='接下来，我们可以创建 PromptTemplate。这将是 system_template 和 一个更简单的模板的组合。用于放置要翻译的文本', additional_kwargs={}, response_metadata={})]"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 17
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# 使用LCEL连接组件\n",
    "我们现在可以使用管道 | 操作符将其与上面的模型和输出解析器结合起来"
   ],
   "id": "52dab0c0883d25af"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-15T08:22:59.366049Z",
     "start_time": "2025-09-15T08:22:54.458112Z"
    }
   },
   "cell_type": "code",
   "source": [
    "chain = prompt_template | model | parser\n",
    "\n",
    "chain.invoke({\"language\":\"英文\", \"text\":\"接下来，我们可以创建 PromptTemplate。这将是 system_template 和 一个更简单的模板的组合。用于放置要翻译的文本\"})"
   ],
   "id": "2cf1d051c90e4c4a",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Next, we can create a PromptTemplate. This will be a combination of the system_template and a simpler template, used to place the text to be translated.'"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 20
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "这是一个使用[LangChain表达式(LCEL)](https://www.langchain.com.cn/docs/concepts/#langchain-expression-language-lcel)连接 LangChain 模块的简单示例。这中年写法有几个好处，包括优化的流式处理和追踪支持。\n",
    "如果我们查看LangSmith追踪，我们可以看到所有三个组件出现在[LangSmith追踪中](https://smith.langchain.com/public/bc49bec0-6b13-4726-967f-dbd3448b786d/r)"
   ],
   "id": "afb1002274ab051a"
  },
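  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "One of the streaming benefits mentioned above can be seen directly: because the chain is a `Runnable`, it also exposes a `.stream` method that yields output chunks as they arrive instead of waiting for the full response. A minimal sketch, reusing the `chain` and input from the previous cell:",
   "id": "3f1a9c2e7b4d5a01"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# Stream the translation chunk by chunk; each chunk is already a string\n",
    "# because the chain ends with StrOutputParser.\n",
    "for chunk in chain.stream({\"language\": \"英文\", \"text\": \"要简单的调用模型，我们可以将消息列表传递给.invoke 方法\"}):\n",
    "    print(chunk, end=\"\", flush=True)"
   ],
   "id": "3f1a9c2e7b4d5a02",
   "outputs": [],
   "execution_count": null
  },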
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# 使用LangServe提供服务\n",
    "现在我们已经构建了一个应用程序，我们需要提供服务。这就是LangServe的用武之地。LangServe帮助开发者将LangChain链部署为Rest API。您不需要使用LangServe 来使用LangChain,但在本指南中，我们将展示如何使用LangServe部署您的应用\n",
    "虽然本指南的第一部分，是在Jupyter Notebook或脚本中运行，但我们现在将脱离这一点。我们将创建一个Python文件，然后从命令与之交互。\n",
    "安装方式：\n",
    "pip install \"langserve[all]\""
   ],
   "id": "a699872791e30ba3"
  }
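  ,
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "The serving step described above can be sketched as a standalone `serve.py`. This is a minimal sketch, not run in this notebook: it assumes `langserve`, `fastapi`, and `uvicorn` are installed and that the `DEEPSEEK_API_KEY` environment variable is set, and it rebuilds the same prompt/model/parser chain from earlier. Run it with `python serve.py`; the chain is then exposed under the `/chain` path.",
   "id": "3f1a9c2e7b4d5a03"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# serve.py -- sketch of deploying the chain as a REST API with LangServe\n",
    "import os\n",
    "\n",
    "from fastapi import FastAPI\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langserve import add_routes\n",
    "\n",
    "# Rebuild the same chain as in the notebook\n",
    "prompt_template = ChatPromptTemplate.from_messages(\n",
    "    [(\"system\", \"将下面内容翻译成 {language}:\"), (\"user\", \"{text}\")]\n",
    ")\n",
    "model = ChatOpenAI(\n",
    "    model=\"deepseek-chat\",\n",
    "    base_url=\"https://api.deepseek.com/v1\",\n",
    "    api_key=os.environ[\"DEEPSEEK_API_KEY\"],\n",
    ")\n",
    "chain = prompt_template | model | StrOutputParser()\n",
    "\n",
    "app = FastAPI(title=\"Translation Service\")\n",
    "add_routes(app, chain, path=\"/chain\")  # exposes /chain/invoke, /chain/stream, ...\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    import uvicorn\n",
    "\n",
    "    uvicorn.run(app, host=\"localhost\", port=8000)"
   ],
   "id": "3f1a9c2e7b4d5a04",
   "outputs": [],
   "execution_count": null
  }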
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
