{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "c9cd7503-e36c-4e47-9ef6-37975b2cf770",
   "metadata": {},
   "source": [
     "## Model Wrappers\n",
     "\n",
     "As of July 2023, LangChain supported more than 50 large language models. As these models evolve, LangChain's model wrapper components are continually updated to track API changes across the major model platforms.\n",
     "\n",
     "In 2023, OpenAI's GPT-3.5-Turbo model introduced a new API type: the Chat API. This API is better suited to chat scenarios and complex applications such as multi-turn conversation. Choosing the API that fits your application's needs, together with the matching LangChain model wrapper component, is therefore an important consideration when developing with large language models."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "101264b4-2f15-44ad-8b7f-e65af3fb53a8",
   "metadata": {
    "tags": []
   },
   "source": [
     "### Types of Model Wrappers"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0ab94cb7-eeb9-4cea-a016-6e4bb18a6ffa",
   "metadata": {},
   "source": [
     "There are two main types:\n",
     "\n",
     "<b>1. The general-purpose LLM model wrapper</b>\n",
     "\n",
     "<b>2. The Chat Model wrapper, designed specifically for Chat-type APIs</b>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bf838fa1-2538-4305-8995-07593a3028ac",
   "metadata": {},
   "source": [
     "For example, OpenAI's text-davinci-003 model is used through an LLM model wrapper,\n",
     "while the GPT-4 model is used through the ChatOpenAI chat model wrapper.\n",
     "\n",
     "The two wrappers differ in response format: an LLM model wrapper returns a string, whereas a chat model wrapper takes a list of messages as input and returns a message as output; the response it produces is an AIMessage object."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d6e0dc13-afa3-4150-a022-0748e6436fe9",
   "metadata": {},
   "source": [
     "An LLM model wrapper is a component dedicated to interacting with a large language model's text-completion API: it takes text as input and returns the completed string as output. For example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "676eadf8-f331-4942-8da2-4297902b6d24",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.llms import OpenAI\n",
     "# The OpenAI class is dedicated to OpenAI's Completion-type API\n",
    "openai = OpenAI(model_name=\"text-davinci-003\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dfe577c9-dc81-4f59-9177-ff9bf4f7b009",
   "metadata": {},
   "source": [
     "All objects obtained from <b>langchain.llms</b> are large language model wrappers and subclasses of BaseLLM."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ab5d3cab-da8d-4892-b478-1fd6013e4366",
   "metadata": {},
   "source": [
     "All objects obtained from <b>langchain.chat_models</b> are chat model wrappers and subclasses of BaseChatModel. For example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "847159c9-2cf5-4271-8664-fbd2e27c554a",
   "metadata": {},
   "outputs": [],
   "source": [
     "from langchain.chat_models import ChatOpenAI\n",
     "llm = ChatOpenAI(temperature=0, model_name=\"gpt-4\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "419a5141-a3ce-4cb2-bc8a-7e89958b47b6",
   "metadata": {
    "tags": []
   },
   "source": [
     "### The LLM Model Wrapper"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f99ee9a0-200b-4c42-8733-604474928051",
   "metadata": {
    "tags": []
   },
   "source": [
     "The example code below uses OpenAI's LLM model wrapper; the import and instantiation steps are similar for other models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f4db85cf-628f-4c50-9183-c8d2db2f1665",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain.llms import OpenAI\n",
     "\n",
     "# Pass the API key at instantiation (or set the OPENAI_API_KEY environment variable)\n",
     "llm = OpenAI(openai_api_key='YOUR_API_KEY')\n",
     "# Input text\n",
     "llm(\"Tell me a joke\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "21635058-d1be-4ca4-8c8a-0d807c9bc6fd",
   "metadata": {},
   "source": [
     "The result is as follows:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "77a9adbb-508c-4d60-80b3-0dc3fa40baab",
   "metadata": {},
   "source": [
    "'Why did the chicken cross the road?\\n\\n To get to the other side.'"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2dc86be4-5cd8-4736-9545-c4c629536b96",
   "metadata": {
    "tags": []
   },
   "source": [
     "### The Chat Model Wrapper"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8c3f2c6a-36c6-4f35-8392-105032226061",
   "metadata": {},
   "source": [
     "The following example shows how a chat model wrapper runs with a list of chat messages as input:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "97f305c7-ab20-49cd-aeb3-f3d852308fb3",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "import os\n",
     "os.environ['OPENAI_API_KEY'] = 'YOUR_API_KEY'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6a6079c7-65f4-4611-addf-e501fc4026bd",
   "metadata": {},
   "outputs": [],
   "source": [
     "# Import the three message schemas:\n",
     "# AIMessage == a message generated by the AI\n",
     "# HumanMessage == a message entered by the human user\n",
     "# SystemMessage == a system message\n",
     "from langchain.schema import (\n",
     "  AIMessage,\n",
     "  HumanMessage,\n",
     "  SystemMessage\n",
     ")\n",
     "from langchain.chat_models import ChatOpenAI"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9d8bfe6f-b4b6-4ca4-8de4-ce98b70ad731",
   "metadata": {},
   "source": [
     "The input to a chat model wrapper must be a list of messages, each conforming to the AIMessage, HumanMessage, or SystemMessage schema."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "10c57b4d-51f6-4a00-9f49-c96d65007817",
   "metadata": {},
   "outputs": [],
   "source": [
    "chat = ChatOpenAI(model_name=\"gpt-3.5-turbo\", temperature=0.3)\n",
     "# The messages list contains a System message and a Human message\n",
     "messages = [\n",
     "  SystemMessage(content=\"You are a master of naming and excel at naming startups\"),\n",
     "  HumanMessage(content=\"Name my new company; the name must contain AI\")\n",
     "]\n",
     "# ChatOpenAI serializes the input according to the three message schemas and sends it to the API\n",
     "response = chat(messages)\n",
     "\n",
     "# The response is an AIMessage object; read its content via the response.content attribute\n",
     "print(response.content, end='\\n')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2cdac996-a585-46ca-b73c-8945a76f37c9",
   "metadata": {},
   "source": [
     "Although the chat model wrapper is more complex to use, that complexity pays off: enforcing a message schema up front simplifies and unifies the data-handling pipeline and reduces the chance of errors, so developers can focus on business logic instead of ad-hoc data handling and error handling. Its input format is also well suited to tasks that need to include past dialogue so the model can generate responses with conversational context."
   ]
  }
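  ,
  {
   "cell_type": "markdown",
   "id": "b7d2e4f1-5a60-4c8e-9b21-3f7a8c1d2e90",
   "metadata": {},
   "source": [
    "To illustrate that last point, the sketch below shows in plain Python (no API calls) how a multi-turn conversation history accumulates: each AI reply is appended to the message list before the next human message, so every call to the chat model receives the full context. The `add_turn` helper and the dict-based messages are illustrative stand-ins for the SystemMessage / HumanMessage / AIMessage objects, not part of LangChain."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b7d2e4f1-5a60-4c8e-9b21-3f7a8c1d2e91",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: accumulating conversation history across turns.\n",
    "# In real code, each dict would be a SystemMessage/HumanMessage/AIMessage object.\n",
    "history = [\n",
    "    {\"role\": \"system\", \"content\": \"You are a master of naming startups\"},\n",
    "    {\"role\": \"human\", \"content\": \"Name my new company; the name must contain AI\"}\n",
    "]\n",
    "\n",
    "def add_turn(history, ai_reply, next_human):\n",
    "    # Record the model's reply, then the user's follow-up message\n",
    "    history.append({\"role\": \"ai\", \"content\": ai_reply})\n",
    "    history.append({\"role\": \"human\", \"content\": next_human})\n",
    "    return history\n",
    "\n",
    "add_turn(history, \"How about NovaAI?\", \"Give me three more options\")\n",
    "# The next call to the chat model would receive all four messages as context\n",
    "len(history)"
   ]
  }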
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
