{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Quick Start\n",
    "\n",
     "> Chat models are a variation on language models. While chat models use language models under the hood, the interface they use is a bit different. Rather than using a \"text in, text out\" API, they use an interface where \"chat messages\" are the inputs and outputs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "chat = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "source": [
    "## Messages\n",
    "\n",
     "The chat model interface is based around messages rather than raw text. The message types currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, FunctionMessage, and ChatMessage.\n",
     "\n",
     "> ChatMessage\n",
     "\n",
     "ChatMessage takes in an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.\n",
     "\n",
     "> HumanMessage\n",
     "\n",
     "This represents a message from the user. It generally consists only of content.\n",
     "\n",
     "> AIMessage\n",
     "\n",
     "This represents a message from the model. It may have additional_kwargs in it - for example tool_calls if using OpenAI tool calling.\n",
     "\n",
     "> SystemMessage\n",
     "\n",
     "This represents a system message, which tells the model how to behave. It generally consists only of content. Not every model supports this.\n",
     "\n",
     "> FunctionMessage\n",
     "\n",
     "This represents the result of a function call. In addition to role and content, this message has a name parameter which conveys the name of the function that was called to produce this result.\n",
     "\n",
     "> ToolMessage\n",
     "\n",
     "This represents the result of a tool call. It is distinct from a FunctionMessage in order to match OpenAI's function and tool message types. In addition to role and content, this message has a tool_call_id parameter which conveys the id of the tool call that was made to produce this result."
   ]
  },
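  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each of the message types above can be constructed directly from langchain_core. This is a minimal sketch; the role, function name, and tool-call id below are made up for illustration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.messages import (\n",
    "    AIMessage,\n",
    "    ChatMessage,\n",
    "    FunctionMessage,\n",
    "    HumanMessage,\n",
    "    SystemMessage,\n",
    "    ToolMessage,\n",
    ")\n",
    "\n",
    "# The three message types you will deal with most of the time\n",
    "system = SystemMessage(content=\"You're a helpful assistant\")\n",
    "human = HumanMessage(content=\"What is 2 + 2?\")\n",
    "ai = AIMessage(content=\"2 + 2 equals 4.\")\n",
    "\n",
    "# ChatMessage takes an arbitrary role (\"reviewer\" is invented here)\n",
    "custom = ChatMessage(role=\"reviewer\", content=\"Looks correct.\")\n",
    "\n",
    "# FunctionMessage carries the name of the function that produced the result\n",
    "fn_result = FunctionMessage(name=\"add\", content=\"4\")\n",
    "\n",
    "# ToolMessage carries the id of the tool call that produced the result\n",
    "tool_result = ToolMessage(tool_call_id=\"call_123\", content=\"4\")\n",
    "\n",
    "# Each message exposes its role via the .type attribute\n",
    "print(system.type, human.type, ai.type, custom.type, fn_result.type, tool_result.type)"
   ]
  },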
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## LCEL\n",
    "\n",
     "Chat models implement the Runnable interface, the basic building block of the LangChain Expression Language (LCEL). This means they support invoke, ainvoke, stream, astream, batch, abatch, and astream_log calls."
   ]
  },
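  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because chat models are Runnables, batch works the same way as invoke and stream. The sketch below uses FakeListChatModel from langchain_core, a canned-response model, as a stand-in for ChatOpenAI so the Runnable calls can be tried without an API key; the real model supports the identical interface."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.language_models import FakeListChatModel\n",
    "from langchain_core.messages import HumanMessage\n",
    "\n",
    "# A stand-in chat model that replays canned responses (no API key needed)\n",
    "fake_chat = FakeListChatModel(responses=[\"Regularization reduces overfitting.\"])\n",
    "\n",
    "# invoke: one list of messages in, one AIMessage out\n",
    "reply = fake_chat.invoke([HumanMessage(content=\"What is regularization for?\")])\n",
    "print(reply.content)\n",
    "\n",
    "# batch: a list of message lists in, one AIMessage per input out\n",
    "replies = fake_chat.batch(\n",
    "    [\n",
    "        [HumanMessage(content=\"Question one\")],\n",
    "        [HumanMessage(content=\"Question two\")],\n",
    "    ]\n",
    ")\n",
    "print(len(replies))"
   ]
  },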
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.messages import HumanMessage, SystemMessage\n",
    "\n",
    "messages = [\n",
    "    SystemMessage(content=\"You're a helpful assistant\"),\n",
    "    HumanMessage(content=\"What is the purpose of model regularization?\"),\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"The purpose of model regularization is to prevent overfitting in machine learning models. Overfitting occurs when a model learns the noise in the training data rather than the underlying pattern, leading to poor generalization on unseen data. Regularization techniques add a penalty term to the model's loss function, discouraging overly complex models and promoting simpler models that are more likely to generalize well. By incorporating regularization, models are less likely to overfit and perform better on new, unseen data.\", response_metadata={'token_usage': {'completion_tokens': 96, 'prompt_tokens': 24, 'total_tokens': 120}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-9acbcf30-debb-47f3-aa05-4c761eef794a-0')"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat.invoke(messages)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The purpose of model regularization is to prevent overfitting in machine learning models. Overfitting occurs when a model learns the training data too well to the point that it performs poorly on unseen data or test data. Regularization techniques add a penalty term to the model's loss function, which discourages the model from fitting the training data too closely and helps improve its generalization performance on unseen data. Regularization can help make the model more robust and reduce the chances of overfitting. Common regularization techniques include L1 regularization (Lasso), L2 regularization (Ridge), and dropout regularization in neural networks."
     ]
    }
   ],
   "source": [
    "for chunk in chat.stream(messages):\n",
    "    print(chunk.content, end=\"\", flush=True)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "langchain0_1",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
