{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4d741b04",
   "metadata": {},
   "outputs": [],
   "source": [
    "### Initialize a DeepSeek model with LangChain's init_chat_model\n",
    "from langchain.chat_models import init_chat_model\n",
    "\n",
    "DEEPSEEK_API_KEY = \"your-deepseek-api-key\"  # replace with your own key; do not commit real keys\n",
    "DEEPSEEK_URL = \"https://api.deepseek.com\"\n",
    "\n",
    "model = init_chat_model(\n",
    "    model=\"deepseek-chat\",\n",
    "    model_provider=\"deepseek\",\n",
    "    api_key=DEEPSEEK_API_KEY,\n",
    "    base_url=DEEPSEEK_URL\n",
    ")\n",
    "\n",
    "response = model.invoke(\"什么是大模型？\")\n",
    "response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "495830e5",
   "metadata": {},
   "outputs": [],
   "source": [
    "### Call the model via ChatOpenAI (DeepSeek exposes an OpenAI-compatible endpoint)\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "DEEPSEEK_API_KEY = \"your-deepseek-api-key\"  # replace with your own key; do not commit real keys\n",
    "DEEPSEEK_URL = \"https://api.deepseek.com\"\n",
    "\n",
    "model = ChatOpenAI(\n",
    "    model=\"deepseek-chat\",\n",
    "    api_key=DEEPSEEK_API_KEY,\n",
    "    base_url=DEEPSEEK_URL\n",
    ")\n",
    "\n",
    "response = model.invoke(\"什么是大模型？\")\n",
    "response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "initial_id",
   "metadata": {
    "collapsed": true,
    "jupyter": {
     "is_executing": true
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='当然！这是一个关于“大模型”的详细解释，从通俗易懂到深入技术层面，希望能帮助你全面理解。\\n\\n### 一、一句话概括\\n\\n**大模型（Large Model）**，通常指**参数规模巨大**（达到数十亿甚至万亿级别）的**人工智能模型**。它是在海量数据上训练而成的、能够处理各种复杂任务（如语言理解、生成、逻辑推理等）的“全能型”基础模型。\\n\\n最著名的例子就是 **ChatGPT 所依赖的 GPT 系列模型**、Google 的 **Gemini** 和 **PaLM**，以及开源的 **Llama** 等。\\n\\n---\\n\\n### 二、详细解释：核心特征与工作原理\\n\\n要理解大模型，可以从以下几个关键点入手：\\n\\n#### 1. “大”在哪里？\\n*   **参数（Parameters）量大：** 参数是模型从数据中学到的“内部知识”，可以理解为模型的“脑细胞”或“突触连接”的数量。大模型的参数量通常是**十亿（Billion）** 甚至**万亿（Trillion）** 级别。例如，GPT-3 有 1750 亿个参数。参数越多，模型能存储和记忆的知识和模式就越复杂。\\n*   **训练数据量大：** 大模型不是在几百本书上训练的，而是在**几乎整个互联网**的文本数据上进行训练的，包括书籍、文章、代码、论坛对话等，数据量可达TB甚至PB级别。这使它拥有了极其广博的知识面。\\n*   **算力需求大：** 训练这样的模型需要成千上万颗顶级GPU/TPU工作数周甚至数月，耗资巨大（数百万至数千万美元）。这使得大模型的研发主要集中在拥有强大算力资源的科技巨头手中。\\n\\n#### 2. 它是什么技术？\\n大模型的核心技术是**深度学习**，特别是基于**Transformer架构**的模型。\\n\\n*   **Transformer架构（2017年由Google提出）：** 这是大模型成功的**关键技术突破**。它有一个名为“自注意力机制（Self-Attention）”的组件，能让模型在处理一个词时，同时关注到句子中所有其他的词，从而更好地理解上下文关系。这非常适合处理像语言这样的序列数据。\\n*   **生成式AI（Generative AI）：** 当前的主流大模型（如GPT）都属于**生成式模型**。它们不是简单地进行分类或预测，而是学会了“生成”内容，即根据已有的上文，预测下一个最可能出现的词/字，如此循环往复，从而生成连贯的段落、文章、代码等。\\n\\n#### 3. 如何工作？（预训练 + 微调）\\n大模型通常采用一种称为“基础模型（Foundation Model）”的范式：\\n\\n1.  **预训练（Pre-training）：**\\n    *   **目标：** 让模型学习最基础、最通用的知识和语言规律。\\n    *   **方法：** 在海量无标注文本数据上，进行“自监督学习”。例如，让模型完成“完形填空”（掩码语言模型MLM）或“预测下一个词”（自回归模型）的任务。经过这个阶段，模型已经成了一个“饱读诗书”的博学之士，掌握了语法、事实知识、一定程度的逻辑推理能力。\\n\\n2.  
**微调（Fine-tuning）与对齐（Alignment）：**\\n    *   **目标：** 让这个“博学但不受控”的模型变得有用、安全、符合人类意图。\\n    *   **方法：**\\n        *   **指令微调（Instruction Tuning）：** 用人类撰写的高质量指令和回答样本（如“请写一首诗”、“请总结这篇文章”）来训练模型，教会它如何理解和遵循人类的指令。\\n        *   **人类反馈强化学习（RLHF）：** 这是ChatGPT成功的关键。让人类标注员对模型的多个回答进行排序（哪个更好），然后利用这些反馈来进一步训练模型，使其输出更符合人类偏好（更有帮助、更诚实、更无害）。\\n\\n---\\n\\n### 三、大模型能做什么？（应用场景）\\n\\n大模型是一种“通用技术”，其应用几乎无处不在：\\n\\n*   **对话与问答：** 如ChatGPT，可以进行流畅的对话、解答问题。\\n*   **内容创作：** 撰写文章、报告、邮件、诗歌、小说、广告文案等。\\n*   **代码生成与辅助：** 根据描述自动生成代码、解释代码、调试、转换编程语言（如GitHub Copilot）。\\n*   **信息摘要：** 快速提取长篇文章、报告或会议记录的核心要点。\\n*   **语言翻译：** 实现高质量的多语言互译。\\n*   **多模态能力：** 最新的多模态大模型（如GPT-4V）还能理解和生成图片、音频、视频。\\n\\n---\\n\\n### 四、当前面临的挑战与局限性\\n\\n1.  **“幻觉”（Hallucination）：** 模型可能会自信地生成错误或编造的信息，因为它本质上是“概率生成”而非“事实查询”。\\n2.  **偏见与毒性：** 模型从互联网数据中学习，也可能学会其中存在的偏见、歧视和有害观点。\\n3.  **知识滞后：** 模型的知识截止于其训练数据的时间点，无法实时获取最新信息（除非通过外部工具增强）。\\n4.  **计算成本高昂：** 训练和部署的成本极高，导致碳足迹也很大。\\n5.  **理解与推理的局限：** 它更擅长“统计关联”而非真正的“因果逻辑推理”，有时会在复杂推理问题上出错。\\n\\n### 总结\\n\\n**大模型是基于Transformer架构、在海量数据上训练而成的超大型人工智能模型。它通过“预训练+微调”的范式，获得了强大的语言理解和生成能力，成为了当前人工智能领域的核心驱动力，正在深刻改变我们与机器交互的方式和信息处理的生产力。**\\n\\n虽然能力强大，但它仍是一项发展中的技术，需要谨慎地使用和持续地改进。', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 1247, 'prompt_tokens': 8, 'total_tokens': 1255, 'completion_tokens_details': None, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}, 'prompt_cache_hit_tokens': 0, 'prompt_cache_miss_tokens': 8}, 'model_name': 'deepseek-chat', 'system_fingerprint': 'fp_08f168e49b_prod0820_fp8_kvcache', 'id': '247b5312-4d40-45a9-b125-ef16cdd6eddf', 'service_tier': None, 'finish_reason': 'stop', 'logprobs': None}, id='run--4f64c960-8b7e-40cb-8c6f-c849f6aba898-0', usage_metadata={'input_tokens': 8, 'output_tokens': 1247, 'total_tokens': 1255, 'input_token_details': {'cache_read': 0}, 'output_token_details': {}})"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "### Use the dedicated DeepSeek integration (langchain-deepseek)\n",
    "import os\n",
    "\n",
    "os.environ[\"DEEPSEEK_API_KEY\"] = \"your-deepseek-api-key\"  # replace with your own key; do not commit real keys\n",
    "\n",
    "from langchain_deepseek import ChatDeepSeek\n",
    "\n",
    "model = ChatDeepSeek(\n",
    "    model=\"deepseek-chat\",\n",
    "    temperature=0,\n",
    "    max_tokens=None,\n",
    "    timeout=None,\n",
    "    max_retries=2,\n",
    "    # other params...\n",
    ")\n",
    "\n",
    "# Invoke the model\n",
    "response = model.invoke(\"什么是大模型？\")\n",
    "response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "b53dc44b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='我不知道你的名字。如果你告诉我，我会记住并在以后的对话中使用。', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 14, 'total_tokens': 28, 'completion_tokens_details': None, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}, 'prompt_cache_hit_tokens': 0, 'prompt_cache_miss_tokens': 14}, 'model_name': 'deepseek-chat', 'system_fingerprint': 'fp_08f168e49b_prod0820_fp8_kvcache', 'id': 'fc04e6e2-e1cf-4cfe-bcc4-c5791a21184e', 'service_tier': None, 'finish_reason': 'stop', 'logprobs': None}, id='run--fbab3900-8dbe-43a1-90c0-67fea8caef4b-0', usage_metadata={'input_tokens': 14, 'output_tokens': 14, 'total_tokens': 28, 'input_token_details': {'cache_read': 0}, 'output_token_details': {}})"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.messages import SystemMessage, HumanMessage\n",
    "\n",
    "messages = [\n",
    "    SystemMessage(content=\"将英语翻译为中文\"),\n",
    "    HumanMessage(content=\"What's my name?\"),\n",
    "]\n",
    "\n",
    "response = model.invoke(messages)\n",
    "response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a0dd6572",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'我不知道你的名字。如果你告诉我，我会记住并在以后的对话中使用。'"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "### Parse the result\n",
    "response.content  # raw text of the AIMessage\n",
    "\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "\n",
    "parser = StrOutputParser()\n",
    "\n",
    "parser.invoke(response)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "23a83dcb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "西红柿\n",
      "（\n",
      "番茄\n",
      "）\n",
      "的\n",
      "常见\n",
      "颜色\n",
      "是\n",
      "**\n",
      "红色\n",
      "**\n",
      "，\n",
      "这是\n",
      "其\n",
      "成熟\n",
      "后\n",
      "最\n",
      "典型的\n",
      "颜色\n",
      "。\n",
      "不过\n",
      "，\n",
      "根据\n",
      "品种\n",
      "和\n",
      "成熟\n",
      "度的\n",
      "不同\n",
      "，\n",
      "西红柿\n",
      "还可能\n",
      "呈现\n",
      "其他\n",
      "颜色\n",
      "：\n",
      "\n",
      "\n",
      "1\n",
      ".\n",
      " **\n",
      "绿色\n",
      "**\n",
      "：\n",
      "未\n",
      "完全\n",
      "成熟\n",
      "时\n",
      "呈\n",
      "绿色\n",
      "（\n",
      "部分\n",
      "特殊\n",
      "品种\n",
      "如\n",
      "“\n",
      "绿\n",
      "斑\n",
      "马\n",
      "”\n",
      "成熟\n",
      "后\n",
      "也为\n",
      "绿色\n",
      "）。\n",
      "  \n",
      "\n",
      "2\n",
      ".\n",
      " **\n",
      "黄色\n",
      "/\n",
      "橙色\n",
      "**\n",
      "：\n",
      "一些\n",
      "品种\n",
      "如\n",
      "“\n",
      "黄\n",
      "梨\n",
      "番茄\n",
      "”\n",
      "或\n",
      "“\n",
      "金\n",
      "太阳\n",
      "”\n",
      "成熟\n",
      "后\n",
      "为\n",
      "黄色\n",
      "或\n",
      "橙色\n",
      "。\n",
      "  \n",
      "\n",
      "3\n",
      ".\n",
      " **\n",
      "紫色\n",
      "/\n",
      "黑色\n",
      "**\n",
      "：\n",
      "如\n",
      "“\n",
      "黑\n",
      "美人\n",
      "”“\n",
      "紫\n",
      "番茄\n",
      "”\n",
      "等\n",
      "因\n",
      "含\n",
      "花\n",
      "青\n",
      "素\n",
      "而\n",
      "呈现\n",
      "深\n",
      "紫色\n",
      "或\n",
      "接近\n",
      "黑色\n",
      "。\n",
      "  \n",
      "\n",
      "4\n",
      ".\n",
      " **\n",
      "粉色\n",
      "**\n",
      "：\n",
      "部分\n",
      "品种\n",
      "果\n",
      "皮\n",
      "较\n",
      "薄\n",
      "，\n",
      "成熟\n",
      "后\n",
      "呈\n",
      "粉\n",
      "红色\n",
      "。\n",
      "  \n",
      "\n",
      "5\n",
      ".\n",
      " **\n",
      "白色\n",
      "**\n",
      "：\n",
      "罕见\n",
      "品种\n",
      "如\n",
      "“\n",
      "白\n",
      "美人\n",
      "”\n",
      "成熟\n",
      "后\n",
      "为\n",
      "乳\n",
      "白色\n",
      "或\n",
      "浅\n",
      "黄色\n",
      "。\n",
      "  \n",
      "\n",
      "\n",
      "因此\n",
      "，\n",
      "虽然\n",
      "红色\n",
      "最常见\n",
      "，\n",
      "但\n",
      "西红柿\n",
      "实际上\n",
      "拥有\n",
      "多元\n",
      "的色彩\n",
      "谱\n",
      "系\n",
      "哦\n",
      "！\n",
      " 🌈\n",
      "\n"
     ]
    }
   ],
   "source": [
    "### Streaming output\n",
    "stream_response = model.stream(\"西红柿是什么颜色？\")\n",
    "for chunk in stream_response:\n",
    "    print(chunk.text(), end=\"\\n\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "b85f3793",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatPromptValue(messages=[SystemMessage(content='将英语翻译为中文', additional_kwargs={}, response_metadata={}), HumanMessage(content='{text}', additional_kwargs={}, response_metadata={})])"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "### Prompt template\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "# Use (role, template) tuples: a bare HumanMessage object is passed through\n",
    "# verbatim, so \"{text}\" would never be substituted (the earlier output\n",
    "# content='{text}' shows exactly that bug). Tuples are parsed as templates.\n",
    "prompt_template = ChatPromptTemplate.from_messages([\n",
    "    (\"system\", \"将英语翻译为中文\"),\n",
    "    (\"human\", \"{text}\"),\n",
    "])\n",
    "\n",
    "prompt = prompt_template.invoke({\"text\": \"What's my name?\"})\n",
    "\n",
    "prompt\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "langchain_learn",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
