{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "7d709f28",
   "metadata": {},
   "source": [
    "## In-Memory Conversation Memory"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "55ca316d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Human says: Hi there my friend\n",
      "\n",
      "\n",
      "\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
      "Prompt after formatting:\n",
      "\u001b[32;1m\u001b[1;3mYou are a chatbot having a conversation with a human.\n",
      "\n",
      "Human: Hi there my friend\n",
      "Chatbot:\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      " Hello, how are you doing today?\n",
      "Human says: Not too bad - how are you?\n",
      "\n",
      "\n",
      "\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
      "Prompt after formatting:\n",
      "\u001b[32;1m\u001b[1;3mYou are a chatbot having a conversation with a human.\n",
      "Human: Hi there my friend\n",
      "AI:  Hello, how are you doing today?\n",
      "Human: Not too bad - how are you?\n",
      "Chatbot:\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      " I am doing well, thank you for asking. Is there something on your mind that you would like to talk about?\n"
     ]
    }
   ],
   "source": [
    "from langchain.memory import ConversationBufferMemory\n",
    "from langchain.chains import LLMChain\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "from langchain_openai import OpenAI\n",
    "\n",
    "# Prompt template: {chat_history} is filled in by the memory object on every call\n",
    "template = \"\"\"You are a chatbot having a conversation with a human.\n",
    "{chat_history}\n",
    "Human: {human_input}\n",
    "Chatbot:\"\"\"\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    input_variables=[\"chat_history\", \"human_input\"], template=template\n",
    ")\n",
    "\n",
    "# Buffer memory accumulates the full transcript under the \"chat_history\" key\n",
    "memory = ConversationBufferMemory(memory_key=\"chat_history\")\n",
    "\n",
    "llm_chain = LLMChain(\n",
    "    llm=OpenAI(),\n",
    "    prompt=prompt,\n",
    "    verbose=True,\n",
    "    memory=memory,\n",
    ")\n",
    "\n",
    "human_input = \"Hi there my friend\"\n",
    "print(f\"Human says: {human_input}\")\n",
    "print(llm_chain.predict(human_input=human_input))\n",
    "\n",
    "human_input = \"Not too bad - how are you?\"\n",
    "print(f\"Human says: {human_input}\")\n",
    "print(llm_chain.predict(human_input=human_input))\n"
   ]
  },
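  {
   "cell_type": "markdown",
   "id": "3f1a9b20",
   "metadata": {},
   "source": [
    "Under the hood, `ConversationBufferMemory` does two simple things: before each call it splices the accumulated transcript into the `{chat_history}` slot, and after each call it appends the new `Human:`/`AI:` pair. A minimal pure-Python sketch of that mechanism (the `BufferMemory` class here is illustrative, not LangChain's actual implementation):\n",
    "\n",
    "```python\n",
    "class BufferMemory:\n",
    "    \"\"\"Toy stand-in for ConversationBufferMemory: keeps the full transcript.\"\"\"\n",
    "\n",
    "    def __init__(self):\n",
    "        self.turns = []  # list of (human, ai) pairs\n",
    "\n",
    "    @property\n",
    "    def buffer(self):\n",
    "        # Render the transcript exactly as it would fill {chat_history}\n",
    "        return \"\\n\".join(f\"Human: {h}\\nAI: {a}\" for h, a in self.turns)\n",
    "\n",
    "    def save_context(self, human, ai):\n",
    "        self.turns.append((human, ai))\n",
    "\n",
    "\n",
    "memory = BufferMemory()\n",
    "memory.save_context(\"Hi there my friend\", \"Hello, how are you doing today?\")\n",
    "print(memory.buffer)  # the second prompt now contains the first exchange verbatim\n",
    "```\n",
    "\n",
    "Because the buffer grows without bound, long conversations eventually overflow the model's context window."
   ]
  },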
  {
   "cell_type": "markdown",
   "id": "694cb2aa",
   "metadata": {},
   "source": [
    "## The ConversationBufferWindowMemory Class\n",
    "\n",
    "ConversationBufferWindowMemory keeps a sliding window over the conversation: it stores only the most recent k interactions and discards the rest.\n",
    "\n",
    "This bounds the size of the memory - and therefore of the prompt - which helps stay within the model's context limit.\n",
    "\n",
    "Keeping only the latest few exchanges is useful in resource-constrained environments, or whenever old interactions can safely be dropped.\n",
    "\n",
    "For example:\n",
    "\n",
    "```python\n",
    "import os\n",
    "from langchain.memory import ConversationBufferWindowMemory\n",
    "from langchain.chains import ConversationChain\n",
    "from langchain_openai import OpenAI\n",
    "\n",
    "# Initialize the LLM - Tongyi Qianwen via the OpenAI-compatible endpoint\n",
    "llm = OpenAI(\n",
    "    temperature=0.3,\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-turbo\"\n",
    ")\n",
    "\n",
    "# Create window memory - keep only the last 2 interactions\n",
    "memory = ConversationBufferWindowMemory(k=2)\n",
    "\n",
    "# Create the conversation chain\n",
    "conversation_with_memory = ConversationChain(\n",
    "    llm=llm,\n",
    "    memory=memory,\n",
    "    verbose=True\n",
    ")\n",
    "\n",
    "# Turn 1: the customer introduces himself\n",
    "print(\"Turn 1:\")\n",
    "response1 = conversation_with_memory.predict(\n",
    "    input=\"你好，我是张先生，我经常需要寄送电子产品到全国各地\"\n",
    ")\n",
    "print(f\"Agent: {response1}\\n\")\n",
    "\n",
    "# Turn 2: describe the business need\n",
    "print(\"Turn 2:\")\n",
    "response2 = conversation_with_memory.predict(\n",
    "    input=\"我主要做电商生意，每天大概有50-100个包裹需要发货，主要是手机配件和数码产品\"\n",
    ")\n",
    "print(f\"Agent: {response2}\\n\")\n",
    "\n",
    "# Turn 3: ask for a logistics plan\n",
    "print(\"Turn 3:\")\n",
    "response3 = conversation_with_memory.predict(\n",
    "    input=\"你能为我推荐一个适合的物流方案吗？我希望价格合理，时效稳定\"\n",
    ")\n",
    "print(f\"Agent: {response3}\\n\")\n",
    "\n",
    "# Turn 4: ask for more options\n",
    "print(\"Turn 4:\")\n",
    "response4 = conversation_with_memory.predict(\n",
    "    input=\"除了刚才的方案，你还能给出更多选项吗？我想对比一下\"\n",
    ")\n",
    "print(f\"Agent: {response4}\\n\")\n",
    "\n",
    "# Turn 5: probe the memory window (turn 1 should have been forgotten)\n",
    "print(\"Turn 5 (memory-window test):\")\n",
    "response5 = conversation_with_memory.predict(\n",
    "    input=\"你还记得我的名字吗？我是做什么生意的？\"\n",
    ")\n",
    "print(f\"Agent: {response5}\\n\")\n",
    "\n",
    "# Inspect what the memory currently holds\n",
    "print(\"=== Current memory contents ===\")\n",
    "print(memory.buffer)\n",
    "\n",
    "\n",
    "\n"
   ]
  },
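  {
   "cell_type": "markdown",
   "id": "8c2d4e77",
   "metadata": {},
   "source": [
    "The trimming rule behind the example above can be sketched without any LLM: keep only the last `k` (human, ai) pairs and evict the rest. A minimal illustration, assuming `k=2` as in the example (a sketch of the mechanism, not LangChain's implementation):\n",
    "\n",
    "```python\n",
    "from collections import deque\n",
    "\n",
    "k = 2\n",
    "window = deque(maxlen=k)  # holds at most the last k (human, ai) pairs\n",
    "\n",
    "exchanges = [\n",
    "    (\"Hello, I'm Mr. Zhang\", \"Nice to meet you\"),\n",
    "    (\"I ship 50-100 parcels a day\", \"Noted, thanks\"),\n",
    "    (\"Can you recommend a logistics plan?\", \"Here is one option...\"),\n",
    "]\n",
    "for human, ai in exchanges:\n",
    "    window.append((human, ai))  # appending beyond k evicts the oldest pair\n",
    "\n",
    "# Turn 1 has been evicted: only the last k=2 exchanges remain in the prompt\n",
    "print(list(window))\n",
    "```\n",
    "\n",
    "This is exactly why, in turn 5 above, the assistant can no longer recall the name given in turn 1."
   ]
  },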
  {
   "cell_type": "markdown",
   "id": "5395688f",
   "metadata": {},
   "source": [
    "## Redis\n",
    "\n",
    "[LangChain Redis persistent chat memory](https://python.langchain.com/docs/integrations/memory/redis_chat_message_history/)"
   ]
  },
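  {
   "cell_type": "markdown",
   "id": "b5e0c1a9",
   "metadata": {},
   "source": [
    "The idea behind Redis-backed memory is simply to serialize the message list under a per-session key, so the history survives process restarts and can be shared across workers. A minimal sketch of that round trip, with a plain dict standing in for the Redis client (the real integration is the `RedisChatMessageHistory` class documented at the link above):\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "store = {}  # stand-in for Redis: key -> JSON-serialized message list\n",
    "\n",
    "def append_message(session_id, role, content):\n",
    "    key = f\"chat:{session_id}\"\n",
    "    messages = json.loads(store.get(key, \"[]\"))\n",
    "    messages.append({\"role\": role, \"content\": content})\n",
    "    store[key] = json.dumps(messages)  # with real Redis this would be a SET\n",
    "\n",
    "def load_history(session_id):\n",
    "    return json.loads(store.get(f\"chat:{session_id}\", \"[]\"))\n",
    "\n",
    "append_message(\"user-42\", \"human\", \"Hi\")\n",
    "append_message(\"user-42\", \"ai\", \"Hello!\")\n",
    "print(load_history(\"user-42\"))\n",
    "```\n",
    "\n",
    "A TTL on the key gives session expiry for free, which is one reason Redis is a common choice for chat history."
   ]
  },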
  {
   "cell_type": "markdown",
   "id": "ca2a0803",
   "metadata": {},
   "source": [
    "## An Introduction to Mem0\n",
    "### What is Mem0?\n",
    "\n",
    "Mem0 is a memory-layer framework designed specifically for AI applications: it gives large language models (LLMs) persistent memory. In short, it lets an AI \"remember\" previous conversations and user preferences, enabling genuinely personalized interaction.\n",
    "\n",
    "### How it works:\n",
    "\n",
    "- Memory storage: user interactions, preferences, and context are stored in a vector database\n",
    "- Smart retrieval: semantic search quickly finds the relevant history\n",
    "- Context injection: retrieved memories are injected into the current conversation\n",
    "- Dynamic updates: memories are continuously updated and refined as new interactions arrive\n",
    "\n",
    "### Key advantages of Mem0\n",
    "\n",
    "1. Persistent memory\n",
    "   - Goes beyond the LLM's context-window limit\n",
    "   - Preserves user information and preferences across sessions\n",
    "   - Supports long-term memory accumulation and evolution\n",
    "\n",
    "2. Personalized experience\n",
    "   - Tailors the service to the user's history\n",
    "   - Learns user preferences and habits\n",
    "   - Delivers coherent multi-turn conversations\n",
    "\n",
    "3. Flexible architecture\n",
    "   - Supports multiple vector stores (Qdrant, Chroma, Pinecone, etc.)\n",
    "   - Compatible with mainstream LLM providers (OpenAI, Anthropic, Tongyi Qianwen, etc.)\n",
    "   - Modular design, easy to integrate and extend\n",
    "\n",
    "4. Smart memory management\n",
    "   - Automatically extracts and summarizes key information\n",
    "   - Deduplicates and optimizes memories\n",
    "   - Supports create/read/update/delete operations on memories"
   ]
  },
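  {
   "cell_type": "markdown",
   "id": "d4f6a2c8",
   "metadata": {},
   "source": [
    "The four steps above (store, retrieve, inject, update) form a loop that can be sketched in a few lines. Here naive word overlap stands in for vector search - Mem0 itself embeds memories into a vector store and also summarizes and deduplicates them - so treat this purely as an illustration of the data flow:\n",
    "\n",
    "```python\n",
    "memories = []  # each memory is just a string in this sketch\n",
    "\n",
    "def add_memory(text):\n",
    "    memories.append(text)  # step 1: store (Mem0 also embeds and deduplicates)\n",
    "\n",
    "def search(query, top_k=2):\n",
    "    # step 2: retrieve - rank by word overlap instead of cosine similarity\n",
    "    q = set(query.lower().split())\n",
    "    ranked = sorted(memories, key=lambda m: -len(q & set(m.lower().split())))\n",
    "    return ranked[:top_k]\n",
    "\n",
    "def build_prompt(query):\n",
    "    # step 3: inject the retrieved memories into the current conversation\n",
    "    context = \"; \".join(search(query)) or \"no relevant history\"\n",
    "    return f\"Relevant history: {context}\\nUser: {query}\"\n",
    "\n",
    "add_memory(\"user prefers insured express shipping\")\n",
    "add_memory(\"user often needs to ship a laptop\")\n",
    "print(build_prompt(\"how should I ship my laptop\"))\n",
    "# step 4 (update) would add the new exchange back via add_memory\n",
    "```\n",
    "\n",
    "The customer-service assistant below follows this same loop with real components: `mem0.search(...)` for retrieval, a `MessagesPlaceholder` for injection, and `mem0.add(...)` for the update."
   ]
  },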
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "e7b7c1e4",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Collecting mem0ai\n",
      "  Downloading mem0ai-0.1.117-py3-none-any.whl.metadata (9.6 kB)\n",
      "Collecting openai<1.100.0,>=1.90.0 (from mem0ai)\n",
      "  Downloading openai-1.99.9-py3-none-any.whl.metadata (29 kB)\n",
      "Requirement already satisfied: posthog>=3.5.0 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from mem0ai) (5.4.0)\n",
      "Requirement already satisfied: protobuf<6.0.0,>=5.29.0 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from mem0ai) (5.29.5)\n",
      "Requirement already satisfied: pydantic>=2.7.3 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from mem0ai) (2.11.7)\n",
      "Requirement already satisfied: pytz>=2024.1 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from mem0ai) (2025.2)\n",
      "Collecting qdrant-client>=1.9.1 (from mem0ai)\n",
      "  Downloading qdrant_client-1.15.1-py3-none-any.whl.metadata (11 kB)\n",
      "Requirement already satisfied: sqlalchemy>=2.0.31 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from mem0ai) (2.0.43)\n",
      "Requirement already satisfied: anyio<5,>=3.5.0 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from openai<1.100.0,>=1.90.0->mem0ai) (4.10.0)\n",
      "Requirement already satisfied: distro<2,>=1.7.0 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from openai<1.100.0,>=1.90.0->mem0ai) (1.9.0)\n",
      "Requirement already satisfied: httpx<1,>=0.23.0 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from openai<1.100.0,>=1.90.0->mem0ai) (0.28.1)\n",
      "Requirement already satisfied: jiter<1,>=0.4.0 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from openai<1.100.0,>=1.90.0->mem0ai) (0.10.0)\n",
      "Requirement already satisfied: sniffio in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from openai<1.100.0,>=1.90.0->mem0ai) (1.3.1)\n",
      "Requirement already satisfied: tqdm>4 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from openai<1.100.0,>=1.90.0->mem0ai) (4.67.1)\n",
      "Requirement already satisfied: typing-extensions<5,>=4.11 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from openai<1.100.0,>=1.90.0->mem0ai) (4.15.0)\n",
      "Requirement already satisfied: idna>=2.8 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from anyio<5,>=3.5.0->openai<1.100.0,>=1.90.0->mem0ai) (3.10)\n",
      "Requirement already satisfied: certifi in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from httpx<1,>=0.23.0->openai<1.100.0,>=1.90.0->mem0ai) (2025.8.3)\n",
      "Requirement already satisfied: httpcore==1.* in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from httpx<1,>=0.23.0->openai<1.100.0,>=1.90.0->mem0ai) (1.0.9)\n",
      "Requirement already satisfied: h11>=0.16 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from httpcore==1.*->httpx<1,>=0.23.0->openai<1.100.0,>=1.90.0->mem0ai) (0.16.0)\n",
      "Requirement already satisfied: annotated-types>=0.6.0 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from pydantic>=2.7.3->mem0ai) (0.7.0)\n",
      "Requirement already satisfied: pydantic-core==2.33.2 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from pydantic>=2.7.3->mem0ai) (2.33.2)\n",
      "Requirement already satisfied: typing-inspection>=0.4.0 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from pydantic>=2.7.3->mem0ai) (0.4.1)\n",
      "Requirement already satisfied: requests<3.0,>=2.7 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from posthog>=3.5.0->mem0ai) (2.32.5)\n",
      "Requirement already satisfied: six>=1.5 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from posthog>=3.5.0->mem0ai) (1.17.0)\n",
      "Requirement already satisfied: python-dateutil>=2.2 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from posthog>=3.5.0->mem0ai) (2.9.0.post0)\n",
      "Requirement already satisfied: backoff>=1.10.0 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from posthog>=3.5.0->mem0ai) (2.2.1)\n",
      "Requirement already satisfied: charset_normalizer<4,>=2 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from requests<3.0,>=2.7->posthog>=3.5.0->mem0ai) (3.4.3)\n",
      "Requirement already satisfied: urllib3<3,>=1.21.1 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from requests<3.0,>=2.7->posthog>=3.5.0->mem0ai) (2.5.0)\n",
      "Requirement already satisfied: grpcio>=1.41.0 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from qdrant-client>=1.9.1->mem0ai) (1.74.0)\n",
      "Requirement already satisfied: numpy>=1.26 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from qdrant-client>=1.9.1->mem0ai) (1.26.4)\n",
      "Collecting portalocker<4.0,>=2.7.0 (from qdrant-client>=1.9.1->mem0ai)\n",
      "  Using cached portalocker-3.2.0-py3-none-any.whl.metadata (8.7 kB)\n",
      "Requirement already satisfied: pywin32>=226 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from portalocker<4.0,>=2.7.0->qdrant-client>=1.9.1->mem0ai) (311)\n",
      "Collecting h2<5,>=3 (from httpx[http2]>=0.20.0->qdrant-client>=1.9.1->mem0ai)\n",
      "  Using cached h2-4.3.0-py3-none-any.whl.metadata (5.1 kB)\n",
      "Collecting hyperframe<7,>=6.1 (from h2<5,>=3->httpx[http2]>=0.20.0->qdrant-client>=1.9.1->mem0ai)\n",
      "  Using cached hyperframe-6.1.0-py3-none-any.whl.metadata (4.3 kB)\n",
      "Collecting hpack<5,>=4.1 (from h2<5,>=3->httpx[http2]>=0.20.0->qdrant-client>=1.9.1->mem0ai)\n",
      "  Using cached hpack-4.1.0-py3-none-any.whl.metadata (4.6 kB)\n",
      "Requirement already satisfied: greenlet>=1 in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from sqlalchemy>=2.0.31->mem0ai) (3.2.4)\n",
      "Requirement already satisfied: colorama in d:\\software\\miniconda3\\envs\\mlops\\lib\\site-packages (from tqdm>4->openai<1.100.0,>=1.90.0->mem0ai) (0.4.6)\n",
      "Downloading mem0ai-0.1.117-py3-none-any.whl (213 kB)\n",
      "Downloading openai-1.99.9-py3-none-any.whl (786 kB)\n",
      "   ---------------------------------------- 0.0/786.8 kB ? eta -:--:--\n",
      "   --------------------------------------- 786.8/786.8 kB 11.4 MB/s eta 0:00:00\n",
      "Downloading qdrant_client-1.15.1-py3-none-any.whl (337 kB)\n",
      "Using cached portalocker-3.2.0-py3-none-any.whl (22 kB)\n",
      "Using cached h2-4.3.0-py3-none-any.whl (61 kB)\n",
      "Using cached hpack-4.1.0-py3-none-any.whl (34 kB)\n",
      "Using cached hyperframe-6.1.0-py3-none-any.whl (13 kB)\n",
      "Installing collected packages: portalocker, hyperframe, hpack, h2, openai, qdrant-client, mem0ai\n",
      "\n",
      "  Attempting uninstall: openai\n",
      "\n",
      "    Found existing installation: openai 1.102.0\n",
      "\n",
      "   ---------------------- ----------------- 4/7 [openai]\n",
      "    Uninstalling openai-1.102.0:\n",
      "   ---------------------- ----------------- 4/7 [openai]\n",
      "      Successfully uninstalled openai-1.102.0\n",
      "   ---------------------------- ----------- 5/7 [qdrant-client]\n",
      "   ---------------------------------- ----- 6/7 [mem0ai]\n",
      "   ---------------------------------------- 7/7 [mem0ai]\n",
      "\n",
      "Successfully installed h2-4.3.0 hpack-4.1.0 hyperframe-6.1.0 mem0ai-0.1.117 openai-1.99.9 portalocker-3.2.0 qdrant-client-1.15.1\n"
     ]
    }
   ],
   "source": [
    "!pip install mem0ai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1e89755a",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-09-10 00:55:31,156 - INFO - API密钥已加载\n",
      "2025-09-10 00:55:31,877 - INFO - OpenAI客户端初始化成功\n",
      "2025-09-10 00:55:32,590 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:55:32,594 - INFO - API连接测试成功\n",
      "2025-09-10 00:55:33,948 - INFO - LangChain LLM初始化成功\n",
      "2025-09-10 00:55:35,345 - INFO - Inserting 1 vectors into collection mem0migrations\n",
      "2025-09-10 00:55:35,373 - INFO - Mem0记忆系统初始化成功\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "============================================================\n",
      "欢迎使用智能快递客服助手！\n",
      "============================================================\n",
      "我可以帮您处理各种快递相关问题：\n",
      "快递查询、寄件服务、问题处理、服务介绍等\n",
      "输入 'quit'、'exit' 或 '再见' 结束对话\n",
      "============================================================\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-09-10 00:55:36,174 - INFO - Backing off send_request(...) for 0.6s (requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionAbortedError(10053, '你的主机中的软件中止了一个已建立的连接。', None, 10053, None)))\n",
      "2025-09-10 00:55:37,113 - INFO - Backing off send_request(...) for 1.7s (requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionAbortedError(10053, '你的主机中的软件中止了一个已建立的连接。', None, 10053, None)))\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "您好！您的客户ID是: 小张\n",
      "------------------------------------------------------------\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-09-10 00:55:58,320 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:55:59,628 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:56:00,388 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:56:00,546 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:56:00,552 - INFO - Total existing memories: 0\n",
      "2025-09-10 00:56:01,687 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:56:01,687 - INFO - {'id': '0', 'text': '咨询寄件服务', 'event': 'ADD'}\n",
      "2025-09-10 00:56:01,711 - INFO - Inserting 1 vectors into collection mem0\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "快递客服: 您好！欢迎咨询我们的寄件服务。请问您需要寄送什么类型的物品？是文件、小件包裹还是大件商品呢？另外，请告知您的发件地址和收件地址，我可以为您查询相应的运费、时效以及包装建议。如果您有特殊需求，比如加急服务或保价服务，也可以告诉我，我会为您提供详细的解决方案。\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-09-10 00:56:09,694 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:56:14,134 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:56:14,803 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:56:14,805 - INFO - Total existing memories: 0\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "快递客服: 您好！您提到要寄送笔记本电脑，这是一个比较贵重且易损的物品，建议在寄件时特别注意以下几点：\n",
      "\n",
      "1. **包装建议**：\n",
      "   - 使用坚固的纸箱或专用电子设备包装箱。\n",
      "   - 在箱子内部用泡沫、气泡膜或软布将笔记本电脑包裹严密，防止运输过程中碰撞。\n",
      "   - 确保所有配件（如充电器、鼠标等）也妥善包装，避免丢失或损坏。\n",
      "\n",
      "2. **选择合适的快递服务**：\n",
      "   - 如果您希望快速送达，可以选择加急服务。\n",
      "   - 如果对价格敏感，可以选择标准快递服务。\n",
      "   - 对于高价值物品，建议购买“保价服务”，以保障运输安全。\n",
      "\n",
      "3. **填写运单信息**：\n",
      "   - 请准确填写收件人和寄件人的姓名、地址、电话等信息。\n",
      "   - 建议在备注中注明“易碎品”或“电子产品”，以便快递员更加小心处理。\n",
      "\n",
      "4. **运费与时效**：\n",
      "   - 不同快递公司的收费标准和时效可能有所不同，您可以根据需求选择合适的快递公司。\n",
      "   - 一般情况下，普通快递需要1-3天送达，加急服务可在当天或次日送达。\n",
      "\n",
      "如果您有具体的寄件地址或快递公司偏好，我可以为您提供更详细的建议。需要我帮您查询某个快递公司的价格或时效吗？\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-09-10 00:56:45,600 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:56:46,392 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:56:47,172 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:56:47,178 - INFO - Total existing memories: 0\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "快递客服: 好的，如果您有其他问题或需要帮助，随时告诉我，我会尽力为您服务！祝您生活愉快！\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-09-10 00:57:00,439 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:57:01,719 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:57:02,373 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:57:02,373 - INFO - Total existing memories: 0\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "快递客服: 您好，为了查询您之前寄出的物品信息，我需要您提供一些相关的详情，例如运单号、寄件日期或收件人信息等。这样我才能更准确地为您查找历史记录。如果您不确定具体信息，也可以告诉我更多细节，我会尽力帮助您找到答案。\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-09-10 00:57:13,294 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:57:14,623 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:57:15,519 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:57:15,666 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:57:15,854 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:57:15,858 - INFO - Total existing memories: 1\n",
      "2025-09-10 00:57:17,618 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:57:17,622 - INFO - {'id': '0', 'text': '咨询寄件服务', 'event': 'NONE'}\n",
      "2025-09-10 00:57:17,622 - INFO - NOOP for Memory.\n",
      "2025-09-10 00:57:17,622 - INFO - {'id': '1', 'text': '我是小张', 'event': 'ADD'}\n",
      "2025-09-10 00:57:17,622 - INFO - Inserting 1 vectors into collection mem0\n",
      "2025-09-10 00:57:17,664 - INFO - {'id': '2', 'text': '询问寄出的物品信息', 'event': 'ADD'}\n",
      "2025-09-10 00:57:17,665 - INFO - Inserting 1 vectors into collection mem0\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "快递客服: 您好，小张！为了查询您寄出的物品信息，我需要您提供一些额外的信息，例如快递单号或者寄件时的详细信息。这样我才能为您准确地查询到您寄出的物品内容和物流状态。请您方便时提供这些信息，我会尽快为您处理。\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-09-10 00:57:29,734 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:57:32,003 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:57:32,958 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:57:32,962 - INFO - Total existing memories: 0\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "快递客服: 您好，小张！感谢您联系我。您提供的运单号是SF123456，请稍等，我为您查询一下这个包裹的详细信息。\n",
      "\n",
      "根据系统显示，您的包裹目前状态正常，已到达【上海分拨中心】，预计将在24小时内发出，前往目的地【北京朝阳区】。预计送达时间是明天（4月5日）下午之前。\n",
      "\n",
      "如果您寄出的物品是易碎品或贵重物品，建议您在寄件时选择“保价服务”，以确保运输过程中的安全。另外，如果需要包装建议，也可以告诉我您寄送的物品类型，我可以为您提供一些实用的包装提示。\n",
      "\n",
      "请问还有其他需要帮助的吗？\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-09-10 00:58:09,364 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:58:10,967 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:58:11,780 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:58:11,974 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:58:11,975 - INFO - Total existing memories: 3\n",
      "2025-09-10 00:58:14,096 - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "2025-09-10 00:58:14,097 - INFO - {'id': '0', 'text': '询问寄出的物品信息', 'event': 'NONE'}\n",
      "2025-09-10 00:58:14,097 - INFO - NOOP for Memory.\n",
      "2025-09-10 00:58:14,097 - INFO - {'id': '1', 'text': '咨询寄件服务', 'event': 'NONE'}\n",
      "2025-09-10 00:58:14,101 - INFO - NOOP for Memory.\n",
      "2025-09-10 00:58:14,102 - INFO - {'id': '2', 'text': '我是小张', 'event': 'NONE'}\n",
      "2025-09-10 00:58:14,103 - INFO - NOOP for Memory.\n",
      "2025-09-10 00:58:14,103 - INFO - {'id': '3', 'text': '用户询问邮寄的电脑的运单号', 'event': 'ADD'}\n",
      "2025-09-10 00:58:14,105 - INFO - Inserting 1 vectors into collection mem0\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "快递客服: 您好，小张！关于您邮寄的电脑的运单号，我需要您提供一些额外的信息以便为您查询。例如，您可以告诉我寄件的时间、寄件地点以及收件人信息等。如果您之前已经通过我们的平台下单，也可以提供当时的订单号或联系方式。\n",
      "\n",
      "如果这些信息您暂时不方便提供，我也可以给您一些建议，如何在快递公司的官网上或APP中自行查找运单号。您希望我如何帮助您呢？\n",
      "\n",
      "快递客服: 请输入您的问题，我很乐意为您提供帮助。\n",
      "快递客服: 请输入您的问题，我很乐意为您提供帮助。\n",
      "快递客服: 感谢您使用我们的服务！祝您生活愉快，期待下次为您服务！\n"
     ]
    }
   ],
   "source": [
    "# Express-delivery customer-service assistant - using Tongyi Qianwen and Mem0\n",
    "import os\n",
    "import logging\n",
    "from typing import List, Dict, Optional\n",
    "from openai import OpenAI\n",
    "from mem0 import Memory\n",
    "from mem0.configs.base import MemoryConfig\n",
    "from mem0.embeddings.configs import EmbedderConfig\n",
    "from mem0.llms.configs import LlmConfig\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_core.messages import SystemMessage, HumanMessage\n",
    "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
    "\n",
    "# Configure logging\n",
    "logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')\n",
    "logger = logging.getLogger(__name__)\n",
    "\n",
    "class ExpressCustomerService:\n",
    "    def __init__(self):\n",
    "        \"\"\"Initialize the express customer-service assistant\"\"\"\n",
    "        self.api_key = self._get_api_key()\n",
    "        self.base_url = \"https://dashscope.aliyuncs.com/compatible-mode/v1\"\n",
    "        \n",
    "        # Component handles (set up below)\n",
    "        self.openai_client = None\n",
    "        self.llm = None\n",
    "        self.mem0 = None\n",
    "        self.prompt = None\n",
    "        \n",
    "        # Initialize all components\n",
    "        self._initialize_components()\n",
    "    \n",
    "    def _get_api_key(self) -> str:\n",
    "        \"\"\"Fetch and validate the API key\"\"\"\n",
    "        api_key = os.getenv(\"DASHSCOPE_API_KEY\")\n",
    "        if not api_key:\n",
    "            raise ValueError(\n",
    "                \"未找到DASHSCOPE_API_KEY环境变量。\\n\"\n",
    "                \"请设置环境变量：export DASHSCOPE_API_KEY='your_api_key_here'\"\n",
    "            )\n",
    "        logger.info(\"API密钥已加载\")\n",
    "        return api_key\n",
    "    \n",
    "    def _test_api_connection(self) -> bool:\n",
    "        \"\"\"Test the API connection\"\"\"\n",
    "        try:\n",
    "            response = self.openai_client.chat.completions.create(\n",
    "                model=\"qwen-turbo\",\n",
    "                messages=[{\"role\": \"user\", \"content\": \"测试连接\"}],\n",
    "                max_tokens=10\n",
    "            )\n",
    "            logger.info(\"API连接测试成功\")\n",
    "            return True\n",
    "        except Exception as e:\n",
    "            logger.error(f\"API连接测试失败: {str(e)}\")\n",
    "            return False\n",
    "    \n",
    "    def _initialize_components(self):\n",
    "        \"\"\"Initialize all components\"\"\"\n",
    "        try:\n",
    "            # 1. Initialize the OpenAI client\n",
    "            self.openai_client = OpenAI(\n",
    "                api_key=self.api_key,\n",
    "                base_url=self.base_url,\n",
    "            )\n",
    "            logger.info(\"OpenAI客户端初始化成功\")\n",
    "            \n",
    "            # 2. Test the API connection\n",
    "            if not self._test_api_connection():\n",
    "                raise Exception(\"API连接失败，请检查API密钥和网络连接\")\n",
    "            \n",
    "            # 3. Initialize the LangChain LLM\n",
    "            self.llm = ChatOpenAI(\n",
    "                temperature=0.3,\n",
    "                openai_api_key=self.api_key,\n",
    "                openai_api_base=self.base_url,\n",
    "                model=\"qwen-turbo\"\n",
    "            )\n",
    "            logger.info(\"LangChain LLM初始化成功\")\n",
    "            \n",
    "            # 4. Build the Mem0 config (using the more compatible settings)\n",
    "            config = MemoryConfig(\n",
    "                llm=LlmConfig(\n",
    "                    provider=\"openai\",\n",
    "                    config={\n",
    "                        \"model\": \"qwen-turbo\",\n",
    "                        \"api_key\": self.api_key,\n",
    "                        \"openai_base_url\": self.base_url\n",
    "                    }\n",
    "                ),\n",
    "                embedder=EmbedderConfig(\n",
    "                    provider=\"openai\",\n",
    "                    config={\n",
    "                        \"model\": \"text-embedding-v1\",  # corrected to a proper embedding model\n",
    "                        \"api_key\": self.api_key,\n",
    "                        \"openai_base_url\": self.base_url\n",
    "                    }\n",
    "                )\n",
    "            )\n",
    "            \n",
    "            # 5. Initialize Mem0\n",
    "            self.mem0 = Memory(config=config)\n",
    "            logger.info(\"Mem0记忆系统初始化成功\")\n",
    "            \n",
    "            # 6. Initialize the prompt template\n",
    "            self._initialize_prompt()\n",
    "            \n",
    "        except Exception as e:\n",
    "            logger.error(f\"组件初始化失败: {str(e)}\")\n",
    "            raise\n",
    "    \n",
    "    def _initialize_prompt(self):\n",
    "        \"\"\"Initialize the prompt template\"\"\"\n",
    "        self.prompt = ChatPromptTemplate.from_messages([\n",
    "            SystemMessage(content=\"\"\"您是一位专业的快递行业智能客服助手。请使用提供的上下文信息来个性化您的回复，记住用户的偏好和历史交互记录。\n",
    "\n",
    "                您的主要职责包括：\n",
    "                1. 快递查询服务：帮助用户查询包裹状态、物流轨迹、预计送达时间\n",
    "                2. 寄件服务：提供寄件指导、价格咨询、时效说明、包装建议\n",
    "                3. 问题解决：处理快递延误、丢失、损坏等问题，提供解决方案\n",
    "                4. 服务咨询：介绍各类快递服务、收费标准、服务范围\n",
    "                5. 投诉建议：接收用户反馈，记录投诉信息并提供处理方案\n",
    "\n",
    "                回复时请保持：\n",
    "                - 专业、礼貌、耐心的服务态度\n",
    "                - 准确、及时的信息提供\n",
    "                - 个性化的服务体验\n",
    "                - 如果没有具体信息，可以基于快递行业常识提供建议\n",
    "\n",
    "                请用中文回复，语气亲切专业。\"\"\"),\n",
    "            MessagesPlaceholder(variable_name=\"context\"),\n",
    "            (\"human\", \"{input}\")  # tuple form, so {input} is parsed as a template variable\n",
    "        ])\n",
    "    \n",
    "    def retrieve_context(self, query: str, user_id: str) -> List[Dict]:\n",
    "        \"\"\"Retrieve relevant context from Mem0\"\"\"\n",
    "        try:\n",
    "            memories = self.mem0.search(query, user_id=user_id)\n",
    "            \n",
    "            # Handle the search results defensively\n",
    "            if memories and \"results\" in memories and memories[\"results\"]:\n",
    "                serialized_memories = ' '.join([\n",
    "                    mem.get(\"memory\", \"\") for mem in memories[\"results\"]\n",
    "                ])\n",
    "            else:\n",
    "                serialized_memories = \"暂无相关历史记录\"\n",
    "            \n",
    "            context = [\n",
    "                {\n",
    "                    \"role\": \"system\", \n",
    "                    \"content\": f\"相关历史信息: {serialized_memories}\"\n",
    "                },\n",
    "                {\n",
    "                    \"role\": \"user\",\n",
    "                    \"content\": query\n",
    "                }\n",
    "            ]\n",
    "            return context\n",
    "            \n",
    "        except Exception as e:\n",
    "            logger.warning(f\"检索上下文时出错: {str(e)}\")\n",
    "            # 返回基础上下文\n",
    "            return [\n",
    "                {\n",
    "                    \"role\": \"system\", \n",
    "                    \"content\": \"相关历史信息: 暂无相关历史记录\"\n",
    "                },\n",
    "                {\n",
    "                    \"role\": \"user\",\n",
    "                    \"content\": query\n",
    "                }\n",
    "            ]\n",
    "    \n",
    "    def generate_response(self, user_input: str, context: List[Dict]) -> str:\n",
    "        \"\"\"使用语言模型生成回复\"\"\"\n",
    "        try:\n",
    "            chain = self.prompt | self.llm\n",
    "            response = chain.invoke({\n",
    "                \"context\": context,\n",
    "                \"input\": user_input\n",
    "            })\n",
    "            return response.content\n",
    "            \n",
    "        except Exception as e:\n",
    "            logger.error(f\"生成回复时出错: {str(e)}\")\n",
    "            return \"抱歉，我现在遇到了一些技术问题，请稍后再试。如有紧急情况，请联系人工客服。\"\n",
    "    \n",
    "    def save_interaction(self, user_id: str, user_input: str, assistant_response: str):\n",
    "        \"\"\"将交互记录保存到Mem0\"\"\"\n",
    "        try:\n",
    "            interaction = [\n",
    "                {\n",
    "                    \"role\": \"user\",\n",
    "                    \"content\": user_input\n",
    "                },\n",
    "                {\n",
    "                    \"role\": \"assistant\",\n",
    "                    \"content\": assistant_response\n",
    "                }\n",
    "            ]\n",
    "            self.mem0.add(interaction, user_id=user_id)\n",
    "            logger.debug(f\"交互记录已保存 - 用户ID: {user_id}\")\n",
    "            \n",
    "        except Exception as e:\n",
    "            logger.warning(f\"保存交互记录时出错: {str(e)}\")\n",
    "            # 不影响主要功能，只记录警告\n",
    "    \n",
    "    def chat_turn(self, user_input: str, user_id: str) -> str:\n",
    "        \"\"\"处理一轮对话\"\"\"\n",
    "        try:\n",
    "            # 检索上下文\n",
    "            context = self.retrieve_context(user_input, user_id)\n",
    "            \n",
    "            # 生成回复\n",
    "            response = self.generate_response(user_input, context)\n",
    "            \n",
    "            # 保存交互记录\n",
    "            self.save_interaction(user_id, user_input, response)\n",
    "            \n",
    "            return response\n",
    "            \n",
    "        except Exception as e:\n",
    "            logger.error(f\"处理对话时出错: {str(e)}\")\n",
    "            return \"抱歉，处理您的请求时出现了问题。请重新尝试或联系技术支持。\"\n",
    "    \n",
    "    def run_interactive_chat(self):\n",
    "        \"\"\"运行交互式聊天\"\"\"\n",
    "        print(\"=\" * 60)\n",
    "        print(\"欢迎使用智能快递客服助手！\")\n",
    "        print(\"=\" * 60)\n",
    "        print(\"我可以帮您处理各种快递相关问题：\")\n",
    "        print(\"快递查询、寄件服务、问题处理、服务介绍等\")\n",
    "        print(\"输入 'quit'、'exit' 或 '再见' 结束对话\")\n",
    "        print(\"=\" * 60)\n",
    "        \n",
    "        # 可以根据实际情况设置用户ID\n",
    "        ###### 区分用户 ， 与 session_id 结合可以区分会话\n",
    "        user_id = input(\"请输入您的客户ID（或直接回车使用默认ID）: \").strip()\n",
    "        if not user_id:\n",
    "            user_id = \"customer_001\"\n",
    "        \n",
    "        # session_id = f\"session_{datetime.now().strftime('%Y%m%d_%H%M%S')}_{user_id}\"\n",
    "        # 混合模式：用户+会话\n",
    "        # composite_id = f\"{user_id}_session_{session_count}\"\n",
    "\n",
    "        print(f\"您好！您的客户ID是: {user_id}\")\n",
    "        print(\"-\" * 60)\n",
    "        \n",
    "        while True:\n",
    "            try:\n",
    "                user_input = input(\"您: \").strip()\n",
    "                \n",
    "                if user_input.lower() in ['quit', 'exit', '再见', '退出', 'bye']:\n",
    "                    print(\"快递客服: 感谢您使用我们的服务！祝您生活愉快，期待下次为您服务！\")\n",
    "                    break\n",
    "                \n",
    "                if not user_input:\n",
    "                    print(\"快递客服: 请输入您的问题，我很乐意为您提供帮助。\")\n",
    "                    continue\n",
    "                \n",
    "                # 处理用户输入\n",
    "                response = self.chat_turn(user_input, user_id)\n",
    "                print(f\"快递客服: {response}\\n\")\n",
    "                \n",
    "            except KeyboardInterrupt:\n",
    "                print(\"\\n快递客服: 感谢您使用我们的服务！再见！\")\n",
    "                break\n",
    "            except Exception as e:\n",
    "                logger.error(f\"交互过程中出错: {str(e)}\")\n",
    "                print(\"快递客服: 系统出现异常，请稍后重试。\")\n",
    "\n",
    "def main():\n",
    "    \"\"\"主程序入口\"\"\"\n",
    "    try:\n",
    "        # 创建客服助手实例\n",
    "        service = ExpressCustomerService()\n",
    "        \n",
    "        # 运行交互式聊天\n",
    "        service.run_interactive_chat()\n",
    "        \n",
    "    except Exception as e:\n",
    "        print(f\"程序启动失败: {str(e)}\")\n",
    "        print(\"\\n请检查以下事项：\")\n",
    "        print(\"1. 确保已设置DASHSCOPE_API_KEY环境变量\")\n",
    "        print(\"2. 确保网络连接正常\")\n",
    "        print(\"3. 确保API密钥有效且有足够权限\")\n",
    "        print(\"4. 确保已安装所有必需的依赖包\")\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    main()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "MLOps",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
