{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "9c794bc7",
   "metadata": {},
   "source": [
    "# 构建检索问答链"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f3d0f2c3-3bd9-4de1-bbd2-e0e2b09161c8",
   "metadata": {},
   "source": [
    "我们已经介绍了如何根据自己的本地知识文档，搭建一个向量知识库。 在接下来的内容里，我们将使用搭建好的向量数据库，对 query 查询问题进行召回，并将召回结果和 query 结合起来构建 prompt，输入到大模型中进行问答。   "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "95d8d968-8d98-47b9-8885-dc17d24dce76",
   "metadata": {},
   "source": [
    "## 1. 加载向量数据库\n",
    "\n",
    "首先，我们加载在前一章已经构建的向量数据库。注意，此处你需要使用和构建时相同的 Emedding。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e7affbed-f36e-4700-a1d9-c5d88917fff5",
   "metadata": {},
   "source": [
    "### Chroma"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8f66d376-8140-4224-bdfb-360b60aef43f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# import sys\n",
    "# sys.path.append(\"../C3 搭建知识库\") # 将父目录放入系统路径中\n",
    "\n",
    "from langchain.vectorstores.chroma import Chroma\n",
    "from dotenv import load_dotenv, find_dotenv\n",
    "import os\n",
    "# # 从环境变量中加载你的 API_KEY\n",
    "# _ = load_dotenv(find_dotenv())    # read local .env file\n",
    "# zhipuai_api_key = os.environ['ZHIPUAI_API_KEY']\n",
    "\n",
    "# 定义持久化目录\n",
    "persist_directory = '../data_base/vector_db/chroma-vmax'\n",
    "\n",
    "# # 创建嵌入模型\n",
    "# from langchain_community.embeddings import ZhipuAIEmbeddings\n",
    "\n",
    "# zhipu_embed = ZhipuAIEmbeddings(\n",
    "#     model=\"embedding-2\",\n",
    "#     api_key=zhipuai_api_key\n",
    "# )\n",
    "\n",
    "from langchain_community.embeddings import OllamaEmbeddings\n",
    "my_emb = OllamaEmbeddings(base_url='http://localhost:11434',model=\"bge-m3:latest\")\n",
    "\n",
    "try:\n",
    "    # 加载持久化的 Chroma 向量数据库\n",
    "    vectordb = Chroma(\n",
    "        persist_directory=persist_directory,  # 允许我们将persist_directory目录保存到磁盘上\n",
    "        collection_name=\"vmax-s\",\n",
    "        embedding_function=my_emb\n",
    "    )\n",
    "    print(\"向量数据库已成功加载。\")\n",
    "except Exception as e:\n",
    "    print(f\"加载向量数据库时发生错误: {e}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7b0b1838-38a3-4666-8bc8-c4592a5a39d8",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(f\"向量库中存储的数量：{vectordb._collection.count()}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c8a68dc0-4f4c-433b-a367-44b5cffe8516",
   "metadata": {},
   "outputs": [],
   "source": [
    "question = \"VMAX上网日志业务是什么？\"\n",
    "docs = vectordb.similarity_search(question,k=5)\n",
    "print(f\"检索到的内容数：{len(docs)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "09852d2a-aeda-4822-bf56-782a0397df3c",
   "metadata": {},
   "source": [
    "打印一下检索到的内容"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d2f492bf-197b-4f94-820c-2d88c17754d6",
   "metadata": {},
   "outputs": [],
   "source": [
    "for i, doc in enumerate(docs):\n",
    "    print(f\"检索到的第{i}个内容: \\n {doc.page_content}\", end=\"\\n-----------------------------------------------------\\n\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "adc86262-da78-4fda-a597-600d54057062",
   "metadata": {},
   "source": [
    "### Milvus"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "d8349782-7ca4-4ebb-acc1-6097ff5cee99",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\Users\\will\\AppData\\Local\\Temp\\ipykernel_3560\\3633838239.py:3: LangChainDeprecationWarning: The class `OllamaEmbeddings` was deprecated in LangChain 0.3.1 and will be removed in 1.0.0. An updated version of the class exists in the :class:`~langchain-ollama package and should be used instead. To use it run `pip install -U :class:`~langchain-ollama` and import as `from :class:`~langchain_ollama import OllamaEmbeddings``.\n",
      "  my_emb = OllamaEmbeddings(base_url='http://localhost:11434', model=\"bge-m3:latest\")\n",
      "C:\\Users\\will\\AppData\\Local\\Temp\\ipykernel_3560\\3633838239.py:6: LangChainDeprecationWarning: The class `Milvus` was deprecated in LangChain 0.2.0 and will be removed in 1.0. An updated version of the class exists in the :class:`~langchain-milvus package and should be used instead. To use it run `pip install -U :class:`~langchain-milvus` and import as `from :class:`~langchain_milvus import MilvusVectorStore``.\n",
      "  vectordb = Milvus(\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.vectorstores import Milvus\n",
    "from langchain_community.embeddings import OllamaEmbeddings\n",
    "my_emb = OllamaEmbeddings(base_url='http://localhost:11434', model=\"bge-m3:latest\")\n",
    "\n",
    "# Milvus 连接参数\n",
    "vectordb = Milvus(\n",
    "        embedding_function=my_emb,\n",
    "        collection_name=\"Vmaxs\",  # Milvus 集合名称\n",
    "        connection_args={\n",
    "            \"host\": \"192.168.0.188\",  # Milvus 服务器地址\n",
    "            \"port\": \"19530\",  # Milvus 默认端口\n",
    "        },\n",
    "    )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "14c39416-a9b2-4de1-9fb6-5c521a7fd2f4",
   "metadata": {},
   "outputs": [],
   "source": [
    "results = vectordb.similarity_search(query=\"VMAX\", k=2)\n",
    "# results"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4f7f8dbd-ecd5-449d-9753-aedc2b74289c",
   "metadata": {},
   "source": [
    "## 2. 创建一个 LLM"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "026bd74f-3dd0-496e-905b-950a444bb7a7",
   "metadata": {},
   "source": [
    "在这里，我们调用 OpenAI 的 API 创建一个 LLM，当然你也可以使用其他 LLM 的 API 进行创建"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "a89fb297-888a-4d35-b519-90eba639893c",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\Users\\will\\AppData\\Local\\Temp\\ipykernel_3560\\73905523.py:3: LangChainDeprecationWarning: The class `Ollama` was deprecated in LangChain 0.3.1 and will be removed in 1.0.0. An updated version of the class exists in the :class:`~langchain-ollama package and should be used instead. To use it run `pip install -U :class:`~langchain-ollama` and import as `from :class:`~langchain_ollama import OllamaLLM``.\n",
      "  my_llm = Ollama(base_url='http://localhost:11434', model='qwen2.5:0.5b', temperature=0.1)\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.llms import Ollama\n",
    "\n",
    "my_llm = Ollama(base_url='http://localhost:11434', model='qwen2.5:0.5b', temperature=0.1)\n",
    "\n",
    "# my_llm.invoke(\"你好\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8f361e27-cafb-48bf-bb41-50c9cb3a4f7e",
   "metadata": {},
   "source": [
    "## 3. 构建检索问答链"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "248b5e3c-1bc9-40e9-83c7-0594c2e7727d",
   "metadata": {},
   "source": [
    "prompts"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "91be03f4-264d-45cb-bebd-223c1c5747fd",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "\n",
    "template = \"\"\"你是VMAX运维助手，使用以下上下文来回答问题。如果你不知道答案，就说你不知道，不要试图编造答案。总是在回答的最后说“谢谢你的提问！”。\n",
    "{context}\n",
    "问题: {question}\n",
    "\"\"\"\n",
    "\n",
    "QA_CHAIN_PROMPT = PromptTemplate(input_variables=[\"context\",\"question\"],\n",
    "                                 template=template)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e2d06d7f-1dca-4d10-b5cd-3a23e9d91200",
   "metadata": {},
   "source": [
    "#### 创建一个基于模板的检索链： 基础检索版本"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8b05eb57-edf5-4b35-9538-42c2b8f5cc16",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.chains import RetrievalQA\n",
    "\n",
    "# 基础检索\n",
    "base_retriever = vectordb.as_retriever(search_kwargs={\"k\": 10})\n",
    "base_retriever = vectordb.as_retriever(\n",
    "    search_kwargs={\"k\": 15},  # 扩大召回池\n",
    "    search_type=\"mmr\",  # 最大边际相关性算法（网页5）\n",
    "    # metadata_filter={\"source\": \"权威文档.pdf\"}  # 元数据过滤\n",
    ")\n",
    "\n",
    "qa_chain = RetrievalQA.from_chain_type(my_llm,\n",
    "                                       retriever=base_retriever,\n",
    "                                       return_source_documents=True,\n",
    "                                       chain_type_kwargs={\"prompt\":QA_CHAIN_PROMPT})\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "503a7972-a673-41ca-a028-647169d19fcb",
   "metadata": {},
   "source": [
    "## 4.检索问答链效果测试"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "acc2223f-6fb5-4504-bfcd-ac74ca9ff2fa",
   "metadata": {},
   "source": [
    "### 4.1 基于召回结果和 query 结合起来构建的 prompt 效果"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fa1a61eb-feea-4fff-8063-a20c3b392aed",
   "metadata": {},
   "outputs": [],
   "source": [
    "question_1 = \"什么是vmax的上网日志系统？\"\n",
    "result = qa_chain({\"query\": question_1})\n",
    "print(\"大模型+知识库后回答 question_1 的结果：\")\n",
    "print(result[\"result\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2707095e-21d0-4e2b-8a5b-0c02258d2ce0",
   "metadata": {},
   "outputs": [],
   "source": [
    "question_2 = \"严威是谁？\"\n",
    "result = qa_chain({\"query\": question_21})\n",
    "print(\"大模型+知识库后回答 question_2 的结果：\")\n",
    "print(result[\"result\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e4195cfa-1fc8-41a9-8984-91f2e5fbe013",
   "metadata": {},
   "source": [
    "### 4.2 无知识库大模型自己回答的效果"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "569fbe28-2e2d-4042-b3a1-65326842bdc9",
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt_template = \"\"\"请回答下列问题:\n",
    "                            {}\"\"\".format(question_1)\n",
    "\n",
    "### 基于大模型的问答\n",
    "my_llm.predict(prompt_template)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d0d3a813-db19-4be5-8926-ad8298e3e2b1",
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt_template = \"\"\"请回答下列问题:\n",
    "                            {}\"\"\".format(question_2)\n",
    "\n",
    "### 基于大模型的问答\n",
    "my_llm.predict(prompt_template)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "51b9ba4a-053d-409a-a632-63336c2bdf84",
   "metadata": {},
   "source": [
 ⭐">
    "> ⭐ These two questions show that, on its own, the LLM does not answer recent knowledge or highly specialized, non-common-sense questions very well. Adding our local knowledge helps the LLM produce much better answers, and it also helps mitigate the model's \"hallucination\" problem."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "72780ae5-b010-4eb8-8885-7c449412183f",
   "metadata": {},
   "source": [
    "## 5. 添加历史对话的记忆功能"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "181c6341-9529-48f6-b0d6-df0cb38692cb",
   "metadata": {},
   "source": [
    "现在我们已经实现了通过上传本地知识文档，然后将他们保存到向量知识库，通过将查询问题与向量知识库的召回结果进行结合输入到 LLM 中，我们就得到了一个相比于直接让 LLM 回答要好得多的结果。在与语言模型交互时，你可能已经注意到一个关键问题 - **它们并不记得你之前的交流内容**。这在我们构建一些应用程序（如聊天机器人）的时候，带来了很大的挑战，使得对话似乎缺乏真正的连续性。这个问题该如何解决呢？\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5ef1dbbf-3260-4865-a71a-11d22317a195",
   "metadata": {},
   "source": [
    "### 5.1 记忆（Memory）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "61d8ac5e-e5f2-42a2-8c5a-cf416aaa2a7a",
   "metadata": {},
   "source": [
    "在本节中我们将介绍 LangChain 中的储存模块，即如何将先前的对话嵌入到语言模型中的，使其具有连续对话的能力。我们将使用 `ConversationBufferMemory` ，它保存聊天消息历史记录的列表，这些历史记录将在回答问题时与问题一起传递给聊天机器人，从而将它们添加到上下文中。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "52d58ef8-297f-4a56-9d7c-9cdc043ddda5",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\Users\\will\\AppData\\Local\\Temp\\ipykernel_3560\\2228008247.py:3: LangChainDeprecationWarning: Please see the migration guide at: https://python.langchain.com/docs/versions/migrating_memory/\n",
      "  memory = ConversationBufferMemory(\n"
     ]
    }
   ],
   "source": [
    "from langchain.memory import ConversationBufferMemory\n",
    "\n",
    "memory = ConversationBufferMemory(\n",
    "    memory_key=\"chat_history\",  # 与 prompt 的输入变量保持一致。\n",
    "    return_messages=True  # 将以消息列表的形式返回聊天记录，而不是单个字符串\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8bdb6d61-0772-453c-b507-db57afac74fe",
   "metadata": {},
   "source": [
    "关于更多的 Memory 的使用，包括保留指定对话轮数、保存指定 token 数量、保存历史对话的总结摘要等内容，请参考 langchain 的 Memory 部分的相关文档。"
   ]
  },
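  {
   "cell_type": "markdown",
   "id": "mem-window-note-0001",
   "metadata": {},
   "source": [
    "As a minimal sketch of one such variant (assuming the same LangChain version used above), `ConversationBufferWindowMemory` keeps only the last `k` exchanges:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "mem-window-code-0001",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.memory import ConversationBufferWindowMemory\n",
    "\n",
    "# Keep only the most recent k=1 exchange; older turns are dropped from the context\n",
    "window_memory = ConversationBufferWindowMemory(\n",
    "    k=1,\n",
    "    memory_key=\"chat_history\",\n",
    "    return_messages=True\n",
    ")\n",
    "window_memory.save_context({\"input\": \"第一个问题\"}, {\"output\": \"第一个回答\"})\n",
    "window_memory.save_context({\"input\": \"第二个问题\"}, {\"output\": \"第二个回答\"})\n",
    "# Only the second exchange remains in the window\n",
    "print(window_memory.load_memory_variables({}))"
   ]
  },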
  {
   "cell_type": "markdown",
   "id": "84184f40-44e0-4c25-91e4-241f7364f654",
   "metadata": {},
   "source": [
    "### 5.2 对话检索链（ConversationalRetrievalChain）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bba1aea7-7d40-4101-8f9e-65b26d54ad40",
   "metadata": {},
   "source": [
    "对话检索链（ConversationalRetrievalChain）在检索 QA 链的基础上，增加了处理对话历史的能力。\n",
    "\n",
    "它的工作流程是:\n",
    "1. 将之前的对话与新问题合并生成一个完整的查询语句。\n",
    "2. 在向量数据库中搜索该查询的相关文档。\n",
    "3. 获取结果后,存储所有答案到对话记忆区。\n",
    "4. 用户可在 UI 中查看完整的对话流程。"
   ]
  },
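  {
   "cell_type": "markdown",
   "id": "condense-note-0001",
   "metadata": {},
   "source": [
    "Step 1 of this workflow can be illustrated on its own. The sketch below is a hypothetical stand-alone example (not the chain's actual internal prompt): it condenses the history and a follow-up question into a single standalone query using the `my_llm` defined earlier."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "condense-code-0001",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "\n",
    "# Hypothetical condense prompt: rewrite a follow-up question as a standalone one\n",
    "condense_prompt = PromptTemplate(\n",
    "    input_variables=[\"chat_history\", \"question\"],\n",
    "    template=\"根据以下对话历史，把后续问题改写成一个独立、完整的问题。\\n对话历史：{chat_history}\\n后续问题：{question}\\n独立问题：\"\n",
    ")\n",
    "\n",
    "history = \"用户：什么是VMAX？\\n助手：VMAX是一个多维价值分析系统。\"\n",
    "standalone = my_llm.invoke(condense_prompt.format(chat_history=history, question=\"它有哪些功能？\"))\n",
    "# The condensed standalone query would then be sent to the retriever\n",
    "print(standalone)"
   ]
  },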
  {
   "cell_type": "markdown",
   "id": "848cd6ab-0471-4104-b0d7-0e5198996d59",
   "metadata": {},
   "source": [
    "这种链式方式将新问题放在之前对话的语境中进行检索，可以处理依赖历史信息的查询。并保留所有信\n",
    "息在对话记忆中，方便追踪。\n",
    "\n",
    "接下来让我们可以测试这个对话检索链的效果："
   ]
  },
  {
   "cell_type": "markdown",
   "id": "db1dccc5-989f-4d92-8896-2fb16c2e429b",
   "metadata": {},
   "source": [
    "使用上一节中的向量数据库和 LLM ！首先提出一个无历史对话的问题“这门课会学习 Python 吗？”，并查看回答。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c61d1bd3-32ea-4354-b4e2-4e59c4496b2b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 新增：定制对话模板\n",
    "from langchain.prompts import PromptTemplate\n",
    "\n",
    "custom_template = \"\"\"你是VMAX运维助手，基于以下对话历史和上下文知识，用中文回答用户的问题。\n",
    "    历史对话记录：\n",
    "    {chat_history}\n",
    "    \n",
    "    上下文知识：\n",
    "    {context}\n",
    "    \n",
    "    当前问题：{question}\n",
    "    \n",
    "    回答要求：\n",
    "    1. 如果问题需要专业领域知识，优先使用上下文内容\n",
    "    2. 若答案不在知识库中，明确告知\"根据已知信息无法回答\"\n",
    "    3. 结尾添加\"是否需要进一步说明？\"[2,7](@ref)\n",
    "    \"\"\"\n",
    "    \n",
    "# 创建包含变量占位的PromptTemplate\n",
    "QA_PROMPT = PromptTemplate(\n",
    "        input_variables=[\"chat_history\", \"context\", \"question\"],\n",
    "        template=custom_template\n",
    "    )\n",
    "    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "dd721cd8-a829-4cf4-aee2-96f26f94fd16",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 新增：定制对话模板\n",
    "from langchain.prompts import PromptTemplate\n",
    "template = \"\"\"你是VMAX运维助手，使用以下上下文来回答问题。如果你不知道答案，就说你不知道，不要试图编造答案。总是在回答的最后说“谢谢你的提问！”。\n",
    "{context}\n",
    "问题: {question}\n",
    "\"\"\"\n",
    "\n",
    "QA_CHAIN_PROMPT = PromptTemplate(input_variables=[\"context\",\"question\"],\n",
    "                                 template=template)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "d47e13b1-9208-4f04-a2ec-ac4dc5141a9a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# from langchain.chains import ConversationalRetrievalChain\n",
    "\n",
    "# retriever=vectordb.as_retriever()\n",
    "\n",
    "# qa = ConversationalRetrievalChain.from_llm(\n",
    "#     llm,\n",
    "#     retriever=retriever,\n",
    "#     memory=memory\n",
    "# )\n",
    "# question = \"什么是VMAX？\"\n",
    "# result = qa({\"question\": question})\n",
    "# print(result['answer'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "4814ece5-3baa-4211-b47d-7c90d2dff2c7",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.chains import ConversationalRetrievalChain\n",
    "\n",
    "retriever = vectordb.as_retriever(search_kwargs={\"k\": 5})  # 控制检索文档数量\n",
    "    \n",
    "# 修改链配置，注入自定义模板\n",
    "qa = ConversationalRetrievalChain.from_llm(\n",
    "        my_llm,\n",
    "        retriever=retriever,\n",
    "        memory=memory,\n",
    "        combine_docs_chain_kwargs={\"prompt\": QA_CHAIN_PROMPT},  # 关键参数绑定模板\n",
    "        get_chat_history=lambda h: h  # 保持历史记录原始格式[4](@ref)\n",
    "    )\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bf0855c0-c860-48e0-a2a0-52674526db3c",
   "metadata": {},
   "source": [
    "然后基于答案进行下一个问题“为什么这门课需要教这方面的知识？”："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ed6702e3-6026-4575-98ed-5b8004c5dd7e",
   "metadata": {},
   "outputs": [],
   "source": [
    "questions = [\n",
    "    \"什么是VMAX？\",\n",
    "    \"VMAX有哪些功能\", \n",
    "    \"整理成表格\"  \n",
    "]\n",
    "\n",
    "for question in questions:\n",
    "    result = qa.invoke({\"question\": question})\n",
    "    print(f\"Question: {question}\")\n",
    "    print(f\"Answer: {result['answer']}\")\n",
    "    # print(\"Chat history:\", memory.load_memory_variables({}))\n",
    "    print(\"\\n\" + \"=\"*50 + \"\\n\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "0ec6cd22-6607-45da-802b-dd892e3c60ec",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "回答：VMAX系统涉及的多个技术细节和技术问题包括：\n",
      "\n",
      "1. **物理指标**：\n",
      "   - 本章包含如下主题：物理指标（外形尺寸和重量）取决于具体项目的规模及选型的服务器。\n",
      "   - 例如，ZXVMAX-S系统除三方服务器无其它硬件外，相应的物理指标（外形尺寸和重量）取决于具体项目的规模及选型的服务器。\n",
      "\n",
      "2. **性能指标**：\n",
      "   - ZXVMAX-S的性能指标包括用户数、系统最大配置输入流量、同时登录的用户/终端数等。\n",
      "   - 例如，表8-1性能指标分类中提到的性能指标有：用户数#3000万、系统最大配置输入流量#300Gbps、同时登录的用户/终端数。\n",
      "\n",
      "3. **功耗指标**：\n",
      "   - 表8-1中的“功耗指标”部分没有具体提及，但通常包括CPU使用率、内存使用率等。\n",
      "   - 例如，表8-1性能指标分类中提到的功耗指标有：KQI/KPI分析周期、原始数据存储时间、小时粒度数据表存储时间、365天探针数据采集量。\n",
      "\n",
      "4. **时钟指标**：\n",
      "   - 表8-1中的“时钟指标”部分没有具体提及，但通常包括系统运行时间等。\n",
      "   - 例如，表8-1性能指标分类中提到的时钟指标有：KQI/KPI分析周期、原始数据存储时间、小时粒度数据表存储时间。\n",
      "\n",
      "5. **应急或备份措施**：\n",
      "   - 需要了解本运营商的相关标准和要求，并根据实际情况采取相应的应急或备份措施。\n",
      "   - 例如，VMAX系统可能需要支持的应急或备份措施包括但不限于：设备紧急故障的判断、定位和排除方法、用户投诉渠道等。\n",
      "\n",
      "6. **告警处理**：\n",
      "   - VMAX多维价值分析系统告警处理部分涉及多个技术细节和技术问题，如4000200116S1-MME接口XDRID完整率(小时)。\n",
      "   - 例如，该指标可能需要监控和管理以确保系统的稳定运行。\n",
      "\n",
      "综上所述，VMAX系统涉及的多个技术细节和技术问题包括但不限于物理、性能、功耗、时钟、应急或备份措施以及告警处理。具体的技术细节和技术问题可能会根据实际情况有所不同。\n",
      "\n",
      "==================================================\n",
      "\n",
      "回答：我是VMAX运维助手，阿里云提供的服务。如果您有任何关于VMAX的疑问或需要帮助的地方，请随时告诉我！\n",
      "\n",
      "==================================================\n",
      "\n"
     ]
    }
   ],
   "source": [
    "questions = [\n",
    "    \"VMAX有哪些功能？\",\n",
    "    \"整理成表格\", \n",
    "]\n",
    "\n",
    "for question in questions:\n",
    "    result = qa.invoke({\"question\": question})\n",
    "    print(f\"Answer: {result['answer']}\")\n",
    "    print(\"\\n\" + \"=\"*50 + \"\\n\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "625044f3-d688-4bf1-b83a-539cfce368f9",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "回答：VMAX主要提供以下功能：\n",
      "\n",
      "1. 用户分析功能：\n",
      "   - 以客户、客户群组等为维度的专项分析，可以区分业务、用户等级和时间等。\n",
      "   - 可以进行用户级的在某一时间段的呼叫、注册记录、媒体数据、信令详情，并结合最近7天的统计情况定位语音/视频通话质量异常环节。\n",
      "\n",
      "2. 一键投诉功能：\n",
      "   - 输入用户号码或IMSI，以及用户投诉的起止时间。\n",
      "   - 提供引起用户投诉的具体问题业务记录（即问题话单），并给出导致问题发生的原因。\n",
      "\n",
      "3. 区域感知功能：\n",
      "   - 定期对全网的所有小区进行排查，找到感知差的小区和原因分析。\n",
      "   - 可以支持对事先配置的重点区域做感知分析，包括感知评估和根因定位。\n",
      "\n",
      "4. 用户感知功能：\n",
      "   - 定期对全网所有用户进行排查，找出感知差的用户，并分析导致问题的原因。\n",
      "\n",
      "5. 语音业务质量指标MOS值分析：\n",
      "   - 根据MOS值识别出语音质差区域。\n",
      "   - 对质差区域提供问题定界处理建议，并提供闭环验证。\n",
      "\n",
      "6. 呼叫和注册记录查询：\n",
      "   - 可以进行用户注册、呼叫的统计，以便定位通话质量异常环节。\n",
      "\n",
      "7. 本地日志分析：\n",
      "   - 可以查看全网或特定地市的本地日志，展示事件失败原因。\n",
      "   - 可以按不同维度（如网元、小区、终端、用户）进行统计和分析。\n",
      "\n",
      "8. 日志管理：\n",
      "   - 可以对日志数据进行清理和整理，提高工作效率。\n",
      "\n",
      "9. 账号管理：\n",
      "   - 用于维护账号信息，包括用户注册、登录等操作。\n",
      "\n",
      "10. 角色管理：\n",
      "    - 对不同角色的用户分配权限，如管理员、普通用户等。\n",
      "    - 可以对特定角色进行权限设置和限制。\n",
      "\n",
      "11. 资源监控：\n",
      "    - 可以查看全网或特定地市的资源使用情况，包括网络带宽、流量等。\n",
      "    - 可以提供告警管理功能，及时通知维护人员处理问题。\n",
      "\n",
      "12. 系统日志保存：\n",
      "   - 通过Gbase数据库或HDFS进行数据存储和备份。\n",
      "   - 可以支持自动清理超过保存时间的记录。\n",
      "\n",
      "13. 日志历史查询：\n",
      "    - 可以查看全网或特定地市的日志记录，方便分析和定位问题。\n",
      "\n",
      "总结来说，VMAX主要提供用户分析、一键投诉、区域感知、用户感知、语音业务质量指标MOS值分析、本地日志分析、账号管理、角色管理、资源监控、系统日志保存等功能。\n"
     ]
    }
   ],
   "source": [
    "question=\"只保留前三个功能\"\n",
    "result = qa.invoke({\"question\": question})\n",
    "print(f\"Answer: {result['answer']}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "27181a98-92c0-4cfc-8e39-9175a224c919",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "回答：VMAX的主要功能包括：\n",
      "\n",
      "1. 用户分析：通过用户IMSI或号码、用户群组等信息，定位语音/视频通话质量异常的环节。\n",
      "\n",
      "2. 业务管理：提供用户分析帮助，便于运营维护人员定位具体问题。\n",
      "\n",
      "3. 日志管理：支持日志保存和查询，方便管理和分析。\n",
      "\n",
      "4. 资源监控：实时监控网络资源使用情况。\n",
      "\n",
      "5. 告警管理：自动比对拨测结果，提高故障检测效率。\n",
      "\n",
      "6. 系统对接：支持与北向接口、云化上网日志XDR等接口的连接和数据传输。\n",
      "\n",
      "7. 语音业务优化：评估优化功能提供评估报告模板，帮助识别质量差区域并定位问题。\n"
     ]
    }
   ],
   "source": [
    "question=\"只保留前3个功能\"\n",
    "result = qa.invoke({\"question\": question})\n",
    "print(f\"Answer: {result['answer']}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "4099e7fe-c253-407d-be3c-872867ed2c0e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "回答：ZXVMAX-S语音业务的主要功能包括：\n",
      "\n",
      "1. 用户分析：通过用户IMSI或号码等维度，查询通话记录、注册情况、媒体数据和信令详情，并结合最近7天的统计结果定位异常问题。\n",
      "\n",
      "2. 评估优化：提供评估报告模板，帮助运维人员快速定位问题并提出解决方案。\n",
      "\n",
      "3. 日志管理：支持日志保存、查询、批量导入等功能，便于用户管理和分析日志信息。\n",
      "\n",
      "4. 账号管理：允许用户自定义账号和角色权限。\n",
      "\n",
      "5. 角色管理：为不同用户提供不同的访问权限。\n",
      "\n",
      "6. 资源监控：实时监测网络性能指标如KPI等，并提供预警功能。\n",
      "\n",
      "7. 本地日志分析：支持本地日志的保存、查询和历史记录查询。\n",
      "\n",
      "8. 用户感知：通过用户注册失败率、呼叫失败率等指标，帮助运维人员了解服务质量。\n",
      "\n",
      "9. 账号管理：允许用户自定义账号权限。\n",
      "\n",
      "10. 角色管理：为不同用户提供不同的访问权限。\n",
      "\n",
      "11. 本地日志分析：支持本地日志的保存和查询。\n",
      "\n",
      "12. 日志管理：提供日志历史记录查询功能，方便用户了解日志信息。\n"
     ]
    }
   ],
   "source": [
    "question=\"把上面的结果整理成一段话描述，100个字\"\n",
    "result = qa.invoke({\"question\": question})\n",
    "print(f\"Answer: {result['answer']}\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python3 (env_rag)",
   "language": "python",
   "name": "env_rag"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
