{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# Hierarchical Indices for RAG\n",
    "\n",
    "This notebook implements hierarchical indices for RAG systems. The technique improves retrieval with a two-tier search: first identify relevant document sections via their summaries, then retrieve specific details from those sections only.\n",
    "\n",
    "-----\n",
    "Traditional RAG treats all text chunks equally, which can lead to:\n",
    "\n",
    "- Loss of context when chunks are too small\n",
    "- Irrelevant retrieval results when the document collection is large\n",
    "- Inefficient search across the entire corpus\n",
    "\n",
    "-----\n",
    "Hierarchical retrieval addresses these problems by:\n",
    "\n",
    "- Creating concise summaries of larger document sections\n",
    "- Searching those summaries first to identify the relevant sections\n",
    "- Retrieving detailed information only from those sections\n",
    "- Preserving context while keeping specific details\n",
    "\n",
    "-----\n",
    "Implementation steps:\n",
    "- Extract pages from the PDF\n",
    "- Create a summary for each page, appending the summary text and metadata to a summary list\n",
    "- Create detailed chunks for each page by splitting its text into chunks\n",
    "- Create embeddings for both levels and store them in separate vector stores\n",
    "- Retrieve relevant chunks hierarchically for a query: retrieve relevant summaries, collect their pages, filter out chunks not on those pages, then retrieve detailed chunks from the relevant pages\n",
    "- Generate an answer from the retrieved chunks"
   ],
   "id": "3585c6ce6cccb16d"
  },
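  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The two-tier search described above can be sketched in a few lines. This is an illustration only: `summary_store`, `detailed_store`, and `embed` stand in for the vector stores and embedding function built later in this notebook.\n",
    "\n",
    "```python\n",
    "# Tier 1: search the page summaries to find the relevant pages\n",
    "q = embed(query)\n",
    "top_summaries = summary_store.similarity_search(q, k=3)\n",
    "pages = {s['metadata']['page'] for s in top_summaries}\n",
    "\n",
    "# Tier 2: search the detailed chunks, restricted to those pages\n",
    "chunks = detailed_store.similarity_search(\n",
    "    q, k=15, filter_func=lambda m: m['page'] in pages\n",
    ")\n",
    "```"
   ],
   "id": "hier-search-sketch-md"
  },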
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:28.726462Z",
     "start_time": "2025-04-29T07:39:17.242776Z"
    }
   },
   "cell_type": "code",
   "source": [
    "import os\n",
    "import pickle\n",
    "\n",
    "import fitz  # PyMuPDF, for reading PDFs\n",
    "import numpy as np\n",
    "from dotenv import load_dotenv\n",
    "from openai import OpenAI\n",
    "\n",
    "load_dotenv()"
   ],
   "id": "8b6a2b84784f847a",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 1
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:28.934755Z",
     "start_time": "2025-04-29T07:39:28.755479Z"
    }
   },
   "cell_type": "code",
   "source": [
    "client = OpenAI(\n",
    "    base_url=os.getenv(\"LLM_BASE_URL\"),\n",
    "    api_key=os.getenv(\"LLM_API_KEY\")\n",
    ")\n",
    "llm_model = os.getenv(\"LLM_MODEL_ID\")\n",
    "embedding_model = os.getenv(\"EMBEDDING_MODEL_ID\")\n",
    "\n",
    "pdf_path = \"../../data/AI_Information.en.zh-CN.pdf\""
   ],
   "id": "ec7d3ee9592010f5",
   "outputs": [],
   "execution_count": 2
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Document Processing Functions",
   "id": "1e796fced08f6fc8"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:29.106491Z",
     "start_time": "2025-04-29T07:39:29.102791Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def extract_text_from_pdf(pdf_path):\n",
    "    \"\"\"\n",
    "    Extract text from a PDF file, separated by page.\n",
    "\n",
    "    Args:\n",
    "        pdf_path (str): Path to the PDF file\n",
    "\n",
    "    Returns:\n",
    "        List[Dict]: List of pages with text content and metadata\n",
    "    \"\"\"\n",
    "    print(f\"Extracting text from {pdf_path}...\")\n",
    "    pdf = fitz.open(pdf_path)  # Open the PDF with PyMuPDF\n",
    "    pages = []  # Collected pages with text content\n",
    "\n",
    "    # Iterate over every page in the PDF\n",
    "    for page_num in range(len(pdf)):\n",
    "        page = pdf[page_num]  # Get the current page\n",
    "        text = page.get_text()  # Extract its text\n",
    "\n",
    "        # Skip pages with very little text (fewer than 50 characters)\n",
    "        if len(text.strip()) > 50:\n",
    "            # Append the page text together with its metadata\n",
    "            pages.append({\n",
    "                \"text\": text,\n",
    "                \"metadata\": {\n",
    "                    \"source\": pdf_path,  # Source file path\n",
    "                    \"page\": page_num + 1  # 1-based page number\n",
    "                }\n",
    "            })\n",
    "\n",
    "    print(f\"Extracted {len(pages)} pages with content\")\n",
    "    return pages\n"
   ],
   "id": "471864c01112a254",
   "outputs": [],
   "execution_count": 3
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:29.115192Z",
     "start_time": "2025-04-29T07:39:29.110523Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def chunk_text(text, metadata, chunk_size=1000, overlap=200):\n",
    "    \"\"\"\n",
    "    Split text into overlapping chunks while preserving metadata.\n",
    "\n",
    "    Args:\n",
    "        text (str): Input text to split\n",
    "        metadata (Dict): Metadata to carry over to each chunk\n",
    "        chunk_size (int): Size of each chunk in characters\n",
    "        overlap (int): Overlap between chunks in characters\n",
    "\n",
    "    Returns:\n",
    "        List[Dict]: List of text chunks with metadata\n",
    "    \"\"\"\n",
    "    chunks = []  # Collected chunks\n",
    "\n",
    "    # Step through the text using the chunk size and overlap\n",
    "    for i in range(0, len(text), chunk_size - overlap):\n",
    "        current_chunk = text[i:i + chunk_size]  # Extract the chunk (renamed to avoid shadowing the function name)\n",
    "\n",
    "        # Skip very small chunks (fewer than 50 characters)\n",
    "        if current_chunk and len(current_chunk.strip()) > 50:\n",
    "            # Copy the metadata and add chunk-specific fields\n",
    "            chunk_metadata = metadata.copy()\n",
    "            chunk_metadata.update({\n",
    "                \"chunk_index\": len(chunks),  # Index of this chunk\n",
    "                \"start_char\": i,  # Start character offset\n",
    "                \"end_char\": i + len(current_chunk),  # End character offset\n",
    "                \"is_summary\": False  # Flag: this is not a summary\n",
    "            })\n",
    "\n",
    "            # Append the chunk with its metadata\n",
    "            chunks.append({\n",
    "                \"text\": current_chunk,\n",
    "                \"metadata\": chunk_metadata\n",
    "            })\n",
    "\n",
    "    return chunks\n"
   ],
   "id": "e5d70af80722b82d",
   "outputs": [],
   "execution_count": 4
  },
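  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "With `chunk_size=1000` and `overlap=200`, the loop above advances by a stride of `chunk_size - overlap = 800` characters, so consecutive chunks share 200 characters of text. A quick check of the start offsets (illustration only, with a hypothetical text length):\n",
    "\n",
    "```python\n",
    "chunk_size, overlap = 1000, 200\n",
    "text_len = 2500\n",
    "starts = list(range(0, text_len, chunk_size - overlap))\n",
    "print(starts)  # [0, 800, 1600, 2400]\n",
    "```"
   ],
   "id": "chunk-stride-sketch-md"
  },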
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Vector Store",
   "id": "5684d4b61b736082"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:29.127741Z",
     "start_time": "2025-04-29T07:39:29.121173Z"
    }
   },
   "cell_type": "code",
   "source": [
    "class SimpleVectorStore:\n",
    "    \"\"\"\n",
    "    A simple vector store backed by NumPy.\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self):\n",
    "        \"\"\"\n",
    "        Initialize the vector store.\n",
    "        \"\"\"\n",
    "        self.vectors = []  # Embedding vectors\n",
    "        self.texts = []  # Original texts\n",
    "        self.metadata = []  # Metadata for each text\n",
    "\n",
    "    def add_item(self, text, embedding, metadata=None):\n",
    "        \"\"\"\n",
    "        Add an item to the vector store.\n",
    "\n",
    "        Args:\n",
    "            text (str): The original text.\n",
    "            embedding (List[float]): The embedding vector.\n",
    "            metadata (dict, optional): Extra metadata.\n",
    "        \"\"\"\n",
    "        self.vectors.append(np.array(embedding))  # Store the embedding as a NumPy array\n",
    "        self.texts.append(text)  # Store the original text\n",
    "        self.metadata.append(metadata or {})  # Fall back to an empty dict if no metadata given\n",
    "\n",
    "    def similarity_search(self, query_embedding, k=5, filter_func=None):\n",
    "        \"\"\"\n",
    "        Find the items most similar to a query embedding.\n",
    "\n",
    "        Args:\n",
    "            query_embedding (List[float]): The query embedding vector.\n",
    "            k (int): Number of results to return.\n",
    "            filter_func (callable, optional): Predicate on metadata; items failing it are skipped.\n",
    "\n",
    "        Returns:\n",
    "            List[Dict]: Top-k most similar items with text and metadata.\n",
    "        \"\"\"\n",
    "        if not self.vectors:\n",
    "            return []  # Nothing stored yet\n",
    "\n",
    "        # Convert the query embedding to a NumPy array\n",
    "        query_vector = np.array(query_embedding)\n",
    "\n",
    "        # Compute cosine similarity against every stored vector\n",
    "        similarities = []\n",
    "        for i, vector in enumerate(self.vectors):\n",
    "            # Skip items whose metadata fails the filter, if one is given\n",
    "            if filter_func and not filter_func(self.metadata[i]):\n",
    "                continue\n",
    "            # Cosine similarity between the query vector and the stored vector\n",
    "            similarity = np.dot(query_vector, vector) / (np.linalg.norm(query_vector) * np.linalg.norm(vector))\n",
    "            similarities.append((i, similarity))  # Keep the index and the score\n",
    "\n",
    "        # Sort by similarity, descending\n",
    "        similarities.sort(key=lambda x: x[1], reverse=True)\n",
    "\n",
    "        # Collect the top-k results\n",
    "        results = []\n",
    "        for i in range(min(k, len(similarities))):\n",
    "            idx, score = similarities[i]\n",
    "            results.append({\n",
    "                \"text\": self.texts[idx],  # The matching text\n",
    "                \"metadata\": self.metadata[idx],  # Its metadata\n",
    "                \"similarity\": score  # Its similarity score\n",
    "            })\n",
    "\n",
    "        return results\n"
   ],
   "id": "611ac01fab19a1e7",
   "outputs": [],
   "execution_count": 5
  },
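  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "`similarity_search` ranks items by cosine similarity: the dot product of two vectors divided by the product of their norms. A tiny sketch of that scoring step on hypothetical 3-dimensional vectors:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "a = np.array([1.0, 0.0, 1.0])\n",
    "b = np.array([1.0, 1.0, 0.0])\n",
    "\n",
    "# Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal vectors\n",
    "score = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))\n",
    "print(round(float(score), 3))  # 0.5\n",
    "```"
   ],
   "id": "cosine-similarity-sketch-md"
  },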
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Creating Embeddings",
   "id": "a20fb3cbf941f99e"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:29.136002Z",
     "start_time": "2025-04-29T07:39:29.132798Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def create_embeddings(texts):\n",
    "    \"\"\"\n",
    "    Create embedding vectors for the given texts.\n",
    "\n",
    "    Args:\n",
    "        texts (List[str]): List of input texts\n",
    "\n",
    "    Returns:\n",
    "        List[List[float]]: List of embedding vectors\n",
    "    \"\"\"\n",
    "    # Handle empty input\n",
    "    if not texts:\n",
    "        return []\n",
    "\n",
    "    # Process in batches (to stay within API limits)\n",
    "    batch_size = 100\n",
    "    all_embeddings = []\n",
    "\n",
    "    # Generate embeddings batch by batch\n",
    "    for i in range(0, len(texts), batch_size):\n",
    "        batch = texts[i:i + batch_size]  # Current batch of texts\n",
    "\n",
    "        # Call the embeddings API for this batch\n",
    "        response = client.embeddings.create(\n",
    "            model=embedding_model,\n",
    "            input=batch\n",
    "        )\n",
    "\n",
    "        # Extract the embeddings for this batch\n",
    "        batch_embeddings = [item.embedding for item in response.data]\n",
    "        all_embeddings.extend(batch_embeddings)  # Add them to the overall list\n",
    "\n",
    "    return all_embeddings\n"
   ],
   "id": "3980f4949f4d4c53",
   "outputs": [],
   "execution_count": 6
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Summarization Function",
   "id": "a2c69a2e7bcc1d1c"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:29.145992Z",
     "start_time": "2025-04-29T07:39:29.142735Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def generate_page_summary(page_text):\n",
    "    \"\"\"\n",
    "    Generate a concise summary of a page.\n",
    "\n",
    "    Args:\n",
    "        page_text (str): Text content of the page\n",
    "\n",
    "    Returns:\n",
    "        str: The generated summary\n",
    "    \"\"\"\n",
    "    # System prompt instructing the model how to summarize\n",
    "    system_prompt = \"\"\"You are an expert summarization system.\n",
    "    Create a detailed summary of the provided text.\n",
    "    Focus on the main topics, key information, and important facts.\n",
    "    Your summary should be comprehensive enough to convey what the page contains,\n",
    "    but more concise than the original.\"\"\"\n",
    "\n",
    "    # Truncate long input by characters, as a rough proxy for the model's token limit\n",
    "    max_chars = 6000\n",
    "    truncated_text = page_text[:max_chars] if len(page_text) > max_chars else page_text\n",
    "\n",
    "    # Request a summary from the chat completions API\n",
    "    response = client.chat.completions.create(\n",
    "        model=llm_model,  # Model to use\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},  # System message guiding the assistant\n",
    "            {\"role\": \"user\", \"content\": f\"Please summarize the following text:\\n\\n{truncated_text}\"}  # Text to summarize\n",
    "        ],\n",
    "        temperature=0.3  # Low temperature for stable summaries\n",
    "    )\n",
    "\n",
    "    # Return the generated summary\n",
    "    return response.choices[0].message.content\n"
   ],
   "id": "4f80041795f365b1",
   "outputs": [],
   "execution_count": 7
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Hierarchical Document Processing",
   "id": "b903a526218c06a"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:29.157345Z",
     "start_time": "2025-04-29T07:39:29.151785Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def process_document_hierarchically(pdf_path, chunk_size=1000, chunk_overlap=200):\n",
    "    \"\"\"\n",
    "    Process a document into a hierarchical index.\n",
    "\n",
    "    Args:\n",
    "        pdf_path (str): Path to the PDF file\n",
    "        chunk_size (int): Size of each detailed chunk\n",
    "        chunk_overlap (int): Overlap between chunks\n",
    "\n",
    "    Returns:\n",
    "        Tuple[SimpleVectorStore, SimpleVectorStore]: Summary and detailed vector stores\n",
    "    \"\"\"\n",
    "    # Extract pages from the PDF\n",
    "    pages = extract_text_from_pdf(pdf_path)\n",
    "\n",
    "    # Create a summary for each page\n",
    "    print(\"Generating page summaries...\")\n",
    "    summaries = []\n",
    "    for i, page in enumerate(pages):\n",
    "        print(f\"Summarizing page {i+1}/{len(pages)}...\")\n",
    "        summary_text = generate_page_summary(page[\"text\"])\n",
    "\n",
    "        # Build the summary metadata\n",
    "        summary_metadata = page[\"metadata\"].copy()\n",
    "        summary_metadata.update({\"is_summary\": True})\n",
    "\n",
    "        # Append the summary text and metadata to the summary list\n",
    "        summaries.append({\n",
    "            \"text\": summary_text,\n",
    "            \"metadata\": summary_metadata\n",
    "        })\n",
    "\n",
    "    # Create detailed chunks for each page\n",
    "    detailed_chunks = []\n",
    "    for page in pages:\n",
    "        # Split the page text into chunks\n",
    "        page_chunks = chunk_text(\n",
    "            page[\"text\"],\n",
    "            page[\"metadata\"],\n",
    "            chunk_size,\n",
    "            chunk_overlap\n",
    "        )\n",
    "        # Extend the detailed_chunks list with this page's chunks\n",
    "        detailed_chunks.extend(page_chunks)\n",
    "\n",
    "    print(f\"Created {len(detailed_chunks)} detailed chunks\")\n",
    "\n",
    "    # Create embeddings for the summaries\n",
    "    print(\"Creating embeddings for summaries...\")\n",
    "    summary_texts = [summary[\"text\"] for summary in summaries]\n",
    "    summary_embeddings = create_embeddings(summary_texts)\n",
    "\n",
    "    # Create embeddings for the detailed chunks\n",
    "    print(\"Creating embeddings for detailed chunks...\")\n",
    "    chunk_texts = [chunk[\"text\"] for chunk in detailed_chunks]\n",
    "    chunk_embeddings = create_embeddings(chunk_texts)\n",
    "\n",
    "    # Create the two vector stores\n",
    "    summary_store = SimpleVectorStore()\n",
    "    detailed_store = SimpleVectorStore()\n",
    "\n",
    "    # Add the summaries to the summary store\n",
    "    for i, summary in enumerate(summaries):\n",
    "        summary_store.add_item(\n",
    "            text=summary[\"text\"],\n",
    "            embedding=summary_embeddings[i],\n",
    "            metadata=summary[\"metadata\"]\n",
    "        )\n",
    "\n",
    "    # Add the chunks to the detailed store\n",
    "    for i, chunk in enumerate(detailed_chunks):\n",
    "        detailed_store.add_item(\n",
    "            text=chunk[\"text\"],\n",
    "            embedding=chunk_embeddings[i],\n",
    "            metadata=chunk[\"metadata\"]\n",
    "        )\n",
    "\n",
    "    print(f\"Created vector stores with {len(summaries)} summaries and {len(detailed_chunks)} chunks\")\n",
    "    return summary_store, detailed_store\n"
   ],
   "id": "503efa1e6d334180",
   "outputs": [],
   "execution_count": 8
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Hierarchical Retrieval",
   "id": "d5248df4bf11cae1"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:29.166665Z",
     "start_time": "2025-04-29T07:39:29.162722Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def retrieve_hierarchically(query, summary_store, detailed_store, k_summaries=3, k_chunks=5):\n",
    "    \"\"\"\n",
    "    Retrieve information using the hierarchical indices.\n",
    "\n",
    "    Args:\n",
    "        query (str): The user query\n",
    "        summary_store (SimpleVectorStore): Store of document summaries\n",
    "        detailed_store (SimpleVectorStore): Store of detailed chunks\n",
    "        k_summaries (int): Number of summaries to retrieve\n",
    "        k_chunks (int): Number of chunks to retrieve per summary\n",
    "\n",
    "    Returns:\n",
    "        List[Dict]: Retrieved chunks with relevance scores\n",
    "    \"\"\"\n",
    "    print(f\"Performing hierarchical retrieval for query: {query}\")\n",
    "\n",
    "    # Create the query embedding (wrap the query in a list so it is embedded whole, then take the single result)\n",
    "    query_embedding = create_embeddings([query])[0]\n",
    "\n",
    "    # First, retrieve the relevant summaries\n",
    "    summary_results = summary_store.similarity_search(\n",
    "        query_embedding,\n",
    "        k=k_summaries\n",
    "    )\n",
    "\n",
    "    print(f\"Retrieved {len(summary_results)} relevant summaries\")\n",
    "\n",
    "    # Collect the pages the relevant summaries came from\n",
    "    relevant_pages = [result[\"metadata\"][\"page\"] for result in summary_results]\n",
    "\n",
    "    # Filter that keeps only chunks from the relevant pages\n",
    "    def page_filter(metadata):\n",
    "        return metadata[\"page\"] in relevant_pages\n",
    "\n",
    "    # Then retrieve detailed chunks from those pages only\n",
    "    detailed_results = detailed_store.similarity_search(\n",
    "        query_embedding,\n",
    "        k=k_chunks * len(relevant_pages),\n",
    "        filter_func=page_filter\n",
    "    )\n",
    "\n",
    "    print(f\"Retrieved {len(detailed_results)} detailed chunks from relevant pages\")\n",
    "\n",
    "    # Attach the summary of the page each result came from\n",
    "    for result in detailed_results:\n",
    "        page = result[\"metadata\"][\"page\"]\n",
    "        matching_summaries = [s for s in summary_results if s[\"metadata\"][\"page\"] == page]\n",
    "        if matching_summaries:\n",
    "            result[\"summary\"] = matching_summaries[0][\"text\"]\n",
    "\n",
    "    return detailed_results\n"
   ],
   "id": "c4a9ffca460b5241",
   "outputs": [],
   "execution_count": 9
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Generating a Response from Context",
   "id": "fdcd25d249a5576"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:29.176613Z",
     "start_time": "2025-04-29T07:39:29.172227Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def generate_response(query, retrieved_chunks):\n",
    "    \"\"\"\n",
    "    Generate a response from the query and the retrieved chunks.\n",
    "\n",
    "    Args:\n",
    "        query (str): The user query\n",
    "        retrieved_chunks (List[Dict]): Chunks retrieved by the hierarchical search\n",
    "\n",
    "    Returns:\n",
    "        str: The generated response\n",
    "    \"\"\"\n",
    "    # Extract text from the chunks and build the context sections\n",
    "    context_parts = []\n",
    "\n",
    "    for chunk in retrieved_chunks:\n",
    "        page_num = chunk[\"metadata\"][\"page\"]  # Page number from metadata\n",
    "        context_parts.append(f\"[Page {page_num}]: {chunk['text']}\")  # Prefix each chunk with its page number\n",
    "\n",
    "    # Join all context sections into a single string\n",
    "    context = \"\\n\\n\".join(context_parts)\n",
    "\n",
    "    # System message guiding the AI assistant\n",
    "    system_message = \"\"\"You are a helpful AI assistant answering questions based on the provided context.\n",
    "Use the information in the context to answer the user's question accurately.\n",
    "If the context does not contain the relevant information, say so.\n",
    "Include page numbers when citing specific information.\"\"\"\n",
    "\n",
    "    # Generate the response with the chat completions API\n",
    "    response = client.chat.completions.create(\n",
    "        model=llm_model,  # Model to use\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_message},  # System message guiding the assistant\n",
    "            {\"role\": \"user\", \"content\": f\"Context:\\n\\n{context}\\n\\nQuestion: {query}\"}  # User message with context and query\n",
    "        ],\n",
    "        temperature=0.2  # Low temperature for grounded answers\n",
    "    )\n",
    "\n",
    "    # Return the generated response\n",
    "    return response.choices[0].message.content\n"
   ],
   "id": "72fe71ce60508a0c",
   "outputs": [],
   "execution_count": 10
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Complete RAG Pipeline with Hierarchical Retrieval",
   "id": "eef6270338d53f07"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:29.187293Z",
     "start_time": "2025-04-29T07:39:29.182623Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def hierarchical_rag(query, pdf_path, chunk_size=1000, chunk_overlap=200, k_summaries=3, k_chunks=5, regenerate=False):\n",
    "    \"\"\"\n",
    "    Complete hierarchical RAG pipeline.\n",
    "\n",
    "    Args:\n",
    "        query (str): The user query\n",
    "        pdf_path (str): Path to the PDF document\n",
    "        chunk_size (int): Size of each detailed chunk\n",
    "        chunk_overlap (int): Overlap between chunks\n",
    "        k_summaries (int): Number of summaries to retrieve\n",
    "        k_chunks (int): Number of chunks to retrieve per summary\n",
    "        regenerate (bool): Whether to rebuild the vector stores\n",
    "\n",
    "    Returns:\n",
    "        Dict: Results including the response and retrieved chunks\n",
    "    \"\"\"\n",
    "    # File names used to cache the stores\n",
    "    summary_store_file = f\"{os.path.basename(pdf_path)}_summary_store.pkl\"\n",
    "    detailed_store_file = f\"{os.path.basename(pdf_path)}_detailed_store.pkl\"\n",
    "\n",
    "    # Process the document and build the stores if needed\n",
    "    if regenerate or not os.path.exists(summary_store_file) or not os.path.exists(detailed_store_file):\n",
    "        print(\"Processing document and creating vector stores...\")\n",
    "        # Build the hierarchical indices and vector stores\n",
    "        summary_store, detailed_store = process_document_hierarchically(\n",
    "            pdf_path, chunk_size, chunk_overlap\n",
    "        )\n",
    "\n",
    "        # Save the summary store for future runs\n",
    "        with open(summary_store_file, 'wb') as f:\n",
    "            pickle.dump(summary_store, f)\n",
    "\n",
    "        # Save the detailed store for future runs\n",
    "        with open(detailed_store_file, 'wb') as f:\n",
    "            pickle.dump(detailed_store, f)\n",
    "    else:\n",
    "        # Load the existing summary store from disk\n",
    "        print(\"Loading existing vector stores...\")\n",
    "        with open(summary_store_file, 'rb') as f:\n",
    "            summary_store = pickle.load(f)\n",
    "\n",
    "        # Load the existing detailed store from disk\n",
    "        with open(detailed_store_file, 'rb') as f:\n",
    "            detailed_store = pickle.load(f)\n",
    "\n",
    "    # Retrieve relevant chunks hierarchically for the query\n",
    "    retrieved_chunks = retrieve_hierarchically(\n",
    "        query, summary_store, detailed_store, k_summaries, k_chunks\n",
    "    )\n",
    "\n",
    "    # Generate a response from the retrieved chunks\n",
    "    response = generate_response(query, retrieved_chunks)\n",
    "\n",
    "    # Return the query, response, retrieved chunks, and store sizes\n",
    "    return {\n",
    "        \"query\": query,\n",
    "        \"response\": response,\n",
    "        \"retrieved_chunks\": retrieved_chunks,\n",
    "        \"summary_count\": len(summary_store.texts),\n",
    "        \"detailed_count\": len(detailed_store.texts)\n",
    "    }\n"
   ],
   "id": "7fe45a7a36a23aaa",
   "outputs": [],
   "execution_count": 11
  },
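  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The pipeline caches both stores with pickle, keyed on the PDF's base name, so repeated queries skip re-summarizing and re-embedding. The cache key does not include the chunking parameters, so pass `regenerate=True` after changing them. A quick check of how the cache file name is derived (the path matches the one used in this notebook):\n",
    "\n",
    "```python\n",
    "import os\n",
    "\n",
    "pdf_path = '../../data/AI_Information.en.zh-CN.pdf'\n",
    "cache_file = f'{os.path.basename(pdf_path)}_summary_store.pkl'\n",
    "print(cache_file)  # AI_Information.en.zh-CN.pdf_summary_store.pkl\n",
    "```"
   ],
   "id": "cache-key-sketch-md"
  },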
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Standard RAG (Non-hierarchical, for Comparison)",
   "id": "cd9dd509b817984e"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:29.198167Z",
     "start_time": "2025-04-29T07:39:29.194116Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def standard_rag(query, pdf_path, chunk_size=1000, chunk_overlap=200, k=15):\n",
    "    \"\"\"\n",
    "    Standard RAG pipeline without hierarchical retrieval.\n",
    "\n",
    "    Args:\n",
    "        query (str): The user query\n",
    "        pdf_path (str): Path to the PDF document\n",
    "        chunk_size (int): Size of each chunk\n",
    "        chunk_overlap (int): Overlap between chunks\n",
    "        k (int): Number of chunks to retrieve\n",
    "\n",
    "    Returns:\n",
    "        Dict: Results including the response and retrieved chunks\n",
    "    \"\"\"\n",
    "    # Extract pages from the PDF document\n",
    "    pages = extract_text_from_pdf(pdf_path)\n",
    "\n",
    "    # Create chunks directly from all pages\n",
    "    chunks = []\n",
    "    for page in pages:\n",
    "        # Split the page text into chunks\n",
    "        page_chunks = chunk_text(\n",
    "            page[\"text\"],\n",
    "            page[\"metadata\"],\n",
    "            chunk_size,\n",
    "            chunk_overlap\n",
    "        )\n",
    "        # Extend the chunk list with this page's chunks\n",
    "        chunks.extend(page_chunks)\n",
    "\n",
    "    print(f\"Created {len(chunks)} chunks for standard RAG\")\n",
    "\n",
    "    # Create a vector store to hold the chunks\n",
    "    store = SimpleVectorStore()\n",
    "\n",
    "    # Create embeddings for the chunks\n",
    "    print(\"Creating embeddings for chunks...\")\n",
    "    texts = [chunk[\"text\"] for chunk in chunks]\n",
    "    embeddings = create_embeddings(texts)\n",
    "\n",
    "    # Add the chunks to the vector store\n",
    "    for i, chunk in enumerate(chunks):\n",
    "        store.add_item(\n",
    "            text=chunk[\"text\"],\n",
    "            embedding=embeddings[i],\n",
    "            metadata=chunk[\"metadata\"]\n",
    "        )\n",
    "\n",
    "    # Create the query embedding (wrap the query in a list so it is embedded whole, then take the single result)\n",
    "    query_embedding = create_embeddings([query])[0]\n",
    "\n",
    "    # Retrieve the most relevant chunks for the query\n",
    "    retrieved_chunks = store.similarity_search(query_embedding, k=k)\n",
    "    print(f\"Retrieved {len(retrieved_chunks)} chunks with standard RAG\")\n",
    "\n",
    "    # Generate a response from the retrieved chunks\n",
    "    response = generate_response(query, retrieved_chunks)\n",
    "\n",
    "    # Return the query, response, and retrieved chunks\n",
    "    return {\n",
    "        \"query\": query,\n",
    "        \"response\": response,\n",
    "        \"retrieved_chunks\": retrieved_chunks\n",
    "    }\n"
   ],
   "id": "5655ddce09fff3f1",
   "outputs": [],
   "execution_count": 12
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Evaluation Functions",
   "id": "f9d6854c4b7715f3"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:29.207586Z",
     "start_time": "2025-04-29T07:39:29.204028Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def compare_approaches(query, pdf_path, reference_answer=None):\n",
    "    \"\"\"\n",
    "    Compare the hierarchical and standard RAG approaches.\n",
    "\n",
    "    Args:\n",
    "        query (str): The user query\n",
    "        pdf_path (str): Path to the PDF document\n",
    "        reference_answer (str, optional): Reference answer for evaluation\n",
    "\n",
    "    Returns:\n",
    "        Dict: Comparison results\n",
    "    \"\"\"\n",
    "    print(f\"\\n=== Comparing RAG approaches for query: {query} ===\")\n",
    "\n",
    "    # Run hierarchical RAG\n",
    "    print(\"\\nRunning hierarchical RAG...\")\n",
    "    hierarchical_result = hierarchical_rag(query, pdf_path)\n",
    "    hier_response = hierarchical_result[\"response\"]\n",
    "\n",
    "    # Run standard RAG\n",
    "    print(\"\\nRunning standard RAG...\")\n",
    "    standard_result = standard_rag(query, pdf_path)\n",
    "    std_response = standard_result[\"response\"]\n",
    "\n",
    "    # Compare the hierarchical and standard RAG responses\n",
    "    comparison = compare_responses(query, hier_response, std_response, reference_answer)\n",
    "\n",
    "    # Return a dict with the comparison results\n",
    "    return {\n",
    "        \"query\": query,  # The original query\n",
    "        \"hierarchical_response\": hier_response,  # Response from hierarchical RAG\n",
    "        \"standard_response\": std_response,  # Response from standard RAG\n",
    "        \"reference_answer\": reference_answer,  # Reference answer for evaluation\n",
    "        \"comparison\": comparison,  # The comparison analysis\n",
    "        \"hierarchical_chunks_count\": len(hierarchical_result[\"retrieved_chunks\"]),  # Chunks retrieved by hierarchical RAG\n",
    "        \"standard_chunks_count\": len(standard_result[\"retrieved_chunks\"])  # Chunks retrieved by standard RAG\n",
    "    }\n"
   ],
   "id": "363cea573fd711b6",
   "outputs": [],
   "execution_count": 13
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:29.216400Z",
     "start_time": "2025-04-29T07:39:29.212933Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def compare_responses(query, hierarchical_response, standard_response, reference=None):\n",
    "    \"\"\"\n",
    "    Compare the responses from hierarchical and standard RAG.\n",
    "\n",
    "    Args:\n",
    "        query (str): The user query\n",
    "        hierarchical_response (str): Response from hierarchical RAG\n",
    "        standard_response (str): Response from standard RAG\n",
    "        reference (str, optional): Reference answer\n",
    "\n",
    "    Returns:\n",
    "        str: The comparison analysis\n",
    "    \"\"\"\n",
    "    # System prompt instructing the model how to evaluate the responses\n",
    "    system_prompt = \"\"\"You are an expert evaluator of information retrieval systems.\n",
    "Compare two responses to the same query, one generated with hierarchical retrieval and one with standard retrieval.\n",
    "\n",
    "Evaluate them on:\n",
    "1. Accuracy: Which response provides more factually accurate information?\n",
    "2. Comprehensiveness: Which response better covers all aspects of the query?\n",
    "3. Coherence: Which response is better organized and flows more logically?\n",
    "4. Page citations: Does either response make better use of page citations?\n",
    "\n",
    "Be specific about the strengths and weaknesses of each approach.\"\"\"\n",
    "\n",
    "    # User prompt containing the query and both responses\n",
    "    user_prompt = f\"\"\"Query: {query}\n",
    "\n",
    "Response from hierarchical RAG:\n",
    "{hierarchical_response}\n",
    "\n",
    "Response from standard RAG:\n",
    "{standard_response}\"\"\"\n",
    "\n",
    "    # Include the reference answer in the prompt if one was provided\n",
    "    if reference:\n",
    "        user_prompt += f\"\"\"\n",
    "\n",
    "Reference answer:\n",
    "{reference}\"\"\"\n",
    "\n",
    "    # Append the final instruction to the user prompt\n",
    "    user_prompt += \"\"\"\n",
    "\n",
    "Please provide a detailed comparison of the two responses, indicating which approach performed better and why.\"\"\"\n",
    "\n",
    "    # Request the comparison analysis from the chat completions API\n",
    "    response = client.chat.completions.create(\n",
    "        model=llm_model,\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},  # System message guiding the assistant\n",
    "            {\"role\": \"user\", \"content\": user_prompt}  # User message with the query and responses\n",
    "        ],\n",
    "        temperature=0  # Deterministic output for evaluation\n",
    "    )\n",
    "\n",
    "    # Return the generated comparison\n",
    "    return response.choices[0].message.content\n"
   ],
   "id": "d9d21a80ec14e634",
   "outputs": [],
   "execution_count": 14
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:29.225047Z",
     "start_time": "2025-04-29T07:39:29.221373Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def run_evaluation(pdf_path, test_queries, reference_answers=None):\n",
    "    \"\"\"\n",
    "    Run a full evaluation over multiple test queries.\n",
    "\n",
    "    Args:\n",
    "        pdf_path (str): Path to the PDF document\n",
    "        test_queries (List[str]): List of test queries\n",
    "        reference_answers (List[str], optional): Reference answers for the queries\n",
    "\n",
    "    Returns:\n",
    "        Dict: Evaluation results\n",
    "    \"\"\"\n",
    "    results = []  # Collected per-query results\n",
    "\n",
    "    # Iterate over the test queries\n",
    "    for i, query in enumerate(test_queries):\n",
    "        print(f\"Query: {query}\")  # Print the current query\n",
    "\n",
    "        # Get the reference answer, if available\n",
    "        reference = None\n",
    "        if reference_answers and i < len(reference_answers):\n",
    "            reference = reference_answers[i]  # Reference answer for this query\n",
    "\n",
    "        # Compare the hierarchical and standard RAG approaches\n",
    "        result = compare_approaches(query, pdf_path, reference)\n",
    "        results.append(result)  # Add the result to the list\n",
    "\n",
    "    # Generate an overall analysis of the evaluation results\n",
    "    overall_analysis = generate_overall_analysis(results)\n",
    "\n",
    "    return {\n",
    "        \"results\": results,  # Per-query results\n",
    "        \"overall_analysis\": overall_analysis  # Overall analysis\n",
    "    }\n"
   ],
   "id": "e65d687c1b5932d8",
   "outputs": [],
   "execution_count": 15
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:39:29.234726Z",
     "start_time": "2025-04-29T07:39:29.230365Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def generate_overall_analysis(results):\n",
    "    \"\"\"\n",
    "    Generate an overall analysis of the evaluation results.\n",
    "\n",
    "    Args:\n",
    "        results (List[Dict]): Results from the individual query evaluations\n",
    "\n",
    "    Returns:\n",
    "        str: The overall analysis\n",
    "    \"\"\"\n",
    "    # System prompt instructing the model how to analyze the results\n",
    "    system_prompt = \"\"\"You are an expert evaluator of information retrieval systems.\n",
    "Based on multiple test queries, provide an overall analysis comparing hierarchical RAG with standard RAG.\n",
    "\n",
    "Focus on:\n",
    "1. When hierarchical retrieval performs better and why\n",
    "2. When standard retrieval performs better and why\n",
    "3. The overall strengths and weaknesses of each approach\n",
    "4. Recommendations for when to use each approach\"\"\"\n",
    "\n",
    "    # Build a summary of the individual evaluations\n",
    "    evaluations_summary = \"\"\n",
    "    for i, result in enumerate(results):\n",
    "        evaluations_summary += f\"Query {i+1}: {result['query']}\\n\"\n",
    "        evaluations_summary += f\"Chunks used by hierarchical retrieval: {result['hierarchical_chunks_count']}, by standard retrieval: {result['standard_chunks_count']}\\n\"\n",
    "        evaluations_summary += f\"Comparison summary: {result['comparison'][:200]}...\\n\\n\"\n",
    "\n",
    "    # User prompt containing the evaluation summary\n",
    "    user_prompt = f\"\"\"Based on the following evaluation of hierarchical vs. standard RAG across {len(results)} queries,\n",
    "provide an overall analysis of the two approaches:\n",
    "\n",
    "{evaluations_summary}\n",
    "\n",
    "Please analyze in detail the relative strengths and weaknesses of hierarchical and standard RAG\n",
    "in retrieval quality and response generation, with specific observations.\"\"\"\n",
    "\n",
    "    # Request the overall analysis from the chat completions API\n",
    "    response = client.chat.completions.create(\n",
    "        model=llm_model,\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},  # System message guiding the assistant\n",
    "            {\"role\": \"user\", \"content\": user_prompt}       # User message with the evaluation summary\n",
    "        ],\n",
    "        temperature=0  # Deterministic output for evaluation\n",
    "    )\n",
    "\n",
    "    # Return the generated analysis\n",
    "    return response.choices[0].message.content"
   ],
   "id": "9313cd2c46a5c288",
   "outputs": [],
   "execution_count": 16
  },
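  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "下面用一个最小示例说明 `generate_overall_analysis` 期望的 `results` 结构，以及函数内部拼接评估摘要的逻辑（示意代码：键名取自上面的函数体，数据为虚构示例，不调用任何 API）：\n",
    "\n",
    "```python\n",
    "# 虚构的评估结果列表，键名与 generate_overall_analysis 中读取的键一致\n",
    "results = [\n",
    "    {\n",
    "        \"query\": \"示例查询\",\n",
    "        \"hierarchical_chunks_count\": 3,\n",
    "        \"standard_chunks_count\": 15,\n",
    "        \"comparison\": \"分级RAG检索的文本块更少但相关性更高。\",\n",
    "    }\n",
    "]\n",
    "\n",
    "# 与函数体中相同的摘要拼接逻辑\n",
    "evaluations_summary = \"\"\n",
    "for i, result in enumerate(results):\n",
    "    evaluations_summary += f\"查询 {i+1}: {result['query']}\\n\"\n",
    "    evaluations_summary += (\n",
    "        f\"分级检索使用的文本块数: {result['hierarchical_chunks_count']}, \"\n",
    "        f\"标准检索使用的文本块数: {result['standard_chunks_count']}\\n\"\n",
    "    )\n",
    "    evaluations_summary += f\"比较摘要: {result['comparison'][:200]}...\\n\\n\"\n",
    "\n",
    "print(evaluations_summary)\n",
    "```\n",
    "\n",
    "这样可以在调用 LLM 之前，先检查传入 `generate_overall_analysis` 的数据结构是否正确。"
   ],
   "id": "3f9d2c7a1b4e5d60"
  },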
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 分级RAG与标准RAG方法的评估",
   "id": "dfb7c6ae5834c1a2"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-29T07:43:36.292772Z",
     "start_time": "2025-04-29T07:39:29.240281Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# 用于测试分级RAG方法的示例查询\n",
    "query = \"Transformer模型在自然语言处理中的关键应用有哪些？\"\n",
    "result = hierarchical_rag(query, pdf_path)\n",
    "\n",
    "print(\"\\n=== 回答 ===\")\n",
    "print(result[\"response\"])\n",
    "\n",
    "# 正式评估使用的测试查询（此处仅使用一个查询）\n",
    "test_queries = [\n",
    "    \"Transformer是如何处理序列数据的，与RNN相比有何不同？\"\n",
    "]\n",
    "\n",
    "# 测试查询的参考答案，用于进行比较\n",
    "reference_answers = [\n",
    "    \"Transformer通过自注意力机制而非循环连接来处理序列数据，这使得Transformer可以并行处理所有token，而不是像RNN那样按顺序处理。这种方法更高效地捕捉长距离依赖关系，并在训练期间实现更好的并行化。与RNN不同，Transformer在处理长序列时不会出现梯度消失的问题。\"\n",
    "]\n",
    "\n",
    "# 运行评估，比较分级RAG与标准RAG方法\n",
    "evaluation_results = run_evaluation(\n",
    "    pdf_path=pdf_path,\n",
    "    test_queries=test_queries,\n",
    "    reference_answers=reference_answers\n",
    ")\n",
    "\n",
    "# 打印对两种方法的整体分析\n",
    "print(\"\\n=== 整体分析 ===\")\n",
    "print(evaluation_results[\"overall_analysis\"])"
   ],
   "id": "58d766ef44dcef47",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "处理文档并创建向量存储...\n",
      "正在提取文本 data/AI_Information.en.zh-CN.pdf...\n",
      "已提取 15 页的内容\n",
      "生成页面摘要...\n",
      "正在摘要第 1/15 页...\n",
      "正在摘要第 2/15 页...\n",
      "正在摘要第 3/15 页...\n",
      "正在摘要第 4/15 页...\n",
      "正在摘要第 5/15 页...\n",
      "正在摘要第 6/15 页...\n",
      "正在摘要第 7/15 页...\n",
      "正在摘要第 8/15 页...\n",
      "正在摘要第 9/15 页...\n",
      "正在摘要第 10/15 页...\n",
      "正在摘要第 11/15 页...\n",
      "正在摘要第 12/15 页...\n",
      "正在摘要第 13/15 页...\n",
      "正在摘要第 14/15 页...\n",
      "正在摘要第 15/15 页...\n",
      "已创建 15 个详细块\n",
      "正在为摘要创建嵌入...\n",
      "正在为详细块创建嵌入...\n",
      "已创建包含 15 个摘要和 15 个块的向量存储\n",
      "正在为查询执行分层检索: Transformer模型在自然语言处理中的关键应用有哪些？\n",
      "检索到 3 个相关摘要\n",
      "从相关页面检索到 3 个详细块\n",
      "\n",
      "=== 回答 ===\n",
      "根据提供的上下文内容，没有明确提到\"Transformer模型\"的相关信息（Page 1-3均未提及该术语）。上下文主要介绍了自然语言处理（NLP）作为人工智能的一个分支（Page 2），其应用包括聊天机器人、机器翻译、文本摘要和情感分析等，但并未具体说明这些应用是否由Transformer模型实现。\n",
      "\n",
      "建议补充Transformer模型相关的上下文内容，或确认是否需要基于现有信息回答。当前可确认的是：\n",
      "1. NLP的通用应用领域已在Page 2列出\n",
      "2. 深度学习（包含神经网络）是NLP的基础技术之一（Page 2）\n",
      "3. 但未涉及Transformer这一特定架构的说明\n",
      "Query: Transformer是如何处理序列数据的，与RNN相比有何不同？\n",
      "\n",
      "=== 对于查询 Transformer是如何处理序列数据的，与RNN相比有何不同？ 比较 RAG 方法 ===\n",
      "\n",
      "运行分层 RAG...\n",
      "加载现有的向量存储...\n",
      "正在为查询执行分层检索: Transformer是如何处理序列数据的，与RNN相比有何不同？\n",
      "检索到 3 个相关摘要\n",
      "从相关页面检索到 3 个详细块\n",
      "\n",
      "运行标准 RAG...\n",
      "正在提取文本 data/AI_Information.en.zh-CN.pdf...\n",
      "已提取 15 页的内容\n",
      "为标准 RAG 创建了 15 个块\n",
      "正在为块创建嵌入...\n",
      "通过标准 RAG 检索到 15 个块\n",
      "\n",
      "=== 整体分析 ===\n",
      "### 分级RAG与标准RAG的对比分析（基于示例查询）\n",
      "\n",
      "#### **1. 检索质量对比**\n",
      "- **分级RAG优势**：  \n",
      "  - **精准性**：通过动态调整检索范围（本例仅用3个文本块），优先选择高置信度片段，避免低相关性内容污染上下文。  \n",
      "  - **容错性**：当高层级（如粗粒度检索）未命中关键信息时，明确承认知识盲区（如直接说明\"缺乏Transformer的具体信息\"），而非强行生成。  \n",
      "  - **效率**：减少无关文本处理开销，尤其适合**明确边界的问题**（如需要对比特定技术细节时）。  \n",
      "\n",
      "- **标准RAG劣势**：  \n",
      "  - **噪声引入**：强制检索固定数量文本块（本例15个），可能混入低质量内容（如Page 12的间接推断），导致生成答案时被迫\"脑补\"。  \n",
      "  - **过度泛化**：试图用宽泛上下文填补细节缺失（如将RNN的序列处理缺陷间接套用到Transformer），增加事实性错误风险。  \n",
      "\n",
      "#### **2. 回答生成对比**\n",
      "- **分级RAG特点**：  \n",
      "  - **保守但可靠**：生成策略与检索结果严格对齐，缺少直接证据时选择\"知之为知之\"（如示例中的诚实声明），适合**高事实性要求场景**（学术、医疗等）。  \n",
      "  - **解释性**：可通过分级逻辑向用户说明检索过程（如\"未找到足够细粒度数据\"），增强可信度。  \n",
      "\n",
      "- **标准RAG特点**：  \n",
      "  - **覆盖性优先**：倾向于利用所有检索内容生成看似完整的答案，但可能包含未经验证的关联（如将RNN的缺陷与Transformer优势强行对比）。  \n",
      "  - **流畅性陷阱**：因上下文更庞杂，生成的答案往往更长、更\"流畅\"，但可能掩盖逻辑漏洞（如示例中的间接推断问题）。  \n",
      "\n",
      "#### **3. 关键场景适用性**\n",
      "- **优先选择分级RAG**：  \n",
      "  - 问题需要**精确技术细节**（如算法对比、参数说明）  \n",
      "  - 数据源存在**质量不均**或**领域专业性高**（如法律、医学文献）  \n",
      "  - 用户容忍\"部分回答\"但要求**零幻觉**  \n",
      "\n",
      "- **优先选择标准RAG**：  \n",
      "  - 问题偏向**概述性**或**多角度讨论**（如\"深度学习的优缺点\"）  \n",
      "  - 数据源质量均匀且**冗余度高**（如维基百科类文本）  \n",
      "  - 用户更看重答案**连贯性**而非绝对精确  \n",
      "\n",
      "#### **4. 改进方向建议**\n",
      "- **分级RAG**：可增加\"分级置信度\"指标（如标注\"本回答基于Top 3可信片段，覆盖度70%\"），平衡保守性与实用性。  \n",
      "- **标准RAG**：需引入**断言验证机制**（如对\"Transformer并行处理\"等关键说法检查直接引文），减少间接推断。  \n",
      "\n",
      "**总结**：本例中分级RAG的准确性优势凸显了其在技术性查询中的价值，而标准RAG的\"尽力回答\"策略在开放性问题上可能更友好。选择取决于任务的核心需求——**事实优先选分级，覆盖优先选标准**。\n"
     ]
    }
   ],
   "execution_count": 17
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
    "codemirror_mode": {
     "name": "ipython",
     "version": 3
    },
    "file_extension": ".py",
    "mimetype": "text/x-python",
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
    "version": "3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
