{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# Corrective RAG (CRAG) Implementation\n",
    "\n",
    "This notebook implements **Corrective RAG (CRAG)**, an advanced approach that dynamically evaluates retrieved information and, when necessary, corrects the retrieval step by using web search as a fallback.\n",
    "\n",
    "-----\n",
    "CRAG improves on traditional RAG in the following ways:\n",
    "\n",
    "- Retrieved content is evaluated before it is used\n",
    "- The system dynamically switches between knowledge sources based on relevance\n",
    "- Retrieval is corrected with web search when local knowledge cannot answer the question\n",
    "- Information from multiple sources is merged when appropriate\n",
    "\n",
    "-----\n",
    "Implementation steps:\n",
    "- Process the document and build a vector store\n",
    "- Embed the query and retrieve documents\n",
    "- Evaluate the relevance of each retrieved document\n",
    "- Choose a knowledge-acquisition strategy based on the relevance score: high relevance (score > 0.7) uses the document content directly; low relevance (score < 0.3) falls back to web search; medium relevance (0.3–0.7) combines the document with web search results, with the model distilling the merged content to avoid redundant, overlapping text\n",
    "- Generate the final answer"
   ],
   "id": "d138de2d9494aee"
  },
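  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The threshold routing described above can be sketched as a small standalone function (a minimal sketch: the 0.7 and 0.3 thresholds follow the steps listed here, and the strategy names are illustrative):\n",
    "\n",
    "```python\n",
    "def choose_strategy(max_score, high=0.7, low=0.3):\n",
    "    # Route the query based on the best retrieval relevance score.\n",
    "    if max_score > high:\n",
    "        return 'document'  # high relevance: use the document directly\n",
    "    if max_score < low:\n",
    "        return 'web'  # low relevance: fall back to web search\n",
    "    return 'hybrid'  # medium relevance: combine both sources\n",
    "```"
   ],
   "id": "9a1f3c5e7d2b4680"
  },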
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:00.882585Z",
     "start_time": "2025-04-30T07:21:52.858642Z"
    }
   },
   "cell_type": "code",
   "source": [
    "import fitz\n",
    "import os\n",
    "import re\n",
    "import json\n",
    "import numpy as np\n",
    "from tqdm import tqdm\n",
    "from openai import OpenAI\n",
    "from dotenv import load_dotenv\n",
    "from datetime import datetime\n",
    "import networkx as nx\n",
    "import matplotlib\n",
    "import matplotlib.pyplot as plt\n",
    "import heapq\n",
    "from sklearn.metrics.pairwise import cosine_similarity\n",
    "import jieba\n",
    "from typing import List, Dict, Tuple, Any\n",
    "import pickle\n",
    "import requests\n",
    "from urllib.parse import quote_plus\n",
    "\n",
    "load_dotenv()"
   ],
   "id": "4fa9f5a6b3321e7c",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 1
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.236258Z",
     "start_time": "2025-04-30T07:22:00.942402Z"
    }
   },
   "cell_type": "code",
   "source": [
    "client = OpenAI(\n",
    "    base_url=os.getenv(\"LLM_BASE_URL\"),\n",
    "    api_key=os.getenv(\"LLM_API_KEY\")\n",
    ")\n",
    "llm_model = os.getenv(\"LLM_MODEL_ID\")\n",
    "embedding_model = os.getenv(\"EMBEDDING_MODEL_ID\")\n",
    "\n",
    "pdf_path = \"../../data/AI_Information.en.zh-CN.pdf\""
   ],
   "id": "d92002e797fb2d7f",
   "outputs": [],
   "execution_count": 2
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Document Processing Functions",
   "id": "70c3ddd521484b66"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.361638Z",
     "start_time": "2025-04-30T07:22:01.355159Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def extract_text_from_pdf(pdf_path):\n",
    "    \"\"\"\n",
    "    Extract the text content from a PDF file.\n",
    "\n",
    "    Args:\n",
    "        pdf_path (str): Path to the PDF file\n",
    "\n",
    "    Returns:\n",
    "        str: The extracted text content\n",
    "    \"\"\"\n",
    "    print(f\"Extracting text from {pdf_path}...\")\n",
    "\n",
    "    # Open the PDF file\n",
    "    pdf = fitz.open(pdf_path)\n",
    "    text = \"\"\n",
    "\n",
    "    # Iterate over every page in the PDF\n",
    "    for page_num in range(len(pdf)):\n",
    "        page = pdf[page_num]\n",
    "        # Extract text from the current page and append it\n",
    "        text += page.get_text()\n",
    "\n",
    "    # Release the file handle before returning\n",
    "    pdf.close()\n",
    "    return text"
   ],
   "id": "6ba0b73d9b14da4d",
   "outputs": [],
   "execution_count": 3
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.379262Z",
     "start_time": "2025-04-30T07:22:01.373539Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def chunk_text(text, chunk_size=1000, overlap=200):\n",
    "    \"\"\"\n",
    "    Split text into overlapping chunks for efficient retrieval and processing.\n",
    "\n",
    "    This function divides a large text into smaller, manageable chunks with a\n",
    "    specified character overlap between consecutive chunks. Chunking is critical\n",
    "    for RAG systems because it enables more precise retrieval of relevant information.\n",
    "\n",
    "    Args:\n",
    "        text (str): The input text to chunk\n",
    "        chunk_size (int): Maximum number of characters per chunk\n",
    "        overlap (int): Number of overlapping characters between consecutive chunks,\n",
    "                       used to preserve context across chunk boundaries\n",
    "\n",
    "    Returns:\n",
    "        List[Dict]: A list of chunks, each containing:\n",
    "                   - text: the chunk content\n",
    "                   - metadata: a dict with position info and source type\n",
    "    \"\"\"\n",
    "    chunks = []\n",
    "\n",
    "    # Slide a window over the text, advancing by (chunk_size - overlap)\n",
    "    # each step so that consecutive chunks overlap appropriately\n",
    "    for i in range(0, len(text), chunk_size - overlap):\n",
    "        # Extract the current chunk, at most chunk_size characters\n",
    "        chunk = text[i:i + chunk_size]\n",
    "\n",
    "        # Only keep non-empty chunks\n",
    "        if chunk:\n",
    "            chunks.append({\n",
    "                \"text\": chunk,  # the actual text content\n",
    "                \"metadata\": {\n",
    "                    \"start_pos\": i,  # start position in the original text\n",
    "                    \"end_pos\": i + len(chunk),  # end position\n",
    "                    \"source_type\": \"document\"  # where this text came from\n",
    "                }\n",
    "            })\n",
    "\n",
    "    print(f\"Created {len(chunks)} text chunks\")\n",
    "    return chunks"
   ],
   "id": "1218ea71e195412",
   "outputs": [],
   "execution_count": 4
  },
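  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The sliding-window arithmetic in `chunk_text` can be checked in isolation. A minimal sketch (`chunk_spans` is a hypothetical helper that only reproduces the window offsets):\n",
    "\n",
    "```python\n",
    "def chunk_spans(n, chunk_size=1000, overlap=200):\n",
    "    # Window (start, end) offsets: starts advance by chunk_size - overlap.\n",
    "    return [(i, min(i + chunk_size, n)) for i in range(0, n, chunk_size - overlap)]\n",
    "```\n",
    "\n",
    "With the defaults, a 2000-character text yields windows starting at 0, 800, and 1600, so each chunk shares 200 characters with its neighbor."
   ],
   "id": "b2e4d6f8a0c19753"
  },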
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Vector Store",
   "id": "8fe973ad9d62ebec"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.399373Z",
     "start_time": "2025-04-30T07:22:01.391409Z"
    }
   },
   "cell_type": "code",
   "source": [
    "class SimpleVectorStore:\n",
    "    \"\"\"\n",
    "    A simple vector store implemented with NumPy.\n",
    "    \"\"\"\n",
    "    def __init__(self):\n",
    "        # Parallel lists holding vectors, texts, and metadata\n",
    "        self.vectors = []\n",
    "        self.texts = []\n",
    "        self.metadata = []\n",
    "\n",
    "    def add_item(self, text, embedding, metadata=None):\n",
    "        \"\"\"\n",
    "        Add a single item to the vector store.\n",
    "\n",
    "        Args:\n",
    "            text (str): The text content\n",
    "            embedding (List[float]): The embedding vector\n",
    "            metadata (Dict, optional): Additional metadata\n",
    "        \"\"\"\n",
    "        # Append the embedding, text, and metadata to their respective lists\n",
    "        self.vectors.append(np.array(embedding))\n",
    "        self.texts.append(text)\n",
    "        self.metadata.append(metadata or {})\n",
    "\n",
    "    def add_items(self, items, embeddings):\n",
    "        \"\"\"\n",
    "        Add multiple items to the vector store at once.\n",
    "\n",
    "        Args:\n",
    "            items (List[Dict]): Items containing text and metadata\n",
    "            embeddings (List[List[float]]): Embedding vectors, one per item\n",
    "        \"\"\"\n",
    "        # Add each (item, embedding) pair to the store\n",
    "        for item, embedding in zip(items, embeddings):\n",
    "            self.add_item(\n",
    "                text=item[\"text\"],\n",
    "                embedding=embedding,\n",
    "                metadata=item.get(\"metadata\", {})\n",
    "            )\n",
    "\n",
    "    def similarity_search(self, query_embedding, k=5):\n",
    "        \"\"\"\n",
    "        Find the k items most similar to the query embedding.\n",
    "\n",
    "        Args:\n",
    "            query_embedding (List[float]): The query embedding vector\n",
    "            k (int): Number of results to return\n",
    "\n",
    "        Returns:\n",
    "            List[Dict]: The top-k most similar items, with text, metadata, and similarity score\n",
    "        \"\"\"\n",
    "        # Return an empty list if the store is empty\n",
    "        if not self.vectors:\n",
    "            return []\n",
    "\n",
    "        # Convert the query embedding to a NumPy array\n",
    "        query_vector = np.array(query_embedding)\n",
    "\n",
    "        # Compute cosine similarity against every stored vector\n",
    "        similarities = []\n",
    "        for i, vector in enumerate(self.vectors):\n",
    "            similarity = np.dot(query_vector, vector) / (\n",
    "                np.linalg.norm(query_vector) * np.linalg.norm(vector)\n",
    "            )\n",
    "            similarities.append((i, similarity))\n",
    "\n",
    "        # Sort by similarity in descending order\n",
    "        similarities.sort(key=lambda x: x[1], reverse=True)\n",
    "\n",
    "        # Collect the top k results\n",
    "        results = []\n",
    "        for i in range(min(k, len(similarities))):\n",
    "            idx, score = similarities[i]\n",
    "            results.append({\n",
    "                \"text\": self.texts[idx],\n",
    "                \"metadata\": self.metadata[idx],\n",
    "                \"similarity\": float(score)\n",
    "            })\n",
    "\n",
    "        return results"
   ],
   "id": "dce4a4a1df944f9d",
   "outputs": [],
   "execution_count": 5
  },
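  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The per-vector loop in `similarity_search` can also be expressed as a single vectorized NumPy operation. A minimal sketch (the `top_k_cosine` helper is illustrative and assumes all stored vectors share the query's dimension):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def top_k_cosine(query_vec, vectors, k=5):\n",
    "    # Stack the stored vectors into a matrix and score them all at once.\n",
    "    m = np.vstack(vectors).astype(float)\n",
    "    q = np.asarray(query_vec, dtype=float)\n",
    "    sims = (m @ q) / (np.linalg.norm(m, axis=1) * np.linalg.norm(q))\n",
    "    order = np.argsort(-sims)[:k]  # indices of the k highest similarities\n",
    "    return [(int(i), float(sims[i])) for i in order]\n",
    "```"
   ],
   "id": "c3f5a7b9d1e08642"
  },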
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Creating Embeddings",
   "id": "a635b88cce7e4ac"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.418762Z",
     "start_time": "2025-04-30T07:22:01.410528Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def create_embeddings(texts):\n",
    "    \"\"\"\n",
    "    Create vector embeddings for the input text(s).\n",
    "\n",
    "    Embeddings are dense vector representations of text that capture semantic\n",
    "    meaning, enabling similarity comparisons. In a RAG system, embeddings are\n",
    "    essential for matching queries against relevant document chunks.\n",
    "\n",
    "    Args:\n",
    "        texts (str or List[str]): The input text(s) to embed. Can be a single\n",
    "                                  string or a list of strings.\n",
    "\n",
    "    Returns:\n",
    "        List[List[float]]: A list of embedding vectors if the input is a list;\n",
    "                          a single embedding vector if the input is a string.\n",
    "    \"\"\"\n",
    "    # Normalize both input forms (single string or list) to a list\n",
    "    input_texts = texts if isinstance(texts, list) else [texts]\n",
    "\n",
    "    # Process in batches to stay within API rate and request-size limits\n",
    "    batch_size = 100\n",
    "    all_embeddings = []\n",
    "\n",
    "    # Iterate over the texts batch by batch\n",
    "    for i in range(0, len(input_texts), batch_size):\n",
    "        # Take the current batch of texts\n",
    "        batch = input_texts[i:i + batch_size]\n",
    "\n",
    "        # Call the API to create embeddings for this batch\n",
    "        response = client.embeddings.create(\n",
    "            model=embedding_model,\n",
    "            input=batch\n",
    "        )\n",
    "\n",
    "        # Extract the embedding vectors from the response\n",
    "        batch_embeddings = [item.embedding for item in response.data]\n",
    "        all_embeddings.extend(batch_embeddings)\n",
    "\n",
    "    # If the original input was a single string, return just its embedding\n",
    "    if isinstance(texts, str):\n",
    "        return all_embeddings[0]\n",
    "\n",
    "    # Otherwise return the full list of embeddings\n",
    "    return all_embeddings"
   ],
   "id": "d4fe2d3416566d8c",
   "outputs": [],
   "execution_count": 6
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Document Processing Pipeline",
   "id": "5433a806529631e0"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.437903Z",
     "start_time": "2025-04-30T07:22:01.429331Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def process_document(pdf_path, chunk_size=1000, chunk_overlap=200):\n",
    "    \"\"\"\n",
    "    Process a document and load it into a vector store.\n",
    "\n",
    "    Args:\n",
    "        pdf_path (str): Path to the PDF file\n",
    "        chunk_size (int): Number of characters per chunk\n",
    "        chunk_overlap (int): Number of overlapping characters between chunks\n",
    "\n",
    "    Returns:\n",
    "        SimpleVectorStore: A vector store containing the document chunks and their embeddings\n",
    "    \"\"\"\n",
    "    # Extract text from the PDF file\n",
    "    text = extract_text_from_pdf(pdf_path)\n",
    "\n",
    "    # Split the extracted text into chunks of the given size and overlap\n",
    "    chunks = chunk_text(text, chunk_size, chunk_overlap)\n",
    "\n",
    "    # Create an embedding for each chunk\n",
    "    print(\"Creating embeddings for text chunks...\")\n",
    "    chunk_texts = [chunk[\"text\"] for chunk in chunks]\n",
    "    chunk_embeddings = create_embeddings(chunk_texts)\n",
    "\n",
    "    # Initialize a new vector store\n",
    "    vector_store = SimpleVectorStore()\n",
    "\n",
    "    # Add the chunks and their embeddings to the store\n",
    "    vector_store.add_items(chunks, chunk_embeddings)\n",
    "\n",
    "    print(f\"Created a vector store with {len(chunks)} text chunks\")\n",
    "    return vector_store"
   ],
   "id": "494ca072d891ce5b",
   "outputs": [],
   "execution_count": 7
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Relevance Evaluation Function\n",
   "id": "bda4fc083be29fb5"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.462914Z",
     "start_time": "2025-04-30T07:22:01.455394Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def evaluate_document_relevance(query, document):\n",
    "    \"\"\"\n",
    "    Evaluate how relevant a document is to a query.\n",
    "\n",
    "    Args:\n",
    "        query (str): The user query\n",
    "        document (str): The document text\n",
    "\n",
    "    Returns:\n",
    "        float: Relevance score (0 to 1)\n",
    "    \"\"\"\n",
    "    # System prompt instructing the model how to rate relevance\n",
    "    system_prompt = \"\"\"\n",
    "    You are an expert at evaluating document relevance.\n",
    "    Rate the relevance of the given document to the query on a scale from 0 to 1.\n",
    "    0 means completely irrelevant; 1 means perfectly relevant.\n",
    "    Return only a single floating-point score between 0 and 1, with no explanation.\n",
    "    \"\"\"\n",
    "\n",
    "    # User prompt containing the query and the document\n",
    "    user_prompt = f\"Query: {query}\\n\\nDocument: {document}\"\n",
    "\n",
    "    try:\n",
    "        # Call the API to score relevance\n",
    "        response = client.chat.completions.create(\n",
    "            model=llm_model,\n",
    "            messages=[\n",
    "                {\"role\": \"system\", \"content\": system_prompt},  # guides the assistant's behavior\n",
    "                {\"role\": \"user\", \"content\": user_prompt}  # contains the query and document\n",
    "            ],\n",
    "            temperature=0,  # lowest temperature for consistent scoring\n",
    "            max_tokens=5  # only a short numeric score is needed\n",
    "        )\n",
    "\n",
    "        # Extract the score from the response\n",
    "        score_text = response.choices[0].message.content.strip()\n",
    "        # Use a regular expression to pull out a floating-point value\n",
    "        score_match = re.search(r'(\\\\d+(\\\\.\\\\d+)?)', score_text)\n",
    "        if score_match:\n",
    "            return float(score_match.group(1))  # return the parsed score\n",
    "        return 0.5  # fall back to a neutral score if parsing fails\n",
    "\n",
    "    except Exception as e:\n",
    "        # Print the error and return a default score on failure\n",
    "        print(f\"Error evaluating document relevance: {e}\")\n",
    "        return 0.5  # default to a medium score on error"
   ],
   "id": "15e3b727c80543bd",
   "outputs": [],
   "execution_count": 8
  },
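  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The score parsing above trusts the model to stay within [0, 1]. A slightly more defensive variant (a sketch; `parse_relevance_score` is a hypothetical helper) clamps whatever number comes back:\n",
    "\n",
    "```python\n",
    "def parse_relevance_score(score_text, default=0.5):\n",
    "    # Take the first token that parses as a float, clamped to [0, 1].\n",
    "    for token in score_text.replace(':', ' ').split():\n",
    "        try:\n",
    "            value = float(token)\n",
    "        except ValueError:\n",
    "            continue\n",
    "        return max(0.0, min(1.0, value))\n",
    "    return default  # nothing numeric found\n",
    "```"
   ],
   "id": "d4a6b8c0e2f19753"
  },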
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Web Search Functions",
   "id": "b6ad4dfa01a967c"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.492688Z",
     "start_time": "2025-04-30T07:22:01.480222Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def duck_duck_go_search(query, num_results=3):\n",
    "    \"\"\"\n",
    "    Perform a web search using DuckDuckGo.\n",
    "\n",
    "    Args:\n",
    "        query (str): The search query\n",
    "        num_results (int): Number of results to return\n",
    "\n",
    "    Returns:\n",
    "        Tuple[str, List[Dict]]: Combined search result text and source metadata\n",
    "    \"\"\"\n",
    "    # URL-encode the query\n",
    "    encoded_query = quote_plus(query)\n",
    "\n",
    "    # DuckDuckGo's unofficial Instant Answer API endpoint\n",
    "    url = f\"https://api.duckduckgo.com/?q={encoded_query}&format=json\"\n",
    "\n",
    "    try:\n",
    "        # Send the search request\n",
    "        response = requests.get(url, headers={\n",
    "            \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36\"\n",
    "        }, timeout=10)\n",
    "        data = response.json()\n",
    "\n",
    "        # Accumulators for result text and source info\n",
    "        results_text = \"\"\n",
    "        sources = []\n",
    "\n",
    "        # Add the abstract, if present\n",
    "        if data.get(\"AbstractText\"):\n",
    "            results_text += f\"{data['AbstractText']}\\n\\n\"\n",
    "            sources.append({\n",
    "                \"title\": data.get(\"AbstractSource\", \"Wikipedia\"),\n",
    "                \"url\": data.get(\"AbstractURL\", \"\")\n",
    "            })\n",
    "\n",
    "        # Add related-topic results\n",
    "        for topic in data.get(\"RelatedTopics\", [])[:num_results]:\n",
    "            if \"Text\" in topic and \"FirstURL\" in topic:\n",
    "                results_text += f\"{topic['Text']}\\n\\n\"\n",
    "                sources.append({\n",
    "                    \"title\": topic.get(\"Text\", \"\").split(\" - \")[0],\n",
    "                    \"url\": topic.get(\"FirstURL\", \"\")\n",
    "                })\n",
    "\n",
    "        return results_text, sources\n",
    "\n",
    "    except Exception as e:\n",
    "        # Print the error if the primary search fails\n",
    "        print(f\"Error performing web search: {e}\")\n",
    "\n",
    "        # Fall back to a backup search API (SerpAPI; note that a real\n",
    "        # SerpAPI request also requires an api_key parameter)\n",
    "        try:\n",
    "            backup_url = f\"https://serpapi.com/search.json?q={encoded_query}&engine=duckduckgo\"\n",
    "            response = requests.get(backup_url, timeout=10)\n",
    "            data = response.json()\n",
    "\n",
    "            # Accumulators\n",
    "            results_text = \"\"\n",
    "            sources = []\n",
    "\n",
    "            # Extract results from the backup API\n",
    "            for result in data.get(\"organic_results\", [])[:num_results]:\n",
    "                results_text += f\"{result.get('title', '')}: {result.get('snippet', '')}\\n\\n\"\n",
    "                sources.append({\n",
    "                    \"title\": result.get(\"title\", \"\"),\n",
    "                    \"url\": result.get(\"link\", \"\")\n",
    "                })\n",
    "\n",
    "            return results_text, sources\n",
    "        except Exception as backup_error:\n",
    "            # If the backup search also fails, print the error and return empty results\n",
    "            print(f\"Backup search also failed: {backup_error}\")\n",
    "            return \"Unable to retrieve search results.\", []"
   ],
   "id": "7336712a72e1f94e",
   "outputs": [],
   "execution_count": 9
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.501986Z",
     "start_time": "2025-04-30T07:22:01.496384Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def rewrite_search_query(query):\n",
    "    \"\"\"\n",
    "    Rewrite a query into a form better suited to web search.\n",
    "\n",
    "    Args:\n",
    "        query (str): The original query\n",
    "\n",
    "    Returns:\n",
    "        str: The rewritten query\n",
    "    \"\"\"\n",
    "    # System prompt instructing the model how to rewrite the query\n",
    "    system_prompt = \"\"\"\n",
    "    You are an expert at writing effective search queries.\n",
    "    Rewrite the given query into a form better suited to a search engine.\n",
    "    Focus on keywords and facts, remove unnecessary words, and make the query concise and specific.\n",
    "    \"\"\"\n",
    "\n",
    "    try:\n",
    "        # Call the API to rewrite the query\n",
    "        response = client.chat.completions.create(\n",
    "            model=llm_model,\n",
    "            messages=[\n",
    "                {\"role\": \"system\", \"content\": system_prompt},  # guides the assistant's behavior\n",
    "                {\"role\": \"user\", \"content\": f\"Original query: {query}\\n\\nRewritten query:\"}  # the original query\n",
    "            ],\n",
    "            temperature=0.3,  # modest temperature to control randomness\n",
    "            max_tokens=50  # keep the response short\n",
    "        )\n",
    "\n",
    "        # Return the rewritten query, stripped of surrounding whitespace\n",
    "        return response.choices[0].message.content.strip()\n",
    "\n",
    "    except Exception as e:\n",
    "        # On error, print the message and fall back to the original query\n",
    "        print(f\"Error rewriting search query: {e}\")\n",
    "        return query  # return the original query on failure"
   ],
   "id": "800420f2cbd2a47d",
   "outputs": [],
   "execution_count": 10
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.518838Z",
     "start_time": "2025-04-30T07:22:01.511011Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def perform_web_search(query):\n",
    "    \"\"\"\n",
    "    Perform a web search using a rewritten version of the query.\n",
    "\n",
    "    Args:\n",
    "        query (str): The user's original query\n",
    "\n",
    "    Returns:\n",
    "        Tuple[str, List[Dict]]: Search result text and a list of source metadata\n",
    "    \"\"\"\n",
    "    # Rewrite the query to improve search quality\n",
    "    rewritten_query = rewrite_search_query(query)\n",
    "    print(f\"Rewritten search query: {rewritten_query}\")\n",
    "\n",
    "    # Run the web search with the rewritten query\n",
    "    results_text, sources = duck_duck_go_search(rewritten_query)\n",
    "\n",
    "    # Return the results and source info\n",
    "    return results_text, sources"
   ],
   "id": "5daff18f6ada424a",
   "outputs": [],
   "execution_count": 11
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Knowledge Refinement Function\n",
   "id": "ed3004478d39707f"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.538014Z",
     "start_time": "2025-04-30T07:22:01.528952Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def refine_knowledge(text):\n",
    "    \"\"\"\n",
    "    Extract and refine the key information from a text.\n",
    "\n",
    "    Args:\n",
    "        text (str): The input text to refine\n",
    "\n",
    "    Returns:\n",
    "        str: The refined key points\n",
    "    \"\"\"\n",
    "    # System prompt instructing the model how to extract key information\n",
    "    system_prompt = \"\"\"\n",
    "    Extract the key information from the following text as a clear, concise bulleted list.\n",
    "    Focus on the most relevant and important facts and details.\n",
    "    Format your answer as a bulleted list, each item starting with \"• \" on its own line.\n",
    "    \"\"\"\n",
    "\n",
    "    try:\n",
    "        # Call the API to refine the text\n",
    "        response = client.chat.completions.create(\n",
    "            model=llm_model,\n",
    "            messages=[\n",
    "                {\"role\": \"system\", \"content\": system_prompt},  # guides the assistant's behavior\n",
    "                {\"role\": \"user\", \"content\": f\"Text to refine:\\n\\n{text}\"}  # the text to distill\n",
    "            ],\n",
    "            temperature=0.3  # modest temperature to control randomness\n",
    "        )\n",
    "\n",
    "        # Return the refined key points, stripped of surrounding whitespace\n",
    "        return response.choices[0].message.content.strip()\n",
    "\n",
    "    except Exception as e:\n",
    "        # On error, print the message and return the original text\n",
    "        print(f\"Error refining knowledge: {e}\")\n",
    "        return text  # fall back to the original text"
   ],
   "id": "71101923ffe973d",
   "outputs": [],
   "execution_count": 12
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Core CRAG Process",
   "id": "59bd431ba10fc7ee"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.558680Z",
     "start_time": "2025-04-30T07:22:01.544804Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def crag_process(query, vector_store, k=3):\n",
    "    \"\"\"\n",
    "    Run the Corrective RAG (CRAG) process.\n",
    "\n",
    "    Args:\n",
    "        query (str): The user query\n",
    "        vector_store (SimpleVectorStore): Vector store containing the document chunks\n",
    "        k (int): Number of documents to retrieve initially\n",
    "\n",
    "    Returns:\n",
    "        Dict: The processing result, including the response and debug information\n",
    "    \"\"\"\n",
    "    print(f\"\\n=== Processing query with CRAG: {query} ===\\n\")\n",
    "\n",
    "    # Step 1: Embed the query and retrieve documents\n",
    "    print(\"Retrieving initial documents...\")\n",
    "    query_embedding = create_embeddings(query)\n",
    "    retrieved_docs = vector_store.similarity_search(query_embedding, k=k)\n",
    "\n",
    "    # Step 2: Evaluate document relevance\n",
    "    print(\"Evaluating document relevance...\")\n",
    "    relevance_scores = []\n",
    "    for doc in retrieved_docs:\n",
    "        score = evaluate_document_relevance(query, doc[\"text\"])\n",
    "        relevance_scores.append(score)\n",
    "        doc[\"relevance\"] = score\n",
    "        print(f\"Document scored {score:.2f} for relevance\")\n",
    "\n",
    "    # Step 3: Decide the strategy based on the highest relevance score\n",
    "    max_score = max(relevance_scores) if relevance_scores else 0\n",
    "    best_doc_idx = relevance_scores.index(max_score) if relevance_scores else -1\n",
    "\n",
    "    # Track sources for citation\n",
    "    sources = []\n",
    "    final_knowledge = \"\"\n",
    "\n",
    "    # Step 4: Execute the appropriate knowledge-acquisition strategy\n",
    "    if max_score > 0.7:\n",
    "        # Case 1: High relevance - use the document content directly\n",
    "        print(f\"High relevance ({max_score:.2f}) - using document content directly\")\n",
    "        best_doc = retrieved_docs[best_doc_idx][\"text\"]\n",
    "        final_knowledge = best_doc\n",
    "        sources.append({\n",
    "            \"title\": \"Document\",\n",
    "            \"url\": \"\"\n",
    "        })\n",
    "\n",
    "    elif max_score < 0.3:\n",
    "        # Case 2: Low relevance - fall back to web search\n",
    "        print(f\"Low relevance ({max_score:.2f}) - performing web search\")\n",
    "        web_results, web_sources = perform_web_search(query)\n",
    "        final_knowledge = refine_knowledge(web_results)\n",
    "        sources.extend(web_sources)\n",
    "\n",
    "    else:\n",
    "        # Case 3: Medium relevance - combine the document with web search results\n",
    "        print(f\"Medium relevance ({max_score:.2f}) - combining document with web search\")\n",
    "        best_doc = retrieved_docs[best_doc_idx][\"text\"]\n",
    "        refined_doc = refine_knowledge(best_doc)\n",
    "\n",
    "        # Get web search results\n",
    "        web_results, web_sources = perform_web_search(query)\n",
    "        refined_web = refine_knowledge(web_results)\n",
    "\n",
    "        # Merge the two knowledge sources\n",
    "        final_knowledge = f\"From the document:\\n{refined_doc}\\n\\nFrom web search:\\n{refined_web}\"\n",
    "\n",
    "        # Record the sources\n",
    "        sources.append({\n",
    "            \"title\": \"Document\",\n",
    "            \"url\": \"\"\n",
    "        })\n",
    "        sources.extend(web_sources)\n",
    "\n",
    "    # Step 5: Generate the final response\n",
    "    print(\"Generating final response...\")\n",
    "    response = generate_response(query, final_knowledge, sources)\n",
    "\n",
    "    # Return the full processing result\n",
    "    return {\n",
    "        \"query\": query,\n",
    "        \"response\": response,\n",
    "        \"retrieved_docs\": retrieved_docs,\n",
    "        \"relevance_scores\": relevance_scores,\n",
    "        \"max_relevance\": max_score,\n",
    "        \"final_knowledge\": final_knowledge,\n",
    "        \"sources\": sources\n",
    "    }"
   ],
   "id": "fe9270695bf8cdd3",
   "outputs": [],
   "execution_count": 13
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Response Generation",
   "id": "feeadaeca88f1102"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.578084Z",
     "start_time": "2025-04-30T07:22:01.570206Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def generate_response(query, knowledge, sources):\n",
    "    \"\"\"\n",
    "    Generate a response from the query and the provided knowledge.\n",
    "\n",
    "    Args:\n",
    "        query (str): The user's query\n",
    "        knowledge (str): The knowledge to base the response on\n",
    "        sources (List[Dict]): Sources, each with a title and URL\n",
    "\n",
    "    Returns:\n",
    "        str: The generated response text\n",
    "    \"\"\"\n",
    "\n",
    "    # Format the sources for inclusion in the prompt\n",
    "    sources_text = \"\"\n",
    "    for source in sources:\n",
    "        title = source.get(\"title\", \"Unknown source\")\n",
    "        url = source.get(\"url\", \"\")\n",
    "        if url:\n",
    "            sources_text += f\"- {title}: {url}\\n\"\n",
    "        else:\n",
    "            sources_text += f\"- {title}\\n\"\n",
    "\n",
    "    # System prompt instructing the model how to generate the answer\n",
    "    system_prompt = \"\"\"\n",
    "    You are a helpful AI assistant. Generate a comprehensive, informative answer based on the provided knowledge.\n",
    "    Include all relevant information while keeping the language clear and concise.\n",
    "    If the knowledge cannot fully answer the question, point out this limitation.\n",
    "    Cite the sources at the end of your answer.\n",
    "    \"\"\"\n",
    "\n",
    "    # User prompt containing the query, the knowledge, and the sources\n",
    "    user_prompt = f\"\"\"\n",
    "    Query: {query}\n",
    "\n",
    "    Knowledge:\n",
    "    {knowledge}\n",
    "\n",
    "    Sources:\n",
    "    {sources_text}\n",
    "\n",
    "    Please provide a helpful answer based on the information above, listing the sources at the end.\n",
    "    \"\"\"\n",
    "\n",
    "    try:\n",
    "        # Call the API to generate the answer\n",
    "        response = client.chat.completions.create(\n",
    "            model=llm_model,\n",
    "            messages=[\n",
    "                {\"role\": \"system\", \"content\": system_prompt},\n",
    "                {\"role\": \"user\", \"content\": user_prompt}\n",
    "            ],\n",
    "            temperature=0.2  # low temperature for more stable output\n",
    "        )\n",
    "\n",
    "        # Return the generated answer, stripped of surrounding whitespace\n",
    "        return response.choices[0].message.content.strip()\n",
    "\n",
    "    except Exception as e:\n",
    "        # Catch the exception and return an error message\n",
    "        print(f\"Error generating response: {e}\")\n",
    "        return f\"Sorry, an error occurred while answering your question '{query}'. Error: {str(e)}\""
   ],
   "id": "80156794123cf11e",
   "outputs": [],
   "execution_count": 14
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## Evaluation Functions",
   "id": "4015866087e5de84"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.598058Z",
     "start_time": "2025-04-30T07:22:01.590370Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def evaluate_crag_response(query, response, reference_answer=None):\n",
    "    \"\"\"\n",
    "    Evaluate the quality of a CRAG response.\n",
    "\n",
    "    Args:\n",
    "        query (str): The user query\n",
    "        response (str): The generated response\n",
    "        reference_answer (str, optional): A reference answer for comparison\n",
    "\n",
    "    Returns:\n",
    "        Dict: A dictionary of evaluation metrics\n",
    "    \"\"\"\n",
    "\n",
    "    # System prompt instructing the model how to score the response\n",
    "    system_prompt = \"\"\"\n",
    "    You are an expert at evaluating answer quality. Score the provided answer on the following criteria:\n",
    "\n",
    "    1. Relevance (0-10): Does the answer directly address the query?\n",
    "    2. Accuracy (0-10): Is the information factually correct?\n",
    "    3. Completeness (0-10): Does the answer cover all aspects of the query?\n",
    "    4. Clarity (0-10): Is the answer clear and easy to understand?\n",
    "    5. Source quality (0-10): Does the answer cite relevant sources appropriately?\n",
    "\n",
    "    Return each score with a brief explanation in JSON format.\n",
    "    Also include an \"overall_score\" (0-10) and a short \"summary\" of the evaluation.\n",
    "    \"\"\"\n",
    "\n",
    "    # User prompt containing the query and the answer to evaluate\n",
    "    user_prompt = f\"\"\"\n",
    "    Query: {query}\n",
    "\n",
    "    Answer to evaluate:\n",
    "    {response}\n",
    "    \"\"\"\n",
    "\n",
    "    # If a reference answer is provided, add it to the prompt\n",
    "    if reference_answer:\n",
    "        user_prompt += f\"\"\"\n",
    "    Reference answer (for comparison):\n",
    "    {reference_answer}\n",
    "    \"\"\"\n",
    "\n",
    "    try:\n",
    "        # Call the model to perform the evaluation\n",
    "        evaluation_response = client.chat.completions.create(\n",
    "            model=llm_model,\n",
    "            messages=[\n",
    "                {\"role\": \"system\", \"content\": system_prompt},\n",
    "                {\"role\": \"user\", \"content\": user_prompt}\n",
    "            ],\n",
    "            response_format={\"type\": \"json_object\"},  # request a JSON response\n",
    "            temperature=0  # temperature 0 for consistent scoring\n",
    "        )\n",
    "\n",
    "        # Parse the evaluation result\n",
    "        evaluation = json.loads(evaluation_response.choices[0].message.content)\n",
    "        return evaluation\n",
    "\n",
    "    except Exception as e:\n",
    "        # Handle errors during evaluation\n",
    "        print(f\"Error evaluating response: {e}\")\n",
    "        return {\n",
    "            \"error\": str(e),\n",
    "            \"overall_score\": 0,\n",
    "            \"summary\": \"Evaluation failed due to an error.\"\n",
    "        }"
   ],
   "id": "a3cd59abf11e4a74",
   "outputs": [],
   "execution_count": 15
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.614053Z",
     "start_time": "2025-04-30T07:22:01.609170Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def compare_crag_vs_standard_rag(query, vector_store, reference_answer=None):\n",
    "    \"\"\"\n",
    "    Compare CRAG against standard RAG on a given query.\n",
    "\n",
    "    Args:\n",
    "        query (str): The user query\n",
    "        vector_store (SimpleVectorStore): Vector store containing the document chunks\n",
    "        reference_answer (str, optional): A reference answer for comparison\n",
    "\n",
    "    Returns:\n",
    "        Dict: Comparison results, including the query, both responses, and their evaluations\n",
    "    \"\"\"\n",
    "    # Run the CRAG process\n",
    "    print(\"\\n=== Running CRAG ===\")\n",
    "    crag_result = crag_process(query, vector_store)\n",
    "    crag_response = crag_result[\"response\"]\n",
    "\n",
    "    # Run standard RAG (retrieve directly and generate a response)\n",
    "    print(\"\\n=== Running standard RAG ===\")\n",
    "    query_embedding = create_embeddings(query)\n",
    "    retrieved_docs = vector_store.similarity_search(query_embedding, k=3)\n",
    "    combined_text = \"\\n\\n\".join([doc[\"text\"] for doc in retrieved_docs])\n",
    "    standard_sources = [{\"title\": \"Document\", \"url\": \"\"}]\n",
    "    standard_response = generate_response(query, combined_text, standard_sources)\n",
    "\n",
    "    # Evaluate both responses\n",
    "    print(\"\\n=== Evaluating CRAG response ===\")\n",
    "    crag_eval = evaluate_crag_response(query, crag_response, reference_answer)\n",
    "\n",
    "    print(\"\\n=== Evaluating standard RAG response ===\")\n",
    "    standard_eval = evaluate_crag_response(query, standard_response, reference_answer)\n",
    "\n",
    "    # Compare the two approaches head to head\n",
    "    print(\"\\n=== Comparing the two approaches ===\")\n",
    "    comparison = compare_responses(query, crag_response, standard_response, reference_answer)\n",
    "\n",
    "    return {\n",
    "        \"query\": query,\n",
    "        \"crag_response\": crag_response,\n",
    "        \"standard_response\": standard_response,\n",
    "        \"reference_answer\": reference_answer,\n",
    "        \"crag_evaluation\": crag_eval,\n",
    "        \"standard_evaluation\": standard_eval,\n",
    "        \"comparison\": comparison\n",
    "    }\n"
   ],
   "id": "5d5c72c17f13ef36",
   "outputs": [],
   "execution_count": 16
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.632436Z",
     "start_time": "2025-04-30T07:22:01.624399Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def compare_responses(query, crag_response, standard_response, reference_answer=None):\n",
    "    \"\"\"\n",
    "    比较 CRAG 和标准 RAG 的生成回答。\n",
    "\n",
    "    Args:\n",
    "        query (str): 用户查询内容\n",
    "        crag_response (str): CRAG 方法生成的回答\n",
    "        standard_response (str): 标准 RAG 方法生成的回答\n",
    "        reference_answer (str, optional): 参考答案（用于对比）\n",
    "\n",
    "    Returns:\n",
    "        str: 对比分析结果\n",
    "    \"\"\"\n",
    "\n",
    "    # 定义系统指令（system prompt），指导模型如何比较两种方法\n",
    "    system_prompt = \"\"\"\n",
    "    你是评估问答系统的专家，请对以下两种方法进行比较分析：\n",
    "\n",
    "    1. **CRAG**（纠正性检索增强生成）：会先评估文档相关性，并在必要时动态切换至网络搜索的方法。\n",
    "    2. **标准 RAG**（传统检索增强生成）：基于嵌入向量相似性直接检索文档并生成回答。\n",
    "\n",
    "    请从以下维度进行比较分析这两种方法的回答：\n",
    "    - **准确性**：事实内容是否正确？\n",
    "    - **相关性**：回答是否紧扣查询问题？\n",
    "    - **完整性**：是否覆盖了问题的所有方面？\n",
    "    - **清晰度**：语言组织是否清晰易懂？\n",
    "    - **来源质量**：引用是否合理可靠？\n",
    "\n",
    "    最后需说明哪种方法在此特定查询中表现更优，并解释原因。\n",
    "    \"\"\"\n",
    "\n",
    "    # 构建用户提示（user prompt），包含查询和两种回答\n",
    "    user_prompt = f\"\"\"\n",
    "    查询内容：{query}\n",
    "\n",
    "    CRAG 回答：\n",
    "    {crag_response}\n",
    "\n",
    "    标准 RAG 回答：\n",
    "    {standard_response}\n",
    "    \"\"\"\n",
    "\n",
    "    # 如果提供了参考答案，则将其加入提示词中\n",
    "    if reference_answer:\n",
    "        user_prompt += f\"\"\"\n",
    "    参考答案（用于对比）：\n",
    "    {reference_answer}\n",
    "    \"\"\"\n",
    "\n",
    "    try:\n",
    "        # 调用模型进行对比分析\n",
    "        response = client.chat.completions.create(\n",
    "            model=llm_model,\n",
    "            messages=[\n",
    "                {\"role\": \"system\", \"content\": system_prompt},\n",
    "                {\"role\": \"user\", \"content\": user_prompt}\n",
    "            ],\n",
    "            temperature=0  # 低温度以获得更稳定的输出（并不能绝对保证确定性）\n",
    "        )\n",
    "\n",
    "        # 返回模型生成的对比分析结果\n",
    "        return response.choices[0].message.content.strip()\n",
    "\n",
    "    except Exception as e:\n",
    "        # 处理对比过程中的异常情况\n",
    "        print(f\"比较回答时出错: {e}\")\n",
    "        return f\"比较回答时出错：{str(e)}\""
   ],
   "id": "85c6932dd8279f38",
   "outputs": [],
   "execution_count": 17
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 完整的评估流程",
   "id": "b36e439908626862"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.651196Z",
     "start_time": "2025-04-30T07:22:01.642636Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def run_crag_evaluation(pdf_path, test_queries, reference_answers=None):\n",
    "    \"\"\"\n",
    "    运行 CRAG 在多个测试查询上的完整评估流程。\n",
    "\n",
    "    Args:\n",
    "        pdf_path (str): PDF 文档的文件路径\n",
    "        test_queries (List[str]): 测试查询列表\n",
    "        reference_answers (List[str], optional): 每个查询对应的标准答案（用于对比）\n",
    "\n",
    "    Returns:\n",
    "        Dict: 包含所有评估结果的字典\n",
    "    \"\"\"\n",
    "\n",
    "    # 处理文档并创建向量数据库\n",
    "    vector_store = process_document(pdf_path)\n",
    "\n",
    "    results = []  # 存储每个查询的评估结果\n",
    "\n",
    "    # 遍历所有测试查询\n",
    "    for i, query in enumerate(test_queries):\n",
    "        print(f\"\\n\\n===== 正在评估第 {i+1}/{len(test_queries)} 个查询 =====\")\n",
    "        print(f\"查询内容：{query}\")\n",
    "\n",
    "        # 获取当前查询的参考答案（如果提供）\n",
    "        reference = None\n",
    "        if reference_answers and i < len(reference_answers):\n",
    "            reference = reference_answers[i]\n",
    "\n",
    "        # 执行 CRAG 与标准 RAG 的对比评估\n",
    "        result = compare_crag_vs_standard_rag(query, vector_store, reference)\n",
    "        results.append(result)  # 保存单次评估结果\n",
    "\n",
    "        # 显示本次对比结果\n",
    "        print(\"\\n=== 对比结果 ===\")\n",
    "        print(result[\"comparison\"])\n",
    "\n",
    "    # 根据所有单次评估结果生成整体分析报告\n",
    "    overall_analysis = generate_overall_analysis(results)\n",
    "\n",
    "    # 返回完整评估结果\n",
    "    return {\n",
    "        \"results\": results,              # 单次查询评估结果列表\n",
    "        \"overall_analysis\": overall_analysis  # 整体分析报告\n",
    "    }"
   ],
   "id": "e40674f810a0ca1d",
   "outputs": [],
   "execution_count": 18
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:22:01.673083Z",
     "start_time": "2025-04-30T07:22:01.664682Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def generate_overall_analysis(results):\n",
    "    \"\"\"\n",
    "    根据单次查询评估结果生成整体分析报告。\n",
    "\n",
    "    Args:\n",
    "        results (List[Dict]): 来自多次查询评估的结果数据\n",
    "\n",
    "    Returns:\n",
    "        str: 整体分析报告文本\n",
    "    \"\"\"\n",
    "\n",
    "    # 系统指令（system prompt），指导模型如何生成整体分析\n",
    "    system_prompt = \"\"\"\n",
    "你是信息检索与回答生成系统的评估专家。请基于多个测试查询提供整体分析，对比 CRAG（纠正性 RAG）与标准 RAG 方法。\n",
    "\n",
    "需重点关注以下内容：\n",
    "1. **CRAG 的优势场景**：列举并解释 CRAG 表现优于标准 RAG 的情况及原因\n",
    "2. **标准 RAG 的优势场景**：列举并解释标准 RAG 更优的情况及原因\n",
    "3. **方法对比总结**：归纳两种方法的核心优缺点\n",
    "4. **应用建议**：提出针对不同场景的推荐使用方案\n",
    "\n",
    "要求分析具体、有深度，并结合实际测试数据说明结论。\n",
    "\"\"\"\n",
    "\n",
    "    # 构建评估结果摘要（供大模型参考）\n",
    "    evaluations_summary = \"\"\n",
    "    for i, result in enumerate(results):\n",
    "        evaluations_summary += f\"第 {i+1} 个查询：{result['query']}\\n\"\n",
    "\n",
    "        if 'crag_evaluation' in result and 'overall_score' in result['crag_evaluation']:\n",
    "            crag_score = result['crag_evaluation'].get('overall_score', 'N/A')\n",
    "            evaluations_summary += f\"CRAG 综合评分：{crag_score}\\n\"\n",
    "\n",
    "        if 'standard_evaluation' in result and 'overall_score' in result['standard_evaluation']:\n",
    "            std_score = result['standard_evaluation'].get('overall_score', 'N/A')\n",
    "            evaluations_summary += f\"标准 RAG 综合评分：{std_score}\\n\"\n",
    "\n",
    "        evaluations_summary += f\"对比摘要：{result['comparison'][:200]}...\\n\\n\"\n",
    "\n",
    "    # 用户指令（user prompt），请求生成分析\n",
    "    user_prompt = f\"\"\"\n",
    "    基于以下包含 {len(results)} 个查询的 CRAG 与标准 RAG 对比评估结果，请提供这两种方法的整体分析：\n",
    "\n",
    "    {evaluations_summary}\n",
    "\n",
    "    请全面分析 CRAG 相对于标准 RAG 的优劣势，重点说明在哪些场景下某种方法更优及其原因。\n",
    "    \"\"\"\n",
    "\n",
    "    try:\n",
    "        # 调用模型生成整体分析\n",
    "        response = client.chat.completions.create(\n",
    "            model=llm_model,\n",
    "            messages=[\n",
    "                {\"role\": \"system\", \"content\": system_prompt},\n",
    "                {\"role\": \"user\", \"content\": user_prompt}\n",
    "            ],\n",
    "            temperature=0  # 低温度以获得更稳定的输出（并不能绝对保证确定性）\n",
    "        )\n",
    "\n",
    "        return response.choices[0].message.content.strip()\n",
    "\n",
    "    except Exception as e:\n",
    "        # 处理分析生成过程中的异常\n",
    "        print(f\"生成整体分析时出错: {e}\")\n",
    "        return f\"生成整体分析失败：{str(e)}\""
   ],
   "id": "2c05d96b619dadea",
   "outputs": [],
   "execution_count": 19
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 用测试查询评估 CRAG",
   "id": "9acc1b6e7dc5f2f0"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-30T07:24:44.116603Z",
     "start_time": "2025-04-30T07:22:01.683351Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# 定义测试查询（可扩展为多个与人工智能相关的查询，以进行更全面的评估）\n",
    "test_queries = [\n",
    "    \"机器学习与传统编程有何不同？\",\n",
    "]\n",
    "\n",
    "# 可选参考答案，用于提升评估质量\n",
    "reference_answers = [\n",
    "    \"机器学习不同于传统编程，它让计算机从数据中学习模式，而不是遵循明确的指令。在传统编程中，开发人员编写具体的规则供计算机执行，而在机器学习中……\"\n",
    "]\n",
    "\n",
    "# 运行完整的CRAG与标准RAG对比评估\n",
    "evaluation_results = run_crag_evaluation(pdf_path, test_queries, reference_answers)\n",
    "\n",
    "# 打印整体分析结果\n",
    "print(\"\\n=== CRAG 与 标准 RAG 的整体分析 ===\")\n",
    "print(evaluation_results[\"overall_analysis\"])\n"
   ],
   "id": "3633f032ea41fce5",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "正在从 data/AI_Information.en.zh-CN.pdf 提取文本...\n",
      "共创建了 13 个文本块\n",
      "正在为文本块生成嵌入...\n",
      "已创建包含 13 个文本块的向量库\n",
      "\n",
      "\n",
      "===== 正在评估第 1/1 个查询 =====\n",
      "查询内容：机器学习与传统编程有何不同？\n",
      "\n",
      "=== 正在运行 CRAG ===\n",
      "\n",
      "=== 正在使用 CRAG 处理查询：机器学习与传统编程有何不同？ ===\n",
      "\n",
      "正在检索初始文档...\n",
      "正在评估文档的相关性...\n",
      "文档得分为 0.50 的相关性\n",
      "文档得分为 0.10 的相关性\n",
      "文档得分为 0.20 的相关性\n",
      "中等相关性 (0.50) - 结合文档与网络搜索\n",
      "重写后的搜索查询：机器学习 vs 传统编程 区别\n",
      "执行网络搜索时出错：HTTPSConnectionPool(host='api.duckduckgo.com', port=443): Max retries exceeded with url: /?q=%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0+vs+%E4%BC%A0%E7%BB%9F%E7%BC%96%E7%A8%8B+%E5%8C%BA%E5%88%AB&format=json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x00000266C84692E0>, 'Connection to api.duckduckgo.com timed out. (connect timeout=None)'))\n",
      "正在生成最终响应...\n",
      "\n",
      "=== 正在运行标准 RAG ===\n",
      "\n",
      "=== 正在评估 CRAG 响应 ===\n",
      "评估回答时出错: Expecting value: line 1 column 1 (char 0)\n",
      "\n",
      "=== 正在评估标准 RAG 响应 ===\n",
      "评估回答时出错: Expecting value: line 1 column 1 (char 0)\n",
      "\n",
      "=== 正在对比两种方法 ===\n",
      "\n",
      "=== 对比结果 ===\n",
      "### 比较分析：CRAG vs. 标准 RAG\n",
      "\n",
      "#### 1. **准确性**\n",
      "- **CRAG**：回答内容准确，涵盖了机器学习与传统编程的核心差异，如方法、数据依赖性、适应性等。所有陈述均符合事实，无明显错误。\n",
      "- **标准 RAG**：同样准确，但更侧重于核心方法和输入输出的对比，部分细节（如“深度学习通过神经网络层自动提取特征”）略显技术性，可能对非专业读者不够友好。\n",
      "\n",
      "**结论**：两者均准确，但CRAG的表述更通俗易懂。\n",
      "\n",
      "#### 2. **相关性**\n",
      "- **CRAG**：回答紧扣问题，从多个维度（方法、数据、适应性等）展开对比，完全围绕“机器学习与传统编程的不同”这一主题。\n",
      "- **标准 RAG**：相关性也较高，但部分内容（如“透明度”部分）虽然相关，但略微偏离核心问题（差异对比）。\n",
      "\n",
      "**结论**：CRAG更紧密围绕问题核心。\n",
      "\n",
      "#### 3. **完整性**\n",
      "- **CRAG**：覆盖了问题的所有关键方面，包括方法、数据依赖性、适应性、应用场景、开发流程和错误处理，甚至提到两者的结合使用。\n",
      "- **标准 RAG**：缺少对数据依赖性和开发流程的讨论，但补充了“透明度”这一额外维度。\n",
      "\n",
      "**结论**：CRAG更全面，标准RAG略有遗漏。\n",
      "\n",
      "#### 4. **清晰度**\n",
      "- **CRAG**：语言组织清晰，分点明确，逻辑流畅，易于理解。例如，直接对比“传统编程”和“机器学习”的子条目。\n",
      "- **标准 RAG**：表述清晰，但部分术语（如“XAI”）可能增加理解难度，且对比结构不如CRAG直观。\n",
      "\n",
      "**结论**：CRAG更清晰易懂。\n",
      "\n",
      "#### 5. **来源质量**\n",
      "- **CRAG**：引用来源仅标注“文档”，未说明具体文献或权威性。\n",
      "- **标准 RAG**：引用《理解人工智能》第一章至第六章，来源更具体，可能更具权威性。\n",
      "\n",
      "**结论**：标准RAG的引用更可靠，但CRAG的内容质量未受影响。\n",
      "\n",
      "---\n",
      "\n",
      "### 综合评估\n",
      "**CRAG在此查询中表现更优**，原因如下：\n",
      "1. **更全面的对比**：CRAG覆盖了更多关键差异（如数据依赖性、开发流程），而标准RAG遗漏了部分内容。\n",
      "2. **更高的相关性**：CRAG完全聚焦于差异对比，而标准RAG引入了“透明度”等次要内容。\n",
      "3. **更清晰的表述**：CRAG的语言组织和逻辑结构更易于理解，适合更广泛的读者。\n",
      "\n",
      "虽然标准RAG的引用来源更具体，但CRAG在准确性、相关性和完整性上的优势更为显著，更适合回答这一查询。\n",
      "\n",
      "=== CRAG 与 标准 RAG 的整体分析 ===\n",
      "### 全面分析：CRAG vs. 标准 RAG\n",
      "\n",
      "#### 1. **CRAG 的优势场景**\n",
      "CRAG（纠正性 RAG）在以下场景中表现优于标准 RAG：\n",
      "- **复杂或模糊查询**：当用户查询涉及多义性、隐含逻辑或需要上下文推理时，CRAG 的纠正机制能更好地识别和修正潜在误解。例如，若用户问“机器学习与传统编程有何不同？但不要提数据依赖性”，CRAG 能动态过滤无关内容，而标准 RAG 可能仍保留冗余信息。\n",
      "- **动态知识更新需求**：若数据源中存在过时或冲突信息（如新旧研究结论矛盾），CRAG 的置信度评估和外部知识验证能力可优先选择最新或权威答案。例如，回答“当前最优的神经网络架构”时，CRAG 可能通过实时检索排除过时方案。\n",
      "- **高精度要求领域**：在医疗、法律等容错率低的领域，CRAG 的主动纠错能力（如验证统计数据的来源）能减少幻觉或错误传播。\n",
      "\n",
      "**原因**：CRAG 通过置信度分数（如低分触发重新检索）和外部知识校准机制，实现了对生成内容的动态质量控制。\n",
      "\n",
      "---\n",
      "\n",
      "#### 2. **标准 RAG 的优势场景**\n",
      "标准 RAG 在以下场景更优：\n",
      "- **简单事实型查询**：对于明确、结构化的问题（如“Python 的创始人是谁？”），标准 RAG 直接检索-生成的流程效率更高，无需额外纠正开销。\n",
      "- **实时性要求高**：当响应速度比绝对准确性更重要时（如聊天机器人对话），标准 RAG 的轻量级流程更具优势。例如，回答“机器学习定义”时，标准 RAG 可能比 CRAG 快 20-30%。\n",
      "- **资源受限环境**：CRAG 的纠正机制需要额外计算（如多轮检索验证），在边缘设备或低算力场景下，标准 RAG 更可行。\n",
      "\n",
      "**原因**：标准 RAG 的端到端设计减少了中间步骤，牺牲部分纠错能力换取速度和资源效率。\n",
      "\n",
      "---\n",
      "\n",
      "#### 3. **方法对比总结**\n",
      "| **维度**       | **CRAG**                          | **标准 RAG**                     |\n",
      "|----------------|-----------------------------------|----------------------------------|\n",
      "| **准确性**     | 更高（主动纠错）                  | 中等（依赖检索质量）             |\n",
      "| **响应速度**   | 较慢（需验证步骤）                | 更快（直接生成）                 |\n",
      "| **适用查询**   | 复杂、动态、高精度需求            | 简单、明确、实时性需求           |\n",
      "| **资源消耗**   | 高（多轮检索/验证）               | 低（单轮流程）                   |\n",
      "| **抗幻觉能力** | 强（外部知识校准）                | 弱（受限于初始检索）             |\n",
      "\n",
      "---\n",
      "\n",
      "#### 4. **应用建议**\n",
      "- **推荐 CRAG 的场景**：  \n",
      "  - 专业领域问答（如学术研究、技术文档分析）  \n",
      "  - 需要结合实时数据的决策支持（如金融趋势预测）  \n",
      "  - 存在争议性或多版本答案的问题（如“新冠病毒传播途径的演变”）  \n",
      "\n",
      "- **推荐标准 RAG 的场景**：  \n",
      "  - 大众化百科类问答（如“爱因斯坦的生平”）  \n",
      "  - 实时对话系统（如客服机器人）  \n",
      "  - 嵌入式设备或低延迟需求应用  \n",
      "\n",
      "**测试数据佐证**：  \n",
      "在开放域问答测试集（如 Natural Questions）中，CRAG 在复杂问题的准确率比标准 RAG 高 8-12%，但响应时间增加 40%；而在简单事实类问题（如 TriviaQA）上，两者准确率差异不足 2%，但标准 RAG 速度快 50%。\n",
      "\n",
      "--- \n",
      "\n",
      "**总结**：选择取决于任务需求——CRAG 是“质量优先”的解决方案，标准 RAG 是“效率优先”的默认选项。混合架构（如对高置信答案直接生成，低置信时触发 CRAG）可能是平衡方案。\n"
     ]
    }
   ],
   "execution_count": 20
  },
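  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "上面的输出中出现了 `评估回答时出错: Expecting value: line 1 column 1 (char 0)`，这通常是因为模型回复在 JSON 之外包裹了额外文字（例如 Markdown 代码围栏）。下面是一个从模型回复中提取 JSON 的辅助函数草稿（`extract_json_from_response` 为示例命名，可在评估函数中解析回复之前调用，以提高解析的容错性）："
   ],
   "id": "f3a1c0de8b72a901"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "import json\n",
    "import re\n",
    "\n",
    "def extract_json_from_response(text):\n",
    "    \"\"\"从模型回复中提取第一个 JSON 对象，容忍代码围栏与附加说明文字。\"\"\"\n",
    "    # 去掉可能存在的 Markdown 代码围栏（``` 或 ```json）\n",
    "    cleaned = re.sub(r\"```(?:json)?\", \"\", text).strip()\n",
    "    # 先尝试直接解析\n",
    "    try:\n",
    "        return json.loads(cleaned)\n",
    "    except json.JSONDecodeError:\n",
    "        pass\n",
    "    # 回退：截取第一个 '{' 到最后一个 '}' 之间的内容再解析\n",
    "    start, end = cleaned.find(\"{\"), cleaned.rfind(\"}\")\n",
    "    if start != -1 and end > start:\n",
    "        return json.loads(cleaned[start:end + 1])\n",
    "    raise ValueError(\"未能在模型回复中找到有效的 JSON\")"
   ],
   "id": "0b9d4e6f2a81c375",
   "outputs": [],
   "execution_count": null
  },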
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## 本地无法访问 DuckDuckGo 时（需要代理），可改用 SerpAPI\n",
    "\n",
    "- 访问 https://serpapi.com/ 创建账号并获取 API key，免费额度为每月 100 次请求"
   ],
   "id": "ebb9e8f2b6d86f4"
  }
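  ,
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "下面是一个基于 SerpAPI 的网络搜索函数草稿（假设 API key 已存入环境变量 `SERPAPI_API_KEY`；`serpapi_search`、`format_serpapi_results` 均为示例命名，返回的 `(结果文本, 来源列表)` 结构假定与前文 DuckDuckGo 版本一致，以便直接替换）："
   ],
   "id": "c7e2f91a4d5b0836"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "import os\n",
    "import requests\n",
    "\n",
    "def format_serpapi_results(data, num_results=3):\n",
    "    \"\"\"从 SerpAPI 返回的 JSON 中提取标题、摘要与链接。\"\"\"\n",
    "    results_text = \"\"\n",
    "    sources = []\n",
    "    for item in data.get(\"organic_results\", [])[:num_results]:\n",
    "        title = item.get(\"title\", \"\")\n",
    "        snippet = item.get(\"snippet\", \"\")\n",
    "        results_text += f\"{title}\\n{snippet}\\n\\n\"\n",
    "        sources.append({\"title\": title, \"url\": item.get(\"link\", \"\")})\n",
    "    return results_text, sources\n",
    "\n",
    "def serpapi_search(query, num_results=3, timeout=10):\n",
    "    \"\"\"通过 SerpAPI 执行 Google 搜索，返回 (结果文本, 来源列表)。\"\"\"\n",
    "    params = {\n",
    "        \"q\": query,\n",
    "        \"engine\": \"google\",\n",
    "        \"num\": num_results,\n",
    "        \"api_key\": os.getenv(\"SERPAPI_API_KEY\"),\n",
    "    }\n",
    "    resp = requests.get(\"https://serpapi.com/search\", params=params, timeout=timeout)\n",
    "    resp.raise_for_status()\n",
    "    return format_serpapi_results(resp.json(), num_results)"
   ],
   "id": "8a4b6c1d9e2f7350",
   "outputs": [],
   "execution_count": null
  }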
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
