{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "# Reranking for Enhanced RAG Systems\n",
        "\n",
        "This notebook implements reranking techniques to improve retrieval quality in a RAG system. Reranking acts as a second filtering step after initial retrieval, ensuring that the most relevant content is used for response generation.\n",
        "\n",
        "## Key Concepts in Reranking\n",
        "\n",
        "1. **Initial retrieval**: A first pass using basic similarity search (faster but less precise)\n",
        "2. **Document scoring**: Evaluating each retrieved document's relevance to the query\n",
        "3. **Reordering**: Sorting the documents by their relevance scores\n",
        "4. **Selection**: Using only the most relevant documents for response generation\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Setting Up the Environment\n",
        "We begin by importing the necessary libraries.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 28,
      "metadata": {},
      "outputs": [],
      "source": [
        "import fitz\n",
        "import os\n",
        "import numpy as np\n",
        "import json\n",
        "from openai import OpenAI\n",
        "import re\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Extracting Text from a PDF File\n",
        "To implement RAG, we first need a source of text data. Here we use the PyMuPDF library to extract text from a PDF file.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 29,
      "metadata": {},
      "outputs": [],
      "source": [
        "def extract_text_from_pdf(pdf_path):\n",
        "    \"\"\"\n",
        "    Extract text from a PDF file.\n",
        "\n",
        "    Args:\n",
        "    pdf_path (str): Path to the PDF file.\n",
        "\n",
        "    Returns:\n",
        "    str: The text extracted from the PDF.\n",
        "    \"\"\"\n",
        "    # Open the PDF file\n",
        "    mypdf = fitz.open(pdf_path)\n",
        "    all_text = \"\"  # Initialize an empty string to accumulate the extracted text\n",
        "\n",
        "    # Iterate over each page in the PDF\n",
        "    for page_num in range(mypdf.page_count):\n",
        "        page = mypdf[page_num]  # Get the page\n",
        "        text = page.get_text(\"text\")  # Extract text from the page\n",
        "        all_text += text  # Append the extracted text to all_text\n",
        "\n",
        "    return all_text  # Return the extracted text\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Chunking the Extracted Text\n",
        "Once we have the extracted text, we split it into smaller, overlapping chunks to improve retrieval accuracy.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 30,
      "metadata": {},
      "outputs": [],
      "source": [
        "def chunk_text(text, n, overlap):\n",
        "    \"\"\"\n",
        "    Split the given text into overlapping segments of n characters.\n",
        "\n",
        "    Args:\n",
        "    text (str): The text to chunk.\n",
        "    n (int): Number of characters per chunk.\n",
        "    overlap (int): Number of overlapping characters between chunks.\n",
        "\n",
        "    Returns:\n",
        "    List[str]: A list of text chunks.\n",
        "    \"\"\"\n",
        "    chunks = []  # Initialize an empty list to store the chunks\n",
        "\n",
        "    # Step through the text with a stride of (n - overlap)\n",
        "    for i in range(0, len(text), n - overlap):\n",
        "        # Append the chunk from index i to i + n to the list\n",
        "        chunks.append(text[i:i + n])\n",
        "\n",
        "    return chunks  # Return the list of text chunks\n"
      ]
    },
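As a quick sanity check of the chunking step, the same sliding-window logic can be exercised on a short string. This is a standalone sketch; it additionally guards against a non-positive stride (`overlap >= n`), which the notebook's version assumes the caller avoids:

```python
def chunk_text(text, n, overlap):
    # A stride of (n - overlap) gives each chunk `overlap` characters of shared context
    step = n - overlap
    if step <= 0:
        raise ValueError("overlap must be smaller than the chunk size n")
    return [text[i:i + n] for i in range(0, len(text), step)]

print(chunk_text("abcdefghij", n=4, overlap=2))
# → ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

Note that the final chunk may be shorter than `n`; downstream code should not assume uniform chunk lengths.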
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Setting Up the OpenAI API Client\n",
        "We initialize the OpenAI client to generate embeddings and responses.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 31,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Initialize the OpenAI client with a base URL and API key\n",
        "client = OpenAI(\n",
        "    base_url=\"http://localhost:11434/v1/\",\n",
        "    api_key=\"ollama\"  # Ollama does not require a real API key, but the client expects a value\n",
        ")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Building a Simple Vector Store\n",
        "To demonstrate how reranking integrates with retrieval, let's implement a simple vector store.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 32,
      "metadata": {},
      "outputs": [],
      "source": [
        "class SimpleVectorStore:\n",
        "    \"\"\"\n",
        "    A simple vector store implementation using NumPy.\n",
        "    \"\"\"\n",
        "    def __init__(self):\n",
        "        \"\"\"\n",
        "        Initialize the vector store.\n",
        "        \"\"\"\n",
        "        self.vectors = []  # List of embedding vectors\n",
        "        self.texts = []  # List of original texts\n",
        "        self.metadata = []  # List of metadata dicts, one per text\n",
        "    \n",
        "    def add_item(self, text, embedding, metadata=None):\n",
        "        \"\"\"\n",
        "        Add an item to the vector store.\n",
        "\n",
        "        Args:\n",
        "        text (str): The original text.\n",
        "        embedding (List[float]): The embedding vector.\n",
        "        metadata (dict, optional): Additional metadata.\n",
        "        \"\"\"\n",
        "        self.vectors.append(np.array(embedding))  # Convert the embedding to a numpy array and store it\n",
        "        self.texts.append(text)  # Store the original text\n",
        "        self.metadata.append(metadata or {})  # Store the metadata, defaulting to an empty dict\n",
        "    \n",
        "    def similarity_search(self, query_embedding, k=5):\n",
        "        \"\"\"\n",
        "        Find the items most similar to the query embedding.\n",
        "\n",
        "        Args:\n",
        "        query_embedding (List[float]): The query embedding vector.\n",
        "        k (int): Number of results to return.\n",
        "\n",
        "        Returns:\n",
        "        List[Dict]: The top-k most similar items, with their texts and metadata.\n",
        "        \"\"\"\n",
        "        if not self.vectors:\n",
        "            return []  # Return an empty list if no vectors are stored\n",
        "        \n",
        "        # Convert the query embedding to a numpy array\n",
        "        query_vector = np.array(query_embedding)\n",
        "        \n",
        "        # Compute similarities using cosine similarity\n",
        "        similarities = []\n",
        "        for i, vector in enumerate(self.vectors):\n",
        "            # Cosine similarity between the query vector and the stored vector\n",
        "            similarity = np.dot(query_vector, vector) / (np.linalg.norm(query_vector) * np.linalg.norm(vector))\n",
        "            similarities.append((i, similarity))  # Record the index and similarity score\n",
        "        \n",
        "        # Sort by similarity (descending)\n",
        "        similarities.sort(key=lambda x: x[1], reverse=True)\n",
        "        \n",
        "        # Collect the top-k results\n",
        "        results = []\n",
        "        for i in range(min(k, len(similarities))):\n",
        "            idx, score = similarities[i]\n",
        "            results.append({\n",
        "                \"text\": self.texts[idx],  # The corresponding text\n",
        "                \"metadata\": self.metadata[idx],  # The corresponding metadata\n",
        "                \"similarity\": score  # The similarity score\n",
        "            })\n",
        "        \n",
        "        return results  # Return the list of top-k similar items\n"
      ]
    },
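Before wiring the store into the pipeline, the cosine-similarity ranking it relies on can be checked in isolation with toy vectors. This is a minimal sketch using NumPy only; the vector values are made up for illustration:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: dot product normalized by both vector lengths
    a, b = np.array(a, dtype=float), np.array(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = [1.0, 0.0]
docs = {"doc_a": [1.0, 0.1], "doc_b": [0.0, 1.0], "doc_c": [-1.0, 0.0]}

# Rank documents the same way similarity_search does: descending similarity
ranked = sorted(docs, key=lambda name: cosine_similarity(query, docs[name]), reverse=True)
print(ranked)  # → ['doc_a', 'doc_b', 'doc_c']
```

The nearly parallel vector ranks first and the opposite-direction vector last, which is exactly the ordering `similarity_search` produces before any reranking.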
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Creating Embeddings\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 33,
      "metadata": {},
      "outputs": [],
      "source": [
        "def create_embeddings(text, model=\"bge-m3:latest\"):\n",
        "    \"\"\"\n",
        "    Create embeddings for the given text using the specified model.\n",
        "\n",
        "    Args:\n",
        "    text (str or List[str]): The input text(s) to embed.\n",
        "    model (str): The model to use for creating embeddings.\n",
        "\n",
        "    Returns:\n",
        "    List[float] or List[List[float]]: A single embedding vector for string input, or a list of vectors for list input.\n",
        "    \"\"\"\n",
        "    # Handle both string and list inputs by converting string input to a list\n",
        "    input_text = text if isinstance(text, list) else [text]\n",
        "    \n",
        "    # Create embeddings for the input text with the specified model\n",
        "    response = client.embeddings.create(\n",
        "        model=model,\n",
        "        input=input_text\n",
        "    )\n",
        "    \n",
        "    # If the input was a string, return just the first embedding\n",
        "    if isinstance(text, str):\n",
        "        return response.data[0].embedding\n",
        "    \n",
        "    # Otherwise, return all embeddings as a list of vectors\n",
        "    return [item.embedding for item in response.data]\n"
      ]
    },
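The string-versus-list dispatch in `create_embeddings` is easy to get wrong, so here is the same pattern isolated with a stubbed backend. The stub `embed_fn` is hypothetical (it just returns each text's length as a one-dimensional "embedding"), so no API call is needed:

```python
def embed(text, embed_fn=lambda batch: [[float(len(t))] for t in batch]):
    # Normalize to a batch, call the backend once, then unwrap for string input
    batch = text if isinstance(text, list) else [text]
    vectors = embed_fn(batch)
    return vectors[0] if isinstance(text, str) else vectors

print(embed("hello"))        # → [5.0]  (a single vector for a string)
print(embed(["hi", "hey"]))  # → [[2.0], [3.0]]  (one vector per item for a list)
```

This shape contract matters downstream: `process_document` passes a list and zips the result with the chunks, while `rag_with_reranking` passes a single string and expects a flat vector.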
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Document Processing Pipeline\n",
        "With the necessary functions and classes defined, we can now assemble the document processing pipeline.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 34,
      "metadata": {},
      "outputs": [],
      "source": [
        "def process_document(pdf_path, chunk_size=1000, chunk_overlap=200):\n",
        "    \"\"\"\n",
        "    Process a document for RAG.\n",
        "\n",
        "    Args:\n",
        "    pdf_path (str): Path to the PDF file.\n",
        "    chunk_size (int): Size of each chunk in characters.\n",
        "    chunk_overlap (int): Number of overlapping characters between chunks.\n",
        "\n",
        "    Returns:\n",
        "    SimpleVectorStore: A vector store containing the document chunks and their embeddings.\n",
        "    \"\"\"\n",
        "    # Extract text from the PDF file\n",
        "    print(\"Extracting text from PDF...\")\n",
        "    extracted_text = extract_text_from_pdf(pdf_path)\n",
        "    \n",
        "    # Chunk the extracted text\n",
        "    print(\"Chunking text...\")\n",
        "    chunks = chunk_text(extracted_text, chunk_size, chunk_overlap)\n",
        "    print(f\"Created {len(chunks)} text chunks\")\n",
        "    \n",
        "    # Create embeddings for the text chunks\n",
        "    print(\"Creating embeddings for chunks...\")\n",
        "    chunk_embeddings = create_embeddings(chunks)\n",
        "    \n",
        "    # Initialize a simple vector store\n",
        "    store = SimpleVectorStore()\n",
        "    \n",
        "    # Add each chunk and its embedding to the vector store\n",
        "    for i, (chunk, embedding) in enumerate(zip(chunks, chunk_embeddings)):\n",
        "        store.add_item(\n",
        "            text=chunk,\n",
        "            embedding=embedding,\n",
        "            metadata={\"index\": i, \"source\": pdf_path}\n",
        "        )\n",
        "    \n",
        "    print(f\"Added {len(chunks)} chunks to the vector store\")\n",
        "    return store\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Implementing LLM-Based Reranking\n",
        "Let's implement LLM-based reranking using the OpenAI API.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 35,
      "metadata": {},
      "outputs": [],
      "source": [
        "def rerank_with_llm(query, results, top_n=3, model=\"qwen2.5:7b\"):\n",
        "    \"\"\"\n",
        "    Rerank search results using LLM relevance scoring.\n",
        "    \n",
        "    Args:\n",
        "        query (str): The user query\n",
        "        results (List[Dict]): Initial search results\n",
        "        top_n (int): Number of results to return after reranking\n",
        "        model (str): Model to use for scoring\n",
        "        \n",
        "    Returns:\n",
        "        List[Dict]: The reranked results\n",
        "    \"\"\"\n",
        "    print(f\"Reranking {len(results)} documents...\")  # Report how many documents will be reranked\n",
        "    \n",
        "    scored_results = []  # Initialize an empty list to store scored results\n",
        "    \n",
        "    # Define the system prompt for the LLM\n",
        "    system_prompt = \"\"\"You are an expert at evaluating document relevance for search queries.\n",
        "Your task is to rate documents on a scale of 0 to 10 based on how well they answer the given query.\n",
        "\n",
        "Guidelines:\n",
        "- Score 0-2: The document is completely irrelevant\n",
        "- Score 3-5: The document has some relevant information but does not directly answer the query\n",
        "- Score 6-8: The document is relevant and partially answers the query\n",
        "- Score 9-10: The document is highly relevant and directly answers the query\n",
        "\n",
        "You must respond with ONLY a single integer score between 0 and 10. Do not include any other text.\"\"\"\n",
        "    \n",
        "    # Iterate over each result\n",
        "    for i, result in enumerate(results):\n",
        "        # Show progress every 5 documents\n",
        "        if i % 5 == 0:\n",
        "            print(f\"Scoring document {i+1}/{len(results)}...\")\n",
        "        \n",
        "        # Define the user prompt for the LLM\n",
        "        user_prompt = f\"\"\"Query: {query}\n",
        "\n",
        "Document:\n",
        "{result['text']}\n",
        "\n",
        "Rate this document's relevance to the query on a scale of 0 to 10:\"\"\"\n",
        "        \n",
        "        # Get the LLM response\n",
        "        response = client.chat.completions.create(\n",
        "            model=model,\n",
        "            temperature=0,\n",
        "            messages=[\n",
        "                {\"role\": \"system\", \"content\": system_prompt},\n",
        "                {\"role\": \"user\", \"content\": user_prompt}\n",
        "            ]\n",
        "        )\n",
        "        \n",
        "        # Extract the score from the LLM response\n",
        "        score_text = response.choices[0].message.content.strip()\n",
        "        \n",
        "        # Extract the numeric score with a regular expression\n",
        "        score_match = re.search(r'\\b(10|[0-9])\\b', score_text)\n",
        "        if score_match:\n",
        "            score = float(score_match.group(1))\n",
        "        else:\n",
        "            # If score extraction fails, fall back to the similarity score\n",
        "            print(f\"Warning: could not extract a score from response: '{score_text}', using the similarity score instead\")\n",
        "            score = result[\"similarity\"] * 10\n",
        "        \n",
        "        # Append the scored result to the list\n",
        "        scored_results.append({\n",
        "            \"text\": result[\"text\"],\n",
        "            \"metadata\": result[\"metadata\"],\n",
        "            \"similarity\": result[\"similarity\"],\n",
        "            \"relevance_score\": score\n",
        "        })\n",
        "    \n",
        "    # Sort the results by relevance score in descending order\n",
        "    reranked_results = sorted(scored_results, key=lambda x: x[\"relevance_score\"], reverse=True)\n",
        "    \n",
        "    # Return the top_n results\n",
        "    return reranked_results[:top_n]\n"
      ]
    },
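The score-parsing step deserves a note: the alternation order in `r'\b(10|[0-9])\b'` matters, because trying `10` before single digits prevents a reply of "10" from being read as "1". A standalone check of the same parsing logic, including the fallback path:

```python
import re

def parse_score(score_text, fallback):
    # Try "10" before single digits so a score of 10 is not truncated to 1
    match = re.search(r'\b(10|[0-9])\b', score_text)
    return float(match.group(1)) if match else fallback

print(parse_score("10", 0.0))               # → 10.0
print(parse_score("Score: 7", 0.0))         # → 7.0
print(parse_score("highly relevant", 3.5))  # → 3.5 (no digit found, falls back)
```

In the notebook the fallback value is the vector similarity scaled to the same 0-10 range, so a malformed LLM reply degrades gracefully to the initial ranking rather than crashing the loop.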
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Simple Keyword-Based Reranking\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 36,
      "metadata": {},
      "outputs": [],
      "source": [
        "def rerank_with_keywords(query, results, top_n=3):\n",
        "    \"\"\"\n",
        "    A simple alternative reranking method based on keyword matching and position.\n",
        "    \n",
        "    Args:\n",
        "        query (str): The user query\n",
        "        results (List[Dict]): Initial search results\n",
        "        top_n (int): Number of results to return after reranking\n",
        "        \n",
        "    Returns:\n",
        "        List[Dict]: The reranked results\n",
        "    \"\"\"\n",
        "    # Extract important keywords from the query\n",
        "    keywords = [word.lower() for word in query.split() if len(word) > 3]\n",
        "    \n",
        "    scored_results = []  # Initialize a list to store scored results\n",
        "    \n",
        "    for result in results:\n",
        "        document_text = result[\"text\"].lower()  # Lowercase the document text\n",
        "        \n",
        "        # Start the base score from the vector similarity\n",
        "        base_score = result[\"similarity\"] * 0.5\n",
        "        \n",
        "        # Initialize the keyword score\n",
        "        keyword_score = 0\n",
        "        for keyword in keywords:\n",
        "            if keyword in document_text:\n",
        "                # Add points for each keyword found\n",
        "                keyword_score += 0.1\n",
        "                \n",
        "                # Add more points if the keyword appears near the beginning\n",
        "                first_position = document_text.find(keyword)\n",
        "                if first_position < len(document_text) / 4:  # Within the first quarter of the text\n",
        "                    keyword_score += 0.1\n",
        "                \n",
        "                # Add points for keyword frequency\n",
        "                frequency = document_text.count(keyword)\n",
        "                keyword_score += min(0.05 * frequency, 0.2)  # Capped at 0.2\n",
        "        \n",
        "        # Combine the base score and the keyword score into a final score\n",
        "        final_score = base_score + keyword_score\n",
        "        \n",
        "        # Append the scored result to the list\n",
        "        scored_results.append({\n",
        "            \"text\": result[\"text\"],\n",
        "            \"metadata\": result[\"metadata\"],\n",
        "            \"similarity\": result[\"similarity\"],\n",
        "            \"relevance_score\": final_score\n",
        "        })\n",
        "    \n",
        "    # Sort the results by final relevance score in descending order\n",
        "    reranked_results = sorted(scored_results, key=lambda x: x[\"relevance_score\"], reverse=True)\n",
        "    \n",
        "    # Return the top_n results\n",
        "    return reranked_results[:top_n]\n"
      ]
    },
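Unlike the LLM reranker, the keyword heuristic needs no model calls, so its scoring rules can be verified directly. A condensed sketch of the same rules (similarity base score, presence bonus, early-position bonus, and capped frequency bonus), applied to two toy documents with made-up similarity values:

```python
def keyword_score(query, text, similarity):
    keywords = [w.lower() for w in query.split() if len(w) > 3]
    doc = text.lower()
    score = similarity * 0.5  # base score from vector similarity
    for kw in keywords:
        if kw in doc:
            score += 0.1                             # presence bonus
            if doc.find(kw) < len(doc) / 4:
                score += 0.1                         # early-position bonus
            score += min(0.05 * doc.count(kw), 0.2)  # frequency bonus, capped at 0.2
    return score

high = keyword_score("transformer attention", "Attention is central; attention layers dominate.", 0.6)
low = keyword_score("transformer attention", "A short note on gardening.", 0.6)
print(high > low)  # the matching document scores strictly above the similarity-only base
```

One design consequence worth noting: because keywords come from whitespace splitting, this heuristic only works for languages with space-delimited words, and short function words (length ≤ 3) are ignored entirely.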
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Response Generation\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 37,
      "metadata": {},
      "outputs": [],
      "source": [
        "def generate_response(query, context, model=\"qwen2.5:7b\"):\n",
        "    \"\"\"\n",
        "    Generate a response based on the query and context.\n",
        "    \n",
        "    Args:\n",
        "        query (str): The user query\n",
        "        context (str): The retrieved context\n",
        "        model (str): Model to use for response generation\n",
        "        \n",
        "    Returns:\n",
        "        str: The generated response\n",
        "    \"\"\"\n",
        "    # Define the system prompt to guide the AI's behavior\n",
        "    system_prompt = \"You are a helpful AI assistant. Answer the user's question based only on the provided context. If you cannot find the answer in the context, state that you do not have enough information.\"\n",
        "    \n",
        "    # Create the user prompt by combining the context and the query\n",
        "    user_prompt = f\"\"\"\n",
        "        Context:\n",
        "        {context}\n",
        "\n",
        "        Question: {query}\n",
        "\n",
        "        Please provide a comprehensive answer based only on the context above.\n",
        "    \"\"\"\n",
        "    \n",
        "    # Generate the response with the specified model\n",
        "    response = client.chat.completions.create(\n",
        "        model=model,\n",
        "        temperature=0,\n",
        "        messages=[\n",
        "            {\"role\": \"system\", \"content\": system_prompt},\n",
        "            {\"role\": \"user\", \"content\": user_prompt}\n",
        "        ]\n",
        "    )\n",
        "    \n",
        "    # Return the generated response content\n",
        "    return response.choices[0].message.content\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Complete RAG Pipeline with Reranking\n",
        "So far we have implemented the core components of the RAG pipeline: document processing, question answering, and reranking. Now we combine them into a complete pipeline.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 38,
      "metadata": {},
      "outputs": [],
      "source": [
        "def rag_with_reranking(query, vector_store, reranking_method=\"llm\", top_n=3, model=\"qwen2.5:7b\"):\n",
        "    \"\"\"\n",
        "    The complete RAG pipeline with reranking.\n",
        "    \n",
        "    Args:\n",
        "        query (str): The user query\n",
        "        vector_store (SimpleVectorStore): The vector store\n",
        "        reranking_method (str): Reranking method ('llm' or 'keywords')\n",
        "        top_n (int): Number of results to return after reranking\n",
        "        model (str): Model to use for response generation\n",
        "        \n",
        "    Returns:\n",
        "        Dict: Results including the query, context, and response\n",
        "    \"\"\"\n",
        "    # Create the query embedding\n",
        "    query_embedding = create_embeddings(query)\n",
        "    \n",
        "    # Initial retrieval (fetch more results than needed, to give the reranker candidates)\n",
        "    initial_results = vector_store.similarity_search(query_embedding, k=10)\n",
        "    \n",
        "    # Apply reranking\n",
        "    if reranking_method == \"llm\":\n",
        "        reranked_results = rerank_with_llm(query, initial_results, top_n=top_n)\n",
        "    elif reranking_method == \"keywords\":\n",
        "        reranked_results = rerank_with_keywords(query, initial_results, top_n=top_n)\n",
        "    else:\n",
        "        # No reranking; just take the top results from initial retrieval\n",
        "        reranked_results = initial_results[:top_n]\n",
        "    \n",
        "    # Combine the reranked results into a single context\n",
        "    context = \"\\n\\n===\\n\\n\".join([result[\"text\"] for result in reranked_results])\n",
        "    \n",
        "    # Generate a response based on the context\n",
        "    response = generate_response(query, context, model)\n",
        "    \n",
        "    return {\n",
        "        \"query\": query,\n",
        "        \"reranking_method\": reranking_method,\n",
        "        \"initial_results\": initial_results[:top_n],\n",
        "        \"reranked_results\": reranked_results,\n",
        "        \"context\": context,\n",
        "        \"response\": response\n",
        "    }\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Evaluating Reranking Quality\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 39,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Load validation data from a JSON file\n",
        "with open('data/val.json') as f:\n",
        "    data = json.load(f)\n",
        "\n",
        "# Extract the first query from the validation data\n",
        "query = data[0]['question']\n",
        "\n",
        "# Extract the reference answer from the validation data\n",
        "reference_answer = data[0]['ideal_answer']\n",
        "\n",
        "# Path to the PDF file\n",
        "pdf_path = \"data/AI_Information.pdf\"\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 40,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Extracting text from PDF...\n",
            "Chunking text...\n",
            "Created 42 text chunks\n",
            "Creating embeddings for chunks...\n",
            "Added 42 chunks to the vector store\n",
            "Comparing retrieval methods...\n",
            "\n",
            "=== Standard Retrieval ===\n",
            "\n",
            "Query: Does AI have the potential to transform the way we live and work?\n",
            "\n",
            "Response:\n",
            "Yes, based on the provided information, artificial intelligence (AI) has the potential to significantly change the way we live and work. At work, AI can automate repetitive and routine tasks, and support collaboration between humans and AI systems by augmenting human capabilities, automating tedious tasks, and providing insights for decision-making. Moreover, as AI technology develops, new professional roles are emerging, such as AI development and data science.\n",
            "\n",
            "In daily life, AI can serve as a tool for creativity and innovation, generating art, music, and literature, assisting design processes, and accelerating scientific research. It is therefore fair to say that AI is changing, and will continue to change, every aspect of our lives.\n",
            "\n",
            "=== LLM-Based Reranking ===\n",
            "Reranking 10 documents...\n",
            "Scoring document 1/10...\n",
            "Scoring document 6/10...\n",
            "\n",
            "Query: Does AI have the potential to transform the way we live and work?\n",
            "\n",
            "Response:\n",
            "Yes, AI has the potential to significantly change the way we live and work. At work, AI can automate repetitive and routine tasks, improving efficiency and creating new job opportunities. It can also foster human-AI collaboration by augmenting human capabilities, automating tedious tasks, and providing insights that support decision-making.\n",
            "\n",
            "In daily life, AI can help address societal challenges such as climate change, poverty, and healthcare inequality, contributing through better resource management, enhanced decision-making, and support for sustainable development. AI may also improve access to education, healthcare, and social services, promoting equity and well-being. To ensure these positive impacts, however, reskilling and upskilling are needed, along with addressing ethical concerns and building public trust in AI.\n",
            "\n",
            "=== Keyword-Based Reranking ===\n",
            "\n",
            "Query: Does AI have the potential to transform the way we live and work?\n",
            "\n",
            "Response:\n",
            "Yes, based on the provided information, artificial intelligence (AI) has the potential to significantly change the way we live and work. At work, AI can automate repetitive and routine tasks, and support collaboration between humans and AI systems by augmenting human capabilities, automating tedious tasks, and providing insights for decision-making. Moreover, as AI technology develops, new professional roles are emerging, such as AI development and data science.\n",
            "\n",
            "In daily life, AI can serve as a tool for creativity and innovation, generating art, music, and literature, assisting design processes, and accelerating scientific research. It is therefore fair to say that AI is changing, and will continue to change, every aspect of our lives.\n"
          ]
        }
      ],
      "source": [
        "# Process the document\n",
        "vector_store = process_document(pdf_path)\n",
        "\n",
        "# Example query\n",
        "query = \"Does AI have the potential to transform the way we live and work?\"\n",
        "\n",
        "# Compare the different methods\n",
        "print(\"Comparing retrieval methods...\")\n",
        "\n",
        "# 1. Standard retrieval (no reranking)\n",
        "print(\"\\n=== Standard Retrieval ===\")\n",
        "standard_results = rag_with_reranking(query, vector_store, reranking_method=\"none\")\n",
        "print(f\"\\nQuery: {query}\")\n",
        "print(f\"\\nResponse:\\n{standard_results['response']}\")\n",
        "\n",
        "# 2. LLM-based reranking\n",
        "print(\"\\n=== LLM-Based Reranking ===\")\n",
        "llm_results = rag_with_reranking(query, vector_store, reranking_method=\"llm\")\n",
        "print(f\"\\nQuery: {query}\")\n",
        "print(f\"\\nResponse:\\n{llm_results['response']}\")\n",
        "\n",
        "# 3. Keyword-based reranking\n",
        "print(\"\\n=== Keyword-Based Reranking ===\")\n",
        "keyword_results = rag_with_reranking(query, vector_store, reranking_method=\"keywords\")\n",
        "print(f\"\\nQuery: {query}\")\n",
        "print(f\"\\nResponse:\\n{keyword_results['response']}\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 41,
      "metadata": {},
      "outputs": [],
      "source": [
        "def evaluate_reranking(query, standard_results, reranked_results, reference_answer=None):\n",
        "    \"\"\"\n",
        "    Evaluate the quality of reranked results compared to standard results.\n",
        "    \n",
        "    Args:\n",
        "        query (str): The user query\n",
        "        standard_results (Dict): Results from standard retrieval\n",
        "        reranked_results (Dict): Results from reranked retrieval\n",
        "        reference_answer (str, optional): A reference answer for comparison\n",
        "        \n",
        "    Returns:\n",
        "        str: The evaluation output\n",
        "    \"\"\"\n",
        "    # Define the system prompt for the AI evaluator\n",
        "    system_prompt = \"\"\"You are an expert evaluator of RAG systems.\n",
        "    Compare the retrieved contexts and responses from two different retrieval methods.\n",
        "    Assess which one provides better context and a more accurate, comprehensive answer.\"\"\"\n",
        "    \n",
        "    # Prepare the comparison text with truncated contexts and responses\n",
        "    comparison_text = f\"\"\"Query: {query}\n",
        "\n",
        "Standard Retrieval Context:\n",
        "{standard_results['context'][:1000]}... [truncated]\n",
        "\n",
        "Standard Retrieval Answer:\n",
        "{standard_results['response']}\n",
        "\n",
        "Reranked Retrieval Context:\n",
        "{reranked_results['context'][:1000]}... [truncated]\n",
        "\n",
        "Reranked Retrieval Answer:\n",
        "{reranked_results['response']}\"\"\"\n",
        "\n",
        "    # If a reference answer is provided, include it in the comparison text\n",
        "    if reference_answer:\n",
        "        comparison_text += f\"\"\"\n",
        "        \n",
        "Reference Answer:\n",
        "{reference_answer}\"\"\"\n",
        "\n",
        "    # Create the user prompt for the AI evaluator\n",
        "    user_prompt = f\"\"\"\n",
        "{comparison_text}\n",
        "\n",
        "Please evaluate which retrieval method provides:\n",
        "1. More relevant context\n",
        "2. A more accurate answer\n",
        "3. A more comprehensive answer\n",
        "4. Better overall performance\n",
        "\n",
        "Provide a detailed analysis with specific examples.\n",
        "\"\"\"\n",
        "    \n",
        "    # Generate the evaluation response with the specified model\n",
        "    response = client.chat.completions.create(\n",
        "        model=\"qwen2.5:7b\",\n",
        "        temperature=0,\n",
        "        messages=[\n",
        "            {\"role\": \"system\", \"content\": system_prompt},\n",
        "            {\"role\": \"user\", \"content\": user_prompt}\n",
        "        ]\n",
        "    )\n",
        "    \n",
        "    # Return the evaluation output\n",
        "    return response.choices[0].message.content\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 42,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "=== Evaluation Results ===\n",
            "### Evaluation Criteria and Analysis\n",
            "\n",
            "#### 1. More Relevant Context\n",
            "**Standard retrieval context:**\n",
            "- Provides concrete examples of AI applications in financial management, trading, and customer service, plus a detailed discussion of the impact on the future of work. This content directly addresses the \"work\" part of the query.\n",
            "- Covers the balance between job displacement from automation and the creation of new opportunities, emphasizing the importance of retraining and upskilling.\n",
            "\n",
            "**Reranked retrieval context:**\n",
            "- Also covers AI applications in financial management, trading, and customer service, as well as the impact on the future of work, likewise directly addressing the \"work\" part of the query.\n",
            "- Emphasizes the importance of human-AI collaboration and mentions the development of new professional roles.\n",
            "\n",
            "**Analysis and conclusion:**\n",
            "Both contexts are highly relevant to the core of the question, namely how AI changes the way we live and work. Both cover automation, job displacement versus new opportunities, and human-AI collaboration, so there is no significant difference in relevance.\n",
            "\n",
            "#### 2. More Accurate Answer\n",
            "**Standard retrieval answer:**\n",
            "- Mentions AI applications in financial management and discusses the impact on the future of work.\n",
            "- Emphasizes the importance of retraining and upskilling.\n",
            "- Clearly concludes that AI has the potential to change the way we live and work, but does not elaborate on changes to daily life.\n",
            "\n",
            "**Reranked retrieval answer:**\n",
            "- Beyond AI applications at work, it also highlights how AI can help address societal challenges and improve access to education and healthcare.\n",
            "- Emphasizes the importance of human-AI collaboration and mentions the development of new professional roles.\n",
            "- Clearly concludes that AI has the potential to change the way we live and work.\n",
            "\n",
            "**Analysis and conclusion:**\n",
            "The reranked retrieval answer is more accurate, because it discusses not only workplace applications but also details how AI plays a role in daily life. The reranked method therefore performs better on accuracy.\n",
            "\n",
            "#### 3. More Comprehensive Answer\n",
            "**Standard retrieval answer:**\n",
            "- Gives concrete examples from financial management, trading, and customer service.\n",
            "- Discusses the balance between job displacement from automation and the creation of new opportunities.\n",
            "- Emphasizes the importance of retraining and upskilling.\n",
            "\n",
            "**Reranked retrieval answer:**\n",
            "- Beyond workplace applications, it details how AI can help address societal challenges (such as climate change and poverty).\n",
            "- Mentions improved access to education and healthcare.\n",
            "- Discusses the importance of human-AI collaboration and mentions the development of new professional roles.\n",
            "\n",
            "**Analysis and conclusion:**\n",
            "The reranked retrieval answer is more comprehensive. It covers not only workplace applications but also AI's broad impact on daily life, including societal challenges, education, and healthcare. The reranked method therefore performs better on comprehensiveness.\n",
            "\n",
            "#### 4. Better Overall Performance\n",
            "**Standard retrieval context and answer:**\n",
            "- The information provided is fairly specific and directly answers the question.\n",
            "- The answer is concise and clear, but somewhat one-dimensional.\n",
            "\n",
            "**Reranked retrieval context and answer:**\n",
            "- The information provided is broader and deeper.\n",
            "- The answer is detailed and comprehensive, covering impacts across multiple dimensions.\n",
            "\n",
            "**Analysis and conclusion:**\n",
            "Considering both context and answer, the reranked retrieval method performs better overall. It delivers a more relevant, accurate, and comprehensive answer, better serving a multi-dimensional understanding of AI's impact.\n",
            "\n",
            "### Summary\n",
            "In this comparison, the reranked retrieval method outperformed the standard method in providing more relevant context and more accurate, comprehensive answers. Overall, reranked retrieval performed better.\n"
          ]
        }
      ],
      "source": [
        "# Evaluate the quality of the reranked results against the standard results\n",
        "evaluation = evaluate_reranking(\n",
        "    query=query,  # The user query\n",
        "    standard_results=standard_results,  # Results from standard retrieval\n",
        "    reranked_results=llm_results,  # Results from LLM-based reranking\n",
        "    reference_answer=reference_answer  # Reference answer for comparison\n",
        ")\n",
        "\n",
        "# Print the evaluation results\n",
        "print(\"\\n=== Evaluation Results ===\")\n",
        "print(evaluation)\n"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "rag",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.12.11"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
