{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "# Relevant Segment Extraction (RSE) for Enhanced RAG\n",
        "\n",
        "In this notebook we implement Relevant Segment Extraction (RSE), a technique for improving context quality in RAG systems. Rather than simply retrieving a collection of isolated chunks, we identify and reconstruct continuous segments of text, giving the language model better context.\n",
        "\n",
        "## Key Concept\n",
        "\n",
        "Relevant chunks tend to cluster together within a document. By identifying these clusters and preserving their continuity, we provide the LLM with more coherent context.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Setting Up the Environment\n",
        "We begin by importing the necessary libraries.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {},
      "outputs": [],
      "source": [
        "import fitz  # PyMuPDF\n",
        "import os\n",
        "import numpy as np\n",
        "import json\n",
        "from openai import OpenAI\n",
        "import re\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Extracting Text from a PDF File\n",
        "To implement RAG, we first need a text data source. Here we use the PyMuPDF library to extract text from a PDF file.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {},
      "outputs": [],
      "source": [
        "def extract_text_from_pdf(pdf_path):\n",
        "    \"\"\"\n",
        "    Extract all text from a PDF file.\n",
        "\n",
        "    Args:\n",
        "    pdf_path (str): Path to the PDF file.\n",
        "\n",
        "    Returns:\n",
        "    str: The text extracted from the PDF.\n",
        "    \"\"\"\n",
        "    # Open the PDF file\n",
        "    mypdf = fitz.open(pdf_path)\n",
        "    all_text = \"\"  # Initialize an empty string to store the extracted text\n",
        "\n",
        "    # Iterate over every page in the PDF\n",
        "    for page_num in range(mypdf.page_count):\n",
        "        page = mypdf[page_num]  # Get the page\n",
        "        text = page.get_text(\"text\")  # Extract text from the page\n",
        "        all_text += text  # Append the extracted text to all_text\n",
        "\n",
        "    return all_text  # Return the extracted text\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Chunking the Extracted Text\n",
        "Once we have the extracted text, we split it into smaller, non-overlapping chunks to improve retrieval precision and allow contiguous segments to be reconstructed later.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {},
      "outputs": [],
      "source": [
        "def chunk_text(text, chunk_size=800, overlap=0):\n",
        "    \"\"\"\n",
        "    Split text into chunks.\n",
        "    For RSE we generally want non-overlapping chunks (overlap=0) so that segments can be reconstructed correctly.\n",
        "    \n",
        "    Args:\n",
        "        text (str): Input text to chunk\n",
        "        chunk_size (int): Size of each chunk in characters\n",
        "        overlap (int): Number of overlapping characters between chunks\n",
        "        \n",
        "    Returns:\n",
        "        List[str]: List of text chunks\n",
        "    \"\"\"\n",
        "    if overlap >= chunk_size:\n",
        "        raise ValueError(\"overlap must be smaller than chunk_size\")\n",
        "    \n",
        "    chunks = []\n",
        "    \n",
        "    # Simple character-based chunking\n",
        "    for i in range(0, len(text), chunk_size - overlap):\n",
        "        chunk = text[i:i + chunk_size]\n",
        "        if chunk:  # Make sure we don't add empty chunks\n",
        "            chunks.append(chunk)\n",
        "    \n",
        "    return chunks\n"
      ]
    },
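As a quick check of the chunking behavior, here is a minimal, self-contained sketch of the same character-based chunking (the sample string and sizes are illustrative, not taken from the notebook's data):

```python
def chunk_text(text, chunk_size=800, overlap=0):
    # Simple character-based chunking, mirroring the notebook's function.
    if overlap >= chunk_size:
        raise ValueError('overlap must be smaller than chunk_size')
    chunks = []
    for i in range(0, len(text), chunk_size - overlap):
        chunk = text[i:i + chunk_size]
        if chunk:  # skip empty trailing chunks
            chunks.append(chunk)
    return chunks

sample = 'abcdefghij' * 10  # 100 characters
chunks = chunk_text(sample, chunk_size=40, overlap=0)
print([len(c) for c in chunks])  # → [40, 40, 20]
```

With `overlap=0` the chunks tile the text exactly, which is what RSE needs to reconstruct contiguous segments later.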
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Setting Up the OpenAI API Client\n",
        "We initialize the OpenAI client to generate embeddings and responses.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Initialize the OpenAI client with a base URL and API key\n",
        "client = OpenAI(\n",
        "    base_url=\"http://localhost:11434/v1/\",\n",
        "    api_key=\"ollama\"  # Ollama does not require a real API key, but the client expects a value\n",
        ")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Building a Simple Vector Store\n",
        "Let's implement a simple in-memory vector store backed by NumPy.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {},
      "outputs": [],
      "source": [
        "class SimpleVectorStore:\n",
        "    \"\"\"\n",
        "    A lightweight vector store implementation using NumPy.\n",
        "    \"\"\"\n",
        "    def __init__(self, dimension=1536):\n",
        "        \"\"\"\n",
        "        Initialize the vector store.\n",
        "        \n",
        "        Args:\n",
        "            dimension (int): Dimensionality of the embedding vectors\n",
        "        \"\"\"\n",
        "        self.dimension = dimension\n",
        "        self.vectors = []\n",
        "        self.documents = []\n",
        "        self.metadata = []\n",
        "    \n",
        "    def add_documents(self, documents, vectors=None, metadata=None):\n",
        "        \"\"\"\n",
        "        Add documents to the vector store.\n",
        "        \n",
        "        Args:\n",
        "            documents (List[str]): List of document chunks\n",
        "            vectors (List[List[float]], optional): List of embedding vectors\n",
        "            metadata (List[Dict], optional): List of metadata dictionaries\n",
        "        \"\"\"\n",
        "        if vectors is None:\n",
        "            vectors = [None] * len(documents)\n",
        "        \n",
        "        if metadata is None:\n",
        "            metadata = [{} for _ in range(len(documents))]\n",
        "        \n",
        "        for doc, vec, meta in zip(documents, vectors, metadata):\n",
        "            self.documents.append(doc)\n",
        "            self.vectors.append(vec)\n",
        "            self.metadata.append(meta)\n",
        "    \n",
        "    def search(self, query_vector, top_k=5):\n",
        "        \"\"\"\n",
        "        Search for the most similar documents.\n",
        "        \n",
        "        Args:\n",
        "            query_vector (List[float]): Query embedding vector\n",
        "            top_k (int): Number of results to return\n",
        "            \n",
        "        Returns:\n",
        "            List[Dict]: List of results containing documents, scores, and metadata\n",
        "        \"\"\"\n",
        "        if not self.vectors or not self.documents:\n",
        "            return []\n",
        "        \n",
        "        # Convert the query vector to a NumPy array\n",
        "        query_array = np.array(query_vector)\n",
        "        \n",
        "        # Compute similarities\n",
        "        similarities = []\n",
        "        for i, vector in enumerate(self.vectors):\n",
        "            if vector is not None:\n",
        "                # Compute cosine similarity\n",
        "                similarity = np.dot(query_array, vector) / (\n",
        "                    np.linalg.norm(query_array) * np.linalg.norm(vector)\n",
        "                )\n",
        "                similarities.append((i, similarity))\n",
        "        \n",
        "        # Sort by similarity (descending)\n",
        "        similarities.sort(key=lambda x: x[1], reverse=True)\n",
        "        \n",
        "        # Take the top-k results\n",
        "        results = []\n",
        "        for i, score in similarities[:top_k]:\n",
        "            results.append({\n",
        "                \"document\": self.documents[i],\n",
        "                \"score\": float(score),\n",
        "                \"metadata\": self.metadata[i]\n",
        "            })\n",
        "        \n",
        "        return results\n"
      ]
    },
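The `search` method above ranks documents by cosine similarity. As a sanity check, here is the scoring step in isolation, using toy 3-dimensional vectors in place of real embeddings:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: dot product normalized by the vector magnitudes
    a, b = np.array(a, dtype=float), np.array(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = [1.0, 0.0, 0.0]
docs = {
    'doc_a': [1.0, 0.0, 0.0],  # identical direction -> similarity 1.0
    'doc_b': [0.0, 1.0, 0.0],  # orthogonal -> similarity 0.0
    'doc_c': [1.0, 1.0, 0.0],  # 45 degrees -> similarity ~0.707
}
ranked = sorted(docs, key=lambda name: cosine_similarity(query, docs[name]), reverse=True)
print(ranked)  # → ['doc_a', 'doc_c', 'doc_b']
```

Cosine similarity ignores vector length and compares direction only, which is why it is the standard choice for comparing embeddings.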
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Creating Embeddings for Text Chunks\n",
        "Embeddings convert text into numerical vectors, enabling efficient similarity search.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {},
      "outputs": [],
      "source": [
        "def create_embeddings(texts, model=\"bge-m3:latest\"):\n",
        "    \"\"\"\n",
        "    Generate embeddings for a list of texts.\n",
        "    \n",
        "    Args:\n",
        "        texts (List[str]): Texts to embed\n",
        "        model (str): Embedding model to use\n",
        "        \n",
        "    Returns:\n",
        "        List[List[float]]: List of embedding vectors\n",
        "    \"\"\"\n",
        "    if not texts:\n",
        "        return []  # Return an empty list if no texts are provided\n",
        "        \n",
        "    # Process in batches if the list is long\n",
        "    batch_size = 100  # Adjust according to API limits\n",
        "    all_embeddings = []  # Initialize a list to store all embeddings\n",
        "    \n",
        "    for i in range(0, len(texts), batch_size):\n",
        "        batch = texts[i:i + batch_size]  # Get the current batch of texts\n",
        "        \n",
        "        # Create embeddings for the current batch with the specified model\n",
        "        response = client.embeddings.create(\n",
        "            input=batch,\n",
        "            model=model\n",
        "        )\n",
        "        \n",
        "        # Extract embeddings from the response\n",
        "        batch_embeddings = [item.embedding for item in response.data]\n",
        "        all_embeddings.extend(batch_embeddings)  # Add the batch embeddings to the list\n",
        "        \n",
        "    return all_embeddings  # Return the list of all embeddings\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Processing a Document for RSE\n",
        "Now let's implement the core RSE functionality.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {},
      "outputs": [],
      "source": [
        "def process_document(pdf_path, chunk_size=800):\n",
        "    \"\"\"\n",
        "    Process a document for use with RSE.\n",
        "    \n",
        "    Args:\n",
        "        pdf_path (str): Path to the PDF document\n",
        "        chunk_size (int): Size of each chunk in characters\n",
        "        \n",
        "    Returns:\n",
        "        Tuple[List[str], SimpleVectorStore, Dict]: Chunks, vector store, and document info\n",
        "    \"\"\"\n",
        "    print(\"Extracting text from document...\")\n",
        "    # Extract text from the PDF file\n",
        "    text = extract_text_from_pdf(pdf_path)\n",
        "    \n",
        "    print(\"Chunking text into non-overlapping segments...\")\n",
        "    # Chunk the extracted text into non-overlapping segments\n",
        "    chunks = chunk_text(text, chunk_size=chunk_size, overlap=0)\n",
        "    print(f\"Created {len(chunks)} chunks\")\n",
        "    \n",
        "    print(\"Generating embeddings for chunks...\")\n",
        "    # Generate embeddings for the text chunks\n",
        "    chunk_embeddings = create_embeddings(chunks)\n",
        "    \n",
        "    # Create a SimpleVectorStore instance\n",
        "    vector_store = SimpleVectorStore()\n",
        "    \n",
        "    # Add documents with metadata (including the chunk index for later reconstruction)\n",
        "    metadata = [{\"chunk_index\": i, \"source\": pdf_path} for i in range(len(chunks))]\n",
        "    vector_store.add_documents(chunks, chunk_embeddings, metadata)\n",
        "    \n",
        "    # Track the original document structure for segment reconstruction\n",
        "    doc_info = {\n",
        "        \"chunks\": chunks,\n",
        "        \"source\": pdf_path,\n",
        "    }\n",
        "    \n",
        "    return chunks, vector_store, doc_info\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Core RSE Algorithm: Computing Chunk Values and Finding the Best Segments\n",
        "Now that we have functions for processing a document and embedding its chunks, we can implement the core RSE algorithm.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {},
      "outputs": [],
      "source": [
        "def calculate_chunk_values(query, chunks, vector_store, irrelevant_chunk_penalty=0.2):\n",
        "    \"\"\"\n",
        "    Calculate chunk values by combining relevance and position.\n",
        "    \n",
        "    Args:\n",
        "        query (str): Query text\n",
        "        chunks (List[str]): List of document chunks\n",
        "        vector_store (SimpleVectorStore): Vector store containing the chunks\n",
        "        irrelevant_chunk_penalty (float): Penalty applied to irrelevant chunks\n",
        "        \n",
        "    Returns:\n",
        "        List[float]: List of chunk values\n",
        "    \"\"\"\n",
        "    # Create the query embedding\n",
        "    query_embedding = create_embeddings([query])[0]\n",
        "    \n",
        "    # Get similarity scores for all chunks\n",
        "    num_chunks = len(chunks)\n",
        "    results = vector_store.search(query_embedding, top_k=num_chunks)\n",
        "    \n",
        "    # Map chunk indices to relevance scores\n",
        "    relevance_scores = {result[\"metadata\"][\"chunk_index\"]: result[\"score\"] for result in results}\n",
        "    \n",
        "    # Compute chunk values (relevance score minus penalty)\n",
        "    chunk_values = []\n",
        "    for i in range(num_chunks):\n",
        "        # Get the relevance score, defaulting to 0 if the chunk is not in the results\n",
        "        score = relevance_scores.get(i, 0.0)\n",
        "        # Apply the penalty to convert the score into a value, so irrelevant chunks get negative values\n",
        "        value = score - irrelevant_chunk_penalty\n",
        "        chunk_values.append(value)\n",
        "    \n",
        "    return chunk_values\n"
      ]
    },
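To see how the penalty turns raw similarity scores into signed chunk values, here is a minimal sketch with mocked relevance scores (the `scores` dict is a stand-in for real vector-store search results, so no embedding calls are needed):

```python
def chunk_values_from_scores(relevance_scores, num_chunks, irrelevant_chunk_penalty=0.2):
    # Subtract the penalty so chunks below the threshold get negative values;
    # negative values let the segment search treat them as gaps to avoid.
    return [relevance_scores.get(i, 0.0) - irrelevant_chunk_penalty
            for i in range(num_chunks)]

# Mocked similarity scores: chunks 0-1 strongly relevant, chunk 3 mildly relevant
scores = {0: 0.8, 1: 0.7, 3: 0.3}
values = chunk_values_from_scores(scores, num_chunks=5)
print([round(v, 2) for v in values])  # → [0.6, 0.5, -0.2, 0.1, -0.2]
```

Chunks absent from the results default to a score of 0 and end up with value `-penalty`, so a segment only "pays off" if its relevant chunks outweigh the irrelevant ones it spans.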
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {},
      "outputs": [],
      "source": [
        "def find_best_segments(chunk_values, max_segment_length=20, total_max_length=30, min_segment_value=0.2):\n",
        "    \"\"\"\n",
        "    Find the best segments using a variant of the maximum-sum subarray algorithm.\n",
        "    \n",
        "    Args:\n",
        "        chunk_values (List[float]): Value of each chunk\n",
        "        max_segment_length (int): Maximum length of a single segment\n",
        "        total_max_length (int): Maximum total length across all segments\n",
        "        min_segment_value (float): Minimum value for a segment to be considered\n",
        "        \n",
        "    Returns:\n",
        "        Tuple[List[Tuple[int, int]], List[float]]: (start, end) indices of the best segments, with their scores\n",
        "    \"\"\"\n",
        "    print(\"Finding optimal continuous text segments...\")\n",
        "    \n",
        "    best_segments = []\n",
        "    segment_scores = []\n",
        "    total_included_chunks = 0\n",
        "    \n",
        "    # Keep finding segments until we reach the limit\n",
        "    while total_included_chunks < total_max_length:\n",
        "        best_score = min_segment_value  # Minimum threshold for a segment\n",
        "        best_segment = None\n",
        "        \n",
        "        # Try every possible starting position\n",
        "        for start in range(len(chunk_values)):\n",
        "            # Skip if this starting position is already inside a selected segment\n",
        "            if any(start >= s[0] and start < s[1] for s in best_segments):\n",
        "                continue\n",
        "                \n",
        "            # Try every possible segment length\n",
        "            for length in range(1, min(max_segment_length, len(chunk_values) - start) + 1):\n",
        "                end = start + length\n",
        "                \n",
        "                # Skip if the candidate (start, end) overlaps any selected segment\n",
        "                if any(start < s[1] and end > s[0] for s in best_segments):\n",
        "                    continue\n",
        "                \n",
        "                # The segment value is the sum of its chunk values\n",
        "                segment_value = sum(chunk_values[start:end])\n",
        "                \n",
        "                # Update the best segment if this one is better\n",
        "                if segment_value > best_score:\n",
        "                    best_score = segment_value\n",
        "                    best_segment = (start, end)\n",
        "        \n",
        "        # If we found a good segment, add it\n",
        "        if best_segment:\n",
        "            best_segments.append(best_segment)\n",
        "            segment_scores.append(best_score)\n",
        "            total_included_chunks += best_segment[1] - best_segment[0]\n",
        "            print(f\"Found segment {best_segment} with score {best_score:.4f}\")\n",
        "        else:\n",
        "            # No more good segments to be found\n",
        "            break\n",
        "    \n",
        "    # Sort segments and their scores together by starting position for readability\n",
        "    if best_segments:\n",
        "        ordered = sorted(zip(best_segments, segment_scores), key=lambda pair: pair[0][0])\n",
        "        best_segments = [seg for seg, _ in ordered]\n",
        "        segment_scores = [score for _, score in ordered]\n",
        "    \n",
        "    return best_segments, segment_scores\n"
      ]
    },
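The segment search is easiest to see on toy data. Below is a compact sketch of the same greedy strategy (small illustrative limits, not the notebook's defaults), applied to a value array with two clusters of relevant chunks:

```python
def find_best_segments(chunk_values, max_segment_length=3, total_max_length=6, min_segment_value=0.2):
    # Greedy search: repeatedly take the highest-sum window that does not
    # overlap an already selected segment, until the length budget is spent.
    segments, scores, used = [], [], 0
    while used < total_max_length:
        best_score, best_seg = min_segment_value, None
        for start in range(len(chunk_values)):
            max_end = min(start + max_segment_length, len(chunk_values))
            for end in range(start + 1, max_end + 1):
                if any(start < e and end > s for s, e in segments):
                    continue  # window overlaps a selected segment
                value = sum(chunk_values[start:end])
                if value > best_score:
                    best_score, best_seg = value, (start, end)
        if best_seg is None:
            break
        segments.append(best_seg)
        scores.append(best_score)
        used += best_seg[1] - best_seg[0]
    return sorted(segments), scores

# Two clusters of relevant chunks (positive values) separated by irrelevant ones
values = [0.5, 0.6, -0.3, -0.3, 0.4, 0.5, -0.3, -0.3]
segs, scores = find_best_segments(values)
print(segs)  # → [(0, 2), (4, 6)]
```

The negative values act as barriers: extending a segment across an irrelevant chunk lowers its sum, so the search naturally keeps the two relevant clusters as separate segments.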
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Reconstructing Segments and Using Them for RAG\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {},
      "outputs": [],
      "source": [
        "def reconstruct_segments(chunks, best_segments):\n",
        "    \"\"\"\n",
        "    Reconstruct text segments from chunk indices.\n",
        "    \n",
        "    Args:\n",
        "        chunks (List[str]): List of all document chunks\n",
        "        best_segments (List[Tuple[int, int]]): (start, end) indices of the segments\n",
        "        \n",
        "    Returns:\n",
        "        List[Dict]: List of reconstructed text segments\n",
        "    \"\"\"\n",
        "    reconstructed_segments = []  # Initialize an empty list to store the reconstructed segments\n",
        "    \n",
        "    for start, end in best_segments:\n",
        "        # Join the chunks in this segment to form the complete segment text\n",
        "        segment_text = \" \".join(chunks[start:end])\n",
        "        # Add the segment text and its range to the reconstructed_segments list\n",
        "        reconstructed_segments.append({\n",
        "            \"text\": segment_text,\n",
        "            \"segment_range\": (start, end),\n",
        "        })\n",
        "    \n",
        "    return reconstructed_segments  # Return the list of reconstructed text segments\n"
      ]
    },
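Reconstruction is just a slice-and-join over the original chunk list, which is why RSE requires non-overlapping chunks. A tiny self-contained example with single-letter placeholder chunks:

```python
def reconstruct_segments(chunks, best_segments):
    # Join contiguous chunks back into full segment texts, keeping the range
    return [{'text': ' '.join(chunks[start:end]), 'segment_range': (start, end)}
            for start, end in best_segments]

chunks = ['A', 'B', 'C', 'D', 'E']
segments = reconstruct_segments(chunks, [(0, 2), (3, 5)])
print(segments[0]['text'])  # → A B
```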
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {},
      "outputs": [],
      "source": [
        "def format_segments_for_context(segments):\n",
        "    \"\"\"\n",
        "    Format segments into a context string for the LLM.\n",
        "    \n",
        "    Args:\n",
        "        segments (List[Dict]): List of segment dictionaries\n",
        "        \n",
        "    Returns:\n",
        "        str: Formatted context text\n",
        "    \"\"\"\n",
        "    context = []  # Initialize an empty list to store the formatted context\n",
        "    \n",
        "    for i, segment in enumerate(segments):\n",
        "        # Create a header for each segment with its index and chunk range\n",
        "        segment_header = f\"Segment {i+1} (chunks {segment['segment_range'][0]}-{segment['segment_range'][1]-1}):\"\n",
        "        context.append(segment_header)  # Add the segment header to the context list\n",
        "        context.append(segment['text'])  # Add the segment text to the context list\n",
        "        context.append(\"-\" * 80)  # Add a separator line for readability\n",
        "    \n",
        "    # Join all elements of the context list with double newlines and return the result\n",
        "    return \"\\n\\n\".join(context)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Generating a Response with the RSE Context\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {},
      "outputs": [],
      "source": [
        "def generate_response(query, context, model=\"qwen2.5:7b\"):\n",
        "    \"\"\"\n",
        "    Generate a response based on the query and context.\n",
        "    \n",
        "    Args:\n",
        "        query (str): User query\n",
        "        context (str): Context text from the relevant segments\n",
        "        model (str): LLM model to use\n",
        "        \n",
        "    Returns:\n",
        "        str: Generated response\n",
        "    \"\"\"\n",
        "    print(\"Generating a response using relevant segments as context...\")\n",
        "    \n",
        "    # Define a system prompt to guide the AI's behavior\n",
        "    system_prompt = \"\"\"You are a helpful assistant that answers questions based on the provided context.\n",
        "    The context consists of document segments relevant to the user's query.\n",
        "    Use the information in these segments to provide a comprehensive, accurate answer.\n",
        "    If the context contains no information relevant to the question, say so explicitly.\"\"\"\n",
        "    \n",
        "    # Create the user prompt by combining the context and the query\n",
        "    user_prompt = f\"\"\"\n",
        "Context:\n",
        "{context}\n",
        "\n",
        "Question: {query}\n",
        "\n",
        "Please provide a helpful answer based on the context above.\n",
        "\"\"\"\n",
        "    \n",
        "    # Generate the response with the specified model\n",
        "    response = client.chat.completions.create(\n",
        "        model=model,\n",
        "        messages=[\n",
        "            {\"role\": \"system\", \"content\": system_prompt},\n",
        "            {\"role\": \"user\", \"content\": user_prompt}\n",
        "        ],\n",
        "        temperature=0\n",
        "    )\n",
        "    \n",
        "    # Return the generated response content\n",
        "    return response.choices[0].message.content\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## The Complete RSE Pipeline Function\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "metadata": {},
      "outputs": [],
      "source": [
        "def rag_with_rse(pdf_path, query, chunk_size=800, irrelevant_chunk_penalty=0.2):\n",
        "    \"\"\"\n",
        "    Complete RAG pipeline with Relevant Segment Extraction.\n",
        "    \n",
        "    Args:\n",
        "        pdf_path (str): Path to the document\n",
        "        query (str): User query\n",
        "        chunk_size (int): Chunk size\n",
        "        irrelevant_chunk_penalty (float): Penalty applied to irrelevant chunks\n",
        "        \n",
        "    Returns:\n",
        "        Dict: Result containing the query, segments, and response\n",
        "    \"\"\"\n",
        "    print(\"\\n=== STARTING RAG WITH RELEVANT SEGMENT EXTRACTION ===\")\n",
        "    print(f\"Query: {query}\")\n",
        "    \n",
        "    # Process the document: extract text, chunk it, and create embeddings\n",
        "    chunks, vector_store, doc_info = process_document(pdf_path, chunk_size)\n",
        "    \n",
        "    # Compute relevance scores and chunk values based on the query\n",
        "    print(\"\\nCalculating relevance scores and chunk values...\")\n",
        "    chunk_values = calculate_chunk_values(query, chunks, vector_store, irrelevant_chunk_penalty)\n",
        "    \n",
        "    # Find the best text segments based on the chunk values\n",
        "    best_segments, scores = find_best_segments(\n",
        "        chunk_values, \n",
        "        max_segment_length=20, \n",
        "        total_max_length=30, \n",
        "        min_segment_value=0.2\n",
        "    )\n",
        "    \n",
        "    # Reconstruct text segments from the best chunks\n",
        "    print(\"\\nReconstructing text segments from chunks...\")\n",
        "    segments = reconstruct_segments(chunks, best_segments)\n",
        "    \n",
        "    # Format the segments into a context string for the language model\n",
        "    context = format_segments_for_context(segments)\n",
        "    \n",
        "    # Generate a response from the language model using the context\n",
        "    response = generate_response(query, context)\n",
        "    \n",
        "    # Compile the results into a dictionary\n",
        "    result = {\n",
        "        \"query\": query,\n",
        "        \"segments\": segments,\n",
        "        \"response\": response\n",
        "    }\n",
        "    \n",
        "    print(\"\\n=== FINAL RESPONSE ===\")\n",
        "    print(response)\n",
        "    \n",
        "    return result\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Comparison with Standard Retrieval\n",
        "Let's implement a standard retrieval method to compare against RSE:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 14,
      "metadata": {},
      "outputs": [],
      "source": [
        "def standard_top_k_retrieval(pdf_path, query, k=10, chunk_size=800):\n",
        "    \"\"\"\n",
        "    Standard RAG with top-k retrieval.\n",
        "    \n",
        "    Args:\n",
        "        pdf_path (str): Path to the document\n",
        "        query (str): User query\n",
        "        k (int): Number of chunks to retrieve\n",
        "        chunk_size (int): Chunk size\n",
        "        \n",
        "    Returns:\n",
        "        Dict: Result containing the query, chunks, and response\n",
        "    \"\"\"\n",
        "    print(\"\\n=== STARTING STANDARD TOP-K RETRIEVAL ===\")\n",
        "    print(f\"Query: {query}\")\n",
        "    \n",
        "    # Process the document: extract text, chunk it, and create embeddings\n",
        "    chunks, vector_store, doc_info = process_document(pdf_path, chunk_size)\n",
        "    \n",
        "    # Create an embedding for the query\n",
        "    print(\"Creating query embedding and retrieving chunks...\")\n",
        "    query_embedding = create_embeddings([query])[0]\n",
        "    \n",
        "    # Retrieve the top-k most relevant chunks based on the query embedding\n",
        "    results = vector_store.search(query_embedding, top_k=k)\n",
        "    retrieved_chunks = [result[\"document\"] for result in results]\n",
        "    \n",
        "    # Format the retrieved chunks into a context string\n",
        "    context = \"\\n\\n\".join([\n",
        "        f\"Chunk {i+1}:\\n{chunk}\" \n",
        "        for i, chunk in enumerate(retrieved_chunks)\n",
        "    ])\n",
        "    \n",
        "    # Generate a response from the language model using the context\n",
        "    response = generate_response(query, context)\n",
        "    \n",
        "    # Compile the results into a dictionary\n",
        "    result = {\n",
        "        \"query\": query,\n",
        "        \"chunks\": retrieved_chunks,\n",
        "        \"response\": response\n",
        "    }\n",
        "    \n",
        "    print(\"\\n=== FINAL RESPONSE ===\")\n",
        "    print(response)\n",
        "    \n",
        "    return result\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## Evaluating RSE\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 15,
      "metadata": {},
      "outputs": [],
      "source": [
        "def evaluate_methods(pdf_path, query, reference_answer=None):\n",
        "    \"\"\"\n",
        "    Compare RSE against standard top-k retrieval.\n",
        "    \n",
        "    Args:\n",
        "        pdf_path (str): Path to the document\n",
        "        query (str): User query\n",
        "        reference_answer (str, optional): Reference answer for evaluation\n",
        "    \"\"\"\n",
        "    print(\"\\n========= EVALUATION =========\\n\")\n",
        "    \n",
        "    # Run RAG with the Relevant Segment Extraction (RSE) method\n",
        "    rse_result = rag_with_rse(pdf_path, query)\n",
        "    \n",
        "    # Run the standard top-k retrieval method\n",
        "    standard_result = standard_top_k_retrieval(pdf_path, query)\n",
        "    \n",
        "    # If a reference answer is provided, evaluate the responses\n",
        "    if reference_answer:\n",
        "        print(\"\\n=== COMPARING RESULTS ===\")\n",
        "        \n",
        "        # Create an evaluation prompt comparing both responses to the reference answer\n",
        "        evaluation_prompt = f\"\"\"\n",
        "            Query: {query}\n",
        "\n",
        "            Reference answer:\n",
        "            {reference_answer}\n",
        "\n",
        "            Response from standard retrieval:\n",
        "            {standard_result[\"response\"]}\n",
        "\n",
        "            Response from Relevant Segment Extraction:\n",
        "            {rse_result[\"response\"]}\n",
        "\n",
        "            Compare these two responses against the reference answer. Which one is:\n",
        "            1. More accurate and comprehensive\n",
        "            2. Better at addressing the user's query\n",
        "            3. Less likely to include irrelevant information\n",
        "\n",
        "            Please explain your reasoning for each point.\n",
        "        \"\"\"\n",
        "        \n",
        "        print(\"Evaluating responses against the reference answer...\")\n",
        "        \n",
        "        # Generate the evaluation with a model available on the local Ollama server.\n",
        "        # Note: \"meta-llama/Llama-3.2-3B-Instruct\" is not an Ollama model name and returns a 404.\n",
        "        evaluation = client.chat.completions.create(\n",
        "            model=\"qwen2.5:7b\",\n",
        "            messages=[\n",
        "                {\"role\": \"system\", \"content\": \"You are an objective evaluator of RAG system responses.\"},\n",
        "                {\"role\": \"user\", \"content\": evaluation_prompt}\n",
        "            ]\n",
        "        )\n",
        "        \n",
        "        # Print the evaluation results\n",
        "        print(\"\\n=== EVALUATION RESULTS ===\")\n",
        "        print(evaluation.choices[0].message.content)\n",
        "    \n",
        "    # Return the results from both methods\n",
        "    return {\n",
        "        \"rse_result\": rse_result,\n",
        "        \"standard_result\": standard_result\n",
        "    }\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 16,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "========= EVALUATION =========\n",
            "\n",
            "\n",
            "=== STARTING RAG WITH RELEVANT SEGMENT EXTRACTION ===\n",
            "Query: What is 'Explainable AI' and why is it considered important?\n",
            "Extracting text from document...\n",
            "Chunking text into non-overlapping segments...\n",
            "Created 42 chunks\n",
            "Generating embeddings for chunks...\n",
            "\n",
            "Calculating relevance scores and chunk values...\n",
            "Finding optimal continuous text segments...\n",
            "Found segment (22, 42) with score 6.1514\n",
            "Found segment (0, 20) with score 5.8959\n",
            "\n",
            "Reconstructing text segments from chunks...\n",
            "Generating a response using relevant segments as context...\n",
            "\n",
            "=== FINAL RESPONSE ===\n",
            "Explainable AI (XAI) refers to methods that aim to make the decision-making processes of artificial intelligence systems more transparent and understandable to humans. This involves developing techniques that can explain how an AI system arrived at a particular conclusion or recommendation.\n",
            "\n",
            "XAI is considered important for several reasons:\n",
            "\n",
            "1. **Building Trust**: By making AI decisions clearer, XAI helps build trust among users who may be hesitant about relying on opaque AI systems.\n",
            "2. **Enhancing Accountability**: Clear explanations allow stakeholders to understand and verify the reasoning behind AI outcomes, which can help in holding AI systems accountable.\n",
            "3. **Improving Fairness**: Understanding how an AI system makes decisions can reveal potential biases or unfair practices, enabling developers to address them.\n",
            "4. **Facilitating User Control**: Users can better control their interactions with AI if they understand why certain recommendations are made, allowing for more informed decision-making.\n",
            "\n",
            "Overall, XAI is crucial for ensuring that AI systems are not only effective but also ethical and reliable in their applications.\n",
            "\n",
            "=== STARTING STANDARD TOP-K RETRIEVAL ===\n",
            "Query: What is 'Explainable AI' and why is it considered important?\n",
            "Extracting text from document...\n",
            "Chunking text into non-overlapping segments...\n",
            "Created 42 chunks\n",
            "Generating embeddings for chunks...\n",
            "Creating query embedding and retrieving chunks...\n",
            "Generating a response using relevant segments as context...\n",
            "\n",
            "=== FINAL RESPONSE ===\n",
            "Explainable AI (XAI) refers to techniques that aim to make AI decisions more understandable, enabling users to assess their fairness and accuracy. This is crucial for several reasons:\n",
            "\n",
            "1. **Transparency**: XAI enhances transparency in how AI systems arrive at their decisions, making it easier for stakeholders to understand the reasoning behind these decisions.\n",
            "2. **Fairness Assessment**: By providing insights into decision-making processes, XAI helps users evaluate whether AI systems are fair and unbiased.\n",
            "3. **Accountability**: Clear explanations of AI decisions can establish accountability, ensuring that developers, deployers, and users can be held responsible for the outcomes of AI systems.\n",
            "\n",
            "In summary, Explainable AI is important because it promotes trust in AI systems by making their decision-making processes more transparent and understandable to humans, which is essential for addressing potential harms, ensuring ethical behavior, and maintaining public confidence.\n",
            "\n",
            "=== COMPARING RESULTS ===\n",
            "Evaluating responses against the reference answer...\n"
          ]
        },
        {
          "ename": "NotFoundError",
          "evalue": "Error code: 404 - {'error': {'message': 'model \"meta-llama/Llama-3.2-3B-Instruct\" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}",
          "output_type": "error",
          "traceback": [
            "\u001b[31m---------------------------------------------------------------------------\u001b[39m",
            "\u001b[31mNotFoundError\u001b[39m                             Traceback (most recent call last)",
            "\u001b[36mCell\u001b[39m\u001b[36m \u001b[39m\u001b[32mIn[16]\u001b[39m\u001b[32m, line 15\u001b[39m\n\u001b[32m     12\u001b[39m pdf_path = \u001b[33m\"\u001b[39m\u001b[33mdata/AI_Information.pdf\u001b[39m\u001b[33m\"\u001b[39m\n\u001b[32m     14\u001b[39m \u001b[38;5;66;03m# 运行评估\u001b[39;00m\n\u001b[32m---> \u001b[39m\u001b[32m15\u001b[39m results = \u001b[43mevaluate_methods\u001b[49m\u001b[43m(\u001b[49m\u001b[43mpdf_path\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mquery\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mreference_answer\u001b[49m\u001b[43m)\u001b[49m\n",
            "\u001b[36mCell\u001b[39m\u001b[36m \u001b[39m\u001b[32mIn[15]\u001b[39m\u001b[32m, line 46\u001b[39m, in \u001b[36mevaluate_methods\u001b[39m\u001b[34m(pdf_path, query, reference_answer)\u001b[39m\n\u001b[32m     43\u001b[39m \u001b[38;5;28mprint\u001b[39m(\u001b[33m\"\u001b[39m\u001b[33m根据参考答案评估响应...\u001b[39m\u001b[33m\"\u001b[39m)\n\u001b[32m     45\u001b[39m \u001b[38;5;66;03m# 使用指定模型生成评估\u001b[39;00m\n\u001b[32m---> \u001b[39m\u001b[32m46\u001b[39m evaluation = \u001b[43mclient\u001b[49m\u001b[43m.\u001b[49m\u001b[43mchat\u001b[49m\u001b[43m.\u001b[49m\u001b[43mcompletions\u001b[49m\u001b[43m.\u001b[49m\u001b[43mcreate\u001b[49m\u001b[43m(\u001b[49m\n\u001b[32m     47\u001b[39m \u001b[43m    \u001b[49m\u001b[43mmodel\u001b[49m\u001b[43m=\u001b[49m\u001b[33;43m\"\u001b[39;49m\u001b[33;43mmeta-llama/Llama-3.2-3B-Instruct\u001b[39;49m\u001b[33;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[32m     48\u001b[39m \u001b[43m    \u001b[49m\u001b[43mmessages\u001b[49m\u001b[43m=\u001b[49m\u001b[43m[\u001b[49m\n\u001b[32m     49\u001b[39m \u001b[43m        \u001b[49m\u001b[43m{\u001b[49m\u001b[33;43m\"\u001b[39;49m\u001b[33;43mrole\u001b[39;49m\u001b[33;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[33;43m\"\u001b[39;49m\u001b[33;43msystem\u001b[39;49m\u001b[33;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[33;43m\"\u001b[39;49m\u001b[33;43mcontent\u001b[39;49m\u001b[33;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[33;43m\"\u001b[39;49m\u001b[33;43m你是RAG系统响应的客观评估者。\u001b[39;49m\u001b[33;43m\"\u001b[39;49m\u001b[43m}\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m     50\u001b[39m \u001b[43m        \u001b[49m\u001b[43m{\u001b[49m\u001b[33;43m\"\u001b[39;49m\u001b[33;43mrole\u001b[39;49m\u001b[33;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[33;43m\"\u001b[39;49m\u001b[33;43muser\u001b[39;49m\u001b[33;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m 
\u001b[49m\u001b[33;43m\"\u001b[39;49m\u001b[33;43mcontent\u001b[39;49m\u001b[33;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mevaluation_prompt\u001b[49m\u001b[43m}\u001b[49m\n\u001b[32m     51\u001b[39m \u001b[43m    \u001b[49m\u001b[43m]\u001b[49m\n\u001b[32m     52\u001b[39m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n\u001b[32m     54\u001b[39m \u001b[38;5;66;03m# 打印评估结果\u001b[39;00m\n\u001b[32m     55\u001b[39m \u001b[38;5;28mprint\u001b[39m(\u001b[33m\"\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[33m=== 评估结果 ===\u001b[39m\u001b[33m\"\u001b[39m)\n",
            "\u001b[31mNotFoundError\u001b[39m: Error code: 404 - {'error': {'message': 'model \"meta-llama/Llama-3.2-3B-Instruct\" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}"
          ]
        }
      ],
      "source": [
        "# Load the validation data from the JSON file\n",
        "with open('data/val.json') as f:\n",
        "    data = json.load(f)\n",
        "\n",
        "# Extract the first query from the validation data\n",
        "query = data[0]['question']\n",
        "\n",
        "# Extract the reference answer from the validation data\n",
        "reference_answer = data[0]['ideal_answer']\n",
        "\n",
        "# Path to the PDF file\n",
        "pdf_path = \"data/AI_Information.pdf\"\n",
        "\n",
        "# Run the evaluation\n",
        "results = evaluate_methods(pdf_path, query, reference_answer)\n"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "rag",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.12.11"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
