{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "markdown"
    }
   },
   "source": [
    "# 基于问题生成的文档增强RAG\n",
    "\n",
    "本notebook实现了一种通过问题生成进行文档增强的增强型RAG方法。通过为每个文本块生成相关问题，我们改进了检索过程，从而提高语言模型的响应质量。\n",
    "\n",
    "在此实现中，我们遵循以下步骤：\n",
    "\n",
    "1. **数据摄取**：从PDF文件中提取文本。\n",
    "2. **分块处理**：将文本分割成可管理的块。\n",
    "3. **问题生成**：为每个块生成相关问题。\n",
    "4. **嵌入创建**：为块和生成的问题创建嵌入。\n",
    "5. **向量存储创建**：使用NumPy构建一个简单的向量存储。\n",
    "6. **语义搜索**：为用户查询检索相关的块和问题。\n",
    "7. **响应生成**：基于检索到的内容生成答案。\n",
    "8. **评估**：评估生成响应的质量。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 设置环境\n",
    "我们首先导入必要的库。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "import fitz\n",
    "import os\n",
    "import numpy as np\n",
    "import json\n",
    "from openai import OpenAI\n",
    "import re\n",
    "from tqdm import tqdm"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 从PDF文件中提取文本\n",
    "为了实现RAG，我们首先需要文本数据源。在这种情况下，我们使用PyMuPDF库从PDF文件中提取文本。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "def extract_text_from_pdf(pdf_path):\n",
    "    \"\"\"\n",
    "    从PDF文件中提取文本并打印前`num_chars`个字符。\n",
    "\n",
    "    参数:\n",
    "    pdf_path (str): PDF文件的路径。\n",
    "\n",
    "    返回:\n",
    "    str: 从PDF提取的文本。\n",
    "    \"\"\"\n",
    "    # 打开PDF文件\n",
    "    mypdf = fitz.open(pdf_path)\n",
    "    all_text = \"\"  # 初始化一个空字符串来存储提取的文本\n",
    "\n",
    "    # 遍历PDF中的每一页\n",
    "    for page_num in range(mypdf.page_count):\n",
    "        page = mypdf[page_num]  # 获取页面\n",
    "        text = page.get_text(\"text\")  # 从页面提取文本\n",
    "        all_text += text  # 将提取的文本添加到all_text字符串\n",
    "\n",
    "    return all_text  # 返回提取的文本"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 对提取的文本进行分块\n",
    "一旦我们有了提取的文本，我们将其分成更小的、重叠的块以提高检索精度。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [],
   "source": [
    "def chunk_text(text, n, overlap):\n",
    "    \"\"\"\n",
    "    将给定的文本分割成n个字符的段，并带有重叠。\n",
    "\n",
    "    参数:\n",
    "    text (str): 要分块的文本。\n",
    "    n (int): 每个块中的字符数。\n",
    "    overlap (int): 块之间重叠的字符数。\n",
    "\n",
    "    返回:\n",
    "    List[str]: 文本块的列表。\n",
    "    \"\"\"\n",
    "    chunks = []  # 初始化一个空列表来存储块\n",
    "    \n",
    "    # 以(n - overlap)的步长循环遍历文本\n",
    "    for i in range(0, len(text), n - overlap):\n",
    "        # 将从索引i到i + n的文本块添加到chunks列表中\n",
    "        chunks.append(text[i:i + n])\n",
    "\n",
    "    return chunks  # 返回文本块列表"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 设置OpenAI API客户端\n",
    "我们初始化OpenAI客户端来生成嵌入和响应。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 使用base URL和API key初始化OpenAI客户端\n",
    "client = OpenAI(\n",
    "    base_url=\"http://localhost:11434/v1/\",\n",
    "    api_key=\"ollama\"  # Ollama不需要真实的API密钥，但客户端需要一个值\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 为文本块生成问题\n",
    "这是相对于简单RAG的关键增强。我们生成每个文本块可以回答的问题。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_questions(text_chunk, num_questions=5, model=\"qwen2.5:7b\"):\n",
    "    \"\"\"\n",
    "    生成可以从给定文本块回答的相关问题。\n",
    "\n",
    "    参数:\n",
    "    text_chunk (str): 要生成问题的文本块。\n",
    "    num_questions (int): 要生成的问题数量。\n",
    "    model (str): 用于问题生成的模型。\n",
    "\n",
    "    返回:\n",
    "    List[str]: 生成的问题列表。\n",
    "    \"\"\"\n",
    "    # 定义系统提示来指导AI的行为\n",
    "    system_prompt = \"你是一个从文本中生成相关问题的专家。创建简洁的问题，这些问题只能使用提供的文本来回答。专注于关键信息和概念。\"\n",
    "    \n",
    "    # 定义用户提示，包含文本块和要生成的问题数量\n",
    "    user_prompt = f\"\"\"\n",
    "    基于以下文本，生成{num_questions}个不同的问题，这些问题只能使用该文本回答：\n",
    "\n",
    "    {text_chunk}\n",
    "    \n",
    "    将你的回答格式化为编号的问题列表，不要有额外的文本。\n",
    "    \"\"\"\n",
    "    \n",
    "    # 使用OpenAI API生成问题\n",
    "    response = client.chat.completions.create(\n",
    "        model=model,\n",
    "        temperature=0.7,\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_prompt}\n",
    "        ]\n",
    "    )\n",
    "    \n",
    "    # 从响应中提取和清理问题\n",
    "    questions_text = response.choices[0].message.content.strip()\n",
    "    questions = []\n",
    "    \n",
    "    # 使用正则表达式模式匹配提取问题\n",
    "    for line in questions_text.split('\\n'):\n",
    "        # 移除编号并清理空白字符\n",
    "        cleaned_line = re.sub(r'^\\d+\\.\\s*', '', line.strip())\n",
    "        if cleaned_line and cleaned_line.endswith('?'):\n",
    "            questions.append(cleaned_line)\n",
    "    \n",
    "    return questions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 为文本创建嵌入\n",
    "我们为文本块和生成的问题都生成嵌入。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_embeddings(text, model=\"bge-m3:latest\"):\n",
    "    \"\"\"\n",
    "    使用指定的OpenAI模型为给定文本创建嵌入。\n",
    "\n",
    "    参数:\n",
    "    text (str): 要创建嵌入的输入文本。\n",
    "    model (str): 用于创建嵌入的模型。\n",
    "\n",
    "    返回:\n",
    "    dict: 包含嵌入的OpenAI API响应。\n",
    "    \"\"\"\n",
    "    # 使用指定模型为输入文本创建嵌入\n",
    "    response = client.embeddings.create(\n",
    "        model=model,\n",
    "        input=text\n",
    "    )\n",
    "\n",
    "    return response  # 返回包含嵌入的响应"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 构建简单的向量存储\n",
    "我们将使用NumPy实现一个简单的向量存储。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [],
   "source": [
    "class SimpleVectorStore:\n",
    "    \"\"\"\n",
    "    使用NumPy的简单向量存储实现。\n",
    "    \"\"\"\n",
    "    def __init__(self):\n",
    "        \"\"\"\n",
    "        初始化向量存储。\n",
    "        \"\"\"\n",
    "        self.vectors = []\n",
    "        self.texts = []\n",
    "        self.metadata = []\n",
    "    \n",
    "    def add_item(self, text, embedding, metadata=None):\n",
    "        \"\"\"\n",
    "        向向量存储添加一个项目。\n",
    "\n",
    "        参数:\n",
    "        text (str): 原始文本。\n",
    "        embedding (List[float]): 嵌入向量。\n",
    "        metadata (dict, optional): 额外的元数据。\n",
    "        \"\"\"\n",
    "        self.vectors.append(np.array(embedding))\n",
    "        self.texts.append(text)\n",
    "        self.metadata.append(metadata or {})\n",
    "    \n",
    "    def similarity_search(self, query_embedding, k=5):\n",
    "        \"\"\"\n",
    "        找到与查询嵌入最相似的项目。\n",
    "\n",
    "        参数:\n",
    "        query_embedding (List[float]): 查询嵌入向量。\n",
    "        k (int): 要返回的结果数量。\n",
    "\n",
    "        返回:\n",
    "        List[Dict]: 前k个最相似的项目及其文本和元数据。\n",
    "        \"\"\"\n",
    "        if not self.vectors:\n",
    "            return []\n",
    "        \n",
    "        # 将查询嵌入转换为numpy数组\n",
    "        query_vector = np.array(query_embedding)\n",
    "        \n",
    "        # 使用余弦相似度计算相似度\n",
    "        similarities = []\n",
    "        for i, vector in enumerate(self.vectors):\n",
    "            similarity = np.dot(query_vector, vector) / (np.linalg.norm(query_vector) * np.linalg.norm(vector))\n",
    "            similarities.append((i, similarity))\n",
    "        \n",
    "        # 按相似度排序（降序）\n",
    "        similarities.sort(key=lambda x: x[1], reverse=True)\n",
    "        \n",
    "        # 返回前k个结果\n",
    "        results = []\n",
    "        for i in range(min(k, len(similarities))):\n",
    "            idx, score = similarities[i]\n",
    "            results.append({\n",
    "                \"text\": self.texts[idx],\n",
    "                \"metadata\": self.metadata[idx],\n",
    "                \"similarity\": score\n",
    "            })\n",
    "        \n",
    "        return results"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 使用问题增强处理文档\n",
    "现在我们将把所有内容整合在一起来处理文档、生成问题并构建我们的增强向量存储。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [],
   "source": [
    "def process_document(pdf_path, chunk_size=1000, chunk_overlap=200, questions_per_chunk=5):\n",
    "    \"\"\"\n",
    "    使用问题增强处理文档。\n",
    "\n",
    "    参数:\n",
    "    pdf_path (str): PDF文件的路径。\n",
    "    chunk_size (int): 每个文本块的字符大小。\n",
    "    chunk_overlap (int): 块之间重叠的字符数。\n",
    "    questions_per_chunk (int): 每个块生成的问题数量。\n",
    "\n",
    "    返回:\n",
    "    Tuple[List[str], SimpleVectorStore]: 文本块和向量存储。\n",
    "    \"\"\"\n",
    "    print(\"从PDF提取文本...\")\n",
    "    extracted_text = extract_text_from_pdf(pdf_path)\n",
    "    \n",
    "    print(\"分割文本...\")\n",
    "    text_chunks = chunk_text(extracted_text, chunk_size, chunk_overlap)\n",
    "    print(f\"创建了 {len(text_chunks)} 个文本块\")\n",
    "    \n",
    "    vector_store = SimpleVectorStore()\n",
    "    \n",
    "    print(\"处理块并生成问题...\")\n",
    "    for i, chunk in enumerate(tqdm(text_chunks, desc=\"处理块\")):\n",
    "        # 为块本身创建嵌入\n",
    "        chunk_embedding_response = create_embeddings(chunk)\n",
    "        chunk_embedding = chunk_embedding_response.data[0].embedding\n",
    "        \n",
    "        # 将块添加到向量存储\n",
    "        vector_store.add_item(\n",
    "            text=chunk,\n",
    "            embedding=chunk_embedding,\n",
    "            metadata={\"type\": \"chunk\", \"index\": i}\n",
    "        )\n",
    "        \n",
    "        # 为此块生成问题\n",
    "        questions = generate_questions(chunk, num_questions=questions_per_chunk)\n",
    "        \n",
    "        # 为每个问题创建嵌入并添加到向量存储\n",
    "        for j, question in enumerate(questions):\n",
    "            question_embedding_response = create_embeddings(question)\n",
    "            question_embedding = question_embedding_response.data[0].embedding\n",
    "            \n",
    "            # 将问题添加到向量存储\n",
    "            vector_store.add_item(\n",
    "                text=question,\n",
    "                embedding=question_embedding,\n",
    "                metadata={\"type\": \"question\", \"chunk_index\": i, \"original_chunk\": chunk}\n",
    "            )\n",
    "    \n",
    "    return text_chunks, vector_store"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 提取和处理文档"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "从PDF提取文本...\n",
      "分割文本...\n",
      "创建了 42 个文本块\n",
      "处理块并生成问题...\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "处理块: 100%|██████████| 42/42 [03:04<00:00,  4.38s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "向量存储包含 42 个项目\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "# 定义PDF文件路径\n",
    "pdf_path = \"data/AI_Information.pdf\"\n",
    "\n",
    "# 处理文档（提取文本、创建块、生成问题、构建向量存储）\n",
    "text_chunks, vector_store = process_document(\n",
    "    pdf_path, \n",
    "    chunk_size=1000, \n",
    "    chunk_overlap=200, \n",
    "    questions_per_chunk=3\n",
    ")\n",
    "\n",
    "print(f\"向量存储包含 {len(vector_store.texts)} 个项目\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 执行语义搜索\n",
    "我们实现了一个类似于简单RAG实现的语义搜索函数，但适配了我们的增强向量存储。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [],
   "source": [
    "def semantic_search(query, vector_store, k=5):\n",
    "    \"\"\"\n",
    "    使用查询和向量存储执行语义搜索。\n",
    "\n",
    "    参数:\n",
    "    query (str): 搜索查询。\n",
    "    vector_store (SimpleVectorStore): 要搜索的向量存储。\n",
    "    k (int): 要返回的结果数量。\n",
    "\n",
    "    返回:\n",
    "    List[Dict]: 前k个最相关的项目。\n",
    "    \"\"\"\n",
    "    # 为查询创建嵌入\n",
    "    query_embedding_response = create_embeddings(query)\n",
    "    query_embedding = query_embedding_response.data[0].embedding\n",
    "    \n",
    "    # 搜索向量存储\n",
    "    results = vector_store.similarity_search(query_embedding, k=k)\n",
    "    \n",
    "    return results"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 在增强向量存储上运行查询"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "查询: What is 'Explainable AI' and why is it considered important?\n",
      "\n",
      "搜索结果:\n",
      "\n",
      "相关文档块:\n",
      "上下文 1 (相似度: 0.6999):\n",
      "systems. Explainable AI (XAI) \n",
      "techniques aim to make AI decisions more understandable, enabling users to assess their \n",
      "fairness and accuracy. \n",
      "Privacy and Data Protection \n",
      "AI systems often rely on large amounts of data, raising concerns about privacy and data \n",
      "protection. Ensuring responsible data ...\n",
      "=====================================\n",
      "上下文 2 (相似度: 0.6213):\n",
      "control, accountability, and the \n",
      "potential for unintended consequences. Establishing clear guidelines and ethical frameworks for \n",
      "AI development and deployment is crucial. \n",
      "Weaponization of AI \n",
      "The potential use of AI in autonomous weapons systems raises significant ethical and security \n",
      "concerns. ...\n",
      "=====================================\n",
      "上下文 3 (相似度: 0.6012):\n",
      " incidents. \n",
      "Environmental Monitoring \n",
      "AI-powered environmental monitoring systems track air and water quality, detect pollution, and \n",
      "support environmental protection efforts. These systems provide real-time data, identify \n",
      "pollution sources, and inform environmental policies. \n",
      "Chapter 15: The Futu...\n",
      "=====================================\n",
      "上下文 4 (相似度: 0.5848):\n",
      "inability \n",
      "Many AI systems, particularly deep learning models, are \"black boxes,\" making it difficult to \n",
      "understand how they arrive at their decisions. Enhancing transparency and explainability is \n",
      "crucial for building trust and accountability. \n",
      " \n",
      " \n",
      "Privacy and Security \n",
      "AI systems often rely on la...\n",
      "=====================================\n",
      "上下文 5 (相似度: 0.5766):\n",
      "nt aligns with societal values. Education and awareness campaigns inform the public \n",
      "about AI, its impacts, and its potential. \n",
      "Chapter 19: AI and Ethics \n",
      "Principles of Ethical AI \n",
      "Ethical AI principles guide the development and deployment of AI systems to ensure they are fair, \n",
      "transparent, account...\n",
      "=====================================\n",
      "\n",
      "匹配的问题:\n"
     ]
    }
   ],
   "source": [
    "# 从JSON文件加载验证数据\n",
    "with open('data/val.json') as f:\n",
    "    data = json.load(f)\n",
    "\n",
    "# 从验证数据中提取第一个查询\n",
    "query = data[0]['question']\n",
    "\n",
    "# 执行语义搜索找到相关内容\n",
    "search_results = semantic_search(query, vector_store, k=5)\n",
    "\n",
    "print(\"查询:\", query)\n",
    "print(\"\\n搜索结果:\")\n",
    "\n",
    "# 按类型组织结果\n",
    "chunk_results = []\n",
    "question_results = []\n",
    "\n",
    "for result in search_results:\n",
    "    if result[\"metadata\"][\"type\"] == \"chunk\":\n",
    "        chunk_results.append(result)\n",
    "    else:\n",
    "        question_results.append(result)\n",
    "\n",
    "# 首先打印块结果\n",
    "print(\"\\n相关文档块:\")\n",
    "for i, result in enumerate(chunk_results):\n",
    "    print(f\"上下文 {i + 1} (相似度: {result['similarity']:.4f}):\")\n",
    "    print(result[\"text\"][:300] + \"...\")\n",
    "    print(\"=====================================\")\n",
    "\n",
    "# 然后打印问题匹配\n",
    "print(\"\\n匹配的问题:\")\n",
    "for i, result in enumerate(question_results):\n",
    "    print(f\"问题 {i + 1} (相似度: {result['similarity']:.4f}):\")\n",
    "    print(result[\"text\"])\n",
    "    chunk_idx = result[\"metadata\"][\"chunk_index\"]\n",
    "    print(f\"来自块 {chunk_idx}\")\n",
    "    print(\"=====================================\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 为响应生成上下文\n",
    "现在我们通过组合来自相关块和问题的信息来准备上下文。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [],
   "source": [
    "def prepare_context(search_results):\n",
    "    \"\"\"\n",
    "    从搜索结果准备统一的上下文以用于响应生成。\n",
    "\n",
    "    参数:\n",
    "    search_results (List[Dict]): 语义搜索的结果。\n",
    "\n",
    "    返回:\n",
    "    str: 组合的上下文字符串。\n",
    "    \"\"\"\n",
    "    # 提取结果中引用的唯一块\n",
    "    chunk_indices = set()\n",
    "    context_chunks = []\n",
    "    \n",
    "    # 首先添加直接块匹配\n",
    "    for result in search_results:\n",
    "        if result[\"metadata\"][\"type\"] == \"chunk\":\n",
    "            chunk_indices.add(result[\"metadata\"][\"index\"])\n",
    "            context_chunks.append(f\"块 {result['metadata']['index']}:\\n{result['text']}\")\n",
    "    \n",
    "    # 然后添加问题引用的块\n",
    "    for result in search_results:\n",
    "        if result[\"metadata\"][\"type\"] == \"question\":\n",
    "            chunk_idx = result[\"metadata\"][\"chunk_index\"]\n",
    "            if chunk_idx not in chunk_indices:\n",
    "                chunk_indices.add(chunk_idx)\n",
    "                context_chunks.append(f\"块 {chunk_idx} (由问题 '{result['text']}' 引用):\\n{result['metadata']['original_chunk']}\")\n",
    "    \n",
    "    # 组合所有上下文块\n",
    "    full_context = \"\\n\\n\".join(context_chunks)\n",
    "    return full_context"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 基于检索到的块生成响应\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_response(query, context, model=\"qwen2.5:7b\"):\n",
    "    \"\"\"\n",
    "    基于查询和上下文生成响应。\n",
    "\n",
    "    参数:\n",
    "    query (str): 用户的问题。\n",
    "    context (str): 从向量存储检索到的上下文信息。\n",
    "    model (str): 用于响应生成的模型。\n",
    "\n",
    "    返回:\n",
    "    str: 生成的响应。\n",
    "    \"\"\"\n",
    "    system_prompt = \"你是一个严格基于给定上下文回答问题的AI助手。如果答案不能直接从提供的上下文中得出，请回答：'我没有足够的信息来回答这个问题。'\"\n",
    "    \n",
    "    user_prompt = f\"\"\"\n",
    "        上下文:\n",
    "        {context}\n",
    "\n",
    "        问题: {query}\n",
    "\n",
    "        请仅基于上述提供的上下文回答问题。要简洁准确。\n",
    "    \"\"\"\n",
    "    \n",
    "    response = client.chat.completions.create(\n",
    "        model=model,\n",
    "        temperature=0,\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_prompt}\n",
    "        ]\n",
    "    )\n",
    "    \n",
    "    return response.choices[0].message.content"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 生成和显示响应"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "查询: What is 'Explainable AI' and why is it considered important?\n",
      "\n",
      "响应:\n",
      "<think>\n",
      "Okay, let's tackle this question about Explainable AI (XAI). The user wants to know what XAI is and why it's important, based solely on the provided context.\n",
      "\n",
      "First, I'll scan through the context to find mentions of XAI. Looking at the blocks, I see several references. Block 37 mentions XAI techniques aim to make AI decisions more understandable, enabling users to assess fairness and accuracy. Block 10 also talks about XAI being developed to provide insights into how AI models make decisions, enhancing trust and accountability. Block 29 repeats that XAI aims to make systems more transparent and understandable, with research focusing on explaining decisions to improve trust and accountability. Block 36 adds that XAI techniques make AI decisions more understandable, allowing users to assess fairness and accuracy.\n",
      "\n",
      "So, putting that together, XAI is about making AI decisions transparent and understandable. The importance points mentioned are building trust, ensuring accountability, assessing fairness and accuracy, and improving reliability. The context also ties it to ethical considerations like fairness and non-discrimination, which are part of ethical AI principles in Block 36.\n",
      "\n",
      "I need to make sure I don't add any external knowledge. The answer should be concise, using the exact terms from the context. The key points are the definition and the reasons: trust, accountability, fairness, accuracy, and transparency. Also, mention that it's part of ethical AI principles. Let me check if all these points are covered in the context. Yes, Block 37 and 36 have the main points. Block 10 and 29 reinforce the importance for trust and accountability. So the answer should combine these elements without extra info.\n",
      "</think>\n",
      "\n",
      "Explainable AI (XAI) refers to techniques that make AI decisions more transparent and understandable, enabling users to assess their fairness, accuracy, and reliability. It is considered important because transparency and explainability build trust in AI systems, enhance accountability, and ensure ethical behavior by allowing users to evaluate whether AI outcomes align with societal values and principles like non-discrimination and fairness.\n"
     ]
    }
   ],
   "source": [
    "# 从搜索结果准备上下文\n",
    "context = prepare_context(search_results)\n",
    "\n",
    "# 生成响应\n",
    "response_text = generate_response(query, context)\n",
    "\n",
    "print(\"\\n查询:\", query)\n",
    "print(\"\\n响应:\")\n",
    "print(response_text)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 评估AI响应\n",
    "我们将AI响应与期望答案进行比较并分配分数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [],
   "source": [
    "def evaluate_response(query, response, reference_answer, model=\"qwen2.5:7b\"):\n",
    "    \"\"\"\n",
    "    根据参考答案评估AI响应。\n",
    "    \n",
    "    参数:\n",
    "    query (str): 用户的问题。\n",
    "    response (str): AI生成的响应。\n",
    "    reference_answer (str): 参考/理想答案。\n",
    "    model (str): 用于评估的模型。\n",
    "    \n",
    "    返回:\n",
    "    str: 评估反馈。\n",
    "    \"\"\"\n",
    "    # 定义评估系统的系统提示\n",
    "    evaluate_system_prompt = \"\"\"你是一个智能评估系统，负责评估AI响应。\n",
    "            \n",
    "        将AI助手的响应与真实/参考答案进行比较，并基于以下标准进行评估：\n",
    "        1. 事实正确性 - 响应是否包含准确信息？\n",
    "        2. 完整性 - 是否涵盖了参考答案中的所有重要方面？\n",
    "        3. 相关性 - 是否直接回答了问题？\n",
    "\n",
    "        分配0到1的分数：\n",
    "        - 1.0: 内容和含义完全匹配\n",
    "        - 0.8: 非常好，有轻微遗漏/差异\n",
    "        - 0.6: 良好，涵盖主要点但缺少一些细节\n",
    "        - 0.4: 部分答案，有重大遗漏\n",
    "        - 0.2: 最少的相关信息\n",
    "        - 0.0: 错误或不相关\n",
    "\n",
    "        请提供您的分数和理由。\n",
    "    \"\"\"\n",
    "            \n",
    "    # 创建评估提示\n",
    "    evaluation_prompt = f\"\"\"\n",
    "        用户查询: {query}\n",
    "\n",
    "        AI响应:\n",
    "        {response}\n",
    "\n",
    "        参考答案:\n",
    "        {reference_answer}\n",
    "\n",
    "        请根据参考答案评估AI响应。\n",
    "    \"\"\"\n",
    "    \n",
    "    # 生成评估\n",
    "    eval_response = client.chat.completions.create(\n",
    "        model=model,\n",
    "        temperature=0,\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": evaluate_system_prompt},\n",
    "            {\"role\": \"user\", \"content\": evaluation_prompt}\n",
    "        ]\n",
    "    )\n",
    "    \n",
    "    return eval_response.choices[0].message.content"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 运行评估"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "评估:\n",
      "Score: 0.8\n",
      "\n",
      "Justification:\n",
      "1. **Factual Correctness**: The AI response accurately defines XAI as techniques that make AI decisions more transparent and understandable. It correctly identifies the importance of transparency and explainability in building trust, accountability, and ensuring fairness. However, it slightly overemphasizes the role of ethical principles like non-discrimination and fairness compared to the reference answer.\n",
      "   \n",
      "2. **Completeness**: The response covers most important aspects from the reference answer but includes some additional information about ethical considerations that were not explicitly mentioned in the reference (e.g., \"allowing users to evaluate whether AI outcomes align with societal values and principles like non-discrimination and fairness\"). This adds value but also introduces a slight deviation.\n",
      "\n",
      "3. **Relevance**: The response directly addresses the question by defining XAI and explaining its importance, which matches the reference answer well.\n",
      "\n",
      "Given these points, the response is very good (0.8) because it accurately captures the essence of XAI and its importance while including some additional relevant information that slightly deviates from the exact wording in the reference answer.\n"
     ]
    }
   ],
   "source": [
    "# 从验证数据获取参考答案\n",
    "reference_answer = data[0]['ideal_answer']\n",
    "\n",
    "# 评估响应\n",
    "evaluation = evaluate_response(query, response_text, reference_answer)\n",
    "\n",
    "print(\"\\n评估:\")\n",
    "print(evaluation)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 从PDF文件中提取和分块文本\n",
    "现在，我们加载PDF，提取文本，并将其分割成块。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "文本块数量: 42\n",
      "\n",
      "第一个文本块:\n",
      "Understanding Artificial Intelligence \n",
      "Chapter 1: Introduction to Artificial Intelligence \n",
      "Artificial intelligence (AI) refers to the ability of a digital computer or computer-controlled robot \n",
      "to perform tasks commonly associated with intelligent beings. The term is frequently applied to \n",
      "the project of developing systems endowed with the intellectual processes characteristic of \n",
      "humans, such as the ability to reason, discover meaning, generalize, or learn from past \n",
      "experience. Over the past few decades, advancements in computing power and data availability \n",
      "have significantly accelerated the development and deployment of AI. \n",
      "Historical Context \n",
      "The idea of artificial intelligence has existed for centuries, often depicted in myths and fiction. \n",
      "However, the formal field of AI research began in the mid-20th century. The Dartmouth Workshop \n",
      "in 1956 is widely considered the birthplace of AI. Early AI research focused on problem-solving \n",
      "and symbolic methods. The 1980s saw a rise in exp\n"
     ]
    }
   ],
   "source": [
    "# 定义PDF文件路径\n",
    "pdf_path = \"data/AI_Information.pdf\"\n",
    "\n",
    "# 从PDF文件提取文本\n",
    "extracted_text = extract_text_from_pdf(pdf_path)\n",
    "\n",
    "# 将提取的文本分块为1000个字符的段，重叠200个字符\n",
    "text_chunks = chunk_text(extracted_text, 1000, 200)\n",
    "\n",
    "# 打印创建的文本块数量\n",
    "print(\"文本块数量:\", len(text_chunks))\n",
    "\n",
    "# 打印第一个文本块\n",
    "print(\"\\n第一个文本块:\")\n",
    "print(text_chunks[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 为文本块创建嵌入\n",
    "嵌入将文本转换为数值向量，这允许进行高效的相似性搜索。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_embeddings(text, model=\"bge-m3:latest\"):\n",
    "    \"\"\"\n",
    "    Creates embeddings for the given text using the specified OpenAI model.\n",
    "\n",
    "    Args:\n",
    "    text (str): The input text for which embeddings are to be created.\n",
    "    model (str): The model to be used for creating embeddings. Default is \"BAAI/bge-en-icl\".\n",
    "\n",
    "    Returns:\n",
    "    dict: The response from the OpenAI API containing the embeddings.\n",
    "    \"\"\"\n",
    "    # Create embeddings for the input text using the specified model\n",
    "    response = client.embeddings.create(\n",
    "        model=model,\n",
    "        input=text\n",
    "    )\n",
    "\n",
    "    return response  # Return the response containing the embeddings\n",
    "\n",
    "# Create embeddings for the text chunks\n",
    "response = create_embeddings(text_chunks)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 执行语义搜索\n",
    "我们实现余弦相似度来找到用户查询最相关的文本块。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [],
   "source": [
    "def cosine_similarity(vec1, vec2):\n",
    "    \"\"\"\n",
    "    计算两个向量之间的余弦相似度。\n",
    "\n",
    "    参数:\n",
    "    vec1 (np.ndarray): 第一个向量。\n",
    "    vec2 (np.ndarray): 第二个向量。\n",
    "\n",
    "    返回:\n",
    "    float: 两个向量之间的余弦相似度。\n",
    "    \"\"\"\n",
    "    # 计算两个向量的点积并除以它们的范数的乘积\n",
    "    return np.dot(vec1, vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [],
   "source": [
    "def semantic_search(query, text_chunks, embeddings, k=5):\n",
    "    \"\"\"\n",
    "    使用给定的查询和嵌入对文本块执行语义搜索。\n",
    "\n",
    "    参数:\n",
    "    query (str): 语义搜索的查询。\n",
    "    text_chunks (List[str]): 要搜索的文本块列表。\n",
    "    embeddings (List[dict]): 文本块的嵌入列表。\n",
    "    k (int): 要返回的顶部相关文本块数量。默认为5。\n",
    "\n",
    "    返回:\n",
    "    List[str]: 基于查询的前k个最相关文本块列表。\n",
    "    \"\"\"\n",
    "    # 为查询创建嵌入\n",
    "    query_embedding = create_embeddings(query).data[0].embedding\n",
    "    similarity_scores = []  # 初始化列表来存储相似度分数\n",
    "\n",
    "    # 计算查询嵌入与每个文本块嵌入之间的相似度分数\n",
    "    for i, chunk_embedding in enumerate(embeddings):\n",
    "        similarity_score = cosine_similarity(np.array(query_embedding), np.array(chunk_embedding.embedding))\n",
    "        similarity_scores.append((i, similarity_score))  # 添加索引和相似度分数\n",
    "\n",
    "    # 按相似度分数降序排序\n",
    "    similarity_scores.sort(key=lambda x: x[1], reverse=True)\n",
    "    # 获取前k个最相似文本块的索引\n",
    "    top_indices = [index for index, _ in similarity_scores[:k]]\n",
    "    # 返回前k个最相关的文本块\n",
    "    return [text_chunks[index] for index in top_indices]\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 在提取的块上运行查询"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Query: What is 'Explainable AI' and why is it considered important?\n",
      "Context 1:\n",
      "systems. Explainable AI (XAI) \n",
      "techniques aim to make AI decisions more understandable, enabling users to assess their \n",
      "fairness and accuracy. \n",
      "Privacy and Data Protection \n",
      "AI systems often rely on large amounts of data, raising concerns about privacy and data \n",
      "protection. Ensuring responsible data handling, implementing privacy-preserving techniques, \n",
      "and complying with data protection regulations are crucial. \n",
      "Accountability and Responsibility \n",
      "Establishing accountability and responsibility for AI systems is essential for addressing potential \n",
      "harms and ensuring ethical behavior. This includes defining roles and responsibilities for \n",
      "developers, deployers, and users of AI systems. \n",
      "Chapter 20: Building Trust in AI \n",
      "Transparency and Explainability \n",
      "Transparency and explainability are key to building trust in AI. Making AI systems understandable \n",
      "and providing insights into their decision-making processes helps users assess their reliability \n",
      "and fairness. \n",
      "Robustness and Reliability \n",
      "\n",
      "=====================================\n",
      "Context 2:\n",
      "control, accountability, and the \n",
      "potential for unintended consequences. Establishing clear guidelines and ethical frameworks for \n",
      "AI development and deployment is crucial. \n",
      "Weaponization of AI \n",
      "The potential use of AI in autonomous weapons systems raises significant ethical and security \n",
      "concerns. International discussions and regulations are needed to address the risks associated \n",
      "with AI-powered weapons. \n",
      "Chapter 5: The Future of Artificial Intelligence \n",
      "The future of AI is likely to be characterized by continued advancements and broader adoption \n",
      "across various domains. Key trends and areas of development include: \n",
      "Explainable AI (XAI) \n",
      "Explainable AI (XAI) aims to make AI systems more transparent and understandable. XAI \n",
      "techniques are being developed to provide insights into how AI models make decisions, \n",
      "enhancing trust and accountability. \n",
      "AI at the Edge \n",
      "AI at the edge involves processing data locally on devices, rather than relying on cloud-based \n",
      "servers. This approach reduc\n",
      "=====================================\n"
     ]
    }
   ],
   "source": [
    "# Load the validation data from a JSON file\n",
    "with open('data/val.json') as f:\n",
    "    data = json.load(f)\n",
    "\n",
    "# Extract the first query from the validation data\n",
    "query = data[0]['question']\n",
    "\n",
    "# Perform semantic search to find the top 2 most relevant chunks for the query\n",
    "top_chunks = semantic_search(query, text_chunks, response.data, k=2)\n",
    "\n",
    "# Print the query\n",
    "print(\"Query:\", query)\n",
    "\n",
    "# Print the top 2 most relevant text chunks\n",
    "for i, chunk in enumerate(top_chunks):\n",
    "    print(f\"Context {i + 1}:\\n{chunk}\\n=====================================\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Generating a Response Based on the Retrieved Chunks"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define the system prompt for the AI assistant\n",
    "system_prompt = \"You are an AI assistant that answers strictly based on the given context. If the answer cannot be derived directly from the provided context, respond with: 'I do not have enough information to answer that.'\"\n",
    "\n",
    "def generate_response(system_prompt, user_message, model=\"qwen2.5:7b\"):\n",
    "    \"\"\"\n",
    "    Generate a response from the AI model based on a system prompt and user message.\n",
    "\n",
    "    Args:\n",
    "    system_prompt (str): System prompt that guides the AI's behavior.\n",
    "    user_message (str): The user's message or query.\n",
    "    model (str): Model used to generate the response. Defaults to \"qwen2.5:7b\".\n",
    "\n",
    "    Returns:\n",
    "    dict: The response from the AI model.\n",
    "    \"\"\"\n",
    "    response = client.chat.completions.create(\n",
    "        model=model,\n",
    "        temperature=0,\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_message}\n",
    "        ]\n",
    "    )\n",
    "    return response\n",
    "\n",
    "# Create the user prompt from the top chunks\n",
    "user_prompt = \"\\n\".join([f\"Context {i + 1}:\\n{chunk}\\n=====================================\\n\" for i, chunk in enumerate(top_chunks)])\n",
    "user_prompt = f\"{user_prompt}\\nQuestion: {query}\"\n",
    "\n",
    "# Generate the AI response\n",
    "ai_response = generate_response(system_prompt, user_prompt)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Evaluating the AI Response\n",
    "We compare the AI response against the expected answer and assign a score."
   ]
  },
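  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The judge model replies in free-form text, so a downstream pipeline typically has to pull the numeric score back out. The helper below is a minimal sketch, assuming the reply contains the score as a standalone 0, 0.5, or 1 (the rubric used here); it is illustrative only and not called by the evaluation cell."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def extract_score(judge_reply):\n",
    "    \"\"\"Return the first standalone 0, 0.5, or 1 in the judge's reply, or None.\"\"\"\n",
    "    match = re.search(r\"\\b(?:0\\.5|[01])\\b\", judge_reply)\n",
    "    return float(match.group()) if match else None\n",
    "\n",
    "print(extract_score(\"Score: 1\\n\\nThe AI response accurately captures...\"))  # 1.0\n",
    "print(extract_score(\"I would assign 0.5 because the answer is partial.\"))  # 0.5"
   ]
  },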
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Score: 1\n",
      "\n",
      "The AI response accurately captures the essence of Explainable AI (XAI) and its importance, closely aligning with the true response provided. It mentions transparency, understanding, insights into decision-making processes, trust, accountability, fairness, and addressing ethical concerns—elements that are all present in the true response as well. Therefore, it merits a score of 1.\n"
     ]
    }
   ],
   "source": [
    "# Define the system prompt for the evaluation system\n",
    "evaluate_system_prompt = \"You are an intelligent evaluation system responsible for assessing the AI assistant's responses. If the AI assistant's response is very close to the true response, assign a score of 1. If the response is incorrect or unsatisfactory relative to the true response, assign a score of 0. If the response is partially aligned with the true response, assign a score of 0.5.\"\n",
    "\n",
    "# Create the evaluation prompt by combining the user query, AI response, true response, and evaluation system prompt\n",
    "evaluation_prompt = f\"User query: {query}\\nAI response:\\n{ai_response.choices[0].message.content}\\nTrue response: {data[0]['ideal_answer']}\\n{evaluate_system_prompt}\"\n",
    "\n",
    "# Generate the evaluation response using the evaluation system prompt and evaluation prompt\n",
    "evaluation_response = generate_response(evaluate_system_prompt, evaluation_prompt)\n",
    "\n",
    "# Print the evaluation response\n",
    "print(evaluation_response.choices[0].message.content)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "rag",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
