{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "markdown"
    }
   },
   "source": [
    "## RAG中的上下文增强检索\n",
    "检索增强生成（RAG）通过从外部来源检索相关知识来增强AI响应。传统的检索方法返回孤立的文本块，这可能导致不完整的答案。\n",
    "\n",
    "为了解决这个问题，我们引入了上下文增强检索，确保检索到的信息包含相邻的文本块，以获得更好的连贯性。\n",
    "\n",
    "本笔记本的步骤：\n",
    "- 数据摄取：从PDF中提取文本。\n",
    "- 重叠上下文分块：将文本分割成重叠的块以保持上下文。\n",
    "- 嵌入创建：将文本块转换为数值表示。\n",
    "- 上下文感知检索：检索相关块及其邻居块以获得更好的完整性。\n",
    "- 响应生成：使用语言模型基于检索到的上下文生成响应。\n",
    "- 评估：评估模型响应的准确性。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 设置环境\n",
    "我们首先导入必要的库。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "import fitz\n",
    "import os\n",
    "import numpy as np\n",
    "import json\n",
    "from openai import OpenAI"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 从PDF文件中提取文本\n",
    "要实现RAG，我们首先需要一个文本数据源。在这种情况下，我们使用PyMuPDF库从PDF文件中提取文本。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "def extract_text_from_pdf(pdf_path):\n",
    "    \"\"\"\n",
    "    从PDF文件中提取文本并打印前`num_chars`个字符。\n",
    "\n",
    "    参数:\n",
    "    pdf_path (str): PDF文件的路径。\n",
    "\n",
    "    返回:\n",
    "    str: 从PDF中提取的文本。\n",
    "    \"\"\"\n",
    "    # 打开PDF文件\n",
    "    mypdf = fitz.open(pdf_path)\n",
    "    all_text = \"\"  # 初始化一个空字符串来存储提取的文本\n",
    "\n",
    "    # 遍历PDF中的每一页\n",
    "    for page_num in range(mypdf.page_count):\n",
    "        page = mypdf[page_num]  # 获取页面\n",
    "        text = page.get_text(\"text\")  # 从页面提取文本\n",
    "        all_text += text  # 将提取的文本追加到all_text字符串中\n",
    "\n",
    "    return all_text  # 返回提取的文本"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 对提取的文本进行分块\n",
    "一旦我们获得了提取的文本，我们将其分成更小的重叠块以提高检索准确性。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "def chunk_text(text, n, overlap):\n",
    "    \"\"\"\n",
    "    将给定的文本分割成具有重叠的n个字符的段落。\n",
    "\n",
    "    参数:\n",
    "    text (str): 要分块的文本。\n",
    "    n (int): 每个块中的字符数。\n",
    "    overlap (int): 块之间重叠的字符数。\n",
    "\n",
    "    返回:\n",
    "    List[str]: 文本块的列表。\n",
    "    \"\"\"\n",
    "    chunks = []  # 初始化一个空列表来存储块\n",
    "    \n",
    "    # 以(n - overlap)为步长循环遍历文本\n",
    "    for i in range(0, len(text), n - overlap):\n",
    "        # 将从索引i到i + n的文本块追加到chunks列表中\n",
    "        chunks.append(text[i:i + n])\n",
    "\n",
    "    return chunks  # 返回文本块列表"
   ]
  },
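  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, the same sliding-window logic can be traced by hand on a tiny string (a hypothetical example, not part of the pipeline): with `n=4` and `overlap=2`, the window advances in steps of `n - overlap = 2` characters, so consecutive chunks share 2 characters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical sanity check of the sliding-window chunking logic\n",
    "sample = \"abcdefghij\"\n",
    "n, overlap = 4, 2\n",
    "# Same stride as chunk_text: advance by (n - overlap) characters per step\n",
    "demo_chunks = [sample[i:i + n] for i in range(0, len(sample), n - overlap)]\n",
    "print(demo_chunks)  # ['abcd', 'cdef', 'efgh', 'ghij', 'ij']"
   ]
  },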
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 设置OpenAI API客户端\n",
    "我们初始化OpenAI客户端来生成嵌入和响应。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 使用基础URL和API密钥初始化OpenAI客户端\n",
    "client = OpenAI(\n",
    "    base_url=\"http://localhost:11434/v1/\",\n",
    "    api_key=\"ollama\"  # Ollama不需要真实的API密钥，但客户端需要一个值\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 从PDF文件中提取和分块文本\n",
    "现在，我们加载PDF，提取文本，并将其分割成块。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Number of text chunks: 42\n",
      "\n",
      "First text chunk:\n",
      "Understanding Artificial Intelligence \n",
      "Chapter 1: Introduction to Artificial Intelligence \n",
      "Artificial intelligence (AI) refers to the ability of a digital computer or computer-controlled robot \n",
      "to perform tasks commonly associated with intelligent beings. The term is frequently applied to \n",
      "the project of developing systems endowed with the intellectual processes characteristic of \n",
      "humans, such as the ability to reason, discover meaning, generalize, or learn from past \n",
      "experience. Over the past few decades, advancements in computing power and data availability \n",
      "have significantly accelerated the development and deployment of AI. \n",
      "Historical Context \n",
      "The idea of artificial intelligence has existed for centuries, often depicted in myths and fiction. \n",
      "However, the formal field of AI research began in the mid-20th century. The Dartmouth Workshop \n",
      "in 1956 is widely considered the birthplace of AI. Early AI research focused on problem-solving \n",
      "and symbolic methods. The 1980s saw a rise in exp\n"
     ]
    }
   ],
   "source": [
    "# 定义PDF文件的路径\n",
    "pdf_path = \"data/AI_Information.pdf\"\n",
    "\n",
    "# 从PDF文件中提取文本\n",
    "extracted_text = extract_text_from_pdf(pdf_path)\n",
    "\n",
    "# 将提取的文本分割成1000个字符的段落，重叠200个字符\n",
    "text_chunks = chunk_text(extracted_text, 1000, 200)\n",
    "\n",
    "# 打印创建的文本块数量\n",
    "print(\"Number of text chunks:\", len(text_chunks))\n",
    "\n",
    "# 打印第一个文本块\n",
    "print(\"\\nFirst text chunk:\")\n",
    "print(text_chunks[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 为文本块创建嵌入\n",
    "嵌入将文本转换为数值向量，这允许进行高效的相似性搜索。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_embeddings(text, model=\"bge-m3:latest\"):\n",
    "    \"\"\"\n",
    "    使用指定的OpenAI模型为给定文本创建嵌入。\n",
    "\n",
    "    参数:\n",
    "    text (str): 要创建嵌入的输入文本。\n",
    "    model (str): 用于创建嵌入的模型。默认是\"bge-m3:latest\"。\n",
    "\n",
    "    返回:\n",
    "    dict: 包含嵌入的OpenAI API响应。\n",
    "    \"\"\"\n",
    "    # 使用指定模型为输入文本创建嵌入\n",
    "    response = client.embeddings.create(\n",
    "        model=model,\n",
    "        input=text\n",
    "    )\n",
    "\n",
    "    return response  # 返回包含嵌入的响应\n",
    "\n",
    "# 为文本块创建嵌入\n",
    "response = create_embeddings(text_chunks)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 实现上下文感知的语义搜索\n",
    "我们修改检索以包含相邻块以获得更好的上下文。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "def cosine_similarity(vec1, vec2):\n",
    "    \"\"\"\n",
    "    计算两个向量之间的余弦相似度。\n",
    "\n",
    "    参数:\n",
    "    vec1 (np.ndarray): 第一个向量。\n",
    "    vec2 (np.ndarray): 第二个向量。\n",
    "\n",
    "    返回:\n",
    "    float: 两个向量之间的余弦相似度。\n",
    "    \"\"\"\n",
    "    # 计算两个向量的点积并除以它们的范数的乘积\n",
    "    return np.dot(vec1, vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2))"
   ]
  },
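  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When there are many chunks, the per-chunk similarity loop can also be replaced by a single vectorized computation. This is an optional sketch and is not used by the rest of the notebook (`cosine_similarity_batch` is a hypothetical helper): stacking the chunk embeddings into a matrix lets NumPy score every chunk against the query at once."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def cosine_similarity_batch(query_vec, chunk_matrix):\n",
    "    \"\"\"Score a query vector against every row of chunk_matrix at once.\"\"\"\n",
    "    query_vec = np.asarray(query_vec, dtype=float)\n",
    "    chunk_matrix = np.asarray(chunk_matrix, dtype=float)\n",
    "    # Row-wise dot products divided by the product of the norms\n",
    "    return (chunk_matrix @ query_vec) / (\n",
    "        np.linalg.norm(chunk_matrix, axis=1) * np.linalg.norm(query_vec)\n",
    "    )\n",
    "\n",
    "# Tiny made-up example: three 2-D \"embeddings\" scored against one query\n",
    "print(cosine_similarity_batch([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]))"
   ]
  },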
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "def context_enriched_search(query, text_chunks, embeddings, k=1, context_size=1):\n",
    "    \"\"\"\n",
    "    检索最相关的块及其相邻块。\n",
    "\n",
    "    参数:\n",
    "    query (str): 搜索查询。\n",
    "    text_chunks (List[str]): 文本块列表。\n",
    "    embeddings (List[dict]): 块嵌入列表。\n",
    "    k (int): 要检索的相关块数量。\n",
    "    context_size (int): 要包含的相邻块数量。\n",
    "\n",
    "    返回:\n",
    "    List[str]: 具有上下文信息的相关文本块。\n",
    "    \"\"\"\n",
    "    # 将查询转换为嵌入向量\n",
    "    query_embedding = create_embeddings(query).data[0].embedding\n",
    "    similarity_scores = []\n",
    "\n",
    "    # 计算查询和每个文本块嵌入之间的相似度分数\n",
    "    for i, chunk_embedding in enumerate(embeddings):\n",
    "        # 计算查询嵌入和当前块嵌入之间的余弦相似度\n",
    "        similarity_score = cosine_similarity(np.array(query_embedding), np.array(chunk_embedding.embedding))\n",
    "        # 将索引和相似度分数作为元组存储\n",
    "        similarity_scores.append((i, similarity_score))\n",
    "\n",
    "    # 按相似度分数降序排序块（最高相似度优先）\n",
    "    similarity_scores.sort(key=lambda x: x[1], reverse=True)\n",
    "\n",
    "    # 获取最相关块的索引\n",
    "    top_index = similarity_scores[0][0]\n",
    "\n",
    "    # 定义上下文包含的范围\n",
    "    # 确保我们不会低于0或超出text_chunks的长度\n",
    "    start = max(0, top_index - context_size)\n",
    "    end = min(len(text_chunks), top_index + context_size + 1)\n",
    "\n",
    "    # 返回相关块及其相邻的上下文块\n",
    "    return [text_chunks[i] for i in range(start, end)]"
   ]
  },
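  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The window arithmetic deserves a quick illustration (hypothetical indices, not tied to the real chunks): the `max`/`min` clamping shrinks the neighbor window at the boundaries instead of raising an `IndexError`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical illustration of the clamped neighbor window (context_size=1)\n",
    "num_chunks = 5\n",
    "for top_index in (0, 2, 4):\n",
    "    start = max(0, top_index - 1)             # never below 0\n",
    "    end = min(num_chunks, top_index + 1 + 1)  # never past the last chunk\n",
    "    print(top_index, list(range(start, end)))"
   ]
  },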
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 运行带有上下文检索的查询\n",
    "我们现在测试上下文增强检索。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Query: What is 'Explainable AI' and why is it considered important?\n",
      "Context 1:\n",
      "nt aligns with societal values. Education and awareness campaigns inform the public \n",
      "about AI, its impacts, and its potential. \n",
      "Chapter 19: AI and Ethics \n",
      "Principles of Ethical AI \n",
      "Ethical AI principles guide the development and deployment of AI systems to ensure they are fair, \n",
      "transparent, accountable, and beneficial to society. Key principles include respect for human \n",
      "rights, privacy, non-discrimination, and beneficence. \n",
      " \n",
      " \n",
      "Addressing Bias in AI \n",
      "AI systems can inherit and amplify biases present in the data they are trained on, leading to unfair \n",
      "or discriminatory outcomes. Addressing bias requires careful data collection, algorithm design, \n",
      "and ongoing monitoring and evaluation. \n",
      "Transparency and Explainability \n",
      "Transparency and explainability are essential for building trust in AI systems. Explainable AI (XAI) \n",
      "techniques aim to make AI decisions more understandable, enabling users to assess their \n",
      "fairness and accuracy. \n",
      "Privacy and Data Protection \n",
      "AI systems often rely on la\n",
      "=====================================\n",
      "Context 2:\n",
      "systems. Explainable AI (XAI) \n",
      "techniques aim to make AI decisions more understandable, enabling users to assess their \n",
      "fairness and accuracy. \n",
      "Privacy and Data Protection \n",
      "AI systems often rely on large amounts of data, raising concerns about privacy and data \n",
      "protection. Ensuring responsible data handling, implementing privacy-preserving techniques, \n",
      "and complying with data protection regulations are crucial. \n",
      "Accountability and Responsibility \n",
      "Establishing accountability and responsibility for AI systems is essential for addressing potential \n",
      "harms and ensuring ethical behavior. This includes defining roles and responsibilities for \n",
      "developers, deployers, and users of AI systems. \n",
      "Chapter 20: Building Trust in AI \n",
      "Transparency and Explainability \n",
      "Transparency and explainability are key to building trust in AI. Making AI systems understandable \n",
      "and providing insights into their decision-making processes helps users assess their reliability \n",
      "and fairness. \n",
      "Robustness and Reliability \n",
      "\n",
      "=====================================\n",
      "Context 3:\n",
      "to building trust in AI. Making AI systems understandable \n",
      "and providing insights into their decision-making processes helps users assess their reliability \n",
      "and fairness. \n",
      "Robustness and Reliability \n",
      "Ensuring that AI systems are robust and reliable is essential for building trust. This includes \n",
      "testing and validating AI models, monitoring their performance, and addressing potential \n",
      "vulnerabilities. \n",
      "User Control and Agency \n",
      "Empowering users with control over AI systems and providing them with agency in their \n",
      "interactions with AI enhances trust. This includes allowing users to customize AI settings, \n",
      "understand how their data is used, and opt out of AI-driven features. \n",
      "Ethical Design and Development \n",
      "Incorporating ethical considerations into the design and development of AI systems is crucial for \n",
      "building trust. This includes conducting ethical impact assessments, engaging stakeholders, and \n",
      "adhering to ethical guidelines and standards. \n",
      "Public Engagement and Education \n",
      "Engaging th\n",
      "=====================================\n"
     ]
    }
   ],
   "source": [
    "# 从JSON文件加载验证数据集\n",
    "with open('data/val.json') as f:\n",
    "    data = json.load(f)\n",
    "\n",
    "# 从数据集中提取第一个问题作为我们的查询\n",
    "query = data[0]['question']\n",
    "\n",
    "# 检索最相关的块及其相邻块作为上下文\n",
    "# 参数:\n",
    "# - query: 我们正在搜索的问题\n",
    "# - text_chunks: 从PDF中提取的文本块\n",
    "# - response.data: 文本块的嵌入\n",
    "# - k=1: 返回顶部匹配\n",
    "# - context_size=1: 包括顶部匹配前后各1个块作为上下文\n",
    "top_chunks = context_enriched_search(query, text_chunks, response.data, k=1, context_size=1)\n",
    "\n",
    "# 打印查询以供参考\n",
    "print(\"Query:\", query)\n",
    "# 打印每个检索到的块，带有标题和分隔符\n",
    "for i, chunk in enumerate(top_chunks):\n",
    "    print(f\"Context {i + 1}:\\n{chunk}\\n=====================================\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 使用检索的上下文生成响应\n",
    "我们现在使用LLM生成响应。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 为AI助手定义系统提示\n",
    "system_prompt = \"You are an AI assistant that strictly answers based on the given context. If the answer cannot be derived directly from the provided context, respond with: 'I do not have enough information to answer that.'\"\n",
    "\n",
    "def generate_response(system_prompt, user_message, model=\"qwen2.5:7b\"):\n",
    "    \"\"\"\n",
    "    基于系统提示和用户消息从AI模型生成响应。\n",
    "\n",
    "    参数:\n",
    "    system_prompt (str): 指导AI行为的系统提示。\n",
    "    user_message (str): 用户的消息或查询。\n",
    "    model (str): 用于生成响应的模型。默认是\"meta-llama/Llama-2-7B-chat-hf\"。\n",
    "\n",
    "    返回:\n",
    "    dict: AI模型的响应。\n",
    "    \"\"\"\n",
    "    response = client.chat.completions.create(\n",
    "        model=model,\n",
    "        temperature=0,\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_message}\n",
    "        ]\n",
    "    )\n",
    "    return response\n",
    "\n",
    "# 基于顶级块创建用户提示\n",
    "user_prompt = \"\\n\".join([f\"Context {i + 1}:\\n{chunk}\\n=====================================\\n\" for i, chunk in enumerate(top_chunks)])\n",
    "user_prompt = f\"{user_prompt}\\nQuestion: {query}\"\n",
    "\n",
    "# 生成AI响应\n",
    "ai_response = generate_response(system_prompt, user_prompt)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 评估AI响应\n",
    "我们将AI响应与预期答案进行比较并分配分数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Score: 0.8\n",
      "\n",
      "The AI response accurately captures the essence of Explainable AI and its importance but could be slightly more precise in language and completeness. The true response includes additional key points about accountability and ensuring fairness, which were not explicitly mentioned in the AI response. However, the core concepts are well-covered, hence a score of 0.8 is appropriate.\n"
     ]
    }
   ],
   "source": [
    "# 为评估系统定义系统提示\n",
    "evaluate_system_prompt = \"You are an intelligent evaluation system tasked with assessing the AI assistant's responses. If the AI assistant's response is very close to the true response, assign a score of 1. If the response is incorrect or unsatisfactory in relation to the true response, assign a score of 0. If the response is partially aligned with the true response, assign a score of 0.5.\"\n",
    "\n",
    "# 通过结合用户查询、AI响应、真实响应和评估系统提示来创建评估提示\n",
    "evaluation_prompt = f\"User Query: {query}\\nAI Response:\\n{ai_response.choices[0].message.content}\\nTrue Response: {data[0]['ideal_answer']}\\n{evaluate_system_prompt}\"\n",
    "\n",
    "# 使用评估系统提示和评估提示生成评估响应\n",
    "evaluation_response = generate_response(evaluate_system_prompt, evaluation_prompt)\n",
    "\n",
    "# 打印评估响应\n",
    "print(evaluation_response.choices[0].message.content)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "rag",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
