{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "markdown"
    }
   },
   "source": [
    "## Introduction to Semantic Chunking\n",
    "Text chunking is an essential step in Retrieval-Augmented Generation (RAG): it splits long documents into meaningful segments to improve retrieval accuracy.\n",
    "Unlike fixed-length chunking, semantic chunking splits the text based on the content similarity between sentences.\n",
    "\n",
    "### Breakpoint methods\n",
    "- **Percentile**: find the Xth percentile of all similarity differences and split wherever the drop in similarity exceeds this value.\n",
    "- **Standard deviation**: split wherever the similarity drops more than X standard deviations below the mean.\n",
    "- **Interquartile range (IQR)**: use the interquartile distance (Q3 - Q1) to determine the split points.\n",
    "\n",
    "This notebook implements semantic chunking **using the percentile method** and evaluates its performance on a sample text."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setting Up the Environment\n",
    "We begin by importing the necessary libraries."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 75,
   "metadata": {},
   "outputs": [],
   "source": [
    "import fitz  # PyMuPDF\n",
    "import os\n",
    "import numpy as np\n",
    "import json\n",
    "from openai import OpenAI"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Extracting Text from a PDF File\n",
    "To implement RAG, we first need a source of text data. Here we use the PyMuPDF library to extract text from a PDF file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 76,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Understanding Artificial Intelligence \n",
      "Chapter 1: Introduction to Artificial Intelligence \n",
      "Artificial intelligence (AI) refers to the ability of a digital computer or computer-controlled robot \n",
      "to perform tasks commonly associated with intelligent beings. The term is frequently applied to \n",
      "the project of developing systems endowed with the intellectual processes characteristic of \n",
      "humans, such as the ability to reason, discover meaning, generalize, or learn from past \n",
      "experience. Over the past f\n"
     ]
    }
   ],
   "source": [
    "def extract_text_from_pdf(pdf_path):\n",
    "    \"\"\"\n",
    "    Extract text from a PDF file.\n",
    "\n",
    "    Args:\n",
    "    pdf_path (str): Path to the PDF file.\n",
    "\n",
    "    Returns:\n",
    "    str: The text extracted from the PDF.\n",
    "    \"\"\"\n",
    "    # Open the PDF file\n",
    "    mypdf = fitz.open(pdf_path)\n",
    "    all_text = \"\"  # Initialize an empty string to store the extracted text\n",
    "    \n",
    "    # Iterate over every page in the PDF\n",
    "    for page in mypdf:\n",
    "        # Extract text from the current page and append a space\n",
    "        all_text += page.get_text(\"text\") + \" \"\n",
    "\n",
    "    # Return the extracted text with leading/trailing whitespace removed\n",
    "    return all_text.strip()\n",
    "\n",
    "# Define the path to the PDF file\n",
    "pdf_path = \"data/AI_Information.pdf\"\n",
    "\n",
    "# Extract text from the PDF file\n",
    "extracted_text = extract_text_from_pdf(pdf_path)\n",
    "\n",
    "# Print the first 500 characters of the extracted text\n",
    "print(extracted_text[:500])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setting Up the OpenAI API Client\n",
    "Initialize the OpenAI client used to generate embeddings and chat responses. In this notebook the client points at a local Ollama server through its OpenAI-compatible endpoint."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 77,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Initialize the OpenAI client with a base URL and API key.\n",
    "# The base URL points at a local Ollama server's OpenAI-compatible endpoint.\n",
    "client = OpenAI(\n",
    "    base_url=\"http://localhost:11434/v1/\",\n",
    "    api_key=\"ollama\"  # placeholder value; Ollama does not validate the API key\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Creating Sentence-Level Embeddings\n",
    "Split the text into sentences and generate an embedding for each one."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 78,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Generated 257 sentence embeddings.\n"
     ]
    }
   ],
   "source": [
    "def get_embedding(text, model=\"bge-m3:latest\"):\n",
    "    \"\"\"\n",
    "    Create an embedding for the given text via the OpenAI-compatible API.\n",
    "\n",
    "    Args:\n",
    "    text (str): The input text.\n",
    "    model (str): The embedding model name.\n",
    "\n",
    "    Returns:\n",
    "    np.ndarray: The embedding vector.\n",
    "    \"\"\"\n",
    "    response = client.embeddings.create(model=model, input=text)\n",
    "    return np.array(response.data[0].embedding)\n",
    "\n",
    "# Split the text into sentences (naive split on \". \")\n",
    "sentences = extracted_text.split(\". \")\n",
    "\n",
    "# Generate an embedding for each sentence\n",
    "embeddings = [get_embedding(sentence) for sentence in sentences]\n",
    "\n",
    "print(f\"Generated {len(embeddings)} sentence embeddings.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Calculating Similarity Differences\n",
    "Compute the cosine similarity between consecutive sentences."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 79,
   "metadata": {},
   "outputs": [],
   "source": [
    "def cosine_similarity(vec1, vec2):\n",
    "    \"\"\"\n",
    "    Compute the cosine similarity between two vectors.\n",
    "\n",
    "    Args:\n",
    "    vec1 (np.ndarray): The first vector.\n",
    "    vec2 (np.ndarray): The second vector.\n",
    "\n",
    "    Returns:\n",
    "    float: The cosine similarity.\n",
    "    \"\"\"\n",
    "    return np.dot(vec1, vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2))\n",
    "\n",
    "# Compute the similarity between each pair of consecutive sentences\n",
    "similarities = [cosine_similarity(embeddings[i], embeddings[i + 1]) for i in range(len(embeddings) - 1)]"
   ]
  },
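  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (an illustrative addition, not part of the original pipeline), cosine similarity should return 1.0 for identical vectors and 0.0 for orthogonal ones. The toy vectors below are arbitrary examples."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sanity check; the vectors are made-up examples\n",
    "a = np.array([1.0, 0.0])\n",
    "b = np.array([0.0, 1.0])\n",
    "print(cosine_similarity(a, a))  # identical vectors -> 1.0\n",
    "print(cosine_similarity(a, b))  # orthogonal vectors -> 0.0\n"
   ]
  },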
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Implementing Semantic Chunking\n",
    "Implement three different methods for finding the breakpoints."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 80,
   "metadata": {},
   "outputs": [],
   "source": [
    "def compute_breakpoints(similarities, method=\"percentile\", threshold=90):\n",
    "    \"\"\"\n",
    "    Compute chunk breakpoints based on drops in similarity.\n",
    "\n",
    "    Args:\n",
    "    similarities (List[float]): Similarity scores between consecutive sentences.\n",
    "    method (str): One of 'percentile', 'standard_deviation', or 'interquartile'.\n",
    "    threshold (float): The percentile for 'percentile', or the number of standard\n",
    "        deviations for 'standard_deviation' (ignored for 'interquartile').\n",
    "\n",
    "    Returns:\n",
    "    List[int]: Indices at which chunk splits should occur.\n",
    "    \"\"\"\n",
    "    # Determine the threshold value based on the selected method\n",
    "    if method == \"percentile\":\n",
    "        # Use the Xth percentile of the similarity scores as the cutoff.\n",
    "        # Note: every position whose similarity falls below this percentile\n",
    "        # becomes a breakpoint, so threshold=90 splits at roughly 90% of\n",
    "        # sentence boundaries; a lower value yields fewer, larger chunks.\n",
    "        threshold_value = np.percentile(similarities, threshold)\n",
    "    elif method == \"standard_deviation\":\n",
    "        # Compute the mean and standard deviation of the similarity scores\n",
    "        mean = np.mean(similarities)\n",
    "        std_dev = np.std(similarities)\n",
    "        # Set the threshold to the mean minus X standard deviations\n",
    "        threshold_value = mean - (threshold * std_dev)\n",
    "    elif method == \"interquartile\":\n",
    "        # Compute the first and third quartiles (Q1 and Q3)\n",
    "        q1, q3 = np.percentile(similarities, [25, 75])\n",
    "        # Apply the IQR outlier rule to set the threshold\n",
    "        threshold_value = q1 - 1.5 * (q3 - q1)\n",
    "    else:\n",
    "        # Raise an error if an invalid method is supplied\n",
    "        raise ValueError(\"Invalid method. Choose 'percentile', 'standard_deviation', or 'interquartile'.\")\n",
    "\n",
    "    # Identify the indices where similarity falls below the threshold\n",
    "    return [i for i, sim in enumerate(similarities) if sim < threshold_value]\n",
    "\n",
    "# Compute breakpoints with the percentile method at threshold 90\n",
    "breakpoints = compute_breakpoints(similarities, method=\"percentile\", threshold=90)"
   ]
  },
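  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To build intuition for how the three methods differ, the sketch below (an illustrative addition; the similarity values are synthetic, not taken from the PDF) prints the breakpoints each method finds on a small hand-made list. The thresholds used (20th percentile, 1 standard deviation) are arbitrary choices for this demo."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Synthetic similarity scores with two clear drops (at indices 2 and 5)\n",
    "toy_sims = [0.92, 0.88, 0.35, 0.90, 0.87, 0.40, 0.91]\n",
    "\n",
    "for m, t in [(\"percentile\", 20), (\"standard_deviation\", 1), (\"interquartile\", 90)]:\n",
    "    # the threshold argument is ignored by the 'interquartile' method\n",
    "    print(m, \"->\", compute_breakpoints(toy_sims, method=m, threshold=t))\n"
   ]
  },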
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Splitting the Text into Semantic Chunks\n",
    "Split the text at the computed breakpoints."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 81,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Number of semantic chunks: 231\n",
      "\n",
      "First text chunk:\n",
      "Understanding Artificial Intelligence \n",
      "Chapter 1: Introduction to Artificial Intelligence \n",
      "Artificial intelligence (AI) refers to the ability of a digital computer or computer-controlled robot \n",
      "to perform tasks commonly associated with intelligent beings.\n"
     ]
    }
   ],
   "source": [
    "def split_into_chunks(sentences, breakpoints):\n",
    "    \"\"\"\n",
    "    Split the sentences into semantic chunks.\n",
    "\n",
    "    Args:\n",
    "    sentences (List[str]): The list of sentences.\n",
    "    breakpoints (List[int]): Indices at which to split.\n",
    "\n",
    "    Returns:\n",
    "    List[str]: The list of text chunks.\n",
    "    \"\"\"\n",
    "    chunks = []  # Initialize an empty list to store the chunks\n",
    "    start = 0  # Initialize the start index\n",
    "\n",
    "    # Iterate over the breakpoints to create chunks\n",
    "    for bp in breakpoints:\n",
    "        # Append the sentences from start up to the current breakpoint,\n",
    "        # restoring the \". \" separator removed by the sentence split\n",
    "        chunks.append(\". \".join(sentences[start:bp + 1]) + \".\")\n",
    "        start = bp + 1  # Move the start index to the sentence after the breakpoint\n",
    "\n",
    "    # Append the remaining sentences as the final chunk\n",
    "    chunks.append(\". \".join(sentences[start:]))\n",
    "    return chunks  # Return the list of chunks\n",
    "\n",
    "# Create the chunks with split_into_chunks\n",
    "text_chunks = split_into_chunks(sentences, breakpoints)\n",
    "\n",
    "# Print the number of chunks created\n",
    "print(f\"Number of semantic chunks: {len(text_chunks)}\")\n",
    "\n",
    "# Print the first chunk to verify the result\n",
    "print(\"\\nFirst text chunk:\")\n",
    "print(text_chunks[0])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Creating Embeddings for the Semantic Chunks\n",
    "Create an embedding for each chunk so it can be retrieved later."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 82,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_embeddings(text_chunks):\n",
    "    \"\"\"\n",
    "    Create an embedding for each text chunk.\n",
    "\n",
    "    Args:\n",
    "    text_chunks (List[str]): The list of text chunks.\n",
    "\n",
    "    Returns:\n",
    "    List[np.ndarray]: The list of embedding vectors.\n",
    "    \"\"\"\n",
    "    # Generate an embedding for each chunk with get_embedding\n",
    "    return [get_embedding(chunk) for chunk in text_chunks]\n",
    "\n",
    "# Create the chunk embeddings with create_embeddings\n",
    "chunk_embeddings = create_embeddings(text_chunks)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Performing Semantic Search\n",
    "Retrieve the most relevant chunks using cosine similarity."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 83,
   "metadata": {},
   "outputs": [],
   "source": [
    "def semantic_search(query, text_chunks, chunk_embeddings, k=5):\n",
    "    \"\"\"\n",
    "    Find the most relevant text chunks for a query.\n",
    "\n",
    "    Args:\n",
    "    query (str): The search query.\n",
    "    text_chunks (List[str]): The list of text chunks.\n",
    "    chunk_embeddings (List[np.ndarray]): The list of chunk embeddings.\n",
    "    k (int): The number of top results to return.\n",
    "\n",
    "    Returns:\n",
    "    List[str]: The top-k most relevant chunks.\n",
    "    \"\"\"\n",
    "    # Generate an embedding for the query\n",
    "    query_embedding = get_embedding(query)\n",
    "    \n",
    "    # Compute the cosine similarity between the query embedding and each chunk embedding\n",
    "    similarities = [cosine_similarity(query_embedding, emb) for emb in chunk_embeddings]\n",
    "    \n",
    "    # Get the indices of the top-k most similar chunks\n",
    "    top_indices = np.argsort(similarities)[-k:][::-1]\n",
    "    \n",
    "    # Return the top-k most relevant text chunks\n",
    "    return [text_chunks[i] for i in top_indices]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 84,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Query: What is 'Explainable AI' and why is it considered important?\n",
      "Context 1:\n",
      "\n",
      "Transparency and Explainability \n",
      "Transparency and explainability are essential for building trust in AI systems. Explainable AI (XAI) \n",
      "techniques aim to make AI decisions more understandable, enabling users to assess their \n",
      "fairness and accuracy.\n",
      "========================================\n",
      "Context 2:\n",
      "\n",
      "Explainable AI (XAI) \n",
      "Explainable AI (XAI) aims to make AI systems more transparent and understandable. Research in \n",
      "XAI focuses on developing methods for explaining AI decisions, enhancing trust, and improving \n",
      "accountability.\n",
      "========================================\n"
     ]
    }
   ],
   "source": [
    "# Load the validation data from a JSON file\n",
    "with open('data/val.json') as f:\n",
    "    data = json.load(f)\n",
    "\n",
    "# Extract the first query from the validation data\n",
    "query = data[0]['question']\n",
    "\n",
    "# Retrieve the top 2 relevant chunks\n",
    "top_chunks = semantic_search(query, text_chunks, chunk_embeddings, k=2)\n",
    "\n",
    "# Print the query\n",
    "print(f\"Query: {query}\")\n",
    "\n",
    "# Print the 2 most relevant text chunks\n",
    "for i, chunk in enumerate(top_chunks):\n",
    "    print(f\"Context {i+1}:\\n{chunk}\\n{'='*40}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Generating a Response from the Retrieved Chunks"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 85,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define the system prompt for the AI assistant\n",
    "system_prompt = \"You are an AI assistant that strictly answers based on the given context. If the answer cannot be derived directly from the provided context, respond with: 'I do not have enough information to answer that.'\"\n",
    "\n",
    "def generate_response(system_prompt, user_message, model=\"qwen2.5:7b\"):\n",
    "    \"\"\"\n",
    "    Generate a response from the AI model based on a system prompt and user message.\n",
    "\n",
    "    Args:\n",
    "    system_prompt (str): The system prompt that guides the AI's behavior.\n",
    "    user_message (str): The user's message or query.\n",
    "    model (str): The model used to generate the response. Defaults to \"qwen2.5:7b\".\n",
    "\n",
    "    Returns:\n",
    "    dict: The AI model's response.\n",
    "    \"\"\"\n",
    "    response = client.chat.completions.create(\n",
    "        model=model,\n",
    "        temperature=0,\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_message}\n",
    "        ]\n",
    "    )\n",
    "    return response\n",
    "\n",
    "# Build the user prompt from the top chunks\n",
    "user_prompt = \"\\n\".join([f\"Context {i + 1}:\\n{chunk}\\n=====================================\\n\" for i, chunk in enumerate(top_chunks)])\n",
    "user_prompt = f\"{user_prompt}\\nQuestion: {query}\"\n",
    "\n",
    "# Generate the AI response\n",
    "ai_response = generate_response(system_prompt, user_prompt)"
   ]
  },
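  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Print the generated answer so it can be inspected before evaluation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Display the model's answer (ai_response was created in the previous cell)\n",
    "print(ai_response.choices[0].message.content)\n"
   ]
  },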
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Evaluating the AI Response\n",
    "Compare the AI response with the expected answer and assign a score."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 86,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Score: 0.8\n",
      "\n",
      "The AI response accurately captures the essence of Explainable AI and its importance but could be slightly more precise in aligning with the true response. The true response mentions \"providing insights into how they make decisions,\" which is a key aspect that was not explicitly stated in the AI's response, though it can be inferred. Therefore, a score of 0.8 reflects this high level of accuracy while acknowledging the slight omission.\n"
     ]
    }
   ],
   "source": [
    "# Define the system prompt for the evaluation system\n",
    "evaluate_system_prompt = \"You are an intelligent evaluation system tasked with assessing the AI assistant's responses. If the AI assistant's response is very close to the true response, assign a score of 1. If the response is incorrect or unsatisfactory in relation to the true response, assign a score of 0. If the response is partially aligned with the true response, assign a score of 0.5.\"\n",
    "\n",
    "# Build the evaluation prompt from the user query, AI response, true response, and evaluation system prompt\n",
    "evaluation_prompt = f\"User Query: {query}\\nAI Response:\\n{ai_response.choices[0].message.content}\\nTrue Response: {data[0]['ideal_answer']}\\n{evaluate_system_prompt}\"\n",
    "\n",
    "# Generate the evaluation with the evaluation system prompt and evaluation prompt\n",
    "evaluation_response = generate_response(evaluate_system_prompt, evaluation_prompt)\n",
    "\n",
    "# Print the evaluation result\n",
    "print(evaluation_response.choices[0].message.content)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "rag",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
