{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "markdown"
    }
   },
   "source": [
     "## Evaluating Chunk Sizes in Simple RAG\n",
     "\n",
     "Choosing the right chunk size is crucial for retrieval accuracy in a Retrieval-Augmented Generation (RAG) pipeline. The goal is to balance retrieval performance against response quality.\n",
     "\n",
     "This section evaluates different chunk sizes by:\n",
     "\n",
     "1. Extracting text from a PDF.\n",
     "2. Splitting the text into chunks of varying sizes.\n",
     "3. Creating embeddings for each chunk.\n",
     "4. Retrieving relevant chunks for a query.\n",
     "5. Generating a response from the retrieved chunks.\n",
     "6. Evaluating faithfulness and relevancy.\n",
     "7. Comparing results across chunk sizes."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Setting Up the Environment\n",
     "We begin by importing the necessary libraries."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
     "import fitz  # PyMuPDF\n",
    "import os\n",
    "import numpy as np\n",
    "import json\n",
    "from openai import OpenAI"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Setting Up the OpenAI API Client\n",
     "We initialize the OpenAI client to generate embeddings and responses."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Initialize an OpenAI-compatible client pointed at a local Ollama server\n",
     "client = OpenAI(\n",
     "    base_url=\"http://localhost:11434/v1/\",\n",
     "    api_key=\"ollama\"  # Ollama ignores the key, but the client requires a non-empty value\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Extracting Text from the PDF\n",
     "First, we extract the text from the `AI_Information.pdf` file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Understanding Artificial Intelligence \n",
      "Chapter 1: Introduction to Artificial Intelligence \n",
      "Artificial intelligence (AI) refers to the ability of a digital computer or computer-controlled robot \n",
      "to perform tasks commonly associated with intelligent beings. The term is frequently applied to \n",
      "the project of developing systems endowed with the intellectual processes characteristic of \n",
      "humans, such as the ability to reason, discover meaning, generalize, or learn from past \n",
      "experience. Over the past f\n"
     ]
    }
   ],
   "source": [
     "def extract_text_from_pdf(pdf_path):\n",
     "    \"\"\"\n",
     "    Extract text from a PDF file.\n",
     "\n",
     "    Args:\n",
     "    pdf_path (str): Path to the PDF file.\n",
     "\n",
     "    Returns:\n",
     "    str: Text extracted from the PDF.\n",
     "    \"\"\"\n",
     "    # Open the PDF file\n",
     "    mypdf = fitz.open(pdf_path)\n",
     "    all_text = \"\"  # Accumulates the extracted text\n",
     "    \n",
     "    # Iterate over every page in the PDF\n",
     "    for page in mypdf:\n",
     "        # Extract text from the current page and append a separating space\n",
     "        all_text += page.get_text(\"text\") + \" \"\n",
     "\n",
     "    # Return the extracted text, stripped of leading/trailing whitespace\n",
     "    return all_text.strip()\n",
     "\n",
     "# Define the path to the PDF file\n",
     "pdf_path = \"data/AI_Information.pdf\"\n",
     "\n",
     "# Extract text from the PDF file\n",
     "extracted_text = extract_text_from_pdf(pdf_path)\n",
     "\n",
     "# Print the first 500 characters of the extracted text\n",
     "print(extracted_text[:500])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Chunking the Extracted Text\n",
     "To improve retrieval, we split the extracted text into overlapping chunks of different sizes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Chunk size: 128, number of chunks: 326\n",
       "Chunk size: 256, number of chunks: 164\n",
       "Chunk size: 512, number of chunks: 82\n"
     ]
    }
   ],
   "source": [
     "def chunk_text(text, n, overlap):\n",
     "    \"\"\"\n",
     "    Split text into overlapping chunks.\n",
     "\n",
     "    Args:\n",
     "    text (str): The text to chunk.\n",
     "    n (int): Number of characters per chunk.\n",
     "    overlap (int): Number of overlapping characters between chunks.\n",
     "\n",
     "    Returns:\n",
     "    List[str]: A list of text chunks.\n",
     "    \"\"\"\n",
     "    chunks = []  # Accumulates the text chunks\n",
     "    for i in range(0, len(text), n - overlap):\n",
     "        # Append a chunk spanning the current index to index + chunk size\n",
     "        chunks.append(text[i:i + n])\n",
     "    \n",
     "    return chunks  # Return the list of text chunks\n",
     "\n",
     "# Define the chunk sizes to evaluate\n",
     "chunk_sizes = [128, 256, 512]\n",
     "\n",
     "# Build a dictionary mapping each chunk size to its chunks (20% overlap)\n",
     "text_chunks_dict = {size: chunk_text(extracted_text, size, size // 5) for size in chunk_sizes}\n",
     "\n",
     "# Print the number of chunks created for each chunk size\n",
     "for size, chunks in text_chunks_dict.items():\n",
     "    print(f\"Chunk size: {size}, number of chunks: {len(chunks)}\")"
   ]
  },
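  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sanity check (added for clarity, not part of the original\n",
    "# pipeline): on a toy string, consecutive chunks produced by chunk_text\n",
    "# share `overlap` characters, and the final chunk may be shorter than `n`.\n",
    "demo_chunks = chunk_text(\"abcdefghij\", n=4, overlap=2)\n",
    "print(demo_chunks)  # ['abcd', 'cdef', 'efgh', 'ghij', 'ij']"
   ]
  },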
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Creating Embeddings for the Chunks\n",
     "Embeddings convert text into numerical representations for similarity search."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
       "Generating embeddings: 100%|██████████| 3/3 [00:20<00:00,  6.75s/it]\n"
     ]
    }
   ],
   "source": [
     "from tqdm import tqdm\n",
     "\n",
     "def create_embeddings(texts, model=\"bge-m3:latest\"):\n",
     "    \"\"\"\n",
     "    Generate embeddings for a list of texts.\n",
     "\n",
     "    Args:\n",
     "    texts (List[str]): List of input texts.\n",
     "    model (str): The embedding model.\n",
     "\n",
     "    Returns:\n",
     "    List[np.ndarray]: List of numerical embedding vectors.\n",
     "    \"\"\"\n",
     "    # Create embeddings with the specified model\n",
     "    response = client.embeddings.create(model=model, input=texts)\n",
     "    # Convert the response into a list of numpy arrays and return it\n",
     "    return [np.array(embedding.embedding) for embedding in response.data]\n",
     "\n",
     "# Generate embeddings for each chunk size,\n",
     "# iterating over every (size, chunks) pair in text_chunks_dict\n",
     "chunk_embeddings_dict = {size: create_embeddings(chunks) for size, chunks in tqdm(text_chunks_dict.items(), desc=\"Generating embeddings\")}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Performing Semantic Search\n",
     "We use cosine similarity to find the text chunks most relevant to a user query."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
     "def cosine_similarity(vec1, vec2):\n",
     "    \"\"\"\n",
     "    Compute the cosine similarity between two vectors.\n",
     "\n",
     "    Args:\n",
     "    vec1 (np.ndarray): The first vector.\n",
     "    vec2 (np.ndarray): The second vector.\n",
     "\n",
     "    Returns:\n",
     "    float: The cosine similarity score.\n",
     "    \"\"\"\n",
     "\n",
     "    # Dot product of the two vectors, divided by the product of their norms\n",
     "    return np.dot(vec1, vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2))"
   ]
  },
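  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative check (added for clarity): identical vectors score 1.0 and\n",
    "# orthogonal vectors score 0.0 under cosine similarity.\n",
    "print(cosine_similarity(np.array([1.0, 0.0]), np.array([1.0, 0.0])))  # 1.0\n",
    "print(cosine_similarity(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 0.0"
   ]
  },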
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
     "def retrieve_relevant_chunks(query, text_chunks, chunk_embeddings, k=5):\n",
     "    \"\"\"\n",
     "    Retrieve the top-k most relevant text chunks.\n",
     "    \n",
     "    Args:\n",
     "    query (str): The user query.\n",
     "    text_chunks (List[str]): List of text chunks.\n",
     "    chunk_embeddings (List[np.ndarray]): Embeddings of the text chunks.\n",
     "    k (int): Number of top chunks to return.\n",
     "    \n",
     "    Returns:\n",
     "    List[str]: The most relevant text chunks.\n",
     "    \"\"\"\n",
     "    # Embed the query - pass it as a single-item list and take the first result\n",
     "    query_embedding = create_embeddings([query])[0]\n",
     "    \n",
     "    # Compute cosine similarity between the query embedding and each chunk embedding\n",
     "    similarities = [cosine_similarity(query_embedding, emb) for emb in chunk_embeddings]\n",
     "    \n",
     "    # Get the indices of the top-k most similar chunks\n",
     "    top_indices = np.argsort(similarities)[-k:][::-1]\n",
     "    \n",
     "    # Return the top-k most relevant text chunks\n",
     "    return [text_chunks[i] for i in top_indices]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['AI enables personalized medicine by analyzing individual patient data, predicting treatment \\nresponses, and tailoring interventions. Personalized medicine enhances treatment effectiveness \\nand reduces adverse effects. \\nRobotic Surgery \\nAI-powered robotic s', ' analyzing biological data, predicting drug \\nefficacy, and identifying potential drug candidates. AI-powered systems reduce the time and cost \\nof bringing new treatments to market. \\nPersonalized Medicine \\nAI enables personalized medicine by analyzing indiv', 'rams, \\nand enhance student support services. \\n \\n Chapter 11: AI and Healthcare \\nMedical Diagnosis and Treatment \\nAI is revolutionizing medical diagnosis and treatment by analyzing medical images, predicting \\npatient outcomes, and assisting in treatment pla', 'mains. \\nThese applications include: \\nHealthcare \\nAI is transforming healthcare through applications such as medical diagnosis, drug discovery, \\npersonalized medicine, and robotic surgery. AI-powered tools can analyze medical images, \\npredict patient outcom', 'g \\npatient outcomes, and assisting in treatment planning. AI-powered tools enhance accuracy, \\nefficiency, and patient care. \\nDrug Discovery and Development \\nAI accelerates drug discovery and development by analyzing biological data, predicting drug \\neffica']\n"
     ]
    }
   ],
   "source": [
     "# Load the validation data from a JSON file\n",
     "with open('data/val.json') as f:\n",
     "    data = json.load(f)\n",
     "\n",
     "# Take the fourth query (index 3) from the validation data\n",
     "query = data[3]['question']\n",
     "\n",
     "# Retrieve relevant chunks for each chunk size\n",
     "retrieved_chunks_dict = {size: retrieve_relevant_chunks(query, text_chunks_dict[size], chunk_embeddings_dict[size]) for size in chunk_sizes}\n",
     "\n",
     "# Print the retrieved chunks for chunk size 256\n",
     "print(retrieved_chunks_dict[256])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Generating a Response from the Retrieved Chunks\n",
     "Let's generate a response based on the text retrieved with chunk size `256`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "AI contributes to personalized medicine by analyzing individual patient data, predicting treatment responses, and tailoring interventions. This approach enhances the effectiveness of treatments while reducing adverse effects.\n"
     ]
    }
   ],
   "source": [
     "# Define the system prompt for the AI assistant\n",
     "system_prompt = \"You are an AI assistant that answers strictly based on the given context. If the answer cannot be derived directly from the provided context, respond with: 'I do not have enough information to answer that.'\"\n",
     "\n",
     "def generate_response(query, system_prompt, retrieved_chunks, model=\"qwen2.5:7b\"):\n",
     "    \"\"\"\n",
     "    Generate an AI response based on the retrieved chunks.\n",
     "\n",
     "    Args:\n",
     "    query (str): The user query.\n",
     "    system_prompt (str): The system prompt constraining the assistant.\n",
     "    retrieved_chunks (List[str]): List of retrieved text chunks.\n",
     "    model (str): The chat model.\n",
     "\n",
     "    Returns:\n",
     "    str: The AI-generated response.\n",
     "    \"\"\"\n",
     "    # Combine the retrieved chunks into a single context string\n",
     "    context = \"\\n\".join([f\"Context {i+1}:\\n{chunk}\" for i, chunk in enumerate(retrieved_chunks)])\n",
     "    \n",
     "    # Build the user prompt by combining the context and the query\n",
     "    user_prompt = f\"{context}\\n\\nQuestion: {query}\"\n",
     "\n",
     "    # Generate the AI response with the specified model\n",
     "    response = client.chat.completions.create(\n",
     "        model=model,\n",
     "        temperature=0,\n",
     "        messages=[\n",
     "            {\"role\": \"system\", \"content\": system_prompt},\n",
     "            {\"role\": \"user\", \"content\": user_prompt}\n",
     "        ]\n",
     "    )\n",
     "\n",
     "    # Return the content of the AI response\n",
     "    return response.choices[0].message.content\n",
     "\n",
     "# Generate an AI response for each chunk size\n",
     "ai_responses_dict = {size: generate_response(query, system_prompt, retrieved_chunks_dict[size]) for size in chunk_sizes}\n",
     "\n",
     "# Print the response for chunk size 256\n",
     "print(ai_responses_dict[256])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Evaluating the AI Responses\n",
     "We use a strong LLM to score each response on faithfulness and relevancy."
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Define scoring constants for evaluation\n",
     "SCORE_FULL = 1.0     # Complete match or fully satisfactory\n",
     "SCORE_PARTIAL = 0.5  # Partial match or somewhat satisfactory\n",
     "SCORE_NONE = 0.0     # No match or unsatisfactory"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Define a strict evaluation prompt template\n",
     "FAITHFULNESS_PROMPT_TEMPLATE = \"\"\"\n",
     "Evaluate the faithfulness of the AI response compared to the true answer.\n",
     "User query: {question}\n",
     "AI response: {response}\n",
     "True answer: {true_answer}\n",
     "\n",
     "Faithfulness measures how well the AI response aligns with the facts in the true answer, without hallucinations.\n",
     "\n",
     "Instructions:\n",
     "- Score strictly using only these values:\n",
     "    * {full} = Completely faithful, no contradictions with the true answer\n",
     "    * {partial} = Partially faithful, minor contradictions\n",
     "    * {none} = Not faithful, major contradictions or hallucinations\n",
     "- Return ONLY the numerical score ({full}, {partial}, or {none}) with no explanation or additional text.\n",
     "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
     "RELEVANCY_PROMPT_TEMPLATE = \"\"\"\n",
     "Evaluate the relevancy of the AI response to the user query.\n",
     "User query: {question}\n",
     "AI response: {response}\n",
     "\n",
     "Relevancy measures how well the response addresses the user's question.\n",
     "\n",
     "Instructions:\n",
     "- Score strictly using only these values:\n",
     "    * {full} = Completely relevant, directly answers the query\n",
     "    * {partial} = Partially relevant, addresses some aspects\n",
     "    * {none} = Not relevant, fails to address the query\n",
     "- Return ONLY the numerical score ({full}, {partial}, or {none}) with no explanation or additional text.\n",
     "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Faithfulness score (chunk size 256): 1.0\n",
       "Relevancy score (chunk size 256): 1.0\n",
       "\n",
       "\n",
       "Faithfulness score (chunk size 128): 1.0\n",
       "Relevancy score (chunk size 128): 1.0\n"
     ]
    }
   ],
   "source": [
     "def evaluate_response(question, response, true_answer):\n",
     "    \"\"\"\n",
     "    Evaluate the quality of an AI-generated response on faithfulness and relevancy.\n",
     "\n",
     "    Args:\n",
     "    question (str): The user's original question.\n",
     "    response (str): The AI-generated response being evaluated.\n",
     "    true_answer (str): The correct answer used as ground truth.\n",
     "\n",
     "    Returns:\n",
     "    Tuple[float, float]: A (faithfulness_score, relevancy_score) tuple.\n",
     "                         Each score is 1.0 (full), 0.5 (partial), or 0.0 (none).\n",
     "    \"\"\"\n",
     "    # Format the evaluation prompts\n",
     "    faithfulness_prompt = FAITHFULNESS_PROMPT_TEMPLATE.format(\n",
     "        question=question,\n",
     "        response=response,\n",
     "        true_answer=true_answer,\n",
     "        full=SCORE_FULL,\n",
     "        partial=SCORE_PARTIAL,\n",
     "        none=SCORE_NONE\n",
     "    )\n",
     "\n",
     "    relevancy_prompt = RELEVANCY_PROMPT_TEMPLATE.format(\n",
     "        question=question,\n",
     "        response=response,\n",
     "        full=SCORE_FULL,\n",
     "        partial=SCORE_PARTIAL,\n",
     "        none=SCORE_NONE\n",
     "    )\n",
     "\n",
     "    # Ask the model for a faithfulness rating\n",
     "    faithfulness_response = client.chat.completions.create(\n",
     "        model=\"qwen2.5:7b\",\n",
     "        temperature=0,\n",
     "        messages=[\n",
     "            {\"role\": \"system\", \"content\": \"You are an objective evaluator. Return ONLY the numerical score.\"},\n",
     "            {\"role\": \"user\", \"content\": faithfulness_prompt}\n",
     "        ]\n",
     "    )\n",
     "\n",
     "    # Ask the model for a relevancy rating\n",
     "    relevancy_response = client.chat.completions.create(\n",
     "        model=\"qwen2.5:7b\",\n",
     "        temperature=0,\n",
     "        messages=[\n",
     "            {\"role\": \"system\", \"content\": \"You are an objective evaluator. Return ONLY the numerical score.\"},\n",
     "            {\"role\": \"user\", \"content\": relevancy_prompt}\n",
     "        ]\n",
     "    )\n",
     "\n",
     "    # Extract the scores, handling potential parsing errors\n",
     "    try:\n",
     "        faithfulness_score = float(faithfulness_response.choices[0].message.content.strip())\n",
     "    except ValueError:\n",
     "        print(\"Warning: could not parse faithfulness score, defaulting to 0\")\n",
     "        faithfulness_score = 0.0\n",
     "\n",
     "    try:\n",
     "        relevancy_score = float(relevancy_response.choices[0].message.content.strip())\n",
     "    except ValueError:\n",
     "        print(\"Warning: could not parse relevancy score, defaulting to 0\")\n",
     "        relevancy_score = 0.0\n",
     "\n",
     "    return faithfulness_score, relevancy_score\n",
     "\n",
     "# Ground-truth answer for the validation entry used above (index 3)\n",
     "true_answer = data[3]['ideal_answer']\n",
     "\n",
     "# Evaluate the responses for chunk sizes 256 and 128\n",
     "faithfulness, relevancy = evaluate_response(query, ai_responses_dict[256], true_answer)\n",
     "faithfulness2, relevancy2 = evaluate_response(query, ai_responses_dict[128], true_answer)\n",
     "\n",
     "# Print the evaluation scores\n",
     "print(f\"Faithfulness score (chunk size 256): {faithfulness}\")\n",
     "print(f\"Relevancy score (chunk size 256): {relevancy}\")\n",
     "\n",
     "print(\"\\n\")\n",
     "\n",
     "print(f\"Faithfulness score (chunk size 128): {faithfulness2}\")\n",
     "print(f\"Relevancy score (chunk size 128): {relevancy2}\")"
   ]
  }
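  ,
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Step 7 of the introduction: compare results across all chunk sizes.\n",
    "# This is a sketch that reuses the objects defined above; it makes two\n",
    "# judge-model calls per chunk size, so the printed scores depend on the LLM.\n",
    "for size in chunk_sizes:\n",
    "    f_score, r_score = evaluate_response(query, ai_responses_dict[size], true_answer)\n",
    "    print(f\"Chunk size {size}: faithfulness={f_score}, relevancy={r_score}\")"
   ]
  }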
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "rag",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
