{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "markdown"
    }
   },
   "source": [
    "# 增强RAG系统的查询转换技术\n",
    "\n",
    "本notebook实现了三种查询转换技术，用于增强RAG系统的检索性能，无需依赖LangChain等专门库。通过修改用户查询，我们可以显著提高检索信息的相关性和全面性。\n",
    "\n",
    "## 关键转换技术\n",
    "\n",
    "1. **查询重写**: 使查询更具体和详细，以提高搜索精度。\n",
    "2. **后退提示**: 生成更广泛的查询以检索有用的上下文信息。\n",
    "3. **子查询分解**: 将复杂查询分解为更简单的组件，实现全面检索。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 设置环境\n",
    "我们首先导入必要的库。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "import fitz\n",
    "import os\n",
    "import numpy as np\n",
    "import json\n",
    "from openai import OpenAI"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 设置OpenAI API客户端\n",
    "我们初始化OpenAI客户端以生成嵌入和响应。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 使用基础URL和API密钥初始化OpenAI客户端\n",
    "client = OpenAI(\n",
    "    base_url=\"http://localhost:11434/v1/\",\n",
    "    api_key=\"ollama\"  # Ollama不需要真实的API密钥，但客户端需要一个值\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 实现查询转换技术\n",
    "### 1. 查询重写\n",
    "该技术使查询更具体和详细，以提高检索精度。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "def rewrite_query(original_query, model=\"qwen2.5:7b\"):\n",
    "    \"\"\"\n",
    "    重写查询，使其更具体和详细，以便更好地检索。\n",
    "    \n",
    "    Args:\n",
    "        original_query (str): 原始用户查询\n",
    "        model (str): 用于查询重写的模型\n",
    "        \n",
    "    Returns:\n",
    "        str: 重写后的查询\n",
    "    \"\"\"\n",
    "    # 定义系统提示以指导AI助手的行为\n",
    "    system_prompt = \"你是一个专门改进搜索查询的AI助手。你的任务是将用户查询重写得更具体、详细，并且更可能检索到相关信息。\"\n",
    "    \n",
    "    # 定义包含要重写的原始查询的用户提示\n",
    "    user_prompt = f\"\"\"\n",
    "    将以下查询重写得更具体和详细。包含可能有助于检索准确信息的相关术语和概念。\n",
    "    \n",
    "    原始查询: {original_query}\n",
    "    \n",
    "    重写查询:\n",
    "    \"\"\"\n",
    "    \n",
    "    # 使用指定模型生成重写后的查询\n",
    "    response = client.chat.completions.create(\n",
    "        model=model,\n",
    "        temperature=0.0,  # 低温度以获得确定性输出\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_prompt}\n",
    "        ]\n",
    "    )\n",
    "    \n",
    "    # 返回重写后的查询，去除前后空格\n",
    "    return response.choices[0].message.content.strip()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. 后退提示\n",
    "该技术生成更广泛的查询以检索上下文背景信息。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_step_back_query(original_query, model=\"qwen2.5:7b\"):\n",
    "    \"\"\"\n",
    "    生成更通用的\"后退\"查询以检索更广泛的上下文。\n",
    "    \n",
    "    Args:\n",
    "        original_query (str): 原始用户查询\n",
    "        model (str): 用于后退查询生成的模型\n",
    "        \n",
    "    Returns:\n",
    "        str: 后退查询\n",
    "    \"\"\"\n",
    "    # 定义系统提示以指导AI助手的行为\n",
    "    system_prompt = \"你是一个专门处理搜索策略的AI助手。你的任务是生成特定查询的更广泛、更通用的版本，以检索相关的背景信息。\"\n",
    "    \n",
    "    # 定义包含要泛化的原始查询的用户提示\n",
    "    user_prompt = f\"\"\"\n",
    "    生成以下查询的更广泛、更通用的版本，这可以帮助检索有用的背景信息。\n",
    "    \n",
    "    原始查询: {original_query}\n",
    "    \n",
    "    后退查询:\n",
    "    \"\"\"\n",
    "    \n",
    "    # 使用指定模型生成后退查询\n",
    "    response = client.chat.completions.create(\n",
    "        model=model,\n",
    "        temperature=0.1,  # 稍高的温度以获得一些变化\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_prompt}\n",
    "        ]\n",
    "    )\n",
    "    \n",
    "    # 返回后退查询，去除前后空格\n",
    "    return response.choices[0].message.content.strip()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. 子查询分解\n",
    "该技术将复杂查询分解为更简单的组件，以实现全面检索。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "def decompose_query(original_query, num_subqueries=4, model=\"qwen2.5:7b\"):\n",
    "    \"\"\"\n",
    "    将复杂查询分解为更简单的子查询。\n",
    "    \n",
    "    Args:\n",
    "        original_query (str): 原始复杂查询\n",
    "        num_subqueries (int): 要生成的子查询数量\n",
    "        model (str): 用于查询分解的模型\n",
    "        \n",
    "    Returns:\n",
    "        List[str]: 更简单的子查询列表\n",
    "    \"\"\"\n",
    "    # 定义系统提示以指导AI助手的行为\n",
    "    system_prompt = \"你是一个专门分解复杂问题的AI助手。你的任务是将复杂查询分解为更简单的子问题，当这些子问题一起回答时，可以解决原始查询。\"\n",
    "    \n",
    "    # 定义包含要分解的原始查询的用户提示\n",
    "    user_prompt = f\"\"\"\n",
    "    将以下复杂查询分解为{num_subqueries}个更简单的子查询。每个子查询应关注原始问题的不同方面。\n",
    "    \n",
    "    原始查询: {original_query}\n",
    "    \n",
    "    生成{num_subqueries}个子查询，每行一个，格式如下:\n",
    "    1. [第一个子查询]\n",
    "    2. [第二个子查询]\n",
    "    以此类推...\n",
    "    \"\"\"\n",
    "    \n",
    "    # 使用指定模型生成子查询\n",
    "    response = client.chat.completions.create(\n",
    "        model=model,\n",
    "        temperature=0.2,  # 稍高的温度以获得一些变化\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_prompt}\n",
    "        ]\n",
    "    )\n",
    "    \n",
    "    # 处理响应以提取子查询\n",
    "    content = response.choices[0].message.content.strip()\n",
    "    \n",
    "    # 使用简单解析提取编号查询\n",
    "    lines = content.split(\"\\n\")\n",
    "    sub_queries = []\n",
    "    \n",
    "    for line in lines:\n",
    "        if line.strip() and any(line.strip().startswith(f\"{i}.\") for i in range(1, 10)):\n",
    "            # 删除编号和前导空格\n",
    "            query = line.strip()\n",
    "            query = query[query.find(\".\")+1:].strip()\n",
    "            sub_queries.append(query)\n",
    "    \n",
    "    return sub_queries"
   ]
  },
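  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The parsing step above assumes the model follows the numbered-list format. As a quick offline sanity check (no model call; the sample response below is hand-written, not real model output), we can run the same parsing logic on a mock reply:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hand-written mock response, used only to illustrate the parsing logic in decompose_query\n",
    "mock_response = \"Here are the sub-queries:\\n1. What is X?\\n2. How does X relate to Y?\\nNote: both matter.\"\n",
    "parsed = []\n",
    "for line in mock_response.split(\"\\n\"):\n",
    "    if line.strip() and any(line.strip().startswith(f\"{i}.\") for i in range(1, 10)):\n",
    "        q = line.strip()\n",
    "        parsed.append(q[q.find(\".\")+1:].strip())\n",
    "print(parsed)  # ['What is X?', 'How does X relate to Y?']"
   ]
  },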
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 演示查询转换技术\n",
    "让我们将这些技术应用于示例查询。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "原始查询: What are the impacts of AI on job automation and employment?\n",
      "\n",
      "1. 重写查询:\n",
      "重写查询：如何评估人工智能技术在不同行业中的应用对就业市场、职业岗位自动化程度以及劳动力结构的具体影响？请详细分析AI技术的发展趋势及其对未来工作机会和失业率的潜在长期效应。\n",
      "\n",
      "2. 后退查询:\n",
      "生成的更广泛、更通用的查询可以是：\n",
      "\n",
      "How has artificial intelligence influenced job market dynamics, workforce transformation, and economic shifts in recent years? \n",
      "\n",
      "这样的问题能够覆盖更多相关背景信息，包括但不限于AI对就业市场的影响、工作性质的变化以及由此带来的经济变化等。\n",
      "\n",
      "3. 子查询:\n",
      "   1. What is the current state of AI technology in terms of its ability to automate jobs?\n",
      "   2. How has AI affected employment rates in different industries over the past decade?\n",
      "   3. In what ways can AI lead to job creation and new opportunities for workers?\n",
      "   4. What are the potential long-term impacts of AI on the labor market and workforce?\n"
     ]
    }
   ],
   "source": [
    "# 示例查询\n",
    "original_query = \"What are the impacts of AI on job automation and employment?\"\n",
    "\n",
    "# 应用查询转换\n",
    "print(\"原始查询:\", original_query)\n",
    "\n",
    "# 查询重写\n",
    "rewritten_query = rewrite_query(original_query)\n",
    "print(\"\\n1. 重写查询:\")\n",
    "print(rewritten_query)\n",
    "\n",
    "# 后退提示\n",
    "step_back_query = generate_step_back_query(original_query)\n",
    "print(\"\\n2. 后退查询:\")\n",
    "print(step_back_query)\n",
    "\n",
    "# 子查询分解\n",
    "sub_queries = decompose_query(original_query, num_subqueries=4)\n",
    "print(\"\\n3. 子查询:\")\n",
    "for i, query in enumerate(sub_queries, 1):\n",
    "    print(f\"   {i}. {query}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 构建简单的向量存储\n",
    "为了演示查询转换如何与检索集成，让我们实现一个简单的向量存储。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "class SimpleVectorStore:\n",
    "    \"\"\"\n",
    "    使用NumPy的简单向量存储实现。\n",
    "    \"\"\"\n",
    "    def __init__(self):\n",
    "        \"\"\"\n",
    "        初始化向量存储。\n",
    "        \"\"\"\n",
    "        self.vectors = []  # 存储嵌入向量的列表\n",
    "        self.texts = []  # 存储原始文本的列表\n",
    "        self.metadata = []  # 存储每个文本元数据的列表\n",
    "    \n",
    "    def add_item(self, text, embedding, metadata=None):\n",
    "        \"\"\"\n",
    "        向向量存储添加项目。\n",
    "\n",
    "        Args:\n",
    "        text (str): 原始文本。\n",
    "        embedding (List[float]): 嵌入向量。\n",
    "        metadata (dict, optional): 附加元数据。\n",
    "        \"\"\"\n",
    "        self.vectors.append(np.array(embedding))  # 将嵌入转换为numpy数组并添加到向量列表\n",
    "        self.texts.append(text)  # 将原始文本添加到文本列表\n",
    "        self.metadata.append(metadata or {})  # 将元数据添加到元数据列表，如果为None则使用空字典\n",
    "    \n",
    "    def similarity_search(self, query_embedding, k=5):\n",
    "        \"\"\"\n",
    "        查找与查询嵌入最相似的项目。\n",
    "\n",
    "        Args:\n",
    "        query_embedding (List[float]): 查询嵌入向量。\n",
    "        k (int): 要返回的结果数量。\n",
    "\n",
    "        Returns:\n",
    "        List[Dict]: 前k个最相似的项目及其文本和元数据。\n",
    "        \"\"\"\n",
    "        if not self.vectors:\n",
    "            return []  # 如果没有存储向量，返回空列表\n",
    "        \n",
    "        # 将查询嵌入转换为numpy数组\n",
    "        query_vector = np.array(query_embedding)\n",
    "        \n",
    "        # 使用余弦相似度计算相似性\n",
    "        similarities = []\n",
    "        for i, vector in enumerate(self.vectors):\n",
    "            # 计算查询向量与存储向量之间的余弦相似度\n",
    "            similarity = np.dot(query_vector, vector) / (np.linalg.norm(query_vector) * np.linalg.norm(vector))\n",
    "            similarities.append((i, similarity))  # 添加索引和相似度分数\n",
    "        \n",
    "        # 按相似度排序（降序）\n",
    "        similarities.sort(key=lambda x: x[1], reverse=True)\n",
    "        \n",
    "        # 返回前k个结果\n",
    "        results = []\n",
    "        for i in range(min(k, len(similarities))):\n",
    "            idx, score = similarities[i]\n",
    "            results.append({\n",
    "                \"text\": self.texts[idx],  # 添加相应的文本\n",
    "                \"metadata\": self.metadata[idx],  # 添加相应的元数据\n",
    "                \"similarity\": score  # 添加相似度分数\n",
    "            })\n",
    "        \n",
    "        return results  # 返回前k个最相似项目的列表"
   ]
  },
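  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, we can exercise the store with tiny hand-made 3-dimensional vectors (stand-ins for real embeddings) and confirm that `similarity_search` ranks items by cosine similarity:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy example: hand-made 3-d vectors stand in for real embeddings\n",
    "demo_store = SimpleVectorStore()\n",
    "demo_store.add_item(\"about cats\", [1.0, 0.0, 0.0])\n",
    "demo_store.add_item(\"about dogs\", [0.0, 1.0, 0.0])\n",
    "demo_store.add_item(\"cats and dogs\", [0.7, 0.7, 0.0])\n",
    "for r in demo_store.similarity_search([1.0, 0.1, 0.0], k=2):\n",
    "    print(f\"{r['text']}: {r['similarity']:.3f}\")  # 'about cats' should rank first\n"
   ]
  },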
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 创建嵌入"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_embeddings(text, model=\"bge-m3:latest\"):\n",
    "    \"\"\"\n",
    "    使用指定的OpenAI模型为给定文本创建嵌入。\n",
    "\n",
    "    Args:\n",
    "    text (str): 要创建嵌入的输入文本。\n",
    "    model (str): 用于创建嵌入的模型。\n",
    "\n",
    "    Returns:\n",
    "    List[float]: 嵌入向量。\n",
    "    \"\"\"\n",
    "    # 通过将字符串输入转换为列表来处理字符串和列表输入\n",
    "    input_text = text if isinstance(text, list) else [text]\n",
    "    \n",
    "    # 使用指定模型为输入文本创建嵌入\n",
    "    response = client.embeddings.create(\n",
    "        model=model,\n",
    "        input=input_text\n",
    "    )\n",
    "    \n",
    "    # 如果输入是字符串，只返回第一个嵌入\n",
    "    if isinstance(text, str):\n",
    "        return response.data[0].embedding\n",
    "    \n",
    "    # 否则，返回所有嵌入作为向量列表\n",
    "    return [item.embedding for item in response.data]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 实现带查询转换的RAG"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [],
   "source": [
    "def extract_text_from_pdf(pdf_path):\n",
    "    \"\"\"\n",
    "    从PDF文件中提取文本。\n",
    "\n",
    "    Args:\n",
    "    pdf_path (str): PDF文件的路径。\n",
    "\n",
    "    Returns:\n",
    "    str: 从PDF中提取的文本。\n",
    "    \"\"\"\n",
    "    # 打开PDF文件\n",
    "    mypdf = fitz.open(pdf_path)\n",
    "    all_text = \"\"  # 初始化空字符串以存储提取的文本\n",
    "\n",
    "    # 遍历PDF中的每一页\n",
    "    for page_num in range(mypdf.page_count):\n",
    "        page = mypdf[page_num]  # 获取页面\n",
    "        text = page.get_text(\"text\")  # 从页面提取文本\n",
    "        all_text += text  # 将提取的文本添加到all_text字符串\n",
    "\n",
    "    return all_text  # 返回提取的文本"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "def chunk_text(text, n=1000, overlap=200):\n",
    "    \"\"\"\n",
    "    将给定文本分割成具有重叠的n个字符的段落。\n",
    "\n",
    "    Args:\n",
    "    text (str): 要分块的文本。\n",
    "    n (int): 每个块的字符数。\n",
    "    overlap (int): 块之间重叠的字符数。\n",
    "\n",
    "    Returns:\n",
    "    List[str]: 文本块列表。\n",
    "    \"\"\"\n",
    "    chunks = []  # 初始化空列表以存储块\n",
    "\n",
    "    # 以(n - overlap)的步长循环遍历文本\n",
    "    for i in range(0, len(text), n - overlap):\n",
    "        # 将从索引i到i + n的文本块添加到块列表中\n",
    "        chunks.append(text[i:i + n])\n",
    "\n",
    "    return chunks  # 返回文本块列表"
   ]
  },
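  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see the sliding window concretely, here is a small check on a toy string: with `n=10` and `overlap=4` the stride is 6 characters, so consecutive chunks share 4 characters (note the short tail chunks at the end):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy example: chunk the 26-letter alphabet with a 10-character window and 4-character overlap\n",
    "demo_chunks = chunk_text(\"abcdefghijklmnopqrstuvwxyz\", n=10, overlap=4)\n",
    "print(demo_chunks)  # ['abcdefghij', 'ghijklmnop', 'mnopqrstuv', 'stuvwxyz', 'yz']"
   ]
  },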
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "def process_document(pdf_path, chunk_size=1000, chunk_overlap=200):\n",
    "    \"\"\"\n",
    "    为RAG处理文档。\n",
    "\n",
    "    Args:\n",
    "    pdf_path (str): PDF文件的路径。\n",
    "    chunk_size (int): 每个块的字符数大小。\n",
    "    chunk_overlap (int): 块之间重叠的字符数。\n",
    "\n",
    "    Returns:\n",
    "    SimpleVectorStore: 包含文档块及其嵌入的向量存储。\n",
    "    \"\"\"\n",
    "    print(\"从PDF提取文本...\")\n",
    "    extracted_text = extract_text_from_pdf(pdf_path)\n",
    "    \n",
    "    print(\"分块文本...\")\n",
    "    chunks = chunk_text(extracted_text, chunk_size, chunk_overlap)\n",
    "    print(f\"创建了{len(chunks)}个文本块\")\n",
    "    \n",
    "    print(\"为块创建嵌入...\")\n",
    "    # 为了效率，一次性为所有块创建嵌入\n",
    "    chunk_embeddings = create_embeddings(chunks)\n",
    "    \n",
    "    # 创建向量存储\n",
    "    store = SimpleVectorStore()\n",
    "    \n",
    "    # 将块添加到向量存储\n",
    "    for i, (chunk, embedding) in enumerate(zip(chunks, chunk_embeddings)):\n",
    "        store.add_item(\n",
    "            text=chunk,\n",
    "            embedding=embedding,\n",
    "            metadata={\"index\": i, \"source\": pdf_path}\n",
    "        )\n",
    "    \n",
    "    print(f\"向向量存储添加了{len(chunks)}个块\")\n",
    "    return store"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 带查询转换的RAG"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [],
   "source": [
    "def transformed_search(query, vector_store, transformation_type, top_k=3):\n",
    "    \"\"\"\n",
    "    使用转换后的查询进行搜索。\n",
    "    \n",
    "    Args:\n",
    "        query (str): 原始查询\n",
    "        vector_store (SimpleVectorStore): 要搜索的向量存储\n",
    "        transformation_type (str): 转换类型（'rewrite'、'step_back'或'decompose'）\n",
    "        top_k (int): 要返回的结果数量\n",
    "        \n",
    "    Returns:\n",
    "        List[Dict]: 搜索结果\n",
    "    \"\"\"\n",
    "    print(f\"转换类型: {transformation_type}\")\n",
    "    print(f\"原始查询: {query}\")\n",
    "    \n",
    "    results = []\n",
    "    \n",
    "    if transformation_type == \"rewrite\":\n",
    "        # 查询重写\n",
    "        transformed_query = rewrite_query(query)\n",
    "        print(f\"重写查询: {transformed_query}\")\n",
    "        \n",
    "        # 为转换后的查询创建嵌入\n",
    "        query_embedding = create_embeddings(transformed_query)\n",
    "        \n",
    "        # 使用重写查询搜索\n",
    "        results = vector_store.similarity_search(query_embedding, k=top_k)\n",
    "        \n",
    "    elif transformation_type == \"step_back\":\n",
    "        # 后退提示\n",
    "        transformed_query = generate_step_back_query(query)\n",
    "        print(f\"后退查询: {transformed_query}\")\n",
    "        \n",
    "        # 为转换后的查询创建嵌入\n",
    "        query_embedding = create_embeddings(transformed_query)\n",
    "        \n",
    "        # 使用后退查询搜索\n",
    "        results = vector_store.similarity_search(query_embedding, k=top_k)\n",
    "        \n",
    "    elif transformation_type == \"decompose\":\n",
    "        # Sub-query decomposition\n",
    "        sub_queries = decompose_query(query)\n",
    "        print(\"Decomposed into sub-queries:\")\n",
    "        for i, sub_q in enumerate(sub_queries, 1):\n",
    "            print(f\"{i}. {sub_q}\")\n",
    "        \n",
    "        # Create embeddings for all sub-queries\n",
    "        sub_query_embeddings = create_embeddings(sub_queries)\n",
    "        \n",
    "        # Search with each sub-query and combine results\n",
    "        all_results = []\n",
    "        for i, embedding in enumerate(sub_query_embeddings):\n",
    "            sub_results = vector_store.similarity_search(embedding, k=2)  # Get fewer results per sub-query\n",
    "            all_results.extend(sub_results)\n",
    "        \n",
    "        # Remove duplicates (keep highest similarity score)\n",
    "        seen_texts = {}\n",
    "        for result in all_results:\n",
    "            text = result[\"text\"]\n",
    "            if text not in seen_texts or result[\"similarity\"] > seen_texts[text][\"similarity\"]:\n",
    "                seen_texts[text] = result\n",
    "        \n",
    "        # Sort by similarity and take top_k\n",
    "        results = sorted(seen_texts.values(), key=lambda x: x[\"similarity\"], reverse=True)[:top_k]\n",
    "        \n",
    "    else:\n",
    "        # Regular search without transformation\n",
    "        query_embedding = create_embeddings(query)\n",
    "        results = vector_store.similarity_search(query_embedding, k=top_k)\n",
    "    \n",
    "    return results"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Generating a Response with Transformed Queries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_response(query, context, model=\"qwen2.5:7b\"):\n",
    "    \"\"\"\n",
    "    基于查询和检索到的上下文生成响应。\n",
    "    \n",
    "    Args:\n",
    "        query (str): 用户查询\n",
    "        context (str): 检索到的上下文\n",
    "        model (str): 用于响应生成的模型\n",
    "        \n",
    "    Returns:\n",
    "        str: 生成的响应\n",
    "    \"\"\"\n",
    "    # 定义系统提示以指导AI助手的行为\n",
    "    system_prompt = \"你是一个有用的AI助手。仅基于提供的上下文回答用户的问题。如果你在上下文中找不到答案，请说明你没有足够的信息。\"\n",
    "    \n",
    "    # 定义包含上下文和查询的用户提示\n",
    "    user_prompt = f\"\"\"\n",
    "        上下文:\n",
    "        {context}\n",
    "\n",
    "        问题: {query}\n",
    "\n",
    "        请仅基于上述上下文提供全面的答案。\n",
    "    \"\"\"\n",
    "    \n",
    "    # 使用指定模型生成响应\n",
    "    response = client.chat.completions.create(\n",
    "        model=model,\n",
    "        temperature=0,  # 低温度以获得确定性输出\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_prompt}\n",
    "        ]\n",
    "    )\n",
    "    \n",
    "    # 返回生成的响应，去除前后空格\n",
    "    return response.choices[0].message.content.strip()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 运行带查询转换的完整RAG管道"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [],
   "source": [
    "def rag_with_query_transformation(pdf_path, query, transformation_type=None):\n",
    "    \"\"\"\n",
    "    运行带可选查询转换的完整RAG管道。\n",
    "    \n",
    "    Args:\n",
    "        pdf_path (str): PDF文档路径\n",
    "        query (str): 用户查询\n",
    "        transformation_type (str): 转换类型（None、'rewrite'、'step_back'或'decompose'）\n",
    "        \n",
    "    Returns:\n",
    "        Dict: 包括查询、转换查询、上下文和响应的结果\n",
    "    \"\"\"\n",
    "    # 处理文档以创建向量存储\n",
    "    vector_store = process_document(pdf_path)\n",
    "    \n",
    "    # 应用查询转换并搜索\n",
    "    if transformation_type:\n",
    "        # 使用转换查询执行搜索\n",
    "        results = transformed_search(query, vector_store, transformation_type)\n",
    "    else:\n",
    "        # 执行不带转换的常规搜索\n",
    "        query_embedding = create_embeddings(query)\n",
    "        results = vector_store.similarity_search(query_embedding, k=3)\n",
    "    \n",
    "    # 合并搜索结果的上下文\n",
    "    context = \"\\n\\n\".join([f\"段落 {i+1}:\\n{result['text']}\" for i, result in enumerate(results)])\n",
    "    \n",
    "    # 基于查询和合并上下文生成响应\n",
    "    response = generate_response(query, context)\n",
    "    \n",
    "    # 返回包括原始查询、转换类型、上下文和响应的结果\n",
    "    return {\n",
    "        \"original_query\": query,\n",
    "        \"transformation_type\": transformation_type,\n",
    "        \"context\": context,\n",
    "        \"response\": response\n",
    "    }"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 评估转换技术"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [],
   "source": [
    "def compare_responses(results, reference_answer, model=\"qwen2.5:7b\"):\n",
    "    \"\"\"\n",
    "    Compare responses from different query transformation techniques.\n",
    "    \n",
    "    Args:\n",
    "        results (Dict): Results from different transformation techniques\n",
    "        reference_answer (str): Reference answer for comparison\n",
    "        model (str): Model for evaluation\n",
    "    \"\"\"\n",
    "    # Define the system prompt to guide the AI assistant's behavior\n",
    "    system_prompt = \"\"\"You are an expert evaluator of RAG systems. \n",
    "    Your task is to compare different responses generated using various query transformation techniques \n",
    "    and determine which technique produced the best response compared to the reference answer.\"\"\"\n",
    "    \n",
    "    # Prepare the comparison text with the reference answer and responses from each technique\n",
    "    comparison_text = f\"\"\"Reference Answer: {reference_answer}\\n\\n\"\"\"\n",
    "    \n",
    "    for technique, result in results.items():\n",
    "        comparison_text += f\"{technique.capitalize()} Query Response:\\n{result['response']}\\n\\n\"\n",
    "    \n",
    "    # Define the user prompt with the comparison text\n",
    "    user_prompt = f\"\"\"\n",
    "    {comparison_text}\n",
    "    \n",
    "    Compare the responses generated by different query transformation techniques to the reference answer.\n",
    "    \n",
    "    For each technique (original, rewrite, step_back, decompose):\n",
    "    1. Score the response from 1-10 based on accuracy, completeness, and relevance\n",
    "    2. Identify strengths and weaknesses\n",
    "    \n",
    "    Then rank the techniques from best to worst and explain which technique performed best overall and why.\n",
    "    \"\"\"\n",
    "    \n",
    "    # Generate the evaluation response using the specified model\n",
    "    response = client.chat.completions.create(\n",
    "        model=model,\n",
    "        temperature=0,\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_prompt}\n",
    "        ]\n",
    "    )\n",
    "    \n",
    "    # Print the evaluation results\n",
    "    print(\"\\n===== EVALUATION RESULTS =====\")\n",
    "    print(response.choices[0].message.content)\n",
    "    print(\"=============================\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [],
   "source": [
    "def evaluate_transformations(pdf_path, query, reference_answer=None):\n",
    "    \"\"\"\n",
    "    Evaluate different transformation techniques for the same query.\n",
    "    \n",
    "    Args:\n",
    "        pdf_path (str): Path to PDF document\n",
    "        query (str): Query to evaluate\n",
    "        reference_answer (str): Optional reference answer for comparison\n",
    "        \n",
    "    Returns:\n",
    "        Dict: Evaluation results\n",
    "    \"\"\"\n",
    "    # Define the transformation techniques to evaluate\n",
    "    transformation_types = [None, \"rewrite\", \"step_back\", \"decompose\"]\n",
    "    results = {}\n",
    "    \n",
    "    # Run RAG with each transformation technique\n",
    "    for transformation_type in transformation_types:\n",
    "        type_name = transformation_type if transformation_type else \"original\"\n",
    "        print(f\"\\n===== Running RAG with {type_name} query =====\")\n",
    "        \n",
    "        # Get the result for the current transformation type\n",
    "        result = rag_with_query_transformation(pdf_path, query, transformation_type)\n",
    "        results[type_name] = result\n",
    "        \n",
    "        # Print the response for the current transformation type\n",
    "        print(f\"Response with {type_name} query:\")\n",
    "        print(result[\"response\"])\n",
    "        print(\"=\" * 50)\n",
    "    \n",
    "    # Compare results if a reference answer is provided\n",
    "    if reference_answer:\n",
    "        compare_responses(results, reference_answer)\n",
    "    \n",
    "    return results"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 查询转换评估"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "===== Running RAG with original query =====\n",
      "从PDF提取文本...\n",
      "分块文本...\n",
      "创建了42个文本块\n",
      "为块创建嵌入...\n",
      "向向量存储添加了42个块\n",
      "Response with original query:\n",
      "Explainable AI (XAI) aims to make AI systems more transparent and understandable. It is considered important for several reasons:\n",
      "\n",
      "1. **Building Trust**: Making AI systems understandable helps users assess their reliability and fairness, which is crucial for building trust in these technologies.\n",
      "2. **Enhancing Accountability**: XAI techniques provide insights into how AI models make decisions, which can help establish accountability and responsibility for the outcomes of AI systems.\n",
      "3. **Addressing Ethical Concerns**: By making AI more transparent, ethical concerns related to decision-making processes can be better addressed.\n",
      "4. **Improving Fairness Assessment**: Users can evaluate the fairness of AI decisions, ensuring that these systems do not perpetuate biases or unfair practices.\n",
      "\n",
      "These aspects are highlighted in multiple segments of the provided context, emphasizing the importance of XAI for transparency, trust, and ethical use of AI technologies.\n",
      "==================================================\n",
      "\n",
      "===== Running RAG with rewrite query =====\n",
      "从PDF提取文本...\n",
      "分块文本...\n",
      "创建了42个文本块\n",
      "为块创建嵌入...\n",
      "向向量存储添加了42个块\n",
      "转换类型: rewrite\n",
      "原始查询: What is 'Explainable AI' and why is it considered important?\n",
      "重写查询: 重写查询：什么是可解释的人工智能（Explainable AI），它为什么在当前的机器学习和数据科学领域中被认为非常重要？具体来说，可解释的人工智能是指那些能够提供透明、易于理解的决策过程和结果的AI系统。请详细说明其重要性，包括但不限于提高用户信任度、确保公平性和遵守法规等方面的应用场景和案例研究。\n",
      "Response with rewrite query:\n",
      "Explainable AI (XAI) aims to make AI systems more transparent and understandable. It provides insights into how AI models make decisions, enhancing trust and accountability. XAI techniques are crucial for making AI decisions more understandable, enabling users to assess their fairness and accuracy, which is important for building trust in AI systems.\n",
      "==================================================\n",
      "\n",
      "===== Running RAG with step_back query =====\n",
      "从PDF提取文本...\n",
      "分块文本...\n",
      "创建了42个文本块\n",
      "为块创建嵌入...\n",
      "向向量存储添加了42个块\n",
      "转换类型: step_back\n",
      "原始查询: What is 'Explainable AI' and why is it considered important?\n",
      "后退查询: 生成的更广泛、更通用的查询可以是：\n",
      "\n",
      "\"Explorable Artificial Intelligence (AI): 定义、原理及其重要性探索\"\n",
      "\n",
      "这个版本的查询更加宽泛，涵盖了原始问题的核心主题，并且使用了不同的词汇表达相同的概念。这样可以帮助找到更多相关的背景信息和资源。\n",
      "Response with step_back query:\n",
      "Explainable AI (XAI) aims to make AI systems more transparent and understandable. It is considered important for several reasons:\n",
      "\n",
      "1. **Building Trust**: Making AI systems understandable helps users assess their reliability and fairness, which is crucial for building trust in these technologies.\n",
      "2. **Enhancing Accountability**: XAI techniques provide insights into how AI models make decisions, which can help establish accountability and responsibility for the outcomes of AI systems.\n",
      "3. **Addressing Ethical Concerns**: By making AI more transparent, ethical concerns related to decision-making processes can be better addressed.\n",
      "4. **Improving Fairness Assessment**: Users can evaluate the fairness of AI decisions, ensuring that these systems do not perpetuate biases or unfair practices.\n",
      "\n",
      "These aspects are highlighted in multiple segments of the provided context, emphasizing the importance of XAI for transparency, trust, and ethical use of AI technologies.\n",
      "==================================================\n",
      "\n",
      "===== Running RAG with decompose query =====\n",
      "从PDF提取文本...\n",
      "分块文本...\n",
      "创建了42个文本块\n",
      "为块创建嵌入...\n",
      "向向量存储添加了42个块\n",
      "转换类型: decompose\n",
      "原始查询: What is 'Explainable AI' and why is it considered important?\n",
      "Decomposed into sub-queries:\n",
      "1. What does the term \"Explainable AI\" mean?\n",
      "2. How does Explainable AI differ from traditional AI approaches?\n",
      "3. Why is Explainable AI considered important in various fields?\n",
      "4. What are some benefits and applications of Explainable AI?\n",
      "Response with decompose query:\n",
      "Explainable AI (XAI) aims to make AI systems more transparent and understandable, providing insights into how AI models make decisions. This is important because XAI techniques enhance trust and accountability by making AI decisions more understandable, enabling users to assess their fairness and accuracy.\n",
      "==================================================\n",
      "\n",
      "===== EVALUATION RESULTS =====\n",
      "### Evaluation of Responses\n",
      "\n",
      "#### Original Query Response:\n",
      "**Score: 9/10**\n",
      "- **Accuracy**: The response accurately captures the essence of XAI, its importance for transparency, trust, accountability, and ethical concerns. It closely aligns with the reference answer.\n",
      "- **Completeness**: The response is comprehensive, covering multiple aspects such as building trust, enhancing accountability, addressing ethical concerns, and improving fairness assessment.\n",
      "- **Relevance**: Highly relevant to the topic of XAI.\n",
      "\n",
      "**Strengths:**\n",
      "- Provides a detailed explanation of why XAI is important.\n",
      "- Includes specific points like trust, accountability, and fairness assessment.\n",
      "\n",
      "**Weaknesses:**\n",
      "- Slightly repetitive in some parts (e.g., \"These aspects are highlighted...\").\n",
      "\n",
      "#### Rewrite Query Response:\n",
      "**Score: 8/10**\n",
      "- **Accuracy**: The response accurately describes the purpose of XAI but is less detailed compared to the original.\n",
      "- **Completeness**: While it covers key points, it does not delve as deeply into each aspect as the original response.\n",
      "- **Relevance**: Relevant, but lacks some depth.\n",
      "\n",
      "**Strengths:**\n",
      "- Concise and clear in its explanation.\n",
      "- Highlights trust and accountability effectively.\n",
      "\n",
      "**Weaknesses:**\n",
      "- Less detailed compared to the original response.\n",
      "- Does not cover all aspects of XAI (e.g., fairness assessment).\n",
      "\n",
      "#### Step_back Query Response:\n",
      "**Score: 8/10**\n",
      "- **Accuracy**: Accurate, but slightly less detailed than the original response.\n",
      "- **Completeness**: Covers key points but does not provide as much detail or depth.\n",
      "- **Relevance**: Relevant to the topic.\n",
      "\n",
      "**Strengths:**\n",
      "- Maintains a similar structure and covers important aspects of XAI.\n",
      "- Provides clear explanations for trust and accountability.\n",
      "\n",
      "**Weaknesses:**\n",
      "- Less detailed compared to the original response.\n",
      "- Could benefit from including more specific points like fairness assessment.\n",
      "\n",
      "#### Decompose Query Response:\n",
      "**Score: 7/10**\n",
      "- **Accuracy**: Accurate, but less comprehensive than the original response.\n",
      "- **Completeness**: Covers some key aspects but is not as thorough.\n",
      "- **Relevance**: Relevant to the topic of XAI.\n",
      "\n",
      "**Strengths:**\n",
      "- Clear and concise in its explanation.\n",
      "- Highlights trust and accountability effectively.\n",
      "\n",
      "**Weaknesses:**\n",
      "- Less detailed compared to other responses.\n",
      "- Does not cover all important points (e.g., fairness assessment).\n",
      "\n",
      "### Ranking from Best to Worst\n",
      "\n",
      "1. **Original Query Response**: This response is the most accurate, complete, and relevant. It provides a comprehensive explanation of XAI's importance and covers multiple aspects in detail.\n",
      "2. **Step_back Query Response**: While it maintains accuracy and relevance, it lacks some depth compared to the original response.\n",
      "3. **Rewrite Query Response**: This response is clear but less detailed than the original, making it slightly less effective.\n",
      "4. **Decompose Query Response**: The most concise of all responses, but it misses out on providing a comprehensive explanation.\n",
      "\n",
      "### Conclusion\n",
      "\n",
      "The **Original Query Response** performed best overall because it accurately captures the essence of XAI, provides a thorough and detailed explanation, and covers multiple important aspects such as trust, accountability, and fairness assessment. While other responses are also relevant and accurate, they lack the depth and comprehensiveness provided by the original response.\n",
      "=============================\n"
     ]
    }
   ],
   "source": [
    "# Load the validation data from the JSON file\n",
    "with open('data/val.json') as f:\n",
    "    data = json.load(f)\n",
    "\n",
    "# Extract the first query from the validation data\n",
    "query = data[0]['question']\n",
    "\n",
    "# Extract the reference answer from the validation data\n",
    "reference_answer = data[0]['ideal_answer']\n",
    "\n",
    "# Path to the PDF file\n",
    "pdf_path = \"data/AI_Information.pdf\"\n",
    "\n",
    "# Run the evaluation\n",
    "evaluation_results = evaluate_transformations(pdf_path, query, reference_answer)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "rag",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
