{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "74e8bcfe",
   "metadata": {},
   "source": [
     "In the LangChain ecosystem, the Indexing component is a high-level API for keeping a vector store in sync with its underlying data source. It tackles three core pain points of RAG systems: stale vector-store data, repeated embedding computation, and manual cleanup of redundant content. Through automated record tracking and incremental updates, it ensures the vector store always reflects the latest state of the data source while saving significant time and cost (no re-computed embeddings, no duplicate content stored)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8d8aa351",
   "metadata": {},
   "source": [
    "# 1. Component Positioning and Core Value: Why Indexing?\n",
    "In a traditional RAG pipeline (load documents → split → embed → write to the vector store), manually keeping the vector store up to date when the data source changes (documents added, modified, or deleted) runs into three problems:\n",
    "\n",
    "1. **Duplicate content**: loading the same document repeatedly writes it into the vector store multiple times, so retrieval returns duplicate results;\n",
    "2. **Re-computed embeddings**: even unmodified documents get re-embedded on every reload, wasting token cost;\n",
    "3. **Stale data is hard to clean up**: after a source document is deleted or modified, its old chunks linger in the vector store and degrade retrieval accuracy.\n",
    "\n",
    "The core value of the LangChain Indexing component is automating away these problems:\n",
    "\n",
    "- ✅ De-duplication: duplicate documents are detected automatically and never written twice;\n",
    "- ✅ Incremental updates: only modified documents are re-embedded; unchanged documents are skipped;\n",
    "- ✅ Automatic cleanup: old or deleted versions are removed from the vector store as the data source changes;\n",
    "- ✅ Robust to transformations: even after documents go through multiple processing steps (e.g. text splitting, format conversion), chunks remain linked to the original data source via a \"source ID\", so the sync logic still applies."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "530c777d",
   "metadata": {},
   "source": [
    "# 2. How It Works: How Does Indexing Keep Data in Sync?\n",
    "Indexing is built around the RecordManager, which acts as the vector store's \"ledger\": it tracks the write, update, and delete state of every document. The full workflow is as follows:\n",
    "\n",
    "1. **Core component: RecordManager**\n",
    "\n",
    "The RecordManager is the \"brain\" of Indexing. It uses a database (e.g. SQLite or PostgreSQL) to store the key information needed to track document state:\n",
    "\n",
    "- **Document hash**: a hash computed over the document's page_content and metadata that uniquely identifies its content (unchanged content means an unchanged hash);\n",
    "- **Write time**: when the document was first written and last updated, used to decide whether it is stale;\n",
    "- **Source ID**: an identifier for the original source, taken from the document's metadata (e.g. metadata[\"source\"] = \"kitty.txt\"), used to link all chunks derived from the same original document (such as the pieces produced by text splitting).\n",
    "\n",
    "2. **Core flow (adding or modifying a document)**\n",
    "\n",
    "- **Hash**: compute a hash for each input document (or post-split chunk);\n",
    "- **Look up**: ask the RecordManager whether that hash already exists (is it a duplicate?) and whether the document's source ID has older records (is there old content to clean up?);\n",
    "- **Incremental write**:\n",
    "    - hash already exists: skip the write (no duplicates);\n",
    "    - hash is new: generate the embedding, write it to the vector store, and record the new hash, write time, and source ID in the RecordManager;\n",
    "- **Automatic cleanup**: depending on the configured cleanup mode, delete the old versions in the vector store associated with the document's source ID (e.g. after a document is modified, remove its old chunks)."
   ]
  },
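  {
   "cell_type": "markdown",
   "id": "1f2e3d4c",
   "metadata": {},
   "source": [
    "The hash-then-look-up decision above can be sketched in a few lines of plain Python. This is only an illustration, not LangChain's actual implementation: the `seen` set stands in for the RecordManager database, and `doc_hash`/`should_write` are made-up helper names:\n",
    "\n",
    "```python\n",
    "import hashlib\n",
    "import json\n",
    "\n",
    "def doc_hash(page_content, metadata):\n",
    "    # Hash content + metadata together, so any change yields a new key\n",
    "    payload = json.dumps({'content': page_content, 'metadata': metadata}, sort_keys=True)\n",
    "    return hashlib.sha256(payload.encode('utf-8')).hexdigest()\n",
    "\n",
    "seen = set()  # stand-in for the RecordManager's database\n",
    "\n",
    "def should_write(doc):\n",
    "    key = doc_hash(doc['page_content'], doc['metadata'])\n",
    "    if key in seen:\n",
    "        return False  # duplicate: skip embedding and writing\n",
    "    seen.add(key)\n",
    "    return True       # new or changed: embed and write\n",
    "\n",
    "doc = {'page_content': 'kitty is cute', 'metadata': {'source': 'kitty.txt'}}\n",
    "print(should_write(doc))  # True  (first time)\n",
    "print(should_write(doc))  # False (unchanged duplicate)\n",
    "```"
   ]
  },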
  {
   "cell_type": "markdown",
   "id": "8a2ca470",
   "metadata": {},
   "source": [
    "# 3. Key Feature: Cleanup Modes\n",
    "Most of Indexing's flexibility comes from the cleanup mode, which selects a sync strategy to match your use case. The four modes differ as follows (adapted from the table in the official docs):\n",
    "\n",
    "| Cleanup Mode | De-Duplicates | Parallelizable | Cleans Up Deleted Source Docs | Cleans Up Mutations of Source/Derived Docs | Clean Up Timing |\n",
    "|:--|:--|:--|:--|:--|:--|\n",
    "| None | ✅ | ✅ | ❌ | ❌ | Never (manual cleanup) |\n",
    "| Incremental | ✅ | ✅ | ❌ | ✅ | Continuously, as documents are written |\n",
    "| Full | ✅ | ❌ | ✅ | ✅ | At the end of indexing |\n",
    "| Scoped_Full | ✅ | ✅ | ❌ | ✅ | At the end of indexing |\n",
    "\n",
    "**When to use each mode:**\n",
    "1. None:\n",
    "    - de-duplicates only; never cleans up old content automatically;\n",
    "    - use for one-off imports of static data (e.g. historical archives), where later changes are rare or managed by hand.\n",
    "2. Incremental (most common):\n",
    "    - de-duplicates, parallelizes, and cleans up old versions produced by source-document changes (e.g. after doggy.txt is edited, its stale chunks are deleted from the vector store), but does not clean up content whose source document was deleted;\n",
    "    - advantage: cleanup happens as documents are written, so old and new versions coexist only very briefly, which suits latency-sensitive scenarios;\n",
    "    - use when existing documents are updated frequently but source documents are rarely deleted (e.g. document revisions in an enterprise knowledge base).\n",
    "3. Full:\n",
    "    - you must pass in the complete set of documents that should exist; anything in the vector store not included in that batch is deleted (which covers deleted source documents);\n",
    "    - drawbacks: not parallelizable (the full dataset must be processed together), and cleanup runs only at the end of indexing, so old and new versions may briefly coexist;\n",
    "    - use when the data source can be fully enumerated (e.g. reports synced in full every day) and deleted sources must be purged strictly.\n",
    "4. Scoped_Full:\n",
    "    - behaves like Incremental (deleted source documents are not cleaned up), but defers cleanup to the end of indexing and supports parallelism;\n",
    "    - use when you need to process large batches of changes in parallel and immediate cleanup is not required (e.g. nightly batch updates)."
   ]
  },
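  {
   "cell_type": "markdown",
   "id": "2a3b4c5d",
   "metadata": {},
   "source": [
    "The difference between Incremental and Full can be reduced to one rule: Full treats the incoming batch as the complete set that should exist, and deletes everything else. A minimal sketch of that rule (illustration only; `stored` and `batch_keys` are made-up stand-ins for the vector store and the current batch):\n",
    "\n",
    "```python\n",
    "# Stand-in for the vector store: key -> chunk\n",
    "stored = {'k1': 'kitty chunk', 'k2': 'doggy chunk'}\n",
    "\n",
    "# Full mode: the batch is 'all documents that should exist'\n",
    "batch_keys = {'k1'}\n",
    "\n",
    "# Anything stored but absent from the batch is deleted at the end of indexing\n",
    "for key in list(stored):\n",
    "    if key not in batch_keys:\n",
    "        del stored[key]\n",
    "\n",
    "print(stored)  # {'k1': 'kitty chunk'}\n",
    "```"
   ]
  },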
  {
   "cell_type": "markdown",
   "id": "093cfaa6",
   "metadata": {},
   "source": [
    "# 4. Requirements and Compatibility\n",
    "Indexing does not work with every vector store; the following requirements must be met:\n",
    "\n",
    "1. **Required vector-store capabilities**\n",
    "\n",
    "Indexing relies on two vector-store interfaces, and both are mandatory:\n",
    "    - add by ID: the add_documents method must accept an ids argument (a unique ID per document, used for tracking and deletion);\n",
    "    - delete by ID: the delete method must accept an ids argument (so old content can be removed using the IDs the RecordManager recorded).\n",
    "\n",
    "2. **Compatible vector stores (official list)**\n",
    "\n",
    "As of LangChain v0.3, commonly used vector stores verified to work with Indexing include:\n",
    "Chroma, FAISS, Pinecone, Qdrant, Weaviate, PGVector, Redis, ElasticsearchStore, Milvus, MongoDBAtlasVectorSearch, and others (see the \"Requirements\" section of the official docs for the full list).\n",
    "\n",
    "3. **Other limitations**\n",
    "\n",
    "- ❌ Do not use it on a vector store that was populated by hand: the RecordManager only tracks documents written through the Indexing API, so manually inserted content goes unrecognized, causing duplicates or broken cleanup;\n",
    "- ⚠️ Timing caveat: the RecordManager relies on high-resolution timestamps to judge document state. If two indexing runs are separated by a very short interval (milliseconds), cleanup can fail because the timestamp has not advanced. In practice indexing runs take far longer than milliseconds, so this rarely occurs."
   ]
  },
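  {
   "cell_type": "markdown",
   "id": "3c4d5e6f",
   "metadata": {},
   "source": [
    "The two required operations have the following shape. `ToyStore` below is a hypothetical stand-in used only to show the interface Indexing calls, not a real LangChain vector store:\n",
    "\n",
    "```python\n",
    "class ToyStore:\n",
    "    def __init__(self):\n",
    "        self._docs = {}\n",
    "\n",
    "    def add_documents(self, documents, ids):\n",
    "        # Add by ID: upsert keyed by the caller-supplied id\n",
    "        for doc, doc_id in zip(documents, ids):\n",
    "            self._docs[doc_id] = doc\n",
    "\n",
    "    def delete(self, ids):\n",
    "        # Delete by ID: remove exactly the records the RecordManager names\n",
    "        for doc_id in ids:\n",
    "            self._docs.pop(doc_id, None)\n",
    "\n",
    "store = ToyStore()\n",
    "store.add_documents(['kitty is cute'], ids=['doc-1'])\n",
    "store.delete(ids=['doc-1'])\n",
    "print(len(store._docs))  # 0\n",
    "```"
   ]
  },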
  {
   "cell_type": "markdown",
   "id": "83d57b3b",
   "metadata": {},
   "source": [
    "# 5. Examples\n",
    "The examples below use FAISS (vector store), SQLRecordManager (record manager), and HuggingFaceEmbeddings (embedding model), and cover the core workflow.\n",
    "\n",
    "## 1. Core steps: from initialization to index sync\n",
    "**Step 1: initialize the vector store and embedding model**\n",
    "\n",
    "First create the vector store instance (to hold the document vectors) and the embedding model (to generate them):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "374b20fa",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "import faiss\n",
    "from langchain_community.docstore.in_memory import InMemoryDocstore\n",
    "from langchain_community.vectorstores import FAISS\n",
    "from langchain_huggingface import HuggingFaceEmbeddings\n",
    "\n",
    "emb_model_path = \"../../embed_model/bge-large-zh-v1.5\"\n",
    "db_path = \"../../faiss_db\"\n",
    "\n",
    "# 1. Create the embedding model instance\n",
    "embeddings = HuggingFaceEmbeddings(\n",
    "    model_name = emb_model_path,\n",
    "    model_kwargs = {\"device\": \"cpu\"},  # set \"cuda\" to use a GPU\n",
    "    encode_kwargs = {\"normalize_embeddings\": True}  # normalize vectors for more accurate similarity scores\n",
    ")\n",
    "\n",
    "# 2. Initialize FAISS and persist it locally\n",
    "if not os.path.exists(db_path):\n",
    "    # FAISS.from_documents cannot infer the vector dimension from an empty\n",
    "    # list, so build an empty store explicitly\n",
    "    dim = len(embeddings.embed_query(\"hello\"))\n",
    "    vectorstore = FAISS(\n",
    "        embedding_function = embeddings,\n",
    "        index = faiss.IndexFlatL2(dim),\n",
    "        docstore = InMemoryDocstore(),\n",
    "        index_to_docstore_id = {},\n",
    "    )\n",
    "    # Persist locally (next run loads it instead of re-embedding)\n",
    "    vectorstore.save_local(db_path)\n",
    "else:\n",
    "    # Load the persisted FAISS store\n",
    "    vectorstore = FAISS.load_local(\n",
    "        db_path,\n",
    "        embeddings,\n",
    "        allow_dangerous_deserialization=True  # allow pickle deserialization (safe for local files)\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a2d1a447",
   "metadata": {},
   "source": [
    "**Step 2: initialize the RecordManager and create its schema**\n",
    "\n",
    "The RecordManager needs a namespace (to distinguish different vector stores/indexes) and a database URL (a local SQLite database here), and its table schema must be created:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "5c9701b6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.indexes import SQLRecordManager, index\n",
    "\n",
    "# 1. Namespace: the recommended format is \"<vector store type>/<index name>\" to avoid collisions\n",
    "# Since FAISS is a local store, the local directory name doubles as the index name here\n",
    "namespace = \"faiss/faiss_db\"\n",
    "sql_path = \"../../record_manager_cache.sql\"\n",
    "\n",
    "# 2. Initialize the RecordManager (records stored in SQLite)\n",
    "record_manager = SQLRecordManager(\n",
    "    namespace,\n",
    "    db_url = f\"sqlite:///{sql_path}\"  # SQLite database path (created automatically)\n",
    ")\n",
    "\n",
    "# 3. Create the tables the RecordManager needs (required on first use)\n",
    "record_manager.create_schema()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4dd8fd2a",
   "metadata": {},
   "source": [
    "**Step 3: define test documents**\n",
    "\n",
    "Create documents with a source metadata field (the source is the \"source ID\" Indexing uses to track the original document):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "bd5b66b6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.documents import Document\n",
    "\n",
    "# Document 1: source \"kitty.txt\"\n",
    "doc1 = Document(page_content=\"kitty is cute\", metadata={\"source\": \"kitty.txt\"})\n",
    "# Document 2: source \"doggy.txt\"\n",
    "doc2 = Document(page_content=\"doggy is loyal\", metadata={\"source\": \"doggy.txt\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "88796ebd",
   "metadata": {},
   "source": [
    "## 2. Examples for each cleanup mode\n",
    "**Example 1: None mode (de-duplication only, no automatic cleanup)**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7b55fdd2",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "First index result: {'num_added': 2, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}\n",
      "Re-index result: {'num_added': 0, 'num_updated': 0, 'num_skipped': 3, 'num_deleted': 0}\n"
     ]
    }
   ],
   "source": [
    "# Helper: empty the vector store (uses Full mode; for testing only)\n",
    "def _clear_vectorstore():\n",
    "    try:\n",
    "        index([], record_manager, vectorstore, cleanup=\"full\", source_id_key=\"source\", key_encoder=\"sha256\")\n",
    "    except ValueError as e:\n",
    "        print(f\"Warning: {e}\")\n",
    "\n",
    "# 1. First index: import the 2 documents (num_added=2)\n",
    "_clear_vectorstore()\n",
    "result = index(\n",
    "    [doc1, doc2],\n",
    "    record_manager,\n",
    "    vectorstore,\n",
    "    cleanup = None,          # cleanup mode None\n",
    "    source_id_key = \"source\",  # the source-ID field in metadata\n",
    "    key_encoder = \"sha256\"\n",
    ")\n",
    "print(\"First index result:\", result)  # {'num_added': 2, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}\n",
    "\n",
    "# 2. Re-index: every document that already exists is skipped (num_skipped=3)\n",
    "result = index(\n",
    "    [doc1, doc1, doc2],  # contains a duplicate doc1\n",
    "    record_manager,\n",
    "    vectorstore,\n",
    "    cleanup = None,\n",
    "    source_id_key = \"source\",\n",
    "    key_encoder = \"sha256\"\n",
    ")\n",
    "print(\"Re-index result:\", result)  # {'num_added': 0, 'num_updated': 0, 'num_skipped': 3, 'num_deleted': 0}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bb909f47",
   "metadata": {},
   "source": [
    "**Example 2: Incremental mode (handling document changes)**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "1e79e393",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Index result after modifying the document: {'num_added': 1, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 1}\n",
      "Retrieved contents: ['kitty is cute', 'puppy is cute']\n"
     ]
    }
   ],
   "source": [
    "# 1. First index both documents\n",
    "_clear_vectorstore()\n",
    "index([doc1, doc2], record_manager, vectorstore, cleanup=\"incremental\", source_id_key=\"source\", key_encoder=\"sha256\")\n",
    "\n",
    "# 2. Modify doc2 (content changes, source ID stays the same)\n",
    "changed_doc2 = Document(page_content=\"puppy is cute\", metadata={\"source\": \"doggy.txt\"})\n",
    "\n",
    "# 3. Index the modified doc2: one new document added, one old one deleted\n",
    "result = index(\n",
    "    [changed_doc2],\n",
    "    record_manager,\n",
    "    vectorstore,\n",
    "    cleanup=\"incremental\",\n",
    "    source_id_key=\"source\",\n",
    "    key_encoder=\"sha256\"\n",
    ")\n",
    "print(\"Index result after modifying the document:\", result)  # {'num_added': 1, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 1}\n",
    "\n",
    "# 4. Verify by retrieval: the old doc2 (\"doggy is loyal\") has been deleted\n",
    "retrieved = vectorstore.similarity_search(\"dog\", k=2)\n",
    "print(\"Retrieved contents:\", [doc.page_content for doc in retrieved])  # \"doggy is loyal\" no longer appears"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d26dfa49",
   "metadata": {},
   "source": [
    "**Example 3: Full mode (handling deleted source documents)**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "e6023017",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Index result after deleting the source document: {'num_added': 0, 'num_updated': 0, 'num_skipped': 1, 'num_deleted': 1}\n",
      "Search for kitty: [Document(id='b6c23e398954205d68d48a9dd51c33e580c5f740897d13a924c5138eefc6a313', metadata={'source': 'doggy.txt'}, page_content='doggy is loyal')]\n"
     ]
    }
   ],
   "source": [
    "# 1. First index the full document set (doc1 + doc2)\n",
    "_clear_vectorstore()\n",
    "all_docs = [doc1, doc2]\n",
    "index(all_docs, record_manager, vectorstore, cleanup=\"full\", source_id_key=\"source\", key_encoder=\"sha256\")\n",
    "\n",
    "# 2. Delete source document doc1 (keep only doc2)\n",
    "del all_docs[0]  # all_docs is now [doc2]\n",
    "\n",
    "# 3. Full re-index: the missing doc1 is deleted (num_deleted=1)\n",
    "result = index(\n",
    "    all_docs,  # only doc2 is passed in\n",
    "    record_manager,\n",
    "    vectorstore,\n",
    "    cleanup=\"full\",\n",
    "    source_id_key=\"source\",\n",
    "    key_encoder=\"sha256\"\n",
    ")\n",
    "print(\"Index result after deleting the source document:\", result)  # {'num_added': 0, 'num_updated': 0, 'num_skipped': 1, 'num_deleted': 1}\n",
    "\n",
    "# 4. Verify by retrieval: doc1 is gone; similarity_search still returns the\n",
    "# k nearest remaining documents, so doc2 comes back instead\n",
    "retrieved = vectorstore.similarity_search(\"kitty\", k=1)\n",
    "print(\"Search for kitty:\", retrieved)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8d85ea4d",
   "metadata": {},
   "source": [
    "## 3. Combining document loaders and text splitting\n",
    "Indexing can also take a document loader directly; even after text splitting, chunks remain linked to the original document through their source ID:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "92d0f522",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loader index result: {'num_added': 2, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}\n",
      "Index result after updating the loader: {'num_added': 2, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 2}\n"
     ]
    }
   ],
   "source": [
    "from langchain_core.document_loaders import BaseLoader\n",
    "from langchain_text_splitters import CharacterTextSplitter\n",
    "\n",
    "# 1. Custom loader (splits documents right after loading)\n",
    "class MyCustomLoader(BaseLoader):\n",
    "    def load(self):\n",
    "        # Original documents (source IDs \"kitty.txt\" and \"doggy.txt\")\n",
    "        raw_docs = [\n",
    "            Document(page_content=\"kitty kitty kitty\", metadata={\"source\": \"kitty.txt\"}),\n",
    "            Document(page_content=\"doggy doggy doggy\", metadata={\"source\": \"doggy.txt\"})\n",
    "        ]\n",
    "        # Split the text (with the default \"\\n\\n\" separator these short texts\n",
    "        # each remain a single chunk, so the batch holds 2 chunks in total)\n",
    "        splitter = CharacterTextSplitter(chunk_size=10, chunk_overlap=2)\n",
    "        return splitter.split_documents(raw_docs)\n",
    "\n",
    "# 2. Index via the loader\n",
    "docs = MyCustomLoader().load()\n",
    "_clear_vectorstore()\n",
    "result = index(\n",
    "    docs,  # the loaded (and split) documents; index() also accepts the loader itself\n",
    "    record_manager,\n",
    "    vectorstore,\n",
    "    cleanup=\"incremental\",\n",
    "    source_id_key=\"source\",\n",
    "    key_encoder=\"sha256\"\n",
    ")\n",
    "print(\"Loader index result:\", result)  # {'num_added': 2, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}\n",
    "\n",
    "# 3. Change both sources' content and re-index\n",
    "class UpdatedLoader(BaseLoader):\n",
    "    def load(self):\n",
    "        updated_doc = Document(page_content=\"woof woof\", metadata={\"source\": \"doggy.txt\"})\n",
    "        splitter = CharacterTextSplitter(chunk_size=10, chunk_overlap=2)\n",
    "        return splitter.split_documents([updated_doc, doc1])  # doc1's content also differs from the first load\n",
    "\n",
    "result = index(\n",
    "    UpdatedLoader().load(),\n",
    "    record_manager,\n",
    "    vectorstore,\n",
    "    cleanup=\"incremental\",\n",
    "    source_id_key=\"source\",\n",
    "    key_encoder=\"sha256\"\n",
    ")\n",
    "print(\"Index result after updating the loader:\", result)  # {'num_added': 2, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 2}\n",
    "# Both sources changed, so 2 new chunks are added and the 2 old chunks are deleted"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "67ee33a9",
   "metadata": {},
   "source": [
    "# 6. Key Caveats (Official Warnings and Recommendations)\n",
    "1. **RecordManager timing:**\n",
    "    If two indexing runs are separated by a very short interval (milliseconds), cleanup can fail because the timestamp has not advanced. In practice indexing takes far longer than that, so the issue is rare; to be safe, you can add a small delay (e.g. 100 ms) between runs.\n",
    "\n",
    "2. **Vector-store compatibility:**\n",
    "    Make sure the vector store supports adding and deleting by ID, otherwise Indexing will fail (see the \"Requirements\" section of the official docs for the compatibility list).\n",
    "\n",
    "3. **The importance of source_id_key:**\n",
    "    source_id_key is how Indexing tracks document provenance. Every document's metadata must contain this field, and every chunk derived from the same original document (e.g. after splitting) must carry the same source ID; otherwise old content cannot be associated and cleaned up correctly.\n",
    "\n",
    "4. **Do not modify the vector store by hand:**\n",
    "    If the vector store already contains documents written outside the Indexing API, the RecordManager cannot recognize them, which leads to duplicates or broken cleanup. Prefer a fresh vector store, or clear out the old content first."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0aabf91f",
   "metadata": {},
   "source": [
    "# 7. Summary\n",
    "LangChain Indexing does not replace components like Vector Stores or Document Loaders; it is a higher-level sync-management API that ties them together. Its core value:\n",
    "\n",
    "- Cost savings: no repeated embedding computation (token cost) and no duplicate storage (space);\n",
    "- Data accuracy: data-source changes (additions, modifications, deletions) are synced automatically, so retrieval never surfaces stale content;\n",
    "- Simplicity: no manual tracking of document state or cleanup of old content, reducing the maintenance burden of a RAG system.\n",
    "\n",
    "- Typical use cases:\n",
    "    - applications whose vector store is updated regularly (enterprise knowledge bases, product-doc search);\n",
    "    - frequently changing data sources (live document uploads, dynamic reports);\n",
    "    - production RAG systems sensitive to retrieval accuracy and cost.\n",
    "\n",
    "Official docs: [indexing](https://python.langchain.com/docs/how_to/indexing/)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
