{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "# 文档拆分器 Text Splitters",
   "id": "bb74fa7a8b9fb518"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## CharacterTextSplitter：Split by character\n",
    "参数情况说明：\n",
    "- `chunk_size` ：每个切块的最大token数量，默认值为4000。\n",
    "- `chunk_overlap` ：相邻两个切块之间的最大重叠token数量，默认值为200。\n",
    "- `separator` ：分割使用的分隔符，默认值为\" `\\n\\n` \"。\n",
    "- `length_function` ：用于计算切块长度的方法。**默认赋值为父类TextSplitter的len函数**。"
   ],
   "id": "d73a09259d459ec8"
  },
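  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "A note on `length_function`: it is pluggable. The sketch below is not from the original lesson; `visible_len` is a hypothetical counter that measures chunks by non-whitespace characters instead of raw `len`:",
   "id": "f0e1d2c3b4a50001"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from langchain_text_splitters import CharacterTextSplitter\n",
    "\n",
    "# Hypothetical length function: count only non-whitespace characters\n",
    "def visible_len(text: str) -> int:\n",
    "    return sum(1 for ch in text if not ch.isspace())\n",
    "\n",
    "splitter = CharacterTextSplitter(\n",
    "    chunk_size=20,\n",
    "    chunk_overlap=0,\n",
    "    separator=\"\\n\\n\",\n",
    "    length_function=visible_len\n",
    ")\n",
    "\n",
    "for chunk in splitter.split_text(\"first paragraph\\n\\nsecond paragraph\\n\\nthird paragraph\"):\n",
    "    print(visible_len(chunk), repr(chunk))"
   ],
   "id": "f0e1d2c3b4a50002",
   "outputs": [],
   "execution_count": null
  },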
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 举例1：不指定分隔符分割字符串文本",
   "id": "8911ba15cb969038"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-11-20T01:25:57.785993Z",
     "start_time": "2025-11-20T01:25:57.769924Z"
    }
   },
   "cell_type": "code",
   "source": [
    "import langchain_community.document_loaders\n",
    "import langchain_text_splitters\n",
    "from  langchain_text_splitters.character import CharacterTextSplitter\n",
    "\n",
    "text = \"\"\"\n",
    "LangChain 是一个用于开发由语言模型驱动的应用程序的框架的。它提供了一套工具和抽象，使开发者能够更容易地构建复杂的应用程序。\n",
    "\"\"\"\n",
    "\n",
    "# 定义切割器\n",
    "text_splitter = CharacterTextSplitter(\n",
    "    chunk_size=50,\n",
    "    chunk_overlap=5,\n",
    "    separator=\"\"    # 设置为空字符串时，标识禁用分隔符优先\n",
    ")\n",
    "\n",
    "chunks = text_splitter.split_text(text)\n",
    "print(f\"chunks数量：{len(chunks)}\")\n",
    "for i, chunk in enumerate(chunks):\n",
    "    print(f\"→→→→→→→→ 索引：{i}，长度：{len(chunk)}\\n {chunk}\")"
   ],
   "id": "4cbca7c270dea53d",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "chunks数量：2\n",
      "→→→→→→→→ 索引：0，长度：49\n",
      " LangChain 是一个用于开发由语言模型驱动的应用程序的框架的。它提供了一套工具和抽象，使开发\n",
      "→→→→→→→→ 索引：1，长度：22\n",
      " 象，使开发者能够更容易地构建复杂的应用程序。\n"
     ]
    }
   ],
   "execution_count": 9
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "### 指定分隔符：重叠失效场景\n",
    "如果没有合并的片段，则 `overlap` 失效"
   ],
   "id": "eb3c1327789f9eac"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-11-20T01:42:55.350682Z",
     "start_time": "2025-11-20T01:42:55.336680Z"
    }
   },
   "cell_type": "code",
   "source": [
    "source_text = \"\"\"\n",
    "LangChain 是一个用于开发由语言模型驱动的应用程序的框架的。它提供了一套工具和抽象，使开发者能够更容易地构建复杂的应用程序。\n",
    "\"\"\"\n",
    "\n",
    "text_splitter = CharacterTextSplitter(\n",
    "    chunk_size=50,\n",
    "    chunk_overlap=5,\n",
    "    separator=\"。\"\n",
    ")\n",
    "\n",
    "chunks = text_splitter.split_text(source_text)\n",
    "print(f\"chunks数量：{len(chunks)}\")\n",
    "for i, chunk in enumerate(chunks):\n",
    "    print(f\"→→→→→→→→ 索引：{i}，长度：{len(chunk)}\\n {chunk}\")"
   ],
   "id": "477d444ce816ad4f",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "chunks数量：2\n",
      "→→→→→→→→ 索引：0，长度：33\n",
      " LangChain 是一个用于开发由语言模型驱动的应用程序的框架的\n",
      "→→→→→→→→ 索引：1，长度：32\n",
      " 它提供了一套工具和抽象，使开发者能够更容易地构建复杂的应用程序。\n"
     ]
    }
   ],
   "execution_count": 10
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "### 举例三、指定分隔符：重叠生效场景\n",
    "如果有合并的片段，则 `overlap` 生效"
   ],
   "id": "f2989cd765bf9393"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-11-20T02:01:43.291529Z",
     "start_time": "2025-11-20T02:01:43.280526Z"
    }
   },
   "cell_type": "code",
   "source": [
    "source_text = \"\"\"\n",
    "这是第一段文本。这是第二段内容。最后一段结束。\n",
    "\"\"\"\n",
    "\n",
    "text_splitter = CharacterTextSplitter(\n",
    "    chunk_size=20,\n",
    "    chunk_overlap=8,\n",
    "    separator=\"。\"\n",
    ")\n",
    "\n",
    "chunks = text_splitter.split_text(source_text)\n",
    "print(f\"chunks数量：{len(chunks)}\")\n",
    "for i, chunk in enumerate(chunks):\n",
    "    print(f\"→→→→→→→→ 索引：{i}，长度：{len(chunk)}\\n {chunk}\")\n"
   ],
   "id": "d13417d5fb873aea",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "chunks数量：2\n",
      "→→→→→→→→ 索引：0，长度：15\n",
      " 这是第一段文本。这是第二段内容\n",
      "→→→→→→→→ 索引：1，长度：15\n",
      " 这是第二段内容。最后一段结束。\n"
     ]
    }
   ],
   "execution_count": 11
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## RecursiveCharacterTextSplitter：最常用\n",
    "文档切分器中较常用的是 **RecursiveCharacterTextSplitter (递归字符文本切分器)** ，遇到 **特定字符** 时进行分割。默认情况下，它尝试进行切割的字符包括 `[\"\\n\\n\", \"\\n\", \" \", \"\"]` 。\n",
    "\n",
    "具体为：**根据第一个字符进行切块，但如果任何切块太大，则会继续移动到下一个字符继续切块，以此类推。**\n",
    "\n",
    "此外，还可以考虑添加 `，。` 等分割字符。\n",
    "\n",
    "**特点**：\n",
    "- **保留上下文**：优先在自然语言边界（如段落、句子结尾）处分割， **减少信息碎片化** 。\n",
    "- **智能分段**：通过递归尝试多种分隔符，将文本分割为大小接近chunk_size的片段。\n",
    "- **灵活适配**：适用于多种文本类型（代码、Markdown、普通文本等），是LangChain中 **最通用** 的文本拆分器。\n",
    "\n",
    "此外，还可以指定的参数包括（继承父类）：\n",
    "- `chunk_size`\n",
    "- `chunk_overlap`\n",
    "- `length_function`\n",
    "- `add_start_index`：在拆分文本后，为每个生成的文本块（chunk）记录它在原始文档中的起始字符位置。（在元数据对象中才生效。）"
   ],
   "id": "4278546839ff167a"
  },
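  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "As noted above, Chinese punctuation can be added as separators. A minimal sketch (an illustrative addition, not part of the original lesson): pass a custom `separators` list that includes `。` and `，`, so splits prefer sentence and clause boundaries:",
   "id": "f0e1d2c3b4a50003"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
    "\n",
    "# Default separators plus Chinese sentence/clause punctuation\n",
    "splitter = RecursiveCharacterTextSplitter(\n",
    "    chunk_size=20,\n",
    "    chunk_overlap=0,\n",
    "    separators=[\"\\n\\n\", \"\\n\", \"。\", \"，\", \" \", \"\"]\n",
    ")\n",
    "\n",
    "for chunk in splitter.split_text(\"人工智能是一门科学。它研究如何让机器完成推理、学习等任务，并不断进步。\"):\n",
    "    print(len(chunk), chunk)"
   ],
   "id": "f0e1d2c3b4a50004",
   "outputs": [],
   "execution_count": null
  },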
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 举例1：使用split_text()方法演示：返回字符串列表",
   "id": "2add55299b04f07f"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-11-20T03:02:17.576558Z",
     "start_time": "2025-11-20T03:02:17.570556Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
    "\n",
    "source_text = \"LangChain框架特性\\n\\n多模型集成(GPT/Claude)\\n记忆管理功能\\n链式调用设计。文档分析场景示例：需要处理PDF/Word等格式。\"\n",
    "\n",
    "text_splitter = RecursiveCharacterTextSplitter(\n",
    "    chunk_size=10,\n",
    "    chunk_overlap=0\n",
    ")\n",
    "\n",
    "chunks = text_splitter.split_text(source_text)\n",
    "print(f\"chunks数量：{len(chunks)}\")\n",
    "for i, chunk in enumerate(chunks):\n",
    "    print(f\"→→→→→→→→ 索引：{i}，长度：{len(chunk)}\\n {chunk}\")"
   ],
   "id": "f282a6c1295ce36",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "chunks数量：9\n",
      "→→→→→→→→ 索引：0，长度：10\n",
      " LangChain框\n",
      "→→→→→→→→ 索引：1，长度：3\n",
      " 架特性\n",
      "→→→→→→→→ 索引：2，长度：9\n",
      " 多模型集成(GPT\n",
      "→→→→→→→→ 索引：3，长度：8\n",
      " /Claude)\n",
      "→→→→→→→→ 索引：4，长度：6\n",
      " 记忆管理功能\n",
      "→→→→→→→→ 索引：5，长度：9\n",
      " 链式调用设计。文档\n",
      "→→→→→→→→ 索引：6，长度：10\n",
      " 分析场景示例：需要处\n",
      "→→→→→→→→ 索引：7，长度：10\n",
      " 理PDF/Word等\n",
      "→→→→→→→→ 索引：8，长度：3\n",
      " 格式。\n"
     ]
    }
   ],
   "execution_count": 27
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "### 举例2：使用create_documents()方法演示，传入字符串列表，返回Document对象列表\n",
    "- `add_start_index`：在拆分文本后，为每个生成的文本块（chunk）记录它在原始文档中的起始字符位置。"
   ],
   "id": "cc218a9f308cbcd8"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-11-20T03:02:19.266956Z",
     "start_time": "2025-11-20T03:02:19.253953Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
    "\n",
    "source_text = \"LangChain框架特性\\n\\n多模型集成(GPT/Claude)\\n记忆管理功能\\n链式调用设计。文档分析场景示例：需要处理PDF/Word等格式。\"\n",
    "\n",
    "text_splitter = RecursiveCharacterTextSplitter(\n",
    "    chunk_size=10,\n",
    "    chunk_overlap=0,\n",
    "    add_start_index=True\n",
    ")\n",
    "\n",
    "docs = text_splitter.create_documents([source_text])\n",
    "for doc in docs:\n",
    "    print(f\"→→→→→→→→ 元数据：{doc.metadata}，长度：{len(doc.page_content)}\\n {doc.page_content}\")"
   ],
   "id": "3ff79005df1d6ee5",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "→→→→→→→→ 元数据：{'start_index': 0}，长度：10\n",
      " LangChain框\n",
      "→→→→→→→→ 元数据：{'start_index': 10}，长度：3\n",
      " 架特性\n",
      "→→→→→→→→ 元数据：{'start_index': 15}，长度：9\n",
      " 多模型集成(GPT\n",
      "→→→→→→→→ 元数据：{'start_index': 24}，长度：8\n",
      " /Claude)\n",
      "→→→→→→→→ 元数据：{'start_index': 33}，长度：6\n",
      " 记忆管理功能\n",
      "→→→→→→→→ 元数据：{'start_index': 40}，长度：9\n",
      " 链式调用设计。文档\n",
      "→→→→→→→→ 元数据：{'start_index': 49}，长度：10\n",
      " 分析场景示例：需要处\n",
      "→→→→→→→→ 元数据：{'start_index': 59}，长度：10\n",
      " 理PDF/Word等\n",
      "→→→→→→→→ 元数据：{'start_index': 69}，长度：3\n",
      " 格式。\n"
     ]
    }
   ],
   "execution_count": 28
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 举例3：将本地文件内容加载成字符串，进行拆分",
   "id": "5930e46768c8dc04"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-11-24T01:54:20.995105Z",
     "start_time": "2025-11-24T01:54:20.965099Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
    "\n",
    "## 打开txt文件\n",
    "with open(\"asset/load/08-ai.txt\", \"r\", encoding=\"utf-8\") as f:\n",
    "    source_text = f.read()  # 读取文件内容,string类型\n",
    "\n",
    "text_splitter = RecursiveCharacterTextSplitter(\n",
    "    chunk_size=100,\n",
    "    chunk_overlap=20,\n",
    "    length_function=len\n",
    ")\n",
    "\n",
    "docs = text_splitter.create_documents([source_text])\n",
    "print(len(docs))\n",
    "for doc in docs:\n",
    "    print(f\"→→→→→→→→ metadata：{doc.metadata}，len：{len(doc.page_content)}\\n {doc.page_content}\")"
   ],
   "id": "2ff9f9dd11ae0133",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "10\n",
      "→→→→→→→→ metadata：{}，len：12\n",
      " 人工智能（AI）是什么？\n",
      "→→→→→→→→ metadata：{}，len：15\n",
      " 人工智能（Artificial\n",
      "→→→→→→→→ metadata：{}，len：99\n",
      " Intelligence，简称AI）是指由计算机系统模拟人类智能的技术，使其能够执行通常需要人类认知能力的任务，如学习、推理、决策和语言理解。AI的核心目标是让机器具备感知环境、处理信息并自主行动的\n",
      "→→→→→→→→ metadata：{}，len：23\n",
      " 让机器具备感知环境、处理信息并自主行动的能力。\n",
      "→→→→→→→→ metadata：{}，len：77\n",
      " 1. AI的技术基础\n",
      "AI依赖多种关键技术：\n",
      "\n",
      "机器学习（ML）：通过算法让计算机从数据中学习规律，无需显式编程。例如，推荐系统通过用户历史行为预测偏好。\n",
      "→→→→→→→→ metadata：{}，len：96\n",
      " 深度学习：基于神经网络的机器学习分支，擅长处理图像、语音等复杂数据。AlphaGo击败围棋冠军便是典型案例。\n",
      "\n",
      "自然语言处理（NLP）：使计算机理解、生成人类语言，如ChatGPT的对话能力。\n",
      "→→→→→→→→ metadata：{}，len：83\n",
      " 2. AI的应用场景\n",
      "AI已渗透到日常生活和各行各业：\n",
      "\n",
      "医疗：辅助诊断（如AI分析医学影像）、药物研发加速。\n",
      "\n",
      "交通：自动驾驶汽车通过传感器和AI算法实现安全导航。\n",
      "→→→→→→→→ metadata：{}，len：76\n",
      " 金融：欺诈检测、智能投顾（如风险评估模型）。\n",
      "\n",
      "教育：个性化学习平台根据学生表现调整教学内容。\n",
      "\n",
      "3. AI的挑战与未来\n",
      "尽管前景广阔，AI仍面临问题：\n",
      "→→→→→→→→ metadata：{}，len：93\n",
      " 伦理争议：数据隐私、算法偏见（如招聘AI歧视特定群体）。\n",
      "\n",
      "就业影响：自动化可能取代部分人工岗位，但也会创造新职业。\n",
      "\n",
      "技术瓶颈：通用人工智能（AGI）尚未实现，当前AI仅擅长特定任务。\n",
      "→→→→→→→→ metadata：{}，len：66\n",
      " 未来，AI将与人类协作而非替代：医生借助AI提高诊断效率，教师利用AI定制课程。其发展需平衡技术创新与社会责任，确保技术造福全人类。\n"
     ]
    }
   ],
   "execution_count": 29
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 举例4：使用split_documents()方法演示，利用PDFLoader加载文档，对文档的内容用递归切割器切割",
   "id": "e9ce0ac5e03cec26"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-11-24T02:30:17.014897Z",
     "start_time": "2025-11-24T02:30:16.986851Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_community.document_loaders import PyPDFLoader\n",
    "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
    "\n",
    "loader = PyPDFLoader(\"asset/load/02-load.pdf\")\n",
    "docs = loader.load()\n",
    "print(f\"分块数量：{len(docs)}\")\n",
    "\n",
    "for doc in docs:\n",
    "    print(f\"→→→→→→→→ 元数据：{doc.metadata}，长度：{len(doc.page_content)}\")\n",
    "\n",
    "text_splitter = RecursiveCharacterTextSplitter(\n",
    "    chunk_size=50,\n",
    "    chunk_overlap=0,\n",
    "    length_function=len\n",
    ")\n",
    "\n",
    "chunks = text_splitter.split_documents(docs)\n",
    "print(f\"分块数量：{len(chunks)}\")\n",
    "for chunk in chunks:\n",
    "    print(f\"→→→→→→→→ 元数据：{chunk.metadata}，长度：{len(chunk.page_content)}\\n {chunk.page_content}\")"
   ],
   "id": "a7a5e219fec75319",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "分块数量：1\n",
      "→→→→→→→→ 元数据：{'producer': 'Microsoft® Word 2019', 'creator': 'Microsoft® Word 2019', 'creationdate': '2025-06-20T17:18:19+08:00', 'moddate': '2025-06-20T17:18:19+08:00', 'source': 'asset/load/02-load.pdf', 'total_pages': 1, 'page': 0, 'page_label': '1'}，长度：328\n",
      "分块数量：8\n",
      "→→→→→→→→ 元数据：{'producer': 'Microsoft® Word 2019', 'creator': 'Microsoft® Word 2019', 'creationdate': '2025-06-20T17:18:19+08:00', 'moddate': '2025-06-20T17:18:19+08:00', 'source': 'asset/load/02-load.pdf', 'total_pages': 1, 'page': 0, 'page_label': '1'}，长度：39\n",
      " \"他的车，他的命！ 他忽然想起来，一年，二年，至少有三四年；一滴汗，两滴汗，不\n",
      "→→→→→→→→ 元数据：{'producer': 'Microsoft® Word 2019', 'creator': 'Microsoft® Word 2019', 'creationdate': '2025-06-20T17:18:19+08:00', 'moddate': '2025-06-20T17:18:19+08:00', 'source': 'asset/load/02-load.pdf', 'total_pages': 1, 'page': 0, 'page_label': '1'}，长度：40\n",
      " 知道多少万滴汗，才挣出那辆车。从风里雨里的咬牙，从饭里茶里的自苦，才赚出那辆车。\n",
      "→→→→→→→→ 元数据：{'producer': 'Microsoft® Word 2019', 'creator': 'Microsoft® Word 2019', 'creationdate': '2025-06-20T17:18:19+08:00', 'moddate': '2025-06-20T17:18:19+08:00', 'source': 'asset/load/02-load.pdf', 'total_pages': 1, 'page': 0, 'page_label': '1'}，长度：40\n",
      " 那辆车是他的一切挣扎与困苦的总结果与报酬，像身经百战的武士的一颗徽章。……他老想\n",
      "→→→→→→→→ 元数据：{'producer': 'Microsoft® Word 2019', 'creator': 'Microsoft® Word 2019', 'creationdate': '2025-06-20T17:18:19+08:00', 'moddate': '2025-06-20T17:18:19+08:00', 'source': 'asset/load/02-load.pdf', 'total_pages': 1, 'page': 0, 'page_label': '1'}，长度：32\n",
      " 着远远的一辆车，可以使他自由，独立，像自己的手脚的那么一辆车。\"\n",
      "→→→→→→→→ 元数据：{'producer': 'Microsoft® Word 2019', 'creator': 'Microsoft® Word 2019', 'creationdate': '2025-06-20T17:18:19+08:00', 'moddate': '2025-06-20T17:18:19+08:00', 'source': 'asset/load/02-load.pdf', 'total_pages': 1, 'page': 0, 'page_label': '1'}，长度：39\n",
      " \"他吃，他喝，他嫖，他赌，他懒，他狡猾， 因为他没了心，他的心被人家摘了去。他\n",
      "→→→→→→→→ 元数据：{'producer': 'Microsoft® Word 2019', 'creator': 'Microsoft® Word 2019', 'creationdate': '2025-06-20T17:18:19+08:00', 'moddate': '2025-06-20T17:18:19+08:00', 'source': 'asset/load/02-load.pdf', 'total_pages': 1, 'page': 0, 'page_label': '1'}，长度：40\n",
      " 只剩下那个高大的肉架子，等着溃烂，预备着到乱死岗子去。……体面的、要强的、好梦想\n",
      "→→→→→→→→ 元数据：{'producer': 'Microsoft® Word 2019', 'creator': 'Microsoft® Word 2019', 'creationdate': '2025-06-20T17:18:19+08:00', 'moddate': '2025-06-20T17:18:19+08:00', 'source': 'asset/load/02-load.pdf', 'total_pages': 1, 'page': 0, 'page_label': '1'}，长度：40\n",
      " 的、利己的、个人的、健壮的、伟大的祥子，不知陪着人家送了多少回殡；不知道何时何地\n",
      "→→→→→→→→ 元数据：{'producer': 'Microsoft® Word 2019', 'creator': 'Microsoft® Word 2019', 'creationdate': '2025-06-20T17:18:19+08:00', 'moddate': '2025-06-20T17:18:19+08:00', 'source': 'asset/load/02-load.pdf', 'total_pages': 1, 'page': 0, 'page_label': '1'}，长度：48\n",
      " 会埋起他自己来， 埋起这堕落的、 自私的、 不幸的、 社会病胎里的产儿， 个人主义的末路鬼！\n",
      "\"\n"
     ]
    }
   ],
   "execution_count": 34
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## TokenTextSplitter/CharacterTextSplitter：Split by tokens\n",
    "当我们将文本拆分为块时，除了字符以外，还可以： **按Token的数量分割** （而非字符或单词数），将长文本切分成多个小块。\n",
    "\n",
    "> TokenTextSplitter 使用说明：\n",
    "\n",
    "- **核心依据**：Token数量 + 自然边界。（TokenTextSplitter 严格按照 token 数量进行分割，但同时 **会优先在自然边界（如句尾）处切断**，以尽量保证语义的完整性。）\n",
    "- **优点**：与LLM的Token计数逻辑一致，能尽量保持语义完整\n",
    "- **缺点**：对非英语或特定领域文本，Token化效果可能不佳\n",
    "- **典型场景**：需要精确控制Token数输入LLM的场景"
   ],
   "id": "1d6f46d7dedcb427"
  },
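  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "To verify that chunks really are measured in tokens, the sketch below (an illustrative addition, assuming `tiktoken` is installed) re-encodes each chunk and prints its token count, which should not exceed `chunk_size`:",
   "id": "f0e1d2c3b4a50005"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "import tiktoken\n",
    "from langchain_text_splitters import TokenTextSplitter\n",
    "\n",
    "enc = tiktoken.get_encoding(\"cl100k_base\")\n",
    "splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0, encoding_name=\"cl100k_base\")\n",
    "\n",
    "demo_text = \"LangChain can split text by token count instead of character count.\"\n",
    "for chunk in splitter.split_text(demo_text):\n",
    "    # each chunk is decoded back from at most chunk_size tokens\n",
    "    print(len(enc.encode(chunk)), repr(chunk))"
   ],
   "id": "f0e1d2c3b4a50006",
   "outputs": [],
   "execution_count": null
  },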
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 举例1：使用TokenTextSplitter",
   "id": "a4f7dc76bcdbf3a"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-11-24T02:32:23.192806Z",
     "start_time": "2025-11-24T02:31:48.850755Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_text_splitters import TokenTextSplitter\n",
    "\n",
    "source_text = \"\"\"人工智能是一个强大的开发框架。它支持多种语言模型和工具链。人工智能是指通过计算机程序模拟人类智能的一门科学。自20世纪50年代诞生以来，人工智能经历了多次起伏。\"\"\"\n",
    "\n",
    "text_splitter = TokenTextSplitter(\n",
    "    chunk_size=33,\n",
    "    chunk_overlap=0,\n",
    "    encoding_name=\"cl100k_base\" # 使用 OpenAI 的编码器,将文本转换为 token 序列\n",
    ")\n",
    "\n",
    "chunks = text_splitter.split_text(source_text)\n",
    "print(f\"分块数量：{len(chunks)}\")\n",
    "for chunk in chunks:\n",
    "    print(f\"→→→→→→→→ 长度：{len(chunk)}\\n {chunk}\")"
   ],
   "id": "1ef395482263cde7",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "分块数量：3\n",
      "→→→→→→→→ 长度：29\n",
      " 人工智能是一个强大的开发框架。它支持多种语言模型和工具链。\n",
      "→→→→→→→→ 长度：32\n",
      " 人工智能是指通过计算机程序模拟人类智能的一门科学。自20世纪50\n",
      "→→→→→→→→ 长度：19\n",
      " 年代诞生以来，人工智能经历了多次起伏。\n"
     ]
    }
   ],
   "execution_count": 35
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 举例2：使用CharacterTextSplitter",
   "id": "2deac023c417c4cd"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-11-24T02:41:53.175513Z",
     "start_time": "2025-11-24T02:41:53.165511Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_text_splitters import CharacterTextSplitter\n",
    "import tiktoken # 用于计算Token数量\n",
    "\n",
    "# 定义文本\n",
    "text = \"人工智能是一个强大的开发框架。它支持多种语言模型和工具链。今天天气很好，想出去踏青。但是又比较懒不想出去，怎么办\"\n",
    "\n",
    "# 定义通过Token切割器\n",
    "text_splitter = CharacterTextSplitter.from_tiktoken_encoder(\n",
    "    encoding_name=\"cl100k_base\", # 使用 OpenAI 的编码器\n",
    "    chunk_size=18,\n",
    "    chunk_overlap=0,\n",
    "    separator=\"。\", # 指定中文句号为分隔符\n",
    "    keep_separator=False, # chunk中是否保留分隔符\n",
    ")\n",
    "\n",
    "# 开始切割\n",
    "texts = text_splitter.split_text(text)\n",
    "print(f\"分割后的块数: {len(texts)}\")\n",
    "# 初始化tiktoken编码器（用于Token计数）\n",
    "encoder = tiktoken.get_encoding(\"cl100k_base\") # 确保与CharacterTextSplitter的encoding_name一致\n",
    "for text in texts:\n",
    "    tokens = encoder.encode(text) # 现在encoder已定义\n",
    "    print(f\"→→→→→→→→ 长度：{len(text)}\\n {text}\")"
   ],
   "id": "599baeb6a86d04ef",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "分割后的块数: 4\n",
      "→→→→→→→→ 长度：14\n",
      " 人工智能是一个强大的开发框架\n",
      "→→→→→→→→ 长度：13\n",
      " 它支持多种语言模型和工具链\n",
      "→→→→→→→→ 长度：12\n",
      " 今天天气很好，想出去踏青\n",
      "→→→→→→→→ 长度：14\n",
      " 但是又比较懒不想出去，怎么办\n"
     ]
    }
   ],
   "execution_count": 37
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## SemanticChunker：语义分块\n",
    "SemanticChunking（语义分块）是 LangChain 中一种更高级的文本分割方法，它超越了传统的基于字符或固定大小的分块方式，而是根据 文本的语义结构 进行智能分块，使每个分块保持 语义完整性 ，从而提高检索增强生成(RAG)等应用的效果。\n",
    "### 语义分割 vs 传统分割\n",
    "\n",
    "| 特性 | 语义分割 （SemanticChunker） | 传统字符分割（RecursiveCharacter）|\n",
    "|--|--|--|\n",
    "| 分割依据 | 嵌入向量相似度 | 固定字符/换行符 |\n",
    "| 语义完整性 | ✅ 保持主题连贯 | ❌ 可能切断句子逻辑 |\n",
    "| 计算成本 | ❌ 高（需嵌入模型）| ✅ 低 |\n",
    "| 适用场景 | 需要高语义一致性的任务 | 简单文本预处理 |"
   ],
   "id": "100040fe58ecda5"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 举例",
   "id": "64c397927697258f"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-11-24T02:53:22.749898Z",
     "start_time": "2025-11-24T02:53:19.417484Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_experimental.text_splitter import SemanticChunker\n",
    "from langchain_openai.embeddings import OpenAIEmbeddings\n",
    "import os\n",
    "import dotenv\n",
    "\n",
    "dotenv.load_dotenv()\n",
    "\n",
    "# 加载文本\n",
    "with open(\"asset/load/09-ai1.txt\", encoding=\"utf-8\") as f:\n",
    "\tstate_of_the_union = f.read() #返回字符串\n",
    "\n",
    "# 获取嵌入模型\n",
    "os.environ['OPENAI_API_KEY'] = os.getenv(\"OPENAI_API_KEY\")\n",
    "os.environ['OPENAI_BASE_URL'] = os.getenv(\"OPENAI_BASE_URL\")\n",
    "embed_model = OpenAIEmbeddings(\n",
    "\tmodel=\"text-embedding-3-large\"\n",
    ")\n",
    "\n",
    "# 获取切割器\n",
    "text_splitter = SemanticChunker(\n",
    "\tembeddings=embed_model,\n",
    "\tbreakpoint_threshold_type=\"percentile\",#断点阈值类型：字面值[\"百分位数\", \"标准差\", \"四分位距\", \"梯度\"] 选其一\n",
    "\tbreakpoint_threshold_amount=65.0 #断点阈值数量 (极低阈值 → 高分割敏感度)\n",
    ")\n",
    "\n",
    "# 切分文档\n",
    "docs = text_splitter.create_documents(texts = [state_of_the_union])\n",
    "print(len(docs))\n",
    "for doc in docs:\n",
    "\tprint(f\"🔍 文档 {doc}:\")"
   ],
   "id": "d5fe81d065dd5227",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "4\n",
      "🔍 文档 page_content='人工智能综述：发展、应用与未来展望\n",
      "\n",
      "摘要\n",
      "人工智能（Artificial Intelligence，AI）作为计算机科学的一个重要分支，近年来取得了突飞猛进的发展。本文综述了人工智能的发展历程、核心技术、应用领域以及未来发展趋势。通过对人工智能的定义、历史背景、主要技术（如机器学习、深度学习、自然语言处理等）的详细介绍，探讨了人工智能在医疗、金融、教育、交通等领域的应用，并分析了人工智能发展过程中面临的挑战与机遇。最后，本文对人工智能的未来发展进行了展望，提出了可能的突破方向。\n",
      "\n",
      "1. 引言\n",
      "人工智能是指通过计算机程序模拟人类智能的一门科学。自20世纪50年代诞生以来，人工智能经历了多次起伏，近年来随着计算能力的提升和大数据的普及，人工智能技术取得了显著的进展。人工智能的应用已经渗透到日常生活的方方面面，从智能手机的语音助手到自动驾驶汽车，从医疗诊断到金融分析，人工智能正在改变着人类社会的运行方式。\n",
      "\n",
      "2.':\n",
      "🔍 文档 page_content='人工智能的发展历程\n",
      "2.1 早期发展\n",
      "人工智能的概念最早可以追溯到20世纪50年代。1956年，达特茅斯会议（Dartmouth Conference）被认为是人工智能研究的正式开端。在随后的几十年里，人工智能研究经历了多次高潮与低谷。早期的研究主要集中在符号逻辑和专家系统上，但由于计算能力的限制和算法的不足，进展缓慢。\n",
      "2.2 机器学习的兴起\n",
      "20世纪90年代，随着统计学习方法的引入，机器学习逐渐成为人工智能研究的主流。支持向量机（SVM）、决策树、随机森林等算法在分类和回归任务中取得了良好的效果。这一时期，机器学习开始应用于数据挖掘、模式识别等领域。\n",
      "2.3 深度学习的突破\n",
      "2012年，深度学习在图像识别领域取得了突破性进展，标志着人工智能进入了一个新的阶段。深度学习通过多层神经网络模拟人脑的工作方式，能够自动提取特征并进行复杂的模式识别。卷积神经网络（CNN）、循环神经网络（RNN）和长短期记忆网络（LSTM）等深度学习模型在图像处理、自然语言处理、语音识别等领域取得了显著成果。\n",
      "\n",
      "3. 人工智能的核心技术\n",
      "3.1 机器学习\n",
      "机器学习是人工智能的核心技术之一，通过算法使计算机从数据中学习并做出决策。常见的机器学习算法包括监督学习、无监督学习和强化学习。监督学习通过标记数据进行训练，无监督学习则从未标记数据中寻找模式，强化学习则通过与环境交互来优化决策。\n",
      "3.2 深度学习\n",
      "深度学习是机器学习的一个子领域，通过多层神经网络进行特征提取和模式识别。深度学习在图像识别、自然语言处理、语音识别等领域取得了显著成果。常见的深度学习模型包括卷积神经网络（CNN）、循环神经网络（RNN）和长短期记忆网络（LSTM）。\n",
      "3.3 自然语言处理\n",
      "自然语言处理（NLP）是人工智能的一个重要分支，致力于使计算机能够理解和生成人类语言。NLP技术广泛应用于机器翻译、情感分析、文本分类等领域。近年来，基于深度学习的NLP模型（如BERT、GPT）在语言理解任务中取得了突破性进展。\n",
      "3.4 计算机视觉\n",
      "计算机视觉是人工智能的另一个重要分支，致力于使计算机能够理解和处理图像和视频。计算机视觉技术广泛应用于图像识别、目标检测、人脸识别等领域。深度学习模型（如CNN）在计算机视觉任务中取得了显著成果。\n",
      "\n",
      "4. 人工智能的应用领域\n",
      "4.1 医疗健康\n",
      "人工智能在医疗健康领域的应用包括疾病诊断、药物研发、个性化医疗等。通过分析医学影像和患者数据，人工智能可以帮助医生更准确地诊断疾病，提高治疗效果。\n",
      "4.2 金融\n",
      "人工智能在金融领域的应用包括风险评估、欺诈检测、算法交易等。通过分析市场数据和交易记录，人工智能可以帮助金融机构做出更明智的决策，提高运营效率。\n",
      "4.3 教育\n",
      "人工智能在教育领域的应用包括个性化学习、智能辅导、自动评分等。通过分析学生的学习数据，人工智能可以为学生提供个性化的学习建议，提高学习效果。\n",
      "4.4 交通\n",
      "人工智能在交通领域的应用包括自动驾驶、交通管理、智能导航等。通过分析交通数据和路况信息，人工智能可以帮助优化交通流量，提高交通安全。\n",
      "\n",
      "5. 人工智能的挑战与机遇\n",
      "5.1 挑战\n",
      "人工智能发展过程中面临的主要挑战包括数据隐私、算法偏见、安全性问题等。数据隐私问题涉及到个人数据的收集和使用，算法偏见问题则涉及到算法的公平性和透明度，安全性问题则涉及到人工智能系统的可靠性和稳定性。\n",
      "5.2 机遇\n",
      "尽管面临挑战，人工智能的发展也带来了巨大的机遇。人工智能技术的进步将推动各行各业的创新，提高生产效率，改善生活质量。未来，人工智能有望在更多领域取得突破，为人类社会带来更多的便利和福祉。\n",
      "\n",
      "6.':\n",
      "🔍 文档 page_content='未来展望\n",
      "6.1 技术突破\n",
      "未来，人工智能技术有望在以下几个方面取得突破：一是算法的优化和创新，提高模型的效率和准确性；二是计算能力的提升，支持更复杂的模型和更大规模的数据处理；三是跨学科研究的深入，推动人工智能与其他领域的融合。\n",
      "6.2 应用拓展\n",
      "随着技术的进步，人工智能的应用领域将进一步拓展。未来，人工智能有望在更多领域发挥重要作用，如环境保护、能源管理、智能制造等。人工智能将成为推动社会进步的重要力量。\n",
      "\n",
      "7.':\n",
      "🔍 文档 page_content='结论\n",
      "人工智能作为一门快速发展的科学，正在改变着人类社会的运行方式。通过不断的技术创新和应用拓展，人工智能将为人类社会带来更多的便利和福祉。然而，人工智能的发展也面临着诸多挑战，需要社会各界共同努力，推动人工智能的健康发展。':\n"
     ]
    }
   ],
   "execution_count": 38
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## 其它拆分器\n",
    "### 类型1：HTMLHeaderTextSplitter：Split by HTML header\n",
    "`HTMLHeaderTextSplitter` 是一种专门用于处理HTML文档的文本分割方法，它根据HTML的 标题标签（如`<h1>`、`<h2>` 等） 将文档划分为逻辑分块，同时保留标题的层级结构信息。"
   ],
   "id": "a630b841f217ae92"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-11-24T03:06:22.195322Z",
     "start_time": "2025-11-24T03:06:22.181317Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain.text_splitter import HTMLHeaderTextSplitter\n",
    "# 定义HTML文件\n",
    "html_string = \"\"\"\n",
    "<!DOCTYPE html>\n",
    "<html>\n",
    "<body>\n",
    "\t<div>\n",
    "\t<h1>欢迎来到尚硅谷！</h1>\n",
    "\t<p>尚硅谷是专门培训IT技术方向</p>\n",
    "\t<div>\n",
    "        <h2>尚硅谷老师简介</h2>\n",
    "        <p>尚硅谷老师拥有多年教学经验，都是从一线互联网下来</p>\n",
    "        <h3>尚硅谷北京校区</h3>\n",
    "        <p>北京校区位于宏福科技园区</p>\n",
    "\t</div>\n",
    "\t</div>\n",
    "</body>\n",
    "</html>\n",
    "\"\"\"\n",
    "# 用于指定要根据哪些HTML标签来分割文本\n",
    "headers_to_split_on = [\n",
    "    (\"h1\", \"标题1\"),\n",
    "    (\"h2\", \"标题2\"),\n",
    "    (\"h3\", \"标题3\"),\n",
    "]\n",
    "# 定义HTMLHeaderTextSplitter分割器\n",
    "html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)\n",
    "# 分割器分割\n",
    "html_header_splits = html_splitter.split_text(html_string)\n",
    "html_header_splits"
   ],
   "id": "e1dc4378ebe6eb3d",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Document(metadata={'标题1': '欢迎来到尚硅谷！'}, page_content='欢迎来到尚硅谷！'),\n",
       " Document(metadata={'标题1': '欢迎来到尚硅谷！'}, page_content='尚硅谷是专门培训IT技术方向'),\n",
       " Document(metadata={'标题1': '欢迎来到尚硅谷！', '标题2': '尚硅谷老师简介'}, page_content='尚硅谷老师简介'),\n",
       " Document(metadata={'标题1': '欢迎来到尚硅谷！', '标题2': '尚硅谷老师简介'}, page_content='尚硅谷老师拥有多年教学经验，都是从一线互联网下来'),\n",
       " Document(metadata={'标题1': '欢迎来到尚硅谷！', '标题2': '尚硅谷老师简介', '标题3': '尚硅谷北京校区'}, page_content='尚硅谷北京校区'),\n",
       " Document(metadata={'标题1': '欢迎来到尚硅谷！', '标题2': '尚硅谷老师简介', '标题3': '尚硅谷北京校区'}, page_content='北京校区位于宏福科技园区')]"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 43
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "### 类型2：CodeTextSplitter：Split code\n",
    "CodeTextSplitter是一个 **专为代码文件** 设计的文本分割器（Text Splitter），支持代码的语言包括 `['cpp','go', 'java', 'js', 'php', 'proto', 'python', 'rst', 'ruby', 'rust', 'scala', 'swift', 'markdown', 'latex', 'html','sol']`。它能够根据编程语言的语法结构（如函数、类、代码块等）智能地拆分代码，保持代码逻辑的完整性。\n",
    "\n",
    "与递归文本分割器（如RecursiveCharacterTextSplitter）不同，CodeTextSplitter 针对代码的特性进行了优化， **避免在函数或类的中间截断** 。\n",
    "\n",
    "```shell\n",
    "pip install langchain-text-splitters\n",
    "```"
   ],
   "id": "a2305e4be8c7d8bd"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "#### 举例1：支持的编程语言",
   "id": "486954e1af608475"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-11-24T03:16:14.829816Z",
     "start_time": "2025-11-24T03:16:14.823814Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain.text_splitter import Language\n",
    "\n",
    "# 支持分割语言类型\n",
    "# Full list of supported languages\n",
    "langs = [e.value for e in Language]\n",
    "print(langs)"
   ],
   "id": "ff291b3f774c4286",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['cpp', 'go', 'java', 'kotlin', 'js', 'ts', 'php', 'proto', 'python', 'rst', 'ruby', 'rust', 'scala', 'swift', 'markdown', 'latex', 'html', 'sol', 'csharp', 'cobol', 'c', 'lua', 'perl', 'haskell', 'elixir', 'powershell']\n"
     ]
    }
   ],
   "execution_count": 44
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "举例2:",
   "id": "ae88c0f519cdbc7c"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-11-24T03:16:33.815848Z",
     "start_time": "2025-11-24T03:16:33.805846Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain.text_splitter import (\n",
    "\tLanguage,\n",
    "\tRecursiveCharacterTextSplitter,\n",
    ")\n",
    "from pprint import pprint\n",
    "\n",
    "# 定义要分割的python代码片段\n",
    "PYTHON_CODE = \"\"\"\n",
    "def hello_world():\n",
    "\tprint(\"Hello, World!\")\n",
    "def hello_world1():\n",
    "\tprint(\"Hello, World1!\")\n",
    "\"\"\"\n",
    "# 定义递归字符切分器\n",
    "python_splitter = RecursiveCharacterTextSplitter.from_language(\n",
    "\tlanguage=Language.PYTHON,\n",
    "\tchunk_size=50,\n",
    "\tchunk_overlap=0\n",
    ")\n",
    "# 文档切分\n",
    "python_docs = python_splitter.create_documents(texts=[PYTHON_CODE])\n",
    "\n",
    "pprint(python_docs)"
   ],
   "id": "9e8c2421d006f8ee",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Document(metadata={}, page_content='def hello_world():\\n\\tprint(\"Hello, World!\")'),\n",
      " Document(metadata={}, page_content='def hello_world1():\\n\\tprint(\"Hello, World1!\")')]\n"
     ]
    }
   ],
   "execution_count": 45
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "### 类型3：MarkdownTextSplitter：md数据类型\n",
    "因为Markdown格式有特定的语法，一般整体内容由 `h1、h2、h3` 等多级标题组织，所以 `MarkdownHeaderTextSplitter` 的切分策略就是根据 标题来分割文本内容 。"
   ],
   "id": "d588b440d52180d9"
  },
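  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "Before the size-based demo, a minimal sketch of the header-based `MarkdownHeaderTextSplitter` mentioned above (the sample text and metadata key names are illustrative assumptions): it splits at the configured heading levels and records each heading in the chunk metadata:",
   "id": "f0e1d2c3b4a50007"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from langchain_text_splitters import MarkdownHeaderTextSplitter\n",
    "\n",
    "md = \"# Title\\nIntro paragraph.\\n## Section A\\n- item 1\\n- item 2\\n\"\n",
    "\n",
    "header_splitter = MarkdownHeaderTextSplitter(\n",
    "    headers_to_split_on=[(\"#\", \"h1\"), (\"##\", \"h2\")]\n",
    ")\n",
    "\n",
    "for doc in header_splitter.split_text(md):  # returns Documents with header metadata\n",
    "    print(doc.metadata, doc.page_content)"
   ],
   "id": "f0e1d2c3b4a50008",
   "outputs": [],
   "execution_count": null
  },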
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-11-24T03:25:31.918345Z",
     "start_time": "2025-11-24T03:25:31.907341Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain.text_splitter import MarkdownTextSplitter\n",
    "markdown_text = \"\"\"\n",
    "# 一级标题\\n\n",
    "这是一级标题下的内容\\n\\n\n",
    "## 二级标题\\n\n",
    "- 二级下列表项1\\n\n",
    "- 二级下列表项2\\n\n",
    "\"\"\"\n",
    "\n",
    "# 关键步骤：直接修改实例属性\n",
    "splitter = MarkdownTextSplitter(chunk_size=30, chunk_overlap=0)\n",
    "splitter._is_separator_regex = True # 强制将分隔符视为正则表达式\n",
    "\n",
    "# 执行分割\n",
    "docs = splitter.create_documents(texts = [markdown_text])\n",
    "\n",
    "for i, doc in enumerate(docs):\n",
    "    print(f\"\\n🔍 分块 {i + 1}:\")\n",
    "    print(doc.page_content)"
   ],
   "id": "f49da46a43cc77fb",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "🔍 分块 1:\n",
      "# 一级标题\n",
      "\n",
      "这是一级标题下的内容\n",
      "\n",
      "🔍 分块 2:\n",
      "## 二级标题\n",
      "\n",
      "- 二级下列表项1\n",
      "\n",
      "- 二级下列表项2\n"
     ]
    }
   ],
   "execution_count": 47
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
