{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "8c274a4e21b305a4",
   "metadata": {},
   "source": [
     "# 2. Word Segmentation\n",
     "\n",
     "## 2.1 Tokenization with jieba\n",
     "\n",
     "### 2.1.1 The three segmentation modes\n",
     "\n",
     "- Accurate mode (the default): tries to cut the sentence into the most precise segmentation; well suited for text analysis\n",
     "- Full mode: scans out every sequence that could be a word; very fast, but cannot resolve ambiguity\n",
     "- Search engine mode: based on accurate mode, further splits long words to improve recall; suitable for search-engine indexing\n",
     "\n",
     "Accurate mode is analogous to punctuating classical Chinese, while full mode and search engine mode are more like pulling words out of context. In practice, accurate mode is usually used directly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "1b4b0e804b29c6ae",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-27T14:40:07.055332Z",
     "start_time": "2025-09-27T14:40:07.052166Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "郭靖/和/哀牢山/三十六/剑/。\n",
      "郭/靖/和/哀牢山/三十/三十六/十六/剑/。\n",
      "郭靖/和/哀牢山/三十/十六/三十六/剑/。\n"
     ]
    }
   ],
   "source": [
     "import jieba\n",
     "\n",
     "tmpstr = \"郭靖和哀牢山三十六剑。\"\n",
     "\n",
     "# Accurate mode (the default)\n",
     "res = jieba.cut(tmpstr)\n",
     "print('/'.join(res))\n",
     "\n",
     "# Full mode\n",
     "res = jieba.cut(tmpstr, cut_all=True)\n",
     "print('/'.join(res))\n",
     "\n",
     "# Search engine mode\n",
     "res = jieba.cut_for_search(tmpstr)\n",
     "print('/'.join(res))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f41344b70e48d791",
   "metadata": {},
   "source": [
     "### 2.1.2 Modifying the dictionary\n",
     "\n",
     "#### Adding and removing words dynamically\n",
     "\n",
     "The in-memory dictionary can be updated at runtime based on the segmentation results\n",
     "\n",
     "``` python\n",
     "# Add a new word\n",
     "add_word(\n",
     "    word, # the word to add\n",
     "    freq=None, # word frequency\n",
     "    tag=None, # part-of-speech tag\n",
     ")\n",
     "\n",
     "# Remove a word\n",
     "del_word(word)\n",
     "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "6ad9244ce021c791",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-27T06:48:17.902021Z",
     "start_time": "2025-09-27T06:48:17.898777Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "郭靖/和/哀牢山三十六剑/。\n",
      "郭靖/和/哀牢山/三十六/剑/。\n"
     ]
    }
   ],
   "source": [
     "# Add a new word\n",
     "jieba.add_word('哀牢山三十六剑')\n",
     "res = jieba.cut(tmpstr)\n",
     "print('/'.join(res))\n",
     "\n",
     "# Remove the word again\n",
     "jieba.del_word('哀牢山三十六剑')\n",
     "print('/'.join(jieba.cut(tmpstr)))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1c69af493b5844ed",
   "metadata": {},
   "source": [
     "#### Using a custom dictionary\n",
     "\n",
     "Adding new words one at a time is tedious; a custom dictionary file can be loaded directly instead\n",
     "\n",
     "``` python\n",
     "load_userdict(\n",
     "    file_name # a file-like object, or the path to the custom dictionary\n",
     ")\n",
     "```\n",
     "\n",
     "Basic dictionary format\n",
     "\n",
     "- One word per line: word, frequency (optional), POS tag (optional), separated by spaces\n",
     "- The dictionary file must be UTF-8 encoded\n"
   ]
  },
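  {
   "cell_type": "markdown",
   "id": "3f2a1b4c5d6e7f80",
   "metadata": {},
   "source": [
    "As an illustration of the format above, a minimal `words.txt` might look like the following (frequency and POS columns are optional; the actual file used below is not shown in this notebook, so this entry is only an assumed sketch):\n",
    "\n",
    "```\n",
    "哀牢山三十六剑 5 n\n",
    "```"
   ]
  },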
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "c8b9f0564b274799",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-27T06:54:27.635402Z",
     "start_time": "2025-09-27T06:54:27.607615Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "郭靖/和/哀牢山三十六剑/。\n"
     ]
    }
   ],
   "source": [
    "jieba.load_userdict('words.txt')\n",
    "print('/'.join(jieba.cut(tmpstr)))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "19ee63ec63a2f05c",
   "metadata": {},
   "source": [
    "### 2.1.3 去除停用词\n",
    "\n",
    "#### 常见的停用词种类\n",
    "\n",
    "- 超高频的常用词：基本不携带有效信息/歧义太多无分析价值 的、地、得\n",
    "- 虚词：如介词，连词等 只、条、件、当、从、同\n",
    "- 专业领域的高频词：基本不携带有效信息\n",
    "- 视情况而定的停用词 呵呵、emoj表情\n",
    "\n",
    "#### 分词后去除停用词\n",
    "\n",
    "基本步骤\n",
    "\n",
    "- 读入停用词表文件\n",
    "- 正常分词\n",
    "- 在分词结果中去除停用词 遍历分词结果，排除停用词\n",
    "\n",
    "该方法存在的问题：停用词必须要被分词过程正确拆分出来才行"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "207d69a8c4e67d43",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-27T07:05:26.057491Z",
     "start_time": "2025-09-27T07:05:26.052921Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['郭靖', '哀牢山三十六剑']\n"
     ]
    }
   ],
   "source": [
     "# Filter out “和” and “。”\n",
     "newlist = [w for w in jieba.cut(tmpstr) if w not in '和。']\n",
     "print(newlist)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "a3397aa5280c7b59",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-27T07:21:02.528205Z",
     "start_time": "2025-09-27T07:21:02.520701Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['郭靖', '哀牢山三十六剑']\n",
      "['郭靖', '哀牢山三十六剑']\n"
     ]
    }
   ],
   "source": [
     "# Using pandas (pandas is not very efficient at this kind of text processing)\n",
     "import pandas as pd\n",
     "\n",
     "# sep='aaa' never matches, so each whole line is read into column 'w'\n",
     "df = pd.read_csv('stop-words.txt',\n",
     "                 names=['w'], sep='aaa',\n",
     "                 encoding='utf-8',\n",
     "                 engine='python'\n",
     "                 )\n",
     "newlist = [w for w in jieba.cut(tmpstr) if w not in list(df['w'])]\n",
     "print(newlist)\n",
     "\n",
     "# Plain open() is faster than pandas for a simple word list\n",
     "with open('stop-words.txt', encoding='utf-8') as f:\n",
     "    stop_words = [line.rstrip('\\n') for line in f]\n",
     "newlist = [w for w in jieba.cut(tmpstr) if w not in stop_words]\n",
     "print(newlist)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3cb838a62c4d3665",
   "metadata": {},
   "source": [
     "#### Removing stop words with extract_tags\n",
     "\n",
     "Characteristics of this method:\n",
     "\n",
     "- Extracts keyword features using the TF-IDF algorithm, removing stop words before the extraction\n",
     "- A stop-word dictionary can be specified manually:\n",
     "\n",
     "`jieba.analyse.set_stop_words()`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "e5a5956f7deb8099",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-27T07:26:11.117585Z",
     "start_time": "2025-09-27T07:26:11.112744Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['郭靖', '和', '哀牢山三十六剑', '。']\n",
      "['郭靖', '哀牢山三十六剑']\n"
     ]
    }
   ],
   "source": [
     "# Use a prepared stop-word list\n",
     "import jieba.analyse as ana\n",
     "ana.set_stop_words('stop-words.txt')\n",
     "# The loaded stop words have no effect on the plain cut() result\n",
     "print(jieba.lcut(tmpstr))\n",
     "# They only take effect in the extract_tags function\n",
     "print(ana.extract_tags(tmpstr)) # extract keywords via TF-IDF, removing stop words at the same time"
   ]
  },
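  {
   "cell_type": "markdown",
   "id": "9e8d7c6b5a403f21",
   "metadata": {},
   "source": [
    "For reference, `extract_tags` also takes parameters that control the extraction (signature sketched from the standard jieba.analyse API):\n",
    "\n",
    "``` python\n",
    "jieba.analyse.extract_tags(\n",
    "    sentence, # the text to analyze\n",
    "    topK=20, # number of top-weighted keywords to return\n",
    "    withWeight=False, # if True, also return each keyword's TF-IDF weight\n",
    "    allowPOS=(), # restrict results to the given POS tags, e.g. ('n', 'nr')\n",
    ")\n",
    "```"
   ]
  },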
  {
   "cell_type": "markdown",
   "id": "ea6206f278a4d8f3",
   "metadata": {},
   "source": [
     "### 2.1.4 POS tagging\n",
     "\n",
     "import jieba.posseg\n",
     "\n",
     "posseg.cut(): returns the segmentation result with POS tags attached\n",
     "\n",
     "The tag set is compatible with the ICTCLAS scheme\n",
     "\n",
     "Downstream steps can then filter by POS, e.g. extracting only nouns or verbs\n",
     "\n",
     "| Tag | Category | Meaning |\n",
     "|-----|----------|---------|\n",
     "| Ag | adjectival morpheme | Adjective-like morpheme. The adjective code is a; A is placed before the morpheme code g. |\n",
     "| a | adjective | First letter of the English word adjective. |\n",
     "| ad | adverbial adjective | An adjective used directly as an adverbial; combines the adjective code a with the adverb code d. |\n",
     "| Dg | adverbial morpheme | Adverb-like morpheme. The adverb code is d; D is placed before the morpheme code g. |\n",
     "| d | adverb | Second letter of adverb, since its first letter is already used for adjectives. |\n",
     "| m | numeral | Third letter of numeral, since n and u are already taken. |\n",
     "| Ng | nominal morpheme | Noun-like morpheme. The noun code is n; N is placed before the morpheme code g. |\n",
     "| n | noun | First letter of the English word noun. |\n",
     "| nr | person name | Combines the noun code n with r from the pinyin ren (人, person). |\n",
     "| Vg | verbal morpheme | Verb-like morpheme. The verb code is v; V is placed before the morpheme code g. |\n",
     "| v | verb | First letter of the English word verb. |\n",
     "| vn | nominal verb | A verb that also functions as a noun; combines the verb and noun codes. |"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "8abe6005e00e1db7",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-27T14:41:14.555170Z",
     "start_time": "2025-09-27T14:41:14.548713Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<generator object cut at 0x00000145010E1E40>\n",
      "郭靖 nr\n",
      "和 c\n",
      "哀牢山 ns\n",
      "三十六 m\n",
      "剑 n\n",
      "。 x\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[pair('郭靖', 'nr'),\n",
       " pair('和', 'c'),\n",
       " pair('哀牢山', 'ns'),\n",
       " pair('三十六', 'm'),\n",
       " pair('剑', 'n'),\n",
       " pair('。', 'x')]"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "import jieba.posseg as psg\n",
     "\n",
     "tmpres = psg.cut(tmpstr) # segmentation result with POS tags attached\n",
     "print(tmpres)\n",
     "\n",
     "for item in tmpres:\n",
     "    print(item.word, item.flag)\n",
     "\n",
     "print(psg.lcut(tmpstr)) # output directly as a list of pair objects"
   ]
  },
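  {
   "cell_type": "markdown",
   "id": "7a6b5c4d3e2f1a09",
   "metadata": {},
   "source": [
    "The POS-based filtering mentioned above can be sketched as follows (a minimal example, assuming `tmpstr` is defined as in the earlier cells):\n",
    "\n",
    "``` python\n",
    "import jieba.posseg as psg\n",
    "\n",
    "# Keep only tokens whose tag starts with n (nouns) or v (verbs)\n",
    "kept = [w.word for w in psg.cut(tmpstr) if w.flag.startswith(('n', 'v'))]\n",
    "print(kept)\n",
    "```"
   ]
  },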
  {
   "cell_type": "markdown",
   "id": "b4d3c32f2cfd82eb",
   "metadata": {},
   "source": [
     "## 2.2 Tokenizing with NLTK\n",
     "\n",
     "NLTK only recognizes whitespace as the token delimiter, so it cannot be applied to Chinese text directly.\n",
     "\n",
     "The usual approach is to segment with jieba first, join the tokens into a space-separated string, and then hand that text to the NLTK framework."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "2a5e4e88dfc691a4",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-30T09:37:31.094707Z",
     "start_time": "2025-09-30T09:37:29.743246Z"
    }
   },
   "outputs": [],
   "source": [
     "import jieba\n",
     "import nltk\n",
     "\n",
     "rawtext = '周伯通笑道：“你懂了吗？...”'\n",
     "txt = ' '.join(jieba.cut(rawtext)) # join jieba tokens with spaces for NLTK\n",
     "print(txt)\n",
     "toke = nltk.word_tokenize(txt)\n",
     "print(toke)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [conda env:base] *",
   "language": "python",
   "name": "conda-base-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
