{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "2aa4e8162785b6c3",
   "metadata": {},
   "source": [
    "## 什么是词元\n",
    "词元（token）是自然语言处理中的一个基本单位，通常包括单词、标点符号、标点符号、空格等。\n",
    "\n",
    "英文中可以是一个单词、一个字母、一个标点符号、一个字词（subword，unhappy -> un和happy）\n",
    "\n",
    "中文中可以是一个字、一个词语，一个成语"
   ]
  },
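  {
   "cell_type": "markdown",
   "id": "tokgran001note",
   "metadata": {},
   "source": [
    "As a quick illustration (a minimal sketch using only the Python standard library), the same idea at two granularities:"
   ]
  },
  {
   "cell_type": "code",
   "id": "tokgran001demo",
   "metadata": {},
   "source": [
    "# Word-level tokens for English: split on whitespace\n",
    "english = \"I am unhappy\"\n",
    "print(english.split())  # ['I', 'am', 'unhappy']\n",
    "\n",
    "# Character-level tokens: list() splits a string into single characters,\n",
    "# which is how the bert-base-chinese tokenizer treats Chinese text\n",
    "chinese = \"我爱自然语言处理\"\n",
    "print(list(chinese))  # ['我', '爱', '自', '然', '语', '言', '处', '理']"
   ],
   "outputs": [],
   "execution_count": null
  },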
  {
   "cell_type": "markdown",
   "id": "eca5b0dfc447367b",
   "metadata": {},
   "source": [
    "## 常见的分词方法"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b41a98cefb7f9b73",
   "metadata": {},
   "source": [
    "### jieba分词\n",
    "\n",
    "jieba是一个基于结巴分词的分词器，可以进行中文分词。\n"
   ]
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-28T00:42:28.326447Z",
     "start_time": "2025-06-28T00:42:23.181821Z"
    }
   },
   "cell_type": "code",
   "source": "!pip install jieba",
   "id": "5a71a07d4f49bf94",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Collecting jieba\r\n",
      "  Using cached jieba-0.42.1.tar.gz (19.2 MB)\r\n",
      "  Preparing metadata (setup.py) ... \u001B[?25ldone\r\n",
      "\u001B[?25hBuilding wheels for collected packages: jieba\r\n",
      "\u001B[33m  DEPRECATION: Building 'jieba' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'jieba'. Discussion can be found at https://github.com/pypa/pip/issues/6334\u001B[0m\u001B[33m\r\n",
      "\u001B[0m  Building wheel for jieba (setup.py) ... \u001B[?25ldone\r\n",
      "\u001B[?25h  Created wheel for jieba: filename=jieba-0.42.1-py3-none-any.whl size=19314509 sha256=6c1e604a4794cc5d511e4e5033881c8c830db1e5ac92ffa11177de1810133a6f\r\n",
      "  Stored in directory: /Users/dadudu/Library/Caches/pip/wheels/c9/69/31/d56d90b22a1777b0b231e234b00302a55be255930f8bd92dcd\r\n",
      "Successfully built jieba\r\n",
      "Installing collected packages: jieba\r\n",
      "Successfully installed jieba-0.42.1\r\n"
     ]
    }
   ],
   "execution_count": 1
  },
  {
   "cell_type": "code",
   "id": "c94ee565d9c5179d",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-07-03T08:28:31.848065Z",
     "start_time": "2025-07-03T08:28:31.440958Z"
    }
   },
   "source": [
    "import jieba\n",
    "\n",
    "text = \"我爱自然语言处理\"\n",
    "seg_list = jieba.cut(text)\n",
    "print(\" \".join(seg_list))"
   ],
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Building prefix dict from the default dictionary ...\n",
      "Loading model from cache /var/folders/vl/mkwcfmqd5kb3rykv5bb3w3n40000gn/T/jieba.cache\n",
      "Loading model cost 0.310 seconds.\n",
      "Prefix dict has been built successfully.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "我 爱 自然语言 处理\n"
     ]
    }
   ],
   "execution_count": 1
  },
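  {
   "cell_type": "markdown",
   "id": "jiebamodes01note",
   "metadata": {},
   "source": [
    "Besides the default precise mode used above, jieba also offers a full mode and a search-engine mode. A brief sketch (assuming jieba 0.42 as installed above):"
   ]
  },
  {
   "cell_type": "code",
   "id": "jiebamodes01demo",
   "metadata": {},
   "source": [
    "import jieba\n",
    "\n",
    "text = \"我爱自然语言处理\"\n",
    "\n",
    "# Full mode: emit every dictionary word found, overlaps included\n",
    "print(\"full:\", \"/\".join(jieba.cut(text, cut_all=True)))\n",
    "\n",
    "# Search-engine mode: additionally re-segment long words, handy for indexing\n",
    "print(\"search:\", \"/\".join(jieba.cut_for_search(text)))\n",
    "\n",
    "# lcut returns a list instead of a generator\n",
    "print(\"precise:\", jieba.lcut(text))"
   ],
   "outputs": [],
   "execution_count": null
  },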
  {
   "cell_type": "markdown",
   "id": "900350adc431bf7a",
   "metadata": {},
   "source": [
    "### bert-base-chinese的分词器\n",
    "bert自带的分词器是针对英文的，bert-base-chinese有单独的针对中文的分词器（按字来的）"
   ]
  },
  {
   "cell_type": "code",
   "id": "17c18fdf0669f895",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-07-03T08:33:45.722972Z",
     "start_time": "2025-07-03T08:33:44.578563Z"
    }
   },
   "source": [
    "# 基于HuggingFace，使用Bert中的分词器\n",
    "# 第一次使用要下载，因为需要到hugging face上下载，需要翻墙\n",
    "# from transformers import AutoTokenizer\n",
    "# tokenizer = AutoTokenizer.from_pretrained(\"google-bert/bert-base-chinese\")\n",
    "\n",
    "from modelscope import AutoTokenizer\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"tiansz/bert-base-chinese\")\n",
    "\n",
    "text = \"自然语言处理很有趣!\"\n",
    "tokens = tokenizer.tokenize(text)\n",
    "print(tokens)"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading Model from https://www.modelscope.cn to directory: /Users/dadudu/.cache/modelscope/hub/models/tiansz/bert-base-chinese\n",
      "['自', '然', '语', '言', '处', '理', '很', '有', '趣', '!']\n"
     ]
    }
   ],
   "execution_count": 8
  },
  {
   "cell_type": "markdown",
   "id": "f9d6fa51eda49e3c",
   "metadata": {},
   "source": [
    "## N-Gram模型\n",
    "什么是Gram? Gram就是一个单词。\n",
    "N-Gram就是N个单词。\n",
    "\n",
    "N-Gram模型，就是把句子分成N个单词，然后统计每个单词出现的次数，常用的就是2-gram和3-gram。\n",
    "\n",
    "2-gram就是两个单词为一组，最终用前一个单词预测后一个单词\n",
    "\n",
    "3-gram就是三个单词为一组，最终用前两个单词预测第三个单词"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f39ef09830ad60b5",
   "metadata": {},
   "source": "## 演示3-gram的工作流程"
  },
  {
   "cell_type": "code",
   "id": "8715edfeec7c5ebb",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-07-03T08:35:50.845125Z",
     "start_time": "2025-07-03T08:35:50.822371Z"
    }
   },
   "source": [
    "# 语料\n",
    "text = \"\"\"\n",
    "I love natural language processing.\n",
    "I love machine learning.\n",
    "I love coding in Python and Java.\n",
    "I love Python and C++.\n",
    "I love Python and Rust.\n",
    "\"\"\"\n",
    "\n",
    "# 分词\n",
    "words = [word for word in text.split()]\n",
    "words"
   ],
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['I',\n",
       " 'love',\n",
       " 'natural',\n",
       " 'language',\n",
       " 'processing.',\n",
       " 'I',\n",
       " 'love',\n",
       " 'machine',\n",
       " 'learning.',\n",
       " 'I',\n",
       " 'love',\n",
       " 'coding',\n",
       " 'in',\n",
       " 'Python',\n",
       " 'and',\n",
       " 'Java.',\n",
       " 'I',\n",
       " 'love',\n",
       " 'Python',\n",
       " 'and',\n",
       " 'C++.',\n",
       " 'I',\n",
       " 'love',\n",
       " 'Python',\n",
       " 'and',\n",
       " 'Rust.']"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 9
  },
  {
   "cell_type": "code",
   "id": "ac396c036d6aa8d0",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-07-03T08:36:28.104247Z",
     "start_time": "2025-07-03T08:36:28.099551Z"
    }
   },
   "source": [
    "# 生成3-gram\n",
    "# 每3个词为一组\n",
    "n = 3\n",
    "ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]\n",
    "ngrams"
   ],
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('I', 'love', 'natural'),\n",
       " ('love', 'natural', 'language'),\n",
       " ('natural', 'language', 'processing.'),\n",
       " ('language', 'processing.', 'I'),\n",
       " ('processing.', 'I', 'love'),\n",
       " ('I', 'love', 'machine'),\n",
       " ('love', 'machine', 'learning.'),\n",
       " ('machine', 'learning.', 'I'),\n",
       " ('learning.', 'I', 'love'),\n",
       " ('I', 'love', 'coding'),\n",
       " ('love', 'coding', 'in'),\n",
       " ('coding', 'in', 'Python'),\n",
       " ('in', 'Python', 'and'),\n",
       " ('Python', 'and', 'Java.'),\n",
       " ('and', 'Java.', 'I'),\n",
       " ('Java.', 'I', 'love'),\n",
       " ('I', 'love', 'Python'),\n",
       " ('love', 'Python', 'and'),\n",
       " ('Python', 'and', 'C++.'),\n",
       " ('and', 'C++.', 'I'),\n",
       " ('C++.', 'I', 'love'),\n",
       " ('I', 'love', 'Python'),\n",
       " ('love', 'Python', 'and'),\n",
       " ('Python', 'and', 'Rust.')]"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 10
  },
  {
   "cell_type": "code",
   "id": "19f1e9e62018c0d9",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-07-03T08:42:51.187820Z",
     "start_time": "2025-07-03T08:42:51.183211Z"
    }
   },
   "source": [
    "from collections import defaultdict\n",
    "\n",
    "# 构建一个预测模型 {context: {next_word: count}}\n",
    "# defaultdict(lambda: defaultdict(int))表示创建一个字典，字典的value是一个字典\n",
    "model = defaultdict(lambda: defaultdict(int))\n",
    "for i in range(len(words) - n + 1):\n",
    "    context = tuple(words[i:i + n - 1])  # 前n-1个词作为上下文\n",
    "    next_word = words[i + n - 1]  # 预测目标词\n",
    "    model[context][next_word] += 1\n",
    "\n",
    "model"
   ],
   "outputs": [
    {
     "data": {
      "text/plain": [
       "defaultdict(<function __main__.<lambda>()>,\n",
       "            {('I',\n",
       "              'love'): defaultdict(int,\n",
       "                         {'natural': 1,\n",
       "                          'machine': 1,\n",
       "                          'coding': 1,\n",
       "                          'Python': 2}),\n",
       "             ('love', 'natural'): defaultdict(int, {'language': 1}),\n",
       "             ('natural', 'language'): defaultdict(int, {'processing.': 1}),\n",
       "             ('language', 'processing.'): defaultdict(int, {'I': 1}),\n",
       "             ('processing.', 'I'): defaultdict(int, {'love': 1}),\n",
       "             ('love', 'machine'): defaultdict(int, {'learning.': 1}),\n",
       "             ('machine', 'learning.'): defaultdict(int, {'I': 1}),\n",
       "             ('learning.', 'I'): defaultdict(int, {'love': 1}),\n",
       "             ('love', 'coding'): defaultdict(int, {'in': 1}),\n",
       "             ('coding', 'in'): defaultdict(int, {'Python': 1}),\n",
       "             ('in', 'Python'): defaultdict(int, {'and': 1}),\n",
       "             ('Python',\n",
       "              'and'): defaultdict(int, {'Java.': 1, 'C++.': 1, 'Rust.': 1}),\n",
       "             ('and', 'Java.'): defaultdict(int, {'I': 1}),\n",
       "             ('Java.', 'I'): defaultdict(int, {'love': 1}),\n",
       "             ('love', 'Python'): defaultdict(int, {'and': 2}),\n",
       "             ('and', 'C++.'): defaultdict(int, {'I': 1}),\n",
       "             ('C++.', 'I'): defaultdict(int, {'love': 1})})"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 13
  },
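  {
   "cell_type": "markdown",
   "id": "ngramprobs01note",
   "metadata": {},
   "source": [
    "The raw counts above can be normalized into conditional probabilities P(next_word | context). A self-contained sketch over the same corpus:"
   ]
  },
  {
   "cell_type": "code",
   "id": "ngramprobs01demo",
   "metadata": {},
   "source": [
    "from collections import defaultdict\n",
    "\n",
    "corpus = \"\"\"\n",
    "I love natural language processing.\n",
    "I love machine learning.\n",
    "I love coding in Python and Java.\n",
    "I love Python and C++.\n",
    "I love Python and Rust.\n",
    "\"\"\"\n",
    "words = corpus.split()\n",
    "n = 3\n",
    "\n",
    "# Count next-word occurrences per 2-word context\n",
    "counts = defaultdict(lambda: defaultdict(int))\n",
    "for i in range(len(words) - n + 1):\n",
    "    counts[tuple(words[i:i + n - 1])][words[i + n - 1]] += 1\n",
    "\n",
    "# Normalize: P(next | context) = count / total count for that context\n",
    "probs = {\n",
    "    ctx: {w: c / sum(nxt.values()) for w, c in nxt.items()}\n",
    "    for ctx, nxt in counts.items()\n",
    "}\n",
    "print(probs[('I', 'love')])  # {'natural': 0.2, 'machine': 0.2, 'coding': 0.2, 'Python': 0.4}"
   ],
   "outputs": [],
   "execution_count": null
  },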
  {
   "cell_type": "code",
   "id": "1942209237e068b2",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-07-03T08:42:52.666242Z",
     "start_time": "2025-07-03T08:42:52.663504Z"
    }
   },
   "source": [
    "# 定义一个预测函数\n",
    "def predict_next_word(model, context):\n",
    "    # 根据上下文预测概率最高的词\n",
    "    context = tuple(context)\n",
    "    if context not in model:\n",
    "        return None  # 上下文不存在于模型中\n",
    "    next_words = model[context]\n",
    "\n",
    "    # 返回概率最高的词 items会将dict转成kv的元组\n",
    "    return max(next_words.items(), key=lambda x: x[1])[0]"
   ],
   "outputs": [],
   "execution_count": 14
  },
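  {
   "cell_type": "markdown",
   "id": "ngrambackoff1note",
   "metadata": {},
   "source": [
    "predict_next_word returns None for an unseen context. A common remedy is backing off to a shorter context. A hypothetical, self-contained sketch using a bigram model with a unigram fallback (the corpus and the predict_with_backoff name are made up for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "id": "ngrambackoff1demo",
   "metadata": {},
   "source": [
    "from collections import Counter, defaultdict\n",
    "\n",
    "# A made-up toy corpus, for illustration only\n",
    "corpus = 'I love Python . I love Rust . You love Python .'\n",
    "words = corpus.split()\n",
    "\n",
    "# Bigram counts: the context is a single word\n",
    "bigram = defaultdict(lambda: defaultdict(int))\n",
    "for a, b in zip(words, words[1:]):\n",
    "    bigram[(a,)][b] += 1\n",
    "\n",
    "unigram = Counter(words)\n",
    "\n",
    "def predict_with_backoff(context):\n",
    "    # Try the bigram model first, then fall back to the most common word overall\n",
    "    key = tuple(context[-1:])\n",
    "    if key in bigram:\n",
    "        return max(bigram[key].items(), key=lambda kv: kv[1])[0]\n",
    "    return unigram.most_common(1)[0][0]\n",
    "\n",
    "print(predict_with_backoff(['You', 'love']))  # 'Python'\n",
    "print(predict_with_backoff(['totally', 'unseen']))  # backs off to the unigram model"
   ],
   "outputs": [],
   "execution_count": null
  },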
  {
   "cell_type": "code",
   "id": "f067daff5e380cd0",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-07-03T08:42:53.841061Z",
     "start_time": "2025-07-03T08:42:53.838600Z"
    }
   },
   "source": [
    "# 预测\n",
    "test_context = ['I', 'love']\n",
    "predicted = predict_next_word(model, test_context)\n",
    "print(predicted)"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Python\n"
     ]
    }
   ],
   "execution_count": 15
  },
  {
   "cell_type": "code",
   "id": "9ba67dd946d6318d",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-07-03T08:43:13.387818Z",
     "start_time": "2025-07-03T08:43:13.380599Z"
    }
   },
   "source": [
    "# 连续预测\n",
    "test_context = ['I', 'love']\n",
    "for _ in range(5):\n",
    "    predicted = predict_next_word(model, test_context[-2:])\n",
    "    test_context.append(predicted)\n",
    "\n",
    "test_context"
   ],
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['I', 'love', 'Python', 'and', 'Java.', 'I', 'love']"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 16
  },
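  {
   "cell_type": "markdown",
   "id": "ngramsample01note",
   "metadata": {},
   "source": [
    "The argmax strategy above is deterministic and can loop ('I love Python and Java. I love ...'). Sampling the next word in proportion to its count adds variety. A self-contained sketch using random.choices:"
   ]
  },
  {
   "cell_type": "code",
   "id": "ngramsample01demo",
   "metadata": {},
   "source": [
    "import random\n",
    "from collections import defaultdict\n",
    "\n",
    "corpus = \"\"\"\n",
    "I love natural language processing.\n",
    "I love machine learning.\n",
    "I love coding in Python and Java.\n",
    "I love Python and C++.\n",
    "I love Python and Rust.\n",
    "\"\"\"\n",
    "words = corpus.split()\n",
    "n = 3\n",
    "\n",
    "model = defaultdict(lambda: defaultdict(int))\n",
    "for i in range(len(words) - n + 1):\n",
    "    model[tuple(words[i:i + n - 1])][words[i + n - 1]] += 1\n",
    "\n",
    "def sample_next_word(model, context):\n",
    "    # Draw the next word with probability proportional to its count\n",
    "    nxt = model.get(tuple(context))\n",
    "    if not nxt:\n",
    "        return None\n",
    "    choices, weights = zip(*nxt.items())\n",
    "    return random.choices(choices, weights=weights)[0]\n",
    "\n",
    "random.seed(0)\n",
    "generated = ['I', 'love']\n",
    "for _ in range(5):\n",
    "    word = sample_next_word(model, generated[-2:])\n",
    "    if word is None:  # unseen context, stop generating\n",
    "        break\n",
    "    generated.append(word)\n",
    "print(' '.join(generated))"
   ],
   "outputs": [],
   "execution_count": null
  },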
  {
   "cell_type": "markdown",
   "id": "3dd2a52d2f097f1a",
   "metadata": {},
   "source": [
    "\n",
    "N-Gram模型的优点是计算简单、速度快，缺点是无法捕捉距离较远的两个词之间的关系。\n",
    "\n",
    "核心思想是：每次根据前面的两个词来预测下一个词，从而忽视了之前的很多词。"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.20"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
