{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "76fba87d",
   "metadata": {},
   "source": [
    "| [02_lexical_analysis/02_从头实现中文分词.ipynb](https://github.com/shibing624/nlp-tutorial/blob/main/02_lexical_analysis/02_%E4%BB%8E%E5%A4%B4%E5%AE%9E%E7%8E%B0%E4%B8%AD%E6%96%87%E5%88%86%E8%AF%8D.ipynb)  | Implement a Chinese word segmentation model from scratch  |[Open In Colab](https://colab.research.google.com/github/shibing624/nlp-tutorial/blob/main/02_lexical_analysis/02_从头实现中文分词.ipynb) |\n",
    "\n",
    "# Implementing a Chinese Word Segmentation Model from Scratch\n",
    "\n",
    "In Chinese, the word is the smallest meaningful unit of language that can stand on its own. Word segmentation and part-of-speech tagging are foundational tasks of Chinese NLP, and they greatly simplify downstream work such as syntactic parsing.\n",
    "\n",
    "Chinese text must be segmented explicitly: English words are naturally delimited by spaces (phrase boundaries aside), while Chinese words have no surface-level delimiter, which makes the task considerably harder. Because segmentation underpins most Chinese NLP pipelines, its quality strongly affects everything built on top of it."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "83f8e13c",
   "metadata": {},
   "source": [
    "## Why Segmentation Is Hard\n",
    "1. Segmentation ambiguity\n",
    "\n",
    "南京市长江大桥\n",
    "\n",
    "can be split as: 南京市 / 长江大桥 (Nanjing City / Yangtze River Bridge)\n",
    "\n",
    "or as: 南京 / 市长 / 江大桥 (Nanjing / mayor / \"Jiang Daqiao\")\n",
    "\n",
    "Both splits are locally well-formed, which is exactly the ambiguity problem; here the second reading is the wrong one.\n",
    "\n",
    "2. Out-of-vocabulary words\n",
    "\n",
    "Out-of-vocabulary (OOV) words are words that appear neither in the dictionary nor in the training corpus: named entities, domain-specific terms, and newly coined words.\n",
    "\n",
    "## Segmentation Approaches\n",
    "\n",
    "1. Dictionary-based (mechanical) matching: match the input string against the entries of a sufficiently large lexicon. In practice this is used as a first pass, with other methods layered on top to improve accuracy.\n",
    "\n",
    "2. Statistical segmentation: enumerate candidate splits of the input, then use per-word frequencies from a training corpus together with a statistical model and a decoding algorithm to choose the most likely split.\n",
    "\n",
    "3. Deep learning: determine word boundaries from contextual information, for example by fine-tuning a pretrained model such as BERT on the segmentation task.\n",
    "\n",
    "Here we take the second, statistical route, using a Hidden Markov Model (HMM).\n",
    "\n",
    "### The HMM Formulation\n",
    "Chinese word segmentation can be cast as a sequence labeling problem with four tags:\n",
    "\n",
    "- B: first character of a multi-character word\n",
    "- M: middle character of a word\n",
    "- E: last character of a word\n",
    "- S: single-character word\n",
    "\n",
    "For the segmentation \"我/只是/做了/一些/微小/的/工作\", the character-level labeling is \"我S只B是E做B了E一B些E微B小E的S工B作E\".\n",
    "\n",
    "The tag sequence \"SBEBEBEBESBE\" is the hidden state sequence, and the raw text \"我只是做了一些微小的工作\" is the observation sequence. Segmentation then becomes a decoding problem: find the state sequence under which the given observation sequence is most probable.\n",
    "\n",
    "To be clear, \"most probable\" means most probable under the statistics of natural language, as estimated from the training corpus.\n",
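    "\n",
    "The decoding step uses the Viterbi algorithm (the dynamic-programming table built in `do_predict` below). Writing $\\pi(s)$ for the initial probability of state $s$, $A_{s's}$ for the transition probability from $s'$ to $s$, and $B_s(o_t)$ for the probability that state $s$ emits the $t$-th character $o_t$, the best-path score $\\delta_t(s)$ satisfies:\n",
    "\n",
    "$$\\delta_1(s) = \\pi(s)\\,B_s(o_1), \\qquad \\delta_t(s) = \\max_{s'}\\; \\delta_{t-1}(s')\\,A_{s's}\\,B_s(o_t)$$\n",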
    "\n",
    "## Implementation"
   ]
  },
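  {
   "cell_type": "markdown",
   "id": "bmes-tags-demo-md",
   "metadata": {},
   "source": [
    "As a quick sanity check on the labeling scheme, the tags for a segmented sentence can be derived in a few lines (a minimal sketch; `word_to_tags` is an ad-hoc helper for this check only, not part of the model):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bmes-tags-demo-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Map one segmented word to its BMES tags.\n",
    "def word_to_tags(word):\n",
    "    if len(word) == 1:\n",
    "        return ['S']\n",
    "    return ['B'] + ['M'] * (len(word) - 2) + ['E']\n",
    "\n",
    "words = ['我', '只是', '做了', '一些', '微小', '的', '工作']\n",
    "tags = ''.join(t for w in words for t in word_to_tags(w))\n",
    "print(tags)  # SBEBEBEBESBE"
   ]
  },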
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "6af6471c",
   "metadata": {},
   "outputs": [],
   "source": [
    "STATES = {'B', 'M', 'E', 'S'}  # B: word begin, M: middle, E: end, S: single-character word"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "a5198fa3",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pickle\n",
    "import json\n",
    "\n",
    "EPS = 0.0001\n",
    "\n",
    "\n",
    "class HMModel:\n",
    "    def __init__(self):\n",
    "        self.trans_mat = {}  # trans_mat[status][status] = int\n",
    "        self.emit_mat = {}  # emit_mat[status][observe] = int\n",
    "        self.init_vec = {}  # init_vec[status] = int\n",
    "        self.state_count = {}  # state_count[status] = int\n",
    "        self.states = {}\n",
    "        self.inited = False\n",
    "\n",
    "    def setup(self):\n",
    "        for state in self.states:\n",
    "            # build trans_mat\n",
    "            self.trans_mat[state] = {}\n",
    "            for target in self.states:\n",
    "                self.trans_mat[state][target] = 0.0\n",
    "            # build emit_mat\n",
    "            self.emit_mat[state] = {}\n",
    "            # build init_vec\n",
    "            self.init_vec[state] = 0\n",
    "            # build state_count\n",
    "            self.state_count[state] = 0\n",
    "        self.inited = True\n",
    "\n",
    "    def save(self, filename=\"hmm.json\", code=\"json\"):\n",
    "        data = {\n",
    "            \"trans_mat\": self.trans_mat,\n",
    "            \"emit_mat\": self.emit_mat,\n",
    "            \"init_vec\": self.init_vec,\n",
    "            \"state_count\": self.state_count\n",
    "        }\n",
    "        if code == \"json\":\n",
    "            # ensure_ascii=False keeps Chinese characters readable in the file\n",
    "            with open(filename, 'w', encoding='utf-8') as fw:\n",
    "                json.dump(data, fw, ensure_ascii=False)\n",
    "        elif code == \"pickle\":\n",
    "            # pickle requires a binary file handle\n",
    "            with open(filename, 'wb') as fw:\n",
    "                pickle.dump(data, fw)\n",
    "\n",
    "    def load(self, filename=\"hmm.json\", code=\"json\"):\n",
    "        if code == \"json\":\n",
    "            with open(filename, 'r', encoding='utf-8') as fr:\n",
    "                model = json.load(fr)\n",
    "        elif code == \"pickle\":\n",
    "            with open(filename, 'rb') as fr:\n",
    "                model = pickle.load(fr)\n",
    "        self.trans_mat = model[\"trans_mat\"]\n",
    "        self.emit_mat = model[\"emit_mat\"]\n",
    "        self.init_vec = model[\"init_vec\"]\n",
    "        self.state_count = model[\"state_count\"]\n",
    "        self.inited = True\n",
    "\n",
    "    def do_train(self, observes, states):\n",
    "        if not self.inited:\n",
    "            self.setup()\n",
    "\n",
    "        for i in range(len(states)):\n",
    "            if i == 0:\n",
    "                self.init_vec[states[0]] += 1\n",
    "            else:\n",
    "                self.trans_mat[states[i - 1]][states[i]] += 1\n",
    "            self.state_count[states[i]] += 1\n",
    "            # count the emission at every position, including the first\n",
    "            if observes[i] not in self.emit_mat[states[i]]:\n",
    "                self.emit_mat[states[i]][observes[i]] = 1\n",
    "            else:\n",
    "                self.emit_mat[states[i]][observes[i]] += 1\n",
    "\n",
    "    def get_prob(self):\n",
    "        init_vec = {}\n",
    "        trans_mat = {}\n",
    "        emit_mat = {}\n",
    "        default = max(self.state_count.values())  # avoid ZeroDivisionError\n",
    "        # convert init_vec to prob\n",
    "        for key in self.init_vec:\n",
    "            if self.state_count[key] != 0:\n",
    "                init_vec[key] = float(self.init_vec[key]) / \\\n",
    "                    self.state_count[key]\n",
    "            else:\n",
    "                init_vec[key] = float(self.init_vec[key]) / default\n",
    "        # convert trans_mat to prob\n",
    "        for key1 in self.trans_mat:\n",
    "            trans_mat[key1] = {}\n",
    "            for key2 in self.trans_mat[key1]:\n",
    "                if self.state_count[key1] != 0:\n",
    "                    trans_mat[key1][key2] = float(\n",
    "                        self.trans_mat[key1][key2]) / self.state_count[key1]\n",
    "                else:\n",
    "                    trans_mat[key1][key2] = float(\n",
    "                        self.trans_mat[key1][key2]) / default\n",
    "        # convert emit_mat to prob\n",
    "        for key1 in self.emit_mat:\n",
    "            emit_mat[key1] = {}\n",
    "            for key2 in self.emit_mat[key1]:\n",
    "                if self.state_count[key1] != 0:\n",
    "                    emit_mat[key1][key2] = float(\n",
    "                        self.emit_mat[key1][key2]) / self.state_count[key1]\n",
    "                else:\n",
    "                    emit_mat[key1][key2] = float(\n",
    "                        self.emit_mat[key1][key2]) / default\n",
    "        return init_vec, trans_mat, emit_mat\n",
    "\n",
    "    def do_predict(self, sequence):\n",
    "        tab = [{}]\n",
    "        path = {}\n",
    "        init_vec, trans_mat, emit_mat = self.get_prob()\n",
    "\n",
    "        # init\n",
    "        for state in self.states:\n",
    "            tab[0][state] = init_vec[state] * \\\n",
    "                emit_mat[state].get(sequence[0], EPS)\n",
    "            path[state] = [state]\n",
    "\n",
    "        # build dynamic search table\n",
    "        for t in range(1, len(sequence)):\n",
    "            tab.append({})\n",
    "            new_path = {}\n",
    "            for state1 in self.states:\n",
    "                items = []\n",
    "                for state2 in self.states:\n",
    "                    if tab[t - 1][state2] == 0:\n",
    "                        continue\n",
    "                    prob = tab[t - 1][state2] * trans_mat[state2].get(\n",
    "                        state1, EPS) * emit_mat[state1].get(sequence[t], EPS)\n",
    "                    items.append((prob, state2))\n",
    "                best = max(items)  # best: (prob, state)\n",
    "                tab[t][state1] = best[0]\n",
    "                new_path[state1] = path[best[1]] + [state1]\n",
    "            path = new_path\n",
    "\n",
    "        # search best path\n",
    "        prob, state = max([(tab[len(sequence) - 1][state], state)\n",
    "                          for state in self.states])\n",
    "        return path[state]"
   ]
  },
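  {
   "cell_type": "markdown",
   "id": "count-demo-md",
   "metadata": {},
   "source": [
    "To make the role of the three count tables concrete, here is a stripped-down sketch (plain dicts instead of the class, illustrative only) of what `do_train` accumulates for a single hand-labeled sequence:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "count-demo-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import defaultdict\n",
    "\n",
    "observes, states = list('我只是'), ['S', 'B', 'E']\n",
    "\n",
    "init_vec = defaultdict(int)                        # counts of sentence-initial states\n",
    "trans_mat = defaultdict(lambda: defaultdict(int))  # state -> next-state counts\n",
    "emit_mat = defaultdict(lambda: defaultdict(int))   # state -> emitted-char counts\n",
    "\n",
    "init_vec[states[0]] += 1\n",
    "for i in range(1, len(states)):\n",
    "    trans_mat[states[i - 1]][states[i]] += 1\n",
    "for s, o in zip(states, observes):\n",
    "    emit_mat[s][o] += 1\n",
    "\n",
    "print(dict(init_vec))                            # {'S': 1}\n",
    "print(trans_mat['S']['B'], trans_mat['B']['E'])  # 1 1"
   ]
  },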
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "e358b796",
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_tags(src):\n",
    "    tags = []\n",
    "    if len(src) == 1:\n",
    "        tags = ['S']\n",
    "    elif len(src) == 2:\n",
    "        tags = ['B', 'E']\n",
    "    else:\n",
    "        m_num = len(src) - 2\n",
    "        tags.append('B')\n",
    "        tags.extend(['M'] * m_num)\n",
    "        tags.append('E')  # a multi-character word ends with E, not S\n",
    "    return tags\n",
    "\n",
    "\n",
    "def cut_sent(src, tags):\n",
    "    word_list = []\n",
    "    start = -1\n",
    "    started = False\n",
    "\n",
    "    if len(tags) != len(src):\n",
    "        return None\n",
    "\n",
    "    if tags[-1] not in {'S', 'E'}:\n",
    "        if tags[-2] in {'S', 'E'}:\n",
    "            tags[-1] = 'S'  # for tags: r\".*(S|E)(B|M)\"\n",
    "        else:\n",
    "            tags[-1] = 'E'  # for tags: r\".*(B|M)(B|M)\"\n",
    "\n",
    "    for i in range(len(tags)):\n",
    "        if tags[i] == 'S':\n",
    "            if started:\n",
    "                started = False\n",
    "                word_list.append(src[start:i])  # for tags: r\"BM*S\"\n",
    "            word_list.append(src[i])\n",
    "        elif tags[i] == 'B':\n",
    "            if started:\n",
    "                word_list.append(src[start:i])  # for tags: r\"BM*B\"\n",
    "            start = i\n",
    "            started = True\n",
    "        elif tags[i] == 'E':\n",
    "            started = False\n",
    "            word = src[start:i+1]\n",
    "            word_list.append(word)\n",
    "        elif tags[i] == 'M':\n",
    "            continue\n",
    "    return word_list\n",
    "\n"
   ]
  },
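  {
   "cell_type": "markdown",
   "id": "cut-sent-demo-md",
   "metadata": {},
   "source": [
    "A quick round trip through the two helpers above: build a tag sequence from a known segmentation with `get_tags`, then recover the words with `cut_sent` (this relies only on the functions defined in the previous cell):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cut-sent-demo-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "sentence = '我只是做了一些微小的工作'\n",
    "words = ['我', '只是', '做了', '一些', '微小', '的', '工作']\n",
    "\n",
    "tags = []\n",
    "for w in words:\n",
    "    tags.extend(get_tags(w))\n",
    "\n",
    "cut_sent(sentence, tags)  # recovers the original word list"
   ]
  },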
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "fe842f68",
   "metadata": {},
   "outputs": [],
   "source": [
    "class HMMSegger(HMModel):\n",
    "\n",
    "    def __init__(self, *args, **kwargs):\n",
    "        super(HMMSegger, self).__init__(*args, **kwargs)\n",
    "        self.states = STATES\n",
    "        self.data = None\n",
    "\n",
    "    def load_data(self, filename):\n",
    "        # read the corpus up front so the file handle is closed promptly\n",
    "        with open(filename, 'r', encoding='utf-8') as f:\n",
    "            self.data = f.readlines()\n",
    "\n",
    "    def train(self):\n",
    "        if not self.inited:\n",
    "            self.setup()\n",
    "\n",
    "        # train\n",
    "        for line in self.data:\n",
    "            # pre processing\n",
    "            line = line.strip()\n",
    "            if not line:\n",
    "                continue\n",
    "\n",
    "            # get observes\n",
    "            observes = []\n",
    "            for i in range(len(line)):\n",
    "                if line[i] == \" \":\n",
    "                    continue\n",
    "                observes.append(line[i])\n",
    "\n",
    "            # get states\n",
    "            words = line.split(\" \")  # split words on whitespace\n",
    "            states = []\n",
    "            for word in words:\n",
    "                if '/' in word:\n",
    "                    word = word.split('/')[0]\n",
    "                states.extend(get_tags(word))\n",
    "\n",
    "            # resume train\n",
    "            self.do_train(observes, states)\n",
    "\n",
    "    def cut(self, sentence):\n",
    "        try:\n",
    "            tags = self.do_predict(sentence)\n",
    "            return cut_sent(sentence, tags)\n",
    "        except Exception:\n",
    "            # fall back to the unsegmented sentence on any prediction error\n",
    "            return sentence"
   ]
  },
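  {
   "cell_type": "markdown",
   "id": "train-preproc-demo-md",
   "metadata": {},
   "source": [
    "The per-line preprocessing inside `train` can be checked on a single corpus-style line (assuming the corpus format is space-separated words, optionally carrying a `/pos` suffix):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "train-preproc-demo-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "line = '我 只是 做了 一些 微小 的 工作'\n",
    "\n",
    "# observations: every non-space character\n",
    "observes = [ch for ch in line if ch != ' ']\n",
    "\n",
    "# states: one BMES tag per character, derived from the word boundaries\n",
    "states = []\n",
    "for word in line.split(' '):\n",
    "    word = word.split('/')[0]  # drop a part-of-speech suffix if present\n",
    "    if len(word) == 1:\n",
    "        states.append('S')\n",
    "    else:\n",
    "        states.extend(['B'] + ['M'] * (len(word) - 2) + ['E'])\n",
    "\n",
    "print(len(observes) == len(states))  # True: exactly one tag per character"
   ]
  },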
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "cc24e180",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['长春', '市长', '春节', '讲话']"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "segger = HMMSegger()\n",
    "segger.load_data(\"data/seg_data.txt\")\n",
    "segger.train()\n",
    "segger.save()\n",
    "segger.cut(\"长春市长春节讲话\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5bf8b4a1",
   "metadata": {},
   "source": [
    "A few more examples to test:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "2e0f3367",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['我', '来到', '北京', '清华', '大学']\n",
      "['长春', '市长', '春节', '讲话']\n",
      "['我们', '去', '野生动物', '园', '玩']\n",
      "['我只', '是做', '了一', '些微', '小的', '工作']\n"
     ]
    }
   ],
   "source": [
    "cases = [\n",
    "    \"我来到北京清华大学\",\n",
    "    \"长春市长春节讲话\",\n",
    "    \"我们去野生动物园玩\",\n",
    "    \"我只是做了一些微小的工作\",\n",
    "]\n",
    "for case in cases:\n",
    "    result = segger.cut(case)\n",
    "    print(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7176c604",
   "metadata": {},
   "source": [
    "Reload the saved model and predict again:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "d9ee67d6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['长春', '市长', '春节', '讲话']"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "m = HMMSegger()\n",
    "m.load()\n",
    "m.cut(\"长春市长春节讲话\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "55ed2c69",
   "metadata": {},
   "source": [
    "#### Reference\n",
    "\n",
    "[A homemade HMM-based Python Chinese word segmenter](https://www.cnblogs.com/finley/p/6358097.html)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b74bc373",
   "metadata": {},
   "source": [
    "End of section."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
