{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Sequence Labeling with a Bi-directional LSTM (Chinese Word Segmentation)\n",
    "\n",
    "\n",
    "**TensorFlow version: 1.2.1**\n",
    "\n",
    "\n",
    "This example mainly follows the article [【中文分词系列】 4. 基于双向LSTM的seq2seq字标注](http://spaces.ac.cn/archives/3924/).<br/>\n",
    "That article implements the bi-directional LSTM in Keras; this example takes essentially the same approach, but implements it in TensorFlow.<br/>\n",
    "\n",
    "The main goal of this example is to show how to implement a Bi-LSTM in TensorFlow. The final segmentation step at the end uses Viterbi decoding; to understand why, see the hidden Markov model introduced in Chapter 10 of 《统计学习方法》 (Statistical Learning Methods)."
   ]
  },
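  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a preview, here is a minimal sketch of the Viterbi decoding step mentioned above. The `emit` and `trans` scores are hypothetical placeholders; in this notebook the emission scores would come from the Bi-LSTM outputs and the transition scores from tag-bigram statistics:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def viterbi(emit, trans, tags):\n",
    "    # emit: [T, K] per-character tag log-scores; trans: [K, K] tag-transition log-scores\n",
    "    T, K = emit.shape\n",
    "    score = emit[0].copy()              # best score ending in each tag at step 0\n",
    "    path = np.zeros((T, K), dtype=int)  # back-pointers\n",
    "    for t in range(1, T):\n",
    "        cand = score[:, None] + trans + emit[t]  # cand[i, j]: previous tag i -> current tag j\n",
    "        path[t] = cand.argmax(axis=0)\n",
    "        score = cand.max(axis=0)\n",
    "    best = [int(score.argmax())]\n",
    "    for t in range(T - 1, 0, -1):       # follow the back-pointers from the last step\n",
    "        best.append(int(path[t][best[-1]]))\n",
    "    return [tags[i] for i in reversed(best)]\n",
    "```\n",
    "\n",
    "With the full `s/b/m/e` tag set, forbidding impossible transitions (e.g. `s -> m`) with a large negative `trans` entry is what turns the per-character scores into a legal segmentation."
   ]
  },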
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Main references: <br/>\n",
    "[1] 【中文分词系列】 4. 基于双向LSTM的seq2seq字标注 http://spaces.ac.cn/archives/3924/  <br/>\n",
    "[2] https://github.com/yongyehuang/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/bidirectional_rnn.py  <br/>\n",
    "[3] https://github.com/yongyehuang/deepnlp/blob/master/deepnlp/pos/pos_model_bilstm.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "import matplotlib.pyplot as plt\n",
    "import re\n",
    "from tqdm import tqdm\n",
    "import time"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Data Preprocessing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Length of texts is 20247877\n",
      "Example of texts: \n",
      " 人/b  们/e  常/s  说/s  生/b  活/e  是/s  一/s  部/s  教/b  科/m  书/e  ，/s  而/s  血/s  与/s  火/s  的/s  战/b  争/e  更/s  是/s  不/b  可/m  多/m  得/e  的/s  教/b  科/m  书/e  ，/s  她/s  确/b  实/e  是/s  名/b  副/m  其/m  实/e  的/s  ‘/s  我/s  的/s  大/b  学/e  ’/s  。/s   心/s  静/s  渐/s  知/s  春/s  似/s  海/s  ，/s  花/s  深/s  每/s  觉/s  影/s\n"
     ]
    }
   ],
   "source": [
    "# Read all the data in as one string\n",
    "with open('data/msr_train.txt', 'rb') as inp:\n",
    "    texts = inp.read().decode('gbk')\n",
    "sentences = texts.split('\\r\\n')  # split on line breaks\n",
    "\n",
    "# Remove unmatched quotation marks (e.g. at the beginning of a line)\n",
    "def clean(s):\n",
    "    if u'“/s' not in s:\n",
    "        return s.replace(u' ”/s', '')\n",
    "    elif u'”/s' not in s:\n",
    "        return s.replace(u'“/s ', '')\n",
    "    elif u'‘/s' not in s:\n",
    "        return s.replace(u' ’/s', '')\n",
    "    elif u'’/s' not in s:\n",
    "        return s.replace(u'‘/s ', '')\n",
    "    else:\n",
    "        return s\n",
    "    \n",
    "texts = u''.join(map(clean, sentences))  # join everything back into one string\n",
    "print 'Length of texts is %d' % len(texts)\n",
    "print 'Example of texts: \\n', texts[:300]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sentences number: 331739\n",
      "Sentence Example:\n",
      "  而/s  血/s  与/s  火/s  的/s  战/b  争/e  更/s  是/s  不/b  可/m  多/m  得/e  的/s  教/b  科/m  书/e  \n"
     ]
    }
   ],
   "source": [
    "# Re-split the text on punctuation\n",
    "sentences = re.split(u'[，。！？、‘’“”]/[bems]', texts)\n",
    "print 'Sentences number:', len(sentences)\n",
    "print 'Sentence Example:\\n', sentences[1]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "12619it [00:00, 65836.89it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Start creating words and tags data ...\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "331739it [00:04, 67518.21it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Length of datas is 321533\n",
      "Example of datas:  [u'\\u4eba' u'\\u4eec' u'\\u5e38' u'\\u8bf4' u'\\u751f' u'\\u6d3b' u'\\u662f'\n",
      " u'\\u4e00' u'\\u90e8' u'\\u6559' u'\\u79d1' u'\\u4e66']\n",
      "Example of labels: [u'b' u'e' u's' u's' u'b' u'e' u's' u's' u's' u'b' u'm' u'e']\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "def get_Xy(sentence):\n",
    "    \"\"\"Split sentence into [word1, w2, ..wn], [tag1, t2, ...tn].\"\"\"\n",
    "    words_tags = re.findall('(.)/(.)', sentence)\n",
    "    if words_tags:\n",
    "        words_tags = np.asarray(words_tags)\n",
    "        words = words_tags[:, 0]\n",
    "        tags = words_tags[:, 1]\n",
    "        return words, tags  # store the characters and tags separately as data / labels\n",
    "    return None\n",
    "\n",
    "datas = list()\n",
    "labels = list()\n",
    "print 'Start creating words and tags data ...'\n",
    "for sentence in tqdm(iter(sentences)):\n",
    "    result = get_Xy(sentence)\n",
    "    if result:\n",
    "        datas.append(result[0])\n",
    "        labels.append(result[1])\n",
    "\n",
    "print 'Length of datas is %d' % len(datas) \n",
    "print 'Example of datas: ', datas[0]\n",
    "print 'Example of labels:', labels[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>tags</th>\n",
       "      <th>words</th>\n",
       "      <th>sentence_len</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>[b, e, s, s, b, e, s, s, s, b, m, e]</td>\n",
       "      <td>[人, 们, 常, 说, 生, 活, 是, 一, 部, 教, 科, 书]</td>\n",
       "      <td>12</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>[s, s, s, s, s, b, e, s, s, b, m, m, e, s, b, ...</td>\n",
       "      <td>[而, 血, 与, 火, 的, 战, 争, 更, 是, 不, 可, 多, 得, 的, 教, ...</td>\n",
       "      <td>17</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                                tags  \\\n",
       "0               [b, e, s, s, b, e, s, s, s, b, m, e]   \n",
       "1  [s, s, s, s, s, b, e, s, s, b, m, m, e, s, b, ...   \n",
       "\n",
       "                                               words  sentence_len  \n",
       "0               [人, 们, 常, 说, 生, 活, 是, 一, 部, 教, 科, 书]            12  \n",
       "1  [而, 血, 与, 火, 的, 战, 争, 更, 是, 不, 可, 多, 得, 的, 教, ...            17  "
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df_data = pd.DataFrame({'words': datas, 'tags': labels}, index=range(len(datas)))\n",
    "# sentence length\n",
    "df_data['sentence_len'] = df_data['words'].apply(lambda words: len(words))\n",
    "df_data.head(2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAaIAAAEXCAYAAADvDECpAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzt3X+YXVV97/H3xwRC5GeANIUkJFTitCH4CwvR3nqnjYUo\naNJbfqQXNdFgHi8IaGNtUrRQNQq1aqUtaBQkoAIx4iWVIqTBkVqbxKAgBIzkkmASAgECIQEJJHzv\nH2sdZud4JnNm5pzZk5zP63nOM3uvvdfaa69z5nxn7b1mL0UEZmZmZXlV2RUwM7PW5kBkZmalciAy\nM7NSORCZmVmpHIjMzKxUDkRmZlYqByJD0lckfbJBZR0jabukQXm9Q9K5jSg7l3ebpOmNKq8Hx/2M\npCclPVbn/pdK+maz69WfJF0r6TMNKqtN0j2Stkm6sBFl2t7LgWgfJ2mdpN/kX/hnJP1E0ockvfLe\nR8SHIuLTdZb19j3tExG/joiDImJXA+r+W1/mEfGOiFjQ17J7WI9jgNnA+Ij43Rrb2yVtaOLxGxYA\nenDMGZJ+3MRDfBz4YUQcHBFX1Dj+8ZLukLQlf27vlvTOvh602e+V9Y4DUWt4V0QcDIwBLgP+Bri6\n0QeRNLjRZQ4QxwBPRcTmsiuyDxkDrNrD9n8DlgC/C/wOcCHwbD/Uy8oQEX7twy9gHfD2qrSTgJeB\nCXn9WuAzeflI4PvAM8AW4D9Jf7Bcn/P8BthO+ot2LBDATODXwF2FtMG5vA7gc8AK0hfJLcDheVs7\nsKFWfYHJwIvAS/l49xbKOzcvvwr4BPAIsBm4Djg0b6vUY3qu25PAxXtop0Nz/idyeZ/I5b89n/PL\nuR7XVuU7sGr7duBo4FJgYS5zG+lL982FfEcD383HWwtcuIe6vfL+1Nj2+6Qv7C3AauCsqnz/Ctya\n67AceE1h+yk5z1bgSuBHwLnAHwAvALvy+TxTT3k16vbufN7P5PftD3L6nbnsF3L5r63Kd2R+7w7b\nQ9mnA/fksn8CvK7qM/Qx4Bf53G4CDtjDe/UqYA7w/4Cn8vtW+Yzu8XMEDAL+NufdBtwNjO7uvfGr\n6v0suwJ+NfkNrhGIcvqvgf+Tl1/5oiMFja8A++XXHwOqVVbhl/S6/Es+lNqBaCMwIe/zXeCbeVs7\nXQSivHxpZd/C9g46A9EHgDXA7wEHATcD11fV7Wu5Xq8HdpC/DGu0x3WkIHlwzvsrYGZX9azKW+s8\nLiV90b4zf1l9DliWt70qf2H9HbB/rv/DwKldlP/K+1OVfiCwHng/MBh4I+mLcnwh31OkPzwGA98C\nbszbjiT9YfC/8raLSEG/0rYzgB/XqEfN8mrU7bXAc8CfkT5HH8/v1f7V72ONvAIeIv1BNBUYUbX9\njaQ/PE7ObTs9f26GFD5DK0hB5nDgQeBDe3ivLgKWAaOAIcBXgRvq+RwBfw3cB7Tler8eOKK798av\n3V++NNe6HiX9klZ7CTgKGBMRL0XEf0b+jduDSyPiuYj4TRfbr4+I+yPiOeCTwFmVwQx9dA7wxYh4\nOCK2A3OBaVWXCP8+In4TEfcC95K+KHaT6zINmBsR2yJiHfAF4L19rN+PI+LfI90vu75w7D8EhkfE\npyLixYh4mPRFN62H5Z8OrIuIb0TEzoj4OSnQn1nY53sRsSIidpICxxty+juBVRFxc952BVDPQIyu\nyqt2NnBrRCyJiJeAfyR9kb+1uwPkz9ufkALKF4BNku6SNC7vMgv4akQsj4hdke4Z7gAmFoq5IiIe\njYgtpMt8XdUT4EOkXs6GiNhB+iPijDo/R+cCn4iI1ZHcGxFPUd97Y9m+ek3fujeSdMmg2udJv4h3\nSAKYHxGXdVPW+h5sf4T0F/KR9VVzj47O5RXLHgyMKKQVv1yfJ/Wcqh2Z61Rd1sg+1q/62AfkL7cx\nwNGSnilsH0S6DNoTY4CTq8oZTAp6Xd
Whcv5HU3hfIiLqvIlfT3tWyn+lPSPiZUnrqbNNI2ID8GEA\nSaOB+aRe61tI5z1d0gWFLPvnY3ZVz+K2amOA70l6uZC2i/o+R6NJl+Vqldnde2OZA1ELkvSHpC+E\n3xoVFRHbSCPEZkuaANwp6acRsZR0iaKW7npMowvLx5B6XU+SLt28ulCvQcDwHpT7KOkXvlj2TuBx\n0mWWej2Z6zQGeKBQ1sY68/f0EfbrgbURMa7bPbsv50cR8We9yLuJQhsp/dVRbLO+Ppb/UeCEqvJH\nU3+bdlYkYr2kfwVuyEnrgXkRMa8X9ap1XuuBD0TEf1VvkDS2m/LWA68B7q+R3tv3puX40lwLkXSI\npNOBG0n3Xu6rsc/pko7LXxxbSX8ZVv5SfJx0P6On3iNpvKRXA58CFuXLVb8i9RJOk7QfaYDAkEK+\nx4GxxaHmVW4APirpWEkHAZ8FbsqXjeqW67IQmCfpYEljgL8C6v0/oMeBIyQdWuf+K4Btkv5G0lBJ\ngyRNyH8gdGWQpAMKr/1J91BeK+m9kvbLrz+U9Ad11OFW4ARJU3Mv7XzSCLXiOY3Kx+mNhcBpkibl\n93Y26fLZT7rLKGmYpL/Pn8NXSTqSdD9wWd7la8CHJJ2s5MD8GTq4jnrVeq++Qnrvx+TjD5c0pc7z\n/DrwaUnjcl1eJ+kI+vbetBwHotbwb5K2kf5Kuxj4Iukmai3jgP8gjSj6b+DKiPhh3vY54BP5/zo+\n1oPjX0+60f0YafTShQARsRU4j/TLvJHUQypeHvpO/vmUpJ/VKPeaXPZdpJFnLwAX1NivHhfk4z9M\n6il+O5ffrYj4JSkoPpzbZk+XgSqB73TSfYu1pB7Z10kj97oyhzTiq/K6M/deTyHdW3qU1L6Xs3sw\n76oOT5LuV/wDaQDCeGAlKVhAGtm2CnhM0pPdlVej/NXAe4B/Jp3fu0j/RvBiHdlfJA0S+A/SgIr7\nc71m5LJXAh8E/gV4mjQIYkad9ar1Xn0ZWEy6HL2NFPBOrqc80u/SQuCOXNergaF9eW9aUWU0lJm1\nsNzr3ACcU/jDw6xfuEdk1qIknSrpMElDSP8LIzovf5n1Gwcis9b1FtKIr8qls6l7GIJv1jS+NGdm\nZqVyj8jMzErl/yMqOOyww+K4444ruxoDwnPPPceBBx5YdjUGBLdFJ7dFJ7dFp7vvvvvJiBje/Z61\nORAVjBgxgpUrV5ZdjQGho6OD9vb2sqsxILgtOrktOrktOkl6pPu9uuZLc2ZmVioHIjMzK5UDkZmZ\nlcqByMzMSuVAZGZmpXIgMjOzUjkQmZlZqZoaiCRdI2mzpPsLaZ+X9EtJv5D0PUmHFbbNlbRG0mpJ\npxbST5R0X952RZ4rB0lDJN2U05cXJ7GSNF3SQ/k1vZnnaWZmvdfsHtG1wOSqtCXAhIh4HWlitLkA\nksaT5u44Pue5Ms/YCXAVaf6RcflVKXMm8HREHAd8iTTfB5IOBy4hzSlyEnCJpGFNOD8zM+ujpj5Z\nISLuqp5qNyLuKKwuA87Iy1OAGyNiB7BW0hrgJEnrgEMiYhmApOuAqcBtOc+lOf8i4F9yb+lUYElE\nbMl5lpCCV2Wq4W6NnXNr3edZr3WXndbwMs3M9nZlP+LnA8BNeXkku8+FsiGnvcTus3ZW0it51gNE\nxE5JW4Ejiuk18uxG0ixgFsDw4cPp6OgAYPYJPZptui6VsvcG27dv36vq20xui05ui05ui8YpLRBJ\nuhjYCXyrrDoARMR8YD5AW1tbVJ4dNaMZPaJz2hteZrP4OVqd3Bad3Bad3BaNU8qoOUkzgNNJ0xJX\nJkTaCIwu7DYqp23My9Xpu+WRNBg4FHhqD2WZmdkA0++BSNJk4OPAuyPi+cKmxcC0PBLuWNKghBUR\nsQl4VtLEfP/nfcAthTyVEXFnAHfmwHY7cIqkYXmQwik5zczMBpimXpqTdAPQDhwpaQNpJNtcYAiw\nJI
/CXhYRH4qIVZIWAg+QLtmdHxG7clHnkUbgDSUNUrgtp18NXJ8HNmwhjbojIrZI+jTw07zfpyoD\nF8zMbGBp9qi5v6yRfPUe9p8HzKuRvhKYUCP9BeDMLsq6Brim7sqamVkp/GQFMzMrlQORmZmVyoHI\nzMxK5UBkZmalciAyM7NSORCZmVmpHIjMzKxUDkRmZlYqByIzMyuVA5GZmZXKgcjMzErlQGRmZqVy\nIDIzs1I5EJmZWakciMzMrFQORGZmVioHIjMzK5UDkZmZlcqByMzMSuVAZGZmpXIgMjOzUjkQmZlZ\nqRyIzMysVA5EZmZWKgciMzMrVVMDkaRrJG2WdH8h7XBJSyQ9lH8OK2ybK2mNpNWSTi2knyjpvrzt\nCknK6UMk3ZTTl0saW8gzPR/jIUnTm3meZmbWe83uEV0LTK5KmwMsjYhxwNK8jqTxwDTg+JznSkmD\ncp6rgA8C4/KrUuZM4OmIOA74EnB5Lutw4BLgZOAk4JJiwDMzs4GjqYEoIu4CtlQlTwEW5OUFwNRC\n+o0RsSMi1gJrgJMkHQUcEhHLIiKA66ryVMpaBEzKvaVTgSURsSUingaW8NsB0czMBoDBJRxzRERs\nysuPASPy8khgWWG/DTntpbxcnV7Jsx4gInZK2gocUUyvkWc3kmYBswCGDx9OR0cHALNP2NnzM+tG\npey9wfbt2/eq+jaT26KT26KT26JxyghEr4iIkBQl12E+MB+gra0t2tvbAZgx59aGH2vdOe0NL7NZ\nOjo6qLRFq3NbdHJbdHJbNE4Zo+Yez5fbyD835/SNwOjCfqNy2sa8XJ2+Wx5Jg4FDgaf2UJaZmQ0w\nZQSixUBlFNt04JZC+rQ8Eu5Y0qCEFfky3rOSJub7P++rylMp6wzgznwf6XbgFEnD8iCFU3KamZkN\nME29NCfpBqAdOFLSBtJItsuAhZJmAo8AZwFExCpJC4EHgJ3A+RGxKxd1HmkE3lDgtvwCuBq4XtIa\n0qCIabmsLZI+Dfw07/epiKgeNGFmZgNAUwNRRPxlF5smdbH/PGBejfSVwIQa6S8AZ3ZR1jXANXVX\nth+MbfB9p3WXndbQ8szMyuAnK5iZWakciMzMrFQORGZmVioHIjMzK5UDkZmZlcqByMzMSuVAZGZm\npXIgMjOzUjkQmZlZqRyIzMysVA5EZmZWKgciMzMrlQORmZmVyoHIzMxK5UBkZmalciAyM7NSORCZ\nmVmpHIjMzKxUDkRmZlYqByIzMyuVA5GZmZXKgcjMzErlQGRmZqUaXM9Okl4HjC3uHxE3N6lOZmbW\nQroNRJKuAV4HrAJezskBOBCZmVmf1dMjmhgR4xt9YEkfBc4lBbX7gPcDrwZuIvW+1gFnRcTTef+5\nwExgF3BhRNye008ErgWGAv8OXBQRIWkIcB1wIvAUcHZErGv0eZiZWd/Uc4/ovyU1NBBJGglcCLw5\nIiYAg4BpwBxgaUSMA5bmdfLxpwHHA5OBKyUNysVdBXwQGJdfk3P6TODpiDgO+BJweSPPwczMGqOe\nQHQdKRitlvQLSfdJ+kUDjj0YGCppMKkn9CgwBViQty8ApublKcCNEbEjItYCa4CTJB0FHBIRyyIi\ncl2LeSplLQImSVID6m1mZg1Uz6W5q4H3ki6fvdzNvnWJiI2S/hH4NfAb4I6IuEPSiIjYlHd7DBiR\nl0cCywpFbMhpL+Xl6vRKnvX5eDslbQWOAJ5sxDmYmVlj1BOInoiIxY08qKRhpB7LscAzwHckvae4\nT77PE408bhd1mQXMAhg+fDgdHR0AzD5hZ7MP3WeVujbD9u3bm1r+3sRt0clt0clt0Tj1BKKfS/o2\n8G/AjkpiH4dvvx1YGxFPAEi6GXgr8LikoyJiU77stjnvvxEYXcg/KqdtzMvV6cU8G/Llv0NJgxZ2\nExHzgfkAbW1t0d7eDsCMObf24fT6x7pz2ptWdkdHB5W2aHVui05u
i05ui8ap5x7RUFIAOgV4V36d\n3sfj/hqYKOnV+b7NJOBBYDEwPe8zHbglLy8GpkkaIulY0qCEFfky3rOSJuZy3leVp1LWGcCd+T6S\nmZkNIN32iCLi/Y0+aEQsl7QI+BmwE/g5qVdyELBQ0kzgEeCsvP8qSQuBB/L+50fErlzceXQO374t\nvyDd27pe0hpgC2nUnZmZDTD1/EPrN0j/67ObiPhAXw4cEZcAl1Ql7yD1jmrtPw+YVyN9JTChRvoL\nwJl9qaOZmTVfPfeIvl9YPgD4c9JQazMzsz6r59Lcd4vrkm4Afty0GpmZWUvpzdO3xwG/0+iKmJlZ\na6rnHtE20j0i5Z+PAX/T5HqZmVmLqOfS3MH9UREzM2tN9c5HNBIYw+7zEd3VrEqZmVnrqOfS3OXA\n2aT/4an8704ADkRmZtZn9fSIpgJtEbGj2z3NzMx6qJ5Rcw8D+zW7ImZm1prq6RE9D9wjaSm7P/T0\nwqbVyszMWkY9gWhxfpmZmTVcPcO3F+xpu6TvRsRfNK5KZmbWSnrzZIVqv9eAMszMrEU1IhB5jh8z\nM+u1RgQiMzOzXmtEIFIDyjAzsxZVVyCSNFRSWxeb/QBUMzPrtW4DkaR3AfcAP8jrb5D0ynDuiLij\nedUzM7N9XT09okuBk4BnACLiHuDYJtbJzMxaSD2B6KWI2FqV5pFyZmbWEPU8WWGVpP8NDJI0DrgQ\n+Elzq2VmZq2inh7RBcDxpOfMfRvYCnykmZUyM7PWUc8jfp4HLs4vMzOzhqpn1NwSSYcV1odJur25\n1TIzs1ZRz6W5IyPimcpKRDwN/E7zqmRmZq2knkD0sqRjKiuSxuBRc2Zm1iD1BKKLgR9Lul7SN4G7\ngLl9PbCkwyQtkvRLSQ9Keoukw/OlwIfyz2GF/edKWiNptaRTC+knSrovb7tCknL6EEk35fTlksb2\ntc5mZtZ43QaiiPgB8CbgJuBG4MSIaMQ9oi8DP4iI3wdeDzwIzAGWRsQ4YGleR9J4YBpp9N5k4EpJ\ng3I5VwEfBMbl1+ScPhN4OiKOA74EXN6AOpuZWYPV+9DTIcAW4FlgvKS39eWgkg4F3gZcDRARL+b7\nUFOAykR8C4CpeXkKcGNE7IiItcAa4CRJRwGHRMSyiAjguqo8lbIWAZMqvSUzMxs4uh2+Lely4Gxg\nFfByTg7SJbreOhZ4AviGpNcDdwMXASMiYlPe5zFgRF4eCSwr5N+Q017Ky9XplTzrASJip6StwBHA\nk1XnNwuYBTB8+HA6OjoAmH3Czj6cXv+o1LUZtm/f3tTy9yZui05ui05ui8ap58kKU4G2iNjR4OO+\nCbggIpZL+jL5MlxFRISkpg+KiIj5wHyAtra2aG9vB2DGnFubfeg+W3dOe9PK7ujooNIWrc5t0clt\n0clt0Tj1XJp7GNivwcfdAGyIiOV5fREpMD2eL7eRf27O2zcCowv5R+W0jXm5On23PJIGA4cCTzX4\nPMzMrI/q6RE9D9wjaSnpMT8ARMSFvT1oRDwmab2ktohYDUwCHsiv6cBl+ectOcti4NuSvggcTRqU\nsCIidkl6VtJEYDnwPuCfC3mmA/8NnAHcme8j7TPGNqHXtu6y0xpeppnZntQTiBbnV6NdAHxL0v6k\nXtf7ST20hZJmAo8AZwFExCpJC0mBaidwfkTsyuWcB1wLDAVuyy9IAyGul7SGNNBiWhPOwczM+qie\nZ80tkDQUOCb3Xhoiz2v05hqbJnWx/zxgXo30lcCEGukvAGf2sZpmZtZkfZ6h1czMrC96O0Pr7zWx\nTmZm1kJ6O0PryzX3NDMz6yHP0GpmZqXq7QytFzWzUmZm1jrq6RGdFhG7zdAq6UzgO02rlZmZtYx6\nekS1pnzo8zQQZmZmsIcekaR3AO8ERkq6orDpENI/lZqZmfXZni7NPQqsBN5Nejp2xTbgo82slJmZ\ntY4uA1FE3AvcK+nbEfFSP9bJ
zMxaSD2DFU6SdCkwJu8v0iwN/qdWMzPrs3oC0dWkS3F3A7u62dfM\nzKxH6glEWyPitu53MzMz67l6AtEPJX0euJnd5yP6WdNqZWZmLaOeQHRy/lmcsiGAP218dczMrNXU\nMx/Rn/RHRczMrDXVMx/RCElXS7otr4/PM6iamZn1WT2P+LkWuB04Oq//CvhIsypkZmatpZ5AdGRE\nLCTPQRQRO/EwbjMza5B6AtFzko4gDVBA0kTSVBBmZmZ9Vs+oub8CFgOvkfRfwHDgjKbWyszMWkY9\nPaLXAO8A3kq6V/QQ9QUwMzOzbtUTiD4ZEc8Cw4A/Aa4ErmpqrczMrGXUE4gqAxNOA74WEbcC+zev\nSmZm1krqCUQbJX0VOBv4d0lD6sxnZmbWrXoCylmke0OnRsQzwOHAX/f1wJIGSfq5pO/n9cMlLZH0\nUP45rLDvXElrJK2WdGoh/URJ9+VtV0hSTh8i6aacvlzS2L7W18zMmqPbQBQRz0fEzRHxUF7fFBF3\nNODYFwEPFtbnAEsjYhywNK8jaTwwDTgemAxcKWlQznMV8EFgXH5Nzukzgacj4jjgS8DlDaivmZk1\nQSmX2CSNIt1z+noheQqwIC8vAKYW0m+MiB0RsRZYQ5qs7yjgkIhYFhEBXFeVp1LWImBSpbdkZmYD\nS1nDsP8J+DhwcCFtRERsysuPASPy8khgWWG/DTntpbxcnV7Jsx7SkyAkbQWOAJ6sroikWcAsgOHD\nh9PR0QHA7BN29u7M9nKV89++ffsry63ObdHJbdHJbdE4/R6IJJ0ObI6IuyW119onIkJS9Ed9ImI+\nMB+gra0t2ttTlWbMubU/Dj/grDunHUgBqdIWrc5t0clt0clt0Thl9Ij+CHi3pHcCBwCHSPom8Lik\noyJiU77stjnvvxEYXcg/KqdtzMvV6cU8GyQNBg4FnmrWCZmZWe/1+z2iiJgbEaMiYixpEMKdEfEe\n0mOEpufdpgO35OXFwLQ8Eu5Y0qCEFfky3rOSJub7P++rylMp64x8jH7pYZmZWc8MpEf1XAYszHMd\nPUIaNk5ErJK0EHgA2AmcHxGVf7I9jzRNxVDgtvwCuBq4XtIaYAsp4JmZ2QBUaiCKiA6gIy8/BUzq\nYr95wLwa6SuBCTXSXwDObGBVzcysSfyEBDMzK5UDkZmZlcqByMzMSuVAZGZmpXIgMjOzUjkQmZlZ\nqRyIzMysVA5EZmZWKgciMzMrlQORmZmVyoHIzMxK5UBkZmalciAyM7NSORCZmVmpHIjMzKxUDkRm\nZlYqByIzMyuVA5GZmZWq1KnCbeAZO+dWAGafsJMZebkv1l12Wp/LMLN9m3tEZmZWKgciMzMrlQOR\nmZmVyoHIzMxK5UBkZmalciAyM7NSlRKIJI2W9ENJD0haJeminH64pCWSHso/hxXyzJW0RtJqSacW\n0k+UdF/edoUk5fQhkm7K6cslje3v8zQzs+6V1SPaCcyOiPHAROB8SeOBOcDSiBgHLM3r5G3TgOOB\nycCVkgblsq4CPgiMy6/JOX0m8HREHAd8Cbi8P07MzMx6ppRAFBGbIuJneXkb8CAwEpgCLMi7LQCm\n5uUpwI0RsSMi1gJrgJMkHQUcEhHLIiKA66ryVMpaBEyq9JbMzGzgKP0eUb5k9kZgOTAiIjblTY8B\nI/LySGB9IduGnDYyL1en75YnInYCW4EjGn4CZmbWJ6U+4kfSQcB3gY9ExLPFDktEhKTohzrMAmYB\nDB8+nI6ODiA94qaVjRjamDaotOfebPv27fvEeTSC26KT26JxSgtEkvYjBaFvRcTNOflxSUdFxKZ8\n2W1zTt8IjC5kH5XTNubl6vRing2SBgOHAk9V1yMi5gPzAdra2qK9vR2gIc9Z25vNPmEnX7iv7x+P\ndee0970yJevo6KDyuWh1botObovGKWvUnICrgQcj4ouFTYuB6Xl5OnBLIX1aHgl3LGlQwop8Ge
9Z\nSRNzme+rylMp6wzgznwfyczMBpCyekR/BLwXuE/SPTntb4HLgIWSZgKPAGcBRMQqSQuBB0gj7s6P\niF0533nAtcBQ4Lb8ghTorpe0BthCGnVnZmYDTCmBKCJ+DHQ1gm1SF3nmAfNqpK8EJtRIfwE4sw/V\nNDOzflD6qDkzM2ttDkRmZlYqByIzMyuVA5GZmZXKgcjMzErlQGRmZqVyIDIzs1I5EJmZWakciMzM\nrFQORGZmVqpSp4Gwfd/YBj/FfN1lpzW0PDMrn3tEZmZWKgciMzMrlQORmZmVyoHIzMxK5UBkZmal\nciAyM7NSORCZmVmpHIjMzKxUDkRmZlYqByIzMyuVH/Fje5VGPzII/Nggs7K5R2RmZqVyIDIzs1I5\nEJmZWal8j8haXnf3nWafsJMZPbg35XtOZj2zT/eIJE2WtFrSGklzyq6PmZn9tn22RyRpEPCvwJ8B\nG4CfSlocEQ+UWzPb13lkn1nP7LOBCDgJWBMRDwNIuhGYAjgQ2V6nGcGtN/Z0mdLB0npLEVF2HZpC\n0hnA5Ig4N6+/Fzg5Ij5ctd8sYFZenQDc368VHbiOBJ4suxIDhNuik9uik9uiU1tEHNzbzPtyj6gu\nETEfmA8gaWVEvLnkKg0IbotObotObotObotOklb2Jf++PFhhIzC6sD4qp5mZ2QCyLweinwLjJB0r\naX9gGrC45DqZmVmVffbSXETslPRh4HZgEHBNRKzqJtv85tdsr+G26OS26OS26OS26NSntthnByuY\nmdneYV++NGdmZnsBByIzMyuVAxGt/SggSaMl/VDSA5JWSboopx8uaYmkh/LPYWXXtb9IGiTp55K+\nn9dbsi0kHSZpkaRfSnpQ0ltauC0+mn8/7pd0g6QDWqUtJF0jabOk+wtpXZ67pLn5u3S1pFPrOUbL\nB6LCo4DeAYwH/lLS+HJr1a92ArMjYjwwETg/n/8cYGlEjAOW5vVWcRHwYGG9Vdviy8APIuL3gdeT\n2qTl2kLSSOBC4M0RMYE0+GkardMW1wKTq9Jqnnv+7pgGHJ/zXJm/Y/eo5QMRhUcBRcSLQOVRQC0h\nIjZFxM/y8jbSl81IUhssyLstAKaWU8P+JWkUcBrw9UJyy7WFpEOBtwFXA0TEixHxDC3YFtlgYKik\nwcCrgUdpkbaIiLuALVXJXZ37FODGiNgREWuBNaTv2D1yIEpfuusL6xtyWsuRNBZ4I7AcGBERm/Km\nx4ARJVWrv/0T8HHg5UJaK7bFscATwDfyZcqvSzqQFmyLiNgI/CPwa2ATsDUi7qAF26Kgq3Pv1fep\nA5EBIOmh5BwdAAAEZ0lEQVQg4LvARyLi2eK2SGP89/lx/pJOBzZHxN1d7dMqbUHqAbwJuCoi3gg8\nR9Wlp1Zpi3z/YwopOB8NHCjpPcV9WqUtamnEuTsQ+VFASNqPFIS+FRE35+THJR2Vtx8FbC6rfv3o\nj4B3S1pHukT7p5K+SWu2xQZgQ0Qsz+uLSIGpFdvi7cDaiHgiIl4CbgbeSmu2RUVX596r71MHohZ/\nFJAkke4DPBgRXyxsWgxMz8vTgVv6u279LSLmRsSoiBhL+hzcGRHvoTXb4jFgvaS2nDSJNIVKy7UF\n6ZLcREmvzr8vk0j3UluxLSq6OvfFwDRJQyQdC4wDVnRXmJ+sAEh6J+neQOVRQPNKrlK/kfQ/gP8E\n7qPzvsjfku4TLQSOAR4BzoqI6huW+yxJ7cDHIuJ0SUfQgm0h6Q2kQRv7Aw8D7yf98dqKbfH3wNmk\nUaY/B84FDqIF2kLSDUA7adqLx4FLgP9LF+cu6WLgA6S2+khE3NbtMRyIzMysTL40Z2ZmpXIgMjOz\nUjkQmZlZqRyIzMysVA5EZmZWKgciMzMrlQORWQNIekP+f7Qy69BembqiweVOLT6RXlKHpDc3+jjW\nuhyIzBrjDUCpgaiJppKmSDFrCgcia3mSDpR0q6R788RnZ0
s6UdKPJN0t6fbCc7U6JF0uaYWkX0n6\n4/xoqE8BZ0u6J+c/ME8otiI/vXpKzj9D0s2SfpAnFfuHQj0mS/pZrsfSQt1+q5w6z6mnx5+Zz2mF\npK9J+hdJbwXeDXw+n9tr8u5nFtugAW+DtbDBZVfAbACYDDwaEafBK3Px3AZMiYgnJJ0NzCM9tgRg\ncESclC/FXRIRb5f0d6SJ0z6cy/gs6Vl1H5B0GLBC0n/k/G8gTbexA1gt6Z+BF4CvAW+LiLWSDs/7\nXlyrnIh4rptzqplvD8ffBXyS9GDTbcCdwL0R8RNJi4HvR8SifG6/1QakB4Oa9YoDkVl6zt4XJF0O\nfB94GpgALMlfuoNI89BUVJ5QfjcwtosyTyE9yftjef0A0nO5IM1suRVA0gPAGGAYcFeeTIzCM8u6\nKqc4g2wjjn8k8KPC88K+A7x2D+XX0wZmdXEgspYXEb+S9CbSPZ7PkHoDqyLiLV1k2ZF/7qLr3yEB\nfxERq3dLlE4u5O+ujC7LqUOjjt+VetrArC6+R2QtT9LRwPMR8U3g88DJwHBJb8nb95N0fDfFbAMO\nLqzfDlyQpw1A0hu7yb8MeFt+dD6FS3M9Lae3x/8p8D8lDVOaDvsvCtuqz82soRyIzOAE0j2Ue0j3\nO/4OOAO4XNK9wD2kidD25IfA+MpgBeDTwH7ALyStyutdiogngFnAzfmYN+VNPSqnoKfH3wh8ljR3\nzH8B64CtefONwF/nQQ+vqV2CWe95GggzA9J08RGxPfeIvkeam+t7ZdfL9n3uEZlZxaW5V3g/sJY0\n+ZlZ07lHZLaXkXQqcHlV8tqI+PMy6mPWVw5EZmZWKl+aMzOzUjkQmZlZqRyIzMysVA5EZmZWqv8P\njRdggr3CGB4AAAAASUVORK5CYII=\n",
      "text/plain": [
       "<matplotlib.figure.Figure at 0x7fd49b278d10>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Distribution of sentence lengths\n",
    "df_data['sentence_len'].hist(bins=100)\n",
    "plt.xlim(0, 100)\n",
    "plt.xlabel('sentence_length')\n",
    "plt.ylabel('sentence_num')\n",
    "plt.title('Distribution of the Length of Sentence')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The histogram above shows that, after splitting on punctuation, the vast majority of sentences are shorter than 30 characters. Since it is usually faster to train when the inputs are padded to a fixed length, we take 32 as the sentence length: sentences longer than 32 characters are truncated, and shorter ones are padded with a special character."
   ]
  },
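  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a standalone illustration of this truncate-or-pad rule (a toy sketch; the actual implementation below works on id sequences via `word2id` and a pandas `apply`):\n",
    "\n",
    "```python\n",
    "def pad_or_truncate(ids, max_len=32, pad_id=0):\n",
    "    # keep at most max_len ids; fill short sequences with pad_id\n",
    "    if len(ids) >= max_len:\n",
    "        return ids[:max_len]\n",
    "    return ids + [pad_id] * (max_len - len(ids))\n",
    "```\n",
    "\n",
    "Id 0 is reserved as the padding value for both characters and tags, which is why the word ids below start from 1 and the tag list starts with the padding tag `x`."
   ]
  },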
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# 1. Use chain(*lists) to concatenate multiple lists\n",
    "from itertools import chain\n",
    "all_words = list(chain(*df_data['words'].values))\n",
    "# 2. Count all the words\n",
    "sr_allwords = pd.Series(all_words)\n",
    "sr_allwords = sr_allwords.value_counts()\n",
    "set_words = sr_allwords.index\n",
    "set_ids = range(1, len(set_words)+1)  # start from 1: 0 is reserved as the padding value\n",
    "tags = [ 'x', 's', 'b', 'm', 'e']\n",
    "tag_ids = range(len(tags))\n",
    "\n",
    "# 3. Build mappings from words and tags to numeric ids (a Series is more convenient than a dict)\n",
    "word2id = pd.Series(set_ids, index=set_words)\n",
    "id2word = pd.Series(set_words, index=set_ids)\n",
    "tag2id = pd.Series(tag_ids, index=tags)\n",
    "id2tag = pd.Series(tags, index=tag_ids)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "vocab_size=5158\n"
     ]
    }
   ],
   "source": [
    "vocab_size = len(set_words)\n",
    "print 'vocab_size={}'.format(vocab_size)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Convert the words and tags to numeric ids."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false,
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 2min 3s, sys: 232 ms, total: 2min 3s\n",
      "Wall time: 2min 3s\n",
      "CPU times: user 2min 1s, sys: 328 ms, total: 2min 2s\n",
      "Wall time: 2min 2s\n"
     ]
    }
   ],
   "source": [
    "max_len = 32\n",
    "def X_padding(words):\n",
    "    \"\"\"Convert words to ids and pad/truncate to max_len.\"\"\"\n",
    "    ids = list(word2id[words])\n",
    "    if len(ids) >= max_len:  # truncate if too long\n",
    "        return ids[:max_len]\n",
    "    ids.extend([0]*(max_len-len(ids)))  # pad if too short\n",
    "    return ids\n",
    "\n",
    "def y_padding(tags):\n",
    "    \"\"\"Convert tags to ids and pad/truncate to max_len.\"\"\"\n",
    "    ids = list(tag2id[tags])\n",
    "    if len(ids) >= max_len:  # truncate if too long\n",
    "        return ids[:max_len]\n",
    "    ids.extend([0]*(max_len-len(ids)))  # pad if too short\n",
    "    return ids\n",
    "\n",
    "%time df_data['X'] = df_data['words'].apply(X_padding)\n",
    "%time df_data['y'] = df_data['tags'].apply(y_padding)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "X.shape=(321533, 32), y.shape=(321533, 32)\n",
      "Example of words:  [u'\\u4eba' u'\\u4eec' u'\\u5e38' u'\\u8bf4' u'\\u751f' u'\\u6d3b' u'\\u662f'\n",
      " u'\\u4e00' u'\\u90e8' u'\\u6559' u'\\u79d1' u'\\u4e66']\n",
      "Example of X:  [  8  43 320  88  36 198   7   2  41 163 124 245   0   0   0   0   0   0\n",
      "   0   0   0   0   0   0   0   0   0   0   0   0   0   0]\n",
      "Example of tags:  [u'b' u'e' u's' u's' u'b' u'e' u's' u's' u's' u'b' u'm' u'e']\n",
      "Example of y:  [2 4 1 1 2 4 1 1 1 2 3 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n"
     ]
    }
   ],
   "source": [
    "# Finally we have all the data\n",
    "X = np.asarray(list(df_data['X'].values))\n",
    "y = np.asarray(list(df_data['y'].values))\n",
    "print 'X.shape={}, y.shape={}'.format(X.shape, y.shape)\n",
    "print 'Example of words: ', df_data['words'].values[0]\n",
    "print 'Example of X: ', X[0]\n",
    "print 'Example of tags: ', df_data['tags'].values[0]\n",
    "print 'Example of y: ', y[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 7.78 s, sys: 236 ms, total: 8.02 s\n",
      "Wall time: 8.01 s\n",
      "CPU times: user 7.93 s, sys: 260 ms, total: 8.19 s\n",
      "Wall time: 8.18 s\n",
      "** Finished saving the data.\n"
     ]
    }
   ],
   "source": [
    "# Save the data\n",
    "import pickle\n",
    "import os\n",
    "\n",
    "if not os.path.exists('data/'):\n",
    "    os.makedirs('data/')\n",
    "\n",
    "with open('data/data.pkl', 'wb') as outp:\n",
    "    %time pickle.dump(X, outp)\n",
    "    %time pickle.dump(y, outp)\n",
    "    pickle.dump(word2id, outp)\n",
    "    pickle.dump(id2word, outp)\n",
    "    pickle.dump(tag2id, outp)\n",
    "    pickle.dump(id2tag, outp)\n",
    "print '** Finished saving the data.'    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 1.72 s, sys: 120 ms, total: 1.84 s\n",
      "Wall time: 1.84 s\n",
      "CPU times: user 1.65 s, sys: 188 ms, total: 1.84 s\n",
      "Wall time: 1.84 s\n",
      "X_train.shape=(205780, 32), y_train.shape=(205780, 32); \n",
      "X_valid.shape=(51446, 32), y_valid.shape=(51446, 32);\n",
      "X_test.shape=(64307, 32), y_test.shape=(64307, 32)\n"
     ]
    }
   ],
   "source": [
    "# Load the data\n",
    "import pickle\n",
    "with open('data/data.pkl', 'rb') as inp:\n",
    "    %time X = pickle.load(inp)\n",
    "    %time y = pickle.load(inp)\n",
    "    word2id = pickle.load(inp)\n",
    "    id2word = pickle.load(inp)\n",
    "    tag2id = pickle.load(inp)\n",
    "    id2tag = pickle.load(inp)\n",
    "\n",
    "# Split into training / validation / test sets\n",
    "from sklearn.model_selection import train_test_split\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n",
    "X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train,  test_size=0.2, random_state=42)\n",
    "print 'X_train.shape={}, y_train.shape={}; \\nX_valid.shape={}, y_valid.shape={};\\nX_test.shape={}, y_test.shape={}'.format(\n",
    "    X_train.shape, y_train.shape, X_valid.shape, y_valid.shape, X_test.shape, y_test.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. A Class that Generates Batches of Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Creating the data generator ...\n",
      "Finished creating the data generator.\n"
     ]
    }
   ],
   "source": [
    "# ** Build the data generator\n",
    "class BatchGenerator(object):\n",
    "    \"\"\" Construct a Data generator. The input X, y should be ndarray or list like type.\n",
    "    \n",
    "    Example:\n",
    "        Data_train = BatchGenerator(X=X_train_all, y=y_train_all, shuffle=False)\n",
    "        Data_test = BatchGenerator(X=X_test_all, y=y_test_all, shuffle=False)\n",
    "        X = Data_train.X\n",
    "        y = Data_train.y\n",
    "        or:\n",
    "        X_batch, y_batch = Data_train.next_batch(batch_size)\n",
    "     \"\"\" \n",
    "    \n",
    "    def __init__(self, X, y, shuffle=False):\n",
    "        if type(X) != np.ndarray:\n",
    "            X = np.asarray(X)\n",
    "        if type(y) != np.ndarray:\n",
    "            y = np.asarray(y)\n",
    "        self._X = X\n",
    "        self._y = y\n",
    "        self._epochs_completed = 0\n",
    "        self._index_in_epoch = 0\n",
    "        self._number_examples = self._X.shape[0]\n",
    "        self._shuffle = shuffle\n",
    "        if self._shuffle:\n",
    "            new_index = np.random.permutation(self._number_examples)\n",
    "            self._X = self._X[new_index]\n",
    "            self._y = self._y[new_index]\n",
    "                \n",
    "    @property\n",
    "    def X(self):\n",
    "        return self._X\n",
    "    \n",
    "    @property\n",
    "    def y(self):\n",
    "        return self._y\n",
    "    \n",
    "    @property\n",
    "    def num_examples(self):\n",
    "        return self._number_examples\n",
    "    \n",
    "    @property\n",
    "    def epochs_completed(self):\n",
    "        return self._epochs_completed\n",
    "    \n",
    "    def next_batch(self, batch_size):\n",
    "        \"\"\" Return the next 'batch_size' examples from this data set.\"\"\"\n",
    "        start = self._index_in_epoch\n",
    "        self._index_in_epoch += batch_size\n",
    "        if self._index_in_epoch > self._number_examples:\n",
    "            # finished epoch\n",
    "            self._epochs_completed += 1\n",
    "            # Shuffle the data \n",
    "            if self._shuffle:\n",
    "                new_index = np.random.permutation(self._number_examples)\n",
    "                self._X = self._X[new_index]\n",
    "                self._y = self._y[new_index]\n",
    "            start = 0\n",
    "            self._index_in_epoch = batch_size\n",
    "            assert batch_size <= self._number_examples\n",
    "        end = self._index_in_epoch\n",
    "        return self._X[start:end], self._y[start:end]\n",
    "\n",
    "print 'Creating the data generator ...'\n",
    "data_train = BatchGenerator(X_train, y_train, shuffle=True)\n",
    "data_valid = BatchGenerator(X_valid, y_valid, shuffle=False)\n",
    "data_test = BatchGenerator(X_test, y_test, shuffle=False)\n",
    "print 'Finished creating the data generator.'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. The Bi-directional LSTM Model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.1 Building the Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Finished creating the bi-lstm model.\n"
     ]
    }
   ],
   "source": [
    "import tensorflow as tf\n",
    "config = tf.ConfigProto()\n",
    "config.gpu_options.allow_growth = True\n",
    "sess = tf.Session(config=config)\n",
    "from tensorflow.contrib import rnn\n",
    "import numpy as np\n",
    "\n",
    "'''\n",
    "For Chinese word segmentation.\n",
    "'''\n",
    "# ##################### config ######################\n",
    "decay = 0.85\n",
    "max_epoch = 5\n",
    "max_max_epoch = 10\n",
    "timestep_size = max_len = 32           # sentence length\n",
    "vocab_size = 5159    # number of distinct characters + 1 (0 is the padding id), from the preprocessing step\n",
    "input_size = embedding_size = 64       # character embedding size\n",
    "class_num = 5\n",
    "hidden_size = 128    # number of hidden units\n",
    "layer_num = 2        # number of bi-lstm layers\n",
    "max_grad_norm = 5.0  # maximum gradient norm (larger gradients are clipped to this value)\n",
    "\n",
    "lr = tf.placeholder(tf.float32, [])\n",
    "keep_prob = tf.placeholder(tf.float32, [])\n",
    "batch_size = tf.placeholder(tf.int32, [])  # note: the type must be tf.int32\n",
    "model_save_path = 'ckpt/bi-lstm.ckpt'  # where to save the model\n",
    "\n",
    "\n",
    "with tf.variable_scope('embedding'):\n",
    "    embedding = tf.get_variable(\"embedding\", [vocab_size, embedding_size], dtype=tf.float32)\n",
    "\n",
    "def weight_variable(shape):\n",
    "    \"\"\"Create a weight variable with appropriate initialization.\"\"\"\n",
    "    initial = tf.truncated_normal(shape, stddev=0.1)\n",
    "    return tf.Variable(initial)\n",
    "\n",
    "def bias_variable(shape):\n",
    "    \"\"\"Create a bias variable with appropriate initialization.\"\"\"\n",
    "    initial = tf.constant(0.1, shape=shape)\n",
    "    return tf.Variable(initial)\n",
    "\n",
    "def lstm_cell():\n",
    "    cell = rnn.LSTMCell(hidden_size, reuse=tf.get_variable_scope().reuse)\n",
    "    return rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)\n",
    "         \n",
    "def bi_lstm(X_inputs):\n",
    "    \"\"\"build the bi-LSTMs network. Return the y_pred\"\"\"\n",
    "    # X_inputs.shape = [batchsize, timestep_size]  ->  inputs.shape = [batchsize, timestep_size, embedding_size]\n",
    "    inputs = tf.nn.embedding_lookup(embedding, X_inputs)  \n",
    "    \n",
    "    # ** 1. Build the multi-layer forward and backward LSTM cells\n",
    "    cell_fw = rnn.MultiRNNCell([lstm_cell() for _ in range(layer_num)], state_is_tuple=True)\n",
    "    cell_bw = rnn.MultiRNNCell([lstm_cell() for _ in range(layer_num)], state_is_tuple=True)\n",
    "  \n",
    "    # ** 2. Initial states\n",
    "    initial_state_fw = cell_fw.zero_state(batch_size, tf.float32)\n",
    "    initial_state_bw = cell_bw.zero_state(batch_size, tf.float32)  \n",
    "    \n",
    "    # The two blocks below are equivalent.\n",
    "    # **************************************************************\n",
    "    # ** Reshape inputs into the form required by rnn.static_bidirectional_rnn\n",
    "    # ** From the documentation:\n",
    "    # inputs: A length T list of inputs, each a tensor of shape\n",
    "    # [batch_size, input_size], or a nested tuple of such elements.\n",
    "    # *************************************************************\n",
    "    # Unstack to get a list of 'n_steps' tensors of shape (batch_size, n_input)\n",
    "    # inputs.shape = [batchsize, timestep_size, embedding_size]  ->  timestep_size tensors, each of shape [batchsize, embedding_size]\n",
    "    # inputs = tf.unstack(inputs, timestep_size, 1)\n",
    "    # ** 3. Bi-LSTM computation (TF built-in): one would normally call static_bidirectional_rnn below,\n",
    "    #    but to expose the details of the computation, the same logic is unrolled by hand further down.\n",
    "#     try:\n",
    "#         outputs, _, _ = rnn.static_bidirectional_rnn(cell_fw, cell_bw, inputs, \n",
    "#                         initial_state_fw = initial_state_fw, initial_state_bw = initial_state_bw, dtype=tf.float32)\n",
    "#     except Exception: # Old TensorFlow version only returns outputs not states\n",
    "#         outputs = rnn.static_bidirectional_rnn(cell_fw, cell_bw, inputs, \n",
    "#                         initial_state_fw = initial_state_fw, initial_state_bw = initial_state_bw, dtype=tf.float32)\n",
    "#     output = tf.reshape(tf.concat(outputs, 1), [-1, hidden_size * 2])\n",
    "    # ***********************************************************\n",
    "    \n",
    "    # ***********************************************************\n",
    "    # ** 3. Bi-LSTM computation (unrolled by hand)\n",
    "    with tf.variable_scope('bidirectional_rnn'):\n",
    "        # *** Each direction computes its own outputs and state\n",
    "        # Forward direction\n",
    "        outputs_fw = list()\n",
    "        state_fw = initial_state_fw\n",
    "        with tf.variable_scope('fw'):\n",
    "            for timestep in range(timestep_size):\n",
    "                if timestep > 0:\n",
    "                    tf.get_variable_scope().reuse_variables()\n",
    "                (output_fw, state_fw) = cell_fw(inputs[:, timestep, :], state_fw)\n",
    "                outputs_fw.append(output_fw)\n",
    "        \n",
    "        # backward direction\n",
    "        outputs_bw = list()\n",
    "        state_bw = initial_state_bw\n",
    "        with tf.variable_scope('bw') as bw_scope:\n",
    "            inputs = tf.reverse(inputs, [1])\n",
    "            for timestep in range(timestep_size):\n",
    "                if timestep > 0:\n",
    "                    tf.get_variable_scope().reuse_variables()\n",
    "                (output_bw, state_bw) = cell_bw(inputs[:, timestep, :], state_bw)\n",
    "                outputs_bw.append(output_bw)\n",
    "        # *** Then flip outputs_bw back along the timestep dimension\n",
    "        # outputs_bw.shape = [timestep_size, batch_size, hidden_size]\n",
    "        outputs_bw = tf.reverse(outputs_bw, [0])\n",
    "        # Concatenate the two outputs into [timestep_size, batch_size, hidden_size*2]\n",
    "        output = tf.concat([outputs_fw, outputs_bw], 2)\n",
    "        output = tf.transpose(output, perm=[1,0,2])\n",
    "        output = tf.reshape(output, [-1, hidden_size*2])\n",
    "    # ***********************************************************\n",
    "    return output # [-1, hidden_size*2]\n",
    "\n",
    "\n",
    "with tf.variable_scope('Inputs'):\n",
    "    X_inputs = tf.placeholder(tf.int32, [None, timestep_size], name='X_input')\n",
    "    y_inputs = tf.placeholder(tf.int32, [None, timestep_size], name='y_input')   \n",
    "    \n",
    "bilstm_output = bi_lstm(X_inputs)\n",
    "\n",
    "with tf.variable_scope('outputs'):\n",
    "    softmax_w = weight_variable([hidden_size * 2, class_num]) \n",
    "    softmax_b = bias_variable([class_num]) \n",
    "    y_pred = tf.matmul(bilstm_output, softmax_w) + softmax_b\n",
    "\n",
    "# adding extra statistics to monitor\n",
    "# y_inputs.shape = [batch_size, timestep_size]\n",
    "correct_prediction = tf.equal(tf.cast(tf.argmax(y_pred, 1), tf.int32), tf.reshape(y_inputs, [-1]))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "cost = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels = tf.reshape(y_inputs, [-1]), logits = y_pred))\n",
    "\n",
    "# ***** Optimization *****\n",
    "tvars = tf.trainable_variables()  # all trainable variables\n",
    "grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), max_grad_norm)  # clipped gradients of the loss w.r.t. each variable\n",
    "optimizer = tf.train.AdamOptimizer(learning_rate=lr)   # optimizer\n",
    "\n",
    "# apply the gradient update\n",
    "train_op = optimizer.apply_gradients( zip(grads, tvars),\n",
    "    global_step=tf.contrib.framework.get_or_create_global_step())\n",
    "print 'Finished creating the bi-lstm model.'"
   ]
  },
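  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The tensor bookkeeping at the end of `bi_lstm` (reverse, concat, transpose, reshape) can be mirrored in plain NumPy. The sketch below is only a shape check with random toy data, not part of the model:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "T, B, H = 4, 2, 3                      # timesteps, batch, hidden size (toy values)\n",
    "outputs_fw = np.random.rand(T, B, H)   # forward outputs, in time order\n",
    "outputs_bw = np.random.rand(T, B, H)   # backward outputs, in reversed time order\n",
    "\n",
    "outputs_bw = outputs_bw[::-1]                              # flip back to time order\n",
    "output = np.concatenate([outputs_fw, outputs_bw], axis=2)  # [T, B, 2H]\n",
    "output = output.transpose(1, 0, 2)                         # [B, T, 2H]\n",
    "output = output.reshape(-1, 2 * H)                         # [B*T, 2H], matching [-1, hidden_size*2]\n",
    "```"
   ]
  },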
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.2 Training the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "EPOCH 1， lr=0.0001\n",
      "\ttraining acc=0.707389, cost=0.775753;  valid acc= 0.75947, cost=0.54095 \n",
      "\ttraining acc=0.759384, cost=0.515137;  valid acc= 0.776868, cost=0.474287 \n",
      "\ttraining acc=0.778026, cost=0.468697;  valid acc= 0.79003, cost=0.446381 \n",
      "\ttraining acc=0.790807, cost=0.445514;  valid acc= 0.797641, cost=0.431316 \n",
      "\ttraining acc=0.795798, cost=0.434927;  valid acc= 0.800602, cost=0.422603 \n",
      "\ttraining 205780, acc=0.766314, cost=0.527892 \n",
      "Epoch training 205780, acc=0.766314, cost=0.527892, speed=143.469 s/epoch\n",
      "EPOCH 2， lr=0.0001\n",
      "\ttraining acc=0.804128, cost=0.418064;  valid acc= 0.812661, cost=0.404303 \n",
      "\ttraining acc=0.822, cost=0.398365;  valid acc= 0.845194, cost=0.36579 \n",
      "\ttraining acc=0.858935, cost=0.350317;  valid acc= 0.889299, cost=0.301744 \n",
      "\ttraining acc=0.889116, cost=0.296755;  valid acc= 0.899861, cost=0.266887 \n",
      "\ttraining acc=0.897758, cost=0.27283;  valid acc= 0.905684, cost=0.251335 \n",
      "\ttraining 205780, acc=0.85444, cost=0.347169 \n",
      "Epoch training 205780, acc=0.85444, cost=0.347169, speed=141.351 s/epoch\n",
      "EPOCH 3， lr=0.0001\n",
      "\ttraining acc=0.904016, cost=0.257263;  valid acc= 0.911316, cost=0.238502 \n",
      "\ttraining acc=0.907998, cost=0.247058;  valid acc= 0.915211, cost=0.22845 \n",
      "\ttraining acc=0.912551, cost=0.235368;  valid acc= 0.918087, cost=0.219604 \n",
      "\ttraining acc=0.916842, cost=0.225103;  valid acc= 0.922426, cost=0.209635 \n",
      "\ttraining acc=0.919821, cost=0.217493;  valid acc= 0.92551, cost=0.201713 \n",
      "the save path is  ckpt/bi-lstm.ckpt-3\n",
      "\ttraining 205780, acc=0.912259, cost=0.236415 \n",
      "Epoch training 205780, acc=0.912259, cost=0.236415, speed=142.324 s/epoch\n",
      "EPOCH 4， lr=0.0001\n",
      "\ttraining acc=0.922908, cost=0.208851;  valid acc= 0.928602, cost=0.193111 \n",
      "\ttraining acc=0.925849, cost=0.20201;  valid acc= 0.931035, cost=0.186958 \n",
      "\ttraining acc=0.927849, cost=0.196128;  valid acc= 0.933309, cost=0.18026 \n",
      "\ttraining acc=0.930286, cost=0.189689;  valid acc= 0.93508, cost=0.175645 \n",
      "\ttraining acc=0.932908, cost=0.18321;  valid acc= 0.93667, cost=0.171521 \n",
      "\ttraining 205780, acc=0.927976, cost=0.195941 \n",
      "Epoch training 205780, acc=0.927976, cost=0.195941, speed=140.209 s/epoch\n",
      "EPOCH 5， lr=0.0001\n",
      "\ttraining acc=0.934327, cost=0.179739;  valid acc= 0.937869, cost=0.168317 \n",
      "\ttraining acc=0.935353, cost=0.176384;  valid acc= 0.938816, cost=0.165489 \n",
      "\ttraining acc=0.936464, cost=0.17336;  valid acc= 0.939926, cost=0.162875 \n",
      "\ttraining acc=0.936288, cost=0.17299;  valid acc= 0.940903, cost=0.16066 \n",
      "\ttraining acc=0.937714, cost=0.169626;  valid acc= 0.941379, cost=0.158603 \n",
      "\ttraining 205780, acc=0.936028, cost=0.17442 \n",
      "Epoch training 205780, acc=0.936028, cost=0.17442, speed=140.176 s/epoch\n",
      "EPOCH 6， lr=0.0001\n",
      "\ttraining acc=0.938181, cost=0.167858;  valid acc= 0.942277, cost=0.156263 \n",
      "\ttraining acc=0.939296, cost=0.165677;  valid acc= 0.9428, cost=0.15555 \n",
      "\ttraining acc=0.940196, cost=0.162571;  valid acc= 0.94334, cost=0.153556 \n",
      "\ttraining acc=0.94093, cost=0.161105;  valid acc= 0.943933, cost=0.152148 \n",
      "\ttraining acc=0.941557, cost=0.159799;  valid acc= 0.944393, cost=0.151195 \n",
      "the save path is  ckpt/bi-lstm.ckpt-6\n",
      "\ttraining 205780, acc=0.940047, cost=0.163356 \n",
      "Epoch training 205780, acc=0.940047, cost=0.163356, speed=140.412 s/epoch\n",
      "**TEST RESULT:\n",
      "**Test 64307, acc=0.944173, cost=0.152118\n"
     ]
    }
   ],
   "source": [
    "def test_epoch(dataset):\n",
    "    \"\"\"Run one pass over a validation or test set.\"\"\"\n",
    "    _batch_size = 500\n",
    "    fetches = [accuracy, cost]\n",
    "    _y = dataset.y\n",
    "    data_size = _y.shape[0]\n",
    "    batch_num = int(data_size / _batch_size)\n",
    "    start_time = time.time()\n",
    "    _costs = 0.0\n",
    "    _accs = 0.0\n",
    "    for i in xrange(batch_num):\n",
    "        X_batch, y_batch = dataset.next_batch(_batch_size)\n",
    "        feed_dict = {X_inputs:X_batch, y_inputs:y_batch, lr:1e-5, batch_size:_batch_size, keep_prob:1.0}\n",
    "        _acc, _cost = sess.run(fetches, feed_dict)\n",
    "        _accs += _acc\n",
    "        _costs += _cost    \n",
    "    mean_acc= _accs / batch_num     \n",
    "    mean_cost = _costs / batch_num\n",
    "    return mean_acc, mean_cost\n",
    "\n",
    "\n",
    "sess.run(tf.global_variables_initializer())\n",
    "tr_batch_size = 128 \n",
    "max_max_epoch = 6\n",
    "display_num = 5  # show 5 intermediate results per epoch\n",
    "tr_batch_num = int(data_train.y.shape[0] / tr_batch_size)  # batches per epoch\n",
    "display_batch = int(tr_batch_num / display_num)  # report once every display_batch batches\n",
    "saver = tf.train.Saver(max_to_keep=10)  # keep at most 10 checkpoints\n",
    "for epoch in xrange(max_max_epoch):\n",
    "    _lr = 1e-4\n",
    "    if epoch > max_epoch:\n",
    "        _lr = _lr * ((decay) ** (epoch - max_epoch))\n",
    "    print 'EPOCH %d, lr=%g' % (epoch+1, _lr)\n",
    "    start_time = time.time()\n",
    "    _costs = 0.0\n",
    "    _accs = 0.0\n",
    "    show_accs = 0.0\n",
    "    show_costs = 0.0\n",
    "    for batch in xrange(tr_batch_num): \n",
    "        fetches = [accuracy, cost, train_op]\n",
    "        X_batch, y_batch = data_train.next_batch(tr_batch_size)\n",
    "        feed_dict = {X_inputs:X_batch, y_inputs:y_batch, lr:_lr, batch_size:tr_batch_size, keep_prob:0.5}\n",
    "        _acc, _cost, _ = sess.run(fetches, feed_dict) # the cost is the mean cost of one batch\n",
    "        _accs += _acc\n",
    "        _costs += _cost\n",
    "        show_accs += _acc\n",
    "        show_costs += _cost\n",
    "        if (batch + 1) % display_batch == 0:\n",
    "            valid_acc, valid_cost = test_epoch(data_valid)  # valid\n",
    "            print '\\ttraining acc=%g, cost=%g;  valid acc= %g, cost=%g ' % (show_accs / display_batch,\n",
    "                                                show_costs / display_batch, valid_acc, valid_cost)\n",
    "            show_accs = 0.0\n",
    "            show_costs = 0.0\n",
    "    mean_acc = _accs / tr_batch_num \n",
    "    mean_cost = _costs / tr_batch_num\n",
    "    if (epoch + 1) % 3 == 0:  # save a checkpoint every 3 epochs\n",
    "        save_path = saver.save(sess, model_save_path, global_step=(epoch+1))\n",
    "        print 'the save path is ', save_path\n",
    "    print '\\ttraining %d, acc=%g, cost=%g ' % (data_train.y.shape[0], mean_acc, mean_cost)\n",
    "    print 'Epoch training %d, acc=%g, cost=%g, speed=%g s/epoch' % (data_train.y.shape[0], mean_acc, mean_cost, time.time()-start_time)        \n",
    "# testing\n",
    "print '**TEST RESULT:'\n",
    "test_acc, test_cost = test_epoch(data_test)\n",
    "print '**Test %d, acc=%g, cost=%g' % (data_test.y.shape[0], test_acc, test_cost) "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To test the model, a given string first has to be converted into the input form the model expects: the text is fed in one segment at a time (each segment is limited to max_len=32 characters), every character is mapped to its id, and every segment is padded to the fixed length. In other words, the input is a list whose elements are lists of ids:<br/>\n",
    "[[id0, id1, ..., id31], [id0, id1, ..., id31], ...]"
   ]
  },
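  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The padding described above can be sketched with made-up ids (the real ids come from `word2id`):\n",
    "\n",
    "```python\n",
    "max_len = 32\n",
    "ids = [120, 107, 360]                   # invented ids for a 3-character segment\n",
    "padded = ids + [0] * (max_len - len(ids))  # pad with 0 up to the fixed length\n",
    "batch = [padded]                        # a list of id-lists, here with one segment\n",
    "```"
   ]
  },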
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Restoring parameters from ckpt/bi-lstm.ckpt-6\n",
      "CPU times: user 424 ms, sys: 144 ms, total: 568 ms\n",
      "Wall time: 399 ms\n"
     ]
    }
   ],
   "source": [
    "# ** Restore the trained model\n",
    "saver = tf.train.Saver()\n",
    "best_model_path = 'ckpt/bi-lstm.ckpt-6'\n",
    "%time saver.restore(sess, best_model_path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "X_tt.shape= (2, 32) y_tt.shape= (2, 32)\n",
      "X_tt =  [[ 120  107  360  221  200  241  452   42   14    1 1065  288  175  605\n",
      "   106  450   37    6    2  510 2706  519    0    0    0    0    0    0\n",
      "     0    0    0    0]\n",
      " [   4   20    2   48  242    5    0    0    0    0    0    0    0    0\n",
      "     0    0    0    0    0    0    0    0    0    0    0    0    0    0\n",
      "     0    0    0    0]]\n",
      "y_tt =  [[2 4 1 2 4 2 4 2 4 1 2 3 4 2 4 2 4 1 1 1 2 4 0 0 0 0 0 0 0 0 0 0]\n",
      " [1 1 1 2 4 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]\n"
     ]
    }
   ],
   "source": [
    "# A look at the model's input format: to segment a sentence we first convert it into this form\n",
    "X_tt, y_tt = data_train.next_batch(2)\n",
    "print 'X_tt.shape=', X_tt.shape, 'y_tt.shape=', y_tt.shape\n",
    "print 'X_tt = ', X_tt\n",
    "print 'y_tt = ', y_tt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Viterbi decoding\n",
    "Next we use the Viterbi algorithm to compute the optimal tag (state) sequence; for the underlying theory, see hidden Markov models. In supervised HMM learning, the transition and emission probabilities are estimated from sample frequencies. Here the transition probabilities are counted from the training labels, while the Bi-LSTM output plays the role of the emission scores."
   ]
  },
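  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The frequency counting in the next cell can be sketched on a tiny made-up label list (the real `labels` come from the corpus):\n",
    "\n",
    "```python\n",
    "toy_labels = ['bes', 'ss', 'bme']  # invented tag sequences, for illustration only\n",
    "counts = {}\n",
    "for label in toy_labels:\n",
    "    for t in range(len(label) - 1):\n",
    "        key = label[t] + label[t + 1]          # adjacent tag pair, e.g. 'be', 'es'\n",
    "        counts[key] = counts.get(key, 0) + 1.0\n",
    "\n",
    "# normalize over the transitions leaving state 'b'\n",
    "p_be = counts['be'] / (counts['be'] + counts.get('bm', 0.0))  # -> 0.5 here\n",
    "```"
   ]
  },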
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "the transition probability: \n",
      "be 0.828739514282\n",
      "bm 0.171260485718\n",
      "eb 0.59236966183\n",
      "es 0.40763033817\n",
      "me 0.504871829789\n",
      "mm 0.495128170211\n",
      "sb 0.623252032292\n",
      "ss 0.376747967708\n"
     ]
    }
   ],
   "source": [
    "# Estimate transition probabilities from the labels (i.e. the state sequences)\n",
    "# Since there are only a few states, a dict {'I_tI_{t+1}': p} is enough\n",
    "# A holds the raw transition counts\n",
    "A = {\n",
    "      'sb':0,\n",
    "      'ss':0,\n",
    "      'be':0,\n",
    "      'bm':0,\n",
    "      'me':0,\n",
    "      'mm':0,\n",
    "      'eb':0,\n",
    "      'es':0\n",
    "     }\n",
    "\n",
    "# zy holds the transition probabilities\n",
    "zy = dict()\n",
    "for label in labels:\n",
    "    for t in xrange(len(label) - 1):\n",
    "        key = label[t] + label[t+1]\n",
    "        A[key] += 1.0\n",
    "        \n",
    "zy['sb'] = A['sb'] / (A['sb'] + A['ss'])\n",
    "zy['ss'] = 1.0 - zy['sb']\n",
    "zy['be'] = A['be'] / (A['be'] + A['bm'])\n",
    "zy['bm'] = 1.0 - zy['be']\n",
    "zy['me'] = A['me'] / (A['me'] + A['mm'])\n",
    "zy['mm'] = 1.0 - zy['me']\n",
    "zy['eb'] = A['eb'] / (A['eb'] + A['es'])\n",
    "zy['es'] = 1.0 - zy['eb']\n",
    "keys = sorted(zy.keys())\n",
    "print 'the transition probability: '\n",
    "for key in keys:\n",
    "    print key, zy[key]\n",
    "    \n",
    "zy = {i:np.log(zy[i]) for i in zy.keys()}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "def viterbi(nodes):\n",
    "    \"\"\"\n",
    "    Viterbi decoding: every layer except the first has 4 nodes.\n",
    "    For each layer (the first needs no computation), find the best path into each of its nodes:\n",
    "       for every node in the current layer, extend each path kept for the previous layer by this node\n",
    "       and keep only the extension with the highest score (log-probability).\n",
    "       The previous layer's paths live in `paths`; while processing the current layer they are held\n",
    "       in `paths_`, and the best path into each node is written back into `paths`.\n",
    "       `paths` is a dict mapping path -> path score.\n",
    "       After the last layer, return the candidate path with the highest score.\n",
    "    \"\"\"\n",
    "    paths = {'b': nodes[0]['b'], 's': nodes[0]['s']} # first layer: only two legal tags\n",
    "    for layer in xrange(1, len(nodes)):  # remaining layers\n",
    "        paths_ = paths.copy()  # keep the previous layer's paths\n",
    "        paths = {}  # reset paths for the current layer\n",
    "        for node_now in nodes[layer].keys():\n",
    "            # find the best path into this node\n",
    "            sub_paths = {} \n",
    "            # connections from every previous-layer node to this node\n",
    "            for path_last in paths_.keys():\n",
    "                if path_last[-1] + node_now in zy.keys(): # skip illegal transitions (zero probability)\n",
    "                    sub_paths[path_last + node_now] = paths_[path_last] + nodes[layer][node_now] + zy[path_last[-1] + node_now]\n",
    "            # the best path is the one with the highest probability\n",
    "            sr_subpaths = pd.Series(sub_paths)\n",
    "            sr_subpaths = sr_subpaths.sort_values()  # ascending\n",
    "            node_subpath = sr_subpaths.index[-1]  # best path into node_now\n",
    "            node_value = sr_subpaths.iloc[-1]   # its score\n",
    "            # record it in paths\n",
    "            paths[node_subpath] = node_value\n",
    "    # after the last layer, pick the overall best path\n",
    "    sr_paths = pd.Series(paths)\n",
    "    sr_paths = sr_paths.sort_values()  # ascending\n",
    "    return sr_paths.index[-1]  # return the highest-probability path\n",
    "\n",
    "\n",
    "def text2ids(text):\n",
    "    \"\"\"Convert a text segment into a padded array of ids.\"\"\"\n",
    "    words = list(text)\n",
    "    ids = list(word2id[words])\n",
    "    if len(ids) >= max_len:  # too long: truncate\n",
    "        print u'segment longer than %d characters; the excess is dropped' % max_len\n",
    "        return np.asarray(ids[:max_len]).reshape([-1, max_len])\n",
    "    ids.extend([0]*(max_len-len(ids))) # too short: pad with 0\n",
    "    ids = np.asarray(ids).reshape([-1, max_len])\n",
    "    return ids\n",
    "\n",
    "\n",
    "def simple_cut(text):\n",
    "    \"\"\"Segment one piece of text (punctuation splits a sentence into such pieces).\"\"\"\n",
    "    if text:\n",
    "        text_len = len(text)\n",
    "        X_batch = text2ids(text)  # here each batch is a single sample\n",
    "        fetches = [y_pred]\n",
    "        feed_dict = {X_inputs:X_batch, lr:1.0, batch_size:1, keep_prob:1.0}\n",
    "        _y_pred = sess.run(fetches, feed_dict)[0][:text_len]  # drop the padded positions\n",
    "        nodes = [dict(zip(['s','b','m','e'], each[1:])) for each in _y_pred]\n",
    "        tags = viterbi(nodes)\n",
    "        words = []\n",
    "        for i in range(len(text)):\n",
    "            if tags[i] in ['s', 'b']:\n",
    "                words.append(text[i])\n",
    "            else:\n",
    "                words[-1] += text[i]\n",
    "        return words\n",
    "    else:\n",
    "        return []\n",
    "\n",
    "\n",
    "def cut_word(sentence):\n",
    "    \"\"\"Split `sentence` at punctuation and runs of digits/ASCII characters, then segment each piece.\"\"\"\n",
    "    not_cuts = re.compile(u'([0-9a-zA-Z ]+)|[。，、？！.?,!]')\n",
    "    result = []\n",
    "    start = 0\n",
    "    for seg_sign in not_cuts.finditer(sentence):\n",
    "        result.extend(simple_cut(sentence[start:seg_sign.start()]))\n",
    "        result.append(sentence[seg_sign.start():seg_sign.end()])\n",
    "        start = seg_sign.end()\n",
    "    result.extend(simple_cut(sentence[start:]))\n",
    "    return result"
   ]
  },
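  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The decoding loop in `viterbi` can be exercised on toy scores. The emission and transition values below are invented for illustration, not taken from the model:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# toy log-scores for a 2-character input, same node format as in simple_cut\n",
    "nodes = [{'b': np.log(0.7), 's': np.log(0.3)},\n",
    "         {'s': np.log(0.1), 'b': np.log(0.1), 'm': np.log(0.2), 'e': np.log(0.6)}]\n",
    "# toy transition log-probabilities, same key format as zy\n",
    "trans = {'be': np.log(0.9), 'bm': np.log(0.1), 'sb': np.log(0.5), 'ss': np.log(0.5)}\n",
    "\n",
    "paths = {'b': nodes[0]['b'], 's': nodes[0]['s']}\n",
    "for layer in range(1, len(nodes)):\n",
    "    paths_, paths = paths, {}\n",
    "    for node_now in nodes[layer]:\n",
    "        # extend every legal previous path by node_now and keep the best extension\n",
    "        sub = {p + node_now: paths_[p] + nodes[layer][node_now] + trans[p[-1] + node_now]\n",
    "               for p in paths_ if p[-1] + node_now in trans}\n",
    "        if sub:\n",
    "            best = max(sub, key=sub.get)\n",
    "            paths[best] = sub[best]\n",
    "best_path = max(paths, key=paths.get)  # -> 'be': the two characters form one word\n",
    "```"
   ]
  },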
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "人们 / 思考 / 问题 / 往往 / 不是 / 从 / 零 / 开始 / 的 / 。 / 就 / 好像 / 你 / 现在 / 阅读 / 这 / 篇 / 文章 / 一样 / ， / 你 / 对 / 每 / 个词 / 的 / 理解 / 都会 / 依赖 / 于 / 你 / 前面 / 看到 / 的 / 一些 / 词 / ， /        / 而 / 不是 / 把 / 你 / 前面 / 看 / 的 / 内容 / 全部 / 抛弃 / 了 / ， / 忘记 / 了 / ， / 再去 / 理解 / 这个 / 单词 / 。 / 也 / 就 / 是 / 说 / ， / 人们 / 的 / 思维 / 总是 / 会有 / 延续 / 性 / 的 / 。 / \n"
     ]
    }
   ],
   "source": [
    "# Example 1\n",
    "sentence = u'人们思考问题往往不是从零开始的。就好像你现在阅读这篇文章一样，你对每个词的理解都会依赖于你前面看到的一些词，\\\n",
    "      而不是把你前面看的内容全部抛弃了，忘记了，再去理解这个单词。也就是说，人们的思维总是会有延续性的。'\n",
    "result = cut_word(sentence)\n",
    "rss = ''\n",
    "for each in result:\n",
    "    rss = rss + each + ' / '\n",
    "print rss"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "天舟 / 一号 / 是 / 我国 / 自主 / 研制 / 的 / 首艘 / 货运 / 飞船 / ， / 由于 / 它 / 只 / 运货 / ， / 不送 / 人 / ， / 所以 / 被 / 形象 / 地 / 称为 / 太空 / “快 / 递 / 小哥 / ” / 。 /      / 它 / 采用 / 两 / 舱 / 式 / 结构 / ， / 直径 / 较小 / 的 / 是 / 推进 / 舱 / ， / 直径 / 较大 / 的 / 为 / 货物 / 舱 / 。 / 其 / 最大 / 直径 / 达到 / 3 / . / 35 / 米 / ， / 飞船 / 全长 / 10 / . / 6 / 米 / ， / 载荷 / 能力 / 达到 / 了 / 6 / . / 5 / 吨 / ， /      / 满载 / 货物 / 时重 / 13 / . / 5 / 吨 / 。 / 如果 / 此次 / 满载 / 的 / 话 / ， / 它 / 很 / 可能 / 将 / 成为 / 中国 / 发射 / 进入 / 太空 / 的 / 质量 / 最大 / 的 / 有效 / 载荷 / 。 / 甚至 / 比天宫 / 二号 / 空间 / 实验室 / 还 / 大 / ， /      / 后者 / 全长 / 10 / . / 4 / 米 / ， / 直径 / 同为 / 3 / . / 35 / 米 / ， / 质量 / 为 / 8 / . / 6 / 吨 / 。 / \n"
     ]
    }
   ],
   "source": [
    "# Example 2\n",
    "sentence = u'天舟一号是我国自主研制的首艘货运飞船，由于它只运货，不送人，所以被形象地称为太空“快递小哥”。\\\n",
    "    它采用两舱式结构，直径较小的是推进舱，直径较大的为货物舱。其最大直径达到3.35米，飞船全长10.6米，载荷能力达到了6.5吨，\\\n",
    "    满载货物时重13.5吨。如果此次满载的话，它很可能将成为中国发射进入太空的质量最大的有效载荷。甚至比天宫二号空间实验室还大，\\\n",
    "    后者全长10.4米，直径同为3.35米，质量为8.6吨。'\n",
    "result = cut_word(sentence)\n",
    "rss = ''\n",
    "for each in result:\n",
    "    rss = rss + each + ' / '\n",
    "print rss"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "南京/ 市长江大桥/ \n"
     ]
    }
   ],
   "source": [
    "# Example 3\n",
    "sentence = u'南京市长江大桥'\n",
    "result = cut_word(sentence)\n",
    "rss = ''\n",
    "for each in result:\n",
    "    rss = rss + each + '/ '\n",
    "print rss"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Conclusion: this example uses a Bi-directional LSTM for a sequence labeling task. The task demonstrated here is word segmentation, but other sequence labeling problems, such as POS tagging and NER, can be handled with the same architecture. The final segmentation quality is not yet great, but it is already at a usable level, and the model was only run once roughly, without any hyper-parameter tuning. In the final Viterbi decoding step, the transition probabilities are counted from the training corpus.\n",
    "\n",
    "Seeing the last result, \"南京/ 市长江大桥/\", one cannot help feeling a pang of sadness...\n",
    "\n",
    "In building the model we unrolled the Bi-directional LSTM computation in some detail, which gives a much deeper understanding of it. This is largely thanks to TensorFlow being relatively low-level: with Keras the same model takes only a few lines of code, but our understanding of it would probably not be as deep."
   ]
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "kernelspec": {
   "display_name": "Python [default]",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
