{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "26236ba9",
   "metadata": {},
   "source": [
    "# Machine Mind-Reading: Text Mining and Natural Language Processing, Lesson 11 Written Assignment\n",
    "Student ID: 207402  \n",
    "\n",
    "**Assignment:**  \n",
    "1. One thousand positive and one thousand negative reviews of a red-wine product were scraped from JD.com and uploaded to the course resources. Process these data, use part of them as a training set and the rest as a test set, build a sentiment analysis model (the methods introduced in class may be used as a reference), implement it in code, and evaluate its accuracy. Any programming language may be used.  \n",
    "Raw data format:  \n",
    "Column 1: row index  \n",
    "area: region (partially missing)  \n",
    "com_client: review client (partially missing)  \n",
    "comment: review text  \n",
    "goods_name: product name  \n",
    "score: rating (positive reviews are mostly 5 stars, negative reviews 1 star)  \n",
    "times: review time  \n",
    "user_grade: user level  \n",
    "user_id: user name (anonymized)  \n",
    "\n",
    "\n",
    "2. For the PCFG given on page 9 of Collins' \"Probabilistic Context-Free Grammars\" and the sentence \"The man saw the dog with the telescope\", implement the CYK algorithm and compute the sentence's best parse tree. Any programming language may be used; computing by hand is also acceptable."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "87529659",
   "metadata": {},
   "source": [
    "## Problem 1"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cf02995c",
   "metadata": {},
   "source": [
    "### 1.1 Method 1: Sentiment classification with a support vector machine\n",
    "We use the \"present\" representation introduced in class: each word in the vocabulary becomes a binary feature indicating whether it occurs in the review at all, regardless of how many times. Since the focus here is sentiment classification, only the review-text column (comment) is used; the other columns are ignored for now."
   ]
  },
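  {
   "cell_type": "markdown",
   "id": "3fa81b20",
   "metadata": {},
   "source": [
    "As a minimal sketch of the \"present\" encoding on a hypothetical two-comment toy corpus (not the course data):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "docs = [['好', '喝'], ['不', '好', '喝']]  # hypothetical pre-segmented comments\n",
    "vocab = sorted(set(w for d in docs for w in d))\n",
    "word2id = {w: i for i, w in enumerate(vocab)}\n",
    "x = np.zeros((len(docs), len(vocab)), dtype='int8')\n",
    "for i, d in enumerate(docs):\n",
    "    for w in d:\n",
    "        x[i][word2id[w]] = 1  # 1 if the word occurs at all, regardless of count\n",
    "```"
   ]
  },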
  {
   "cell_type": "code",
   "execution_count": 144,
   "id": "f62512ac",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-02T12:47:30.212155Z",
     "start_time": "2022-01-02T12:47:29.573471Z"
    }
   },
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "import jieba\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn import svm\n",
    "from sklearn.metrics import classification_report\n",
    "from sklearn.metrics import accuracy_score"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "84e9ee42",
   "metadata": {},
   "source": [
    "Next we read the positive- and negative-review datasets, add a new column (\"**evaluation**\") that is 1 for positive reviews and 0 for negative ones, and concatenate the two into a single dataset for easier processing."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 145,
   "id": "d0b24fc1",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-02T12:59:48.721461Z",
     "start_time": "2022-01-02T12:59:48.714468Z"
    }
   },
   "outputs": [],
   "source": [
    "good_pj = pd.read_csv('data/好评.csv')\n",
    "bad_pj = pd.read_csv('data/差评.csv')\n",
    "good_pj=good_pj.drop(columns=['Unnamed: 0'])\n",
    "bad_pj=bad_pj.drop(columns=['Unnamed: 0'])\n",
    "good_pj['evaluation']=1\n",
    "bad_pj['evaluation']=0\n",
    "comment_tb=pd.concat([good_pj,bad_pj])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 146,
   "id": "948f1d59",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-02T12:59:49.880005Z",
     "start_time": "2022-01-02T12:59:49.859001Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>area</th>\n",
       "      <th>com_client</th>\n",
       "      <th>comment</th>\n",
       "      <th>goods_name</th>\n",
       "      <th>score</th>\n",
       "      <th>times</th>\n",
       "      <th>user_grade</th>\n",
       "      <th>user_id</th>\n",
       "      <th>evaluation</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>宁夏</td>\n",
       "      <td>来自京东Android客户端</td>\n",
       "      <td>京东的物流是没得说了，很快。这次买酒水类，包装很仔细，没有出现意外。酒到手了，绝对是正品。感...</td>\n",
       "      <td>霞多丽白</td>\n",
       "      <td>grade-star g-star5</td>\n",
       "      <td>2016-06-19 11:29</td>\n",
       "      <td>金牌会员</td>\n",
       "      <td>大***抱</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>天津</td>\n",
       "      <td>来自京东iPhone客户端</td>\n",
       "      <td>活动买的，低档葡萄酒认准澳洲就对了</td>\n",
       "      <td>桑格利亚</td>\n",
       "      <td>grade-star g-star5</td>\n",
       "      <td>2016-08-04 20:19</td>\n",
       "      <td>金牌会员</td>\n",
       "      <td>j***v</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>河南</td>\n",
       "      <td>NaN</td>\n",
       "      <td>好吃，但是价格也不便宜的啊</td>\n",
       "      <td>西拉-红</td>\n",
       "      <td>grade-star g-star5</td>\n",
       "      <td>2016-09-05 22:37</td>\n",
       "      <td>钻石会员</td>\n",
       "      <td>芬***神</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>北京</td>\n",
       "      <td>NaN</td>\n",
       "      <td>到货很快，第二天就到了，东西包装细致，感觉很不错！只是口感不是我喜欢的！</td>\n",
       "      <td>西拉-红</td>\n",
       "      <td>grade-star g-star5</td>\n",
       "      <td>2016-08-30 22:50</td>\n",
       "      <td>金牌会员</td>\n",
       "      <td>c***9</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>江苏</td>\n",
       "      <td>来自京东iPhone客户端</td>\n",
       "      <td>不错，价格实惠，值得购买</td>\n",
       "      <td>梅洛-红</td>\n",
       "      <td>grade-star g-star4</td>\n",
       "      <td>2016-09-01 20:03</td>\n",
       "      <td>钻石会员</td>\n",
       "      <td>t***k</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>...</th>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1004</th>\n",
       "      <td>江苏</td>\n",
       "      <td>NaN</td>\n",
       "      <td>*买了你家这么多东西还一*号,打售后电话也不承认,抢了这么天券,等到0点都不行,*</td>\n",
       "      <td>西拉-红</td>\n",
       "      <td>grade-star g-star1</td>\n",
       "      <td>2016-04-29 10:45</td>\n",
       "      <td>金牌会员</td>\n",
       "      <td>只***尔</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1005</th>\n",
       "      <td>NaN</td>\n",
       "      <td>来自京东iPhone客户端</td>\n",
       "      <td>瓶塞都没有 很难喝假货上当的感觉</td>\n",
       "      <td>梅洛-红</td>\n",
       "      <td>grade-star g-star1</td>\n",
       "      <td>2016-06-15 20:14</td>\n",
       "      <td>铜牌会员</td>\n",
       "      <td>j***i</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1006</th>\n",
       "      <td>NaN</td>\n",
       "      <td>来自京东iPhone客户端</td>\n",
       "      <td>送的袋子是个露的 麻烦以后检查下再送 还不如不送 酒直接掉地上碎掉 呵呵</td>\n",
       "      <td>加本力苏维翁</td>\n",
       "      <td>grade-star g-star1</td>\n",
       "      <td>2016-06-16 08:46</td>\n",
       "      <td>银牌会员</td>\n",
       "      <td>孙***n</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1007</th>\n",
       "      <td>辽宁</td>\n",
       "      <td>NaN</td>\n",
       "      <td>味道不咋样，真心不如长城的。</td>\n",
       "      <td>加本力梅洛-红</td>\n",
       "      <td>grade-star g-star1</td>\n",
       "      <td>2016-03-26 11:45</td>\n",
       "      <td>金牌会员</td>\n",
       "      <td>n***d</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1008</th>\n",
       "      <td>辽宁</td>\n",
       "      <td>NaN</td>\n",
       "      <td>味道不咋样，真心不如长城的。</td>\n",
       "      <td>梅洛-红</td>\n",
       "      <td>grade-star g-star1</td>\n",
       "      <td>2016-03-26 11:45</td>\n",
       "      <td>金牌会员</td>\n",
       "      <td>n***d</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>2014 rows × 9 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "     area      com_client                                            comment  \\\n",
       "0      宁夏  来自京东Android客户端  京东的物流是没得说了，很快。这次买酒水类，包装很仔细，没有出现意外。酒到手了，绝对是正品。感...   \n",
       "1      天津   来自京东iPhone客户端                                  活动买的，低档葡萄酒认准澳洲就对了   \n",
       "2      河南             NaN                                      好吃，但是价格也不便宜的啊   \n",
       "3      北京             NaN               到货很快，第二天就到了，东西包装细致，感觉很不错！只是口感不是我喜欢的！   \n",
       "4      江苏   来自京东iPhone客户端                                       不错，价格实惠，值得购买   \n",
       "...   ...             ...                                                ...   \n",
       "1004   江苏             NaN          *买了你家这么多东西还一*号,打售后电话也不承认,抢了这么天券,等到0点都不行,*   \n",
       "1005  NaN   来自京东iPhone客户端                                   瓶塞都没有 很难喝假货上当的感觉   \n",
       "1006  NaN   来自京东iPhone客户端               送的袋子是个露的 麻烦以后检查下再送 还不如不送 酒直接掉地上碎掉 呵呵   \n",
       "1007   辽宁             NaN                                     味道不咋样，真心不如长城的。   \n",
       "1008   辽宁             NaN                                     味道不咋样，真心不如长城的。   \n",
       "\n",
       "     goods_name               score             times user_grade user_id  \\\n",
       "0          霞多丽白  grade-star g-star5  2016-06-19 11:29       金牌会员   大***抱   \n",
       "1          桑格利亚  grade-star g-star5  2016-08-04 20:19       金牌会员   j***v   \n",
       "2          西拉-红  grade-star g-star5  2016-09-05 22:37       钻石会员   芬***神   \n",
       "3          西拉-红  grade-star g-star5  2016-08-30 22:50       金牌会员   c***9   \n",
       "4          梅洛-红  grade-star g-star4  2016-09-01 20:03       钻石会员   t***k   \n",
       "...         ...                 ...               ...        ...     ...   \n",
       "1004       西拉-红  grade-star g-star1  2016-04-29 10:45       金牌会员   只***尔   \n",
       "1005       梅洛-红  grade-star g-star1  2016-06-15 20:14       铜牌会员   j***i   \n",
       "1006     加本力苏维翁  grade-star g-star1  2016-06-16 08:46       银牌会员   孙***n   \n",
       "1007    加本力梅洛-红  grade-star g-star1  2016-03-26 11:45       金牌会员   n***d   \n",
       "1008       梅洛-红  grade-star g-star1  2016-03-26 11:45       金牌会员   n***d   \n",
       "\n",
       "      evaluation  \n",
       "0              1  \n",
       "1              1  \n",
       "2              1  \n",
       "3              1  \n",
       "4              1  \n",
       "...          ...  \n",
       "1004           0  \n",
       "1005           0  \n",
       "1006           0  \n",
       "1007           0  \n",
       "1008           0  \n",
       "\n",
       "[2014 rows x 9 columns]"
      ]
     },
     "execution_count": 146,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "comment_tb"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "id": "29e56bc5",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-02T13:06:57.549013Z",
     "start_time": "2022-01-02T13:06:57.546012Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0"
      ]
     },
     "execution_count": 48,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "comment_tb['comment'].isnull().sum()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6b558ef8",
   "metadata": {},
   "source": [
    "The comment column has no missing values, so no handling of missing data is needed.  \n",
    "Next, build the segmented-word corpus:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "id": "2fb3a471",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "1498"
      ]
     },
     "execution_count": 63,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "corpus = comment_tb['comment'].values\n",
    "corpus_words = []\n",
    "cut_lst = []\n",
    "for s in corpus:\n",
    "    words = list(jieba.cut(s))  # jieba.cut returns a one-shot generator, so materialize it once\n",
    "    cut_lst.append(words)\n",
    "    corpus_words += words\n",
    "vocab_words = set(corpus_words)\n",
    "len(vocab_words)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0f8ba130",
   "metadata": {},
   "source": [
    "The vocabulary of this dataset contains 1498 distinct words.  \n",
    "Next, build the word-to-index (and index-to-word) dictionaries:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 77,
   "id": "925d7e72",
   "metadata": {},
   "outputs": [],
   "source": [
    "vocab2id = {word: i for i, word in enumerate(vocab_words)}\n",
    "id2vocab = {i: word for i, word in enumerate(vocab_words)}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "034d2961",
   "metadata": {},
   "source": [
    "Build the dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 83,
   "id": "a49484ce",
   "metadata": {},
   "outputs": [],
   "source": [
    "x_data = np.zeros((comment_tb.shape[0],len(vocab_words)),dtype='int8')\n",
    "y_data = comment_tb['evaluation'].values\n",
    "\n",
    "# binary 'present' features: 1 if the word appears in the comment\n",
    "for i,cl in enumerate(cut_lst):\n",
    "    for c in cl:\n",
    "        x_data[i][vocab2id[c]] = 1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 87,
   "id": "9177392a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Train/test split\n",
    "X_train, X_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.3, random_state=42)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 90,
   "id": "42196eb0",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "NuSVC(gamma='auto')"
      ]
     },
     "execution_count": 90,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Train the model\n",
    "clf = svm.NuSVC(gamma=\"auto\")\n",
    "clf.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 93,
   "id": "8ab2f0a2",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "accuracy:  0.8727272727272727\n",
      "              precision    recall  f1-score   support\n",
      "\n",
      "           0       0.98      0.76      0.85       299\n",
      "           1       0.81      0.99      0.89       306\n",
      "\n",
      "    accuracy                           0.87       605\n",
      "   macro avg       0.89      0.87      0.87       605\n",
      "weighted avg       0.89      0.87      0.87       605\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Evaluate the model\n",
    "y_pred = clf.predict(X_test)\n",
    "print('accuracy: ',accuracy_score(y_test, y_pred))\n",
    "print(classification_report(y_test, y_pred))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4d512ce7",
   "metadata": {},
   "source": [
    "The model reaches an accuracy of about 0.87 on the test set, which is reasonably good."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b27efaf0",
   "metadata": {},
   "source": [
    "### 1.2 Method 2: Sentiment classification with a neural network\n",
    "Recurrent networks such as LSTMs have a strong track record on sentiment classification. Below we experiment with a simple Keras model: an embedding layer followed by a flatten and a softmax output (an LSTM layer could be substituted for the Flatten step to make the model truly recurrent):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 106,
   "id": "143cf43d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "370de5d4",
   "metadata": {},
   "source": [
    "The network requires fixed-size inputs, so we first compute the average comment length (in words) and use it as the truncation length."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 101,
   "id": "d5bd035f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "11.991559086395233"
      ]
     },
     "execution_count": 101,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "lendis = np.array([len(cl) for cl in cut_lst])\n",
    "np.average(lendis)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cbc6af3e",
   "metadata": {},
   "source": [
    "The average length is about 12 words."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 122,
   "id": "9ed02fdb",
   "metadata": {},
   "outputs": [],
   "source": [
    "maxlen = 12\n",
    "# Prepare the input sequences: map each segmented comment to a list of word ids\n",
    "x_input = []\n",
    "for i,cl in enumerate(cut_lst):\n",
    "    t = []\n",
    "    for c in cl:\n",
    "        t.append(vocab2id[c])\n",
    "    x_input.append(t)\n",
    "\n",
    "# Pad/truncate to maxlen (caveat: the pad value 0 is also a real word id here;\n",
    "# with mask_zero=True in the Embedding, word ids should ideally start at 1)\n",
    "x_input = tf.keras.preprocessing.sequence.pad_sequences(x_input, padding='post', maxlen=maxlen)\n",
    "# One-hot encode the labels\n",
    "label = tf.keras.utils.to_categorical(y_data, 2)\n",
    "# Train/test split\n",
    "X_train, X_test, y_train, y_test = train_test_split(x_input, label, test_size=0.3, random_state=42)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 124,
   "id": "e6cfe31a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define the network: embedding -> flatten -> softmax\n",
    "def creat_model(maxlen, vocab_size, embedding_dim, cat):\n",
    "    inputs = tf.keras.Input(shape=(maxlen,))\n",
    "    embedded = tf.keras.layers.Embedding(vocab_size, embedding_dim, mask_zero=True)(inputs)\n",
    "    flatten = tf.keras.layers.Flatten()(embedded)\n",
    "    outputs = tf.keras.layers.Dense(cat, activation='softmax')(flatten)\n",
    "\n",
    "    model = tf.keras.Model(inputs=[inputs], outputs=[outputs])\n",
    "    return model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 125,
   "id": "e70c2440",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"model_1\"\n",
      "_________________________________________________________________\n",
      " Layer (type)                Output Shape              Param #   \n",
      "=================================================================\n",
      " input_4 (InputLayer)        [(None, 12)]              0         \n",
      "                                                                 \n",
      " embedding_1 (Embedding)     (None, 12, 50)            74900     \n",
      "                                                                 \n",
      " flatten_1 (Flatten)         (None, 600)               0         \n",
      "                                                                 \n",
      " dense_1 (Dense)             (None, 2)                 1202      \n",
      "                                                                 \n",
      "=================================================================\n",
      "Total params: 76,102\n",
      "Trainable params: 76,102\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "# Build the model\n",
    "vocab_size = len(vocab_words)\n",
    "embedding_dim = 50\n",
    "\n",
    "model = creat_model(maxlen, vocab_size, embedding_dim, 2)\n",
    "model.summary()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 126,
   "id": "2c22ac47",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/50\n",
      "40/40 [==============================] - 1s 6ms/step - loss: 0.5686 - val_loss: 0.4071\n",
      "Epoch 2/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.3167 - val_loss: 0.2381\n",
      "Epoch 3/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.1962 - val_loss: 0.1713\n",
      "Epoch 4/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.1307 - val_loss: 0.1343\n",
      "Epoch 5/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0908 - val_loss: 0.1129\n",
      "Epoch 6/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0657 - val_loss: 0.0974\n",
      "Epoch 7/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0481 - val_loss: 0.0865\n",
      "Epoch 8/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0364 - val_loss: 0.0784\n",
      "Epoch 9/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0284 - val_loss: 0.0727\n",
      "Epoch 10/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0228 - val_loss: 0.0693\n",
      "Epoch 11/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0183 - val_loss: 0.0661\n",
      "Epoch 12/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0149 - val_loss: 0.0623\n",
      "Epoch 13/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0123 - val_loss: 0.0607\n",
      "Epoch 14/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0104 - val_loss: 0.0593\n",
      "Epoch 15/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0089 - val_loss: 0.0579\n",
      "Epoch 16/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0077 - val_loss: 0.0566\n",
      "Epoch 17/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0066 - val_loss: 0.0561\n",
      "Epoch 18/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0058 - val_loss: 0.0554\n",
      "Epoch 19/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0052 - val_loss: 0.0552\n",
      "Epoch 20/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0046 - val_loss: 0.0545\n",
      "Epoch 21/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0041 - val_loss: 0.0545\n",
      "Epoch 22/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0037 - val_loss: 0.0539\n",
      "Epoch 23/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0033 - val_loss: 0.0537\n",
      "Epoch 24/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0030 - val_loss: 0.0534\n",
      "Epoch 25/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0027 - val_loss: 0.0534\n",
      "Epoch 26/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0025 - val_loss: 0.0531\n",
      "Epoch 27/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0023 - val_loss: 0.0530\n",
      "Epoch 28/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0021 - val_loss: 0.0530\n",
      "Epoch 29/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0019 - val_loss: 0.0529\n",
      "Epoch 30/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0018 - val_loss: 0.0530\n",
      "Epoch 31/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0017 - val_loss: 0.0528\n",
      "Epoch 32/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0015 - val_loss: 0.0527\n",
      "Epoch 33/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0014 - val_loss: 0.0527\n",
      "Epoch 34/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0013 - val_loss: 0.0527\n",
      "Epoch 35/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0013 - val_loss: 0.0528\n",
      "Epoch 36/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0012 - val_loss: 0.0527\n",
      "Epoch 37/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0011 - val_loss: 0.0528\n",
      "Epoch 38/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 0.0010 - val_loss: 0.0529\n",
      "Epoch 39/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 9.7586e-04 - val_loss: 0.0527\n",
      "Epoch 40/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 9.1591e-04 - val_loss: 0.0528\n",
      "Epoch 41/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 8.6335e-04 - val_loss: 0.0530\n",
      "Epoch 42/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 8.1714e-04 - val_loss: 0.0528\n",
      "Epoch 43/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 7.7190e-04 - val_loss: 0.0530\n",
      "Epoch 44/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 7.3388e-04 - val_loss: 0.0530\n",
      "Epoch 45/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 6.9409e-04 - val_loss: 0.0530\n",
      "Epoch 46/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 6.5970e-04 - val_loss: 0.0531\n",
      "Epoch 47/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 6.2390e-04 - val_loss: 0.0533\n",
      "Epoch 48/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 5.9521e-04 - val_loss: 0.0532\n",
      "Epoch 49/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 5.6665e-04 - val_loss: 0.0532\n",
      "Epoch 50/50\n",
      "40/40 [==============================] - 0s 1ms/step - loss: 5.3713e-04 - val_loss: 0.0533\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.History at 0x15d0d942580>"
      ]
     },
     "execution_count": 126,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Compile the model\n",
    "model.compile(loss='categorical_crossentropy', optimizer=\"adam\")\n",
    "# Train the model\n",
    "model.fit(X_train,y_train,epochs=50,batch_size=32,validation_split=0.1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 141,
   "id": "ae6a9dbb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "accuracy:  0.9404958677685951\n",
      "              precision    recall  f1-score   support\n",
      "\n",
      "           0       0.94      0.94      0.94       299\n",
      "           1       0.94      0.94      0.94       306\n",
      "\n",
      "    accuracy                           0.94       605\n",
      "   macro avg       0.94      0.94      0.94       605\n",
      "weighted avg       0.94      0.94      0.94       605\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Evaluate the model\n",
    "y_pred = model.predict(X_test)\n",
    "y_pred = np.argmax(y_pred,axis=1)\n",
    "y_test1 = np.argmax(y_test,axis = 1)\n",
    "print('accuracy: ',accuracy_score(y_test1, y_pred))\n",
    "print(classification_report(y_test1, y_pred))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0ff5057c",
   "metadata": {},
   "source": [
    "**The neural network model clearly outperforms the SVM model (0.94 vs. 0.87 test accuracy)!**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "26be2f8f",
   "metadata": {},
   "source": [
    "## Problem 2\n",
    "For the PCFG given on page 9 of Collins' \"Probabilistic Context-Free Grammars\" and the sentence \"The man saw the dog with the telescope\", implement the CYK algorithm and compute the sentence's best parse tree. Any programming language may be used; computing by hand is also acceptable."
   ]
  },
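  {
   "cell_type": "markdown",
   "id": "7c2d94e1",
   "metadata": {},
   "source": [
    "The code below implements the CYK dynamic program from Collins' notes. With base case $\\pi(i,i,X) = q(X \\to w_i)$, the recurrence is\n",
    "\n",
    "$$\\pi(i,j,X) = \\max_{X \\to Y\\,Z \\in R,\\ s \\in \\{i,\\dots,j-1\\}} q(X \\to Y\\,Z)\\,\\pi(i,s,Y)\\,\\pi(s+1,j,Z)$$\n",
    "\n",
    "and the probability of the best parse of the whole sentence is $\\pi(1,n,S)$; backpointers record the maximizing rule and split point so the tree can be reconstructed."
   ]
  },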
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "4bbdd7d1",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "N=['S','NP','VP','PP','DT','Vi','Vt','NN','IN','EM'] # add an EM non-terminal (EM -> empty) so every rule is binary\n",
    "S='S'\n",
    "T=['sleeps','saw', 'man','woman','dog','telescope','the','with','in']\n",
    "q={'S':{'NP VP':1.0},\n",
    "   'VP':{'Vi EM':0.3, 'Vt NP':0.5, 'VP PP':0.2}, # VP -> Vi rewritten as VP -> Vi EM\n",
    "   'NP':{'DT NN':0.8, 'NP PP':0.2},\n",
    "   'PP':{'IN NP':1.0},\n",
    "   'EM':{'':1.0}, # EM expands to the empty string\n",
    "   'Vi':{'sleeps':1.0},\n",
    "   'Vt':{'saw':1.0},\n",
    "   'NN':{'man':0.1, 'woman':0.1, 'telescope':0.3, 'dog':0.5},\n",
    "   'DT':{'the':1.0},\n",
    "   'IN':{'with':0.6, 'in':0.4}\n",
    "  }\n",
    "\n",
    "aSentence='the man saw the dog with the telescope'\n",
    "\n",
    "def is_not_terminal(q,N,X):\n",
    "    '''\n",
    "    Check whether X expands to non-terminals.\n",
    "    q: the rule-probability table\n",
    "    N: the set of non-terminals\n",
    "    X: the non-terminal to check\n",
    "    Returns:\n",
    "    True: X expands to non-terminals, i.e. X -> Y Z with Y and Z non-terminal\n",
    "    False: X expands to a terminal, e.g. X -> 'dog'\n",
    "    '''\n",
    "    res = list(q[X].keys()) # get X's expansions\n",
    "    k = res[0].split() # X may have several expansions, but inspecting the first is enough; split it into a list for easy access\n",
    "    if len(k)==0: # a zero-length expansion can only be EM's empty string, so return False directly\n",
    "        return False\n",
    "    return k[0] in N\n",
    "\n",
    "def get_qvalue(q,left,right):\n",
    "    '''\n",
    "    Look up the rule probability q(left -> right).\n",
    "    left: the non-terminal on the left-hand side of the rule\n",
    "    right: the right-hand side of the rule\n",
    "    Returns:\n",
    "    the q value (0.0 if the rule does not exist)\n",
    "    '''\n",
    "    if q.get(left):\n",
    "        res = q[left]\n",
    "        if res.get(right):\n",
    "            return res[right]\n",
    "    return 0.0\n",
    "\n",
    "def get_max_pi(i,j,X,q,N,pi):\n",
    "    '''\n",
    "    For the span of terminals from i to j, compute the probability of the most\n",
    "    likely parse tree rooted at X, and the spans of its two subtrees.\n",
    "    i: start index\n",
    "    j: end index\n",
    "    X: the non-terminal used as the root of the parse tree\n",
    "    q: the rule-probability table\n",
    "    N: the set of non-terminals\n",
    "    pi: the pi table\n",
    "    Returns:\n",
    "    vmax: the maximum probability\n",
    "    exp: the expansion used\n",
    "    smax: (i,smax) spans the left subtree\n",
    "    ssmax: (ssmax,j) spans the right subtree, ssmax >= smax\n",
    "    '''\n",
    "    vmax=0.0\n",
    "    smax=i\n",
    "    ssmax=i\n",
    "    exp = ''\n",
    "    right = q[X]\n",
    "    for t in right:\n",
    "        turn = t.split()\n",
    "        for s in range(i,j+1):\n",
    "            ss = s+1\n",
    "            if ss > j : ss = j\n",
    "            vt = get_qvalue(q,X,t)*pi[i][s][N.index(turn[0])]*pi[ss][j][N.index(turn[1])]\n",
    "            if vt > vmax:\n",
    "                vmax = vt\n",
    "                smax = s\n",
    "                ssmax = ss\n",
    "                exp = t\n",
    "    return vmax, exp, smax, ssmax\n",
    "\n",
    "def get_bp(bp,q,pi,N,i,j,X):\n",
    "    '''\n",
    "    Look up the backpointer.\n",
    "    bp: the backpointer entries\n",
    "    q: the rule-probability table\n",
    "    N: the set of non-terminals\n",
    "    pi: the pi table\n",
    "    i: start index\n",
    "    j: end index\n",
    "    X: the non-terminal at the root of the span\n",
    "    '''\n",
    "    if i == j : # terminal\n",
    "        for t in q[X]:\n",
    "            if q[X][t] == pi[i][j][N.index(X)]:\n",
    "                return t\n",
    "    s = '%d,%d,%s'%(i,j,X) # build the search key\n",
    "    for ss in bp:\n",
    "        if ss.find(s) == 0 :\n",
    "            t = ss.split(',')\n",
    "            return t[3],t[4],t[5],t[6],t[7]\n",
    "\n",
    "# Run the CYK algorithm\n",
    "sn = aSentence.split()\n",
    "n = len(sn)\n",
    "pi = np.zeros((n,n,len(N)))\n",
    "bp = []\n",
    "\n",
    "# Initialize the pi table with the terminal rules\n",
    "for idx, it in enumerate(N):\n",
    "    for i in range(n):\n",
    "        pi[i][i][idx] = get_qvalue(q,it,sn[i])\n",
    "# Fill the table, iterating over span lengths\n",
    "for l in range(n): # l is the distance between i and j\n",
    "    for i in range(n):\n",
    "        j = i+l\n",
    "        if j >= n:\n",
    "            continue\n",
    "        for idx ,it in enumerate(N):\n",
    "            if is_not_terminal(q,N,it):\n",
    "                v, exp,s,ss = get_max_pi(i,j,it,q,N,pi)\n",
    "                pi[i][j][idx] = v\n",
    "                bp.append('%d,%d,%s,%s,%d,%d,%d,%d'%(i,j,it,exp,i,s,ss,j))\n",
    "#                 print(v,',',bp[-1])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "29230d46",
   "metadata": {},
   "outputs": [],
   "source": [
    "from graphviz import Digraph\n",
    "\n",
    "\n",
    "def plot_model(tree, name):\n",
    "    g = Digraph(\"G\", filename=name, format='png', strict=False)\n",
    "    first_label = list(tree.keys())[0]\n",
    "    g.node(\"0\", first_label)\n",
    "    _sub_plot(g, tree, \"0\")\n",
    "    g.view()\n",
    "\n",
    "root = \"0\"\n",
    "\n",
    "\n",
    "def _sub_plot(g, tree, inc):\n",
    "    global root\n",
    "\n",
    "    first_label = list(tree.keys())[0]\n",
    "    ts = tree[first_label]\n",
    "    for i in ts.keys():\n",
    "        if isinstance(tree[first_label][i], dict):\n",
    "            root = str(int(root) + 1)\n",
    "            g.node(root, list(tree[first_label][i].keys())[0])\n",
    "            g.edge(inc, root, str(i))\n",
    "            _sub_plot(g, tree[first_label][i], root)\n",
    "        else:\n",
    "            root = str(int(root) + 1)\n",
    "            g.node(root, tree[first_label][i])\n",
    "            g.edge(inc, root, str(i))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "1914c79e",
   "metadata": {},
   "outputs": [],
   "source": [
    "def make_tree(bp,q,N,i,j,X):\n",
    "    res = get_bp(bp,q,pi,N,i,j,X)\n",
    "#     print(res)\n",
    "    if is_not_terminal(q,N,X):\n",
    "        k = res[0].split()\n",
    "        t = {X:{res[1]+'-'+res[2]:make_tree(bp,q,N,int(res[1]),int(res[2]),k[0]),\n",
    "                res[3]+'-'+res[4]:make_tree(bp,q,N,int(res[3]),int(res[4]),k[1])}}\n",
    "        return t\n",
    "    else:\n",
    "        k = get_bp(bp,q,pi,N,i,i,X)\n",
    "        p = {X:{str(i):k}}\n",
    "        return p"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "5c4062b5",
   "metadata": {},
   "outputs": [],
   "source": [
    "tree = make_tree(bp,q,N,0,n-1,S)\n",
    "plot_model(tree, \"tree.gv\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "30658a02",
   "metadata": {},
   "source": [
    "The most likely parse tree for the sentence 'the man saw the dog with the telescope' is:  \n",
    "![tree.gv](https://gitee.com/dotzhen/cloud-notes/raw/master/tree.gv.png)"
   ]
  },
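  {
   "cell_type": "markdown",
   "id": "9b5e27a4",
   "metadata": {},
   "source": [
    "As a hand check: under this grammar the two PP attachments tie. Attaching the PP to the VP gives\n",
    "\n",
    "$$1.0 \\times \\underbrace{0.08}_{\\text{the man}} \\times 0.2 \\times \\underbrace{0.2}_{\\text{saw the dog}} \\times \\underbrace{0.144}_{\\text{with the telescope}} = 0.0004608,$$\n",
    "\n",
    "while attaching it to the NP gives $1.0 \\times 0.08 \\times 0.5 \\times (0.2 \\times 0.4 \\times 0.144) = 0.0004608$ as well, so which tree the code returns depends on its tie-breaking order; the probability below matches either derivation."
   ]
  },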
  {
   "cell_type": "code",
   "execution_count": 44,
   "id": "ba426c82",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "best parse probability: 0.00046080000000000014\n"
     ]
    }
   ],
   "source": [
    "print('best parse probability:', pi[0][n-1][N.index(S)])"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  },
  "varInspector": {
   "cols": {
    "lenName": 16,
    "lenType": 16,
    "lenVar": 40
   },
   "kernels_config": {
    "python": {
     "delete_cmd_postfix": "",
     "delete_cmd_prefix": "del ",
     "library": "var_list.py",
     "varRefreshCmd": "print(var_dic_list())"
    },
    "r": {
     "delete_cmd_postfix": ") ",
     "delete_cmd_prefix": "rm(",
     "library": "var_list.r",
     "varRefreshCmd": "cat(var_dic_list()) "
    }
   },
   "types_to_exclude": [
    "module",
    "function",
    "builtin_function_or_method",
    "instance",
    "_Feature"
   ],
   "window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
