{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "028d85a7",
   "metadata": {},
   "source": [
    "## 这次主要是根据上次跑出来的一些结果，加入新的特征和一些小的修正\n",
    "- 加入语法错误检测  ---但是没整出来，自己写的方法不太好\n",
    "- 修正错误单词拼写\n",
    "- 修正大小写规则\n",
    "- 主题个数特征 ---没加入\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b3f8fb76",
   "metadata": {},
   "source": [
    "## 分析结果\n",
    "我找了一些评分差距比较大的文章，现在把评分结果分析记录在这里"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dc7a920d",
   "metadata": {},
   "source": [
    "### essay1:\n",
    "Dear Jerry, Hi, Jerry. For you thing. I think I can give you a little help. Frist, yours title must have attractive. Then, you must yours talking I will conclude by say:Fighting!you extreedly. great ! ' Yours,\n",
    "\n",
    "- 预测评分 8.5 \n",
    "- 实际评分 2.5 \n",
    "\n",
    "我们可以看出，标点使用十分不规范，语法错误较多，而且结尾没有以lihua结尾--这也说明，我们需要针对不同的评分文本做具体的分析"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9118a42e",
   "metadata": {},
   "source": [
    "### essay2：\n",
    "Dear Jerry, T will try my best to help you. First, You must know what you should say. And you must keep smile, You can't walk around, secend, you should try you best to say that want to say. Don't let students think you. fool'sh. You can ask teacher your problem, when the yanjiang ending, you should limao leave, Li Hua belive yourself, you can do that good. Every think will good. Yours. Li Hua\n",
    "\n",
    "- 预测得分：10.8 \n",
    "- 实际得分：7.0\n",
    "\n",
    "可以看出，语法错误较多，标点符号错误较多，还用到了拼音，但是这些机器都没有检查出来"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dd15bef0",
   "metadata": {},
   "source": [
    "### essay 3：\n",
    "Dear Jerry Hallo. The talking English is very interesting. Frish. You can writing about the talking. Don't anxiety. It's also true that nobaly is born knowing How to use knives and forks correctly. In turn we can take the opportunity to teach them some thing about our one culture. We've to learn How to drive here to wear proper clothing and so on. Finely, you can Li Mao live. You can underctand.? Yours, Li Hua\n",
    "\n",
    "- 预测得分：7.3\n",
    "- 实际得分：4.5\n",
    "\n",
    "可以看出，拼写错误多，标点符号使用错误多，语法错误多，所以之后一定要把拼写错误这一条加进去"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "53c4ccb4",
   "metadata": {},
   "source": [
    "### essay 4:\n",
    "Dear Jerry, Hello, Jerry. Long time no see. I'm Li Hua. How are you? I need your help Yours, Li Hua\n",
    "\n",
    "- 预测得分：9\n",
    "- 实际得分：0.5\n",
    "\n",
    "这个得分太离谱了，这个文章没有主题，字数太少，但是预测得分还很高，所以后面需要加入主题个数特征"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "658f3f44",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import nltk\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "from nltk.stem import WordNetLemmatizer\n",
    "from nltk.corpus import wordnet\n",
    "import re, collections\n",
    "from collections import defaultdict\n",
    "from sklearn.feature_extraction.text import CountVectorizer\n",
    "from sklearn.metrics import mean_squared_error, r2_score\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.linear_model import LinearRegression, Ridge, Lasso\n",
    "from sklearn import ensemble\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "import xgboost as xgb\n",
    "import matplotlib.pyplot as plt\n",
    "from sklearn.metrics import cohen_kappa_score"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a4d6535b",
   "metadata": {},
   "source": [
    "## 读取数据"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "74ec5cb7",
   "metadata": {},
   "outputs": [],
   "source": [
    "essay_csv = \"essay_test.csv\"\n",
    "\n",
    "dataframe = pd.read_csv(essay_csv, encoding = 'latin-1')\n",
    "#copy一份数据\n",
    "data = dataframe[['id','essay','score']].copy()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "f91b9ece",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>essay</th>\n",
       "      <th>score</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>10001</td>\n",
       "      <td>Dear Jerry. I've heard about that you will giv...</td>\n",
       "      <td>19.5</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>10002</td>\n",
       "      <td>Dear Jerry I'm glad that you'll respresent you...</td>\n",
       "      <td>16.5</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>10003</td>\n",
       "      <td>Dear Jerry, I am very happy to hear that you w...</td>\n",
       "      <td>20.5</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>10004</td>\n",
       "      <td>Dear Je I'm so happy to hear that you will hav...</td>\n",
       "      <td>15.5</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>10005</td>\n",
       "      <td>Dear Jerry, I am so glad to hear that you will...</td>\n",
       "      <td>19.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>...</th>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>996</th>\n",
       "      <td>11045</td>\n",
       "      <td>Dear Jerry, First, you must know how and what ...</td>\n",
       "      <td>17.5</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>997</th>\n",
       "      <td>11046</td>\n",
       "      <td>Dear Jerry, Yours, Li Hua</td>\n",
       "      <td>0.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>998</th>\n",
       "      <td>11047</td>\n",
       "      <td>Dear Jerry, Yours, Li Hua</td>\n",
       "      <td>0.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>999</th>\n",
       "      <td>11048</td>\n",
       "      <td>Dear Jerry, Yours, Li Hua</td>\n",
       "      <td>0.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1000</th>\n",
       "      <td>11049</td>\n",
       "      <td>Dear Jerry, Yours, Li Hua</td>\n",
       "      <td>0.0</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>1001 rows × 3 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "         id                                              essay  score\n",
       "0     10001  Dear Jerry. I've heard about that you will giv...   19.5\n",
       "1     10002  Dear Jerry I'm glad that you'll respresent you...   16.5\n",
       "2     10003  Dear Jerry, I am very happy to hear that you w...   20.5\n",
       "3     10004  Dear Je I'm so happy to hear that you will hav...   15.5\n",
       "4     10005  Dear Jerry, I am so glad to hear that you will...   19.0\n",
       "...     ...                                                ...    ...\n",
       "996   11045  Dear Jerry, First, you must know how and what ...   17.5\n",
       "997   11046                          Dear Jerry, Yours, Li Hua    0.0\n",
       "998   11047                          Dear Jerry, Yours, Li Hua    0.0\n",
       "999   11048                          Dear Jerry, Yours, Li Hua    0.0\n",
       "1000  11049                          Dear Jerry, Yours, Li Hua    0.0\n",
       "\n",
       "[1001 rows x 3 columns]"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pd.DataFrame(data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "f716d799",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "id       0\n",
       "essay    0\n",
       "score    0\n",
       "dtype: int64"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 检查有无空值\n",
    "data.isnull().sum()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "7c655ddd",
   "metadata": {},
   "outputs": [],
   "source": [
    "#把score列转换为列表\n",
    "scores = data[\"score\"]\n",
    "scores = [*scores]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "aa335227",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<AxesSubplot:>"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAlAAAAI/CAYAAAC4QOfKAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuNCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8QVMy6AAAACXBIWXMAAAsTAAALEwEAmpwYAAAaTUlEQVR4nO3df6zd933X8dfH8bh3KmVsymZ1U7IA2sBWxjbhTVtjpHuVObdykTrE0IgQ6jQr4STsasj84TjWtEFlbq8Ag2RhX2K5UPHDiA6mNUvV2BQfVW4Qa8p+tIs3Nk1dUlatVJW2pkyXJXz4I9chXhLbb/ve+/X3nsdDsu4933vuOW9Hys0z3+/nfr6t9x4AAG7erqEHAAAYGwEFAFAkoAAAigQUAECRgAIAKBJQAABFu7fzze6+++5+3333bedbAjPga1/7Wt7xjncMPQaww3zmM5/5cu/9m9/qa9saUPfdd1+ef/757XxLYAZMp9MsLCwMPQaww7TWfuftvuYSHgBAkYACACgSUAAARQIKAKBIQAEAFAkoAIAiAQUAUCSgAACKBBQAQJGAAgAoElAAAEUCCgCgSEABABQJKACAIgEFAFAkoAAAigQUAECRgAIAKBJQAABFAgoAoEhAAQAUCSgAgCIBBQBQtHvoAYDZ0lobeoSb1nsfegTgDuUMFLCteu+b/ufbj/7ClrwuwNsRUAAARQIKAKBIQAEAFAkoAIAiAQUAUCSgAACKBBQAQJGAAgAoElAAAEUCCgCgSEABABQJKACAIgEFAFAkoAAAigQUAECRgAIAKBJQAABFAgoAoEhAAQAUCSgAgCIBBQBQJKAAAIoEFABAkYACACgSUAAARQIKAKBIQAEAFAkoAIAiAQUAUCSgAACKBBQAQJGAAgAoElAAAEUCCgCgSEABABQJKACAIgEFAFAkoAAAigQUAECRgAIAKBJQAABFAgoAoEhAAQAUCSgAgCIBBQBQJKAAAIoEFABAkYACACgSUAAARQIKAKBIQAEAFAkoAIAiAQUAUCSgAACKBBQAQJGAAgAoElAAAEUCCgCg6IYB1Vq7p7V2qbV2pbX2a621n9w4/k2ttYuttd/c+PiNWz8uAMDwbuYM1CtJ/l7vfW+SH0jyd1pr+5I8keQTvffvSPKJjccAADveDQOq9/7F3vt/3/j8q0muJPm2JO9L8uGNp304yQ9v0YwAAHeU0hqo1tp9Sb43yX9Lsqf3/sXktchK8i2bPh0AwB1o980+sbX2J5P8xyR/t/f+B621m/2+R5M8miR79uzJdDq9hTEBrs/PFmA73VRAtda+Lq/F07/tvf+njcO/11p7V+/9i621dyX50lt9b+/9qSRPJcn+/fv7wsLC7U8N8EYffyZ+tgDb6WZ+C68lOZfkSu/95Bu+9NEk79/4/P1Jfn7zxwMAuPPczBmoB5L8rSSfba398saxJ5N8MMl/aK0dTvJikr++JRMCANxhbhhQvffLSd5uwdODmzsOAMCdz07kAABFAgoAoEhAAQAUCSgAgCIBBQBQJKAAAIoEFABAkYACACgSUAAARQIKAKBIQAEAFAkoAIAiAQUAUCSgAACKBBQAQJGAAgAoElAAAEUCCgCgSEABABQJKACAIgEFAFAkoAAAigQUAECRgAIAKBJQAABFAgoAoEhAAQAUCSgAgKLdQw8A3Lm+++9fyO//4R8NPcZNue+JZ4Ye4Ya+4eu/Lr/y0w8NPQawCQQU8LZ+/w//KJ//4HuHHuOGptNpFhYWhh7jhsYQecDNcQkPAKBIQAEAFAkoAIAiAQUAUCSgAACKBBQAQJGAAgAoElAAAEUCCgCgSEABABQJKACAIgEFAFAkoAAAigQUAECRgAIAKBJQAABFAgoAoEhAAQAUCSgAgCIBBQBQJKAAAIoEFABAkYACACgSUAAARQIKAKBIQAEAFAkoAIAiAQUAUCSgAACKBBQAQJGAAgAoElAAAEUCCgCgSEABABQJKACAIgEFAFAkoAAAigQUAECRgAIAKBJQAABFAgoAoGj30AMAd6537n0i3/XhJ4Ye4+Z8eOgBbuyde5PkvUOPAWwCAQW8ra9e+WA+/8E7/z/40+k0CwsLQ49xQ/c98czQIwCbxCU8AIAiAQUAUCSgAACKBBQAQJGAAgAoElAAAEUCCgCgSEABABQJKACAIgEFAFAkoAAAigQUAECRgAIAKBJQAABFAgoAoEhAAQAUCSgAgCIBBQBQJKAAAIoEFABAkYACACgSUAAARQIKAKBIQAEAFAkoAIAiAQUAUHTDgGqtfai19qXW2ufecOxnWmv/s7X2yxt/Dm3tmAAAd46bOQP1r5K85y2O/9Pe+/ds/PnY5o4FAHDnumFA9d4/meQr2zALAMAo3M4aqJ9orf3qxiW+b9y0iQAA7nC7b/H7ziT5QJK+8fGfJPnxt3pia+3RJI8myZ49ezKdTm/xLYEhjOHf2ZdffnkUcybj+OcJ3NgtBVTv/feuft5aO5vkF67z3KeSPJUk+/fv7wsLC7fylsAQPv5MxvDv7HQ6HcWcY/nnCdzYLV3Ca6296w0P/2qSz73dcwEAdpobnoFqrZ1PspDk7tbaF5L8dJKF1tr35LVLeJ9P8re3bkQAgDvLDQOq9/7wWxw+twWzAACMgp3IAQCKBBQAQJGAAgAoElAAAEUCCgCgSEABABQJKACAIgEFAFAkoAAAigQUAECRgAIAKBJQAABFAgoAoEhAAQAUCSgAgCIBBQBQtHvoAYA7231PPDP0CDfn43f+nN/w9V839AjAJhFQwNv6/AffO/QIN+W+J54ZzazAzuASHgBAkYACACgSUAAARQIKAKBIQAEAFAkoAIAiAQUAUCSgAACKBBQAQJGAAgAoElAAAEUCCgCgSEABABQJKACAIgEFAFAkoAAAigQUAECRgAIAKBJQAABFAgoAoEhAAQAUCSgAgCIBBQBQJKAAAIoEFABAkYACACgSUAAARQIKAKBIQAEAFAkoAIAiAQUAUCSgAACKBBQAQJGAAgAoElAAAEUCCgCgSEABABQJKACAIgEFAFAkoAAAigQUAECRgAIAKBJQAABFAgoAoEhAAQAUCSgAgCIBBQBQJKAAAIoEFABAkYACACgSUAAARQIKAKBIQAEAFAkoAIAiAQUAUCSgAACKBBQAQJGAAgAoElAAAEUCCgCgSEABABQJKACAIgEFAFAkoAAAigQUAECRgAIAKBJQAABFAgoAoEhAAQAUCSgAgCIBBQBQJKAAAIoEFABAkYACACgSUAAARQIKAKBIQAEAFAkoAIAiAQUAUCSgAACKbhhQrbUPtda+1Fr73BuOfVNr7WJr7Tc3Pn7j1o4JAHDnuJkzUP8qyXv+2LEnknyi9/4dST6x8RgAYCbcMKB6759M8pU/dvh9ST688fmHk/zw5o4FAHDnutU1UHt6719Mko2P37J5IwEA3Nl2b/UbtNYeTfJokuzZsyfT6XSr3xKYQX62ANvpVgPq91pr7+q9f7G19q4kX3q7J/ben0ryVJLs37+/Lyws3OJbAryNjz8TP1uA7XSrl/A+muT9G5+/P8nPb844AAB3vpvZxuB8kv+a5M+31r7QWjuc5INJDrbWfjPJwY3HAAAz4YaX8HrvD7/Nlx7c5FkAAEbBTuQAAEUCCgCgSEABABQJKACAIgEFAFAkoAAAigQUAECRgAIAKBJQAABFAgoAoEhAAQAUCSgAgCIBBQBQJKAAAIoEFABAkYACACgSUAAARQIKAKBIQAEAFAkoAIAiAQUAUCSgAACKBBQAQJGAAgAoElAAAEUCCgCgSEABABQJKACAIgEFAFAkoAAAigQUAECRgAIAKBJQAABFAgoAoEhAAQAUCSgAgCIBBQBQJKAAAIoEFABAkYACACgSUAAARQIKAKBIQAEAFAkoAIAiAQUAUCSgAACKBBQAQJGAAgAoElAAAEW7hx4AmC2tta153dXNf83e++a/KLAjOAMFbKve+6b/uXTp0pa8LsDbEVAAAEUCCgCgSEABABQJKACAIgEFAFAkoAAAigQUMFrLy8uZn5/P4uJi5ufns7y8PPRIwIywkSYwSsvLy1lbW8vq6mr27duXF154IUePHk2SnDp1auDpgJ3OGShglM6ePZvV1dUcOXIk8/PzOXLkSFZXV3P27NmhRwNmgIACRml9fT2TyeSaY5PJJOvr6wNNBMwSAQWM0tzcXNbW1q45tra2lrm5uYEmAmaJNVDAKD3yyCOvr3nat29fTp48maNHj77prBTAVhBQwChdXSj+5JNPZn19PXNzc5lMJhaQA9uibecdx/fv39+ff/75bXs/YDZMp9MsLCwMPQaww7TWPtN73/9WX7MGCgCgSEABABQJKGC07EQODMUicmCU7EQODMkZKGCU7EQODElAAaNkJ3JgSAIKGCU7kQNDsgYKGCU7kQNDElDAKNmJHBiSnciB0bMTObAV7EQOALCJBBQAQJGAAgAoElDAaC0tLWXXrl1ZXFzMrl27srS0NPRIwIwQUMAoLS0t5cKFC5lMJnn66aczmUxy4cIFEQVsC9sYAKN08eLFPPbYYzl9+nSm02lOnz6dJG/aXBNgKzgDBYxS7z0rKyvXHFtZWcl2bs0CzC4BBYxSay3Hjh275tixY8fSWhtoImCWuIQHjNLBgwdz5syZJMmhQ4fy+OOP58yZM3nooYcGngyYBXYiB0ZraWkpFy9eTO89rbUcPHgwzz777NBjATvE9XYidwYKGK2rseRWLsB2swYKAKBIQAEAFAkoAIAiAQWMllu5AEMRUMAouZULMCS/hQeMklu5AENyBgoYJbdyAYYkoIBRcisXYEgu4QGj5FYuwJDcygUYLbdyAbaSW7kAO5JbuQBDsQYKAKBIQAEAFN3WJbzW2ueTfDXJq0leebvrhAAAO8lmnIFa7L1/j3gCtptbuQBDcQkPGCW3cgGGdLu/hdeTXGit9ST/ovf+1CbMBHBDbuUCDOl2A+qB3vvvtta+JcnF1tqv994/+cYntNYeTfJokuzZsyfT6fQ23xLgtVu5HDp0KNPpNC+//HKm02kOHTqUM2fO+DkDbLnbCqje++9ufPxSa+3nknx/kk/+sec8leSp5LWNNO3VAmyG1lo+9rGPvX4GamFhIY8//nhaa/aEArbcLQdUa+0dSXb13r+68flDSf7Bpk0GcB1u5QIM6ZZv5dJa+7NJfm7j4e4k/673fuJ63+NWLsBmcisXYCttya1ceu+/neS7b3kqgNvkVi7AUGxjAABQJKAAAIoEFDBadiIHhiKggFGyEzkwpNvdSBNgEHYiB4bkDBQwSr33rKysXHNsZWUlt7o1C0CFgAJGqbWWY8eOXXPs2LFjaa0NNBEwS1zCA0bJTuTAkG55J/JbYSdyYDPZiRzYSluyEznA0OxEDgzFGigAgCIBBQBQJKAAAIoEFDBa58+fz/33358HH3ww999/f86fPz/0SMCMsIgcGKXz58/n+PHjOXfuXF599dXcddddOXz4cJLk4YcfHng6YKdzBgoYpRMnTuTcuXNZXFzM7t27s7i4mHPnzuXEiRNDjwbMAAEFjNKVK1dy4MCBa44dOHAgV65cGWgiYJYIKGCU9u7dm8uXL19z7PLly9m7d+9AEwGzREABo3T8+PEcPnw4ly5dyiuvvJJLly7l8OHDOX78+NCjATPAInJglK4uFF9eXs6VK1eyd+/enDhxwgJyYFu4Fx4wem7lAmyF690LzyU8AIAiAQUAUCSgAACKBBQwWktLS9m1a1cWFxeza9euLC0tDT0SMCMEFDBKS0tLuXDhQiaTSZ5++ulMJpNcuHBBRAHbwjYGwChdvHgxjz32WE6fPp3pdJrTp08nSdbW1gaeDJgFzkABo9R7z8rKyjXHVlZWsp1bswCzS0ABo9Ray7Fjx645duzYsbTWBpoImCUu4QGjdPDgwZw5cyZJcujQoTz++OM5c+ZMHnrooYEnA2aBnciB0VpaWsrFixfTe09rLQcPHsyzzz479FjADnG9ncidgQJG62osuZULsN2sgQIAKBJQAABFAgoAoEhAAaO1vLyc+fn5LC4uZn5+PsvLy0OPBMwIi8iBUVpeXs7a2lpWV1ezb9++vPDCCzl69GiS5NSpUwNPB+x0zkABo3T27Nmsrq7myJEjmZ+fz5EjR7K6upqzZ88OPRowAwQUMErr6+uZTCbXHJtMJllfXx9oImCWCChglObm5t504+C1tbXMzc0NNBEwS6yBAkbpkUceeX3N0759+3Ly5MkcPXr0TWelALaCgAJG6epC8SeffDLr6+uZm5vLZDKxgBzYFu6FB4yeW7kAW+F698KzBgoAoEhAAQAUWQMFjNa9996bl1566fXH99xzT1588cUBJwJmhTNQwChdjad3v/vd+chHPpJ3v/vdeemll3LvvfcOPRowAwQUMEpX4+lTn/pU7r777nzqU596PaIAtpqAAkbrZ3/2Z6/7GGCrCChgtH7kR37kuo8BtoqAAkbpnnvuyXPPPZcHHnggX/7yl/PAAw/kueeeyz333DP0aMAM8Ft4wCi9+OKLuffee/Pcc8/lueeeS+K38IDt4wwUMFovvvhieu+5dOlSeu/iCdg2AgoAoEhAAQAUCSgAgCIBBYzW8vJy5ufns7i4mPn5+SwvLw89EjAj/BYeMErLy8tZW1vL6upq9u3blxdeeCFHjx5Nkpw6dWrg6YCdzhkoYJTOnj2b1dXVHDlyJPPz8zly5EhWV1dz9uzZoUcDZoCAAkZpfX09k8nkmmOTySTr6+sDTQTMEgEFjNLc3FzW1tauOba2tpa5ubmBJgJmiTVQwCg98sgjr6952rdvX06ePJmjR4++6awUwFYQUMAoXV0o/uSTT2Z9fT1zc3OZTCYWkAPbovXet+3N9u/f359//vltez9gNkyn0ywsLAw9BrDDtNY+03vf/1ZfswYKAKBIQAEAFAkoAIAiAQWM1vnz53P//ffnwQcfzP3335/z588PPRIwI/wWHjBK58+fz/Hjx3Pu3Lm8+uqrueuuu3L48OEkycMPPzzwdMBO5wwUMEonTpzIuXPnsri4mN27d2dxcTHnzp3LiRMnhh4NmAECChilK1eu5MCBA9ccO3DgQK5cuTLQRMAsEVDAKO3duzeXL1++5tjly5ezd+/egSYCZomAAkbp+PHjOXz4cC5dupRXXnklly5dyuHDh3P8+PGhRwNmgEXkwChdXSi+vLycK1euZO/evTlx4oQF5MC2cCsXYPTcygXYCm7lAgCwiQQUAECRgAIAKBJQwGgtLS1l165dWVxczK5du7K0tDT0SMCMEFDAKC0tLeXChQuZTCZ5+umnM5lMcuHCBREFbAvbGACjdPHixTz22GM5ffp0ptNpTp8+nSRZW1sbeDJgFjgDBYxS7z0rKyvXHFtZWcl2bs0CzC4BBYxSay3Hjh275tixY8fSWhtoImCWuIQHjNLBgwdz5syZJMmhQ4fy+OOP58yZM3nooYcGngyYBXYiB0ZraWkpFy9eTO89rbUcPHgwzz777NBjATvE9XYidwYKGK2rseRWLsB2swYKAKBIQAEAFAkoYLSWl5czPz+fxcXFzM/PZ3l5eeiRgBlhDRQwSsvLy1lbW8vq6mr27duXF154IUePHk2SnDp1auDpgJ3OGShglM6ePZvV1dUcOXIk8/PzOXLkSFZXV3P27NmhRwNmgIACRml9fT2TyeSaY5PJJOvr6wNNBMwSAQWM0tzc3Jvue7e2tpa5ubmBJgJmiTVQwCg98sgjr6952rdvX06ePJmjR4++6awUwFYQUMAoXV0o/uSTT2Z9fT1zc3OZTCYWkAPbwq1cgNGzEzmwFa53KxdroAAAigQUAEDRbQVUa+09rbXfaK39Vmvtic0aCgDgTnbLi8hba3cl+edJDib5QpJPt9Y+2nt/YbOGA7ie1tqbjm3nuk5gdt3OGajvT/Jbvfff7r3/nyT/Psn7NmcsgOt7YzwdOHDgLY8DbJXbCahvS/LSGx5/YeMYwLbpvecDH/iAM0/AtrqdfaDe6n/z3vQTrLX2aJJHk2TPnj2ZTqe38ZYA/9+BAwcynU7z8ssvZzqd5sCBA7l8+bKfM8CWu+V9oFprP5jkZ3rvSxuPjyVJ733l7b7HPlDAZrl6qa73/vo+UG88BnC7tmofqE8n+Y7W2p9prf2JJH8jyUdv4/UAylpr+amf+ilrn4BtdcuX8Hrvr7TWfiLJs0nuSvKh3vuvbdpkANfRe389mi5fvnzNcYCtdlv7QPXeP9Z7/87e+5/rvZ/YrKEAbkbvPb33XLp06fXPAbaDncgBAIoEFABAkYACACgSUAAARQIKAKBIQAEAFAkoAIAiAQUAUCSgAACKBBQAQJGAAgAoElAAAEUCCgCgSEABABQJKACAIgEFAFAkoAAAigQUAECRgAIAKBJQAABFrfe+fW/W2v9K8jvb9obArLg7yZeHHgLYcb699/7Nb/WFbQ0ogK3QWnu+975/6DmA2eESHgBAkYACACgSUMBO8NTQAwCzxRooAIAiZ6AAAIoEFABAkYACdqzW2u6hZwB2JgEF3FFaa+9orT3TWvuV1trnWms/2lr7vtbacxvHfrG19s7W2nxr7V+21j7bWvul1trixvf/WGvtI621p5Nc2Hi9D7XWPr3xvPcN/FcEdgD/dwbcad6T5Hd77+9NktbaNyT5pSQ/2nv/dGvtTyX5wyQ/mSS99+9qrf2FvBZL37nxGj+Y5C/23r/SWvuHSf5L7/3HW2t/Oskvttb+c+/9a9v89wJ2EGeggDvNZ5P8UGtttbX2l5Pcm+SLvfdPJ0nv/Q96768kOZDkX28c+/W8dpuoqwF1sff+lY3PH0ryRGvtl5NMk8xvvCbALXMGCrij9N7/R2vtLyU5lGQlyYUkb7XfSrvOy7zx7FJL8td677+xeVMCs84ZKOCO0lr71iT/u/f+b5L84yQ/kORbW2vft/H1d24sDv9kkr+5cew789pZpbeKpGeTLLfW2sZzv3fr/xbATucMFHCn+a4k/6i19n+T/FGSx/LaWaRTrbWvz2vrn34oyekka621zyZ5JcmP9d7XNzrpjT6Q5J8l+dWNiPp8kr+yDX8PYAezEzkAQJFLeAAARQIKAKBIQAEAFAkoAIAiAQUAUCSgAACKBBQAQJGAAgAo+n+HjRx9XvrBngAAAABJRU5ErkJggg==\n",
      "text/plain": [
       "<Figure size 720x720 with 1 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    }
   ],
   "source": [
    "%matplotlib inline\n",
    "# 横坐标为分数，纵坐标为箱线图 也就是分数分布\n",
    "data.boxplot(column = 'score', figsize = (10, 10))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ca0f4dec",
   "metadata": {},
   "source": [
    "## 文本处理"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "dab6c8fc",
   "metadata": {},
   "outputs": [],
   "source": [
    "example = \"Dear Jerry I'm Li Hua. I know, you will to attend an English test. I think I can give you some advise. Such as, you should to know what you will to say. And when you say how you should to do. And you should to know how to leave. Yes, there are my advise. I think they are useful. And I think they can give you some help. So I say they in their. ' Yours, Li Hua\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f0f4a268",
   "metadata": {},
   "source": [
    "### 替换缩写词"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "0f31879c",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"Dear Jerry I am Li Hua. I know, you will to attend an English test. I think I can give you some advise. Such as, you should to know what you will to say. And when you say how you should to do. And you should to know how to leave. Yes, there are my advise. I think they are useful. And I think they can give you some help. So I say they in their. ' Yours, Li Hua\""
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 首先对句子做一个清洗，需要把缩写转换为原来的单词，如把I'm 转换成I am\n",
    "def covert_abb2words(essay):\n",
    "    #essay = essay.lower()\n",
    "    essay = essay.replace(\"I'm\",\"I am\").replace(\"'ve\",\" have\").replace(\"'ll\",\" will\").replace(\"n't\",\" not\")\n",
    "    return essay\n",
    "    \n",
    "covert_abb2words(example)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "c7fe3a20",
   "metadata": {},
   "outputs": [],
   "source": [
    "data[\"corrected\"] = data.apply(lambda x : covert_abb2words(x[\"essay\"]),axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "33df80a0",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>essay</th>\n",
       "      <th>score</th>\n",
       "      <th>corrected</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>10001</td>\n",
       "      <td>Dear Jerry. I've heard about that you will giv...</td>\n",
       "      <td>19.5</td>\n",
       "      <td>Dear Jerry. I have heard about that you will g...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>10002</td>\n",
       "      <td>Dear Jerry I'm glad that you'll respresent you...</td>\n",
       "      <td>16.5</td>\n",
       "      <td>Dear Jerry I am glad that you will respresent ...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>10003</td>\n",
       "      <td>Dear Jerry, I am very happy to hear that you w...</td>\n",
       "      <td>20.5</td>\n",
       "      <td>Dear Jerry, I am very happy to hear that you w...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>10004</td>\n",
       "      <td>Dear Je I'm so happy to hear that you will hav...</td>\n",
       "      <td>15.5</td>\n",
       "      <td>Dear Je I am so happy to hear that you will ha...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>10005</td>\n",
       "      <td>Dear Jerry, I am so glad to hear that you will...</td>\n",
       "      <td>19.0</td>\n",
       "      <td>Dear Jerry, I am so glad to hear that you will...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "      id                                              essay  score  \\\n",
       "0  10001  Dear Jerry. I've heard about that you will giv...   19.5   \n",
       "1  10002  Dear Jerry I'm glad that you'll respresent you...   16.5   \n",
       "2  10003  Dear Jerry, I am very happy to hear that you w...   20.5   \n",
       "3  10004  Dear Je I'm so happy to hear that you will hav...   15.5   \n",
       "4  10005  Dear Jerry, I am so glad to hear that you will...   19.0   \n",
       "\n",
       "                                           corrected  \n",
       "0  Dear Jerry. I have heard about that you will g...  \n",
       "1  Dear Jerry I am glad that you will respresent ...  \n",
       "2  Dear Jerry, I am very happy to hear that you w...  \n",
       "3  Dear Je I am so happy to hear that you will ha...  \n",
       "4  Dear Jerry, I am so glad to hear that you will...  "
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "data.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d0197787",
   "metadata": {},
   "source": [
    "## 语义特征\n",
    "这里我们还是使用之前google训练好的词向量"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "3cfdc556",
   "metadata": {},
   "outputs": [],
   "source": [
    "## 首先把essay转换成频率矩阵的形式\n",
    "def get_count_vectors(essays):\n",
    "    # 实例化vectorizer\n",
    "    vectorizer = CountVectorizer(max_features=10000)\n",
    "    #fit_transform(X)\t拟合模型，并返回文本矩阵\n",
    "    count_vectors = vectorizer.fit_transform(essays)\n",
    "    # get_feature_names()\t所有文本的词汇；列表型\n",
    "    feature_names = vectorizer.get_feature_names()\n",
    "    vocabulary = vectorizer.vocabulary_\n",
    "    return feature_names, count_vectors,vocabulary\n",
    "\n",
    "#返回essay的文章的文本矩阵和关键词列表\n",
    "feature_names_cv,count_vectors,vocabulary = get_count_vectors(data[\"corrected\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "c4eae4a7",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['00',\n",
       " '000',\n",
       " '100',\n",
       " '1000',\n",
       " '11',\n",
       " '12',\n",
       " '13407369803',\n",
       " '15',\n",
       " '17',\n",
       " '1863406',\n",
       " '19',\n",
       " '1989',\n",
       " '1x',\n",
       " '2000',\n",
       " '2018',\n",
       " '20th',\n",
       " '2333335',\n",
       " '2731',\n",
       " '286',\n",
       " '305',\n",
       " '3peach',\n",
       " '50',\n",
       " '5x',\n",
       " '60',\n",
       " '70',\n",
       " '7x',\n",
       " '832',\n",
       " '88888888',\n",
       " '99',\n",
       " '9n',\n",
       " 'aad',\n",
       " 'aadiences',\n",
       " 'aaditionly',\n",
       " 'aagin',\n",
       " 'aback',\n",
       " 'abad',\n",
       " 'ability',\n",
       " 'able',\n",
       " 'ablitilys',\n",
       " 'aboard',\n",
       " 'aboat',\n",
       " 'abody',\n",
       " 'abont',\n",
       " 'abord',\n",
       " 'abou',\n",
       " 'aboue',\n",
       " 'aboul',\n",
       " 'about',\n",
       " 'above',\n",
       " 'abroad',\n",
       " 'abserve',\n",
       " 'absorb',\n",
       " 'absorbe',\n",
       " 'absorbed',\n",
       " 'absutly',\n",
       " 'abtention',\n",
       " 'abut',\n",
       " 'acc',\n",
       " 'acceessed',\n",
       " 'accept',\n",
       " 'acceptable',\n",
       " 'accepted',\n",
       " 'access',\n",
       " 'accidents',\n",
       " 'accomplish',\n",
       " 'accopmplish',\n",
       " 'according',\n",
       " 'account',\n",
       " 'accpect',\n",
       " 'accpet',\n",
       " 'accroding',\n",
       " 'accross',\n",
       " 'accter',\n",
       " 'acepet',\n",
       " 'acepted',\n",
       " 'acheivement',\n",
       " 'achevement',\n",
       " 'achieve',\n",
       " 'achievement',\n",
       " 'achive',\n",
       " 'aciationly',\n",
       " 'acix',\n",
       " 'ack',\n",
       " 'acl',\n",
       " 'acoiss',\n",
       " 'across',\n",
       " 'act',\n",
       " 'acting',\n",
       " 'action',\n",
       " 'actioning',\n",
       " 'actions',\n",
       " 'actity',\n",
       " 'active',\n",
       " 'actively',\n",
       " 'actives',\n",
       " 'activet',\n",
       " 'activetice',\n",
       " 'activety',\n",
       " 'activitation',\n",
       " 'activite',\n",
       " 'activites',\n",
       " 'activitie',\n",
       " 'activities',\n",
       " 'activitios',\n",
       " 'activitles',\n",
       " 'activitly',\n",
       " 'activity',\n",
       " 'activityeswith',\n",
       " 'activty',\n",
       " 'acts',\n",
       " 'acttilt',\n",
       " 'acttion',\n",
       " 'ad',\n",
       " 'add',\n",
       " 'added',\n",
       " 'addention',\n",
       " 'addents',\n",
       " 'adding',\n",
       " 'addition',\n",
       " 'additionally',\n",
       " 'address',\n",
       " 'addressing',\n",
       " 'adds',\n",
       " 'addtion',\n",
       " 'addtionally',\n",
       " 'adences',\n",
       " 'adequate',\n",
       " 'adevice',\n",
       " 'adi',\n",
       " 'adice',\n",
       " 'adience',\n",
       " 'adiences',\n",
       " 'adise',\n",
       " 'adises',\n",
       " 'adittion',\n",
       " 'adituide',\n",
       " 'adiuas',\n",
       " 'adiums',\n",
       " 'adive',\n",
       " 'adivece',\n",
       " 'adiveisx',\n",
       " 'adivice',\n",
       " 'adix',\n",
       " 'adjusting',\n",
       " 'adjusts',\n",
       " 'admiles',\n",
       " 'admine',\n",
       " 'admire',\n",
       " 'admired',\n",
       " 'admived',\n",
       " 'adrence',\n",
       " 'adrise',\n",
       " 'adrises',\n",
       " 'aduances',\n",
       " 'aduence',\n",
       " 'aduices',\n",
       " 'aduience',\n",
       " 'aduiences',\n",
       " 'aduiends',\n",
       " 'aduses',\n",
       " 'adv',\n",
       " 'advace',\n",
       " 'advance',\n",
       " 'advances',\n",
       " 'advanse',\n",
       " 'advantage',\n",
       " 'advatisese',\n",
       " 'adve',\n",
       " 'adverious',\n",
       " 'advertage',\n",
       " 'adverting',\n",
       " 'advertise',\n",
       " 'advertisements',\n",
       " 'advertisments',\n",
       " 'adves',\n",
       " 'advice',\n",
       " 'advicelx',\n",
       " 'advices',\n",
       " 'advience',\n",
       " 'advies',\n",
       " 'advioce',\n",
       " 'advise',\n",
       " 'advises',\n",
       " 'advist',\n",
       " 'advites',\n",
       " 'advitesmes',\n",
       " 'advive',\n",
       " 'advoce',\n",
       " 'adwiences',\n",
       " 'adx',\n",
       " 'afaind',\n",
       " 'afaird',\n",
       " 'affect',\n",
       " 'affects',\n",
       " 'afford',\n",
       " 'affort',\n",
       " 'afird',\n",
       " 'afire',\n",
       " 'afrad',\n",
       " 'afraid',\n",
       " 'afriad',\n",
       " 'afriaid',\n",
       " 'afriald',\n",
       " 'afrid',\n",
       " 'afrids',\n",
       " 'afritad',\n",
       " 'aftaid',\n",
       " 'aftend',\n",
       " 'aftention',\n",
       " 'after',\n",
       " 'afternoon',\n",
       " 'aftes',\n",
       " 'ag',\n",
       " 'agai',\n",
       " 'agaim',\n",
       " 'again',\n",
       " 'againe',\n",
       " 'againest',\n",
       " 'againing',\n",
       " 'against',\n",
       " 'agair',\n",
       " 'age',\n",
       " 'agein',\n",
       " 'ages',\n",
       " 'agian',\n",
       " 'agin',\n",
       " 'ago',\n",
       " 'agoin',\n",
       " 'agree',\n",
       " 'agreements',\n",
       " 'aguin',\n",
       " 'ah',\n",
       " 'ahead',\n",
       " 'ahother',\n",
       " 'ahvays',\n",
       " 'aid',\n",
       " 'aidence',\n",
       " 'aim',\n",
       " 'air',\n",
       " 'airtical',\n",
       " 'airticle',\n",
       " 'airtid',\n",
       " 'ak',\n",
       " 'akama',\n",
       " 'akers',\n",
       " 'alive',\n",
       " 'alively',\n",
       " 'all',\n",
       " 'allinall',\n",
       " 'allow',\n",
       " 'almost',\n",
       " 'alneccass',\n",
       " 'alone',\n",
       " 'along',\n",
       " 'alothing',\n",
       " 'aloud',\n",
       " 'alout',\n",
       " 'alraid',\n",
       " 'alreadly',\n",
       " 'already',\n",
       " 'alrealy',\n",
       " 'alright',\n",
       " 'als',\n",
       " 'alse',\n",
       " 'alsences',\n",
       " 'also',\n",
       " 'alsorbed',\n",
       " 'altention',\n",
       " 'alter',\n",
       " 'although',\n",
       " 'althought',\n",
       " 'althougt',\n",
       " 'altough',\n",
       " 'alway',\n",
       " 'always',\n",
       " 'am',\n",
       " 'amazing',\n",
       " 'ambelieve',\n",
       " 'america',\n",
       " 'americal',\n",
       " 'amid',\n",
       " 'amo',\n",
       " 'among',\n",
       " 'ample',\n",
       " 'an',\n",
       " 'ana',\n",
       " 'anbiety',\n",
       " 'anbout',\n",
       " 'ance',\n",
       " 'ancient',\n",
       " 'ancious',\n",
       " 'ancom',\n",
       " 'and',\n",
       " 'anderstand',\n",
       " 'andience',\n",
       " 'andiences',\n",
       " 'andiens',\n",
       " 'andient',\n",
       " 'anding',\n",
       " 'andion',\n",
       " 'andx',\n",
       " 'ane',\n",
       " 'anend',\n",
       " 'angry',\n",
       " 'anidiences',\n",
       " 'aniouse',\n",
       " 'anix',\n",
       " 'anixous',\n",
       " 'ano',\n",
       " 'anot',\n",
       " 'anothe',\n",
       " 'another',\n",
       " 'anounce',\n",
       " 'anouonce',\n",
       " 'anriety',\n",
       " 'anrious',\n",
       " 'answer',\n",
       " 'answered',\n",
       " 'answers',\n",
       " 'ant',\n",
       " 'antiety',\n",
       " 'antract',\n",
       " 'anvety',\n",
       " 'anviety',\n",
       " 'anwasering',\n",
       " 'anwous',\n",
       " 'anxi',\n",
       " 'anxicty',\n",
       " 'anxicy',\n",
       " 'anxie',\n",
       " 'anxiece',\n",
       " 'anxiecex',\n",
       " 'anxied',\n",
       " 'anxiets',\n",
       " 'anxiety',\n",
       " 'anxio',\n",
       " 'anxions',\n",
       " 'anxiou',\n",
       " 'anxious',\n",
       " 'anxity',\n",
       " 'anxspeech',\n",
       " 'any',\n",
       " 'anymore',\n",
       " 'anyone',\n",
       " 'anything',\n",
       " 'anytime',\n",
       " 'anyway',\n",
       " 'ao',\n",
       " 'apain',\n",
       " 'apapt',\n",
       " 'apeat',\n",
       " 'apely',\n",
       " 'appeal',\n",
       " 'appealer',\n",
       " 'appealing',\n",
       " 'appear',\n",
       " 'appearance',\n",
       " 'appearing',\n",
       " 'appearling',\n",
       " 'applancely',\n",
       " 'apply',\n",
       " 'appoint',\n",
       " 'apporiciaty',\n",
       " 'appracite',\n",
       " 'appractic',\n",
       " 'appreance',\n",
       " 'appreciate',\n",
       " 'appreciated',\n",
       " 'appreciating',\n",
       " 'appreciation',\n",
       " 'appriciate',\n",
       " 'approach',\n",
       " 'approaching',\n",
       " 'approciate',\n",
       " 'appropriatly',\n",
       " 'apreciate',\n",
       " 'are',\n",
       " 'aready',\n",
       " 'areful',\n",
       " 'arefully',\n",
       " 'arespeaking',\n",
       " 'argood',\n",
       " 'arict',\n",
       " 'arm',\n",
       " 'arms',\n",
       " 'around',\n",
       " 'arove',\n",
       " 'arrive',\n",
       " 'arsing',\n",
       " 'art',\n",
       " 'artcle',\n",
       " 'artic',\n",
       " 'artical',\n",
       " 'articat',\n",
       " 'artich',\n",
       " 'articl',\n",
       " 'article',\n",
       " 'articles',\n",
       " 'artide',\n",
       " 'artiel',\n",
       " 'artilc',\n",
       " 'as',\n",
       " 'asaid',\n",
       " 'ase',\n",
       " 'ash',\n",
       " 'ashamed',\n",
       " 'asi',\n",
       " 'aside',\n",
       " 'ask',\n",
       " 'aske',\n",
       " 'asked',\n",
       " 'askeed',\n",
       " 'asking',\n",
       " 'asks',\n",
       " 'asleep',\n",
       " 'aslo',\n",
       " 'aspect',\n",
       " 'assembly',\n",
       " 'assistance',\n",
       " 'assistand',\n",
       " 'assistence',\n",
       " 'assitance',\n",
       " 'asspect',\n",
       " 'ast',\n",
       " 'astrat',\n",
       " 'at',\n",
       " 'atation',\n",
       " 'ateation',\n",
       " 'ateention',\n",
       " 'atemosphere',\n",
       " 'aten',\n",
       " 'atending',\n",
       " 'atention',\n",
       " 'athough',\n",
       " 'atit',\n",
       " 'atmoshere',\n",
       " 'atmosphere',\n",
       " 'atract',\n",
       " 'atractive',\n",
       " 'atrctive',\n",
       " 'atrical',\n",
       " 'atrick',\n",
       " 'atricle',\n",
       " 'atritude',\n",
       " 'atrract',\n",
       " 'att',\n",
       " 'attach',\n",
       " 'attached',\n",
       " 'attack',\n",
       " 'attact',\n",
       " 'attacting',\n",
       " 'attactive',\n",
       " 'attant',\n",
       " 'attantion',\n",
       " 'attart',\n",
       " 'attated',\n",
       " 'attation',\n",
       " 'attch',\n",
       " 'atte',\n",
       " 'attection',\n",
       " 'atted',\n",
       " 'attempt',\n",
       " 'atten',\n",
       " 'attenation',\n",
       " 'attence',\n",
       " 'attend',\n",
       " 'attended',\n",
       " 'attending',\n",
       " 'attends',\n",
       " 'attendtion',\n",
       " 'attened',\n",
       " 'attenion',\n",
       " 'attenpt',\n",
       " 'attenption',\n",
       " 'attent',\n",
       " 'attented',\n",
       " 'attentein',\n",
       " 'attenten',\n",
       " 'attention',\n",
       " 'attentioned',\n",
       " 'attentions',\n",
       " 'attentively',\n",
       " 'attentoon',\n",
       " 'attentron',\n",
       " 'atter',\n",
       " 'atternder',\n",
       " 'atters',\n",
       " 'attetion',\n",
       " 'attiention',\n",
       " 'attin',\n",
       " 'attion',\n",
       " 'attioned',\n",
       " 'attions',\n",
       " 'attiont',\n",
       " 'attirack',\n",
       " 'attition',\n",
       " 'attitude',\n",
       " 'attiude',\n",
       " 'attived',\n",
       " 'attraced',\n",
       " 'attract',\n",
       " 'attracte',\n",
       " 'attracted',\n",
       " 'attractin',\n",
       " 'attracting',\n",
       " 'attraction',\n",
       " 'attractive',\n",
       " 'attrat',\n",
       " 'attrectio',\n",
       " 'attuide',\n",
       " 'aturion',\n",
       " 'atxovel',\n",
       " 'au',\n",
       " 'aud',\n",
       " 'audence',\n",
       " 'audiance',\n",
       " 'audien',\n",
       " 'audienc',\n",
       " 'audience',\n",
       " 'audiences',\n",
       " 'audiens',\n",
       " 'audients',\n",
       " 'audios',\n",
       " 'audiues',\n",
       " 'audiunces',\n",
       " 'auguage',\n",
       " 'auidence',\n",
       " 'auidences',\n",
       " 'auidens',\n",
       " 'auldience',\n",
       " 'autdiens',\n",
       " 'ave',\n",
       " 'avidence',\n",
       " 'aviod',\n",
       " 'avior',\n",
       " 'avoid',\n",
       " 'avoied',\n",
       " 'awaiting',\n",
       " 'aware',\n",
       " 'away',\n",
       " 'aways',\n",
       " 'axiety',\n",
       " 'axious',\n",
       " 'axtractive',\n",
       " 'ay',\n",
       " 'ayou',\n",
       " 'bab',\n",
       " 'baby',\n",
       " 'back',\n",
       " 'backing',\n",
       " 'backwara',\n",
       " 'backward',\n",
       " 'bact',\n",
       " 'bad',\n",
       " 'badly',\n",
       " 'bady',\n",
       " 'badylanguage',\n",
       " 'bagin',\n",
       " 'bais',\n",
       " 'balance',\n",
       " 'baleve',\n",
       " 'bank',\n",
       " 'baod',\n",
       " 'baody',\n",
       " 'bar',\n",
       " 'base',\n",
       " 'based',\n",
       " 'basic',\n",
       " 'basis',\n",
       " 'basise',\n",
       " 'basises',\n",
       " 'basketball',\n",
       " 'bast',\n",
       " 'bat',\n",
       " 'batter',\n",
       " 'bay',\n",
       " 'baying',\n",
       " 'bcause',\n",
       " 'be',\n",
       " 'beacouse',\n",
       " 'beacuse',\n",
       " 'beagin',\n",
       " 'bear',\n",
       " 'bearin',\n",
       " 'beast',\n",
       " 'beat',\n",
       " 'beau',\n",
       " 'beaus',\n",
       " 'beause',\n",
       " 'beautiful',\n",
       " 'beautifuly',\n",
       " 'beauty',\n",
       " 'beave',\n",
       " 'becaase',\n",
       " 'becacese',\n",
       " 'becale',\n",
       " 'became',\n",
       " 'becasx',\n",
       " 'becau',\n",
       " 'becaub',\n",
       " 'becaus',\n",
       " 'because',\n",
       " 'becides',\n",
       " 'becieve',\n",
       " 'becom',\n",
       " 'become',\n",
       " 'becoming',\n",
       " 'becon',\n",
       " 'becplox',\n",
       " 'becuse',\n",
       " 'bedy',\n",
       " 'beeause',\n",
       " 'beeb',\n",
       " 'been',\n",
       " 'beeping',\n",
       " 'beeter',\n",
       " 'befor',\n",
       " 'before',\n",
       " 'begain',\n",
       " 'begaining',\n",
       " 'began',\n",
       " 'beganing',\n",
       " 'begian',\n",
       " 'begin',\n",
       " 'begining',\n",
       " 'beginning',\n",
       " 'begins',\n",
       " 'begyning',\n",
       " 'behah',\n",
       " 'behaior',\n",
       " 'behaiors',\n",
       " 'behaivor',\n",
       " 'behalf',\n",
       " 'behand',\n",
       " 'behariour',\n",
       " 'behave',\n",
       " 'behavier',\n",
       " 'behavior',\n",
       " 'behaviores',\n",
       " 'behaviors',\n",
       " 'behaviour',\n",
       " 'behavir',\n",
       " 'behavoir',\n",
       " 'behavor',\n",
       " 'behavour',\n",
       " 'behavy',\n",
       " 'behevior',\n",
       " 'behind',\n",
       " 'behindclass',\n",
       " 'behiver',\n",
       " 'behivior',\n",
       " 'behivour',\n",
       " 'beides',\n",
       " 'being',\n",
       " 'beings',\n",
       " 'belaus',\n",
       " 'beleive',\n",
       " 'belie',\n",
       " 'beliere',\n",
       " 'believ',\n",
       " 'believe',\n",
       " 'believeable',\n",
       " 'believed',\n",
       " 'believing',\n",
       " 'beliling',\n",
       " 'beline',\n",
       " 'beliv',\n",
       " 'belive',\n",
       " 'belived',\n",
       " 'belivef',\n",
       " 'belivem',\n",
       " 'beliven',\n",
       " 'belives',\n",
       " 'beliveve',\n",
       " 'beller',\n",
       " 'belong',\n",
       " 'belongs',\n",
       " 'below',\n",
       " 'belsieve',\n",
       " 'belwe',\n",
       " 'ben',\n",
       " 'bend',\n",
       " 'benefical',\n",
       " 'beneficial',\n",
       " 'benefit',\n",
       " 'benefits',\n",
       " 'benglish',\n",
       " 'benifit',\n",
       " 'benifited',\n",
       " 'benift',\n",
       " 'bent',\n",
       " 'beqin',\n",
       " 'ber',\n",
       " 'berief',\n",
       " 'berieve',\n",
       " 'bering',\n",
       " 'bese',\n",
       " 'beside',\n",
       " 'besides',\n",
       " 'best',\n",
       " 'besuccesful',\n",
       " 'bette',\n",
       " 'better',\n",
       " 'betterly',\n",
       " 'between',\n",
       " 'beuteful',\n",
       " 'bew',\n",
       " 'bewilling',\n",
       " 'bex',\n",
       " 'bey',\n",
       " 'bgain',\n",
       " 'bie',\n",
       " 'big',\n",
       " 'biggest',\n",
       " 'billeve',\n",
       " 'binow',\n",
       " 'biology',\n",
       " 'bird',\n",
       " 'black',\n",
       " 'blow',\n",
       " 'blue',\n",
       " 'blx',\n",
       " 'bnow',\n",
       " 'bo',\n",
       " 'boa',\n",
       " 'boaylanguage',\n",
       " 'bob',\n",
       " 'boby',\n",
       " 'bocly',\n",
       " 'bod',\n",
       " 'bodies',\n",
       " 'bodiy',\n",
       " 'bodly',\n",
       " 'body',\n",
       " 'bodyactives',\n",
       " 'bodylamage',\n",
       " 'bodylan',\n",
       " 'bodylang',\n",
       " 'bodylangnage',\n",
       " 'bodylangrage',\n",
       " 'bodylanguage',\n",
       " 'bodylanguagecan',\n",
       " 'bodylangucige',\n",
       " 'bodylangurages',\n",
       " 'bodylanugh',\n",
       " 'bodylaugde',\n",
       " 'bodylaugray',\n",
       " 'bodylauguages',\n",
       " 'bodylauguen',\n",
       " 'bodynangewage',\n",
       " 'bodys',\n",
       " 'boely',\n",
       " 'boing',\n",
       " 'bok',\n",
       " 'bolieve',\n",
       " 'bolow',\n",
       " 'bolw',\n",
       " 'boly',\n",
       " 'bonus',\n",
       " 'bood',\n",
       " 'book',\n",
       " 'bookite',\n",
       " 'books',\n",
       " 'booly',\n",
       " 'booy',\n",
       " 'borad',\n",
       " 'bord',\n",
       " 'bordy',\n",
       " 'bored',\n",
       " 'boring',\n",
       " 'born',\n",
       " 'bositive',\n",
       " 'bost',\n",
       " 'bot',\n",
       " 'both',\n",
       " 'botter',\n",
       " 'boul',\n",
       " 'boun',\n",
       " 'bour',\n",
       " 'bout',\n",
       " 'bow',\n",
       " 'bowing',\n",
       " 'bowl',\n",
       " 'boy',\n",
       " 'boyy',\n",
       " 'brack',\n",
       " 'brain',\n",
       " 'brave',\n",
       " 'break',\n",
       " 'breally',\n",
       " 'breath',\n",
       " 'breathe',\n",
       " 'breathing',\n",
       " 'breif',\n",
       " 'brief',\n",
       " 'brillght',\n",
       " 'brilliant',\n",
       " 'bring',\n",
       " 'broaden',\n",
       " 'brobbly',\n",
       " 'broden',\n",
       " 'brothers',\n",
       " 'bu',\n",
       " 'buccess',\n",
       " 'buck',\n",
       " 'budget',\n",
       " 'bue',\n",
       " 'buggestions',\n",
       " 'buiding',\n",
       " 'building',\n",
       " 'built',\n",
       " 'burt',\n",
       " 'busy',\n",
       " 'but',\n",
       " 'buy',\n",
       " 'bvery',\n",
       " 'by',\n",
       " 'bye',\n",
       " 'byeor',\n",
       " 'byeto',\n",
       " 'ca',\n",
       " 'cad',\n",
       " 'cadive',\n",
       " 'cafeful',\n",
       " 'calin',\n",
       " 'call',\n",
       " 'calm',\n",
       " 'calmed',\n",
       " 'calmer',\n",
       " 'calming',\n",
       " 'calmly',\n",
       " 'calo',\n",
       " 'cam',\n",
       " 'came',\n",
       " 'campet',\n",
       " 'can',\n",
       " 'canget',\n",
       " 'canguage',\n",
       " 'canguages',\n",
       " 'cangudge',\n",
       " 'cannot',\n",
       " 'cans',\n",
       " 'cant',\n",
       " 'canughuge',\n",
       " 'canx',\n",
       " 'canyy',\n",
       " 'capable',\n",
       " 'car',\n",
       " 'care',\n",
       " 'cared',\n",
       " 'careet',\n",
       " 'careful',\n",
       " 'carefull',\n",
       " 'carefully',\n",
       " 'carefuly',\n",
       " 'careless',\n",
       " 'carelly',\n",
       " 'carit',\n",
       " 'carry',\n",
       " 'carse',\n",
       " 'case',\n",
       " 'cash',\n",
       " 'cast',\n",
       " 'catch',\n",
       " 'catching',\n",
       " 'cathx',\n",
       " 'cattantion',\n",
       " 'catting',\n",
       " 'cattionly',\n",
       " 'cau',\n",
       " 'caught',\n",
       " 'cauguage',\n",
       " 'cauld',\n",
       " 'cause',\n",
       " 'cave',\n",
       " 'cax',\n",
       " 'caxuntll',\n",
       " 'cay',\n",
       " 'ce',\n",
       " 'cearelly',\n",
       " 'cearly',\n",
       " 'ceas',\n",
       " 'ceave',\n",
       " 'celverly',\n",
       " 'cend',\n",
       " 'center',\n",
       " 'cepresse',\n",
       " 'certain',\n",
       " 'certainly',\n",
       " 'cessful',\n",
       " 'ceurly',\n",
       " 'cexcisex',\n",
       " 'cexcited',\n",
       " 'cexperice',\n",
       " 'ch',\n",
       " 'chainis',\n",
       " 'chairman',\n",
       " 'chaix',\n",
       " 'challaging',\n",
       " 'challenge',\n",
       " 'champion',\n",
       " 'chan',\n",
       " 'chance',\n",
       " 'change',\n",
       " 'changed',\n",
       " 'changex',\n",
       " 'changrege',\n",
       " 'chanle',\n",
       " 'chanlenge',\n",
       " 'character',\n",
       " 'charactor',\n",
       " 'charge',\n",
       " 'charm',\n",
       " 'charming',\n",
       " 'chartarn',\n",
       " 'chast',\n",
       " 'chave',\n",
       " 'chdix',\n",
       " 'che',\n",
       " 'cheak',\n",
       " 'cheardx',\n",
       " 'chearly',\n",
       " 'check',\n",
       " 'checking',\n",
       " 'cheer',\n",
       " 'cheered',\n",
       " 'cheering',\n",
       " 'cheers',\n",
       " 'chelive',\n",
       " 'chery',\n",
       " 'chian',\n",
       " 'chief',\n",
       " 'children',\n",
       " 'china',\n",
       " 'chinese',\n",
       " 'chinesesays',\n",
       " 'chinsex',\n",
       " 'chioce',\n",
       " 'choice',\n",
       " 'choicm',\n",
       " 'choiom',\n",
       " 'choion',\n",
       " 'choirm',\n",
       " 'chok',\n",
       " 'choose',\n",
       " 'choosen',\n",
       " 'chope',\n",
       " 'chose',\n",
       " 'chosen',\n",
       " 'chow',\n",
       " 'cifix',\n",
       " 'cignifance',\n",
       " 'cimpo',\n",
       " 'cimpolint',\n",
       " 'cimx',\n",
       " 'cind',\n",
       " 'cintest',\n",
       " 'circumstances',\n",
       " 'cirde',\n",
       " 'cisix',\n",
       " 'cist',\n",
       " 'city',\n",
       " 'clam',\n",
       " 'clamed',\n",
       " 'clamer',\n",
       " 'clamly',\n",
       " 'clanb',\n",
       " 'clapsand',\n",
       " 'class',\n",
       " 'classmales',\n",
       " 'classmate',\n",
       " 'classmates',\n",
       " 'classmetes',\n",
       " 'classnom',\n",
       " 'classroom',\n",
       " 'clauage',\n",
       " 'claugau',\n",
       " 'clean',\n",
       " 'cleanly',\n",
       " 'clear',\n",
       " 'clearer',\n",
       " 'clearly',\n",
       " 'cleay',\n",
       " 'cleeply',\n",
       " 'clere',\n",
       " 'clerly',\n",
       " 'clet',\n",
       " 'cletermination',\n",
       " 'clever',\n",
       " 'clex',\n",
       " 'clifference',\n",
       " 'cliss',\n",
       " 'clist',\n",
       " 'clo',\n",
       " 'clock',\n",
       " 'clod',\n",
       " 'close',\n",
       " 'closed',\n",
       " 'closely',\n",
       " 'closen',\n",
       " ...]"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "feature_names_cv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "dbdb2ac9",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "5584"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(feature_names_cv)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "4a343350",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'dear': 1286,\n",
       " 'jerry': 2594,\n",
       " 'have': 2228,\n",
       " 'heard': 2240,\n",
       " 'about': 47,\n",
       " 'that': 5030,\n",
       " 'you': 5535,\n",
       " 'will': 5415,\n",
       " 'give': 2085,\n",
       " 'deliver': 1316,\n",
       " 'to': 5129,\n",
       " 'the': 5034,\n",
       " 'students': 4799,\n",
       " 'of': 3462,\n",
       " 'senior': 4390,\n",
       " 'one': 3485,\n",
       " 'feel': 1787,\n",
       " 'very': 5292,\n",
       " 'proud': 3956,\n",
       " 'here': 2280,\n",
       " 'are': 376,\n",
       " 'some': 4577,\n",
       " 'tips': 5122,\n",
       " 'for': 1920,\n",
       " 'first': 1859,\n",
       " 'and': 292,\n",
       " 'foremost': 1924,\n",
       " 'it': 2576,\n",
       " 'importante': 2455,\n",
       " 'know': 2644,\n",
       " 'content': 1181,\n",
       " 'speech': 4649,\n",
       " 'well': 5367,\n",
       " 'which': 5392,\n",
       " 'help': 2264,\n",
       " 'no': 3413,\n",
       " 'nervous': 3382,\n",
       " 'when': 5383,\n",
       " 'giving': 2090,\n",
       " 'furthermore': 2024,\n",
       " 'pay': 3598,\n",
       " 'body': 733,\n",
       " 'languages': 2739,\n",
       " 'should': 4466,\n",
       " 'be': 578,\n",
       " 'attached': 447,\n",
       " 'ca': 820,\n",
       " 'great': 2147,\n",
       " 'importance': 2450,\n",
       " 'while': 5394,\n",
       " 'speaking': 4631,\n",
       " 'attract': 500,\n",
       " 'audience': 518,\n",
       " 'last': 2805,\n",
       " 'but': 813,\n",
       " 'not': 3425,\n",
       " 'least': 2918,\n",
       " 'leave': 2925,\n",
       " 'stage': 4715,\n",
       " 'politely': 3765,\n",
       " 'all': 247,\n",
       " 'relaxed': 4109,\n",
       " 'these': 5050,\n",
       " 'days': 1282,\n",
       " 'rem': 4125,\n",
       " 'do': 1399,\n",
       " 'forget': 1931,\n",
       " 'am': 275,\n",
       " 'there': 5046,\n",
       " 'anytime': 345,\n",
       " 'if': 2403,\n",
       " 'any': 341,\n",
       " 'problems': 3915,\n",
       " 'hesitate': 2283,\n",
       " 'contact': 1171,\n",
       " 'me': 3183,\n",
       " 'good': 2114,\n",
       " 'luck': 3111,\n",
       " 'yours': 5549,\n",
       " 'li': 2988,\n",
       " 'hua': 2366,\n",
       " 'glad': 2093,\n",
       " 'respresent': 4217,\n",
       " 'your': 5542,\n",
       " 'school': 4308,\n",
       " 'an': 284,\n",
       " 'english': 1552,\n",
       " 'like': 3000,\n",
       " 'advise': 181,\n",
       " 'must': 3317,\n",
       " 'significance': 4495,\n",
       " 'understanding': 5229,\n",
       " 'contents': 1182,\n",
       " 'is': 2568,\n",
       " 'could': 1226,\n",
       " 'speak': 4622,\n",
       " 'fluently': 1887,\n",
       " 'so': 4566,\n",
       " 'get': 2067,\n",
       " 'view': 5297,\n",
       " 'across': 85,\n",
       " 'remeber': 4139,\n",
       " 'practise': 3843,\n",
       " 'makes': 3146,\n",
       " 'perfect': 3633,\n",
       " 'over': 3541,\n",
       " 'again': 215,\n",
       " 'then': 5042,\n",
       " 'wo': 5442,\n",
       " 'make': 3143,\n",
       " 'nistakes': 3411,\n",
       " 'also': 265,\n",
       " 'benifit': 687,\n",
       " 'attention': 478,\n",
       " 'language': 2737,\n",
       " 'proper': 3941,\n",
       " 'would': 5477,\n",
       " 'more': 3285,\n",
       " 'natural': 3328,\n",
       " 'eye': 1720,\n",
       " 'catching': 864,\n",
       " 're': 4023,\n",
       " 'supposed': 4907,\n",
       " 'out': 3534,\n",
       " 'impression': 2469,\n",
       " 'with': 5437,\n",
       " 'hope': 2330,\n",
       " 'success': 4830,\n",
       " 'happy': 2210,\n",
       " 'hear': 2239,\n",
       " 'writting': 5489,\n",
       " 'advice': 175,\n",
       " 'firstly': 1860,\n",
       " 'remember': 4144,\n",
       " 'what': 5377,\n",
       " 'talk': 4951,\n",
       " 'important': 2453,\n",
       " 'read': 4027,\n",
       " 'scrpit': 4318,\n",
       " 'until': 5253,\n",
       " 'words': 5457,\n",
       " 'stand': 4719,\n",
       " 'on': 3482,\n",
       " 'table': 4938,\n",
       " 'say': 4283,\n",
       " 'next': 3404,\n",
       " 'terrible': 5011,\n",
       " 'secondly': 4344,\n",
       " 'mind': 3252,\n",
       " 'can': 835,\n",
       " 'attractive': 506,\n",
       " 'fall': 1739,\n",
       " 'in': 2480,\n",
       " 'as': 401,\n",
       " 'at': 425,\n",
       " 'finish': 1834,\n",
       " 'let': 2969,\n",
       " 'turn': 5201,\n",
       " 'down': 1419,\n",
       " 'just': 2626,\n",
       " 'go': 2102,\n",
       " 'away': 539,\n",
       " 'by': 816,\n",
       " 'cool': 1216,\n",
       " 'type': 5210,\n",
       " 'or': 3510,\n",
       " 'fun': 2017,\n",
       " 'polite': 3762,\n",
       " 'chinese': 934,\n",
       " 'care': 848,\n",
       " 'this': 5074,\n",
       " 'letter': 2976,\n",
       " 'helpful': 2267,\n",
       " 'je': 2588,\n",
       " 'chance': 902,\n",
       " 'practice': 3835,\n",
       " 'skills': 4530,\n",
       " 'suggestions': 4877,\n",
       " 'surposed': 4915,\n",
       " 'partice': 3578,\n",
       " 'many': 3161,\n",
       " 'times': 5112,\n",
       " 'behavior': 637,\n",
       " 'friends': 1992,\n",
       " 'parents': 3571,\n",
       " 'case': 860,\n",
       " 'langues': 2759,\n",
       " 'interesting': 2543,\n",
       " 'finally': 1821,\n",
       " 'finished': 1835,\n",
       " 'shoud': 4462,\n",
       " 'walk': 5324,\n",
       " 'right': 4232,\n",
       " 'now': 3444,\n",
       " 'because': 601,\n",
       " 'impolite': 2430,\n",
       " 'ending': 1532,\n",
       " 'examble': 1634,\n",
       " 'thanks': 5024,\n",
       " 'listening': 3024,\n",
       " 'seniour': 4393,\n",
       " 'writing': 5487,\n",
       " 'begin': 623,\n",
       " 'better': 702,\n",
       " 'familar': 1744,\n",
       " 'need': 3356,\n",
       " 'nervious': 3375,\n",
       " 'truntry': 5187,\n",
       " 'viewers': 5298,\n",
       " 'talking': 4953,\n",
       " 'who': 5403,\n",
       " 'add': 113,\n",
       " 'using': 5273,\n",
       " 'express': 1702,\n",
       " 'meaning': 3188,\n",
       " 'easily': 1456,\n",
       " 'insread': 2527,\n",
       " 'sirence': 4520,\n",
       " 'poriter': 3794,\n",
       " 'doing': 1401,\n",
       " 'only': 3489,\n",
       " 'pertect': 3669,\n",
       " 'shows': 4479,\n",
       " 'respect': 4204,\n",
       " 'tind': 5115,\n",
       " 'helptul': 2273,\n",
       " 'tell': 4998,\n",
       " 'things': 5061,\n",
       " 'notice': 3431,\n",
       " 'hop': 2329,\n",
       " 'benefit': 684,\n",
       " 'representing': 4187,\n",
       " 'student': 4797,\n",
       " 'ought': 3528,\n",
       " 'prepare': 3875,\n",
       " 'memorary': 3209,\n",
       " 'listeners': 3022,\n",
       " 'besides': 698,\n",
       " 'adv': 160,\n",
       " 'ise': 2569,\n",
       " 'due': 1436,\n",
       " 'its': 2581,\n",
       " 'attraction': 505,\n",
       " 'intal': 2531,\n",
       " 'spe': 4609,\n",
       " 'ech': 1466,\n",
       " 'forgot': 1938,\n",
       " 'leaving': 2929,\n",
       " 'politecy': 3763,\n",
       " 'looking': 3067,\n",
       " 'forward': 1954,\n",
       " 'wonderful': 5448,\n",
       " 'perfomance': 3638,\n",
       " 'wish': 5429,\n",
       " 'my': 3321,\n",
       " 'work': 5460,\n",
       " 'learned': 2913,\n",
       " 'englis': 1550,\n",
       " 'high': 2293,\n",
       " 'grade': 2129,\n",
       " 'advices': 177,\n",
       " 'sure': 4912,\n",
       " 'remem': 4141,\n",
       " 'ber': 692,\n",
       " 'everything': 1622,\n",
       " 'mentioned': 3227,\n",
       " 'try': 5191,\n",
       " 'alone': 252,\n",
       " 'stop': 4760,\n",
       " 'often': 3471,\n",
       " 'daily': 1268,\n",
       " 'life': 2993,\n",
       " 'may': 3178,\n",
       " 'looks': 3068,\n",
       " 'think': 5062,\n",
       " 'room': 4243,\n",
       " 'polit': 3760,\n",
       " 'way': 5350,\n",
       " 'news': 3402,\n",
       " 'gold': 2109,\n",
       " 'show': 4472,\n",
       " 'yourself': 5551,\n",
       " 'before': 617,\n",
       " 'something': 4580,\n",
       " 'topic': 5149,\n",
       " 'most': 3292,\n",
       " 'parts': 3582,\n",
       " 'they': 5053,\n",
       " 'those': 5085,\n",
       " 'listen': 3019,\n",
       " 'carefully': 853,\n",
       " 'langurage': 2772,\n",
       " 'exciting': 1655,\n",
       " 'final': 1819,\n",
       " 'part': 3577,\n",
       " 'making': 3147,\n",
       " 'after': 209,\n",
       " 'people': 3619,\n",
       " 'comfortable': 1030,\n",
       " 'trust': 5188,\n",
       " 'wonderfur': 5450,\n",
       " 'receive': 4049,\n",
       " 'reply': 4181,\n",
       " 'questions': 3987,\n",
       " 'memorize': 3216,\n",
       " 'information': 2503,\n",
       " 'addressing': 121,\n",
       " 'second': 4343,\n",
       " 'boby': 727,\n",
       " 'successful': 4834,\n",
       " 'time': 5111,\n",
       " 'come': 1020,\n",
       " 'inspire': 2525,\n",
       " 'trouble': 5179,\n",
       " 'meet': 3199,\n",
       " 'best': 699,\n",
       " 'wishes': 5430,\n",
       " 'hearing': 2245,\n",
       " 'attend': 465,\n",
       " 'compete': 1066,\n",
       " 'delight': 1309,\n",
       " 'share': 4434,\n",
       " 'determine': 1342,\n",
       " 'theme': 5040,\n",
       " 'around': 385,\n",
       " 'reading': 4030,\n",
       " 'repite': 4176,\n",
       " 'paragraph': 3567,\n",
       " 'without': 5439,\n",
       " 'thirdly': 5069,\n",
       " 'strage': 4767,\n",
       " 'control': 1197,\n",
       " 'use': 5264,\n",
       " 'idea': 2393,\n",
       " 'fourthly': 1969,\n",
       " 'believe': 660,\n",
       " 'others': 3521,\n",
       " 'persuaded': 3665,\n",
       " 'end': 1528,\n",
       " 'hurry': 2384,\n",
       " 'suppose': 4906,\n",
       " 'message': 3233,\n",
       " 'mail': 3133,\n",
       " 'considered': 1161,\n",
       " 'them': 5039,\n",
       " 'follows': 1910,\n",
       " 'memorized': 3217,\n",
       " 'familiarer': 1747,\n",
       " 'family': 1755,\n",
       " 'big': 712,\n",
       " 'day': 1281,\n",
       " 'useful': 5266,\n",
       " 'choose': 944,\n",
       " 'correctly': 1220,\n",
       " 'though': 5086,\n",
       " 'did': 1354,\n",
       " 'realize': 4035,\n",
       " 'example': 1635,\n",
       " 'sayin': 4286,\n",
       " 'easy': 1460,\n",
       " 'impress': 2467,\n",
       " 'politeness': 3767,\n",
       " 'recieve': 4059,\n",
       " 'sent': 4403,\n",
       " 'ask': 408,\n",
       " 'how': 2359,\n",
       " 'performance': 3641,\n",
       " 'competition': 1076,\n",
       " 'suggetions': 4884,\n",
       " 'following': 1908,\n",
       " 'enough': 1568,\n",
       " 'passage': 3585,\n",
       " 'smoothly': 4557,\n",
       " 'learn': 2912,\n",
       " 'languinge': 2769,\n",
       " 'expression': 1705,\n",
       " 'keep': 2628,\n",
       " 'smiling': 4551,\n",
       " 'finishing': 1837,\n",
       " 'thank': 5021,\n",
       " 'greet': 2154,\n",
       " 'three': 5095,\n",
       " 'points': 3743,\n",
       " 'told': 5135,\n",
       " 'deep': 1296,\n",
       " 'judges': 2622,\n",
       " 'grades': 2132,\n",
       " 'heare': 2242,\n",
       " 'pleased': 3704,\n",
       " 'had': 2176,\n",
       " 'chaix': 897,\n",
       " 'two': 5207,\n",
       " 'years': 5513,\n",
       " 'ago': 226,\n",
       " 'experience': 1681,\n",
       " 'embarrassing': 1501,\n",
       " 'said': 4268,\n",
       " 'lango': 2720,\n",
       " 'our': 3530,\n",
       " 'notic': 3430,\n",
       " 'therefore': 5048,\n",
       " 'act': 86,\n",
       " 'lisen': 3013,\n",
       " 'during': 1438,\n",
       " 'simplify': 4507,\n",
       " 'game': 2034,\n",
       " 'active': 92,\n",
       " 'voice': 5311,\n",
       " 'up': 5256,\n",
       " 'asleep': 414,\n",
       " 'biology': 716,\n",
       " 'teacher': 4981,\n",
       " 'studentsk': 4800,\n",
       " 'questionslx': 3988,\n",
       " 'class': 968,\n",
       " 'play': 3695,\n",
       " 'role': 4240,\n",
       " 'suggested': 4872,\n",
       " 'take': 4943,\n",
       " 'ways': 5351,\n",
       " 'havelx': 2230,\n",
       " 'necessary': 3351,\n",
       " 'assitance': 421,\n",
       " 'comes': 1025,\n",
       " 'basis': 568,\n",
       " 'proprely': 3948,\n",
       " 'langue': 2751,\n",
       " 'nothing': 3429,\n",
       " 'than': 5020,\n",
       " 'other': 3519,\n",
       " 'hand': 2187,\n",
       " 'enjoy': 1559,\n",
       " 'youself': 5563,\n",
       " 'where': 5386,\n",
       " 'belive': 667,\n",
       " 'achieve': 77,\n",
       " 'mast': 3169,\n",
       " 'really': 4037,\n",
       " 'fluent': 1886,\n",
       " 'concerned': 1112,\n",
       " 'languge': 2760,\n",
       " 'munt': 3311,\n",
       " 'politery': 3768,\n",
       " 'bent': 690,\n",
       " 'recieved': 4060,\n",
       " 'email': 1485,\n",
       " 'asked': 410,\n",
       " 'lecture': 2939,\n",
       " 'was': 5339,\n",
       " 'mine': 3254,\n",
       " 'appoint': 361,\n",
       " 'base': 565,\n",
       " 'greatful': 2150,\n",
       " 'bad': 553,\n",
       " 'remembering': 4146,\n",
       " 'suggest': 4871,\n",
       " 'chose': 947,\n",
       " 'third': 5068,\n",
       " 'behavour': 644,\n",
       " 'excellent': 1643,\n",
       " 'suggests': 4882,\n",
       " 'familier': 1748,\n",
       " 'neccessary': 3345,\n",
       " 'andiences': 295,\n",
       " 'bored': 768,\n",
       " 'such': 4851,\n",
       " 'relax': 4106,\n",
       " 'starts': 4731,\n",
       " 'breathe': 791,\n",
       " 'deeply': 1299,\n",
       " 'willing': 5417,\n",
       " 'lend': 2953,\n",
       " 'helping': 2269,\n",
       " 'follow': 1906,\n",
       " 'convenient': 1203,\n",
       " 'paper': 3563,\n",
       " 'want': 5329,\n",
       " 'everyone': 1620,\n",
       " 'stare': 4727,\n",
       " 'embarrsed': 1503,\n",
       " 'own': 3544,\n",
       " 'being': 653,\n",
       " 'opening': 3498,\n",
       " 'outgoing': 3536,\n",
       " 'place': 3687,\n",
       " 'true': 5182,\n",
       " 'hopeful': 2332,\n",
       " 'emailing': 1486,\n",
       " 'att': 445,\n",
       " 'ention': 1578,\n",
       " 'full': 2015,\n",
       " 'pre': 3858,\n",
       " 'parations': 3569,\n",
       " 'less': 2960,\n",
       " 'politly': 3775,\n",
       " 'affect': 192,\n",
       " 'above': 48,\n",
       " 'events': 1608,\n",
       " 'speach': 4613,\n",
       " 'speacher': 4614,\n",
       " 'thing': 5060,\n",
       " 'clearly': 981,\n",
       " 'understand': 5227,\n",
       " 'story': 4764,\n",
       " 'interested': 2542,\n",
       " 'plite': 3712,\n",
       " 'froward': 2008,\n",
       " 'new': 3399,\n",
       " 'delighted': 1310,\n",
       " 'lot': 3075,\n",
       " 'send': 4384,\n",
       " 'noticed': 3432,\n",
       " 'contect': 1176,\n",
       " 'feelings': 1791,\n",
       " 'rish': 4237,\n",
       " 'meanwhile': 3196,\n",
       " 'promote': 3936,\n",
       " 'tolking': 5136,\n",
       " 'hoping': 2338,\n",
       " 'succeed': 4825,\n",
       " 'monday': 3279,\n",
       " 'amo': 281,\n",
       " 'even': 1606,\n",
       " 'little': 3040,\n",
       " 'excited': 1652,\n",
       " 'saw': 4282,\n",
       " 'qr': 3978,\n",
       " 'helps': 2272,\n",
       " 'relaxly': 4112,\n",
       " 'laung': 2884,\n",
       " 'much': 3308,\n",
       " 'err': 1589,\n",
       " 'fantastic': 1764,\n",
       " 'nice': 3406,\n",
       " 'complete': 1084,\n",
       " 'cwont': 1265,\n",
       " 'fow': 1971,\n",
       " 'offer': 3465,\n",
       " 'detail': 1337,\n",
       " 'practised': 3844,\n",
       " 'politily': 3772,\n",
       " 'exple': 1694,\n",
       " 'bye': 817,\n",
       " 'spopen': 4694,\n",
       " 'firmly': 1855,\n",
       " 'advertisments': 173,\n",
       " 'recommed': 4071,\n",
       " 'books': 762,\n",
       " 'saying': 4287,\n",
       " 'goes': 2107,\n",
       " 'book': 760,\n",
       " 'learning': 2915,\n",
       " 'difficult': 1371,\n",
       " 'spoken': 4689,\n",
       " 'opportunity': 3505,\n",
       " 'teach': 4980,\n",
       " 'culture': 1253,\n",
       " 'exercise': 1663,\n",
       " 'recently': 4052,\n",
       " 'hold': 2312,\n",
       " 'contest': 1184,\n",
       " 'took': 5146,\n",
       " 'marks': 3165,\n",
       " 'anxious': 338,\n",
       " 'afritad': 205,\n",
       " 'worn': 5467,\n",
       " 'confidence': 1127,\n",
       " 'too': 5144,\n",
       " 'relaxing': 4111,\n",
       " 'execa': 1659,\n",
       " 'prefect': 3867,\n",
       " 'training': 5166,\n",
       " 'improve': 2477,\n",
       " 'imformations': 2417,\n",
       " 'faimly': 1735,\n",
       " 'behavor': 643,\n",
       " 'whether': 5388,\n",
       " 'job': 2602,\n",
       " 'decide': 1289,\n",
       " 'firal': 1849,\n",
       " 'thus': 5101,\n",
       " 'train': 5165,\n",
       " 'yourbody': 5544,\n",
       " 'minor': 3255,\n",
       " 'escaping': 1592,\n",
       " 'education': 1471,\n",
       " 'escape': 1591,\n",
       " 'forwand': 1951,\n",
       " 'grate': 2140,\n",
       " 'mossenges': 3291,\n",
       " 'from': 2002,\n",
       " 'speaches': 4615,\n",
       " 'bear': 582,\n",
       " 'familiar': 1746,\n",
       " 'confident': 1129,\n",
       " 'sides': 4486,\n",
       " 'bow': 780,\n",
       " 'po': 3725,\n",
       " 'litely': 3034,\n",
       " 'find': 1823,\n",
       " 'recive': 4065,\n",
       " 'asking': 412,\n",
       " 'might': 3249,\n",
       " 'preparations': 3874,\n",
       " 'pract': 3831,\n",
       " 'practical': 3834,\n",
       " 'going': 2108,\n",
       " 'mistakes': 3269,\n",
       " 'languerage': 2758,\n",
       " 'plays': 3699,\n",
       " 'extremely': 1716,\n",
       " 'significant': 4496,\n",
       " 'behaviors': 639,\n",
       " 'till': 5110,\n",
       " 'taking': 4946,\n",
       " 'curtain': 1259,\n",
       " 'call': 825,\n",
       " 'customs': 1262,\n",
       " 'appearing': 357,\n",
       " 'comments': 1040,\n",
       " 'suggestion': 4875,\n",
       " 'dor': 1412,\n",
       " 'lax': 2895,\n",
       " 'project': 3931,\n",
       " 'jokes': 2608,\n",
       " 'yo': 5529,\n",
       " 'actions': 90,\n",
       " 'humorous': 2379,\n",
       " 'become': 605,\n",
       " 'quickly': 3991,\n",
       " 'lister': 3028,\n",
       " 'finit': 1839,\n",
       " 'limpoiht': 3007,\n",
       " 'opptunity': 3506,\n",
       " 'especially': 1594,\n",
       " 'increase': 2487,\n",
       " 'ability': 36,\n",
       " 'courage': 1231,\n",
       " 'frow': 2007,\n",
       " 'resite': 4199,\n",
       " 'artide': 398,\n",
       " 'reduce': 4079,\n",
       " 'anxiety': 334,\n",
       " 'yoursenf': 5558,\n",
       " 'oul': 3529,\n",
       " 'manner': 3158,\n",
       " 'speaker': 4626,\n",
       " 'sincerely': 4512,\n",
       " 'sucessfully': 4850,\n",
       " 'leanr': 2907,\n",
       " 'contiued': 1192,\n",
       " 'bodys': 750,\n",
       " 'means': 3193,\n",
       " 'curse': 1258,\n",
       " 'calm': 826,\n",
       " 'drink': 1430,\n",
       " 'water': 5348,\n",
       " 'start': 4729,\n",
       " 'sound': 4603,\n",
       " 'shout': 4471,\n",
       " 'belived': 668,\n",
       " 'worry': 5472,\n",
       " 'supported': 4904,\n",
       " 'calmer': 828,\n",
       " 'aback': 34,\n",
       " 'slowly': 4539,\n",
       " 'listener': 3021,\n",
       " 'impolit': 2429,\n",
       " 'home': 2319,\n",
       " 'extremly': 1717,\n",
       " 'recieveing': 4061,\n",
       " 'firsty': 1861,\n",
       " 'con': 1102,\n",
       " 'beautiful': 589,\n",
       " 'addtionally': 124,\n",
       " 'known': 2659,\n",
       " 'afraid': 199,\n",
       " 'sme': 4545,\n",
       " 'smile': 4548,\n",
       " 'able': 37,\n",
       " 'coontinued': 1217,\n",
       " 'method': 3241,\n",
       " 'positive': 3803,\n",
       " 'self': 4370,\n",
       " 'aside': 407,\n",
       " 'every': 1616,\n",
       " 'morning': 3288,\n",
       " 'practising': 3846,\n",
       " 'lucky': 3117,\n",
       " 'spend': 4679,\n",
       " 'preprations': 3889,\n",
       " 'launghe': 2887,\n",
       " 'face': 1724,\n",
       " 'cto': 1249,\n",
       " 'thousands': 5093,\n",
       " 'please': 3703,\n",
       " 'clam': 962,\n",
       " 'action': 88,\n",
       " 'goodbye': 2117,\n",
       " 'memories': 3213,\n",
       " 'teachers': 4982,\n",
       " 'we': 5352,\n",
       " 'receiving': 4051,\n",
       " 'details': 1338,\n",
       " 'givi': 2089,\n",
       " 'vital': 5308,\n",
       " 'draw': 1422,\n",
       " 'additionally': 119,\n",
       " 'dress': 1427,\n",
       " 'enable': 1520,\n",
       " 'look': 3062,\n",
       " 'generous': 2050,\n",
       " 'suc': 4822,\n",
       " 'cessful': 889,\n",
       " 'early': 1450,\n",
       " 'yet': 5522,\n",
       " 'gestures': 2066,\n",
       " 'pretty': 3903,\n",
       " 'coming': 1036,\n",
       " 'fighting': 1810,\n",
       " 'pleasure': 3705,\n",
       " 'compel': 1057,\n",
       " 'completely': 1086,\n",
       " 'basise': 569,\n",
       " 'catch': 863,\n",
       " 'sight': 4488,\n",
       " 'brave': 787,\n",
       " 'properly': 3942,\n",
       " 'ward': 5333,\n",
       " 'massage': 3167,\n",
       " 'regard': 4090,\n",
       " 'wednesday': 5358,\n",
       " 'constructive': 1168,\n",
       " 'delivered': 1318,\n",
       " 'today': 5131,\n",
       " 'clear': 979,\n",
       " 'front': 2004,\n",
       " 'stick': 4750,\n",
       " 'englike': 1549,\n",
       " 'however': 2360,\n",
       " 'xif': 5499,\n",
       " 'lower': 3103,\n",
       " 'usual': 5275,\n",
       " 'usax': 5263,\n",
       " 'beneficial': 683,\n",
       " 'plox': 3718,\n",
       " 'friendly': 1991,\n",
       " 'hello': 2262,\n",
       " '88888888': 27,\n",
       " '13407369803': 6,\n",
       " 'recivew': 4068,\n",
       " 'ready': 4032,\n",
       " 'spech': 4641,\n",
       " 'smothly': 4558,\n",
       " 'funtastic': 2021,\n",
       " 'langurages': 2773,\n",
       " 'thougt': 5089,\n",
       " 'grateful': 2141,\n",
       " 'matter': 3175,\n",
       " 'result': 4220,\n",
       " 'always': 274,\n",
       " 'back': 548,\n",
       " 'begining': 624,\n",
       " 'introduce': 2559,\n",
       " 'order': 3511,\n",
       " 'their': 5036,\n",
       " 'eyes': 1721,\n",
       " 'bring': 797,\n",
       " 'interest': 2540,\n",
       " 'beqin': 691,\n",
       " 'duty': 1440,\n",
       " 'text': 5015,\n",
       " 'feeling': 1789,\n",
       " 'lanquage': 2789,\n",
       " 'target': 4972,\n",
       " 'sleepy': 4533,\n",
       " 'youre': 5546,\n",
       " 'unless': 5248,\n",
       " 'constractive': 1167,\n",
       " 'aways': 540,\n",
       " 'lauguas': 2866,\n",
       " 'off': 3464,\n",
       " 'denying': 1325,\n",
       " 'prectise': 3865,\n",
       " 'old': 3479,\n",
       " 'sayingwhere': 4289,\n",
       " 'everthing': 1615,\n",
       " 'hews': 2288,\n",
       " 'senio': 4389,\n",
       " 'mirror': 3258,\n",
       " 'free': 1976,\n",
       " 'indispensable': 2491,\n",
       " 'achive': 79,\n",
       " 'lan': 2692,\n",
       " 'guages': 2165,\n",
       " 'instead': 2529,\n",
       " 'longuages': 3057,\n",
       " 'listenning': 3027,\n",
       " 'qudiences': 3984,\n",
       " 'lectures': 2941,\n",
       " 'still': 4752,\n",
       " 'chosen': 948,\n",
       " 'ase': 403,\n",
       " 'seconed': 4347,\n",
       " 'lauguage': 2864,\n",
       " 'postive': 3813,\n",
       " 'oft': 3470,\n",
       " 'anymore': 342,\n",
       " 'diliver': 1380,\n",
       " 'yow': 5567,\n",
       " 'similiar': 4502,\n",
       " 'speeth': 4675,\n",
       " 'maybe': 3179,\n",
       " 'write': 5482,\n",
       " 'loudly': 3081,\n",
       " 'acts': 109,\n",
       " 'although': 269,\n",
       " 'step': 4746,\n",
       " 'asks': 413,\n",
       " 'problem': 3914,\n",
       " 'word': 5456,\n",
       " 'speeoh': 4673,\n",
       " 'lots': 3077,\n",
       " 'anxions': 336,\n",
       " 'earily': 1447,\n",
       " 'impor': 2436,\n",
       " 'tant': 4966,\n",
       " 'state': 4733,\n",
       " 'admine': 146,\n",
       " 'wigh': 5413,\n",
       " 'longuage': 3056,\n",
       " 'addition': 118,\n",
       " 'person': 3658,\n",
       " 'delivers': 1320,\n",
       " 'anything': 344,\n",
       " 'sespeekn': 4421,\n",
       " 'report': 4183,\n",
       " 'neccery': 3342,\n",
       " 'howevery': 2361,\n",
       " 'excite': 1651,\n",
       " 'untill': 5254,\n",
       " 'punish': 3970,\n",
       " 'speekn': 4668,\n",
       " 'impotant': 2465,\n",
       " 'abody': 41,\n",
       " 'him': 2300,\n",
       " 'speek': 4665,\n",
       " 'rase': 4019,\n",
       " 'wit': 5434,\n",
       " 'impossible': 2463,\n",
       " 'engish': 1545,\n",
       " 'disaporint': 1385,\n",
       " 'accter': 72,\n",
       " 'speakings': 4633,\n",
       " 'flee': 1876,\n",
       " 'afrid': 203,\n",
       " 'believed': 662,\n",
       " 'see': 4357,\n",
       " 'hands': 2190,\n",
       " 'us': 5262,\n",
       " 'fell': 1795,\n",
       " 'intertted': 2552,\n",
       " 'goodby': 2116,\n",
       " 'gift': 2078,\n",
       " 'china': 933,\n",
       " 'forword': 1956,\n",
       " 'suceess': 4846,\n",
       " 'famior': 1756,\n",
       " 'proce': 3916,\n",
       " 'manners': 3159,\n",
       " 'ends': 1537,\n",
       " 'overcome': 3542,\n",
       " 'opport': 3503,\n",
       " 'charm': 912,\n",
       " 'partipate': 3581,\n",
       " 'oh': 3474,\n",
       " 'jervy': 2595,\n",
       " 'meeting': 3200,\n",
       " 'maines': 3136,\n",
       " 'impro': 2473,\n",
       " 'languige': 2767,\n",
       " 'activity': 106,\n",
       " 'intersting': 2550,\n",
       " 'having': 2231,\n",
       " 'storyes': 4765,\n",
       " 'gave': 2044,\n",
       " 'quitions': 4006,\n",
       " 'finaly': 1822,\n",
       " 'welcome': 5364,\n",
       " 'dad': 1267,\n",
       " 'join': 2604,\n",
       " 'lether': 2972,\n",
       " 'few': 1802,\n",
       " 'practis': 3842,\n",
       " 'signifcted': 4492,\n",
       " 'wand': 5326,\n",
       " 'polited': 3764,\n",
       " 'sene': 4386,\n",
       " 'laugage': 2822,\n",
       " 'winriy': 5423,\n",
       " 'persent': 3656,\n",
       " 'sussestions': 4924,\n",
       " 'expeted': 1690,\n",
       " 'preparing': 3879,\n",
       " 'relief': 4114,\n",
       " 'smiler': 4550,\n",
       " 'support': 4903,\n",
       " 'forever': 1926,\n",
       " 'dea': 1284,\n",
       " 'heared': 2243,\n",
       " 'count': 1228,\n",
       " 'ignove': 2406,\n",
       " 'langrage': 2724,\n",
       " 'impolty': 2435,\n",
       " 'sybord': 4933,\n",
       " 'fourth': 1968,\n",
       " 'anvety': 322,\n",
       " 'improtened': 2475,\n",
       " 'exciuse': 1656,\n",
       " 'arm': 383,\n",
       " 'legs': 2950,\n",
       " 'ok': 3477,\n",
       " 'greads': 2146,\n",
       " 'test': 5013,\n",
       " 'practid': 3840,\n",
       " 'agaim': 214,\n",
       " 'mixtures': 3272,\n",
       " 'touch': 5155,\n",
       " 'nevours': 3396,\n",
       " 'imaged': 2410,\n",
       " 'friend': 1990,\n",
       " 'probelms': 3912,\n",
       " 'felt': 1798,\n",
       " 'sppech': 4698,\n",
       " 'suued': 4925,\n",
       " 'fished': 1864,\n",
       " 'sppeech': 4699,\n",
       " 'rave': 4022,\n",
       " 'blow': 719,\n",
       " 'thare': 5028,\n",
       " 'activities': 102,\n",
       " 'long': 3054,\n",
       " 'remained': 4128,\n",
       " 'miss': 3261,\n",
       " 'evey': 1627,\n",
       " 'belong': 675,\n",
       " 'win': 5418,\n",
       " 'struggle': 4784,\n",
       " 'anviety': 323,\n",
       " 'worried': 5470,\n",
       " 'usually': 5276,\n",
       " 'whe': 5380,\n",
       " 'attended': 466,\n",
       " 'lastly': 2807,\n",
       " 'diffcult': 1360,\n",
       " 'aim': 237,\n",
       " 'eat': 1462,\n",
       " 'cold': 1013,\n",
       " 'food': 1916,\n",
       " 'health': 2237,\n",
       " 'name': 3323,\n",
       " 'question': 3986,\n",
       " 'develop': 1346,\n",
       " 'voculastion': 5310,\n",
       " 'jewy': 2597,\n",
       " 'sponsibility': 4693,\n",
       " 'walking': 5325,\n",
       " 'four': 1964,\n",
       " 'sation': 4277,\n",
       " 'pealsure': 3605,\n",
       " 'purpose': 3973,\n",
       " 'wouldfor': 5478,\n",
       " 'uses': 5270,\n",
       " 'funnly': 2019,\n",
       " 'attrat': 507,\n",
       " 'thirtly': 5072,\n",
       " 'address': 120,\n",
       " 'same': 4271,\n",
       " 'prapre': 3851,\n",
       " 'already': 259,\n",
       " 'remembered': 4145,\n",
       " 'avoid': 535,\n",
       " 'forgetting': 1934,\n",
       " 'deeper': 1297,\n",
       " 'lanuage': 2790,\n",
       " 'set': 4422,\n",
       " 'judgements': 2621,\n",
       " 'fett': 1801,\n",
       " 'nature': 3330,\n",
       " 'begaining': 619,\n",
       " 'pression': 3900,\n",
       " 'wait': 5318,\n",
       " 'forst': 1945,\n",
       " 'prise': 3906,\n",
       " 'firsly': 1858,\n",
       " 'quikely': 3998,\n",
       " 'careful': 851,\n",
       " 'tired': 5123,\n",
       " 'head': 2235,\n",
       " 'bodylaugde': 745,\n",
       " ...}"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "vocabulary"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "b70981de",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "5584"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(vocabulary)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "98e034f2",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'\\n#把下载好的文件转换成dict的格式并保存至pickle，只挑选我们文章中有的词汇\\nfrom datetime import datetime\\nimport pickle\\nt0 = datetime.now()\\nw_v = {}\\nwith open(\"glove.6B.100d.txt\",encoding=\"utf-8\") as f:\\n    lines = f.readlines()\\n    for line in lines:\\n        word = line.split(\" \")[0]\\n        if word in vocabulary.keys():\\n            v = line.split()[1:-1]\\n            w_v[word] = v\\n        \\n# def return_vector(essay):\\n#     {}\\n    \\n# lines[:10]\\n#print(w_v)\\nwith open(\"glove_abb2word.6B.100d.pkl\", \"wb\") as fp:   #Pickling\\n    pickle.dump(w_v, fp, protocol = pickle.HIGHEST_PROTOCOL)\\n#w_v.to_pickle(\\'glove.6B.100d.pkl\\')\\nt1 = datetime.now()\\nprint(\\'Processing time: {}\\'.format(t1 - t0))\\n'"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "'''\n",
    "#把下载好的文件转换成dict的格式并保存至pickle，只挑选我们文章中有的词汇\n",
    "from datetime import datetime\n",
    "import pickle\n",
    "t0 = datetime.now()\n",
    "w_v = {}\n",
    "with open(\"glove.6B.100d.txt\",encoding=\"utf-8\") as f:\n",
    "    lines = f.readlines()\n",
    "    for line in lines:\n",
    "        word = line.split(\" \")[0]\n",
    "        if word in vocabulary.keys():\n",
    "            v = line.split()[1:-1]\n",
    "            w_v[word] = v\n",
    "        \n",
    "# def return_vector(essay):\n",
    "#     {}\n",
    "    \n",
    "# lines[:10]\n",
    "#print(w_v)\n",
    "with open(\"glove_abb2word.6B.100d.pkl\", \"wb\") as fp:   #Pickling\n",
    "    pickle.dump(w_v, fp, protocol = pickle.HIGHEST_PROTOCOL)\n",
    "#w_v.to_pickle('glove.6B.100d.pkl')\n",
    "t1 = datetime.now()\n",
    "print('Processing time: {}'.format(t1 - t0))\n",
    "'''"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "f9f0fc15",
   "metadata": {},
   "outputs": [],
   "source": [
    "#加载原来已经处理好的词汇向量文件，这个里面只保留了我们的文章里有的词汇\n",
    "import pickle\n",
    "with open(\"glove_abb2word.6B.100d.pkl\", \"rb\") as fp:   #Pickling\n",
    "    w_v_dict = pickle.load(fp)  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "e58d6c70",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "2832"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(w_v_dict)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "554576a1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 输入文章，得出文章矩阵，如果文章中的词汇在这里，那么就返回词汇向量，如果没有就不就散了\n",
    "def get_glove_100_vec(essay,w_v_dict):\n",
    "    essay = essay.lower()\n",
    "    essay_list = essay.split()     # 将句子(英文)通过空格分割成由单词组成的list\n",
    "    essay_vec = []    # 存储向量的矩阵\n",
    "    for e in essay_list:     # 遍历所有单词，返回每个单词的向量\n",
    "        #e = word_del_punctuation(e)   # str处理，去除标点符号\n",
    "        if e in w_v_dict:\n",
    "            vector = [float(num) for num in w_v_dict[e]]\n",
    "            essay_vec.append(vector)\n",
    "        #else:\n",
    "            #vector = np.zeros((1,100))\n",
    "            #essay_vec.append(vector)\n",
    "            #print(type(w_v_dict[e]))\n",
    "    essay_vec = pd.DataFrame(essay_vec)  # 转成DataFrame格式，方便求句子的向量平均值\n",
    "    if len(essay_list) > 0:\n",
    "        return (essay_vec.sum() / len(essay_list)).tolist() # 这是一个essay地向量，总共有100维\n",
    "    else:\n",
    "        print('Error')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "2c3b08be",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[-0.13553795750000003,\n",
       " 0.2588367875,\n",
       " 0.28515293750000004,\n",
       " -0.28514219412499997,\n",
       " -0.24403648875,\n",
       " 0.1680435965,\n",
       " -0.21678215,\n",
       " 0.17170942375,\n",
       " 0.047616775000000014,\n",
       " -0.11989843750000002,\n",
       " 0.09236617500000002,\n",
       " 0.0744047625,\n",
       " 0.25159748750000005,\n",
       " 0.021821608749999985,\n",
       " 0.010148949999999997,\n",
       " -0.29882947499999996,\n",
       " 0.13542767749999998,\n",
       " 0.22289398750000006,\n",
       " -0.52089635,\n",
       " 0.3087066124999999,\n",
       " 0.1637944475,\n",
       " -0.0014466249999999972,\n",
       " 0.07201395,\n",
       " -0.08138825000000001,\n",
       " 0.03843543749999998,\n",
       " 0.0485547625,\n",
       " -0.10619462500000001,\n",
       " -0.46362687499999994,\n",
       " 0.20303066250000001,\n",
       " -0.22164630000000002,\n",
       " -0.12076567499999999,\n",
       " 0.5134347374999999,\n",
       " 0.024856977500000002,\n",
       " 0.056421475,\n",
       " 0.05366742500000001,\n",
       " 0.3002825375,\n",
       " 0.009404434999999992,\n",
       " 0.23738604000000002,\n",
       " 0.07527945,\n",
       " -0.24039925,\n",
       " -0.33163774,\n",
       " -0.196838675,\n",
       " -0.055453365,\n",
       " -0.44615637500000005,\n",
       " -0.266930575,\n",
       " -0.017439337499999995,\n",
       " 0.10234493999999998,\n",
       " -0.28087398750000003,\n",
       " -0.08144695500000002,\n",
       " -0.7269315,\n",
       " 0.018454335,\n",
       " -0.050946072499999995,\n",
       " 0.08973999999999999,\n",
       " 0.7717983875000001,\n",
       " -0.09101255,\n",
       " -1.6900016249999996,\n",
       " 0.08836920000000001,\n",
       " -0.0829239,\n",
       " 1.1189403625,\n",
       " 0.3578228,\n",
       " -0.13817485,\n",
       " 0.686790625,\n",
       " -0.3096133,\n",
       " 0.004713888750000008,\n",
       " 0.594601,\n",
       " 0.046298294999999996,\n",
       " 0.43347805,\n",
       " 0.43223481625,\n",
       " 0.011203537500000008,\n",
       " -0.258508,\n",
       " 0.017927762499999996,\n",
       " -0.30284685,\n",
       " -0.009187197499999985,\n",
       " -0.423449125,\n",
       " 0.05166727050000001,\n",
       " 0.0649401875,\n",
       " -0.06163798749999999,\n",
       " -0.026522127500000003,\n",
       " -0.49753662125,\n",
       " -0.09064547499999999,\n",
       " 0.4867069375,\n",
       " -0.07913693749999999,\n",
       " -0.4806108124999999,\n",
       " -0.023537517499999997,\n",
       " -1.1863540875,\n",
       " -0.15163269249999997,\n",
       " 0.05145605000000002,\n",
       " -0.02052066250000001,\n",
       " -0.1810807375,\n",
       " -0.27362995,\n",
       " -0.049032025,\n",
       " -0.11469637000000002,\n",
       " 0.016267272499999996,\n",
       " -0.12841138625,\n",
       " -0.44904210000000006,\n",
       " -0.08376680875,\n",
       " -0.12675697500000002,\n",
       " -0.3547508375,\n",
       " 0.3153238625]"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "#测试词向量特征\n",
    "example = \"Dear Jerry I am Li Hua. I know, you will to attend an English test. I think I can give you some advise. Such as, you should to know what you will to say. And when you say how you should to do. And you should to know how to leave. Yes, there are my advise. I think they are useful. And I think they can give you some help. So I say they in their. ' Yours, Li Hua\"\n",
    "get_glove_100_vec(example,w_v_dict)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "e3909571",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "已处理1篇\n",
      "已处理2篇\n",
      "已处理3篇\n",
      "已处理4篇\n",
      "已处理5篇\n",
      "已处理6篇\n",
      "已处理7篇\n",
      "已处理8篇\n",
      "已处理9篇\n",
      "已处理10篇\n",
      "已处理11篇\n",
      "已处理12篇\n",
      "已处理13篇\n",
      "已处理14篇\n",
      "已处理15篇\n",
      "已处理16篇\n",
      "已处理17篇\n",
      "已处理18篇\n",
      "已处理19篇\n",
      "已处理20篇\n",
      "已处理21篇\n",
      "已处理22篇\n",
      "已处理23篇\n",
      "已处理24篇\n",
      "已处理25篇\n",
      "已处理26篇\n",
      "已处理27篇\n",
      "已处理28篇\n",
      "已处理29篇\n",
      "已处理30篇\n",
      "已处理31篇\n",
      "已处理32篇\n",
      "已处理33篇\n",
      "已处理34篇\n",
      "已处理35篇\n",
      "已处理36篇\n",
      "已处理37篇\n",
      "已处理38篇\n",
      "已处理39篇\n",
      "已处理40篇\n",
      "已处理41篇\n",
      "已处理42篇\n",
      "已处理43篇\n",
      "已处理44篇\n",
      "已处理45篇\n",
      "已处理46篇\n",
      "已处理47篇\n",
      "已处理48篇\n",
      "已处理49篇\n",
      "已处理50篇\n",
      "已处理51篇\n",
      "已处理52篇\n",
      "已处理53篇\n",
      "已处理54篇\n",
      "已处理55篇\n",
      "已处理56篇\n",
      "已处理57篇\n",
      "已处理58篇\n",
      "已处理59篇\n",
      "已处理60篇\n",
      "已处理61篇\n",
      "已处理62篇\n",
      "已处理63篇\n",
      "已处理64篇\n",
      "已处理65篇\n",
      "已处理66篇\n",
      "已处理67篇\n",
      "已处理68篇\n",
      "已处理69篇\n",
      "已处理70篇\n",
      "已处理71篇\n",
      "已处理72篇\n",
      "已处理73篇\n",
      "已处理74篇\n",
      "已处理75篇\n",
      "已处理76篇\n",
      "已处理77篇\n",
      "已处理78篇\n",
      "已处理79篇\n",
      "已处理80篇\n",
      "已处理81篇\n",
      "已处理82篇\n",
      "已处理83篇\n",
      "已处理84篇\n",
      "已处理85篇\n",
      "已处理86篇\n",
      "已处理87篇\n",
      "已处理88篇\n",
      "已处理89篇\n",
      "已处理90篇\n",
      "已处理91篇\n",
      "已处理92篇\n",
      "已处理93篇\n",
      "已处理94篇\n",
      "已处理95篇\n",
      "已处理96篇\n",
      "已处理97篇\n",
      "已处理98篇\n",
      "已处理99篇\n",
      "已处理100篇\n",
      "已处理101篇\n",
      "已处理102篇\n",
      "已处理103篇\n",
      "已处理104篇\n",
      "已处理105篇\n",
      "已处理106篇\n",
      "已处理107篇\n",
      "已处理108篇\n",
      "已处理109篇\n",
      "已处理110篇\n",
      "已处理111篇\n",
      "已处理112篇\n",
      "已处理113篇\n",
      "已处理114篇\n",
      "已处理115篇\n",
      "已处理116篇\n",
      "已处理117篇\n",
      "已处理118篇\n",
      "已处理119篇\n",
      "已处理120篇\n",
      "已处理121篇\n",
      "已处理122篇\n",
      "已处理123篇\n",
      "已处理124篇\n",
      "已处理125篇\n",
      "已处理126篇\n",
      "已处理127篇\n",
      "已处理128篇\n",
      "已处理129篇\n",
      "已处理130篇\n",
      "已处理131篇\n",
      "已处理132篇\n",
      "已处理133篇\n",
      "已处理134篇\n",
      "已处理135篇\n",
      "已处理136篇\n",
      "已处理137篇\n",
      "已处理138篇\n",
      "已处理139篇\n",
      "已处理140篇\n",
      "已处理141篇\n",
      "已处理142篇\n",
      "已处理143篇\n",
      "已处理144篇\n",
      "已处理145篇\n",
      "已处理146篇\n",
      "已处理147篇\n",
      "已处理148篇\n",
      "已处理149篇\n",
      "已处理150篇\n",
      "已处理151篇\n",
      "已处理152篇\n",
      "已处理153篇\n",
      "已处理154篇\n",
      "已处理155篇\n",
      "已处理156篇\n",
      "已处理157篇\n",
      "已处理158篇\n",
      "已处理159篇\n",
      "已处理160篇\n",
      "已处理161篇\n",
      "已处理162篇\n",
      "已处理163篇\n",
      "已处理164篇\n",
      "已处理165篇\n",
      "已处理166篇\n",
      "已处理167篇\n",
      "已处理168篇\n",
      "已处理169篇\n",
      "已处理170篇\n",
      "已处理171篇\n",
      "已处理172篇\n",
      "已处理173篇\n",
      "已处理174篇\n",
      "已处理175篇\n",
      "已处理176篇\n",
      "已处理177篇\n",
      "已处理178篇\n",
      "已处理179篇\n",
      "已处理180篇\n",
      "已处理181篇\n",
      "已处理182篇\n",
      "已处理183篇\n",
      "已处理184篇\n",
      "已处理185篇\n",
      "已处理186篇\n",
      "已处理187篇\n",
      "已处理188篇\n",
      "已处理189篇\n",
      "已处理190篇\n",
      "已处理191篇\n",
      "已处理192篇\n",
      "已处理193篇\n",
      "已处理194篇\n",
      "已处理195篇\n",
      "已处理196篇\n",
      "已处理197篇\n",
      "已处理198篇\n",
      "已处理199篇\n",
      "已处理200篇\n",
      "已处理201篇\n",
      "已处理202篇\n",
      "已处理203篇\n",
      "已处理204篇\n",
      "已处理205篇\n",
      "已处理206篇\n",
      "已处理207篇\n",
      "已处理208篇\n",
      "已处理209篇\n",
      "已处理210篇\n",
      "已处理211篇\n",
      "已处理212篇\n",
      "已处理213篇\n",
      "已处理214篇\n",
      "已处理215篇\n",
      "已处理216篇\n",
      "已处理217篇\n",
      "已处理218篇\n",
      "已处理219篇\n",
      "已处理220篇\n",
      "已处理221篇\n",
      "已处理222篇\n",
      "已处理223篇\n",
      "已处理224篇\n",
      "已处理225篇\n",
      "已处理226篇\n",
      "已处理227篇\n",
      "已处理228篇\n",
      "已处理229篇\n",
      "已处理230篇\n",
      "已处理231篇\n",
      "已处理232篇\n",
      "已处理233篇\n",
      "已处理234篇\n",
      "已处理235篇\n",
      "已处理236篇\n",
      "已处理237篇\n",
      "已处理238篇\n",
      "已处理239篇\n",
      "已处理240篇\n",
      "已处理241篇\n",
      "已处理242篇\n",
      "已处理243篇\n",
      "已处理244篇\n",
      "已处理245篇\n",
      "已处理246篇\n",
      "已处理247篇\n",
      "已处理248篇\n",
      "已处理249篇\n",
      "已处理250篇\n",
      "已处理251篇\n",
      "已处理252篇\n",
      "已处理253篇\n",
      "已处理254篇\n",
      "已处理255篇\n",
      "已处理256篇\n",
      "已处理257篇\n",
      "已处理258篇\n",
      "已处理259篇\n",
      "已处理260篇\n",
      "已处理261篇\n",
      "已处理262篇\n",
      "已处理263篇\n",
      "已处理264篇\n",
      "已处理265篇\n",
      "已处理266篇\n",
      "已处理267篇\n",
      "已处理268篇\n",
      "已处理269篇\n",
      "已处理270篇\n",
      "已处理271篇\n",
      "已处理272篇\n",
      "已处理273篇\n",
      "已处理274篇\n",
      "已处理275篇\n",
      "已处理276篇\n",
      "已处理277篇\n",
      "已处理278篇\n",
      "已处理279篇\n",
      "已处理280篇\n",
      "已处理281篇\n",
      "已处理282篇\n",
      "已处理283篇\n",
      "已处理284篇\n",
      "已处理285篇\n",
      "已处理286篇\n",
      "已处理287篇\n",
      "已处理288篇\n",
      "已处理289篇\n",
      "已处理290篇\n",
      "已处理291篇\n",
      "已处理292篇\n",
      "已处理293篇\n",
      "已处理294篇\n",
      "已处理295篇\n",
      "已处理296篇\n",
      "已处理297篇\n",
      "已处理298篇\n",
      "已处理299篇\n",
      "已处理300篇\n",
      "已处理301篇\n",
      "已处理302篇\n",
      "已处理303篇\n",
      "已处理304篇\n",
      "已处理305篇\n",
      "已处理306篇\n",
      "已处理307篇\n",
      "已处理308篇\n",
      "已处理309篇\n",
      "已处理310篇\n",
      "已处理311篇\n",
      "已处理312篇\n",
      "已处理313篇\n",
      "已处理314篇\n",
      "已处理315篇\n",
      "已处理316篇\n",
      "已处理317篇\n",
      "已处理318篇\n",
      "已处理319篇\n",
      "已处理320篇\n",
      "已处理321篇\n",
      "已处理322篇\n",
      "已处理323篇\n",
      "已处理324篇\n",
      "已处理325篇\n",
      "已处理326篇\n",
      "已处理327篇\n",
      "已处理328篇\n",
      "已处理329篇\n",
      "已处理330篇\n",
      "已处理331篇\n",
      "已处理332篇\n",
      "已处理333篇\n",
      "已处理334篇\n",
      "已处理335篇\n",
      "已处理336篇\n",
      "已处理337篇\n",
      "已处理338篇\n",
      "已处理339篇\n",
      "已处理340篇\n",
      "已处理341篇\n",
      "已处理342篇\n",
      "已处理343篇\n",
      "已处理344篇\n",
      "已处理345篇\n",
      "已处理346篇\n",
      "已处理347篇\n",
      "已处理348篇\n",
      "已处理349篇\n",
      "已处理350篇\n",
      "已处理351篇\n",
      "已处理352篇\n",
      "已处理353篇\n",
      "已处理354篇\n",
      "已处理355篇\n",
      "已处理356篇\n",
      "已处理357篇\n",
      "已处理358篇\n",
      "已处理359篇\n",
      "已处理360篇\n",
      "已处理361篇\n",
      "已处理362篇\n",
      "已处理363篇\n",
      "已处理364篇\n",
      "已处理365篇\n",
      "已处理366篇\n",
      "已处理367篇\n",
      "已处理368篇\n",
      "已处理369篇\n",
      "已处理370篇\n",
      "已处理371篇\n",
      "已处理372篇\n",
      "已处理373篇\n",
      "已处理374篇\n",
      "已处理375篇\n",
      "已处理376篇\n",
      "已处理377篇\n",
      "已处理378篇\n",
      "已处理379篇\n",
      "已处理380篇\n",
      "已处理381篇\n",
      "已处理382篇\n",
      "已处理383篇\n",
      "已处理384篇\n",
      "已处理385篇\n",
      "已处理386篇\n",
      "已处理387篇\n",
      "已处理388篇\n",
      "已处理389篇\n",
      "已处理390篇\n",
      "已处理391篇\n",
      "已处理392篇\n",
      "已处理393篇\n",
      "已处理394篇\n",
      "已处理395篇\n",
      "已处理396篇\n",
      "已处理397篇\n",
      "已处理398篇\n",
      "已处理399篇\n",
      "已处理400篇\n",
      "已处理401篇\n",
      "已处理402篇\n",
      "已处理403篇\n",
      "已处理404篇\n",
      "已处理405篇\n",
      "已处理406篇\n",
      "已处理407篇\n",
      "已处理408篇\n",
      "已处理409篇\n",
      "已处理410篇\n",
      "已处理411篇\n",
      "已处理412篇\n",
      "已处理413篇\n",
      "已处理414篇\n",
      "已处理415篇\n",
      "已处理416篇\n",
      "已处理417篇\n",
      "已处理418篇\n",
      "已处理419篇\n",
      "已处理420篇\n",
      "已处理421篇\n",
      "已处理422篇\n",
      "已处理423篇\n",
      "已处理424篇\n",
      "已处理425篇\n",
      "已处理426篇\n",
      "已处理427篇\n",
      "已处理428篇\n",
      "已处理429篇\n",
      "已处理430篇\n",
      "已处理431篇\n",
      "已处理432篇\n",
      "已处理433篇\n",
      "已处理434篇\n",
      "已处理435篇\n",
      "已处理436篇\n",
      "已处理437篇\n",
      "已处理438篇\n",
      "已处理439篇\n",
      "已处理440篇\n",
      "已处理441篇\n",
      "已处理442篇\n",
      "已处理443篇\n",
      "已处理444篇\n",
      "已处理445篇\n",
      "已处理446篇\n",
      "已处理447篇\n",
      "已处理448篇\n",
      "已处理449篇\n",
      "已处理450篇\n",
      "已处理451篇\n",
      "已处理452篇\n",
      "已处理453篇\n",
      "已处理454篇\n",
      "已处理455篇\n",
      "已处理456篇\n",
      "已处理457篇\n",
      "已处理458篇\n",
      "已处理459篇\n",
      "已处理460篇\n",
      "已处理461篇\n",
      "已处理462篇\n",
      "已处理463篇\n",
      "已处理464篇\n",
      "已处理465篇\n",
      "已处理466篇\n",
      "已处理467篇\n",
      "已处理468篇\n",
      "已处理469篇\n",
      "已处理470篇\n",
      "已处理471篇\n",
      "已处理472篇\n",
      "已处理473篇\n",
      "已处理474篇\n",
      "已处理475篇\n",
      "已处理476篇\n",
      "已处理477篇\n",
      "已处理478篇\n",
      "已处理479篇\n",
      "已处理480篇\n",
      "已处理481篇\n",
      "已处理482篇\n",
      "已处理483篇\n",
      "已处理484篇\n",
      "已处理485篇\n",
      "已处理486篇\n",
      "已处理487篇\n",
      "已处理488篇\n",
      "已处理489篇\n",
      "已处理490篇\n",
      "已处理491篇\n",
      "已处理492篇\n",
      "已处理493篇\n",
      "已处理494篇\n",
      "已处理495篇\n",
      "已处理496篇\n",
      "已处理497篇\n",
      "已处理498篇\n",
      "已处理499篇\n",
      "已处理500篇\n",
      "已处理501篇\n",
      "已处理502篇\n",
      "已处理503篇\n",
      "已处理504篇\n",
      "已处理505篇\n",
      "已处理506篇\n",
      "已处理507篇\n",
      "已处理508篇\n",
      "已处理509篇\n",
      "已处理510篇\n",
      "已处理511篇\n",
      "已处理512篇\n",
      "已处理513篇\n",
      "已处理514篇\n",
      "已处理515篇\n",
      "已处理516篇\n",
      "已处理517篇\n",
      "已处理518篇\n",
      "已处理519篇\n",
      "已处理520篇\n",
      "已处理521篇\n",
      "已处理522篇\n",
      "已处理523篇\n",
      "已处理524篇\n",
      "已处理525篇\n",
      "已处理526篇\n",
      "已处理527篇\n",
      "已处理528篇\n",
      "已处理529篇\n",
      "已处理530篇\n",
      "已处理531篇\n",
      "已处理532篇\n",
      "已处理533篇\n",
      "已处理534篇\n",
      "已处理535篇\n",
      "已处理536篇\n",
      "已处理537篇\n",
      "已处理538篇\n",
      "已处理539篇\n",
      "已处理540篇\n",
      "已处理541篇\n",
      "已处理542篇\n",
      "已处理543篇\n",
      "已处理544篇\n",
      "已处理545篇\n",
      "已处理546篇\n",
      "已处理547篇\n",
      "已处理548篇\n",
      "已处理549篇\n",
      "已处理550篇\n",
      "已处理551篇\n",
      "已处理552篇\n",
      "已处理553篇\n",
      "已处理554篇\n",
      "已处理555篇\n",
      "已处理556篇\n",
      "已处理557篇\n",
      "已处理558篇\n",
      "已处理559篇\n",
      "已处理560篇\n",
      "已处理561篇\n",
      "已处理562篇\n",
      "已处理563篇\n",
      "已处理564篇\n",
      "已处理565篇\n",
      "已处理566篇\n",
      "已处理567篇\n",
      "已处理568篇\n",
      "已处理569篇\n",
      "已处理570篇\n",
      "已处理571篇\n",
      "已处理572篇\n",
      "已处理573篇\n",
      "已处理574篇\n",
      "已处理575篇\n",
      "已处理576篇\n",
      "已处理577篇\n",
      "已处理578篇\n",
      "已处理579篇\n",
      "已处理580篇\n",
      "已处理581篇\n",
      "已处理582篇\n",
      "已处理583篇\n",
      "已处理584篇\n",
      "已处理585篇\n",
      "已处理586篇\n",
      "已处理587篇\n",
      "已处理588篇\n",
      "已处理589篇\n",
      "已处理590篇\n",
      "已处理591篇\n",
      "已处理592篇\n",
      "已处理593篇\n",
      "已处理594篇\n",
      "已处理595篇\n",
      "已处理596篇\n",
      "已处理597篇\n",
      "已处理598篇\n",
      "已处理599篇\n",
      "已处理600篇\n",
      "已处理601篇\n",
      "已处理602篇\n",
      "已处理603篇\n",
      "已处理604篇\n",
      "已处理605篇\n",
      "已处理606篇\n",
      "已处理607篇\n",
      "已处理608篇\n",
      "已处理609篇\n",
      "已处理610篇\n",
      "已处理611篇\n",
      "已处理612篇\n",
      "已处理613篇\n",
      "已处理614篇\n",
      "已处理615篇\n",
      "已处理616篇\n",
      "已处理617篇\n",
      "已处理618篇\n",
      "已处理619篇\n",
      "已处理620篇\n",
      "已处理621篇\n",
      "已处理622篇\n",
      "已处理623篇\n",
      "已处理624篇\n",
      "已处理625篇\n",
      "已处理626篇\n",
      "已处理627篇\n",
      "已处理628篇\n",
      "已处理629篇\n",
      "已处理630篇\n",
      "已处理631篇\n",
      "已处理632篇\n",
      "已处理633篇\n",
      "已处理634篇\n",
      "已处理635篇\n",
      "已处理636篇\n",
      "已处理637篇\n",
      "已处理638篇\n",
      "已处理639篇\n",
      "已处理640篇\n",
      "已处理641篇\n",
      "已处理642篇\n",
      "已处理643篇\n",
      "已处理644篇\n",
      "已处理645篇\n",
      "已处理646篇\n",
      "已处理647篇\n",
      "已处理648篇\n",
      "已处理649篇\n",
      "已处理650篇\n",
      "已处理651篇\n",
      "已处理652篇\n",
      "已处理653篇\n",
      "已处理654篇\n",
      "已处理655篇\n",
      "已处理656篇\n",
      "已处理657篇\n",
      "已处理658篇\n",
      "已处理659篇\n",
      "已处理660篇\n",
      "已处理661篇\n",
      "已处理662篇\n",
      "已处理663篇\n",
      "已处理664篇\n",
      "已处理665篇\n",
      "已处理666篇\n",
      "已处理667篇\n",
      "已处理668篇\n",
      "已处理669篇\n",
      "已处理670篇\n",
      "已处理671篇\n",
      "已处理672篇\n",
      "已处理673篇\n",
      "已处理674篇\n",
      "已处理675篇\n",
      "已处理676篇\n",
      "已处理677篇\n",
      "已处理678篇\n",
      "已处理679篇\n",
      "已处理680篇\n",
      "已处理681篇\n",
      "已处理682篇\n",
      "已处理683篇\n",
      "已处理684篇\n",
      "已处理685篇\n",
      "已处理686篇\n",
      "已处理687篇\n",
      "已处理688篇\n",
      "已处理689篇\n",
      "已处理690篇\n",
      "已处理691篇\n",
      "已处理692篇\n",
      "已处理693篇\n",
      "已处理694篇\n",
      "已处理695篇\n",
      "已处理696篇\n",
      "已处理697篇\n",
      "已处理698篇\n",
      "已处理699篇\n",
      "已处理700篇\n",
      "已处理701篇\n",
      "已处理702篇\n",
      "已处理703篇\n",
      "已处理704篇\n",
      "已处理705篇\n",
      "已处理706篇\n",
      "已处理707篇\n",
      "已处理708篇\n",
      "已处理709篇\n",
      "已处理710篇\n",
      "已处理711篇\n",
      "已处理712篇\n",
      "已处理713篇\n",
      "已处理714篇\n",
      "已处理715篇\n",
      "已处理716篇\n",
      "已处理717篇\n",
      "已处理718篇\n",
      "已处理719篇\n",
      "已处理720篇\n",
      "已处理721篇\n",
      "已处理722篇\n",
      "已处理723篇\n",
      "已处理724篇\n",
      "已处理725篇\n",
      "已处理726篇\n",
      "已处理727篇\n",
      "已处理728篇\n",
      "已处理729篇\n",
      "已处理730篇\n",
      "已处理731篇\n",
      "已处理732篇\n",
      "已处理733篇\n",
      "已处理734篇\n",
      "已处理735篇\n",
      "已处理736篇\n",
      "已处理737篇\n",
      "已处理738篇\n",
      "已处理739篇\n",
      "已处理740篇\n",
      "已处理741篇\n",
      "已处理742篇\n",
      "已处理743篇\n",
      "已处理744篇\n",
      "已处理745篇\n",
      "已处理746篇\n",
      "已处理747篇\n",
      "已处理748篇\n",
      "已处理749篇\n",
      "已处理750篇\n",
      "已处理751篇\n",
      "已处理752篇\n",
      "已处理753篇\n",
      "已处理754篇\n",
      "已处理755篇\n",
      "已处理756篇\n",
      "已处理757篇\n",
      "已处理758篇\n",
      "已处理759篇\n",
      "已处理760篇\n",
      "已处理761篇\n",
      "已处理762篇\n",
      "已处理763篇\n",
      "已处理764篇\n",
      "已处理765篇\n",
      "已处理766篇\n",
      "已处理767篇\n",
      "已处理768篇\n",
      "已处理769篇\n",
      "已处理770篇\n",
      "已处理771篇\n",
      "已处理772篇\n",
      "已处理773篇\n",
      "已处理774篇\n",
      "已处理775篇\n",
      "已处理776篇\n",
      "已处理777篇\n",
      "已处理778篇\n",
      "已处理779篇\n",
      "已处理780篇\n",
      "已处理781篇\n",
      "已处理782篇\n",
      "已处理783篇\n",
      "已处理784篇\n",
      "已处理785篇\n",
      "已处理786篇\n",
      "已处理787篇\n",
      "已处理788篇\n",
      "已处理789篇\n",
      "已处理790篇\n",
      "已处理791篇\n",
      "已处理792篇\n",
      "已处理793篇\n",
      "已处理794篇\n",
      "已处理795篇\n",
      "已处理796篇\n",
      "已处理797篇\n",
      "已处理798篇\n",
      "已处理799篇\n",
      "已处理800篇\n",
      "已处理801篇\n",
      "已处理802篇\n",
      "已处理803篇\n",
      "已处理804篇\n",
      "已处理805篇\n",
      "已处理806篇\n",
      "已处理807篇\n",
      "已处理808篇\n",
      "已处理809篇\n",
      "已处理810篇\n",
      "已处理811篇\n",
      "已处理812篇\n",
      "已处理813篇\n",
      "已处理814篇\n",
      "已处理815篇\n",
      "已处理816篇\n",
      "已处理817篇\n",
      "已处理818篇\n",
      "已处理819篇\n",
      "已处理820篇\n",
      "已处理821篇\n",
      "已处理822篇\n",
      "已处理823篇\n",
      "已处理824篇\n",
      "已处理825篇\n",
      "已处理826篇\n",
      "已处理827篇\n",
      "已处理828篇\n",
      "已处理829篇\n",
      "已处理830篇\n",
      "已处理831篇\n",
      "已处理832篇\n",
      "已处理833篇\n",
      "已处理834篇\n",
      "已处理835篇\n",
      "已处理836篇\n",
      "已处理837篇\n",
      "已处理838篇\n",
      "已处理839篇\n",
      "已处理840篇\n",
      "已处理841篇\n",
      "已处理842篇\n",
      "已处理843篇\n",
      "已处理844篇\n",
      "已处理845篇\n",
      "已处理846篇\n",
      "已处理847篇\n",
      "已处理848篇\n",
      "已处理849篇\n",
      "已处理850篇\n",
      "已处理851篇\n",
      "已处理852篇\n",
      "已处理853篇\n",
      "已处理854篇\n",
      "已处理855篇\n",
      "已处理856篇\n",
      "已处理857篇\n",
      "已处理858篇\n",
      "已处理859篇\n",
      "已处理860篇\n",
      "已处理861篇\n",
      "已处理862篇\n",
      "已处理863篇\n",
      "已处理864篇\n",
      "已处理865篇\n",
      "已处理866篇\n",
      "已处理867篇\n",
      "已处理868篇\n",
      "已处理869篇\n",
      "已处理870篇\n",
      "已处理871篇\n",
      "已处理872篇\n",
      "已处理873篇\n",
      "已处理874篇\n",
      "已处理875篇\n",
      "已处理876篇\n",
      "已处理877篇\n",
      "已处理878篇\n",
      "已处理879篇\n",
      "已处理880篇\n",
      "已处理881篇\n",
      "已处理882篇\n",
      "已处理883篇\n",
      "已处理884篇\n",
      "已处理885篇\n",
      "已处理886篇\n",
      "已处理887篇\n",
      "已处理888篇\n",
      "已处理889篇\n",
      "已处理890篇\n",
      "已处理891篇\n",
      "已处理892篇\n",
      "已处理893篇\n",
      "已处理894篇\n",
      "已处理895篇\n",
      "已处理896篇\n",
      "已处理897篇\n",
      "已处理898篇\n",
      "已处理899篇\n",
      "已处理900篇\n",
      "已处理901篇\n",
      "已处理902篇\n",
      "已处理903篇\n",
      "已处理904篇\n",
      "已处理905篇\n",
      "已处理906篇\n",
      "已处理907篇\n",
      "已处理908篇\n",
      "已处理909篇\n",
      "已处理910篇\n",
      "已处理911篇\n",
      "已处理912篇\n",
      "已处理913篇\n",
      "已处理914篇\n",
      "已处理915篇\n",
      "已处理916篇\n",
      "已处理917篇\n",
      "已处理918篇\n",
      "已处理919篇\n",
      "已处理920篇\n",
      "已处理921篇\n",
      "已处理922篇\n",
      "已处理923篇\n",
      "已处理924篇\n",
      "已处理925篇\n",
      "已处理926篇\n",
      "已处理927篇\n",
      "已处理928篇\n",
      "已处理929篇\n",
      "已处理930篇\n",
      "已处理931篇\n",
      "已处理932篇\n",
      "已处理933篇\n",
      "已处理934篇\n",
      "已处理935篇\n",
      "已处理936篇\n",
      "已处理937篇\n",
      "已处理938篇\n",
      "已处理939篇\n",
      "已处理940篇\n",
      "已处理941篇\n",
      "已处理942篇\n",
      "已处理943篇\n",
      "已处理944篇\n",
      "已处理945篇\n",
      "已处理946篇\n",
      "已处理947篇\n",
      "已处理948篇\n",
      "已处理949篇\n",
      "已处理950篇\n",
      "已处理951篇\n",
      "已处理952篇\n",
      "已处理953篇\n",
      "已处理954篇\n",
      "已处理955篇\n",
      "已处理956篇\n",
      "已处理957篇\n",
      "已处理958篇\n",
      "已处理959篇\n",
      "已处理960篇\n",
      "已处理961篇\n",
      "已处理962篇\n",
      "已处理963篇\n",
      "已处理964篇\n",
      "已处理965篇\n",
      "已处理966篇\n",
      "已处理967篇\n",
      "已处理968篇\n",
      "已处理969篇\n",
      "已处理970篇\n",
      "已处理971篇\n",
      "已处理972篇\n",
      "已处理973篇\n",
      "已处理974篇\n",
      "已处理975篇\n",
      "已处理976篇\n",
      "已处理977篇\n",
      "已处理978篇\n",
      "已处理979篇\n",
      "已处理980篇\n",
      "已处理981篇\n",
      "已处理982篇\n",
      "已处理983篇\n",
      "已处理984篇\n",
      "已处理985篇\n",
      "已处理986篇\n",
      "已处理987篇\n",
      "已处理988篇\n",
      "已处理989篇\n",
      "已处理990篇\n",
      "已处理991篇\n",
      "已处理992篇\n",
      "已处理993篇\n",
      "已处理994篇\n",
      "已处理995篇\n",
      "已处理996篇\n",
      "已处理997篇\n",
      "已处理998篇\n",
      "已处理999篇\n",
      "已处理1000篇\n",
      "已处理1001篇\n"
     ]
    }
   ],
   "source": [
    "#把essays转换为向量形式\n",
    "vectors_glove = []\n",
    "count = 1\n",
    "for essay in data['essay']:\n",
    "    vectors_glove.append(get_glove_100_vec(essay,w_v_dict))\n",
    "    print(\"已处理{}篇\".format(count))\n",
    "    count+=1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "08b5f7b0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 把列表转换成数据框格式\n",
    "vectors_glove = pd.DataFrame(vectors_glove)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "9c7188ff",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(1001, 99)\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>0</th>\n",
       "      <th>1</th>\n",
       "      <th>2</th>\n",
       "      <th>3</th>\n",
       "      <th>4</th>\n",
       "      <th>5</th>\n",
       "      <th>6</th>\n",
       "      <th>7</th>\n",
       "      <th>8</th>\n",
       "      <th>9</th>\n",
       "      <th>...</th>\n",
       "      <th>89</th>\n",
       "      <th>90</th>\n",
       "      <th>91</th>\n",
       "      <th>92</th>\n",
       "      <th>93</th>\n",
       "      <th>94</th>\n",
       "      <th>95</th>\n",
       "      <th>96</th>\n",
       "      <th>97</th>\n",
       "      <th>98</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>-0.120686</td>\n",
       "      <td>0.155074</td>\n",
       "      <td>0.315100</td>\n",
       "      <td>-0.182956</td>\n",
       "      <td>-0.124893</td>\n",
       "      <td>0.150174</td>\n",
       "      <td>-0.212914</td>\n",
       "      <td>0.159672</td>\n",
       "      <td>-0.047339</td>\n",
       "      <td>-0.092322</td>\n",
       "      <td>...</td>\n",
       "      <td>-0.268447</td>\n",
       "      <td>-0.054396</td>\n",
       "      <td>-0.096545</td>\n",
       "      <td>0.011370</td>\n",
       "      <td>-0.016519</td>\n",
       "      <td>-0.408406</td>\n",
       "      <td>-0.048633</td>\n",
       "      <td>-0.170380</td>\n",
       "      <td>-0.307924</td>\n",
       "      <td>0.344400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>-0.096388</td>\n",
       "      <td>0.185211</td>\n",
       "      <td>0.360958</td>\n",
       "      <td>-0.184626</td>\n",
       "      <td>-0.109197</td>\n",
       "      <td>0.223573</td>\n",
       "      <td>-0.219939</td>\n",
       "      <td>0.106561</td>\n",
       "      <td>-0.119904</td>\n",
       "      <td>-0.026553</td>\n",
       "      <td>...</td>\n",
       "      <td>-0.264193</td>\n",
       "      <td>-0.047806</td>\n",
       "      <td>-0.121456</td>\n",
       "      <td>-0.028583</td>\n",
       "      <td>-0.007365</td>\n",
       "      <td>-0.416639</td>\n",
       "      <td>-0.121725</td>\n",
       "      <td>-0.118571</td>\n",
       "      <td>-0.277530</td>\n",
       "      <td>0.402811</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>-0.165218</td>\n",
       "      <td>0.230092</td>\n",
       "      <td>0.394004</td>\n",
       "      <td>-0.241641</td>\n",
       "      <td>-0.204977</td>\n",
       "      <td>0.202470</td>\n",
       "      <td>-0.191531</td>\n",
       "      <td>0.184408</td>\n",
       "      <td>-0.026965</td>\n",
       "      <td>-0.111635</td>\n",
       "      <td>...</td>\n",
       "      <td>-0.296409</td>\n",
       "      <td>-0.103088</td>\n",
       "      <td>-0.031124</td>\n",
       "      <td>0.031590</td>\n",
       "      <td>0.023312</td>\n",
       "      <td>-0.436750</td>\n",
       "      <td>-0.060165</td>\n",
       "      <td>-0.094003</td>\n",
       "      <td>-0.325644</td>\n",
       "      <td>0.377455</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>-0.136330</td>\n",
       "      <td>0.210490</td>\n",
       "      <td>0.333176</td>\n",
       "      <td>-0.254893</td>\n",
       "      <td>-0.124894</td>\n",
       "      <td>0.221892</td>\n",
       "      <td>-0.190266</td>\n",
       "      <td>0.179672</td>\n",
       "      <td>-0.043548</td>\n",
       "      <td>-0.108977</td>\n",
       "      <td>...</td>\n",
       "      <td>-0.262332</td>\n",
       "      <td>-0.084266</td>\n",
       "      <td>-0.085750</td>\n",
       "      <td>0.037456</td>\n",
       "      <td>-0.034176</td>\n",
       "      <td>-0.439657</td>\n",
       "      <td>-0.064015</td>\n",
       "      <td>-0.064168</td>\n",
       "      <td>-0.287647</td>\n",
       "      <td>0.386404</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>-0.163342</td>\n",
       "      <td>0.222142</td>\n",
       "      <td>0.335085</td>\n",
       "      <td>-0.222246</td>\n",
       "      <td>-0.085029</td>\n",
       "      <td>0.207153</td>\n",
       "      <td>-0.201252</td>\n",
       "      <td>0.153767</td>\n",
       "      <td>-0.038588</td>\n",
       "      <td>-0.051452</td>\n",
       "      <td>...</td>\n",
       "      <td>-0.249985</td>\n",
       "      <td>-0.054179</td>\n",
       "      <td>-0.057925</td>\n",
       "      <td>-0.027644</td>\n",
       "      <td>-0.014042</td>\n",
       "      <td>-0.426040</td>\n",
       "      <td>-0.084366</td>\n",
       "      <td>-0.170212</td>\n",
       "      <td>-0.345353</td>\n",
       "      <td>0.434815</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>5 rows × 99 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "         0         1         2         3         4         5         6   \\\n",
       "0 -0.120686  0.155074  0.315100 -0.182956 -0.124893  0.150174 -0.212914   \n",
       "1 -0.096388  0.185211  0.360958 -0.184626 -0.109197  0.223573 -0.219939   \n",
       "2 -0.165218  0.230092  0.394004 -0.241641 -0.204977  0.202470 -0.191531   \n",
       "3 -0.136330  0.210490  0.333176 -0.254893 -0.124894  0.221892 -0.190266   \n",
       "4 -0.163342  0.222142  0.335085 -0.222246 -0.085029  0.207153 -0.201252   \n",
       "\n",
       "         7         8         9   ...        89        90        91        92  \\\n",
       "0  0.159672 -0.047339 -0.092322  ... -0.268447 -0.054396 -0.096545  0.011370   \n",
       "1  0.106561 -0.119904 -0.026553  ... -0.264193 -0.047806 -0.121456 -0.028583   \n",
       "2  0.184408 -0.026965 -0.111635  ... -0.296409 -0.103088 -0.031124  0.031590   \n",
       "3  0.179672 -0.043548 -0.108977  ... -0.262332 -0.084266 -0.085750  0.037456   \n",
       "4  0.153767 -0.038588 -0.051452  ... -0.249985 -0.054179 -0.057925 -0.027644   \n",
       "\n",
       "         93        94        95        96        97        98  \n",
       "0 -0.016519 -0.408406 -0.048633 -0.170380 -0.307924  0.344400  \n",
       "1 -0.007365 -0.416639 -0.121725 -0.118571 -0.277530  0.402811  \n",
       "2  0.023312 -0.436750 -0.060165 -0.094003 -0.325644  0.377455  \n",
       "3 -0.034176 -0.439657 -0.064015 -0.064168 -0.287647  0.386404  \n",
       "4 -0.014042 -0.426040 -0.084366 -0.170212 -0.345353  0.434815  \n",
       "\n",
       "[5 rows x 99 columns]"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "print(vectors_glove.shape)\n",
    "vectors_glove.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "824ba8df",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0     0\n",
       "1     0\n",
       "2     0\n",
       "3     0\n",
       "4     0\n",
       "     ..\n",
       "94    0\n",
       "95    0\n",
       "96    0\n",
       "97    0\n",
       "98    0\n",
       "Length: 99, dtype: int64"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "vectors_glove.isnull().sum()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7bd6dbb4",
   "metadata": {},
   "source": [
    "## 添加其他特征"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "b5cb284f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 返回总的单词列表\n",
    "def sentence_to_wordlist(raw_sentence):\n",
    "    #句子清洗\n",
    "    # re的用法：replacedStr = re.sub(\"\\d+\", \"222\", inputStr)\n",
    "    clean_sentence = re.sub(\"[^a-zA-Z0-9]\", \" \", raw_sentence)# 这里主要是去除除了a-zA-Z0-9之外的字符\n",
    "    #nltk 分词\n",
    "    tokens = nltk.word_tokenize(clean_sentence)\n",
    "    #返回分词的结果\n",
    "    return tokens"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "c5b0fe59",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['Dear',\n",
       " 'Jerry',\n",
       " 'I',\n",
       " 'am',\n",
       " 'Li',\n",
       " 'Hua',\n",
       " 'I',\n",
       " 'know',\n",
       " 'you',\n",
       " 'will',\n",
       " 'to',\n",
       " 'attend',\n",
       " 'an',\n",
       " 'English',\n",
       " 'test',\n",
       " 'I',\n",
       " 'think',\n",
       " 'I',\n",
       " 'can',\n",
       " 'give',\n",
       " 'you',\n",
       " 'some',\n",
       " 'advise',\n",
       " 'Such',\n",
       " 'as',\n",
       " 'you',\n",
       " 'should',\n",
       " 'to',\n",
       " 'know',\n",
       " 'what',\n",
       " 'you',\n",
       " 'will',\n",
       " 'to',\n",
       " 'say',\n",
       " 'And',\n",
       " 'when',\n",
       " 'you',\n",
       " 'say',\n",
       " 'how',\n",
       " 'you',\n",
       " 'should',\n",
       " 'to',\n",
       " 'do',\n",
       " 'And',\n",
       " 'you',\n",
       " 'should',\n",
       " 'to',\n",
       " 'know',\n",
       " 'how',\n",
       " 'to',\n",
       " 'leave',\n",
       " 'Yes',\n",
       " 'there',\n",
       " 'are',\n",
       " 'my',\n",
       " 'advise',\n",
       " 'I',\n",
       " 'think',\n",
       " 'they',\n",
       " 'are',\n",
       " 'useful',\n",
       " 'And',\n",
       " 'I',\n",
       " 'think',\n",
       " 'they',\n",
       " 'can',\n",
       " 'give',\n",
       " 'you',\n",
       " 'some',\n",
       " 'help',\n",
       " 'So',\n",
       " 'I',\n",
       " 'say',\n",
       " 'they',\n",
       " 'in',\n",
       " 'their',\n",
       " 'Yours',\n",
       " 'Li',\n",
       " 'Hua']"
      ]
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sentence_to_wordlist(example)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "b209be7c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 把每个句子都拆成单词列表\n",
    "def tokenize(essay):\n",
    "    # 去掉句子前后空格\n",
    "    stripped_essay = essay.strip()\n",
    "    tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')\n",
    "    raw_sentences = tokenizer.tokenize(stripped_essay) # 分句\n",
    "    #print(raw_sentences)\n",
    "    tokenized_sentences = []\n",
    "    for raw_sentence in raw_sentences:\n",
    "        if len(raw_sentence) > 0:\n",
    "            tokenized_sentences.append(sentence_to_wordlist(raw_sentence))\n",
    "    return tokenized_sentences"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "91effd63",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[['Dear', 'Jerry', 'I', 'am', 'Li', 'Hua'],\n",
       " ['I', 'know', 'you', 'will', 'to', 'attend', 'an', 'English', 'test'],\n",
       " ['I', 'think', 'I', 'can', 'give', 'you', 'some', 'advise'],\n",
       " ['Such',\n",
       "  'as',\n",
       "  'you',\n",
       "  'should',\n",
       "  'to',\n",
       "  'know',\n",
       "  'what',\n",
       "  'you',\n",
       "  'will',\n",
       "  'to',\n",
       "  'say'],\n",
       " ['And', 'when', 'you', 'say', 'how', 'you', 'should', 'to', 'do'],\n",
       " ['And', 'you', 'should', 'to', 'know', 'how', 'to', 'leave'],\n",
       " ['Yes', 'there', 'are', 'my', 'advise'],\n",
       " ['I', 'think', 'they', 'are', 'useful'],\n",
       " ['And', 'I', 'think', 'they', 'can', 'give', 'you', 'some', 'help'],\n",
       " ['So', 'I', 'say', 'they', 'in', 'their'],\n",
       " ['Yours', 'Li', 'Hua']]"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tokenize(example)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "3f1076d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 平均词数\n",
    "def avg_word_len(essay):\n",
    "    # 清洗文章\n",
    "    clean_essay = re.sub(r'\\W', ' ', essay)\n",
    "    # 分词\n",
    "    words = nltk.word_tokenize(clean_essay)\n",
    "    # 遍历word，然后把每一个word的单词数目相加，然后除以总的单词数目，即可得到 每个单词平均有几个字母\n",
    "    return sum(len(word) for word in words) / len(words)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "c23b70d6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "3.3797468354430378"
      ]
     },
     "execution_count": 32,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "avg_word_len(example)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "ccb0b3e9",
   "metadata": {},
   "outputs": [],
   "source": [
    "def word_count(essay):\n",
    "    clean_essay = re.sub(r'\\W', ' ', essay)\n",
    "    words = nltk.word_tokenize(clean_essay)\n",
    "    return len(words)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "639f84c2",
   "metadata": {},
   "outputs": [],
   "source": [
    "def char_count(essay):\n",
    "    # 删除空格即返回文章字母总数\n",
    "    clean_essay = re.sub(r'\\s', '', str(essay).lower())\n",
    "    return len(clean_essay)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "762b4859",
   "metadata": {},
   "outputs": [],
   "source": [
    "def sent_count(essay):\n",
    "    sentences = nltk.sent_tokenize(essay)\n",
    "    return len(sentences)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "141ed87e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 提取出所有的名词 形容词 动词 副词\n",
    "def count_lemmas(essay):\n",
    "    #分句\n",
    "    tokenized_sentences = tokenize(essay)\n",
    "    lemmas = []\n",
    "    wordnet_lemmatizer = WordNetLemmatizer()\n",
    "\n",
    "    for sentence in tokenized_sentences:\n",
    "        # 给句子标注词性\n",
    "        tagged_tokens = nltk.pos_tag(sentence)\n",
    "        for token_tuple in tagged_tokens:\n",
    "            # 取第一个，即为该单词词性\n",
    "            pos_tag = token_tuple[1]\n",
    "            if pos_tag.startswith('N'):\n",
    "                pos = wordnet.NOUN\n",
    "                lemmas.append(wordnet_lemmatizer.lemmatize(token_tuple[0], pos))\n",
    "                \n",
    "            elif pos_tag.startswith('J'):\n",
    "                pos = wordnet.ADJ\n",
    "                lemmas.append(wordnet_lemmatizer.lemmatize(token_tuple[0], pos))\n",
    "            elif pos_tag.startswith('V'):\n",
    "                pos = wordnet.VERB\n",
    "                lemmas.append(wordnet_lemmatizer.lemmatize(token_tuple[0], pos))\n",
    "            elif pos_tag.startswith('R'):\n",
    "                pos = wordnet.ADV\n",
    "                lemmas.append(wordnet_lemmatizer.lemmatize(token_tuple[0], pos))\n",
    "            else:\n",
    "                pos = wordnet.NOUN\n",
    "                lemmas.append(wordnet_lemmatizer.lemmatize(token_tuple[0], pos))\n",
    "    lemma_count = len(set(lemmas))\n",
    "    # print(lemma_count)\n",
    "    return lemma_count"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "d1c36443",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "39"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "count_lemmas(example)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "2c68d264",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"\\ndef count_lemmas(essay):\\n    #分句\\n    tokenized_sentences = tokenize(essay)\\n    lemmas = []\\n    wordnet_lemmatizer = WordNetLemmatizer()\\n\\n    for sentence in tokenized_sentences:\\n        # 给句子标注词性\\n        tagged_tokens = nltk.pos_tag(sentence)\\n       # print(tagged_tokens)\\n        for token_tuple in tagged_tokens:\\n            # 取第二个，即为该单词词性\\n            pos_tag = token_tuple[1]\\n            if pos_tag.startswith('N'):\\n                pos = wordnet.NOUN\\n                # 这里是提取了所有名词\\n                lemmas.append(wordnet_lemmatizer.lemmatize(token_tuple[0], pos))\\n #           print(lemmas)\\n            elif pos_tag.startswith('J'):\\n                pos = wordnet.ADJ\\n                lemmas.append(wordnet_lemmatizer.lemmatize(token_tuple[0], pos))\\n           # print(lemmas)\\n            elif pos_tag.startswith('V'):\\n                pos = wordnet.VERB\\n                lemmas.append(wordnet_lemmatizer.lemmatize(token_tuple[0], pos))\\n            elif pos_tag.startswith('R'):\\n                pos = wordnet.ADV\\n                lemmas.append(wordnet_lemmatizer.lemmatize(token_tuple[0], pos))\\n            else:\\n                pos = wordnet.NOUN\\n                lemmas.append(wordnet_lemmatizer.lemmatize(token_tuple[0], pos))\\ncount_lemmas(example)\\n\""
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 测试nltk中的lemma使用\n",
    "'''\n",
    "def count_lemmas(essay):\n",
    "    #分句\n",
    "    tokenized_sentences = tokenize(essay)\n",
    "    lemmas = []\n",
    "    wordnet_lemmatizer = WordNetLemmatizer()\n",
    "\n",
    "    for sentence in tokenized_sentences:\n",
    "        # 给句子标注词性\n",
    "        tagged_tokens = nltk.pos_tag(sentence)\n",
    "       # print(tagged_tokens)\n",
    "        for token_tuple in tagged_tokens:\n",
    "            # 取第二个，即为该单词词性\n",
    "            pos_tag = token_tuple[1]\n",
    "            if pos_tag.startswith('N'):\n",
    "                pos = wordnet.NOUN\n",
    "                # 这里是提取了所有名词\n",
    "                lemmas.append(wordnet_lemmatizer.lemmatize(token_tuple[0], pos))\n",
    " #           print(lemmas)\n",
    "            elif pos_tag.startswith('J'):\n",
    "                pos = wordnet.ADJ\n",
    "                lemmas.append(wordnet_lemmatizer.lemmatize(token_tuple[0], pos))\n",
    "           # print(lemmas)\n",
    "            elif pos_tag.startswith('V'):\n",
    "                pos = wordnet.VERB\n",
    "                lemmas.append(wordnet_lemmatizer.lemmatize(token_tuple[0], pos))\n",
    "            elif pos_tag.startswith('R'):\n",
    "                pos = wordnet.ADV\n",
    "                lemmas.append(wordnet_lemmatizer.lemmatize(token_tuple[0], pos))\n",
    "            else:\n",
    "                pos = wordnet.NOUN\n",
    "                lemmas.append(wordnet_lemmatizer.lemmatize(token_tuple[0], pos))\n",
    "count_lemmas(example)\n",
    "'''"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "6f405c55",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 这里需要定制化一下，加入li 和hua 这两个词\n",
    "def count_spell_error(essay):\n",
    "    clean_essay = re.sub(r'\\W', ' ', str(essay).lower())\n",
    "    clean_essay = re.sub(r'[0-9]', '', clean_essay)\n",
    "\n",
    "    # big.txt: It is a concatenation of public domain book excerpts from Project Gutenberg\n",
    "    #         and lists of most frequent words from Wiktionary and the British National Corpus.\n",
    "    #         It contains about a million words.\n",
    "    data = open('big.txt').read()\n",
    "\n",
    "    words_ = re.findall('[a-z]+', data.lower())\n",
    "\n",
    "\n",
    "    word_dict = collections.defaultdict(lambda: 0)\n",
    "\n",
    "    for word in words_:\n",
    "        word_dict[word] += 1\n",
    "\n",
    "    clean_essay = re.sub(r'\\W', ' ', str(essay).lower())\n",
    "    clean_essay = re.sub(r'[0-9]', '', clean_essay)\n",
    "\n",
    "    mispell_count = 0\n",
    "    mispell_words = []\n",
    "\n",
    "    words = clean_essay.split()\n",
    "\n",
    "    for word in words:\n",
    "        # 针对该文本的特殊词汇\n",
    "        if word not in [\"li\",\"hua\",\"jerry\"]:\n",
    "            # 如果essay中的词汇不在big data 这个文件下，那么就认为这个单词拼写错了\n",
    "            if not word in word_dict:\n",
    "                mispell_count += 1\n",
    "                mispell_words.append(word)\n",
    "\n",
    "    return mispell_count"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "f3c9d6c4",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0"
      ]
     },
     "execution_count": 40,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "count_spell_error(example)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "d5bb46b1",
   "metadata": {},
   "outputs": [],
   "source": [
    "def count_pos(essay):\n",
    "    tokenized_sentences = tokenize(essay)\n",
    "\n",
    "    noun_count = 0\n",
    "    adj_count = 0\n",
    "    verb_count = 0\n",
    "    adv_count = 0\n",
    "\n",
    "    for sentence in tokenized_sentences:\n",
    "        tagged_tokens = nltk.pos_tag(sentence)\n",
    "\n",
    "        for token_tuple in tagged_tokens:\n",
    "            pos_tag = token_tuple[1]\n",
    "\n",
    "            if pos_tag.startswith('N'):\n",
    "                noun_count += 1\n",
    "            elif pos_tag.startswith('J'):\n",
    "                adj_count += 1\n",
    "            elif pos_tag.startswith('V'):\n",
    "                verb_count += 1\n",
    "            elif pos_tag.startswith('R'):\n",
    "                adv_count += 1\n",
    "\n",
    "    return noun_count, adj_count, verb_count, adv_count"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "9ed35c66",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 连词与介词\n",
    "def count_conj(essay):\n",
    "    tokenized_sentences = tokenize(essay)\n",
    "\n",
    "    conj_count = 0\n",
    "    ps_conj = 0\n",
    "    conjs = []\n",
    "    ps = []\n",
    "\n",
    "    for sentence in tokenized_sentences:\n",
    "        tagged_tokens = nltk.pos_tag(sentence)\n",
    "\n",
    "        for token_tuple in tagged_tokens:\n",
    "            pos_tag = token_tuple[1]\n",
    "            \n",
    "            if pos_tag.startswith('CC'):\n",
    "                conj_count += 1\n",
    "               # conjs.append(token_tuple[0])\n",
    "            if pos_tag.startswith(\"IN\"):\n",
    "                ps_conj += 1\n",
    "               # ps.append(token_tuple[0])\n",
    "    return conj_count,ps_conj"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "id": "89161e71",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(3, 3)"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "count_conj(example)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "id": "003671d5",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "D:\\Program\\anaconda\\lib\\site-packages\\spacy\\util.py:833: UserWarning: [W095] Model 'en_core_web_md' (3.1.0) was trained with spaCy v3.1 and may not be 100% compatible with the current version (3.2.0). If you see errors or degraded performance, download a newer compatible model or retrain your custom model with the current spaCy version. For more details and available updates, run: python -m spacy validate\n",
      "  warnings.warn(warn_msg)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Processing time: 0:00:29.017360\n"
     ]
    }
   ],
   "source": [
    "# 使用spacy获取命名实体识别特征\n",
    "import spacy\n",
    "from spacy.lang.en.stop_words import STOP_WORDS\n",
    "from datetime import datetime\n",
    "ner = []\n",
    "\n",
    "stop_words = set(STOP_WORDS)\n",
    "#stop_words.update(punctuation) # remove it if you need punctuation \n",
    "\n",
    "nlp = spacy.load('en_core_web_md')\n",
    "\n",
    "t0 = datetime.now()\n",
    "\n",
    "# suppress numpy warnings\n",
    "np.warnings.filterwarnings('ignore')\n",
    "\n",
    "# 遍历所有的文章，找出其中的 token，句子，词性标注，命名实体识别和lemma\n",
    "#for essay in nlp.pipe(training_set['corrected'], batch_size=100, n_threads=3):\n",
    "for essay in nlp.pipe(data['corrected'], batch_size=100):\n",
    "    if essay.is_parsed:\n",
    "        ner.append([e.text for e in essay.ents])\n",
    "        #ner_num = len(ner)\n",
    "    else:\n",
    "        # 让所有的元素都有相同的长度，所以在没有的地方加none\n",
    "        # We want to make sure that the lists of parsed results have the\n",
    "        # same number of entries of the original Dataframe, so add some blanks in case the parse fails\n",
    "        ner.append(None)\n",
    "        #ner_num = len(ner)\n",
    "\n",
    "\n",
    "data['ner'] = ner\n",
    "data[\"ner_num\"] = data.apply(lambda x:len(x['ner']),axis=1)\n",
    "#print(data['ner'])\n",
    "data.drop ('ner',axis=1, inplace=True) \n",
    "t1 = datetime.now()\n",
    "print('Processing time: {}'.format(t1 - t0))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "id": "346a8810",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 判断写作的正确结尾：因为这是信件，所以我们就以是否以lihua结尾来判断\n",
    "def End0(essay):\n",
    "    try:\n",
    "        if essay.endswith('Li Hua'):\n",
    "            return 1\n",
    "\n",
    "        else:\n",
    "            return 0 \n",
    "    except:\n",
    "        return -1\n",
    "# 之后需要把end0 这一列转换成one-hot编码"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "id": "471c5c18",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "1"
      ]
     },
     "execution_count": 47,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "End0(example)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "id": "e5f687b2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 判断句中首字母是否大写，“.？！”后接大写，“，”后字母接小写。 \n",
    "def Initial_capitalization(text):\n",
    "    c = 0 # c指错误\n",
    "    text = text.replace(' ','')\n",
    "    try:\n",
    "        if not text.startswith(\"Dear J\"):\n",
    "            c += 1\n",
    "        for i in range(len(text)-1):\n",
    "            if text[i] in  \".!?\" and text[i+1].islower():\n",
    "                c += 1\n",
    "            if text[i] in  \",\" and not text[i+1].islower():\n",
    "                c += 1 \n",
    "    except Exception as e:\n",
    "        pass\n",
    "    return c"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "id": "2eaee4f1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 统计错误标点使用\n",
    "def Wrong_signal(line):\n",
    "    return len(re.findall(r'[，。!！~·@#￥%……&*（）——+|《》？`$()_\\、“”：\"><]', line))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "id": "0f615950",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>essay</th>\n",
       "      <th>score</th>\n",
       "      <th>corrected</th>\n",
       "      <th>ner_num</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>10001</td>\n",
       "      <td>Dear Jerry. I've heard about that you will giv...</td>\n",
       "      <td>19.5</td>\n",
       "      <td>Dear Jerry. I have heard about that you will g...</td>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>10002</td>\n",
       "      <td>Dear Jerry I'm glad that you'll respresent you...</td>\n",
       "      <td>16.5</td>\n",
       "      <td>Dear Jerry I am glad that you will respresent ...</td>\n",
       "      <td>5</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>10003</td>\n",
       "      <td>Dear Jerry, I am very happy to hear that you w...</td>\n",
       "      <td>20.5</td>\n",
       "      <td>Dear Jerry, I am very happy to hear that you w...</td>\n",
       "      <td>5</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>10004</td>\n",
       "      <td>Dear Je I'm so happy to hear that you will hav...</td>\n",
       "      <td>15.5</td>\n",
       "      <td>Dear Je I am so happy to hear that you will ha...</td>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>10005</td>\n",
       "      <td>Dear Jerry, I am so glad to hear that you will...</td>\n",
       "      <td>19.0</td>\n",
       "      <td>Dear Jerry, I am so glad to hear that you will...</td>\n",
       "      <td>4</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "      id                                              essay  score  \\\n",
       "0  10001  Dear Jerry. I've heard about that you will giv...   19.5   \n",
       "1  10002  Dear Jerry I'm glad that you'll respresent you...   16.5   \n",
       "2  10003  Dear Jerry, I am very happy to hear that you w...   20.5   \n",
       "3  10004  Dear Je I'm so happy to hear that you will hav...   15.5   \n",
       "4  10005  Dear Jerry, I am so glad to hear that you will...   19.0   \n",
       "\n",
       "                                           corrected  ner_num  \n",
       "0  Dear Jerry. I have heard about that you will g...        3  \n",
       "1  Dear Jerry I am glad that you will respresent ...        5  \n",
       "2  Dear Jerry, I am very happy to hear that you w...        5  \n",
       "3  Dear Je I am so happy to hear that you will ha...        3  \n",
       "4  Dear Jerry, I am so glad to hear that you will...        4  "
      ]
     },
     "execution_count": 52,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "data.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "id": "b8a7031d",
   "metadata": {},
   "outputs": [],
   "source": [
    "def extract_features(data):\n",
    "    features = data.copy()\n",
    "    features['char_count'] = features['corrected'].apply(char_count) #文章字母总数\n",
    "    #features['word_count'] = features['corrected'].apply(word_count) #文章单词总数\n",
    "    features['sent_count'] = features['corrected'].apply(sent_count) #文章句子总数\n",
    "    features['avg_word_len'] = features['corrected'].apply(avg_word_len) #平均单词长度\n",
    "    features['lemma_count'] = features['corrected'].apply(count_lemmas) # 文章词性统计\n",
    "    features['spell_err_count'] = features['corrected'].apply(count_spell_error) # 文章写错的单词总数\n",
    "    features['noun_count'], features['adj_count'], features['verb_count'], features['adv_count'] = zip(\n",
    "        *features['corrected'].map(count_pos)) # 文章名词  动词 形容词 副词的总数\n",
    "    features[\"conj\"], features[\"ps_conj\"] = zip(*features[\"corrected\"].map(count_conj)) #连词的总数\n",
    "    features['end'] = features.corrected.apply(End0)# 是否写完\n",
    "    features['captilization'] = features.corrected.apply(Initial_capitalization) # 首字母是否大写\n",
    "    features['wrong_signal'] = features.corrected.apply(Wrong_signal) # 是否有错误的字符\n",
    "    features['comma'] = features.apply(lambda x: x['corrected'].count(','), axis=1) \n",
    "    features['question'] = features.apply(lambda x: x['corrected'].count('?'), axis=1)\n",
    "    features['exclamation'] = features.apply(lambda x: x['corrected'].count('!'), axis=1)\n",
    "    features['quotation'] = features.apply(lambda x: x['corrected'].count('\"') + x['corrected'].count(\"'\"), axis=1)\n",
    "    return features\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 171,
   "id": "26f11b34",
   "metadata": {},
   "outputs": [],
   "source": [
    "#拿到特征\n",
    "features = extract_features(data)\n",
    "#print(features.columns)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "id": "bfe78e02",
   "metadata": {},
   "outputs": [
    {
     "ename": "NameError",
     "evalue": "name 'features' is not defined",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mNameError\u001b[0m                                 Traceback (most recent call last)",
      "\u001b[1;32m<ipython-input-54-dd5d63ff320e>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[1;32m----> 1\u001b[1;33m \u001b[0mfeatures\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mto_pickle\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m'features_v3.pkl'\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m",
      "\u001b[1;31mNameError\u001b[0m: name 'features' is not defined"
     ]
    }
   ],
   "source": [
    "features.to_pickle('features_v3.pkl')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "2bbf407e",
   "metadata": {},
   "outputs": [],
   "source": [
    "features = pd.read_pickle(\"features_v3.pkl\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "id": "73f42f40",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>essay</th>\n",
       "      <th>score</th>\n",
       "      <th>corrected</th>\n",
       "      <th>ner_num</th>\n",
       "      <th>char_count</th>\n",
       "      <th>sent_count</th>\n",
       "      <th>avg_word_len</th>\n",
       "      <th>lemma_count</th>\n",
       "      <th>spell_err_count</th>\n",
       "      <th>...</th>\n",
       "      <th>adv_count</th>\n",
       "      <th>conj</th>\n",
       "      <th>ps_conj</th>\n",
       "      <th>end</th>\n",
       "      <th>captilization</th>\n",
       "      <th>wrong_signal</th>\n",
       "      <th>comma</th>\n",
       "      <th>question</th>\n",
       "      <th>exclamation</th>\n",
       "      <th>quotation</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>10001</td>\n",
       "      <td>Dear Jerry. I've heard about that you will giv...</td>\n",
       "      <td>19.5</td>\n",
       "      <td>Dear Jerry. I have heard about that you will g...</td>\n",
       "      <td>3</td>\n",
       "      <td>514</td>\n",
       "      <td>15</td>\n",
       "      <td>4.142857</td>\n",
       "      <td>85</td>\n",
       "      <td>3</td>\n",
       "      <td>...</td>\n",
       "      <td>11</td>\n",
       "      <td>2</td>\n",
       "      <td>8</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>10002</td>\n",
       "      <td>Dear Jerry I'm glad that you'll respresent you...</td>\n",
       "      <td>16.5</td>\n",
       "      <td>Dear Jerry I am glad that you will respresent ...</td>\n",
       "      <td>5</td>\n",
       "      <td>618</td>\n",
       "      <td>11</td>\n",
       "      <td>4.357664</td>\n",
       "      <td>85</td>\n",
       "      <td>5</td>\n",
       "      <td>...</td>\n",
       "      <td>10</td>\n",
       "      <td>3</td>\n",
       "      <td>14</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>10003</td>\n",
       "      <td>Dear Jerry, I am very happy to hear that you w...</td>\n",
       "      <td>20.5</td>\n",
       "      <td>Dear Jerry, I am very happy to hear that you w...</td>\n",
       "      <td>5</td>\n",
       "      <td>676</td>\n",
       "      <td>18</td>\n",
       "      <td>3.975610</td>\n",
       "      <td>92</td>\n",
       "      <td>2</td>\n",
       "      <td>...</td>\n",
       "      <td>12</td>\n",
       "      <td>7</td>\n",
       "      <td>15</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>4</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>4</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>10004</td>\n",
       "      <td>Dear Je I'm so happy to hear that you will hav...</td>\n",
       "      <td>15.5</td>\n",
       "      <td>Dear Je I am so happy to hear that you will ha...</td>\n",
       "      <td>3</td>\n",
       "      <td>555</td>\n",
       "      <td>14</td>\n",
       "      <td>4.092308</td>\n",
       "      <td>78</td>\n",
       "      <td>6</td>\n",
       "      <td>...</td>\n",
       "      <td>14</td>\n",
       "      <td>3</td>\n",
       "      <td>8</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>7</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>10005</td>\n",
       "      <td>Dear Jerry, I am so glad to hear that you will...</td>\n",
       "      <td>19.0</td>\n",
       "      <td>Dear Jerry, I am so glad to hear that you will...</td>\n",
       "      <td>4</td>\n",
       "      <td>657</td>\n",
       "      <td>11</td>\n",
       "      <td>4.260000</td>\n",
       "      <td>87</td>\n",
       "      <td>13</td>\n",
       "      <td>...</td>\n",
       "      <td>14</td>\n",
       "      <td>5</td>\n",
       "      <td>12</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>8</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>5 rows × 23 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "      id                                              essay  score  \\\n",
       "0  10001  Dear Jerry. I've heard about that you will giv...   19.5   \n",
       "1  10002  Dear Jerry I'm glad that you'll respresent you...   16.5   \n",
       "2  10003  Dear Jerry, I am very happy to hear that you w...   20.5   \n",
       "3  10004  Dear Je I'm so happy to hear that you will hav...   15.5   \n",
       "4  10005  Dear Jerry, I am so glad to hear that you will...   19.0   \n",
       "\n",
       "                                           corrected  ner_num  char_count  \\\n",
       "0  Dear Jerry. I have heard about that you will g...        3         514   \n",
       "1  Dear Jerry I am glad that you will respresent ...        5         618   \n",
       "2  Dear Jerry, I am very happy to hear that you w...        5         676   \n",
       "3  Dear Je I am so happy to hear that you will ha...        3         555   \n",
       "4  Dear Jerry, I am so glad to hear that you will...        4         657   \n",
       "\n",
       "   sent_count  avg_word_len  lemma_count  spell_err_count  ...  adv_count  \\\n",
       "0          15      4.142857           85                3  ...         11   \n",
       "1          11      4.357664           85                5  ...         10   \n",
       "2          18      3.975610           92                2  ...         12   \n",
       "3          14      4.092308           78                6  ...         14   \n",
       "4          11      4.260000           87               13  ...         14   \n",
       "\n",
       "   conj  ps_conj  end  captilization  wrong_signal  comma  question  \\\n",
       "0     2        8    1              3             0      5         0   \n",
       "1     3       14    0              1             1      5         0   \n",
       "2     7       15    1              3             4      5         0   \n",
       "3     3        8    1              3             0      7         0   \n",
       "4     5       12    1              3             0      8         1   \n",
       "\n",
       "   exclamation  quotation  \n",
       "0            0          2  \n",
       "1            0          3  \n",
       "2            4          0  \n",
       "3            0          3  \n",
       "4            0          0  \n",
       "\n",
       "[5 rows x 23 columns]"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "features.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cc27eb3c",
   "metadata": {},
   "source": [
    "### 把END 特征转换成one-hot"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "id": "fc7d045c",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import OneHotEncoder\n",
    "tempdata = features[['end']]\n",
    "#print(tempdata)\n",
    "enc = OneHotEncoder()\n",
    "enc.fit(tempdata)\n",
    "\n",
    "#one-hot编码的结果是比较奇怪的，最好是先转换成二维数组\n",
    "tempdata = enc.transform(tempdata).toarray()\n",
    "#print(tempdata)\n",
    "end = pd.DataFrame(tempdata)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "id": "f622c6e8",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 最后得到一个数据框，但是为了保留列名，所以我们增加end的列名\n",
    "features[\"end1\"],features[\"end2\"] = end"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "id": "003fd1ec",
   "metadata": {},
   "outputs": [],
   "source": [
    "features.drop([\"end\"],axis=1)\n",
    "features = pd.DataFrame(features)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "id": "84f00b52",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>essay</th>\n",
       "      <th>score</th>\n",
       "      <th>corrected</th>\n",
       "      <th>ner_num</th>\n",
       "      <th>char_count</th>\n",
       "      <th>sent_count</th>\n",
       "      <th>avg_word_len</th>\n",
       "      <th>lemma_count</th>\n",
       "      <th>spell_err_count</th>\n",
       "      <th>...</th>\n",
       "      <th>ps_conj</th>\n",
       "      <th>end</th>\n",
       "      <th>captilization</th>\n",
       "      <th>wrong_signal</th>\n",
       "      <th>comma</th>\n",
       "      <th>question</th>\n",
       "      <th>exclamation</th>\n",
       "      <th>quotation</th>\n",
       "      <th>end1</th>\n",
       "      <th>end2</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>10001</td>\n",
       "      <td>Dear Jerry. I've heard about that you will giv...</td>\n",
       "      <td>19.5</td>\n",
       "      <td>Dear Jerry. I have heard about that you will g...</td>\n",
       "      <td>3</td>\n",
       "      <td>514</td>\n",
       "      <td>15</td>\n",
       "      <td>4.142857</td>\n",
       "      <td>85</td>\n",
       "      <td>3</td>\n",
       "      <td>...</td>\n",
       "      <td>8</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>10002</td>\n",
       "      <td>Dear Jerry I'm glad that you'll respresent you...</td>\n",
       "      <td>16.5</td>\n",
       "      <td>Dear Jerry I am glad that you will respresent ...</td>\n",
       "      <td>5</td>\n",
       "      <td>618</td>\n",
       "      <td>11</td>\n",
       "      <td>4.357664</td>\n",
       "      <td>85</td>\n",
       "      <td>5</td>\n",
       "      <td>...</td>\n",
       "      <td>14</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>10003</td>\n",
       "      <td>Dear Jerry, I am very happy to hear that you w...</td>\n",
       "      <td>20.5</td>\n",
       "      <td>Dear Jerry, I am very happy to hear that you w...</td>\n",
       "      <td>5</td>\n",
       "      <td>676</td>\n",
       "      <td>18</td>\n",
       "      <td>3.975610</td>\n",
       "      <td>92</td>\n",
       "      <td>2</td>\n",
       "      <td>...</td>\n",
       "      <td>15</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>4</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>4</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>10004</td>\n",
       "      <td>Dear Je I'm so happy to hear that you will hav...</td>\n",
       "      <td>15.5</td>\n",
       "      <td>Dear Je I am so happy to hear that you will ha...</td>\n",
       "      <td>3</td>\n",
       "      <td>555</td>\n",
       "      <td>14</td>\n",
       "      <td>4.092308</td>\n",
       "      <td>78</td>\n",
       "      <td>6</td>\n",
       "      <td>...</td>\n",
       "      <td>8</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>7</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>10005</td>\n",
       "      <td>Dear Jerry, I am so glad to hear that you will...</td>\n",
       "      <td>19.0</td>\n",
       "      <td>Dear Jerry, I am so glad to hear that you will...</td>\n",
       "      <td>4</td>\n",
       "      <td>657</td>\n",
       "      <td>11</td>\n",
       "      <td>4.260000</td>\n",
       "      <td>87</td>\n",
       "      <td>13</td>\n",
       "      <td>...</td>\n",
       "      <td>12</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>8</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>5 rows × 25 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "      id                                              essay  score  \\\n",
       "0  10001  Dear Jerry. I've heard about that you will giv...   19.5   \n",
       "1  10002  Dear Jerry I'm glad that you'll respresent you...   16.5   \n",
       "2  10003  Dear Jerry, I am very happy to hear that you w...   20.5   \n",
       "3  10004  Dear Je I'm so happy to hear that you will hav...   15.5   \n",
       "4  10005  Dear Jerry, I am so glad to hear that you will...   19.0   \n",
       "\n",
       "                                           corrected  ner_num  char_count  \\\n",
       "0  Dear Jerry. I have heard about that you will g...        3         514   \n",
       "1  Dear Jerry I am glad that you will respresent ...        5         618   \n",
       "2  Dear Jerry, I am very happy to hear that you w...        5         676   \n",
       "3  Dear Je I am so happy to hear that you will ha...        3         555   \n",
       "4  Dear Jerry, I am so glad to hear that you will...        4         657   \n",
       "\n",
       "   sent_count  avg_word_len  lemma_count  spell_err_count  ...  ps_conj  end  \\\n",
       "0          15      4.142857           85                3  ...        8    1   \n",
       "1          11      4.357664           85                5  ...       14    0   \n",
       "2          18      3.975610           92                2  ...       15    1   \n",
       "3          14      4.092308           78                6  ...        8    1   \n",
       "4          11      4.260000           87               13  ...       12    1   \n",
       "\n",
       "   captilization  wrong_signal  comma  question  exclamation  quotation  end1  \\\n",
       "0              3             0      5         0            0          2     0   \n",
       "1              1             1      5         0            0          3     0   \n",
       "2              3             4      5         0            4          0     0   \n",
       "3              3             0      7         0            0          3     0   \n",
       "4              3             0      8         1            0          0     0   \n",
       "\n",
       "   end2  \n",
       "0     1  \n",
       "1     1  \n",
       "2     1  \n",
       "3     1  \n",
       "4     1  \n",
       "\n",
       "[5 rows x 25 columns]"
      ]
     },
     "execution_count": 47,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "features.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "id": "de445895",
   "metadata": {},
   "outputs": [],
   "source": [
    "##添加wordcount特征\n",
    "features[\"word_count\"] = features['corrected'].str.strip().str.split().str.len()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b3ffb1da",
   "metadata": {},
   "source": [
    "### 提取句子语法特征-这种方法不行，先舍弃吧"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "id": "9fc1cdbc",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  0%|                                                           | 0/121649 [00:00<?, ?it/s]\n"
     ]
    },
    {
     "ename": "NameError",
     "evalue": "name 'tokenize' is not defined",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mNameError\u001b[0m                                 Traceback (most recent call last)",
      "\u001b[1;32m<ipython-input-49-309e054ac685>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[0;32m      4\u001b[0m     \u001b[1;32mfor\u001b[0m \u001b[0mline\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mtqdm\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mf\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mreadlines\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m      5\u001b[0m         \u001b[0mtxt\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mline\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mreplace\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"\\n\"\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;34m\"\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mreplace\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"\\t\"\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;34m\"\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 6\u001b[1;33m         \u001b[0mtokenized_sentences\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mtokenize\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mtxt\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m      7\u001b[0m         \u001b[0mwordnet_lemmatizer\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mWordNetLemmatizer\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m      8\u001b[0m         \u001b[1;32mfor\u001b[0m \u001b[0msentence\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mtokenized_sentences\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;31mNameError\u001b[0m: name 'tokenize' is not defined"
     ]
    }
   ],
   "source": [
    "from tqdm import tqdm\n",
    "with open(\"homes_novel.txt\") as f:\n",
    "    collocation = []\n",
    "    for line in tqdm(f.readlines()):\n",
    "        txt = line.replace(\"\\n\",\"\").replace(\"\\t\",\"\")\n",
    "        tokenized_sentences = tokenize(txt)\n",
    "        wordnet_lemmatizer = WordNetLemmatizer()\n",
    "        for sentence in tokenized_sentences:\n",
    "            pos_tags = []\n",
    "        # 给句子标注词性\n",
    "            tagged_tokens = nltk.pos_tag(sentence)\n",
    "            #print(tagged_tokens)\n",
    "            for token_tuple in tagged_tokens:\n",
    "                # 取第二个，即为该单词词性\n",
    "                pos_tag = token_tuple[1]\n",
    "                pos_tags.append(pos_tag) \n",
    "            #print(pos_tags)\n",
    "            cos = [*nltk.ngrams(pos_tags, 4)] # 取四元词组,结构[(4-gram),(4-gram),(4-gram)]\n",
    "            for co in cos:\n",
    "                if co not in collocation:\n",
    "                    collocation.append(co)\n",
    "                    #print(co)\n",
    "    #print(collocation)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "id": "13fadd64",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'\\n# 把提取好的搭配列表放在pickle中\\nwith open(\"collocations.pkl\", \"wb\") as fp:   #Pickling\\n    pickle.dump(collocation, fp, protocol = pickle.HIGHEST_PROTOCOL)\\n'"
      ]
     },
     "execution_count": 50,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "'''\n",
    "# 把提取好的搭配列表放在pickle中\n",
    "with open(\"collocations.pkl\", \"wb\") as fp:   #Pickling\n",
    "    pickle.dump(collocation, fp, protocol = pickle.HIGHEST_PROTOCOL)\n",
    "'''"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "id": "18d2e527",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0"
      ]
     },
     "execution_count": 51,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(collocation)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "id": "a692d408",
   "metadata": {},
   "outputs": [],
   "source": [
    "example = \"I Dear Jerry, I'm glad that you'll have an English speech contest for these students which are new in high school. I have something you need to ( Yemember ) pay attentions. For a speech contest, the basic thing is knowing what you want to make others understand, and remember what you will told them. From the message you give me, I have some ( su ) ideas that you may need. The topic of you speech is about chinese culture, and I think the Ancient poetry is a good choice for you. If you using some good poems ( o ) in you speech, it wowld be ( fa ) great, and it can make your speech topic more deepey. Body languages are also very important for a speech contest. Sometimes the uses of body languages can make you speech(more)interesting,and others can understand your speech easily by this way. so, remember to use some body languages, but don't need too much, just three to five in you speech. After you speech end, you should follow the manners, politely out from the speech place. I'm looking forward to your good news. Yours. ! Li Hua\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 75,
   "id": "2dfd1df2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 计算某个文本中搭配正确的比例,主要采用的四元词汇,效果不好，因为有的确实语法上是错误的，但是词性上没有错误\n",
    "def count_pos_right(essay):\n",
    "    with open(\"collocations.pkl\", \"rb\") as fp:   #Pickling\n",
    "        r_collocation = pickle.load(fp)  \n",
    "    e_collocation = []\n",
    "    c = 0\n",
    "    tokenized_sentences = tokenize(essay)\n",
    "    for sentence in tokenized_sentences:\n",
    "        pos_tags = []\n",
    "        tagged_tokens = nltk.pos_tag(sentence)\n",
    "        for token_tuple in tagged_tokens:\n",
    "                # 取第二个，即为该单词词性\n",
    "            pos_tag = token_tuple[1]\n",
    "            pos_tags.append(pos_tag)\n",
    "        cos = [*nltk.ngrams(pos_tags, 4)] # 取四元词组\n",
    "        for co in cos:\n",
    "            #print(co)\n",
    "            if co not in e_collocation:\n",
    "                e_collocation.append(co)\n",
    "    #print(e_collocation)\n",
    "    #print(len(e_collocation))\n",
    "    for coll in e_collocation:\n",
    "        if r_collocation.index(coll):\n",
    "            c+=1 \n",
    "    r_p = c/len(e_collocation)\n",
    "    print(r_p)\n",
    "    return r_p          "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 76,
   "id": "92eb966d",
   "metadata": {},
   "outputs": [
    {
     "ename": "ValueError",
     "evalue": "('NN', 'VBZ', 'VBG', 'WP') is not in list",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mValueError\u001b[0m                                Traceback (most recent call last)",
      "\u001b[1;32m<ipython-input-76-24b633b65ace>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[0;32m      1\u001b[0m \u001b[1;31m# 测试\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 2\u001b[1;33m \u001b[0mcount_pos_right\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mexample\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m",
      "\u001b[1;32m<ipython-input-75-229480292a67>\u001b[0m in \u001b[0;36mcount_pos_right\u001b[1;34m(essay)\u001b[0m\n\u001b[0;32m     21\u001b[0m     \u001b[1;31m#print(len(e_collocation))\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     22\u001b[0m     \u001b[1;32mfor\u001b[0m \u001b[0mcoll\u001b[0m \u001b[1;32min\u001b[0m \u001b[0me_collocation\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 23\u001b[1;33m         \u001b[1;32mif\u001b[0m \u001b[0mr_collocation\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mindex\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mcoll\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     24\u001b[0m             \u001b[0mc\u001b[0m\u001b[1;33m+=\u001b[0m\u001b[1;36m1\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     25\u001b[0m     \u001b[0mr_p\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mc\u001b[0m\u001b[1;33m/\u001b[0m\u001b[0mlen\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0me_collocation\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;31mValueError\u001b[0m: ('NN', 'VBZ', 'VBG', 'WP') is not in list"
     ]
    }
   ],
   "source": [
    "# 测试\n",
    "count_pos_right(example)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "id": "82bc659a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>essay</th>\n",
       "      <th>score</th>\n",
       "      <th>corrected</th>\n",
       "      <th>ner_num</th>\n",
       "      <th>char_count</th>\n",
       "      <th>sent_count</th>\n",
       "      <th>avg_word_len</th>\n",
       "      <th>lemma_count</th>\n",
       "      <th>spell_err_count</th>\n",
       "      <th>...</th>\n",
       "      <th>ps_conj</th>\n",
       "      <th>end</th>\n",
       "      <th>captilization</th>\n",
       "      <th>wrong_signal</th>\n",
       "      <th>comma</th>\n",
       "      <th>question</th>\n",
       "      <th>exclamation</th>\n",
       "      <th>quotation</th>\n",
       "      <th>end1</th>\n",
       "      <th>end2</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>10001</td>\n",
       "      <td>Dear Jerry. I've heard about that you will giv...</td>\n",
       "      <td>19.5</td>\n",
       "      <td>Dear Jerry. I have heard about that you will g...</td>\n",
       "      <td>3</td>\n",
       "      <td>514</td>\n",
       "      <td>15</td>\n",
       "      <td>4.142857</td>\n",
       "      <td>85</td>\n",
       "      <td>3</td>\n",
       "      <td>...</td>\n",
       "      <td>8</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>10002</td>\n",
       "      <td>Dear Jerry I'm glad that you'll respresent you...</td>\n",
       "      <td>16.5</td>\n",
       "      <td>Dear Jerry I am glad that you will respresent ...</td>\n",
       "      <td>5</td>\n",
       "      <td>618</td>\n",
       "      <td>11</td>\n",
       "      <td>4.357664</td>\n",
       "      <td>85</td>\n",
       "      <td>5</td>\n",
       "      <td>...</td>\n",
       "      <td>14</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>10003</td>\n",
       "      <td>Dear Jerry, I am very happy to hear that you w...</td>\n",
       "      <td>20.5</td>\n",
       "      <td>Dear Jerry, I am very happy to hear that you w...</td>\n",
       "      <td>5</td>\n",
       "      <td>676</td>\n",
       "      <td>18</td>\n",
       "      <td>3.975610</td>\n",
       "      <td>92</td>\n",
       "      <td>2</td>\n",
       "      <td>...</td>\n",
       "      <td>15</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>4</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>4</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>10004</td>\n",
       "      <td>Dear Je I'm so happy to hear that you will hav...</td>\n",
       "      <td>15.5</td>\n",
       "      <td>Dear Je I am so happy to hear that you will ha...</td>\n",
       "      <td>3</td>\n",
       "      <td>555</td>\n",
       "      <td>14</td>\n",
       "      <td>4.092308</td>\n",
       "      <td>78</td>\n",
       "      <td>6</td>\n",
       "      <td>...</td>\n",
       "      <td>8</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>7</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>10005</td>\n",
       "      <td>Dear Jerry, I am so glad to hear that you will...</td>\n",
       "      <td>19.0</td>\n",
       "      <td>Dear Jerry, I am so glad to hear that you will...</td>\n",
       "      <td>4</td>\n",
       "      <td>657</td>\n",
       "      <td>11</td>\n",
       "      <td>4.260000</td>\n",
       "      <td>87</td>\n",
       "      <td>13</td>\n",
       "      <td>...</td>\n",
       "      <td>12</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>8</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>5 rows × 25 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "      id                                              essay  score  \\\n",
       "0  10001  Dear Jerry. I've heard about that you will giv...   19.5   \n",
       "1  10002  Dear Jerry I'm glad that you'll respresent you...   16.5   \n",
       "2  10003  Dear Jerry, I am very happy to hear that you w...   20.5   \n",
       "3  10004  Dear Je I'm so happy to hear that you will hav...   15.5   \n",
       "4  10005  Dear Jerry, I am so glad to hear that you will...   19.0   \n",
       "\n",
       "                                           corrected  ner_num  char_count  \\\n",
       "0  Dear Jerry. I have heard about that you will g...        3         514   \n",
       "1  Dear Jerry I am glad that you will respresent ...        5         618   \n",
       "2  Dear Jerry, I am very happy to hear that you w...        5         676   \n",
       "3  Dear Je I am so happy to hear that you will ha...        3         555   \n",
       "4  Dear Jerry, I am so glad to hear that you will...        4         657   \n",
       "\n",
       "   sent_count  avg_word_len  lemma_count  spell_err_count  ...  ps_conj  end  \\\n",
       "0          15      4.142857           85                3  ...        8    1   \n",
       "1          11      4.357664           85                5  ...       14    0   \n",
       "2          18      3.975610           92                2  ...       15    1   \n",
       "3          14      4.092308           78                6  ...        8    1   \n",
       "4          11      4.260000           87               13  ...       12    1   \n",
       "\n",
       "   captilization  wrong_signal  comma  question  exclamation  quotation  end1  \\\n",
       "0              3             0      5         0            0          2     0   \n",
       "1              1             1      5         0            0          3     0   \n",
       "2              3             4      5         0            4          0     0   \n",
       "3              3             0      7         0            0          3     0   \n",
       "4              3             0      8         1            0          0     0   \n",
       "\n",
       "   end2  \n",
       "0     1  \n",
       "1     1  \n",
       "2     1  \n",
       "3     1  \n",
       "4     1  \n",
       "\n",
       "[5 rows x 25 columns]"
      ]
     },
     "execution_count": 73,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "features.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 74,
   "id": "21176ec3",
   "metadata": {},
   "outputs": [
    {
     "ename": "ValueError",
     "evalue": "('NN', 'RB', 'RB', 'WDT') is not in list",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mValueError\u001b[0m                                Traceback (most recent call last)",
      "\u001b[1;32m<ipython-input-74-934cb4513690>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[1;32m----> 1\u001b[1;33m \u001b[0mfeatures\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;34m\"pos_checker\"\u001b[0m\u001b[1;33m]\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mfeatures\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mapply\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;32mlambda\u001b[0m \u001b[0mx\u001b[0m\u001b[1;33m:\u001b[0m\u001b[0mcount_pos_right\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mx\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;34m\"corrected\"\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m,\u001b[0m\u001b[0maxis\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;36m1\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m",
      "\u001b[1;32mD:\\Program\\anaconda\\lib\\site-packages\\pandas\\core\\frame.py\u001b[0m in \u001b[0;36mapply\u001b[1;34m(self, func, axis, raw, result_type, args, **kwds)\u001b[0m\n\u001b[0;32m   7766\u001b[0m             \u001b[0mkwds\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mkwds\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m   7767\u001b[0m         )\n\u001b[1;32m-> 7768\u001b[1;33m         \u001b[1;32mreturn\u001b[0m \u001b[0mop\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mget_result\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m   7769\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m   7770\u001b[0m     \u001b[1;32mdef\u001b[0m \u001b[0mapplymap\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mfunc\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mna_action\u001b[0m\u001b[1;33m:\u001b[0m \u001b[0mOptional\u001b[0m\u001b[1;33m[\u001b[0m\u001b[0mstr\u001b[0m\u001b[1;33m]\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;32mNone\u001b[0m\u001b[1;33m)\u001b[0m \u001b[1;33m->\u001b[0m \u001b[0mDataFrame\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mD:\\Program\\anaconda\\lib\\site-packages\\pandas\\core\\apply.py\u001b[0m in \u001b[0;36mget_result\u001b[1;34m(self)\u001b[0m\n\u001b[0;32m    183\u001b[0m             \u001b[1;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mapply_raw\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    184\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 185\u001b[1;33m         \u001b[1;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mapply_standard\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    186\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    187\u001b[0m     \u001b[1;32mdef\u001b[0m \u001b[0mapply_empty_result\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mD:\\Program\\anaconda\\lib\\site-packages\\pandas\\core\\apply.py\u001b[0m in \u001b[0;36mapply_standard\u001b[1;34m(self)\u001b[0m\n\u001b[0;32m    274\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    275\u001b[0m     \u001b[1;32mdef\u001b[0m \u001b[0mapply_standard\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 276\u001b[1;33m         \u001b[0mresults\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mres_index\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mapply_series_generator\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    277\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    278\u001b[0m         \u001b[1;31m# wrap results\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mD:\\Program\\anaconda\\lib\\site-packages\\pandas\\core\\apply.py\u001b[0m in \u001b[0;36mapply_series_generator\u001b[1;34m(self)\u001b[0m\n\u001b[0;32m    288\u001b[0m             \u001b[1;32mfor\u001b[0m \u001b[0mi\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mv\u001b[0m \u001b[1;32min\u001b[0m \u001b[0menumerate\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mseries_gen\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    289\u001b[0m                 \u001b[1;31m# ignore SettingWithCopy here in case the user mutates\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 290\u001b[1;33m                 \u001b[0mresults\u001b[0m\u001b[1;33m[\u001b[0m\u001b[0mi\u001b[0m\u001b[1;33m]\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mf\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mv\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    291\u001b[0m                 \u001b[1;32mif\u001b[0m \u001b[0misinstance\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mresults\u001b[0m\u001b[1;33m[\u001b[0m\u001b[0mi\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mABCSeries\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    292\u001b[0m                     \u001b[1;31m# If we have a view on v, we need to make a copy because\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32m<ipython-input-74-934cb4513690>\u001b[0m in \u001b[0;36m<lambda>\u001b[1;34m(x)\u001b[0m\n\u001b[1;32m----> 1\u001b[1;33m \u001b[0mfeatures\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;34m\"pos_checker\"\u001b[0m\u001b[1;33m]\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mfeatures\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mapply\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;32mlambda\u001b[0m \u001b[0mx\u001b[0m\u001b[1;33m:\u001b[0m\u001b[0mcount_pos_right\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mx\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;34m\"corrected\"\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m,\u001b[0m\u001b[0maxis\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;36m1\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m",
      "\u001b[1;32m<ipython-input-71-229480292a67>\u001b[0m in \u001b[0;36mcount_pos_right\u001b[1;34m(essay)\u001b[0m\n\u001b[0;32m     21\u001b[0m     \u001b[1;31m#print(len(e_collocation))\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     22\u001b[0m     \u001b[1;32mfor\u001b[0m \u001b[0mcoll\u001b[0m \u001b[1;32min\u001b[0m \u001b[0me_collocation\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 23\u001b[1;33m         \u001b[1;32mif\u001b[0m \u001b[0mr_collocation\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mindex\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mcoll\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     24\u001b[0m             \u001b[0mc\u001b[0m\u001b[1;33m+=\u001b[0m\u001b[1;36m1\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     25\u001b[0m     \u001b[0mr_p\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mc\u001b[0m\u001b[1;33m/\u001b[0m\u001b[0mlen\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0me_collocation\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;31mValueError\u001b[0m: ('NN', 'RB', 'RB', 'WDT') is not in list"
     ]
    }
   ],
   "source": [
    "features[\"pos_checker\"] = features.apply(lambda x:count_pos_right(x[\"corrected\"]),axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "id": "370fd45a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>essay</th>\n",
       "      <th>score</th>\n",
       "      <th>corrected</th>\n",
       "      <th>ner_num</th>\n",
       "      <th>char_count</th>\n",
       "      <th>sent_count</th>\n",
       "      <th>avg_word_len</th>\n",
       "      <th>lemma_count</th>\n",
       "      <th>spell_err_count</th>\n",
       "      <th>...</th>\n",
       "      <th>end</th>\n",
       "      <th>captilization</th>\n",
       "      <th>wrong_signal</th>\n",
       "      <th>comma</th>\n",
       "      <th>question</th>\n",
       "      <th>exclamation</th>\n",
       "      <th>quotation</th>\n",
       "      <th>end1</th>\n",
       "      <th>end2</th>\n",
       "      <th>word_count</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>10001</td>\n",
       "      <td>Dear Jerry. I've heard about that you will giv...</td>\n",
       "      <td>19.5</td>\n",
       "      <td>Dear Jerry. I have heard about that you will g...</td>\n",
       "      <td>3</td>\n",
       "      <td>514</td>\n",
       "      <td>15</td>\n",
       "      <td>4.142857</td>\n",
       "      <td>85</td>\n",
       "      <td>3</td>\n",
       "      <td>...</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>117</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>10002</td>\n",
       "      <td>Dear Jerry I'm glad that you'll respresent you...</td>\n",
       "      <td>16.5</td>\n",
       "      <td>Dear Jerry I am glad that you will respresent ...</td>\n",
       "      <td>5</td>\n",
       "      <td>618</td>\n",
       "      <td>11</td>\n",
       "      <td>4.357664</td>\n",
       "      <td>85</td>\n",
       "      <td>5</td>\n",
       "      <td>...</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>132</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>10003</td>\n",
       "      <td>Dear Jerry, I am very happy to hear that you w...</td>\n",
       "      <td>20.5</td>\n",
       "      <td>Dear Jerry, I am very happy to hear that you w...</td>\n",
       "      <td>5</td>\n",
       "      <td>676</td>\n",
       "      <td>18</td>\n",
       "      <td>3.975610</td>\n",
       "      <td>92</td>\n",
       "      <td>2</td>\n",
       "      <td>...</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>4</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>4</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>167</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>10004</td>\n",
       "      <td>Dear Je I'm so happy to hear that you will hav...</td>\n",
       "      <td>15.5</td>\n",
       "      <td>Dear Je I am so happy to hear that you will ha...</td>\n",
       "      <td>3</td>\n",
       "      <td>555</td>\n",
       "      <td>14</td>\n",
       "      <td>4.092308</td>\n",
       "      <td>78</td>\n",
       "      <td>6</td>\n",
       "      <td>...</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>7</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>127</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>10005</td>\n",
       "      <td>Dear Jerry, I am so glad to hear that you will...</td>\n",
       "      <td>19.0</td>\n",
       "      <td>Dear Jerry, I am so glad to hear that you will...</td>\n",
       "      <td>4</td>\n",
       "      <td>657</td>\n",
       "      <td>11</td>\n",
       "      <td>4.260000</td>\n",
       "      <td>87</td>\n",
       "      <td>13</td>\n",
       "      <td>...</td>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>8</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>150</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>5 rows × 26 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "      id                                              essay  score  \\\n",
       "0  10001  Dear Jerry. I've heard about that you will giv...   19.5   \n",
       "1  10002  Dear Jerry I'm glad that you'll respresent you...   16.5   \n",
       "2  10003  Dear Jerry, I am very happy to hear that you w...   20.5   \n",
       "3  10004  Dear Je I'm so happy to hear that you will hav...   15.5   \n",
       "4  10005  Dear Jerry, I am so glad to hear that you will...   19.0   \n",
       "\n",
       "                                           corrected  ner_num  char_count  \\\n",
       "0  Dear Jerry. I have heard about that you will g...        3         514   \n",
       "1  Dear Jerry I am glad that you will respresent ...        5         618   \n",
       "2  Dear Jerry, I am very happy to hear that you w...        5         676   \n",
       "3  Dear Je I am so happy to hear that you will ha...        3         555   \n",
       "4  Dear Jerry, I am so glad to hear that you will...        4         657   \n",
       "\n",
       "   sent_count  avg_word_len  lemma_count  spell_err_count  ...  end  \\\n",
       "0          15      4.142857           85                3  ...    1   \n",
       "1          11      4.357664           85                5  ...    0   \n",
       "2          18      3.975610           92                2  ...    1   \n",
       "3          14      4.092308           78                6  ...    1   \n",
       "4          11      4.260000           87               13  ...    1   \n",
       "\n",
       "   captilization  wrong_signal  comma  question  exclamation  quotation  end1  \\\n",
       "0              3             0      5         0            0          2     0   \n",
       "1              1             1      5         0            0          3     0   \n",
       "2              3             4      5         0            4          0     0   \n",
       "3              3             0      7         0            0          3     0   \n",
       "4              3             0      8         1            0          0     0   \n",
       "\n",
       "   end2  word_count  \n",
       "0     1         117  \n",
       "1     1         132  \n",
       "2     1         167  \n",
       "3     1         127  \n",
       "4     1         150  \n",
       "\n",
       "[5 rows x 26 columns]"
      ]
     },
     "execution_count": 53,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "features.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "ad41db8b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from spacy.lang.en.stop_words import STOP_WORDS"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f6267043",
   "metadata": {},
   "source": [
    "## 探索主题个数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 333,
   "id": "ce31886e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[(0, '0.084*\"speech\" + 0.028*\"bodi\" + 0.028*\"good\" + 0.028*\"contest\" + 0.028*\"need\" + 0.028*\"languag\" + 0.021*\"use\" + 0.021*\"understand\" + 0.021*\"rememb\" + 0.021*\"m\" + 0.021*\"topic\" + 0.014*\"don\" + 0.014*\"import\" + 0.014*\"poem\" + 0.014*\"o\" + 0.014*\"great\" + 0.014*\"fa\" + 0.014*\"easili\" + 0.014*\"wowld\" + 0.014*\"interest\"')]\n"
     ]
    }
   ],
   "source": [
    "import nltk.tokenize as tk\n",
    "import nltk.corpus as nc\n",
    "import nltk.stem.snowball as sb\n",
    "import gensim.models.ldamodel as gm\n",
    "import gensim.corpora as gc\n",
    "\n",
    "#读取数据和停用词\n",
    "tokenizer = tk.WordPunctTokenizer() \n",
    "\n",
    "signs = [',', '.', '!',\"’\"]\n",
    "stopwords = STOP_WORDS\n",
    "'''\n",
    "SnowballStemmer基于Snowball 词干提取算法，该算法使用RegexpStemmer 类构建词干提取器，通过接收一个字符串，并在找到其匹配的单词时删除该单词的前缀或后缀。示例如图三。\n",
    "\n",
    "'''\n",
    "\n",
    "# stem a word 都是英文\n",
    "stemmer = sb.SnowballStemmer('english')\n",
    "\n",
    "tokenized_sentences = tokenize(example)\n",
    "\n",
    "lines_tokens = []\n",
    "for line in tokenized_sentences: # 读取每一个句子\n",
    "    # 变小写\n",
    "    line_tokens = []\n",
    "    for token in line:\n",
    "        token = token.lower()\n",
    "        # 取出停用词和标点符号\n",
    "        if token not in stopwords and token not in signs:\n",
    "            # 取出所有单词的单词原型\n",
    "            token = stemmer.stem(token)\n",
    "            #print(token)\n",
    "          # 把token加在句子列表里\n",
    "            line_tokens.append(token)\n",
    "            #print(line_tokens)\n",
    "# 再把句子列表加入文文章列表中\n",
    "    lines_tokens.append(line_tokens)\n",
    "#print(lines_tokens)\n",
    "\n",
    "# # 把lines_tokens中出现的单词都存入gc提供的词典对象，对每一个单词做编码。\n",
    "dic = gc.Dictionary(lines_tokens)\n",
    "\n",
    "# # 遍历每一行，构建词袋列表\n",
    "corpus = []\n",
    "for line_tokens in lines_tokens:\n",
    "    row = dic.doc2bow(line_tokens)\n",
    "    corpus.append(row)\n",
    "\n",
    "n_topics = 1\n",
    "# # passes：训练伦次\n",
    "model = gm.LdaModel(corpus, num_topics=n_topics, id2word=dic, passes=50)\n",
    "\n",
    "# # 输出每个类别中对类别贡献最大的4个主题词\n",
    "topics = model.print_topics(num_topics=n_topics, num_words=20)\n",
    "print(topics)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "id": "e8b04800",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Index(['id', 'essay', 'score', 'corrected', 'ner_num', 'char_count',\n",
       "       'sent_count', 'avg_word_len', 'lemma_count', 'spell_err_count',\n",
       "       'noun_count', 'adj_count', 'verb_count', 'adv_count', 'conj', 'ps_conj',\n",
       "       'end', 'captilization', 'wrong_signal', 'comma', 'question',\n",
       "       'exclamation', 'quotation', 'end1', 'end2', 'word_count'],\n",
       "      dtype='object')"
      ]
     },
     "execution_count": 54,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "features.columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "id": "45b15af7",
   "metadata": {},
   "outputs": [],
   "source": [
    "features[\"word_count\"] = features['corrected'].str.strip().str.split().str.len()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "id": "8bdfc963",
   "metadata": {},
   "outputs": [],
   "source": [
    "n = len(features['char_count'] )\n",
    "scores = features[\"score\"]\n",
    "char_count = features['char_count'].corr(scores,method=\"spearman\")\n",
    "word_count = features['word_count'].corr(scores,method=\"spearman\")\n",
    "sent_count = features['sent_count'].corr(scores,method=\"spearman\")\n",
    "avg_word_len = features['avg_word_len'].corr(scores,method=\"spearman\")\n",
    "lemma_count = features['lemma_count'].corr(scores,method=\"spearman\")\n",
    "spell_err_count = features['spell_err_count'].corr(scores,method=\"spearman\")\n",
    "noun_count =  features['noun_count'].corr(scores,method=\"spearman\")\n",
    "adj_count = features['adj_count'].corr(scores,method=\"spearman\")\n",
    "verb_count = features['verb_count'].corr(scores,method=\"spearman\")\n",
    "adv_count = features['adv_count'].corr(scores,method=\"spearman\")\n",
    "conj_count =  features[\"conj\"].corr(scores,method=\"spearman\")\n",
    "ps_conj_count = features[\"ps_conj\"].corr(scores,method=\"spearman\")\n",
    "ner_num = features['ner_num'].corr(scores,method='spearman')\n",
    "end = features['end'].corr(scores,method='spearman')\n",
    "captilization = features['captilization'].corr(scores,method='spearman')\n",
    "wrong_signal = features['wrong_signal'].corr(scores,method='spearman')\n",
    "comma = features['comma'].corr(scores,method='spearman')\n",
    "question = features['question'].corr(scores,method='spearman')\n",
    "exclamation = features['exclamation'].corr(scores,method='spearman')\n",
    "quotation = features['quotation'].corr(scores,method='spearman')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "ca47c1c8",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "('lemma_count', 0.5888752645740568)\n",
      "('char_count', 0.5611218563285321)\n",
      "('word_count', 0.5104161931474022)\n",
      "('adj_count', 0.4767567826267371)\n",
      "('noun_count', 0.4573249706695252)\n",
      "('ps_conj_count', 0.42377699701040883)\n",
      "('verb_count', 0.4093593909755708)\n",
      "('adv_count', 0.39983677073688506)\n",
      "('conj_count', 0.35533381786205975)\n",
      "('comma', 0.2664863575774475)\n",
      "('sent_count', 0.26635519193646884)\n",
      "('quotation', 0.22950912899552944)\n",
      "('avg_word_len', 0.2291145126534697)\n",
      "('wrong_signal', 0.19196122183933934)\n",
      "('exclamation', 0.18424955976920795)\n",
      "('ner_num', 0.12376759178641569)\n",
      "('spell_err_count', 0.1134584987493702)\n",
      "('question', 0.07628699241691425)\n",
      "('end', -0.027325870170952324)\n",
      "('captilization', -0.0927542127929696)\n"
     ]
    }
   ],
   "source": [
    "# 封装成字典并按照value 排序\n",
    "spearman_dic = {\n",
    "    \"char_count\":char_count\n",
    "    ,\"word_count\":word_count\n",
    "    ,\"sent_count\":sent_count\n",
    "    ,\"avg_word_len\":avg_word_len\n",
    "    ,\"lemma_count\":lemma_count\n",
    "    ,\"spell_err_count\":spell_err_count\n",
    "    ,\"noun_count\":noun_count\n",
    "    ,\"adj_count\":adj_count\n",
    "    ,\"verb_count\":verb_count\n",
    "    ,\"adv_count\":adv_count\n",
    "    ,\"conj_count\":conj_count\n",
    "    ,\"ps_conj_count\":ps_conj_count\n",
    "    ,\"ner_num\":ner_num\n",
    "    ,'end':end\n",
    "    ,'captilization':captilization\n",
    "    ,'wrong_signal':wrong_signal\n",
    "    ,'comma':comma\n",
    "    ,'exclamation':exclamation\n",
    "    ,'quotation':quotation\n",
    "    ,'question':question\n",
    "}\n",
    "a = sorted(spearman_dic.items(), key=lambda x: x[1], reverse=True)\n",
    "for item in a:\n",
    "    print(item)\n",
    "# 排序可知,词汇丰富度/作文长度 对作文评分影响最重要，其次是形容词、名词、连词与动词的使用。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "id": "fb5819a9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 保留所有变量\n",
    "input_features = features.drop([\"essay\",\"id\",\"corrected\"],axis=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b1165ef8",
   "metadata": {},
   "source": [
    "## 数据标准化"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "id": "b3bd82f3",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([4.20690997e+00, 2.71917217e+04, 2.43952192e+01, 5.01983620e-02,\n",
       "       4.50743087e+02, 1.64848398e+01, 5.93768170e+01, 1.90095060e+01,\n",
       "       7.54390844e+01, 1.46627758e+01, 3.98924752e+00, 1.78608504e+01,\n",
       "       1.17405072e-01, 8.98128245e+00, 5.28460151e+00, 1.32950805e+01,\n",
       "       2.22203371e-01, 1.01453392e+00, 1.19072137e+00, 0.00000000e+00,\n",
       "       0.00000000e+00, 1.47498754e+03])"
      ]
     },
     "execution_count": 60,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.preprocessing import StandardScaler\n",
    "X = input_features.iloc[:,1:].values\n",
    "y = input_features[\"score\"].values\n",
    "scaler = StandardScaler() #实例化\n",
    "scaler.fit(X) #fit，本质是生成均值和方差\n",
    "scaler.mean_ #查看均值的属性mean_\n",
    "scaler.var_ #查看方差的属性var_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "id": "534fb217",
   "metadata": {},
   "outputs": [],
   "source": [
    "X_new = scaler.fit_transform(X) #使用fit_transform(data)一步达成结果\n",
    "#scaler.inverse_transform(x_std) #使用inverse_transform逆转标准化"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "id": "1a6caeac",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[-0.77248045, -0.07675809,  0.57644572, ...,  0.        ,\n",
       "         0.        , -0.13586005],\n",
       "       [ 0.20261782,  0.55393051, -0.23340995, ...,  0.        ,\n",
       "         0.        ,  0.25470833],\n",
       "       [ 0.20261782,  0.9056607 ,  1.18383748, ...,  0.        ,\n",
       "         0.        ,  1.16603456],\n",
       "       ...,\n",
       "       [-1.74757872, -3.06646466, -2.25804914, ...,  0.        ,\n",
       "         0.        , -3.05210398],\n",
       "       [-1.74757872, -3.06646466, -2.25804914, ...,  0.        ,\n",
       "         0.        , -3.05210398],\n",
       "       [-1.74757872, -3.06646466, -2.25804914, ...,  0.        ,\n",
       "         0.        , -3.05210398]])"
      ]
     },
     "execution_count": 62,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X_new"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4b285daf",
   "metadata": {},
   "source": [
    "## 构建新的特征矩阵"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "id": "d83e5f2b",
   "metadata": {},
   "outputs": [],
   "source": [
    "#X = np.concatenate((X_new, vectors), axis = 1)\n",
    "X = np.concatenate((X_new, vectors_glove), axis = 1)\n",
    "y = y"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "id": "e7f4e3df",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0      0\n",
       "1      0\n",
       "2      0\n",
       "3      0\n",
       "4      0\n",
       "      ..\n",
       "116    0\n",
       "117    0\n",
       "118    0\n",
       "119    0\n",
       "120    0\n",
       "Length: 121, dtype: int64"
      ]
     },
     "execution_count": 64,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 检查是否有空值\n",
    "pd.DataFrame(X).isnull().sum()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "id": "102ddec4",
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "id": "0133a672",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "预测分数：14.668328846564158 实际分数：16.0\n",
      "预测分数：12.863868776213769 实际分数：14.5\n",
      "预测分数：10.91123019751493 实际分数：9.0\n",
      "预测分数：19.65489933286471 实际分数：21.0\n",
      "预测分数：15.423629523056333 实际分数：16.5\n",
      "预测分数：17.789893592070822 实际分数：14.5\n",
      "预测分数：15.232634170723303 实际分数：18.0\n",
      "预测分数：14.699263256059714 实际分数：16.0\n",
      "预测分数：16.57006690343607 实际分数：15.5\n",
      "预测分数：16.351112748223294 实际分数：18.5\n",
      "预测分数：13.261341684774578 实际分数：18.0\n",
      "预测分数：17.121840450385278 实际分数：16.5\n",
      "预测分数：17.04166698654973 实际分数：17.5\n",
      "预测分数：16.267927060204173 实际分数：14.0\n",
      "预测分数：13.712298722240504 实际分数：8.5\n",
      "预测分数：15.812553905033969 实际分数：17.0\n",
      "预测分数：18.777631244193827 实际分数：17.0\n",
      "预测分数：12.612646713830705 实际分数：7.0\n",
      "预测分数：14.83158247164531 实际分数：13.0\n",
      "预测分数：-0.21781665189276467 实际分数：0.0\n",
      "预测分数：16.904984878439834 实际分数：18.0\n",
      "预测分数：17.06586136691577 实际分数：15.0\n",
      "预测分数：14.922132625566977 实际分数：15.0\n",
      "预测分数：16.33388032572877 实际分数：18.5\n",
      "预测分数：16.85382075781215 实际分数：19.0\n",
      "预测分数：11.036784108575533 实际分数：11.0\n",
      "预测分数：17.3810219596316 实际分数：17.5\n",
      "预测分数：-0.21781665189276467 实际分数：0.0\n",
      "预测分数：20.38230620547923 实际分数：19.5\n",
      "预测分数：13.489263910625425 实际分数：14.0\n",
      "预测分数：15.23890331087584 实际分数：15.0\n",
      "预测分数：12.852026670203877 实际分数：13.0\n",
      "预测分数：20.443660814107126 实际分数：23.0\n",
      "预测分数：18.348716411607942 实际分数：19.0\n",
      "预测分数：14.453777414645886 实际分数：16.0\n",
      "预测分数：17.600011302872407 实际分数：16.5\n",
      "预测分数：8.510455809307558 实际分数：4.5\n",
      "预测分数：18.113771380225135 实际分数：18.5\n",
      "预测分数：16.647282118489915 实际分数：18.0\n",
      "预测分数：18.646820927676043 实际分数：18.0\n",
      "预测分数：14.343181960545786 实际分数：14.0\n",
      "预测分数：-0.0769654980491179 实际分数：0.0\n",
      "预测分数：17.135113198815098 实际分数：16.0\n",
      "预测分数：17.627711751011127 实际分数：18.5\n",
      "预测分数：13.408649282690092 实际分数：5.5\n",
      "预测分数：15.019776635820216 实际分数：17.5\n",
      "预测分数：17.064607301265234 实际分数：16.0\n",
      "预测分数：15.042117810201967 实际分数：16.5\n",
      "预测分数：15.406606735139038 实际分数：13.5\n",
      "预测分数：17.416277202967535 实际分数：20.0\n",
      "预测分数：16.429498124005455 实际分数：17.0\n",
      "预测分数：14.296532695342547 实际分数：17.0\n",
      "预测分数：13.40106501049669 实际分数：20.5\n",
      "预测分数：18.133601768672644 实际分数：19.0\n",
      "预测分数：15.226549803774294 实际分数：20.0\n",
      "预测分数：15.18808564292613 实际分数：16.5\n",
      "预测分数：19.32071611302482 实际分数：17.0\n",
      "预测分数：14.091103177988915 实际分数：17.5\n",
      "预测分数：17.776424801700045 实际分数：15.0\n",
      "预测分数：15.418137662815026 实际分数：15.0\n",
      "预测分数：10.582186213638458 实际分数：12.5\n",
      "预测分数：20.81079307876446 实际分数：20.5\n",
      "预测分数：17.61484363804638 实际分数：19.5\n",
      "预测分数：15.772150668375556 实际分数：23.0\n",
      "预测分数：-0.21781665189276467 实际分数：0.0\n",
      "预测分数：15.946116262803002 实际分数：19.5\n",
      "预测分数：14.514147735806057 实际分数：17.0\n",
      "预测分数：2.2520355323726804 实际分数：2.5\n",
      "预测分数：16.131967863851617 实际分数：15.0\n",
      "预测分数：13.835693492878296 实际分数：15.0\n",
      "预测分数：16.111528095240942 实际分数：14.0\n",
      "预测分数：14.968374656810028 实际分数：16.5\n",
      "预测分数：16.031335756862276 实际分数：16.0\n",
      "预测分数：17.64530472795501 实际分数：20.5\n",
      "预测分数：20.880198578693047 实际分数：17.0\n",
      "预测分数：13.929718863094926 实际分数：16.0\n",
      "预测分数：10.556632303088454 实际分数：13.0\n",
      "预测分数：11.695056340246897 实际分数：16.0\n",
      "预测分数：20.010541585524102 实际分数：20.0\n",
      "预测分数：16.02218381231667 实际分数：17.0\n",
      "预测分数：15.091085182510199 实际分数：12.5\n",
      "预测分数：17.680102637190174 实际分数：18.5\n",
      "预测分数：19.126128572603704 实际分数：18.0\n",
      "预测分数：12.724776061351388 实际分数：17.0\n",
      "预测分数：15.359220604457384 实际分数：14.5\n",
      "预测分数：17.29414171091112 实际分数：16.0\n",
      "预测分数：15.265838818148188 实际分数：14.0\n",
      "预测分数：7.413931697982054 实际分数：17.0\n",
      "预测分数：16.660949910441808 实际分数：16.0\n",
      "预测分数：17.704071907417315 实际分数：14.5\n",
      "预测分数：19.86437381068549 实际分数：19.0\n",
      "预测分数：17.03455738513709 实际分数：15.5\n",
      "预测分数：15.73628710584659 实际分数：15.0\n",
      "预测分数：13.740415616508365 实际分数：16.5\n",
      "预测分数：17.400391212743997 实际分数：18.5\n",
      "预测分数：16.188675640768956 实际分数：22.0\n",
      "预测分数：19.666768631535703 实际分数：20.0\n",
      "预测分数：16.696206075947025 实际分数：16.5\n",
      "预测分数：19.228641236510153 实际分数：17.5\n",
      "预测分数：-0.0769654980491179 实际分数：0.0\n",
      "预测分数：15.621983359241927 实际分数：15.0\n",
      "预测分数：12.9324867611748 实际分数：16.0\n",
      "预测分数：13.81177638941395 实际分数：13.5\n",
      "预测分数：14.195978362565144 实际分数：18.5\n",
      "预测分数：18.018801788110963 实际分数：16.5\n",
      "预测分数：11.79401382014425 实际分数：10.0\n",
      "预测分数：18.68473558733029 实际分数：15.5\n",
      "预测分数：14.967363579572979 实际分数：15.0\n",
      "预测分数：13.360576211616074 实际分数：20.0\n",
      "预测分数：17.96007724391959 实际分数：17.0\n",
      "预测分数：12.435356802238607 实际分数：15.0\n",
      "预测分数：16.45386206780627 实际分数：20.0\n",
      "预测分数：17.121244829589298 实际分数：14.0\n",
      "预测分数：17.519801118484978 实际分数：17.0\n",
      "预测分数：23.4784381690509 实际分数：19.0\n",
      "预测分数：13.23517018208659 实际分数：17.5\n",
      "预测分数：13.271894960565197 实际分数：15.0\n",
      "预测分数：16.492082599370477 实际分数：15.0\n",
      "预测分数：18.69235693720369 实际分数：15.5\n",
      "预测分数：15.91110800507389 实际分数：16.5\n",
      "预测分数：15.430000780695872 实际分数：10.0\n",
      "预测分数：12.959521163912754 实际分数：12.0\n",
      "预测分数：20.985279821859194 实际分数：19.0\n",
      "预测分数：17.7964290812072 实际分数：20.5\n",
      "预测分数：21.261614065884597 实际分数：18.5\n",
      "预测分数：17.553717922744177 实际分数：17.0\n",
      "预测分数：16.833124930141977 实际分数：20.5\n",
      "预测分数：11.453248580704917 实际分数：12.0\n",
      "预测分数：19.681058812951456 实际分数：11.0\n",
      "预测分数：7.661112831936075 实际分数：5.0\n",
      "预测分数：19.776272245164233 实际分数：14.0\n",
      "预测分数：16.80403621286773 实际分数：14.5\n",
      "预测分数：17.6766730940693 实际分数：18.5\n",
      "预测分数：20.982119319481143 实际分数：20.0\n",
      "预测分数：18.317559386392553 实际分数：16.5\n",
      "预测分数：-0.21781665189276467 实际分数：0.0\n",
      "预测分数：12.718373338798182 实际分数：12.5\n",
      "预测分数：17.632564914118415 实际分数：17.5\n",
      "预测分数：15.274270152738357 实际分数：17.0\n",
      "预测分数：18.849696200881287 实际分数：19.0\n",
      "预测分数：15.997910035851092 实际分数：17.0\n",
      "预测分数：19.698963926534255 实际分数：19.5\n",
      "预测分数：14.973754247234865 实际分数：13.5\n",
      "预测分数：16.142881765678737 实际分数：15.0\n",
      "预测分数：13.512189350843753 实际分数：16.0\n",
      "预测分数：14.816510872521624 实际分数：18.0\n",
      "预测分数：8.85234933429298 实际分数：10.0\n",
      "预测分数：15.115671536435666 实际分数：16.5\n",
      "预测分数：17.442145978833302 实际分数：20.0\n",
      "预测分数：12.383012545414015 实际分数：13.0\n",
      "预测分数：16.026628935152196 实际分数：14.0\n",
      "预测分数：14.275759684667124 实际分数：15.0\n",
      "预测分数：20.77052441889171 实际分数：21.5\n",
      "预测分数：21.256174956550854 实际分数：15.0\n",
      "预测分数：14.257747326404878 实际分数：16.0\n",
      "预测分数：15.01549573742068 实际分数：17.5\n",
      "预测分数：18.004080172462608 实际分数：15.5\n",
      "预测分数：14.589213675549246 实际分数：16.0\n",
      "预测分数：15.38147018246405 实际分数：13.0\n",
      "预测分数：15.628718798476207 实际分数：16.0\n",
      "预测分数：10.173737709004307 实际分数：12.0\n",
      "预测分数：13.053140784828607 实际分数：16.0\n",
      "预测分数：17.31780347412792 实际分数：19.5\n",
      "预测分数：14.117536698597416 实际分数：14.0\n",
      "预测分数：15.584946790388162 实际分数：15.5\n",
      "预测分数：15.318729296242438 实际分数：16.0\n",
      "预测分数：14.816479569456806 实际分数：17.5\n",
      "预测分数：-0.21781665189276467 实际分数：0.0\n",
      "预测分数：19.21370340885605 实际分数：16.5\n",
      "预测分数：13.75994636250975 实际分数：17.0\n",
      "预测分数：18.234386724821903 实际分数：20.0\n",
      "预测分数：20.848437351274494 实际分数：22.5\n",
      "预测分数：15.251732519008206 实际分数：14.0\n",
      "预测分数：19.49166873694041 实际分数：15.0\n",
      "预测分数：16.913256191081274 实际分数：20.0\n",
      "预测分数：16.8302428369929 实际分数：16.0\n",
      "预测分数：19.5859350230061 实际分数：15.5\n",
      "预测分数：16.108761668008317 实际分数：17.0\n",
      "预测分数：16.89594371517554 实际分数：17.0\n",
      "预测分数：19.574918541621695 实际分数：19.0\n",
      "预测分数：17.11769140569885 实际分数：16.5\n",
      "预测分数：15.81835368609136 实际分数：17.5\n",
      "预测分数：18.48867595656642 实际分数：16.5\n",
      "预测分数：15.06583489790441 实际分数：15.5\n",
      "预测分数：16.31590250555876 实际分数：17.0\n",
      "预测分数：17.8942779350552 实际分数：18.0\n",
      "预测分数：17.387792409062442 实际分数：14.5\n",
      "预测分数：14.567787866639097 实际分数：19.0\n",
      "预测分数：19.03781797473985 实际分数：19.0\n",
      "预测分数：19.880432268563247 实际分数：16.5\n",
      "预测分数：13.462861978363035 实际分数：15.5\n",
      "预测分数：20.569772393508632 实际分数：18.5\n",
      "预测分数：13.60536158122893 实际分数：11.0\n",
      "预测分数：19.986991831096027 实际分数：18.5\n",
      "预测分数：15.093676744925903 实际分数：15.0\n",
      "预测分数：19.76078848328029 实际分数：18.5\n",
      "预测分数：10.087567391407589 实际分数：11.0\n",
      "预测分数：11.917545743712232 实际分数：17.0\n",
      "预测分数：18.54371106143693 实际分数：17.0\n",
      "预测分数：15.89138855696432 实际分数：17.0\n",
      "预测分数：-0.21781665189276467 实际分数：0.0\n",
      "预测分数：10.637102277509316 实际分数：7.0\n",
      "预测分数：-0.0769654980491179 实际分数：0.0\n",
      "预测分数：19.19507619787193 实际分数：17.5\n",
      "预测分数：9.132481099678476 实际分数：13.0\n",
      "预测分数：15.825356026783236 实际分数：14.0\n",
      "预测分数：13.914153428659581 实际分数：14.0\n",
      "预测分数：16.59324079223614 实际分数：19.5\n",
      "预测分数：14.61117614017014 实际分数：20.0\n",
      "预测分数：16.071966915803262 实际分数：16.0\n",
      "预测分数：13.169437407327067 实际分数：11.5\n",
      "预测分数：11.523216862508612 实际分数：14.0\n",
      "预测分数：11.040815870092128 实际分数：14.0\n",
      "预测分数：20.64187728028796 实际分数：21.5\n",
      "预测分数：18.316449635711955 实际分数：17.0\n",
      "预测分数：11.033032426487466 实际分数：13.5\n",
      "预测分数：14.311257459712163 实际分数：18.5\n",
      "预测分数：17.819094301770022 实际分数：16.0\n",
      "预测分数：11.624976420976505 实际分数：11.0\n",
      "预测分数：18.180226350313795 实际分数：14.5\n",
      "预测分数：-0.21781665189276467 实际分数：0.0\n",
      "预测分数：17.193583497691378 实际分数：19.0\n",
      "预测分数：19.166511021333086 实际分数：17.0\n",
      "预测分数：17.236528860491724 实际分数：17.0\n",
      "预测分数：18.92543914985941 实际分数：21.5\n",
      "预测分数：16.873281651084994 实际分数：15.0\n",
      "预测分数：16.629368324102323 实际分数：19.0\n",
      "预测分数：8.269714371544943 实际分数：4.0\n",
      "预测分数：14.227524234204136 实际分数：17.0\n",
      "预测分数：18.1270834517021 实际分数：20.0\n",
      "预测分数：18.495250845593645 实际分数：20.5\n",
      "预测分数：16.36964891778824 实际分数：16.5\n",
      "预测分数：18.382336019777014 实际分数：15.5\n",
      "预测分数：15.390189003631091 实际分数：19.5\n",
      "预测分数：14.480372575669325 实际分数：14.5\n",
      "预测分数：18.251835064139428 实际分数：19.5\n",
      "预测分数：14.234807716981189 实际分数：13.5\n",
      "预测分数：15.258434539094054 实际分数：18.0\n",
      "预测分数：17.078706164234696 实际分数：12.5\n",
      "预测分数：13.902934500862637 实际分数：19.0\n",
      "预测分数：-1.6329514628485011 实际分数：0.0\n",
      "预测分数：12.935398871008655 实际分数：16.5\n",
      "预测分数：15.992001804701166 实际分数：17.5\n",
      "预测分数：16.081403726505716 实际分数：17.0\n",
      "预测分数：2.8641697059468676 实际分数：3.5\n",
      "预测分数：14.440983978190278 实际分数：9.0\n",
      "预测分数：16.869518142392025 实际分数：17.0\n",
      "预测分数：19.086690651011534 实际分数：14.5\n",
      "预测分数：16.402296754480055 实际分数：15.5\n",
      "预测分数：12.414777300090144 实际分数：14.0\n",
      "预测分数：16.361945108303324 实际分数：19.5\n",
      "预测分数：16.85398972196863 实际分数：17.5\n",
      "预测分数：16.286833263628463 实际分数：14.0\n",
      "预测分数：18.281758440880502 实际分数：18.0\n",
      "预测分数：18.059846709361842 实际分数：22.0\n",
      "预测分数：-0.0769654980491179 实际分数：0.0\n",
      "预测分数：14.607839856031172 实际分数：15.5\n",
      "预测分数：12.85168471429196 实际分数：13.0\n",
      "预测分数：15.344927036007302 实际分数：16.0\n",
      "预测分数：18.239190767507573 实际分数：17.5\n",
      "预测分数：16.47665987328308 实际分数：19.0\n",
      "预测分数：17.70879832202536 实际分数：18.0\n",
      "预测分数：15.888653837001865 实际分数：17.5\n",
      "预测分数：17.254747196268596 实际分数：15.5\n",
      "预测分数：12.887009236965685 实际分数：17.0\n",
      "预测分数：18.80177109867493 实际分数：19.5\n",
      "预测分数：16.55837787254353 实际分数：16.5\n",
      "预测分数：16.694464951079723 实际分数：17.0\n",
      "预测分数：10.308358027661907 实际分数：13.0\n",
      "预测分数：16.186720399528397 实际分数：18.5\n",
      "预测分数：17.778582689483976 实际分数：18.0\n",
      "预测分数：-0.21781665189276467 实际分数：0.0\n",
      "预测分数：20.087449967331313 实际分数：20.0\n",
      "预测分数：15.895654659973465 实际分数：17.0\n",
      "预测分数：17.57128183051017 实际分数：13.5\n",
      "预测分数：10.014697981110945 实际分数：10.5\n",
      "预测分数：20.828901388225255 实际分数：16.0\n",
      "预测分数：14.226229369634439 实际分数：15.5\n",
      "预测分数：17.122164725398758 实际分数：13.5\n",
      "预测分数：18.072958604601936 实际分数：18.0\n",
      "预测分数：11.75420184050299 实际分数：6.5\n",
      "预测分数：16.149424261810985 实际分数：17.0\n",
      "预测分数：18.073484078693813 实际分数：17.0\n",
      "预测分数：12.624864629480165 实际分数：16.0\n",
      "预测分数：16.69989241314765 实际分数：21.0\n",
      "预测分数：17.15312461275886 实际分数：17.0\n",
      "预测分数：-0.21781665189276467 实际分数：0.0\n",
      "预测分数：14.851137311244765 实际分数：17.5\n",
      "预测分数：14.259323905288653 实际分数：21.5\n",
      "预测分数：13.832841781347419 实际分数：19.0\n",
      "预测分数：13.737504748509753 实际分数：15.0\n",
      "预测分数：17.190460130691633 实际分数：15.5\n",
      "预测分数：18.10136487962766 实际分数：17.5\n",
      "预测分数：18.23334122923712 实际分数：19.5\n",
      "预测分数：13.365836377590806 实际分数：15.5\n",
      "预测分数：19.099385243769206 实际分数：20.5\n",
      "预测分数：13.119132367667351 实际分数：19.0\n",
      "预测分数：17.696849385566626 实际分数：20.0\n",
      "预测分数：18.710568233196945 实际分数：17.0\n",
      "预测分数：10.858691615159493 实际分数：6.0\n",
      "预测分数：15.810209531792246 实际分数：19.5\n",
      "测试集得分：\n",
      " 0.691502630263225\n",
      "LinearRegression Coefficients: \n",
      " [ 6.75043021e-02  1.38576800e+00  4.39191523e-01  3.94138848e-01\n",
      "  1.69261490e+00  1.08187619e-01 -1.46990588e-01  1.90359471e-01\n",
      " -3.79633302e-01 -4.45799950e-02 -1.72817230e-02 -9.53367813e-01\n",
      "  1.31300009e-02 -3.84620370e-01  7.66416310e-02  2.54202669e-01\n",
      "  1.80131740e-01  1.38691374e-01  6.70942708e-02 -1.69742201e+13\n",
      "  4.60583998e+12  2.61918627e-01  1.88711322e+01 -4.16447392e+00\n",
      "  9.28508989e+00  1.97189099e+01  9.62591007e-01 -2.00058647e-01\n",
      " -1.05552040e+01  3.21187298e+00  3.70017361e+00 -4.33947510e+00\n",
      "  1.90822488e+01 -9.93670174e+00  2.29918435e+01  1.36295238e+01\n",
      " -8.59183277e+00  8.40519923e+00  5.84523804e+00 -6.81351418e+00\n",
      "  2.68036621e-01  2.38652264e+01 -2.24714720e+00 -2.90986693e+00\n",
      " -1.09287803e+01 -7.85020497e+00  1.01134161e+01  4.68242593e+00\n",
      "  7.61349928e+00 -7.00798137e+00 -7.40941459e+00 -6.37066176e+00\n",
      "  9.49288675e+00 -2.08275422e-01 -1.97518064e+00 -1.06567576e+00\n",
      "  4.76825100e+00 -1.23680801e+00  5.09998333e+00 -3.47502728e+00\n",
      " -9.26902845e+00  3.16600893e+00  3.20305076e-01 -1.96086624e+01\n",
      "  6.27710382e+00  1.47491990e-01 -1.58420036e+00 -2.48135144e+00\n",
      "  1.37261568e+01 -1.72793653e+01 -1.79241243e+01  2.57897501e-01\n",
      "  1.06825557e+01 -2.14340871e-01  4.97512831e+00 -1.23497075e+01\n",
      "  4.93004629e+00 -4.50949286e+00 -8.71462961e+00  1.10730471e+00\n",
      " -2.98778008e-01 -1.02154640e+01 -7.86937245e+00  2.27331741e+00\n",
      "  1.61808486e+01  7.23806354e+00 -1.83360553e+01  2.66323532e+00\n",
      " -2.52433401e+00 -3.94772456e-02 -4.98228651e+00 -1.87004105e+00\n",
      "  3.07960657e+00 -1.28923444e+01 -6.22224113e-01 -1.35750005e+01\n",
      " -1.05007353e+01  3.30998363e+00 -1.89578432e+01  1.01425999e+01\n",
      "  6.65327255e+00 -9.89647794e+00 -5.69249592e+00  5.69496638e-02\n",
      " -3.32895929e+00 -1.23026866e+01 -6.92448637e+00 -7.09188061e+00\n",
      " -1.50815461e+00  4.50114967e+00 -8.63987950e+00 -1.04496796e-01\n",
      "  1.04520061e+01  3.20895389e+01 -1.67007827e+01  1.52539106e+00\n",
      "  1.77178940e+00  1.13226940e+01  8.25839026e+00  4.55901212e+00\n",
      "  1.13073926e+01]\n",
      "LinearRegression Mean squared error: 6.86\n",
      "R2:0.691503\n",
      "--------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "linear_regressor = LinearRegression()\n",
    "linear_regressor.fit(X_train, y_train)\n",
    "y_pred = linear_regressor.predict(X_test)\n",
    "for i in range(len(y_test)):\n",
    "    print(\"预测分数：{}\".format(y_pred[i]),\"实际分数：{}\".format(y_test[i]))\n",
    "# The coefficients 表示每一个特征所占的权重\n",
    "print(\"测试集得分：\\n\",linear_regressor.score(X_test,y_test))\n",
    "print('LinearRegression Coefficients: \\n', linear_regressor.coef_)\n",
    "# The mean squared error\n",
    "print(\"LinearRegression Mean squared error: %.2f\" % mean_squared_error(y_test, y_pred))\n",
    "# 平均绝对百分比误差（Mean Absolute Percentage Error）\n",
    "#print('LinearRegression MAPE:%.2f' % np.average(np.abs((y_test-y_pred)/y_test)))\n",
    "print(\"R2:%f\"% r2_score(y_test, y_pred))\n",
    "print('-' * 50)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "id": "27c8340b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "测试集得分：\n",
      " 0.6833515542426993\n",
      "Ridge Mean squared error: 7.04\n",
      "R2:0.683352\n",
      "              precision    recall  f1-score   support\n",
      "\n",
      "         0.0       0.07      1.00      0.13         1\n",
      "         1.0       0.00      0.00      0.00         9\n",
      "         2.0       0.00      0.00      0.00         0\n",
      "         3.0       0.00      0.00      0.00         4\n",
      "         4.0       0.00      0.00      0.00         0\n",
      "         5.0       0.00      0.00      0.00         0\n",
      "         6.0       0.00      0.00      0.00         0\n",
      "         7.0       0.00      0.00      0.00         0\n",
      "         8.0       0.00      0.00      0.00         0\n",
      "         9.0       0.00      0.00      0.00         1\n",
      "        10.0       0.00      0.00      0.00         5\n",
      "        11.0       0.00      0.00      0.00         7\n",
      "        12.0       0.38      0.25      0.30        12\n",
      "        13.0       0.25      0.15      0.19        13\n",
      "        14.0       0.12      0.14      0.13        29\n",
      "        15.0       0.20      0.07      0.11        56\n",
      "        16.0       0.19      0.25      0.22        44\n",
      "        17.0       0.23      0.15      0.18        52\n",
      "        18.0       0.16      0.21      0.18        34\n",
      "        19.0       0.12      0.18      0.15        11\n",
      "        20.0       0.09      0.33      0.14         9\n",
      "        21.0       0.00      0.00      0.00         8\n",
      "        22.0       0.14      0.20      0.17         5\n",
      "        23.0       0.00      0.00      0.00         1\n",
      "\n",
      "    accuracy                           0.15       301\n",
      "   macro avg       0.08      0.12      0.08       301\n",
      "weighted avg       0.17      0.15      0.15       301\n",
      "\n",
      "--------------------------------------------------\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "D:\\Program\\anaconda\\lib\\site-packages\\sklearn\\metrics\\_classification.py:1245: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.\n",
      "  _warn_prf(average, modifier, msg_start, len(result))\n",
      "D:\\Program\\anaconda\\lib\\site-packages\\sklearn\\metrics\\_classification.py:1245: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.\n",
      "  _warn_prf(average, modifier, msg_start, len(result))\n",
      "D:\\Program\\anaconda\\lib\\site-packages\\sklearn\\metrics\\_classification.py:1245: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.\n",
      "  _warn_prf(average, modifier, msg_start, len(result))\n",
      "D:\\Program\\anaconda\\lib\\site-packages\\sklearn\\metrics\\_classification.py:1245: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.\n",
      "  _warn_prf(average, modifier, msg_start, len(result))\n",
      "D:\\Program\\anaconda\\lib\\site-packages\\sklearn\\metrics\\_classification.py:1245: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.\n",
      "  _warn_prf(average, modifier, msg_start, len(result))\n",
      "D:\\Program\\anaconda\\lib\\site-packages\\sklearn\\metrics\\_classification.py:1245: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.\n",
      "  _warn_prf(average, modifier, msg_start, len(result))\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'\\n测试集得分：\\n 0.6781095758903712\\nRidge Mean squared error: 7.86\\nR2:0.678110\\n'"
      ]
     },
     "execution_count": 67,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.metrics import classification_report\n",
    "alpha = 200\n",
    "ridge = Ridge(alpha=alpha)# 正则项的程度\n",
    "ridge.fit(X_train, y_train)\n",
    "y_pred = ridge.predict(X_test)\n",
    "print(\"测试集得分：\\n {0}\".format(ridge.score(X_test,y_test)))\n",
    "#print('Ridge Coefficients: \\n', ridge.coef_)\n",
    "print(\"Ridge Mean squared error: %.2f\" % mean_squared_error(y_test, y_pred))\n",
    "print(\"R2:%f\"% r2_score(y_test, y_pred))\n",
    "#print('Ridge MAPE:%.2f' % np.average(np.abs((y_test-y_pred)/y_test)))\n",
    "\n",
    "print(classification_report(np.rint(y_pred), np.rint(y_test)))\n",
    "print('-' * 50)\n",
    "\n",
    "# 最佳：\n",
    "'''\n",
    "测试集得分：\n",
    " 0.6781095758903712\n",
    "Ridge Mean squared error: 7.86\n",
    "R2:0.678110\n",
    "'''"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "id": "dc5aedcd",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.7003799516330043\n",
      "Ridge(alpha=10)\n",
      "{'alpha': 10}\n",
      "测试集最佳得分：\n",
      " 0.7003799516330043\n",
      "Grid Ridge Mean squared error: 6.54\n",
      "--------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "ridge_ = Ridge()\n",
    "# 网格搜索  找到最佳的alpha值，也就是最佳的正则项系数\n",
    "param_alpha = {'alpha': range(10,60,1)}\n",
    "#传入模型和参数\n",
    "grid = GridSearchCV(estimator=ridge_, param_grid=param_alpha) # 网格搜索同时满足了fit，search 和score 的三种功能\n",
    "# 网格搜索会从参数中不断选取对应的参数进行组合，然后自动进行交叉验证，返回每一组参数下交叉验证的\n",
    "grid.fit(X_train, y_train)\n",
    "y_pred = grid.predict(X_test)\n",
    "#输出最佳结果\n",
    "print(grid.best_score_)\n",
    "print(grid.best_estimator_)\n",
    "print(grid.best_params_)\n",
    "\n",
    "print(\"测试集最佳得分：\\n\",grid.best_score_)\n",
    "print(\"Grid Ridge Mean squared error: %.2f\" % mean_squared_error(y_test, y_pred))\n",
    "#print('Grid Ridge MAPE:%.2f' % np.average(np.abs((y_test-y_pred)/y_test)))\n",
    "print('-' * 50)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "id": "85ad6c57",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.7311323401717911\n",
      "RandomForestRegressor(max_depth=50, max_features=10, n_estimators=53)\n",
      "{'max_depth': 50, 'max_features': 10, 'n_estimators': 53}\n",
      "预测分数：16.12 实际分数：16.0\n",
      "预测分数：14.13 实际分数：14.5\n",
      "预测分数：12.32 实际分数：9.0\n",
      "预测分数：17.70 实际分数：21.0\n",
      "预测分数：15.03 实际分数：16.5\n",
      "预测分数：18.80 实际分数：14.5\n",
      "预测分数：17.36 实际分数：18.0\n",
      "预测分数：15.75 实际分数：16.0\n",
      "预测分数：17.08 实际分数：15.5\n",
      "预测分数：16.29 实际分数：18.5\n",
      "预测分数：14.79 实际分数：18.0\n",
      "预测分数：18.52 实际分数：16.5\n",
      "预测分数：17.14 实际分数：17.5\n",
      "预测分数：15.69 实际分数：14.0\n",
      "预测分数：15.00 实际分数：8.5\n",
      "预测分数：17.23 实际分数：17.0\n",
      "预测分数：18.34 实际分数：17.0\n",
      "预测分数：9.72 实际分数：7.0\n",
      "预测分数：17.25 实际分数：13.0\n",
      "预测分数：0.00 实际分数：0.0\n",
      "预测分数：16.90 实际分数：18.0\n",
      "预测分数：17.15 实际分数：15.0\n",
      "预测分数：17.23 实际分数：15.0\n",
      "预测分数：18.98 实际分数：18.5\n",
      "预测分数：14.56 实际分数：19.0\n",
      "预测分数：13.99 实际分数：11.0\n",
      "预测分数：17.54 实际分数：17.5\n",
      "预测分数：0.00 实际分数：0.0\n",
      "预测分数：16.20 实际分数：19.5\n",
      "预测分数：13.89 实际分数：14.0\n",
      "预测分数：16.69 实际分数：15.0\n",
      "预测分数：15.53 实际分数：13.0\n",
      "预测分数：17.82 实际分数：23.0\n",
      "预测分数：17.58 实际分数：19.0\n",
      "预测分数：15.61 实际分数：16.0\n",
      "预测分数：15.89 实际分数：16.5\n",
      "预测分数：7.18 实际分数：4.5\n",
      "预测分数：17.50 实际分数：18.5\n",
      "预测分数：15.65 实际分数：18.0\n",
      "预测分数：17.25 实际分数：18.0\n",
      "预测分数：16.09 实际分数：14.0\n",
      "预测分数：0.00 实际分数：0.0\n",
      "预测分数：16.88 实际分数：16.0\n",
      "预测分数：17.62 实际分数：18.5\n",
      "预测分数：15.57 实际分数：5.5\n",
      "预测分数：15.53 实际分数：17.5\n",
      "预测分数：17.47 实际分数：16.0\n",
      "预测分数：14.92 实际分数：16.5\n",
      "预测分数：14.60 实际分数：13.5\n",
      "预测分数：17.95 实际分数：20.0\n",
      "预测分数：17.48 实际分数：17.0\n",
      "预测分数：16.42 实际分数：17.0\n",
      "预测分数：13.85 实际分数：20.5\n",
      "预测分数：16.24 实际分数：19.0\n",
      "预测分数：15.87 实际分数：20.0\n",
      "预测分数：16.00 实际分数：16.5\n",
      "预测分数：18.51 实际分数：17.0\n",
      "预测分数：16.20 实际分数：17.5\n",
      "预测分数：16.98 实际分数：15.0\n",
      "预测分数：16.93 实际分数：15.0\n",
      "预测分数：11.69 实际分数：12.5\n",
      "预测分数：17.36 实际分数：20.5\n",
      "预测分数：17.60 实际分数：19.5\n",
      "预测分数：15.88 实际分数：23.0\n",
      "预测分数：0.00 实际分数：0.0\n",
      "预测分数：17.12 实际分数：19.5\n",
      "预测分数：16.05 实际分数：17.0\n",
      "预测分数：7.43 实际分数：2.5\n",
      "预测分数：13.30 实际分数：15.0\n",
      "预测分数：14.10 实际分数：15.0\n",
      "预测分数：17.95 实际分数：14.0\n",
      "预测分数：16.17 实际分数：16.5\n",
      "预测分数：16.84 实际分数：16.0\n",
      "预测分数：19.36 实际分数：20.5\n",
      "预测分数：18.19 实际分数：17.0\n",
      "预测分数：15.04 实际分数：16.0\n",
      "预测分数：12.88 实际分数：13.0\n",
      "预测分数：13.34 实际分数：16.0\n",
      "预测分数：17.92 实际分数：20.0\n",
      "预测分数：17.15 实际分数：17.0\n",
      "预测分数：14.96 实际分数：12.5\n",
      "预测分数：16.81 实际分数：18.5\n",
      "预测分数：17.38 实际分数：18.0\n",
      "预测分数：14.17 实际分数：17.0\n",
      "预测分数：15.67 实际分数：14.5\n",
      "预测分数：17.94 实际分数：16.0\n",
      "预测分数：15.95 实际分数：14.0\n",
      "预测分数：11.27 实际分数：17.0\n",
      "预测分数：15.16 实际分数：16.0\n",
      "预测分数：17.78 实际分数：14.5\n",
      "预测分数：17.32 实际分数：19.0\n",
      "预测分数：16.75 实际分数：15.5\n",
      "预测分数：16.57 实际分数：15.0\n",
      "预测分数：15.49 实际分数：16.5\n",
      "预测分数：17.34 实际分数：18.5\n",
      "预测分数：16.55 实际分数：22.0\n",
      "预测分数：17.36 实际分数：20.0\n",
      "预测分数：16.59 实际分数：16.5\n",
      "预测分数：17.92 实际分数：17.5\n",
      "预测分数：0.00 实际分数：0.0\n",
      "预测分数：16.09 实际分数：15.0\n",
      "预测分数：15.78 实际分数：16.0\n",
      "预测分数：14.69 实际分数：13.5\n",
      "预测分数：17.33 实际分数：18.5\n",
      "预测分数：17.21 实际分数：16.5\n",
      "预测分数：12.36 实际分数：10.0\n",
      "预测分数：17.52 实际分数：15.5\n",
      "预测分数：14.63 实际分数：15.0\n",
      "预测分数：15.56 实际分数：20.0\n",
      "预测分数：16.51 实际分数：17.0\n",
      "预测分数：14.97 实际分数：15.0\n",
      "预测分数：17.01 实际分数：20.0\n",
      "预测分数：16.84 实际分数：14.0\n",
      "预测分数：17.15 实际分数：17.0\n",
      "预测分数：17.29 实际分数：19.0\n",
      "预测分数：13.97 实际分数：17.5\n",
      "预测分数：13.36 实际分数：15.0\n",
      "预测分数：11.50 实际分数：15.0\n",
      "预测分数：16.61 实际分数：15.5\n",
      "预测分数：15.08 实际分数：16.5\n",
      "预测分数：15.49 实际分数：10.0\n",
      "预测分数：12.40 实际分数：12.0\n",
      "预测分数：17.15 实际分数：19.0\n",
      "预测分数：18.78 实际分数：20.5\n",
      "预测分数：17.84 实际分数：18.5\n",
      "预测分数：18.05 实际分数：17.0\n",
      "预测分数：18.46 实际分数：20.5\n",
      "预测分数：13.35 实际分数：12.0\n",
      "预测分数：16.30 实际分数：11.0\n",
      "预测分数：6.11 实际分数：5.0\n",
      "预测分数：17.89 实际分数：14.0\n",
      "预测分数：17.47 实际分数：14.5\n",
      "预测分数：17.42 实际分数：18.5\n",
      "预测分数：19.23 实际分数：20.0\n",
      "预测分数：18.15 实际分数：16.5\n",
      "预测分数：0.00 实际分数：0.0\n",
      "预测分数：15.59 实际分数：12.5\n",
      "预测分数：17.25 实际分数：17.5\n",
      "预测分数：14.27 实际分数：17.0\n",
      "预测分数：18.87 实际分数：19.0\n",
      "预测分数：17.64 实际分数：17.0\n",
      "预测分数：17.18 实际分数：19.5\n",
      "预测分数：17.00 实际分数：13.5\n",
      "预测分数：15.91 实际分数：15.0\n",
      "预测分数：16.82 实际分数：16.0\n",
      "预测分数：16.92 实际分数：18.0\n",
      "预测分数：10.67 实际分数：10.0\n",
      "预测分数：15.33 实际分数：16.5\n",
      "预测分数：17.60 实际分数：20.0\n",
      "预测分数：15.49 实际分数：13.0\n",
      "预测分数：15.31 实际分数：14.0\n",
      "预测分数：15.62 实际分数：15.0\n",
      "预测分数：17.93 实际分数：21.5\n",
      "预测分数：18.56 实际分数：15.0\n",
      "预测分数：16.74 实际分数：16.0\n",
      "预测分数：16.25 实际分数：17.5\n",
      "预测分数：16.49 实际分数：15.5\n",
      "预测分数：15.68 实际分数：16.0\n",
      "预测分数：14.97 实际分数：13.0\n",
      "预测分数：15.34 实际分数：16.0\n",
      "预测分数：11.84 实际分数：12.0\n",
      "预测分数：14.72 实际分数：16.0\n",
      "预测分数：16.77 实际分数：19.5\n",
      "预测分数：16.37 实际分数：14.0\n",
      "预测分数：16.60 实际分数：15.5\n",
      "预测分数：17.34 实际分数：16.0\n",
      "预测分数：15.54 实际分数：17.5\n",
      "预测分数：0.00 实际分数：0.0\n",
      "预测分数：17.25 实际分数：16.5\n",
      "预测分数：15.91 实际分数：17.0\n",
      "预测分数：16.56 实际分数：20.0\n",
      "预测分数：17.15 实际分数：22.5\n",
      "预测分数：15.43 实际分数：14.0\n",
      "预测分数：17.84 实际分数：15.0\n",
      "预测分数：18.33 实际分数：20.0\n",
      "预测分数：17.91 实际分数：16.0\n",
      "预测分数：16.82 实际分数：15.5\n",
      "预测分数：16.24 实际分数：17.0\n",
      "预测分数：17.27 实际分数：17.0\n",
      "预测分数：18.42 实际分数：19.0\n",
      "预测分数：17.21 实际分数：16.5\n",
      "预测分数：16.40 实际分数：17.5\n",
      "预测分数：17.76 实际分数：16.5\n",
      "预测分数：14.56 实际分数：15.5\n",
      "预测分数：15.25 实际分数：17.0\n",
      "预测分数：16.40 实际分数：18.0\n",
      "预测分数：16.31 实际分数：14.5\n",
      "预测分数：15.78 实际分数：19.0\n",
      "预测分数：17.46 实际分数：19.0\n",
      "预测分数：16.84 实际分数：16.5\n",
      "预测分数：17.04 实际分数：15.5\n",
      "预测分数：17.58 实际分数：18.5\n",
      "预测分数：15.24 实际分数：11.0\n",
      "预测分数：17.67 实际分数：18.5\n",
      "预测分数：16.37 实际分数：15.0\n",
      "预测分数：18.36 实际分数：18.5\n",
      "预测分数：12.09 实际分数：11.0\n",
      "预测分数：14.30 实际分数：17.0\n",
      "预测分数：18.42 实际分数：17.0\n",
      "预测分数：16.21 实际分数：17.0\n",
      "预测分数：0.00 实际分数：0.0\n",
      "预测分数：9.05 实际分数：7.0\n",
      "预测分数：0.00 实际分数：0.0\n",
      "预测分数：18.23 实际分数：17.5\n",
      "预测分数：13.48 实际分数：13.0\n",
      "预测分数：15.22 实际分数：14.0\n",
      "预测分数：15.71 实际分数：14.0\n",
      "预测分数：16.59 实际分数：19.5\n",
      "预测分数：15.97 实际分数：20.0\n",
      "预测分数：16.98 实际分数：16.0\n",
      "预测分数：13.42 实际分数：11.5\n",
      "预测分数：13.44 实际分数：14.0\n",
      "预测分数：13.15 实际分数：14.0\n",
      "预测分数：17.56 实际分数：21.5\n",
      "预测分数：16.79 实际分数：17.0\n",
      "预测分数：11.12 实际分数：13.5\n",
      "预测分数：14.66 实际分数：18.5\n",
      "预测分数：16.71 实际分数：16.0\n",
      "预测分数：15.92 实际分数：11.0\n",
      "预测分数：16.63 实际分数：14.5\n",
      "预测分数：0.00 实际分数：0.0\n",
      "预测分数：16.62 实际分数：19.0\n",
      "预测分数：17.96 实际分数：17.0\n",
      "预测分数：15.34 实际分数：17.0\n",
      "预测分数：17.97 实际分数：21.5\n",
      "预测分数：16.95 实际分数：15.0\n",
      "预测分数：15.38 实际分数：19.0\n",
      "预测分数：7.36 实际分数：4.0\n",
      "预测分数：16.10 实际分数：17.0\n",
      "预测分数：17.66 实际分数：20.0\n",
      "预测分数：18.08 实际分数：20.5\n",
      "预测分数：17.08 实际分数：16.5\n",
      "预测分数：15.53 实际分数：15.5\n",
      "预测分数：16.08 实际分数：19.5\n",
      "预测分数：14.97 实际分数：14.5\n",
      "预测分数：17.92 实际分数：19.5\n",
      "预测分数：14.58 实际分数：13.5\n",
      "预测分数：16.53 实际分数：18.0\n",
      "预测分数：16.24 实际分数：12.5\n",
      "预测分数：16.23 实际分数：19.0\n",
      "预测分数：0.07 实际分数：0.0\n",
      "预测分数：15.10 实际分数：16.5\n",
      "预测分数：17.21 实际分数：17.5\n",
      "预测分数：16.65 实际分数：17.0\n",
      "预测分数：4.53 实际分数：3.5\n",
      "预测分数：13.45 实际分数：9.0\n",
      "预测分数：17.00 实际分数：17.0\n",
      "预测分数：17.44 实际分数：14.5\n",
      "预测分数：16.40 实际分数：15.5\n",
      "预测分数：13.58 实际分数：14.0\n",
      "预测分数：17.17 实际分数：19.5\n",
      "预测分数：17.76 实际分数：17.5\n",
      "预测分数：14.91 实际分数：14.0\n",
      "预测分数：17.23 实际分数：18.0\n",
      "预测分数：17.84 实际分数：22.0\n",
      "预测分数：0.00 实际分数：0.0\n",
      "预测分数：16.30 实际分数：15.5\n",
      "预测分数：15.13 实际分数：13.0\n",
      "预测分数：16.13 实际分数：16.0\n",
      "预测分数：17.58 实际分数：17.5\n",
      "预测分数：15.45 实际分数：19.0\n",
      "预测分数：18.46 实际分数：18.0\n",
      "预测分数：14.28 实际分数：17.5\n",
      "预测分数：15.68 实际分数：15.5\n",
      "预测分数：15.55 实际分数：17.0\n",
      "预测分数：17.75 实际分数：19.5\n",
      "预测分数：16.64 实际分数：16.5\n",
      "预测分数：16.17 实际分数：17.0\n",
      "预测分数：15.08 实际分数：13.0\n",
      "预测分数：17.39 实际分数：18.5\n",
      "预测分数：16.30 实际分数：18.0\n",
      "预测分数：0.00 实际分数：0.0\n",
      "预测分数：18.06 实际分数：20.0\n",
      "预测分数：16.61 实际分数：17.0\n",
      "预测分数：16.70 实际分数：13.5\n",
      "预测分数：9.69 实际分数：10.5\n",
      "预测分数：17.55 实际分数：16.0\n",
      "预测分数：14.97 实际分数：15.5\n",
      "预测分数：15.49 实际分数：13.5\n",
      "预测分数：18.23 实际分数：18.0\n",
      "预测分数：11.74 实际分数：6.5\n",
      "预测分数：15.72 实际分数：17.0\n",
      "预测分数：17.62 实际分数：17.0\n",
      "预测分数：13.95 实际分数：16.0\n",
      "预测分数：17.60 实际分数：21.0\n",
      "预测分数：17.05 实际分数：17.0\n",
      "预测分数：0.00 实际分数：0.0\n",
      "预测分数：17.77 实际分数：17.5\n",
      "预测分数：18.54 实际分数：21.5\n",
      "预测分数：13.91 实际分数：19.0\n",
      "预测分数：14.51 实际分数：15.0\n",
      "预测分数：17.19 实际分数：15.5\n",
      "预测分数：17.00 实际分数：17.5\n",
      "预测分数：17.60 实际分数：19.5\n",
      "预测分数：16.18 实际分数：15.5\n",
      "预测分数：17.47 实际分数：20.5\n",
      "预测分数：15.68 实际分数：19.0\n",
      "预测分数：16.85 实际分数：20.0\n",
      "预测分数：18.16 实际分数：17.0\n",
      "预测分数：5.75 实际分数：6.0\n",
      "预测分数：16.47 实际分数：19.5\n",
      "测试集最佳得分：\n",
      " 0.7311323401717911\n",
      "Grid RF Mean squared error: 5.20\n",
      "Grid RF MAPE:nan\n",
      "Cohen's kappa score: 0.13\n",
      "--------------------------------------------------\n",
      "Wall time: 3min 3s\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "<timed exec>:16: RuntimeWarning: divide by zero encountered in true_divide\n",
      "<timed exec>:16: RuntimeWarning: invalid value encountered in true_divide\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "rf = ensemble.RandomForestRegressor()\n",
    "params = {'n_estimators': range(50,70,1), 'max_depth':[10, 50, 100], 'max_features':[2, 5, 10]}\n",
    "grid = GridSearchCV(estimator=rf, param_grid=params)\n",
    "grid.fit(X_train, y_train)\n",
    "y_pred = grid.predict(X_test)\n",
    "print(grid.best_score_)\n",
    "print(grid.best_estimator_)\n",
    "print(grid.best_params_)\n",
    "\n",
    "for i in range(len(y_test)):\n",
    "    print(\"预测分数：{:.2f}\".format(y_pred[i]),\"实际分数：{}\".format(y_test[i]))\n",
    "    \n",
    "print(\"测试集最佳得分：\\n\",grid.best_score_)\n",
    "print(\"Grid RF Mean squared error: %.2f\" % mean_squared_error(y_test, y_pred))\n",
    "#print('Grid RF Variance score: %.2f' % ridge.score(X_test, y_test))\n",
    "print('Grid RF MAPE:%.2f' % np.average(np.abs((y_test-y_pred)/y_test)))\n",
    "print('Cohen\\'s kappa score: %.2f' % cohen_kappa_score(np.rint(y_pred), np.rint(y_test)))\n",
    "print('-' * 50)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 70,
   "id": "b216ffc9",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "测试集最佳得分：\n",
      " 0.7247069586512471\n",
      "Grid xgboost Mean squared error: 5.54\n",
      "Grid xgboost MAPE:inf\n",
      "Cohen's kappa score: 0.16\n",
      "--------------------------------------------------\n",
      "Wall time: 49.6 s\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "<timed exec>:8: RuntimeWarning: divide by zero encountered in true_divide\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "model = xgb.XGBRegressor()\n",
    "params = {'n_estimators': [100, 150, 160], 'learning_rate': [0.1, 0.5, 1.0]}\n",
    "grid = GridSearchCV(estimator=model, param_grid=params)\n",
    "grid.fit(X_train, y_train)\n",
    "y_pred = grid.predict(X_test)\n",
    "print(\"测试集最佳得分：\\n\",grid.best_score_)\n",
    "print(\"Grid xgboost Mean squared error: %.2f\" % mean_squared_error(y_test, y_pred))\n",
    "print('Grid xgboost MAPE:%.2f' % np.average(np.abs((y_test-y_pred)/y_test)))\n",
    "print('Cohen\\'s kappa score: %.2f' % cohen_kappa_score(np.rint(y_pred), np.rint(y_test)))\n",
    "print('-' * 50)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "36656100",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
