{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "2128004d",
   "metadata": {},
   "source": [
    "## Reading and Splitting the Dataset\n",
    "\n",
    "I prepared a small dataset, labeled by me personally.\n",
    "\n",
    "The data comes from the danmaku (live chat) of 艺帝帝, a Douyu streamer who plays Game for Peace (和平精英). I labeled each message: 0 means game-related, 1 means not game-related.\n",
    "\n",
    "The data is already split into training, validation, and test sets: train.txt, dev.txt, and test.txt.\n",
    "\n",
    "Here we start by working with the data in train.txt."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "a8ba525c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Each line of train.txt is \"text<TAB>label\"; split() on whitespace yields the pair.\n",
    "data = []\n",
    "with open('data/train.txt', 'r', encoding='utf-8') as f:\n",
    "    for line in f:\n",
    "        data.append(line.strip().split())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "26c820f2",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[['摄像头太丑了我吐会儿稍等', '1'],\n",
       " ['不匹配路人上战神了吗', '0'],\n",
       " ['主播能带我吃把鸡吗我打了200场没吃过鸡', '0'],\n",
       " ['我已通关消消乐获得神秘彩蛋', '0'],\n",
       " ['学到了学到了太棒了', '1'],\n",
       " ['搞一把瞬狙可以不', '0'],\n",
       " ['我斗鱼的第一次中奖哈哈哈哈', '1'],\n",
       " ['这是物理几何雷淡定', '0'],\n",
       " ['762不用补偿用消炎', '0'],\n",
       " ['小弟带你上战神去xyy的香菜小咩咩没事干来看看哈哈哈', '0']]"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "data[:10]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "3a41de3c",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "725ab61c",
   "metadata": {},
   "outputs": [],
   "source": [
    "df = pd.read_csv('data/train.txt', sep='\\t', names=['txt', 'label'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "ad31719c",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>txt</th>\n",
       "      <th>label</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>摄像头太丑了我吐会儿稍等</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>不匹配路人上战神了吗</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>主播能带我吃把鸡吗我打了200场没吃过鸡</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>我已通关消消乐获得神秘彩蛋</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>学到了学到了太棒了</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>...</th>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3995</th>\n",
       "      <td>洋芋是土豆你特么是非洲来的吗</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3996</th>\n",
       "      <td>刚才不理现在看他厉害</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3997</th>\n",
       "      <td>口嗨王者封鈊锁爱但是就差个平板</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3998</th>\n",
       "      <td>马思纯永远的神</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3999</th>\n",
       "      <td>m4也算半个禁枪了</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>4000 rows × 2 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "                       txt  label\n",
       "0             摄像头太丑了我吐会儿稍等      1\n",
       "1               不匹配路人上战神了吗      0\n",
       "2     主播能带我吃把鸡吗我打了200场没吃过鸡      0\n",
       "3            我已通关消消乐获得神秘彩蛋      0\n",
       "4                学到了学到了太棒了      1\n",
       "...                    ...    ...\n",
       "3995        洋芋是土豆你特么是非洲来的吗      1\n",
       "3996            刚才不理现在看他厉害      1\n",
       "3997       口嗨王者封鈊锁爱但是就差个平板      1\n",
       "3998               马思纯永远的神      1\n",
       "3999             m4也算半个禁枪了      0\n",
       "\n",
       "[4000 rows x 2 columns]"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "d685180d",
   "metadata": {},
   "outputs": [],
   "source": [
    "X = df.txt.tolist()\n",
    "y = df.label.tolist()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "db256db5",
   "metadata": {},
   "source": [
    "### Splitting the Dataset\n",
    "\n",
    "We use a utility from the sklearn library to do the split."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "ed081482",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.model_selection import train_test_split"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "d0d02721",
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/5, random_state=42)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "f7b1439b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(3200, 800, 3200, 800)"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(X_train), len(X_test), len(y_train), len(y_test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "3af333ee",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['不是说了rr跟你走吗',\n",
       " 'm16伤害可没minih高',\n",
       " '我是今天直播间签到第666名给我点个赞吧',\n",
       " '我26还没女朋友',\n",
       " '因为菜啊哈哈哈哈哈哈哈',\n",
       " '弟弟下次用狗杂打好像是狗杂可以快点',\n",
       " 'm4用什么配件啊',\n",
       " '人生巅峰瞬间就没了',\n",
       " '真够惨的打扰了',\n",
       " 'r听着像小学生']"
      ]
     },
     "execution_count": 42,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X_test[:10]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f175446f",
   "metadata": {},
   "source": [
    "K-fold cross-validation is mainly used to evaluate how well a model (algorithm) works, and it leaves you with K fitted models. For our classification task, though, we only need one model.\n",
    "As a take-home project, try it yourself."
   ]
  },
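  {
   "cell_type": "markdown",
   "id": "kfold-sketch-note",
   "metadata": {},
   "source": [
    "As a minimal sketch of K-fold cross-validation: the cell below uses a hypothetical toy corpus rather than this notebook's data, and the choices of `cv=5` and a character n-gram `TfidfVectorizer` (which avoids needing jieba here) are illustrative, not tuned. In this notebook you would pass `X` and `y` instead."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "kfold-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.model_selection import cross_val_score\n",
    "from sklearn.pipeline import make_pipeline\n",
    "from sklearn.feature_extraction.text import TfidfVectorizer\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "# Hypothetical toy data for illustration only.\n",
    "docs = ['主播好菜', '这枪真强', '晚饭吃什么吖', '今天天气不错'] * 10\n",
    "labels = [0, 0, 1, 1] * 10\n",
    "\n",
    "# Character 1- and 2-grams, so this tiny demo needs no word segmentation.\n",
    "pipe = make_pipeline(TfidfVectorizer(analyzer='char', ngram_range=(1, 2)),\n",
    "                     LogisticRegression())\n",
    "\n",
    "# cv=5 fits 5 models, each validated on a different fifth of the data.\n",
    "scores = cross_val_score(pipe, docs, labels, cv=5)\n",
    "print(scores.mean(), scores.std())"
   ]
  },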
  {
   "cell_type": "markdown",
   "id": "e3d2c1c7",
   "metadata": {},
   "source": [
    "### A Text-Classification Example with sklearn\n",
    "\n",
    "Traditional machine learning mostly deals with continuous or categorical variables. Text is neither: it cannot be treated as a continuous variable, and there are far too many distinct characters and words to treat them as categories.\n",
    "\n",
    "One idea is to exploit the relationships between texts and turn them into vectors. The vectors reflect relationships within and between texts, and differences in those relationships are what we classify on. Such a vector representation of a text is called an embedding.\n",
    "\n",
    "First we use the tf-idf algorithm to turn each text into a vector. Then we try out several of the classifiers that sklearn provides.\n",
    "\n",
    "tf-idf (term frequency - inverse document frequency) represents each term by multiplying its frequency within a document by the inverse of its frequency across documents.\n",
    "\n",
    "The more often a term appears in a document, the better it represents that document, so we use the term's frequency.\n",
    "\n",
    "If a term appears in every document, it is of little use for telling documents apart, so we multiply by the inverse of its document frequency.\n",
    "\n",
    "We cannot put every word into the vector; it would be far too long and extremely sparse (full of zeros). So we set an upper and a lower bound, and any term that appears too rarely or too often is left out of the vector.\n",
    "\n",
    "We need jieba for Chinese word segmentation; English is simply split on spaces."
   ]
  },
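  {
   "cell_type": "markdown",
   "id": "tfidf-toy-note",
   "metadata": {},
   "source": [
    "To make the weighting concrete, here is a small sketch on a hypothetical three-document corpus (already space-separated, like the jieba output below): a word that appears in every document gets the smallest idf, so its tf-idf weight is pushed down relative to words unique to one document."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "tfidf-toy",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.feature_extraction.text import TfidfVectorizer\n",
    "\n",
    "# Hypothetical toy corpus; '主播' occurs in all three documents.\n",
    "toy = ['主播 厉害 厉害', '主播 拉胯', '主播 晚饭 吃什么']\n",
    "v = TfidfVectorizer()\n",
    "v.fit(toy)\n",
    "\n",
    "# With sklearn's smoothed formula, a term in every document has idf = 1.0,\n",
    "# the minimum; rarer terms get larger idf and hence larger weights.\n",
    "for word, col in sorted(v.vocabulary_.items(), key=lambda kv: kv[1]):\n",
    "    print(word, round(v.idf_[col], 3))"
   ]
  },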
  {
   "cell_type": "code",
   "execution_count": 59,
   "id": "bb67204c",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.feature_extraction.text import TfidfVectorizer\n",
    "import jieba"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "id": "3d8734ce",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5, min_df=3, ngram_range=(1,2))\n",
    "XX = [' '.join(jieba.lcut(doc)) for doc in X] # English tokenizes on spaces; without segmentation each whole message would be treated as a single token\n",
    "Xv = vectorizer.fit_transform(XX)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "id": "703c1107",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<4000x1322 sparse matrix of type '<class 'numpy.float64'>'\n",
       "\twith 10788 stored elements in Compressed Sparse Row format>"
      ]
     },
     "execution_count": 73,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "Xv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 86,
   "id": "58018210",
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train, X_test, y_train, y_test = train_test_split(Xv, y, test_size=1/5, random_state=42)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 87,
   "id": "5bc722ed",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "clf = LogisticRegression()\n",
    "clf = clf.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 88,
   "id": "356a5c0b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.824375"
      ]
     },
     "execution_count": 88,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clf.score(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 89,
   "id": "4207e914",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.7325"
      ]
     },
     "execution_count": 89,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clf.score(X_test, y_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6cfc19ff",
   "metadata": {},
   "source": [
    "Looks pretty good! But this is a binary classification task with balanced labels, so even random guessing is right 50% of the time!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 91,
   "id": "a3c53ee6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.dummy import DummyClassifier\n",
    "clf = DummyClassifier()\n",
    "clf = clf.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 92,
   "id": "39c89e4a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.5009375"
      ]
     },
     "execution_count": 92,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clf.score(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 93,
   "id": "be0df96b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.49625"
      ]
     },
     "execution_count": 93,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clf.score(X_test, y_test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 97,
   "id": "b5876de9",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Iteration 1, loss = 0.68918039\n",
      "Iteration 2, loss = 0.65801176\n",
      "Iteration 3, loss = 0.57964902\n",
      "Iteration 4, loss = 0.46749896\n",
      "Iteration 5, loss = 0.38821509\n",
      "Iteration 6, loss = 0.34154244\n",
      "Iteration 7, loss = 0.30989851\n",
      "Iteration 8, loss = 0.28643090\n",
      "Iteration 9, loss = 0.26417769\n",
      "Iteration 10, loss = 0.24639050\n",
      "Iteration 11, loss = 0.22942181\n",
      "Iteration 12, loss = 0.21797148\n",
      "Iteration 13, loss = 0.20493420\n",
      "Iteration 14, loss = 0.19849078\n",
      "Iteration 15, loss = 0.19182353\n",
      "Iteration 16, loss = 0.17908409\n",
      "Iteration 17, loss = 0.17489242\n",
      "Iteration 18, loss = 0.16849176\n",
      "Iteration 19, loss = 0.16119749\n",
      "Iteration 20, loss = 0.15661413\n",
      "Iteration 21, loss = 0.15503490\n",
      "Iteration 22, loss = 0.14977881\n",
      "Iteration 23, loss = 0.14909016\n",
      "Iteration 24, loss = 0.14598333\n",
      "Iteration 25, loss = 0.14481821\n",
      "Iteration 26, loss = 0.14304862\n",
      "Iteration 27, loss = 0.14477359\n",
      "Iteration 28, loss = 0.14196999\n",
      "Iteration 29, loss = 0.14180794\n",
      "Iteration 30, loss = 0.14058349\n",
      "Iteration 31, loss = 0.13903802\n",
      "Iteration 32, loss = 0.14189678\n",
      "Iteration 33, loss = 0.14023193\n",
      "Iteration 34, loss = 0.13765310\n",
      "Iteration 35, loss = 0.13592614\n",
      "Iteration 36, loss = 0.13705701\n",
      "Iteration 37, loss = 0.13733036\n",
      "Iteration 38, loss = 0.13769079\n",
      "Iteration 39, loss = 0.13614797\n",
      "Iteration 40, loss = 0.13779302\n",
      "Iteration 41, loss = 0.13565229\n",
      "Iteration 42, loss = 0.13566148\n",
      "Iteration 43, loss = 0.13534426\n",
      "Iteration 44, loss = 0.13325260\n",
      "Iteration 45, loss = 0.13531482\n",
      "Iteration 46, loss = 0.13596936\n",
      "Iteration 47, loss = 0.13528818\n",
      "Iteration 48, loss = 0.13565906\n",
      "Iteration 49, loss = 0.13346830\n",
      "Iteration 50, loss = 0.13307075\n",
      "Iteration 51, loss = 0.13295394\n",
      "Iteration 52, loss = 0.13405432\n",
      "Iteration 53, loss = 0.13230437\n",
      "Iteration 54, loss = 0.13344558\n",
      "Iteration 55, loss = 0.13361027\n",
      "Iteration 56, loss = 0.13373383\n",
      "Iteration 57, loss = 0.13311725\n",
      "Iteration 58, loss = 0.13449530\n",
      "Iteration 59, loss = 0.13130874\n",
      "Iteration 60, loss = 0.13125839\n",
      "Iteration 61, loss = 0.13139252\n",
      "Iteration 62, loss = 0.13183629\n",
      "Iteration 63, loss = 0.13220377\n",
      "Iteration 64, loss = 0.13136296\n",
      "Iteration 65, loss = 0.13045196\n",
      "Iteration 66, loss = 0.13132921\n",
      "Iteration 67, loss = 0.13165194\n",
      "Iteration 68, loss = 0.13190443\n",
      "Iteration 69, loss = 0.13164159\n",
      "Iteration 70, loss = 0.13058480\n",
      "Iteration 71, loss = 0.13160458\n",
      "Iteration 72, loss = 0.12997268\n",
      "Iteration 73, loss = 0.13054235\n",
      "Iteration 74, loss = 0.12947808\n",
      "Iteration 75, loss = 0.13043749\n",
      "Iteration 76, loss = 0.13209936\n",
      "Iteration 77, loss = 0.13121370\n",
      "Iteration 78, loss = 0.12965802\n",
      "Iteration 79, loss = 0.12945153\n",
      "Iteration 80, loss = 0.13081407\n",
      "Iteration 81, loss = 0.13143751\n",
      "Iteration 82, loss = 0.12955419\n",
      "Iteration 83, loss = 0.13089557\n",
      "Iteration 84, loss = 0.13211332\n",
      "Iteration 85, loss = 0.12979614\n",
      "Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.\n",
      "0.935\n",
      "0.6975\n"
     ]
    }
   ],
   "source": [
    "from sklearn.neural_network import MLPClassifier\n",
    "\n",
    "mlp = MLPClassifier(hidden_layer_sizes=(128,128,), verbose=True)\n",
    "mlp = mlp.fit(X_train, y_train)\n",
    "print(mlp.score(X_train, y_train))\n",
    "print(mlp.score(X_test, y_test))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "32467b9b",
   "metadata": {},
   "source": [
    "Unfortunately, the MLP (Multilayer Perceptron, the simplest neural network) performs even worse than logistic regression.\n",
    "\n",
    "This is because the model has overfit: 0.935 accuracy on the training set but only 0.6975 on the test set.\n",
    "\n",
    "Next, let's try the classic SVM algorithm."
   ]
  },
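  {
   "cell_type": "markdown",
   "id": "mlp-reg-note",
   "metadata": {},
   "source": [
    "Before moving on: a common remedy for this kind of overfitting is stronger L2 regularization (`alpha`) or early stopping on a held-out slice of the training data. The cell below is only a sketch; the `alpha` value is an illustrative guess, not a tuned setting."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "mlp-reg-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.neural_network import MLPClassifier\n",
    "\n",
    "# alpha: L2 penalty strength; early_stopping holds out 10% of the\n",
    "# training data and stops when the validation score stops improving.\n",
    "mlp_reg = MLPClassifier(hidden_layer_sizes=(128, 128),\n",
    "                        alpha=1e-2, early_stopping=True)\n",
    "mlp_reg = mlp_reg.fit(X_train, y_train)\n",
    "print(mlp_reg.score(X_train, y_train))\n",
    "print(mlp_reg.score(X_test, y_test))"
   ]
  },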
  {
   "cell_type": "code",
   "execution_count": 107,
   "id": "52cfb794",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.8978125\n",
      "0.72875\n"
     ]
    }
   ],
   "source": [
    "from sklearn.svm import SVC\n",
    "\n",
    "svc = SVC()\n",
    "svc = svc.fit(X_train, y_train)\n",
    "print(svc.score(X_train, y_train))\n",
    "print(svc.score(X_test, y_test))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5af2afa2",
   "metadata": {},
   "source": [
    "### Exercise\n",
    "\n",
    "sklearn offers many basic machine-learning algorithms and convenient data-processing utilities, which makes it well suited to beginners and to quick exploratory work.\n",
    "\n",
    "sklearn also provides many other classifiers, each with its own parameters. Try some of them and see whether they perform better."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f49b6c1",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.naive_bayes import BernoulliNB  # Naive Bayes classifier\n",
    "\n",
    "nb = BernoulliNB().fit(X_train, y_train)\n",
    "print(nb.score(X_train, y_train), nb.score(X_test, y_test))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cd8233d4",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "27eaa77f",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e75609ab",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "47f951f1",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
