{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data Analysis Learning\n",
    "\n",
    "## Task 4: Classifying Papers by Category\n",
    "\n",
    "<img src=\"图片/论文种类统计.png\" >\n",
    "\n",
    "**Learning goals**: basic text-classification methods, TF-IDF, etc.\n",
    "\n",
    "In the raw arXiv data every paper carries category labels filled in by its authors. In this task we use each paper's title and abstract to:\n",
    "\n",
    "process the paper titles and abstracts;  \n",
    "process the paper categories;  \n",
    "build a text-classification model.  \n",
    "\n",
    "\n",
    "### Text-Classification Approaches\n",
    "\n",
    "Approach 1: TF-IDF + a machine-learning classifier\n",
    "Extract TF-IDF features from the text and feed them to a classifier such as SVM, logistic regression, or XGBoost.\n",
    "\n",
    "Approach 2: FastText\n",
    "FastText is the entry-level word-vector option; Facebook's FastText tool lets you build a classifier quickly.\n",
    "\n",
    "Approach 3: Word2Vec + a deep-learning classifier\n",
    "Word2Vec is the intermediate word-vector option, paired with a deep-learning classifier such as TextCNN, TextRNN, or BiLSTM.\n",
    "\n",
    "Approach 4: BERT embeddings\n",
    "BERT is the high-end option, with strong representation-learning capacity."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Import the required packages\n",
    "import seaborn as sns  # plotting\n",
    "import re  # regular expressions, for matching string patterns\n",
    "import requests  # network requests, fetching data by URL\n",
    "import json  # reading the data, which is stored as JSON\n",
    "import pandas as pd  # data processing and analysis\n",
    "import matplotlib.pyplot as plt  # plotting"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, read the data, keeping only the first 200,000 records:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "data = []  # initialize\n",
    "# Advantages of the with statement: the file handle is closed automatically,\n",
    "# even if an exception is raised while reading\n",
    "with open(\"arxiv-metadata-oai-snapshot.json\", 'r') as f: \n",
    "    for idx, line in enumerate(f): \n",
    "        # keep only the first 200,000 records\n",
    "        if idx >= 200000:\n",
    "            break\n",
    "\n",
    "        d = json.loads(line)\n",
    "        d = {'title': d['title'], 'categories': d['categories'], 'abstract': d['abstract']}\n",
    "        data.append(d)\n",
    "\n",
    "data = pd.DataFrame(data)  # convert the list to a DataFrame for analysis with pandas"
   ]
  },
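  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal self-contained sketch of the format parsed above (toy records, not the real snapshot): the file is JSON Lines, one standalone JSON object per line, which is why each line is fed to `json.loads` individually."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import io, json\n",
    "import pandas as pd\n",
    "\n",
    "# Toy stand-in for the arXiv snapshot: one JSON object per line\n",
    "raw_demo = io.StringIO(\n",
    "    '{\"title\": \"A\", \"categories\": \"math.CO cs.CG\", \"abstract\": \"...\"}\\n'\n",
    "    '{\"title\": \"B\", \"categories\": \"hep-ph\", \"abstract\": \"...\"}\\n'\n",
    ")\n",
    "rows_demo = [json.loads(line) for line in raw_demo]\n",
    "df_demo = pd.DataFrame(rows_demo)\n",
    "print(df_demo.shape)  # (2, 3)"
   ]
  },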
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, preprocess the text by concatenating each title with its abstract.\n",
    "\n",
    "The steps: replace \\n with spaces, convert uppercase letters to lowercase, and drop the now-redundant title and abstract columns."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "data['text'] = data['title'] + data['abstract']\n",
    "\n",
    "data['text'] = data['text'].apply(lambda x: x.replace('\\n',' '))\n",
    "data['text'] = data['text'].apply(lambda x: x.lower())\n",
    "data = data.drop(['abstract', 'title'], axis=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since a paper may belong to several categories, the category field also needs processing:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Multiple categories per paper: split the space-separated string into a list (subcategories kept)\n",
    "data['categories'] = data['categories'].apply(lambda x : x.split(' '))\n",
    "\n",
    "# Top-level category only: keep the part of each entry before the '.'\n",
    "data['categories_big'] = data['categories'].apply(lambda x : [xx.split('.')[0] for xx in x])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, encode the labels. Each paper can carry several categories, so a multi-label encoding is needed:\n",
    "\n",
    "we use sklearn's MultiLabelBinarizer to turn each set of labels into a binary indicator vector (one column per category, in a one-vs-all fashion)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import MultiLabelBinarizer\n",
    "mlb = MultiLabelBinarizer()\n",
    "data_label = mlb.fit_transform(data['categories_big'].iloc[:])"
   ]
  },
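  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the encoding concrete, a minimal sketch on toy label sets (not the arXiv data): MultiLabelBinarizer sorts the distinct labels and emits one 0/1 indicator column per label."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import MultiLabelBinarizer\n",
    "mlb_demo = MultiLabelBinarizer()\n",
    "y_demo = mlb_demo.fit_transform([['math', 'cs'], ['physics'], ['math']])\n",
    "print(mlb_demo.classes_)  # ['cs' 'math' 'physics']\n",
    "print(y_demo)  # one row per sample, one indicator column per class"
   ]
  },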
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Approach 1 extracts features with TF-IDF, capped at a vocabulary of 4,000 terms:\n",
    "\n",
    "TF-IDF stands for term frequency-inverse document frequency.\n",
    "\n",
    "See the official documentation: https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html\n",
    "\n",
    "Its parameters are:\n",
    "\n",
    "`(*, input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, analyzer='word', stop_words=None, token_pattern='(?u)\\b\\w\\w+\\b', ngram_range=(1, 1), max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=<class 'numpy.float64'>, norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False)`\n",
    "\n",
    "The ones relevant here:\n",
    "\n",
    "input: {'filename', 'file', 'content'}, default='content'\n",
    "\n",
    "max_features (maximum vocabulary size): int, default=None\n",
    "\n",
    "ngram_range (range of n-gram sizes to extract): tuple (min_n, max_n), default=(1, 1)\n",
    "\n",
    "Methods:\n",
    "\n",
    "|Method|Purpose|\n",
    "|--|--|\n",
    "|build_analyzer()|Return a callable that handles preprocessing, tokenization and n-grams generation.|\n",
    "|build_preprocessor()|Return a function to preprocess the text before tokenization.|\n",
    "|build_tokenizer()|Return a function that splits a string into a sequence of tokens.|\n",
    "|decode(doc)|Decode the input into a string of unicode symbols.|\n",
    "|fit(raw_documents[, y])|Learn vocabulary and idf from training set.|\n",
    "|fit_transform(raw_documents[, y])|Learn vocabulary and idf, return document-term matrix.|\n",
    "|get_feature_names()|Array mapping from feature integer indices to feature name.|\n",
    "|get_params([deep])|Get parameters for this estimator.|\n",
    "|get_stop_words()|Build or fetch the effective stop words list.|\n",
    "|inverse_transform(X)|Return terms per document with nonzero entries in X.|\n",
    "|set_params(**params)|Set the parameters of this estimator.|\n",
    "|transform(raw_documents)|Transform documents to document-term matrix.|\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.feature_extraction.text import TfidfVectorizer\n",
    "vectorizer = TfidfVectorizer(max_features=4000)\n",
    "data_tfidf = vectorizer.fit_transform(data['text'].iloc[:])"
   ]
  },
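  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A tiny self-contained illustration (toy documents) of what fit_transform returns: the learned vocabulary plus a sparse document-term matrix of TF-IDF weights."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.feature_extraction.text import TfidfVectorizer\n",
    "docs_demo = [\"graph decompositions\", \"graph theory and graph algorithms\"]\n",
    "v_demo = TfidfVectorizer()\n",
    "m_demo = v_demo.fit_transform(docs_demo)\n",
    "print(sorted(v_demo.vocabulary_))  # ['algorithms', 'and', 'decompositions', 'graph', 'theory']\n",
    "print(m_demo.shape)  # (2, 5): 2 documents x 5 terms"
   ]
  },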
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Wrap a base classifier with sklearn's multi-label tools:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Split into training and validation sets\n",
    "from sklearn.model_selection import train_test_split\n",
    "x_train, x_test, y_train, y_test = train_test_split(data_tfidf, data_label,\n",
    "                                                 test_size = 0.2,random_state = 1)\n",
    "\n",
    "# Build the multi-label classification model\n",
    "from sklearn.multioutput import MultiOutputClassifier\n",
    "from sklearn.naive_bayes import MultinomialNB\n",
    "clf = MultiOutputClassifier(MultinomialNB()).fit(x_train, y_train)"
   ]
  },
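  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Under the hood, MultiOutputClassifier fits one copy of the base estimator per label column; a minimal sketch on toy counts (made-up numbers, not the TF-IDF data):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.multioutput import MultiOutputClassifier\n",
    "from sklearn.naive_bayes import MultinomialNB\n",
    "\n",
    "X_demo = np.array([[2, 0], [1, 1], [0, 2], [3, 0]])\n",
    "Y_demo = np.array([[1, 0], [1, 1], [0, 1], [1, 0]])  # two binary labels per sample\n",
    "clf_demo = MultiOutputClassifier(MultinomialNB()).fit(X_demo, Y_demo)\n",
    "print(len(clf_demo.estimators_))  # 2: one MultinomialNB per label column\n",
    "print(clf_demo.predict(X_demo).shape)  # (4, 2)"
   ]
  },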
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Evaluate the model on the held-out split:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "              precision    recall  f1-score   support\n",
      "\n",
      "           0       0.95      0.85      0.90      7872\n",
      "           1       0.85      0.78      0.81      7329\n",
      "           2       0.77      0.72      0.74      2970\n",
      "           3       0.00      0.00      0.00         2\n",
      "           4       0.72      0.47      0.57      2149\n",
      "           5       0.51      0.67      0.58       993\n",
      "           6       0.89      0.35      0.50       538\n",
      "           7       0.71      0.68      0.70      3657\n",
      "           8       0.75      0.62      0.68      3382\n",
      "           9       0.85      0.88      0.86     10809\n",
      "          10       0.41      0.11      0.18      1796\n",
      "          11       0.80      0.04      0.07       737\n",
      "          12       0.44      0.33      0.38       540\n",
      "          13       0.52      0.34      0.41      1070\n",
      "          14       0.70      0.15      0.25      3435\n",
      "          15       0.83      0.19      0.31       687\n",
      "          16       0.88      0.18      0.30       249\n",
      "          17       0.89      0.43      0.58      2565\n",
      "          18       0.79      0.36      0.49       689\n",
      "\n",
      "   micro avg       0.81      0.65      0.72     51469\n",
      "   macro avg       0.70      0.43      0.49     51469\n",
      "weighted avg       0.80      0.65      0.69     51469\n",
      " samples avg       0.72      0.72      0.70     51469\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/aquasama/anaconda3/envs/tf2/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1245: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.\n",
      "  _warn_prf(average, modifier, msg_start, len(result))\n",
      "/home/aquasama/anaconda3/envs/tf2/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1245: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in samples with no predicted labels. Use `zero_division` parameter to control this behavior.\n",
      "  _warn_prf(average, modifier, msg_start, len(result))\n"
     ]
    }
   ],
   "source": [
    "from sklearn.metrics import classification_report\n",
    "print(classification_report(y_test, clf.predict(x_test)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Approach 3 uses a deep-learning model: the words are embedded and the network is trained end to end. First, split the dataset on the raw text:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.model_selection import train_test_split\n",
    "x_train, x_test, y_train, y_test = train_test_split(data['text'].iloc[:], data_label,\n",
    "                                                 test_size = 0.2,random_state = 1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Tokenize and integer-encode the text, then pad or truncate each sequence to a fixed length:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hyperparameters\n",
    "max_features = 500\n",
    "max_len = 150\n",
    "embed_size = 100\n",
    "batch_size = 128\n",
    "epochs = 15\n",
    "\n",
    "from keras.preprocessing.text import Tokenizer\n",
    "from keras.preprocessing import sequence\n",
    "\n",
    "tokens = Tokenizer(num_words = max_features)\n",
    "tokens.fit_on_texts(list(x_train)+list(x_test))\n",
    "\n",
    "x_sub_train = tokens.texts_to_sequences(x_train)\n",
    "x_sub_test = tokens.texts_to_sequences(x_test)\n",
    "\n",
    "x_sub_train=sequence.pad_sequences(x_sub_train, maxlen=max_len)\n",
    "x_sub_test=sequence.pad_sequences(x_sub_test, maxlen=max_len)"
   ]
  },
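  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What texts_to_sequences and pad_sequences do, mimicked in plain Python (a hypothetical three-word vocabulary; by default Keras both truncates and pads on the left):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical mini word_index, mimicking Tokenizer's 1-based frequency ranks\n",
    "word_index_demo = {\"graph\": 1, \"theory\": 2, \"sparse\": 3}\n",
    "\n",
    "def texts_to_sequences_demo(texts):\n",
    "    # out-of-vocabulary words are silently dropped, as in Keras\n",
    "    return [[word_index_demo[w] for w in t.split() if w in word_index_demo] for t in texts]\n",
    "\n",
    "def pad_sequences_demo(seqs, maxlen):\n",
    "    # 'pre' truncation keeps the last maxlen tokens; 'pre' padding left-pads with 0\n",
    "    return [[0] * (maxlen - len(s[-maxlen:])) + s[-maxlen:] for s in seqs]\n",
    "\n",
    "seqs_demo = texts_to_sequences_demo([\"graph theory\", \"sparse graph theory graph\"])\n",
    "print(pad_sequences_demo(seqs_demo, maxlen=3))  # [[0, 1, 2], [1, 2, 1]]"
   ]
  },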
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The network below (a bidirectional GRU followed by a 1-D convolution) reaches about 71% training accuracy after 15 epochs:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/15\n",
      "1251/1251 [==============================] - 546s 434ms/step - loss: 0.2263 - accuracy: 0.3251\n",
      "Epoch 2/15\n",
      "1251/1251 [==============================] - 539s 431ms/step - loss: 0.1640 - accuracy: 0.5089\n",
      "Epoch 3/15\n",
      "1251/1251 [==============================] - 527s 421ms/step - loss: 0.1392 - accuracy: 0.5928\n",
      "Epoch 4/15\n",
      "1251/1251 [==============================] - 527s 421ms/step - loss: 0.1277 - accuracy: 0.6281\n",
      "Epoch 5/15\n",
      "1251/1251 [==============================] - 527s 421ms/step - loss: 0.1212 - accuracy: 0.6501\n",
      "Epoch 6/15\n",
      "1251/1251 [==============================] - 527s 422ms/step - loss: 0.1154 - accuracy: 0.6668\n",
      "Epoch 7/15\n",
      "1251/1251 [==============================] - 533s 426ms/step - loss: 0.1110 - accuracy: 0.6825\n",
      "Epoch 8/15\n",
      "1251/1251 [==============================] - 522s 417ms/step - loss: 0.1101 - accuracy: 0.6810\n",
      "Epoch 9/15\n",
      "1251/1251 [==============================] - 520s 415ms/step - loss: 0.1087 - accuracy: 0.6892\n",
      "Epoch 10/15\n",
      "1251/1251 [==============================] - 519s 415ms/step - loss: 0.1046 - accuracy: 0.6975\n",
      "Epoch 11/15\n",
      "1251/1251 [==============================] - 520s 415ms/step - loss: 0.1030 - accuracy: 0.7016\n",
      "Epoch 12/15\n",
      "1251/1251 [==============================] - 532s 425ms/step - loss: 0.1012 - accuracy: 0.7067\n",
      "Epoch 13/15\n",
      "1251/1251 [==============================] - 520s 415ms/step - loss: 0.0997 - accuracy: 0.7085\n",
      "Epoch 14/15\n",
      "1251/1251 [==============================] - 519s 415ms/step - loss: 0.1028 - accuracy: 0.7015\n",
      "Epoch 15/15\n",
      "1251/1251 [==============================] - 519s 415ms/step - loss: 0.0984 - accuracy: 0.7114\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.keras.callbacks.History at 0x7f7d8a620850>"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# BiGRU + Conv1D model\n",
    "# Keras layers:\n",
    "from keras.layers import Dense,Input,LSTM,Bidirectional,Activation,Conv1D,GRU\n",
    "from keras.layers import Dropout,Embedding,GlobalMaxPooling1D, MaxPooling1D, Add, Flatten\n",
    "from keras.layers import GlobalAveragePooling1D, GlobalMaxPooling1D, concatenate, SpatialDropout1D\n",
    "# Keras callback functions:\n",
    "from keras.callbacks import Callback\n",
    "from keras.callbacks import EarlyStopping,ModelCheckpoint\n",
    "from keras import initializers, regularizers, constraints, optimizers, layers, callbacks\n",
    "from keras.models import Model\n",
    "from keras.optimizers import Adam\n",
    "\n",
    "sequence_input = Input(shape=(max_len, ))\n",
    "x = Embedding(max_features, embed_size, trainable = False)(sequence_input)  # note: trainable=False freezes the randomly initialized embeddings\n",
    "x = SpatialDropout1D(0.2)(x)\n",
    "x = Bidirectional(GRU(128, return_sequences=True,dropout=0.1,recurrent_dropout=0.1))(x)\n",
    "x = Conv1D(64, kernel_size = 3, padding = \"valid\", kernel_initializer = \"glorot_uniform\")(x)\n",
    "avg_pool = GlobalAveragePooling1D()(x)\n",
    "max_pool = GlobalMaxPooling1D()(x)\n",
    "x = concatenate([avg_pool, max_pool]) \n",
    "preds = Dense(19, activation=\"sigmoid\")(x)\n",
    "\n",
    "model = Model(sequence_input, preds)\n",
    "model.compile(loss='binary_crossentropy',optimizer=Adam(lr=1e-3),metrics=['accuracy'])\n",
    "model.fit(x_sub_train, y_train, batch_size=batch_size, epochs=epochs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>categories</th>\n",
       "      <th>text</th>\n",
       "      <th>categories_big</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>[hep-ph]</td>\n",
       "      <td>calculation of prompt diphoton production cros...</td>\n",
       "      <td>[hep-ph]</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>[math.CO, cs.CG]</td>\n",
       "      <td>sparsity-certifying graph decompositions  we d...</td>\n",
       "      <td>[math, cs]</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>[physics.gen-ph]</td>\n",
       "      <td>the evolution of the earth-moon system based o...</td>\n",
       "      <td>[physics]</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>[math.CO]</td>\n",
       "      <td>a determinant of stirling cycle numbers counts...</td>\n",
       "      <td>[math]</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>[math.CA, math.FA]</td>\n",
       "      <td>from dyadic $\\lambda_{\\alpha}$ to $\\lambda_{\\a...</td>\n",
       "      <td>[math, math]</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "           categories                                               text  \\\n",
       "0            [hep-ph]  calculation of prompt diphoton production cros...   \n",
       "1    [math.CO, cs.CG]  sparsity-certifying graph decompositions  we d...   \n",
       "2    [physics.gen-ph]  the evolution of the earth-moon system based o...   \n",
       "3           [math.CO]  a determinant of stirling cycle numbers counts...   \n",
       "4  [math.CA, math.FA]  from dyadic $\\lambda_{\\alpha}$ to $\\lambda_{\\a...   \n",
       "\n",
       "  categories_big  \n",
       "0       [hep-ph]  \n",
       "1     [math, cs]  \n",
       "2      [physics]  \n",
       "3         [math]  \n",
       "4   [math, math]  "
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "data.head()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
