{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 要完成的任务：\n",
    "\n",
    "- 1.数据预处理（1.8G数据中大部分都是没用的，需要剔除掉无关项）\n",
    "- 2.文本清洗（文本数据直接用恐怕不行，停用词，文本筛选，正则等操作都得做起来）\n",
    "- 3.矩阵分解（SVD与NMF,到底哪个好，还得试一试，其他任务中可能SVD效果好一些，这个项目中恰好就NMF强一些）\n",
    "- 4.LDA主题模型（无监督神器，文本分析任务中经常会用到，由于不涉及标签，用途比较广泛）\n",
    "- 5.构建推荐引擎（其实就是相似度计算，得出推荐结果）"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 涉及到的工具包\n",
    "- numpy，pandas这些就不用说啦，必备的！\n",
    "- gensim：这个可以说是文本处理与建模神器，预处理方法与LDA模型等都可以在这里直接调用\n",
    "- sklearn：NMF与SVD直接可以调用，机器学习中用的最多的包"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "import re\n",
    "import string\n",
    "\n",
    "from sklearn.decomposition import NMF\n",
    "from sklearn.feature_extraction.text import TfidfVectorizer\n",
    "from sklearn.decomposition import TruncatedSVD\n",
    "\n",
    "import gensim\n",
    "from gensim.parsing.preprocessing import STOPWORDS\n",
    "from gensim import corpora, models\n",
    "from gensim.utils import simple_preprocess\n",
    "\n",
    "from nltk.stem.porter import PorterStemmer\n",
    "import warnings \n",
    "warnings.filterwarnings(\"ignore\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 导入数据集\n",
    "- 如果同学们笔记本执行速度较慢的话，可以选择只读取一部分数据，加上参数：nrows = 1000；"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>audioVersionDurationSec</th>\n",
       "      <th>codeBlock</th>\n",
       "      <th>codeBlockCount</th>\n",
       "      <th>collectionId</th>\n",
       "      <th>createdDate</th>\n",
       "      <th>createdDatetime</th>\n",
       "      <th>firstPublishedDate</th>\n",
       "      <th>firstPublishedDatetime</th>\n",
       "      <th>imageCount</th>\n",
       "      <th>isSubscriptionLocked</th>\n",
       "      <th>...</th>\n",
       "      <th>slug</th>\n",
       "      <th>name</th>\n",
       "      <th>postCount</th>\n",
       "      <th>author</th>\n",
       "      <th>bio</th>\n",
       "      <th>userId</th>\n",
       "      <th>userName</th>\n",
       "      <th>usersFollowedByCount</th>\n",
       "      <th>usersFollowedCount</th>\n",
       "      <th>scrappedDate</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>0.0</td>\n",
       "      <td>638f418c8464</td>\n",
       "      <td>2018-09-18</td>\n",
       "      <td>2018-09-18 20:55:34</td>\n",
       "      <td>2018-09-18</td>\n",
       "      <td>2018-09-18 20:57:03</td>\n",
       "      <td>1</td>\n",
       "      <td>False</td>\n",
       "      <td>...</td>\n",
       "      <td>blockchain</td>\n",
       "      <td>Blockchain</td>\n",
       "      <td>265164.0</td>\n",
       "      <td>Anar Babaev</td>\n",
       "      <td>NaN</td>\n",
       "      <td>f1ad85af0169</td>\n",
       "      <td>babaevanar</td>\n",
       "      <td>450.0</td>\n",
       "      <td>404.0</td>\n",
       "      <td>20181104</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>0.0</td>\n",
       "      <td>638f418c8464</td>\n",
       "      <td>2018-09-18</td>\n",
       "      <td>2018-09-18 20:55:34</td>\n",
       "      <td>2018-09-18</td>\n",
       "      <td>2018-09-18 20:57:03</td>\n",
       "      <td>1</td>\n",
       "      <td>False</td>\n",
       "      <td>...</td>\n",
       "      <td>samsung</td>\n",
       "      <td>Samsung</td>\n",
       "      <td>5708.0</td>\n",
       "      <td>Anar Babaev</td>\n",
       "      <td>NaN</td>\n",
       "      <td>f1ad85af0169</td>\n",
       "      <td>babaevanar</td>\n",
       "      <td>450.0</td>\n",
       "      <td>404.0</td>\n",
       "      <td>20181104</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>0.0</td>\n",
       "      <td>638f418c8464</td>\n",
       "      <td>2018-09-18</td>\n",
       "      <td>2018-09-18 20:55:34</td>\n",
       "      <td>2018-09-18</td>\n",
       "      <td>2018-09-18 20:57:03</td>\n",
       "      <td>1</td>\n",
       "      <td>False</td>\n",
       "      <td>...</td>\n",
       "      <td>it</td>\n",
       "      <td>It</td>\n",
       "      <td>3720.0</td>\n",
       "      <td>Anar Babaev</td>\n",
       "      <td>NaN</td>\n",
       "      <td>f1ad85af0169</td>\n",
       "      <td>babaevanar</td>\n",
       "      <td>450.0</td>\n",
       "      <td>404.0</td>\n",
       "      <td>20181104</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>0.0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>2018-01-07</td>\n",
       "      <td>2018-01-07 17:04:37</td>\n",
       "      <td>2018-01-07</td>\n",
       "      <td>2018-01-07 17:06:29</td>\n",
       "      <td>13</td>\n",
       "      <td>False</td>\n",
       "      <td>...</td>\n",
       "      <td>technology</td>\n",
       "      <td>Technology</td>\n",
       "      <td>166125.0</td>\n",
       "      <td>George Sykes</td>\n",
       "      <td>NaN</td>\n",
       "      <td>93b9e94f08ca</td>\n",
       "      <td>tasty231</td>\n",
       "      <td>6.0</td>\n",
       "      <td>22.0</td>\n",
       "      <td>20181104</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>0.0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>2018-01-07</td>\n",
       "      <td>2018-01-07 17:04:37</td>\n",
       "      <td>2018-01-07</td>\n",
       "      <td>2018-01-07 17:06:29</td>\n",
       "      <td>13</td>\n",
       "      <td>False</td>\n",
       "      <td>...</td>\n",
       "      <td>robotics</td>\n",
       "      <td>Robotics</td>\n",
       "      <td>9103.0</td>\n",
       "      <td>George Sykes</td>\n",
       "      <td>NaN</td>\n",
       "      <td>93b9e94f08ca</td>\n",
       "      <td>tasty231</td>\n",
       "      <td>6.0</td>\n",
       "      <td>22.0</td>\n",
       "      <td>20181104</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>5 rows × 50 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "   audioVersionDurationSec codeBlock  codeBlockCount  collectionId  \\\n",
       "0                        0       NaN             0.0  638f418c8464   \n",
       "1                        0       NaN             0.0  638f418c8464   \n",
       "2                        0       NaN             0.0  638f418c8464   \n",
       "3                        0       NaN             0.0           NaN   \n",
       "4                        0       NaN             0.0           NaN   \n",
       "\n",
       "  createdDate      createdDatetime firstPublishedDate firstPublishedDatetime  \\\n",
       "0  2018-09-18  2018-09-18 20:55:34         2018-09-18    2018-09-18 20:57:03   \n",
       "1  2018-09-18  2018-09-18 20:55:34         2018-09-18    2018-09-18 20:57:03   \n",
       "2  2018-09-18  2018-09-18 20:55:34         2018-09-18    2018-09-18 20:57:03   \n",
       "3  2018-01-07  2018-01-07 17:04:37         2018-01-07    2018-01-07 17:06:29   \n",
       "4  2018-01-07  2018-01-07 17:04:37         2018-01-07    2018-01-07 17:06:29   \n",
       "\n",
       "   imageCount  isSubscriptionLocked     ...             slug        name  \\\n",
       "0           1                 False     ...       blockchain  Blockchain   \n",
       "1           1                 False     ...          samsung     Samsung   \n",
       "2           1                 False     ...               it          It   \n",
       "3          13                 False     ...       technology  Technology   \n",
       "4          13                 False     ...         robotics    Robotics   \n",
       "\n",
       "  postCount        author  bio        userId    userName  \\\n",
       "0  265164.0   Anar Babaev  NaN  f1ad85af0169  babaevanar   \n",
       "1    5708.0   Anar Babaev  NaN  f1ad85af0169  babaevanar   \n",
       "2    3720.0   Anar Babaev  NaN  f1ad85af0169  babaevanar   \n",
       "3  166125.0  George Sykes  NaN  93b9e94f08ca    tasty231   \n",
       "4    9103.0  George Sykes  NaN  93b9e94f08ca    tasty231   \n",
       "\n",
       "   usersFollowedByCount  usersFollowedCount scrappedDate  \n",
       "0                 450.0               404.0     20181104  \n",
       "1                 450.0               404.0     20181104  \n",
       "2                 450.0               404.0     20181104  \n",
       "3                   6.0                22.0     20181104  \n",
       "4                   6.0                22.0     20181104  \n",
       "\n",
       "[5 rows x 50 columns]"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "medium = pd.read_csv('Medium_AggregatedData.csv',nrows = 1000)\n",
    "medium.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 预处理除了固定的套路，还得根据数据自己来设计一些规则\n",
    "- 大部分文本数据都是英文的，还有少量其他的，只保留英文数据\n",
    "- 推荐的文章也得差不多一点，点赞数量少的，暂时去除掉"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "medium = medium[medium['language'] == 'en']         \n",
    "medium = medium[medium['totalClapCount'] >= 25]     "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "整理文章对应标签"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def findTags(title):\n",
    "    rows = medium[medium['title'] == title]\n",
    "    #print(len(rows))\n",
    "    tags = list(rows['tag_name'].values)\n",
    "    return tags"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [],
   "source": [
    "titles = medium['title'].unique()                   # 所有文章名字\n",
    "\n",
    "tag_dict = {'title': [], 'tags': []}               # 文章对应标签\n",
    "\n",
    "for title in titles:\n",
    "    tag_dict['title'].append(title)\n",
    "    tag_dict['tags'].append(findTags(title))\n",
    "\n",
    "tag_df = pd.DataFrame(tag_dict)                     # 转换成DF\n",
    "\n",
    "# 去重\n",
    "medium = medium.drop_duplicates(subset = 'title', keep = 'first')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "添加标签到DF中"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def addTags(title):\n",
    "    try:\n",
    "        tags = list(tag_df[tag_df['title'] == title]['tags'])[0]\n",
    "    except:\n",
    "        # If there's an error assume no tags\n",
    "        tags = np.NaN\n",
    "    return tags"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(79, 6)\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>title</th>\n",
       "      <th>url</th>\n",
       "      <th>allTags</th>\n",
       "      <th>readingTime</th>\n",
       "      <th>author</th>\n",
       "      <th>text</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>Private Business, Government and Blockchain</td>\n",
       "      <td>https://medium.com/s/story/private-business-go...</td>\n",
       "      <td>[Blockchain]</td>\n",
       "      <td>0.958491</td>\n",
       "      <td>Anar Babaev</td>\n",
       "      <td>Private Business, Government and Blockchain\\n\\...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>Can a robot love us better than another human ...</td>\n",
       "      <td>https://medium.com/s/story/can-a-robot-love-us...</td>\n",
       "      <td>[Robotics]</td>\n",
       "      <td>0.652830</td>\n",
       "      <td>Stewart Alsop</td>\n",
       "      <td>Can a robot love us better than another human ...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>2017 Big Data, AI and IOT Use Cases</td>\n",
       "      <td>https://medium.com/s/story/2017-big-data-ai-an...</td>\n",
       "      <td>[Artificial Intelligence]</td>\n",
       "      <td>7.055031</td>\n",
       "      <td>Melody Ucros</td>\n",
       "      <td>2017 Big Data, AI and IOT Use Cases\\nAn Active...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>The Meta Model and Meta Meta-Model of Deep Lea...</td>\n",
       "      <td>https://medium.com/s/story/the-meta-model-and-...</td>\n",
       "      <td>[Machine Learning]</td>\n",
       "      <td>5.684906</td>\n",
       "      <td>Carlos E. Perez</td>\n",
       "      <td>The Meta Model and Meta Meta-Model of Deep Lea...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>Don’t trust “Do you trust this computer”</td>\n",
       "      <td>https://medium.com/s/story/dont-trust-do-you-t...</td>\n",
       "      <td>[Artificial Intelligence]</td>\n",
       "      <td>2.739623</td>\n",
       "      <td>Virginia Dignum</td>\n",
       "      <td>Don’t trust “Do you trust this computer”\\nfrom...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                               title  \\\n",
       "0        Private Business, Government and Blockchain   \n",
       "1  Can a robot love us better than another human ...   \n",
       "2                2017 Big Data, AI and IOT Use Cases   \n",
       "3  The Meta Model and Meta Meta-Model of Deep Lea...   \n",
       "4           Don’t trust “Do you trust this computer”   \n",
       "\n",
       "                                                 url  \\\n",
       "0  https://medium.com/s/story/private-business-go...   \n",
       "1  https://medium.com/s/story/can-a-robot-love-us...   \n",
       "2  https://medium.com/s/story/2017-big-data-ai-an...   \n",
       "3  https://medium.com/s/story/the-meta-model-and-...   \n",
       "4  https://medium.com/s/story/dont-trust-do-you-t...   \n",
       "\n",
       "                     allTags  readingTime           author  \\\n",
       "0               [Blockchain]     0.958491      Anar Babaev   \n",
       "1                 [Robotics]     0.652830    Stewart Alsop   \n",
       "2  [Artificial Intelligence]     7.055031     Melody Ucros   \n",
       "3         [Machine Learning]     5.684906  Carlos E. Perez   \n",
       "4  [Artificial Intelligence]     2.739623  Virginia Dignum   \n",
       "\n",
       "                                                text  \n",
       "0  Private Business, Government and Blockchain\\n\\...  \n",
       "1  Can a robot love us better than another human ...  \n",
       "2  2017 Big Data, AI and IOT Use Cases\\nAn Active...  \n",
       "3  The Meta Model and Meta Meta-Model of Deep Lea...  \n",
       "4  Don’t trust “Do you trust this computer”\\nfrom...  "
      ]
     },
     "execution_count": 39,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 将标签加入到原始DF中\n",
    "medium['allTags'] = medium['title'].apply(addTags)\n",
    "\n",
    "# 只保留需要的列\n",
    "keep_cols = ['title', 'url', 'allTags', 'readingTime', 'author', 'text']\n",
    "medium = medium[keep_cols]\n",
    "\n",
    "# 标题为空的不要了\n",
    "null_title = medium[medium['title'].isna()].index\n",
    "medium.drop(index = null_title, inplace = True)\n",
    "\n",
    "medium.reset_index(drop = True, inplace = True)\n",
    "\n",
    "print(medium.shape)\n",
    "medium.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 文本清洗（正则表达式）"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def clean_text(text):  \n",
    "    # 去掉http开头那些链接\n",
    "    text = re.sub('(?:(?:https?|ftp):\\/\\/)?[\\w/\\-?=%.]+\\.[\\w/\\-?=%.]+','', text)\n",
    "    # 去掉特殊字符之类的\n",
    "    text = re.sub('\\w*\\d\\w*', ' ', text)\n",
    "    # 去掉标点符号等，将所有字符转换成小写的\n",
    "    text = re.sub('[%s]' % re.escape(string.punctuation), ' ', text.lower())\n",
    "    # 去掉换行符\n",
    "    text = text.replace('\\n', ' ')\n",
    "    return text\n",
    "\n",
    "medium['text'] = medium['text'].apply(clean_text)"
   ]
  },
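  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick illustration of what `clean_text` does, on a made-up snippet (not from the dataset):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical sample: the URL, the digits and the punctuation should all disappear\n",
    "sample = 'Check out https://example.com for 100 ways to use AI!\\nReally?'\n",
    "print(clean_text(sample))"
   ]
  },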
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 去停用词\n",
    "- 一般都是用现成的停用词典，但是现成的往往难以满足自己的任务需求，还需要额外补充\n",
    "- 可以自己添加，一个词一个词的加入，也可以基于统计方法来计算，比如词频最高的前100个词等"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# 自己添加一部分停用词\n",
    "stop_list = STOPWORDS.union(set(['data', 'ai', 'learning', 'time', 'machine', 'like', 'use', 'new', 'intelligence', 'need', \"it's\", 'way',\n",
    "                                 'artificial', 'based', 'want', 'know', 'learn', \"don't\", 'things', 'lot', \"let's\", 'model', 'input',\n",
    "                                 'output', 'train', 'training', 'trained', 'it', 'we', 'don', 'you', 'ce', 'hasn', 'sa', 'do', 'som',\n",
    "                                 'can']))\n",
    "\n",
    "# 去停用词\n",
    "def remove_stopwords(text):\n",
    "    clean_text = []\n",
    "    for word in text.split(' '):\n",
    "        if word not in stop_list and (len(word) > 2):\n",
    "            clean_text.append(word)\n",
    "    return ' '.join(clean_text)\n",
    "\n",
    "medium['text'] = medium['text'].apply(remove_stopwords)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 词干提取\n",
    "- 英文数据也有事多的时候，统一成标准的词"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "stemmer = PorterStemmer()\n",
    "\n",
    "def stem_text(text):\n",
    "    word_list = []\n",
    "    for word in text.split(' '):\n",
    "        word_list.append(stemmer.stem(word))\n",
    "    return ' '.join(word_list)\n",
    "\n",
    "medium['text'] = medium['text'].apply(stem_text)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 预处理通常花的时间比较多，把结果保存下来"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "medium.to_csv('pre-processed.csv')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# medium = pd.read_csv('pre-processed.csv')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### TFIDF处理\n",
    "- 通常都会讲一个蜜蜂养殖的故事。。。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [],
   "source": [
    "vectorizer = TfidfVectorizer(stop_words = stop_list, ngram_range = (1,1))\n",
    "doc_word = vectorizer.fit_transform(medium['text'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(79, 5588)"
      ]
     },
     "execution_count": 47,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "doc_word.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### SVD矩阵分解\n",
    "- 函数使用说明：https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html#sklearn.decomposition.TruncatedSVD\n",
    "- 需要指定参数，这里的8相当于你觉得这些文档可能属于多少个主题"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# 其实跟PCA蛮像的\n",
    "svd = TruncatedSVD(8)\n",
    "docs_svd = svd.fit_transform(doc_word)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(79, 8)"
      ]
     },
     "execution_count": 51,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "docs_svd.shape"
   ]
  },
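  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an optional sanity check on the choice of 8 components (not part of the original flow): `TruncatedSVD` exposes `explained_variance_ratio_`, which tells us how much of the TF-IDF variance the components capture. Low values are normal for sparse text data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Fraction of the total variance captured by the 8 SVD components\n",
    "print(svd.explained_variance_ratio_.sum())"
   ]
  },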
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Function to Display Topics"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Topic  1\n",
      "human, network, imag, technolog, work, user, algorithm, predict, peopl, compani, product, busi, deep, custom, develop\n",
      "\n",
      "Topic  2\n",
      "imag, layer, network, neural, function, dataset, featur, weight, convolut, vector, valu, gradient, deep, predict, paramet\n",
      "\n",
      "Topic  3\n",
      "chatbot, bot, user, custom, convers, app, messag, messeng, chat, servic, text, word, voic, assist, interact\n",
      "\n",
      "Topic  4\n",
      "imag, network, layer, neural, human, convolut, deep, chatbot, robot, neuron, technolog, cnn, brain, gan, architectur\n",
      "\n",
      "Topic  5\n",
      "imag, blockchain, tensorflow, file, python, project, api, cloud, instal, token, platform, app, code, team, notebook\n",
      "\n",
      "Topic  6\n",
      "blockchain, market, valu, token, custom, layer, network, function, predict, busi, gradient, price, compani, platform, trade\n",
      "\n",
      "Topic  7\n",
      "scienc, network, neural, deep, chatbot, cours, scientist, layer, python, neuron, gradient, program, function, skill, weight\n",
      "\n",
      "Topic  8\n",
      "word, vector, text, blockchain, token, sentenc, languag, embed, document, nlp, sentiment, network, sequenc, rnn, matrix\n"
     ]
    }
   ],
   "source": [
    "def display_topics(model, feature_names, no_top_words, no_top_topics, topic_names=None):\n",
    "    count = 0\n",
    "    for ix, topic in enumerate(model.components_):\n",
    "        if count == no_top_topics:\n",
    "            break\n",
    "        if not topic_names or not topic_names[ix]:\n",
    "            print(\"\\nTopic \", (ix + 1))\n",
    "        else:\n",
    "            print(\"\\nTopic: '\",topic_names[ix],\"'\")\n",
    "        print(\", \".join([feature_names[i]\n",
    "                        for i in topic.argsort()[:-no_top_words - 1:-1]]))\n",
    "        count += 1\n",
    "\n",
    "display_topics(svd, vectorizer.get_feature_names(), 15, 8)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Try NMF"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Topic  1\n",
      "human, robot, technolog, peopl, machin, world, think, futur, brain, job, car, autom, design, game, live\n",
      "\n",
      "Topic  2\n",
      "valu, predict, variabl, featur, regress, function, algorithm, linear, set, test, dataset, paramet, gradient, tree, distribut\n",
      "\n",
      "Topic  3\n",
      "chatbot, bot, custom, user, convers, messag, chat, servic, messeng, busi, assist, app, interact, voic, answer\n",
      "\n",
      "Topic  4\n",
      "network, layer, imag, neural, deep, convolut, neuron, weight, cnn, function, architectur, loss, gener, gan, gradient\n",
      "\n",
      "Topic  5\n",
      "file, tensorflow, imag, python, code, instal, api, run, notebook, googl, librari, creat, app, dataset, gpu\n",
      "\n",
      "Topic  6\n",
      "blockchain, market, technolog, compani, busi, custom, product, platform, servic, token, develop, industri, user, invest, team\n",
      "\n",
      "Topic  7\n",
      "scienc, scientist, cours, work, skill, team, job, peopl, project, engin, busi, analyt, program, deep, start\n",
      "\n",
      "Topic  8\n",
      "word, vector, text, sentenc, embed, languag, document, sentiment, nlp, corpu, sequenc, token, context, topic, matrix\n"
     ]
    }
   ],
   "source": [
    "nmf = NMF(8)\n",
    "docs_nmf = nmf.fit_transform(doc_word)\n",
    "\n",
    "display_topics(nmf, vectorizer.get_feature_names(), 15, 8)"
   ]
  },
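  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Step 5 of the task list, the recommendation engine, boils down to a similarity computation. A minimal sketch on top of the NMF document-topic matrix (the helper `recommend` is illustrative, not part of the original pipeline):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics.pairwise import cosine_similarity\n",
    "\n",
    "# Pairwise cosine similarity between all documents in topic space\n",
    "sims = cosine_similarity(docs_nmf)\n",
    "\n",
    "def recommend(idx, n = 5):\n",
    "    # The n most similar articles to document idx, skipping the document itself\n",
    "    best = np.argsort(sims[idx])[::-1][1:n + 1]\n",
    "    return medium['title'].iloc[best]\n",
    "\n",
    "recommend(0)"
   ]
  },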
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Try LDA"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[(0,\n",
       "  '0.018*\"user\" + 0.013*\"custom\" + 0.011*\"product\" + 0.008*\"chatbot\" + 0.008*\"busi\" + 0.007*\"servic\" + 0.007*\"bot\" + 0.006*\"app\" + 0.005*\"compani\" + 0.005*\"experi\"'),\n",
       " (1,\n",
       "  '0.010*\"technolog\" + 0.008*\"compani\" + 0.008*\"blockchain\" + 0.006*\"team\" + 0.006*\"project\" + 0.005*\"industri\" + 0.005*\"scienc\" + 0.005*\"research\" + 0.005*\"commun\" + 0.005*\"platform\"'),\n",
       " (2,\n",
       "  '0.011*\"car\" + 0.008*\"drive\" + 0.007*\"patient\" + 0.007*\"vehicl\" + 0.005*\"health\" + 0.005*\"medic\" + 0.005*\"autonom\" + 0.005*\"visual\" + 0.005*\"map\" + 0.005*\"inform\"'),\n",
       " (3,\n",
       "  '0.017*\"network\" + 0.014*\"word\" + 0.011*\"neural\" + 0.010*\"featur\" + 0.009*\"layer\" + 0.007*\"vector\" + 0.007*\"model\" + 0.007*\"deep\" + 0.006*\"function\" + 0.006*\"algorithm\"'),\n",
       " (4,\n",
       "  '0.009*\"price\" + 0.008*\"predict\" + 0.006*\"market\" + 0.006*\"year\" + 0.005*\"algorithm\" + 0.005*\"trade\" + 0.005*\"analysi\" + 0.004*\"music\" + 0.004*\"imag\" + 0.004*\"detect\"'),\n",
       " (5,\n",
       "  '0.017*\"valu\" + 0.009*\"function\" + 0.009*\"predict\" + 0.008*\"variabl\" + 0.007*\"algorithm\" + 0.007*\"test\" + 0.007*\"mean\" + 0.006*\"number\" + 0.006*\"point\" + 0.006*\"distribut\"'),\n",
       " (6,\n",
       "  '0.021*\"imag\" + 0.011*\"code\" + 0.008*\"python\" + 0.007*\"run\" + 0.007*\"file\" + 0.007*\"network\" + 0.007*\"deep\" + 0.006*\"tensorflow\" + 0.006*\"dataset\" + 0.006*\"object\"'),\n",
       " (7,\n",
       "  '0.013*\"human\" + 0.009*\"peopl\" + 0.006*\"think\" + 0.006*\"world\" + 0.005*\"technolog\" + 0.004*\"robot\" + 0.004*\"year\" + 0.004*\"don\" + 0.004*\"job\" + 0.004*\"understand\"')]"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tokenized_docs = medium['text'].apply(simple_preprocess)\n",
    "dictionary = gensim.corpora.Dictionary(tokenized_docs)\n",
    "dictionary.filter_extremes(no_below=15, no_above=0.5, keep_n=100000)\n",
    "corpus = [dictionary.doc2bow(doc) for doc in tokenized_docs]\n",
    "\n",
    "# Workers = 4 activates all four cores of my CPU, \n",
    "lda = models.LdaMulticore(corpus=corpus, num_topics=8, id2word=dictionary, passes=10, workers = 4)\n",
    "\n",
    "lda.print_topics()"
   ]
  },
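  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Unlike the sklearn models, gensim's LDA reports per-document topic distributions directly; a quick look at the first document (a sketch, using gensim's default probability threshold):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# (topic_id, probability) pairs for the first document\n",
    "print(lda.get_document_topics(corpus[0]))"
   ]
  },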
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Save NMF Topics\n",
    "And concatenate topic data back to other metadata. Also remove articles with all 0 topic distributions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>title</th>\n",
       "      <th>url</th>\n",
       "      <th>allTags</th>\n",
       "      <th>readingTime</th>\n",
       "      <th>author</th>\n",
       "      <th>Tech</th>\n",
       "      <th>Modeling</th>\n",
       "      <th>Chatbots</th>\n",
       "      <th>Deep Learning</th>\n",
       "      <th>Coding</th>\n",
       "      <th>Business</th>\n",
       "      <th>Careers</th>\n",
       "      <th>NLP</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>Private Business, Government and Blockchain</td>\n",
       "      <td>https://medium.com/s/story/private-business-go...</td>\n",
       "      <td>[Blockchain, Samsung, It]</td>\n",
       "      <td>0.958491</td>\n",
       "      <td>Anar Babaev</td>\n",
       "      <td>0.003306</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.076164</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>Can a robot love us better than another human ...</td>\n",
       "      <td>https://medium.com/s/story/can-a-robot-love-us...</td>\n",
       "      <td>[Robotics, Meditation, Therapy, Artificial Int...</td>\n",
       "      <td>0.652830</td>\n",
       "      <td>Stewart Alsop</td>\n",
       "      <td>0.052391</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>2017 Big Data, AI and IOT Use Cases</td>\n",
       "      <td>https://medium.com/s/story/2017-big-data-ai-an...</td>\n",
       "      <td>[Artificial Intelligence, Data Science, Big Da...</td>\n",
       "      <td>7.055031</td>\n",
       "      <td>Melody Ucros</td>\n",
       "      <td>0.020477</td>\n",
       "      <td>0.016318</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.011528</td>\n",
       "      <td>0.004402</td>\n",
       "      <td>0.045057</td>\n",
       "      <td>0.016091</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>The Meta Model and Meta Meta-Model of Deep Lea...</td>\n",
       "      <td>https://medium.com/s/story/the-meta-model-and-...</td>\n",
       "      <td>[Machine Learning, Deep Learning, Artificial I...</td>\n",
       "      <td>5.684906</td>\n",
       "      <td>Carlos E. Perez</td>\n",
       "      <td>0.008825</td>\n",
       "      <td>0.001702</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.045565</td>\n",
       "      <td>0.000700</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.009749</td>\n",
       "      <td>0.006328</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>Don’t trust “Do you trust this computer”</td>\n",
       "      <td>https://medium.com/s/story/dont-trust-do-you-t...</td>\n",
       "      <td>[Artificial Intelligence, Ethics, Elon Musk, D...</td>\n",
       "      <td>2.739623</td>\n",
       "      <td>Virginia Dignum</td>\n",
       "      <td>0.026696</td>\n",
       "      <td>0.005481</td>\n",
       "      <td>0.003155</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000638</td>\n",
       "      <td>0.005432</td>\n",
       "      <td>0.017030</td>\n",
       "      <td>0.004071</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                               title  \\\n",
       "0        Private Business, Government and Blockchain   \n",
       "1  Can a robot love us better than another human ...   \n",
       "2                2017 Big Data, AI and IOT Use Cases   \n",
       "3  The Meta Model and Meta Meta-Model of Deep Lea...   \n",
       "4           Don’t trust “Do you trust this computer”   \n",
       "\n",
       "                                                 url  \\\n",
       "0  https://medium.com/s/story/private-business-go...   \n",
       "1  https://medium.com/s/story/can-a-robot-love-us...   \n",
       "2  https://medium.com/s/story/2017-big-data-ai-an...   \n",
       "3  https://medium.com/s/story/the-meta-model-and-...   \n",
       "4  https://medium.com/s/story/dont-trust-do-you-t...   \n",
       "\n",
       "                                             allTags  readingTime  \\\n",
       "0                          [Blockchain, Samsung, It]     0.958491   \n",
       "1  [Robotics, Meditation, Therapy, Artificial Int...     0.652830   \n",
       "2  [Artificial Intelligence, Data Science, Big Da...     7.055031   \n",
       "3  [Machine Learning, Deep Learning, Artificial I...     5.684906   \n",
       "4  [Artificial Intelligence, Ethics, Elon Musk, D...     2.739623   \n",
       "\n",
       "            author      Tech  Modeling  Chatbots  Deep Learning    Coding  \\\n",
       "0      Anar Babaev  0.003306  0.000000  0.000000       0.000000  0.000000   \n",
       "1    Stewart Alsop  0.052391  0.000000  0.000000       0.000000  0.000000   \n",
       "2     Melody Ucros  0.020477  0.016318  0.000000       0.011528  0.004402   \n",
       "3  Carlos E. Perez  0.008825  0.001702  0.000000       0.045565  0.000700   \n",
       "4  Virginia Dignum  0.026696  0.005481  0.003155       0.000000  0.000638   \n",
       "\n",
       "   Business   Careers       NLP  \n",
       "0  0.076164  0.000000  0.000000  \n",
       "1  0.000000  0.000000  0.000000  \n",
       "2  0.045057  0.016091  0.000000  \n",
       "3  0.000000  0.009749  0.006328  \n",
       "4  0.005432  0.017030  0.004071  "
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Define column names for dataframe\n",
    "column_names = ['title', 'url', 'allTags', 'readingTime', 'author', 'Tech',\n",
    "                'Modeling', 'Chatbots', 'Deep Learning', 'Coding', 'Business',\n",
    "                'Careers', 'NLP', 'sum']\n",
    "\n",
    "# Create topic sum for each article. Later remove all articles with sum 0.\n",
    "topic_sum = pd.DataFrame(np.sum(docs_nmf, axis = 1))\n",
    "\n",
    "# Turn our docs_nmf array into a data frame\n",
    "doc_topic_df = pd.DataFrame(data = docs_nmf)\n",
    "\n",
    "# Merge all of our article metadata and name columns\n",
    "doc_topic_df = pd.concat([medium[['title', 'url', 'allTags', 'readingTime', 'author']], doc_topic_df, topic_sum], axis = 1)\n",
    "\n",
    "doc_topic_df.columns = column_names\n",
    "\n",
    "# Remove articles with topic sum = 0, then drop sum column\n",
    "doc_topic_df = doc_topic_df[doc_topic_df['sum'] != 0]\n",
    "\n",
    "doc_topic_df.drop(columns = 'sum', inplace = True)\n",
    "\n",
    "# Reset index then save\n",
    "doc_topic_df.reset_index(drop = True, inplace = True)\n",
    "doc_topic_df.to_csv('tfidf_nmf_8topics.csv', index = False)\n",
    "doc_topic_df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# doc_topic_df = pd.read_csv('tfidf_nmf_8topics.csv')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Recommendation Engine"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "topic_names = ['Tech', 'Modeling', 'Chatbots', 'Deep Learning', 'Coding', 'Business', 'Careers', 'NLP']\n",
    "topic_array = np.array(doc_topic_df[topic_names])\n",
    "norms = np.linalg.norm(topic_array, axis = 1)\n",
    "\n",
    "def compute_dists(top_vec, topic_array):\n",
    "    '''\n",
    "    Returns cosine distances for top_vec compared to every article\n",
    "    '''\n",
    "    dots = np.matmul(topic_array, top_vec)\n",
    "    input_norm = np.linalg.norm(top_vec)\n",
    "    co_dists = dots / (input_norm * norms)\n",
    "    return co_dists\n",
    "\n",
    "def produce_rec(top_vec, topic_array, doc_topic_df, rand = 15):\n",
    "    '''\n",
    "    Produces a recommendation based on cosine distance.\n",
    "    Rand variable controls level of randomness in output recommendation.\n",
    "    '''\n",
    "    # Add a bit of randomness to top_vec\n",
    "    top_vec = top_vec + np.random.rand(8,)/(np.linalg.norm(top_vec)) * rand\n",
    "    co_dists = compute_dists(top_vec, topic_array)\n",
    "    return doc_topic_df.loc[np.argmax(co_dists)]"
   ]
  },
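  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `compute_dists` function above is, in effect, a cosine-similarity computation: a dot product divided by the product of the two vector norms. A quick sanity check of that formula on hand-made toy vectors (hypothetical data, not the real article vectors):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity-check the cosine formula on 3 hypothetical 2-d topic vectors\n",
    "demo_array = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])\n",
    "demo_norms = np.linalg.norm(demo_array, axis=1)\n",
    "query = np.array([1.0, 1.0])\n",
    "sims = np.matmul(demo_array, query) / (np.linalg.norm(query) * demo_norms)\n",
    "sims  # the third vector points the same way as query, so its similarity is 1"
   ]
  },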
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Test Against User Input"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "title                           How Algorithms Will Rule The World\n",
       "url              https://medium.com/s/story/how-algorithms-will...\n",
       "allTags          [Algorithms, Machine Learning, Marketing, Bran...\n",
       "readingTime                                                12.6679\n",
       "author                                                 Richard Yao\n",
       "Tech                                                     0.0411037\n",
       "Modeling                                                 0.0371002\n",
       "Chatbots                                                0.00691702\n",
       "Deep Learning                                                    0\n",
       "Coding                                                           0\n",
       "Business                                                 0.0393878\n",
       "Careers                                                 0.00836058\n",
       "NLP                                                              0\n",
       "Name: 8124, dtype: object"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tech = 5\n",
    "modeling = 5\n",
    "chatbots = 0\n",
    "deep = 0\n",
    "coding = 0\n",
    "business = 5\n",
    "careers = 0\n",
    "nlp = 0\n",
    "\n",
    "top_vec = np.array([tech, modeling, chatbots, deep, coding, business, careers, nlp])\n",
    "\n",
    "rec = produce_rec(top_vec, topic_array, doc_topic_df)\n",
    "rec"
   ]
  },
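  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`produce_rec` returns only the single best match. To surface several candidates instead, `np.argsort` on the similarities gives a top-n variant. This is a sketch reusing the objects defined above (`compute_dists`, `topic_array`, `doc_topic_df`, `top_vec`); the name `produce_recs` and the parameter `n` are introduced here for illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def produce_recs(top_vec, topic_array, doc_topic_df, n=5, rand=15):\n",
    "    '''Top-n variant of produce_rec: return the n most similar articles.'''\n",
    "    top_vec = top_vec + np.random.rand(len(top_vec)) / np.linalg.norm(top_vec) * rand\n",
    "    co_sims = compute_dists(top_vec, topic_array)\n",
    "    # argsort is ascending, so take the last n indices and reverse them\n",
    "    top_idx = np.argsort(co_sims)[-n:][::-1]\n",
    "    return doc_topic_df.loc[top_idx]\n",
    "\n",
    "produce_recs(top_vec, topic_array, doc_topic_df, n=3)"
   ]
  },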
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
