{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Million Song Dataset — Collaborative Filtering\n",
    "\n",
    "The public Million Song Dataset (MSD) aggregates data from seven well-known music communities: the SecondHandSongs dataset, musiXmatch dataset, Last.fm dataset, Taste Profile subset, thisismyjam-to-MSD mapping, tagtraum genre annotations, and the Top MAGD dataset.\n",
    "\n",
    "The original dataset consists of:\n",
    "\n",
    "1. train_triplets.txt: (user, song, play count) triplets\n",
    "2. track_metadata.db: metadata for each song\n",
    "\n",
    "Because the full dataset is very large, this notebook works on a subset: the 800 users with the most plays and the 800 most-played songs, stored as triplet_dataset_sub.csv (37,000+ records). Link: https://labrosa.ee.columbia.edu/millionsong/"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Data Preprocessing\n",
    "\n",
    "Outline:\n",
    "\n",
    "1. Convert implicit play counts into explicit scores\n",
    "2. Split the data (train/test)\n",
    "3. Build inverted indexes\n",
    "4. Compute each user's average score\n",
    "5. Precompute the user similarity matrix\n",
    "6. Precompute the item similarity matrix\n",
    "7. Train an SVD model\n",
    "\n",
    "### (0) Import packages and data\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "from numpy.random import random\n",
    "\n",
    "# In-memory data structures\n",
    "from collections import defaultdict\n",
    "import scipy.sparse as ss\n",
    "\n",
    "# Persistence\n",
    "import pickle\n",
    "import scipy.io as sio\n",
    "\n",
    "# Distance metrics\n",
    "import scipy.spatial.distance as ssd\n",
    "\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Users (800), songs (800), and their play counts"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>user</th>\n",
       "      <th>song</th>\n",
       "      <th>play_count</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>4e11f45d732f4861772b2906f81a7d384552ad12</td>\n",
       "      <td>SOCKSGZ12A58A7CA4B</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>4e11f45d732f4861772b2906f81a7d384552ad12</td>\n",
       "      <td>SOCVTLJ12A6310F0FD</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>4e11f45d732f4861772b2906f81a7d384552ad12</td>\n",
       "      <td>SODLLYS12A8C13A96B</td>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>4e11f45d732f4861772b2906f81a7d384552ad12</td>\n",
       "      <td>SOEGIYH12A6D4FC0E3</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>4e11f45d732f4861772b2906f81a7d384552ad12</td>\n",
       "      <td>SOFRQTD12A81C233C0</td>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                       user                song  play_count\n",
       "0  4e11f45d732f4861772b2906f81a7d384552ad12  SOCKSGZ12A58A7CA4B           1\n",
       "1  4e11f45d732f4861772b2906f81a7d384552ad12  SOCVTLJ12A6310F0FD           1\n",
       "2  4e11f45d732f4861772b2906f81a7d384552ad12  SODLLYS12A8C13A96B           3\n",
       "3  4e11f45d732f4861772b2906f81a7d384552ad12  SOEGIYH12A6D4FC0E3           1\n",
       "4  4e11f45d732f4861772b2906f81a7d384552ad12  SOFRQTD12A81C233C0           2"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dpath = './MillionSong/data/'\n",
    "df_triplet = pd.read_csv(dpath + 'triplet_dataset_sub.csv')\n",
    "df_triplet.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (1) Implicit feedback --> scores\n",
    "\n",
    "Each (user, song) play count is divided by the user's total play count; the resulting fractional play count is used as an explicit rating."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>user</th>\n",
       "      <th>song</th>\n",
       "      <th>play_count</th>\n",
       "      <th>total_play_count</th>\n",
       "      <th>fractional_play_count</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>4e11f45d732f4861772b2906f81a7d384552ad12</td>\n",
       "      <td>SOCKSGZ12A58A7CA4B</td>\n",
       "      <td>1</td>\n",
       "      <td>259</td>\n",
       "      <td>0.003861</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>4e11f45d732f4861772b2906f81a7d384552ad12</td>\n",
       "      <td>SOCVTLJ12A6310F0FD</td>\n",
       "      <td>1</td>\n",
       "      <td>259</td>\n",
       "      <td>0.003861</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>4e11f45d732f4861772b2906f81a7d384552ad12</td>\n",
       "      <td>SODLLYS12A8C13A96B</td>\n",
       "      <td>3</td>\n",
       "      <td>259</td>\n",
       "      <td>0.011583</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>4e11f45d732f4861772b2906f81a7d384552ad12</td>\n",
       "      <td>SOEGIYH12A6D4FC0E3</td>\n",
       "      <td>1</td>\n",
       "      <td>259</td>\n",
       "      <td>0.003861</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>4e11f45d732f4861772b2906f81a7d384552ad12</td>\n",
       "      <td>SOFRQTD12A81C233C0</td>\n",
       "      <td>2</td>\n",
       "      <td>259</td>\n",
       "      <td>0.007722</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                       user                song  play_count  \\\n",
       "0  4e11f45d732f4861772b2906f81a7d384552ad12  SOCKSGZ12A58A7CA4B           1   \n",
       "1  4e11f45d732f4861772b2906f81a7d384552ad12  SOCVTLJ12A6310F0FD           1   \n",
       "2  4e11f45d732f4861772b2906f81a7d384552ad12  SODLLYS12A8C13A96B           3   \n",
       "3  4e11f45d732f4861772b2906f81a7d384552ad12  SOEGIYH12A6D4FC0E3           1   \n",
       "4  4e11f45d732f4861772b2906f81a7d384552ad12  SOFRQTD12A81C233C0           2   \n",
       "\n",
       "   total_play_count  fractional_play_count  \n",
       "0               259               0.003861  \n",
       "1               259               0.003861  \n",
       "2               259               0.011583  \n",
       "3               259               0.003861  \n",
       "4               259               0.007722  "
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Total play count per user\n",
    "df_triplet_users = df_triplet[['user','play_count']].groupby('user').sum().reset_index()\n",
    "df_triplet_users.rename(columns={'play_count':'total_play_count'}, inplace=True)\n",
    "\n",
    "# Fraction of the user's total plays that each song accounts for\n",
    "df_triplet = pd.merge(df_triplet, df_triplet_users)\n",
    "df_triplet['fractional_play_count'] = df_triplet['play_count']/df_triplet['total_play_count']\n",
    "del df_triplet_users\n",
    "\n",
    "df_triplet.head()"
   ]
  },
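  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an aside, the same fractions can be computed without the merge and the temporary DataFrame, using `groupby(...).transform`. A minimal sketch on a hypothetical toy frame (`toy` and its values are made up for illustration):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Hypothetical toy triplets: user 'a' has 4 total plays, user 'b' has 2\n",
    "toy = pd.DataFrame({'user': ['a', 'a', 'b'], 'play_count': [1, 3, 2]})\n",
    "toy['total_play_count'] = toy.groupby('user')['play_count'].transform('sum')\n",
    "toy['fractional_play_count'] = toy['play_count'] / toy['total_play_count']\n",
    "# fractions: 0.25, 0.75, 1.0\n",
    "```"
   ]
  },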
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (2) Split the data\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "# Split the row indices\n",
    "total_index = df_triplet.index\n",
    "train_index, test_index = train_test_split(total_index, train_size=0.8, test_size=0.2, random_state=7)\n",
    "\n",
    "# Index into the DataFrame\n",
    "df_triplet_train = df_triplet.iloc[train_index]\n",
    "df_triplet_test = df_triplet.iloc[test_index]\n",
    "\n",
    "# Save\n",
    "df_triplet_train.to_csv(path_or_buf=dpath + 'triplet_dataset_sub_train.csv')\n",
    "df_triplet_test.to_csv(path_or_buf=dpath + 'triplet_dataset_sub_test.csv')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (3) Inverted indexes\n",
    "\n",
    "Re-index the users and items in the training data and precompute inverted indexes; looking these up is much faster than querying a database at recommendation time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "number of Users :786\n",
      "number of Songs :800\n"
     ]
    }
   ],
   "source": [
    "# All users and items in the training set\n",
    "users = list(df_triplet_train['user'].unique())\n",
    "items = list(df_triplet_train['song'].unique())\n",
    "n_users = len(users)\n",
    "n_items = len(items)\n",
    "\n",
    "print(\"number of Users :%d\" % n_users)\n",
    "print(\"number of Songs :%d\" % n_items)\n",
    "\n",
    "\n",
    "# Inverted indexes: the songs each user played / the users who played each song\n",
    "user_items = defaultdict(set)\n",
    "item_users = defaultdict(set)\n",
    "\n",
    "# User-item rating matrix, stored sparse (DOK format supports efficient incremental writes)\n",
    "user_item_scores = ss.dok_matrix((n_users, n_items))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Map each user ID to a new integer index\n",
    "users_index = dict()\n",
    "items_index = dict()\n",
    "for i, u in enumerate(users):\n",
    "    users_index[u] = i\n",
    "\n",
    "# Map each song ID to a new integer index\n",
    "for i, e in enumerate(items):\n",
    "    items_index[e] = i\n",
    "\n",
    "n_records = df_triplet_train.shape[0]\n",
    "for i in range(n_records):\n",
    "    user_index_i = users_index[df_triplet_train.iloc[i]['user']]  # user\n",
    "    item_index_i = items_index[df_triplet_train.iloc[i]['song']]  # song\n",
    "\n",
    "    user_items[user_index_i].add(item_index_i)    # songs this user played\n",
    "    item_users[item_index_i].add(user_index_i)    # users who played this song\n",
    "\n",
    "    score = df_triplet_train.iloc[i]['fractional_play_count']  # fraction of the user's plays\n",
    "    user_item_scores[user_index_i, item_index_i] = score"
   ]
  },
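  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Row-wise `.iloc` lookups inside a loop are slow on larger subsets; `itertuples` iterates the same rows much faster. A sketch on a hypothetical toy frame (the names `toy`, `u_idx`, `i_idx` are made up for illustration; the real cell would use `df_triplet_train`, `users_index`, `items_index`):\n",
    "\n",
    "```python\n",
    "from collections import defaultdict\n",
    "import pandas as pd\n",
    "\n",
    "toy = pd.DataFrame({'user': ['a', 'b'], 'song': ['s1', 's1'],\n",
    "                    'fractional_play_count': [0.25, 1.0]})\n",
    "u_idx, i_idx = {'a': 0, 'b': 1}, {'s1': 0}\n",
    "user_items_toy, item_users_toy = defaultdict(set), defaultdict(set)\n",
    "\n",
    "for row in toy.itertuples(index=False):  # one namedtuple per row\n",
    "    u, i = u_idx[row.user], i_idx[row.song]\n",
    "    user_items_toy[u].add(i)\n",
    "    item_users_toy[i].add(u)\n",
    "# item_users_toy[0] == {0, 1}\n",
    "```"
   ]
  },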
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Save the precomputed structures"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inverted indexes\n",
    "pickle.dump(user_items, open(\"MillionSong/model/user_items.pkl\", 'wb'))\n",
    "pickle.dump(item_users, open(\"MillionSong/model/item_users.pkl\", 'wb'))\n",
    "\n",
    "# User-item rating matrix R, for later use\n",
    "sio.mmwrite(\"MillionSong/model/user_item_scores\", user_item_scores)\n",
    "\n",
    "# User and item index maps\n",
    "pickle.dump(users_index, open(\"MillionSong/model/users_index.pkl\", 'wb'))\n",
    "pickle.dump(items_index, open(\"MillionSong/model/items_index.pkl\", 'wb'))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (4) User averages\n",
    "\n",
    "Compute each user's average score, and the global average score over all users."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "users_mu = np.zeros(n_users)\n",
    "for u in range(n_users):\n",
    "    n_user_items = 0\n",
    "    r_acc = 0.0\n",
    "\n",
    "    for i in user_items[u]:  # items this user has scored\n",
    "        r_acc += user_item_scores[u,i]\n",
    "        n_user_items += 1\n",
    "\n",
    "    users_mu[u] = r_acc/n_user_items\n",
    "\n",
    "pickle.dump(users_mu, open(\"MillionSong/model/users_mu.pkl\", 'wb'))\n",
    "\n",
    "# Global average score over all training records\n",
    "mu = df_triplet_train['fractional_play_count'].mean()\n",
    "pickle.dump(mu, open(\"MillionSong/model/mu.pkl\", 'wb'))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (5) User similarity matrix\n",
    "\n",
    "   - Similarity between two users\n",
    "   \n",
    "       1. Using play fractions as features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "def user_similarity_playcount(uid1, uid2):\n",
    "    si = {}  # items scored by both users\n",
    "    for item in user_items[uid1]:  # every item scored by uid1\n",
    "        if item in user_items[uid2]:  # uid2 also scored it\n",
    "            si[item] = 1\n",
    "\n",
    "    n = len(si)  # number of co-scored items\n",
    "    if n == 0:  # no common items: similarity defaults to 0\n",
    "        return 0.0\n",
    "\n",
    "    # uid1's scores on the common items, centered by uid1's average\n",
    "    s1 = np.array([user_item_scores[uid1,item] - users_mu[uid1] for item in si])\n",
    "\n",
    "    # uid2's scores on the common items, centered by uid2's average\n",
    "    s2 = np.array([user_item_scores[uid2,item] - users_mu[uid2] for item in si])\n",
    "\n",
    "    similarity = 1 - ssd.cosine(s1, s2)\n",
    "\n",
    "    if np.isnan(similarity):  # s1 or s2 has zero norm (all scores equal the user's average)\n",
    "        similarity = 0.0\n",
    "    return similarity"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "        2. Using played/not-played as features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "def user_similarity_played(uid1, uid2):\n",
    "    # Feature representation of uid1: the set of items uid1 played\n",
    "    s1 = user_items[uid1]\n",
    "\n",
    "    # Feature representation of uid2: the set of items uid2 played\n",
    "    s2 = user_items[uid2]\n",
    "\n",
    "    # Intersection of the songs played by uid1 and uid2\n",
    "    intersection = s1.intersection(s2)\n",
    "\n",
    "    # Jaccard index\n",
    "    if len(intersection) != 0:\n",
    "        # Union of the songs played by uid1 and uid2\n",
    "        union = s1.union(s2)\n",
    "        similarity = float(len(intersection))/float(len(union))\n",
    "    else:\n",
    "        similarity = 0\n",
    "\n",
    "    return similarity"
   ]
  },
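  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of the Jaccard index on two hypothetical played-song sets that share 2 of 4 songs:\n",
    "\n",
    "```python\n",
    "s1, s2 = {0, 1, 2}, {1, 2, 3}\n",
    "jaccard = len(s1 & s2) / len(s1 | s2)  # 2 / 4 = 0.5\n",
    "```"
   ]
  },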
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Compute similarities for all user pairs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ui=0 \n",
      "ui=100 \n",
      "ui=200 \n",
      "ui=300 \n",
      "ui=400 \n",
      "ui=500 \n",
      "ui=600 \n",
      "ui=700 \n"
     ]
    }
   ],
   "source": [
    "users_similarity_matrix = np.matrix(np.zeros(shape=(n_users, n_users)), float)\n",
    "\n",
    "for ui in range(n_users):\n",
    "    users_similarity_matrix[ui,ui] = 1.0\n",
    "\n",
    "    # Progress indicator\n",
    "    if ui % 100 == 0:\n",
    "        print(\"ui=%d \" % ui)\n",
    "\n",
    "    # Similarity is symmetric, so only the upper triangle is computed\n",
    "    for uj in range(ui+1, n_users):\n",
    "        users_similarity_matrix[uj,ui] = user_similarity_played(ui, uj)\n",
    "        users_similarity_matrix[ui,uj] = users_similarity_matrix[uj,ui]\n",
    "\n",
    "pickle.dump(users_similarity_matrix, open(\"MillionSong/model/users_similarity_played.pkl\", 'wb'))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (6) Item similarity matrix\n",
    "\n",
    "   - Similarity between two items\n",
    "   \n",
    "       1. Using play fractions as features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "def item_similarity_playcount(iid1, iid2):\n",
    "    su = {}  # users who scored both items\n",
    "    for user in item_users[iid1]:  # every user who scored iid1\n",
    "        if user in item_users[iid2]:  # the user also scored iid2\n",
    "            su[user] = 1\n",
    "\n",
    "    n = len(su)  # number of co-scoring users\n",
    "    if n == 0:  # no common users: similarity defaults to 0\n",
    "        return 0.0\n",
    "\n",
    "    # Scores on iid1 from the common users, centered by each user's average\n",
    "    s1 = np.array([user_item_scores[user,iid1] - users_mu[user] for user in su])\n",
    "\n",
    "    # Scores on iid2 from the common users, centered by each user's average\n",
    "    s2 = np.array([user_item_scores[user,iid2] - users_mu[user] for user in su])\n",
    "\n",
    "    similarity = 1 - ssd.cosine(s1, s2)\n",
    "    if np.isnan(similarity):  # s1 or s2 has zero norm\n",
    "        similarity = 0.0\n",
    "    return similarity"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "        2. Using played/not-played as features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "def item_similarity_played(iid1, iid2):\n",
    "    # Feature representation of iid1: the set of users who played iid1\n",
    "    s1 = item_users[iid1]\n",
    "\n",
    "    # Feature representation of iid2: the set of users who played iid2\n",
    "    s2 = item_users[iid2]\n",
    "\n",
    "    # Intersection of the users who played iid1 and iid2\n",
    "    intersection = s1.intersection(s2)\n",
    "\n",
    "    # Jaccard index\n",
    "    if len(intersection) != 0:\n",
    "        # Union of the users who played iid1 and iid2\n",
    "        union = s1.union(s2)\n",
    "        similarity = float(len(intersection))/float(len(union))\n",
    "    else:\n",
    "        similarity = 0\n",
    "\n",
    "    return similarity"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Compute similarities for all item pairs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "i=0 \n",
      "i=100 \n",
      "i=200 \n",
      "i=300 \n",
      "i=400 \n",
      "i=500 \n",
      "i=600 \n",
      "i=700 \n"
     ]
    }
   ],
   "source": [
    "items_similarity_matrix = np.matrix(np.zeros(shape=(n_items, n_items)), float)\n",
    "\n",
    "for i in range(n_items):\n",
    "    items_similarity_matrix[i,i] = 1.0\n",
    "\n",
    "    # Progress indicator\n",
    "    if i % 100 == 0:\n",
    "        print(\"i=%d \" % i)\n",
    "\n",
    "    # Similarity is symmetric, so only the upper triangle is computed\n",
    "    for j in range(i+1, n_items):\n",
    "        items_similarity_matrix[j,i] = item_similarity_played(i, j)\n",
    "        items_similarity_matrix[i,j] = items_similarity_matrix[j,i]\n",
    "\n",
    "pickle.dump(items_similarity_matrix, open(\"MillionSong/model/items_similarity_played.pkl\", 'wb'))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (7) SVD model\n",
    "\n",
    "   - Model initialization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Dimensionality of the latent factors\n",
    "K = 40\n",
    "\n",
    "# Item and user bias terms\n",
    "bi = np.zeros((n_items,1))\n",
    "bu = np.zeros((n_users,1))\n",
    "\n",
    "# Item and user latent vectors\n",
    "qi = np.zeros((n_items,K))\n",
    "pu = np.zeros((n_users,K))\n",
    "\n",
    "# Initialize the latent vectors with small random values\n",
    "for uid in range(n_users):\n",
    "    pu[uid] = np.reshape(random((K,1))/10*(np.sqrt(K)), K)\n",
    "\n",
    "for iid in range(n_items):\n",
    "    qi[iid] = np.reshape(random((K,1))/10*(np.sqrt(K)), K)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Prediction function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "def svd_pred(uid, iid):\n",
    "    score = mu + bi[iid] + bu[uid] + np.sum(qi[iid]*pu[uid])\n",
    "    return score"
   ]
  },
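  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is the standard biased matrix-factorization prediction, r_hat(u, i) = mu + b_u + b_i + q_i . p_u. A toy numeric check with made-up values (`mu_`, `bu_`, `bi_`, `p`, `q` are all hypothetical):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "mu_, bu_, bi_ = 0.01, 0.002, 0.003                 # hypothetical global mean and biases\n",
    "p, q = np.array([0.1, 0.2]), np.array([0.3, 0.4])  # hypothetical latent vectors\n",
    "r_hat = mu_ + bu_ + bi_ + np.sum(q * p)            # 0.015 + 0.11 = 0.125\n",
    "```"
   ]
  },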
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    " - Model training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "collapsed": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The 0-th  step is running\n",
      "the rmse of this step on train data is  [0.88180463]\n",
      "The 1-th  step is running\n",
      "the rmse of this step on train data is  [0.14193097]\n",
      "The 2-th  step is running\n",
      "the rmse of this step on train data is  [0.09545182]\n",
      "The 3-th  step is running\n",
      "the rmse of this step on train data is  [0.07980418]\n",
      "The 4-th  step is running\n",
      "the rmse of this step on train data is  [0.07140168]\n",
      "The 5-th  step is running\n",
      "the rmse of this step on train data is  [0.06593014]\n",
      "The 6-th  step is running\n",
      "the rmse of this step on train data is  [0.06196086]\n",
      "The 7-th  step is running\n",
      "the rmse of this step on train data is  [0.05889846]\n",
      "The 8-th  step is running\n",
      "the rmse of this step on train data is  [0.05632029]\n",
      "The 9-th  step is running\n",
      "the rmse of this step on train data is  [0.05437091]\n",
      "The 10-th  step is running\n",
      "the rmse of this step on train data is  [0.05257123]\n",
      "The 11-th  step is running\n",
      "the rmse of this step on train data is  [0.05107747]\n",
      "The 12-th  step is running\n",
      "the rmse of this step on train data is  [0.05003156]\n",
      "The 13-th  step is running\n",
      "the rmse of this step on train data is  [0.04896006]\n",
      "The 14-th  step is running\n",
      "the rmse of this step on train data is  [0.04814816]\n",
      "The 15-th  step is running\n",
      "the rmse of this step on train data is  [0.04742873]\n",
      "The 16-th  step is running\n",
      "the rmse of this step on train data is  [0.04674482]\n",
      "The 17-th  step is running\n",
      "the rmse of this step on train data is  [0.04627311]\n",
      "The 18-th  step is running\n",
      "the rmse of this step on train data is  [0.04573704]\n",
      "The 19-th  step is running\n",
      "the rmse of this step on train data is  [0.04536878]\n",
      "The 20-th  step is running\n",
      "the rmse of this step on train data is  [0.04500195]\n",
      "The 21-th  step is running\n",
      "the rmse of this step on train data is  [0.0446878]\n",
      "The 22-th  step is running\n",
      "the rmse of this step on train data is  [0.04445208]\n",
      "The 23-th  step is running\n",
      "the rmse of this step on train data is  [0.0441817]\n",
      "The 24-th  step is running\n",
      "the rmse of this step on train data is  [0.04397719]\n",
      "The 25-th  step is running\n",
      "the rmse of this step on train data is  [0.04380228]\n",
      "The 26-th  step is running\n",
      "the rmse of this step on train data is  [0.0436005]\n",
      "The 27-th  step is running\n",
      "the rmse of this step on train data is  [0.04345835]\n",
      "The 28-th  step is running\n",
      "the rmse of this step on train data is  [0.04332255]\n",
      "The 29-th  step is running\n",
      "the rmse of this step on train data is  [0.04320263]\n",
      "The 30-th  step is running\n",
      "the rmse of this step on train data is  [0.0430859]\n",
      "The 31-th  step is running\n",
      "the rmse of this step on train data is  [0.04299947]\n",
      "The 32-th  step is running\n",
      "the rmse of this step on train data is  [0.04289438]\n",
      "The 33-th  step is running\n",
      "the rmse of this step on train data is  [0.04282043]\n",
      "The 34-th  step is running\n",
      "the rmse of this step on train data is  [0.04273593]\n",
      "The 35-th  step is running\n",
      "the rmse of this step on train data is  [0.04266184]\n",
      "The 36-th  step is running\n",
      "the rmse of this step on train data is  [0.0425903]\n",
      "The 37-th  step is running\n",
      "the rmse of this step on train data is  [0.04255451]\n",
      "The 38-th  step is running\n",
      "the rmse of this step on train data is  [0.04247723]\n",
      "The 39-th  step is running\n",
      "the rmse of this step on train data is  [0.04241764]\n",
      "The 40-th  step is running\n",
      "the rmse of this step on train data is  [0.04238151]\n",
      "The 41-th  step is running\n",
      "the rmse of this step on train data is  [0.04233661]\n",
      "The 42-th  step is running\n",
      "the rmse of this step on train data is  [0.04230225]\n",
      "The 43-th  step is running\n",
      "the rmse of this step on train data is  [0.04226543]\n",
      "The 44-th  step is running\n",
      "the rmse of this step on train data is  [0.04223139]\n",
      "The 45-th  step is running\n",
      "the rmse of this step on train data is  [0.04220867]\n",
      "The 46-th  step is running\n",
      "the rmse of this step on train data is  [0.04217319]\n",
      "The 47-th  step is running\n",
      "the rmse of this step on train data is  [0.0421501]\n",
      "The 48-th  step is running\n",
      "the rmse of this step on train data is  [0.04212648]\n",
      "The 49-th  step is running\n",
      "the rmse of this step on train data is  [0.04210749]\n"
     ]
    }
   ],
   "source": [
    "# gamma: learning rate\n",
    "# Lambda: regularization strength\n",
    "# steps: number of iterations (epochs)\n",
    "steps = 50\n",
    "gamma = 0.04\n",
    "Lambda = 0.15\n",
    "\n",
    "# Total number of training records\n",
    "n_records = df_triplet_train.shape[0]\n",
    "\n",
    "for step in range(steps):\n",
    "    print('The ' + str(step) + '-th  step is running')\n",
    "    rmse_sum = 0.0\n",
    "\n",
    "    # Shuffle the order of the training samples\n",
    "    kk = np.random.permutation(n_records)\n",
    "    for j in range(n_records):\n",
    "        # One training sample at a time\n",
    "        line = kk[j]\n",
    "\n",
    "        uid = users_index[df_triplet_train.iloc[line]['user']]\n",
    "        iid = items_index[df_triplet_train.iloc[line]['song']]\n",
    "\n",
    "        rating = df_triplet_train.iloc[line]['fractional_play_count']\n",
    "\n",
    "        # Prediction residual\n",
    "        eui = rating - svd_pred(uid, iid)\n",
    "        # Accumulate the squared residual\n",
    "        rmse_sum += eui**2\n",
    "\n",
    "        # Stochastic gradient descent updates\n",
    "        bu[uid] += gamma * (eui - Lambda * bu[uid])\n",
    "        bi[iid] += gamma * (eui - Lambda * bi[iid])\n",
    "\n",
    "        temp = qi[iid].copy()  # copy: qi[iid] is a view, and pu's update needs the pre-update value\n",
    "        qi[iid] += gamma * (eui*pu[uid] - Lambda*qi[iid])\n",
    "        pu[uid] += gamma * (eui*temp - Lambda*pu[uid])\n",
    "\n",
    "    # Decay the learning rate\n",
    "    gamma = gamma*0.93\n",
    "    print(\"the rmse of this step on train data is \", np.sqrt(rmse_sum/n_records))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    " - Save the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "def save_json(filepath):\n",
    "    dict_ = {}\n",
    "    dict_['mu'] = mu\n",
    "    dict_['K'] = K\n",
    "\n",
    "    dict_['bi'] = bi.tolist()\n",
    "    dict_['bu'] = bu.tolist()\n",
    "\n",
    "    dict_['qi'] = qi.tolist()\n",
    "    dict_['pu'] = pu.tolist()\n",
    "\n",
    "    # Create JSON text and save it to file\n",
    "    json_txt = json.dumps(dict_)\n",
    "    with open(filepath, 'w') as file:\n",
    "        file.write(json_txt)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json  \n",
    "save_json('MillionSong/model/svd_model.json')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Song Recommendation\n",
    "\n",
    "### 2.1 Predicting a user's score for an item\n",
    "\n",
    "(1) User-based collaborative filtering - scoring function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "### Predict user uid's score for item iid\n",
    "### similarity_matrix is the user-user similarity matrix\n",
    "def user_CF_pred(uid, iid, similarity_matrix):\n",
    "    sim_accumulate = 0.0\n",
    "    rat_acc = 0.0\n",
    "    for user_id in item_users[iid]:  # every user who scored item iid\n",
    "        # Similarity between the target user and this user\n",
    "        sim = similarity_matrix[user_id,uid]\n",
    "\n",
    "        if sim != 0:\n",
    "            # This user's centered score for item iid, weighted by similarity\n",
    "            rat_acc += sim * (user_item_scores[user_id,iid] - users_mu[user_id])\n",
    "            sim_accumulate += np.abs(sim)\n",
    "\n",
    "    if sim_accumulate != 0:\n",
    "        score = users_mu[uid] + rat_acc/sim_accumulate\n",
    "    else:  # no similar users: fall back to the user's average score\n",
    "        score = users_mu[uid]\n",
    "\n",
    "    return score"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(2) Item-based collaborative filtering - scoring function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "### Predict user uid's score for item iid\n",
    "### similarity_matrix is the item-item similarity matrix\n",
    "### n_Knns is the number of most similar items to use\n",
    "def item_CF_pred(uid, iid, similarity_matrix, n_Knns):\n",
    "    sim_accumulate = 0.0\n",
    "    rat_acc = 0.0\n",
    "    n_nn_items = 0\n",
    "\n",
    "    # Sort items by similarity to iid, most similar first\n",
    "    cur_items_similarity = np.array(similarity_matrix[iid,:])\n",
    "    cur_items_similarity = cur_items_similarity.flatten()\n",
    "    sort_index = sorted(((e,i) for i,e in enumerate(list(cur_items_similarity))), reverse=True)\n",
    "\n",
    "    for i in range(len(sort_index)):\n",
    "        cur_item_index = sort_index[i][1]\n",
    "\n",
    "        if n_nn_items >= n_Knns:  # enough similar items collected\n",
    "            break\n",
    "\n",
    "        if cur_item_index in user_items[uid]:  # only items the user has scored\n",
    "            sim = similarity_matrix[iid, cur_item_index]\n",
    "\n",
    "            if sim != 0:\n",
    "                # The user's score for this neighbor item, weighted by similarity\n",
    "                rat_acc += sim * user_item_scores[uid, cur_item_index]\n",
    "                sim_accumulate += np.abs(sim)\n",
    "\n",
    "            n_nn_items += 1\n",
    "\n",
    "    if sim_accumulate != 0:\n",
    "        score = rat_acc/sim_accumulate\n",
    "    else:  # no similar items: fall back to the user's average score\n",
    "        score = users_mu[uid]\n",
    "\n",
    "    if score < 0:\n",
    "        score = 0.0\n",
    "\n",
    "    return score"
   ]
  },
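  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `sorted(...)` ranking above can be written more compactly with `np.argsort`. A self-contained sketch (the `sims` values are made up for illustration):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "sims = np.array([0.2, 0.9, 0.1, 0.5])  # hypothetical similarities of one item to all items\n",
    "top_k = np.argsort(-sims)[:2]          # indices of the 2 most similar items\n",
    "# top_k -> array([1, 3])\n",
    "```"
   ]
  },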
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(3) SVD-based collaborative filtering - scoring function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 1. Load the trained SVD model\n",
    "import json\n",
    "\n",
    "filepath = \"MillionSong/model/svd_model.json\"\n",
    "\n",
    "with open(filepath, 'r') as file:\n",
    "    dict_ = json.load(file)\n",
    "\n",
    "mu = dict_['mu']  # global mean score\n",
    "K = dict_['K']    # number of latent factors\n",
    "\n",
    "bi = np.asarray(dict_['bi'])  # item biases\n",
    "bu = np.asarray(dict_['bu'])  # user biases\n",
    "\n",
    "qi = np.asarray(dict_['qi'])  # item latent factors\n",
    "pu = np.asarray(dict_['pu'])  # user latent factors"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 2. Scoring function: biased matrix-factorization prediction\n",
    "def svd_CF_pred(uid, iid):\n",
    "    score = mu + bi[iid] + bu[uid] + np.dot(qi[iid], pu[uid])\n",
    "    return score"
   ]
  },
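`svd_CF_pred` is the standard biased matrix-factorization predictor, r̂(u,i) = μ + b_i + b_u + q_i·p_u. A small self-contained check with randomly initialized (purely hypothetical) parameters also shows that scoring one user against every item is just a matrix-vector product plus the bias terms:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, K = 4, 5, 3          # hypothetical sizes

mu = 0.5                               # global mean score
bu = rng.normal(0, 0.1, n_users)       # user biases
bi = rng.normal(0, 0.1, n_items)       # item biases
pu = rng.normal(0, 0.1, (n_users, K))  # user latent factors
qi = rng.normal(0, 0.1, (n_items, K))  # item latent factors

def svd_pred(uid, iid):
    return mu + bi[iid] + bu[uid] + np.dot(qi[iid], pu[uid])

# scoring user 0 against all items in one shot
row = mu + bi + bu[0] + qi @ pu[0]
assert np.allclose(row, [svd_pred(0, i) for i in range(n_items)])
print(row.shape)  # (5,)
```

Vectorizing the per-item loop this way is what makes the per-user scoring in the recommendation functions below cheap for the SVD model.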
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.2、Recommending items for a given user\n",
    "\n",
    "    （1）User-based collaborative filtering"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "N_KNNS = 10\n",
    "\n",
    "def recommend(user):\n",
    "    cur_user_id = users_index[user]\n",
    "\n",
    "    cur_user_items = user_items[cur_user_id]  # items this user scored in the training set\n",
    "    user_items_scores = np.zeros(n_items)     # predicted scores for all items\n",
    "\n",
    "    # predict a score for every item the user has not scored yet\n",
    "    for i in range(n_items):\n",
    "        if i not in cur_user_items:\n",
    "            user_items_scores[i] = user_CF_pred(cur_user_id, i, similarity_matrix_users)\n",
    "            #user_items_scores[i] = item_CF_pred(cur_user_id, i, similarity_matrix_items, N_KNNS)\n",
    "            #user_items_scores[i] = svd_CF_pred(cur_user_id, i)\n",
    "\n",
    "    # rank items by predicted score (descending)\n",
    "    sort_index = sorted(((e, i) for i, e in enumerate(user_items_scores)), reverse=True)\n",
    "\n",
    "    # map internal indices back to song ids\n",
    "    index_to_item = {v: k for k, v in items_index.items()}\n",
    "\n",
    "    # collect the ranked recommendations in a dataframe\n",
    "    df = pd.DataFrame(columns=['item_id', 'score'])\n",
    "\n",
    "    for score, cur_item_index in sort_index:\n",
    "        if not np.isnan(score) and cur_item_index not in cur_user_items:\n",
    "            df.loc[len(df)] = [index_to_item[cur_item_index], score]\n",
    "\n",
    "    return df"
   ]
  },
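The ranking step in `recommend` (building sorted `(score, index)` tuples) is equivalent to an `np.argsort` on the negated scores; a quick check on a toy score vector (values and the `seen` set are illustrative only):

```python
import numpy as np

scores = np.array([0.2, 0.9, 0.1, 0.5])
seen = {3}  # indices the user already scored in training

# sorted-tuple pattern, as used in recommend()
pairs = sorted(((e, i) for i, e in enumerate(scores)), reverse=True)
top_sorted = [i for e, i in pairs if i not in seen]

# argsort-based equivalent
top_argsort = [int(i) for i in np.argsort(-scores) if i not in seen]

print(top_sorted)  # [1, 0, 2]
assert top_sorted == top_argsort
```

For 800 items either form is fine; the argsort variant avoids materializing per-element tuples and scales better if the catalogue grows.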
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "    （2）Item-based collaborative filtering"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "def recommend_II(user):\n",
    "    cur_user_id = users_index[user]\n",
    "\n",
    "    cur_user_items = user_items[cur_user_id]  # items this user scored in the training set\n",
    "    user_items_scores = np.zeros(n_items)     # predicted scores for all items\n",
    "\n",
    "    # predict a score for every item the user has not scored yet\n",
    "    for i in range(n_items):\n",
    "        if i not in cur_user_items:\n",
    "            #user_items_scores[i] = user_CF_pred(cur_user_id, i, similarity_matrix_users)\n",
    "            user_items_scores[i] = item_CF_pred(cur_user_id, i, similarity_matrix_items, N_KNNS)\n",
    "            #user_items_scores[i] = svd_CF_pred(cur_user_id, i)\n",
    "\n",
    "    # rank items by predicted score (descending)\n",
    "    sort_index = sorted(((e, i) for i, e in enumerate(user_items_scores)), reverse=True)\n",
    "\n",
    "    # map internal indices back to song ids\n",
    "    index_to_item = {v: k for k, v in items_index.items()}\n",
    "\n",
    "    # collect the ranked recommendations in a dataframe\n",
    "    df = pd.DataFrame(columns=['item_id', 'score'])\n",
    "\n",
    "    for score, cur_item_index in sort_index:\n",
    "        if not np.isnan(score) and cur_item_index not in cur_user_items:\n",
    "            df.loc[len(df)] = [index_to_item[cur_item_index], score]\n",
    "\n",
    "    return df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "    （3）SVD-based collaborative filtering"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [],
   "source": [
    "def recommend_III(user):\n",
    "    cur_user_id = users_index[user]\n",
    "\n",
    "    cur_user_items = user_items[cur_user_id]  # items this user scored in the training set\n",
    "    user_items_scores = np.zeros(n_items)     # predicted scores for all items\n",
    "\n",
    "    # predict a score for every item the user has not scored yet\n",
    "    for i in range(n_items):\n",
    "        if i not in cur_user_items:\n",
    "            #user_items_scores[i] = user_CF_pred(cur_user_id, i, similarity_matrix_users)\n",
    "            #user_items_scores[i] = item_CF_pred(cur_user_id, i, similarity_matrix_items, N_KNNS)\n",
    "            user_items_scores[i] = svd_CF_pred(cur_user_id, i)\n",
    "\n",
    "    # rank items by predicted score (descending)\n",
    "    sort_index = sorted(((e, i) for i, e in enumerate(user_items_scores)), reverse=True)\n",
    "\n",
    "    # map internal indices back to song ids\n",
    "    index_to_item = {v: k for k, v in items_index.items()}\n",
    "\n",
    "    # collect the ranked recommendations in a dataframe\n",
    "    df = pd.DataFrame(columns=['item_id', 'score'])\n",
    "\n",
    "    for score, cur_item_index in sort_index:\n",
    "        if not np.isnan(score) and cur_item_index not in cur_user_items:\n",
    "            df.loc[len(df)] = [index_to_item[cur_item_index], score]\n",
    "\n",
    "    return df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.3、Testing - computing the evaluation metrics\n",
    "\n",
    "    （0）Inspect the test data and load the required files"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>user</th>\n",
       "      <th>song</th>\n",
       "      <th>play_count</th>\n",
       "      <th>total_play_count</th>\n",
       "      <th>fractional_play_count</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>17782</th>\n",
       "      <td>b21e1b6b14b7b3b8b8e683e82ede0e59ad64e9f7</td>\n",
       "      <td>SOXLRDB12A81C21739</td>\n",
       "      <td>3</td>\n",
       "      <td>1903</td>\n",
       "      <td>0.001576</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>15873</th>\n",
       "      <td>119b7c88d58d0c6eb051365c103da5caf817bea6</td>\n",
       "      <td>SOLLNTU12A6701CFDC</td>\n",
       "      <td>8</td>\n",
       "      <td>2477</td>\n",
       "      <td>0.003230</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>279</th>\n",
       "      <td>6a944bfe30ae8d6b873139e8305ae131f1607d5f</td>\n",
       "      <td>SOPXKYD12A6D4FA876</td>\n",
       "      <td>2</td>\n",
       "      <td>942</td>\n",
       "      <td>0.002123</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>26373</th>\n",
       "      <td>c48985a0208be8ee31f09edf031fb3c2be0790c7</td>\n",
       "      <td>SOSLQQJ12AB017BDCC</td>\n",
       "      <td>7</td>\n",
       "      <td>399</td>\n",
       "      <td>0.017544</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1314</th>\n",
       "      <td>6d625c6557df84b60d90426c0116138b617b9449</td>\n",
       "      <td>SOWEWRL12A58A7961F</td>\n",
       "      <td>3</td>\n",
       "      <td>1089</td>\n",
       "      <td>0.002755</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                           user                song  \\\n",
       "17782  b21e1b6b14b7b3b8b8e683e82ede0e59ad64e9f7  SOXLRDB12A81C21739   \n",
       "15873  119b7c88d58d0c6eb051365c103da5caf817bea6  SOLLNTU12A6701CFDC   \n",
       "279    6a944bfe30ae8d6b873139e8305ae131f1607d5f  SOPXKYD12A6D4FA876   \n",
       "26373  c48985a0208be8ee31f09edf031fb3c2be0790c7  SOSLQQJ12AB017BDCC   \n",
       "1314   6d625c6557df84b60d90426c0116138b617b9449  SOWEWRL12A58A7961F   \n",
       "\n",
       "       play_count  total_play_count  fractional_play_count  \n",
       "17782           3              1903               0.001576  \n",
       "15873           8              2477               0.003230  \n",
       "279             2               942               0.002123  \n",
       "26373           7               399               0.017544  \n",
       "1314            3              1089               0.002755  "
      ]
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df_triplet_test.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [],
   "source": [
    "# user and item index mappings\n",
    "users_index = pickle.load(open(\"MillionSong/model/users_index.pkl\", 'rb'))\n",
    "items_index = pickle.load(open(\"MillionSong/model/items_index.pkl\", 'rb'))\n",
    "\n",
    "n_users = len(users_index)\n",
    "n_items = len(items_index)\n",
    "\n",
    "# user-item rating matrix R\n",
    "user_item_scores = sio.mmread(\"MillionSong/model/user_item_scores\").todense()\n",
    "\n",
    "# inverted lists: the songs each user played, and the users who played each song\n",
    "user_items = pickle.load(open(\"MillionSong/model/user_items.pkl\", 'rb'))\n",
    "item_users = pickle.load(open(\"MillionSong/model/item_users.pkl\", 'rb'))\n",
    "\n",
    "# precomputed user-user and item-item similarity matrices\n",
    "similarity_matrix_users = pickle.load(open(\"MillionSong/model/users_similarity_played.pkl\", 'rb'))\n",
    "similarity_matrix_items = pickle.load(open(\"MillionSong/model/items_similarity_played.pkl\", 'rb'))\n",
    "\n",
    "# each user's average score\n",
    "users_mu = pickle.load(open(\"MillionSong/model/users_mu.pkl\", 'rb'))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "    （1）User-based collaborative filtering"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "de27b74444dae039f76e421362c6a914da9f8b41 is a new user.\n",
      "\n",
      "467e0e46181933c7e1a936e513ca55fbab4edaed is a new user.\n",
      "\n",
      "52a6c7b6221f57c89dacbbd06854ca0dc415e9e6 is a new user.\n",
      "\n",
      "62420be0fd0df5ab0eb4cba35a4bc7cb3e3b506a is a new user.\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# all users appearing in the test set\n",
    "unique_users_test = df_triplet_test['user'].unique()\n",
    "\n",
    "# number of items recommended to each user\n",
    "N_RS_ITEMS = 10\n",
    "\n",
    "# counters for Precision and Recall\n",
    "n_hits = 0\n",
    "n_total_rec_items = 0\n",
    "n_test_items = 0\n",
    "\n",
    "# union of items recommended to any user, used for coverage\n",
    "all_rec_items = set()\n",
    "\n",
    "# residual sum of squares, used for RMSE\n",
    "rss_test = 0.0\n",
    "\n",
    "for user in unique_users_test:\n",
    "    # users absent from the training set are cold-start users; CF cannot score them\n",
    "    if user not in users_index:\n",
    "        print(str(user) + ' is a new user.\\n')\n",
    "        continue\n",
    "\n",
    "    # ground truth: the songs this user scored in the test set\n",
    "    df_user_records_test = df_triplet_test[df_triplet_test.user == user]\n",
    "\n",
    "    # score the items unseen in training and recommend the top N_RS_ITEMS\n",
    "    df_rec_items = recommend(user)\n",
    "    for i in range(N_RS_ITEMS):\n",
    "        item = df_rec_items.iloc[i]['item_id']\n",
    "\n",
    "        if item in df_user_records_test['song'].values:\n",
    "            n_hits += 1\n",
    "        all_rec_items.add(item)\n",
    "\n",
    "    # RMSE: compare predicted and true scores for each of this user's test records\n",
    "    for i in range(df_user_records_test.shape[0]):\n",
    "        item = df_user_records_test.iloc[i]['song']\n",
    "        score = df_user_records_test.iloc[i]['fractional_play_count']\n",
    "\n",
    "        df1 = df_rec_items[df_rec_items.item_id == item]\n",
    "        if df1.shape[0] == 0:  # not in the candidate list: new item, or already scored in training\n",
    "            print(str(item) + ' is a new item or was already scored.\\n')\n",
    "            continue\n",
    "        pred_score = df1['score'].values[0]\n",
    "\n",
    "        rss_test += (pred_score - score)**2\n",
    "\n",
    "    # totals for Precision and Recall\n",
    "    n_total_rec_items += N_RS_ITEMS\n",
    "    n_test_items += df_user_records_test.shape[0]\n",
    "\n",
    "# Precision & Recall\n",
    "precision = n_hits / (1.0 * n_total_rec_items)\n",
    "recall = n_hits / (1.0 * n_test_items)\n",
    "\n",
    "# coverage: fraction of the catalogue that ever gets recommended\n",
    "coverage = len(all_rec_items) / (1.0 * n_items)\n",
    "\n",
    "# root mean squared error of the predicted scores\n",
    "rmse = np.sqrt(rss_test / df_triplet_test.shape[0])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.012722298221614227\n",
      "0.0124\n",
      "0.27375\n",
      "0.04956894819275634\n"
     ]
    }
   ],
   "source": [
    "print(precision)\n",
    "print(recall)\n",
    "print(coverage)\n",
    "print(rmse)"
   ]
  },
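The precision, recall, and coverage computed above follow the usual top-N definitions: hits over recommended items, hits over test items, and distinct recommended items over catalogue size. A minimal sketch with made-up recommendation and ground-truth lists:

```python
# Hypothetical top-N recommendations and test-set items for two users
recs  = {'u1': ['a', 'b', 'c'], 'u2': ['b', 'd', 'e']}
truth = {'u1': {'a', 'x'},      'u2': {'d', 'y', 'z'}}
n_items_catalogue = 10  # hypothetical catalogue size

# a "hit" is a recommended item that also appears in the user's test records
hits = sum(len(set(r) & truth[u]) for u, r in recs.items())
n_rec = sum(len(r) for r in recs.values())
n_true = sum(len(t) for t in truth.values())
all_rec = set().union(*recs.values())

precision = hits / n_rec                      # 2 / 6
recall = hits / n_true                        # 2 / 5
coverage = len(all_rec) / n_items_catalogue   # 5 / 10
print(precision, recall, coverage)
```

Note that with a fixed top-N list per user, precision and recall differ only in the denominator, which is why they move together across the three methods compared below.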
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "    （2）Item-based collaborative filtering"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "de27b74444dae039f76e421362c6a914da9f8b41 is a new user.\n",
      "\n",
      "467e0e46181933c7e1a936e513ca55fbab4edaed is a new user.\n",
      "\n",
      "52a6c7b6221f57c89dacbbd06854ca0dc415e9e6 is a new user.\n",
      "\n",
      "62420be0fd0df5ab0eb4cba35a4bc7cb3e3b506a is a new user.\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# all users appearing in the test set\n",
    "unique_users_test = df_triplet_test['user'].unique()\n",
    "\n",
    "# number of items recommended to each user\n",
    "N_RS_ITEMS = 10\n",
    "\n",
    "# counters for Precision and Recall\n",
    "n_hits = 0\n",
    "n_total_rec_items = 0\n",
    "n_test_items = 0\n",
    "\n",
    "# union of items recommended to any user, used for coverage\n",
    "all_rec_items = set()\n",
    "\n",
    "# residual sum of squares, used for RMSE\n",
    "rss_test = 0.0\n",
    "\n",
    "for user in unique_users_test:\n",
    "    # users absent from the training set are cold-start users; CF cannot score them\n",
    "    if user not in users_index:\n",
    "        print(str(user) + ' is a new user.\\n')\n",
    "        continue\n",
    "\n",
    "    # ground truth: the songs this user scored in the test set\n",
    "    df_user_records_test = df_triplet_test[df_triplet_test.user == user]\n",
    "\n",
    "    # score the items unseen in training and recommend the top N_RS_ITEMS\n",
    "    df_rec_items = recommend_II(user)\n",
    "    for i in range(N_RS_ITEMS):\n",
    "        item = df_rec_items.iloc[i]['item_id']\n",
    "\n",
    "        if item in df_user_records_test['song'].values:\n",
    "            n_hits += 1\n",
    "        all_rec_items.add(item)\n",
    "\n",
    "    # RMSE: compare predicted and true scores for each of this user's test records\n",
    "    for i in range(df_user_records_test.shape[0]):\n",
    "        item = df_user_records_test.iloc[i]['song']\n",
    "        score = df_user_records_test.iloc[i]['fractional_play_count']\n",
    "\n",
    "        df1 = df_rec_items[df_rec_items.item_id == item]\n",
    "        if df1.shape[0] == 0:  # not in the candidate list: new item, or already scored in training\n",
    "            print(str(item) + ' is a new item or was already scored.\\n')\n",
    "            continue\n",
    "        pred_score = df1['score'].values[0]\n",
    "\n",
    "        rss_test += (pred_score - score)**2\n",
    "\n",
    "    # totals for Precision and Recall\n",
    "    n_total_rec_items += N_RS_ITEMS\n",
    "    n_test_items += df_user_records_test.shape[0]\n",
    "\n",
    "# Precision & Recall\n",
    "precision = n_hits / (1.0 * n_total_rec_items)\n",
    "recall = n_hits / (1.0 * n_test_items)\n",
    "\n",
    "# coverage: fraction of the catalogue that ever gets recommended\n",
    "coverage = len(all_rec_items) / (1.0 * n_items)\n",
    "\n",
    "# root mean squared error of the predicted scores\n",
    "rmse = np.sqrt(rss_test / df_triplet_test.shape[0])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.04528043775649795\n",
      "0.04413333333333333\n",
      "0.9725\n",
      "0.050733139959579295\n"
     ]
    }
   ],
   "source": [
    "print(precision)\n",
    "print(recall)\n",
    "print(coverage)\n",
    "print(rmse)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "    （3）SVD-based collaborative filtering"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "de27b74444dae039f76e421362c6a914da9f8b41 is a new user.\n",
      "\n",
      "467e0e46181933c7e1a936e513ca55fbab4edaed is a new user.\n",
      "\n",
      "52a6c7b6221f57c89dacbbd06854ca0dc415e9e6 is a new user.\n",
      "\n",
      "62420be0fd0df5ab0eb4cba35a4bc7cb3e3b506a is a new user.\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# all users appearing in the test set\n",
    "unique_users_test = df_triplet_test['user'].unique()\n",
    "\n",
    "# number of items recommended to each user\n",
    "N_RS_ITEMS = 10\n",
    "\n",
    "# counters for Precision and Recall\n",
    "n_hits = 0\n",
    "n_total_rec_items = 0\n",
    "n_test_items = 0\n",
    "\n",
    "# union of items recommended to any user, used for coverage\n",
    "all_rec_items = set()\n",
    "\n",
    "# residual sum of squares, used for RMSE\n",
    "rss_test = 0.0\n",
    "\n",
    "for user in unique_users_test:\n",
    "    # users absent from the training set are cold-start users; CF cannot score them\n",
    "    if user not in users_index:\n",
    "        print(str(user) + ' is a new user.\\n')\n",
    "        continue\n",
    "\n",
    "    # ground truth: the songs this user scored in the test set\n",
    "    df_user_records_test = df_triplet_test[df_triplet_test.user == user]\n",
    "\n",
    "    # score the items unseen in training and recommend the top N_RS_ITEMS\n",
    "    df_rec_items = recommend_III(user)\n",
    "    for i in range(N_RS_ITEMS):\n",
    "        item = df_rec_items.iloc[i]['item_id']\n",
    "\n",
    "        if item in df_user_records_test['song'].values:\n",
    "            n_hits += 1\n",
    "        all_rec_items.add(item)\n",
    "\n",
    "    # RMSE: compare predicted and true scores for each of this user's test records\n",
    "    for i in range(df_user_records_test.shape[0]):\n",
    "        item = df_user_records_test.iloc[i]['song']\n",
    "        score = df_user_records_test.iloc[i]['fractional_play_count']\n",
    "\n",
    "        df1 = df_rec_items[df_rec_items.item_id == item]\n",
    "        if df1.shape[0] == 0:  # not in the candidate list: new item, or already scored in training\n",
    "            print(str(item) + ' is a new item or was already scored.\\n')\n",
    "            continue\n",
    "        pred_score = df1['score'].values[0]\n",
    "\n",
    "        rss_test += (pred_score - score)**2\n",
    "\n",
    "    # totals for Precision and Recall\n",
    "    n_total_rec_items += N_RS_ITEMS\n",
    "    n_test_items += df_user_records_test.shape[0]\n",
    "\n",
    "# Precision & Recall\n",
    "precision = n_hits / (1.0 * n_total_rec_items)\n",
    "recall = n_hits / (1.0 * n_test_items)\n",
    "\n",
    "# coverage: fraction of the catalogue that ever gets recommended\n",
    "coverage = len(all_rec_items) / (1.0 * n_items)\n",
    "\n",
    "# root mean squared error of the predicted scores\n",
    "rmse = np.sqrt(rss_test / df_triplet_test.shape[0])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.020109439124487004\n",
      "0.0196\n",
      "0.11125\n",
      "0.05355981314849328\n"
     ]
    }
   ],
   "source": [
    "print(precision)\n",
    "print(recall)\n",
    "print(coverage)\n",
    "print(rmse)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "| Method | #Recs | precision | recall | coverage | rmse |\n",
    "|:--|--|--|--|--|--:|\n",
    "| User CF | 10 | 0.0127 | 0.0124 | 0.2738 | 0.0496 |\n",
    "| Item CF | 10 | 0.0453 | 0.0441 | 0.9725 | 0.0507 |\n",
    "| SVD | 10 | 0.0201 | 0.0196 | 0.1113 | 0.0536 |\n",
    "\n",
    "\n",
    "    As expected, item_CF performs somewhat better than user_CF on precision and recall, and its coverage is strikingly close to 1. The RMSE of the three methods is nearly identical, so all are acceptable on that metric. Why svd_CF's coverage is so low is not obvious from these numbers; one plausible explanation is that its bias terms dominate the predictions, so the same few high-bias (popular) items end up at the top of most users' lists."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
