{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# 数据集回顾\n",
    "\n",
    "在编写数据处理程序前，我们先回顾下本章使用的ml-1m电影推荐数据集。\n",
    "\n",
    "ml-1m是GroupLens Research从MovieLens网站上收集并提供的电影评分数据集。包含了6000多位用户对近3900个电影的共100万条评分数据，评分均为1～5的整数，其中每个电影的评分数据至少有20条。该数据集包含三个数据文件，分别是：\n",
    "- users.dat，存储用户属性信息的txt格式文件。\n",
    "- movies.dat，存储电影属性信息的txt格式文件。\n",
    "- ratings.dat， 存储电影评分信息的txt格式文件。\n",
    "\n",
    "电影海报图像在posters文件夹下，海报图像的名字以\"mov_id\" + 电影ID + \".png\"的方式命名。由于这里的电影海报图像有缺失，我们整理了一个新的评分数据文件，新的文件中包含的电影均是有海报数据的，因此，本次实验使用的数据集在ml-1m基础上增加了两份数据：\n",
    "- posters/  , 包含电影海报图像。\n",
    "- new_rating.txt， 存储包含海报图像的新评分数据文件。\n",
    "\n",
    "注意：海报图像的数据将不在本实验中使用，而留作本章的作业。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# 数据处理流程\n",
    "\n",
    "在计算机视觉和自然语言处理章节中，我们已经了解到数据处理是算法应用的前提，并掌握了图像数据处理和自然语言数据处理的方法。总结一下，数据处理就是将人类容易理解的图像文本数据，转换为机器容易理解的数字形式，把离散的数据转为连续的数据。在推荐算法中，这些数据处理方法也是通用的。\n",
    "\n",
    "本次实验中，数据处理一共包含如下六步：\n",
    "\n",
    "1. 读取用户数据，存储到字典\n",
    "2. 读取电影数据，存储到字典\n",
    "3. 读取评分数据，存储到字典\n",
    "4. 读取海报数据，存储到字典\n",
    "5. 将各个字典中的数据拼接，形成数据读取器\n",
    "6. 划分训练集和验证集，生成迭代器，每次提供一个批次的数据\n",
    "\n",
    "流程如下图所示。\n",
    "\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/43700fdb6536461b8d26a77bce2b07c12b7d321f051e46cd809b07b507e706a6\" width=\"380\" ></center>\n",
    "\n",
    "<center><br>图1：数据处理流程图 </br></center>"
   ]
  },
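  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "As a preview, the join-and-batch idea behind steps 5 and 6 can be sketched in plain Python. The tiny dictionaries below are made-up stand-ins for the real processed data:\n",
    "\n",
    "```python\n",
    "# Minimal sketch of steps 5-6: join per-ID dictionaries into samples,\n",
    "# then yield them in fixed-size batches. All data here is illustrative.\n",
    "def make_loader(usr_info, movie_info, rating_info, batch_size=2):\n",
    "    # Step 5: join the dictionaries into a flat sample list\n",
    "    samples = [{'usr': usr_info[u], 'mov': movie_info[m], 'score': s}\n",
    "               for u, movies in rating_info.items()\n",
    "               for m, s in movies.items()]\n",
    "    # Step 6: a generator that yields one batch at a time\n",
    "    def gen():\n",
    "        for i in range(0, len(samples), batch_size):\n",
    "            yield samples[i:i + batch_size]\n",
    "    return gen\n",
    "\n",
    "usr_info = {'1': {'usr_id': 1}, '2': {'usr_id': 2}}\n",
    "movie_info = {'10': {'mov_id': 10}}\n",
    "rating_info = {'1': {'10': 5.0}, '2': {'10': 4.0}}\n",
    "batches = list(make_loader(usr_info, movie_info, rating_info)())\n",
    "print(len(batches), len(batches[0]))  # -> 1 2\n",
    "```\n",
    "\n",
    "The real implementation later in this notebook follows the same pattern, with the extra work of converting each batch to numpy arrays of fixed shapes."
   ]
  },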
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# 用户数据处理\n",
    "\n",
    "\n",
    "用户数据文件user.dat中的数据格式为：UserID::Gender::Age::Occupation::Zip-code。存储形式如下图所示：\n",
    "\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/7a5fd5c791634b48a56c28c92266680941c10f467f8b47a6bbe8c3603088c23f)\n",
    "\n",
    "\n",
    "上图中，每一行表示一个用户的数据，以$::$隔开，第一列到最后一列分别表示UserID、Gender、Age、Occupation、Zip-code。各数据对应关系如下:\n",
    "\n",
    "\n",
    "| 数据类别 | 数据说明 | 数据示例 |\n",
    "| -------- | -------- | -------- |\n",
    "| **UserID**   | 每个用户的数字代号| 1、2、3等序号     |\n",
    "| **Gender**     | F表示女性，M表示男性| F或M     |\n",
    "| **Age**     | 用数字表示各个年龄段| <ul><li>1: \"Under 18\"　　　　　　　　　　　　　　　　　　　　　　</li><li>18: \"18-24\"</li>  <li>25: \"25-34\"</li>  <li>35: \"35-44\"</li>  <li>45: \"45-49\"</li>  <li>50: \"50-55\"</li>  <li>56: \"56+\"</li></ul> |\n",
    "| **Occupation**     | 用数字表示不同职业     | <ul><li>0: \"other\" or not specified</li><li>1: \"academic/educator\"</li> <li>2: \"artist\"</li>  <li>3: \"clerical/admin\"</li>  <li>4: \"college/grad student\"</li>  <li>5: \"customer service\"</li> <li>6: \"doctor/health care\"</li>  <li>7: \"executive/managerial\"</li>  <li>8: \"farmer\"</li>  <li>9: \"homemaker\"</li>  <li>10: \"K-12 student\"</li>  <li>11: \"lawyer\"</li>  <li>12: \"programmer\"</li>  <li>13: \"retired\"</li>  <li>14: \"sales/marketing\"</li> <li>15: \"scientist\"</li>  <li>16: \"self-employed\"</li>  <li>17: \"technician/engineer\"</li>  <li>18: \"tradesman/craftsman\"</li>  <li>19: \"unemployed\"</li>  <li>20: \"writer\"</li>  </ul>|\n",
    "| **zip-code**     | 邮政编码，与用户所处的地理位置有关。<br>在本次实验中，不使用这个数据。    | 48067     |\n",
    "\n",
    "\n",
    "><font size=2>比如82::M::25::17::48380表示ID为82的用户，性别为男，年龄为25-34岁，职业为technician/engineer。</font>\n",
    "\n",
    "首先，读取用户信息文件中的数据："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "data 数据长度是： 6040\n",
      "第一条数据是： 1::F::1::10::48067\n",
      "\n",
      "数据类型： <class 'str'>\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "usr_file = \"./work/ml-1m/users.dat\"\n",
    "# 打开文件，读取所有行到data中\n",
    "with open(usr_file, 'r') as f:\n",
    "    data = f.readlines()\n",
    "# 打印data的数据长度、第一条数据、数据类型\n",
    "print(\"data 数据长度是：\",len(data))\n",
    "print(\"第一条数据是：\", data[0])\n",
    "print(\"数据类型：\", type(data[0]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "观察以上结果，用户数据一共有6040条，数据以 $::$ 分隔，是字符串类型。为了方便后续数据读取，区分用户的ID、年龄、职业等数据，一个简单的方式是将数据存储到字典中。另外在自然语言处理章节中我们了解到，文本数据无法直接输入到神经网络中进行计算，所以需要将字符串类型的数据转换成数字类型。\n",
    "另外，用户的性别F、M是字母数据，这里需要转换成数字表示。\n",
    "\n",
    "我们定义如下函数实现字母转数字，将性别M、F转成数字0、1表示。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "性别M用数字 0 表示\n",
      "性别F用数字 1 表示\n"
     ]
    }
   ],
   "source": [
    "def gender2num(gender):\n",
    "    return 1 if gender == 'F' else 0\n",
    "\n",
    "\n",
    "print(\"性别M用数字 {} 表示\".format(gender2num('M')))\n",
    "print(\"性别F用数字 {} 表示\".format(gender2num('F')))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "接下来把用户数据的字符串类型的数据转成数字类型，并存储到字典中，实现如下："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "用户ID为3的用户数据是： {'usr_id': 3, 'gender': 0, 'age': 25, 'job': 15}\n"
     ]
    }
   ],
   "source": [
    "usr_info = {}\n",
    "max_usr_id = 0\n",
    "#按行索引数据\n",
    "for item in data:\n",
    "    # 去除每一行中和数据无关的部分\n",
    "    item = item.strip().split(\"::\")\n",
    "    usr_id = item[0]\n",
    "    # 将字符数据转成数字并保存在字典中\n",
    "    usr_info[usr_id] = {'usr_id': int(usr_id),\n",
    "                        'gender': gender2num(item[1]),\n",
    "                        'age': int(item[2]),\n",
    "                        'job': int(item[3])}\n",
    "    max_usr_id = max(max_usr_id, int(usr_id))\n",
    "\n",
    "print(\"用户ID为3的用户数据是：\", usr_info['3'])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "至此，我们完成了用户数据的处理，完整的代码如下："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "用户数量: 6040\n",
      "最大用户ID: 6040\n",
      "第1个用户的信息是： {'usr_id': 1, 'gender': 1, 'age': 1, 'job': 10}\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "def get_usr_info(path):\n",
    "    # 性别转换函数，M-0， F-1\n",
    "    def gender2num(gender):\n",
    "        return 1 if gender == 'F' else 0\n",
    "    \n",
    "    # 打开文件，读取所有行到data中\n",
    "    with open(path, 'r') as f:\n",
    "        data = f.readlines()\n",
    "    # 建立用户信息的字典\n",
    "    use_info = {}\n",
    "    \n",
    "    max_usr_id = 0\n",
    "    #按行索引数据\n",
    "    for item in data:\n",
    "        # 去除每一行中和数据无关的部分\n",
    "        item = item.strip().split(\"::\")\n",
    "        usr_id = item[0]\n",
    "        # 将字符数据转成数字并保存在字典中\n",
    "        use_info[usr_id] = {'usr_id': int(usr_id),\n",
    "                            'gender': gender2num(item[1]),\n",
    "                            'age': int(item[2]),\n",
    "                            'job': int(item[3])}\n",
    "        max_usr_id = max(max_usr_id, int(usr_id))\n",
    "    \n",
    "    return use_info, max_usr_id\n",
    "\n",
    "usr_file = \"./work/ml-1m/users.dat\"\n",
    "usr_info, max_usr_id = get_usr_info(usr_file)\n",
    "print(\"用户数量:\", len(usr_info))\n",
    "print(\"最大用户ID:\", max_usr_id)\n",
    "print(\"第1个用户的信息是：\", usr_info['1'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "从上面的结果可以得出，一共有6040个用户，其中ID为1的用户信息是{'usr_id': [1], 'gender': [1], 'age': [1], 'job': [10]}，表示用户的性别序号是1（女），年龄序号是1（Under 18），职业序号是10（K-12 student），都已处理成数字类型。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# 电影数据处理\n",
    "\n",
    "电影信息包含在movies.dat中，数据格式为：MovieID::Title::Genres，保存的格式与用户数据相同，每一行表示一条电影数据信息。\n",
    "\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/33c577021a674fd3b2ec779216447182bae861b56bd446389c9066d508395f36)\n",
    "\n",
    "\n",
    "各数据对应关系如下：\n",
    "| 数据类别 | 数据说明 | 数据示例 |\n",
    "| -------- | -------- | -------- |\n",
    "| **MovieID**     | 每个电影的数字代号     | 1、2、3等序号    |\n",
    "| **Title**     | 每个电影的名字和首映时间     | 比如：Toy Story (1995)     |\n",
    "| **Genres**     | 电影的种类，每个电影不止一个类别，不同类别以　\\|　隔开    | 比如：Animation\\| Children's\\|Comedy <br>包含的类别有：【Action，Adventure，Animation，Children's，Comedy，Crime，Documentary，Drama，Fantasy，Film-Noir，Horror，Musical，Mystery，Romance，Sci-Fi，Thriller，War，Western】|\n",
    "\n",
    "首先，读取电影信息文件里的数据。需要注意的是，电影数据的存储方式和用户数据不同，在读取电影数据时，需要指定编码方式为\"ISO-8859-1\"："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1::Toy Story (1995)::Animation|Children's|Comedy\n",
      "\n",
      "movie ID: 1\n",
      "movie title: Toy Story\n",
      "movie year: 1995\n",
      "movie genre: ['Animation', \"Children's\", 'Comedy']\n"
     ]
    }
   ],
   "source": [
    "movie_info_path = \"./work/ml-1m/movies.dat\"\n",
    "# 打开文件，编码方式选择ISO-8859-1，读取所有数据到data中\n",
    "with open(movie_info_path, 'r', encoding=\"ISO-8859-1\") as f:\n",
    "    data = f.readlines()\n",
    "\n",
    "# 读取第一条数据，并打印\n",
    "item = data[0]\n",
    "print(item)\n",
    "item = item.strip().split(\"::\")\n",
    "print(\"movie ID:\", item[0])\n",
    "print(\"movie title:\", item[1][:-7])\n",
    "print(\"movie year:\", item[1][-5:-1])\n",
    "print(\"movie genre:\", item[2].split('|'))\n"
   ]
  },
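  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "To see why the encoding argument matters: some titles in movies.dat contain Latin-1 accented characters, which are not valid UTF-8 byte sequences. The byte string below is a made-up stand-in for such a line:\n",
    "\n",
    "```python\n",
    "# 0xe9 is 'é' in ISO-8859-1 but an invalid byte sequence in UTF-8\n",
    "raw = b\"20::Les Mis\\xe9rables (1995)::Drama\"\n",
    "try:\n",
    "    raw.decode(\"utf-8\")\n",
    "except UnicodeDecodeError:\n",
    "    print(\"utf-8 cannot decode byte 0xe9\")\n",
    "print(raw.decode(\"ISO-8859-1\"))  # -> 20::Les Misérables (1995)::Drama\n",
    "```"
   ]
  },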
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "从上述代码，我们看出每条电影数据以 $::$ 分隔，是字符串类型。类似处理用户数据的方式，需要将字符串类型的数据转换成数字类型，存储到字典中。\n",
    "不同的是，在用户数据处理中，我们把性别数据M、F处理成0、1，而电影数据中Title和Genres都是长文本信息，为了便于后续神经网络计算，我们把其中每个单词都拆分出来，不同的单词用对应的数字序号指代。\n",
    "\n",
    "所以，我们需要对这些数据进行如下处理：\n",
    "1. 统计电影ID信息。\n",
    "2. 统计电影名字的单词，并给每个单词一个数字序号。\n",
    "3. 统计电影类别单词，并给每个单词一个数字序号。\n",
    "4. 保存电影数据到字典中，方便根据电影ID进行索引。\n",
    "\n",
    "实现方法如下:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 1. 统计电影ID信息\n",
    "将电影ID信息存到字典中，并获得电影ID的最大值。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "电影的最大ID是： 3952\n"
     ]
    }
   ],
   "source": [
    "movie_info_path = \"./work/ml-1m/movies.dat\"\n",
    "# 打开文件，编码方式选择ISO-8859-1，读取所有数据到data中\n",
    "with open(movie_info_path, 'r', encoding=\"ISO-8859-1\") as f:\n",
    "    data = f.readlines()\n",
    "    \n",
    "movie_info = {}\n",
    "for item in data:\n",
    "    item = item.strip().split(\"::\")\n",
    "    # 获得电影的ID信息\n",
    "    v_id = item[0]\n",
    "    movie_info[v_id] = {'mov_id': int(v_id)}\n",
    "max_id = max([movie_info[k]['mov_id'] for k in movie_info.keys()])\n",
    "print(\"电影的最大ID是：\", max_id)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 2. 统计电影名字的单词，并给每个单词一个数字序号\n",
    "\n",
    "不同于用户数据，电影数据中包含文字数据，可是，神经网络模型是无法直接处理文本数据的，我们可以借助自然语言处理中word embedding的方式完成文本到数字向量之间的转换。按照word embedding的步骤，首先，需要将每个单词用数字代替，然后利用embedding的方法完成数字到映射向量之间的转换。此处数据处理中，我们只需要先完成文本到数字的转换。\n",
    "\n",
    "接下来，我们把电影名字的单词用数字代替。在读取电影数据的同时，统计不同的单词，从数字 1 开始对不同单词进行标号。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "最大电影title长度是： 15\n",
      "电影 ID: 1\n",
      "电影 title: Toy Story\n",
      "ID为1 的电影数据是： {'mov_id': 1, 'title': [1, 2], 'years': 1995}\n"
     ]
    }
   ],
   "source": [
    "# 用于记录电影title每个单词对应哪个序号\n",
    "movie_titles = {}\n",
    "#记录电影名字包含的单词最大数量\n",
    "max_title_length = 0\n",
    "# 对不同的单词从1 开始计数\n",
    "t_count = 1\n",
    "# 按行读取数据并处理\n",
    "for item in data:\n",
    "    item = item.strip().split(\"::\")\n",
    "    # 1. 获得电影的ID信息\n",
    "    v_id = item[0]\n",
    "    v_title = item[1][:-7] # 去掉title中年份数据\n",
    "    v_year = item[1][-5:-1]\n",
    "    titles = v_title.split()\n",
    "    # 获得title最大长度\n",
    "    max_title_length = max((max_title_length, len(titles)))\n",
    "    \n",
    "    # 2. 统计电影名字的单词，并给每个单词一个序号，放在movie_titles中\n",
    "    for t in titles:\n",
    "        if t not in movie_titles:\n",
    "            movie_titles[t] = t_count\n",
    "            t_count += 1\n",
    "            \n",
    "    v_tit = [movie_titles[k] for k in titles]\n",
    "    # 保存电影ID数据和title数据到字典中\n",
    "    movie_info[v_id] = {'mov_id': int(v_id),\n",
    "                        'title': v_tit,\n",
    "                        'years': int(v_year)}\n",
    "    \n",
    "print(\"最大电影title长度是：\",  max_title_length)\n",
    "ID = 1\n",
    "# 读取第一条数据，并打印\n",
    "item = data[0]\n",
    "item = item.strip().split(\"::\")\n",
    "print(\"电影 ID:\", item[0])\n",
    "print(\"电影 title:\", item[1][:-7])\n",
    "print(\"ID为1 的电影数据是：\", movie_info['1'])"
   ]
  },
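  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The `[:-7]` and `[-5:-1]` slices above rely on every Title field ending with a space plus a parenthesized four-digit year, i.e. 7 trailing characters in total:\n",
    "\n",
    "```python\n",
    "# \"Toy Story (1995)\" ends with \" (1995)\": 7 characters, year at [-5:-1]\n",
    "title_field = \"Toy Story (1995)\"\n",
    "name = title_field[:-7]    # drops \" (1995)\"\n",
    "year = title_field[-5:-1]  # keeps \"1995\"\n",
    "print(name, year)  # -> Toy Story 1995\n",
    "```"
   ]
  },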
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "考虑到年份对衡量两个电影的相似度没有很大的影响，后续神经网络处理时，并不使用年份数据。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 3. 统计电影类别的单词，并给每个单词一个数字序号\n",
    "\n",
    "参考处理电影名字的方法处理电影类别，给不同类别的单词不同数字序号。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "电影类别数量最多是： 6\n",
      "电影 ID: 1\n",
      "电影种类 category: ['Animation', \"Children's\", 'Comedy']\n",
      "ID为1 的电影数据是： {'mov_id': 1, 'category': [1, 2, 3]}\n"
     ]
    }
   ],
   "source": [
    "# 用于记录电影类别每个单词对应哪个序号\n",
    "movie_titles, movie_cat = {}, {}\n",
    "\n",
    "max_title_length = 0\n",
    "max_cat_length = 0\n",
    "\n",
    "t_count, c_count = 1, 1\n",
    "# 按行读取数据并处理\n",
    "for item in data:\n",
    "    item = item.strip().split(\"::\")\n",
    "    # 1. 获得电影的ID信息\n",
    "    v_id = item[0]\n",
    "    cats = item[2].split('|')\n",
    "\n",
    "    # 获得电影类别数量的最大长度\n",
    "    max_cat_length = max((max_cat_length, len(cats)))\n",
    "            \n",
    "    v_cat = item[2].split('|')\n",
    "    # 3. 统计电影类别单词，并给每个单词一个序号，放在movie_cat中\n",
    "    for cat in cats:\n",
    "        if cat not in movie_cat:\n",
    "            movie_cat[cat] = c_count\n",
    "            c_count += 1\n",
    "    v_cat = [movie_cat[k] for k in v_cat]\n",
    "    \n",
    "    # 保存电影ID数据和title数据到字典中\n",
    "    movie_info[v_id] = {'mov_id': int(v_id),\n",
    "                        'category': v_cat}\n",
    "    \n",
    "print(\"电影类别数量最多是：\",  max_cat_length)\n",
    "ID = 1\n",
    "# 读取第一条数据，并打印\n",
    "item = data[0]\n",
    "item = item.strip().split(\"::\")\n",
    "print(\"电影 ID:\", item[0])\n",
    "print(\"电影种类 category:\", item[2].split('|'))\n",
    "print(\"ID为1 的电影数据是：\", movie_info['1'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 4. 保存电影数据到字典中，方便根据电影ID进行索引\n",
    "\n",
    "在保存电影数据到字典前，值得注意的是，由于每个电影名字和类别的单词数量不一样，转换成数字表示时，还需要通过补0将其补全成固定数据长度。原因是这些数据作为神经网络的输入，其维度影响了第一层网络的权重维度初始化，这要求输入数据的维度是定长的，而不是变长的，所以通过补0使其变为定长输入。补0并不会影响神经网络运算的最终结果。 \n",
    "\n",
    "从上面两小节我们已知：最大电影名字长度是15，最大电影类别长度是6，15和6分别表示电影名字、种类包含的最大单词数量。因此我们通过补0使电影名字的列表长度为15，使电影种类的列表长度补齐为6。实现如下："
   ]
  },
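  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The padding rule can be written as a small helper function (a sketch; the next cell inlines the same logic with while loops):\n",
    "\n",
    "```python\n",
    "def pad_to(seq, length, pad=0):\n",
    "    \"\"\"Right-pad seq with `pad` until it has `length` elements.\"\"\"\n",
    "    return seq + [pad] * (length - len(seq))\n",
    "\n",
    "print(pad_to([3], 15))       # a title padded to 15 indices\n",
    "print(pad_to([4, 2, 5], 6))  # a genre list padded to 6 indices\n",
    "```"
   ]
  },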
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "电影数据数量： 3883\n",
      "原始的电影ID为 2 的数据是： 2::Jumanji (1995)::Adventure|Children's|Fantasy\n",
      "\n",
      "电影ID为 2 的转换后数据是： {'mov_id': 2, 'title': [3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'category': [4, 2, 5, 0, 0, 0], 'years': 1995}\n"
     ]
    }
   ],
   "source": [
    "# 建立三个字典，分别存放电影ID、名字和类别\n",
    "movie_info, movie_titles, movie_cat = {}, {}, {}\n",
    "# 对电影名字、类别中不同的单词从 1 开始标号\n",
    "t_count, c_count = 1, 1\n",
    "\n",
    "count_tit = {}\n",
    "# 按行读取数据并处理\n",
    "for item in data:\n",
    "    item = item.strip().split(\"::\")\n",
    "    # 1. 获得电影的ID信息\n",
    "    v_id = item[0]\n",
    "    v_title = item[1][:-7] # 去掉title中年份数据\n",
    "    cats = item[2].split('|')\n",
    "    v_year = item[1][-5:-1]\n",
    "\n",
    "    titles = v_title.split()\n",
    "    # 2. 统计电影名字的单词，并给每个单词一个序号，放在movie_titles中\n",
    "    for t in titles:\n",
    "        if t not in movie_titles:\n",
    "            movie_titles[t] = t_count\n",
    "            t_count += 1\n",
    "    # 3. 统计电影类别单词，并给每个单词一个序号，放在movie_cat中\n",
    "    for cat in cats:\n",
    "        if cat not in movie_cat:\n",
    "            movie_cat[cat] = c_count\n",
    "            c_count += 1\n",
    "    # 补0使电影名称对应的列表长度为15\n",
    "    v_tit = [movie_titles[k] for k in titles]\n",
    "    while len(v_tit)<15:\n",
    "        v_tit.append(0)\n",
    "    # 补0使电影种类对应的列表长度为6\n",
    "    v_cat = [movie_cat[k] for k in cats]\n",
    "    while len(v_cat)<6:\n",
    "        v_cat.append(0)\n",
    "    # 4. 保存电影数据到movie_info中\n",
    "    movie_info[v_id] = {'mov_id': int(v_id),\n",
    "                        'title': v_tit,\n",
    "                        'category': v_cat,\n",
    "                        'years': int(v_year)}\n",
    "    \n",
    "print(\"电影数据数量：\", len(movie_info))\n",
    "ID = 2\n",
    "print(\"原始的电影ID为 {} 的数据是：\".format(ID), data[ID-1])\n",
    "print(\"电影ID为 {} 的转换后数据是：\".format(ID), movie_info[str(ID)])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "\n",
    "完整的电影数据处理代码如下："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "电影数量： 3883\n",
      "原始的电影ID为 1 的数据是： 1::Toy Story (1995)::Animation|Children's|Comedy\n",
      "\n",
      "电影ID为 1 的转换后数据是： {'mov_id': 1, 'title': [1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'category': [1, 2, 3, 0, 0, 0], 'years': 1995}\n",
      "电影种类对应序号：'Animation':1 'Children's':2 'Comedy':3\n",
      "电影名称对应序号：'The':26 'Story':2 \n"
     ]
    }
   ],
   "source": [
    "def get_movie_info(path):\n",
    "    # 打开文件，编码方式选择ISO-8859-1，读取所有数据到data中 \n",
    "    with open(path, 'r', encoding=\"ISO-8859-1\") as f:\n",
    "        data = f.readlines()\n",
    "    # 建立三个字典，分别用户存放电影所有信息，电影的名字信息、类别信息\n",
    "    movie_info, movie_titles, movie_cat = {}, {}, {}\n",
    "    # 对电影名字、类别中不同的单词计数\n",
    "    t_count, c_count = 1, 1\n",
    "    # 初始化电影名字和种类的列表\n",
    "    titles = []\n",
    "    cats = []\n",
    "    count_tit = {}\n",
    "    # 按行读取数据并处理\n",
    "    for item in data:\n",
    "        item = item.strip().split(\"::\")\n",
    "        v_id = item[0]\n",
    "        v_title = item[1][:-7]\n",
    "        cats = item[2].split('|')\n",
    "        v_year = item[1][-5:-1]\n",
    "\n",
    "        titles = v_title.split()\n",
    "        # 统计电影名字的单词，并给每个单词一个序号，放在movie_titles中\n",
    "        for t in titles:\n",
    "            if t not in movie_titles:\n",
    "                movie_titles[t] = t_count\n",
    "                t_count += 1\n",
    "        # 统计电影类别单词，并给每个单词一个序号，放在movie_cat中\n",
    "        for cat in cats:\n",
    "            if cat not in movie_cat:\n",
    "                movie_cat[cat] = c_count\n",
    "                c_count += 1\n",
    "        # 补0使电影名称对应的列表长度为15\n",
    "        v_tit = [movie_titles[k] for k in titles]\n",
    "        while len(v_tit)<15:\n",
    "            v_tit.append(0)\n",
    "        # 补0使电影种类对应的列表长度为6\n",
    "        v_cat = [movie_cat[k] for k in cats]\n",
    "        while len(v_cat)<6:\n",
    "            v_cat.append(0)\n",
    "        # 保存电影数据到movie_info中\n",
    "        movie_info[v_id] = {'mov_id': int(v_id),\n",
    "                            'title': v_tit,\n",
    "                            'category': v_cat,\n",
    "                            'years': int(v_year)}\n",
    "    return movie_info, movie_cat, movie_titles\n",
    "\n",
    "\n",
    "movie_info_path = \"./work/ml-1m/movies.dat\"\n",
    "movie_info, movie_cat, movie_titles = get_movie_info(movie_info_path)\n",
    "print(\"电影数量：\", len(movie_info))\n",
    "ID = 1\n",
    "print(\"原始的电影ID为 {} 的数据是：\".format(ID), data[ID-1])\n",
    "print(\"电影ID为 {} 的转换后数据是：\".format(ID), movie_info[str(ID)])\n",
    "\n",
    "print(\"电影种类对应序号：'Animation':{} 'Children's':{} 'Comedy':{}\".format(movie_cat['Animation'], \n",
    "                                                                   movie_cat[\"Children's\"], \n",
    "                                                                   movie_cat['Comedy']))\n",
    "print(\"电影名称对应序号：'The':{} 'Story':{} \".format(movie_titles['The'], movie_titles['Story']))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "从上面的结果来看，ml-1m数据集中一共有3883个不同的电影，每个电影信息包含电影ID、电影名称、电影类别，均已处理成数字形式。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## 评分数据处理\n",
    "\n",
    "有了用户数据和电影数据后，还需要获得用户对电影的评分数据，ml-1m数据集的评分数据在ratings.dat文件中。评分数据格式为UserID::MovieID::Rating::Timestamp，如下图。\n",
    "\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/e4b916b3d3bb48cdb00ab685817268ecd5f183f9e0564c6296b64ec346e1711b)\n",
    "\n",
    "\n",
    "这份数据很容易理解，如1::1193::5::978300760 表示ID为1的用户对电影ID为1193的评分是5。\n",
    "\n",
    "> <font size=2>978300760表示Timestamp数据，是标注数据时记录的时间信息，对当前任务来说是没有作用的数据，可以忽略这部分信息。</font>\n",
    "\n",
    "接下来，读取评分文件里的数据："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1::1193::5::978300760\n",
      "\n",
      "评分数据条数： 1000209\n",
      "用户ID： 1\n",
      "电影ID： 1193\n",
      "用户对电影的评分： 5\n"
     ]
    }
   ],
   "source": [
    "use_poster = False\n",
    "if use_poster:\n",
    "    rating_path = \"./work/ml-1m/new_rating.txt\"\n",
    "else:\n",
    "    rating_path = \"./work/ml-1m/ratings.dat\"\n",
    "# 打开文件，读取所有行到data中\n",
    "with open(rating_path, 'r') as f:\n",
    "    data = f.readlines()\n",
    "# 打印data的数据长度，以及第一条数据中的用户ID、电影ID和评分信息   \n",
    "item = data[0]\n",
    "\n",
    "print(item)\n",
    "\n",
    "item = item.strip().split(\"::\")\n",
    "usr_id,movie_id,score = item[0],item[1],item[2]\n",
    "print(\"评分数据条数：\", len(data))\n",
    "print(\"用户ID：\", usr_id)\n",
    "print(\"电影ID：\", movie_id)\n",
    "print(\"用户对电影的评分：\", score)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "从以上统计结果来看，一共有1000209条评分数据。电影评分数据不包含文本信息，可以将数据直接存到字典中。\n",
    "\n",
    "下面我们将评分数据封装到get_rating_info()函数中，并返回评分数据的信息。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ID为1的用户一共评价了53个电影\n"
     ]
    }
   ],
   "source": [
    "def get_rating_info(path):\n",
    "    # 打开文件，读取所有行到data中\n",
    "    with open(path, 'r') as f:\n",
    "        data = f.readlines()\n",
    "    # 创建一个字典\n",
    "    rating_info = {}\n",
    "    for item in data:\n",
    "        item = item.strip().split(\"::\")\n",
    "        # 处理每行数据，分别得到用户ID，电影ID，和评分\n",
    "        usr_id,movie_id,score = item[0],item[1],item[2]\n",
    "        if usr_id not in rating_info.keys():\n",
    "            rating_info[usr_id] = {movie_id:float(score)}\n",
    "        else:\n",
    "            rating_info[usr_id][movie_id] = float(score)\n",
    "    return rating_info\n",
    "\n",
    "# 获得评分数据\n",
    "#rating_path = \"./work/ml-1m/ratings.dat\"\n",
    "rating_info = get_rating_info(rating_path)\n",
    "print(\"ID为1的用户一共评价了{}个电影\".format(len(rating_info['1'])))\n",
    "\n"
   ]
  },
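  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "Because the ratings are stored as a nested dictionary mapping usr_id to {movie_id: score}, per-user statistics take only a dictionary lookup. A sketch on a toy dictionary (the values are illustrative, not real ml-1m data):\n",
    "\n",
    "```python\n",
    "# Toy stand-in for rating_info: usr_id -> {movie_id: score}\n",
    "rating_info = {'1': {'1193': 5.0, '661': 3.0}, '2': {'1193': 4.0}}\n",
    "\n",
    "scores = list(rating_info['1'].values())\n",
    "print(len(scores))                # number of movies user '1' rated -> 2\n",
    "print(sum(scores) / len(scores))  # user '1' average rating -> 4.0\n",
    "```"
   ]
  },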
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## 海报图像读取\n",
    "\n",
    "电影发布时，都会包含电影海报，海报图像的名字以\"mov_id\" + 电影ID + \".jpg\"的方式命名。因此，我们可以用电影ID去索引对应的海报图像。\n",
    "\n",
    "海报图像展示如下：\n",
    "\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/5085b69aef05498bb526aaf8c3d931d0e955ab066d5043478612284fee9b66ca)\n",
    "\n",
    "\n",
    "图1： 电影ID-2296的海报\n",
    "\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/4a1bb54764b54ac69ad0d7df5263621f045ee2de6a7141e5be5018f8c843f710)\n",
    "\n",
    "\n",
    "图2： 电影ID-2291的海报\n",
    "\n",
    "\n",
    "\n",
    "我们可以从新的评分数据文件 new_rating.txt 中获取到电影ID，进而索引到图像，实现如下："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<Figure size 640x480 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from PIL import Image\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# 使用海报图像和不使用海报图像的文件路径不同，处理方式相同\n",
    "use_poster = True\n",
    "if use_poster:\n",
    "    rating_path = \"./work/ml-1m/new_rating.txt\"\n",
    "else:\n",
    "    rating_path = \"./work/ml-1m/ratings.dat\"\n",
    "    \n",
    "with open(rating_path, 'r') as f:\n",
    "    data = f.readlines()\n",
    "    \n",
    "# 从新的rating文件中收集所有的电影ID\n",
    "mov_id_collect = []\n",
    "for item in data:\n",
    "    item = item.strip().split(\"::\")\n",
    "    usr_id,movie_id,score = item[0],item[1],item[2]\n",
    "    mov_id_collect.append(movie_id)\n",
    "\n",
    "\n",
    "# 根据电影ID读取图像\n",
    "poster_path = \"./work/ml-1m/posters/\"\n",
    "\n",
    "# 显示mov_id_collect中第几个电影ID的图像\n",
    "idx = 1\n",
    "\n",
    "poster = Image.open(poster_path+'mov_id{}.jpg'.format(str(mov_id_collect[idx])))\n",
    "\n",
    "plt.figure(\"Image\") # 图像窗口名称\n",
    "plt.imshow(poster)\n",
    "plt.axis('on') # 关掉坐标轴为 off\n",
    "plt.title(\"poster with ID {}\".format(mov_id_collect[idx])) # 图像题目\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## 构建数据读取器\n",
    "\n",
    "至此我们已经分别处理了用户、电影和评分数据，接下来我们要利用这些处理好的数据，构建一个数据读取器，方便在训练神经网络时直接调用。\n",
    "\n",
    "首先，构造一个函数，把读取并处理后的数据整合到一起，即在rating数据中补齐用户和电影的所有特征字段。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "数据集总数据数： 1000209\n"
     ]
    }
   ],
   "source": [
    "def get_dataset(usr_info, rating_info, movie_info):\n",
    "    trainset = []\n",
    "    # 按照评分数据的key值索引数据\n",
    "    for usr_id in rating_info.keys():\n",
    "        usr_ratings = rating_info[usr_id]\n",
    "        for movie_id in usr_ratings:\n",
    "            trainset.append({'usr_info': usr_info[usr_id],\n",
    "                             'mov_info': movie_info[movie_id],\n",
    "                             'scores': usr_ratings[movie_id]})\n",
    "    return trainset\n",
    "\n",
    "dataset = get_dataset(usr_info, rating_info, movie_info)\n",
    "print(\"数据集总数据数：\", len(dataset))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "接下来构建数据读取器函数load_data()，先看一下整体结构："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import random\n",
    "def load_data(dataset=None, mode='train'):\n",
    "    \n",
    "    \"\"\"定义一些超参数等等\"\"\"\n",
    "    \n",
    "    # 定义数据迭代加载器\n",
    "    def data_generator():\n",
    "        \n",
    "        \"\"\" 定义数据的处理过程\"\"\"\n",
    "        \n",
    "        data  = None\n",
    "        yield data\n",
    "        \n",
    "    # 返回数据迭代加载器\n",
    "    return data_generator\n",
    "        "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "我们来看一下完整的数据读取器函数实现，核心是将多个样本数据合并到一个列表（batch），当该列表达到batchsize后，以yield的方式返回（Python数据迭代器）。\n",
    "\n",
    "在进行批次数据拼合的同时，完成数据格式和数据尺寸的转换：\n",
    "\n",
    "* 由于飞桨框架的网络接入层要求将数据先转换成np.array的类型，再转换成框架内置变量variable的类型。所以在数据返回前，需将所有数据均转换成np.array的类型，方便后续处理。\n",
    "* 每个特征字段的尺寸也需要根据网络输入层的设计进行调整。根据之前的分析，用户和电影的所有原始特征可以分为四类，ID类（用户ID，电影ID，性别，年龄，职业）、列表类（电影类别）、文本类（电影名称）和图像类（电影海报）。因为每种特征后续接入的网络层方案不同，所以要求他们的数据尺寸也不同。这里我们先初步的了解即可，待后续阅读了模型设计章节后，将对输入输出尺寸有更好的理解。\n",
    "\n",
    "数据尺寸的说明：\n",
    "1. ID类（用户ID，电影ID，性别，年龄，职业）处理成（256，1）的尺寸，以便后续接入Embedding层。第一个维度256是batchsize，第二个维度是1，因为Embedding层要求输入数据的最后一维为1。\n",
    "2. 列表类（电影类别）处理成（256,6,1）的尺寸，6是电影最多的类比个数，以便后续接入全连接层。\n",
    "3. 文本类（电影名称）处理成（256,1,15,1）的尺寸，15是电影名称的最大单词数，以便接入2D卷积层。2D卷积层要求输入数据为四维，对应图像数据是【批次大小，通道数、图像的长、图像的宽】，其中RGB的彩色图像是3通道，灰度图像是单通道。\n",
    "4. 图像类（电影海报）处理成（256,3,64,64）的尺寸， 以便接入2D卷积层。图像的原始尺寸是180\\*270彩色图像，使用resize函数压缩成64\\*64的尺寸，减少网络计算。"
   ]
  },
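  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The four target shapes can be checked with a small NumPy sketch, using dummy zero arrays in place of real batches:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "BATCHSIZE = 256\n",
    "# ID-like features: (256, 1)\n",
    "usr_id_arr = np.expand_dims(np.zeros(BATCHSIZE, dtype=np.int64), axis=-1)\n",
    "# Genre lists: (256, 6, 1)\n",
    "mov_cat_arr = np.zeros([BATCHSIZE, 6], dtype=np.int64).reshape([BATCHSIZE, 6, 1])\n",
    "# Title words: (256, 1, 15, 1)\n",
    "mov_tit_arr = np.zeros([BATCHSIZE, 15], dtype=np.int64).reshape([BATCHSIZE, 1, 15, 1])\n",
    "# Posters: (256, 3, 64, 64)\n",
    "mov_poster_arr = np.zeros([BATCHSIZE, 3, 64, 64], dtype=np.float32)\n",
    "\n",
    "print(usr_id_arr.shape, mov_cat_arr.shape, mov_tit_arr.shape, mov_poster_arr.shape)\n",
    "```"
   ]
  },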
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import random\n",
    "def load_data(dataset=None, mode='train'):\n",
    "    use_poster = False\n",
    "    \n",
    "    # 定义数据迭代Batch大小\n",
    "    BATCHSIZE = 256\n",
    "\n",
    "    data_length = len(dataset)\n",
    "    index_list = list(range(data_length))\n",
    "    # 定义数据迭代加载器\n",
    "    def data_generator():\n",
    "        # 训练模式下，打乱训练数据\n",
    "        if mode == 'train':\n",
    "            random.shuffle(index_list)\n",
    "        # 声明每个特征的列表\n",
    "        usr_id_list,usr_gender_list,usr_age_list,usr_job_list = [], [], [], []\n",
    "        mov_id_list,mov_tit_list,mov_cat_list,mov_poster_list = [], [], [], []\n",
    "        score_list = []\n",
    "        # 索引遍历输入数据集\n",
    "        for idx, i in enumerate(index_list):\n",
    "            # 获得特征数据保存到对应特征列表中\n",
    "            usr_id_list.append(dataset[i]['usr_info']['usr_id'])\n",
    "            usr_gender_list.append(dataset[i]['usr_info']['gender'])\n",
    "            usr_age_list.append(dataset[i]['usr_info']['age'])\n",
    "            usr_job_list.append(dataset[i]['usr_info']['job'])\n",
    "\n",
    "            mov_id_list.append(dataset[i]['mov_info']['mov_id'])\n",
    "            mov_tit_list.append(dataset[i]['mov_info']['title'])\n",
    "            mov_cat_list.append(dataset[i]['mov_info']['category'])\n",
    "            mov_id = dataset[i]['mov_info']['mov_id']\n",
    "\n",
    "            if use_poster:\n",
    "                # 不使用图像特征时，不读取图像数据，加快数据读取速度\n",
    "                poster = Image.open(poster_path+'mov_id{}.jpg'.format(str(mov_id[0])))\n",
    "                poster = poster.resize([64, 64])\n",
    "                if len(poster.size) <= 2:\n",
    "                    poster = poster.convert(\"RGB\")\n",
    "\n",
    "                mov_poster_list.append(np.array(poster))\n",
    "\n",
    "            score_list.append(int(dataset[i]['scores']))\n",
    "            # 如果读取的数据量达到当前的batch大小，就返回当前批次\n",
    "            if len(usr_id_list)==BATCHSIZE:\n",
    "                # 转换列表数据为数组形式，reshape到固定形状，使数据的最后一维是 1\n",
    "                usr_id_arr = np.expand_dims(np.array(usr_id_list), axis=-1)\n",
    "                usr_gender_arr = np.expand_dims(np.array(usr_gender_list), axis=-1)\n",
    "                usr_age_arr = np.expand_dims(np.array(usr_age_list), axis=-1)\n",
    "                usr_job_arr = np.expand_dims(np.array(usr_job_list), axis=-1)\n",
    "\n",
    "                mov_id_arr = np.expand_dims(np.array(mov_id_list), axis=-1)\n",
    "\n",
    "                mov_cat_arr = np.reshape(np.array(mov_cat_list), [BATCHSIZE, 6, 1]).astype(np.int64)\n",
    "                mov_tit_arr = np.reshape(np.array(mov_tit_list), [BATCHSIZE, 1, 15, 1]).astype(np.int64)\n",
    "\n",
    "                if use_poster:\n",
    "                    # 像素值归一化到[-1, 1]，并由NHWC转置为NCHW\n",
    "                    mov_poster_arr = np.transpose(np.array(mov_poster_list)/127.5 - 1, [0, 3, 1, 2]).astype(np.float32)\n",
    "                else:\n",
    "                    mov_poster_arr = np.array([0.])\n",
    "                    \n",
    "                scores_arr = np.reshape(np.array(score_list), [-1, 1]).astype(np.float32)\n",
    "                \n",
    "                # 返回当前批次数据\n",
    "                yield [usr_id_arr, usr_gender_arr, usr_age_arr, usr_job_arr], \\\n",
    "                       [mov_id_arr, mov_cat_arr, mov_tit_arr, mov_poster_arr], scores_arr\n",
    "                \n",
    "                # 清空数据\n",
    "                usr_id_list, usr_gender_list, usr_age_list, usr_job_list = [], [], [], []\n",
    "                mov_id_list, mov_tit_list, mov_cat_list, score_list = [], [], [], []\n",
    "                mov_poster_list = []\n",
    "    return data_generator"
   ]
  },
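  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "上面的data_generator采用“闭包 + 生成器”的写法：外层函数保存数据集与配置，内层函数逐批yield数据。其核心的批次拼装逻辑可以抽象为如下极简示意（其中batch_size取4仅是演示用的假设值，与正文的BATCHSIZE=256无关）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import random\n",
    "\n",
    "def make_loader(dataset, mode='train', batch_size=4):\n",
    "    # 极简批次迭代器示意：batch_size为演示用假设值\n",
    "    index_list = list(range(len(dataset)))\n",
    "    def data_generator():\n",
    "        # 训练模式下打乱样本顺序\n",
    "        if mode == 'train':\n",
    "            random.shuffle(index_list)\n",
    "        batch = []\n",
    "        for i in index_list:\n",
    "            batch.append(dataset[i])\n",
    "            # 攒够一个批次就yield返回，并清空缓存\n",
    "            if len(batch) == batch_size:\n",
    "                yield batch\n",
    "                batch = []\n",
    "    return data_generator\n",
    "\n",
    "# 与正文实现相同，凑不满一个批次的尾部样本会被丢弃\n",
    "loader = make_loader(list(range(10)), mode='valid')\n",
    "for batch in loader():\n",
    "    print(batch)"
   ]
  },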
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "load_data()函数接收处理好的数据集，完成批次拼装并返回一个数据迭代器。\n",
    "\n",
    "我们将数据集按照8:2的比例划分为训练集和验证集，可以分别得到训练数据迭代器和验证数据迭代器。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "数据集总数量： 1000209\n",
      "训练集数量： 800167\n",
      "验证集数量: 200042\n"
     ]
    }
   ],
   "source": [
    "dataset = get_dataset(usr_info, rating_info, movie_info)\n",
    "print(\"数据集总数量：\", len(dataset))\n",
    "\n",
    "trainset = dataset[:int(0.8*len(dataset))]\n",
    "train_loader = load_data(trainset, mode=\"train\")\n",
    "print(\"训练集数量：\", len(trainset))\n",
    "\n",
    "validset = dataset[int(0.8*len(dataset)):]\n",
    "valid_loader = load_data(validset, mode='valid')\n",
    "print(\"验证集数量:\", len(validset))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "数据迭代器的使用方式如下："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "用户ID数据尺寸 (256, 1)\n",
      "电影ID数据尺寸 (256, 1) , 电影类别genres数据的尺寸 (256, 6, 1) , 电影名字title的尺寸 (256, 1, 15, 1)\n"
     ]
    }
   ],
   "source": [
    "for idx, data in enumerate(train_loader()):\n",
    "    usr_data, mov_data, score = data\n",
    "    \n",
    "    usr_id_arr, usr_gender_arr, usr_age_arr, usr_job_arr = usr_data\n",
    "    mov_id_arr, mov_cat_arr, mov_tit_arr, mov_poster_arr = mov_data\n",
    "    print(\"用户ID数据尺寸\", usr_id_arr.shape)\n",
    "    print(\"电影ID数据尺寸\", mov_id_arr.shape, \", 电影类别genres数据的尺寸\", mov_cat_arr.shape, \", 电影名字title的尺寸\", mov_tit_arr.shape)\n",
    "    break"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "观察输出结果可以发现，数据的最后一维是1，原因是这些数据输入到神经网络时，首先要经过Embedding层，该层要求输入数据的最后一维是1。"
   ]
  },
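  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "下面用一个小例子验证这一点：np.expand_dims在末尾增加一维，使形状从(batch,)变为(batch, 1)；而Embedding本质上可以理解为按ID查表取向量（这里用NumPy手工查表作示意，参数表是随机生成的假设值，并非真实模型参数）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "usr_id_list = [1, 2, 3, 4]                       # 一个小批次的用户ID\n",
    "usr_id_arr = np.expand_dims(np.array(usr_id_list), axis=-1)\n",
    "print(usr_id_arr.shape)                          # (4, 1)，最后一维是1\n",
    "\n",
    "# Embedding相当于按ID查表：每个ID取出参数表中对应的一行向量\n",
    "vocab_size, emb_dim = 10, 4                      # 演示用假设值\n",
    "emb_table = np.random.rand(vocab_size, emb_dim)  # 随机参数表，仅作示意\n",
    "vectors = emb_table[np.array(usr_id_list)]\n",
    "print(vectors.shape)                             # (4, 4)，每个ID映射为一个4维向量"
   ]
  },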
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## 数据处理完整代码\n",
    "\n",
    "到这里，我们已完成了ml-1m数据读取和处理，接下来，我们将数据处理的代码封装到一个Python类中，完整实现如下："
   ]
  },
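  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "在阅读完整实现之前，可以先单独看其中最基础的一步：按\"::\"切分一行文本并转成数字（下面的数据行是按ml-1m用户数据格式构造的示例）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "line = \"1::F::1::10::48067\\n\"        # 一行用户数据（示例）\n",
    "# 去掉行尾换行符后按\"::\"切分\n",
    "item = line.strip().split(\"::\")\n",
    "print(item)                           # ['1', 'F', '1', '10', '48067']\n",
    "# 将需要的字段转成数字，Zip-code不参与后续处理\n",
    "usr_id, gender, age, job = int(item[0]), item[1], int(item[2]), int(item[3])\n",
    "print(usr_id, gender, age, job)"
   ]
  },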
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "##Total dataset instances:  382499\n",
      "##MovieLens dataset information: \n",
      "usr num: 6040\n",
      "movies num: 3883\n",
      "打印用户ID，性别，年龄，职业数据的维度：\n",
      "(256, 1)\n",
      "(256, 1)\n",
      "(256, 1)\n",
      "(256, 1)\n",
      "打印电影ID，名字，类别数据的维度：\n",
      "(256, 1)\n",
      "(256, 6, 1)\n",
      "(256, 1, 15, 1)\n",
      "(1,)\n"
     ]
    }
   ],
   "source": [
    "import random\n",
    "import numpy as np\n",
    "from PIL import Image\n",
    "\n",
    "class MovieLen(object):\n",
    "    def __init__(self, use_poster):\n",
    "        self.use_poster = use_poster\n",
    "        # 声明每个数据文件的路径\n",
    "        usr_info_path = \"./work/ml-1m/users.dat\"\n",
    "        rating_path = \"./work/ml-1m/new_rating.txt\"\n",
    "\n",
    "        movie_info_path = \"./work/ml-1m/movies.dat\"\n",
    "        self.poster_path = \"./work/ml-1m/posters/\"\n",
    "        # 得到电影数据\n",
    "        self.movie_info, self.movie_cat, self.movie_title = self.get_movie_info(movie_info_path)\n",
    "        # 记录电影类别、名字单词和电影ID的最大编号\n",
    "        self.max_mov_cat = np.max([self.movie_cat[k] for k in self.movie_cat])\n",
    "        self.max_mov_tit = np.max([self.movie_title[k] for k in self.movie_title])\n",
    "        self.max_mov_id = np.max(list(map(int, self.movie_info.keys())))\n",
    "        # 记录用户数据的最大ID\n",
    "        self.max_usr_id = 0\n",
    "        self.max_usr_age = 0\n",
    "        self.max_usr_job = 0\n",
    "        # 得到用户数据\n",
    "        self.usr_info = self.get_usr_info(usr_info_path)\n",
    "        # 得到评分数据\n",
    "        self.rating_info = self.get_rating_info(rating_path)\n",
    "        # 构建数据集 \n",
    "        self.dataset = self.get_dataset(usr_info=self.usr_info,\n",
    "                                        rating_info=self.rating_info,\n",
    "                                        movie_info=self.movie_info)\n",
    "        # 划分数据集，获得数据加载器\n",
    "        self.train_dataset = self.dataset[:int(len(self.dataset)*0.9)]\n",
    "        self.valid_dataset = self.dataset[int(len(self.dataset)*0.9):]\n",
    "        print(\"##Total dataset instances: \", len(self.dataset))\n",
    "        print(\"##MovieLens dataset information: \\nusr num: {}\\n\"\n",
    "              \"movies num: {}\".format(len(self.usr_info),len(self.movie_info)))\n",
    "    # 得到电影数据\n",
    "    def get_movie_info(self, path):\n",
    "        # 打开文件，编码方式选择ISO-8859-1，读取所有数据到data中 \n",
    "        with open(path, 'r', encoding=\"ISO-8859-1\") as f:\n",
    "            data = f.readlines()\n",
    "        # 建立三个字典，分别用于存放电影所有信息、电影名字信息和类别信息\n",
    "        movie_info, movie_titles, movie_cat = {}, {}, {}\n",
    "        # 对电影名字、类别中不同的单词计数\n",
    "        t_count, c_count = 1, 1\n",
    "\n",
    "        # 按行读取数据并处理\n",
    "        for item in data:\n",
    "            item = item.strip().split(\"::\")\n",
    "            v_id = item[0]\n",
    "            v_title = item[1][:-7]\n",
    "            cats = item[2].split('|')\n",
    "            v_year = item[1][-5:-1]\n",
    "\n",
    "            titles = v_title.split()\n",
    "            # 统计电影名字的单词，并给每个单词一个序号，放在movie_titles中\n",
    "            for t in titles:\n",
    "                if t not in movie_titles:\n",
    "                    movie_titles[t] = t_count\n",
    "                    t_count += 1\n",
    "            # 统计电影类别单词，并给每个单词一个序号，放在movie_cat中\n",
    "            for cat in cats:\n",
    "                if cat not in movie_cat:\n",
    "                    movie_cat[cat] = c_count\n",
    "                    c_count += 1\n",
    "            # 补0使电影名称对应的列表长度为15\n",
    "            v_tit = [movie_titles[k] for k in titles]\n",
    "            while len(v_tit)<15:\n",
    "                v_tit.append(0)\n",
    "            # 补0使电影种类对应的列表长度为6\n",
    "            v_cat = [movie_cat[k] for k in cats]\n",
    "            while len(v_cat)<6:\n",
    "                v_cat.append(0)\n",
    "            # 保存电影数据到movie_info中\n",
    "            movie_info[v_id] = {'mov_id': int(v_id),\n",
    "                                'title': v_tit,\n",
    "                                'category': v_cat,\n",
    "                                'years': int(v_year)}\n",
    "        return movie_info, movie_cat, movie_titles\n",
    "\n",
    "    def get_usr_info(self, path):\n",
    "        # 性别转换函数：M-0，F-1\n",
    "        def gender2num(gender):\n",
    "            return 1 if gender == 'F' else 0\n",
    "\n",
    "        # 打开文件，读取所有行到data中\n",
    "        with open(path, 'r') as f:\n",
    "            data = f.readlines()\n",
    "        # 建立用户信息的字典\n",
    "        use_info = {}\n",
    "\n",
    "        #按行索引数据\n",
    "        for item in data:\n",
    "            # 去除每一行中和数据无关的部分\n",
    "            item = item.strip().split(\"::\")\n",
    "            usr_id = item[0]\n",
    "            # 将字符数据转成数字并保存在字典中\n",
    "            use_info[usr_id] = {'usr_id': int(usr_id),\n",
    "                                'gender': gender2num(item[1]),\n",
    "                                'age': int(item[2]),\n",
    "                                'job': int(item[3])}\n",
    "            self.max_usr_id = max(self.max_usr_id, int(usr_id))\n",
    "            self.max_usr_age = max(self.max_usr_age, int(item[2]))\n",
    "            self.max_usr_job = max(self.max_usr_job, int(item[3]))\n",
    "        return use_info\n",
    "    # 得到评分数据\n",
    "    def get_rating_info(self, path):\n",
    "        # 读取文件里的数据\n",
    "        with open(path, 'r') as f:\n",
    "            data = f.readlines()\n",
    "        # 将数据保存在字典中并返回\n",
    "        rating_info = {}\n",
    "        for item in data:\n",
    "            item = item.strip().split(\"::\")\n",
    "            usr_id,movie_id,score = item[0],item[1],item[2]\n",
    "            if usr_id not in rating_info:\n",
    "                rating_info[usr_id] = {movie_id:float(score)}\n",
    "            else:\n",
    "                rating_info[usr_id][movie_id] = float(score)\n",
    "        return rating_info\n",
    "    # 构建数据集\n",
    "    def get_dataset(self, usr_info, rating_info, movie_info):\n",
    "        trainset = []\n",
    "        for usr_id in rating_info.keys():\n",
    "            usr_ratings = rating_info[usr_id]\n",
    "            for movie_id in usr_ratings:\n",
    "                trainset.append({'usr_info': usr_info[usr_id],\n",
    "                                 'mov_info': movie_info[movie_id],\n",
    "                                 'scores': usr_ratings[movie_id]})\n",
    "        return trainset\n",
    "    \n",
    "    def load_data(self, dataset=None, mode='train'):\n",
    "        use_poster = False\n",
    "\n",
    "        # 定义数据迭代Batch大小\n",
    "        BATCHSIZE = 256\n",
    "\n",
    "        data_length = len(dataset)\n",
    "        index_list = list(range(data_length))\n",
    "        # 定义数据迭代加载器\n",
    "        def data_generator():\n",
    "            # 训练模式下，打乱训练数据\n",
    "            if mode == 'train':\n",
    "                random.shuffle(index_list)\n",
    "            # 声明每个特征的列表\n",
    "            usr_id_list,usr_gender_list,usr_age_list,usr_job_list = [], [], [], []\n",
    "            mov_id_list,mov_tit_list,mov_cat_list,mov_poster_list = [], [], [], []\n",
    "            score_list = []\n",
    "            # 索引遍历输入数据集\n",
    "            for idx, i in enumerate(index_list):\n",
    "                # 获得特征数据保存到对应特征列表中\n",
    "                usr_id_list.append(dataset[i]['usr_info']['usr_id'])\n",
    "                usr_gender_list.append(dataset[i]['usr_info']['gender'])\n",
    "                usr_age_list.append(dataset[i]['usr_info']['age'])\n",
    "                usr_job_list.append(dataset[i]['usr_info']['job'])\n",
    "\n",
    "                mov_id_list.append(dataset[i]['mov_info']['mov_id'])\n",
    "                mov_tit_list.append(dataset[i]['mov_info']['title'])\n",
    "                mov_cat_list.append(dataset[i]['mov_info']['category'])\n",
    "                mov_id = dataset[i]['mov_info']['mov_id']\n",
    "\n",
    "                if use_poster:\n",
    "                    # 仅在使用图像特征时才读取海报图像，否则跳过以加快数据读取速度\n",
    "                    poster = Image.open(self.poster_path+'mov_id{}.jpg'.format(mov_id))\n",
    "                    poster = poster.resize([64, 64])\n",
    "                    # 若是灰度图，转成RGB三通道\n",
    "                    if poster.mode != 'RGB':\n",
    "                        poster = poster.convert(\"RGB\")\n",
    "\n",
    "                    mov_poster_list.append(np.array(poster))\n",
    "\n",
    "                score_list.append(int(dataset[i]['scores']))\n",
    "                # 如果读取的数据量达到当前的batch大小，就返回当前批次\n",
    "                if len(usr_id_list)==BATCHSIZE:\n",
    "                    # 转换列表数据为数组形式，reshape到固定形状\n",
    "                    usr_id_arr = np.expand_dims(np.array(usr_id_list), axis=-1)\n",
    "                    usr_gender_arr = np.expand_dims(np.array(usr_gender_list), axis=-1)\n",
    "                    usr_age_arr = np.expand_dims(np.array(usr_age_list), axis=-1)\n",
    "                    usr_job_arr = np.expand_dims(np.array(usr_job_list), axis=-1)\n",
    "\n",
    "                    mov_id_arr = np.expand_dims(np.array(mov_id_list), axis=-1)\n",
    "                    mov_cat_arr = np.reshape(np.array(mov_cat_list), [BATCHSIZE, 6, 1]).astype(np.int64)\n",
    "                    mov_tit_arr = np.reshape(np.array(mov_tit_list), [BATCHSIZE, 1, 15, 1]).astype(np.int64)\n",
    "\n",
    "                    if use_poster:\n",
    "                        # 像素值归一化到[-1, 1]，并由NHWC转置为NCHW\n",
    "                        mov_poster_arr = np.transpose(np.array(mov_poster_list)/127.5 - 1, [0, 3, 1, 2]).astype(np.float32)\n",
    "                    else:\n",
    "                        mov_poster_arr = np.array([0.])\n",
    "\n",
    "                    scores_arr = np.reshape(np.array(score_list), [-1, 1]).astype(np.float32)\n",
    "\n",
    "                    # 返回当前批次数据\n",
    "                    yield [usr_id_arr, usr_gender_arr, usr_age_arr, usr_job_arr], \\\n",
    "                           [mov_id_arr, mov_cat_arr, mov_tit_arr, mov_poster_arr], scores_arr\n",
    "\n",
    "                    # 清空数据\n",
    "                    usr_id_list, usr_gender_list, usr_age_list, usr_job_list = [], [], [], []\n",
    "                    mov_id_list, mov_tit_list, mov_cat_list, score_list = [], [], [], []\n",
    "                    mov_poster_list = []\n",
    "        return data_generator\n",
    "\n",
    "# 声明数据读取类\n",
    "dataset = MovieLen(False)\n",
    "# 定义数据读取器\n",
    "train_loader = dataset.load_data(dataset=dataset.train_dataset, mode='train')\n",
    "# 迭代的读取数据， Batchsize = 256\n",
    "for idx, data in enumerate(train_loader()):\n",
    "    usr, mov, score = data\n",
    "    print(\"打印用户ID，性别，年龄，职业数据的维度：\")\n",
    "    for v in usr:\n",
    "        print(v.shape)\n",
    "    print(\"打印电影ID，名字，类别数据的维度：\")\n",
    "    for v in mov:\n",
    "        print(v.shape)\n",
    "    \n",
    "    break\n",
    "    \n"
   ]
  },
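  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "get_movie_info中用while循环补0，把电影名字和类别分别对齐到长度15和6。这一补齐逻辑可以抽象为一个小函数（示意代码：这里额外做了截断处理，而原实现默认名字不超过15个单词、类别不超过6个）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "def pad_to(ids, length):\n",
    "    # 末尾补0到固定长度；超长时截断（原实现假设不会超长）\n",
    "    return (ids + [0] * length)[:length]\n",
    "\n",
    "print(pad_to([3], 15))        # 'Jumanji'只有1个单词\n",
    "print(pad_to([4, 2, 5], 6))   # 3个类别编号补齐到长度6"
   ]
  },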
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## 总结\n",
    "\n",
    "本节主要介绍了电影推荐数据集ml-1m，并对数据集中的用户数据、电影数据、评分数据进行介绍和处理，将字符串形式的数据转成了数字表示的数据形式，并构建了数据读取器，最终将数据处理和数据读取封装到一个Python类中，如下图所示：\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/43700fdb6536461b8d26a77bce2b07c12b7d321f051e46cd809b07b507e706a6\" width=\"380\" ></center>\n",
    "\n",
    "<center><br>图1：数据处理流程图 </br></center>\n",
    "\n",
    "各数据处理前后格式如下：\n",
    "| 数据分类 | 输入数据样例 | 输出数据样例 |\n",
    "| -------- | -------- | -------- |\n",
    "| **用户数据**     | UserID::Gender::Age::Occupation::Zip-code <br>1::F::1::10::48067     | {'usr_id': 1, 'gender': 1, 'age': 1, 'job': 10}    |\n",
    "| **电影数据**     | MovieID::Title::Genres <br>2::Jumanji (1995)::Adventure\\|Children's\\|Fantasy     | {'mov_id': 2, 'title': [3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'category': [4, 2, 5, 0, 0, 0]}     |\n",
    "| **评分数据**     | UserID::MovieID::Rating <br>1::1193::5    | {'1': {'1193': 5.0}}|\n",
    "| **海报数据**     | \"mov_id\" + MovieID+\".jpg\"格式的图片    | 64\\*64\\*3的像素矩阵|\n",
    "\n",
    "虽然我们将文本的数据转换成了数字表示形式，但是这些数据依然是离散的，不适合直接输入到神经网络中，还需要对其进行Embedding操作，将其映射为固定长度的向量。\n",
    "\n",
    "接下来我们开始个性化电影推荐的第二个部分：模型设计。"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "PaddlePaddle 1.6.2 (Python 3.5)",
   "language": "python",
   "name": "py35-paddle1.2.0"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
