{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "基于Spark&Flask的在线电影推荐服务——构建推荐者\n",
    "\n",
    "本笔记本解释了如何使用MovieLens数据集，通过协同过滤和 Spark 的交替最小 Saqures 实现构建电影推荐器。它由两部分组成。第一个是关于获取和解析电影和收视率数据到sparkrdds中。第二个是关于构建和使用推荐者，并将其持久化，以便在我们的在线推荐系统中使用。\n",
    "\n",
    "本教程可以单独用于构建基于MovieLens数据集的电影推荐模型。第一部分中关于如何在公共MovieLens数据集中使用ALS的大部分代码，都来自于我对Anthony D.的CS100.1x介绍apachespark的大数据中提出的一个练习的解决方案。这也是自2014年在Spark峰会上公开发布的。在这里，我添加了一些小的修改，以使用更大的数据集，并编写了有关如何存储和重新加载模型以供以后使用的代码。\n",
    "\n",
    "获取和处理数据\n",
    "\n",
    "为了建立一个在线电影推荐使用火花，我们需要有我们的模型数据预处理尽可能。每次需要进行新的推荐时，解析数据集并构建模型并不是最好的策略。\n",
    "\n",
    "我们可以预先计算的任务列表包括：\n",
    "\n",
    "加载和分析数据集。持久化生成的RDD以供以后使用。\n",
    "\n",
    "利用完整的数据集建立推荐模型。保留数据集以供以后使用。\n",
    "\n",
    "本笔记本解释了其中的第一个任务。\n",
    "\n",
    "文件下载\n",
    "\n",
    "GroupLens Research从MovieLens网站上收集并提供了评级数据集。根据数据集的大小，数据集是在不同的时间段内收集的。它们可以在这里找到。\n",
    "\n",
    "在本例中，我们将使用最新的数据集：\n",
    "\n",
    "小：10万收视率和2488个标签应用程序应用于8570部电影，706个用户。上次更新日期：2021年6月。\n",
    "\n",
    "完整：21000000收视率和470000标签应用程序应用于27000部电影，230000用户。上次更新日期：2021年6月。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "complete_dataset_url = 'http://files.grouplens.org/datasets/movielens/ml-latest.zip'\n",
    "small_dataset_url = 'http://files.grouplens.org/datasets/movielens/ml-latest-small.zip'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "我们还需要定义下载位置。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "datasets_path = os.path.join('..', 'datasets')\n",
    "\n",
    "complete_dataset_path = os.path.join(datasets_path, 'ml-latest.zip')\n",
    "small_dataset_path = os.path.join(datasets_path, 'ml-latest-small.zip')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "现在我们可以继续两个下载。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import urllib\n",
    "\n",
    "small_f = urllib.urlretrieve (small_dataset_url, small_dataset_path)\n",
    "complete_f = urllib.urlretrieve (complete_dataset_url, complete_dataset_path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "这两个文件都是zip文件，其中包含一个包含收视率、电影等的文件夹。我们需要将它们解压缩到单独的文件夹中，以便以后使用每个文件。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import zipfile\n",
    "\n",
    "with zipfile.ZipFile(small_dataset_path, \"r\") as z:\n",
    "    z.extractall(datasets_path)\n",
    "\n",
    "with zipfile.ZipFile(complete_dataset_path, \"r\") as z:\n",
    "    z.extractall(datasets_path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "加载和解析数据集\n",
    "\n",
    "我们已经准备好读入每个文件并创建一个由解析的行组成的RDD。\n",
    "\n",
    "评级数据集（ratings.csv）中的每一行的格式如下：\n",
    "\n",
    "userId，movieId，rating，timestamp\n",
    "\n",
    "movies（movies.csv）数据集中的每一行的格式如下：\n",
    "\n",
    "电影ID、片名、类型\n",
    "\n",
    "体裁的格式如下：\n",
    "\n",
    "Genre1 | Genre2 | Genre3。。。\n",
    "\n",
    "标记文件（tags.csv）的格式为：\n",
    "\n",
    "userId，movieId，tag，timestamp\n",
    "\n",
    "最后，links.csv文件的格式为：\n",
    "\n",
    "电影ID，imdbId，tmdbId\n",
    "\n",
    "这些文件的格式是统一和简单的，因此我们可以使用Python split（）在将它们加载到rdd后解析它们的行。解析电影和分级文件会产生两个RDD：\n",
    "\n",
    "对于ratings数据集中的每一行，我们创建一个元组（UserID、MovieID、Rating）。我们删除时间戳是因为此推荐者不需要它。\n",
    "\n",
    "对于movies数据集中的每一行，我们创建一个元组（MovieID，Title）。我们放弃流派，因为我们不使用他们这个推荐。\n",
    "\n",
    "让我们加载原始评级数据。我们需要过滤掉每个文件中包含的头文件。"
   ]
  },
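  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick plain-Python illustration of the parsing step described above (outside Spark, on a made-up sample line), splitting a raw CSV line and keeping the first three fields gives the (UserID, MovieID, Rating) tuple:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Plain-Python sketch of the per-line parsing used below (the sample line is made up)\n",
    "sample_line = \"1,31,2.5,1260759144\"  # userId,movieId,rating,timestamp\n",
    "tokens = sample_line.split(\",\")\n",
    "rating_tuple = (tokens[0], tokens[1], tokens[2])  # drop the timestamp\n",
    "print(rating_tuple)  # ('1', '31', '2.5')"
   ]
  },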
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "small_ratings_file = os.path.join(datasets_path, 'ml-latest-small', 'ratings.csv')\n",
    "\n",
    "small_ratings_raw_data = sc.textFile(small_ratings_file)\n",
    "small_ratings_raw_data_header = small_ratings_raw_data.take(1)[0]\n",
    "small_ratings_data = small_ratings_raw_data.filter(lambda line: line!=small_ratings_raw_data_header)\\\n",
    "    .map(lambda line: line.split(\",\")).map(lambda tokens: (tokens[0],tokens[1],tokens[2])).cache()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "为了便于说明，我们可以使用RDD的前几行来查看结果。在最后一个脚本中，我们在需要之前不会调用任何Spark操作（例如take），因为它们会触发集群中的实际计算。\n",
    "我们以类似的方式处理movies.csv文件。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "small_ratings_data.take(3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "small_movies_file = os.path.join(datasets_path, 'ml-latest-small', 'movies.csv')\n",
    "\n",
    "small_movies_raw_data = sc.textFile(small_movies_file)\n",
    "small_movies_raw_data_header = small_movies_raw_data.take(1)[0]\n",
    "\n",
    "small_movies_data = small_movies_raw_data.filter(lambda line: line!=small_movies_raw_data_header)\\\n",
    "    .map(lambda line: line.split(\",\")).map(lambda tokens: (tokens[0],tokens[1])).cache()\n",
    "    \n",
    "small_movies_data.take(3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "下面几节将介绍协作过滤，并解释如何使用Spark MLlib构建推荐者模型。我们将通过解释如何使用这样的模型来提出建议，以及如何将其持久化以供以后使用（例如在我们的Python/flask web服务中）来结束本教程。\n",
    "\n",
    "协同过滤\n",
    "\n",
    "在协作过滤中，我们通过从许多用户（协作）收集偏好或口味信息来预测（过滤）用户的兴趣。基本假设是，如果用户a在某个问题上与用户B有相同的意见，则a更有可能对不同的问题x有B的意见，而不是随机选择用户对x的意见。\n",
    "\n",
    "下面的图片（来自维基百科）展示了一个协作过滤的例子。起初，人们对不同的项目（如视频、图像、游戏）进行评分。然后，系统对用户对尚未评级的项目的评级进行预测。新的预测是建立在其他用户的现有评分与活跃用户的评分相似的基础上的。在图像中，系统预测用户将不喜欢该视频。\n",
    "https://camo.githubusercontent.com/a6e062883b83adb3b65b5a9e167a3a6f5e5f9a19/68747470733a2f2f75706c6f61642e77696b696d656469612e6f72672f77696b6970656469612f636f6d6d6f6e732f352f35322f436f6c6c61626f7261746976655f66696c746572696e672e676966\n",
    "Spark机器学习MLlib库通过使用交替最小二乘法提供了一个协作过滤实现。MLlib中的实现具有以下参数：\n",
    "\n",
    "numBlocks是用于并行计算的块数（设置为-1以自动配置）。\n",
    "\n",
    "秩是模型中潜在因素的个数。\n",
    "\n",
    "迭代次数是要运行的迭代次数。\n",
    "\n",
    "lambda在ALS中指定正则化参数。\n",
    "\n",
    "implicitPrefs指定是使用显式反馈ALS变量还是使用适用于隐式反馈数据的变量。\n",
    "\n",
    "alpha是一个适用于ALS隐式反馈变量的参数，它控制偏好观察中的基线置信度。\n",
    "利用小数据集选择ALS参数\n",
    "\n",
    "为了确定最佳ALS参数，我们将使用小数据集。我们首先需要将其划分为训练、验证和测试数据集。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "training_RDD, validation_RDD, test_RDD = small_ratings_data.randomSplit([6, 2, 2], seed=0L)\n",
    "validation_for_predict_RDD = validation_RDD.map(lambda x: (x[0], x[1]))\n",
    "test_for_predict_RDD = test_RDD.map(lambda x: (x[0], x[1]))"
   ]
  },
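  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before the training phase, here is a tiny pure-Python sketch (not Spark; rank fixed to 1, a fully observed toy matrix, made-up ratings) of what ALS does: alternately solving a regularized least-squares problem for the user factors and the item factors. It only illustrates the role of the rank, iterations, and lambda parameters listed above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Tiny pure-Python sketch of ALS (rank 1, dense toy matrix, made-up ratings).\n",
    "# MLlib does the same alternation, distributed and at higher rank.\n",
    "R = [[5.0, 3.0, 1.0],\n",
    "     [4.0, 2.0, 1.0]]      # toy user x movie rating matrix\n",
    "m, n = len(R), len(R[0])\n",
    "lam = 0.1                  # regularization parameter (lambda_ in MLlib)\n",
    "u = [1.0] * m              # user factors (rank 1)\n",
    "v = [1.0] * n              # item factors (rank 1)\n",
    "\n",
    "def sq_error():\n",
    "    return sum((R[i][j] - u[i] * v[j]) ** 2 for i in range(m) for j in range(n))\n",
    "\n",
    "start = sq_error()\n",
    "for _ in range(10):        # iterations\n",
    "    for i in range(m):     # solve for user factors with item factors fixed\n",
    "        u[i] = sum(R[i][j] * v[j] for j in range(n)) / (sum(x * x for x in v) + lam)\n",
    "    for j in range(n):     # solve for item factors with user factors fixed\n",
    "        v[j] = sum(R[i][j] * u[i] for i in range(m)) / (sum(x * x for x in u) + lam)\n",
    "\n",
    "print(start, sq_error())   # the reconstruction error shrinks"
   ]
  },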
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "训练阶段"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pyspark.mllib.recommendation import ALS\n",
    "import math\n",
    "\n",
    "seed = 5L\n",
    "iterations = 10\n",
    "regularization_parameter = 0.1\n",
    "ranks = [4, 8, 12]\n",
    "errors = [0, 0, 0]\n",
    "err = 0\n",
    "tolerance = 0.02\n",
    "\n",
    "min_error = float('inf')\n",
    "best_rank = -1\n",
    "best_iteration = -1\n",
    "for rank in ranks:\n",
    "    model = ALS.train(training_RDD, rank, seed=seed, iterations=iterations,\n",
    "                      lambda_=regularization_parameter)\n",
    "    predictions = model.predictAll(validation_for_predict_RDD).map(lambda r: ((r[0], r[1]), r[2]))\n",
    "    rates_and_preds = validation_RDD.map(lambda r: ((int(r[0]), int(r[1])), float(r[2]))).join(predictions)\n",
    "    error = math.sqrt(rates_and_preds.map(lambda r: (r[1][0] - r[1][1])**2).mean())\n",
    "    errors[err] = error\n",
    "    err += 1\n",
    "    print 'For rank %s the RMSE is %s' % (rank, error)\n",
    "    if error < min_error:\n",
    "        min_error = error\n",
    "        best_rank = rank\n",
    "\n",
    "print 'The best model was trained with rank %s' % best_rank"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "等级4的RMSE为0.963681878574\n",
    "\n",
    "等级8的RMSE为0.96250475933\n",
    "\n",
    "排名12的RMSE为0.9716475632\n",
    "\n",
    "最佳模型的训练等级为8级\n",
    "\n",
    "但让我们解释一下。首先，让我们看看我们的预测。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "predictions.take(3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "基本上我们有UserID，MovieID，和Rating，就像我们在ratings数据集中一样。在这种情况下，预测的第三个元素，电影和用户的评级，是由我们的ALS模型预测的。\n",
    "\n",
    "然后我们将这些数据与我们的验证数据（包括评级的数据）结合起来，结果如下所示："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rates_and_preds.take(3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "为此，我们应用一个平方差，然后使用mean（）操作得到MSE并应用sqrt。\n",
    "\n",
    "最后对所选模型进行了测试。"
   ]
  },
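  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an aside, the same squared-difference, mean(), and sqrt pipeline can be sketched in plain Python on made-up (actual, predicted) pairs, without Spark:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "# Plain-Python sketch of the RMSE computation above (the pairs are made up)\n",
    "rates_and_preds_local = [(5.0, 4.5), (3.0, 3.5), (1.0, 2.0)]\n",
    "squared_diffs = [(actual - pred) ** 2 for actual, pred in rates_and_preds_local]\n",
    "rmse = math.sqrt(sum(squared_diffs) / len(squared_diffs))\n",
    "print(rmse)  # about 0.707"
   ]
  },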
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = ALS.train(training_RDD, best_rank, seed=seed, iterations=iterations,\n",
    "                      lambda_=regularization_parameter)\n",
    "predictions = model.predictAll(test_for_predict_RDD).map(lambda r: ((r[0], r[1]), r[2]))\n",
    "rates_and_preds = test_RDD.map(lambda r: ((int(r[0]), int(r[1])), float(r[2]))).join(predictions)\n",
    "error = math.sqrt(rates_and_preds.map(lambda r: (r[1][0] - r[1][1])**2).mean())\n",
    "    \n",
    "print('For testing data the RMSE is %s' % (error))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "对于测试数据，RMSE为0.972342381898\n",
    "\n",
    "使用完整的数据集构建最终模型\n",
    "\n",
    "为了建立我们的推荐模型，我们将使用完整的数据集。因此，我们需要像处理小数据集一样处理它。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load the complete dataset file\n",
    "complete_ratings_file = os.path.join(datasets_path, 'ml-latest', 'ratings.csv')\n",
    "complete_ratings_raw_data = sc.textFile(complete_ratings_file)\n",
    "complete_ratings_raw_data_header = complete_ratings_raw_data.take(1)[0]\n",
    "\n",
    "# Parse\n",
    "complete_ratings_data = complete_ratings_raw_data.filter(lambda line: line!=complete_ratings_raw_data_header)\\\n",
    "    .map(lambda line: line.split(\",\")).map(lambda tokens: (int(tokens[0]),int(tokens[1]),float(tokens[2]))).cache()\n",
    "    \n",
    "print \"There are %s recommendations in the complete dataset\" % (complete_ratings_data.count())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "完整的数据集中有21063128条建议\n",
    "\n",
    "现在我们已经准备好训练推荐模型了。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "training_RDD, test_RDD = complete_ratings_data.randomSplit([7, 3], seed=0L)\n",
    "\n",
    "complete_model = ALS.train(training_RDD, best_rank, seed=seed, \n",
    "                           iterations=iterations, lambda_=regularization_parameter)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "测试"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_for_predict_RDD = test_RDD.map(lambda x: (x[0], x[1]))\n",
    "\n",
    "predictions = complete_model.predictAll(test_for_predict_RDD).map(lambda r: ((r[0], r[1]), r[2]))\n",
    "rates_and_preds = test_RDD.map(lambda r: ((int(r[0]), int(r[1])), float(r[2]))).join(predictions)\n",
    "error = math.sqrt(rates_and_preds.map(lambda r: (r[1][0] - r[1][1])**2).mean())\n",
    "    \n",
    "print('For testing data the RMSE is %s' % (error))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "对于测试数据，RMSE为0.82183583368\n",
    "\n",
    "我们可以看到，当使用更大的数据集时，我们是如何得到更准确的推荐者的。\n",
    "\n",
    "如何提出建议\n",
    "\n",
    "虽然我们的目标是建立一个在线电影推荐人，现在我们知道如何准备好我们的推荐模型，我们可以尝试提供一些电影推荐。这将帮助我们在以后构建web服务时收集推荐引擎，并解释如何在任何其他情况下使用该模型。\n",
    "\n",
    "使用协同过滤时，获得推荐并不像使用先前生成的模型预测新条目那么简单。相反，我们需要再次训练模型，但要包括新的用户偏好，以便与数据集中的其他用户进行比较。也就是说，每当我们有新的用户评级时，推荐人都需要接受培训（当然，一个模型可以被多个用户使用！）。这使得这个过程非常昂贵，这也是为什么可伸缩性是一个问题（并引发一个解决方案！）的原因之一。一旦我们训练了我们的模型，我们就可以重用它来获得给定用户的最高推荐或者某部电影的个人评分。这些操作的成本比训练模型本身要低。\n",
    "\n",
    "因此，让我们首先加载电影的完整文件，以便以后使用。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "complete_movies_file = os.path.join(datasets_path, 'ml-latest', 'movies.csv')\n",
    "complete_movies_raw_data = sc.textFile(complete_movies_file)\n",
    "complete_movies_raw_data_header = complete_movies_raw_data.take(1)[0]\n",
    "\n",
    "# Parse\n",
    "complete_movies_data = complete_movies_raw_data.filter(lambda line: line!=complete_movies_raw_data_header)\\\n",
    "    .map(lambda line: line.split(\",\")).map(lambda tokens: (int(tokens[0]),tokens[1],tokens[2])).cache()\n",
    "\n",
    "complete_movies_titles = complete_movies_data.map(lambda x: (int(x[0]),x[1]))\n",
    "    \n",
    "print \"There are %s movies in the complete dataset\" % (complete_movies_titles.count())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "完整的数据集中有27303部电影\n",
    "\n",
    "我们想做的另一件事，是给一些最低收视率的电影推荐。为此，我们需要计算每部电影的收视率。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_counts_and_averages(ID_and_ratings_tuple):\n",
    "    nratings = len(ID_and_ratings_tuple[1])\n",
    "    return ID_and_ratings_tuple[0], (nratings, float(sum(x for x in ID_and_ratings_tuple[1]))/nratings)\n",
    "\n",
    "movie_ID_with_ratings_RDD = (complete_ratings_data.map(lambda x: (x[1], x[2])).groupByKey())\n",
    "movie_ID_with_avg_ratings_RDD = movie_ID_with_ratings_RDD.map(get_counts_and_averages)\n",
    "movie_rating_counts_RDD = movie_ID_with_avg_ratings_RDD.map(lambda x: (x[0], x[1][0]))"
   ]
  },
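  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since get_counts_and_averages is plain Python, we can sanity-check it outside Spark. After groupByKey(), each RDD element looks like (movieID, iterable-of-ratings); the movie ID and ratings below are made up:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check of the helper defined above (made-up movie ID and ratings),\n",
    "# shaped like one element after groupByKey()\n",
    "example = (296, [5.0, 4.0, 3.0])\n",
    "print(get_counts_and_averages(example))  # (296, (3, 4.0))"
   ]
  },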
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "添加新用户分级\n",
    "\n",
    "现在我们需要为新用户评价一些电影。我们将把它们放在一个新的RDD中，并使用用户ID 0，该ID在MovieLens数据集中没有分配。检查dataset movies文件中的ID-to-title分配（这样您就知道您实际在为哪些电影评分）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "new_user_ID = 0\n",
    "\n",
    "# The format of each line is (userID, movieID, rating)\n",
    "new_user_ratings = [\n",
    "     (0,260,9), # Star Wars (1977)\n",
    "     (0,1,8), # Toy Story (1995)\n",
    "     (0,16,7), # Casino (1995)\n",
    "     (0,25,8), # Leaving Las Vegas (1995)\n",
    "     (0,32,9), # Twelve Monkeys (a.k.a. 12 Monkeys) (1995)\n",
    "     (0,335,4), # Flintstones, The (1994)\n",
    "     (0,379,3), # Timecop (1994)\n",
    "     (0,296,7), # Pulp Fiction (1994)\n",
    "     (0,858,10) , # Godfather, The (1972)\n",
    "     (0,50,8) # Usual Suspects, The (1995)\n",
    "    ]\n",
    "new_user_ratings_RDD = sc.parallelize(new_user_ratings)\n",
    "print('New user ratings: %s' % new_user_ratings_RDD.take(10))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "新用户评级：[（0，260，9），（0，1，8），（0，16，7），（0，25，8），（0，32，9），（0，335，4），（0，379，3），（0，296，7），（0，858，10），（0，50，8）]\n",
    "\n",
    "现在我们将它们添加到我们将用来训练推荐模型的数据中。我们使用Spark的union（）变换。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "complete_data_with_new_ratings_RDD = complete_ratings_data.union(new_user_ratings_RDD)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "最后，我们使用之前选择的所有参数（当使用小数据集时）来训练ALS模型。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from time import time\n",
    "\n",
    "t0 = time()\n",
    "new_ratings_model = ALS.train(complete_data_with_new_ratings_RDD, best_rank, seed=seed, \n",
    "                              iterations=iterations, lambda_=regularization_parameter)\n",
    "tt = time() - t0\n",
    "\n",
    "print \"New model trained in %s seconds\" % round(tt,3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "新模型训练时间为56.61秒\n",
    "\n",
    "花了一些时间。我们将需要重复，每次用户添加新的评级。理想情况下，我们将分批进行，而不是为每个用户进入系统的每个评级。\n",
    "\n",
    "获取顶级推荐\n",
    "\n",
    "现在让我们得到一些建议！为此，我们将获得一个RDD的所有电影的新用户还没有评级。我们将它们与模型一起预测收视率。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "new_user_ratings_ids = map(lambda x: x[1], new_user_ratings) # get just movie IDs\n",
    "# keep just those not on the ID list (thanks Lei Li for spotting the error!)\n",
    "new_user_unrated_movies_RDD = (complete_movies_data.filter(lambda x: x[0] not in new_user_ratings_ids).map(lambda x: (new_user_ID, x[0])))\n",
    "\n",
    "# Use the input RDD, new_user_unrated_movies_RDD, with new_ratings_model.predictAll() to predict new ratings for the movies\n",
    "new_user_recommendations_RDD = new_ratings_model.predictAll(new_user_unrated_movies_RDD)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "我们已经准备好了我们的建议。现在我们可以打印出25部预测收视率最高的电影。加入他们的电影RDD，以获得标题，收视率计数，以获得最少数量的电影计数。首先，我们将进行连接，看看结果是什么样子。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Transform new_user_recommendations_RDD into pairs of the form (Movie ID, Predicted Rating)\n",
    "new_user_recommendations_rating_RDD = new_user_recommendations_RDD.map(lambda x: (x.product, x.rating))\n",
    "new_user_recommendations_rating_title_and_count_RDD = \\\n",
    "    new_user_recommendations_rating_RDD.join(complete_movies_titles).join(movie_rating_counts_RDD)\n",
    "new_user_recommendations_rating_title_and_count_RDD.take(3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "所以我们需要把它弄平一点才能有（标题、评级、评级计数）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "new_user_recommendations_rating_title_and_count_RDD = \\\n",
    "    new_user_recommendations_rating_title_and_count_RDD.map(lambda r: (r[1][0][1], r[1][0][0], r[1][1]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "最后，为新用户获取评分最高的推荐，过滤掉评分低于 25 的电影。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "top_movies = new_user_recommendations_rating_title_and_count_RDD.filter(lambda r: r[2]>=25).takeOrdered(25, key=lambda x: -x[1])\n",
    "\n",
    "print ('TOP recommended movies (with more than 25 reviews):\\n%s' %\n",
    "        '\\n'.join(map(str, top_movies)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "最佳推荐电影（评论超过25条）：\n",
    "\n",
    "（u“教父：第二部分”，8.50374912918670129198）\n",
    "\n",
    "（u“内战”，8.386497469089297257）\n",
    "\n",
    "（u‘冰冻星球（2011）’，8.372705479107108，31）\n",
    "\n",
    "（u“肖申克的救赎”，8.258510064442426，67741）\n",
    "\n",
    "（u'Cosmos（1980）’，8.252254825768972，948）\n",
    "\n",
    "（u'Band of Brothers（2001）’，8.225114960316244450）\n",
    "\n",
    "（u’Generation Kill（2008）”，8.206487040524653，52）\n",
    "\n",
    "（u“辛德勒名单（1993）”，8.172761674773625，53609）\n",
    "\n",
    "（u'Dr.Strangelove or:How I Learned to Stop nerving and Love the Bomb（1964）’，8.16622978676416823915）\n",
    "\n",
    "（u“一只飞过杜鹃巢（1975）”，8.1561702297057732948）\n",
    "\n",
    "（卡萨布兰卡（1942），8.141303207981174，26114）\n",
    "\n",
    "（u'Seven Samurai（Shichinin no Samurai）（1954年），8.139633165142612，11796）\n",
    "\n",
    "（u'Goodfellas（1990年），8.12931139003904827123）\n",
    "\n",
    "（《星球大战：第五集帝国反击》（1980）8.12422570240964710）\n",
    "\n",
    "（u'Jazz（2001）’，8.078538221315313，25）\n",
    "\n",
    "（u《长夜漫漫（2000）》，8.050176820606127，34）\n",
    "\n",
    "（阿拉伯的劳伦斯（1962），8.04133148994881413452）\n",
    "\n",
    "（u'Raiders of the Lost Ark（Indiana Jones and the Raiders of the Lost Ark）（1981年），8.039942481552845908）\n",
    "\n",
    "（u'12愤怒的男人（1957）’，8.011389274280754，13235）\n",
    "\n",
    "（u“天气真好（2012）”，8.007734839026181，35）\n",
    "\n",
    "（u'Apocalypse Now（1979）”，8.005094327199552，23905）\n",
    "\n",
    "（u‘光荣之路（1957）’，7.999377863942673598）\n",
    "\n",
    "（u‘后窗（1954）’，7.986086520354021417996）\n",
    "\n",
    "（u‘游戏状态（2003）’，7.981582126801772，27）\n",
    "\n",
    "（唐人街（1974），7.978673289692703，16195）"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "获取个人评分\n",
    "\n",
    "另一个有用的用例是为给定的用户获得特定电影的预测分级。这个过程类似于之前的顶级推荐检索，但是，我们不会对用户尚未评分的每一部电影都使用preditall，我们只会将一个条目传递给该方法，其中包含要预测评分的电影。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "my_movie = sc.parallelize([(0, 500)]) # Quiz Show (1994)\n",
    "individual_movie_rating_RDD = new_ratings_model.predictAll(new_user_unrated_movies_RDD)\n",
    "individual_movie_rating_RDD.take(1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "新用户不太可能喜欢那个……显然我们可以在该列表中包含我们需要的电影！\n",
    "持久化模型\n",
    "\n",
    "或者，我们可能希望保留基本模型，以便以后在我们的在线建议中使用。虽然每次我们有新的用户评级时都会生成一个新的模型，但是为了节省启动服务器时的时间等，存储当前的模型可能是值得的。如果我们保留一些我们生成的RDD，特别是那些需要更长时间处理的RDD，我们也可能会节省时间。例如，以下行保存并加载ALS模型。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pyspark.mllib.recommendation import MatrixFactorizationModel\n",
    "\n",
    "model_path = os.path.join('..', 'models', 'movie_lens_als')\n",
    "\n",
    "# Save and load model\n",
    "model.save(sc, model_path)\n",
    "same_model = MatrixFactorizationModel.load(sc, model_path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "除此之外，您将在文件系统中看到，有一个文件夹将产品和用户数据转换为拼花格式的文件。\n",
    "\n",
    "流派和其他领域\n",
    "\n",
    "为了简化转换和整个教程，我们没有使用类型和时间戳字段。合并它们并不代表任何问题。一个很好的方法是按其中任何一个过滤推荐（例如按类型的推荐，或最近的推荐），就像我们用最少的评分数所做的那样。"
   ]
  }
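  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a plain-Python sketch of that idea (made-up titles, genres, and predicted ratings; a Spark version would use filter() on the joined RDD), filtering recommendations by genre only requires parsing the Genre1|Genre2|... field:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Plain-Python sketch of genre filtering (titles, genres, and ratings are made up).\n",
    "# Each entry mirrors (Title, Predicted Rating, Genres), with genres formatted as in movies.csv.\n",
    "recommendations = [\n",
    "    ('Movie A (1999)', 8.1, 'Crime|Drama'),\n",
    "    ('Movie B (2005)', 7.9, 'Comedy'),\n",
    "    ('Movie C (1987)', 7.5, 'Drama|Romance'),\n",
    "]\n",
    "\n",
    "drama_only = [r for r in recommendations if 'Drama' in r[2].split('|')]\n",
    "print(drama_only)  # Movie A and Movie C remain"
   ]
  }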
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.5"
  },
  "tianchi_metadata": {
   "competitions": [],
   "datasets": [],
   "description": "",
   "notebookId": "225034",
   "source": "dsw"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
