{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Online Movie Recommendation Service with Spark & Flask: Building the Web Service\n",
    "\n",
    "This tutorial explains in detail how to use a Spark machine learning model (or any other kind of data analytics object) within a web service. This opens the door to on-line predictions, recommendations, and so on. Thanks to Spark's own Python capabilities, and to Python-based frameworks such as Flask, this task becomes very easy.\n",
    "\n",
    "This tutorial can be used independently to build a web service on top of any kind of Spark model. However, it combines closely with our tutorial on building a movie recommendation model with Spark MLlib and the MovieLens dataset. By pairing the two, you will be able to develop a complete on-line movie recommendation service.\n",
    "\n",
    "Our complete web service consists of three Python files:\n",
    "\n",
    "engine.py defines the recommendation engine and wraps all the Spark-related computations.\n",
    "\n",
    "app.py is a Flask web application that defines a RESTful-like API around the engine.\n",
    "\n",
    "server.py initialises a CherryPy web server after creating a Spark context and the Flask web app from the previous step.\n",
    "\n",
    "Let us explain each of them in detail, together with the peculiarities of deploying such a system with Spark as a computation engine. We will focus on how to use Spark models in the web context we are dealing with. For explanations about the MovieLens data and how to build the model with Spark, please refer to the tutorial about building the model.\n",
    "\n",
    "The recommendation engine\n",
    "\n",
    "At the very core of our movie recommendation web service sits a recommendation engine (the engine.py file we eventually deploy). It is represented by the class RecommendationEngine, and this section describes its functionality and implementation step by step.\n",
    "\n",
    "Starting the engine\n",
    "\n",
    "When the engine is initialised, we need to train the ALS model for the first time. Optionally (we do not do it here), we could load a previously persisted model and use it for recommendations. We may also need to load or pre-compute any RDDs that will later be used to make recommendations.\n",
    "We do this kind of operation in the __init__ method of the RecommendationEngine class (using two private methods). In this case we save no time: the whole process is repeated every time the engine is created."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from pyspark.mllib.recommendation import ALS\n",
    " \n",
    "import logging\n",
    "logging.basicConfig(level=logging.INFO)\n",
    "logger = logging.getLogger(__name__)\n",
    "\n",
    "\n",
    "def get_counts_and_averages(ID_and_ratings_tuple):\n",
    "    \"\"\"Given a tuple (movieID, ratings_iterable),\n",
    "    returns (movieID, (ratings_count, ratings_avg))\n",
    "    \"\"\"\n",
    "    nratings = len(ID_and_ratings_tuple[1])\n",
    "    return ID_and_ratings_tuple[0], (nratings, float(sum(x for x in ID_and_ratings_tuple[1]))/nratings)\n",
    "\n",
    "class RecommendationEngine:\n",
    "    \"\"\"A movie recommendation engine\n",
    "    \"\"\"\n",
    " \n",
    "    def __count_and_average_ratings(self):\n",
    "        \"\"\"Updates the movies ratings counts from \n",
    "        the current data self.ratings_RDD\n",
    "        \"\"\"\n",
    "        logger.info(\"Counting movie ratings...\")\n",
    "        movie_ID_with_ratings_RDD = self.ratings_RDD.map(lambda x: (x[1], x[2])).groupByKey()\n",
    "        movie_ID_with_avg_ratings_RDD = movie_ID_with_ratings_RDD.map(get_counts_and_averages)\n",
    "        self.movies_rating_counts_RDD = movie_ID_with_avg_ratings_RDD.map(lambda x: (x[0], x[1][0]))\n",
    " \n",
    " \n",
    "    def __train_model(self):\n",
    "        \"\"\"Train the ALS model with the current dataset\n",
    "        \"\"\"\n",
    "        logger.info(\"Training the ALS model...\")\n",
    "        self.model = ALS.train(self.ratings_RDD, self.rank, seed=self.seed,\n",
    "                               iterations=self.iterations, lambda_=self.regularization_parameter)\n",
    "        logger.info(\"ALS model built!\")\n",
    " \n",
    " \n",
    "    def __init__(self, sc, dataset_path):\n",
    "        \"\"\"Init the recommendation engine given a Spark context and a dataset path\n",
    "        \"\"\"\n",
    " \n",
    "        logger.info(\"Starting up the Recommendation Engine: \")\n",
    " \n",
    "        self.sc = sc\n",
    " \n",
    "        # Load ratings data for later use\n",
    "        logger.info(\"Loading Ratings data...\")\n",
    "        ratings_file_path = os.path.join(dataset_path, 'ratings.csv')\n",
    "        ratings_raw_RDD = self.sc.textFile(ratings_file_path)\n",
    "        ratings_raw_data_header = ratings_raw_RDD.take(1)[0]\n",
    "        self.ratings_RDD = ratings_raw_RDD.filter(lambda line: line!=ratings_raw_data_header)\\\n",
    "            .map(lambda line: line.split(\",\")).map(lambda tokens: (int(tokens[0]),int(tokens[1]),float(tokens[2]))).cache()\n",
    "        # Load movies data for later use\n",
    "        logger.info(\"Loading Movies data...\")\n",
    "        movies_file_path = os.path.join(dataset_path, 'movies.csv')\n",
    "        movies_raw_RDD = self.sc.textFile(movies_file_path)\n",
    "        movies_raw_data_header = movies_raw_RDD.take(1)[0]\n",
    "        self.movies_RDD = movies_raw_RDD.filter(lambda line: line!=movies_raw_data_header)\\\n",
    "            .map(lambda line: line.split(\",\")).map(lambda tokens: (int(tokens[0]),tokens[1],tokens[2])).cache()\n",
    "        self.movies_titles_RDD = self.movies_RDD.map(lambda x: (int(x[0]),x[1])).cache()\n",
    "        # Pre-calculate movies ratings counts\n",
    "        self.__count_and_average_ratings()\n",
    " \n",
    "        # Train the model\n",
    "        self.rank = 8\n",
    "        self.seed = 5\n",
    "        self.iterations = 10\n",
    "        self.regularization_parameter = 0.1\n",
    "        self.__train_model()"
   ]
  },
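  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A quick, Spark-free sketch of the aggregation performed by\n",
    "# __count_and_average_ratings: starting from (movie_id, rating) pairs,\n",
    "# we group by movie and keep (movie_id, ratings_count).\n",
    "# The sample data below is hypothetical, not part of the MovieLens files.\n",
    "sample_ratings = [(1, 4.0), (1, 3.0), (2, 5.0), (1, 5.0)]\n",
    "\n",
    "grouped = {}\n",
    "for movie_id, rating in sample_ratings:\n",
    "    grouped.setdefault(movie_id, []).append(rating)\n",
    "\n",
    "# equivalent of applying get_counts_and_averages after groupByKey\n",
    "movie_counts_and_avgs = {m: (len(rs), sum(rs) / len(rs)) for m, rs in grouped.items()}\n",
    "# equivalent of the final map that keeps only the counts\n",
    "movie_rating_counts = {m: c for m, (c, _) in movie_counts_and_avgs.items()}\n",
    "\n",
    "print(movie_counts_and_avgs)  # {1: (3, 4.0), 2: (1, 5.0)}\n",
    "print(movie_rating_counts)    # {1: 3, 2: 1}"
   ]
  },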
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "All the code in __init__ and in the two private methods was explained in the tutorial about building the model.\n",
    "\n",
    "Adding new ratings\n",
    "\n",
    "When using collaborative filtering with Spark's Alternating Least Squares, we need to re-compute the prediction model for every batch of new user ratings. This was explained in our previous tutorial about building the model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def add_ratings(self, ratings):\n",
    "    \"\"\"Add additional movie ratings in the format (user_id, movie_id, rating)\n",
    "    \"\"\"\n",
    "    # Convert ratings to an RDD\n",
    "    new_ratings_RDD = self.sc.parallelize(ratings)\n",
    "    # Add new ratings to the existing ones\n",
    "    self.ratings_RDD = self.ratings_RDD.union(new_ratings_RDD)\n",
    "    # Re-compute movie ratings count (the private methods were defined inside\n",
    "    # the class body, so their names are mangled when called from outside it)\n",
    "    self._RecommendationEngine__count_and_average_ratings()\n",
    "    # Re-train the ALS model with the new ratings\n",
    "    self._RecommendationEngine__train_model()\n",
    "\n",
    "    return ratings\n",
    "\n",
    "# Attach the function to a class method\n",
    "RecommendationEngine.add_ratings = add_ratings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Making recommendations\n",
    "\n",
    "How to use our ALS model to make recommendations was also explained in the tutorial about building the movie recommender. Here we will essentially repeat the equivalent code, wrapped inside methods of our RecommendationEngine class, together with a private method that each prediction method will use."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def __predict_ratings(self, user_and_movie_RDD):\n",
    "    \"\"\"Gets predictions for a given (userID, movieID) formatted RDD\n",
    "    Returns: an RDD with format (movieTitle, movieRating, numRatings)\n",
    "    \"\"\"\n",
    "    predicted_RDD = self.model.predictAll(user_and_movie_RDD)\n",
    "    predicted_rating_RDD = predicted_RDD.map(lambda x: (x.product, x.rating))\n",
    "    predicted_rating_title_and_count_RDD = \\\n",
    "        predicted_rating_RDD.join(self.movies_titles_RDD).join(self.movies_rating_counts_RDD)\n",
    "    predicted_rating_title_and_count_RDD = \\\n",
    "        predicted_rating_title_and_count_RDD.map(lambda r: (r[1][0][1], r[1][0][0], r[1][1]))\n",
    "\n",
    "    return predicted_rating_title_and_count_RDD\n",
    "    \n",
    "def get_top_ratings(self, user_id, movies_count):\n",
    "    \"\"\"Recommends up to movies_count top unrated movies to user_id\n",
    "    \"\"\"\n",
    "    # Get pairs of (userID, movieID) for user_id unrated movies\n",
    "    user_unrated_movies_RDD = self.ratings_RDD.filter(lambda rating: not rating[0]==user_id).map(lambda x: (user_id, x[1])).distinct()\n",
    "    # Get predicted ratings\n",
    "    ratings = self.__predict_ratings(user_unrated_movies_RDD).filter(lambda r: r[2]>=25).takeOrdered(movies_count, key=lambda x: -x[1])\n",
    "\n",
    "    return ratings\n",
    "\n",
    "# Attach the functions to class methods\n",
    "RecommendationEngine.__predict_ratings = __predict_ratings\n",
    "RecommendationEngine.get_top_ratings = get_top_ratings"
   ]
  },
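  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A small, Spark-free illustration of the ranking done in get_top_ratings:\n",
    "# keep movies with at least 25 ratings and take the highest predicted ones.\n",
    "# The (title, predicted_rating, num_ratings) tuples are hypothetical values.\n",
    "predicted = [\n",
    "    ('Movie A', 4.8, 12),\n",
    "    ('Movie B', 4.5, 40),\n",
    "    ('Movie C', 4.9, 103),\n",
    "    ('Movie D', 3.2, 57),\n",
    "]\n",
    "\n",
    "top2 = sorted(\n",
    "    (p for p in predicted if p[2] >= 25),  # same threshold as the filter above\n",
    "    key=lambda x: -x[1],                   # same ordering as takeOrdered\n",
    ")[:2]\n",
    "\n",
    "print(top2)  # [('Movie C', 4.9, 103), ('Movie B', 4.5, 40)]"
   ]
  },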
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Apart from getting the top unrated movies, we also want to get ratings for particular movies. We will do so with a new method in our RecommendationEngine."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_ratings_for_movie_ids(self, user_id, movie_ids):\n",
    "    \"\"\"Given a user_id and a list of movie_ids, predict ratings for them \n",
    "    \"\"\"\n",
    "    requested_movies_RDD = self.sc.parallelize(movie_ids).map(lambda x: (user_id, x))\n",
    "    # Get predicted ratings\n",
    "    ratings = self.__predict_ratings(requested_movies_RDD).collect()\n",
    "\n",
    "    return ratings\n",
    "\n",
    "# Attach the function to a class method\n",
    "RecommendationEngine.get_ratings_for_movie_ids = get_ratings_for_movie_ids"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Building a web API around our engine with Flask\n",
    "\n",
    "Flask is a web micro-framework for Python. Starting a web API with it is very easy: we just import it in our script and use some decorators to associate our service endpoints with Python functions. In our case we will wrap some of these endpoints around our RecommendationEngine methods and interchange JSON-formatted data with the web client. It is in fact so simple that we will show the whole app.py here, instead of going piece by piece."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from flask import Blueprint\n",
    "main = Blueprint('main', __name__)\n",
    " \n",
    "import json\n",
    "from engine import RecommendationEngine\n",
    " \n",
    "import logging\n",
    "logging.basicConfig(level=logging.INFO)\n",
    "logger = logging.getLogger(__name__)\n",
    " \n",
    "from flask import Flask, request\n",
    " \n",
    "@main.route(\"/<int:user_id>/ratings/top/<int:count>\", methods=[\"GET\"])\n",
    "def top_ratings(user_id, count):\n",
    "    logger.debug(\"User %s TOP ratings requested\", user_id)\n",
    "    top_ratings = recommendation_engine.get_top_ratings(user_id,count)\n",
    "    return json.dumps(top_ratings)\n",
    " \n",
    "@main.route(\"/<int:user_id>/ratings/<int:movie_id>\", methods=[\"GET\"])\n",
    "def movie_ratings(user_id, movie_id):\n",
    "    logger.debug(\"User %s rating requested for movie %s\", user_id, movie_id)\n",
    "    ratings = recommendation_engine.get_ratings_for_movie_ids(user_id, [movie_id])\n",
    "    return json.dumps(ratings)\n",
    " \n",
    " \n",
    "@main.route(\"/<int:user_id>/ratings\", methods = [\"POST\"])\n",
    "def add_ratings(user_id):\n",
    "    # get the ratings from the Flask POST request object\n",
    "    ratings_list = list(request.form.keys())[0].strip().split(\"\\n\")\n",
    "    ratings_list = [x.split(\",\") for x in ratings_list]\n",
    "    # create a list with the format required by the engine (user_id, movie_id, rating)\n",
    "    ratings = [(user_id, int(x[0]), float(x[1])) for x in ratings_list]\n",
    "    # add them to the model using the engine API\n",
    "    recommendation_engine.add_ratings(ratings)\n",
    " \n",
    "    return json.dumps(ratings)\n",
    " \n",
    " \n",
    "def create_app(spark_context, dataset_path):\n",
    "    global recommendation_engine \n",
    " \n",
    "    recommendation_engine = RecommendationEngine(spark_context, dataset_path)    \n",
    "    \n",
    "    app = Flask(__name__)\n",
    "    app.register_blueprint(main)\n",
    "    return app"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Basically, we use the app as follows:\n",
    "\n",
    "We initialise things when calling create_app. Here the RecommendationEngine object is created, and then we associate the @main.route decorators defined above. Each decorator is defined by (see the Flask docs):\n",
    "A route, that is, its URL, which may contain parameters between <>. They are mapped to the function arguments.\n",
    "A list of available HTTP methods.\n",
    "There are three such decorators defined, corresponding to the three RecommendationEngine methods:\n",
    "GET /<user_id>/ratings/top/<count> gets top recommendations from the engine.\n",
    "GET /<user_id>/ratings/<movie_id> gets the predicted rating for an individual movie.\n",
    "POST /<user_id>/ratings adds new ratings. The format is a series of lines (ending with the newline separator) with movie_id and rating separated by commas. For example, the following file corresponds to the ten new user ratings used as an example in the tutorial about building the model:\n",
    "260,9  \n",
    "1,8  \n",
    "16,7  \n",
    "25,8  \n",
    "32,9  \n",
    "335,4  \n",
    "379,3  \n",
    "296,7  \n",
    "858,10  \n",
    "50,8"
   ]
  },
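  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch of how the body of a POST to /<user_id>/ratings is parsed,\n",
    "# mirroring the add_ratings endpoint above (plain Python, no Flask request).\n",
    "# The payload and the user id 0 are assumed values for illustration.\n",
    "payload = '260,9\\n1,8\\n16,7'\n",
    "user_id = 0\n",
    "\n",
    "ratings_list = [line.split(',') for line in payload.strip().split('\\n')]\n",
    "ratings = [(user_id, int(m), float(r)) for m, r in ratings_list]\n",
    "\n",
    "print(ratings)  # [(0, 260, 9.0), (0, 1, 8.0), (0, 16, 7.0)]"
   ]
  },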
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Deploying a WSGI server using CherryPy\n",
    "\n",
    "Among other things, the CherryPy framework features a reliable, HTTP/1.1-compliant, WSGI thread-pooled web server. It is also very easy to run multiple HTTP servers (e.g. on multiple ports) at once. All this makes it a perfect candidate for an easy-to-deploy production web server for our on-line recommendation service.\n",
    "\n",
    "Our use of the CherryPy server is relatively simple. Once again, we will show the complete server.py script here and then explain it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time, sys, cherrypy, os\n",
    "from paste.translogger import TransLogger\n",
    "from app import create_app\n",
    "from pyspark import SparkContext, SparkConf\n",
    " \n",
    "def init_spark_context():\n",
    "    # load spark context\n",
    "    conf = SparkConf().setAppName(\"movie_recommendation-server\")\n",
    "    # IMPORTANT: pass additional Python modules to each worker\n",
    "    sc = SparkContext(conf=conf, pyFiles=['engine.py', 'app.py'])\n",
    " \n",
    "    return sc\n",
    " \n",
    " \n",
    "def run_server(app):\n",
    " \n",
    "    # Enable WSGI access logging via Paste\n",
    "    app_logged = TransLogger(app)\n",
    " \n",
    "    # Mount the WSGI callable object (app) on the root directory\n",
    "    cherrypy.tree.graft(app_logged, '/')\n",
    " \n",
    "    # Set the configuration of the web server\n",
    "    cherrypy.config.update({\n",
    "        'engine.autoreload.on': True,\n",
    "        'log.screen': True,\n",
    "        'server.socket_port': 5432,\n",
    "        'server.socket_host': '0.0.0.0'\n",
    "    })\n",
    " \n",
    "    # Start the CherryPy WSGI web server\n",
    "    cherrypy.engine.start()\n",
    "    cherrypy.engine.block()\n",
    " \n",
    " \n",
    "if __name__ == \"__main__\":\n",
    "    # Init spark context and load libraries\n",
    "    sc = init_spark_context()\n",
    "    dataset_path = os.path.join('datasets', 'ml-latest')\n",
    "    app = create_app(sc, dataset_path)\n",
    " \n",
    "    # start web server\n",
    "    run_server(app)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "This is pretty standard use of CherryPy. If we look at the main entry point, we do three things:\n",
    "\n",
    "Create a Spark context as defined in the function init_spark_context, passing additional Python modules there.\n",
    "\n",
    "Create the Flask app calling the create_app function we defined in app.py.\n",
    "\n",
    "Run the server itself.\n",
    "\n",
    "See the following section about starting the server.\n",
    "\n",
    "Running the server with Spark\n",
    "\n",
    "In order to have the server running while being able to access a Spark context and cluster, we need to submit the server.py file to pySpark using spark-submit. The different parameters used with this command are better explained in the Spark documentation. In our case, we will use something like the following:\n",
    "\n",
    "~/spark-1.3.1-bin-hadoop2.6/bin/spark-submit --master spark://169.254.206.2:7077 --total-executor-cores 14 --executor-memory 6g server.py\n",
    "\n",
    "The important bits are:\n",
    "\n",
    "Use spark-submit, not pyspark directly.\n",
    "\n",
    "The --master parameter must point to your Spark cluster setup (which can be local).\n",
    "\n",
    "You can pass additional configuration parameters such as --total-executor-cores and --executor-memory.\n",
    "\n",
    "You will see an output like the following:\n",
    "\n",
    "INFO:engine:Starting up the Recommendation Engine: \n",
    "INFO:engine:Loading Ratings data...\n",
    "INFO:engine:Loading Movies data...\n",
    "INFO:engine:Counting movie ratings...\n",
    "INFO:engine:Training the ALS model...\n",
    "... More Spark and CherryPy logging\n",
    "INFO:engine:ALS model built!\n",
    "[05/Jul/2015:14:06:29] ENGINE Bus STARTING\n",
    "[05/Jul/2015:14:06:29] ENGINE Started monitor thread 'Autoreloader'.\n",
    "[05/Jul/2015:14:06:29] ENGINE Started monitor thread '_TimeoutMonitor'.\n",
    "[05/Jul/2015:14:06:29] ENGINE Serving on http://0.0.0.0:5432\n",
    "[05/Jul/2015:14:06:29] ENGINE Bus STARTED\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Some considerations when using multiple scripts and spark-submit\n",
    "\n",
    "There are two issues we need to work around when using Spark in a deployment like this. The first one is that a Spark cluster is a distributed environment of Workers orchestrated from the Spark Master, where the Python script is launched. This means that the master is the only one with access to the submitted script and the local additional files. If we want the workers to be able to access additional imported Python modules, they either have to be part of our Python distribution or we need to pass them implicitly. We do this by using the pyFiles=['engine.py', 'app.py'] parameter when creating the SparkContext object.\n",
    "\n",
    "The second issue is related to the previous one, but it is a bit more tricky. In Spark, when using a transformation (e.g. map on an RDD), we cannot make reference to other RDDs or to objects that are not globally available in the execution context; for example, we cannot make reference to a class instance variable. Because of this, we have defined all the functions that are passed to RDD transformations outside the RecommendationEngine class.\n",
    "\n",
    "Trying the service\n",
    "\n",
    "Let's now give the service a try, using the same data we used in the tutorial about building the model. That is, first we will add ratings, and then we will get top ratings and individual ratings.\n",
    "\n",
    "POSTing new ratings\n",
    "\n",
    "First, we need to have our service running as explained in the previous section. Once it is running, we will use curl to post new ratings from the shell. If we have the file user_ratings.file in the current folder (see Getting the source code below), just execute the following command."
   ]
  }
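  ,
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "A hedged sketch of such a curl call follows; the exact command is not part of this section, and the server address placeholder and the user id 0 are assumptions (the server listens on port 5432 as configured above):\n",
    "\n",
    "curl --data-binary @user_ratings.file http://<SERVER_IP>:5432/0/ratings"
   ]
  }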
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
