{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3小时入门Spark之Graphx图计算"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2019-12-04T02:33:59.332459Z",
     "start_time": "2019-12-04T02:33:56.193Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "spark = org.apache.spark.sql.SparkSession@25e787ad\n",
       "sc = org.apache.spark.SparkContext@1609045a\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "writeCsvOnly_t: (df: org.apache.spark.sql.DataFrame, path: String, sep: String)Unit\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.SparkContext@1609045a"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import org.apache.spark.sql.SparkSession\n",
    "import org.apache.spark.graphx._\n",
    "import org.apache.spark.sql.DataFrame\n",
    "import org.apache.spark.rdd.RDD\n",
    "\n",
    "val spark = SparkSession.builder()\n",
    "   .master(\"local[4]\").appName(\"graph\")\n",
    "   .getOrCreate()\n",
    "\n",
    "import spark.implicits._\n",
    "\n",
    "val sc = spark.sparkContext\n",
    "\n",
    "//将DataFrame写入单个本地文件\n",
    "def writeCsvOnly_t(df: DataFrame, path: String, sep: String = \"\\t\"): Unit = {\n",
    "    val header = df.columns.reduce(_+sep+_)\n",
    "    val values = df.rdd.map(_.toSeq.reduce(_+sep+_)).collect()\n",
    "    val writer = new java.io.PrintWriter(path)\n",
    "    writer.println(header)\n",
    "    for(v<-values){\n",
    "      writer.println(v)\n",
    "    }\n",
    "    writer.close\n",
    "  }"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "heading_collapsed": true
   },
   "source": [
    "### 一，图的基本概念"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
    "图(graph)有时候又被称为网络(network), 是一种适合表现事物之间关联关系的数据结构。\n",
    "\n",
    "\n",
    "1，图的组成\n",
    "\n",
    "图的基本组成是顶点(vertex)和边(edge)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
    "2，图的分类\n",
    "\n",
    "有向图和无向图：根据边是否有方向，图可以分成为有向图和无向图。有向图的边从源顶点出发，指向目标顶点。在无向图中，一个顶点上的边的数量叫做这个顶点的度。在有向图中，一个顶点上出发的边的数量叫做这个顶点的出度，汇集到一个顶点上的边的数量叫做这个顶点的入度。\n",
    "\n",
    "有环图和无环图：如果有向图中存在一些边构成闭合的环，称为有环图，反之为无环图。有环图上设计算法需要考虑终止条件，否则算法可能会沿着环永远循环下去。\n",
    "\n",
    "多重图和伪图：如果两个顶点之间可以有多条平行边，称为多重图。如果存在自环，即由一个顶点指向自己的边，则称为伪图。Graphx的图都是伪图。\n",
    "\n",
    "属性图和非属性图：如果顶点和边是包括属性的，称为属性图，否则是非属性图。非属性图作用不大。通常顶点和边至少有一个是包括属性的，Graphx的图都是属性图。\n",
    "\n",
    "二分图：如果图的顶点被分成两个不同的子集，边的源顶点始终来自其中一个子集，目标顶点始终来自另外一个子集。这种图称为二分图。二分图可用于交友网站，源顶点来自男性集合，目标顶点来自女性集合。二分图也可以用于推荐系统，源顶点来自用户，目标顶点来自商品。\n"
   ]
  },
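  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
    "As a minimal sketch of the bipartite idea above (the user and product IDs here are hypothetical, chosen only for illustration), a recommendation-style bipartite graph could be built like this:\n",
    "\n",
    "```scala\n",
    "// Hypothetical data: users take IDs 1-2, products take IDs 101-102.\n",
    "// Every edge goes from a user vertex to a product vertex, so the graph is bipartite.\n",
    "val userProductEdges: RDD[Edge[Int]] = sc.parallelize(Array(\n",
    "  Edge(1L, 101L, 1), // user 1 bought product 101\n",
    "  Edge(1L, 102L, 1),\n",
    "  Edge(2L, 101L, 1)))\n",
    "val bipartite = Graph.fromEdges(userProductEdges, defaultValue = 0)\n",
    "```"
   ]
  },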
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
    "3，图的表示\n",
    "\n",
    "如果图的边是没有属性的，可以用稀疏的邻接矩阵进行表示。\n",
    "\n",
    "在Graphx中，用顶点属性表VertexRDD和边属性表EdgeRDD联合来表示图。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
    "4，图的算法\n",
    "\n",
    "图的著名算法包括：用于衡量顶点重要性的PageRank算法，用于计算顶点之间距离的最短路径算法，用于社区发现的标签传播算法，用于路径规划的Dijkstra算法和Kruscal算法……"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
    "5，图的应用\n",
    "\n",
    "图的应用主要包括网站排名，社交网络分析，金融欺诈检测，推荐系统等等。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "hidden": true
   },
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 二，图的创建"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "有3类常用的创建图的方法。\n",
    "\n",
    "第一种是通过Graph的构造函数进行创建。\n",
    "\n",
    "第二种是通过GraphLoader.edgeListFile从文件读入EdgeRDD进行创建。\n",
    "\n",
    "第三种是使用util.GraphGenerators生成一些简单的图用于测试算法。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1，通过Graph构造函数创建\n",
    "\n",
    "Graph类有3个不同的构造函数，它们的签名如下，用法也是一目了然。\n",
    "\n",
    "```scala\n",
    "object Graph {\n",
    "  def apply[C, ED](\n",
    "      vertices: RDD[(VertexId, VD)],\n",
    "      edges: RDD[Edge[ED]],\n",
    "      defaultVertexAttr: VD = null)\n",
    "    : Graph[VD, ED]\n",
    "\n",
    "  def fromEdges[VD, ED](\n",
    "      edges: RDD[Edge[ED]],\n",
    "      defaultValue: VD): Graph[VD, ED]\n",
    "\n",
    "  def fromEdgeTuples[VD](\n",
    "      rawEdges: RDD[(VertexId, VertexId)],\n",
    "      defaultValue: VD,\n",
    "      uniqueEdges: Option[PartitionStrategy] = None): Graph[VD, Int]\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "((3,(rxin,student)),(7,(jgonzal,postdoc)),collab)\n",
      "((5,(franklin,prof)),(3,(rxin,student)),advisor)\n",
      "((2,(istoica,prof)),(5,(franklin,prof)),colleague)\n",
      "((5,(franklin,prof)),(7,(jgonzal,postdoc)),pi)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "users = ParallelCollectionRDD[0] at parallelize at <console>:39\n",
       "relationships = ParallelCollectionRDD[1] at parallelize at <console>:44\n",
       "defaultUser = (John Doe,Missing)\n",
       "graph_user = org.apache.spark.graphx.impl.GraphImpl@2b009dd6\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@2b009dd6"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "// 创建VertexRDD，注意VertexId必须是Long类型\n",
    "val users: RDD[(VertexId, (String, String))] =\n",
    "  sc.parallelize(Array((3L, (\"rxin\", \"student\")), (7L, (\"jgonzal\", \"postdoc\")),\n",
    "                       (5L, (\"franklin\", \"prof\")), (2L, (\"istoica\", \"prof\"))))\n",
    "\n",
    "// 创建EdgeRDD\n",
    "val relationships: RDD[Edge[String]] =\n",
    "  sc.parallelize(Array(Edge(3L, 7L, \"collab\"),    Edge(5L, 3L, \"advisor\"),\n",
    "                       Edge(2L, 5L, \"colleague\"), Edge(5L, 7L, \"pi\")))\n",
    "\n",
    "// 设置缺失顶点\n",
    "val defaultUser = (\"John Doe\", \"Missing\")\n",
    "\n",
    "// 使用apply构造函数创建图\n",
    "val graph_user:Graph[(String, String), String] = Graph(users, relationships, defaultUser)\n",
    "\n",
    "//查看图的部分数据，triplets同时存储了边属性信息和对应顶点属性信息。\n",
    "graph_user.triplets.take(5).foreach(println)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2，从文件读入EdgeRDD进行创建"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "data/paperCite.edges是一些论文之间的引用关系，其格式如下所示。\n",
    "\n",
    "#FromNodeId ToNodeId\n",
    "1 2\n",
    "1 3\n",
    "1 4\n",
    "1 5\n",
    "2 6\n",
    "2 7\n",
    "2 8\n",
    "5 9\n",
    "5 10\n",
    "5 11\n",
    "5 12\n",
    "6 13\n",
    "7 14\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "graph_paper = org.apache.spark.graphx.impl.GraphImpl@752cfd2\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "(559,11119)"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val graph_paper = GraphLoader.edgeListFile(sc,\"data/paperCite.edges\")\n",
    "\n",
    "//找到引用最高的文章\n",
    "graph_paper.outDegrees.reduce((a,b) =>if (a._2 > b._2) a else b)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "3，使用util.GraphGenerators生成测试用图"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Edge(0,1,1.0)\n",
      "Edge(0,4,1.0)\n",
      "Edge(1,2,1.0)\n",
      "Edge(1,5,1.0)\n",
      "Edge(2,3,1.0)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "graph_grid = org.apache.spark.graphx.impl.GraphImpl@2feea40\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@2feea40"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//生成 4*4的网格图\n",
    "\n",
    "//网格图的顶点和边符合特定的模式，就像是在一个二维的网格或矩阵中\n",
    "val graph_grid =  util.GraphGenerators.gridGraph(sc, 4, 4)\n",
    "\n",
    "graph_grid.edges.take(5).foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(4,1)\n",
      "(8,1)\n",
      "(1,1)\n",
      "(9,1)\n",
      "(5,1)\n",
      "(0,9)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "graph_star = org.apache.spark.graphx.impl.GraphImpl@66b706c4\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@66b706c4"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//生成星形图\n",
    "\n",
    "//星形图指的是有一个顶点通过边与所有其他顶点相连，除此之外图中不存在其他边\n",
    "val graph_star = util.GraphGenerators.starGraph(sc, 10)\n",
    "graph_star.outDegrees.take(5).foreach(println)\n",
    "graph_star.inDegrees.take(1).foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "graph_lognormal = org.apache.spark.graphx.impl.GraphImpl@4281dafa\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "Array(1, 2, 2, 2, 2, 2, 3, 4, 6, 7, 7, 8, 11, 13, 15, 15)"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//生成对数正态图\n",
    "\n",
    "//对数正态图通过随机算法生成，每个顶点的出度值分布符合对数正态分布\n",
    "val graph_lognormal = util.GraphGenerators.logNormalGraph(sc,16)\n",
    "graph_lognormal.outDegrees.map(_._2).collect.sorted\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "graph_rmat = org.apache.spark.graphx.impl.GraphImpl@279c13de\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "Array(4, 4, 4, 5, 5, 6, 6, 6)"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//生成递归矩阵图\n",
    "\n",
    "//递归矩阵图通过随机算法生成，递归矩阵图可以用于模拟典型的社交网络架构，会呈现社区趋势\n",
    "\n",
    "val graph_rmat = util.GraphGenerators.rmatGraph(sc,16,40) //参数为顶点数，独立的边数量\n",
    "\n",
    "graph_rmat.inDegrees.map(_._2).collect.sorted"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 三，图的可视化"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "可以使用Python中的Networkx库，或者Gephi开源软件对图进行可视化，此外使用Zepplin也可以对Graphx的图进行可视化。\n",
    "\n",
    "此处我们演示通过调用Networkx库中对Graphx图的可视化。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "plot_graph.py 文件中的代码如下。\n",
    "\n",
    "```python\n",
    "import networkx as nx\n",
    "import pandas as pd \n",
    "import sys\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "def plot(directed = True):\n",
    "    dfedges = pd.read_csv('data/dfedges.csv',sep = '\\t')\n",
    "    if directed:\n",
    "        graph = nx.DiGraph(dfedges[['srcId','dstId']].values.tolist())\n",
    "    else:\n",
    "        graph = nx.Graph(dfedges[['srcId','dstId']].values.tolist())\n",
    "    nx.draw_networkx(graph)\n",
    "    plt.show()\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    if sys.argv[1]==\"true\": \n",
    "        plot(directed = True)\n",
    "    else:\n",
    "        plot(directed = False)\n",
    "\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "visualEdges: (dfedges: org.apache.spark.sql.DataFrame, ifdirected: Boolean)Unit\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "def visualEdges(dfedges:DataFrame, ifdirected:Boolean = true):Unit = {\n",
    "    writeCsvOnly_t(dfedges,\"data/dfedges.csv\")\n",
    "    Runtime.getRuntime().exec(s\"python plot_graph.py ${ifdirected}\")\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "graph_grid = org.apache.spark.graphx.impl.GraphImpl@1d34c62b\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@1d34c62b"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//网格图的可视化\n",
    "val graph_grid =  util.GraphGenerators.gridGraph(sc, 4, 4)\n",
    "\n",
    "visualEdges(graph_grid.edges.toDS.toDF,false)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](data/网格图可视化.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "graph_star = org.apache.spark.graphx.impl.GraphImpl@355d4813\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@355d4813"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//星形图的可视化\n",
    "val graph_star = util.GraphGenerators.starGraph(sc, 10)\n",
    "\n",
    "visualEdges(graph_star.edges.toDS.toDF,true)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](data/星形图可视化.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "graph_lognormal = org.apache.spark.graphx.impl.GraphImpl@7293d26f\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@7293d26f"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//对数正态图可视化\n",
    "val graph_lognormal = util.GraphGenerators.logNormalGraph(sc,16)\n",
    "visualEdges(graph_lognormal.edges.toDS.toDF,true)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](data/对数正态图可视化.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 四，Graph类的常用方法"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Graph的各种接口方法的签名如下所示，大概有9组30多个方法。\n",
    "\n",
    "其中pregel迭代接口和aggregateMessages合并消息接口是较为重要而灵活的方法。\n",
    "\n",
    "使用pregel和aggregateMessages方法的精妙之处在于只需要考虑每个顶点的更新函数即可，让框架在遍历顶点时进行调用，\n",
    "\n",
    "而无需考虑并行分布计算的细节。这种图计算编程模式叫做\"像顶点一样思考\"(Think Like A Vertex)。\n"
   ]
  },
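  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of the \"Think Like A Vertex\" style (assuming a graph such as graph_star from the earlier cells), aggregateMessages can recompute in-degrees: every edge sends the message 1 to its destination vertex, and the framework merges the messages arriving at each vertex.\n",
    "\n",
    "```scala\n",
    "// sendMsg runs once per edge; mergeMsg combines the messages that reach one vertex\n",
    "val inDeg: VertexRDD[Int] = graph_star.aggregateMessages[Int](\n",
    "  triplet => triplet.sendToDst(1),\n",
    "  (a, b) => a + b)\n",
    "```"
   ]
  },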
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Graph[VD, ED] {\n",
    "    \n",
    "  // 1，图的信息 \n",
    "  val numEdges: Long\n",
    "  val numVertices: Long\n",
    "  val inDegrees: VertexRDD[Int]\n",
    "  val outDegrees: VertexRDD[Int]\n",
    "  val degrees: VertexRDD[Int]\n",
    "\n",
    "  // 2，图的视图 \n",
    "  val vertices: VertexRDD[VD]\n",
    "  val edges: EdgeRDD[ED]\n",
    "  val triplets: RDD[EdgeTriplet[VD, ED]]\n",
    "\n",
    "  // 3，图的缓存和分区\n",
    "  def persist(newLevel: StorageLevel = StorageLevel.MEMORY_ONLY): Graph[VD, ED]\n",
    "  def cache(): Graph[VD, ED]\n",
    "  def unpersistVertices(blocking: Boolean = true): Graph[VD, ED]\n",
    "  def partitionBy(partitionStrategy: PartitionStrategy): Graph[VD, ED]\n",
    "\n",
    "  // 4，修改属性创建新图 \n",
    "  def mapVertices[VD2](map: (VertexId, VD) => VD2): Graph[VD2, ED]\n",
    "  def mapEdges[ED2](map: Edge[ED] => ED2): Graph[VD, ED2]\n",
    "  def mapEdges[ED2](map: (PartitionID, Iterator[Edge[ED]]) => Iterator[ED2]): Graph[VD, ED2]\n",
    "  def mapTriplets[ED2](map: EdgeTriplet[VD, ED] => ED2): Graph[VD, ED2]\n",
    "  def mapTriplets[ED2](map: (PartitionID, Iterator[EdgeTriplet[VD, ED]]) => Iterator[ED2])\n",
    "    : Graph[VD, ED2]\n",
    "\n",
    "  // 5，修改图结构创建新图 \n",
    "  def reverse: Graph[VD, ED]\n",
    "  def subgraph(\n",
    "      epred: EdgeTriplet[VD,ED] => Boolean = (x => true),\n",
    "      vpred: (VertexId, VD) => Boolean = ((v, d) => true))\n",
    "    : Graph[VD, ED]\n",
    "  def mask[VD2, ED2](other: Graph[VD2, ED2]): Graph[VD, ED]\n",
    "  def groupEdges(merge: (ED, ED) => ED): Graph[VD, ED]\n",
    "    \n",
    "  // 6，连接其它RDD\n",
    "  def joinVertices[U](table: RDD[(VertexId, U)])(mapFunc: (VertexId, VD, U) => VD): Graph[VD, ED]\n",
    "  def outerJoinVertices[U, VD2](other: RDD[(VertexId, U)])\n",
    "      (mapFunc: (VertexId, VD, Option[U]) => VD2)\n",
    "    : Graph[VD2, ED]\n",
    "    \n",
    "  // 7，收集邻居消息\n",
    "  def collectNeighborIds(edgeDirection: EdgeDirection): VertexRDD[Array[VertexId]]\n",
    "  def collectNeighbors(edgeDirection: EdgeDirection): VertexRDD[Array[(VertexId, VD)]]\n",
    "  def aggregateMessages[Msg: ClassTag](\n",
    "      sendMsg: EdgeContext[VD, ED, Msg] => Unit,\n",
    "      mergeMsg: (Msg, Msg) => Msg,\n",
    "      tripletFields: TripletFields = TripletFields.All)\n",
    "    : VertexRDD[A]\n",
    "    \n",
    "  // 8，pregel迭代接口 \n",
    "  def pregel[A](initialMsg: A, maxIterations: Int, activeDirection: EdgeDirection)(\n",
    "      vprog: (VertexId, VD, A) => VD,\n",
    "      sendMsg: EdgeTriplet[VD, ED] => Iterator[(VertexId,A)],\n",
    "      mergeMsg: (A, A) => A)\n",
    "    : Graph[VD, ED]\n",
    "    \n",
    "  // 9，内置常用图算法\n",
    "  def pageRank(tol: Double, resetProb: Double = 0.15): Graph[Double, Double]\n",
    "  def connectedComponents(): Graph[VertexId, ED]\n",
    "  def triangleCount(): Graph[Int, ED]\n",
    "  def stronglyConnectedComponents(numIter: Int): Graph[VertexId, ED]\n",
    "}"
   ]
  },
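  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sketch of the pregel interface above (a hypothetical three-vertex graph built here only for illustration), single-source shortest paths can be written by relaxing each edge with weight 1, with vertex 1 as the source:\n",
    "\n",
    "```scala\n",
    "// Start every vertex at infinity except the source vertex 1, which starts at 0.\n",
    "val g = Graph.fromEdgeTuples(sc.parallelize(Array((1L,2L),(2L,3L),(1L,3L))), 0)\n",
    "  .mapVertices((id, _) => if (id == 1L) 0.0 else Double.PositiveInfinity)\n",
    "val sssp = g.pregel(Double.PositiveInfinity)(\n",
    "  (id, dist, newDist) => math.min(dist, newDist), // vprog: keep the smaller distance\n",
    "  triplet => {                                    // sendMsg: relax each edge of weight 1\n",
    "    if (triplet.srcAttr + 1.0 < triplet.dstAttr)\n",
    "      Iterator((triplet.dstId, triplet.srcAttr + 1.0))\n",
    "    else Iterator.empty\n",
    "  },\n",
    "  (a, b) => math.min(a, b))                       // mergeMsg: keep the smaller message\n",
    "```"
   ]
  },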
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**1，图的信息**\n",
    "\n",
    "degrees既包括inDegrees和outDegrees之和。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "graph_star = org.apache.spark.graphx.impl.GraphImpl@323f2c57\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "Array((4,1), (0,9), (8,1), (1,1), (9,1), (5,1), (6,1), (2,1), (3,1), (7,1))"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val graph_star = util.GraphGenerators.starGraph(sc, 10)\n",
    "\n",
    "graph_star.degrees.collect"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Array((0,9))"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "graph_star.inDegrees.collect"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Array((4,1), (8,1), (1,1), (9,1), (5,1), (6,1), (2,1), (3,1), (7,1))"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "graph_star.outDegrees.collect"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**2，图的视图**\n",
    "\n",
    "edges和vertices必须至少包括1个属性，如果没有，一般给每个顶点和边填充一个1作为属性。\n",
    "\n",
    "可以从triplets中同时获取边的属性，以及与之关联的顶点属性。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(4,1)\n",
      "(0,1)\n",
      "(1,1)\n",
      "(2,1)\n",
      "(3,1)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "graph_star = org.apache.spark.graphx.impl.GraphImpl@64cfc576\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@64cfc576"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val graph_star = util.GraphGenerators.starGraph(sc, 5)\n",
    "graph_star.vertices.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Edge(1,0,1)\n",
      "Edge(2,0,1)\n",
      "Edge(3,0,1)\n",
      "Edge(4,0,1)\n"
     ]
    }
   ],
   "source": [
    "graph_star.edges.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "((1,1),(0,1),1)\n",
      "((2,1),(0,1),1)\n",
      "((3,1),(0,1),1)\n",
      "((4,1),(0,1),1)\n"
     ]
    }
   ],
   "source": [
    "graph_star.triplets.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**3，图的缓存和分区**\n",
    "\n",
    "如果图要多次被使用，应当使用persist缓存进行。如果确认图不再用到，推荐使用unpersist清理缓存以减轻内存压力。\n",
    "\n",
    "如果设计迭代算法，推荐使用pregel迭代接口，它能够正确地释放不再使用的中间计算结果。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "graphx对图的默认分区策略是切割Vertex而非切割Edge，这种设计更有利于减少存储和分区间的通信压力。\n",
    "\n",
    "在切割Vertex策略下，可以保证不同的分区是不同的边，但是有些Vertex可能会在多个分区存在。\n",
    "\n",
    "graphx提供了4种按照Vertex进行切割的具体策略。\n",
    "\n",
    "RandomVertexCut：以边的srcId和dstId来作Hash，这样两个顶点之间相同方向的边会分配到同一个分区。\n",
    " \n",
    "CanonicalRandomVertexCut：对srcId和dstId的排序结果来作Hash，这样两个顶点之间所有的边都会分配到同一个分区，而不管方向如何。\n",
    " \n",
    "EdgePartition1D：对srcId来作Hash, 同一个vertex出来的edge会被切到同一个分区, supernode问题得不到任何缓解, 仅仅适用于比较稀疏的图.\n",
    " \n",
    "EdgePartition2D：把整个图看成一个稀疏的矩阵, 然后对这个矩阵的行和列进行切分，从而保证顶点的备份数不大于2*sqrt(numParts)的限制。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(8,21)\n",
      "(9,13)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "graph_lognormal = org.apache.spark.graphx.impl.GraphImpl@79c04b2e\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@79c04b2e"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import org.apache.spark.storage.StorageLevel\n",
    "val graph_lognormal = util.GraphGenerators.logNormalGraph(sc,16).cache()\n",
    "\n",
    "//找到最大出度的顶点\n",
    "println(graph_lognormal.inDegrees.reduce((a,b)=>if(a._2>b._2) a else b))\n",
    "\n",
    "//找到最大入度的顶点\n",
    "println(graph_lognormal.outDegrees.reduce((a,b)=>if(a._2>b._2) a else b))\n",
    "\n",
    "//清空缓存\n",
    "graph_lognormal.unpersist()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Edge(0,0,1) Edge(0,1,1) Edge(0,1,1) Edge(0,2,1) Edge(0,4,1)\n",
      "Edge(1,2,1) Edge(1,3,1) Edge(1,3,1) Edge(1,5,1) Edge(2,3,1)\n",
      "Edge(3,0,1) Edge(3,2,1) Edge(3,4,1) Edge(3,5,1)\n",
      "Edge(4,0,1) Edge(4,4,1) Edge(5,1,1) Edge(5,1,1) Edge(5,2,1) Edge(5,3,1)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "graph_lognormal = org.apache.spark.graphx.impl.GraphImpl@56f8fdf4\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@56f8fdf4"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val graph_lognormal = util.GraphGenerators.logNormalGraph(sc,6,4).cache()\n",
    "\n",
    "//对数正态图默认的分区策略是EdgePartition1D\n",
    "graph_lognormal.edges.mapPartitions(iter=>Array(iter.toArray).iterator)\n",
    ".collect.map(arr=>arr.mkString(\" \")).foreach(println)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Edge(0,4,1) Edge(3,0,1) Edge(4,0,1)\n",
      "Edge(0,2,1) Edge(1,3,1) Edge(1,3,1) Edge(3,5,1) Edge(5,3,1)\n",
      "Edge(0,1,1) Edge(0,1,1) Edge(1,2,1) Edge(2,3,1) Edge(3,2,1)\n",
      "Edge(0,0,1) Edge(1,5,1) Edge(3,4,1) Edge(4,4,1) Edge(5,1,1) Edge(5,1,1) Edge(5,2,1)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "graph_repartition = org.apache.spark.graphx.impl.GraphImpl@20b6f243\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@20b6f243"
      ]
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//修改分区策略为CanonicalRandomVertexCut\n",
    "val graph_repartition = graph_lognormal.partitionBy(PartitionStrategy.CanonicalRandomVertexCut,4)\n",
    "graph_repartition.edges.mapPartitions(iter=>Array(iter.toArray).iterator)\n",
    ".collect.map(arr=>arr.mkString(\" \")).foreach(println)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**4，修改属性创建新图**\n",
    "\n",
    "这几个方法的使用相对简单，都是进行一次map操作后生成新的VertexRDD或EdgeRDD替换掉已有Graph的对应部分，得到新的Graph。\n",
    "\n",
    "```scala\n",
    "def mapVertices[VD2](map: (VertexId, VD) => VD2): Graph[VD2, ED]\n",
    "def mapEdges[ED2](map: Edge[ED] => ED2): Graph[VD, ED2]\n",
    "def mapEdges[ED2](map: (PartitionID, Iterator[Edge[ED]]) => Iterator[ED2]): Graph[VD, ED2]\n",
    "def mapTriplets[ED2](map: EdgeTriplet[VD, ED] => ED2): Graph[VD, ED2]\n",
    "def mapTriplets[ED2](map: (PartitionID, Iterator[EdgeTriplet[VD, ED]]) => Iterator[ED2])\n",
    "    : Graph[VD, ED2]\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "我们首先构造如下一个社交关系图，然后利用mapTriplets给边添加属性值。\n",
    "\n",
    "如果边属性为\"is_friends_with\"，并且其源顶点属性中包含字母\"a\"，则添加属性值 true,否则添加属性值false。\n",
    "\n",
    "![](data/社交图范例.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "myVertices = ParallelCollectionRDD[369] at makeRDD at <console>:38\n",
       "myEdges = ParallelCollectionRDD[370] at makeRDD at <console>:41\n",
       "myGraph = org.apache.spark.graphx.impl.GraphImpl@63ec5d99\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@63ec5d99"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val myVertices = sc.makeRDD(Array((1L,\"Ann\"),(2L,\"Bill\"),(3L,\"Charles\"),\n",
    "                                  (4L,\"Diane\"),(5L,\"Went to gym this morning\")))\n",
    "\n",
    "val myEdges = sc.makeRDD(Array(Edge(1L,2L,\"is-friends-with\"),Edge(2L,3L,\"is-friends-with\"),\n",
    "                               Edge(3L,4L,\"is-friends-with\"),Edge(3L,5L,\"wrote-status\"),\n",
    "                               Edge(4L,5L,\"like-status\")))\n",
    "\n",
    "val myGraph = Graph(myVertices,myEdges)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "((1,Ann),(2,Bill),(is-friends-with,true))\n",
      "((2,Bill),(3,Charles),(is-friends-with,false))\n",
      "((3,Charles),(4,Diane),(is-friends-with,true))\n",
      "((3,Charles),(5,Went to gym this morning),(wrote-status,false))\n",
      "((4,Diane),(5,Went to gym this morning),(like-status,false))\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "newGraph = org.apache.spark.graphx.impl.GraphImpl@36d04a5c\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@36d04a5c"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//如果边属性为\"is_friends_with\"，并且其源顶点属性中包含字母\"a\"，则添加属性值 true,否则添加属性值false。\n",
    "val newGraph = myGraph.mapTriplets(t => \n",
    "    (t.attr, t.attr==\"is-friends-with\"&&t.srcAttr.toLowerCase.contains(\"a\")))\n",
    "\n",
    "newGraph.triplets.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**5，修改图结构创建新图**\n",
    "\n",
    "这4个方法的作用简单总结如下：\n",
    "\n",
    "reverse最简单，将每条边的方向反向。\n",
    "\n",
    "subgraph过滤一些符合条件的边和顶点构造子图。\n",
    "\n",
    "mask返回和另外一个graph的公共子图\n",
    "\n",
    "groupEdges可以对平行边进行merge，但要求平行边位于相同的分区。\n",
    "\n",
    "```scala\n",
    "def reverse: Graph[VD, ED]\n",
    "def subgraph(\n",
    "  epred: EdgeTriplet[VD,ED] => Boolean = (x => true),\n",
    "  vpred: (VertexId, VD) => Boolean = ((v, d) => true))\n",
    "  : Graph[VD, ED]\n",
    "def mask[VD2, ED2](other: Graph[VD2, ED2]): Graph[VD, ED]\n",
    "def groupEdges(merge: (ED, ED) => ED): Graph[VD, ED]\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "((2,Bill),(1,Ann),is-friends-with)\n",
      "((3,Charles),(2,Bill),is-friends-with)\n",
      "((4,Diane),(3,Charles),is-friends-with)\n",
      "((5,Went to gym this morning),(3,Charles),wrote-status)\n",
      "((5,Went to gym this morning),(4,Diane),like-status)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "myGraph_reverse = org.apache.spark.graphx.impl.GraphImpl@2540c83a\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@2540c83a"
      ]
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//reverse可以翻转边\n",
    "val myGraph_reverse = myGraph.reverse\n",
    "myGraph_reverse.triplets.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "((2,Bill),(3,Charles),is-friends-with)\n",
      "((3,Charles),(4,Diane),is-friends-with)\n",
      "((3,Charles),(5,Went to gym this morning),wrote-status)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "myGraph_charles = org.apache.spark.graphx.impl.GraphImpl@114e769d\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@114e769d"
      ]
     },
     "execution_count": 32,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//subgraph creates a subgraph by filtering edges and vertices\n",
    "val myGraph_charles = myGraph.subgraph(t=> (t.srcAttr==\"Charles\"||t.dstAttr==\"Charles\"))\n",
    "myGraph_charles.triplets.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "((1,Ann),(2,Bill),is-friends-with)\n",
      "((2,Bill),(3,Charles),is-friends-with)\n",
      "((3,Charles),(4,Diane),is-friends-with)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "otherVertices = ParallelCollectionRDD[400] at makeRDD at <console>:41\n",
       "otherEdges = ParallelCollectionRDD[401] at makeRDD at <console>:44\n",
       "otherGraph = org.apache.spark.graphx.impl.GraphImpl@4a8001d8\n",
       "graph_mask = org.apache.spark.graphx.impl.GraphImpl@2d645610\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@2d645610"
      ]
     },
     "execution_count": 33,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//mask returns the subgraph in common with another graph\n",
    "val otherVertices = sc.makeRDD(Array((1L,\"Ann\"),(2L,\"Bill\"),(3L,\"Charles\"),\n",
    "                                  (4L,\"Diane\"),(6L,\"David\")))\n",
    "\n",
    "val otherEdges = sc.makeRDD(Array(Edge(1L,2L,\"is-friends-with\"),Edge(2L,3L,\"is-friends-with\"),\n",
    "                               Edge(3L,4L,\"is-friends-with\"),Edge(3L,6L,\"is-friends-with\"),\n",
    "                               Edge(4L,6L,\"is-friends-with\")))\n",
    "\n",
    "val otherGraph = Graph(otherVertices,otherEdges)\n",
    "val graph_mask = myGraph.mask(otherGraph)\n",
    "\n",
    "graph_mask.triplets.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "((1,1),(2,1),10.0)\n",
      "((1,1),(2,1),3.0)\n",
      "((1,1),(4,1),2.0)\n",
      "((2,1),(3,1),5.0)\n",
      "((2,1),(3,1),7.0)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "graph_distance = org.apache.spark.graphx.impl.GraphImpl@47ba81e8\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@47ba81e8"
      ]
     },
     "execution_count": 34,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//groupEdges can merge parallel edges (they must first be co-located via partitionBy)\n",
    "\n",
    "val graph_distance = Graph.fromEdges(sc.makeRDD(\n",
    "    Array(Edge(1L,2L,10.0),Edge(1L,2L,3.0),\n",
    "         Edge(2L,3L,5.0),Edge(2L,3L,7.0),\n",
    "         Edge(1L,4L,2.0))),1).partitionBy(PartitionStrategy.RandomVertexCut,4)\n",
    "\n",
    "graph_distance.triplets.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 249,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "((1,1),(2,1),3.0)\n",
      "((1,1),(4,1),2.0)\n",
      "((2,1),(3,1),5.0)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "graph_grouped = org.apache.spark.graphx.impl.GraphImpl@6f7bbf56\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@6f7bbf56"
      ]
     },
     "execution_count": 249,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val graph_grouped = graph_distance.groupEdges((a,b)=> math.min(a,b))\n",
    "graph_grouped.triplets.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**6. Joining with other RDDs**\n",
    "\n",
    "\n",
    "```scala\n",
    "def joinVertices[U](table: RDD[(VertexId, U)])(mapFunc: (VertexId, VD, U) => VD): Graph[VD, ED]\n",
    "def outerJoinVertices[U, VD2](other: RDD[(VertexId, U)])\n",
    "  (mapFunc: (VertexId, VD, Option[U]) => VD2)\n",
    "  : Graph[VD2, ED]\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "((1,Beijing),(2,Nanjing),10.0)\n",
      "((1,Beijing),(2,Nanjing),3.0)\n",
      "((1,Beijing),(4,Tianjing),2.0)\n",
      "((2,Nanjing),(3,Shanghai),5.0)\n",
      "((2,Nanjing),(3,Shanghai),7.0)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "graph_distance = org.apache.spark.graphx.impl.GraphImpl@5dae6db3\n",
       "rdd_city = ParallelCollectionRDD[462] at makeRDD at <console>:46\n",
       "graph_join = org.apache.spark.graphx.impl.GraphImpl@101da12f\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@101da12f"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "// joinVertices does not change the type of the vertex attribute\n",
    "val graph_distance = Graph.fromEdges(sc.makeRDD(\n",
    "    Array(Edge(1L,2L,10.0),Edge(1L,2L,3.0),\n",
    "         Edge(2L,3L,5.0),Edge(2L,3L,7.0),\n",
    "         Edge(1L,4L,2.0))),\"\").partitionBy(PartitionStrategy.RandomVertexCut,4)\n",
    "\n",
    "val rdd_city = sc.makeRDD(Array((1L,\"Beijing\"),(2L,\"Nanjing\"),(3L,\"Shanghai\"),(4L,\"Tianjing\")))\n",
    "\n",
    "val graph_join = graph_distance.joinVertices[String](rdd_city)((id,v,u) => u)\n",
    "graph_join.triplets.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "((1,(Ann,female)),(2,(Bill,male)),is-friends-with)\n",
      "((2,(Bill,male)),(3,(Charles,male)),is-friends-with)\n",
      "((3,(Charles,male)),(4,(Diane,female)),is-friends-with)\n",
      "((3,(Charles,male)),(5,(Went to gym this morning, )),wrote-status)\n",
      "((4,(Diane,female)),(5,(Went to gym this morning, )),like-status)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "rdd_gender = ParallelCollectionRDD[477] at makeRDD at <console>:41\n",
       "graph_outjoin = org.apache.spark.graphx.impl.GraphImpl@1d6c4309\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@1d6c4309"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//outerJoinVertices can change the type of the vertex attribute\n",
    "val rdd_gender = sc.makeRDD(Array((1L,\"female\"),(2L,\"male\"),(3L,\"male\"),(4L,\"female\")))\n",
    "val graph_outjoin = \n",
    "   myGraph.outerJoinVertices[String,(String,String)](rdd_gender)((id,v,opt)=>(v,opt.getOrElse(\" \")))\n",
    "graph_outjoin.triplets.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**7. Collecting neighbor messages**\n",
    "\n",
    "```scala\n",
    "def collectNeighborIds(edgeDirection: EdgeDirection): VertexRDD[Array[VertexId]]\n",
    "def collectNeighbors(edgeDirection: EdgeDirection): VertexRDD[Array[(VertexId, VD)]]\n",
    "def aggregateMessages[Msg: ClassTag](\n",
    "      sendMsg: EdgeContext[VD, ED, Msg] => Unit,\n",
    "      mergeMsg: (Msg, Msg) => Msg,\n",
    "      tripletFields: TripletFields = TripletFields.All)\n",
    "    : VertexRDD[Msg]\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Array((4,Array(3)), (1,Array()), (5,Array(3, 4)), (2,Array(1)), (3,Array(2)))"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//collectNeighborIds returns the ids of each vertex's neighboring vertices\n",
    "myGraph.collectNeighborIds(EdgeDirection.In).collect"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Array((4,Array((3,Charles), (5,Went to gym this morning))), (1,Array((2,Bill))))"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//collectNeighbors returns the id and attr of each vertex's neighboring vertices\n",
    "myGraph.collectNeighbors(EdgeDirection.Either).take(2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "aggregateMessages implements a basic map/reduce programming model on the graph structure.\n",
    "\n",
    "sendMsg is the map step: each edge sends a message to its src or dst vertex. Its input parameter is of type EdgeContext.\n",
    "Compared with the Triplet type, EdgeContext adds two methods, sendToSrc and sendToDst, for sending messages.\n",
    "\n",
    "mergeMsg is the reduce step: each vertex collects the messages it receives and merges them.\n",
    "\n",
    "The return value of aggregateMessages is a VertexRDD."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(4,2)\n",
      "(1,1)\n",
      "(5,2)\n",
      "(2,2)\n",
      "(3,3)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "degrees = VertexRDDImpl[533] at RDD at VertexRDD.scala:57\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "VertexRDDImpl[533] at RDD at VertexRDD.scala:57"
      ]
     },
     "execution_count": 44,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//Use aggregateMessages to compute the degree of each vertex\n",
    "val degrees = myGraph.aggregateMessages[Int](edge => {edge.sendToDst(1);edge.sendToSrc(1)}, _+_)\n",
    "degrees.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "inDegrees = Array((Diane,1), (Ann,0), (Went to gym this morning,2), (Bill,1), (Charles,1))\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "Array((Diane,1), (Ann,0), (Went to gym this morning,2), (Bill,1), (Charles,1))"
      ]
     },
     "execution_count": 45,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//Use aggregateMessages to compute the in-degree of each vertex\n",
    "val inDegrees = myGraph.aggregateMessages[Int](_.sendToDst(1), _+_)\n",
    ".rightOuterJoin(myGraph.vertices).map(t=>(t._2._2,t._2._1.getOrElse(0))).collect"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Most algorithms involve multiple rounds of iteration, which we can implement by calling aggregateMessages repeatedly in a loop.\n",
    "\n",
    "Consider an iterative algorithm that computes the distance between each vertex and the farthest source vertex that can reach it. Assume the graph is acyclic.\n",
    "\n",
    "The basic procedure is as follows:\n",
    "\n",
    "1. Assign every vertex the initial attribute value 0.\n",
    "\n",
    "2. Each edge sends a message to its destination vertex whose value is the source vertex's attribute value + 1.\n",
    "\n",
    "3. Each vertex collects all the messages it receives and takes the maximum.\n",
    "\n",
    "4. Repeat steps 2 and 3 until no vertex attribute value in the graph changes any more.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(4,3)\n",
      "(1,0)\n",
      "(5,4)\n",
      "(2,1)\n",
      "(3,2)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "g = org.apache.spark.graphx.impl.GraphImpl@59c46001\n",
       "diff = 0\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "0"
      ]
     },
     "execution_count": 46,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "var g = myGraph.mapVertices((_,_)=>0)\n",
    "var diff = 1\n",
    "while(diff>0){\n",
    "    val verts = g.aggregateMessages[Int](ec =>ec.sendToDst(ec.srcAttr+1), math.max(_,_))\n",
    "    val g2 = Graph(verts,g.edges)\n",
    "    diff = g2.vertices.join(g.vertices).map(t => t._2._1 - t._2._2).reduce(_+_)\n",
    "    g = g2\n",
    "}\n",
    "\n",
    "g.vertices.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(4,3)\n",
      "(1,0)\n",
      "(5,4)\n",
      "(2,1)\n",
      "(3,2)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "g = org.apache.spark.graphx.impl.GraphImpl@60df0418\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "propagateEdgeCount: (g: org.apache.spark.graphx.Graph[Int,String])org.apache.spark.graphx.Graph[Int,String]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@60df0418"
      ]
     },
     "execution_count": 47,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//The code above can also be implemented with a recursive function\n",
    "def propagateEdgeCount(g:Graph[Int,String]):Graph[Int,String] = {\n",
    "    val verts = g.aggregateMessages[Int](ec =>ec.sendToDst(ec.srcAttr+1), math.max(_,_))\n",
    "    val g2 = Graph(verts,g.edges)\n",
    "    val diff = g2.vertices.join(g.vertices).map(t => t._2._1 - t._2._2).reduce(_+_)\n",
    "    if(diff>0)\n",
    "        propagateEdgeCount(g2)\n",
    "    else\n",
    "       g\n",
    "}\n",
    "\n",
    "val g = propagateEdgeCount(myGraph.mapVertices((_,_)=>0))\n",
    "g.vertices.collect.foreach(println) "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**8. The Pregel iteration interface**\n",
    "\n",
    "Although the aggregateMessages-based iteration above is already quite concise, the caching of intermediate results during the iteration can hurt the program's performance.\n",
    "\n",
    "The Pregel iteration interface handles this caching optimization well.\n",
    "\n",
    "```scala\n",
    "def pregel[A](initialMsg: A, maxIterations: Int, activeDirection: EdgeDirection)(\n",
    "  vprog: (VertexId, VD, A) => VD,\n",
    "  sendMsg: EdgeTriplet[VD, ED] => Iterator[(VertexId,A)],\n",
    "  mergeMsg: (A, A) => A)\n",
    "  : Graph[VD, ED]\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Pregel interface is implemented on top of GraphX's basic API, and the implementation is remarkably concise: little more than 20 lines of code.\n",
    "\n",
    "It is built mainly on three basic APIs: mapVertices, GraphXUtils.mapReduceTriplets, and joinVertices.\n",
    "\n",
    "mapReduceTriplets is very similar to aggregateMessages: both are map/reduce operations.\n",
    "\n",
    "The main difference is the input and output types of their sendMsg function parameter.\n",
    "\n",
    "```scala\n",
    "class GraphOps[VD, ED] {\n",
    "  def pregel[A]\n",
    "      (initialMsg: A,\n",
    "       maxIterations: Int = Int.MaxValue,\n",
    "       activeDirection: EdgeDirection = EdgeDirection.Either)\n",
    "      (vprog: (VertexId, VD, A) => VD,\n",
    "       sendMsg: EdgeTriplet[VD, ED] => Iterator[(VertexId, A)],\n",
    "       mergeMsg: (A, A) => A)\n",
    "    : Graph[VD, ED] = {\n",
    "    // Receive the initial message at each vertex\n",
    "    var g = mapVertices( (vid, vdata) => vprog(vid, vdata, initialMsg) ).cache()\n",
    "\n",
    "    // compute the messages\n",
    "    var messages = GraphXUtils.mapReduceTriplets(g, sendMsg, mergeMsg)\n",
    "    var activeMessages = messages.count()\n",
    "    // Loop until no messages remain or maxIterations is achieved\n",
    "    var i = 0\n",
    "    while (activeMessages > 0 && i < maxIterations) {\n",
    "      // Receive the messages and update the vertices.\n",
    "      g = g.joinVertices(messages)(vprog).cache()\n",
    "      val oldMessages = messages\n",
    "      // Send new messages, skipping edges where neither side received a message.\n",
    "      //We must cache messages so it can be materialized on the next line, \n",
    "      //allowing us to uncache the previous iteration.\n",
    "      messages = GraphXUtils.mapReduceTriplets(\n",
    "        g, sendMsg, mergeMsg, Some((oldMessages, activeDirection))).cache()\n",
    "      activeMessages = messages.count()\n",
    "      i += 1\n",
    "    }\n",
    "    g\n",
    "  }\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Pregel interface has two parameter lists.\n",
    "\n",
    "The first parameter list handles configuration; its three parameters are initialMsg, maxIterations, and activeDirection,\n",
    "\n",
    "which set the initial message, the maximum number of iterations, and the condition for an edge to be active.\n",
    "\n",
    "\n",
    "The second parameter list takes three function parameters: vprog, sendMsg, and mergeMsg.\n",
    "\n",
    "vprog is the vertex update function. At the end of each iteration it updates the vertex attributes with the result of mergeMsg, and at initialization it initializes the graph with initialMsg.\n",
    "\n",
    "sendMsg is the message-sending function. Its input parameter is of type EdgeTriplet and its output type is an Iterator, which differs from the sendMsg of aggregateMessages.\n",
    "\n",
    "Note that, for the iteration to terminate, sendMsg must return an empty Iterator at the appropriate time.\n",
    "\n",
    "mergeMsg is the message-merging function, the same as the mergeMsg of aggregateMessages.\n",
    "\n",
    "Pregel produces a new graph at every iteration step, and exits when no new messages are produced or the maximum number of iterations is reached."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let us look more closely at activeDirection, the parameter that controls the active state of edges. After a round of iteration, every vertex that received a message is an active vertex.\n",
    "\n",
    "How active vertices pass their state on to edges is controlled by activeDirection, which has four possible values:\n",
    "\n",
    "EdgeDirection.Out: an edge may send messages only if its srcId vertex received a message in the previous round, i.e., a vertex's active state propagates to its out-edges.\n",
    "\n",
    "EdgeDirection.In: an edge may send messages only if its dstId vertex received a message in the previous round, i.e., a vertex's active state propagates to its in-edges.\n",
    "\n",
    "EdgeDirection.Both: an edge may send messages only if both its srcId and dstId vertices received messages in the previous round, i.e., an edge is active only when both of its endpoints are active.\n",
    "\n",
    "EdgeDirection.Either: an edge may send messages as long as either its srcId or dstId vertex received a message in the previous round, i.e., a vertex's active state propagates to both its in-edges and out-edges. This is the default."
   ]
  },
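  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch (not part of the original example), this is how activeDirection would be passed explicitly for the farthest-source distance computation. Because its messages flow from src to dst, EdgeDirection.Out lets only the out-edges of vertices that just received a message fire in the next round:\n",
    "\n",
    "```scala\n",
    "//Hypothetical variant with an explicit activeDirection\n",
    "val g_out = myGraph.mapVertices((_, _) => 0).pregel[Int](\n",
    "    0, Int.MaxValue, EdgeDirection.Out)(\n",
    "  (id, vd, a) => math.max(vd, a),\n",
    "  et => if (et.srcAttr + 1 > et.dstAttr) Iterator((et.dstId, et.srcAttr + 1))\n",
    "        else Iterator.empty,\n",
    "  (a: Int, b: Int) => math.max(a, b)\n",
    ")\n",
    "```"
   ]
  },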
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Below we reimplement the same computation with the Pregel interface: the distance between each vertex and the farthest source vertex that can reach it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(4,3)\n",
      "(1,0)\n",
      "(5,4)\n",
      "(2,1)\n",
      "(3,2)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "init_graph = org.apache.spark.graphx.impl.GraphImpl@5633fcb4\n",
       "g = org.apache.spark.graphx.impl.GraphImpl@63a6a282\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@63a6a282"
      ]
     },
     "execution_count": 49,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val init_graph = myGraph.mapVertices((vid,vd) => 0) \n",
    "val g = init_graph.pregel[Int](initialMsg = 0)(\n",
    "    (id:VertexId,vd:Int,a:Int) => math.max(vd,a),\n",
    "    (et:EdgeTriplet[Int,String])=> if(et.srcAttr+1>et.dstAttr) Iterator((et.dstId, et.srcAttr+1)) \n",
    "      else Iterator.empty,\n",
    "    (a:Int,b:Int) => math.max(a,b)\n",
    ")\n",
    "\n",
    "g.vertices.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5. Supplementary methods of the VertexRDD and EdgeRDD classes"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Besides the methods defined on the Graph class, the VertexRDD and EdgeRDD classes, in addition to inheriting the methods of RDD, provide some supplementary methods."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**1. Supplementary methods of the VertexRDD class**\n",
    "\n",
    "```scala\n",
    "class VertexRDD[VD] extends RDD[(VertexId, VD)] {\n",
    "    \n",
    "  // 1. Filter vertices\n",
    "  def filter(pred: Tuple2[VertexId, VD] => Boolean): VertexRDD[VD]\n",
    "  def minus(other: RDD[(VertexId, VD)]): VertexRDD[VD]\n",
    "  def diff(other: VertexRDD[VD]): VertexRDD[VD]\n",
    "    \n",
    "  // 2. Transform attributes\n",
    "  def mapValues[VD2](map: VD => VD2): VertexRDD[VD2]\n",
    "  def mapValues[VD2](map: (VertexId, VD) => VD2): VertexRDD[VD2]\n",
    "    \n",
    "  // 3. Join operations\n",
    "  def leftJoin[VD2, VD3](other: RDD[(VertexId, VD2)])(f: (VertexId, VD, Option[VD2]) => VD3): VertexRDD[VD3]\n",
    "  def innerJoin[U, VD2](other: RDD[(VertexId, U)])(f: (VertexId, VD, U) => VD2): VertexRDD[VD2]\n",
    "    \n",
    "  // 4. Use this VertexRDD's index to speed up reduceByKey-style aggregation of another VertexRDD\n",
    "  def aggregateUsingIndex[VD2](other: RDD[(VertexId, VD2)], reduceFunc: (VD2, VD2) => VD2): VertexRDD[VD2]\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "setA = VertexRDDImpl[962] at RDD at VertexRDD.scala:57\n",
       "rddB = MapPartitionsRDD[964] at flatMap at <console>:40\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "20"
      ]
     },
     "execution_count": 50,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val setA: VertexRDD[Int] = VertexRDD(sc.parallelize(0L until 5L).map(id => (id, 1)))\n",
    "val rddB: RDD[(VertexId, Double)] = sc.parallelize(0L until 10L)\n",
    "    .flatMap(id => List((id, 1.0), (id, 2.0)))\n",
    "rddB.count"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "5\n",
      "(4,3.0)\n",
      "(0,3.0)\n",
      "(1,3.0)\n",
      "(2,3.0)\n",
      "(3,3.0)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "setB = VertexRDDImpl[967] at RDD at VertexRDD.scala:57\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "VertexRDDImpl[967] at RDD at VertexRDD.scala:57"
      ]
     },
     "execution_count": 51,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val setB: VertexRDD[Double] = setA.aggregateUsingIndex(rddB, _ + _)\n",
    "println(setB.count)\n",
    "setB.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(4,4.0)\n",
      "(0,4.0)\n",
      "(1,4.0)\n",
      "(2,4.0)\n",
      "(3,4.0)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "setC = VertexRDDImpl[970] at RDD at VertexRDD.scala:57\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "VertexRDDImpl[970] at RDD at VertexRDD.scala:57"
      ]
     },
     "execution_count": 52,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val setC: VertexRDD[Double] = setA.innerJoin(setB)((id, a, b) => a + b)\n",
    "setC.collect.foreach(println)"
   ]
  },
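  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The remaining methods follow the same pattern. A minimal sketch, reusing the setA and setB defined above (the value names here are illustrative):\n",
    "\n",
    "```scala\n",
    "//leftJoin keeps every vertex of setA; ids missing from setB become None\n",
    "val setD = setA.leftJoin(setB)((id, a, bOpt) => a + bOpt.getOrElse(0.0))\n",
    "//filter keeps only the vertices satisfying a predicate\n",
    "val evenIds = setA.filter { case (id, _) => id % 2 == 0 }\n",
    "```"
   ]
  },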
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**2. Supplementary methods of the EdgeRDD class**\n",
    "\n",
    "```scala\n",
    "class EdgeRDD[ED] extends RDD[Edge[ED]] {\n",
    "    // 1. Transform attributes\n",
    "    def mapValues[ED2](f: Edge[ED] => ED2): EdgeRDD[ED2]\n",
    "    // 2. Reverse direction\n",
    "    def reverse: EdgeRDD[ED]\n",
    "    // 3. Join operation\n",
    "    def innerJoin[ED2, ED3](other: EdgeRDD[ED2])(f: (VertexId, VertexId, ED, ED2) => ED3): EdgeRDD[ED3]   \n",
    "}\n",
    "\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
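  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A brief sketch of the first two methods, using the edges of myGraph from the earlier sections:\n",
    "\n",
    "```scala\n",
    "//mapValues transforms edge attributes without changing the graph structure\n",
    "val upperEdges = myGraph.edges.mapValues(e => e.attr.toUpperCase)\n",
    "//reverse swaps the srcId and dstId of every edge\n",
    "val reversedEdges = myGraph.edges.reverse\n",
    "```"
   ]
  },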
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 6. Common built-in graph algorithms in GraphX"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Some of GraphX's built-in graph algorithms exist as methods on the GraphOps class; others live in graphx.lib.\n",
    "\n",
    "The relationship between Graph and GraphOps is like that between RDD and PairRDD: when needed, a Graph object can be converted implicitly into a GraphOps object.\n",
    "\n",
    "The built-in algorithms mainly include:\n",
    "\n",
    "PageRank: the PageRank value measures a vertex's importance; commonly used for ranking web pages and identifying key figures in a community.\n",
    "\n",
    "personalizedPageRank: a personalized PageRank value, usable for recommending \"people you may know\" on social networking sites.\n",
    "\n",
    "triangleCount: counts triangles, which measure the connectivity around a vertex and can also gauge the overall connectivity of a network.\n",
    "\n",
    "ShortestPaths: minimum hop count; finds the minimum number of hops between every vertex in the graph and the given vertices.\n",
    "\n",
    "connectedComponents: connected components, e.g., for finding social circles in a social network.\n",
    "\n",
    "stronglyConnectedComponents: strongly connected components, for directed graphs; can also find social circles.\n",
    "\n",
    "LabelPropagation: the label propagation algorithm, usable for community detection, but it often fails to converge and is not particularly recommended."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**1. PageRank**\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The PageRank algorithm for ranking web pages was proposed by Google founders Larry Page and Sergey Brin in 1998.\n",
    "\n",
    "It can be used for ranking pages in a search engine, or for finding the most influential papers in a citation network.\n",
    "\n",
    "In short, it measures the \"importance\" of a vertex within a network.\n",
    "\n",
    "The basic idea of PageRank is that a page linked to by more pages is more important, and a page linked to by more important pages is itself more important.\n",
    "\n",
    "The PageRank value is essentially inDegrees weighted by the importance of the pages the links come from.\n",
    "\n",
    "PageRank is computed iteratively; its physical meaning can be illustrated by the following thought experiment.\n",
    "\n",
    "Suppose a large number of users jump randomly between pages via hyperlinks. When a dynamic equilibrium is reached,\n",
    "\n",
    "the fraction of all users staying on a given page measures that page's PageRank value. In practice, PageRank values also undergo some linear scaling."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The PageRank iteration formula is:\n",
    "\n",
    "$$PR'_i = resetProb + (1-resetProb) \\sum _{j \\to i} {\\frac{PR_j}{OutDegree_j}}$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here resetProb is the reset probability: the probability that a user visits a page directly rather than via a hyperlink. Its default value is 0.15.\n",
    "\n",
    "The reset probability keeps the PageRank of pages that have only out-edges and no in-edges from decaying to zero.\n",
    "\n",
    "The summation term transfers to page $i$ the PageRank of every page linking to $i$, divided evenly according to that page's out-degree.\n",
    "\n",
    "After many iterations, when the PageRank values of all pages are essentially stable, the algorithm is declared converged."
   ]
  },
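  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To connect the formula to the API, here is a sketch of a single iteration written with aggregateMessages, reusing myGraph from earlier. This is not the built-in implementation; in practice use the pageRank method:\n",
    "\n",
    "```scala\n",
    "val resetProb = 0.15\n",
    "//attach each vertex's out-degree and initialize every rank to 1.0\n",
    "val ranks0 = myGraph.outerJoinVertices(myGraph.outDegrees)(\n",
    "  (id, _, degOpt) => (1.0, degOpt.getOrElse(0)))\n",
    "//each page splits its rank evenly among its out-links\n",
    "val contribs = ranks0.aggregateMessages[Double](\n",
    "  ec => ec.sendToDst(ec.srcAttr._1 / math.max(ec.srcAttr._2, 1)), _ + _)\n",
    "//apply PR'_i = resetProb + (1 - resetProb) * sum of contributions\n",
    "val ranks1 = ranks0.outerJoinVertices(contribs)((id, vd, msgOpt) =>\n",
    "  (resetProb + (1 - resetProb) * msgOpt.getOrElse(0.0), vd._2))\n",
    "```"
   ]
  },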
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2019-12-04T02:48:41.347349Z",
     "start_time": "2019-12-04T02:48:36.300Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(21517,37.56088066394292)\n",
      "(21487,29.675728164383607)\n",
      "(21348,26.87597352177093)\n",
      "(21566,25.62027530706551)\n",
      "(21572,25.074895000031738)\n",
      "(21318,24.321173574264403)\n",
      "(21319,23.958485566968843)\n",
      "(21575,23.14642376495923)\n",
      "(21254,23.133255380450336)\n",
      "(21618,21.903525791885322)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "graph_paper = org.apache.spark.graphx.impl.GraphImpl@19040f3c\n",
       "graph_PR = org.apache.spark.graphx.impl.GraphImpl@5f0d6b08\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@5f0d6b08"
      ]
     },
     "execution_count": 53,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//The pageRank method produces a new graph whose vertex attributes are the PR values\n",
    "val graph_paper = GraphLoader.edgeListFile(sc,\"data/paperCite.edges\")\n",
    "val graph_PR = graph_paper.pageRank(0.01) //0.01 is the tolerance: the PR difference threshold between successive iterations\n",
    "//Look at the most important papers\n",
    "graph_PR.vertices.sortBy(k=>k._2,false).take(10).foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2019-12-04T02:55:47.793178Z",
     "start_time": "2019-12-04T02:55:44.858Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(21517,50.63490034773536)\n",
      "(21487,37.95316837182138)\n",
      "(21572,36.29677001702333)\n",
      "(21566,33.93200610848722)\n",
      "(21348,32.78003434149494)\n",
      "(21318,30.639868206727012)\n",
      "(21319,30.63541621071055)\n",
      "(21575,30.104113971137526)\n",
      "(21618,29.71268516135662)\n",
      "(21254,28.823289062162544)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "graph_PageRank = org.apache.spark.graphx.impl.GraphImpl@44790a53\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@44790a53"
      ]
     },
     "execution_count": 54,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//Run PageRank by calling the PageRank method in graphx.lib\n",
    "val graph_PageRank = lib.PageRank.run(graph_paper,numIter = 20,resetProb = 0.1) \n",
    "graph_PageRank.vertices.sortBy(k=>k._2,false).take(10).foreach(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**2. personalizedPageRank**\n",
    "\n",
    "Personalized PageRank is a variant of PageRank that can be used to recommend \"people you may know\" on social networking sites.\n",
    "\n",
    "Besides a termination condition for the iteration, personalizedPageRank also requires the srcId of a source vertex.\n",
    "\n",
    "In standard PageRank there is a reset probability: the chance that a user enters a page directly rather than via a hyperlink.\n",
    "\n",
    "In personalized PageRank, only the designated source vertex has a nonzero reset probability; the reset probability of every other vertex is 0.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$PR'_i = resetProb\\;\\delta _{is} + (1-resetProb) \\sum _{j \\to i} {\\frac{PR_j}{OutDegree_j}}$$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2019-12-04T03:16:39.076487Z",
     "start_time": "2019-12-04T03:16:35.707Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "graph_paper = org.apache.spark.graphx.impl.GraphImpl@660de127\n",
       "graph_PR_21517 = org.apache.spark.graphx.impl.GraphImpl@58868925\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "(21572,0.23637585271284656)"
      ]
     },
     "execution_count": 55,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//Find the paper most strongly associated with paper 21517\n",
    "val graph_paper = GraphLoader.edgeListFile(sc,\"data/paperCite.edges\")\n",
    "val graph_PR_21517 = graph_paper.personalizedPageRank(21517L,0.01)\n",
    "graph_PR_21517.vertices.filter(_._1!=21517L).reduce((a,b)=> if(a._2>b._2) a else b)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**3. triangleCount**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The number of triangles measures the connectivity of the network around a vertex. GraphX ignores edge direction when counting triangles.\n",
    "\n",
    "The interaction rules of WeChat Moments are based on triangle relationships. If A and B are friends and A and C are friends, but B and C are not, then when A posts a status and B likes it, C can see A's status but not B's like. Only when A and B are friends, A and C are friends, and B and C are also friends, so that the three form a triangle, can B and C see each other's likes under A's status.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2019-12-04T04:59:02.057789Z",
     "start_time": "2019-12-04T04:59:01.117Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "graph = org.apache.spark.graphx.impl.GraphImpl@7ecc443c\n",
       "graph_triangle = org.apache.spark.graphx.impl.GraphImpl@62e29954\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "Array((4,1), (1,3), (5,1), (2,1), (3,3))"
      ]
     },
     "execution_count": 56,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val graph = Graph.fromEdgeTuples(sc.makeRDD(Array((1L,2L),(1L,3L),(2L,3L),\n",
    "            (1L,4L),(1L,5L),(3L,5L),(3L,4L))),1)\n",
    "val graph_triangle = graph.triangleCount()\n",
    "graph_triangle.vertices.collect()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**4, ShortestPaths**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Although ShortestPaths is named after shortest paths, what it actually computes is the minimum number of hops from each vertex to the given landmark vertices: every edge counts as one hop regardless of its attribute, and edge direction is respected (in the example below, vertex 5 has no directed path to vertex 4, so its result map is empty)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2019-12-04T05:05:52.411644Z",
     "start_time": "2019-12-04T05:05:51.636Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "graph = org.apache.spark.graphx.impl.GraphImpl@75340945\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "Array((4,Map(4 -> 0)), (1,Map(4 -> 3)), (5,Map()), (2,Map(4 -> 2)), (3,Map(4 -> 1)))"
      ]
     },
     "execution_count": 61,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val graph = Graph.fromEdgeTuples(sc.makeRDD(Array((1L,2L),(2L,3L),(3L,4L),\n",
    "    (4L,5L),(1L,5L))),1)\n",
    "\n",
    "lib.ShortestPaths.run(graph,Array(4L)).vertices.collect()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**5, connectedComponents**\n",
    "\n",
    "connectedComponents partitions the graph into connected regions; each vertex's attribute is set to the smallest vertex id in its region.\n",
    "A clever use of connectedComponents is implementing the DBSCAN algorithm on Spark, where it can be used to merge temporary cluster cells.\n",
    "\n",
    "Connected components ignore edge direction."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2019-12-04T06:44:55.238155Z",
     "start_time": "2019-12-04T06:44:54.369Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "graph = org.apache.spark.graphx.impl.GraphImpl@20f2578d\n",
       "graph_connected = org.apache.spark.graphx.impl.GraphImpl@c973c99\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "Array((1,1), (5,5), (6,6), (2,1), (3,1), (7,6))"
      ]
     },
     "execution_count": 62,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val graph = Graph.fromEdgeTuples(sc.makeRDD(Array((1L,2L),(2L,3L),(3L,1L),(5L,5L),(6L,7L))),1)\n",
    "val graph_connected = graph.connectedComponents()\n",
    "graph_connected.vertices.collect"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2019-12-04T06:44:59.078998Z",
     "start_time": "2019-12-04T06:44:58.645Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Array((1,Set(1, 2, 3)), (5,Set(5)), (6,Set(6, 7)))"
      ]
     },
     "execution_count": 63,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "graph_connected.vertices.map(t=>(t._2,Set(t._1))).reduceByKey((s1,s2)=>s1|s2).collect"
   ]
  },
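  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a hedged sketch of the DBSCAN-merging idea mentioned above: suppose temporary cluster cells are given as (clusterId, pointId) pairs, and two cells should be merged whenever they share a point. Building an edge between every pair of cluster ids that share a point and running connectedComponents yields the merged clusters. The data and the names `tempClusters` and `clusterEdges` are illustrative, not from any particular DBSCAN implementation:\n",
    "\n",
    "```scala\n",
    "//temporary cells: 1 and 2 share point 10, 2 and 3 share point 20, cell 4 is isolated\n",
    "val tempClusters = sc.makeRDD(Seq((1L,10L),(1L,11L),(2L,10L),(2L,20L),(3L,20L),(4L,30L)))\n",
    "\n",
    "//self-join on pointId to connect every pair of cluster ids sharing a point\n",
    "val byPoint = tempClusters.map(_.swap)\n",
    "val clusterEdges = byPoint.join(byPoint).values.filter{case (a,b) => a<b}.distinct\n",
    "\n",
    "//each cluster id is assigned the smallest id of its merged group\n",
    "Graph.fromEdgeTuples(clusterEdges, 1).connectedComponents().vertices.collect\n",
    "//note: isolated cells (like 4) contribute no edges and must be added back separately\n",
    "```"
   ]
  },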
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**6, stronglyConnectedComponents**\n",
    "\n",
    "stronglyConnectedComponents is similar to connectedComponents, but it also takes edge direction into account.\n",
    "\n",
    "In a strongly connected component, every vertex can reach every other vertex along directed edges.\n",
    "\n",
    "Because the algorithm iterates along directed edges and cycles could keep it running, a maximum number of iterations must be specified."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2019-12-04T06:46:04.825355Z",
     "start_time": "2019-12-04T06:46:02.791Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "graph = org.apache.spark.graphx.impl.GraphImpl@5b403805\n",
       "graph_stronglyconnected = org.apache.spark.graphx.impl.GraphImpl@7353a186\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "Array((1,1), (5,5), (6,6), (2,1), (3,1), (7,7))"
      ]
     },
     "execution_count": 64,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val graph = Graph.fromEdgeTuples(sc.makeRDD(Array((1L,2L),(2L,3L),(3L,1L),(5L,5L),(6L,7L))),1)\n",
    "val graph_stronglyconnected = graph.stronglyConnectedComponents(20) //cap the number of iterations at 20\n",
    "graph_stronglyconnected.vertices.collect"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2019-12-04T06:47:08.086364Z",
     "start_time": "2019-12-04T06:47:07.631Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Array((1,Set(1, 2, 3)), (5,Set(5)), (6,Set(6)), (7,Set(7)))"
      ]
     },
     "execution_count": 65,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "graph_stronglyconnected.vertices.map(t=>(t._2,Set(t._1))).reduceByKey((s1,s2)=>s1|s2).collect"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**7, LabelPropagation**\n",
    "\n",
    "To identify tightly knit groups in a graph, GraphX provides the Label Propagation Algorithm (LPA).\n",
    "The idea is to let densely connected groups of vertices converge on a unique label; each such group is then treated as a community.\n",
    "Unfortunately, LPA often fails to converge, so a maximum number of iterations must be specified."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2019-12-04T07:06:19.475669Z",
     "start_time": "2019-12-04T07:06:18.184Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "graph = org.apache.spark.graphx.impl.GraphImpl@63880b20\n",
       "graph_LPA = Array((1,1), (2,2), (3,1), (4,2), (5,1), (6,6), (7,5), (8,5), (9,6))\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "Array((1,1), (2,2), (3,1), (4,2), (5,1), (6,6), (7,5), (8,5), (9,6))"
      ]
     },
     "execution_count": 66,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val graph = Graph.fromEdgeTuples(sc.makeRDD(Array((1L,2L),(2L,3L),(3L,4L),(4L,1L),\n",
    "            (1L,3L),(2L,4L),(4L,5L),(5L,6L),(6L,7L),(7L,8L),(8L,9L),(5L,7L),(6L,8L))),1)\n",
    "val graph_LPA = lib.LabelPropagation.run(graph,10).vertices.collect.sortWith(_._1<_._1)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 7. Other Common Graph Algorithms"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Most of GraphX's built-in graph algorithms are implemented with the Pregel iteration API.\n",
    "\n",
    "Some other classic graph algorithms are not a good fit for the Pregel API, so GraphX has no built-in implementation for them. These algorithms are also iterative in nature, for example adding one edge per iteration. In this section we mainly use functions such as mapVertices and outerJoinVertices to implement and parallelize algorithms that were originally designed to run sequentially.\n",
    "\n",
    "These algorithms include:\n",
    "\n",
    "Shortest paths (Dijkstra): find the shortest path from a given vertex to every other vertex in the graph.\n",
    "\n",
    "Traveling salesman problem (TSP): find the shortest route that visits every vertex once and returns to the starting point.\n",
    "\n",
    "Minimum spanning tree (Kruskal): in a connected weighted graph, find a spanning tree whose total edge weight is no larger than that of any other spanning tree."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**1, Shortest paths (Dijkstra)**\n",
    "\n",
    "Single-source shortest paths can be computed by breadth-first relaxation of tentative distances, which maps naturally onto the Pregel iteration API."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](data/最短路径.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "verts = ParallelCollectionRDD[2777] at makeRDD at <console>:43\n",
       "edges = ParallelCollectionRDD[2778] at makeRDD at <console>:46\n",
       "graph = org.apache.spark.graphx.impl.GraphImpl@34e2cbd4\n",
       "init_graph = org.apache.spark.graphx.impl.GraphImpl@6531dd81\n",
       "g = org.apache.spark.graphx.impl.GraphImpl@11419de\n",
       "output_graph = org.apache.spark.graphx.impl.GraphImpl@39e1b8a\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "Array((D,5.0), (A,0.0), (E,14.0), (F,11.0), (B,7.0), (C,15.0), (G,22.0))"
      ]
     },
     "execution_count": 67,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val verts = sc.makeRDD(Array((1L,\"A\"),(2L,\"B\"),(3L,\"C\"),\n",
    "(4L,\"D\"),(5L,\"E\"),(6L,\"F\"),(7L,\"G\")))\n",
    "\n",
    "val edges = sc.makeRDD(Array(Edge(1L, 2L, 7.0), Edge(1L, 4L, 5.0) ,\n",
    "Edge(2L, 3L, 8.0), Edge(2L, 4L, 9.0), Edge(2L, 5L, 7.0),\n",
    "Edge(3L, 5L, 5.0), Edge(5L, 6L, 8.0), Edge(4L, 5L,15.0), Edge (4L, 6L, 6.0),\n",
    "Edge(5L, 7L, 9.0) , Edge(6L, 7L, 11.0))) \n",
    "\n",
    "val graph = Graph(verts, edges)\n",
    "\n",
    "//single-source shortest paths on the directed graph via the Pregel API, source vertex A\n",
    "val init_graph = graph.mapVertices((vid,vd)=>if(vid==1L) 0.0 else Double.PositiveInfinity)\n",
    "val g = init_graph.pregel[Double](initialMsg = Double.PositiveInfinity,\n",
    "     activeDirection = EdgeDirection.Out)(\n",
    "    (vid:VertexId,vd:Double,a:Double) => math.min(vd,a), //vprog\n",
    "    (et:EdgeTriplet[Double,Double]) => ({\n",
    "        val candidate = et.srcAttr+et.attr\n",
    "        if(candidate<et.dstAttr) Iterator((et.dstId,candidate)) else Iterator.empty\n",
    "    }),//sendMsg\n",
    "    (a:Double,b:Double) => math.min(a,b) //mergeMsg\n",
    ")\n",
    "\n",
    "val output_graph = g.outerJoinVertices[String,(String,Double)](graph.vertices)(\n",
    "    (vid:Long,vd:Double,opt:Option[String])=>(opt.getOrElse(\" \"),vd))\n",
    "\n",
    "output_graph.vertices.map(t => t._2).collect"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "init_graph = org.apache.spark.graphx.impl.GraphImpl@464b5293\n",
       "g = org.apache.spark.graphx.impl.GraphImpl@38c811f2\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "Array((4,9.0), (1,7.0), (5,7.0), (6,15.0), (2,0.0), (3,8.0), (7,16.0))"
      ]
     },
     "execution_count": 68,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//single-source shortest paths treating edges as undirected (activeDirection = Either), source vertex B\n",
    "val init_graph = graph.mapVertices((vid,vd)=>if(vid==2L) 0.0 else Double.PositiveInfinity)\n",
    "val g = init_graph.pregel[Double](initialMsg = Double.PositiveInfinity,\n",
    "    activeDirection = EdgeDirection.Either)(\n",
    "    (vid:VertexId,vd:Double,a:Double) => math.min(vd,a), //vprog\n",
    "    (et:EdgeTriplet[Double,Double]) => ({\n",
    "        val candidate_dst = et.srcAttr+et.attr\n",
    "        val candidate_src = et.dstAttr+et.attr\n",
    "        val iter = if(candidate_dst<et.dstAttr){\n",
    "            Iterator((et.dstId,candidate_dst))\n",
    "        }else if(candidate_src<et.srcAttr){\n",
    "            Iterator((et.srcId,candidate_src))\n",
    "        }else{\n",
    "            Iterator.empty\n",
    "        }\n",
    "        iter\n",
    "    }),//sendMsg\n",
    "    (a:Double,b:Double) => math.min(a,b) //mergeMsg\n",
    ")\n",
    "g.vertices.collect"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "init_graph = org.apache.spark.graphx.impl.GraphImpl@7b0e8a10\n",
       "g = org.apache.spark.graphx.impl.GraphImpl@544d7f9b\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "Array((4,(5.0,List(A, D))), (1,(0.0,List(A))), (5,(14.0,List(A, B, E))), (6,(11.0,List(A, D, F))), (2,(7.0,List(A, B))), (3,(15.0,List(A, B, C))), (7,(22.0,List(A, D, F, G))))"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//shortest paths on the directed graph from source A, recording the path as a list of vertex names\n",
    "val init_graph = graph.mapVertices((vid,vd)=>if(vid==1L) (0.0,List[String](vd)) \n",
    "                                   else (Double.PositiveInfinity,List[String](vd)))\n",
    "\n",
    "val g = init_graph.pregel[(Double,List[String])](initialMsg=(Double.PositiveInfinity,List[String]()),\n",
    "    activeDirection = EdgeDirection.Out)(\n",
    "    (vid:VertexId,vd:(Double,List[String]),a:(Double,List[String]))=>if(a._1<vd._1) a else vd, //vprog\n",
    "    (et:EdgeTriplet[(Double,List[String]),Double]) => ({\n",
    "        val candidate = et.srcAttr._1 + et.attr\n",
    "        //et.dstAttr._2.last is the destination vertex's own name (its current path list ends at itself)\n",
    "        if(candidate<et.dstAttr._1) Iterator((et.dstId,(candidate,et.srcAttr._2 :+ et.dstAttr._2.last)))\n",
    "        else Iterator.empty\n",
    "    }),//sendMsg\n",
    "    (a:(Double,List[String]),b:(Double,List[String])) => if(a._1<b._1) a else b//mergeMsg\n",
    ")\n",
    "\n",
    "g.vertices.collect"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 70,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(4,(5.0,A->D))\n",
      "(1,(0.0,A))\n",
      "(5,(14.0,A->B->E))\n",
      "(6,(11.0,A->D->F))\n",
      "(2,(7.0,A->B))\n",
      "(3,(15.0,A->B->C))\n",
      "(7,(22.0,A->D->F->G))\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "init_graph = org.apache.spark.graphx.impl.GraphImpl@62c5ec47\n",
       "g = org.apache.spark.graphx.impl.GraphImpl@39deb020\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@39deb020"
      ]
     },
     "execution_count": 70,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//shortest paths on the directed graph from source A, recording the path as a \"->\"-joined string\n",
    "val init_graph = graph.mapVertices((vid,vd)=>\n",
    "    if(vid==1L) (0.0,vd) else (Double.PositiveInfinity,vd))\n",
    "\n",
    "val g = init_graph.pregel[(Double,String)](initialMsg = (Double.PositiveInfinity,\"\"),\n",
    "    activeDirection = EdgeDirection.Out)(\n",
    "    (vid:VertexId,vd:(Double,String),a:(Double,String)) => if(a._1<vd._1) a else vd, //vprog\n",
    "    (et:EdgeTriplet[(Double,String),Double]) => ({\n",
    "        val candidate = et.srcAttr._1 + et.attr\n",
    "        if(candidate<et.dstAttr._1)\n",
    "            //dstAttr._2.last is the destination's own single-character name\n",
    "            Iterator((et.dstId,(candidate,et.srcAttr._2+\"->\"+et.dstAttr._2.last)))\n",
    "        else \n",
    "            Iterator.empty\n",
    "    }),//sendMsg\n",
    "    (a:(Double,String),b:(Double,String)) => if(a._1<b._1) a else b//mergeMsg\n",
    ")\n",
    "\n",
    "g.vertices.collect.foreach(println)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**2, Traveling salesman problem (TSP)**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The traveling salesman problem (TSP) asks for the shortest route in an undirected graph that visits every vertex. Imagine a salesman who must visit every city in a region and wants to minimize the total distance traveled.\n",
    "\n",
    "TSP is NP-hard: no known algorithm produces an exact solution in polynomial time. The following greedy algorithm gives an approximate solution.\n",
    "\n",
    "Greedy algorithm for TSP:\n",
    "\n",
    "* 1. Start from some vertex.\n",
    "* 2. Add the incident edge with the smallest weight to the route.\n",
    "* 3. Take that edge's other endpoint as the new current vertex and go back to step 2.\n",
    "\n",
    "This greedy approach is the simplest for TSP, but it does not always reach every vertex. In the example below, vertex G is never reached.\n",
    "\n",
    "Without much extra code, the whole greedy algorithm can be rerun from different starting vertices, iterating to pick the shortest solution that reaches all vertices.\n",
    "\n",
    "![](data/TSP问题.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "verts = ParallelCollectionRDD[3067] at makeRDD at <console>:41\n",
       "edges = ParallelCollectionRDD[3068] at makeRDD at <console>:44\n",
       "graph = org.apache.spark.graphx.impl.GraphImpl@76df072a\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "org.apache.spark.graphx.impl.GraphImpl@76df072a"
      ]
     },
     "execution_count": 71,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val verts = sc.makeRDD(Array((1L,\"A\"),(2L,\"B\"),(3L,\"C\"),\n",
    "(4L,\"D\"),(5L,\"E\"),(6L,\"F\"),(7L,\"G\")))\n",
    "\n",
    "val edges = sc.makeRDD(Array(Edge(1L, 2L, 7.0), Edge(1L, 4L, 5.0) ,\n",
    "Edge(2L, 3L, 8.0), Edge(2L, 4L, 9.0), Edge(2L, 5L, 7.0),\n",
    "Edge(3L, 5L, 5.0), Edge(5L, 6L, 8.0), Edge(4L, 5L,15.0), Edge (4L, 6L, 6.0),\n",
    "Edge(5L, 7L, 9.0) , Edge(6L, 7L, 11.0))) \n",
    "\n",
    "val graph = Graph(verts, edges)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 72,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "graph_tsp = org.apache.spark.graphx.impl.GraphImpl@6eb97170\n",
       "i = 6\n",
       "near_triplets = MapPartitionsRDD[3150] at filter at <console>:60\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "Array((A,1), (D,2), (F,3), (E,4), (C,5), (B,6))"
      ]
     },
     "execution_count": 72,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//use the vertex attribute to record the visiting order; 0 means not yet visited\n",
    "\n",
    "//start from vertex A\n",
    "var graph_tsp = graph.mapVertices((vid,vd) => if(vd!=\"A\") (vd,0L) else (vd,1L))\n",
    "\n",
    "//find edges connecting the current vertex to unvisited vertices\n",
    "var i = 1L \n",
    "var near_triplets = graph_tsp.triplets.filter(t=>\n",
    "    {(t.srcAttr._2==i && t.dstAttr._2 ==0L)||(t.dstAttr._2==i && t.srcAttr._2 ==0L)})\n",
    "\n",
    "while(near_triplets.count > 0){\n",
    "    //pick the shortest such edge; its unvisited endpoint becomes the next vertex to visit\n",
    "    val min_t = near_triplets.reduce((a,b)=> if(a.attr<b.attr) a else b)\n",
    "    val next_id = if(min_t.srcAttr._2 ==0L) min_t.srcId else min_t.dstId\n",
    "    i=i+1\n",
    "    \n",
    "    //update the vertex attribute via outerJoinVertices\n",
    "    graph_tsp = graph_tsp.outerJoinVertices[Long,(String,Long)](sc.makeRDD(List((next_id,i))))(\n",
    "      (vid,vd,opt) => (vd._1, opt.getOrElse(vd._2)))\n",
    "    \n",
    "    near_triplets = graph_tsp.triplets.filter(t=>\n",
    "     {(t.srcAttr._2==i && t.dstAttr._2 ==0L)||(t.dstAttr._2==i && t.srcAttr._2 ==0L)})\n",
    "}\n",
    "\n",
    "//output the visiting order\n",
    "graph_tsp.vertices.map(_._2).filter(_._2>0).sortBy(_._2).collect"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**3, Minimum spanning tree (Kruskal)**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The minimum spanning tree problem looks for a subgraph that contains every vertex of the graph and whose total edge weight is minimal.\n",
    "\n",
    "Because such a subgraph includes every vertex of the original graph and its total edge weight is minimal, it can be called a minimum spanning subgraph.\n",
    "\n",
    "A subgraph with minimal total edge weight can never contain a cycle: otherwise, removing one edge of the cycle would still leave every vertex connected while making the total weight smaller.\n",
    "\n",
    "So a minimum spanning subgraph is in fact a tree, usually called a minimum spanning tree.\n",
    "\n",
    "A classic solution to the minimum spanning tree problem is Kruskal's algorithm, a greedy algorithm.\n",
    "\n",
    "Unlike the greedy algorithm for TSP, the solution produced by this greedy strategy can be proven optimal by contradiction.\n",
    "\n",
    "The most direct applications of minimum spanning trees are in planning networks of roads, power lines, water pipes and so on, ensuring that the infrastructure reaches every city at minimal cost (for example minimal total distance, with edge weights representing distances between cities). There are also less obvious applications, such as classifying collections of similar items, e.g. animals (for scientific taxonomy) or newspaper headlines.\n",
    "\n",
    "The tree-growing procedure used below can be stated as follows (strictly speaking, this vertex-growing variant is Prim's algorithm; Kruskal's original algorithm instead scans all edges globally in order of weight):\n",
    "\n",
    "* 1. Initialize an empty edge set as the spanning tree under construction.\n",
    "\n",
    "* 2. Find the shortest edge in the graph and add it to the result set. Mark its two endpoints as visited.\n",
    "\n",
    "* 3. Among the edges connecting a visited vertex to an unvisited one, find the shortest and add it to the result set. Mark the unvisited endpoint as visited.\n",
    "\n",
    "* 4. Repeat step 3 until every vertex has been visited.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](data/最小生成树.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "defined object TestKruskal\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-----+-----+--------+\n",
      "|srcId|dstId|    attr|\n",
      "+-----+-----+--------+\n",
      "|    1|    2|[7.0, 1]|\n",
      "|    1|    4|[5.0, 1]|\n",
      "|    2|    5|[7.0, 1]|\n",
      "|    3|    5|[5.0, 1]|\n",
      "|    4|    6|[6.0, 1]|\n",
      "|    5|    7|[9.0, 1]|\n",
      "+-----+-----+--------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "import org.apache.spark.sql.SparkSession\n",
    "import org.apache.spark.graphx._\n",
    "import org.apache.spark.sql.DataFrame\n",
    "import org.apache.spark.rdd.RDD\n",
    "\n",
    "//complete example; the object extends Serializable to avoid task-serialization errors\n",
    "\n",
    "object TestKruskal extends Serializable{\n",
    "    def main(args:Array[String]):Unit = {\n",
    "        val spark = SparkSession.builder()\n",
    "           .master(\"local[4]\").appName(\"graph\")\n",
    "           .getOrCreate()\n",
    "        \n",
    "        import spark.implicits._\n",
    "        val sc = spark.sparkContext\n",
    "        \n",
    "        //build the graph\n",
    "        val verts = sc.makeRDD(Array((1L,\"A\"),(2L,\"B\"),(3L,\"C\"),\n",
    "            (4L,\"D\"),(5L,\"E\"),(6L,\"F\"),(7L,\"G\")))\n",
    "\n",
    "        val edges = sc.makeRDD(Array(Edge(1L, 2L, 7.0), Edge(1L, 4L, 5.0),\n",
    "            Edge(2L, 3L, 8.0), Edge(2L, 4L, 9.0), Edge(2L, 5L, 7.0),\n",
    "            Edge(3L, 5L, 5.0), Edge(5L, 6L, 8.0), Edge(4L, 5L, 15.0), Edge(4L, 6L, 6.0),\n",
    "            Edge(5L, 7L, 9.0), Edge(6L, 7L, 11.0)))\n",
    "\n",
    "        val graph = Graph(verts, edges)\n",
    "\n",
    "        //the edge attribute records whether the edge is in the spanning tree (0 = not in the tree)\n",
    "        //the vertex attribute records whether the vertex has been reached by the tree (0 = not reached)\n",
    "        \n",
    "        //1. find the shortest edge\n",
    "        val min_t = graph.triplets.reduce((a,b)=> if(a.attr<b.attr) a else b)\n",
    "        val init_graph = graph.mapVertices((vid,vd) => \n",
    "            if(vid!=min_t.srcId && vid!=min_t.dstId) (vd,0) else (vd,1))\n",
    "        var graph_kruskal = init_graph.\n",
    "          mapTriplets( t => if(t.srcId==min_t.srcId && t.dstId==min_t.dstId) \n",
    "                      (t.attr,1) else (t.attr,0))\n",
    "        \n",
    "        //2. repeatedly add the shortest edge connecting a visited vertex to an unvisited one\n",
    "        \n",
    "        for(i<-3 to graph.numVertices.toInt){\n",
    "            val min_t = graph_kruskal.triplets.filter(t =>t.srcAttr._2+t.dstAttr._2==1)\n",
    "               .reduce((a,b)=> if(a.attr._1<b.attr._1) a else b)\n",
    "            \n",
    "            graph_kruskal = graph_kruskal.mapVertices((vid,vd) =>\n",
    "               if(vid!=min_t.srcId && vid!=min_t.dstId) vd else (vd._1,1))\n",
    "            graph_kruskal = graph_kruskal.mapTriplets( t => \n",
    "               if(t.srcId==min_t.srcId && t.dstId==min_t.dstId) (t.attr._1,1) else t.attr)\n",
    "            \n",
    "        }\n",
    "        \n",
    "        graph_kruskal.edges.filter(_.attr._2==1).toDS.show(100)\n",
    "    }\n",
    "}\n",
    "\n",
    "TestKruskal.main(Array(\"\"))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 8. Graphs and Machine Learning"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Most machine learning algorithms use matrices or tensors as their data structures, but some use graphs, and quite a few implementations rely on graphs internally.\n",
    "\n",
    "Some common graph-related machine learning algorithms are briefly introduced below.\n",
    "\n",
    "1. Supervised learning\n",
    "\n",
    "SVD++: a product recommendation algorithm that takes an RDD of edges as input; it can be invoked through graphx.lib.SVDPlusPlus.\n",
    "\n",
    "2. Unsupervised learning\n",
    "\n",
    "LDA topic model: maps documents to topic vectors so they can be clustered. It lives in the mllib library, but its EM implementation can be accelerated with graphs.\n",
    "\n",
    "PIC clustering: power iteration clustering can be used for tasks such as image segmentation; it can be invoked through mllib.clustering.PowerIterationClustering.\n",
    "\n",
    "3. Semi-supervised learning\n",
    "\n",
    "Label propagation on a k-nearest-neighbor graph: the graph structure is used to propagate the labels of a few labeled vertices to their unlabeled neighbors (optionally weighting probabilities by the inverse of the edge weights).\n",
    "\n",
    "Once enough vertices carry labels, the labels of the remaining vertices are predicted with k-nearest neighbors.\n"
   ]
  },
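  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of calling the built-in SVD++ recommender mentioned above. The rating edges and the `Conf` values (rank, iteration count, rating bounds, and the gamma learning-rate/regularization factors) are illustrative placeholders, not tuned settings:\n",
    "\n",
    "```scala\n",
    "import org.apache.spark.graphx._\n",
    "import org.apache.spark.graphx.lib.SVDPlusPlus\n",
    "\n",
    "//user -> item rating edges; user ids and item ids must not collide\n",
    "val ratings = sc.makeRDD(Array(\n",
    "  Edge(1L, 11L, 5.0), Edge(1L, 12L, 1.0),\n",
    "  Edge(2L, 11L, 4.0), Edge(2L, 13L, 3.0)))\n",
    "\n",
    "//Conf(rank, maxIters, minVal, maxVal, gamma1, gamma2, gamma6, gamma7)\n",
    "val conf = new SVDPlusPlus.Conf(2, 10, 1.0, 5.0, 0.007, 0.007, 0.005, 0.015)\n",
    "\n",
    "//returns the trained factor graph and the global mean rating\n",
    "val (svdGraph, meanRating) = SVDPlusPlus.run(ratings, conf)\n",
    "```"
   ]
  },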
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Spark - Scala",
   "language": "scala",
   "name": "spark_scala"
  },
  "language_info": {
   "codemirror_mode": "text/x-scala",
   "file_extension": ".scala",
   "mimetype": "text/x-scala",
   "name": "scala",
   "pygments_lexer": "scala",
   "version": "2.11.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
