{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Learn SparkSQL in One Hour"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. RDD vs. DataFrame vs. DataSet"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "DataFrame borrows ideas from Pandas: it adds a schema on top of RDD, so column name information is available.\n",
    "\n",
    "DataSet further adds static type information on top of DataFrame, so type errors can be caught at compile time.\n",
    "\n",
    "A DataFrame can be viewed as a DataSet[Row]; the two expose exactly the same API.\n",
    "\n",
    "Both DataFrame and DataSet support interactive SQL queries and integrate seamlessly with Hive.\n",
    "\n",
    "DataSet is available only in the Scala and Java APIs; the Python and R APIs support only DataFrame.\n",
    "\n",
    "![](RDD,DataFrame,Dataset对比.jpg)"
   ]
  },
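  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compile-time type checking is the practical payoff of DataSet. A minimal sketch, assuming spark.implicits._ is in scope (the Student case class here is illustrative):\n",
    "\n",
    "```scala\n",
    "case class Student(name: String, age: Int)\n",
    "\n",
    "val ds = Seq(Student(\"LiLei\", 15)).toDS\n",
    "ds.map(s => s.age + 1)          // compiles: age is a checked field of Student\n",
    "\n",
    "val df = ds.toDF()\n",
    "// df.select($\"aeg\")            // a typo like this compiles, but fails at runtime\n",
    "// ds.map(s => s.aeg)           // the same typo on the DataSet is a compile error\n",
    "```"
   ]
  },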
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import org.apache.spark.sql.SparkSession\n",
    "\n",
    "val spark = SparkSession\n",
    ".builder()\n",
    ".appName(\"Spark SQL basic example\")\n",
    ".enableHiveSupport()\n",
    ".getOrCreate()\n",
    "\n",
    "//enable implicit conversions, e.g. from RDD to DataFrame\n",
    "import spark.implicits._\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Creating a DataFrame"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. Converting with the toDF method\n",
    "\n",
    "A Seq, a List, or an RDD can be converted into a DataFrame.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//convert a Seq into a DataFrame\n",
    "val seq = Seq(\n",
    "(1, \"First Value\", java.sql.Date.valueOf(\"2010-01-01\")),\n",
    "(2, \"Second Value\", java.sql.Date.valueOf(\"2010-02-01\"))\n",
    ")\n",
    "val df = seq.toDF(\"int_column\",\"string_column\",\"date_column\")\n",
    "df.show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//convert a List into a DataFrame\n",
    "val list = List(\n",
    "(\"LiLei\",15,88),\n",
    "(\"HanMeiMei\",16,90),\n",
    "(\"DaChui\",17,60)\n",
    ")\n",
    "\n",
    "val df = list.toDF(\"name\",\"age\",\"score\")\n",
    "df.show()\n",
    "df.printSchema()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//convert an RDD into a DataFrame\n",
    "val rdd = sc.parallelize(List((\"LiLei\",15),(\"HanMeiMei\",17),(\"DaChui\",16)),2)\n",
    "val df = rdd.toDF(\"name\",\"age\")\n",
    "df.show \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2. Creating a DataFrame dynamically with createDataFrame\n",
    "\n",
    "The createDataFrame method builds a DataFrame from an RDD plus an explicitly specified schema.\n",
    "\n",
    "This approach is more verbose, but it allows a DataFrame to be constructed in code even when the schema and data types are not known in advance.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import org.apache.spark.sql.types._\n",
    "import org.apache.spark.sql.Row\n",
    "\n",
    "val schema = StructType(List(\n",
    "StructField(\"integer_column\", IntegerType, nullable = false),\n",
    "StructField(\"string_column\", StringType, nullable = true),\n",
    "StructField(\"date_column\", DateType, nullable = true)\n",
    "))\n",
    "\n",
    "val rdd = spark.sparkContext.parallelize(Seq(\n",
    "Row(1, \"First Value\", java.sql.Date.valueOf(\"2010-01-01\")),\n",
    "Row(2, \"Second Value\", java.sql.Date.valueOf(\"2010-02-01\")),\n",
    "Row(2, \"Second Value\", null)\n",
    "))\n",
    "val df = spark.createDataFrame(rdd, schema)\n",
    "df.show()\n"
   ]
  },
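  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since the schema is an ordinary Scala value, it can also be assembled at runtime, for example from a list of column names. A minimal sketch (the column names and the all-StringType assumption are illustrative):\n",
    "\n",
    "```scala\n",
    "import org.apache.spark.sql.types.{StructType, StructField, StringType}\n",
    "import org.apache.spark.sql.Row\n",
    "\n",
    "// hypothetical: column names only known at runtime, e.g. from a header line\n",
    "val columns = Seq(\"name\", \"city\")\n",
    "val schema = StructType(columns.map(c => StructField(c, StringType, nullable = true)))\n",
    "\n",
    "val rdd = spark.sparkContext.parallelize(Seq(Row(\"LiLei\", \"Beijing\")))\n",
    "val df = spark.createDataFrame(rdd, schema)\n",
    "df.printSchema()\n",
    "```"
   ]
  },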
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**3. Creating a DataFrame by reading files**\n",
    "\n",
    "A DataFrame can be obtained by reading a json file, a csv file, a hive table, or a mysql table."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//read a json file into a DataFrame\n",
    "val df = spark.read.json(\"resources/people.json\")\n",
    "df.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//read a csv file; since Spark 2.0, csv is a built-in format\n",
    "val df = spark.read.format(\"csv\")\n",
    " .option(\"header\",\"true\") //set to \"true\" if the first line holds column names, otherwise \"false\"\n",
    " .option(\"inferSchema\",\"true\") //automatically infer each column's data type\n",
    " .option(\"delimiter\", \",\") //field delimiter, comma by default\n",
    " .load(\"resources/iris.csv\")\n",
    "df.show(5)\n",
    "df.printSchema()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//read a parquet file\n",
    "val df = spark.read.parquet(\"resources/users.parquet\")\n",
    "df.show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//read a mysql table into a DataFrame\n",
    "val url = \"jdbc:mysql://localhost:3306/test\"\n",
    "val df = spark.read\n",
    " .format(\"jdbc\")\n",
    " .option(\"url\", url)\n",
    " .option(\"dbtable\", \"runoob_tbl\")\n",
    " .option(\"user\", \"root\")\n",
    " .option(\"password\", \"0845\")\n",
    " .load()\n",
    "df.show()\n"
   ]
  },
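  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A plain jdbc read uses a single partition, which can be a bottleneck on large tables. Spark's jdbc source accepts partitioning options; the sketch below reuses the connection details from the cell above, while the id column and its bounds are assumptions about runoob_tbl:\n",
    "\n",
    "```scala\n",
    "// Spark issues one query per partition, splitting the id range into 4 slices\n",
    "val dfParallel = spark.read\n",
    " .format(\"jdbc\")\n",
    " .option(\"url\", \"jdbc:mysql://localhost:3306/test\")\n",
    " .option(\"dbtable\", \"runoob_tbl\")\n",
    " .option(\"user\", \"root\")\n",
    " .option(\"password\", \"0845\")\n",
    " .option(\"partitionColumn\", \"id\") //assumed numeric column in the table\n",
    " .option(\"lowerBound\", \"1\")\n",
    " .option(\"upperBound\", \"1000\")\n",
    " .option(\"numPartitions\", \"4\")\n",
    " .load()\n",
    "```"
   ]
  },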
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//read a hive table into a DataFrame\n",
    "import java.io.File\n",
    "import org.apache.spark.sql.{Row, SaveMode, SparkSession}\n",
    "\n",
    "val warehouseLocation = new File(\"spark-warehouse\").getAbsolutePath\n",
    "\n",
    "val spark = SparkSession\n",
    "  .builder()\n",
    "  .appName(\"Spark Hive Example\")\n",
    "  .config(\"spark.sql.warehouse.dir\", warehouseLocation)\n",
    "  .enableHiveSupport()\n",
    "  .getOrCreate()\n",
    "\n",
    "import spark.implicits._\n",
    "import spark.sql\n",
    "\n",
    "sql(\"CREATE TABLE IF NOT EXISTS src (key INT, value STRING) USING hive\")\n",
    "sql(\"LOAD DATA LOCAL INPATH 'resources/kv1.txt' INTO TABLE src\")\n",
    "\n",
    "val df = sql(\"SELECT key, value FROM src WHERE key < 10 ORDER BY key\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Creating a DataSet\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A DataSet is usually obtained from a Seq, List, or RDD via the toDS method, or from a DataFrame via the as method."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. Converting with the toDS method"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//convert a Seq into a DataSet\n",
    "import spark.implicits._\n",
    "case class Student(name:String,age:Int)\n",
    "\n",
    "val seq = Seq(\n",
    "Student(\"LiLei\",16),\n",
    "Student(\"DaChui\",17),\n",
    "Student(\"HanMeiMei\",15)\n",
    ")\n",
    "\n",
    "val ds = seq.toDS\n",
    "ds.show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//convert an RDD into a DataSet\n",
    "import spark.implicits._\n",
    "case class Student(name:String,age:Int)\n",
    "\n",
    "val rdd = sc.parallelize(List(Student(\"LiLei\",15),\n",
    "                              Student(\"HanMeiMei\",17),\n",
    "                              Student(\"DaChui\",16)))\n",
    "val ds = rdd.toDS\n",
    "ds.show "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2. Converting a DataFrame with the as method\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//read a json file and convert it into a DataSet\n",
    "import spark.implicits._\n",
    "case class People(age:Long,name:String)\n",
    "val ds = spark.read.json(\"resources/people.json\").as[People]\n",
    "ds.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//read a csv file and convert it into a DataSet\n",
    "import spark.implicits._\n",
    "case class Flower(sepallength:Double,sepalwidth:Double,\n",
    "                  petallength:Double,petalwidth:Double,label:Int)\n",
    "val ds = spark.read.format(\"csv\")\n",
    " .option(\"header\",\"true\") //set to \"true\" if the first line holds column names, otherwise \"false\"\n",
    " .option(\"inferSchema\",\"true\") //automatically infer each column's data type\n",
    " .option(\"delimiter\", \",\") //field delimiter, comma by default\n",
    " .load(\"resources/iris.csv\")\n",
    " .as[Flower]\n",
    "ds.show()\n",
    "ds.printSchema()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ds.getClass.getSimpleName"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. Saving a DataFrame/DataSet to files"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A DataFrame/DataSet can be saved as a csv file, a json file, a parquet file, or a hive table."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "peopleDF = [age: bigint, name: string]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[age: bigint, name: string]"
      ]
     },
     "execution_count": 40,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//save as a csv file\n",
    "val peopleDF = spark.read.format(\"json\").load(\"resources/people.json\")\n",
    "peopleDF.write.format(\"csv\").option(\"header\",\"true\").save(\"people.csv\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [],
   "source": [
    "//convert to an rdd first, then save as a txt file\n",
    "peopleDF.rdd.saveAsTextFile(\"newpeople.txt\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [],
   "source": [
    "//save as a json file\n",
    "peopleDF.write.json(\"people.json\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [],
   "source": [
    "//save as a parquet file, which preserves the schema\n",
    "peopleDF.write.partitionBy(\"age\").format(\"parquet\").save(\"namesAndAges.parquet\")\n",
    "peopleDF.write.parquet(\"people.parquet\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//save as a hive table\n",
    "val peopleDF = spark.read.format(\"json\").load(\"resources/people.json\")\n",
    "peopleDF.write.bucketBy(42, \"name\").sortBy(\"age\").saveAsTable(\"people_bucketed\")"
   ]
  },
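  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "All of the write calls above throw an AnalysisException if the target path or table already exists, because the default save mode is ErrorIfExists. The mode can be set explicitly; a short sketch reusing peopleDF from above:\n",
    "\n",
    "```scala\n",
    "import org.apache.spark.sql.SaveMode\n",
    "\n",
    "// Overwrite replaces existing data; Append adds to it; Ignore silently skips the write\n",
    "peopleDF.write.mode(SaveMode.Overwrite).parquet(\"people.parquet\")\n",
    "peopleDF.write.mode(\"append\").json(\"people.json\") //string form is equivalent\n",
    "```"
   ]
  },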
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5. Working with the DataFrame API"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+---+------+\n",
      "|     name|age|gender|\n",
      "+---------+---+------+\n",
      "|    LiLei| 15|  male|\n",
      "|HanMeiMei| 16|female|\n",
      "|   DaChui| 17|  male|\n",
      "+---------+---+------+\n",
      "\n",
      "root\n",
      " |-- name: string (nullable = true)\n",
      " |-- age: integer (nullable = false)\n",
      " |-- gender: string (nullable = true)\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "list = List((LiLei,15,male), (HanMeiMei,16,female), (DaChui,17,male))\n",
       "df = [name: string, age: int ... 1 more field]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[name: string, age: int ... 1 more field]"
      ]
     },
     "execution_count": 44,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import spark.implicits._\n",
    "\n",
    "val list = List(\n",
    "(\"LiLei\",15,\"male\"),\n",
    "(\"HanMeiMei\",16,\"female\"),\n",
    "(\"DaChui\",17,\"male\")\n",
    ")\n",
    "\n",
    "val df = list.toDF(\"name\",\"age\",\"gender\")\n",
    "df.show()\n",
    "df.printSchema()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**1. Actions**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "DataFrame actions include show, count, collect, collectAsList, describe, take, takeAsList, head, and first."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+---+------+\n",
      "|     name|age|gender|\n",
      "+---------+---+------+\n",
      "|    LiLei| 15|  male|\n",
      "|HanMeiMei| 16|female|\n",
      "|   DaChui| 17|  male|\n",
      "+---------+---+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//show\n",
    "df.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+---+------+\n",
      "|name     |age|gender|\n",
      "+---------+---+------+\n",
      "|LiLei    |15 |male  |\n",
      "|HanMeiMei|16 |female|\n",
      "+---------+---+------+\n",
      "only showing top 2 rows\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//show(numRows: Int, truncate: Boolean)\n",
    "//the second argument controls whether fields longer than 20 characters are truncated\n",
    "df.show(2,false)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "3"
      ]
     },
     "execution_count": 48,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//count\n",
    "df.count"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Array([LiLei,15,male], [HanMeiMei,16,female], [DaChui,17,male])"
      ]
     },
     "execution_count": 49,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//collect\n",
    "df.collect()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[[LiLei,15,male], [HanMeiMei,16,female], [DaChui,17,male]]"
      ]
     },
     "execution_count": 50,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//collectAsList\n",
    "df.collectAsList()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[LiLei,15,male]"
      ]
     },
     "execution_count": 51,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//first\n",
    "df.first"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Array([LiLei,15,male], [HanMeiMei,16,female], [DaChui,17,male])"
      ]
     },
     "execution_count": 52,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//take\n",
    "df.take(3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Array([LiLei,15,male], [HanMeiMei,16,female])"
      ]
     },
     "execution_count": 53,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//head\n",
    "df.head(2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[[LiLei,15,male], [HanMeiMei,16,female]]"
      ]
     },
     "execution_count": 54,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//takeAsList\n",
    "df.takeAsList(2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**2. RDD-like operations**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "DataFrame supports most common RDD operations: map, flatMap, filter, reduce, distinct,\n",
    "\n",
    "cache, sample, mapPartitions, foreach, intersect, except, and so on.\n",
    "\n",
    "In effect, a DataFrame can be treated as an RDD whose element type is Row."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-----------+\n",
      "|      value|\n",
      "+-----------+\n",
      "|Hello World|\n",
      "|Hello China|\n",
      "|Hello Spark|\n",
      "+-----------+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "df = [value: string]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[value: string]"
      ]
     },
     "execution_count": 55,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import spark.implicits._\n",
    "val df = List(\"Hello World\",\"Hello China\",\"Hello Spark\").toDF(\"value\")\n",
    "df.show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-----------+\n",
      "|      value|\n",
      "+-----------+\n",
      "|HELLO WORLD|\n",
      "|HELLO CHINA|\n",
      "|HELLO SPARK|\n",
      "+-----------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//map\n",
    "df.map(x=>x(0).toString.toUpperCase).show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-----+\n",
      "|value|\n",
      "+-----+\n",
      "|Hello|\n",
      "|World|\n",
      "|Hello|\n",
      "|China|\n",
      "|Hello|\n",
      "|Spark|\n",
      "+-----+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "flatdf = [value: string]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[value: string]"
      ]
     },
     "execution_count": 57,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//flatMap\n",
    "val flatdf = df.flatMap(x=>x(0).toString.split(\" \"))\n",
    "flatdf.show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-----+\n",
      "|value|\n",
      "+-----+\n",
      "|World|\n",
      "|China|\n",
      "|Hello|\n",
      "|Spark|\n",
      "+-----+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//distinct\n",
    "flatdf.distinct.show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[value: string]"
      ]
     },
     "execution_count": 59,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//cache keeps the data in memory; checkpoint persists it to disk\n",
    "df.cache"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-----------+\n",
      "|      value|\n",
      "+-----------+\n",
      "|Hello Spark|\n",
      "+-----------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//filter rows\n",
    "df.filter(s=>s(0).toString.endsWith(\"Spark\")).show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-----------+\n",
      "|      value|\n",
      "+-----------+\n",
      "|Hello China|\n",
      "|Hello Spark|\n",
      "+-----------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//sample rows without replacement (fraction 0.6, seed 0)\n",
    "df.sample(false,0.6,0).show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-----------+\n",
      "|      value|\n",
      "+-----------+\n",
      "|Hello World|\n",
      "|Hello Scala|\n",
      "|Hello Spark|\n",
      "+-----------+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "df2 = [value: string]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[value: string]"
      ]
     },
     "execution_count": 62,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val df2 = Seq(\"Hello World\",\"Hello Scala\",\"Hello Spark\").toDF(\"value\")\n",
    "df2.show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-----------+\n",
      "|      value|\n",
      "+-----------+\n",
      "|Hello Spark|\n",
      "|Hello World|\n",
      "+-----------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//intersect: rows present in both DataFrames\n",
    "df.intersect(df2).show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-----------+\n",
      "|      value|\n",
      "+-----------+\n",
      "|Hello China|\n",
      "+-----------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//except: rows in df but not in df2 (set difference)\n",
    "df.except(df2).show"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**3. Excel-like operations**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A DataFrame can be manipulated much like an Excel sheet: adding, dropping, and renaming columns, sorting, removing duplicate rows, and dropping or filling rows with missing values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+---+------+\n",
      "|     name|age|gender|\n",
      "+---------+---+------+\n",
      "|    LiLei| 15|  male|\n",
      "|HanMeiMei| 16|female|\n",
      "|   DaChui| 17|  male|\n",
      "|    RuHua| 16|  null|\n",
      "+---------+---+------+\n",
      "\n",
      "root\n",
      " |-- name: string (nullable = true)\n",
      " |-- age: integer (nullable = false)\n",
      " |-- gender: string (nullable = true)\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "list = List((LiLei,15,male), (HanMeiMei,16,female), (DaChui,17,male), (RuHua,16,null))\n",
       "df = [name: string, age: int ... 1 more field]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[name: string, age: int ... 1 more field]"
      ]
     },
     "execution_count": 65,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import spark.implicits._\n",
    "\n",
    "val list = List(\n",
    "(\"LiLei\",15,\"male\"),\n",
    "(\"HanMeiMei\",16,\"female\"),\n",
    "(\"DaChui\",17,\"male\"),\n",
    "(\"RuHua\",16,null)\n",
    ")\n",
    "\n",
    "val df = list.toDF(\"name\",\"age\",\"gender\")\n",
    "df.show()\n",
    "df.printSchema()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+---+------+---------+\n",
      "|     name|age|gender|birthyear|\n",
      "+---------+---+------+---------+\n",
      "|    LiLei| 15|  male|     2004|\n",
      "|HanMeiMei| 16|female|     2003|\n",
      "|   DaChui| 17|  male|     2002|\n",
      "|    RuHua| 16|  null|     2003|\n",
      "+---------+---+------+---------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//add a column\n",
    "df.withColumn(\"birthyear\",-df(\"age\")+2019).show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+---+------+---+\n",
      "|     name|age|gender|idx|\n",
      "+---------+---+------+---+\n",
      "|    LiLei| 15|  male|  0|\n",
      "|HanMeiMei| 16|female|  1|\n",
      "|   DaChui| 17|  male|  2|\n",
      "|    RuHua| 16|  null|  3|\n",
      "+---------+---+------+---+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "dfnew = [name: string, age: int ... 2 more fields]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[name: string, age: int ... 2 more fields]"
      ]
     },
     "execution_count": 67,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//add an index column\n",
    "import org.apache.spark.sql.functions.monotonically_increasing_id\n",
    "val dfnew = df.withColumn(\"idx\", monotonically_increasing_id)\n",
    "dfnew.show"
   ]
  },
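  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that monotonically_increasing_id only guarantees unique, increasing ids; they are not consecutive across partitions. When consecutive indices matter, a window function can be used instead. A sketch (ordering by name is an arbitrary choice for illustration):\n",
    "\n",
    "```scala\n",
    "import org.apache.spark.sql.expressions.Window\n",
    "import org.apache.spark.sql.functions.row_number\n",
    "\n",
    "// row_number is 1-based, so subtract 1 for a 0-based index\n",
    "val w = Window.orderBy(\"name\")\n",
    "df.withColumn(\"idx\", row_number().over(w) - 1).show\n",
    "```"
   ]
  },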
  {
   "cell_type": "code",
   "execution_count": 68,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+---------+---+------+\n",
      "|idx|     name|age|gender|\n",
      "+---+---------+---+------+\n",
      "|  0|    LiLei| 15|  male|\n",
      "|  1|HanMeiMei| 16|female|\n",
      "|  2|   DaChui| 17|  male|\n",
      "|  3|    RuHua| 16|  null|\n",
      "+---+---------+---+------+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "df = [idx: bigint, name: string ... 2 more fields]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[idx: bigint, name: string ... 2 more fields]"
      ]
     },
     "execution_count": 68,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//reorder the columns\n",
    "val df = dfnew.select(\"idx\",\"name\",\"age\",\"gender\")\n",
    "df.show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+---------+---+\n",
      "|idx|     name|age|\n",
      "+---+---------+---+\n",
      "|  0|    LiLei| 15|\n",
      "|  1|HanMeiMei| 16|\n",
      "|  2|   DaChui| 17|\n",
      "|  3|    RuHua| 16|\n",
      "+---+---------+---+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//drop a column\n",
    "df.drop(\"gender\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 70,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+---------+---+------+\n",
      "|idx|     name|age|   sex|\n",
      "+---+---------+---+------+\n",
      "|  0|    LiLei| 15|  male|\n",
      "|  1|HanMeiMei| 16|female|\n",
      "|  2|   DaChui| 17|  male|\n",
      "|  3|    RuHua| 16|  null|\n",
      "+---+---------+---+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//rename a column\n",
    "df.withColumnRenamed(\"gender\",\"sex\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+---------+---+------+\n",
      "|idx|     name|age|gender|\n",
      "+---+---------+---+------+\n",
      "|  2|   DaChui| 17|  male|\n",
      "|  1|HanMeiMei| 16|female|\n",
      "|  3|    RuHua| 16|  null|\n",
      "|  0|    LiLei| 15|  male|\n",
      "+---+---------+---+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//sort, ascending or descending\n",
    "df.sort($\"age\".desc).show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 72,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+---------+---+------+\n",
      "|idx|     name|age|gender|\n",
      "+---+---------+---+------+\n",
      "|  2|   DaChui| 17|  male|\n",
      "|  1|HanMeiMei| 16|female|\n",
      "|  3|    RuHua| 16|  null|\n",
      "|  0|    LiLei| 15|  male|\n",
      "+---+---------+---+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//orderBy: ascending by default, accepts multiple columns\n",
    "df.orderBy(df(\"age\").desc,df(\"gender\").desc).show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+---------+---+------+\n",
      "|idx|     name|age|gender|\n",
      "+---+---------+---+------+\n",
      "|  0|    LiLei| 15|  male|\n",
      "|  1|HanMeiMei| 16|female|\n",
      "|  2|   DaChui| 17|  male|\n",
      "+---+---------+---+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//drop rows that contain null values\n",
    "df.na.drop.show"
   ]
  },
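  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "na.drop also takes arguments that control which rows are removed; a short sketch of the variants, applied to the same df:\n",
    "\n",
    "```scala\n",
    "df.na.drop(\"all\").show         //drop a row only if every column is null\n",
    "df.na.drop(Seq(\"gender\")).show //consider only the listed columns\n",
    "df.na.drop(3).show             //keep rows with at least 3 non-null values\n",
    "```"
   ]
  },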
  {
   "cell_type": "code",
   "execution_count": 74,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+---------+---+------+\n",
      "|idx|     name|age|gender|\n",
      "+---+---------+---+------+\n",
      "|  0|    LiLei| 15|  male|\n",
      "|  1|HanMeiMei| 16|female|\n",
      "|  2|   DaChui| 17|  male|\n",
      "|  3|    RuHua| 16|female|\n",
      "+---+---------+---+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//fill null values\n",
    "df.na.fill(\"female\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 75,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+---------+---+------+\n",
      "|idx|     name|age|gender|\n",
      "+---+---------+---+------+\n",
      "|  0|    LiLei| 15|  male|\n",
      "|  1|HanMeiMei| 16|female|\n",
      "|  2|   DaChui| 17|  male|\n",
      "|  3|     SiYu| 16|  null|\n",
      "+---+---------+---+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//replace specific values in the given columns\n",
    "df.na.replace(Seq(\"gender\",\"name\"),Map(\" \"->\"female\",\"RuHua\"->\"SiYu\")).show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 76,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+---------+---+------+\n",
      "|idx|     name|age|gender|\n",
      "+---+---------+---+------+\n",
      "|  0|    LiLei| 15|  male|\n",
      "|  1|HanMeiMei| 16|female|\n",
      "|  2|   DaChui| 17|  male|\n",
      "|  3|    RuHua| 16|  null|\n",
      "|  0|    LiLei| 15|  male|\n",
      "|  1|HanMeiMei| 16|female|\n",
      "|  2|   DaChui| 17|  male|\n",
      "|  3|    RuHua| 16|  null|\n",
      "+---+---------+---+------+\n",
      "\n",
      "+---+---------+---+------+\n",
      "|idx|     name|age|gender|\n",
      "+---+---------+---+------+\n",
      "|  1|HanMeiMei| 16|female|\n",
      "|  0|    LiLei| 15|  male|\n",
      "|  3|    RuHua| 16|  null|\n",
      "|  2|   DaChui| 17|  male|\n",
      "+---+---------+---+------+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "df2 = [idx: bigint, name: string ... 2 more fields]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "warning: there was one deprecation warning; re-run with -deprecation for details\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[idx: bigint, name: string ... 2 more fields]"
      ]
     },
     "execution_count": 76,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//去重，默认根据全部字段\n",
    "val df2 = df.unionAll(df)\n",
    "df2.show\n",
    "df2.dropDuplicates().show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 77,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+---------+---+------+\n",
      "|idx|     name|age|gender|\n",
      "+---+---------+---+------+\n",
      "|  1|HanMeiMei| 16|female|\n",
      "|  0|    LiLei| 15|  male|\n",
      "|  2|   DaChui| 17|  male|\n",
      "+---+---------+---+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//去重,根据部分字段\n",
    "df.dropDuplicates(\"age\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 78,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-----------+--------+\n",
      "|count(name)|max(age)|\n",
      "+-----------+--------+\n",
      "|          4|      17|\n",
      "+-----------+--------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//简单聚合操作\n",
    "df.agg(\"name\"->\"count\",\"age\"->\"max\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 79,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-------+------------------+------+-----------------+------+\n",
      "|summary|               idx|  name|              age|gender|\n",
      "+-------+------------------+------+-----------------+------+\n",
      "|  count|                 4|     4|                4|     3|\n",
      "|   mean|               1.5|  null|             16.0|  null|\n",
      "| stddev|1.2909944487358056|  null|0.816496580927726|  null|\n",
      "|    min|                 0|DaChui|               15|female|\n",
      "|    max|                 3| RuHua|               17|  male|\n",
      "+-------+------------------+------+-----------------+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//汇总信息\n",
    "df.describe().show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 80,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-------------+----------------+\n",
      "|age_freqItems|gender_freqItems|\n",
      "+-------------+----------------+\n",
      "|         [16]|          [male]|\n",
      "+-------------+----------------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//频率超过0.5的年龄和性别\n",
    "df.stat.freqItems(Seq(\"age\",\"gender\"),0.5).show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**4，类SQL表操作**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "类SQL表操作包括表查询(select,selectExpr,where,filter),表连接(join,union,unionAll),表分组(groupBy,agg,pivot)等操作。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 81,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+---+------+\n",
      "|     name|age|gender|\n",
      "+---------+---+------+\n",
      "|    LiLei| 15|  male|\n",
      "|HanMeiMei| 16|female|\n",
      "|   DaChui| 17|  male|\n",
      "|    RuHua| 16|  null|\n",
      "+---------+---+------+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "list = List((LiLei,15,male), (HanMeiMei,16,female), (DaChui,17,male), (RuHua,16,null))\n",
       "df = [name: string, age: int ... 1 more field]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[name: string, age: int ... 1 more field]"
      ]
     },
     "execution_count": 81,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import spark.implicits._\n",
    "\n",
    "val list = List(\n",
    "(\"LiLei\",15,\"male\"),\n",
    "(\"HanMeiMei\",16,\"female\"),\n",
    "(\"DaChui\",17,\"male\"),\n",
    "(\"RuHua\",16,null)\n",
    ")\n",
    "\n",
    "val df = list.toDF(\"name\",\"age\",\"gender\")\n",
    "df.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 82,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+\n",
      "|     name|\n",
      "+---------+\n",
      "|    LiLei|\n",
      "|HanMeiMei|\n",
      "+---------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//表查询select\n",
    "df.select($\"name\").limit(2).show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 83,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+---------+\n",
      "|     name|(age + 1)|\n",
      "+---------+---------+\n",
      "|    LiLei|       16|\n",
      "|HanMeiMei|       17|\n",
      "|   DaChui|       18|\n",
      "|    RuHua|       17|\n",
      "+---------+---------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "df.select($\"name\",$\"age\" + 1).show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 84,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+----------+\n",
      "|     name|birth_year|\n",
      "+---------+----------+\n",
      "|    LiLei|      2004|\n",
      "|HanMeiMei|      2003|\n",
      "|   DaChui|      2002|\n",
      "|    RuHua|      2003|\n",
      "+---------+----------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//表查询select\n",
    "df.select($\"name\",-$\"age\"+2019).toDF(\"name\",\"birth_year\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 85,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+----------+------+\n",
      "|     name|birth_year|gender|\n",
      "+---------+----------+------+\n",
      "|    LiLei|      2004|  MALE|\n",
      "|HanMeiMei|      2003|FEMALE|\n",
      "|   DaChui|      2002|  MALE|\n",
      "|    RuHua|      2003|  null|\n",
      "+---------+----------+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//表查询selectExpr,可以使用UDF函数，指定别名等\n",
    "df.selectExpr(\"name\", \"2019-age as birth_year\" , \"UPPER(gender) as gender\" ).show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 86,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+------+---+------+\n",
      "|  name|age|gender|\n",
      "+------+---+------+\n",
      "|DaChui| 17|  male|\n",
      "+------+---+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//表查询where, 指定SQL中的where子句表达式\n",
    "df.where(\"gender='male' and age>15\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 87,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+------+---+------+\n",
      "|  name|age|gender|\n",
      "+------+---+------+\n",
      "|DaChui| 17|  male|\n",
      "+------+---+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//表查询filter\n",
    "df.filter($\"age\">16).show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 88,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+---+------+\n",
      "|     name|age|gender|\n",
      "+---------+---+------+\n",
      "|HanMeiMei| 16|female|\n",
      "+---------+---+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//表查询filter,注意不等号的写法\n",
    "df.filter($\"gender\"=!=\"male\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 89,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+------+---+------+\n",
      "|  name|age|gender|\n",
      "+------+---+------+\n",
      "| LiLei| 15|  male|\n",
      "|DaChui| 17|  male|\n",
      "+------+---+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//表查询filter,注意等于号的写法\n",
    "df.filter($\"gender\"===\"male\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 90,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+------+---+------+\n",
      "|  name|age|gender|\n",
      "+------+---+------+\n",
      "| LiLei| 15|  male|\n",
      "|DaChui| 17|  male|\n",
      "+------+---+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//表查询filter\n",
    "df.filter(\"gender ='male'\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 91,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+------+-----+\n",
      "|     name|gender|score|\n",
      "+---------+------+-----+\n",
      "|    LiLei|  male|   88|\n",
      "|HanMeiMei|female|   90|\n",
      "|   DaChui|  male|   50|\n",
      "+---------+------+-----+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "dfscore = [name: string, gender: string ... 1 more field]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[name: string, gender: string ... 1 more field]"
      ]
     },
     "execution_count": 91,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//表连接join\n",
    "val dfscore = Seq((\"LiLei\",\"male\",88),(\"HanMeiMei\",\"female\",90),(\"DaChui\",\"male\",50))\n",
    "              .toDF(\"name\",\"gender\",\"score\")\n",
    "\n",
    "dfscore.show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 92,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+---+------+-----+\n",
      "|     name|age|gender|score|\n",
      "+---------+---+------+-----+\n",
      "|    LiLei| 15|  male|   88|\n",
      "|HanMeiMei| 16|female|   90|\n",
      "|   DaChui| 17|  male|   50|\n",
      "+---------+---+------+-----+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//表连接join,根据单个字段\n",
    "df.join(dfscore.select(\"name\",\"score\"),\"name\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 93,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+------+---+-----+\n",
      "|     name|gender|age|score|\n",
      "+---------+------+---+-----+\n",
      "|    LiLei|  male| 15|   88|\n",
      "|HanMeiMei|female| 16|   90|\n",
      "|   DaChui|  male| 17|   50|\n",
      "+---------+------+---+-----+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//表连接join,根据多个字段\n",
    "df.join(dfscore,Seq(\"name\",\"gender\")).show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 94,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+------+---+-----+\n",
      "|     name|gender|age|score|\n",
      "+---------+------+---+-----+\n",
      "|    LiLei|  male| 15|   88|\n",
      "|HanMeiMei|female| 16|   90|\n",
      "|   DaChui|  male| 17|   50|\n",
      "+---------+------+---+-----+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//表连接join,根据多个字段\n",
    "//可以指定连接方式为\"inner\",\"left\",\"right\",\"outer\"\n",
    "df.join(dfscore,Seq(\"name\",\"gender\"),\"right\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 95,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+------+---+-----+\n",
      "|     name|gender|age|score|\n",
      "+---------+------+---+-----+\n",
      "|HanMeiMei|female| 16|   90|\n",
      "|   DaChui|  male| 17|   50|\n",
      "|    LiLei|  male| 15|   88|\n",
      "|    RuHua|  null| 16| null|\n",
      "+---------+------+---+-----+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "df.join(dfscore,Seq(\"name\",\"gender\"),\"outer\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 96,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+------+-----+\n",
      "|     name|   sex|score|\n",
      "+---------+------+-----+\n",
      "|    LiLei|  male|   88|\n",
      "|HanMeiMei|female|   90|\n",
      "|   DaChui|  male|   50|\n",
      "+---------+------+-----+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "dfmark = [name: string, sex: string ... 1 more field]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[name: string, sex: string ... 1 more field]"
      ]
     },
     "execution_count": 96,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//表连接，灵活指定连接关系\n",
    "val dfmark = dfscore.withColumnRenamed(\"gender\",\"sex\")\n",
    "dfmark.show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 97,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+---+------+---------+------+-----+\n",
      "|     name|age|gender|     name|   sex|score|\n",
      "+---------+---+------+---------+------+-----+\n",
      "|    LiLei| 15|  male|    LiLei|  male|   88|\n",
      "|HanMeiMei| 16|female|HanMeiMei|female|   90|\n",
      "|   DaChui| 17|  male|   DaChui|  male|   50|\n",
      "+---------+---+------+---------+------+-----+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "df.join(dfmark,df(\"name\")===dfmark(\"name\")&&df(\"gender\")===dfmark(\"sex\"),\n",
    "        \"inner\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 98,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+----+---+------+\n",
      "|name|age|gender|\n",
      "+----+---+------+\n",
      "| Jim| 18|  male|\n",
      "|Lily| 16|female|\n",
      "+----+---+------+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "dfstudent = [name: string, age: int ... 1 more field]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[name: string, age: int ... 1 more field]"
      ]
     },
     "execution_count": 98,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//表合并union,unionAll\n",
    "val dfstudent = Seq((\"Jim\",18,\"male\"),(\"Lily\",16,\"female\")).toDF(\"name\",\"age\",\"gender\")\n",
    "dfstudent.show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 101,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+---+------+\n",
      "|     name|age|gender|\n",
      "+---------+---+------+\n",
      "|    LiLei| 15|  male|\n",
      "|HanMeiMei| 16|female|\n",
      "|   DaChui| 17|  male|\n",
      "|    RuHua| 16|  null|\n",
      "|      Jim| 18|  male|\n",
      "|     Lily| 16|female|\n",
      "+---------+---+------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "df.union(dfstudent).show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 102,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+------+--------+\n",
      "|gender|max(age)|\n",
      "+------+--------+\n",
      "|  null|      16|\n",
      "|female|      16|\n",
      "|  male|      17|\n",
      "+------+--------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//表分组 groupBy\n",
    "import org.apache.spark.sql.functions._\n",
    "df.groupBy(\"gender\").max(\"age\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 103,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+------+--------+---------------+\n",
      "|gender|mean_age|          names|\n",
      "+------+--------+---------------+\n",
      "|  null|    16.0|        [RuHua]|\n",
      "|female|    16.0|    [HanMeiMei]|\n",
      "|  male|    16.0|[LiLei, DaChui]|\n",
      "+------+--------+---------------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//表分组后聚合，groupBy,agg\n",
    "df.groupBy(\"gender\").agg(mean(\"age\") as \"mean_age\",collect_list(\"name\") as \"names\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 104,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+------+--------+------------------+-----------+\n",
      "|gender|avg(age)|collect_list(name)|count(name)|\n",
      "+------+--------+------------------+-----------+\n",
      "|  null|    16.0|           [RuHua]|          1|\n",
      "|female|    16.0|       [HanMeiMei]|          1|\n",
      "|  male|    16.0|   [LiLei, DaChui]|          2|\n",
      "+------+--------+------------------+-----------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//表分组聚合，groupBy,agg\n",
    "df.groupBy(\"gender\").agg(\"age\"->\"avg\",\"name\"->\"collect_list\",\"name\"->\"count\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 105,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+------+---+------------------+\n",
      "|gender|age|collect_list(name)|\n",
      "+------+---+------------------+\n",
      "|  male| 17|          [DaChui]|\n",
      "|  male| 15|           [LiLei]|\n",
      "|  null| 16|           [RuHua]|\n",
      "|female| 16|       [HanMeiMei]|\n",
      "+------+---+------------------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//表分组聚合，groupBy,agg\n",
    "df.groupBy(\"gender\",\"age\").agg(\"name\"->\"collect_list\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 106,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+---+------+-----+\n",
      "|     name|age|gender|class|\n",
      "+---------+---+------+-----+\n",
      "|    LiLei| 18|  male|    1|\n",
      "|HanMeiMei| 16|female|    1|\n",
      "|      Jim| 17|  male|    2|\n",
      "|   DaChui| 20|  male|    2|\n",
      "+---------+---+------+-----+\n",
      "\n",
      "+-----+------+----+\n",
      "|class|female|male|\n",
      "+-----+------+----+\n",
      "|    1|    16|  18|\n",
      "|    2|  null|  20|\n",
      "+-----+------+----+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "dfstudent = [name: string, age: int ... 2 more fields]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[name: string, age: int ... 2 more fields]"
      ]
     },
     "execution_count": 106,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//表分组后透视，groupBy,pivot\n",
    "val dfstudent = Seq((\"LiLei\",18,\"male\",1),(\"HanMeiMei\",16,\"female\",1),\n",
    "                    (\"Jim\",17,\"male\",2),(\"DaChui\",20,\"male\",2))\n",
    "                .toDF(\"name\",\"age\",\"gender\",\"class\")\n",
    "dfstudent.show\n",
    "dfstudent.groupBy(\"class\").pivot(\"gender\").max(\"age\").show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 107,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-----------+\n",
      "|      value|\n",
      "+-----------+\n",
      "|hello world|\n",
      "|hello China|\n",
      "|hello Spark|\n",
      "+-----------+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "helloDF = [value: string]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[value: string]"
      ]
     },
     "execution_count": 107,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "//explode将一行拆分成多行\n",
    "\n",
    "val helloDF = Seq(\"hello world\",\"hello China\",\"hello Spark\").toDF(\"value\")\n",
    "helloDF.show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 108,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "warning: there was one deprecation warning; re-run with -deprecation for details\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-----------+-----+\n",
      "|      value|parts|\n",
      "+-----------+-----+\n",
      "|hello world|hello|\n",
      "|hello world|world|\n",
      "|hello China|hello|\n",
      "|hello China|China|\n",
      "|hello Spark|hello|\n",
      "|hello Spark|Spark|\n",
      "+-----------+-----+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "helloDF.explode(\"value\",\"parts\"){s:String => s.split(\" \")}.show"
   ]
  },
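  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上面的explode方法已被标记为过时(输出中有deprecation警告)，可以改用org.apache.spark.sql.functions中的explode和split函数实现同样效果。以下为示意代码，未实际执行。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//用explode函数代替过时的Dataset.explode方法\n",
    "import org.apache.spark.sql.functions.{explode, split}\n",
    "helloDF.select($\"value\", explode(split($\"value\", \" \")).as(\"parts\")).show"
   ]
  },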
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 六，DataFrame的SQL交互"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "将DataFrame/DataSet注册为临时表视图或者全局表视图后，就可以使用SQL语句对其进行查询。\n",
    "\n",
    "以下为示范代码。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//注册为临时表视图\n",
    "val df = Seq((\"LiLei\",18,\"male\"),(\"HanMeiMei\",17,\"female\"),(\"Jim\",16,\"male\")).\n",
    "      toDF(\"name\",\"age\",\"gender\")\n",
    "\n",
    "df.createOrReplaceTempView(\"student\")\n",
    "val sqlDF = spark.sql(\"select * from student limit 2\")\n",
    "sqlDF.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "// 注册为全局临时表视图\n",
    "df.createGlobalTempView(\"student\")\n",
    "\n",
    "spark.sql(\"SELECT * FROM global_temp.student\").show()\n",
    "\n",
    "//可以在新的Session中访问\n",
    "spark.newSession().sql(\"SELECT * FROM global_temp.student\").show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//与Hive交互操作的范例\n",
    "\n",
    "import java.io.File\n",
    "import org.apache.spark.sql.{Row, SaveMode, SparkSession}\n",
    "case class Record(key: Int, value: String)\n",
    "// warehouseLocation points to the default location for managed databases and tables\n",
    "val warehouseLocation = new File(\"spark-warehouse\").getAbsolutePath\n",
    "\n",
    "val spark = SparkSession\n",
    "  .builder()\n",
    "  .appName(\"Spark Hive Example\")\n",
    "  .config(\"spark.sql.warehouse.dir\", warehouseLocation)\n",
    "  .enableHiveSupport()\n",
    "  .getOrCreate()\n",
    "\n",
    "import spark.implicits._\n",
    "import spark.sql\n",
    "\n",
    "sql(\"CREATE TABLE IF NOT EXISTS src (key INT, value STRING) USING hive\")\n",
    "sql(\"LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src\")\n",
    "\n",
    "// Queries are expressed in HiveQL\n",
    "sql(\"SELECT * FROM src\").show()\n",
    "\n",
    "// Aggregation queries are also supported.\n",
    "sql(\"SELECT COUNT(*) FROM src\").show()\n",
    "\n",
    "// The results of SQL queries are themselves DataFrames and support all normal functions.\n",
    "val sqlDF = sql(\"SELECT key, value FROM src WHERE key < 10 ORDER BY key\")\n",
    "\n",
    "// The items in DataFrames are of type Row, you can access each column by ordinal.\n",
    "val stringsDS = sqlDF.map {\n",
    "  case Row(key: Int, value: String) => s\"Key: $key, Value: $value\"\n",
    "}\n",
    "stringsDS.show()\n",
    "\n",
    "// You can also use DataFrames to create temporary views within a SparkSession.\n",
    "val recordsDF = spark.createDataFrame((1 to 100).map(i => Record(i, s\"val_$i\")))\n",
    "recordsDF.createOrReplaceTempView(\"records\")\n",
    "\n",
    "// Queries can then join DataFrame data with data stored in Hive.\n",
    "sql(\"SELECT * FROM records r JOIN src s ON r.key = s.key\").show()\n",
    "\n",
    "// Create a Hive managed Parquet table, with HQL syntax instead of the Spark SQL \n",
    "// `USING hive`\n",
    "sql(\"CREATE TABLE hive_records(key int, value string) STORED AS PARQUET\")\n",
    "// Save DataFrame to the Hive managed table\n",
    "val df = spark.table(\"src\")\n",
    "df.write.mode(SaveMode.Overwrite).saveAsTable(\"hive_records\")\n",
    "// After insertion, the Hive managed table has data now\n",
    "sql(\"SELECT * FROM hive_records\").show()\n",
    "\n",
    "val dataDir = \"/tmp/parquet_data\"\n",
    "spark.range(10).write.parquet(dataDir)\n",
    "sql(s\"CREATE EXTERNAL TABLE hive_ints(key int) STORED AS PARQUET LOCATION '$dataDir'\")\n",
    "sql(\"SELECT * FROM hive_ints\").show()\n",
    "\n",
    "// Turn on flag for Hive Dynamic Partitioning\n",
    "spark.sqlContext.setConf(\"hive.exec.dynamic.partition\", \"true\")\n",
    "spark.sqlContext.setConf(\"hive.exec.dynamic.partition.mode\", \"nonstrict\")\n",
    "\n",
    "// Create a Hive partitioned table using DataFrame API\n",
    "df.write.partitionBy(\"key\").format(\"hive\").saveAsTable(\"hive_part_tbl\")\n",
    "sql(\"SELECT * FROM hive_part_tbl\").show()\n",
    "spark.stop()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 七，RDD，DataFrame和DataSet的相互转换"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "三种数据结构RDD，DataFrame和DataSet之间可以相互转换。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//RDD转DataFrame\n",
    "import spark.implicits._\n",
    "val rdd = sc.parallelize(List((\"LiLei\",15),(\"HanMeiMei\",17),(\"DaChui\",16)),2)\n",
    "val df = rdd.toDF(\"name\",\"age\")"
   ]
  },
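  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "除toDF外，也可以通过createDataFrame方法从RDD[Row]和显式指定的schema构造DataFrame。以下为示意代码。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//RDD转DataFrame，显式指定schema\n",
    "import org.apache.spark.sql.Row\n",
    "import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}\n",
    "\n",
    "val rowRDD = sc.parallelize(List((\"LiLei\",15),(\"HanMeiMei\",17),(\"DaChui\",16)),2)\n",
    "              .map(t => Row(t._1, t._2))\n",
    "val schema = StructType(List(\n",
    "  StructField(\"name\", StringType, nullable = true),\n",
    "  StructField(\"age\", IntegerType, nullable = false)))\n",
    "val dfWithSchema = spark.createDataFrame(rowRDD, schema)\n",
    "dfWithSchema.show"
   ]
  },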
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//RDD转换DataSet\n",
    "import spark.implicits._\n",
    "val rdd = sc.parallelize(List((\"LiLei\",15),(\"HanMeiMei\",17),(\"DaChui\",16)),2)\n",
    "case class Student(name:String, age:Int)\n",
    "val ds = rdd.map(s => Student(s._1,s._2)).toDS"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//DataFrame或DataSet转RDD\n",
    "import spark.implicits._\n",
    "val rdd1 = df.rdd\n",
    "val rdd2 = ds.rdd"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//DataSet转DataFrame\n",
    "import spark.implicits._\n",
    "val studentDF = ds.toDF\n",
    "studentDF.show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//DataFrame转DataSet\n",
    "import spark.implicits._\n",
    "case class Student(name:String, age:Int)\n",
    "val studentDS = df.as[Student]\n",
    "studentDS.show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 八，用户自定义函数"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "SparkSQL的用户自定义函数包括两种类型，UDF和UDAF，即普通用户自定义函数和用户自定义聚合函数。\n",
    "\n",
    "其中UDAF又分为弱类型UDAF和强类型UDAF，前者可以在DataFrame和DataSet中使用，\n",
    "\n",
    "后者仅能够在DataSet中使用。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**1，普通UDF**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 109,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+---+------+\n",
      "|     name|age|gender|\n",
      "+---------+---+------+\n",
      "|    LiLei| 15|  male|\n",
      "|HanMeiMei| 16|female|\n",
      "|   DaChui| 17|  male|\n",
      "|    RuHua| 16|  null|\n",
      "+---------+---+------+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "list = List((LiLei,15,male), (HanMeiMei,16,female), (DaChui,17,male), (RuHua,16,null))\n",
       "df = [name: string, age: int ... 1 more field]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[name: string, age: int ... 1 more field]"
      ]
     },
     "execution_count": 109,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import spark.implicits._\n",
    "\n",
    "val list = List(\n",
    "(\"LiLei\",15,\"male\"),\n",
    "(\"HanMeiMei\",16,\"female\"),\n",
    "(\"DaChui\",17,\"male\"),\n",
    "(\"RuHua\",16,null)\n",
    ")\n",
    "\n",
    "val df = list.toDF(\"name\",\"age\",\"gender\")\n",
    "df.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 110,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "UserDefinedFunction(<function1>,StringType,Some(List(StringType)))"
      ]
     },
     "execution_count": 110,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "spark.udf.register(\"addName\",(x:String)=>\"Name:\"+x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 112,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+--------------+---+\n",
      "|          name|age|\n",
      "+--------------+---+\n",
      "|    Name:LiLei| 15|\n",
      "|Name:HanMeiMei| 16|\n",
      "|   Name:DaChui| 17|\n",
      "|    Name:RuHua| 16|\n",
      "+--------------+---+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "df.selectExpr(\"addName(name) as name\",\"age\").show()"
   ]
  },
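  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "除了register后在SQL表达式中调用，也可以用org.apache.spark.sql.functions.udf包装普通函数，直接在DataFrame API中使用。以下为示意代码。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "//用udf函数包装后在DataFrame API中使用\n",
    "import org.apache.spark.sql.functions.udf\n",
    "val addNameUdf = udf((x: String) => \"Name:\" + x)\n",
    "df.select(addNameUdf($\"name\").as(\"name\"), $\"age\").show"
   ]
  },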
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**2, 弱类型UDAF**\n",
    "\n",
    "弱类型UDAF需要继承UserDefinedAggregateFunction"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 137,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "defined object MyAverage\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "import org.apache.spark.sql.expressions.MutableAggregationBuffer\n",
    "import org.apache.spark.sql.expressions.UserDefinedAggregateFunction\n",
    "import org.apache.spark.sql.types._\n",
    "import org.apache.spark.sql.{Row, SparkSession}\n",
    "\n",
    "object MyAverage extends UserDefinedAggregateFunction {\n",
    "  //聚合函数输入的类型\n",
    "  override def inputSchema: StructType = \n",
    "  StructType(StructField(\"inputColumn\", LongType) :: Nil)\n",
    "\n",
    "  //聚合函数缓冲区类型\n",
    "  override def bufferSchema: StructType = \n",
    "  StructType(StructField(\"sum\", LongType) :: StructField(\"count\", LongType) :: Nil)\n",
    "\n",
     "  //Return type\n",
    "  override def dataType: DataType = DoubleType\n",
    "\n",
     "  //Whether the same input always produces the same output\n",
    "  override def deterministic: Boolean = true\n",
    "\n",
     "  //Initialize the buffer\n",
    "  override def initialize(buffer: MutableAggregationBuffer): Unit = {\n",
    "    buffer(0) = 0L\n",
    "    buffer(1) = 0L\n",
    "  }\n",
    "\n",
     "  //Update the buffer with a new input row (within a partition)\n",
    "  override def update(buffer: MutableAggregationBuffer, input: Row): Unit = {\n",
    "    if (!input.isNullAt(0)) {\n",
    "      buffer(0) = buffer.getLong(0) + input.getLong(0)\n",
    "      buffer(1) = buffer.getLong(1) + 1\n",
    "    }\n",
    "  }\n",
    "\n",
     "  //Merge buffers from different executors\n",
    "  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {\n",
    "    buffer1(0) = buffer1.getLong(0) + buffer2.getLong(0)\n",
    "    buffer1(1) = buffer1.getLong(1) + buffer2.getLong(1)\n",
    "  }\n",
    "\n",
     "  //Compute the final result\n",
    "  override def evaluate(buffer: Row): Any = buffer.getLong(0).toDouble / buffer.getLong(1)\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 138,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "MyAverage$@339e5321"
      ]
     },
     "execution_count": 138,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "spark.udf.register(\"MyAverage\", MyAverage)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 139,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------+---+------+\n",
      "|     name|age|gender|\n",
      "+---------+---+------+\n",
      "|    LiLei| 18|  male|\n",
      "|HanMeiMei| 17|female|\n",
      "|      Jim| 16|  male|\n",
      "+---------+---+------+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "df = [name: string, age: int ... 1 more field]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[name: string, age: int ... 1 more field]"
      ]
     },
     "execution_count": 139,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "val df = Seq((\"LiLei\",18,\"male\"),(\"HanMeiMei\",17,\"female\"),(\"Jim\",16,\"male\")).\n",
    "      toDF(\"name\",\"age\",\"gender\")\n",
    "df.show"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 140,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+--------------+\n",
      "|myaverage(age)|\n",
      "+--------------+\n",
      "|          17.0|\n",
      "+--------------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "df.agg(\"age\"->\"MyAverage\").show"
   ]
  },
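  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A registered UDAF can also aggregate per group. A sketch using `groupBy` on the `gender` column of the DataFrame defined above (`expr` comes from `org.apache.spark.sql.functions`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import org.apache.spark.sql.functions.expr\n",
    "\n",
    "//Apply the registered UDAF within each gender group\n",
    "df.groupBy(\"gender\").agg(expr(\"MyAverage(age)\").as(\"avg_age\")).show"
   ]
  },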
  {
   "cell_type": "code",
   "execution_count": 141,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+--------------+\n",
      "|myaverage(age)|\n",
      "+--------------+\n",
      "|          17.0|\n",
      "+--------------+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "defined class Student\n",
       "ds = [name: string, age: int ... 1 more field]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[name: string, age: int ... 1 more field]"
      ]
     },
     "execution_count": 141,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import spark.implicits._\n",
    "case class Student(name:String,age:Int,gender:String)\n",
    "val ds = df.as[Student]\n",
    "ds.agg(\"age\"->\"MyAverage\").show"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**3, Strongly-typed UDAF**\n",
     "\n",
     "A strongly-typed UDAF must extend Aggregator; it cannot be registered by name and is used as a TypedColumn instead"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 125,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "defined class Employee\n",
       "defined class Average\n",
       "defined object MyAverage2\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "import org.apache.spark.sql.expressions.Aggregator\n",
    "import org.apache.spark.sql.{Encoder, Encoders, SparkSession}\n",
    "\n",
    "case class Employee(name: String, salary: Long)\n",
    "case class Average(var sum: Long, var count: Long)\n",
    "\n",
    "object MyAverage2 extends Aggregator[Employee, Average, Double] {\n",
     "  //Zero value of the buffer: total salary and record count, both starting at 0\n",
    "  override def zero: Average = Average(0L, 0L)\n",
    "\n",
     "  //Fold one input record into the buffer\n",
    "  override def reduce(b: Average, a: Employee): Average = {\n",
    "    b.sum += a.salary\n",
    "    b.count += 1\n",
    "    b\n",
    "  }\n",
    "\n",
     "  //Merge buffers from different executors\n",
    "  override def merge(b1: Average, b2: Average): Average = {\n",
    "    b1.sum += b2.sum\n",
    "    b1.count += b2.count\n",
    "    b1\n",
    "  }\n",
    "\n",
     "  //Compute the final output\n",
    "  override def finish(reduction: Average): Double \n",
    "  = reduction.sum.toDouble / reduction.count\n",
    "\n",
     "  // Encoder for the intermediate (buffer) value, which is a case class\n",
     "  // Encoders.product is the encoder for Scala tuples and case classes\n",
    "  override def bufferEncoder: Encoder[Average] = Encoders.product\n",
    "\n",
     "  //Encoder for the final output type\n",
    "  override def outputEncoder: Encoder[Double] = Encoders.scalaDouble\n",
    "}\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 128,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-------+------+\n",
      "|   name|salary|\n",
      "+-------+------+\n",
      "|Michael|  3000|\n",
      "|   Andy|  4500|\n",
      "| Justin|  3500|\n",
      "|  Berta|  4000|\n",
      "+-------+------+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "ds = [name: string, salary: bigint]\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "[name: string, salary: bigint]"
      ]
     },
     "execution_count": 128,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import spark.implicits._\n",
    "\n",
    "val ds = spark.read.json(\"resources/employees.json\").as[Employee]\n",
    "ds.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 133,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+--------------+\n",
      "|average_salary|\n",
      "+--------------+\n",
      "|        3750.0|\n",
      "+--------------+\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "average_salary = myaverage2() AS `average_salary`\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "lastException: Throwable = null\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "myaverage2() AS `average_salary`"
      ]
     },
     "execution_count": 133,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "//Convert MyAverage2 to a TypedColumn and give it a name\n",
    "val average_salary = MyAverage2.toColumn.name(\"average_salary\")\n",
    "ds.select(average_salary).show()"
   ]
  },
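  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: the \"cannot be registered\" restriction applies to Spark 2.x, as used in this notebook. Since Spark 3.0, an `Aggregator` can additionally be registered for SQL/untyped use via `functions.udaf` (a sketch, not runnable in this Spark 2.x environment):\n",
    "\n",
    "```scala\n",
    "import org.apache.spark.sql.functions.udaf\n",
    "spark.udf.register(\"MyAverage2\", udaf(MyAverage2))\n",
    "```"
   ]
  },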
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Spark - Scala",
   "language": "scala",
   "name": "spark_scala"
  },
  "language_info": {
   "codemirror_mode": "text/x-scala",
   "file_extension": ".scala",
   "mimetype": "text/x-scala",
   "name": "scala",
   "pygments_lexer": "scala",
   "version": "2.11.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
