<html>
<head>
  <title>Spark SQL</title>
  <basefont face="微软雅黑" size="2" />
  <meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
  <meta name="exporter-version" content="Evernote Windows/307027 (zh-CN, DDL); Windows/6.1.0 (Win32);"/>
  <style>
    body, td {
      font-family: 微软雅黑;
      font-size: 10pt;
    }
  </style>
</head>
<body>
<a name="1217"/>
<h1>Spark SQL</h1>

<div>
<span><div><div><font style="font-size: 18pt; color: rgb(28, 51, 135);"><b>Overview</b></font></div><div><span style="font-size: 11pt;">Shark (Hive on Spark) aimed for Hive compatibility by reusing Hive's HiveQL parsing, logical-plan translation and plan optimization. Roughly speaking, it only replaced the physical execution plan, turning MapReduce jobs into Spark jobs: HiveQL is parsed by Hive and then translated into RDD operations on Spark.</span></div><div><span style="font-size: 11pt;">Shark's design caused two problems. First, execution-plan optimization depended entirely on Hive, which made it awkward to add new optimization strategies. Second, because Spark parallelizes at the thread level while MapReduce parallelizes at the process level, reusing Hive's implementation raised thread-safety issues, forcing Shark to maintain a separately patched fork of the Hive source tree.</span></div><div><font style="font-size: 11pt;"><br/></font></div><div><span style="font-size: 11pt;">The architecture of Spark SQL is shown in the figure. It rewrites the logical-plan optimization layer of Shark's original architecture, which resolves both problems. For Hive compatibility, Spark SQL depends only on HiveQL parsing and the Hive metastore; in other words, from the moment HQL is parsed into an abstract syntax tree (AST), Spark SQL takes over entirely. Execution-plan generation and optimization are handled by Catalyst, a functional relational query optimization framework.</span></div><div><img src="Spark SQL_files/Image.jpg" type="image/jpeg" data-filename="Image.jpg" style="font-size: 11pt;"/></div><div><font style="font-size: 11pt;"><br/></font></div><div><font style="font-size: 11pt;"><br/></font></div><div><span style="font-size: 11pt;">SQL statements executed in Spark SQL can draw on data from RDDs, from external sources such as Hive, HDFS and Cassandra, or from JSON-formatted data. Spark SQL currently supports three languages (Scala, Java, Python) and the SQL-92 standard. With the upgrade from Spark 1.2 to Spark 1.3, SchemaRDD became DataFrame; DataFrame changed considerably relative to SchemaRDD and offers more numerous and convenient APIs, as shown in the figure.</span></div><div><img src="Spark SQL_files/Image [1].jpg" type="image/jpeg" data-filename="Image.jpg" style="font-size: 11pt;"/></div><div><font style="font-size: 11pt;"><br/></font></div><div><span style="font-size: 11pt;">Spark SQL supports SQL queries in two ways:</span></div><ol><li><div><span style="font-size: 11pt;">a Spark application can issue SQL statements to query data;</span></div></li><li><div><span style="font-size: 11pt;">standard database connectors (such as JDBC or ODBC) can connect to Spark and run SQL queries.</span></div></li></ol><div><span style="font-size: 11pt;">This lets existing business-intelligence tools on the market (such as Tableau) be combined with Spark SQL, so that these external tools also gain large-scale data processing and analysis capability through Spark SQL.</span></div><div><font style="font-size: 11pt;"><br/></font></div><div><font style="font-size: 
11pt;"><br/></font></div><div><font style="font-size: 18pt; color: rgb(28, 51, 135);"><b>DataFrame</b></font></div><div><span style="font-size: 11pt;">The introduction of DataFrame gave Spark the ability to process large-scale structured data: it is not only simpler to use than the original RDD-based approach, but also achieves higher computational performance. Spark can easily convert MySQL data into a DataFrame, with full SQL query support.</span></div><div><img src="Spark SQL_files/Image [2].jpg" type="image/jpeg" data-filename="Image.jpg" style="font-size: 11pt;"/></div><div><span style="font-size: 11pt;">The figure above shows the difference between a DataFrame and an RDD. An RDD is a distributed collection of Java objects; for example, RDD[Person] takes Person as its type parameter, but the internal structure of the Person class is opaque to the RDD. A DataFrame is a distributed dataset built on top of RDDs: a distributed collection of Row objects (each Row representing one record) that carries detailed structural information, i.e. the schema. Spark SQL therefore knows exactly which columns the dataset contains, along with each column's name and type.</span></div><div><span style="font-size: 11pt;">Like RDDs, DataFrame transformations are lazy: they merely record the logical transformation lineage (a DAG) without triggering any real computation. This DAG serves as a logical query plan that is eventually translated into a physical query plan, producing an RDD DAG which is executed in the way described earlier to compute the final result.</span></div><div><font style="font-size: 11pt;"><br/></font></div><div><font style="font-size: 11pt;"><br/></font></div><div><font style="font-size: 11pt;"><br/></font></div><h1><font style="font-size: 18pt; color: rgb(28, 51, 135);">Creating a DataFrame</font></h1><div><span style="font-size: 11pt;">Since Spark 2.0, Spark uses the new SparkSession interface in place of the SQLContext and HiveContext interfaces of Spark 1.6 for loading, transforming and processing data. SparkSession implements all the functionality of both SQLContext and HiveContext.</span></div><div><font color="#AD0000" style="font-size: 12pt;"><b><br/></b></font></div><div><font color="#AD0000" style="font-size: 12pt;"><b>SparkSession supports loading data from different data sources and converting it into a DataFrame</b></font><span style="font-size: 11pt;">; it supports registering a DataFrame as a table within SQLContext and then operating on the data with SQL statements, and it also provides support for HiveQL and other Hive-dependent functionality.</span></div><div><font style="font-size: 11pt;"><br/></font></div><div><span style="font-size: 11pt;-en-paragraph:true;">The file people.json contains:</span></div><div style="box-sizing: border-box; padding: 8px; font-family: Monaco, Menlo, Consolas, &quot;Courier New&quot;, monospace; font-size: 12px; color: rgb(51, 51, 51); border-radius: 4px; background-color: rgb(251, 250, 248); border: 1px solid rgba(0, 0, 0, 0.15);-en-codeblock:true;"><div><span style="font-size: 11pt;">{&quot;name&quot;:&quot;Michael&quot;}</span></div><div><span style="font-size: 11pt;">{&quot;name&quot;:&quot;Andy&quot;, &quot;age&quot;:30}</span></div><div><span style="font-size: 11pt;">{&quot;name&quot;:&quot;Justin&quot;, &quot;age&quot;:19}</span></div></div><div style="margin-top: 1em; margin-bottom: 1em;"><span style="font-size: 11pt;-en-paragraph:true;">The file people.txt contains:</span></div><div style="box-sizing: border-box; padding: 8px; font-family: Monaco, Menlo, Consolas, &quot;Courier New&quot;, monospace; font-size: 12px; color: rgb(51, 51, 51); border-radius: 4px; background-color: rgb(251, 250, 248); border: 1px solid rgba(0, 0, 0, 0.15);-en-codeblock:true;"><div><span style="font-size: 11pt;">Michael, 29</span></div><div><span style="font-size: 11pt;">Andy, 30</span></div><div><span style="font-size: 11pt;">Justin, 19</span></div></div><div><font style="font-size: 11pt;"><br/></font></div><div style="box-sizing: border-box; padding: 8px; font-family: Monaco, Menlo, Consolas, &quot;Courier New&quot;, monospace; font-size: 12px; color: rgb(51, 51, 51); border-radius: 4px; background-color: rgb(251, 250, 248); border: 1px solid rgba(0, 0, 0, 0.15);-en-codeblock:true;"><div><span style="font-family: Monaco;"><font color="#AD0000" style="font-size: 11pt;"><b>Creating a DataFrame with SparkSession:</b></font></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; import org.apache.spark.sql.SparkSession</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val spark = SparkSession.builder().getOrCreate()</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; import 
spark.implicits._</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val df = spark.read.json(&quot;file:///usr/local/spark/examples/src/main/resources/people.json&quot;)</span></div><div>scala&gt; df.show()</div><div>+----+-------+</div><div>| age| name|</div><div>+----+-------+</div><div>|null|Michael|</div><div>| 30| Andy|</div><div>| 19| Justin|</div><div>+----+-------+</div></div><div><font style="font-size: 11pt;"><br/></font></div><div style="box-sizing: border-box; padding: 8px; font-family: Monaco, Menlo, Consolas, &quot;Courier New&quot;, monospace; font-size: 12px; color: rgb(51, 51, 51); border-radius: 4px; background-color: rgb(251, 250, 248); border: 1px solid rgba(0, 0, 0, 0.15);-en-codeblock:true;"><div><span style="font-family: Monaco;"><font color="#AD0000" style="font-size: 11pt;"><b>Common DataFrame operations:</b></font></span></div><div><br/></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">// print the schema</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; df.printSchema()</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">root</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|-- age: long (nullable = true)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|-- name: string (nullable = true)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">// select multiple columns</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; df.select(df(&quot;name&quot;),df(&quot;age&quot;)+1).show()</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+-------+---------+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|   name|(age + 1)|</span></div><div><span 
style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+-------+---------+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|Michael|     null|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|   Andy|       31|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">| Justin|       20|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+-------+---------+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">// filter by condition</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; df.filter(df(&quot;age&quot;) &gt; 20 ).show()</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+---+----+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|age|name|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+---+----+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">| 30|Andy|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+---+----+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">// group and aggregate</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; df.groupBy(&quot;age&quot;).count().show()</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+----+-----+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">| age|count|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+----+-----+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|  19|    1|</span></div><div><span 
style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|null|    1|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|  30|    1|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+----+-----+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">// sort</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; df.sort(df(&quot;age&quot;).desc).show()</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+----+-------+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">| age|   name|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+----+-------+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|  30|   Andy|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|  19| Justin|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|null|Michael|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+----+-------+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">// sort by multiple columns</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; df.sort(df(&quot;age&quot;).desc, df(&quot;name&quot;).asc).show()</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+----+-------+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">| age|   name|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+----+-------+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|  30|   
Andy|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|  19| Justin|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|null|Michael|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+----+-------+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">// rename a column</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; df.select(df(&quot;name&quot;).as(&quot;username&quot;),df(&quot;age&quot;)).show()</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+--------+----+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|username| age|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+--------+----+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">| Michael|null|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|    Andy|  30|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|  Justin|  19|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+--------+----+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div></div><div><font style="font-size: 11pt;"><br/></font></div><div><br/></div><h1><span style="font-family: Monaco;"><font style="font-size: 18pt; color: rgb(28, 51, 135);">Converting an RDD to a DataFrame</font></span></h1><div><span style="color: rgb(51, 51, 51);"><font style="font-size: 11pt;">The Spark documentation describes two ways to convert an RDD into a DataFrame. The first uses reflection to infer the schema of an RDD containing objects of a specific type; it suits RDDs whose data structure is known in advance. The second uses a programmatic interface to construct a schema and apply it to a known RDD.</font></span></div><div style="box-sizing: border-box; padding: 8px; font-family: Monaco, Menlo, Consolas, 
&quot;Courier New&quot;, monospace; font-size: 12px; color: rgb(51, 51, 51); border-radius: 4px; background-color: rgb(251, 250, 248); border: 1px solid rgba(0, 0, 0, 0.15);-en-codeblock:true;"><div><h3><span style="font-family: Monaco;"><font color="#AD0000" style="font-size: 11pt;">Inferring the RDD schema via reflection:</font></span></h3><h3><font style="font-size: 9pt;">When inferring an RDD schema via reflection, a case class must be defined first, because only a case class can be implicitly converted by Spark into a DataFrame.</font></h3><div style="color: rgb(51, 51, 51);"><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; import org.apache.spark.sql.Encoder</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">import org.apache.spark.sql.Encoder</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; import spark.implicits._  // enables implicit conversion of an RDD to a DataFrame</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">import spark.implicits._</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; case class Person(name: String, age: Long)  // define a case class</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">defined class Person</span></div><div><span 
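style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">// Aside: a plain-Scala sketch (no Spark needed; the sample lines are illustrative copies of people.txt) of the per-line parsing that the toDF() pipeline below performs:</span></div><pre style="font-family: Monaco, Menlo, Consolas, monospace; font-size: 9pt; color: rgb(51, 51, 51);">
```scala
case class Person(name: String, age: Long)

// Mirror of map(_.split(",")).map(...): split each line on the comma,
// trim the age text, and build one Person per line.
val lines  = List("Michael, 29", "Andy, 30", "Justin, 19")
val people = lines.map(_.split(",")).map(a => Person(a(0), a(1).trim.toLong))

println(people.head)  // Person(Michael,29)
```
</pre><div><span 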
style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val peopleDF = spark.sparkContext.textFile(&quot;file:///usr/local/spark/examples/src/main/resources/people.txt&quot;).map(_.split(&quot;,&quot;)).map(attributes =&gt; Person(attributes(0), attributes(1).trim.toInt)).toDF()</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">peopleDF: org.apache.spark.sql.DataFrame = [name: string, age: bigint]</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; peopleDF.createOrReplaceTempView(&quot;people&quot;)  // must be registered as a temp view before the query below can use it</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val personsRDD = spark.sql(&quot;select name,age from people where age &gt; 20&quot;)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">// the result is a DataFrame</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">personsRDD: org.apache.spark.sql.DataFrame = [name: string, age: bigint]</span></div><div><br/></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; personsRDD.map(t =&gt; &quot;Name:&quot;+t(0)+&quot;,&quot;+&quot;Age:&quot;+t(1)).show()  // each row contains name and age fields, read via t(0) and t(1)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+------------------+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|             value|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 
51);">+------------------+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|Name:Michael,Age:29|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|   Name:Andy,Age:30|</span></div><div style="color: rgb(51, 51, 51);"><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+------------------+</span><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div style="color: rgb(51, 51, 51);"><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div style="color: rgb(51, 51, 51);"></div></div></div><div><span style="color: rgb(51, 51, 51);"><font style="font-size: 11pt;"><br/></font></span></div><div style="box-sizing: border-box; padding: 8px; font-family: Monaco, Menlo, Consolas, &quot;Courier New&quot;, monospace; font-size: 12px; color: rgb(51, 51, 51); border-radius: 4px; background-color: rgb(251, 250, 248); border: 1px solid rgba(0, 0, 0, 0.15);-en-codeblock:true;"><div><h3><span style="font-family: Monaco;"><font color="#AD0000" style="font-size: 11pt;">Defining an RDD schema programmatically</font></span></h3><div style="color: rgb(51, 51, 51);"></div><div style="color: rgb(51, 51, 51);"><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">When a case class cannot be defined in advance, the RDD schema must be defined programmatically.</span><br/></div><div style="color: rgb(51, 51, 51);"><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div></div><div style="color: rgb(51, 51, 51);"><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; import org.apache.spark.sql.types._</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">import org.apache.spark.sql.types._</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; import org.apache.spark.sql.Row</span></div><div><span style="font-family: 
Monaco; font-size: 9pt; color: rgb(51, 51, 51);">import org.apache.spark.sql.Row</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">// create the RDD</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val peopleRDD = spark.sparkContext.textFile(&quot;file:///usr/local/spark/examples/src/main/resources/people.txt&quot;)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">peopleRDD: org.apache.spark.rdd.RDD[String] = file:///usr/local/spark/examples/src/main/resources/people.txt MapPartitionsRDD[1] at textFile at &lt;console&gt;:26</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">// define a schema string</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val schemaString = &quot;name age&quot;</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">schemaString: String = name age</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">// generate the schema from the schema string</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val fields = schemaString.split(&quot; &quot;).map(fieldName =&gt; StructField(fieldName, StringType, nullable = true))</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">fields: Array[org.apache.spark.sql.types.StructField] = Array(StructField(name,StringType,true), StructField(age,StringType,true))</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span 
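style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">// Aside: the schema-string expansion above in plain Scala (illustrative; no Spark types required):</span></div><pre style="font-family: Monaco, Menlo, Consolas, monospace; font-size: 9pt; color: rgb(51, 51, 51);">
```scala
// Each whitespace-separated token in the schema string becomes one column
// name; the surrounding statements wrap each token in a StructField.
val schemaString = "name age"
val columnNames  = schemaString.split(" ").toList

println(columnNames)  // List(name, age)
```
</pre><div><span 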
style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val schema = StructType(fields)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">schema: org.apache.spark.sql.types.StructType = StructType(StructField(name,StringType,true), StructField(age,StringType,true))</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">//从上面信息可以看出，schema描述了模式信息，模式中包含name和age两个字段</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">//对peopleRDD 这个RDD中的每一行元素都进行解析val peopleDF = spark.read.format(&quot;json&quot;).load(&quot;examples/src/main/resources/people.json&quot;)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val rowRDD = peopleRDD.map(_.split(&quot;,&quot;)).map(attributes =&gt; Row(attributes(0), attributes(1).trim))</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">rowRDD: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[3] at map at &lt;console&gt;:29</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val peopleDF = spark.createDataFrame(rowRDD, schema)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">peopleDF: org.apache.spark.sql.DataFrame = [name: string, age: string]</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">//必须注册为临时表才能供下面查询使用</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; 
peopleDF.createOrReplaceTempView(&quot;people&quot;)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val results = spark.sql(&quot;SELECT name,age FROM people&quot;)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">results: org.apache.spark.sql.DataFrame = [name: string, age: string]</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; results.map(attributes =&gt; &quot;name: &quot; + attributes(0)+&quot;,&quot;+&quot;age:&quot;+attributes(1)).show()</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+--------------------+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|               value|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+--------------------+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|name: Michael,age:29|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|   name: Andy,age:30|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">| name: Justin,age:19|</span></div><div style="color: rgb(51, 51, 51);"><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+--------------------+</span><br/></div></div><div><span style="color: rgb(51, 51, 51);"><font style="font-size: 11pt;"><br/></font></span></div><h3><font style="font-size: 18pt; color: rgb(28, 51, 135);">Saving an RDD to a file</font></h3><div style="box-sizing: border-box; padding: 8px; font-family: Monaco, Menlo, Consolas, &quot;Courier New&quot;, monospace; font-size: 12px; color: rgb(51, 51, 51); border-radius: 4px; background-color: rgb(251, 250, 248); border: 1px solid rgba(0, 0, 0, 0.15);-en-codeblock:true;"><div><h4><span style="color: rgb(51, 51, 51);"><span style="font-family: 
Monaco; font-size: 9pt; color: rgb(51, 51, 51);">Save method 1</span></span></h4><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val peopleDF = spark.read.format(&quot;json&quot;).load(&quot;file:///usr/local/spark/examples/src/main/resources/people.json&quot;)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">peopleDF: org.apache.spark.sql.DataFrame = [age: bigint, name: string]</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; peopleDF.select(&quot;name&quot;, &quot;age&quot;).write.format(&quot;csv&quot;).save(&quot;file:///usr/local/spark/mycode/newpeople.csv&quot;)</span></div></div></div><div><span style="color: rgb(51, 51, 51);"><font style="font-size: 11pt;"><br/></font></span></div><div><span style="font-size: 11pt;"><font color="#333333">In addition, </font><b><font style="color: rgb(173, 0, 0);">write.format() supports the output formats json, parquet, jdbc, orc, libsvm, csv and text</font></b><font style="color: rgb(51, 51, 51);">. To write a text file, use write.format(&quot;text&quot;), but note that saving as text is only allowed when select() contains exactly one column; with two columns, such as select(&quot;name&quot;, &quot;age&quot;), the result cannot be saved as a text file.</font></span></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);">To load the data in newpeople.csv back into an RDD, we can </span><font style="font-size: 12pt; color: rgb(173, 0, 0);"><b>use the newpeople.csv directory name directly</b></font><span style="font-size: 11pt; color: rgb(51, 51, 51);"> instead of the part-r-00000-33184449-cb15-454c-a30f-9bb43faccac1.csv file, as follows:</span></div><div style="box-sizing: border-box; padding: 8px; font-family: Monaco, Menlo, Consolas, &quot;Courier New&quot;, monospace; font-size: 12px; color: rgb(51, 51, 51); border-radius: 4px; background-color: rgb(251, 250, 248); border: 1px solid rgba(0, 0, 0, 0.15);-en-codeblock:true;"><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val textFile 
= sc.textFile(&quot;file:///usr/local/spark/mycode/newpeople.csv&quot;)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">textFile: org.apache.spark.rdd.RDD[String] = file:///usr/local/spark/mycode/newpeople.csv MapPartitionsRDD[1] at textFile at &lt;console&gt;:24</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; textFile.foreach(println)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">Justin,19</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">Michael,</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">Andy,30</span></div></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);"><br/></span></div><div style="box-sizing: border-box; padding: 8px; font-family: Monaco, Menlo, Consolas, &quot;Courier New&quot;, monospace; font-size: 12px; color: rgb(51, 51, 51); border-radius: 4px; background-color: rgb(251, 250, 248); border: 1px solid rgba(0, 0, 0, 0.15);-en-codeblock:true;"><div><h4><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">Save method 2</span></h4><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val peopleDF = spark.read.format(&quot;json&quot;).load(&quot;file:///usr/local/spark/examples/src/main/resources/people.json&quot;)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">peopleDF: org.apache.spark.sql.DataFrame = [age: bigint, name: string]</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; peopleDF.rdd.saveAsTextFile(&quot;file:///usr/local/spark/mycode/newpeople.txt&quot;)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);"><br/></span></div></div></div><div><span style="font-size: 11pt; color: rgb(51, 51, 
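51);"><br/></span></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);">Note that the reloaded newpeople.csv lines above include &quot;Michael,&quot; with an empty age field. A plain-Scala sketch (the sample lines are illustrative; no Spark required) of parsing such lines back while treating a missing age as absent:</span></div><pre style="font-family: Monaco, Menlo, Consolas, monospace; font-size: 9pt; color: rgb(51, 51, 51);">
```scala
// Split each CSV line; the -1 limit keeps trailing empty fields, so
// "Michael," still yields two parts. An empty age becomes None.
val lines  = List("Justin,19", "Michael,", "Andy,30")
val parsed = lines.map { line =>
  val parts = line.split(",", -1)
  val age   = if (parts(1).nonEmpty) Some(parts(1).toInt) else None
  (parts(0), age)
}

println(parsed(1))  // (Michael,None)
```
</pre><div><span style="font-size: 11pt; color: rgb(51, 51, 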
51);">As can be seen, we convert the DataFrame to an RDD and then call saveAsTextFile() to save it as a text file. <span style="font-size: 11pt; color: rgb(51, 51, 51);">To load the data in newpeople.txt back into an RDD, use the newpeople.txt directory name directly rather than the part-00000 file.</span></span></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);"><br/></span></div><h1><font style="font-size: 18pt; color: rgb(28, 51, 135);">Reading and Writing Parquet (DataFrame)</font></h1><div><span style="font-size: 11pt; color: rgb(51, 51, 51);">Parquet is a popular columnar storage format that can efficiently store records with nested fields. Parquet is language-agnostic and not tied to any single data-processing framework; it is supported across many languages and components, including:</span></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);">* Query engines: Hive, Impala, Pig, Presto, Drill, Tajo, HAWQ, IBM Big SQL</span></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);">* Compute frameworks: MapReduce, Spark, Cascading, Crunch, Scalding, Kite</span></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);">* Data models: Avro, Thrift, Protocol Buffers, POJOs</span></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);">The following code demonstrates how to load data from a parquet file into a DataFrame</span></div><div style="box-sizing: border-box; padding: 8px; font-family: Monaco, Menlo, Consolas, &quot;Courier New&quot;, monospace; font-size: 12px; color: rgb(51, 51, 51); border-radius: 4px; background-color: rgb(251, 250, 248); border: 1px solid rgba(0, 0, 0, 0.15);-en-codeblock:true;"><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; import spark.implicits._</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">import spark.implicits._</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val parquetFileDF = 
spark.read.parquet(&quot;file:///usr/local/spark/examples/src/main/resources/users.parquet&quot;)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">SLF4J: Failed to load class &quot;org.slf4j.impl.StaticLoggerBinder&quot;.</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">SLF4J: Defaulting to no-operation (NOP) logger implementation</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">SLF4J: See</span> <a href="http://www.slf4j.org/codes.html#StaticLoggerBinder" style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">http://www.slf4j.org/codes.html#StaticLoggerBinder</a> <span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">for further details.</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">parquetFileDF: org.apache.spark.sql.DataFrame = [name: string, favorite_color: string ... 1 more field]</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; parquetFileDF.createOrReplaceTempView(&quot;parquetFile&quot;)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val namesDF = spark.sql(&quot;SELECT * FROM parquetFile&quot;)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">namesDF: org.apache.spark.sql.DataFrame = [name: string, favorite_color: string ... 
1 more field]</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; namesDF.foreach(attributes =&gt;println(&quot;Name: &quot; + attributes(0)+&quot;  favorite color:&quot;+attributes(1)))</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">16/12/02 10:18:49 WARN hadoop.ParquetRecordReader: Can not initialize counter due to context is not a instance of TaskInputOutputContext, but is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">Name: Alyssa  favorite color:null</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">Name: Ben  favorite color:red</span></div></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);">Next, here is how to save a DataFrame as a Parquet file:</span><br/></div><div><div style="box-sizing: border-box; padding: 8px; font-family: Monaco, Menlo, Consolas, &quot;Courier New&quot;, monospace; font-size: 12px; color: rgb(51, 51, 51); border-radius: 4px; background-color: rgb(251, 250, 248); border: 1px solid rgba(0, 0, 0, 0.15);-en-codeblock:true;"><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; import spark.implicits._</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">import spark.implicits._</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val peopleDF = spark.read.json(&quot;file:///usr/local/spark/examples/src/main/resources/people.json&quot;)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">peopleDF: org.apache.spark.sql.DataFrame = [age: bigint, name: string]</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; 
peopleDF.write.parquet(&quot;file:///usr/local/spark/mycode/newpeople.parquet&quot;)</span></div></div><br/></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);">If we want to load this newly written data back into a DataFrame, which file should we load? Simply load the newpeople.parquet directory itself, rather than the individual part files inside it.</span></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);"><br/></span></div><h1><font style="font-size: 18pt; color: rgb(28, 51, 135);">Connecting to a Database via JDBC (DataFrame)</font></h1><div><span style="font-size: 11pt; color: rgb(51, 51, 51);">Here we take the relational database MySQL as an example. First, install MySQL on your Linux system following the Xiamen University Database Lab tutorial (</span><a href="http://dblab.xmu.edu.cn/blog/install-mysql/" style="font-size: 11pt; color: rgb(51, 51, 51);">Installing MySQL on Ubuntu</a><span style="font-size: 11pt; color: rgb(51, 51, 51);">). The rest of this section assumes MySQL is installed successfully. We will create a database for testing Spark programs, named "spark", containing a table named "student".</span></div><div><span style="font-size: 11pt;"><b><font color="#AD0000"><br/></font></b></span></div><div><span style="font-size: 11pt;"><b><font color="#AD0000">Spark can connect to other databases via JDBC and load their data into a DataFrame.</font></b></span></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);"><br/></span></div><div style="box-sizing: border-box; padding: 8px; font-family: Monaco, Menlo, Consolas, &quot;Courier New&quot;, monospace; font-size: 12px; color: rgb(51, 51, 51); border-radius: 4px; background-color: rgb(251, 250, 248); border: 1px solid rgba(0, 0, 0, 0.15);-en-codeblock:true;"><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">First, log into the Linux system (this tutorial uses the hadoop user throughout), open the Firefox browser, and download a MySQL JDBC driver (</span><a href="http://dev.mysql.com/downloads/connector/j/" style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">download</a><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">). Firefox saves downloads to the hadoop user's &quot;下载&quot; (Downloads) directory by default, so you can open a terminal and list that directory to check:</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 
<br/>">
51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">There you should see the MySQL JDBC driver you just downloaded, named mysql-connector-java-5.1.40.tar.gz (the version you downloaded may differ). Now extract the driver into the Spark installation directory with the following commands:</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">$ sudo tar -zxf ~/下载/mysql-connector-java-5.1.40.tar.gz -C /usr/local/spark/jars</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">$ cd /usr/local/spark/jars</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">$ ls</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">You should now see the folder mysql-connector-java-5.1.40 under /usr/local/spark/jars; inside it is the driver file mysql-connector-java-5.1.40-bin.jar.</span></div></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);"><br/></span></div><div style="box-sizing: border-box; padding: 8px; font-family: Monaco, Menlo, Consolas, &quot;Courier New&quot;, monospace; font-size: 12px; color: rgb(51, 51, 51); border-radius: 4px; background-color: rgb(251, 250, 248); border: 1px solid rgba(0, 0, 0, 0.15);-en-codeblock:true;"><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">When starting the Spark shell, you must specify the MySQL connector jar (note that only the first line of the command takes the $ prompt; the remaining lines are continuations):</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">$ cd /usr/local/spark</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">$ ./bin/spark-shell \</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">    --jars /usr/local/spark/jars/mysql-connector-java-5.1.40/mysql-connector-java-5.1.40-bin.jar \</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">    --driver-class-path /usr/local/spark/jars/mysql-connector-java-5.1.40/mysql-connector-java-5.1.40-bin.jar</span></div><div><span style="font-family: Monaco; font-size: 
9pt; color: rgb(51, 51, 51);"><br/></span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">Once spark-shell is up, run the following commands to connect to the database, read the data, and display it:</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; val jdbcDF = spark.read.format(&quot;jdbc&quot;).option(&quot;url&quot;, &quot;jdbc:mysql://localhost:3306/spark&quot;).option(&quot;driver&quot;,&quot;com.mysql.jdbc.Driver&quot;).option(&quot;dbtable&quot;, &quot;student&quot;).option(&quot;user&quot;, &quot;root&quot;).option(&quot;password&quot;, &quot;hadoop&quot;).load()</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">Fri Dec 02 11:56:56 CST 2016 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">jdbcDF: org.apache.spark.sql.DataFrame = [id: int, name: string ... 2 more fields]</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">scala&gt; jdbcDF.show()</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">Fri Dec 02 11:57:30 CST 2016 WARN: Establishing SSL connection without server's identity verification is not recommended. 
According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+---+--------+------+---+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">| id|    name|gender|age|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+---+--------+------+---+</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|  1| Xueqian|     F| 23|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">|  2|Weiliang|     M| 24|</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">+---+--------+------+---+</span></div></div><div><span style="font-size: 11pt;"><b><font color="#AD0000"><br/></font></b></span></div><div><span style="font-size: 11pt;"><b><font color="#AD0000">Now, in spark-shell, write a program that inserts two records into the spark.student table.</font></b></span></div><div style="box-sizing: border-box; padding: 8px; font-family: Monaco, Menlo, Consolas, &quot;Courier New&quot;, monospace; font-size: 12px; color: rgb(51, 51, 51); border-radius: 4px; background-color: rgb(251, 250, 248); border: 1px solid rgba(0, 0, 0, 0.15);-en-codeblock:true;"><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">import java.util.Properties</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">import org.apache.spark.sql.types._</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">import 
org.apache.spark.sql.Row</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">//Two records, each representing one student</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">val studentRDD = spark.sparkContext.parallelize(Array(&quot;3 Rongcheng M 26&quot;,&quot;4 Guanhua M 27&quot;)).map(_.split(&quot; &quot;))</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">//Define the schema</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">val schema = StructType(List(StructField(&quot;id&quot;, IntegerType, true),StructField(&quot;name&quot;, StringType, true),StructField(&quot;gender&quot;, StringType, true),StructField(&quot;age&quot;, IntegerType, true)))</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">//Create Row objects; each Row is one row of rowRDD</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">val rowRDD = studentRDD.map(p =&gt; Row(p(0).toInt, p(1).trim, p(2).trim, p(3).toInt))</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">//Bind the Row objects to the schema, i.e. pair the data with its structure</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">val studentDF = spark.createDataFrame(rowRDD, schema)</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">//Create a prop variable to hold the JDBC connection parameters</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">val prop = new Properties()</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">prop.put(&quot;user&quot;, &quot;root&quot;) //the user name is root</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">prop.put(&quot;password&quot;, &quot;hadoop&quot;) //the password is hadoop</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 
prop.put">
51);">prop.put(&quot;driver&quot;,&quot;com.mysql.jdbc.Driver&quot;) //the JDBC driver class is com.mysql.jdbc.Driver</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">//Connect to the database in append mode, appending the records to the student table of database spark</span></div><div><span style="font-family: Monaco; font-size: 9pt; color: rgb(51, 51, 51);">studentDF.write.mode(&quot;append&quot;).jdbc(&quot;jdbc:mysql://localhost:3306/spark&quot;, &quot;spark.student&quot;, prop)</span></div></div><div><span style="font-size: 11pt; color: rgb(51, 51, 51);"><br/></span></div></div></span>
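prop.put">
The heart of the insert example above is turning space-delimited strings into typed rows before attaching a schema. That step can be sketched in plain Scala without a Spark installation; the Student case class and parseStudent helper below are illustrative names, not part of the Spark API:

```scala
// Mirrors studentRDD.map(_.split(" ")) followed by the Row(p(0).toInt, ...) mapping:
// each input line is "id name gender age", matching the student table's schema.
case class Student(id: Int, name: String, gender: String, age: Int)

def parseStudent(line: String): Student = {
  val p = line.split(" ")
  // p(0) and p(3) must parse as integers, just as the Row construction requires
  Student(p(0).toInt, p(1).trim, p(2).trim, p(3).toInt)
}

val students = Array("3 Rongcheng M 26", "4 Guanhua M 27").map(parseStudent)
```

In the real program each parsed array becomes a generic Row rather than a case class, and spark.createDataFrame(rowRDD, schema) is what pairs those untyped rows with the StructType schema.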
</div></body></html> 