<!DOCTYPE html>
<html lang="zh-cn">
<head>
   
    <link type="text/css" rel="stylesheet" href="/bundles/blog-common.css?v=KOZafwuaDasEedEenI5aTy8aXH0epbm6VUJ0v3vsT_Q1"/>
<link id="MainCss" type="text/css" rel="stylesheet" href="/skins/ThinkInside/bundle-ThinkInside.css?v=RRjf6pEarGnbXZ86qxNycPfQivwSKWRa4heYLB15rVE1"/>
<link type="text/css" rel="stylesheet" href="/blog/customcss/428549.css?v=%2fam3bBTkW5NBWhBE%2fD0lcyJv5UM%3d"/>

</head>
<body>

<div id="topics">
	<div class = "post">
		<h1 class = "postTitle">
			<a id="cb_post_title_url" class="postTitle2" href="https://www.cnblogs.com/frankdeng/p/9294812.html">Spark (Part 2): Building a Spark 2.3.1 Distributed Cluster on CentOS 7.5</a>
		</h1>
		<div class="clear"></div>
		<div class="postBody">
			<div id="cnblogs_post_body" class="blogpost-body"><h2>I Download the Installation Package</h2>
<h3>1&nbsp;Official Download</h3>
<p>Official download page: <a href="http://spark.apache.org/downloads.html" target="_blank">http://spark.apache.org/downloads.html</a></p>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201807/1385722-20180711183922952-1479997926.png" alt="" /></p>
<h3><strong>2&nbsp; Prerequisites</strong></h3>
<ul>
<li>Java 8&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;installed</li>
<li>Zookeeper&nbsp; installed; see: <a class="postTitle2" href="https://www.cnblogs.com/frankdeng/p/9018177.html" target="_blank">Setting up a Zookeeper 3.4.12 cluster on CentOS 7.5</a></li>
<li>Hadoop&nbsp; &nbsp; &nbsp; &nbsp;installed; see: <a class="postTitle2" href="https://www.cnblogs.com/frankdeng/p/9047698.html" target="_blank">Setting up a Hadoop 2.7.6 cluster on CentOS 7.5</a></li>
<li>Scala&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; installed</li>
</ul>
<p><strong>Note: starting with Spark 2.0, Spark is built with Scala 2.11 by default. Scala 2.10 users should download the Spark source package and <a href="https://spark.apache.org/docs/latest/building-spark.html#building-for-scala-210">build with Scala 2.10 support</a>.</strong></p>
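<p>Before installing, the Java prerequisite can be sanity-checked on each node with a small sketch like the one below. The helper names are hypothetical (not from this post), and the version string is just an example of what <code>java -version</code> prints.</p>

```shell
#!/bin/sh
# Pre-flight sketch (hypothetical helpers): verify Java 8 from
# `java -version`-style output, e.g.:  java version "1.8.0_171"
parse_java_version() {
  printf '%s\n' "$1" | awk -F'"' '/version/ {print $2}'
}

is_java8() {
  case "$(parse_java_version "$1")" in
    1.8.*) return 0 ;;
    *)     return 1 ;;
  esac
}

# On a real node you would feed it the live output:
#   is_java8 "$(java -version 2>&1)" && echo "Java 8 OK"
is_java8 'java version "1.8.0_171"' && echo "Java 8 OK"
```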
<h3><strong>3&nbsp; Cluster Plan</strong></h3>
<table style="background-color: #2dd2b1;" border="0">
<tbody>
<tr>
<td style="text-align: center;"><strong>Node&nbsp;</strong></td>
<td style="text-align: center;"><strong>IP</strong></td>
<td style="text-align: center;">Zookeeper</td>
<td style="text-align: center;">Master</td>
<td style="text-align: center;">Worker</td>
</tr>
<tr>
<td style="text-align: center;">node21</td>
<td style="text-align: center;">192.168.100.21</td>
<td style="text-align: center;">Zookeeper</td>
<td style="text-align: center;">Master (primary)</td>
<td style="text-align: center;">&nbsp;</td>
</tr>
<tr>
<td style="text-align: center;">node22</td>
<td style="text-align: center;">192.168.100.22</td>
<td style="text-align: center;">Zookeeper</td>
<td style="text-align: center;">Master (standby)</td>
<td style="text-align: center;">Worker</td>
</tr>
<tr>
<td style="text-align: center;">node23</td>
<td style="text-align: center;">192.168.100.23</td>
<td style="text-align: center;">Zookeeper</td>
<td style="text-align: center;">&nbsp;</td>
<td style="text-align: center;">Worker</td>
</tr>
</tbody>
</table>
<h2>&nbsp;II Cluster Installation</h2>
<h3>1&nbsp; Extract the Archive</h3>
<div class="cnblogs_code">
<pre>[admin@node21 software]$ tar zxvf spark-2.3.1-bin-hadoop2.7.tgz -C /opt/module/
[admin@node21 module]$ mv spark-2.3.1-bin-hadoop2.7 spark-2.3.1</pre>
</div>
<h3>2&nbsp; Edit the Configuration Files</h3>
<p>(1) Change into the configuration directory</p>
<div class="cnblogs_code">
<pre>[admin@node21 ~]$ cd /opt/module/spark-2.3.1/conf/
[admin@node21 conf]$ ll
total 36
-rw-rw-r-- 1 admin admin  996 Jun  2 04:49 docker.properties.template
-rw-rw-r-- 1 admin admin 1105 Jun  2 04:49 fairscheduler.xml.template
-rw-rw-r-- 1 admin admin 2025 Jun  2 04:49 log4j.properties.template
-rw-rw-r-- 1 admin admin 7801 Jun  2 04:49 metrics.properties.template
-rw-rw-r-- 1 admin admin  870 Jul  4 23:50 slaves.template 
-rw-rw-r-- 1 admin admin 1292 Jun  2 04:49 spark-defaults.conf.template
-rwxrwxr-x 1 admin admin 4861 Jul  5 00:25 spark-env.sh.template</pre>
</div>
<p>(2) Copy spark-env.sh.template to spark-env.sh</p>
<div class="cnblogs_code">
<pre>[admin@node21 conf]$ cp spark-env.sh.template spark-env.sh
[admin@node21 conf]$ vi spark-env.sh</pre>
</div>
<p>Append the following configuration to the end of the file</p>
<div class="cnblogs_code">
<pre># Hostname (or IP) of the default master
export SPARK_MASTER_HOST=node21
# Port the master listens on for job submission (default 7077)
export SPARK_MASTER_PORT=7077
# Port for the master web UI
export SPARK_MASTER_WEBUI_PORT=8080
# Memory each worker node may use
export SPARK_WORKER_MEMORY=1g
# Total cores Spark applications may use on the machine (default: all available cores)
export SPARK_WORKER_CORES=1
# Number of worker instances per node (optional)
export SPARK_WORKER_INSTANCES=1
# Directory containing the (client-side) Hadoop configuration; required when running on YARN
export HADOOP_CONF_DIR=/opt/module/hadoop-2.7.6/etc/hadoop
# Keep cluster state (including recovery) in Zookeeper
export SPARK_DAEMON_JAVA_OPTS="
-Dspark.deploy.recoveryMode=ZOOKEEPER
-Dspark.deploy.zookeeper.url=node21:2181,node22:2181,node23:2181
-Dspark.deploy.zookeeper.dir=/spark"</pre>
</div>
<p>(3) Copy slaves.template to slaves and edit it</p>
<div class="cnblogs_code">
<pre>[admin@node21 conf]$ cp slaves.template slaves
[admin@node21 conf]$ vi slaves</pre>
</div>
<p>Add the worker nodes</p>
<div class="cnblogs_code">
<pre>node22
node23</pre>
</div>
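<p>Maintaining the worker list by hand is error-prone across rebuilds; a tiny sketch can generate it from the cluster plan instead. The node names follow the plan above, but the script and output path are illustrative assumptions, not part of the original setup.</p>

```shell
#!/bin/sh
# Sketch: generate conf/slaves from the worker list in the cluster plan.
# WORKERS and the output file name are assumptions for illustration.
WORKERS="node22 node23"
OUT="slaves"

: > "$OUT"                      # truncate / create the file
for w in $WORKERS; do
  printf '%s\n' "$w" >> "$OUT"
done

cat "$OUT"
```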
<p>(4) Distribute the installation to the other nodes</p>
<div class="cnblogs_code">
<pre>[admin@node21 module]$ scp -r spark-2.3.1 admin@node22:/opt/module/
[admin@node21 module]$ scp -r spark-2.3.1 admin@node23:/opt/module/</pre>
</div>
<p><strong>On node22, edit conf/spark-env.sh and change the master host to </strong>SPARK_MASTER_HOST=node22</p>
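<p>That per-node edit can be scripted as below. <code>fix_master_host</code> is a hypothetical helper (the post does this edit manually), and the demo file stands in for conf/spark-env.sh on node22; note that this <code>sed -i</code> form assumes GNU sed.</p>

```shell
#!/bin/sh
# Sketch: after distributing the package, point the standby master's
# spark-env.sh at its own hostname. fix_master_host is a hypothetical
# helper; spark-env.sh.demo stands in for conf/spark-env.sh on node22.
fix_master_host() {  # usage: fix_master_host <spark-env.sh> <hostname>
  sed -i "s/^export SPARK_MASTER_HOST=.*/export SPARK_MASTER_HOST=$2/" "$1"
}

printf 'export SPARK_MASTER_HOST=node21\n' > spark-env.sh.demo
fix_master_host spark-env.sh.demo node22
cat spark-env.sh.demo
```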
<h3>3&nbsp; Configure Environment Variables</h3>
<p>Required on every node</p>
<div class="cnblogs_code">
<pre>[admin@node21 spark-2.3.1]$ sudo vi /etc/profile
export  SPARK_HOME=/opt/module/spark-2.3.1
export  PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
[admin@node21 spark-2.3.1]$ source /etc/profile</pre>
</div>
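<p>After sourcing /etc/profile, a quick check like the sketch below confirms the PATH change took effect. The values mirror the post; the <code>path_has</code> helper itself is a hypothetical convenience.</p>

```shell
#!/bin/sh
# Sketch: confirm the profile edits took effect; values mirror the post,
# the path_has helper is a hypothetical convenience.
SPARK_HOME=/opt/module/spark-2.3.1
PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

path_has() {  # usage: path_has <dir>
  case ":$PATH:" in
    *":$1:"*) return 0 ;;
    *)        return 1 ;;
  esac
}

path_has "$SPARK_HOME/bin" && path_has "$SPARK_HOME/sbin" && echo "PATH OK"
```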
<h2 id="blogTitle10">III Start the Cluster</h2>
<h3 id="blogTitle11">1 Start the Zookeeper Cluster</h3>
<p>Run on every Zookeeper node</p>
<div class="cnblogs_code">
<pre>[admin@node21 ~]$ <strong>zkServer.sh start</strong></pre>
</div>
<h3 id="blogTitle12">2 Start the Hadoop Cluster</h3>
<div class="cnblogs_code">
<pre>[admin@node21 ~]$ start-dfs.sh
[admin@node22 ~]$ start-yarn.sh
[admin@node23 ~]$ yarn-daemon.sh start resourcemanager</pre>
</div>
<h3 id="blogTitle13">3 Start the Spark Cluster</h3>
<p>Start the master: sbin/start-master.sh; start the workers: sbin/start-slaves.sh</p>
<p class="15">Or start everything at once: sbin/start-all.sh</p>
<div class="cnblogs_code">
<pre>[admin@node21 spark-2.3.1]$ sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /opt/module/spark-2.3.1/logs/spark-admin-org.apache.spark.deploy.master.Master-1-node21.out
node22: starting org.apache.spark.deploy.worker.Worker, logging to /opt/module/spark-2.3.1/logs/spark-admin-org.apache.spark.deploy.worker.Worker-1-node22.out
node23: starting org.apache.spark.deploy.worker.Worker, logging to /opt/module/spark-2.3.1/logs/spark-admin-org.apache.spark.deploy.worker.Worker-1-node23.out</pre>
</div>
<p><strong>Note: the standby master must be started manually</strong></p>
<div class="cnblogs_code">
<pre>[admin@node22 spark-2.3.1]$ sbin/start-master.sh 
starting org.apache.spark.deploy.master.Master, logging to /opt/module/spark-2.3.1/logs/spark-admin-org.apache.spark.deploy.master.Master-1-node22.out</pre>
</div>
<h3 id="blogTitle14">4 Check the Processes</h3>
<div class="cnblogs_code">
<pre>[admin@node21 spark-2.3.1]$ jps
1316 QuorumPeerMain
3205 Jps
3110 Master
1577 DataNode
1977 DFSZKFailoverController
1788 JournalNode
2124 NodeManager

[admin@node22 spark-2.3.1]$ jps
1089 QuorumPeerMain
1233 DataNode
1617 ResourceManager
1159 NameNode
1319 JournalNode
1735 NodeManager
3991 Master
4090 Jps
1435 DFSZKFailoverController
3918 Worker

[admin@node23 spark-2.3.1]$ jps
1584 ResourceManager
1089 QuorumPeerMain
1241 JournalNode
2411 Worker
1164 DataNode
1388 NodeManager
2478 Jps</pre>
</div>
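<p>Eyeballing three <code>jps</code> listings is easy to get wrong; the sketch below checks them programmatically. <code>has_daemon</code> is a hypothetical helper, and the sample reuses node21's listing from above.</p>

```shell
#!/bin/sh
# Sketch: check that expected daemons appear in `jps` output.
# has_daemon is a hypothetical helper; the sample mirrors node21's listing.
has_daemon() {  # usage: has_daemon "<jps output>" <DaemonName>
  printf '%s\n' "$1" | awk '{print $2}' | grep -qx "$2"
}

JPS_NODE21='1316 QuorumPeerMain
3110 Master
1577 DataNode
1977 DFSZKFailoverController
1788 JournalNode
2124 NodeManager'

# On a live node you would pass "$(jps)" instead of the sample text.
for d in QuorumPeerMain Master DataNode; do
  has_daemon "$JPS_NODE21" "$d" && echo "$d running"
done
```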
<h2>IV Verify Cluster HA</h2>
<h3 id="blogTitle18">1 Check Master Status in the Web UI</h3>
<p>node21 is in the ALIVE state and node22 is STANDBY; see the web UI at <a href="http://node21:8080/" target="_blank">http://node21:8080/</a></p>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201807/1385722-20180711234555585-270009170.png" alt="" /></p>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201807/1385722-20180711234655992-794943681.png" alt="" /></p>
<p><strong>Worker node UI: </strong><a href="http://node22:8081/" target="_blank">http://node22:8081/</a></p>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201807/1385722-20180711235335894-459668537.png" alt="" /></p>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201807/1385722-20180711235033541-268583604.png" alt="" /></p>
<h3 id="blogTitle19">2 Verify Failover</h3>
<p>Kill the Master process on node21 by hand. node21:8080 becomes unreachable, node22:8080 shows the state below, and the Master role switches over automatically.</p>
<p>&nbsp;<img src="https://images2018.cnblogs.com/blog/1385722/201807/1385722-20180711235751855-1149762063.png" alt="" /></p>
<h3>3 HA Caveats&nbsp;</h3>
<ul>
<li>Applications cannot be submitted while a master failover is in progress.</li>
<li>A failover does not affect applications already running in the cluster, because Spark uses coarse-grained resource scheduling.</li>
</ul>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201807/1385722-20180712171158891-1943649960.png" alt="" /></p>
<h2>V Job Submission Modes</h2>
<h2>1&nbsp;<strong>Standalone</strong> Mode</h2>
<h3><strong>1.1&nbsp;</strong><strong>Standalone-client</strong></h3>
<p><strong>(1) Submit command</strong></p>
<div class="cnblogs_code">
<pre>[admin@node21 spark-2.3.1]$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
 --master spark://node21:7077 \
 --executor-memory 500m \
 --total-executor-cores 1 \
 examples/jars/spark-examples_2.11-2.3.1.jar 10</pre>
</div>
<p>Or, equivalently</p>
<div class="cnblogs_code">
<pre>[admin@node21 spark-2.3.1]$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
 --master spark://node21:7077 \
 --deploy-mode client \
 --executor-memory 500m \
 --total-executor-cores 1 \
 examples/jars/spark-examples_2.11-2.3.1.jar 10</pre>
</div>
<p>(2) How submission works</p>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201807/1385722-20180712152829637-1305556320.png" alt="" /></p>
<p>&nbsp;(3) Execution flow</p>
<ol>
<li>In client mode, the Driver process starts on the client that submits the job.</li>
<li>The Driver asks the Master for the resources the application needs.</li>
<li>Once resources are granted, the Driver sends tasks to the workers for execution.</li>
<li>The workers return task results to the Driver.</li>
</ol>
<p>(4) Summary</p>
<p class="15">Client mode suits testing and debugging. The Driver starts on the client, i.e. the node that submits the application, so task execution can be watched from there. It is unsuitable for production: if you submit 100 applications to the cluster, the Driver starts on the client every time, and the client's network interface absorbs the traffic spike for all 100.</p>
<h3><strong>1.2&nbsp;</strong><strong>Standalone-cluster</strong></h3>
<p><strong>(1) Submit command</strong></p>
<div class="cnblogs_code">
<pre>[admin@node21 spark-2.3.1]$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
 --master spark://node21:7077 \
 --deploy-mode cluster \
 examples/jars/spark-examples_2.11-2.3.1.jar 10</pre>
</div>
<p><strong>(2) How submission works</strong></p>
<p><strong><img src="https://images2018.cnblogs.com/blog/1385722/201807/1385722-20180712173319721-419433849.png" alt="" /></strong></p>
<p><strong>(3) Execution flow</strong></p>
<ol>
<li>In cluster mode, the client asks the Master to start the Driver.</li>
<li>The Master accepts and starts the Driver process on a randomly chosen cluster node.</li>
<li>Once started, the Driver requests resources for the application.</li>
<li>The Driver sends tasks to the worker nodes for execution.</li>
<li>The workers report execution status and results back to the Driver.</li>
</ol>
<p><strong>(4) Summary</strong></p>
<p class="15">The Driver runs on some Worker in the cluster, so task progress cannot be watched from the client. If you submit 100 applications, each Driver starts on a random Worker, so the network traffic spikes are spread across the cluster rather than hitting one machine.</p>
<h2>2&nbsp;YARN Mode</h2>
<h3><strong>2.1&nbsp;</strong><strong>yarn-client</strong></h3>
<p><strong>(1) Submit command</strong></p>
<p>Launch a Spark application in <code>client</code> mode:</p>
<div class="cnblogs_code">
<pre>$ ./bin/spark-submit --class path.to.your.Class --master yarn --deploy-mode client [options] &lt;app jar&gt; [app options]</pre>
</div>
<p><strong>For example</strong></p>
<div class="cnblogs_code">
<pre>[admin@node21 spark-2.3.1]$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
 --master yarn  \
 --deploy-mode client  \
 examples/jars/spark-examples_2.11-2.3.1.jar 10</pre>
</div>
<p><strong>(2) How submission works</strong></p>
<p><strong><img src="https://images2018.cnblogs.com/blog/1385722/201807/1385722-20180712184752126-808665477.png" alt="" /></strong></p>
<p><strong>(3) Execution flow</strong></p>
<ol>
<li>The client submits an application and starts a Driver process locally.</li>
<li>The application then asks the RS (ResourceManager) for resources to start the AM (ApplicationMaster).</li>
<li>The RS picks a random NM (NodeManager) and starts the AM there. The NM plays the role of a Worker in Standalone mode.</li>
<li>Once started, the AM asks the RS for a batch of containers in which to launch Executors.</li>
<li>The RS returns a set of NMs to the AM for launching Executors.</li>
<li>The AM tells those NMs to start the Executors.</li>
<li>The Executors register back with the Driver; the Driver sends them tasks, and status and results flow back to the Driver.</li>
</ol>
<p><strong>(4) Summary</strong></p>
<p class="15">Yarn-client mode likewise suits testing: the Driver runs locally and communicates heavily with the Executors in the YARN cluster, which drives up traffic on the client machine's network interface.</p>
<p class="15">&nbsp;<strong>Role of the ApplicationMaster:</strong></p>
<ol>
<li>Request resources for the current application.</li>
<li>Tell the NodeManagers to start Executors.</li>
</ol>
<p class="15">Note: in this mode the ApplicationMaster launches Executors and requests resources, but does not schedule jobs.</p>
<h3><strong>2.2&nbsp;</strong><strong>yarn-cluster</strong></h3>
<p><strong>(1) Submit command</strong></p>
<p>Launch a Spark application in <code>cluster</code> mode:</p>
<div class="cnblogs_code">
<pre>$ ./bin/spark-submit --class path.to.your.Class --master yarn --deploy-mode cluster [options] &lt;app jar&gt; [app options]</pre>
</div>
<p>For example</p>
<div class="cnblogs_code">
<pre>[admin@node21 spark-2.3.1]$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
 --master yarn  \
 --deploy-mode cluster  \
 examples/jars/spark-examples_2.11-2.3.1.jar 10</pre>
</div>
<p><strong>(2) How submission works</strong></p>
<p>&nbsp;<img src="https://images2018.cnblogs.com/blog/1385722/201807/1385722-20180712184916333-350327335.png" alt="" /></p>
<p><strong>(3) Execution flow</strong></p>
<ol>
<li>The client submits an application, sending a request to the RS (ResourceManager) to start the AM (ApplicationMaster).</li>
<li>The RS starts the AM on a randomly chosen NM (NodeManager); the AM acts as the Driver.</li>
<li>Once started, the AM asks the RS for a batch of containers in which to launch Executors.</li>
<li>The RS returns a set of NM nodes to the AM.</li>
<li>The AM connects to those NMs and asks them to start the Executors.</li>
<li>The Executors register back with the Driver on the AM's node, and the Driver sends them tasks.</li>
</ol>
<p><strong>(4) Summary</strong></p>
<p class="15">Yarn-cluster mode is the main choice for production: the Driver runs on some NodeManager in the YARN cluster, and since that machine is random on each submission, no single machine's network interface sees a traffic spike. The drawback is that job logs are not visible at submission time and must be fetched through YARN.</p>
<p class="15"><strong>Role of the ApplicationMaster:</strong></p>
<ol>
<li>Request resources for the current application.</li>
<li>Tell the NodeManagers to start Executors.</li>
<li>Schedule tasks.</li>
</ol>
<p class="15"><strong>To stop a cluster job: </strong><strong>yarn application -kill &lt;applicationId&gt;</strong></p>
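<p>The application ID for that kill command comes from <code>yarn application -list</code>. The sketch below pulls the IDs out of that listing's text format; the <code>app_ids</code> helper and the sample line are illustrative assumptions (real output has header lines and more columns).</p>

```shell
#!/bin/sh
# Sketch: extract application IDs from `yarn application -list`-style output
# so each can be passed to `yarn application -kill`. app_ids and the sample
# line are illustrative; real output has headers and extra columns.
app_ids() {
  awk '$1 ~ /^application_/ {print $1}'
}

SAMPLE='application_1531400000000_0001  SparkPi  SPARK  admin  default  RUNNING'
printf '%s\n' "$SAMPLE" | app_ids

# On a live cluster (assumption, verify before use):
#   yarn application -list | app_ids | xargs -n1 yarn application -kill
```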
<h2>VI Configure the History Server</h2>
<h3>1&nbsp;Temporary Configuration</h3>
<p>Applies only to the application submitted with it</p>
<div class="cnblogs_code">
<pre>./spark-shell --master spark://node21:7077 \
--name myapp1 \
--conf spark.eventLog.enabled=true \
--conf spark.eventLog.dir=hdfs://node21:8020/spark/test</pre>
</div>
<p class="15">After the program stops, its history can be viewed under the corresponding ApplicationID in the web UI's Completed Applications list.</p>
<h3>2&nbsp; Permanent Configuration</h3>
<p><strong>Configure the HistoryServer in spark-defaults.conf; it then applies to every submitted application</strong></p>
<p class="15">On the client node, append the following to ../spark-2.3.1/conf/spark-defaults.conf:</p>
<div class="cnblogs_code">
<pre># Enable event logging
spark.eventLog.enabled          true
# Directory where event logs are written
spark.eventLog.dir              hdfs://node21:8020/spark/test
# Directory the HistoryServer loads event logs from
spark.history.fs.logDirectory   hdfs://node21:8020/spark/test
# Optimization: compress the event logs
spark.eventLog.compress         true</pre>
</div>
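<p>If the setup is scripted, appending these settings blindly duplicates keys on re-runs. A minimal sketch of an idempotent append, assuming a hypothetical <code>set_conf</code> helper (the key/value pairs are the ones from the listing above):</p>

```shell
#!/bin/sh
# Sketch: append a setting to spark-defaults.conf only if the key is not
# already present, so re-running setup stays idempotent. set_conf is a
# hypothetical helper; the demo file stands in for spark-defaults.conf.
set_conf() {  # usage: set_conf <file> <key> <value>
  grep -q "^$2[[:space:]]" "$1" 2>/dev/null || printf '%-30s %s\n' "$2" "$3" >> "$1"
}

CONF=spark-defaults.conf.demo
set_conf "$CONF" spark.eventLog.enabled true
set_conf "$CONF" spark.eventLog.enabled true   # second call is a no-op
set_conf "$CONF" spark.eventLog.dir hdfs://node21:8020/spark/test
cat "$CONF"
```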
<p class="15">Start the HistoryServer:</p>
<div class="cnblogs_code">
<pre>./start-history-server.sh</pre>
</div>
<p class="15">Open the HistoryServer at node21:18080; from then on, every submitted application's run is recorded.</p>
<h2>VII Troubleshooting</h2>
<h3>1 Workers Fail to Start</h3>
<div class="cnblogs_code">
<pre>[admin@node21 spark-2.3.1]$ sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /opt/module/spark-2.3.1/logs/spark-admin-org.apache.spark.deploy.master.Master-1-node21.out
node23: starting org.apache.spark.deploy.worker.Worker, logging to /opt/module/spark-2.3.1/logs/spark-admin-org.apache.spark.deploy.worker.Worker-1-node23.out
node22: starting org.apache.spark.deploy.worker.Worker, logging to /opt/module/spark-2.3.1/logs/spark-admin-org.apache.spark.deploy.worker.Worker-1-node22.out
node23: <span style="color: #ff0000;">failed to launch: nice -n 0 /opt/module/spark-2.3.1/bin/spark-class org.apache.spark.deploy.worker.Worker --webui-port 8081 --port 7078</span> spark://node21:7077
node23: full log in /opt/module/spark-2.3.1/logs/spark-admin-org.apache.spark.deploy.worker.Worker-1-node23.out
node22: <span style="color: #ff0000;">failed to launch: nice -n 0 /opt/module/spark-2.3.1/bin/spark-class org.apache.spark.deploy.worker.Worker --webui-port 8081 --port 7078</span> spark://node21:7077
node22: full log in /opt/module/spark-2.3.1/logs/spark-admin-org.apache.spark.deploy.worker.Worker-1-node22.out</pre>
</div>
<p>conf/spark-env.sh previously contained the following settings</p>
<div class="cnblogs_code">
<pre># Port for each worker node (optional)
export SPARK_WORKER_PORT=7078
# Web UI port for each worker node (optional)
export SPARK_WORKER_WEBUI_PORT=8081</pre>
</div>
<p>This looked like a port conflict; after removing the two settings above, the workers restarted successfully.</p>
<h3>2&nbsp;Spark on YARN Fails to Start</h3>
<p><span style="color: #ff0000;"><strong>2.1&nbsp; Caused by: java.net.ConnectException: Connection refused</strong></span></p>
<div class="cnblogs_code">
<pre>[admin@node21 spark-2.3.1]$ spark-shell --master yarn --deploy-mode client</pre>
</div>
<p><strong>Cause: the memory allocation is too small, so YARN kills the process outright, which surfaces as RPC connection failures, ClosedChannelException, and similar errors.</strong></p>
<p><strong>Fix: </strong><strong>stop the YARN service, then add the following to yarn-site.xml</strong></p>
<div class="cnblogs_code">
<pre>&lt;!--是否将对容器强制实施虚拟内存限制 --&gt;
&lt;property&gt;
    &lt;name&gt;yarn.nodemanager.vmem-check-enabled&lt;/name&gt;
    &lt;value&gt;<span style="color: #0000ff;">false</span>&lt;/value&gt;
&lt;/property&gt;
&lt;!--设置容器的内存限制时虚拟内存与物理内存之间的比率 --&gt;
&lt;property&gt;
     &lt;name&gt;yarn.nodemanager.vmem-pmem-ratio&lt;/name&gt;
     &lt;value&gt;<span style="color: #800080;">4</span>&lt;/value&gt;
&lt;/property&gt;   </pre>
</div>
<p>Distribute the new yarn-site.xml to the matching directory on the other Hadoop nodes, then restart YARN.&nbsp;</p>
<p>Re-running the command above then starts Spark on YARN successfully.</p>
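<p>After distributing the file, a quick grep heuristic (not real XML parsing) can confirm the vmem setting actually landed on each node. The <code>vmem_check_disabled</code> helper and the sample file are illustrative assumptions.</p>

```shell
#!/bin/sh
# Sketch: grep heuristic (not real XML parsing) to confirm the vmem
# setting is present in yarn-site.xml. Helper and sample are illustrative.
vmem_check_disabled() {
  # Match the <name> line, then look for <value>false</value> right after it.
  grep -A1 'yarn.nodemanager.vmem-check-enabled' "$1" | grep -q '<value>false</value>'
}

cat > yarn-site.xml.demo <<'EOF'
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
EOF

vmem_check_disabled yarn-site.xml.demo && echo "vmem check disabled"
```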
<p><strong><span style="color: #ff0000;">2.2&nbsp;&nbsp;java.lang.ClassNotFoundException: org.apache.spark.examples.SparkPi</span></strong></p>
<div class="cnblogs_code">
<pre>[admin@node21 spark-2.3.1]$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
&gt;  --master yarn  \
&gt;  --deploy-mode client  \
&gt;  examples/jars/spark-examples_2.11-2.3.1.jar 10</pre>
</div>
<p>The error output is as follows:</p>
<div class="cnblogs_code">
<pre>2018-07-13 05:19:14 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
<span style="color: #ff0000;">java.lang.ClassNotFoundException: org.apache.spark.examples.SparkPi</span>
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.spark.util.Utils$.classForName(Utils.scala:238)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:851)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2018-07-13 05:19:15 INFO  ShutdownHookManager:54 - Shutdown hook called
2018-07-13 05:19:15 INFO  ShutdownHookManager:54 - Deleting directory /tmp/spark-d0c9c44a-40bc-4220-958c-c2f976361d64</pre>
</div>

</body>
</html>
