<!DOCTYPE html>
<html lang="en">
<head>
  
    <link type="text/css" rel="stylesheet" href="/bundles/blog-common.css?v=KOZafwuaDasEedEenI5aTy8aXH0epbm6VUJ0v3vsT_Q1"/>
<link id="MainCss" type="text/css" rel="stylesheet" href="/skins/ThinkInside/bundle-ThinkInside.css?v=RRjf6pEarGnbXZ86qxNycPfQivwSKWRa4heYLB15rVE1"/>
<link type="text/css" rel="stylesheet" href="/blog/customcss/428549.css?v=%2fam3bBTkW5NBWhBE%2fD0lcyJv5UM%3d"/>

</head>
<body>
<a name="top"></a>

<div id="page_begin_html"></div><script>load_page_begin_html();</script>

<div id="topics">
	<div class = "post">
		<h1 class = "postTitle">
			<a id="cb_post_title_url" class="postTitle2" href="https://www.cnblogs.com/frankdeng/p/9310191.html">HBase（二）CentOS7.5搭建HBase1.2.6HA集群</a>
		</h1>
		<div class="clear"></div>
		<div class="postBody">
<div id="cnblogs_post_body" class="blogpost-body"><h2>I. Prerequisites</h2>
<p>1. HBase relies on HDFS for its underlying data storage</p>
<p>2. HBase relies on MapReduce for data computation</p>
<p>3. HBase relies on ZooKeeper for service coordination</p>
<p>4. HBase is written in Java, so installation requires a JDK</p>
<h3>1. Choosing Versions</h3>
<p>See the official version notes: <a href="http://hbase.apache.org/1.2/book.html" target="_blank">http://hbase.apache.org/1.2/book.html</a></p>
<h4>JDK selection</h4>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201808/1385722-20180812230130645-1511539587.png" alt="" /></p>
<h4>Hadoop selection</h4>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201808/1385722-20180812230456664-2087138215.png" alt="" /></p>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201808/1385722-20180812230632768-1773190747.png" alt="" /></p>
<p>Here we use Hadoop 2.7.6, and the HBase version chosen is 1.2.6.</p>
<h3>2. Downloading the Package</h3>
<p>Official download site: <a href="http://archive.apache.org/dist/hbase/" target="_blank">http://archive.apache.org/dist/hbase/</a></p>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201808/1385722-20180812232304577-1162155353.png" alt="" /></p>
<h3>3. Fully Distributed Deployment</h3>
<p>By default, HBase runs in standalone mode. Standalone and pseudo-distributed modes exist only for small-scale testing.</p>
<p>For production, distributed mode is the right choice: multiple instances of the HBase daemons run across multiple servers in the cluster.</p>
<table style="background-color: #28d7b4;" border="1">
<tbody>
<tr>
<td style="text-align: center;">Node IP</td>
<td style="text-align: center;">Hostname</td>
<td>Master</td>
<td>BackupMaster</td>
<td>RegionServer</td>
<td>ZooKeeper</td>
<td>HDFS</td>
</tr>
<tr>
<td style="text-align: center;">192.168.100.21</td>
<td style="text-align: center;">node21</td>
<td style="text-align: center;">&radic;</td>
<td style="text-align: center;">&nbsp;</td>
<td style="text-align: center;">&radic;</td>
<td style="text-align: center;">&radic;</td>
<td style="text-align: center;">&radic;</td>
</tr>
<tr>
<td style="text-align: center;">192.168.100.22</td>
<td style="text-align: center;">node22</td>
<td style="text-align: center;">&nbsp;</td>
<td style="text-align: center;">&radic;</td>
<td style="text-align: center;">&radic;</td>
<td style="text-align: center;">&radic;</td>
<td style="text-align: center;">&radic;</td>
</tr>
<tr>
<td style="text-align: center;">192.168.100.23</td>
<td style="text-align: center;">node23</td>
<td style="text-align: center;">&nbsp;</td>
<td style="text-align: center;">&nbsp;</td>
<td style="text-align: center;">&radic;</td>
<td style="text-align: center;">&radic;</td>
<td style="text-align: center;">&radic;</td>
</tr>
</tbody>
</table>
<h4>For ZooKeeper cluster installation, see: <a id="post_title_link_9018177" href="https://www.cnblogs.com/frankdeng/p/9018177.html" target="_blank">Setting up a ZooKeeper 3.4.12 cluster on CentOS 7.5</a></h4>
<h4>For Hadoop cluster installation, see: <a id="post_title_link_9047698" href="https://www.cnblogs.com/frankdeng/p/9047698.html" target="_blank">Setting up a Hadoop 2.7.6 fully distributed cluster on CentOS 7.5</a></h4>
<h2>II. Installing the HBase Cluster</h2>
<p>The installation follows the official documentation: <a href="http://hbase.apache.org/1.2/book.html#standalone_dist" target="_blank">http://hbase.apache.org/1.2/book.html#standalone_dist</a></p>
<div class="Section4">
<h3>1. Upload and Extract</h3>
<p>Extract HBase into the target directory:</p>
<div class="cnblogs_code">
<pre>[admin@node21 software]$ tar zxvf hbase-1.2.6-bin.tar.gz -C /opt/module/</pre>
</div>
<h3>2. Edit the Configuration Files</h3>
<p><span style="color: #ff0000;">The configuration files live under /opt/module/hbase-1.2.6/conf</span></p>
<p>Changes to <strong>hbase-env.sh</strong>:</p>
<div class="cnblogs_code">
<pre>export JAVA_HOME=/opt/module/jdk1.8
export HBASE_MANAGES_ZK=false</pre>
</div>
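<p>The two exports can also be appended non-interactively. A minimal sketch, using a scratch directory under /tmp so the paths here are purely illustrative:</p>

```shell
# Illustration only: /tmp stands in for /opt/module/hbase-1.2.6/conf
CONF=/tmp/hbase-conf-demo
mkdir -p "$CONF"
touch "$CONF/hbase-env.sh"            # stand-in for the shipped file
cat >> "$CONF/hbase-env.sh" <<'EOF'
export JAVA_HOME=/opt/module/jdk1.8
export HBASE_MANAGES_ZK=false
EOF
# HBASE_MANAGES_ZK=false makes HBase use the external ZooKeeper ensemble
grep HBASE_MANAGES_ZK "$CONF/hbase-env.sh"
```

<p>Setting HBASE_MANAGES_ZK to false is what tells HBase not to start its own bundled ZooKeeper, since we run a separate ensemble on node21-23.</p>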
</div>
<div class="Section5">
<p>Changes to <strong>hbase-site.xml</strong>:</p>
<div class="cnblogs_code">
<pre>&lt;configuration&gt;
  &lt;property&gt;
    &lt;name&gt;hbase.rootdir&lt;/name&gt;
    &lt;value&gt;hdfs://mycluster/hbase&lt;/value&gt;
  &lt;/property&gt;
  &lt;property&gt;
    &lt;name&gt;hbase.cluster.distributed&lt;/name&gt;
    &lt;value&gt;true&lt;/value&gt;
  &lt;/property&gt;
  &lt;property&gt;
    &lt;name&gt;hbase.zookeeper.quorum&lt;/name&gt;
    &lt;value&gt;node21,node22,node23&lt;/value&gt;
  &lt;/property&gt;
  &lt;property&gt;
    &lt;name&gt;hbase.zookeeper.property.dataDir&lt;/name&gt;
    &lt;value&gt;/opt/module/zookeeper-3.4.12/Data&lt;/value&gt;
  &lt;/property&gt;
&lt;/configuration&gt;</pre>
</div>
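<p>A quick well-formedness check catches mistakes such as an unbalanced &lt;/property&gt; before HBase ever reads the file. A sketch that writes the same four properties to a scratch file and parses it (the /tmp path and the availability of python3 are assumptions of this sketch):</p>

```shell
# Write the four properties above to a scratch copy of hbase-site.xml
cat > /tmp/hbase-site-demo.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://mycluster/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node21,node22,node23</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/opt/module/zookeeper-3.4.12/Data</value>
  </property>
</configuration>
EOF
# ElementTree raises on malformed XML, so a clean exit means well-formed
python3 -c "import xml.etree.ElementTree as ET; ET.parse('/tmp/hbase-site-demo.xml')" \
  && echo "hbase-site.xml is well-formed"
```

<p>Run the same parse against the real /opt/module/hbase-1.2.6/conf/hbase-site.xml after editing it.</p>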
<h4>Changes to <strong>regionservers</strong>:</h4>
<div class="cnblogs_code">
<pre>node21
node22
node23</pre>
</div>
<h4 class="16">Create a <strong>backup-masters</strong> file in the <strong>conf</strong> directory and add the standby master's hostname:</h4>
<div class="cnblogs_code">
<pre>$ echo node22 &gt; conf/backup-masters</pre>
</div>
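<p>Both node-list files can be generated in one step. A sketch against a scratch conf directory (the /tmp path is illustrative; the real target is /opt/module/hbase-1.2.6/conf):</p>

```shell
# Scratch stand-in for /opt/module/hbase-1.2.6/conf
CONF=/tmp/hbase-conf-files-demo
mkdir -p "$CONF"
printf 'node21\nnode22\nnode23\n' > "$CONF/regionservers"  # one hostname per line
echo node22 > "$CONF/backup-masters"                       # standby master host
cat "$CONF/regionservers" "$CONF/backup-masters"
```

<p>Both files use the same format: plain hostnames, one per line, with no extra whitespace.</p>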
<h3>3. Symlink the Hadoop Configuration</h3>
<div class="cnblogs_code">
<pre>[admin@node21 ~]$ ln -s /opt/module/hadoop-2.7.6/etc/hadoop/hdfs-site.xml /opt/module/hbase-1.2.6/conf/</pre>
</div>
<h3>4. Replace the Hadoop Jars That HBase Depends On</h3>
<p>Because HBase depends on Hadoop, replace the jars under HBase's lib directory to avoid compatibility problems:</p>
</div>
<div class="Section6">
<p class="16">1) Delete the old jars:</p>
<div class="cnblogs_code">
<pre>[admin@node21 ~]$ rm -rf /opt/module/hbase-1.2.6/lib/hadoop-*
[admin@node21 ~]$ rm -rf /opt/module/hbase-1.2.6/lib/zookeeper-3.4.6.jar</pre>
</div>
<p class="16">2) Copy in the new jars; the jars involved are:</p>
<div class="cnblogs_code">
<pre>hadoop-annotations-2.7.6.jar  hadoop-mapreduce-client-app-2.7.6.jar     hadoop-mapreduce-client-hs-plugins-2.7.6.jar 
hadoop-auth-2.7.6.jar         hadoop-mapreduce-client-common-2.7.6.jar  hadoop-mapreduce-client-jobclient-2.7.6.jar  
hadoop-common-2.7.6.jar       hadoop-mapreduce-client-core-2.7.6.jar    hadoop-mapreduce-client-shuffle-2.7.6.jar     
hadoop-hdfs-2.7.6.jar         hadoop-mapreduce-client-hs-2.7.6.jar      hadoop-yarn-api-2.7.6.jar
hadoop-yarn-client-2.7.6.jar  hadoop-yarn-common-2.7.6.jar              hadoop-yarn-server-common-2.7.6.jar
zookeeper-3.4.12.jar</pre>
</div>
</div>
<div class="Section7">
<p>Note: the versions of these jars must match the Hadoop release you are actually running; adjust case by case. Locate each jar in your Hadoop installation, for example:</p>
<div class="cnblogs_code">
<pre>[admin@node21 ~]$ find /opt/module/hadoop-2.7.6/ -name hadoop-annotations*</pre>
</div>
<p>Then copy the jars you find into HBase's lib directory.</p>
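<p>The find-and-copy step can be scripted over the whole jar list. A sketch that exercises the loop against dummy jars in /tmp, so it can run anywhere (on the cluster the paths would be /opt/module/hadoop-2.7.6 and /opt/module/hbase-1.2.6/lib, and the list would cover all the jars named above):</p>

```shell
# Sandbox stand-ins for the Hadoop tree and the HBase lib dir (illustrative)
HADOOP_HOME=/tmp/hadoop-demo
HBASE_LIB=/tmp/hbase-lib-demo
mkdir -p "$HADOOP_HOME/share" "$HBASE_LIB"
touch "$HADOOP_HOME/share/hadoop-common-2.7.6.jar" \
      "$HADOOP_HOME/share/hadoop-hdfs-2.7.6.jar"      # pretend jars

# Copy every matching jar that find locates into HBase's lib directory
for jar in hadoop-common hadoop-hdfs; do
    find "$HADOOP_HOME" -name "${jar}-2.7.6.jar" -exec cp {} "$HBASE_LIB/" \;
done
ls "$HBASE_LIB"
```

<p>Using find rather than hard-coded paths matters here because Hadoop scatters these jars across several share/hadoop subdirectories.</p>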
<h3>5. Distribute the Installation to the Other Nodes</h3>
<div class="cnblogs_code">
<pre>[admin@node21 ~]$ scp -r /opt/module/hbase-1.2.6/ node22:/opt/module/
[admin@node21 ~]$ scp -r /opt/module/hbase-1.2.6/ node23:/opt/module/</pre>
</div>
</div>
<h3>6. Configure Environment Variables</h3>
<p>Do this on every server.</p>
<div class="cnblogs_code">
<pre>[admin@node21 ~]$ vi /etc/profile</pre>
</div>
<div class="cnblogs_code">
<pre>#HBase
export HBASE_HOME=/opt/module/hbase-1.2.6
export PATH=$PATH:$HBASE_HOME/bin</pre>
</div>
<p>Reload so the variables take effect immediately:</p>
<div class="cnblogs_code">
<pre>[admin@node21 ~]$ source /etc/profile</pre>
</div>
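<p>The same append-and-source pattern can be checked in isolation. A sketch against a scratch profile (the /tmp path is illustrative; the real file is /etc/profile):</p>

```shell
# Scratch profile standing in for /etc/profile (illustrative path)
PROFILE=/tmp/profile-demo
cat > "$PROFILE" <<'EOF'
#HBase
export HBASE_HOME=/opt/module/hbase-1.2.6
export PATH=$PATH:$HBASE_HOME/bin
EOF
. "$PROFILE"                    # `.` is the portable spelling of `source`
echo "$HBASE_HOME"
# Confirm PATH now contains HBase's bin directory
case ":$PATH:" in *":$HBASE_HOME/bin:"*) echo "PATH updated" ;; esac
```

<p>Once the real /etc/profile is sourced on every node, commands like start-hbase.sh and hbase shell resolve without full paths.</p>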
<h3>7. Synchronize Clocks</h3>
<p>Note: HBase is stricter about clock synchronization than HDFS. If the nodes' clocks drift apart, the region servers fail to start and throw a ClockOutOfSyncException. So always synchronize clocks before starting the cluster and keep the offset under 30 s; alternatively, raise the allowed skew in hbase-site.xml:</p>
<div class="cnblogs_code">
<pre>&lt;property&gt;
  &lt;name&gt;hbase.master.maxclockskew&lt;/name&gt;
  &lt;value&gt;180000&lt;/value&gt;
  &lt;description&gt;Time difference of regionserver from master&lt;/description&gt;
&lt;/property&gt;</pre>
</div>
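<p>As a toy illustration of the check the master performs: a region server whose reported time drifts past hbase.master.maxclockskew is rejected. The timestamps below are made up for the sketch:</p>

```shell
MAX_SKEW_MS=30000                     # HBase's default maxclockskew (30 s)
master_ms=$(( $(date +%s) * 1000 ))   # master's clock, in milliseconds
region_ms=$(( master_ms + 45000 ))    # pretend a region server is 45 s ahead
skew=$(( region_ms - master_ms ))
# ${skew#-} strips a leading minus sign, i.e. absolute value of the drift
if [ "${skew#-}" -gt "$MAX_SKEW_MS" ]; then
    echo "ClockOutOfSyncException: skew ${skew} ms exceeds ${MAX_SKEW_MS} ms"
fi
```

<p>The 180000 ms (3 minute) value in the property above loosens this limit; running NTP on all nodes is the better fix.</p>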
<h2 id="blogTitle4">III. Starting the HBase Cluster</h2>
<p>Follow the startup order strictly.</p>
<h3>1. Start the ZooKeeper Cluster</h3>
<p>Run the following command on every ZooKeeper node:</p>
<div class="cnblogs_code">
<pre>[admin@node21 ~]$ zkServer.sh start</pre>
</div>
<h3>2. Start the Hadoop Cluster</h3>
<p>Start the YARN cluster only if you need to run MapReduce jobs; otherwise it is not required.</p>
<div class="cnblogs_code">
<pre>[admin@node21 ~]$ start-dfs.sh
[admin@node22 ~]$ start-yarn.sh</pre>
</div>
<h3>3. Start the HBase Cluster</h3>
<p>Start HBase only once the ZooKeeper and HDFS clusters are up and healthy. The startup command is start-hbase.sh; whichever node you run it on becomes the active master.</p>
<p>Startup method <strong>1</strong>:</p>
<div class="cnblogs_code">
<pre>[admin@node21 ~]$ start-hbase.sh
starting master, logging to /opt/module/hbase-1.2.6/logs/hbase-admin-master-node21.out
node23: starting regionserver, logging to /opt/module/hbase-1.2.6/logs/hbase-admin-regionserver-node23.out
node21: starting regionserver, logging to /opt/module/hbase-1.2.6/logs/hbase-admin-regionserver-node21.out
node22: starting regionserver, logging to /opt/module/hbase-1.2.6/logs/hbase-admin-regionserver-node22.out
node22: starting master, logging to /opt/module/hbase-1.2.6/logs/hbase-admin-master-node22.out</pre>
</div>
<p>Startup method <strong>2</strong>:</p>
<div class="cnblogs_code">
<pre>$ hbase-daemon.sh start master
$ hbase-daemon.sh start regionserver</pre>
</div>
<p>From the startup log you can see:</p>
<p>(1) The master starts first, on the node where the command was run</p>
<p>(2) Region servers then start on node21, node22, and node23</p>
<p>(3) Finally, a standby master is started on the node configured in the backup-masters file</p>
<p>Note: if you are running JDK 8 or later, remove the &ldquo;HBASE_MASTER_OPTS&rdquo; and &ldquo;HBASE_REGIONSERVER_OPTS&rdquo; settings from hbase-env.sh.</p>
<h3>4. Stop the HBase Cluster</h3>
<div class="cnblogs_code">
<pre>[admin@node21 ~]$ stop-hbase.sh </pre>
</div>
<h2 id="blogTitle4">IV. Verifying the Startup</h2>
<h3>1. Check That All Processes Started</h3>
<p>The active and standby master nodes each run an HMaster process, and every worker node runs an HRegionServer process. Given the configuration above, each node should be running the processes shown below.</p>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201808/1385722-20180813212449518-1170807366.png" alt="" /></p>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201808/1385722-20180813212531487-1050534859.png" alt="" /></p>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201808/1385722-20180813212611730-54625887.png" alt="" /></p>
<h3>2. Check via the Web UI</h3>
<p>Web UI address: <a href="http://node21:16010/master-status" target="_blank">http://node21:16010/master-status</a></p>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201808/1385722-20180813213129852-348996122.png" alt="" /></p>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201808/1385722-20180813213311062-1680281176.png" alt="" /></p>
<h3>3. Verify High Availability</h3>
<p>Kill the HBase master process on node21 and watch whether the standby takes over:</p>
<div class="cnblogs_code">
<pre>[admin@node21 ~]$ kill -9 3414</pre>
</div>
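<p>The pid passed to kill came from jps. Extracting it can be scripted; a sketch that parses jps-style output (the sample text and pids below are hypothetical, on a live node you would pipe the real jps output into awk):</p>

```shell
# Sample jps output (hypothetical pids); on a real node: jps | awk '/HMaster$/ {print $1}'
jps_out='3650 HRegionServer
3414 HMaster
1405 QuorumPeerMain'
pid=$(printf '%s\n' "$jps_out" | awk '/HMaster$/ {print $1}')
echo "HMaster pid: $pid"     # this pid would then be passed to kill -9
```

<p>Anchoring the pattern with $ avoids accidentally matching other lines that merely contain the string.</p>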
<p>The node21 web UI is no longer reachable, and node22 has become the active master.</p>
<p><img src="https://images2018.cnblogs.com/blog/1385722/201808/1385722-20180813213657313-1619168751.png" alt="" /></p>
<h3>4. Manually Restart the Processes</h3>
<p>Restart the HMaster process. Once HMaster is back up on node21 it becomes the standby master, which you can confirm in the web UI.</p>
<div class="cnblogs_code">
<pre>[admin@node21 ~]$ jps
3650 HRegionServer
2677 NodeManager
2394 DFSZKFailoverController
4442 Jps
1852 DataNode
2156 JournalNode
1405 QuorumPeerMain
[admin@node21 ~]$ hbase-daemon.sh start master
starting master, logging to /opt/module/hbase-1.2.6/logs/hbase-admin-master-node21.out
[admin@node21 ~]$ jps
3650 HRegionServer
2677 NodeManager
4485 HMaster
4630 Jps
2394 DFSZKFailoverController
1852 DataNode
2156 JournalNode
1405 QuorumPeerMain</pre>
</div>
<p>Start the HRegionServer process:</p>
<div class="cnblogs_code">
<pre>$ hbase-daemon.sh start regionserver </pre>
</div></div><div id="MySignature"></div>

</body>
</html>
