<!DOCTYPE html>
<html lang="en">
<head>
   
    <link type="text/css" rel="stylesheet" href="/bundles/blog-common.css?v=KOZafwuaDasEedEenI5aTy8aXH0epbm6VUJ0v3vsT_Q1"/>
<link id="MainCss" type="text/css" rel="stylesheet" href="/skins/ThinkInside/bundle-ThinkInside.css?v=RRjf6pEarGnbXZ86qxNycPfQivwSKWRa4heYLB15rVE1"/>
<link type="text/css" rel="stylesheet" href="/blog/customcss/428549.css?v=%2fam3bBTkW5NBWhBE%2fD0lcyJv5UM%3d"/>

</head>
<body>
<a name="top"></a>

<div id="page_begin_html"></div><script>load_page_begin_html();</script>

<div id="topics">
	<div class = "post">
		<h1 class = "postTitle">
			<a id="cb_post_title_url" class="postTitle2" href="https://www.cnblogs.com/frankdeng/p/9310219.html">HBase (6): Integrating HBase with Hive, Data Backup, and Operating HBase with MapReduce</a>
		</h1>
		<div class="clear"></div>
		<div class="postBody">
			<div id="cnblogs_post_body" class="blogpost-body"><h2>I. Data Backup and Recovery</h2>
<div class="Section36">
<h3 class="16">1. Backup</h3>
<p>Stop the HBase service, then use the distcp command to run a MapReduce job that copies the data elsewhere: either another directory on the same cluster or a dedicated backup cluster.</p>
<p>In other words, move the data to some other location (which need not be on the same cluster):</p>
</div>
<div class="cnblogs_code">
<pre>$ bin/hadoop distcp \ hdfs://node21:8020/hbase \
hdfs://node21:8020/HbaseBackup/backup20180820</pre>
</div>
<div class="Section37">
<p>Heads-up: the YARN service must be running before you execute this command.</p>
<h3>2. Recovery</h3>
<p>Recovery is simply the backup in reverse: move the whole data set back.</p>
<div class="cnblogs_code">
<pre>$ bin/hadoop distcp \
hdfs://node21:8020/HbaseBackup/backup20180820 \ <br />hdfs://node21:8020/hbase</pre>
</div>
<h2>II. Node Management</h2>
<h3 class="16">1. Commissioning</h3>
<p>When a new RegionServer starts, it registers with the HMaster and begins serving data. A freshly added node holds no data at first; with the balancer enabled, regions are gradually moved onto the new RegionServer.</p>
<p>If you start and stop processes via ssh and the HBase scripts, also add the new node's hostname to the conf/regionservers file.</p>
<h3 class="16">2. Decommissioning</h3>
<p>As the name suggests, this removes a RegionServer from the running HBase cluster. The process breaks down into the following steps:</p>
<p class="16">1) Stop the load balancer</p>
<div class="cnblogs_code">
<pre>hbase&gt; balance_switch false</pre>
</div>
<p class="16">2)&nbsp;在退役节点上停止&nbsp;<strong>RegionServer</strong></p>
<div class="cnblogs_code">
<pre>$ bin/hbase-daemon.sh stop regionserver</pre>
</div>
<p class="16">3) Once stopped, the <strong>RegionServer</strong> closes all the <strong>region</strong>s it was serving</p>
<p class="16">4) The <strong>RegionServer</strong>'s node disappears from <strong>ZooKeeper</strong></p>
<p class="16">5) The <strong>Master</strong> detects that the <strong>RegionServer</strong> has gone offline</p>
<p class="16">6) The <strong>region</strong>s that the <strong>RegionServer</strong> served are reassigned</p>
<p>This shutdown method is rather traditional: it takes a while, and some regions are briefly unavailable.</p>
</div>
<div class="Section38">
<p>An alternative approach:</p>
<p class="16">1) Have the <strong>RegionServer</strong> gracefully unload the <strong>region</strong>s it manages</p>
<div class="cnblogs_code">
<pre>$ bin/graceful_stop.sh &lt;RegionServer-hostname&gt;</pre>
</div>
<p class="16" align="justify">2) Data is rebalanced automatically</p>
<p class="16" align="justify">3) Steps <strong>2&ndash;6</strong> above then proceed as before</p>
<h2 align="justify">III. Version Bounds</h2>
<h3 class="16" align="justify">1. The lower bound</h3>
<p align="justify">The default lower bound is 0, i.e. the feature is disabled. The minimum number of row versions to keep is used together with the time to live (TTL): depending on your needs you can keep 0 or more minimum versions. With 0, once the TTL expires only the most recent value written to the cell survives.</p>
<h3 class="16">2. The upper bound</h3>
<p align="justify">The upper bound used to default to 3, meaning a row keeps 3 versions of each value (distinguished by insertion timestamp). Do not set this too high; ordinary workloads rarely need more than 100. Once a cell already holds the maximum number of versions, the next insert overwrites the oldest one. (Current releases default to 1.)</p>
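<p>The interplay of the three knobs (VERSIONS as the upper bound, MIN_VERSIONS as the lower bound, TTL for expiry) can be sketched in plain Java. This is an illustrative simulation, not HBase code; the class, method name, and signature are made up for the example:</p>

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class VersionBounds {

    // Simulates which cell versions survive compaction: keep at most
    // maxVersions newest timestamps; expire anything older than ttlMs,
    // but never drop below minVersions surviving copies.
    public static List<Long> retain(List<Long> timestamps, int maxVersions,
                                    int minVersions, long ttlMs, long now) {
        List<Long> sorted = new ArrayList<>(timestamps);
        sorted.sort(Collections.reverseOrder());              // newest first
        List<Long> kept = new ArrayList<>();
        for (long ts : sorted) {
            if (kept.size() >= maxVersions) break;            // upper bound (VERSIONS)
            boolean expired = now - ts > ttlMs;
            if (expired && kept.size() >= minVersions) break; // TTL, floored by MIN_VERSIONS
            kept.add(ts);
        }
        return kept;
    }

    public static void main(String[] args) {
        // A TTL of 1000 ms at now=5000 would keep only ts=4500,
        // but MIN_VERSIONS=2 forces the next-newest version to survive too.
        System.out.println(retain(Arrays.asList(1000L, 2000L, 4500L), 3, 2, 1000L, 5000L));
    }
}
```

<p>With the lower bound at its default of 0, the TTL alone decides what is kept; raising it guarantees that some recent history survives expiry.</p>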
</div>
<h2>IV. Integrating HBase with Hive</h2>
<div class="Section28">
<h3 class="16">1.&nbsp;<strong>HBase</strong><strong>&nbsp;</strong>与&nbsp;<strong>Hive</strong><strong>&nbsp;</strong>的对比</h3>
<h4>1)&nbsp;<strong>Hive</strong></h4>
<p class="16">(1)&nbsp;数据仓库</p>
<p>Hive&nbsp;的本质其实就相当于将&nbsp;HDFS&nbsp;中已经存储的文件在&nbsp;Mysql&nbsp;中做了一个双射关系，以方便使用&nbsp;HQL 去管理查询。</p>
<p class="16">(2)&nbsp;用于数据分析、清洗</p>
<p>Hive 适用于离线的数据分析和清洗，延迟较高。</p>
<p class="16">(3)&nbsp;基于&nbsp;<strong>HDFS</strong>、<strong>MapReduce</strong></p>
<p>Hive&nbsp;存储的数据依旧在&nbsp;DataNode&nbsp;上，编写的&nbsp;HQL&nbsp;语句终将是转换为&nbsp;MapReduce 代码执行。</p>
<h4>2)&nbsp;<strong>HBase</strong></h4>
<p class="16">(1)&nbsp;数据库</p>
<p>是一种面向列存储的非关系型数据库。</p>
<p class="16">(2)&nbsp;用于存储结构化和非结构话的数据</p>
<p>适用于单表非关系型数据的存储，不适合做关联查询，类似&nbsp;JOIN 等操作。</p>
</div>
<div class="Section29">
<p class="16">(3)&nbsp;基于&nbsp;<strong>HDFS</strong></p>
<p>数据持久化存储的体现形式是&nbsp;Hfile，存放于&nbsp;DataNode&nbsp;中，被&nbsp;ResionServer&nbsp;以&nbsp;region 的形式进行管理。</p>
<p class="16">(4)&nbsp;延迟较低，接入在线业务使用</p>
<p>面对大量的企业数据，HBase 可以直线单表大量数据的存储，同时提供了高效的数据访问速度。</p>
<h3 class="16">2.&nbsp;<strong>HBase</strong><strong>&nbsp;</strong>与&nbsp;<strong>Hive</strong><strong>&nbsp;</strong>集成使用</h3>
<p>注意：HBase&nbsp;与&nbsp;Hive 的集成在版本中兼容问题。</p>
<p>环境准备</p>
<p>因为我们后续可能会在操作&nbsp;Hive&nbsp;的同时对&nbsp;HBase&nbsp;也会产生影响，所以&nbsp;Hive 需要持有操作HBase&nbsp;的&nbsp;Jar，那么接下来拷贝&nbsp;Hive&nbsp;所依赖的&nbsp;Jar 包（或者使用软连接的形式）。</p>
</div>
<div class="cnblogs_code">
<pre>$ export HBASE_HOME=/opt/modules/hbase-1.2.6
$ export HIVE_HOME=/opt/modules/hive-2.3.3
$ ln -s $HBASE_HOME/lib/hbase-common-1.2.6.jar             $HIVE_HOME/lib/hbase-common-1.2.6.jar
$ ln -s $HBASE_HOME/lib/hbase-server-1.2.6.jar             $HIVE_HOME/lib/hbase-server-1.2.6.jar
$ ln -s $HBASE_HOME/lib/hbase-client-1.2.6.jar             $HIVE_HOME/lib/hbase-client-1.2.6.jar
$ ln -s $HBASE_HOME/lib/hbase-protocol-1.2.6.jar           $HIVE_HOME/lib/hbase-protocol-1.2.6.jar
$ ln -s $HBASE_HOME/lib/hbase-it-1.2.6.jar                 $HIVE_HOME/lib/hbase-it-1.2.6.jar
$ ln -s $HBASE_HOME/lib/htrace-core-3.1.0-incubating.jar   $HIVE_HOME/lib/htrace-core-3.1.0-incubating.jar
$ ln -s $HBASE_HOME/lib/hbase-hadoop2-compat-1.2.6.jar     $HIVE_HOME/lib/hbase-hadoop2-compat-1.2.6.jar
$ ln -s $HBASE_HOME/lib/hbase-hadoop-compat-1.2.6.jar      $HIVE_HOME/lib/hbase-hadoop-compat-1.2.6.jar</pre>
</div>
<div class="Section30">
<p>Also set the <strong>zookeeper</strong> properties in <strong>hive-site.xml</strong>, as follows:</p>
<div class="cnblogs_code">
<pre>&lt;property&gt;
&lt;name&gt;hive.zookeeper.quorum&lt;/name&gt;
&lt;value&gt;node21,node22,node23&lt;/value&gt;
&lt;description&gt;The list of ZooKeeper servers to talk to. This is only needed for read/write locks.&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;hive.zookeeper.client.port&lt;/name&gt;
&lt;value&gt;2181&lt;/value&gt;
&lt;description&gt;The port of ZooKeeper servers to talk to. This is only needed for read/write locks.&lt;/description&gt;
&lt;/property&gt;</pre>
</div>
<h3 class="16">2.1. Case 1</h3>
<p>Goal: create a Hive table linked to an HBase table, such that inserting data into the Hive table also updates the HBase table. Step by step:</p>
<p class="16">(1) Create the table in <strong>Hive</strong>, linked to <strong>HBase</strong></p>
</div>
<div class="cnblogs_code">
<pre>CREATE TABLE hive_hbase_emp_table(
empno int,
ename string,
job string,
mgr int,
hiredate string,
sal double,
comm double,
deptno int)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" =
":key,info:ename,info:job,info:mgr,info:hiredate,info:sal,info:comm,info:deptno")
TBLPROPERTIES ("hbase.table.name" = "hbase_emp_table");</pre>
</div>
<div class="Section31">
<p>Heads-up: afterwards, check in both Hive and HBase; the corresponding table should have been created in each.</p>
<p class="16">(2) Create a staging table in <strong>Hive</strong> for <strong>load</strong>ing the file data</p>
<p>Heads-up: you cannot <strong>load</strong> data directly into the Hive table that is backed by HBase.</p>
<div class="cnblogs_code">
<pre>CREATE TABLE emp(
empno int,
ename string,
job string,
mgr int,
hiredate string,
sal double,
comm double,
deptno int)
row format delimited fields terminated by '\t';</pre>
</div>
<p class="16">(3)&nbsp;向&nbsp;<strong>Hive&nbsp;</strong>中间表中&nbsp;<strong>load</strong><strong>&nbsp;</strong>数据</p>
<div class="cnblogs_code">
<pre>hive&gt; load data local inpath '/opt/data/emp.txt' into table emp;</pre>
</div>
<p class="16">(4)&nbsp;通过&nbsp;<strong>insert</strong><strong>&nbsp;</strong>命令将中间表中的数据导入到&nbsp;<strong>Hive</strong><strong>&nbsp;</strong>关联&nbsp;<strong>HBase</strong><strong>&nbsp;</strong>的那张表中</p>
<div class="cnblogs_code">
<pre>hive&gt; insert into table hive_hbase_emp_table select * from emp;</pre>
</div>
<p class="16">(5)&nbsp;查看&nbsp;<strong>Hive</strong><strong>&nbsp;</strong>以及关联的&nbsp;<strong>HBase</strong><strong>&nbsp;</strong>表中是否已经成功的同步插入了数据</p>
<p>Hive：</p>
<div class="cnblogs_code">
<pre>hive&gt; select * from hive_hbase_emp_table;</pre>
</div>
<p>HBase：</p>
<div class="cnblogs_code">
<pre>hbase&gt; scan 'hbase_emp_table'</pre>
</div>
<h3 class="16">2.2. Case 2</h3>
</div>
<div class="Section32">
<p>Goal: a table hbase_emp_table already exists in HBase. Create an external table in Hive linked to hbase_emp_table, so the data in the HBase table can be analyzed through Hive.</p>
<p>Note: Case 2 follows directly from Case 1, so complete Case 1 before attempting this one. Step by step:</p>
<p class="16">(1) Create the external table in <strong>Hive</strong></p>
<div class="cnblogs_code">
<pre>CREATE EXTERNAL TABLE relevance_hbase_emp(
empno int,
ename string,
job string,
mgr int,
hiredate string,
sal double,
comm double,
deptno int)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" =
":key,info:ename,info:job,info:mgr,info:hiredate,info:sal,info:comm,info:deptno")
TBLPROPERTIES ("hbase.table.name" = "hbase_emp_table");</pre>
</div>
<p class="16">(2)&nbsp;关联后就可以使用&nbsp;<strong>Hive</strong><strong>&nbsp;</strong>函数进行一些分析操作了</p>
<div class="cnblogs_code">
<pre>hive (default)&gt; select * from relevance_hbase_emp;</pre>
</div>
</div>
<h2>V. Integrating HBase with Sqoop</h2>
<h3><strong>1.</strong> Concept</h3>
<div class="Section32">
<p>Sqoop supports additional import targets beyond HDFS and Hive. Sqoop can also import records into a table in HBase.</p>
<p>We have already seen how to use Sqoop to move data between a Hadoop cluster and a relational database; next we use Sqoop to transfer data between HBase and an RDBMS.</p>
</div>
<div class="Section33">
<p>Relevant parameters:</p>
<table style="height: 549px; width: 828px;" border="1" cellspacing="0">
<tbody>
<tr>
<td valign="top" width="284">
<p class="17">Parameter</p>
</td>
<td valign="top" width="284">
<p class="17">Description</p>
</td>
</tr>
<tr>
<td valign="top" width="284">
<p class="17">--column-family &lt;family&gt;</p>
</td>
<td valign="top" width="284">
<p class="17">Sets the target column family for the import.</p>
</td>
</tr>
<tr>
<td valign="top" width="284">
<p class="17">--hbase-create-table</p>
</td>
<td valign="top" width="284">
<p class="17">If specified, create missing HBase tables (i.e. there is no need to create the HBase table by hand beforehand).</p>
</td>
</tr>
<tr>
<td valign="top" width="284">
<p class="17">--hbase-row-key &lt;col&gt;</p>
</td>
<td valign="top" width="284">
<p class="17">Specifies which input column to use as the row key. If the input table has a composite key, &lt;col&gt; must be a comma-separated list of the composite key's columns.</p>
<p class="17">In other words: which MySQL column's values become the HBase rowkey; for a composite key, separate the columns with commas. (Note: pick a rowkey that avoids duplicates.)</p>
</td>
</tr>
<tr>
<td valign="top" width="284">
<p class="17">--hbase-table &lt;table-name&gt;</p>
</td>
<td valign="top" width="284">
<p class="17">Specifies the HBase table to import into, instead of HDFS.</p>
</td>
</tr>
<tr>
<td valign="top" width="284">
<p class="17">--hbase-bulkload</p>
</td>
<td valign="top" width="284">
<p class="17">Enables bulk loading.</p>
</td>
</tr>
</tbody>
</table>
<h3><strong>2.</strong> Case 1</h3>
<p>Goal: pull data from an RDBMS into HBase. Step by step:</p>
<p class="16">(1) Configure <strong>sqoop-env.sh</strong>, adding:</p>
<div class="cnblogs_code">
<pre>export HBASE_HOME=/opt/module/hbase-1.2.6</pre>
</div>
<p class="16">(2)&nbsp;在&nbsp;<strong>Mysql</strong><strong>&nbsp;</strong>中新建一个数据库&nbsp;<strong>db_library</strong>，一张表&nbsp;<strong>book</strong></p>
</div>
<div class="cnblogs_code">
<pre>CREATE DATABASE db_library;
CREATE TABLE db_library.book(
id int(4) PRIMARY KEY NOT NULL AUTO_INCREMENT,
name VARCHAR(255) NOT NULL,
price VARCHAR(255) NOT NULL);</pre>
</div>
<div class="Section34">
<p class="16">(3) Insert some rows into the table</p>
<div class="cnblogs_code">
<pre>INSERT INTO db_library.book (name, price) VALUES('Lie Sporting', '30');
INSERT INTO db_library.book (name, price) VALUES('Pride &amp; Prejudice', '70');
INSERT INTO db_library.book (name, price) VALUES('Fall of Giants', '50');</pre>
</div>
<p class="16">(4)&nbsp;执行&nbsp;<strong>Sqoop</strong><strong>&nbsp;</strong>导入数据的操作</p>
<div class="cnblogs_code">
<pre>$ bin/sqoop import \
--connect jdbc:mysql://node21:3306/db_library \
--username root \
--password 123456 \
--table book \
--columns "id,name,price" \
--column-family "info" \
--hbase-create-table \
--hbase-row-key "id" \
--hbase-table "hbase_book" \
--num-mappers 1 \
--split-by id</pre>
</div>
<p>Note: Sqoop 1.4.7 can only auto-create HBase tables for HBase versions before 1.0.1.</p>
<p>Workaround: create the HBase table by hand</p>
<div class="cnblogs_code">
<pre>hbase&gt; create 'hbase_book','info'</pre>
</div>
<p class="16">(5)&nbsp;在&nbsp;<strong>HBase</strong><strong>&nbsp;</strong>中&nbsp;<strong>scan</strong><strong>&nbsp;</strong>这张表得到如下内容</p>
</div>
<div class="Section35">
<div class="cnblogs_code">
<pre>hbase&gt; scan 'hbase_book'</pre>
</div>
<p>Exercise: try using a composite key as the rowkey when importing.</p>
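<p>For the exercise above, the crux of a composite rowkey is making it unique while keeping its sort order sensible. The sketch below is plain Java rather than Sqoop internals (Sqoop reportedly joins the columns listed in --hbase-row-key with an underscore by default); the class name and the padding width are made-up choices for illustration:</p>

```java
public class CompositeRowKey {

    // Hypothetical helper: fixed-width id + delimiter + name.
    // Zero-padding keeps lexicographic rowkey order equal to numeric
    // id order ("2" would otherwise sort after "10").
    public static String rowKey(int id, String name) {
        return String.format("%010d_%s", id, name);
    }

    public static void main(String[] args) {
        System.out.println(rowKey(3, "Fall of Giants"));  // 0000000003_Fall of Giants
        // Rowkeys now sort by numeric id:
        System.out.println(rowKey(3, "a").compareTo(rowKey(12, "b")) < 0);  // true
    }
}
```

<p>Because HBase silently overwrites a row that has the same rowkey, any composite scheme must guarantee uniqueness across the chosen columns.</p>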
</div>
<h2>VI. Operating HBase with MapReduce</h2>
<h3 class="Section11">1. <strong>MapReduce</strong></h3>
<div class="Section17">
<p>&nbsp; &nbsp; &nbsp; &nbsp; Through HBase's Java API we can run MapReduce jobs alongside HBase operations: for example, using MapReduce to import data from the local filesystem into an HBase table, or reading raw data out of HBase and analyzing it with MapReduce.</p>
<h3 class="16">1.1 The stock <strong>HBase-MapReduce</strong> jobs</h3>
<p class="16">1) Inspect the jars that <strong>HBase</strong>'s <strong>MapReduce</strong> jobs need</p>
<div class="cnblogs_code">
<pre>$ bin/hbase mapredcp</pre>
</div>
<p class="16">2)&nbsp;执行环境变量的导入</p>
<div class="cnblogs_code">
<pre>$ export HBASE_HOME=/opt/module/hbase-1.2.6
$ export HADOOP_HOME=/opt/module/hadoop-2.7.6
$ export HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase mapredcp`</pre>
</div>
<p class="16">3)&nbsp;运行官方的&nbsp;<strong>MapReduce</strong><strong>&nbsp;</strong>任务</p>
<p><strong>--&nbsp;</strong>案例一：统计&nbsp;<strong>Student&nbsp;</strong>表中有多少行数据</p>
<div class="cnblogs_code">
<pre>$ ~/module/hadoop-2.7.6/bin/yarn jar lib/hbase-server-1.2.6.jar rowcounter student</pre>
</div>
</div>
<div class="Section18">
<p><strong>--</strong> Case 2: import local data into <strong>HBase</strong> with <strong>MapReduce</strong></p>
<p class="16">(1) Create a <strong>tsv</strong>-format file locally: <strong>fruit.tsv</strong></p>
<table border="0" cellspacing="0">
<tbody>
<tr>
<td valign="top" width="49">
<p class="17" align="center">1001</p>
</td>
<td valign="top" width="69">
<p class="17">Apple</p>
</td>
<td valign="top" width="43">
<p class="17">Red</p>
</td>
</tr>
<tr>
<td valign="top" width="49">
<p class="17" align="center">1002</p>
</td>
<td valign="top" width="69">
<p class="17">Pear</p>
</td>
<td valign="top" width="43">
<p class="17">Yellow</p>
</td>
</tr>
<tr>
<td valign="top" width="49">
<p class="17" align="center">1003</p>
</td>
<td valign="top" width="69">
<p class="17">Pineapple</p>
</td>
<td valign="top" width="43">
<p class="17">Yellow</p>
</td>
</tr>
</tbody>
</table>
<p>Heads-up: do not copy this data straight out of Word; the formatting will be wrong.</p>
<p class="16">(2) Create the <strong>HBase</strong> table</p>
<div class="cnblogs_code">
<pre>hbase(main):001:0&gt; create 'fruit','info'</pre>
</div>
<p class="16">(3)&nbsp;在&nbsp;<strong>HDFS</strong><strong>&nbsp;</strong>中创建&nbsp;<strong>input_fruit</strong><strong>&nbsp;</strong>文件夹并上传&nbsp;<strong>fruit.tsv</strong><strong>&nbsp;</strong>文件</p>
<div class="cnblogs_code">
<pre>$ ~/module/hadoop-2.7.6/bin/hdfs dfs -mkdir /input_fruit/
$ ~/module/hadoop-2.7.6/bin/hdfs dfs -put fruit.tsv /input_fruit/</pre>
</div>
<p class="16">(4)&nbsp;执行&nbsp;<strong>MapReduce</strong><strong>&nbsp;</strong>到&nbsp;<strong>HBase</strong><strong>&nbsp;</strong>的&nbsp;<strong>fruit</strong><strong>&nbsp;</strong>表中</p>
<div class="cnblogs_code">
<pre>$ ~/module/hadoop-2.7.6/bin/yarn jar lib/hbase-server-1.2.6.jar importtsv \
-Dimporttsv.columns=HBASE_ROW_KEY,info:name,info:color \
fruit hdfs://node21:8020/input_fruit</pre>
</div>
<p class="16">(5)&nbsp;使用&nbsp;<strong>scan</strong><strong>&nbsp;</strong>命令查看导入后的结果</p>
<div class="cnblogs_code">
<pre>hbase(main):001:0&gt; scan 'fruit'</pre>
</div>
<h3 class="16">1.2&nbsp;自定义&nbsp;<strong>HBase-MapReduce1</strong></h3>
<p>目标：将&nbsp;fruit&nbsp;表中的一部分数据，通过&nbsp;MR&nbsp;迁入到&nbsp;fruit_mr 表中。分步实现：</p>
<p class="16">1)&nbsp;构建&nbsp;<strong>ReadFruitMapper</strong><strong>&nbsp;</strong>类，用于读取&nbsp;<strong>fruit</strong><strong>&nbsp;</strong>表中的数据</p>
</div>
<div class="cnblogs_code">
<pre>package com.xyg.hbase_mr;

import java.io.IOException;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadFruitMapper extends TableMapper&lt;ImmutableBytesWritable, Put&gt; {

    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context)
            throws IOException, InterruptedException {
        // Extract fruit's name and color, i.e. read each row into a Put object.
        Put put = new Put(key.get());
        // Iterate over the row's cells
        for (Cell cell : value.rawCells()) {
            // Keep only the "info" column family
            if ("info".equals(Bytes.toString(CellUtil.cloneFamily(cell)))) {
                // The "name" column
                if ("name".equals(Bytes.toString(CellUtil.cloneQualifier(cell)))) {
                    // Add the cell to the Put object
                    put.add(cell);
                // The "color" column
                } else if ("color".equals(Bytes.toString(CellUtil.cloneQualifier(cell)))) {
                    // Add the cell to the Put object
                    put.add(cell);
                }
            }
        }
        // Write each row read from fruit into the context as the map output
        context.write(key, put);
    }
}</pre>
</div>
<div class="Section20">
<p class="16">2)&nbsp;构建&nbsp;<strong>WriteFruitMRReducer</strong><strong>&nbsp;</strong>类，用于将读取到的&nbsp;<strong>fruit</strong><strong>&nbsp;</strong>表中的数据写入到&nbsp;<strong>fruit_mr</strong><strong>&nbsp;</strong>表中</p>
<div class="cnblogs_code">
<pre>package com.z.hbase_mr;

import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.io.NullWritable;

public class WriteFruitMRReducer extends TableReducer&lt;ImmutableBytesWritable, Put, NullWritable&gt; {

    @Override
    protected void reduce(ImmutableBytesWritable key, Iterable&lt;Put&gt; values, Context context)
            throws IOException, InterruptedException {
        // Write every row that was read into the fruit_mr table
        for (Put put : values) {
            context.write(NullWritable.get(), put);
        }
    }
}</pre>
</div>
<h4>3) Write a <strong>Fruit2FruitMRRunner</strong> (<strong>extends Configured implements Tool</strong>) that assembles and runs the <strong>Job</strong></h4>
<div class="cnblogs_code">
<pre>// Assemble the Job
public int run(String[] args) throws Exception {
    // Get the Configuration
    Configuration conf = this.getConf();
    // Create the Job
    Job job = Job.getInstance(conf, this.getClass().getSimpleName());
    job.setJarByClass(Fruit2FruitMRRunner.class);

    // Configure the Job
    Scan scan = new Scan();
    scan.setCacheBlocks(false);
    scan.setCaching(500);

    // Set the mapper. Note: import from the mapreduce package, not the
    // mapred package; the latter is the old API.
    TableMapReduceUtil.initTableMapperJob(
            "fruit",                      // source table name
            scan,                         // scan controller
            ReadFruitMapper.class,        // mapper class
            ImmutableBytesWritable.class, // mapper output key type
            Put.class,                    // mapper output value type
            job                           // the job to configure
    );
    // Set the reducer
    TableMapReduceUtil.initTableReducerJob("fruit_mr", WriteFruitMRReducer.class, job);
    // Number of reduce tasks; at least 1
    job.setNumReduceTasks(1);

    boolean isSuccess = job.waitForCompletion(true);
    if (!isSuccess) {
        throw new IOException("Job running with error");
    }
    return isSuccess ? 0 : 1;
}</pre>
</div>
</div>
<div class="Section24">
<p class="16">4)&nbsp;主函数中调用运行该&nbsp;<strong>Job</strong><strong>&nbsp;</strong>任务</p>
<div class="cnblogs_code">
<pre>public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    int status = ToolRunner.run(conf, new Fruit2FruitMRRunner(), args);
    System.exit(status);
}</pre>
</div>
<p class="16">5)&nbsp;打包运行任务</p>
<div class="cnblogs_code">
<pre>$ ~/module/hadoop-2.7.6/bin/yarn jar ~/softwares/jars/hbase-0.0.1-SNAPSHOT.jar \
com.z.hbase.mr1.Fruit2FruitMRRunner</pre>
</div>
<p>Heads-up: if the table the data is to be imported into does not exist yet, create it before running the job.</p>
<p>Heads-up: Maven packaging command: -P local clean package, or -P dev clean package install (to bundle third-party jars you need the maven-shade-plugin).</p>
<h3 class="16">1.3 A custom <strong>HBase-MapReduce</strong> job (2)</h3>
<p>Goal: write data from HDFS into an HBase table. Step by step:</p>
<p class="16">1) Write a <strong>ReadFruitFromHDFSMapper</strong> class for reading the file data from <strong>HDFS</strong></p>
</div>
<div class="cnblogs_code">
<pre>package com.z.hbase.mr2;

import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ReadFruitFromHDFSMapper extends Mapper&lt;LongWritable, Text, ImmutableBytesWritable, Put&gt; {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // One line read from HDFS
        String lineValue = value.toString();
        // Split the line on \t into a String array
        String[] values = lineValue.split("\t");

        // Pick out the fields by position
        String rowKey = values[0];
        String name = values[1];
        String color = values[2];

        // Build the rowkey
        ImmutableBytesWritable rowKeyWritable = new ImmutableBytesWritable(Bytes.toBytes(rowKey));
        // Build the Put object; the arguments are: column family, column, value
        Put put = new Put(Bytes.toBytes(rowKey));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes(name));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("color"), Bytes.toBytes(color));

        context.write(rowKeyWritable, put);
    }
}</pre>
</div>
<p>2) Write the <strong>WriteFruitMRFromTxtReducer</strong> class</p>
<div class="cnblogs_code">
<pre>package com.z.hbase.mr2;

import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.io.NullWritable;

public class WriteFruitMRFromTxtReducer extends TableReducer&lt;ImmutableBytesWritable, Put, NullWritable&gt; {

    @Override
    protected void reduce(ImmutableBytesWritable key, Iterable&lt;Put&gt; values, Context context)
            throws IOException, InterruptedException {
        // Write every line that was read into the target table
        for (Put put : values) {
            context.write(NullWritable.get(), put);
        }
    }
}</pre>
</div>
<p>3) Create a <strong>Txt2FruitRunner</strong> that assembles the <strong>Job</strong></p>
<div class="cnblogs_code">
<pre>public int run(String[] args) throws Exception {
    // Get the Configuration
    Configuration conf = this.getConf();
    // Create the Job
    Job job = Job.getInstance(conf, this.getClass().getSimpleName());
    job.setJarByClass(Txt2FruitRunner.class);

    Path inPath = new Path("hdfs://linux01:8020/input_fruit/fruit.tsv");
    FileInputFormat.addInputPath(job, inPath);

    // Set the mapper
    job.setMapperClass(ReadFruitFromHDFSMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);

    // Set the reducer
    TableMapReduceUtil.initTableReducerJob("fruit_mr", WriteFruitMRFromTxtReducer.class, job);
    // Number of reduce tasks; at least 1
    job.setNumReduceTasks(1);

    boolean isSuccess = job.waitForCompletion(true);
    if (!isSuccess) {
        throw new IOException("Job running with error");
    }
    return isSuccess ? 0 : 1;
}</pre>
</div>
<div class="Section28">
<p class="16">4)&nbsp;调用执行&nbsp;<strong>Job</strong></p>
<div class="cnblogs_code">
<pre>public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    int status = ToolRunner.run(conf, new Txt2FruitRunner(), args);
    System.exit(status);
}</pre>
</div>
<p class="16">5)&nbsp;打包运行</p>
<div class="cnblogs_code">
<pre>$ ~/module/hadoop-2.7.6/bin/yarn jar ~/softwares/jars/hbase-0.0.1-SNAPSHOT.jar \
com.z.hbase.mr2.Txt2FruitRunner</pre>
</div>
<p>Heads-up: if the table the data is to be imported into does not exist yet, create it before running the job.</p>
<p>Heads-up: Maven packaging command: -P local clean package, or -P dev clean package install (to bundle third-party jars you need the maven-shade-plugin).</p>
</div>
<h3>2. MapReduce: reading data from HDFS and storing it in HBase</h3>
<p>Suppose HDFS holds a student.txt file in the following format:</p>
<div class="cnblogs_code">
<pre>95002,刘晨,女,19,IS
95017,王风娟,女,18,IS
95018,王一,女,19,IS
95013,冯伟,男,21,CS
95014,王小丽,女,19,CS
95019,邢小丽,女,19,IS
95020,赵钱,男,21,IS
95003,王敏,女,22,MA
95004,张立,男,19,IS
95012,孙花,女,20,CS
95010,孔小涛,男,19,CS
95005,刘刚,男,18,MA
95006,孙庆,男,23,CS
95007,易思玲,女,19,MA
95008,李娜,女,18,CS
95021,周二,男,17,MA
95022,郑明,男,20,MA
95001,李勇,男,20,CS
95011,包小柏,男,18,MA
95009,梦圆圆,女,18,MA
95015,王君,男,18,MA</pre>
</div>
<p>Write the data in this HDFS file into an HBase table.</p>
<p>The MapReduce implementation:</p>
<div class="cnblogs_code">
<pre>import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ReadHDFSDataToHbaseMR extends Configured implements Tool{

    public static void main(String[] args) throws Exception {
        
        int run = ToolRunner.run(new ReadHDFSDataToHbaseMR(), args);
        System.exit(run);
    }

    @Override
    public int run(String[] arg0) throws Exception {

        Configuration conf = HBaseConfiguration.create();
        conf.set("fs.defaultFS", "hdfs://myha01/");
        conf.set("hbase.zookeeper.quorum", "node21:2181,node22:2181,node23:2181");
        System.setProperty("HADOOP_USER_NAME", "admin");
        FileSystem fs = FileSystem.get(conf);
//        conf.addResource("config/core-site.xml");
//        conf.addResource("config/hdfs-site.xml");
        
        Job job = Job.getInstance(conf);
        
        job.setJarByClass(ReadHDFSDataToHbaseMR.class);
        
        job.setMapperClass(HDFSToHbaseMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(NullWritable.class);

        TableMapReduceUtil.initTableReducerJob("student", HDFSToHbaseReducer.class, job,null,null,null,null,false);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Put.class);
        
        Path inputPath = new Path("/student/input/");
        Path outputPath = new Path("/student/output/");
        
        if(fs.exists(outputPath)) {
            fs.delete(outputPath,true);
        }
        
        FileInputFormat.addInputPath(job, inputPath);
        FileOutputFormat.setOutputPath(job, outputPath);
        
        boolean isDone = job.waitForCompletion(true);
        
        return isDone ? 0 : 1;
    }
    
    
    public static class HDFSToHbaseMapper extends Mapper&lt;LongWritable, Text, Text, NullWritable&gt;{
        
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {    
            context.write(value, NullWritable.get());
        }
        
    }
    
    /**
     * 95015,王君,男,18,MA
     * */
    public static class HDFSToHbaseReducer extends TableReducer&lt;Text, NullWritable, NullWritable&gt;{
        
        @Override
        protected void reduce(Text key, Iterable&lt;NullWritable&gt; values,Context context)
                throws IOException, InterruptedException {
            
            String[] split = key.toString().split(",");
            
            Put put = new Put(split[0].getBytes());
            
            put.addColumn("info".getBytes(), "name".getBytes(), split[1].getBytes());
            put.addColumn("info".getBytes(), "sex".getBytes(), split[2].getBytes());
            put.addColumn("info".getBytes(), "age".getBytes(), split[3].getBytes());
            put.addColumn("info".getBytes(), "department".getBytes(), split[4].getBytes());
            
            context.write(NullWritable.get(), put);
        
        }
        
    }
    
}</pre>
</div>
<h3>3. MapReduce: reading from HBase, computing the average age, and storing the result in HDFS</h3>
<div class="cnblogs_code">
<pre>import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;



public class ReadHbaseDataToHDFS extends Configured implements Tool{

    public static void main(String[] args) throws Exception {
        
        int run = ToolRunner.run(new ReadHbaseDataToHDFS(), args);
        System.exit(run);
        
    }

    @Override
    public int run(String[] arg0) throws Exception {

        Configuration conf = HBaseConfiguration.create();
        conf.set("fs.defaultFS", "hdfs://myha01/");
        conf.set("hbase.zookeeper.quorum", "node21:2181,node22:2181,node23:2181");
        System.setProperty("HADOOP_USER_NAME", "admin");
        FileSystem fs = FileSystem.get(conf);
//        conf.addResource("config/core-site.xml");
//        conf.addResource("config/hdfs-site.xml");
        
        Job job = Job.getInstance(conf);
        
        job.setJarByClass(ReadHbaseDataToHDFS.class);
        
        
        // Fetch only the data the job actually needs: info:age
        Scan scan = new Scan();
        scan.addColumn("info".getBytes(), "age".getBytes());
        
        TableMapReduceUtil.initTableMapperJob(
                "student".getBytes(), // table name
                scan, // scan controlling which data is read
                HbaseToHDFSMapper.class, // mapper class
                Text.class,     // mapper output key type
                IntWritable.class, // mapper output value type
                job, // the job object
                false
                );
    

        job.setReducerClass(HbaseToHDFSReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        
        Path outputPath = new Path("/student/avg/");
        
        if(fs.exists(outputPath)) {
            fs.delete(outputPath,true);
        }
        
        FileOutputFormat.setOutputPath(job, outputPath);
        
        boolean isDone = job.waitForCompletion(true);
        
        return isDone ? 0 : 1;
    }
    
    public static class HbaseToHDFSMapper extends TableMapper&lt;Text, IntWritable&gt;{
        
        Text outKey = new Text("age");
        IntWritable outValue = new IntWritable();
        // key is the HBase row key
        // value holds all the cells of that row
        @Override
        protected void map(ImmutableBytesWritable key, Result value,Context context)
                throws IOException, InterruptedException {
            
            boolean isContainsColumn = value.containsColumn("info".getBytes(), "age".getBytes());
        
            if(isContainsColumn) {
                
                List&lt;Cell&gt; listCells = value.getColumnCells("info".getBytes(), "age".getBytes());
                System.out.println("listCells:\t"+listCells);
                Cell cell = listCells.get(0);
                System.out.println("cells:\t"+cell);
                
                byte[] cloneValue = CellUtil.cloneValue(cell);
                String ageValue = Bytes.toString(cloneValue);
                outValue.set(Integer.parseInt(ageValue));
                
                context.write(outKey,outValue);
                
            }
            
        }
        
    }
    
    public static class HbaseToHDFSReducer extends Reducer&lt;Text, IntWritable, Text, DoubleWritable&gt;{
        
        DoubleWritable outValue = new DoubleWritable();
        
        @Override
        protected void reduce(Text key, Iterable&lt;IntWritable&gt; values,Context context)
                throws IOException, InterruptedException {
            
            int count = 0;
            int sum = 0;
            for(IntWritable value : values) {
                count++;
                sum += value.get();
            }
            
            double avgAge = sum * 1.0 / count;
            outValue.set(avgAge);
            context.write(key, outValue);
        }
        
    }
    
}</pre>
</div></div><div id="MySignature"></div>

</body>
</html>
