<!doctype html>
<html lang="en"><head>
    <title>HDFS Rebalance Explained</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <link rel="stylesheet" href="../../../css/theme.css"/>
    
</head>
<body>
        <div id="content" class="mx-auto"><header class="container mt-sm-5 mt-4 mb-4 mt-xs-1">
    <div class="row">
        <div class="col-sm-4 col-12 text-sm-right text-center pt-sm-4">
            <a href="../../../" class="text-decoration-none">
                <img id="home-image" class="rounded-circle"
                    
                        src="../../../images/avatar.png"
                    
                />
            </a>
        </div>
        <div class="col-sm-8 col-12 text-sm-left text-center">
            <h2 class="m-0 mb-2 mt-4">
                <a href="../../../" class="text-decoration-none">
                    
                        KunYang
                    
                </a>
            </h2>
            <p class="text-muted mb-1">
                
                    Your Creative Subtitle
                
            </p>
            <ul id="nav-links" class="list-inline mb-2">
                
                
                    <li class="list-inline-item">
                        <a class="badge badge-white " href="../../../about/" title="About">About</a>
                    </li>
                
                    <li class="list-inline-item">
                        <a class="badge badge-white " href="../../../posts/" title="Posts">Posts</a>
                    </li>
                
                    <li class="list-inline-item">
                        <a class="badge badge-white " href="../../../categories/" title="Categories">Categories</a>
                    </li>
                
            </ul>
            <ul id="nav-social" class="list-inline">
                
                    <li class="list-inline-item mr-3">
                        <a href="http://www.kunyang.com" target="_blank">
                            <i class="fab fa-github fa-1x text-muted"></i>
                        </a>
                    </li>
                
                    <li class="list-inline-item mr-3">
                        <a href="" target="_blank">
                            <i class="fab fa-linkedin-in fa-1x text-muted"></i>
                        </a>
                    </li>
                
                    <li class="list-inline-item mr-3">
                        <a href="" target="_blank">
                            <i class="fab fa-twitter fa-1x text-muted"></i>
                        </a>
                    </li>
                
                    <li class="list-inline-item mr-3">
                        <a href="" target="_blank">
                            <i class="fas fa-at fa-1x text-muted"></i>
                        </a>
                    </li>
                
            </ul>
        </div>
    </div>
    <hr />
</header>
<div class="container">
    <div class="pl-sm-4 ml-sm-5">
        <blockquote>
<p>As business data volumes grow, more and more data needs to be stored. After adding DataNodes to scale out, the new nodes' disks sit at very low utilization while the existing nodes' disks are already heavily used. At that point a rebalance is needed to even out disk usage across the DataNodes.</p>
<p>There are other scenarios too. For example, after adding disks to an existing DataNode, the disks within that single node end up with very different utilization: some disks are nearly full while others still have plenty of free space. In that case a disk rebalance is required.</p>
<p>This article focuses on the inner workings of these two kinds of rebalance, and on some of the key parameters.</p>
</blockquote>
<h1 id="balancer">Balancer</h1>
<p>The Balancer works at the DataNode level: it keeps data evenly distributed across the nodes of the cluster. Data skew typically appears after new DataNodes are added to the cluster, which is when a balancer run is needed.</p>
<h2 id="usage">Usage</h2>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">hdfs balancer
      <span style="color:#f92672">[</span>-policy &lt;policy&gt;<span style="color:#f92672">]</span>
      <span style="color:#f92672">[</span>-threshold &lt;threshold&gt;<span style="color:#f92672">]</span>
      <span style="color:#f92672">[</span>-exclude <span style="color:#f92672">[</span>-f &lt;hosts-file&gt; | &lt;comma-separated list of hosts&gt;<span style="color:#f92672">]</span><span style="color:#f92672">]</span>
      <span style="color:#f92672">[</span>-include <span style="color:#f92672">[</span>-f &lt;hosts-file&gt; | &lt;comma-separated list of hosts&gt;<span style="color:#f92672">]</span><span style="color:#f92672">]</span>
      <span style="color:#f92672">[</span>-source <span style="color:#f92672">[</span>-f &lt;hosts-file&gt; | &lt;comma-separated list of hosts&gt;<span style="color:#f92672">]</span><span style="color:#f92672">]</span>
      <span style="color:#f92672">[</span>-blockpools &lt;comma-separated list of blockpool ids&gt;<span style="color:#f92672">]</span>
      <span style="color:#f92672">[</span>-idleiterations &lt;idleiterations&gt;<span style="color:#f92672">]</span>
      <span style="color:#f92672">[</span>-runDuringUpgrade<span style="color:#f92672">]</span>
</code></pre></div><h2 id="使用参数说明">Option reference</h2>
<h3 id="命令参数">Command options</h3>
<table>
<thead>
<tr>
<th><strong>COMMAND_OPTION</strong></th>
<th align="left"><strong>Description</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>-policy</td>
<td align="left">Selects the balancing policy: datanode (default) or blockpool. With datanode, the cluster is considered balanced when data is balanced across the DataNodes; with blockpool, it is considered balanced when each block pool on each DataNode is balanced. The blockpool policy only applies to a Federated HDFS service.</td>
</tr>
<tr>
<td>-threshold</td>
<td align="left">Sets the balance threshold. The default is 10, meaning that after balancing, disk utilization across DataNodes differs by at most 10%. Valid range is 0&ndash;100.</td>
</tr>
<tr>
<td>-exclude</td>
<td align="left">Empty by default. Specifies nodes to exclude from balancing. -f: read the host list from a file; hosts in the file should be comma-separated.</td>
</tr>
<tr>
<td>-include</td>
<td align="left">Empty by default. Restricts balancing to the listed nodes only. -f: same as for -exclude.</td>
</tr>
<tr>
<td>-source</td>
<td align="left">Empty by default. Uses only the specified DataNodes as source nodes. -f: same as for -exclude.</td>
</tr>
<tr>
<td>-blockpools</td>
<td align="left">Empty by default. Takes a comma-separated list of block pool IDs; the balancer only runs on the specified block pools.</td>
</tr>
<tr>
<td>-idleiterations</td>
<td align="left">Maximum number of consecutive idle iterations before exiting. Default: 5; -1 means unlimited.</td>
</tr>
<tr>
<td>-runDuringUpgrade</td>
<td align="left">Whether to run the balancer during an ongoing HDFS upgrade. This is usually not desired, since it does not affect used space on over-utilized machines.</td>
</tr>
</tbody>
</table>
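<p>As a quick illustration of <code>-source</code> (which the later example does not cover), the following sketch drains only two specific over-utilized DataNodes; the hostnames are hypothetical:</p>

```shell
# Hypothetical hosts file listing the over-utilized DataNodes to drain.
cat > /tmp/source_ip.txt <<'EOF'
dn01.example.com,dn02.example.com
EOF

# Only run the balancer if the hdfs CLI is available on this machine.
if command -v hdfs >/dev/null 2>&1; then
    hdfs balancer -threshold 10 -source -f /tmp/source_ip.txt
fi
```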
<h3 id="hdfs参数调优">HDFS tuning parameters</h3>
<p><a href="https://www.alibabacloud.com/help/zh/doc-detail/139879.htm">https://www.alibabacloud.com/help/zh/doc-detail/139879.htm</a></p>
<table>
<thead>
<tr>
<th><strong>参数</strong></th>
<th><strong>Description</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>dfs.datanode.balance.bandwidthPerSec</td>
<td>Maximum bandwidth each DataNode may use for balancing, in bytes per second. The default is 1048576 (1 MB/s). Adjust it to match your network environment and workload.</td>
</tr>
<tr>
<td>dfs.datanode.max.transfer.threads</td>
<td>Maximum number of threads used to transfer blocks between DataNodes, default 4096. If you run HBase, raising it to 16384 is recommended. In older versions this parameter was called dfs.datanode.max.xcievers.</td>
</tr>
<tr>
<td>dfs.balancer.block-move.timeout</td>
<td>Maximum time allowed for a single block move before it is considered failed; the default of 0 means no timeout.</td>
</tr>
<tr>
<td>dfs.balancer.max-no-move-interval</td>
<td>If a source DataNode moves no blocks for longer than this interval (default 60000 ms), it is dropped from the current iteration.</td>
</tr>
</tbody>
</table>
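<p>The bandwidth limit can also be raised at runtime, without editing hdfs-site.xml or restarting DataNodes, via <code>hdfs dfsadmin -setBalancerBandwidth</code>, which takes a value in bytes per second. A sketch (the 100 MB/s figure is only an example):</p>

```shell
# 100 MB/s expressed in bytes per second, the unit -setBalancerBandwidth expects.
BANDWIDTH=$((100 * 1024 * 1024))
echo "balancer bandwidth: ${BANDWIDTH} bytes/s"

# Applies to all live DataNodes; the setting lasts until the DataNodes restart.
if command -v hdfs >/dev/null 2>&1; then
    hdfs dfsadmin -setBalancerBandwidth "${BANDWIDTH}"
fi
```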
<h2 id="例子">Example</h2>
<p>Below is an example. It is not meant as a universal recipe, only as a demonstration of the various options; choose the values according to your own scenario.</p>
<p>What it does: balance until DataNode utilization differs by at most 10%, using the datanode policy, excluding the hosts in /tmp/exclude_ip.txt, including the hosts in /tmp/include_ip.txt, and exiting after at most 5 consecutive idle iterations.</p>
<pre><code>hdfs balancer 
-threshold 10 
-policy datanode
-exclude -f /tmp/exclude_ip.txt
-include -f /tmp/include_ip.txt
-idleiterations 5
</code></pre><p>The command above usually returns quickly, but that does not mean the balancer has stopped. After running it, the HDFS file <code>/system/balancer.id</code> records the host the balancer process is running on, and <code>jps</code> on that host will show a <code>Balancer</code> process. HDFS normally deletes this file automatically once the balance operation finishes.</p>
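<p>The checks just described can be scripted; this sketch assumes the <code>hdfs</code> and <code>jps</code> commands are on the PATH and degrades gracefully when they are not:</p>

```shell
# Where is the balancer running? The lease file in HDFS records the host.
if command -v hdfs >/dev/null 2>&1; then
    BALANCER_HOST=$(hdfs dfs -cat /system/balancer.id 2>/dev/null || echo "(no balancer lease found)")
else
    BALANCER_HOST="(hdfs CLI not found on this machine)"
fi
echo "balancer lease: ${BALANCER_HOST}"

# On the recorded host, the balancer shows up as a Balancer JVM in jps.
if command -v jps >/dev/null 2>&1; then
    jps | grep Balancer || echo "no Balancer process on this host"
fi
```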
<hr>
<h1 id="disk-balancer">Disk Balancer</h1>
<p>Consider a scenario where every existing DataNode has 5 disks and each node is expanded to 10 disks. Viewed at the cluster level, disk usage per DataNode looks fairly even, yet within a single node the utilization of individual disks differs enormously. Hadoop 3.x introduced the disk balancer feature precisely to solve this problem.</p>
<p>The DiskBalancer cares about whether data is evenly distributed across the disks of a single DataNode. Unlike the Balancer, whose scope is the different DataNodes of the whole cluster, the DiskBalancer's scope is the different disks within one DataNode.</p>
<h3 id="usage-1">Usage</h3>
<p>DiskBalancer commands must be run on the node being balanced. The <code>plan</code> command generates an execution plan that tells the DataNode how to move data between its disks. Use <code>execute</code> to submit the plan, <code>cancel</code> to cancel it, <code>query</code> to check the current execution status, and <code>report</code> to get a detailed disk-usage report.</p>
<pre><code>hdfs diskbalancer [plan | execute | query | cancel |report] [options]
</code></pre><h2 id="使用参数说明-1">Option reference</h2>
<h3 id="命令说明">Command description</h3>
<table>
<thead>
<tr>
<th align="left">COMMAND</th>
<th align="left">COMMAND_OPTION</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">plan <!-- raw HTML omitted --> [options]</td>
<td align="left">&ndash;bandwidth</td>
<td align="left">Maximum disk bandwidth to use, in MB/s. Because the DataNode is live and may be serving other jobs, the diskbalancer throttles how much data it moves per second; this option sets the cap. It usually does not need to be set: if unspecified, the diskbalancer uses the default bandwidth.</td>
</tr>
<tr>
<td align="left"></td>
<td align="left">&ndash;maxerror</td>
<td align="left">Maximum number of errors tolerated while copying; the default is usually fine.</td>
</tr>
<tr>
<td align="left"></td>
<td align="left">&ndash;out</td>
<td align="left">Output path for the plan file; the default is usually fine. (Note: in my own use, specifying this option seemed to have no effect, and I have not found out why.)</td>
</tr>
<tr>
<td align="left"></td>
<td align="left">&ndash;thresholdPercentage</td>
<td align="left">Data-skew threshold percentage; the default is usually fine.</td>
</tr>
<tr>
<td align="left"></td>
<td align="left">-v</td>
<td align="left">Verbose mode; prints the detailed plan to the console.</td>
</tr>
<tr>
<td align="left">execute <!-- raw HTML omitted --></td>
<td align="left">&ndash;skipDateCheck</td>
<td align="left">Skips the date check and forces the plan to start executing.</td>
</tr>
<tr>
<td align="left">query <!-- raw HTML omitted -->  [options]</td>
<td align="left">-v</td>
<td align="left">Verbose mode.</td>
</tr>
<tr>
<td align="left">cancel <!-- raw HTML omitted --></td>
<td align="left"></td>
<td align="left">Takes the plan file, e.g. <code>hdfs diskbalancer -cancel /system/diskbalancer/nodename.plan.json</code></td>
</tr>
<tr>
<td align="left">cancel <!-- raw HTML omitted --> -node <!-- raw HTML omitted --></td>
<td align="left"></td>
<td align="left">Takes the planID and hostname (the planID can be obtained via query), e.g. <code>hdfs diskbalancer -cancel planID -node nodename</code></td>
</tr>
<tr>
<td align="left">report [options]</td>
<td align="left">&ndash;node</td>
<td align="left">Shows details for a specific DataNode, identified by DataNode ID, IP, or hostname.</td>
</tr>
<tr>
<td align="left"></td>
<td align="left">&ndash;top</td>
<td align="left">Sorts nodes by data density in descending order and shows the top N.</td>
</tr>
</tbody>
</table>
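<p>Putting the commands above together, a typical single-node disk-balancing session looks like the sketch below; the hostname <code>dn01.example.com</code> is hypothetical, and the plan-file path follows the convention shown in the cancel example:</p>

```shell
NODE="dn01.example.com"   # hypothetical DataNode to balance

if command -v hdfs >/dev/null 2>&1; then
    # 1. Check how skewed the node's disks are.
    hdfs diskbalancer -report -node "${NODE}"

    # 2. Generate a plan describing the moves between disks.
    hdfs diskbalancer -plan "${NODE}"

    # 3. Submit the plan file printed by the plan step.
    hdfs diskbalancer -execute "/system/diskbalancer/${NODE}.plan.json"

    # 4. Poll progress until the plan is done.
    hdfs diskbalancer -query "${NODE}"
else
    echo "hdfs CLI not found; commands shown for illustration only"
fi
```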
<h3 id="hdfs参数调优-1">HDFS tuning parameters</h3>
<table>
<thead>
<tr>
<th><strong>参数</strong></th>
<th><strong>Description</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>dfs.disk.balancer.max.disk.throughputInMBperSec</td>
<td>Maximum disk bandwidth used during intra-node balancing, default 10 MB/s. It can be raised as long as read/write performance is not affected.</td>
</tr>
<tr>
<td>dfs.disk.balancer.plan.threshold.percent</td>
<td>Threshold for data balance across disks. DiskBalancer uses a metric called volume data density to measure utilization skew: the larger the value, the less balanced the disks. After balancing, the difference between each disk's volume data density and the average density must be below the threshold (as a percentage). The default is 10; we set it to 5.</td>
</tr>
<tr>
<td>dfs.disk.balancer.block.tolerance.percent</td>
<td>Tolerance, as a percentage, for how far the number of blocks moved in each pass may deviate from the ideal balanced state; we generally set this to 5 as well.</td>
</tr>
</tbody>
</table>
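<p>In hdfs-site.xml, the three settings above (with the values this article uses, which you should adapt to your cluster) would look like the fragment below; if the disk balancer has been disabled, <code>dfs.disk.balancer.enabled</code> must also be set to <code>true</code>:</p>

```xml
<property>
  <name>dfs.disk.balancer.max.disk.throughputInMBperSec</name>
  <value>10</value>
</property>
<property>
  <name>dfs.disk.balancer.plan.threshold.percent</name>
  <value>5</value>
</property>
<property>
  <name>dfs.disk.balancer.block.tolerance.percent</name>
  <value>5</value>
</property>
```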
<h1 id="hadoop-balance的步骤">Steps of a Hadoop balance</h1>
<p>A Hadoop Balancer run proceeds as follows:
1. Fetch DataNode disk-usage information from the NameNode.
2. Compute which data should move to which nodes.
3. Perform the moves, deleting the old block information once each move completes.
4. Repeat until the balance criterion is met.</p>
<h1 id="原理">How it works</h1>
<p><a href="https://blog.csdn.net/hixiaoxiaoniao/article/details/80771801">https://blog.csdn.net/hixiaoxiaoniao/article/details/80771801</a></p>
<p>The balancer process:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"># ps -ef | grep 6812
hdfs      6812  6789  0 Apr06 ?       00:03:49 /usr/local/jdk/bin/java -Dproc_balancer -Dhdp.version=3.1.4.0-315 -Djava.net.preferIPv4Stack=true -Dhdp.version=3.1.4.0-315 -server -Xmx1024m -Dyarn.log.dir=/var/log/hadoop/hdfs -Dyarn.log.file=hadoop.log -Dyarn.home.dir=/usr/hdp/3.1.4.0-315/hadoop-yarn -Dyarn.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/3.1.4.0-315/hadoop/lib/native/Linux-amd64-64:/usr/hdp/current/hadoop-client/lib/native -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/current/hadoop-client -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.hdfs.server.balancer.Balancer -threshold 10
root     26066 22642  0 09:16 pts/0    00:00:00 grep --color=auto 6812
</code></pre></div><p><img src="../HDFS-Rebalance-img/ambari-balance-param-1.png" alt="image-20200408093310553"></p>
<p><a href="https://cloud.tencent.com/developer/article/1189378">HDFS Balancer failures and how to fix them</a></p>
<p><a href="http://wenda.chinahadoop.cn/question/3793">Is the Hadoop Balancer running normally, and can its progress be checked?</a></p>
<p><a href="https://lihuimintu.github.io/2019/11/20/DataNode-Balancer-DiskBalancer/">HDFS data balancing</a></p>
<p><a href="https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#balancer">Official Hadoop balancer documentation</a></p>
<p><a href="http://support-it.huawei.com/docs/zh-cn/fusioninsight-all/maintenance-guide/zh-cn_topic_0076815357.html">Huawei's handling of intra-node resource imbalance</a></p>

    </div>

            </div>
        </div><footer class="text-center pb-1">
    <small class="text-muted">
        
            &copy; Copyright 2020, 坤坤
        
        <br>
        Powered by <a href="https://gohugo.io/" target="_blank">Hugo</a>
        and <a href="https://github.com/austingebauer/devise" target="_blank">Devise</a>
    </small>
</footer>
</body>
</html>
