<!DOCTYPE html>


<html lang="zh-CN">


<head>
  <meta charset="utf-8" />
    
  <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />
  <title>
    Elasticsearch 最佳实践.md |  
  </title>
  <meta name="generator" content="hexo-theme-ayer">
  
  <link rel="shortcut icon" href="/favicon.ico" />
  
  
<link rel="stylesheet" href="/dist/main.css">

  
<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/Shen-Yu/cdn/css/remixicon.min.css">

  
<link rel="stylesheet" href="/css/custom.css">

  
  
<script src="https://cdn.jsdelivr.net/npm/pace-js@1.0.2/pace.min.js"></script>

  
  

  

</head>


<body>
  <div id="app">
    
      
    <main class="content on">
      <section class="outer">
  <article
  id="post-es/Elasticsearch 最佳实践"
  class="article article-type-post"
  itemscope
  itemprop="blogPost"
  data-scroll-reveal
>
  <div class="article-inner">
    
    <header class="article-header">
       
<h1 class="article-title sea-center" style="border-left:0" itemprop="name">
  Elasticsearch 最佳实践.md
</h1>
 

    </header>
     
    <div class="article-meta">
      <a href="/2020/11/11/es/Elasticsearch%20%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5/" class="article-date">
  <time datetime="2020-11-10T16:00:00.000Z" itemprop="datePublished">2020-11-11</time>
</a> 
  <div class="article-category">
    <a class="article-category-link" href="/categories/es/">es</a>
  </div>
  
<div class="word_count">
    <span class="post-time">
        <span class="post-meta-item-icon">
            <i class="ri-quill-pen-line"></i>
            <span class="post-meta-item-text"> Word count:</span>
            <span class="post-count">8k</span>
        </span>
    </span>

    <span class="post-time">
        &nbsp; | &nbsp;
        <span class="post-meta-item-icon">
            <i class="ri-book-open-line"></i>
            <span class="post-meta-item-text"> Reading time ≈</span>
            <span class="post-count">35 min</span>
        </span>
    </span>
</div>
 
    </div>
      
    <div class="tocbot"></div>




  
    <div class="article-entry" itemprop="articleBody">
       
  <h1 id="Elasticsearch-最佳实践"><a href="#Elasticsearch-最佳实践" class="headerlink" title="Elasticsearch 最佳实践"></a>Elasticsearch Best Practices</h1><p>This post briefly summarizes health-status issues for Elasticsearch in an ELK stack. Note that Elasticsearch index status and cluster status convey different things.</p>
<h2 id="一-Elasticsearch-集群健康状态"><a href="#一-Elasticsearch-集群健康状态" class="headerlink" title="一.  Elasticsearch 集群健康状态"></a>1. Elasticsearch Cluster Health Status</h2><p>An Elasticsearch cluster can be as small as one node with one index, or it might have a hundred data nodes, three dedicated master nodes, and a couple dozen client nodes, together operating a thousand indices (and tens of thousands of shards). However large the cluster grows, you will want a quick way to assess its overall state, and that is exactly the role the Cluster Health API plays. Think of it as a ten-thousand-foot view of the cluster: it can reassure you that all is well, or warn you that something somewhere is wrong. Like every other Elasticsearch API, cluster-health returns a JSON response, which makes it easy to parse for automation and alerting systems. The response contains some key information about your cluster:</p>
<p>Check the Elasticsearch health status (the * marks the master node of the ES cluster):</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# curl -XGET &#39;http:&#x2F;&#x2F;10.0.8.47:9200&#x2F;_cat&#x2F;nodes?v&#39;  </span><br><span class="line">host      ip        heap.percent ram.percent load node.role master name                        </span><br><span class="line">10.0.8.47 10.0.8.47           53          85 0.16 d         *      elk-node03.kevin.cn  </span><br><span class="line">10.0.8.44 10.0.8.44           26          54 0.09 d         m      elk-node01.kevin.cn  </span><br><span class="line">10.0.8.45 10.0.8.45           71          81 0.02 d         m      elk-node02.kevin.cn  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Either of the two shell commands below can be used to monitor Elasticsearch health:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# curl 10.0.8.47:9200&#x2F;_cat&#x2F;health  </span><br><span class="line">1554792912 14:55:12 kevin-elk green 3 3 4478 2239 0 0 0 0 - 100.0%  </span><br><span class="line">   </span><br><span class="line">[root@elk-node03 ~]# curl -X GET &#39;http:&#x2F;&#x2F;10.0.8.47:9200&#x2F;_cluster&#x2F;health?pretty&#39;  </span><br><span class="line">&#123;  </span><br><span class="line">  &quot;cluster_name&quot; : &quot;kevin-elk&quot;,     #cluster name  </span><br><span class="line">  &quot;status&quot; : &quot;green&quot;,               #green means healthy; yellow or red means the cluster has a problem  </span><br><span class="line">  &quot;timed_out&quot; : false,               #whether the request timed out  </span><br><span class="line">  &quot;number_of_nodes&quot; : 3,             #number of nodes in the cluster  </span><br><span class="line">  &quot;number_of_data_nodes&quot; : 3,  </span><br><span class="line">  &quot;active_primary_shards&quot; : 2234,  </span><br><span class="line">  &quot;active_shards&quot; : 4468,  </span><br><span class="line">  &quot;relocating_shards&quot; : 0,  </span><br><span class="line">  &quot;initializing_shards&quot; : 0,  </span><br><span class="line">  &quot;unassigned_shards&quot; : 0,  </span><br><span class="line">  &quot;delayed_unassigned_shards&quot; : 0,  </span><br><span class="line">  &quot;number_of_pending_tasks&quot; : 0,  </span><br><span class="line">  &quot;number_of_in_flight_fetch&quot; : 0,  </span><br><span class="line">  &quot;task_max_waiting_in_queue_millis&quot; : 0,  </span><br><span class="line">  &quot;active_shards_percent_as_number&quot; : 100.0      #percentage of active shards; 0 means the cluster is unusable  </span><br><span class="line">&#125;  </span><br><span class="line"></span><br></pre></td></tr></table></figure>
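<p>As an illustration of how easily this JSON can be consumed by alerting scripts, here is a minimal shell sketch that extracts the status field and alerts when it is not green. The sample response is hard-coded so the sketch is self-contained; a real check would fetch it with curl from any node:</p>

```shell
# Minimal health check: pull "status" out of a _cluster/health response.
# The response is hard-coded here for illustration; in practice fetch it with:
#   health=$(curl -s 'http://10.0.8.47:9200/_cluster/health')
health='{"cluster_name":"kevin-elk","status":"yellow","timed_out":false}'

# Extract the value of the "status" field with sed.
status=$(printf '%s' "$health" | sed -n 's/.*"status"[ ]*:[ ]*"\([a-z]*\)".*/\1/p')

if [ "$status" != "green" ]; then
  echo "ALERT: cluster status is $status"
fi
```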

<p>Under normal circumstances, Elasticsearch cluster health is one of three states:</p>
<ul>
<li><p>green: the healthiest state. All primary shards and all replica shards are allocated, and the Elasticsearch cluster is 100% operational.</p>
</li>
<li><p>yellow: all primary shards are allocated, but at least one replica shard is missing (or there are no replicas). No data has been lost and search results are still complete, but high availability is weakened to some degree: if more shards disappear, you can start losing data. Think of yellow as a warning that deserves prompt investigation.</p>
</li>
<li><p>red: at least one primary shard (and all of its replicas) is missing. This means you are missing data: searches return only partial results, and index requests routed to the missing shard return an exception. Queries still return whatever data remains, so it is best to resolve this quickly.</p>
</li>
</ul>
<p>Troubleshooting checklist for an unhealthy Elasticsearch cluster:</p>
<ul>
<li><p>Make sure the es master node starts first, then start the data nodes;</p>
</li>
<li><p>Set SELinux to permissive (optional) and stop iptables;</p>
</li>
<li><p>Verify that the elasticsearch configuration file on each data node is correct;</p>
</li>
<li><p>Check whether the system-wide limit on open file descriptors is large enough;</p>
</li>
<li><p>Check whether the memory given to elasticsearch is sufficient (the "ES_HEAP_SIZE" heap setting and the "indices.fielddata.cache.size" cap);</p>
</li>
<li><p>If the number of elasticsearch indices has exploded, delete some of them (especially indices that are no longer needed);</p>
</li>
</ul>
<h2 id="二-Elasticsearch索引状态"><a href="#二-Elasticsearch索引状态" class="headerlink" title="二.  Elasticsearch索引状态"></a>2. Elasticsearch Index Status</h2><p>Check the status of the Elasticsearch indices:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# curl -XGET &#39;http:&#x2F;&#x2F;10.0.8.47:9200&#x2F;_cat&#x2F;indices?v&#39;  </span><br><span class="line">health status index                                              pri rep docs.count docs.deleted store.size pri.store.size  </span><br><span class="line">green  open   10.0.61.24-vfc-intf-ent-deposit.log-2019.03.15       5   1        159            0    324.9kb        162.4kb  </span><br><span class="line">green  open   10.0.61.24-vfc-intf-ent-login.log-2019.03.04         5   1       3247            0      3.4mb          1.6mb  </span><br><span class="line">green  open   10.0.61.24-vfc-intf-ent-login.log-2019.03.05         5   1       1663            0      2.6mb          1.3mb  </span><br><span class="line">green  open   10.0.61.24-vfc-intf-ent-deposit.log-2019.03.19       5   1         14            0     81.1kb         40.5kb  </span><br><span class="line">.................  </span><br><span class="line">.................  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Elasticsearch index health also has three states, yellow, green, and red, with the same meanings as for cluster health!</p>
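<p>Since index health uses the same three colors, a quick way to surface problem indices is to filter the first column of _cat/indices. A sketch follows; the index names are made up for illustration, and a real script would pipe in the output of curl -s 'http://10.0.8.47:9200/_cat/indices':</p>

```shell
# List indices whose health is not green. The sample lines mimic the
# _cat/indices column layout shown above; live input would come from curl.
indices='green  open   app-log-2019.03.15 5 1 159 0 324.9kb 162.4kb
yellow open   app-log-2019.03.16 5 1 201 0 410.2kb 410.2kb
red    open   app-log-2019.03.17 5 1   0 0      0b      0b'

# Column 1 is health, column 3 is the index name.
bad=$(echo "$indices" | awk '$1 != "green" {print $1, $3}')
echo "$bad"
```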
<h2 id="三-Elasticsearch-相关概念"><a href="#三-Elasticsearch-相关概念" class="headerlink" title="三.  Elasticsearch 相关概念"></a>3. Elasticsearch Core Concepts</h2><ul>
<li>Clusters and nodes</li>
</ul>
<p>A node is a running instance of Elasticsearch. A cluster is a group of nodes with the same cluster.name; they work together, share data, and provide failover and scaling, and when nodes join or leave, the cluster notices and rebalances the data. One node in the cluster is elected master; it manages cluster-wide changes such as creating or deleting indices and adding or removing nodes. A single node can also form a cluster on its own.</p>
<ul>
<li>Node communication</li>
</ul>
<p>You can talk to any node in the cluster, including the master node. Every node knows which node each document lives on, and can forward a request to the node that holds the data we need. The node we talk to is responsible for gathering the data returned by the other nodes and returning the final result to the client. All of this is managed transparently by Elasticsearch.</p>
<ul>
<li>Cluster basics</li>
</ul>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">1. Nodes can be added to or removed from a cluster at any time;  </span><br><span class="line">2. The number of primary shards is fixed once its index has been created, but the number of replica shards can be changed at any time;   </span><br><span class="line">3. Two copies of the same shard are never placed on the same node.  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<ul>
<li><p>Shards and replica shards. Shards are how Elasticsearch distributes data across the cluster. Think of a shard as a container for data: documents are stored in shards, and shards are allocated to the nodes in your cluster. As the cluster grows or shrinks, Elasticsearch automatically migrates shards between nodes to keep the cluster balanced. A shard is the smallest "worker unit": it holds just one slice of all the data in the index. Documents are stored and indexed in shards, but applications never talk to shards directly; they talk to the index instead. Elasticsearch shards are either primary shards or replica shards. A replica shard is simply a copy of a primary shard; it provides a redundant copy of the data to protect against hardware failure, and it also serves read-only requests such as search and retrieval. Both the number of primary shards and the number of replicas can be set in configuration, but the number of primary shards can only be defined at index creation time and cannot be changed afterward. Two copies of the same shard are never placed on the same node.</p>
</li>
<li><p>The shard routing formula</p>
</li>
</ul>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">shard &#x3D; hash(routing) % number_of_primary_shards  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>The routing value is an arbitrary string; it defaults to the document&#39;s _id but can also be customized. The routing string is passed through a hash function to produce a number, which is then divided by the number of primary shards to give a remainder. The remainder is always in the range 0 to number_of_primary_shards - 1, and it is the number of the shard where that document lives. This also explains why the number of primary shards can only be defined at index creation time and never changed: if it changed later, all previous routing values would become invalid and documents could never be found again. All the document APIs (get, index, delete, bulk, update, mget) accept a routing parameter that customizes the document-to-shard mapping. A custom routing value can ensure that related documents, for example a user&#39;s posts routed by the user&#39;s account, are all stored on the same shard.</p>
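<p>The routing formula can be illustrated with a toy shell sketch. It stands in cksum&#39;s CRC for the real hash (Elasticsearch actually uses murmur3 internally), so the shard numbers will not match a real cluster, but the mechanics are identical: a hash reduced modulo the primary-shard count, with the same routing value always landing on the same shard.</p>

```shell
# Toy model of: shard = hash(routing) % number_of_primary_shards
# cksum's CRC stands in for the real hash (Elasticsearch uses murmur3),
# so the shard numbers differ from a real cluster, but the mechanics match.
number_of_primary_shards=5

route() {
  printf '%s' "$1" | cksum | awk -v n="$number_of_primary_shards" '{ print $1 % n }'
}

shard=$(route "user-42")
echo "routing value 'user-42' maps to shard $shard"
```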
<ul>
<li>How primaries and replicas interact</li>
</ul>
<p>Create, index, and delete requests are write operations; they must complete successfully on the primary shard before they can be copied to the associated replica shards. The steps required to successfully create, index, or delete a document on the primary and replica shards are:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">1. The client sends a create, index, or delete request to Node 1.  </span><br><span class="line">2. The node uses the document&#39;s _id to determine that the document belongs to shard 0, and forwards the request to Node 3, where the primary of shard 0 lives.  </span><br><span class="line">3. Node 3 executes the request on the primary shard. If it succeeds, it forwards the request to the replica shards on Node 1 and Node 2. Once all replicas report success, Node 3 reports success to the coordinating node, which reports back to the client. By the time the client receives a success response, the change has been applied on the primary shard and on all replica shards. Your change is live.  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<ul>
<li>Checking shard status</li>
</ul>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# curl -X GET &#39;http:&#x2F;&#x2F;10.0.8.47:9200&#x2F;_cluster&#x2F;health?pretty&#39;  </span><br><span class="line">&#123;  </span><br><span class="line">  &quot;cluster_name&quot; : &quot;kevin-elk&quot;,  </span><br><span class="line">  &quot;status&quot; : &quot;green&quot;,  </span><br><span class="line">  &quot;timed_out&quot; : false,  </span><br><span class="line">  &quot;number_of_nodes&quot; : 3,  </span><br><span class="line">  &quot;number_of_data_nodes&quot; : 3,  </span><br><span class="line">  &quot;active_primary_shards&quot; : 2214,  </span><br><span class="line">  &quot;active_shards&quot; : 4428,  </span><br><span class="line">  &quot;relocating_shards&quot; : 0,  </span><br><span class="line">  &quot;initializing_shards&quot; : 0,  </span><br><span class="line">  &quot;unassigned_shards&quot; : 0,  </span><br><span class="line">  &quot;delayed_unassigned_shards&quot; : 0,  </span><br><span class="line">  &quot;number_of_pending_tasks&quot; : 0,  </span><br><span class="line">  &quot;number_of_in_flight_fetch&quot; : 0,  </span><br><span class="line">  &quot;task_max_waiting_in_queue_millis&quot; : 0,  </span><br><span class="line">  &quot;active_shards_percent_as_number&quot; : 100.0  </span><br><span 
class="line">&#125;  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Note: on a single-node Elasticsearch deployment, the cluster status may be yellow. With a single node, the default replica count is 1, and two copies of the same shard cannot live on one node, so the replica shards have nowhere to be assigned and the status shows yellow. You can solve this by adding another node to the Elasticsearch cluster. If you would rather not, you can instead delete the unassignable replica shards (not a good approach in general, but acceptable for testing or as a workaround). Here is how to delete the replica shards:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-server ~]# curl -X GET &#39;http:&#x2F;&#x2F;localhost:9200&#x2F;_cluster&#x2F;health?pretty&#39;  </span><br><span class="line">&#123;  </span><br><span class="line">  &quot;cluster_name&quot; : &quot;elasticsearch&quot;,  </span><br><span class="line">  &quot;status&quot; : &quot;yellow&quot;,  </span><br><span class="line">  &quot;timed_out&quot; : false,  </span><br><span class="line">  &quot;number_of_nodes&quot; : 1,  </span><br><span class="line">  &quot;number_of_data_nodes&quot; : 1,  </span><br><span class="line">  &quot;active_primary_shards&quot; : 931,  </span><br><span class="line">  &quot;active_shards&quot; : 931,  </span><br><span class="line">  &quot;relocating_shards&quot; : 0,  </span><br><span class="line">  &quot;initializing_shards&quot; : 0,  </span><br><span class="line">  &quot;unassigned_shards&quot; : 930,  </span><br><span class="line">  &quot;delayed_unassigned_shards&quot; : 0,  </span><br><span class="line">  &quot;number_of_pending_tasks&quot; : 0,  </span><br><span class="line">  &quot;number_of_in_flight_fetch&quot; : 0,  </span><br><span class="line">  &quot;task_max_waiting_in_queue_millis&quot; : 0,  </span><br><span 
class="line">  &quot;active_shards_percent_as_number&quot; : 50.02686727565825  </span><br><span class="line">&#125;  </span><br><span class="line">   </span><br><span class="line">[root@elk-server ~]# curl -XPUT &quot;http:&#x2F;&#x2F;localhost:9200&#x2F;_settings&quot; -d&#39; &#123;  &quot;number_of_replicas&quot; : 0 &#125; &#39;  </span><br><span class="line">&#123;&quot;acknowledged&quot;:true&#125;  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Querying the cluster health again now shows that the status has changed to green:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-server ~]# curl -X GET &#39;http:&#x2F;&#x2F;localhost:9200&#x2F;_cluster&#x2F;health?pretty&#39;  </span><br><span class="line">&#123;  </span><br><span class="line">  &quot;cluster_name&quot; : &quot;elasticsearch&quot;,  </span><br><span class="line">  &quot;status&quot; : &quot;green&quot;,  </span><br><span class="line">  &quot;timed_out&quot; : false,  </span><br><span class="line">  &quot;number_of_nodes&quot; : 1,  </span><br><span class="line">  &quot;number_of_data_nodes&quot; : 1,  </span><br><span class="line">  &quot;active_primary_shards&quot; : 931,  </span><br><span class="line">  &quot;active_shards&quot; : 931,  </span><br><span class="line">  &quot;relocating_shards&quot; : 0,  </span><br><span class="line">  &quot;initializing_shards&quot; : 0,  </span><br><span class="line">  &quot;unassigned_shards&quot; : 0,  </span><br><span class="line">  &quot;delayed_unassigned_shards&quot; : 0,  </span><br><span class="line">  &quot;number_of_pending_tasks&quot; : 0,  </span><br><span class="line">  &quot;number_of_in_flight_fetch&quot; : 0,  </span><br><span class="line">  &quot;task_max_waiting_in_queue_millis&quot; : 0,  </span><br><span class="line">  &quot;active_shards_percent_as_number&quot; : 100.0  </span><br><span class="line">&#125; 
</span><br></pre></td></tr></table></figure>

<ul>
<li>Unassigned shards on Elasticsearch indices</li>
</ul>
<p>For example, visiting <a target="_blank" rel="noopener" href="http://10.0.8.47:9200//_plugin/head/">http://10.0.8.47:9200/_plugin/head/</a> shows unassigned shards:</p>
<p><img src="https://mmbiz.qpic.cn/mmbiz_png/tuSaKc6SfPqt6Tqwia3Famia3TBHgia8CQt8pgwfJicTkfALKldCVqXXKpDt06tRh4c00klAOsjzOkicA3WhTYTyUUA/640?wx_fmt=png&tp=webp&wxfrom=5&wx_lazy=1&wx_co=1"></p>
<p>These unassigned entries are replica shards that could not be allocated. Running the following _settings command to drop the replicas resolves the problem:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# curl -XPUT &quot;http:&#x2F;&#x2F;10.0.8.47:9200&#x2F;_settings&quot; -d&#39; &#123;  &quot;number_of_replicas&quot; : 0 &#125; &#39;  </span><br><span class="line">&#123;&quot;acknowledged&quot;:true&#125;  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<h2 id="四-Elasticsearch集群健康状态为”red”现象的排查分析"><a href="#四-Elasticsearch集群健康状态为”red”现象的排查分析" class="headerlink" title="四.  Elasticsearch集群健康状态为”red”现象的排查分析"></a>4. Troubleshooting a "red" Cluster Health Status</h2><p>If the Elasticsearch Head plugin shows the cluster health value as red, at least one primary shard has failed to allocate, which makes some data and parts of some indices unavailable. The head plugin color-codes health: green is the healthiest state, meaning all primary and replica shards are allocated; yellow means all primary shards are allocated but some replicas are not; red means some primary shards are unavailable. (Queries will still return the data that remains, but it is best to fix this quickly.)</p>
<p>The Elasticsearch startup log will then show cluster-service timeouts:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">timeout notification from cluster service. timeout setting [1m], time since start [1m]  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>What can cause unassigned shards?</p>
<ul>
<li><p>INDEX_CREATED: unassigned as a result of the index-creation API.</p>
</li>
<li><p>CLUSTER_RECOVERED: unassigned as a result of a full cluster recovery.</p>
</li>
<li><p>INDEX_REOPENED: unassigned as a result of opening or closing an index.</p>
</li>
<li><p>DANGLING_INDEX_IMPORTED: unassigned as a result of importing a dangling index.</p>
</li>
<li><p>NEW_INDEX_RESTORED: unassigned as a result of restoring into a new index.</p>
</li>
<li><p>EXISTING_INDEX_RESTORED: unassigned as a result of restoring into a closed index.</p>
</li>
<li><p>REPLICA_ADDED: unassigned as a result of explicitly adding a replica shard.</p>
</li>
<li><p>ALLOCATION_FAILED: unassigned as a result of a failed allocation of the shard.</p>
</li>
<li><p>NODE_LEFT: unassigned as a result of the node hosting it leaving the cluster.</p>
</li>
<li><p>REINITIALIZED: unassigned when a shard moves from started back to initializing (for example, with shadow replicas).</p>
</li>
<li><p>REROUTE_CANCELLED: unassigned as a result of an explicit cancel-reroute command.</p>
</li>
<li><p>REALLOCATED_REPLICA: a better replica location was identified, so the existing replica allocation was cancelled and the shard became unassigned.</p>
</li>
</ul>
<p>How do you troubleshoot a red cluster status?</p>
<ul>
<li><p>Symptom: the cluster health value is red;</p>
</li>
<li><p>Logs: cluster service connection timeouts;</p>
</li>
<li><p>Likely cause: primary shards on some nodes are unassigned.</p>
</li>
</ul>
<p>The solutions that follow all revolve around getting the unassigned primary shards reallocated.</p>
<p>How do you resolve unassigned shards?</p>
<ul>
<li>Option 1: in the extreme case where the shard&#39;s data is no longer usable, delete the shard by deleting its index. Elasticsearch has no API for deleting an individual shard; short of removing a node whose data is no longer needed, deleting the index is the only way.</li>
</ul>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">Delete an index: &quot;curl -XDELETE  http:&#x2F;&#x2F;10.0.8.44:9200&#x2F;&lt;index-name&gt;&quot;  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<ul>
<li>Option 2: make sure the number of nodes in the cluster &gt;= the maximum replica count of any index + 1</li>
</ul>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">N &gt;&#x3D; R + 1  </span><br><span class="line">where:  </span><br><span class="line">N is the number of nodes in the cluster;  </span><br><span class="line">R is the largest number of replicas of any index in the cluster.  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Note: when nodes join or leave the cluster, the master node automatically reallocates shards to ensure that multiple copies of a shard are never assigned to the same node. In other words, the master never places a primary shard on the same node as one of its replicas, and never places two replicas of the same shard on one node. If there are not enough nodes to allocate the shards accordingly, the shards remain unassigned.</p>
<p>If the Elasticsearch cluster has a single node, then N = 1, so the formula only holds when R = 0. The problem therefore becomes one of:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">1) Add nodes, i.e. increase N;  </span><br><span class="line">2) Remove replicas, i.e. set R to 0.  </span><br><span class="line">  </span><br><span class="line">#R can be set to 0 from the command line:  </span><br><span class="line">[root@elk-node03 ~]# curl -XPUT &quot;http:&#x2F;&#x2F;10.0.8.47:9200&#x2F;_settings&quot; -d&#39; &#123;  &quot;number_of_replicas&quot; : 0 &#125; &#39;  </span><br><span class="line">&#123;&quot;acknowledged&quot;:true&#125;  </span><br><span class="line"></span><br></pre></td></tr></table></figure>
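<p>The N &gt;= R + 1 rule is easy to sanity-check in a script before deciding between adding nodes and dropping replicas. A sketch for the single-node case; the node and replica counts are illustrative, and a real check would read them from _cluster/health and the index settings:</p>

```shell
# Check the N >= R + 1 rule. Values are illustrative: a single-node
# cluster with the default of one replica per shard.
nodes=1      # N: nodes in the cluster
replicas=1   # R: largest number_of_replicas across all indices

if [ "$nodes" -ge $((replicas + 1)) ]; then
  echo "all replicas can be allocated"
else
  echo "unassignable replicas: add nodes or set number_of_replicas to $((nodes - 1))"
fi
```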

<ul>
<li>Option 3: reallocate the shards with the reroute API</li>
</ul>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">If option 2 does not resolve the problem, consider reallocating the shards manually. Possible causes:  </span><br><span class="line">1) A node hit a problem while restarting. Normally, when a node reconnects to the cluster it forwards information about its on-disk shards to the master node, which then transitions those shards from &quot;unassigned&quot; to &quot;assigned&#x2F;started&quot;.  </span><br><span class="line">2) When that process fails for some reason (for example, the node&#39;s storage is damaged), the shards may remain unassigned.  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>In that case you must decide how to proceed: try to get the original node to recover and rejoin the cluster (and do not force-allocate the primary shards), or force-allocate the shards with the Reroute API and re-index the missing data from the original source or from a backup. If you decide to allocate an unassigned primary shard, make sure to add the "allow_primary": "true" flag to the request.</p>
<p>A script for Elasticsearch 5.X:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line">#!&#x2F;bin&#x2F;bash  </span><br><span class="line">NODE&#x3D;&quot;YOUR NODE NAME&quot;  </span><br><span class="line">IFS&#x3D;$&#39;\n&#39;  </span><br><span class="line">for line in $(curl -s &#39;10.0.8.47:9200&#x2F;_cat&#x2F;shards&#39; | fgrep UNASSIGNED); do  </span><br><span class="line">  INDEX&#x3D;$(echo &quot;$line&quot; | awk &#39;&#123;print $1&#125;&#39;)  </span><br><span class="line">  SHARD&#x3D;$(echo &quot;$line&quot; | awk &#39;&#123;print $2&#125;&#39;)  </span><br><span class="line">   </span><br><span class="line">  curl -XPOST &#39;10.0.8.47:9200&#x2F;_cluster&#x2F;reroute&#39; -d &#39;&#123;  </span><br><span class="line">     &quot;commands&quot;: [  </span><br><span class="line">        &#123;  </span><br><span class="line">            &quot;allocate_replica&quot;: &#123;  </span><br><span class="line">                &quot;index&quot;: &quot;&#39;$INDEX&#39;&quot;,  </span><br><span class="line">                &quot;shard&quot;: &#39;$SHARD&#39;,  </span><br><span class="line">                &quot;node&quot;: &quot;&#39;$NODE&#39;&quot;,  </span><br><span class="line">                &quot;allow_primary&quot;: true  </span><br><span class="line">            &#125;  </span><br><span class="line">        &#125;  </span><br><span class="line">    ]  </span><br><span class="line">  &#125;&#39;  </span><br><span class="line">done  </span><br><span class="line">#For Elasticsearch 2.X and earlier, change allocate_replica above to allocate; everything else stays the same.  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<h2 id="五-案例-ELK中ElasticSearch集群状态异常问题"><a href="#五-案例-ELK中ElasticSearch集群状态异常问题" class="headerlink" title="五.  案例: ELK中ElasticSearch集群状态异常问题"></a>5. Case Study: Abnormal ElasticSearch Cluster Status in ELK</h2><p>After an ELK centralized log-analysis system had been running in production for a while, Kibana stopped showing logs, and the head plugin page would no longer load. The cause: if es indices are not pruned regularly, the volume of collected log data keeps growing, es memory consumption keeps climbing, and the number of indices explodes; elk then starts misbehaving, e.g. elk pages time out, <a target="_blank" rel="noopener" href="http://10.0.8.47:9200//_plugin/head/">http://10.0.8.47:9200/_plugin/head/</a> hangs, and the es cluster status becomes abnormal (a red status)!</p>
<p>Run the following command on any node to check the es cluster state (the IP in the URL can be any of the three nodes). As shown below, the current es master node is 10.0.8.47:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# curl -XGET &#39;http:&#x2F;&#x2F;10.0.8.47:9200&#x2F;_cat&#x2F;nodes?v&#39;  </span><br><span class="line">host      ip        heap.percent ram.percent load node.role master name                         </span><br><span class="line">10.0.8.47 10.0.8.47           31          78 0.92 d         *      elk-node03.kevin.cn  </span><br><span class="line">10.0.8.44 10.0.8.44           16          55 0.27 d         m      elk-node01.kevin.cn  </span><br><span class="line">10.0.8.45 10.0.8.45           61          78 0.11 d         m      elk-node02.kevin.cn  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Query the cluster health (one of three states: green, yellow, red; green means healthy):</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# curl -XGET &#39;http:&#x2F;&#x2F;10.0.8.47:9200&#x2F;_cat&#x2F;health?v&#39;  </span><br><span class="line">epoch      timestamp cluster  status node.total node.data shards  pri relo init unassign pending_tasks max_task_wait_time active_shards_percent  </span><br><span class="line">1554689492 10:11:32  kevin-elk red             3         3   3587 3447    0    6     5555           567              11.1m                 39.2%  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Remediation:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">1) Improve cluster stability  </span><br><span class="line">-&gt; Raise the system&#39;s maximum number of open file descriptors to 65535;  </span><br><span class="line">-&gt; Disable swap and lock the process address space so memory is never swapped out;  </span><br><span class="line">-&gt; Tune the JVM: raise the es heap, which defaults to 2g (keep the Heap Size no larger than half of physical memory, and below 32G);  </span><br><span class="line">2) Delete es indices periodically, or delete unusable ones, e.g. keep only the most recent month of index data (this can be scripted and scheduled; see: https:&#x2F;&#x2F;www.cnblogs.com&#x2F;kevingrace&#x2F;p&#x2F;9994178.html);  </span><br><span class="line">3) If the es master node restarts, shards are relocated while the master role moves to another node; with many shards and a lot of data this takes a while, and during the move the elk cluster status is yellow. Watching the cluster state, the shards count keeps rising and unassign keeps falling; once unassign reaches 0, the shards have fully relocated and the elk health status is green again;  </span><br><span class="line">4) If all es nodes are restarted, start one node first to act as the master, then start the others;  </span><br><span class="line"></span><br></pre></td></tr></table></figure>
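<p>The periodic index cleanup mentioned above can be sketched as follows. The index names and cutoff date are made up for illustration; a real job would compute the cutoff with date -d "30 days ago" +%Y.%m.%d, read the names from _cat/indices, and curl -XDELETE each match:</p>

```shell
# Delete date-suffixed indices older than a cutoff (dry run: just prints).
cutoff=20190310   # i.e. 2019.03.10; a real job would derive this from date(1)

to_delete=""
for idx in vfc-intf-ent-login.log-2019.03.04 vfc-intf-ent-login.log-2019.03.15; do
  suffix=${idx##*-}                    # date suffix, e.g. 2019.03.04
  num=$(echo "$suffix" | tr -d '.')    # 2019.03.04 -> 20190304, comparable as a number
  if [ "$num" -lt "$cutoff" ]; then
    to_delete="$to_delete $idx"        # real cleanup: curl -XDELETE "http://10.0.8.44:9200/$idx"
  fi
done
echo "would delete:$to_delete"
```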

<p>Also worth recording here: how to change the ElasticSearch memory settings (the "ES_HEAP_SIZE" heap setting and the "indices.fielddata.cache.size" cap).</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">First edit the ES_HEAP_SIZE value in &#x2F;etc&#x2F;sysconfig&#x2F;elasticsearch (it defaults to 2g)  </span><br><span class="line">[root@elk-node03 ~]# vim &#x2F;etc&#x2F;sysconfig&#x2F;elasticsearch  </span><br><span class="line">.............  </span><br><span class="line">ES_HEAP_SIZE&#x3D;8g </span></pre></td></tr></table></figure>

<p>Then edit the elasticsearch configuration file:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# vim &#x2F;etc&#x2F;elasticsearch&#x2F;elasticsearch.yml  </span><br><span class="line">.............  </span><br><span class="line">bootstrap.mlockall: true     #Defaults to false. Locks the process memory: when the JVM&#39;s memory gets swapped out, es performance drops, so set this to true to lock memory.  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Note: while editing elasticsearch.yml, it is also wise to set indices.fielddata.cache.size. This parameter controls how much heap memory is allocated to fielddata; it matters because the fielddata cache grows without bound as queries run (by default it is never evicted automatically).</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# vim &#x2F;etc&#x2F;elasticsearch&#x2F;elasticsearch.yml  </span><br><span class="line">..............  </span><br><span class="line">indices.fielddata.cache.size: 40%  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>The setting above caps fielddata: once the fielddata cache reaches 40% of the heap (i.e. 40% of the 8g configured above), old cache entries start being evicted automatically</p>
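<p>As a quick sanity check, that 40% cap is plain integer arithmetic on the heap size; a minimal sketch, where the 8g heap is the ES_HEAP_SIZE value set above:</p>

```python
def fielddata_cap_bytes(heap_bytes, percent):
    """Size at which the fielddata cache starts evicting old entries."""
    return heap_bytes * percent // 100

heap = 8 * 1024**3  # ES_HEAP_SIZE=8g from the step above
print(fielddata_cap_bytes(heap, 40))  # ~3.2 GiB of the heap may hold fielddata
```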
<p>Then restart elasticsearch</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# systemctl restart elasticsearch  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Check the running elasticsearch process; the heap has been raised to 8g</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# ps -ef|grep elasticsearch  </span><br><span class="line">root      7066  3032  0 16:46 pts&#x2F;0    00:00:00 grep --color&#x3D;auto elasticsearch  </span><br><span class="line">elastic+ 15586     1 22 10:33 ?        01:22:00 &#x2F;bin&#x2F;java -Xms8g -Xmx8g -Djava.awt.headless&#x3D;true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction&#x3D;75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding&#x3D;UTF-8 -Djna.nosys&#x3D;true -Des.path.home&#x3D;&#x2F;usr&#x2F;share&#x2F;elasticsearch -cp &#x2F;usr&#x2F;share&#x2F;elasticsearch&#x2F;lib&#x2F;elasticsearch-2.4.6.jar:&#x2F;usr&#x2F;share&#x2F;elasticsearch&#x2F;lib&#x2F;* org.elasticsearch.bootstrap.Elasticsearch start -Des.pidfile&#x3D;&#x2F;var&#x2F;run&#x2F;elasticsearch&#x2F;elasticsearch.pid -Des.default.path.home&#x3D;&#x2F;usr&#x2F;share&#x2F;elasticsearch -Des.default.path.logs&#x3D;&#x2F;var&#x2F;log&#x2F;elasticsearch -Des.default.path.data&#x3D;&#x2F;var&#x2F;lib&#x2F;elasticsearch -Des.default.path.conf&#x3D;&#x2F;etc&#x2F;elasticsearch </span><br></pre></td></tr></table></figure>

<p>As above, after a series of fixes (raising the max open file descriptors to 65535; disabling swap and locking the process address space to prevent memory swapping; increasing the ES heap; deleting unused or broken indices; restarting the ES service on each node), the cluster status may still be "red" on the next check. This is because when the ES master node restarts, shards are relocated while the master role moves to another node; with many shards and a large data volume this takes time. Wait until the unassigned count drops to 0, which means shard relocation to the new master is complete; the cluster health then turns green.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node02 system]# curl -XGET &#39;http:&#x2F;&#x2F;10.0.8.47:9200&#x2F;_cat&#x2F;health?v&#39;  </span><br><span class="line">epoch      timestamp cluster  status node.total node.data shards  pri relo init unassign pending_tasks max_task_wait_time active_shards_percent  </span><br><span class="line">1554691187 10:39:47  kevin-elk red             3         3   4460 3878    0    8     4660           935               5.7m                 48.9%  </span><br><span class="line">   </span><br><span class="line">[root@elk-node02 system]# curl -XGET &#39;http:&#x2F;&#x2F;10.0.8.47:9200&#x2F;_cat&#x2F;health?v&#39;  </span><br><span class="line">epoch      timestamp cluster  status node.total node.data shards  pri relo init unassign pending_tasks max_task_wait_time active_shards_percent  </span><br><span class="line">1554691187 10:39:47  kevin-elk red             3         3   4466 3882    0    8     4654           944               5.7m                 48.9%  </span><br><span class="line">   </span><br><span class="line">................  </span><br><span class="line">................  </span><br><span class="line">   </span><br><span class="line">#once the &quot;unassign&quot; count drops to 0, check the es status again  </span><br><span class="line">[root@elk-node03 ~]# curl -XGET &#39;http:&#x2F;&#x2F;10.0.8.47:9200&#x2F;_cat&#x2F;health?v&#39;  </span><br><span class="line">epoch      timestamp cluster  status node.total node.data shards  pri relo init unassign pending_tasks max_task_wait_time active_shards_percent  </span><br><span class="line">1554692772 11:06:12  kevin-elk green           3         3   9118 4559    0    0        0             0                  -                100.0%  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>If the cluster status is still red at this point, find the red indices and delete them (by now only a small number of indices should be red)</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node02 system]# curl -XGET   &quot;http:&#x2F;&#x2F;10.0.8.45:9200&#x2F;_cat&#x2F;indices?v&quot;|grep -w &quot;red&quot;  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>For example, if the red index found is "10.0.61.24-vfc-intf-ent-order.log-2019.03.04", just delete it</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node02 system]# curl -XDELETE  http:&#x2F;&#x2F;10.0.8.44:9200&#x2F;10.0.61.24-vfc-intf-ent-order.log-2019.03.04  </span><br><span class="line"></span><br></pre></td></tr></table></figure>
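<p>When several indices are red, it is handier to script the lookup. A hedged sketch that parses <code>_cat&#x2F;indices?v</code> output and prints the matching DELETE commands; the host and sample index names are illustrative, and the output should be reviewed before running anything:</p>

```python
def red_indices(cat_indices_output):
    """Return index names whose health column reads 'red' in `_cat/indices?v` output."""
    names = []
    for line in cat_indices_output.splitlines():
        fields = line.split()
        # default _cat/indices columns: health status index pri rep docs.count ...
        if len(fields) >= 3 and fields[0] == "red":
            names.append(fields[2])
    return names

sample = """health status index                                         pri rep
red    open   10.0.61.24-vfc-intf-ent-order.log-2019.03.04 5   1
green  open   logstash-2019.03.05                           5   1"""

for name in red_indices(sample):
    print("curl -XDELETE http://10.0.8.44:9200/" + name)
```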

<p>Important: if the disk holding ES data on a cluster node exceeds a usage threshold (e.g. 85%), Elasticsearch can no longer allocate replica shards to that node, which can also drive the cluster health to "red". In that case, simply add or free space on that disk; once enough space is available, Elasticsearch recovers the data automatically.</p>
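<p>The check Elasticsearch effectively performs against its disk watermark can be sketched in a few lines; the path below is a stand-in, substitute the node's actual path.data volume:</p>

```python
import shutil

def over_watermark(used, total, watermark=0.85):
    """True once disk usage crosses the watermark, i.e. replica allocation stops."""
    return used / total >= watermark

# "/" stands in for the ES data volume (e.g. /var/lib/elasticsearch)
total, used, free = shutil.disk_usage("/")
print("over 85% watermark:", over_watermark(used, total))
```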
<h2 id="六-Elasticsearch常见错误"><a href="#六-Elasticsearch常见错误" class="headerlink" title="六.  Elasticsearch常见错误"></a>VI. Common Elasticsearch errors</h2><ul>
<li>Error 1: Exception in thread “main” SettingsException[Failed to load settings from [elasticsearch.yml]]; nested: ElasticsearchParseException[malformed, expected settings to start with ‘object’, instead was [VALUE_STRING]];</li>
</ul>
<p>Cause: a malformed elasticsearch.yml configuration file</p>
<p>Fix: a space is required between the parameter name and its value (after the colon or equals sign)</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# vim &#x2F;etc&#x2F;elasticsearch&#x2F;elasticsearch.yml  </span><br><span class="line">...............  </span><br><span class="line">#node.name:elk-node03.kevin.cn         #wrong  </span><br><span class="line">node.name: elk-node03.kevin.cn           #correct  </span><br><span class="line">   </span><br><span class="line">#or, with this style  </span><br><span class="line">#node.name &#x3D;&quot;elk-node03.kevin.cn&quot;    #wrong  </span><br><span class="line">#node.name &#x3D; &quot;elk-node03.kevin.cn&quot;   #correct  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Then restart the elasticsearch service</p>
<ul>
<li>Error 2: org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root</li>
</ul>
<p>Cause: as a safety measure, Elasticsearch refuses to run as root; it must be started by a dedicated user and group</p>
<p>Fix: create a dedicated user and group and grant them ownership</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# groupadd elasticsearch            </span><br><span class="line">[root@elk-node03 ~]# useradd elasticsearch -g elasticsearch -p elasticsearch  </span><br><span class="line">[root@elk-node03 ~]# chown -R elasticsearch.elasticsearch &#x2F;data&#x2F;es-data                 #grant ownership of the ES data directory, otherwise the service fails to start  </span><br><span class="line">[root@elk-node03 ~]# chown -R elasticsearch.elasticsearch &#x2F;var&#x2F;log&#x2F;elasticsearch     #grant ownership of the ES log directory, otherwise the service fails to start  </span><br><span class="line">   </span><br><span class="line">#the above applies to a yum install, where the data and log directories need ownership; for a source install, the install directory needs ownership too  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Then restart the elasticsearch service</p>
<ul>
<li>Error 3: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000085330000, 2060255232, 0) failed; error=’Cannot a …’(errno=12);</li>
</ul>
<p>Cause: the JVM's maximum heap exceeds the system's available memory</p>
<p>Fix: adjust the configured JVM heap appropriately; edit elasticsearch's jvm configuration file</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"># vim &#x2F;data&#x2F;elasticsearch&#x2F;config&#x2F;jvm.options  </span><br><span class="line">-Xms8g  </span><br><span class="line">-Xmx8g  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>If elasticsearch was installed via yum, edit the following file instead</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# vim &#x2F;etc&#x2F;sysconfig&#x2F;elasticsearch  </span><br><span class="line"># Heap size defaults to 256m min, 1g max  </span><br><span class="line"># Set ES_HEAP_SIZE to 50% of available RAM, but no more than 31g  </span><br><span class="line">ES_HEAP_SIZE&#x3D;8g  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Then restart the elasticsearch service</p>
<ul>
<li>Error 4: ERROR: [3] bootstrap checks failed</li>
</ul>
<p>Cause: the operating system limits the resources (open files, processes, virtual memory areas) available to the elasticsearch user</p>
<p>Fix: as root, edit the security limits configuration file and append the following at the end</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# vim &#x2F;etc&#x2F;security&#x2F;limits.conf  </span><br><span class="line">elasticsearch       hard        nofile        65536  </span><br><span class="line">elasticsearch       soft        nofile        65536  </span><br><span class="line">*                   soft        nproc         4096  </span><br><span class="line">*                   hard        nproc         4096  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Also modify the kernel configuration file</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]  </span><br><span class="line">   </span><br><span class="line">[root@elk-node03 ~]# vim &#x2F;etc&#x2F;sysctl.conf        #note: the value below must be larger than the one in the error message  </span><br><span class="line">vm.max_map_count &#x3D; 655360  </span><br><span class="line">[root@elk-node03 ~]# sysctl -p        #reload the kernel parameters  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Then restart the elasticsearch service</p>
<h2 id="七-Elasticsearch集群监控状态监控"><a href="#七-Elasticsearch集群监控状态监控" class="headerlink" title="七.  Elasticsearch集群监控状态监控"></a>VII. Monitoring Elasticsearch cluster health</h2>
<ol>
<li>Monitor the elasticsearch cluster status with a simple shell command</li>
</ol>
<p>Idea: use curl to query any node of the elasticsearch cluster; the response reports the cluster status, which should be green.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span 
class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span 
class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br><span class="line">151</span><br><span class="line">152</span><br><span class="line">153</span><br><span class="line">154</span><br><span class="line">155</span><br><span class="line">156</span><br><span class="line">157</span><br><span class="line">158</span><br><span class="line">159</span><br><span class="line">160</span><br><span class="line">161</span><br><span class="line">162</span><br><span class="line">163</span><br><span class="line">164</span><br><span class="line">165</span><br><span class="line">166</span><br><span class="line">167</span><br><span class="line">168</span><br><span class="line">169</span><br><span class="line">170</span><br><span class="line">171</span><br><span class="line">172</span><br><span class="line">173</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# curl -XGET &#39;http:&#x2F;&#x2F;10.0.8.47:9200&#x2F;_cluster&#x2F;stats?human&amp;pretty&#39;  </span><br><span class="line">&#123;  </span><br><span class="line">  &quot;timestamp&quot; : 1554792101956,  
</span><br><span class="line">  &quot;cluster_name&quot; : &quot;kevin-elk&quot;,  </span><br><span class="line">  &quot;status&quot; : &quot;green&quot;,  </span><br><span class="line">  &quot;indices&quot; : &#123;  </span><br><span class="line">    &quot;count&quot; : 451,  </span><br><span class="line">    &quot;shards&quot; : &#123;  </span><br><span class="line">      &quot;total&quot; : 4478,  </span><br><span class="line">      &quot;primaries&quot; : 2239,  </span><br><span class="line">      &quot;replication&quot; : 1.0,  </span><br><span class="line">      &quot;index&quot; : &#123;  </span><br><span class="line">        &quot;shards&quot; : &#123;  </span><br><span class="line">          &quot;min&quot; : 2,  </span><br><span class="line">          &quot;max&quot; : 10,  </span><br><span class="line">          &quot;avg&quot; : 9.929046563192905  </span><br><span class="line">        &#125;,  </span><br><span class="line">        &quot;primaries&quot; : &#123;  </span><br><span class="line">          &quot;min&quot; : 1,  </span><br><span class="line">          &quot;max&quot; : 5,  </span><br><span class="line">          &quot;avg&quot; : 4.964523281596453  </span><br><span class="line">        &#125;,  </span><br><span class="line">        &quot;replication&quot; : &#123;  </span><br><span class="line">          &quot;min&quot; : 1.0,  </span><br><span class="line">          &quot;max&quot; : 1.0,  </span><br><span class="line">          &quot;avg&quot; : 1.0  </span><br><span class="line">        &#125;  </span><br><span class="line">      &#125;  </span><br><span class="line">    &#125;,  </span><br><span class="line">    &quot;docs&quot; : &#123;  </span><br><span class="line">      &quot;count&quot; : 10448854,  </span><br><span class="line">      &quot;deleted&quot; : 3  </span><br><span class="line">    &#125;,  </span><br><span class="line">    &quot;store&quot; : &#123;  </span><br><span class="line">      &quot;size&quot; : &quot;5gb&quot;,  
</span><br><span class="line">      &quot;size_in_bytes&quot; : 5467367887,  </span><br><span class="line">      &quot;throttle_time&quot; : &quot;0s&quot;,  </span><br><span class="line">      &quot;throttle_time_in_millis&quot; : 0  </span><br><span class="line">    &#125;,  </span><br><span class="line">    &quot;fielddata&quot; : &#123;  </span><br><span class="line">      &quot;memory_size&quot; : &quot;0b&quot;,  </span><br><span class="line">      &quot;memory_size_in_bytes&quot; : 0,  </span><br><span class="line">      &quot;evictions&quot; : 0  </span><br><span class="line">    &#125;,  </span><br><span class="line">    &quot;query_cache&quot; : &#123;  </span><br><span class="line">      &quot;memory_size&quot; : &quot;0b&quot;,  </span><br><span class="line">      &quot;memory_size_in_bytes&quot; : 0,  </span><br><span class="line">      &quot;total_count&quot; : 364053,  </span><br><span class="line">      &quot;hit_count&quot; : 0,  </span><br><span class="line">      &quot;miss_count&quot; : 364053,  </span><br><span class="line">      &quot;cache_size&quot; : 0,  </span><br><span class="line">      &quot;cache_count&quot; : 0,  </span><br><span class="line">      &quot;evictions&quot; : 0  </span><br><span class="line">    &#125;,  </span><br><span class="line">    &quot;completion&quot; : &#123;  </span><br><span class="line">      &quot;size&quot; : &quot;0b&quot;,  </span><br><span class="line">      &quot;size_in_bytes&quot; : 0  </span><br><span class="line">    &#125;,  </span><br><span class="line">    &quot;segments&quot; : &#123;  </span><br><span class="line">      &quot;count&quot; : 16635,  </span><br><span class="line">      &quot;memory&quot; : &quot;83.6mb&quot;,  </span><br><span class="line">      &quot;memory_in_bytes&quot; : 87662804,  </span><br><span class="line">      &quot;terms_memory&quot; : &quot;64.5mb&quot;,  </span><br><span class="line">      &quot;terms_memory_in_bytes&quot; : 67635408,  </span><br><span class="line">  
    &quot;stored_fields_memory&quot; : &quot;6.3mb&quot;,  </span><br><span class="line">      &quot;stored_fields_memory_in_bytes&quot; : 6624464,  </span><br><span class="line">      &quot;term_vectors_memory&quot; : &quot;0b&quot;,  </span><br><span class="line">      &quot;term_vectors_memory_in_bytes&quot; : 0,  </span><br><span class="line">      &quot;norms_memory&quot; : &quot;6.1mb&quot;,  </span><br><span class="line">      &quot;norms_memory_in_bytes&quot; : 6478656,  </span><br><span class="line">      &quot;doc_values_memory&quot; : &quot;6.6mb&quot;,  </span><br><span class="line">      &quot;doc_values_memory_in_bytes&quot; : 6924276,  </span><br><span class="line">      &quot;index_writer_memory&quot; : &quot;448.1kb&quot;,  </span><br><span class="line">      &quot;index_writer_memory_in_bytes&quot; : 458896,  </span><br><span class="line">      &quot;index_writer_max_memory&quot; : &quot;4.5gb&quot;,  </span><br><span class="line">      &quot;index_writer_max_memory_in_bytes&quot; : 4914063972,  </span><br><span class="line">      &quot;version_map_memory&quot; : &quot;338b&quot;,  </span><br><span class="line">      &quot;version_map_memory_in_bytes&quot; : 338,  </span><br><span class="line">      &quot;fixed_bit_set&quot; : &quot;0b&quot;,  </span><br><span class="line">      &quot;fixed_bit_set_memory_in_bytes&quot; : 0  </span><br><span class="line">    &#125;,  </span><br><span class="line">    &quot;percolate&quot; : &#123;  </span><br><span class="line">      &quot;total&quot; : 0,  </span><br><span class="line">      &quot;time&quot; : &quot;0s&quot;,  </span><br><span class="line">      &quot;time_in_millis&quot; : 0,  </span><br><span class="line">      &quot;current&quot; : 0,  </span><br><span class="line">      &quot;memory_size_in_bytes&quot; : -1,  </span><br><span class="line">      &quot;memory_size&quot; : &quot;-1b&quot;,  </span><br><span class="line">      &quot;queries&quot; : 0  </span><br><span class="line">    &#125;  
</span><br><span class="line">  &#125;,  </span><br><span class="line">  &quot;nodes&quot; : &#123;  </span><br><span class="line">    &quot;count&quot; : &#123;  </span><br><span class="line">      &quot;total&quot; : 3,  </span><br><span class="line">      &quot;master_only&quot; : 0,  </span><br><span class="line">      &quot;data_only&quot; : 0,  </span><br><span class="line">      &quot;master_data&quot; : 3,  </span><br><span class="line">      &quot;client&quot; : 0  </span><br><span class="line">    &#125;,  </span><br><span class="line">    &quot;versions&quot; : [ &quot;2.4.6&quot; ],  </span><br><span class="line">    &quot;os&quot; : &#123;  </span><br><span class="line">      &quot;available_processors&quot; : 24,  </span><br><span class="line">      &quot;allocated_processors&quot; : 24,  </span><br><span class="line">      &quot;mem&quot; : &#123;  </span><br><span class="line">        &quot;total&quot; : &quot;13.8gb&quot;,  </span><br><span class="line">        &quot;total_in_bytes&quot; : 14859091968  </span><br><span class="line">      &#125;,  </span><br><span class="line">      &quot;names&quot; : [ &#123;  </span><br><span class="line">        &quot;name&quot; : &quot;Linux&quot;,  </span><br><span class="line">        &quot;count&quot; : 3  </span><br><span class="line">      &#125; ]  </span><br><span class="line">    &#125;,  </span><br><span class="line">    &quot;process&quot; : &#123;  </span><br><span class="line">      &quot;cpu&quot; : &#123;  </span><br><span class="line">        &quot;percent&quot; : 1  </span><br><span class="line">      &#125;,  </span><br><span class="line">      &quot;open_file_descriptors&quot; : &#123;  </span><br><span class="line">        &quot;min&quot; : 9817,  </span><br><span class="line">        &quot;max&quot; : 9920,  </span><br><span class="line">        &quot;avg&quot; : 9866  </span><br><span class="line">      &#125;  </span><br><span class="line">    &#125;,  </span><br><span class="line">    
&quot;jvm&quot; : &#123;  </span><br><span class="line">      &quot;max_uptime&quot; : &quot;1.1d&quot;,  </span><br><span class="line">      &quot;max_uptime_in_millis&quot; : 101282315,  </span><br><span class="line">      &quot;versions&quot; : [ &#123;  </span><br><span class="line">        &quot;version&quot; : &quot;1.8.0_131&quot;,  </span><br><span class="line">        &quot;vm_name&quot; : &quot;Java HotSpot(TM) 64-Bit Server VM&quot;,  </span><br><span class="line">        &quot;vm_version&quot; : &quot;25.131-b11&quot;,  </span><br><span class="line">        &quot;vm_vendor&quot; : &quot;Oracle Corporation&quot;,  </span><br><span class="line">        &quot;count&quot; : 3  </span><br><span class="line">      &#125; ],  </span><br><span class="line">      &quot;mem&quot; : &#123;  </span><br><span class="line">        &quot;heap_used&quot; : &quot;7.2gb&quot;,  </span><br><span class="line">        &quot;heap_used_in_bytes&quot; : 7800334800,  </span><br><span class="line">        &quot;heap_max&quot; : &quot;23.8gb&quot;,  </span><br><span class="line">        &quot;heap_max_in_bytes&quot; : 25560612864  </span><br><span class="line">      &#125;,  </span><br><span class="line">      &quot;threads&quot; : 359  </span><br><span class="line">    &#125;,  </span><br><span class="line">    &quot;fs&quot; : &#123;  </span><br><span class="line">      &quot;total&quot; : &quot;1.1tb&quot;,  </span><br><span class="line">      &quot;total_in_bytes&quot; : 1241247670272,  </span><br><span class="line">      &quot;free&quot; : &quot;1tb&quot;,  </span><br><span class="line">      &quot;free_in_bytes&quot; : 1206666141696,  </span><br><span class="line">      &quot;available&quot; : &quot;1tb&quot;,  </span><br><span class="line">      &quot;available_in_bytes&quot; : 1143543336960  </span><br><span class="line">    &#125;,  </span><br><span class="line">    &quot;plugins&quot; : [ &#123;  </span><br><span class="line">      &quot;name&quot; : 
&quot;bigdesk&quot;,  </span><br><span class="line">      &quot;version&quot; : &quot;master&quot;,  </span><br><span class="line">      &quot;description&quot; : &quot;bigdesk -- Live charts and statistics for Elasticsearch cluster &quot;,  </span><br><span class="line">      &quot;url&quot; : &quot;&#x2F;_plugin&#x2F;bigdesk&#x2F;&quot;,  </span><br><span class="line">      &quot;jvm&quot; : false,  </span><br><span class="line">      &quot;site&quot; : true  </span><br><span class="line">    &#125;, &#123;  </span><br><span class="line">      &quot;name&quot; : &quot;head&quot;,  </span><br><span class="line">      &quot;version&quot; : &quot;master&quot;,  </span><br><span class="line">      &quot;description&quot; : &quot;head - A web front end for an elastic search cluster&quot;,  </span><br><span class="line">      &quot;url&quot; : &quot;&#x2F;_plugin&#x2F;head&#x2F;&quot;,  </span><br><span class="line">      &quot;jvm&quot; : false,  </span><br><span class="line">      &quot;site&quot; : true  </span><br><span class="line">    &#125;, &#123;  </span><br><span class="line">      &quot;name&quot; : &quot;kopf&quot;,  </span><br><span class="line">      &quot;version&quot; : &quot;2.0.1&quot;,  </span><br><span class="line">      &quot;description&quot; : &quot;kopf - simple web administration tool for Elasticsearch&quot;,  </span><br><span class="line">      &quot;url&quot; : &quot;&#x2F;_plugin&#x2F;kopf&#x2F;&quot;,  </span><br><span class="line">      &quot;jvm&quot; : false,  </span><br><span class="line">      &quot;site&quot; : true  </span><br><span class="line">    &#125; ]  </span><br><span class="line">  &#125;  </span><br><span class="line">&#125;  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>The cluster statistics printed by the monitoring command above cover: the cluster's shard count, document count, storage size, cache info, memory usage, installed plugins, filesystem info, JVM status, system CPU and OS info, and segment info.</p>
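<p>For automated checks, only a handful of those fields usually matter. A hedged sketch that pulls the headline numbers out of a <code>_cluster&#x2F;stats</code> response; the sample dict mirrors the shape of the output above:</p>

```python
def summarize(stats):
    """Pick the headline numbers out of a _cluster/stats response."""
    return {
        "status": stats["status"],
        "indices": stats["indices"]["count"],
        "shards": stats["indices"]["shards"]["total"],
        "docs": stats["indices"]["docs"]["count"],
        "heap_used_bytes": stats["nodes"]["jvm"]["mem"]["heap_used_in_bytes"],
    }

# trimmed-down sample mirroring the response printed above
sample = {
    "status": "green",
    "indices": {"count": 451, "shards": {"total": 4478}, "docs": {"count": 10448854}},
    "nodes": {"jvm": {"mem": {"heap_used_in_bytes": 7800334800}}},
}
print(summarize(sample))
```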
<ol start="2">
<li>Monitor the elasticSearch cluster health value (green / yellow / red) with a script</li>
</ol>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# curl 10.0.8.47:9200&#x2F;_cat&#x2F;health  </span><br><span class="line">1554864073 10:41:13 qwkg-elk green 3 3 4478 2239 0 0 0 0 - 100.0%  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Write a python script to monitor elasticsearch's health status</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# vim &#x2F;opt&#x2F;es_health_monit.py     </span><br><span class="line">import subprocess  </span><br><span class="line">command &#x3D; &#39;curl -s 10.0.8.47:9200&#x2F;_cat&#x2F;health&#39;  </span><br><span class="line">output &#x3D; subprocess.check_output(command, shell&#x3D;True).decode()  </span><br><span class="line">status &#x3D; output.split()[3]     #the 4th whitespace-separated field is the health status  </span><br><span class="line">if status &#x3D;&#x3D; &#39;red&#39;:  </span><br><span class="line">    healthy &#x3D; 0  </span><br><span class="line">else:  </span><br><span class="line">    healthy &#x3D; 1  </span><br><span class="line">    </span><br><span class="line">print(healthy)  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Run the script manually; it prints the elasticsearch health value</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[root@elk-node03 ~]# chmod 755 &#x2F;opt&#x2F;es_health_monit.py  </span><br><span class="line">[root@elk-node03 ~]# python &#x2F;opt&#x2F;es_health_monit.py  </span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>The script can then be combined with sendemail for mail alerts, or added to zabbix monitoring.</p>
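<p>For zabbix or cron integration, the JSON health endpoint is sturdier than splitting _cat text. A Python 3 sketch that maps the status to a conventional severity value; the host shown in the comment is illustrative:</p>

```python
STATUS_LEVEL = {"green": 0, "yellow": 1, "red": 2}

def health_level(payload):
    """Map a /_cluster/health JSON response to a severity; unknown status counts as red."""
    return STATUS_LEVEL.get(payload.get("status"), 2)

# demo with a canned payload; live, fetch the JSON first, e.g.:
#   curl http://10.0.8.47:9200/_cluster/health
print(health_level({"status": "yellow"}))  # prints 1
```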
<h2 id="八-Elasticsearch配置中防止脑裂的配置"><a href="#八-Elasticsearch配置中防止脑裂的配置" class="headerlink" title="八.  Elasticsearch配置中防止脑裂的配置"></a>VIII. Preventing split-brain in the Elasticsearch configuration</h2><p>With Master and DataNode roles not separated, the cluster becomes unstable.</p>
<p>Nodes in an ES cluster take roles such as Master, DataNode and Client, and any node can hold all of these roles at once. The two important ones are Master and DataNode:</p>
<ul>
<li></li>
</ul>
<ol>
<li>Master主要管理集群信息、primary分片和replica分片信息、维护index信息。</li>
</ol>
<ul>
<li></li>
</ul>
<ol start="2">
<li>DataNode用来存储数据，维护倒排索引，提供数据检索等。</li>
</ol>
<p>All of that metadata lives on the master, so if the master goes down, every index it manages becomes inaccessible. The documentation therefore recommends separating the master and data roles to keep the master stable. A cluster of master-eligible nodes can in turn suffer from a problem called split-brain; to prevent it, set the minimum number of master nodes to eligible_master_number/2 + 1.</p>
<p>What split-brain means: with two master-eligible nodes and the minimum master count set to 1, a network flap or brief disconnect makes each node believe the other has died, and both get elected master. The cluster then has two masters: what is physically one cluster has become logically two. If one of those masters later fails, then even after it recovers and rejoins the original cluster, the data written while it was acting as master is lost, because it was maintaining its own copy of the index metadata.</p>
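<p>The quorum rule above can be written as a one-line helper (an illustrative sketch, not code from the original article):</p>

```python
def minimum_master_nodes(eligible_masters):
    # a strict majority of master-eligible nodes: two halves of a
    # partitioned cluster can never both reach this count
    return eligible_masters // 2 + 1

print(minimum_master_nodes(2))  # -> 2: both nodes required, so no split, but no fault tolerance
print(minimum_master_nodes(3))  # -> 2: survives the loss of one master-eligible node
```

<p>This is why three dedicated masters are the usual choice: two cannot tolerate any failure, while three keep a quorum of two through a single node outage.</p>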
<p>Based on this reasoning, the cluster was changed as follows: pick three additional dedicated machines as master nodes and set in elasticsearch.yml:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">node.master: true</span><br><span class="line">node.data: false</span><br><span class="line">discovery.zen.minimum_master_nodes: 2</span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Configure the remaining nodes as data nodes, then restart the nodes one by one:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">node.master: false</span><br><span class="line">node.data: true</span><br><span class="line"></span><br></pre></td></tr></table></figure>
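<p>Once every node is back, the role split can be checked with <code>curl 10.0.8.47:9200/_cat/nodes?h=name,node.role,master</code>; a sketch of reading that output (the sample node names below are made up):</p>

```python
def parse_roles(cat_nodes_output):
    """Map node name -> roles from `_cat/nodes?h=name,node.role,master` output."""
    roles = {}
    for line in cat_nodes_output.strip().splitlines():
        name, role, master = line.split()
        roles[name] = {
            'master_eligible': 'm' in role,  # node.role letters: m=master, d=data, i=ingest
            'data': 'd' in role,
            'elected_master': master == '*',
        }
    return roles

sample = """\
es-master-1 mi *
es-master-2 mi -
es-data-1   di -
"""
roles = parse_roles(sample)
print(roles['es-master-1']['elected_master'])  # -> True
print(roles['es-data-1']['master_eligible'])   # -> False
```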

<blockquote>
<p>Author: 散尽浮华</p>
<p>Original: <a target="_blank" rel="noopener" href="http://www.cnblogs.com/kevingrace/p/10671063.html">www.cnblogs.com/kevingrace/p/10671063.html</a></p>
</blockquote>
 
      <!-- reward -->
      
      <div id="reword-out">
        <div id="reward-btn">
          打赏
        </div>
      </div>
      
    </div>
    

    <!-- copyright -->
    
    <div class="declare">
      <ul class="post-copyright">
        <li>
          <i class="ri-copyright-line"></i>
          <strong>版权声明： </strong>
          
          本博客所有文章除特别声明外，著作权归作者所有。转载请注明出处！
          
        </li>
      </ul>
    </div>
    
    <footer class="article-footer">
       
<div class="share-btn">
      <span class="share-sns share-outer">
        <i class="ri-share-forward-line"></i>
        分享
      </span>
      <div class="share-wrap">
        <i class="arrow"></i>
        <div class="share-icons">
          
          <a class="weibo share-sns" href="javascript:;" data-type="weibo">
            <i class="ri-weibo-fill"></i>
          </a>
          <a class="weixin share-sns wxFab" href="javascript:;" data-type="weixin">
            <i class="ri-wechat-fill"></i>
          </a>
          <a class="qq share-sns" href="javascript:;" data-type="qq">
            <i class="ri-qq-fill"></i>
          </a>
          <a class="douban share-sns" href="javascript:;" data-type="douban">
            <i class="ri-douban-line"></i>
          </a>
          <!-- <a class="qzone share-sns" href="javascript:;" data-type="qzone">
            <i class="icon icon-qzone"></i>
          </a> -->
          
          <a class="facebook share-sns" href="javascript:;" data-type="facebook">
            <i class="ri-facebook-circle-fill"></i>
          </a>
          <a class="twitter share-sns" href="javascript:;" data-type="twitter">
            <i class="ri-twitter-fill"></i>
          </a>
          <a class="google share-sns" href="javascript:;" data-type="google">
            <i class="ri-google-fill"></i>
          </a>
        </div>
      </div>
</div>

<div class="wx-share-modal">
    <a class="modal-close" href="javascript:;"><i class="ri-close-circle-line"></i></a>
    <p>扫一扫，分享到微信</p>
    <div class="wx-qrcode">
      <img src="//api.qrserver.com/v1/create-qr-code/?size=150x150&data=http://example.com/2020/11/11/es/Elasticsearch%20%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5/" alt="微信分享二维码">
    </div>
</div>

<div id="share-mask"></div>  
  <ul class="article-tag-list" itemprop="keywords"><li class="article-tag-list-item"><a class="article-tag-list-link" href="/tags/es/" rel="tag">es</a></li></ul>

    </footer>
  </div>

   
  <nav class="article-nav">
    
      <a href="/2020/11/11/docker/%E6%9C%80%E6%96%B0%E6%95%B4%E7%90%86%E4%B9%8B--docker-compose%E5%8F%82%E6%95%B0%E5%8F%8A%E5%91%BD%E4%BB%A4/" class="article-nav-link">
        <strong class="article-nav-caption">上一篇</strong>
        <div class="article-nav-title">
          
            最新整理之--docker-compose参数及命令.md
          
        </div>
      </a>
    
    
      <a href="/2020/11/11/interview/IT%E8%BF%90%E7%BB%B4%E9%9D%A2%E8%AF%95%E9%97%AE%E9%A2%98%E6%80%BB%E7%BB%93-%E6%95%B0%E6%8D%AE%E5%BA%93%E3%80%81%E7%9B%91%E6%8E%A7%E3%80%81%E7%BD%91%E7%BB%9C%E7%AE%A1%E7%90%86/" class="article-nav-link">
        <strong class="article-nav-caption">下一篇</strong>
        <div class="article-nav-title">IT运维面试问题总结-数据库、监控、网络管理.md</div>
      </a>
    
  </nav>

   
<!-- valine评论 -->
<div id="vcomments-box">
  <div id="vcomments"></div>
</div>
<script src="//cdn1.lncld.net/static/js/3.0.4/av-min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/valine@1.4.14/dist/Valine.min.js"></script>
<script>
  new Valine({
    el: "#vcomments",
    app_id: "",
    app_key: "",
    path: window.location.pathname,
    avatar: "monsterid",
    placeholder: "给我的文章加点评论吧~",
    recordIP: true,
  });
  const infoEle = document.querySelector("#vcomments .info");
  if (infoEle && infoEle.childNodes && infoEle.childNodes.length > 0) {
    infoEle.childNodes.forEach(function (item) {
      item.parentNode.removeChild(item);
    });
  }
</script>
<style>
  #vcomments-box {
    padding: 5px 30px;
  }

  @media screen and (max-width: 800px) {
    #vcomments-box {
      padding: 5px 0px;
    }
  }

  #vcomments-box #vcomments {
    background-color: #fff;
  }

  .v .vlist .vcard .vh {
    padding-right: 20px;
  }

  .v .vlist .vcard {
    padding-left: 10px;
  }
</style>

 
     
</article>

</section>
      <footer class="footer">
  <div class="outer">
    <ul>
      <li>
        Copyrights &copy;
        2015-2020
        <i class="ri-heart-fill heart_icon"></i> TzWind
      </li>
    </ul>
    <ul>
      <li>
        
        
        
        由 <a href="https://hexo.io" target="_blank">Hexo</a> 强力驱动
        <span class="division">|</span>
        主题 - <a href="https://github.com/Shen-Yu/hexo-theme-ayer" target="_blank">Ayer</a>
        
      </li>
    </ul>
    <ul>
      <li>
        
        
        <span>
  <span><i class="ri-user-3-fill"></i>访问人数:<span id="busuanzi_value_site_uv"></span></span>
  <span class="division">|</span>
  <span><i class="ri-eye-fill"></i>浏览次数:<span id="busuanzi_value_page_pv"></span></span>
</span>
        
      </li>
    </ul>
    <ul>
      
    </ul>
    <ul>
      
    </ul>
    <ul>
      <li>
        <!-- cnzz统计 -->
        
        <script type="text/javascript" src='https://s9.cnzz.com/z_stat.php?id=1278069914&amp;web_id=1278069914'></script>
        
      </li>
    </ul>
  </div>
</footer>
      <div class="float_btns">
        <div class="totop" id="totop">
  <i class="ri-arrow-up-line"></i>
</div>

<div class="todark" id="todark">
  <i class="ri-moon-line"></i>
</div>

      </div>
    </main>
    <aside class="sidebar on">
      <button class="navbar-toggle"></button>
<nav class="navbar">
  
  <div class="logo">
    <a href="/"><img src="/images/ayer-side.svg" alt="Hexo"></a>
  </div>
  
  <ul class="nav nav-main">
    
    <li class="nav-item">
      <a class="nav-item-link" href="/">主页</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" href="/archives">归档</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" href="/categories">分类</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" href="/tags">标签</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" target="_blank" rel="noopener" href="http://www.baidu.com">百度</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" href="/friends">友链</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" href="/2019/about">关于我</a>
    </li>
    
  </ul>
</nav>
<nav class="navbar navbar-bottom">
  <ul class="nav">
    <li class="nav-item">
      
      <a class="nav-item-link nav-item-search"  title="搜索">
        <i class="ri-search-line"></i>
      </a>
      
      
      <a class="nav-item-link" target="_blank" href="/atom.xml" title="RSS Feed">
        <i class="ri-rss-line"></i>
      </a>
      
    </li>
  </ul>
</nav>
<div class="search-form-wrap">
  <div class="local-search local-search-plugin">
  <input type="search" id="local-search-input" class="local-search-input" placeholder="Search...">
  <div id="local-search-result" class="local-search-result"></div>
</div>
</div>
    </aside>
    <script>
      if (window.matchMedia("(max-width: 768px)").matches) {
        document.querySelector('.content').classList.remove('on');
        document.querySelector('.sidebar').classList.remove('on');
      }
    </script>
    <div id="mask"></div>

<!-- #reward -->
<div id="reward">
  <span class="close"><i class="ri-close-line"></i></span>
  <p class="reward-p"><i class="ri-cup-line"></i>请我喝杯咖啡吧~</p>
  <div class="reward-box">
    
    
  </div>
</div>
    
<script src="/js/jquery-2.0.3.min.js"></script>


<script src="/js/lazyload.min.js"></script>

<!-- Tocbot -->


<script src="/js/tocbot.min.js"></script>

<script>
  tocbot.init({
    tocSelector: '.tocbot',
    contentSelector: '.article-entry',
    headingSelector: 'h1, h2, h3, h4, h5, h6',
    hasInnerContainers: true,
    scrollSmooth: true,
    scrollContainer: 'main',
    positionFixedSelector: '.tocbot',
    positionFixedClass: 'is-position-fixed',
    fixedSidebarOffset: 'auto'
  });
</script>

<script src="https://cdn.jsdelivr.net/npm/jquery-modal@0.9.2/jquery.modal.min.js"></script>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/jquery-modal@0.9.2/jquery.modal.min.css">
<script src="https://cdn.jsdelivr.net/npm/justifiedGallery@3.7.0/dist/js/jquery.justifiedGallery.min.js"></script>

<script src="/dist/main.js"></script>

<!-- ImageViewer -->

<!-- Root element of PhotoSwipe. Must have class pswp. -->
<div class="pswp" tabindex="-1" role="dialog" aria-hidden="true">

    <!-- Background of PhotoSwipe. 
         It's a separate element as animating opacity is faster than rgba(). -->
    <div class="pswp__bg"></div>

    <!-- Slides wrapper with overflow:hidden. -->
    <div class="pswp__scroll-wrap">

        <!-- Container that holds slides. 
            PhotoSwipe keeps only 3 of them in the DOM to save memory.
            Don't modify these 3 pswp__item elements, data is added later on. -->
        <div class="pswp__container">
            <div class="pswp__item"></div>
            <div class="pswp__item"></div>
            <div class="pswp__item"></div>
        </div>

        <!-- Default (PhotoSwipeUI_Default) interface on top of sliding area. Can be changed. -->
        <div class="pswp__ui pswp__ui--hidden">

            <div class="pswp__top-bar">

                <!--  Controls are self-explanatory. Order can be changed. -->

                <div class="pswp__counter"></div>

                <button class="pswp__button pswp__button--close" title="Close (Esc)"></button>

                <button class="pswp__button pswp__button--share" style="display:none" title="Share"></button>

                <button class="pswp__button pswp__button--fs" title="Toggle fullscreen"></button>

                <button class="pswp__button pswp__button--zoom" title="Zoom in/out"></button>

                <!-- Preloader demo http://codepen.io/dimsemenov/pen/yyBWoR -->
                <!-- element will get class pswp__preloader--active when preloader is running -->
                <div class="pswp__preloader">
                    <div class="pswp__preloader__icn">
                        <div class="pswp__preloader__cut">
                            <div class="pswp__preloader__donut"></div>
                        </div>
                    </div>
                </div>
            </div>

            <div class="pswp__share-modal pswp__share-modal--hidden pswp__single-tap">
                <div class="pswp__share-tooltip"></div>
            </div>

            <button class="pswp__button pswp__button--arrow--left" title="Previous (arrow left)">
            </button>

            <button class="pswp__button pswp__button--arrow--right" title="Next (arrow right)">
            </button>

            <div class="pswp__caption">
                <div class="pswp__caption__center"></div>
            </div>

        </div>

    </div>

</div>

<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/photoswipe.min.css">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/default-skin/default-skin.min.css">
<script src="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/photoswipe.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/photoswipe-ui-default.min.js"></script>

<script>
    function viewer_init() {
        let pswpElement = document.querySelectorAll('.pswp')[0];
        let $imgArr = document.querySelectorAll(('.article-entry img:not(.reward-img)'))

        $imgArr.forEach(($em, i) => {
            $em.onclick = () => {
                // slider展开状态
                // todo: 这样不好，后面改成状态
                if (document.querySelector('.left-col.show')) return
                let items = []
                $imgArr.forEach(($em2, i2) => {
                    let img = $em2.getAttribute('data-idx', i2)
                    let src = $em2.getAttribute('data-target') || $em2.getAttribute('src')
                    let title = $em2.getAttribute('alt')
                    // 获得原图尺寸
                    const image = new Image()
                    image.src = src
                    items.push({
                        src: src,
                        w: image.width || $em2.width,
                        h: image.height || $em2.height,
                        title: title
                    })
                })
                var gallery = new PhotoSwipe(pswpElement, PhotoSwipeUI_Default, items, {
                    index: parseInt(i)
                });
                gallery.init()
            }
        })
    }
    viewer_init()
</script>

<!-- MathJax -->

<!-- Katex -->

<!-- busuanzi  -->


<script src="/js/busuanzi-2.3.pure.min.js"></script>


<!-- ClickLove -->

<!-- ClickBoom1 -->

<!-- ClickBoom2 -->

<!-- CodeCopy -->


<link rel="stylesheet" href="/css/clipboard.css">

<script src="https://cdn.jsdelivr.net/npm/clipboard@2/dist/clipboard.min.js"></script>
<script>
  function wait(callback, seconds) {
    var timelag = null;
    timelag = window.setTimeout(callback, seconds);
  }
  !function (e, t, a) {
    var initCopyCode = function(){
      var copyHtml = '';
      copyHtml += '<button class="btn-copy" data-clipboard-snippet="">';
      copyHtml += '<i class="ri-file-copy-2-line"></i><span>COPY</span>';
      copyHtml += '</button>';
      $(".highlight .code pre").before(copyHtml);
      $(".article pre code").before(copyHtml);
      var clipboard = new ClipboardJS('.btn-copy', {
        target: function(trigger) {
          return trigger.nextElementSibling;
        }
      });
      clipboard.on('success', function(e) {
        let $btn = $(e.trigger);
        $btn.addClass('copied');
        let $icon = $($btn.find('i'));
        $icon.removeClass('ri-file-copy-2-line');
        $icon.addClass('ri-checkbox-circle-line');
        let $span = $($btn.find('span'));
        $span[0].innerText = 'COPIED';
        
        wait(function () { // 等待两秒钟后恢复
          $icon.removeClass('ri-checkbox-circle-line');
          $icon.addClass('ri-file-copy-2-line');
          $span[0].innerText = 'COPY';
        }, 2000);
      });
      clipboard.on('error', function(e) {
        e.clearSelection();
        let $btn = $(e.trigger);
        $btn.addClass('copy-failed');
        let $icon = $($btn.find('i'));
        $icon.removeClass('ri-file-copy-2-line');
        $icon.addClass('ri-time-line');
        let $span = $($btn.find('span'));
        $span[0].innerText = 'COPY FAILED';
        
        wait(function () { // 等待两秒钟后恢复
          $icon.removeClass('ri-time-line');
          $icon.addClass('ri-file-copy-2-line');
          $span[0].innerText = 'COPY';
        }, 2000);
      });
    }
    initCopyCode();
  }(window, document);
</script>


<!-- CanvasBackground -->


    
  </div>
</body>

</html>