

<!DOCTYPE html>
<html class="writer-html5" lang="en" >
<head>
  <meta charset="utf-8" />
  
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  
  <title>Health checks &mdash; Ceph Documentation</title>
  

  
  <link rel="stylesheet" href="../../../_static/ceph.css" type="text/css" />
  <link rel="stylesheet" href="../../../_static/pygments.css" type="text/css" />
  <link rel="stylesheet" href="../../../_static/graphviz.css" type="text/css" />
  <link rel="stylesheet" href="../../../_static/css/custom.css" type="text/css" />

  
  
    <link rel="shortcut icon" href="../../../_static/favicon.ico"/>
  

  
  

  

  
  <!--[if lt IE 9]>
    <script src="../../../_static/js/html5shiv.min.js"></script>
  <![endif]-->
  
    
      <script type="text/javascript" id="documentation_options" data-url_root="../../../" src="../../../_static/documentation_options.js"></script>
        <script src="../../../_static/jquery.js"></script>
        <script src="../../../_static/underscore.js"></script>
        <script src="../../../_static/doctools.js"></script>
    
    <script type="text/javascript" src="../../../_static/js/theme.js"></script>

    
    <link rel="index" title="Index" href="../../../genindex/" />
    <link rel="search" title="Search" href="../../../search/" />
    <link rel="next" title="监控集群" href="../monitoring/" />
    <link rel="prev" title="操纵集群" href="../operating/" /> 
</head>

<body class="wy-body-for-nav">

   
  <header class="top-bar">
    

















<div role="navigation" aria-label="breadcrumbs navigation">

  <ul class="wy-breadcrumbs">
    
      <li><a href="../../../" class="icon icon-home"></a> &raquo;</li>
        
          <li><a href="../../">Ceph 存储集群</a> &raquo;</li>
        
          <li><a href="../">集群运维</a> &raquo;</li>
        
      <li>健康检查</li>
    
    
      <li class="wy-breadcrumbs-aside">
        
          
            <a href="../../../_sources/rados/operations/health-checks.rst.txt" rel="nofollow"> View page source</a>
          
        
      </li>
    
  </ul>

  
  <hr/>
</div>
  </header>
  <div class="wy-grid-for-nav">
    

    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">

      
      <nav class="wy-nav-top" aria-label="top navigation">
        
          <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
          <a href="../../../">Ceph</a>
        
      </nav>


      <div class="wy-nav-content">
        
        <div class="rst-content">
        
          <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
           <div itemprop="articleBody">
            
<div id="dev-warning" class="admonition note">
  <p class="first admonition-title">Notice</p>
  <p class="last">This document is for a development version of Ceph.</p>
</div>
  <div id="docubetter" align="right" style="padding: 5px; font-weight: bold;">
    <a href="https://pad.ceph.com/p/Report_Documentation_Bugs">Report a Documentation Bug</a>
  </div>

  
  <div class="section" id="health-checks">
<span id="id1"></span><h1>健康检查<a class="headerlink" href="#health-checks" title="Permalink to this headline">¶</a></h1>
<div class="section" id="id2">
<h2>Overview<a class="headerlink" href="#id2" title="Permalink to this headline">¶</a></h2>
<p>There is a finite set of health messages that a Ceph cluster can raise.  These are defined as <em>health checks</em>, each of which has a unique identifier.</p>
<p>The identifier is a terse pseudo-human-readable (i.e. like a variable name)
string.  It is intended to enable tools (such as UIs) to make sense of
health checks, and present them in a way that reflects their meaning.</p>
<p>This page lists the health checks that are raised by the monitor and manager
daemons.  In addition to these, you may also see health checks that originate
from MDS daemons (see <a class="reference internal" href="../../../cephfs/health-messages/#cephfs-health-messages"><span class="std std-ref">CephFS 健康消息</span></a>), and health checks
that are defined by ceph-mgr python modules.</p>
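<p>For example, the health checks that are currently raised, along with their identifiers, can
be listed with the following commands (a minimal illustration; the exact output depends on the
state of your cluster):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># show the overall status plus the full list of active health checks
ceph health detail

# the same information in machine-readable form
ceph health detail --format json-pretty
</pre></div>
</div>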
</div>
<div class="section" id="id3">
<h2>Definitions<a class="headerlink" href="#id3" title="Permalink to this headline">¶</a></h2>
<div class="section" id="id4">
<h3>Monitors<a class="headerlink" href="#id4" title="Permalink to this headline">¶</a></h3>
<div class="section" id="daemon-old-version">
<h4>DAEMON_OLD_VERSION<a class="headerlink" href="#daemon-old-version" title="Permalink to this headline">¶</a></h4>
<p>One or more daemons is running an old version of Ceph.
A health error is raised if multiple versions are detected.
This condition must exist for longer than <code class="docutils literal notranslate"><span class="pre">mon_warn_older_version_delay</span></code> (set to one week by default) in order for the
health condition to be triggered.  This allows most upgrades to proceed
without falsely raising the warning.  If the upgrade is paused for an extended
period, the health warning can be muted with
<code class="docutils literal notranslate"><span class="pre">ceph</span> <span class="pre">health</span> <span class="pre">mute</span> <span class="pre">DAEMON_OLD_VERSION</span> <span class="pre">--sticky</span></code>.  Once the
upgrade has finished, run <code class="docutils literal notranslate"><span class="pre">ceph</span> <span class="pre">health</span> <span class="pre">unmute</span> <span class="pre">DAEMON_OLD_VERSION</span></code>.</p>
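<p>As a hedged sketch of the workflow described above, <code class="docutils literal notranslate"><span class="pre">ceph</span> <span class="pre">versions</span></code>
summarizes which release each daemon reports, and the mute can be applied while an upgrade is paused:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># list the Ceph releases currently reported by each daemon type
ceph versions

# mute the warning while an upgrade remains paused
ceph health mute DAEMON_OLD_VERSION --sticky

# after the upgrade has finished
ceph health unmute DAEMON_OLD_VERSION
</pre></div>
</div>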
</div>
<div class="section" id="mon-down">
<h4>MON_DOWN<a class="headerlink" href="#mon-down" title="Permalink to this headline">¶</a></h4>
<p>One or more monitor daemons is currently down.  The cluster requires a
majority (more than 1/2) of the monitors in order to function.  When
one or more monitors are down, clients may have a harder time forming
their initial connection to the cluster as they may need to try more
addresses before they reach an operating monitor.</p>
<p>The down monitor daemon should generally be restarted as soon as
possible to reduce the risk of a subsequent monitor failure leading to
a service outage.</p>
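<p>A sketch of the typical triage steps is shown below; the systemd unit name depends on how the
cluster was deployed, so adjust it for your environment:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># identify which monitors are out of quorum
ceph status
ceph quorum_status --format json-pretty

# on the affected host, restart the monitor (unit name is deployment-specific)
systemctl restart ceph-mon@&lt;hostname&gt;
</pre></div>
</div>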
</div>
<div class="section" id="mon-clock-skew">
<h4>MON_CLOCK_SKEW<a class="headerlink" href="#mon-clock-skew" title="Permalink to this headline">¶</a></h4>
<p>The clocks on the hosts running the ceph-mon monitor daemons are not
sufficiently well synchronized.  This health alert is raised if the
cluster detects a clock skew greater than <code class="docutils literal notranslate"><span class="pre">mon_clock_drift_allowed</span></code>.</p>
<p>This is best resolved by synchronizing the clocks using a tool like
<code class="docutils literal notranslate"><span class="pre">ntpd</span></code> or <code class="docutils literal notranslate"><span class="pre">chrony</span></code>.</p>
<p>If it is impractical to keep the clocks closely synchronized, the
<code class="docutils literal notranslate"><span class="pre">mon_clock_drift_allowed</span></code> threshold can also be increased, but this
value must stay significantly below the <code class="docutils literal notranslate"><span class="pre">mon_lease</span></code> interval in
order for the monitor cluster to function properly.</p>
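<p>The skew observed by the monitors, and the state of the local time synchronization daemon, can be
checked with commands like the following (<code class="docutils literal notranslate"><span class="pre">chronyc</span></code> assumes chrony is the NTP client in use):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># report the clock skew seen by the monitor cluster
ceph time-sync-status

# check the local time synchronization daemon (chrony shown here)
chronyc tracking
</pre></div>
</div>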
</div>
<div class="section" id="mon-msgr2-not-enabled">
<h4>MON_MSGR2_NOT_ENABLED<a class="headerlink" href="#mon-msgr2-not-enabled" title="Permalink to this headline">¶</a></h4>
<p>The <code class="xref std std-confval docutils literal notranslate"><span class="pre">ms_bind_msgr2</span></code> option is enabled but one or more monitors is
not configured to bind to a v2 port in the cluster’s monmap.  This
means that features specific to the msgr2 protocol (e.g., encryption)
are not available on some or all connections.</p>
<p>In most cases this can be corrected by issuing the command:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">mon</span> <span class="n">enable</span><span class="o">-</span><span class="n">msgr2</span>
</pre></div>
</div>
<p>That command will change any monitor configured for the old default
port 6789 to continue to listen for v1 connections on 6789 and also
listen for v2 connections on the new default 3300 port.</p>
<p>If a monitor is configured to listen for v1 connections on a non-standard port (not 6789), then the monmap will need to be modified manually.</p>
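<p>To see which monitors are missing a v2 address, dump the monitor map; each monitor should list
both a <code class="docutils literal notranslate"><span class="pre">v2:</span></code> and a <code class="docutils literal notranslate"><span class="pre">v1:</span></code> address:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph mon dump
</pre></div>
</div>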
</div>
<div class="section" id="mon-disk-low">
<h4>MON_DISK_LOW<a class="headerlink" href="#mon-disk-low" title="Permalink to this headline">¶</a></h4>
<p>One or more monitors is low on disk space.  This alert triggers if the
available space on the file system storing the monitor database
(normally <code class="docutils literal notranslate"><span class="pre">/var/lib/ceph/mon</span></code>), as a percentage, drops below
<code class="docutils literal notranslate"><span class="pre">mon_data_avail_warn</span></code> (default: 30%).</p>
<p>This may indicate that some other process or user on the system is
filling up the same file system used by the monitor.  It may also
indicate that the monitor’s database is large (see <code class="docutils literal notranslate"><span class="pre">MON_DISK_BIG</span></code>
below).</p>
<p>If space cannot be freed, the monitor’s data directory may need to be
moved to another storage device or file system (while the monitor
daemon is not running, of course).</p>
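<p>A quick way to see what is consuming space on the file system used by the monitor (the paths below
assume the default data directory layout):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># free space on the file system holding the monitor database
df -h /var/lib/ceph/mon

# size of each monitor data directory
du -sh /var/lib/ceph/mon/*
</pre></div>
</div>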
</div>
<div class="section" id="mon-disk-crit">
<h4>MON_DISK_CRIT<a class="headerlink" href="#mon-disk-crit" title="Permalink to this headline">¶</a></h4>
<p>One or more monitors is critically low on disk space.  This alert
triggers if the available space on the file system storing the monitor
database (normally <code class="docutils literal notranslate"><span class="pre">/var/lib/ceph/mon</span></code>), as a percentage, drops
below <code class="docutils literal notranslate"><span class="pre">mon_data_avail_crit</span></code> (default: 5%).  See <code class="docutils literal notranslate"><span class="pre">MON_DISK_LOW</span></code>, above.</p>
</div>
<div class="section" id="mon-disk-big">
<h4>MON_DISK_BIG<a class="headerlink" href="#mon-disk-big" title="Permalink to this headline">¶</a></h4>
<p>The database size for one or more monitors is very large.  This alert
triggers if the size of the monitor’s database is larger than
<code class="docutils literal notranslate"><span class="pre">mon_data_size_warn</span></code> (default: 15 GiB).</p>
<p>A large database is unusual, but may not necessarily indicate a
problem.  Monitor databases may grow in size when there are placement
groups that have not reached an <code class="docutils literal notranslate"><span class="pre">active+clean</span></code> state in a long time.</p>
<p>This may also indicate that the monitor’s database is not properly
compacting, which has been observed with some older versions of
leveldb and rocksdb.  Forcing a compaction with <code class="docutils literal notranslate"><span class="pre">ceph</span> <span class="pre">daemon</span> <span class="pre">mon.&lt;id&gt;</span>
<span class="pre">compact</span></code> may shrink the on-disk size.</p>
<p>This warning may also indicate that the monitor has a bug that is
preventing it from pruning the cluster metadata it stores.  If the
problem persists, please report a bug.</p>
<p>The warning threshold may be adjusted with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="k">global</span> <span class="n">mon_data_size_warn</span> <span class="o">&lt;</span><span class="n">size</span><span class="o">&gt;</span>
</pre></div>
</div>
</div>
<div class="section" id="auth-insecure-global-id-reclaim">
<h4>AUTH_INSECURE_GLOBAL_ID_RECLAIM<a class="headerlink" href="#auth-insecure-global-id-reclaim" title="Permalink to this headline">¶</a></h4>
<p>One or more clients or daemons are connected to the cluster that are
not securely reclaiming their global_id (a unique number identifying
each entity in the cluster) when reconnecting to a monitor.  The
client is being permitted to connect anyway because the
<code class="docutils literal notranslate"><span class="pre">auth_allow_insecure_global_id_reclaim</span></code> option is set to true (which may
be necessary until all ceph clients have been upgraded), and the
<code class="docutils literal notranslate"><span class="pre">auth_expose_insecure_global_id_reclaim</span></code> option set to <code class="docutils literal notranslate"><span class="pre">true</span></code> (which
allows monitors to detect clients with insecure reclaim early by forcing them to
reconnect right after they first authenticate).</p>
<p>You can identify which client(s) are using unpatched ceph client code with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">health</span> <span class="n">detail</span>
</pre></div>
</div>
<p>Clients’ global_id reclaim behavior can also be seen in the
<code class="docutils literal notranslate"><span class="pre">global_id_status</span></code> field in the dump of clients connected to an
individual monitor (<code class="docutils literal notranslate"><span class="pre">reclaim_insecure</span></code> means the client is
unpatched and is contributing to this health alert):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">tell</span> <span class="n">mon</span><span class="o">.</span>\<span class="o">*</span> <span class="n">sessions</span>
</pre></div>
</div>
<p>We strongly recommend that all clients in the system are upgraded to a
newer version of Ceph that correctly reclaims global_id values.  Once
all clients have been updated, you can stop allowing insecure reconnections
with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="n">mon</span> <span class="n">auth_allow_insecure_global_id_reclaim</span> <span class="n">false</span>
</pre></div>
</div>
<p>If it is impractical to upgrade all clients immediately, you can silence
this warning temporarily with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">health</span> <span class="n">mute</span> <span class="n">AUTH_INSECURE_GLOBAL_ID_RECLAIM</span> <span class="mi">1</span><span class="n">w</span>   <span class="c1"># 1 week</span>
</pre></div>
</div>
<p>Although we do NOT recommend doing so, you can also disable this warning indefinitely
with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="n">mon</span> <span class="n">mon_warn_on_insecure_global_id_reclaim</span> <span class="n">false</span>
</pre></div>
</div>
</div>
<div class="section" id="auth-insecure-global-id-reclaim-allowed">
<h4>AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED<a class="headerlink" href="#auth-insecure-global-id-reclaim-allowed" title="Permalink to this headline">¶</a></h4>
<p>Ceph is currently configured to allow clients to reconnect to monitors using
an insecure process to reclaim their previous global_id because the setting
<code class="docutils literal notranslate"><span class="pre">auth_allow_insecure_global_id_reclaim</span></code> is set to <code class="docutils literal notranslate"><span class="pre">true</span></code>.  It may be necessary to
leave this setting enabled while existing Ceph clients are upgraded to newer
versions of Ceph that correctly and securely reclaim their global_id.</p>
<p>If the <code class="docutils literal notranslate"><span class="pre">AUTH_INSECURE_GLOBAL_ID_RECLAIM</span></code> health alert has not also been raised and
the <code class="docutils literal notranslate"><span class="pre">auth_expose_insecure_global_id_reclaim</span></code> setting has not been disabled (it is
on by default), then there are currently no clients connected that need to be
upgraded, and it is safe to disallow insecure global_id reclaim with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="n">mon</span> <span class="n">auth_allow_insecure_global_id_reclaim</span> <span class="n">false</span>
</pre></div>
</div>
<p>If there are still clients that need to be upgraded, then this alert can be
silenced temporarily with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">health</span> <span class="n">mute</span> <span class="n">AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED</span> <span class="mi">1</span><span class="n">w</span>   <span class="c1"># 1 week</span>
</pre></div>
</div>
<p>Although we do NOT recommend doing so, you can also disable this warning indefinitely
with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="n">mon</span> <span class="n">mon_warn_on_insecure_global_id_reclaim_allowed</span> <span class="n">false</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="id5">
<h3>Manager<a class="headerlink" href="#id5" title="Permalink to this headline">¶</a></h3>
<div class="section" id="mgr-down">
<h4>MGR_DOWN<a class="headerlink" href="#mgr-down" title="Permalink to this headline">¶</a></h4>
<p>All manager daemons are currently down.  The cluster should normally
have at least one running manager (<code class="docutils literal notranslate"><span class="pre">ceph-mgr</span></code>) daemon.  If no
manager daemon is running, the cluster’s ability to monitor itself will
be compromised, and parts of the management API will become
unavailable (for example, the dashboard will not work, and most CLI
commands that report metrics or runtime state will block).  However,
the cluster will still be able to perform all IO operations and
recover from failures.</p>
<p>The down manager daemon should generally be restarted as soon as
possible to ensure that the cluster can be monitored (e.g., so that
the <code class="docutils literal notranslate"><span class="pre">ceph</span> <span class="pre">-s</span></code> information is up to date, and/or metrics can be
scraped by Prometheus).</p>
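<p>A sketch of the usual recovery steps; the systemd unit name varies with the deployment method:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># confirm that no manager is currently active
ceph status

# on one of the manager hosts, restart the daemon (unit name is deployment-specific)
systemctl restart ceph-mgr@&lt;hostname&gt;
</pre></div>
</div>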
</div>
<div class="section" id="mgr-module-dependency">
<h4>MGR_MODULE_DEPENDENCY<a class="headerlink" href="#mgr-module-dependency" title="Permalink to this headline">¶</a></h4>
<p>An enabled manager module is failing its dependency check.  This health check
should come with an explanatory message from the module about the problem.</p>
<p>For example, a module might report that a required package is not installed:
install the required package and restart your manager daemons.</p>
<p>This health check is only applied to enabled modules.  If a module is
not enabled, you can see whether it is reporting dependency issues in
the output of <cite>ceph mgr module ls</cite>.</p>
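<p>For example, the list of modules, whether they are enabled, and any reported errors can be
inspected with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph mgr module ls
ceph health detail
</pre></div>
</div>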
</div>
<div class="section" id="mgr-module-error">
<h4>MGR_MODULE_ERROR<a class="headerlink" href="#mgr-module-error" title="Permalink to this headline">¶</a></h4>
<p>A manager module has experienced an unexpected error.  Typically,
this means an unhandled exception was raised from the module’s <cite>serve</cite>
function.  The human readable description of the error may be obscurely
worded if the exception did not provide a useful description of itself.</p>
<p>This health check may indicate a bug: please open a Ceph bug report if you
think you have encountered a bug.</p>
<p>If you believe the error is transient, you may restart your manager
daemon(s), or use <cite>ceph mgr fail</cite> on the active daemon to prompt
a failover to another daemon.</p>
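<p>For example (the restart path shown is deployment-specific):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># fail over from the currently active manager to a standby
ceph mgr fail &lt;active-mgr-name&gt;

# or restart the manager daemon on its host
systemctl restart ceph-mgr@&lt;hostname&gt;
</pre></div>
</div>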
</div>
</div>
<div class="section" id="osds">
<h3>OSDs<a class="headerlink" href="#osds" title="Permalink to this headline">¶</a></h3>
<div class="section" id="osd-down">
<h4>OSD_DOWN<a class="headerlink" href="#osd-down" title="Permalink to this headline">¶</a></h4>
<p>One or more OSDs are marked down.  The ceph-osd daemon may have been stopped, or peer OSDs may be unable to reach the OSD over the network.  Common causes include a stopped or crashed daemon, a down host, or a network outage.</p>
<p>Verify that the host is healthy, the daemon is started, and the network is functioning.  If the daemon has crashed, the daemon log file
(<code class="docutils literal notranslate"><span class="pre">/var/log/ceph/ceph-osd.*</span></code>) may contain debugging information.</p>
</div>
<div class="section" id="osd-crush-type-down">
<h4>OSD_&lt;crush type&gt;_DOWN<a class="headerlink" href="#osd-crush-type-down" title="Permalink to this headline">¶</a></h4>
<p>(e.g. OSD_HOST_DOWN, OSD_ROOT_DOWN)</p>
<p>All of the OSDs within a particular CRUSH subtree are marked down, for example all OSDs on a host.</p>
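<p>To see which subtree is affected, inspect the CRUSH hierarchy and the health detail, e.g.:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph osd tree
ceph health detail
</pre></div>
</div>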
</div>
<div class="section" id="osd-orphan">
<h4>OSD_ORPHAN<a class="headerlink" href="#osd-orphan" title="Permalink to this headline">¶</a></h4>
<p>An OSD is referenced in the CRUSH map hierarchy but does not actually exist.</p>
<p>The OSD can be removed from the CRUSH hierarchy with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">crush</span> <span class="n">rm</span> <span class="n">osd</span><span class="o">.&lt;</span><span class="nb">id</span><span class="o">&gt;</span>
</pre></div>
</div>
</div>
<div class="section" id="osd-out-of-order-full">
<h4>OSD_OUT_OF_ORDER_FULL<a class="headerlink" href="#osd-out-of-order-full" title="Permalink to this headline">¶</a></h4>
<p>The utilization thresholds for <cite>nearfull</cite>, <cite>backfillfull</cite>, <cite>full</cite>,
and/or <cite>failsafe_full</cite> are not ascending.  In particular, we expect
<cite>nearfull &lt; backfillfull</cite>, <cite>backfillfull &lt; full</cite>, and <cite>full &lt;
failsafe_full</cite>.</p>
<p>The thresholds can be adjusted with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="nb">set</span><span class="o">-</span><span class="n">nearfull</span><span class="o">-</span><span class="n">ratio</span> <span class="o">&lt;</span><span class="n">ratio</span><span class="o">&gt;</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="nb">set</span><span class="o">-</span><span class="n">backfillfull</span><span class="o">-</span><span class="n">ratio</span> <span class="o">&lt;</span><span class="n">ratio</span><span class="o">&gt;</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="nb">set</span><span class="o">-</span><span class="n">full</span><span class="o">-</span><span class="n">ratio</span> <span class="o">&lt;</span><span class="n">ratio</span><span class="o">&gt;</span>
</pre></div>
</div>
</div>
<div class="section" id="osd-full">
<h4>OSD_FULL<a class="headerlink" href="#osd-full" title="Permalink to this headline">¶</a></h4>
<p>One or more OSDs has exceeded the <cite>full</cite> threshold and is preventing
the cluster from servicing writes.</p>
<p>Utilization by pool can be checked with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">df</span>
</pre></div>
</div>
<p>The currently defined <cite>full</cite> ratio can be seen with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">dump</span> <span class="o">|</span> <span class="n">grep</span> <span class="n">full_ratio</span>
</pre></div>
</div>
<p>A short-term workaround to restore write availability is to raise the full
threshold by a small amount:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="nb">set</span><span class="o">-</span><span class="n">full</span><span class="o">-</span><span class="n">ratio</span> <span class="o">&lt;</span><span class="n">ratio</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>New storage should be added to the cluster by deploying more OSDs or
existing data should be deleted in order to free up space.</p>
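<p>Per-OSD utilization, which is useful for finding the specific OSDs that crossed the threshold, can
be checked with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph osd df
</pre></div>
</div>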
</div>
<div class="section" id="osd-backfillfull">
<h4>OSD_BACKFILLFULL<a class="headerlink" href="#osd-backfillfull" title="Permalink to this headline">¶</a></h4>
<p>One or more OSDs has exceeded the <cite>backfillfull</cite> threshold, which will
prevent data from being allowed to rebalance to this device.  This is
an early warning that rebalancing may not be able to complete and that
the cluster is approaching full.</p>
<p>Utilization by pool can be checked with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">df</span>
</pre></div>
</div>
</div>
<div class="section" id="osd-nearfull">
<h4>OSD_NEARFULL<a class="headerlink" href="#osd-nearfull" title="Permalink to this headline">¶</a></h4>
<p>One or more OSDs has exceeded the <cite>nearfull</cite> threshold.  This is an early
warning that the cluster is approaching full.</p>
<p>Utilization by pool can be checked with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">df</span>
</pre></div>
</div>
</div>
<div class="section" id="osdmap-flags">
<h4>OSDMAP_FLAGS<a class="headerlink" href="#osdmap-flags" title="Permalink to this headline">¶</a></h4>
<p>One or more cluster flags of interest has been set.  These flags include:</p>
<ul class="simple">
<li><p><em>full</em> - the cluster is flagged as full and cannot serve writes</p></li>
<li><p><em>pauserd</em>, <em>pausewr</em> - paused reads or writes</p></li>
<li><p><em>noup</em> - OSDs are not allowed to start</p></li>
<li><p><em>nodown</em> - OSD failure reports are being ignored, such that the
monitors will not mark OSDs <cite>down</cite></p></li>
<li><p><em>noin</em> - OSDs that were previously marked <cite>out</cite> will not be marked
back <cite>in</cite> when they start</p></li>
<li><p><em>noout</em> - down OSDs will not automatically be marked out after the
configured interval</p></li>
<li><p><em>nobackfill</em>, <em>norecover</em>, <em>norebalance</em> - recovery or data
rebalancing is suspended</p></li>
<li><p><em>noscrub</em>, <em>nodeep-scrub</em> - scrubbing is disabled</p></li>
<li><p><em>notieragent</em> - cache tiering activity is suspended</p></li>
</ul>
<p>With the exception of <em>full</em>, these flags can be set or cleared with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">flag</span><span class="o">&gt;</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="n">unset</span> <span class="o">&lt;</span><span class="n">flag</span><span class="o">&gt;</span>
</pre></div>
</div>
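<p>The flags that are currently set are visible in the cluster status and in the OSD map, e.g.:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph status
ceph osd dump | grep flags
</pre></div>
</div>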
</div>
<div class="section" id="osd-flags">
<h4>OSD_FLAGS<a class="headerlink" href="#osd-flags" title="Permalink to this headline">¶</a></h4>
<p>One or more OSDs or CRUSH {nodes,device classes} has a flag of interest set.
These flags include:</p>
<ul class="simple">
<li><p><em>noup</em>: these OSDs are not allowed to start</p></li>
<li><p><em>nodown</em>: failure reports for these OSDs will be ignored</p></li>
<li><p><em>noin</em>: if these OSDs were previously marked <cite>out</cite> automatically
after a failure, they will not be marked in when they start</p></li>
<li><p><em>noout</em>: if these OSDs are down they will not automatically be marked
<cite>out</cite> after the configured interval</p></li>
</ul>
<p>These flags can be set and cleared in batch with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="nb">set</span><span class="o">-</span><span class="n">group</span> <span class="o">&lt;</span><span class="n">flags</span><span class="o">&gt;</span> <span class="o">&lt;</span><span class="n">who</span><span class="o">&gt;</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="n">unset</span><span class="o">-</span><span class="n">group</span> <span class="o">&lt;</span><span class="n">flags</span><span class="o">&gt;</span> <span class="o">&lt;</span><span class="n">who</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>For example:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="nb">set</span><span class="o">-</span><span class="n">group</span> <span class="n">noup</span><span class="p">,</span><span class="n">noout</span> <span class="n">osd</span><span class="mf">.0</span> <span class="n">osd</span><span class="mf">.1</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="n">unset</span><span class="o">-</span><span class="n">group</span> <span class="n">noup</span><span class="p">,</span><span class="n">noout</span> <span class="n">osd</span><span class="mf">.0</span> <span class="n">osd</span><span class="mf">.1</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="nb">set</span><span class="o">-</span><span class="n">group</span> <span class="n">noup</span><span class="p">,</span><span class="n">noout</span> <span class="n">host</span><span class="o">-</span><span class="n">foo</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="n">unset</span><span class="o">-</span><span class="n">group</span> <span class="n">noup</span><span class="p">,</span><span class="n">noout</span> <span class="n">host</span><span class="o">-</span><span class="n">foo</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="nb">set</span><span class="o">-</span><span class="n">group</span> <span class="n">noup</span><span class="p">,</span><span class="n">noout</span> <span class="n">class</span><span class="o">-</span><span class="n">hdd</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="n">unset</span><span class="o">-</span><span class="n">group</span> <span class="n">noup</span><span class="p">,</span><span class="n">noout</span> <span class="n">class</span><span class="o">-</span><span class="n">hdd</span>
</pre></div>
</div>
</div>
<div class="section" id="old-crush-tunables">
<h4>OLD_CRUSH_TUNABLES<a class="headerlink" href="#old-crush-tunables" title="Permalink to this headline">¶</a></h4>
<p>The CRUSH map is using very old settings and should be updated.  The oldest set of tunables that can be used (i.e., the oldest client version that can connect to the cluster) without triggering this health warning is determined by the
<code class="docutils literal notranslate"><span class="pre">mon_crush_min_required_version</span></code> config option.
See <a class="reference internal" href="../crush-map/#crush-map-tunables"><span class="std std-ref">Tunables</span></a> for more information.</p>
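<p>The tunables profile currently in effect can be inspected, and updated to a newer profile, with
commands like the following; note that changing tunables may trigger significant data movement:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># show the tunables currently in effect
ceph osd crush show-tunables

# switch to the optimal profile for the current release
ceph osd crush tunables optimal
</pre></div>
</div>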
</div>
<div class="section" id="old-crush-straw-calc-version">
<h4>OLD_CRUSH_STRAW_CALC_VERSION<a class="headerlink" href="#old-crush-straw-calc-version" title="Permalink to this headline">¶</a></h4>
<p>The CRUSH map is using an older, suboptimal method for calculating intermediate weight values for <code class="docutils literal notranslate"><span class="pre">straw</span></code> buckets.</p>
<p>The CRUSH map should be updated to use the newer method
(<code class="docutils literal notranslate"><span class="pre">straw_calc_version=1</span></code>).  See
<a class="reference internal" href="../crush-map/#crush-map-tunables"><span class="std std-ref">可调选项</span></a> for more information.</p>
</div>
<div class="section" id="cache-pool-no-hit-set">
<h4>CACHE_POOL_NO_HIT_SET<a class="headerlink" href="#cache-pool-no-hit-set" title="Permalink to this headline">¶</a></h4>
<p>One or more cache pools is not configured with a <em>hit set</em> to track
utilization, which will prevent the tiering agent from identifying
cold objects to flush and evict from the cache.</p>
<p>Hit sets can be configured on the cache pool with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">poolname</span><span class="o">&gt;</span> <span class="n">hit_set_type</span> <span class="o">&lt;</span><span class="nb">type</span><span class="o">&gt;</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">poolname</span><span class="o">&gt;</span> <span class="n">hit_set_period</span> <span class="o">&lt;</span><span class="n">period</span><span class="o">-</span><span class="ow">in</span><span class="o">-</span><span class="n">seconds</span><span class="o">&gt;</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">poolname</span><span class="o">&gt;</span> <span class="n">hit_set_count</span> <span class="o">&lt;</span><span class="n">number</span><span class="o">-</span><span class="n">of</span><span class="o">-</span><span class="n">hitsets</span><span class="o">&gt;</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">poolname</span><span class="o">&gt;</span> <span class="n">hit_set_fpp</span> <span class="o">&lt;</span><span class="n">target</span><span class="o">-</span><span class="n">false</span><span class="o">-</span><span class="n">positive</span><span class="o">-</span><span class="n">rate</span><span class="o">&gt;</span>
</pre></div>
</div>
</div>
<div class="section" id="osd-no-sortbitwise">
<h4>OSD_NO_SORTBITWISE<a class="headerlink" href="#osd-no-sortbitwise" title="Permalink to this headline">¶</a></h4>
<p>No pre-Luminous v12.y.z OSDs are running, but the <code class="docutils literal notranslate"><span class="pre">sortbitwise</span></code> flag has not been set.</p>
<p>The <code class="docutils literal notranslate"><span class="pre">sortbitwise</span></code> flag must be set before luminous v12.y.z or newer
OSDs can start.  You can safely set the flag with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="nb">set</span> <span class="n">sortbitwise</span>
</pre></div>
</div>
</div>
<div class="section" id="osd-filestore">
<h4>OSD_FILESTORE<a class="headerlink" href="#osd-filestore" title="Permalink to this headline">¶</a></h4>
<p>Filestore has been deprecated, given that BlueStore has been the default
object store for quite some time.  This warning is raised when OSDs are still running Filestore.</p>
<p>The ‘mclock_scheduler’ is not supported for filestore OSDs. Therefore, the
default ‘osd_op_queue’ is set to ‘wpq’ for filestore OSDs and is enforced
even if the user attempts to change it.</p>
<p>Filestore OSDs can be listed with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">report</span> <span class="o">|</span> <span class="n">jq</span> <span class="o">-</span><span class="n">c</span> <span class="s1">&#39;.&quot;osd_metadata&quot; | .[] | select(.osd_objectstore | contains(&quot;filestore&quot;)) | {id, osd_objectstore}&#39;</span>
</pre></div>
</div>
<p>If it is not feasible to migrate Filestore OSDs to Bluestore immediately, you can silence
this warning temporarily with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">health</span> <span class="n">mute</span> <span class="n">OSD_FILESTORE</span>
</pre></div>
</div>
</div>
<div class="section" id="pool-full">
<h4>POOL_FULL<a class="headerlink" href="#pool-full" title="Permalink to this headline">¶</a></h4>
<p>One or more pools has reached its quota and is no longer allowing writes.</p>
<p>Pool quotas and utilization can be seen with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">df</span> <span class="n">detail</span>
</pre></div>
</div>
<p>You can either raise the pool quota with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span><span class="o">-</span><span class="n">quota</span> <span class="o">&lt;</span><span class="n">poolname</span><span class="o">&gt;</span> <span class="n">max_objects</span> <span class="o">&lt;</span><span class="n">num</span><span class="o">-</span><span class="n">objects</span><span class="o">&gt;</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span><span class="o">-</span><span class="n">quota</span> <span class="o">&lt;</span><span class="n">poolname</span><span class="o">&gt;</span> <span class="n">max_bytes</span> <span class="o">&lt;</span><span class="n">num</span><span class="o">-</span><span class="nb">bytes</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>or delete some existing data to reduce utilization.</p>
</div>
<div class="section" id="bluefs-spillover">
<h4>BLUEFS_SPILLOVER<a class="headerlink" href="#bluefs-spillover" title="Permalink to this headline">¶</a></h4>
<p>One or more OSDs that use the BlueStore backend have been allocated
<cite>db</cite> partitions (storage space for metadata, normally on a faster
device) but that space has filled, such that metadata has “spilled
over” onto the normal slow device.  This isn’t necessarily an error
condition or even unexpected, but if the administrator’s expectation
was that all metadata would fit on the faster device, it indicates
that not enough space was provided.</p>
<p>This warning can be disabled on all OSDs with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="n">osd</span> <span class="n">bluestore_warn_on_bluefs_spillover</span> <span class="n">false</span>
</pre></div>
</div>
<p>Alternatively, it can be disabled on a specific OSD with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="n">osd</span><span class="mf">.123</span> <span class="n">bluestore_warn_on_bluefs_spillover</span> <span class="n">false</span>
</pre></div>
</div>
<p>To provide more metadata space, the OSD in question could be destroyed and
reprovisioned.  This will involve data migration and recovery.</p>
<p>It may also be possible to expand the LVM logical volume backing the
<cite>db</cite> storage.  If the underlying LV has been expanded, the OSD daemon
needs to be stopped and BlueFS informed of the device size change with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-$ID
</pre></div>
</div>
</div>
<div class="section" id="bluefs-available-space">
<h4>BLUEFS_AVAILABLE_SPACE<a class="headerlink" href="#bluefs-available-space" title="Permalink to this headline">¶</a></h4>
<p>To check how much space is free for BlueFS, run:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">daemon</span> <span class="n">osd</span><span class="mf">.123</span> <span class="n">bluestore</span> <span class="n">bluefs</span> <span class="n">available</span>
</pre></div>
</div>
<p>This will output up to three values: <cite>BDEV_DB free</cite>, <cite>BDEV_SLOW free</cite> and
<cite>available_from_bluestore</cite>. <cite>BDEV_DB</cite> and <cite>BDEV_SLOW</cite> report the amount of space that
has been acquired by BlueFS and is considered free. The value <cite>available_from_bluestore</cite>
indicates how much additional space BlueStore is able to relinquish to BlueFS.
It is normal for this value to differ from the amount of BlueStore free space, because
the BlueFS allocation unit is typically larger than the BlueStore allocation unit.
This means that only part of the BlueStore free space will be usable by BlueFS.</p>
</div>
<div class="section" id="bluefs-low-space">
<h4>BLUEFS_LOW_SPACE<a class="headerlink" href="#bluefs-low-space" title="Permalink to this headline">¶</a></h4>
<p>If BlueFS is running low on available free space and little
<cite>available_from_bluestore</cite> remains, consider reducing the BlueFS allocation unit size.
To simulate the available space with a different allocation unit, run:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">daemon</span> <span class="n">osd</span><span class="mf">.123</span> <span class="n">bluestore</span> <span class="n">bluefs</span> <span class="n">available</span> <span class="o">&lt;</span><span class="n">alloc</span><span class="o">-</span><span class="n">unit</span><span class="o">-</span><span class="n">size</span><span class="o">&gt;</span>
</pre></div>
</div>
</div>
<div class="section" id="bluestore-fragmentation">
<h4>BLUESTORE_FRAGMENTATION<a class="headerlink" href="#bluestore-fragmentation" title="Permalink to this headline">¶</a></h4>
<p>As BlueStore operates, free space on the underlying storage becomes fragmented.
This is normal and unavoidable, but excessive fragmentation causes slowdown.
To inspect BlueStore fragmentation, run:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">daemon</span> <span class="n">osd</span><span class="mf">.123</span> <span class="n">bluestore</span> <span class="n">allocator</span> <span class="n">score</span> <span class="n">block</span>
</pre></div>
</div>
<p>The score is given in the range [0, 1]:</p>
<ul class="simple">
<li><p>[0.0 .. 0.4] tiny fragmentation</p></li>
<li><p>[0.4 .. 0.7] small, acceptable fragmentation</p></li>
<li><p>[0.7 .. 0.9] considerable, but safe fragmentation</p></li>
<li><p>[0.9 .. 1.0] severe fragmentation, which may impact the ability of BlueFS to get space from BlueStore</p></li>
</ul>
<p>If a detailed report of free fragments is required, run:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">daemon</span> <span class="n">osd</span><span class="mf">.123</span> <span class="n">bluestore</span> <span class="n">allocator</span> <span class="n">dump</span> <span class="n">block</span>
</pre></div>
</div>
<p>When dealing with an OSD process that is not running, fragmentation can instead be
inspected with <cite>ceph-bluestore-tool</cite>.
To get the fragmentation score:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span><span class="o">-</span><span class="n">bluestore</span><span class="o">-</span><span class="n">tool</span> <span class="o">--</span><span class="n">path</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">ceph</span><span class="o">/</span><span class="n">osd</span><span class="o">/</span><span class="n">ceph</span><span class="o">-</span><span class="mi">123</span> <span class="o">--</span><span class="n">allocator</span> <span class="n">block</span> <span class="n">free</span><span class="o">-</span><span class="n">score</span>
</pre></div>
</div>
<p>To dump detailed free chunks:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span><span class="o">-</span><span class="n">bluestore</span><span class="o">-</span><span class="n">tool</span> <span class="o">--</span><span class="n">path</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">ceph</span><span class="o">/</span><span class="n">osd</span><span class="o">/</span><span class="n">ceph</span><span class="o">-</span><span class="mi">123</span> <span class="o">--</span><span class="n">allocator</span> <span class="n">block</span> <span class="n">free</span><span class="o">-</span><span class="n">dump</span>
</pre></div>
</div>
</div>
<div class="section" id="bluestore-legacy-statfs">
<h4>BLUESTORE_LEGACY_STATFS<a class="headerlink" href="#bluestore-legacy-statfs" title="Permalink to this headline">¶</a></h4>
<p>In the Nautilus release, BlueStore tracks its internal usage
statistics at per-pool granularity, and one or more OSDs have
BlueStore volumes that were created prior to Nautilus.  If <em>all</em> OSDs
are older than Nautilus, this just means that the per-pool metrics are
not available.  However, if there is a mix of pre-Nautilus and
post-Nautilus OSDs, the cluster usage statistics reported by <code class="docutils literal notranslate"><span class="pre">ceph</span>
<span class="pre">df</span></code> will not be accurate.</p>
<p>The old OSDs can be updated to use the new usage tracking scheme by stopping each OSD, running a repair operation, and then restarting it.  For example, if <code class="docutils literal notranslate"><span class="pre">osd.123</span></code> needs to be updated:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">systemctl</span> <span class="n">stop</span> <span class="n">ceph</span><span class="o">-</span><span class="n">osd</span><span class="o">@</span><span class="mi">123</span>
<span class="n">ceph</span><span class="o">-</span><span class="n">bluestore</span><span class="o">-</span><span class="n">tool</span> <span class="n">repair</span> <span class="o">--</span><span class="n">path</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">ceph</span><span class="o">/</span><span class="n">osd</span><span class="o">/</span><span class="n">ceph</span><span class="o">-</span><span class="mi">123</span>
<span class="n">systemctl</span> <span class="n">start</span> <span class="n">ceph</span><span class="o">-</span><span class="n">osd</span><span class="o">@</span><span class="mi">123</span>
</pre></div>
</div>
<p>This warning can be disabled with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="k">global</span> <span class="n">bluestore_warn_on_legacy_statfs</span> <span class="n">false</span>
</pre></div>
</div>
</div>
<div class="section" id="bluestore-no-per-pool-omap">
<h4>BLUESTORE_NO_PER_POOL_OMAP<a class="headerlink" href="#bluestore-no-per-pool-omap" title="Permalink to this headline">¶</a></h4>
<p>Starting with the Octopus release, BlueStore tracks omap space utilization
by pool, and one or more OSDs have volumes that were created prior to
Octopus.  Until all OSDs are running BlueStore with the new tracking
enabled, the cluster will report an approximate value for per-pool omap usage
based on the most recent deep scrub.</p>
<p>The old OSDs can be updated to track by pool by stopping each OSD,
running a repair operation, and then restarting it.  For example, if
<code class="docutils literal notranslate"><span class="pre">osd.123</span></code> needs to be updated:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">systemctl</span> <span class="n">stop</span> <span class="n">ceph</span><span class="o">-</span><span class="n">osd</span><span class="o">@</span><span class="mi">123</span>
<span class="n">ceph</span><span class="o">-</span><span class="n">bluestore</span><span class="o">-</span><span class="n">tool</span> <span class="n">repair</span> <span class="o">--</span><span class="n">path</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">ceph</span><span class="o">/</span><span class="n">osd</span><span class="o">/</span><span class="n">ceph</span><span class="o">-</span><span class="mi">123</span>
<span class="n">systemctl</span> <span class="n">start</span> <span class="n">ceph</span><span class="o">-</span><span class="n">osd</span><span class="o">@</span><span class="mi">123</span>
</pre></div>
</div>
<p>This warning can be disabled with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="k">global</span> <span class="n">bluestore_warn_on_no_per_pool_omap</span> <span class="n">false</span>
</pre></div>
</div>
</div>
<div class="section" id="bluestore-no-per-pg-omap">
<h4>BLUESTORE_NO_PER_PG_OMAP<a class="headerlink" href="#bluestore-no-per-pg-omap" title="Permalink to this headline">¶</a></h4>
<p>Starting with the Pacific release, BlueStore tracks omap space utilization
by PG, and one or more OSDs have volumes that were created prior to
Pacific.  Per-PG omap enables faster PG removal when PGs migrate.</p>
<p>The older OSDs can be updated to track by PG by stopping each OSD,
running a repair operation, and then restarting it.  For example, if
<code class="docutils literal notranslate"><span class="pre">osd.123</span></code> needs to be updated:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">systemctl</span> <span class="n">stop</span> <span class="n">ceph</span><span class="o">-</span><span class="n">osd</span><span class="o">@</span><span class="mi">123</span>
<span class="n">ceph</span><span class="o">-</span><span class="n">bluestore</span><span class="o">-</span><span class="n">tool</span> <span class="n">repair</span> <span class="o">--</span><span class="n">path</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">ceph</span><span class="o">/</span><span class="n">osd</span><span class="o">/</span><span class="n">ceph</span><span class="o">-</span><span class="mi">123</span>
<span class="n">systemctl</span> <span class="n">start</span> <span class="n">ceph</span><span class="o">-</span><span class="n">osd</span><span class="o">@</span><span class="mi">123</span>
</pre></div>
</div>
<p>This warning can be disabled with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="k">global</span> <span class="n">bluestore_warn_on_no_per_pg_omap</span> <span class="n">false</span>
</pre></div>
</div>
</div>
<div class="section" id="bluestore-disk-size-mismatch">
<h4>BLUESTORE_DISK_SIZE_MISMATCH<a class="headerlink" href="#bluestore-disk-size-mismatch" title="Permalink to this headline">¶</a></h4>
<p>One or more OSDs using BlueStore has an internal inconsistency between the size
of the physical device and the metadata tracking its size.  This can lead to
the OSD crashing in the future.</p>
<p>The OSDs in question should be destroyed and reprovisioned.  Care should be
taken to do this one OSD at a time, and in a way that doesn’t put any data at
risk.  For example, if osd <code class="docutils literal notranslate"><span class="pre">$N</span></code> has the error:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph osd out osd.$N
while ! ceph osd safe-to-destroy osd.$N ; do sleep 1m ; done
ceph osd destroy osd.$N
ceph-volume lvm zap /path/to/device
ceph-volume lvm create --osd-id $N --data /path/to/device
</pre></div>
</div>
</div>
<div class="section" id="bluestore-no-compression">
<h4>BLUESTORE_NO_COMPRESSION<a class="headerlink" href="#bluestore-no-compression" title="Permalink to this headline">¶</a></h4>
<p>One or more OSDs is unable to load a BlueStore compression plugin.
This can be caused by a broken installation, in which the <code class="docutils literal notranslate"><span class="pre">ceph-osd</span></code>
binary does not match the compression plugins, or a recent upgrade
that did not include a restart of the <code class="docutils literal notranslate"><span class="pre">ceph-osd</span></code> daemon.</p>
<p>Verify that the package(s) on the host running the OSD(s) in question
are correctly installed and that the OSD daemon(s) have been
restarted.  If the problem persists, check the OSD log for any clues
as to the source of the problem.</p>
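<p>For example, on a systemd-managed host the affected OSD daemon can be restarted
and its recent log output reviewed along these lines (a sketch, assuming the
<code class="docutils literal notranslate"><span class="pre">ceph-osd@&lt;id&gt;</span></code> unit naming used elsewhere in this document):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>systemctl restart ceph-osd@123
journalctl -u ceph-osd@123 --since "1 hour ago"
</pre></div>
</div>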
</div>
<div class="section" id="bluestore-spurious-read-errors">
<h4>BLUESTORE_SPURIOUS_READ_ERRORS<a class="headerlink" href="#bluestore-spurious-read-errors" title="Permalink to this headline">¶</a></h4>
<p>One or more OSDs using BlueStore has detected spurious read errors on the main device.
BlueStore has recovered from these errors by retrying the disk reads, but they
may indicate problems with the underlying hardware or I/O subsystem that could,
in theory, lead to permanent data corruption.
Some observations on the root cause can be found at
<a class="reference external" href="https://tracker.ceph.com/issues/22464">https://tracker.ceph.com/issues/22464</a></p>
<p>This alert does not require an immediate response, but the affected host may need
additional attention, e.g. upgrading to the latest OS/kernel version and
monitoring hardware resource utilization.</p>
<p>This warning can be disabled on all OSDs with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="n">osd</span> <span class="n">bluestore_warn_on_spurious_read_errors</span> <span class="n">false</span>
</pre></div>
</div>
<p>Alternatively, it can be disabled on a specific OSD with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="n">osd</span><span class="mf">.123</span> <span class="n">bluestore_warn_on_spurious_read_errors</span> <span class="n">false</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="id6">
<h3>Device health<a class="headerlink" href="#id6" title="Permalink to this headline">¶</a></h3>
<div class="section" id="device-health">
<h4>DEVICE_HEALTH<a class="headerlink" href="#device-health" title="Permalink to this headline">¶</a></h4>
<p>One or more devices is expected to fail soon, where the warning
threshold is controlled by the <code class="docutils literal notranslate"><span class="pre">mgr/devicehealth/warn_threshold</span></code>
config option.</p>
<p>This warning only applies to OSDs that are currently marked “in”, so
the expected response to this failure is to mark the device “out” so
that data is migrated off of the device, and then to remove the
hardware from the system.  Note that the marking out is normally done
automatically if <code class="docutils literal notranslate"><span class="pre">mgr/devicehealth/self_heal</span></code> is enabled based on
the <code class="docutils literal notranslate"><span class="pre">mgr/devicehealth/mark_out_threshold</span></code>.</p>
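<p>For example, assuming the failing device backs <code class="docutils literal notranslate"><span class="pre">osd.&lt;N&gt;</span></code>
(the OSD id must first be determined from the device), migration can be started
manually by marking that OSD “out”:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph osd out osd.&lt;N&gt;
</pre></div>
</div>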
<p>Device health can be checked with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">device</span> <span class="n">info</span> <span class="o">&lt;</span><span class="n">device</span><span class="o">-</span><span class="nb">id</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>Device life expectancy is set by a prediction model run by
the mgr or by an external tool via the command:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">device</span> <span class="nb">set</span><span class="o">-</span><span class="n">life</span><span class="o">-</span><span class="n">expectancy</span> <span class="o">&lt;</span><span class="n">device</span><span class="o">-</span><span class="nb">id</span><span class="o">&gt;</span> <span class="o">&lt;</span><span class="n">from</span><span class="o">&gt;</span> <span class="o">&lt;</span><span class="n">to</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>You can change the stored life expectancy manually, but that usually
doesn’t accomplish anything as whatever tool originally set it will
probably set it again, and changing the stored value does not affect
the actual health of the hardware device.</p>
</div>
<div class="section" id="device-health-in-use">
<h4>DEVICE_HEALTH_IN_USE<a class="headerlink" href="#device-health-in-use" title="Permalink to this headline">¶</a></h4>
<p>One or more devices is expected to fail soon and has been marked “out”
of the cluster based on <code class="docutils literal notranslate"><span class="pre">mgr/devicehealth/mark_out_threshold</span></code>, but it
is still participating in one or more PGs.  This may be because it was
only recently marked “out” and data is still migrating, or because data
cannot be migrated off for some reason (e.g., the cluster is nearly
full, or the CRUSH hierarchy is such that there isn’t another suitable
OSD to migrate the data to).</p>
<p>This message can be silenced by disabling the self heal behavior
(setting <code class="docutils literal notranslate"><span class="pre">mgr/devicehealth/self_heal</span></code> to false), by adjusting the
<code class="docutils literal notranslate"><span class="pre">mgr/devicehealth/mark_out_threshold</span></code>, or by addressing what is
preventing data from being migrated off of the ailing device.</p>
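<p>A sketch of the first two options, assuming these module options are adjusted
on the mgr via <code class="docutils literal notranslate"><span class="pre">ceph</span> <span class="pre">config</span></code> and that the threshold is expressed in seconds:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph config set mgr mgr/devicehealth/self_heal false
ceph config set mgr mgr/devicehealth/mark_out_threshold &lt;seconds&gt;
</pre></div>
</div>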
</div>
<div class="section" id="device-health-toomany">
<h4>DEVICE_HEALTH_TOOMANY<a class="headerlink" href="#device-health-toomany" title="Permalink to this headline">¶</a></h4>
<p>Too many devices are expected to fail soon and the
<code class="docutils literal notranslate"><span class="pre">mgr/devicehealth/self_heal</span></code> behavior is enabled, such that marking
out all of the ailing devices would exceed the cluster’s
<code class="docutils literal notranslate"><span class="pre">mon_osd_min_in_ratio</span></code> ratio, which prevents too many OSDs from being
automatically marked “out”.</p>
<p>This generally indicates that too many devices in your cluster are
expected to fail soon and you should take action to add newer
(healthier) devices before too many devices fail and data is lost.</p>
<p>The health message can also be silenced by adjusting parameters like
<code class="docutils literal notranslate"><span class="pre">mon_osd_min_in_ratio</span></code> or <code class="docutils literal notranslate"><span class="pre">mgr/devicehealth/mark_out_threshold</span></code>,
but be warned that this will increase the likelihood of unrecoverable
data loss in the cluster.</p>
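<p>For illustration only, a hedged sketch of adjusting these parameters (the
values are placeholders, not recommendations):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph config set mon mon_osd_min_in_ratio &lt;ratio&gt;
ceph config set mgr mgr/devicehealth/mark_out_threshold &lt;seconds&gt;
</pre></div>
</div>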
</div>
</div>
<div class="section" id="id7">
<h3>Data health (pools and placement groups)<a class="headerlink" href="#id7" title="Permalink to this headline">¶</a></h3>
<div class="section" id="pg-availability">
<h4>PG_AVAILABILITY<a class="headerlink" href="#pg-availability" title="Permalink to this headline">¶</a></h4>
<p>Data availability is reduced, meaning that the cluster is unable to
service potential read or write requests for some data in the cluster.
Specifically, one or more PGs is in a state that does not allow IO
requests to be serviced.  Problematic PG states include <em>peering</em>,
<em>stale</em>, <em>incomplete</em>, and the lack of <em>active</em> (if those conditions do not clear
quickly).</p>
<p>Detailed information about which PGs are affected is available from:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">health</span> <span class="n">detail</span>
</pre></div>
</div>
<p>In most cases the root cause is that one or more OSDs is currently
down; see the discussion for <code class="docutils literal notranslate"><span class="pre">OSD_DOWN</span></code> above.</p>
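<p>On recent releases the down OSDs can usually be listed directly with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph osd tree down
</pre></div>
</div>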
<p>The state of specific problematic PGs can be queried with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">tell</span> <span class="o">&lt;</span><span class="n">pgid</span><span class="o">&gt;</span> <span class="n">query</span>
</pre></div>
</div>
</div>
<div class="section" id="pg-degraded">
<h4>PG_DEGRADED<a class="headerlink" href="#pg-degraded" title="Permalink to this headline">¶</a></h4>
<p>Data redundancy is reduced for some data, meaning the cluster does not
have the desired number of replicas for all data (for replicated
pools) or erasure code fragments (for erasure coded pools).
Specifically, one or more PGs:</p>
<ul class="simple">
<li><p>has the <em>degraded</em> or <em>undersized</em> flag set, meaning there are not
enough instances of that placement group in the cluster;</p></li>
<li><p>has not had the <em>clean</em> flag set for some time.</p></li>
</ul>
<p>Detailed information about which PGs are affected is available from:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">health</span> <span class="n">detail</span>
</pre></div>
</div>
<p>In most cases the root cause is that one or more OSDs is currently
down; see the discussion for <code class="docutils literal notranslate"><span class="pre">OSD_DOWN</span></code> above.</p>
<p>The state of specific problematic PGs can be queried with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">tell</span> <span class="o">&lt;</span><span class="n">pgid</span><span class="o">&gt;</span> <span class="n">query</span>
</pre></div>
</div>
</div>
<div class="section" id="pg-recovery-full">
<h4>PG_RECOVERY_FULL<a class="headerlink" href="#pg-recovery-full" title="Permalink to this headline">¶</a></h4>
<p>Data redundancy may be reduced or at risk for some data due to a lack
of free space in the cluster.  Specifically, one or more PGs has the
<em>recovery_toofull</em> flag set, meaning that the
cluster is unable to migrate or recover data because one or more OSDs
is above the <em>full</em> threshold.</p>
<p>See the discussion for <em>OSD_FULL</em> above for steps to resolve this condition.</p>
</div>
<div class="section" id="pg-backfill-full">
<h4>PG_BACKFILL_FULL<a class="headerlink" href="#pg-backfill-full" title="Permalink to this headline">¶</a></h4>
<p>Data redundancy may be reduced or at risk for some data due to a lack
of free space in the cluster.  Specifically, one or more PGs has the
<em>backfill_toofull</em> flag set, meaning that the
cluster is unable to migrate or recover data because one or more OSDs
is above the <em>backfillfull</em> threshold.</p>
<p>See the discussion for <em>OSD_BACKFILLFULL</em> above for
steps to resolve this condition.</p>
</div>
<div class="section" id="pg-damaged">
<h4>PG_DAMAGED<a class="headerlink" href="#pg-damaged" title="Permalink to this headline">¶</a></h4>
<p>Data scrubbing has discovered some problems with data consistency in
the cluster.  Specifically, one or more PGs has the <em>inconsistent</em> or
<em>snaptrim_error</em> flag set, indicating that an earlier scrub operation
found a problem, or has the <em>repair</em> flag set, meaning a repair
for such an inconsistency is currently in progress.</p>
<p>See <a class="reference internal" href="../pg-repair/"><span class="doc">Repairing PG inconsistencies</span></a> for more information.</p>
</div>
<div class="section" id="osd-scrub-errors">
<h4>OSD_SCRUB_ERRORS<a class="headerlink" href="#osd-scrub-errors" title="Permalink to this headline">¶</a></h4>
<p>Recent OSD scrubs have uncovered inconsistencies.  This error is generally
paired with <em>PG_DAMAGED</em> (see above).</p>
<p>See <a class="reference internal" href="../pg-repair/"><span class="doc">Repairing PG inconsistencies</span></a> for more information.</p>
</div>
<div class="section" id="osd-too-many-repairs">
<h4>OSD_TOO_MANY_REPAIRS<a class="headerlink" href="#osd-too-many-repairs" title="Permalink to this headline">¶</a></h4>
<p>When a read error occurs and another replica is available it is used to repair
the error immediately, so that the client can get the object data.  Scrub
handles errors for data at rest.  In order to identify possible failing disks
that aren’t seeing scrub errors, a count of read repairs is maintained.  If
this count exceeds the threshold set by the <em>mon_osd_warn_num_repaired</em>
config option (default: 10), this health warning is generated.</p>
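<p>If the affected drive has been examined and the warning is not yet considered
actionable, the threshold can be raised; a sketch, assuming the option is
adjustable at the global level and using an illustrative value:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph config set global mon_osd_warn_num_repaired 50
</pre></div>
</div>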
</div>
<div class="section" id="large-omap-objects">
<h4>LARGE_OMAP_OBJECTS<a class="headerlink" href="#large-omap-objects" title="Permalink to this headline">¶</a></h4>
<p>One or more pools contain large omap objects as determined by
<code class="docutils literal notranslate"><span class="pre">osd_deep_scrub_large_omap_object_key_threshold</span></code> (threshold for number of keys
to determine a large omap object) or
<code class="docutils literal notranslate"><span class="pre">osd_deep_scrub_large_omap_object_value_sum_threshold</span></code> (the threshold for
summed size (bytes) of all key values to determine a large omap object) or both.
More information on the object name, key count, and size in bytes can be found
by searching the cluster log for ‘Large omap object found’. Large omap objects
can be caused by RGW bucket index objects that do not have automatic resharding
enabled. Please see <a class="reference internal" href="../../../radosgw/dynamicresharding/#rgw-dynamic-bucket-index-resharding"><span class="std std-ref">RGW Dynamic Bucket Index Resharding</span></a> for more information on resharding.</p>
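<p>For example, a simple way to locate the relevant log entries, assuming the
cluster log is at its default location on a monitor host:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>grep 'Large omap object found' /var/log/ceph/ceph.log
</pre></div>
</div>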
<p>The thresholds can be adjusted with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="n">osd</span> <span class="n">osd_deep_scrub_large_omap_object_key_threshold</span> <span class="o">&lt;</span><span class="n">keys</span><span class="o">&gt;</span>
<span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="n">osd</span> <span class="n">osd_deep_scrub_large_omap_object_value_sum_threshold</span> <span class="o">&lt;</span><span class="nb">bytes</span><span class="o">&gt;</span>
</pre></div>
</div>
</div>
<div class="section" id="cache-pool-near-full">
<h4>CACHE_POOL_NEAR_FULL<a class="headerlink" href="#cache-pool-near-full" title="Permalink to this headline">¶</a></h4>
<p>A cache tier pool is nearly full.  Full in this context is determined
by the <code class="docutils literal notranslate"><span class="pre">target_max_bytes</span></code> and <code class="docutils literal notranslate"><span class="pre">target_max_objects</span></code> properties on
the cache pool.  Once the pool reaches the target threshold, write
requests to the pool may block while data is flushed and evicted
from the cache, a state that normally leads to very high latencies and
poor performance.</p>
<p>The cache pool target size can be adjusted with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">cache</span><span class="o">-</span><span class="n">pool</span><span class="o">-</span><span class="n">name</span><span class="o">&gt;</span> <span class="n">target_max_bytes</span> <span class="o">&lt;</span><span class="nb">bytes</span><span class="o">&gt;</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">cache</span><span class="o">-</span><span class="n">pool</span><span class="o">-</span><span class="n">name</span><span class="o">&gt;</span> <span class="n">target_max_objects</span> <span class="o">&lt;</span><span class="n">objects</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>Normal cache flush and evict activity may also be throttled due to reduced
availability or performance of the base tier, or overall cluster load.</p>
</div>
<div class="section" id="too-few-pgs">
<h4>TOO_FEW_PGS<a class="headerlink" href="#too-few-pgs" title="Permalink to this headline">¶</a></h4>
<p>The number of PGs in use in the cluster is below the configurable
threshold of <code class="docutils literal notranslate"><span class="pre">mon_pg_warn_min_per_osd</span></code> PGs per OSD.  This can lead
to suboptimal distribution and balance of data across the OSDs in
the cluster, and similarly reduce overall performance.</p>
<p>This may be an expected condition if data pools have not yet been
created.</p>
<p>The PG count for existing pools can be increased or new pools can be created.
Please refer to <a class="reference internal" href="../placement-groups/#choosing-number-of-placement-groups"><span class="std std-ref">Choosing the number of Placement Groups</span></a> for more
information.</p>
</div>
<div class="section" id="pool-pg-num-not-power-of-two">
<h4>POOL_PG_NUM_NOT_POWER_OF_TWO<a class="headerlink" href="#pool-pg-num-not-power-of-two" title="Permalink to this headline">¶</a></h4>
<p>One or more pools has a <code class="docutils literal notranslate"><span class="pre">pg_num</span></code> value that is not a power of two.
Although this is not strictly incorrect, it does lead to a less
balanced distribution of data because some PGs have roughly twice as
much data as others.</p>
<p>This is easily corrected by setting the <code class="docutils literal notranslate"><span class="pre">pg_num</span></code> value for the
affected pool(s) to a nearby power of two:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">pool</span><span class="o">-</span><span class="n">name</span><span class="o">&gt;</span> <span class="n">pg_num</span> <span class="o">&lt;</span><span class="n">value</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>This health warning can be disabled with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="k">global</span> <span class="n">mon_warn_on_pool_pg_num_not_power_of_two</span> <span class="n">false</span>
</pre></div>
</div>
</div>
<div class="section" id="pool-too-few-pgs">
<h4>POOL_TOO_FEW_PGS<a class="headerlink" href="#pool-too-few-pgs" title="Permalink to this headline">¶</a></h4>
<p>One or more pools should probably have more PGs, based on the amount
of data that is currently stored in the pool.  This can lead to
suboptimal distribution and balance of data across the OSDs in the
cluster, and similarly reduce overall performance.  This warning is
generated if the <code class="docutils literal notranslate"><span class="pre">pg_autoscale_mode</span></code> property on the pool is set to
<code class="docutils literal notranslate"><span class="pre">warn</span></code>.</p>
<p>To disable the warning, you can disable auto-scaling of PGs for the
pool entirely with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">pool</span><span class="o">-</span><span class="n">name</span><span class="o">&gt;</span> <span class="n">pg_autoscale_mode</span> <span class="n">off</span>
</pre></div>
</div>
<p>To allow the cluster to automatically adjust the number of PGs:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">pool</span><span class="o">-</span><span class="n">name</span><span class="o">&gt;</span> <span class="n">pg_autoscale_mode</span> <span class="n">on</span>
</pre></div>
</div>
<p>You can also manually set the number of PGs for the pool to the
recommended amount with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">pool</span><span class="o">-</span><span class="n">name</span><span class="o">&gt;</span> <span class="n">pg_num</span> <span class="o">&lt;</span><span class="n">new</span><span class="o">-</span><span class="n">pg</span><span class="o">-</span><span class="n">num</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>Please refer to <a class="reference internal" href="../placement-groups/#choosing-number-of-placement-groups"><span class="std std-ref">确定归置组数量</span></a> and
<a class="reference internal" href="../placement-groups/#pg-autoscaler"><span class="std std-ref">自伸缩归置组</span></a> for more information.</p>
</div>
<div class="section" id="too-many-pgs">
<h4>TOO_MANY_PGS<a class="headerlink" href="#too-many-pgs" title="Permalink to this headline">¶</a></h4>
<p>The number of PGs in use in the cluster is above the configurable
threshold of <code class="docutils literal notranslate"><span class="pre">mon_max_pg_per_osd</span></code> PGs per OSD.  If this threshold is
exceeded, the cluster will not allow new pools to be created, pool <cite>pg_num</cite> to
be increased, or pool replication to be increased (any of which would lead to
more PGs in the cluster).  A large number of PGs can lead
to higher memory utilization for OSD daemons, slower peering after
cluster state changes (like OSD restarts, additions, or removals), and
higher load on the Manager and Monitor daemons.</p>
<p>The simplest way to mitigate the problem is to increase the number of
OSDs in the cluster by adding more hardware.  Note that the OSD count
used for the purposes of this health check is the number of “in” OSDs,
so marking “out” OSDs “in” (if there are any) can also help:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="ow">in</span> <span class="o">&lt;</span><span class="n">osd</span> <span class="nb">id</span><span class="p">(</span><span class="n">s</span><span class="p">)</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>Please refer to <a class="reference internal" href="../placement-groups/#choosing-number-of-placement-groups"><span class="std std-ref">确定归置组数量</span></a> for more
information.</p>
</div>
<div class="section" id="pool-too-many-pgs">
<h4>POOL_TOO_MANY_PGS<a class="headerlink" href="#pool-too-many-pgs" title="Permalink to this headline">¶</a></h4>
<p>One or more pools should probably have fewer PGs, based on the amount
of data that is currently stored in the pool.  This can lead to higher
memory utilization for OSD daemons, slower peering after cluster state
changes (like OSD restarts, additions, or removals), and higher load
on the Manager and Monitor daemons.  This warning is generated if the
<code class="docutils literal notranslate"><span class="pre">pg_autoscale_mode</span></code> property on the pool is set to <code class="docutils literal notranslate"><span class="pre">warn</span></code>.</p>
<p>To disable the warning, you can disable auto-scaling of PGs for the
pool entirely with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">pool</span><span class="o">-</span><span class="n">name</span><span class="o">&gt;</span> <span class="n">pg_autoscale_mode</span> <span class="n">off</span>
</pre></div>
</div>
<p>To allow the cluster to automatically adjust the number of PGs:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">pool</span><span class="o">-</span><span class="n">name</span><span class="o">&gt;</span> <span class="n">pg_autoscale_mode</span> <span class="n">on</span>
</pre></div>
</div>
<p>You can also manually set the number of PGs for the pool to the
recommended amount with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">pool</span><span class="o">-</span><span class="n">name</span><span class="o">&gt;</span> <span class="n">pg_num</span> <span class="o">&lt;</span><span class="n">new</span><span class="o">-</span><span class="n">pg</span><span class="o">-</span><span class="n">num</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>Please refer to <a class="reference internal" href="../placement-groups/#choosing-number-of-placement-groups"><span class="std std-ref">确定归置组数量</span></a> and
<a class="reference internal" href="../placement-groups/#pg-autoscaler"><span class="std std-ref">自伸缩归置组</span></a> for more information.</p>
</div>
<div class="section" id="pool-target-size-bytes-overcommitted">
<h4>POOL_TARGET_SIZE_BYTES_OVERCOMMITTED<a class="headerlink" href="#pool-target-size-bytes-overcommitted" title="Permalink to this headline">¶</a></h4>
<p>One or more pools have a <code class="docutils literal notranslate"><span class="pre">target_size_bytes</span></code> property set to
estimate the expected size of the pool,
but the value(s) exceed the total available storage (either by
themselves or in combination with other pools’ actual usage).</p>
<p>This is usually an indication that the <code class="docutils literal notranslate"><span class="pre">target_size_bytes</span></code> value for
the pool is too large and should be reduced or set to zero with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">pool</span><span class="o">-</span><span class="n">name</span><span class="o">&gt;</span> <span class="n">target_size_bytes</span> <span class="mi">0</span>
</pre></div>
</div>
<p>See <a class="reference internal" href="../placement-groups/#specifying-pool-target-size"><span class="std std-ref">Specifying expected pool size</span></a> for more information.</p>
</div>
<div class="section" id="pool-has-target-size-bytes-and-ratio">
<h4>POOL_HAS_TARGET_SIZE_BYTES_AND_RATIO<a class="headerlink" href="#pool-has-target-size-bytes-and-ratio" title="Permalink to this headline">¶</a></h4>
<p>One or more pools have both <code class="docutils literal notranslate"><span class="pre">target_size_bytes</span></code> and
<code class="docutils literal notranslate"><span class="pre">target_size_ratio</span></code> set to estimate the expected size of the pool.
Only one of these properties should be non-zero. If both are set,
<code class="docutils literal notranslate"><span class="pre">target_size_ratio</span></code> takes precedence and <code class="docutils literal notranslate"><span class="pre">target_size_bytes</span></code> is
ignored.</p>
<p>To reset <code class="docutils literal notranslate"><span class="pre">target_size_bytes</span></code> to zero:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">pool</span><span class="o">-</span><span class="n">name</span><span class="o">&gt;</span> <span class="n">target_size_bytes</span> <span class="mi">0</span>
</pre></div>
</div>
<p>See <a class="reference internal" href="../placement-groups/#specifying-pool-target-size"><span class="std std-ref">Specifying expected pool size</span></a> for more information.</p>
</div>
<div class="section" id="too-few-osds">
<h4>TOO_FEW_OSDS<a class="headerlink" href="#too-few-osds" title="Permalink to this headline">¶</a></h4>
<p>The number of OSDs in the cluster is below the configurable
threshold of <code class="docutils literal notranslate"><span class="pre">osd_pool_default_size</span></code>.</p>
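<p>The current OSD count (and how many OSDs are up and in) can be checked with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph osd stat
</pre></div>
</div>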
</div>
<div class="section" id="smaller-pgp-num">
<h4>SMALLER_PGP_NUM<a class="headerlink" href="#smaller-pgp-num" title="Permalink to this headline">¶</a></h4>
<p>One or more pools has a <code class="docutils literal notranslate"><span class="pre">pgp_num</span></code> value less than <code class="docutils literal notranslate"><span class="pre">pg_num</span></code>.  This
is normally an indication that the PG count was increased without
also increasing the placement count (<code class="docutils literal notranslate"><span class="pre">pgp_num</span></code>).</p>
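<p>The current values for an affected pool can be compared with, for example:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph osd pool get &lt;pool-name&gt; pg_num
ceph osd pool get &lt;pool-name&gt; pgp_num
</pre></div>
</div>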
<p>This is sometimes done deliberately to separate out the <cite>split</cite> step
when the PG count is adjusted from the data migration that is needed
when <code class="docutils literal notranslate"><span class="pre">pgp_num</span></code> is changed.</p>
<p>This is normally resolved by setting <code class="docutils literal notranslate"><span class="pre">pgp_num</span></code> to match <code class="docutils literal notranslate"><span class="pre">pg_num</span></code>,
triggering the data migration, with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span> <span class="o">&lt;</span><span class="n">pool</span><span class="o">&gt;</span> <span class="n">pgp_num</span> <span class="o">&lt;</span><span class="n">pg</span><span class="o">-</span><span class="n">num</span><span class="o">-</span><span class="n">value</span><span class="o">&gt;</span>
</pre></div>
</div>
</div>
<div class="section" id="many-objects-per-pg">
<h4>MANY_OBJECTS_PER_PG<a class="headerlink" href="#many-objects-per-pg" title="Permalink to this headline">¶</a></h4>
<p>One or more pools has an average number of objects per PG that is
significantly higher than the overall cluster average.  The specific
threshold is controlled by the <code class="docutils literal notranslate"><span class="pre">mon_pg_warn_max_object_skew</span></code>
configuration value.</p>
<p>This is usually an indication that the pool(s) containing most of the
data in the cluster have too few PGs, and/or that other pools that do
not contain as much data have too many PGs.  See the discussion of
<em>TOO_MANY_PGS</em> above.</p>
<p>This health warning can be silenced by raising the threshold of the
<code class="docutils literal notranslate"><span class="pre">mon_pg_warn_max_object_skew</span></code> config option on the manager(s).</p>
<p>The health warning is silenced for a particular pool if its
<code class="docutils literal notranslate"><span class="pre">pg_autoscale_mode</span></code> is set to <code class="docutils literal notranslate"><span class="pre">on</span></code>.</p>
</div>
<div class="section" id="pool-app-not-enabled">
<h4>POOL_APP_NOT_ENABLED<a class="headerlink" href="#pool-app-not-enabled" title="Permalink to this headline">¶</a></h4>
<p>A pool exists that contains one or more objects but has not been
tagged for use by a particular application.</p>
<p>Resolve this warning by labeling the pool for use by an application.  For
example, if the pool is used by RBD:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">rbd</span> <span class="n">pool</span> <span class="n">init</span> <span class="o">&lt;</span><span class="n">poolname</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>If the pool is being used by a custom application ‘foo’, you can also label
it via the low-level command:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph osd pool application enable &lt;poolname&gt; foo
</pre></div>
</div>
<p>See <a class="reference internal" href="../pools/#associate-pool-to-application"><span class="std std-ref">Associate Pool to Application</span></a> for more information.</p>
</div>
<div class="section" id="id8">
<h4>POOL_FULL<a class="headerlink" href="#id8" title="Permalink to this headline">¶</a></h4>
<p>One or more pools has reached (or is very close to reaching) its
quota.  The threshold to trigger this error condition is controlled by
the <code class="docutils literal notranslate"><span class="pre">mon_pool_quota_crit_threshold</span></code> configuration option.</p>
<p>Pool quotas can be adjusted up or down (or removed) with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span><span class="o">-</span><span class="n">quota</span> <span class="o">&lt;</span><span class="n">pool</span><span class="o">&gt;</span> <span class="n">max_bytes</span> <span class="o">&lt;</span><span class="nb">bytes</span><span class="o">&gt;</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span><span class="o">-</span><span class="n">quota</span> <span class="o">&lt;</span><span class="n">pool</span><span class="o">&gt;</span> <span class="n">max_objects</span> <span class="o">&lt;</span><span class="n">objects</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>Setting the quota value to 0 will disable the quota.</p>
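<p>Current usage and quotas for each pool can be reviewed with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph df detail
</pre></div>
</div>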
</div>
<div class="section" id="pool-near-full">
<h4>POOL_NEAR_FULL<a class="headerlink" href="#pool-near-full" title="Permalink to this headline">¶</a></h4>
<p>One or more pools is approaching a configured fullness threshold.</p>
<p>One threshold that can trigger this warning condition is the
<code class="docutils literal notranslate"><span class="pre">mon_pool_quota_warn_threshold</span></code> configuration option.</p>
<p>Pool quotas can be adjusted up or down (or removed) with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span><span class="o">-</span><span class="n">quota</span> <span class="o">&lt;</span><span class="n">pool</span><span class="o">&gt;</span> <span class="n">max_bytes</span> <span class="o">&lt;</span><span class="nb">bytes</span><span class="o">&gt;</span>
<span class="n">ceph</span> <span class="n">osd</span> <span class="n">pool</span> <span class="nb">set</span><span class="o">-</span><span class="n">quota</span> <span class="o">&lt;</span><span class="n">pool</span><span class="o">&gt;</span> <span class="n">max_objects</span> <span class="o">&lt;</span><span class="n">objects</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>Setting the quota value to 0 will disable the quota.</p>
<p>Other thresholds that can trigger the above two warning conditions are
<code class="docutils literal notranslate"><span class="pre">mon_osd_nearfull_ratio</span></code> and <code class="docutils literal notranslate"><span class="pre">mon_osd_full_ratio</span></code>.  Visit the
<a class="reference internal" href="../../configuration/mon-config-ref/#storage-capacity"><span class="std std-ref">存储容量</span></a> and <a class="reference internal" href="../../troubleshooting/troubleshooting-osd/#no-free-drive-space"><span class="std std-ref">硬盘没剩余空间</span></a> documents for details
and resolution.</p>
</div>
<div class="section" id="object-misplaced">
<h4>OBJECT_MISPLACED<a class="headerlink" href="#object-misplaced" title="Permalink to this headline">¶</a></h4>
<p>One or more objects in the cluster is not stored on the node the
cluster would like it to be stored on.  This is an indication that
data migration due to some recent cluster change has not yet completed.</p>
<p>Misplaced data is not a dangerous condition in and of itself; data
consistency is never at risk, and old copies of objects are never
removed until the desired number of new copies (in the desired
locations) are present.</p>
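<p>The progress of the migration, including the fraction of misplaced objects,
is normally visible in the cluster status output:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph -s
</pre></div>
</div>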
</div>
<div class="section" id="object-unfound">
<h4>OBJECT_UNFOUND<a class="headerlink" href="#object-unfound" title="Permalink to this headline">¶</a></h4>
<p>One or more objects in the cluster cannot be found.  Specifically, the
OSDs know that a new or updated copy of an object should exist, but a
copy of that version of the object has not been found on OSDs that are
currently online.</p>
<p>Read or write requests to unfound objects will block.</p>
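<p>The unfound objects for an affected PG can typically be listed with (the exact
output format varies by release):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph pg &lt;pgid&gt; list_unfound
</pre></div>
</div>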
<p>Ideally, a down OSD that has the most recent copy of the unfound object
can be brought back online.  Candidate OSDs can be identified from the
peering state for the PG(s) responsible for the unfound object:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">tell</span> <span class="o">&lt;</span><span class="n">pgid</span><span class="o">&gt;</span> <span class="n">query</span>
</pre></div>
</div>
<p>If the latest copy of the object is not available, the cluster can be
told to roll back to a previous version of the object. See
<a class="reference internal" href="../../troubleshooting/troubleshooting-pg/#failures-osd-unfound"><span class="std std-ref">未找到的对象</span></a> for more information.</p>
</div>
<div class="section" id="slow-ops">
<h4>SLOW_OPS<a class="headerlink" href="#slow-ops" title="Permalink to this headline">¶</a></h4>
<p>One or more OSD or monitor requests is taking a long time to process.  This can
be an indication of extreme load, a slow storage device, or a software
bug.</p>
<p>The request queue for the daemon in question can be queried with the
following command, executed from the daemon’s host:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">daemon</span> <span class="n">osd</span><span class="o">.&lt;</span><span class="nb">id</span><span class="o">&gt;</span> <span class="n">ops</span>
</pre></div>
</div>
<p>A summary of the slowest recent requests can be seen with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">daemon</span> <span class="n">osd</span><span class="o">.&lt;</span><span class="nb">id</span><span class="o">&gt;</span> <span class="n">dump_historic_ops</span>
</pre></div>
</div>
<p>The location of the OSD in question can be found with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">osd</span> <span class="n">find</span> <span class="n">osd</span><span class="o">.&lt;</span><span class="nb">id</span><span class="o">&gt;</span>
</pre></div>
</div>
</div>
<div class="section" id="pg-not-scrubbed">
<h4>PG_NOT_SCRUBBED<a class="headerlink" href="#pg-not-scrubbed" title="Permalink to this headline">¶</a></h4>
<p>One or more PGs has not been scrubbed recently.  PGs are normally scrubbed
within every configured interval specified by
<a class="reference internal" href="../../configuration/osd-config-ref/#confval-osd_scrub_max_interval"><code class="xref std std-confval docutils literal notranslate"><span class="pre">osd_scrub_max_interval</span></code></a> globally. This
interval can be overridden on a per-pool basis with
<code class="xref std std-confval docutils literal notranslate"><span class="pre">scrub_max_interval</span></code>. The warning triggers when
<code class="docutils literal notranslate"><span class="pre">mon_warn_pg_not_scrubbed_ratio</span></code> percentage of the interval has elapsed without a
scrub since it was due.</p>
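<p>The specific PGs that triggered the warning are listed in the detailed health
output:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph health detail
</pre></div>
</div>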
<p>PGs will not scrub if they are not flagged as <em>clean</em>, which may
happen if they are misplaced or degraded (see <em>PG_AVAILABILITY</em> and
<em>PG_DEGRADED</em> above).</p>
<p>You can manually initiate a scrub of a clean PG with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">pg</span> <span class="n">scrub</span> <span class="o">&lt;</span><span class="n">pgid</span><span class="o">&gt;</span>
</pre></div>
</div>
</div>
<div class="section" id="pg-not-deep-scrubbed">
<h4>PG_NOT_DEEP_SCRUBBED<a class="headerlink" href="#pg-not-deep-scrubbed" title="Permalink to this headline">¶</a></h4>
<p>One or more PGs has not been deep scrubbed recently.  PGs are normally
scrubbed every <a class="reference internal" href="../../configuration/osd-config-ref/#confval-osd_deep_scrub_interval"><code class="xref std std-confval docutils literal notranslate"><span class="pre">osd_deep_scrub_interval</span></code></a> seconds, and this warning
triggers when <code class="docutils literal notranslate"><span class="pre">mon_warn_pg_not_deep_scrubbed_ratio</span></code> percentage of the interval has elapsed
without a deep scrub since it was due.</p>
<p>PGs will not (deep) scrub if they are not flagged as <em>clean</em>, which may
happen if they are misplaced or degraded (see <em>PG_AVAILABILITY</em> and
<em>PG_DEGRADED</em> above).</p>
<p>You can manually initiate a deep scrub of a clean PG with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">pg</span> <span class="n">deep</span><span class="o">-</span><span class="n">scrub</span> <span class="o">&lt;</span><span class="n">pgid</span><span class="o">&gt;</span>
</pre></div>
</div>
</div>
<div class="section" id="pg-slow-snap-trimming">
<h4>PG_SLOW_SNAP_TRIMMING<a class="headerlink" href="#pg-slow-snap-trimming" title="Permalink to this headline">¶</a></h4>
<p>The snapshot trim queue for one or more PGs has exceeded the
configured warning threshold.  This indicates that either an extremely
large number of snapshots were recently deleted, or that the OSDs are
unable to trim snapshots quickly enough to keep up with the rate of
new snapshot deletions.</p>
<p>The warning threshold is controlled by the
<code class="docutils literal notranslate"><span class="pre">mon_osd_snap_trim_queue_warn_on</span></code> option (default: 32768).</p>
<p>This warning may trigger if OSDs are under excessive load and unable
to keep up with their background work, or if the OSDs’ internal
metadata database is heavily fragmented and performing poorly.  It may
also indicate some other performance issue with the OSDs.</p>
<p>The exact size of the snapshot trim queue is reported by the
<code class="docutils literal notranslate"><span class="pre">snaptrimq_len</span></code> field of <code class="docutils literal notranslate"><span class="pre">ceph</span> <span class="pre">pg</span> <span class="pre">ls</span> <span class="pre">-f</span> <span class="pre">json-detail</span></code>.</p>
</div>
</div>
<div class="section" id="id9">
<h3>Miscellaneous<a class="headerlink" href="#id9" title="Permalink to this headline">¶</a></h3>
<div class="section" id="recent-crash">
<h4>RECENT_CRASH<a class="headerlink" href="#recent-crash" title="Permalink to this headline">¶</a></h4>
<p>One or more Ceph daemons has crashed recently, and the crash has not
yet been archived (acknowledged) by the administrator.  This may
indicate a software bug, a hardware problem (e.g., a failing disk), or
some other problem.</p>
<p>New crashes can be listed with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">crash</span> <span class="n">ls</span><span class="o">-</span><span class="n">new</span>
</pre></div>
</div>
<p>Information about a specific crash can be examined with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">crash</span> <span class="n">info</span> <span class="o">&lt;</span><span class="n">crash</span><span class="o">-</span><span class="nb">id</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>This warning can be silenced by “archiving” the crash (perhaps after
being examined by an administrator) so that it does not generate this
warning:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">crash</span> <span class="n">archive</span> <span class="o">&lt;</span><span class="n">crash</span><span class="o">-</span><span class="nb">id</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>Similarly, all new crashes can be archived with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">crash</span> <span class="n">archive</span><span class="o">-</span><span class="nb">all</span>
</pre></div>
</div>
<p>Archived crashes will still be visible via <code class="docutils literal notranslate"><span class="pre">ceph</span> <span class="pre">crash</span> <span class="pre">ls</span></code> but not
<code class="docutils literal notranslate"><span class="pre">ceph</span> <span class="pre">crash</span> <span class="pre">ls-new</span></code>.</p>
<p>The time period for what “recent” means is controlled by the option
<code class="docutils literal notranslate"><span class="pre">mgr/crash/warn_recent_interval</span></code> (default: two weeks).</p>
<p>These warnings can be disabled entirely with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="n">mgr</span><span class="o">/</span><span class="n">crash</span><span class="o">/</span><span class="n">warn_recent_interval</span> <span class="mi">0</span>
</pre></div>
</div>
</div>
<div class="section" id="recent-mgr-module-crash">
<h4>RECENT_MGR_MODULE_CRASH<a class="headerlink" href="#recent-mgr-module-crash" title="Permalink to this headline">¶</a></h4>
<p>One or more ceph-mgr modules has crashed recently, and the crash has
not yet been archived (acknowledged) by the administrator.  This
generally indicates a software bug in one of the software modules run
inside the ceph-mgr daemon.  Although the module that experienced the
problem may be disabled as a result, the function of other modules
is normally unaffected.</p>
<p>As with the <em>RECENT_CRASH</em> health alert, the crash can be inspected with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">crash</span> <span class="n">info</span> <span class="o">&lt;</span><span class="n">crash</span><span class="o">-</span><span class="nb">id</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>This warning can be silenced by “archiving” the crash (perhaps after
being examined by an administrator) so that it does not generate this
warning:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">crash</span> <span class="n">archive</span> <span class="o">&lt;</span><span class="n">crash</span><span class="o">-</span><span class="nb">id</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>Similarly, all new crashes can be archived with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">crash</span> <span class="n">archive</span><span class="o">-</span><span class="nb">all</span>
</pre></div>
</div>
<p>Archived crashes will still be visible via <code class="docutils literal notranslate"><span class="pre">ceph</span> <span class="pre">crash</span> <span class="pre">ls</span></code> but not
<code class="docutils literal notranslate"><span class="pre">ceph</span> <span class="pre">crash</span> <span class="pre">ls-new</span></code>.</p>
<p>The time period for what “recent” means is controlled by the option
<code class="docutils literal notranslate"><span class="pre">mgr/crash/warn_recent_interval</span></code> (default: two weeks).</p>
<p>These warnings can be disabled entirely with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">config</span> <span class="nb">set</span> <span class="n">mgr</span><span class="o">/</span><span class="n">crash</span><span class="o">/</span><span class="n">warn_recent_interval</span> <span class="mi">0</span>
</pre></div>
</div>
</div>
<div class="section" id="telemetry-changed">
<h4>TELEMETRY_CHANGED<a class="headerlink" href="#telemetry-changed" title="Permalink to this headline">¶</a></h4>
<p>Telemetry has been enabled, but the contents of the telemetry report
have changed since that time, so telemetry reports will not be sent.</p>
<p>The Ceph developers periodically revise the telemetry feature to
include new and useful information, or to remove information found to
be useless or sensitive.  If any new information is included in the
report, Ceph will require the administrator to re-enable telemetry to
ensure they have an opportunity to (re)review what information will be
shared.</p>
<p>To review the contents of the telemetry report:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">telemetry</span> <span class="n">show</span>
</pre></div>
</div>
<p>Note that the telemetry report consists of several optional channels
that may be independently enabled or disabled.  See the
<a class="reference internal" href="../../../mgr/telemetry/#telemetry"><span class="std std-ref">Telemetry Module</span></a> documentation for more information.</p>
<p>To re-enable telemetry (and silence the alert):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">telemetry</span> <span class="n">on</span>
</pre></div>
</div>
<p>To disable telemetry (and silence the alert):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">telemetry</span> <span class="n">off</span>
</pre></div>
</div>
</div>
<div class="section" id="auth-bad-caps">
<h4>AUTH_BAD_CAPS<a class="headerlink" href="#auth-bad-caps" title="Permalink to this headline">¶</a></h4>
<p>One or more auth users has capabilities that cannot be parsed by the
monitor.  This generally indicates that the user will not be
authorized to perform any action with one or more daemon types.</p>
<p>This error is most likely to occur after an upgrade if the
capabilities were set with an older version of Ceph that did not
properly validate their syntax, or if the syntax of the capabilities
has changed.</p>
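<p>The existing users and their capabilities can be reviewed with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph auth ls
ceph auth get &lt;entity-name&gt;
</pre></div>
</div>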
<p>The user in question can be removed with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">auth</span> <span class="n">rm</span> <span class="o">&lt;</span><span class="n">entity</span><span class="o">-</span><span class="n">name</span><span class="o">&gt;</span>
</pre></div>
</div>
<p>(This will resolve the health alert, but obviously clients will not be
able to authenticate as that user.)</p>
<p>Alternatively, the capabilities for the user can be updated with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">auth</span> <span class="o">&lt;</span><span class="n">entity</span><span class="o">-</span><span class="n">name</span><span class="o">&gt;</span> <span class="o">&lt;</span><span class="n">daemon</span><span class="o">-</span><span class="nb">type</span><span class="o">&gt;</span> <span class="o">&lt;</span><span class="n">caps</span><span class="o">&gt;</span> <span class="p">[</span><span class="o">&lt;</span><span class="n">daemon</span><span class="o">-</span><span class="nb">type</span><span class="o">&gt;</span> <span class="o">&lt;</span><span class="n">caps</span><span class="o">&gt;</span> <span class="o">...</span><span class="p">]</span>
</pre></div>
</div>
<p>For more information about auth capabilities, see <a class="reference internal" href="../user-management/#user-management"><span class="std std-ref">用户管理</span></a>.</p>
</div>
<div class="section" id="osd-no-down-out-interval">
<h4>OSD_NO_DOWN_OUT_INTERVAL<a class="headerlink" href="#osd-no-down-out-interval" title="Permalink to this headline">¶</a></h4>
<p>The <code class="docutils literal notranslate"><span class="pre">mon_osd_down_out_interval</span></code> option is set to zero, which means
that the system will not automatically perform any repair or healing
operations after an OSD fails.  Instead, an administrator (or some
other external entity) will need to manually mark down OSDs as ‘out’
(i.e., via <code class="docutils literal notranslate"><span class="pre">ceph</span> <span class="pre">osd</span> <span class="pre">out</span> <span class="pre">&lt;osd-id&gt;</span></code>) in order to trigger recovery.</p>
<p>This option is normally set to five or ten minutes - enough time for a
host to be power-cycled or rebooted.</p>
<p>This warning can be silenced by setting
<code class="docutils literal notranslate"><span class="pre">mon_warn_on_osd_down_out_interval_zero</span></code> to false:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph config set global mon_warn_on_osd_down_out_interval_zero false
</pre></div>
</div>
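<p>Alternatively, if automatic repair is actually desired, the interval can be
restored to a non-zero value; a sketch setting it back to ten minutes (600
seconds), assuming the option is adjusted via <code class="docutils literal notranslate"><span class="pre">ceph</span> <span class="pre">config</span></code>:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>ceph config set mon mon_osd_down_out_interval 600
</pre></div>
</div>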
</div>
<div class="section" id="dashboard-debug">
<h4>DASHBOARD_DEBUG<a class="headerlink" href="#dashboard-debug" title="Permalink to this headline">¶</a></h4>
<p>The Dashboard debug mode is enabled.  This means that if there is an error
while processing a REST API request, the HTTP error response will contain a
Python traceback.  This behavior should be disabled in production environments
because such a traceback may contain and expose sensitive information.</p>
<p>The debug mode can be disabled with:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ceph</span> <span class="n">dashboard</span> <span class="n">debug</span> <span class="n">disable</span>
</pre></div>
</div>
</div>
</div>
</div>
</div>



           </div>
           
          </div>
          <footer>
    <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
        <a href="../monitoring/" class="btn btn-neutral float-right" title="监控集群" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right" aria-hidden="true"></span></a>
        <a href="../operating/" class="btn btn-neutral float-left" title="操纵集群" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left" aria-hidden="true"></span> Previous</a>
    </div>

  <hr/>

  <div role="contentinfo">
    <p>
        &#169; Copyright 2016, Ceph authors and contributors. Licensed under Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0).

    </p>
  </div> 

</footer>
        </div>
      </div>

    </section>

  </div>
  

  <script type="text/javascript">
      jQuery(function () {
          SphinxRtdTheme.Navigation.enable(true);
      });
  </script>

  
  
    
   

</body>
</html>