<!DOCTYPE html>


<html lang="zh-CN">


<head>
  <meta charset="utf-8" />
    
  <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />
  <title>
    Ceph Basics - Cluster Deployment and Troubleshooting |  
  </title>
  <meta name="generator" content="hexo-theme-ayer">
  
  <link rel="shortcut icon" href="/favicon.ico" />
  
  
<link rel="stylesheet" href="/dist/main.css">

  
<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/Shen-Yu/cdn/css/remixicon.min.css">

  
<link rel="stylesheet" href="/css/custom.css">

  
  
<script src="https://cdn.jsdelivr.net/npm/pace-js@1.0.2/pace.min.js"></script>

  
  

  

</head>


<body>
  <div id="app">
    
      
    <main class="content on">
      <section class="outer">
  <article
  id="post-k8s/Ceph 基础篇 - 集群部署及故障排查"
  class="article article-type-post"
  itemscope
  itemprop="blogPost"
  data-scroll-reveal
>
  <div class="article-inner">
    
    <header class="article-header">
       
<h1 class="article-title sea-center" style="border-left:0" itemprop="name">
  Ceph Basics - Cluster Deployment and Troubleshooting
</h1>
 

    </header>
     
    <div class="article-meta">
      <a href="/2020/11/11/k8s/Ceph%20%E5%9F%BA%E7%A1%80%E7%AF%87%20-%20%E9%9B%86%E7%BE%A4%E9%83%A8%E7%BD%B2%E5%8F%8A%E6%95%85%E9%9A%9C%E6%8E%92%E6%9F%A5/" class="article-date">
  <time datetime="2020-11-10T16:00:00.000Z" itemprop="datePublished">2020-11-11</time>
</a> 
  <div class="article-category">
    <a class="article-category-link" href="/categories/k8s/">k8s</a>
  </div>
  
<div class="word_count">
    <span class="post-time">
        <span class="post-meta-item-icon">
            <i class="ri-quill-pen-line"></i>
            <span class="post-meta-item-text"> Word count:</span>
            <span class="post-count">5.3k</span>
        </span>
    </span>

    <span class="post-time">
        &nbsp; | &nbsp;
        <span class="post-meta-item-icon">
            <i class="ri-book-open-line"></i>
            <span class="post-meta-item-text"> Reading time ≈</span>
            <span class="post-count">26 min</span>
        </span>
    </span>
</div>
 
    </div>
      
    <div class="tocbot"></div>




  
    <div class="article-entry" itemprop="articleBody">
       
  <h1 id="Ceph-基础篇-集群部署及故障排查"><a href="#Ceph-基础篇-集群部署及故障排查" class="headerlink" title="Ceph Basics - Cluster Deployment and Troubleshooting"></a>Ceph Basics - Cluster Deployment and Troubleshooting</h1><p><strong>Before Deployment</strong>  </p>
<hr>
<p><strong>Installation Methods</strong>  </p>
<p>ceph-deploy: the classic automation tool. The official site no longer carries its deployment guide; it works for releases up to and including Nautilus (N) but is unsupported afterwards. It is the method we use in this article.</p>
<p>cephadm: a recent addition that requires CentOS 8 and supports both graphical and command-line installation; from Octopus (O) onward it is the recommended installer.</p>
<p>Manual installation: walks you through every step, which makes the deployment details and the relationships between the Ceph components easy to see.</p>
<p>Rook: integrates with an existing Kubernetes cluster and installs Ceph into it.</p>
<p>ceph-ansible: Ansible-based automated installation.</p>
<p><strong>Server Plan</strong></p>
<table>
<thead>
<tr><th>Server IP</th><th>Hostname</th><th>Roles</th><th>OS Version</th></tr>
</thead>
<tbody>
<tr><td>100.73.18.152<br>(shared node)</td><td>ceph-node01</td><td>ceph-deploy, ceph-admin<br>client (shared)<br>mon, mgr, rgw, mds, osd</td><td rowspan="3">CentOS 7.2<br>3.10.0-693.5.2.el7.x86_64</td></tr>
<tr><td>100.73.18.153</td><td>ceph-node02</td><td>mon, mgr, rgw, mds, osd</td></tr>
<tr><td>100.73.18.128</td><td>ceph-node03</td><td>mon, mgr, mds, osd</td></tr>
</tbody>
</table>



<p><strong>Architecture Diagram</strong></p>
<p>(architecture diagram)</p>
<p><strong>Ceph Concepts</strong></p>
<table>
<thead>
<tr><th>Component</th><th>Function</th></tr>
</thead>
<tbody>
<tr><td>Monitor</td><td>A daemon running on a host that monitors every component of the cluster: how many pools there are, how many PGs each pool has, the PG-to-OSD mappings, how many OSDs sit on each node, and so on. It holds the cluster runtime maps (the Cluster Map), five maps in total.<br>It also maintains the cluster's authentication data: clients authenticate with credentials, and internal OSDs must authenticate with the mon as well, via the CephX protocol. Every component-to-component connection is authenticated through the mon, so the mon is also the authentication center. If you wonder whether that becomes a bottleneck in a large cluster: this is one more reason to run several mons. Authentication on the mon is stateless, so it scales out arbitrarily and can sit behind a load balancer.</td></tr>
<tr><td>Managers</td><td>Its dedicated daemon is ceph-mgr, which collects cluster state: runtime metrics, storage utilization, current performance counters and system load. It supports many Python-based plugins that extend ceph-mgr's functionality; it assists the mon.</td></tr>
<tr><td>OSD</td><td>An OSD is an individual storage device. A server usually carries several disks, and Ceph manages each one separately: every OSD has its own dedicated daemon. A server with 6 OSDs, for example, runs 6 ceph-osd processes, one per disk. OSDs store PG data and handle replication, recovery and rebalancing; they expose monitoring information for the mon and mgr to check, and probe other replicas via heartbeats. At least three OSDs are required (three OSDs, not three nodes, note the difference). The usual layout is 1 primary PG plus 2 replica PGs, to ensure high availability.</td></tr>
<tr><td>CRUSH</td><td>CRUSH is the data distribution algorithm Ceph uses, similar to consistent hashing; it places data at predictable locations.</td></tr>
<tr><td>PG</td><td>PG stands for Placement Group, a logical concept; one PG spans multiple OSDs. The PG layer exists to make data distribution and data lookup easier.</td></tr>
<tr><td>Object</td><td>Files are split into blocks, 4 MB each by default, called objects. The Object is Ceph's lowest-level storage unit; every Object holds metadata plus the raw data.</td></tr>
<tr><td>RADOS</td><td>Implements data placement, failover and the other cluster-level operations.</td></tr>
<tr><td>Librados</td><td>librados is the API for accessing a RADOS storage cluster. It supports asynchronous communication and direct parallel access to object data in the cluster; users can write custom clients in the supported programming languages that talk to the storage system over the RADOS protocol.</td></tr>
<tr><td>MDS</td><td>MDS, the Ceph Metadata Server, is the metadata service that CephFS depends on.</td></tr>
<tr><td>RBD</td><td>RBD, the RADOS Block Device, is the block device service Ceph exposes.</td></tr>
<tr><td>RGW</td><td>RGW is a daemon (ceph-radosgw) that runs independently on top of the RADOS cluster and provides APIs over HTTP or HTTPS; it is normally deployed only when you need to access data as REST objects.</td></tr>
<tr><td>CephFS</td><td>CephFS, the Ceph File System, is the file system service Ceph exposes. It was the first interface to appear but the last to be production-ready; current versions can be used in production.</td></tr>
</tbody>
</table>
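<p>To make the Object, PG, and CRUSH rows above concrete: a file is striped into 4 MB objects, each object name is hashed to a PG, and CRUSH maps the PG to OSDs. A back-of-the-envelope sketch in shell (the object name is made up, and cksum stands in for Ceph's real rjenkins hash purely for illustration; pg_num 128 matches the pool that appears later in this article):</p>

```shell
# A 100 MB file striped into 4 MB objects -> 25 objects.
file_mb=100; object_mb=4
echo "objects: $((file_mb / object_mb))"

# Each object name hashes to a PG id in [0, pg_num); cksum is only a
# stand-in for Ceph's actual hash function.
pg_num=128
hash=$(printf 'rbd_data.obj-0001' | cksum | cut -d' ' -f1)
echo "pg id: $((hash % pg_num))"
```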

<p><strong>Installation and Deployment</strong></p>
<hr>
<p><strong>1. Set the hostname and add hosts entries (all machines)</strong></p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"># Set the hostname</span><br><span class="line">hostnamectl set-hostname ceph-node01</span><br><span class="line">  </span><br><span class="line"># Edit the hosts file</span><br><span class="line">[root@ceph-node01 ~]# cat /etc/hosts</span><br><span class="line">127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4</span><br><span class="line">::1 localhost localhost.localdomain localhost6 localhost6.localdomain6</span><br><span class="line">100.73.18.152 ceph-node01</span><br><span class="line">100.73.18.153 ceph-node02</span><br><span class="line">100.73.18.128 ceph-node03</span><br><span class="line">[root@ceph-node01 ~]#</span><br></pre></td></tr></table></figure>
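<p>With more nodes, editing /etc/hosts by hand on every machine is easy to get wrong; the appends can be scripted idempotently. A sketch (it writes to a scratch file by default; point HOSTS at /etc/hosts to apply it for real):</p>

```shell
# Append each cluster entry only if an identical line is not already present.
HOSTS=${HOSTS:-./hosts.test}
for entry in '100.73.18.152 ceph-node01' \
             '100.73.18.153 ceph-node02' \
             '100.73.18.128 ceph-node03'; do
  grep -qxF "$entry" "$HOSTS" 2>/dev/null || echo "$entry" >> "$HOSTS"
done
```

Running it twice leaves the file unchanged, so it is safe to re-run whenever a node is added.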

<p><strong>2. Establish trust between the ceph-admin node and the other nodes</strong></p>
<p>Because ceph-deploy cannot prompt for passwords while it runs, you must generate an SSH key on the admin node (ceph-admin) and distribute it to every node in the Ceph cluster.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"># Generate the key pair</span><br><span class="line">ssh-keygen -t rsa -P &quot;&quot;</span><br><span class="line">  </span><br><span class="line"># Copy the public key to each node</span><br><span class="line">ssh-copy-id -i .ssh/id_rsa.pub &lt;node-name&gt;</span><br></pre></td></tr></table></figure>

<p><strong>3. Install the NTP service and synchronize time</strong></p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">yum -y install ntp</span><br></pre></td></tr></table></figure>

<p>After installation, configure /etc/ntp.conf. If your organization runs an NTP server, point to it; otherwise pick a public one;</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ~]# ntpq -p  </span><br><span class="line">     remote refid st t when poll reach delay offset jitter  </span><br><span class="line">&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;  </span><br><span class="line">*100.100.1.2 202.28.93.5 2 u 665 1024 377 1.268 -7.338 1.523  </span><br><span class="line">-100.100.1.2 202.28.116.236 2 u 1015 1024 377 0.805 -12.547 0.693  </span><br><span class="line">+100.100.1.3 203.159.70.33 2 u 117 1024 377 0.742 -5.007 1.814  </span><br><span class="line">+100.100.1.4 203.159.70.33 2 u 19 1024 377 0.731 -5.770 2.652  </span><br><span class="line">[root@ceph-node01 ~]#</span><br></pre></td></tr></table></figure>

<p>Use the command above to verify that NTP is configured. You can also configure just one node and point the other nodes at it;</p>
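<p>On the other nodes, a minimal /etc/ntp.conf that chains them to ceph-node01 can be as short as the following sketch (the restrict line is a common hardening default; adjust to your site policy):</p>

```
# /etc/ntp.conf on ceph-node02 / ceph-node03: use ceph-node01 as the source
server ceph-node01 iburst
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
```

After restarting ntpd, ntpq -p on those nodes should list ceph-node01 as a peer.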
<p><strong>4. Stop the iptables or firewalld service (alternatively, open the required ports instead of disabling the firewall)</strong></p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">systemctl stop firewalld.service  </span><br><span class="line">systemctl stop iptables.service  </span><br><span class="line">systemctl disable firewalld.service  </span><br><span class="line">systemctl disable iptables.service</span><br></pre></td></tr></table></figure>

<p><strong>5. Stop and disable SELinux</strong></p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"># Edit the config</span><br><span class="line">sed -i &#39;s@^\(SELINUX=\).*@\1disabled@&#39; /etc/sysconfig/selinux</span><br><span class="line">  </span><br><span class="line"># Apply immediately</span><br><span class="line">setenforce 0</span><br><span class="line">  </span><br><span class="line"># Check</span><br><span class="line">getenforce</span><br></pre></td></tr></table></figure>

<p><strong>6. Configure the yum repositories (sync to all machines)</strong></p>
<p>Remove the original repo files and download the latest ones from the Aliyun mirror. The Aliyun mirror site is <a target="_blank" rel="noopener" href="https://developer.aliyun.com/mirror/">https://developer.aliyun.com/mirror/</a>, where you can find virtually every mirror you need;</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">wget -O &#x2F;etc&#x2F;yum.repos.d&#x2F;CentOS-Base.repo https:&#x2F;&#x2F;mirrors.aliyun.com&#x2F;repo&#x2F;Centos-7.repo  </span><br><span class="line">wget -O &#x2F;etc&#x2F;yum.repos.d&#x2F;epel.repo http:&#x2F;&#x2F;mirrors.aliyun.com&#x2F;repo&#x2F;epel-7.repo</span><br></pre></td></tr></table></figure>

<p>Note that epel-7 must be downloaded too. The default Ceph packages are old, so write a ceph repo file yourself based on what Aliyun provides, as follows:  </p>
<p>ceph: <a target="_blank" rel="noopener" href="https://mirrors.aliyun.com/ceph/?spm=a2c6h.13651104.0.0.435f22d16X5Jk7">https://mirrors.aliyun.com/ceph/?spm=a2c6h.13651104.0.0.435f22d16X5Jk7</a></p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 yum.repos.d]# cat ceph.repo</span><br><span class="line">[noarch]</span><br><span class="line">name=noarch</span><br><span class="line">baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/</span><br><span class="line">enabled=1</span><br><span class="line">gpgcheck=0</span><br><span class="line">  </span><br><span class="line">[x86_64]</span><br><span class="line">name=x86_64</span><br><span class="line">baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/</span><br><span class="line">enabled=1</span><br><span class="line">gpgcheck=0</span><br><span class="line">[root@ceph-node01 yum.repos.d]#</span><br></pre></td></tr></table></figure>

<p>Then sync the repo files to every machine;</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"># List the configured yum repos</span><br><span class="line">yum repolist</span><br><span class="line">yum repolist all</span><br></pre></td></tr></table></figure>

<p>yum makecache downloads the repository metadata into a local cache; subsequent yum install runs then search the cache, which is faster. Use it together with yum -C search xxx.</p>
<p>yum makecache<br>yum -C search xxx<br>yum clean all</p>
<p><strong>7. Install ceph-deploy on the admin node</strong></p>
<p>The whole Ceph cluster deployment can be driven from the admin node with ceph-deploy. First install ceph-deploy and its dependencies there; note that the python-setuptools package is required;</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">yum install ceph-deploy python-setuptools python2-subprocess32</span><br></pre></td></tr></table></figure>

<p><strong>8. Deploy the RADOS storage cluster</strong></p>
<p>Create a dedicated working directory;</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">mkdir ceph-deploy &amp;&amp; cd ceph-deploy</span><br></pre></td></tr></table></figure>

<p>Initialize the first MON node to create the cluster</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph-deploy new --cluster-network 100.73.18.0&#x2F;24 --public-network 100.73.18.0&#x2F;24 &lt;node-name&gt;</span><br></pre></td></tr></table></figure>

<p>--cluster-network: used for internal data replication traffic;</p>
<p>--public-network: used to serve client traffic;</p>
<p>This generates three files: ceph.conf (the configuration file), ceph-deploy-ceph.log (the log file) and ceph.mon.keyring (the authentication keyring).</p>
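<p>The generated ceph.conf is short; for this cluster it looks roughly like the following sketch (the fsid is the one shown by ceph -s later in this article, and the auth lines are the cephx defaults):</p>

```
[global]
fsid = cc10b0cb-476f-420c-b1d6-e48c1dc929af
public_network = 100.73.18.0/24
cluster_network = 100.73.18.0/24
mon_initial_members = ceph-node01
mon_host = 100.73.18.152
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```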
<p><strong>9. Install the Ceph packages</strong></p>
<p>ceph-deploy install {ceph-node} {….}</p>
<p>Installed this way, the packages are pushed out automatically, but the approach has a drawback: ceph-deploy rewrites the yum configuration, including the epel and ceph repos, to point at its built-in upstream sources, so downloads go overseas and are very slow. Installing manually on every machine is recommended instead, as follows:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# yum -y install ceph ceph-mds ceph-mgr ceph-osd ceph-radosgw ceph-mon</span><br></pre></td></tr></table></figure>



<p><strong>10. Copy the config file and admin keyring to every cluster node</strong></p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph -s</span><br><span class="line">[errno 2] error connecting to the cluster</span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br><span class="line"># Cause: the admin keyring is missing; copy it with ceph-deploy admin below</span><br><span class="line">[root@ceph-node01 ceph-deploy]# ceph-deploy admin ceph-node01 ceph-node02 ceph-node03</span><br></pre></td></tr></table></figure>



<p>Check the cluster status again:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph -s  </span><br><span class="line">  cluster:  </span><br><span class="line">    id: cc10b0cb-476f-420c-b1d6-e48c1dc929af  </span><br><span class="line">    health: HEALTH_OK  </span><br><span class="line">  </span><br><span class="line">  services:  </span><br><span class="line">    mon: 1 daemons, quorum ceph-node01 (age 2m)  </span><br><span class="line">    mgr: no daemons active  </span><br><span class="line">    osd: 0 osds: 0 up, 0 in  </span><br><span class="line">  </span><br><span class="line">  data:  </span><br><span class="line">    pools: 0 pools, 0 pgs  </span><br><span class="line">    objects: 0 objects, 0 B  </span><br><span class="line">    usage: 0 B used, 0 B &#x2F; 0 B avail  </span><br><span class="line">    pgs:  </span><br><span class="line">  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>

<p>Note that services lists a single mon, with no mgr and no osd yet;</p>
<p><strong>11. Install mgr</strong></p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph-deploy mgr create ceph-node01  </span><br><span class="line">[root@ceph-node01 ceph-deploy]# ceph -s  </span><br><span class="line">  cluster:  </span><br><span class="line">    id: cc10b0cb-476f-420c-b1d6-e48c1dc929af  </span><br><span class="line">    health: HEALTH_WARN  </span><br><span class="line">            OSD count 0 &lt; osd_pool_default_size 3  </span><br><span class="line">  </span><br><span class="line">  services:  </span><br><span class="line">    mon: 1 daemons, quorum ceph-node01 (age 4m)  </span><br><span class="line">    mgr: ceph-node01(active, since 84s)  </span><br><span class="line">    osd: 0 osds: 0 up, 0 in  </span><br><span class="line">  </span><br><span class="line">  data:  </span><br><span class="line">    pools: 0 pools, 0 pgs  </span><br><span class="line">    objects: 0 objects, 0 B  </span><br><span class="line">    usage: 0 B used, 0 B &#x2F; 0 B avail  </span><br><span class="line">    pgs:  </span><br><span class="line">  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>



<p><strong>12. Add OSDs to the RADOS cluster</strong></p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph-deploy osd list ceph-node01</span><br></pre></td></tr></table></figure>

<p>The ceph-deploy disk command checks and lists the usable disks on an OSD node;</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph-deploy disk zap ceph-node01 &#x2F;dev&#x2F;vdb</span><br></pre></td></tr></table></figure>

<p>On the admin node, use ceph-deploy to erase all partition tables and data on the disks destined for OSD use. The command format is ceph-deploy disk zap {osd-server-name} {disk-name}; note that this step destroys all data on the target device.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line"># Check the disks</span><br><span class="line">[root@ceph-node01 ceph-deploy]# lsblk</span><br><span class="line">NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT</span><br><span class="line">sr0 11:0 1 1024M 0 rom</span><br><span class="line">vda 252:0 0 50G 0 disk</span><br><span class="line">├─vda1 252:1 0 500M 0 part /boot</span><br><span class="line">└─vda2 252:2 0 49.5G 0 part</span><br><span class="line">  ├─centos-root 253:0 0 44.5G 0 lvm /</span><br><span class="line">  └─centos-swap 253:1 0 5G 0 lvm [SWAP]</span><br><span class="line">vdb 252:16 0 100G 0 disk</span><br><span class="line">vdc 252:32 0 100G 0 disk</span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br><span class="line"># Add the OSDs</span><br><span class="line">[root@ceph-node01 ceph-deploy]# ceph-deploy osd create ceph-node01 --data /dev/vdb</span><br><span class="line">...</span><br><span class="line">[root@ceph-node01 ceph-deploy]# ceph-deploy osd create ceph-node02 --data /dev/vdb</span><br><span class="line">...</span><br><span class="line">[root@ceph-node01 ceph-deploy]# ceph-deploy osd create ceph-node03 --data /dev/vdb</span><br><span class="line">...</span><br></pre></td></tr></table></figure>
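<p>The zap and create steps repeat for every node and disk, so a small loop can generate the full command sequence. A sketch that only prints the commands so you can review them first (drop the echo to actually execute them):</p>

```shell
# Print the per-node zap/create commands for review; remove "echo" to run.
for node in ceph-node01 ceph-node02 ceph-node03; do
  echo ceph-deploy disk zap "$node" /dev/vdb
  echo ceph-deploy osd create "$node" --data /dev/vdb
done
```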



<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph-deploy osd list ceph-node01</span><br></pre></td></tr></table></figure>

<p>The ceph-deploy osd list command lists the OSD information for the given nodes;</p>
<p><strong>13. Inspect the OSDs</strong></p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph osd tree  </span><br><span class="line">ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF  </span><br><span class="line">-1 0.39067 root default  </span><br><span class="line">-3 0.09769 host ceph-node01  </span><br><span class="line"> 0 hdd 0.09769 osd.0 up 1.00000 1.00000  </span><br><span class="line">-5 0.09769 host ceph-node02  </span><br><span class="line"> 1 hdd 0.09769 osd.1 up 1.00000 1.00000  </span><br><span class="line">-7 0.19530 host ceph-node03  </span><br><span class="line"> 2 hdd 0.19530 osd.2 up 1.00000 1.00000  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>



<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph osd stat  </span><br><span class="line">3 osds: 3 up (since 2d), 3 in (since 2d); epoch: e26  </span><br><span class="line">[root@ceph-node01 ceph-deploy]# ceph osd ls  </span><br><span class="line">0  </span><br><span class="line">1  </span><br><span class="line">2  </span><br><span class="line">[root@ceph-node01 ceph-deploy]# ceph osd dump  </span><br><span class="line">epoch 26  </span><br><span class="line">fsid cc10b0cb-476f-420c-b1d6-e48c1dc929af  </span><br><span class="line">created 2020-09-29 09:14:30.781641  </span><br><span class="line">modified 2020-09-29 10:14:06.100849  </span><br><span class="line">flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit  </span><br><span class="line">crush_version 7  </span><br><span class="line">full_ratio 0.95  </span><br><span class="line">backfillfull_ratio 0.9  </span><br><span class="line">nearfull_ratio 0.85  </span><br><span class="line">require_min_compat_client jewel  </span><br><span class="line">min_compat_client jewel  </span><br><span class="line">require_osd_release nautilus  </span><br><span class="line">pool 1 &#39;ceph-demo&#39; replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode warn last_change 25 lfor 0&#x2F;0&#x2F;20 flags hashpspool,selfmanaged_snaps stripe_width 0  </span><br><span class="line">  removed_snaps [1~3]  </span><br><span class="line">max_osd 3  </span><br><span class="line">osd.0 up in weight 1 up_from 5 up_thru 22 down_at 0 last_clean_interval [0,0) [v2:100.73.18.152:6802&#x2F;11943,v1:100.73.18.152:6803&#x2F;11943] [v2:100.73.18.152:6804&#x2F;11943,v1:100.73.18.152:6805&#x2F;11943] exists,up 136f6cf7-05a0-4325-aa92-ad316560edff  </span><br><span class="line">osd.1 up in weight 1 up_from 9 up_thru 22 down_at 0 last_clean_interval [0,0) [v2:100.73.18.153:6800&#x2F;10633,v1:100.73.18.153:6801&#x2F;10633] [v2:100.73.18.153:6802&#x2F;10633,v1:100.73.18.153:6803&#x2F;10633] exists,up 79804c00-2662-47a1-9987-95579afa10b6  </span><br><span class="line">osd.2 up in weight 1 up_from 13 up_thru 22 down_at 0 last_clean_interval [0,0) [v2:100.73.18.128:6800&#x2F;10558,v1:100.73.18.128:6801&#x2F;10558] [v2:100.73.18.128:6802&#x2F;10558,v1:100.73.18.128:6803&#x2F;10558] exists,up f15cacec-fdcd-4d3c-8bb8-ab3565cb4d0b  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>

<p>The commands above show OSD-related information: ceph osd stat gives a one-line summary, ceph osd ls lists the OSD IDs, and ceph osd dump prints the full OSD map, including pool settings, utilization ratios, and per-OSD addresses.</p>
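<p>Three of the thresholds in the dump govern utilization handling: nearfull_ratio (0.85) raises a health warning, backfillfull_ratio (0.90) stops backfill onto the OSD, and full_ratio (0.95) blocks client writes. A minimal sketch of such a classification (the function and its return strings are illustrative, not Ceph's actual code):</p>

```python
# Illustrative classification of OSD utilization against the ratios
# reported by `ceph osd dump`; parameter names match the dump output.
def usage_state(used_frac: float,
                nearfull_ratio: float = 0.85,
                backfillfull_ratio: float = 0.90,
                full_ratio: float = 0.95) -> str:
    if used_frac >= full_ratio:
        return "full"          # client writes are blocked
    if used_frac >= backfillfull_ratio:
        return "backfillfull"  # backfill to this OSD is refused
    if used_frac >= nearfull_ratio:
        return "nearfull"      # health warning is raised
    return "ok"
```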
<p> <strong>14. Scale out the mons</strong></p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph-deploy mon add ceph-node02  </span><br><span class="line">[root@ceph-node01 ceph-deploy]# ceph-deploy mon add ceph-node03</span><br></pre></td></tr></table></figure>



<p>Because the mons elect a leader using the Paxos algorithm, the election status can be inspected:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph quorum_status</span><br></pre></td></tr></table></figure>
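<p>The practical consequence of majority-based election is that a cluster of N monitors only keeps quorum while a strict majority of them is alive. A quick sketch of the arithmetic:</p>

```python
# Paxos-style quorum arithmetic: a strict majority of monitors is required.
def quorum_size(n_mons: int) -> int:
    """Smallest number of mons that forms a majority."""
    return n_mons // 2 + 1

def tolerated_failures(n_mons: int) -> int:
    """How many mons may fail while quorum is preserved."""
    return n_mons - quorum_size(n_mons)
```

<p>This is why monitors are deployed in odd numbers: 4 mons tolerate only the same single failure as 3, while 5 tolerate 2.</p>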



<p>Check the mon status</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph mon stat  </span><br><span class="line">e3: 3 mons at &#123;ceph-node01&#x3D;[v2:100.73.18.152:3300&#x2F;0,v1:100.73.18.152:6789&#x2F;0],ceph-node02&#x3D;[v2:100.73.18.153:3300&#x2F;0,v1:100.73.18.153:6789&#x2F;0],ceph-node03&#x3D;[v2:100.73.18.128:3300&#x2F;0,v1:100.73.18.128:6789&#x2F;0]&#125;, election epoch 12, leader 0 ceph-node01, quorum 0,1,2 ceph-node01,ceph-node02,ceph-node03  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>



<p>View the mon details</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph mon dump  </span><br><span class="line">dumped monmap epoch 3  </span><br><span class="line">epoch 3  </span><br><span class="line">fsid cc10b0cb-476f-420c-b1d6-e48c1dc929af  </span><br><span class="line">last_changed 2020-09-29 09:28:35.692432  </span><br><span class="line">created 2020-09-29 09:14:30.493476  </span><br><span class="line">min_mon_release 14 (nautilus)  </span><br><span class="line">0: [v2:100.73.18.152:3300&#x2F;0,v1:100.73.18.152:6789&#x2F;0] mon.ceph-node01  </span><br><span class="line">1: [v2:100.73.18.153:3300&#x2F;0,v1:100.73.18.153:6789&#x2F;0] mon.ceph-node02  </span><br><span class="line">2: [v2:100.73.18.128:3300&#x2F;0,v1:100.73.18.128:6789&#x2F;0] mon.ceph-node03  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>



<p><strong>15. Scale out the mgrs</strong></p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph-deploy mgr create ceph-node02 ceph-node03</span><br></pre></td></tr></table></figure>



<p><strong>16. Check the cluster status</strong></p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph -s  </span><br><span class="line">  cluster:  </span><br><span class="line">    id: cc10b0cb-476f-420c-b1d6-e48c1dc929af  </span><br><span class="line">    health: HEALTH_OK  </span><br><span class="line">  </span><br><span class="line">  services:  </span><br><span class="line">    mon: 3 daemons, quorum ceph-node01,ceph-node02,ceph-node03 (age 9m)  </span><br><span class="line">    mgr: ceph-node01(active, since 20m), standbys: ceph-node02, ceph-node03  </span><br><span class="line">    osd: 3 osds: 3 up (since 13m), 3 in (since 13m)  </span><br><span class="line">  </span><br><span class="line">  data:  </span><br><span class="line">    pools: 0 pools, 0 pgs  </span><br><span class="line">    objects: 0 objects, 0 B  </span><br><span class="line">    usage: 3.0 GiB used, 397 GiB &#x2F; 400 GiB avail  </span><br><span class="line">    pgs:  </span><br><span class="line">  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>

<p>A RADOS cluster with 3 mons, 3 mgrs, and 3 OSDs has been created successfully.  </p>
<p><strong>17. Remove a failed OSD</strong></p>
<p>Each OSD in a Ceph cluster normally corresponds to one device and runs as a dedicated daemon. When an OSD device fails, or an administrator needs to remove a particular OSD for maintenance, stop the related daemon first and then remove the device.</p>
<p>1. Mark the device out: ceph osd out {osd-num}</p>
<p>2. Stop the daemon: sudo systemctl stop ceph-osd@{osd-num} </p>
<p>3. Remove the device: ceph osd purge {id} --yes-i-really-mean-it</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph osd out 0  </span><br><span class="line">marked out osd.0.  </span><br><span class="line">[root@ceph-node01 ceph-deploy]# systemctl stop ceph-osd@0  </span><br><span class="line">[root@ceph-node01 ceph-deploy]# ceph osd purge 0 --yes-i-really-mean-it  </span><br><span class="line">purged osd.0  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>



<p>Check the status after removal  </p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph -s  </span><br><span class="line">  cluster:  </span><br><span class="line">    id: cc10b0cb-476f-420c-b1d6-e48c1dc929af  </span><br><span class="line">    health: HEALTH_WARN  </span><br><span class="line">            2 daemons have recently crashed  </span><br><span class="line">            OSD count 2 &lt; osd_pool_default_size 3  </span><br><span class="line">  </span><br><span class="line">  services:  </span><br><span class="line">    mon: 3 daemons, quorum ceph-node01,ceph-node02,ceph-node03 (age 15h)  </span><br><span class="line">    mgr: ceph-node01(active, since 15h)  </span><br><span class="line">    osd: 2 osds: 2 up (since 37h), 2 in (since 37h)  </span><br><span class="line">  </span><br><span class="line">  data:  </span><br><span class="line">    pools: 1 pools, 128 pgs  </span><br><span class="line">    objects: 54 objects, 137 MiB  </span><br><span class="line">    usage: 2.3 GiB used, 298 GiB &#x2F; 300 GiB avail  </span><br><span class="line">    pgs: 128 active+clean  </span><br><span class="line">  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>
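<p>The "OSD count 2 &lt; osd_pool_default_size 3" warning is a simple comparison: with a default pool replica size of 3, two OSDs cannot hold a complete replica set. An illustrative reconstruction of that check (not Ceph's actual implementation):</p>

```python
# Illustrative reconstruction of the health check seen above.
def osd_count_warning(n_osds: int, osd_pool_default_size: int = 3):
    """Return the warning string when too few OSDs exist, else None."""
    if n_osds < osd_pool_default_size:
        return f"OSD count {n_osds} < osd_pool_default_size {osd_pool_default_size}"
    return None
```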



<p>Zapping the disk fails with an error</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph-deploy disk zap ceph-node01 &#x2F;dev&#x2F;vdb  </span><br><span class="line">。。。  </span><br><span class="line">[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: &#x2F;usr&#x2F;sbin&#x2F;ceph-volume lvm zap &#x2F;dev&#x2F;vdb</span><br></pre></td></tr></table></figure>



<p>When zap fails like this, you can clear the start of the disk with dd and then reboot</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# dd if&#x3D;&#x2F;dev&#x2F;zero of&#x3D;&#x2F;dev&#x2F;vdb bs&#x3D;512K count&#x3D;1  </span><br><span class="line">[root@ceph-node01 ceph-deploy]# reboot</span><br></pre></td></tr></table></figure>
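<p>What the dd command does is overwrite the first 512 KiB of the device (bs=512K count=1) with zeros, destroying the partition table and any leftover labels that made ceph-volume refuse the zap. The effect can be sketched on a regular file standing in for /dev/vdb (illustrative only; never point this at a real disk by accident):</p>

```python
import os
import tempfile

# dd if=/dev/zero of=/dev/vdb bs=512K count=1 zeroes the first 512 KiB.
WIPE_BYTES = 512 * 1024

def wipe_start(path: str, nbytes: int = WIPE_BYTES) -> None:
    """Overwrite the first nbytes of path with zeros, like the dd command."""
    with open(path, "r+b") as f:
        f.write(b"\x00" * nbytes)

# Demonstration on a throwaway file pretending to hold a GPT header.
fd, fake_disk = tempfile.mkstemp()
os.close(fd)
with open(fake_disk, "wb") as f:
    f.write(b"EFI PART" + b"\xaa" * WIPE_BYTES)  # fake GPT signature + data
wipe_start(fake_disk)
with open(fake_disk, "rb") as f:
    head = f.read(8)  # now all zeros: the "partition table" is gone
os.remove(fake_disk)
```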



<p>Run zap again to wipe the disk so it can be added back to the cluster as an OSD</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph-deploy disk zap ceph-node01 &#x2F;dev&#x2F;vdb  </span><br><span class="line">。。。。  </span><br><span class="line"> &#x2F;usr&#x2F;sbin&#x2F;ceph-volume lvm zap &#x2F;dev&#x2F;vdb  </span><br><span class="line">[ceph-node01][WARNIN] --&gt; Zapping: &#x2F;dev&#x2F;vdb  </span><br><span class="line">[ceph-node01][WARNIN] --&gt; --destroy was not specified, but zapping a whole device will remove the partition table  </span><br><span class="line">[ceph-node01][WARNIN] Running command: &#x2F;usr&#x2F;bin&#x2F;dd if&#x3D;&#x2F;dev&#x2F;zero of&#x3D;&#x2F;dev&#x2F;vdb bs&#x3D;1M count&#x3D;10 conv&#x3D;fsync  </span><br><span class="line">[ceph-node01][WARNIN] stderr: 记录了10+0 的读入  </span><br><span class="line">[ceph-node01][WARNIN] 记录了10+0 的写出  </span><br><span class="line">[ceph-node01][WARNIN] 10485760字节(10 MB)已复制  </span><br><span class="line">[ceph-node01][WARNIN] stderr: ，0.0398864 秒，263 MB&#x2F;秒  </span><br><span class="line">[ceph-node01][WARNIN] --&gt; Zapping successful for: &lt;Raw Device: &#x2F;dev&#x2F;vdb&gt;  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>



<p>With the disk wiped, add it back to the cluster</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph-deploy osd create ceph-node01 --data &#x2F;dev&#x2F;vdb  </span><br><span class="line">。。。  </span><br><span class="line">[ceph-node01][WARNIN] Running command: &#x2F;usr&#x2F;bin&#x2F;systemctl enable --runtime ceph-osd@0  </span><br><span class="line">[ceph-node01][WARNIN] stderr: Created symlink from &#x2F;run&#x2F;systemd&#x2F;system&#x2F;ceph-osd.target.wants&#x2F;ceph-osd@0.service to &#x2F;usr&#x2F;lib&#x2F;systemd&#x2F;system&#x2F;ceph-osd@.service.  </span><br><span class="line">[ceph-node01][WARNIN] Running command: &#x2F;usr&#x2F;bin&#x2F;systemctl start ceph-osd@0  </span><br><span class="line">[ceph-node01][WARNIN] --&gt; ceph-volume lvm activate successful for osd ID: 0  </span><br><span class="line">[ceph-node01][WARNIN] --&gt; ceph-volume lvm create successful for: &#x2F;dev&#x2F;vdb  </span><br><span class="line">[ceph-node01][INFO ] checking OSD status...  </span><br><span class="line">[ceph-node01][DEBUG ] find the location of an executable  </span><br><span class="line">[ceph-node01][INFO ] Running command: &#x2F;bin&#x2F;ceph --cluster&#x3D;ceph osd stat --format&#x3D;json  </span><br><span class="line">[ceph_deploy.osd][DEBUG ] Host ceph-node01 is now ready for osd use.  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>



<p>Checking the cluster status shows data being migrated  </p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph -s  </span><br><span class="line">  cluster:  </span><br><span class="line">    id: cc10b0cb-476f-420c-b1d6-e48c1dc929af  </span><br><span class="line">    health: HEALTH_WARN  </span><br><span class="line">            Degraded data redundancy: 9&#x2F;88 objects degraded (10.227%), 7 pgs degraded  </span><br><span class="line">            2 daemons have recently crashed  </span><br><span class="line">  </span><br><span class="line">  services:  </span><br><span class="line">    mon: 3 daemons, quorum ceph-node01,ceph-node02,ceph-node03 (age 15h)  </span><br><span class="line">    mgr: ceph-node01(active, since 15h)  </span><br><span class="line">    osd: 3 osds: 3 up (since 6s), 3 in (since 6s)  </span><br><span class="line">  </span><br><span class="line">  data:  </span><br><span class="line">    pools: 1 pools, 128 pgs  </span><br><span class="line">    objects: 44 objects, 105 MiB  </span><br><span class="line">    usage: 3.3 GiB used, 397 GiB &#x2F; 400 GiB avail  </span><br><span class="line">    pgs: 24.219% pgs not active  </span><br><span class="line">             9&#x2F;88 objects degraded (10.227%)  </span><br><span class="line">             1&#x2F;88 objects misplaced (1.136%)  </span><br><span class="line">             90 active+clean  </span><br><span class="line">             31 peering  </span><br><span class="line">             6 active+recovery_wait+degraded  </span><br><span class="line">             1 active+recovering+degraded  </span><br><span class="line">  </span><br><span class="line">  io:  </span><br><span class="line">    recovery: 1.3 MiB&#x2F;s, 1 keys&#x2F;s, 1 objects&#x2F;s  </span><br><span class="line">  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>
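<p>The degraded and misplaced figures in this output are plain ratios over the object copy total: the pool has 44 objects with 2 replicas each, so 9 of the 88 copies degraded is 10.227%. A one-line illustration of the arithmetic:</p>

```python
# Illustrative: the status percentages are part/total ratios,
# printed with three decimal places.
def pct(part: int, total: int) -> float:
    return round(part / total * 100, 3)
```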



<p>After waiting a while, the data migration has finished, but two daemons are reported as recently crashed  </p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph -s  </span><br><span class="line">  cluster:  </span><br><span class="line">    id: cc10b0cb-476f-420c-b1d6-e48c1dc929af  </span><br><span class="line">    health: HEALTH_WARN  </span><br><span class="line">            2 daemons have recently crashed  </span><br><span class="line">  </span><br><span class="line">  services:  </span><br><span class="line">    mon: 3 daemons, quorum ceph-node01,ceph-node02,ceph-node03 (age 15h)  </span><br><span class="line">    mgr: ceph-node01(active, since 15h), standbys: ceph-node02, ceph-node03  </span><br><span class="line">    osd: 3 osds: 3 up (since 30m), 3 in (since 30m)  </span><br><span class="line">  </span><br><span class="line">  data:  </span><br><span class="line">    pools: 1 pools, 128 pgs  </span><br><span class="line">    objects: 54 objects, 137 MiB  </span><br><span class="line">    usage: 3.3 GiB used, 397 GiB &#x2F; 400 GiB avail  </span><br><span class="line">    pgs: 128 active+clean  </span><br><span class="line">  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>



<p>Use ceph health detail to inspect the cluster's problems  </p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph health  </span><br><span class="line">HEALTH_WARN 2 daemons have recently crashed  </span><br><span class="line">[root@ceph-node01 ceph-deploy]# ceph health detail  </span><br><span class="line">HEALTH_WARN 2 daemons have recently crashed  </span><br><span class="line">RECENT_CRASH 2 daemons have recently crashed  </span><br><span class="line">    mgr.ceph-node02 crashed on host ceph-node02 at 2020-10-03 01:53:00.058389Z  </span><br><span class="line">    mgr.ceph-node03 crashed on host ceph-node03 at 2020-10-03 03:33:30.776755Z  </span><br><span class="line">[root@ceph-node01 ceph-deploy]# ceph crash ls  </span><br><span class="line">ID ENTITY NEW  </span><br><span class="line">2020-10-03_01:53:00.058389Z_c26486ef-adab-4a1f-9b94-68953571e8d3 mgr.ceph-node02 *  </span><br><span class="line">2020-10-03_03:33:30.776755Z_88464c4c-0711-42fa-ae05-6196180cfe31 mgr.ceph-node03 *  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>



<p>systemctl restart ceph-mgr@ceph-node02 could not bring the daemon back (the root cause still needs investigation), so the mgrs were simply recreated:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph-deploy mgr create ceph-node02 ceph-node03</span><br></pre></td></tr></table></figure>



<p>Checking the cluster status again shows the mgrs have recovered, but the two crashed daemons are still reported:  </p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph -s  </span><br><span class="line">  cluster:  </span><br><span class="line">    id: cc10b0cb-476f-420c-b1d6-e48c1dc929af  </span><br><span class="line">    health: HEALTH_WARN  </span><br><span class="line">            2 daemons have recently crashed  </span><br><span class="line">  </span><br><span class="line">  services:  </span><br><span class="line">    mon: 3 daemons, quorum ceph-node01,ceph-node02,ceph-node03 (age 15h)  </span><br><span class="line">    mgr: ceph-node01(active, since 15h), standbys: ceph-node02, ceph-node03  </span><br><span class="line">    osd: 3 osds: 3 up (since 30m), 3 in (since 30m)  </span><br><span class="line">  </span><br><span class="line">  data:  </span><br><span class="line">    pools: 1 pools, 128 pgs  </span><br><span class="line">    objects: 54 objects, 137 MiB  </span><br><span class="line">    usage: 3.3 GiB used, 397 GiB &#x2F; 400 GiB avail  </span><br><span class="line">    pgs: 128 active+clean  </span><br><span class="line">  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>



<p>Acknowledge the crash reports with ceph crash archive-all, or archive them individually by ID; the cluster status then returns to HEALTH_OK:  </p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line">[root@ceph-node01 ceph-deploy]# ceph crash archive-all  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#  </span><br><span class="line">[root@ceph-node01 ceph-deploy]# ceph -s  </span><br><span class="line">  cluster:  </span><br><span class="line">    id: cc10b0cb-476f-420c-b1d6-e48c1dc929af  </span><br><span class="line">    health: HEALTH_OK  </span><br><span class="line">  </span><br><span class="line">  services:  </span><br><span class="line">    mon: 3 daemons, quorum ceph-node01,ceph-node02,ceph-node03 (age 15h)  </span><br><span class="line">    mgr: ceph-node01(active, since 15h), standbys: ceph-node02, ceph-node03  </span><br><span class="line">    osd: 3 osds: 3 up (since 33m), 3 in (since 33m)  </span><br><span class="line">  </span><br><span class="line">  data:  </span><br><span class="line">    pools: 1 pools, 128 pgs  </span><br><span class="line">    objects: 54 objects, 137 MiB  </span><br><span class="line">    usage: 3.3 GiB used, 397 GiB &#x2F; 400 GiB avail  </span><br><span class="line">    pgs: 128 active+clean  </span><br><span class="line">  </span><br><span class="line">[root@ceph-node01 ceph-deploy]#</span><br></pre></td></tr></table></figure>



<p><strong>Summary</strong></p>
<hr>
<p><strong>1. Pre-installation preparation</strong></p>
<p>Server planning, passwordless trust between servers, hostname resolution (hosts), NTP synchronization, disabling firewalld/iptables, disabling SELinux, configuring yum repositories, and so on.</p>
<p><strong>2. Ceph cluster deployment</strong>  </p>
<p>Create the first mon: ceph-deploy new --cluster-network 100.73.18.0/24 --public-network 100.73.18.0/24 &lt;node-name&gt;</p>
<p>Copy the configs: ceph-deploy admin ceph-node01 ceph-node02 ceph-node03</p>
<p>Add a mon: ceph-deploy mon add ceph-node02</p>
<p>Create the mgr: ceph-deploy mgr create ceph-node01</p>
<p>Add mgrs: ceph-deploy mgr create ceph-node02 ceph-node03</p>
<p>Create an OSD: ceph-deploy osd create ceph-node01 --data /dev/vdb</p>
<p><strong>3. Inspecting cluster information</strong></p>
<p>List a node's available OSDs: ceph-deploy osd list ceph-node01</p>
<p>View disk information: lsblk</p>
<p>Wipe an existing disk so it can join the cluster as an OSD: ceph-deploy disk zap ceph-node01 /dev/vdb</p>
<p>View OSD information: ceph osd tree, ceph osd stat, ceph osd ls, ceph osd dump, etc.</p>
<p>View the mon election: ceph quorum_status, ceph mon stat, ceph mon dump, etc.</p>
<p><strong>4. Cluster failures</strong></p>
<p>Mark a failed OSD out: ceph osd out {osd-num}</p>
<p>Stop the failed OSD's daemon: systemctl stop ceph-osd@{osd-num} </p>
<p>Purge the failed OSD: ceph osd purge {id} --yes-i-really-mean-it</p>
<p>Inspect failure details: ceph health, ceph health detail</p>
<p>List crashed daemons: ceph crash ls</p>
<p>Acknowledge crashes once handled: ceph crash archive-all</p>
<p><strong>Your follows keep me writing</strong></p>
<hr>
<p><strong>Quick basics</strong></p>
<hr>
<p><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzI1MDgwNzQ1MQ==&mid=2247484783&idx=1&sn=d4fdf3d489b7640442601476b8d4b1fd&chksm=e9fdd09bde8a598d6dd319a58073916f286def9f3bf7de6a9d6cc5a50f4cb9165436f9e2f31f&scene=21#wechat_redirect">A summary of the hping command</a>  </p>
<p><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzI1MDgwNzQ1MQ==&mid=2247484796&idx=1&sn=42cc6c1a13ee575dc31997675da76c05&chksm=e9fdd088de8a599e96a403555c0a629767e748302815ae04fa8cef87a741343c776ddde544ae&scene=21#wechat_redirect">Linux NIC bonding basics</a>  </p>
<p><strong>Shared series</strong></p>
<hr>
<p><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzI1MDgwNzQ1MQ==&mid=2247483891&idx=1&sn=17dcd7cd0645df509c8e49059a2f00d7&chksm=e9fdd407de8a5d119d439b70dc2c381ec2eceddb63ed43767c2e1b7cffefe077e41955568cb5&scene=21#wechat_redirect">Deploying a highly available Kubernetes v1.17.3 cluster with kubeadm and an external etcd</a>  </p>
<p><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzI1MDgwNzQ1MQ==&mid=2247484257&idx=1&sn=c666cf13ec5042a2c40dd0ccf89cc1eb&chksm=e9fdd695de8a5f8332bf51723043137032e65984dab81295616c1e13f55560088d8a6546fe78&scene=21#wechat_redirect">Part 1: Monitoring a Kubernetes cluster with Prometheus (theory)</a></p>
<p><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzI1MDgwNzQ1MQ==&mid=2247485232&idx=1&sn=ff0e93b91432a68699e0e00a96602b78&chksm=e9fdd2c4de8a5bd22d4801cf35f78ffd9d7ab95b2a254bc5a4d181d9247c31c9b2f5485d4b74&scene=21#wechat_redirect">Ceph Basics - Storage Fundamentals and Architecture</a>  </p>

 
      <!-- reward -->
      
      <div id="reword-out">
        <div id="reward-btn">
          打赏
        </div>
      </div>
      
    </div>
    

    <!-- copyright -->
    
    <div class="declare">
      <ul class="post-copyright">
        <li>
          <i class="ri-copyright-line"></i>
          <strong>版权声明： </strong>
          
          本博客所有文章除特别声明外，著作权归作者所有。转载请注明出处！
          
        </li>
      </ul>
    </div>
    
    <footer class="article-footer">
       
<div class="share-btn">
      <span class="share-sns share-outer">
        <i class="ri-share-forward-line"></i>
        分享
      </span>
      <div class="share-wrap">
        <i class="arrow"></i>
        <div class="share-icons">
          
          <a class="weibo share-sns" href="javascript:;" data-type="weibo">
            <i class="ri-weibo-fill"></i>
          </a>
          <a class="weixin share-sns wxFab" href="javascript:;" data-type="weixin">
            <i class="ri-wechat-fill"></i>
          </a>
          <a class="qq share-sns" href="javascript:;" data-type="qq">
            <i class="ri-qq-fill"></i>
          </a>
          <a class="douban share-sns" href="javascript:;" data-type="douban">
            <i class="ri-douban-line"></i>
          </a>
          <!-- <a class="qzone share-sns" href="javascript:;" data-type="qzone">
            <i class="icon icon-qzone"></i>
          </a> -->
          
          <a class="facebook share-sns" href="javascript:;" data-type="facebook">
            <i class="ri-facebook-circle-fill"></i>
          </a>
          <a class="twitter share-sns" href="javascript:;" data-type="twitter">
            <i class="ri-twitter-fill"></i>
          </a>
          <a class="google share-sns" href="javascript:;" data-type="google">
            <i class="ri-google-fill"></i>
          </a>
        </div>
      </div>
</div>

<div class="wx-share-modal">
    <a class="modal-close" href="javascript:;"><i class="ri-close-circle-line"></i></a>
    <p>扫一扫，分享到微信</p>
    <div class="wx-qrcode">
      <img src="//api.qrserver.com/v1/create-qr-code/?size=150x150&data=http://example.com/2020/11/11/k8s/Ceph%20%E5%9F%BA%E7%A1%80%E7%AF%87%20-%20%E9%9B%86%E7%BE%A4%E9%83%A8%E7%BD%B2%E5%8F%8A%E6%95%85%E9%9A%9C%E6%8E%92%E6%9F%A5/" alt="微信分享二维码">
    </div>
</div>

<div id="share-mask"></div>  
  <ul class="article-tag-list" itemprop="keywords"><li class="article-tag-list-item"><a class="article-tag-list-link" href="/tags/k8s/" rel="tag">k8s</a></li></ul>

    </footer>
  </div>

   
  <nav class="article-nav">
    
      <a href="/2020/11/11/interview/IT%E8%BF%90%E7%BB%B4%E9%9D%A2%E8%AF%95%E9%97%AE%E9%A2%98%E6%80%BB%E7%BB%93-%E7%AE%80%E8%BF%B0Etcd%E3%80%81Kubernetes%E3%80%81Lvs%E3%80%81HAProxy/" class="article-nav-link">
        <strong class="article-nav-caption">上一篇</strong>
        <div class="article-nav-title">
          
            IT运维面试问题总结-简述Etcd、Kubernetes、Lvs、HAProxy.md
          
        </div>
      </a>
    
    
      <a href="/2020/11/11/develop/%E5%BF%AB%E9%80%9F%E7%90%86%E8%A7%A3Cookie%E3%80%81Session%E3%80%81Token%E3%80%81JWT/" class="article-nav-link">
        <strong class="article-nav-caption">下一篇</strong>
        <div class="article-nav-title">快速理解Cookie、Session、Token、JWT.md</div>
      </a>
    
  </nav>

   
<!-- valine评论 -->
<div id="vcomments-box">
  <div id="vcomments"></div>
</div>
<script src="//cdn1.lncld.net/static/js/3.0.4/av-min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/valine@1.4.14/dist/Valine.min.js"></script>
<script>
  new Valine({
    el: "#vcomments",
    app_id: "",
    app_key: "",
    path: window.location.pathname,
    avatar: "monsterid",
    placeholder: "给我的文章加点评论吧~",
    recordIP: true,
  });
  const infoEle = document.querySelector("#vcomments .info");
  if (infoEle && infoEle.childNodes && infoEle.childNodes.length > 0) {
    infoEle.childNodes.forEach(function (item) {
      item.parentNode.removeChild(item);
    });
  }
</script>
<style>
  #vcomments-box {
    padding: 5px 30px;
  }

  @media screen and (max-width: 800px) {
    #vcomments-box {
      padding: 5px 0px;
    }
  }

  #vcomments-box #vcomments {
    background-color: #fff;
  }

  .v .vlist .vcard .vh {
    padding-right: 20px;
  }

  .v .vlist .vcard {
    padding-left: 10px;
  }
</style>

 
     
</article>

</section>
      <footer class="footer">
  <div class="outer">
    <ul>
      <li>
        Copyrights &copy;
        2015-2020
        <i class="ri-heart-fill heart_icon"></i> TzWind
      </li>
    </ul>
    <ul>
      <li>
        
        
        
        由 <a href="https://hexo.io" target="_blank">Hexo</a> 强力驱动
        <span class="division">|</span>
        主题 - <a href="https://github.com/Shen-Yu/hexo-theme-ayer" target="_blank">Ayer</a>
        
      </li>
    </ul>
    <ul>
      <li>
        
        
        <span>
  <span><i class="ri-user-3-fill"></i>访问人数:<span id="busuanzi_value_site_uv"></span></span>
  <span class="division">|</span>
  <span><i class="ri-eye-fill"></i>浏览次数:<span id="busuanzi_value_page_pv"></span></span>
</span>
        
      </li>
    </ul>
    <ul>
      
    </ul>
    <ul>
      
    </ul>
    <ul>
      <li>
        <!-- CNZZ analytics -->
        
        <script type="text/javascript" src='https://s9.cnzz.com/z_stat.php?id=1278069914&amp;web_id=1278069914'></script>
        
      </li>
    </ul>
  </div>
</footer>
      <div class="float_btns">
        <div class="totop" id="totop">
  <i class="ri-arrow-up-line"></i>
</div>

<div class="todark" id="todark">
  <i class="ri-moon-line"></i>
</div>

      </div>
    </main>
    <aside class="sidebar on">
      <button class="navbar-toggle"></button>
<nav class="navbar">
  
  <div class="logo">
    <a href="/"><img src="/images/ayer-side.svg" alt="Hexo"></a>
  </div>
  
  <ul class="nav nav-main">
    
    <li class="nav-item">
      <a class="nav-item-link" href="/">主页</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" href="/archives">归档</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" href="/categories">分类</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" href="/tags">标签</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" target="_blank" rel="noopener" href="http://www.baidu.com">百度</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" href="/friends">友链</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" href="/2019/about">关于我</a>
    </li>
    
  </ul>
</nav>
<nav class="navbar navbar-bottom">
  <ul class="nav">
    <li class="nav-item">
      
      <a class="nav-item-link nav-item-search"  title="搜索">
        <i class="ri-search-line"></i>
      </a>
      
      
      <a class="nav-item-link" target="_blank" href="/atom.xml" title="RSS Feed">
        <i class="ri-rss-line"></i>
      </a>
      
    </li>
  </ul>
</nav>
<div class="search-form-wrap">
  <div class="local-search local-search-plugin">
  <input type="search" id="local-search-input" class="local-search-input" placeholder="Search...">
  <div id="local-search-result" class="local-search-result"></div>
</div>
</div>
    </aside>
    <script>
      if (window.matchMedia("(max-width: 768px)").matches) {
        document.querySelector('.content').classList.remove('on');
        document.querySelector('.sidebar').classList.remove('on');
      }
    </script>
    <div id="mask"></div>

<!-- #reward -->
<div id="reward">
  <span class="close"><i class="ri-close-line"></i></span>
  <p class="reward-p"><i class="ri-cup-line"></i>请我喝杯咖啡吧~</p>
  <div class="reward-box">
    
    
  </div>
</div>
    
<script src="/js/jquery-2.0.3.min.js"></script>


<script src="/js/lazyload.min.js"></script>

<!-- Tocbot -->


<script src="/js/tocbot.min.js"></script>

<script>
  tocbot.init({
    tocSelector: '.tocbot',
    contentSelector: '.article-entry',
    headingSelector: 'h1, h2, h3, h4, h5, h6',
    hasInnerContainers: true,
    scrollSmooth: true,
    scrollContainer: 'main',
    positionFixedSelector: '.tocbot',
    positionFixedClass: 'is-position-fixed',
    fixedSidebarOffset: 'auto'
  });
</script>

<script src="https://cdn.jsdelivr.net/npm/jquery-modal@0.9.2/jquery.modal.min.js"></script>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/jquery-modal@0.9.2/jquery.modal.min.css">
<script src="https://cdn.jsdelivr.net/npm/justifiedGallery@3.7.0/dist/js/jquery.justifiedGallery.min.js"></script>

<script src="/dist/main.js"></script>

<!-- ImageViewer -->

<!-- Root element of PhotoSwipe. Must have class pswp. -->
<div class="pswp" tabindex="-1" role="dialog" aria-hidden="true">

    <!-- Background of PhotoSwipe. 
         It's a separate element as animating opacity is faster than rgba(). -->
    <div class="pswp__bg"></div>

    <!-- Slides wrapper with overflow:hidden. -->
    <div class="pswp__scroll-wrap">

        <!-- Container that holds slides. 
            PhotoSwipe keeps only 3 of them in the DOM to save memory.
            Don't modify these 3 pswp__item elements, data is added later on. -->
        <div class="pswp__container">
            <div class="pswp__item"></div>
            <div class="pswp__item"></div>
            <div class="pswp__item"></div>
        </div>

        <!-- Default (PhotoSwipeUI_Default) interface on top of sliding area. Can be changed. -->
        <div class="pswp__ui pswp__ui--hidden">

            <div class="pswp__top-bar">

                <!--  Controls are self-explanatory. Order can be changed. -->

                <div class="pswp__counter"></div>

                <button class="pswp__button pswp__button--close" title="Close (Esc)"></button>

                <button class="pswp__button pswp__button--share" style="display:none" title="Share"></button>

                <button class="pswp__button pswp__button--fs" title="Toggle fullscreen"></button>

                <button class="pswp__button pswp__button--zoom" title="Zoom in/out"></button>

                <!-- Preloader demo http://codepen.io/dimsemenov/pen/yyBWoR -->
                <!-- element will get class pswp__preloader--active when preloader is running -->
                <div class="pswp__preloader">
                    <div class="pswp__preloader__icn">
                        <div class="pswp__preloader__cut">
                            <div class="pswp__preloader__donut"></div>
                        </div>
                    </div>
                </div>
            </div>

            <div class="pswp__share-modal pswp__share-modal--hidden pswp__single-tap">
                <div class="pswp__share-tooltip"></div>
            </div>

            <button class="pswp__button pswp__button--arrow--left" title="Previous (arrow left)">
            </button>

            <button class="pswp__button pswp__button--arrow--right" title="Next (arrow right)">
            </button>

            <div class="pswp__caption">
                <div class="pswp__caption__center"></div>
            </div>

        </div>

    </div>

</div>

<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/photoswipe.min.css">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/default-skin/default-skin.min.css">
<script src="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/photoswipe.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/photoswipe-ui-default.min.js"></script>

<script>
    function viewer_init() {
        let pswpElement = document.querySelectorAll('.pswp')[0];
        let $imgArr = document.querySelectorAll('.article-entry img:not(.reward-img)')

        $imgArr.forEach(($em, i) => {
            $em.onclick = () => {
                // the slider is expanded
                // TODO: fragile check; replace with an explicit state flag
                if (document.querySelector('.left-col.show')) return
                let items = []
                $imgArr.forEach(($em2) => {
                    let src = $em2.getAttribute('data-target') || $em2.getAttribute('src')
                    let title = $em2.getAttribute('alt')
                    // read the natural image size (may be 0 if the image has not loaded yet)
                    const image = new Image()
                    image.src = src
                    items.push({
                        src: src,
                        w: image.width || $em2.width,
                        h: image.height || $em2.height,
                        title: title
                    })
                })
                var gallery = new PhotoSwipe(pswpElement, PhotoSwipeUI_Default, items, {
                    index: parseInt(i)
                });
                gallery.init()
            }
        })
    }
    viewer_init()
</script>

<!-- MathJax -->

<!-- Katex -->

<!-- busuanzi  -->


<script src="/js/busuanzi-2.3.pure.min.js"></script>


<!-- ClickLove -->

<!-- ClickBoom1 -->

<!-- ClickBoom2 -->

<!-- CodeCopy -->


<link rel="stylesheet" href="/css/clipboard.css">

<script src="https://cdn.jsdelivr.net/npm/clipboard@2/dist/clipboard.min.js"></script>
<script>
  function wait(callback, milliseconds) {
    window.setTimeout(callback, milliseconds);
  }
  !function (e, t, a) {
    var initCopyCode = function(){
      var copyHtml = '';
      copyHtml += '<button class="btn-copy" data-clipboard-snippet="">';
      copyHtml += '<i class="ri-file-copy-2-line"></i><span>COPY</span>';
      copyHtml += '</button>';
      $(".highlight .code pre").before(copyHtml);
      $(".article pre code").before(copyHtml);
      var clipboard = new ClipboardJS('.btn-copy', {
        target: function(trigger) {
          return trigger.nextElementSibling;
        }
      });
      clipboard.on('success', function(e) {
        let $btn = $(e.trigger);
        $btn.addClass('copied');
        let $icon = $($btn.find('i'));
        $icon.removeClass('ri-file-copy-2-line');
        $icon.addClass('ri-checkbox-circle-line');
        let $span = $($btn.find('span'));
        $span[0].innerText = 'COPIED';
        
        wait(function () { // restore after two seconds
          $icon.removeClass('ri-checkbox-circle-line');
          $icon.addClass('ri-file-copy-2-line');
          $span[0].innerText = 'COPY';
        }, 2000);
      });
      clipboard.on('error', function(e) {
        e.clearSelection();
        let $btn = $(e.trigger);
        $btn.addClass('copy-failed');
        let $icon = $($btn.find('i'));
        $icon.removeClass('ri-file-copy-2-line');
        $icon.addClass('ri-time-line');
        let $span = $($btn.find('span'));
        $span[0].innerText = 'COPY FAILED';
        
        wait(function () { // restore after two seconds
          $icon.removeClass('ri-time-line');
          $icon.addClass('ri-file-copy-2-line');
          $span[0].innerText = 'COPY';
        }, 2000);
      });
    }
    initCopyCode();
  }(window, document);
</script>


<!-- CanvasBackground -->


    
  </div>
</body>

</html>