<!DOCTYPE html>


<html lang="zh-CN">


<head>
  <meta charset="utf-8" />
    
  <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />
  <title>
     
  </title>
  <meta name="generator" content="hexo-theme-ayer">
  
  <link rel="shortcut icon" href="/favicon.ico" />
  
  
<link rel="stylesheet" href="/dist/main.css">

  
<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/Shen-Yu/cdn/css/remixicon.min.css">

  
<link rel="stylesheet" href="/css/custom.css">

  
  
<script src="https://cdn.jsdelivr.net/npm/pace-js@1.0.2/pace.min.js"></script>

  
  

  

</head>


<body>
  <div id="app">
    
      
    <main class="content on">
      
<section class="cover">
    
      
      <a class="forkMe" href="https://github.com/Shen-Yu/hexo-theme-ayer"
        target="_blank"><img width="149" height="149" src="/images/forkme.png"
          class="attachment-full size-full" alt="Fork me on GitHub" data-recalc-dims="1"></a>
    
  <div class="cover-frame">
    <div class="bg-box">
      <img src="/images/cover1.jpg" alt="image frame" />
    </div>
    <div class="cover-inner text-center text-white">
      <h1><a href="/">Hexo</a></h1>
      <div id="subtitle-box">
        
        <span id="subtitle"></span>
        
      </div>
      <div>
        
      </div>
    </div>
  </div>
  <div class="cover-learn-more">
    <a href="javascript:void(0)" class="anchor"><i class="ri-arrow-down-line"></i></a>
  </div>
</section>



<script src="https://cdn.jsdelivr.net/npm/typed.js@2.0.11/lib/typed.min.js"></script>


<!-- Subtitle -->

  <script>
    try {
      var typed = new Typed("#subtitle", {
        strings: ['面朝大海，春暖花开', '何来天才，唯有苦练', '集中一点，登峰造极'],
        startDelay: 0,
        typeSpeed: 200,
        loop: true,
        backSpeed: 100,
        showCursor: true
      });
    } catch (err) {
      console.log(err)
    }
  </script>
  
<div id="main">
  <section class="outer">
  
  

<div class="notice" style="margin-top:50px">
    <i class="ri-heart-fill"></i>
    <div class="notice-content" id="broad"></div>
</div>
<script type="text/javascript">
    fetch('https://v1.hitokoto.cn')
        .then(response => response.json())
        .then(data => {
            document.getElementById("broad").innerHTML = data.hitokoto;
        })
        .catch(console.error)
</script>

<style>
    .notice {
        padding: 20px;
        border: 1px dashed #e6e6e6;
        color: #969696;
        position: relative;
        display: inline-block;
        width: 100%;
        background: #fbfbfb50;
        border-radius: 10px;
    }

    .notice i {
        float: left;
        color: #999;
        font-size: 16px;
        padding-right: 10px;
        vertical-align: middle;
        margin-top: -2px;
    }

    .notice-content {
        display: initial;
        vertical-align: middle;
    }
</style>
  
  <article class="articles">
    
    
    
    
    <article
  id="post-docker/一个网站的微服务架构实战docker和 docker-compose"
  class="article article-type-post"
  itemscope
  itemprop="blogPost"
  data-scroll-reveal
>
  <div class="article-inner">
    
    <header class="article-header">
       
<h2 itemprop="name">
  <a class="article-title" href="/2020/11/11/docker/%E4%B8%80%E4%B8%AA%E7%BD%91%E7%AB%99%E7%9A%84%E5%BE%AE%E6%9C%8D%E5%8A%A1%E6%9E%B6%E6%9E%84%E5%AE%9E%E6%88%98docker%E5%92%8C%20docker-compose/"
    >Hands-On Microservice Architecture for a Website with Docker and docker-compose</a>
</h2>
 

    </header>
     
    <div class="article-meta">
      <a href="/2020/11/11/docker/%E4%B8%80%E4%B8%AA%E7%BD%91%E7%AB%99%E7%9A%84%E5%BE%AE%E6%9C%8D%E5%8A%A1%E6%9E%B6%E6%9E%84%E5%AE%9E%E6%88%98docker%E5%92%8C%20docker-compose/" class="article-date">
  <time datetime="2020-11-10T16:00:00.000Z" itemprop="datePublished">2020-11-11</time>
</a> 
  <div class="article-category">
    <a class="article-category-link" href="/categories/docker/">docker</a>
  </div>
   
    </div>
      
    <div class="article-entry" itemprop="articleBody">
       
  <h1 id="一个网站的微服务架构实战docker和-docker-compose"><a href="#一个网站的微服务架构实战docker和-docker-compose" class="headerlink" title="一个网站的微服务架构实战docker和 docker-compose"></a>Hands-On Microservice Architecture for a Website with Docker and docker-compose</h1><h2 id="前言"><a href="#前言" class="headerlink" title="前言"></a><strong>Preface</strong></h2><p>This is a complete project exercise: the Angular frontend, the Spring Boot API, and MySQL are each packaged into Docker images with Dockerfiles and orchestrated together by docker-compose. The goal is to make the whole product lightweight and flexible: once each module's image has been pushed to a public registry, anyone can run the entire project (frontend, backend, database, file server, and so on) on their own server with the single command "docker-compose up -d".</p>
<p>The project is an article-sharing community along the lines of segmentfault. The idea: deployed on a personal server it becomes your personal article library; on a team server, the team's internal article library; on a company server, an article-sharing community for every employee. Its defining trait is that all of the project's applications and file resources are self-contained, so users can deploy and use it out of the box with no dependency on the host machine.</p>
<p>There are currently three Docker images. I would eventually like to combine them into one, but for now the three are orchestrated with docker-compose.</p>
<ol>
<li>MySQL image: built on MySQL, with the project's databases, table structures, and seed data for the base tables packaged into the image as SQL scripts. When the container starts, all of the project's databases, tables, and seed data are created automatically.</li>
<li>Spring Boot image: the backend API is developed with Spring Boot and, once finished, packaged directly into an image. Thanks to the embedded Tomcat it runs as-is, with its datasource pointing at the database inside the running MySQL container.</li>
<li>Nginx (Angular) image: packages the Angular project's dist output together with the default.conf file. It serves the Angular pages, mounts a host directory to act as a file server, and reverse-proxies the Spring Boot API to work around cross-origin restrictions.</li>
</ol>
<p>Finally, the three containers are orchestrated through docker-compose, and they reach one another via internal aliases, which avoids broken IP references when the project moves to another host. For convenience during development, I also set up automated deployment.</p>
<hr>
<h2 id="MySQL镜像"><a href="#MySQL镜像" class="headerlink" title="MySQL镜像"></a><strong>MySQL Image</strong></h2>
<h3 id="初始化脚本"><a href="#初始化脚本" class="headerlink" title="初始化脚本"></a><strong>Initialization Script</strong></h3>
<p>Once the project is finished, generate the SQL scripts for the project's databases, table structures, and seed data, so that when MySQL starts inside the container it automatically builds the schema and loads the base data.</p>
<p>The Navicat for MySQL client can export SQL scripts for a database's table structures and data. Without Navicat, connect to the development MySQL instance running in a container and export the same scripts with mysqldump. The command below dumps them to the host's /bees/sql directory:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">docker exec -it mysql mysqldump -uroot -pPASSWORD DB_NAME &gt; /bees/sql/DB_NAME.sql</span><br></pre></td></tr></table></figure>



<p>This only exports the table structures and data; you still need to prepend the database-creation SQL at the top of the script:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">drop database if exists DB_NAME;</span><br><span class="line">create database DB_NAME;</span><br><span class="line">use DB_NAME;</span><br></pre></td></tr></table></figure>



<p>With these two steps, the initialization script covering the database, the table structures, and the seed data is ready.</p>
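<p>The two steps above can be stitched together with a couple of shell commands. A minimal sketch, assuming the database is named bees (to match the bees.sql used below) and with dump.sql standing in for the script exported by mysqldump:</p>

```shell
# Placeholder for the real mysqldump export (an assumption for this sketch)
echo 'CREATE TABLE demo (id INT);' > dump.sql

# Database-creation statements that must come first
cat > header.sql <<'EOF'
drop database if exists bees;
create database bees;
use bees;
EOF

# Concatenate header + dump into the final init script for the image
cat header.sql dump.sql > bees.sql
head -n 1 bees.sql
```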
<h3 id="Dockerfile构建镜像"><a href="#Dockerfile构建镜像" class="headerlink" title="Dockerfile构建镜像"></a><strong>Dockerfile构建镜像</strong></h3><p>**<br>**</p>
<p>我们生成的SQL脚本叫 bees.sql,在MySQL官方镜像中提供了容器启动时自动执行/docker-entrypoint-initdb.d文件夹下的脚本的功能(包括shell脚本和sql脚本)，我们在后续生成镜像的时候，将上述生成的SQL脚本COPY到MySQL的/docker-entrypoint-initdb.d目录下就可以了。<br>现在我们写Dockerfile，很简单：</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">FROM mysql</span><br><span class="line"></span><br><span class="line">MAINTAINER kerry(kerry.wu@definesys.com)</span><br><span class="line"></span><br><span class="line">COPY bees.sql &#x2F;docker-entrypoint-initdb.d</span><br></pre></td></tr></table></figure>



<p>Put bees.sql and the Dockerfile in the same directory and run the build command:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">docker build -t bees-mysql .</span><br><span class="line"></span><br></pre></td></tr></table></figure>

<p>Run docker images and you will see a brand-new bees-mysql image in the local library.</p>
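<p>Before wiring everything together, the image can be smoke-tested on its own. A hedged sketch (the container name is arbitrary, the root password matches the one set later in docker-compose.yml, and the sleep is a crude stand-in for waiting on MySQL startup):</p>

```shell
# Start the image; the entrypoint runs the scripts in /docker-entrypoint-initdb.d
docker run -d --name bees-mysql-test -e MYSQL_ROOT_PASSWORD=kerry bees-mysql

sleep 30   # crude wait for MySQL to finish initializing

# The database created by bees.sql should now appear in the list
docker exec bees-mysql-test mysql -uroot -pkerry -e 'show databases;'

docker rm -f bees-mysql-test   # clean up
```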
<hr>
<h2 id="SpringBoot镜像"><a href="#SpringBoot镜像" class="headerlink" title="SpringBoot镜像"></a><strong>Spring Boot Image</strong></h2>
<p>There are many ways to build a Spring Boot image: some generate it from source, others from the jar. Not wanting to touch the code at all, I chose the latter and build the image from the generated jar.<br>Create a directory, upload the prepared Spring Boot jar (named bees-0.0.1-SNAPSHOT.jar here), and write another Dockerfile:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">FROM java:8</span><br><span class="line">VOLUME &#x2F;tmp</span><br><span class="line">ADD bees-0.0.1-SNAPSHOT.jar &#x2F;bees-springboot.jar</span><br><span class="line">EXPOSE 8010</span><br><span class="line">ENTRYPOINT [&quot;java&quot;,&quot;-Djava.security.egd&#x3D;file:&#x2F;dev&#x2F;.&#x2F;urandom&quot;,&quot;-jar&quot;,&quot;-Denv&#x3D;DEV&quot;,&quot;&#x2F;bees-springboot.jar&quot;]</span><br></pre></td></tr></table></figure>



<p>Put bees-0.0.1-SNAPSHOT.jar and the Dockerfile in the same directory and run the build; a bees-springboot image appears in the local library:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">docker build -t bees-springboot .</span><br></pre></td></tr></table></figure>
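<p>As noted earlier, the jar's datasource points at the MySQL container rather than a fixed IP. A sketch of what that might look like in application.properties (the alias beesMysql comes from the docker-compose.yml shown later; the database name, user, and password are placeholders):</p>

```properties
# Sketch: beesMysql resolves inside the compose network, not on the host
spring.datasource.url=jdbc:mysql://beesMysql:3306/DB_NAME?useSSL=false&serverTimezone=UTC
spring.datasource.username=root
spring.datasource.password=PASSWORD
```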



<hr>
<h2 id="Nginx（Angular）镜像"><a href="#Nginx（Angular）镜像" class="headerlink" title="Nginx（Angular）镜像"></a><strong>Nginx (Angular) Image</strong></h2>
<h3 id="Nginx的配置"><a href="#Nginx的配置" class="headerlink" title="Nginx的配置"></a><strong>Nginx Configuration</strong></h3>
<p>This image is mostly about the conf.d/default.conf configuration, which has to satisfy three requirements:</p>
<p>1. Deploying Angular<br>Deploying Angular is simple: build the project with ng build --prod to produce the dist directory, then serve dist as static files. Here we package dist into the Nginx container and configure access to it in default.conf.</p>
<p>2. File server<br>An article-sharing community needs a file server for the articles themselves, plus static assets such as images. We create a directory inside the container and expose it through default.conf so that its files can be fetched directly.<br>To keep the files safe, they are best stored on the host: when starting the container, mount a host directory onto the container's file directory.</p>
<p>3. API cross-origin requests<br>Cross-origin problems are common when frontend and backend are developed separately: the Spring Boot container and the Angular container do not share an IP and port. A reverse proxy configured in default.conf exposes the backend API under the same IP and port.</p>
<p>Putting the three requirements together, the final default.conf is:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br></pre></td><td class="code"><pre><span class="line">server &#123;</span><br><span class="line">    listen       80;</span><br><span class="line"></span><br><span class="line">    server_name  localhost;</span><br><span class="line">    </span><br><span class="line">    gzip on;</span><br><span class="line">    gzip_min_length  1k;</span><br><span class="line">    gzip_buffers     4 16k;</span><br><span class="line">    gzip_comp_level 3;</span><br><span class="line">    gzip_types       text&#x2F;plain application&#x2F;x-javascript application&#x2F;javascript text&#x2F;css application&#x2F;xml text&#x2F;javascript application&#x2F;x-httpd-php image&#x2F;jpeg image&#x2F;gif image&#x2F;png;</span><br><span class="line">    gzip_vary on;</span><br><span class="line"></span><br><span class="line">    location &#x2F; &#123;</span><br><span class="line">        root   
&#x2F;usr&#x2F;share&#x2F;nginx&#x2F;html;</span><br><span class="line">        index  index.html index.htm;</span><br><span class="line">        try_files $uri $uri&#x2F; &#x2F;index.html;</span><br><span class="line">    &#125;</span><br><span class="line">    </span><br><span class="line">    location &#x2F;api&#x2F; &#123;</span><br><span class="line">        proxy_pass http:&#x2F;&#x2F;beesSpringboot:8010&#x2F;;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    location &#x2F;file &#123;</span><br><span class="line">        alias &#x2F;home&#x2F;file;</span><br><span class="line">        index  index.html index.htm;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    error_page   500 502 503 504  &#x2F;50x.html;</span><br><span class="line">    location &#x3D; &#x2F;50x.html &#123;</span><br><span class="line">        root   &#x2F;usr&#x2F;share&#x2F;nginx&#x2F;html;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure>



<ol>
<li>location /: serves the Angular project; the dist output is COPYed by the Dockerfile into /usr/share/nginx/html inside the container;</li>
<li>location /file: exposes the /home/file directory as a file server;</li>
<li>location /api/: the reverse proxy that solves the cross-origin problem; to stay independent of the host, the API container's IP is replaced by the alias beesSpringboot, which is configured in docker-compose.yml and covered later.</li>
<li>gzip is enabled to speed up downloads of the larger frontend assets.</li>
</ol>
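<p>One subtlety in the config above: the trailing slash in proxy_pass http://beesSpringboot:8010/ makes Nginx replace the matched /api/ prefix instead of forwarding it. A sketch of the difference (the paths are illustrative):</p>

```nginx
location /api/ {
    # With a trailing slash, the location prefix is stripped:
    proxy_pass http://beesSpringboot:8010/;   # /api/users  ->  /users

    # Without it, the original URI is forwarded unchanged:
    # proxy_pass http://beesSpringboot:8010;  # /api/users  ->  /api/users
}
```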
<hr>
<h3 id="Dockerfile构建镜像-1"><a href="#Dockerfile构建镜像-1" class="headerlink" title="Dockerfile构建镜像"></a><strong>Building the Image with a Dockerfile</strong></h3>
<p>Again create a directory containing the Angular dist output, the Dockerfile, and the Nginx default.conf file. The layout looks like this:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br></pre></td><td class="code"><pre><span class="line">[root@Kerry angular]# tree</span><br><span class="line">.</span><br><span class="line">├── dist</span><br><span class="line">│   └── Bees</span><br><span class="line">│       ├── 0.cb202cb30edaa3c93602.js</span><br><span class="line">│       ├── 1.3ac3c111a5945a7fdac6.js</span><br><span class="line">│       ├── 2.99bfc194c4daea8390b3.js</span><br><span class="line">│       ├── 3.50547336e0234937eb51.js</span><br><span class="line">│       ├── 3rdpartylicenses.txt</span><br><span class="line">│       ├── 4.53141e3db614f9aa6fe0.js</span><br><span class="line">│       ├── assets</span><br><span class="line">│       │   └── images</span><br><span class="line">│       │       ├── login_background.jpg</span><br><span class="line">│       │       └── logo.png</span><br><span class="line">│       ├── favicon.ico</span><br><span class="line">│       ├── index.html</span><br><span class="line">│       ├── login_background.7eaf4f9ce82855adb045.jpg</span><br><span class="line">│       ├── main.894e80999bf907c5627b.js</span><br><span class="line">│       ├── polyfills.6960d5ea49e64403a1af.js</span><br><span 
class="line">│       ├── runtime.37fed2633286b6e47576.js</span><br><span class="line">│       └── styles.9e4729a9c6b60618a6c6.css</span><br><span class="line">├── Dockerfile</span><br><span class="line">└── nginx</span><br><span class="line">    └── default.conf</span><br></pre></td></tr></table></figure>



<p>The Dockerfile is as follows:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">FROM nginx</span><br><span class="line"></span><br><span class="line">COPY nginx&#x2F;default.conf &#x2F;etc&#x2F;nginx&#x2F;conf.d&#x2F;</span><br><span class="line"></span><br><span class="line">RUN rm -rf &#x2F;usr&#x2F;share&#x2F;nginx&#x2F;html&#x2F;*</span><br><span class="line"></span><br><span class="line">COPY &#x2F;dist&#x2F;Bees &#x2F;usr&#x2F;share&#x2F;nginx&#x2F;html</span><br><span class="line"></span><br><span class="line">CMD [&quot;nginx&quot;, &quot;-g&quot;, &quot;daemon off;&quot;]</span><br></pre></td></tr></table></figure>




<p>With the command below, the bees-nginx-angular image is built:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">docker build -t bees-nginx-angular .</span><br></pre></td></tr></table></figure>



<hr>
<h2 id="docker-compose容器服务编排"><a href="#docker-compose容器服务编排" class="headerlink" title="docker-compose容器服务编排"></a><strong>Orchestrating the Containers with docker-compose</strong></h2>
<p>We now have three images, which means at least three containers must run to bring the project up. Three separate docker run commands? Too tedious, and the containers also need to talk to each other: with docker alone, the startup commands would be long and the docker --link wiring would get very complex, so we need service orchestration. The best-known Docker orchestrator is Kubernetes, but my aim is to keep this project lightweight, and I would rather not require users to install something as heavyweight as Kubernetes just to run it. Since I have not yet solved the problem of merging the three images into one, I settled on a middle-ground product: docker-compose.</p>
<p>Installing docker-compose is straightforward and not covered here. Once installed, write a docker-compose.yml in any directory; from that directory, a single command starts (or stops) all three containers:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">#start</span><br><span class="line">docker-compose up -d</span><br><span class="line">#stop</span><br><span class="line">docker-compose down</span><br></pre></td></tr></table></figure>



<p>Here is my docker-compose.yml file:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br></pre></td><td class="code"><pre><span class="line">version: &quot;2&quot;</span><br><span class="line">services:</span><br><span class="line"></span><br><span class="line"> beesMysql:</span><br><span class="line">  restart: always</span><br><span class="line">  image: bees-mysql</span><br><span class="line">  ports:</span><br><span class="line">   - 3306:3306</span><br><span class="line">  volumes:</span><br><span class="line">   - &#x2F;bees&#x2F;docker_volume&#x2F;mysql&#x2F;conf:&#x2F;etc&#x2F;mysql&#x2F;conf.d</span><br><span class="line">   - &#x2F;bees&#x2F;docker_volume&#x2F;mysql&#x2F;logs:&#x2F;logs</span><br><span class="line">   - &#x2F;bees&#x2F;docker_volume&#x2F;mysql&#x2F;data:&#x2F;var&#x2F;lib&#x2F;mysql</span><br><span class="line">  environment:</span><br><span class="line">   MYSQL_ROOT_PASSWORD: kerry</span><br><span 
class="line"></span><br><span class="line"> beesSpringboot:</span><br><span class="line">  restart: always</span><br><span class="line">  image: bees-springboot</span><br><span class="line">  ports:</span><br><span class="line">   - 8010:8010</span><br><span class="line">  depends_on:</span><br><span class="line">   - beesMysql</span><br><span class="line"></span><br><span class="line"> beesNginxAngular:</span><br><span class="line">  restart: always</span><br><span class="line">  image: bees-nginx-angular</span><br><span class="line">  ports:</span><br><span class="line">   - 8000:80</span><br><span class="line">  depends_on:</span><br><span class="line">   - beesSpringboot</span><br><span class="line">  volumes:</span><br><span class="line">   - &#x2F;bees&#x2F;docker_volume&#x2F;nginx&#x2F;nginx.conf:&#x2F;etc&#x2F;nginx&#x2F;nginx.conf</span><br><span class="line">   - &#x2F;bees&#x2F;docker_volume&#x2F;nginx&#x2F;conf.d:&#x2F;etc&#x2F;nginx&#x2F;conf.d</span><br><span class="line">   - &#x2F;bees&#x2F;docker_volume&#x2F;nginx&#x2F;file:&#x2F;home&#x2F;file</span><br></pre></td></tr></table></figure>



<ul>
<li>image: the image name</li>
<li>ports: mapping between container ports and host ports</li>
<li>services: each of the three services automatically gets an alias once its container starts; for example, Spring Boot reaches the database simply via "beesMysql:3306".</li>
<li>depends_on: starts this container only after the containers it depends on; for example, the MySQL container starts before the Spring Boot API container.</li>
<li>volumes: mounts; files that must persist (the database's data files, the Nginx configuration, the file-server directory) are mapped from host directories into the container, otherwise they are lost when the container restarts.</li>
</ul>
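<p>One caveat with the file above: depends_on only orders container startup; it does not wait until MySQL actually accepts connections, so the Spring Boot container may fail its first connection attempts. A hedged sketch of adding a healthcheck (requires Compose file format 2.1 or newer; the credentials mirror the environment above):</p>

```yaml
version: "2.1"
services:

 beesMysql:
  image: bees-mysql
  healthcheck:
   test: ["CMD", "mysqladmin", "ping", "-uroot", "-pkerry"]
   interval: 5s
   retries: 10

 beesSpringboot:
  image: bees-springboot
  depends_on:
   beesMysql:
    condition: service_healthy  # start only once MySQL answers pings
```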
<hr>
<h2 id="其他"><a href="#其他" class="headerlink" title="其他"></a><strong>Miscellaneous</strong></h2>
<h3 id="自动部署"><a href="#自动部署" class="headerlink" title="自动部署"></a><strong>Automated Deployment</strong></h3>
<p>To improve development efficiency I wrote a simple automated deployment script; here it is:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br></pre></td><td class="code"><pre><span class="line">#!&#x2F;bin&#x2F;bash</span><br><span class="line"></span><br><span class="line">v_springboot_jar&#x3D;&#96;find &#x2F;bees&#x2F;devops&#x2F;upload&#x2F; -name &quot;*.jar&quot;&#96;</span><br><span class="line">echo &quot;找到jar:&quot;$v_springboot_jar</span><br><span class="line">v_angular_zip&#x3D;&#96;find &#x2F;bees&#x2F;devops&#x2F;upload&#x2F; -name &quot;dist.zip&quot;&#96;</span><br><span class="line">echo &quot;找到dist:&quot;$v_angular_zip</span><br><span class="line"></span><br><span class="line">cd &#x2F;bees&#x2F;conf&#x2F;</span><br><span class="line">docker-compose down</span><br><span class="line">echo &quot;关闭容器&quot;</span><br><span class="line"></span><br><span class="line">docker rmi -f $(docker images |  grep &quot;bees-springboot&quot;  | awk &#39;&#123;print $1&#125;&#39;)</span><br><span 
class="line">docker rmi -f $(docker images |  grep &quot;bees-nginx-angular&quot;  | awk &#39;&#123;print $1&#125;&#39;)</span><br><span class="line">echo &quot;删除镜像&quot;</span><br><span class="line"></span><br><span class="line">cd &#x2F;bees&#x2F;devops&#x2F;dockerfiles&#x2F;springboot&#x2F;</span><br><span class="line">rm -f *.jar</span><br><span class="line">cp $v_springboot_jar .&#x2F;bees-0.0.1-SNAPSHOT.jar</span><br><span class="line">docker build -t bees-springboot .</span><br><span class="line">echo &quot;生成springboot镜像&quot;</span><br><span class="line"></span><br><span class="line">cd &#x2F;bees&#x2F;devops&#x2F;dockerfiles&#x2F;angular&#x2F;</span><br><span class="line">rm -rf dist&#x2F;</span><br><span class="line">cp $v_angular_zip .&#x2F;dist.zip</span><br><span class="line">unzip dist.zip</span><br><span class="line">rm -f dist.zip</span><br><span class="line">docker build -t bees-nginx-angular .</span><br><span class="line">echo &quot;生成angular镜像&quot;</span><br><span class="line"></span><br><span class="line">cd &#x2F;bees&#x2F;conf&#x2F;</span><br><span class="line">docker-compose up -d</span><br><span class="line">echo &quot;启动容器&quot;</span><br><span class="line">docker ps |grep bees</span><br></pre></td></tr></table></figure>



<h3 id="遇到的坑"><a href="#遇到的坑" class="headerlink" title="遇到的坑"></a><strong>A Pitfall I Hit</strong></h3>
<p>When I first wrote the services in docker-compose.yml, I named them with underscores instead of camelCase: bees_springboot, bees_mysql, bees_nginx_angular.</p>
<p>Spring Boot could reach the database through its alias without trouble, but Nginx flatly refused to reverse-proxy to the bees_springboot alias. From inside the bees_nginx_angular container I could ping bees_springboot, yet proxying API calls to it failed; curl -v revealed that the Host header was being dropped.</p>
<p>It turns out that by default Nginx silently ignores request headers containing an underscore. So I renamed every service in docker-compose.yml from underscore style to camelCase.</p>
<p>Alternatively, add the following to the http section of Nginx's nginx.conf:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">underscores_in_headers on;</span><br></pre></td></tr></table></figure>



<hr>
 
      <!-- reward -->
      
    </div>
    

    <!-- copyright -->
    
    <footer class="article-footer">
       
  <ul class="article-tag-list" itemprop="keywords"><li class="article-tag-list-item"><a class="article-tag-list-link" href="/tags/docker/" rel="tag">docker</a></li></ul>

    </footer>
  </div>

    
 
   
</article>

    
    <article
  id="post-docker/一次搞明白 Docker 容器资源限制"
  class="article article-type-post"
  itemscope
  itemprop="blogPost"
  data-scroll-reveal
>
  <div class="article-inner">
    
    <header class="article-header">
       
<h2 itemprop="name">
  <a class="article-title" href="/2020/11/11/docker/%E4%B8%80%E6%AC%A1%E6%90%9E%E6%98%8E%E7%99%BD%20Docker%20%E5%AE%B9%E5%99%A8%E8%B5%84%E6%BA%90%E9%99%90%E5%88%B6/"
    >Understanding Docker Container Resource Limits</a>
</h2>
 

    </header>
     
    <div class="article-meta">
      <a href="/2020/11/11/docker/%E4%B8%80%E6%AC%A1%E6%90%9E%E6%98%8E%E7%99%BD%20Docker%20%E5%AE%B9%E5%99%A8%E8%B5%84%E6%BA%90%E9%99%90%E5%88%B6/" class="article-date">
  <time datetime="2020-11-10T16:00:00.000Z" itemprop="datePublished">2020-11-11</time>
</a> 
  <div class="article-category">
    <a class="article-category-link" href="/categories/docker/">docker</a>
  </div>
   
    </div>
      
    <div class="article-entry" itemprop="articleBody">
       
  <h1 id="一次搞明白-Docker-容器资源限制"><a href="#一次搞明白-Docker-容器资源限制" class="headerlink" title="一次搞明白 Docker 容器资源限制"></a>Understanding Docker Container Resource Limits</h1><h3 id="前言"><a href="#前言" class="headerlink" title="前言"></a>Preface</h3><p>When containers are used without Kubernetes managing them, a single host may run dozens of containers. They are isolated from one another, yet they share the host's kernel and its hardware resources: CPU, memory, disk, and so on (note: a container has no kernel of its own). By default a container has no resource limits and may consume as much of any given resource as the kernel scheduler allows. Without limits, containers interfere with each other: one resource-hungry container can swallow all the hardware resources and leave the others with nothing, causing an outage. Docker provides ways to limit memory, CPU, and disk I/O, constraining how much of each resource a container can occupy; these limits are set when the container is created with docker create or started with docker run.</p>
<h3 id="Docker核心"><a href="#Docker核心" class="headerlink" title="Docker核心"></a>Docker Fundamentals</h3><p>Docker uses namespaces for resource isolation and cgroups for resource limiting; this article covers how the kernel's cgroups are used to limit container resources.</p>
<h3 id="OOM介绍"><a href="#OOM介绍" class="headerlink" title="OOM介绍"></a>About OOM</h3><h4 id="out-of-memorty"><a href="#out-of-memorty" class="headerlink" title="out of memorty"></a>out of memory</h4><p>OOM is short for out of memory.</p>
<ol>
<li>If memory is exhausted and the kernel detects that there is not enough left to carry out essential system functions, it raises an OOM (Out of Memory) condition and selectively kills processes to reclaim memory.</li>
<li>Memory is a non-compressible resource: if a process does not have enough memory, it keeps requesting more until memory overflows.</li>
<li>CPU, by contrast, is compressible, so it never overflows this way. For example, a process that would like 200% CPU but holds one core runs at 100% of that core; if other processes leave their cores idle it can spill onto them, and when they want their cores back it is squeezed down to 100% of its own core again. Hence "compressible".</li>
<li>The selectivity of the OOM killer: why not simply kill the process using the most memory? An example: suppose MySQL normally needs 2G and is using 1.9G, while Tomcat normally needs 500M but is using 1G. When OOM is raised, it is the Tomcat process that receives kill -9, because it has overrun its expected footprint.</li>
<li>If an important process overruns its memory and triggers OOM, and we do not want the kernel to kill it, we can adjust the OOM kill priority: the higher the priority, the sooner a process is killed, and vice versa. Docker deliberately adjusts the OOM priority of the docker daemon so the kernel will not kill it, but the containers' priorities are left unchanged.</li>
</ol>
<h4 id="导致内存OOM"><a href="#导致内存OOM" class="headerlink" title="导致内存OOM"></a>Common Causes of OOM</h4><ol>
<li>Loading objects that are too large;</li>
<li>too many resources requested at once, so loading cannot keep up;</li>
<li>the application has run for a long time without a restart, gradually accumulating memory;</li>
<li>memory-leak bugs in the code.</li>
</ol>
<h4 id="解决OOM办法"><a href="#解决OOM办法" class="headerlink" title="解决OOM办法"></a>Ways to Mitigate OOM</h4><ol>
<li>Handle references carefully, e.g. use soft references;</li>
<li>process images loaded into memory in place (e.g. boundary compression);</li>
<li>reclaim memory dynamically;</li>
<li>tune the Dalvik VM's heap allocation;</li>
<li>set a custom heap size;</li>
<li>restart the application periodically to release memory.</li>
</ol>
<h3 id="压测工具stress"><a href="#压测工具stress" class="headerlink" title="压测工具stress"></a>The stress Testing Tool</h3><h4 id="下载stress"><a href="#下载stress" class="headerlink" title="下载stress"></a>Pulling the Image</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">docker pull lorel/docker-stress-ng:latest</span><br><span class="line">latest: Pulling from lorel/docker-stress-ng</span><br><span class="line">c52e3ed763ff: Pull complete </span><br><span class="line">a3ed95caeb02: Pull complete </span><br><span class="line">7f831269c70e: Pull complete </span><br><span class="line">Digest: sha256:c8776b750869e274b340f8e8eb9a7d8fb2472edd5b25ff5b7d55728bca681322</span><br><span class="line">Status: Downloaded newer image for lorel/docker-stress-ng:latest</span><br></pre></td></tr></table></figure>

<h4 id="使用方法"><a href="#使用方法" class="headerlink" title="使用方法"></a>Usage</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">docker run --name stress -it --rm lorel/docker-stress-ng:latest stress --help</span><br><span class="line">--name: names the test container started from lorel/docker-stress-ng:latest &quot;stress&quot;</span><br><span class="line">-it: allocate a pseudo-terminal and keep the session interactive</span><br><span class="line">--rm: remove the container as soon as it stops</span><br><span class="line">lorel/docker-stress-ng:latest: the stress-test image</span><br><span class="line">stress: the command built into the image; it must precede options such as --help</span><br></pre></td></tr></table></figure>

<h4 id="stress常用选项"><a href="#stress常用选项" class="headerlink" title="stress常用选项"></a>Common stress Options</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">--cpu N: start N worker processes for CPU stress; by default each worker occupies one CPU core (short form: -c N)</span><br><span class="line">docker run --name stress -it --rm lorel/docker-stress-ng:latest stress --help | grep  &quot;cpu N&quot;</span><br><span class="line">-c N,--cpu N            start N workers spinning on sqrt(rand())</span><br><span class="line"></span><br><span class="line">--vm N: start N worker processes for anonymous-page memory stress (short form: -m N)</span><br><span class="line">docker run --name stress -it --rm lorel/docker-stress-ng:latest stress --help | grep  &quot;vm N&quot;</span><br><span class="line">-m N,--vm N             start N workers spinning on anonymous mmap</span><br><span class="line"></span><br><span class="line">--vm-bytes N: memory each --vm worker may allocate (default: 256M)</span><br><span class="line">docker run --name stress -it --rm lorel/docker-stress-ng:latest stress --help | grep  &quot;vm-bytes N&quot;</span><br><span class="line">--vm-bytes N       allocate N bytes per vm worker (default 256MB)</span><br></pre></td></tr></table></figure>

<h3 id="Docker内存限制"><a href="#Docker内存限制" class="headerlink" title="Docker内存限制"></a>Docker memory limits</h3><h4 id="限制内存注意事项"><a href="#限制内存注意事项" class="headerlink" title="限制内存注意事项"></a>Notes on limiting memory</h4><p>1. Stress-test the application before limiting it in production: measure how much memory the container itself uses (e.g. an Nginx container) plus what the projected workload will need, and only set limits after such testing. 2. Keep host memory plentiful and have monitoring report container memory usage promptly; as soon as a container runs short, either raise its limit or package the container as an image and start it on another host with enough memory. 3. If physical memory is sufficient, avoid swap; swap makes memory accounting harder to reason about.</p>
<h4 id="设置内存选项"><a href="#设置内存选项" class="headerlink" title="设置内存选项"></a>Memory options</h4><p>Note: limits accept the unit suffixes b, k, m and g, for bytes, KB, MB and GB respectively.</p>
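<p>As a quick sanity check, the unit suffixes are plain powers of 1024; the shell arithmetic below (illustrative values only, no docker involved) converts a 512m limit to bytes:</p>

```shell
# Illustrative only: how the b/k/m/g suffixes relate (powers of 1024).
MEM_MB=512                      # what you'd pass as "-m 512m"
MEM_KB=$((MEM_MB * 1024))       # megabytes -> kilobytes
MEM_BYTES=$((MEM_KB * 1024))    # kilobytes -> bytes
echo "-m ${MEM_MB}m is ${MEM_BYTES} bytes"
```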
<p><strong>-m or --memory=</strong>: the maximum amount of memory the container may use; the minimum is 4M.<br><strong>--memory-swap=</strong>: how much swap-backed memory the container may use; it only takes effect when --memory is also set.<br><strong>--memory-swappiness</strong>: by default the host may swap out the container&#39;s anonymous pages. You can set a value between 0 and 100 controlling how eagerly pages are swapped: at 0 the container uses physical memory first and avoids swap whenever possible; at 100 it uses swap whenever there is the slightest opportunity to do so.<br><strong>--memory-reservation</strong>: a soft limit below --memory; when Docker detects memory pressure on the host, it reclaims the container&#39;s memory down toward this reservation. The value must be smaller than --memory.<br><strong>--kernel-memory</strong>: how much kernel memory the container may use; the minimum is 4M.<br><strong>--oom-kill-disable</strong>: whether the container is killed on OOM; it can only be set together with -m/--memory and takes true or false. With false, the container is killed when it exceeds its memory limit; with true the OOM killer is disabled for this container, which is only safe with a memory limit in place, otherwise the container can exhaust host memory and take host applications down with it. Set --oom-kill-disable to true only for containers running truly critical applications.</p>
<p><strong>--memory-swap explained:</strong> <strong>swap: swap space</strong> <strong>ram: physical memory</strong></p>
<p><img src="http://iubest.gitee.io/pic/640-1601173891939.png" alt="null"></p>
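<p>The key relationship in the table above, with illustrative numbers: --memory-swap is the total of RAM plus swap, so the swap actually available is --memory-swap minus --memory; when --memory-swap is left unset, docker defaults it to twice --memory (i.e. swap equal to the memory limit). A sketch of the arithmetic, not a docker invocation:</p>

```shell
# Illustrative arithmetic only; values are examples.
MEMORY_MB=512          # -m 512m
MEMORY_SWAP_MB=1024    # --memory-swap 1024m (RAM + swap total)
SWAP_MB=$((MEMORY_SWAP_MB - MEMORY_MB))   # swap the container can actually use
echo "usable swap: ${SWAP_MB} MB"

# --memory-swap unset: docker defaults the total to 2 * --memory
DEFAULT_TOTAL_MB=$((MEMORY_MB * 2))
DEFAULT_SWAP_MB=$((DEFAULT_TOTAL_MB - MEMORY_MB))
echo "default swap: ${DEFAULT_SWAP_MB} MB"
```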
<p>Check the memory size:</p>
<p><img src="http://iubest.gitee.io/pic/640-1601173891794.png" alt="null"></p>
<h4 id="限制容器内存"><a href="#限制容器内存" class="headerlink" title="限制容器内存"></a>Limiting container memory</h4><p>Use docker&#39;s --memory option to cap the physical memory a container may use, and stress&#39;s --vm option to choose how many memory-hogging workers to start, with --vm-bytes setting how much each allocates. Here the container is capped at 512M of physical memory while two workers each try to allocate 512M, i.e. 1024M in total; with only 512M granted, that is clearly not enough:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">docker run --name stress-memory  -it --rm -m 512M  lorel&#x2F;docker-stress-ng:latest stress --vm 2 --vm-bytes 512M</span><br><span class="line">stress-ng: info:[1] defaulting to a 86400 second run per stressor</span><br><span class="line">stress-ng: info:[1] dispatching hogs:2 vm</span><br></pre></td></tr></table></figure>

<p>Use docker stats to inspect the container&#39;s resource usage. Our stress-memory container has a 512M cap; it is using a bit over 500M but never exceeds 512M, about 99.3% of its allowance:</p>
<p><img src="http://iubest.gitee.io/pic/640-1601173891534.png" alt="null"></p>
<p>Use htop to check resource usage:</p>
<p><img src="http://iubest.gitee.io/pic/640-1601173891459.png" alt="null"></p>
<h4 id="限制容器swap内存"><a href="#限制容器swap内存" class="headerlink" title="限制容器swap内存"></a>Limiting container swap</h4><h4 id="设置oom时是否杀掉进程"><a href="#设置oom时是否杀掉进程" class="headerlink" title="设置oom时是否杀掉进程"></a>Choosing whether OOM kills the process</h4><h3 id="Docker-CPU限制"><a href="#Docker-CPU限制" class="headerlink" title="Docker CPU限制"></a>Docker CPU limits</h3><p>Check the number of CPU cores and their IDs:</p>
<p><img src="http://iubest.gitee.io/pic/640-1601173894818.png" alt="null"></p>
<h4 id="设置CPU选项"><a href="#设置CPU选项" class="headerlink" title="设置CPU选项"></a>CPU options</h4><p><strong>--cpu-shares</strong>: shared CPU resources, split proportionally by weight. Say two containers run on a host, the first with weight 1024 and the second with weight 512. If the second container starts but runs no processes, it does not even use its own 512, while the busy first container may freely borrow the second&#39;s idle CPU; that is what shared CPU means. As soon as the second container runs processes of its own, it takes its 512 back and CPU time is divided in the normal 1024:512 ratio. In short: idle shares can be borrowed, and contended shares are split by weight, which also shows that CPU is a compressible resource.<br><strong>--cpus</strong>: limits how many cores&#39; worth of CPU the container may consume. Since Docker 1.13, the --cpus parameter caps the container&#39;s total CPU usage; it is the most precise, easiest to understand, and most commonly used control.<br><strong>--cpuset-cpus</strong>: pins the container to specific CPU cores. For example, on a host with 4 cores numbered 0-3, you can start a container restricted to cores 0 and 3 by listing them with cpuset.</p>
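<p>The 1024:512 example works out as follows: under contention, each container&#39;s slice is its weight divided by the sum of all weights. Illustrative arithmetic only, not a docker command:</p>

```shell
# cpu-shares are relative weights, enforced only under contention.
C1_SHARES=1024
C2_SHARES=512
TOTAL=$((C1_SHARES + C2_SHARES))
C1_PCT=$((C1_SHARES * 100 / TOTAL))   # container 1's share of CPU time
C2_PCT=$((C2_SHARES * 100 / TOTAL))   # container 2's share of CPU time
echo "container1: ${C1_PCT}%  container2: ${C2_PCT}%"
```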
<h4 id="限制CPU-Share"><a href="#限制CPU-Share" class="headerlink" title="限制CPU Share"></a>Limiting CPU shares</h4><p>Start the stress tool and use the stress command&#39;s -c option to spawn two worker processes; by default each worker saturates one CPU core:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">docker run --name stress-share  -it --rm --cpu-shares 512  lorel&#x2F;docker-stress-ng:latest stress -c 2</span><br><span class="line">stress-ng: info:[1] defaulting to a 86400 second run per stressor</span><br><span class="line">stress-ng: info:[1] dispatching hogs:2 cpu</span><br></pre></td></tr></table></figure>

<p>The stress tool eats all the CPU of two cores&#39; worth of resources. Open another window and check hardware usage with htop. We started two workers, so they consume up to two cores at 100% each, 200% in total. Since they are not pinned to particular cores they float dynamically across all four cores, but stress never exceeds 200% CPU overall:</p>
<p><img src="http://iubest.gitee.io/pic/640-1601173891701.png" alt="null"></p>
<p>In another window, docker top &lt;container&gt; also shows how much CPU the two processes consume in total:</p>
<p><img src="http://iubest.gitee.io/pic/640-1601173891487.png" alt="null"></p>
<h4 id="限制CPU核数"><a href="#限制CPU核数" class="headerlink" title="限制CPU核数"></a>Limiting the number of CPU cores</h4><p>Use docker&#39;s --cpus option to cap how many cores&#39; worth of CPU the container may run, and stress&#39;s -c option to set the worker count. Here the container may use only two cores&#39; worth, i.e. 200% of CPU, while four workers are started, so those four workers must share the 200% between them:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">docker run --name stress-cpus  -it --rm --cpus 2  lorel&#x2F;docker-stress-ng:latest stress -c 4</span><br><span class="line">stress-ng: info:[1] defaulting to a 86400 second run per stressor</span><br><span class="line">stress-ng: info:[1] dispatching hogs:4 cpu</span><br></pre></td></tr></table></figure>

<p>htop shows four workers spread across four cores, but each core runs at only about 50%, i.e. 4 × 50% ≈ 200% of CPU in total:</p>
<p><img src="http://iubest.gitee.io/pic/640-1601173891538.png" alt="null"></p>
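<p>The ~50% per worker follows directly from the cap: the scheduler spreads the 200% allowance evenly over the four workers. Illustrative arithmetic:</p>

```shell
CPUS=2                                    # --cpus 2  => 200% total allowance
WORKERS=4                                 # stress -c 4
PER_WORKER_PCT=$((CPUS * 100 / WORKERS))  # each worker's share
TOTAL_PCT=$((PER_WORKER_PCT * WORKERS))   # adds back up to the cap
echo "each worker ~${PER_WORKER_PCT}%, total ${TOTAL_PCT}%"
```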
<p>docker top &lt;container&gt; likewise shows the total CPU the four processes consume:</p>
<p><img src="http://iubest.gitee.io/pic/640-1601173895098.png" alt="null"></p>
<h4 id="限制容器运行在指定核心"><a href="#限制容器运行在指定核心" class="headerlink" title="限制容器运行在指定核心"></a>Pinning the container to specific cores</h4><p>Use docker&#39;s --cpuset-cpus option to choose which cores the container runs on, and stress&#39;s -c option to set the worker count. Here four workers are pinned to the cores numbered 0 and 2:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">docker run --name stress-cpuset  -it --rm --cpuset-cpus&#x3D;0,2  lorel&#x2F;docker-stress-ng:latest stress -c 4</span><br><span class="line">stress-ng: info:[1] defaulting to a 86400 second run per stressor</span><br><span class="line">stress-ng: info:[1] dispatching hogs:4 cpu</span><br></pre></td></tr></table></figure>

<p>htop confirms the expectation: only the first and third cores are busy, each worker at about 50%, 200% in total, i.e. two cores&#39; worth:</p>
<p><img src="http://iubest.gitee.io/pic/640-1601173895101.png" alt="null"></p>
 
      <!-- reward -->
      
    </div>
    

    <!-- copyright -->
    
    <footer class="article-footer">
       
  <ul class="article-tag-list" itemprop="keywords"><li class="article-tag-list-item"><a class="article-tag-list-link" href="/tags/docker/" rel="tag">docker</a></li></ul>

    </footer>
  </div>

    
 
   
</article>

    
    <article
  id="post-docker/给新手的11个docker免费上手项目"
  class="article article-type-post"
  itemscope
  itemprop="blogPost"
  data-scroll-reveal
>
  <div class="article-inner">
    
    <header class="article-header">
       
<h2 itemprop="name">
  <a class="article-title" href="/2020/11/11/docker/%E7%BB%99%E6%96%B0%E6%89%8B%E7%9A%8411%E4%B8%AAdocker%E5%85%8D%E8%B4%B9%E4%B8%8A%E6%89%8B%E9%A1%B9%E7%9B%AE/"
    >11 free Docker starter projects for beginners</a> 
</h2>
 

    </header>
     
    <div class="article-meta">
      <a href="/2020/11/11/docker/%E7%BB%99%E6%96%B0%E6%89%8B%E7%9A%8411%E4%B8%AAdocker%E5%85%8D%E8%B4%B9%E4%B8%8A%E6%89%8B%E9%A1%B9%E7%9B%AE/" class="article-date">
  <time datetime="2020-11-10T16:00:00.000Z" itemprop="datePublished">2020-11-11</time>
</a> 
  <div class="article-category">
    <a class="article-category-link" href="/categories/docker/">docker</a>
  </div>
   
    </div>
      
    <div class="article-entry" itemprop="articleBody">
       
  <h1 id="给新手的11个docker免费上手项目"><a href="#给新手的11个docker免费上手项目" class="headerlink" title="给新手的11个docker免费上手项目"></a>11 free Docker starter projects for beginners</h1><h1 id="spug"><a href="#spug" class="headerlink" title="spug"></a>1. spug</h1>
<p>Repo: https://github.com/openspug/spug</p>
<p>star: 3.8k</p>
<p>fork: 769</p>
<blockquote>
<p>An open-source ops platform built with Python + Vue, with a decoupled front and back end that makes secondary development easy. It ships as Docker images, so installation and upgrades are simple. It covers the common ops features: host management, scheduled tasks, release deployment, monitoring and alerting, and more.</p>
</blockquote>
<p><img src="http://iubest.gitee.io/pic/2020102201.gif"></p>
<hr>
<h1 id="2-ctop"><a href="#2-ctop" class="headerlink" title="2. ctop"></a>2. ctop</h1><p>Repo: https://github.com/bcicen/ctop</p>
<p>star: 10.2k</p>
<p>fork: 388</p>
<blockquote>
<p>A docker container monitoring tool with a top-like display.</p>
</blockquote>
<p><img src="http://iubest.gitee.io/pic/2020102202.gif"></p>
<hr>
<h1 id="3-drone"><a href="#3-drone" class="headerlink" title="3. drone"></a>3. drone</h1><p>Repo: https://github.com/drone/drone</p>
<p>star: 21.3k</p>
<p>fork: 2.1k</p>
<blockquote>
<p>A Docker-based continuous integration platform, written in Go.</p>
</blockquote>
<p><img src="http://iubest.gitee.io/pic/2020102203.png"></p>
<hr>
<h1 id="4-docui"><a href="#4-docui" class="headerlink" title="4. docui"></a>4. docui</h1><p>Repo: https://github.com/skanehira/docui</p>
<p>star: 1.8k</p>
<p>fork: 74</p>
<blockquote>
<p>A terminal-based Docker management tool with its own text UI, letting you manage docker through an interface instead of memorizing commands. Installation command:</p>
</blockquote>
<p><img src="http://iubest.gitee.io/pic/2020102204.png"></p>
<hr>
<h1 id="5-docker-slim"><a href="#5-docker-slim" class="headerlink" title="5. docker-slim"></a>5. docker-slim</h1><p>Repo: https://github.com/docker-slim/docker-slim</p>
<p>star: 8.8k</p>
<p>fork: 306</p>
<blockquote>
<p>A tool that automatically shrinks docker images, dramatically reducing image size for easier distribution. Run it with: docker-slim build --http-probe your-name/your-app. For example, a Node.js image before and after shrinking:</p>
</blockquote>
<hr>
<h1 id="6-docker-practice"><a href="#6-docker-practice" class="headerlink" title="6. docker_practice"></a>6. docker_practice</h1><p>Repo: https://github.com/yeasy/docker_practice</p>
<p>star: 17.1k</p>
<p>fork: 4.7k</p>
<blockquote>
<p>Docker: from beginner to practice.</p>
</blockquote>
<hr>
<h1 id="7-lazydocker"><a href="#7-lazydocker" class="headerlink" title="7. lazydocker"></a>7. lazydocker</h1><p>Repo: https://github.com/jesseduffield/lazydocker</p>
<p>star: 15.5k</p>
<p>fork: 581</p>
<blockquote>
<p>A docker management tool with a command-line UI. You can manage docker by clicking around, without installing an enterprise-grade container platform like Rancher.</p>
</blockquote>
<p><img src="http://iubest.gitee.io/pic/2020102205.png"></p>
<hr>
<h1 id="8-dive"><a href="#8-dive" class="headerlink" title="8. dive"></a>8. dive</h1><p>Repo: https://github.com/wagoodman/dive</p>
<p>star: 20.7k</p>
<p>fork: 749</p>
<blockquote>
<p>A command-line tool for exploring each layer of a docker image&#39;s filesystem and finding ways to shrink the image. Start it with: dive &lt;image-name&gt;</p>
</blockquote>
<p><img src="http://iubest.gitee.io/pic/2020102206.gif"></p>
<hr>
<h1 id="9-gochat"><a href="#9-gochat" class="headerlink" title="9. gochat"></a>9. gochat</h1><p>Repo: https://github.com/LockGit/gochat</p>
<p>star: 663</p>
<p>fork: 108</p>
<blockquote>
<p>A lightweight instant-messaging system written in pure Go. The layers communicate over RPC, Redis serves as the message store and delivery channel (simpler to operate than Kafka), and service discovery between layers is etcd-based, which makes scaling out much easier. The architecture and directory layout are clear and the documentation is thorough. It also ships a one-click Docker build, so it is very easy to install and run; recommended as a learning project.</p>
</blockquote>
<hr>
<h1 id="10-docker-dashboard"><a href="#10-docker-dashboard" class="headerlink" title="10. docker-dashboard"></a>10. docker-dashboard</h1><p>Repo: https://github.com/pipiliang/docker-dashboard</p>
<p>star: 205</p>
<p>fork: 22</p>
<blockquote>
<p>A console-based docker tool with simple, readable code; a good hands-on project for learning Node.js.</p>
</blockquote>
<hr>
<h1 id="11-diving"><a href="#11-diving" class="headerlink" title="11. diving"></a>11. diving</h1><p>Repo: https://github.com/vicanso/diving</p>
<p>star: 136</p>
<p>fork: 12</p>
<blockquote>
<p>Analyzes docker images with dive and presents the results in a web UI: per-layer changes (additions, modifications, deletions), per-layer data size, and a browsable file tree for each layer, making it easy to inspect image contents. Very handy when you need to slim down an image.</p>
</blockquote>
<p><img src="http://iubest.gitee.io/pic/2020102207.png"></p>
 
      <!-- reward -->
      
    </div>
    

    <!-- copyright -->
    
    <footer class="article-footer">
       
  <ul class="article-tag-list" itemprop="keywords"><li class="article-tag-list-item"><a class="article-tag-list-link" href="/tags/docker/" rel="tag">docker</a></li></ul>

    </footer>
  </div>

    
 
   
</article>

    
    <article
  id="post-docker/解密 Docker 挂载文件，宿主机修改后容器里文件没有修改"
  class="article article-type-post"
  itemscope
  itemprop="blogPost"
  data-scroll-reveal
>
  <div class="article-inner">
    
    <header class="article-header">
       
<h2 itemprop="name">
  <a class="article-title" href="/2020/11/11/docker/%E8%A7%A3%E5%AF%86%20Docker%20%E6%8C%82%E8%BD%BD%E6%96%87%E4%BB%B6%EF%BC%8C%E5%AE%BF%E4%B8%BB%E6%9C%BA%E4%BF%AE%E6%94%B9%E5%90%8E%E5%AE%B9%E5%99%A8%E9%87%8C%E6%96%87%E4%BB%B6%E6%B2%A1%E6%9C%89%E4%BF%AE%E6%94%B9/"
    >Demystifying Docker file mounts: why a file edited on the host does not change inside the container</a> 
</h2>
 

    </header>
     
    <div class="article-meta">
      <a href="/2020/11/11/docker/%E8%A7%A3%E5%AF%86%20Docker%20%E6%8C%82%E8%BD%BD%E6%96%87%E4%BB%B6%EF%BC%8C%E5%AE%BF%E4%B8%BB%E6%9C%BA%E4%BF%AE%E6%94%B9%E5%90%8E%E5%AE%B9%E5%99%A8%E9%87%8C%E6%96%87%E4%BB%B6%E6%B2%A1%E6%9C%89%E4%BF%AE%E6%94%B9/" class="article-date">
  <time datetime="2020-11-10T16:00:00.000Z" itemprop="datePublished">2020-11-11</time>
</a> 
  <div class="article-category">
    <a class="article-category-link" href="/categories/docker/">docker</a>
  </div>
   
    </div>
      
    <div class="article-entry" itemprop="articleBody">
       
  <h2 id="解密-Docker-挂载文件，宿主机修改后容器里文件没有修改"><a href="#解密-Docker-挂载文件，宿主机修改后容器里文件没有修改" class="headerlink" title="解密 Docker 挂载文件，宿主机修改后容器里文件没有修改"></a>Demystifying Docker file mounts: why a file edited on the host does not change inside the container</h2>
<h2 id="问题"><a href="#问题" class="headerlink" title="问题"></a>Problem</h2><p>When using <code>Docker Volumes</code>, you sometimes need to mount a host directory or file into a container, either for data persistence or to supply a configuration file for the service inside.</p>
<p>After mounting a file (test.txt, with the default 644 permissions) via <code>docker run -it --rm -v /root/test.txt:/root/test.txt debian:10 bash</code> and editing the host&#39;s <code>test.txt</code> with <code>vim</code>, the <code>test.txt</code> inside the container is unchanged. Why?</p>
<h2 id="问题分析"><a href="#问题分析" class="headerlink" title="问题分析"></a>Analysis</h2><p>In Docker, volume mounts rely on the <code>Mount Namespace</code> from <code>Linux Namespaces</code>: different processes get isolated views of the mount points, while the actual files do not change. In the example above, the bash in the container is really just a process running on the host, which Docker isolates with the <code>Mount Namespace</code>, <code>UTS Namespace</code>, <code>IPC Namespace</code>, <code>PID Namespace</code>, <code>Network Namespace</code> and <code>User Namespace</code>, so it appears to run on an <code>independent</code>, <code>relatively isolated</code> system. In reality all of its resources are projections of the host&#39;s resources into those namespaces, and files are no exception.</p>
<p>So why does editing <code>test.txt</code> on the host leave the container&#39;s <code>test.txt</code> unchanged?</p>
<p>On Linux, <code>the definitive way to tell whether two files are the same</code> is to compare their <code>inode</code> numbers: if two files share an inode they are necessarily the same file, and therefore their contents are necessarily the same.</p>
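<p>This is easy to see on any Linux box with GNU stat: a hard link shares the original file&#39;s inode, while a copy gets a new one. A self-contained sketch using a temporary directory:</p>

```shell
# Demonstrates inode identity: hard links share an inode, copies do not.
TMP=$(mktemp -d)
echo hello > "$TMP/a.txt"
ln "$TMP/a.txt" "$TMP/link.txt"   # hard link: same inode, same file
cp "$TMP/a.txt" "$TMP/copy.txt"   # copy: new inode, a different file

INO_A=$(stat -c %i "$TMP/a.txt")
INO_LINK=$(stat -c %i "$TMP/link.txt")
INO_COPY=$(stat -c %i "$TMP/copy.txt")
echo "a=$INO_A link=$INO_LINK copy=$INO_COPY"
```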
<h2 id="验证问题"><a href="#验证问题" class="headerlink" title="验证问题"></a>Reproducing the problem</h2><p>1. Create a <code>/root/test.txt</code> file on the host and check its <code>inode</code> with the <code>stat</code> command, as shown below:</p>
<p><img src="http://iubest.gitee.io/pic/640-1601000870993.png" alt="img"></p>
<p>2. Start a throwaway container with <code>docker run -it --rm -v /root/test.txt:/root/test.txt debian:10 bash</code>, mounting the host file <code>/root/test.txt</code> into the container.</p>
<p>3. In another terminal, edit /root/test.txt with <code>vi</code>, save, then run stat on /root/test.txt again. As the screenshot below shows, the inode has changed.</p>
<p><img src="http://iubest.gitee.io/pic/640-1601000870989.png" alt="img"></p>
<p>4. Log into the container and check the inode of <code>/root/test.txt</code>. As shown below, it is still the pre-edit inode, not the post-edit one. That explains why the host edit never shows up in the container: the container and the host are no longer using the same file.</p>
<p><img src="http://iubest.gitee.io/pic/640-1601000870994.png" alt="img"></p>
<h2 id="简述-vi-或者-vim-修改文件过程"><a href="#简述-vi-或者-vim-修改文件过程" class="headerlink" title="简述 vi 或者 vim 修改文件过程"></a>How vi/vim saves a file</h2><p>By default on Linux, to protect the file being edited against disk or system failure mid-edit, vim does roughly the following:</p>
<ul>
<li>1. Copy the file being edited to a working copy, named like the original with a <code>&quot;.&quot;</code> prefix and a <code>&quot;.swp&quot;</code> suffix added.</li>
<li>2. Write the edits to the <code>.swp</code> file and <code>flush</code> it to disk.</li>
<li>3. On <code>:wq</code>, swap the names of the original file and the <code>swp</code> file.</li>
<li>4. Delete the temporary <code>swp</code> file.</li>
</ul>
<p>So the original file is effectively deleted, yet the container keeps referring to the old file; only when the container is <code>restart</code>ed does it re-read the new file and pick up the changes made on the host.</p>
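<p>The two write strategies can be reproduced without vim at all: an in-place redirect keeps the inode, while write-to-temp-then-rename (roughly what vim does by default) produces a new inode, which is exactly the file the bind mount no longer points at. A sketch using a temp directory:</p>

```shell
TMP=$(mktemp -d)
echo v1 > "$TMP/test.txt"
INO_BEFORE=$(stat -c %i "$TMP/test.txt")

# In-place write (what an echo redirect does): the inode is preserved.
echo v2 > "$TMP/test.txt"
INO_INPLACE=$(stat -c %i "$TMP/test.txt")

# Write-to-temp-then-rename (roughly vim's default save): a new inode.
echo v3 > "$TMP/test.txt.tmp"
mv "$TMP/test.txt.tmp" "$TMP/test.txt"
INO_RENAMED=$(stat -c %i "$TMP/test.txt")
echo "before=$INO_BEFORE inplace=$INO_INPLACE renamed=$INO_RENAMED"
```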
<h2 id="解决方法"><a href="#解决方法" class="headerlink" title="解决方法"></a>Solutions</h2><h3 id="方法一"><a href="#方法一" class="headerlink" title="方法一"></a>Option 1</h3><p>Modify the file with <code>echo</code> (shell redirection) instead of <code>vim</code> or <code>vi</code>.</p>
<h3 id="方法二"><a href="#方法二" class="headerlink" title="方法二"></a>Option 2</h3><p>Change the vim configuration. Run vim and type <code>:scriptnames</code> to find the config file path; here it is <code>/etc/vimrc</code>. Append the following two lines to it:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">set backup</span><br><span class="line">set backupcopy&#x3D;yes</span><br></pre></td></tr></table></figure>

<p>This solves the problem, but with a significant side effect: every time vim saves the file it leaves behind a backup of the pre-edit content, named like the edited file but with a &quot;~&quot; suffix. <code>Not recommended</code>.</p>
<h3 id="方法三"><a href="#方法三" class="headerlink" title="方法三"></a>Option 3</h3><p>Change the file permissions: the default is <code>644</code>; change it to <code>666</code>. After that, editing and saving with vim no longer changes the original file&#39;s inode. (<code>Recommended</code>)</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ chmod 666 &#x2F;root&#x2F;test.txt</span><br></pre></td></tr></table></figure>

<h3 id="方法四"><a href="#方法四" class="headerlink" title="方法四"></a>Option 4</h3><p>Mount a <code>directory</code> rather than a <code>file</code>. With a directory mount, the problem of host-side updates not appearing in the container does not occur. (<code>Recommended</code>)</p>
<h2 id="参考链接"><a href="#参考链接" class="headerlink" title="参考链接"></a>References</h2><ul>
<li><a target="_blank" rel="noopener" href="https://forums.docker.com/t/modify-a-file-which-mount-as-a-data-volume-but-it-didnt-change-in-container/2813/13">https://forums.docker.com/t/modify-a-file-which-mount-as-a-data-volume-but-it-didnt-change-in-container/2813/13</a></li>
<li><a target="_blank" rel="noopener" href="https://www.cnblogs.com/lylex/p/12781007.html">https://www.cnblogs.com/lylex/p/12781007.html</a></li>
</ul>
 
      <!-- reward -->
      
    </div>
    

    <!-- copyright -->
    
    <footer class="article-footer">
       
  <ul class="article-tag-list" itemprop="keywords"><li class="article-tag-list-item"><a class="article-tag-list-link" href="/tags/docker/" rel="tag">docker</a></li></ul>

    </footer>
  </div>

    
 
   
</article>

    
    <article
  id="post-k8s/K8S使用ceph-csi持久化存储之RBD"
  class="article article-type-post"
  itemscope
  itemprop="blogPost"
  data-scroll-reveal
>
  <div class="article-inner">
    
    <header class="article-header">
       
<h2 itemprop="name">
  <a class="article-title" href="/2020/11/11/k8s/K8S%E4%BD%BF%E7%94%A8ceph-csi%E6%8C%81%E4%B9%85%E5%8C%96%E5%AD%98%E5%82%A8%E4%B9%8BRBD/"
    >Using ceph-csi for persistent storage in K8S: RBD</a> 
</h2>
 

    </header>
     
    <div class="article-meta">
      <a href="/2020/11/11/k8s/K8S%E4%BD%BF%E7%94%A8ceph-csi%E6%8C%81%E4%B9%85%E5%8C%96%E5%AD%98%E5%82%A8%E4%B9%8BRBD/" class="article-date">
  <time datetime="2020-11-10T16:00:00.000Z" itemprop="datePublished">2020-11-11</time>
</a> 
  <div class="article-category">
    <a class="article-category-link" href="/categories/k8s/">k8s</a>
  </div>
   
    </div>
      
    <div class="article-entry" itemprop="articleBody">
       
  <h1 id="K8S使用ceph-csi持久化存储之RBD"><a href="#K8S使用ceph-csi持久化存储之RBD" class="headerlink" title="K8S使用ceph-csi持久化存储之RBD"></a>Using ceph-csi for persistent storage in K8S: RBD</h1><h3 id="一、集群和组件版本"><a href="#一、集群和组件版本" class="headerlink" title="一、集群和组件版本"></a>1. Cluster and component versions</h3><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">K8S cluster: 1.17.3+</span><br><span class="line">Ceph cluster: Nautilus (stable)</span><br><span class="line">Ceph-CSI: release-v3.1</span><br><span class="line">snapshotter-controller: release-2.1</span><br><span class="line">Linux kernel: 3.10.0-1127.19.1.el7.x86_64 +</span><br></pre></td></tr></table></figure>

<ul>
<li>Image versions:</li>
</ul>
<figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">docker pull quay.io/k8scsi/csi-snapshotter:v2.1.1</span><br><span class="line">docker pull quay.io/k8scsi/csi-snapshotter:v2.1.0</span><br><span class="line">docker pull quay.io/k8scsi/csi-resizer:v0.5.0</span><br><span class="line">docker pull quay.io/k8scsi/csi-provisioner:v1.6.0</span><br><span class="line">docker pull quay.io/k8scsi/csi-node-driver-registrar:v1.3.0</span><br><span class="line">docker pull quay.io/k8scsi/csi-attacher:v2.1.1</span><br><span class="line">docker pull quay.io/cephcsi/cephcsi:v3.1-canary</span><br><span class="line">docker pull quay.io/k8scsi/snapshot-controller:v2.0.1</span><br></pre></td></tr></table></figure>

<h3 id="二、部署"><a href="#二、部署" class="headerlink" title="二、部署"></a>2. Deployment</h3><h4 id="1）部署Ceph-CSI"><a href="#1）部署Ceph-CSI" class="headerlink" title="1）部署Ceph-CSI"></a>1) Deploy Ceph-CSI</h4><h5 id="1-1）克隆代码"><a href="#1-1）克隆代码" class="headerlink" title="1.1）克隆代码"></a>1.1) Clone the code</h5><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#</span><span class="bash"> git <span class="built_in">clone</span> https://github.com/ceph/ceph-csi.git</span></span><br><span class="line"><span class="meta">#</span><span class="bash"> <span class="built_in">cd</span> ceph-csi/deploy/rbd/kubernetes</span></span><br></pre></td></tr></table></figure>

<h5 id="1-2）修改yaml文件"><a href="#1-2）修改yaml文件" class="headerlink" title="1.2）修改yaml文件"></a>1.2) Edit the yaml files</h5><p><em>1.2.1) In csi-rbdplugin-provisioner.yaml and csi-rbdplugin.yaml, comment out the ceph-csi-encryption-kms-config entries:</em></p>
<figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#</span><span class="bash"> grep <span class="string">&quot;#&quot;</span> csi-rbdplugin-provisioner.yaml</span></span><br><span class="line">          # for stable functionality replace canary with latest release version</span><br><span class="line">            #- name: ceph-csi-encryption-kms-config</span><br><span class="line">            #  mountPath: /etc/ceph-csi-encryption-kms-config/</span><br><span class="line">        #- name: ceph-csi-encryption-kms-config</span><br><span class="line">        #  configMap:</span><br><span class="line">        #    name: ceph-csi-encryption-kms-config</span><br></pre></td></tr></table></figure>

<p><em>1.2.2) In csi-config-map.yaml, configure the connection details of your ceph cluster</em></p>
<figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># cat csi-config-map.yaml</span></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ConfigMap</span></span><br><span class="line"><span class="attr">data:</span></span><br><span class="line">  <span class="attr">config.json:</span> <span class="string">|-</span></span><br><span class="line">    [</span><br><span class="line">      &#123;</span><br><span class="line">        <span class="attr">&quot;clusterID&quot;:</span> <span class="string">&quot;c7b4xxf7-c61e-4668-9xx0-82c9xx5e3696&quot;</span>,    <span class="string">//</span> <span class="string">通过ceph集群的ID</span></span><br><span class="line">        <span class="attr">&quot;monitors&quot;:</span> [</span><br><span class="line">          <span class="string">&quot;xxx.xxx.xxx.xxx:6789&quot;</span></span><br><span class="line">        ]</span><br><span class="line">      &#125;</span><br><span class="line">    ]</span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line">  <span class="attr">name:</span> <span class="string">ceph-csi-config</span></span><br></pre></td></tr></table></figure>

<p><em>1.2.3) Deploy the rbd CSI components</em></p>
<figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#</span><span class="bash"> kubectl apply -f ceph-csi/deploy/rbd/kubernetes/</span></span><br><span class="line"><span class="meta">#</span><span class="bash"> kubectl get pods</span></span><br><span class="line">csi-rbdplugin-9f8kn                             3/3     Running   0          39h</span><br><span class="line">csi-rbdplugin-pnjtn                             3/3     Running   0          39h</span><br><span class="line">csi-rbdplugin-provisioner-7f469fb84-4qqbd       6/6     Running   0          41h</span><br><span class="line">csi-rbdplugin-provisioner-7f469fb84-hkc9q       6/6     Running   5          41h</span><br><span class="line">csi-rbdplugin-provisioner-7f469fb84-vm7qm       6/6     Running   0          40h</span><br></pre></td></tr></table></figure>

<h4 id="2-快照功能需要安装快照控制器支持："><a href="#2-快照功能需要安装快照控制器支持：" class="headerlink" title="2)快照功能需要安装快照控制器支持："></a>2) Snapshot support requires the snapshot controller:</h4><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">2.1) Clone the code</span><br><span class="line"><span class="meta">#</span><span class="bash"> git <span class="built_in">clone</span> https://github.com/kubernetes-csi/external-snapshotter</span></span><br><span class="line"><span class="meta">#</span><span class="bash"> <span class="built_in">cd</span> external-snapshotter/deploy/kubernetes/snapshot-controller</span></span><br><span class="line">2.2) Deploy</span><br><span class="line"><span class="meta">#</span><span class="bash"> kubectl apply -f external-snapshotter/deploy/kubernetes/snapshot-controller/</span></span><br><span class="line"><span class="meta">#</span><span class="bash"> kubectl get pods | grep snapshot-controller</span></span><br><span class="line">snapshot-controller-0                           1/1     Running   0          20h</span><br><span class="line">2.3) Deploy the CRDs</span><br><span class="line"><span class="meta">#</span><span class="bash"> kubectl apply -f external-snapshotter/config/crd/</span></span><br><span class="line"><span class="meta">#</span><span class="bash"> kubectl api-versions | grep snapshot</span></span><br><span class="line">snapshot.storage.k8s.io/v1beta1</span><br></pre></td></tr></table></figure>

<p><em>Ceph-CSI and snapshot-controller are now installed. Next we test the features; before doing so, create the corresponding storage pool in the ceph cluster:</em></p>
<figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br></pre></td><td class="code"><pre><span class="line">// 查看集群状态</span><br><span class="line"><span class="meta">#</span><span class="bash"> ceph -s</span></span><br><span class="line">  cluster:</span><br><span class="line">    id:     c7b43ef7-c61e-4668-9970-82c9775e3696</span><br><span class="line">    health: HEALTH_OK</span><br><span class="line"> </span><br><span 
class="line">  services:</span><br><span class="line">    mon: 1 daemons, quorum cka-node-01 (age 24h)</span><br><span class="line">    mgr: cka-node-01(active, since 24h), standbys: cka-node-02, cka-node-03</span><br><span class="line">    mds: cephfs:1 &#123;0=cka-node-01=up:active&#125; 2 up:standby</span><br><span class="line">    osd: 3 osds: 3 up, 3 in</span><br><span class="line">    rgw: 1 daemon active (cka-node-01)</span><br><span class="line"> </span><br><span class="line">  task status:</span><br><span class="line">    scrub status:</span><br><span class="line">        mds.cka-node-01: idle</span><br><span class="line"> </span><br><span class="line">  data:</span><br><span class="line">    pools:   7 pools, 184 pgs</span><br><span class="line">    objects: 827 objects, 1.7 GiB</span><br><span class="line">    usage:   8.1 GiB used, 52 GiB / 60 GiB avail</span><br><span class="line">    pgs:     184 active+clean</span><br><span class="line"> </span><br><span class="line">  io:</span><br><span class="line">    client:   32 KiB/s rd, 0 B/s wr, 31 op/s rd, 21 op/s wr</span><br><span class="line"> </span><br><span class="line">// 创建存储池kubernetes</span><br><span class="line"><span class="meta">#</span><span class="bash"> ceph osd pool create kubernetes 8 8</span></span><br><span class="line"><span class="meta">#</span><span class="bash"> rbd pool init kubernetes</span></span><br><span class="line"> </span><br><span class="line">// 创建用户kubernetes</span><br><span class="line"><span class="meta">#</span><span class="bash"> ceph auth get-or-create client.kubernetes mon <span class="string">&#x27;profile rbd&#x27;</span> osd <span class="string">&#x27;profile rbd pool=kubernetes&#x27;</span></span></span><br><span class="line"> </span><br><span class="line">// 获取集群信息和查看用户key</span><br><span class="line"><span class="meta">#</span><span class="bash"> ceph mon dump</span></span><br><span class="line">dumped monmap epoch 3</span><br><span class="line">epoch 
3</span><br><span class="line">fsid c7b43ef7-c61e-4668-9970-82c9775e3696</span><br><span class="line">last_changed 2020-09-11 11:05:25.529648</span><br><span class="line">created 2020-09-10 16:22:52.967856</span><br><span class="line">min_mon_release 14 (nautilus)</span><br><span class="line">0: [v2:10.0.xxx.xxx0:3300/0,v1:10.0.xxx.xxx:6789/0] mon.cka-node-01</span><br><span class="line"> </span><br><span class="line"><span class="meta">#</span><span class="bash"> ceph auth get client.kubernetes</span></span><br><span class="line">exported keyring for client.kubernetes</span><br><span class="line">[client.kubernetes]</span><br><span class="line">    key = AQBt5xxxR0DBAAtjxxA+zlqxxxF3shYm8qLQmw==</span><br><span class="line">    caps mon = &quot;profile rbd&quot;</span><br><span class="line">    caps osd = &quot;profile rbd pool=kubernetes&quot;</span><br></pre></td></tr></table></figure>
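The key printed by `ceph auth get client.kubernetes` is exactly what goes into the Kubernetes Secret's userKey field below. A small sketch for pulling it out of an exported keyring; the keyring here is a sample copied from the output above, and the awk filter is my own helper, not part of ceph:

```shell
# Sample keyring, shaped like the output of `ceph auth get client.kubernetes`.
keyring='[client.kubernetes]
    key = AQBt5xxxR0DBAAtjxxA+zlqxxxF3shYm8qLQmw==
    caps mon = "profile rbd"
    caps osd = "profile rbd pool=kubernetes"'

# The key line has the form "key = <value>"; print the third field.
user_key=$(printf '%s\n' "$keyring" | awk '$1 == "key" { print $3 }')
echo "$user_key"
```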

<h3 id="三、验证"><a href="#三、验证" class="headerlink" title="三、验证"></a>III. Verification</h3><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">Verify the following:</span><br><span class="line">1) Create an rbd-type PVC and use it in a pod;</span><br><span class="line">2) Snapshot an rbd-type PVC and verify restoring from the snapshot works;</span><br><span class="line">3) Expand a PVC;</span><br><span class="line">4) Create repeated snapshots of the same PVC;</span><br></pre></td></tr></table></figure>

<h5 id="1、创建rbd类型pvc给pod使用："><a href="#1、创建rbd类型pvc给pod使用：" class="headerlink" title="1、创建rbd类型pvc给pod使用："></a>1. Create an rbd-type PVC for a pod:</h5><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span 
class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br></pre></td><td class="code"><pre><span class="line"><span class="number">1.1</span><span class="string">)</span> <span class="string">Create the secret for connecting to the Ceph cluster</span></span><br><span class="line"><span class="comment"># cat secret.yaml</span></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Secret</span></span><br><span class="line"><span 
class="attr">metadata:</span></span><br><span class="line">  <span class="attr">name:</span> <span class="string">csi-rbd-secret</span></span><br><span class="line">  <span class="attr">namespace:</span> <span class="string">default</span></span><br><span class="line"><span class="attr">stringData:</span></span><br><span class="line">  <span class="attr">userID:</span> <span class="string">kubernetes</span></span><br><span class="line">  <span class="attr">userKey:</span> <span class="string">AQBt51lf9iR0DBAAtjA+zlqxxxYm8qLQmw==</span></span><br><span class="line">  <span class="attr">encryptionPassphrase:</span> <span class="string">test_passphrase</span></span><br><span class="line"> </span><br><span class="line"><span class="comment"># kubectl apply -f secret.yaml</span></span><br><span class="line"><span class="number">1.2</span><span class="string">)</span> <span class="string">Create the StorageClass</span></span><br><span class="line"><span class="comment"># cat storageclass.yaml</span></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">storage.k8s.io/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">StorageClass</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line">   <span class="attr">name:</span> <span class="string">csi-rbd-sc</span></span><br><span class="line"><span class="attr">provisioner:</span> <span class="string">rbd.csi.ceph.com</span></span><br><span class="line"><span class="attr">parameters:</span></span><br><span class="line">   <span class="attr">clusterID:</span> <span class="string">c7b43xxf7-c61e-4668-9970-82c9e3696</span></span><br><span class="line">   <span class="attr">pool:</span> <span class="string">kubernetes</span></span><br><span class="line">   <span class="attr">imageFeatures:</span> <span class="string">layering</span></span><br><span class="line">   
<span class="attr">csi.storage.k8s.io/provisioner-secret-name:</span> <span class="string">csi-rbd-secret</span></span><br><span class="line">   <span class="attr">csi.storage.k8s.io/provisioner-secret-namespace:</span> <span class="string">default</span></span><br><span class="line">   <span class="attr">csi.storage.k8s.io/controller-expand-secret-name:</span> <span class="string">csi-rbd-secret</span></span><br><span class="line">   <span class="attr">csi.storage.k8s.io/controller-expand-secret-namespace:</span> <span class="string">default</span></span><br><span class="line">   <span class="attr">csi.storage.k8s.io/node-stage-secret-name:</span> <span class="string">csi-rbd-secret</span></span><br><span class="line">   <span class="attr">csi.storage.k8s.io/node-stage-secret-namespace:</span> <span class="string">default</span></span><br><span class="line">   <span class="attr">csi.storage.k8s.io/fstype:</span> <span class="string">ext4</span></span><br><span class="line"><span class="attr">reclaimPolicy:</span> <span class="string">Delete</span></span><br><span class="line"><span class="attr">allowVolumeExpansion:</span> <span class="literal">true</span></span><br><span class="line"><span class="attr">mountOptions:</span></span><br><span class="line">   <span class="bullet">-</span> <span class="string">discard</span></span><br><span class="line"> </span><br><span class="line"><span class="comment"># kubectl apply -f storageclass.yaml</span></span><br><span class="line"><span class="number">1.3</span><span class="string">) Create a PVC from the StorageClass</span></span><br><span class="line"><span class="comment"># cat pvc.yaml</span></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">PersistentVolumeClaim</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span 
class="line">  <span class="attr">name:</span> <span class="string">rbd-pvc</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line">  <span class="attr">accessModes:</span></span><br><span class="line">    <span class="bullet">-</span> <span class="string">ReadWriteOnce</span></span><br><span class="line">  <span class="attr">resources:</span></span><br><span class="line">    <span class="attr">requests:</span></span><br><span class="line">      <span class="attr">storage:</span> <span class="string">1Gi</span></span><br><span class="line">  <span class="attr">storageClassName:</span> <span class="string">csi-rbd-sc</span></span><br><span class="line"> </span><br><span class="line"><span class="comment"># kubectl apply -f pvc.yaml</span></span><br><span class="line"><span class="comment"># kubectl get pvc rbd-pvc</span></span><br><span class="line"><span class="string">NAME</span>      <span class="string">STATUS</span>   <span class="string">VOLUME</span>                                     <span class="string">CAPACITY</span>   <span class="string">ACCESS</span> <span class="string">MODES</span>   <span class="string">STORAGECLASS</span>   <span class="string">AGE</span></span><br><span class="line"><span class="string">rbd-pvc</span>   <span class="string">Bound</span>    <span class="string">pvc-11b931b0-7cb5-40e1-815b-c15659310593</span>   <span class="string">1Gi</span>      <span class="string">RWO</span>            <span class="string">csi-rbd-sc</span>        <span class="string">17h</span></span><br><span class="line"><span class="number">1.4</span><span class="string">) Create a pod that uses the PVC</span></span><br><span class="line"><span class="comment"># cat pod.yaml</span></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span 
class="string">Pod</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line">  <span class="attr">name:</span> <span class="string">csi-rbd-demo-pod</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line">  <span class="attr">containers:</span></span><br><span class="line">    <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">web-server</span></span><br><span class="line">      <span class="attr">image:</span> <span class="string">nginx</span></span><br><span class="line">      <span class="attr">volumeMounts:</span></span><br><span class="line">        <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">mypvc</span></span><br><span class="line">          <span class="attr">mountPath:</span> <span class="string">/var/lib/www/html</span></span><br><span class="line">  <span class="attr">volumes:</span></span><br><span class="line">    <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">mypvc</span></span><br><span class="line">      <span class="attr">persistentVolumeClaim:</span></span><br><span class="line">        <span class="attr">claimName:</span> <span class="string">rbd-pvc</span></span><br><span class="line">        <span class="attr">readOnly:</span> <span class="literal">false</span></span><br><span class="line"> </span><br><span class="line"><span class="comment"># kubectl apply -f pod.yaml</span></span><br><span class="line"><span class="comment"># kubectl get pods csi-rbd-demo-pod</span></span><br><span class="line"><span class="string">NAME</span>               <span class="string">READY</span>   <span class="string">STATUS</span>    <span class="string">RESTARTS</span>   <span class="string">AGE</span></span><br><span class="line"><span class="string">csi-rbd-demo-pod</span>   <span class="number">1</span><span class="string">/1</span>     <span class="string">Running</span> 
  <span class="number">0</span>          <span class="string">40h</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># kubectl exec -ti csi-rbd-demo-pod -- bash</span></span><br><span class="line"><span class="string">root@csi-rbd-demo-pod:/#</span> <span class="string">df</span> <span class="string">-h</span></span><br><span class="line"><span class="string">Filesystem</span>               <span class="string">Size</span>  <span class="string">Used</span> <span class="string">Avail</span> <span class="string">Use%</span> <span class="string">Mounted</span> <span class="string">on</span></span><br><span class="line"><span class="string">overlay</span>                  <span class="string">199G</span>  <span class="number">7.</span><span class="string">4G</span>  <span class="string">192G</span>   <span class="number">4</span><span class="string">%</span> <span class="string">/</span></span><br><span class="line"><span class="string">tmpfs</span>                     <span class="string">64M</span>     <span class="number">0</span>   <span class="string">64M</span>   <span class="number">0</span><span class="string">%</span> <span class="string">/dev</span></span><br><span class="line"><span class="string">tmpfs</span>                    <span class="number">7.</span><span class="string">8G</span>     <span class="number">0</span>  <span class="number">7.</span><span class="string">8G</span>   <span class="number">0</span><span class="string">%</span> <span class="string">/sys/fs/cgroup</span></span><br><span class="line"><span class="string">/dev/mapper/centos-root</span>  <span class="string">199G</span>  <span class="number">7.</span><span class="string">4G</span>  <span class="string">192G</span>   <span class="number">4</span><span class="string">%</span> <span class="string">/etc/hosts</span></span><br><span class="line"><span class="string">shm</span>                       <span class="string">64M</span>     <span 
class="number">0</span>   <span class="string">64M</span>   <span class="number">0</span><span class="string">%</span> <span class="string">/dev/shm</span></span><br><span class="line"><span class="string">/dev/rbd0</span>                <span class="string">976M</span>  <span class="number">2.</span><span class="string">6M</span>  <span class="string">958M</span>   <span class="number">1</span><span class="string">%</span> <span class="string">/var/lib/www/html</span></span><br><span class="line"><span class="string">tmpfs</span>                    <span class="number">7.</span><span class="string">8G</span>   <span class="string">12K</span>  <span class="number">7.</span><span class="string">8G</span>   <span class="number">1</span><span class="string">%</span> <span class="string">/run/secrets/kubernetes.io/serviceaccount</span></span><br><span class="line"><span class="string">tmpfs</span>                    <span class="number">7.</span><span class="string">8G</span>     <span class="number">0</span>  <span class="number">7.</span><span class="string">8G</span>   <span class="number">0</span><span class="string">%</span> <span class="string">/proc/acpi</span></span><br><span class="line"><span class="string">tmpfs</span>                    <span class="number">7.</span><span class="string">8G</span>     <span class="number">0</span>  <span class="number">7.</span><span class="string">8G</span>   <span class="number">0</span><span class="string">%</span> <span class="string">/proc/scsi</span></span><br><span class="line"><span class="string">tmpfs</span>                    <span class="number">7.</span><span class="string">8G</span>     <span class="number">0</span>  <span class="number">7.</span><span class="string">8G</span>   <span class="number">0</span><span class="string">%</span> <span class="string">/sys/firmware</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># Write a file for the later snapshot test</span></span><br><span 
class="line"><span class="string">root@csi-rbd-demo-pod:/#</span> <span class="string">cd</span> <span class="string">/var/lib/www/html;mkdir</span> <span class="string">demo;cd</span> <span class="string">demo;echo</span> <span class="string">&quot;snapshot test&quot;</span> <span class="string">&gt;</span> <span class="string">test.txt</span></span><br><span class="line"><span class="string">root@csi-rbd-demo-pod:/var/lib/www/html#</span> <span class="string">cat</span> <span class="string">demo/test.txt</span></span><br><span class="line"><span class="string">snapshot</span> <span class="string">test</span></span><br></pre></td></tr></table></figure>
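The snapshot step in the next section references a VolumeSnapshotClass named csi-rbdplugin-snapclass that is not shown in this post. A minimal sketch of what such a class might look like for this cluster; the clusterID and secret names are assumptions carried over from the StorageClass above:

```yaml
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbdplugin-snapclass
driver: rbd.csi.ceph.com
deletionPolicy: Delete
parameters:
  clusterID: c7b43xxf7-c61e-4668-9970-82c9e3696
  csi.storage.k8s.io/snapshotter-secret-name: csi-rbd-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: default
```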

<h5 id="2）创建rbd类型pvc的快照，并验证基于快照恢复的可用性："><a href="#2）创建rbd类型pvc的快照，并验证基于快照恢复的可用性：" class="headerlink" title="2）创建rbd类型pvc的快照，并验证基于快照恢复的可用性："></a>2) Snapshot an rbd-type PVC and verify restoring from the snapshot:</h5><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span 
class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br></pre></td><td class="code"><pre><span class="line"><span class="number">2.1</span><span class="string">) Snapshot the PVC created above</span></span><br><span class="line"><span class="comment"># cat snapshot.yaml</span></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">snapshot.storage.k8s.io/v1beta1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">VolumeSnapshot</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line">  <span class="attr">name:</span> <span class="string">rbd-pvc-snapshot</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line">  <span class="attr">volumeSnapshotClassName:</span> <span class="string">csi-rbdplugin-snapclass</span></span><br><span class="line">  <span class="attr">source:</span></span><br><span class="line">    <span class="attr">persistentVolumeClaimName:</span> <span class="string">rbd-pvc</span></span><br><span class="line"> </span><br><span class="line"><span class="comment"># kubectl 
apply -f snapshot.yaml</span></span><br><span class="line"><span class="comment"># kubectl get VolumeSnapshot rbd-pvc-snapshot</span></span><br><span class="line"><span class="string">NAME</span>               <span class="string">READYTOUSE</span>   <span class="string">SOURCEPVC</span>   <span class="string">SOURCESNAPSHOTCONTENT</span>   <span class="string">RESTORESIZE</span>   <span class="string">SNAPSHOTCLASS</span>             <span class="string">SNAPSHOTCONTENT</span>                                    <span class="string">CREATIONTIME</span>   <span class="string">AGE</span></span><br><span class="line"><span class="string">rbd-pvc-snapshot</span>   <span class="literal">true</span>         <span class="string">rbd-pvc</span>                             <span class="string">1Gi</span>           <span class="string">csi-rbdplugin-snapclass</span>   <span class="string">snapcontent-48f3e563-d21a-40bb-8e15-ddbf27886c88</span>   <span class="string">19h</span>            <span class="string">19h</span></span><br><span class="line"><span class="number">2.2</span><span class="string">) Create a PVC restored from the snapshot</span></span><br><span class="line"><span class="comment"># cat pvc-restore.yaml</span></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">PersistentVolumeClaim</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line">  <span class="attr">name:</span> <span class="string">rbd-pvc-restore</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line">  <span class="attr">storageClassName:</span> <span class="string">csi-rbd-sc</span></span><br><span class="line">  <span class="attr">dataSource:</span></span><br><span class="line">    <span class="attr">name:</span> <span 
class="string">rbd-pvc-snapshot</span></span><br><span class="line">    <span class="attr">kind:</span> <span class="string">VolumeSnapshot</span></span><br><span class="line">    <span class="attr">apiGroup:</span> <span class="string">snapshot.storage.k8s.io</span></span><br><span class="line">  <span class="attr">accessModes:</span></span><br><span class="line">    <span class="bullet">-</span> <span class="string">ReadWriteOnce</span></span><br><span class="line">  <span class="attr">resources:</span></span><br><span class="line">    <span class="attr">requests:</span></span><br><span class="line">      <span class="attr">storage:</span> <span class="string">1Gi</span></span><br><span class="line"> </span><br><span class="line"><span class="comment"># kubectl apply -f pvc-restore.yaml</span></span><br><span class="line"><span class="number">2.3</span><span class="string">) Create a pod that uses the restored PVC</span></span><br><span class="line"><span class="comment"># cat pod-restore.yaml</span></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Pod</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line">  <span class="attr">name:</span> <span class="string">csi-rbd-restore-demo-pod</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line">  <span class="attr">containers:</span></span><br><span class="line">    <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">web-server</span></span><br><span class="line">      <span class="attr">image:</span> <span class="string">nginx</span></span><br><span class="line">      <span class="attr">volumeMounts:</span></span><br><span class="line">        <span class="bullet">-</span> <span class="attr">name:</span> <span 
class="string">mypvc</span></span><br><span class="line">          <span class="attr">mountPath:</span> <span class="string">/var/lib/www/html</span></span><br><span class="line">  <span class="attr">volumes:</span></span><br><span class="line">    <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">mypvc</span></span><br><span class="line">      <span class="attr">persistentVolumeClaim:</span></span><br><span class="line">        <span class="attr">claimName:</span> <span class="string">rbd-pvc-restore</span></span><br><span class="line">        <span class="attr">readOnly:</span> <span class="literal">false</span></span><br><span class="line"> </span><br><span class="line"><span class="comment"># kubectl apply -f pod-restore.yaml</span></span><br><span class="line"><span class="comment"># kubectl get pods csi-rbd-restore-demo-pod</span></span><br><span class="line"><span class="string">NAME</span>                       <span class="string">READY</span>   <span class="string">STATUS</span>    <span class="string">RESTARTS</span>   <span class="string">AGE</span></span><br><span class="line"><span class="string">csi-rbd-restore-demo-pod</span>   <span class="number">1</span><span class="string">/1</span>     <span class="string">Running</span>   <span class="number">0</span>          <span class="string">18h</span></span><br><span class="line"><span class="comment"># kubectl exec -ti csi-rbd-restore-demo-pod -- bash</span></span><br><span class="line"><span class="string">root@csi-rbd-restore-demo-pod:/#</span> <span class="string">df</span> <span class="string">-h</span></span><br><span class="line"><span class="string">Filesystem</span>               <span class="string">Size</span>  <span class="string">Used</span> <span class="string">Avail</span> <span class="string">Use%</span> <span class="string">Mounted</span> <span class="string">on</span></span><br><span class="line"><span class="string">overlay</span>                  
<span class="string">199G</span>  <span class="number">7.</span><span class="string">4G</span>  <span class="string">192G</span>   <span class="number">4</span><span class="string">%</span> <span class="string">/</span></span><br><span class="line"><span class="string">tmpfs</span>                     <span class="string">64M</span>     <span class="number">0</span>   <span class="string">64M</span>   <span class="number">0</span><span class="string">%</span> <span class="string">/dev</span></span><br><span class="line"><span class="string">tmpfs</span>                    <span class="number">7.</span><span class="string">8G</span>     <span class="number">0</span>  <span class="number">7.</span><span class="string">8G</span>   <span class="number">0</span><span class="string">%</span> <span class="string">/sys/fs/cgroup</span></span><br><span class="line"><span class="string">/dev/mapper/centos-root</span>  <span class="string">199G</span>  <span class="number">7.</span><span class="string">4G</span>  <span class="string">192G</span>   <span class="number">4</span><span class="string">%</span> <span class="string">/etc/hosts</span></span><br><span class="line"><span class="string">shm</span>                       <span class="string">64M</span>     <span class="number">0</span>   <span class="string">64M</span>   <span class="number">0</span><span class="string">%</span> <span class="string">/dev/shm</span></span><br><span class="line"><span class="string">/dev/rbd3</span>                <span class="string">976M</span>  <span class="number">2.</span><span class="string">6M</span>  <span class="string">958M</span>   <span class="number">1</span><span class="string">%</span> <span class="string">/var/lib/www/html</span></span><br><span class="line"><span class="string">tmpfs</span>                    <span class="number">7.</span><span class="string">8G</span>   <span class="string">12K</span>  <span class="number">7.</span><span class="string">8G</span>   <span 
class="number">1</span><span class="string">%</span> <span class="string">/run/secrets/kubernetes.io/serviceaccount</span></span><br><span class="line"><span class="string">tmpfs</span>                    <span class="number">7.</span><span class="string">8G</span>     <span class="number">0</span>  <span class="number">7.</span><span class="string">8G</span>   <span class="number">0</span><span class="string">%</span> <span class="string">/proc/acpi</span></span><br><span class="line"><span class="string">tmpfs</span>                    <span class="number">7.</span><span class="string">8G</span>     <span class="number">0</span>  <span class="number">7.</span><span class="string">8G</span>   <span class="number">0</span><span class="string">%</span> <span class="string">/proc/scsi</span></span><br><span class="line"><span class="string">tmpfs</span>                    <span class="number">7.</span><span class="string">8G</span>     <span class="number">0</span>  <span class="number">7.</span><span class="string">8G</span>   <span class="number">0</span><span class="string">%</span> <span class="string">/sys/firmware</span></span><br><span class="line"></span><br><span class="line"><span class="string">root@csi-rbd-restore-demo-pod:/#</span> <span class="string">cd</span> <span class="string">/var/lib/www/html</span></span><br><span class="line"><span class="string">root@csi-rbd-restore-demo-pod:/var/lib/www/html#</span> <span class="string">ls</span></span><br><span class="line"><span class="string">demo</span>  <span class="string">lost+found</span></span><br><span class="line"><span class="string">root@csi-rbd-restore-demo-pod:/var/lib/www/html#</span> <span class="string">cat</span> <span class="string">demo/test.txt</span></span><br><span class="line"><span class="string">snapshot</span> <span class="string">test</span></span><br><span class="line"> </span><br><span class="line"><span 
class="string">// Restoring data from the snapshot works as expected</span></span><br></pre></td></tr></table></figure>
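A restore only succeeds once the snapshot reports READYTOUSE=true, so a wait is often needed before applying pvc-restore.yaml. A sketch of the readiness check; ready_from_json is a hypothetical helper exercised here on a canned status document so it runs without a cluster (real code would feed it `kubectl get volumesnapshot rbd-pvc-snapshot -o json` and preferably parse with jq):

```shell
# Hypothetical helper: extract status.readyToUse from VolumeSnapshot JSON.
# Crude grep-based parsing for illustration only.
ready_from_json() {
  if printf '%s' "$1" | grep -q '"readyToUse": *true'; then
    echo true
  else
    echo false
  fi
}

# Canned status, shaped like the relevant part of `kubectl ... -o json` output.
sample='{"status": {"readyToUse": true, "restoreSize": "1Gi"}}'
ready_from_json "$sample"    # prints: true
```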

<h5 id="3）扩容pvc大小："><a href="#3）扩容pvc大小：" class="headerlink" title="3）扩容pvc大小："></a>3) Expand the PVC:</h5><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br></pre></td><td class="code"><pre><span class="line">3.1) Change the requested size of rbd-pvc</span><br><span class="line"><span class="meta">#</span><span class="bash"> cat pvc.yaml</span></span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: PersistentVolumeClaim</span><br><span class="line">metadata:</span><br><span class="line">  name: rbd-pvc</span><br><span class="line">spec:</span><br><span class="line">  accessModes:</span><br><span class="line">    - ReadWriteOnce</span><br><span class="line">  
resources:</span><br><span class="line">    requests:</span><br><span class="line">      storage: 100Gi    // 由1G改为100G</span><br><span class="line">  storageClassName: csi-rbd-sc</span><br><span class="line"> </span><br><span class="line"><span class="meta">#</span><span class="bash"> kubectl apply -f pvc.yaml</span></span><br><span class="line"><span class="meta">#</span><span class="bash"> kubectl get pvc rbd-pvc</span></span><br><span class="line">NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE</span><br><span class="line">rbd-pvc   Bound    pvc-11b931b0-7cb5-40e1-815b-c15659310593   100Gi      RWO            csi-rbd-sc     40h</span><br><span class="line"><span class="meta">#</span><span class="bash"> kubectl <span class="built_in">exec</span> -ti csi-rbd-demo-pod -- bash</span></span><br><span class="line">root@csi-rbd-demo-pod:/# df -h</span><br><span class="line">Filesystem               Size  Used Avail Use% Mounted on</span><br><span class="line">overlay                  199G  7.4G  192G   4% /</span><br><span class="line">tmpfs                     64M     0   64M   0% /dev</span><br><span class="line">tmpfs                    7.8G     0  7.8G   0% /sys/fs/cgroup</span><br><span class="line">/dev/mapper/centos-root  199G  7.4G  192G   4% /etc/hosts</span><br><span class="line">shm                       64M     0   64M   0% /dev/shm</span><br><span class="line">/dev/rbd0                 99G  6.8M   99G   1% /var/lib/www/html    // 扩容正常</span><br><span class="line">tmpfs                    7.8G   12K  7.8G   1% /run/secrets/kubernetes.io/serviceaccount</span><br><span class="line">tmpfs                    7.8G     0  7.8G   0% /proc/acpi</span><br><span class="line">tmpfs                    7.8G     0  7.8G   0% /proc/scsi</span><br><span class="line">tmpfs                    7.8G     0  7.8G   0% /sys/firmware</span><br><span class="line"> </span><br><span class="line">// 
再次写入数据用于后续第二次创建快照</span><br><span class="line">root@csi-rbd-demo-pod:# cd /var/lib/www/html;mkdir test;echo &quot;abc&quot; &gt; test/demo.txt;echo &quot;abc&quot; &gt;&gt; /var/lib/www/html/demo/test.txt</span><br><span class="line">root@csi-rbd-demo-pod:/var/lib/www/html# cat test/demo.txt</span><br><span class="line">abc</span><br><span class="line">root@csi-rbd-demo-pod:/var/lib/www/html# cat demo/test.txt</span><br><span class="line">snapshot test</span><br><span class="line">abc</span><br></pre></td></tr></table></figure>
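<p>One prerequisite worth noting (my note, not part of the original steps): the PVC resize above only succeeds when the StorageClass opts in to expansion. A minimal sketch of the relevant field, assuming the <code>csi-rbd-sc</code> class used by the PVCs above:</p>

```yaml
# PVC expansion is rejected unless the StorageClass allows it.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com   # Ceph CSI RBD driver
allowVolumeExpansion: true      # required for the `storage: 100Gi` edit to apply
```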

<h5 id="4）同一个pvc重复创建快照："><a href="#4）同一个pvc重复创建快照：" class="headerlink" title="4）同一个pvc重复创建快照："></a>4）同一个pvc重复创建快照：</h5><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span 
class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br></pre></td><td class="code"><pre><span class="line">4.1)再次对rbd-pvc创建快照</span><br><span class="line"><span class="meta">#</span><span class="bash"> cat snapshot-1.yaml</span></span><br><span class="line">---</span><br><span class="line">apiVersion: snapshot.storage.k8s.io/v1beta1</span><br><span class="line">kind: VolumeSnapshot</span><br><span class="line">metadata:</span><br><span class="line">  name: rbd-pvc-snapshot-1</span><br><span class="line">spec:</span><br><span class="line">  volumeSnapshotClassName: csi-rbdplugin-snapclass</span><br><span class="line">  source:</span><br><span 
class="line">    persistentVolumeClaimName: rbd-pvc</span><br><span class="line"> </span><br><span class="line"><span class="meta">#</span><span class="bash"> kubectl apply -f snapshot-1.yaml</span></span><br><span class="line"><span class="meta">#</span><span class="bash"> kubectl get VolumeSnapshot rbd-pvc-snapshot-1</span></span><br><span class="line">NAME                 READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS             SNAPSHOTCONTENT                                    CREATIONTIME   AGE</span><br><span class="line">rbd-pvc-snapshot-1   true         rbd-pvc                             100Gi         csi-rbdplugin-snapclass   snapcontent-b82dceb0-7ba6-4a3e-88ab-2220b729d85f   18h            18h</span><br><span class="line">4.2)基于rbd-pvc-snapshot-1快照恢复pvc</span><br><span class="line"><span class="meta">#</span><span class="bash"> cat pvc-restore-1.yaml</span></span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: PersistentVolumeClaim</span><br><span class="line">metadata:</span><br><span class="line">  name: rbd-pvc-restore-1</span><br><span class="line">spec:</span><br><span class="line">  storageClassName: csi-rbd-sc</span><br><span class="line">  dataSource:</span><br><span class="line">    name: rbd-pvc-snapshot-1</span><br><span class="line">    kind: VolumeSnapshot</span><br><span class="line">    apiGroup: snapshot.storage.k8s.io</span><br><span class="line">  accessModes:</span><br><span class="line">    - ReadWriteOnce</span><br><span class="line">  resources:</span><br><span class="line">    requests:</span><br><span class="line">      storage: 100Gi</span><br><span class="line"> </span><br><span class="line"><span class="meta">#</span><span class="bash"> kubectl apply -f pvc-restore-1.yaml</span></span><br><span class="line">4.3)创建pod引用rbd-pvc-restore-1恢复的pvc</span><br><span class="line"><span class="meta">#</span><span class="bash"> cat 
pod-restore-1.yaml</span></span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Pod</span><br><span class="line">metadata:</span><br><span class="line">  name: csi-rbd-restore-demo-pod-1</span><br><span class="line">spec:</span><br><span class="line">  containers:</span><br><span class="line">    - name: web-server</span><br><span class="line">      image: nginx</span><br><span class="line">      volumeMounts:</span><br><span class="line">        - name: mypvc</span><br><span class="line">          mountPath: /var/lib/www/html</span><br><span class="line">  volumes:</span><br><span class="line">    - name: mypvc</span><br><span class="line">      persistentVolumeClaim:</span><br><span class="line">        claimName: rbd-pvc-restore-1</span><br><span class="line">        readOnly: false</span><br><span class="line"> </span><br><span class="line"><span class="meta">#</span><span class="bash"> kubectl apply -f pod-restore-1.yaml</span></span><br><span class="line">NAME                         READY   STATUS    RESTARTS   AGE</span><br><span class="line">csi-rbd-restore-demo-pod-1   1/1     Running   0          18h</span><br><span class="line"><span class="meta">#</span><span class="bash"> kubectl <span class="built_in">exec</span> -ti csi-rbd-restore-demo-pod-1 -- bash</span></span><br><span class="line">root@csi-rbd-restore-demo-pod-1:/# df -h</span><br><span class="line">Filesystem               Size  Used Avail Use% Mounted on</span><br><span class="line">overlay                  199G  7.4G  192G   4% /</span><br><span class="line">tmpfs                     64M     0   64M   0% /dev</span><br><span class="line">tmpfs                    7.8G     0  7.8G   0% /sys/fs/cgroup</span><br><span class="line">/dev/mapper/centos-root  199G  7.4G  192G   4% /etc/hosts</span><br><span class="line">shm                       64M     0   64M   0% /dev/shm</span><br><span class="line">/dev/rbd4                 99G  6.8M   
99G   1% /var/lib/www/html</span><br><span class="line">tmpfs                    7.8G   12K  7.8G   1% /run/secrets/kubernetes.io/serviceaccount</span><br><span class="line">tmpfs                    7.8G     0  7.8G   0% /proc/acpi</span><br><span class="line">tmpfs                    7.8G     0  7.8G   0% /proc/scsi</span><br><span class="line">tmpfs                    7.8G     0  7.8G   0% /sys/firmware</span><br><span class="line">root@csi-rbd-restore-demo-pod-1:/# cd /var/lib/www/html</span><br><span class="line">root@csi-rbd-restore-demo-pod-1:/var/lib/www/html# cat demo/test.txt</span><br><span class="line">snapshot test</span><br><span class="line">abc</span><br><span class="line">root@csi-rbd-restore-demo-pod-1:/var/lib/www/html# cat test/demo.txt</span><br><span class="line">abc</span><br><span class="line"> </span><br><span class="line">// 至此验证扩容后的pvc，二次创建的快照恢复数据功能正常</span><br><span class="line"> </span><br><span class="line">// 查看第一个创建的快照中是否有后续添加的文件数据,如下数据还是第一个快照创建时数据</span><br><span class="line">[root@cka-node-01 rbd]# kubectl exec -ti csi-rbd-restore-demo-pod -- bash</span><br><span class="line">root@csi-rbd-restore-demo-pod:/# df -h</span><br><span class="line">Filesystem               Size  Used Avail Use% Mounted on</span><br><span class="line">overlay                  199G  7.4G  192G   4% /</span><br><span class="line">tmpfs                     64M     0   64M   0% /dev</span><br><span class="line">tmpfs                    7.8G     0  7.8G   0% /sys/fs/cgroup</span><br><span class="line">/dev/mapper/centos-root  199G  7.4G  192G   4% /etc/hosts</span><br><span class="line">shm                       64M     0   64M   0% /dev/shm</span><br><span class="line">/dev/rbd3                976M  2.6M  958M   1% /var/lib/www/html</span><br><span class="line">tmpfs                    7.8G   12K  7.8G   1% /run/secrets/kubernetes.io/serviceaccount</span><br><span class="line">tmpfs                    7.8G     0  7.8G   0% /proc/acpi</span><br><span 
class="line">tmpfs                    7.8G     0  7.8G   0% /proc/scsi</span><br><span class="line">tmpfs                    7.8G     0  7.8G   0% /sys/firmware</span><br><span class="line">root@csi-rbd-restore-demo-pod:/# cd /var/lib/www/html</span><br><span class="line">root@csi-rbd-restore-demo-pod:/var/lib/www/html# cat demo/test.txt</span><br><span class="line">snapshot test</span><br><span class="line">root@csi-rbd-restore-demo-pod:/var/lib/www/html# ls</span><br><span class="line">demo  lost+found</span><br></pre></td></tr></table></figure>

<p><em>END</em></p>
 
      <!-- reward -->
      
    </div>
    

    <!-- copyright -->
    
    <footer class="article-footer">
       
  <ul class="article-tag-list" itemprop="keywords"><li class="article-tag-list-item"><a class="article-tag-list-link" href="/tags/k8s/" rel="tag">k8s</a></li></ul>

    </footer>
  </div>

    
 
   
</article>

    
    <article
  id="post-k8s/Kubectl远程连接集群"
  class="article article-type-post"
  itemscope
  itemprop="blogPost"
  data-scroll-reveal
>
  <div class="article-inner">
    
    <header class="article-header">
       
<h2 itemprop="name">
  <a class="article-title" href="/2020/11/11/k8s/Kubectl%E8%BF%9C%E7%A8%8B%E8%BF%9E%E6%8E%A5%E9%9B%86%E7%BE%A4/"
    >Connecting to a Cluster Remotely with Kubectl</a> 
</h2>
 

    </header>
     
    <div class="article-meta">
      <a href="/2020/11/11/k8s/Kubectl%E8%BF%9C%E7%A8%8B%E8%BF%9E%E6%8E%A5%E9%9B%86%E7%BE%A4/" class="article-date">
  <time datetime="2020-11-10T16:00:00.000Z" itemprop="datePublished">2020-11-11</time>
</a> 
  <div class="article-category">
    <a class="article-category-link" href="/categories/k8s/">k8s</a>
  </div>
   
    </div>
      
    <div class="article-entry" itemprop="articleBody">
       
  <h1 id="Kubectl远程连接集群"><a href="#Kubectl远程连接集群" class="headerlink" title="Kubectl远程连接集群"></a>Connecting to a Cluster Remotely with Kubectl</h1><p>Last updated: 2020-01-15 14:08:31</p>
<p><a target="_blank" rel="noopener" href="https://github.com/tencentyun/qcloud-documents/blob/master/product/%E8%AE%A1%E7%AE%97%E4%B8%8E%E7%BD%91%E7%BB%9C/%E5%AE%B9%E5%99%A8%E6%9C%8D%E5%8A%A1/%E6%8E%A7%E5%88%B6%E5%8F%B0%E6%8C%87%E5%8D%97%EF%BC%88%E6%96%B0%E7%89%88%EF%BC%89/%E9%9B%86%E7%BE%A4%E7%AE%A1%E7%90%86/%E8%BF%9E%E6%8E%A5%E9%9B%86%E7%BE%A4.md"> Edit on GitHub </a><a target="_blank" rel="noopener" href="https://main.qcloudimg.com/raw/document/product/pdf/457_31697_cn.pdf"> View PDF</a></p>
<h2 id="本页目录："><a href="#本页目录：" class="headerlink" title="本页目录："></a>On this page:</h2><ul>
<li><a target="_blank" rel="noopener" href="https://cloud.tencent.com/document/product/457/32191#.E6.93.8D.E4.BD.9C.E5.9C.BA.E6.99.AF">Scenario</a></li>
<li><a target="_blank" rel="noopener" href="https://cloud.tencent.com/document/product/457/32191#.E5.89.8D.E6.8F.90.E6.9D.A1.E4.BB.B6">Prerequisites</a></li>
<li>Steps<ul>
<li><a target="_blank" rel="noopener" href="https://cloud.tencent.com/document/product/457/32191#.E5.AE.89.E8.A3.85-kubectl-.E5.B7.A5.E5.85.B7.3Cspan-id.3D.22installkubectl.22.3E.3C.2Fspan.3E">Install kubectl</a></li>
<li><a target="_blank" rel="noopener" href="https://cloud.tencent.com/document/product/457/32191#.E9.85.8D.E7.BD.AE-kubeconfig">Configure the kubeconfig</a></li>
<li><a target="_blank" rel="noopener" href="https://cloud.tencent.com/document/product/457/32191#.E8.AE.BF.E9.97.AE-kubernetes-.E9.9B.86.E7.BE.A4">Access the Kubernetes cluster</a></li>
</ul>
</li>
<li>Notes<ul>
<li><a target="_blank" rel="noopener" href="https://cloud.tencent.com/document/product/457/32191#kubectl-.E5.91.BD.E4.BB.A4.E8.A1.8C.E4.BB.8B.E7.BB.8D">About the kubectl command line</a></li>
</ul>
</li>
</ul>
<h2 id="操作场景"><a href="#操作场景" class="headerlink" title="操作场景"></a>Scenario</h2><p>You can connect to a TKE cluster from a local client machine with kubectl, the Kubernetes command-line tool. This document walks you through connecting to a cluster.</p>
<h2 id="前提条件"><a href="#前提条件" class="headerlink" title="前提条件"></a>Prerequisites</h2><p>Install curl first.<br>Then pick the download method for kubectl that matches your operating system:</p>
<blockquote>
<p>Note:</p>
<p>Replace "v1.8.13" in the commands below with the kubectl version your workloads require.</p>
</blockquote>
<ul>
<li><p>Mac OS X</p>
<p>Run the following command to download kubectl:</p>
<figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.13/bin/darwin/amd64/kubectl</span><br></pre></td></tr></table></figure>
</li>
<li><p>Linux</p>
<p>Run the following command to download kubectl:</p>
<figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.13/bin/linux/amd64/kubectl</span><br></pre></td></tr></table></figure>
</li>
<li><p>Windows</p>
<p>Run the following command to download kubectl:</p>
<figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.13/bin/windows/amd64/kubectl.exe</span><br></pre></td></tr></table></figure>

</li>
</ul>
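<p>The three download commands above differ only in the platform segment of the URL. A parameterized sketch (same pinned version as the doc's example; substitute the version and platform you actually need):</p>

```shell
# Build the kubectl download URL from a version and platform.
VERSION="v1.8.13"        # replace with the version your cluster requires
PLATFORM="linux/amd64"   # darwin/amd64 and windows/amd64 also exist
URL="https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/${PLATFORM}/kubectl"
echo "${URL}"
# curl -LO "${URL}"      # uncomment to actually download
```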
<h2 id="操作步骤"><a href="#操作步骤" class="headerlink" title="操作步骤"></a>Steps</h2><h3 id="安装-Kubectl-工具"><a href="#安装-Kubectl-工具" class="headerlink" title="安装 Kubectl 工具"></a>Install kubectl</h3><ol>
<li><p>Install kubectl by following <a target="_blank" rel="noopener" href="https://kubernetes.io/docs/user-guide/prereqs/">Installing and Setting up kubectl</a>.</p>
<blockquote>
<p>Note:</p>
<ul>
<li>If kubectl is already installed, skip this step.</li>
<li>This step uses Linux as the example.</li>
</ul>
</blockquote>
</li>
<li><p>Run the following commands to make the binary executable and move it onto your PATH.</p>
<figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">chmod +x ./kubectl</span><br><span class="line">sudo mv ./kubectl /usr/local/bin/kubectl</span><br></pre></td></tr></table></figure>
</li>
<li><p>Run the following command to verify the installation.</p>
<figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl version</span><br></pre></td></tr></table></figure>

<p>Output similar to the following version information indicates a successful installation.</p>
<figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">Client Version: version.Info&#123;Major:&quot;1&quot;, Minor:&quot;5&quot;, GitVersion:&quot;v1.5.2&quot;, GitCommit:&quot;08e099554f3c31f6e6f07b448ab3ed78d0520507&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2017-01-12T04:57:25Z&quot;, GoVersion:&quot;go1.7.4&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;&#125;</span><br></pre></td></tr></table></figure>

</li>
</ol>
<h3 id="配置-Kubeconfig"><a href="#配置-Kubeconfig" class="headerlink" title="配置 Kubeconfig"></a>Configure the Kubeconfig</h3><ol>
<li><p>Log in to the container service console and choose <a target="_blank" rel="noopener" href="https://console.cloud.tencent.com/tke2/cluster?rid=4">Clusters</a> in the left sidebar to open the cluster management page.</p>
</li>
<li><p>Click the <strong>ID/name</strong> of the cluster you want to connect to, opening its details page.</p>
</li>
<li><p>Choose Basic Information in the left sidebar. On the "Basic Information" page, the "Cluster APIServer Info" section shows the cluster's endpoint, public/private network access status, kubeconfig credential, and other details, as shown in the figure below:</p>
</li>
</ol>
<ul>
<li><p><strong>Endpoint</strong>: the cluster APIServer address. Note that this address cannot be accessed by copy-pasting it into a browser.</p>
</li>
<li><p><strong>Access entry</strong>: configure it according to your actual needs.</p>
<ul>
<li><strong>Public network access</strong>: off by default. Enabling it <strong>exposes the cluster apiserver to the public Internet, so proceed with caution</strong>. You must also configure source authorization, which denies everything by default; you can allow a single IP or a CIDR block, and allowing all sources with <code>0.0.0.0/0</code> is strongly discouraged.</li>
<li><strong>Private network access</strong>: off by default. Enabling it requires choosing a subnet; once enabled, an IP address is allocated from that subnet.</li>
</ul>
</li>
<li><p><strong>Kubeconfig</strong>: the cluster's access credential, which can be copied or downloaded.</p>
</li>
</ul>
<ol start="4">
<li><p>Configure the cluster credential as appropriate.</p>
<p>Before configuring, check whether this client already holds access credentials for any cluster:</p>
<ul>
<li><p><strong>No</strong>, i.e. <code>~/.kube/config</code> is empty: copy the kubeconfig credential you obtained and paste it into <code>~/.kube/config</code>. If the client has no <code>~/.kube/config</code> file, simply create it.</p>
</li>
<li><p><strong>Yes</strong>: download the kubeconfig you obtained to a location of your choice, then run the following commands in order to merge the configs of multiple clusters.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">KUBECONFIG&#x3D;~&#x2F;.kube&#x2F;config:~&#x2F;Downloads&#x2F;cls-3jju4zdc-config kubectl config view --merge --flatten &gt; ~&#x2F;.kube&#x2F;config</span><br></pre></td></tr></table></figure>

<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">export KUBECONFIG&#x3D;~&#x2F;.kube&#x2F;config</span><br></pre></td></tr></table></figure>

<p>Here, <code>~/Downloads/cls-3jju4zdc-config</code> is the file path of this cluster's kubeconfig; replace it with the actual path after downloading.</p>
</li>
</ul>
</li>
</ol>
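<p>A caution about the merge command above (my note, not from the original doc): redirecting the merged output straight back into <code>~/.kube/config</code> can clobber the file, because the shell truncates the redirection target before <code>kubectl</code> ever reads it. Writing to a temporary file first is safer. A cluster-free demonstration of the underlying shell pitfall:</p>

```shell
# `cmd file > file` truncates `file` before cmd opens it for reading --
# the same hazard as merging kubeconfigs back into ~/.kube/config in place.
echo "original contents" > /tmp/kubeconfig-demo
cat /tmp/kubeconfig-demo > /tmp/kubeconfig-demo 2>/dev/null || true
[ -s /tmp/kubeconfig-demo ] && echo "survived" || echo "clobbered"

# Safer merge pattern (requires kubectl; paths as in the doc):
#   KUBECONFIG=~/.kube/config:~/Downloads/cls-3jju4zdc-config \
#     kubectl config view --merge --flatten > /tmp/merged-config
#   mv /tmp/merged-config ~/.kube/config
```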
<h3 id="访问-Kubernetes-集群"><a href="#访问-Kubernetes-集群" class="headerlink" title="访问 Kubernetes 集群"></a>Access the Kubernetes Cluster</h3><ol>
<li><p>After configuring the kubeconfig, run the following commands in order to view and switch contexts so you can access this cluster.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl config get-contexts</span><br></pre></td></tr></table></figure>

<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl config use-context cls-3jju4zdc-context-default</span><br></pre></td></tr></table></figure>
</li>
<li><p>Run the following command to test whether the cluster can be accessed.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get node</span><br></pre></td></tr></table></figure>

<p>If you cannot connect, check whether the public or private network access entry has been enabled, and make sure the client is inside the required network environment.</p>
</li>
</ol>
<h2 id="相关说明"><a href="#相关说明" class="headerlink" title="相关说明"></a>Notes</h2><h3 id="Kubectl-命令行介绍"><a href="#Kubectl-命令行介绍" class="headerlink" title="Kubectl 命令行介绍"></a>About the kubectl Command Line</h3><p>kubectl is a command-line tool for operating Kubernetes clusters. This section covers kubectl syntax and common operations, with examples. For details on every command, including all main commands and subcommands, see the <a target="_blank" rel="noopener" href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl/">kubectl reference docs</a> or run <code>kubectl help</code>; for installation, see <a target="_blank" rel="noopener" href="https://cloud.tencent.com/document/product/457/32191#installKubectl">Install kubectl</a>.</p>
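<p>For reference, the general syntax described in the kubectl documentation follows this shape (a synopsis with illustrative instantiations):</p>

```plain
kubectl [command] [TYPE] [NAME] [flags]

# e.g.
kubectl get pods my-pod -o wide
kubectl describe deployment/nginx-1
kubectl delete -f ./my-manifest.yaml
```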
 
      <!-- reward -->
      
    </div>
    

    <!-- copyright -->
    
    <footer class="article-footer">
       
  <ul class="article-tag-list" itemprop="keywords"><li class="article-tag-list-item"><a class="article-tag-list-link" href="/tags/k8s/" rel="tag">k8s</a></li></ul>

    </footer>
  </div>

    
 
   
</article>

    
    <article
  id="post-k8s/Kubernetes 上对应用程序进行故障排除的 6 个技巧"
  class="article article-type-post"
  itemscope
  itemprop="blogPost"
  data-scroll-reveal
>
  <div class="article-inner">
    
    <header class="article-header">
       
<h2 itemprop="name">
  <a class="article-title" href="/2020/11/11/k8s/Kubernetes%20%E4%B8%8A%E5%AF%B9%E5%BA%94%E7%94%A8%E7%A8%8B%E5%BA%8F%E8%BF%9B%E8%A1%8C%E6%95%85%E9%9A%9C%E6%8E%92%E9%99%A4%E7%9A%84%206%20%E4%B8%AA%E6%8A%80%E5%B7%A7/"
    >6 Tips for Troubleshooting Applications on Kubernetes</a> 
</h2>
 

    </header>
     
    <div class="article-meta">
      <a href="/2020/11/11/k8s/Kubernetes%20%E4%B8%8A%E5%AF%B9%E5%BA%94%E7%94%A8%E7%A8%8B%E5%BA%8F%E8%BF%9B%E8%A1%8C%E6%95%85%E9%9A%9C%E6%8E%92%E9%99%A4%E7%9A%84%206%20%E4%B8%AA%E6%8A%80%E5%B7%A7/" class="article-date">
  <time datetime="2020-11-10T16:00:00.000Z" itemprop="datePublished">2020-11-11</time>
</a> 
  <div class="article-category">
    <a class="article-category-link" href="/categories/k8s/">k8s</a>
  </div>
   
    </div>
      
    <div class="article-entry" itemprop="articleBody">
       
  <h1 id="Kubernetes-上对应用程序进行故障排除的-6-个技巧"><a href="#Kubernetes-上对应用程序进行故障排除的-6-个技巧" class="headerlink" title="Kubernetes 上对应用程序进行故障排除的 6 个技巧"></a>6 Tips for Troubleshooting Applications on Kubernetes</h1><p>After migrating from Docker to Docker Swarm and then to Kubernetes, and dealing with all the API changes over the years, I genuinely enjoy finding and fixing problems in deployments.</p>
<p>Today I'll share the six troubleshooting tips I find most useful, plus a few extra tricks.</p>
<p>kubectl: the "Swiss Army knife"</p>
<p>kubectl is our Swiss Army knife. We reach for it whenever something goes wrong, and knowing how to use it at that moment matters. Let's walk through six "real-world cases" and see how to use it when things break.</p>
<p>The scenarios: my YAML was accepted, but my service didn't start; or it started, but doesn't work properly.</p>
<h3 id="1-kubectl-get-deployment-pods"><a href="#1-kubectl-get-deployment-pods" class="headerlink" title="1.kubectl get deployment/pods"></a>1.kubectl get deployment/pods</h3><p>The reason this command is so valuable is that it shows very useful information without printing a wall of output.<br>If you use Deployments for your workloads, you have a few variants:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">kubectl get deploy</span><br><span class="line">kubectl get deploy -n &lt;namespace&gt;</span><br><span class="line">kubectl get deploy --all-namespaces   # or &quot;-A&quot;</span><br></pre></td></tr></table></figure>

<p>Ideally you want to see 1/1, or its equivalents such as 2/2, and so on. This shows your deployment was accepted and a rollout has been attempted.</p>
<p>Next, you may want to look at <code>kubectl get pod</code> to see whether the Deployment's backing Pods started correctly.</p>
<h3 id="2-kubectl-get-events"><a href="#2-kubectl-get-events" class="headerlink" title="2. kubectl get events"></a>2. kubectl get events</h3><p>I'm surprised how often I have to explain this little trick to people struggling with Kubernetes. The command prints the events in a given namespace and is ideal for finding critical problems, such as crashing Pods or container images that cannot be pulled.</p>
<p>Events in Kubernetes are unsorted, so you'll want to add the following sort flag, taken from the OpenFaaS docs.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get events --sort-by=.metadata.creationTimestamp</span><br></pre></td></tr></table></figure>

<p>A close companion to <code>kubectl get events</code> is <code>kubectl describe</code>; just like get deploy/pod, it works with the object's name:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl describe deploy/figlet -n openfaas</span><br></pre></td></tr></table></figure>

<p>You get very detailed information here. You can describe most things, including Nodes, which will show Pods failing to start because of resource limits or other problems.</p>
<h3 id="3-kubectl-logs"><a href="#3-kubectl-logs" class="headerlink" title="3. kubectl logs"></a>3. kubectl logs</h3><p>Everyone uses this command all the time, but many use it the wrong way.</p>
<p>If you have a Deployment, say cert-manager in the cert-manager namespace, many people think they first have to look up the Pod's long (unique) name and pass that as the argument. Wrong.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl logs deploy/cert-manager -n cert-manager</span><br></pre></td></tr></table></figure>

<p>To follow the logs, add <code>-f</code>:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl logs deploy/cert-manager -n cert-manager -f</span><br></pre></td></tr></table></figure>

<p>You can combine all three.</p>
<p>If your Deployment or Pods have labels, you can use <code>-l app=name</code> (or any other label set) to attach to the logs of one or more matching Pods.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl logs -l app=nginx</span><br></pre></td></tr></table></figure>



<p>There are tools such as stern and kail that help you match patterns and save some typing, but I find them distracting.</p>
<h3 id="4-kubectl-get-o-yaml"><a href="#4-kubectl-get-o-yaml" class="headerlink" title="4.kubectl get -o yaml"></a>4.kubectl get -o yaml</h3><p>You'll need this soon after you start consuming YAML generated by another project or by tools such as Helm. It's also useful for checking the image version running in production, or an annotation you set somewhere.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl run nginx-1 --image=nginx --port=80 --restart=Always</span><br></pre></td></tr></table></figure>

<p>Output the YAML:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get deploy/nginx-1 -o yaml</span><br></pre></td></tr></table></figure>

<p>Now we know exactly what was created. We can also add <code>--export</code> to save the YAML locally, edit it, and apply it again.</p>
<p>Another option for editing YAML live is <code>kubectl edit</code>; if vim baffles you, prefix the command with <code>VISUAL=nano</code> to use that simpler editor.</p>
<h3 id="5-kubectl-scale-您打开和关闭它了吗？"><a href="#5-kubectl-scale-您打开和关闭它了吗？" class="headerlink" title="5. kubectl scale  您打开和关闭它了吗？"></a>5. kubectl scale: have you turned it off and on again?</h3><p>kubectl scale can shrink a Deployment and its Pods down to zero replicas, effectively killing them all. Scaling back to 1/1 creates a fresh Pod, restarting your application.</p>
<p>The syntax is simple, so you can restart your code and test again.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">kubectl scale deploy/nginx-1 --replicas=0</span><br><span class="line">kubectl scale deploy/nginx-1 --replicas=1</span><br></pre></td></tr></table></figure>

<h3 id="6-Port-forwarding"><a href="#6-Port-forwarding" class="headerlink" title="6. Port forwarding"></a>6. Port forwarding</h3><p>这是一个很实用的技巧：通过 kubectl 进行端口转发，可以把本地或远程集群中的一项服务映射到自己计算机的任意端口上访问，而无需将其暴露到 Internet 上。</p>
<p>以下是在本地访问Nginx部署的示例：</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl port-forward deploy/nginx-1 8080:80</span><br></pre></td></tr></table></figure>

<p>有人认为端口转发只适用于 Deployment 或 Pod，其实不然。Service 同样可以转发，而且通常是首选对象，因为它更接近生产集群中的访问配置。</p>
<p>如果您确实想把服务暴露到 Internet 上，通常会使用 LoadBalancer 类型的 Service，或者运行 kubectl expose：</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer</span><br></pre></td></tr></table></figure>
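<p>kubectl expose 在集群里创建的其实就是一个 Service 对象。下面是一份与上面命令大致等价的 Service 清单草稿（名称、标签和端口均为示例值，需按实际 Deployment 调整）：</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-1
spec:
  type: LoadBalancer        # 由云厂商分配外部负载均衡器
  selector:
    app: nginx-1            # 需与 Deployment 的 Pod 标签一致（示例值）
  ports:
  - port: 80                # Service 对外端口
    targetPort: 80          # 容器端口
```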



<p>技巧就介绍到这里。希望这 6 条命令和技巧对您有用，现在可以在真实的集群上动手试一试了。</p>
<p>来源：<a target="_blank" rel="noopener" href="https://www.mindg.cn/?p=2578">https://www.mindg.cn/?p=2578</a></p>
 
      <!-- reward -->
      
    </div>
    

    <!-- copyright -->
    
    <footer class="article-footer">
       
  <ul class="article-tag-list" itemprop="keywords"><li class="article-tag-list-item"><a class="article-tag-list-link" href="/tags/k8s/" rel="tag">k8s</a></li></ul>

    </footer>
  </div>

    
 
   
</article>

    
    <article
  id="post-k8s/Kubernetes 临时存储需要限制吗"
  class="article article-type-post"
  itemscope
  itemprop="blogPost"
  data-scroll-reveal
>
  <div class="article-inner">
    
    <header class="article-header">
       
<h2 itemprop="name">
  <a class="article-title" href="/2020/11/11/k8s/Kubernetes%20%E4%B8%B4%E6%97%B6%E5%AD%98%E5%82%A8%E9%9C%80%E8%A6%81%E9%99%90%E5%88%B6%E5%90%97/"
    >Kubernetes 临时存储需要限制吗</a> 
</h2>
 

    </header>
     
    <div class="article-meta">
      <a href="/2020/11/11/k8s/Kubernetes%20%E4%B8%B4%E6%97%B6%E5%AD%98%E5%82%A8%E9%9C%80%E8%A6%81%E9%99%90%E5%88%B6%E5%90%97/" class="article-date">
  <time datetime="2020-11-10T16:00:00.000Z" itemprop="datePublished">2020-11-11</time>
</a> 
  <div class="article-category">
    <a class="article-category-link" href="/categories/k8s/">k8s</a>
  </div>
   
    </div>
      
    <div class="article-entry" itemprop="articleBody">
       
  <h1 id="Kubernetes-临时存储需要限制吗"><a href="#Kubernetes-临时存储需要限制吗" class="headerlink" title="Kubernetes 临时存储需要限制吗"></a>Kubernetes 临时存储需要限制吗</h1><h2 id="临时存储简介"><a href="#临时存储简介" class="headerlink" title="临时存储简介"></a>临时存储简介</h2><p><code>Node节点</code>通常还可以具有本地的临时性存储，由本地挂载的<code>可写入设备</code>或者 <code>RAM</code>来提供支持。<code>临时（Ephemeral）</code> 意味着对所存储的数据不提供长期可用性的保证。</p>
<p>Pods 通常可以使用临时性本地存储来实现缓冲区、保存日志等功能。kubelet 可以为使用本地临时存储的 Pods 提供这种存储空间，允许后者使用 <code>emptyDir</code> 类型的卷将其挂载到容器中。</p>
<p>kubelet 也使用此类存储来保存<code>节点层面的容器日志</code>， <code>容器镜像文件</code>、<code>以及运行中容器的可写入层</code>。</p>
<h2 id="临时存储有哪些"><a href="#临时存储有哪些" class="headerlink" title="临时存储有哪些"></a>临时存储有哪些</h2><ul>
<li>本地临时存储（local ephemeral storage）</li>
<li>emptyDir</li>
</ul>
<p><code>本地临时存储（local ephemeral storage）</code>：Kubernetes 在 1.8 版本中引入了一种类似于 CPU、内存的新资源类型：ephemeral-storage，并在 1.10 版本的 kubelet 中默认开启该特性。ephemeral-storage 用于管理和调度 Kubernetes 中运行的应用所使用的临时存储。</p>
<p><code>emptyDir</code>：emptyDir 类型Volume在Pod分配到Node上时被创建，Kubernetes会在Node节点上自动分配一个目录，因此无需指定宿主机Node上对应的目录文件。这个目录初始内容为空，当Pod从Node上移除时，emptyDir中的数据会被永久删除。</p>
<blockquote>
<p>注释：容器的 <code>crashing</code> 事件并不会导致 <code>emptyDir</code> 中的数据被删除。</p>
</blockquote>
<h2 id="临时存储默认存储在哪个位置"><a href="#临时存储默认存储在哪个位置" class="headerlink" title="临时存储默认存储在哪个位置?"></a>临时存储默认存储在哪个位置?</h2><p>在每个 Kubernetes <code>Node节点</code> 上，kubelet 的默认根目录 <code>/var/lib/kubelet</code> 和日志目录 <code>/var/log</code> 都位于节点的系统分区上。这个分区同时也会被 Pod 的 <code>EmptyDir</code> 类型的 <code>volume</code>、<code>容器日志</code>、<code>镜像层</code>、<code>容器的可写层</code>所占用。<code>ephemeral-storage</code> 便是对系统分区进行管理。</p>
<h2 id="临时存储需要限制吗？"><a href="#临时存储需要限制吗？" class="headerlink" title="临时存储需要限制吗？"></a>临时存储需要限制吗？</h2><p>答案是<code>需要限制</code>。从上文可知，临时存储默认位于 <code>/var/lib/kubelet</code> 中，而 <code>/var</code> 一般在系统根分区，根分区磁盘通常不大（阿里云 ECS 系统盘默认为 40G），因此必须加以限制，为系统预留足够的磁盘空间来支持正常运行。上文也提到，临时存储还可以使用 RAM，那就更应该限制：内存是一种非常有限的资源。</p>
<h2 id="Node节点设置临时存储使用大小"><a href="#Node节点设置临时存储使用大小" class="headerlink" title="Node节点设置临时存储使用大小"></a>Node节点设置临时存储使用大小</h2><p>Node节点上的 <code>kubelet</code> 组件启动时，kubelet会统计当前节点默认 <code>/var/lib/kubelet</code> 所在的分区可分配的磁盘资源，或者你可以覆盖节点上kubelet的配置来自定义可分配的资源。创建Pod时会根据存储需求调度到满足存储的节点，Pod使用超过限制的存储时会对其做<code>驱逐</code>处理来保证不会耗尽节点上的磁盘空间。</p>
<blockquote>
<p>注意：如果运行时指定了别的独立的分区，比如修改了docker的镜像层和容器可写层的存储位置(默认是/var/lib/docker)所在的分区，将不再将其计入 <code>ephemeral-storage</code> 的消耗。</p>
</blockquote>
<p>kubelet 如下配置，限制Node节点上临时存储能使用多大磁盘空间</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"># Node 资源保留</span><br><span class="line">  nodefs.available: 10% # 给 &#x2F;var&#x2F;lib&#x2F;kubelet 所在分区保留 10% 磁盘空间</span><br><span class="line">  nodefs.inodesFree: 5% # 给 &#x2F;var&#x2F;lib&#x2F;kubelet 所在分区保留 5% inodes</span><br></pre></td></tr></table></figure>
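<p>上面的百分比即 kubelet 的硬驱逐阈值。如果以 KubeletConfiguration 文件方式配置 kubelet，大致写法如下（示例片段，阈值为假设值，需按节点磁盘情况调整）：</p>

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  nodefs.available: "10%"    # /var/lib/kubelet 所在分区可用空间低于 10% 时触发驱逐
  nodefs.inodesFree: "5%"    # 该分区可用 inodes 低于 5% 时触发驱逐
  imagefs.available: "15%"   # 镜像/容器可写层分区的可用空间阈值
```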

<h2 id="临时存储限制使用举例"><a href="#临时存储限制使用举例" class="headerlink" title="临时存储限制使用举例"></a>临时存储限制使用举例</h2><h3 id="限制磁盘本地临时存储"><a href="#限制磁盘本地临时存储" class="headerlink" title="限制磁盘本地临时存储"></a>限制磁盘本地临时存储</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line">apiVersion: v1</span><br><span class="line">kind: Pod</span><br><span class="line">metadata:</span><br><span class="line">  name: test-storage</span><br><span class="line">  labels:</span><br><span class="line">    app: test-storage</span><br><span class="line">spec:</span><br><span class="line">  containers:</span><br><span class="line">  - name: busybox</span><br><span class="line">    image:  busybox</span><br><span class="line">    command: [&quot;sh&quot;, &quot;-c&quot;, &quot;while true; do dd if&#x3D;&#x2F;dev&#x2F;zero of&#x3D;$(date &#39;+%s&#39;).out count&#x3D;1 bs&#x3D;30MB; sleep 1; done&quot;] # 使用dd命令持续往容器写数据</span><br><span class="line">    resources:</span><br><span class="line">      limits:</span><br><span class="line">        ephemeral-storage: 300Mi #定义存储的限制为300Mi</span><br><span class="line">      requests:</span><br><span class="line">        ephemeral-storage: 300Mi</span><br></pre></td></tr></table></figure>

<p>容器使用磁盘超过 300Mi，被 kubelet 驱逐。具体请见下图</p>
<p><img src="http://iubest.gitee.io/pic/640-1601018227635.png" alt="img"></p>
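<p>除了像上例那样逐个 Pod 声明，也可以用 LimitRange 给命名空间内的容器设置默认的 ephemeral-storage 请求与限制，防止漏配（下面的数值均为示例值）：</p>

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: ephemeral-storage-limit
spec:
  limits:
  - type: Container
    default:                    # 容器未显式声明 limits 时使用的默认值
      ephemeral-storage: 500Mi
    defaultRequest:             # 容器未显式声明 requests 时使用的默认值
      ephemeral-storage: 100Mi
```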
<h3 id="限制内存临时存储"><a href="#限制内存临时存储" class="headerlink" title="限制内存临时存储"></a>限制内存临时存储</h3><p><code>emptyDir</code> 也是一种临时存储，因此也需要限制使用。</p>
<p>在Pod级别检查临时存储使用量时，也会将 <code>emptyDir</code> 的使用量计算在内，因此如果对 emptyDir 使用过量后，也会导致该Pod被 kubelet <code>Evict</code>。</p>
<p>另外，emptyDir 本身也可以设置容量上限。下例指定 emptyDir 使用内存作为存储介质，这样用户可以获得极好的读写性能；但由于内存比较珍贵，只提供了 <code>128Mi</code> 的空间，当用户在 <code>/cache-data</code> 目录下写入超过 128Mi 后，该 Pod 会被 kubelet 驱逐。</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">  volumeMounts:</span><br><span class="line">  - mountPath: &#x2F;cache-data</span><br><span class="line">    name: cache-data-volume</span><br><span class="line">volumes:</span><br><span class="line">- emptyDir:</span><br><span class="line">    medium: Memory</span><br><span class="line">    sizeLimit: 128Mi</span><br><span class="line">  name: cache-data-volume</span><br></pre></td></tr></table></figure>

<h2 id="参考链接"><a href="#参考链接" class="headerlink" title="参考链接"></a>参考链接</h2><ul>
<li><a target="_blank" rel="noopener" href="https://kubernetes.io/zh/docs/concepts/configuration/manage-resources-containers/">https://kubernetes.io/zh/docs/concepts/configuration/manage-resources-containers/</a></li>
<li><a target="_blank" rel="noopener" href="https://developer.aliyun.com/article/594066">https://developer.aliyun.com/article/594066</a></li>
<li><a target="_blank" rel="noopener" href="https://ieevee.com/tech/2019/05/23/ephemeral-storage.html">https://ieevee.com/tech/2019/05/23/ephemeral-storage.html</a></li>
</ul>
<h2 id="热门文章推荐"><a href="#热门文章推荐" class="headerlink" title="热门文章推荐"></a>热门文章推荐</h2><ul>
<li><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzA4MzIwNTc4NQ==&mid=2247486468&idx=1&sn=b43b42bffea97cbe0e247d109fc8a60e&chksm=9ffb47f2a88ccee4e8b59958d32d233622c1d938c75397c76aae3b97513182e6d081954ca39e&scene=21#wechat_redirect">分享阿里巴巴云原生技术与实践 - KubeCon 2020 经典演讲集锦</a></li>
<li><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzA4MzIwNTc4NQ==&mid=2247486422&idx=1&sn=48473bcf5c7f9451569137f5bbdf95a3&chksm=9ffb4020a88cc93603d5b2094a9b0c198e073bfbaf4cc4b0f82ef038a488371c3e853773894a&scene=21#wechat_redirect">Kubernetes v1.19.0 正式发布！</a></li>
<li><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzA4MzIwNTc4NQ==&mid=2247486324&idx=1&sn=97fb1ca3b706056595754844fe6db5de&chksm=9ffb4082a88cc99487ed614c68afbf7549006994e5ecfacf7774aeaca9e68a4a436d54542e39&scene=21#wechat_redirect">IT运维面试问题总结-简述Etcd、Kubernetes、Lvs、HAProxy等</a></li>
<li><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzA4MzIwNTc4NQ==&mid=2247485830&idx=1&sn=8b0031a60fdbc0b080d8b4967f03b859&chksm=9ffb4270a88ccb66c9e0002d7162254c1ca392d9f6da41b5b8bfa046fd1e07dc7ed3d84856a1&scene=21#wechat_redirect">Kubernetes 升级填坑指南（一）</a></li>
<li><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzA4MzIwNTc4NQ==&mid=2247485859&idx=1&sn=c1355c41c67dcb28453158a38bf15f81&chksm=9ffb4255a88ccb430a26a3bf988a55302adff24c303fd85285372d0ab6e8fb94ae26ba3d485d&scene=21#wechat_redirect">Kubernetes v1.15.3 升级到 v1.18.5 心得</a></li>
<li><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzA4MzIwNTc4NQ==&mid=2247485814&idx=1&sn=fc3545f9d8fa7d20274195dd8409d4dd&chksm=9ffb4280a88ccb9617ebd256a9b96a062a6dbf2e23ebd9e507599c9a574b64b76f36029f9a25&scene=21#wechat_redirect">根据 PID 获取 K8S Pod名称 - 反之 POD名称 获取 PID</a></li>
</ul>
 
      <!-- reward -->
      
    </div>
    

    <!-- copyright -->
    
    <footer class="article-footer">
       
  <ul class="article-tag-list" itemprop="keywords"><li class="article-tag-list-item"><a class="article-tag-list-link" href="/tags/k8s/" rel="tag">k8s</a></li></ul>

    </footer>
  </div>

    
 
   
</article>

    
    <article
  id="post-k8s/Kubernetes 故障解决心得"
  class="article article-type-post"
  itemscope
  itemprop="blogPost"
  data-scroll-reveal
>
  <div class="article-inner">
    
    <header class="article-header">
       
<h2 itemprop="name">
  <a class="article-title" href="/2020/11/11/k8s/Kubernetes%20%E6%95%85%E9%9A%9C%E8%A7%A3%E5%86%B3%E5%BF%83%E5%BE%97/"
    >Kubernetes 故障解决心得</a> 
</h2>
 

    </header>
     
    <div class="article-meta">
      <a href="/2020/11/11/k8s/Kubernetes%20%E6%95%85%E9%9A%9C%E8%A7%A3%E5%86%B3%E5%BF%83%E5%BE%97/" class="article-date">
  <time datetime="2020-11-10T16:00:00.000Z" itemprop="datePublished">2020-11-11</time>
</a> 
  <div class="article-category">
    <a class="article-category-link" href="/categories/k8s/">k8s</a>
  </div>
   
    </div>
      
    <div class="article-entry" itemprop="articleBody">
       
  <h1 id="Kubernetes-故障解决心得"><a href="#Kubernetes-故障解决心得" class="headerlink" title="Kubernetes 故障解决心得"></a>Kubernetes 故障解决心得</h1><h3 id="故障现象"><a href="#故障现象" class="headerlink" title="故障现象"></a>故障现象</h3><p>kubelet 启动不了，通过命令 <code>journalctl -u kubelet</code> 查看日志，报 <code>Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: failed to find subsystem mount for required subsystem: pids</code></p>
<h3 id="故障分析"><a href="#故障分析" class="headerlink" title="故障分析"></a>故障分析</h3><p>根据报错，有用的信息是 <code>failed to find subsystem mount for required subsystem: pids</code>，通过命令 <code>ls -l /sys/fs/cgroup/systemd/kubepods/burstable/</code> 查看，该目录下没有 <code>pids</code> 目录。</p>
<p><code>SupportPodPidsLimit</code> 在 kubernetes <code>1.14+</code> 默认开启。SupportNodePidsLimit 在<code>1.15+</code> 默认开启。</p>
<blockquote>
<p>相关Issues：<a target="_blank" rel="noopener" href="https://github.com/kubernetes/kubernetes/issues/79046">https://github.com/kubernetes/kubernetes/issues/79046</a></p>
</blockquote>
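<p>排查时可以先在节点上确认内核是否启用了 pids cgroup 子系统（SupportPodPidsLimit 依赖它，以下命令适用于 Linux 节点）：</p>

```shell
# 列出内核支持的 cgroup 子系统，确认其中包含 pids
grep pids /proc/cgroups

# 再看 pids 子系统是否被单独挂载（cgroup v1 场景）
grep cgroup /proc/mounts | grep -w pids || echo "pids 子系统未单独挂载"
```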
<h3 id="解决方法"><a href="#解决方法" class="headerlink" title="解决方法"></a>解决方法</h3><ul>
<li>方法一：编辑 kubelet 配置文件，添加 <code>--feature-gates=SupportPodPidsLimit=false,SupportNodePidsLimit=false</code> 参数，然后再重启 kubelet 服务。</li>
<li>方法二：将系统内核升级到 <code>5+</code> 版本。</li>
</ul>
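<p>以 systemd 托管的 kubelet 为例，方法一可以通过 drop-in 文件追加该参数。下面是一份示例片段（文件路径与 <code>KUBELET_EXTRA_ARGS</code> 变量名是 kubeadm 部署的常见默认值，其他部署方式请按实际 unit 文件调整）：</p>

```ini
# /etc/systemd/system/kubelet.service.d/20-feature-gates.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--feature-gates=SupportPodPidsLimit=false,SupportNodePidsLimit=false"
```

<p>修改后执行 <code>systemctl daemon-reload &amp;&amp; systemctl restart kubelet</code> 使配置生效。</p>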
<h2 id="故障二"><a href="#故障二" class="headerlink" title="故障二"></a>故障二</h2><h3 id="故障现象-1"><a href="#故障现象-1" class="headerlink" title="故障现象"></a>故障现象</h3><p>Docker daemon oci 故障，日志报 <code>docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused &quot;process_linux.go:301: running exec setns process for init caused \&quot;exit status 40\&quot;&quot;: unknown.</code></p>
<h3 id="解决方法-1"><a href="#解决方法-1" class="headerlink" title="解决方法"></a>解决方法</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"># 清理缓存</span><br><span class="line">$ echo 1 &gt; &#x2F;proc&#x2F;sys&#x2F;vm&#x2F;drop_caches</span><br><span class="line"></span><br><span class="line"># 永久生效</span><br><span class="line">$ echo &quot;vm.min_free_kbytes&#x3D;1048576&quot; &gt;&gt; &#x2F;etc&#x2F;sysctl.conf</span><br><span class="line">$ sysctl -p</span><br><span class="line"></span><br><span class="line"># 重启 docker 服务，让 docker 应用内核设置</span><br><span class="line">$ systemctl restart docker</span><br></pre></td></tr></table></figure>

<h2 id="故障三"><a href="#故障三" class="headerlink" title="故障三"></a>故障三</h2><h3 id="报错现象"><a href="#报错现象" class="headerlink" title="报错现象"></a>报错现象</h3><p>kubelet 日志报 <code>network plugin is not ready: cni config uninitialized</code></p>
<h3 id="解决方法-2"><a href="#解决方法-2" class="headerlink" title="解决方法"></a>解决方法</h3><p>网络插件（flannel 或者 calico）没有安装或者安装失败，安装或修复对应的 CNI 网络插件后即可恢复。</p>
<h2 id="故障四"><a href="#故障四" class="headerlink" title="故障四"></a>故障四</h2><h3 id="故障现象-2"><a href="#故障现象-2" class="headerlink" title="故障现象"></a>故障现象</h3><p>kubelet 日志报 <code>Failed to connect to apiserver: the server has asked for the client to provide credentials</code></p>
<h3 id="故障分析-1"><a href="#故障分析-1" class="headerlink" title="故障分析"></a>故障分析</h3><p>从上面 kubelet 日志信息能得出，kubelet 客户端证书已过期，导致 Node节点状态处于 <code>NotReady</code>。</p>
<p>也可以通过命令 <code>openssl x509 -noout -enddate -in &#123;证书路径&#125;</code> 来查看证书到期日期。</p>
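<p>证书到期检查也可以写成脚本，配合 openssl 的 <code>-checkend</code> 选项提前告警。下面用一张自签名测试证书演示（证书路径为演示用的临时文件，实际排查时替换成真实的 kubelet 证书路径）：</p>

```shell
# 生成一张 365 天有效期的自签名测试证书，仅用于演示
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 365 -nodes -subj "/CN=demo" 2>/dev/null

# 查看证书到期日期（对应正文中的命令）
openssl x509 -noout -enddate -in /tmp/demo.crt

# 判断证书是否会在 30 天内过期：不会过期则退出码为 0
openssl x509 -noout -checkend $((30*24*3600)) -in /tmp/demo.crt \
  && echo "证书 30 天内不会过期"
```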
<h3 id="解决方法-3"><a href="#解决方法-3" class="headerlink" title="解决方法"></a>解决方法</h3><h4 id="kubeadm-部署的-Kubernetes-解决方法"><a href="#kubeadm-部署的-Kubernetes-解决方法" class="headerlink" title="kubeadm 部署的 Kubernetes 解决方法"></a>kubeadm 部署的 Kubernetes 解决方法</h4><p>kubernetes 1.15+ 版本可以直接通过命令 <code>kubeadm alpha certs renew all</code> 更新全部证书。</p>
<p>kubernetes 小于 1.15 版本的，可以参考 <code>https://github.com/yuyicai/update-kube-cert</code> 项目更新</p>
<h4 id="二进制部署的-Kubernetes-解决方法"><a href="#二进制部署的-Kubernetes-解决方法" class="headerlink" title="二进制部署的 Kubernetes 解决方法"></a>二进制部署的 Kubernetes 解决方法</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"># 删除旧的 kubelet 证书文件</span><br><span class="line">$ rm -f  &#x2F;opt&#x2F;kubernetes&#x2F;ssl&#x2F;kubelet*</span><br><span class="line"></span><br><span class="line"># 删除 kubelet kubeconfig 文件</span><br><span class="line">$ rm -f &#x2F;opt&#x2F;kubernetes&#x2F;cfg&#x2F;kubelet.kubeconfig</span><br><span class="line"></span><br><span class="line"># 重启 kubelet 服务，让 master 重新颁发客户端证书</span><br><span class="line">$ systemctl restart kubelet</span><br></pre></td></tr></table></figure>

<h2 id="参考链接"><a href="#参考链接" class="headerlink" title="参考链接"></a>参考链接</h2><ul>
<li><a target="_blank" rel="noopener" href="https://adoyle.me/Today-I-Learned/k8s/k8s-deployment.html">https://adoyle.me/Today-I-Learned/k8s/k8s-deployment.html</a></li>
</ul>
<h2 id="热门文章推荐"><a href="#热门文章推荐" class="headerlink" title="热门文章推荐"></a>热门文章推荐</h2><ul>
<li><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzA4MzIwNTc4NQ==&mid=2247486614&idx=1&sn=1518def129c34507919af4e02d7dcbae&chksm=9ffb4760a88cce764f5b98c20b86f5db08bfad330ec3461598f5bda96983aa5fe48ca3c2cd82&scene=21#wechat_redirect">Kubernetes 临时存储需要限制吗？</a></li>
<li><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzA4MzIwNTc4NQ==&mid=2247486454&idx=1&sn=5563754522286abebf9cd943dbef88be&chksm=9ffb4000a88cc9168bcb8725b678c5716346ae623ca12309f85728449d4380d72ef4d85d7f46&scene=21#wechat_redirect">Linux Used内存到底哪里去了？</a></li>
<li><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzA4MzIwNTc4NQ==&mid=2247485385&idx=1&sn=5ddf33da3670a0a189423eb6c32352f2&chksm=9ffb4c3fa88cc529e29a199af9045ca962ed792aee017ebd84c8f26dc7599db45256105788bf&scene=21#wechat_redirect">K8S故障排查指南- but volume paths are still present on disk</a></li>
<li><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzA4MzIwNTc4NQ==&mid=2247485277&idx=1&sn=d58eb0bf311643cc47b0dbf2617c22d4&chksm=9ffb4caba88cc5bd8102cdcce45861e7055c8c81e695ff0b0b279cbb988ef4769c4544946d46&scene=21#wechat_redirect">Kubernetes故障排查指南-分析容器退出状态码</a></li>
<li><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzA4MzIwNTc4NQ==&mid=2247485830&idx=1&sn=8b0031a60fdbc0b080d8b4967f03b859&chksm=9ffb4270a88ccb66c9e0002d7162254c1ca392d9f6da41b5b8bfa046fd1e07dc7ed3d84856a1&scene=21#wechat_redirect">Kubernetes 升级填坑指南（一）</a></li>
<li><a target="_blank" rel="noopener" href="http://mp.weixin.qq.com/s?__biz=MzA4MzIwNTc4NQ==&mid=2247484082&idx=1&sn=8973e0179a9cceab3751c4fee025bf2b&chksm=9ffb4944a88cc0522fbce127281849a7840b252727bece5e584c30f288f10c230ca8a438f6bd&scene=21#wechat_redirect">Kubernetes Pod 故障归类与排查方法</a></li>
</ul>
 
      <!-- reward -->
      
    </div>
    

    <!-- copyright -->
    
    <footer class="article-footer">
       
  <ul class="article-tag-list" itemprop="keywords"><li class="article-tag-list-item"><a class="article-tag-list-link" href="/tags/k8s/" rel="tag">k8s</a></li></ul>

    </footer>
  </div>

    
 
   
</article>

    
    <article
  id="post-k8s/Spring Cloud 应用在 Kubernetes 上的最佳实践 — 高可用 混沌工程"
  class="article article-type-post"
  itemscope
  itemprop="blogPost"
  data-scroll-reveal
>
  <div class="article-inner">
    
    <header class="article-header">
       
<h2 itemprop="name">
  <a class="article-title" href="/2020/11/11/k8s/Spring%20Cloud%20%E5%BA%94%E7%94%A8%E5%9C%A8%20Kubernetes%20%E4%B8%8A%E7%9A%84%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5%20%E2%80%94%20%E9%AB%98%E5%8F%AF%E7%94%A8%20%E6%B7%B7%E6%B2%8C%E5%B7%A5%E7%A8%8B/"
    >Spring Cloud 应用在 Kubernetes 上的最佳实践 — 高可用 混沌工程</a> 
</h2>
 

    </header>
     
    <div class="article-meta">
      <a href="/2020/11/11/k8s/Spring%20Cloud%20%E5%BA%94%E7%94%A8%E5%9C%A8%20Kubernetes%20%E4%B8%8A%E7%9A%84%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5%20%E2%80%94%20%E9%AB%98%E5%8F%AF%E7%94%A8%20%E6%B7%B7%E6%B2%8C%E5%B7%A5%E7%A8%8B/" class="article-date">
  <time datetime="2020-11-10T16:00:00.000Z" itemprop="datePublished">2020-11-11</time>
</a> 
  <div class="article-category">
    <a class="article-category-link" href="/categories/k8s/">k8s</a>
  </div>
   
    </div>
      
    <div class="article-entry" itemprop="articleBody">
       
  <h1 id="Spring-Cloud-应用在-Kubernetes-上的最佳实践-—-高可用（混沌工程）"><a href="#Spring-Cloud-应用在-Kubernetes-上的最佳实践-—-高可用（混沌工程）" class="headerlink" title="Spring Cloud 应用在 Kubernetes 上的最佳实践 — 高可用（混沌工程）"></a>Spring Cloud 应用在 Kubernetes 上的最佳实践 — 高可用（混沌工程）</h1><p><strong>导读：</strong>从上篇开始，我们进入到了高可用的章节，上篇提到的熔断能力，是历年保障大促当天晚上整个系统不被洪峰流量打垮的法宝。本文将重点介绍为什么我们要做混沌工程以及如何使用 ChaosBlade 工具和 AHAS 平台快速实施混沌工程。</p>
<p><strong>前言</strong></p>
<p>从上篇开始，我们进入到了高可用的章节。上篇提到的熔断能力，是历年保障大促当天晚上整个系统不被洪峰流量打垮的法宝。本篇介绍的措施与熔断不同：熔断是线上洪峰来临时的保护手段，而混沌工程更多是在流量低峰或专门的演练环境中，针对可能遇见的各类故障，用演练的手段来窥探其对业务的影响。它的主要目的是让我们更加了解自己业务系统的薄弱环节，以便对症下药，增强系统的高可用能力。</p>
<p><strong>为什么需要混沌工程？</strong></p>
<p>任何一个系统都会出现未曾预知的故障。即便是现代工艺已经很成熟的磁盘，据统计其最低年故障率也有 0.39%。连这么底层的基础设施都有如此高的不确定性，上层系统就更不必说了。</p>
<p>尤其当下大部分的服务形态都是分布式架构。在分布式系统架构下，服务间的依赖日益复杂，很难评估单个服务故障对整个系统的影响；请求链路长，监控告警不完善，导致发现问题、定位问题难度增大；同时业务和技术迭代快，如何持续保障系统的稳定性和高可用性面临很大的挑战。</p>
<h3 id=""><a href="#" class="headerlink" title=""></a></h3><h3 id="1-云原生系统挑战更大"><a href="#1-云原生系统挑战更大" class="headerlink" title="1. 云原生系统挑战更大"></a><strong>1. 云原生系统挑战更大</strong></h3><p>谈到云原生，可以说云原生是一个理念，主要包含的技术有云设施、容器、微服务、服务网格、Serverless 等技术。云设施指公有云、专有云和混合云等，是云原生系统的基础设施，基础实施的故障可能对整个上层业务系统造成很大影响，所以说云设施的稳定性是非常重要的。</p>
<p><strong>容器服务的挑战</strong>可以分两大类：一类是面向 K8s 服务提供商，服务是否稳定；另一类是面向用户，配置的扩缩容规则是否有效，实现的 CRD 是否正确，容器编排是否合理等问题。</p>
<p><strong>分布式服务的挑战</strong>主要是复杂性，单个服务的故障很难判断对整个系统的影响；service mesh，sidecar 的服务路由、负载均衡等功能的有效性，还有 sidecar 容器本身的可用性。</p>
<p><strong>一些新兴的部署模式的挑战</strong>如 serverless，现在基本上都是函数加事件的形式，资源调度是否有效，而且 serverless 服务提供商屏蔽了一些中间件，你能掌控的是函数这些服务，那么你可以通过混沌工程去验证你函数调用的一些配置，比如超时配置、相关的一些降级策略等这些是否合理。</p>
<p>以上技术都有相同的共性，比如弹性可扩展、松耦合、容错性高、还有一些易于管理，便于观察这些特性。所以说在云原生时代，通过混沌工程可以更有效的推进系统的“云原生”化。</p>
<h3 id="-1"><a href="#-1" class="headerlink" title=""></a></h3><h3 id="2-每个职位都需要懂混沌工程"><a href="#2-每个职位都需要懂混沌工程" class="headerlink" title="2. 每个职位都需要懂混沌工程"></a><strong>2. 每个职位都需要懂混沌工程</strong></h3><p>混沌工程是一种思想，它让系统中的每个参与者都学会去考虑一件事情：如果所依赖的某服务中断了服务该怎么办？对于以下四类人群而言，意义尤显突出：</p>
<ul>
<li><p>对于<strong>架构师</strong>来说，可以验证系统架构的容错能力，我们需要面向失败设计的系统，混沌工程的思想就是践行这一原则的方式；</p>
</li>
<li><p>对于<strong>开发和运维</strong>，可以提高故障的应急效率，实现故障告警、定位、恢复的有效和高效性；</p>
</li>
<li><p>对于<strong>测试</strong>来说，可以弥补传统测试方法留下的空白，之前的测试方法基本上是从用户的角度去做，而混沌工程是从系统的角度进行测试，降低故障复发率；</p>
</li>
<li><p>对于<strong>产品和设计</strong>，通过混沌事件查看产品的表现，提升客户使用体验。所以说混沌工程面向的不仅仅是开发和测试；拥有最好的客户体验是每个人的目标。实施混沌工程，可以提早发现生产环境上的问题，以战养战，提升故障应急效率和客户使用体验，逐渐建设高可用的韧性系统。</p>
</li>
</ul>
<p><strong>混沌工程实操</strong></p>
<p>在一次完整的演练流程中，需要先做好计划，对演练行为有一个预期；制定演练计划的同时，我们推荐的最佳实践是配合业务的自动化测试：每演练一次，就完整地跑一遍自动化测试用例，这样才能全面了解故障真正发生时对业务造成的影响：</p>
<p><img src="http://iubest.gitee.io/pic/640-1601003235339.webp" alt="img"></p>
<p>在上面的图中描述了一次完整的故障演练需要经过的步骤，其中最重要的一步的实践是如何“执行预制混沌实验”？因为这一步需要一个专业的工具，在业内目前最流行的工具是 Netflix 的 Chaos Monkey 和阿里巴巴开源的<a target="_blank" rel="noopener" href="https://mp.weixin.qq.com/s?__biz=MzUzNzYxNjAzMg==&mid=2247494460&idx=2&sn=2d50c4257bf7ee4f780255923c3f4c51&chksm=fae6e0f3cd9169e56378a5c85ca54a1eda4e607bd339192be19d99774889fc1818f554f925db&mpshare=1&scene=24&srcid=0921mEuiYl0Z71dEPujp5MTa&sharer_sharetime=1600732341212&sharer_shareid=407c90840c4caeeaf9680b1dd38c62ba&key=dbdefc5d690db19540edcfe32db55dc38284f6437256b923bdcadaf128424ac3eae8a68d9b758ebba248a653a1c83d17a477d058632881c7d16d97c3fb7f634a4d52cf825bc6d8f947b9324e02a21966c631338d2914dd8be97b712780e5cd364120a93d1124246963c52e2b5217cf5f0ad70b9dbd5eafb48cd6732c1077cbf5&ascene=14&uin=MTIwMjI3NTkwNQ==&devicetype=Windows+10+x64&version=62090529&lang=zh_CN&exportkey=A8Zk0u8xHdVwPzIwFzJfv6c=&pass_ticket=pdKql0fF0rGOXvNr/tToA1+AardNoo77GWcTcNS7PpaVOYI2W/vk8qbSO4P5qmER&wx_header=0&winzoom=1">ChaosBlade</a>，我们接下来主要是介绍如何使用 ChaosBlade 来完成一次演练。</p>
<h3 id="1-使用-ChaosBlade-去做"><a href="#1-使用-ChaosBlade-去做" class="headerlink" title="1. 使用 ChaosBlade 去做"></a><strong>1. 使用 ChaosBlade 去做</strong></h3><p><a target="_blank" rel="noopener" href="https://mp.weixin.qq.com/s?__biz=MzUzNzYxNjAzMg==&mid=2247494460&idx=2&sn=2d50c4257bf7ee4f780255923c3f4c51&chksm=fae6e0f3cd9169e56378a5c85ca54a1eda4e607bd339192be19d99774889fc1818f554f925db&mpshare=1&scene=24&srcid=0921mEuiYl0Z71dEPujp5MTa&sharer_sharetime=1600732341212&sharer_shareid=407c90840c4caeeaf9680b1dd38c62ba&key=dbdefc5d690db19540edcfe32db55dc38284f6437256b923bdcadaf128424ac3eae8a68d9b758ebba248a653a1c83d17a477d058632881c7d16d97c3fb7f634a4d52cf825bc6d8f947b9324e02a21966c631338d2914dd8be97b712780e5cd364120a93d1124246963c52e2b5217cf5f0ad70b9dbd5eafb48cd6732c1077cbf5&ascene=14&uin=MTIwMjI3NTkwNQ==&devicetype=Windows+10+x64&version=62090529&lang=zh_CN&exportkey=A8Zk0u8xHdVwPzIwFzJfv6c=&pass_ticket=pdKql0fF0rGOXvNr/tToA1+AardNoo77GWcTcNS7PpaVOYI2W/vk8qbSO4P5qmER&wx_header=0&winzoom=1">ChaosBlade</a> 是阿里巴巴一款遵循混沌实验模型的混沌实验执行工具，具有场景丰富度高，简单易用等特点，而且扩展场景也特别方便，开源不久就被加入到 CNCF Landspace 中，成为主流的一款混沌工具。目前包含的场景有基础资源、应用服务、容器服务、云资源等。ChaosBlade 下载解压即用，可以通过执行 blade 命令来执行云原生下微服务的演练场景，下面是模拟 Kubernetes 下微服务中数据库调用延迟故障。</p>
<p><img src="http://iubest.gitee.io/pic/640-1601003235283.webp" alt="img"></p>
<h3 id="-3"><a href="#-3" class="headerlink" title=""></a></h3><h3 id="2-使用-AHAS-故障演练平台去做"><a href="#2-使用-AHAS-故障演练平台去做" class="headerlink" title="2. 使用 AHAS 故障演练平台去做"></a><strong>2. 使用 AHAS 故障演练平台去做</strong></h3><p>AHAS 故障演练平台是阿里云对外部用户开放的云产品，使用方式可参考<a target="_blank" rel="noopener" href="https://mp.weixin.qq.com/s?__biz=MzUzNzYxNjAzMg==&mid=2247494460&idx=2&sn=2d50c4257bf7ee4f780255923c3f4c51&chksm=fae6e0f3cd9169e56378a5c85ca54a1eda4e607bd339192be19d99774889fc1818f554f925db&mpshare=1&scene=24&srcid=0921mEuiYl0Z71dEPujp5MTa&sharer_sharetime=1600732341212&sharer_shareid=407c90840c4caeeaf9680b1dd38c62ba&key=dbdefc5d690db19540edcfe32db55dc38284f6437256b923bdcadaf128424ac3eae8a68d9b758ebba248a653a1c83d17a477d058632881c7d16d97c3fb7f634a4d52cf825bc6d8f947b9324e02a21966c631338d2914dd8be97b712780e5cd364120a93d1124246963c52e2b5217cf5f0ad70b9dbd5eafb48cd6732c1077cbf5&ascene=14&uin=MTIwMjI3NTkwNQ==&devicetype=Windows+10+x64&version=62090529&lang=zh_CN&exportkey=A8Zk0u8xHdVwPzIwFzJfv6c=&pass_ticket=pdKql0fF0rGOXvNr/tToA1+AardNoo77GWcTcNS7PpaVOYI2W/vk8qbSO4P5qmER&wx_header=0&winzoom=1">官方文档</a>。其底层的故障注入能力大部分来源于 ChaosBlade 实现，另一部分使用自身小程序扩展实现。AHAS 相比于 ChaosBlade，除了简单易用的白屏操作之外，还实现了上层的演练编排、权限控制、场景管理等，而且还针对微服务新增应用维度演练，简化演练成本，优化演练体验。</p>
<p><img src="http://iubest.gitee.io/pic/640-1601003235331.webp" alt="img"></p>
<p><strong>结尾</strong></p>
<p>混沌工程是一种主动防御的稳定性手段，体现的是反脆弱的思想，实施混沌工程不能只是把故障制造出来，需要有明确的驱动目标。我们要选择合适的工具和平台，控制演练风险，实现常态化演练。</p>
<p>阿里巴巴内部从最早引入混沌工程解决微服务的依赖问题，到业务服务、云服务稳态验证，进一步升级到公共云、专有云的业务连续性保障，以及在验证云原生系统的稳定性等方面积累了比较丰富的场景和实践经验；这一些经验沉淀我们都通过开源产品以及云产品<a target="_blank" rel="noopener" href="https://mp.weixin.qq.com/s?__biz=MzUzNzYxNjAzMg==&mid=2247494460&idx=2&sn=2d50c4257bf7ee4f780255923c3f4c51&chksm=fae6e0f3cd9169e56378a5c85ca54a1eda4e607bd339192be19d99774889fc1818f554f925db&mpshare=1&scene=24&srcid=0921mEuiYl0Z71dEPujp5MTa&sharer_sharetime=1600732341212&sharer_shareid=407c90840c4caeeaf9680b1dd38c62ba&key=dbdefc5d690db19540edcfe32db55dc38284f6437256b923bdcadaf128424ac3eae8a68d9b758ebba248a653a1c83d17a477d058632881c7d16d97c3fb7f634a4d52cf825bc6d8f947b9324e02a21966c631338d2914dd8be97b712780e5cd364120a93d1124246963c52e2b5217cf5f0ad70b9dbd5eafb48cd6732c1077cbf5&ascene=14&uin=MTIwMjI3NTkwNQ==&devicetype=Windows+10+x64&version=62090529&lang=zh_CN&exportkey=A8Zk0u8xHdVwPzIwFzJfv6c=&pass_ticket=pdKql0fF0rGOXvNr/tToA1+AardNoo77GWcTcNS7PpaVOYI2W/vk8qbSO4P5qmER&wx_header=0&winzoom=1">AHAS</a> 一一对外输出。</p>
 
      <!-- reward -->
      
    </div>
    

    <!-- copyright -->
    
    <footer class="article-footer">
       
  <ul class="article-tag-list" itemprop="keywords"><li class="article-tag-list-item"><a class="article-tag-list-link" href="/tags/k8s/" rel="tag">k8s</a></li></ul>

    </footer>
  </div>

    
 
   
</article>

    
  </article>
  

  
  <nav class="page-nav">
    
    <a class="extend prev" rel="prev" href="/page/4/">上一页</a><a class="page-number" href="/">1</a><span class="space">&hellip;</span><a class="page-number" href="/page/3/">3</a><a class="page-number" href="/page/4/">4</a><span class="page-number current">5</span><a class="page-number" href="/page/6/">6</a><a class="page-number" href="/page/7/">7</a><span class="space">&hellip;</span><a class="page-number" href="/page/14/">14</a><a class="extend next" rel="next" href="/page/6/">下一页</a>
  </nav>
  
</section>
</div>

      <footer class="footer">
  <div class="outer">
    <ul>
      <li>
        Copyright &copy;
        2015-2020
        <i class="ri-heart-fill heart_icon"></i> TzWind
      </li>
    </ul>
    <ul>
      <li>
        
        
        
        Powered by <a href="https://hexo.io" target="_blank">Hexo</a>
        <span class="division">|</span>
        Theme - <a href="https://github.com/Shen-Yu/hexo-theme-ayer" target="_blank">Ayer</a>
        
      </li>
    </ul>
    <ul>
      <li>
        
        
        <span>
  <span><i class="ri-user-3-fill"></i>Visitors:<span id="busuanzi_value_site_uv"></span></span>
  <span class="division">|</span>
  <span><i class="ri-eye-fill"></i>Page views:<span id="busuanzi_value_page_pv"></span></span>
</span>
        
      </li>
    </ul>
    <ul>
      <li>
        <!-- cnzz统计 -->
        
        <script type="text/javascript" src='https://s9.cnzz.com/z_stat.php?id=1278069914&amp;web_id=1278069914'></script>
        
      </li>
    </ul>
  </div>
</footer>
      <div class="float_btns">
        <div class="totop" id="totop">
  <i class="ri-arrow-up-line"></i>
</div>

<div class="todark" id="todark">
  <i class="ri-moon-line"></i>
</div>

      </div>
    </main>
    <aside class="sidebar on">
      <button class="navbar-toggle"></button>
<nav class="navbar">
  
  <div class="logo">
    <a href="/"><img src="/images/ayer-side.svg" alt="Hexo"></a>
  </div>
  
  <ul class="nav nav-main">
    
    <li class="nav-item">
      <a class="nav-item-link" href="/">Home</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" href="/archives">Archives</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" href="/categories">Categories</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" href="/tags">Tags</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" target="_blank" rel="noopener" href="http://www.baidu.com">Baidu</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" href="/friends">Links</a>
    </li>
    
    <li class="nav-item">
      <a class="nav-item-link" href="/2019/about">About</a>
    </li>
    
  </ul>
</nav>
<nav class="navbar navbar-bottom">
  <ul class="nav">
    <li class="nav-item">
      
      <a class="nav-item-link nav-item-search" title="Search">
        <i class="ri-search-line"></i>
      </a>
      
      
      <a class="nav-item-link" target="_blank" href="/atom.xml" title="RSS Feed">
        <i class="ri-rss-line"></i>
      </a>
      
    </li>
  </ul>
</nav>
<div class="search-form-wrap">
  <div class="local-search local-search-plugin">
  <input type="search" id="local-search-input" class="local-search-input" placeholder="Search...">
  <div id="local-search-result" class="local-search-result"></div>
</div>
</div>
    </aside>
    <script>
      if (window.matchMedia("(max-width: 768px)").matches) {
        document.querySelector('.content').classList.remove('on');
        document.querySelector('.sidebar').classList.remove('on');
      }
    </script>
    <div id="mask"></div>

<!-- #reward -->
<div id="reward">
  <span class="close"><i class="ri-close-line"></i></span>
  <p class="reward-p"><i class="ri-cup-line"></i>Buy me a coffee~</p>
  <div class="reward-box">
    
    
  </div>
</div>
    
<script src="/js/jquery-2.0.3.min.js"></script>


<script src="/js/lazyload.min.js"></script>

<!-- Tocbot -->

<script src="https://cdn.jsdelivr.net/npm/jquery-modal@0.9.2/jquery.modal.min.js"></script>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/jquery-modal@0.9.2/jquery.modal.min.css">
<script src="https://cdn.jsdelivr.net/npm/justifiedGallery@3.7.0/dist/js/jquery.justifiedGallery.min.js"></script>

<script src="/dist/main.js"></script>

<!-- ImageViewer -->

<!-- Root element of PhotoSwipe. Must have class pswp. -->
<div class="pswp" tabindex="-1" role="dialog" aria-hidden="true">

    <!-- Background of PhotoSwipe. 
         It's a separate element as animating opacity is faster than rgba(). -->
    <div class="pswp__bg"></div>

    <!-- Slides wrapper with overflow:hidden. -->
    <div class="pswp__scroll-wrap">

        <!-- Container that holds slides. 
            PhotoSwipe keeps only 3 of them in the DOM to save memory.
            Don't modify these 3 pswp__item elements, data is added later on. -->
        <div class="pswp__container">
            <div class="pswp__item"></div>
            <div class="pswp__item"></div>
            <div class="pswp__item"></div>
        </div>

        <!-- Default (PhotoSwipeUI_Default) interface on top of sliding area. Can be changed. -->
        <div class="pswp__ui pswp__ui--hidden">

            <div class="pswp__top-bar">

                <!--  Controls are self-explanatory. Order can be changed. -->

                <div class="pswp__counter"></div>

                <button class="pswp__button pswp__button--close" title="Close (Esc)"></button>

                <button class="pswp__button pswp__button--share" style="display:none" title="Share"></button>

                <button class="pswp__button pswp__button--fs" title="Toggle fullscreen"></button>

                <button class="pswp__button pswp__button--zoom" title="Zoom in/out"></button>

                <!-- Preloader demo http://codepen.io/dimsemenov/pen/yyBWoR -->
                <!-- element will get class pswp__preloader--active when preloader is running -->
                <div class="pswp__preloader">
                    <div class="pswp__preloader__icn">
                        <div class="pswp__preloader__cut">
                            <div class="pswp__preloader__donut"></div>
                        </div>
                    </div>
                </div>
            </div>

            <div class="pswp__share-modal pswp__share-modal--hidden pswp__single-tap">
                <div class="pswp__share-tooltip"></div>
            </div>

            <button class="pswp__button pswp__button--arrow--left" title="Previous (arrow left)">
            </button>

            <button class="pswp__button pswp__button--arrow--right" title="Next (arrow right)">
            </button>

            <div class="pswp__caption">
                <div class="pswp__caption__center"></div>
            </div>

        </div>

    </div>

</div>

<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/photoswipe.min.css">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/default-skin/default-skin.min.css">
<script src="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/photoswipe.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/photoswipe-ui-default.min.js"></script>

<script>
    function viewer_init() {
        let pswpElement = document.querySelectorAll('.pswp')[0];
        let $imgArr = document.querySelectorAll(('.article-entry img:not(.reward-img)'))

        $imgArr.forEach(($em, i) => {
            $em.onclick = () => {
                // skip opening the viewer while the slider is expanded
                // todo: this check is fragile; refactor to an explicit state flag
                if (document.querySelector('.left-col.show')) return
                let items = []
                $imgArr.forEach(($em2, i2) => {
                    let src = $em2.getAttribute('data-target') || $em2.getAttribute('src')
                    let title = $em2.getAttribute('alt')
                    // read the original image's natural size (may be 0 if not
                    // yet loaded, hence the fallback to the element's size below)
                    const image = new Image()
                    image.src = src
                    items.push({
                        src: src,
                        w: image.width || $em2.width,
                        h: image.height || $em2.height,
                        title: title
                    })
                })
                var gallery = new PhotoSwipe(pswpElement, PhotoSwipeUI_Default, items, {
                    index: parseInt(i)
                });
                gallery.init()
            }
        })
    }
    viewer_init()
</script>

<!-- MathJax -->

<!-- Katex -->

<!-- busuanzi  -->


<script src="/js/busuanzi-2.3.pure.min.js"></script>


<!-- ClickLove -->

<!-- ClickBoom1 -->

<!-- ClickBoom2 -->

<!-- CodeCopy -->


<link rel="stylesheet" href="/css/clipboard.css">

<script src="https://cdn.jsdelivr.net/npm/clipboard@2/dist/clipboard.min.js"></script>
<script>
  function wait(callback, delayMs) {
    // thin wrapper over setTimeout; the delay is in milliseconds
    window.setTimeout(callback, delayMs);
  }
  !function (e, t, a) {
    var initCopyCode = function(){
      var copyHtml = '';
      copyHtml += '<button class="btn-copy" data-clipboard-snippet="">';
      copyHtml += '<i class="ri-file-copy-2-line"></i><span>COPY</span>';
      copyHtml += '</button>';
      $(".highlight .code pre").before(copyHtml);
      $(".article pre code").before(copyHtml);
      var clipboard = new ClipboardJS('.btn-copy', {
        target: function(trigger) {
          return trigger.nextElementSibling;
        }
      });
      clipboard.on('success', function(e) {
        let $btn = $(e.trigger);
        $btn.addClass('copied');
        let $icon = $($btn.find('i'));
        $icon.removeClass('ri-file-copy-2-line');
        $icon.addClass('ri-checkbox-circle-line');
        let $span = $($btn.find('span'));
        $span[0].innerText = 'COPIED';
        
        wait(function () { // restore the copy button after two seconds
          $icon.removeClass('ri-checkbox-circle-line');
          $icon.addClass('ri-file-copy-2-line');
          $span[0].innerText = 'COPY';
        }, 2000);
      });
      clipboard.on('error', function(e) {
        e.clearSelection();
        let $btn = $(e.trigger);
        $btn.addClass('copy-failed');
        let $icon = $($btn.find('i'));
        $icon.removeClass('ri-file-copy-2-line');
        $icon.addClass('ri-time-line');
        let $span = $($btn.find('span'));
        $span[0].innerText = 'COPY FAILED';
        
        wait(function () { // restore the copy button after two seconds
          $icon.removeClass('ri-time-line');
          $icon.addClass('ri-file-copy-2-line');
          $span[0].innerText = 'COPY';
        }, 2000);
      });
    }
    initCopyCode();
  }(window, document);
</script>


<!-- CanvasBackground -->


    
  </div>
</body>

</html>