<!DOCTYPE html>
<html lang="en-us">
<head>
    <meta name="referrer" content="no-referrer"/>
    <meta name="google-site-verification" content="9vIieCe-Qpd78QOmBl63rGtIVbhY6sYyuxX3j8XWBA4" />
    <meta name="baidu-site-verification" content="LRrmH41lz7" />
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="google-site-verification" content="xBT4GhYoi5qRD5tr338pgPM5OWHHIDR6mNg1a3euekI" />
    <meta name="viewport" content="width=device-width, initial-scale=1">
    
    <meta name="description" content="A hands-on guide to creating a kubernetes cluster">
    
    <meta name="keyword"  content="暴走的初号机, shinji3887, 暴走的初号机's weblog, 暴走的初号机's blog, shinji3887 Blog, Blog, Personal Website, Internet, Web, Cloud Native, PaaS, Istio, Kubernetes, Microservice">
    <link rel="shortcut icon" href="/img/favicon.ico">

    <title>Bootstrapping a single-node kubernetes 1.13 cluster with kubeadm - 同步率400%</title>

    <link rel="canonical" href="/post/kubeadmin-create-cluster/">

    <link rel="stylesheet" href="https://lupeier.cn-sh2.ufileos.com/iDisqus.min.css"/>
	
    
    <link rel="stylesheet" href="https://lupeier.cn-sh2.ufileos.com/bootstrap.min.css">

    
    <link rel="stylesheet" href="https://lupeier.cn-sh2.ufileos.com/hux-blog.min.css">

    
    <link rel="stylesheet" href="https://lupeier.cn-sh2.ufileos.com/syntax.css">

    
    <link rel="stylesheet" href="https://lupeier.cn-sh2.ufileos.com/zanshang.css">

    
    <link href="/css/font-awesome.min.css" rel="stylesheet" type="text/css">
    
    
    <script src="https://lupeier.cn-sh2.ufileos.com/jquery.min.js"></script>
    
    
    <script src="https://lupeier.cn-sh2.ufileos.com/bootstrap.min.js"></script>
    
    
    <script src="https://lupeier.cn-sh2.ufileos.com/hux-blog.min.js"></script>
</head>

<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
    <div class="container-fluid">
        
        <div class="navbar-header page-scroll">
            <button type="button" class="navbar-toggle">
                <span class="sr-only">Toggle navigation</span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
            </button>
            <a class="navbar-brand" href="/">L&#39;s Blog</a>
        </div>

        
        
        <div id="huxblog_navbar">
            <div class="navbar-collapse">
                <ul class="nav navbar-nav navbar-right">
                    <li>
                        <a href="/">Home</a>
                    </li>
                    
                    <li>
                        <a href="/categories/tech">tech</a>
                    </li>
                    
                    <li>
                        <a href="/categories/tips">tips</a>
                    </li>
                    
                    <li>
                        <a href="/about">About</a>
                    </li>
                    
                </ul>
            </div>
        </div>
        
    </div>
    
</nav>
<script>
    
    
    
    var $body   = document.body;
    var $toggle = document.querySelector('.navbar-toggle');
    var $navbar = document.querySelector('#huxblog_navbar');
    var $collapse = document.querySelector('.navbar-collapse');

    $toggle.addEventListener('click', handleMagic)
    function handleMagic(e){
        if ($navbar.className.indexOf('in') > 0) {
        
            $navbar.className = " ";
            
            setTimeout(function(){
                
                if($navbar.className.indexOf('in') < 0) {
                    $collapse.style.height = "0px"
                }
            },400)
        }else{
        
            $collapse.style.height = "auto"
            $navbar.className += " in";
        }
    }
</script>




<style type="text/css">
    header.intro-header{
        background-image: url('https://lupeier.cn-sh2.ufileos.com/architecture-bay-boat-326410.jpg')
    }
</style>
<header class="intro-header" >
    <div class="container">
        <div class="row">
            <div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
                <div class="post-heading">
                    <div class="tags">
                       
                       <a class="tag" href="/tags/kubernetes" title="Kubernetes">
                           Kubernetes
                        </a>
                        
                    </div>
                    <h1>Bootstrapping a single-node kubernetes 1.13 cluster with kubeadm</h1>
                    <h2 class="subheading"></h2>
                    <span  class="meta">Posted by L&#39; on Saturday, January 12, 2019
                        
                    </span>
					<br>
                </div>
            </div>
        </div>
    </div>
</header>




<article>
    <div class="container">
        <div class="row">

            
            <div class="
                col-lg-8 col-lg-offset-2
                col-md-10 col-md-offset-1
                post-container">

        		
                        <header>
                        <h2>TOC</h2>
                        </header>
                        <nav id="TableOfContents">
  <ul>
    <li>
      <ul>
        <li><a href="#和minikube的区别">Differences from minikube</a></li>
        <li><a href="#环境要求">Environment requirements</a></li>
        <li><a href="#设置yum源">Configuring the yum repositories</a></li>
        <li><a href="#安装docker">Installing Docker</a></li>
        <li><a href="#kubeadm安装k8s">Installing k8s with kubeadm</a></li>
        <li><a href="#安装dashboard">Installing the Dashboard</a></li>
      </ul>
    </li>
  </ul>
</nav>
        		
        		<p>kubeadm is something of a miracle tool for quickly standing up a k8s cluster. Plenty of people have fallen at the cluster-installation hurdle. Last year I built a k8s 1.9 cluster myself from binaries: a pile of components to install, endless certificate configuration, dependencies everywhere. It was too painful to remember fondly, and the resulting cluster still had problems; the certificates never quite worked and only an old version of the Dashboard would run. With kubeadm, the bulk of the manual installation, configuration, and certificate generation is handled for us, and the cluster comes up in the time it takes to finish a coffee.</p>
<h3 id="和minikube的区别">Differences from minikube</h3>
<p>minikube is essentially a lab tool. It runs on a single machine only, bundles just the most essential k8s components, cannot form a cluster, and is locked down so that you cannot install the usual add-ons (network plugins, DNS plugins, ingress plugins, and so on); its main purpose is to let you get a feel for k8s. What kubeadm builds, by contrast, is a real k8s cluster that can be used in production (HA is up to you), almost indistinguishable from a cluster assembled from binaries.</p>
<h3 id="环境要求">Environment requirements</h3>
<ul>
<li>This installation uses a VirtualBox VM (on macOS) with 2 CPUs and 2 GB of RAM</li>
<li>The operating system is CentOS 7.6, and the steps below all assume CentOS. Use the newest CentOS release if you can; older ones hide all kinds of strange pitfalls (7.0 bit me more than once)</li>
<li>The VM needs two-way connectivity with the host plus public internet access; the network setup itself is not covered here, there are plenty of tutorials online</li>
<li>The kubernetes baseline version installed here is 1.13.1</li>
</ul>
<h3 id="设置yum源">Configuring the yum repositories</h3>
<p>First go to the <code>/etc/yum.repos.d/</code> directory and delete every repo file in it (make a backup first).</p>
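<p>Deleting files blindly is risky, so the backup step can be scripted. A minimal sketch (the function name and directory arguments are mine, purely illustrative):</p>

```shell
# Move every *.repo file aside instead of deleting it outright,
# and report how many files were moved.
backup_repos() {
    local repo_dir="$1" backup_dir="$2"
    mkdir -p "$backup_dir"
    local f moved=0
    for f in "$repo_dir"/*.repo; do
        [ -e "$f" ] || continue   # glob matched nothing
        mv "$f" "$backup_dir/"
        moved=$((moved + 1))
    done
    echo "$moved"
}
# e.g. backup_repos /etc/yum.repos.d /etc/yum.repos.d/backup
```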
<p>Download the CentOS base yum repo configuration (the Aliyun mirror is used here):</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">curl -o CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
</code></pre></div><p>Download the Docker yum repo configuration:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">curl -o docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
</code></pre></div><p>Configure the kubernetes yum repo:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">cat <span style="color:#e6db74">&lt;&lt;EOF &gt; /etc/yum.repos.d/kubernetes.repo
</span><span style="color:#e6db74">[kubernetes]
</span><span style="color:#e6db74">name=Kubernetes
</span><span style="color:#e6db74">baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
</span><span style="color:#e6db74">enabled=1
</span><span style="color:#e6db74">gpgcheck=0
</span><span style="color:#e6db74">repo_gpgcheck=0
</span><span style="color:#e6db74">gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
</span><span style="color:#e6db74">        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
</span><span style="color:#e6db74">EOF</span>
</code></pre></div><p>Run the following commands to refresh the yum cache:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#75715e"># yum clean all  </span>
<span style="color:#75715e"># yum makecache  </span>
<span style="color:#75715e"># yum repolist</span>
</code></pre></div><p>If you see a repo list like the one below, the repositories are configured correctly:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv yum.repos.d<span style="color:#f92672">]</span><span style="color:#75715e"># yum repolist</span>
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
repo id                                                                              repo name                                                                                 status
base/7/x86_64                                                                        CentOS-7 - Base - 163.com                                                                 10,019
docker-ce-stable/x86_64                                                              Docker CE Stable - x86_64                                                                     <span style="color:#ae81ff">28</span>
extras/7/x86_64                                                                      CentOS-7 - Extras - 163.com                                                                  <span style="color:#ae81ff">321</span>
kubernetes                                                                           Kubernetes                                                                                   <span style="color:#ae81ff">299</span>
updates/7/x86_64                                                                     CentOS-7 - Updates - 163.com                                                                 <span style="color:#ae81ff">628</span>
repolist: 11,295
</code></pre></div><h3 id="安装docker">Installing Docker</h3>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">yum install -y docker-ce
</code></pre></div><p>I installed the latest stable release, 18.09. If you need a specific version, first run</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv yum.repos.d<span style="color:#f92672">]</span><span style="color:#75715e"># yum list docker-ce --showduplicates | sort -r</span>
Loaded plugins: fastestmirror
Installed Packages
Available Packages
Loading mirror speeds from cached hostfile
docker-ce.x86_64            3:18.09.1-3.el7                    docker-ce-stable 
docker-ce.x86_64            3:18.09.1-3.el7                    @docker-ce-stable
docker-ce.x86_64            3:18.09.0-3.el7                    docker-ce-stable 
docker-ce.x86_64            18.06.1.ce-3.el7                   docker-ce-stable 
docker-ce.x86_64            18.06.0.ce-3.el7                   docker-ce-stable 
docker-ce.x86_64            18.03.1.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            18.03.0.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.12.1.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.12.0.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.09.1.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.09.0.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.06.2.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.06.1.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.06.0.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.03.3.ce-1.el7                   docker-ce-stable 
docker-ce.x86_64            17.03.2.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable 
</code></pre></div><p>to list all available versions, then run</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">yum install -y docker-ce-&lt;VERSION STRING&gt;
</code></pre></div><p>to install that exact version.</p>
<p>Once the installation completes, run</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv yum.repos.d<span style="color:#f92672">]</span><span style="color:#75715e"># systemctl start docker</span>
<span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv yum.repos.d<span style="color:#f92672">]</span><span style="color:#75715e"># systemctl enable docker</span>
<span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv yum.repos.d<span style="color:#f92672">]</span><span style="color:#75715e"># docker info</span>
Containers: <span style="color:#ae81ff">24</span>
 Running: <span style="color:#ae81ff">21</span>
 Paused: <span style="color:#ae81ff">0</span>
 Stopped: <span style="color:#ae81ff">3</span>
Images: <span style="color:#ae81ff">11</span>
Server Version: 18.09.1
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
runc version: 96ec2177ae841256168fcf76954f7177af9446eb
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-957.1.3.el7.x86_64
Operating System: CentOS Linux <span style="color:#ae81ff">7</span> <span style="color:#f92672">(</span>Core<span style="color:#f92672">)</span>
OSType: linux
Architecture: x86_64
CPUs: <span style="color:#ae81ff">2</span>
Total Memory: 1.795GiB
Name: MiWiFi-R1CM-srv
ID: DSTM:KH2I:Y4UV:SUPX:WIP4:ZV4C:WTNO:VMZR:4OKK:HM3G:3YFS:FXMY
Docker Root Dir: /var/lib/docker
Debug Mode <span style="color:#f92672">(</span>client<span style="color:#f92672">)</span>: false
Debug Mode <span style="color:#f92672">(</span>server<span style="color:#f92672">)</span>: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine

WARNING: bridge-nf-call-ip6tables is disabled
</code></pre></div><p>Output like the above means Docker is installed correctly.</p>
<h3 id="kubeadm安装k8s">Installing k8s with kubeadm</h3>
<p>You may still have doubts about the stability of a kubernetes cluster built with kubeadm, so here is the official feature-state documentation:
<img src="kubeadm.png" alt="kubeadm.png">
As you can see, the core functionality has reached GA and can be relied on. The HA support everyone asks about is still in alpha, so it needs more time; for now a kubeadm cluster has a single master node, and for high availability you have to build an etcd cluster yourself.</p>
<p>Since the kubernetes yum repo was configured earlier, we only need to run</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">yum install -y kubeadm
</code></pre></div><p>and the system installs the latest kubeadm automatically (1.13.1 when I ran it). Four programs are installed in total: kubelet, kubeadm, kubectl, and kubernetes-cni.</p>
<ul>
<li>kubeadm: the one-command k8s cluster deployment tool; it simplifies installation by deploying the core k8s components and add-ons as pods</li>
<li>kubelet: the node agent running on every node; the k8s cluster operates the containers on each node through kubelet. Because it has to manipulate host resources directly, it is not run in a pod but installed as a system service</li>
<li>kubectl: the kubernetes command-line tool; it connects to the api-server to perform all kinds of k8s operations</li>
<li>kubernetes-cni: the k8s virtual network device; it creates a virtual cni0 bridge on the host for pod-to-pod networking, similar in role to docker0.</li>
</ul>
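<p>Before continuing it is worth confirming that all four programs actually landed on the PATH. A small hypothetical helper (the function is mine, not part of any of these packages):</p>

```shell
# Print the names of any expected tools that are not on the PATH;
# empty output means everything is installed.
missing_tools() {
    local t missing=""
    for t in "$@"; do
        command -v "$t" >/dev/null 2>&1 || missing="$missing $t"
    done
    echo "${missing# }"
}
# e.g. missing_tools kubeadm kubelet kubectl
```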
<p>Once installed, run</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">kubeadm init --pod-network-cidr<span style="color:#f92672">=</span>10.244.0.0/16
</code></pre></div><p>to start initializing the master node. Note the <code>--pod-network-cidr=10.244.0.0/16</code> flag: it is configuration needed by the k8s network plugin and is used to allocate subnets to nodes. I am using flannel, which is configured exactly this way; other plugins have their own settings, all documented in detail on the official site, see <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/">this page</a>.</p>
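<p>Because the flag has to agree with the network plugin's manifest, a tiny guard before running init can catch a mismatch early. This is only a sketch of the idea; 10.244.0.0/16 is flannel's stock default and other plugins expect different ranges:</p>

```shell
# Warn when the CIDR passed to kubeadm does not match flannel's
# default pod network (assumption: stock flannel manifest, unedited).
check_flannel_cidr() {
    if [ "$1" = "10.244.0.0/16" ]; then
        echo "ok"
    else
        echo "warning: flannel expects 10.244.0.0/16, got $1"
    fi
}
# e.g. check_flannel_cidr 10.244.0.0/16
```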
<p>During initialization kubeadm runs a series of checks to verify that your server meets the kubernetes installation requirements. Results are classed as <code>[WARNING]</code> or <code>[ERROR]</code>, with output similar to the following (the first attempt usually fails):</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv ~<span style="color:#f92672">]</span><span style="color:#75715e"># kubeadm init</span>
I0112 00:30:18.868179   <span style="color:#ae81ff">13025</span> version.go:94<span style="color:#f92672">]</span> could not fetch a Kubernetes version from the internet: unable to get URL <span style="color:#e6db74">&#34;https://dl.k8s.io/release/stable-1.txt&#34;</span>: Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled <span style="color:#f92672">(</span>Client.Timeout exceeded <span style="color:#66d9ef">while</span> awaiting headers<span style="color:#f92672">)</span>
I0112 00:30:18.868645   <span style="color:#ae81ff">13025</span> version.go:95<span style="color:#f92672">]</span> falling back to the local client version: v1.13.1
<span style="color:#f92672">[</span>init<span style="color:#f92672">]</span> Using Kubernetes version: v1.13.1
<span style="color:#f92672">[</span>preflight<span style="color:#f92672">]</span> Running pre-flight checks
	<span style="color:#f92672">[</span>WARNING SystemVerification<span style="color:#f92672">]</span>: this Docker version is not on the list of validated versions: 18.09.1. Latest validated version: 18.06
	<span style="color:#f92672">[</span>WARNING Hostname<span style="color:#f92672">]</span>: hostname <span style="color:#e6db74">&#34;miwifi-r1cm-srv&#34;</span> could not be reached
	<span style="color:#f92672">[</span>WARNING Hostname<span style="color:#f92672">]</span>: hostname <span style="color:#e6db74">&#34;miwifi-r1cm-srv&#34;</span>: lookup miwifi-r1cm-srv on 192.168.31.1:53: no such host
	<span style="color:#f92672">[</span>WARNING Service-Kubelet<span style="color:#f92672">]</span>: kubelet service is not enabled, please run <span style="color:#e6db74">&#39;systemctl enable kubelet.service&#39;</span>
error execution phase preflight: <span style="color:#f92672">[</span>preflight<span style="color:#f92672">]</span> Some fatal errors occurred:
	<span style="color:#f92672">[</span>ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables<span style="color:#f92672">]</span>: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to <span style="color:#ae81ff">1</span>
	<span style="color:#f92672">[</span>ERROR Swap<span style="color:#f92672">]</span>: running with swap on is not supported. Please disable swap
<span style="color:#f92672">[</span>preflight<span style="color:#f92672">]</span> If you know what you are doing, you can make a check non-fatal with <span style="color:#e6db74">`</span>--ignore-preflight-errors<span style="color:#f92672">=</span>...<span style="color:#e6db74">`</span>
</code></pre></div><p><code>[WARNING]</code> items cover things like the Docker service not being set to start automatically, the Docker version falling outside the validated list, or a non-standard hostname. These are generally harmless and do not block the installation, though it is best to fix whatever the hints suggest.</p>
<p><code>[ERROR]</code> items deserve real attention. You can force the installation past them with <code>--ignore-preflight-errors</code>, but to avoid all kinds of strange problems later I strongly recommend resolving every error before continuing. Typical examples are insufficient system resources (the master node requires at least 2 CPUs and 2 GB of RAM) and swap being enabled (which breaks kubelet startup); swap can be turned off with <code>swapoff -a</code>. Also note the kernel parameter <code>/proc/sys/net/bridge/bridge-nf-call-iptables</code>: it must be set to 1 or the kubeadm preflight checks fail, apparently because the network plugin relies on it.</p>
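<p>The swap fix in particular has to survive reboots, which means commenting the swap entry out of <code>/etc/fstab</code> as well. Here is a sketch of that edit written as a filter, so the result can be inspected before overwriting the real file (the commands in the trailing comments must run as root on the actual host):</p>

```shell
# Comment out every active fstab line whose fields include "swap".
comment_swap_lines() {
    sed -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/'
}
# On the real host, as root:
#   swapoff -a                                            # off for this boot
#   comment_swap_lines < /etc/fstab                       # inspect, then replace the file
#   echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.d/k8s.conf
#   sysctl --system                                       # reload sysctl settings
```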
<p>After a round of fixes the preflight checks all pass and kubeadm begins the installation. After a short wait it fails, as expected, for the well-known reason: gcr.io (Google's own container image registry) is unreachable. The error message is quite valuable, though, so let's take a look:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv ~<span style="color:#f92672">]</span><span style="color:#75715e"># kubeadm init</span>
I0112 00:39:39.813145   <span style="color:#ae81ff">13591</span> version.go:94<span style="color:#f92672">]</span> could not fetch a Kubernetes version from the internet: unable to get URL <span style="color:#e6db74">&#34;https://dl.k8s.io/release/stable-1.txt&#34;</span>: Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled <span style="color:#f92672">(</span>Client.Timeout exceeded <span style="color:#66d9ef">while</span> awaiting headers<span style="color:#f92672">)</span>
I0112 00:39:39.813263   <span style="color:#ae81ff">13591</span> version.go:95<span style="color:#f92672">]</span> falling back to the local client version: v1.13.1
<span style="color:#f92672">[</span>init<span style="color:#f92672">]</span> Using Kubernetes version: v1.13.1
<span style="color:#f92672">[</span>preflight<span style="color:#f92672">]</span> Running pre-flight checks
	<span style="color:#f92672">[</span>WARNING SystemVerification<span style="color:#f92672">]</span>: this Docker version is not on the list of validated versions: 18.09.1. Latest validated version: 18.06
	<span style="color:#f92672">[</span>WARNING Hostname<span style="color:#f92672">]</span>: hostname <span style="color:#e6db74">&#34;miwifi-r1cm-srv&#34;</span> could not be reached
	<span style="color:#f92672">[</span>WARNING Hostname<span style="color:#f92672">]</span>: hostname <span style="color:#e6db74">&#34;miwifi-r1cm-srv&#34;</span>: lookup miwifi-r1cm-srv on 192.168.31.1:53: no such host
<span style="color:#f92672">[</span>preflight<span style="color:#f92672">]</span> Pulling images required <span style="color:#66d9ef">for</span> setting up a Kubernetes cluster
<span style="color:#f92672">[</span>preflight<span style="color:#f92672">]</span> This might take a minute or two, depending on the speed of your internet connection
<span style="color:#f92672">[</span>preflight<span style="color:#f92672">]</span> You can also perform this action in beforehand using <span style="color:#e6db74">&#39;kubeadm config images pull&#39;</span>
error execution phase preflight: <span style="color:#f92672">[</span>preflight<span style="color:#f92672">]</span> Some fatal errors occurred:
	<span style="color:#f92672">[</span>ERROR ImagePull<span style="color:#f92672">]</span>: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled <span style="color:#66d9ef">while</span> waiting <span style="color:#66d9ef">for</span> connection <span style="color:#f92672">(</span>Client.Timeout exceeded <span style="color:#66d9ef">while</span> awaiting headers<span style="color:#f92672">)</span>
, error: exit status <span style="color:#ae81ff">1</span>
	<span style="color:#f92672">[</span>ERROR ImagePull<span style="color:#f92672">]</span>: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled <span style="color:#66d9ef">while</span> waiting <span style="color:#66d9ef">for</span> connection <span style="color:#f92672">(</span>Client.Timeout exceeded <span style="color:#66d9ef">while</span> awaiting headers<span style="color:#f92672">)</span>
, error: exit status <span style="color:#ae81ff">1</span>
	<span style="color:#f92672">[</span>ERROR ImagePull<span style="color:#f92672">]</span>: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled <span style="color:#66d9ef">while</span> waiting <span style="color:#66d9ef">for</span> connection <span style="color:#f92672">(</span>Client.Timeout exceeded <span style="color:#66d9ef">while</span> awaiting headers<span style="color:#f92672">)</span>
, error: exit status <span style="color:#ae81ff">1</span>
	<span style="color:#f92672">[</span>ERROR ImagePull<span style="color:#f92672">]</span>: failed to pull image k8s.gcr.io/kube-proxy:v1.13.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled <span style="color:#66d9ef">while</span> waiting <span style="color:#66d9ef">for</span> connection <span style="color:#f92672">(</span>Client.Timeout exceeded <span style="color:#66d9ef">while</span> awaiting headers<span style="color:#f92672">)</span>
, error: exit status <span style="color:#ae81ff">1</span>
	<span style="color:#f92672">[</span>ERROR ImagePull<span style="color:#f92672">]</span>: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled <span style="color:#66d9ef">while</span> waiting <span style="color:#66d9ef">for</span> connection <span style="color:#f92672">(</span>Client.Timeout exceeded <span style="color:#66d9ef">while</span> awaiting headers<span style="color:#f92672">)</span>
, error: exit status <span style="color:#ae81ff">1</span>
	<span style="color:#f92672">[</span>ERROR ImagePull<span style="color:#f92672">]</span>: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled <span style="color:#66d9ef">while</span> waiting <span style="color:#66d9ef">for</span> connection <span style="color:#f92672">(</span>Client.Timeout exceeded <span style="color:#66d9ef">while</span> awaiting headers<span style="color:#f92672">)</span>
, error: exit status <span style="color:#ae81ff">1</span>
	<span style="color:#f92672">[</span>ERROR ImagePull<span style="color:#f92672">]</span>: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled <span style="color:#66d9ef">while</span> waiting <span style="color:#66d9ef">for</span> connection <span style="color:#f92672">(</span>Client.Timeout exceeded <span style="color:#66d9ef">while</span> awaiting headers<span style="color:#f92672">)</span>
, error: exit status <span style="color:#ae81ff">1</span>
</code></pre></div><p>The output lists exactly which image names and tags the installation needs, so we can simply pull those images in advance and run the install again. Alternatively, you can pre-download the images with <code>kubeadm config images pull</code> before running <code>kubeadm init</code>.</p>
<p>Knowing the names makes this a small problem. The major Chinese cloud providers all offer kubernetes image mirrors. Using Aliyun, for example, I can run</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
</code></pre></div><p>to pull the etcd image, then run</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
</code></pre></div><p>to retag it with the name kubeadm needs during installation. The other images are handled the same way. Note that the images and version numbers you need may differ from the ones listed here; the kubernetes project moves quickly, so go by the error output of your own run, although the handling is identical. (In fact, instead of retagging, kubeadm can also declare the required image names through a yaml file; I will leave that part for you to explore.)</p>
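<p>Since the pull-and-retag dance repeats for every image, it is natural to script it. A hypothetical helper that only prints the commands, so they can be reviewed before being piped to a shell (the mirror prefix is the Aliyun one used above; whether it carries every image is not guaranteed):</p>

```shell
# For each image (name:tag), print a pull from the mirror followed by
# a retag to the k8s.gcr.io name that kubeadm expects.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
gen_pull_cmds() {
    local img
    for img in "$@"; do
        echo "docker pull $MIRROR/$img"
        echo "docker tag $MIRROR/$img k8s.gcr.io/$img"
    done
}
# Review, then execute:
#   gen_pull_cmds etcd:3.2.24 pause:3.1 coredns:1.2.6 | sh
```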
<p>Note: since Aliyun is mirroring someone else's registry, there is no guarantee that every image is available, so here is another convenient way to build the gcr.io images yourself[]</p>
<p>With all the images in place, run the init again:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv ~<span style="color:#f92672">]</span><span style="color:#75715e"># kubeadm init --pod-network-cidr=10.244.0.0/16</span>
I0112 01:35:38.758110    <span style="color:#ae81ff">4544</span> version.go:94<span style="color:#f92672">]</span> could not fetch a Kubernetes version from the internet: unable to get URL <span style="color:#e6db74">&#34;https://dl.k8s.io/release/stable-1.txt&#34;</span>: Get https://dl.k8s.io/release/stable-1.txt: x509: certificate has expired or is not yet valid
I0112 01:35:38.758428    <span style="color:#ae81ff">4544</span> version.go:95<span style="color:#f92672">]</span> falling back to the local client version: v1.13.1
<span style="color:#f92672">[</span>init<span style="color:#f92672">]</span> Using Kubernetes version: v1.13.1
<span style="color:#f92672">[</span>preflight<span style="color:#f92672">]</span> Running pre-flight checks
	<span style="color:#f92672">[</span>WARNING SystemVerification<span style="color:#f92672">]</span>: this Docker version is not on the list of validated versions: 18.09.1. Latest validated version: 18.06
	<span style="color:#f92672">[</span>WARNING Hostname<span style="color:#f92672">]</span>: hostname <span style="color:#e6db74">&#34;miwifi-r1cm-srv&#34;</span> could not be reached
	<span style="color:#f92672">[</span>WARNING Hostname<span style="color:#f92672">]</span>: hostname <span style="color:#e6db74">&#34;miwifi-r1cm-srv&#34;</span>: lookup miwifi-r1cm-srv on 192.168.31.1:53: no such host
<span style="color:#f92672">[</span>preflight<span style="color:#f92672">]</span> Pulling images required <span style="color:#66d9ef">for</span> setting up a Kubernetes cluster
<span style="color:#f92672">[</span>preflight<span style="color:#f92672">]</span> This might take a minute or two, depending on the speed of your internet connection
<span style="color:#f92672">[</span>preflight<span style="color:#f92672">]</span> You can also perform this action in beforehand using <span style="color:#e6db74">&#39;kubeadm config images pull&#39;</span>
<span style="color:#f92672">[</span>kubelet-start<span style="color:#f92672">]</span> Writing kubelet environment file with flags to file <span style="color:#e6db74">&#34;/var/lib/kubelet/kubeadm-flags.env&#34;</span>
<span style="color:#f92672">[</span>kubelet-start<span style="color:#f92672">]</span> Writing kubelet configuration to file <span style="color:#e6db74">&#34;/var/lib/kubelet/config.yaml&#34;</span>
<span style="color:#f92672">[</span>kubelet-start<span style="color:#f92672">]</span> Activating the kubelet service
<span style="color:#f92672">[</span>certs<span style="color:#f92672">]</span> Using certificateDir folder <span style="color:#e6db74">&#34;/etc/kubernetes/pki&#34;</span>
<span style="color:#f92672">[</span>certs<span style="color:#f92672">]</span> Generating <span style="color:#e6db74">&#34;front-proxy-ca&#34;</span> certificate and key
<span style="color:#f92672">[</span>certs<span style="color:#f92672">]</span> Generating <span style="color:#e6db74">&#34;front-proxy-client&#34;</span> certificate and key
<span style="color:#f92672">[</span>certs<span style="color:#f92672">]</span> Generating <span style="color:#e6db74">&#34;etcd/ca&#34;</span> certificate and key
<span style="color:#f92672">[</span>certs<span style="color:#f92672">]</span> Generating <span style="color:#e6db74">&#34;etcd/peer&#34;</span> certificate and key
<span style="color:#f92672">[</span>certs<span style="color:#f92672">]</span> etcd/peer serving cert is signed <span style="color:#66d9ef">for</span> DNS names <span style="color:#f92672">[</span>miwifi-r1cm-srv localhost<span style="color:#f92672">]</span> and IPs <span style="color:#f92672">[</span>192.168.31.175 127.0.0.1 ::1<span style="color:#f92672">]</span>
<span style="color:#f92672">[</span>certs<span style="color:#f92672">]</span> Generating <span style="color:#e6db74">&#34;etcd/healthcheck-client&#34;</span> certificate and key
<span style="color:#f92672">[</span>certs<span style="color:#f92672">]</span> Generating <span style="color:#e6db74">&#34;apiserver-etcd-client&#34;</span> certificate and key
<span style="color:#f92672">[</span>certs<span style="color:#f92672">]</span> Generating <span style="color:#e6db74">&#34;etcd/server&#34;</span> certificate and key
<span style="color:#f92672">[</span>certs<span style="color:#f92672">]</span> etcd/server serving cert is signed <span style="color:#66d9ef">for</span> DNS names <span style="color:#f92672">[</span>miwifi-r1cm-srv localhost<span style="color:#f92672">]</span> and IPs <span style="color:#f92672">[</span>192.168.31.175 127.0.0.1 ::1<span style="color:#f92672">]</span>
<span style="color:#f92672">[</span>certs<span style="color:#f92672">]</span> Generating <span style="color:#e6db74">&#34;ca&#34;</span> certificate and key
<span style="color:#f92672">[</span>certs<span style="color:#f92672">]</span> Generating <span style="color:#e6db74">&#34;apiserver-kubelet-client&#34;</span> certificate and key
<span style="color:#f92672">[</span>certs<span style="color:#f92672">]</span> Generating <span style="color:#e6db74">&#34;apiserver&#34;</span> certificate and key
<span style="color:#f92672">[</span>certs<span style="color:#f92672">]</span> apiserver serving cert is signed <span style="color:#66d9ef">for</span> DNS names <span style="color:#f92672">[</span>miwifi-r1cm-srv kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local<span style="color:#f92672">]</span> and IPs <span style="color:#f92672">[</span>10.96.0.1 192.168.31.175<span style="color:#f92672">]</span>
<span style="color:#f92672">[</span>certs<span style="color:#f92672">]</span> Generating <span style="color:#e6db74">&#34;sa&#34;</span> key and public key
<span style="color:#f92672">[</span>kubeconfig<span style="color:#f92672">]</span> Using kubeconfig folder <span style="color:#e6db74">&#34;/etc/kubernetes&#34;</span>
<span style="color:#f92672">[</span>kubeconfig<span style="color:#f92672">]</span> Writing <span style="color:#e6db74">&#34;admin.conf&#34;</span> kubeconfig file
<span style="color:#f92672">[</span>kubeconfig<span style="color:#f92672">]</span> Writing <span style="color:#e6db74">&#34;kubelet.conf&#34;</span> kubeconfig file
<span style="color:#f92672">[</span>kubeconfig<span style="color:#f92672">]</span> Writing <span style="color:#e6db74">&#34;controller-manager.conf&#34;</span> kubeconfig file
<span style="color:#f92672">[</span>kubeconfig<span style="color:#f92672">]</span> Writing <span style="color:#e6db74">&#34;scheduler.conf&#34;</span> kubeconfig file
<span style="color:#f92672">[</span>control-plane<span style="color:#f92672">]</span> Using manifest folder <span style="color:#e6db74">&#34;/etc/kubernetes/manifests&#34;</span>
<span style="color:#f92672">[</span>control-plane<span style="color:#f92672">]</span> Creating static Pod manifest <span style="color:#66d9ef">for</span> <span style="color:#e6db74">&#34;kube-apiserver&#34;</span>
<span style="color:#f92672">[</span>control-plane<span style="color:#f92672">]</span> Creating static Pod manifest <span style="color:#66d9ef">for</span> <span style="color:#e6db74">&#34;kube-controller-manager&#34;</span>
<span style="color:#f92672">[</span>control-plane<span style="color:#f92672">]</span> Creating static Pod manifest <span style="color:#66d9ef">for</span> <span style="color:#e6db74">&#34;kube-scheduler&#34;</span>
<span style="color:#f92672">[</span>etcd<span style="color:#f92672">]</span> Creating static Pod manifest <span style="color:#66d9ef">for</span> local etcd in <span style="color:#e6db74">&#34;/etc/kubernetes/manifests&#34;</span>
<span style="color:#f92672">[</span>wait-control-plane<span style="color:#f92672">]</span> Waiting <span style="color:#66d9ef">for</span> the kubelet to boot up the control plane as static Pods from directory <span style="color:#e6db74">&#34;/etc/kubernetes/manifests&#34;</span>. This can take up to 4m0s
<span style="color:#f92672">[</span>apiclient<span style="color:#f92672">]</span> All control plane components are healthy after 29.508735 seconds
<span style="color:#f92672">[</span>uploadconfig<span style="color:#f92672">]</span> storing the configuration used in ConfigMap <span style="color:#e6db74">&#34;kubeadm-config&#34;</span> in the <span style="color:#e6db74">&#34;kube-system&#34;</span> Namespace
<span style="color:#f92672">[</span>kubelet<span style="color:#f92672">]</span> Creating a ConfigMap <span style="color:#e6db74">&#34;kubelet-config-1.13&#34;</span> in namespace kube-system with the configuration <span style="color:#66d9ef">for</span> the kubelets in the cluster
<span style="color:#f92672">[</span>patchnode<span style="color:#f92672">]</span> Uploading the CRI Socket information <span style="color:#e6db74">&#34;/var/run/dockershim.sock&#34;</span> to the Node API object <span style="color:#e6db74">&#34;miwifi-r1cm-srv&#34;</span> as an annotation
<span style="color:#f92672">[</span>mark-control-plane<span style="color:#f92672">]</span> Marking the node miwifi-r1cm-srv as control-plane by adding the label <span style="color:#e6db74">&#34;node-role.kubernetes.io/master=&#39;&#39;&#34;</span>
<span style="color:#f92672">[</span>mark-control-plane<span style="color:#f92672">]</span> Marking the node miwifi-r1cm-srv as control-plane by adding the taints <span style="color:#f92672">[</span>node-role.kubernetes.io/master:NoSchedule<span style="color:#f92672">]</span>
<span style="color:#f92672">[</span>bootstrap-token<span style="color:#f92672">]</span> Using token: wde86i.tmjaf7d18v26zg03
<span style="color:#f92672">[</span>bootstrap-token<span style="color:#f92672">]</span> Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
<span style="color:#f92672">[</span>bootstraptoken<span style="color:#f92672">]</span> configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order <span style="color:#66d9ef">for</span> nodes to get long term certificate credentials
<span style="color:#f92672">[</span>bootstraptoken<span style="color:#f92672">]</span> configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
<span style="color:#f92672">[</span>bootstraptoken<span style="color:#f92672">]</span> configured RBAC rules to allow certificate rotation <span style="color:#66d9ef">for</span> all node client certificates in the cluster
<span style="color:#f92672">[</span>bootstraptoken<span style="color:#f92672">]</span> creating the <span style="color:#e6db74">&#34;cluster-info&#34;</span> ConfigMap in the <span style="color:#e6db74">&#34;kube-public&#34;</span> namespace
<span style="color:#f92672">[</span>addons<span style="color:#f92672">]</span> Applied essential addon: CoreDNS
<span style="color:#f92672">[</span>addons<span style="color:#f92672">]</span> Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown <span style="color:#66d9ef">$(</span>id -u<span style="color:#66d9ef">)</span>:<span style="color:#66d9ef">$(</span>id -g<span style="color:#66d9ef">)</span> $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run <span style="color:#e6db74">&#34;kubectl apply -f [podnetwork].yaml&#34;</span> with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.31.175:6443 --token wde86i.tmjaf7d18v26zg03 --discovery-token-ca-cert-hash sha256:b05fa53d8f8c10fa4159ca499eb91cf11fbb9b27801b7ea9eb7d5066d86ae366
</code></pre></div><p>The installation finally succeeded. kubeadm has done a huge amount of work for you: kubelet configuration, all the certificates, the kubeconfig files, addon installation, and more (doing all of this by hand takes forever; once you&#39;ve used kubeadm, you&#39;ll probably never want to install Kubernetes manually again). Note the last line: kubeadm tells you that any other node can join the cluster simply by running that command, which includes the token required to join. It also reminds you that to finish the installation you still need to install a network plugin via <code>kubectl apply -f [podnetwork].yaml</code>, and even gives you the URL listing the available plugins (how thoughtful). Finally, it tells you to run</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown <span style="color:#66d9ef">$(</span>id -u<span style="color:#66d9ef">)</span>:<span style="color:#66d9ef">$(</span>id -g<span style="color:#66d9ef">)</span> $HOME/.kube/config
</code></pre></div><p>These commands copy the admin kubeconfig into the <code>$HOME/.kube</code> directory; this file is what kubectl uses to authenticate to the API server. To use kubectl from other nodes, copy the same file into the corresponding directory on those nodes. Now let&#39;s try:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv yum.repos.d<span style="color:#f92672">]</span><span style="color:#75715e"># kubectl get node</span>
NAME              STATUS   ROLES    AGE     VERSION
miwifi-r1cm-srv   NotReady    master   4h56m   v1.13.1
</code></pre></div><p>The node shows up as <code>NotReady</code>. Don&#39;t worry about that yet (it will turn <code>Ready</code> once a network plugin is installed); first let&#39;s look at what kubeadm has installed for us:</p>
<h4 id="核心组件">Core components</h4>
<p>As mentioned earlier, kubeadm&#39;s approach is to simplify installation by containerizing the main Kubernetes components. That raises an obvious question: the cluster isn&#39;t up yet, so how can any pods be deployed? Surely not with a bare <code>docker run</code>? Of course not. The kubelet has a special launch mechanism called the &#34;static pod&#34;: place a pod manifest in a designated directory, and when the kubelet on that node starts, it automatically launches the pods defined there. The name follows from the mechanism: these pods are never touched by the scheduler and can only run on that node, and because the control-plane static pods run with host networking, their IP address is simply the host&#39;s address. The directory for these pre-defined manifests is <code>/etc/kubernetes/manifests</code>; let&#39;s take a look:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv manifests<span style="color:#f92672">]</span><span style="color:#75715e"># ls -l</span>
total <span style="color:#ae81ff">16</span>
-rw-------. <span style="color:#ae81ff">1</span> root root <span style="color:#ae81ff">1999</span> Jan <span style="color:#ae81ff">12</span> 01:35 etcd.yaml
-rw-------. <span style="color:#ae81ff">1</span> root root <span style="color:#ae81ff">2674</span> Jan <span style="color:#ae81ff">12</span> 01:35 kube-apiserver.yaml
-rw-------. <span style="color:#ae81ff">1</span> root root <span style="color:#ae81ff">2547</span> Jan <span style="color:#ae81ff">12</span> 01:35 kube-controller-manager.yaml
-rw-------. <span style="color:#ae81ff">1</span> root root <span style="color:#ae81ff">1051</span> Jan <span style="color:#ae81ff">12</span> 01:35 kube-scheduler.yaml
</code></pre></div><p>These four are the core Kubernetes components, running on this node as static pods:</p>
<ul>
<li>etcd: the cluster&#39;s datastore. All cluster configuration, secrets, certificates, and so on live here, which is why production deployments always run it as a multi-member cluster; losing it is no joke</li>
<li>kube-apiserver: the RESTful API entry point of Kubernetes. Every other component manipulates cluster resources through the API server, making it the most fundamental component</li>
<li>kube-controller-manager: runs the controllers that manage the lifecycle of pods and other resources</li>
<li>kube-scheduler: schedules pods onto nodes across the cluster
<img src="kubecomponent.png" alt="image"></li>
</ul>
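<p>To make the static pod mechanism concrete, here is a minimal manifest sketch (the pod name and image are hypothetical; kubeadm does not create this). Any file of this shape placed in <code>/etc/kubernetes/manifests</code> is picked up and started by the kubelet of that node, with no scheduler involved:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"># /etc/kubernetes/manifests/static-nginx.yaml (hypothetical example)
apiVersion: v1
kind: Pod
metadata:
  name: static-nginx
  namespace: kube-system
spec:
  hostNetwork: true   # the control-plane static pods also share the host network
  containers:
  - name: nginx
    image: nginx:1.15
</code></pre></div><p>Deleting the file stops the pod again: the kubelet simply keeps its running pods in sync with the directory contents.</p>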
<p>In practical terms, as covered in an earlier post, Docker&#39;s architecture has been refactored to split containerd out as a separate component, so container creation ultimately goes through containerd rather than being performed inside the Docker daemon itself. You can see this in the process list:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv manifests<span style="color:#f92672">]</span><span style="color:#75715e"># ps -ef|grep containerd</span>
root      <span style="color:#ae81ff">3075</span>     <span style="color:#ae81ff">1</span>  <span style="color:#ae81ff">0</span> 00:29 ?        00:00:55 /usr/bin/containerd
root      <span style="color:#ae81ff">4740</span>  <span style="color:#ae81ff">3075</span>  <span style="color:#ae81ff">0</span> 01:35 ?        00:00:01 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/ec93247aeb737218908557f825344b33dd58f0c098bd750c71da1bc0ec9a49b0 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root      <span style="color:#ae81ff">4754</span>  <span style="color:#ae81ff">3075</span>  <span style="color:#ae81ff">0</span> 01:35 ?        00:00:01 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/f738d56f65b9191a63243a1b239bac9c3924b5a2c7c98e725414c247fcffbb8f -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root      <span style="color:#ae81ff">4757</span>  <span style="color:#ae81ff">3</span>
</code></pre></div><p>Process <code>3075</code> is the containerd daemon brought up when the docker service starts, while <code>4740</code> and <code>4754</code> are <code>containerd-shim</code> child processes spawned by containerd to manage the actual container processes. As an aside: in earlier Docker releases these processes were named <code>docker-containerd</code>, <code>docker-containerd-shim</code>, and <code>docker-runc</code>. The docker prefix has now vanished from the process names entirely; the de-docker-ization is getting more and more obvious.</p>
<h4 id="插件addon">Addons</h4>
<ul>
<li>CoreDNS: a CNCF project used mainly for service discovery; it has replaced kube-dns as the default DNS and service-discovery component of Kubernetes</li>
<li>kube-proxy: implements Service load balancing, by default with iptables rules; its performance is nothing to write home about, but it&#39;s worth knowing it exists</li>
</ul>
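<p>To give a feel for what &#34;iptables-based&#34; means, here is a sketch of the kind of NAT rules kube-proxy programs for a Service (illustrative only: the <code>KUBE-SVC</code>/<code>KUBE-SEP</code> chain suffixes are hashes, and the endpoint chains below are made up). The <code>KUBE-SERVICES</code> chain is scanned linearly for every new connection, which is one reason the iptables mode scales poorly with very large numbers of services:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"># traffic to a ClusterIP jumps to a per-service chain
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp --dport 443 -j KUBE-SVC-XXXXXXXXXXXXXXXX
# the service chain picks one backend at random (two endpoints here)
-A KUBE-SVC-XXXXXXXXXXXXXXXX -m statistic --mode random --probability 0.5 -j KUBE-SEP-AAAAAAAAAAAAAAAA
-A KUBE-SVC-XXXXXXXXXXXXXXXX -j KUBE-SEP-BBBBBBBBBBBBBBBB
# each endpoint chain DNATs the connection to the chosen backend
-A KUBE-SEP-AAAAAAAAAAAAAAAA -p tcp -j DNAT --to-destination 192.168.31.175:6443
</code></pre></div>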
<p>Let&#39;s run:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv ~<span style="color:#f92672">]</span><span style="color:#75715e"># kubectl get pods -n kube-system</span>
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-gbgzx                  0/1     Pending   <span style="color:#ae81ff">0</span>          5m28s
coredns-86c58d9df4-kzljk                  0/1     Pending   <span style="color:#ae81ff">0</span>          5m28s
etcd-miwifi-r1cm-srv                      1/1     Running   <span style="color:#ae81ff">0</span>          4m40s
kube-apiserver-miwifi-r1cm-srv            1/1     Running   <span style="color:#ae81ff">0</span>          4m52s
kube-controller-manager-miwifi-r1cm-srv   1/1     Running   <span style="color:#ae81ff">0</span>          5m3s
kube-proxy-9c8cs                          1/1     Running   <span style="color:#ae81ff">0</span>          5m28s
kube-scheduler-miwifi-r1cm-srv            1/1     Running   <span style="color:#ae81ff">0</span>          4m45s
</code></pre></div><p>kubeadm has installed exactly the components I described above, all running as pods. You&#39;ll also notice that the two CoreDNS pods are stuck in <code>Pending</code>: that&#39;s because no network plugin has been installed yet. Following the official addons page mentioned earlier, I&#39;m using flannel here, and installing it is the standard declarative one-liner:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
</code></pre></div><p>Once it&#39;s applied, check the pods again:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv ~<span style="color:#f92672">]</span><span style="color:#75715e"># kubectl get pods -n kube-system</span>
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-gbgzx                  1/1     Running   <span style="color:#ae81ff">0</span>          11m
coredns-86c58d9df4-kzljk                  1/1     Running   <span style="color:#ae81ff">0</span>          11m
etcd-miwifi-r1cm-srv                      1/1     Running   <span style="color:#ae81ff">0</span>          11m
kube-apiserver-miwifi-r1cm-srv            1/1     Running   <span style="color:#ae81ff">0</span>          11m
kube-controller-manager-miwifi-r1cm-srv   1/1     Running   <span style="color:#ae81ff">0</span>          11m
kube-flannel-ds-amd64-kwx59               1/1     Running   <span style="color:#ae81ff">0</span>          57s
kube-proxy-9c8cs                          1/1     Running   <span style="color:#ae81ff">0</span>          11m
kube-scheduler-miwifi-r1cm-srv            1/1     Running   <span style="color:#ae81ff">0</span>          11m
</code></pre></div><p>Both CoreDNS pods are now running, and there is a new pod, <code>kube-flannel-ds-amd64-kwx59</code>: the flannel network plugin we just installed.</p>
<p>Now let&#39;s check the status of the core components:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv yum.repos.d<span style="color:#f92672">]</span><span style="color:#75715e"># kubectl get componentstatus</span>
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   <span style="color:#f92672">{</span><span style="color:#e6db74">&#34;health&#34;</span>: <span style="color:#e6db74">&#34;true&#34;</span><span style="color:#f92672">}</span>
</code></pre></div><p>All components report healthy. Let&#39;s look at the node again:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv yum.repos.d<span style="color:#f92672">]</span><span style="color:#75715e"># kubectl get node</span>
NAME              STATUS   ROLES    AGE     VERSION
miwifi-r1cm-srv   Ready    master   4h56m   v1.13.1
</code></pre></div><p>The node status is <code>Ready</code>, which means our master has been installed successfully. Almost done!
By default the master node carries a <code>NoSchedule</code> taint and will not run application pods, so on a single-node cluster we also need to remove that taint (the trailing <code>-</code> in the command below means &#34;remove&#34;):</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre></div><h3 id="安装dashboard">Installing the Dashboard</h3>
<p>The Kubernetes project ships an official dashboard. The command line still gets most of the day-to-day use, but a UI never hurts, so let&#39;s see how to install it. It is again very simple, a standard declarative install:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
</code></pre></div><p>After installation, check the pods:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv yum.repos.d<span style="color:#f92672">]</span><span style="color:#75715e"># kubectl get po -n kube-system</span>
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-gbgzx                  1/1     Running   <span style="color:#ae81ff">0</span>          4h45m
coredns-86c58d9df4-kzljk                  1/1     Running   <span style="color:#ae81ff">0</span>          4h45m
etcd-miwifi-r1cm-srv                      1/1     Running   <span style="color:#ae81ff">0</span>          4h44m
kube-apiserver-miwifi-r1cm-srv            1/1     Running   <span style="color:#ae81ff">0</span>          4h44m
kube-controller-manager-miwifi-r1cm-srv   1/1     Running   <span style="color:#ae81ff">0</span>          4h44m
kube-flannel-ds-amd64-kwx59               1/1     Running   <span style="color:#ae81ff">0</span>          4h34m
kube-proxy-9c8cs                          1/1     Running   <span style="color:#ae81ff">0</span>          4h45m
kube-scheduler-miwifi-r1cm-srv            1/1     Running   <span style="color:#ae81ff">0</span>          4h44m
kubernetes-dashboard-57df4db6b-bn5vn      1/1     Running   <span style="color:#ae81ff">0</span>          4h8m
</code></pre></div><p>There is a new pod, <code>kubernetes-dashboard-57df4db6b-bn5vn</code>, and it has started normally. For security reasons, however, the dashboard is not exposed outside the cluster by default, so we need to add a Service of type <code>NodePort</code> to provide external access. The Service looks like this (the YAML below is the live object, so it includes server-populated fields such as <code>uid</code> and <code>status</code>):</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">v1</span>
<span style="color:#f92672">kind</span>: <span style="color:#ae81ff">Service</span>
<span style="color:#f92672">metadata</span>:
  <span style="color:#f92672">creationTimestamp</span>: <span style="color:#e6db74">&#34;2019-01-11T18:12:43Z&#34;</span>
  <span style="color:#f92672">labels</span>:
    <span style="color:#f92672">k8s-app</span>: <span style="color:#ae81ff">kubernetes-dashboard</span>
  <span style="color:#f92672">name</span>: <span style="color:#ae81ff">kubernetes-dashboard</span>
  <span style="color:#f92672">namespace</span>: <span style="color:#ae81ff">kube-system</span>
  <span style="color:#f92672">resourceVersion</span>: <span style="color:#e6db74">&#34;6015&#34;</span>
  <span style="color:#f92672">selfLink</span>: <span style="color:#ae81ff">/api/v1/namespaces/kube-system/services/kubernetes-dashboard</span>
  <span style="color:#f92672">uid</span>: <span style="color:#ae81ff">7dd0deb6-15cc-11e9-bb65-08002726d64d</span>
<span style="color:#f92672">spec</span>:
  <span style="color:#f92672">clusterIP</span>: <span style="color:#ae81ff">10.102.157.202</span>
  <span style="color:#f92672">externalTrafficPolicy</span>: <span style="color:#ae81ff">Cluster</span>
  <span style="color:#f92672">ports</span>:
  - <span style="color:#f92672">nodePort</span>: <span style="color:#ae81ff">30443</span>
    <span style="color:#f92672">port</span>: <span style="color:#ae81ff">443</span>
    <span style="color:#f92672">protocol</span>: <span style="color:#ae81ff">TCP</span>
    <span style="color:#f92672">targetPort</span>: <span style="color:#ae81ff">8443</span>
  <span style="color:#f92672">selector</span>:
    <span style="color:#f92672">k8s-app</span>: <span style="color:#ae81ff">kubernetes-dashboard</span>
  <span style="color:#f92672">sessionAffinity</span>: <span style="color:#ae81ff">None</span>
  <span style="color:#f92672">type</span>: <span style="color:#ae81ff">NodePort</span>
<span style="color:#f92672">status</span>:
  <span style="color:#f92672">loadBalancer</span>: {}
</code></pre></div><p>The dashboard itself listens on port 8443; here we map it to NodePort 30443 as the external entry point, so the dashboard can now be reached at <code>https://&lt;node-ip&gt;:30443</code>. Note that if you log in as the ServiceAccount created by the official YAML, you have essentially no permissions and everything comes back forbidden, because the official manifest only binds it to a minimal role. For convenience in this test setup, we&#39;ll just create a cluster-admin account instead:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">v1</span>
<span style="color:#f92672">kind</span>: <span style="color:#ae81ff">ServiceAccount</span>
<span style="color:#f92672">metadata</span>:
  <span style="color:#f92672">name</span>: <span style="color:#ae81ff">dashboard</span>
  <span style="color:#f92672">namespace</span>: <span style="color:#ae81ff">kube-system</span>
---
<span style="color:#f92672">kind</span>: <span style="color:#ae81ff">ClusterRoleBinding</span>
<span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">rbac.authorization.k8s.io/v1beta1</span>
<span style="color:#f92672">metadata</span>:
  <span style="color:#f92672">name</span>: <span style="color:#ae81ff">dashboard</span>
<span style="color:#f92672">subjects</span>:
  - <span style="color:#f92672">kind</span>: <span style="color:#ae81ff">ServiceAccount</span>
    <span style="color:#f92672">name</span>: <span style="color:#ae81ff">dashboard</span>
    <span style="color:#f92672">namespace</span>: <span style="color:#ae81ff">kube-system</span>
<span style="color:#f92672">roleRef</span>:
  <span style="color:#f92672">kind</span>: <span style="color:#ae81ff">ClusterRole</span>
  <span style="color:#f92672">name</span>: <span style="color:#ae81ff">cluster-admin</span>
  <span style="color:#f92672">apiGroup</span>: <span style="color:#ae81ff">rbac.authorization.k8s.io</span>
</code></pre></div><p>Once this is created, Kubernetes automatically generates a token secret for the ServiceAccount, which we can retrieve with:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#f92672">[</span>root@MiWiFi-R1CM-srv yum.repos.d<span style="color:#f92672">]</span><span style="color:#75715e"># kubectl describe secret dashboard -n kube-system</span>
Name:         dashboard-token-s9hqc
Namespace:    kube-system
Labels:       &lt;none&gt;
Annotations:  kubernetes.io/service-account.name: dashboard
              kubernetes.io/service-account.uid: 63c43e1e-15d6-11e9-bb65-08002726d64d

Type:  kubernetes.io/service-account-token

Data
<span style="color:#f92672">====</span>
ca.crt:     <span style="color:#ae81ff">1025</span> bytes
namespace:  <span style="color:#ae81ff">11</span> bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtdG9rZW4tczlocWMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkIiwia3Vi
</code></pre></div><p>Paste this token into the token field of the login screen and you can log in with full permissions.
<img src="https://upload-images.jianshu.io/upload_images/14871146-8ca67573deb59483.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1000/format/webp" alt="dashboard.png">
And with that, a complete single-node Kubernetes cluster is up and running!</p>

        
                
        
              <hr>
              <ul class="pager">
                  
                  <li class="previous">
                      <a href="/post/use-runc-to-create-container/" data-toggle="tooltip" data-placement="top" title="使用runC创建容器">&larr; Previous Post</a>
                  </li>
                  
                  
                  <li class="next">
                      <a href="/post/after-kubernetest-genereation/" data-toggle="tooltip" data-placement="top" title="解读2017之容器篇：后Kubernetes时代 ">Next Post &rarr;</a>
                  </li>
                  
              </ul>
  
              



            </div>
            
            <div class="
                col-lg-8 col-lg-offset-2
                col-md-10 col-md-offset-1
                sidebar-container">

                
                <section>
                    <hr class="hidden-sm hidden-xs">
                    <h5><a href="/tags/">FEATURED TAGS</a></h5>
                    <div class="tags">
                     
                    
                        
                            <a href="/tags/api-gateway" title="api-gateway">
                                api-gateway
                            </a>
                        
                    
                        
                    
                        
                            <a href="/tags/cloud-native" title="cloud-native">
                                cloud-native
                            </a>
                        
                    
                        
                            <a href="/tags/devops" title="devops">
                                devops
                            </a>
                        
                    
                        
                            <a href="/tags/docker" title="docker">
                                docker
                            </a>
                        
                    
                        
                    
                        
                    
                        
                    
                        
                    
                        
                            <a href="/tags/istio" title="istio">
                                istio
                            </a>
                        
                    
                        
                    
                        
                            <a href="/tags/kubernetes" title="kubernetes">
                                kubernetes
                            </a>
                        
                    
                        
                            <a href="/tags/microservice" title="microservice">
                                microservice
                            </a>
                        
                    
                        
                    
                        
                    
                        
                    
                        
                    
                        
                            <a href="/tags/restful" title="restful">
                                restful
                            </a>
                        
                    
                        
                    
                        
                            <a href="/tags/servicemesh" title="servicemesh">
                                servicemesh
                            </a>
                        
                    
                        
                            <a href="/tags/spring-cloud" title="spring-cloud">
                                spring-cloud
                            </a>
                        
                    
                        
                            <a href="/tags/vue" title="vue">
                                vue
                            </a>
                        
                    
                        
                    
                        
                    
                    </div>
                </section>

                
                <hr>
                <h5>FRIENDS</h5>
                <ul class="list-inline">
                    
                        <li><a target="_blank" href="https://skyao.io/">小剑的博客</a></li>
                    
                        <li><a target="_blank" href="https://zhaohuabing.com/">huabing的博客</a></li>
                    
                        <li><a target="_blank" href="http://blog.didispace.com/">程序猿DD的博客</a></li>
                    
                </ul>
            </div>
        </div>
    </div>
</article>




<footer>
    <div class="container">
        <div class="row">
            <div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
                <ul class="list-inline text-center">
                   
                   <li>
                       <a href="" rel="alternate" type="application/rss+xml" title="L&#39;s Blog" >
                           <span class="fa-stack fa-lg">
                               <i class="fa fa-circle fa-stack-2x"></i>
                               <i class="fa fa-rss fa-stack-1x fa-inverse"></i>
                           </span>
                       </a>
                   </li>
                   
                    
                    <li>
                        <a href="mailto:18016380795@163.com">
                            <span class="fa-stack fa-lg">
                                <i class="fa fa-circle fa-stack-2x"></i>
                                <i class="fa fa-envelope fa-stack-1x fa-inverse"></i>
                            </span>
                        </a>
                    </li>
		    
                    
                    
                    
                    

                    

		    
                    
                    <li>
                        <a target="_blank" href="/link%20of%20wechat%20QR%20code%20image">
                            <span class="fa-stack fa-lg">
                                <i class="fa fa-circle fa-stack-2x"></i>
                                <i class="fa fa-wechat fa-stack-1x fa-inverse"></i>
                            </span>
                        </a>
                    </li>
		    
                    
                    <li>
                        <a target="_blank" href="https://github.com/shinji3887">
                            <span class="fa-stack fa-lg">
                                <i class="fa fa-circle fa-stack-2x"></i>
                                <i class="fa fa-github fa-stack-1x fa-inverse"></i>
                            </span>
                        </a>
                    </li>
		    
                    
                    <li>
                        <a target="_blank" href="https://www.linkedin.com/in/lupeier">
                            <span class="fa-stack fa-lg">
                                <i class="fa fa-circle fa-stack-2x"></i>
                                <i class="fa fa-linkedin fa-stack-1x fa-inverse"></i>
                            </span>
                        </a>
                    </li>
		    
                </ul>
		<p class="copyright text-muted">
                    Copyright &copy; L&#39;s Blog , 2019
                    <br>
                    <br>
                    <a href="http://icp.chinaz.com/info?q=lupeier.com" target="_blank">备案号：沪ICP备19022667号-1</a>                    
                </p>
            </div>
        </div>
    </div>
</footer>



<script>
    function async(u, c) {
      var d = document, t = 'script',
          o = d.createElement(t),
          s = d.getElementsByTagName(t)[0];
      o.src = u;
      if (c) { o.addEventListener('load', function (e) { c(null, e); }, false); }
      s.parentNode.insertBefore(o, s);
    }
</script>






<script>
    
    if($('#tag_cloud').length !== 0){
        async("/js/jquery.tagcloud.js",function(){
            $.fn.tagcloud.defaults = {
                
                color: {start: '#bbbbee', end: '#0085a1'},
            };
            $('#tag_cloud a').tagcloud();
        })
    }
</script>


<script>
    async("/js/fastclick.js", function(){
        var $nav = document.querySelector("nav");
        if($nav) FastClick.attach($nav);
    })
</script>


<script>
    (function(){
        var bp = document.createElement('script');
        var curProtocol = window.location.protocol.split(':')[0];
        if (curProtocol === 'https'){
       bp.src = 'https://zz.bdstatic.com/linksubmit/push.js';
      }
      else{
      bp.src = 'http://push.zhanzhang.baidu.com/push.js';
      }
        var s = document.getElementsByTagName("script")[0];
        s.parentNode.insertBefore(bp, s);
    })();
</script>







</body>
</html>
