<!DOCTYPE HTML>
<html>

<head>
	<link rel="icon" type="image/png" href="https://api.vercel.com/now/files/57dbce39c99b5c73e5c7f933a3d4bf1694fd1e41999e4d9f08a96a87733f5940/LogoMakr-6bamev.png"/>
	<link rel="shortcut icon" href="https://api.vercel.com/now/files/57dbce39c99b5c73e5c7f933a3d4bf1694fd1e41999e4d9f08a96a87733f5940/LogoMakr-6bamev.png">
	
			    <title>Fly's Blog</title>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no" />
    <link rel="stylesheet" href="/css/mic_main.css" />
    <link rel="stylesheet" href="/css/dropdownMenu.css" />
    <meta name="keywords" content="miccall" />
    <script data-ad-client="ca-pub-4558202898504715" async     src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
    
    	<script async src="//busuanzi.ibruce.info/busuanzi/2.3/busuanzi.pure.mini.js"></script>
	 
    <noscript>
        <link rel="stylesheet" href="/css/noscript.css" />
    </noscript>
    <style type="text/css">
        body:before {
          content: ' ';
          position: fixed;
          top: 0;
          background: url('/img/bg.jpg') center 0 no-repeat;
          right: 0;
          bottom: 0;
          left: 0;
          background-size: cover; 
        }
    </style>

			    
  
    <script type="text/x-mathjax-config">
      MathJax.Hub.Config({
        tex2jax: {
          inlineMath: [ ['$','$'], ["\\(","\\)"]  ],
          processEscapes: true,
          skipTags: ['script', 'noscript', 'style', 'textarea', 'pre', 'code']
        }
      });
    </script>

    <script type="text/x-mathjax-config">
      MathJax.Hub.Queue(function() {
        var all = MathJax.Hub.getAllJax(), i;
        for (i=0; i < all.length; i += 1) {
          all[i].SourceElement().parentNode.className += ' has-jax';
        }
      });
    </script>
    <script async type="text/javascript" src="//cdn.bootcss.com/mathjax/2.7.1/latest.js?config=TeX-AMS-MML_HTMLorMML"></script>
  


    <script src="/js/jquery.min.js"></script>
    <script src="/js/jquery.scrollex.min.js"></script>
    <script src="/js/jquery.scrolly.min.js"></script>
    <script src="/js/skel.min.js"></script>
    <script src="/js/util.js"></script>
    <script src="/js/main.js"></script>
	
<meta name="generator" content="Hexo 5.4.0"></head>
    
		
<!-- Layouts -->



<!-- Code highlighting -->
<link rel="stylesheet" href="/css/prism_coy.css" />
<link rel="stylesheet" href="/css/typo.css" />
<!-- Post page -->
<body class="is-loading">
    <!-- Wrapper start -->
    <div id="wrapper" class="fade-in">
        <!-- Intro start -->
        <!-- Intro end -->
        <!-- Header logo start -->
        <header id="header">
    <a href="/" class="logo">Fly's Blog</a>
</header>
        <!-- Nav start -->
        <nav id="nav" class="special">
            <ul class="menu links">
			<!-- Homepage -->
			<li>
	            <a href="/" rel="nofollow">Home</a>
	        </li>
			<!-- Categories -->

	        <li class="active">
	            <a href="#s1">Categories</a>
	                    <ul class="submenu">
	                        <li>
	                        <a class="category-link" href="/categories/Welcome/">Welcome</a></li>
	                        <li><a class="category-link" href="/categories/hexo/">hexo</a></li>
	                        <li><a class="category-link" href="/categories/%E5%AD%A6%E4%B9%A0/">Learning</a></li>
	                        <li><a class="category-link" href="/categories/%E5%AD%A6%E4%B9%A0/NLP/">NLP</a></li>
	                        <li><a class="category-link" href="/categories/%E5%AD%A6%E4%B9%A0/git/">git</a></li>
	                        <li><a class="category-link" href="/categories/%E5%AD%A6%E4%B9%A0/%E5%8D%9A%E5%AE%A2/">Blog</a></li>
	                        <li><a class="category-link" href="/categories/%E5%AD%A6%E4%B9%A0/%E5%8D%9A%E5%AE%A2/Netch/">Netch</a></li>
	                        <li><a class="category-link" href="/categories/%E5%AD%A6%E4%B9%A0/%E5%8D%9A%E5%AE%A2/VUE/">VUE</a></li>
	                        <li><a class="category-link" href="/categories/%E5%AD%A6%E4%B9%A0/%E6%95%B0%E5%AD%A6%E5%BB%BA%E6%A8%A1/">Mathematical Modeling</a></li>
	                        <li><a class="category-link" href="/categories/%E6%B1%87%E7%BC%96/">Assembly</a></li>
	                        <li><a class="category-link" href="/categories/%E7%BD%91%E7%9B%98/">Cloud Drive</a></li>
	                    </ul>
	        </li>
	        
	        <!-- Archives -->

	        
		        <!-- Custom pages -->
		        


            </ul>
            <!-- Icons -->
			<ul class="icons">
                    
                    <li>
                        <a title="github" href="https://github.com/yu2256140203" target="_blank" rel="noopener">
                            <i class="icon fa fa-github"></i>
                        </a>
                    </li>
                    
                    <li>
                        <a title="500px" href="http://500px.com" target="_blank" rel="noopener">
                            <i class="icon fa fa-500px"></i>
                        </a>
                    </li>
                    
			</ul>
</nav>

        <div id="main" >
            <div class="post_page_title_img" style="height: 25rem;background-image: url(https://i.loli.net/2020/07/31/rSYOE68HnqgJaU4.png#vwid=1409&amp;vhei=821);background-position: center; background-repeat:no-repeat; background-size:cover;-moz-background-size:cover;overflow:hidden;" >
                <a href="#" style="padding: 4rem 4rem 2rem 4rem ;"><h2>CNN for MNIST Handwritten Digit Recognition: Code and Notes</h2></a>
            </div>
            <!-- Post -->
            <div class="typo" style="padding: 3rem;">
                <blockquote>
<p>Working code and notes for MNIST handwritten digit recognition with a CNN (Convolutional Neural Network)</p>
</blockquote>
<p>First, an overview of the code structure:<br><img src="https://i.loli.net/2020/07/31/rSYOE68HnqgJaU4.png#vwid=1409&vhei=821" alt="CNN for MNIST handwritten digit recognition"><br>Parameters and shapes:<br>MNIST images are 28×28 pixels and grayscale, so they have a single channel.<br>With BATCH_SIZE set to 512, the input tensor has shape (512, 1, 28, 28).<br>My CNN has two convolutional layers, two activation functions, two pooling layers, and two fully connected layers.<br>Each convolution uses a 5×5 kernel with stride 1 (how far the kernel moves per step)<br>and padding = (kernel_size - stride) / 2 = 2 (two rings of zeros around the image tensor), which keeps the spatial size unchanged.<br>1.1 First convolution: 1 input channel, 14 output channels, spatial size unchanged → (BATCH_SIZE, 14, 28, 28)<br>1.2 Activation: negative values in the tensor become 0; the shape does not change → (BATCH_SIZE, 14, 28, 28)<br>1.3 Max pooling: downsamples by keeping only the maximum of each window, halving height and width → (BATCH_SIZE, 14, 14, 14)<br>2.1 Second convolution: 14 input channels, 28 output channels → (BATCH_SIZE, 28, 14, 14)<br>2.2 Activation: shape unchanged → (BATCH_SIZE, 28, 14, 14)<br>2.3 Max pooling: halves height and width again → (BATCH_SIZE, 28, 7, 7)<br>3. view() flattens the tensor to shape (BATCH_SIZE, 28*7*7)<br>4.1 The first fully connected layer maps 28*7*7 features to 200, linearly combining the features the convolutions extracted<br>4.2 The second fully connected layer maps 200 to 10, one score for each of the 10 digit classes, completing a highly non-linear transformation of the input<br>First, the imports:</p>
<pre><code># 1. Import the required libraries
import torch
import torch.nn as nn
import torch.nn.functional as F  # functional API: relu, max_pool2d, loss functions
import torch.optim as optim
from torchvision import datasets, transforms
</code></pre>

<p>Defining the hyperparameters</p>
<pre><code># 2. Define hyperparameters
BATCH_SIZE = 512  # samples per batch
DEVICE = torch.device(&quot;cuda&quot; if torch.cuda.is_available() else &quot;cpu&quot;)  # GPU if available, else CPU
EPOCHS = 10  # number of passes over the training set
</code></pre>
<p>BATCH_SIZE is the number of samples processed per batch.<br>DEVICE selects whether the program runs on the GPU or the CPU, via torch.cuda.is_available().<br>EPOCHS is how many times the training and test routines run; within reason, more epochs improve accuracy.</p>
<p>Preprocessing the images</p>
<pre><code># 3. Create a pipeline to preprocess the images (transforms)
pipeline = transforms.Compose([
    transforms.ToTensor(),  # convert the image to a tensor
    transforms.Normalize((0.1307,), (0.3081,))  # standardize with the MNIST mean and std
])
</code></pre>
<p>ToTensor converts the raw pixel values into a tensor so they can be used in computation.<br>Normalize standardizes each pixel with the dataset mean and standard deviation (0.1307 and 0.3081 for MNIST), which helps the model train stably.</p>
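<p>To make concrete what Normalize does (a minimal plain-Python sketch I added for illustration; the helper name is mine), each pixel value x is mapped to (x - mean) / std:</p>
<pre><code>mean, std = 0.1307, 0.3081  # MNIST training-set mean and standard deviation

def normalize(x):
    # the same arithmetic transforms.Normalize applies to every pixel
    return (x - mean) / std

print(round(normalize(0.0), 4))  # a black pixel -> -0.4242
print(round(normalize(1.0), 4))  # a white pixel ->  2.8215
</code></pre>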
<p>Downloading and loading the dataset</p>
<pre><code># 4. Download and load the data
from torch.utils.data import DataLoader

# Download the datasets
train_set = datasets.MNIST(&quot;data&quot;, train=True, download=True, transform=pipeline)

test_set = datasets.MNIST(&quot;data&quot;, train=False, download=True, transform=pipeline)

# Load the data
train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)  # shuffle the sample order

test_loader = DataLoader(test_set, batch_size=BATCH_SIZE, shuffle=True)
</code></pre>
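<p>What DataLoader does with batch_size and shuffle can be sketched in plain Python (an illustrative stand-in I wrote, not PyTorch's actual implementation):</p>
<pre><code>import random

def simple_loader(dataset, batch_size, shuffle=True):
    # yield the dataset in batches, optionally in a fresh random order
    indices = list(range(len(dataset)))
    if shuffle:
        random.shuffle(indices)  # a new order every epoch
    for start in range(0, len(indices), batch_size):
        yield [dataset[i] for i in indices[start:start + batch_size]]

data = list(range(60000))  # stand-in for the 60,000 MNIST training samples
batches = list(simple_loader(data, 512))
print(len(batches), len(batches[0]), len(batches[-1]))  # 118 batches; the last holds 96
</code></pre>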
<p>Building the network model. I implemented it two ways, with plain Module attributes and with Sequential.<br>1. Module style</p>
<pre><code># 5. Build the network model (Module style)
class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        # in_channels=1 (grayscale), out_channels=14, kernel_size=5, stride=1, padding=2
        self.conv1 = nn.Conv2d(1, 14, 5, 1, 2)
        # in_channels=14, out_channels=28, kernel_size=5, stride=1, padding=2
        self.conv2 = nn.Conv2d(14, 28, 5, 1, 2)
        self.fc1 = nn.Linear(28 * 7 * 7, 200)  # fully connected: 28*7*7 in, 200 out
        self.fc2 = nn.Linear(200, 10)          # 200 in, 10 out (one per digit)

    def forward(self, x):
        input_size = x.size(0)  # batch_size
        x = self.conv1(x)            # in: batch*1*28*28, out: batch*14*28*28
        x = F.relu(x)                # shape unchanged
        x = F.max_pool2d(x, 2, 2)    # out: batch*14*14*14
        x = self.conv2(x)            # out: batch*28*14*14
        x = F.relu(x)                # shape unchanged
        x = F.max_pool2d(x, 2, 2)    # out: batch*28*7*7
        x = x.view(input_size, -1)   # flatten: 28*7*7 = 1372
        x = self.fc1(x)              # in: batch*1372, out: batch*200
        x = F.relu(x)
        x = self.fc2(x)              # in: batch*200, out: batch*10
        output = F.log_softmax(x, dim=1)  # log-probability for each digit class
        return output
</code></pre>
<p>2. Sequential style</p>
<pre><code># Build the network model (Sequential style)
class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(1, 14, 5, 1, 2),  # padding = (kernel_size - stride) / 2
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
        )  # out: batch*14*14*14
        self.conv2 = nn.Sequential(
            nn.Conv2d(14, 28, 5, 1, 2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
        )  # out: batch*28*7*7
        self.fc1 = nn.Linear(28 * 7 * 7, 200)  # fully connected: 28*7*7 in, 200 out
        self.fc2 = nn.Linear(200, 10)          # 200 in, 10 out

    def forward(self, x):
        input_size = x.size(0)  # batch_size
        x = self.conv1(x)  # in: batch*1*28*28, out: batch*14*14*14
        x = self.conv2(x)  # out: batch*28*7*7
        x = x.view(input_size, -1)  # flatten; -1 infers 28*7*7 = 1372
        x = self.fc1(x)  # in: batch*1372, out: batch*200
        x = F.relu(x)    # shape unchanged
        x = self.fc2(x)  # in: batch*200, out: batch*10
        output = F.log_softmax(x, dim=1)  # log-probability for each digit class
        return output
</code></pre>
<p>How I understand the role of each layer and function in the network:<br>1. Convolutional layers extract and abstract features from the image.<br>2. Activation functions add the non-linearity that lets the network express more than linear maps; they leave the shape unchanged.<br>3. Pooling layers downsample, shrinking the image by taking the maximum (max pooling) or the average (average pooling) of each window.<br>4. Fully connected layers distill the result: the first linearly combines the features the convolutions extracted, and the second completes "a highly non-linear transformation of the input".</p>
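<p>The per-layer shapes can be double-checked with the standard output-size formula, out = floor((in + 2*padding - kernel) / stride) + 1 (a small sketch I added; the helper names are mine):</p>
<pre><code>def conv2d_out(size, kernel, stride=1, padding=0):
    # output spatial size of a convolution: floor((size + 2p - k) / s) + 1
    return (size + 2 * padding - kernel) // stride + 1

def maxpool_out(size, kernel):
    # MaxPool2d defaults stride to kernel_size
    return (size - kernel) // kernel + 1

s = conv2d_out(28, kernel=5, padding=2)  # conv1: 28 -> 28
s = maxpool_out(s, 2)                    # pool1: 28 -> 14
s = conv2d_out(s, kernel=5, padding=2)   # conv2: 14 -> 14
s = maxpool_out(s, 2)                    # pool2: 14 -> 7
print(s, 28 * s * s)  # 7 1372, matching fc1's in_features
</code></pre>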
<p>Defining the model and optimizer</p>
<pre><code># 6. Create the model and optimizer
model = CNN().to(DEVICE)  # instantiate the model and move it to the device
print(model)
optimizer = optim.Adam(model.parameters())  # create the optimizer
</code></pre>
<p>Defining the training method</p>
<pre><code># 7. Define the training method
def train_model(model, device, train_loader, optimizer, epoch):  # epoch is the current pass number
    # put the model in training mode
    model.train()
    for batch_index, (data, target) in enumerate(train_loader):  # target is the label
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.cross_entropy(output, target)
        loss.backward()
        optimizer.step()
        if batch_index % 3000 == 0:
            print(&quot;Train Epoch : {} \t Loss : {:.6f}&quot;.format(epoch, loss.item()))
</code></pre>
<p>epoch is the current pass over the data<br>optimizer.zero_grad() resets the accumulated gradients<br>output is the prediction; calling model(data) invokes model.forward()<br>loss computes the cross-entropy loss<br>loss.backward() backpropagates<br>optimizer.step() updates the parameters</p>
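<p>The zero_grad → forward → loss → backward → step cycle is ordinary gradient descent; the same pattern on a toy one-parameter problem, with the gradient written out by hand (my own illustrative sketch, no autograd involved):</p>
<pre><code>def grad_fn(w):
    # analytic gradient of the toy loss f(w) = (w - 3)^2
    return 2.0 * (w - 3.0)

w, lr = 0.0, 0.1  # initial parameter and learning rate
for _ in range(100):
    g = grad_fn(w)  # plays the role of loss.backward()
    w = w - lr * g  # plays the role of optimizer.step()
print(round(w, 4))  # converges to 3.0, the minimizer of the loss
</code></pre>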
<p>Defining the test method; it closely mirrors the training code, so the notes are inline</p>
<pre><code># 8. Define the test method
def test_model(model, device, test_loader):
    # put the model in evaluation mode
    model.eval()
    # number of correct predictions
    correct = 0.0
    # accumulated test loss
    test_loss = 0.0
    with torch.no_grad():  # no gradients are computed and no backpropagation happens
        for data, target in test_loader:
            # move the batch to the device
            data, target = data.to(device), target.to(device)
            # run the test data through the model
            output = model(data)
            # accumulate the test loss
            test_loss += F.cross_entropy(output, target).item()
            # index of the largest value = predicted digit
            pred = output.max(1, keepdim=True)[1]  # (values, indices)
            # pred = output.argmax(dim=1, keepdim=True)  # equivalent
            # accumulate the number of correct predictions
            correct += pred.eq(target.view_as(pred)).sum().item()
        test_loss /= len(test_loader.dataset)
        print(&quot;Test -- Average loss :{:.4f}, Accuracy : {:.3f}\n&quot;.
              format(test_loss, 100.0 * correct / len(test_loader.dataset)))
</code></pre>
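<p>How pred.eq(target).sum() turns the argmax predictions into a correct-count can be mimicked in plain Python (the log-probabilities below are made-up values for illustration):</p>
<pre><code>def argmax(row):
    # index of the largest entry, like output.max(1)[1] for one sample
    return max(range(len(row)), key=row.__getitem__)

outputs = [               # fabricated log-probabilities over 3 classes
    [-0.1, -2.3, -4.0],   # predicts class 0
    [-3.0, -0.2, -1.9],   # predicts class 1
    [-1.5, -0.9, -2.2],   # predicts class 1
]
targets = [0, 1, 2]
preds = [argmax(row) for row in outputs]
correct = sum(p == t for p, t in zip(preds, targets))
print(preds, correct, len(targets))  # [0, 1, 1] 2 3 -> accuracy 2/3
</code></pre>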
<p>Calling the methods</p>
<pre><code># 9. Run training and testing
for epoch in range(1, EPOCHS + 1):
    train_model(model, DEVICE, train_loader, optimizer, epoch)
    test_model(model, DEVICE, test_loader)
</code></pre>
<p>The output</p>
<pre><code>CNN(
  (conv1): Sequential(
    (0): Conv2d(1, 14, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (1): ReLU()
    (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (conv2): Sequential(
    (0): Conv2d(14, 28, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (1): ReLU()
    (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (fc1): Linear(in_features=1372, out_features=200, bias=True)
  (fc2): Linear(in_features=200, out_features=10, bias=True)
)
Train Epoch : 10 	 Loss : 0.014751
Test -- Average loss :0.0001, Accuracy : 99.040
</code></pre>

            </div>

            <!-- Post Comments -->
            
    <!-- 使用 DISQUS_CLICK -->
<div id="disqus-comment">
    <div id="disqus_thread"></div>

<!-- add animation -->
<style>
	.disqus_click_btn {
            line-height: 30px;
            margin: 0;
            min-width: 50px;
            padding: 0 14px;
            display: inline-block;
            font-family: "Roboto", "Helvetica", "Arial", sans-serif;
            font-size: 14px;
            font-weight: 400;
            text-transform: uppercase;
            letter-spacing: 0;
            overflow: hidden;
            will-change: box-shadow;
            transition: box-shadow .2s cubic-bezier(.4, 0, 1, 1), background-color .2s cubic-bezier(.4, 0, .2, 1), color .2s cubic-bezier(.4, 0, .2, 1);
            outline: 0;
            cursor: pointer;
            text-decoration: none;
            text-align: center;
            vertical-align: middle;
            border: 0;
            background: rgba(158, 158, 158, .2);
            box-shadow: 0 2px 2px 0 rgba(0, 0, 0, .14), 0 3px 1px -2px rgba(0, 0, 0, .2), 0 1px 5px 0 rgba(0, 0, 0, .12);
            color: #fff;
            background-color: #7EC0EE;
            text-shadow: 0
        }
</style>
	
<div class="btn_click_load" id="disqus_bt"> 
    <button class="disqus_click_btn">Click to load comments</button>
</div>

<script type="text/javascript">
    var disqus_config = function () {
        this.page.url = 'http://example.com/2021/03/04/CNN%E4%BD%BF%E7%94%A8MNIST%E6%89%8B%E5%86%99%E6%95%B0%E5%AD%97%E8%AF%86%E5%88%AB%E5%AE%9E%E6%88%98%E7%9A%84%E4%BB%A3%E7%A0%81%E5%92%8C%E5%BF%83%E5%BE%97/';  // Replace PAGE_URL with your page's canonical URL variable
        this.page.identifier = 'http://example.com/2021/03/04/CNN%E4%BD%BF%E7%94%A8MNIST%E6%89%8B%E5%86%99%E6%95%B0%E5%AD%97%E8%AF%86%E5%88%AB%E5%AE%9E%E6%88%98%E7%9A%84%E4%BB%A3%E7%A0%81%E5%92%8C%E5%BF%83%E5%BE%97/'; // Replace PAGE_IDENTIFIER with your page's unique identifier variable
    };
</script>

<script type="text/javascript">
    $('.btn_click_load').click(function() {  //click to load comments
        (function() { // DON'T EDIT BELOW THIS LINE
            var d = document;
            var s = d.createElement('script');
            s.src = '//http-miccall-tech.disqus.com/embed.js';
            s.setAttribute('data-timestamp', + new Date());
            (d.head || d.body).appendChild(s);
        })();
        $('.btn_click_load').css('display','none');
    });
</script>
</div>
<style>
    #disqus-comment{
        background-color: #eee;
        padding: 2pc;
    }
</style>


        </div>
        <!-- Copyright start -->
                <div id="copyright">
            <ul>
                <li>&copy;Powered By <a target="_blank" rel="noopener" href="https://hexo.io/zh-cn/" style="border-bottom: none;">hexo</a></li>
                <li>Design: <a target="_blank" rel="noopener" href="http://miccall.tech " style="border-bottom: none;">miccall</a></li>
            </ul>
            
                <span id="busuanzi_container_site_pv">Total site visits: <span id="busuanzi_value_site_pv"></span></span>
			
       <span>「Powered by <a data-from="10680" href="https://webify.cloudbase.net/" target="_blank" rel="nofollow noopener noreferrer">CloudBase Webify</a>」</span>
        </div>
    </div>
</body>



 	
</html>
