

<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
<head>
  <meta charset="utf-8">
  
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  
  <title>TensorFlow Extensions &mdash; 简单粗暴TensorFlow 0.3 beta documentation</title>
  

  
  
  
  

  

  
  
    

  

  
  
    <link rel="stylesheet" href="../_static/css/theme.css" type="text/css" />
  

  

  
        <link rel="index" title="Index"
              href="../genindex.html"/>
        <link rel="search" title="Search" href="../search.html"/>
    <link rel="top" title="简单粗暴TensorFlow 0.3 beta documentation" href="../index.html"/>
        <link rel="next" title="Appendix: Static TensorFlow" href="static.html"/>
        <link rel="prev" title="TensorFlow Models" href="models.html"/> 

  
  <script src="../_static/js/modernizr.min.js"></script>

</head>

<body class="wy-body-for-nav" role="document">

   
  <div class="wy-grid-for-nav">

    
    <nav data-toggle="wy-nav-shift" class="wy-nav-side">
      <div class="wy-side-scroll">
        <div class="wy-side-nav-search">
          

          
            <a href="../index.html" class="icon icon-home"> 简单粗暴TensorFlow
          

          
          </a>

          
            
            
              <div class="version">
                0.3
              </div>
            
          

          
<div role="search">
  <form id="rtd-search-form" class="wy-form" action="../search.html" method="get">
    <input type="text" name="q" placeholder="Search docs" />
    <input type="hidden" name="check_keywords" value="yes" />
    <input type="hidden" name="area" value="default" />
  </form>
</div>

          
        </div>

        <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
          
            
            
              
            
            
              <p class="caption"><span class="caption-text">目录</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../zh/preface.html">前言</a></li>
<li class="toctree-l1"><a class="reference internal" href="../zh/installation.html">TensorFlow安装</a></li>
<li class="toctree-l1"><a class="reference internal" href="../zh/basic.html">TensorFlow基础</a></li>
<li class="toctree-l1"><a class="reference internal" href="../zh/models.html">TensorFlow模型</a></li>
<li class="toctree-l1"><a class="reference internal" href="../zh/extended.html">TensorFlow扩展</a></li>
<li class="toctree-l1"><a class="reference internal" href="../zh/static.html">附录：静态的TensorFlow</a></li>
</ul>
<p class="caption"><span class="caption-text">Contents</span></p>
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="preface.html">Preface</a></li>
<li class="toctree-l1"><a class="reference internal" href="installation.html">TensorFlow Installation</a></li>
<li class="toctree-l1"><a class="reference internal" href="basic.html">TensorFlow Basic</a></li>
<li class="toctree-l1"><a class="reference internal" href="models.html">TensorFlow Models</a></li>
<li class="toctree-l1 current"><a class="current reference internal" href="#">TensorFlow Extensions</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#checkpoint-saving-and-restoring-variables">Checkpoint: Saving and Restoring Variables</a></li>
<li class="toctree-l2"><a class="reference internal" href="#tensorboard-visualization-of-the-training-process">TensorBoard: Visualization of the Training Process</a></li>
<li class="toctree-l2"><a class="reference internal" href="#gpu-usage-and-allocation">GPU Usage and Allocation</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="static.html">Appendix: Static TensorFlow</a></li>
</ul>

            
          
        </div>
      </div>
    </nav>

    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">

      
      <nav class="wy-nav-top" role="navigation" aria-label="top navigation">
        
          <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
          <a href="../index.html">简单粗暴TensorFlow</a>
        
      </nav>


      
      <div class="wy-nav-content">
        <div class="rst-content">
          















<div role="navigation" aria-label="breadcrumbs navigation">

  <ul class="wy-breadcrumbs">
    
      <li><a href="../index.html">Docs</a> &raquo;</li>
        
      <li>TensorFlow Extensions</li>
    
    
      <li class="wy-breadcrumbs-aside">
        
            
            <a href="../_sources/en/extended.rst.txt" rel="nofollow"> View page source</a>
          
        
      </li>
    
  </ul>

  
  <hr/>
</div>
          <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
           <div itemprop="articleBody">
            
  <div class="section" id="tensorflow-extensions">
<h1>TensorFlow Extensions<a class="headerlink" href="#tensorflow-extensions" title="Permalink to this headline">¶</a></h1>
<p>This chapter introduces some of the most commonly used TensorFlow extensions. Although these features are not strictly necessary, they make model training and deployment more convenient.</p>
<p>Prerequisites:</p>
<ul class="simple">
<li><a class="reference external" href="http://www.runoob.com/python3/python3-inputoutput.html">Python serialization module Pickle</a> (not required)</li>
<li><a class="reference external" href="https://eastlakeside.gitbooks.io/interpy-zh/content/args_kwargs/Usage_kwargs.html">Python special function parameters **kwargs</a> (not required)</li>
</ul>
<div class="section" id="checkpoint-saving-and-restoring-variables">
<h2>Checkpoint: Saving and Restoring Variables<a class="headerlink" href="#checkpoint-saving-and-restoring-variables" title="Permalink to this headline">¶</a></h2>
<p>Usually, we want to save the trained parameters (variables) once model training is completed, so that the trained model can be restored directly whenever it is needed. Perhaps the first thing you think of is storing <code class="docutils literal"><span class="pre">model.variables</span></code> with the Python serialization module <code class="docutils literal"><span class="pre">pickle</span></code>. Unfortunately, TensorFlow’s variable type <code class="docutils literal"><span class="pre">ResourceVariable</span></code> cannot be serialized.</p>
<p>Fortunately, TensorFlow provides the powerful variable saving and restoring class <a class="reference external" href="https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint">tf.train.Checkpoint</a>, whose <code class="docutils literal"><span class="pre">save()</span></code> and <code class="docutils literal"><span class="pre">restore()</span></code> methods can save and restore any TensorFlow object that contains checkpointable state. Specifically, <code class="docutils literal"><span class="pre">tf.train.Optimizer</span></code> implementations, <code class="docutils literal"><span class="pre">tf.Variable</span></code>, <code class="docutils literal"><span class="pre">tf.keras.Layer</span></code> implementations and <code class="docutils literal"><span class="pre">tf.keras.Model</span></code> implementations can all be saved. Its usage is very simple: we first declare a Checkpoint:</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">checkpoint</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">Checkpoint</span><span class="p">(</span><span class="n">model</span><span class="o">=</span><span class="n">model</span><span class="p">)</span>
</pre></div>
</div>
<p>The initialization argument passed to <code class="docutils literal"><span class="pre">tf.train.Checkpoint()</span></code> is special: it is a <code class="docutils literal"><span class="pre">**kwargs</span></code>, i.e. a series of key-value pairs. The keys can be chosen arbitrarily, and the values are the objects to be saved. For example, if we want to save a model instance <code class="docutils literal"><span class="pre">model</span></code> that inherits from <code class="docutils literal"><span class="pre">tf.keras.Model</span></code> and an optimizer <code class="docutils literal"><span class="pre">optimizer</span></code> that inherits from <code class="docutils literal"><span class="pre">tf.train.Optimizer</span></code>, we can write:</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">checkpoint</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">Checkpoint</span><span class="p">(</span><span class="n">myAwesomeModel</span><span class="o">=</span><span class="n">model</span><span class="p">,</span> <span class="n">myAwesomeOptimizer</span><span class="o">=</span><span class="n">optimizer</span><span class="p">)</span>
</pre></div>
</div>
<p>Here <code class="docutils literal"><span class="pre">myAwesomeModel</span></code> is an arbitrary key we chose for the model <code class="docutils literal"><span class="pre">model</span></code> to be saved. Note that we will use the same key when restoring the variables.</p>
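<p>To make the <code class="docutils literal"><span class="pre">**kwargs</span></code> mechanism concrete, here is a toy sketch in plain Python (not TensorFlow; the class name is hypothetical) of how a constructor of this kind maps freely chosen keys to the objects to be saved:</p>

```python
class CheckpointSketch:
    """Toy stand-in for tf.train.Checkpoint's constructor pattern."""
    def __init__(self, **kwargs):
        # **kwargs collects all keyword arguments into a dict;
        # the keys are chosen freely by the caller.
        self._objects = dict(kwargs)

    def saved_keys(self):
        return sorted(self._objects)

ckpt = CheckpointSketch(myAwesomeModel="a model", myAwesomeOptimizer="an optimizer")
print(ckpt.saved_keys())  # ['myAwesomeModel', 'myAwesomeOptimizer']
```

<p>The same keys must be used again at restore time so that each saved object can be matched back to its counterpart.</p>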
<p>Next, when the trained model needs to be saved, use:</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">checkpoint</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="n">save_path_with_prefix</span><span class="p">)</span>
</pre></div>
</div>
<p>where <code class="docutils literal"><span class="pre">save_path_with_prefix</span></code> is the save directory plus a file prefix. For example, if you create a folder named “save” in the source code directory and call <code class="docutils literal"><span class="pre">checkpoint.save('./save/model.ckpt')</span></code> once, you will find three files in that directory: <code class="docutils literal"><span class="pre">checkpoint</span></code>, <code class="docutils literal"><span class="pre">model.ckpt-1.index</span></code> and <code class="docutils literal"><span class="pre">model.ckpt-1.data-00000-of-00001</span></code>, which record the variable information. <code class="docutils literal"><span class="pre">checkpoint.save()</span></code> can be called multiple times; each call produces one .index file and one .data file, with the serial number increasing each time.</p>
<p>When you need to reload the previously saved parameters into a model elsewhere, instantiate a Checkpoint again with the same keys, and call its <code class="docutils literal"><span class="pre">restore()</span></code> method, like this:</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">model_to_be_restored</span> <span class="o">=</span> <span class="n">MyModel</span><span class="p">()</span>                                        <span class="c1"># The same model of the parameter to be restored</span>
<span class="n">checkpoint</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">Checkpoint</span><span class="p">(</span><span class="n">myAwesomeModel</span><span class="o">=</span><span class="n">model_to_be_restored</span><span class="p">)</span>   <span class="c1"># The key remains as &quot;myAwesomeModel&quot;</span>
<span class="n">checkpoint</span><span class="o">.</span><span class="n">restore</span><span class="p">(</span><span class="n">save_path_with_prefix_and_index</span><span class="p">)</span>
</pre></div>
</div>
<p>Then the model variables are restored. <code class="docutils literal"><span class="pre">save_path_with_prefix_and_index</span></code> is the directory plus prefix plus serial number of the previously saved file. For example, calling <code class="docutils literal"><span class="pre">checkpoint.restore('./save/model.ckpt-1')</span></code> loads the saved file with prefix <code class="docutils literal"><span class="pre">model.ckpt</span></code> and serial number 1 to restore the model.</p>
<p>When multiple files have been saved, we often want to load the most recent one. The helper function <code class="docutils literal"><span class="pre">tf.train.latest_checkpoint(save_path)</span></code> returns the file name of the most recent checkpoint in a directory. For example, if the save directory contains the files <code class="docutils literal"><span class="pre">model.ckpt-1.index</span></code> through <code class="docutils literal"><span class="pre">model.ckpt-10.index</span></code>, <code class="docutils literal"><span class="pre">tf.train.latest_checkpoint('./save')</span></code> returns <code class="docutils literal"><span class="pre">./save/model.ckpt-10</span></code>.</p>
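<p>The selection logic can be illustrated with a rough pure-Python sketch (a hypothetical helper, not the TensorFlow implementation, which actually consults the <code class="docutils literal"><span class="pre">checkpoint</span></code> state file in the directory):</p>

```python
import re

def latest_checkpoint_sketch(filenames):
    """Among names like '<prefix>-<n>.index', return the
    '<prefix>-<n>' with the largest serial number n."""
    best, best_n = None, -1
    for name in filenames:
        m = re.fullmatch(r"(.+)-(\d+)\.index", name)
        if m and int(m.group(2)) > best_n:
            best_n = int(m.group(2))
            best = "%s-%d" % (m.group(1), best_n)
    return best

files = ["model.ckpt-%d.index" % i for i in range(1, 11)]
print(latest_checkpoint_sketch(files))  # model.ckpt-10
```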
<p>In general, the typical framework for restoring and saving variables is as follows:</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="c1"># train.py - Model training phase</span>

<span class="n">model</span> <span class="o">=</span> <span class="n">MyModel</span><span class="p">()</span>
<span class="n">checkpoint</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">Checkpoint</span><span class="p">(</span><span class="n">myModel</span><span class="o">=</span><span class="n">model</span><span class="p">)</span>     <span class="c1"># Instantiate Checkpoint, specify the save object as model (if you need to save the optimizer&#39;s parameters, you can also add it)</span>
<span class="c1"># Model training code</span>
<span class="n">checkpoint</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="s1">&#39;./save/model.ckpt&#39;</span><span class="p">)</span>                <span class="c1"># Save the parameters to a file after the model is trained, or save it periodically during the training process.</span>
</pre></div>
</div>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="c1"># test.py - Model use phase</span>

<span class="n">model</span> <span class="o">=</span> <span class="n">MyModel</span><span class="p">()</span>
<span class="n">checkpoint</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">Checkpoint</span><span class="p">(</span><span class="n">myModel</span><span class="o">=</span><span class="n">model</span><span class="p">)</span>             <span class="c1"># Instantiate Checkpoint, specify the recovery object as model</span>
<span class="n">checkpoint</span><span class="o">.</span><span class="n">restore</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">latest_checkpoint</span><span class="p">(</span><span class="s1">&#39;./save&#39;</span><span class="p">))</span>    <span class="c1"># Restore model parameters from file</span>
<span class="c1"># Model usage code</span>
</pre></div>
</div>
<p>By the way, <code class="docutils literal"><span class="pre">tf.train.Checkpoint</span></code> is more powerful than <code class="docutils literal"><span class="pre">tf.train.Saver</span></code>, which was commonly used in previous versions, because it supports “delayed” restoration of variables under Eager Execution. Specifically, if <code class="docutils literal"><span class="pre">checkpoint.restore()</span></code> is called before the variables in the model have been created, Checkpoint waits until the variables are created and then restores their values. Under Eager Execution, the layers of a model are initialized and their variables created when the model is first called (the advantage being that variable shapes can be determined automatically from the input tensor shape, without manual specification). This means that a freshly instantiated model actually contains no variables yet, so restoring variable values at that point in the old way would raise an error. For example, if you save the model parameters in train.py by calling the <code class="docutils literal"><span class="pre">save_weights()</span></code> method of <code class="docutils literal"><span class="pre">tf.keras.Model</span></code>, and in test.py call <code class="docutils literal"><span class="pre">load_weights()</span></code> immediately after instantiating the model, you will get an error; only after the model has been called once can <code class="docutils literal"><span class="pre">load_weights()</span></code> give the correct result. Clearly, <code class="docutils literal"><span class="pre">tf.train.Checkpoint</span></code> brings considerable convenience in this case. In addition, <code class="docutils literal"><span class="pre">tf.train.Checkpoint</span></code> also supports Graph Execution mode.</p>
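<p>The “delayed” restoration behaviour can be sketched in plain Python (a toy illustration under heavy simplification, not the TensorFlow implementation):</p>

```python
class DeferredCheckpointSketch:
    """Toy model of delayed restoration: a value read from a
    checkpoint is held until the variable exists, then applied."""
    def __init__(self):
        self.variable = None   # not created until the model is first called
        self._pending = None   # checkpoint value waiting to be restored

    def restore(self, saved_value):
        if self.variable is None:
            self._pending = saved_value   # variable not built yet: defer
        else:
            self.variable = saved_value   # variable exists: restore now

    def first_call(self, initial_value):
        # The first call of the model creates the variable...
        self.variable = initial_value
        # ...and any deferred checkpoint value is applied at that moment.
        if self._pending is not None:
            self.variable = self._pending
            self._pending = None

ckpt = DeferredCheckpointSketch()
ckpt.restore(42)      # called before the variable exists: no error, deferred
ckpt.first_call(0)    # variable created, deferred value applied
print(ckpt.variable)  # 42
```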
<p>Finally, we provide an example based on the previous chapter’s <a class="reference internal" href="../zh/models.html#mlp"><span class="std std-ref">multilayer perceptron model</span></a>, showing the saving and loading of model variables:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="kn">from</span> <span class="nn">en.model.mlp.mlp</span> <span class="k">import</span> <span class="n">MLP</span>
<span class="kn">from</span> <span class="nn">en.model.mlp.utils</span> <span class="k">import</span> <span class="n">DataLoader</span>

<span class="n">tf</span><span class="o">.</span><span class="n">enable_eager_execution</span><span class="p">()</span>
<span class="n">mode</span> <span class="o">=</span> <span class="s1">&#39;test&#39;</span>
<span class="n">num_batches</span> <span class="o">=</span> <span class="mi">1000</span>
<span class="n">batch_size</span> <span class="o">=</span> <span class="mi">50</span>
<span class="n">learning_rate</span> <span class="o">=</span> <span class="mf">0.001</span>
<span class="n">data_loader</span> <span class="o">=</span> <span class="n">DataLoader</span><span class="p">()</span>


<span class="k">def</span> <span class="nf">train</span><span class="p">():</span>
    <span class="n">model</span> <span class="o">=</span> <span class="n">MLP</span><span class="p">()</span>
    <span class="n">optimizer</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">AdamOptimizer</span><span class="p">(</span><span class="n">learning_rate</span><span class="o">=</span><span class="n">learning_rate</span><span class="p">)</span>
    <span class="n">checkpoint</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">Checkpoint</span><span class="p">(</span><span class="n">myAwesomeModel</span><span class="o">=</span><span class="n">model</span><span class="p">)</span>      <span class="c1"># instantiate a Checkpoint, set `model` as object to be saved</span>
    <span class="k">for</span> <span class="n">batch_index</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_batches</span><span class="p">):</span>
        <span class="n">X</span><span class="p">,</span> <span class="n">y</span> <span class="o">=</span> <span class="n">data_loader</span><span class="o">.</span><span class="n">get_batch</span><span class="p">(</span><span class="n">batch_size</span><span class="p">)</span>
        <span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">GradientTape</span><span class="p">()</span> <span class="k">as</span> <span class="n">tape</span><span class="p">:</span>
            <span class="n">y_logit_pred</span> <span class="o">=</span> <span class="n">model</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">convert_to_tensor</span><span class="p">(</span><span class="n">X</span><span class="p">))</span>
            <span class="n">loss</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">losses</span><span class="o">.</span><span class="n">sparse_softmax_cross_entropy</span><span class="p">(</span><span class="n">labels</span><span class="o">=</span><span class="n">y</span><span class="p">,</span> <span class="n">logits</span><span class="o">=</span><span class="n">y_logit_pred</span><span class="p">)</span>
            <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;batch </span><span class="si">%d</span><span class="s2">: loss </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">batch_index</span><span class="p">,</span> <span class="n">loss</span><span class="o">.</span><span class="n">numpy</span><span class="p">()))</span>
        <span class="n">grads</span> <span class="o">=</span> <span class="n">tape</span><span class="o">.</span><span class="n">gradient</span><span class="p">(</span><span class="n">loss</span><span class="p">,</span> <span class="n">model</span><span class="o">.</span><span class="n">variables</span><span class="p">)</span>
        <span class="n">optimizer</span><span class="o">.</span><span class="n">apply_gradients</span><span class="p">(</span><span class="n">grads_and_vars</span><span class="o">=</span><span class="nb">zip</span><span class="p">(</span><span class="n">grads</span><span class="p">,</span> <span class="n">model</span><span class="o">.</span><span class="n">variables</span><span class="p">))</span>
        <span class="k">if</span> <span class="p">(</span><span class="n">batch_index</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)</span> <span class="o">%</span> <span class="mi">100</span> <span class="o">==</span> <span class="mi">0</span><span class="p">:</span>                        <span class="c1"># save every 100 batches</span>
            <span class="n">checkpoint</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="s1">&#39;./save/model.ckpt&#39;</span><span class="p">)</span>                <span class="c1"># save model to .ckpt file</span>


<span class="k">def</span> <span class="nf">test</span><span class="p">():</span>
    <span class="n">model_to_be_restored</span> <span class="o">=</span> <span class="n">MLP</span><span class="p">()</span>
    <span class="n">checkpoint</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">Checkpoint</span><span class="p">(</span><span class="n">myAwesomeModel</span><span class="o">=</span><span class="n">model_to_be_restored</span><span class="p">)</span>      <span class="c1"># instantiate a Checkpoint, set newly initialized model `model_to_be_restored` to be the object to be restored</span>
    <span class="n">checkpoint</span><span class="o">.</span><span class="n">restore</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">latest_checkpoint</span><span class="p">(</span><span class="s1">&#39;./save&#39;</span><span class="p">))</span>    <span class="c1"># restore parameters of model from file</span>
    <span class="n">num_eval_samples</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">shape</span><span class="p">(</span><span class="n">data_loader</span><span class="o">.</span><span class="n">eval_labels</span><span class="p">)[</span><span class="mi">0</span><span class="p">]</span>
    <span class="n">y_pred</span> <span class="o">=</span> <span class="n">model_to_be_restored</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">constant</span><span class="p">(</span><span class="n">data_loader</span><span class="o">.</span><span class="n">eval_data</span><span class="p">))</span><span class="o">.</span><span class="n">numpy</span><span class="p">()</span>
    <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;test accuracy: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="nb">sum</span><span class="p">(</span><span class="n">y_pred</span> <span class="o">==</span> <span class="n">data_loader</span><span class="o">.</span><span class="n">eval_labels</span><span class="p">)</span> <span class="o">/</span> <span class="n">num_eval_samples</span><span class="p">))</span>


<span class="k">if</span> <span class="vm">__name__</span> <span class="o">==</span> <span class="s1">&#39;__main__&#39;</span><span class="p">:</span>
    <span class="k">if</span> <span class="n">mode</span> <span class="o">==</span> <span class="s1">&#39;train&#39;</span><span class="p">:</span>
        <span class="n">train</span><span class="p">()</span>
    <span class="k">if</span> <span class="n">mode</span> <span class="o">==</span> <span class="s1">&#39;test&#39;</span><span class="p">:</span>
        <span class="n">test</span><span class="p">()</span>
</pre></div>
</div>
<p>After creating the save folder in the source code directory and training the model with <code class="docutils literal"><span class="pre">mode</span> <span class="pre">=</span> <span class="pre">'train'</span></code>, the model variables are saved to the save folder every 100 batches. Then set <code class="docutils literal"><span class="pre">mode</span> <span class="pre">=</span> <span class="pre">'test'</span></code> and run the code again: the model is restored from the last saved variable values and evaluated on the test set, directly reaching an accuracy of about 95%.</p>
</div>
<div class="section" id="tensorboard-visualization-of-the-training-process">
<h2>TensorBoard: Visualization of the Training Process<a class="headerlink" href="#tensorboard-visualization-of-the-training-process" title="Permalink to this headline">¶</a></h2>
<p>Sometimes you want to see how parameters change during model training (for example, the value of the loss function). Although they can be viewed through terminal output, this is sometimes not intuitive enough. TensorBoard is a tool that helps us visualize the training process.</p>
<p>Currently, TensorBoard support in Eager Execution mode still lives in <a class="reference external" href="https://www.tensorflow.org/api_docs/python/tf/contrib/summary">tf.contrib.summary</a> and may change in the future, so only a simple example is given here. First, create a folder (such as ./tensorboard) in the source code directory to store the TensorBoard record files, and instantiate a logger in the code:</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">summary_writer</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">contrib</span><span class="o">.</span><span class="n">summary</span><span class="o">.</span><span class="n">create_file_writer</span><span class="p">(</span><span class="s1">&#39;./tensorboard&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>Next, place the training code inside a <code class="docutils literal"><span class="pre">with</span></code> statement over the contexts of <code class="docutils literal"><span class="pre">summary_writer.as_default()</span></code> and <code class="docutils literal"><span class="pre">tf.contrib.summary.always_record_summaries()</span></code>, and run <code class="docutils literal"><span class="pre">tf.contrib.summary.scalar(name,</span> <span class="pre">tensor,</span> <span class="pre">step=batch_index)</span></code> for each parameter (usually a scalar) that needs to be logged. The <code class="docutils literal"><span class="pre">step</span></code> parameter can be set as needed; commonly it is the index of the current training batch. The overall framework is as follows:</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">summary_writer</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">contrib</span><span class="o">.</span><span class="n">summary</span><span class="o">.</span><span class="n">create_file_writer</span><span class="p">(</span><span class="s1">&#39;./tensorboard&#39;</span><span class="p">)</span>
<span class="k">with</span> <span class="n">summary_writer</span><span class="o">.</span><span class="n">as_default</span><span class="p">(),</span> <span class="n">tf</span><span class="o">.</span><span class="n">contrib</span><span class="o">.</span><span class="n">summary</span><span class="o">.</span><span class="n">always_record_summaries</span><span class="p">():</span>
    <span class="c1"># Start model training</span>
    <span class="k">for</span> <span class="n">batch_index</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_batches</span><span class="p">):</span>
        <span class="c1"># Training code, the current loss of batch is put into the variable &quot;loss&quot;</span>
        <span class="n">tf</span><span class="o">.</span><span class="n">contrib</span><span class="o">.</span><span class="n">summary</span><span class="o">.</span><span class="n">scalar</span><span class="p">(</span><span class="s2">&quot;loss&quot;</span><span class="p">,</span> <span class="n">loss</span><span class="p">,</span> <span class="n">step</span><span class="o">=</span><span class="n">batch_index</span><span class="p">)</span>
        <span class="n">tf</span><span class="o">.</span><span class="n">contrib</span><span class="o">.</span><span class="n">summary</span><span class="o">.</span><span class="n">scalar</span><span class="p">(</span><span class="s2">&quot;MyScalar&quot;</span><span class="p">,</span> <span class="n">my_scalar</span><span class="p">,</span> <span class="n">step</span><span class="o">=</span><span class="n">batch_index</span><span class="p">)</span>  <span class="c1"># You can also add other variables</span>
</pre></div>
</div>
<p>Each time <code class="docutils literal"><span class="pre">tf.contrib.summary.scalar()</span></code> is run, the logger writes a record to the log file. Besides simple scalars, TensorBoard can also visualize other types of data, such as images and audio, as described in the <a class="reference external" href="https://www.tensorflow.org/api_docs/python/tf/contrib/summary">API documentation</a>.</p>
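<p>The append-only logging pattern can be mimicked in plain Python (a toy stand-in, not the tf.contrib.summary event-file format):</p>

```python
import io
import json

class ScalarLoggerSketch:
    """Each scalar() call appends one (tag, step, value) record."""
    def __init__(self, stream):
        self.stream = stream

    def scalar(self, tag, value, step):
        record = {"tag": tag, "step": step, "value": value}
        self.stream.write(json.dumps(record) + "\n")

log = io.StringIO()
logger = ScalarLoggerSketch(log)
for batch_index in range(3):
    loss = 1.0 / (batch_index + 1)  # stand-in for the batch loss
    logger.scalar("loss", loss, step=batch_index)
print(len(log.getvalue().splitlines()))  # 3
```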
<p>When we want to visualize the training process, open a terminal in the source code directory (entering the TensorFlow conda environment if necessary) and run:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">tensorboard</span> <span class="o">--</span><span class="n">logdir</span><span class="o">=./</span><span class="n">tensorboard</span>
</pre></div>
</div>
<p>Then use a browser to visit the URL output by the terminal (usually <a class="reference external" href="http://computer_name:6006">http://computer_name:6006</a>) to access the TensorBoard interface, as shown below:</p>
<div class="figure align-center">
<a class="reference internal image-reference" href="../_images/tensorboard.png"><img alt="../_images/tensorboard.png" src="../_images/tensorboard.png" style="width: 100%;" /></a>
</div>
<p>By default, TensorBoard updates its data every 30 seconds. You can also refresh manually by clicking the refresh button in the upper right corner.</p>
<p>When using TensorBoard, please note the following:</p>
<ul class="simple">
<li>If you want to retrain, delete the contents of the log folder and restart TensorBoard (or create a new log folder and start TensorBoard with the <code class="docutils literal"><span class="pre">--logdir</span></code> parameter pointing to the newly created folder);</li>
<li>The log directory path should contain only English (ASCII) characters.</li>
</ul>
<p>Finally, here is an example based on the previous chapter’s <a class="reference internal" href="../zh/models.html#mlp"><span class="std std-ref">multilayer perceptron model</span></a>, showing the use of TensorBoard:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="kn">from</span> <span class="nn">en.model.mlp.mlp</span> <span class="k">import</span> <span class="n">MLP</span>
<span class="kn">from</span> <span class="nn">en.model.mlp.utils</span> <span class="k">import</span> <span class="n">DataLoader</span>

<span class="n">tf</span><span class="o">.</span><span class="n">enable_eager_execution</span><span class="p">()</span>
<span class="n">num_batches</span> <span class="o">=</span> <span class="mi">10000</span>
<span class="n">batch_size</span> <span class="o">=</span> <span class="mi">50</span>
<span class="n">learning_rate</span> <span class="o">=</span> <span class="mf">0.001</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">MLP</span><span class="p">()</span>
<span class="n">data_loader</span> <span class="o">=</span> <span class="n">DataLoader</span><span class="p">()</span>
<span class="n">optimizer</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">AdamOptimizer</span><span class="p">(</span><span class="n">learning_rate</span><span class="o">=</span><span class="n">learning_rate</span><span class="p">)</span>
<span class="n">summary_writer</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">contrib</span><span class="o">.</span><span class="n">summary</span><span class="o">.</span><span class="n">create_file_writer</span><span class="p">(</span><span class="s1">&#39;./tensorboard&#39;</span><span class="p">)</span>     <span class="c1"># instantiate a logger</span>
<span class="k">with</span> <span class="n">summary_writer</span><span class="o">.</span><span class="n">as_default</span><span class="p">(),</span> <span class="n">tf</span><span class="o">.</span><span class="n">contrib</span><span class="o">.</span><span class="n">summary</span><span class="o">.</span><span class="n">always_record_summaries</span><span class="p">():</span>
    <span class="k">for</span> <span class="n">batch_index</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_batches</span><span class="p">):</span>
        <span class="n">X</span><span class="p">,</span> <span class="n">y</span> <span class="o">=</span> <span class="n">data_loader</span><span class="o">.</span><span class="n">get_batch</span><span class="p">(</span><span class="n">batch_size</span><span class="p">)</span>
        <span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">GradientTape</span><span class="p">()</span> <span class="k">as</span> <span class="n">tape</span><span class="p">:</span>
            <span class="n">y_logit_pred</span> <span class="o">=</span> <span class="n">model</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">convert_to_tensor</span><span class="p">(</span><span class="n">X</span><span class="p">))</span>
            <span class="n">loss</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">losses</span><span class="o">.</span><span class="n">sparse_softmax_cross_entropy</span><span class="p">(</span><span class="n">labels</span><span class="o">=</span><span class="n">y</span><span class="p">,</span> <span class="n">logits</span><span class="o">=</span><span class="n">y_logit_pred</span><span class="p">)</span>
            <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;batch </span><span class="si">%d</span><span class="s2">: loss </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">batch_index</span><span class="p">,</span> <span class="n">loss</span><span class="o">.</span><span class="n">numpy</span><span class="p">()))</span>
            <span class="n">tf</span><span class="o">.</span><span class="n">contrib</span><span class="o">.</span><span class="n">summary</span><span class="o">.</span><span class="n">scalar</span><span class="p">(</span><span class="s2">&quot;loss&quot;</span><span class="p">,</span> <span class="n">loss</span><span class="p">,</span> <span class="n">step</span><span class="o">=</span><span class="n">batch_index</span><span class="p">)</span>       <span class="c1"># log current loss</span>
        <span class="n">grads</span> <span class="o">=</span> <span class="n">tape</span><span class="o">.</span><span class="n">gradient</span><span class="p">(</span><span class="n">loss</span><span class="p">,</span> <span class="n">model</span><span class="o">.</span><span class="n">variables</span><span class="p">)</span>
        <span class="n">optimizer</span><span class="o">.</span><span class="n">apply_gradients</span><span class="p">(</span><span class="n">grads_and_vars</span><span class="o">=</span><span class="nb">zip</span><span class="p">(</span><span class="n">grads</span><span class="p">,</span> <span class="n">model</span><span class="o">.</span><span class="n">variables</span><span class="p">))</span>
</pre></div>
</div>
</div>
<div class="section" id="gpu-usage-and-allocation">
<h2>GPU Usage and Allocation<a class="headerlink" href="#gpu-usage-and-allocation" title="永久链接至标题">¶</a></h2>
<p>A common scenario: many students or researchers in a lab or company research group need to use GPUs, but there is only one multi-GPU machine. In this case, you need to pay attention to how GPU resources are allocated.</p>
<p>The command <code class="docutils literal"><span class="pre">nvidia-smi</span></code> shows the GPUs on the machine and their current usage (on Windows, add <code class="docutils literal"><span class="pre">C:\Program</span> <span class="pre">Files\NVIDIA</span> <span class="pre">Corporation\NVSMI</span></code> to the environment variable “Path”; on Windows 10 you can also view graphics card information in the Performance tab of the Task Manager).</p>
<p>Use the environment variable <code class="docutils literal"><span class="pre">CUDA_VISIBLE_DEVICES</span></code> to control which GPUs a program uses. Assume that, on a four-GPU machine, GPUs 0 and 1 are in use while GPUs 2 and 3 are idle. Then type in the Linux terminal:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">export</span> <span class="n">CUDA_VISIBLE_DEVICES</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span>
</pre></div>
</div>
<p>or add the following to the code:</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">os</span>
<span class="n">os</span><span class="o">.</span><span class="n">environ</span><span class="p">[</span><span class="s1">&#39;CUDA_VISIBLE_DEVICES&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="s2">&quot;2,3&quot;</span>
</pre></div>
</div>
<p>to specify that the program runs only on GPUs 2 and 3.</p>
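<p>Note that <code class="docutils literal"><span class="pre">CUDA_VISIBLE_DEVICES</span></code> only takes effect if it is set before TensorFlow initializes CUDA, i.e. before the first <code class="docutils literal"><span class="pre">import</span> <span class="pre">tensorflow</span></code> in the process; a minimal sketch:</p>

```python
import os

# Must run before the first `import tensorflow`, because TensorFlow
# enumerates the CUDA devices once, at initialization time.
os.environ['CUDA_VISIBLE_DEVICES'] = "2,3"

# From TensorFlow's point of view, physical GPUs 2 and 3 will then
# appear as devices "/gpu:0" and "/gpu:1" respectively.
print(os.environ['CUDA_VISIBLE_DEVICES'])
```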
<p>By default, TensorFlow uses almost all available GPU memory to avoid the performance loss caused by memory fragmentation. You can set the strategy TensorFlow uses for GPU memory through the <code class="docutils literal"><span class="pre">tf.ConfigProto</span></code> class: instantiate a <code class="docutils literal"><span class="pre">tf.ConfigProto</span></code> object, set its parameters, and pass it as the <code class="docutils literal"><span class="pre">config</span></code> parameter of <code class="docutils literal"><span class="pre">tf.enable_eager_execution()</span></code>. The following code uses the <code class="docutils literal"><span class="pre">allow_growth</span></code> option so that TensorFlow allocates GPU memory only as needed:</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">config</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">ConfigProto</span><span class="p">()</span>
<span class="n">config</span><span class="o">.</span><span class="n">gpu_options</span><span class="o">.</span><span class="n">allow_growth</span> <span class="o">=</span> <span class="bp">True</span>
<span class="n">tf</span><span class="o">.</span><span class="n">enable_eager_execution</span><span class="p">(</span><span class="n">config</span><span class="o">=</span><span class="n">config</span><span class="p">)</span>
</pre></div>
</div>
<p>The following code limits TensorFlow to 40% of the GPU memory via the <code class="docutils literal"><span class="pre">per_process_gpu_memory_fraction</span></code> option:</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">config</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">ConfigProto</span><span class="p">()</span>
<span class="n">config</span><span class="o">.</span><span class="n">gpu_options</span><span class="o">.</span><span class="n">per_process_gpu_memory_fraction</span> <span class="o">=</span> <span class="mf">0.4</span>
<span class="n">tf</span><span class="o">.</span><span class="n">enable_eager_execution</span><span class="p">(</span><span class="n">config</span><span class="o">=</span><span class="n">config</span><span class="p">)</span>
</pre></div>
</div>
<p>Under “Graph Execution”, you can also pass a <code class="docutils literal"><span class="pre">tf.ConfigProto</span></code> instance in the same way when instantiating a new session.</p>
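<p>A minimal sketch of the Graph Execution variant (the graph-building code itself is omitted):</p>

```python
import tensorflow as tf

# Under Graph Execution, the ConfigProto is passed to the session
# instead of to tf.enable_eager_execution().
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
with tf.Session(config=config) as sess:
    pass  # build and run the graph here
```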
</div>
</div>


           </div>
           <div class="articleComments">
            
           </div>
          </div>
          <footer>
  
    <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
      
        <a href="static.html" class="btn btn-neutral float-right" title="Appendix: Static TensorFlow" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
      
      
        <a href="models.html" class="btn btn-neutral" title="TensorFlow Models" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
      
    </div>
  

  <hr/>

  <div role="contentinfo">
    <p>
        &copy; Copyright 2018, Xihan Li（雪麒）.

    </p>
  </div>
  Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a <a href="https://github.com/snide/sphinx_rtd_theme">theme</a> provided by <a href="https://readthedocs.org">Read the Docs</a>. 

</footer>

        </div>
      </div>

    </section>

  </div>
  


  

    <script type="text/javascript">
        var DOCUMENTATION_OPTIONS = {
            URL_ROOT:'../',
            VERSION:'0.3 beta',
            COLLAPSE_INDEX:false,
            FILE_SUFFIX:'.html',
            HAS_SOURCE:  true,
            SOURCELINK_SUFFIX: '.txt'
        };
    </script>
      <script type="text/javascript" src="../_static/jquery.js"></script>
      <script type="text/javascript" src="../_static/underscore.js"></script>
      <script type="text/javascript" src="../_static/doctools.js"></script>
      <script type="text/javascript" src="../_static/translations.js"></script>

  

  
  
    <script type="text/javascript" src="../_static/js/theme.js"></script>
  

  
  
  <script type="text/javascript">
      jQuery(function () {
          SphinxRtdTheme.StickyNav.enable();
      });
  </script>
   

</body>
</html>