

<!DOCTYPE html>
<html class="writer-html5" lang="en" >
<head>
  <meta charset="utf-8">
  
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  
  <title>Debugging memory leaks &mdash; Scrapy 2.3.0 documentation</title>
  

  
  <link rel="stylesheet" href="../_static/css/theme.css" type="text/css" />
  <link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster.custom.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster.bundle.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-shadow.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-punk.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-noir.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-light.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-borderless.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/micromodal.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/sphinx_rtd_theme.css" type="text/css" />

  
  
  
  

  
  <!--[if lt IE 9]>
    <script src="../_static/js/html5shiv.min.js"></script>
  <![endif]-->
  
    
      <script type="text/javascript" id="documentation_options" data-url_root="../" src="../_static/documentation_options.js"></script>
        <script src="../_static/jquery.js"></script>
        <script src="../_static/underscore.js"></script>
        <script src="../_static/doctools.js"></script>
        <script src="../_static/language_data.js"></script>
        <script src="../_static/js/hoverxref.js"></script>
        <script src="../_static/js/tooltipster.bundle.min.js"></script>
        <script src="../_static/js/micromodal.min.js"></script>
    
    <script type="text/javascript" src="../_static/js/theme.js"></script>

    
    <link rel="index" title="Index" href="../genindex.html" />
    <link rel="search" title="Search" href="../search.html" />
    <link rel="next" title="Downloading and processing files and images" href="media-pipeline.html" />
    <link rel="prev" title="Selecting dynamically-loaded content" href="dynamic-content.html" /> 
</head>

<body class="wy-body-for-nav">

   
  <div class="wy-grid-for-nav">
    
    <nav data-toggle="wy-nav-shift" class="wy-nav-side">
      <div class="wy-side-scroll">
        <div class="wy-side-nav-search" >
          

          
            <a href="../index.html" class="icon icon-home" alt="Documentation Home"> Scrapy
          

          
          </a>

          
            
            
              <div class="version">
                2.3
              </div>
            
          

          
<div role="search">
  <form id="rtd-search-form" class="wy-form" action="../search.html" method="get">
    <input type="text" name="q" placeholder="Search docs" />
    <input type="hidden" name="check_keywords" value="yes" />
    <input type="hidden" name="area" value="default" />
  </form>
</div>

          
        </div>

        
        <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
          
            
            
              
            
            
              <p class="caption"><span class="caption-text">First steps</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../intro/overview.html">Scrapy at a glance</a></li>
<li class="toctree-l1"><a class="reference internal" href="../intro/install.html">Installation guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../intro/tutorial.html">Scrapy Tutorial</a></li>
<li class="toctree-l1"><a class="reference internal" href="../intro/examples.html">Examples</a></li>
</ul>
<p class="caption"><span class="caption-text">Basic concepts</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="commands.html">Command line tool</a></li>
<li class="toctree-l1"><a class="reference internal" href="spiders.html">Spiders</a></li>
<li class="toctree-l1"><a class="reference internal" href="selectors.html">Selectors</a></li>
<li class="toctree-l1"><a class="reference internal" href="items.html">Items</a></li>
<li class="toctree-l1"><a class="reference internal" href="loaders.html">Item Loaders</a></li>
<li class="toctree-l1"><a class="reference internal" href="shell.html">Scrapy shell</a></li>
<li class="toctree-l1"><a class="reference internal" href="item-pipeline.html">Item Pipeline</a></li>
<li class="toctree-l1"><a class="reference internal" href="feed-exports.html">Feed exports</a></li>
<li class="toctree-l1"><a class="reference internal" href="request-response.html">Requests and Responses</a></li>
<li class="toctree-l1"><a class="reference internal" href="link-extractors.html">Link Extractors</a></li>
<li class="toctree-l1"><a class="reference internal" href="settings.html">Settings</a></li>
<li class="toctree-l1"><a class="reference internal" href="exceptions.html">Exceptions</a></li>
</ul>
<p class="caption"><span class="caption-text">Built-in services</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="logging.html">Logging</a></li>
<li class="toctree-l1"><a class="reference internal" href="stats.html">Stats Collection</a></li>
<li class="toctree-l1"><a class="reference internal" href="email.html">Sending e-mail</a></li>
<li class="toctree-l1"><a class="reference internal" href="telnetconsole.html">Telnet Console</a></li>
<li class="toctree-l1"><a class="reference internal" href="webservice.html">Web Service</a></li>
</ul>
<p class="caption"><span class="caption-text">Solving specific problems</span></p>
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="../faq.html">Frequently Asked Questions</a></li>
<li class="toctree-l1"><a class="reference internal" href="debug.html">Debugging Spiders</a></li>
<li class="toctree-l1"><a class="reference internal" href="contracts.html">Spiders Contracts</a></li>
<li class="toctree-l1"><a class="reference internal" href="practices.html">Common Practices</a></li>
<li class="toctree-l1"><a class="reference internal" href="broad-crawls.html">Broad Crawls</a></li>
<li class="toctree-l1"><a class="reference internal" href="developer-tools.html">Using your browser's Developer Tools for scraping</a></li>
<li class="toctree-l1"><a class="reference internal" href="dynamic-content.html">Selecting dynamically-loaded content</a></li>
<li class="toctree-l1 current"><a class="current reference internal" href="#">Debugging memory leaks</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#common-causes-of-memory-leaks">Common causes of memory leaks</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#too-many-requests">Too Many Requests?</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#debugging-memory-leaks-with-trackref">Debugging memory leaks with <code class="docutils literal notranslate"><span class="pre">trackref</span></code></a><ul>
<li class="toctree-l3"><a class="reference internal" href="#which-objects-are-tracked">Which objects are tracked?</a></li>
<li class="toctree-l3"><a class="reference internal" href="#a-real-example">A real example</a></li>
<li class="toctree-l3"><a class="reference internal" href="#too-many-spiders">Too many spiders?</a></li>
<li class="toctree-l3"><a class="reference internal" href="#scrapy-utils-trackref-module">scrapy.utils.trackref module</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#debugging-memory-leaks-with-muppy">Debugging memory leaks with muppy</a></li>
<li class="toctree-l2"><a class="reference internal" href="#leaks-without-leaks">Leaks without leaks</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="media-pipeline.html">Downloading and processing files and images</a></li>
<li class="toctree-l1"><a class="reference internal" href="deploy.html">Deploying Spiders</a></li>
<li class="toctree-l1"><a class="reference internal" href="autothrottle.html">AutoThrottle extension</a></li>
<li class="toctree-l1"><a class="reference internal" href="benchmarking.html">Benchmarking</a></li>
<li class="toctree-l1"><a class="reference internal" href="jobs.html">Jobs: pausing and resuming crawls</a></li>
<li class="toctree-l1"><a class="reference internal" href="coroutines.html">Coroutines</a></li>
<li class="toctree-l1"><a class="reference internal" href="asyncio.html">asyncio</a></li>
</ul>
<p class="caption"><span class="caption-text">Extending Scrapy</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="architecture.html">Architecture overview</a></li>
<li class="toctree-l1"><a class="reference internal" href="downloader-middleware.html">Downloader Middleware</a></li>
<li class="toctree-l1"><a class="reference internal" href="spider-middleware.html">Spider Middleware</a></li>
<li class="toctree-l1"><a class="reference internal" href="extensions.html">Extensions</a></li>
<li class="toctree-l1"><a class="reference internal" href="api.html">Core API</a></li>
<li class="toctree-l1"><a class="reference internal" href="signals.html">Signals</a></li>
<li class="toctree-l1"><a class="reference internal" href="exporters.html">Item Exporters</a></li>
</ul>
<p class="caption"><span class="caption-text">All the rest</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../news.html">Release notes</a></li>
<li class="toctree-l1"><a class="reference internal" href="../contributing.html">Contributing to Scrapy</a></li>
<li class="toctree-l1"><a class="reference internal" href="../versioning.html">Versioning and API Stability</a></li>
</ul>

            
          
        </div>
        
      </div>
    </nav>

    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">

      
      <nav class="wy-nav-top" aria-label="top navigation">
        
          <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
          <a href="../index.html">Scrapy</a>
        
      </nav>


      <div class="wy-nav-content">
        
        <div class="rst-content">
        
          















<div role="navigation" aria-label="breadcrumbs navigation">

  <ul class="wy-breadcrumbs">
    
      <li><a href="../index.html" class="icon icon-home"></a> &raquo;</li>
        
      <li>Debugging memory leaks</li>
    
    
      <li class="wy-breadcrumbs-aside">
        
            
        
      </li>
    
  </ul>

  
  <hr/>
</div>
          <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
           <div itemprop="articleBody">
            
  <div class="section" id="debugging-memory-leaks">
<span id="topics-leaks"></span><h1>Debugging memory leaks<a class="headerlink" href="#debugging-memory-leaks" title="Permalink to this headline">¶</a></h1>
<p>In Scrapy, objects such as requests, responses and items have a finite lifetime: they are created, used for a while, and finally destroyed.</p>
<p>Of all those objects, the Request is probably the one with the longest lifetime, as it stays waiting in the Scheduler queue until it's time to process it. For more info see <a class="reference internal" href="architecture.html#topics-architecture"><span class="std std-ref">Architecture overview</span></a>.</p>
<p>As these Scrapy objects have a (rather long) lifetime, there is always the risk of accumulating them in memory without releasing them properly and thus causing what is known as a "memory leak".</p>
<p>To help debugging memory leaks, Scrapy provides a built-in mechanism for tracking object references called <a class="reference internal" href="#topics-leaks-trackrefs"><span class="std std-ref">trackref</span></a>, and you can also use a third-party library called <a class="reference internal" href="#topics-leaks-muppy"><span class="std std-ref">muppy</span></a> for more advanced memory debugging (see below for more info). Both mechanisms must be used from the <a class="reference internal" href="telnetconsole.html#topics-telnetconsole"><span class="std std-ref">Telnet Console</span></a>.</p>
<div class="section" id="common-causes-of-memory-leaks">
<h2>Common causes of memory leaks<a class="headerlink" href="#common-causes-of-memory-leaks" title="Permalink to this headline">¶</a></h2>
<p>It happens quite often that the Scrapy developer passes objects referenced in Requests (for example, using the <a class="reference internal" href="request-response.html#scrapy.http.Request.cb_kwargs" title="scrapy.http.Request.cb_kwargs"><code class="xref py py-attr docutils literal notranslate"><span class="pre">cb_kwargs</span></code></a> or <a class="reference internal" href="request-response.html#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">meta</span></code></a> attributes or the request callback function), which effectively bounds the lifetime of those referenced objects to the lifetime of the Request. This is, by far, the most common cause of memory leaks in Scrapy projects, and a quite difficult one to debug for newcomers.</p>
<p>In big projects, the spiders are typically written by different people and some of those spiders could be "leaking" and thus affecting the rest of the other (well-written) spiders when they get to run concurrently, which, in turn, affects the whole crawling process.</p>
<p>The leak could also come from a custom middleware, pipeline or extension that you have written, if you are not releasing the (previously allocated) resources properly. For example, allocating resources on <a class="reference internal" href="signals.html#std-signal-spider_opened"><code class="xref std std-signal docutils literal notranslate"><span class="pre">spider_opened</span></code></a> but not releasing them on <a class="reference internal" href="signals.html#std-signal-spider_closed"><code class="xref std std-signal docutils literal notranslate"><span class="pre">spider_closed</span></code></a> may cause problems if you're running <a class="reference internal" href="practices.html#run-multiple-spiders"><span class="std std-ref">multiple spiders per process</span></a>.</p>
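<p>To make that pairing concrete, here is a minimal sketch (an illustration, not code from Scrapy itself; the class and attribute names are assumptions) of an extension that allocates per-spider state on <code class="docutils literal notranslate"><span class="pre">spider_opened</span></code> and releases it on <code class="docutils literal notranslate"><span class="pre">spider_closed</span></code>. Dropping the release step is exactly the leak described above: every closed spider, and everything it references, stays reachable for the life of the process.</p>

```python
class ResourceExtension:
    """Hypothetical extension sketch: per-spider state must be released
    on spider_closed, or closed spiders stay reachable forever."""

    def __init__(self):
        self.cache_by_spider = {}

    @classmethod
    def from_crawler(cls, crawler):
        # Deferred import so the sketch stays importable without Scrapy.
        from scrapy import signals
        ext = cls()
        ext_handlers = [
            (ext.spider_opened, signals.spider_opened),
            (ext.spider_closed, signals.spider_closed),
        ]
        for handler, signal in ext_handlers:
            crawler.signals.connect(handler, signal=signal)
        return ext

    def spider_opened(self, spider):
        # Allocate: one cache per running spider.
        self.cache_by_spider[spider] = []

    def spider_closed(self, spider):
        # Release: forgetting this method is the leak described above.
        self.cache_by_spider.pop(spider, None)
```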
<div class="section" id="too-many-requests">
<h3>Too Many Requests?<a class="headerlink" href="#too-many-requests" title="Permalink to this headline">¶</a></h3>
<p>By default Scrapy keeps the request queue in memory; it includes <a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> objects and all objects referenced in Request attributes (e.g. in <a class="reference internal" href="request-response.html#scrapy.http.Request.cb_kwargs" title="scrapy.http.Request.cb_kwargs"><code class="xref py py-attr docutils literal notranslate"><span class="pre">cb_kwargs</span></code></a> and <a class="reference internal" href="request-response.html#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">meta</span></code></a>). While not necessarily a leak, this can take a lot of memory. Enabling a <a class="reference internal" href="jobs.html#topics-jobs"><span class="std std-ref">persistent job queue</span></a> could help keep memory usage in control.</p>
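<p>For example (using <code class="docutils literal notranslate"><span class="pre">somespider</span></code> as a hypothetical spider name), the persistent job queue is enabled by pointing the <code class="docutils literal notranslate"><span class="pre">JOBDIR</span></code> setting at a directory, which moves the request queue from memory to disk:</p>

```shell
# Requests are queued on disk under crawls/somespider-1 instead of in memory
scrapy crawl somespider -s JOBDIR=crawls/somespider-1
```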
</div>
</div>
<div class="section" id="debugging-memory-leaks-with-trackref">
<span id="topics-leaks-trackrefs"></span><h2>Debugging memory leaks with <code class="docutils literal notranslate"><span class="pre">trackref</span></code><a class="headerlink" href="#debugging-memory-leaks-with-trackref" title="Permalink to this headline">¶</a></h2>
<p><code class="xref py py-mod docutils literal notranslate"><span class="pre">trackref</span></code> is a module provided by Scrapy to debug the most common cases of memory leaks. It basically tracks the references to all live Request, Response, Item, Spider and Selector objects.</p>
<p>You can enter the telnet console and inspect how many objects (of the classes mentioned above) are currently alive using the <code class="docutils literal notranslate"><span class="pre">prefs()</span></code> function, which is an alias to the <a class="reference internal" href="#scrapy.utils.trackref.print_live_refs" title="scrapy.utils.trackref.print_live_refs"><code class="xref py py-func docutils literal notranslate"><span class="pre">print_live_refs()</span></code></a> function:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">telnet</span> <span class="n">localhost</span> <span class="mi">6023</span>

<span class="o">&gt;&gt;&gt;</span> <span class="n">prefs</span><span class="p">()</span>
<span class="n">Live</span> <span class="n">References</span>

<span class="n">ExampleSpider</span>                       <span class="mi">1</span>   <span class="n">oldest</span><span class="p">:</span> <span class="mi">15</span><span class="n">s</span> <span class="n">ago</span>
<span class="n">HtmlResponse</span>                       <span class="mi">10</span>   <span class="n">oldest</span><span class="p">:</span> <span class="mi">1</span><span class="n">s</span> <span class="n">ago</span>
<span class="n">Selector</span>                            <span class="mi">2</span>   <span class="n">oldest</span><span class="p">:</span> <span class="mi">0</span><span class="n">s</span> <span class="n">ago</span>
<span class="n">FormRequest</span>                       <span class="mi">878</span>   <span class="n">oldest</span><span class="p">:</span> <span class="mi">7</span><span class="n">s</span> <span class="n">ago</span>
</pre></div>
</div>
<p>As you can see, that report also shows the "age" of the oldest object in each class. If you're running multiple spiders per process, chances are you can figure out which spider is leaking by looking at the oldest request or response. You can get the oldest object of each class using the <a class="reference internal" href="#scrapy.utils.trackref.get_oldest" title="scrapy.utils.trackref.get_oldest"><code class="xref py py-func docutils literal notranslate"><span class="pre">get_oldest()</span></code></a> function (from the telnet console).</p>
<div class="section" id="which-objects-are-tracked">
<h3>Which objects are tracked?<a class="headerlink" href="#which-objects-are-tracked" title="Permalink to this headline">¶</a></h3>
<p>The objects tracked by <code class="docutils literal notranslate"><span class="pre">trackrefs</span></code> are all from these classes (and all their subclasses):</p>
<ul class="simple">
<li><p><a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.http.Request</span></code></a></p></li>
<li><p><a class="reference internal" href="request-response.html#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.http.Response</span></code></a></p></li>
<li><p><a class="reference internal" href="items.html#scrapy.item.Item" title="scrapy.item.Item"><code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.item.Item</span></code></a></p></li>
<li><p><a class="reference internal" href="selectors.html#scrapy.selector.Selector" title="scrapy.selector.Selector"><code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.selector.Selector</span></code></a></p></li>
<li><p><a class="reference internal" href="spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.spiders.Spider</span></code></a></p></li>
</ul>
</div>
<div class="section" id="a-real-example">
<h3>A real example<a class="headerlink" href="#a-real-example" title="Permalink to this headline">¶</a></h3>
<p>Let's see a concrete example of a hypothetical memory leak case. Suppose we have some spider with a line similar to this one:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">return</span> <span class="n">Request</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;http://www.somenastyspider.com/product.php?pid=</span><span class="si">{</span><span class="n">product_id</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">,</span>
               <span class="n">callback</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">parse</span><span class="p">,</span> <span class="n">cb_kwargs</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;referer&#39;</span><span class="p">:</span> <span class="n">response</span><span class="p">})</span>
</pre></div>
</div>
<p>That line is passing a response reference inside a request, which effectively ties the response lifetime to the request's one, and that's definitely causing memory leaks.</p>
<p>Let's see how we can discover the cause (without knowing it a priori, of course) by using the <code class="docutils literal notranslate"><span class="pre">trackref</span></code> tool.</p>
<p>After the crawler is running for a few minutes and we notice its memory usage has grown a lot, we can enter its telnet console and check the live references:</p>
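<p>The mechanics can be reproduced with plain Python, no Scrapy needed. In this sketch a hypothetical <code class="docutils literal notranslate"><span class="pre">FakeResponse</span></code> stands in for a real response, and a list plays the role of the scheduler queue: as long as the queued "request" carries the response in its <code class="docutils literal notranslate"><span class="pre">cb_kwargs</span></code>, the response cannot be freed; passing only the small piece of data actually needed (the URL string) releases it:</p>

```python
import gc
import weakref

class FakeResponse:
    """Stand-in for scrapy.http.Response (hypothetical, for illustration)."""
    def __init__(self, url):
        self.url = url

pending_requests = []  # plays the role of the scheduler queue

resp = FakeResponse("http://www.somenastyspider.com/product.php?pid=123")
alive = weakref.ref(resp)  # lets us observe whether resp was freed

# Leaky pattern: queueing the whole response inside cb_kwargs keeps it
# alive (body included) for as long as the request sits in the queue.
pending_requests.append({"cb_kwargs": {"referer": resp}})
resp = None
gc.collect()
assert alive() is not None  # the response cannot be freed

# Fix: keep only the data actually needed, here the URL string.
pending_requests[0]["cb_kwargs"] = {"referer_url": alive().url}
gc.collect()
assert alive() is None  # the response is now released (CPython refcounting)
```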
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="n">prefs</span><span class="p">()</span>
<span class="go">Live References</span>

<span class="go">SomenastySpider                     1   oldest: 15s ago</span>
<span class="go">HtmlResponse                     3890   oldest: 265s ago</span>
<span class="go">Selector                            2   oldest: 0s ago</span>
<span class="go">Request                          3878   oldest: 250s ago</span>
</pre></div>
</div>
<p>The fact that there are so many live responses (and that they're so old) is definitely suspicious, as responses should have a relatively short lifetime compared to Requests. The number of responses is similar to the number of requests, so it looks like they are tied in some way. We can now go and check the code of the spider to discover the nasty line that is generating the leaks (passing response references inside requests).</p>
<p>Sometimes extra information about live objects can be helpful. Let's check the oldest response:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">scrapy.utils.trackref</span> <span class="kn">import</span> <span class="n">get_oldest</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">r</span> <span class="o">=</span> <span class="n">get_oldest</span><span class="p">(</span><span class="s1">&#39;HtmlResponse&#39;</span><span class="p">)</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">r</span><span class="o">.</span><span class="n">url</span>
<span class="go">&#39;http://www.somenastyspider.com/product.php?pid=123&#39;</span>
</pre></div>
</div>
<p>If you want to iterate over all objects, instead of getting the oldest one, you can use the <a class="reference internal" href="#scrapy.utils.trackref.iter_all" title="scrapy.utils.trackref.iter_all"><code class="xref py py-func docutils literal notranslate"><span class="pre">scrapy.utils.trackref.iter_all()</span></code></a> function:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">scrapy.utils.trackref</span> <span class="kn">import</span> <span class="n">iter_all</span>
<span class="gp">&gt;&gt;&gt; </span><span class="p">[</span><span class="n">r</span><span class="o">.</span><span class="n">url</span> <span class="k">for</span> <span class="n">r</span> <span class="ow">in</span> <span class="n">iter_all</span><span class="p">(</span><span class="s1">&#39;HtmlResponse&#39;</span><span class="p">)]</span>
<span class="go">[&#39;http://www.somenastyspider.com/product.php?pid=123&#39;,</span>
<span class="go"> &#39;http://www.somenastyspider.com/product.php?pid=584&#39;,</span>
<span class="go">...]</span>
</pre></div>
</div>
</div>
<div class="section" id="too-many-spiders">
<h3>Too many spiders?<a class="headerlink" href="#too-many-spiders" title="Permalink to this headline">¶</a></h3>
<p>If your project has too many spiders executed in parallel, the output of <code class="xref py py-func docutils literal notranslate"><span class="pre">prefs()</span></code> can be difficult to read. For this reason, that function has an <code class="docutils literal notranslate"><span class="pre">ignore</span></code> argument, which can be used to ignore a particular class (and all its subclasses). For example, this won't show any live references to spiders:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">scrapy.spiders</span> <span class="kn">import</span> <span class="n">Spider</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">prefs</span><span class="p">(</span><span class="n">ignore</span><span class="o">=</span><span class="n">Spider</span><span class="p">)</span>
</pre></div>
</div>
<span class="target" id="module-scrapy.utils.trackref"></span></div>
<div class="section" id="scrapy-utils-trackref-module">
<h3>scrapy.utils.trackref module<a class="headerlink" href="#scrapy-utils-trackref-module" title="Permalink to this headline">¶</a></h3>
<p>Here are the functions available in the <a class="reference internal" href="#module-scrapy.utils.trackref" title="scrapy.utils.trackref: Track references of live objects"><code class="xref py py-mod docutils literal notranslate"><span class="pre">trackref</span></code></a> module.</p>
<dl class="py class">
<dt id="scrapy.utils.trackref.object_ref">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.utils.trackref.</code><code class="sig-name descname">object_ref</code><a class="reference internal" href="../_modules/scrapy/utils/trackref.html#object_ref"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.utils.trackref.object_ref" title="Permalink to this definition">¶</a></dt>
<dd><p>Inherit from this class if you want to track live instances with the <code class="docutils literal notranslate"><span class="pre">trackref</span></code> module.</p>
</dd></dl>

<dl class="py function">
<dt id="scrapy.utils.trackref.print_live_refs">
<code class="sig-prename descclassname">scrapy.utils.trackref.</code><code class="sig-name descname">print_live_refs</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">class_name</span></em>, <em class="sig-param"><span class="n">ignore</span><span class="o">=</span><span class="default_value">NoneType</span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/utils/trackref.html#print_live_refs"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.utils.trackref.print_live_refs" title="Permalink to this definition">¶</a></dt>
<dd><p>Print a report of live references, grouped by class name.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>ignore</strong> (<a class="reference external" href="https://docs.python.org/3/library/functions.html#type" title="(in Python v3.9)"><em>type</em></a><em> or </em><a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#tuple" title="(in Python v3.9)"><em>tuple</em></a>) -- if given, all objects from the specified class (or tuple of classes) will be ignored.</p>
</dd>
</dl>
</dd></dl>

<dl class="py function">
<dt id="scrapy.utils.trackref.get_oldest">
<code class="sig-prename descclassname">scrapy.utils.trackref.</code><code class="sig-name descname">get_oldest</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">class_name</span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/utils/trackref.html#get_oldest"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.utils.trackref.get_oldest" title="Permalink to this definition">¶</a></dt>
<dd><p>Return the oldest object alive with the given class name, or <code class="docutils literal notranslate"><span class="pre">None</span></code> if none is found. Use <a class="reference internal" href="#scrapy.utils.trackref.print_live_refs" title="scrapy.utils.trackref.print_live_refs"><code class="xref py py-func docutils literal notranslate"><span class="pre">print_live_refs()</span></code></a> first to get a list of all tracked live objects per class name.</p>
</dd></dl>

<dl class="py function">
<dt id="scrapy.utils.trackref.iter_all">
<code class="sig-prename descclassname">scrapy.utils.trackref.</code><code class="sig-name descname">iter_all</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">class_name</span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/utils/trackref.html#iter_all"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.utils.trackref.iter_all" title="Permalink to this definition">¶</a></dt>
<dd><p>Return an iterator over all objects alive with the given class name, or <code class="docutils literal notranslate"><span class="pre">None</span></code> if none is found. Use <a class="reference internal" href="#scrapy.utils.trackref.print_live_refs" title="scrapy.utils.trackref.print_live_refs"><code class="xref py py-func docutils literal notranslate"><span class="pre">print_live_refs()</span></code></a> first to get a list of all tracked live objects per class name.</p>
</dd></dl>

</div>
</div>
<div class="section" id="debugging-memory-leaks-with-muppy">
<span id="topics-leaks-muppy"></span><h2>Debugging memory leaks with muppy<a class="headerlink" href="#debugging-memory-leaks-with-muppy" title="Permalink to this headline">¶</a></h2>
<p><code class="docutils literal notranslate"><span class="pre">trackref</span></code> provides a very convenient mechanism for tracking down memory leaks, but it only keeps track of the objects that are more likely to cause memory leaks. However, there are other cases where the memory leaks could come from other (more or less obscure) objects. If this is your case, and you can't find your leaks using <code class="docutils literal notranslate"><span class="pre">trackref</span></code>, you still have another resource: the muppy library.</p>
<p>You can use muppy from <a class="reference external" href="https://pypi.org/project/Pympler/">Pympler</a>.</p>
<p>If you use <code class="docutils literal notranslate"><span class="pre">pip</span></code>, you can install muppy with the following command:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">pip</span> <span class="n">install</span> <span class="n">Pympler</span>
</pre></div>
</div>
<p>Here's an example to view all Python objects available in the heap using muppy:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">pympler</span> <span class="kn">import</span> <span class="n">muppy</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">all_objects</span> <span class="o">=</span> <span class="n">muppy</span><span class="o">.</span><span class="n">get_objects</span><span class="p">()</span>
<span class="gp">&gt;&gt;&gt; </span><span class="nb">len</span><span class="p">(</span><span class="n">all_objects</span><span class="p">)</span>
<span class="go">28667</span>
<span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">pympler</span> <span class="kn">import</span> <span class="n">summary</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">suml</span> <span class="o">=</span> <span class="n">summary</span><span class="o">.</span><span class="n">summarize</span><span class="p">(</span><span class="n">all_objects</span><span class="p">)</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">summary</span><span class="o">.</span><span class="n">print_</span><span class="p">(</span><span class="n">suml</span><span class="p">)</span>
<span class="go">                               types |   # objects |   total size</span>
<span class="go">==================================== | =========== | ============</span>
<span class="go">                         &lt;class &#39;str |        9822 |      1.10 MB</span>
<span class="go">                        &lt;class &#39;dict |        1658 |    856.62 KB</span>
<span class="go">                        &lt;class &#39;type |         436 |    443.60 KB</span>
<span class="go">                        &lt;class &#39;code |        2974 |    419.56 KB</span>
<span class="go">          &lt;class &#39;_io.BufferedWriter |           2 |    256.34 KB</span>
<span class="go">                         &lt;class &#39;set |         420 |    159.88 KB</span>
<span class="go">          &lt;class &#39;_io.BufferedReader |           1 |    128.17 KB</span>
<span class="go">          &lt;class &#39;wrapper_descriptor |        1130 |     88.28 KB</span>
<span class="go">                       &lt;class &#39;tuple |        1304 |     86.57 KB</span>
<span class="go">                     &lt;class &#39;weakref |        1013 |     79.14 KB</span>
<span class="go">  &lt;class &#39;builtin_function_or_method |         958 |     67.36 KB</span>
<span class="go">           &lt;class &#39;method_descriptor |         865 |     60.82 KB</span>
<span class="go">                 &lt;class &#39;abc.ABCMeta |          62 |     59.96 KB</span>
<span class="go">                        &lt;class &#39;list |         446 |     58.52 KB</span>
<span class="go">                         &lt;class &#39;int |        1425 |     43.20 KB</span>
</pre></div>
</div>
<p>For more info about muppy, refer to the <a class="reference external" href="https://pythonhosted.org/Pympler/muppy.html">muppy documentation</a>.</p>
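<p>If Pympler is not available, the same idea can be sketched with only the standard library (an approximation, not a substitute for muppy's full summaries): group every object the garbage collector tracks by type, like <code class="docutils literal notranslate"><span class="pre">muppy.get_objects()</span></code> followed by <code class="docutils literal notranslate"><span class="pre">summary.summarize()</span></code> does above:</p>

```python
import gc
from collections import Counter

def summarize_heap(top=5):
    """Rough stdlib approximation of muppy.get_objects() + summary.summarize():
    count every GC-tracked object, grouped by type name, and return the
    most common types first."""
    counts = Counter(type(obj).__name__ for obj in gc.get_objects())
    return counts.most_common(top)

# Print a small table similar in spirit to summary.print_() above.
for type_name, count in summarize_heap():
    print(f"{type_name:>25} | {count:>8}")
```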
</div>
<div class="section" id="leaks-without-leaks">
<span id="topics-leaks-without-leaks"></span><h2>Leaks without leaks<a class="headerlink" href="#leaks-without-leaks" title="Permalink to this headline">¶</a></h2>
<p>Sometimes you may notice that the memory usage of your Scrapy process will only increase, but never decrease. Unfortunately, this could happen even though neither Scrapy nor your project are leaking memory. This is due to a (not so well) known problem of Python, which may not return released memory to the operating system in some cases. For more information on this issue see:</p>
<ul class="simple">
<li><p><a class="reference external" href="https://www.evanjones.ca/python-memory.html">Python Memory Management</a></p></li>
<li><p><a class="reference external" href="https://www.evanjones.ca/python-memory-part2.html">Python Memory Management Part 2</a></p></li>
<li><p><a class="reference external" href="https://www.evanjones.ca/python-memory-part3.html">Python Memory Management Part 3</a></p></li>
</ul>
<p>The improvements proposed by Evan Jones, which are detailed in <a class="reference external" href="https://www.evanjones.ca/memoryallocator/">this paper</a>, got merged in Python 2.5, but this only reduces the problem, it doesn't fix it completely. To quote the paper:</p>
<blockquote>
<div><p><em>Unfortunately, this patch can only free an arena if there are no more objects allocated in it anymore. This means that fragmentation is a large issue. An application could have many megabytes of free memory, scattered throughout all the arenas, but it will be unable to free any of it. This is a problem experienced by all memory allocators. The only way to solve it is to move to a compacting garbage collector, which is able to move objects in memory. This would require significant changes to the Python interpreter.</em></p>
</div></blockquote>
<p>To keep memory consumption reasonable, you can split the job into several smaller jobs or enable a <a class="reference internal" href="jobs.html#topics-jobs"><span class="std std-ref">persistent job queue</span></a> and stop/start the spider from time to time.</p>
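<p>One way to combine both ideas (shown with a hypothetical spider name <code class="docutils literal notranslate"><span class="pre">somespider</span></code>): persist the scheduler state with <code class="docutils literal notranslate"><span class="pre">JOBDIR</span></code> and bound each run with the <code class="docutils literal notranslate"><span class="pre">CLOSESPIDER_ITEMCOUNT</span></code> setting of the CloseSpider extension, so every re-run starts a fresh process (returning all memory to the OS) and resumes where the previous one stopped:</p>

```shell
# First run: stops automatically after ~50000 scraped items,
# leaving pending requests persisted under crawls/run-1.
scrapy crawl somespider -s JOBDIR=crawls/run-1 -s CLOSESPIDER_ITEMCOUNT=50000

# Re-running the same command resumes from the persisted queue.
scrapy crawl somespider -s JOBDIR=crawls/run-1 -s CLOSESPIDER_ITEMCOUNT=50000
```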
</div>
</div>


           </div>
           
          </div>
          <footer>
  
    <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
      
        <a href="media-pipeline.html" class="btn btn-neutral float-right" title="Downloading and processing files and images" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
      
      
        <a href="dynamic-content.html" class="btn btn-neutral float-left" title="Selecting dynamically-loaded content" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
      
    </div>
  

  <hr/>

  <div role="contentinfo">
    <p>
        
        &copy; Copyright 2008&ndash;2020, Scrapy developers
      <span class="lastupdated">
        Last updated on Oct 18, 2020.
      </span>
      </span>

    </p>
  </div>
    
    
    
    Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a
    
    <a href="https://github.com/rtfd/sphinx_rtd_theme">theme</a>
    
    provided by <a href="https://readthedocs.org">Read the Docs</a>. 

</footer>

        </div>
      </div>

    </section>

  </div>
  

  <script type="text/javascript">
      jQuery(function () {
          SphinxRtdTheme.Navigation.enable(true);
      });
  </script>

  
  
    
  
 


</body>
</html>