

<!DOCTYPE html>
<html class="writer-html5" lang="en" >
<head>
  <meta charset="utf-8">
  
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  
  <title>Downloader Middleware &mdash; Scrapy 2.3.0 documentation</title>
  

  
  <link rel="stylesheet" href="../_static/css/theme.css" type="text/css" />
  <link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster.custom.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster.bundle.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-shadow.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-punk.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-noir.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-light.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-borderless.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/micromodal.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/sphinx_rtd_theme.css" type="text/css" />

  
  
  
  

  
  <!--[if lt IE 9]>
    <script src="../_static/js/html5shiv.min.js"></script>
  <![endif]-->
  
    
      <script type="text/javascript" id="documentation_options" data-url_root="../" src="../_static/documentation_options.js"></script>
        <script src="../_static/jquery.js"></script>
        <script src="../_static/underscore.js"></script>
        <script src="../_static/doctools.js"></script>
        <script src="../_static/language_data.js"></script>
        <script src="../_static/js/hoverxref.js"></script>
        <script src="../_static/js/tooltipster.bundle.min.js"></script>
        <script src="../_static/js/micromodal.min.js"></script>
    
    <script type="text/javascript" src="../_static/js/theme.js"></script>

    
    <link rel="index" title="Index" href="../genindex.html" />
    <link rel="search" title="Search" href="../search.html" />
    <link rel="next" title="Spider Middleware" href="spider-middleware.html" />
    <link rel="prev" title="Architecture overview" href="architecture.html" /> 
</head>

<body class="wy-body-for-nav">

   
  <div class="wy-grid-for-nav">
    
    <nav data-toggle="wy-nav-shift" class="wy-nav-side">
      <div class="wy-side-scroll">
        <div class="wy-side-nav-search" >
          

          
            <a href="../index.html" class="icon icon-home" alt="Documentation Home"> Scrapy
          

          
          </a>

          
            
            
              <div class="version">
                2.3
              </div>
            
          

          
<div role="search">
  <form id="rtd-search-form" class="wy-form" action="../search.html" method="get">
    <input type="text" name="q" placeholder="Search docs" />
    <input type="hidden" name="check_keywords" value="yes" />
    <input type="hidden" name="area" value="default" />
  </form>
</div>

          
        </div>

        
        <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
          
            
            
              
            
            
              <p class="caption"><span class="caption-text">First steps</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../intro/overview.html">Scrapy at a glance</a></li>
<li class="toctree-l1"><a class="reference internal" href="../intro/install.html">Installation guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../intro/tutorial.html">Scrapy Tutorial</a></li>
<li class="toctree-l1"><a class="reference internal" href="../intro/examples.html">Examples</a></li>
</ul>
<p class="caption"><span class="caption-text">Basic concepts</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="commands.html">Command line tool</a></li>
<li class="toctree-l1"><a class="reference internal" href="spiders.html">Spiders</a></li>
<li class="toctree-l1"><a class="reference internal" href="selectors.html">Selectors</a></li>
<li class="toctree-l1"><a class="reference internal" href="items.html">Items</a></li>
<li class="toctree-l1"><a class="reference internal" href="loaders.html">Item Loaders</a></li>
<li class="toctree-l1"><a class="reference internal" href="shell.html">Scrapy shell</a></li>
<li class="toctree-l1"><a class="reference internal" href="item-pipeline.html">Item Pipeline</a></li>
<li class="toctree-l1"><a class="reference internal" href="feed-exports.html">Feed exports</a></li>
<li class="toctree-l1"><a class="reference internal" href="request-response.html">Requests and Responses</a></li>
<li class="toctree-l1"><a class="reference internal" href="link-extractors.html">Link Extractors</a></li>
<li class="toctree-l1"><a class="reference internal" href="settings.html">Settings</a></li>
<li class="toctree-l1"><a class="reference internal" href="exceptions.html">Exceptions</a></li>
</ul>
<p class="caption"><span class="caption-text">Built-in services</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="logging.html">Logging</a></li>
<li class="toctree-l1"><a class="reference internal" href="stats.html">Stats Collection</a></li>
<li class="toctree-l1"><a class="reference internal" href="email.html">Sending e-mail</a></li>
<li class="toctree-l1"><a class="reference internal" href="telnetconsole.html">Telnet Console</a></li>
<li class="toctree-l1"><a class="reference internal" href="webservice.html">Web Service</a></li>
</ul>
<p class="caption"><span class="caption-text">Solving specific problems</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../faq.html">Frequently Asked Questions</a></li>
<li class="toctree-l1"><a class="reference internal" href="debug.html">Debugging Spiders</a></li>
<li class="toctree-l1"><a class="reference internal" href="contracts.html">Spiders Contracts</a></li>
<li class="toctree-l1"><a class="reference internal" href="practices.html">Common Practices</a></li>
<li class="toctree-l1"><a class="reference internal" href="broad-crawls.html">Broad Crawls</a></li>
<li class="toctree-l1"><a class="reference internal" href="developer-tools.html">Using your browser's Developer Tools for scraping</a></li>
<li class="toctree-l1"><a class="reference internal" href="dynamic-content.html">Selecting dynamically-loaded content</a></li>
<li class="toctree-l1"><a class="reference internal" href="leaks.html">Debugging memory leaks</a></li>
<li class="toctree-l1"><a class="reference internal" href="media-pipeline.html">Downloading and processing files and images</a></li>
<li class="toctree-l1"><a class="reference internal" href="deploy.html">Deploying Spiders</a></li>
<li class="toctree-l1"><a class="reference internal" href="autothrottle.html">AutoThrottle extension</a></li>
<li class="toctree-l1"><a class="reference internal" href="benchmarking.html">Benchmarking</a></li>
<li class="toctree-l1"><a class="reference internal" href="jobs.html">Jobs: pausing and resuming crawls</a></li>
<li class="toctree-l1"><a class="reference internal" href="coroutines.html">Coroutines</a></li>
<li class="toctree-l1"><a class="reference internal" href="asyncio.html">asyncio</a></li>
</ul>
<p class="caption"><span class="caption-text">Extending Scrapy</span></p>
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="architecture.html">Architecture overview</a></li>
<li class="toctree-l1 current"><a class="current reference internal" href="#">Downloader Middleware</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#activating-a-downloader-middleware">Activating a downloader middleware</a></li>
<li class="toctree-l2"><a class="reference internal" href="#writing-your-own-downloader-middleware">Writing your own downloader middleware</a></li>
<li class="toctree-l2"><a class="reference internal" href="#built-in-downloader-middleware-reference">Built-in downloader middleware reference</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#module-scrapy.downloadermiddlewares.cookies">CookiesMiddleware</a><ul>
<li class="toctree-l4"><a class="reference internal" href="#multiple-cookie-sessions-per-spider">Multiple cookie sessions per spider</a></li>
<li class="toctree-l4"><a class="reference internal" href="#cookies-enabled">COOKIES_ENABLED</a></li>
<li class="toctree-l4"><a class="reference internal" href="#cookies-debug">COOKIES_DEBUG</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="#module-scrapy.downloadermiddlewares.defaultheaders">DefaultHeadersMiddleware</a></li>
<li class="toctree-l3"><a class="reference internal" href="#module-scrapy.downloadermiddlewares.downloadtimeout">DownloadTimeoutMiddleware</a></li>
<li class="toctree-l3"><a class="reference internal" href="#module-scrapy.downloadermiddlewares.httpauth">HttpAuthMiddleware</a></li>
<li class="toctree-l3"><a class="reference internal" href="#module-scrapy.downloadermiddlewares.httpcache">HttpCacheMiddleware</a><ul>
<li class="toctree-l4"><a class="reference internal" href="#dummy-policy-default">Dummy policy (default)</a></li>
<li class="toctree-l4"><a class="reference internal" href="#rfc2616-policy">RFC2616 policy</a></li>
<li class="toctree-l4"><a class="reference internal" href="#filesystem-storage-backend-default">Filesystem storage backend (default)</a></li>
<li class="toctree-l4"><a class="reference internal" href="#dbm-storage-backend">DBM storage backend</a></li>
<li class="toctree-l4"><a class="reference internal" href="#writing-your-own-storage-backend">Writing your own storage backend</a></li>
<li class="toctree-l4"><a class="reference internal" href="#httpcache-middleware-settings">HTTPCache middleware settings</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="#module-scrapy.downloadermiddlewares.httpcompression">HttpCompressionMiddleware</a><ul>
<li class="toctree-l4"><a class="reference internal" href="#httpcompressionmiddleware-settings">HttpCompressionMiddleware settings</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="#module-scrapy.downloadermiddlewares.httpproxy">HttpProxyMiddleware</a></li>
<li class="toctree-l3"><a class="reference internal" href="#module-scrapy.downloadermiddlewares.redirect">RedirectMiddleware</a><ul>
<li class="toctree-l4"><a class="reference internal" href="#redirectmiddleware-settings">RedirectMiddleware settings</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="#metarefreshmiddleware">MetaRefreshMiddleware</a><ul>
<li class="toctree-l4"><a class="reference internal" href="#metarefreshmiddleware-settings">MetaRefreshMiddleware settings</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="#module-scrapy.downloadermiddlewares.retry">RetryMiddleware</a><ul>
<li class="toctree-l4"><a class="reference internal" href="#retrymiddleware-settings">RetryMiddleware settings</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="#module-scrapy.downloadermiddlewares.robotstxt">RobotsTxtMiddleware</a><ul>
<li class="toctree-l4"><a class="reference internal" href="#protego-parser">Protego parser</a></li>
<li class="toctree-l4"><a class="reference internal" href="#robotfileparser">RobotFileParser</a></li>
<li class="toctree-l4"><a class="reference internal" href="#reppy-parser">Reppy parser</a></li>
<li class="toctree-l4"><a class="reference internal" href="#robotexclusionrulesparser">RobotExclusionRulesParser</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="#implementing-support-for-a-new-parser">Implementing support for a new parser</a></li>
<li class="toctree-l3"><a class="reference internal" href="#module-scrapy.downloadermiddlewares.stats">DownloaderStats</a></li>
<li class="toctree-l3"><a class="reference internal" href="#module-scrapy.downloadermiddlewares.useragent">UserAgentMiddleware</a></li>
<li class="toctree-l3"><a class="reference internal" href="#module-scrapy.downloadermiddlewares.ajaxcrawl">AjaxCrawlMiddleware</a><ul>
<li class="toctree-l4"><a class="reference internal" href="#ajaxcrawlmiddleware-settings">AjaxCrawlMiddleware settings</a></li>
<li class="toctree-l4"><a class="reference internal" href="#httpproxymiddleware-settings">HttpProxyMiddleware settings</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="spider-middleware.html">Spider Middleware</a></li>
<li class="toctree-l1"><a class="reference internal" href="extensions.html">Extensions</a></li>
<li class="toctree-l1"><a class="reference internal" href="api.html">Core API</a></li>
<li class="toctree-l1"><a class="reference internal" href="signals.html">Signals</a></li>
<li class="toctree-l1"><a class="reference internal" href="exporters.html">Item Exporters</a></li>
</ul>
<p class="caption"><span class="caption-text">All the rest</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../news.html">Release notes</a></li>
<li class="toctree-l1"><a class="reference internal" href="../contributing.html">Contributing to Scrapy</a></li>
<li class="toctree-l1"><a class="reference internal" href="../versioning.html">Versioning and API stability</a></li>
</ul>

            
          
        </div>
        
      </div>
    </nav>

    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">

      
      <nav class="wy-nav-top" aria-label="top navigation">
        
          <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
          <a href="../index.html">Scrapy</a>
        
      </nav>


      <div class="wy-nav-content">
        
        <div class="rst-content">
        
          















<div role="navigation" aria-label="breadcrumbs navigation">

  <ul class="wy-breadcrumbs">
    
      <li><a href="../index.html" class="icon icon-home"></a> &raquo;</li>
        
      <li>Downloader Middleware</li>
    
    
      <li class="wy-breadcrumbs-aside">
        
            
        
      </li>
    
  </ul>

  
  <hr/>
</div>
          <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
           <div itemprop="articleBody">
            
  <div class="section" id="downloader-middleware">
<span id="topics-downloader-middleware"></span><h1>Downloader Middleware<a class="headerlink" href="#downloader-middleware" title="Permalink to this headline">¶</a></h1>
<p>The downloader middleware is a framework of hooks into Scrapy's request/response processing. It's a light, low-level system for globally altering Scrapy's requests and responses.</p>
<div class="section" id="activating-a-downloader-middleware">
<span id="topics-downloader-middleware-setting"></span><h2>Activating a downloader middleware<a class="headerlink" href="#activating-a-downloader-middleware" title="Permalink to this headline">¶</a></h2>
<p>To activate a downloader middleware component, add it to the <a class="reference internal" href="settings.html#std-setting-DOWNLOADER_MIDDLEWARES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOADER_MIDDLEWARES</span></code></a> setting, which is a dict whose keys are the middleware class paths and whose values are the middleware orders.</p>
<p>Here's an example:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">DOWNLOADER_MIDDLEWARES</span> <span class="o">=</span> <span class="p">{</span>
    <span class="s1">&#39;myproject.middlewares.CustomDownloaderMiddleware&#39;</span><span class="p">:</span> <span class="mi">543</span><span class="p">,</span>
<span class="p">}</span>
</pre></div>
</div>
<p>The <a class="reference internal" href="settings.html#std-setting-DOWNLOADER_MIDDLEWARES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOADER_MIDDLEWARES</span></code></a> setting is merged with the <a class="reference internal" href="settings.html#std-setting-DOWNLOADER_MIDDLEWARES_BASE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOADER_MIDDLEWARES_BASE</span></code></a> setting defined in Scrapy (and not meant to be overridden) and then sorted by order to get the final sorted list of enabled middlewares: the first middleware is the one closer to the engine and the last is the one closer to the downloader. In other words, the <a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_request" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_request"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_request()</span></code></a> method of each middleware will be invoked in increasing middleware order (100, 200, 300, …) and the <a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_response" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_response"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_response()</span></code></a> method of each middleware will be invoked in decreasing order.</p>
<p>To decide which order to assign to your middleware, see the <a class="reference internal" href="settings.html#std-setting-DOWNLOADER_MIDDLEWARES_BASE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOADER_MIDDLEWARES_BASE</span></code></a> setting and pick a value according to where you want to insert the middleware. The order does matter because each middleware performs a different action and your middleware could depend on some previous (or subsequent) middleware being applied.</p>
<p>If you want to disable a built-in middleware (the ones defined in <a class="reference internal" href="settings.html#std-setting-DOWNLOADER_MIDDLEWARES_BASE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOADER_MIDDLEWARES_BASE</span></code></a> and enabled by default) you must define it in your project's <a class="reference internal" href="settings.html#std-setting-DOWNLOADER_MIDDLEWARES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOADER_MIDDLEWARES</span></code></a> setting and assign <code class="docutils literal notranslate"><span class="pre">None</span></code> as its value. For example, if you want to disable the user-agent middleware:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">DOWNLOADER_MIDDLEWARES</span> <span class="o">=</span> <span class="p">{</span>
    <span class="s1">&#39;myproject.middlewares.CustomDownloaderMiddleware&#39;</span><span class="p">:</span> <span class="mi">543</span><span class="p">,</span>
    <span class="s1">&#39;scrapy.downloadermiddlewares.useragent.UserAgentMiddleware&#39;</span><span class="p">:</span> <span class="kc">None</span><span class="p">,</span>
<span class="p">}</span>
</pre></div>
</div>
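<p>The ordering rules above can be illustrated with a plain-Python sketch. This is an illustrative simulation only, not Scrapy's actual engine code: it shows that <code class="docutils literal notranslate"><span class="pre">process_request()</span></code> hooks run in ascending middleware order while <code class="docutils literal notranslate"><span class="pre">process_response()</span></code> hooks run in descending order.</p>

```python
# Illustrative simulation of middleware chain ordering; TracingMiddleware
# is a made-up class, not part of Scrapy.
calls = []

class TracingMiddleware:
    """Records the order in which its hooks are invoked."""

    def __init__(self, order):
        self.order = order

    def process_request(self, request, spider):
        calls.append(("request", self.order))
        return None  # None lets processing continue down the chain

    def process_response(self, request, response, spider):
        calls.append(("response", self.order))
        return response  # pass the response on to the next middleware

middlewares = [TracingMiddleware(o) for o in (100, 200, 300)]

# Requests travel engine -> downloader: ascending order.
for mw in middlewares:
    mw.process_request("req", "spider")

# Responses travel downloader -> engine: descending order.
response = "resp"
for mw in reversed(middlewares):
    response = mw.process_response("req", response, "spider")

print(calls)
```

<p>Running the sketch records the request phase in 100, 200, 300 order and the response phase in 300, 200, 100 order.</p>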
<p>Finally, keep in mind that some middlewares may need to be enabled through a particular setting. See each middleware's documentation for more info.</p>
</div>
<div class="section" id="writing-your-own-downloader-middleware">
<span id="topics-downloader-middleware-custom"></span><h2>Writing your own downloader middleware<a class="headerlink" href="#writing-your-own-downloader-middleware" title="Permalink to this headline">¶</a></h2>
<p>Each downloader middleware is a Python class that defines one or more of the methods defined below.</p>
<p>The main entry point is the <code class="docutils literal notranslate"><span class="pre">from_crawler</span></code> class method, which receives a <a class="reference internal" href="api.html#scrapy.crawler.Crawler" title="scrapy.crawler.Crawler"><code class="xref py py-class docutils literal notranslate"><span class="pre">Crawler</span></code></a> instance. The <a class="reference internal" href="api.html#scrapy.crawler.Crawler" title="scrapy.crawler.Crawler"><code class="xref py py-class docutils literal notranslate"><span class="pre">Crawler</span></code></a> object gives you access, for example, to the <a class="reference internal" href="settings.html#topics-settings"><span class="std std-ref">settings</span></a>.</p>
<span class="target" id="module-scrapy.downloadermiddlewares"></span><dl class="py class">
<dt id="scrapy.downloadermiddlewares.DownloaderMiddleware">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.downloadermiddlewares.</code><code class="sig-name descname">DownloaderMiddleware</code><a class="headerlink" href="#scrapy.downloadermiddlewares.DownloaderMiddleware" title="Permalink to this definition">¶</a></dt>
<dd><div class="admonition note">
<p class="admonition-title">Note</p>
<p>Any of the downloader middleware methods may also return a deferred.</p>
</div>
<dl class="py method">
<dt id="scrapy.downloadermiddlewares.DownloaderMiddleware.process_request">
<code class="sig-name descname">process_request</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">request</span></em>, <em class="sig-param"><span class="n">spider</span></em><span class="sig-paren">)</span><a class="headerlink" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_request" title="Permalink to this definition">¶</a></dt>
<dd><p>This method is called for each request that goes through the download middleware.</p>
<p><a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_request" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_request"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_request()</span></code></a> should either: return <code class="docutils literal notranslate"><span class="pre">None</span></code>, return a <a class="reference internal" href="request-response.html#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> object, return a <a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> object, or raise <a class="reference internal" href="exceptions.html#scrapy.exceptions.IgnoreRequest" title="scrapy.exceptions.IgnoreRequest"><code class="xref py py-exc docutils literal notranslate"><span class="pre">IgnoreRequest</span></code></a>.</p>
<p>If it returns <code class="docutils literal notranslate"><span class="pre">None</span></code>, Scrapy will continue processing this request, executing all other middlewares until, finally, the appropriate downloader handler is called to perform the request (and its response downloaded).</p>
<p>If it returns a <a class="reference internal" href="request-response.html#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> object, Scrapy won't bother calling <em>any</em> other <a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_request" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_request"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_request()</span></code></a> or <a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_exception()</span></code></a> methods, or the appropriate download function; it will return that response. The <a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_response" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_response"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_response()</span></code></a> methods of installed middleware are always called on every response.</p>
<p>If it returns a <a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> object, Scrapy will stop calling process_request methods and reschedule the returned request. Once the newly returned request is performed, the appropriate middleware chain will be called on the downloaded response.</p>
<p>If it raises an <a class="reference internal" href="exceptions.html#scrapy.exceptions.IgnoreRequest" title="scrapy.exceptions.IgnoreRequest"><code class="xref py py-exc docutils literal notranslate"><span class="pre">IgnoreRequest</span></code></a> exception, the <a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_exception()</span></code></a> methods of installed downloader middleware will be called. If none of them handle the exception, the errback function of the request (<code class="docutils literal notranslate"><span class="pre">Request.errback</span></code>) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>request</strong> (<a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> object) -- the request being processed</p></li>
<li><p><strong>spider</strong> (<a class="reference internal" href="spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> object) -- the spider this request is intended for</p></li>
</ul>
</dd>
</dl>
</dd></dl>
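<p>A minimal sketch of a <code class="docutils literal notranslate"><span class="pre">process_request()</span></code> implementation, under stated assumptions: <code class="docutils literal notranslate"><span class="pre">CustomHeaderMiddleware</span></code> and the <code class="docutils literal notranslate"><span class="pre">FakeRequest</span></code> stub are hypothetical stand-ins, so the example runs without Scrapy; a real middleware would receive a <code class="docutils literal notranslate"><span class="pre">scrapy.http.Request</span></code>.</p>

```python
# Hypothetical middleware: sets a header and returns None so that
# Scrapy would keep executing the remaining middlewares and finally
# download the request.
class CustomHeaderMiddleware:
    def process_request(self, request, spider):
        request.headers.setdefault("X-Example", "1")
        return None  # continue processing this request

class FakeRequest:  # illustrative stand-in for scrapy.http.Request
    def __init__(self, url):
        self.url = url
        self.headers = {}

req = FakeRequest("https://example.com")
result = CustomHeaderMiddleware().process_request(req, spider=None)
print(result, req.headers)
```

<p>Because the method returns <code class="docutils literal notranslate"><span class="pre">None</span></code>, downstream middlewares still see (and may further modify) the mutated request.</p>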

<dl class="py method">
<dt id="scrapy.downloadermiddlewares.DownloaderMiddleware.process_response">
<code class="sig-name descname">process_response</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">request</span></em>, <em class="sig-param"><span class="n">response</span></em>, <em class="sig-param"><span class="n">spider</span></em><span class="sig-paren">)</span><a class="headerlink" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_response" title="Permalink to this definition">¶</a></dt>
<dd><p><a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_response" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_response"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_response()</span></code></a> should either: return a <a class="reference internal" href="request-response.html#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> object, return a <a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> object or raise an <a class="reference internal" href="exceptions.html#scrapy.exceptions.IgnoreRequest" title="scrapy.exceptions.IgnoreRequest"><code class="xref py py-exc docutils literal notranslate"><span class="pre">IgnoreRequest</span></code></a> exception.</p>
<p>If it returns a <a class="reference internal" href="request-response.html#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> (it could be the same given response, or a brand-new one), that response will continue to be processed with the <a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_response" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_response"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_response()</span></code></a> of the next middleware in the chain.</p>
<p>If it returns a <a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> object, the middleware chain is halted and the returned request is rescheduled to be downloaded in the future. This is the same behavior as if a request is returned from <a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_request" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_request"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_request()</span></code></a>.</p>
<p>If it raises an <a class="reference internal" href="exceptions.html#scrapy.exceptions.IgnoreRequest" title="scrapy.exceptions.IgnoreRequest"><code class="xref py py-exc docutils literal notranslate"><span class="pre">IgnoreRequest</span></code></a> exception, the errback function of the request (<code class="docutils literal notranslate"><span class="pre">Request.errback</span></code>) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>request</strong> (<a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> object) -- the request that originated the response</p></li>
<li><p><strong>response</strong> (<a class="reference internal" href="request-response.html#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> object) -- the response being processed</p></li>
<li><p><strong>spider</strong> (<a class="reference internal" href="spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> object) -- the spider this response is intended for</p></li>
</ul>
</dd>
</dl>
</dd></dl>
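<p>As a hedged sketch of the return-value contract above: a middleware that reschedules requests on HTTP 503 by returning the request object, and passes every other response through. <code class="docutils literal notranslate"><span class="pre">Retry503Middleware</span></code> and the stubs are hypothetical, not Scrapy classes.</p>

```python
class Retry503Middleware:
    def process_response(self, request, response, spider):
        if response.status == 503:
            # Returning a Request halts the middleware chain and
            # reschedules the request for a future download.
            return request
        # Returning a Response hands it to the next middleware's
        # process_response().
        return response

class FakeRequest:  # illustrative stand-in for scrapy.http.Request
    def __init__(self, url):
        self.url = url

class FakeResponse:  # illustrative stand-in for scrapy.http.Response
    def __init__(self, status):
        self.status = status

mw = Retry503Middleware()
req = FakeRequest("https://example.com")
print(type(mw.process_response(req, FakeResponse(503), None)).__name__)
print(type(mw.process_response(req, FakeResponse(200), None)).__name__)
```

<p>The first call yields the request back (rescheduling), the second yields the response unchanged.</p>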

<dl class="py method">
<dt id="scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception">
<code class="sig-name descname">process_exception</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">request</span></em>, <em class="sig-param"><span class="n">exception</span></em>, <em class="sig-param"><span class="n">spider</span></em><span class="sig-paren">)</span><a class="headerlink" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception" title="Permalink to this definition">¶</a></dt>
<dd><p>Scrapy calls <a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_exception()</span></code></a> when a download handler or a <a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_request" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_request"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_request()</span></code></a> (from a downloader middleware) raises an exception (including an <a class="reference internal" href="exceptions.html#scrapy.exceptions.IgnoreRequest" title="scrapy.exceptions.IgnoreRequest"><code class="xref py py-exc docutils literal notranslate"><span class="pre">IgnoreRequest</span></code></a> exception).</p>
<p><a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_exception()</span></code></a> should return: either <code class="docutils literal notranslate"><span class="pre">None</span></code>, a <a class="reference internal" href="request-response.html#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> object, or a <a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> object.</p>
<p>If it returns <code class="docutils literal notranslate"><span class="pre">None</span></code>, Scrapy will continue processing this exception, executing any other <a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_exception()</span></code></a> methods of installed middleware, until no middleware is left and the default exception handling kicks in.</p>
<p>If it returns a <a class="reference internal" href="request-response.html#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> object, the <a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_response" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_response"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_response()</span></code></a> method chain of installed middleware is started, and Scrapy won't bother calling any other <a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_exception()</span></code></a> methods of middleware.</p>
<p>If it returns a <a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> object, the returned request is rescheduled to be downloaded in the future. This stops the execution of the <a class="reference internal" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception" title="scrapy.downloadermiddlewares.DownloaderMiddleware.process_exception"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_exception()</span></code></a> methods of the middleware, the same as returning a response would.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>request</strong> (<a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> object) -- the request that generated the exception</p></li>
<li><p><strong>exception</strong> (an <code class="docutils literal notranslate"><span class="pre">Exception</span></code> object) -- the raised exception</p></li>
<li><p><strong>spider</strong> (<a class="reference internal" href="spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> object) -- the spider this request is intended for</p></li>
</ul>
</dd>
</dl>
</dd></dl>
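<p>The three return-value contracts described above can be seen together in a minimal sketch. The middleware name, the retry limit, and the stand-in request class are all hypothetical; a real middleware would operate on scrapy.http.Request objects:</p>

```python
# Hypothetical sketch (not a built-in Scrapy middleware): a
# process_exception() implementation that reschedules timed-out
# requests a limited number of times. Plain stand-in classes are
# used instead of scrapy.http.Request so the snippet is self-contained.

class FakeRequest:
    def __init__(self, url, meta=None):
        self.url = url
        self.meta = meta or {}

    def copy(self):
        return FakeRequest(self.url, dict(self.meta))

class DownloadTimeoutError(Exception):
    pass

class RetryOnTimeoutMiddleware:
    MAX_RETRIES = 2  # illustrative limit, not a Scrapy setting

    def process_exception(self, request, exception, spider):
        if not isinstance(exception, DownloadTimeoutError):
            # Returning None lets the remaining process_exception()
            # methods, and eventually the default handling, run.
            return None
        retries = request.meta.get('retry_times', 0)
        if retries >= self.MAX_RETRIES:
            return None  # give up, fall back to default handling
        retry = request.copy()
        retry.meta['retry_times'] = retries + 1
        # Returning a request reschedules it for download and stops
        # further process_exception() processing.
        return retry

mw = RetryOnTimeoutMiddleware()
req = FakeRequest('http://www.example.com/')
retry = mw.process_exception(req, DownloadTimeoutError(), spider=None)
print(retry.meta['retry_times'])  # → 1
```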

<dl class="py method">
<dt id="scrapy.downloadermiddlewares.DownloaderMiddleware.from_crawler">
<code class="sig-name descname">from_crawler</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">cls</span></em>, <em class="sig-param"><span class="n">crawler</span></em><span class="sig-paren">)</span><a class="headerlink" href="#scrapy.downloadermiddlewares.DownloaderMiddleware.from_crawler" title="永久链接至目标">¶</a></dt>
<dd><p>If present, this classmethod is called to create a middleware instance from a <a class="reference internal" href="api.html#scrapy.crawler.Crawler" title="scrapy.crawler.Crawler"><code class="xref py py-class docutils literal notranslate"><span class="pre">Crawler</span></code></a>. It must return a new instance of the middleware. The Crawler object provides access to all Scrapy core components like settings and signals; it is a way for the middleware to access them and hook its functionality into Scrapy.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>crawler</strong> (<a class="reference internal" href="api.html#scrapy.crawler.Crawler" title="scrapy.crawler.Crawler"><code class="xref py py-class docutils literal notranslate"><span class="pre">Crawler</span></code></a> object) -- crawler that uses this middleware</p>
</dd>
</dl>
</dd></dl>
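<p>The from_crawler() pattern can be sketched as follows. The stand-in Crawler and settings objects (and the MYMW_ENABLED setting name) are hypothetical so the example runs without Scrapy installed; in real code the crawler is passed in by Scrapy:</p>

```python
# Sketch of the from_crawler() contract: the classmethod reads
# configuration from crawler.settings and returns a new middleware
# instance. FakeSettings/FakeCrawler stand in for Scrapy's objects.

class FakeSettings(dict):
    def getbool(self, name, default=False):
        return bool(self.get(name, default))

class FakeCrawler:
    def __init__(self, settings):
        self.settings = FakeSettings(settings)

class MyMiddleware:
    def __init__(self, enabled):
        self.enabled = enabled

    @classmethod
    def from_crawler(cls, crawler):
        # Build the middleware from the crawler's settings, as the
        # contract requires; signal handlers could be connected here too.
        return cls(enabled=crawler.settings.getbool('MYMW_ENABLED', True))

mw = MyMiddleware.from_crawler(FakeCrawler({'MYMW_ENABLED': True}))
print(mw.enabled)  # → True
```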

</dd></dl>

</div>
<div class="section" id="built-in-downloader-middleware-reference">
<span id="topics-downloader-middleware-ref"></span><h2>Built-in downloader middleware reference<a class="headerlink" href="#built-in-downloader-middleware-reference" title="Permalink to this headline">¶</a></h2>
<p>This page describes all the downloader middleware components that come with Scrapy. For information on how to use them and how to write your own downloader middleware, see the <a class="reference internal" href="#topics-downloader-middleware"><span class="std std-ref">downloader middleware usage guide</span></a>.</p>
<p>For a list of the components enabled by default (and their orders) see the <a class="reference internal" href="settings.html#std-setting-DOWNLOADER_MIDDLEWARES_BASE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOADER_MIDDLEWARES_BASE</span></code></a> setting.</p>
<div class="section" id="module-scrapy.downloadermiddlewares.cookies">
<span id="cookiesmiddleware"></span><span id="cookies-mw"></span><h3>CookiesMiddleware<a class="headerlink" href="#module-scrapy.downloadermiddlewares.cookies" title="永久链接至标题">¶</a></h3>
<dl class="py class">
<dt id="scrapy.downloadermiddlewares.cookies.CookiesMiddleware">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.downloadermiddlewares.cookies.</code><code class="sig-name descname">CookiesMiddleware</code><a class="reference internal" href="../_modules/scrapy/downloadermiddlewares/cookies.html#CookiesMiddleware"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.downloadermiddlewares.cookies.CookiesMiddleware" title="Permalink to this definition">¶</a></dt>
<dd><p>This middleware enables working with sites that require cookies, such as those that use sessions. It keeps track of cookies sent by web servers, and sends them back on subsequent requests (from that spider), just like web browsers do.</p>
<div class="admonition caution">
<p class="admonition-title">Caution</p>
<p>When non-UTF8 encoded byte sequences are passed to a <a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a>, the <code class="docutils literal notranslate"><span class="pre">CookiesMiddleware</span></code> will log a warning. Refer to <a class="reference internal" href="logging.html#topics-logging-advanced-customization"><span class="std std-ref">Advanced customization</span></a> to customize the logging behaviour.</p>
</div>
</dd></dl>

<p>The following settings can be used to configure the cookie middleware:</p>
<ul class="simple">
<li><p><a class="reference internal" href="#std-setting-COOKIES_ENABLED"><code class="xref std std-setting docutils literal notranslate"><span class="pre">COOKIES_ENABLED</span></code></a></p></li>
<li><p><a class="reference internal" href="#std-setting-COOKIES_DEBUG"><code class="xref std std-setting docutils literal notranslate"><span class="pre">COOKIES_DEBUG</span></code></a></p></li>
</ul>
<div class="section" id="multiple-cookie-sessions-per-spider">
<span id="std-reqmeta-cookiejar"></span><span id="std:reqmeta-cookiejar"></span><h4>Multiple cookie sessions per spider<a class="headerlink" href="#multiple-cookie-sessions-per-spider" title="Permalink to this headline">¶</a></h4>
<p>There is support for keeping multiple cookie sessions per spider by using the <a class="reference internal" href="#std-reqmeta-cookiejar"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">cookiejar</span></code></a> request meta key. By default it uses a single cookie jar (session), but you can pass an identifier to use different ones.</p>
<p>For example:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">url</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">urls</span><span class="p">):</span>
    <span class="k">yield</span> <span class="n">scrapy</span><span class="o">.</span><span class="n">Request</span><span class="p">(</span><span class="n">url</span><span class="p">,</span> <span class="n">meta</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;cookiejar&#39;</span><span class="p">:</span> <span class="n">i</span><span class="p">},</span>
        <span class="n">callback</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">parse_page</span><span class="p">)</span>
</pre></div>
</div>
<p>Keep in mind that the <a class="reference internal" href="#std-reqmeta-cookiejar"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">cookiejar</span></code></a> meta key is not &quot;sticky&quot;. You need to keep passing it along on subsequent requests. For example:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">parse_page</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">):</span>
    <span class="c1"># do some processing</span>
    <span class="k">return</span> <span class="n">scrapy</span><span class="o">.</span><span class="n">Request</span><span class="p">(</span><span class="s2">&quot;http://www.example.com/otherpage&quot;</span><span class="p">,</span>
        <span class="n">meta</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;cookiejar&#39;</span><span class="p">:</span> <span class="n">response</span><span class="o">.</span><span class="n">meta</span><span class="p">[</span><span class="s1">&#39;cookiejar&#39;</span><span class="p">]},</span>
        <span class="n">callback</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">parse_other_page</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="section" id="cookies-enabled">
<span id="std-setting-COOKIES_ENABLED"></span><span id="std:setting-COOKIES_ENABLED"></span><h4>COOKIES_ENABLED<a class="headerlink" href="#cookies-enabled" title="永久链接至标题">¶</a></h4>
<p>Default: <code class="docutils literal notranslate"><span class="pre">True</span></code></p>
<p>Whether to enable the cookies middleware. If disabled, no cookies will be sent to web servers.</p>
<p>Notice that despite the value of the <a class="reference internal" href="#std-setting-COOKIES_ENABLED"><code class="xref std std-setting docutils literal notranslate"><span class="pre">COOKIES_ENABLED</span></code></a> setting, if <code class="docutils literal notranslate"><span class="pre">Request.meta['dont_merge_cookies']</span></code> evaluates to <code class="docutils literal notranslate"><span class="pre">True</span></code> the request cookies will <strong>not</strong> be sent to the web server and received cookies in <a class="reference internal" href="request-response.html#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> will <strong>not</strong> be merged with the existing cookies.</p>
<p>For more detailed information see the <code class="docutils literal notranslate"><span class="pre">cookies</span></code> parameter in <a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a>.</p>
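<p>For example, a project that never wants cookie handling can disable it globally (a minimal illustrative settings fragment):</p>

```python
# settings.py -- disable the cookie middleware for the whole project:
# no cookies are sent to or received from web servers.
COOKIES_ENABLED = False
```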
</div>
<div class="section" id="cookies-debug">
<span id="std-setting-COOKIES_DEBUG"></span><span id="std:setting-COOKIES_DEBUG"></span><h4>COOKIES_DEBUG<a class="headerlink" href="#cookies-debug" title="永久链接至标题">¶</a></h4>
<p>Default: <code class="docutils literal notranslate"><span class="pre">False</span></code></p>
<p>If enabled, Scrapy will log all cookies sent in requests (i.e. the <code class="docutils literal notranslate"><span class="pre">Cookie</span></code> header) and all cookies received in responses (i.e. the <code class="docutils literal notranslate"><span class="pre">Set-Cookie</span></code> header).</p>
<p>Here is an example of a log with <a class="reference internal" href="#std-setting-COOKIES_DEBUG"><code class="xref std std-setting docutils literal notranslate"><span class="pre">COOKIES_DEBUG</span></code></a> enabled:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="mi">2011</span><span class="o">-</span><span class="mi">04</span><span class="o">-</span><span class="mi">06</span> <span class="mi">14</span><span class="p">:</span><span class="mi">35</span><span class="p">:</span><span class="mi">10</span><span class="o">-</span><span class="mi">0300</span> <span class="p">[</span><span class="n">scrapy</span><span class="o">.</span><span class="n">core</span><span class="o">.</span><span class="n">engine</span><span class="p">]</span> <span class="n">INFO</span><span class="p">:</span> <span class="n">Spider</span> <span class="n">opened</span>
<span class="mi">2011</span><span class="o">-</span><span class="mi">04</span><span class="o">-</span><span class="mi">06</span> <span class="mi">14</span><span class="p">:</span><span class="mi">35</span><span class="p">:</span><span class="mi">10</span><span class="o">-</span><span class="mi">0300</span> <span class="p">[</span><span class="n">scrapy</span><span class="o">.</span><span class="n">downloadermiddlewares</span><span class="o">.</span><span class="n">cookies</span><span class="p">]</span> <span class="n">DEBUG</span><span class="p">:</span> <span class="n">Sending</span> <span class="n">cookies</span> <span class="n">to</span><span class="p">:</span> <span class="o">&lt;</span><span class="n">GET</span> <span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">www</span><span class="o">.</span><span class="n">diningcity</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">netherlands</span><span class="o">/</span><span class="n">index</span><span class="o">.</span><span class="n">html</span><span class="o">&gt;</span>
        <span class="n">Cookie</span><span class="p">:</span> <span class="n">clientlanguage_nl</span><span class="o">=</span><span class="n">en_EN</span>
<span class="mi">2011</span><span class="o">-</span><span class="mi">04</span><span class="o">-</span><span class="mi">06</span> <span class="mi">14</span><span class="p">:</span><span class="mi">35</span><span class="p">:</span><span class="mi">14</span><span class="o">-</span><span class="mi">0300</span> <span class="p">[</span><span class="n">scrapy</span><span class="o">.</span><span class="n">downloadermiddlewares</span><span class="o">.</span><span class="n">cookies</span><span class="p">]</span> <span class="n">DEBUG</span><span class="p">:</span> <span class="n">Received</span> <span class="n">cookies</span> <span class="n">from</span><span class="p">:</span> <span class="o">&lt;</span><span class="mi">200</span> <span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">www</span><span class="o">.</span><span class="n">diningcity</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">netherlands</span><span class="o">/</span><span class="n">index</span><span class="o">.</span><span class="n">html</span><span class="o">&gt;</span>
        <span class="n">Set</span><span class="o">-</span><span class="n">Cookie</span><span class="p">:</span> <span class="n">JSESSIONID</span><span class="o">=</span><span class="n">B</span><span class="o">~</span><span class="n">FA4DC0C496C8762AE4F1A620EAB34F38</span><span class="p">;</span> <span class="n">Path</span><span class="o">=/</span>
        <span class="n">Set</span><span class="o">-</span><span class="n">Cookie</span><span class="p">:</span> <span class="n">ip_isocode</span><span class="o">=</span><span class="n">US</span>
        <span class="n">Set</span><span class="o">-</span><span class="n">Cookie</span><span class="p">:</span> <span class="n">clientlanguage_nl</span><span class="o">=</span><span class="n">en_EN</span><span class="p">;</span> <span class="n">Expires</span><span class="o">=</span><span class="n">Thu</span><span class="p">,</span> <span class="mi">07</span><span class="o">-</span><span class="n">Apr</span><span class="o">-</span><span class="mi">2011</span> <span class="mi">21</span><span class="p">:</span><span class="mi">21</span><span class="p">:</span><span class="mi">34</span> <span class="n">GMT</span><span class="p">;</span> <span class="n">Path</span><span class="o">=/</span>
<span class="mi">2011</span><span class="o">-</span><span class="mi">04</span><span class="o">-</span><span class="mi">06</span> <span class="mi">14</span><span class="p">:</span><span class="mi">49</span><span class="p">:</span><span class="mi">50</span><span class="o">-</span><span class="mi">0300</span> <span class="p">[</span><span class="n">scrapy</span><span class="o">.</span><span class="n">core</span><span class="o">.</span><span class="n">engine</span><span class="p">]</span> <span class="n">DEBUG</span><span class="p">:</span> <span class="n">Crawled</span> <span class="p">(</span><span class="mi">200</span><span class="p">)</span> <span class="o">&lt;</span><span class="n">GET</span> <span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">www</span><span class="o">.</span><span class="n">diningcity</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">netherlands</span><span class="o">/</span><span class="n">index</span><span class="o">.</span><span class="n">html</span><span class="o">&gt;</span> <span class="p">(</span><span class="n">referer</span><span class="p">:</span> <span class="kc">None</span><span class="p">)</span>
<span class="p">[</span><span class="o">...</span><span class="p">]</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="module-scrapy.downloadermiddlewares.defaultheaders">
<span id="defaultheadersmiddleware"></span><h3>DefaultHeadersMiddleware<a class="headerlink" href="#module-scrapy.downloadermiddlewares.defaultheaders" title="永久链接至标题">¶</a></h3>
<dl class="py class">
<dt id="scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.downloadermiddlewares.defaultheaders.</code><code class="sig-name descname">DefaultHeadersMiddleware</code><a class="reference internal" href="../_modules/scrapy/downloadermiddlewares/defaultheaders.html#DefaultHeadersMiddleware"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware" title="Permalink to this definition">¶</a></dt>
<dd><p>This middleware sets all default request headers specified in the <a class="reference internal" href="settings.html#std-setting-DEFAULT_REQUEST_HEADERS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DEFAULT_REQUEST_HEADERS</span></code></a> setting.</p>
</dd></dl>
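<p>For instance (an illustrative settings fragment; the header values are examples, not recommendations):</p>

```python
# settings.py -- these headers are added to every request by
# DefaultHeadersMiddleware unless a request sets its own value.
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
}
```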

</div>
<div class="section" id="module-scrapy.downloadermiddlewares.downloadtimeout">
<span id="downloadtimeoutmiddleware"></span><h3>DownloadTimeoutMiddleware<a class="headerlink" href="#module-scrapy.downloadermiddlewares.downloadtimeout" title="永久链接至标题">¶</a></h3>
<dl class="py class">
<dt id="scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.downloadermiddlewares.downloadtimeout.</code><code class="sig-name descname">DownloadTimeoutMiddleware</code><a class="reference internal" href="../_modules/scrapy/downloadermiddlewares/downloadtimeout.html#DownloadTimeoutMiddleware"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware" title="Permalink to this definition">¶</a></dt>
<dd><p>This middleware sets the download timeout for requests specified in the <a class="reference internal" href="settings.html#std-setting-DOWNLOAD_TIMEOUT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOAD_TIMEOUT</span></code></a> setting or the <code class="xref py py-attr docutils literal notranslate"><span class="pre">download_timeout</span></code> spider attribute.</p>
</dd></dl>

<div class="admonition note">
<p class="admonition-title">Note</p>
<p>You can also set the download timeout per-request using the <a class="reference internal" href="request-response.html#std-reqmeta-download_timeout"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">download_timeout</span></code></a> Request.meta key; this is supported even when DownloadTimeoutMiddleware is disabled.</p>
</div>
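<p>For example (an illustrative spider-callback fragment; the 30-second value is arbitrary):</p>

```python
# Inside a spider callback: override the timeout for one request only.
# The value is in seconds and takes precedence over DOWNLOAD_TIMEOUT.
yield scrapy.Request(url, meta={'download_timeout': 30},
                     callback=self.parse_page)
```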
</div>
<div class="section" id="module-scrapy.downloadermiddlewares.httpauth">
<span id="httpauthmiddleware"></span><h3>HttpAuthMiddleware<a class="headerlink" href="#module-scrapy.downloadermiddlewares.httpauth" title="永久链接至标题">¶</a></h3>
<dl class="py class">
<dt id="scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.downloadermiddlewares.httpauth.</code><code class="sig-name descname">HttpAuthMiddleware</code><a class="reference internal" href="../_modules/scrapy/downloadermiddlewares/httpauth.html#HttpAuthMiddleware"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware" title="Permalink to this definition">¶</a></dt>
<dd><p>This middleware authenticates all requests generated from certain spiders using <a class="reference external" href="https://en.wikipedia.org/wiki/Basic_access_authentication">Basic access authentication</a> (aka. HTTP auth).</p>
<p>To enable HTTP authentication from certain spiders, set the <code class="docutils literal notranslate"><span class="pre">http_user</span></code> and <code class="docutils literal notranslate"><span class="pre">http_pass</span></code> attributes of those spiders.</p>
<p>Example:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">scrapy.spiders</span> <span class="kn">import</span> <span class="n">CrawlSpider</span>

<span class="k">class</span> <span class="nc">SomeIntranetSiteSpider</span><span class="p">(</span><span class="n">CrawlSpider</span><span class="p">):</span>

    <span class="n">http_user</span> <span class="o">=</span> <span class="s1">&#39;someuser&#39;</span>
    <span class="n">http_pass</span> <span class="o">=</span> <span class="s1">&#39;somepass&#39;</span>
    <span class="n">name</span> <span class="o">=</span> <span class="s1">&#39;intranet.example.com&#39;</span>

    <span class="c1"># .. rest of the spider code omitted ...</span>
</pre></div>
</div>
</dd></dl>

</div>
<div class="section" id="module-scrapy.downloadermiddlewares.httpcache">
<span id="httpcachemiddleware"></span><h3>HttpCacheMiddleware<a class="headerlink" href="#module-scrapy.downloadermiddlewares.httpcache" title="永久链接至标题">¶</a></h3>
<dl class="py class">
<dt id="scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.downloadermiddlewares.httpcache.</code><code class="sig-name descname">HttpCacheMiddleware</code><a class="headerlink" href="#scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware" title="永久链接至目标">¶</a></dt>
<dd><p>This middleware provides low-level cache to all HTTP requests and responses. It has to be combined with a cache storage backend as well as a cache policy.</p>
<p>Scrapy ships with the following HTTP cache storage backends:</p>
<blockquote>
<div><ul class="simple">
<li><p><a class="reference internal" href="#httpcache-storage-fs"><span class="std std-ref">Filesystem storage backend (default)</span></a></p></li>
<li><p><a class="reference internal" href="#httpcache-storage-dbm"><span class="std std-ref">DBM storage backend</span></a></p></li>
</ul>
</div></blockquote>
<p>You can change the HTTP cache storage backend with the <a class="reference internal" href="#std-setting-HTTPCACHE_STORAGE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">HTTPCACHE_STORAGE</span></code></a> setting. Or you can also <a class="reference internal" href="#httpcache-storage-custom"><span class="std std-ref">implement your own storage backend</span></a>.</p>
<p>Scrapy ships with two HTTP cache policies:</p>
<blockquote>
<div><ul class="simple">
<li><p><a class="reference internal" href="#httpcache-policy-rfc2616"><span class="std std-ref">RFC2616 policy</span></a></p></li>
<li><p><a class="reference internal" href="#httpcache-policy-dummy"><span class="std std-ref">Dummy policy (default)</span></a></p></li>
</ul>
</div></blockquote>
<p>You can change the HTTP cache policy with the <a class="reference internal" href="#std-setting-HTTPCACHE_POLICY"><code class="xref std std-setting docutils literal notranslate"><span class="pre">HTTPCACHE_POLICY</span></code></a> setting. Or you can also implement your own policy.</p>
<p id="std-reqmeta-dont_cache"><span id="std:reqmeta-dont_cache"></span>You can also avoid caching a response on every policy by setting the <a class="reference internal" href="#std-reqmeta-dont_cache"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">dont_cache</span></code></a> meta key to <code class="docutils literal notranslate"><span class="pre">True</span></code>.</p>
</dd></dl>
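<p>A typical way to enable the cache middleware is through settings (an illustrative fragment; the expiration value is arbitrary):</p>

```python
# settings.py -- enable HTTP caching with the default backend and policy.
HTTPCACHE_ENABLED = True
HTTPCACHE_DIR = 'httpcache'            # relative to the project data dir
HTTPCACHE_EXPIRATION_SECS = 3600       # 0 means cached responses never expire
HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
HTTPCACHE_POLICY = 'scrapy.extensions.httpcache.DummyPolicy'
```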

<div class="section" id="dummy-policy-default">
<span id="httpcache-policy-dummy"></span><h4>Dummy policy (default)<a class="headerlink" href="#dummy-policy-default" title="Permalink to this headline">¶</a></h4>
<dl class="py class">
<dt id="scrapy.extensions.httpcache.DummyPolicy">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.extensions.httpcache.</code><code class="sig-name descname">DummyPolicy</code><a class="reference internal" href="../_modules/scrapy/extensions/httpcache.html#DummyPolicy"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.extensions.httpcache.DummyPolicy" title="Permalink to this definition">¶</a></dt>
<dd><p>This policy has no awareness of any HTTP Cache-Control directives. Every request and its corresponding response are cached. When the same request is seen again, the response is returned without transferring anything from the Internet.</p>
<p>The Dummy policy is useful for testing spiders faster (without having to wait for downloads every time) and for trying your spider offline, when an Internet connection is not available. The goal is to be able to &quot;replay&quot; a spider run <em>exactly as it ran before</em>.</p>
</dd></dl>

</div>
<div class="section" id="rfc2616-policy">
<span id="httpcache-policy-rfc2616"></span><h4>RFC2616 policy<a class="headerlink" href="#rfc2616-policy" title="Permalink to this headline">¶</a></h4>
<dl class="py class">
<dt id="scrapy.extensions.httpcache.RFC2616Policy">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.extensions.httpcache.</code><code class="sig-name descname">RFC2616Policy</code><a class="reference internal" href="../_modules/scrapy/extensions/httpcache.html#RFC2616Policy"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.extensions.httpcache.RFC2616Policy" title="Permalink to this definition">¶</a></dt>
<dd><p>This policy provides a RFC2616 compliant HTTP cache, i.e. with HTTP Cache-Control awareness, aimed at production and used in continuous runs to avoid downloading unmodified data (to save bandwidth and speed up crawls).</p>
<p>What is implemented:</p>
<ul class="simple">
<li><p>Do not attempt to store responses/requests with the <code class="docutils literal notranslate"><span class="pre">no-store</span></code> cache-control directive set</p></li>
<li><p>Do not serve responses from the cache if the <code class="docutils literal notranslate"><span class="pre">no-cache</span></code> cache-control directive is set, even for fresh responses</p></li>
<li><p>Compute freshness lifetime from the <code class="docutils literal notranslate"><span class="pre">max-age</span></code> cache-control directive</p></li>
<li><p>Compute freshness lifetime from the <code class="docutils literal notranslate"><span class="pre">Expires</span></code> response header</p></li>
<li><p>Compute freshness lifetime from the <code class="docutils literal notranslate"><span class="pre">Last-Modified</span></code> response header (heuristic used by Firefox)</p></li>
<li><p>Compute current age from the <code class="docutils literal notranslate"><span class="pre">Age</span></code> response header</p></li>
<li><p>Compute current age from the <code class="docutils literal notranslate"><span class="pre">Date</span></code> header</p></li>
<li><p>Revalidate stale responses based on the <code class="docutils literal notranslate"><span class="pre">Last-Modified</span></code> response header</p></li>
<li><p>Revalidate stale responses based on the <code class="docutils literal notranslate"><span class="pre">ETag</span></code> response header</p></li>
<li><p>Set the <code class="docutils literal notranslate"><span class="pre">Date</span></code> header for any received response missing it</p></li>
<li><p>Support the <code class="docutils literal notranslate"><span class="pre">max-stale</span></code> cache-control directive in requests</p></li>
</ul>
<p>This allows spiders to be configured with the full RFC2616 cache policy, but avoid revalidation on a request-by-request basis, while remaining conformant with the HTTP spec.</p>
<p>Example:</p>
<p>Add the <code class="docutils literal notranslate"><span class="pre">Cache-Control:</span> <span class="pre">max-stale=600</span></code> header to requests to accept responses that have exceeded their expiration time by no more than 600 seconds.</p>
<p>See also: RFC2616, 14.9.3.</p>
<p>What is missing:</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">Pragma:</span> <span class="pre">no-cache</span></code> support: https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.1</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">Vary</span></code> header support: https://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.6</p></li>
<li><p>Invalidation after updates or deletions: https://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.10</p></li>
<li><p>... probably others ...</p></li>
</ul>
</dd></dl>
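<p>The max-stale example above looks like this in spider code (an illustrative fragment):</p>

```python
# Accept cached responses up to 600 seconds past their expiration time.
yield scrapy.Request(url, headers={'Cache-Control': 'max-stale=600'},
                     callback=self.parse_page)
```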

</div>
<div class="section" id="filesystem-storage-backend-default">
<span id="httpcache-storage-fs"></span><h4>Filesystem storage backend (default)<a class="headerlink" href="#filesystem-storage-backend-default" title="Permalink to this headline">¶</a></h4>
<dl class="py class">
<dt id="scrapy.extensions.httpcache.FilesystemCacheStorage">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.extensions.httpcache.</code><code class="sig-name descname">FilesystemCacheStorage</code><a class="reference internal" href="../_modules/scrapy/extensions/httpcache.html#FilesystemCacheStorage"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.extensions.httpcache.FilesystemCacheStorage" title="Permalink to this definition">¶</a></dt>
<dd><p>A file system storage backend is available for the HTTP cache middleware.</p>
<p>Each request/response pair is stored in a different directory containing the following files:</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">request_body</span></code> - the plain request body</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">request_headers</span></code> - the request headers (in raw HTTP format)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">response_body</span></code> - the plain response body</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">response_headers</span></code> - the response headers (in raw HTTP format)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">meta</span></code> - some metadata of this cache resource in Python <code class="docutils literal notranslate"><span class="pre">repr()</span></code> format (grep-friendly format)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">pickled_meta</span></code> - the same metadata as in <code class="docutils literal notranslate"><span class="pre">meta</span></code> but pickled for more efficient deserialization</p></li>
</ul>
<p>Directory names are made from the request fingerprint (see <code class="docutils literal notranslate"><span class="pre">scrapy.utils.request.fingerprint</span></code>), and one level of subdirectories is used to avoid creating too many files into the same directory (which is inefficient in many file systems). An example directory could be:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="o">/</span><span class="n">path</span><span class="o">/</span><span class="n">to</span><span class="o">/</span><span class="n">cache</span><span class="o">/</span><span class="nb">dir</span><span class="o">/</span><span class="n">example</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="mi">72</span><span class="o">/</span><span class="mi">72811</span><span class="n">f648e718090f041317756c03adb0ada46c7</span>
</pre></div>
</div>
</dd></dl>
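<p>The layout above can be sketched as follows. Hashing only the URL with SHA1 is a simplification of the real Scrapy request fingerprint, which also covers the HTTP method, body, and other request attributes; this is purely illustrative:</p>

```python
# Illustrative sketch of the filesystem cache directory layout: the
# first two characters of the request fingerprint form an extra
# subdirectory level, keeping any single directory from growing huge.
import hashlib
from pathlib import Path

def cache_dir(base, spider_name, url):
    # Simplified stand-in for the real request fingerprint.
    fingerprint = hashlib.sha1(url.encode()).hexdigest()
    return Path(base) / spider_name / fingerprint[:2] / fingerprint

print(cache_dir('/path/to/cache/dir', 'example.com',
                'http://www.example.com/'))
```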

</div>
<div class="section" id="dbm-storage-backend">
<span id="httpcache-storage-dbm"></span><h4>DBM storage backend<a class="headerlink" href="#dbm-storage-backend" title="Permalink to this headline">¶</a></h4>
<dl class="py class">
<dt id="scrapy.extensions.httpcache.DbmCacheStorage">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.extensions.httpcache.</code><code class="sig-name descname">DbmCacheStorage</code><a class="reference internal" href="../_modules/scrapy/extensions/httpcache.html#DbmCacheStorage"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.extensions.httpcache.DbmCacheStorage" title="Permalink to this definition">¶</a></dt>
<dd><p>A <a class="reference external" href="https://en.wikipedia.org/wiki/Dbm">DBM</a> storage backend is also available for the HTTP cache middleware.</p>
<p>By default, it uses the <a class="reference external" href="https://docs.python.org/3/library/dbm.html#module-dbm" title="(in Python v3.9)"><code class="xref py py-mod docutils literal notranslate"><span class="pre">dbm</span></code></a> module, but you can change it with the <a class="reference internal" href="#std-setting-HTTPCACHE_DBM_MODULE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">HTTPCACHE_DBM_MODULE</span></code></a> setting.</p>
</dd></dl>
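<p>Selecting this backend and its dbm implementation is a settings change (an illustrative fragment; the chosen module is an example):</p>

```python
# settings.py -- use the DBM backend instead of the filesystem one,
# backed by a specific dbm implementation.
HTTPCACHE_ENABLED = True
HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.DbmCacheStorage'
HTTPCACHE_DBM_MODULE = 'dbm.ndbm'  # any module exposing a dbm-compatible API
```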

</div>
<div class="section" id="writing-your-own-storage-backend">
<span id="httpcache-storage-custom"></span><h4>Writing your own storage backend<a class="headerlink" href="#writing-your-own-storage-backend" title="Permalink to this headline">¶</a></h4>
<p>You can implement a cache storage backend by creating a Python class that defines the methods described below.</p>
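<p>A minimal, self-contained sketch of the four methods is shown below. The in-memory class and the stand-in request object are hypothetical; a real backend would persist to disk and key entries by the request fingerprint rather than by URL:</p>

```python
# Minimal in-memory storage backend sketch implementing the
# CacheStorage interface: open_spider, close_spider,
# retrieve_response and store_response.

class InMemoryCacheStorage:
    def __init__(self, settings=None):
        self._data = {}

    def open_spider(self, spider):
        pass  # acquire resources (files, connections) here

    def close_spider(self, spider):
        self._data.clear()  # release resources here

    def retrieve_response(self, spider, request):
        # Return the cached response, or None on a cache miss.
        return self._data.get(request.url)

    def store_response(self, spider, request, response):
        self._data[request.url] = response

class Req:  # stand-in for scrapy.http.Request
    def __init__(self, url):
        self.url = url

storage = InMemoryCacheStorage()
storage.open_spider(spider=None)
req = Req('http://www.example.com/')
print(storage.retrieve_response(None, req))  # → None (cache miss)
storage.store_response(None, req, response='cached-body')
print(storage.retrieve_response(None, req))  # → cached-body
```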
<span class="target" id="module-scrapy.extensions.httpcache"></span><dl class="py class">
<dt id="scrapy.extensions.httpcache.CacheStorage">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.extensions.httpcache.</code><code class="sig-name descname">CacheStorage</code><a class="headerlink" href="#scrapy.extensions.httpcache.CacheStorage" title="永久链接至目标">¶</a></dt>
<dd><dl class="py method">
<dt id="scrapy.extensions.httpcache.CacheStorage.open_spider">
<code class="sig-name descname">open_spider</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">spider</span></em><span class="sig-paren">)</span><a class="headerlink" href="#scrapy.extensions.httpcache.CacheStorage.open_spider" title="永久链接至目标">¶</a></dt>
<dd><p>This method gets called after a spider has been opened for crawling. It handles the <a class="reference internal" href="signals.html#std-signal-spider_opened"><code class="xref std std-signal docutils literal notranslate"><span class="pre">open_spider</span></code></a> signal.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>spider</strong> (<a class="reference internal" href="spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> object) -- the spider which has been opened</p>
</dd>
</dl>
</dd></dl>

<dl class="py method">
<dt id="scrapy.extensions.httpcache.CacheStorage.close_spider">
<code class="sig-name descname">close_spider</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">spider</span></em><span class="sig-paren">)</span><a class="headerlink" href="#scrapy.extensions.httpcache.CacheStorage.close_spider" title="永久链接至目标">¶</a></dt>
<dd><p>This method gets called after a spider has been closed. It handles the <a class="reference internal" href="signals.html#std-signal-spider_closed"><code class="xref std std-signal docutils literal notranslate"><span class="pre">close_spider</span></code></a> signal.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>spider</strong> (<a class="reference internal" href="spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> object) -- the spider which has been closed</p>
</dd>
</dl>
</dd></dl>

<dl class="py method">
<dt id="scrapy.extensions.httpcache.CacheStorage.retrieve_response">
<code class="sig-name descname">retrieve_response</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">spider</span></em>, <em class="sig-param"><span class="n">request</span></em><span class="sig-paren">)</span><a class="headerlink" href="#scrapy.extensions.httpcache.CacheStorage.retrieve_response" title="永久链接至目标">¶</a></dt>
<dd><p>Return the response if present in the cache, or <code class="docutils literal notranslate"><span class="pre">None</span></code> otherwise.</p>
<dl class="field-list simple">
<dt class="field-odd">参数</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>spider</strong> (<a class="reference internal" href="spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> object) -- 生成请求的蜘蛛</p></li>
<li><p><strong>request</strong> (<a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> object) -- 查找的缓存响应的请求</p></li>
</ul>
</dd>
</dl>
</dd></dl>

<dl class="py method">
<dt id="scrapy.extensions.httpcache.CacheStorage.store_response">
<code class="sig-name descname">store_response</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">spider</span></em>, <em class="sig-param"><span class="n">request</span></em>, <em class="sig-param"><span class="n">response</span></em><span class="sig-paren">)</span><a class="headerlink" href="#scrapy.extensions.httpcache.CacheStorage.store_response" title="Permalink to this definition">¶</a></dt>
<dd><p>Store the given response in the cache.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>spider</strong> (<a class="reference internal" href="spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> object) -- the spider for which the response is intended</p></li>
<li><p><strong>request</strong> (<a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> object) -- the corresponding request the spider generated</p></li>
<li><p><strong>response</strong> (<a class="reference internal" href="request-response.html#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> object) -- the response to store in the cache</p></li>
</ul>
</dd>
</dl>
</dd></dl>

</dd></dl>

<p>In order to use a storage backend, set:</p>
<ul class="simple">
<li><p><a class="reference internal" href="#std-setting-HTTPCACHE_STORAGE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">HTTPCACHE_STORAGE</span></code></a> to the Python import path of your custom storage class.</p></li>
</ul>
</div>
<div class="section" id="httpcache-middleware-settings">
<h4>HTTPCache middleware settings<a class="headerlink" href="#httpcache-middleware-settings" title="Permalink to this headline">¶</a></h4>
<p>The <code class="xref py py-class docutils literal notranslate"><span class="pre">HttpCacheMiddleware</span></code> can be configured through the following settings:</p>
<div class="section" id="httpcache-enabled">
<span id="std-setting-HTTPCACHE_ENABLED"></span><span id="std:setting-HTTPCACHE_ENABLED"></span><h5>HTTPCACHE_ENABLED<a class="headerlink" href="#httpcache-enabled" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">False</span></code></p>
<p>Whether the HTTP cache will be enabled.</p>
</div>
<div class="section" id="httpcache-expiration-secs">
<span id="std-setting-HTTPCACHE_EXPIRATION_SECS"></span><span id="std:setting-HTTPCACHE_EXPIRATION_SECS"></span><h5>HTTPCACHE_EXPIRATION_SECS<a class="headerlink" href="#httpcache-expiration-secs" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">0</span></code></p>
<p>Expiration time for cached requests, in seconds.</p>
<p>Cached requests older than this time will be re-downloaded. If zero, cached requests will never expire.</p>
</div>
<div class="section" id="httpcache-dir">
<span id="std-setting-HTTPCACHE_DIR"></span><span id="std:setting-HTTPCACHE_DIR"></span><h5>HTTPCACHE_DIR<a class="headerlink" href="#httpcache-dir" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">'httpcache'</span></code></p>
<p>The directory to use for storing the (low-level) HTTP cache. If empty, the HTTP cache will be disabled. If a relative path is given, it is taken relative to the project data dir. For more info see: <a class="reference internal" href="commands.html#topics-project-structure"><span class="std std-ref">Default structure of Scrapy projects</span></a>.</p>
</div>
<div class="section" id="httpcache-ignore-http-codes">
<span id="std-setting-HTTPCACHE_IGNORE_HTTP_CODES"></span><span id="std:setting-HTTPCACHE_IGNORE_HTTP_CODES"></span><h5>HTTPCACHE_IGNORE_HTTP_CODES<a class="headerlink" href="#httpcache-ignore-http-codes" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">[]</span></code></p>
<p>Don't cache responses with these HTTP codes.</p>
</div>
<div class="section" id="httpcache-ignore-missing">
<span id="std-setting-HTTPCACHE_IGNORE_MISSING"></span><span id="std:setting-HTTPCACHE_IGNORE_MISSING"></span><h5>HTTPCACHE_IGNORE_MISSING<a class="headerlink" href="#httpcache-ignore-missing" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">False</span></code></p>
<p>If enabled, requests not found in the cache will be ignored instead of downloaded.</p>
</div>
<div class="section" id="httpcache-ignore-schemes">
<span id="std-setting-HTTPCACHE_IGNORE_SCHEMES"></span><span id="std:setting-HTTPCACHE_IGNORE_SCHEMES"></span><h5>HTTPCACHE_IGNORE_SCHEMES<a class="headerlink" href="#httpcache-ignore-schemes" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">['file']</span></code></p>
<p>Don't cache responses with these URI schemes.</p>
</div>
<div class="section" id="httpcache-storage">
<span id="std-setting-HTTPCACHE_STORAGE"></span><span id="std:setting-HTTPCACHE_STORAGE"></span><h5>HTTPCACHE_STORAGE<a class="headerlink" href="#httpcache-storage" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">'scrapy.extensions.httpcache.FilesystemCacheStorage'</span></code></p>
<p>The class which implements the cache storage backend.</p>
</div>
<div class="section" id="httpcache-dbm-module">
<span id="std-setting-HTTPCACHE_DBM_MODULE"></span><span id="std:setting-HTTPCACHE_DBM_MODULE"></span><h5>HTTPCACHE_DBM_MODULE<a class="headerlink" href="#httpcache-dbm-module" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">'dbm'</span></code></p>
<p>The database module to use in the <a class="reference internal" href="#httpcache-storage-dbm"><span class="std std-ref">DBM storage backend</span></a>. This setting is specific to the DBM backend.</p>
</div>
<div class="section" id="httpcache-policy">
<span id="std-setting-HTTPCACHE_POLICY"></span><span id="std:setting-HTTPCACHE_POLICY"></span><h5>HTTPCACHE_POLICY<a class="headerlink" href="#httpcache-policy" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">'scrapy.extensions.httpcache.DummyPolicy'</span></code></p>
<p>The class which implements the cache policy.</p>
</div>
<div class="section" id="httpcache-gzip">
<span id="std-setting-HTTPCACHE_GZIP"></span><span id="std:setting-HTTPCACHE_GZIP"></span><h5>HTTPCACHE_GZIP<a class="headerlink" href="#httpcache-gzip" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">False</span></code></p>
<p>If enabled, will compress all cached data with gzip. This setting is specific to the Filesystem backend.</p>
</div>
<div class="section" id="httpcache-always-store">
<span id="std-setting-HTTPCACHE_ALWAYS_STORE"></span><span id="std:setting-HTTPCACHE_ALWAYS_STORE"></span><h5>HTTPCACHE_ALWAYS_STORE<a class="headerlink" href="#httpcache-always-store" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">False</span></code></p>
<p>If enabled, will cache pages unconditionally.</p>
<p>A spider may wish to have all responses available in the cache, for future use with <code class="docutils literal notranslate"><span class="pre">Cache-Control:</span> <span class="pre">max-stale</span></code>, for instance. The DummyPolicy caches all responses but never revalidates them, and sometimes a more nuanced policy is desirable.</p>
<p>This setting still respects <code class="docutils literal notranslate"><span class="pre">Cache-Control:</span> <span class="pre">no-store</span></code> directives in responses. If you don't want that, filter <code class="docutils literal notranslate"><span class="pre">no-store</span></code> out of the Cache-Control headers in responses you feed to the cache middleware.</p>
</div>
<div class="section" id="httpcache-ignore-response-cache-controls">
<span id="std-setting-HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS"></span><span id="std:setting-HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS"></span><h5>HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS<a class="headerlink" href="#httpcache-ignore-response-cache-controls" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">[]</span></code></p>
<p>List of Cache-Control directives in responses to be ignored.</p>
<p>Sites often set 'no-store', 'no-cache', 'must-revalidate', etc., but get upset at the traffic a spider can generate if it actually respects those directives. This allows to selectively ignore Cache-Control directives that are known to be unimportant for the sites being crawled.</p>
<p>We assume that the spider will not issue Cache-Control directives in requests unless it actually needs them, so directives in requests are not filtered.</p>
</div>
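<p>Tying the settings above together, a <code class="docutils literal notranslate"><span class="pre">settings.py</span></code> fragment enabling the HTTP cache might look like the following sketch. The concrete values are illustrative choices, not recommendations.</p>

```python
# Illustrative settings.py fragment: enable the HTTP cache with a
# one-hour expiry and gzip-compressed filesystem storage.
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 3600        # re-download entries older than 1 hour
HTTPCACHE_DIR = "httpcache"             # relative to the project data dir
HTTPCACHE_IGNORE_HTTP_CODES = [500, 502, 503]  # never cache server errors
HTTPCACHE_GZIP = True                   # filesystem backend only
```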
</div>
</div>
<div class="section" id="module-scrapy.downloadermiddlewares.httpcompression">
<span id="httpcompressionmiddleware"></span><h3>HttpCompressionMiddleware<a class="headerlink" href="#module-scrapy.downloadermiddlewares.httpcompression" title="Permalink to this headline">¶</a></h3>
<dl class="py class">
<dt id="scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.downloadermiddlewares.httpcompression.</code><code class="sig-name descname">HttpCompressionMiddleware</code><a class="reference internal" href="../_modules/scrapy/downloadermiddlewares/httpcompression.html#HttpCompressionMiddleware"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware" title="Permalink to this definition">¶</a></dt>
<dd><p>This middleware allows compressed (gzip, deflate) traffic to be sent/received from web sites.</p>
<p>This middleware also supports decoding <a class="reference external" href="https://www.ietf.org/rfc/rfc7932.txt">brotli-compressed</a> responses, provided that <a class="reference external" href="https://pypi.org/project/brotlipy/">brotlipy</a> is installed.</p>
</dd></dl>

<div class="section" id="httpcompressionmiddleware-settings">
<h4>HttpCompressionMiddleware settings<a class="headerlink" href="#httpcompressionmiddleware-settings" title="Permalink to this headline">¶</a></h4>
<div class="section" id="compression-enabled">
<span id="std-setting-COMPRESSION_ENABLED"></span><span id="std:setting-COMPRESSION_ENABLED"></span><h5>COMPRESSION_ENABLED<a class="headerlink" href="#compression-enabled" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">True</span></code></p>
<p>Whether the Compression middleware will be enabled.</p>
</div>
</div>
</div>
<div class="section" id="module-scrapy.downloadermiddlewares.httpproxy">
<span id="httpproxymiddleware"></span><h3>HttpProxyMiddleware<a class="headerlink" href="#module-scrapy.downloadermiddlewares.httpproxy" title="Permalink to this headline">¶</a></h3>
<span class="target" id="std-reqmeta-proxy"><span id="std:reqmeta-proxy"></span></span><dl class="py class">
<dt id="scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.downloadermiddlewares.httpproxy.</code><code class="sig-name descname">HttpProxyMiddleware</code><a class="reference internal" href="../_modules/scrapy/downloadermiddlewares/httpproxy.html#HttpProxyMiddleware"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware" title="Permalink to this definition">¶</a></dt>
<dd><p>This middleware sets the HTTP proxy to use for requests, by setting the <code class="docutils literal notranslate"><span class="pre">proxy</span></code> meta value for <a class="reference internal" href="request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> objects.</p>
<p>Like the Python standard library module <a class="reference external" href="https://docs.python.org/3/library/urllib.request.html#module-urllib.request" title="(in Python v3.9)"><code class="xref py py-mod docutils literal notranslate"><span class="pre">urllib.request</span></code></a>, it obeys the following environment variables:</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">http_proxy</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">https_proxy</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">no_proxy</span></code></p></li>
</ul>
<p>You can also set the meta key <code class="docutils literal notranslate"><span class="pre">proxy</span></code> per-request, to a value like <code class="docutils literal notranslate"><span class="pre">http://some_proxy_server:port</span></code> or <code class="docutils literal notranslate"><span class="pre">http://username:password&#64;some_proxy_server:port</span></code>. Keep in mind this value will take precedence over the <code class="docutils literal notranslate"><span class="pre">http_proxy</span></code>/<code class="docutils literal notranslate"><span class="pre">https_proxy</span></code> environment variables, and it will also ignore the <code class="docutils literal notranslate"><span class="pre">no_proxy</span></code> environment variable.</p>
</dd></dl>
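<p>As a sketch of the per-request usage described above, the helper below builds such a meta dict. The helper name and the placeholder proxy URLs are assumptions for illustration; in a spider you would pass the result as the <code class="docutils literal notranslate"><span class="pre">meta</span></code> argument of a <code class="docutils literal notranslate"><span class="pre">Request</span></code>.</p>

```python
# Illustrative helper: choose the proxy for a single request via the
# "proxy" meta key. Proxy URLs and credentials below are placeholders.
def proxy_meta(proxy_url, username=None, password=None):
    """Build a Request.meta dict routing one request through a proxy."""
    if username is not None:
        # Embed credentials into the proxy URL: scheme://user:pass@host:port
        scheme, rest = proxy_url.split("://", 1)
        proxy_url = f"{scheme}://{username}:{password}@{rest}"
    # This value overrides http_proxy/https_proxy and bypasses no_proxy.
    return {"proxy": proxy_url}
```

In a spider this could be used as, e.g., <code class="docutils literal notranslate"><span class="pre">Request(url,</span> <span class="pre">meta=proxy_meta("http://some_proxy_server:3128"))</span></code>.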

</div>
<div class="section" id="module-scrapy.downloadermiddlewares.redirect">
<span id="redirectmiddleware"></span><h3>RedirectMiddleware<a class="headerlink" href="#module-scrapy.downloadermiddlewares.redirect" title="Permalink to this headline">¶</a></h3>
<dl class="py class">
<dt id="scrapy.downloadermiddlewares.redirect.RedirectMiddleware">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.downloadermiddlewares.redirect.</code><code class="sig-name descname">RedirectMiddleware</code><a class="reference internal" href="../_modules/scrapy/downloadermiddlewares/redirect.html#RedirectMiddleware"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.downloadermiddlewares.redirect.RedirectMiddleware" title="Permalink to this definition">¶</a></dt>
<dd><p>This middleware handles redirection of requests based on response status.</p>
</dd></dl>

<p id="std-reqmeta-redirect_urls"><span id="std:reqmeta-redirect_urls"></span>The urls which the request goes through (while being redirected) can be found in the <code class="docutils literal notranslate"><span class="pre">redirect_urls</span></code> <a class="reference internal" href="request-response.html#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> key.</p>
<p id="std-reqmeta-redirect_reasons"><span id="std:reqmeta-redirect_reasons"></span>The reason behind each redirect in <a class="reference internal" href="#std-reqmeta-redirect_urls"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">redirect_urls</span></code></a> can be found in the <code class="docutils literal notranslate"><span class="pre">redirect_reasons</span></code> <a class="reference internal" href="request-response.html#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> key. For example: <code class="docutils literal notranslate"><span class="pre">[301,</span> <span class="pre">302,</span> <span class="pre">307,</span> <span class="pre">'meta</span> <span class="pre">refresh']</span></code>.</p>
<p>The format of a reason depends on the middleware that handled the corresponding redirect. For example, <a class="reference internal" href="#scrapy.downloadermiddlewares.redirect.RedirectMiddleware" title="scrapy.downloadermiddlewares.redirect.RedirectMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">RedirectMiddleware</span></code></a> indicates the triggering response status code as an integer, while <a class="reference internal" href="#scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware" title="scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">MetaRefreshMiddleware</span></code></a> always uses the <code class="docutils literal notranslate"><span class="pre">'meta</span> <span class="pre">refresh'</span></code> string as reason.</p>
<p>The <a class="reference internal" href="#scrapy.downloadermiddlewares.redirect.RedirectMiddleware" title="scrapy.downloadermiddlewares.redirect.RedirectMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">RedirectMiddleware</span></code></a> can be configured through the following settings (see the settings documentation for more info):</p>
<ul class="simple">
<li><p><a class="reference internal" href="#std-setting-REDIRECT_ENABLED"><code class="xref std std-setting docutils literal notranslate"><span class="pre">REDIRECT_ENABLED</span></code></a></p></li>
<li><p><a class="reference internal" href="#std-setting-REDIRECT_MAX_TIMES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">REDIRECT_MAX_TIMES</span></code></a></p></li>
</ul>
<p id="std-reqmeta-dont_redirect"><span id="std:reqmeta-dont_redirect"></span>If <a class="reference internal" href="request-response.html#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> has the <code class="docutils literal notranslate"><span class="pre">dont_redirect</span></code> key set to True, the request will be ignored by this middleware.</p>
<p>If you want to handle some redirect status codes in your spider, you can specify these in the <code class="docutils literal notranslate"><span class="pre">handle_httpstatus_list</span></code> spider attribute.</p>
<p>For example, if you want the redirect middleware to ignore 301 and 302 responses (and pass them through to your spider) you can do this:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">class</span> <span class="nc">MySpider</span><span class="p">(</span><span class="n">CrawlSpider</span><span class="p">):</span>
    <span class="n">handle_httpstatus_list</span> <span class="o">=</span> <span class="p">[</span><span class="mi">301</span><span class="p">,</span> <span class="mi">302</span><span class="p">]</span>
</pre></div>
</div>
<p>The <code class="docutils literal notranslate"><span class="pre">handle_httpstatus_list</span></code> key of <a class="reference internal" href="request-response.html#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> can also be used to specify which response codes to allow on a per-request basis. You can also set the meta key <code class="docutils literal notranslate"><span class="pre">handle_httpstatus_all</span></code> to <code class="docutils literal notranslate"><span class="pre">True</span></code> if you want to allow any response code for a request.</p>
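<p>The precedence rules above can be sketched as a small decision function. This is a simplified illustration of the behavior, not Scrapy's actual implementation, and the function name is hypothetical.</p>

```python
# Simplified sketch of which response statuses reach the spider callback:
# a 2xx always does; otherwise the per-request handle_httpstatus_list
# (from Request.meta) takes precedence over the spider-level attribute,
# and handle_httpstatus_all in meta allows everything.
def spider_handles_status(status, spider_list=(), meta=None):
    meta = meta or {}
    if meta.get("handle_httpstatus_all"):
        return True
    # per-request list overrides the spider attribute when present
    allowed = meta.get("handle_httpstatus_list", spider_list)
    return 200 <= status < 300 or status in allowed
```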
<div class="section" id="redirectmiddleware-settings">
<h4>RedirectMiddleware settings<a class="headerlink" href="#redirectmiddleware-settings" title="Permalink to this headline">¶</a></h4>
<div class="section" id="redirect-enabled">
<span id="std-setting-REDIRECT_ENABLED"></span><span id="std:setting-REDIRECT_ENABLED"></span><h5>REDIRECT_ENABLED<a class="headerlink" href="#redirect-enabled" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">True</span></code></p>
<p>Whether the Redirect middleware will be enabled.</p>
</div>
<div class="section" id="redirect-max-times">
<span id="std-setting-REDIRECT_MAX_TIMES"></span><span id="std:setting-REDIRECT_MAX_TIMES"></span><h5>REDIRECT_MAX_TIMES<a class="headerlink" href="#redirect-max-times" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">20</span></code></p>
<p>The maximum number of redirections that will be followed for a single request. After this maximum, the request's response is returned as is.</p>
</div>
</div>
</div>
<div class="section" id="metarefreshmiddleware">
<h3>MetaRefreshMiddleware<a class="headerlink" href="#metarefreshmiddleware" title="Permalink to this headline">¶</a></h3>
<dl class="py class">
<dt id="scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.downloadermiddlewares.redirect.</code><code class="sig-name descname">MetaRefreshMiddleware</code><a class="reference internal" href="../_modules/scrapy/downloadermiddlewares/redirect.html#MetaRefreshMiddleware"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware" title="Permalink to this definition">¶</a></dt>
<dd><p>This middleware handles redirection of requests based on the meta-refresh HTML tag.</p>
</dd></dl>

<p>The <a class="reference internal" href="#scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware" title="scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">MetaRefreshMiddleware</span></code></a> can be configured through the following settings (see the settings documentation for more info):</p>
<ul class="simple">
<li><p><a class="reference internal" href="#std-setting-METAREFRESH_ENABLED"><code class="xref std std-setting docutils literal notranslate"><span class="pre">METAREFRESH_ENABLED</span></code></a></p></li>
<li><p><a class="reference internal" href="#std-setting-METAREFRESH_IGNORE_TAGS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">METAREFRESH_IGNORE_TAGS</span></code></a></p></li>
<li><p><a class="reference internal" href="#std-setting-METAREFRESH_MAXDELAY"><code class="xref std std-setting docutils literal notranslate"><span class="pre">METAREFRESH_MAXDELAY</span></code></a></p></li>
</ul>
<p>This middleware obeys the <a class="reference internal" href="#std-setting-REDIRECT_MAX_TIMES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">REDIRECT_MAX_TIMES</span></code></a> setting, and the <a class="reference internal" href="#std-reqmeta-dont_redirect"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">dont_redirect</span></code></a>, <a class="reference internal" href="#std-reqmeta-redirect_urls"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">redirect_urls</span></code></a> and <a class="reference internal" href="#std-reqmeta-redirect_reasons"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">redirect_reasons</span></code></a> request meta keys, as described for <a class="reference internal" href="#scrapy.downloadermiddlewares.redirect.RedirectMiddleware" title="scrapy.downloadermiddlewares.redirect.RedirectMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">RedirectMiddleware</span></code></a>.</p>
<div class="section" id="metarefreshmiddleware-settings">
<h4>MetaRefreshMiddleware settings<a class="headerlink" href="#metarefreshmiddleware-settings" title="Permalink to this headline">¶</a></h4>
<div class="section" id="metarefresh-enabled">
<span id="std-setting-METAREFRESH_ENABLED"></span><span id="std:setting-METAREFRESH_ENABLED"></span><h5>METAREFRESH_ENABLED<a class="headerlink" href="#metarefresh-enabled" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">True</span></code></p>
<p>Whether the Meta Refresh middleware will be enabled.</p>
</div>
<div class="section" id="metarefresh-ignore-tags">
<span id="std-setting-METAREFRESH_IGNORE_TAGS"></span><span id="std:setting-METAREFRESH_IGNORE_TAGS"></span><h5>METAREFRESH_IGNORE_TAGS<a class="headerlink" href="#metarefresh-ignore-tags" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">[]</span></code></p>
<p>Meta tags within these tags are ignored.</p>
<div class="versionchanged">
<p><span class="versionmodified changed">Changed in version 2.0: </span>The default value of <a class="reference internal" href="#std-setting-METAREFRESH_IGNORE_TAGS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">METAREFRESH_IGNORE_TAGS</span></code></a> changed from <code class="docutils literal notranslate"><span class="pre">['script',</span> <span class="pre">'noscript']</span></code> to <code class="docutils literal notranslate"><span class="pre">[]</span></code>.</p>
</div>
</div>
<div class="section" id="metarefresh-maxdelay">
<span id="std-setting-METAREFRESH_MAXDELAY"></span><span id="std:setting-METAREFRESH_MAXDELAY"></span><h5>METAREFRESH_MAXDELAY<a class="headerlink" href="#metarefresh-maxdelay" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">100</span></code></p>
<p>The maximum meta-refresh delay (in seconds) to follow the redirection. Some sites use meta-refresh for redirecting to a session expired page, so we restrict automatic redirection to the maximum delay.</p>
</div>
</div>
</div>
<div class="section" id="module-scrapy.downloadermiddlewares.retry">
<span id="retrymiddleware"></span><h3>RetryMiddleware<a class="headerlink" href="#module-scrapy.downloadermiddlewares.retry" title="Permalink to this headline">¶</a></h3>
<dl class="py class">
<dt id="scrapy.downloadermiddlewares.retry.RetryMiddleware">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.downloadermiddlewares.retry.</code><code class="sig-name descname">RetryMiddleware</code><a class="headerlink" href="#scrapy.downloadermiddlewares.retry.RetryMiddleware" title="Permalink to this definition">¶</a></dt>
<dd><p>A middleware to retry failed requests that are potentially caused by temporary problems such as a connection timeout or HTTP 500 error.</p>
</dd></dl>

<p>Failed pages are collected on the scraping process and rescheduled at the end, once the spider has finished crawling all regular (non-failed) pages.</p>
<p>The <a class="reference internal" href="#scrapy.downloadermiddlewares.retry.RetryMiddleware" title="scrapy.downloadermiddlewares.retry.RetryMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">RetryMiddleware</span></code></a> can be configured through the following settings (see the settings documentation for more info):</p>
<ul class="simple">
<li><p><a class="reference internal" href="#std-setting-RETRY_ENABLED"><code class="xref std std-setting docutils literal notranslate"><span class="pre">RETRY_ENABLED</span></code></a></p></li>
<li><p><a class="reference internal" href="#std-setting-RETRY_TIMES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">RETRY_TIMES</span></code></a></p></li>
<li><p><a class="reference internal" href="#std-setting-RETRY_HTTP_CODES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">RETRY_HTTP_CODES</span></code></a></p></li>
</ul>
<p id="std-reqmeta-dont_retry"><span id="std:reqmeta-dont_retry"></span>If <a class="reference internal" href="request-response.html#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> has the <code class="docutils literal notranslate"><span class="pre">dont_retry</span></code> key set to True, the request will be ignored by this middleware.</p>
<div class="section" id="retrymiddleware-settings">
<h4>RetryMiddleware settings<a class="headerlink" href="#retrymiddleware-settings" title="Permalink to this headline">¶</a></h4>
<div class="section" id="retry-enabled">
<span id="std-setting-RETRY_ENABLED"></span><span id="std:setting-RETRY_ENABLED"></span><h5>RETRY_ENABLED<a class="headerlink" href="#retry-enabled" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">True</span></code></p>
<p>Whether the Retry middleware will be enabled.</p>
</div>
<div class="section" id="retry-times">
<span id="std-setting-RETRY_TIMES"></span><span id="std:setting-RETRY_TIMES"></span><h5>RETRY_TIMES<a class="headerlink" href="#retry-times" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">2</span></code></p>
<p>Maximum number of times to retry, in addition to the first download.</p>
<p>Maximum number of retries can also be specified per-request using the <a class="reference internal" href="request-response.html#std-reqmeta-max_retry_times"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">max_retry_times</span></code></a> attribute of <a class="reference internal" href="request-response.html#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a>. When initialized, the <a class="reference internal" href="request-response.html#std-reqmeta-max_retry_times"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">max_retry_times</span></code></a> meta key takes higher precedence over the <a class="reference internal" href="#std-setting-RETRY_TIMES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">RETRY_TIMES</span></code></a> setting.</p>
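<p>The precedence just described can be sketched as follows. The helper names are hypothetical and this is not Scrapy's actual code; it only illustrates that <code class="docutils literal notranslate"><span class="pre">max_retry_times</span></code> in the request meta wins over the <code class="docutils literal notranslate"><span class="pre">RETRY_TIMES</span></code> setting.</p>

```python
# Sketch of resolving the effective retry limit for one request:
# the max_retry_times meta key, when present, takes precedence
# over the RETRY_TIMES setting (default 2).
def effective_retry_limit(meta, retry_times_setting=2):
    return meta.get("max_retry_times", retry_times_setting)

def should_retry(meta, retry_times_setting=2):
    # "retry_times" here counts retries already performed on the request
    retries_so_far = meta.get("retry_times", 0)
    return retries_so_far < effective_retry_limit(meta, retry_times_setting)
```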
</div>
<div class="section" id="retry-http-codes">
<span id="std-setting-RETRY_HTTP_CODES"></span><span id="std:setting-RETRY_HTTP_CODES"></span><h5>RETRY_HTTP_CODES<a class="headerlink" href="#retry-http-codes" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">[500,</span> <span class="pre">502,</span> <span class="pre">503,</span> <span class="pre">504,</span> <span class="pre">522,</span> <span class="pre">524,</span> <span class="pre">408,</span> <span class="pre">429]</span></code></p>
<p>Which HTTP response codes to retry. Other errors (DNS lookup issues, connections lost, etc) are always retried.</p>
<p>In some cases you may want to add 400 to <a class="reference internal" href="#std-setting-RETRY_HTTP_CODES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">RETRY_HTTP_CODES</span></code></a> because it is a common code used to indicate server overload. It is not included by default because HTTP specs say so.</p>
</div>
</div>
</div>
<div class="section" id="module-scrapy.downloadermiddlewares.robotstxt">
<span id="robotstxtmiddleware"></span><span id="topics-dlmw-robots"></span><h3>RobotsTxtMiddleware<a class="headerlink" href="#module-scrapy.downloadermiddlewares.robotstxt" title="Permalink to this headline">¶</a></h3>
<dl class="py class">
<dt id="scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.downloadermiddlewares.robotstxt.</code><code class="sig-name descname">RobotsTxtMiddleware</code><a class="headerlink" href="#scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware" title="Permalink to this definition">¶</a></dt>
<dd><p>This middleware filters out requests forbidden by the robots.txt exclusion standard.</p>
<p>To make sure Scrapy respects robots.txt make sure the middleware is enabled and the <a class="reference internal" href="settings.html#std-setting-ROBOTSTXT_OBEY"><code class="xref std std-setting docutils literal notranslate"><span class="pre">ROBOTSTXT_OBEY</span></code></a> setting is enabled.</p>
<p>The <a class="reference internal" href="settings.html#std-setting-ROBOTSTXT_USER_AGENT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">ROBOTSTXT_USER_AGENT</span></code></a> setting can be used to specify the user agent string to use for matching in the <a class="reference external" href="https://www.robotstxt.org/">robots.txt</a> file. If it is <code class="docutils literal notranslate"><span class="pre">None</span></code>, the User-Agent header you are sending with the request or the <a class="reference internal" href="settings.html#std-setting-USER_AGENT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">USER_AGENT</span></code></a> setting (in that order) will be used for determining the user agent to use in the <a class="reference external" href="https://www.robotstxt.org/">robots.txt</a> file.</p>
<p>This middleware has to be combined with a <a class="reference external" href="https://www.robotstxt.org/">robots.txt</a> parser.</p>
<p>Scrapy ships with support for the following <a class="reference external" href="https://www.robotstxt.org/">robots.txt</a> parsers:</p>
<ul class="simple">
<li><p><a class="reference internal" href="#protego-parser"><span class="std std-ref">Protego</span></a> (default)</p></li>
<li><p><a class="reference internal" href="#python-robotfileparser"><span class="std std-ref">RobotFileParser</span></a></p></li>
<li><p><a class="reference internal" href="#reppy-parser"><span class="std std-ref">Reppy</span></a></p></li>
<li><p><a class="reference internal" href="#rerp-parser"><span class="std std-ref">Robotexclusionrulesparser</span></a></p></li>
</ul>
<p>You can change the <a class="reference external" href="https://www.robotstxt.org/">robots.txt</a> parser with the <a class="reference internal" href="settings.html#std-setting-ROBOTSTXT_PARSER"><code class="xref std std-setting docutils literal notranslate"><span class="pre">ROBOTSTXT_PARSER</span></code></a> setting. Or you can also <a class="reference internal" href="#support-for-new-robots-parser"><span class="std std-ref">implement support for a new parser</span></a>.</p>
</dd></dl>
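<p>For example, a <code class="docutils literal notranslate"><span class="pre">settings.py</span></code> fragment enabling robots.txt support and switching from the default Protego parser to Python's built-in <code class="docutils literal notranslate"><span class="pre">RobotFileParser</span></code> might look like this sketch (values are illustrative):</p>

```python
# Illustrative settings.py fragment for the RobotsTxtMiddleware:
# obey robots.txt and select an alternative parser implementation.
ROBOTSTXT_OBEY = True
ROBOTSTXT_PARSER = "scrapy.robotstxt.PythonRobotParser"
ROBOTSTXT_USER_AGENT = None  # fall back to the request's User-Agent header
```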

<p id="std-reqmeta-dont_obey_robotstxt"><span id="std:reqmeta-dont_obey_robotstxt"></span>If <a class="reference internal" href="request-response.html#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> has the <code class="docutils literal notranslate"><span class="pre">dont_obey_robotstxt</span></code> key set to True, the request will be processed by this middleware even if <a class="reference internal" href="settings.html#std-setting-ROBOTSTXT_OBEY"><code class="xref std std-setting docutils literal notranslate"><span class="pre">ROBOTSTXT_OBEY</span></code></a> is enabled.</p>
<p>Parsers vary in several aspects:</p>
<ul class="simple">
<li><p>Language of implementation</p></li>
<li><p>Supported specification</p></li>
<li><p>Support for wildcard matching</p></li>
<li><p>Usage of <a class="reference external" href="https://developers.google.com/search/reference/robots_txt#order-of-precedence-for-group-member-lines">length based rule</a>: in particular for <code class="docutils literal notranslate"><span class="pre">Allow</span></code> and <code class="docutils literal notranslate"><span class="pre">Disallow</span></code> directives, where the most specific rule based on the length of the path trumps the less specific (shorter) rule</p></li>
</ul>
<p>A comparison of the performance of the different parsers is available at <a class="reference external" href="https://anubhavp28.github.io/gsoc-weekly-checkin-12/">the following link</a>.</p>
<div class="section" id="protego-parser">
<span id="id1"></span><h4>Protego parser<a class="headerlink" href="#protego-parser" title="Permalink to this headline">¶</a></h4>
<p>Based on <a class="reference external" href="https://github.com/scrapy/protego">Protego</a>:</p>
<ul class="simple">
<li><p>用Python实现</p></li>
<li><p>符合 <a class="reference external" href="https://developers.google.com/search/reference/robots_txt">Google's Robots.txt Specification</a></p></li>
<li><p>支持通配符匹配</p></li>
<li><p>使用基于长度的规则</p></li>
</ul>
<p>Scrapy默认使用这个解析器。</p>
</div>
<div class="section" id="robotfileparser">
<span id="python-robotfileparser"></span><h4>RobotFileParser<a class="headerlink" href="#robotfileparser" title="Permalink to this headline">¶</a></h4>
<p>Based on <a class="reference external" href="https://docs.python.org/3/library/urllib.robotparser.html#urllib.robotparser.RobotFileParser" title="(in Python v3.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">RobotFileParser</span></code></a>:</p>
<ul class="simple">
<li><p>is Python's built-in <a class="reference external" href="https://www.robotstxt.org/">robots.txt</a> parser</p></li>
<li><p>is compliant with <a class="reference external" href="https://www.robotstxt.org/norobots-rfc.txt">Martijn Koster's 1996 draft specification</a></p></li>
<li><p>lacks support for wildcard matching</p></li>
<li><p>doesn't use the length based rule</p></li>
</ul>
<p>It is faster than Protego and backward-compatible with versions of Scrapy before 1.8.0.</p>
<p>In order to use this parser, set:</p>
<ul class="simple">
<li><p><a class="reference internal" href="settings.html#std-setting-ROBOTSTXT_PARSER"><code class="xref std std-setting docutils literal notranslate"><span class="pre">ROBOTSTXT_PARSER</span></code></a> to <code class="docutils literal notranslate"><span class="pre">scrapy.robotstxt.PythonRobotParser</span></code></p></li>
</ul>
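For example, switching the backend is a one-line change in the project's settings.py (a sketch; the project layout is assumed):

```python
# settings.py -- switch the robots.txt backend to the stdlib parser
ROBOTSTXT_OBEY = True
ROBOTSTXT_PARSER = "scrapy.robotstxt.PythonRobotParser"
```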
</div>
<div class="section" id="reppy-parser">
<span id="id2"></span><h4>Reppy parser<a class="headerlink" href="#reppy-parser" title="Permalink to this headline">¶</a></h4>
<p>Based on <a class="reference external" href="https://github.com/seomoz/reppy/">Reppy</a>:</p>
<ul class="simple">
<li><p>is a Python wrapper around <a class="reference external" href="https://github.com/seomoz/rep-cpp">Robots Exclusion Protocol Parser for C++</a></p></li>
<li><p>is compliant with <a class="reference external" href="https://www.robotstxt.org/norobots-rfc.txt">Martijn Koster's 1996 draft specification</a></p></li>
<li><p>supports wildcard matching</p></li>
<li><p>uses the length based rule</p></li>
</ul>
<p>Native implementation, provides better speed than Protego.</p>
<p>In order to use this parser:</p>
<ul class="simple">
<li><p>Install <a class="reference external" href="https://github.com/seomoz/reppy/">Reppy</a> by running <code class="docutils literal notranslate"><span class="pre">pip</span> <span class="pre">install</span> <span class="pre">reppy</span></code></p></li>
<li><p>Set the <a class="reference internal" href="settings.html#std-setting-ROBOTSTXT_PARSER"><code class="xref std std-setting docutils literal notranslate"><span class="pre">ROBOTSTXT_PARSER</span></code></a> setting to <code class="docutils literal notranslate"><span class="pre">scrapy.robotstxt.ReppyRobotParser</span></code></p></li>
</ul>
</div>
<div class="section" id="robotexclusionrulesparser">
<span id="rerp-parser"></span><h4>Robotexclusionrulesparser<a class="headerlink" href="#robotexclusionrulesparser" title="Permalink to this headline">¶</a></h4>
<p>Based on <a class="reference external" href="http://nikitathespider.com/python/rerp/">Robotexclusionrulesparser</a>:</p>
<ul class="simple">
<li><p>implemented in Python</p></li>
<li><p>is compliant with <a class="reference external" href="https://www.robotstxt.org/norobots-rfc.txt">Martijn Koster's 1996 draft specification</a></p></li>
<li><p>supports wildcard matching</p></li>
<li><p>doesn't use the length based rule</p></li>
</ul>
<p>In order to use this parser:</p>
<ul class="simple">
<li><p>Install <a class="reference external" href="http://nikitathespider.com/python/rerp/">Robotexclusionrulesparser</a> by running <code class="docutils literal notranslate"><span class="pre">pip</span> <span class="pre">install</span> <span class="pre">robotexclusionrulesparser</span></code></p></li>
<li><p>Set the <a class="reference internal" href="settings.html#std-setting-ROBOTSTXT_PARSER"><code class="xref std std-setting docutils literal notranslate"><span class="pre">ROBOTSTXT_PARSER</span></code></a> setting to <code class="docutils literal notranslate"><span class="pre">scrapy.robotstxt.RerpRobotParser</span></code></p></li>
</ul>
</div>
</div>
<div class="section" id="implementing-support-for-a-new-parser">
<span id="support-for-new-robots-parser"></span><h3>Implementing support for a new parser<a class="headerlink" href="#implementing-support-for-a-new-parser" title="Permalink to this headline">¶</a></h3>
<p>You can implement support for a new <a class="reference external" href="https://www.robotstxt.org/">robots.txt</a> parser by subclassing the abstract base class <a class="reference internal" href="#scrapy.robotstxt.RobotParser" title="scrapy.robotstxt.RobotParser"><code class="xref py py-class docutils literal notranslate"><span class="pre">RobotParser</span></code></a> and implementing the methods described below.</p>
<span class="target" id="module-scrapy.robotstxt"></span><dl class="py class">
<dt id="scrapy.robotstxt.RobotParser">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.robotstxt.</code><code class="sig-name descname">RobotParser</code><a class="reference internal" href="../_modules/scrapy/robotstxt.html#RobotParser"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.robotstxt.RobotParser" title="Permalink to this definition">¶</a></dt>
<dd><dl class="py method">
<dt id="scrapy.robotstxt.RobotParser.allowed">
<em class="property">abstract </em><code class="sig-name descname">allowed</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">url</span></em>, <em class="sig-param"><span class="n">user_agent</span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/robotstxt.html#RobotParser.allowed"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.robotstxt.RobotParser.allowed" title="Permalink to this definition">¶</a></dt>
<dd><p>Return <code class="docutils literal notranslate"><span class="pre">True</span></code> if <code class="docutils literal notranslate"><span class="pre">user_agent</span></code> is allowed to crawl <code class="docutils literal notranslate"><span class="pre">url</span></code>, otherwise return <code class="docutils literal notranslate"><span class="pre">False</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>url</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.9)"><em>str</em></a>) -- Absolute URL</p></li>
<li><p><strong>user_agent</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.9)"><em>str</em></a>) -- User agent</p></li>
</ul>
</dd>
</dl>
</dd></dl>

<dl class="py method">
<dt id="scrapy.robotstxt.RobotParser.from_crawler">
<em class="property">abstract classmethod </em><code class="sig-name descname">from_crawler</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">crawler</span></em>, <em class="sig-param"><span class="n">robotstxt_body</span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/robotstxt.html#RobotParser.from_crawler"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.robotstxt.RobotParser.from_crawler" title="Permalink to this definition">¶</a></dt>
<dd><p>Parse the content of a <a class="reference external" href="https://www.robotstxt.org/">robots.txt</a> file as bytes. This must be a class method. It must return a new instance of the parser backend.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>crawler</strong> (<a class="reference internal" href="api.html#scrapy.crawler.Crawler" title="scrapy.crawler.Crawler"><code class="xref py py-class docutils literal notranslate"><span class="pre">Crawler</span></code></a> instance) -- crawler which made the request</p></li>
<li><p><strong>robotstxt_body</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#bytes" title="(在 Python v3.9)"><em>bytes</em></a>) -- content of a <a class="reference external" href="https://www.robotstxt.org/">robots.txt</a> file.</p></li>
</ul>
</dd>
</dl>
</dd></dl>

</dd></dl>
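As a sketch of this interface, the following backend delegates to Python's RobotFileParser. It uses a plain class so the snippet stays self-contained and runnable; in a real project you would subclass scrapy.robotstxt.RobotParser and point the ROBOTSTXT_PARSER setting at your class (the class name here is hypothetical):

```python
from urllib.robotparser import RobotFileParser


class StdlibRobotParser:
    """Hypothetical parser backend; a real implementation would
    subclass scrapy.robotstxt.RobotParser."""

    def __init__(self, robotstxt_body):
        self._rfp = RobotFileParser()
        # robotstxt_body arrives as bytes; decode before parsing
        self._rfp.parse(robotstxt_body.decode("utf-8", errors="ignore").splitlines())

    @classmethod
    def from_crawler(cls, crawler, robotstxt_body):
        # Must be a classmethod returning a new instance of the backend
        return cls(robotstxt_body)

    def allowed(self, url, user_agent):
        # True if user_agent is allowed to crawl url, False otherwise
        return self._rfp.can_fetch(user_agent, url)


parser = StdlibRobotParser.from_crawler(None, b"User-agent: *\nDisallow: /admin/")
print(parser.allowed("https://example.com/admin/login", "mybot"))   # False
print(parser.allowed("https://example.com/public/page", "mybot"))   # True
```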

</div>
<div class="section" id="module-scrapy.downloadermiddlewares.stats">
<span id="downloaderstats"></span><h3>DownloaderStats<a class="headerlink" href="#module-scrapy.downloadermiddlewares.stats" title="Permalink to this headline">¶</a></h3>
<dl class="py class">
<dt id="scrapy.downloadermiddlewares.stats.DownloaderStats">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.downloadermiddlewares.stats.</code><code class="sig-name descname">DownloaderStats</code><a class="reference internal" href="../_modules/scrapy/downloadermiddlewares/stats.html#DownloaderStats"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.downloadermiddlewares.stats.DownloaderStats" title="Permalink to this definition">¶</a></dt>
<dd><p>Middleware that stores stats of all requests, responses and exceptions that pass through it.</p>
<p>To use this middleware you must enable the <a class="reference internal" href="settings.html#std-setting-DOWNLOADER_STATS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOADER_STATS</span></code></a> setting.</p>
</dd></dl>

</div>
<div class="section" id="module-scrapy.downloadermiddlewares.useragent">
<span id="useragentmiddleware"></span><h3>UserAgentMiddleware<a class="headerlink" href="#module-scrapy.downloadermiddlewares.useragent" title="Permalink to this headline">¶</a></h3>
<dl class="py class">
<dt id="scrapy.downloadermiddlewares.useragent.UserAgentMiddleware">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.downloadermiddlewares.useragent.</code><code class="sig-name descname">UserAgentMiddleware</code><a class="reference internal" href="../_modules/scrapy/downloadermiddlewares/useragent.html#UserAgentMiddleware"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.downloadermiddlewares.useragent.UserAgentMiddleware" title="Permalink to this definition">¶</a></dt>
<dd><p>Middleware that allows spiders to override the default user agent.</p>
<p>In order for a spider to override the default user agent, its <code class="docutils literal notranslate"><span class="pre">user_agent</span></code> attribute must be set.</p>
</dd></dl>

</div>
<div class="section" id="module-scrapy.downloadermiddlewares.ajaxcrawl">
<span id="ajaxcrawlmiddleware"></span><span id="ajaxcrawl-middleware"></span><h3>AjaxCrawlMiddleware<a class="headerlink" href="#module-scrapy.downloadermiddlewares.ajaxcrawl" title="Permalink to this headline">¶</a></h3>
<dl class="py class">
<dt id="scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.downloadermiddlewares.ajaxcrawl.</code><code class="sig-name descname">AjaxCrawlMiddleware</code><a class="reference internal" href="../_modules/scrapy/downloadermiddlewares/ajaxcrawl.html#AjaxCrawlMiddleware"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware" title="Permalink to this definition">¶</a></dt>
<dd><p>Middleware that finds "AJAX crawlable" page variations based on the meta-fragment HTML tag. See https://developers.google.com/search/docs/ajax-crawling/docs/getting-started for more info.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Scrapy finds "AJAX crawlable" pages for URLs like <code class="docutils literal notranslate"><span class="pre">'http://example.com/!#foo=bar'</span></code> even without this middleware. AjaxCrawlMiddleware is necessary when the URL doesn't contain <code class="docutils literal notranslate"><span class="pre">'!#'</span></code>. This is often the case for "index" or "main" website pages.</p>
</div>
</dd></dl>

<div class="section" id="ajaxcrawlmiddleware-settings">
<h4>AjaxCrawlMiddleware settings<a class="headerlink" href="#ajaxcrawlmiddleware-settings" title="Permalink to this headline">¶</a></h4>
<div class="section" id="ajaxcrawl-enabled">
<span id="std-setting-AJAXCRAWL_ENABLED"></span><span id="std:setting-AJAXCRAWL_ENABLED"></span><h5>AJAXCRAWL_ENABLED<a class="headerlink" href="#ajaxcrawl-enabled" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">False</span></code></p>
<p>Whether the AjaxCrawlMiddleware will be enabled. You may want to enable it for <a class="reference internal" href="broad-crawls.html#topics-broad-crawls"><span class="std std-ref">broad crawls</span></a>.</p>
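Enabling it for a broad crawl is a one-line settings change (a sketch; the project layout is assumed):

```python
# settings.py -- enable AjaxCrawlMiddleware, typically for broad crawls
AJAXCRAWL_ENABLED = True
```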
</div>
</div>
<div class="section" id="httpproxymiddleware-settings">
<h4>HttpProxyMiddleware settings<a class="headerlink" href="#httpproxymiddleware-settings" title="Permalink to this headline">¶</a></h4>
<span class="target" id="std-setting-HTTPPROXY_ENABLED"><span id="std:setting-HTTPPROXY_ENABLED"></span></span><div class="section" id="httpproxy-enabled">
<span id="std-setting-HTTPPROXY_AUTH_ENCODING"></span><span id="std:setting-HTTPPROXY_AUTH_ENCODING"></span><h5>HTTPPROXY_ENABLED<a class="headerlink" href="#httpproxy-enabled" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">True</span></code></p>
<p>Whether or not to enable the <code class="xref py py-class docutils literal notranslate"><span class="pre">HttpProxyMiddleware</span></code>.</p>
</div>
<div class="section" id="httpproxy-auth-encoding">
<h5>HTTPPROXY_AUTH_ENCODING<a class="headerlink" href="#httpproxy-auth-encoding" title="Permalink to this headline">¶</a></h5>
<p>Default: <code class="docutils literal notranslate"><span class="pre">&quot;latin-1&quot;</span></code></p>
<p>The default encoding for proxy authentication on <code class="xref py py-class docutils literal notranslate"><span class="pre">HttpProxyMiddleware</span></code>.</p>
</div>
</div>
</div>
</div>
</div>


           </div>
           
          </div>
          <footer>
  
    <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
      
        <a href="spider-middleware.html" class="btn btn-neutral float-right" title="Spider Middleware" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
      
      
        <a href="architecture.html" class="btn btn-neutral float-left" title="Architecture overview" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
      
    </div>
  

  <hr/>

  <div role="contentinfo">
    <p>
        
        &copy; Copyright 2008&ndash;2020, Scrapy developers
      <span class="lastupdated">
        Last updated on Oct 18, 2020.
      </span>

    </p>
  </div>
    
    
    
    Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a
    
    <a href="https://github.com/rtfd/sphinx_rtd_theme">theme</a>
    
    provided by <a href="https://readthedocs.org">Read the Docs</a>. 

</footer>

        </div>
      </div>

    </section>

  </div>
  

  <script type="text/javascript">
      jQuery(function () {
          SphinxRtdTheme.Navigation.enable(true);
      });
  </script>

  
  
    
  
 


</body>
</html>