

<!DOCTYPE html>
<html class="writer-html5" lang="zh" >
<head>
  <meta charset="utf-8">
  
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  
  <title>Requests and Responses &mdash; Scrapy 2.3.0 documentation</title>
  

  
  <link rel="stylesheet" href="../_static/css/theme.css" type="text/css" />
  <link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster.custom.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster.bundle.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-shadow.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-punk.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-noir.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-light.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-borderless.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/micromodal.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/sphinx_rtd_theme.css" type="text/css" />

  
  
  
  

  
  <!--[if lt IE 9]>
    <script src="../_static/js/html5shiv.min.js"></script>
  <![endif]-->
  
    
      <script type="text/javascript" id="documentation_options" data-url_root="../" src="../_static/documentation_options.js"></script>
        <script src="../_static/jquery.js"></script>
        <script src="../_static/underscore.js"></script>
        <script src="../_static/doctools.js"></script>
        <script src="../_static/language_data.js"></script>
        <script src="../_static/js/hoverxref.js"></script>
        <script src="../_static/js/tooltipster.bundle.min.js"></script>
        <script src="../_static/js/micromodal.min.js"></script>
    
    <script type="text/javascript" src="../_static/js/theme.js"></script>

    
    <link rel="index" title="索引" href="../genindex.html" />
    <link rel="search" title="搜索" href="../search.html" />
    <link rel="next" title="链接提取器" href="link-extractors.html" />
    <link rel="prev" title="Feed 导出" href="feed-exports.html" /> 
</head>

<body class="wy-body-for-nav">

   
  <div class="wy-grid-for-nav">
    
    <nav data-toggle="wy-nav-shift" class="wy-nav-side">
      <div class="wy-side-scroll">
        <div class="wy-side-nav-search" >
          

          
            <a href="../index.html" class="icon icon-home" alt="Documentation Home"> Scrapy
          

          
          </a>

          
            
            
              <div class="version">
                2.3
              </div>
            
          

          
<div role="search">
  <form id="rtd-search-form" class="wy-form" action="../search.html" method="get">
    <input type="text" name="q" placeholder="Search docs" />
    <input type="hidden" name="check_keywords" value="yes" />
    <input type="hidden" name="area" value="default" />
  </form>
</div>

          
        </div>

        
        <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
          
            
            
              
            
            
              <p class="caption"><span class="caption-text">第一步</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../intro/overview.html">Scrapy一目了然</a></li>
<li class="toctree-l1"><a class="reference internal" href="../intro/install.html">安装指南</a></li>
<li class="toctree-l1"><a class="reference internal" href="../intro/tutorial.html">Scrapy 教程</a></li>
<li class="toctree-l1"><a class="reference internal" href="../intro/examples.html">实例</a></li>
</ul>
<p class="caption"><span class="caption-text">基本概念</span></p>
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="commands.html">命令行工具</a></li>
<li class="toctree-l1"><a class="reference internal" href="spiders.html">蜘蛛</a></li>
<li class="toctree-l1"><a class="reference internal" href="selectors.html">选择器</a></li>
<li class="toctree-l1"><a class="reference internal" href="items.html">项目</a></li>
<li class="toctree-l1"><a class="reference internal" href="loaders.html">项目加载器</a></li>
<li class="toctree-l1"><a class="reference internal" href="shell.html">Scrapy shell</a></li>
<li class="toctree-l1"><a class="reference internal" href="item-pipeline.html">项目管道</a></li>
<li class="toctree-l1"><a class="reference internal" href="feed-exports.html">Feed 导出</a></li>
<li class="toctree-l1 current"><a class="current reference internal" href="#">请求和响应</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#request-objects">请求对象</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#passing-additional-data-to-callback-functions">向回调函数传递附加数据</a></li>
<li class="toctree-l3"><a class="reference internal" href="#using-errbacks-to-catch-exceptions-in-request-processing">使用errbacks捕获请求处理中的异常</a></li>
<li class="toctree-l3"><a class="reference internal" href="#accessing-additional-data-in-errback-functions">访问errback函数中的其他数据</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#request-meta-special-keys">请求.meta特殊键</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#bindaddress">绑定地址</a></li>
<li class="toctree-l3"><a class="reference internal" href="#download-timeout">download_timeout</a></li>
<li class="toctree-l3"><a class="reference internal" href="#download-latency">download_latency</a></li>
<li class="toctree-l3"><a class="reference internal" href="#download-fail-on-dataloss">download_fail_on_dataloss</a></li>
<li class="toctree-l3"><a class="reference internal" href="#max-retry-times">max_retry_times</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#stopping-the-download-of-a-response">停止下载响应</a></li>
<li class="toctree-l2"><a class="reference internal" href="#request-subclasses">请求子类</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#formrequest-objects">FormRequest对象</a></li>
<li class="toctree-l3"><a class="reference internal" href="#request-usage-examples">请求使用示例</a><ul>
<li class="toctree-l4"><a class="reference internal" href="#using-formrequest-to-send-data-via-http-post">使用FormRequest通过HTTP Post发送数据</a></li>
<li class="toctree-l4"><a class="reference internal" href="#using-formrequest-from-response-to-simulate-a-user-login">使用formRequest.from_response（）模拟用户登录</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="#jsonrequest">JsonRequest</a></li>
<li class="toctree-l3"><a class="reference internal" href="#jsonrequest-usage-example">JsonRequest用法示例</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#response-objects">响应对象</a></li>
<li class="toctree-l2"><a class="reference internal" href="#response-subclasses">响应子类</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#textresponse-objects">文本响应对象</a></li>
<li class="toctree-l3"><a class="reference internal" href="#htmlresponse-objects">HTMLResponse对象</a></li>
<li class="toctree-l3"><a class="reference internal" href="#xmlresponse-objects">XmlResponse对象</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="link-extractors.html">链接提取器</a></li>
<li class="toctree-l1"><a class="reference internal" href="settings.html">设置</a></li>
<li class="toctree-l1"><a class="reference internal" href="exceptions.html">例外情况</a></li>
</ul>
<p class="caption"><span class="caption-text">内置服务</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="logging.html">登录</a></li>
<li class="toctree-l1"><a class="reference internal" href="stats.html">统计数据集合</a></li>
<li class="toctree-l1"><a class="reference internal" href="email.html">发送电子邮件</a></li>
<li class="toctree-l1"><a class="reference internal" href="telnetconsole.html">远程登录控制台</a></li>
<li class="toctree-l1"><a class="reference internal" href="webservice.html">Web服务</a></li>
</ul>
<p class="caption"><span class="caption-text">解决具体问题</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../faq.html">常见问题</a></li>
<li class="toctree-l1"><a class="reference internal" href="debug.html">调试spiders</a></li>
<li class="toctree-l1"><a class="reference internal" href="contracts.html">蜘蛛合约</a></li>
<li class="toctree-l1"><a class="reference internal" href="practices.html">常用做法</a></li>
<li class="toctree-l1"><a class="reference internal" href="broad-crawls.html">宽爬行</a></li>
<li class="toctree-l1"><a class="reference internal" href="developer-tools.html">使用浏览器的开发人员工具进行抓取</a></li>
<li class="toctree-l1"><a class="reference internal" href="dynamic-content.html">选择动态加载的内容</a></li>
<li class="toctree-l1"><a class="reference internal" href="leaks.html">调试内存泄漏</a></li>
<li class="toctree-l1"><a class="reference internal" href="media-pipeline.html">下载和处理文件和图像</a></li>
<li class="toctree-l1"><a class="reference internal" href="deploy.html">部署蜘蛛</a></li>
<li class="toctree-l1"><a class="reference internal" href="autothrottle.html">AutoThrottle 扩展</a></li>
<li class="toctree-l1"><a class="reference internal" href="benchmarking.html">标杆管理</a></li>
<li class="toctree-l1"><a class="reference internal" href="jobs.html">作业：暂停和恢复爬行</a></li>
<li class="toctree-l1"><a class="reference internal" href="coroutines.html">协同程序</a></li>
<li class="toctree-l1"><a class="reference internal" href="asyncio.html">asyncio</a></li>
</ul>
<p class="caption"><span class="caption-text">扩展Scrapy</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="architecture.html">体系结构概述</a></li>
<li class="toctree-l1"><a class="reference internal" href="downloader-middleware.html">下载器中间件</a></li>
<li class="toctree-l1"><a class="reference internal" href="spider-middleware.html">蜘蛛中间件</a></li>
<li class="toctree-l1"><a class="reference internal" href="extensions.html">扩展</a></li>
<li class="toctree-l1"><a class="reference internal" href="api.html">核心API</a></li>
<li class="toctree-l1"><a class="reference internal" href="signals.html">信号</a></li>
<li class="toctree-l1"><a class="reference internal" href="exporters.html">条目导出器</a></li>
</ul>
<p class="caption"><span class="caption-text">其余所有</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../news.html">发行说明</a></li>
<li class="toctree-l1"><a class="reference internal" href="../contributing.html">为 Scrapy 贡献</a></li>
<li class="toctree-l1"><a class="reference internal" href="../versioning.html">版本控制和API稳定性</a></li>
</ul>

            
          
        </div>
        
      </div>
    </nav>

    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">

      
      <nav class="wy-nav-top" aria-label="top navigation">
        
          <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
          <a href="../index.html">Scrapy</a>
        
      </nav>


      <div class="wy-nav-content">
        
        <div class="rst-content">
        
          















<div role="navigation" aria-label="breadcrumbs navigation">

  <ul class="wy-breadcrumbs">
    
      <li><a href="../index.html" class="icon icon-home"></a> &raquo;</li>
        
      <li>Requests and Responses</li>
    
    
      <li class="wy-breadcrumbs-aside">
        
            
        
      </li>
    
  </ul>

  
  <hr/>
</div>
          <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
           <div itemprop="articleBody">
            
  <div class="section" id="module-scrapy.http">
<span id="requests-and-responses"></span><span id="topics-request-response"></span><h1>请求和响应<a class="headerlink" href="#module-scrapy.http" title="永久链接至标题">¶</a></h1>
<p>Scrapy uses <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> and <a class="reference internal" href="#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> objects for crawling web sites.</p>
<p>Typically, <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> objects are generated in the spiders and pass across the system until they reach the Downloader, which executes the request and returns a <a class="reference internal" href="#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> object which travels back to the spider that issued the request.</p>
<ins class="adsbygoogle"
     style="display:block; text-align:center;"
     data-ad-layout="in-article"
     data-ad-format="fluid"
     data-ad-client="ca-pub-1466963416408457"
     data-ad-slot="8850786025"></ins>
<script>
     (adsbygoogle = window.adsbygoogle || []).push({});
</script>
<p>Both <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> and <a class="reference internal" href="#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> classes have subclasses which add functionality not required in the base classes. These are described below in <a class="reference internal" href="#topics-request-response-ref-request-subclasses"><span class="std std-ref">Request subclasses</span></a> and <a class="reference internal" href="#topics-request-response-ref-response-subclasses"><span class="std std-ref">Response subclasses</span></a>.</p>
<div class="section" id="request-objects">
<h2>Request objects<a class="headerlink" href="#request-objects" title="Permalink to this headline">¶</a></h2>
<dl class="py class">
<dt id="scrapy.http.Request">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.http.</code><code class="sig-name descname">Request</code><span class="sig-paren">(</span><em class="sig-param"><span class="o">*</span><span class="n">args</span></em>, <em class="sig-param"><span class="o">**</span><span class="n">kwargs</span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/request.html#Request"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.Request" title="永久链接至目标">¶</a></dt>
<dd><p>A <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> 对象表示一个HTTP请求，通常由spider生成并由下载程序执行，从而生成一个 <a class="reference internal" href="#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> .</p>
<dl class="field-list simple">
<dt class="field-odd">参数</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>url</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.9)"><em>str</em></a>) -- The URL of this request. If the URL is invalid, a <a class="reference external" href="https://docs.python.org/3/library/exceptions.html#ValueError" title="(in Python v3.9)"><code class="xref py py-exc docutils literal notranslate"><span class="pre">ValueError</span></code></a> exception is raised.</p></li>
<li><p><strong>callback</strong> (<a class="reference external" href="https://docs.python.org/3/library/collections.abc.html#collections.abc.Callable" title="(in Python v3.9)"><em>collections.abc.Callable</em></a>) -- The function that will be called with the response of this request (once it's downloaded) as its first parameter. For more information see <a class="reference internal" href="#topics-request-response-ref-request-callback-arguments"><span class="std std-ref">Passing additional data to callback functions</span></a> below. If a Request doesn't specify a callback, the spider's <a class="reference internal" href="spiders.html#scrapy.spiders.Spider.parse" title="scrapy.spiders.Spider.parse"><code class="xref py py-meth docutils literal notranslate"><span class="pre">parse()</span></code></a> method will be used. Note that if exceptions are raised during processing, errback is called instead.</p></li>
<li><p><strong>method</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.9)"><em>str</em></a>) -- The HTTP method of this request. Defaults to <code class="docutils literal notranslate"><span class="pre">'GET'</span></code>.</p></li>
<li><p><strong>meta</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#dict" title="(in Python v3.9)"><em>dict</em></a>) -- The initial values for the <a class="reference internal" href="#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> attribute. If given, the dict passed in this parameter will be shallow copied.</p></li>
<li><p><strong>body</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#bytes" title="(in Python v3.9)"><em>bytes</em></a><em> or </em><a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.9)"><em>str</em></a>) -- The request body. If a string is passed, then it's encoded as bytes using the <code class="docutils literal notranslate"><span class="pre">encoding</span></code> passed (which defaults to <code class="docutils literal notranslate"><span class="pre">utf-8</span></code>). If <code class="docutils literal notranslate"><span class="pre">body</span></code> is not given, an empty bytes object is stored. Regardless of the type of this argument, the final value stored will be a bytes object (never a string or <code class="docutils literal notranslate"><span class="pre">None</span></code>).</p></li>
<li><p><strong>headers</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#dict" title="(in Python v3.9)"><em>dict</em></a>) -- The headers of this request. The dict values can be strings (for single valued headers) or lists (for multi-valued headers). If <code class="docutils literal notranslate"><span class="pre">None</span></code> is passed as value, the HTTP header will not be sent at all.</p></li>
<li><p><strong>cookies</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#dict" title="(in Python v3.9)"><em>dict</em></a><em> or </em><a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#list" title="(in Python v3.9)"><em>list</em></a>) -- <p>The request cookies. These can be sent in two forms.</p>
<ol class="arabic">
<li><p>Using a dict:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">request_with_cookies</span> <span class="o">=</span> <span class="n">Request</span><span class="p">(</span><span class="n">url</span><span class="o">=</span><span class="s2">&quot;http://www.example.com&quot;</span><span class="p">,</span>
                               <span class="n">cookies</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;currency&#39;</span><span class="p">:</span> <span class="s1">&#39;USD&#39;</span><span class="p">,</span> <span class="s1">&#39;country&#39;</span><span class="p">:</span> <span class="s1">&#39;UY&#39;</span><span class="p">})</span>
</pre></div>
</div>
</li>
<li><p>Using a list of dicts:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">request_with_cookies</span> <span class="o">=</span> <span class="n">Request</span><span class="p">(</span><span class="n">url</span><span class="o">=</span><span class="s2">&quot;http://www.example.com&quot;</span><span class="p">,</span>
                               <span class="n">cookies</span><span class="o">=</span><span class="p">[{</span><span class="s1">&#39;name&#39;</span><span class="p">:</span> <span class="s1">&#39;currency&#39;</span><span class="p">,</span>
                                        <span class="s1">&#39;value&#39;</span><span class="p">:</span> <span class="s1">&#39;USD&#39;</span><span class="p">,</span>
                                        <span class="s1">&#39;domain&#39;</span><span class="p">:</span> <span class="s1">&#39;example.com&#39;</span><span class="p">,</span>
                                        <span class="s1">&#39;path&#39;</span><span class="p">:</span> <span class="s1">&#39;/currency&#39;</span><span class="p">}])</span>
</pre></div>
</div>
</li>
</ol>
<p>The latter form allows for customizing the <code class="docutils literal notranslate"><span class="pre">domain</span></code> and <code class="docutils literal notranslate"><span class="pre">path</span></code> attributes of the cookie. This is only useful if the cookies are saved for later requests.</p>
<span class="target" id="std-reqmeta-dont_merge_cookies"><span id="std:reqmeta-dont_merge_cookies"></span></span><p>When some site returns cookies (in a response) those are stored in the cookies for that domain and will be sent again in future requests. That's the typical behaviour of any regular web browser.</p>
<p>To create a request that does not send stored cookies and does not store received cookies, set the <code class="docutils literal notranslate"><span class="pre">dont_merge_cookies</span></code> key to <code class="docutils literal notranslate"><span class="pre">True</span></code> in <a class="reference internal" href="#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">request.meta</span></code></a>.</p>
<p>Example of a request that sends manually-defined cookies and ignores cookie storage:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">Request</span><span class="p">(</span>
    <span class="n">url</span><span class="o">=</span><span class="s2">&quot;http://www.example.com&quot;</span><span class="p">,</span>
    <span class="n">cookies</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;currency&#39;</span><span class="p">:</span> <span class="s1">&#39;USD&#39;</span><span class="p">,</span> <span class="s1">&#39;country&#39;</span><span class="p">:</span> <span class="s1">&#39;UY&#39;</span><span class="p">},</span>
    <span class="n">meta</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;dont_merge_cookies&#39;</span><span class="p">:</span> <span class="kc">True</span><span class="p">},</span>
<span class="p">)</span>
</pre></div>
</div>
<p>For more info see <a class="reference internal" href="downloader-middleware.html#cookies-mw"><span class="std std-ref">CookiesMiddleware</span></a>.</p>
</p></li>
<li><p><strong>encoding</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.9)"><em>str</em></a>) -- The encoding of this request (defaults to <code class="docutils literal notranslate"><span class="pre">'utf-8'</span></code>). This encoding will be used to percent-encode the URL and to convert the body to bytes (if given as a string).</p></li>
<li><p><strong>priority</strong> (<a class="reference external" href="https://docs.python.org/3/library/functions.html#int" title="(in Python v3.9)"><em>int</em></a>) -- The priority of this request (defaults to <code class="docutils literal notranslate"><span class="pre">0</span></code>). The priority is used by the scheduler to define the order used to process requests. Requests with a higher priority value will execute earlier. Negative values are allowed in order to indicate relatively low priority.</p></li>
<li><p><strong>dont_filter</strong> (<a class="reference external" href="https://docs.python.org/3/library/functions.html#bool" title="(in Python v3.9)"><em>bool</em></a>) -- Indicates that this request should not be filtered by the scheduler. This is used when you want to perform an identical request multiple times, to ignore the duplicates filter. Use it with care, or you will get into crawling loops. Defaults to <code class="docutils literal notranslate"><span class="pre">False</span></code>.</p></li>
<li><p><strong>errback</strong> (<a class="reference external" href="https://docs.python.org/3/library/collections.abc.html#collections.abc.Callable" title="(in Python v3.9)"><em>collections.abc.Callable</em></a>) -- A function that will be called if any exception was raised while processing the request. This includes pages that failed with 404 HTTP errors and such. It receives a <a class="reference external" href="https://twistedmatrix.com/documents/current/api/twisted.python.failure.Failure.html" title="(in Twisted v2.0)"><code class="xref py py-exc docutils literal notranslate"><span class="pre">Failure</span></code></a> as first parameter. For more information, see <a class="reference internal" href="#topics-request-response-ref-errbacks"><span class="std std-ref">Using errbacks to catch exceptions in request processing</span></a> below. Changed in version 2.0: the <em>callback</em> parameter is no longer required when the <em>errback</em> parameter is specified.</p></li>
<li><p><strong>flags</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#list" title="(in Python v3.9)"><em>list</em></a>) -- Flags sent to the request, can be used for logging or similar purposes.</p></li>
<li><p><strong>cb_kwargs</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#dict" title="(in Python v3.9)"><em>dict</em></a>) -- A dict with arbitrary data that will be passed as keyword arguments to the Request's callback.</p></li>
</ul>
</dd>
</dl>
<dl class="py attribute">
<dt id="scrapy.http.Request.url">
<code class="sig-name descname">url</code><a class="headerlink" href="#scrapy.http.Request.url" title="永久链接至目标">¶</a></dt>
<dd><p>包含此请求的URL的字符串。请记住，此属性包含转义的URL，因此它可以不同于 <code class="docutils literal notranslate"><span class="pre">__init__</span></code> 方法。</p>
<p>此属性是只读的。要更改请求的URL，请使用 <a class="reference internal" href="#scrapy.http.Request.replace" title="scrapy.http.Request.replace"><code class="xref py py-meth docutils literal notranslate"><span class="pre">replace()</span></code></a> .</p>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.Request.method">
<code class="sig-name descname">method</code><a class="headerlink" href="#scrapy.http.Request.method" title="永久链接至目标">¶</a></dt>
<dd><p>表示请求中HTTP方法的字符串。这保证是大写的。例子： <code class="docutils literal notranslate"><span class="pre">&quot;GET&quot;</span></code> ， <code class="docutils literal notranslate"><span class="pre">&quot;POST&quot;</span></code> ， <code class="docutils literal notranslate"><span class="pre">&quot;PUT&quot;</span></code> 等</p>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.Request.headers">
<code class="sig-name descname">headers</code><a class="headerlink" href="#scrapy.http.Request.headers" title="永久链接至目标">¶</a></dt>
<dd><p>包含请求头的类似字典的对象。</p>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.Request.body">
<code class="sig-name descname">body</code><a class="headerlink" href="#scrapy.http.Request.body" title="永久链接至目标">¶</a></dt>
<dd><p>以字节表示的请求正文。</p>
<p>此属性是只读的。要更改请求正文，请使用 <a class="reference internal" href="#scrapy.http.Request.replace" title="scrapy.http.Request.replace"><code class="xref py py-meth docutils literal notranslate"><span class="pre">replace()</span></code></a> .</p>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.Request.meta">
<code class="sig-name descname">meta</code><a class="headerlink" href="#scrapy.http.Request.meta" title="永久链接至目标">¶</a></dt>
<dd><p>包含此请求的任意元数据的dict。对于新请求，此dict是空的，通常由不同的零碎组件（扩展、中间产品等）填充。所以这个dict中包含的数据取决于您启用的扩展名。</p>
<p>见 <a class="reference internal" href="#topics-request-meta"><span class="std std-ref">请求.meta特殊键</span></a> 获取scrapy识别的特殊元键列表。</p>
<p>这个字典是 <a class="reference external" href="https://docs.python.org/3/library/copy.html" title="(在 Python v3.9)"><span class="xref std std-doc">shallow copied</span></a> 当使用 <code class="docutils literal notranslate"><span class="pre">copy()</span></code> 或 <code class="docutils literal notranslate"><span class="pre">replace()</span></code> 方法，也可以通过 <code class="docutils literal notranslate"><span class="pre">response.meta</span></code> 属性。</p>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.Request.cb_kwargs">
<code class="sig-name descname">cb_kwargs</code><a class="headerlink" href="#scrapy.http.Request.cb_kwargs" title="永久链接至目标">¶</a></dt>
<dd><p>包含此请求的任意元数据的字典。它的内容将作为关键字参数传递给请求的回调。对于新请求，它为空，这意味着默认情况下，回调只获取 <a class="reference internal" href="#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> 对象作为参数。</p>
<p>这个字典是 <a class="reference external" href="https://docs.python.org/3/library/copy.html" title="(在 Python v3.9)"><span class="xref std std-doc">shallow copied</span></a> 当使用 <code class="docutils literal notranslate"><span class="pre">copy()</span></code> 或 <code class="docutils literal notranslate"><span class="pre">replace()</span></code> 方法，也可以通过 <code class="docutils literal notranslate"><span class="pre">response.cb_kwargs</span></code> 属性。</p>
<p>在处理请求失败的情况下，此dict可以作为 <code class="docutils literal notranslate"><span class="pre">failure.request.cb_kwargs</span></code> 在请求的errback中。有关详细信息，请参阅 <a class="reference internal" href="#errback-cb-kwargs"><span class="std std-ref">访问errback函数中的其他数据</span></a> .</p>
</dd></dl>

<dl class="py method">
<dt id="scrapy.http.Request.copy">
<code class="sig-name descname">copy</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/request.html#Request.copy"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.Request.copy" title="永久链接至目标">¶</a></dt>
<dd><p>返回一个新请求，它是此请求的副本。参见： <a class="reference internal" href="#topics-request-response-ref-request-callback-arguments"><span class="std std-ref">向回调函数传递附加数据</span></a> .</p>
</dd></dl>

<dl class="py method">
<dt id="scrapy.http.Request.replace">
<code class="sig-name descname">replace</code><span class="sig-paren">(</span><span class="optional">[</span><em class="sig-param">url</em>, <em class="sig-param">method</em>, <em class="sig-param">headers</em>, <em class="sig-param">body</em>, <em class="sig-param">cookies</em>, <em class="sig-param">meta</em>, <em class="sig-param">flags</em>, <em class="sig-param">encoding</em>, <em class="sig-param">priority</em>, <em class="sig-param">dont_filter</em>, <em class="sig-param">callback</em>, <em class="sig-param">errback</em>, <em class="sig-param">cb_kwargs</em><span class="optional">]</span><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/request.html#Request.replace"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.Request.replace" title="永久链接至目标">¶</a></dt>
<dd><p>返回具有相同成员的请求对象，除了那些通过指定的关键字参数赋予新值的成员。这个 <a class="reference internal" href="#scrapy.http.Request.cb_kwargs" title="scrapy.http.Request.cb_kwargs"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.cb_kwargs</span></code></a> 和 <a class="reference internal" href="#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> 默认情况下，属性被浅复制（除非新值作为参数提供）。另请参见 <a class="reference internal" href="#topics-request-response-ref-request-callback-arguments"><span class="std std-ref">向回调函数传递附加数据</span></a> .</p>
</dd></dl>

<dl class="py method">
<dt id="scrapy.http.Request.from_curl">
<em class="property">classmethod </em><code class="sig-name descname">from_curl</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">curl_command</span></em>, <em class="sig-param"><span class="n">ignore_unknown_options</span><span class="o">=</span><span class="default_value">True</span></em>, <em class="sig-param"><span class="o">**</span><span class="n">kwargs</span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/request.html#Request.from_curl"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.Request.from_curl" title="永久链接至目标">¶</a></dt>
<dd><p>从包含 <a class="reference external" href="https://curl.haxx.se/">cURL</a> 命令。它填充HTTP方法、URL、头、cookies和主体。它接受与 <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> 类，获取首选项并重写cURL命令中包含的相同参数的值。</p>
<p>默认情况下，将忽略无法识别的选项。若要在查找未知选项时引发错误，请通过传递调用此方法 <code class="docutils literal notranslate"><span class="pre">ignore_unknown_options=False</span></code> .</p>
<div class="admonition caution">
<p class="admonition-title">警告</p>
<p>使用 <a class="reference internal" href="#scrapy.http.Request.from_curl" title="scrapy.http.Request.from_curl"><code class="xref py py-meth docutils literal notranslate"><span class="pre">from_curl()</span></code></a> 从 <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> 子类，例如 <code class="xref py py-class docutils literal notranslate"><span class="pre">JSONRequest</span></code> 或 <code class="xref py py-class docutils literal notranslate"><span class="pre">XmlRpcRequest</span></code> ，以及 <a class="reference internal" href="downloader-middleware.html#topics-downloader-middleware"><span class="std std-ref">downloader middlewares</span></a> 和 <a class="reference internal" href="spider-middleware.html#topics-spider-middleware"><span class="std std-ref">spider middlewares</span></a> 启用，例如 <a class="reference internal" href="downloader-middleware.html#scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware" title="scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">DefaultHeadersMiddleware</span></code></a> ， <a class="reference internal" href="downloader-middleware.html#scrapy.downloadermiddlewares.useragent.UserAgentMiddleware" title="scrapy.downloadermiddlewares.useragent.UserAgentMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">UserAgentMiddleware</span></code></a> 或 <a class="reference internal" href="downloader-middleware.html#scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware" title="scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">HttpCompressionMiddleware</span></code></a> ，可以修改 <a class="reference internal" href="#scrapy.http.Request" 
title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> 对象。</p>
</div>
<p>To translate a cURL command into a Scrapy request, you may use <a class="reference external" href="https://michael-shub.github.io/curl2scrapy/">curl2scrapy</a>.</p>
</dd></dl>

</dd></dl>

<div class="section" id="passing-additional-data-to-callback-functions">
<span id="topics-request-response-ref-request-callback-arguments"></span><h3>向回调函数传递附加数据<a class="headerlink" href="#passing-additional-data-to-callback-functions" title="永久链接至标题">¶</a></h3>
<p>The callback of a request is a function that will be called when the response of that request is downloaded. The callback function will be called with the downloaded <a class="reference internal" href="#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> object as its first argument.</p>
<p>For example:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">parse_page1</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">):</span>
    <span class="k">return</span> <span class="n">scrapy</span><span class="o">.</span><span class="n">Request</span><span class="p">(</span><span class="s2">&quot;http://www.example.com/some_page.html&quot;</span><span class="p">,</span>
                          <span class="n">callback</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">parse_page2</span><span class="p">)</span>

<span class="k">def</span> <span class="nf">parse_page2</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">):</span>
    <span class="c1"># this would log http://www.example.com/some_page.html</span>
    <span class="bp">self</span><span class="o">.</span><span class="n">logger</span><span class="o">.</span><span class="n">info</span><span class="p">(</span><span class="s2">&quot;Visited </span><span class="si">%s</span><span class="s2">&quot;</span><span class="p">,</span> <span class="n">response</span><span class="o">.</span><span class="n">url</span><span class="p">)</span>
</pre></div>
</div>
<p>In some cases you may be interested in passing arguments to those callback functions so you can receive the arguments later, in the second callback. The following example shows how to achieve this by using the <a class="reference internal" href="#scrapy.http.Request.cb_kwargs" title="scrapy.http.Request.cb_kwargs"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.cb_kwargs</span></code></a> attribute:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">parse</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">):</span>
    <span class="n">request</span> <span class="o">=</span> <span class="n">scrapy</span><span class="o">.</span><span class="n">Request</span><span class="p">(</span><span class="s1">&#39;http://www.example.com/index.html&#39;</span><span class="p">,</span>
                             <span class="n">callback</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">parse_page2</span><span class="p">,</span>
                             <span class="n">cb_kwargs</span><span class="o">=</span><span class="nb">dict</span><span class="p">(</span><span class="n">main_url</span><span class="o">=</span><span class="n">response</span><span class="o">.</span><span class="n">url</span><span class="p">))</span>
    <span class="n">request</span><span class="o">.</span><span class="n">cb_kwargs</span><span class="p">[</span><span class="s1">&#39;foo&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="s1">&#39;bar&#39;</span>  <span class="c1"># add more arguments for the callback</span>
    <span class="k">yield</span> <span class="n">request</span>

<span class="k">def</span> <span class="nf">parse_page2</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">,</span> <span class="n">main_url</span><span class="p">,</span> <span class="n">foo</span><span class="p">):</span>
    <span class="k">yield</span> <span class="nb">dict</span><span class="p">(</span>
        <span class="n">main_url</span><span class="o">=</span><span class="n">main_url</span><span class="p">,</span>
        <span class="n">other_url</span><span class="o">=</span><span class="n">response</span><span class="o">.</span><span class="n">url</span><span class="p">,</span>
        <span class="n">foo</span><span class="o">=</span><span class="n">foo</span><span class="p">,</span>
    <span class="p">)</span>
</pre></div>
</div>
<div class="admonition caution">
<p class="admonition-title">警告</p>
<p><a class="reference internal" href="#scrapy.http.Request.cb_kwargs" title="scrapy.http.Request.cb_kwargs"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.cb_kwargs</span></code></a> 在版本中引入 <code class="docutils literal notranslate"><span class="pre">1.7</span></code> . 在此之前，使用 <a class="reference internal" href="#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> 建议在回调时传递信息。后 <code class="docutils literal notranslate"><span class="pre">1.7</span></code> ， <a class="reference internal" href="#scrapy.http.Request.cb_kwargs" title="scrapy.http.Request.cb_kwargs"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.cb_kwargs</span></code></a> 成为处理用户信息的首选方式，离开 <a class="reference internal" href="#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> 用于与中间件和扩展等组件通信。</p>
</div>
</div>
<div class="section" id="using-errbacks-to-catch-exceptions-in-request-processing">
<span id="topics-request-response-ref-errbacks"></span><h3>使用errbacks捕获请求处理中的异常<a class="headerlink" href="#using-errbacks-to-catch-exceptions-in-request-processing" title="永久链接至标题">¶</a></h3>
<p>The errback of a request is a function that will be called when an exception is raised while processing the request.</p>
<p>It receives a <a class="reference external" href="https://twistedmatrix.com/documents/current/api/twisted.python.failure.Failure.html" title="(in Twisted v2.0)"><code class="xref py py-exc docutils literal notranslate"><span class="pre">Failure</span></code></a> as its first argument and can be used to track connection establishment timeouts, DNS errors, etc.</p>
<p>Here's an example spider logging all errors and catching some specific errors if needed:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">scrapy</span>

<span class="kn">from</span> <span class="nn">scrapy.spidermiddlewares.httperror</span> <span class="kn">import</span> <span class="n">HttpError</span>
<span class="kn">from</span> <span class="nn">twisted.internet.error</span> <span class="kn">import</span> <span class="n">DNSLookupError</span>
<span class="kn">from</span> <span class="nn">twisted.internet.error</span> <span class="kn">import</span> <span class="ne">TimeoutError</span><span class="p">,</span> <span class="n">TCPTimedOutError</span>

<span class="k">class</span> <span class="nc">ErrbackSpider</span><span class="p">(</span><span class="n">scrapy</span><span class="o">.</span><span class="n">Spider</span><span class="p">):</span>
    <span class="n">name</span> <span class="o">=</span> <span class="s2">&quot;errback_example&quot;</span>
    <span class="n">start_urls</span> <span class="o">=</span> <span class="p">[</span>
        <span class="s2">&quot;http://www.httpbin.org/&quot;</span><span class="p">,</span>              <span class="c1"># HTTP 200 expected</span>
        <span class="s2">&quot;http://www.httpbin.org/status/404&quot;</span><span class="p">,</span>    <span class="c1"># Not found error</span>
        <span class="s2">&quot;http://www.httpbin.org/status/500&quot;</span><span class="p">,</span>    <span class="c1"># server issue</span>
        <span class="s2">&quot;http://www.httpbin.org:12345/&quot;</span><span class="p">,</span>        <span class="c1"># non-responding host, timeout expected</span>
        <span class="s2">&quot;http://www.httphttpbinbin.org/&quot;</span><span class="p">,</span>       <span class="c1"># DNS error expected</span>
    <span class="p">]</span>

    <span class="k">def</span> <span class="nf">start_requests</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
        <span class="k">for</span> <span class="n">u</span> <span class="ow">in</span> <span class="bp">self</span><span class="o">.</span><span class="n">start_urls</span><span class="p">:</span>
            <span class="k">yield</span> <span class="n">scrapy</span><span class="o">.</span><span class="n">Request</span><span class="p">(</span><span class="n">u</span><span class="p">,</span> <span class="n">callback</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">parse_httpbin</span><span class="p">,</span>
                                    <span class="n">errback</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">errback_httpbin</span><span class="p">,</span>
                                    <span class="n">dont_filter</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>

    <span class="k">def</span> <span class="nf">parse_httpbin</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">):</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">logger</span><span class="o">.</span><span class="n">info</span><span class="p">(</span><span class="s1">&#39;Got successful response from </span><span class="si">{}</span><span class="s1">&#39;</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">response</span><span class="o">.</span><span class="n">url</span><span class="p">))</span>
        <span class="c1"># do something useful here...</span>

    <span class="k">def</span> <span class="nf">errback_httpbin</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">failure</span><span class="p">):</span>
        <span class="c1"># log all failures</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">logger</span><span class="o">.</span><span class="n">error</span><span class="p">(</span><span class="nb">repr</span><span class="p">(</span><span class="n">failure</span><span class="p">))</span>

        <span class="c1"># in case you want to do something special for some errors,</span>
        <span class="c1"># you may need the failure&#39;s type:</span>

        <span class="k">if</span> <span class="n">failure</span><span class="o">.</span><span class="n">check</span><span class="p">(</span><span class="n">HttpError</span><span class="p">):</span>
            <span class="c1"># these exceptions come from HttpError spider middleware</span>
            <span class="c1"># you can get the non-200 response</span>
            <span class="n">response</span> <span class="o">=</span> <span class="n">failure</span><span class="o">.</span><span class="n">value</span><span class="o">.</span><span class="n">response</span>
            <span class="bp">self</span><span class="o">.</span><span class="n">logger</span><span class="o">.</span><span class="n">error</span><span class="p">(</span><span class="s1">&#39;HttpError on </span><span class="si">%s</span><span class="s1">&#39;</span><span class="p">,</span> <span class="n">response</span><span class="o">.</span><span class="n">url</span><span class="p">)</span>

        <span class="k">elif</span> <span class="n">failure</span><span class="o">.</span><span class="n">check</span><span class="p">(</span><span class="n">DNSLookupError</span><span class="p">):</span>
            <span class="c1"># this is the original request</span>
            <span class="n">request</span> <span class="o">=</span> <span class="n">failure</span><span class="o">.</span><span class="n">request</span>
            <span class="bp">self</span><span class="o">.</span><span class="n">logger</span><span class="o">.</span><span class="n">error</span><span class="p">(</span><span class="s1">&#39;DNSLookupError on </span><span class="si">%s</span><span class="s1">&#39;</span><span class="p">,</span> <span class="n">request</span><span class="o">.</span><span class="n">url</span><span class="p">)</span>

        <span class="k">elif</span> <span class="n">failure</span><span class="o">.</span><span class="n">check</span><span class="p">(</span><span class="ne">TimeoutError</span><span class="p">,</span> <span class="n">TCPTimedOutError</span><span class="p">):</span>
            <span class="n">request</span> <span class="o">=</span> <span class="n">failure</span><span class="o">.</span><span class="n">request</span>
            <span class="bp">self</span><span class="o">.</span><span class="n">logger</span><span class="o">.</span><span class="n">error</span><span class="p">(</span><span class="s1">&#39;TimeoutError on </span><span class="si">%s</span><span class="s1">&#39;</span><span class="p">,</span> <span class="n">request</span><span class="o">.</span><span class="n">url</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="section" id="accessing-additional-data-in-errback-functions">
<span id="errback-cb-kwargs"></span><h3>访问errback函数中的其他数据<a class="headerlink" href="#accessing-additional-data-in-errback-functions" title="永久链接至标题">¶</a></h3>
<p>In case of a failure to process the request, you may be interested in accessing the arguments passed to the callback functions, so you can process them further in the errback. The following example shows how to achieve this by using <code class="docutils literal notranslate"><span class="pre">Failure.request.cb_kwargs</span></code>:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">parse</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">):</span>
    <span class="n">request</span> <span class="o">=</span> <span class="n">scrapy</span><span class="o">.</span><span class="n">Request</span><span class="p">(</span><span class="s1">&#39;http://www.example.com/index.html&#39;</span><span class="p">,</span>
                             <span class="n">callback</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">parse_page2</span><span class="p">,</span>
                             <span class="n">errback</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">errback_page2</span><span class="p">,</span>
                             <span class="n">cb_kwargs</span><span class="o">=</span><span class="nb">dict</span><span class="p">(</span><span class="n">main_url</span><span class="o">=</span><span class="n">response</span><span class="o">.</span><span class="n">url</span><span class="p">))</span>
    <span class="k">yield</span> <span class="n">request</span>

<span class="k">def</span> <span class="nf">parse_page2</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">,</span> <span class="n">main_url</span><span class="p">):</span>
    <span class="k">pass</span>

<span class="k">def</span> <span class="nf">errback_page2</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">failure</span><span class="p">):</span>
    <span class="k">yield</span> <span class="nb">dict</span><span class="p">(</span>
        <span class="n">main_url</span><span class="o">=</span><span class="n">failure</span><span class="o">.</span><span class="n">request</span><span class="o">.</span><span class="n">cb_kwargs</span><span class="p">[</span><span class="s1">&#39;main_url&#39;</span><span class="p">],</span>
    <span class="p">)</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="request-meta-special-keys">
<span id="topics-request-meta"></span><h2>请求.meta特殊键<a class="headerlink" href="#request-meta-special-keys" title="永久链接至标题">¶</a></h2>
<p>The <a class="reference internal" href="#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> attribute can contain any arbitrary data, but there are some special keys recognized by Scrapy and its built-in extensions.</p>
<p>Those are:</p>
<ul class="simple">
<li><p><a class="reference internal" href="downloader-middleware.html#std-reqmeta-dont_redirect"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">dont_redirect</span></code></a></p></li>
<li><p><a class="reference internal" href="downloader-middleware.html#std-reqmeta-dont_retry"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">dont_retry</span></code></a></p></li>
<li><p><a class="reference internal" href="spider-middleware.html#std-reqmeta-handle_httpstatus_list"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">handle_httpstatus_list</span></code></a></p></li>
<li><p><a class="reference internal" href="spider-middleware.html#std-reqmeta-handle_httpstatus_all"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">handle_httpstatus_all</span></code></a></p></li>
<li><p><a class="reference internal" href="#std-reqmeta-dont_merge_cookies"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">dont_merge_cookies</span></code></a></p></li>
<li><p><a class="reference internal" href="downloader-middleware.html#std-reqmeta-cookiejar"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">cookiejar</span></code></a></p></li>
<li><p><a class="reference internal" href="downloader-middleware.html#std-reqmeta-dont_cache"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">dont_cache</span></code></a></p></li>
<li><p><a class="reference internal" href="downloader-middleware.html#std-reqmeta-redirect_reasons"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">redirect_reasons</span></code></a></p></li>
<li><p><a class="reference internal" href="downloader-middleware.html#std-reqmeta-redirect_urls"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">redirect_urls</span></code></a></p></li>
<li><p><a class="reference internal" href="#std-reqmeta-bindaddress"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">bindaddress</span></code></a></p></li>
<li><p><a class="reference internal" href="downloader-middleware.html#std-reqmeta-dont_obey_robotstxt"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">dont_obey_robotstxt</span></code></a></p></li>
<li><p><a class="reference internal" href="#std-reqmeta-download_timeout"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">download_timeout</span></code></a></p></li>
<li><p><a class="reference internal" href="settings.html#std-reqmeta-download_maxsize"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">download_maxsize</span></code></a></p></li>
<li><p><a class="reference internal" href="#std-reqmeta-download_latency"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">download_latency</span></code></a></p></li>
<li><p><a class="reference internal" href="#std-reqmeta-download_fail_on_dataloss"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">download_fail_on_dataloss</span></code></a></p></li>
<li><p><a class="reference internal" href="downloader-middleware.html#std-reqmeta-proxy"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">proxy</span></code></a></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">ftp_user</span></code> （见 <a class="reference internal" href="settings.html#std-setting-FTP_USER"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FTP_USER</span></code></a> 更多信息）</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">ftp_password</span></code> （见 <a class="reference internal" href="settings.html#std-setting-FTP_PASSWORD"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FTP_PASSWORD</span></code></a> 更多信息）</p></li>
<li><p><a class="reference internal" href="spider-middleware.html#std-reqmeta-referrer_policy"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">referrer_policy</span></code></a></p></li>
<li><p><a class="reference internal" href="#std-reqmeta-max_retry_times"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">max_retry_times</span></code></a></p></li>
</ul>
<div class="section" id="bindaddress">
<span id="std-reqmeta-bindaddress"></span><span id="std:reqmeta-bindaddress"></span><h3>绑定地址<a class="headerlink" href="#bindaddress" title="永久链接至标题">¶</a></h3>
<p>用于执行请求的传出IP地址的IP。</p>
</div>
<div class="section" id="download-timeout">
<span id="std-reqmeta-download_timeout"></span><span id="std:reqmeta-download_timeout"></span><h3>download_timeout<a class="headerlink" href="#download-timeout" title="永久链接至标题">¶</a></h3>
<p>下载程序在超时前等待的时间（以秒计）。参见： <a class="reference internal" href="settings.html#std-setting-DOWNLOAD_TIMEOUT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOAD_TIMEOUT</span></code></a> .</p>
</div>
<div class="section" id="download-latency">
<span id="std-reqmeta-download_latency"></span><span id="std:reqmeta-download_latency"></span><h3>download_latency<a class="headerlink" href="#download-latency" title="永久链接至标题">¶</a></h3>
<p>自请求启动以来，获取响应所花费的时间，即通过网络发送的HTTP消息。只有在下载响应后，此元键才可用。虽然大多数其他的元键用于控制零碎的行为，但这个元键应该是只读的。</p>
</div>
<div class="section" id="download-fail-on-dataloss">
<span id="std-reqmeta-download_fail_on_dataloss"></span><span id="std:reqmeta-download_fail_on_dataloss"></span><h3>download_fail_on_dataloss<a class="headerlink" href="#download-fail-on-dataloss" title="永久链接至标题">¶</a></h3>
<p>是否在错误的响应上失败。见： <a class="reference internal" href="settings.html#std-setting-DOWNLOAD_FAIL_ON_DATALOSS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOAD_FAIL_ON_DATALOSS</span></code></a> .</p>
</div>
<div class="section" id="max-retry-times">
<span id="std-reqmeta-max_retry_times"></span><span id="std:reqmeta-max_retry_times"></span><h3>max_retry_times<a class="headerlink" href="#max-retry-times" title="永久链接至标题">¶</a></h3>
<p>使用meta key设置每个请求的重试次数。初始化时， <a class="reference internal" href="#std-reqmeta-max_retry_times"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">max_retry_times</span></code></a> 元键优先于 <a class="reference internal" href="downloader-middleware.html#std-setting-RETRY_TIMES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">RETRY_TIMES</span></code></a> 设置。</p>
</div>
</div>
<div class="section" id="stopping-the-download-of-a-response">
<span id="topics-stop-response-download"></span><h2>停止下载响应<a class="headerlink" href="#stopping-the-download-of-a-response" title="永久链接至标题">¶</a></h2>
<p>Raising a <a class="reference internal" href="exceptions.html#scrapy.exceptions.StopDownload" title="scrapy.exceptions.StopDownload"><code class="xref py py-exc docutils literal notranslate"><span class="pre">StopDownload</span></code></a> exception from a <a class="reference internal" href="signals.html#scrapy.signals.bytes_received" title="scrapy.signals.bytes_received"><code class="xref py py-class docutils literal notranslate"><span class="pre">bytes_received</span></code></a> signal handler will stop the download of a given response. See the following example:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">scrapy</span>


<span class="k">class</span> <span class="nc">StopSpider</span><span class="p">(</span><span class="n">scrapy</span><span class="o">.</span><span class="n">Spider</span><span class="p">):</span>
    <span class="n">name</span> <span class="o">=</span> <span class="s2">&quot;stop&quot;</span>
    <span class="n">start_urls</span> <span class="o">=</span> <span class="p">[</span><span class="s2">&quot;https://docs.scrapy.org/en/latest/&quot;</span><span class="p">]</span>

    <span class="nd">@classmethod</span>
    <span class="k">def</span> <span class="nf">from_crawler</span><span class="p">(</span><span class="bp">cls</span><span class="p">,</span> <span class="n">crawler</span><span class="p">):</span>
        <span class="n">spider</span> <span class="o">=</span> <span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="n">from_crawler</span><span class="p">(</span><span class="n">crawler</span><span class="p">)</span>
        <span class="n">crawler</span><span class="o">.</span><span class="n">signals</span><span class="o">.</span><span class="n">connect</span><span class="p">(</span><span class="n">spider</span><span class="o">.</span><span class="n">on_bytes_received</span><span class="p">,</span> <span class="n">signal</span><span class="o">=</span><span class="n">scrapy</span><span class="o">.</span><span class="n">signals</span><span class="o">.</span><span class="n">bytes_received</span><span class="p">)</span>
        <span class="k">return</span> <span class="n">spider</span>

    <span class="k">def</span> <span class="nf">parse</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">):</span>
        <span class="c1"># &#39;last_chars&#39; show that the full response was not downloaded</span>
        <span class="k">yield</span> <span class="p">{</span><span class="s2">&quot;len&quot;</span><span class="p">:</span> <span class="nb">len</span><span class="p">(</span><span class="n">response</span><span class="o">.</span><span class="n">text</span><span class="p">),</span> <span class="s2">&quot;last_chars&quot;</span><span class="p">:</span> <span class="n">response</span><span class="o">.</span><span class="n">text</span><span class="p">[</span><span class="o">-</span><span class="mi">40</span><span class="p">:]}</span>

    <span class="k">def</span> <span class="nf">on_bytes_received</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">data</span><span class="p">,</span> <span class="n">request</span><span class="p">,</span> <span class="n">spider</span><span class="p">):</span>
        <span class="k">raise</span> <span class="n">scrapy</span><span class="o">.</span><span class="n">exceptions</span><span class="o">.</span><span class="n">StopDownload</span><span class="p">(</span><span class="n">fail</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span>
</pre></div>
</div>
<p>which produces the following output:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="mi">2020</span><span class="o">-</span><span class="mi">05</span><span class="o">-</span><span class="mi">19</span> <span class="mi">17</span><span class="p">:</span><span class="mi">26</span><span class="p">:</span><span class="mi">12</span> <span class="p">[</span><span class="n">scrapy</span><span class="o">.</span><span class="n">core</span><span class="o">.</span><span class="n">engine</span><span class="p">]</span> <span class="n">INFO</span><span class="p">:</span> <span class="n">Spider</span> <span class="n">opened</span>
<span class="mi">2020</span><span class="o">-</span><span class="mi">05</span><span class="o">-</span><span class="mi">19</span> <span class="mi">17</span><span class="p">:</span><span class="mi">26</span><span class="p">:</span><span class="mi">12</span> <span class="p">[</span><span class="n">scrapy</span><span class="o">.</span><span class="n">extensions</span><span class="o">.</span><span class="n">logstats</span><span class="p">]</span> <span class="n">INFO</span><span class="p">:</span> <span class="n">Crawled</span> <span class="mi">0</span> <span class="n">pages</span> <span class="p">(</span><span class="n">at</span> <span class="mi">0</span> <span class="n">pages</span><span class="o">/</span><span class="nb">min</span><span class="p">),</span> <span class="n">scraped</span> <span class="mi">0</span> <span class="n">items</span> <span class="p">(</span><span class="n">at</span> <span class="mi">0</span> <span class="n">items</span><span class="o">/</span><span class="nb">min</span><span class="p">)</span>
<span class="mi">2020</span><span class="o">-</span><span class="mi">05</span><span class="o">-</span><span class="mi">19</span> <span class="mi">17</span><span class="p">:</span><span class="mi">26</span><span class="p">:</span><span class="mi">13</span> <span class="p">[</span><span class="n">scrapy</span><span class="o">.</span><span class="n">core</span><span class="o">.</span><span class="n">downloader</span><span class="o">.</span><span class="n">handlers</span><span class="o">.</span><span class="n">http11</span><span class="p">]</span> <span class="n">DEBUG</span><span class="p">:</span> <span class="n">Download</span> <span class="n">stopped</span> <span class="k">for</span> <span class="o">&lt;</span><span class="n">GET</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">docs</span><span class="o">.</span><span class="n">scrapy</span><span class="o">.</span><span class="n">org</span><span class="o">/</span><span class="n">en</span><span class="o">/</span><span class="n">latest</span><span class="o">/&gt;</span> <span class="kn">from</span> <span class="nn">signal</span> <span class="n">handler</span> <span class="n">StopSpider</span><span class="o">.</span><span class="n">on_bytes_received</span>
<span class="mi">2020</span><span class="o">-</span><span class="mi">05</span><span class="o">-</span><span class="mi">19</span> <span class="mi">17</span><span class="p">:</span><span class="mi">26</span><span class="p">:</span><span class="mi">13</span> <span class="p">[</span><span class="n">scrapy</span><span class="o">.</span><span class="n">core</span><span class="o">.</span><span class="n">engine</span><span class="p">]</span> <span class="n">DEBUG</span><span class="p">:</span> <span class="n">Crawled</span> <span class="p">(</span><span class="mi">200</span><span class="p">)</span> <span class="o">&lt;</span><span class="n">GET</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">docs</span><span class="o">.</span><span class="n">scrapy</span><span class="o">.</span><span class="n">org</span><span class="o">/</span><span class="n">en</span><span class="o">/</span><span class="n">latest</span><span class="o">/&gt;</span> <span class="p">(</span><span class="n">referer</span><span class="p">:</span> <span class="kc">None</span><span class="p">)</span> <span class="p">[</span><span class="s1">&#39;download_stopped&#39;</span><span class="p">]</span>
<span class="mi">2020</span><span class="o">-</span><span class="mi">05</span><span class="o">-</span><span class="mi">19</span> <span class="mi">17</span><span class="p">:</span><span class="mi">26</span><span class="p">:</span><span class="mi">13</span> <span class="p">[</span><span class="n">scrapy</span><span class="o">.</span><span class="n">core</span><span class="o">.</span><span class="n">scraper</span><span class="p">]</span> <span class="n">DEBUG</span><span class="p">:</span> <span class="n">Scraped</span> <span class="kn">from</span> <span class="o">&lt;</span><span class="mi">200</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">docs</span><span class="o">.</span><span class="n">scrapy</span><span class="o">.</span><span class="n">org</span><span class="o">/</span><span class="n">en</span><span class="o">/</span><span class="n">latest</span><span class="o">/&gt;</span>
<span class="p">{</span><span class="s1">&#39;len&#39;</span><span class="p">:</span> <span class="mi">279</span><span class="p">,</span> <span class="s1">&#39;last_chars&#39;</span><span class="p">:</span> <span class="s1">&#39;dth, initial-scale=1.0&quot;&gt;</span><span class="se">\n</span><span class="s1">  </span><span class="se">\n</span><span class="s1">  &lt;title&gt;Scr&#39;</span><span class="p">}</span>
<span class="mi">2020</span><span class="o">-</span><span class="mi">05</span><span class="o">-</span><span class="mi">19</span> <span class="mi">17</span><span class="p">:</span><span class="mi">26</span><span class="p">:</span><span class="mi">13</span> <span class="p">[</span><span class="n">scrapy</span><span class="o">.</span><span class="n">core</span><span class="o">.</span><span class="n">engine</span><span class="p">]</span> <span class="n">INFO</span><span class="p">:</span> <span class="n">Closing</span> <span class="n">spider</span> <span class="p">(</span><span class="n">finished</span><span class="p">)</span>
</pre></div>
</div>
<p>By default, resulting responses are handled by their corresponding errbacks. To call their callbacks instead, like in this example, pass <code class="docutils literal notranslate"><span class="pre">fail=False</span></code> to the <a class="reference internal" href="exceptions.html#scrapy.exceptions.StopDownload" title="scrapy.exceptions.StopDownload"><code class="xref py py-exc docutils literal notranslate"><span class="pre">StopDownload</span></code></a> exception.</p>
</div>
<div class="section" id="request-subclasses">
<span id="topics-request-response-ref-request-subclasses"></span><h2>Request subclasses<a class="headerlink" href="#request-subclasses" title="Permalink to this headline">¶</a></h2>
<p>Here is the list of built-in <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> subclasses. You can also subclass it to implement your own custom functionality.</p>
<div class="section" id="formrequest-objects">
<h3>FormRequest objects<a class="headerlink" href="#formrequest-objects" title="Permalink to this headline">¶</a></h3>
<p>The FormRequest class extends the base <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> with functionality for dealing with HTML forms. It uses <a class="reference external" href="https://lxml.de/lxmlhtml.html#forms">lxml.html forms</a> to pre-populate form fields with form data from <a class="reference internal" href="#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> objects.</p>
<dl class="py class">
<dt id="scrapy.http.FormRequest">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.http.</code><code class="sig-name descname">FormRequest</code><span class="sig-paren">(</span><em class="sig-param">url</em><span class="optional">[</span>, <em class="sig-param">formdata</em>, <em class="sig-param">...</em><span class="optional">]</span><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/request/form.html#FormRequest"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.http.FormRequest" title="Permalink to this definition">¶</a></dt>
<dd><p>The <a class="reference internal" href="#scrapy.http.FormRequest" title="scrapy.http.FormRequest"><code class="xref py py-class docutils literal notranslate"><span class="pre">FormRequest</span></code></a> class adds a new keyword argument to the <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method. The remaining arguments are the same as for the <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> class and are not documented here.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>formdata</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#dict" title="(in Python v3.9)"><em>dict</em></a><em> or </em><a class="reference external" href="https://docs.python.org/3/library/collections.abc.html#collections.abc.Iterable" title="(in Python v3.9)"><em>collections.abc.Iterable</em></a>) -- is a dictionary (or iterable of (key, value) tuples) containing HTML form data, which will be url-encoded and assigned to the body of the request.</p>
</dd>
</dl>
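<p>As a minimal illustration (plain Python, standard library only — not a Scrapy API), the body built from a <code class="docutils literal notranslate"><span class="pre">formdata</span></code> dict corresponds to ordinary URL encoding of its key/value pairs:</p>

```python
from urllib.parse import urlencode

# The same key/value pairs you would pass as FormRequest's formdata argument
formdata = {"name": "John Doe", "age": "27"}

# url-encode the pairs, as FormRequest does when building the request body
body = urlencode(formdata)
print(body)  # name=John+Doe&age=27
```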
<p><a class="reference internal" href="#scrapy.http.FormRequest" title="scrapy.http.FormRequest"><code class="xref py py-class docutils literal notranslate"><span class="pre">FormRequest</span></code></a> objects support the following class method in addition to the standard <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> methods:</p>
<dl class="py method">
<dt id="scrapy.http.FormRequest.from_response">
<em class="property">classmethod </em><code class="sig-name descname">from_response</code><span class="sig-paren">(</span><em class="sig-param">response</em><span class="optional">[</span>, <em class="sig-param">formname=None</em>, <em class="sig-param">formid=None</em>, <em class="sig-param">formnumber=0</em>, <em class="sig-param">formdata=None</em>, <em class="sig-param">formxpath=None</em>, <em class="sig-param">formcss=None</em>, <em class="sig-param">clickdata=None</em>, <em class="sig-param">dont_click=False</em>, <em class="sig-param">...</em><span class="optional">]</span><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/request/form.html#FormRequest.from_response"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.http.FormRequest.from_response" title="Permalink to this definition">¶</a></dt>
<dd><p>Returns a new <a class="reference internal" href="#scrapy.http.FormRequest" title="scrapy.http.FormRequest"><code class="xref py py-class docutils literal notranslate"><span class="pre">FormRequest</span></code></a> object with its form field values pre-populated with those found in the HTML <code class="docutils literal notranslate"><span class="pre">&lt;form&gt;</span></code> element contained in the given response. For an example see <a class="reference internal" href="#topics-request-response-ref-request-userlogin"><span class="std std-ref">Using FormRequest.from_response() to simulate a user login</span></a>.</p>
<p>The policy is, by default, to automatically simulate a click on any form control that looks clickable, like an <code class="docutils literal notranslate"><span class="pre">&lt;input</span> <span class="pre">type=&quot;submit&quot;&gt;</span></code>. Even though this is quite convenient, and often the desired behaviour, sometimes it can cause problems which could be hard to debug. For example, when working with forms that are filled and/or submitted using javascript, the default <a class="reference internal" href="#scrapy.http.FormRequest.from_response" title="scrapy.http.FormRequest.from_response"><code class="xref py py-meth docutils literal notranslate"><span class="pre">from_response()</span></code></a> behaviour may not be the most appropriate. To disable this behaviour you can set the <code class="docutils literal notranslate"><span class="pre">dont_click</span></code> argument to <code class="docutils literal notranslate"><span class="pre">True</span></code>. Also, if you want to change the control clicked (instead of disabling it) you can use the <code class="docutils literal notranslate"><span class="pre">clickdata</span></code> argument.</p>
<div class="admonition caution">
<p class="admonition-title">Caution</p>
<p>Using this method with select elements which have leading or trailing whitespace in the option values will not work due to a <a class="reference external" href="https://bugs.launchpad.net/lxml/+bug/1665241">bug in lxml</a>, which should be fixed in lxml 3.8 and above.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>response</strong> (<a class="reference internal" href="#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> object) -- the response containing an HTML form which will be used to pre-populate the form fields</p></li>
<li><p><strong>formname</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.9)"><em>str</em></a>) -- if given, the form with name attribute set to this value will be used.</p></li>
<li><p><strong>formid</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.9)"><em>str</em></a>) -- if given, the form with id attribute set to this value will be used.</p></li>
<li><p><strong>formxpath</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.9)"><em>str</em></a>) -- if given, the first form that matches the xpath will be used.</p></li>
<li><p><strong>formcss</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.9)"><em>str</em></a>) -- if given, the first form that matches the CSS selector will be used.</p></li>
<li><p><strong>formnumber</strong> (<a class="reference external" href="https://docs.python.org/3/library/functions.html#int" title="(in Python v3.9)"><em>int</em></a>) -- the number of the form to use, when the response contains multiple forms. The first one (and also the default) is <code class="docutils literal notranslate"><span class="pre">0</span></code>.</p></li>
<li><p><strong>formdata</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#dict" title="(in Python v3.9)"><em>dict</em></a>) -- fields to override in the form data. If a field was already present in the response <code class="docutils literal notranslate"><span class="pre">&lt;form&gt;</span></code> element, its value is overridden by the one passed in this parameter. If a value passed in this parameter is <code class="docutils literal notranslate"><span class="pre">None</span></code>, the field will not be included in the request, even if it was present in the response <code class="docutils literal notranslate"><span class="pre">&lt;form&gt;</span></code> element.</p></li>
<li><p><strong>clickdata</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#dict" title="(in Python v3.9)"><em>dict</em></a>) -- attributes to look up the control clicked. If it's not given, the form data will be submitted simulating a click on the first clickable element. In addition to HTML attributes, the control can be identified by its zero-based index relative to other submittable inputs inside the form, via the <code class="docutils literal notranslate"><span class="pre">nr</span></code> attribute.</p></li>
<li><p><strong>dont_click</strong> (<a class="reference external" href="https://docs.python.org/3/library/functions.html#bool" title="(in Python v3.9)"><em>bool</em></a>) -- if True, the form data will be submitted without clicking on any element.</p></li>
</ul>
</dd>
</dl>
<p>The other parameters of this class method are passed directly to the <a class="reference internal" href="#scrapy.http.FormRequest" title="scrapy.http.FormRequest"><code class="xref py py-class docutils literal notranslate"><span class="pre">FormRequest</span></code></a> <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method.</p>
</dd></dl>

</dd></dl>

</div>
<div class="section" id="request-usage-examples">
<h3>Request usage examples<a class="headerlink" href="#request-usage-examples" title="Permalink to this headline">¶</a></h3>
<div class="section" id="using-formrequest-to-send-data-via-http-post">
<h4>Using FormRequest to send data via HTTP POST<a class="headerlink" href="#using-formrequest-to-send-data-via-http-post" title="Permalink to this headline">¶</a></h4>
<p>If you want to simulate an HTML form POST in your spider and send a couple of key-value fields, you can return a <a class="reference internal" href="#scrapy.http.FormRequest" title="scrapy.http.FormRequest"><code class="xref py py-class docutils literal notranslate"><span class="pre">FormRequest</span></code></a> object (from your spider) like this:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">return</span> <span class="p">[</span><span class="n">FormRequest</span><span class="p">(</span><span class="n">url</span><span class="o">=</span><span class="s2">&quot;http://www.example.com/post/action&quot;</span><span class="p">,</span>
                    <span class="n">formdata</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;name&#39;</span><span class="p">:</span> <span class="s1">&#39;John Doe&#39;</span><span class="p">,</span> <span class="s1">&#39;age&#39;</span><span class="p">:</span> <span class="s1">&#39;27&#39;</span><span class="p">},</span>
                    <span class="n">callback</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">after_post</span><span class="p">)]</span>
</pre></div>
</div>
</div>
<div class="section" id="using-formrequest-from-response-to-simulate-a-user-login">
<span id="topics-request-response-ref-request-userlogin"></span><h4>Using FormRequest.from_response() to simulate a user login<a class="headerlink" href="#using-formrequest-from-response-to-simulate-a-user-login" title="Permalink to this headline">¶</a></h4>
<p>It is usual for web sites to provide pre-populated form fields through <code class="docutils literal notranslate"><span class="pre">&lt;input</span> <span class="pre">type=&quot;hidden&quot;&gt;</span></code> elements, such as session-related data or authentication tokens (for login pages). When scraping, you'll want these fields to be automatically pre-populated and to override only a couple of them, such as the user name and password. You can use the <a class="reference internal" href="#scrapy.http.FormRequest.from_response" title="scrapy.http.FormRequest.from_response"><code class="xref py py-meth docutils literal notranslate"><span class="pre">FormRequest.from_response()</span></code></a> method for this job. Here's an example spider which uses it:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">scrapy</span>

<span class="k">def</span> <span class="nf">authentication_failed</span><span class="p">(</span><span class="n">response</span><span class="p">):</span>
    <span class="c1"># TODO: Check the contents of the response and return True if it failed</span>
    <span class="c1"># or False if it succeeded.</span>
    <span class="k">pass</span>

<span class="k">class</span> <span class="nc">LoginSpider</span><span class="p">(</span><span class="n">scrapy</span><span class="o">.</span><span class="n">Spider</span><span class="p">):</span>
    <span class="n">name</span> <span class="o">=</span> <span class="s1">&#39;example.com&#39;</span>
    <span class="n">start_urls</span> <span class="o">=</span> <span class="p">[</span><span class="s1">&#39;http://www.example.com/users/login.php&#39;</span><span class="p">]</span>

    <span class="k">def</span> <span class="nf">parse</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">):</span>
        <span class="k">return</span> <span class="n">scrapy</span><span class="o">.</span><span class="n">FormRequest</span><span class="o">.</span><span class="n">from_response</span><span class="p">(</span>
            <span class="n">response</span><span class="p">,</span>
            <span class="n">formdata</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;username&#39;</span><span class="p">:</span> <span class="s1">&#39;john&#39;</span><span class="p">,</span> <span class="s1">&#39;password&#39;</span><span class="p">:</span> <span class="s1">&#39;secret&#39;</span><span class="p">},</span>
            <span class="n">callback</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">after_login</span>
        <span class="p">)</span>

    <span class="k">def</span> <span class="nf">after_login</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">):</span>
        <span class="k">if</span> <span class="n">authentication_failed</span><span class="p">(</span><span class="n">response</span><span class="p">):</span>
            <span class="bp">self</span><span class="o">.</span><span class="n">logger</span><span class="o">.</span><span class="n">error</span><span class="p">(</span><span class="s2">&quot;Login failed&quot;</span><span class="p">)</span>
            <span class="k">return</span>

        <span class="c1"># continue scraping with authenticated session...</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="jsonrequest">
<h3>JsonRequest<a class="headerlink" href="#jsonrequest" title="Permalink to this headline">¶</a></h3>
<p>The JsonRequest class extends the base <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> class with functionality for dealing with JSON requests.</p>
<dl class="py class">
<dt id="scrapy.http.JsonRequest">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.http.</code><code class="sig-name descname">JsonRequest</code><span class="sig-paren">(</span><em class="sig-param">url</em><span class="optional">[</span>, <em class="sig-param">... data</em>, <em class="sig-param">dumps_kwargs</em><span class="optional">]</span><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/request/json_request.html#JsonRequest"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.http.JsonRequest" title="Permalink to this definition">¶</a></dt>
<dd><p>The <a class="reference internal" href="#scrapy.http.JsonRequest" title="scrapy.http.JsonRequest"><code class="xref py py-class docutils literal notranslate"><span class="pre">JsonRequest</span></code></a> class adds two new keyword parameters to the <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method. The remaining arguments are the same as for the <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> class and are not documented here.</p>
<p>Using the <a class="reference internal" href="#scrapy.http.JsonRequest" title="scrapy.http.JsonRequest"><code class="xref py py-class docutils literal notranslate"><span class="pre">JsonRequest</span></code></a> will set the <code class="docutils literal notranslate"><span class="pre">Content-Type</span></code> header to <code class="docutils literal notranslate"><span class="pre">application/json</span></code> and the <code class="docutils literal notranslate"><span class="pre">Accept</span></code> header to <code class="docutils literal notranslate"><span class="pre">application/json,</span> <span class="pre">text/javascript,</span> <span class="pre">*/*;</span> <span class="pre">q=0.01</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>data</strong> (<a class="reference external" href="https://docs.python.org/3/library/functions.html#object" title="(in Python v3.9)"><em>object</em></a>) -- is any JSON serializable object that needs to be JSON encoded and assigned to the body. If the <a class="reference internal" href="#scrapy.http.Request.body" title="scrapy.http.Request.body"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.body</span></code></a> argument is provided, this parameter will be ignored. If the <a class="reference internal" href="#scrapy.http.Request.body" title="scrapy.http.Request.body"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.body</span></code></a> argument is not provided and the data argument is provided, <a class="reference internal" href="#scrapy.http.Request.method" title="scrapy.http.Request.method"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.method</span></code></a> will be set to <code class="docutils literal notranslate"><span class="pre">'POST'</span></code> automatically.</p></li>
<li><p><strong>dumps_kwargs</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#dict" title="(in Python v3.9)"><em>dict</em></a>) -- parameters that will be passed to the underlying <a class="reference external" href="https://docs.python.org/3/library/json.html#json.dumps" title="(in Python v3.9)"><code class="xref py py-func docutils literal notranslate"><span class="pre">json.dumps()</span></code></a> method, which is used to serialize the data into JSON format.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
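<p>To illustrate what <code class="docutils literal notranslate"><span class="pre">dumps_kwargs</span></code> controls, the snippet below (plain Python, standard library only) shows the effect of forwarding an extra keyword to <code class="docutils literal notranslate"><span class="pre">json.dumps()</span></code>; passing the same dict as <code class="docutils literal notranslate"><span class="pre">dumps_kwargs</span></code> to a JsonRequest would apply it to the serialized body:</p>

```python
import json

data = {"name1": "value1", "name2": "value2"}

# For example, dumps_kwargs={"separators": (",", ":")} would produce a
# compact body with no whitespace after the delimiters:
compact_body = json.dumps(data, separators=(",", ":"))
print(compact_body)  # {"name1":"value1","name2":"value2"}
```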

</div>
<div class="section" id="jsonrequest-usage-example">
<h3>JsonRequest usage example<a class="headerlink" href="#jsonrequest-usage-example" title="Permalink to this headline">¶</a></h3>
<p>Sending a JSON POST request with a JSON payload:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">data</span> <span class="o">=</span> <span class="p">{</span>
    <span class="s1">&#39;name1&#39;</span><span class="p">:</span> <span class="s1">&#39;value1&#39;</span><span class="p">,</span>
    <span class="s1">&#39;name2&#39;</span><span class="p">:</span> <span class="s1">&#39;value2&#39;</span><span class="p">,</span>
<span class="p">}</span>
<span class="k">yield</span> <span class="n">JsonRequest</span><span class="p">(</span><span class="n">url</span><span class="o">=</span><span class="s1">&#39;http://www.example.com/post/action&#39;</span><span class="p">,</span> <span class="n">data</span><span class="o">=</span><span class="n">data</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="response-objects">
<h2>Response objects<a class="headerlink" href="#response-objects" title="Permalink to this headline">¶</a></h2>
<dl class="py class">
<dt id="scrapy.http.Response">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.http.</code><code class="sig-name descname">Response</code><span class="sig-paren">(</span><em class="sig-param"><span class="o">*</span><span class="n">args</span></em>, <em class="sig-param"><span class="o">**</span><span class="n">kwargs</span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/response.html#Response"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#scrapy.http.Response" title="Permalink to this definition">¶</a></dt>
<dd><p>A <a class="reference internal" href="#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> object represents an HTTP response, which is usually downloaded (by the Downloader) and fed to the Spiders for processing.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>url</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.9)"><em>str</em></a>) -- the URL of this response</p></li>
<li><p><strong>status</strong> (<a class="reference external" href="https://docs.python.org/3/library/functions.html#int" title="(in Python v3.9)"><em>int</em></a>) -- the HTTP status of the response. Defaults to <code class="docutils literal notranslate"><span class="pre">200</span></code>.</p></li>
<li><p><strong>headers</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#dict" title="(in Python v3.9)"><em>dict</em></a>) -- the headers of this response. The dict values can be strings (for single-valued headers) or lists (for multi-valued headers).</p></li>
<li><p><strong>body</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#bytes" title="(in Python v3.9)"><em>bytes</em></a>) -- the response body. To access the decoded text as a string, use <code class="docutils literal notranslate"><span class="pre">response.text</span></code> from an encoding-aware <a class="reference internal" href="#topics-request-response-ref-response-subclasses"><span class="std std-ref">Response subclass</span></a>, such as <a class="reference internal" href="#scrapy.http.TextResponse" title="scrapy.http.TextResponse"><code class="xref py py-class docutils literal notranslate"><span class="pre">TextResponse</span></code></a>.</p></li>
<li><p><strong>flags</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#list" title="(in Python v3.9)"><em>list</em></a>) -- is a list containing the initial values for the <a class="reference internal" href="#scrapy.http.Response.flags" title="scrapy.http.Response.flags"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Response.flags</span></code></a> attribute. If given, the list will be shallow copied.</p></li>
<li><p><strong>request</strong> (<a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><em>scrapy.http.Request</em></a>) -- the initial value of the <a class="reference internal" href="#scrapy.http.Response.request" title="scrapy.http.Response.request"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Response.request</span></code></a> attribute. This represents the <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> that generated this response.</p></li>
<li><p><strong>certificate</strong> (<a class="reference external" href="https://twistedmatrix.com/documents/current/api/twisted.internet.ssl.Certificate.html" title="(in Twisted v2.0)"><em>twisted.internet.ssl.Certificate</em></a>) -- an object representing the server's SSL certificate.</p></li>
<li><p><strong>ip_address</strong> (<a class="reference external" href="https://docs.python.org/3/library/ipaddress.html#ipaddress.IPv4Address" title="(in Python v3.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">ipaddress.IPv4Address</span></code></a> or <a class="reference external" href="https://docs.python.org/3/library/ipaddress.html#ipaddress.IPv6Address" title="(in Python v3.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">ipaddress.IPv6Address</span></code></a>) -- the IP address of the server from which the response originated.</p></li>
</ul>
</dd>
</dl>
<div class="versionadded">
<p><span class="versionmodified added">New in version 2.1.0: </span>The <code class="docutils literal notranslate"><span class="pre">ip_address</span></code> parameter.</p>
</div>
<dl class="py attribute">
<dt id="scrapy.http.Response.url">
<code class="sig-name descname">url</code><a class="headerlink" href="#scrapy.http.Response.url" title="Permalink to this definition">¶</a></dt>
<dd><p>A string containing the URL of the response.</p>
<p>This attribute is read-only. To change the URL of a Response use <a class="reference internal" href="#scrapy.http.Response.replace" title="scrapy.http.Response.replace"><code class="xref py py-meth docutils literal notranslate"><span class="pre">replace()</span></code></a>.</p>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.Response.status">
<code class="sig-name descname">status</code><a class="headerlink" href="#scrapy.http.Response.status" title="Permalink to this definition">¶</a></dt>
<dd><p>An integer representing the HTTP status of the response. Examples: <code class="docutils literal notranslate"><span class="pre">200</span></code>, <code class="docutils literal notranslate"><span class="pre">404</span></code>.</p>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.Response.headers">
<code class="sig-name descname">headers</code><a class="headerlink" href="#scrapy.http.Response.headers" title="Permalink to this definition">¶</a></dt>
<dd><p>A dictionary-like object which contains the response headers. Values can be accessed using <code class="xref py py-meth docutils literal notranslate"><span class="pre">get()</span></code> to return the first header value with the specified name, or <code class="xref py py-meth docutils literal notranslate"><span class="pre">getlist()</span></code> to return all header values with the specified name. For example, this call will give you all cookies in the headers:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">response</span><span class="o">.</span><span class="n">headers</span><span class="o">.</span><span class="n">getlist</span><span class="p">(</span><span class="s1">&#39;Set-Cookie&#39;</span><span class="p">)</span>
</pre></div>
</div>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.Response.body">
<code class="sig-name descname">body</code><a class="headerlink" href="#scrapy.http.Response.body" title="Permalink to this definition">¶</a></dt>
<dd><p>The response body as bytes.</p>
<p>If you want the body as a string, use <a class="reference internal" href="#scrapy.http.TextResponse.text" title="scrapy.http.TextResponse.text"><code class="xref py py-attr docutils literal notranslate"><span class="pre">TextResponse.text</span></code></a> (only available in <a class="reference internal" href="#scrapy.http.TextResponse" title="scrapy.http.TextResponse"><code class="xref py py-class docutils literal notranslate"><span class="pre">TextResponse</span></code></a> and subclasses).</p>
<p>This attribute is read-only. To change the body of a Response use <a class="reference internal" href="#scrapy.http.Response.replace" title="scrapy.http.Response.replace"><code class="xref py py-meth docutils literal notranslate"><span class="pre">replace()</span></code></a>.</p>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.Response.request">
<code class="sig-name descname">request</code><a class="headerlink" href="#scrapy.http.Response.request" title="Permalink to this definition">¶</a></dt>
<dd><p>The <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> object that generated this response. This attribute is assigned after the response and the request have passed through all <a class="reference internal" href="downloader-middleware.html#topics-downloader-middleware"><span class="std std-ref">Downloader Middlewares</span></a>. In particular, this means that:</p>
<ul class="simple">
<li><p>HTTP redirections will cause the original request (to the URL before redirection) to be assigned to the redirected response (with the final URL after redirection).</p></li>
<li><p>Response.request.url doesn't always equal Response.url.</p></li>
<li><p>This attribute is only available in spider code and in <a class="reference internal" href="spider-middleware.html#topics-spider-middleware"><span class="std std-ref">Spider Middlewares</span></a>, but not in Downloader Middlewares (although you have the Request available there by other means) and handlers of the <a class="reference internal" href="signals.html#std-signal-response_downloaded"><code class="xref std std-signal docutils literal notranslate"><span class="pre">response_downloaded</span></code></a> signal.</p></li>
</ul>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.Response.meta">
<code class="sig-name descname">meta</code><a class="headerlink" href="#scrapy.http.Response.meta" title="Permalink to this definition">¶</a></dt>
<dd><p>A shortcut to the <a class="reference internal" href="#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> attribute of the <a class="reference internal" href="#scrapy.http.Response.request" title="scrapy.http.Response.request"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Response.request</span></code></a> object (i.e. <code class="docutils literal notranslate"><span class="pre">self.request.meta</span></code>).</p>
<p>Unlike the <a class="reference internal" href="#scrapy.http.Response.request" title="scrapy.http.Response.request"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Response.request</span></code></a> attribute, the <a class="reference internal" href="#scrapy.http.Response.meta" title="scrapy.http.Response.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Response.meta</span></code></a> attribute is propagated along redirects and retries, so you will get the original <a class="reference internal" href="#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> sent from your spider.</p>
<div class="admonition seealso">
<p class="admonition-title">See also</p>
<p><a class="reference internal" href="#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> attribute</p>
</div>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.Response.cb_kwargs">
<code class="sig-name descname">cb_kwargs</code><a class="headerlink" href="#scrapy.http.Response.cb_kwargs" title="Permalink to this definition">¶</a></dt>
<dd><div class="versionadded">
<p><span class="versionmodified added">New in version 2.0.</span></p>
</div>
<p>A shortcut to the <a class="reference internal" href="#scrapy.http.Request.cb_kwargs" title="scrapy.http.Request.cb_kwargs"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.cb_kwargs</span></code></a> attribute of the <a class="reference internal" href="#scrapy.http.Response.request" title="scrapy.http.Response.request"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Response.request</span></code></a> object (i.e. <code class="docutils literal notranslate"><span class="pre">self.request.cb_kwargs</span></code>).</p>
<p>Unlike the <a class="reference internal" href="#scrapy.http.Response.request" title="scrapy.http.Response.request"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Response.request</span></code></a> attribute, the <a class="reference internal" href="#scrapy.http.Response.cb_kwargs" title="scrapy.http.Response.cb_kwargs"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Response.cb_kwargs</span></code></a> attribute is propagated along redirects and retries, so you will get the original <a class="reference internal" href="#scrapy.http.Request.cb_kwargs" title="scrapy.http.Request.cb_kwargs"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.cb_kwargs</span></code></a> sent from your spider.</p>
<div class="admonition seealso">
<p class="admonition-title">See also</p>
<p><a class="reference internal" href="#scrapy.http.Request.cb_kwargs" title="scrapy.http.Request.cb_kwargs"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.cb_kwargs</span></code></a> attribute</p>
</div>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.Response.flags">
<code class="sig-name descname">flags</code><a class="headerlink" href="#scrapy.http.Response.flags" title="Permalink to this definition">¶</a></dt>
<dd><p>A list that contains flags for this response. Flags are labels used for tagging Responses. For example: <code class="docutils literal notranslate"><span class="pre">'cached'</span></code>, <code class="docutils literal notranslate"><span class="pre">'redirected'</span></code>, etc. And they're shown on the string representation of the Response (<cite>__str__</cite> method), which is used by the engine for logging.</p>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.Response.certificate">
<code class="sig-name descname">certificate</code><a class="headerlink" href="#scrapy.http.Response.certificate" title="永久链接至目标">¶</a></dt>
<dd><p>A <a class="reference external" href="https://twistedmatrix.com/documents/current/api/twisted.internet.ssl.Certificate.html" title="(in Twisted)"><code class="xref py py-class docutils literal notranslate"><span class="pre">twisted.internet.ssl.Certificate</span></code></a> object representing the server's SSL certificate.</p>
<p>Only populated for <code class="docutils literal notranslate"><span class="pre">https</span></code> responses, <code class="docutils literal notranslate"><span class="pre">None</span></code> otherwise.</p>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.Response.ip_address">
<code class="sig-name descname">ip_address</code><a class="headerlink" href="#scrapy.http.Response.ip_address" title="永久链接至目标">¶</a></dt>
<dd><div class="versionadded">
<p><span class="versionmodified added">New in version 2.1.0.</span></p>
</div>
<p>The IP address of the server from which the response originated.</p>
<p>This attribute is currently only populated by the HTTP 1.1 download handler, i.e. for <code class="docutils literal notranslate"><span class="pre">http(s)</span></code> responses. For other handlers, <a class="reference internal" href="#scrapy.http.Response.ip_address" title="scrapy.http.Response.ip_address"><code class="xref py py-attr docutils literal notranslate"><span class="pre">ip_address</span></code></a> is always <code class="docutils literal notranslate"><span class="pre">None</span></code>.</p>
</dd></dl>

<dl class="py method">
<dt id="scrapy.http.Response.copy">
<code class="sig-name descname">copy</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/response.html#Response.copy"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.Response.copy" title="永久链接至目标">¶</a></dt>
<dd><p>Returns a new Response which is a copy of this Response.</p>
</dd></dl>

<dl class="py method">
<dt id="scrapy.http.Response.replace">
<code class="sig-name descname">replace</code><span class="sig-paren">(</span><span class="optional">[</span><em class="sig-param">url</em>, <em class="sig-param">status</em>, <em class="sig-param">headers</em>, <em class="sig-param">body</em>, <em class="sig-param">request</em>, <em class="sig-param">flags</em>, <em class="sig-param">cls</em><span class="optional">]</span><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/response.html#Response.replace"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.Response.replace" title="永久链接至目标">¶</a></dt>
<dd><p>Returns a Response object with the same members, except for those members given new values by whichever keyword arguments are specified. The attribute <a class="reference internal" href="#scrapy.http.Response.meta" title="scrapy.http.Response.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Response.meta</span></code></a> is copied by default.</p>
</dd></dl>

<dl class="py method">
<dt id="scrapy.http.Response.urljoin">
<code class="sig-name descname">urljoin</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">url</span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/response.html#Response.urljoin"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.Response.urljoin" title="永久链接至目标">¶</a></dt>
<dd><p>Constructs an absolute URL by combining the response's <a class="reference internal" href="#scrapy.http.Response.url" title="scrapy.http.Response.url"><code class="xref py py-attr docutils literal notranslate"><span class="pre">url</span></code></a> with a possible relative URL.</p>
<p>This is a wrapper over <a class="reference external" href="https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urljoin" title="(in Python v3.9)"><code class="xref py py-func docutils literal notranslate"><span class="pre">urljoin()</span></code></a>; it is merely an alias for making this call:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">urllib</span><span class="o">.</span><span class="n">parse</span><span class="o">.</span><span class="n">urljoin</span><span class="p">(</span><span class="n">response</span><span class="o">.</span><span class="n">url</span><span class="p">,</span> <span class="n">url</span><span class="p">)</span>
</pre></div>
</div>
</dd></dl>
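<p>The underlying <code class="docutils literal notranslate"><span class="pre">urllib.parse.urljoin()</span></code> resolution can be sketched with the standard library alone (the URLs below are placeholders):</p>

```python
from urllib.parse import urljoin

base = "https://example.com/a/b/page.html"

# A sibling path replaces the last path segment.
print(urljoin(base, "other.html"))              # https://example.com/a/b/other.html
# A leading slash resolves from the host root.
print(urljoin(base, "/root.html"))              # https://example.com/root.html
# An absolute URL replaces the base entirely.
print(urljoin(base, "https://other.example/x"))  # https://other.example/x
```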

<dl class="py method">
<dt id="scrapy.http.Response.follow">
<code class="sig-name descname">follow</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">url</span></em>, <em class="sig-param"><span class="n">callback</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">method</span><span class="o">=</span><span class="default_value">'GET'</span></em>, <em class="sig-param"><span class="n">headers</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">body</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">cookies</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">meta</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">encoding</span><span class="o">=</span><span class="default_value">'utf-8'</span></em>, <em class="sig-param"><span class="n">priority</span><span class="o">=</span><span class="default_value">0</span></em>, <em class="sig-param"><span class="n">dont_filter</span><span class="o">=</span><span class="default_value">False</span></em>, <em class="sig-param"><span class="n">errback</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">cb_kwargs</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">flags</span><span class="o">=</span><span class="default_value">None</span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/response.html#Response.follow"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.Response.follow" title="永久链接至目标">¶</a></dt>
<dd><p>Returns a <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> instance to follow a link <code class="docutils literal notranslate"><span class="pre">url</span></code>. It accepts the same arguments as the <code class="docutils literal notranslate"><span class="pre">Request.__init__</span></code> method, but <code class="docutils literal notranslate"><span class="pre">url</span></code> can be a relative URL or a <code class="docutils literal notranslate"><span class="pre">scrapy.link.Link</span></code> object, not only an absolute URL.</p>
<p><a class="reference internal" href="#scrapy.http.TextResponse" title="scrapy.http.TextResponse"><code class="xref py py-class docutils literal notranslate"><span class="pre">TextResponse</span></code></a> provides a <a class="reference internal" href="#scrapy.http.TextResponse.follow" title="scrapy.http.TextResponse.follow"><code class="xref py py-meth docutils literal notranslate"><span class="pre">follow()</span></code></a> method which supports selectors in addition to absolute/relative URLs and Link objects.</p>
<div class="versionadded">
<p><span class="versionmodified added">New in version 2.0: </span>The <em>flags</em> parameter.</p>
</div>
</dd></dl>

<dl class="py method">
<dt id="scrapy.http.Response.follow_all">
<code class="sig-name descname">follow_all</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">urls</span></em>, <em class="sig-param"><span class="n">callback</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">method</span><span class="o">=</span><span class="default_value">'GET'</span></em>, <em class="sig-param"><span class="n">headers</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">body</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">cookies</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">meta</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">encoding</span><span class="o">=</span><span class="default_value">'utf-8'</span></em>, <em class="sig-param"><span class="n">priority</span><span class="o">=</span><span class="default_value">0</span></em>, <em class="sig-param"><span class="n">dont_filter</span><span class="o">=</span><span class="default_value">False</span></em>, <em class="sig-param"><span class="n">errback</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">cb_kwargs</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">flags</span><span class="o">=</span><span class="default_value">None</span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/response.html#Response.follow_all"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.Response.follow_all" title="永久链接至目标">¶</a></dt>
<dd><div class="versionadded">
<p><span class="versionmodified added">New in version 2.0.</span></p>
</div>
<p>Returns an iterable of <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> instances to follow all links in <code class="docutils literal notranslate"><span class="pre">urls</span></code>. It accepts the same arguments as the <code class="docutils literal notranslate"><span class="pre">Request.__init__</span></code> method, but elements of <code class="docutils literal notranslate"><span class="pre">urls</span></code> can be relative URLs or <a class="reference internal" href="link-extractors.html#scrapy.link.Link" title="scrapy.link.Link"><code class="xref py py-class docutils literal notranslate"><span class="pre">Link</span></code></a> objects, not only absolute URLs.</p>
<p><a class="reference internal" href="#scrapy.http.TextResponse" title="scrapy.http.TextResponse"><code class="xref py py-class docutils literal notranslate"><span class="pre">TextResponse</span></code></a> provides a <a class="reference internal" href="#scrapy.http.TextResponse.follow_all" title="scrapy.http.TextResponse.follow_all"><code class="xref py py-meth docutils literal notranslate"><span class="pre">follow_all()</span></code></a> method which supports selectors in addition to absolute/relative URLs and Link objects.</p>
</dd></dl>

</dd></dl>

</div>
<div class="section" id="response-subclasses">
<span id="topics-request-response-ref-response-subclasses"></span><h2>Response subclasses<a class="headerlink" href="#response-subclasses" title="Permalink to this headline">¶</a></h2>
<p>Here is the list of available built-in Response subclasses. You can also subclass the Response class to implement your own functionality.</p>
<div class="section" id="textresponse-objects">
<h3>TextResponse objects<a class="headerlink" href="#textresponse-objects" title="Permalink to this headline">¶</a></h3>
<dl class="py class">
<dt id="scrapy.http.TextResponse">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.http.</code><code class="sig-name descname">TextResponse</code><span class="sig-paren">(</span><em class="sig-param">url</em><span class="optional">[</span>, <em class="sig-param">encoding</em><span class="optional">[</span>, <em class="sig-param">...</em><span class="optional">]</span><span class="optional">]</span><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/response/text.html#TextResponse"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.TextResponse" title="永久链接至目标">¶</a></dt>
<dd><p><a class="reference internal" href="#scrapy.http.TextResponse" title="scrapy.http.TextResponse"><code class="xref py py-class docutils literal notranslate"><span class="pre">TextResponse</span></code></a> objects add encoding capabilities to the base <a class="reference internal" href="#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> class, which is meant to be used only for binary data, such as images, sounds or any media file.</p>
<p><a class="reference internal" href="#scrapy.http.TextResponse" title="scrapy.http.TextResponse"><code class="xref py py-class docutils literal notranslate"><span class="pre">TextResponse</span></code></a> objects support a new <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method argument, in addition to the base <a class="reference internal" href="#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> objects. The remaining functionality is the same as for the <a class="reference internal" href="#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> class and is not documented here.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>encoding</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.9)"><em>str</em></a>) -- a string with the encoding to use for this response. If you create a <a class="reference internal" href="#scrapy.http.TextResponse" title="scrapy.http.TextResponse"><code class="xref py py-class docutils literal notranslate"><span class="pre">TextResponse</span></code></a> object with a string as body, it will be converted to bytes encoded using this encoding. If <em>encoding</em> is <code class="docutils literal notranslate"><span class="pre">None</span></code> (the default), the encoding will be looked up in the response headers and body instead.</p>
</dd>
</dl>
<p><a class="reference internal" href="#scrapy.http.TextResponse" title="scrapy.http.TextResponse"><code class="xref py py-class docutils literal notranslate"><span class="pre">TextResponse</span></code></a> objects support the following attributes in addition to the standard <a class="reference internal" href="#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> ones:</p>
<dl class="py attribute">
<dt id="scrapy.http.TextResponse.text">
<code class="sig-name descname">text</code><a class="headerlink" href="#scrapy.http.TextResponse.text" title="永久链接至目标">¶</a></dt>
<dd><p>Response body, as a string.</p>
<p>The same as <code class="docutils literal notranslate"><span class="pre">response.body.decode(response.encoding)</span></code>, but the result is cached after the first call, so you can access <code class="docutils literal notranslate"><span class="pre">response.text</span></code> multiple times without extra overhead.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p><code class="docutils literal notranslate"><span class="pre">str(response.body)</span></code> is not a correct way to convert the response body into a string:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="nb">str</span><span class="p">(</span><span class="sa">b</span><span class="s1">&#39;body&#39;</span><span class="p">)</span>
<span class="go">&quot;b&#39;body&#39;&quot;</span>
</pre></div>
</div>
</div>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.TextResponse.encoding">
<code class="sig-name descname">encoding</code><a class="headerlink" href="#scrapy.http.TextResponse.encoding" title="永久链接至目标">¶</a></dt>
<dd><p>A string with the encoding of this response. The encoding is resolved by trying the following mechanisms, in order:</p>
<ol class="arabic simple">
<li><p>the encoding passed in the <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method's <code class="docutils literal notranslate"><span class="pre">encoding</span></code> argument</p></li>
<li><p>the encoding declared in the Content-Type HTTP header. If this encoding is not valid (i.e. unknown), it is ignored and the next resolution mechanism is tried.</p></li>
<li><p>the encoding declared in the response body. The TextResponse class doesn't provide any special functionality for this. However, the <a class="reference internal" href="#scrapy.http.HtmlResponse" title="scrapy.http.HtmlResponse"><code class="xref py py-class docutils literal notranslate"><span class="pre">HtmlResponse</span></code></a> and <a class="reference internal" href="#scrapy.http.XmlResponse" title="scrapy.http.XmlResponse"><code class="xref py py-class docutils literal notranslate"><span class="pre">XmlResponse</span></code></a> classes do.</p></li>
<li><p>the encoding inferred by looking at the response body. This is the more fragile method, but also the last one tried.</p></li>
</ol>
</dd></dl>

<dl class="py attribute">
<dt id="scrapy.http.TextResponse.selector">
<code class="sig-name descname">selector</code><a class="headerlink" href="#scrapy.http.TextResponse.selector" title="永久链接至目标">¶</a></dt>
<dd><p>A <a class="reference internal" href="selectors.html#scrapy.selector.Selector" title="scrapy.selector.Selector"><code class="xref py py-class docutils literal notranslate"><span class="pre">Selector</span></code></a> instance using the response as target. The selector is lazily instantiated on first access.</p>
</dd></dl>

<p><a class="reference internal" href="#scrapy.http.TextResponse" title="scrapy.http.TextResponse"><code class="xref py py-class docutils literal notranslate"><span class="pre">TextResponse</span></code></a> objects support the following methods in addition to the standard <a class="reference internal" href="#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> ones:</p>
<dl class="py method">
<dt id="scrapy.http.TextResponse.xpath">
<code class="sig-name descname">xpath</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">query</span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/response/text.html#TextResponse.xpath"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.TextResponse.xpath" title="永久链接至目标">¶</a></dt>
<dd><p>A shortcut to <code class="docutils literal notranslate"><span class="pre">TextResponse.selector.xpath(query)</span></code>:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">response</span><span class="o">.</span><span class="n">xpath</span><span class="p">(</span><span class="s1">&#39;//p&#39;</span><span class="p">)</span>
</pre></div>
</div>
</dd></dl>

<dl class="py method">
<dt id="scrapy.http.TextResponse.css">
<code class="sig-name descname">css</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">query</span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/response/text.html#TextResponse.css"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.TextResponse.css" title="永久链接至目标">¶</a></dt>
<dd><p>A shortcut to <code class="docutils literal notranslate"><span class="pre">TextResponse.selector.css(query)</span></code>:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">response</span><span class="o">.</span><span class="n">css</span><span class="p">(</span><span class="s1">&#39;p&#39;</span><span class="p">)</span>
</pre></div>
</div>
</dd></dl>

<dl class="py method">
<dt id="scrapy.http.TextResponse.follow">
<code class="sig-name descname">follow</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">url</span></em>, <em class="sig-param"><span class="n">callback</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">method</span><span class="o">=</span><span class="default_value">'GET'</span></em>, <em class="sig-param"><span class="n">headers</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">body</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">cookies</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">meta</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">encoding</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">priority</span><span class="o">=</span><span class="default_value">0</span></em>, <em class="sig-param"><span class="n">dont_filter</span><span class="o">=</span><span class="default_value">False</span></em>, <em class="sig-param"><span class="n">errback</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">cb_kwargs</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">flags</span><span class="o">=</span><span class="default_value">None</span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/response/text.html#TextResponse.follow"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.TextResponse.follow" title="永久链接至目标">¶</a></dt>
<dd><p>Returns a <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> instance to follow a link <code class="docutils literal notranslate"><span class="pre">url</span></code>. It accepts the same arguments as the <code class="docutils literal notranslate"><span class="pre">Request.__init__</span></code> method, but <code class="docutils literal notranslate"><span class="pre">url</span></code> can be not only an absolute URL, but also</p>
<ul class="simple">
<li><p>a relative URL</p></li>
<li><p>a <a class="reference internal" href="link-extractors.html#scrapy.link.Link" title="scrapy.link.Link"><code class="xref py py-class docutils literal notranslate"><span class="pre">Link</span></code></a> object, e.g. the result of <a class="reference internal" href="link-extractors.html#topics-link-extractors"><span class="std std-ref">Link Extractors</span></a></p></li>
<li><p>a <a class="reference internal" href="selectors.html#scrapy.selector.Selector" title="scrapy.selector.Selector"><code class="xref py py-class docutils literal notranslate"><span class="pre">Selector</span></code></a> object for a <code class="docutils literal notranslate"><span class="pre">&lt;link&gt;</span></code> or <code class="docutils literal notranslate"><span class="pre">&lt;a&gt;</span></code> element, e.g. <code class="docutils literal notranslate"><span class="pre">response.css('a.my_link')[0]</span></code></p></li>
<li><p>an attribute <a class="reference internal" href="selectors.html#scrapy.selector.Selector" title="scrapy.selector.Selector"><code class="xref py py-class docutils literal notranslate"><span class="pre">Selector</span></code></a> (not a SelectorList), e.g. <code class="docutils literal notranslate"><span class="pre">response.css('a::attr(href)')[0]</span></code> or <code class="docutils literal notranslate"><span class="pre">response.xpath('//img/&#64;src')[0]</span></code></p></li>
</ul>
<p>See <a class="reference internal" href="../intro/tutorial.html#response-follow-example"><span class="std std-ref">A shortcut for creating Requests</span></a> for usage examples.</p>
</dd></dl>

<dl class="py method">
<dt id="scrapy.http.TextResponse.follow_all">
<code class="sig-name descname">follow_all</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">urls</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">callback</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">method</span><span class="o">=</span><span class="default_value">'GET'</span></em>, <em class="sig-param"><span class="n">headers</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">body</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">cookies</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">meta</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">encoding</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">priority</span><span class="o">=</span><span class="default_value">0</span></em>, <em class="sig-param"><span class="n">dont_filter</span><span class="o">=</span><span class="default_value">False</span></em>, <em class="sig-param"><span class="n">errback</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">cb_kwargs</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">flags</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">css</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">xpath</span><span class="o">=</span><span class="default_value">None</span></em><span class="sig-paren">)</span><a class="reference internal" 
href="../_modules/scrapy/http/response/text.html#TextResponse.follow_all"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.TextResponse.follow_all" title="永久链接至目标">¶</a></dt>
<dd><p>Generates <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> instances to follow all links in <code class="docutils literal notranslate"><span class="pre">urls</span></code>. It accepts the same arguments as the <a class="reference internal" href="#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method, except that <code class="docutils literal notranslate"><span class="pre">urls</span></code> elements do not need to be absolute URLs; they can be any of the following:</p>
<ul class="simple">
<li><p>a relative URL</p></li>
<li><p>a <a class="reference internal" href="link-extractors.html#scrapy.link.Link" title="scrapy.link.Link"><code class="xref py py-class docutils literal notranslate"><span class="pre">Link</span></code></a> object, e.g. the result of <a class="reference internal" href="link-extractors.html#topics-link-extractors"><span class="std std-ref">Link Extractors</span></a></p></li>
<li><p>a <a class="reference internal" href="selectors.html#scrapy.selector.Selector" title="scrapy.selector.Selector"><code class="xref py py-class docutils literal notranslate"><span class="pre">Selector</span></code></a> object for a <code class="docutils literal notranslate"><span class="pre">&lt;link&gt;</span></code> or <code class="docutils literal notranslate"><span class="pre">&lt;a&gt;</span></code> element, e.g. <code class="docutils literal notranslate"><span class="pre">response.css('a.my_link')[0]</span></code></p></li>
<li><p>an attribute <a class="reference internal" href="selectors.html#scrapy.selector.Selector" title="scrapy.selector.Selector"><code class="xref py py-class docutils literal notranslate"><span class="pre">Selector</span></code></a> (not a SelectorList), e.g. <code class="docutils literal notranslate"><span class="pre">response.css('a::attr(href)')[0]</span></code> or <code class="docutils literal notranslate"><span class="pre">response.xpath('//img/&#64;src')[0]</span></code></p></li>
</ul>
<p>In addition, the <code class="docutils literal notranslate"><span class="pre">css</span></code> and <code class="docutils literal notranslate"><span class="pre">xpath</span></code> arguments are accepted to perform the link extraction within the <code class="docutils literal notranslate"><span class="pre">follow_all</span></code> method (only one of <code class="docutils literal notranslate"><span class="pre">urls</span></code>, <code class="docutils literal notranslate"><span class="pre">css</span></code> and <code class="docutils literal notranslate"><span class="pre">xpath</span></code> is accepted).</p>
<p>Note that when passing a <code class="docutils literal notranslate"><span class="pre">SelectorList</span></code> as the <code class="docutils literal notranslate"><span class="pre">urls</span></code> argument or using the <code class="docutils literal notranslate"><span class="pre">css</span></code> or <code class="docutils literal notranslate"><span class="pre">xpath</span></code> parameters, this method will not produce requests for selectors from which links cannot be obtained (for instance, anchor tags without an <code class="docutils literal notranslate"><span class="pre">href</span></code> attribute).</p>
</dd></dl>

<dl class="py method">
<dt id="scrapy.http.TextResponse.json">
<code class="sig-name descname">json</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/response/text.html#TextResponse.json"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.TextResponse.json" title="永久链接至目标">¶</a></dt>
<dd><div class="versionadded">
<p><span class="versionmodified added">New in version 2.2.</span></p>
</div>
<p>Returns a Python object deserialized from the JSON document. The result is cached after the first call.</p>
</dd></dl>

</dd></dl>

</div>
<div class="section" id="htmlresponse-objects">
<h3>HtmlResponse objects<a class="headerlink" href="#htmlresponse-objects" title="Permalink to this headline">¶</a></h3>
<dl class="py class">
<dt id="scrapy.http.HtmlResponse">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.http.</code><code class="sig-name descname">HtmlResponse</code><span class="sig-paren">(</span><em class="sig-param">url</em><span class="optional">[</span>, <em class="sig-param">...</em><span class="optional">]</span><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/response/html.html#HtmlResponse"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.HtmlResponse" title="永久链接至目标">¶</a></dt>
<dd><p>The <a class="reference internal" href="#scrapy.http.HtmlResponse" title="scrapy.http.HtmlResponse"><code class="xref py py-class docutils literal notranslate"><span class="pre">HtmlResponse</span></code></a> class is a subclass of <a class="reference internal" href="#scrapy.http.TextResponse" title="scrapy.http.TextResponse"><code class="xref py py-class docutils literal notranslate"><span class="pre">TextResponse</span></code></a> which adds encoding auto-discovering support by looking into the HTML <a class="reference external" href="https://www.w3schools.com/TAGS/att_meta_http_equiv.asp">meta http-equiv</a> attribute. See <a class="reference internal" href="#scrapy.http.TextResponse.encoding" title="scrapy.http.TextResponse.encoding"><code class="xref py py-attr docutils literal notranslate"><span class="pre">TextResponse.encoding</span></code></a>.</p>
</dd></dl>

</div>
<div class="section" id="xmlresponse-objects">
<h3>XmlResponse objects<a class="headerlink" href="#xmlresponse-objects" title="Permalink to this headline">¶</a></h3>
<dl class="py class">
<dt id="scrapy.http.XmlResponse">
<em class="property">class </em><code class="sig-prename descclassname">scrapy.http.</code><code class="sig-name descname">XmlResponse</code><span class="sig-paren">(</span><em class="sig-param">url</em><span class="optional">[</span>, <em class="sig-param">...</em><span class="optional">]</span><span class="sig-paren">)</span><a class="reference internal" href="../_modules/scrapy/http/response/xml.html#XmlResponse"><span class="viewcode-link">[源代码]</span></a><a class="headerlink" href="#scrapy.http.XmlResponse" title="永久链接至目标">¶</a></dt>
<dd><p>The <a class="reference internal" href="#scrapy.http.XmlResponse" title="scrapy.http.XmlResponse"><code class="xref py py-class docutils literal notranslate"><span class="pre">XmlResponse</span></code></a> class is a subclass of <a class="reference internal" href="#scrapy.http.TextResponse" title="scrapy.http.TextResponse"><code class="xref py py-class docutils literal notranslate"><span class="pre">TextResponse</span></code></a> which adds encoding auto-discovering support by looking into the XML declaration line. See <a class="reference internal" href="#scrapy.http.TextResponse.encoding" title="scrapy.http.TextResponse.encoding"><code class="xref py py-attr docutils literal notranslate"><span class="pre">TextResponse.encoding</span></code></a>.</p>
</dd></dl>

</div>
</div>
</div>


           </div>
           
          </div>
          <footer>
  
    <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
      
<a href="link-extractors.html" class="btn btn-neutral float-right" title="Link Extractors" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
      
      
<a href="feed-exports.html" class="btn btn-neutral float-left" title="Feed exports" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
      
    </div>
  

  <hr/>

  <div role="contentinfo">
    <p>
        
&copy; Copyright 2008&ndash;2020, Scrapy developers
      <span class="lastupdated">
Last updated on Oct 18, 2020.
      </span>

    </p>
  </div>
    
    
    
    Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a
    
    <a href="https://github.com/rtfd/sphinx_rtd_theme">theme</a>
    
    provided by <a href="https://readthedocs.org">Read the Docs</a>. 

</footer>

        </div>
      </div>

    </section>

  </div>
  

  <script type="text/javascript">
      jQuery(function () {
          SphinxRtdTheme.Navigation.enable(true);
      });
  </script>

  
  
    
  
 
<script type="text/javascript">
!function(){var analytics=window.analytics=window.analytics||[];if(!analytics.initialize)if(analytics.invoked)window.console&&console.error&&console.error("Segment snippet included twice.");else{analytics.invoked=!0;analytics.methods=["trackSubmit","trackClick","trackLink","trackForm","pageview","identify","reset","group","track","ready","alias","page","once","off","on"];analytics.factory=function(t){return function(){var e=Array.prototype.slice.call(arguments);e.unshift(t);analytics.push(e);return analytics}};for(var t=0;t<analytics.methods.length;t++){var e=analytics.methods[t];analytics[e]=analytics.factory(e)}analytics.load=function(t){var e=document.createElement("script");e.type="text/javascript";e.async=!0;e.src=("https:"===document.location.protocol?"https://":"http://")+"cdn.segment.com/analytics.js/v1/"+t+"/analytics.min.js";var n=document.getElementsByTagName("script")[0];n.parentNode.insertBefore(e,n)};analytics.SNIPPET_VERSION="3.1.0";
analytics.load("8UDQfnf3cyFSTsM4YANnW5sXmgZVILbA");
analytics.page();
}}();

analytics.ready(function () {
    ga('require', 'linker');
    ga('linker:autoLink', ['scrapinghub.com', 'crawlera.com']);
});
</script>


</body>
</html>