

<!DOCTYPE html>
<html class="writer-html5" lang="en" >
<head>
  <meta charset="utf-8">
  
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  
  <title>Item Pipeline &mdash; Scrapy 2.3.0 documentation</title>
  

  
  <link rel="stylesheet" href="../_static/css/theme.css" type="text/css" />
  <link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster.custom.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster.bundle.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-shadow.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-punk.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-noir.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-light.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/tooltipster-sideTip-borderless.min.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/micromodal.css" type="text/css" />
  <link rel="stylesheet" href="../_static/css/sphinx_rtd_theme.css" type="text/css" />

  
  
  
  

  
  <!--[if lt IE 9]>
    <script src="../_static/js/html5shiv.min.js"></script>
  <![endif]-->
  
    
      <script type="text/javascript" id="documentation_options" data-url_root="../" src="../_static/documentation_options.js"></script>
        <script src="../_static/jquery.js"></script>
        <script src="../_static/underscore.js"></script>
        <script src="../_static/doctools.js"></script>
        <script src="../_static/language_data.js"></script>
        <script src="../_static/js/hoverxref.js"></script>
        <script src="../_static/js/tooltipster.bundle.min.js"></script>
        <script src="../_static/js/micromodal.min.js"></script>
    
    <script type="text/javascript" src="../_static/js/theme.js"></script>

    
    <link rel="index" title="Index" href="../genindex.html" />
    <link rel="search" title="Search" href="../search.html" />
    <link rel="next" title="Feed exports" href="feed-exports.html" />
    <link rel="prev" title="Scrapy shell" href="shell.html" /> 
</head>

<body class="wy-body-for-nav">

   
  <div class="wy-grid-for-nav">
    
    <nav data-toggle="wy-nav-shift" class="wy-nav-side">
      <div class="wy-side-scroll">
        <div class="wy-side-nav-search" >
          

          
            <a href="../index.html" class="icon icon-home" alt="Documentation Home"> Scrapy
          

          
          </a>

          
            
            
              <div class="version">
                2.3
              </div>
            
          

          
<div role="search">
  <form id="rtd-search-form" class="wy-form" action="../search.html" method="get">
    <input type="text" name="q" placeholder="Search docs" />
    <input type="hidden" name="check_keywords" value="yes" />
    <input type="hidden" name="area" value="default" />
  </form>
</div>

          
        </div>

        
        <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
          
            
            
              
            
            
              <p class="caption"><span class="caption-text">First steps</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../intro/overview.html">Scrapy at a glance</a></li>
<li class="toctree-l1"><a class="reference internal" href="../intro/install.html">Installation guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../intro/tutorial.html">Scrapy Tutorial</a></li>
<li class="toctree-l1"><a class="reference internal" href="../intro/examples.html">Examples</a></li>
</ul>
<p class="caption"><span class="caption-text">Basic concepts</span></p>
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="commands.html">Command line tool</a></li>
<li class="toctree-l1"><a class="reference internal" href="spiders.html">Spiders</a></li>
<li class="toctree-l1"><a class="reference internal" href="selectors.html">Selectors</a></li>
<li class="toctree-l1"><a class="reference internal" href="items.html">Items</a></li>
<li class="toctree-l1"><a class="reference internal" href="loaders.html">Item Loaders</a></li>
<li class="toctree-l1"><a class="reference internal" href="shell.html">Scrapy shell</a></li>
<li class="toctree-l1 current"><a class="current reference internal" href="#">Item Pipeline</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#writing-your-own-item-pipeline">Writing your own item pipeline</a></li>
<li class="toctree-l2"><a class="reference internal" href="#item-pipeline-example">Item pipeline example</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#price-validation-and-dropping-items-with-no-prices">Price validation and dropping items with no prices</a></li>
<li class="toctree-l3"><a class="reference internal" href="#write-items-to-a-json-file">Write items to a JSON file</a></li>
<li class="toctree-l3"><a class="reference internal" href="#write-items-to-mongodb">Write items to MongoDB</a></li>
<li class="toctree-l3"><a class="reference internal" href="#take-screenshot-of-item">Take screenshot of item</a></li>
<li class="toctree-l3"><a class="reference internal" href="#duplicates-filter">Duplicates filter</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#activating-an-item-pipeline-component">Activating an Item Pipeline component</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="feed-exports.html">Feed exports</a></li>
<li class="toctree-l1"><a class="reference internal" href="request-response.html">Requests and Responses</a></li>
<li class="toctree-l1"><a class="reference internal" href="link-extractors.html">Link Extractors</a></li>
<li class="toctree-l1"><a class="reference internal" href="settings.html">Settings</a></li>
<li class="toctree-l1"><a class="reference internal" href="exceptions.html">Exceptions</a></li>
</ul>
<p class="caption"><span class="caption-text">Built-in services</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="logging.html">Logging</a></li>
<li class="toctree-l1"><a class="reference internal" href="stats.html">Stats Collection</a></li>
<li class="toctree-l1"><a class="reference internal" href="email.html">Sending e-mail</a></li>
<li class="toctree-l1"><a class="reference internal" href="telnetconsole.html">Telnet Console</a></li>
<li class="toctree-l1"><a class="reference internal" href="webservice.html">Web Service</a></li>
</ul>
<p class="caption"><span class="caption-text">Solving specific problems</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../faq.html">Frequently Asked Questions</a></li>
<li class="toctree-l1"><a class="reference internal" href="debug.html">Debugging Spiders</a></li>
<li class="toctree-l1"><a class="reference internal" href="contracts.html">Spiders Contracts</a></li>
<li class="toctree-l1"><a class="reference internal" href="practices.html">Common Practices</a></li>
<li class="toctree-l1"><a class="reference internal" href="broad-crawls.html">Broad Crawls</a></li>
<li class="toctree-l1"><a class="reference internal" href="developer-tools.html">Using your browser's Developer Tools for scraping</a></li>
<li class="toctree-l1"><a class="reference internal" href="dynamic-content.html">Selecting dynamically-loaded content</a></li>
<li class="toctree-l1"><a class="reference internal" href="leaks.html">Debugging memory leaks</a></li>
<li class="toctree-l1"><a class="reference internal" href="media-pipeline.html">Downloading and processing files and images</a></li>
<li class="toctree-l1"><a class="reference internal" href="deploy.html">Deploying Spiders</a></li>
<li class="toctree-l1"><a class="reference internal" href="autothrottle.html">AutoThrottle extension</a></li>
<li class="toctree-l1"><a class="reference internal" href="benchmarking.html">Benchmarking</a></li>
<li class="toctree-l1"><a class="reference internal" href="jobs.html">Jobs: pausing and resuming crawls</a></li>
<li class="toctree-l1"><a class="reference internal" href="coroutines.html">Coroutines</a></li>
<li class="toctree-l1"><a class="reference internal" href="asyncio.html">asyncio</a></li>
</ul>
<p class="caption"><span class="caption-text">Extending Scrapy</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="architecture.html">Architecture overview</a></li>
<li class="toctree-l1"><a class="reference internal" href="downloader-middleware.html">Downloader Middleware</a></li>
<li class="toctree-l1"><a class="reference internal" href="spider-middleware.html">Spider Middleware</a></li>
<li class="toctree-l1"><a class="reference internal" href="extensions.html">Extensions</a></li>
<li class="toctree-l1"><a class="reference internal" href="api.html">Core API</a></li>
<li class="toctree-l1"><a class="reference internal" href="signals.html">Signals</a></li>
<li class="toctree-l1"><a class="reference internal" href="exporters.html">Item Exporters</a></li>
</ul>
<p class="caption"><span class="caption-text">All the rest</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../news.html">Release notes</a></li>
<li class="toctree-l1"><a class="reference internal" href="../contributing.html">Contributing to Scrapy</a></li>
<li class="toctree-l1"><a class="reference internal" href="../versioning.html">Versioning and API Stability</a></li>
</ul>

            
          
        </div>
        
      </div>
    </nav>

    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">

      
      <nav class="wy-nav-top" aria-label="top navigation">
        
          <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
          <a href="../index.html">Scrapy</a>
        
      </nav>


      <div class="wy-nav-content">
        
        <div class="rst-content">
        
          















<div role="navigation" aria-label="breadcrumbs navigation">

  <ul class="wy-breadcrumbs">
    
      <li><a href="../index.html" class="icon icon-home"></a> &raquo;</li>
        
      <li>Item Pipeline</li>
    
    
      <li class="wy-breadcrumbs-aside">
        
            
        
      </li>
    
  </ul>

  
  <hr/>
</div>
          <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
           <div itemprop="articleBody">
            
  <div class="section" id="item-pipeline">
<span id="topics-item-pipeline"></span><h1>Item Pipeline<a class="headerlink" href="#item-pipeline" title="Permalink to this headline">¶</a></h1>
<p>After an item has been scraped by a spider, it is sent to the Item Pipeline, which processes it through several components that are executed sequentially.</p>
<p>Each item pipeline component (sometimes referred to simply as an "Item Pipeline") is a Python class that implements a simple method. It receives an item and performs an action over it, also deciding whether the item should continue through the pipeline or be dropped and no longer processed.</p>
<p>Typical uses of item pipelines are:</p>
<ul class="simple">
<li><p>cleansing HTML data</p></li>
<li><p>validating scraped data (checking that the items contain certain fields)</p></li>
<li><p>checking for duplicates (and dropping them)</p></li>
<li><p>storing the scraped item in a database</p></li>
</ul>
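<p>The uses above can be sketched in a single minimal pipeline. This is an illustrative sketch only, using plain dict items and the standard library: the field names are hypothetical, and a plain ValueError stands in for Scrapy's DropItem exception.</p>

```python
import html


class CleanAndValidatePipeline:
    """Illustrative pipeline: clean HTML data, then validate the item."""

    required_fields = ("name", "price")  # hypothetical field names

    def process_item(self, item, spider):
        # Cleansing HTML data: unescape entities and strip whitespace.
        if "name" in item:
            item["name"] = html.unescape(item["name"]).strip()
        # Validating scraped data: check that the item contains certain fields.
        for field in self.required_fields:
            if field not in item:
                # In a real pipeline this would raise scrapy.exceptions.DropItem.
                raise ValueError(f"Missing {field} in {item}")
        return item


pipeline = CleanAndValidatePipeline()
cleaned = pipeline.process_item({"name": "  Widget &amp; Co.  ", "price": 10}, spider=None)
print(cleaned["name"])  # -> Widget & Co.
```

Items with all required fields pass through cleaned; items missing a field are rejected instead of continuing down the pipeline.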
<div class="section" id="writing-your-own-item-pipeline">
<h2>Writing your own item pipeline<a class="headerlink" href="#writing-your-own-item-pipeline" title="Permalink to this headline">¶</a></h2>
<p>Each item pipeline component is a Python class that must implement the following method:</p>
<dl class="py method">
<dt id="process_item">
<code class="sig-name descname">process_item</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">self</span></em>, <em class="sig-param"><span class="n">item</span></em>, <em class="sig-param"><span class="n">spider</span></em><span class="sig-paren">)</span><a class="headerlink" href="#process_item" title="Permalink to this definition">¶</a></dt>
<dd><p>This method is called for every item pipeline component.</p>
<p><cite>item</cite> is an <a class="reference internal" href="items.html#item-types"><span class="std std-ref">item object</span></a>, see <a class="reference internal" href="items.html#supporting-item-types"><span class="std std-ref">Supporting All Item Types</span></a>.</p>
<p><a class="reference internal" href="#process_item" title="process_item"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_item()</span></code></a> must either: return an <a class="reference internal" href="items.html#item-types"><span class="std std-ref">item object</span></a>, return a <a class="reference external" href="https://twistedmatrix.com/documents/current/api/twisted.internet.defer.Deferred.html" title="(in Twisted)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Deferred</span></code></a> or raise a <a class="reference internal" href="exceptions.html#scrapy.exceptions.DropItem" title="scrapy.exceptions.DropItem"><code class="xref py py-exc docutils literal notranslate"><span class="pre">DropItem</span></code></a> exception.</p>
<p>Dropped items are no longer processed by further pipeline components.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>item</strong> (<a class="reference internal" href="items.html#item-types"><span class="std std-ref">item object</span></a>) -- the scraped item</p></li>
<li><p><strong>spider</strong> (<a class="reference internal" href="spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> object) -- the spider which scraped the item</p></li>
</ul>
</dd>
</dl>
</dd></dl>

<p>Additionally, they may also implement the following methods:</p>
<dl class="py method">
<dt id="open_spider">
<code class="sig-name descname">open_spider</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">self</span></em>, <em class="sig-param"><span class="n">spider</span></em><span class="sig-paren">)</span><a class="headerlink" href="#open_spider" title="Permalink to this definition">¶</a></dt>
<dd><p>This method is called when the spider is opened.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>spider</strong> (<a class="reference internal" href="spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> object) -- the spider which was opened</p>
</dd>
</dl>
</dd></dl>

<dl class="py method">
<dt id="close_spider">
<code class="sig-name descname">close_spider</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">self</span></em>, <em class="sig-param"><span class="n">spider</span></em><span class="sig-paren">)</span><a class="headerlink" href="#close_spider" title="Permalink to this definition">¶</a></dt>
<dd><p>This method is called when the spider is closed.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>spider</strong> (<a class="reference internal" href="spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> object) -- the spider which was closed</p>
</dd>
</dl>
</dd></dl>

<dl class="py method">
<dt id="from_crawler">
<code class="sig-name descname">from_crawler</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">cls</span></em>, <em class="sig-param"><span class="n">crawler</span></em><span class="sig-paren">)</span><a class="headerlink" href="#from_crawler" title="Permalink to this definition">¶</a></dt>
<dd><p>If present, this classmethod is called to create a pipeline instance from a <a class="reference internal" href="api.html#scrapy.crawler.Crawler" title="scrapy.crawler.Crawler"><code class="xref py py-class docutils literal notranslate"><span class="pre">Crawler</span></code></a>. It must return a new instance of the pipeline. The Crawler object provides access to all Scrapy core components like settings and signals; it is a way for the pipeline to access them and hook its functionality into Scrapy.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>crawler</strong> (<a class="reference internal" href="api.html#scrapy.crawler.Crawler" title="scrapy.crawler.Crawler"><code class="xref py py-class docutils literal notranslate"><span class="pre">Crawler</span></code></a> object) -- the crawler that uses this pipeline</p>
</dd>
</dl>
</dd></dl>
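<p>The lifecycle of these optional methods can be sketched with stand-in objects: below, a plain class whose dict-backed <cite>settings</cite> attribute substitutes for a real Crawler, and <cite>EXPORT_TAG</cite> is a hypothetical setting name used only for illustration.</p>

```python
class TagExportPipeline:
    """Illustrative pipeline wired together through from_crawler()."""

    def __init__(self, export_tag):
        self.export_tag = export_tag
        self.items = None

    @classmethod
    def from_crawler(cls, crawler):
        # Build the pipeline instance from crawler settings.
        return cls(export_tag=crawler.settings.get("EXPORT_TAG", "default"))

    def open_spider(self, spider):
        # Acquire resources when the spider opens.
        self.items = []

    def close_spider(self, spider):
        # Release resources when the spider closes.
        self.items = None

    def process_item(self, item, spider):
        self.items.append(item)
        return item


class FakeCrawler:
    # A plain dict supports the settings.get(name, default) call used above.
    settings = {"EXPORT_TAG": "nightly"}


pipeline = TagExportPipeline.from_crawler(FakeCrawler())
pipeline.open_spider(spider=None)
pipeline.process_item({"id": 1}, spider=None)
print(pipeline.export_tag)  # -> nightly
```

Scrapy calls these hooks in the same order: <cite>from_crawler()</cite> first, then <cite>open_spider()</cite>, then <cite>process_item()</cite> per item, and finally <cite>close_spider()</cite>.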

</div>
<div class="section" id="item-pipeline-example">
<h2>Item pipeline example<a class="headerlink" href="#item-pipeline-example" title="Permalink to this headline">¶</a></h2>
<div class="section" id="price-validation-and-dropping-items-with-no-prices">
<h3>Price validation and dropping items with no prices<a class="headerlink" href="#price-validation-and-dropping-items-with-no-prices" title="Permalink to this headline">¶</a></h3>
<p>Let's take a look at the following hypothetical pipeline that adjusts the <code class="docutils literal notranslate"><span class="pre">price</span></code> attribute for those items that do not include VAT (<code class="docutils literal notranslate"><span class="pre">price_excludes_vat</span></code> attribute), and drops those items which don't contain a price:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">itemadapter</span> <span class="kn">import</span> <span class="n">ItemAdapter</span>
<span class="kn">from</span> <span class="nn">scrapy.exceptions</span> <span class="kn">import</span> <span class="n">DropItem</span>
<span class="k">class</span> <span class="nc">PricePipeline</span><span class="p">:</span>

    <span class="n">vat_factor</span> <span class="o">=</span> <span class="mf">1.15</span>

    <span class="k">def</span> <span class="nf">process_item</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">item</span><span class="p">,</span> <span class="n">spider</span><span class="p">):</span>
        <span class="n">adapter</span> <span class="o">=</span> <span class="n">ItemAdapter</span><span class="p">(</span><span class="n">item</span><span class="p">)</span>
        <span class="k">if</span> <span class="n">adapter</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s1">&#39;price&#39;</span><span class="p">):</span>
            <span class="k">if</span> <span class="n">adapter</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s1">&#39;price_excludes_vat&#39;</span><span class="p">):</span>
                <span class="n">adapter</span><span class="p">[</span><span class="s1">&#39;price&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="n">adapter</span><span class="p">[</span><span class="s1">&#39;price&#39;</span><span class="p">]</span> <span class="o">*</span> <span class="bp">self</span><span class="o">.</span><span class="n">vat_factor</span>
            <span class="k">return</span> <span class="n">item</span>
        <span class="k">else</span><span class="p">:</span>
            <span class="k">raise</span> <span class="n">DropItem</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Missing price in </span><span class="si">{</span><span class="n">item</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="section" id="write-items-to-a-json-file">
<h3>Write items to a JSON file<a class="headerlink" href="#write-items-to-a-json-file" title="Permalink to this headline">¶</a></h3>
<p>The following pipeline stores all scraped items (from all spiders) into a single <code class="docutils literal notranslate"><span class="pre">items.jl</span></code> file, containing one item per line serialized in JSON format:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">json</span>

<span class="kn">from</span> <span class="nn">itemadapter</span> <span class="kn">import</span> <span class="n">ItemAdapter</span>

<span class="k">class</span> <span class="nc">JsonWriterPipeline</span><span class="p">:</span>

    <span class="k">def</span> <span class="nf">open_spider</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">spider</span><span class="p">):</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">file</span> <span class="o">=</span> <span class="nb">open</span><span class="p">(</span><span class="s1">&#39;items.jl&#39;</span><span class="p">,</span> <span class="s1">&#39;w&#39;</span><span class="p">)</span>

    <span class="k">def</span> <span class="nf">close_spider</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">spider</span><span class="p">):</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">file</span><span class="o">.</span><span class="n">close</span><span class="p">()</span>

    <span class="k">def</span> <span class="nf">process_item</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">item</span><span class="p">,</span> <span class="n">spider</span><span class="p">):</span>
        <span class="n">line</span> <span class="o">=</span> <span class="n">json</span><span class="o">.</span><span class="n">dumps</span><span class="p">(</span><span class="n">ItemAdapter</span><span class="p">(</span><span class="n">item</span><span class="p">)</span><span class="o">.</span><span class="n">asdict</span><span class="p">())</span> <span class="o">+</span> <span class="s2">&quot;</span><span class="se">\n</span><span class="s2">&quot;</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">file</span><span class="o">.</span><span class="n">write</span><span class="p">(</span><span class="n">line</span><span class="p">)</span>
        <span class="k">return</span> <span class="n">item</span>
</pre></div>
</div>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>The purpose of JsonWriterPipeline is just to introduce how to write item pipelines. If you really want to store all scraped items into a JSON file you should use the <a class="reference internal" href="feed-exports.html#topics-feed-exports"><span class="std std-ref">Feed exports</span></a>.</p>
</div>
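<p>For comparison, switching to the built-in feed exports takes only a settings change. A sketch of the relevant <cite>settings.py</cite> fragment (the <cite>FEEDS</cite> setting is available since Scrapy 2.1):</p>

```python
# settings.py -- let the built-in feed exports write the items
# instead of a hand-written pipeline.
FEEDS = {
    'items.jl': {
        'format': 'jsonlines',
        'encoding': 'utf8',
    },
}
```

With this in place, Scrapy writes every scraped item to <cite>items.jl</cite> as one JSON object per line, with no pipeline code to maintain.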
</div>
<div class="section" id="write-items-to-mongodb">
<h3>Write items to MongoDB<a class="headerlink" href="#write-items-to-mongodb" title="Permalink to this headline">¶</a></h3>
<p>In this example we'll write items to <a class="reference external" href="https://www.mongodb.com/">MongoDB</a> using <a class="reference external" href="https://api.mongodb.com/python/current/">pymongo</a>. MongoDB address and database name are specified in Scrapy settings; the MongoDB collection is named after the item class.</p>
<p>The main point of this example is to show how to use the <a class="reference internal" href="#from_crawler" title="from_crawler"><code class="xref py py-meth docutils literal notranslate"><span class="pre">from_crawler()</span></code></a> method and how to clean up the resources properly:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">pymongo</span>
<span class="kn">from</span> <span class="nn">itemadapter</span> <span class="kn">import</span> <span class="n">ItemAdapter</span>

<span class="k">class</span> <span class="nc">MongoPipeline</span><span class="p">:</span>

    <span class="n">collection_name</span> <span class="o">=</span> <span class="s1">&#39;scrapy_items&#39;</span>

    <span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">mongo_uri</span><span class="p">,</span> <span class="n">mongo_db</span><span class="p">):</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">mongo_uri</span> <span class="o">=</span> <span class="n">mongo_uri</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">mongo_db</span> <span class="o">=</span> <span class="n">mongo_db</span>

    <span class="nd">@classmethod</span>
    <span class="k">def</span> <span class="nf">from_crawler</span><span class="p">(</span><span class="bp">cls</span><span class="p">,</span> <span class="n">crawler</span><span class="p">):</span>
        <span class="k">return</span> <span class="bp">cls</span><span class="p">(</span>
            <span class="n">mongo_uri</span><span class="o">=</span><span class="n">crawler</span><span class="o">.</span><span class="n">settings</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s1">&#39;MONGO_URI&#39;</span><span class="p">),</span>
            <span class="n">mongo_db</span><span class="o">=</span><span class="n">crawler</span><span class="o">.</span><span class="n">settings</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s1">&#39;MONGO_DATABASE&#39;</span><span class="p">,</span> <span class="s1">&#39;items&#39;</span><span class="p">)</span>
        <span class="p">)</span>

    <span class="k">def</span> <span class="nf">open_spider</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">spider</span><span class="p">):</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">client</span> <span class="o">=</span> <span class="n">pymongo</span><span class="o">.</span><span class="n">MongoClient</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">mongo_uri</span><span class="p">)</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">db</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">client</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">mongo_db</span><span class="p">]</span>

    <span class="k">def</span> <span class="nf">close_spider</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">spider</span><span class="p">):</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">client</span><span class="o">.</span><span class="n">close</span><span class="p">()</span>

    <span class="k">def</span> <span class="nf">process_item</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">item</span><span class="p">,</span> <span class="n">spider</span><span class="p">):</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">db</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">collection_name</span><span class="p">]</span><span class="o">.</span><span class="n">insert_one</span><span class="p">(</span><span class="n">ItemAdapter</span><span class="p">(</span><span class="n">item</span><span class="p">)</span><span class="o">.</span><span class="n">asdict</span><span class="p">())</span>
        <span class="k">return</span> <span class="n">item</span>
</pre></div>
</div>
</div>
<div class="section" id="take-screenshot-of-item">
<span id="screenshotpipeline"></span><h3>Take screenshot of item<a class="headerlink" href="#take-screenshot-of-item" title="Permalink to this headline">¶</a></h3>
<p>This example demonstrates how to use <a class="reference internal" href="coroutines.html"><span class="doc">coroutine syntax</span></a> in the <a class="reference internal" href="#process_item" title="process_item"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_item()</span></code></a> method.</p>
<p>This item pipeline makes a request to a locally-running instance of <a class="reference external" href="https://splash.readthedocs.io/en/stable/">Splash</a> to render a screenshot of the item URL. After the request response is downloaded, the item pipeline saves the screenshot to a file and adds the filename to the item.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">hashlib</span>
<span class="kn">from</span> <span class="nn">urllib.parse</span> <span class="kn">import</span> <span class="n">quote</span>

<span class="kn">import</span> <span class="nn">scrapy</span>
<span class="kn">from</span> <span class="nn">itemadapter</span> <span class="kn">import</span> <span class="n">ItemAdapter</span>

<span class="k">class</span> <span class="nc">ScreenshotPipeline</span><span class="p">:</span>
    <span class="sd">&quot;&quot;&quot;Pipeline that uses Splash to render screenshot of</span>
<span class="sd">    every Scrapy item.&quot;&quot;&quot;</span>

    <span class="n">SPLASH_URL</span> <span class="o">=</span> <span class="s2">&quot;http://localhost:8050/render.png?url=</span><span class="si">{}</span><span class="s2">&quot;</span>

    <span class="k">async</span> <span class="k">def</span> <span class="nf">process_item</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">item</span><span class="p">,</span> <span class="n">spider</span><span class="p">):</span>
        <span class="n">adapter</span> <span class="o">=</span> <span class="n">ItemAdapter</span><span class="p">(</span><span class="n">item</span><span class="p">)</span>
        <span class="n">encoded_item_url</span> <span class="o">=</span> <span class="n">quote</span><span class="p">(</span><span class="n">adapter</span><span class="p">[</span><span class="s2">&quot;url&quot;</span><span class="p">])</span>
        <span class="n">screenshot_url</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">SPLASH_URL</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">encoded_item_url</span><span class="p">)</span>
        <span class="n">request</span> <span class="o">=</span> <span class="n">scrapy</span><span class="o">.</span><span class="n">Request</span><span class="p">(</span><span class="n">screenshot_url</span><span class="p">)</span>
        <span class="n">response</span> <span class="o">=</span> <span class="k">await</span> <span class="n">spider</span><span class="o">.</span><span class="n">crawler</span><span class="o">.</span><span class="n">engine</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="n">request</span><span class="p">,</span> <span class="n">spider</span><span class="p">)</span>

        <span class="k">if</span> <span class="n">response</span><span class="o">.</span><span class="n">status</span> <span class="o">!=</span> <span class="mi">200</span><span class="p">:</span>
            <span class="c1"># Error happened, return item.</span>
            <span class="k">return</span> <span class="n">item</span>

        <span class="c1"># Save screenshot to file, filename will be hash of url.</span>
        <span class="n">url</span> <span class="o">=</span> <span class="n">adapter</span><span class="p">[</span><span class="s2">&quot;url&quot;</span><span class="p">]</span>
        <span class="n">url_hash</span> <span class="o">=</span> <span class="n">hashlib</span><span class="o">.</span><span class="n">md5</span><span class="p">(</span><span class="n">url</span><span class="o">.</span><span class="n">encode</span><span class="p">(</span><span class="s2">&quot;utf8&quot;</span><span class="p">))</span><span class="o">.</span><span class="n">hexdigest</span><span class="p">()</span>
        <span class="n">filename</span> <span class="o">=</span> <span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><span class="n">url_hash</span><span class="si">}</span><span class="s2">.png&quot;</span>
        <span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">filename</span><span class="p">,</span> <span class="s2">&quot;wb&quot;</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
            <span class="n">f</span><span class="o">.</span><span class="n">write</span><span class="p">(</span><span class="n">response</span><span class="o">.</span><span class="n">body</span><span class="p">)</span>

        <span class="c1"># Store filename in item.</span>
        <span class="n">adapter</span><span class="p">[</span><span class="s2">&quot;screenshot_filename&quot;</span><span class="p">]</span> <span class="o">=</span> <span class="n">filename</span>
        <span class="k">return</span> <span class="n">item</span>
</pre></div>
</div>
</div>
<div class="section" id="duplicates-filter">
<h3>Duplicates filter<a class="headerlink" href="#duplicates-filter" title="Permalink to this headline">¶</a></h3>
<p>A filter that looks for duplicate items, and drops those items that were already processed. Let's say that our items have a unique id, but our spider returns multiple items with the same id:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">itemadapter</span> <span class="kn">import</span> <span class="n">ItemAdapter</span>
<span class="kn">from</span> <span class="nn">scrapy.exceptions</span> <span class="kn">import</span> <span class="n">DropItem</span>

<span class="k">class</span> <span class="nc">DuplicatesPipeline</span><span class="p">:</span>

    <span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">ids_seen</span> <span class="o">=</span> <span class="nb">set</span><span class="p">()</span>

    <span class="k">def</span> <span class="nf">process_item</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">item</span><span class="p">,</span> <span class="n">spider</span><span class="p">):</span>
        <span class="n">adapter</span> <span class="o">=</span> <span class="n">ItemAdapter</span><span class="p">(</span><span class="n">item</span><span class="p">)</span>
        <span class="k">if</span> <span class="n">adapter</span><span class="p">[</span><span class="s1">&#39;id&#39;</span><span class="p">]</span> <span class="ow">in</span> <span class="bp">self</span><span class="o">.</span><span class="n">ids_seen</span><span class="p">:</span>
            <span class="k">raise</span> <span class="n">DropItem</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Duplicate item found: </span><span class="si">{</span><span class="n">item</span><span class="si">!r}</span><span class="s2">&quot;</span><span class="p">)</span>
        <span class="k">else</span><span class="p">:</span>
            <span class="bp">self</span><span class="o">.</span><span class="n">ids_seen</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">adapter</span><span class="p">[</span><span class="s1">&#39;id&#39;</span><span class="p">])</span>
            <span class="k">return</span> <span class="n">item</span>
</pre></div>
</div>
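<p>The deduplication logic above can be exercised in isolation. The following standalone sketch uses plain dicts in place of Scrapy items and redefines <code class="docutils literal notranslate"><span class="pre">DropItem</span></code> locally, so it runs without Scrapy installed; the item fields are illustrative:</p>

```python
# Stand-in for scrapy.exceptions.DropItem so this sketch has no
# external dependencies (assumption: behavior-wise a plain Exception).
class DropItem(Exception):
    pass


class DuplicatesPipeline:
    def __init__(self):
        self.ids_seen = set()

    def process_item(self, item, spider):
        # Drop any item whose ID was already processed.
        if item["id"] in self.ids_seen:
            raise DropItem(f"Duplicate item found: {item!r}")
        self.ids_seen.add(item["id"])
        return item


pipeline = DuplicatesPipeline()
first = pipeline.process_item({"id": 1, "name": "first"}, spider=None)

# A second item with the same ID raises DropItem.
try:
    pipeline.process_item({"id": 1, "name": "again"}, spider=None)
    dropped = ""
except DropItem as exc:
    dropped = str(exc)
```

<p>In a real crawl, Scrapy catches <code class="docutils literal notranslate"><span class="pre">DropItem</span></code> itself and stops passing the item to later pipelines.</p>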
</div>
</div>
<div class="section" id="activating-an-item-pipeline-component">
<h2>Activating an Item Pipeline component<a class="headerlink" href="#activating-an-item-pipeline-component" title="Permalink to this headline">¶</a></h2>
<p>To activate an Item Pipeline component you must add its class to the <a class="reference internal" href="settings.html#std-setting-ITEM_PIPELINES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">ITEM_PIPELINES</span></code></a> setting, as in the following example:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">ITEM_PIPELINES</span> <span class="o">=</span> <span class="p">{</span>
    <span class="s1">&#39;myproject.pipelines.PricePipeline&#39;</span><span class="p">:</span> <span class="mi">300</span><span class="p">,</span>
    <span class="s1">&#39;myproject.pipelines.JsonWriterPipeline&#39;</span><span class="p">:</span> <span class="mi">800</span><span class="p">,</span>
<span class="p">}</span>
</pre></div>
</div>
<p>The integer values you assign to classes in this setting determine the order in which they run: items go through pipelines from lower-valued to higher-valued classes. It is customary to define these numbers in the 0-1000 range.</p>
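<p>As a sketch of that ordering rule: building the run order is roughly equivalent to sorting the setting by its values, lowest first. This standalone snippet reuses the illustrative pipeline paths from the example above:</p>

```python
# Priorities from the ITEM_PIPELINES example above (values are illustrative).
ITEM_PIPELINES = {
    "myproject.pipelines.PricePipeline": 300,
    "myproject.pipelines.JsonWriterPipeline": 800,
}

# Items flow through pipelines in ascending priority order, which is
# roughly equivalent to sorting the dict keys by their values:
ordered = sorted(ITEM_PIPELINES, key=ITEM_PIPELINES.get)
print(ordered)
```

<p>Here <code class="docutils literal notranslate"><span class="pre">PricePipeline</span></code> (300) runs before <code class="docutils literal notranslate"><span class="pre">JsonWriterPipeline</span></code> (800), so prices are adjusted before items are serialized.</p>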
</div>
</div>


           </div>
           
          </div>
          <footer>
  
    <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
      
        <a href="feed-exports.html" class="btn btn-neutral float-right" title="Feed exports" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
      
      
        <a href="shell.html" class="btn btn-neutral float-left" title="Scrapy shell" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
      
    </div>
  

  <hr/>

  <div role="contentinfo">
    <p>
        
        &copy; Copyright 2008&ndash;2020, Scrapy developers
      <span class="lastupdated">
        Last updated on Oct 18, 2020.
      </span>

    </p>
  </div>
    
    
    
    Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a
    
    <a href="https://github.com/rtfd/sphinx_rtd_theme">theme</a>
    
    provided by <a href="https://readthedocs.org">Read the Docs</a>. 

</footer>

        </div>
      </div>

    </section>

  </div>
  

  <script type="text/javascript">
      jQuery(function () {
          SphinxRtdTheme.Navigation.enable(true);
      });
  </script>

  
  
    
  
 


</body>
</html>