

<!DOCTYPE html>
<html class="writer-html5" lang="zh" >
<head>
  <meta charset="utf-8">
  
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  
  <title>常见问题 &mdash; Scrapy 2.3.0 文档</title>
  

  
  <link rel="stylesheet" href="_static/css/theme.css" type="text/css" />
  <link rel="stylesheet" href="_static/pygments.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/tooltipster.custom.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/tooltipster.bundle.min.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/tooltipster-sideTip-shadow.min.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/tooltipster-sideTip-punk.min.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/tooltipster-sideTip-noir.min.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/tooltipster-sideTip-light.min.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/tooltipster-sideTip-borderless.min.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/micromodal.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/sphinx_rtd_theme.css" type="text/css" />

  
  
  
  

  
  <!--[if lt IE 9]>
    <script src="_static/js/html5shiv.min.js"></script>
  <![endif]-->
  
    
      <script type="text/javascript" id="documentation_options" data-url_root="./" src="_static/documentation_options.js"></script>
        <script src="_static/jquery.js"></script>
        <script src="_static/underscore.js"></script>
        <script src="_static/doctools.js"></script>
        <script src="_static/language_data.js"></script>
        <script src="_static/js/hoverxref.js"></script>
        <script src="_static/js/tooltipster.bundle.min.js"></script>
        <script src="_static/js/micromodal.min.js"></script>
    
    <script type="text/javascript" src="_static/js/theme.js"></script>

    
    <link rel="index" title="索引" href="genindex.html" />
    <link rel="search" title="搜索" href="search.html" />
    <link rel="next" title="调试spiders" href="topics/debug.html" />
    <link rel="prev" title="Web服务" href="topics/webservice.html" /> 
</head>

<body class="wy-body-for-nav">

   
  <div class="wy-grid-for-nav">
    
    <nav data-toggle="wy-nav-shift" class="wy-nav-side">
      <div class="wy-side-scroll">
        <div class="wy-side-nav-search" >
          

          
            <a href="index.html" class="icon icon-home" alt="Documentation Home"> Scrapy
          

          
          </a>

          
            
            
              <div class="version">
                2.3
              </div>
            
          

          
<div role="search">
  <form id="rtd-search-form" class="wy-form" action="search.html" method="get">
    <input type="text" name="q" placeholder="Search docs" />
    <input type="hidden" name="check_keywords" value="yes" />
    <input type="hidden" name="area" value="default" />
  </form>
</div>

          
        </div>

        
        <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
          
            
            
              
            
            
              <p class="caption"><span class="caption-text">第一步</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="intro/overview.html">Scrapy一目了然</a></li>
<li class="toctree-l1"><a class="reference internal" href="intro/install.html">安装指南</a></li>
<li class="toctree-l1"><a class="reference internal" href="intro/tutorial.html">Scrapy 教程</a></li>
<li class="toctree-l1"><a class="reference internal" href="intro/examples.html">实例</a></li>
</ul>
<p class="caption"><span class="caption-text">基本概念</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="topics/commands.html">命令行工具</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/spiders.html">蜘蛛</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/selectors.html">选择器</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/items.html">项目</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/loaders.html">项目加载器</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/shell.html">Scrapy shell</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/item-pipeline.html">项目管道</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/feed-exports.html">Feed 导出</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/request-response.html">请求和响应</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/link-extractors.html">链接提取器</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/settings.html">设置</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/exceptions.html">例外情况</a></li>
</ul>
<p class="caption"><span class="caption-text">内置服务</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="topics/logging.html">登录</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/stats.html">统计数据集合</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/email.html">发送电子邮件</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/telnetconsole.html">远程登录控制台</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/webservice.html">Web服务</a></li>
</ul>
<p class="caption"><span class="caption-text">解决具体问题</span></p>
<ul class="current">
<li class="toctree-l1 current"><a class="current reference internal" href="#">常见问题</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#how-does-scrapy-compare-to-beautifulsoup-or-lxml">Scrapy与BeautifulSoup或LXML相比如何？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#can-i-use-scrapy-with-beautifulsoup">我可以和BeautifulSoup一起使用Scrapy吗？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#did-scrapy-steal-x-from-django">Scrapy是否从Django“窃取”X？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#does-scrapy-work-with-http-proxies">Scrapy与HTTP代理一起工作吗？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#how-can-i-scrape-an-item-with-attributes-in-different-pages">如何在不同的页面中抓取具有属性的项目？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-crashes-with-importerror-no-module-named-win32api">Scrapy崩溃：importError:没有名为win32api的模块</a></li>
<li class="toctree-l2"><a class="reference internal" href="#how-can-i-simulate-a-user-login-in-my-spider">如何在蜘蛛中模拟用户登录？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#does-scrapy-crawl-in-breadth-first-or-depth-first-order">Scrapy是以广度优先还是深度优先的顺序爬行？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#my-scrapy-crawler-has-memory-leaks-what-can-i-do">我可怜的爬虫有记忆漏洞。我能做什么？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#how-can-i-make-scrapy-consume-less-memory">我怎么能让 Scrapy 消耗更少的记忆？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#can-i-use-basic-http-authentication-in-my-spiders">我可以在spider中使用基本的HTTP身份验证吗？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#why-does-scrapy-download-pages-in-english-instead-of-my-native-language">为什么Scrapy用英语而不是我的母语下载页面？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#where-can-i-find-some-example-scrapy-projects">我在哪里可以找到一些零碎项目的例子？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#can-i-run-a-spider-without-creating-a-project">我可以在不创建项目的情况下运行蜘蛛吗？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#i-get-filtered-offsite-request-messages-how-can-i-fix-them">我收到“Filtered offsite request”消息。 我该如何解决这些问题？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#what-is-the-recommended-way-to-deploy-a-scrapy-crawler-in-production">在生产中，建议采用什么方式部署 Scrapy ？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#can-i-use-json-for-large-exports">我可以使用JSON进行大型输出吗？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#can-i-return-twisted-deferreds-from-signal-handlers">我可以从信号处理程序返回（扭曲）延迟吗？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#what-does-the-response-status-code-999-means">响应状态代码999是什么意思？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#can-i-call-pdb-set-trace-from-my-spiders-to-debug-them">我可以从我的蜘蛛调用``pdb.set_trace（）``来调试它们吗？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#simplest-way-to-dump-all-my-scraped-items-into-a-json-csv-xml-file">最简单的方法是将我的所有抓取项转储到json/csv/xml文件中？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#what-s-this-huge-cryptic-viewstate-parameter-used-in-some-forms">在某些形式中使用的这个巨大的神秘``__VIEWSTATE``参数是什么？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#what-s-the-best-way-to-parse-big-xml-csv-data-feeds">解析大型XML/CSV数据源的最佳方法是什么？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#does-scrapy-manage-cookies-automatically">Scrapy是否自动管理cookies？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#how-can-i-see-the-cookies-being-sent-and-received-from-scrapy">我如何才能看到从Scrapy发送和接收的cookies？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#how-can-i-instruct-a-spider-to-stop-itself">我怎样才能指示蜘蛛停止自己呢？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#how-can-i-prevent-my-scrapy-bot-from-getting-banned">如何防止我的Scrapy机器人被禁止？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#should-i-use-spider-arguments-or-settings-to-configure-my-spider">我应该使用蜘蛛参数或设置来配置我的蜘蛛吗？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#i-m-scraping-a-xml-document-and-my-xpath-selector-doesn-t-return-any-items">我正在抓取一个XML文档，而我的xpath选择器没有返回任何项</a></li>
<li class="toctree-l2"><a class="reference internal" href="#how-to-split-an-item-into-multiple-items-in-an-item-pipeline">如何在项目管道中将项目拆分为多个项目？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#does-scrapy-support-ipv6-addresses">Scrapy支持IPv6地址吗？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#how-to-deal-with-class-valueerror-filedescriptor-out-of-range-in-select-exceptions">如何处理 <code class="docutils literal notranslate"><span class="pre">&lt;class</span> <span class="pre">'ValueError'&gt;:</span> <span class="pre">filedescriptor</span> <span class="pre">out</span> <span class="pre">of</span> <span class="pre">range</span> <span class="pre">in</span> <span class="pre">select()</span></code> 例外情况？</a></li>
<li class="toctree-l2"><a class="reference internal" href="#how-can-i-cancel-the-download-of-a-given-response">如何取消对给定响应的下载？</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="topics/debug.html">调试spiders</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/contracts.html">蜘蛛合约</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/practices.html">常用做法</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/broad-crawls.html">宽爬行</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/developer-tools.html">使用浏览器的开发人员工具进行抓取</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/dynamic-content.html">选择动态加载的内容</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/leaks.html">调试内存泄漏</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/media-pipeline.html">下载和处理文件和图像</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/deploy.html">部署蜘蛛</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/autothrottle.html">AutoThrottle 扩展</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/benchmarking.html">标杆管理</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/jobs.html">作业：暂停和恢复爬行</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/coroutines.html">协同程序</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/asyncio.html">asyncio</a></li>
</ul>
<p class="caption"><span class="caption-text">扩展Scrapy</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="topics/architecture.html">体系结构概述</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/downloader-middleware.html">下载器中间件</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/spider-middleware.html">蜘蛛中间件</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/extensions.html">扩展</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/api.html">核心API</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/signals.html">信号</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/exporters.html">条目导出器</a></li>
</ul>
<p class="caption"><span class="caption-text">其余所有</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="news.html">发行说明</a></li>
<li class="toctree-l1"><a class="reference internal" href="contributing.html">为 Scrapy 贡献</a></li>
<li class="toctree-l1"><a class="reference internal" href="versioning.html">版本控制和API稳定性</a></li>
</ul>

            
          
        </div>
        
      </div>
    </nav>

    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">

      
      <nav class="wy-nav-top" aria-label="top navigation">
        
          <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
          <a href="index.html">Scrapy</a>
        
      </nav>


      <div class="wy-nav-content">
        
        <div class="rst-content">
        
          















<div role="navigation" aria-label="breadcrumbs navigation">

  <ul class="wy-breadcrumbs">
    
      <li><a href="index.html" class="icon icon-home"></a> &raquo;</li>
        
      <li>常见问题</li>
    
    
      <li class="wy-breadcrumbs-aside">
        
            
        
      </li>
    
  </ul>

  
  <hr/>
</div>
          <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
           <div itemprop="articleBody">
            
  <div class="section" id="frequently-asked-questions">
<span id="faq"></span><h1>常见问题<a class="headerlink" href="#frequently-asked-questions" title="永久链接至标题">¶</a></h1>
<div class="section" id="how-does-scrapy-compare-to-beautifulsoup-or-lxml">
<span id="faq-scrapy-bs-cmp"></span><h2>Scrapy与BeautifulSoup或LXML相比如何？<a class="headerlink" href="#how-does-scrapy-compare-to-beautifulsoup-or-lxml" title="永久链接至标题">¶</a></h2>
<p><a class="reference external" href="https://www.crummy.com/software/BeautifulSoup/">BeautifulSoup</a> 和 <a class="reference external" href="https://lxml.de/">lxml</a> 是用于分析HTML和XML的库。Scrapy是一个应用程序框架，用于编写爬行网站并从中提取数据的网络蜘蛛。</p>
<p>Scrapy提供了一种用于提取数据的内置机制（称为 <a class="reference internal" href="topics/selectors.html"><span class="std std-ref">选择器</span></a> ），但如果你觉得使用 BeautifulSoup（或 lxml）更顺手，也可以轻松改用它们。毕竟，它们只是解析库，可以从任何Python代码中导入和使用。</p>
<ins class="adsbygoogle"
     style="display:block; text-align:center;"
     data-ad-layout="in-article"
     data-ad-format="fluid"
     data-ad-client="ca-pub-1466963416408457"
     data-ad-slot="8850786025"></ins>
<script>
     (adsbygoogle = window.adsbygoogle || []).push({});
</script>
<p>换句话说，将 BeautifulSoup（或 lxml）与Scrapy进行比较，就像将 jinja2 与 Django 进行比较一样。</p>
</div>
<div class="section" id="can-i-use-scrapy-with-beautifulsoup">
<h2>我可以和BeautifulSoup一起使用Scrapy吗？<a class="headerlink" href="#can-i-use-scrapy-with-beautifulsoup" title="永久链接至标题">¶</a></h2>
<p>是的，可以。如 <a class="reference internal" href="#faq-scrapy-bs-cmp"><span class="std std-ref">上文</span></a> 所述，BeautifulSoup可用于在Scrapy回调中解析HTML响应。只需将响应的正文传入 <code class="docutils literal notranslate"><span class="pre">BeautifulSoup</span></code> 对象，再从中提取所需的任何数据即可。</p>
<p>下面是一个使用 BeautifulSoup API 的蜘蛛示例，以 <code class="docutils literal notranslate"><span class="pre">lxml</span></code> 作为HTML解析器：</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">bs4</span> <span class="kn">import</span> <span class="n">BeautifulSoup</span>
<span class="kn">import</span> <span class="nn">scrapy</span>


<span class="k">class</span> <span class="nc">ExampleSpider</span><span class="p">(</span><span class="n">scrapy</span><span class="o">.</span><span class="n">Spider</span><span class="p">):</span>
    <span class="n">name</span> <span class="o">=</span> <span class="s2">&quot;example&quot;</span>
    <span class="n">allowed_domains</span> <span class="o">=</span> <span class="p">[</span><span class="s2">&quot;example.com&quot;</span><span class="p">]</span>
    <span class="n">start_urls</span> <span class="o">=</span> <span class="p">(</span>
        <span class="s1">&#39;http://www.example.com/&#39;</span><span class="p">,</span>
    <span class="p">)</span>

    <span class="k">def</span> <span class="nf">parse</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">):</span>
        <span class="c1"># use lxml to get decent HTML parsing speed</span>
        <span class="n">soup</span> <span class="o">=</span> <span class="n">BeautifulSoup</span><span class="p">(</span><span class="n">response</span><span class="o">.</span><span class="n">text</span><span class="p">,</span> <span class="s1">&#39;lxml&#39;</span><span class="p">)</span>
        <span class="k">yield</span> <span class="p">{</span>
            <span class="s2">&quot;url&quot;</span><span class="p">:</span> <span class="n">response</span><span class="o">.</span><span class="n">url</span><span class="p">,</span>
            <span class="s2">&quot;title&quot;</span><span class="p">:</span> <span class="n">soup</span><span class="o">.</span><span class="n">h1</span><span class="o">.</span><span class="n">string</span>
        <span class="p">}</span>
</pre></div>
</div>
<div class="admonition note">
<p class="admonition-title">注解</p>
<p><a href="#id1"><span class="problematic" id="id2">``</span></a>BeautifulSoup``支持几种HTML / XML解析器。 请参阅“BeautifulSoup的官方文档”，了解哪些可用。</p>
</div>
</div>
<div class="section" id="did-scrapy-steal-x-from-django">
<h2>Scrapy是否从Django“窃取”X？<a class="headerlink" href="#did-scrapy-steal-x-from-django" title="永久链接至标题">¶</a></h2>
<p>可能吧，但我们不喜欢这个词。我们认为 Django 是一个伟大的开源项目，也是一个值得效仿的范例，所以我们把它作为 Scrapy 的灵感来源。</p>
<p>我们相信，如果某件事已经被做好了，就没有必要重新发明它。这个理念不仅是开源和自由软件的基石，而且不只适用于软件，也适用于文档、流程、政策等。因此，我们不是自己解决每个问题，而是选择从那些已经妥善解决了这些问题的项目中借鉴想法，并专注于我们真正需要解决的问题。</p>
<p>如果Scrapy能为其他项目提供灵感，我们会感到骄傲。随时从我们这里偷东西！</p>
</div>
<div class="section" id="does-scrapy-work-with-http-proxies">
<h2>Scrapy与HTTP代理一起工作吗？<a class="headerlink" href="#does-scrapy-work-with-http-proxies" title="永久链接至标题">¶</a></h2>
<p>是的。Scrapy通过HTTP代理下载器中间件提供对HTTP代理的支持（自Scrapy 0.8起）。请参阅 <code class="docutils literal notranslate"><span class="pre">scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware</span></code> 。</p>
</div>
<div class="section" id="how-can-i-scrape-an-item-with-attributes-in-different-pages">
<h2>如何在不同的页面中抓取具有属性的项目？<a class="headerlink" href="#how-can-i-scrape-an-item-with-attributes-in-different-pages" title="永久链接至标题">¶</a></h2>
<p>见 <a class="reference internal" href="topics/request-response.html#topics-request-response-ref-request-callback-arguments"><span class="std std-ref">向回调函数传递附加数据</span></a> .</p>
</div>
<div class="section" id="scrapy-crashes-with-importerror-no-module-named-win32api">
<h2>Scrapy崩溃：ImportError: 没有名为win32api的模块<a class="headerlink" href="#scrapy-crashes-with-importerror-no-module-named-win32api" title="永久链接至标题">¶</a></h2>
<p>您需要安装 <a class="reference external" href="https://sourceforge.net/projects/pywin32/">pywin32</a> ，这是由一个 Twisted bug 导致的。</p>
</div>
<div class="section" id="how-can-i-simulate-a-user-login-in-my-spider">
<h2>如何在蜘蛛中模拟用户登录？<a class="headerlink" href="#how-can-i-simulate-a-user-login-in-my-spider" title="永久链接至标题">¶</a></h2>
<p>见 <a class="reference internal" href="topics/request-response.html#topics-request-response-ref-request-userlogin"><span class="std std-ref">使用 FormRequest.from_response() 模拟用户登录</span></a> .</p>
</div>
<div class="section" id="does-scrapy-crawl-in-breadth-first-or-depth-first-order">
<span id="faq-bfo-dfo"></span><h2>Scrapy是以广度优先还是深度优先的顺序爬行？<a class="headerlink" href="#does-scrapy-crawl-in-breadth-first-or-depth-first-order" title="永久链接至标题">¶</a></h2>
<p>默认情况下，Scrapy使用 <a class="reference external" href="https://en.wikipedia.org/wiki/Stack_(abstract_data_type)">LIFO</a> 队列来存储待处理的请求，这基本上意味着它以 <a class="reference external" href="https://en.wikipedia.org/wiki/Depth-first_search">深度优先（DFO）</a> 的顺序爬行。这种顺序在大多数情况下更方便。</p>
<p>如果你确实想以 <a class="reference external" href="https://en.wikipedia.org/wiki/Breadth-first_search">广度优先（BFO）</a> 的顺序爬行，可以通过如下设置来实现：</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">DEPTH_PRIORITY</span> <span class="o">=</span> <span class="mi">1</span>
<span class="n">SCHEDULER_DISK_QUEUE</span> <span class="o">=</span> <span class="s1">&#39;scrapy.squeues.PickleFifoDiskQueue&#39;</span>
<span class="n">SCHEDULER_MEMORY_QUEUE</span> <span class="o">=</span> <span class="s1">&#39;scrapy.squeues.FifoMemoryQueue&#39;</span>
</pre></div>
</div>
<p>当待处理请求的数量低于 <a class="reference internal" href="topics/settings.html#std-setting-CONCURRENT_REQUESTS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">CONCURRENT_REQUESTS</span></code></a> 、 <a class="reference internal" href="topics/settings.html#std-setting-CONCURRENT_REQUESTS_PER_DOMAIN"><code class="xref std std-setting docutils literal notranslate"><span class="pre">CONCURRENT_REQUESTS_PER_DOMAIN</span></code></a> 或 <a class="reference internal" href="topics/settings.html#std-setting-CONCURRENT_REQUESTS_PER_IP"><code class="xref std std-setting docutils literal notranslate"><span class="pre">CONCURRENT_REQUESTS_PER_IP</span></code></a> 的配置值时，这些请求会被并发发送。因此，爬取最初的几个请求很少遵循所期望的顺序。将这些设置降低为 <code class="docutils literal notranslate"><span class="pre">1</span></code> 可以强制执行期望的顺序，但会显著降低整体爬取速度。</p>
</div>
<div class="section" id="my-scrapy-crawler-has-memory-leaks-what-can-i-do">
<h2>我的Scrapy爬虫有内存泄漏。我能做什么？<a class="headerlink" href="#my-scrapy-crawler-has-memory-leaks-what-can-i-do" title="永久链接至标题">¶</a></h2>
<p>见 <a class="reference internal" href="topics/leaks.html#topics-leaks"><span class="std std-ref">调试内存泄漏</span></a> .</p>
<p>此外，Python本身也存在一个内置的内存释放问题，在 <a class="reference internal" href="topics/leaks.html#topics-leaks-without-leaks"><span class="std std-ref">没有泄漏的泄漏</span></a> 中有描述。</p>
</div>
<div class="section" id="how-can-i-make-scrapy-consume-less-memory">
<h2>我怎样才能让 Scrapy 消耗更少的内存？<a class="headerlink" href="#how-can-i-make-scrapy-consume-less-memory" title="永久链接至标题">¶</a></h2>
<p>请参阅前面的问题。</p>
</div>
<div class="section" id="can-i-use-basic-http-authentication-in-my-spiders">
<h2>我可以在spider中使用基本的HTTP身份验证吗？<a class="headerlink" href="#can-i-use-basic-http-authentication-in-my-spiders" title="永久链接至标题">¶</a></h2>
<p>是的，请参阅 <code class="docutils literal notranslate"><span class="pre">scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware</span></code> 。</p>
</div>
<div class="section" id="why-does-scrapy-download-pages-in-english-instead-of-my-native-language">
<h2>为什么Scrapy用英语而不是我的母语下载页面？<a class="headerlink" href="#why-does-scrapy-download-pages-in-english-instead-of-my-native-language" title="永久链接至标题">¶</a></h2>
<p>尝试通过覆盖 <a class="reference internal" href="topics/settings.html#std-setting-DEFAULT_REQUEST_HEADERS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DEFAULT_REQUEST_HEADERS</span></code></a> 设置来更改默认的 <a class="reference external" href="https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.4">Accept-Language</a> 请求头。</p>
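<p>例如，在项目的 <code class="docutils literal notranslate"><span class="pre">settings.py</span></code> 中可以这样覆盖（语言代码仅为示例）：</p>

```python
# settings.py：覆盖默认请求头中的 Accept-Language
DEFAULT_REQUEST_HEADERS = {
    "Accept-Language": "zh-CN,zh;q=0.9",
}
```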
</div>
<div class="section" id="where-can-i-find-some-example-scrapy-projects">
<h2>我在哪里可以找到一些Scrapy项目的示例？<a class="headerlink" href="#where-can-i-find-some-example-scrapy-projects" title="永久链接至标题">¶</a></h2>
<p>见 <a class="reference internal" href="intro/examples.html#intro-examples"><span class="std std-ref">实例</span></a> .</p>
</div>
<div class="section" id="can-i-run-a-spider-without-creating-a-project">
<h2>我可以在不创建项目的情况下运行蜘蛛吗？<a class="headerlink" href="#can-i-run-a-spider-without-creating-a-project" title="永久链接至标题">¶</a></h2>
<p>对。你可以使用 <a class="reference internal" href="topics/commands.html#std-command-runspider"><code class="xref std std-command docutils literal notranslate"><span class="pre">runspider</span></code></a> 命令。例如，如果有一个写在 <code class="docutils literal notranslate"><span class="pre">my_spider.py</span></code> 文件中的蜘蛛，您可以用以下方式运行它：</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">scrapy</span> <span class="n">runspider</span> <span class="n">my_spider</span><span class="o">.</span><span class="n">py</span>
</pre></div>
</div>
<p>有关详细信息，请参阅 <a class="reference internal" href="topics/commands.html#std-command-runspider"><code class="xref std std-command docutils literal notranslate"><span class="pre">runspider</span></code></a> 命令。</p>
</div>
<div class="section" id="i-get-filtered-offsite-request-messages-how-can-i-fix-them">
<h2>我收到“Filtered offsite request”消息。 我该如何解决这些问题？<a class="headerlink" href="#i-get-filtered-offsite-request-messages-how-can-i-fix-them" title="永久链接至标题">¶</a></h2>
<p>这些信息（记录 <code class="docutils literal notranslate"><span class="pre">DEBUG</span></code> 级别）不一定意味着有问题，因此您可能不需要修复它们。</p>
<p>这些消息由非现场蜘蛛中间件抛出，这是一个蜘蛛中间件（默认情况下启用），其目的是过滤掉对蜘蛛所覆盖域之外的域的请求。</p>
<p>有关详细信息，请参阅： <a class="reference internal" href="topics/spider-middleware.html#scrapy.spidermiddlewares.offsite.OffsiteMiddleware" title="scrapy.spidermiddlewares.offsite.OffsiteMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">OffsiteMiddleware</span></code></a> .</p>
</div>
<div class="section" id="what-is-the-recommended-way-to-deploy-a-scrapy-crawler-in-production">
<h2>在生产中，建议采用什么方式部署 Scrapy ？<a class="headerlink" href="#what-is-the-recommended-way-to-deploy-a-scrapy-crawler-in-production" title="永久链接至标题">¶</a></h2>
<p>见 <a class="reference internal" href="topics/deploy.html#topics-deploy"><span class="std std-ref">部署蜘蛛</span></a> .</p>
</div>
<div class="section" id="can-i-use-json-for-large-exports">
<h2>我可以使用JSON进行大型导出吗？<a class="headerlink" href="#can-i-use-json-for-large-exports" title="永久链接至标题">¶</a></h2>
<p>这取决于你的导出有多大。请参阅 <code class="docutils literal notranslate"><span class="pre">JsonItemExporter</span></code> 文档中关于大数据量JSON导出的警告。</p>
</div>
<div class="section" id="can-i-return-twisted-deferreds-from-signal-handlers">
<h2>我可以从信号处理程序返回 Twisted Deferred 吗？<a class="headerlink" href="#can-i-return-twisted-deferreds-from-signal-handlers" title="永久链接至标题">¶</a></h2>
<p>一些信号支持从其处理程序返回 Deferred，而另一些则不支持。请参见 <a class="reference internal" href="topics/signals.html#topics-signals-ref"><span class="std std-ref">内置信号参考</span></a> 以了解哪些信号支持。</p>
</div>
<div class="section" id="what-does-the-response-status-code-999-means">
<h2>响应状态代码999是什么意思？<a class="headerlink" href="#what-does-the-response-status-code-999-means" title="永久链接至标题">¶</a></h2>
<p>999是雅虎网站用来限制请求的自定义响应状态码。请尝试在你的蜘蛛中使用 <code class="docutils literal notranslate"><span class="pre">2</span></code> （或更高）的下载延迟来降低爬取速度：</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">class</span> <span class="nc">MySpider</span><span class="p">(</span><span class="n">CrawlSpider</span><span class="p">):</span>

    <span class="n">name</span> <span class="o">=</span> <span class="s1">&#39;myspider&#39;</span>

    <span class="n">download_delay</span> <span class="o">=</span> <span class="mi">2</span>

    <span class="c1"># [ ... rest of the spider code ... ]</span>
</pre></div>
</div>
<p>或者通过在项目中设置全局下载延迟 <a class="reference internal" href="topics/settings.html#std-setting-DOWNLOAD_DELAY"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOAD_DELAY</span></code></a> 设置。</p>
</div>
<div class="section" id="can-i-call-pdb-set-trace-from-my-spiders-to-debug-them">
<h2>我可以从我的蜘蛛调用 <code class="docutils literal notranslate"><span class="pre">pdb.set_trace()</span></code> 来调试它们吗？<a class="headerlink" href="#can-i-call-pdb-set-trace-from-my-spiders-to-debug-them" title="永久链接至标题">¶</a></h2>
<p>是的，但是您也可以使用 Scrapy shell，它允许您快速分析（甚至修改）您的蜘蛛正在处理的响应，这通常比普通的 <code class="docutils literal notranslate"><span class="pre">pdb.set_trace()</span></code> 更有用。</p>
<p>有关详细信息，请参阅 <a class="reference internal" href="topics/shell.html#topics-shell-inspect-response"><span class="std std-ref">从spiders调用shell来检查响应</span></a> .</p>
</div>
<div class="section" id="simplest-way-to-dump-all-my-scraped-items-into-a-json-csv-xml-file">
<h2>将我的所有抓取项转储到JSON/CSV/XML文件的最简单方法是什么？<a class="headerlink" href="#simplest-way-to-dump-all-my-scraped-items-into-a-json-csv-xml-file" title="永久链接至标题">¶</a></h2>
<p>要转储到JSON文件，请执行以下操作：</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">scrapy</span> <span class="n">crawl</span> <span class="n">myspider</span> <span class="o">-</span><span class="n">O</span> <span class="n">items</span><span class="o">.</span><span class="n">json</span>
</pre></div>
</div>
<p>要转储到csv文件，请执行以下操作：</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">scrapy</span> <span class="n">crawl</span> <span class="n">myspider</span> <span class="o">-</span><span class="n">O</span> <span class="n">items</span><span class="o">.</span><span class="n">csv</span>
</pre></div>
</div>
<p>要转储到XML文件，请执行以下操作：</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">scrapy</span> <span class="n">crawl</span> <span class="n">myspider</span> <span class="o">-</span><span class="n">O</span> <span class="n">items</span><span class="o">.</span><span class="n">xml</span>
</pre></div>
</div>
<p>有关详细信息，请参阅 <a class="reference internal" href="topics/feed-exports.html#topics-feed-exports"><span class="std std-ref">Feed 导出</span></a></p>
</div>
<div class="section" id="what-s-this-huge-cryptic-viewstate-parameter-used-in-some-forms">
<h2>在某些表单中使用的这个巨大而神秘的 <code class="docutils literal notranslate"><span class="pre">__VIEWSTATE</span></code> 参数是什么？<a class="headerlink" href="#what-s-this-huge-cryptic-viewstate-parameter-used-in-some-forms" title="永久链接至标题">¶</a></h2>
<p>这个 <code class="docutils literal notranslate"><span class="pre">__VIEWSTATE</span></code> 参数用于使用ASP.NET/VB.NET生成的网站。有关其工作方式的详细信息，请参见 <a class="reference external" href="https://metacpan.org/pod/release/ECARROLL/HTML-TreeBuilderX-ASP_NET-0.09/lib/HTML/TreeBuilderX/ASP_NET.pm">this page</a> . 还有，这里有一个 <a class="reference external" href="https://github.com/AmbientLighter/rpn-fas/blob/master/fas/spiders/rnp.py">example spider</a> 会爬取其中一个站点。</p>
</div>
<div class="section" id="what-s-the-best-way-to-parse-big-xml-csv-data-feeds">
<h2>解析大型XML/CSV数据源的最佳方法是什么？<a class="headerlink" href="#what-s-the-best-way-to-parse-big-xml-csv-data-feeds" title="永久链接至标题">¶</a></h2>
<p>使用xpath选择器解析大型提要可能会有问题，因为它们需要在内存中构建整个提要的DOM，这可能会非常慢，并且会消耗大量内存。</p>
<p>为了避免一次性在内存中解析整个提要，您可以使用 <code class="docutils literal notranslate"><span class="pre">scrapy.utils.iterators</span></code> 模块中的 <code class="docutils literal notranslate"><span class="pre">xmliter</span></code> 和 <code class="docutils literal notranslate"><span class="pre">csviter</span></code> 函数。事实上，这正是feed蜘蛛（参见 <a class="reference internal" href="topics/spiders.html#topics-spiders"><span class="std std-ref">蜘蛛</span></a> ）在底层所使用的方式。</p>
</div>
<div class="section" id="does-scrapy-manage-cookies-automatically">
<h2>Scrapy是否自动管理cookies？<a class="headerlink" href="#does-scrapy-manage-cookies-automatically" title="永久链接至标题">¶</a></h2>
<p>是的，Scrapy接收并跟踪服务器发送的cookie，并像任何普通的Web浏览器一样，在随后的请求中发送它们。</p>
<p>有关详细信息，请参阅 <a class="reference internal" href="topics/request-response.html#topics-request-response"><span class="std std-ref">请求和响应</span></a> 和 <a class="reference internal" href="topics/downloader-middleware.html#cookies-mw"><span class="std std-ref">CookiesMiddleware</span></a> .</p>
</div>
<div class="section" id="how-can-i-see-the-cookies-being-sent-and-received-from-scrapy">
<h2>How can I see the cookies being sent and received from Scrapy?<a class="headerlink" href="#how-can-i-see-the-cookies-being-sent-and-received-from-scrapy" title="Permalink to this headline">¶</a></h2>
<p>Enable the <a class="reference internal" href="topics/downloader-middleware.html#std-setting-COOKIES_DEBUG"><code class="xref std std-setting docutils literal notranslate"><span class="pre">COOKIES_DEBUG</span></code></a> setting.</p>
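<p>For example, in your project's <code class="docutils literal notranslate"><span class="pre">settings.py</span></code>:</p>

```python
# settings.py
# Log every Cookie header sent and every Set-Cookie header received,
# alongside the usual request/response log lines.
COOKIES_DEBUG = True
```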
</div>
<div class="section" id="how-can-i-instruct-a-spider-to-stop-itself">
<h2>How can I instruct a spider to stop itself?<a class="headerlink" href="#how-can-i-instruct-a-spider-to-stop-itself" title="Permalink to this headline">¶</a></h2>
<p>Raise the <a class="reference internal" href="topics/exceptions.html#scrapy.exceptions.CloseSpider" title="scrapy.exceptions.CloseSpider"><code class="xref py py-exc docutils literal notranslate"><span class="pre">CloseSpider</span></code></a> exception from a callback. For more info see: <a class="reference internal" href="topics/exceptions.html#scrapy.exceptions.CloseSpider" title="scrapy.exceptions.CloseSpider"><code class="xref py py-exc docutils literal notranslate"><span class="pre">CloseSpider</span></code></a>.</p>
</div>
<div class="section" id="how-can-i-prevent-my-scrapy-bot-from-getting-banned">
<h2>How can I prevent my Scrapy bot from getting banned?<a class="headerlink" href="#how-can-i-prevent-my-scrapy-bot-from-getting-banned" title="Permalink to this headline">¶</a></h2>
<p>See <a class="reference internal" href="topics/practices.html#bans"><span class="std std-ref">Avoiding getting banned</span></a>.</p>
</div>
<div class="section" id="should-i-use-spider-arguments-or-settings-to-configure-my-spider">
<h2>Should I use spider arguments or settings to configure my spider?<a class="headerlink" href="#should-i-use-spider-arguments-or-settings-to-configure-my-spider" title="Permalink to this headline">¶</a></h2>
<p>Both <a class="reference internal" href="topics/spiders.html#spiderargs"><span class="std std-ref">spider arguments</span></a> and <a class="reference internal" href="topics/settings.html#topics-settings"><span class="std std-ref">settings</span></a> can be used to configure your spider. There is no strict rule that mandates using one or the other, but settings are better suited for parameters that, once set, don't change much, while spider arguments are meant to change more often, even on each spider run, and are sometimes required for the spider to run at all (for example, to set the start URL of a spider).</p>
<p>To illustrate with an example, assume you have a spider that needs to log into a site to scrape data, and you only want to scrape data from a certain section of the site (which varies each time). In that case, the credentials to log in would be settings, while the URL of the section to scrape would be a spider argument.</p>
</div>
<div class="section" id="i-m-scraping-a-xml-document-and-my-xpath-selector-doesn-t-return-any-items">
<h2>I'm scraping a XML document and my XPath selector doesn't return any items<a class="headerlink" href="#i-m-scraping-a-xml-document-and-my-xpath-selector-doesn-t-return-any-items" title="Permalink to this headline">¶</a></h2>
<p>You may need to remove namespaces. See <a class="reference internal" href="topics/selectors.html#removing-namespaces"><span class="std std-ref">Removing namespaces</span></a>.</p>
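<p>The symptom can be reproduced with the standard library alone: a query that ignores the namespace matches nothing, which is exactly why the namespaces need to be removed (or registered) first. A stdlib sketch, not Scrapy's selector API:</p>

```python
import xml.etree.ElementTree as ET

DOC = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>example</title>
</feed>"""

root = ET.fromstring(DOC)
# A lookup without the namespace finds nothing, because every element
# actually lives in the Atom namespace.
assert root.find("title") is None
# Registering the namespace (or stripping it) makes the query work.
ns = {"atom": "http://www.w3.org/2005/Atom"}
print(root.find("atom:title", ns).text)  # example
```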
</div>
<div class="section" id="how-to-split-an-item-into-multiple-items-in-an-item-pipeline">
<span id="faq-split-item"></span><h2>How to split an item into multiple items in an item pipeline?<a class="headerlink" href="#how-to-split-an-item-into-multiple-items-in-an-item-pipeline" title="Permalink to this headline">¶</a></h2>
<p><a class="reference internal" href="topics/item-pipeline.html#topics-item-pipeline"><span class="std std-ref">Item pipelines</span></a> cannot yield multiple items per input item. <a class="reference internal" href="topics/spider-middleware.html#custom-spider-middleware"><span class="std std-ref">Create a spider middleware</span></a> instead, and use its <a class="reference internal" href="topics/spider-middleware.html#scrapy.spidermiddlewares.SpiderMiddleware.process_spider_output" title="scrapy.spidermiddlewares.SpiderMiddleware.process_spider_output"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_spider_output()</span></code></a> method for this purpose. For example:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">copy</span> <span class="kn">import</span> <span class="n">deepcopy</span>

<span class="kn">from</span> <span class="nn">itemadapter</span> <span class="kn">import</span> <span class="n">is_item</span><span class="p">,</span> <span class="n">ItemAdapter</span>

<span class="k">class</span> <span class="nc">MultiplyItemsMiddleware</span><span class="p">:</span>

    <span class="k">def</span> <span class="nf">process_spider_output</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">,</span> <span class="n">result</span><span class="p">,</span> <span class="n">spider</span><span class="p">):</span>
        <span class="k">for</span> <span class="n">item</span> <span class="ow">in</span> <span class="n">result</span><span class="p">:</span>
            <span class="k">if</span> <span class="n">is_item</span><span class="p">(</span><span class="n">item</span><span class="p">):</span>
                <span class="n">adapter</span> <span class="o">=</span> <span class="n">ItemAdapter</span><span class="p">(</span><span class="n">item</span><span class="p">)</span>
                <span class="k">for</span> <span class="n">_</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">adapter</span><span class="p">[</span><span class="s1">&#39;multiply_by&#39;</span><span class="p">]):</span>
                    <span class="k">yield</span> <span class="n">deepcopy</span><span class="p">(</span><span class="n">item</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="section" id="does-scrapy-support-ipv6-addresses">
<h2>Does Scrapy support IPv6 addresses?<a class="headerlink" href="#does-scrapy-support-ipv6-addresses" title="Permalink to this headline">¶</a></h2>
<p>Yes, by setting <a class="reference internal" href="topics/settings.html#std-setting-DNS_RESOLVER"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DNS_RESOLVER</span></code></a> to <code class="docutils literal notranslate"><span class="pre">scrapy.resolver.CachingHostnameResolver</span></code>. Note that by doing so you lose the ability to set a specific timeout for DNS requests (the value of the <a class="reference internal" href="topics/settings.html#std-setting-DNS_TIMEOUT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DNS_TIMEOUT</span></code></a> setting is ignored).</p>
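<p>That is, in your project's <code class="docutils literal notranslate"><span class="pre">settings.py</span></code>:</p>

```python
# settings.py
DNS_RESOLVER = "scrapy.resolver.CachingHostnameResolver"
```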
</div>
<div class="section" id="how-to-deal-with-class-valueerror-filedescriptor-out-of-range-in-select-exceptions">
<span id="faq-specific-reactor"></span><h2>How to deal with <code class="docutils literal notranslate"><span class="pre">&lt;class</span> <span class="pre">'ValueError'&gt;:</span> <span class="pre">filedescriptor</span> <span class="pre">out</span> <span class="pre">of</span> <span class="pre">range</span> <span class="pre">in</span> <span class="pre">select()</span></code> exceptions?<a class="headerlink" href="#how-to-deal-with-class-valueerror-filedescriptor-out-of-range-in-select-exceptions" title="Permalink to this headline">¶</a></h2>
<p>This issue <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2905">has been reported</a> to appear when running broad crawls on macOS, where the default Twisted reactor is <a class="reference external" href="https://twistedmatrix.com/documents/current/api/twisted.internet.selectreactor.SelectReactor.html" title="(in Twisted v2.0)"><code class="xref py py-class docutils literal notranslate"><span class="pre">twisted.internet.selectreactor.SelectReactor</span></code></a>. Switching to a different reactor is possible by using the <a class="reference internal" href="topics/settings.html#std-setting-TWISTED_REACTOR"><code class="xref std std-setting docutils literal notranslate"><span class="pre">TWISTED_REACTOR</span></code></a> setting.</p>
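<p>For instance, a reactor that does not rely on <code class="docutils literal notranslate"><span class="pre">select()</span></code> can be configured in <code class="docutils literal notranslate"><span class="pre">settings.py</span></code>. The poll-based reactor is chosen here only as an illustration; the FAQ does not mandate a specific replacement:</p>

```python
# settings.py
# Any reactor other than SelectReactor avoids select()'s hard
# file-descriptor limit; the poll-based one is used as an example.
TWISTED_REACTOR = "twisted.internet.pollreactor.PollReactor"
```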
</div>
<div class="section" id="how-can-i-cancel-the-download-of-a-given-response">
<span id="faq-stop-response-download"></span><h2>How can I cancel the download of a given response?<a class="headerlink" href="#how-can-i-cancel-the-download-of-a-given-response" title="Permalink to this headline">¶</a></h2>
<p>In some situations, it could be useful to stop the download of a certain response. For instance, if you only need the first part of a big response and would like to save resources by avoiding the download of the whole body, you can attach a handler to the <a class="reference internal" href="topics/signals.html#scrapy.signals.bytes_received" title="scrapy.signals.bytes_received"><code class="xref py py-class docutils literal notranslate"><span class="pre">bytes_received</span></code></a> signal and raise a <a class="reference internal" href="topics/exceptions.html#scrapy.exceptions.StopDownload" title="scrapy.exceptions.StopDownload"><code class="xref py py-exc docutils literal notranslate"><span class="pre">StopDownload</span></code></a> exception. Please refer to the <a class="reference internal" href="topics/request-response.html#topics-stop-response-download"><span class="std std-ref">Stopping the download of a Response</span></a> topic for additional information and examples.</p>
</div>
</div>


           </div>
           
          </div>
          <footer>
  
    <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
      
        <a href="topics/debug.html" class="btn btn-neutral float-right" title="Debugging spiders" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
      
      
        <a href="topics/webservice.html" class="btn btn-neutral float-left" title="Web service" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
      
    </div>
  

  <hr/>

  <div role="contentinfo">
    <p>
        
        &copy; Copyright 2008–2020, Scrapy developers
      <span class="lastupdated">
        Last updated on Oct 18, 2020.
      </span>

    </p>
  </div>
    
    
    
    Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a
    
    <a href="https://github.com/rtfd/sphinx_rtd_theme">theme</a>
    
    provided by <a href="https://readthedocs.org">Read the Docs</a>. 

</footer>

        </div>
      </div>

    </section>

  </div>
  

  <script type="text/javascript">
      jQuery(function () {
          SphinxRtdTheme.Navigation.enable(true);
      });
  </script>

  
  
    
  
 
<script type="text/javascript">
!function(){var analytics=window.analytics=window.analytics||[];if(!analytics.initialize)if(analytics.invoked)window.console&&console.error&&console.error("Segment snippet included twice.");else{analytics.invoked=!0;analytics.methods=["trackSubmit","trackClick","trackLink","trackForm","pageview","identify","reset","group","track","ready","alias","page","once","off","on"];analytics.factory=function(t){return function(){var e=Array.prototype.slice.call(arguments);e.unshift(t);analytics.push(e);return analytics}};for(var t=0;t<analytics.methods.length;t++){var e=analytics.methods[t];analytics[e]=analytics.factory(e)}analytics.load=function(t){var e=document.createElement("script");e.type="text/javascript";e.async=!0;e.src=("https:"===document.location.protocol?"https://":"http://")+"cdn.segment.com/analytics.js/v1/"+t+"/analytics.min.js";var n=document.getElementsByTagName("script")[0];n.parentNode.insertBefore(e,n)};analytics.SNIPPET_VERSION="3.1.0";
analytics.load("8UDQfnf3cyFSTsM4YANnW5sXmgZVILbA");
analytics.page();
}}();

analytics.ready(function () {
    ga('require', 'linker');
    ga('linker:autoLink', ['scrapinghub.com', 'crawlera.com']);
});
</script>


</body>
</html>