

<!DOCTYPE html>
<html class="writer-html5" lang="en" >
<head>
  <meta charset="utf-8">
  
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  
  <title>Release notes &mdash; Scrapy 2.3.0 documentation</title>
  

  
  <link rel="stylesheet" href="_static/css/theme.css" type="text/css" />
  <link rel="stylesheet" href="_static/pygments.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/tooltipster.custom.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/tooltipster.bundle.min.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/tooltipster-sideTip-shadow.min.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/tooltipster-sideTip-punk.min.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/tooltipster-sideTip-noir.min.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/tooltipster-sideTip-light.min.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/tooltipster-sideTip-borderless.min.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/micromodal.css" type="text/css" />
  <link rel="stylesheet" href="_static/css/sphinx_rtd_theme.css" type="text/css" />

  
  
  
  

  
  <!--[if lt IE 9]>
    <script src="_static/js/html5shiv.min.js"></script>
  <![endif]-->
  
    
      <script type="text/javascript" id="documentation_options" data-url_root="./" src="_static/documentation_options.js"></script>
        <script src="_static/jquery.js"></script>
        <script src="_static/underscore.js"></script>
        <script src="_static/doctools.js"></script>
        <script src="_static/language_data.js"></script>
        <script src="_static/js/hoverxref.js"></script>
        <script src="_static/js/tooltipster.bundle.min.js"></script>
        <script src="_static/js/micromodal.min.js"></script>
    
    <script type="text/javascript" src="_static/js/theme.js"></script>

    
    <link rel="index" title="Index" href="genindex.html" />
    <link rel="search" title="Search" href="search.html" />
    <link rel="next" title="Contributing to Scrapy" href="contributing.html" />
    <link rel="prev" title="Item Exporters" href="topics/exporters.html" /> 
</head>

<body class="wy-body-for-nav">

   
  <div class="wy-grid-for-nav">
    
    <nav data-toggle="wy-nav-shift" class="wy-nav-side">
      <div class="wy-side-scroll">
        <div class="wy-side-nav-search" >
          

          
            <a href="index.html" class="icon icon-home" alt="Documentation Home"> Scrapy
          

          
          </a>

          
            
            
              <div class="version">
                2.3
              </div>
            
          

          
<div role="search">
  <form id="rtd-search-form" class="wy-form" action="search.html" method="get">
    <input type="text" name="q" placeholder="Search docs" />
    <input type="hidden" name="check_keywords" value="yes" />
    <input type="hidden" name="area" value="default" />
  </form>
</div>

          
        </div>

        
        <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
          
            
            
              
            
            
              <p class="caption"><span class="caption-text">First steps</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="intro/overview.html">Scrapy at a glance</a></li>
<li class="toctree-l1"><a class="reference internal" href="intro/install.html">Installation guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="intro/tutorial.html">Scrapy Tutorial</a></li>
<li class="toctree-l1"><a class="reference internal" href="intro/examples.html">Examples</a></li>
</ul>
<p class="caption"><span class="caption-text">Basic concepts</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="topics/commands.html">Command line tool</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/spiders.html">Spiders</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/selectors.html">Selectors</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/items.html">Items</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/loaders.html">Item Loaders</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/shell.html">Scrapy shell</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/item-pipeline.html">Item Pipeline</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/feed-exports.html">Feed exports</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/request-response.html">Requests and Responses</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/link-extractors.html">Link Extractors</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/settings.html">Settings</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/exceptions.html">Exceptions</a></li>
</ul>
<p class="caption"><span class="caption-text">Built-in services</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="topics/logging.html">Logging</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/stats.html">Stats Collection</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/email.html">Sending e-mail</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/telnetconsole.html">Telnet Console</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/webservice.html">Web Service</a></li>
</ul>
<p class="caption"><span class="caption-text">Solving specific problems</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="faq.html">Frequently Asked Questions</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/debug.html">Debugging Spiders</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/contracts.html">Spiders Contracts</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/practices.html">Common Practices</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/broad-crawls.html">Broad Crawls</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/developer-tools.html">Using your browser's Developer Tools for scraping</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/dynamic-content.html">Selecting dynamically-loaded content</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/leaks.html">Debugging memory leaks</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/media-pipeline.html">Downloading and processing files and images</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/deploy.html">Deploying Spiders</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/autothrottle.html">AutoThrottle extension</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/benchmarking.html">Benchmarking</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/jobs.html">Jobs: pausing and resuming crawls</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/coroutines.html">Coroutines</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/asyncio.html">asyncio</a></li>
</ul>
<p class="caption"><span class="caption-text">Extending Scrapy</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="topics/architecture.html">Architecture overview</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/downloader-middleware.html">Downloader Middleware</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/spider-middleware.html">Spider Middleware</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/extensions.html">Extensions</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/api.html">Core API</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/signals.html">Signals</a></li>
<li class="toctree-l1"><a class="reference internal" href="topics/exporters.html">Item Exporters</a></li>
</ul>
<p class="caption"><span class="caption-text">All the rest</span></p>
<ul class="current">
<li class="toctree-l1 current"><a class="current reference internal" href="#">Release notes</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-2-3-0-2020-08-04">Scrapy 2.3.0 (2020-08-04)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#deprecation-removals">Deprecation removals</a></li>
<li class="toctree-l3"><a class="reference internal" href="#deprecations">Deprecations</a></li>
<li class="toctree-l3"><a class="reference internal" href="#new-features">New features</a></li>
<li class="toctree-l3"><a class="reference internal" href="#bug-fixes">Bug fixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#documentation">Documentation</a></li>
<li class="toctree-l3"><a class="reference internal" href="#quality-assurance">Quality assurance</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-2-2-1-2020-07-17">Scrapy 2.2.1 (2020-07-17)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-2-2-0-2020-06-24">Scrapy 2.2.0 (2020-06-24)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#backward-incompatible-changes">Backward-incompatible changes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id1">Deprecations</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id2">New features</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id3">Bug fixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id4">Documentation</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id5">Quality assurance</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-2-1-0-2020-04-24">Scrapy 2.1.0 (2020-04-24)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id6">Backward-incompatible changes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id7">Deprecation removals</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id8">Deprecations</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id9">New features</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id10">Bug fixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id11">Documentation</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id12">Quality assurance</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-2-0-1-2020-03-18">Scrapy 2.0.1 (2020-03-18)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-2-0-0-2020-03-03">Scrapy 2.0.0 (2020-03-03)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id13">Backward-incompatible changes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id14">Deprecation removals</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id15">Deprecations</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id16">New features</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id17">Bug fixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id18">Documentation</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id19">Quality assurance</a></li>
<li class="toctree-l3"><a class="reference internal" href="#changes-to-scheduler-queue-classes">Changes to scheduler queue classes</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-8-0-2019-10-28">Scrapy 1.8.0 (2019-10-28)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id20">Backward-incompatible changes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id21">New features</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id22">Bug fixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id23">Documentation</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id24">Deprecation removals</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id26">Deprecations</a></li>
<li class="toctree-l3"><a class="reference internal" href="#other-changes">Other changes</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-7-4-2019-10-21">Scrapy 1.7.4 (2019-10-21)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-7-3-2019-08-01">Scrapy 1.7.3 (2019-08-01)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-7-2-2019-07-23">Scrapy 1.7.2 (2019-07-23)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-7-1-2019-07-18">Scrapy 1.7.1 (2019-07-18)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-7-0-2019-07-18">Scrapy 1.7.0 (2019-07-18)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id27">Backward-incompatible changes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id28">New features</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id29">Bug fixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id30">Documentation</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id31">Deprecation removals</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id33">Deprecations</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id34">Other changes</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-6-0-2019-01-30">Scrapy 1.6.0 (2019-01-30)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#selector-api-changes">Selector API changes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#telnet-console">Telnet console</a></li>
<li class="toctree-l3"><a class="reference internal" href="#new-extensibility-features">New extensibility features</a></li>
<li class="toctree-l3"><a class="reference internal" href="#new-filepipeline-and-mediapipeline-features">New FilePipeline and MediaPipeline features</a></li>
<li class="toctree-l3"><a class="reference internal" href="#scrapy-contracts-improvements"><code class="docutils literal notranslate"><span class="pre">scrapy.contracts</span></code> improvements</a></li>
<li class="toctree-l3"><a class="reference internal" href="#usability-improvements">Usability improvements</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id35">Bug fixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#documentation-improvements">Documentation improvements</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id36">Deprecation removals</a></li>
<li class="toctree-l3"><a class="reference internal" href="#other-improvements-cleanups">Other improvements, cleanups</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-5-2-2019-01-22">Scrapy 1.5.2 (2019-01-22)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-5-1-2018-07-12">Scrapy 1.5.1 (2018-07-12)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-5-0-2017-12-29">Scrapy 1.5.0 (2017-12-29)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id37">Backward-incompatible changes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id38">New features</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id39">Bug fixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#docs">Documentation</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-4-0-2017-05-18">Scrapy 1.4.0 (2017-05-18)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#deprecations-and-backward-incompatible-changes">Deprecations and backward-incompatible changes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id40">New features</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id41">Bug fixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#cleanups-refactoring">Cleanups &amp; refactoring</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id42">Documentation</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-3-3-2017-03-10">Scrapy 1.3.3 (2017-03-10)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id43">Bug fixes</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-3-2-2017-02-13">Scrapy 1.3.2 (2017-02-13)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id44">Bug fixes</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-3-1-2017-02-08">Scrapy 1.3.1 (2017-02-08)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id45">New features</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id46">Bug fixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id47">Documentation</a></li>
<li class="toctree-l3"><a class="reference internal" href="#cleanups">Cleanups</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-3-0-2016-12-21">Scrapy 1.3.0 (2016-12-21)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id48">New features</a></li>
<li class="toctree-l3"><a class="reference internal" href="#dependencies-cleanups">Dependencies &amp; cleanups</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-2-3-2017-03-03">Scrapy 1.2.3 (2017-03-03)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-2-2-2016-12-06">Scrapy 1.2.2 (2016-12-06)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id49">Bug fixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id50">Documentation</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id51">Other changes</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-2-1-2016-10-21">Scrapy 1.2.1 (2016-10-21)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id52">Bug fixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id53">Documentation</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id54">Other changes</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-2-0-2016-10-03">Scrapy 1.2.0 (2016-10-03)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id55">New features</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id56">Bug fixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#refactoring">Refactoring</a></li>
<li class="toctree-l3"><a class="reference internal" href="#tests-requirements">Tests &amp; requirements</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id57">Documentation</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-1-4-2017-03-03">Scrapy 1.1.4 (2017-03-03)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-1-3-2016-09-22">Scrapy 1.1.3 (2016-09-22)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id58">Bug fixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id59">Documentation</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-1-2-2016-08-18">Scrapy 1.1.2 (2016-08-18)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id60">Bug fixes</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-1-1-2016-07-13">Scrapy 1.1.1 (2016-07-13)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id61">Bug fixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id62">New features</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id63">Documentation</a></li>
<li class="toctree-l3"><a class="reference internal" href="#tests">Tests</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-1-0-2016-05-11">Scrapy 1.1.0 (2016-05-11)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#beta-python-3-support">Beta Python 3 support</a></li>
<li class="toctree-l3"><a class="reference internal" href="#additional-new-features-and-enhancements">Additional new features and enhancements</a></li>
<li class="toctree-l3"><a class="reference internal" href="#deprecations-and-removals">Deprecations and removals</a></li>
<li class="toctree-l3"><a class="reference internal" href="#relocations">Relocations</a></li>
<li class="toctree-l3"><a class="reference internal" href="#bugfixes">Bugfixes</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-0-7-2017-03-03">Scrapy 1.0.7 (2017-03-03)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-0-6-2016-05-04">Scrapy 1.0.6 (2016-05-04)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-0-5-2016-02-04">Scrapy 1.0.5 (2016-02-04)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-0-4-2015-12-30">Scrapy 1.0.4 (2015-12-30)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-0-3-2015-08-11">Scrapy 1.0.3 (2015-08-11)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-0-2-2015-08-06">Scrapy 1.0.2 (2015-08-06)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-0-1-2015-07-01">Scrapy 1.0.1 (2015-07-01)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-1-0-0-2015-06-19">Scrapy 1.0.0 (2015-06-19)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#support-for-returning-dictionaries-in-spiders">Support for returning dictionaries in spiders</a></li>
<li class="toctree-l3"><a class="reference internal" href="#per-spider-settings-gsoc-2014">Per-spider settings (GSoC 2014)</a></li>
<li class="toctree-l3"><a class="reference internal" href="#python-logging">Python Logging</a></li>
<li class="toctree-l3"><a class="reference internal" href="#crawler-api-refactoring-gsoc-2014">Crawler API refactoring (GSoC 2014)</a></li>
<li class="toctree-l3"><a class="reference internal" href="#module-relocations">Module relocations</a><ul>
<li class="toctree-l4"><a class="reference internal" href="#full-list-of-relocations">Full list of relocations</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="#changelog">Changelog</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-24-6-2015-04-20">Scrapy 0.24.6 (2015-04-20)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-24-5-2015-02-25">Scrapy 0.24.5 (2015-02-25)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-24-4-2014-08-09">Scrapy 0.24.4 (2014-08-09)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-24-3-2014-08-09">Scrapy 0.24.3 (2014-08-09)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-24-2-2014-07-08">Scrapy 0.24.2 (2014-07-08)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-24-1-2014-06-27">Scrapy 0.24.1 (2014-06-27)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-24-0-2014-06-26">Scrapy 0.24.0 (2014-06-26)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#enhancements">Enhancements</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id65">Bugfixes</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-22-2-released-2014-02-14">Scrapy 0.22.2 (released 2014-02-14)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-22-1-released-2014-02-08">Scrapy 0.22.1 (released 2014-02-08)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-22-0-released-2014-01-17">Scrapy 0.22.0 (released 2014-01-17)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id66">Enhancements</a></li>
<li class="toctree-l3"><a class="reference internal" href="#fixes">Fixes</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-20-2-released-2013-12-09">Scrapy 0.20.2 (released 2013-12-09)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-20-1-released-2013-11-28">Scrapy 0.20.1 (released 2013-11-28)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-20-0-released-2013-11-08">Scrapy 0.20.0 (released 2013-11-08)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id67">Enhancements</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id68">Bugfixes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#other">Other</a></li>
<li class="toctree-l3"><a class="reference internal" href="#thanks">Thanks</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-18-4-released-2013-10-10">Scrapy 0.18.4 (released 2013-10-10)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-18-3-released-2013-10-03">Scrapy 0.18.3 (released 2013-10-03)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-18-2-released-2013-09-03">Scrapy 0.18.2 (released 2013-09-03)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-18-1-released-2013-08-27">Scrapy 0.18.1 (released 2013-08-27)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-18-0-released-2013-08-09">Scrapy 0.18.0 (released 2013-08-09)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-16-5-released-2013-05-30">Scrapy 0.16.5 (released 2013-05-30)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-16-4-released-2013-01-23">Scrapy 0.16.4 (released 2013-01-23)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-16-3-released-2012-12-07">Scrapy 0.16.3 (released 2012-12-07)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-16-2-released-2012-11-09">Scrapy 0.16.2 (released 2012-11-09)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-16-1-released-2012-10-26">Scrapy 0.16.1 (released 2012-10-26)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-16-0-released-2012-10-18">Scrapy 0.16.0 (released 2012-10-18)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-14-4">Scrapy 0.14.4</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-14-3">Scrapy 0.14.3</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-14-2">Scrapy 0.14.2</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-14-1">Scrapy 0.14.1</a></li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-14">Scrapy 0.14</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#new-features-and-settings">New features and settings</a></li>
<li class="toctree-l3"><a class="reference internal" href="#code-rearranged-and-removed">Code rearranged and removed</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-12">Scrapy 0.12</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#new-features-and-improvements">New features and improvements</a></li>
<li class="toctree-l3"><a class="reference internal" href="#scrapyd-changes">Scrapyd changes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#changes-to-settings">Changes to settings</a></li>
<li class="toctree-l3"><a class="reference internal" href="#deprecated-obsoleted-functionality">Deprecated/obsoleted functionality</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-10">Scrapy 0.10</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id69">New features and improvements</a></li>
<li class="toctree-l3"><a class="reference internal" href="#command-line-tool-changes">Command-line tool changes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#api-changes">API changes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id70">Changes to settings</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-9">Scrapy 0.9</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id71">New features and improvements</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id72">API changes</a></li>
<li class="toctree-l3"><a class="reference internal" href="#changes-to-default-settings">Changes to default settings</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-8">Scrapy 0.8</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id73">New features</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id74">Backward-incompatible changes</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#scrapy-0-7">Scrapy 0.7</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="contributing.html">Contributing to Scrapy</a></li>
<li class="toctree-l1"><a class="reference internal" href="versioning.html">Versioning and API stability</a></li>
</ul>

            
          
        </div>
        
      </div>
    </nav>

    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">

      
      <nav class="wy-nav-top" aria-label="top navigation">
        
          <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
          <a href="index.html">Scrapy</a>
        
      </nav>


      <div class="wy-nav-content">
        
        <div class="rst-content">
        
          















<div role="navigation" aria-label="breadcrumbs navigation">

  <ul class="wy-breadcrumbs">
    
      <li><a href="index.html" class="icon icon-home"></a> &raquo;</li>
        
      <li>Release notes</li>
    
    
      <li class="wy-breadcrumbs-aside">
        
            
        
      </li>
    
  </ul>

  
  <hr/>
</div>
          <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
           <div itemprop="articleBody">
            
  <div class="section" id="release-notes">
<span id="news"></span><h1>Release notes<a class="headerlink" href="#release-notes" title="Permalink to this headline">¶</a></h1>
<div class="section" id="scrapy-2-3-0-2020-08-04">
<span id="release-2-3-0"></span><h2>Scrapy 2.3.0 (2020-08-04)<a class="headerlink" href="#scrapy-2-3-0-2020-08-04" title="Permalink to this headline">¶</a></h2>
<p>Highlights:</p>
<ul>
<li><p><a class="reference internal" href="topics/feed-exports.html#topics-feed-exports"><span class="std std-ref">Feed exports</span></a> now support <a class="reference internal" href="topics/feed-exports.html#topics-feed-storage-gcs"><span class="std std-ref">Google Cloud Storage</span></a> as a storage backend</p></li>
<li><p>The new <a class="reference internal" href="topics/feed-exports.html#std-setting-FEED_EXPORT_BATCH_ITEM_COUNT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FEED_EXPORT_BATCH_ITEM_COUNT</span></code></a> setting allows delivering output items in batches of up to the specified number of items.</p>
<p>It also serves as a workaround for <a class="reference internal" href="topics/feed-exports.html#delayed-file-delivery"><span class="std std-ref">delayed file delivery</span></a>, which causes Scrapy to only start item delivery after the crawl has finished when using certain storage backends (<a class="reference internal" href="topics/feed-exports.html#topics-feed-storage-s3"><span class="std std-ref">S3</span></a>, <a class="reference internal" href="topics/feed-exports.html#topics-feed-storage-ftp"><span class="std std-ref">FTP</span></a>, and now <a class="reference internal" href="topics/feed-exports.html#topics-feed-storage-gcs"><span class="std std-ref">GCS</span></a>)</p>
</li>
<li><p>The base implementation of <a class="reference internal" href="topics/loaders.html#topics-loaders"><span class="std std-ref">item loaders</span></a> has been moved into a separate library, <a class="reference external" href="https://itemloaders.readthedocs.io/en/latest/index.html" title="(in itemloaders)"><span class="xref std std-doc">itemloaders</span></a>, allowing usage from outside Scrapy and a separate release schedule</p></li>
</ul>
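<p>The batching and Google Cloud Storage highlights above can be sketched as a settings fragment. This is illustrative only: the bucket name, project ID, and paths are made up, and the exact option set should be checked against the feed-exports documentation.</p>

```python
# settings.py sketch (hypothetical bucket and project names)
# Deliver scraped items to Google Cloud Storage, starting a new output
# file every 100 items instead of one file for the whole crawl.
FEEDS = {
    # %(batch_id)d is substituted per batch when
    # FEED_EXPORT_BATCH_ITEM_COUNT is set
    "gs://example-bucket/crawls/%(name)s/batch-%(batch_id)d.jsonl": {
        "format": "jsonlines",
    },
}
FEED_EXPORT_BATCH_ITEM_COUNT = 100  # max items per output file
GCS_PROJECT_ID = "example-project"  # required by the GCS storage backend
```

<p>Because each batch file is closed and uploaded as soon as it is full, items start arriving in the bucket while the crawl is still running, which is the workaround for delayed file delivery described above.</p>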
<div class="section" id="deprecation-removals">
<h3>Deprecation removals<a class="headerlink" href="#deprecation-removals" title="Permalink to this headline">¶</a></h3>
<ul>
<li><p>Removed the following classes and their parent modules from <code class="docutils literal notranslate"><span class="pre">scrapy.linkextractors</span></code>:</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">htmlparser.HtmlParserLinkExtractor</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">regex.RegexLinkExtractor</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">sgml.BaseSgmlLinkExtractor</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">sgml.SgmlLinkExtractor</span></code></p></li>
</ul>
<p>Use <a class="reference internal" href="topics/link-extractors.html#scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor" title="scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor"><code class="xref py py-class docutils literal notranslate"><span class="pre">LinkExtractor</span></code></a> instead (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4356">issue 4356</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4679">issue 4679</a>)</p>
</li>
</ul>
</div>
<div class="section" id="deprecations">
<h3>Deprecations<a class="headerlink" href="#deprecations" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>The <code class="docutils literal notranslate"><span class="pre">scrapy.utils.python.retry_on_eintr</span></code> function is now deprecated (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4683">issue 4683</a>)</p></li>
</ul>
</div>
<div class="section" id="new-features">
<h3>New features<a class="headerlink" href="#new-features" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p><a class="reference internal" href="topics/feed-exports.html#topics-feed-exports"><span class="std std-ref">Feed exports</span></a> support <a class="reference internal" href="topics/feed-exports.html#topics-feed-storage-gcs"><span class="std std-ref">Google Cloud
Storage</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/685">issue 685</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3608">issue 3608</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/feed-exports.html#std-setting-FEED_EXPORT_BATCH_ITEM_COUNT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FEED_EXPORT_BATCH_ITEM_COUNT</span></code></a> setting for batch deliveries (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4250">issue 4250</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4434">issue 4434</a>)</p></li>
<li><p>The <a class="reference internal" href="topics/commands.html#std-command-parse"><code class="xref std std-command docutils literal notranslate"><span class="pre">parse</span></code></a> command now allows specifying an output file (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4317">issue 4317</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4377">issue 4377</a>)</p></li>
<li><p><a class="reference internal" href="topics/request-response.html#scrapy.http.Request.from_curl" title="scrapy.http.Request.from_curl"><code class="xref py py-meth docutils literal notranslate"><span class="pre">Request.from_curl</span></code></a> and
<a class="reference internal" href="topics/developer-tools.html#scrapy.utils.curl.curl_to_request_kwargs" title="scrapy.utils.curl.curl_to_request_kwargs"><code class="xref py py-func docutils literal notranslate"><span class="pre">curl_to_request_kwargs()</span></code></a> now also support
<code class="docutils literal notranslate"><span class="pre">--data-raw</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4612">issue 4612</a>)</p></li>
<li><p>A <code class="docutils literal notranslate"><span class="pre">parse</span></code> callback may now be used in built-in spider subclasses, such
as <a class="reference internal" href="topics/spiders.html#scrapy.spiders.CrawlSpider" title="scrapy.spiders.CrawlSpider"><code class="xref py py-class docutils literal notranslate"><span class="pre">CrawlSpider</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/712">issue 712</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/732">issue 732</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/781">issue 781</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4254">issue 4254</a>)</p></li>
</ul>
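<p>The <code class="docutils literal notranslate"><span class="pre">--data-raw</span></code> support mentioned above, together with the POST default described under bug fixes, can be illustrated with a small stdlib-only sketch; this is an illustrative approximation, not Scrapy's actual <code class="docutils literal notranslate"><span class="pre">curl_to_request_kwargs()</span></code> implementation:</p>

```python
import shlex

def curl_to_kwargs(curl_command):
    """Illustrative sketch (not Scrapy's implementation) of turning a curl
    command into request keyword arguments, honouring --data-raw and the
    rule that a body without an explicit method implies POST."""
    tokens = shlex.split(curl_command)
    assert tokens and tokens[0] == "curl", "expected a command starting with 'curl'"
    kwargs = {"method": None, "url": None, "body": None}
    it = iter(tokens[1:])
    for tok in it:
        if tok in ("-X", "--request"):
            kwargs["method"] = next(it)
        elif tok in ("-d", "--data", "--data-raw"):
            kwargs["body"] = next(it)
        elif not tok.startswith("-"):
            kwargs["url"] = tok
    if kwargs["method"] is None:
        # a request body without an explicit method implies POST
        kwargs["method"] = "POST" if kwargs["body"] is not None else "GET"
    return kwargs
```

<p>For example, <code class="docutils literal notranslate"><span class="pre">curl_to_kwargs("curl https://example.com --data-raw 'a=1'")</span></code> yields a <code class="docutils literal notranslate"><span class="pre">POST</span></code> request even though no <code class="docutils literal notranslate"><span class="pre">-X</span></code> flag was given.</p>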
</div>
<div class="section" id="bug-fixes">
<h3>Bug fixes<a class="headerlink" href="#bug-fixes" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Fixed the <a class="reference internal" href="topics/feed-exports.html#topics-feed-format-csv"><span class="std std-ref">CSV exporting</span></a> of
<a class="reference internal" href="topics/items.html#dataclass-items"><span class="std std-ref">dataclass items</span></a> and <a class="reference internal" href="topics/items.html#attrs-items"><span class="std std-ref">attr.s items</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4667">issue 4667</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4668">issue 4668</a>)</p></li>
<li><p><a class="reference internal" href="topics/request-response.html#scrapy.http.Request.from_curl" title="scrapy.http.Request.from_curl"><code class="xref py py-meth docutils literal notranslate"><span class="pre">Request.from_curl</span></code></a> and <a class="reference internal" href="topics/developer-tools.html#scrapy.utils.curl.curl_to_request_kwargs" title="scrapy.utils.curl.curl_to_request_kwargs"><code class="xref py py-func docutils literal notranslate"><span class="pre">curl_to_request_kwargs()</span></code></a> now set the request method to <code class="docutils literal notranslate"><span class="pre">POST</span></code> when a request body is specified and no request method is specified (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4612">issue 4612</a>)</p></li>
<li><p>The processing of ANSI escape sequences is now enabled in Windows 10.0.14393 and later, where it is required for colored output (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4393">issue 4393</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4403">issue 4403</a>)</p></li>
</ul>
</div>
<div class="section" id="documentation">
<h3>Documentation<a class="headerlink" href="#documentation" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Updated the <a class="reference external" href="https://www.openssl.org/docs/manmaster/man1/openssl-ciphers.html#CIPHER-LIST-FORMAT">OpenSSL cipher list format</a> link in the documentation about the <a class="reference internal" href="topics/settings.html#std-setting-DOWNLOADER_CLIENT_TLS_CIPHERS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOADER_CLIENT_TLS_CIPHERS</span></code></a> setting (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4653">issue 4653</a>)</p></li>
<li><p>Simplified the code example in <a class="reference internal" href="topics/loaders.html#topics-loaders-dataclass"><span class="std std-ref">Working with dataclass items</span></a>
(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4652">issue 4652</a>)</p></li>
</ul>
</div>
<div class="section" id="quality-assurance">
<h3>Quality assurance<a class="headerlink" href="#quality-assurance" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>The base implementation of <a class="reference internal" href="topics/loaders.html#topics-loaders"><span class="std std-ref">item loaders</span></a> has been
moved into <a class="reference external" href="https://itemloaders.readthedocs.io/en/latest/index.html" title="(in itemloaders)"><span class="xref std std-doc">itemloaders</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4005">issue 4005</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4516">issue 4516</a>)</p></li>
<li><p>Fixed errors in some schedulers (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4644">issue 4644</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4645">issue 4645</a>)</p></li>
<li><p>Renewed the localhost certificate used for SSL tests (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4650">issue 4650</a>)</p></li>
<li><p>Removed cookie-handling code specific to Python 2 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4682">issue 4682</a>)</p></li>
<li><p>Stopped using Python 2 unicode literal syntax (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4704">issue 4704</a>)</p></li>
<li><p>Stopped using the backslash for line continuation (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4673">issue 4673</a>)</p></li>
<li><p>Removed unneeded entries from the MyPy exception list (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4690">issue 4690</a>)</p></li>
<li><p>Automated tests now pass on Windows as part of our continuous integration system (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4458">issue 4458</a>)</p></li>
<li><p>Automated tests now run with the latest PyPy version for the supported Python versions in our continuous integration system (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4504">issue 4504</a>)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-2-2-1-2020-07-17">
<span id="release-2-2-1"></span><h2>Scrapy 2.2.1 (2020-07-17)<a class="headerlink" href="#scrapy-2-2-1-2020-07-17" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>The <a class="reference internal" href="topics/commands.html#std-command-startproject"><code class="xref std std-command docutils literal notranslate"><span class="pre">startproject</span></code></a> command no longer makes unintended changes to the permissions of files in the destination folder, such as removing execution permissions (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4662">issue 4662</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4666">issue 4666</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-2-2-0-2020-06-24">
<span id="release-2-2-0"></span><h2>Scrapy 2.2.0 (2020-06-24)<a class="headerlink" href="#scrapy-2-2-0-2020-06-24" title="Permalink to this headline">¶</a></h2>
<p>Highlights:</p>
<ul class="simple">
<li><p>Python 3.5.2+ is now required</p></li>
<li><p><a class="reference internal" href="topics/items.html#dataclass-items"><span class="std std-ref">dataclass objects</span></a> and
<a class="reference internal" href="topics/items.html#attrs-items"><span class="std std-ref">attrs objects</span></a> are now valid <a class="reference internal" href="topics/items.html#item-types"><span class="std std-ref">item types</span></a></p></li>
<li><p>New <a class="reference internal" href="topics/request-response.html#scrapy.http.TextResponse.json" title="scrapy.http.TextResponse.json"><code class="xref py py-meth docutils literal notranslate"><span class="pre">TextResponse.json</span></code></a> method</p></li>
<li><p>New <a class="reference internal" href="topics/signals.html#std-signal-bytes_received"><code class="xref std std-signal docutils literal notranslate"><span class="pre">bytes_received</span></code></a> signal that allows canceling the response download</p></li>
<li><p><a class="reference internal" href="topics/downloader-middleware.html#scrapy.downloadermiddlewares.cookies.CookiesMiddleware" title="scrapy.downloadermiddlewares.cookies.CookiesMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">CookiesMiddleware</span></code></a> fixes</p></li>
</ul>
<div class="section" id="backward-incompatible-changes">
<h3>Backward-incompatible changes<a class="headerlink" href="#backward-incompatible-changes" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Support for Python 3.5.0 and 3.5.1 has been dropped; Scrapy now refuses to
run with a Python version lower than 3.5.2, which introduced
<a class="reference external" href="https://docs.python.org/3/library/typing.html#typing.Type" title="(in Python v3.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">typing.Type</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4615">issue 4615</a>)</p></li>
</ul>
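<p>For context on why <code class="docutils literal notranslate"><span class="pre">typing.Type</span></code> forced the minimum-version bump, here is a minimal sketch of what it enables; the <code class="docutils literal notranslate"><span class="pre">Spider</span></code> class below is a local stand-in for illustration, not <code class="docutils literal notranslate"><span class="pre">scrapy.Spider</span></code>:</p>

```python
from typing import Type

class Spider:  # stand-in class for illustration, not scrapy.Spider
    name = "base"

def instantiate(cls: Type[Spider]) -> Spider:
    # typing.Type[Spider] annotates "the class object itself (or a subclass)",
    # not an instance; it first appeared in Python 3.5.2, which is why Scrapy
    # now refuses to run on 3.5.0 and 3.5.1.
    return cls()

spider = instantiate(Spider)
```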
</div>
<div class="section" id="id1">
<h3>Deprecations<a class="headerlink" href="#id1" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p><code class="xref py py-meth docutils literal notranslate"><span class="pre">TextResponse.body_as_unicode</span></code> is now deprecated, use <a class="reference internal" href="topics/request-response.html#scrapy.http.TextResponse.text" title="scrapy.http.TextResponse.text"><code class="xref py py-attr docutils literal notranslate"><span class="pre">TextResponse.text</span></code></a> instead (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4546">issue 4546</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4555">issue 4555</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4579">issue 4579</a>)</p></li>
<li><p><code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.item.BaseItem</span></code> is now deprecated, use <a class="reference internal" href="topics/items.html#scrapy.item.Item" title="scrapy.item.Item"><code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.item.Item</span></code></a> instead (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4534">issue 4534</a>)</p></li>
</ul>
</div>
<div class="section" id="id2">
<h3>New features<a class="headerlink" href="#id2" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p><a class="reference internal" href="topics/items.html#dataclass-items"><span class="std std-ref">dataclass objects</span></a> and
<a class="reference internal" href="topics/items.html#attrs-items"><span class="std std-ref">attrs objects</span></a> are now valid <a class="reference internal" href="topics/items.html#item-types"><span class="std std-ref">item types</span></a>, and a new <a class="reference external" href="https://github.com/scrapy/itemadapter">itemadapter</a> library makes it easy to
write code that <a class="reference internal" href="topics/items.html#supporting-item-types"><span class="std std-ref">supports any item type</span></a>
(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2749">issue 2749</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2807">issue 2807</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3761">issue 3761</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3881">issue 3881</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4642">issue 4642</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/request-response.html#scrapy.http.TextResponse.json" title="scrapy.http.TextResponse.json"><code class="xref py py-meth docutils literal notranslate"><span class="pre">TextResponse.json</span></code></a> method allows deserializing JSON responses (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2444">issue 2444</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4460">issue 4460</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4574">issue 4574</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/signals.html#std-signal-bytes_received"><code class="xref std std-signal docutils literal notranslate"><span class="pre">bytes_received</span></code></a> signal allows monitoring response download
progress and <a class="reference internal" href="topics/request-response.html#topics-stop-response-download"><span class="std std-ref">stopping downloads</span></a>
(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4205">issue 4205</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4559">issue 4559</a>)</p></li>
<li><p>The dictionaries in the result list of a <a class="reference internal" href="topics/media-pipeline.html#topics-media-pipeline"><span class="std std-ref">media pipeline</span></a> now include a new key, <code class="docutils literal notranslate"><span class="pre">status</span></code>, which indicates if the file was downloaded or, if it was not downloaded, why it was not downloaded; see <a class="reference internal" href="topics/media-pipeline.html#scrapy.pipelines.files.FilesPipeline.get_media_requests" title="scrapy.pipelines.files.FilesPipeline.get_media_requests"><code class="xref py py-meth docutils literal notranslate"><span class="pre">FilesPipeline.get_media_requests</span></code></a> for more information (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2893">issue 2893</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4486">issue 4486</a>)</p></li>
<li><p>When using <a class="reference internal" href="topics/media-pipeline.html#media-pipeline-gcs"><span class="std std-ref">Google Cloud Storage</span></a> for a <a class="reference internal" href="topics/media-pipeline.html#topics-media-pipeline"><span class="std std-ref">media pipeline</span></a>, a warning is now logged if the configured credentials do not grant the required permissions (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4346">issue 4346</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4508">issue 4508</a>)</p></li>
<li><p><a class="reference internal" href="topics/link-extractors.html#topics-link-extractors"><span class="std std-ref">Link extractors</span></a> are now serializable,
as long as you do not use <a class="reference external" href="https://docs.python.org/3/reference/expressions.html#lambda" title="(in Python v3.9)"><span class="xref std std-ref">lambdas</span></a> for parameters; for
example, you can now pass link extractors in <a class="reference internal" href="topics/request-response.html#scrapy.http.Request.cb_kwargs" title="scrapy.http.Request.cb_kwargs"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.cb_kwargs</span></code></a> or
<a class="reference internal" href="topics/request-response.html#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> when <a class="reference internal" href="topics/jobs.html#topics-jobs"><span class="std std-ref">persisting
scheduled requests</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4554">issue 4554</a>)</p></li>
<li><p>Upgraded the <a class="reference external" href="https://docs.python.org/3/library/pickle.html#pickle-protocols" title="(in Python v3.9)"><span class="xref std std-ref">pickle protocol</span></a> that Scrapy uses from protocol 2 to protocol 4, improving serialization capabilities and performance (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4135">issue 4135</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4541">issue 4541</a>)</p></li>
<li><p><code class="xref py py-func docutils literal notranslate"><span class="pre">scrapy.utils.misc.create_instance()</span></code> now raises a <a class="reference external" href="https://docs.python.org/3/library/exceptions.html#TypeError" title="(in Python v3.9)"><code class="xref py py-exc docutils literal notranslate"><span class="pre">TypeError</span></code></a>
exception if the resulting instance is <code class="docutils literal notranslate"><span class="pre">None</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4528">issue 4528</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4532">issue 4532</a>)</p></li>
</ul>
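<p>To illustrate the dataclass-as-item idea above with a stdlib-only sketch: the <a class="reference external" href="https://github.com/scrapy/itemadapter">itemadapter</a> library provides this kind of uniform field access across dicts, <code class="docutils literal notranslate"><span class="pre">Item</span></code> subclasses, dataclass and attrs objects; the <code class="docutils literal notranslate"><span class="pre">Product</span></code> class here is a made-up example:</p>

```python
from dataclasses import dataclass, asdict, fields

@dataclass
class Product:  # a made-up item type for illustration
    name: str
    price: float

item = Product(name="widget", price=9.99)

# The kind of generic access an exporter or pipeline needs: a plain dict
# for serialization, and a stable list of field names for e.g. CSV headers.
row = asdict(item)
columns = [f.name for f in fields(Product)]
```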
</div>
<div class="section" id="id3">
<h3>Bug fixes<a class="headerlink" href="#id3" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p><a class="reference internal" href="topics/downloader-middleware.html#scrapy.downloadermiddlewares.cookies.CookiesMiddleware" title="scrapy.downloadermiddlewares.cookies.CookiesMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">CookiesMiddleware</span></code></a> no longer
discards cookies defined in <a class="reference internal" href="topics/request-response.html#scrapy.http.Request.headers" title="scrapy.http.Request.headers"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.headers</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1992">issue 1992</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2400">issue 2400</a>)</p></li>
<li><p><a class="reference internal" href="topics/downloader-middleware.html#scrapy.downloadermiddlewares.cookies.CookiesMiddleware" title="scrapy.downloadermiddlewares.cookies.CookiesMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">CookiesMiddleware</span></code></a> no longer
re-encodes cookies defined as <a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#bytes" title="(in Python v3.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">bytes</span></code></a> in the <code class="docutils literal notranslate"><span class="pre">cookies</span></code> parameter
of the <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method of <a class="reference internal" href="topics/request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a>
(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2400">issue 2400</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3575">issue 3575</a>)</p></li>
<li><p>When <a class="reference internal" href="topics/feed-exports.html#std-setting-FEEDS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FEEDS</span></code></a> defines multiple URIs, <a class="reference internal" href="topics/feed-exports.html#std-setting-FEED_STORE_EMPTY"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FEED_STORE_EMPTY</span></code></a> is <code class="docutils literal notranslate"><span class="pre">False</span></code> and the crawl yields no items, Scrapy no longer stops feed exports after the first URI (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4621">issue 4621</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4626">issue 4626</a>)</p></li>
<li><p><a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> callbacks defined using <a class="reference internal" href="topics/coroutines.html"><span class="doc">coroutine
syntax</span></a> no longer need to return an iterable, and may
instead return a <a class="reference internal" href="topics/request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> object, an
<a class="reference internal" href="topics/items.html#topics-items"><span class="std std-ref">item</span></a>, or <code class="docutils literal notranslate"><span class="pre">None</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4609">issue 4609</a>)</p></li>
<li><p>The <a class="reference internal" href="topics/commands.html#std-command-startproject"><code class="xref std std-command docutils literal notranslate"><span class="pre">startproject</span></code></a> command now ensures that the generated project folders and files have the right permissions (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4604">issue 4604</a>)</p></li>
<li><p>Fixed a <a class="reference external" href="https://docs.python.org/3/library/exceptions.html#KeyError" title="(in Python v3.9)"><code class="xref py py-exc docutils literal notranslate"><span class="pre">KeyError</span></code></a> exception being sometimes raised from
<code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.utils.datatypes.LocalWeakReferencedCache</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4597">issue 4597</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4599">issue 4599</a>)</p></li>
<li><p>When <a class="reference internal" href="topics/feed-exports.html#std-setting-FEEDS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FEEDS</span></code></a> defines multiple URIs, log messages about items being stored now contain information from the corresponding feed, instead of always containing information about only one of the feeds (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4619">issue 4619</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4629">issue 4629</a>)</p></li>
</ul>
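<p>A minimal sketch of the coroutine-callback change noted above: a plain dict stands in for a real <code class="docutils literal notranslate"><span class="pre">Response</span></code>, and <code class="docutils literal notranslate"><span class="pre">asyncio.run</span></code> stands in for Scrapy's own event-loop handling:</p>

```python
import asyncio

async def parse(response):
    # With coroutine syntax, a callback may now return a single object
    # (a Request, an item, or None) instead of an iterable of results.
    return {"url": response["url"], "parsed": True}

result = asyncio.run(parse({"url": "https://example.com"}))
```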
</div>
<div class="section" id="id4">
<h3>Documentation<a class="headerlink" href="#id4" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Added a new section about <a class="reference internal" href="topics/request-response.html#errback-cb-kwargs"><span class="std std-ref">accessing cb_kwargs from errbacks</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4598">issue 4598</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4634">issue 4634</a>)</p></li>
<li><p>Covered <a class="reference external" href="https://github.com/Nykakin/chompjs">chompjs</a> in <a class="reference internal" href="topics/dynamic-content.html#topics-parsing-javascript"><span class="std std-ref">Parsing JavaScript code</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4556">issue 4556</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4562">issue 4562</a>)</p></li>
<li><p>Removed from <a class="reference internal" href="topics/coroutines.html"><span class="doc">Coroutines</span></a> the warning about the API being experimental (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4511">issue 4511</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4513">issue 4513</a>)</p></li>
<li><p>Removed references to unsupported versions of <a class="reference external" href="https://twistedmatrix.com/documents/current/index.html" title="(in Twisted v20.3)"><span class="xref std std-doc">Twisted</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4533">issue 4533</a>)</p></li>
<li><p>Updated the description of the <a class="reference internal" href="topics/item-pipeline.html#screenshotpipeline"><span class="std std-ref">screenshot pipeline example</span></a>, which now uses <a class="reference internal" href="topics/coroutines.html"><span class="doc">coroutine syntax</span></a> instead of returning a
<a class="reference external" href="https://twistedmatrix.com/documents/current/api/twisted.internet.defer.Deferred.html" title="(in Twisted v2.0)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Deferred</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4514">issue 4514</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4593">issue 4593</a>)</p></li>
<li><p>Removed a misleading import line from the <code class="xref py py-func docutils literal notranslate"><span class="pre">scrapy.utils.log.configure_logging()</span></code> code example (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4510">issue 4510</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4587">issue 4587</a>)</p></li>
<li><p>The display-on-hover behavior of internal documentation references now also
covers links to <a class="reference internal" href="topics/commands.html#topics-commands"><span class="std std-ref">commands</span></a>, <a class="reference internal" href="topics/request-response.html#scrapy.http.Request.meta" title="scrapy.http.Request.meta"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.meta</span></code></a> keys, <a class="reference internal" href="topics/settings.html#topics-settings"><span class="std std-ref">settings</span></a> and
<a class="reference internal" href="topics/signals.html#topics-signals"><span class="std std-ref">signals</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4495">issue 4495</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4563">issue 4563</a>)</p></li>
<li><p>The documentation can again be downloaded for offline reading (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4578">issue 4578</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4585">issue 4585</a>)</p></li>
<li><p>Removed backslashes preceding <code class="docutils literal notranslate"><span class="pre">*args</span></code> and <code class="docutils literal notranslate"><span class="pre">**kwargs</span></code> in some function and method signatures (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4592">issue 4592</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4596">issue 4596</a>)</p></li>
</ul>
</div>
<div class="section" id="id5">
<h3>Quality assurance<a class="headerlink" href="#id5" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Adjusted the code base further to our <a class="reference internal" href="contributing.html#coding-style"><span class="std std-ref">style guidelines</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4237">issue 4237</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4525">issue 4525</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4538">issue 4538</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4539">issue 4539</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4540">issue 4540</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4542">issue 4542</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4543">issue 4543</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4544">issue 4544</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4545">issue 4545</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4557">issue 4557</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4558">issue 4558</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4566">issue 4566</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4568">issue 4568</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4572">issue 4572</a>)</p></li>
<li><p>Removed leftovers of Python 2 support (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4550">issue 4550</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4553">issue 4553</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4568">issue 4568</a>)</p></li>
<li><p>Improved the <a class="reference internal" href="topics/commands.html#std-command-crawl"><code class="xref std std-command docutils literal notranslate"><span class="pre">crawl</span></code></a> and <a class="reference internal" href="topics/commands.html#std-command-runspider"><code class="xref std std-command docutils literal notranslate"><span class="pre">runspider</span></code></a> commands (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4548">issue 4548</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4552">issue 4552</a>)</p></li>
<li><p>Replaced <code class="docutils literal notranslate"><span class="pre">chain(*iterable)</span></code> with <code class="docutils literal notranslate"><span class="pre">chain.from_iterable(iterable)</span></code>
(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4635">issue 4635</a>)</p></li>
<li><p>You can now run the <a class="reference external" href="https://docs.python.org/3/library/asyncio.html#module-asyncio" title="(in Python v3.9)"><code class="xref py py-mod docutils literal notranslate"><span class="pre">asyncio</span></code></a> tests with Tox on any Python version (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4521">issue 4521</a>)</p></li>
<li><p>Updated test requirements to reflect an incompatibility with pytest 5.4 and 5.4.1 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4588">issue 4588</a>)</p></li>
<li><p>Improved <a class="reference internal" href="topics/api.html#scrapy.spiderloader.SpiderLoader" title="scrapy.spiderloader.SpiderLoader"><code class="xref py py-class docutils literal notranslate"><span class="pre">SpiderLoader</span></code></a> test coverage for scenarios involving duplicate spider names (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4549">issue 4549</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4560">issue 4560</a>)</p></li>
<li><p>Configured Travis CI to also run the tests with Python 3.5.2 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4518">issue 4518</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4615">issue 4615</a>)</p></li>
<li><p>Added a <a class="reference external" href="https://www.pylint.org/">Pylint</a> job to Travis CI (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3727">issue 3727</a>)</p></li>
<li><p>Added a <a class="reference external" href="http://mypy-lang.org/">Mypy</a> job to Travis CI (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4637">issue 4637</a>)</p></li>
<li><p>Used set literals in tests (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4573">issue 4573</a>)</p></li>
<li><p>Cleaned up the Travis CI configuration (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4517">issue 4517</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4519">issue 4519</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4522">issue 4522</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4537">issue 4537</a>)</p></li>
</ul>
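<p>The <code class="docutils literal notranslate"><span class="pre">chain.from_iterable</span></code> change noted above avoids unpacking an entire iterable into arguments up front; a minimal stdlib illustration (not Scrapy code):</p>

```python
from itertools import chain

nested = [[1, 2], [3], [4, 5]]

# chain(*nested) unpacks every sub-list into arguments immediately,
# which exhausts a lazy outer iterable before iteration even starts.
eager = list(chain(*nested))

# chain.from_iterable(nested) consumes the outer iterable lazily,
# one sub-iterable at a time, with the same flattened result.
lazy = list(chain.from_iterable(nested))

print(eager == lazy)  # True: both flatten to [1, 2, 3, 4, 5]
```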
</div>
</div>
<div class="section" id="scrapy-2-1-0-2020-04-24">
<span id="release-2-1-0"></span><h2>Scrapy 2.1.0 (2020-04-24)<a class="headerlink" href="#scrapy-2-1-0-2020-04-24" title="永久链接至标题">¶</a></h2>
<p>Highlights:</p>
<ul class="simple">
<li><p>New <a class="reference internal" href="topics/feed-exports.html#std-setting-FEEDS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FEEDS</span></code></a> setting to export to multiple feeds</p></li>
<li><p>New <a class="reference internal" href="topics/request-response.html#scrapy.http.Response.ip_address" title="scrapy.http.Response.ip_address"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Response.ip_address</span></code></a> attribute</p></li>
</ul>
<div class="section" id="id6">
<h3>Backward-incompatible changes<a class="headerlink" href="#id6" title="永久链接至标题">¶</a></h3>
<ul>
<li><p><a class="reference external" href="https://docs.python.org/3/library/exceptions.html#AssertionError" title="(在 Python v3.9)"><code class="xref py py-exc docutils literal notranslate"><span class="pre">AssertionError</span></code></a> exceptions raised by <a class="reference external" href="https://docs.python.org/3/reference/simple_stmts.html#assert" title="(在 Python v3.9)"><span class="xref std std-ref">assert</span></a> statements have been replaced by new exception types, to support running Python in optimized mode (see <a class="reference external" href="https://docs.python.org/3/using/cmdline.html#cmdoption-o" title="(在 Python v3.9)"><code class="xref std std-option docutils literal notranslate"><span class="pre">-O</span></code></a>) without changing Scrapy's behavior in any unexpected ways.</p>
<p>If you catch an <a class="reference external" href="https://docs.python.org/3/library/exceptions.html#AssertionError" title="(在 Python v3.9)"><code class="xref py py-exc docutils literal notranslate"><span class="pre">AssertionError</span></code></a> exception coming from Scrapy, update your code to catch the corresponding new exception.</p>
<p>(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4440">issue 4440</a>)</p>
</li>
</ul>
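<p>The motivation for the change above is that Python's optimized mode strips <code class="docutils literal notranslate"><span class="pre">assert</span></code> statements entirely, so errors signalled through <code class="docutils literal notranslate"><span class="pre">AssertionError</span></code> silently vanish under <code class="docutils literal notranslate"><span class="pre">-O</span></code>. A small standalone demonstration of that mechanism (not Scrapy-specific):</p>

```python
import subprocess
import sys

snippet = "assert False, 'never checked'\nprint('ran to completion')"

# Without -O, the assert fires and the process exits with an error.
plain = subprocess.run([sys.executable, "-c", snippet],
                       capture_output=True, text=True)

# With -O, Python strips assert statements, so execution continues.
optimized = subprocess.run([sys.executable, "-O", "-c", snippet],
                           capture_output=True, text=True)

print(plain.returncode != 0)                    # True: AssertionError raised
print("ran to completion" in optimized.stdout)  # True: assert was skipped
```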
</div>
<div class="section" id="id7">
<h3>Deprecation removals<a class="headerlink" href="#id7" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>The <code class="docutils literal notranslate"><span class="pre">LOG_UNSERIALIZABLE_REQUESTS</span></code> setting is no longer supported; use <a class="reference internal" href="topics/settings.html#std-setting-SCHEDULER_DEBUG"><code class="xref std std-setting docutils literal notranslate"><span class="pre">SCHEDULER_DEBUG</span></code></a> instead (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4385">issue 4385</a>)</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">REDIRECT_MAX_METAREFRESH_DELAY</span></code> setting is no longer supported; use <a class="reference internal" href="topics/downloader-middleware.html#std-setting-METAREFRESH_MAXDELAY"><code class="xref std std-setting docutils literal notranslate"><span class="pre">METAREFRESH_MAXDELAY</span></code></a> instead (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4385">issue 4385</a>)</p></li>
<li><p>The <code class="xref py py-class docutils literal notranslate"><span class="pre">ChunkedTransferMiddleware</span></code> middleware has been removed, along with the entire <code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.downloadermiddlewares.chunked</span></code> module; chunked transfers work out of the box (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4431">issue 4431</a>)</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">spiders</span></code> attribute has been removed from <a class="reference internal" href="topics/api.html#scrapy.crawler.Crawler" title="scrapy.crawler.Crawler"><code class="xref py py-class docutils literal notranslate"><span class="pre">Crawler</span></code></a>; use <code class="xref py py-class docutils literal notranslate"><span class="pre">CrawlerRunner.spider_loader</span></code> or instantiate <a class="reference internal" href="topics/settings.html#std-setting-SPIDER_LOADER_CLASS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">SPIDER_LOADER_CLASS</span></code></a> with your settings instead (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4398">issue 4398</a>)</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">MultiValueDict</span></code>, <code class="docutils literal notranslate"><span class="pre">MultiValueDictKeyError</span></code>, and <code class="docutils literal notranslate"><span class="pre">SiteNode</span></code>
classes have been removed from <code class="xref py py-mod docutils literal notranslate"><span class="pre">scrapy.utils.datatypes</span></code>
(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4400">issue 4400</a>)</p></li>
</ul>
</div>
<div class="section" id="id8">
<h3>Deprecations<a class="headerlink" href="#id8" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>The <code class="docutils literal notranslate"><span class="pre">FEED_FORMAT</span></code> and <code class="docutils literal notranslate"><span class="pre">FEED_URI</span></code> settings have been deprecated in favor of the new <a class="reference internal" href="topics/feed-exports.html#std-setting-FEEDS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FEEDS</span></code></a> setting (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1336">issue 1336</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3858">issue 3858</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4507">issue 4507</a>)</p></li>
</ul>
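<p>As a migration sketch (the feed file name is illustrative), the deprecated setting pair maps onto a single entry of the new <code class="docutils literal notranslate"><span class="pre">FEEDS</span></code> dictionary:</p>

```python
# settings.py -- before (deprecated):
#   FEED_URI = "items.json"
#   FEED_FORMAT = "json"

# After, using the FEEDS setting introduced in Scrapy 2.1; each key is
# a feed URI and each value holds that feed's own options:
FEEDS = {
    "items.json": {"format": "json"},
}
```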
</div>
<div class="section" id="id9">
<h3>New features<a class="headerlink" href="#id9" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>A new setting, <a class="reference internal" href="topics/feed-exports.html#std-setting-FEEDS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FEEDS</span></code></a>, allows configuring multiple output feeds with different settings each (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1336">issue 1336</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3858">issue 3858</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4507">issue 4507</a>)</p></li>
<li><p>The <a class="reference internal" href="topics/commands.html#std-command-crawl"><code class="xref std std-command docutils literal notranslate"><span class="pre">crawl</span></code></a> and <a class="reference internal" href="topics/commands.html#std-command-runspider"><code class="xref std std-command docutils literal notranslate"><span class="pre">runspider</span></code></a> commands now support multiple <code class="docutils literal notranslate"><span class="pre">-o</span></code> parameters (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1336">issue 1336</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3858">issue 3858</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4507">issue 4507</a>)</p></li>
<li><p>The <a class="reference internal" href="topics/commands.html#std-command-crawl"><code class="xref std std-command docutils literal notranslate"><span class="pre">crawl</span></code></a> and <a class="reference internal" href="topics/commands.html#std-command-runspider"><code class="xref std std-command docutils literal notranslate"><span class="pre">runspider</span></code></a> commands now support specifying an output format by appending <code class="docutils literal notranslate"><span class="pre">:&lt;format&gt;</span></code> to the output file (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1336">issue 1336</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3858">issue 3858</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4507">issue 4507</a>)</p></li>
<li><p>The new <a class="reference internal" href="topics/request-response.html#scrapy.http.Response.ip_address" title="scrapy.http.Response.ip_address"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Response.ip_address</span></code></a> attribute gives access to the IP address that originated a response (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3903">issue 3903</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3940">issue 3940</a>)</p></li>
<li><p>A warning is now issued when <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider.allowed_domains" title="scrapy.spiders.Spider.allowed_domains"><code class="xref py py-attr docutils literal notranslate"><span class="pre">allowed_domains</span></code></a> includes a port (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/50">issue 50</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3198">issue 3198</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4413">issue 4413</a>)</p></li>
<li><p>Zsh completion now excludes used option aliases from the completion list (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4438">issue 4438</a>)</p></li>
</ul>
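<p>Taken together, the command-line additions above allow one run to write several feeds at once; a usage sketch (the spider and file names are illustrative):</p>

```shell
# Two -o parameters export the same items to two feeds; the :<format>
# suffix forces the CSV format for the second file.
scrapy crawl myspider -o items.json -o items.csv:csv
```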
</div>
<div class="section" id="id10">
<h3>Bug fixes<a class="headerlink" href="#id10" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p><a class="reference internal" href="topics/jobs.html#request-serialization"><span class="std std-ref">Request serialization</span></a> no longer breaks for callbacks that are spider attributes which are assigned a function with a different name (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4500">issue 4500</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">None</span></code> values in <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider.allowed_domains" title="scrapy.spiders.Spider.allowed_domains"><code class="xref py py-attr docutils literal notranslate"><span class="pre">allowed_domains</span></code></a> no longer cause a <a class="reference external" href="https://docs.python.org/3/library/exceptions.html#TypeError" title="(在 Python v3.9)"><code class="xref py py-exc docutils literal notranslate"><span class="pre">TypeError</span></code></a> exception (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4410">issue 4410</a>)</p></li>
<li><p>Zsh completion no longer allows options after arguments (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4438">issue 4438</a>)</p></li>
<li><p>zope.interface 5.0.0 and later versions are now supported (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4447">issue 4447</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4448">issue 4448</a>)</p></li>
<li><p><code class="xref py py-meth docutils literal notranslate"><span class="pre">Spider.make_requests_from_url</span></code>, deprecated in Scrapy 1.4.0, now issues a warning when used (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4412">issue 4412</a>)</p></li>
</ul>
</div>
<div class="section" id="id11">
<h3>Documentation<a class="headerlink" href="#id11" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>Improved the documentation about signals that allow their handlers to
return a <a class="reference external" href="https://twistedmatrix.com/documents/current/api/twisted.internet.defer.Deferred.html" title="(在 Twisted v2.0)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Deferred</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4295">issue 4295</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4390">issue 4390</a>)</p></li>
<li><p>Our PyPI entry now includes links to our documentation, our source code repository and our issue tracker (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4456">issue 4456</a>)</p></li>
<li><p>Covered the <a class="reference external" href="https://michael-shub.github.io/curl2scrapy/">curl2scrapy</a> service in the documentation (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4206">issue 4206</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4455">issue 4455</a>)</p></li>
<li><p>Removed references to the Guppy library, which only works in Python 2 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4285">issue 4285</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4343">issue 4343</a>)</p></li>
<li><p>Extended the use of Intersphinx to link to the Python 3 documentation (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4444">issue 4444</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4445">issue 4445</a>)</p></li>
<li><p>Added support for Sphinx 3.0 and later (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4475">issue 4475</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4480">issue 4480</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4496">issue 4496</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4503">issue 4503</a>)</p></li>
</ul>
</div>
<div class="section" id="id12">
<h3>Quality assurance<a class="headerlink" href="#id12" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>Removed warnings about using old, removed settings (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4404">issue 4404</a>)</p></li>
<li><p>Removed a warning about importing <a class="reference external" href="https://twistedmatrix.com/documents/current/api/twisted.internet.testing.StringTransport.html" title="(在 Twisted v2.0)"><code class="xref py py-class docutils literal notranslate"><span class="pre">StringTransport</span></code></a> from <code class="docutils literal notranslate"><span class="pre">twisted.test.proto_helpers</span></code> in Twisted 19.7.0 and newer versions (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4409">issue 4409</a>)</p></li>
<li><p>Removed outdated Debian package build files (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4384">issue 4384</a>)</p></li>
<li><p>Removed <a class="reference external" href="https://docs.python.org/3/library/functions.html#object" title="(在 Python v3.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">object</span></code></a> usage as a base class (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4430">issue 4430</a>)</p></li>
<li><p>Removed code that added support for old versions of Twisted that we no longer support (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4472">issue 4472</a>)</p></li>
<li><p>Fixed code style issues (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4468">issue 4468</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4469">issue 4469</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4471">issue 4471</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4481">issue 4481</a>)</p></li>
<li><p>Removed <a class="reference external" href="https://twistedmatrix.com/documents/current/api/twisted.internet.defer.html#returnValue" title="(在 Twisted v2.0)"><code class="xref py py-func docutils literal notranslate"><span class="pre">twisted.internet.defer.returnValue()</span></code></a> calls (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4443">issue 4443</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4446">issue 4446</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4489">issue 4489</a>)</p></li>
</ul>
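<p>The <code class="docutils literal notranslate"><span class="pre">returnValue()</span></code> cleanup in the last entry is possible because Python 3 generators can return values directly, something Python 2 disallowed. A minimal illustration of the underlying mechanism (no Twisted required):</p>

```python
# Under Python 2, "return value" was illegal inside a generator, so
# Twisted provided defer.returnValue() to smuggle a result out of an
# inlineCallbacks generator. Python 3 lifts that restriction:
def coroutine_style():
    yield "intermediate step"
    return "final result"  # replaces defer.returnValue("final result")

gen = coroutine_style()
first = next(gen)  # consume the yield
try:
    next(gen)
    result = None
except StopIteration as exc:
    # The returned value travels on StopIteration.value, which
    # frameworks like Twisted unwrap on behalf of the caller.
    result = exc.value

print(result)  # final result
```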
</div>
</div>
<div class="section" id="scrapy-2-0-1-2020-03-18">
<span id="release-2-0-1"></span><h2>Scrapy 2.0.1 (2020-03-18)<a class="headerlink" href="#scrapy-2-0-1-2020-03-18" title="永久链接至标题">¶</a></h2>
<ul class="simple">
<li><p><a class="reference internal" href="topics/request-response.html#scrapy.http.Response.follow_all" title="scrapy.http.Response.follow_all"><code class="xref py py-meth docutils literal notranslate"><span class="pre">Response.follow_all</span></code></a> now supports an empty URL iterable as input (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4408">issue 4408</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4420">issue 4420</a>)</p></li>
<li><p>Removed top-level <a class="reference external" href="https://twistedmatrix.com/documents/current/api/twisted.internet.reactor.html" title="(在 Twisted v2.0)"><code class="xref py py-mod docutils literal notranslate"><span class="pre">reactor</span></code></a> imports to prevent
errors about the wrong Twisted reactor being installed when setting a
different Twisted reactor using <a class="reference internal" href="topics/settings.html#std-setting-TWISTED_REACTOR"><code class="xref std std-setting docutils literal notranslate"><span class="pre">TWISTED_REACTOR</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4401">issue 4401</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4406">issue 4406</a>)</p></li>
<li><p>Fixed tests (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4422">issue 4422</a>)</p></li>
</ul>
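<p>For reference, selecting a non-default reactor happens entirely through settings; a sketch of a <code class="docutils literal notranslate"><span class="pre">settings.py</span></code> fragment using the asyncio reactor:</p>

```python
# settings.py -- the reactor is chosen before Scrapy starts. Top-level
# "from twisted.internet import reactor" imports elsewhere in your code
# would install the default reactor first and defeat this setting,
# which is what the fix above addresses.
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
```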
</div>
<div class="section" id="scrapy-2-0-0-2020-03-03">
<span id="release-2-0-0"></span><h2>Scrapy 2.0.0 (2020-03-03)<a class="headerlink" href="#scrapy-2-0-0-2020-03-03" title="永久链接至标题">¶</a></h2>
<p>Highlights:</p>
<ul class="simple">
<li><p>Python 2 support has been removed</p></li>
<li><p><a class="reference internal" href="topics/coroutines.html"><span class="doc">Partial</span></a> <a class="reference external" href="https://docs.python.org/3/reference/compound_stmts.html#async" title="(在 Python v3.9)"><span class="xref std std-ref">coroutine syntax</span></a> support and <a class="reference internal" href="topics/asyncio.html"><span class="doc">experimental</span></a> <a class="reference external" href="https://docs.python.org/3/library/asyncio.html#module-asyncio" title="(在 Python v3.9)"><code class="xref py py-mod docutils literal notranslate"><span class="pre">asyncio</span></code></a> support</p></li>
<li><p>New <a class="reference internal" href="topics/request-response.html#scrapy.http.Response.follow_all" title="scrapy.http.Response.follow_all"><code class="xref py py-meth docutils literal notranslate"><span class="pre">Response.follow_all</span></code></a> method</p></li>
<li><p><a class="reference internal" href="topics/media-pipeline.html#media-pipeline-ftp"><span class="std std-ref">FTP support</span></a> for media pipelines</p></li>
<li><p>New <a class="reference internal" href="topics/request-response.html#scrapy.http.Response.certificate" title="scrapy.http.Response.certificate"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Response.certificate</span></code></a> attribute</p></li>
<li><p>IPv6 support through <a class="reference internal" href="topics/settings.html#std-setting-DNS_RESOLVER"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DNS_RESOLVER</span></code></a></p></li>
</ul>
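<p>A minimal sketch of the new coroutine support, assuming a project running Scrapy 2.0 with the asyncio reactor enabled (the spider name, URL and selector are illustrative; async generator callbacks are not part of this release's partial support):</p>

```python
import asyncio

import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://example.com"]

    # Callbacks may now use Python's coroutine syntax: they can await
    # asynchronous work before returning their results.
    async def parse(self, response):
        await asyncio.sleep(0.1)  # stand-in for real asynchronous work
        return [{"url": response.url,
                 "title": response.css("title::text").get()}]
```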
<div class="section" id="id13">
<h3>Backward-incompatible changes<a class="headerlink" href="#id13" title="永久链接至标题">¶</a></h3>
<ul>
<li><p>Python 2 support has been removed, following <a class="reference external" href="https://www.python.org/doc/sunset-python-2/">Python 2 end-of-life on
January 1, 2020</a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4091">issue 4091</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4114">issue 4114</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4115">issue 4115</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4121">issue 4121</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4138">issue 4138</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4231">issue 4231</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4242">issue 4242</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4304">issue 4304</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4309">issue 4309</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4373">issue 4373</a>)</p></li>
<li><p>Retry gave-ups (see <a class="reference internal" href="topics/downloader-middleware.html#std-setting-RETRY_TIMES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">RETRY_TIMES</span></code></a>) are now logged as errors instead of as debug information (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3171">issue 3171</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3566">issue 3566</a>)</p></li>
<li><p>File extensions that
<a class="reference internal" href="topics/link-extractors.html#scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor" title="scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor"><code class="xref py py-class docutils literal notranslate"><span class="pre">LinkExtractor</span></code></a>
ignores by default now also include <code class="docutils literal notranslate"><span class="pre">7z</span></code>, <code class="docutils literal notranslate"><span class="pre">7zip</span></code>, <code class="docutils literal notranslate"><span class="pre">apk</span></code>, <code class="docutils literal notranslate"><span class="pre">bz2</span></code>,
<code class="docutils literal notranslate"><span class="pre">cdr</span></code>, <code class="docutils literal notranslate"><span class="pre">dmg</span></code>, <code class="docutils literal notranslate"><span class="pre">ico</span></code>, <code class="docutils literal notranslate"><span class="pre">iso</span></code>, <code class="docutils literal notranslate"><span class="pre">tar</span></code>, <code class="docutils literal notranslate"><span class="pre">tar.gz</span></code>, <code class="docutils literal notranslate"><span class="pre">webm</span></code>, and
<code class="docutils literal notranslate"><span class="pre">xz</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1837">issue 1837</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2067">issue 2067</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4066">issue 4066</a>)</p></li>
<li><p>The <a class="reference internal" href="topics/downloader-middleware.html#std-setting-METAREFRESH_IGNORE_TAGS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">METAREFRESH_IGNORE_TAGS</span></code></a> setting is now an empty list by default, following web browser behavior (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3844">issue 3844</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4311">issue 4311</a>)</p></li>
<li><p>The <a class="reference internal" href="topics/downloader-middleware.html#scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware" title="scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">HttpCompressionMiddleware</span></code></a> now includes spaces after commas in the value of the <code class="docutils literal notranslate"><span class="pre">Accept-Encoding</span></code> header that it sets, following web browser behavior (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4293">issue 4293</a>)</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method of custom download handlers (see <a class="reference internal" href="topics/settings.html#std-setting-DOWNLOAD_HANDLERS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOAD_HANDLERS</span></code></a>) and of the following downloader handlers no longer receives a <code class="docutils literal notranslate"><span class="pre">settings</span></code> parameter:</p>
<ul class="simple">
<li><p><code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.core.downloader.handlers.datauri.DataURIDownloadHandler</span></code></p></li>
<li><p><code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.core.downloader.handlers.file.FileDownloadHandler</span></code></p></li>
</ul>
<p>Use the <code class="docutils literal notranslate"><span class="pre">from_settings</span></code> or <code class="docutils literal notranslate"><span class="pre">from_crawler</span></code> class methods to expose such a parameter to your custom download handlers.</p>
<p>(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4126">issue 4126</a>)</p>
</li>
<li><p>We have refactored the <code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.core.scheduler.Scheduler</span></code> class and related queue classes (see <a class="reference internal" href="topics/settings.html#std-setting-SCHEDULER_PRIORITY_QUEUE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">SCHEDULER_PRIORITY_QUEUE</span></code></a>, <a class="reference internal" href="topics/settings.html#std-setting-SCHEDULER_DISK_QUEUE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">SCHEDULER_DISK_QUEUE</span></code></a> and <a class="reference internal" href="topics/settings.html#std-setting-SCHEDULER_MEMORY_QUEUE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">SCHEDULER_MEMORY_QUEUE</span></code></a>) to make it easier to implement custom scheduler queue classes. See <a class="reference internal" href="#scheduler-queue-changes"><span class="std std-ref">Changes to scheduler queue classes</span></a> below for details.</p></li>
<li><p>Overridden settings are now logged in a different format, more in line with similar information logged at startup (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4199">issue 4199</a>)</p></li>
</ul>
</div>
<div class="section" id="id14">
<h3>Deprecation removals<a class="headerlink" href="#id14" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>The <a class="reference internal" href="topics/shell.html#topics-shell"><span class="std std-ref">Scrapy shell</span></a> no longer provides a <cite>sel</cite> proxy object; use <code class="xref py py-meth docutils literal notranslate"><span class="pre">response.selector</span></code> instead (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4347">issue 4347</a>)</p></li>
<li><p>LevelDB support has been removed (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4112">issue 4112</a>)</p></li>
<li><p>The following functions have been removed from <code class="xref py py-mod docutils literal notranslate"><span class="pre">scrapy.utils.python</span></code>:
<code class="docutils literal notranslate"><span class="pre">isbinarytext</span></code>, <code class="docutils literal notranslate"><span class="pre">is_writable</span></code>, <code class="docutils literal notranslate"><span class="pre">setattr_default</span></code>, <code class="docutils literal notranslate"><span class="pre">stringify_dict</span></code>
(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4362">issue 4362</a>)</p></li>
</ul>
</div>
<div class="section" id="id15">
<h3>Deprecations<a class="headerlink" href="#id15" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>Using environment variables prefixed with <code class="docutils literal notranslate"><span class="pre">SCRAPY_</span></code> to override settings is deprecated (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4300">issue 4300</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4374">issue 4374</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4375">issue 4375</a>)</p></li>
<li><p><code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.linkextractors.FilteringLinkExtractor</span></code> is deprecated; use <a class="reference internal" href="topics/link-extractors.html#scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor" title="scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor"><code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.linkextractors.LinkExtractor</span></code></a> instead (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4045">issue 4045</a>)</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">noconnect</span></code> query string argument of proxy URLs is deprecated and should be removed from proxy URLs (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4198">issue 4198</a>)</p></li>
<li><p>The <code class="xref py py-meth docutils literal notranslate"><span class="pre">next</span></code> method of <code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.utils.python.MutableChain</span></code> is deprecated; use the global <a class="reference external" href="https://docs.python.org/3/library/functions.html#next" title="(在 Python v3.9)"><code class="xref py py-func docutils literal notranslate"><span class="pre">next()</span></code></a> function or <code class="xref py py-meth docutils literal notranslate"><span class="pre">MutableChain.__next__</span></code> instead (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4153">issue 4153</a>)</p></li>
</ul>
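<p>The recommended replacement for the deprecated <code class="docutils literal notranslate"><span class="pre">next</span></code> method is ordinary iterator access, illustrated here with a plain <code class="docutils literal notranslate"><span class="pre">itertools.chain</span></code> since <code class="docutils literal notranslate"><span class="pre">MutableChain</span></code> is Scrapy-internal and iterates the same way:</p>

```python
from itertools import chain

# Any iterator supports the builtin next() function and the
# __next__ protocol method; a .next() method is Python 2 style.
it = chain([1, 2], [3])

first = next(it)        # preferred: builtin next()
second = it.__next__()  # equivalent protocol call

print(first, second)  # 1 2
```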
</div>
<div class="section" id="id16">
<h3>New features<a class="headerlink" href="#id16" title="永久链接至标题">¶</a></h3>
<ul>
<li><p>Added <a class="reference internal" href="topics/coroutines.html"><span class="doc">partial support</span></a> for Python's <a class="reference external" href="https://docs.python.org/3/reference/compound_stmts.html#async" title="(在 Python v3.9)"><span class="xref std std-ref">coroutine syntax</span></a> and <a class="reference internal" href="topics/asyncio.html"><span class="doc">experimental support</span></a> for <a class="reference external" href="https://docs.python.org/3/library/asyncio.html#module-asyncio" title="(在 Python v3.9)"><code class="xref py py-mod docutils literal notranslate"><span class="pre">asyncio</span></code></a> and <a class="reference external" href="https://docs.python.org/3/library/asyncio.html#module-asyncio" title="(在 Python v3.9)"><code class="xref py py-mod docutils literal notranslate"><span class="pre">asyncio</span></code></a>-powered libraries (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4010">issue 4010</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4259">issue 4259</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4269">issue 4269</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4270">issue 4270</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4271">issue 4271</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4316">issue 4316</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4318">issue 4318</a>)</p></li>
<li><p>The new <a class="reference internal" href="topics/request-response.html#scrapy.http.Response.follow_all" title="scrapy.http.Response.follow_all"><code class="xref py py-meth docutils literal notranslate"><span class="pre">Response.follow_all</span></code></a> method offers the same functionality as <a class="reference internal" href="topics/request-response.html#scrapy.http.Response.follow" title="scrapy.http.Response.follow"><code class="xref py py-meth docutils literal notranslate"><span class="pre">Response.follow</span></code></a> but supports an iterable of URLs as input and returns an iterable of requests (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2582">issue 2582</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4057">issue 4057</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4286">issue 4286</a>)</p></li>
<li><p><a class="reference internal" href="topics/media-pipeline.html#topics-media-pipeline"><span class="std std-ref">Media pipelines</span></a> now support <a class="reference internal" href="topics/media-pipeline.html#media-pipeline-ftp"><span class="std std-ref">FTP
storage</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3928">issue 3928</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3961">issue 3961</a>)</p></li>
<li><p>The new <a class="reference internal" href="topics/request-response.html#scrapy.http.Response.certificate" title="scrapy.http.Response.certificate"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Response.certificate</span></code></a> attribute exposes the SSL certificate of the server as a <a class="reference external" href="https://twistedmatrix.com/documents/current/api/twisted.internet.ssl.Certificate.html" title="(在 Twisted v2.0)"><code class="xref py py-class docutils literal notranslate"><span class="pre">twisted.internet.ssl.Certificate</span></code></a> object for HTTPS responses (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2726">issue 2726</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4054">issue 4054</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/settings.html#std-setting-DNS_RESOLVER"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DNS_RESOLVER</span></code></a> setting allows enabling IPv6 support (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1031">issue 1031</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4227">issue 4227</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/settings.html#std-setting-SCRAPER_SLOT_MAX_ACTIVE_SIZE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">SCRAPER_SLOT_MAX_ACTIVE_SIZE</span></code></a> setting allows configuring the existing soft limit that pauses request downloads when the total response data being processed is too high (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1410">issue 1410</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3551">issue 3551</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/settings.html#std-setting-TWISTED_REACTOR"><code class="xref std std-setting docutils literal notranslate"><span class="pre">TWISTED_REACTOR</span></code></a> setting allows customizing the
<a class="reference external" href="https://twistedmatrix.com/documents/current/api/twisted.internet.reactor.html" title="(在 Twisted v2.0)"><code class="xref py py-mod docutils literal notranslate"><span class="pre">reactor</span></code></a> that Scrapy uses, making it possible to
<a class="reference internal" href="topics/asyncio.html"><span class="doc">enable asyncio support</span></a> or deal with a
<a class="reference internal" href="faq.html#faq-specific-reactor"><span class="std std-ref">common macOS issue</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2905">issue 2905</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4294">issue 4294</a>)</p></li>
<li><p>Scheduler disk and memory queues may now use the class methods
<code class="docutils literal notranslate"><span class="pre">from_crawler</span></code> or <code class="docutils literal notranslate"><span class="pre">from_settings</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3884">issue 3884</a>)</p></li>
<li><p>The new <a class="reference internal" href="topics/request-response.html#scrapy.http.Response.cb_kwargs" title="scrapy.http.Response.cb_kwargs"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Response.cb_kwargs</span></code></a>
attribute serves as a shortcut for <a class="reference internal" href="topics/request-response.html#scrapy.http.Request.cb_kwargs" title="scrapy.http.Request.cb_kwargs"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Response.request.cb_kwargs</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4331">issue 4331</a>)</p></li>
<li><p><a class="reference internal" href="topics/request-response.html#scrapy.http.Response.follow" title="scrapy.http.Response.follow"><code class="xref py py-meth docutils literal notranslate"><span class="pre">Response.follow</span></code></a> now supports a
<code class="docutils literal notranslate"><span class="pre">flags</span></code> parameter, for consistency with <a class="reference internal" href="topics/request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a>
(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4277">issue 4277</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4279">issue 4279</a>)</p></li>
<li><p><a class="reference internal" href="topics/loaders.html#topics-loaders-processors"><span class="std std-ref">Item loader processors</span></a> can now be regular functions, they no longer need to be methods (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3899">issue 3899</a>)</p></li>
<li><p><a class="reference internal" href="topics/spiders.html#scrapy.spiders.Rule" title="scrapy.spiders.Rule"><code class="xref py py-class docutils literal notranslate"><span class="pre">Rule</span></code></a> now accepts an <code class="docutils literal notranslate"><span class="pre">errback</span></code> parameter (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4000">issue 4000</a>)</p></li>
<li><p><a class="reference internal" href="topics/request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> no longer requires a <code class="docutils literal notranslate"><span class="pre">callback</span></code> parameter when an <code class="docutils literal notranslate"><span class="pre">errback</span></code> parameter is specified (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3586">issue 3586</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4008">issue 4008</a>)</p></li>
<li><p><a class="reference internal" href="topics/logging.html#scrapy.logformatter.LogFormatter" title="scrapy.logformatter.LogFormatter"><code class="xref py py-class docutils literal notranslate"><span class="pre">LogFormatter</span></code></a> now supports some additional methods:</p>
<ul class="simple">
<li><p><a class="reference internal" href="topics/logging.html#scrapy.logformatter.LogFormatter.download_error" title="scrapy.logformatter.LogFormatter.download_error"><code class="xref py py-class docutils literal notranslate"><span class="pre">download_error</span></code></a> for download errors</p></li>
<li><p><a class="reference internal" href="topics/logging.html#scrapy.logformatter.LogFormatter.item_error" title="scrapy.logformatter.LogFormatter.item_error"><code class="xref py py-class docutils literal notranslate"><span class="pre">item_error</span></code></a> for exceptions
raised during item processing by <a class="reference internal" href="topics/item-pipeline.html#topics-item-pipeline"><span class="std std-ref">item pipelines</span></a></p></li>
<li><p><a class="reference internal" href="topics/logging.html#scrapy.logformatter.LogFormatter.spider_error" title="scrapy.logformatter.LogFormatter.spider_error"><code class="xref py py-class docutils literal notranslate"><span class="pre">spider_error</span></code></a> for exceptions
raised from <a class="reference internal" href="topics/spiders.html#topics-spiders"><span class="std std-ref">spider callbacks</span></a></p></li>
</ul>
<p>(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/374">issue 374</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3986">issue 3986</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3989">issue 3989</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4176">issue 4176</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4188">issue 4188</a>)</p>
</li>
<li><p>The <code class="xref std std-setting docutils literal notranslate"><span class="pre">FEED_URI</span></code> setting now supports <a class="reference external" href="https://docs.python.org/3/library/pathlib.html#pathlib.Path" title="(在 Python v3.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">pathlib.Path</span></code></a> values (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3731">issue 3731</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4074">issue 4074</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/signals.html#std-signal-request_left_downloader"><code class="xref std std-signal docutils literal notranslate"><span class="pre">request_left_downloader</span></code></a> signal is sent when a request leaves the downloader (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4303">issue 4303</a>)</p></li>
<li><p>Scrapy now logs a warning when it detects a callback that uses <code class="docutils literal notranslate"><span class="pre">yield</span></code> but also returns a value, since the returned value would be lost (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3484">issue 3484</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3869">issue 3869</a>)</p></li>
<li><p><a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> objects now raise an <a class="reference external" href="https://docs.python.org/3/library/exceptions.html#AttributeError" title="(在 Python v3.9)"><code class="xref py py-exc docutils literal notranslate"><span class="pre">AttributeError</span></code></a> exception if they do not have a <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider.start_urls" title="scrapy.spiders.Spider.start_urls"><code class="xref py py-class docutils literal notranslate"><span class="pre">start_urls</span></code></a> attribute nor reimplement <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider.start_requests" title="scrapy.spiders.Spider.start_requests"><code class="xref py py-class docutils literal notranslate"><span class="pre">start_requests</span></code></a>, but have a <code class="docutils literal notranslate"><span class="pre">start_url</span></code> attribute (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4133">issue 4133</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4170">issue 4170</a>)</p></li>
<li><p><a class="reference internal" href="topics/exporters.html#scrapy.exporters.BaseItemExporter" title="scrapy.exporters.BaseItemExporter"><code class="xref py py-class docutils literal notranslate"><span class="pre">BaseItemExporter</span></code></a> subclasses may now use <code class="docutils literal notranslate"><span class="pre">super().__init__(**kwargs)</span></code> instead of <code class="docutils literal notranslate"><span class="pre">self._configure(kwargs)</span></code> in their <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method, passing <code class="docutils literal notranslate"><span class="pre">dont_fail=True</span></code> to the parent <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method if needed, and accessing <code class="docutils literal notranslate"><span class="pre">kwargs</span></code> at <code class="docutils literal notranslate"><span class="pre">self._kwargs</span></code> after calling their parent <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4193">issue 4193</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4370">issue 4370</a>)</p></li>
<li><p>A new <code class="docutils literal notranslate"><span class="pre">keep_fragments</span></code> parameter of <code class="xref py py-func docutils literal notranslate"><span class="pre">scrapy.utils.request.request_fingerprint()</span></code> allows generating different fingerprints for requests with different fragments in their URL (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4104">issue 4104</a>)</p></li>
<li><p>Download handlers (see <a class="reference internal" href="topics/settings.html#std-setting-DOWNLOAD_HANDLERS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOAD_HANDLERS</span></code></a>) may now use the <code class="docutils literal notranslate"><span class="pre">from_settings</span></code> and <code class="docutils literal notranslate"><span class="pre">from_crawler</span></code> class methods that other Scrapy components already supported (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4126">issue 4126</a>)</p></li>
<li><p><code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.utils.python.MutableChain.__iter__</span></code> now returns <code class="docutils literal notranslate"><span class="pre">self</span></code>,
<a class="reference external" href="https://lgtm.com/rules/4850080/">allowing it to be used as a sequence</a>
(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4153">issue 4153</a>)</p></li>
</ul>
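<p>As an illustration of the <code class="docutils literal notranslate"><span class="pre">Response.follow_all</span></code> behavior listed above, here is a rough, stdlib-only sketch of its URL-resolution semantics. The helper below is hypothetical: the real method resolves each URL against the response URL and yields <code class="docutils literal notranslate"><span class="pre">Request</span></code> objects, not strings.</p>

```python
from urllib.parse import urljoin

def follow_all(base_url, urls):
    # Hypothetical stdlib-only sketch: like Response.follow_all, resolve
    # each URL in an iterable against the page URL and yield one item per
    # URL (plain strings here stand in for scrapy.Request objects).
    for url in urls:
        yield urljoin(base_url, url)

links = list(follow_all("https://example.com/a/", ["b.html", "/c.html"]))
# links == ["https://example.com/a/b.html", "https://example.com/c.html"]
```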
</div>
<div class="section" id="id17">
<h3>Bug fixes<a class="headerlink" href="#id17" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>The <a class="reference internal" href="topics/commands.html#std-command-crawl"><code class="xref std std-command docutils literal notranslate"><span class="pre">crawl</span></code></a> command now also exits with exit code 1 when an exception happens before crawling starts (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4175">issue 4175</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4207">issue 4207</a>)</p></li>
<li><p><a class="reference internal" href="topics/link-extractors.html#scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor.extract_links" title="scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor.extract_links"><code class="xref py py-class docutils literal notranslate"><span class="pre">LinkExtractor.extract_links</span></code></a> no longer re-encodes the query string or URLs from non-UTF-8 responses in UTF-8 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/998">issue 998</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1403">issue 1403</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1949">issue 1949</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4321">issue 4321</a>)</p></li>
<li><p>The first spider middleware (see <a class="reference internal" href="topics/settings.html#std-setting-SPIDER_MIDDLEWARES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">SPIDER_MIDDLEWARES</span></code></a>) now also processes exceptions raised from callbacks that are generators (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4260">issue 4260</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4272">issue 4272</a>)</p></li>
<li><p>Redirects to URLs starting with 3 slashes (<code class="docutils literal notranslate"><span class="pre">///</span></code>) are now supported (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4032">issue 4032</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4042">issue 4042</a>)</p></li>
<li><p><a class="reference internal" href="topics/request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> no longer accepts strings as <code class="docutils literal notranslate"><span class="pre">url</span></code> just because they have a colon (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2552">issue 2552</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4094">issue 4094</a>)</p></li>
<li><p>The correct encoding is now used for attach names in
<a class="reference internal" href="topics/email.html#scrapy.mail.MailSender" title="scrapy.mail.MailSender"><code class="xref py py-class docutils literal notranslate"><span class="pre">MailSender</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4229">issue 4229</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4239">issue 4239</a>)</p></li>
<li><p><code class="xref py py-class docutils literal notranslate"><span class="pre">RFPDupeFilter</span></code>, the default <a class="reference internal" href="topics/settings.html#std-setting-DUPEFILTER_CLASS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DUPEFILTER_CLASS</span></code></a>, no longer writes an extra <code class="docutils literal notranslate"><span class="pre">\r</span></code> character on each line under Windows, which made the <code class="docutils literal notranslate"><span class="pre">requests.seen</span></code> file unnecessarily large on that platform (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4283">issue 4283</a>)</p></li>
<li><p>Z shell completion now looks for <code class="docutils literal notranslate"><span class="pre">.html</span></code> files instead of <code class="docutils literal notranslate"><span class="pre">.http</span></code> files, and covers the <code class="docutils literal notranslate"><span class="pre">-h</span></code> command-line switch (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4122">issue 4122</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4291">issue 4291</a>)</p></li>
<li><p>Adding items to a <code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.utils.datatypes.LocalCache</span></code> object without a <code class="docutils literal notranslate"><span class="pre">limit</span></code> defined no longer raises a <a class="reference external" href="https://docs.python.org/3/library/exceptions.html#TypeError" title="(在 Python v3.9)"><code class="xref py py-exc docutils literal notranslate"><span class="pre">TypeError</span></code></a> exception (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4123">issue 4123</a>)</p></li>
<li><p>Fixed a typo in the message of the <a class="reference external" href="https://docs.python.org/3/library/exceptions.html#ValueError" title="(在 Python v3.9)"><code class="xref py py-exc docutils literal notranslate"><span class="pre">ValueError</span></code></a> exception raised when
<code class="xref py py-func docutils literal notranslate"><span class="pre">scrapy.utils.misc.create_instance()</span></code> gets both <code class="docutils literal notranslate"><span class="pre">settings</span></code> and
<code class="docutils literal notranslate"><span class="pre">crawler</span></code> set to <code class="docutils literal notranslate"><span class="pre">None</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4128">issue 4128</a>)</p></li>
</ul>
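<p>To illustrate the stricter URL validation mentioned above (a colon alone no longer makes a string a valid <code class="docutils literal notranslate"><span class="pre">url</span></code>), here is a hypothetical check, not Scrapy's actual validation code: an http-style URL needs an RFC 3986 scheme followed by <code class="docutils literal notranslate"><span class="pre">://</span></code>, so a string like <code class="docutils literal notranslate"><span class="pre">foo:bar</span></code> is rejected even though it contains a colon.</p>

```python
import re

# Hypothetical illustration only. Per RFC 3986, a scheme is a letter
# followed by letters, digits, "+", "-" or "."; an http-style URL then
# needs "://" - a bare colon as in "foo:bar" is not enough.
SCHEME_RE = re.compile(r"^[a-zA-Z][a-zA-Z0-9+.\-]*://")

def looks_like_http_url(url):
    return bool(SCHEME_RE.match(url))

print(looks_like_http_url("https://example.com/page"))  # True
print(looks_like_http_url("foo:bar"))                   # False
```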
</div>
<div class="section" id="id18">
<h3>Documentation<a class="headerlink" href="#id18" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>The API documentation now links to an online, syntax-highlighted view of the corresponding source code (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4148">issue 4148</a>)</p></li>
<li><p>Links to non-existent documentation pages now allow access to the sidebar (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4152">issue 4152</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4169">issue 4169</a>)</p></li>
<li><p>Cross-references within our documentation now display a tooltip when hovered (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4173">issue 4173</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4183">issue 4183</a>)</p></li>
<li><p>Improved the documentation about <a class="reference internal" href="topics/link-extractors.html#scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor.extract_links" title="scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor.extract_links"><code class="xref py py-meth docutils literal notranslate"><span class="pre">LinkExtractor.extract_links</span></code></a> and
simplified <a class="reference internal" href="topics/link-extractors.html#topics-link-extractors"><span class="std std-ref">Link extractors</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4045">issue 4045</a>)</p></li>
<li><p>Clarified how <code class="xref py py-class docutils literal notranslate"><span class="pre">ItemLoader.item</span></code> works (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3574">issue 3574</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4099">issue 4099</a>)</p></li>
<li><p>Clarified that <a class="reference external" href="https://docs.python.org/3/library/logging.html#logging.basicConfig" title="(在 Python v3.9)"><code class="xref py py-func docutils literal notranslate"><span class="pre">logging.basicConfig()</span></code></a> should not be used when also
using <code class="xref py py-class docutils literal notranslate"><span class="pre">CrawlerProcess</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2149">issue 2149</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2352">issue 2352</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3146">issue 3146</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3960">issue 3960</a>)</p></li>
<li><p>Clarified the requirements for <a class="reference internal" href="topics/request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> objects
<a class="reference internal" href="topics/jobs.html#request-serialization"><span class="std std-ref">when using persistence</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4124">issue 4124</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4139">issue 4139</a>)</p></li>
<li><p>Clarified how to install a <a class="reference internal" href="topics/media-pipeline.html#media-pipeline-example"><span class="std std-ref">custom image pipeline</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4034">issue 4034</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4252">issue 4252</a>)</p></li>
<li><p>Fixed the signature of the <code class="docutils literal notranslate"><span class="pre">file_path</span></code> method in <a class="reference internal" href="topics/media-pipeline.html#topics-media-pipeline"><span class="std std-ref">media pipeline</span></a> examples (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4290">issue 4290</a>)</p></li>
<li><p>Covered a backward-incompatible change in Scrapy 1.7.0 affecting custom <code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.core.scheduler.Scheduler</span></code> subclasses (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4274">issue 4274</a>)</p></li>
<li><p>Improved the <code class="docutils literal notranslate"><span class="pre">README.rst</span></code> and <code class="docutils literal notranslate"><span class="pre">CODE_OF_CONDUCT.md</span></code> files (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4059">issue 4059</a>)</p></li>
<li><p>Documentation examples are now checked as part of our test suite, and we have fixed some of the issues detected (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4142">issue 4142</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4146">issue 4146</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4171">issue 4171</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4184">issue 4184</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4190">issue 4190</a>)</p></li>
<li><p>Fixed logic issues, broken links and typos (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4247">issue 4247</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4258">issue 4258</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4282">issue 4282</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4288">issue 4288</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4305">issue 4305</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4308">issue 4308</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4323">issue 4323</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4338">issue 4338</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4359">issue 4359</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4361">issue 4361</a>)</p></li>
<li><p>Improved consistency when referring to the <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method of an object (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4086">issue 4086</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4088">issue 4088</a>)</p></li>
<li><p>Fixed an inconsistency between code and output in <a class="reference internal" href="intro/overview.html#intro-overview"><span class="std std-ref">Scrapy at a glance</span></a>
(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4213">issue 4213</a>)</p></li>
<li><p>Extended <a class="reference external" href="https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html#module-sphinx.ext.intersphinx" title="(在 Sphinx v4.0.0+)"><code class="xref py py-mod docutils literal notranslate"><span class="pre">intersphinx</span></code></a> usage (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4147">issue 4147</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4172">issue 4172</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4185">issue 4185</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4194">issue 4194</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4197">issue 4197</a>)</p></li>
<li><p>We now use a recent version of Python to build the documentation (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4140">issue 4140</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4249">issue 4249</a>)</p></li>
<li><p>Cleaned up the documentation (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4143">issue 4143</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4275">issue 4275</a>)</p></li>
</ul>
</div>
<div class="section" id="id19">
<h3>Quality assurance<a class="headerlink" href="#id19" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>Re-enabled proxy <code class="docutils literal notranslate"><span class="pre">CONNECT</span></code> tests (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2545">issue 2545</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4114">issue 4114</a>)</p></li>
<li><p>Added <a class="reference external" href="https://bandit.readthedocs.io/">Bandit</a> security checks to our test suite (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4162">issue 4162</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4181">issue 4181</a>)</p></li>
<li><p>Added <a class="reference external" href="https://flake8.pycqa.org/en/latest/">Flake8</a> style checks to our test suite and applied many of the corresponding changes (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3944">issue 3944</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3945">issue 3945</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4137">issue 4137</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4157">issue 4157</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4167">issue 4167</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4174">issue 4174</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4186">issue 4186</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4195">issue 4195</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4238">issue 4238</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4246">issue 4246</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4355">issue 4355</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4360">issue 4360</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4365">issue 4365</a>)</p></li>
<li><p>Improved test coverage (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4097">issue 4097</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4218">issue 4218</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4236">issue 4236</a>)</p></li>
<li><p>Started reporting the slowest tests, and improved the performance of some of them (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4163">issue 4163</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4164">issue 4164</a>)</p></li>
<li><p>Fixed broken tests and refactored some tests (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4014">issue 4014</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4095">issue 4095</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4244">issue 4244</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4268">issue 4268</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4372">issue 4372</a>)</p></li>
<li><p>Modified the <a class="reference external" href="https://tox.readthedocs.io/en/latest/index.html" title="(在 tox v3.20)"><span class="xref std std-doc">tox</span></a> configuration to allow running tests with any Python version, run <a class="reference external" href="https://bandit.readthedocs.io/">Bandit</a> and <a class="reference external" href="https://flake8.pycqa.org/en/latest/">Flake8</a> tests by default, and enforce a minimum tox version programmatically (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4179">issue 4179</a>)</p></li>
<li><p>Cleaned up code (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3937">issue 3937</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4208">issue 4208</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4209">issue 4209</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4210">issue 4210</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4212">issue 4212</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4369">issue 4369</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4376">issue 4376</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4378">issue 4378</a>)</p></li>
</ul>
</div>
<div class="section" id="changes-to-scheduler-queue-classes">
<span id="scheduler-queue-changes"></span><h3>Changes to scheduler queue classes<a class="headerlink" href="#changes-to-scheduler-queue-classes" title="永久链接至标题">¶</a></h3>
<p>The following changes may impact any custom queue classes of all types:</p>
<ul class="simple">
<li><p>The <code class="docutils literal notranslate"><span class="pre">push</span></code> method no longer receives a second positional argument containing <code class="docutils literal notranslate"><span class="pre">request.priority</span> <span class="pre">*</span> <span class="pre">-1</span></code>. If you need that value, get it from the first positional argument, <code class="docutils literal notranslate"><span class="pre">request</span></code>, instead, or use the new <code class="xref py py-meth docutils literal notranslate"><span class="pre">priority()</span></code> method in <code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.core.scheduler.ScrapyPriorityQueue</span></code> subclasses.</p></li>
</ul>
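<p>The signature change can be sketched with a toy queue; the <code class="docutils literal notranslate"><span class="pre">Request</span></code> stand-in, the <code class="docutils literal notranslate"><span class="pre">MyQueue</span></code> class and the module-level <code class="docutils literal notranslate"><span class="pre">priority</span></code> helper below are illustrative assumptions, not Scrapy's actual implementation; only the one-argument <code class="docutils literal notranslate"><span class="pre">push</span></code> signature and the <code class="docutils literal notranslate"><span class="pre">request.priority</span> <span class="pre">*</span> <span class="pre">-1</span></code> convention come from the notes above:</p>

```python
class Request:
    """Stand-in for scrapy.http.Request, carrying only a priority."""
    def __init__(self, url, priority=0):
        self.url = url
        self.priority = priority


def priority(request):
    # Mirrors the new ScrapyPriorityQueue.priority() convention:
    # a higher request.priority yields a lower (earlier) sort key.
    return request.priority * -1


class MyQueue:
    def __init__(self):
        self.items = []

    # Old signature: push(self, request, priority)
    # New signature: the priority is derived from the request itself.
    def push(self, request):
        self.items.append((priority(request), request))


q = MyQueue()
r = Request("https://example.com", priority=5)
q.push(r)
assert q.items[0] == (-5, r)
```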
<p>The following changes may impact custom priority queue classes:</p>
<ul class="simple">
<li><p>In the <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method or in the <code class="docutils literal notranslate"><span class="pre">from_crawler</span></code> or <code class="docutils literal notranslate"><span class="pre">from_settings</span></code> class methods:</p>
<ul>
<li><p>The parameter that used to contain a factory function, <code class="docutils literal notranslate"><span class="pre">qfactory</span></code>, is now passed as a keyword argument named <code class="docutils literal notranslate"><span class="pre">downstream_queue_cls</span></code>.</p></li>
<li><p>A new keyword parameter has been added: <code class="docutils literal notranslate"><span class="pre">key</span></code>. It is an empty string for memory queues, and indicates the <code class="xref std std-setting docutils literal notranslate"><span class="pre">JOB_DIR</span></code> value for disk queues.</p></li>
<li><p>The parameter for disk queues that contains data from the previous crawl, <code class="docutils literal notranslate"><span class="pre">startprios</span></code> or <code class="docutils literal notranslate"><span class="pre">slot_startprios</span></code>, is now passed as a keyword argument named <code class="docutils literal notranslate"><span class="pre">startprios</span></code>.</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">serialize</span></code> parameter is no longer passed. The disk queue class must take care of request serialization on its own before writing to disk, using the <code class="xref py py-func docutils literal notranslate"><span class="pre">request_to_dict()</span></code> and <code class="xref py py-func docutils literal notranslate"><span class="pre">request_from_dict()</span></code> functions from the <code class="xref py py-mod docutils literal notranslate"><span class="pre">scrapy.utils.reqser</span></code> module.</p></li>
</ul>
</li>
</ul>
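<p>Put together, a custom priority queue's constructor can be adapted roughly as follows. This is a minimal sketch under the assumptions that your class stores the new keyword arguments as attributes and that no other parameters are involved; the class body is a stub, and only the parameter names <code class="docutils literal notranslate"><span class="pre">downstream_queue_cls</span></code>, <code class="docutils literal notranslate"><span class="pre">key</span></code> and <code class="docutils literal notranslate"><span class="pre">startprios</span></code> come from the notes above:</p>

```python
class CustomPriorityQueue:
    """Illustrative stub of a priority queue using the new-style signature."""

    def __init__(self, crawler, downstream_queue_cls, key, startprios=()):
        self.crawler = crawler
        # Replaces the old positional `qfactory` parameter.
        self.downstream_queue_cls = downstream_queue_cls
        # "" for memory queues, the JOB_DIR value for disk queues.
        self.key = key
        # Disk queues receive start priorities from the previous crawl here.
        self.startprios = startprios

    @classmethod
    def from_crawler(cls, crawler, downstream_queue_cls, key, startprios=()):
        return cls(crawler, downstream_queue_cls, key, startprios)


# A real Crawler and queue class would be passed in practice.
q = CustomPriorityQueue.from_crawler(
    crawler=None,
    downstream_queue_cls=list,
    key="",
)
assert q.key == "" and q.downstream_queue_cls is list
```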
<p>The following changes may impact custom disk and memory queue classes:</p>
<ul class="simple">
<li><p>The signature of the <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method is now <code class="docutils literal notranslate"><span class="pre">__init__(self,</span> <span class="pre">crawler,</span> <span class="pre">key)</span></code>.</p></li>
</ul>
<p>The following changes specifically impact the <code class="xref py py-class docutils literal notranslate"><span class="pre">ScrapyPriorityQueue</span></code> and <code class="xref py py-class docutils literal notranslate"><span class="pre">DownloaderAwarePriorityQueue</span></code> classes from <code class="xref py py-mod docutils literal notranslate"><span class="pre">scrapy.core.scheduler</span></code> and may affect subclasses:</p>
<ul>
<li><p>In the <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method, most of the changes described above apply.</p>
<p><code class="docutils literal notranslate"><span class="pre">__init__</span></code> may still receive all parameters as positional parameters, however:</p>
<ul>
<li><p><code class="docutils literal notranslate"><span class="pre">downstream_queue_cls</span></code>, which replaced <code class="docutils literal notranslate"><span class="pre">qfactory</span></code>, must be instantiated differently.</p>
<p><code class="docutils literal notranslate"><span class="pre">qfactory</span></code> was instantiated with a priority value (integer).</p>
<p>Instances of <code class="docutils literal notranslate"><span class="pre">downstream_queue_cls</span></code> should be created using the new <code class="xref py py-meth docutils literal notranslate"><span class="pre">ScrapyPriorityQueue.qfactory</span></code> or <code class="xref py py-meth docutils literal notranslate"><span class="pre">DownloaderAwarePriorityQueue.pqfactory</span></code> methods.</p>
</li>
<li><p>The new <code class="docutils literal notranslate"><span class="pre">key</span></code> parameter displaced the <code class="docutils literal notranslate"><span class="pre">startprios</span></code> parameter one position to the right.</p></li>
</ul>
</li>
<li><p>The following class attributes have been added:</p>
<ul class="simple">
<li><p><code class="xref py py-attr docutils literal notranslate"><span class="pre">crawler</span></code></p></li>
<li><p><code class="xref py py-attr docutils literal notranslate"><span class="pre">downstream_queue_cls</span></code> (details above)</p></li>
<li><p><code class="xref py py-attr docutils literal notranslate"><span class="pre">key</span></code> (details above)</p></li>
</ul>
</li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">serialize</span></code> attribute has been removed (details above)</p></li>
</ul>
<p>The following changes specifically impact the <code class="xref py py-class docutils literal notranslate"><span class="pre">ScrapyPriorityQueue</span></code> class and may affect subclasses:</p>
<ul>
<li><p>A new <code class="xref py py-meth docutils literal notranslate"><span class="pre">priority()</span></code> method has been added which, given a request, returns <code class="docutils literal notranslate"><span class="pre">request.priority</span> <span class="pre">*</span> <span class="pre">-1</span></code>.</p>
<p>It is used in <code class="xref py py-meth docutils literal notranslate"><span class="pre">push()</span></code> to make up for the removal of its <code class="docutils literal notranslate"><span class="pre">priority</span></code> parameter.</p>
</li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">spider</span></code> attribute has been removed. Use <code class="xref py py-attr docutils literal notranslate"><span class="pre">crawler.spider</span></code> instead.</p></li>
</ul>
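<p>The interplay between the new <code class="xref py py-meth docutils literal notranslate"><span class="pre">priority()</span></code> method and <code class="xref py py-meth docutils literal notranslate"><span class="pre">push()</span></code> can be modeled with a plain <code class="docutils literal notranslate"><span class="pre">heapq</span></code> min-heap. This is an illustrative model, not the real <code class="docutils literal notranslate"><span class="pre">ScrapyPriorityQueue</span></code>; only the negation convention comes from the notes above:</p>

```python
import heapq


class Request:
    def __init__(self, url, priority=0):
        self.url = url
        self.priority = priority


class SketchPriorityQueue:
    def __init__(self):
        self._heap = []
        self._count = 0  # tie-breaker so equal priorities pop in FIFO order

    def priority(self, request):
        # As in the new ScrapyPriorityQueue.priority(): negate so that a
        # higher request.priority sorts first in a min-heap.
        return request.priority * -1

    def push(self, request):
        # No separate priority argument any more; derive it from the request.
        heapq.heappush(self._heap, (self.priority(request), self._count, request))
        self._count += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]


q = SketchPriorityQueue()
q.push(Request("https://low", priority=0))
q.push(Request("https://high", priority=10))
assert q.pop().url == "https://high"  # higher request.priority pops first
```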
<p>The following changes specifically impact the <code class="xref py py-class docutils literal notranslate"><span class="pre">DownloaderAwarePriorityQueue</span></code> class and may affect subclasses:</p>
<ul class="simple">
<li><p>A new <code class="xref py py-attr docutils literal notranslate"><span class="pre">pqueues</span></code> attribute offers a mapping of downloader slot names to the corresponding instances of <code class="xref py py-attr docutils literal notranslate"><span class="pre">downstream_queue_cls</span></code>.</p></li>
</ul>
<p>(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3884">issue 3884</a>)</p>
</div>
</div>
<div class="section" id="scrapy-1-8-0-2019-10-28">
<span id="release-1-8-0"></span><h2>Scrapy 1.8.0 (2019-10-28)<a class="headerlink" href="#scrapy-1-8-0-2019-10-28" title="Permalink to this headline">¶</a></h2>
<p>Highlights:</p>
<ul class="simple">
<li><p>Dropped Python 3.4 support and updated minimum requirements; made Python 3.8 support official</p></li>
<li><p>New <a class="reference internal" href="topics/request-response.html#scrapy.http.Request.from_curl" title="scrapy.http.Request.from_curl"><code class="xref py py-meth docutils literal notranslate"><span class="pre">Request.from_curl</span></code></a> class method</p></li>
<li><p>New <a class="reference internal" href="topics/settings.html#std-setting-ROBOTSTXT_PARSER"><code class="xref std std-setting docutils literal notranslate"><span class="pre">ROBOTSTXT_PARSER</span></code></a> and <a class="reference internal" href="topics/settings.html#std-setting-ROBOTSTXT_USER_AGENT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">ROBOTSTXT_USER_AGENT</span></code></a> settings</p></li>
<li><p>New <a class="reference internal" href="topics/settings.html#std-setting-DOWNLOADER_CLIENT_TLS_CIPHERS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOADER_CLIENT_TLS_CIPHERS</span></code></a> and <a class="reference internal" href="topics/settings.html#std-setting-DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING</span></code></a> settings</p></li>
</ul>
<div class="section" id="id20">
<h3>Backward-incompatible changes<a class="headerlink" href="#id20" title="Permalink to this headline">¶</a></h3>
<ul>
<li><p>Python 3.4 is no longer supported, and some of the minimum requirements of Scrapy have also been updated:</p>
<ul class="simple">
<li><p><a class="reference external" href="https://cssselect.readthedocs.io/en/latest/index.html" title="(在 cssselect v1.1.0)"><span class="xref std std-doc">cssselect</span></a> 0.9.1</p></li>
<li><p><a class="reference external" href="https://cryptography.io/en/latest/">cryptography</a> 2.0</p></li>
<li><p><a class="reference external" href="https://lxml.de/">lxml</a> 3.5.0</p></li>
<li><p><a class="reference external" href="https://www.pyopenssl.org/en/stable/">pyOpenSSL</a> 16.2.0</p></li>
<li><p><a class="reference external" href="https://github.com/scrapy/queuelib">queuelib</a> 1.4.2</p></li>
<li><p><a class="reference external" href="https://service-identity.readthedocs.io/en/stable/">service_identity</a> 16.0.0</p></li>
<li><p><a class="reference external" href="https://six.readthedocs.io/">six</a> 1.10.0</p></li>
<li><p><a class="reference external" href="https://twistedmatrix.com/trac/">Twisted</a> 17.9.0 (16.0.0 with Python 2)</p></li>
<li><p><a class="reference external" href="https://zopeinterface.readthedocs.io/en/latest/">zope.interface</a> 4.1.3</p></li>
</ul>
<p>(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3892">issue 3892</a>)</p>
</li>
<li><p><code class="docutils literal notranslate"><span class="pre">JSONRequest</span></code> is now called <a class="reference internal" href="topics/request-response.html#scrapy.http.JsonRequest" title="scrapy.http.JsonRequest"><code class="xref py py-class docutils literal notranslate"><span class="pre">JsonRequest</span></code></a> for consistency with similar classes (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3929">issue 3929</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3982">issue 3982</a>)</p></li>
<li><p>If you are using a custom context factory
(<a class="reference internal" href="topics/settings.html#std-setting-DOWNLOADER_CLIENTCONTEXTFACTORY"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOADER_CLIENTCONTEXTFACTORY</span></code></a>), its <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method must
accept two new parameters: <code class="docutils literal notranslate"><span class="pre">tls_verbose_logging</span></code> and <code class="docutils literal notranslate"><span class="pre">tls_ciphers</span></code>
(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2111">issue 2111</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3392">issue 3392</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3442">issue 3442</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3450">issue 3450</a>)</p></li>
<li><p><code class="xref py py-class docutils literal notranslate"><span class="pre">ItemLoader</span></code> now turns the values of its input item into lists:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="n">item</span> <span class="o">=</span> <span class="n">MyItem</span><span class="p">()</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">item</span><span class="p">[</span><span class="s1">&#39;field&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="s1">&#39;value1&#39;</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">loader</span> <span class="o">=</span> <span class="n">ItemLoader</span><span class="p">(</span><span class="n">item</span><span class="o">=</span><span class="n">item</span><span class="p">)</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">item</span><span class="p">[</span><span class="s1">&#39;field&#39;</span><span class="p">]</span>
<span class="go">[&#39;value1&#39;]</span>
</pre></div>
</div>
<p>This is needed to allow adding values to existing fields (<code class="docutils literal notranslate"><span class="pre">loader.add_value('field',</span> <span class="pre">'value2')</span></code>).</p>
<p>(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3804">issue 3804</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3819">issue 3819</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3897">issue 3897</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3976">issue 3976</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3998">issue 3998</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4036">issue 4036</a>)</p>
</li>
</ul>
<p>See also <a class="reference internal" href="#id24"><span class="std std-ref">Deprecation removals</span></a> below.</p>
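<p>The list-wrapping behavior shown in the doctest above can be modeled with a tiny stand-in loader. <code class="docutils literal notranslate"><span class="pre">MiniLoader</span></code> is purely illustrative (the real <code class="docutils literal notranslate"><span class="pre">ItemLoader</span></code> also handles processors, selectors and more); it only demonstrates why wrapping input values in lists makes <code class="docutils literal notranslate"><span class="pre">add_value()</span></code> possible:</p>

```python
class MiniLoader:
    """Toy model of the new ItemLoader behavior: input item values
    become lists so that add_value() can append to them."""

    def __init__(self, item=None):
        self.item = dict(item or {})
        for field, value in self.item.items():
            if not isinstance(value, list):
                self.item[field] = [value]  # 'value1' -> ['value1']

    def add_value(self, field, value):
        self.item.setdefault(field, []).append(value)


loader = MiniLoader(item={"field": "value1"})
assert loader.item["field"] == ["value1"]  # wrapped into a list
loader.add_value("field", "value2")
assert loader.item["field"] == ["value1", "value2"]
```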
</div>
<div class="section" id="id21">
<h3>New features<a class="headerlink" href="#id21" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>A new <a class="reference internal" href="topics/request-response.html#scrapy.http.Request.from_curl" title="scrapy.http.Request.from_curl"><code class="xref py py-meth docutils literal notranslate"><span class="pre">Request.from_curl</span></code></a> class
method allows <a class="reference internal" href="topics/developer-tools.html#requests-from-curl"><span class="std std-ref">creating a request from a cURL command</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2985">issue 2985</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3862">issue 3862</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/settings.html#std-setting-ROBOTSTXT_PARSER"><code class="xref std std-setting docutils literal notranslate"><span class="pre">ROBOTSTXT_PARSER</span></code></a> setting allows choosing which <a class="reference external" href="https://www.robotstxt.org/">robots.txt</a>
parser to use. It includes built-in support for
<a class="reference internal" href="topics/downloader-middleware.html#python-robotfileparser"><span class="std std-ref">RobotFileParser</span></a>,
<a class="reference internal" href="topics/downloader-middleware.html#protego-parser"><span class="std std-ref">Protego</span></a> (default), <a class="reference internal" href="topics/downloader-middleware.html#reppy-parser"><span class="std std-ref">Reppy</span></a>, and
<a class="reference internal" href="topics/downloader-middleware.html#rerp-parser"><span class="std std-ref">Robotexclusionrulesparser</span></a>, and allows you to
<a class="reference internal" href="topics/downloader-middleware.html#support-for-new-robots-parser"><span class="std std-ref">implement support for additional parsers</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/754">issue 754</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2669">issue 2669</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3796">issue 3796</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3935">issue 3935</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3969">issue 3969</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4006">issue 4006</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/settings.html#std-setting-ROBOTSTXT_USER_AGENT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">ROBOTSTXT_USER_AGENT</span></code></a> setting allows defining a separate user agent string to use for <a class="reference external" href="https://www.robotstxt.org/">robots.txt</a> parsing (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3931">issue 3931</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3966">issue 3966</a>)</p></li>
<li><p><a class="reference internal" href="topics/spiders.html#scrapy.spiders.Rule" title="scrapy.spiders.Rule"><code class="xref py py-class docutils literal notranslate"><span class="pre">Rule</span></code></a> no longer requires a <a class="reference internal" href="topics/link-extractors.html#scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor" title="scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor"><code class="xref py py-class docutils literal notranslate"><span class="pre">LinkExtractor</span></code></a> parameter (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/781">issue 781</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4016">issue 4016</a>)</p></li>
<li><p>Use the new <a class="reference internal" href="topics/settings.html#std-setting-DOWNLOADER_CLIENT_TLS_CIPHERS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOADER_CLIENT_TLS_CIPHERS</span></code></a> setting to customize the TLS/SSL ciphers used by the default HTTP/1.1 downloader (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3392">issue 3392</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3442">issue 3442</a>)</p></li>
<li><p>Set the new <a class="reference internal" href="topics/settings.html#std-setting-DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING</span></code></a> setting to <code class="docutils literal notranslate"><span class="pre">True</span></code> to enable debug-level messages about TLS connection parameters after establishing HTTPS connections (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2111">issue 2111</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3450">issue 3450</a>)</p></li>
<li><p>Callbacks that receive keyword arguments
(see <a class="reference internal" href="topics/request-response.html#scrapy.http.Request.cb_kwargs" title="scrapy.http.Request.cb_kwargs"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.cb_kwargs</span></code></a>) can now be
tested using the new <a class="reference internal" href="topics/contracts.html#scrapy.contracts.default.CallbackKeywordArgumentsContract" title="scrapy.contracts.default.CallbackKeywordArgumentsContract"><code class="xref py py-class docutils literal notranslate"><span class="pre">&#64;cb_kwargs</span></code></a>
<a class="reference internal" href="topics/contracts.html#topics-contracts"><span class="std std-ref">spider contract</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3985">issue 3985</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3988">issue 3988</a>)</p></li>
<li><p>When a <a class="reference internal" href="topics/contracts.html#scrapy.contracts.default.ScrapesContract" title="scrapy.contracts.default.ScrapesContract"><code class="xref py py-class docutils literal notranslate"><span class="pre">&#64;scrapes</span></code></a> spider contract fails, all missing fields are now reported (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/766">issue 766</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3939">issue 3939</a>)</p></li>
<li><p><a class="reference internal" href="topics/logging.html#custom-log-formats"><span class="std std-ref">Custom log formats</span></a> can now drop messages by
having the corresponding methods of the configured <a class="reference internal" href="topics/settings.html#std-setting-LOG_FORMATTER"><code class="xref std std-setting docutils literal notranslate"><span class="pre">LOG_FORMATTER</span></code></a>
return <code class="docutils literal notranslate"><span class="pre">None</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3984">issue 3984</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3987">issue 3987</a>)</p></li>
<li><p>A much improved completion definition is now available for <a class="reference external" href="https://www.zsh.org/">Zsh</a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4069">issue 4069</a>)</p></li>
</ul>
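<p><code class="xref py py-meth docutils literal notranslate"><span class="pre">Request.from_curl</span></code> turns a cURL command line into a request. A rough, stdlib-only approximation of the idea is sketched below; it is <em>not</em> Scrapy's actual parser (which returns a <code class="docutils literal notranslate"><span class="pre">Request</span></code> object and understands far more cURL flags), and the <code class="docutils literal notranslate"><span class="pre">request_from_curl</span></code> helper name is made up for illustration:</p>

```python
import argparse
import shlex


def request_from_curl(curl_command):
    """Very rough sketch of parsing a cURL command into request parts."""
    parser = argparse.ArgumentParser()
    parser.add_argument("url")
    parser.add_argument("-X", "--request", dest="method", default=None)
    parser.add_argument("-H", "--header", dest="headers", action="append", default=[])
    parser.add_argument("-d", "--data", dest="body", default=None)
    tokens = shlex.split(curl_command)
    assert tokens[0] == "curl", "expected a command starting with 'curl'"
    args = parser.parse_args(tokens[1:])
    # As with curl itself, -d implies POST unless -X overrides it.
    method = args.method or ("POST" if args.body is not None else "GET")
    headers = dict(h.split(":", 1) for h in args.headers)
    headers = {k.strip(): v.strip() for k, v in headers.items()}
    return {"url": args.url, "method": method, "headers": headers, "body": args.body}


req = request_from_curl(
    "curl 'https://example.org/post' -X POST -H 'Accept: */*' -d 'a=1'"
)
assert req["method"] == "POST"
assert req["headers"]["Accept"] == "*/*"
```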
</div>
<div class="section" id="id22">
<h3>Bug fixes<a class="headerlink" href="#id22" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p><code class="xref py py-meth docutils literal notranslate"><span class="pre">ItemLoader.load_item()</span></code> no longer makes later calls to <code class="xref py py-meth docutils literal notranslate"><span class="pre">ItemLoader.get_output_value()</span></code> or <code class="xref py py-meth docutils literal notranslate"><span class="pre">ItemLoader.load_item()</span></code> return empty data (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3804">issue 3804</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3819">issue 3819</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3897">issue 3897</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3976">issue 3976</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3998">issue 3998</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4036">issue 4036</a>)</p></li>
<li><p>Fixed <a class="reference internal" href="topics/stats.html#scrapy.statscollectors.DummyStatsCollector" title="scrapy.statscollectors.DummyStatsCollector"><code class="xref py py-class docutils literal notranslate"><span class="pre">DummyStatsCollector</span></code></a> raising a <a class="reference external" href="https://docs.python.org/3/library/exceptions.html#TypeError" title="(in Python v3.9)"><code class="xref py py-exc docutils literal notranslate"><span class="pre">TypeError</span></code></a> exception (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4007">issue 4007</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4052">issue 4052</a>)</p></li>
<li><p><a class="reference internal" href="topics/media-pipeline.html#scrapy.pipelines.files.FilesPipeline.file_path" title="scrapy.pipelines.files.FilesPipeline.file_path"><code class="xref py py-meth docutils literal notranslate"><span class="pre">FilesPipeline.file_path</span></code></a> and
<a class="reference internal" href="topics/media-pipeline.html#scrapy.pipelines.images.ImagesPipeline.file_path" title="scrapy.pipelines.images.ImagesPipeline.file_path"><code class="xref py py-meth docutils literal notranslate"><span class="pre">ImagesPipeline.file_path</span></code></a> no longer choose
file extensions that are not <a class="reference external" href="https://www.iana.org/assignments/media-types/media-types.xhtml">registered with IANA</a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1287">issue 1287</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3953">issue 3953</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3954">issue 3954</a>)</p></li>
<li><p>When using <a class="reference external" href="https://github.com/boto/botocore">botocore</a> to persist files in S3, all botocore-supported headers are now mapped properly (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3904">issue 3904</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3905">issue 3905</a>)</p></li>
<li><p>FTP passwords in <code class="xref std std-setting docutils literal notranslate"><span class="pre">FEED_URI</span></code> containing percent-escaped characters are now decoded properly (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3941">issue 3941</a>)</p></li>
<li><p>Memory-handling and error-handling issues in <code class="xref py py-func docutils literal notranslate"><span class="pre">scrapy.utils.ssl.get_temp_key_info()</span></code> have been fixed (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3920">issue 3920</a>)</p></li>
</ul>
</div>
<div class="section" id="id23">
<h3>Documentation<a class="headerlink" href="#id23" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>The documentation now covers how to define and configure a <a class="reference internal" href="topics/logging.html#custom-log-formats"><span class="std std-ref">custom log
format</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3616">issue 3616</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3660">issue 3660</a>)</p></li>
<li><p>API documentation added for <code class="xref py py-class docutils literal notranslate"><span class="pre">MarshalItemExporter</span></code>
and <code class="xref py py-class docutils literal notranslate"><span class="pre">PythonItemExporter</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3973">issue 3973</a>)</p></li>
<li><p>API documentation added for <code class="xref py py-class docutils literal notranslate"><span class="pre">BaseItem</span></code> and
<a class="reference internal" href="topics/items.html#scrapy.item.ItemMeta" title="scrapy.item.ItemMeta"><code class="xref py py-class docutils literal notranslate"><span class="pre">ItemMeta</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3999">issue 3999</a>)</p></li>
<li><p>Minor documentation fixes (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2998">issue 2998</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3398">issue 3398</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3597">issue 3597</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3894">issue 3894</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3934">issue 3934</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3978">issue 3978</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3993">issue 3993</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4022">issue 4022</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4028">issue 4028</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4033">issue 4033</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4046">issue 4046</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4050">issue 4050</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4055">issue 4055</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4056">issue 4056</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4061">issue 4061</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4072">issue 4072</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4071">issue 4071</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4079">issue 4079</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4081">issue 4081</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4089">issue 4089</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4093">issue 4093</a>)</p></li>
</ul>
</div>
<div class="section" id="id24">
<span id="id25"></span><h3>Deprecation removals<a class="headerlink" href="#id24" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.xlib</span></code> has been removed (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4015">issue 4015</a>)</p></li>
</ul>
</div>
<div class="section" id="id26">
<h3>Deprecations<a class="headerlink" href="#id26" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>The <a class="reference external" href="https://github.com/google/leveldb">LevelDB</a> storage backend (<code class="docutils literal notranslate"><span class="pre">scrapy.extensions.httpcache.LeveldbCacheStorage</span></code>) of <a class="reference internal" href="topics/downloader-middleware.html#scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware" title="scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">HttpCacheMiddleware</span></code></a> is deprecated (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/4085">issue 4085</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4092">issue 4092</a>)</p></li>
<li><p>Use of the undocumented <code class="docutils literal notranslate"><span class="pre">SCRAPY_PICKLED_SETTINGS_TO_OVERRIDE</span></code> environment variable is deprecated (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3910">issue 3910</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.item.DictItem</span></code> is deprecated; use <a class="reference internal" href="topics/items.html#scrapy.item.Item" title="scrapy.item.Item"><code class="xref py py-class docutils literal notranslate"><span class="pre">Item</span></code></a> instead (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3999">issue 3999</a>)</p></li>
</ul>
</div>
<div class="section" id="other-changes">
<h3>Other changes<a class="headerlink" href="#other-changes" title="Permalink to this headline">¶</a></h3>
<ul>
<li><p>The minimum versions of optional Scrapy requirements that are covered by continuous integration tests have been updated:</p>
<ul class="simple">
<li><p><a class="reference external" href="https://github.com/boto/botocore">botocore</a> 1.3.23</p></li>
<li><p><a class="reference external" href="https://python-pillow.org/">Pillow</a> 3.4.2</p></li>
</ul>
<p>Lower versions of these optional requirements may work, but it is not guaranteed (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3892">issue 3892</a>)</p>
</li>
<li><p>GitHub templates for bug reports and feature requests (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3126">issue 3126</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3471">issue 3471</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3749">issue 3749</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3754">issue 3754</a>)</p></li>
<li><p>Continuous integration fixes (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3923">issue 3923</a>)</p></li>
<li><p>Code cleanups (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3391">issue 3391</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3907">issue 3907</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3946">issue 3946</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3950">issue 3950</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4023">issue 4023</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/4031">issue 4031</a>)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-1-7-4-2019-10-21">
<span id="release-1-7-4"></span><h2>Scrapy 1.7.4 (2019-10-21)<a class="headerlink" href="#scrapy-1-7-4-2019-10-21" title="Permalink to this headline">¶</a></h2>
<p>Revert the fix for <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3804">issue 3804</a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3819">issue 3819</a>), which has a few undesired
side effects (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3897">issue 3897</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3976">issue 3976</a>).</p>
<p>As a result, when an item loader is initialized with an item, <code class="xref py py-meth docutils literal notranslate"><span class="pre">ItemLoader.load_item()</span></code> once again makes later calls to <code class="xref py py-meth docutils literal notranslate"><span class="pre">ItemLoader.get_output_value()</span></code> or <code class="xref py py-meth docutils literal notranslate"><span class="pre">ItemLoader.load_item()</span></code> return empty data.</p>
</div>
<div class="section" id="scrapy-1-7-3-2019-08-01">
<span id="release-1-7-3"></span><h2>Scrapy 1.7.3 (2019-08-01)<a class="headerlink" href="#scrapy-1-7-3-2019-08-01" title="Permalink to this headline">¶</a></h2>
<p>Enforce lxml 4.3.5 or lower for Python 3.4 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3912">issue 3912</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3918">issue 3918</a>)</p>
</div>
<div class="section" id="scrapy-1-7-2-2019-07-23">
<span id="release-1-7-2"></span><h2>Scrapy 1.7.2 (2019-07-23)<a class="headerlink" href="#scrapy-1-7-2-2019-07-23" title="Permalink to this headline">¶</a></h2>
<p>Fix Python 2 support (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3889">issue 3889</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3893">issue 3893</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3896">issue 3896</a>)</p>
</div>
<div class="section" id="scrapy-1-7-1-2019-07-18">
<span id="release-1-7-1"></span><h2>Scrapy 1.7.1 (2019-07-18)<a class="headerlink" href="#scrapy-1-7-1-2019-07-18" title="Permalink to this headline">¶</a></h2>
<p>Re-packaging of Scrapy 1.7.0, which was missing some changes in PyPI.</p>
</div>
<div class="section" id="scrapy-1-7-0-2019-07-18">
<span id="release-1-7-0"></span><h2>Scrapy 1.7.0 (2019-07-18)<a class="headerlink" href="#scrapy-1-7-0-2019-07-18" title="Permalink to this headline">¶</a></h2>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Make sure you install Scrapy 1.7.1. The Scrapy 1.7.0 package on PyPI is the result of an erroneous commit tagging and does not include all the changes described below.</p>
</div>
<p>Highlights:</p>
<ul class="simple">
<li><p>Improvements for crawls targeting multiple domains</p></li>
<li><p>A cleaner way to pass arguments to callbacks</p></li>
<li><p>A new class for JSON requests</p></li>
<li><p>Improvements for rule-based spiders</p></li>
<li><p>New features for feed exports</p></li>
</ul>
<div class="section" id="id27">
<h3>Backward-incompatible changes<a class="headerlink" href="#id27" title="Permalink to this headline">¶</a></h3>
<ul>
<li><p><code class="docutils literal notranslate"><span class="pre">429</span></code> is now part of the <a class="reference internal" href="topics/downloader-middleware.html#std-setting-RETRY_HTTP_CODES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">RETRY_HTTP_CODES</span></code></a> setting by default</p>
<p>This change is <strong>backward incompatible</strong>. If you don't want to retry <code class="docutils literal notranslate"><span class="pre">429</span></code>, you must override <a class="reference internal" href="topics/downloader-middleware.html#std-setting-RETRY_HTTP_CODES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">RETRY_HTTP_CODES</span></code></a> accordingly.</p>
</li>
<li><p><a class="reference internal" href="topics/api.html#scrapy.crawler.Crawler" title="scrapy.crawler.Crawler"><code class="xref py py-class docutils literal notranslate"><span class="pre">Crawler</span></code></a>, <code class="xref py py-class docutils literal notranslate"><span class="pre">CrawlerRunner.crawl</span></code> and <code class="xref py py-class docutils literal notranslate"><span class="pre">CrawlerRunner.create_crawler</span></code> no longer accept a <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> subclass instance, they only accept a <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> subclass now.</p>
<p><a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> subclass instances were never meant to work, and they were not working as one would expect: instead of using the passed <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> subclass instance, their <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider.from_crawler" title="scrapy.spiders.Spider.from_crawler"><code class="xref py py-class docutils literal notranslate"><span class="pre">from_crawler</span></code></a> method was called to generate a new instance.</p>
</li>
<li><p>Non-default values for the <a class="reference internal" href="topics/settings.html#std-setting-SCHEDULER_PRIORITY_QUEUE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">SCHEDULER_PRIORITY_QUEUE</span></code></a> setting may stop working. Scheduler priority queue classes now need to handle <a class="reference internal" href="topics/request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> objects instead of arbitrary Python data structures.</p></li>
<li><p>An additional <code class="docutils literal notranslate"><span class="pre">crawler</span></code> parameter has been added to the <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method of the <code class="xref py py-class docutils literal notranslate"><span class="pre">Scheduler</span></code> class. Custom scheduler subclasses which don't accept arbitrary parameters in their <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method might break because of this change.</p>
<p>For more information, see <a class="reference internal" href="topics/settings.html#std-setting-SCHEDULER"><code class="xref std std-setting docutils literal notranslate"><span class="pre">SCHEDULER</span></code></a>.</p>
</li>
</ul>
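<p>For example, a project that prefers the pre-1.7 retry behavior can redefine the setting in its <code class="docutils literal notranslate"><span class="pre">settings.py</span></code>. This is a minimal sketch; the list below assumes the default value the setting had before this change:</p>

```python
# settings.py (sketch): restore the pre-1.7 behavior by redefining
# RETRY_HTTP_CODES without 429 ("Too Many Requests").
RETRY_HTTP_CODES = [500, 502, 503, 504, 522, 524, 408]
```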
<p>See also <a class="reference internal" href="#id31"><span class="std std-ref">Deprecation removals</span></a> below.</p>
</div>
<div class="section" id="id28">
<h3>New features<a class="headerlink" href="#id28" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>A new scheduler priority queue, <code class="docutils literal notranslate"><span class="pre">scrapy.pqueues.DownloaderAwarePriorityQueue</span></code>, may be <a class="reference internal" href="topics/broad-crawls.html#broad-crawls-scheduler-priority-queue"><span class="std std-ref">enabled</span></a> for a significant scheduling improvement on crawls targeting multiple web domains, at the cost of no <a class="reference internal" href="topics/settings.html#std-setting-CONCURRENT_REQUESTS_PER_IP"><code class="xref std std-setting docutils literal notranslate"><span class="pre">CONCURRENT_REQUESTS_PER_IP</span></code></a> support (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3520">issue 3520</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/request-response.html#scrapy.http.Request.cb_kwargs" title="scrapy.http.Request.cb_kwargs"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Request.cb_kwargs</span></code></a> attribute provides a cleaner way to pass keyword arguments to callback methods (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1138">issue 1138</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3563">issue 3563</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/request-response.html#scrapy.http.JsonRequest" title="scrapy.http.JsonRequest"><code class="xref py py-class docutils literal notranslate"><span class="pre">JSONRequest</span></code></a> class offers a more convenient way to build JSON requests (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3504">issue 3504</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3505">issue 3505</a>)</p></li>
<li><p>A <code class="docutils literal notranslate"><span class="pre">process_request</span></code> callback passed to the <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Rule" title="scrapy.spiders.Rule"><code class="xref py py-class docutils literal notranslate"><span class="pre">Rule</span></code></a> <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method now receives the <a class="reference internal" href="topics/request-response.html#scrapy.http.Response" title="scrapy.http.Response"><code class="xref py py-class docutils literal notranslate"><span class="pre">Response</span></code></a> object that originated the request as its second argument (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3682">issue 3682</a>)</p></li>
<li><p>A new <code class="docutils literal notranslate"><span class="pre">restrict_text</span></code> parameter for the <a class="reference internal" href="topics/link-extractors.html#scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor" title="scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor"><code class="xref py py-attr docutils literal notranslate"><span class="pre">LinkExtractor</span></code></a> <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method allows filtering links by their text (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3622">issue 3622</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3635">issue 3635</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/feed-exports.html#std-setting-FEED_STORAGE_S3_ACL"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FEED_STORAGE_S3_ACL</span></code></a> setting allows defining a custom ACL for feeds exported to Amazon S3 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3607">issue 3607</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/feed-exports.html#std-setting-FEED_STORAGE_FTP_ACTIVE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FEED_STORAGE_FTP_ACTIVE</span></code></a> setting allows using FTP's active connection mode for feeds exported to FTP servers (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3829">issue 3829</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/downloader-middleware.html#std-setting-METAREFRESH_IGNORE_TAGS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">METAREFRESH_IGNORE_TAGS</span></code></a> setting allows overriding which HTML tags are ignored when searching a response for HTML meta tags that trigger a redirect (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1422">issue 1422</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3768">issue 3768</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/downloader-middleware.html#std-reqmeta-redirect_reasons"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">redirect_reasons</span></code></a> request meta key exposes the reason (status code, meta refresh) behind every followed redirect (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3581">issue 3581</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3687">issue 3687</a>)</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">SCRAPY_CHECK</span></code> variable is now set to the <code class="docutils literal notranslate"><span class="pre">true</span></code> string during runs
of the <a class="reference internal" href="topics/commands.html#std-command-check"><code class="xref std std-command docutils literal notranslate"><span class="pre">check</span></code></a> command, which allows <a class="reference internal" href="topics/contracts.html#detecting-contract-check-runs"><span class="std std-ref">detecting contract
check runs from code</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3704">issue 3704</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3739">issue 3739</a>)</p></li>
<li><p>A new <a class="reference internal" href="topics/items.html#scrapy.item.Item.deepcopy" title="scrapy.item.Item.deepcopy"><code class="xref py py-meth docutils literal notranslate"><span class="pre">Item.deepcopy()</span></code></a> method makes it
easier to <a class="reference internal" href="topics/items.html#copying-items"><span class="std std-ref">deep-copy items</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1493">issue 1493</a>,
<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3671">issue 3671</a>)</p></li>
<li><p><a class="reference internal" href="topics/extensions.html#scrapy.extensions.corestats.CoreStats" title="scrapy.extensions.corestats.CoreStats"><code class="xref py py-class docutils literal notranslate"><span class="pre">CoreStats</span></code></a> also logs <code class="docutils literal notranslate"><span class="pre">elapsed_time_seconds</span></code> now (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3638">issue 3638</a>)</p></li>
<li><p>Exceptions from <code class="xref py py-class docutils literal notranslate"><span class="pre">ItemLoader</span></code> <a class="reference internal" href="topics/loaders.html#topics-loaders-processors"><span class="std std-ref">input and output processors</span></a> are now more verbose (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3836">issue 3836</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3840">issue 3840</a>)</p></li>
<li><p><a class="reference internal" href="topics/api.html#scrapy.crawler.Crawler" title="scrapy.crawler.Crawler"><code class="xref py py-class docutils literal notranslate"><span class="pre">Crawler</span></code></a>, <code class="xref py py-class docutils literal notranslate"><span class="pre">CrawlerRunner.crawl</span></code> and <code class="xref py py-class docutils literal notranslate"><span class="pre">CrawlerRunner.create_crawler</span></code> now fail gracefully if they receive a <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> subclass instance instead of the subclass itself (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2283">issue 2283</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3610">issue 3610</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3872">issue 3872</a>)</p></li>
</ul>
</div>
<div class="section" id="id29">
<h3>Bug fixes<a class="headerlink" href="#id29" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p><a class="reference internal" href="topics/spider-middleware.html#scrapy.spidermiddlewares.SpiderMiddleware.process_spider_exception" title="scrapy.spidermiddlewares.SpiderMiddleware.process_spider_exception"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_spider_exception()</span></code></a> is now also invoked for generators (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/220">issue 220</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2061">issue 2061</a>)</p></li>
<li><p>System exceptions like <a class="reference external" href="https://docs.python.org/3/library/exceptions.html#KeyboardInterrupt">KeyboardInterrupt</a> are no longer caught (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3726">issue 3726</a>)</p></li>
<li><p><code class="xref py py-meth docutils literal notranslate"><span class="pre">ItemLoader.load_item()</span></code> no longer makes later calls to <code class="xref py py-meth docutils literal notranslate"><span class="pre">ItemLoader.get_output_value()</span></code> or <code class="xref py py-meth docutils literal notranslate"><span class="pre">ItemLoader.load_item()</span></code> return empty data (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3804">issue 3804</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3819">issue 3819</a>)</p></li>
<li><p>The images pipeline (<a class="reference internal" href="topics/media-pipeline.html#scrapy.pipelines.images.ImagesPipeline" title="scrapy.pipelines.images.ImagesPipeline"><code class="xref py py-class docutils literal notranslate"><span class="pre">ImagesPipeline</span></code></a>) no
longer ignores these Amazon S3 settings: <a class="reference internal" href="topics/settings.html#std-setting-AWS_ENDPOINT_URL"><code class="xref std std-setting docutils literal notranslate"><span class="pre">AWS_ENDPOINT_URL</span></code></a>,
<a class="reference internal" href="topics/settings.html#std-setting-AWS_REGION_NAME"><code class="xref std std-setting docutils literal notranslate"><span class="pre">AWS_REGION_NAME</span></code></a>, <a class="reference internal" href="topics/settings.html#std-setting-AWS_USE_SSL"><code class="xref std std-setting docutils literal notranslate"><span class="pre">AWS_USE_SSL</span></code></a>, <a class="reference internal" href="topics/settings.html#std-setting-AWS_VERIFY"><code class="xref std std-setting docutils literal notranslate"><span class="pre">AWS_VERIFY</span></code></a>
(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3625">issue 3625</a>)</p></li>
<li><p>Fixed a memory leak in <code class="docutils literal notranslate"><span class="pre">scrapy.pipelines.media.MediaPipeline</span></code> affecting, for example, non-200 responses and exceptions from custom middlewares (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3813">issue 3813</a>)</p></li>
<li><p>Requests with private callbacks are now correctly unserialized from disk (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3790">issue 3790</a>)</p></li>
<li><p><a class="reference internal" href="topics/request-response.html#scrapy.http.FormRequest.from_response" title="scrapy.http.FormRequest.from_response"><code class="xref py py-meth docutils literal notranslate"><span class="pre">FormRequest.from_response()</span></code></a> now handles invalid methods like major web browsers (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3777">issue 3777</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3794">issue 3794</a>)</p></li>
</ul>
</div>
<div class="section" id="id30">
<h3>Documentation<a class="headerlink" href="#id30" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>A new topic, <a class="reference internal" href="topics/dynamic-content.html#topics-dynamic-content"><span class="std std-ref">Selecting dynamically-loaded content</span></a>, covers recommended approaches to read dynamically-loaded data (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3703">issue 3703</a>)</p></li>
<li><p><a class="reference internal" href="topics/broad-crawls.html#topics-broad-crawls"><span class="std std-ref">Broad Crawls</span></a> now features information about memory usage (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1264">issue 1264</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3866">issue 3866</a>)</p></li>
<li><p>The documentation of <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Rule" title="scrapy.spiders.Rule"><code class="xref py py-class docutils literal notranslate"><span class="pre">Rule</span></code></a> now covers how to access
the text of a link when using <a class="reference internal" href="topics/spiders.html#scrapy.spiders.CrawlSpider" title="scrapy.spiders.CrawlSpider"><code class="xref py py-class docutils literal notranslate"><span class="pre">CrawlSpider</span></code></a>
(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3711">issue 3711</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3712">issue 3712</a>)</p></li>
<li><p>A new section, <a class="reference internal" href="topics/downloader-middleware.html#httpcache-storage-custom"><span class="std std-ref">Writing your own storage backend</span></a>, covers writing a custom
cache storage backend for
<a class="reference internal" href="topics/downloader-middleware.html#scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware" title="scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">HttpCacheMiddleware</span></code></a>
(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3683">issue 3683</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3692">issue 3692</a>)</p></li>
<li><p>A new <a class="reference internal" href="faq.html#faq"><span class="std std-ref">FAQ</span></a> entry, <a class="reference internal" href="faq.html#faq-split-item"><span class="std std-ref">How to split an item into multiple items in an item pipeline?</span></a>, explains what to do when you want to split an item into multiple items from an item pipeline (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2240">issue 2240</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3672">issue 3672</a>)</p></li>
<li><p>Updated the <a class="reference internal" href="faq.html#faq-bfo-dfo"><span class="std std-ref">FAQ entry about crawl order</span></a> to explain why the first few requests rarely follow the desired order (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1739">issue 1739</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3621">issue 3621</a>)</p></li>
<li><p>The <a class="reference internal" href="topics/settings.html#std-setting-LOGSTATS_INTERVAL"><code class="xref std std-setting docutils literal notranslate"><span class="pre">LOGSTATS_INTERVAL</span></code></a> setting (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3730">issue 3730</a>), the <a class="reference internal" href="topics/media-pipeline.html#scrapy.pipelines.files.FilesPipeline.file_path" title="scrapy.pipelines.files.FilesPipeline.file_path"><code class="xref py py-meth docutils literal notranslate"><span class="pre">FilesPipeline.file_path</span></code></a> and <a class="reference internal" href="topics/media-pipeline.html#scrapy.pipelines.images.ImagesPipeline.file_path" title="scrapy.pipelines.images.ImagesPipeline.file_path"><code class="xref py py-meth docutils literal notranslate"><span class="pre">ImagesPipeline.file_path</span></code></a> methods (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2253">issue 2253</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3609">issue 3609</a>), and the <code class="xref py py-meth docutils literal notranslate"><span class="pre">Crawler.stop()</span></code> method (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3842">issue 3842</a>) are now documented</p></li>
<li><p>Some parts of the documentation that were confusing or misleading are now clearer (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1347">issue 1347</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1789">issue 1789</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2289">issue 2289</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3069">issue 3069</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3615">issue 3615</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3626">issue 3626</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3668">issue 3668</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3670">issue 3670</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3673">issue 3673</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3728">issue 3728</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3762">issue 3762</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3861">issue 3861</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3882">issue 3882</a>)</p></li>
<li><p>Minor documentation fixes (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3648">issue 3648</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3649">issue 3649</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3662">issue 3662</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3674">issue 3674</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3676">issue 3676</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3694">issue 3694</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3724">issue 3724</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3764">issue 3764</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3767">issue 3767</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3791">issue 3791</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3797">issue 3797</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3806">issue 3806</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3812">issue 3812</a>)</p></li>
</ul>
</div>
<div class="section" id="id31">
<span id="id32"></span><h3>Deprecation removals<a class="headerlink" href="#id31" title="Permalink to this headline">¶</a></h3>
<p>The following deprecated APIs have been removed (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3578">issue 3578</a>):</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.conf</span></code> (use <a class="reference internal" href="topics/api.html#scrapy.crawler.Crawler.settings" title="scrapy.crawler.Crawler.settings"><code class="xref py py-attr docutils literal notranslate"><span class="pre">Crawler.settings</span></code></a>)</p></li>
<li><p>From <code class="docutils literal notranslate"><span class="pre">scrapy.core.downloader.handlers</span></code>:</p>
<ul>
<li><p><code class="docutils literal notranslate"><span class="pre">http.HttpDownloadHandler</span></code> (use <code class="docutils literal notranslate"><span class="pre">http10.HTTP10DownloadHandler</span></code>)</p></li>
</ul>
</li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.loader.ItemLoader._get_values</span></code> (use <code class="docutils literal notranslate"><span class="pre">_get_xpathvalues</span></code>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.loader.XPathItemLoader</span></code> (use <code class="xref py py-class docutils literal notranslate"><span class="pre">ItemLoader</span></code>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.log</span></code> (see <a class="reference internal" href="topics/logging.html#topics-logging"><span class="std std-ref">Logging</span></a>)</p></li>
<li><p>From <code class="docutils literal notranslate"><span class="pre">scrapy.pipelines</span></code>:</p>
<ul>
<li><p><code class="docutils literal notranslate"><span class="pre">files.FilesPipeline.file_key</span></code> (use <code class="docutils literal notranslate"><span class="pre">file_path</span></code>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">images.ImagesPipeline.file_key</span></code> (use <code class="docutils literal notranslate"><span class="pre">file_path</span></code>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">images.ImagesPipeline.image_key</span></code> (use <code class="docutils literal notranslate"><span class="pre">file_path</span></code>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">images.ImagesPipeline.thumb_key</span></code> (use <code class="docutils literal notranslate"><span class="pre">thumb_path</span></code>)</p></li>
</ul>
</li>
<li><p>From both <code class="docutils literal notranslate"><span class="pre">scrapy.selector</span></code> and <code class="docutils literal notranslate"><span class="pre">scrapy.selector.lxmlsel</span></code>:</p>
<ul>
<li><p><code class="docutils literal notranslate"><span class="pre">HtmlXPathSelector</span></code> (use <a class="reference internal" href="topics/selectors.html#scrapy.selector.Selector" title="scrapy.selector.Selector"><code class="xref py py-class docutils literal notranslate"><span class="pre">Selector</span></code></a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">XmlXPathSelector</span></code> (use <a class="reference internal" href="topics/selectors.html#scrapy.selector.Selector" title="scrapy.selector.Selector"><code class="xref py py-class docutils literal notranslate"><span class="pre">Selector</span></code></a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">XPathSelector</span></code> (use <a class="reference internal" href="topics/selectors.html#scrapy.selector.Selector" title="scrapy.selector.Selector"><code class="xref py py-class docutils literal notranslate"><span class="pre">Selector</span></code></a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">XPathSelectorList</span></code> (use <a class="reference internal" href="topics/selectors.html#scrapy.selector.Selector" title="scrapy.selector.Selector"><code class="xref py py-class docutils literal notranslate"><span class="pre">Selector</span></code></a>)</p></li>
</ul>
</li>
<li><p>From <code class="docutils literal notranslate"><span class="pre">scrapy.selector.csstranslator</span></code>:</p>
<ul>
<li><p><code class="docutils literal notranslate"><span class="pre">ScrapyGenericTranslator</span></code> (use <a class="reference external" href="https://parsel.readthedocs.io/en/latest/parsel.html#parsel.csstranslator.GenericTranslator">parsel.csstranslator.GenericTranslator</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">ScrapyHTMLTranslator</span></code> (use <a class="reference external" href="https://parsel.readthedocs.io/en/latest/parsel.html#parsel.csstranslator.HTMLTranslator">parsel.csstranslator.HTMLTranslator</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">ScrapyXPathExpr</span></code> (use <a class="reference external" href="https://parsel.readthedocs.io/en/latest/parsel.html#parsel.csstranslator.XPathExpr">parsel.csstranslator.XPathExpr</a>)</p></li>
</ul>
</li>
<li><p>From <a class="reference internal" href="topics/selectors.html#scrapy.selector.Selector" title="scrapy.selector.Selector"><code class="xref py py-class docutils literal notranslate"><span class="pre">Selector</span></code></a>:</p>
<ul>
<li><p><code class="docutils literal notranslate"><span class="pre">_root</span></code> (both the <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method argument and the object property, use <code class="docutils literal notranslate"><span class="pre">root</span></code>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">extract_unquoted</span></code> (use <code class="docutils literal notranslate"><span class="pre">getall</span></code>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">select</span></code> (use <code class="docutils literal notranslate"><span class="pre">xpath</span></code>)</p></li>
</ul>
</li>
<li><p>From <a class="reference internal" href="topics/selectors.html#scrapy.selector.SelectorList" title="scrapy.selector.SelectorList"><code class="xref py py-class docutils literal notranslate"><span class="pre">SelectorList</span></code></a>:</p>
<ul>
<li><p><code class="docutils literal notranslate"><span class="pre">extract_unquoted</span></code> (use <code class="docutils literal notranslate"><span class="pre">getall</span></code>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">select</span></code> (use <code class="docutils literal notranslate"><span class="pre">xpath</span></code>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">x</span></code> (use <code class="docutils literal notranslate"><span class="pre">xpath</span></code>)</p></li>
</ul>
</li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.spiders.BaseSpider</span></code> (use <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a>)</p></li>
<li><p>From <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider" title="scrapy.spiders.Spider"><code class="xref py py-class docutils literal notranslate"><span class="pre">Spider</span></code></a> (and subclasses):</p>
<ul>
<li><p><code class="docutils literal notranslate"><span class="pre">DOWNLOAD_DELAY</span></code> (use <a class="reference internal" href="topics/settings.html#spider-download-delay-attribute"><span class="std std-ref">download_delay</span></a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">set_crawler</span></code> (use <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider.from_crawler" title="scrapy.spiders.Spider.from_crawler"><code class="xref py py-meth docutils literal notranslate"><span class="pre">from_crawler()</span></code></a>)</p></li>
</ul>
</li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.spiders.spiders</span></code> (use <a class="reference internal" href="topics/api.html#scrapy.spiderloader.SpiderLoader" title="scrapy.spiderloader.SpiderLoader"><code class="xref py py-class docutils literal notranslate"><span class="pre">SpiderLoader</span></code></a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.telnet</span></code> (use <a class="reference internal" href="topics/extensions.html#module-scrapy.extensions.telnet" title="scrapy.extensions.telnet: Telnet console"><code class="xref py py-mod docutils literal notranslate"><span class="pre">scrapy.extensions.telnet</span></code></a>)</p></li>
<li><p>From <code class="docutils literal notranslate"><span class="pre">scrapy.utils.python</span></code>:</p>
<ul>
<li><p><code class="docutils literal notranslate"><span class="pre">str_to_unicode</span></code> （改用 <code class="docutils literal notranslate"><span class="pre">to_unicode</span></code> ）</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">unicode_to_str</span></code> （改用 <code class="docutils literal notranslate"><span class="pre">to_bytes</span></code> ）</p></li>
</ul>
</li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.utils.response.body_or_str</span></code></p></li>
</ul>
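<p>上面提到的 <code class="docutils literal notranslate"><span class="pre">to_unicode</span></code> / <code class="docutils literal notranslate"><span class="pre">to_bytes</span></code> 替代函数的典型行为可以用下面的纯 Python 草图说明（独立实现、仅作示意，并非 <code class="docutils literal notranslate"><span class="pre">scrapy.utils.python</span></code> 的实际源码）：</p>

```python
# to_unicode / to_bytes 行为示意（与 scrapy.utils.python 中同名函数类似，
# 此处为独立实现，仅作示意）。
def to_unicode(text, encoding="utf-8", errors="strict"):
    """把 bytes 解码为 str；str 原样返回。"""
    if isinstance(text, str):
        return text
    if not isinstance(text, (bytes, bytearray)):
        raise TypeError(f"to_unicode: 期望 str 或 bytes，得到 {type(text).__name__}")
    return bytes(text).decode(encoding, errors)


def to_bytes(text, encoding="utf-8", errors="strict"):
    """把 str 编码为 bytes；bytes 原样返回。"""
    if isinstance(text, bytes):
        return text
    if not isinstance(text, str):
        raise TypeError(f"to_bytes: 期望 str 或 bytes，得到 {type(text).__name__}")
    return text.encode(encoding, errors)


print(to_unicode(b"scrapy"))  # scrapy
print(to_bytes("scrapy"))     # b'scrapy'
```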
<p>以下不推荐使用的设置也已删除（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3578">issue 3578</a> ）：</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">SPIDER_MANAGER_CLASS</span></code> （改用 <a class="reference internal" href="topics/settings.html#std-setting-SPIDER_LOADER_CLASS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">SPIDER_LOADER_CLASS</span></code></a> ）</p></li>
</ul>
</div>
<div class="section" id="id33">
<h3>弃用<a class="headerlink" href="#id33" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>将 <a class="reference internal" href="topics/settings.html#std-setting-SCHEDULER_PRIORITY_QUEUE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">SCHEDULER_PRIORITY_QUEUE</span></code></a> 设置为 <code class="docutils literal notranslate"><span class="pre">queuelib.PriorityQueue</span></code> 的用法已弃用。请改用 <code class="docutils literal notranslate"><span class="pre">scrapy.pqueues.ScrapyPriorityQueue</span></code> 。</p></li>
<li><p>传递给 <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Rule" title="scrapy.spiders.Rule"><code class="xref py py-class docutils literal notranslate"><span class="pre">Rule</span></code></a> 的 <code class="docutils literal notranslate"><span class="pre">process_request</span></code> 回调如果不接受两个参数，将被弃用。</p></li>
<li><p>以下模块已弃用：</p>
<ul>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.utils.http</span></code> （改用 <a class="reference external" href="https://w3lib.readthedocs.io/en/latest/w3lib.html#module-w3lib.http">w3lib.http</a> ）</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.utils.markup</span></code> （改用 <a class="reference external" href="https://w3lib.readthedocs.io/en/latest/w3lib.html#module-w3lib.html">w3lib.html</a> ）</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.utils.multipart</span></code> （改用 <a class="reference external" href="https://urllib3.readthedocs.io/en/latest/index.html">urllib3</a> ）</p></li>
</ul>
</li>
<li><p>对于 Python 3 代码库，<code class="docutils literal notranslate"><span class="pre">scrapy.utils.datatypes.MergeDict</span></code> 类已弃用。请改用 <a class="reference external" href="https://docs.python.org/3/library/collections.html#collections.ChainMap" title="(在 Python v3.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">ChainMap</span></code></a> 。（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3878">issue 3878</a> ）</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.utils.gz.is_gzipped</span></code> 函数已弃用。请改用 <code class="docutils literal notranslate"><span class="pre">scrapy.utils.gz.gzip_magic_number</span></code> 。</p></li>
</ul>
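<p>从 <code class="docutils literal notranslate"><span class="pre">MergeDict</span></code> 迁移到标准库 <code class="docutils literal notranslate"><span class="pre">collections.ChainMap</span></code> 的最小示例（字典内容为虚构数据，仅作示意）：</p>

```python
from collections import ChainMap

defaults = {"timeout": 30, "retries": 2}
overrides = {"timeout": 5}

# ChainMap 按传入顺序查找键：先查 overrides，再查 defaults
settings = ChainMap(overrides, defaults)
print(settings["timeout"])  # 5，来自 overrides
print(settings["retries"])  # 2，来自 defaults
```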
</div>
<div class="section" id="id34">
<h3>其他变化<a class="headerlink" href="#id34" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>现在可以在同一个 <a class="reference external" href="https://pypi.org/project/tox/">tox</a> 环境中并行运行所有测试；文档现在涵盖了 <a class="reference internal" href="contributing.html#running-tests"><span class="std std-ref">这种以及其他运行测试的方式</span></a>（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3707">issue 3707</a> ）</p></li>
<li><p>现在可以生成 API 文档覆盖率报告（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3806">issue 3806</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3810">issue 3810</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3860">issue 3860</a> ）</p></li>
<li><p><a class="reference internal" href="contributing.html#documentation-policies"><span class="std std-ref">文档政策</span></a> 现在要求编写 <a class="reference external" href="https://docs.python.org/3/glossary.html#term-docstring">docstring</a>（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3701">issue 3701</a> ），并且要遵循 <a class="reference external" href="https://www.python.org/dev/peps/pep-0257/">PEP 257</a>（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3748">issue 3748</a> ）</p></li>
<li><p>内部修复和清理 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3629">issue 3629</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3643">issue 3643</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3684">issue 3684</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3698">issue 3698</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3734">issue 3734</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3735">issue 3735</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3736">issue 3736</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3737">issue 3737</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3809">issue 3809</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3821">issue 3821</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3825">issue 3825</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3827">issue 3827</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3833">issue 3833</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3857">issue 3857</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3877">issue 3877</a> ）</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-1-6-0-2019-01-30">
<span id="release-1-6-0"></span><h2>Scrapy 1.6.0（2019-01-30）<a class="headerlink" href="#scrapy-1-6-0-2019-01-30" title="永久链接至标题">¶</a></h2>
<p>亮点：</p>
<ul class="simple">
<li><p>更好的Windows支持;</p></li>
<li><p>Python 3.7兼容性;</p></li>
<li><p>大的文档改进，包括从 <code class="docutils literal notranslate"><span class="pre">.extract_first()</span></code> + <code class="docutils literal notranslate"><span class="pre">.extract()</span></code> API 迁移到 <code class="docutils literal notranslate"><span class="pre">.get()</span></code> + <code class="docutils literal notranslate"><span class="pre">.getall()</span></code> API;</p></li>
<li><p>feed 导出、文件管道和媒体管道改进;</p></li>
<li><p>更好的扩展性： <a class="reference internal" href="topics/signals.html#std-signal-item_error"><code class="xref std std-signal docutils literal notranslate"><span class="pre">item_error</span></code></a> 和 <a class="reference internal" href="topics/signals.html#std-signal-request_reached_downloader"><code class="xref std std-signal docutils literal notranslate"><span class="pre">request_reached_downloader</span></code></a> 信号； <code class="docutils literal notranslate"><span class="pre">from_crawler</span></code> 支持 Feed 导出器、Feed 仓库和去重过滤器（dupefilter）。</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.contracts</span></code> 修复和新功能;</p></li>
<li><p>telnet 控制台安全性改进，作为向后移植（backport）首次发布于 <a class="reference internal" href="#release-1-5-2"><span class="std std-ref">Scrapy 1.5.2（2019-01-22）</span></a> ;</p></li>
<li><p>清理弃用的代码;</p></li>
<li><p>各种错误修复、小的新特性和整个代码库的可用性改进。</p></li>
</ul>
<div class="section" id="selector-api-changes">
<h3>选择器API更改<a class="headerlink" href="#selector-api-changes" title="永久链接至标题">¶</a></h3>
<p>虽然这些并非 Scrapy 本身的更改，而是 Scrapy 用于 XPath/CSS 选择器的 parsel 库中的更改，但值得在此一提。Scrapy 现在依赖 parsel&gt;=1.5，并且 Scrapy 文档已更新，以遵循最新的 <code class="docutils literal notranslate"><span class="pre">parsel</span></code> API 惯例。</p>
<p>最明显的变化是：现在 <code class="docutils literal notranslate"><span class="pre">.get()</span></code> 和 <code class="docutils literal notranslate"><span class="pre">.getall()</span></code> 选择器方法比 <code class="docutils literal notranslate"><span class="pre">.extract_first()</span></code> 和 <code class="docutils literal notranslate"><span class="pre">.extract()</span></code> 更受推荐。我们认为这些新方法能让代码更简洁、更易读。详见 <a class="reference internal" href="topics/selectors.html#old-extraction-api"><span class="std std-ref">extract() 与 extract_first()</span></a> 。</p>
<div class="admonition note">
<p class="admonition-title">注解</p>
<p>目前 <strong>没有计划</strong> 弃用 <code class="docutils literal notranslate"><span class="pre">.extract()</span></code> 和 <code class="docutils literal notranslate"><span class="pre">.extract_first()</span></code> 方法。</p>
</div>
<p>另一个有用的新特性是 <code class="docutils literal notranslate"><span class="pre">Selector.attrib</span></code> 和 <code class="docutils literal notranslate"><span class="pre">SelectorList.attrib</span></code> 属性，它们使获取 HTML 元素的属性更加容易。见 <a class="reference internal" href="topics/selectors.html#selecting-attributes"><span class="std std-ref">选择元素属性</span></a> 。</p>
<p>parsel&gt;=1.5 会缓存 CSS 选择器，这使得多次使用同一 CSS 路径时速度更快。这在 Scrapy 爬虫中非常常见：回调通常会在不同页面上被多次调用。</p>
<p>如果你使用自定义的 <code class="docutils literal notranslate"><span class="pre">Selector</span></code> 或 <code class="docutils literal notranslate"><span class="pre">SelectorList</span></code> 子类，parsel 中一项 <strong>向后不兼容</strong> 的更改可能会影响你的代码。详细描述以及完整的改进列表见 <a class="reference external" href="https://parsel.readthedocs.io/en/latest/history.html">parsel changelog</a> 。</p>
</div>
<div class="section" id="telnet-console">
<h3>Telnet控制台<a class="headerlink" href="#telnet-console" title="永久链接至标题">¶</a></h3>
<p>向后不兼容：Scrapy 的 telnet 控制台现在需要用户名和密码。详见 <a class="reference internal" href="topics/telnetconsole.html#topics-telnetconsole"><span class="std std-ref">远程登录控制台</span></a> 。此更改修复了一个安全问题；详细说明见 <a class="reference internal" href="#release-1-5-2"><span class="std std-ref">Scrapy 1.5.2（2019-01-22）</span></a> 的发行说明。</p>
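<p>如果希望使用固定凭据而不是随机生成的密码，可以在 settings.py 中显式配置（TELNETCONSOLE_USERNAME / TELNETCONSOLE_PASSWORD 为 Scrapy 的实际设置名；下面的取值为虚构示例，仅作示意）：</p>

```python
# settings.py —— 显式配置 telnet 控制台凭据。
# 未设置密码时，Scrapy 会随机生成一个密码并打印到日志中。
TELNETCONSOLE_USERNAME = "scrapy"            # 默认用户名
TELNETCONSOLE_PASSWORD = "change-me-please"  # 示例值，请换成自己的强密码
```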
</div>
<div class="section" id="new-extensibility-features">
<h3>新的可扩展性功能<a class="headerlink" href="#new-extensibility-features" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">from_crawler</span></code> 增加了对 Feed 导出器和 Feed 仓库的支持。此外，它还允许从自定义 Feed 仓库和导出器中访问 Scrapy 设置。（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1605">issue 1605</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3348">issue 3348</a> ）</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">from_crawler</span></code> 增加了对去重过滤器（dupefilter）的支持（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2956">issue 2956</a> ）；这允许从去重过滤器中访问设置或爬虫。</p></li>
<li><p><a class="reference internal" href="topics/signals.html#std-signal-item_error"><code class="xref std std-signal docutils literal notranslate"><span class="pre">item_error</span></code></a> 在管道中发生错误时触发（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3256">issue 3256</a> ）;</p></li>
<li><p><a class="reference internal" href="topics/signals.html#std-signal-request_reached_downloader"><code class="xref std std-signal docutils literal notranslate"><span class="pre">request_reached_downloader</span></code></a> 在下载器收到新请求时触发；此信号可能有用，例如对于自定义调度程序（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3393">issue 3393</a> ）</p></li>
<li><p>新增 SitemapSpider 的 <a class="reference internal" href="topics/spiders.html#scrapy.spiders.SitemapSpider.sitemap_filter" title="scrapy.spiders.SitemapSpider.sitemap_filter"><code class="xref py py-meth docutils literal notranslate"><span class="pre">sitemap_filter()</span></code></a> 方法，允许在 SitemapSpider 子类中根据条目属性筛选站点地图条目（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3512">issue 3512</a> ）</p></li>
<li><p>下载处理程序（downloader handler）的延迟加载现在是可选的；这使得自定义下载处理程序能够更好地处理初始化错误。（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3394">issue 3394</a> ）</p></li>
</ul>
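<p>其中 sitemap_filter() 接收站点地图条目的可迭代对象，并产出要保留的条目。下面用纯 Python 演示这种按日期筛选的典型逻辑（独立示意，不依赖 Scrapy；条目数据为虚构）：</p>

```python
from datetime import datetime


def filter_entries(entries, min_date):
    """只保留 lastmod 不早于 min_date 的站点地图条目
    （与 SitemapSpider.sitemap_filter 的典型用法相同的筛选思路）。"""
    for entry in entries:
        lastmod = datetime.strptime(entry["lastmod"], "%Y-%m-%d")
        if lastmod >= min_date:
            yield entry


entries = [
    {"loc": "https://example.com/a.html", "lastmod": "2019-01-10"},
    {"loc": "https://example.com/b.html", "lastmod": "2018-06-01"},
]
kept = list(filter_entries(entries, datetime(2019, 1, 1)))
print([e["loc"] for e in kept])  # ['https://example.com/a.html']
```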
</div>
<div class="section" id="new-filepipeline-and-mediapipeline-features">
<h3>新的文件管道和媒体管道功能<a class="headerlink" href="#new-filepipeline-and-mediapipeline-features" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>为 S3FilesStore 开放了更多选项： <a class="reference internal" href="topics/settings.html#std-setting-AWS_ENDPOINT_URL"><code class="xref std std-setting docutils literal notranslate"><span class="pre">AWS_ENDPOINT_URL</span></code></a> ， <a class="reference internal" href="topics/settings.html#std-setting-AWS_USE_SSL"><code class="xref std std-setting docutils literal notranslate"><span class="pre">AWS_USE_SSL</span></code></a> ， <a class="reference internal" href="topics/settings.html#std-setting-AWS_VERIFY"><code class="xref std std-setting docutils literal notranslate"><span class="pre">AWS_VERIFY</span></code></a> ， <a class="reference internal" href="topics/settings.html#std-setting-AWS_REGION_NAME"><code class="xref std std-setting docutils literal notranslate"><span class="pre">AWS_REGION_NAME</span></code></a> 。例如，这允许使用替代的或自托管的 AWS 兼容提供商（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2609">issue 2609</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3548">issue 3548</a> ）</p></li>
<li><p>对谷歌云存储的ACL支持： <a class="reference internal" href="topics/media-pipeline.html#std-setting-FILES_STORE_GCS_ACL"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FILES_STORE_GCS_ACL</span></code></a> 和 <a class="reference internal" href="topics/media-pipeline.html#std-setting-IMAGES_STORE_GCS_ACL"><code class="xref std std-setting docutils literal notranslate"><span class="pre">IMAGES_STORE_GCS_ACL</span></code></a> （ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3199">issue 3199</a> ）</p></li>
</ul>
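<p>借助这些设置，可以在 settings.py 中把文件存储指向自托管的 S3 兼容服务，示意如下（端点地址与桶名为虚构示例）：</p>

```python
# settings.py —— 将 FilesPipeline 指向一个自托管的 S3 兼容存储
FILES_STORE = "s3://my-bucket/files/"          # 虚构桶名
AWS_ENDPOINT_URL = "http://minio.local:9000"   # 虚构端点（例如 MinIO）
AWS_USE_SSL = False      # 自托管服务可能不使用 TLS
AWS_VERIFY = False       # 不验证证书（仅建议在测试环境使用）
AWS_REGION_NAME = "us-east-1"
```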
</div>
<div class="section" id="scrapy-contracts-improvements">
<h3><code class="docutils literal notranslate"><span class="pre">scrapy.contracts</span></code> 改进<a class="headerlink" href="#scrapy-contracts-improvements" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>更好地处理合同（contract）代码中的异常（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3377">issue 3377</a> ）;</p></li>
<li><p>合同请求现在使用 <code class="docutils literal notranslate"><span class="pre">dont_filter=True</span></code> ，这允许用同一 URL 测试不同的回调（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3381">issue 3381</a> ）;</p></li>
<li><p>合同子类中的 <code class="docutils literal notranslate"><span class="pre">request_cls</span></code> 属性允许在合同中使用不同的请求类，例如 FormRequest（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3383">issue 3383</a> ）</p></li>
<li><p>修复了合同中的 errback 处理，例如合同针对返回非 200 响应的 URL 执行的情况（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3371">issue 3371</a> ）</p></li>
</ul>
</div>
<div class="section" id="usability-improvements">
<h3>可用性改进<a class="headerlink" href="#usability-improvements" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>RobotsTxtMiddleware 提供更多统计信息（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3100">issue 3100</a> ）</p></li>
<li><p>使用 INFO 日志级别显示 telnet 主机/端口（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3115">issue 3115</a> ）</p></li>
<li><p>在 RobotsTxtMiddleware 中为 IgnoreRequest 添加了说明消息（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3113">issue 3113</a> ）</p></li>
<li><p>更好地验证 <code class="docutils literal notranslate"><span class="pre">Response.follow</span></code> 的 <code class="docutils literal notranslate"><span class="pre">url</span></code> 参数（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3131">issue 3131</a> ）</p></li>
<li><p>当 spider 初始化出错时，scrapy 命令返回非零退出码（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3226">issue 3226</a> ）</p></li>
<li><p>链接提取改进：“ftp”添加到方案（scheme）列表（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3152">issue 3152</a> ）；“flv”添加到常见视频扩展名（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3165">issue 3165</a> ）</p></li>
<li><p>导出器被禁用时给出更好的错误消息（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3358">issue 3358</a> ）;</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">shell</span> <span class="pre">--help</span></code> 中提到了加载本地文件所需的语法（ <code class="docutils literal notranslate"><span class="pre">./file.html</span></code> ）（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3496">issue 3496</a> ）。</p></li>
<li><p>Referer header 的值被添加到 RFPDupeFilter 日志消息中（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3588">issue 3588</a> ）</p></li>
</ul>
</div>
<div class="section" id="id35">
<h3>错误修复<a class="headerlink" href="#id35" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>修复了 Windows 下 .csv 导出中多余空行的问题（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3039">issue 3039</a> ）;</p></li>
<li><p>在为磁盘队列序列化对象时，正确处理 Python 3 中的 pickling 错误（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3082">issue 3082</a> ）</p></li>
<li><p>复制请求时，flags 现在会被保留（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3342">issue 3342</a> ）;</p></li>
<li><p>FormRequest.from_response 的 clickdata 不应忽略 <code class="docutils literal notranslate"><span class="pre">input[type=image]</span></code> 元素（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3153">issue 3153</a> ）</p></li>
<li><p>FormRequest.from_response 应保留重复的键（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3247">issue 3247</a> ）</p></li>
</ul>
</div>
<div class="section" id="documentation-improvements">
<h3>文档改进<a class="headerlink" href="#documentation-improvements" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>重写了文档，以建议使用 .get/.getall API 而不是 .extract/.extract_first。此外， <a class="reference internal" href="topics/selectors.html#topics-selectors"><span class="std std-ref">选择器</span></a> 文档被更新并重新组织，以匹配最新的 parsel 文档；其中现在包含更多主题，例如 <a class="reference internal" href="topics/selectors.html#selecting-attributes"><span class="std std-ref">选择元素属性</span></a> 或 <a class="reference internal" href="topics/selectors.html#topics-selectors-css-extensions"><span class="std std-ref">CSS选择器的扩展</span></a> （ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3390">issue 3390</a> ）</p></li>
<li><p><a class="reference internal" href="topics/developer-tools.html#topics-developer-tools"><span class="std std-ref">使用浏览器的开发人员工具进行抓取</span></a> 是一个新的教程，它取代了旧的火狐和Firebug教程（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3400">issue 3400</a> ）</p></li>
<li><p>SCRAPY_PROJECT 环境变量已记入文档（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3518">issue 3518</a> ）;</p></li>
<li><p>安装说明中添加了故障排除部分（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3517">issue 3517</a> ）;</p></li>
<li><p>改进了教程中指向初学者资源的链接（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3367">issue 3367</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3468">issue 3468</a> ）;</p></li>
<li><p>修复了文档中 <a class="reference internal" href="topics/downloader-middleware.html#std-setting-RETRY_HTTP_CODES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">RETRY_HTTP_CODES</span></code></a> 的默认值（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3335">issue 3335</a> ）;</p></li>
<li><p>从文档中移除了未使用的 <code class="docutils literal notranslate"><span class="pre">DEPTH_STATS</span></code> 选项（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3245">issue 3245</a> ）;</p></li>
<li><p>其他清理（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3347">issue 3347</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3350">issue 3350</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3445">issue 3445</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3544">issue 3544</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3605">issue 3605</a> ）</p></li>
</ul>
</div>
<div class="section" id="id36">
<h3>弃用项移除<a class="headerlink" href="#id36" title="永久链接至标题">¶</a></h3>
<p>1.0以前版本的 Scrapy 模块名称的兼容性垫片已移除（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3318">issue 3318</a> ）：</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.command</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.contrib</span></code> （所有子模块）</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.contrib_exp</span></code> （所有子模块）</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.dupefilter</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.linkextractor</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.project</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.spider</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.spidermanager</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.squeue</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.stats</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.statscol</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.utils.decorator</span></code></p></li>
</ul>
<p>详见 <a class="reference internal" href="#module-relocations"><span class="std std-ref">模块重新定位</span></a> ，或按照 Scrapy 1.5.x 弃用警告中的建议更新代码。</p>
<p>其他弃用项移除：</p>
<ul class="simple">
<li><p>已删除弃用的 scrapy.interfaces.ISpiderManager；请改用 scrapy.interfaces.ISpiderLoader。</p></li>
<li><p>弃用的 <code class="docutils literal notranslate"><span class="pre">CrawlerSettings</span></code> 类已被删除（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3327">issue 3327</a> ）</p></li>
<li><p>弃用的 <code class="docutils literal notranslate"><span class="pre">Settings.overrides</span></code> 和 <code class="docutils literal notranslate"><span class="pre">Settings.defaults</span></code> 属性已被删除（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3327">issue 3327</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3359">issue 3359</a> ）</p></li>
</ul>
</div>
<div class="section" id="other-improvements-cleanups">
<h3>其他改进、清理<a class="headerlink" href="#other-improvements-cleanups" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>所有 Scrapy 测试现在都在Windows上通过； Scrapy 测试套件在CI上的Windows环境中执行（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3315">issue 3315</a> ）</p></li>
<li><p>Python 3.7支持（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3326">issue 3326</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3150">issue 3150</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3547">issue 3547</a> ）</p></li>
<li><p>测试和CI修复（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3526">issue 3526</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3538">issue 3538</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3308">issue 3308</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3311">issue 3311</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3309">issue 3309</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3305">issue 3305</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3210">issue 3210</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3299">issue 3299</a> ）</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.http.cookies.CookieJar.clear</span></code> 现在接受可选的 domain、path 和 name 参数（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3231">issue 3231</a> ）</p></li>
<li><p>sdist 中包含了额外的文件（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3495">issue 3495</a> ）;</p></li>
<li><p>代码风格修复（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3405">issue 3405</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3304">issue 3304</a> ）;</p></li>
<li><p>删除了不需要的 .strip() 调用（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3519">issue 3519</a> ）;</p></li>
<li><p>MiddlewareManager 方法现在存储于 collections.deque 而不是列表中（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3476">issue 3476</a> ）</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-1-5-2-2019-01-22">
<span id="release-1-5-2"></span><h2>Scrapy 1.5.2（2019-01-22）<a class="headerlink" href="#scrapy-1-5-2-2019-01-22" title="永久链接至标题">¶</a></h2>
<ul>
<li><p>安全修复：telnet 控制台扩展很容易被向 http://localhost:6023 发送内容的恶意网站利用。我们尚未找到从 Scrapy 本身利用它的方法，但诱使浏览器这样做非常容易，这增加了本地开发环境的风险。</p>
<p><em>该修复向后不兼容</em>：默认情况下，它启用 telnet 的用户名/密码验证，并使用随机生成的密码。如果无法立即升级，请考虑将 <a class="reference internal" href="topics/telnetconsole.html#std-setting-TELNETCONSOLE_PORT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">TELNETCONSOLE_PORT</span></code></a> 设置为非默认值。</p>
<p>详细文档见 <a class="reference internal" href="topics/telnetconsole.html#topics-telnetconsole"><span class="std std-ref">telnet console</span></a> 。</p>
</li>
<li><p>向后移植了 GCE 环境下因 boto 导入错误导致 CI 构建失败问题的修复。</p></li>
</ul>
</div>
<div class="section" id="scrapy-1-5-1-2018-07-12">
<span id="release-1-5-1"></span><h2>Scrapy 1.5.1（2018-07-12）<a class="headerlink" href="#scrapy-1-5-1-2018-07-12" title="永久链接至标题">¶</a></h2>
<p>这是一个包含重要错误修复的维护版本，但没有新功能：</p>
<ul class="simple">
<li><p>修复了影响 Python 3 和 PyPy 的 <code class="docutils literal notranslate"><span class="pre">O(N^2)</span></code> gzip 解压问题（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3281">issue 3281</a> ）;</p></li>
<li><p>改进了对 TLS 验证错误的跳过处理（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3166">issue 3166</a> ）;</p></li>
<li><p>修复了 Python 3.5+ 中的 Ctrl-C 处理（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3096">issue 3096</a> ）;</p></li>
<li><p>测试修复（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3092">issue 3092</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3263">issue 3263</a> ）;</p></li>
<li><p>文档改进（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3058">issue 3058</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3059">issue 3059</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3089">issue 3089</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3123">issue 3123</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3127">issue 3127</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3189">issue 3189</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3224">issue 3224</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3280">issue 3280</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3279">issue 3279</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3201">issue 3201</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3260">issue 3260</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3284">issue 3284</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3298">issue 3298</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3294">issue 3294</a> ）</p></li>
</ul>
</div>
<div class="section" id="scrapy-1-5-0-2017-12-29">
<span id="release-1-5-0"></span><h2>Scrapy 1.5.0（2017-12-29）<a class="headerlink" href="#scrapy-1-5-0-2017-12-29" title="永久链接至标题">¶</a></h2>
<p>这个版本在代码库中带来了一些新的小特性和改进。一些亮点：</p>
<ul class="simple">
<li><p>FilesPipeline 和 ImagesPipeline 支持 Google 云存储。</p></li>
<li><p>由于到代理的连接现在可以复用，使用代理服务器进行爬取更加高效。</p></li>
<li><p>对警告、异常和日志消息进行了改进，使调试更加容易。</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">parse</span></code> 命令现在允许通过 <code class="docutils literal notranslate"><span class="pre">--meta</span></code> 参数设置自定义的请求 meta。</p></li>
<li><p>与python 3.6、pypy和pypy3的兼容性得到了改进；通过在CI上运行测试，pypy和pypy3现在得到了官方支持。</p></li>
<li><p>更好地默认处理HTTP 308、522和524状态代码。</p></li>
<li><p>像往常一样，文档得到了改进。</p></li>
</ul>
<div class="section" id="id37">
<h3>向后不兼容的更改<a class="headerlink" href="#id37" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>Scrapy 1.5 放弃了对 Python 3.3 的支持。</p></li>
<li><p>默认的 Scrapy User-Agent 现在使用指向 scrapy.org 的 https 链接（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2983">issue 2983</a> ）。这在技术上向后不兼容；如果你依赖旧值，请覆盖 <a class="reference internal" href="topics/settings.html#std-setting-USER_AGENT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">USER_AGENT</span></code></a> 。</p></li>
<li><p>修复了对被 <code class="docutils literal notranslate"><span class="pre">custom_settings</span></code> 覆盖的设置的日志记录； <strong>这在技术上向后不兼容</strong> ，因为记录器从 <code class="docutils literal notranslate"><span class="pre">[scrapy.utils.log]</span></code> 改为 <code class="docutils literal notranslate"><span class="pre">[scrapy.crawler]</span></code> 。如果你在解析 Scrapy 日志，请更新你的日志解析器（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1343">issue 1343</a> ）</p></li>
<li><p>LinkExtractor 现在默认忽略 <code class="docutils literal notranslate"><span class="pre">m4v</span></code> 扩展名，这是一项行为更改。</p></li>
<li><p>522 和 524 状态码已添加到 <code class="docutils literal notranslate"><span class="pre">RETRY_HTTP_CODES</span></code>（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2851">issue 2851</a> ）</p></li>
</ul>
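<p>如果依赖旧的重试行为（不重试 522/524），可以在 settings.py 中覆盖该设置，恢复加入 522/524 之前的默认值：</p>

```python
# settings.py —— 覆盖 RETRY_HTTP_CODES，恢复加入 522/524 之前的默认值
RETRY_HTTP_CODES = [500, 502, 503, 504, 408]
```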
</div>
<div class="section" id="id38">
<h3>新特点<a class="headerlink" href="#id38" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">Response.follow</span></code> 支持 <code class="docutils literal notranslate"><span class="pre">&lt;link&gt;</span></code> 标签（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2785">issue 2785</a> ）</p></li>
<li><p>支持 <code class="docutils literal notranslate"><span class="pre">ptpython</span></code> REPL（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2654">issue 2654</a> ）</p></li>
<li><p>Google云存储支持文件管道和图像管道（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2923">issue 2923</a> ）</p></li>
<li><p>新的 <code class="docutils literal notranslate"><span class="pre">--meta</span></code> “scrapy parse”命令的选项允许传递附加请求。（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2883">issue 2883</a> ）</p></li>
<li><p>使用时填充spider变量 <code class="docutils literal notranslate"><span class="pre">shell.inspect_response</span></code> （ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2812">issue 2812</a> ）</p></li>
<li><p>处理HTTP 308永久重定向（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2844">issue 2844</a> ）</p></li>
<li><p>将522和524添加到 <code class="docutils literal notranslate"><span class="pre">RETRY_HTTP_CODES</span></code> （ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2851">issue 2851</a> ）</p></li>
<li><p>启动时记录版本信息（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2857">issue 2857</a> ）</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.mail.MailSender</span></code> 现在在python 3中工作（它需要Twisted17.9.0）</p></li>
<li><p>重新使用与代理服务器的连接（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2743">issue 2743</a> ）</p></li>
<li><p>为下载器中间件添加模板（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2755">issue 2755</a> ）</p></li>
<li><p>未定义分析回调时NotImplementedError的显式消息（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2831">issue 2831</a> ）</p></li>
<li><p>CrawlerProcess有一个选项可以禁用安装根日志处理程序（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2921">issue 2921</a> ）</p></li>
<li><p>Linkextractor现在忽略 <code class="docutils literal notranslate"><span class="pre">m4v</span></code> 默认情况下的扩展</p></li>
<li><p>更好地记录响应消息 <a class="reference internal" href="topics/settings.html#std-setting-DOWNLOAD_WARNSIZE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOAD_WARNSIZE</span></code></a> 和 <a class="reference internal" href="topics/settings.html#std-setting-DOWNLOAD_MAXSIZE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOAD_MAXSIZE</span></code></a> 限制（限制） <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2927">issue 2927</a> ）</p></li>
<li><p>当URL被放入时显示警告 <code class="docutils literal notranslate"><span class="pre">Spider.allowed_domains</span></code> 而不是域（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2250">issue 2250</a> ）</p></li>
</ul>
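<p>Regarding the last warning: <code class="docutils literal notranslate"><span class="pre">allowed_domains</span></code> must contain domains, not URLs. If you only have a URL at hand, the domain part can be derived with the standard library; a minimal sketch:</p>

```python
from urllib.parse import urlparse

start_url = "https://www.example.com/products"

# Wrong: allowed_domains = ["https://www.example.com/products"]
# Right: store only the domain part of the URL
allowed_domains = [urlparse(start_url).netloc]
print(allowed_domains)  # ['www.example.com']
```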
</div>
<div class="section" id="id39">
<h3>Bug fixes<a class="headerlink" href="#id39" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Fix logging of settings overridden by <code class="docutils literal notranslate"><span class="pre">custom_settings</span></code>; <strong>this is technically backward-incompatible</strong> because the logger changes from <code class="docutils literal notranslate"><span class="pre">[scrapy.utils.log]</span></code> to <code class="docutils literal notranslate"><span class="pre">[scrapy.crawler]</span></code>, so please update your log parsers if needed (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1343">issue 1343</a>)</p></li>
<li><p>The default Scrapy User-Agent now uses an https link to scrapy.org (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2983">issue 2983</a>). <strong>This is technically backward-incompatible</strong>; override <a class="reference internal" href="topics/settings.html#std-setting-USER_AGENT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">USER_AGENT</span></code></a> if you relied on the old value.</p></li>
<li><p>Fix PyPy and PyPy3 test failures, support them officially (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2793">issue 2793</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2935">issue 2935</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2990">issue 2990</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3050">issue 3050</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2213">issue 2213</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3048">issue 3048</a>)</p></li>
<li><p>Fix the DNS resolver when <code class="docutils literal notranslate"><span class="pre">DNSCACHE_ENABLED=False</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2811">issue 2811</a>)</p></li>
<li><p>Add <code class="docutils literal notranslate"><span class="pre">cryptography</span></code> for the Debian Jessie tox test env (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2848">issue 2848</a>)</p></li>
<li><p>Add verification to check that request callbacks are callable (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2766">issue 2766</a>)</p></li>
<li><p>Port <code class="docutils literal notranslate"><span class="pre">extras/qpsclient.py</span></code> to Python 3 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2849">issue 2849</a>)</p></li>
<li><p>Use getfullargspec under Python 3 to stop DeprecationWarning (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2862">issue 2862</a>)</p></li>
<li><p>Update deprecated test aliases (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2876">issue 2876</a>)</p></li>
<li><p>Fix <code class="docutils literal notranslate"><span class="pre">SitemapSpider</span></code> support for alternate links (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2853">issue 2853</a>)</p></li>
</ul>
</div>
<div class="section" id="docs">
<h3>Documentation<a class="headerlink" href="#docs" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Added a missing bullet point for the <code class="docutils literal notranslate"><span class="pre">AUTOTHROTTLE_TARGET_CONCURRENCY</span></code> setting (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2756">issue 2756</a>)</p></li>
<li><p>Update the Contributing docs, document new support channels (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2762">issue 2762</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/3038">issue 3038</a>)</p></li>
<li><p>Include references to the Scrapy subreddit in the docs</p></li>
<li><p>Fix broken links; use https:// for external links (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2978">issue 2978</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2982">issue 2982</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2958">issue 2958</a>)</p></li>
<li><p>Document the CloseSpider extension better (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2759">issue 2759</a>)</p></li>
<li><p>Use <code class="docutils literal notranslate"><span class="pre">pymongo.collection.Collection.insert_one()</span></code> in the MongoDB example (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2781">issue 2781</a>)</p></li>
<li><p>Spelling mistakes and typos (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2828">issue 2828</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2837">issue 2837</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2884">issue 2884</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2924">issue 2924</a>)</p></li>
<li><p>Clarify the <code class="docutils literal notranslate"><span class="pre">CSVFeedSpider.headers</span></code> documentation (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2826">issue 2826</a>)</p></li>
<li><p>Document the <code class="docutils literal notranslate"><span class="pre">DontCloseSpider</span></code> exception and clarify <code class="docutils literal notranslate"><span class="pre">spider_idle</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2791">issue 2791</a>)</p></li>
<li><p>Update the &quot;Releases&quot; section in the README (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2764">issue 2764</a>)</p></li>
<li><p>Fix rst syntax in the <code class="docutils literal notranslate"><span class="pre">DOWNLOAD_FAIL_ON_DATALOSS</span></code> docs (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2763">issue 2763</a>)</p></li>
<li><p>Small fix in the description of startproject arguments (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2866">issue 2866</a>)</p></li>
<li><p>Clarify data types in the Response.body docs (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2922">issue 2922</a>)</p></li>
<li><p>Add a note about <code class="docutils literal notranslate"><span class="pre">request.meta['depth']</span></code> to the DepthMiddleware docs (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2374">issue 2374</a>)</p></li>
<li><p>Add a note about <code class="docutils literal notranslate"><span class="pre">request.meta['dont_merge_cookies']</span></code> to the CookiesMiddleware docs (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2999">issue 2999</a>)</p></li>
<li><p>Up-to-date example of project structure (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2964">issue 2964</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2976">issue 2976</a>)</p></li>
<li><p>A better example of ItemExporters usage (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2989">issue 2989</a>)</p></li>
<li><p>Document the <code class="docutils literal notranslate"><span class="pre">from_crawler</span></code> methods for spider and downloader middlewares (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/3019">issue 3019</a>)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-1-4-0-2017-05-18">
<span id="release-1-4-0"></span><h2>Scrapy 1.4.0 (2017-05-18)<a class="headerlink" href="#scrapy-1-4-0-2017-05-18" title="Permalink to this headline">¶</a></h2>
<p>Scrapy 1.4 does not bring that many breathtaking new features, but quite a few handy improvements nonetheless.</p>
<p>Scrapy now supports anonymous FTP sessions via the new <a class="reference internal" href="topics/settings.html#std-setting-FTP_USER"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FTP_USER</span></code></a> and <a class="reference internal" href="topics/settings.html#std-setting-FTP_PASSWORD"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FTP_PASSWORD</span></code></a> settings. If you are using Twisted version 17.1.0 or above, FTP is now available with Python 3.</p>
<p>There is a new <a class="reference internal" href="topics/request-response.html#scrapy.http.TextResponse.follow" title="scrapy.http.TextResponse.follow"><code class="xref py py-meth docutils literal notranslate"><span class="pre">response.follow</span></code></a> method for creating requests; it is now the recommended way to create requests in Scrapy spiders. This method makes it easier to write correct spiders; <code class="docutils literal notranslate"><span class="pre">response.follow</span></code> has several advantages over creating <code class="docutils literal notranslate"><span class="pre">scrapy.Request</span></code> objects directly:</p>
<ul class="simple">
<li><p>it handles relative URLs;</p></li>
<li><p>it works properly with non-ASCII URLs on non-UTF8 pages;</p></li>
<li><p>in addition to absolute and relative URLs it supports Selectors; for <code class="docutils literal notranslate"><span class="pre">&lt;a&gt;</span></code> elements it can also extract their href values.</p></li>
</ul>
<p>For example, instead of this:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">for</span> <span class="n">href</span> <span class="ow">in</span> <span class="n">response</span><span class="o">.</span><span class="n">css</span><span class="p">(</span><span class="s1">&#39;li.page a::attr(href)&#39;</span><span class="p">)</span><span class="o">.</span><span class="n">extract</span><span class="p">():</span>
    <span class="n">url</span> <span class="o">=</span> <span class="n">response</span><span class="o">.</span><span class="n">urljoin</span><span class="p">(</span><span class="n">href</span><span class="p">)</span>
    <span class="k">yield</span> <span class="n">scrapy</span><span class="o">.</span><span class="n">Request</span><span class="p">(</span><span class="n">url</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">parse</span><span class="p">,</span> <span class="n">encoding</span><span class="o">=</span><span class="n">response</span><span class="o">.</span><span class="n">encoding</span><span class="p">)</span>
</pre></div>
</div>
<p>One can now write this:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">for</span> <span class="n">a</span> <span class="ow">in</span> <span class="n">response</span><span class="o">.</span><span class="n">css</span><span class="p">(</span><span class="s1">&#39;li.page a&#39;</span><span class="p">):</span>
    <span class="k">yield</span> <span class="n">response</span><span class="o">.</span><span class="n">follow</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">parse</span><span class="p">)</span>
</pre></div>
</div>
<p>Link extractors are also improved. They work similarly to what a regular modern browser would do: leading and trailing whitespace is removed from attributes (think <code class="docutils literal notranslate"><span class="pre">href=" http://example.com"</span></code>) when building <code class="docutils literal notranslate"><span class="pre">Link</span></code> objects. This whitespace-stripping also happens for the <code class="docutils literal notranslate"><span class="pre">action</span></code> attribute of <code class="docutils literal notranslate"><span class="pre">FormRequest</span></code>.</p>
<p><strong>Please note that link extractors do not canonicalize URLs by default anymore.</strong> This was puzzling for users every now and then, and it is not what browsers do in fact, so we removed this extra transformation on extracted links.</p>
<p>For those of you wanting more control over the <code class="docutils literal notranslate"><span class="pre">Referer:</span></code> header that Scrapy sends when following links, you can set your own <code class="docutils literal notranslate"><span class="pre">Referrer</span> <span class="pre">Policy</span></code>. Prior to Scrapy 1.4, the default <code class="docutils literal notranslate"><span class="pre">RefererMiddleware</span></code> would simply and blindly set it to the URL of the response that generated the HTTP request (which could leak information on your URL seeds). By default, Scrapy now behaves much like a regular browser does. And this policy is fully customizable with W3C standard values (or with something custom of your own if you wish). See <a class="reference internal" href="topics/spider-middleware.html#std-setting-REFERRER_POLICY"><code class="xref std std-setting docutils literal notranslate"><span class="pre">REFERRER_POLICY</span></code></a> for details.</p>
<p>To make Scrapy spiders easier to debug, Scrapy logs more stats by default in 1.4: memory usage stats, detailed retry stats, and detailed HTTP error code stats. A similar change is that the HTTP cache path is also visible in logs now.</p>
<p>Last but not least, Scrapy now has the option to make JSON and XML items more human-readable, using the new <a class="reference internal" href="topics/feed-exports.html#std-setting-FEED_EXPORT_INDENT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FEED_EXPORT_INDENT</span></code></a> setting.</p>
<p>Enjoy! (Or read on for the rest of the changes in this release.)</p>
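<p>For example, to only send the Referer header on same-origin requests, the W3C standard policy name can be set in the project settings. A minimal sketch (choosing <code class="docutils literal notranslate"><span class="pre">same-origin</span></code> here is illustrative, not a recommendation):</p>

```python
# settings.py -- use the W3C "same-origin" policy: the Referer header
# is only sent when following links within the same origin.
REFERRER_POLICY = "same-origin"
```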
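<p>The effect of indenting on JSON output is comparable to the <code class="docutils literal notranslate"><span class="pre">indent</span></code> argument of the standard library's <code class="docutils literal notranslate"><span class="pre">json.dumps</span></code>; a sketch of the difference:</p>

```python
import json

items = [{"name": "Color TV", "price": "1200"}]

# With no indent (the default), everything lands on one line:
print(json.dumps(items))

# With an indent of 4, items are spread over multiple lines with a
# 4-space offset, similar to what the JSON feed exporter produces
# when FEED_EXPORT_INDENT = 4 is set:
print(json.dumps(items, indent=4))
```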
<div class="section" id="deprecations-and-backward-incompatible-changes">
<h3>Deprecations and backward-incompatible changes<a class="headerlink" href="#deprecations-and-backward-incompatible-changes" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Default to <code class="docutils literal notranslate"><span class="pre">canonicalize=False</span></code> in
<a class="reference internal" href="topics/link-extractors.html#scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor" title="scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor"><code class="xref py py-class docutils literal notranslate"><span class="pre">scrapy.linkextractors.LinkExtractor</span></code></a>
(<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2537">issue 2537</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1941">issue 1941</a> and <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1982">issue 1982</a>):
<strong>warning, this is technically backward-incompatible</strong></p></li>
<li><p>Enable the memusage extension by default (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2539">issue 2539</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2187">issue 2187</a>); <strong>this is technically backward-incompatible</strong>, so please check if you have any non-default <code class="docutils literal notranslate"><span class="pre">MEMUSAGE_***</span></code> options set.</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">EDITOR</span></code> environment variable now takes precedence over the <code class="docutils literal notranslate"><span class="pre">EDITOR</span></code> option defined in settings.py (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1829">issue 1829</a>); Scrapy default settings no longer depend on environment variables. <strong>This is technically a backward-incompatible change.</strong></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">Spider.make_requests_from_url</span></code> is deprecated (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1728">issue 1728</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1495">issue 1495</a>).</p></li>
</ul>
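<p>The new <code class="docutils literal notranslate"><span class="pre">EDITOR</span></code> precedence can be pictured with a small sketch. This is a simplified illustration of the precedence rule, not Scrapy's actual implementation; the <code class="docutils literal notranslate"><span class="pre">resolve_editor</span></code> helper is hypothetical:</p>

```python
import os

def resolve_editor(settings_editor="vi"):
    """Simplified sketch: the EDITOR environment variable, when set,
    wins over the EDITOR option coming from settings.py."""
    return os.environ.get("EDITOR") or settings_editor

os.environ["EDITOR"] = "nano"
print(resolve_editor("vi"))  # nano -- the environment variable takes precedence
```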
</div>
<div class="section" id="id40">
<h3>New features<a class="headerlink" href="#id40" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Accept proxy credentials in the <a class="reference internal" href="topics/downloader-middleware.html#std-reqmeta-proxy"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">proxy</span></code></a> request meta key (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2526">issue 2526</a>)</p></li>
<li><p>Support for <a class="reference external" href="https://github.com/google/brotli">brotli</a>-compressed content; requires optional <a class="reference external" href="https://github.com/python-hyper/brotlipy/">brotlipy</a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2535">issue 2535</a>)</p></li>
<li><p>New <a class="reference internal" href="intro/tutorial.html#response-follow-example"><span class="std std-ref">response.follow</span></a> shortcut for creating requests (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1940">issue 1940</a>)</p></li>
<li><p>Added <code class="docutils literal notranslate"><span class="pre">flags</span></code> argument and attribute to <a class="reference internal" href="topics/request-response.html#scrapy.http.Request" title="scrapy.http.Request"><code class="xref py py-class docutils literal notranslate"><span class="pre">Request</span></code></a> objects (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2047">issue 2047</a>)</p></li>
<li><p>Support anonymous FTP (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2342">issue 2342</a>)</p></li>
<li><p>Added <code class="docutils literal notranslate"><span class="pre">retry/count</span></code>, <code class="docutils literal notranslate"><span class="pre">retry/max_reached</span></code> and <code class="docutils literal notranslate"><span class="pre">retry/reason_count/&lt;reason&gt;</span></code> stats to <a class="reference internal" href="topics/downloader-middleware.html#scrapy.downloadermiddlewares.retry.RetryMiddleware" title="scrapy.downloadermiddlewares.retry.RetryMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">RetryMiddleware</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2543">issue 2543</a>)</p></li>
<li><p>Added <code class="docutils literal notranslate"><span class="pre">httperror/response_ignored_count</span></code> and <code class="docutils literal notranslate"><span class="pre">httperror/response_ignored_status_count/&lt;status&gt;</span></code> stats to <a class="reference internal" href="topics/spider-middleware.html#scrapy.spidermiddlewares.httperror.HttpErrorMiddleware" title="scrapy.spidermiddlewares.httperror.HttpErrorMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">HttpErrorMiddleware</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2566">issue 2566</a>)</p></li>
<li><p>Customizable <a class="reference internal" href="topics/spider-middleware.html#std-setting-REFERRER_POLICY"><code class="xref std std-setting docutils literal notranslate"><span class="pre">Referrer</span> <span class="pre">policy</span></code></a> in <a class="reference internal" href="topics/spider-middleware.html#scrapy.spidermiddlewares.referer.RefererMiddleware" title="scrapy.spidermiddlewares.referer.RefererMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">RefererMiddleware</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2306">issue 2306</a>)</p></li>
<li><p>New <code class="docutils literal notranslate"><span class="pre">data:</span></code> URI download handler (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2334">issue 2334</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2156">issue 2156</a>)</p></li>
<li><p>Log the cache directory when the HTTP cache is used (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2611">issue 2611</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2604">issue 2604</a>)</p></li>
<li><p>Warn users when the project contains duplicate spider names (fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2181">issue 2181</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.utils.datatypes.CaselessDict</span></code> now accepts <code class="docutils literal notranslate"><span class="pre">Mapping</span></code> instances and not only dicts (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2646">issue 2646</a>)</p></li>
<li><p><a class="reference internal" href="topics/media-pipeline.html#topics-media-pipeline"><span class="std std-ref">Media downloads</span></a> with <a class="reference internal" href="topics/media-pipeline.html#scrapy.pipelines.files.FilesPipeline" title="scrapy.pipelines.files.FilesPipeline"><code class="xref py py-class docutils literal notranslate"><span class="pre">FilesPipeline</span></code></a> or <a class="reference internal" href="topics/media-pipeline.html#scrapy.pipelines.images.ImagesPipeline" title="scrapy.pipelines.images.ImagesPipeline"><code class="xref py py-class docutils literal notranslate"><span class="pre">ImagesPipeline</span></code></a> can now optionally handle HTTP redirects using the new <a class="reference internal" href="topics/media-pipeline.html#std-setting-MEDIA_ALLOW_REDIRECTS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">MEDIA_ALLOW_REDIRECTS</span></code></a> setting (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2616">issue 2616</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2004">issue 2004</a>)</p></li>
<li><p>Accept non-complete responses from websites using the new <a class="reference internal" href="topics/settings.html#std-setting-DOWNLOAD_FAIL_ON_DATALOSS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOAD_FAIL_ON_DATALOSS</span></code></a> setting (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2590">issue 2590</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2586">issue 2586</a>)</p></li>
<li><p>Optional pretty-printing of JSON and XML items via the <a class="reference internal" href="topics/feed-exports.html#std-setting-FEED_EXPORT_INDENT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FEED_EXPORT_INDENT</span></code></a> setting (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2456">issue 2456</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1327">issue 1327</a>)</p></li>
<li><p>Allow dropping fields in <code class="docutils literal notranslate"><span class="pre">FormRequest.from_response</span></code> formdata when a <code class="docutils literal notranslate"><span class="pre">None</span></code> value is passed (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/667">issue 667</a>)</p></li>
<li><p>Per-request retry times with the new <a class="reference internal" href="topics/request-response.html#std-reqmeta-max_retry_times"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">max_retry_times</span></code></a> meta key (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2642">issue 2642</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">python</span> <span class="pre">-m</span> <span class="pre">scrapy</span></code> as a more explicit alternative to the <code class="docutils literal notranslate"><span class="pre">scrapy</span></code> command (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2740">issue 2740</a>)</p></li>
</ul>
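<p>Several of the features above are driven by request meta keys. A minimal sketch of what such meta could look like; the proxy host, credentials, and retry count below are illustrative placeholders:</p>

```python
# meta dict combining two 1.4 features: proxy credentials embedded in
# the proxy URL (issue 2526) and a per-request retry limit via the
# max_retry_times key (issue 2642).
meta = {
    "proxy": "http://user:pass@proxy.example.com:8080",
    "max_retry_times": 8,
}
# In a spider you would pass this as: scrapy.Request(url, meta=meta)
```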
</div>
<div class="section" id="id41">
<h3>Bug fixes<a class="headerlink" href="#id41" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>LinkExtractor now strips leading and trailing whitespace from attributes (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2547">issue 2547</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1614">issue 1614</a>)</p></li>
<li><p>Properly handle whitespaces in action attribute in
<a class="reference internal" href="topics/request-response.html#scrapy.http.FormRequest" title="scrapy.http.FormRequest"><code class="xref py py-class docutils literal notranslate"><span class="pre">FormRequest</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2548">issue 2548</a>)</p></li>
<li><p>Buffer CONNECT response bytes from the proxy until all HTTP headers are received (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2495">issue 2495</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2491">issue 2491</a>)</p></li>
<li><p>The FTP downloader now works on Python 3, provided you use Twisted&gt;=17.1 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2599">issue 2599</a>)</p></li>
<li><p>Use body to choose the response type after decompressing content (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2393">issue 2393</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2145">issue 2145</a>)</p></li>
<li><p>Always decompress <code class="docutils literal notranslate"><span class="pre">Content-Encoding:</span> <span class="pre">gzip</span></code> at the <a class="reference internal" href="topics/downloader-middleware.html#scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware" title="scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware"><code class="xref py py-class docutils literal notranslate"><span class="pre">HttpCompressionMiddleware</span></code></a> stage (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2391">issue 2391</a>)</p></li>
<li><p>Respect custom log level in <code class="docutils literal notranslate"><span class="pre">Spider.custom_settings</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2581">issue 2581</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1612">issue 1612</a>)</p></li>
<li><p>&quot;make htmlview&quot; fix for macOS (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2661">issue 2661</a>)</p></li>
<li><p>Remove &quot;commands&quot; from the command list (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2695">issue 2695</a>)</p></li>
<li><p>Fix duplicate Content-Length header for POST requests with an empty body (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2677">issue 2677</a>)</p></li>
<li><p>Properly cancel large downloads, i.e. above <a class="reference internal" href="topics/settings.html#std-setting-DOWNLOAD_MAXSIZE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOAD_MAXSIZE</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1616">issue 1616</a>)</p></li>
<li><p>ImagesPipeline: fixed processing of transparent PNG images with palette (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2675">issue 2675</a>)</p></li>
</ul>
</div>
<div class="section" id="cleanups-refactoring">
<h3>Cleanups and refactoring<a class="headerlink" href="#cleanups-refactoring" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Tests: remove temp files and folders (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2570">issue 2570</a>), fixed ProjectUtilsTest on macOS (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2569">issue 2569</a>), use portable PyPy for Linux on Travis CI (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2710">issue 2710</a>)</p></li>
<li><p>Separate building request from <code class="docutils literal notranslate"><span class="pre">_requests_to_follow</span></code> in CrawlSpider (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2562">issue 2562</a>)</p></li>
<li><p>Remove the &quot;Python 3 progress&quot; badge (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2567">issue 2567</a>)</p></li>
<li><p>Add a couple more lines to <code class="docutils literal notranslate"><span class="pre">.gitignore</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2557">issue 2557</a>)</p></li>
<li><p>Remove the bumpversion prerelease configuration (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2159">issue 2159</a>)</p></li>
<li><p>Add a codecov.yml file (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2750">issue 2750</a>)</p></li>
<li><p>Set context factory implementation based on Twisted version (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2577">issue 2577</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2560">issue 2560</a>)</p></li>
<li><p>Add omitted <code class="docutils literal notranslate"><span class="pre">self</span></code> arguments in the default project middleware template (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2595">issue 2595</a>)</p></li>
<li><p>Remove redundant <code class="docutils literal notranslate"><span class="pre">slot.add_request()</span></code> call in ExecutionEngine (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2617">issue 2617</a>)</p></li>
<li><p>Catch more specific <code class="docutils literal notranslate"><span class="pre">os.error</span></code> exception in
<code class="docutils literal notranslate"><span class="pre">scrapy.pipelines.files.FSFilesStore</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2644">issue 2644</a>)</p></li>
<li><p>Change the &quot;localhost&quot; test server certificate (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2720">issue 2720</a>)</p></li>
<li><p>Remove unused <code class="docutils literal notranslate"><span class="pre">MEMUSAGE_REPORT</span></code> setting (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2576">issue 2576</a>)</p></li>
</ul>
</div>
<div class="section" id="id42">
<h3>Documentation<a class="headerlink" href="#id42" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Binary mode is required for exporters (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2564">issue 2564</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2553">issue 2553</a>)</p></li>
<li><p>Mention issue with <a class="reference internal" href="topics/request-response.html#scrapy.http.FormRequest.from_response" title="scrapy.http.FormRequest.from_response"><code class="xref py py-meth docutils literal notranslate"><span class="pre">FormRequest.from_response</span></code></a> due to a bug in lxml (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2572">issue 2572</a>)</p></li>
<li><p>Use single quotes uniformly in templates (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2596">issue 2596</a>)</p></li>
<li><p>Document <a class="reference internal" href="topics/settings.html#std-reqmeta-ftp_user"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">ftp_user</span></code></a> and <a class="reference internal" href="topics/settings.html#std-reqmeta-ftp_password"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">ftp_password</span></code></a> meta keys (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2587">issue 2587</a>)</p></li>
<li><p>Removed section on deprecated <code class="docutils literal notranslate"><span class="pre">contrib/</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2636">issue 2636</a>)</p></li>
<li><p>Recommend Anaconda when installing Scrapy on Windows (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2477">issue 2477</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2475">issue 2475</a>)</p></li>
<li><p>FAQ: rewrite the note on Python 3 support on Windows (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2690">issue 2690</a>)</p></li>
<li><p>Rearranged selector sections (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2705">issue 2705</a>)</p></li>
<li><p>Remove <code class="docutils literal notranslate"><span class="pre">__nonzero__</span></code> from <a class="reference internal" href="topics/selectors.html#scrapy.selector.SelectorList" title="scrapy.selector.SelectorList"><code class="xref py py-class docutils literal notranslate"><span class="pre">SelectorList</span></code></a> docs (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2683">issue 2683</a>)</p></li>
<li><p>Mention how to disable request filtering in the documentation of the <a class="reference internal" href="topics/settings.html#std-setting-DUPEFILTER_CLASS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DUPEFILTER_CLASS</span></code></a> setting (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2714">issue 2714</a>)</p></li>
<li><p>Add sphinx_rtd_theme to the docs setup README (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2668">issue 2668</a>)</p></li>
<li><p>Open file in text mode in the JSON item writer example (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2729">issue 2729</a>)</p></li>
<li><p>Clarify the <code class="docutils literal notranslate"><span class="pre">allowed_domains</span></code> example (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2670">issue 2670</a>)</p></li>
</ul>
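<p>As a hedged illustration of the request-filtering note above (issue 2714): one documented way to turn off request de-duplication entirely is to point <code class="docutils literal notranslate"><span class="pre">DUPEFILTER_CLASS</span></code> at the no-op base filter that ships with Scrapy. A minimal sketch for a project's <code class="docutils literal notranslate"><span class="pre">settings.py</span></code>:</p>

```python
# settings.py (sketch): disable request de-duplication entirely by
# pointing DUPEFILTER_CLASS at Scrapy's no-op base filter, which never
# marks any request as seen.
DUPEFILTER_CLASS = "scrapy.dupefilters.BaseDupeFilter"
```

<p>Alternatively, de-duplication can be skipped per request by passing <code class="docutils literal notranslate"><span class="pre">dont_filter=True</span></code> to <code class="docutils literal notranslate"><span class="pre">Request</span></code>.</p>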
</div>
</div>
<div class="section" id="scrapy-1-3-3-2017-03-10">
<span id="release-1-3-3"></span><h2>Scrapy 1.3.3 (2017-03-10)<a class="headerlink" href="#scrapy-1-3-3-2017-03-10" title="Permalink to this headline">¶</a></h2>
<div class="section" id="id43">
<h3>Bug fixes<a class="headerlink" href="#id43" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Make <code class="docutils literal notranslate"><span class="pre">SpiderLoader</span></code> raise <code class="docutils literal notranslate"><span class="pre">ImportError</span></code> again for missing dependencies and a wrong <a class="reference internal" href="topics/settings.html#std-setting-SPIDER_MODULES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">SPIDER_MODULES</span></code></a>. These exceptions were silenced as warnings since 1.3.0. A new setting is introduced to toggle between warning and exception if needed; see <a class="reference internal" href="topics/settings.html#std-setting-SPIDER_LOADER_WARN_ONLY"><code class="xref std std-setting docutils literal notranslate"><span class="pre">SPIDER_LOADER_WARN_ONLY</span></code></a> for details.</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-1-3-2-2017-02-13">
<span id="release-1-3-2"></span><h2>Scrapy 1.3.2 (2017-02-13)<a class="headerlink" href="#scrapy-1-3-2-2017-02-13" title="Permalink to this headline">¶</a></h2>
<div class="section" id="id44">
<h3>Bug fixes<a class="headerlink" href="#id44" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Preserve request class when converting to/from dicts (utils.reqser) (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2510">issue 2510</a>)</p></li>
<li><p>Use consistent selectors for the author field in the tutorial (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2551">issue 2551</a>)</p></li>
<li><p>Fix TLS compatibility with Twisted 17+ (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2558">issue 2558</a>)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-1-3-1-2017-02-08">
<span id="release-1-3-1"></span><h2>Scrapy 1.3.1 (2017-02-08)<a class="headerlink" href="#scrapy-1-3-1-2017-02-08" title="Permalink to this headline">¶</a></h2>
<div class="section" id="id45">
<h3>New features<a class="headerlink" href="#id45" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Support <code class="docutils literal notranslate"><span class="pre">'True'</span></code> and <code class="docutils literal notranslate"><span class="pre">'False'</span></code> string values for boolean settings (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2519">issue 2519</a>); you can now do something like <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">crawl</span> <span class="pre">myspider</span> <span class="pre">-s</span> <span class="pre">REDIRECT_ENABLED=False</span></code>.</p></li>
<li><p>Support kwargs with <code class="docutils literal notranslate"><span class="pre">response.xpath()</span></code> to use <a class="reference internal" href="topics/selectors.html#topics-selectors-xpath-variables"><span class="std std-ref">XPath variables</span></a> and ad-hoc namespace declarations; this requires at least Parsel v1.1 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2457">issue 2457</a>)</p></li>
<li><p>Add support for Python 3.6 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2485">issue 2485</a>)</p></li>
<li><p>Run tests on PyPy (warning: some tests still fail, so PyPy is not supported yet).</p></li>
</ul>
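<p>The boolean-string support above (issue 2519) can be sketched with a small stand-alone helper that mimics the coercion Scrapy applies to <code class="docutils literal notranslate"><span class="pre">-s</span> <span class="pre">NAME=value</span></code> overrides. The real logic lives in <code class="docutils literal notranslate"><span class="pre">scrapy.settings</span></code>; this is an illustrative sketch, not Scrapy's exact code:</p>

```python
def getbool(value):
    """Coerce a setting value to bool, mimicking Scrapy's behavior for
    string overrides passed on the command line with -s NAME=value."""
    if isinstance(value, bool):
        return value
    try:
        # numeric strings: "0" -> False, "1" -> True
        return bool(int(value))
    except ValueError:
        if value in ("True", "true"):
            return True
        if value in ("False", "false"):
            return False
        raise ValueError("unsupported boolean value: %r" % (value,))

getbool("False")  # -> False, so -s REDIRECT_ENABLED=False now works
```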
</div>
<div class="section" id="id46">
<h3>Bug fixes<a class="headerlink" href="#id46" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Enforce <code class="docutils literal notranslate"><span class="pre">DNS_TIMEOUT</span></code> setting (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2496">issue 2496</a>)</p></li>
<li><p>Fix the <a class="reference internal" href="topics/commands.html#std-command-view"><code class="xref std std-command docutils literal notranslate"><span class="pre">view</span></code></a> command; it was a regression in v1.3.0 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2503">issue 2503</a>)</p></li>
<li><p>Fix tests regarding <code class="docutils literal notranslate"><span class="pre">*_EXPIRES</span> <span class="pre">settings</span></code> with Files/Images pipelines (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2460">issue 2460</a>)</p></li>
<li><p>Fix name of generated pipeline class when using the basic project template (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2466">issue 2466</a>)</p></li>
<li><p>Fix compatibility with Twisted 17+ (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2496">issue 2496</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2528">issue 2528</a>)</p></li>
<li><p>Fix <code class="docutils literal notranslate"><span class="pre">scrapy.Item</span></code> inheritance on Python 3.6 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2511">issue 2511</a>)</p></li>
<li><p>Enforce numeric values for components order in <code class="docutils literal notranslate"><span class="pre">SPIDER_MIDDLEWARES</span></code>, <code class="docutils literal notranslate"><span class="pre">DOWNLOADER_MIDDLEWARES</span></code>, <code class="docutils literal notranslate"><span class="pre">EXTENSIONS</span></code> and <code class="docutils literal notranslate"><span class="pre">SPIDER_CONTRACTS</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2420">issue 2420</a>)</p></li>
</ul>
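<p>The numeric-order enforcement above (issue 2420) concerns settings dicts like the following sketch. The path <code class="docutils literal notranslate"><span class="pre">myproject.middlewares.CustomProxyMiddleware</span></code> is a hypothetical example; <code class="docutils literal notranslate"><span class="pre">None</span></code> keeps its special, documented meaning of disabling a component:</p>

```python
# settings.py (sketch): orders must be numbers; lower values run closer
# to the engine, higher values closer to the downloader.
DOWNLOADER_MIDDLEWARES = {
    # hypothetical custom middleware, ordered between the built-ins
    "myproject.middlewares.CustomProxyMiddleware": 350,
    # None (not a number) is still allowed: it disables the component
    "scrapy.downloadermiddlewares.redirect.RedirectMiddleware": None,
}
```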
</div>
<div class="section" id="id47">
<h3>Documentation<a class="headerlink" href="#id47" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Reword the Code of Conduct section and upgrade to Contributor Covenant v1.4 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2469">issue 2469</a>)</p></li>
<li><p>Clarify that passing spider arguments converts them to spider attributes (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2483">issue 2483</a>)</p></li>
<li><p>Document the <code class="docutils literal notranslate"><span class="pre">formid</span></code> argument of <code class="docutils literal notranslate"><span class="pre">FormRequest.from_response()</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2497">issue 2497</a>)</p></li>
<li><p>Add .rst extension to README files (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2507">issue 2507</a>)</p></li>
<li><p>Mention the LevelDB cache storage backend (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2525">issue 2525</a>)</p></li>
<li><p>Use <code class="docutils literal notranslate"><span class="pre">yield</span></code> in sample callback code (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2533">issue 2533</a>)</p></li>
<li><p>Add a note about HTML entity decoding with <code class="docutils literal notranslate"><span class="pre">.re()/.re_first()</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1704">issue 1704</a>)</p></li>
<li><p>Typos (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2512">issue 2512</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2534">issue 2534</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2531">issue 2531</a>)</p></li>
</ul>
</div>
<div class="section" id="cleanups">
<h3>Cleanups<a class="headerlink" href="#cleanups" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Remove redundant check in <code class="docutils literal notranslate"><span class="pre">MetaRefreshMiddleware</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2542">issue 2542</a>).</p></li>
<li><p>Faster checks for allow/deny patterns in <code class="docutils literal notranslate"><span class="pre">LinkExtractor</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2538">issue 2538</a>)</p></li>
<li><p>Remove dead code supporting old Twisted versions (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2544">issue 2544</a>)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-1-3-0-2016-12-21">
<span id="release-1-3-0"></span><h2>Scrapy 1.3.0 (2016-12-21)<a class="headerlink" href="#scrapy-1-3-0-2016-12-21" title="Permalink to this headline">¶</a></h2>
<p>This release comes rather soon after 1.2.2 for one main reason: it was found out that releases from 0.18 up to 1.2.2 (included) use some backported code from Twisted (<code class="docutils literal notranslate"><span class="pre">scrapy.xlib.tx.*</span></code>), even when newer Twisted modules are available. Scrapy now uses <code class="docutils literal notranslate"><span class="pre">twisted.web.client</span></code> and <code class="docutils literal notranslate"><span class="pre">twisted.internet.endpoints</span></code> directly. (See also the cleanups below.)</p>
<p>As it is a major change, we wanted to get the bug fix out quickly while not breaking any projects using the 1.2 series.</p>
<div class="section" id="id48">
<h3>New features<a class="headerlink" href="#id48" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">MailSender</span></code> now accepts single strings as values for the <code class="docutils literal notranslate"><span class="pre">to</span></code> and <code class="docutils literal notranslate"><span class="pre">cc</span></code> arguments (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2272">issue 2272</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">fetch</span> <span class="pre">url</span></code>, <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">shell</span> <span class="pre">url</span></code> and <code class="docutils literal notranslate"><span class="pre">fetch(url)</span></code> inside the Scrapy shell now follow HTTP redirections by default (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2290">issue 2290</a>); see <a class="reference internal" href="topics/commands.html#std-command-fetch"><code class="xref std std-command docutils literal notranslate"><span class="pre">fetch</span></code></a> and <a class="reference internal" href="topics/commands.html#std-command-shell"><code class="xref std std-command docutils literal notranslate"><span class="pre">shell</span></code></a> for details.</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">HttpErrorMiddleware</span></code> now logs errors with <code class="docutils literal notranslate"><span class="pre">INFO</span></code> level instead of <code class="docutils literal notranslate"><span class="pre">DEBUG</span></code>; this is technically <strong>backward incompatible</strong>, so please check your log parsers.</p></li>
<li><p>By default, logger names now use a long-form path, e.g. <code class="docutils literal notranslate"><span class="pre">[scrapy.extensions.logstats]</span></code>, instead of the shorter "top-level" variant of prior releases (e.g. <code class="docutils literal notranslate"><span class="pre">[scrapy]</span></code>); this is <strong>backward incompatible</strong> if you have log parsers expecting the short logger name part. You can switch back to short logger names by setting <a class="reference internal" href="topics/settings.html#std-setting-LOG_SHORT_NAMES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">LOG_SHORT_NAMES</span></code></a> to <code class="docutils literal notranslate"><span class="pre">True</span></code>.</p></li>
</ul>
</div>
<div class="section" id="dependencies-cleanups">
<h3>Dependencies &amp; Cleanups<a class="headerlink" href="#dependencies-cleanups" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Scrapy now requires Twisted &gt;= 13.1, which is the case for many Linux distributions already.</p></li>
<li><p>As a consequence, we got rid of the <code class="docutils literal notranslate"><span class="pre">scrapy.xlib.tx.*</span></code> modules, which copied some of Twisted's code for users stuck with an "old" Twisted version.</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">ChunkedTransferMiddleware</span></code> is deprecated and removed from the default downloader middlewares.</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-1-2-3-2017-03-03">
<span id="release-1-2-3"></span><h2>Scrapy 1.2.3 (2017-03-03)<a class="headerlink" href="#scrapy-1-2-3-2017-03-03" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Packaging fix: disallow unsupported Twisted versions in setup.py</p></li>
</ul>
</div>
<div class="section" id="scrapy-1-2-2-2016-12-06">
<span id="release-1-2-2"></span><h2>Scrapy 1.2.2 (2016-12-06)<a class="headerlink" href="#scrapy-1-2-2-2016-12-06" title="Permalink to this headline">¶</a></h2>
<div class="section" id="id49">
<h3>Bug fixes<a class="headerlink" href="#id49" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Fix a cryptic traceback when a pipeline fails on <code class="docutils literal notranslate"><span class="pre">open_spider()</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2011">issue 2011</a>)</p></li>
<li><p>Fix embedded IPython shell variables (fixing <a class="reference external" href="https://github.com/scrapy/scrapy/issues/396">issue 396</a> that re-appeared in 1.2.0, fixed in <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2418">issue 2418</a>)</p></li>
<li><p>A couple of patches when dealing with robots.txt:</p>
<ul>
<li><p>handle (non-standard) relative sitemap URLs (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2390">issue 2390</a>)</p></li>
<li><p>handle non-ASCII URLs and User-Agents in Python 2 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2373">issue 2373</a>)</p></li>
</ul>
</li>
</ul>
</div>
<div class="section" id="id50">
<h3>Documentation<a class="headerlink" href="#id50" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Document the <code class="docutils literal notranslate"><span class="pre">&quot;download_latency&quot;</span></code> key in <code class="docutils literal notranslate"><span class="pre">Request</span></code>'s <code class="docutils literal notranslate"><span class="pre">meta</span></code> dict (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2033">issue 2033</a>)</p></li>
<li><p>Remove the page on (deprecated and unsupported) Ubuntu packages from the table of contents (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2335">issue 2335</a>)</p></li>
<li><p>A few fixed typos (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2346">issue 2346</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2369">issue 2369</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2369">issue 2369</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2380">issue 2380</a>) and clarifications (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2354">issue 2354</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2325">issue 2325</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2414">issue 2414</a>)</p></li>
</ul>
</div>
<div class="section" id="id51">
<h3>Other changes<a class="headerlink" href="#id51" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Advertize conda-forge as Scrapy's official conda channel (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2387">issue 2387</a>)</p></li>
<li><p>More helpful error messages when trying to use <code class="docutils literal notranslate"><span class="pre">.css()</span></code> or <code class="docutils literal notranslate"><span class="pre">.xpath()</span></code> on non-text responses (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2264">issue 2264</a>)</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">startproject</span></code> command now generates a sample <code class="docutils literal notranslate"><span class="pre">middlewares.py</span></code> file (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2335">issue 2335</a>)</p></li>
<li><p>Add more dependencies' version info in <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">version</span></code> verbose output (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2404">issue 2404</a>)</p></li>
<li><p>Remove all <code class="docutils literal notranslate"><span class="pre">*.pyc</span></code> files from source distribution (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2386">issue 2386</a>)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-1-2-1-2016-10-21">
<span id="release-1-2-1"></span><h2>Scrapy 1.2.1 (2016-10-21)<a class="headerlink" href="#scrapy-1-2-1-2016-10-21" title="Permalink to this headline">¶</a></h2>
<div class="section" id="id52">
<h3>Bug fixes<a class="headerlink" href="#id52" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Include OpenSSL's more permissive default ciphers when establishing TLS/SSL connections (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2314">issue 2314</a>)</p></li>
<li><p>Fix "Location" HTTP header decoding on non-ASCII URL redirects (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2321">issue 2321</a>)</p></li>
</ul>
</div>
<div class="section" id="id53">
<h3>Documentation<a class="headerlink" href="#id53" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Fix the JsonWriterPipeline example (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2302">issue 2302</a>)</p></li>
<li><p>Various notes: <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2330">issue 2330</a> on spider names, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2329">issue 2329</a> on middleware methods processing order, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2327">issue 2327</a> on getting multi-valued HTTP headers as lists.</p></li>
</ul>
</div>
<div class="section" id="id54">
<h3>Other changes<a class="headerlink" href="#id54" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Remove <code class="docutils literal notranslate"><span class="pre">www</span></code> from <code class="docutils literal notranslate"><span class="pre">start_urls</span></code> in built-in spider templates (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2299">issue 2299</a>)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-1-2-0-2016-10-03">
<span id="release-1-2-0"></span><h2>Scrapy 1.2.0 (2016-10-03)<a class="headerlink" href="#scrapy-1-2-0-2016-10-03" title="Permalink to this headline">¶</a></h2>
<div class="section" id="id55">
<h3>New features<a class="headerlink" href="#id55" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>New <a class="reference internal" href="topics/feed-exports.html#std-setting-FEED_EXPORT_ENCODING"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FEED_EXPORT_ENCODING</span></code></a> setting to customize the encoding used when writing items to a file. It can be used to turn off <code class="docutils literal notranslate"><span class="pre">\uXXXX</span></code> escapes in JSON output. It is also useful for those wanting something other than UTF-8 for XML or CSV output (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2034">issue 2034</a>)</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">startproject</span></code> command now supports an optional destination directory to override the default one based on the project name (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2005">issue 2005</a>)</p></li>
<li><p>New <a class="reference internal" href="topics/settings.html#std-setting-SCHEDULER_DEBUG"><code class="xref std std-setting docutils literal notranslate"><span class="pre">SCHEDULER_DEBUG</span></code></a> setting to log request serialization failures (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1610">issue 1610</a>)</p></li>
<li><p>The JSON encoder now supports serialization of <code class="docutils literal notranslate"><span class="pre">set</span></code> instances (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2058">issue 2058</a>)</p></li>
<li><p>Interpret <code class="docutils literal notranslate"><span class="pre">application/json-amazonui-streaming</span></code> as <code class="docutils literal notranslate"><span class="pre">TextResponse</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1503">issue 1503</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy</span></code> is imported by default when using shell tools (<a class="reference internal" href="topics/commands.html#std-command-shell"><code class="xref std std-command docutils literal notranslate"><span class="pre">shell</span></code></a>, <a class="reference internal" href="topics/shell.html#topics-shell-inspect-response"><span class="std std-ref">inspect_response</span></a>) (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2248">issue 2248</a>)</p></li>
</ul>
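<p>The escaping that <code class="docutils literal notranslate"><span class="pre">FEED_EXPORT_ENCODING</span></code> turns off can be illustrated with the standard library alone; <code class="docutils literal notranslate"><span class="pre">ensure_ascii=False</span></code> below plays the role that setting <code class="docutils literal notranslate"><span class="pre">FEED_EXPORT_ENCODING</span> <span class="pre">=</span> <span class="pre">'utf-8'</span></code> plays for feed exports (a sketch, not Scrapy's exporter code):</p>

```python
import json

item = {"title": "café"}

# By default, non-ASCII characters are written as \uXXXX escapes:
json.dumps(item)                      # -> '{"title": "caf\\u00e9"}'

# With ASCII escaping turned off, characters are kept as-is,
# which is what a UTF-8 feed export encoding gives you:
json.dumps(item, ensure_ascii=False)  # -> '{"title": "café"}'
```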
</div>
<div class="section" id="id56">
<h3>Bug fixes<a class="headerlink" href="#id56" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>DefaultRequestHeaders middleware now runs before UserAgent middleware (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2088">issue 2088</a>). Warning: this is technically backward incompatible, although we consider it a bug fix.</p></li>
<li><p>The HTTP cache extension and plugins that use the <code class="docutils literal notranslate"><span class="pre">.scrapy</span></code> data dir now work outside projects (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1581">issue 1581</a>). Warning: this is technically backward incompatible, although we consider it a bug fix.</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">Selector</span></code> does not allow passing both <code class="docutils literal notranslate"><span class="pre">response</span></code> and <code class="docutils literal notranslate"><span class="pre">text</span></code> anymore (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2153">issue 2153</a>)</p></li>
<li><p>Fixed logging of wrong callback name with <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">parse</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2169">issue 2169</a>)</p></li>
<li><p>Fix a weird gzip decompression bug (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1606">issue 1606</a>)</p></li>
<li><p>Fix selected callbacks when using <code class="docutils literal notranslate"><span class="pre">CrawlSpider</span></code> with <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">parse</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2225">issue 2225</a>)</p></li>
<li><p>Fix invalid JSON and XML files when the spider yields no items (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/872">issue 872</a>)</p></li>
<li><p>Implement <code class="docutils literal notranslate"><span class="pre">flush()</span></code> for <code class="docutils literal notranslate"><span class="pre">StreamLogger</span></code>, avoiding a warning in logs (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2125">issue 2125</a>)</p></li>
</ul>
</div>
<div class="section" id="refactoring">
<h3>Refactoring<a class="headerlink" href="#refactoring" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">canonicalize_url</span></code> has been moved to <a class="reference external" href="https://w3lib.readthedocs.io/en/latest/w3lib.html#w3lib.url.canonicalize_url">w3lib.url</a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2168">issue 2168</a>).</p></li>
</ul>
</div>
<div class="section" id="tests-requirements">
<h3>Tests &amp; Requirements<a class="headerlink" href="#tests-requirements" title="Permalink to this headline">¶</a></h3>
<p>Scrapy's new requirements baseline is Debian 8 "Jessie". It was previously Ubuntu 12.04 Precise. What this means in practice is that we run continuous integration tests with these (main) package versions at a minimum: Twisted 14.0, pyOpenSSL 0.14, lxml 3.4.</p>
<p>Scrapy may very well work with older versions of these packages (the code base still has switches for older Twisted versions, for example), but it is not guaranteed (because it is not tested anymore).</p>
</div>
<div class="section" id="id57">
<h3>Documentation<a class="headerlink" href="#id57" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Grammar fixes: <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2128">issue 2128</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1566">issue 1566</a>.</p></li>
<li><p>Download stats badge removed from README (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2160">issue 2160</a>)</p></li>
<li><p>New Scrapy <a class="reference internal" href="topics/architecture.html#topics-architecture"><span class="std std-ref">architecture diagram</span></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2165">issue 2165</a>).</p></li>
<li><p>Updated <code class="docutils literal notranslate"><span class="pre">Response</span></code> parameters documentation (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2197">issue 2197</a>)</p></li>
<li><p>Reworded <code class="docutils literal notranslate"><span class="pre">RANDOMIZE_DOWNLOAD_DELAY</span></code> description (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2190">issue 2190</a>)</p></li>
<li><p>Add StackOverflow as a support channel (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2257">issue 2257</a>)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-1-1-4-2017-03-03">
<span id="release-1-1-4"></span><h2>Scrapy 1.1.4 (2017-03-03)<a class="headerlink" href="#scrapy-1-1-4-2017-03-03" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Packaging fix: disallow unsupported Twisted versions in setup.py</p></li>
</ul>
</div>
<div class="section" id="scrapy-1-1-3-2016-09-22">
<span id="release-1-1-3"></span><h2>Scrapy 1.1.3 (2016-09-22)<a class="headerlink" href="#scrapy-1-1-3-2016-09-22" title="Permalink to this headline">¶</a></h2>
<div class="section" id="id58">
<h3>Bug fixes<a class="headerlink" href="#id58" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Class attributes for subclasses of <code class="docutils literal notranslate"><span class="pre">ImagesPipeline</span></code> and <code class="docutils literal notranslate"><span class="pre">FilesPipeline</span></code> work as they did before 1.1.1 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2243">issue 2243</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2198">issue 2198</a>)</p></li>
</ul>
</div>
<div class="section" id="id59">
<h3>Documentation<a class="headerlink" href="#id59" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p><a class="reference internal" href="intro/overview.html#intro-overview"><span class="std std-ref">Overview</span></a> and <a class="reference internal" href="intro/tutorial.html#intro-tutorial"><span class="std std-ref">tutorial</span></a> rewritten to use the http://toscrape.com websites (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2236">issue 2236</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2249">issue 2249</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2252">issue 2252</a>)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-1-1-2-2016-08-18">
<span id="release-1-1-2"></span><h2>Scrapy 1.1.2 (2016-08-18)<a class="headerlink" href="#scrapy-1-1-2-2016-08-18" title="Permalink to this headline">¶</a></h2>
<div class="section" id="id60">
<h3>Bug fixes<a class="headerlink" href="#id60" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Introduce a missing <code class="docutils literal notranslate"><span class="pre">IMAGES_STORE_S3_ACL</span></code> setting to override the default ACL policy in <code class="docutils literal notranslate"><span class="pre">ImagesPipeline</span></code> when uploading images to S3 (note that the default ACL policy is "private", instead of "public-read", since Scrapy 1.1.0)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">IMAGES_EXPIRES</span></code> default value set back to 90 (the regression was introduced in 1.1.1)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-1-1-1-2016-07-13">
<span id="release-1-1-1"></span><h2>Scrapy 1.1.1 (2016-07-13)<a class="headerlink" href="#scrapy-1-1-1-2016-07-13" title="Permalink to this headline">¶</a></h2>
<div class="section" id="id61">
<h3>Bug fixes<a class="headerlink" href="#id61" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Add "Host" header in CONNECT requests to HTTPS proxies (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2069">issue 2069</a>)</p></li>
<li><p>Use response <code class="docutils literal notranslate"><span class="pre">body</span></code> when choosing the response class (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2001">issue 2001</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2000">issue 2000</a>)</p></li>
<li><p>Do not fail on canonicalizing URLs with wrong netlocs (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2038">issue 2038</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2010">issue 2010</a>)</p></li>
<li><p>A few fixes for <code class="docutils literal notranslate"><span class="pre">HttpCompressionMiddleware</span></code> (and <code class="docutils literal notranslate"><span class="pre">SitemapSpider</span></code>):</p>
<ul>
<li><p>Do not decode HEAD responses (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2008">issue 2008</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1899">issue 1899</a>)</p></li>
<li><p>Handle the charset parameter in gzip Content-Type headers (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2050">issue 2050</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2049">issue 2049</a>)</p></li>
<li><p>Do not decompress gzip octet-stream responses (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2065">issue 2065</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2063">issue 2063</a>)</p></li>
</ul>
</li>
<li><p>Catch (and ignore with a warning) exceptions when verifying certificates against IP-address hosts (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2094">issue 2094</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2092">issue 2092</a>)</p></li>
<li><p>Make <code class="docutils literal notranslate"><span class="pre">FilesPipeline</span></code> and <code class="docutils literal notranslate"><span class="pre">ImagesPipeline</span></code> backward compatible again regarding the use of legacy class attributes for customization (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1989">issue 1989</a>, fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1985">issue 1985</a>)</p></li>
</ul>
</div>
<div class="section" id="id62">
<h3>New features<a class="headerlink" href="#id62" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Enable the genspider command outside project folders (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2052">issue 2052</a>)</p></li>
<li><p>Retry HTTPS CONNECT <code class="docutils literal notranslate"><span class="pre">TunnelError</span></code>s by default (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1974">issue 1974</a>)</p></li>
</ul>
</div>
<div class="section" id="id63">
<h3>Documentation<a class="headerlink" href="#id63" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">FEED_TEMPDIR</span></code> setting at lexicographical position (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/9b3c72c">commit 9b3c72c</a>)</p></li>
<li><p>Use idiomatic <code class="docutils literal notranslate"><span class="pre">.extract_first()</span></code> in the overview (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1994">issue 1994</a>)</p></li>
<li><p>Update years in the copyright notice (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/c2c8036">commit c2c8036</a>)</p></li>
<li><p>Add information and an example on errbacks (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1995">issue 1995</a>)</p></li>
<li><p>Use the "url" variable in the downloader middleware example (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2015">issue 2015</a>)</p></li>
<li><p>Grammar fixes (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2054">issue 2054</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/2120">issue 2120</a>)</p></li>
<li><p>New FAQ entry on using BeautifulSoup in spider callbacks (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2048">issue 2048</a>)</p></li>
<li><p>Add notes about Scrapy not working on Windows with Python 3 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2060">issue 2060</a>)</p></li>
<li><p>Encourage complete titles in pull requests (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2026">issue 2026</a>)</p></li>
</ul>
</div>
<div class="section" id="tests">
<h3>Tests<a class="headerlink" href="#tests" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Upgrade the py.test requirement on Travis CI and pin pytest-cov to 2.2.1 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/2095">issue 2095</a>)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-1-1-0-2016-05-11">
<span id="release-1-1-0"></span><h2>Scrapy 1.1.0 (2016-05-11)<a class="headerlink" href="#scrapy-1-1-0-2016-05-11" title="Permalink to this headline">¶</a></h2>
<p>This 1.1 release brings a lot of interesting features and bug fixes:</p>
<ul class="simple">
<li><p>Scrapy 1.1 has beta Python 3 support (requires Twisted &gt;= 15.5). See <a class="reference internal" href="#news-betapy3"><span class="std std-ref">Beta Python 3 support</span></a> for more details and some limitations.</p></li>
<li><p>Hot new features:</p>
<ul>
<li><p>Item loaders now support nested loaders (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1467">issue 1467</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">FormRequest.from_response</span></code> improvements (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1382">issue 1382</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1137">issue 1137</a>)</p></li>
<li><p>Added setting <code class="docutils literal notranslate"><span class="pre">AUTOTHROTTLE_TARGET_CONCURRENCY</span></code> and improved AutoThrottle docs (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1324">issue 1324</a>)</p></li>
<li><p>Added <code class="docutils literal notranslate"><span class="pre">response.text</span></code> to get the body as unicode (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1730">issue 1730</a>)</p></li>
<li><p>Anonymous S3 connections (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1358">issue 1358</a>)</p></li>
<li><p>Deferreds in downloader middlewares (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1473">issue 1473</a>). This enables better robots.txt handling (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1471">issue 1471</a>)</p></li>
<li><p>HTTP caching now follows RFC 2616 more closely, with the added settings <a class="reference internal" href="topics/downloader-middleware.html#std-setting-HTTPCACHE_ALWAYS_STORE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">HTTPCACHE_ALWAYS_STORE</span></code></a> and <a class="reference internal" href="topics/downloader-middleware.html#std-setting-HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1151">issue 1151</a>)</p></li>
<li><p>Selectors were extracted to the parsel library (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1409">issue 1409</a>). This means you can use Scrapy Selectors without Scrapy, and can also upgrade the selectors engine without needing to upgrade Scrapy.</p></li>
<li><p>The HTTPS downloader now performs TLS protocol negotiation by default, instead of forcing TLS 1.0. You can also set the SSL/TLS method using the new <a class="reference internal" href="topics/settings.html#std-setting-DOWNLOADER_CLIENT_TLS_METHOD"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOADER_CLIENT_TLS_METHOD</span></code></a>.</p></li>
</ul>
</li>
<li><p>These bug fixes may require your attention:</p>
<ul>
<li><p>Don't retry bad requests (HTTP 400) by default (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1289">issue 1289</a>). If you need the old behavior, add <code class="docutils literal notranslate"><span class="pre">400</span></code> to <a class="reference internal" href="topics/downloader-middleware.html#std-setting-RETRY_HTTP_CODES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">RETRY_HTTP_CODES</span></code></a>.</p></li>
<li><p>Fix shell file argument handling (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1710">issue 1710</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1550">issue 1550</a>). If you try <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">shell</span> <span class="pre">index.html</span></code> it will try to load the URL <a class="reference external" href="http://index.html">http://index.html</a>; use <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">shell</span> <span class="pre">./index.html</span></code> to load a local file.</p></li>
<li><p>Robots.txt compliance is now enabled by default for newly-created projects (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1724">issue 1724</a>). Scrapy will also wait for robots.txt to be downloaded before proceeding with the crawl (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1735">issue 1735</a>). If you want to disable this behavior, update <a class="reference internal" href="topics/settings.html#std-setting-ROBOTSTXT_OBEY"><code class="xref std std-setting docutils literal notranslate"><span class="pre">ROBOTSTXT_OBEY</span></code></a> in the <code class="docutils literal notranslate"><span class="pre">settings.py</span></code> file after creating a new project.</p></li>
<li><p>Exporters now work on unicode, instead of bytes, by default (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1080">issue 1080</a>). If you use <code class="xref py py-class docutils literal notranslate"><span class="pre">PythonItemExporter</span></code>, you may want to update your code to disable binary mode, which is now deprecated.</p></li>
<li><p>Accept XML node names containing dots as valid (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1533">issue 1533</a>)</p></li>
<li><p>When uploading files or images to S3 (with <code class="docutils literal notranslate"><span class="pre">FilesPipeline</span></code> or <code class="docutils literal notranslate"><span class="pre">ImagesPipeline</span></code>), the default ACL policy is now "private" instead of "public". <strong>Warning: backward incompatible!</strong> You can use <a class="reference internal" href="topics/media-pipeline.html#std-setting-FILES_STORE_S3_ACL"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FILES_STORE_S3_ACL</span></code></a> to change it.</p></li>
<li><p>We've reimplemented <code class="docutils literal notranslate"><span class="pre">canonicalize_url()</span></code> for more correct output, especially for URLs with non-ASCII characters (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1947">issue 1947</a>). This could change link extractors' output compared to previous Scrapy versions. It may also invalidate some cache entries you could still have from pre-1.1 runs. <strong>Warning: backward incompatible!</strong></p></li>
</ul>
</li>
</ul>
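<p>The kind of normalization the reimplemented <code class="docutils literal notranslate"><span class="pre">canonicalize_url()</span></code> performs can be approximated with the standard library. The sketch below is illustrative only; Scrapy's actual implementation lives in w3lib and handles many more cases:</p>

```python
from urllib.parse import urlsplit, urlunsplit, quote, parse_qsl, urlencode

def canonicalize(url):
    """Illustrative canonicalization (not Scrapy's real one): percent-encode
    non-ASCII path bytes as UTF-8, sort query parameters, drop the fragment."""
    parts = urlsplit(url)
    path = quote(parts.path, safe="/%")  # keep existing %-escapes intact
    query = urlencode(sorted(parse_qsl(parts.query, keep_blank_values=True)))
    return urlunsplit((parts.scheme, parts.netloc, path or "/", query, ""))

print(canonicalize("http://example.com/a b?b=2&a=1"))  # http://example.com/a%20b?a=1&b=2
print(canonicalize("http://example.com/价格"))          # http://example.com/%E4%BB%B7%E6%A0%BC
```

<p>Sorting query parameters and encoding non-ASCII bytes deterministically is what makes two spellings of the same URL compare equal, which is also why the change can invalidate old cache entries keyed on the previous canonical form.</p>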
<p>Keep reading for more details on other improvements and bug fixes.</p>
<div class="section" id="beta-python-3-support">
<span id="news-betapy3"></span><h3>Beta Python 3 Support<a class="headerlink" href="#beta-python-3-support" title="Permalink to this headline">¶</a></h3>
<p>We have been <a class="reference external" href="https://github.com/scrapy/scrapy/wiki/Python-3-Porting">hard at work to make Scrapy run on Python 3</a>. As a result, now you can run spiders on Python 3.3, 3.4 and 3.5 (Twisted &gt;= 15.5 required). Some features are still missing (and some may never be ported).</p>
<p>Almost all builtin extensions/middlewares are expected to work. However, we are aware of some limitations in Python 3:</p>
<ul class="simple">
<li><p>Scrapy does not work on Windows with Python 3</p></li>
<li><p>Sending emails is not supported</p></li>
<li><p>The FTP download handler is not supported</p></li>
<li><p>The telnet console is not supported</p></li>
</ul>
</div>
<div class="section" id="additional-new-features-and-enhancements">
<h3>Additional New Features and Enhancements<a class="headerlink" href="#additional-new-features-and-enhancements" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Scrapy now has a <a class="reference external" href="https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md">Code of Conduct</a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1681">issue 1681</a>).</p></li>
<li><p>Command line tool now has completion for zsh (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/934">issue 934</a>).</p></li>
<li><p>Improvements to <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">shell</span></code>:</p>
<ul>
<li><p>Support for bpython and configuring the preferred Python shell via <code class="docutils literal notranslate"><span class="pre">SCRAPY_PYTHON_SHELL</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1100">issue 1100</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1444">issue 1444</a>)</p></li>
<li><p>Support URLs without scheme (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1498">issue 1498</a>). <strong>Warning: backward incompatible!</strong></p></li>
<li><p>Bring back support for relative file paths (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1710">issue 1710</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1550">issue 1550</a>)</p></li>
</ul>
</li>
<li><p>Added the <a class="reference internal" href="topics/settings.html#std-setting-MEMUSAGE_CHECK_INTERVAL_SECONDS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">MEMUSAGE_CHECK_INTERVAL_SECONDS</span></code></a> setting to change the default check interval (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1282">issue 1282</a>)</p></li>
<li><p>Download handlers are now lazy-loaded on the first request using their scheme (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1390">issue 1390</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1421">issue 1421</a>)</p></li>
<li><p>HTTPS download handlers no longer force TLS 1.0; instead, OpenSSL's <code class="docutils literal notranslate"><span class="pre">SSLv23_method()/TLS_method()</span></code> is used, allowing to try negotiating with the remote host the highest TLS protocol version it can (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1794">issue 1794</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1629">issue 1629</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">RedirectMiddleware</span></code> now skips the status codes listed in the <code class="docutils literal notranslate"><span class="pre">handle_httpstatus_list</span></code> spider attribute or in the <code class="docutils literal notranslate"><span class="pre">Request</span></code>'s <code class="docutils literal notranslate"><span class="pre">meta</span></code> key (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1334">issue 1334</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1364">issue 1364</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1447">issue 1447</a>)</p></li>
<li><p>Form submission:</p>
<ul>
<li><p>now works with <code class="docutils literal notranslate"><span class="pre">&lt;button&gt;</span></code> elements too (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1469">issue 1469</a>)</p></li>
<li><p>an empty string is now used for submit buttons without a value (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1472">issue 1472</a>)</p></li>
</ul>
</li>
<li><p>Dict-alike settings now have per-key priorities (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1135">issue 1135</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1149">issue 1149</a> and <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1586">issue 1586</a>)</p></li>
<li><p>Sending non-ASCII emails (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1662">issue 1662</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">CloseSpider</span></code> and <code class="docutils literal notranslate"><span class="pre">SpiderState</span></code> extensions now get disabled if their relevant settings are not set (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1723">issue 1723</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1725">issue 1725</a>)</p></li>
<li><p>Added method <code class="docutils literal notranslate"><span class="pre">ExecutionEngine.close</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1423">issue 1423</a>)</p></li>
<li><p>Added method <code class="docutils literal notranslate"><span class="pre">CrawlerRunner.create_crawler</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1528">issue 1528</a>)</p></li>
<li><p>The scheduler priority queue can now be customized via <a class="reference internal" href="topics/settings.html#std-setting-SCHEDULER_PRIORITY_QUEUE"><code class="xref std std-setting docutils literal notranslate"><span class="pre">SCHEDULER_PRIORITY_QUEUE</span></code></a> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1822">issue 1822</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">.pps</span></code> links are now ignored by default in link extractors (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1835">issue 1835</a>)</p></li>
<li><p>The temporary data folder used by feed exports can be customized with the new <a class="reference internal" href="topics/settings.html#std-setting-FEED_TEMPDIR"><code class="xref std std-setting docutils literal notranslate"><span class="pre">FEED_TEMPDIR</span></code></a> setting (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1847">issue 1847</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">FilesPipeline</span></code> and <code class="docutils literal notranslate"><span class="pre">ImagesPipeline</span></code> settings are now instance attributes instead of class attributes, enabling spider-specific behaviors (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1891">issue 1891</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">JsonItemExporter</span></code> now formats opening and closing square brackets on their own line (first and last lines of the output file) (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1950">issue 1950</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">botocore</span></code> is used for <code class="docutils literal notranslate"><span class="pre">S3FeedStorage</span></code>, <code class="docutils literal notranslate"><span class="pre">S3DownloadHandler</span></code> and <code class="docutils literal notranslate"><span class="pre">S3FilesStore</span></code> if available (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1761">issue 1761</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1883">issue 1883</a>)</p></li>
<li><p>Tons of documentation updates and related fixes (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1291">issue 1291</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1302">issue 1302</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1335">issue 1335</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1683">issue 1683</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1660">issue 1660</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1642">issue 1642</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1721">issue 1721</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1727">issue 1727</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1879">issue 1879</a>)</p></li>
<li><p>Other refactoring, optimizations and cleanup (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1476">issue 1476</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1481">issue 1481</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1477">issue 1477</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1315">issue 1315</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1290">issue 1290</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1750">issue 1750</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1881">issue 1881</a>)</p></li>
</ul>
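<p>The per-key priorities mentioned above for dict-alike settings (such as <code class="docutils literal notranslate"><span class="pre">DOWNLOADER_MIDDLEWARES</span></code>) can be modelled in a few lines. This is a stand-alone sketch of the semantics, not Scrapy's actual <code class="docutils literal notranslate"><span class="pre">BaseSettings</span></code> implementation:</p>

```python
class PerKeyPrioritySettings:
    """Each key remembers the priority it was set with; a later set()
    only wins if its priority is >= the one already stored for that key."""

    def __init__(self):
        self._data = {}  # key -> (value, priority)

    def set(self, key, value, priority):
        if key not in self._data or priority >= self._data[key][1]:
            self._data[key] = (value, priority)

    def get(self, key):
        return self._data[key][0]

s = PerKeyPrioritySettings()
s.set("DOWNLOADER_MIDDLEWARES", "project-value", priority=20)  # project settings
s.set("DOWNLOADER_MIDDLEWARES", "default-value", priority=0)   # defaults, ignored
print(s.get("DOWNLOADER_MIDDLEWARES"))  # project-value
```

<p>Because each key carries its own priority, a project-level value for one key can coexist with default-level values for all the others, instead of the whole dict being replaced at once.</p>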
</div>
<div class="section" id="deprecations-and-removals">
<h3>Deprecations and Removals<a class="headerlink" href="#deprecations-and-removals" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Added <code class="docutils literal notranslate"><span class="pre">to_bytes</span></code> and <code class="docutils literal notranslate"><span class="pre">to_unicode</span></code>, deprecated the <code class="docutils literal notranslate"><span class="pre">str_to_unicode</span></code> and <code class="docutils literal notranslate"><span class="pre">unicode_to_str</span></code> functions (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/778">issue 778</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">binary_is_text</span></code> is introduced, to replace use of <code class="docutils literal notranslate"><span class="pre">isbinarytext</span></code> (but with an inverse return value) (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1851">issue 1851</a>)</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">optional_features</span></code> set has been removed (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1359">issue 1359</a>).</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">--lsprof</span></code> command line option has been removed (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1689">issue 1689</a>). <strong>Warning: backward incompatible</strong>, but this doesn't break user code.</p></li>
<li><p>The following datatypes were deprecated (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1720">issue 1720</a>):</p>
<ul>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.utils.datatypes.MultiValueDictKeyError</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.utils.datatypes.MultiValueDict</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.utils.datatypes.SiteNode</span></code></p></li>
</ul>
</li>
<li><p>The previously bundled <code class="docutils literal notranslate"><span class="pre">scrapy.xlib.pydispatch</span></code> library was deprecated and replaced by <a class="reference external" href="https://pypi.org/project/PyDispatcher/">pydispatcher</a>.</p></li>
</ul>
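<p>As noted above, <code class="docutils literal notranslate"><span class="pre">binary_is_text</span></code> returns the opposite of the deprecated <code class="docutils literal notranslate"><span class="pre">isbinarytext</span></code>. A hypothetical sketch of the relationship; the detection heuristic here is deliberately simplified and is not Scrapy's exact implementation:</p>

```python
def binary_is_text(data: bytes) -> bool:
    """Simplified heuristic: data is text if it has no NUL bytes and
    decodes as UTF-8 (not Scrapy's actual detection logic)."""
    if b"\x00" in data:
        return False
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

def isbinarytext(data: bytes) -> bool:
    # Deprecated spelling: same check, opposite return value.
    return not binary_is_text(data)

print(binary_is_text(b"hello"), isbinarytext(b"hello"))  # True False
```

<p>Code migrating off <code class="docutils literal notranslate"><span class="pre">isbinarytext(x)</span></code> therefore becomes <code class="docutils literal notranslate"><span class="pre">not</span> <span class="pre">binary_is_text(x)</span></code>.</p>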
</div>
<div class="section" id="relocations">
<h3>Relocations<a class="headerlink" href="#relocations" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">telnetconsole</span></code> was relocated to <code class="docutils literal notranslate"><span class="pre">extensions/</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1524">issue 1524</a>)</p>
<ul>
<li><p>Note: telnet is not enabled on Python 3 (<a class="reference external" href="https://github.com/scrapy/scrapy/pull/1524#issuecomment-146985595">https://github.com/scrapy/scrapy/pull/1524#issuecomment-146985595</a>)</p></li>
</ul>
</li>
</ul>
</div>
<div class="section" id="bugfixes">
<h3>Bugfixes<a class="headerlink" href="#bugfixes" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Scrapy does not retry requests that got a <code class="docutils literal notranslate"><span class="pre">HTTP</span> <span class="pre">400</span> <span class="pre">Bad</span> <span class="pre">Request</span></code> response anymore (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1289">issue 1289</a>). <strong>Warning: backward incompatible!</strong></p></li>
<li><p>Support empty password for the http_proxy config (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1274">issue 1274</a>)</p></li>
<li><p>Interpret <code class="docutils literal notranslate"><span class="pre">application/x-json</span></code> as <code class="docutils literal notranslate"><span class="pre">TextResponse</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1333">issue 1333</a>)</p></li>
<li><p>Support link rel attribute with multiple values (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1201">issue 1201</a>)</p></li>
<li><p>Fixed <code class="docutils literal notranslate"><span class="pre">scrapy.http.FormRequest.from_response</span></code> when there is a <code class="docutils literal notranslate"><span class="pre">&lt;base&gt;</span></code> tag (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1564">issue 1564</a>).</p></li>
<li><p>Fixed <code class="docutils literal notranslate"><span class="pre">TEMPLATES_DIR</span></code> handling (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1575">issue 1575</a>).</p></li>
<li><p>Various <code class="docutils literal notranslate"><span class="pre">FormRequest</span></code> fixes (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1595">issue 1595</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1596">issue 1596</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1597">issue 1597</a>)</p></li>
<li><p>Made <code class="docutils literal notranslate"><span class="pre">_monkeypatches</span></code> more robust (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1634">issue 1634</a>)</p></li>
<li><p>Fixed bug in <code class="docutils literal notranslate"><span class="pre">XMLItemExporter</span></code> with non-string fields (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1738">issue 1738</a>).</p></li>
<li><p>Fixed the startproject command in macOS (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1635">issue 1635</a>)</p></li>
<li><p>Fixed <code class="xref py py-class docutils literal notranslate"><span class="pre">PythonItemExporter</span></code> and CSVExporter for non-string item types (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1737">issue 1737</a>)</p></li>
<li><p>Various logging related fixes (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1294">issue 1294</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1419">issue 1419</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1263">issue 1263</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1624">issue 1624</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1654">issue 1654</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1722">issue 1722</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1726">issue 1726</a> and <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1303">issue 1303</a>)</p></li>
<li><p>Fixed bug in <code class="docutils literal notranslate"><span class="pre">utils.template.render_templatefile()</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1212">issue 1212</a>).</p></li>
<li><p>Sitemaps extraction from <code class="docutils literal notranslate"><span class="pre">robots.txt</span></code> is now case-insensitive (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1902">issue 1902</a>).</p></li>
<li><p>HTTPS+CONNECT tunnels could get mixed up when using multiple proxies to the same remote host (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1912">issue 1912</a>).</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-1-0-7-2017-03-03">
<span id="release-1-0-7"></span><h2>Scrapy 1.0.7 (2017-03-03)<a class="headerlink" href="#scrapy-1-0-7-2017-03-03" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Packaging fix: disallow unsupported Twisted versions in setup.py</p></li>
</ul>
</div>
<div class="section" id="scrapy-1-0-6-2016-05-04">
<span id="release-1-0-6"></span><h2>Scrapy 1.0.6 (2016-05-04)<a class="headerlink" href="#scrapy-1-0-6-2016-05-04" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>FIX: RetryMiddleware is now robust to non-standard HTTP status codes (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1857">issue 1857</a>)</p></li>
<li><p>FIX: Filesystem HTTP cache was checking the wrong modified time (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1875">issue 1875</a>)</p></li>
<li><p>DOC: Support for Sphinx 1.4+ (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1893">issue 1893</a>)</p></li>
<li><p>DOC: Consistency in selectors examples (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1869">issue 1869</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-1-0-5-2016-02-04">
<span id="release-1-0-5"></span><h2>Scrapy 1.0.5 (2016-02-04)<a class="headerlink" href="#scrapy-1-0-5-2016-02-04" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>FIX: [Backport] Ignore bogus links in LinkExtractors (fixes <a class="reference external" href="https://github.com/scrapy/scrapy/issues/907">issue 907</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/commit/108195e">commit 108195e</a>)</p></li>
<li><p>TST: Changed buildbot makefile to use 'pytest' (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/1f3d90a">commit 1f3d90a</a>)</p></li>
<li><p>DOC: Fixed typos in the tutorial and media-pipeline docs (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/808a9ea">commit 808a9ea</a> and <a class="reference external" href="https://github.com/scrapy/scrapy/commit/803bd87">commit 803bd87</a>)</p></li>
<li><p>DOC: Add AjaxCrawlMiddleware to the downloader middlewares reference in the settings docs (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/aa94121">commit aa94121</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-1-0-4-2015-12-30">
<span id="release-1-0-4"></span><h2>Scrapy 1.0.4 (2015-12-30)<a class="headerlink" href="#scrapy-1-0-4-2015-12-30" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Ignoring xlib/tx folder, depending on Twisted version. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/7dfa979">commit 7dfa979</a>)</p></li>
<li><p>Run on new Travis CI infra (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/6e42f0b">commit 6e42f0b</a>)</p></li>
<li><p>Spelling fixes (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/823a1cc">commit 823a1cc</a>)</p></li>
<li><p>Escape nodename in xmliter regex (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/da3c155">commit da3c155</a>)</p></li>
<li><p>Test XML node names with dots (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/4418fc3">commit 4418fc3</a>)</p></li>
<li><p>Don't use broken Pillow version in tests (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/a55078c">commit a55078c</a>)</p></li>
<li><p>Disable log on version command. closes #1426 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/86fc330">commit 86fc330</a>)</p></li>
<li><p>Disable log on startproject command (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/db4c9fe">commit db4c9fe</a>)</p></li>
<li><p>Add PyPI download stats badge (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/df2b944">commit df2b944</a>)</p></li>
<li><p>Don't run tests twice on Travis if a PR is made from a scrapy/scrapy branch (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/a83ab41">commit a83ab41</a>)</p></li>
<li><p>Add Python 3 porting status badge to the README (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/73ac80d">commit 73ac80d</a>)</p></li>
<li><p>Fixed RFPDupeFilter persistence (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/97d080e">commit 97d080e</a>)</p></li>
<li><p>TST: a test to show that dupefilter persistence is not working (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/97f2fb3">commit 97f2fb3</a>)</p></li>
<li><p>Explicitly close files in the file:// scheme handler (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/d9b4850">commit d9b4850</a>)</p></li>
<li><p>Disable dupefilter in shell (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/c0d0734">commit c0d0734</a>)</p></li>
<li><p>DOC: Add captions to toctrees which appear in the sidebar (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/aa239ad">commit aa239ad</a>)</p></li>
<li><p>DOC: Removed pywin32 from install instructions as it's already declared as a dependency. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/10eb400">commit 10eb400</a>)</p></li>
<li><p>Added installation notes about using Conda for Windows and other OSes. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/1c3600a">commit 1c3600a</a>)</p></li>
<li><p>Fixed minor grammar issues. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/7f4ddd5">commit 7f4ddd5</a>)</p></li>
<li><p>Fixed a typo in the docs. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/b71f677">commit b71f677</a>)</p></li>
<li><p>Version 1 now exists (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/5456c0e">commit 5456c0e</a>)</p></li>
<li><p>Fix another invalid xpath error (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/0a1366e">commit 0a1366e</a>)</p></li>
<li><p>Fix ValueError: Invalid XPath: //div/[id="not exists"]/text() on selectors.rst (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/ca8d60f">commit ca8d60f</a>)</p></li>
<li><p>Typos corrections (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/7067117">commit 7067117</a>)</p></li>
<li><p>Fix typos in downloader-middleware.rst and exceptions.rst, middlware -&gt; middleware (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/32f115c">commit 32f115c</a>)</p></li>
<li><p>Add a note about Debian compatibility to the Ubuntu install section (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/23fda69">commit 23fda69</a>)</p></li>
<li><p>Replace alternative macOS install workaround with virtualenv (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/98b63ee">commit 98b63ee</a>)</p></li>
<li><p>Reference Homebrew's homepage for installation instructions. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/1925db1">commit 1925db1</a>)</p></li>
<li><p>Add the oldest supported tox version to the contributing docs (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/5d10d6d">commit 5d10d6d</a>)</p></li>
<li><p>Note in install docs that pip is already included in Python &gt;= 2.7.9 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/85c980e">commit 85c980e</a>)</p></li>
<li><p>Add non-Python dependencies to the Ubuntu install section in the docs (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/fbd010d">commit fbd010d</a>)</p></li>
<li><p>Add a macOS installation section to the docs (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/d8f4cba">commit d8f4cba</a>)</p></li>
<li><p>DOC(ENH): specify path to the RTD theme explicitly (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/de73b1a">commit de73b1a</a>)</p></li>
<li><p>minor: scrapy.Spider docs grammar (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/1ddcc7b">commit 1ddcc7b</a>)</p></li>
<li><p>Make common practices sample code match the comments (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/1b85bcf">commit 1b85bcf</a>)</p></li>
<li><p>nextcall repetitive calls (heartbeats). (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/55f7104">commit 55f7104</a>)</p></li>
<li><p>Backport fix compatibility with Twisted 15.4.0 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/b262411">commit b262411</a>)</p></li>
<li><p>Pin pytest to 2.7.3 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/a6535c2">commit a6535c2</a>)</p></li>
<li><p>Merge pull request #1512 from mgedmin/patch-1 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/8876111">commit 8876111</a>)</p></li>
<li><p>Merge pull request #1513 from mgedmin/patch-2 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/5d4daf8">commit 5d4daf8</a>)</p></li>
<li><p>Typo (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/f8d0682">commit f8d0682</a>)</p></li>
<li><p>Fix list formatting (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/5f83a93">commit 5f83a93</a>)</p></li>
<li><p>Fix broken tests after recent changes to queuelib (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/3365c01">commit 3365c01</a>)</p></li>
<li><p>Merge pull request #1475 from rweindl/patch-1 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/2d688cd">commit 2d688cd</a>)</p></li>
<li><p>Update tutorial.rst (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/fbc1f25">commit fbc1f25</a>)</p></li>
<li><p>Merge pull request #1449 from rhoekman/patch-1 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/7d6538c">commit 7d6538c</a>)</p></li>
<li><p>Small grammatical change (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/8752294">commit 8752294</a>)</p></li>
<li><p>Add OpenSSL version to the version command (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/13c45ac">commit 13c45ac</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-1-0-3-2015-08-11">
<span id="release-1-0-3"></span><h2>Scrapy 1.0.3 (2015-08-11)<a class="headerlink" href="#scrapy-1-0-3-2015-08-11" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Add service_identity to Scrapy install_requires (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/cbc2501">commit cbc2501</a>)</p></li>
<li><p>Workaround for travis#296 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/66af9cd">commit 66af9cd</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-1-0-2-2015-08-06">
<span id="release-1-0-2"></span><h2>Scrapy 1.0.2 (2015-08-06)<a class="headerlink" href="#scrapy-1-0-2-2015-08-06" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Twisted 15.3.0 does not raise PicklingError serializing lambda functions (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/b04dd7d">commit b04dd7d</a>)</p></li>
<li><p>Minor method name fix (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/6f85c7f">commit 6f85c7f</a>)</p></li>
<li><p>minor: scrapy.Spider grammar and clarity (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/9c9d2e0">commit 9c9d2e0</a>)</p></li>
<li><p>Advertise support channels (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/c63882b">commit c63882b</a>)</p></li>
<li><p>Typos (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/a9ae7b0">commit a9ae7b0</a>)</p></li>
<li><p>Fix doc reference. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/7c8a4fe">commit 7c8a4fe</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-1-0-1-2015-07-01">
<span id="release-1-0-1"></span><h2>Scrapy 1.0.1 (2015-07-01)<a class="headerlink" href="#scrapy-1-0-1-2015-07-01" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Unquote request path before passing to FTPClient, it already escapes paths (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/cc00ad2">commit cc00ad2</a>)</p></li>
<li><p>Include tests/ in the source distribution in the manifest (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/eca227e">commit eca227e</a>)</p></li>
<li><p>DOC: Fix SelectJmes documentation (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/b8567bc">commit b8567bc</a>)</p></li>
<li><p>DOC: Bring Ubuntu and Archlinux outside of the Windows subsection (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/392233f">commit 392233f</a>)</p></li>
<li><p>DOC: Remove version suffix from the Ubuntu package (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/5303c66">commit 5303c66</a>)</p></li>
<li><p>DOC: Update release date of 1.0 in the docs (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/c89fa29">commit c89fa29</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-1-0-0-2015-06-19">
<span id="release-1-0-0"></span><h2>Scrapy 1.0.0 (2015-06-19)<a class="headerlink" href="#scrapy-1-0-0-2015-06-19" title="Permalink to this headline">¶</a></h2>
<p>You will find a lot of new features and bugfixes in this major release. Make sure to check our updated <a class="reference internal" href="intro/overview.html#intro-overview"><span class="std std-ref">overview</span></a> to get a glance of some of the changes, along with our brushed-up <a class="reference internal" href="intro/tutorial.html#intro-tutorial"><span class="std std-ref">tutorial</span></a>.</p>
<div class="section" id="support-for-returning-dictionaries-in-spiders">
<h3>Support for returning dictionaries in spiders<a class="headerlink" href="#support-for-returning-dictionaries-in-spiders" title="Permalink to this headline">¶</a></h3>
<p>Declaring and returning Scrapy Items is no longer necessary to collect the scraped data from your spider; you can now return explicit dictionaries instead.</p>
<p><em>Classic version</em></p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">class</span> <span class="nc">MyItem</span><span class="p">(</span><span class="n">scrapy</span><span class="o">.</span><span class="n">Item</span><span class="p">):</span>
    <span class="n">url</span> <span class="o">=</span> <span class="n">scrapy</span><span class="o">.</span><span class="n">Field</span><span class="p">()</span>

<span class="k">class</span> <span class="nc">MySpider</span><span class="p">(</span><span class="n">scrapy</span><span class="o">.</span><span class="n">Spider</span><span class="p">):</span>
    <span class="k">def</span> <span class="nf">parse</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">):</span>
        <span class="k">return</span> <span class="n">MyItem</span><span class="p">(</span><span class="n">url</span><span class="o">=</span><span class="n">response</span><span class="o">.</span><span class="n">url</span><span class="p">)</span>
</pre></div>
</div>
<p><em>New version</em></p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">class</span> <span class="nc">MySpider</span><span class="p">(</span><span class="n">scrapy</span><span class="o">.</span><span class="n">Spider</span><span class="p">):</span>
    <span class="k">def</span> <span class="nf">parse</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">):</span>
        <span class="k">return</span> <span class="p">{</span><span class="s1">&#39;url&#39;</span><span class="p">:</span> <span class="n">response</span><span class="o">.</span><span class="n">url</span><span class="p">}</span>
</pre></div>
</div>
</div>
<div class="section" id="per-spider-settings-gsoc-2014">
<h3>Per-spider settings (GSoC 2014)<a class="headerlink" href="#per-spider-settings-gsoc-2014" title="Permalink to this headline">¶</a></h3>
<p>Last year's Google Summer of Code project accomplished an important redesign of the mechanism used for populating settings, introducing explicit priorities to override any given setting. As an extension of that goal, we included a new level of priority for settings that apply exclusively to single spiders, allowing them to redefine project settings.</p>
<p>Start using it by defining a <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider.custom_settings" title="scrapy.spiders.Spider.custom_settings"><code class="xref py py-attr docutils literal notranslate"><span class="pre">custom_settings</span></code></a> class variable in your spider:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">class</span> <span class="nc">MySpider</span><span class="p">(</span><span class="n">scrapy</span><span class="o">.</span><span class="n">Spider</span><span class="p">):</span>
    <span class="n">custom_settings</span> <span class="o">=</span> <span class="p">{</span>
        <span class="s2">&quot;DOWNLOAD_DELAY&quot;</span><span class="p">:</span> <span class="mf">5.0</span><span class="p">,</span>
        <span class="s2">&quot;RETRY_ENABLED&quot;</span><span class="p">:</span> <span class="kc">False</span><span class="p">,</span>
    <span class="p">}</span>
</pre></div>
</div>
<p>Read more about settings population: <a class="reference internal" href="topics/settings.html#topics-settings"><span class="std std-ref">Settings</span></a></p>
</div>
<div class="section" id="python-logging">
<h3>Python Logging<a class="headerlink" href="#python-logging" title="Permalink to this headline">¶</a></h3>
<p>Scrapy 1.0 has moved away from Twisted logging to support the Python built-in logging module as the default logging system. We're maintaining backward compatibility for most of the old custom interface to call logging functions, but you'll get warnings to switch to the Python logging API entirely.</p>
<p><em>Old version</em></p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">scrapy</span> <span class="kn">import</span> <span class="n">log</span>
<span class="n">log</span><span class="o">.</span><span class="n">msg</span><span class="p">(</span><span class="s1">&#39;MESSAGE&#39;</span><span class="p">,</span> <span class="n">log</span><span class="o">.</span><span class="n">INFO</span><span class="p">)</span>
</pre></div>
</div>
<p><em>New version</em></p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">logging</span>
<span class="n">logging</span><span class="o">.</span><span class="n">info</span><span class="p">(</span><span class="s1">&#39;MESSAGE&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>Logging with spiders remains the same, but on top of the <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider.log" title="scrapy.spiders.Spider.log"><code class="xref py py-meth docutils literal notranslate"><span class="pre">log()</span></code></a> method you'll have access to a custom <a class="reference internal" href="topics/spiders.html#scrapy.spiders.Spider.logger" title="scrapy.spiders.Spider.logger"><code class="xref py py-attr docutils literal notranslate"><span class="pre">logger</span></code></a> created for the spider to issue log events:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">class</span> <span class="nc">MySpider</span><span class="p">(</span><span class="n">scrapy</span><span class="o">.</span><span class="n">Spider</span><span class="p">):</span>
    <span class="k">def</span> <span class="nf">parse</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">response</span><span class="p">):</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">logger</span><span class="o">.</span><span class="n">info</span><span class="p">(</span><span class="s1">&#39;Response received&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>Read more in the logging documentation: <a class="reference internal" href="topics/logging.html#topics-logging"><span class="std std-ref">Logging</span></a></p>
</div>
<div class="section" id="crawler-api-refactoring-gsoc-2014">
<h3>Crawler API refactoring (GSoC 2014)<a class="headerlink" href="#crawler-api-refactoring-gsoc-2014" title="Permalink to this headline">¶</a></h3>
<p>Another milestone for the last Google Summer of Code was a refactoring of the internal API, seeking simpler and easier usage. Check the new core interface in: <a class="reference internal" href="topics/api.html#topics-api"><span class="std std-ref">Core API</span></a></p>
<p>A common situation where you will face these changes is while running Scrapy from scripts. Here is a quick example of how to run a spider manually with the new API:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">scrapy.crawler</span> <span class="kn">import</span> <span class="n">CrawlerProcess</span>

<span class="n">process</span> <span class="o">=</span> <span class="n">CrawlerProcess</span><span class="p">({</span>
    <span class="s1">&#39;USER_AGENT&#39;</span><span class="p">:</span> <span class="s1">&#39;Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)&#39;</span>
<span class="p">})</span>
<span class="n">process</span><span class="o">.</span><span class="n">crawl</span><span class="p">(</span><span class="n">MySpider</span><span class="p">)</span>
<span class="n">process</span><span class="o">.</span><span class="n">start</span><span class="p">()</span>
</pre></div>
</div>
<p>Bear in mind this feature is still under development and its API may change until it reaches a stable status.</p>
<p>See more examples of scripts running Scrapy: <a class="reference internal" href="topics/practices.html#topics-practices"><span class="std std-ref">Common Practices</span></a></p>
</div>
<div class="section" id="module-relocations">
<span id="id64"></span><h3>Module Relocations<a class="headerlink" href="#module-relocations" title="Permalink to this headline">¶</a></h3>
<p>There has been a large rearrangement of modules to improve the general structure of Scrapy. The main changes were separating various subpackages into new projects and dissolving both <code class="docutils literal notranslate"><span class="pre">scrapy.contrib</span></code> and <code class="docutils literal notranslate"><span class="pre">scrapy.contrib_exp</span></code> into top-level packages. Backward compatibility is kept among internal relocations, while importing deprecated modules triggers warnings indicating their new location.</p>
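The backward-compatibility mechanism described above can be sketched with the standard library's <code>warnings</code> module: a shim at the old path keeps working but emits a <code>DeprecationWarning</code> naming the new location. This is a generic illustration of the pattern, not Scrapy's actual shim code; the helper name is hypothetical.

```python
import warnings

def _deprecated_shim(old, new):
    # Illustrative helper: what a deprecated-module shim typically does
    # on import -- warn, pointing the user at the new module path.
    warnings.warn(
        "Module `%s` is deprecated, use `%s` instead" % (old, new),
        DeprecationWarning,
        stacklevel=2,
    )

# Simulate importing a relocated module and capture the warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    _deprecated_shim("scrapy.contrib.pipeline", "scrapy.pipelines")

print(caught[0].message)  # names the new location, scrapy.pipelines
```

The old import path therefore keeps working for existing projects, while the warning tells maintainers exactly what to change.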
<div class="section" id="full-list-of-relocations">
<h4>Full list of relocations<a class="headerlink" href="#full-list-of-relocations" title="Permalink to this headline">¶</a></h4>
<p>Outsourced packages</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>These extensions went through some minor changes, e.g. some setting names were changed. Please check the documentation in each new repository to get familiar with the new usage.</p>
</div>
<table class="docutils align-default">
<colgroup>
<col style="width: 50%" />
<col style="width: 50%" />
</colgroup>
<thead>
<tr class="row-odd"><th class="head"><p>Old location</p></th>
<th class="head"><p>New location</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>scrapy.commands.deploy</p></td>
<td><p><a class="reference external" href="https://github.com/scrapy/scrapyd-client">scrapyd-client</a> (see other alternatives: <a class="reference internal" href="topics/deploy.html#topics-deploy"><span class="std std-ref">Deploying Spiders</span></a>)</p></td>
</tr>
<tr class="row-odd"><td><p>scrapy.contrib.djangoitem</p></td>
<td><p><a class="reference external" href="https://github.com/scrapy-plugins/scrapy-djangoitem">scrapy-djangoitem</a></p></td>
</tr>
<tr class="row-even"><td><p>scrapy.webservice</p></td>
<td><p><a class="reference external" href="https://github.com/scrapy-plugins/scrapy-jsonrpc">scrapy-jsonrpc</a></p></td>
</tr>
</tbody>
</table>
<p><code class="docutils literal notranslate"><span class="pre">scrapy.contrib_exp</span></code> and <code class="docutils literal notranslate"><span class="pre">scrapy.contrib</span></code> dissolutions</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 50%" />
<col style="width: 50%" />
</colgroup>
<thead>
<tr class="row-odd"><th class="head"><p>Old location</p></th>
<th class="head"><p>New location</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>scrapy.contrib_exp.downloadermiddleware.decompression</p></td>
<td><p>scrapy.downloadermiddlewares.decompression</p></td>
</tr>
<tr class="row-odd"><td><p>scrapy.contrib_exp.iterators</p></td>
<td><p>scrapy.utils.iterators</p></td>
</tr>
<tr class="row-even"><td><p>scrapy.contrib.downloadermiddleware</p></td>
<td><p>scrapy.downloadermiddlewares</p></td>
</tr>
<tr class="row-odd"><td><p>scrapy.contrib.exporter</p></td>
<td><p>scrapy.exporters</p></td>
</tr>
<tr class="row-even"><td><p>scrapy.contrib.linkextractors</p></td>
<td><p>scrapy.linkextractors</p></td>
</tr>
<tr class="row-odd"><td><p>scrapy.contrib.loader</p></td>
<td><p>scrapy.loader</p></td>
</tr>
<tr class="row-even"><td><p>scrapy.contrib.loader.processor</p></td>
<td><p>scrapy.loader.processors</p></td>
</tr>
<tr class="row-odd"><td><p>scrapy.contrib.pipeline</p></td>
<td><p>scrapy.pipelines</p></td>
</tr>
<tr class="row-even"><td><p>scrapy.contrib.spidermiddleware</p></td>
<td><p>scrapy.spidermiddlewares</p></td>
</tr>
<tr class="row-odd"><td><p>scrapy.contrib.spiders</p></td>
<td><p>scrapy.spiders</p></td>
</tr>
<tr class="row-even"><td><ul class="simple">
<li><p>scrapy.contrib.closespider</p></li>
<li><p>scrapy.contrib.corestats</p></li>
<li><p>scrapy.contrib.debug</p></li>
<li><p>scrapy.contrib.feedexport</p></li>
<li><p>scrapy.contrib.httpcache</p></li>
<li><p>scrapy.contrib.logstats</p></li>
<li><p>scrapy.contrib.memdebug</p></li>
<li><p>scrapy.contrib.memusage</p></li>
<li><p>scrapy.contrib.spiderstate</p></li>
<li><p>scrapy.contrib.statsmailer</p></li>
<li><p>scrapy.contrib.throttle</p></li>
</ul>
</td>
<td><p>scrapy.extensions.*</p></td>
</tr>
</tbody>
</table>
<p>Plural renames and modules unification</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 50%" />
<col style="width: 50%" />
</colgroup>
<thead>
<tr class="row-odd"><th class="head"><p>Old location</p></th>
<th class="head"><p>New location</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>scrapy.command</p></td>
<td><p>scrapy.commands</p></td>
</tr>
<tr class="row-odd"><td><p>scrapy.dupefilter</p></td>
<td><p>scrapy.dupefilters</p></td>
</tr>
<tr class="row-even"><td><p>scrapy.linkextractor</p></td>
<td><p>scrapy.linkextractors</p></td>
</tr>
<tr class="row-odd"><td><p>scrapy.spider</p></td>
<td><p>scrapy.spiders</p></td>
</tr>
<tr class="row-even"><td><p>scrapy.squeue</p></td>
<td><p>scrapy.squeues</p></td>
</tr>
<tr class="row-odd"><td><p>scrapy.statscol</p></td>
<td><p>scrapy.statscollectors</p></td>
</tr>
<tr class="row-even"><td><p>scrapy.utils.decorator</p></td>
<td><p>scrapy.utils.decorators</p></td>
</tr>
</tbody>
</table>
<p>Class renames</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 50%" />
<col style="width: 50%" />
</colgroup>
<thead>
<tr class="row-odd"><th class="head"><p>Old location</p></th>
<th class="head"><p>New location</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>scrapy.spidermanager.SpiderManager</p></td>
<td><p>scrapy.spiderloader.SpiderLoader</p></td>
</tr>
</tbody>
</table>
<p>Settings renames</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 50%" />
<col style="width: 50%" />
</colgroup>
<thead>
<tr class="row-odd"><th class="head"><p>Old name</p></th>
<th class="head"><p>New name</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>SPIDER_MANAGER_CLASS</p></td>
<td><p>SPIDER_LOADER_CLASS</p></td>
</tr>
</tbody>
</table>
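A renamed setting like <code>SPIDER_MANAGER_CLASS</code> is typically kept working through a fallback lookup that warns when the old name is used. The function below is a hypothetical sketch of that pattern (not Scrapy's actual code); the default class path is the one the rename table above points to.

```python
import warnings

def get_spider_loader_class(settings):
    # Prefer the new setting name.
    if "SPIDER_LOADER_CLASS" in settings:
        return settings["SPIDER_LOADER_CLASS"]
    # Fall back to the deprecated name, with a warning.
    if "SPIDER_MANAGER_CLASS" in settings:
        warnings.warn(
            "SPIDER_MANAGER_CLASS is deprecated, "
            "use SPIDER_LOADER_CLASS instead",
            DeprecationWarning,
        )
        return settings["SPIDER_MANAGER_CLASS"]
    # Documented default loader class.
    return "scrapy.spiderloader.SpiderLoader"

print(get_spider_loader_class({"SPIDER_MANAGER_CLASS": "myproject.Loader"}))
```

Projects that still set the old name keep working across the upgrade, while the warning nudges them toward the new name.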
</div>
</div>
<div class="section" id="changelog">
<h3>Changelog<a class="headerlink" href="#changelog" title="Permalink to this headline">¶</a></h3>
<p>New Features and Enhancements</p>
<ul class="simple">
<li><p>Python logging (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1060">issue 1060</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1235">issue 1235</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1236">issue 1236</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1240">issue 1240</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1259">issue 1259</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1278">issue 1278</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1286">issue 1286</a>)</p></li>
<li><p>FEED_EXPORT_FIELDS option (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1159">issue 1159</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1224">issue 1224</a>)</p></li>
<li><p>DNS cache size and timeout options (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1132">issue 1132</a>)</p></li>
<li><p>Support namespace prefixes in xmliter_lxml (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/963">issue 963</a>)</p></li>
<li><p>Reactor threadpool max size setting (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1123">issue 1123</a>)</p></li>
<li><p>Allow spiders to return dicts (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1081">issue 1081</a>)</p></li>
<li><p>Add Response.urljoin() helper (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1086">issue 1086</a>)</p></li>
<li><p>Look in ~/.config/scrapy.cfg for user config (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1098">issue 1098</a>)</p></li>
<li><p>Handle TLS SNI (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1101">issue 1101</a>)</p></li>
<li><p>SelectorList extract_first (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/624">issue 624</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1145">issue 1145</a>)</p></li>
<li><p>Added JmesSelect (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1016">issue 1016</a>)</p></li>
<li><p>Add gzip compression to the filesystem HTTP cache backend (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1020">issue 1020</a>)</p></li>
<li><p>CSS support in link extractors (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/983">issue 983</a>)</p></li>
<li><p>httpcache dont_cache meta #19 #689 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/821">issue 821</a>)</p></li>
<li><p>Add a signal to be sent when a request is dropped by the scheduler (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/961">issue 961</a>)</p></li>
<li><p>Avoid downloading large responses (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/946">issue 946</a>)</p></li>
<li><p>Allow specifying the quotechar in CSVFeedSpider (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/882">issue 882</a>)</p></li>
<li><p>Add referer to "Spider error processing" log messages (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/795">issue 795</a>)</p></li>
<li><p>Process robots.txt once (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/896">issue 896</a>)</p></li>
<li><p>GSoC per-spider settings (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/854">issue 854</a>)</p></li>
<li><p>Add project name validation (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/817">issue 817</a>)</p></li>
<li><p>GSoC API cleanup (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/816">issue 816</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1128">issue 1128</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1147">issue 1147</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1148">issue 1148</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1156">issue 1156</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1185">issue 1185</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1187">issue 1187</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1258">issue 1258</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1268">issue 1268</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1276">issue 1276</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1285">issue 1285</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1284">issue 1284</a>)</p></li>
<li><p>Be more responsive with IO operations (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1074">issue 1074</a> and <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1075">issue 1075</a>)</p></li>
<li><p>Do leveldb compaction for httpcache on closing (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1297">issue 1297</a>)</p></li>
</ul>
<p>Deprecations and Removals</p>
<ul class="simple">
<li><p>Deprecate the htmlparser link extractor (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1205">issue 1205</a>)</p></li>
<li><p>Remove deprecated code from FeedExporter (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1155">issue 1155</a>)</p></li>
<li><p>Leftovers for .15 compatibility (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/925">issue 925</a>)</p></li>
<li><p>Drop support for CONCURRENT_REQUESTS_PER_SPIDER (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/895">issue 895</a>)</p></li>
<li><p>Drop old engine code (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/911">issue 911</a>)</p></li>
<li><p>Deprecate SgmlLinkExtractor (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/777">issue 777</a>)</p></li>
</ul>
<p>Relocations</p>
<ul class="simple">
<li><p>Move exporters/__init__.py to exporters.py (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1242">issue 1242</a>)</p></li>
<li><p>Move base classes to their packages (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1218">issue 1218</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1233">issue 1233</a>)</p></li>
<li><p>Module relocation (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1181">issue 1181</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1210">issue 1210</a>)</p></li>
<li><p>Rename SpiderManager to SpiderLoader (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1166">issue 1166</a>)</p></li>
<li><p>Remove DjangoItem (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1177">issue 1177</a>)</p></li>
<li><p>Remove the scrapy deploy command (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1102">issue 1102</a>)</p></li>
<li><p>Dissolve contrib_exp (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1134">issue 1134</a>)</p></li>
<li><p>Deleted the bin folder from root, fixes #913 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/914">issue 914</a>)</p></li>
<li><p>Remove the JSON-RPC based webservice (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/859">issue 859</a>)</p></li>
<li><p>Move test cases under the project root dir (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/827">issue 827</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/841">issue 841</a>)</p></li>
<li><p>Fix backward incompatibility for relocated paths in settings (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1267">issue 1267</a>)</p></li>
</ul>
<p>Documentation</p>
<ul class="simple">
<li><p>CrawlerProcess documentation (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1190">issue 1190</a>)</p></li>
<li><p>Favor web scraping over screen scraping in the descriptions (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1188">issue 1188</a>)</p></li>
<li><p>Some improvements to the Scrapy tutorial (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1180">issue 1180</a>)</p></li>
<li><p>Document the Files Pipeline together with the Images Pipeline (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1150">issue 1150</a>)</p></li>
<li><p>Deployment docs tweaks (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1164">issue 1164</a>)</p></li>
<li><p>Added a deployment section covering scrapyd-deploy and shub (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1124">issue 1124</a>)</p></li>
<li><p>Add more settings to the project template (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1073">issue 1073</a>)</p></li>
<li><p>Some improvements to the overview page (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1106">issue 1106</a>)</p></li>
<li><p>Updated link in docs/topics/architecture.rst (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/647">issue 647</a>)</p></li>
<li><p>DOC reorder topics (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1022">issue 1022</a>)</p></li>
<li><p>Update the list of Request.meta special keys (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1071">issue 1071</a>)</p></li>
<li><p>DOC document DOWNLOAD_TIMEOUT (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/898">issue 898</a>)</p></li>
<li><p>DOC simplify extension docs (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/893">issue 893</a>)</p></li>
<li><p>Leaks docs (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/894">issue 894</a>)</p></li>
<li><p>Document the from_crawler method for item pipelines (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/904">issue 904</a>)</p></li>
<li><p>spider_error does not support deferreds (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1292">issue 1292</a>)</p></li>
<li><p>Corrections and Sphinx-related fixes (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1220">issue 1220</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1219">issue 1219</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1196">issue 1196</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1172">issue 1172</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1171">issue 1171</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1169">issue 1169</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1160">issue 1160</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1154">issue 1154</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1127">issue 1127</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1112">issue 1112</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1105">issue 1105</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1041">issue 1041</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1082">issue 1082</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1033">issue 1033</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/944">issue 944</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/866">issue 866</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/864">issue 864</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/796">issue 796</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1260">issue 1260</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1271">issue 1271</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1293">issue 1293</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1298">issue 1298</a>)</p></li>
</ul>
<p>Bugfixes</p>
<ul class="simple">
<li><p>Item multi-inheritance fix (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/353">issue 353</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1228">issue 1228</a>)</p></li>
<li><p>ItemLoader.load_item: iterate over a copy of fields (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/722">issue 722</a>)</p></li>
<li><p>Fix Unhandled error in Deferred (RobotsTxtMiddleware) (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1131">issue 1131</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/1197">issue 1197</a>)</p></li>
<li><p>Force reading DOWNLOAD_TIMEOUT as int (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/954">issue 954</a>)</p></li>
<li><p>scrapy.utils.misc.load_object should print the full traceback (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/902">issue 902</a>)</p></li>
<li><p>Fix bug for ".local" host names (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/878">issue 878</a>)</p></li>
<li><p>Fix enabled extensions, middlewares and pipelines info not being printed anymore (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/879">issue 879</a>)</p></li>
<li><p>Fix dont_merge_cookies bad behaviour when set to false on meta (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/846">issue 846</a>)</p></li>
</ul>
<p>Python 3 In-Progress Support</p>
<ul class="simple">
<li><p>Disable scrapy.telnet if twisted.conch is not available (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1161">issue 1161</a>)</p></li>
<li><p>Fix Python 3 syntax errors in ajaxcrawl.py (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1162">issue 1162</a>)</p></li>
<li><p>More Python 3 compatibility changes for urllib (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1121">issue 1121</a>)</p></li>
<li><p>assertItemsEqual was renamed to assertCountEqual in Python 3 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1070">issue 1070</a>)</p></li>
<li><p>Import unittest.mock if available (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1066">issue 1066</a>)</p></li>
<li><p>Updated deprecated cgi.parse_qsl to use six's parse_qsl (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/909">issue 909</a>)</p></li>
<li><p>Prevent Python 3 port regressions (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/830">issue 830</a>)</p></li>
<li><p>PY3: use MutableMapping for Python 3 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/810">issue 810</a>)</p></li>
<li><p>PY3: use six.BytesIO and six.moves.cStringIO (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/803">issue 803</a>)</p></li>
<li><p>PY3: fix xmlrpclib and email imports (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/801">issue 801</a>)</p></li>
<li><p>PY3: use six for robotparser and urlparse (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/800">issue 800</a>)</p></li>
<li><p>PY3: use six.iterkeys, six.iteritems, and tempfile (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/799">issue 799</a>)</p></li>
<li><p>PY3: fix has_key and use six.moves.configparser (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/798">issue 798</a>)</p></li>
<li><p>PY3: use six.moves.cPickle (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/797">issue 797</a>)</p></li>
<li><p>PY3: make it possible to run some tests in Python 3 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/776">issue 776</a>)</p></li>
</ul>
<p>Tests</p>
<ul class="simple">
<li><p>Remove unnecessary lines from py3-ignores (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1243">issue 1243</a>)</p></li>
<li><p>Fix remaining warnings from pytest while collecting tests (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1206">issue 1206</a>)</p></li>
<li><p>Add the docs build to Travis (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1234">issue 1234</a>)</p></li>
<li><p>TST don't collect tests from deprecated modules (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1165">issue 1165</a>)</p></li>
<li><p>Install the service_identity package in tests to prevent warnings (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1168">issue 1168</a>)</p></li>
<li><p>Fix deprecated settings API in tests (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1152">issue 1152</a>)</p></li>
<li><p>Add a test for webclient with POST method and no body given (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1089">issue 1089</a>)</p></li>
<li><p>py3-ignores.txt supports comments (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1044">issue 1044</a>)</p></li>
<li><p>Modernize some of the asserts (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/835">issue 835</a>)</p></li>
<li><p>selector.__repr__ test (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/779">issue 779</a>)</p></li>
</ul>
<p>Code Refactoring</p>
<ul class="simple">
<li><p>CSVFeedSpider cleanup: use iterate_spider_output (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1079">issue 1079</a>)</p></li>
<li><p>Remove unnecessary check from scrapy.utils.spider.iter_spider_output (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/1078">issue 1078</a>)</p></li>
<li><p>Pep8 dispatch (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/992">issue 992</a>)</p></li>
<li><p>Removed unused 'load=False' parameter from walk_modules() (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/871">issue 871</a>)</p></li>
<li><p>For consistency, use the <code class="docutils literal notranslate"><span class="pre">job_dir</span></code> helper in the <code class="docutils literal notranslate"><span class="pre">SpiderState</span></code> extension (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/805">issue 805</a>)</p></li>
<li><p>Rename the "sflo" local variables to the less cryptic "log_observer" (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/775">issue 775</a>)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-0-24-6-2015-04-20">
<h2>Scrapy 0.24.6 (2015-04-20)<a class="headerlink" href="#scrapy-0-24-6-2015-04-20" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Encode invalid xpath with unicode_escape under PY2 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/07cb3e5">commit 07cb3e5</a>)</p></li>
<li><p>Fix IPython shell scope issue and load IPython user config (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/2c8e573">commit 2c8e573</a>)</p></li>
<li><p>Fix small typo in the docs (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/d694019">commit d694019</a>)</p></li>
<li><p>Fix small typo (commit <cite>f92fa83</cite>)</p></li>
<li><p>Converted sel.xpath() calls to response.xpath() in Extracting the data (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/c2c6d15">commit c2c6d15</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-24-5-2015-02-25">
<h2>Scrapy 0.24.5 (2015-02-25)<a class="headerlink" href="#scrapy-0-24-5-2015-02-25" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Support new _getEndpoint Agent signatures on Twisted 15.0.0 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/540b9bc">commit 540b9bc</a>)</p></li>
<li><p>DOC a couple more references are fixed (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/b4c454b">commit b4c454b</a>)</p></li>
<li><p>DOC fix a reference (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/e3c1260">commit e3c1260</a>)</p></li>
<li><p>t.i.b.ThreadedResolver is now a new-style class (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/9e13f42">commit 9e13f42</a>)</p></li>
<li><p>S3DownloadHandler: fix auth for requests with quoted paths/query params (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/cdb9a0b">commit cdb9a0b</a>)</p></li>
<li><p>Fixed the variable types in mailsender documentation (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/bb3a848">commit bb3a848</a>)</p></li>
<li><p>Reset items_scraped instead of item count (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/edb07a4">commit edb07a4</a>)</p></li>
<li><p>Tentative attention message about what document to read for contributions (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/7ee6f7a">commit 7ee6f7a</a>)</p></li>
<li><p>mitmproxy 0.10.1 needs netlib 0.10.1 too (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/874fcdd">commit 874fcdd</a>)</p></li>
<li><p>Pin mitmproxy 0.10.1 as &gt;0.11 does not work with tests (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/c6b21f0">commit c6b21f0</a>)</p></li>
<li><p>Test the parse command locally instead of against an external URL (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/c3a6628">commit c3a6628</a>)</p></li>
<li><p>Patch Twisted issue while closing the connection pool on HTTPDownloadHandler (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/d0bf957">commit d0bf957</a>)</p></li>
<li><p>Update docs on dynamic item classes (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/eeb589a">commit eeb589a</a>)</p></li>
<li><p>Merge pull request #943 from Lazar-T/patch-3 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/5fdab02">commit 5fdab02</a>)</p></li>
<li><p>Typo (commit <cite>b0ae199</cite>)</p></li>
<li><p>pywin32 is required by Twisted, closes #937 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/5cb0cfb">commit 5cb0cfb</a>)</p></li>
<li><p>Update install.rst (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/781286b">commit 781286b</a>)</p></li>
<li><p>Merge pull request #928 from Lazar-T/patch-1 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/b415d04">commit b415d04</a>)</p></li>
<li><p>Comma instead of fullstop (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/627b9ba">commit 627b9ba</a>)</p></li>
<li><p>Merge pull request #885 from jsma/patch-1 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/de909ad">commit de909ad</a>)</p></li>
<li><p>Update request-response.rst (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/3f3263d">commit 3f3263d</a>)</p></li>
<li><p>BaseSgmlLinkExtractor: fix for parsing &lt;area&gt; tags with Unicode present (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/49b40f0">commit 49b40f0</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-24-4-2014-08-09">
<h2>Scrapy 0.24.4 (2014-08-09)<a class="headerlink" href="#scrapy-0-24-4-2014-08-09" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>The PEM file is used by mockserver and required by scrapy bench (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/5eddc68">commit 5eddc68</a>)</p></li>
<li><p>scrapy bench needs scrapy.tests* (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/d6cb999">commit d6cb999</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-24-3-2014-08-09">
<h2>Scrapy 0.24.3 (2014-08-09)<a class="headerlink" href="#scrapy-0-24-3-2014-08-09" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>No need to waste Travis CI time on PY3 for 0.24 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/8e080c1">commit 8e080c1</a>)</p></li>
<li><p>Update installation docs (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/1d0c096">commit 1d0c096</a>)</p></li>
<li><p>There is a Trove classifier for the Scrapy framework! (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/4c701d7">commit 4c701d7</a>)</p></li>
<li><p>Update other places where the w3lib version is mentioned (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/d109c13">commit d109c13</a>)</p></li>
<li><p>Update the w3lib requirement to 1.8.0 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/39d2ce5">commit 39d2ce5</a>)</p></li>
<li><p>Use w3lib.html.replace_entities() (remove_entities() is deprecated) (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/180d3ad">commit 180d3ad</a>)</p></li>
<li><p>Set zip_safe=False (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/a51ee8b">commit a51ee8b</a>)</p></li>
<li><p>Do not ship the tests package (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/ee3b371">commit ee3b371</a>)</p></li>
<li><p>scrapy.bat is not needed anymore (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/c3861cf">commit c3861cf</a>)</p></li>
<li><p>Modernize setup.py (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/362e322">commit 362e322</a>)</p></li>
<li><p>Headers can't handle non-string values (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/94a5c65">commit 94a5c65</a>)</p></li>
<li><p>Fix FTP test cases (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/a274a7f">commit a274a7f</a>)</p></li>
<li><p>The sum of the Travis CI builds takes about 50 minutes to complete (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/ae1e2cc">commit ae1e2cc</a>)</p></li>
<li><p>Update shell.rst typo (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/e49c96a">commit e49c96a</a>)</p></li>
<li><p>Remove weird indentation in the shell results (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/1ca489d">commit 1ca489d</a>)</p></li>
<li><p>Improved explanations, clarified the blog post as the source, added a link to the XPath string functions in the spec (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/65c8f05">commit 65c8f05</a>)</p></li>
<li><p>Renamed UserTimeoutError and ServerTimeoutError #583 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/037f6ab">commit 037f6ab</a>)</p></li>
<li><p>Add some XPath tips to the selectors docs (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/2d103e0">commit 2d103e0</a>)</p></li>
<li><p>Fix tests to account for https://github.com/scrapy/w3lib/pull/23 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/f8d366a">commit f8d366a</a>)</p></li>
<li><p>get_func_args maximum recursion fix #728 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/81344ea">commit 81344ea</a>)</p></li>
<li><p>Updated input/output processor example according to #560. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/f7c4ea8">commit f7c4ea8</a>)</p></li>
<li><p>Fixed Python syntax in the tutorial. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/db59ed9">commit db59ed9</a>)</p></li>
<li><p>Add test case for tunneling proxy (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/f090260">commit f090260</a>)</p></li>
<li><p>Bugfix for leaking the Proxy-Authorization header to the remote host when using tunneling (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/d8793af">commit d8793af</a>)</p></li>
<li><p>Extract links from XHTML documents with MIME type "application/xml" (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/ed1f376">commit ed1f376</a>)</p></li>
<li><p>Merge pull request #793 from roysc/patch-1 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/91a1106">commit 91a1106</a>)</p></li>
<li><p>Fix typo in commands.rst (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/743e1e2">commit 743e1e2</a>)</p></li>
<li><p>Better test case for settings.overrides.setdefault (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/e22daaf">commit e22daaf</a>)</p></li>
<li><p>Use CRLF as the line marker according to the HTTP 1.1 definition (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/5ec430b">commit 5ec430b</a>)</p></li>
</ul>
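<p>The move above from the deprecated remove_entities() to w3lib.html.replace_entities() means HTML entities are decoded into the characters they name rather than stripped. As a rough illustration of the decoding behaviour (using the stdlib html.unescape, which performs the same kind of decoding; this is not the w3lib API itself):</p>

```python
from html import unescape

# Decoding keeps information that the old "remove" behaviour discarded:
# each entity is replaced by the character it represents.
text = "Prices in &pound; &amp; &euro;"
print(unescape(text))  # Prices in £ & €
```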
</div>
<div class="section" id="scrapy-0-24-2-2014-07-08">
<h2>Scrapy 0.24.2 (2014-07-08)<a class="headerlink" href="#scrapy-0-24-2-2014-07-08" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Use a mutable mapping to proxy the deprecated settings.overrides and settings.defaults attributes (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/e5e8133">commit e5e8133</a>)</p></li>
<li><p>There is no support for Python 3 yet (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/3cd6146">commit 3cd6146</a>)</p></li>
<li><p>Update the Python-compatible version set for the Debian packages (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/fa5d76b">commit fa5d76b</a>)</p></li>
<li><p>DOC: fix formatting in the release notes (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/c6a9e20">commit c6a9e20</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-24-1-2014-06-27">
<h2>Scrapy 0.24.1 (2014-06-27)<a class="headerlink" href="#scrapy-0-24-1-2014-06-27" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Fix the deprecated CrawlerSettings and increase backward compatibility with the .defaults attribute (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/8e3f20a">commit 8e3f20a</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-24-0-2014-06-26">
<h2>Scrapy 0.24.0 (2014-06-26)<a class="headerlink" href="#scrapy-0-24-0-2014-06-26" title="Permalink to this headline">¶</a></h2>
<div class="section" id="enhancements">
<h3>Enhancements<a class="headerlink" href="#enhancements" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Improve the Scrapy top-level namespace (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/494">issue 494</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/684">issue 684</a>)</p></li>
<li><p>Add selector shortcuts to responses (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/554">issue 554</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/690">issue 690</a>)</p></li>
<li><p>Add a new lxml-based LinkExtractor to replace the unmaintained SgmlLinkExtractor (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/559">issue 559</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/761">issue 761</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/763">issue 763</a>)</p></li>
<li><p>Clean up the settings API, part of the per-spider settings <strong>GSoC project</strong> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/737">issue 737</a>)</p></li>
<li><p>Add a UTF-8 encoding header to templates (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/688">issue 688</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/762">issue 762</a>)</p></li>
<li><p>The Telnet console now binds to 127.0.0.1 by default (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/699">issue 699</a>)</p></li>
<li><p>Update Debian/Ubuntu install instructions (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/509">issue 509</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/549">issue 549</a>)</p></li>
<li><p>Disable smart strings in lxml XPath evaluations (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/535">issue 535</a>)</p></li>
<li><p>Restore the filesystem-based cache as the default for the HTTP cache middleware (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/541">issue 541</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/500">issue 500</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/571">issue 571</a>)</p></li>
<li><p>Expose the current crawler in the Scrapy shell (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/557">issue 557</a>)</p></li>
<li><p>Improve the test suite, comparing CSV and XML exporters (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/570">issue 570</a>)</p></li>
<li><p>New <code class="docutils literal notranslate"><span class="pre">offsite/filtered</span></code> and <code class="docutils literal notranslate"><span class="pre">offsite/domains</span></code> stats (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/566">issue 566</a>)</p></li>
<li><p>Support process_links as a generator in CrawlSpider (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/555">issue 555</a>)</p></li>
<li><p>Verbose logging and new stats counters for DupeFilter (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/553">issue 553</a>)</p></li>
<li><p>Add a mimetype parameter to <code class="docutils literal notranslate"><span class="pre">MailSender.send()</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/602">issue 602</a>)</p></li>
<li><p>Generalize file pipeline log messages (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/622">issue 622</a>)</p></li>
<li><p>Replace unencodable code points with HTML entities in SgmlLinkExtractor (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/565">issue 565</a>)</p></li>
<li><p>Converted SEP documents to rst format (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/629">issue 629</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/630">issue 630</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/638">issue 638</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/632">issue 632</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/636">issue 636</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/640">issue 640</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/635">issue 635</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/634">issue 634</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/639">issue 639</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/637">issue 637</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/631">issue 631</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/633">issue 633</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/641">issue 641</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/642">issue 642</a>)</p></li>
<li><p>Tests and docs for clickdata's nr index in FormRequest (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/646">issue 646</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/645">issue 645</a>)</p></li>
<li><p>Allow disabling a downloader handler just like any other component (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/650">issue 650</a>)</p></li>
<li><p>Log when a request is discarded after too many redirections (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/654">issue 654</a>)</p></li>
<li><p>Log error responses if they are not handled by spider callbacks (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/612">issue 612</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/656">issue 656</a>)</p></li>
<li><p>Add a Content-Type check to the HTTP compression middleware (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/193">issue 193</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/660">issue 660</a>)</p></li>
<li><p>Run PyPy tests using the latest PyPy from the PPA (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/674">issue 674</a>)</p></li>
<li><p>Run the test suite using pytest instead of trial (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/679">issue 679</a>)</p></li>
<li><p>Build docs and check for dead links in the tox environment (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/687">issue 687</a>)</p></li>
<li><p>Make scrapy.version_info a tuple of integers (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/681">issue 681</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/692">issue 692</a>)</p></li>
<li><p>Infer the exporter's output format from the file extension (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/546">issue 546</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/659">issue 659</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/760">issue 760</a>)</p></li>
<li><p>Support case-insensitive domains in <code class="docutils literal notranslate"><span class="pre">url_is_from_any_domain()</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/693">issue 693</a>)</p></li>
<li><p>Remove PEP 8 warnings in project and spider templates (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/698">issue 698</a>)</p></li>
<li><p>Tests and docs for the <code class="docutils literal notranslate"><span class="pre">request_fingerprint</span></code> function (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/597">issue 597</a>)</p></li>
<li><p>Update SEP-19 for the GSoC project <code class="docutils literal notranslate"><span class="pre">per-spider</span> <span class="pre">settings</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/705">issue 705</a>)</p></li>
<li><p>Set the exit code to non-zero when contracts fail (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/727">issue 727</a>)</p></li>
<li><p>Add a setting to control what class is instantiated as the Downloader component (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/738">issue 738</a>)</p></li>
<li><p>Pass the response in the <code class="docutils literal notranslate"><span class="pre">item_dropped</span></code> signal (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/724">issue 724</a>)</p></li>
<li><p>Improve the <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">check</span></code> contracts command (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/733">issue 733</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/752">issue 752</a>)</p></li>
<li><p>Document the <code class="docutils literal notranslate"><span class="pre">spider.closed()</span></code> shortcut (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/719">issue 719</a>)</p></li>
<li><p>Document the <code class="docutils literal notranslate"><span class="pre">request_scheduled</span></code> signal (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/746">issue 746</a>)</p></li>
<li><p>Add a note about reporting security issues (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/697">issue 697</a>)</p></li>
<li><p>Add a LevelDB HTTP cache storage backend (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/626">issue 626</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/500">issue 500</a>)</p></li>
<li><p>Sort the spider list output of the <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">list</span></code> command (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/742">issue 742</a>)</p></li>
<li><p>Multiple documentation enhancements and fixes (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/575">issue 575</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/587">issue 587</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/590">issue 590</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/596">issue 596</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/610">issue 610</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/617">issue 617</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/618">issue 618</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/627">issue 627</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/613">issue 613</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/643">issue 643</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/654">issue 654</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/675">issue 675</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/663">issue 663</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/711">issue 711</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/714">issue 714</a>)</p></li>
</ul>
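<p>Among the entries above, the case-insensitive domain support in url_is_from_any_domain() (issue 693) is easy to picture. The following is a minimal re-implementation sketch: the function name matches w3lib's, but the body is illustrative only, not the library's actual code:</p>

```python
from urllib.parse import urlparse

def url_is_from_any_domain(url, domains):
    # Sketch of case-insensitive matching: the URL's host must equal one of
    # the given domains, or be a subdomain of it, comparing in lowercase.
    host = (urlparse(url).hostname or "").lower()
    return any(host == d.lower() or host.endswith("." + d.lower())
               for d in domains)

print(url_is_from_any_domain("http://WWW.Example.COM/page", ["example.com"]))  # True
print(url_is_from_any_domain("http://example.org/", ["example.com"]))          # False
```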
</div>
<div class="section" id="id65">
<h3>Bugfixes<a class="headerlink" href="#id65" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Encode unicode URL values when creating Links in RegexLinkExtractor (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/561">issue 561</a>)</p></li>
<li><p>Ignore None values in ItemLoader processors (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/556">issue 556</a>)</p></li>
<li><p>Fix link text when there is an inner tag in SgmlLinkExtractor and HtmlParserLinkExtractor (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/485">issue 485</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/574">issue 574</a>)</p></li>
<li><p>Fix wrong checks on subclassing of deprecated classes (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/581">issue 581</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/584">issue 584</a>)</p></li>
<li><p>Handle errors caused by inspect.stack() failures (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/582">issue 582</a>)</p></li>
<li><p>Fix a reference to a nonexistent engine attribute (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/593">issue 593</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/594">issue 594</a>)</p></li>
<li><p>Fix the dynamic item class example's usage of type() (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/603">issue 603</a>)</p></li>
<li><p>Use lucasdemarchi/codespell to fix typos (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/628">issue 628</a>)</p></li>
<li><p>Fix the default value of the attrs argument in SgmlLinkExtractor to be a tuple (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/661">issue 661</a>)</p></li>
<li><p>Fix XXE flaw in the sitemap reader (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/676">issue 676</a>)</p></li>
<li><p>Fix the engine to support filtered start requests (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/707">issue 707</a>)</p></li>
<li><p>Fix the offsite middleware case on URLs with no hostname (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/745">issue 745</a>)</p></li>
<li><p>The test suite no longer requires PIL (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/585">issue 585</a>)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-0-22-2-released-2014-02-14">
<h2>Scrapy 0.22.2 (released 2014-02-14)<a class="headerlink" href="#scrapy-0-22-2-released-2014-02-14" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Fix a reference to the nonexistent engine.slots. Closes #593 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/13c099a">commit 13c099a</a>)</p></li>
<li><p>downloadermw doc typo (spidermw doc copy remark) (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/8ae11bf">commit 8ae11bf</a>)</p></li>
<li><p>Correct typos (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/1346037">commit 1346037</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-22-1-released-2014-02-08">
<h2>Scrapy 0.22.1 (released 2014-02-08)<a class="headerlink" href="#scrapy-0-22-1-released-2014-02-08" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>localhost666 can resolve under certain circumstances (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/2ec2279">commit 2ec2279</a>)</p></li>
<li><p>Test inspect.stack failure (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/cc3eda3">commit cc3eda3</a>)</p></li>
<li><p>Handle cases when inspect.stack() fails (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/8cb44f9">commit 8cb44f9</a>)</p></li>
<li><p>Fix wrong checks on subclassing of deprecated classes. Closes #581 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/46d98d6">commit 46d98d6</a>)</p></li>
<li><p>Docs: 4-space indent for the final spider example (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/13846de">commit 13846de</a>)</p></li>
<li><p>Fix HtmlParserLinkExtractor and tests after the #485 merge (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/368a946">commit 368a946</a>)</p></li>
<li><p>BaseSgmlLinkExtractor: fixed the missing space when the link has an inner tag (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/b566388">commit b566388</a>)</p></li>
<li><p>BaseSgmlLinkExtractor: added a unit test of a link with an inner tag (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/c1cb418">commit c1cb418</a>)</p></li>
<li><p>BaseSgmlLinkExtractor: fixed unknown_endtag() so that it only sets current_link=None when the end tag matches the opening tag (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/7e4d627">commit 7e4d627</a>)</p></li>
<li><p>Fix tests for the Travis CI build (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/76c7e20">commit 76c7e20</a>)</p></li>
<li><p>Replace unencodable code points with HTML entities. Fixes #562 and #285 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/5f87b17">commit 5f87b17</a>)</p></li>
<li><p>RegexLinkExtractor: encode URL unicode values when creating Links (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/d0ee545">commit d0ee545</a>)</p></li>
<li><p>Updated the tutorial crawl output with the latest output. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/8da65de">commit 8da65de</a>)</p></li>
<li><p>Updated the shell docs with the crawler reference and fixed the actual shell output. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/875b9ab">commit 875b9ab</a>)</p></li>
<li><p>PEP 8 minor edits. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/f89efaf">commit f89efaf</a>)</p></li>
<li><p>Expose the current crawler in the Scrapy shell. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/5349cec">commit 5349cec</a>)</p></li>
<li><p>Unused re import and PEP 8 minor edits. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/387f414">commit 387f414</a>)</p></li>
<li><p>Ignore None values when using the ItemLoader. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/0632546">commit 0632546</a>)</p></li>
<li><p>DOC: fixed HTTPCACHE_STORAGE in the default settings docs, which is now Filesystem instead of DBM. (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/cde9a8c">commit cde9a8c</a>)</p></li>
<li><p>Show the Ubuntu setup instructions as literal code (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/fb5c9c5">commit fb5c9c5</a>)</p></li>
<li><p>Update the Ubuntu installation instructions (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/70fb105">commit 70fb105</a>)</p></li>
<li><p>Merge pull request #550 from stray-leone/patch-1 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/6f70b6a">commit 6f70b6a</a>)</p></li>
<li><p>Modify the version of the Scrapy Ubuntu package (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/725900d">commit 725900d</a>)</p></li>
<li><p>Fix the 0.22.0 release date (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/af0219a">commit af0219a</a>)</p></li>
<li><p>Fix typos in news.rst and remove the (not released yet) header (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/b7f58f4">commit b7f58f4</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-22-0-released-2014-01-17">
<h2>Scrapy 0.22.0 (released 2014-01-17)<a class="headerlink" href="#scrapy-0-22-0-released-2014-01-17" title="Permalink to this headline">¶</a></h2>
<div class="section" id="id66">
<h3>Enhancements<a class="headerlink" href="#id66" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>[<strong>Backward incompatible</strong>] Switched the HTTPCacheMiddleware backend to filesystem (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/541">issue 541</a>). To restore the old backend, set <code class="docutils literal notranslate"><span class="pre">HTTPCACHE_STORAGE</span></code> to <code class="docutils literal notranslate"><span class="pre">scrapy.contrib.httpcache.DbmCacheStorage</span></code></p></li>
<li><p>Proxy https:// URLs using the CONNECT method (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/392">issue 392</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/397">issue 397</a>)</p></li>
<li><p>Add a middleware to crawl AJAX-crawlable pages as defined by Google. (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/343">issue 343</a>)</p></li>
<li><p>Rename scrapy.spider.BaseSpider to scrapy.spider.Spider (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/510">issue 510</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/519">issue 519</a>)</p></li>
<li><p>Selectors register EXSLT namespaces by default (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/472">issue 472</a>)</p></li>
<li><p>Unify item loaders similar to the selectors renaming (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/461">issue 461</a>)</p></li>
<li><p>Make the <code class="docutils literal notranslate"><span class="pre">RFPDupeFilter</span></code> class easily subclassable (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/533">issue 533</a>)</p></li>
<li><p>Improve test coverage and forthcoming Python 3 support (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/525">issue 525</a>)</p></li>
<li><p>Promote startup info on settings and middleware to the INFO level (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/520">issue 520</a>)</p></li>
<li><p>Support partials in the <code class="docutils literal notranslate"><span class="pre">get_func_args</span></code> util (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/506">issue 506</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/504">issue 504</a>)</p></li>
<li><p>Allow running individual tests via tox (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/503">issue 503</a>)</p></li>
<li><p>Update the extensions ignored by link extractors (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/498">issue 498</a>)</p></li>
<li><p>Add middleware methods to get files/images/thumbs paths (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/490">issue 490</a>)</p></li>
<li><p>Improve the offsite middleware tests (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/478">issue 478</a>)</p></li>
<li><p>Add a way to skip the default Referer header set by RefererMiddleware (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/475">issue 475</a>)</p></li>
<li><p>Do not send <code class="docutils literal notranslate"><span class="pre">x-gzip</span></code> in the default <code class="docutils literal notranslate"><span class="pre">Accept-Encoding</span></code> header (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/469">issue 469</a>)</p></li>
<li><p>Support defining HTTP error handling using settings (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/466">issue 466</a>)</p></li>
<li><p>Use modern Python idioms wherever legacies are found (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/497">issue 497</a>)</p></li>
<li><p>Improve and correct documentation (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/527">issue 527</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/524">issue 524</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/521">issue 521</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/517">issue 517</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/512">issue 512</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/505">issue 505</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/502">issue 502</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/489">issue 489</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/465">issue 465</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/460">issue 460</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/425">issue 425</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/536">issue 536</a>)</p></li>
</ul>
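<p>The backward-incompatible cache backend switch at the top of this list can be undone per project. A settings fragment along these lines (a sketch for a project's settings.py; the storage class path is taken from the note above, and enabling the cache at all is an assumption) restores the DBM backend:</p>

```python
# settings.py -- restore the pre-0.22.0 DBM cache backend, as described above.
HTTPCACHE_ENABLED = True  # assumption: HTTP caching is wanted in this project
HTTPCACHE_STORAGE = 'scrapy.contrib.httpcache.DbmCacheStorage'
```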
</div>
<div class="section" id="fixes">
<h3>Fixes<a class="headerlink" href="#fixes" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Update Selector class imports in the CrawlSpider template (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/484">issue 484</a>)</p></li>
<li><p>Fix a nonexistent reference to <code class="docutils literal notranslate"><span class="pre">engine.slots</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/464">issue 464</a>)</p></li>
<li><p>Do not try to call <code class="docutils literal notranslate"><span class="pre">body_as_unicode()</span></code> on a non-TextResponse instance (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/462">issue 462</a>)</p></li>
<li><p>Warn when subclassing XPathItemLoader; previously it only warned on instantiation. (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/523">issue 523</a>)</p></li>
<li><p>Warn when subclassing XPathSelector; previously it only warned on instantiation. (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/537">issue 537</a>)</p></li>
<li><p>Multiple fixes to memory stats (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/531">issue 531</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/530">issue 530</a>, <a class="reference external" href="https://github.com/scrapy/scrapy/issues/529">issue 529</a>)</p></li>
<li><p>Fix overriding the URL in <code class="docutils literal notranslate"><span class="pre">FormRequest.from_response()</span></code> (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/507">issue 507</a>)</p></li>
<li><p>Fix the test runner under pip 1.5 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/513">issue 513</a>)</p></li>
<li><p>Fix a logging error when the spider name is unicode (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/479">issue 479</a>)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-0-20-2-released-2013-12-09">
<h2>Scrapy 0.20.2 (released 2013-12-09)<a class="headerlink" href="#scrapy-0-20-2-released-2013-12-09" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Update the CrawlSpider template with the Selector changes (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/6d1457d">commit 6d1457d</a>)</p></li>
<li><p>Fix method name in the tutorial. Closes GH-480 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/b4fc359">commit b4fc359</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-20-1-released-2013-11-28">
<h2>Scrapy 0.20.1 (released 2013-11-28)<a class="headerlink" href="#scrapy-0-20-1-released-2013-11-28" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>include_package_data is required to build wheels from published sources (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/5ba1ad5">commit 5ba1ad5</a>)</p></li>
<li><p>process_parallel was leaking the failures on its internal deferreds. Closes #458 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/419a780">commit 419a780</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-20-0-released-2013-11-08">
<h2>Scrapy 0.20.0 (released 2013-11-08)<a class="headerlink" href="#scrapy-0-20-0-released-2013-11-08" title="Permalink to this headline">¶</a></h2>
<div class="section" id="id67">
<h3>Enhancements<a class="headerlink" href="#id67" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>新选择器的API，包括CSS选择器（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/395">issue 395</a> 和 <a class="reference external" href="https://github.com/scrapy/scrapy/issues/426">issue 426</a> ）</p></li>
<li><p>请求/响应URL/主体属性现在是不可变的（修改它们已经被弃用了很长时间）</p></li>
<li><p><a class="reference internal" href="topics/settings.html#std-setting-ITEM_PIPELINES"><code class="xref std std-setting docutils literal notranslate"><span class="pre">ITEM_PIPELINES</span></code></a> 现在定义为dict（而不是列表）</p></li>
<li><p>SitemapSpider可以获取备用URL（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/360">issue 360</a> ）</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">Selector.remove_namespaces()</span></code> 现在从元素的属性中移除名称空间。（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/416">issue 416</a> ）</p></li>
<li><p>为支持 Python 3.3+ 铺平道路（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/435">issue 435</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/436">issue 436</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/431">issue 431</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/452">issue 452</a> ）</p></li>
<li><p>新的项导出器使用原生 Python 类型，并支持嵌套（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/366">issue 366</a> ）</p></li>
<li><p>调整 HTTP/1.1 连接池大小，使其与设置中定义的并发数相匹配（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/b43b5f575">commit b43b5f575</a> ）</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.mail.MailSender</span></code> 现在可以通过 TLS 连接，或使用 STARTTLS 升级连接（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/327">issue 327</a> ）</p></li>
<li><p>新的 FilesPipeline（文件管道），其功能从 ImagesPipeline 中分解而来（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/370">issue 370</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/409">issue 409</a> ）</p></li>
<li><p>建议使用 Pillow 代替 PIL 来处理图像（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/317">issue 317</a> ）</p></li>
<li><p>为Ubuntu Quantal和Raring添加了Debian包 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/86230c0">commit 86230c0</a> ）</p></li>
<li><p>模拟服务器（用于测试）可以侦听HTTPS请求（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/410">issue 410</a> ）</p></li>
<li><p>从多个核心组件中移除了多爬虫（multi-spider）支持（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/422">issue 422</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/421">issue 421</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/420">issue 420</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/419">issue 419</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/423">issue 423</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/418">issue 418</a> ）</p></li>
<li><p>Travis CI 现在会针对 <code class="docutils literal notranslate"><span class="pre">w3lib</span></code> 和 <code class="docutils literal notranslate"><span class="pre">queuelib</span></code> Python 包的开发版本测试 Scrapy 的更改。</p></li>
<li><p>将PYPY 2.1添加到持续集成测试中（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/ecfa7431">commit ecfa7431</a> ）</p></li>
<li><p>对源码运行了 pylint 和 pep8 检查，并删除了旧式（old-style）异常（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/430">issue 430</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/432">issue 432</a> ）</p></li>
<li><p>使用 importlib 进行参数化导入（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/445">issue 445</a> ）</p></li>
<li><p>处理 Python 2.7.5 中引入的影响 XmlItemExporter 的回归（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/372">issue 372</a> ）</p></li>
<li><p>修复 SIGINT 信号下的爬取关闭（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/450">issue 450</a> ）</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">FormRequest.from_response</span></code> 不再提交 <code class="docutils literal notranslate"><span class="pre">reset</span></code> 类型的输入（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/b326b87">commit b326b87</a> ）</p></li>
<li><p>当请求的 errback 引发异常时，不再抑制下载错误（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/684cfc0">commit 684cfc0</a> ）</p></li>
</ul>
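上文提到 <code class="docutils literal notranslate"><span class="pre">ITEM_PIPELINES</span></code> 现在定义为 dict 而不是列表。下面是一个最小示意（其中 myproject.pipelines.* 的管道类名为假设值，仅作说明）：dict 的值是整数优先级，引擎按数值升序依次调用各管道。

```python
# settings.py 中的示意写法（ValidationPipeline、StoragePipeline 为假设的类名）
# dict 的值是 0-1000 之间的整数优先级，数值越小的管道越先处理 item
ITEM_PIPELINES = {
    "myproject.pipelines.ValidationPipeline": 300,
    "myproject.pipelines.StoragePipeline": 800,
}

# 引擎按优先级升序调用管道，调用顺序等价于下面的排序结果
ordered = [name for name, _ in sorted(ITEM_PIPELINES.items(), key=lambda kv: kv[1])]
```

相比列表，dict 形式允许在项目或中间件之间合并与覆盖单个管道的优先级，而不必重排整个列表。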
</div>
<div class="section" id="id68">
<h3>错误修正<a class="headerlink" href="#id68" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>在Django 1.6下修复测试（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/b6bed44c">commit b6bed44c</a> ）</p></li>
<li><p>针对 HTTP/1.1 下载处理程序和重试中间件在连接断开情况下的许多错误修复</p></li>
<li><p>修复 Twisted 各版本之间的不一致（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/406">issue 406</a> ）</p></li>
<li><p>修复 scrapy shell 的缺陷 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/418">issue 418</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/407">issue 407</a> ）</p></li>
<li><p>修复setup.py中的无效变量名（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/429">issue 429</a> ）</p></li>
<li><p>修复教程引用（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/387">issue 387</a> ）</p></li>
<li><p>改进请求响应文档（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/391">issue 391</a> ）</p></li>
<li><p>改进最佳实践文档（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/399">issue 399</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/400">issue 400</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/401">issue 401</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/402">issue 402</a> ）</p></li>
<li><p>改进Django集成文档（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/404">issue 404</a> ）</p></li>
<li><p>为 <code class="docutils literal notranslate"><span class="pre">bindaddress</span></code> 请求 meta 键补充了文档（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/37c24e01d7">commit 37c24e01d7</a> ）</p></li>
<li><p>改进 <code class="docutils literal notranslate"><span class="pre">Request</span></code> 类文档（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/226">issue 226</a> ）</p></li>
</ul>
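上面提到为 <code class="docutils literal notranslate"><span class="pre">bindaddress</span></code> 请求 meta 键补充了文档。下面是一个用法草图（IP 地址为假设值）：按 Scrapy 文档的描述，该 meta 键指定下载器发起请求时使用的本地出口 IP。

```python
# 示意：通过 meta 指定出口 IP（192.168.1.10 为假设值，仅作说明）。
# 真实爬虫中通常写作：
#   yield scrapy.Request(url, meta={"bindaddress": "192.168.1.10"})
# 这里只构造并检查 meta 字典本身，不依赖 Scrapy。
meta = {"bindaddress": "192.168.1.10"}
bindaddress = meta.get("bindaddress")  # 下载处理程序从 meta 中读取该键
```

多网卡机器上可以据此让不同请求走不同的出口地址。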
</div>
<div class="section" id="other">
<h3>其他<a class="headerlink" href="#other" title="永久链接至标题">¶</a></h3>
<ul class="simple">
<li><p>删除 Python 2.6 支持（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/448">issue 448</a> ）</p></li>
<li><p>添加 <a class="reference external" href="https://cssselect.readthedocs.io/en/latest/index.html" title="(在 cssselect v1.1.0)"><span class="xref std std-doc">cssselect</span></a> python包作为安装依赖项</p></li>
<li><p>删除了 libxml2 和多选择器后端的支持，从现在起 <a class="reference external" href="https://lxml.de/">lxml</a> 是必需的。</p></li>
<li><p>最低 Twisted 版本提高到 10.0.0，不再支持 Twisted 8.0。</p></li>
<li><p>现在运行测试套件需要 <code class="docutils literal notranslate"><span class="pre">mock</span></code> python库（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/390">issue 390</a> ）</p></li>
</ul>
</div>
<div class="section" id="thanks">
<h3>谢谢<a class="headerlink" href="#thanks" title="永久链接至标题">¶</a></h3>
<p>感谢所有为这次发布做出贡献的人！</p>
<p>按提交次数排序的参与者列表：</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="mi">69</span> <span class="n">Daniel</span> <span class="n">Graña</span> <span class="o">&lt;</span><span class="n">dangra</span><span class="o">@...&gt;</span>
<span class="mi">37</span> <span class="n">Pablo</span> <span class="n">Hoffman</span> <span class="o">&lt;</span><span class="n">pablo</span><span class="o">@...&gt;</span>
<span class="mi">13</span> <span class="n">Mikhail</span> <span class="n">Korobov</span> <span class="o">&lt;</span><span class="n">kmike84</span><span class="o">@...&gt;</span>
 <span class="mi">9</span> <span class="n">Alex</span> <span class="n">Cepoi</span> <span class="o">&lt;</span><span class="n">alex</span><span class="o">.</span><span class="n">cepoi</span><span class="o">@...&gt;</span>
 <span class="mi">9</span> <span class="n">alexanderlukanin13</span> <span class="o">&lt;</span><span class="n">alexander</span><span class="o">.</span><span class="n">lukanin</span><span class="o">.</span><span class="mi">13</span><span class="o">@...&gt;</span>
 <span class="mi">8</span> <span class="n">Rolando</span> <span class="n">Espinoza</span> <span class="n">La</span> <span class="n">fuente</span> <span class="o">&lt;</span><span class="n">darkrho</span><span class="o">@...&gt;</span>
 <span class="mi">8</span> <span class="n">Lukasz</span> <span class="n">Biedrycki</span> <span class="o">&lt;</span><span class="n">lukasz</span><span class="o">.</span><span class="n">biedrycki</span><span class="o">@...&gt;</span>
 <span class="mi">6</span> <span class="n">Nicolas</span> <span class="n">Ramirez</span> <span class="o">&lt;</span><span class="n">nramirez</span><span class="o">.</span><span class="n">uy</span><span class="o">@...&gt;</span>
 <span class="mi">3</span> <span class="n">Paul</span> <span class="n">Tremberth</span> <span class="o">&lt;</span><span class="n">paul</span><span class="o">.</span><span class="n">tremberth</span><span class="o">@...&gt;</span>
 <span class="mi">2</span> <span class="n">Martin</span> <span class="n">Olveyra</span> <span class="o">&lt;</span><span class="n">molveyra</span><span class="o">@...&gt;</span>
 <span class="mi">2</span> <span class="n">Stefan</span> <span class="o">&lt;</span><span class="n">misc</span><span class="o">@...&gt;</span>
 <span class="mi">2</span> <span class="n">Rolando</span> <span class="n">Espinoza</span> <span class="o">&lt;</span><span class="n">darkrho</span><span class="o">@...&gt;</span>
 <span class="mi">2</span> <span class="n">Loren</span> <span class="n">Davie</span> <span class="o">&lt;</span><span class="n">loren</span><span class="o">@...&gt;</span>
 <span class="mi">2</span> <span class="n">irgmedeiros</span> <span class="o">&lt;</span><span class="n">irgmedeiros</span><span class="o">@...&gt;</span>
 <span class="mi">1</span> <span class="n">Stefan</span> <span class="n">Koch</span> <span class="o">&lt;</span><span class="n">taikano</span><span class="o">@...&gt;</span>
 <span class="mi">1</span> <span class="n">Stefan</span> <span class="o">&lt;</span><span class="n">cct</span><span class="o">@...&gt;</span>
 <span class="mi">1</span> <span class="n">scraperdragon</span> <span class="o">&lt;</span><span class="n">dragon</span><span class="o">@...&gt;</span>
 <span class="mi">1</span> <span class="n">Kumara</span> <span class="n">Tharmalingam</span> <span class="o">&lt;</span><span class="n">ktharmal</span><span class="o">@...&gt;</span>
 <span class="mi">1</span> <span class="n">Francesco</span> <span class="n">Piccinno</span> <span class="o">&lt;</span><span class="n">stack</span><span class="o">.</span><span class="n">box</span><span class="o">@...&gt;</span>
 <span class="mi">1</span> <span class="n">Marcos</span> <span class="n">Campal</span> <span class="o">&lt;</span><span class="n">duendex</span><span class="o">@...&gt;</span>
 <span class="mi">1</span> <span class="n">Dragon</span> <span class="n">Dave</span> <span class="o">&lt;</span><span class="n">dragon</span><span class="o">@...&gt;</span>
 <span class="mi">1</span> <span class="n">Capi</span> <span class="n">Etheriel</span> <span class="o">&lt;</span><span class="n">barraponto</span><span class="o">@...&gt;</span>
 <span class="mi">1</span> <span class="n">cacovsky</span> <span class="o">&lt;</span><span class="n">amarquesferraz</span><span class="o">@...&gt;</span>
 <span class="mi">1</span> <span class="n">Berend</span> <span class="n">Iwema</span> <span class="o">&lt;</span><span class="n">berend</span><span class="o">@...&gt;</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="scrapy-0-18-4-released-2013-10-10">
<h2>Scrapy 0.18.4（2013-10-10发布）<a class="headerlink" href="#scrapy-0-18-4-released-2013-10-10" title="永久链接至标题">¶</a></h2>
<ul class="simple">
<li><p>IPython shell 不再更新命名空间。修复 #396（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/3d32c4f">commit 3d32c4f</a> ）</p></li>
<li><p>修复在 shell 命令中替换请求时的 AlreadyCalledError。关闭 #407（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/b1d8919">commit b1d8919</a> ）</p></li>
<li><p>修复 start_requests 的惰性求值和提前挂起问题（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/89faf52">commit 89faf52</a> ）</p></li>
</ul>
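上面的修复与 start_requests 的惰性求值有关。其含义可以用一个与 Scrapy 无关的纯 Python 草图来说明：生成器按需产出请求，引擎取一个处理一个，而不是在启动时把全部请求一次性构造到内存中。

```python
# 纯 Python 草图：用生成器模拟 start_requests 的惰性求值
#（真实代码中 yield 的是 scrapy.Request(url)，这里用字符串代替）
def start_requests(urls):
    for url in urls:
        yield f"Request({url})"

gen = start_requests(["http://a.example", "http://b.example"])
first = next(gen)  # 只有在引擎需要时才生成下一个请求
```

对于种子 URL 数量巨大的爬虫，这一行为避免了启动时的内存峰值和挂起。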
</div>
<div class="section" id="scrapy-0-18-3-released-2013-10-03">
<h2>Scrapy 0.18.3（2013-10-03发布）<a class="headerlink" href="#scrapy-0-18-3-released-2013-10-03" title="永久链接至标题">¶</a></h2>
<ul class="simple">
<li><p>修复 start_requests 惰性求值的回归（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/12693a5">commit 12693a5</a> ）</p></li>
<li><p>表单：不提交重置输入（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/e429f63">commit e429f63</a> ）</p></li>
<li><p>增加UnitTest超时以减少Travis假阳性故障（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/912202e">commit 912202e</a> ）</p></li>
<li><p>将 master 分支的修复向后移植到 JSON 导出器（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/cfc2d46">commit cfc2d46</a> ）</p></li>
<li><p>在生成sdist tarball之前，修复权限并设置umask（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/06149e0">commit 06149e0</a> ）</p></li>
</ul>
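上面"表单：不提交重置输入"的修复逻辑，可以用标准库 html.parser 写一个与 Scrapy 实现无关的草图来说明：收集表单的 input 字段作为提交数据时，跳过 type="reset" 的输入。

```python
from html.parser import HTMLParser

# 草图：收集 <input> 的 name/value 作为表单数据，
# 跳过 type="reset"（以及没有 name 的输入）——非 Scrapy 真实实现
class FormInputCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag != "input":
            return
        a = dict(attrs)
        if a.get("type", "").lower() == "reset":
            return  # reset 按钮不应作为表单数据提交
        name = a.get("name")
        if name:
            self.fields[name] = a.get("value", "")

html = '''<form>
  <input type="text" name="q" value="scrapy">
  <input type="reset" name="clear" value="Reset">
  <input type="hidden" name="token" value="abc">
</form>'''
collector = FormInputCollector()
collector.feed(html)
```

reset 按钮只在浏览器端清空表单，从不出现在提交的数据里，所以模拟表单提交时也应排除它。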
</div>
<div class="section" id="scrapy-0-18-2-released-2013-09-03">
<h2>Scrapy 0.18.2（2013-09-03发布）<a class="headerlink" href="#scrapy-0-18-2-released-2013-09-03" title="永久链接至标题">¶</a></h2>
<ul class="simple">
<li><p>向后移植 <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">check</span></code> 命令的修复，并保持与多爬虫进程的向后兼容（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/339">issue 339</a> ）</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-18-1-released-2013-08-27">
<h2>Scrapy 0.18.1（2013-08-27发布）<a class="headerlink" href="#scrapy-0-18-1-released-2013-08-27" title="永久链接至标题">¶</a></h2>
<ul class="simple">
<li><p>删除由cherry-picked更改添加的额外导入（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/d20304e">commit d20304e</a> ）</p></li>
<li><p>在twisted pre 11.0.0下修复爬行测试（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/1994f38">commit 1994f38</a> ）</p></li>
<li><p>Python 2.6 不能格式化零长度字段（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/abf756f">commit abf756f</a> ）</p></li>
<li><p>测试未绑定响应的潜在数据丢失错误（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/b15470d">commit b15470d</a> ）</p></li>
<li><p>将没有 Content-Length 或 Transfer-Encoding 的响应视为正常响应（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/c4bf324">commit c4bf324</a> ）</p></li>
<li><p>如果未启用 HTTP/1.1 处理程序，则不包含 ResponseFailed（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/6cbe684">commit 6cbe684</a> ）</p></li>
<li><p>新的 HTTP 客户端将连接丢失包装在 ResponseFailed 异常中。修复 #373 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/1a20bba">commit 1a20bba</a> ）</p></li>
<li><p>限制Travis CI构建矩阵（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/3b01bb8">commit 3b01bb8</a> ）</p></li>
<li><p>合并来自 Peterarenot/patch-1 的拉取请求 #375（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/fa766d7">commit fa766d7</a> ）</p></li>
<li><p>修正为引用正确的文件夹（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/3283809">commit 3283809</a> ）</p></li>
<li><p>将 Quantal 和 Raring 添加到受支持的 Ubuntu 版本 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/1411923">commit 1411923</a> ）</p></li>
<li><p>修复在升级到http1客户端后没有重试某些连接错误的重试中间件，关闭GH-373（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/bb35ed0">commit bb35ed0</a> ）</p></li>
<li><p>在 Python 2.7.4 和 2.7.5 中修复 XmlItemExporter（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/de3e451">commit de3e451</a> ）</p></li>
<li><p>0.18发行说明的小更新（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/c45e5f1">commit c45e5f1</a> ）</p></li>
<li><p>修正贡献者列表格式 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/0b60031">commit 0b60031</a> ）</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-18-0-released-2013-08-09">
<h2>Scrapy 0.18.0（2013-08-09发布）<a class="headerlink" href="#scrapy-0-18-0-released-2013-08-09" title="永久链接至标题">¶</a></h2>
<ul class="simple">
<li><p>使用 tox 对测试套件的运行做了大量改进，包括一种在 pypi 上进行测试的方法</p></li>
<li><p>处理Ajax可爬行URL的get参数（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/3fe2a32">commit 3fe2a32</a> ）</p></li>
<li><p>使用lxml recover选项分析站点地图（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/347">issue 347</a> ）</p></li>
<li><p>错误修复cookie按主机名而不是按netloc合并（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/352">issue 352</a> ）</p></li>
<li><p>支持禁用 <code class="docutils literal notranslate"><span class="pre">HttpCompressionMiddleware</span></code> 使用标志设置（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/359">issue 359</a> ）</p></li>
<li><p>支持在 <code class="docutils literal notranslate"><span class="pre">XMLFeedSpider</span></code> 的 <code class="docutils literal notranslate"><span class="pre">iternodes</span></code> 解析器中使用 XML 命名空间（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/12">issue 12</a> ）</p></li>
<li><p>支持 <code class="docutils literal notranslate"><span class="pre">dont_cache</span></code> 请求元标志（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/19">issue 19</a> ）</p></li>
<li><p>修正错误 <code class="docutils literal notranslate"><span class="pre">scrapy.utils.gz.gunzip</span></code> 被python 2.7.4中的更改打断（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/4dc76e">commit 4dc76e</a> ）</p></li>
<li><p>错误修复：<code class="docutils literal notranslate"><span class="pre">SgmlLinkExtractor</span></code> 上的 URL 编码（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/24">issue 24</a> ）</p></li>
<li><p>修正错误 <code class="docutils literal notranslate"><span class="pre">TakeFirst</span></code> 处理器不应丢弃零（0）值（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/59">issue 59</a> ）</p></li>
<li><p>支持XML导出器中的嵌套项（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/66">issue 66</a> ）</p></li>
<li><p>提高cookie处理性能（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/77">issue 77</a> ）</p></li>
<li><p>记录重复筛选的请求一次（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/105">issue 105</a> ）</p></li>
<li><p>将重定向中间件拆分为状态中间件和基于元的中间件（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/78">issue 78</a> ）</p></li>
<li><p>使用http1.1作为默认的下载程序处理程序（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/109">issue 109</a> 和 <a class="reference external" href="https://github.com/scrapy/scrapy/issues/318">issue 318</a> ）</p></li>
<li><p>支持在 <code class="docutils literal notranslate"><span class="pre">FormRequest.from_response</span></code> 中使用 XPath 选择表单（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/185">issue 185</a> ）</p></li>
<li><p>修正 <code class="docutils literal notranslate"><span class="pre">SgmlLinkExtractor</span></code> 上的 Unicode 解码错误（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/199">issue 199</a> ）</p></li>
<li><p>修复 PyPy 解释器上的信号调度错误（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/205">issue 205</a> ）</p></li>
<li><p>改进请求延迟和并发处理（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/206">issue 206</a> ）</p></li>
<li><p>将rfc2616缓存策略添加到 <code class="docutils literal notranslate"><span class="pre">HttpCacheMiddleware</span></code> （ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/212">issue 212</a> ）</p></li>
<li><p>允许自定义引擎记录的消息（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/214">issue 214</a> ）</p></li>
<li><p>多方面的改进 <code class="docutils literal notranslate"><span class="pre">DjangoItem</span></code> （ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/217">issue 217</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/218">issue 218</a> ， <a class="reference external" href="https://github.com/scrapy/scrapy/issues/221">issue 221</a> ）</p></li>
<li><p>使用 setuptools 入口点扩展 Scrapy 命令（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/260">issue 260</a> ）</p></li>
<li><p>允许爬虫的 <code class="docutils literal notranslate"><span class="pre">allowed_domains</span></code> 值为集合/元组（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/261">issue 261</a> ）</p></li>
<li><p>支持 <code class="docutils literal notranslate"><span class="pre">settings.getdict</span></code> （ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/269">issue 269</a> ）</p></li>
<li><p>简化内部 <code class="docutils literal notranslate"><span class="pre">scrapy.core.scraper</span></code> 插槽处理（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/271">issue 271</a> ）</p></li>
<li><p>补充 <code class="docutils literal notranslate"><span class="pre">Item.copy</span></code> （ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/290">issue 290</a> ）</p></li>
<li><p>收集空闲下载器插槽（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/297">issue 297</a> ）</p></li>
<li><p>添加 <code class="docutils literal notranslate"><span class="pre">ftp://</span></code> 方案下载程序处理程序（ <a class="reference external" href="https://github.com/scrapy/scrapy/issues/329">issue 329</a> ）</p></li>
<li><p>添加了Downloader Benchmark Web服务器和Spider工具 <a class="reference internal" href="topics/benchmarking.html#benchmarking"><span class="std std-ref">标杆管理</span></a></p></li>
<li><p>已将持久化（磁盘上）队列移动到单独的项目 (<a class="reference external" href="https://github.com/scrapy/queuelib">queuelib</a>)，Scrapy 现在依赖于它</p></li>
<li><p>支持使用外部库添加 Scrapy 命令 (<a class="reference external" href="https://github.com/scrapy/scrapy/issues/260">issue 260</a> ）</p></li>
<li><p>为 <code class="docutils literal notranslate"><span class="pre">scrapy</span></code> 命令行工具添加了 <code class="docutils literal notranslate"><span class="pre">--pdb</span></code> 选项</p></li>
<li><p>添加了 <a class="reference internal" href="topics/selectors.html#scrapy.selector.Selector.remove_namespaces" title="scrapy.selector.Selector.remove_namespaces"><code class="xref py py-meth docutils literal notranslate"><span class="pre">XPathSelector.remove_namespaces</span></code></a> ，允许从 XML 文档中删除所有命名空间，以便使用不含命名空间的 XPath。记录在 <a class="reference internal" href="topics/selectors.html#topics-selectors"><span class="std std-ref">选择器</span></a> 中。</p></li>
<li><p>蜘蛛合约的几个改进</p></li>
<li><p>新的默认中间件 MetaRefreshMiddleware，用于处理 meta refresh HTML 标记重定向</p></li>
<li><p>MetaRefreshMiddleware 和 RedirectMiddleware 具有不同的优先级，以解决 #62</p></li>
<li><p>为爬虫（Spider）添加了 from_crawler 方法</p></li>
<li><p>使用模拟服务器添加系统测试</p></li>
<li><p>macOS兼容性的更多改进（感谢Alex Cepoi）</p></li>
<li><p>多处单例（singleton）清理和多爬虫支持（感谢 Nicolas Ramirez）</p></li>
<li><p>支持自定义下载插槽</p></li>
<li><p>在“shell”命令中添加了--spider选项。</p></li>
<li><p>当 Scrapy 启动时记录被覆盖的设置</p></li>
</ul>
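上面列表中 XPathSelector.remove_namespaces 的效果，可以用标准库 xml.etree 写一个近似草图来说明（非 Scrapy 源码）：去掉标签和属性名中的 "{uri}" 命名空间前缀，之后就可以用不带命名空间的标签名或 XPath 查询。

```python
import re
import xml.etree.ElementTree as ET

# 草图：模拟 remove_namespaces 的效果（非 Scrapy 实现）
def strip_namespaces(root):
    for el in root.iter():
        el.tag = re.sub(r"^\{.*?\}", "", el.tag)  # 去掉 "{uri}" 前缀
        for key in list(el.attrib):
            new_key = re.sub(r"^\{.*?\}", "", key)
            if new_key != key:
                el.attrib[new_key] = el.attrib.pop(key)
    return root

xml_doc = '<feed xmlns="http://www.w3.org/2005/Atom"><title>hello</title></feed>'
root = strip_namespaces(ET.fromstring(xml_doc))
```

去掉命名空间会损失一些信息（同名不同命名空间的元素会被合并），因此 Scrapy 把它做成显式调用而不是默认行为。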
<p>感谢所有为这次发布做出贡献的人。以下是按提交次数排序的参与者列表：</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="mi">130</span> <span class="n">Pablo</span> <span class="n">Hoffman</span> <span class="o">&lt;</span><span class="n">pablo</span><span class="o">@...&gt;</span>
 <span class="mi">97</span> <span class="n">Daniel</span> <span class="n">Graña</span> <span class="o">&lt;</span><span class="n">dangra</span><span class="o">@...&gt;</span>
 <span class="mi">20</span> <span class="n">Nicolás</span> <span class="n">Ramírez</span> <span class="o">&lt;</span><span class="n">nramirez</span><span class="o">.</span><span class="n">uy</span><span class="o">@...&gt;</span>
 <span class="mi">13</span> <span class="n">Mikhail</span> <span class="n">Korobov</span> <span class="o">&lt;</span><span class="n">kmike84</span><span class="o">@...&gt;</span>
 <span class="mi">12</span> <span class="n">Pedro</span> <span class="n">Faustino</span> <span class="o">&lt;</span><span class="n">pedrobandim</span><span class="o">@...&gt;</span>
 <span class="mi">11</span> <span class="n">Steven</span> <span class="n">Almeroth</span> <span class="o">&lt;</span><span class="n">sroth77</span><span class="o">@...&gt;</span>
  <span class="mi">5</span> <span class="n">Rolando</span> <span class="n">Espinoza</span> <span class="n">La</span> <span class="n">fuente</span> <span class="o">&lt;</span><span class="n">darkrho</span><span class="o">@...&gt;</span>
  <span class="mi">4</span> <span class="n">Michal</span> <span class="n">Danilak</span> <span class="o">&lt;</span><span class="n">mimino</span><span class="o">.</span><span class="n">coder</span><span class="o">@...&gt;</span>
  <span class="mi">4</span> <span class="n">Alex</span> <span class="n">Cepoi</span> <span class="o">&lt;</span><span class="n">alex</span><span class="o">.</span><span class="n">cepoi</span><span class="o">@...&gt;</span>
  <span class="mi">4</span> <span class="n">Alexandr</span> <span class="n">N</span> <span class="n">Zamaraev</span> <span class="p">(</span><span class="n">aka</span> <span class="n">tonal</span><span class="p">)</span> <span class="o">&lt;</span><span class="n">tonal</span><span class="o">@...&gt;</span>
  <span class="mi">3</span> <span class="n">paul</span> <span class="o">&lt;</span><span class="n">paul</span><span class="o">.</span><span class="n">tremberth</span><span class="o">@...&gt;</span>
  <span class="mi">3</span> <span class="n">Martin</span> <span class="n">Olveyra</span> <span class="o">&lt;</span><span class="n">molveyra</span><span class="o">@...&gt;</span>
  <span class="mi">3</span> <span class="n">Jordi</span> <span class="n">Llonch</span> <span class="o">&lt;</span><span class="n">llonchj</span><span class="o">@...&gt;</span>
  <span class="mi">3</span> <span class="n">arijitchakraborty</span> <span class="o">&lt;</span><span class="n">myself</span><span class="o">.</span><span class="n">arijit</span><span class="o">@...&gt;</span>
  <span class="mi">2</span> <span class="n">Shane</span> <span class="n">Evans</span> <span class="o">&lt;</span><span class="n">shane</span><span class="o">.</span><span class="n">evans</span><span class="o">@...&gt;</span>
  <span class="mi">2</span> <span class="n">joehillen</span> <span class="o">&lt;</span><span class="n">joehillen</span><span class="o">@...&gt;</span>
  <span class="mi">2</span> <span class="n">Hart</span> <span class="o">&lt;</span><span class="n">HartSimha</span><span class="o">@...&gt;</span>
  <span class="mi">2</span> <span class="n">Dan</span> <span class="o">&lt;</span><span class="n">ellisd23</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Zuhao</span> <span class="n">Wan</span> <span class="o">&lt;</span><span class="n">wanzuhao</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">whodatninja</span> <span class="o">&lt;</span><span class="n">blake</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">vkrest</span> <span class="o">&lt;</span><span class="n">v</span><span class="o">.</span><span class="n">krestiannykov</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">tpeng</span> <span class="o">&lt;</span><span class="n">pengtaoo</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Tom</span> <span class="n">Mortimer</span><span class="o">-</span><span class="n">Jones</span> <span class="o">&lt;</span><span class="n">tom</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Rocio</span> <span class="n">Aramberri</span> <span class="o">&lt;</span><span class="n">roschegel</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Pedro</span> <span class="o">&lt;</span><span class="n">pedro</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">notsobad</span> <span class="o">&lt;</span><span class="n">wangxiaohugg</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Natan</span> <span class="n">L</span> <span class="o">&lt;</span><span class="n">kuyanatan</span><span class="o">.</span><span class="n">nlao</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Mark</span> <span class="n">Grey</span> <span class="o">&lt;</span><span class="n">mark</span><span class="o">.</span><span class="n">grey</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Luan</span> <span class="o">&lt;</span><span class="n">luanpab</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Libor</span> <span class="n">Nenadál</span> <span class="o">&lt;</span><span class="n">libor</span><span class="o">.</span><span class="n">nenadal</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Juan</span> <span class="n">M</span> <span class="n">Uys</span> <span class="o">&lt;</span><span class="n">opyate</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Jonas</span> <span class="n">Brunsgaard</span> <span class="o">&lt;</span><span class="n">jonas</span><span class="o">.</span><span class="n">brunsgaard</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Ilya</span> <span class="n">Baryshev</span> <span class="o">&lt;</span><span class="n">baryshev</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Hasnain</span> <span class="n">Lakhani</span> <span class="o">&lt;</span><span class="n">m</span><span class="o">.</span><span class="n">hasnain</span><span class="o">.</span><span class="n">lakhani</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Emanuel</span> <span class="n">Schorsch</span> <span class="o">&lt;</span><span class="n">emschorsch</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Chris</span> <span class="n">Tilden</span> <span class="o">&lt;</span><span class="n">chris</span><span class="o">.</span><span class="n">tilden</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Capi</span> <span class="n">Etheriel</span> <span class="o">&lt;</span><span class="n">barraponto</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">cacovsky</span> <span class="o">&lt;</span><span class="n">amarquesferraz</span><span class="o">@...&gt;</span>
  <span class="mi">1</span> <span class="n">Berend</span> <span class="n">Iwema</span> <span class="o">&lt;</span><span class="n">berend</span><span class="o">@...&gt;</span>
</pre></div>
</div>
</div>
<div class="section" id="scrapy-0-16-5-released-2013-05-30">
<h2>Scrapy 0.16.5（2013-05-30发布）<a class="headerlink" href="#scrapy-0-16-5-released-2013-05-30" title="永久链接至标题">¶</a></h2>
<ul class="simple">
<li><p>当Scrapy deploy重定向到新端点时遵守请求方法 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/8c4fcee">commit 8c4fcee</a> ）</p></li>
<li><p>修复不准确的下载器中间件文档。参考 #280（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/40667cb">commit 40667cb</a> ）</p></li>
<li><p>文档：删除 diveintopython.org 的链接，该网站已不再可用。关闭 #246（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/bd58bfa">commit bd58bfa</a> ）</p></li>
<li><p>在无效的HTML5文档中查找表单节点（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/e3d6945">commit e3d6945</a> ）</p></li>
<li><p>修正了错误的标签属性类型（bool 而不是 list）（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/a274276">commit a274276</a> ）</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-16-4-released-2013-01-23">
<h2>Scrapy 0.16.4（2013-01-23发布）<a class="headerlink" href="#scrapy-0-16-4-released-2013-01-23" title="永久链接至标题">¶</a></h2>
<ul class="simple">
<li><p>修复文档中的拼写错误（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/6d2b3aa">commit 6d2b3aa</a> ）</p></li>
<li><p>添加了关于禁用扩展的文档。参考 #132（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/c90de33">commit c90de33</a> ）</p></li>
<li><p>修正了错误消息的格式化。log.err() 不支持这种格式化，出现错误时消息为："ERROR: Error Processing %(item)s"（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/c16150c">commit c16150c</a> ）</p></li>
<li><p>整理和改进图像管道错误记录（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/56b45fc">commit 56b45fc</a> ）</p></li>
<li><p>固定文档错误（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/243be84">commit 243be84</a> ）</p></li>
<li><p>添加文档主题：广泛的爬网和常见做法 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/1fbb715">commit 1fbb715</a> ）</p></li>
<li><p>修复Scrapy parse命令中未显式指定spider时的错误。关闭#209 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/c72e682">commit c72e682</a> ）</p></li>
<li><p>更新docs/topics/commands.rst（ <a class="reference external" href="https://github.com/scrapy/scrapy/commit/28eac7a">commit 28eac7a</a> ）</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-16-3-released-2012-12-07">
<h2>Scrapy 0.16.3（2012-12-07发布）<a class="headerlink" href="#scrapy-0-16-3-released-2012-12-07" title="永久链接至标题">¶</a></h2>
<ul class="simple">
<li><p>Removed the concurrency limitation when using download delays, while still ensuring inter-request delays are enforced (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/487b9b5">commit 487b9b5</a>)</p></li>
<li><p>Added error details when the images pipeline fails (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/8232569">commit 8232569</a>)</p></li>
<li><p>Improved macOS compatibility (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/8dcf8aa">commit 8dcf8aa</a>)</p></li>
<li><p>setup.py: use README.rst to populate long_description (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/7b5310d">commit 7b5310d</a>)</p></li>
<li><p>doc: removed obsolete references to ClientForm (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/80f9bb6">commit 80f9bb6</a>)</p></li>
<li><p>Corrected docs for the default storage backend (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/2aa491b">commit 2aa491b</a>)</p></li>
<li><p>doc: removed broken proxyhub link from the FAQ (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/bdf61c4">commit bdf61c4</a>)</p></li>
<li><p>Fixed docs typo in the SpiderOpenCloseLogging example (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/7184094">commit 7184094</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-16-2-released-2012-11-09">
<h2>Scrapy 0.16.2 (released 2012-11-09)<a class="headerlink" href="#scrapy-0-16-2-released-2012-11-09" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>scrapy contracts: python2.6 compat (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/a4a9199">commit a4a9199</a>)</p></li>
<li><p>scrapy contracts verbose option (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/ec41673">commit ec41673</a>)</p></li>
<li><p>Proper unittest-like output for scrapy contracts (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/86635e4">commit 86635e4</a>)</p></li>
<li><p>Added open_in_browser to the debugging docs (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/c9b690d">commit c9b690d</a>)</p></li>
<li><p>Removed reference to global Scrapy stats from the settings docs (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/dd55067">commit dd55067</a>)</p></li>
<li><p>Fixed SpiderState bug on Windows platforms (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/58998f4">commit 58998f4</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-16-1-released-2012-10-26">
<h2>Scrapy 0.16.1 (released 2012-10-26)<a class="headerlink" href="#scrapy-0-16-1-released-2012-10-26" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Fixed the LogStats extension, which got broken after a wrong merge before the 0.16 release (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/8c780fd">commit 8c780fd</a>)</p></li>
<li><p>Better backward compatibility for scrapy.conf.settings (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/3403089">commit 3403089</a>)</p></li>
<li><p>Extended documentation on how to access crawler stats from extensions (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/c4da0b5">commit c4da0b5</a>)</p></li>
<li><p>Removed .hgtags (no longer needed now that Scrapy uses git) (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/d52c188">commit d52c188</a>)</p></li>
<li><p>Fixed dashes under rst headers (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/fa4f7f9">commit fa4f7f9</a>)</p></li>
<li><p>Set the release date for 0.16.0 in the news (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/e292246">commit e292246</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-16-0-released-2012-10-18">
<h2>Scrapy 0.16.0 (released 2012-10-18)<a class="headerlink" href="#scrapy-0-16-0-released-2012-10-18" title="Permalink to this headline">¶</a></h2>
<p>Scrapy changes:</p>
<ul class="simple">
<li><p>Added <a class="reference internal" href="topics/contracts.html#topics-contracts"><span class="std std-ref">Spiders Contracts</span></a>, a mechanism for testing spiders in a formal/reproducible way</p></li>
<li><p>Added options <code class="docutils literal notranslate"><span class="pre">-o</span></code> and <code class="docutils literal notranslate"><span class="pre">-t</span></code> to the <a class="reference internal" href="topics/commands.html#std-command-runspider"><code class="xref std std-command docutils literal notranslate"><span class="pre">runspider</span></code></a> command</p></li>
<li><p>Documented the <a class="reference internal" href="topics/autothrottle.html"><span class="doc">AutoThrottle extension</span></a> and added it to the extensions installed by default. You still need to enable it with <a class="reference internal" href="topics/autothrottle.html#std-setting-AUTOTHROTTLE_ENABLED"><code class="xref std std-setting docutils literal notranslate"><span class="pre">AUTOTHROTTLE_ENABLED</span></code></a></p></li>
<li><p>Major Stats Collection refactoring: removed the separation of global/per-spider stats, removed stats-related signals (<code class="docutils literal notranslate"><span class="pre">stats_spider_opened</span></code>, etc). Stats are much simpler now; backward compatibility is kept on the Stats Collector API and signals.</p></li>
<li><p>Added a <a class="reference internal" href="topics/spider-middleware.html#scrapy.spidermiddlewares.SpiderMiddleware.process_start_requests" title="scrapy.spidermiddlewares.SpiderMiddleware.process_start_requests"><code class="xref py py-meth docutils literal notranslate"><span class="pre">process_start_requests()</span></code></a> method to spider middlewares</p></li>
<li><p>Dropped the Signals singleton. Signals should now be accessed through the Crawler.signals attribute. See the signals documentation for more info.</p></li>
<li><p>Dropped the Stats Collector singleton. Stats can now be accessed through the Crawler.stats attribute. See the stats collection documentation for more info.</p></li>
<li><p>Documented the <a class="reference internal" href="topics/api.html#topics-api"><span class="std std-ref">Core API</span></a></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">lxml</span></code> is now the default selectors backend instead of <code class="docutils literal notranslate"><span class="pre">libxml2</span></code></p></li>
<li><p>Ported <code class="docutils literal notranslate"><span class="pre">FormRequest.from_response()</span></code> to use <a class="reference external" href="https://lxml.de/">lxml</a> instead of <a class="reference external" href="http://wwwsearch.sourceforge.net/old/ClientForm/">ClientForm</a></p></li>
<li><p>Removed modules: <code class="docutils literal notranslate"><span class="pre">scrapy.xlib.BeautifulSoup</span></code> and <code class="docutils literal notranslate"><span class="pre">scrapy.xlib.ClientForm</span></code></p></li>
<li><p>SitemapSpider: added support for sitemap urls ending in .xml and .xml.gz, even if they advertise a wrong content type (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/10ed28b">commit 10ed28b</a>)</p></li>
<li><p>StackTraceDump extension: also dump trackref live references (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/fe2ce93">commit fe2ce93</a>)</p></li>
<li><p>Nested items are now fully supported in the JSON and JSONLines exporters</p></li>
<li><p>Added the <a class="reference internal" href="topics/downloader-middleware.html#std-reqmeta-cookiejar"><code class="xref std std-reqmeta docutils literal notranslate"><span class="pre">cookiejar</span></code></a> Request meta key to support multiple cookie sessions per spider</p></li>
<li><p>Decoupled the encoding detection code into <a class="reference external" href="https://github.com/scrapy/w3lib/blob/master/w3lib/encoding.py">w3lib.encoding</a>, and ported the Scrapy code to use that module</p></li>
<li><p>Dropped support for Python 2.5. See https://blog.scrapinghub.com/2012/02/27/scrapy-0-15-dropping-support-for-python-2-5/</p></li>
<li><p>Dropped support for Twisted 2.5</p></li>
<li><p>Added the <a class="reference internal" href="topics/spider-middleware.html#std-setting-REFERER_ENABLED"><code class="xref std std-setting docutils literal notranslate"><span class="pre">REFERER_ENABLED</span></code></a> setting, to control the referer middleware</p></li>
<li><p>Changed the default user agent to: <code class="docutils literal notranslate"><span class="pre">Scrapy/VERSION</span> <span class="pre">(+http://scrapy.org)</span></code></p></li>
<li><p>Removed the (undocumented) <code class="docutils literal notranslate"><span class="pre">HTMLImageLinkExtractor</span></code> class from <code class="docutils literal notranslate"><span class="pre">scrapy.contrib.linkextractors.image</span></code></p></li>
<li><p>Removed per-spider settings (to be replaced by instantiating multiple crawler objects)</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">USER_AGENT</span></code> spider attribute will no longer work; use the <code class="docutils literal notranslate"><span class="pre">user_agent</span></code> attribute instead</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">DOWNLOAD_TIMEOUT</span></code> spider attribute will no longer work; use the <code class="docutils literal notranslate"><span class="pre">download_timeout</span></code> attribute instead</p></li>
<li><p>Removed the <code class="docutils literal notranslate"><span class="pre">ENCODING_ALIASES</span></code> setting, as encoding auto-detection has been moved to the w3lib library</p></li>
<li><p>Promoted DjangoItem to main contrib</p></li>
<li><p>LogFormatter methods now return dicts (instead of strings) to support lazy formatting (issue 164, commit dcef7b0)</p></li>
<li><p>Downloader handlers (the <a class="reference internal" href="topics/settings.html#std-setting-DOWNLOAD_HANDLERS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">DOWNLOAD_HANDLERS</span></code></a> setting) now receive settings as the first argument of their <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method</p></li>
<li><p>Replaced memory usage accounting with the (more portable) <a class="reference external" href="https://docs.python.org/2/library/resource.html">resource</a> module, and removed the <code class="docutils literal notranslate"><span class="pre">scrapy.utils.memory</span></code> module</p></li>
<li><p>Removed signal: <code class="docutils literal notranslate"><span class="pre">scrapy.mail.mail_sent</span></code></p></li>
<li><p>Removed the <code class="docutils literal notranslate"><span class="pre">TRACK_REFS</span></code> setting; trackrefs is now always enabled</p></li>
<li><p>DBM is now the default storage backend for the HTTP cache middleware</p></li>
<li><p>The number of log messages (per level) is now tracked through Scrapy stats (stat name: <code class="docutils literal notranslate"><span class="pre">log_count/LEVEL</span></code>)</p></li>
<li><p>The number of received responses is now tracked through Scrapy stats (stat name: <code class="docutils literal notranslate"><span class="pre">response_received_count</span></code>)</p></li>
<li><p>Removed the <code class="docutils literal notranslate"><span class="pre">scrapy.log.started</span></code> attribute</p></li>
</ul>
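The per-level log counting mentioned above (stat name <code>log_count/LEVEL</code>) can be sketched with a stdlib logging handler that increments a stats counter. This is an illustrative sketch only; the class and attribute names here are hypothetical, not Scrapy's actual implementation:

```python
import logging
from collections import Counter

class StatsHandler(logging.Handler):
    """Count emitted records per level, mimicking log_count/LEVEL stats."""
    def __init__(self):
        super().__init__()
        self.stats = Counter()

    def emit(self, record):
        self.stats[f"log_count/{record.levelname}"] += 1

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)
logger.propagate = False  # keep the demo output out of the root logger
handler = StatsHandler()
logger.addHandler(handler)

logger.warning("one")
logger.warning("two")
logger.error("boom")
print(handler.stats["log_count/WARNING"])  # 2
print(handler.stats["log_count/ERROR"])    # 1
```

In Scrapy itself these counters end up in the crawler stats, alongside counters such as <code>response_received_count</code>.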
</div>
<div class="section" id="scrapy-0-14-4">
<h2>Scrapy 0.14.4<a class="headerlink" href="#scrapy-0-14-4" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Added precise to supported Ubuntu distros (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/b7e46df">commit b7e46df</a>)</p></li>
<li><p>Fixed bug in the json-rpc webservice reported on the scrapy-users group; also removed the no longer supported 'run' command from extras/scrapy-ws.py (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/340fbdb">commit 340fbdb</a>)</p></li>
<li><p>Meta tag attributes for content-type http-equiv can be in any order. #123 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/0cb68af">commit 0cb68af</a>)</p></li>
<li><p>Replaced "import Image" with the more standard "from PIL import Image". closes #88 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/4d17048">commit 4d17048</a>)</p></li>
<li><p>Return trial status as bin/runtests.sh exit value. #118 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/b7b2e7f">commit b7b2e7f</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-14-3">
<h2>Scrapy 0.14.3<a class="headerlink" href="#scrapy-0-14-3" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Forgot to include the pydispatch license. #118 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/fd85f9c">commit fd85f9c</a>)</p></li>
<li><p>Include egg files used by the testsuite in the source distribution. #118 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/c897793">commit c897793</a>)</p></li>
<li><p>Updated docstring in the project template to avoid confusion with the genspider command, which may be considered an advanced feature. refs #107 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/2548dcc">commit 2548dcc</a>)</p></li>
<li><p>Added note to docs/topics/firebug.rst about the Google Directory being shut down (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/668e352">commit 668e352</a>)</p></li>
<li><p>Don't discard a slot when it's empty; just save it in another dict in order to recycle it if needed again (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/8e9f607">commit 8e9f607</a>)</p></li>
<li><p>Do not fail handling unicode xpaths in libxml2-backed selectors (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/b830e95">commit b830e95</a>)</p></li>
<li><p>Fixed minor mistake in the Request objects documentation (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/bf3c9ee">commit bf3c9ee</a>)</p></li>
<li><p>Fixed minor defect in the link extractors documentation (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/ba14f38">commit ba14f38</a>)</p></li>
<li><p>Removed some obsolete remaining code related to sqlite support in Scrapy (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/0665175">commit 0665175</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-14-2">
<h2>Scrapy 0.14.2<a class="headerlink" href="#scrapy-0-14-2" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>Move the buffer pointer to the start of the file before computing the checksum. refs #92 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/6a5bef2">commit 6a5bef2</a>)</p></li>
<li><p>Compute image checksum before persisting images. closes #92 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/9817df1">commit 9817df1</a>)</p></li>
<li><p>Remove leaking references in cached failures (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/673a120">commit 673a120</a>)</p></li>
<li><p>Fixed bug in the MemoryUsage extension: get_engine_status() takes exactly 1 argument (0 given) (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/11133e9">commit 11133e9</a>)</p></li>
<li><p>Fixed struct.error in the HTTP compression middleware. closes #87 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/1423140">commit 1423140</a>)</p></li>
<li><p>Ajax crawling wasn't expanding for unicode urls (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/0de3fb4">commit 0de3fb4</a>)</p></li>
<li><p>Catch start_requests iterator errors. refs #83 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/454a21d">commit 454a21d</a>)</p></li>
<li><p>Speed up the libxml2 XPathSelector (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/2fbd662">commit 2fbd662</a>)</p></li>
<li><p>Updated versioning doc according to recent changes (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/0a070f5">commit 0a070f5</a>)</p></li>
<li><p>scrapyd: fixed documentation link (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/2b4e4c3">commit 2b4e4c3</a>)</p></li>
<li><p>extras/makedeb.py: no longer obtaining version from git (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/caffe0e">commit caffe0e</a>)</p></li>
</ul>
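The "catch start_requests iterator errors" fix above boils down to consuming the iterator step by step, so an exception raised mid-generation can be logged rather than killing the consumer. A minimal sketch under that assumption (function names are hypothetical, not Scrapy's code):

```python
import logging

def iter_safely(iterator, logger=logging.getLogger(__name__)):
    """Yield items from *iterator*, logging an error raised mid-iteration
    instead of letting it propagate to the caller."""
    while True:
        try:
            yield next(iterator)
        except StopIteration:
            return
        except Exception:
            logger.exception("error while consuming start_requests")
            return

def start_requests():
    # Stand-in for a spider's start_requests() that fails partway through.
    yield "https://example.com/1"
    raise ValueError("broken request generation")

urls = list(iter_safely(start_requests()))
print(urls)  # ['https://example.com/1']
```

The first request still gets scheduled; only the faulty tail of the iterator is dropped.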
</div>
<div class="section" id="scrapy-0-14-1">
<h2>Scrapy 0.14.1<a class="headerlink" href="#scrapy-0-14-1" title="Permalink to this headline">¶</a></h2>
<ul class="simple">
<li><p>extras/makedeb.py: no longer obtaining version from git (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/caffe0e">commit caffe0e</a>)</p></li>
<li><p>Bumped version to 0.14.1 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/6cb9e1c">commit 6cb9e1c</a>)</p></li>
<li><p>Fixed reference to the tutorial directory (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/4b86bd6">commit 4b86bd6</a>)</p></li>
<li><p>doc: removed duplicated callback argument from Request.replace() (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/1aeccdd">commit 1aeccdd</a>)</p></li>
<li><p>Fixed formatting of the scrapyd doc (commit 8bf19e6)</p></li>
<li><p>Dump stacks for all running threads and fix the engine status dumped by the StackTraceDump extension (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/14a8e6e">commit 14a8e6e</a>)</p></li>
<li><p>Added comment about why we disable SSL on boto image uploads (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/5223575">commit 5223575</a>)</p></li>
<li><p>SSL handshaking hangs when doing too many parallel connections to S3 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/63d583d">commit 63d583d</a>)</p></li>
<li><p>Changed the tutorial to follow changes on the dmoz site (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/bcb3198">commit bcb3198</a>)</p></li>
<li><p>Avoid the disconnected-deferred AttributeError exception in Twisted&gt;=11.1.0 (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/98f3f87">commit 98f3f87</a>)</p></li>
<li><p>Allow spiders to set the autothrottle max concurrency (<a class="reference external" href="https://github.com/scrapy/scrapy/commit/175a4b5">commit 175a4b5</a>)</p></li>
</ul>
</div>
<div class="section" id="scrapy-0-14">
<h2>Scrapy 0.14<a class="headerlink" href="#scrapy-0-14" title="Permalink to this headline">¶</a></h2>
<div class="section" id="new-features-and-settings">
<h3>New features and settings<a class="headerlink" href="#new-features-and-settings" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Support for <a class="reference external" href="https://developers.google.com/search/docs/ajax-crawling/docs/getting-started?csw=1">AJAX crawlable urls</a></p></li>
<li><p>New persistent scheduler that stores requests on disk, allowing suspending and resuming crawls (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2737">r2737</a>)</p></li>
<li><p>Added a <code class="docutils literal notranslate"><span class="pre">-o</span></code> option to <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">crawl</span></code>, a shortcut for dumping scraped items into a file (or standard output using <code class="docutils literal notranslate"><span class="pre">-</span></code>)</p></li>
<li><p>Added support for passing custom settings to Scrapyd's <code class="docutils literal notranslate"><span class="pre">schedule.json</span></code> API (r2779, r2783)</p></li>
<li><p>New <code class="docutils literal notranslate"><span class="pre">ChunkedTransferMiddleware</span></code> (enabled by default) to support <a class="reference external" href="https://en.wikipedia.org/wiki/Chunked_transfer_encoding">chunked transfer encoding</a> (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2769">r2769</a>)</p></li>
<li><p>Added boto 2.0 support for the S3 downloader handler (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2763">r2763</a>)</p></li>
<li><p>Added <a class="reference external" href="https://docs.python.org/2/library/marshal.html">marshal</a> to the formats supported by feed exports (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2744">r2744</a>)</p></li>
<li><p>In request errbacks, offending requests are now received in the <code class="docutils literal notranslate"><span class="pre">failure.request</span></code> attribute (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2738">r2738</a>)</p></li>
<li><dl class="simple">
<dt>Big downloader refactoring to support per-domain/IP concurrency limits (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2732">r2732</a>)</dt><dd><ul>
<li><dl class="simple">
<dt>The <code class="docutils literal notranslate"><span class="pre">CONCURRENT_REQUESTS_PER_SPIDER</span></code> setting has been deprecated and replaced by:</dt><dd><ul>
<li><p><a class="reference internal" href="topics/settings.html#std-setting-CONCURRENT_REQUESTS"><code class="xref std std-setting docutils literal notranslate"><span class="pre">CONCURRENT_REQUESTS</span></code></a>, <a class="reference internal" href="topics/settings.html#std-setting-CONCURRENT_REQUESTS_PER_DOMAIN"><code class="xref std std-setting docutils literal notranslate"><span class="pre">CONCURRENT_REQUESTS_PER_DOMAIN</span></code></a>, <a class="reference internal" href="topics/settings.html#std-setting-CONCURRENT_REQUESTS_PER_IP"><code class="xref std std-setting docutils literal notranslate"><span class="pre">CONCURRENT_REQUESTS_PER_IP</span></code></a></p></li>
</ul>
</dd>
</dl>
</li>
<li><p>Check the documentation for more details</p></li>
</ul>
</dd>
</dl>
</li>
<li><p>Added a builtin caching DNS resolver (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2728">r2728</a>)</p></li>
<li><p>Moved Amazon AWS-related components/extensions (SQS spider queue, SimpleDB stats collector) to a separate project: <a class="reference external" href="https://github.com/scrapinghub/scaws">scaws</a> (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2706">r2706</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2714">r2714</a>)</p></li>
<li><p>Moved spider queues to scrapyd: <code class="docutils literal notranslate"><span class="pre">scrapy.spiderqueue</span></code> -&gt; <code class="docutils literal notranslate"><span class="pre">scrapyd.spiderqueue</span></code> (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2708">r2708</a>)</p></li>
<li><p>Moved sqlite utils to scrapyd: <code class="docutils literal notranslate"><span class="pre">scrapy.utils.sqlite</span></code> -&gt; <code class="docutils literal notranslate"><span class="pre">scrapyd.sqlite</span></code> (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2781">r2781</a>)</p></li>
<li><p>Real support for returning iterators from the <code class="docutils literal notranslate"><span class="pre">start_requests()</span></code> method. The iterator is now consumed during the crawl, when the spider is idle (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2704">r2704</a>)</p></li>
<li><p>Added the <a class="reference internal" href="topics/downloader-middleware.html#std-setting-REDIRECT_ENABLED"><code class="xref std std-setting docutils literal notranslate"><span class="pre">REDIRECT_ENABLED</span></code></a> setting to quickly enable/disable the redirect middleware (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2697">r2697</a>)</p></li>
<li><p>Added the <a class="reference internal" href="topics/downloader-middleware.html#std-setting-RETRY_ENABLED"><code class="xref std std-setting docutils literal notranslate"><span class="pre">RETRY_ENABLED</span></code></a> setting to quickly enable/disable the retry middleware (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2694">r2694</a>)</p></li>
<li><p>Added the <code class="docutils literal notranslate"><span class="pre">CloseSpider</span></code> exception to manually close spiders (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2691">r2691</a>)</p></li>
<li><p>Improved encoding detection by adding support for the HTML5 meta charset declaration (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2690">r2690</a>)</p></li>
<li><p>Refactored close-spider behavior to wait for all downloads to finish and be processed by spiders before closing the spider (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2688">r2688</a>)</p></li>
<li><p>Added <code class="docutils literal notranslate"><span class="pre">SitemapSpider</span></code> (see documentation in the Spiders page) (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2658">r2658</a>)</p></li>
<li><p>Added the <code class="docutils literal notranslate"><span class="pre">LogStats</span></code> extension for periodically logging basic stats (like crawled pages and scraped items) (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2657">r2657</a>)</p></li>
<li><p>Made handling of gzipped responses more robust (#319, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2643">r2643</a>). Now Scrapy will try to decompress as much as possible from a gzipped response, instead of failing with an <code class="docutils literal notranslate"><span class="pre">IOError</span></code>.</p></li>
<li><p>Simplified the MemoryDebugger extension to use stats for dumping memory debugging info (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2639">r2639</a>)</p></li>
<li><p>Added a new command to edit spiders, <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">edit</span></code> (r2636), and an <code class="docutils literal notranslate"><span class="pre">-e</span></code> flag to the <code class="docutils literal notranslate"><span class="pre">genspider</span></code> command that uses it (r2653)</p></li>
<li><p>Changed the default representation of items to pretty-printed dicts (r2631). This improves default logging by making the log more readable in the default case, for both Scraped and Dropped lines.</p></li>
<li><p>Added the <a class="reference internal" href="topics/signals.html#std-signal-spider_error"><code class="xref std std-signal docutils literal notranslate"><span class="pre">spider_error</span></code></a> signal (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2628">r2628</a>)</p></li>
<li><p>Added the <code class="docutils literal notranslate"><span class="pre">COOKIES_ENABLED</span></code> setting (r2625)</p></li>
<li><p>Stats are now dumped to the Scrapy log (the default value of the <code class="docutils literal notranslate"><span class="pre">STATS_DUMP</span></code> setting has been changed to True). This is to make Scrapy users more aware of Scrapy stats and the data collected there.</p></li>
<li><p>Added support for dynamically adjusting the download delay and maximum concurrent requests (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2599">r2599</a>)</p></li>
<li><p>Added a new DBM HTTP cache storage backend (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2576">r2576</a>)</p></li>
<li><p>Added a <code class="docutils literal notranslate"><span class="pre">listjobs.json</span></code> API to Scrapyd (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2571">r2571</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">CsvItemExporter</span></code>: added a <code class="docutils literal notranslate"><span class="pre">join_multivalued</span></code> parameter (r2578)</p></li>
<li><p>Added namespace support to <code class="docutils literal notranslate"><span class="pre">xmliter_lxml</span></code> (r2552)</p></li>
<li><p>Improved the cookies middleware by making <code class="docutils literal notranslate"><span class="pre">COOKIES_DEBUG</span></code> nicer and documenting it (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2579">r2579</a>)</p></li>
<li><p>Several improvements to Scrapyd and link extractors</p></li>
</ul>
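The ChunkedTransferMiddleware above reassembles chunked HTTP/1.1 bodies, where each chunk is a hex size line, CRLF, the data, CRLF, terminated by a zero-size chunk. A minimal stdlib-only decoder sketch of that wire format (not the middleware's actual code, which works on Twisted streams):

```python
def decode_chunked(body: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked transfer-encoded body.

    Each chunk is "<hex size>\r\n<data>\r\n"; a zero-size chunk ends the body.
    """
    out = bytearray()
    pos = 0
    while True:
        crlf = body.index(b"\r\n", pos)
        size = int(body[pos:crlf].split(b";")[0], 16)  # ignore chunk extensions
        if size == 0:
            break
        start = crlf + 2
        out += body[start:start + size]
        pos = start + size + 2  # skip the CRLF that trails the chunk data
    return bytes(out)

raw = b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"
print(decode_chunked(raw))  # b'Wikipedia'
```

Transparent decoding like this is what lets the rest of the pipeline treat a chunked response like any other response body.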
</div>
<div class="section" id="code-rearranged-and-removed">
<h3>Code rearranged and removed<a class="headerlink" href="#code-rearranged-and-removed" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><dl class="simple">
<dt>Merged the item-passed and item-scraped concepts, as they often proved confusing in the past. This means: (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2630">r2630</a>)</dt><dd><ul>
<li><p>The original item_scraped signal was removed</p></li>
<li><p>The original item_passed signal was renamed to item_scraped</p></li>
<li><p>The old <code class="docutils literal notranslate"><span class="pre">Scraped Item...</span></code> log lines were removed</p></li>
<li><p>The old <code class="docutils literal notranslate"><span class="pre">Passed Item...</span></code> log lines were renamed to <code class="docutils literal notranslate"><span class="pre">Scraped Item...</span></code> lines and downgraded to <code class="docutils literal notranslate"><span class="pre">DEBUG</span></code> level</p></li>
</ul>
</dd>
</dl>
</li>
<li><dl class="simple">
<dt>Reduced the Scrapy codebase by splitting part of it into two new libraries:</dt><dd><ul>
<li><p><code class="docutils literal notranslate"><span class="pre">w3lib</span></code> (several functions from <code class="docutils literal notranslate"><span class="pre">scrapy.utils.{http,markup,multipart,response,url}</span></code>, done in r2584)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapely</span></code> (was <code class="docutils literal notranslate"><span class="pre">scrapy.contrib.ibl</span></code>, done in r2586)</p></li>
</ul>
</dd>
</dl>
</li>
<li><p>Removed unused function: <code class="docutils literal notranslate"><span class="pre">scrapy.utils.request.request_info()</span></code> (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2577">r2577</a>)</p></li>
<li><p>Removed the googledir project from <code class="docutils literal notranslate"><span class="pre">examples/googledir</span></code>. There is now a new example project called <code class="docutils literal notranslate"><span class="pre">dirbot</span></code> available on GitHub: <a class="reference external" href="https://github.com/scrapy/dirbot">https://github.com/scrapy/dirbot</a></p></li>
<li><p>Removed support for default field values in Scrapy items (r2616)</p></li>
<li><p>Removed the experimental CrawlSpider v2 (r2632)</p></li>
<li><p>Removed the scheduler middleware to simplify the architecture. Duplicate filtering is now done in the scheduler itself, using the same dupe-filtering class as before (the <code class="docutils literal notranslate"><span class="pre">DUPEFILTER_CLASS</span></code> setting) (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2640">r2640</a>)</p></li>
<li><p>Removed support for passing urls to the <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">crawl</span></code> command (use <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">parse</span></code> instead) (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2704">r2704</a>)</p></li>
<li><p>Removed the deprecated Execution Queue (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2704">r2704</a>)</p></li>
<li><p>Removed the (undocumented) spider context extension (from scrapy.contrib.spidercontext) (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2780">r2780</a>)</p></li>
<li><p>Removed the <code class="docutils literal notranslate"><span class="pre">CONCURRENT_SPIDERS</span></code> setting (use scrapyd maxproc instead) (r2789)</p></li>
<li><p>Renamed attributes of core components: downloader.sites -&gt; downloader.slots, scraper.sites -&gt; scraper.slots (r2717, r2718)</p></li>
<li><p>Renamed the <code class="docutils literal notranslate"><span class="pre">CLOSESPIDER_ITEMPASSED</span></code> setting to <code class="docutils literal notranslate"><span class="pre">CLOSESPIDER_ITEMCOUNT</span></code> (r2655). Backward compatibility kept.</p></li>
</ul>
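The scheduler-level duplicate filtering mentioned above keys each request by a fingerprint and drops requests whose fingerprint has been seen before. A minimal sketch under simplified assumptions (hashing only method and URL, whereas real request fingerprints also account for body and canonicalized URLs; the class name is hypothetical):

```python
import hashlib

class DupeFilter:
    """Remember request fingerprints and report duplicates."""
    def __init__(self):
        self.seen = set()

    def fingerprint(self, method: str, url: str) -> str:
        # Simplified: real fingerprints also cover body and URL canonicalization.
        return hashlib.sha1(f"{method} {url}".encode()).hexdigest()

    def request_seen(self, method: str, url: str) -> bool:
        fp = self.fingerprint(method, url)
        if fp in self.seen:
            return True
        self.seen.add(fp)
        return False

df = DupeFilter()
print(df.request_seen("GET", "https://example.com/"))  # False (first time)
print(df.request_seen("GET", "https://example.com/"))  # True (duplicate)
```

Moving this check into the scheduler removed the need for a separate scheduler middleware layer.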
</div>
</div>
<div class="section" id="scrapy-0-12">
<h2>Scrapy 0.12<a class="headerlink" href="#scrapy-0-12" title="Permalink to this headline">¶</a></h2>
<p>Numbers like #NNN reference tickets in the old issue tracker (Trac), which is no longer available.</p>
<div class="section" id="new-features-and-improvements">
<h3>New features and improvements<a class="headerlink" href="#new-features-and-improvements" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>The passed item is now sent in the <code class="docutils literal notranslate"><span class="pre">item</span></code> argument of the <a class="reference internal" href="topics/signals.html#std-signal-item_scraped"><code class="xref std std-signal docutils literal notranslate"><span class="pre">item_passed</span></code></a> signal (#273)</p></li>
<li><p>Added a verbose option to the <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">version</span></code> command, useful for bug reports (#298)</p></li>
<li><p>The HTTP cache is now stored by default in the project data dir (#279)</p></li>
<li><p>Added project data storage directory (#276, #277)</p></li>
<li><p>Documented the file structure of Scrapy projects (see the command-line tool doc)</p></li>
<li><p>New lxml backend for XPath selectors (#147)</p></li>
<li><p>Per-spider settings (#245)</p></li>
<li><p>Support exit codes to signal errors in Scrapy commands (#248)</p></li>
<li><p>Added a <code class="docutils literal notranslate"><span class="pre">-c</span></code> argument to the <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">shell</span></code> command</p></li>
<li><p>Made <code class="docutils literal notranslate"><span class="pre">libxml2</span></code> optional (#260)</p></li>
<li><p>New <code class="docutils literal notranslate"><span class="pre">deploy</span></code> command (#261)</p></li>
<li><p>Added the <a class="reference internal" href="topics/extensions.html#std-setting-CLOSESPIDER_PAGECOUNT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">CLOSESPIDER_PAGECOUNT</span></code></a> setting (#253)</p></li>
<li><p>Added the <a class="reference internal" href="topics/extensions.html#std-setting-CLOSESPIDER_ERRORCOUNT"><code class="xref std std-setting docutils literal notranslate"><span class="pre">CLOSESPIDER_ERRORCOUNT</span></code></a> setting (#254)</p></li>
</ul>
</div>
<div class="section" id="scrapyd-changes">
<h3>Scrapyd changes<a class="headerlink" href="#scrapyd-changes" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Scrapyd now uses one process per spider</p></li>
<li><p>It stores one log file per spider run, and rotates them keeping the latest 5 logs per spider (by default)</p></li>
<li><p>A minimal web UI was added, available at http://localhost:6800 by default</p></li>
<li><p>There is now a <code class="docutils literal notranslate"><span class="pre">scrapy</span> <span class="pre">server</span></code> command to start a Scrapyd server of the current project</p></li>
</ul>
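The per-run log rotation above (keep the latest N logs per spider) amounts to simple oldest-first bookkeeping. A sketch of the policy in pure Python, assuming an oldest-first list standing in for the log files Scrapyd actually keeps on disk (names hypothetical):

```python
def rotate_logs(logs, new_log, keep=5):
    """Append *new_log* and drop the oldest entries beyond *keep*.

    *logs* is ordered oldest-first, mirroring per-spider log files on disk.
    """
    logs = logs + [new_log]
    return logs[-keep:]

runs = []
for i in range(7):
    runs = rotate_logs(runs, f"spider_run_{i}.log")
print(runs)  # the five most recent runs survive
```

After seven runs, only the logs for runs 2 through 6 remain.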
</div>
<div class="section" id="changes-to-settings">
<h3>Changes to settings<a class="headerlink" href="#changes-to-settings" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Added the <code class="docutils literal notranslate"><span class="pre">HTTPCACHE_ENABLED</span></code> setting (False by default) to enable the HTTP cache middleware</p></li>
<li><p>Changed <code class="docutils literal notranslate"><span class="pre">HTTPCACHE_EXPIRATION_SECS</span></code> semantics: zero now means "never expire".</p></li>
</ul>
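The new expiration semantics above treat a value of zero as "never expire" rather than "expire immediately". A sketch of that freshness check (function name hypothetical, not the middleware's actual code):

```python
def is_fresh(age_secs: float, expiration_secs: float) -> bool:
    """Return True if a cached response is still fresh.

    Under the 0.12 semantics, expiration_secs == 0 means "never expire".
    """
    if expiration_secs == 0:
        return True
    return age_secs < expiration_secs

print(is_fresh(3600, 0))     # True: zero disables expiration entirely
print(is_fresh(1800, 3600))  # True: younger than the limit
print(is_fresh(7200, 3600))  # False: older than the limit
```

Guarding the zero case first is what flips the meaning from "everything is stale" to "nothing is ever stale".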
</div>
<div class="section" id="deprecated-obsoleted-functionality">
<h3>Deprecated/obsoleted functionality<a class="headerlink" href="#deprecated-obsoleted-functionality" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Deprecated the <code class="docutils literal notranslate"><span class="pre">runserver</span></code> command in favor of the <code class="docutils literal notranslate"><span class="pre">server</span></code> command, which starts a Scrapyd server. See also: Scrapyd changes</p></li>
<li><p>Deprecated the <code class="docutils literal notranslate"><span class="pre">queue</span></code> command in favor of using Scrapyd's <code class="docutils literal notranslate"><span class="pre">schedule.json</span></code> API. See also: Scrapyd changes</p></li>
<li><p>Removed the LxmlItemLoader (an experimental contrib which never graduated to main contrib)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-0-10">
<h2>Scrapy 0.10<a class="headerlink" href="#scrapy-0-10" title="Permalink to this headline">¶</a></h2>
<p>Numbers like #NNN reference tickets in the old issue tracker (Trac), which is no longer available.</p>
<div class="section" id="id69">
<h3>New features and improvements<a class="headerlink" href="#id69" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>New Scrapy service called <code class="docutils literal notranslate"><span class="pre">scrapyd</span></code> for deploying Scrapy crawlers in production (#218) (documentation available)</p></li>
<li><p>Simplified Images pipeline usage, which no longer requires subclassing your own images pipeline (#217)</p></li>
<li><p>Scrapy shell now shows the Scrapy log by default (#206)</p></li>
<li><p>Refactored the execution queue into common base code and pluggable backends called "spider queues" (#220)</p></li>
<li><p>New persistent spider queue (based on SQLite) (#198), available by default, which allows starting Scrapy in server mode and then scheduling spiders to run.</p></li>
<li><p>Added documentation for the Scrapy command-line tool and all its available sub-commands (documentation available)</p></li>
<li><p>Feed exporters with pluggable backends (#197) (documentation available)</p></li>
<li><p>Deferred signals (#193)</p></li>
<li><p>Added two new methods to item pipelines, open_spider() and close_spider(), with deferred support (#195)</p></li>
<li><p>Support for overriding default request headers per spider (#181)</p></li>
<li><p>Replaced the default Spider Manager with one with similar functionality but not depending on Twisted Plugins (#186)</p></li>
<li><p>Split the Debian package into two packages - the library and the service (#187)</p></li>
<li><p>Scrapy log refactoring (#188)</p></li>
<li><p>New extension for keeping persistent spider contexts among different runs (#203)</p></li>
<li><p>Added a <code class="docutils literal notranslate"><span class="pre">dont_redirect</span></code> request.meta key for avoiding redirects (#233)</p></li>
<li><p>Added a <code class="docutils literal notranslate"><span class="pre">dont_retry</span></code> request.meta key for avoiding retries (#234)</p></li>
</ul>
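The <code>dont_redirect</code> and <code>dont_retry</code> meta keys above work as per-request opt-outs that the respective middlewares consult before acting. A simplified sketch of the redirect-side decision (function name hypothetical, not the middleware's actual code):

```python
def should_follow_redirect(meta: dict, status: int) -> bool:
    """Follow a redirect only for 3xx responses not opted out via meta."""
    if meta.get("dont_redirect", False):
        return False  # the request explicitly opted out of redirects
    return 300 <= status < 400

print(should_follow_redirect({}, 301))                       # True
print(should_follow_redirect({"dont_redirect": True}, 301))  # False
print(should_follow_redirect({}, 200))                       # False
```

The same pattern, with a <code>dont_retry</code> key, governs whether a failed request is retried.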
</div>
<div class="section" id="command-line-tool-changes">
<h3>Command-line tool changes<a class="headerlink" href="#command-line-tool-changes" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>New <code class="docutils literal notranslate"><span class="pre">scrapy</span></code> command replacing the old <code class="docutils literal notranslate"><span class="pre">scrapy-ctl.py</span></code> (#199). There is only one global <code class="docutils literal notranslate"><span class="pre">scrapy</span></code> command now, instead of one <code class="docutils literal notranslate"><span class="pre">scrapy-ctl.py</span></code> per project. Added a <code class="docutils literal notranslate"><span class="pre">scrapy.bat</span></code> script for running it more conveniently from Windows</p></li>
<li><p>Added bash completion to the command-line tool (#210)</p></li>
<li><p>Renamed the <code class="docutils literal notranslate"><span class="pre">start</span></code> command to <code class="docutils literal notranslate"><span class="pre">runserver</span></code> (#209)</p></li>
</ul>
</div>
<div class="section" id="api-changes">
<h3>API changes<a class="headerlink" href="#api-changes" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>The <code class="docutils literal notranslate"><span class="pre">url</span></code> and <code class="docutils literal notranslate"><span class="pre">body</span></code> attributes of Request objects are now read-only (#230)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">Request.copy()</span></code> and <code class="docutils literal notranslate"><span class="pre">Request.replace()</span></code> now also copy their <code class="docutils literal notranslate"><span class="pre">callback</span></code> and <code class="docutils literal notranslate"><span class="pre">errback</span></code> attributes (#231)</p></li>
<li><p>Removed <code class="docutils literal notranslate"><span class="pre">UrlFilterMiddleware</span></code> from <code class="docutils literal notranslate"><span class="pre">scrapy.contrib</span></code> (it was already disabled by default)</p></li>
<li><p>The offsite middleware does not filter out requests from spiders that have no <code class="docutils literal notranslate"><span class="pre">allowed_domains</span></code> attribute (#225)</p></li>
<li><p>Removed the Spider Manager <code class="docutils literal notranslate"><span class="pre">load()</span></code> method. Spiders are now loaded in the <code class="docutils literal notranslate"><span class="pre">__init__</span></code> method itself.</p></li>
<li><dl class="simple">
<dt>Changes to Scrapy Manager (now called &quot;Crawler&quot;):</dt><dd><ul>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.core.manager.ScrapyManager</span></code> class renamed to <code class="docutils literal notranslate"><span class="pre">scrapy.crawler.Crawler</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.core.manager.scrapymanager</span></code> singleton moved to <code class="docutils literal notranslate"><span class="pre">scrapy.project.crawler</span></code></p></li>
</ul>
</dd>
</dl>
</li>
<li><p>Moved module: <code class="docutils literal notranslate"><span class="pre">scrapy.contrib.spidermanager</span></code> to <code class="docutils literal notranslate"><span class="pre">scrapy.spidermanager</span></code></p></li>
<li><p>The Spider Manager singleton moved from <code class="docutils literal notranslate"><span class="pre">scrapy.spider.spiders</span></code> to the <code class="docutils literal notranslate"><span class="pre">spiders</span></code> attribute of the <code class="docutils literal notranslate"><span class="pre">scrapy.project.crawler</span></code> singleton.</p></li>
<li><dl class="simple">
<dt>Moved Stats Collector classes: (#204)</dt><dd><ul>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.stats.collector.StatsCollector</span></code> to <code class="docutils literal notranslate"><span class="pre">scrapy.statscol.StatsCollector</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.stats.collector.SimpledbStatsCollector</span></code> to <code class="docutils literal notranslate"><span class="pre">scrapy.contrib.statscol.SimpledbStatsCollector</span></code></p></li>
</ul>
</dd>
</dl>
</li>
<li><p>Default per-command settings are now specified in the <code class="docutils literal notranslate"><span class="pre">default_settings</span></code> attribute of the command object class (#201)</p></li>
<li><dl class="simple">
<dt>Changed the arguments of the item pipeline <code class="docutils literal notranslate"><span class="pre">process_item()</span></code> method from <code class="docutils literal notranslate"><span class="pre">(spider,</span> <span class="pre">item)</span></code> to <code class="docutils literal notranslate"><span class="pre">(item,</span> <span class="pre">spider)</span></code></dt><dd><ul>
<li><p>Backward compatibility is kept (with a deprecation warning)</p></li>
</ul>
</dd>
</dl>
</li>
<li><dl class="simple">
<dt>Moved the <code class="docutils literal notranslate"><span class="pre">scrapy.core.signals</span></code> module to <code class="docutils literal notranslate"><span class="pre">scrapy.signals</span></code></dt><dd><ul>
<li><p>Backward compatibility is kept (with a deprecation warning)</p></li>
</ul>
</dd>
</dl>
</li>
<li><dl class="simple">
<dt>Moved the <code class="docutils literal notranslate"><span class="pre">scrapy.core.exceptions</span></code> module to <code class="docutils literal notranslate"><span class="pre">scrapy.exceptions</span></code></dt><dd><ul>
<li><p>Backward compatibility is kept (with a deprecation warning)</p></li>
</ul>
</dd>
</dl>
</li>
<li><p>Added a <code class="docutils literal notranslate"><span class="pre">handles_request()</span></code> class method to <code class="docutils literal notranslate"><span class="pre">BaseSpider</span></code></p></li>
<li><p>Dropped the <code class="docutils literal notranslate"><span class="pre">scrapy.log.exc()</span></code> function (use <code class="docutils literal notranslate"><span class="pre">scrapy.log.err()</span></code> instead)</p></li>
<li><p>Dropped the <code class="docutils literal notranslate"><span class="pre">component</span></code> argument of the <code class="docutils literal notranslate"><span class="pre">scrapy.log.msg()</span></code> function</p></li>
<li><p>Dropped the <code class="docutils literal notranslate"><span class="pre">scrapy.log.log_level</span></code> attribute</p></li>
<li><p>Added a <code class="docutils literal notranslate"><span class="pre">from_settings()</span></code> class method to the Spider Manager and the Item Pipeline Manager</p></li>
</ul>
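<p>The reversed <code class="docutils literal notranslate"><span class="pre">process_item()</span></code> argument order can be illustrated with a plain pipeline class (a minimal sketch; the pipeline name and fields are hypothetical, and no Scrapy import is needed because pipelines are ordinary Python classes):</p>

```python
# Minimal sketch of the post-change item pipeline signature:
# process_item(item, spider) -- item first, spider second.

class DefaultPricePipeline:
    """Hypothetical pipeline that fills in a missing price field."""

    def process_item(self, item, spider):
        if item.get("price") is None:
            item["price"] = 0.0
        return item

pipeline = DefaultPricePipeline()
item = pipeline.process_item({"name": "book"}, spider=None)
print(item)  # {'name': 'book', 'price': 0.0}
```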
</div>
<div class="section" id="id70">
<h3>Changes to settings<a class="headerlink" href="#id70" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Added the <code class="docutils literal notranslate"><span class="pre">HTTPCACHE_IGNORE_SCHEMES</span></code> setting to ignore certain schemes in <code class="docutils literal notranslate"><span class="pre">HttpCacheMiddleware</span></code> (#225)</p></li>
<li><p>Added the <code class="docutils literal notranslate"><span class="pre">SPIDER_QUEUE_CLASS</span></code> setting, which defines the spider queue to use (#220)</p></li>
<li><p>Added the <code class="docutils literal notranslate"><span class="pre">KEEP_ALIVE</span></code> setting (#220)</p></li>
<li><p>Removed the <code class="docutils literal notranslate"><span class="pre">SERVICE_QUEUE</span></code> setting (#220)</p></li>
<li><p>Removed the <code class="docutils literal notranslate"><span class="pre">COMMANDS_SETTINGS_MODULE</span></code> setting (#201)</p></li>
<li><p>Renamed <code class="docutils literal notranslate"><span class="pre">REQUEST_HANDLERS</span></code> to <code class="docutils literal notranslate"><span class="pre">DOWNLOAD_HANDLERS</span></code> and made download handlers classes (instead of functions)</p></li>
</ul>
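<p>An illustrative <code class="docutils literal notranslate"><span class="pre">settings.py</span></code> fragment combining the settings listed above (the values, and the queue class path in particular, are examples rather than documented defaults):</p>

```python
# Illustrative settings.py fragment; values are examples, not defaults.

HTTPCACHE_IGNORE_SCHEMES = ["file", "ftp"]  # schemes HttpCacheMiddleware skips
SPIDER_QUEUE_CLASS = "myproject.queues.SqliteSpiderQueue"  # hypothetical path
KEEP_ALIVE = True  # keep the process alive when no spiders are running
```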
</div>
</div>
<div class="section" id="scrapy-0-9">
<h2>Scrapy 0.9<a class="headerlink" href="#scrapy-0-9" title="Permalink to this headline">¶</a></h2>
<p>Numbers like #NNN reference tickets in the old issue tracker (Trac), which is no longer available.</p>
<div class="section" id="id71">
<h3>New features and improvements<a class="headerlink" href="#id71" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Added SMTP-AUTH support to scrapy.mail</p></li>
<li><p>New settings added: <code class="docutils literal notranslate"><span class="pre">MAIL_USER</span></code>, <code class="docutils literal notranslate"><span class="pre">MAIL_PASS</span></code> (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2065">r2065</a> | #149)</p></li>
<li><p>Added a new scrapy-ctl <code class="docutils literal notranslate"><span class="pre">view</span></code> command, to view a URL in the browser as seen by Scrapy (r2039)</p></li>
<li><p>Added a web service for controlling the Scrapy process (this also deprecates the web console) (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2053">r2053</a> | #167)</p></li>
<li><p>Support for running Scrapy as a service, for production systems (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1988">r1988</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2054">r2054</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2055">r2055</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2056">r2056</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2057">r2057</a> | #168)</p></li>
<li><p>Added a wrapper induction library (documentation only available in the source code for now) (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2011">r2011</a>)</p></li>
<li><p>Simplified and improved response encoding support (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1961">r1961</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1969">r1969</a>)</p></li>
<li><p>Added the <code class="docutils literal notranslate"><span class="pre">LOG_ENCODING</span></code> setting (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1956">r1956</a>, documentation available)</p></li>
<li><p>Added the <code class="docutils literal notranslate"><span class="pre">RANDOMIZE_DOWNLOAD_DELAY</span></code> setting (enabled by default) (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1923">r1923</a>, documentation available)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">MailSender</span></code> is no longer IO-blocking (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1955">r1955</a> | #146)</p></li>
<li><p>Link extractors and the new CrawlSpider now handle relative base tag URLs (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1960">r1960</a> | #148)</p></li>
<li><p>Several improvements to Item Loaders and processors (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2022">r2022</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2023">r2023</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2024">r2024</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2025">r2025</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2026">r2026</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2027">r2027</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2028">r2028</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2029">r2029</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2030">r2030</a>)</p></li>
<li><p>Added support for adding variables to the telnet console (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2047">r2047</a> | #165)</p></li>
<li><p>Support for requests without callbacks (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2050">r2050</a> | #166)</p></li>
</ul>
</div>
<div class="section" id="id72">
<h3>API changes<a class="headerlink" href="#id72" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Changed <code class="docutils literal notranslate"><span class="pre">Spider.domain_name</span></code> to <code class="docutils literal notranslate"><span class="pre">Spider.name</span></code> (SEP-012, r1975)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">Response.encoding</span></code> is now the detected encoding (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1961">r1961</a>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">HttpErrorMiddleware</span></code> now returns None or raises an exception (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2006">r2006</a> | #157)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">scrapy.command</span></code> modules relocation (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2035">r2035</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2036">r2036</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2037">r2037</a>)</p></li>
<li><p>Added <code class="docutils literal notranslate"><span class="pre">ExecutionQueue</span></code> for feeding spiders to scrape (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2034">r2034</a>)</p></li>
<li><p>Removed the <code class="docutils literal notranslate"><span class="pre">ExecutionEngine</span></code> singleton (r2039)</p></li>
<li><p>Ported <code class="docutils literal notranslate"><span class="pre">S3ImagesStore</span></code> (images pipeline) to use boto and threads (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2033">r2033</a>)</p></li>
<li><p>Moved module: <code class="docutils literal notranslate"><span class="pre">scrapy.management.telnet</span></code> to <code class="docutils literal notranslate"><span class="pre">scrapy.telnet</span></code> (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/2047">r2047</a>)</p></li>
</ul>
</div>
<div class="section" id="changes-to-default-settings">
<h3>Changes to default settings<a class="headerlink" href="#changes-to-default-settings" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Changed the default <code class="docutils literal notranslate"><span class="pre">SCHEDULER_ORDER</span></code> to <code class="docutils literal notranslate"><span class="pre">DFO</span></code> (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1939">r1939</a>)</p></li>
</ul>
</div>
</div>
<div class="section" id="scrapy-0-8">
<h2>Scrapy 0.8<a class="headerlink" href="#scrapy-0-8" title="Permalink to this headline">¶</a></h2>
<p>Numbers like #NNN reference tickets in the old issue tracker (Trac), which is no longer available.</p>
<div class="section" id="id73">
<h3>New features<a class="headerlink" href="#id73" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Added the <code class="docutils literal notranslate"><span class="pre">DEFAULT_RESPONSE_ENCODING</span></code> setting (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1809">r1809</a>)</p></li>
<li><p>Added the <code class="docutils literal notranslate"><span class="pre">dont_click</span></code> argument to the <code class="docutils literal notranslate"><span class="pre">FormRequest.from_response()</span></code> method (r1813, r1816)</p></li>
<li><p>Added the <code class="docutils literal notranslate"><span class="pre">clickdata</span></code> argument to the <code class="docutils literal notranslate"><span class="pre">FormRequest.from_response()</span></code> method (r1802, r1803)</p></li>
<li><p>Added support for HTTP proxies (<code class="docutils literal notranslate"><span class="pre">HttpProxyMiddleware</span></code>) (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1781">r1781</a>, <a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1785">r1785</a>)</p></li>
<li><p>The offsite spider middleware now logs messages when filtering out requests (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1841">r1841</a>)</p></li>
</ul>
</div>
<div class="section" id="id74">
<h3>Backward-incompatible changes<a class="headerlink" href="#id74" title="Permalink to this headline">¶</a></h3>
<ul class="simple">
<li><p>Changed the <code class="docutils literal notranslate"><span class="pre">scrapy.utils.response.get_meta_refresh()</span></code> signature (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1804">r1804</a>)</p></li>
<li><p>Removed the deprecated <code class="docutils literal notranslate"><span class="pre">scrapy.item.ScrapedItem</span></code> class - use <code class="docutils literal notranslate"><span class="pre">scrapy.item.Item</span></code> instead (r1838)</p></li>
<li><p>Removed the deprecated <code class="docutils literal notranslate"><span class="pre">scrapy.xpath</span></code> module - use <code class="docutils literal notranslate"><span class="pre">scrapy.selector</span></code> instead (r1836)</p></li>
<li><p>Removed the deprecated <code class="docutils literal notranslate"><span class="pre">core.signals.domain_open</span></code> signal - use <code class="docutils literal notranslate"><span class="pre">core.signals.domain_opened</span></code> instead (r1822)</p></li>
<li><dl class="simple">
<dt><code class="docutils literal notranslate"><span class="pre">log.msg()</span></code> now receives a <code class="docutils literal notranslate"><span class="pre">spider</span></code> argument (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1822">r1822</a>)</dt><dd><ul>
<li><p>The old <code class="docutils literal notranslate"><span class="pre">domain</span></code> argument has been deprecated and will be removed in 0.9. For spiders, you should always use the <code class="docutils literal notranslate"><span class="pre">spider</span></code> argument and pass a spider reference. If you really want to pass a string, use the <code class="docutils literal notranslate"><span class="pre">component</span></code> argument instead.</p></li>
</ul>
</dd>
</dl>
</li>
<li><p>Changed the core signals <code class="docutils literal notranslate"><span class="pre">domain_opened</span></code>, <code class="docutils literal notranslate"><span class="pre">domain_closed</span></code>, <code class="docutils literal notranslate"><span class="pre">domain_idle</span></code></p></li>
<li><dl class="simple">
<dt>Changed item pipelines to use spiders instead of domains</dt><dd><ul>
<li><p>The <code class="docutils literal notranslate"><span class="pre">domain</span></code> argument of the <code class="docutils literal notranslate"><span class="pre">process_item()</span></code> item pipeline method was changed to <code class="docutils literal notranslate"><span class="pre">spider</span></code>; the new signature is <code class="docutils literal notranslate"><span class="pre">process_item(spider,</span> <span class="pre">item)</span></code> (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1827">r1827</a> | #105)</p></li>
<li><p>To quickly port your code (to work with Scrapy 0.8), just use <code class="docutils literal notranslate"><span class="pre">spider.domain_name</span></code> where you previously used <code class="docutils literal notranslate"><span class="pre">domain</span></code>.</p></li>
</ul>
</dd>
</dl>
</li>
<li><dl class="simple">
<dt>Changed the Stats API to use spiders instead of domains (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1849">r1849</a> | #113)</dt><dd><ul>
<li><p><code class="docutils literal notranslate"><span class="pre">StatsCollector</span></code> was changed to receive spider references (instead of domains) in its methods (<code class="docutils literal notranslate"><span class="pre">set_value</span></code>, <code class="docutils literal notranslate"><span class="pre">inc_value</span></code>, etc.).</p></li>
<li><p>Added the <code class="docutils literal notranslate"><span class="pre">StatsCollector.iter_spider_stats()</span></code> method</p></li>
<li><p>Removed the <code class="docutils literal notranslate"><span class="pre">StatsCollector.list_domains()</span></code> method</p></li>
<li><p>Also, stats signals were renamed and now pass spider references (instead of domains).</p></li>
<li><p>To quickly port your code (to work with Scrapy 0.8), just use <code class="docutils literal notranslate"><span class="pre">spider.domain_name</span></code> where you previously used <code class="docutils literal notranslate"><span class="pre">domain</span></code>. <code class="docutils literal notranslate"><span class="pre">spider_stats</span></code> contains exactly the same data as <code class="docutils literal notranslate"><span class="pre">domain_stats</span></code>.</p></li>
</ul>
</dd>
</dl>
</li>
<li><dl class="simple">
<dt>The <code class="docutils literal notranslate"><span class="pre">CloseDomain</span></code> extension moved to <code class="docutils literal notranslate"><span class="pre">scrapy.contrib.closespider.CloseSpider</span></code> (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1833">r1833</a>)</dt><dd><ul>
<li><dl class="simple">
<dt>Its settings were also renamed:</dt><dd><ul>
<li><p><code class="docutils literal notranslate"><span class="pre">CLOSEDOMAIN_TIMEOUT</span></code> to <code class="docutils literal notranslate"><span class="pre">CLOSESPIDER_TIMEOUT</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">CLOSEDOMAIN_ITEMCOUNT</span></code> to <code class="docutils literal notranslate"><span class="pre">CLOSESPIDER_ITEMCOUNT</span></code></p></li>
</ul>
</dd>
</dl>
</li>
</ul>
</dd>
</dl>
</li>
<li><p>Removed the deprecated <code class="docutils literal notranslate"><span class="pre">SCRAPYSETTINGS_MODULE</span></code> environment variable - use <code class="docutils literal notranslate"><span class="pre">SCRAPY_SETTINGS_MODULE</span></code> instead (r1840)</p></li>
<li><p>Renamed the <code class="docutils literal notranslate"><span class="pre">REQUESTS_PER_DOMAIN</span></code> setting to <code class="docutils literal notranslate"><span class="pre">CONCURRENT_REQUESTS_PER_SPIDER</span></code> (r1830, r1844)</p></li>
<li><p>Renamed the <code class="docutils literal notranslate"><span class="pre">CONCURRENT_DOMAINS</span></code> setting to <code class="docutils literal notranslate"><span class="pre">CONCURRENT_SPIDERS</span></code> (r1830)</p></li>
<li><p>Refactored the HTTP cache middleware</p></li>
<li><p>The HTTP cache middleware has been heavily refactored, retaining the same functionality except for the domain sectorization, which was removed (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1843">r1843</a>)</p></li>
<li><p>Renamed exception: <code class="docutils literal notranslate"><span class="pre">DontCloseDomain</span></code> to <code class="docutils literal notranslate"><span class="pre">DontCloseSpider</span></code> (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1859">r1859</a> | #120)</p></li>
<li><p>Renamed extension: <code class="docutils literal notranslate"><span class="pre">DelayedCloseDomain</span></code> to <code class="docutils literal notranslate"><span class="pre">SpiderCloseDelay</span></code> (<a class="reference external" href="http://hg.scrapy.org/scrapy/changeset/1861">r1861</a> | #121)</p></li>
<li><p>Removed the obsolete <code class="docutils literal notranslate"><span class="pre">scrapy.utils.markup.remove_escape_chars</span></code> function - use <code class="docutils literal notranslate"><span class="pre">scrapy.utils.markup.replace_escape_chars</span></code> instead (r1865)</p></li>
</ul>
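<p>The porting advice above (use <code class="docutils literal notranslate"><span class="pre">spider.domain_name</span></code> where a domain string was used before) amounts to a one-line change at each call site. A plain-Python sketch with a stand-in spider object (the names here are hypothetical, so the example runs without Scrapy installed):</p>

```python
# Sketch of the 0.8 migration: code that used to receive a `domain`
# string now receives a spider object carrying `domain_name`.

class FakeSpider:
    """Stand-in for a BaseSpider instance."""
    domain_name = "example.com"

stats = {}

def inc_value(key, spider):
    # Old code keyed stats on a `domain` string; new code derives the
    # same key from the spider reference.
    per_spider = stats.setdefault(spider.domain_name, {})
    per_spider[key] = per_spider.get(key, 0) + 1

inc_value("item_scraped_count", FakeSpider())
print(stats)  # {'example.com': {'item_scraped_count': 1}}
```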
</div>
</div>
<div class="section" id="scrapy-0-7">
<h2>Scrapy 0.7<a class="headerlink" href="#scrapy-0-7" title="Permalink to this headline">¶</a></h2>
<p>First release of Scrapy.</p>
</div>
</div>


           </div>
           
          </div>
          <footer>
  
    <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
      
<a href="contributing.html" class="btn btn-neutral float-right" title="Contributing to Scrapy" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
      
      
<a href="topics/exporters.html" class="btn btn-neutral float-left" title="Item Exporters" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
      
    </div>
  

  <hr/>

  <div role="contentinfo">
    <p>
        
        &copy; Copyright 2008&ndash;2020, Scrapy developers
      <span class="lastupdated">
        Last updated on Oct 18, 2020.
      </span>

    </p>
  </div>
    
    
    
    Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a
    
    <a href="https://github.com/rtfd/sphinx_rtd_theme">theme</a>
    
    provided by <a href="https://readthedocs.org">Read the Docs</a>. 

</footer>

        </div>
      </div>

    </section>

  </div>
  

  <script type="text/javascript">
      jQuery(function () {
          SphinxRtdTheme.Navigation.enable(true);
      });
  </script>

  
  
    
  
 
<script type="text/javascript">
!function(){var analytics=window.analytics=window.analytics||[];if(!analytics.initialize)if(analytics.invoked)window.console&&console.error&&console.error("Segment snippet included twice.");else{analytics.invoked=!0;analytics.methods=["trackSubmit","trackClick","trackLink","trackForm","pageview","identify","reset","group","track","ready","alias","page","once","off","on"];analytics.factory=function(t){return function(){var e=Array.prototype.slice.call(arguments);e.unshift(t);analytics.push(e);return analytics}};for(var t=0;t<analytics.methods.length;t++){var e=analytics.methods[t];analytics[e]=analytics.factory(e)}analytics.load=function(t){var e=document.createElement("script");e.type="text/javascript";e.async=!0;e.src=("https:"===document.location.protocol?"https://":"http://")+"cdn.segment.com/analytics.js/v1/"+t+"/analytics.min.js";var n=document.getElementsByTagName("script")[0];n.parentNode.insertBefore(e,n)};analytics.SNIPPET_VERSION="3.1.0";
analytics.load("8UDQfnf3cyFSTsM4YANnW5sXmgZVILbA");
analytics.page();
}}();

analytics.ready(function () {
    ga('require', 'linker');
    ga('linker:autoLink', ['scrapinghub.com', 'crawlera.com']);
});
</script>


</body>
</html>