{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "5633acff",
   "metadata": {},
   "source": [
    "## Overall Architecture"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "70c14a77",
   "metadata": {},
   "source": [
    "![](03_Scrapy框架_images/01.png)\n",
    "\n",
    "> The simplest single-page crawl flow is spiders > scheduler > downloader > spiders > item pipeline"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f572b055",
   "metadata": {},
   "source": [
    "**The Scrapy run flow, roughly:**\n",
    "\n",
    "1. The engine takes a URL from the scheduler for the next crawl\n",
    "2. The engine wraps the URL in a Request and hands it to the downloader\n",
    "3. The downloader fetches the resource and wraps it in a Response\n",
    "4. The spider parses the Response\n",
    "5. If an Item is parsed out, it is handed to the item pipeline for further processing\n",
    "6. If a URL is parsed out, it is handed back to the scheduler to await crawling"
   ]
  },
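  {
   "cell_type": "markdown",
   "id": "a1b9f0e1",
   "metadata": {},
   "source": [
    "The six steps above can be sketched as a toy loop in plain Python (no Scrapy involved; the page contents here are made up for illustration):\n",
    "\n",
    "```python\n",
    "from collections import deque\n",
    "\n",
    "# a toy model of the Scrapy loop: scheduler -> downloader -> spider -> pipeline\n",
    "scheduler = deque(['page1'])                # scheduler: a queue of URLs\n",
    "pipeline = []                               # item pipeline: collected items\n",
    "fake_site = {'page1': ['item_a', 'page2'],  # pretend downloads: page1 links on to page2\n",
    "             'page2': ['item_b']}\n",
    "\n",
    "def parse(response):\n",
    "    # the \"spider\": yields items and follow-up URLs\n",
    "    yield from response\n",
    "\n",
    "while scheduler:\n",
    "    url = scheduler.popleft()               # 1. engine takes a URL from the scheduler\n",
    "    response = fake_site[url]               # 2-3. downloader fetches and wraps the response\n",
    "    for result in parse(response):          # 4. spider parses the response\n",
    "        if result.startswith('item'):\n",
    "            pipeline.append(result)         # 5. items go to the pipeline\n",
    "        else:\n",
    "            scheduler.append(result)        # 6. URLs go back to the scheduler\n",
    "\n",
    "print(pipeline)  # ['item_a', 'item_b']\n",
    "```"
   ]
  },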
  {
   "cell_type": "markdown",
   "id": "1970633f",
   "metadata": {},
   "source": [
    "![](03_Scrapy框架_images/02.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e9bed675",
   "metadata": {},
   "source": [
    "## Main Components\n",
    "- Engine (Scrapy Engine)\n",
    "  - Handles the data flow of the entire system and triggers events (the core of the framework)\n",
    "  \n",
    "  \n",
    "- Scheduler\n",
    "  - Accepts requests from the engine, pushes them onto a queue, and returns them when the engine asks again.\n",
    "  - Think of it as a priority queue of URLs (the addresses of the pages to crawl): it decides which URL to fetch next and removes duplicate URLs\n",
    "  \n",
    "  \n",
    "- Downloader\n",
    "  - Downloads page content and returns it to the spiders (the Scrapy downloader is built on Twisted, an efficient asynchronous framework)\n",
    "  \n",
    "  \n",
    "- Spiders\n",
    "  - Spiders do the main work: they extract the information you need, the so-called Items, from specific pages.\n",
    "  - You can also extract links from a page so that Scrapy continues on to crawl the next one\n",
    "  \n",
    "  \n",
    "- Item Pipeline\n",
    "  - Processes the Items the spiders extract; its main jobs are persisting Items, validating them, and discarding unwanted data.\n",
    "  - Once a page is parsed by a spider, its Items are sent through the pipelines and processed in a specific order.\n",
    "  \n",
    "  \n",
    "- Downloader Middlewares\n",
    "  - Sit between the Scrapy engine and the downloader; they mainly process the requests and responses passing between the two\n",
    "  \n",
    "  \n",
    "- Spider Middlewares\n",
    "  - Sit between the Scrapy engine and the spiders; they mainly process the spiders' response input and request output\n",
    "  \n",
    "  \n",
    "- Scheduler Middlewares\n",
    "  - Sit between the Scrapy engine and the scheduler, handling the requests and responses sent between them"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "66afb63a",
   "metadata": {},
   "source": [
    "## Using the Scrapy Framework"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "822895c2",
   "metadata": {},
   "source": [
    "### Creating a Project"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7ae28120",
   "metadata": {},
   "source": [
    "Run in a terminal: `scrapy startproject <your_project_name>`"
   ]
  },
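  {
   "cell_type": "markdown",
   "id": "b2c8d7a2",
   "metadata": {},
   "source": [
    "The command generates a project skeleton like the following (file names per Scrapy's default template):\n",
    "\n",
    "```\n",
    "your_project_name/\n",
    "├── scrapy.cfg            # deployment configuration\n",
    "└── your_project_name/\n",
    "    ├── __init__.py\n",
    "    ├── items.py          # Item definitions\n",
    "    ├── middlewares.py    # middlewares\n",
    "    ├── pipelines.py      # pipelines\n",
    "    ├── settings.py       # project settings\n",
    "    └── spiders/          # the spiders live here\n",
    "        └── __init__.py\n",
    "```"
   ]
  },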
  {
   "cell_type": "markdown",
   "id": "8e49f3fa",
   "metadata": {},
   "source": [
    "**Project files:**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1f109551",
   "metadata": {},
   "source": [
    "![](03_Scrapy框架_images/03.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ac9c6321",
   "metadata": {},
   "source": [
    "![](03_Scrapy框架_images/04.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "42e759f9",
   "metadata": {},
   "source": [
    "### Creating a Spider\n",
    "\n",
    "- Create the spider\n",
    "  \n",
    "  ```\n",
    "  cd <project_dir>  # enter the project directory\n",
    "    \n",
    "  scrapy genspider <spider_name> <allowed_domain>\n",
    "  ```\n",
    "  \n",
    "- Run the spider\n",
    "  \n",
    "  ```\n",
    "  scrapy crawl <spider_name>\n",
    "  ```\n",
    "\n",
    "This creates a new baidu.py file in the spiders directory.\n",
    "\n",
    "**Notes:**\n",
    "1. The spider file must define a class that inherits from scrapy.spiders.Spider\n",
    "2. name (the spider's name) is required; without it Scrapy raises an error, because that is how the source code defines it\n",
    "\n",
    "**What to write:**\n",
    "\n",
    "> This is where you tell Scrapy exactly how to find the data; the following attributes must be defined:\n",
    "\n",
    "- name: the spider's unique name\n",
    "- allowed_domains: the base domains the spider is allowed to crawl;\n",
    "- start_urls: the list of URLs the spider starts crawling from;\n",
    "- parse(): the method that extracts and parses the scraped data;"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6472600c",
   "metadata": {},
   "source": [
    "### Extracting Data: parse\n",
    "\n",
    "To extract data from pages, Scrapy uses selectors, a mechanism based on XPath and CSS expressions. Some examples of XPath expressions:\n",
    "\n",
    "- Select the `<title>` element inside the `<head>` element of the HTML document\n",
    "  \n",
    "  ```\n",
    "  /html/head/title\n",
    "  ```\n",
    "  \n",
    "- Select the text inside the `<title>` element\n",
    "  \n",
    "  ```\n",
    "  /html/head/title/text()\n",
    "  ```\n",
    "  \n",
    "- Select all `<td>` elements\n",
    "  \n",
    "  ```\n",
    "  //td\n",
    "  ```\n",
    "  \n",
    "- Select all div elements that carry the attribute class=\"slice\"\n",
    "  \n",
    "  ```\n",
    "  //div[@class=\"slice\"]\n",
    "  ```\n",
    "  \n",
    "\n",
    "The Selector class has four basic methods, shown below:\n",
    "\n",
    "![](03_Scrapy框架_images/05.png)"
   ]
  },
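  {
   "cell_type": "markdown",
   "id": "c3d7e6b3",
   "metadata": {},
   "source": [
    "The expressions above can be tried without Scrapy, using the standard library's ElementTree, which understands a limited XPath subset (text() is not supported, so we read .text instead; the tiny document here is made up):\n",
    "\n",
    "```python\n",
    "import xml.etree.ElementTree as ET\n",
    "\n",
    "html = \"<html><head><title>Demo</title></head><body><div class='slice'><td>a</td><td>b</td></div></body></html>\"\n",
    "root = ET.fromstring(html)  # root is the <html> element\n",
    "\n",
    "title = root.find('head/title')                # like /html/head/title\n",
    "print(title.text)                              # like /html/head/title/text()\n",
    "\n",
    "tds = root.findall('.//td')                    # like //td\n",
    "print(len(tds))\n",
    "\n",
    "divs = root.findall(\".//div[@class='slice']\")  # like //div[@class=\"slice\"]\n",
    "print(len(divs))\n",
    "```"
   ]
  },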
  {
   "cell_type": "markdown",
   "id": "447618cc",
   "metadata": {},
   "source": [
    "**Scrapy Shell**\n",
    "\n",
    "To try selectors out quickly, use the Scrapy shell from a terminal:\n",
    "\n",
    "```\n",
    "scrapy shell \"http://www.4399.com/flash/\"\n",
    "```\n",
    "\n",
    "\n",
    "**Example**\n",
    "\n",
    "Extracting data from an ordinary HTML site: inspect the page's source to work out the XPath. After inspecting, you can see the data sits in a ul tag, so select the elements inside its li tags.\n",
    "\n",
    "The lines below show how the different kinds of data are extracted:\n",
    "\n",
    "- Select the data inside the li tags:\n",
    "  \n",
    "  ```\n",
    "  response.xpath('//ul/li')\n",
    "  ```\n",
    "  \n",
    "- Select the descriptions:\n",
    "  \n",
    "  ```\n",
    "  response.xpath('//ul/li/text()').extract()\n",
    "  ```\n",
    "  \n",
    "- Select the site titles:\n",
    "  \n",
    "  ```\n",
    "  response.xpath('//ul/li/a/text()').extract()\n",
    "  ```\n",
    "  \n",
    "- Select the site links:\n",
    "  \n",
    "  ```\n",
    "  response.xpath('//ul/li/a/@href').extract()\n",
    "  ```\n",
    " \n",
    "- Exit the shell:\n",
    "  ```\n",
    "    exit()\n",
    "  ```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "49193566",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "The code below shows what a spider looks like.\n",
    "For details see: ./scrapy/scrapy01/spiders\n",
    "\"\"\"\n",
    "import scrapy\n",
    "\n",
    "\n",
    "class GameSpider(scrapy.Spider):\n",
    "    # spider name: unique and required\n",
    "    name = 'game'\n",
    "\n",
    "    # domains the spider may visit\n",
    "    allowed_domains = ['4399.com']\n",
    "\n",
    "    # the first URLs to visit\n",
    "    start_urls = ['http://www.4399.com/flash/']\n",
    "\n",
    "    def parse(self, response):\n",
    "        \"\"\"Override the data-parsing method\"\"\"\n",
    "\n",
    "        # get the page source\n",
    "        html = response.text\n",
    "\n",
    "        # extract data\n",
    "        \"\"\"\n",
    "        response.json()  # parse JSON data\n",
    "        response.xpath(\"\")  # parse with XPath\n",
    "        response.css()  # parse with CSS selectors\n",
    "        \"\"\"\n",
    "\n",
    "        # get the game names -- returns Selector objects\n",
    "        names = response.xpath('//ul[@class=\"n-game cf\"]/li/a/b/text()')\n",
    "\n",
    "        # extract the game names\n",
    "        names = names.extract()\n",
    "\n",
    "        # extract the data block by block\n",
    "        li_list = response.xpath('//ul[@class=\"n-game cf\"]/li')\n",
    "        for li in li_list:\n",
    "            ## get the game name\n",
    "            # extract_first(): extract the first match if there is one, otherwise None\n",
    "            name = li.xpath('./a/b/text()').extract_first()\n",
    "\n",
    "            # get the category\n",
    "            category = li.xpath('./em/a/text()').extract_first()\n",
    "\n",
    "            # get the release date\n",
    "            date = li.xpath('./em/text()').extract_first()\n",
    "\n",
    "            dic = {\n",
    "                'name': name,\n",
    "                'category': category,\n",
    "                'date': date\n",
    "            }\n",
    "\n",
    "            # yield the data so that it is passed on to pipelines.py\n",
    "            # a spider may only return dicts, Request objects, Items, or None; anything else raises an error\n",
    "            yield dic\n",
    "            "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d488640a",
   "metadata": {},
   "source": [
    "### Saving Data: Pipelines"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1cc3d1ae",
   "metadata": {},
   "source": [
    "The data from parse() is passed on to pipelines.py.\n",
    "\n",
    "To enable pipelines, first edit the pipeline settings in settings.py:\n",
    "```python\n",
    "\"\"\" Enable pipelines \"\"\"\n",
    "ITEM_PIPELINES = {\n",
    "    # pipeline path: pipeline priority -- key: value\n",
    "    # the smaller the number, the higher the priority\n",
    "   'scrapy01.pipelines.Scrapy01Pipeline': 300,\n",
    "   'scrapy01.pipelines.NewPipeline': 200\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5c8cb389",
   "metadata": {},
   "source": [
    "Then edit the code in pipelines.py:\n",
    "```python\n",
    "# Note: pipelines are disabled by default; enable them in settings\n",
    "class Scrapy01Pipeline:\n",
    "    \"\"\"The class name is up to you\"\"\"\n",
    "\n",
    "    def process_item(self, item,  # item receives the data from the spider\n",
    "                     spider):  # the spider object\n",
    "        \"\"\"This method name cannot be changed\"\"\"\n",
    "\n",
    "        print(item)\n",
    "        print(spider.name)\n",
    "        # pass the parsed data on to the next pipeline; mind the pipeline priorities\n",
    "        return item\n",
    "```"
   ]
  },
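  {
   "cell_type": "markdown",
   "id": "d4e5f6c4",
   "metadata": {},
   "source": [
    "The priority numbers can be illustrated outside Scrapy: sort the registered pipelines by their number and push each item through them in that order, chained by the return value of process_item (a plain-Python sketch; the class names are made up):\n",
    "\n",
    "```python\n",
    "class TagPipeline:\n",
    "    priority = 200  # smaller number: runs first\n",
    "    def process_item(self, item, spider):\n",
    "        item['tag'] = 'seen'\n",
    "        return item\n",
    "\n",
    "class PrintPipeline:\n",
    "    priority = 300  # larger number: runs later\n",
    "    def process_item(self, item, spider):\n",
    "        item['printed'] = True\n",
    "        return item\n",
    "\n",
    "# sort by priority, smallest first, like ITEM_PIPELINES does\n",
    "pipelines = sorted([PrintPipeline(), TagPipeline()], key=lambda p: p.priority)\n",
    "\n",
    "item = {'name': 'demo'}\n",
    "for p in pipelines:\n",
    "    item = p.process_item(item, spider=None)  # each pipeline hands the item onward\n",
    "\n",
    "print(item)  # {'name': 'demo', 'tag': 'seen', 'printed': True}\n",
    "```"
   ]
  },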
  {
   "cell_type": "markdown",
   "id": "05899101",
   "metadata": {},
   "source": [
    "### A Custom Data Structure: Item\n",
    "In the example above we used a dict to carry the data, but with a large amount of data, dict keys can be created arbitrarily and mistakes creep in easily, so a dict is no longer appropriate.\n",
    "\n",
    "Scrapy provides Item as the place to declare the data format: in items.py you can define, ahead of time, the data format the spider uses when passing data along.\n",
    "\n",
    "items.py:\n",
    "```python\n",
    "import scrapy\n",
    "\n",
    "class GameItem(scrapy.Item):\n",
    "    # define the data structure, fixing the keys\n",
    "    name = scrapy.Field()\n",
    "    category = scrapy.Field()\n",
    "    date = scrapy.Field()\n",
    "    \n",
    "```"
   ]
  },
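  {
   "cell_type": "markdown",
   "id": "e5f6a7d5",
   "metadata": {},
   "source": [
    "The fixed-key behavior can be imitated with a small dict subclass, which is roughly what scrapy.Item does for you: assigning to an undeclared key raises a KeyError instead of silently creating it (a simplified sketch, not Scrapy's real implementation):\n",
    "\n",
    "```python\n",
    "class FakeGameItem(dict):\n",
    "    # the only keys this item accepts, like the Fields on a scrapy.Item\n",
    "    fields = {'name', 'category', 'date'}\n",
    "\n",
    "    def __setitem__(self, key, value):\n",
    "        if key not in self.fields:\n",
    "            raise KeyError(f'{key} is not a declared field')\n",
    "        super().__setitem__(key, value)\n",
    "\n",
    "item = FakeGameItem()\n",
    "item['name'] = 'demo game'  # fine: a declared field\n",
    "\n",
    "try:\n",
    "    item['nmae'] = 'oops'   # a typo in the key is rejected immediately\n",
    "except KeyError as e:\n",
    "    print('caught:', e)\n",
    "```"
   ]
  },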
  {
   "cell_type": "markdown",
   "id": "abb3b807",
   "metadata": {},
   "source": [
    "Use it in the spider like this:\n",
    "\n",
    "```python\n",
    "from scrapy01.items import GameItem\n",
    "\n",
    "# the code below replaces the dict in the spider's parse method\n",
    "# an Item behaves like a dict, but its keys are fixed, whereas a dict's keys can change\n",
    "item = GameItem()\n",
    "item[\"name\"] = name\n",
    "item[\"category\"] = category\n",
    "item[\"date\"] = date\n",
    "yield item\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "95195659",
   "metadata": {},
   "source": [
    "### Scrapy Usage Recap\n",
    "\n",
    "For details see: ./scrapy/scrapy01\n",
    "\n",
    "At this point we have a very basic understanding of Scrapy. A quick summary of the workflow:\n",
    "\n",
    "1. Create the project: `scrapy startproject xxx`\n",
    "\n",
    "\n",
    "2. Enter the project directory: `cd xxx`\n",
    "\n",
    "\n",
    "3. Create the spider: `scrapy genspider <spider_name> <allowed_domain>`\n",
    "\n",
    "\n",
    "4. Edit `items.py` and define the Item\n",
    "\n",
    "\n",
    "5. Edit the spider's parse method to parse the returned response object and yield Items\n",
    "\n",
    "\n",
    "6. Save the data in a pipeline\n",
    "\n",
    "\n",
    "7. Edit `settings.py` to enable the pipeline and set its priority\n",
    "\n",
    "\n",
    "8. Start the spider: `scrapy crawl <spider_name>`"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3e3fd9de",
   "metadata": {},
   "source": [
    "## Pipelines\n",
    "\n",
    "We can now extract and parse data in a spider, and the engine passes that data on to the pipelines. So how do we save it there? We will cover four storage targets."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b54a90eb",
   "metadata": {},
   "source": [
    "### Writing to a CSV File\n",
    "\n",
    "For details see: ./scrapy/scrapy02\n",
    "\n",
    "Writing to a file is straightforward: just open the file in the pipeline.\n",
    "\n",
    "But note that handling the file only inside process_item is not elegant; we can hardly call open() once per item:\n",
    "\n",
    "```python\n",
    "class GamePipeline_CSV:\n",
    "\n",
    "    def process_item(self, item, spider):\n",
    "        with open(\"./data/game.csv\", mode=\"a\", encoding='utf-8') as f:\n",
    "            # write to the file\n",
    "            f.write(f\"{item['name']}, {item['category']}, {item['date']}\\n\")\n",
    "        return item\n",
    "```\n",
    "\n",
    "What we would prefer is to open one file and reuse that single file handle for all the writes.\n",
    "\n",
    "That is possible: add two methods to the pipeline, open_spider() and close_spider(). Their names say it all:\n",
    "\n",
    "- open_spider() runs once when the spider starts\n",
    "- close_spider() runs once when the spider finishes\n",
    "\n",
    "\n",
    "```python\n",
    "class GamePipeline_CSV:\n",
    "\n",
    "    def open_spider(self, spider):\n",
    "        self.f = open(\"./data/game.csv\", mode=\"a\", encoding='utf-8')\n",
    "\n",
    "    def process_item(self, item, spider):\n",
    "        # write to the file\n",
    "        self.f.write(f\"{item['name']}, {item['category']}, {item['date']}\\n\")\n",
    "        return item\n",
    "    \n",
    "    def close_spider(self, spider):\n",
    "        if self.f:\n",
    "            self.f.close()\n",
    "```\n",
    "\n",
    "Open the file when the spider starts and close it when the spider finishes.\n",
    "\n",
    "Remember to update settings:\n",
    "\n",
    "```python\n",
    "ITEM_PIPELINES = {\n",
    "   'scrapy02.pipelines.GamePipeline_CSV': 300,\n",
    "}\n",
    "```"
   ]
  },
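  {
   "cell_type": "markdown",
   "id": "f6a7b8e6",
   "metadata": {},
   "source": [
    "One caveat with the hand-built f-string: a field that itself contains a comma would corrupt the CSV. The standard library's csv module quotes such fields automatically; a sketch of the same write against an in-memory buffer:\n",
    "\n",
    "```python\n",
    "import csv\n",
    "import io\n",
    "\n",
    "# csv.writer quotes fields containing commas, unlike a plain f-string\n",
    "buf = io.StringIO()\n",
    "writer = csv.writer(buf)\n",
    "writer.writerow(['game, with comma', 'puzzle', '2022-01-01'])\n",
    "\n",
    "line = buf.getvalue().strip()\n",
    "print(line)  # \"game, with comma\",puzzle,2022-01-01\n",
    "```\n",
    "\n",
    "In the pipeline you would build the writer once, e.g. self.writer = csv.writer(self.f) inside open_spider(), and call self.writer.writerow(...) in process_item()."
   ]
  },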
  {
   "cell_type": "markdown",
   "id": "9f58d2d9",
   "metadata": {},
   "source": [
    "### Writing to MySQL\n",
    "\n",
    "For details see: ./scrapy/scrapy02\n",
    "\n",
    "First, open the database connection in open_spider and close it in close_spider; do the actual saving in process_item.\n",
    "\n",
    "Start by putting the MySQL settings into settings.py:\n",
    "\n",
    "```python\n",
    "# MySQL configuration\n",
    "MYSQL_CONFIG = {\n",
    "   \"host\": \"localhost\",\n",
    "   \"port\": 3306,\n",
    "   \"user\": \"root\",\n",
    "   \"password\": \"zxydsg123\",\n",
    "   \"database\": \"spider\",\n",
    "}\n",
    "```\n",
    "\n",
    "```python\n",
    "from scrapy02.settings import MYSQL_CONFIG as mysql\n",
    "import pymysql\n",
    "\n",
    "# write to the MySQL database\n",
    "class GamePipeline_MySQL:\n",
    "\n",
    "    def open_spider(self, spider):\n",
    "        \"\"\"Connect to the database when the spider starts\"\"\"\n",
    "        self.conn = pymysql.connect(host=mysql[\"host\"],\n",
    "                                    port=mysql[\"port\"],\n",
    "                                    user=mysql[\"user\"],\n",
    "                                    password=mysql[\"password\"],\n",
    "                                    database=mysql[\"database\"])\n",
    "\n",
    "    def close_spider(self, spider):\n",
    "        \"\"\"Close the database connection when the spider finishes\"\"\"\n",
    "        self.conn.close()\n",
    "\n",
    "    # runs once for every item the spider yields\n",
    "    def process_item(self, item, spider):\n",
    "        try:\n",
    "            # get a cursor\n",
    "            cursor = self.conn.cursor()\n",
    "\n",
    "            # SQL statement: insert a row (parameterized, so values are escaped safely)\n",
    "            sql = \"insert into game(name, category, date) values(%s, %s, %s)\"\n",
    "\n",
    "            # execute the SQL\n",
    "            cursor.execute(sql, (item['name'], item['category'], item['date']))\n",
    "\n",
    "            # commit the transaction\n",
    "            self.conn.commit()\n",
    "\n",
    "        except Exception as e:\n",
    "            # on error, roll back\n",
    "            print(e)\n",
    "            self.conn.rollback()\n",
    "\n",
    "        print(\"saved...\")\n",
    "\n",
    "        return item  # without this return, later pipelines receive no data\n",
    "```\n",
    "\n",
    "And don't forget to register the pipeline:\n",
    "\n",
    "```python\n",
    "ITEM_PIPELINES = {\n",
    "   'scrapy02.pipelines.GamePipeline_MySQL': 301,\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1f13a91c",
   "metadata": {},
   "source": [
    "### Writing to MongoDB\n",
    "\n",
    "Writing to MongoDB works much the same way as MySQL...\n",
    "\n",
    "```python\n",
    "MONGO_CONFIG = {\n",
    "   \"host\": \"localhost\",\n",
    "   \"port\": 27017,\n",
    "   #'has_user': True,\n",
    "   #'user': \"python_admin\",\n",
    "   #\"password\": \"123456\",\n",
    "   \"db\": \"python\"\n",
    "}\n",
    "```\n",
    "\n",
    "```python\n",
    "from caipiao.settings import MONGO_CONFIG as mongo\n",
    "import pymongo\n",
    "\n",
    "class CaipiaoMongoDBPipeline:\n",
    "    def open_spider(self, spider):\n",
    "        client = pymongo.MongoClient(host=mongo['host'],\n",
    "                                     port=mongo['port'])\n",
    "        db = client[mongo['db']]\n",
    "        #if mongo['has_user']:\n",
    "        #    db.authenticate(mongo['user'], mongo['password'])\n",
    "        self.client = client  # you may not need the commented-out auth step\n",
    "        self.collection = db['caipiao']\n",
    "\n",
    "    def close_spider(self, spider):\n",
    "        self.client.close()\n",
    "\n",
    "    def process_item(self, item, spider):\n",
    "        # insert_one replaces the deprecated insert()\n",
    "        self.collection.insert_one({\"qihao\": item['qihao'], 'red': item[\"red_ball\"], 'blue': item['blue_ball']})\n",
    "        return item\n",
    "```\n",
    "\n",
    "```python\n",
    "ITEM_PIPELINES = {\n",
    "    # all three pipelines can coexist\n",
    "   'caipiao.pipelines.CaipiaoFilePipeline': 300,\n",
    "   'caipiao.pipelines.CaipiaoMySQLPipeline': 301,\n",
    "   'caipiao.pipelines.CaipiaoMongoDBPipeline': 302,\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "458cec14",
   "metadata": {},
   "source": [
    "### Saving Images\n",
    "\n",
    "For details see: ./scrapy/scrapy03\n",
    "\n",
    "Next, let's try using Scrapy to download some images and see how it goes.\n",
    "\n",
    "\n",
    "Create the project and flesh out the spider; pay attention to yield scrapy.Request():\n",
    "\n",
    "```python\n",
    "import scrapy\n",
    "\n",
    "class PictureSpider(scrapy.Spider):\n",
    "    name = 'picture'\n",
    "    allowed_domains = ['4399.com']\n",
    "    start_urls = ['http://www.4399.com/flash/']\n",
    "\n",
    "    def parse(self, resp):\n",
    "        # show which page this response came from\n",
    "        print(resp.url)\n",
    "\n",
    "        # collect the URLs of the site's other listing pages\n",
    "        img_srcs = resp.xpath(\"//div[@id='pagg']/a[@target='_self']/@href\").extract()\n",
    "        # print(img_srcs)\n",
    "        for img_src in img_srcs:\n",
    "\n",
    "            # build the full page URL\n",
    "            child_url = resp.urljoin(img_src)\n",
    "\n",
    "            # send the request: engine -> scheduler -> engine -> downloader -> the page data returns and the callback runs\n",
    "            yield scrapy.Request(url=child_url, callback=self.NextParse)\n",
    "\n",
    "\n",
    "    def NextParse(self, resp, **kwargs):\n",
    "\n",
    "        # show which page is being crawled\n",
    "        print(resp.url)\n",
    "\n",
    "        # grab just one image per page\n",
    "        img_src = resp.xpath(\"//ul[@class='n-game cf']/li/a/img/@lz_src\")[0].extract()\n",
    "        img_src = resp.urljoin(img_src)\n",
    "\n",
    "        img_name = resp.xpath(\"//ul[@class='n-game cf']/li/a/b/text()\")[0].extract()\n",
    "\n",
    "        print(img_src, img_name)\n",
    "        # send to the pipeline for saving\n",
    "        yield {\n",
    "            'name': img_name,\n",
    "            \"src\": img_src\n",
    "        }\n",
    "```\n",
    "\n",
    "The parameters of Request():\n",
    "- url: the request URL\n",
    "- method: the HTTP method\n",
    "- callback: the callback function\n",
    "- errback: the error callback\n",
    "- dont_filter: defaults to False, meaning duplicate requests are filtered out; set it to True to send a request even if it duplicates one already seen\n",
    "- headers: the request headers\n",
    "- cookies: cookie data\n",
    "\n",
    "Next comes the download itself. How do we download an image inside a pipeline? Scrapy has this covered: its ImagesPipeline implements automatic image downloading.\n",
    "\n",
    "```python\n",
    "# Define your item pipelines here\n",
    "#\n",
    "# Don't forget to add your pipeline to the ITEM_PIPELINES setting\n",
    "# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html\n",
    "\n",
    "\n",
    "# useful for handling different item types with a single interface\n",
    "from itemadapter import ItemAdapter\n",
    "# ImagesPipeline: the pipeline dedicated to images\n",
    "from scrapy.pipelines.images import ImagesPipeline\n",
    "from scrapy import Request\n",
    "\n",
    "\n",
    "class TuPipeline:\n",
    "    def process_item(self, item, spider):\n",
    "        print(item['src'])\n",
    "        # one possible storage approach:\n",
    "        # import requests\n",
    "        # resp = requests.get(item['src'])\n",
    "        # resp.content\n",
    "        return item\n",
    "\n",
    "\n",
    "# the Scrapy approach\n",
    "class PicturePipeline(ImagesPipeline):\n",
    "    \"\"\"Downloads the images\"\"\"\n",
    "\n",
    "    def get_media_requests(self, item, info):\n",
    "        \"\"\"Send a request to the image's URL\"\"\"\n",
    "\n",
    "        # the image URL\n",
    "        src = item['src']\n",
    "\n",
    "        # the image name\n",
    "        name = item['name']\n",
    "\n",
    "        # send the request; meta is the best way to pass values along with a request\n",
    "        yield Request(src, meta={\"img_src\": src, 'img_name': name})\n",
    "\n",
    "    def file_path(self, request, response=None, info=None, *, item=None):\n",
    "        \"\"\"Decide where to save the file\"\"\"\n",
    "\n",
    "        # set the file name\n",
    "        name = request.meta['img_name']\n",
    "\n",
    "        # return the save path; be sure to include the file extension, e.g. jpg\n",
    "        return f\"/{name}.jpg\"\n",
    "\n",
    "\n",
    "    def item_completed(self, results, item, info):\n",
    "        \"\"\"Final cleanup\"\"\"\n",
    "        \n",
    "        # pass the data on to the next pipeline\n",
    "        return item\n",
    "```\n",
    "\n",
    "Finally, set the following in settings:\n",
    "\n",
    "```python\n",
    "LOG_LEVEL = \"WARNING\"\n",
    "\n",
    "USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54 Safari/537.36'\n",
    "\n",
    "# Obey robots.txt rules\n",
    "ROBOTSTXT_OBEY = False\n",
    "\n",
    "ITEM_PIPELINES = {\n",
    "   'scrapy03.pipelines.PicturePipeline': 300,\n",
    "}\n",
    "\n",
    "MEDIA_ALLOW_REDIRECTS = True\n",
    "\n",
    "# where to save the images -- required\n",
    "IMAGES_STORE = \"./pictures\"\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5d1303d2",
   "metadata": {},
   "source": [
    "## Handling Cookies in Scrapy\n",
    "For details see: ./scrapy/scrapy04"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3a8bab01",
   "metadata": {},
   "source": [
    "Back in the requests lessons we covered two main ways to handle cookies:\n",
    "- Option 1: copy the cookie straight out of the browser and paste it into the headers -- crude but effective.\n",
    "- Option 2: go through the normal login flow and let a session record the cookies along the way\n",
    "\n",
    "As before, first pin down the target: https://user.17k.com/ck/author/shelf?page=1&appKey=2406394919\n",
    "\n",
    "This URL is only accessible after logging in, so cookies are required. First create the project and the spider and fill in the blanks:\n",
    "\n",
    "```python\n",
    "import scrapy\n",
    "from scrapy import Request, FormRequest\n",
    "\n",
    "\n",
    "class LoginSpider(scrapy.Spider):\n",
    "    name = 'login'\n",
    "    allowed_domains = ['17k.com']\n",
    "    start_urls = ['https://user.17k.com/ck/author/shelf?page=1&appKey=2406394919']\n",
    "\n",
    "    def parse(self, response):\n",
    "        print(response.text)\n",
    "```\n",
    "\n",
    "Running this shows that the user is not logged in. Whichever option we choose, we must obtain the cookie before requesting the URL in start_urls.\n",
    "\n",
    "By default Scrapy builds the initial requests for us automatically, but here we need to assemble that first request ourselves.\n",
    "\n",
    "To do that, override start_requests() in the spider; this method is responsible for building the initial requests:\n",
    "\n",
    "```python\n",
    "    def start_requests(self):\n",
    "        \"\"\"Sends the first request\"\"\"\n",
    "        print(\"sending the first request...\")\n",
    "        \n",
    "        yield Request(\n",
    "            url=LoginSpider.start_urls[0],\n",
    "            callback=self.parse  # callback: the response is passed to parse\n",
    "        )\n",
    "```\n",
    "\n",
    "Next, let's handle the cookies"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1dca59fc",
   "metadata": {},
   "source": [
    "### Option 1: Copy the Cookie from the Browser"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5d2cf7b5",
   "metadata": {},
   "source": [
    "```python\n",
    "import scrapy\n",
    "from scrapy import Request, FormRequest\n",
    "\n",
    "class LoginSpider(scrapy.Spider):\n",
    "    name = 'login'\n",
    "    allowed_domains = ['17k.com']\n",
    "    start_urls = ['https://user.17k.com/ck/author/shelf?page=1&appKey=2406394919']\n",
    "\n",
    "    def parse(self, response):\n",
    "        print(response.text)\n",
    "\n",
    "    def start_requests(self):\n",
    "        # cookie copied straight from the browser\n",
    "        cookies = \"GUID=d69afa45-6723-43a7-9d9c-1845e6df0d4c; c_channel=0; c_csc=web; Hm_lvt_9793f42b498361373512340937deb2a0=1698411700,1698416042,1698416654; accessToken=avatarUrl%3Dhttps%253A%252F%252Fcdn.static.17k.com%252Fuser%252Favatar%252F02%252F82%252F71%252F102357182.jpg-88x88%253Fv%253D1698411887000%26id%3D102357182%26nickname%3DY%25E9%25AA%2581%25E5%258B%2587%25E5%2596%2584%25E6%2588%2598Y%26e%3D1713968655%26s%3Da357e246664654ad; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%22102357182%22%2C%22%24device_id%22%3A%2218b717fe9c8101-0cdaf2cd30d718-17525634-1296000-18b717fe9c9fc2%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24latest_referrer_host%22%3A%22%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%7D%2C%22first_id%22%3A%22d69afa45-6723-43a7-9d9c-1845e6df0d4c%22%7D; Hm_lpvt_9793f42b498361373512340937deb2a0=1698416662\"\n",
    "\n",
    "        # store the cookie as a dict\n",
    "        cookie_dic = {}\n",
    "        for c in cookies.split(\"; \"):\n",
    "            k, v = c.split(\"=\", 1)  # split on the first '=' only, in case a value contains '='\n",
    "            cookie_dic[k] = v\n",
    "\n",
    "        # send the request\n",
    "        yield Request(\n",
    "            url=LoginSpider.start_urls[0],\n",
    "            cookies=cookie_dic,\n",
    "            callback=self.parse  # run parse on the response\n",
    "        )\n",
    "```"
   ]
  },
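  {
   "cell_type": "markdown",
   "id": "a7b8c9f7",
   "metadata": {},
   "source": [
    "The manual split above can also be delegated to the standard library's http.cookies, which handles the edge cases of the header format; a sketch with a shortened, made-up cookie string:\n",
    "\n",
    "```python\n",
    "from http.cookies import SimpleCookie\n",
    "\n",
    "# a shortened, made-up cookie string for illustration\n",
    "raw = \"GUID=d69afa45; c_channel=0; c_csc=web\"\n",
    "\n",
    "jar = SimpleCookie()\n",
    "jar.load(raw)  # parse the Cookie header string\n",
    "cookie_dic = {key: morsel.value for key, morsel in jar.items()}\n",
    "\n",
    "print(cookie_dic)  # {'GUID': 'd69afa45', 'c_channel': '0', 'c_csc': 'web'}\n",
    "```"
   ]
  },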
  {
   "cell_type": "markdown",
   "id": "1620e49c",
   "metadata": {},
   "source": [
    "### Option 2: Go Through the Login Flow\n",
    "\n",
    "```python\n",
    "import scrapy\n",
    "from scrapy import Request, FormRequest\n",
    "\n",
    "class LoginSpider(scrapy.Spider):\n",
    "    name = 'login'\n",
    "    allowed_domains = ['17k.com']\n",
    "    start_urls = ['https://user.17k.com/www/']\n",
    "\n",
    "    def start_requests(self):\n",
    "        \"\"\"First request: log in\"\"\"\n",
    "\n",
    "        # username and password\n",
    "        username = \"18370942865\"\n",
    "        password = \"zxydsg123\"\n",
    "\n",
    "        # the login page URL\n",
    "        url = \"https://passport.17k.com/\"\n",
    "\n",
    "        # send a POST request -- this obtains the cookie\n",
    "        yield FormRequest(\n",
    "            url=url,\n",
    "            formdata={\n",
    "                \"user\": username,\n",
    "                \"pass\": password\n",
    "            },  # the keys must match the page's form fields\n",
    "            callback=self.parse  # callback\n",
    "        )\n",
    "\n",
    "        \"\"\"\n",
    "        # alternative way to send the POST request\n",
    "        yield Request(\n",
    "            url=url,\n",
    "            method=\"post\",\n",
    "            body=\"user=18370942865&pass=zxydsg123\",\n",
    "            callback=self.parse\n",
    "        )\n",
    "        \"\"\"\n",
    "\n",
    "\n",
    "    def parse(self, response):\n",
    "        \"\"\"Parse the data\"\"\"\n",
    "\n",
    "        # send the request (Scrapy has kept the login cookies for us)\n",
    "        yield Request(\n",
    "            url=LoginSpider.start_urls[0],\n",
    "            callback=self.parse_detail  # callback\n",
    "        )\n",
    "\n",
    "    def parse_detail(self, resp):\n",
    "        \"\"\"Check whether login succeeded\"\"\"\n",
    "        print(resp)\n",
    "```\n",
    "\n",
    "**Note: there are two ways to send a POST request:**\n",
    "\n",
    "1. scrapy.Request(url=url, method='post', body=data)\n",
    "  \n",
    "2. scrapy.FormRequest(url=url, formdata=data) -> recommended"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b8e01c4c",
   "metadata": {},
   "source": [
    "### Option 3: Set the Cookie in settings\n",
    "\n",
    "settings has a **DEFAULT_REQUEST_HEADERS** option where you can provide default request headers.\n",
    "\n",
    "Note, though, that **COOKIES_ENABLED** must also be set to False in settings, so that Scrapy's own cookie handling does not override the Cookie header.\n",
    "\n",
    "```python\n",
    "COOKIES_ENABLED = False\n",
    "\n",
    "DEFAULT_REQUEST_HEADERS = {\n",
    "  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',\n",
    "  'Accept-Language': 'en',\n",
    "  'Cookie': 'xxxxxx',\n",
    "  \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54 Safari/537.36\"\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "46abd348",
   "metadata": {},
   "source": [
    "## Scrapy Middleware\n",
    "\n",
    "**What middleware does:** it handles the requests and responses flowing between the engine and the spiders, and between the engine and the downloader. Its main use is preprocessing Requests and Responses, preparing the ground for the steps that follow.\n",
    "\n",
    "Scrapy provides two kinds of middleware: downloader middleware and spider middleware.\n",
    "\n",
    "### DownloaderMiddleware\n",
    "\n",
    "For details see: ./scrapy/scrapy05\n",
    "\n",
    "Downloader middleware sits between the engine and the downloader. Once the engine has a Request object it hands it to the downloader for downloading, and in between we can install downloader middleware. The execution flow:\n",
    "\n",
    "- engine with a request -> middleware1 (process_request) -> middleware2 (process_request) ... -> downloader\n",
    "\n",
    "- engine with the response <- middleware1 (process_response) <- middleware2 (process_response) ... <- downloader"
   ]
  },
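  {
   "cell_type": "markdown",
   "id": "b8c9d0a8",
   "metadata": {},
   "source": [
    "The two arrows above can be checked with a toy chain in plain Python: process_request runs in registration order on the way down, and process_response in reverse order on the way back (a sketch, no Scrapy involved):\n",
    "\n",
    "```python\n",
    "calls = []\n",
    "\n",
    "class Ware1:\n",
    "    def process_request(self, request):\n",
    "        calls.append('ware1 req')\n",
    "    def process_response(self, response):\n",
    "        calls.append('ware1 resp')\n",
    "\n",
    "class Ware2:\n",
    "    def process_request(self, request):\n",
    "        calls.append('ware2 req')\n",
    "    def process_response(self, response):\n",
    "        calls.append('ware2 resp')\n",
    "\n",
    "chain = [Ware1(), Ware2()]\n",
    "\n",
    "for m in chain:             # request travels toward the downloader\n",
    "    m.process_request('req')\n",
    "# ... the downloader runs here ...\n",
    "for m in reversed(chain):   # response travels back toward the engine\n",
    "    m.process_response('resp')\n",
    "\n",
    "print(calls)  # ['ware1 req', 'ware2 req', 'ware2 resp', 'ware1 resp']\n",
    "```"
   ]
  },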
  {
   "cell_type": "markdown",
   "id": "c57ef94a",
   "metadata": {},
   "source": [
    "```python\n",
    "from scrapy import signals\n",
    "\n",
    "\n",
    "class MidDownloaderMiddleware1:\n",
    "    \"\"\"Downloader middleware 1, between the downloader and the engine\"\"\"\n",
    "\n",
    "    @classmethod\n",
    "    def from_crawler(cls, crawler):\n",
    "        \"\"\"Register which method runs at which moment\"\"\"\n",
    "\n",
    "        # when the spider starts, run the spider_opened method\n",
    "        s = cls()\n",
    "        crawler.signals.connect(s.spider_opened,  # the method to run\n",
    "                                signal=signals.spider_opened)  # the moment it runs\n",
    "        return s\n",
    "\n",
    "    def process_request(self, request, spider):\n",
    "        \"\"\"\n",
    "        Called automatically before the engine hands the request to the downloader\n",
    "        :param request: the current request\n",
    "        :param spider: the spider that issued the request\n",
    "        :return:\n",
    "            Note, the return value of process_request follows fixed rules:\n",
    "            1. Return None: no interception; execution continues through the later middlewares.\n",
    "            2. Return a Request: the later middlewares are skipped; the request goes back to the engine, which hands it to the scheduler again\n",
    "            3. Return a Response: the later middlewares are skipped; the response goes to the engine, which hands it to the spider for processing\n",
    "        \"\"\"\n",
    "        print(\"ware1, process_request\")\n",
    "        return None\n",
    "\n",
    "    def process_response(self, request, response, spider):\n",
    "        \"\"\"\n",
    "        Called automatically after the downloader returns a response, before it reaches the engine\n",
    "        :param request: the current request\n",
    "        :param response: the response content\n",
    "        :param spider: the spider that sent the request\n",
    "        :return:\n",
    "            1. a Request: hand it straight to the engine, which passes it to the scheduler\n",
    "            2. a Response: no interception; keep passing the response forward\n",
    "        \"\"\"\n",
    "        print(\"ware1\", \"process_response\")\n",
    "        return response\n",
    "\n",
    "    def process_exception(self, request, exception, spider):\n",
    "        # runs automatically if an error occurs while handling the request\n",
    "        pass\n",
    "\n",
    "    def spider_opened(self, spider):\n",
    "        \"\"\"Runs when the spider starts -- only once\"\"\"\n",
    "        print(\"spider_opened\")\n",
    "\n",
    "\n",
    "class MidDownloaderMiddleware2:\n",
    "    \"\"\"Downloader middleware 2, between the downloader and the engine\"\"\"\n",
    "    def process_request(self, request, spider):\n",
    "        \"\"\"Handle the request\"\"\"\n",
    "        print(\"ware2, process_request\")\n",
    "        return None\n",
    "\n",
    "    def process_response(self, request, response, spider):\n",
    "        \"\"\"Handle the response\"\"\"\n",
    "        print(\"ware2\", \"process_response\")\n",
    "        return response\n",
    "\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "88bf58bf",
   "metadata": {},
   "source": [
    "**Enabling the middleware**\n",
    "\n",
    "```python\n",
    "# priorities work the same way as for pipelines\n",
    "DOWNLOADER_MIDDLEWARES = {\n",
    "   # 'mid.middlewares.MidDownloaderMiddleware': 542,\n",
    "   'mid.middlewares.MidDownloaderMiddleware1': 543,\n",
    "   'mid.middlewares.MidDownloaderMiddleware2': 544,\n",
    "}\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6edf1a6b",
   "metadata": {},
   "source": [
    "### Setting a Random User-Agent\n",
    "\n",
    "Setting a single fixed UA is easy; just set it in settings:\n",
    "\n",
    "```python\n",
    "# add to settings\n",
    "USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36'\n",
    "```\n",
    "\n",
    "But a fixed UA is not ideal; we would rather pick a random one each time. To do that, first define a pool of User-Agents in settings (a list can be found at http://useragentstring.com/pages/useragentstring.php?name=Chrome):\n",
    "\n",
    "```python\n",
    "# add to settings\n",
    "USER_AGENT_LIST = [\n",
    "    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36',\n",
    "    'Mozilla/5.0 (X11; Ubuntu; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2919.83 Safari/537.36',\n",
    "    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2866.71 Safari/537.36',\n",
    "    'Mozilla/5.0 (X11; Ubuntu; Linux i686 on x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2820.59 Safari/537.36',\n",
    "    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2762.73 Safari/537.36',\n",
    "    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2656.18 Safari/537.36',\n",
    "    'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML like Gecko) Chrome/44.0.2403.155 Safari/537.36',\n",
    "    'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36',\n",
    "    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.1 Safari/537.36',\n",
    "    'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.0 Safari/537.36',\n",
    "    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.0 Safari/537.36',\n",
    "    'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2226.0 Safari/537.36',\n",
    "    'Mozilla/5.0 (Windows NT 6.4; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2225.0 Safari/537.36',\n",
    "    'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2225.0 Safari/537.36',\n",
    "    'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2224.3 Safari/537.36',\n",
    "    'Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.93 Safari/537.36',\n",
    "    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.124 Safari/537.36',\n",
    "    'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2049.0 Safari/537.36',\n",
    "    'Mozilla/5.0 (Windows NT 4.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2049.0 Safari/537.36',\n",
    "]\n",
    "```\n",
    "\n",
    "**The middleware:**\n",
    "\n",
    "```python\n",
    "from random import choice\n",
    "\n",
    "# USER_AGENT_LIST is imported from the project settings\n",
    "class MyRandomUserAgentMiddleware:\n",
    "\n",
    "    def process_request(self, request, spider):\n",
    "        UA = choice(USER_AGENT_LIST)\n",
    "        request.headers['User-Agent'] = UA\n",
    "        # return nothing: None lets the request continue down the chain\n",
    "\n",
    "    def process_response(self, request, response, spider):\n",
    "        return response\n",
    "\n",
    "    def process_exception(self, request, exception, spider):\n",
    "        pass\n",
    "```"
   ]
  },
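  {
   "cell_type": "markdown",
   "id": "5c8d9e02",
   "metadata": {},
   "source": [
    "Stripped of Scrapy, the middleware above is just a random pick plus a header write. A standalone sketch with a fake request object and stand-in UA strings (both hypothetical):\n",
    "\n",
    "```python\n",
    "from random import choice\n",
    "\n",
    "# stand-ins for the real UA strings kept in settings\n",
    "USER_AGENT_LIST = ['UA-1', 'UA-2', 'UA-3']\n",
    "\n",
    "class FakeRequest:\n",
    "    # minimal stand-in for scrapy.Request: just a headers dict\n",
    "    def __init__(self):\n",
    "        self.headers = {}\n",
    "\n",
    "req = FakeRequest()\n",
    "req.headers['User-Agent'] = choice(USER_AGENT_LIST)\n",
    "print(req.headers['User-Agent'] in USER_AGENT_LIST)  # True\n",
    "```"
   ]
  },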
  {
   "cell_type": "markdown",
   "id": "af07e5bd",
   "metadata": {},
   "source": [
    "### Handling proxies\n",
    "\n",
    "1. Free proxies\n",
    "  \n",
    "  ```python\n",
    "  from random import choice\n",
    "\n",
    "  # free proxy\n",
    "  class ProxyDownloaderMiddleware:\n",
    "      def process_request(self, request, spider):\n",
    "\n",
    "          # pick an IP from the pool (PROXY_IP_LIST defined in settings)\n",
    "          ip = choice(PROXY_IP_LIST)\n",
    "\n",
    "          # set the proxy\n",
    "          request.meta['proxy'] = \"https://\" + ip\n",
    "\n",
    "          # let the request continue\n",
    "          return None\n",
    "  ```\n",
    "  \n",
    "\n",
    "2. Paid proxies\n",
    "  \n",
    "  Free proxies are painful in practice, so here we switch straight to a paid one. We use `快代理` (Kuaidaili) again; pick whatever provider you prefer.\n",
    "  \n",
    "  ```python\n",
    "  from w3lib.http import basic_auth_header\n",
    "\n",
    "  # paid proxy\n",
    "  class MoneyProxyDownloaderMiddleware:\n",
    "\n",
    "      def process_request(self, request, spider):\n",
    "          proxy = \"tps138.kdlapi.com:15818\"\n",
    "\n",
    "          # set the proxy\n",
    "          request.meta['proxy'] = f\"http://{proxy}\"\n",
    "\n",
    "          # username/password authentication\n",
    "          request.headers['Proxy-Authorization'] = basic_auth_header('t12831993520578', 't72a13xu')\n",
    "          request.headers[\"Connection\"] = \"close\"\n",
    "  ```"
   ]
  },
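  {
   "cell_type": "markdown",
   "id": "7b2c4f61",
   "metadata": {},
   "source": [
    "`basic_auth_header` comes from `w3lib.http` (installed alongside Scrapy) and simply base64-encodes `user:password`. A stdlib-only sketch of what it produces:\n",
    "\n",
    "```python\n",
    "import base64\n",
    "\n",
    "def basic_auth_header(username, password):\n",
    "    # equivalent to w3lib.http.basic_auth_header for ASCII credentials\n",
    "    creds = f'{username}:{password}'.encode()\n",
    "    return b'Basic ' + base64.b64encode(creds)\n",
    "\n",
    "print(basic_auth_header('user', 'pass'))  # b'Basic dXNlcjpwYXNz'\n",
    "```"
   ]
  },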
  {
   "cell_type": "markdown",
   "id": "00caf7c2",
   "metadata": {},
   "source": [
    "### Scraping with selenium\n",
    "\n",
    "Full example: ./scrapy/scrapy07\n",
    "\n",
    "First, since selenium will act as the downloader, the requests that should go through it need to be marked somehow. So in this design we define a custom request type, SeleniumRequest:\n",
    "\n",
    "```python\n",
    "from scrapy.http.request import Request\n",
    "\n",
    "class SeleniumRequest(Request):\n",
    "    pass\n",
    "```\n",
    "\n",
    "Next, flesh out the spider:\n",
    "\n",
    "```python\n",
    "import scrapy\n",
    "from boss.request import SeleniumRequest\n",
    "\n",
    "class BeijingSpider(scrapy.Spider):\n",
    "    name = 'beijing'\n",
    "    allowed_domains = ['zhipin.com']\n",
    "    start_urls = ['https://www.zhipin.com/job_detail/?query=python&city=101010100&industry=&position=']\n",
    "\n",
    "    def start_requests(self):\n",
    "        yield SeleniumRequest(\n",
    "            url=BeijingSpider.start_urls[0],\n",
    "            callback=self.parse,\n",
    "        )\n",
    "\n",
    "    def parse(self, resp, **kwargs):\n",
    "        li_list = resp.xpath('//*[@id=\"main\"]/div/div[3]/ul/li')\n",
    "        for li in li_list:\n",
    "            href = li.xpath(\"./div[1]/div[1]/div[1]/div[1]/div[1]/span[1]/a[1]/@href\").extract_first()\n",
    "            name = li.xpath(\"./div[1]/div[1]/div[1]/div[1]/div[1]/span[1]/a[1]/text()\").extract_first()\n",
    "\n",
    "            print(name, href)\n",
    "            print(resp.urljoin(href))\n",
    "            yield SeleniumRequest(\n",
    "                url=resp.urljoin(href),\n",
    "                callback=self.parse_detail,\n",
    "            )\n",
    "        # next page ...\n",
    "\n",
    "    def parse_detail(self, resp, **kwargs):\n",
    "        print(\"recruiter\", resp.xpath('//*[@id=\"main\"]/div[3]/div/div[2]/div[1]/h2').extract())\n",
    "```\n",
    "\n",
    "Remember to set up the middleware:\n",
    "\n",
    "```python\n",
    "import time\n",
    "\n",
    "from scrapy import signals\n",
    "from scrapy.http import HtmlResponse\n",
    "from selenium.webdriver import Chrome\n",
    "\n",
    "from boss.request import SeleniumRequest\n",
    "\n",
    "class BossDownloaderMiddleware:\n",
    "\n",
    "    @classmethod\n",
    "    def from_crawler(cls, crawler):\n",
    "        # This method is used by Scrapy to create your spiders.\n",
    "        s = cls()\n",
    "        # The key part:\n",
    "        # run spider_opened when the spider starts\n",
    "        # run spider_closed when the spider ends\n",
    "        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)\n",
    "        crawler.signals.connect(s.spider_closed, signal=signals.spider_closed)\n",
    "        return s\n",
    "\n",
    "    def process_request(self, request, spider):\n",
    "        if isinstance(request, SeleniumRequest):\n",
    "            self.web.get(request.url)\n",
    "            time.sleep(3)\n",
    "            page_source = self.web.page_source\n",
    "            return HtmlResponse(url=request.url, encoding='utf-8', request=request, body=page_source)\n",
    "\n",
    "    def process_response(self, request, response, spider):\n",
    "        return response\n",
    "\n",
    "    def process_exception(self, request, exception, spider):\n",
    "        pass\n",
    "\n",
    "    def spider_opened(self, spider):\n",
    "        self.web = Chrome()\n",
    "        self.web.implicitly_wait(10)\n",
    "        # logging in and grabbing cookies could also be done here\n",
    "        print(\"browser created\")\n",
    "\n",
    "    def spider_closed(self, spider):\n",
    "        self.web.close()\n",
    "        print(\"browser closed\")\n",
    "```\n",
    "\n",
    "Update settings:\n",
    "\n",
    "```python\n",
    "DOWNLOADER_MIDDLEWARES = {\n",
    "    # Priority 99 puts it ahead of every built-in middleware; once it returns\n",
    "    # a response, the rest of the download chain's process_request is skipped.\n",
    "   'boss.middlewares.BossDownloaderMiddleware': 99,\n",
    "}\n",
    "```"
   ]
  },
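  {
   "cell_type": "markdown",
   "id": "9d0e1a83",
   "metadata": {},
   "source": [
    "Why is an empty `SeleniumRequest` subclass enough? Because the middleware only needs `isinstance` to tell the two request types apart. A minimal illustration with toy classes (not Scrapy's):\n",
    "\n",
    "```python\n",
    "class Request:\n",
    "    def __init__(self, url):\n",
    "        self.url = url\n",
    "\n",
    "class SeleniumRequest(Request):\n",
    "    # no extra behaviour; the subclass is purely a marker\n",
    "    pass\n",
    "\n",
    "def downloader_for(request):\n",
    "    # mirrors the isinstance check in process_request above\n",
    "    return 'selenium' if isinstance(request, SeleniumRequest) else 'default'\n",
    "\n",
    "print(downloader_for(SeleniumRequest('https://example.com')))  # selenium\n",
    "print(downloader_for(Request('https://example.com')))          # default\n",
    "```"
   ]
  },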
  {
   "cell_type": "markdown",
   "id": "bb50bb0e",
   "metadata": {},
   "source": [
    "### Setting cookies with selenium\n",
    "\n",
    "Full example: ./scrapy/scrapy08\n",
    "\n",
    "With the previous case done, handling cookies via selenium is also easy: log in inside spider_opened, then attach the captured cookies in process_request().\n",
    "\n",
    "```python\n",
    "import json\n",
    "import time\n",
    "\n",
    "import requests\n",
    "from scrapy import signals\n",
    "from selenium.webdriver import Chrome\n",
    "from selenium.webdriver.common.by import By\n",
    "\n",
    "class ChaojiyingDownloaderMiddleware:\n",
    "\n",
    "    @classmethod\n",
    "    def from_crawler(cls, crawler):\n",
    "        \"\"\"Wire up the middleware\"\"\"\n",
    "        s = cls()\n",
    "        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)\n",
    "        return s\n",
    "\n",
    "    def process_request(self, request, spider):\n",
    "        \"\"\"Handle the request\"\"\"\n",
    "\n",
    "        # if the request carries no cookies\n",
    "        if not request.cookies:\n",
    "            # attach the ones captured at login\n",
    "            request.cookies = self.cookie\n",
    "        return None\n",
    "\n",
    "    def spider_opened(self, spider):\n",
    "        \"\"\"Runs once when the spider opens\"\"\"\n",
    "\n",
    "        # start the browser\n",
    "        web = Chrome()\n",
    "\n",
    "        # open the login page\n",
    "        web.get(\"https://www.chaojiying.com/user/login/\")\n",
    "\n",
    "        # enter the phone number\n",
    "        web.find_element(by=By.XPATH, value='/html/body/div[3]/div/div[3]/div[1]/form/p[1]/input').send_keys('18614075987')\n",
    "\n",
    "        # enter the password\n",
    "        web.find_element(by=By.XPATH, value='/html/body/div[3]/div/div[3]/div[1]/form/p[2]/input').send_keys('q6035945')\n",
    "\n",
    "        # locate the captcha image\n",
    "        img = web.find_element(by=By.XPATH, value='/html/body/div[3]/div/div[3]/div[1]/form/div/img')\n",
    "\n",
    "        # solve the captcha\n",
    "        verify_code = self.base64_api(\"q6035945\", \"q6035945\", img.screenshot_as_base64, 3)\n",
    "\n",
    "        # type in the captcha\n",
    "        web.find_element(by=By.XPATH, value='/html/body/div[3]/div/div[3]/div[1]/form/p[3]/input').send_keys(verify_code)\n",
    "\n",
    "        # click login\n",
    "        web.find_element(by=By.XPATH, value=\"/html/body/div[3]/div/div[3]/div[1]/form/p[4]/input\").click()\n",
    "\n",
    "        # give the page 2 seconds\n",
    "        time.sleep(2)\n",
    "\n",
    "        # read cookies; web.get_cookies() returns [{'name': k1, 'value': v1}, {'name': k2, 'value': v2}, ...]\n",
    "        self.cookie = {item['name']: item['value'] for item in web.get_cookies()}\n",
    "        web.close()\n",
    "\n",
    "    def base64_api(self, uname, pwd, b64, typeid):\n",
    "        \"\"\"Solve the captcha via an online OCR API\"\"\"\n",
    "        # build the POST payload\n",
    "        data = {\"username\": uname, \"password\": pwd, \"typeid\": typeid, \"image\": b64}\n",
    "\n",
    "        # send the request and return the recognized text\n",
    "        result = json.loads(requests.post(\"http://api.ttshitu.com/predict\", json=data).text)\n",
    "        if result['success']:\n",
    "            return result[\"data\"][\"result\"]\n",
    "        else:\n",
    "            return result[\"message\"]\n",
    "\n",
    "```"
   ]
  },
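  {
   "cell_type": "markdown",
   "id": "b4f6c2a5",
   "metadata": {},
   "source": [
    "The dict comprehension at the end bridges two formats: selenium's `get_cookies()` returns a list of dicts, while `request.cookies` wants a flat name-to-value mapping. With made-up sample data:\n",
    "\n",
    "```python\n",
    "# sample shape of selenium's web.get_cookies(); the values are made up\n",
    "selenium_cookies = [\n",
    "    {'name': 'sessionid', 'value': 'abc123', 'domain': '.chaojiying.com'},\n",
    "    {'name': 'uid', 'value': '42', 'domain': '.chaojiying.com'},\n",
    "]\n",
    "\n",
    "cookie = {item['name']: item['value'] for item in selenium_cookies}\n",
    "print(cookie)  # {'sessionid': 'abc123', 'uid': '42'}\n",
    "```"
   ]
  },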
  {
   "cell_type": "markdown",
   "id": "75ca720f",
   "metadata": {},
   "source": [
    "### SpiderMiddleware (good to know)\n",
    "\n",
    "Spider middleware sits between the engine and the spider. Its commonly used methods:\n",
    "\n",
    "```python\n",
    "from scrapy import signals\n",
    "\n",
    "from cuowu.items import ErrorItem\n",
    "\n",
    "class CuowuSpiderMiddleware:\n",
    "    # Not all methods need to be defined. If a method is not defined,\n",
    "    # scrapy acts as if the spider middleware does not modify the\n",
    "    # passed objects.\n",
    "\n",
    "    @classmethod\n",
    "    def from_crawler(cls, crawler):\n",
    "        \"\"\"Wire up the middleware\"\"\"\n",
    "        \n",
    "        s = cls()\n",
    "        \n",
    "        # run spider_opened when the spider starts\n",
    "        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)\n",
    "        return s\n",
    "\n",
    "    def process_spider_input(self, response, spider):\n",
    "        \"\"\"Called just before a response enters the spider\"\"\"\n",
    "        \n",
    "        # must either return None or raise an exception\n",
    "        print(\"process_spider_input\")\n",
    "        return None\n",
    "\n",
    "    def process_spider_output(self, response, result, spider):\n",
    "        \"\"\"Called with everything the spider yields\"\"\"\n",
    "        \n",
    "        # must yield items and/or requests\n",
    "        print(\"process_spider_output, before\")\n",
    "        for i in result:\n",
    "            yield i\n",
    "        print(\"process_spider_output, after\")\n",
    "\n",
    "    def process_spider_exception(self, response, exception, spider):\n",
    "        \"\"\"Called when the spider or process_spider_input() raises\"\"\"\n",
    "        \n",
    "        print(\"process_spider_exception\")\n",
    "        \n",
    "        # may return None, or yield Requests / items\n",
    "        it = ErrorItem()\n",
    "        it['name'] = \"exception\"\n",
    "        it['url'] = response.url\n",
    "        yield it\n",
    "\n",
    "    def process_start_requests(self, start_requests, spider):\n",
    "        \"\"\"Called once with the spider's start requests\"\"\"\n",
    "        print(\"process_start_requests\")\n",
    "        \n",
    "        # Must return only requests (not items).\n",
    "        for r in start_requests:\n",
    "            yield r\n",
    "\n",
    "    def spider_opened(self, spider):\n",
    "        pass\n",
    "```\n",
    "\n",
    "items\n",
    "\n",
    "```python\n",
    "class ErrorItem(scrapy.Item):\n",
    "    name = scrapy.Field()\n",
    "    url = scrapy.Field()\n",
    "\n",
    "# the normal item the spider yields; implied by the pipeline's isinstance check\n",
    "class CuowuItem(scrapy.Item):\n",
    "    name = scrapy.Field()\n",
    "```\n",
    "\n",
    "spider:\n",
    "\n",
    "```python\n",
    "import scrapy\n",
    "from cuowu.items import CuowuItem\n",
    "\n",
    "class BaocuoSpider(scrapy.Spider):\n",
    "    name = 'baocuo'\n",
    "    allowed_domains = ['baidu.com']\n",
    "    start_urls = ['http://www.baidu.com/']\n",
    "\n",
    "    def parse(self, resp, **kwargs):\n",
    "        \"\"\"Parse the data\"\"\"\n",
    "        name = resp.xpath('//title/text()').extract_first()\n",
    "        \n",
    "        it = CuowuItem()\n",
    "        it['name'] = name\n",
    "        print(name)\n",
    "        yield it\n",
    "```\n",
    "\n",
    "pipeline:\n",
    "\n",
    "```python\n",
    "from cuowu.items import ErrorItem\n",
    "\n",
    "class CuowuPipeline:\n",
    "    def process_item(self, item, spider):\n",
    "        if isinstance(item, ErrorItem):\n",
    "            print(\"error item\", item)\n",
    "        else:\n",
    "            print(\"normal item\", item)\n",
    "        return item\n",
    "```"
   ]
  },
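  {
   "cell_type": "markdown",
   "id": "d8a3e7c9",
   "metadata": {},
   "source": [
    "One detail worth isolating: `process_spider_output` receives an iterable and must itself yield, so everything the spider produces streams through it. A toy sketch outside Scrapy:\n",
    "\n",
    "```python\n",
    "def process_spider_output(result):\n",
    "    # everything the spider yields flows through here; pass it on unchanged\n",
    "    for i in result:\n",
    "        yield i\n",
    "\n",
    "def spider_parse():\n",
    "    # stand-in for a spider callback\n",
    "    yield {'name': 'a'}\n",
    "    yield {'name': 'b'}\n",
    "\n",
    "print(list(process_spider_output(spider_parse())))\n",
    "# [{'name': 'a'}, {'name': 'b'}]\n",
    "```"
   ]
  },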
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad56e8fd",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7def9dc4",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9547f0b0",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2ad1a8e7",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {
    "height": "calc(100% - 180px)",
    "left": "10px",
    "top": "150px",
    "width": "266.594px"
   },
   "toc_section_display": true,
   "toc_window_display": true
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
