{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "10202b34-7c9a-4f52-87a4-f6fbdcd4db74",
   "metadata": {},
   "source": [
     "# Web Crawlers\n",
     "\n",
     "### 1. Introduction to Web Scraping with Python\n",
     "\n",
     "#### 1.1 Definition of a Web Crawler\n",
     "\n",
     "- **Web crawler**: an automated program that visits pages on the World Wide Web according to specific rules, extracts information, and stores the data locally.\n",
     "\n",
     "#### 1.2 Applications of Web Crawlers\n",
     "\n",
     "- Search engines (e.g., Google, Baidu)\n",
     "- Data analysis (market analysis, competitor analysis, etc.)\n",
     "- Monitoring changes in website content (e.g., stock prices, news updates)\n",
     "\n",
     "#### 1.3 Networking Basics and the HTTP Protocol\n",
     "\n",
     "##### 1.3.1 Networking Fundamentals\n",
     "\n",
     "- **World Wide Web (WWW)**: a system of interlinked web pages accessible via the Internet.\n",
     "- **Internet**: the global computer network used to exchange data.\n",
     "\n",
     "##### 1.3.2 IP Addresses and Domain Names\n",
     "\n",
     "- **IP address**: the unique address of a computer on the Internet, e.g., 192.168.1.1.\n",
     "- **Domain name**: an easier-to-remember address such as `www.example.com`, resolved to an IP address by the Domain Name System (DNS).\n",
     "\n",
     "##### 1.3.3 The HTTP Protocol\n",
     "\n",
     "- **HTTP (Hypertext Transfer Protocol)**: defines the format and rules for exchanging information between clients and servers.\n",
     "- **Requests and responses**: the client sends an HTTP request to the server, and the server returns a response.\n",
     "- **Methods**: the main HTTP methods are GET (retrieve a resource), POST (submit data for processing), PUT (replace all current representations of the target resource), and DELETE (delete the specified resource).\n",
     "\n",
     "##### 1.3.4 The HTTPS Protocol\n",
     "\n",
     "- **HTTPS (Hypertext Transfer Protocol Secure)**: layers SSL/TLS on top of HTTP to encrypt communication between client and server and keep data secure.\n",
     "- **SSL/TLS**: protocols that encrypt data between web browsers and servers.\n",
     "- **Encryption process**: encryption and decryption keys ensure that data cannot be stolen or tampered with in transit.\n",
     "\n",
     "##### 1.3.5 URL Structure\n",
     "\n",
     "- URL (Uniform Resource Locator): the address of a resource on the Internet, made up of the protocol, domain name, optional port, resource path, and query parameters.\n",
     "  - Example: `https://www.example.com:443/path/to/file?query=value`\n",
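     "\n",
     "The URL components above can be pulled apart programmatically. A minimal sketch using Python's standard `urllib.parse` module:\n",
     "\n",
     "```python\n",
     "from urllib.parse import urlparse, parse_qs\n",
     "\n",
     "# Split the example URL into its components\n",
     "parts = urlparse('https://www.example.com:443/path/to/file?query=value')\n",
     "print(parts.scheme)           # protocol\n",
     "print(parts.hostname)         # domain name\n",
     "print(parts.port)             # port\n",
     "print(parts.path)             # resource path\n",
     "print(parse_qs(parts.query))  # query parameters as a dict\n",
     "```\n",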
     "\n",
     "#### 1.4 Common Web Servers and Client Tools\n",
     "\n",
     "- **Web servers**: e.g., Apache, Nginx, Microsoft IIS.\n",
     "- Client tools:\n",
     "  - Browsers (Chrome, Firefox)\n",
     "  - Command-line tools (curl, wget)\n",
     "  - **Fiddler**: an HTTP debugging tool that captures HTTP and HTTPS traffic and lets users monitor, modify, and replay inbound and outbound data.\n",
     "  - **Charles**: a proxy server that lets developers inspect all HTTP and SSL/HTTPS traffic, including requests and responses, headers, and metadata.\n",
     "\n",
     "#### 1.5 Initial Test Sites\n",
     "\n",
     "- **HTTPBin** ([http://httpbin.org](http://httpbin.org/)): a simple service that receives HTTP requests and echoes back the information sent. It supports the various request methods (GET, POST, PUT, DELETE, etc.) and can be used to test HTTP headers, response data, and status codes.\n",
     "- **Reqres** ([https://reqres.in](https://reqres.in/)): a lightweight mock REST API that simulates common API responses, including user registration, user retrieval, and data updates, served over HTTPS.\n",
     "\n",
     "### Example HTTP Request and Response\n",
     "\n",
     "#### HTTP request:\n",
     "\n",
     "- **Example HTTP request**:\n",
     "\n",
     "  ```\n",
     "  GET /api/users HTTP/1.1\n",
     "  Host: example.com\n",
     "  User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36\n",
     "  Accept: application/json\n",
     "  Cookie: session_id=abc123; language=en-US\n",
     "  Authorization: Bearer <access_token>\n",
     "  ```\n",
     "\n",
     "- **Request headers**:\n",
     "\n",
     "  - **Cookie**: sends previously saved data back to the server.\n",
     "  - **Session**: maintained by the server via a session cookie; it personalizes user interactions without requiring login credentials on every page visited.\n",
     "\n",
     "#### HTTP response:\n",
     "\n",
     "- Example HTTP response:\n",
     "\n",
     "  ```\n",
     "  HTTP/1.1 200 OK\n",
     "  Content-Type: application/json\n",
     "  Server: Apache/2.4.41 (Unix)\n",
     "  Set-Cookie: session_id=def456; Expires=Sat, 14 May 2023 23:59:59 GMT; Secure; HttpOnly\n",
     "  Cache-Control: max-age=3600\n",
     "  Content-Length: 45\n",
     "  \n",
     "  {\n",
     "    \"id\": 123,\n",
     "    \"name\": \"John Doe\",\n",
     "    \"email\": \"johndoe@example.com\"\n",
     "  }\n",
     "  ```\n",
     "\n",
     "This section combines a detailed explanation of HTTP requests, including headers, sessions, and cookie usage, with example HTTP requests and responses to help students better understand network communication in a web-scraping context."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "71846f08-b67b-4cdb-b3ec-d77a633682d5",
   "metadata": {},
   "source": [
     "Understanding GET and POST requests:\n",
     "\n",
     "GET is used to retrieve information, while POST is used to send data to the server.\n",
     "\n",
     "Status codes and their meanings:\n",
     "\n",
     "Every HTTP response carries a status code. For example, 200 means the request succeeded, while 404 means the requested resource was not found.\n",
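     "\n",
     "Python's standard library lists these codes as `http.HTTPStatus`, which is handy for looking up what a code means. A minimal sketch:\n",
     "\n",
     "```python\n",
     "from http import HTTPStatus\n",
     "\n",
     "# Look up the standard reason phrase for a few common status codes\n",
     "for code in (200, 301, 404, 500):\n",
     "    print(code, HTTPStatus(code).phrase)\n",
     "```\n",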
     "\n",
     "HTTP request:\n",
     "\n",
     "1. Request line: specifies the HTTP method (e.g., GET, POST), the target URL, and the HTTP version.\n",
     "2. Headers:\n",
     "   - Host: specifies the server's domain name or IP address.\n",
     "   - User-Agent: identifies the client making the request (e.g., a browser or other software).\n",
     "   - Accept: specifies the content types expected in the response (e.g., text/html, application/json).\n",
     "   - Content-Type: indicates the format of the data in the request body (e.g., application/json, multipart/form-data).\n",
     "   - Cookie: contains any cookies previously sent by the server.\n",
     "   - Authorization: provides credentials for accessing protected resources (e.g., API keys, access tokens).\n",
     "   - Other headers: additional information such as Accept-Language and Referer.\n",
     "3. Body (optional): contains the payload or data sent with the request, such as form data or a JSON payload.\n",
     "\n",
     "HTTP response:\n",
     "\n",
     "1. Status line: specifies the HTTP version, a status code indicating the outcome of the request (e.g., 200 OK, 404 Not Found), and a brief reason phrase.\n",
     "2. Headers:\n",
     "   - Content-Type: indicates the format of the response content (e.g., text/html, application/json).\n",
     "   - Set-Cookie: sets a cookie on the client for use in future requests.\n",
     "   - Server: identifies the software or server that handled the request.\n",
     "   - Cache-Control: controls caching behavior for the client or intermediate proxies.\n",
     "   - Content-Length: specifies the length of the response body in bytes.\n",
     "   - Other headers: Vary, Expires, Last-Modified, etc., providing additional information about the response.\n",
     "3. Body (optional): contains the actual content of the response, such as HTML, JSON, or binary data.\n",
     "\n",
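     "The structure above can be made concrete by parsing a raw response by hand. A minimal sketch (the response text is a made-up example; real responses separate lines with CRLF):\n",
     "\n",
     "```python\n",
     "# A raw HTTP response, shown line by line\n",
     "raw_lines = [\n",
     "    'HTTP/1.1 200 OK',                  # status line\n",
     "    'Content-Type: application/json',   # headers...\n",
     "    'Content-Length: 2',\n",
     "    '',                                 # blank line separates headers from body\n",
     "    '{}',                               # body\n",
     "]\n",
     "\n",
     "blank = raw_lines.index('')\n",
     "version, code, reason = raw_lines[0].split(' ', 2)\n",
     "headers = dict(line.split(': ', 1) for line in raw_lines[1:blank])\n",
     "body = ''.join(raw_lines[blank + 1:])\n",
     "\n",
     "print(version, code, reason)\n",
     "print(headers)\n",
     "print(body)\n",
     "```\n",
     "\n",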
     "HTTP response codes and their meanings:\n",
     "\n",
     "- 1xx (informational):\n",
     "  - 100 Continue: the server has received the request headers, and the client should proceed to send the request body.\n",
     "  - 101 Switching Protocols: the server is switching protocols as requested.\n",
     "- 2xx (success):\n",
     "  - 200 OK: the standard response to a successful HTTP request, typically used for GET and POST requests.\n",
     "  - 201 Created: the request succeeded and a new resource was created as a result.\n",
     "  - 204 No Content: the server processed the request successfully but returns no content.\n",
     "- 3xx (redirection):\n",
     "  - 301 Moved Permanently: the requested resource has been permanently moved to a new URI.\n",
     "  - 302 Found: the requested resource temporarily resides under a different URI.\n",
     "  - 304 Not Modified: the resource has not been modified since the last request.\n",
     "- 4xx (client error):\n",
     "  - 400 Bad Request: the server cannot process the request because of a syntax error.\n",
     "  - 401 Unauthorized: the request requires user authentication.\n",
     "  - 403 Forbidden: the server understood the request but refuses to authorize it.\n",
     "  - 404 Not Found: the server cannot find the requested resource.\n",
     "- 5xx (server error):\n",
     "  - 500 Internal Server Error: a generic message shown when the server hits an unexpected condition and no more specific error is available.\n",
     "  - 502 Bad Gateway: the server, acting as a gateway or proxy, received an invalid response from an inbound server while trying to fulfill the request.\n",
     "  - 503 Service Unavailable: the server cannot handle the request right now because it is temporarily overloaded or down for maintenance.\n",
     "  - 504 Gateway Timeout: the server, acting as a gateway or proxy, did not receive a timely response from an upstream server while trying to complete the request."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "efcf99fe-a308-4638-aab2-895d19009ce3",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
       "Pinging www.a.shifen.com [110.242.68.3] with 32 bytes of data:\n",
       "Reply from 110.242.68.3: bytes=32 time=21ms TTL=53\n",
       "Reply from 110.242.68.3: bytes=32 time=21ms TTL=53\n",
       "Reply from 110.242.68.3: bytes=32 time=21ms TTL=53\n",
       "Reply from 110.242.68.3: bytes=32 time=22ms TTL=53\n",
       "\n",
       "Ping statistics for 110.242.68.3:\n",
       "    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),\n",
       "Approximate round trip times in milli-seconds:\n",
       "    Minimum = 21ms, Maximum = 22ms, Average = 21ms\n"
     ]
    }
   ],
   "source": [
    "!ping www.baidu.com"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "244a72c6-0255-4d74-93f2-bf0472ff4874",
   "metadata": {},
   "source": [
     "### 2. Legal and Ethical Issues in Python Web Scraping\n",
     "\n",
     "#### 2.1 Understanding the robots.txt File\n",
     "\n",
     "- **Purpose of robots.txt**: this file lets a website communicate with web crawlers, telling them which locations they may and may not crawl. It lives in the site's root directory.\n",
     "- **Content structure**: the file contains `User-agent` lines naming particular crawlers, followed by `Disallow` or `Allow` directives that restrict or permit access to specific paths on the site.\n",
     "- **Complying with robots.txt**: ethical crawling means honoring the restrictions in `robots.txt`. Ignoring them can lead to legal action and to being banned from the site.\n",
     "\n",
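     "The standard library's `urllib.robotparser` can evaluate these rules for you. A minimal sketch using an inline, made-up robots.txt (a real crawler would point `set_url()` at the site's root and call `read()`):\n",
     "\n",
     "```python\n",
     "from urllib.robotparser import RobotFileParser\n",
     "\n",
     "# Rules as they might appear in a site's robots.txt (made-up example)\n",
     "rules = [\n",
     "    'User-agent: *',\n",
     "    'Disallow: /private/',\n",
     "    'Allow: /public/',\n",
     "]\n",
     "\n",
     "rp = RobotFileParser()\n",
     "rp.parse(rules)\n",
     "\n",
     "print(rp.can_fetch('*', 'https://example.com/public/page.html'))   # True\n",
     "print(rp.can_fetch('*', 'https://example.com/private/page.html'))  # False\n",
     "```\n",
     "\n",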
     "#### 2.2 Legal Guidelines for Using Web Crawlers\n",
     "\n",
     "- **Obey the law**: the legality of web scraping depends on the jurisdiction and its specific laws. In general, accessing publicly available data is usually legal, but scraping protected areas without permission, or in violation of a site's terms of service, can have legal consequences.\n",
     "- **Terms of Service (ToS)**: many websites include clauses in their terms of service that restrict or prohibit scraping. Review and comply with these terms before scraping data.\n",
     "- **Avoid overloading systems**: legal problems can also arise from overloading a website's servers with too many requests in a short time, which may be treated as a denial-of-service attack.\n",
     "\n",
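     "A simple safeguard against overloading a server is to pause between requests. A minimal sketch (the `fetch` callable and the delay value are illustrative placeholders, not prescribed):\n",
     "\n",
     "```python\n",
     "import time\n",
     "\n",
     "def polite_fetch(urls, fetch, delay=1.0):\n",
     "    # Fetch each URL in turn, sleeping between requests\n",
     "    # so the target server is never flooded.\n",
     "    results = []\n",
     "    for i, url in enumerate(urls):\n",
     "        if i > 0:\n",
     "            time.sleep(delay)  # pause before every request after the first\n",
     "        results.append(fetch(url))\n",
     "    return results\n",
     "\n",
     "# Usage with a stand-in fetch function:\n",
     "pages = polite_fetch(['/a', '/b'], fetch=lambda u: 'page at ' + u, delay=0.1)\n",
     "print(pages)\n",
     "```\n",
     "\n",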
     "#### 2.3 Ethical Considerations in Data Use\n",
     "\n",
     "- **Privacy**: when scraping, it is essential to consider personal privacy. Personal data should be handled with care and, where possible, anonymized.\n",
     "- **Data use**: data collected by scraping should be used responsibly. Misusing it can constitute an ethical violation and harm individuals or organizations.\n",
     "- **Transparency and consent**: where possible, obtaining consent for data use and being open about how the data will be used helps reduce ethical risk.\n",
     "\n",
     "#### 2.4 Case Studies and Examples\n",
     "\n",
     "- **Positive example**: academic researchers scrape data to analyze market trends; they honor robots.txt, use the data ethically, and publish their findings for the public good.\n",
     "- **Negative example**: a business scrapes contact information from a competitor's website without consent and uses it for spam marketing, violating privacy and legal guidelines.\n",
     "\n",
     "By following these legal and ethical guidelines, a Python web crawler's activity can be not only effective but also respectful of the rights and rules of the online environment. This part of the course can include discussion of real-world cases to illustrate the impact of ethical and legal considerations in web scraping."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5301c654-1b41-451c-8ba3-0c2b5df3183c",
   "metadata": {},
   "source": [
     "### 3. Basic Components of a Python Web Crawler\n",
     "\n",
     "#### 3.1 The Requests Library\n",
     "\n",
     "##### Introduction\n",
     "\n",
     "Requests is a popular Python HTTP library designed to make HTTP requests simple and intuitive. Built around the idea of \"HTTP for Humans\", it supports features such as session objects, persistent connections, and persistent cookies.\n",
     "\n",
     "##### Installation\n",
     "\n",
     "To install the Requests library, enter the following command at the command line or in a terminal:\n",
     "\n",
     "```\n",
     "pip install requests\n",
     "```\n",
     "\n",
     "##### Basic Usage\n",
     "\n",
     "Sending HTTP requests with Requests is straightforward. Here are some basic examples:\n",
     "\n",
     "###### Sending a GET Request\n",
     "\n",
     "A GET request retrieves data from a specified URL. The following example shows how to send a GET request and print the response content:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "b4c5ede4-04de-4292-9229-cc69bafa8325",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple\n",
      "Requirement already satisfied: requests in d:\\anaconda3\\lib\\site-packages (2.31.0)\n",
      "Requirement already satisfied: charset-normalizer<4,>=2 in d:\\anaconda3\\lib\\site-packages (from requests) (2.0.4)\n",
      "Requirement already satisfied: idna<4,>=2.5 in d:\\anaconda3\\lib\\site-packages (from requests) (3.4)\n",
      "Requirement already satisfied: urllib3<3,>=1.21.1 in d:\\anaconda3\\lib\\site-packages (from requests) (1.26.16)\n",
      "Requirement already satisfied: certifi>=2017.4.17 in d:\\anaconda3\\lib\\site-packages (from requests) (2024.2.2)\n"
     ]
    }
   ],
   "source": [
    "!pip install requests"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "99c80898-9a49-45a0-aea6-fd2141e813e3",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<Response [200]>\n",
      "<class 'requests.models.Response'>\n",
      "{\n",
      "  \"args\": {}, \n",
      "  \"headers\": {\n",
      "    \"Accept\": \"*/*\", \n",
      "    \"Accept-Encoding\": \"gzip, deflate, br\", \n",
      "    \"Host\": \"httpbin.org\", \n",
      "    \"User-Agent\": \"python-requests/2.31.0\", \n",
      "    \"X-Amzn-Trace-Id\": \"Root=1-662fb508-026e87de34ba6ac429b5d4b3\"\n",
      "  }, \n",
      "  \"origin\": \"221.15.159.208\", \n",
      "  \"url\": \"https://httpbin.org/get\"\n",
      "}\n",
      "\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
     "# Send a GET request\n",
     "response = requests.get('https://httpbin.org/get')\n",
     "print(response)\n",
     "print(type(response))\n",
     "print(response.text)  # Print the response content\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "dd248e0e-4e82-4eda-85e7-bdcb276e2c2f",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<Response [200]>\n",
      "{'args': {}, 'data': '', 'files': {}, 'form': {'username': 'password'}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate, br', 'Content-Length': '17', 'Content-Type': 'application/x-www-form-urlencoded', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.31.0', 'X-Amzn-Trace-Id': 'Root=1-662fb52d-22bc40af1df743aa42c5d040'}, 'json': None, 'origin': '221.15.159.208', 'url': 'https://httpbin.org/post'}\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
     "# Send a POST request with form data\n",
     "response = requests.post('https://httpbin.org/post', data={'username': 'password'})\n",
     "print(response)\n",
     "print(response.json())  # Parse the JSON body of the response\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cdf96faa-3eb9-49a8-873d-56fc374fc66d",
   "metadata": {},
   "source": [
     "Handling query parameters\n",
     "When you send a GET request, you often need to include query parameters in the URL. Requests lets you supply them as a dictionary:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "82fa206a-05e4-4283-a8fb-68d1bcf1757b",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "https://httpbin.org/get?key1=value1&key2=value2\n",
      "{\n",
      "  \"args\": {\n",
      "    \"key1\": \"value1\", \n",
      "    \"key2\": \"value2\"\n",
      "  }, \n",
      "  \"headers\": {\n",
      "    \"Accept\": \"*/*\", \n",
      "    \"Accept-Encoding\": \"gzip, deflate, br\", \n",
      "    \"Host\": \"httpbin.org\", \n",
      "    \"User-Agent\": \"python-requests/2.31.0\", \n",
      "    \"X-Amzn-Trace-Id\": \"Root=1-662fb57d-61abf62366648c2800d59396\"\n",
      "  }, \n",
      "  \"origin\": \"221.15.159.208\", \n",
      "  \"url\": \"https://httpbin.org/get?key1=value1&key2=value2\"\n",
      "}\n",
      "\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
    "# Define query parameters\n",
    "params = {\n",
    "    'key1': 'value1',\n",
    "    'key2': 'value2'\n",
    "}\n",
    "\n",
    "# Send the request\n",
    "response = requests.get('https://httpbin.org/get', params=params)\n",
    "print(response.url)  # View the actual URL requested\n",
    "print(response.text)  # Print the text content of the response\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2907417b-29cc-4617-8c8a-fc3cfd61098f",
   "metadata": {},
   "source": [
    "\n",
     "Sending a request without custom headers\n",
     "Some sites inspect the request headers and reject clients that look automated. This example sends a plain GET request to bilibili with the default python-requests User-Agent, and the site answers with a 412 error page instead of the real content:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "29a5d1e5-76a0-4371-a46a-becb19d57d8b",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<Response [412]>\n",
      "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n",
      "<html lang=\"zh-cn\">\n",
      "\n",
      "<head>\n",
      "    <meta http-equiv=\"Access-Control-Allow-Origin\" content=\"*\" />\n",
      "    <meta http-equiv=\"Page-Enter\" content=\"blendTrans(Duration=0.5)\">\n",
      "    <meta http-equiv=\"Page-Exit\" content=\"blendTrans(Duration=0.5)\">\n",
      "    <meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\">\n",
      "    <meta name=\"viewport\" content=\"width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0\">\n",
      "    <meta name=\"spm_prefix\" content=\"333.937\">\n",
      "    <title>åºéå¦! - bilibili.com</title>\n",
      "    <link rel=\"shortcut icon\" href=\"//static.hdslb.com/images/favicon.ico\">\n",
      "    <script type=\"text/javascript\" src=\"//s1.hdslb.com/bfs/static/jinkela/long/js/jquery/jquery1.7.2.min.js\"></script>\n",
      "    \n",
      "</head>\n",
      "\n",
      "<body>\n",
      "    <div class=\"error-container\">\n",
       "        <div class=\"txt-item err-code\">错误号:412</div>\n",
       "        <div class=\"txt-item err-text\">由于触发哔哩哔哩安全风控策略，该次访问请求被拒绝。</div>\n",
      "        <div class=\"txt-item\">The request was rejected because of the bilibili security control policy.</div>\n",
      "        <div class=\"txt-item datetime_now\"></div>\n",
      "        <div class=\"txt-item user_url\"></div>\n",
      "        <div class=\"txt-item user_ip\"></div>\n",
      "        <div class=\"txt-item user_id\"></div>\n",
      "        <div class=\"check-input\">\n",
      "            <div class=\"title\"></div>\n",
      "            <div class=\"box-pic\"></div>\n",
      "            <div class=\"box\"></div>\n",
      "            <div class=\"state\"></div>\n",
      "        </div>\n",
      "    </div>\n",
      "    <script type=\"text/javascript\" charset=\"utf-8\" src=\"//security.bilibili.com/static/js/sha256.min.js\"></script>\n",
      "    <script type=\"text/javascript\" charset=\"utf-8\" src=\"//security.bilibili.com/static/js/js.cookie.min.js\"></script>\n",
      "    <script type=\"text/javascript\" charset=\"utf-8\" src=\"//security.bilibili.com/static/js/412.js\"></script>\n",
      "</body>\n",
      "</html>\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
    "# URL of bilibili\n",
    "url = 'https://www.bilibili.com/'\n",
    "\n",
    "# Send a GET request to bilibili\n",
    "response = requests.get(url)\n",
    "print(response)\n",
    "print(response.text)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "31933563-bad8-481e-a3bd-e008f05dbba8",
   "metadata": {},
   "source": [
    "\n",
     "Handling request headers\n",
     "If you need to customize HTTP headers, for example to set a User-Agent or an authentication token, pass a dictionary to the headers parameter:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "68b69d40-4cd5-4a08-b74d-90f6c5b477e5",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<Response [200]>\n",
      "\n",
       "Page saved to 'bilili_page.html'.\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
    "# URL of bilibili\n",
    "url = 'https://www.bilibili.com/'\n",
    "\n",
    "# Define a dictionary containing the headers\n",
    "headers = {\n",
    "    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36'\n",
    "}\n",
    "\n",
     "# Send a GET request to bilibili with the specified headers\n",
    "response = requests.get(url, headers=headers)\n",
    "print(response)\n",
    "with open('bilili_page.html', 'w', encoding='utf-8') as file:\n",
    "    file.write(response.text)\n",
    "\n",
     "print(\"\\nPage saved to 'bilili_page.html'.\")\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "7982c1ac-ae3d-48d8-8636-a3fe9878b78f",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "GET Response: {'page': 2, 'per_page': 6, 'total': 12, 'total_pages': 2, 'data': [{'id': 7, 'email': 'michael.lawson@reqres.in', 'first_name': 'Michael', 'last_name': 'Lawson', 'avatar': 'https://reqres.in/img/faces/7-image.jpg'}, {'id': 8, 'email': 'lindsay.ferguson@reqres.in', 'first_name': 'Lindsay', 'last_name': 'Ferguson', 'avatar': 'https://reqres.in/img/faces/8-image.jpg'}, {'id': 9, 'email': 'tobias.funke@reqres.in', 'first_name': 'Tobias', 'last_name': 'Funke', 'avatar': 'https://reqres.in/img/faces/9-image.jpg'}, {'id': 10, 'email': 'byron.fields@reqres.in', 'first_name': 'Byron', 'last_name': 'Fields', 'avatar': 'https://reqres.in/img/faces/10-image.jpg'}, {'id': 11, 'email': 'george.edwards@reqres.in', 'first_name': 'George', 'last_name': 'Edwards', 'avatar': 'https://reqres.in/img/faces/11-image.jpg'}, {'id': 12, 'email': 'rachel.howell@reqres.in', 'first_name': 'Rachel', 'last_name': 'Howell', 'avatar': 'https://reqres.in/img/faces/12-image.jpg'}], 'support': {'url': 'https://reqres.in/#support-heading', 'text': 'To keep ReqRes free, contributions towards server costs are appreciated!'}}\n",
      "\n",
      "\n",
      "POST Response: {'name': 'morpheus', 'job': 'leader', 'id': '546', 'createdAt': '2024-04-29T15:01:05.091Z'}\n",
      "\n",
      "\n",
      "PUT Response: {'name': 'morpheus', 'job': 'zion resident', 'updatedAt': '2024-04-29T15:01:06.484Z'}\n",
      "\n",
      "\n",
      "DELETE Response: 204\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
     "# Test a GET request\n",
    "def test_get():\n",
    "    response = requests.get('https://reqres.in/api/users?page=2')\n",
    "    print(\"GET Response:\", response.json())\n",
    "\n",
     "# Test a POST request\n",
    "def test_post():\n",
    "    data = {\n",
    "        \"name\": \"morpheus\",\n",
    "        \"job\": \"leader\"\n",
    "    }\n",
    "    response = requests.post('https://reqres.in/api/users', data=data)\n",
    "    print(\"POST Response:\", response.json())\n",
    "\n",
     "# Test a PUT request\n",
    "def test_put():\n",
    "    data = {\n",
    "        \"name\": \"morpheus\",\n",
    "        \"job\": \"zion resident\"\n",
    "    }\n",
    "    response = requests.put('https://reqres.in/api/users/2', data=data)\n",
    "    print(\"PUT Response:\", response.json())\n",
    "\n",
     "# Test a DELETE request\n",
     "def test_delete():\n",
     "    response = requests.delete('https://reqres.in/api/users/2')\n",
     "    print(\"DELETE Response:\", response.status_code)  # a successful delete usually returns 204\n",
     "\n",
     "# Run the tests\n",
    "test_get()\n",
    "print('\\n')\n",
    "\n",
    "test_post()\n",
    "print('\\n')\n",
    "test_put()\n",
    "print('\\n')\n",
    "test_delete()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "54e52a15-e52d-4ea6-aa57-32ace256a71a",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Successful registration:\n",
       "({'id': 4, 'token': 'QpwL5tke4Pnpja7X4'}, 200)\n",
       "\n",
       "Failed registration (missing password):\n",
       "({'error': 'Missing password'}, 400)\n",
       "\n",
       "Successful login:\n",
       "({'token': 'QpwL5tke4Pnpja7X4'}, 200)\n",
       "\n",
       "Failed login (wrong password):\n",
       "({'token': 'QpwL5tke4Pnpja7X4'}, 200)\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
     "# Base URL of the API\n",
    "base_url = \"https://reqres.in/api\"\n",
    "\n",
    "def register_user(email, password):\n",
     "    \"\"\"Register a user.\"\"\"\n",
    "    url = f\"{base_url}/register\"\n",
    "    data = {\n",
    "        \"email\": email,\n",
    "        \"password\": password\n",
    "    }\n",
    "    response = requests.post(url, json=data)\n",
    "    return response.json(), response.status_code\n",
    "\n",
    "def login_user(email, password):\n",
     "    \"\"\"Log a user in.\"\"\"\n",
    "    url = f\"{base_url}/login\"\n",
    "    data = {\n",
    "        \"email\": email,\n",
    "        \"password\": password\n",
    "    }\n",
    "    response = requests.post(url, json=data)\n",
    "    return response.json(), response.status_code\n",
    "\n",
     "# Test successful registration\n",
     "print(\"Successful registration:\")\n",
     "print(register_user(\"eve.holt@reqres.in\", \"pistol\"))\n",
     "\n",
     "# Test failed registration (missing password)\n",
     "print(\"\\nFailed registration (missing password):\")\n",
     "print(register_user(\"eve.holt@reqres.in\", \"\"))\n",
     "\n",
     "# Test successful login\n",
     "print(\"\\nSuccessful login:\")\n",
     "print(login_user(\"eve.holt@reqres.in\", \"cityslicka\"))\n",
     "\n",
     "# Test failed login (wrong password)\n",
     "# Note: as the output shows, reqres.in only validates the email, so a wrong password still succeeds\n",
     "print(\"\\nFailed login (wrong password):\")\n",
     "print(login_user(\"eve.holt@reqres.in\", \"wrongpassword\"))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f4926ade-8bbb-4080-82ef-e72132851fb2",
   "metadata": {},
   "source": [
     "In web development, `session` and `cookie` are techniques for storing information, used mainly to maintain user state between the browser and the server. Although they play similar roles, they work differently and serve different purposes.\n",
     "\n",
     "### Cookie\n",
     "\n",
     "A cookie is a small piece of data sent by the server and stored in the user's browser. Whenever the same user makes another request, the browser sends the cookie back to the server along with it. This lets the server recognize the user and remember information about them, such as login state and preferences.\n",
     "\n",
     "**Key characteristics**:\n",
     "\n",
     "- **Persistence**: a cookie can be given an expiry date. If one is set, the information survives even after the browser is closed; if not, it becomes a session cookie that expires when the browser closes.\n",
     "- **Limited size**: each cookie is limited to about 4 KB, and the number of cookies stored per domain is capped.\n",
     "- **Security**: although cookie data is stored locally and users can read and modify it, security can be strengthened with the HttpOnly and Secure flags, which prevent cross-site scripting (XSS) attacks from reading cookies and stop cookies from being sent over unencrypted connections.\n",
     "\n",
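     "Set-Cookie headers like the one in the earlier example response can be parsed with Python's standard `http.cookies` module. A minimal sketch:\n",
     "\n",
     "```python\n",
     "from http.cookies import SimpleCookie\n",
     "\n",
     "# Parse a Set-Cookie header value (the one from the example response above)\n",
     "cookie = SimpleCookie()\n",
     "cookie.load('session_id=def456; Expires=Sat, 14 May 2023 23:59:59 GMT; Secure; HttpOnly')\n",
     "\n",
     "morsel = cookie['session_id']\n",
     "print(morsel.value)        # def456\n",
     "print(morsel['expires'])   # the expiry date\n",
     "print(bool(morsel['secure']), bool(morsel['httponly']))  # both flags are set\n",
     "```\n",
     "\n",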
     "### Session\n",
     "\n",
     "A session is a server-side storage mechanism that holds information about a user's session. The server assigns each user's session a unique identifier, usually called the session ID, which is stored in a cookie or passed via URL rewriting. Each time the user interacts with the server, the server can identify the user by the session ID and access the data stored for that user on the server.\n",
     "\n",
     "**Key characteristics**:\n",
     "\n",
     "- **Stronger security**: because session data lives on the server, users cannot access it directly, which makes it safer than cookies.\n",
     "- **No size limit**: a session can store much larger amounts of data, unconstrained by cookie size limits.\n",
     "- **Dependence on cookies**: although session data lives on the server, the session ID is usually carried in a cookie. If the user disables cookies, another mechanism (such as URL rewriting) is needed to pass the session ID.\n",
     "\n",
     "### Summary\n",
     "\n",
     "In short, a cookie stores data in the user's browser and is mainly used to track and identify users, while a session is a server-side solution that stores user-specific data and manages user state via a session ID. Web applications typically combine the two for authentication, state management, and other features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "a4e8e8ea-e6ff-4072-ad16-fd74816569f4",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\n",
      "  \"args\": {}, \n",
      "  \"headers\": {\n",
      "    \"Accept\": \"*/*\", \n",
      "    \"Accept-Encoding\": \"gzip, deflate, br\", \n",
      "    \"Cookie\": \"sssss=12345\", \n",
      "    \"Host\": \"httpbin.org\", \n",
      "    \"User-Agent\": \"python-requests/2.31.0\", \n",
      "    \"X-Amzn-Trace-Id\": \"Root=1-662fb6b5-03c570c2492a98dc7ede3041\"\n",
      "  }, \n",
      "  \"origin\": \"221.15.159.208\", \n",
      "  \"url\": \"https://httpbin.org/get\"\n",
      "}\n",
      "\n",
      "{'sssss': '12345'}\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
     "# Create a Session object\n",
     "session = requests.Session()\n",
     "\n",
     "# Add a cookie named 'sssss' to the session\n",
     "session.cookies.set('sssss', '12345')\n",
     "\n",
     "# Send a GET request\n",
     "response = session.get('https://httpbin.org/get')\n",
     "\n",
     "# Print the response text; the cookie appears in the request headers\n",
     "print(response.text)\n",
     "\n",
     "# Print all cookies currently stored in the session\n",
     "print(session.cookies.get_dict())\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "ee7d47e8-4564-4e02-b67e-0720631f10a0",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdin",
     "output_type": "stream",
     "text": [
      "Please enter the content you want to search:  你好\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
    "kw = input(\"Please enter the content you want to search: \")\n",
    "response = requests.get(f\"https://www.sogou.com/web?query={kw}\")  # Send GET request\n",
    "\n",
    "with open(\"search_sogou.html\", mode=\"w\", encoding=\"utf-8\") as f:\n",
    "    f.write(response.text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "a66ebb0d-299c-4264-a217-28800894586f",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdin",
     "output_type": "stream",
     "text": [
      "Please enter the content you want to search:  你好\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
     "# Get the search term from the user\n",
    "kw = input(\"Please enter the content you want to search: \")\n",
    "\n",
     "# Build custom request headers\n",
    "headers = {\n",
    "    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3',\n",
    "    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',\n",
    "    'Accept-Language': 'en-US,en;q=0.5'\n",
    "}\n",
    "\n",
     "# Send a GET request with the custom headers\n",
    "response = requests.get(f\"https://www.sogou.com/web?query={kw}\", headers=headers)\n",
    "\n",
     "# Write the response content to a file\n",
    "with open(\"search_sogou1.html\", mode=\"w\", encoding=\"utf-8\") as f:\n",
    "    f.write(response.text)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "9ad685d4-851e-44cf-aa14-457dcbb0d3a8",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdin",
     "output_type": "stream",
     "text": [
      "Please enter the text to translate:  你好\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'errno': 0, 'data': [{'k': '你好', 'v': 'hello; hi; How do you do!'}, {'k': '你好吗', 'v': 'How do you do?'}, {'k': '你好，陌生人', 'v': '[电影]Hello Stranger'}], 'logid': 275726914}\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
    "# Request URL\n",
    "url = 'https://fanyi.baidu.com/sug'\n",
    "\n",
    "# Prompt the user to enter the text to translate\n",
    "text = input(\"Please enter the text to translate: \")\n",
    "\n",
     "# Build the request data (the 'sug' endpoint mainly uses 'kw')\n",
     "data = {\n",
     "    'kw': text,    # text to look up\n",
     "    'from': 'auto',   # source language (auto-detect)\n",
     "    'to': 'zh'      # target language: Chinese\n",
     "}\n",
    "\n",
    "# Send a POST request\n",
    "response = requests.post(url, data=data)\n",
    "\n",
    "# Get the response result\n",
    "result = response.json()\n",
    "\n",
    "# Print the translation result\n",
    "print(result)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "86595aca-573f-4f8e-8454-f8bf7167ad38",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdin",
     "output_type": "stream",
     "text": [
       "Please enter the text to translate:  apple\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "n. 苹果公司，原称苹果电脑公司\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
     "# Prompt the user for the text to translate\n",
     "kw = input(\"Please enter the text to translate: \")\n",
     "\n",
     "# Prepare the request data\n",
     "dic = {\n",
     "    \"kw\": kw   # this parameter name must match what the endpoint expects\n",
     "}\n",
     "\n",
     "# Send a POST request to Baidu Translate's 'sug' endpoint\n",
     "resp = requests.post(\"https://fanyi.baidu.com/sug\", data=dic)\n",
     "\n",
     "# The response is JSON, so parse it directly\n",
     "resp_json = resp.json()\n",
     "\n",
     "# Extract the translation from the response.\n",
     "# This assumes the desired entry is always the first dict in the 'data' list;\n",
     "# if that is not the case, adjust this part of the code.\n",
     "print(resp_json['data'][0]['v'])\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "784969cc-d751-4411-90a4-989ab9823f99",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdin",
     "output_type": "stream",
     "text": [
       "Start position (which movie to begin from):  2\n",
       "Number of movies to fetch:  22\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Movie data fetched successfully!\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "import json\n",
    "\n",
    "def fetch_douban_movies():\n",
    "    url = 'https://movie.douban.com/j/chart/top_list'\n",
    "\n",
     "    # Get the start position and the number of movies from the user\n",
     "    start = input(\"Start position (which movie to begin from): \")\n",
     "    limit = input(\"Number of movies to fetch: \")\n",
     "\n",
     "    param = {\n",
     "        'type': '24',\n",
     "        'interval_id': '100:90',\n",
     "        'action': '',\n",
     "        'start': start,  # index of the first movie to fetch\n",
     "        'limit': limit,  # number of movies to fetch\n",
     "    }\n",
     "\n",
     "    headers = {\n",
     "        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36'\n",
     "    }\n",
     "\n",
     "    # Send the request and handle any exceptions\n",
     "    try:\n",
     "        response = requests.get(url=url, params=param, headers=headers)\n",
     "        response.raise_for_status()\n",
     "        response.encoding = response.apparent_encoding\n",
     "\n",
     "        # Parse the response as JSON\n",
     "        list_data = response.json()\n",
     "\n",
     "        # Write the data to a file\n",
     "        with open('./douban.json', 'w', encoding='utf-8') as fp:\n",
     "            json.dump(list_data, fp, ensure_ascii=False)\n",
     "\n",
     "        print('Movie data fetched successfully!')\n",
     "\n",
     "    except Exception as e:\n",
     "        print(f\"Failed to fetch movie data: {e}\")\n",
     "\n",
     "# Call the function\n",
     "fetch_douban_movies()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "08220f4d-d6c4-45e9-953e-3b440dd439cd",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdin",
     "output_type": "stream",
     "text": [
       "Start position (which movie to begin from):  1\n",
       "Number of movies to fetch:  20\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "鬼子来了\n",
      "触不可及\n",
      "摩登时代\n",
      "大话西游之大圣娶亲\n",
      "疯狂动物城\n",
      "三傻大闹宝莱坞\n",
      "城市之光\n",
      "怦然心动\n",
      "寻梦环游记\n",
      "飞屋环游记\n",
      "罗马假日\n",
      "两杆大烟枪\n",
      "我不是药神\n",
      "让子弹飞\n",
      "大话西游之月光宝盒\n",
      "雨中曲\n",
      "喜宴\n",
      "玛丽和马克思\n",
      "绿皮书\n",
      "东京教父\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
    "def fetch_douban_movies():\n",
    "    url = 'https://movie.douban.com/j/chart/top_list'\n",
    "\n",
     "    # Get the start position and the number of movies from the user\n",
     "    start = input(\"Start position (which movie to begin from): \")\n",
     "    limit = input(\"Number of movies to fetch: \")\n",
     "\n",
     "    param = {\n",
     "        'type': '24',   # chart category id ('24' selects one of Douban's genre charts)\n",
     "        'interval_id': '100:90',   # rating interval; '100:90' means the top 100%-90% band\n",
     "        'action': '',\n",
     "        'start': start,  # index of the first movie to fetch\n",
     "        'limit': limit,  # number of movies to fetch\n",
     "    }\n",
     "\n",
     "    headers = {\n",
     "        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36'\n",
     "        # a common browser User-Agent string; some sites require one to allow the request\n",
     "    }\n",
     "\n",
     "    # Send the request and handle any exceptions\n",
     "    try:\n",
     "        response = requests.get(url=url, params=param, headers=headers)\n",
     "        response.raise_for_status()\n",
     "        response.encoding = response.apparent_encoding\n",
     "\n",
     "        # Parse the response as JSON\n",
     "        list_data = response.json()\n",
     "\n",
     "        # Print the movie titles\n",
     "        for movie in list_data:\n",
     "            print(movie['title'])\n",
     "\n",
     "    except Exception as e:\n",
     "        print(f\"Failed to fetch movie data: {e}\")\n",
     "\n",
     "# Call the function\n",
     "fetch_douban_movies()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "id": "b4873cbc-03d8-43d2-b26d-2c26ac25ec86",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "请求详情：\n",
      "URL: https://www.baidu.com/\n",
      "Method: GET\n",
      "Headers: {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate, br', 'Accept': '*/*', 'Connection': 'keep-alive'}\n",
      "\n",
      "响应详情：\n",
      "状态码: 200\n",
      "Headers: {'Cache-Control': 'private, no-cache, no-store, proxy-revalidate, no-transform', 'Connection': 'keep-alive', 'Content-Encoding': 'gzip', 'Content-Type': 'text/html', 'Date': 'Mon, 29 Apr 2024 15:07:56 GMT', 'Last-Modified': 'Mon, 23 Jan 2017 13:23:46 GMT', 'Pragma': 'no-cache', 'Server': 'bfe/1.0.8.18', 'Set-Cookie': 'BDORZ=27315; max-age=86400; domain=.baidu.com; path=/', 'Transfer-Encoding': 'chunked'}\n",
      "\n",
      "响应正文:\n",
      "<!DOCTYPE html>\n",
      "<!--STATUS OK--><html> <head><meta http-equiv=content-type content=text/html;charset=utf-8><meta http-equiv=X-UA-Compatible content=IE=Edge><meta content=always name=referrer><link rel=stylesheet type=text/css href=https://ss1.bdstatic.com/5eN1bjq8AAUYm2zgoY3K/r/www/cache/bdorz/baidu.min.css><title>百度一下，你就知道</title></head> <body link=#0000cc> <div id=wrapper> <div id=head> <div class=head_wrapper> <div class=s_form> <div class=s_form_wrapper> <div id=lg> <img hidefocus=true src=//www.baidu.com/img/bd_logo1.png width=270 height=129> </div> <form id=form name=f action=//www.baidu.com/s class=fm> <input type=hidden name=bdorz_come value=1> <input type=hidden name=ie value=utf-8> <input type=hidden name=f value=8> <input type=hidden name=rsv_bp value=1> <input type=hidden name=rsv_idx value=1> <input type=hidden name=tn value=baidu><span class=\"bg s_ipt_wr\"><input id=kw name=wd class=s_ipt value maxlength=255 autocomplete=off autofocus=autofocus></span><span class=\"bg s_btn_wr\"><input type=submit id=su value=百度一下 class=\"bg s_btn\" autofocus></span> </form> </div> </div> <div id=u1> <a href=http://news.baidu.com name=tj_trnews class=mnav>新闻</a> <a href=https://www.hao123.com name=tj_trhao123 class=mnav>hao123</a> <a href=http://map.baidu.com name=tj_trmap class=mnav>地图</a> <a href=http://v.baidu.com name=tj_trvideo class=mnav>视频</a> <a href=http://tieba.baidu.com name=tj_trtieba class=mnav>贴吧</a> <noscript> <a href=http://www.baidu.com/bdorz/login.gif?login&amp;tpl=mn&amp;u=http%3A%2F%2Fwww.baidu.com%2f%3fbdorz_come%3d1 name=tj_login class=lb>登录</a> </noscript> <script>document.write('<a href=\"http://www.baidu.com/bdorz/login.gif?login&tpl=mn&u='+ encodeURIComponent(window.location.href+ (window.location.search === \"\" ? \"?\" : \"&\")+ \"bdorz_come=1\")+ '\" name=\"tj_login\" class=\"lb\">登录</a>');\n",
      "                </script> <a href=//www.baidu.com/more/ name=tj_briicon class=bri style=\"display: block;\">更多产品</a> </div> </div> </div> <div id=ftCon> <div id=ftConw> <p id=lh> <a href=http://home.baidu.com>关于百度</a> <a href=http://ir.baidu.com>About Baidu</a> </p> <p id=cp>&copy;2017&nbsp;Baidu&nbsp;<a href=http://www.baidu.com/duty/>使用百度前必读</a>&nbsp; <a href=http://jianyi.baidu.com/ class=cp-feedback>意见反馈</a>&nbsp;京ICP证030173号&nbsp; <img src=//www.baidu.com/img/gs.gif> </p> </div> </div> </div> </body> </html>\n",
      "\n",
      "\n",
      "HTML内容已保存到 'baidu_page.html'。\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
    "# 百度的URL\n",
    "url = 'https://www.baidu.com'\n",
    "\n",
    "# 发送一个GET请求到百度\n",
    "response = requests.get(url)\n",
    "\n",
    "# 确保响应的编码设置正确\n",
    "response.encoding = 'utf-8'\n",
    "\n",
    "# 打印请求详情\n",
    "print(\"请求详情：\")\n",
    "print(\"URL:\", response.request.url)\n",
    "print(\"Method:\", response.request.method)\n",
    "print(\"Headers:\", response.request.headers)\n",
    "\n",
    "# 打印响应\n",
    "print(\"\\n响应详情：\")\n",
    "print(\"状态码:\", response.status_code)\n",
    "print(\"Headers:\", response.headers)\n",
    "print(\"\\n响应正文:\")\n",
    "print(response.text[:10000])  # 打印响应的前10000个字符\n",
    "\n",
    "# 将响应内容保存为一个HTML文件\n",
    "with open('baidu_page.html', 'w', encoding='utf-8') as file:\n",
    "    file.write(response.text)\n",
    "\n",
    "print(\"\\nHTML内容已保存到 'baidu_page.html'。\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bf900bf4-baeb-4f9b-a84e-5f12e223bb3d",
   "metadata": {},
   "source": [
    "### 4. 使用Python进行数据解析\n",
    "\n",
    "在前一章中，我们已经掌握了抓取整个网页的基本技能。然而，在大多数情况下，我们并不需要整个网页的内容，而只需要其中的一小部分。那么，该怎么办呢？这就引出了数据提取的问题。\n",
    "\n",
    "数据提取涉及从较大的数据集或网页中检索特定的数据元素或信息。我们可以使用各种技术和工具，而不是处理整个页面，仅提取我们需要的相关数据。这使我们能够专注于感兴趣的特定信息，使我们的抓取过程更加高效和有针对性。\n",
    "\n",
    "在进行网页抓取期间，执行数据提取有不同的方法和途径。一些常见的技术包括使用BeautifulSoup、正则表达式（re）和XPath。\n",
    "\n",
    "BeautifulSoup：它是一个Python库，提供了一种方便的方法来解析HTML和XML文档。使用BeautifulSoup，我们可以浏览HTML结构，并根据标签、属性或其他模式提取特定的元素或数据。\n",
    "\n",
    "正则表达式（re）：正则表达式提供了一种强大而灵活的模式匹配和文本处理方法。使用正则表达式，我们可以定义特定的模式，并从网页内容中提取与这些模式匹配的数据。\n",
    "\n",
    "XPath：XPath是一种用于在XML或HTML文档中导航和选择元素的查询语言。它提供了一种遍历文档结构并根据其位置或属性选择特定节点或数据的方法。\n",
    "\n",
    "通过采用这些技术，我们可以高效、准确地从网页中提取所需的数据，仅专注于我们分析或应用所需的相关信息。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f215c79a-92b6-48be-bf0f-a39ba822196d",
   "metadata": {},
   "source": [
    "#### 4.1 正则表达式（re）\n",
    "\n",
    "第1步：导入 `re` 模块\n",
    "\n",
    "要在Python中使用正则表达式，您需要导入`re`模块：\n",
    "\n",
    "```\n",
    "import re\n",
    "```\n",
    "\n",
    "第2步：基本模式匹配\n",
    "\n",
    "正则表达式最基本的用法是在字符串中匹配特定的模式。以下是一个示例：\n",
    "\n",
    "```\n",
    "pattern = r\"apple\"\n",
    "text = \"I have an apple and a banana.\"\n",
    "\n",
    "match = re.search(pattern, text)\n",
    "if match:\n",
    "    print(\"找到模式！\")\n",
    "else:\n",
    "    print(\"未找到模式。\")\n",
    "```\n",
    "\n",
    "在这个示例中，我们使用原始字符串 `r\"apple\"` 定义一个模式。然后我们使用 `re.search()` 在 `text` 字符串中搜索该模式。如果找到模式，则打印“找到模式！”；否则，打印“未找到模式”。\n",
    "\n",
    "第3步：元字符和特殊序列\n",
    "\n",
    "正则表达式具有称为元字符的特殊字符，它们具有特殊的含义。以下是一些常用的元字符：\n",
    "\n",
    "- `.`: 匹配除换行符之外的任何字符。\n",
    "- `^`: 匹配字符串的开头。\n",
    "- `$`: 匹配字符串的结尾。\n",
    "- `[]`: 匹配括号内的任何单个字符。\n",
    "- `|`: 匹配管道前后的表达式。\n",
    "- `*`: 匹配前面模式的零个或多个出现。\n",
    "- `+`: 匹配前面模式的一个或多个出现。\n",
    "- `?`: 匹配前面模式的零个或一个出现。\n",
    "- `()`: 创建一个捕获组。\n",
    "\n",
    "特殊序列是代表常见模式的缩写代码：\n",
    "\n",
    "- `\\d`: 匹配任何数字字符（0-9）。\n",
    "- `\\w`: 匹配任何字母数字字符（a-z、A-Z、0-9 和下划线）。\n",
    "- `\\s`: 匹配任何空白字符（空格、制表符、换行符）。\n",
    "- `\\b`: 匹配单词边界。\n",
    "\n",
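    "下面用一个简短的示例演示其中几个元字符和特殊序列（示例文本为假设数据）：\n",
    "\n",
    "```\n",
    "import re\n",
    "\n",
    "text = 'Order 66 shipped to room 101.'\n",
    "\n",
    "print(re.findall(r'\\d+', text))       # \\d+ 匹配所有数字串：['66', '101']\n",
    "print(re.findall(r'\\broom\\b', text))  # \\b 匹配单词边界：['room']\n",
    "print(re.findall(r'sh.p', text))      # . 匹配任意单个字符：['ship']\n",
    "```\n",
    "\n",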
    "第4步：将模式与函数结合使用\n",
    "\n",
    "`re` 模块提供了各种函数来处理正则表达式。以下是一些常用的函数：\n",
    "\n",
    "- `re.search(pattern, string)`: 在字符串中搜索模式的匹配项。\n",
    "- `re.match(pattern, string)`: 在字符串的开头搜索模式的匹配项。\n",
    "- `re.findall(pattern, string)`: 返回字符串中模式的所有非重叠匹配项。\n",
    "- `re.split(pattern, string)`: 使用模式的出现来分割字符串。\n",
    "- `re.sub(pattern, repl, string)`: 将字符串中模式的匹配项替换为替换字符串。\n",
    "\n",
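    "其中 `re.split()` 和 `re.sub()` 在文本清洗中尤其常用，示例如下（示例字符串为假设数据）：\n",
    "\n",
    "```\n",
    "import re\n",
    "\n",
    "text = 'apple, banana;  cherry'\n",
    "\n",
    "# 按逗号或分号（及其后的空白）分割字符串\n",
    "print(re.split(r'[,;]\\s*', text))   # ['apple', 'banana', 'cherry']\n",
    "\n",
    "# 将所有连续空白替换为单个空格\n",
    "print(re.sub(r'\\s+', ' ', text))    # 'apple, banana; cherry'\n",
    "```\n",
    "\n",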
    "第5步：捕获组\n",
    "\n",
    "捕获组允许您从匹配的模式中提取特定部分。以下是一个示例：\n",
    "\n",
    "```\n",
    "pattern = r\"(\\d+)-(\\d+)-(\\d+)\"\n",
    "text = \"Date: 2023-05-14\"\n",
    "\n",
    "match = re.search(pattern, text)\n",
    "if match:\n",
    "    year = match.group(1)\n",
    "    month = match.group(2)\n",
    "    day = match.group(3)\n",
    "    print(\"年份:\", year)\n",
    "    print(\"月份:\", month)\n",
    "    print(\"日:\", day)\n",
    "```\n",
    "\n",
    "在这个示例中，模式 `(\\d+)-(\\d+)-(\\d+)` 从日期字符串中捕获年、月和日。我们使用 `match.group()` 方法访问捕获的组并打印它们。\n",
    "\n",
    "这只是Python中使用正则表达式的基础知识。正则表达式提供了一种强大而灵活的方式来搜索、匹配和操作文本模式。建议参考官方Python文档以获取有关正则表达式的更详细信息。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5141070c-91c3-4444-8803-9cc0c71bd774",
   "metadata": {},
   "source": [
    "这些量词符号（*，+，?）在正则表达式中用于指定前面的模式的重复次数。它们的区别在于：\n",
    "\n",
    "1. `*`：匹配前面的模式零个或多个出现。这意味着前面的模式可以完全不存在，也可以重复出现任意次数，包括零次。例如，`ab*`将匹配`a`后面跟着零个或多个`b`的字符串，比如`a`、`ab`、`abb`等。\n",
    "2. `+`：匹配前面的模式一个或多个出现。这意味着前面的模式至少会出现一次，也可以重复出现任意次数，但至少要出现一次。例如，`ab+`将匹配`a`后面跟着至少一个`b`的字符串，比如`ab`、`abb`、`abbb`等，但不会匹配`a`。\n",
    "3. `?`：匹配前面的模式零个或一个出现。这意味着前面的模式可以完全不存在，也可以只出现一次，但不会重复出现。例如，`ab?`将匹配`a`后面跟着零个或一个`b`的字符串，比如`a`、`ab`，但不会匹配`abb`。"
   ]
  },
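  {
   "cell_type": "markdown",
   "id": "8b1c4d5e-2a90-4f37-b6e1-0d1e2f3a4b5c",
   "metadata": {},
   "source": [
    "下面的小示例验证上述三种量词的区别（使用 `re.fullmatch` 对整个字符串做匹配，示例字符串为假设数据）：\n",
    "\n",
    "```\n",
    "import re\n",
    "\n",
    "for s in ['a', 'ab', 'abb']:\n",
    "    print(s,\n",
    "          bool(re.fullmatch(r'ab*', s)),  # 零个或多个 b\n",
    "          bool(re.fullmatch(r'ab+', s)),  # 一个或多个 b\n",
    "          bool(re.fullmatch(r'ab?', s)))  # 零个或一个 b\n",
    "```"
   ]
  },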
  {
   "cell_type": "code",
   "execution_count": 49,
   "id": "3238fbfe-1970-4084-813e-e5a76832af3c",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Pattern found!\n"
     ]
    }
   ],
   "source": [
    "import re\n",
    "pattern = r\"apple\"\n",
    "text = \"I have an apple and a banana.\"\n",
    "\n",
    "match = re.search(pattern, text)\n",
    "if match:\n",
    "    print(\"Pattern found!\")\n",
    "else:\n",
    "    print(\"Pattern not found.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "id": "848bcb07-9be0-4239-9a72-d3801d317adb",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Year: 2023\n",
      "Month: 05\n",
      "Day: 14\n"
     ]
    }
   ],
   "source": [
    "import re\n",
    "pattern = r\"(\\d+)-(\\d+)-(\\d+)\"\n",
    "text = \"Date: 2023-05-14\"\n",
    "\n",
    "match = re.search(pattern, text)\n",
    "if match:\n",
    "    # print(match.group(1))\n",
    "    year = match.group(1)\n",
    "    month = match.group(2)\n",
    "    day = match.group(3)\n",
    "    print(\"Year:\", year)\n",
    "    print(\"Month:\", month)\n",
    "    print(\"Day:\", day)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "id": "fb8d1636-f492-4981-8584-1971557ef94c",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['10086', '10010']\n",
      "10086\n",
      "10010\n",
      "10086\n",
      "10086\n",
      "10086\n",
      "10010\n",
      "['1000000000']\n",
      "Guo Qilin\n",
      "1\n",
      "Song Tie\n",
      "2\n",
      "Da Congming\n",
      "3\n",
      "Fan Sizhe\n",
      "4\n",
      "Hu Shuo Badao\n",
      "5\n"
     ]
    }
   ],
   "source": [
    "import re\n",
    "\n",
    "# findall: Matches all occurrences of the pattern in the string\n",
    "lst = re.findall(r\"\\d+\", \"My phone number is: 10086, and my girlfriend's phone number is: 10010\")\n",
    "print(lst)\n",
    "\n",
    "# finditer: Matches all occurrences of the pattern in the string [returns an iterator], accessing the content from the iterator requires .group()\n",
    "it = re.finditer(r\"\\d+\", \"My phone number is: 10086, and my girlfriend's phone number is: 10010\")\n",
    "for i in it:\n",
    "    print(i.group())\n",
    "\n",
    "# search: Returns the first occurrence of a match, the result is a match object, accessing the data requires .group()\n",
    "s = re.search(r\"\\d+\", \"My phone number is: 10086, and my girlfriend's phone number is: 10010\")\n",
    "print(s.group())\n",
    "\n",
    "# match: Matches from the beginning of the string\n",
    "s = re.match(r\"\\d+\", \"10086, and my girlfriend's phone number is: 10010\")\n",
    "print(s.group())\n",
    "\n",
    "# Precompile regular expression\n",
    "obj = re.compile(r\"\\d+\")\n",
    "\n",
    "ret = obj.finditer(\"My phone number is: 10086, and my girlfriend's phone number is: 10010\")\n",
    "for it in ret:\n",
    "    print(it.group())\n",
    "\n",
    "ret = obj.findall(\"Hahaha, I don't believe you won't change me 1000000000\")\n",
    "print(ret)\n",
    "\n",
    "s = \"\"\"\n",
    "<div class='jay'><span id='1'>Guo Qilin</span></div>\n",
    "<div class='jj'><span id='2'>Song Tie</span></div>\n",
    "<div class='jolin'><span id='3'>Da Congming</span></div>\n",
    "<div class='sylar'><span id='4'>Fan Sizhe</span></div>\n",
    "<div class='tory'><span id='5'>Hu Shuo Badao</span></div>\n",
    "\"\"\"\n",
    "\n",
    "# (?P<group_name>regex) can be used to further extract content from the matched content\n",
    "obj = re.compile(r\"<div class='.*?'><span id='(?P<id>\\d+)'>(?P<wahaha>.*?)</span></div>\", re.S)  # re.S: allows . to match newline characters\n",
    "\n",
    "result = obj.finditer(s)\n",
    "for it in result:\n",
    "    print(it.group(\"wahaha\"))\n",
    "    print(it.group(\"id\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "id": "fdd0873b-65d7-49e1-a48b-ef9f8db6756a",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "数据提取并写入CSV完成！\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "import re\n",
    "import csv\n",
    "\n",
    "# 请求豆瓣电影Top250页面\n",
    "url = \"https://movie.douban.com/top250\"\n",
    "headers = {\n",
    "    \"user-agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.192 Safari/537.36\"\n",
    "}\n",
    "resp = requests.get(url, headers=headers)\n",
    "page_content = resp.text\n",
    "\n",
    "# 解析数据\n",
    "pattern = re.compile(r'<li>.*?<div class=\"item\">.*?<span class=\"title\">(?P<name>.*?)'\n",
    "                     r'</span>.*?<p class=\"\">.*?<br>(?P<year>.*?)&nbsp.*?<span '\n",
    "                     r'class=\"rating_num\" property=\"v:average\">(?P<score>.*?)</span>.*?'\n",
    "                     r'<span>(?P<num>.*?)人评价</span>', re.S)\n",
    "\n",
    "# 开始匹配\n",
    "result = pattern.finditer(page_content)\n",
    "\n",
    "# 创建并写入CSV文件\n",
    "with open(\"data.csv\", mode=\"w\", encoding=\"utf-8\", newline=\"\") as f:  # newline=\"\" 避免 Windows 下每行后多出空行\n",
    "    csvwriter = csv.writer(f)\n",
    "    for item in result:\n",
    "        dic = item.groupdict()\n",
    "        dic['year'] = dic['year'].strip()  # 去除年份字符串两端的空格\n",
    "        csvwriter.writerow(dic.values())\n",
    "\n",
    "print(\"数据提取并写入CSV完成！\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "943963b0-4e84-4314-9323-0ca733b14876",
   "metadata": {},
   "source": [
    "```\n",
    "<li>.*?<div class=\"item\">.*?<span class=\"title\">(?P<name>.*?)</span>\n",
    "```\n",
    "\n",
    "- `<li>`: 匹配起始的 `<li>` 标签。\n",
    "- `.*?`: 非贪婪地匹配任意字符零次或多次（模式编译时使用了 `re.S` 标志，因此 `.` 也能匹配换行符）。\n",
    "- `<div class=\"item\">`: 匹配 class 属性为 \"item\" 的 `<div>` 标签。\n",
    "- `<span class=\"title\">`: 匹配 class 属性为 \"title\" 的 `<span>` 标签。\n",
    "- `(?P<name>.*?)`: 命名捕获组 \"name\"，用于匹配电影名称。\n",
    "- `</span>`: 匹配结束的 `</span>` 标签。\n",
    "\n",
    "```\n",
    ".*?<p class=\"\">.*?<br>(?P<year>.*?)&nbsp.*?<span class=\"rating_num\" property=\"v:average\">(?P<score>.*?)</span>\n",
    "```\n",
    "\n",
    "- `.*?<p class=\"\">`: 非贪婪地匹配任意字符零次或多次，接着匹配 class 属性为空字符串的 `<p>` 标签。\n",
    "- `.*?<br>`: 非贪婪地匹配任意字符零次或多次，接着匹配 `<br>` 标签。\n",
    "- `(?P<year>.*?)`: 命名捕获组 \"year\"，用于匹配电影年份。\n",
    "- `&nbsp`: 匹配页面源码中 HTML 实体 `&nbsp;`（不间断空格）的字面文本。\n",
    "- `.*?<span class=\"rating_num\" property=\"v:average\">`: 非贪婪地匹配任意字符零次或多次，接着匹配 class 属性为 \"rating_num\" 且 property 属性为 \"v:average\" 的 `<span>` 标签。\n",
    "- `(?P<score>.*?)`: 命名捕获组 \"score\"，用于匹配电影评分。\n",
    "- `</span>`: 匹配结束的 `</span>` 标签。\n",
    "\n",
    "```\n",
    ".*?<span>(?P<num>.*?)人评价</span>\n",
    "```\n",
    "\n",
    "- `.*?<span>`: 非贪婪地匹配任意字符零次或多次，接着匹配 `<span>` 标签。\n",
    "- `(?P<num>.*?)`: 命名捕获组 \"num\"，用于匹配评价人数。\n",
    "- `人评价</span>`: 匹配文本 \"人评价\"，接着匹配结束的 `</span>` 标签。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "af27ebc1-0109-4260-9ecd-e8f54dfcfe21",
   "metadata": {},
   "source": [
    "#### 4.2 BeautifulSoup 库\n",
    "\n",
    "- 使用 BeautifulSoup 解析 HTML\n",
    "- 使用 BeautifulSoup 导航解析树\n",
    "\n",
    "BeautifulSoup（通常缩写为 bs4）是处理网页或 HTML 文件时非常有价值的 Python 库。它提供了一种简单灵活的方式来解析 HTML 并从中提取数据。以下是 BeautifulSoup 库的一些关键功能：\n",
    "\n",
    "1. HTML 解析：BeautifulSoup 可以将 HTML 内容解析为一个名为“BeautifulSoup 对象”的 Python 对象。该对象表示整个 HTML 文档的结构，使您可以轻松地遍历和操作它。\n",
    "2. 导航解析树：BeautifulSoup 提供了一系列方法来导航 HTML 解析树。您可以基于标签、属性或层次关系搜索特定元素，也可以遍历整个树结构以检索所需数据。\n",
    "3. 数据提取：使用 BeautifulSoup，您可以轻松地从 HTML 文档中提取数据。您可以访问单个标签的内容、属性和文本，以及基于特定选择器提取多个元素。\n",
    "4. 修改文档：BeautifulSoup 还允许您修改 HTML 文档。您可以添加、删除和修改标签，更改标签属性和文本内容，并根据需要重新构造文档。\n",
    "5. 处理复杂 HTML：BeautifulSoup 在处理复杂的 HTML 文档时非常强大。它可以处理不完整的标签、嵌套的标签结构和其他 HTML 错误，确保您可以正确解析和提取数据。\n",
    "\n",
    "总之，BeautifulSoup 是一个强大的库，用于从 HTML 中提取数据、处理网页和执行网页抓取任务。它提供了一个简单灵活的 API，使 HTML 解析和操作更加简单。无论是网页抓取、数据提取还是网页分析，BeautifulSoup 都是一个有价值的工具。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "id": "ca7b98a4-3091-4ea3-8b60-0e07ab013334",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple\n",
      "Requirement already satisfied: beautifulsoup4 in d:\\anaconda3\\lib\\site-packages (4.12.2)\n",
      "Requirement already satisfied: soupsieve>1.2 in d:\\anaconda3\\lib\\site-packages (from beautifulsoup4) (2.4)\n"
     ]
    }
   ],
   "source": [
    "!pip install beautifulsoup4"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "id": "efc83add-9f1e-401f-bfed-708e831659b8",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Example Page\n",
      "Welcome to the Example Page\n",
      "<div class=\"content\">\n",
      "<p>This is some example content.</p>\n",
      "<ul>\n",
      "<li>Item 1</li>\n",
      "<li>Item 2</li>\n",
      "<li>Item 3</li>\n",
      "</ul>\n",
      "</div>\n",
      "['content']\n",
      "Item 1\n",
      "Item 2\n",
      "Item 3\n",
      "This is some example content.\n",
      "Item 1\n",
      "Item 2\n",
      "Item 3\n"
     ]
    }
   ],
   "source": [
    "from bs4 import BeautifulSoup\n",
    "\n",
    "html_content = '''\n",
    "<html>\n",
    "  <head>\n",
    "    <title>Example Page</title>\n",
    "  </head>\n",
    "  <body>\n",
    "    <h1>Welcome to the Example Page</h1>\n",
    "    <div class=\"content\">\n",
    "      <p>This is some example content.</p>\n",
    "      <ul>\n",
    "        <li>Item 1</li>\n",
    "        <li>Item 2</li>\n",
    "        <li>Item 3</li>\n",
    "      </ul>\n",
    "    </div>\n",
    "  </body>\n",
    "</html>\n",
    "'''\n",
    "\n",
    "# 创建 BeautifulSoup 对象\n",
    "soup = BeautifulSoup(html_content, 'html.parser')\n",
    "\n",
    "# 访问标签内容\n",
    "title = soup.title\n",
    "print(title.text)  # 输出：Example Page\n",
    "\n",
    "h1 = soup.h1\n",
    "print(h1.text)  # 输出：Welcome to the Example Page\n",
    "\n",
    "# 根据标签名查找元素\n",
    "div = soup.find('div')\n",
    "print(div)\n",
    "\n",
    "# 访问元素属性\n",
    "div_class = div['class']\n",
    "print(div_class)  # 输出：['content']\n",
    "\n",
    "# 遍历标签元素\n",
    "ul = soup.find('ul')\n",
    "for li in ul.find_all('li'):\n",
    "    print(li.text)\n",
    "\n",
    "# 使用 CSS 选择器选择元素\n",
    "p = soup.select_one('.content p')\n",
    "print(p.text)  # 输出：This is some example content.\n",
    "\n",
    "items = soup.select('.content li')\n",
    "for item in items:\n",
    "    print(item.text)\n"
   ]
  },
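  {
   "cell_type": "markdown",
   "id": "5d8a1f20-3b6c-4e71-a9d2-7c8b9e0f1a2b",
   "metadata": {},
   "source": [
    "前面提到 BeautifulSoup 也可以修改文档。下面是一个最小示例（HTML 片段为假设数据），演示修改标签文本与属性、以及新建并追加标签：\n",
    "\n",
    "```\n",
    "from bs4 import BeautifulSoup\n",
    "\n",
    "soup = BeautifulSoup('<div class=\"content\"><p>旧文本</p></div>', 'html.parser')\n",
    "\n",
    "# 修改标签文本\n",
    "soup.p.string = '新文本'\n",
    "\n",
    "# 修改标签属性\n",
    "soup.div['class'] = 'updated'\n",
    "\n",
    "# 新建并追加一个标签\n",
    "new_tag = soup.new_tag('span')\n",
    "new_tag.string = '追加内容'\n",
    "soup.div.append(new_tag)\n",
    "\n",
    "print(soup)\n",
    "```"
   ]
  },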
  {
   "cell_type": "code",
   "execution_count": 66,
   "id": "0c9e2cd5-53ef-4415-b65d-7b2780434e4d",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "肖申克的救赎\n",
      " / The Shawshank Redemption\n",
      "霸王别姬\n",
      "阿甘正传\n",
      " / Forrest Gump\n",
      "泰坦尼克号\n",
      " / Titanic\n",
      "千与千寻\n",
      " / 千と千尋の神隠し\n",
      "这个杀手不太冷\n",
      " / Léon\n",
      "美丽人生\n",
      " / La vita è bella\n",
      "星际穿越\n",
      " / Interstellar\n",
      "盗梦空间\n",
      " / Inception\n",
      "楚门的世界\n",
      " / The Truman Show\n",
      "辛德勒的名单\n",
      " / Schindler's List\n",
      "忠犬八公的故事\n",
      " / Hachi: A Dog's Tale\n",
      "海上钢琴师\n",
      " / La leggenda del pianista sull'oceano\n",
      "三傻大闹宝莱坞\n",
      " / 3 Idiots\n",
      "放牛班的春天\n",
      " / Les choristes\n",
      "机器人总动员\n",
      " / WALL·E\n",
      "疯狂动物城\n",
      " / Zootopia\n",
      "无间道\n",
      " / 無間道\n",
      "控方证人\n",
      " / Witness for the Prosecution\n",
      "大话西游之大圣娶亲\n",
      " / 西遊記大結局之仙履奇緣\n",
      "熔炉\n",
      " / 도가니\n",
      "教父\n",
      " / The Godfather\n",
      "触不可及\n",
      " / Intouchables\n",
      "当幸福来敲门\n",
      " / The Pursuit of Happyness\n",
      "寻梦环游记\n",
      " / Coco\n",
      "电影标题列表已打印！\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "from bs4 import BeautifulSoup\n",
    "\n",
    "url = \"https://movie.douban.com/top250\"\n",
    "headers = {\n",
    "    \"user-agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.192 Safari/537.36\"\n",
    "}\n",
    "resp = requests.get(url, headers=headers)\n",
    "page_content = resp.text\n",
    "\n",
    "# 使用 BeautifulSoup 解析页面内容\n",
    "soup = BeautifulSoup(page_content, \"html.parser\")\n",
    "\n",
    "# 使用 CSS 选择器查找电影标题\n",
    "movie_titles = soup.select(\"#content > div > div.article > ol > li > div > div.info > div.hd > a > span.title\")\n",
    "\n",
    "# 打印电影标题\n",
    "for title in movie_titles:\n",
    "    print(title.get_text())\n",
    "\n",
    "print(\"电影标题列表已打印！\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "id": "4e2c4288-da2b-4ddf-89ee-8ba947fc8266",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "已下载: zgghaapkfhy.jpg\n",
      "已下载: zyecmylhfrn.jpg\n",
      "已下载: gsyxb1o4gdq.jpg\n",
      "已下载: zgghaapkfhy.jpg\n",
      "已下载: ihk3g03psgi.jpg\n",
      "已下载: t1ouhdmbhjo.jpg\n",
      "已下载: ap2c1vg3whm.jpg\n",
      "已下载: zj2ggdrhl44.jpg\n",
      "已下载: dnvk3qz2ocy.jpg\n",
      "已下载: cyhlqhlylep.jpg\n",
      "已下载: ql23ngdggqt.jpg\n",
      "已下载: fhkfzrkfyyv.jpg\n",
      "已下载: vxrtmf3rnig.jpg\n",
      "已下载: xbz4cl1lhtg.jpg\n",
      "已下载: yotyomy0svb.jpg\n",
      "已下载: 5g54nolova5.jpg\n",
      "已下载: y1mahuysmqw.jpg\n",
      "已下载: u0ffxygdpgk.jpg\n",
      "已下载: epb4dxkxtlz.jpg\n",
      "已下载: f3ypjikdmf0.jpg\n",
      "已下载: ghj2jfe5twm.jpg\n",
      "已下载: oxxnb3niz1h.jpg\n",
      "已下载: c3td3px1qvo.jpg\n",
      "已下载: nbuidh0n0cj.jpg\n",
      "已下载: af2f41cry2n.jpg\n",
      "已下载: hv1yg315qua.jpg\n",
      "已下载: hpfrstvqizi.jpg\n",
      "已下载: w2yy320dp5b.jpg\n",
      "已下载: srpiuysntej.jpg\n",
      "已下载: j32saeaez3h.jpg\n",
      "所有图片已下载！\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import requests\n",
    "from bs4 import BeautifulSoup\n",
    "import time\n",
    "from urllib.parse import urljoin\n",
    "\n",
    "url = \"https://www.umei.cc/bizhitupian/weimeibizhi/\"\n",
    "resp = requests.get(url)\n",
    "resp.encoding = 'utf-8'  # 处理编码问题\n",
    "\n",
    "# 将响应内容传递给 BeautifulSoup\n",
    "main_page = BeautifulSoup(resp.text, \"html.parser\")\n",
    "items = main_page.find_all(\"div\", class_=\"item\")\n",
    "\n",
    "# 如果不存在 'imgC' 目录，则创建它\n",
    "os.makedirs(\"imgC\", exist_ok=True)\n",
    "\n",
    "for item in items:\n",
    "    # 找到指向子页面的链接\n",
    "    link = item.find(\"a\", href=True)\n",
    "    href = link[\"href\"]\n",
    "    \n",
    "    # 检查 URL 是否具有协议\n",
    "    if not href.startswith(\"http\"):\n",
    "        href = urljoin(url, href)\n",
    "\n",
    "    # 获取子页面的内容\n",
    "    child_page_resp = requests.get(href)\n",
    "    child_page_resp.encoding = 'utf-8'\n",
    "    child_page_text = child_page_resp.text\n",
    "\n",
    "    # 从子页面提取图像下载 URL\n",
    "    child_page = BeautifulSoup(child_page_text, \"html.parser\")\n",
    "    img = child_page.find(\"img\", class_=\"lazy\")\n",
    "    src = img[\"data-original\"]\n",
    "\n",
    "    # 检查 URL 是否具有协议\n",
    "    if not src.startswith(\"http\"):\n",
    "        src = urljoin(url, src)\n",
    "\n",
    "    # 下载图像\n",
    "    img_resp = requests.get(src)\n",
    "    img_name = src.split(\"/\")[-1]  # 从 URL 中提取图像名称\n",
    "\n",
    "    with open(\"imgC/\" + img_name, mode=\"wb\") as f:\n",
    "        f.write(img_resp.content)\n",
    "\n",
    "    print(\"已下载:\", img_name)\n",
    "    time.sleep(1)\n",
    "\n",
    "print(\"所有图片已下载！\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "15b2b910-c117-44a4-a587-5507043fc162",
   "metadata": {},
   "source": [
    "#### 4.3 XPath\n",
    "\n",
    "XPath 是一种强大的查询语言，用于选择元素并在 XML 和 HTML 文档中导航。它允许您遍历文档的结构，并根据模式和条件提取特定的数据。\n",
    "\n",
    "要在 Python 中使用 XPath 库，您需要安装提供 XPath 功能的 `lxml` 库。您可以使用 pip 进行安装：\n",
    "\n",
    "```\n",
    "pip install lxml\n",
    "```\n",
    "\n",
    "安装完成后，您可以导入必要的模块来使用 XPath：\n",
    "\n",
    "```\n",
    "from lxml import etree\n",
    "```\n",
    "\n",
    "现在，让我们来了解 XPath 的关键概念和技术：\n",
    "\n",
    "1. 选择元素：XPath 表达式用于选择 XML 或 HTML 文档中的元素。您可以通过标签名称、属性或它们在文档结构中的位置来指定要定位的元素。\n",
    "2. XPath 轴：轴允许您相对于当前元素导航文档。常见的轴包括 `child`、`parent`、`descendant`、`ancestor`、`following-sibling` 和 `preceding-sibling`。它们帮助您基于元素与其他元素的关系选择元素。\n",
    "3. 谓词：谓词是进一步细化元素选择的条件。您可以使用谓词根据它们的属性、值或位置来过滤元素。\n",
    "4. XPath 函数：XPath 提供了一系列内置函数，用于对元素和值执行操作。诸如 `text()`、`contains()`、`starts-with()`、`position()` 和 `last()` 等函数在 XPath 表达式中经常使用。\n",
    "5. XPath 运算符：XPath 支持各种运算符，如 `|`（并集）、`+`、`-`、`*`、`div`、`mod`、`= `、`!=`、`<`、`>`、`<=`、`>=`、`and`、`or` 和 `not`。这些运算符允许您组合表达式并比较值。\n",
    "6. 在 Python 中使用 XPath：使用 `lxml` 库，您可以使用 `etree` 模块解析 XML 或 HTML 文档。解析后，您可以使用 `xpath()` 方法执行 XPath 表达式，并检索匹配的元素或值。\n",
    "\n",
    "XPath 是从 XML 和 HTML 文档中提取数据的多功能工具。它提供了一种精确而灵活的方式来导航文档结构并定位特定元素。通过掌握 XPath，您可以从复杂的文档中高效地提取所需的数据。\n",
    "\n",
    "注意：虽然 XPath 主要设计用于 XML，但它也可以与 HTML 文档一起使用。但是，HTML 文档可能具有结构差异，这可能会影响 XPath 表达式的准确性和可靠性。在这种情况下，建议使用专门用于解析 HTML 的库，如 BeautifulSoup。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "id": "d487fe17-c20d-44fe-9a2e-2d067a0b335f",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple\n",
      "Requirement already satisfied: lxml in d:\\anaconda3\\lib\\site-packages (4.9.3)\n"
     ]
    }
   ],
   "source": [
    "!pip install lxml"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "id": "8ecdded4-e147-4ac6-90a1-1de807ec6cea",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "欢迎来到 XPath 教程\n",
      "学习用于网页抓取的 XPath\n",
      "介绍\n",
      "基本语法\n",
      "表达式和谓词\n",
      "函数\n",
      "div\n",
      "div\n"
     ]
    }
   ],
   "source": [
    "from lxml import etree\n",
    "\n",
    "# 创建一个 HTML 文档\n",
    "html_content = \"\"\"\n",
    "<html>\n",
    "    <body>\n",
    "        <h1>欢迎来到 XPath 教程</h1>\n",
    "        <div class=\"content\">\n",
    "            <p>学习用于网页抓取的 XPath</p>\n",
    "            <ul>\n",
    "                <li>介绍</li>\n",
    "                <li>基本语法</li>\n",
    "                <li>表达式和谓词</li>\n",
    "                <li>函数</li>\n",
    "            </ul>\n",
    "        </div>\n",
    "    </body>\n",
    "</html>\n",
    "\"\"\"\n",
    "\n",
    "# 解析 HTML 文档\n",
    "root = etree.HTML(html_content)\n",
    "\n",
    "# 使用 XPath 选择元素\n",
    "headings = root.xpath(\"//h1\")\n",
    "for heading in headings:\n",
    "    print(heading.text)\n",
    "\n",
    "paragraph = root.xpath(\"//p\")[0]\n",
    "print(paragraph.text)\n",
    "\n",
    "list_items = root.xpath(\"//ul/li/text()\")\n",
    "for item in list_items:\n",
    "    print(item)\n",
    "\n",
    "# 使用谓词来过滤元素\n",
    "div = root.xpath(\"//div[@class='content']\")[0]\n",
    "print(div.tag)\n",
    "\n",
    "# 访问父元素和子元素\n",
    "ul = root.xpath(\"//ul\")[0]\n",
    "parent_div = ul.getparent()\n",
    "print(parent_div.tag)\n"
   ]
  },
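  {
   "cell_type": "markdown",
   "id": "9e4b2c71-6f03-4d2a-8c15-1b2a3c4d5e6f",
   "metadata": {},
   "source": [
    "下面的小示例补充演示前文提到的 XPath 函数（contains()、starts-with()）与位置谓词（last()）的用法（HTML 片段为假设数据）：\n",
    "\n",
    "```\n",
    "from lxml import etree\n",
    "\n",
    "html = '''\n",
    "<ul>\n",
    "    <li class=\"item-first\">苹果</li>\n",
    "    <li class=\"item-second\">香蕉</li>\n",
    "    <li class=\"other\">樱桃</li>\n",
    "</ul>\n",
    "'''\n",
    "root = etree.HTML(html)\n",
    "\n",
    "# contains()：选择 class 属性包含 'item' 的 li\n",
    "print(root.xpath(\"//li[contains(@class, 'item')]/text()\"))\n",
    "\n",
    "# starts-with()：选择 class 属性以 'other' 开头的 li\n",
    "print(root.xpath(\"//li[starts-with(@class, 'other')]/text()\"))\n",
    "\n",
    "# 位置谓词：选择最后一个 li\n",
    "print(root.xpath('//li[last()]/text()'))\n",
    "```"
   ]
  },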
  {
   "cell_type": "code",
   "execution_count": 75,
   "id": "3371cf8e-89b1-4acc-bfa5-cc18ae5361ad",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "肖申克的救赎\n",
      "霸王别姬\n",
      "阿甘正传\n",
      "泰坦尼克号\n",
      "千与千寻\n",
      "这个杀手不太冷\n",
      "美丽人生\n",
      "星际穿越\n",
      "盗梦空间\n",
      "楚门的世界\n",
      "辛德勒的名单\n",
      "忠犬八公的故事\n",
      "海上钢琴师\n",
      "三傻大闹宝莱坞\n",
      "放牛班的春天\n",
      "机器人总动员\n",
      "疯狂动物城\n",
      "无间道\n",
      "控方证人\n",
      "大话西游之大圣娶亲\n",
      "熔炉\n",
      "教父\n",
      "触不可及\n",
      "当幸福来敲门\n",
      "寻梦环游记\n",
      "电影列表已打印！\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "from lxml import etree\n",
    "\n",
    "url = \"https://movie.douban.com/top250\"\n",
    "headers = {\n",
    "    \"user-agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.192 Safari/537.36\"\n",
    "}\n",
    "resp = requests.get(url, headers=headers)\n",
    "page_content = resp.text\n",
    "\n",
    "# 解析数据\n",
    "tree = etree.HTML(page_content)\n",
    "\n",
    "# 使用 XPath 查找电影名称\n",
    "movie_names = tree.xpath(\"/html/body/div[3]/div[1]/div/div[1]/ol/li/div/div[2]/div[1]/a/span[1]\")\n",
    "\n",
    "# 打印电影名称\n",
    "for name in movie_names:\n",
    "    print(name.text)\n",
    "\n",
    "print(\"电影列表已打印！\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "48c8c9c2-4a4f-42a1-88df-2f9358cbe7a2",
   "metadata": {},
   "source": [
    "### 5. 反爬虫策略\n",
    "\n",
    "- 常见的反爬虫技术\n",
    "- 绕过这些技术的方法\n",
    "\n",
    "网络爬虫面临各种反爬虫技术，这些技术由网站采用，旨在保护其数据并控制访问。这些技术旨在检测并防止自动化爬取活动。了解常见的反爬虫技术并学习如何绕过它们，可以帮助您提高网络爬取项目的成功率和可靠性。让我们探讨一些常见的反爬虫技术以及绕过它们的方法：\n",
    "\n",
    "1. Robots.txt：网站通常使用 robots.txt 文件指定哪些部分不应被网络爬虫访问。这是一种向搜索引擎爬虫通信的标准方法。要绕过此限制，您可以选择忽略 robots.txt 文件，并继续爬取所需内容。但是，在这样做时，请谨慎并尊重网站政策。\n",
    "2. 用户代理限制：网站可能会阻止没有有效用户代理标头或具有可疑用户代理值的请求。为了绕过此限制，您可以在爬取代码中设置用户代理标头，以模拟合法的网络浏览器。您可以在请求标头中设置流行的用户代理字符串以使您的爬虫看起来更像普通用户。\n",
    "3. 验证码挑战：验证码用于区分人类和机器人。网站可能会使用验证码来防止自动化爬取。要绕过验证码，您可以使用第三方服务或库来自动解决验证码，例如验证码解决 API。这些服务通常需要 API 密钥，并且可以代表您处理验证码挑战。\n",
    "4. IP 阻止：网站可能会阻止在短时间内进行过多请求的 IP 地址。要绕过 IP 阻止，您可以使用轮换代理或代理服务。代理允许您从不同的 IP 地址发出请求，使得网站难以跟踪和阻止您的爬取活动。请确保选择可靠和信誉良好的代理提供商。\n",
    "5. 动态网站内容：依赖 JavaScript 进行客户端渲染的网站可能会对爬取提出挑战。要绕过这一限制，您可以使用无头浏览器或能够渲染 JavaScript 的爬取框架，例如 Puppeteer 或 Selenium。这些工具模拟真实的浏览器环境，并允许您与动态加载的内容进行交互。\n",
    "6. 会话管理：网站可能会使用 cookie 或会话跟踪用户活动并防止爬取。要绕过基于会话的保护，您可以在爬取代码中维护和管理 cookie。您可以从初始请求中提取 cookie，并将其包含在后续请求中，以维持与网站的会话。\n",
    "7. 速率限制：网站可能会实施速率限制机制，限制单个用户在给定时间内的请求次数。要绕过速率限制，您可以在请求之间引入延迟，或者使用智能爬取技术，如自适应速率限制，根据网站的响应时间动态调整爬取速度。\n",
    "8. 蜜罐陷阱：网站可能会使用隐藏链接或表单字段，这些对人类用户不可见，但对机器人可见。向这些陷阱提交请求可能会导致 IP 阻止或其他反制措施。要绕过蜜罐陷阱，您可以检查网页的 HTML 结构，分析表单字段，或者避免点击可疑的链接。\n",
    "\n",
    "需要注意的是，虽然这些方法可以帮助您绕过常见的反爬虫技术，但应该负责任地使用，并遵守网站的服务条款。尊重网站政策，限制爬取速率，并避免对目标网站的服务器造成过大的负载始终是一种良好的做法。"
   ]
  },
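  {
   "cell_type": "markdown",
   "id": "3c9f1a2b-7d45-4e8a-9b10-5a6c7d8e9f01",
   "metadata": {},
   "source": [
    "针对上面第 2、6、7 点（用户代理、会话管理、速率限制），下面给出一个不访问真实网站的最小示意：用 requests.Session 统一管理请求头与 cookie，并在请求之间引入固定延迟（User-Agent 字符串、cookie 值与延迟时长均为示例取值）：\n",
    "\n",
    "```\n",
    "import time\n",
    "import requests\n",
    "\n",
    "# Session 会在多次请求之间自动保持 cookie\n",
    "session = requests.Session()\n",
    "session.headers.update({\n",
    "    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'  # 示例 UA\n",
    "})\n",
    "\n",
    "# 这里手动设置一个 cookie 做演示；真实场景中它来自服务器响应\n",
    "session.cookies.set('session_id', 'demo-cookie')\n",
    "\n",
    "def polite_get(url, delay=1.0):\n",
    "    \"\"\"发送请求前先等待 delay 秒，实现最简单的速率限制。\"\"\"\n",
    "    time.sleep(delay)\n",
    "    return session.get(url)\n",
    "\n",
    "print(session.headers['User-Agent'])\n",
    "print(session.cookies.get('session_id'))\n",
    "```"
   ]
  },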
  {
   "cell_type": "markdown",
   "id": "4e3cbe57-b9b4-4b74-a6ae-4b0fef30fa82",
   "metadata": {},
   "source": [
    "#### 5.1 异步爬取\n",
    "\n",
    "- 理解异步爬取\n",
    "\n",
    "异步爬取，也称为异步抓取或并发抓取，是网络爬取中一种用于提高数据从多个网页中提取的效率和速度的技术。在传统的爬取中，请求是同步发送和处理的，这意味着每个请求都必须等待响应，然后才能进行下一个请求。这可能导致显著的延迟和性能降低，特别是在处理大量网页时。\n",
    "\n",
    "异步爬取通过允许同时进行多个请求并独立处理，而无需等待每个响应来解决这个问题。这使得爬取脚本可以利用并行处理，并最大限度地利用系统资源。因此，整体爬取速度可以显著提高。\n",
    "\n",
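    "下面用标准库 asyncio 做一个最小示意（用 asyncio.sleep 模拟网络 I/O，不发送真实请求；URL 仅为占位符）：三个“请求”各耗时 0.2 秒，并发执行后总耗时接近 0.2 秒而非 0.6 秒：\n",
    "\n",
    "```\n",
    "import asyncio\n",
    "import time\n",
    "\n",
    "async def fake_fetch(url):\n",
    "    # 用 asyncio.sleep 模拟一次耗时 0.2 秒的网络请求\n",
    "    await asyncio.sleep(0.2)\n",
    "    return f'{url} 完成'\n",
    "\n",
    "async def main():\n",
    "    urls = ['page1', 'page2', 'page3']\n",
    "    # asyncio.gather 并发调度所有协程，互不等待\n",
    "    results = await asyncio.gather(*(fake_fetch(u) for u in urls))\n",
    "    for r in results:\n",
    "        print(r)\n",
    "\n",
    "start = time.perf_counter()\n",
    "asyncio.run(main())\n",
    "print(f'总耗时约 {time.perf_counter() - start:.1f} 秒')\n",
    "```\n",
    "\n",
    "真实爬取中，可将 asyncio.sleep 换成 aiohttp 等异步 HTTP 客户端的请求调用。\n",
    "\n",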
    "#### 5.2 Selenium\n",
    "\n",
    "Selenium 是一个流行的开源库，提供了用于自动化 Web 浏览器的编程接口。它使开发人员能够自动执行浏览器操作，与 Web 元素进行交互，并在 Web 页面上执行各种任务。Selenium 支持多种编程语言，包括 Python、Java、C# 等。在本文中，我们将重点介绍 Python 中的 Selenium。\n",
    "\n",
    "Selenium 库的关键特性和组件包括：\n",
    "\n",
    "1. WebDriver：WebDriver 是 Selenium 的核心组件，提供了与 Web 浏览器进行交互的编程接口。它允许您自动执行浏览器操作，如导航到 URL、填写表单、点击按钮，并从 Web 元素中提取数据。\n",
    "2. Selenium WebDriver API：Selenium WebDriver 提供了与不同浏览器交互的 API，包括 Chrome、Firefox、Safari、Edge 等。每个浏览器都需要一个特定的 WebDriver，它充当 Selenium 库和浏览器之间的桥梁。\n",
    "3. 定位元素：Selenium 提供了各种方法来定位 Web 页面上的元素，如通过它们的 ID、类名、标签名、CSS 选择器或 XPath 查找元素。这些方法使您能够识别并与 Web 页面上的特定元素进行交互。\n",
    "4. 与元素交互：Selenium 允许您通过执行操作与 Web 元素进行交互，如点击按钮、填写表单、从下拉列表中选择选项、提交表单，甚至模拟键盘输入。您还可以检索元素属性、文本或执行其他操作。\n",
    "5. 导航和操作浏览器窗口：Selenium 提供了处理多个浏览器窗口或选项卡的方法。您可以在窗口之间切换、打开新窗口或关闭现有窗口。它还允许您控制浏览器的大小、位置，并执行滚动操作。\n",
    "6. 高级交互：Selenium 支持与 Web 元素的高级交互，如悬停在元素上、双击、拖放以及在浏览器内执行 JavaScript 代码。\n",
    "7. 等待元素：Selenium 提供了显式和隐式等待机制来处理动态网页。您可以在执行操作之前等待特定条件，例如等待元素可见、可点击或在页面上存在。\n",
    "\n",
    "Selenium 广泛用于各种目的，包括 Web 抓取、自动化测试、浏览器自动化和 Web 应用程序开发。它提供了在不同浏览器和平台上的灵活性和兼容性，使其成为自动化浏览器交互的多功能工具。\n",
    "\n",
    "在 Python 中，可以使用 pip 命令安装 Selenium 库，命令为 pip install selenium。此外，您需要下载并设置相应的 WebDriver（https://chromedriver.chromium.org/downloads）以用于您打算自动化的浏览器。\n",
    "\n",
    "Selenium 文档、教程和社区资源可在官方 Selenium 网站（https://www.selenium.dev/）上找到。这些资源提供了详细的信息、示例和最佳实践，帮助您充分利用 Selenium 库来满足您的 Web 自动化需求。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 79,
   "id": "e17b21ad-0bc0-4c3f-89ad-9d8cc88d1a57",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple\n",
      "Requirement already satisfied: selenium in d:\\anaconda3\\lib\\site-packages (4.20.0)\n",
      "Requirement already satisfied: urllib3[socks]<3,>=1.26 in d:\\anaconda3\\lib\\site-packages (from selenium) (1.26.16)\n",
      "Requirement already satisfied: trio~=0.17 in d:\\anaconda3\\lib\\site-packages (from selenium) (0.25.0)\n",
      "Requirement already satisfied: trio-websocket~=0.9 in d:\\anaconda3\\lib\\site-packages (from selenium) (0.11.1)\n",
      "Requirement already satisfied: certifi>=2021.10.8 in d:\\anaconda3\\lib\\site-packages (from selenium) (2024.2.2)\n",
      "Requirement already satisfied: typing_extensions>=4.9.0 in d:\\anaconda3\\lib\\site-packages (from selenium) (4.11.0)\n",
      "Requirement already satisfied: attrs>=23.2.0 in d:\\anaconda3\\lib\\site-packages (from trio~=0.17->selenium) (23.2.0)\n",
      "Requirement already satisfied: sortedcontainers in d:\\anaconda3\\lib\\site-packages (from trio~=0.17->selenium) (2.4.0)\n",
      "Requirement already satisfied: idna in d:\\anaconda3\\lib\\site-packages (from trio~=0.17->selenium) (3.4)\n",
      "Requirement already satisfied: outcome in d:\\anaconda3\\lib\\site-packages (from trio~=0.17->selenium) (1.3.0.post0)\n",
      "Requirement already satisfied: sniffio>=1.3.0 in d:\\anaconda3\\lib\\site-packages (from trio~=0.17->selenium) (1.3.1)\n",
      "Requirement already satisfied: cffi>=1.14 in d:\\anaconda3\\lib\\site-packages (from trio~=0.17->selenium) (1.15.1)\n",
      "Requirement already satisfied: wsproto>=0.14 in d:\\anaconda3\\lib\\site-packages (from trio-websocket~=0.9->selenium) (1.2.0)\n",
      "Requirement already satisfied: PySocks!=1.5.7,<2.0,>=1.5.6 in d:\\anaconda3\\lib\\site-packages (from urllib3[socks]<3,>=1.26->selenium) (1.7.1)\n",
      "Requirement already satisfied: pycparser in d:\\anaconda3\\lib\\site-packages (from cffi>=1.14->trio~=0.17->selenium) (2.21)\n",
      "Requirement already satisfied: h11<1,>=0.9.0 in d:\\anaconda3\\lib\\site-packages (from wsproto>=0.14->trio-websocket~=0.9->selenium) (0.14.0)\n"
     ]
    }
   ],
   "source": [
    "!pip install selenium"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 81,
   "id": "82c3b2a7-f4b7-40ec-9f92-604bad1ae504",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "     price comment              shopname  \\\n",
      "0  4299.00    20万+             小米京东自营旗舰店   \n",
      "0  1399.00   100万+             荣耀京东自营旗舰店   \n",
      "0   799.00    50万+             小米京东自营旗舰店   \n",
      "0   584.00   100万+             荣耀京东自营旗舰店   \n",
      "0  1599.00    10万+           华为京东自营官方旗舰店   \n",
      "0  1499.00    50万+             小米京东自营旗舰店   \n",
      "0  1999.00   1000+         iQOO京东自营官方旗舰店   \n",
      "0  1199.00    20万+             小米京东自营旗舰店   \n",
      "0  1999.00     2万+             小米京东自营旗舰店   \n",
      "0  1899.00    500+         OPPO京东自营官方旗舰店   \n",
      "0   498.00     5万+                 魅紫旗舰店   \n",
      "0  1079.00     1万+          京东手机运营商自营旗舰店   \n",
      "0  2599.00    20万+         一加手机京东自营官方旗舰店   \n",
      "0  1878.00    10万+             荣耀京东自营旗舰店   \n",
      "0   299.00     2万+  百事乐（LEBEST）手机京东自营旗舰店   \n",
      "0  2399.00    50万+             小米京东自营旗舰店   \n",
      "0   499.00     2万+    天语（K-TOUCH）京东自营旗舰店   \n",
      "0  1448.00   2000+          京东手机运营商自营旗舰店   \n",
      "0  5099.00    20万+             小米京东自营旗舰店   \n",
      "0  3999.00     2万+         OPPO京东自营官方旗舰店   \n",
      "0  9399.00   100万+        Apple产品京东自营旗舰店   \n",
      "0    88.00    20万+           纽曼京东自营官方旗舰店   \n",
      "0   462.00     1万+             京东通信自营旗舰店   \n",
      "0  5199.00   200万+        Apple产品京东自营旗舰店   \n",
      "0  2099.00    500+         OPPO京东自营官方旗舰店   \n",
      "0   539.00    20万+             小米京东自营旗舰店   \n",
      "0  1299.00   100万+         OPPO京东自营官方旗舰店   \n",
      "0   596.00     5万+         vivo京东自营官方旗舰店   \n",
      "0   218.00     1万+            星时代二手手机专营店   \n",
      "0   876.00   5000+           华为移动京东自营专卖店   \n",
      "\n",
      "                                               URL  \\\n",
      "0    https://item.jd.com/100071377749.html#comment   \n",
      "0    https://item.jd.com/100057334060.html#comment   \n",
      "0    https://item.jd.com/100044835935.html#comment   \n",
      "0    https://item.jd.com/100020974898.html#comment   \n",
      "0    https://item.jd.com/100081500557.html#comment   \n",
      "0    https://item.jd.com/100068892967.html#comment   \n",
      "0    https://item.jd.com/100105483276.html#comment   \n",
      "0    https://item.jd.com/100058934613.html#comment   \n",
      "0    https://item.jd.com/100094231749.html#comment   \n",
      "0    https://item.jd.com/100094995977.html#comment   \n",
      "0  https://item.jd.com/10067430999787.html#comment   \n",
      "0    https://item.jd.com/100080366418.html#comment   \n",
      "0    https://item.jd.com/100078549401.html#comment   \n",
      "0    https://item.jd.com/100080686409.html#comment   \n",
      "0    https://item.jd.com/100075061966.html#comment   \n",
      "0    https://item.jd.com/100078020142.html#comment   \n",
      "0    https://item.jd.com/100053872573.html#comment   \n",
      "0    https://item.jd.com/100087551516.html#comment   \n",
      "0    https://item.jd.com/100071377745.html#comment   \n",
      "0    https://item.jd.com/100080133335.html#comment   \n",
      "0    https://item.jd.com/100068388451.html#comment   \n",
      "0    https://item.jd.com/100035815878.html#comment   \n",
      "0    https://item.jd.com/100057801320.html#comment   \n",
      "0    https://item.jd.com/100066896356.html#comment   \n",
      "0    https://item.jd.com/100106836722.html#comment   \n",
      "0    https://item.jd.com/100041367878.html#comment   \n",
      "0    https://item.jd.com/100031192620.html#comment   \n",
      "0    https://item.jd.com/100071821822.html#comment   \n",
      "0  https://item.jd.com/10054562940406.html#comment   \n",
      "0    https://item.jd.com/100081377034.html#comment   \n",
      "\n",
      "                                               title  \\\n",
      "0  小米14 徕卡光学镜头 光影猎人900 徕卡75mm浮动长焦 澎湃OS 16+512 黑色 ...   \n",
      "0  荣耀X50 第一代骁龙6芯片 1.5K超清护眼硬核曲屏 5800mAh超耐久大电池 5G手机...   \n",
      "0  小米（MI）Redmi Note12 5G 120Hz OLED屏幕  骁龙4移动平台 50...   \n",
      "0  荣耀畅玩20 5000mAh超大电池续航 6.5英寸大屏  莱茵护眼 6GB+128GB 钛...   \n",
      "0  华为畅享 70 Pro 1亿像素超清影像40W超级快充5000mAh大电池长续航 256GB...   \n",
      "0  小米Redmi Note13Pro 新2亿像素 第二代1.5K高光屏 8GB+256GB 子...   \n",
      "0  vivoiQOO Z9 Turbo 12GB+256GB 星芒白第三代骁龙8s独显芯片Tur...   \n",
      "0  小米（MI）Redmi Note 12T Pro 5G 天玑8200-Ultra 真旗舰芯 ...   \n",
      "0  小米Redmi Turbo 3 第三代骁龙8s 小米澎湃OS 12+256 墨晶  AI功能...   \n",
      "0  OPPO K12 5G 100W闪充 5500mAh超长续航 第三代骁龙7旗舰芯 直屏新款拍...   \n",
      "0  魅紫新款15promax灵动岛大屏智能手机256g可用5G卡4g全网通电竞游戏长续航全新学生...   \n",
      "0  华为 华为/HUAWEI 畅享 70 6000mAh大电池 长续航 畅享X键一键直达 256...   \n",
      "0  一加 Ace 3 12GB+256GB 星辰黑 1.5K 东方屏 第二代骁龙 8 旗舰芯片 ...   \n",
      "0  荣耀X50 GT 骁龙8+芯片 苍穹散热系统 灵龙触控引擎 5800mAh电池 1.5K抗摔...   \n",
      "0  百事乐（LEBEST）L23pro全新超薄八核智能手机学生价便宜大屏百元机长续航老人老年备用...   \n",
      "0  小米Redmi K70 第二代骁龙8 澎湃OS 12GB+256GB 墨羽 红米K70 手机...   \n",
      "0  天语全新256GB灵动屏 八核智能手机 超薄电竞游戏全网通 学生安卓百元老人机长续航 X14...   \n",
      "0  华为畅享 70 Pro 1亿像素超清影像40W超级快充5000mAh大电池长续航 256GB...   \n",
      "0  小米14Pro 徕卡可变光圈镜头 光影猎人900 澎湃OS 16+512 黑色 5G AI手...   \n",
      "0  OPPO Find X7 12GB+256GB 海阔天空 天玑 9300 超光影三主摄 专业...   \n",
      "0  Apple/苹果 iPhone 15 Pro Max (A3108) 256GB 原色钛金属...   \n",
      "0  纽曼（Newman）M560(J) 星空黑 4G全网通老人手机 双卡双待超长待机 大字大声大...   \n",
      "0  荣耀畅玩20 5000mAh超大电池续航 6.5英寸大屏 莱茵护眼 4GB+64GB 全网通...   \n",
      "0  Apple/苹果 iPhone 15 (A3092) 128GB 黑色 支持移动联通电信5G...   \n",
      "0  OPPO K12 5G 100W闪充 5500mAh超长续航 第三代骁龙7旗舰芯 直屏新款拍...   \n",
      "0  小米（MI）Redmi 12C Helio G85 性能芯 5000万高清双摄 5000mA...   \n",
      "0  OPPO K9x 天玑 810 5000mAh长续航 快充 8GB+256GB 银紫超梦 老...   \n",
      "0  vivo Y33t 6GB+128GB 丛野绿 5000mAh电池 后置1300万像素 八核...   \n",
      "0  拍拍\\t\\n华为（HUAWEI）华为畅享9 二手手机 智能机 工作机 全网通4G 双卡双待 ...   \n",
      "0   华为畅享 70 6000mAh大电池 长续航 畅享X键一键直达 128GB 翡冷翠 鸿蒙智能手机   \n",
      "\n",
      "                                             pnglink  \n",
      "0  https://img14.360buyimg.com/n7/jfs/t1/158279/4...  \n",
      "0  https://img10.360buyimg.com/n7/jfs/t1/235790/2...  \n",
      "0  https://img10.360buyimg.com/n7/jfs/t1/110250/8...  \n",
      "0  https://img13.360buyimg.com/n7/jfs/t1/186256/6...  \n",
      "0  https://img12.360buyimg.com/n7/jfs/t1/102841/8...  \n",
      "0  https://img12.360buyimg.com/n7/jfs/t1/216782/2...  \n",
      "0  https://img11.360buyimg.com/n7/jfs/t1/130827/2...  \n",
      "0  https://img13.360buyimg.com/n7/jfs/t1/158059/3...  \n",
      "0  https://img14.360buyimg.com/n7/jfs/t1/60617/13...  \n",
      "0  https://img12.360buyimg.com/n7/jfs/t1/175664/2...  \n",
      "0  https://img12.360buyimg.com/n7/jfs/t1/238269/1...  \n",
      "0  https://img13.360buyimg.com/n7/jfs/t1/184493/2...  \n",
      "0  https://img11.360buyimg.com/n7/jfs/t1/235844/2...  \n",
      "0  https://img14.360buyimg.com/n7/jfs/t1/242181/3...  \n",
      "0  https://img11.360buyimg.com/n7/jfs/t1/201538/3...  \n",
      "0  https://img12.360buyimg.com/n7/jfs/t1/234399/2...  \n",
      "0  https://img13.360buyimg.com/n7/jfs/t1/226922/2...  \n",
      "0  https://img11.360buyimg.com/n7/jfs/t1/163890/3...  \n",
      "0  https://img10.360buyimg.com/n7/jfs/t1/162711/2...  \n",
      "0  https://img10.360buyimg.com/n7/jfs/t1/163222/3...  \n",
      "0  https://img11.360buyimg.com/n7/jfs/t1/168836/1...  \n",
      "0  https://img13.360buyimg.com/n7/jfs/t1/226083/1...  \n",
      "0  https://img10.360buyimg.com/n7/jfs/t1/238236/3...  \n",
      "0  https://img11.360buyimg.com/n7/jfs/t1/185376/8...  \n",
      "0  https://img12.360buyimg.com/n7/jfs/t1/228448/2...  \n",
      "0  https://img13.360buyimg.com/n7/jfs/t1/101162/2...  \n",
      "0  https://img10.360buyimg.com/n7/jfs/t1/245255/3...  \n",
      "0  https://img12.360buyimg.com/n7/jfs/t1/224939/1...  \n",
      "0  https://img11.360buyimg.com/n7/jfs/t1/224248/1...  \n",
      "0  https://img14.360buyimg.com/n7/jfs/t1/178106/3...  \n"
     ]
    }
   ],
   "source": [
    "from selenium import webdriver\n",
    "from selenium.webdriver.support import expected_conditions as EC\n",
    "from selenium.webdriver.support.wait import WebDriverWait\n",
    "from selenium.webdriver.common.by import By\n",
    "from time import sleep\n",
    "from lxml import etree\n",
    "import pandas as pd\n",
    "\n",
    "class JdMobileScraper:\n",
    "    def __init__(self, pages=2):\n",
    "        self.url = 'https://www.jd.com/'\n",
    "        self.pages = pages\n",
    "        self.driver = webdriver.Chrome()\n",
    "        self.wait = WebDriverWait(self.driver, 10)\n",
    "        self.data = pd.DataFrame()\n",
    "\n",
    "    def open_html(self):\n",
    "        self.driver.get(self.url)\n",
    "\n",
    "    def search_product(self, key):\n",
    "        search_box = self.wait.until(EC.presence_of_element_located((By.ID, 'key')))\n",
    "        search_box.send_keys(key)\n",
    "        search_button = self.wait.until(EC.element_to_be_clickable((By.CLASS_NAME, 'button')))\n",
    "        search_button.click()\n",
    "\n",
    "    def scrape_pages(self):\n",
    "        for _ in range(self.pages):\n",
    "            self.scroll_down()\n",
    "            self.get_content()\n",
    "            self.next_page()\n",
    "\n",
    "    def scroll_down(self):\n",
    "        for _ in range(2):\n",
    "            self.driver.execute_script(\"window.scrollTo(0, document.body.scrollHeight);\")\n",
    "            sleep(3)\n",
    "\n",
    "    def get_content(self):\n",
    "        html = etree.HTML(self.driver.page_source)\n",
    "        items = html.xpath('//div[@class=\"gl-i-wrap\"]')\n",
    "        for item in items:\n",
    "            D = {}\n",
    "            D['price'] = item.xpath('.//div[@class=\"p-price\"]/strong/i/text()')[0]\n",
    "            D['comment'] = item.xpath('.//div[@class=\"p-commit\"]/strong/a/text()')[0]\n",
    "            shopname = item.xpath('.//div[@class=\"p-shop\"]/span/a/text()')\n",
    "            D['shopname'] = shopname[0] if shopname else 'None'\n",
    "            D['URL'] = 'https:' + item.xpath('.//div[@class=\"p-commit\"]/strong/a/@href')[0]\n",
    "            title = item.xpath('.//div[@class=\"p-name p-name-type-2\"]/a/em')[0].xpath('string(.)').strip()\n",
    "            D['title'] = title\n",
    "            image_url = item.xpath('.//div[@class=\"p-img\"]/a/img/@data-lazy-img')\n",
    "            D['pnglink'] = 'https:' + image_url[0] if image_url and image_url[0] != 'done' else 'https:' + item.xpath('.//div[@class=\"p-img\"]/a/img/@src')[0]\n",
    "            self.data = pd.concat([self.data, pd.DataFrame([D])])\n",
    "\n",
    "    def next_page(self):\n",
    "        next_button = self.wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id=\"J_bottomPage\"]/span[1]/a[9]')))\n",
    "        self.driver.execute_script(\"arguments[0].click();\", next_button)\n",
    "        sleep(4)\n",
    "\n",
    "    def run(self, key):\n",
    "        self.open_html()\n",
    "        self.search_product(key)\n",
    "        self.scrape_pages()\n",
    "        self.driver.quit()\n",
    "        return self.data\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    jd_scraper = JdMobileScraper()\n",
    "    data = jd_scraper.run('手机')\n",
    "    print(data)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "89415415-fd3a-41a9-9791-0599e7867271",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
