{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "1670b03f",
   "metadata": {},
   "source": [
    "## 爬虫介绍"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f5ab4a07",
   "metadata": {},
   "source": [
    "**网络爬虫**：如果把互联网比喻成一个蜘蛛网，那么网络爬虫就是在网上爬来爬去的蜘蛛，爬虫程序通过请求url地址，根据响应的内容进行解析采集数据。\n",
    "\n",
    "比如：如果响应内容是html，分析dom结构，进行dom解析、或者正则匹配，如果响应内容是xml/json数据，就可以转数据对象，然后对数据进行解析"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e2466a19",
   "metadata": {},
   "source": [
    "![](01_爬虫基础与数据提取_images/01.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "db605c30",
   "metadata": {},
   "source": [
    "通过有效的爬虫手段批量采集数据，可以降低人工成本，提高有效数据量，给予运营/销售的数据支撑，加快产品发展。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bb61a459",
   "metadata": {},
   "source": [
    "**反爬虫一些手段：**\n",
    "\n",
    "- 合法检测：请求校验(useragent，referer，接口加签名，等)\n",
    "- 小黑屋：IP/用户限制请求频率，或者直接拦截\n",
    "- 投毒：投毒返回虚假数据，可以误导竞品决策"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d26f983a",
   "metadata": {},
   "source": [
    "## urllib库"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "3f96baa5",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-23T03:04:11.935280Z",
     "start_time": "2023-10-23T03:04:11.724487Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<!DOCTYPE html><!--STATUS OK--><html><head><meta http-equiv=\"Content-Type\" content=\"text/html;charse\n"
     ]
    }
   ],
   "source": [
    "\"\"\"牛刀小试：爬取一个页面\"\"\"\n",
    "from urllib.request import urlopen\n",
    "\n",
    "# 发送请求，并将结果返回给 response\n",
    "response = urlopen(\"http://www.baidu.com/\")\n",
    "\n",
    "# response.read()返回的是 bytes类型，需要进行decode解码。\n",
    "# 由于网页信息过多，我们只查看 前100个数据\n",
    "print(response.read().decode()[:100])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4f825423",
   "metadata": {},
   "source": [
    "### 常用方法"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "36c4c9a5",
   "metadata": {},
   "source": [
    "- requset.urlopen(url, data, timeout)\n",
    "  \n",
    "  - 第一个参数url即为URL，第二个参数data是访问URL时要传送的数据，第三个timeout是设置超时时间。\n",
    "    \n",
    "  - 第二三个参数是可以不传送的，data默认为空None，timeout默认为 socket._GLOBAL_DEFAULT_TIMEOUT\n",
    "    \n",
    "  - 第一个参数URL是必须要传送的，在这个例子里面我们传送了百度的URL，执行urlopen方法之后，返回一个response对象，返回信息便保存在这里面。\n",
    "    \n",
    "- response.read()\n",
    "  \n",
    "  - read()方法就是读取文件里的全部内容，返回bytes类型\n",
    "  \n",
    "- response.getcode()\n",
    "  \n",
    "  - 返回 HTTP的响应码，成功返回200，4服务器页面出错，5服务器问题\n",
    "  \n",
    "- response.geturl()\n",
    "  \n",
    "  - 返回 实际数据的实际URL，防止重定向问题\n",
    "  \n",
    "- response.info()\n",
    "  \n",
    "  - 返回 服务器响应的HTTP报头\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "9f3178d6",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-23T03:32:56.025180Z",
     "start_time": "2023-10-23T03:32:55.815607Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "b'<!DOCTYPE html><!--STATUS OK--><html><head><meta http-equiv=\"Content-Type\" content=\"text/html;charse'\n",
      "200\n",
      "http://www.baidu.com/\n",
      "Content-Length: 399285\n",
      "Content-Security-Policy: frame-ancestors 'self' https://chat.baidu.com http://mirror-chat.baidu.com https://fj-chat.baidu.com https://hba-chat.baidu.com https://hbe-chat.baidu.com https://njjs-chat.baidu.com https://nj-chat.baidu.com https://hna-chat.baidu.com https://hnb-chat.baidu.com http://debug.baidu-int.com;\n",
      "Content-Type: text/html; charset=utf-8\n",
      "Date: Mon, 23 Oct 2023 03:32:55 GMT\n",
      "Server: BWS/1.1\n",
      "Set-Cookie: BIDUPSID=ACCC98175A66C3937951F3B3FB190FF1; expires=Thu, 31-Dec-37 23:55:55 GMT; max-age=2147483647; path=/; domain=.baidu.com\n",
      "Set-Cookie: PSTM=1698031975; expires=Thu, 31-Dec-37 23:55:55 GMT; max-age=2147483647; path=/; domain=.baidu.com\n",
      "Set-Cookie: BAIDUID=ACCC98175A66C3937951F3B3FB190FF1:FG=1; Path=/; Domain=baidu.com; Max-Age=31536000\n",
      "Set-Cookie: BAIDUID_BFESS=ACCC98175A66C3937951F3B3FB190FF1:FG=1; Path=/; Domain=baidu.com; Max-Age=31536000; Secure; SameSite=None\n",
      "Traceid: 1698031975351503386617655246480832879923\n",
      "Vary: Accept-Encoding\n",
      "X-Ua-Compatible: IE=Edge,chrome=1\n",
      "Connection: close\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "from urllib.request import urlopen\n",
    "\n",
    "url = 'http://www.baidu.com/'\n",
    "\n",
    "# 发送请求，并将结果返回给 response\n",
    "response = urlopen(url) \n",
    "\n",
    "# 读取数据\n",
    "print(response.read()[:100])\n",
    "\n",
    "# 获取相应码\n",
    "print(response.getcode())\n",
    "\n",
    "# 返回数据实际的 URL地址 -- 防止重定向问题\n",
    "print(response.geturl())\n",
    "\n",
    "# 获取响应头的信息\n",
    "print(response.info())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cb2c852d",
   "metadata": {},
   "source": [
    "### Request对象\n",
    "其实上面的 urlopen参数 可以传入一个 request请求, 即一个Request类的实例，构造时需要传入Url,Data等等的内容"
   ]
  },
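  {
   "cell_type": "markdown",
   "id": "ad01a001",
   "metadata": {},
   "source": [
    "A Request object can be inspected before it is ever sent -- construction is purely local. A minimal sketch (the httpbin.org URL and the 'test-agent' string are just placeholders; nothing is transmitted):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad01a002",
   "metadata": {},
   "outputs": [],
   "source": [
    "from urllib.request import Request\n",
    "\n",
    "# Build a Request without sending it\n",
    "req = Request('http://httpbin.org/get', headers={'User-Agent': 'test-agent'})\n",
    "\n",
    "print(req.full_url)                  # the URL the request targets\n",
    "print(req.get_method())              # 'GET', since no data was passed\n",
    "print(req.get_header('User-agent'))  # the header set at construction time"
   ]
  },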
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "d107c4c5",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-23T03:39:18.627510Z",
     "start_time": "2023-10-23T03:39:16.681279Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\n",
      "  \"args\": {}, \n",
      "  \"headers\": {\n",
      "    \"Accept-Encoding\": \"identity\", \n",
      "    \"Host\": \"httpbin.org\", \n",
      "    \n"
     ]
    }
   ],
   "source": [
    "from urllib.request import urlopen\n",
    "from urllib.request import Request\n",
    "from random import choice\n",
    "\n",
    "url = 'http://httpbin.org/get'\n",
    "\n",
    "# 随机生成用户代理 -- 发送请求时，会向服务器发送用户代理信息，服务器可以知道请求是从哪个浏览器上来的\n",
    "# 爬虫程序，其用户代理是 python文件，不修改用户代理，服务器可以知道 请求是由爬虫程序发出的\n",
    "user_agent_list = ['ua1', 'ua2', 'ua3']\n",
    "headers = {\n",
    "    'User-Agent': choice(user_agent_list)\n",
    "}\n",
    "\n",
    "# 构建 request对象，传入相应的头部信息\n",
    "req = Request(url, headers=headers)\n",
    "\n",
    "# 给网页发送请求，返回结果\n",
    "resp = urlopen(req)\n",
    "\n",
    "# 打印信息\n",
    "print(resp.read().decode()[:100])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "679abd4c",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-23T03:43:18.146470Z",
     "start_time": "2023-10-23T03:43:17.524313Z"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Error occurred during getting browser: opera, but was suppressed with fallback.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\n",
      "  \"args\": {}, \n",
      "  \"headers\": {\n",
      "    \"Accept-Encoding\": \"identity\", \n",
      "    \"Host\": \"httpbin.org\", \n",
      "    \n"
     ]
    }
   ],
   "source": [
    "\"\"\"fake_useragent使用\"\"\"\n",
    "from urllib.request import urlopen\n",
    "from urllib.request import Request\n",
    "from fake_useragent import UserAgent\n",
    "\n",
    "url = 'http://httpbin.org/get'\n",
    "\n",
    "# 为了让服务器无法识别是爬虫程序发送的请求，我们用 UserAgent对象，随机生成用户代理\n",
    "ua = UserAgent()\n",
    "headers = {\n",
    "    'User-Agent':ua.opera\n",
    "}\n",
    "\n",
    "# 构造 Request对象\n",
    "req = Request(url,headers=headers)\n",
    "\n",
    "# 发送请求，返回结果\n",
    "resp = urlopen(req)\n",
    "\n",
    "print(resp.read().decode()[:100])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "88809504",
   "metadata": {},
   "source": [
    "### Get 请求\n",
    "\n",
    "大部分被传输到浏览器的html，images，js，css, … 都是通过 GET方法 发出请求的。它是获取数据的主要方法\n",
    "\n",
    "例如：www.baidu.com 搜索\n",
    "\n",
    "Get请求的参数都是在Url中体现的, 如果有中文，需要转码，这时我们可使用\n",
    "\n",
    "- urllib.parse.urlencode()\n",
    "  \n",
    "- urllib.parse.quote()"
   ]
  },
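  {
   "cell_type": "markdown",
   "id": "ad01a003",
   "metadata": {},
   "source": [
    "Both helpers can be tried offline before wiring them into a request -- quote percent-encodes a single string, while urlencode builds a whole query string from a dict. A quick sketch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad01a004",
   "metadata": {},
   "outputs": [],
   "source": [
    "from urllib.parse import quote, urlencode, unquote\n",
    "\n",
    "# quote percent-encodes one string (UTF-8 bytes -> %XX)\n",
    "print(quote('牛'))                         # %E7%89%9B\n",
    "\n",
    "# urlencode builds a query string from a dict, quoting each value\n",
    "print(urlencode({'wd': '牛', 'pn': 10}))   # wd=%E7%89%9B&pn=10\n",
    "\n",
    "# unquote reverses the encoding\n",
    "print(unquote('%E7%89%9B'))                # 牛"
   ]
  },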
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "e388e18c",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-23T03:46:40.637572Z",
     "start_time": "2023-10-23T03:46:36.659847Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "请输入要搜索的内容:牛\n",
      "%E7%89%9B\n",
      "<!DOCTYPE html>\n",
      "<html lang=\"zh-CN\">\n",
      "<head>\n",
      "    <meta charset=\"utf-8\">\n",
      "    <title>百度安全验证</title>\n",
      "    \n"
     ]
    }
   ],
   "source": [
    "\"\"\"quote: 进行URL编码，发送中文请求\"\"\"\n",
    "\n",
    "from urllib.request import urlopen,Request\n",
    "from fake_useragent import UserAgent\n",
    "from urllib.parse import quote\n",
    "\n",
    "args =input('请输入要搜索的内容:')\n",
    "\n",
    "# 进行 url编码\n",
    "print(quote(args))\n",
    "\n",
    "url = f'https://www.baidu.com/s?wd={quote(args)}'\n",
    "\n",
    "# 随机生成用户代理\n",
    "ua = UserAgent()\n",
    "headers = {\n",
    "    'User-Agent':ua.chrome\n",
    "}\n",
    "\n",
    "# 创建 request对象\n",
    "req = Request(url,headers = headers)\n",
    "\n",
    "# 发送请求\n",
    "resp = urlopen(req)\n",
    "\n",
    "# 打印信息\n",
    "print(resp.read().decode()[:100])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "60caecc6",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-23T03:57:19.233088Z",
     "start_time": "2023-10-23T03:57:16.821977Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "请输入要搜索的内容:牛\n",
      "<!DOCTYPE html>\n",
      "<html lang=\"zh-CN\">\n",
      "<head>\n",
      "    <meta charset=\"utf-8\">\n",
      "    <title>百度安全验证</title>\n",
      "    \n"
     ]
    }
   ],
   "source": [
    "\"\"\"urlencode: 对字典类型，进行URL编码，发送中文请求\"\"\"\n",
    "from urllib.request import parse_http_list, urlopen,Request\n",
    "from fake_useragent import UserAgent\n",
    "from urllib.parse import urlencode\n",
    "\n",
    "args =input('请输入要搜索的内容:')\n",
    "\n",
    "# 对中文请求，进行 URL编码\n",
    "parms ={\n",
    "    'wd':args\n",
    "}\n",
    "url = f'https://www.baidu.com/s?{urlencode(parms)}'\n",
    "\n",
    "# 随机生成用户代理\n",
    "ua = UserAgent()\n",
    "headers = {\n",
    "    'User-Agent':ua.chrome\n",
    "}\n",
    "\n",
    "# 创建 Request请求对象\n",
    "req = Request(url, headers=headers)\n",
    "\n",
    "# 发送请求\n",
    "resp = urlopen(req)\n",
    "\n",
    "print(resp.read().decode()[:100])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "7c3bb517",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-23T03:59:50.972112Z",
     "start_time": "2023-10-23T03:59:37.256718Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "请输入品牌：大众\n",
      "200\n",
      "200\n",
      "200\n"
     ]
    }
   ],
   "source": [
    "\"\"\"58同城车辆练习\"\"\"\n",
    "\n",
    "from urllib.request import Request,urlopen\n",
    "from fake_useragent import UserAgent\n",
    "from urllib.parse import quote\n",
    "from time import sleep\n",
    "\n",
    "args = input('请输入品牌：')\n",
    "\n",
    "for page in range(1,4):\n",
    "    \n",
    "    # URL地址\n",
    "    url =f'https://bj.58.com/ershouche/pn{page}/?key={quote(args)}'\n",
    "    sleep(1)\n",
    "    \n",
    "    # 随机生成用户代理\n",
    "    headers = {'User-Agent':UserAgent().chrome}\n",
    "    \n",
    "    # 创建 Request请求对象\n",
    "    req =  Request(url,headers = headers)\n",
    "    \n",
    "    # 发送请求\n",
    "    resp = urlopen(req)\n",
    "    \n",
    "    # 获取响应码\n",
    "    print(resp.getcode())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "624535f7",
   "metadata": {},
   "source": [
    "### Post 请求\n",
    "\n",
    "如果Request请求对象的里有data参数，它就发送的是POST请求，我们要传送的数据就是这个参数data，data是一个字典，里面是要匹配的键值对\n",
    "\n",
    "发送请求/响应header头的含义：\n",
    "\n",
    "![](01_爬虫基础与数据提取_images/02.png)"
   ]
  },
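  {
   "cell_type": "markdown",
   "id": "ad01a005",
   "metadata": {},
   "source": [
    "The switch from GET to POST can be observed without touching the network, since Request derives its method from whether data is present. A minimal sketch (the httpbin.org URL and the payload are placeholders; nothing is sent):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad01a006",
   "metadata": {},
   "outputs": [],
   "source": [
    "from urllib.request import Request\n",
    "from urllib.parse import urlencode\n",
    "\n",
    "# dict -> urlencoded string -> bytes, as required by the data parameter\n",
    "payload = urlencode({'searchword': 'data'}).encode()\n",
    "\n",
    "get_req = Request('http://httpbin.org/post')\n",
    "post_req = Request('http://httpbin.org/post', data=payload)\n",
    "\n",
    "print(get_req.get_method())    # GET  -- no data\n",
    "print(post_req.get_method())   # POST -- data present"
   ]
  },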
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "37a8b9d5",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-23T04:04:07.058982Z",
     "start_time": "2023-10-23T04:04:06.567682Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<!DOCTYPE html>\r\n",
      "<!--[if lt IE 10]><html class=\"ie lt10\"><![endif]-->\r\n",
      "<!--[if (gt IE 9) | !(IE)]><!\n"
     ]
    }
   ],
   "source": [
    "\"\"\"发送 POST请求\"\"\"\n",
    "from urllib.request import Request, urlopen\n",
    "from fake_useragent import UserAgent\n",
    "from urllib.parse import urlencode\n",
    "\n",
    "url = 'https://www.21wecan.com/rcwjs/searchlist.jsp'\n",
    "\n",
    "# 随机生成用户代理\n",
    "headers = {'User-Agent': UserAgent().chrome}\n",
    "\n",
    "# 进行 URL编码\n",
    "args = {\n",
    "    'searchword': '人才'\n",
    "}\n",
    "f_data = urlencode(args)\n",
    "\n",
    "# 创建 Request请求对象，如果传送了data参数，就会成为post请求 -- 需要对 URL编码进行二次压缩\n",
    "req = Request(url, headers=headers, data=f_data.encode())\n",
    "\n",
    "# 发送请求\n",
    "resp = urlopen(req)\n",
    "\n",
    "print(resp.read().decode()[:100])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "03492d41",
   "metadata": {},
   "source": [
    "### 响应状态码\n",
    "\n",
    "响应状态代码有三位数字组成，第一个数字定义了响应的类别，且有五种可能取值。常见状态码：\n",
    "\n",
    "![](01_爬虫基础与数据提取_images/03.png)"
   ]
  },
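  {
   "cell_type": "markdown",
   "id": "ad01a007",
   "metadata": {},
   "source": [
    "The standard library ships the full mapping from numeric codes to reason phrases, which is handy when checking a code returned by getcode(). A small sketch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad01a008",
   "metadata": {},
   "outputs": [],
   "source": [
    "from http.client import responses\n",
    "\n",
    "# First digit gives the category: 2xx success, 3xx redirect, 4xx client error, 5xx server error\n",
    "for code in (200, 301, 404, 500):\n",
    "    print(code, responses[code])"
   ]
  },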
  {
   "cell_type": "markdown",
   "id": "7143315b",
   "metadata": {},
   "source": [
    "### Ajax请求\n",
    "\n",
    "有些网页内容使用 AJAX 加载数据，而AJAX一般返回的是JSON, 直接对AJAX地址进行post或get，会返回JSON数据"
   ]
  },
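  {
   "cell_type": "markdown",
   "id": "ad01a009",
   "metadata": {},
   "source": [
    "Once the JSON text comes back, json.loads turns it into ordinary Python objects. A sketch using a hand-written payload in a similar shape (the field values here are made up):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad01a00a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "# A hypothetical AJAX-style payload\n",
    "raw = '{\"code\": 1, \"msg\": \"success\", \"data\": [{\"title\": \"headline 1\"}, {\"title\": \"headline 2\"}]}'\n",
    "\n",
    "obj = json.loads(raw)   # JSON text -> dict\n",
    "print(obj['msg'])                                 # success\n",
    "print([item['title'] for item in obj['data']])    # ['headline 1', 'headline 2']"
   ]
  },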
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "2af10c8f",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-23T04:08:18.031441Z",
     "start_time": "2023-10-23T04:08:17.850980Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"code\":1,\"internalCode\":\"PC000000\",\"msg\":\"success\",\"data\":[{\"img\":\"https://i5.hoopchina.com.cn/news\n"
     ]
    }
   ],
   "source": [
    "'''\n",
    "Static data:\n",
    "    the data you want can be fetched directly from the URL in the address bar.\n",
    "Dynamic data:\n",
    "    the URL in the address bar does not return the data you want.\n",
    "    Solution: capture the traffic --\n",
    "        open the browser developer tools (Network - XHR), find the URL that actually returns the data, and request that\n",
    "'''\n",
    "\n",
    "from urllib.request import Request,urlopen\n",
    "from fake_useragent import UserAgent\n",
    "\n",
    "url ='https://www.hupu.com/home/v1/news?pageNo=2&pageSize=50'\n",
    "\n",
    "# 随机生成用户代理\n",
    "headers = {'User-Agent':UserAgent().chrome}\n",
    "\n",
    "# 创建 Request对象\n",
    "req = Request(url,headers = headers)\n",
    "\n",
    "# 发送信息\n",
    "resp = urlopen(req)\n",
    "print(resp.read().decode()[:100])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d2fcd6a0",
   "metadata": {},
   "source": [
    "### 请求 SSL证书验证\n",
    "\n",
    "有些网站没有通过CA验证，没有SSL证书，那么我们的操作系统 不信任服务器的安全证书，我们对网页发送请求时，会被我们自己的操作系统所阻拦。\n",
    "\n",
    "我们可以设置忽略安全证书，对网页发送请求"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "1848f8c1",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-23T04:13:30.304514Z",
     "start_time": "2023-10-23T04:13:30.136546Z"
    }
   },
   "outputs": [],
   "source": [
    "import ssl\n",
    "from urllib.request import Request,urlopen\n",
    "\n",
    "# 忽略SSL安全认证\n",
    "context = ssl._create_unverified_context()\n",
    "\n",
    "url ='https://www.hupu.com/home/v1/news?pageNo=2&pageSize=50'\n",
    "\n",
    "# 发送请求时，添加到 context参数里\n",
    "response = urlopen(url, context = context)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "43ddfdbe",
   "metadata": {},
   "source": [
    "## urllib库高级"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0a29e572",
   "metadata": {},
   "source": [
    "### 设置请求头\n",
    "\n",
    "其中`User-Agent`代表用的哪个请求的浏览器"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "e66cedb6",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-23T04:19:30.170678Z",
     "start_time": "2023-10-23T04:19:29.923198Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'<!DOCTYPE html><html><head><title>虎扑体育-虎扑网</title><meta name=\"keywords\" content=\"虎扑,NBA,足球,英超,LPL\"/>'"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from urllib.request import urlopen\n",
    "from urllib.request import Request\n",
    "\n",
    "url = 'https://www.hupu.com'\n",
    "\n",
    "# 设置用户代理 -- 让服务器识别不出 我们是用爬虫程序发送的请求\n",
    "user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)' \n",
    "headers = { 'User-Agent' : user_agent }  \n",
    "\n",
    "# 创建 Request请求对象\n",
    "request = Request(url, headers=headers)\n",
    "\n",
    "# 发送请求\n",
    "response = urlopen(request)  \n",
    "response.read().decode()[:100]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cf42aed9",
   "metadata": {},
   "source": [
    "对付防盗链，服务器会识别 headers 中的referer是不是它自己，如果不是，有的服务器不会响应，所以我们还可以在headers中加入referer"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "02386abc",
   "metadata": {},
   "outputs": [],
   "source": [
    "headers = {\n",
    "    'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)',\n",
    "    'Referer': 'http://www.zhihu.com/articles'\n",
    "}\n"
   ]
  },
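  {
   "cell_type": "markdown",
   "id": "ad01a00b",
   "metadata": {},
   "source": [
    "Such a Referer is attached to a Request the same way as User-Agent, and can be read back before sending. A minimal offline sketch (the zhihu.com URLs are placeholders; nothing is sent):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad01a00c",
   "metadata": {},
   "outputs": [],
   "source": [
    "from urllib.request import Request\n",
    "\n",
    "headers = {\n",
    "    'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)',\n",
    "    'Referer': 'http://www.zhihu.com/articles'\n",
    "}\n",
    "\n",
    "req = Request('http://www.zhihu.com/articles/1', headers=headers)\n",
    "print(req.get_header('Referer'))   # the page the server will think linked here"
   ]
  },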
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "a05e9b28",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T00:04:17.411413Z",
     "start_time": "2023-10-24T00:04:17.400154Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mozilla/5.0 (Windows; U; Windows NT 5.2) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27\n"
     ]
    }
   ],
   "source": [
    "\"\"\"可以使用多个User_Agent: 然后随即选择\"\"\"\n",
    "\n",
    "import urllib.request\n",
    "import random\n",
    "\n",
    "# 随机选择用户代理\n",
    "ua_list = [\n",
    "    \"Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)\",\n",
    "    \"Mozilla/5.0 (Windows; U; Windows NT 5.2) Gecko/2008070208 Firefox/3.0.1\",\n",
    "    \"Mozilla/5.0 (Windows; U; Windows NT 5.2) AppleWebKit/525.13 (KHTML, like Gecko) Version/3.1\",\n",
    "    \"Mozilla/5.0 (Windows; U; Windows NT 5.2) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27\",\n",
    "    \"Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) ;  QIHU 360EE)\"\n",
    "]\n",
    "user_agent = random.choice(ua_list)\n",
    "\n",
    "# 创建请求对象\n",
    "request = urllib.request.Request(\"http://www.baidu.com\")\n",
    "\n",
    "# 添加用户代理\n",
    "request.add_header(\"User-Agent\",user_agent)\n",
    "\n",
    "# 获取用户代理\n",
    "print(request.get_header(\"User-agent\"))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d34ee089",
   "metadata": {},
   "source": [
    "### 设置代理Proxy\n",
    "\n",
    "> 假如一个网站它会检测某一段时间某个IP 的访问次数，如果访问次数过多，它会禁止你的访问。所以你可以设置一些代理服务器来帮助你做工作，每隔一段时间换一个代理\n",
    "\n",
    "**分类：**\n",
    "\n",
    "- 透明代理：目标网站知道你使用了代理并且知道你的源IP地址，这种代理显然不符合我们这里使用代理的初衷\n",
    "\n",
    "- 匿名代理：匿名程度比较低，也就是网站知道你使用了代理，但是并不知道你的源IP地址\n",
    "\n",
    "- 高匿代理：这是最保险的方式，目标网站既不知道你使用的代理更不知道你的源IP\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "7b8e1cc7",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T00:08:29.542504Z",
     "start_time": "2023-10-24T00:08:25.794034Z"
    },
    "collapsed": true
   },
   "outputs": [
    {
     "ename": "KeyboardInterrupt",
     "evalue": "",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mKeyboardInterrupt\u001b[0m                         Traceback (most recent call last)",
      "Input \u001b[0;32mIn [23]\u001b[0m, in \u001b[0;36m<cell line: 13>\u001b[0;34m()\u001b[0m\n\u001b[1;32m     10\u001b[0m url \u001b[38;5;241m=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mhttp://www.baidu.com\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m     12\u001b[0m \u001b[38;5;66;03m# 用代理发送请求\u001b[39;00m\n\u001b[0;32m---> 13\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[43mopener\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mopen\u001b[49m\u001b[43m(\u001b[49m\u001b[43murl\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     15\u001b[0m \u001b[38;5;66;03m# 打印信息\u001b[39;00m\n\u001b[1;32m     16\u001b[0m \u001b[38;5;28mprint\u001b[39m(response\u001b[38;5;241m.\u001b[39mread()\u001b[38;5;241m.\u001b[39mdecode(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mutf-8\u001b[39m\u001b[38;5;124m\"\u001b[39m)[:\u001b[38;5;241m100\u001b[39m])\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/urllib/request.py:517\u001b[0m, in \u001b[0;36mOpenerDirector.open\u001b[0;34m(self, fullurl, data, timeout)\u001b[0m\n\u001b[1;32m    514\u001b[0m     req \u001b[38;5;241m=\u001b[39m meth(req)\n\u001b[1;32m    516\u001b[0m sys\u001b[38;5;241m.\u001b[39maudit(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124murllib.Request\u001b[39m\u001b[38;5;124m'\u001b[39m, req\u001b[38;5;241m.\u001b[39mfull_url, req\u001b[38;5;241m.\u001b[39mdata, req\u001b[38;5;241m.\u001b[39mheaders, req\u001b[38;5;241m.\u001b[39mget_method())\n\u001b[0;32m--> 517\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_open\u001b[49m\u001b[43m(\u001b[49m\u001b[43mreq\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mdata\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    519\u001b[0m \u001b[38;5;66;03m# post-process response\u001b[39;00m\n\u001b[1;32m    520\u001b[0m meth_name \u001b[38;5;241m=\u001b[39m protocol\u001b[38;5;241m+\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m_response\u001b[39m\u001b[38;5;124m\"\u001b[39m\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/urllib/request.py:534\u001b[0m, in \u001b[0;36mOpenerDirector._open\u001b[0;34m(self, req, data)\u001b[0m\n\u001b[1;32m    531\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m result\n\u001b[1;32m    533\u001b[0m protocol \u001b[38;5;241m=\u001b[39m req\u001b[38;5;241m.\u001b[39mtype\n\u001b[0;32m--> 534\u001b[0m result \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_call_chain\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mhandle_open\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mprotocol\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mprotocol\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m+\u001b[39;49m\n\u001b[1;32m    535\u001b[0m \u001b[43m                          \u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43m_open\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mreq\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    536\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m result:\n\u001b[1;32m    537\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m result\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/urllib/request.py:494\u001b[0m, in \u001b[0;36mOpenerDirector._call_chain\u001b[0;34m(self, chain, kind, meth_name, *args)\u001b[0m\n\u001b[1;32m    492\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m handler \u001b[38;5;129;01min\u001b[39;00m handlers:\n\u001b[1;32m    493\u001b[0m     func \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mgetattr\u001b[39m(handler, meth_name)\n\u001b[0;32m--> 494\u001b[0m     result \u001b[38;5;241m=\u001b[39m \u001b[43mfunc\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    495\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m result \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m    496\u001b[0m         \u001b[38;5;28;01mreturn\u001b[39;00m result\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/urllib/request.py:1375\u001b[0m, in \u001b[0;36mHTTPHandler.http_open\u001b[0;34m(self, req)\u001b[0m\n\u001b[1;32m   1374\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mhttp_open\u001b[39m(\u001b[38;5;28mself\u001b[39m, req):\n\u001b[0;32m-> 1375\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mdo_open\u001b[49m\u001b[43m(\u001b[49m\u001b[43mhttp\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mclient\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mHTTPConnection\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mreq\u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/urllib/request.py:1346\u001b[0m, in \u001b[0;36mAbstractHTTPHandler.do_open\u001b[0;34m(self, http_class, req, **http_conn_args)\u001b[0m\n\u001b[1;32m   1344\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m   1345\u001b[0m     \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m-> 1346\u001b[0m         \u001b[43mh\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[43m(\u001b[49m\u001b[43mreq\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget_method\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mreq\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mselector\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mreq\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mdata\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mheaders\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1347\u001b[0m \u001b[43m                  \u001b[49m\u001b[43mencode_chunked\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mreq\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mhas_header\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mTransfer-encoding\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m)\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1348\u001b[0m     \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mOSError\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m err: \u001b[38;5;66;03m# timeout error\u001b[39;00m\n\u001b[1;32m   1349\u001b[0m         \u001b[38;5;28;01mraise\u001b[39;00m URLError(err)\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/http/client.py:1285\u001b[0m, in \u001b[0;36mHTTPConnection.request\u001b[0;34m(self, method, url, body, headers, encode_chunked)\u001b[0m\n\u001b[1;32m   1282\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mrequest\u001b[39m(\u001b[38;5;28mself\u001b[39m, method, url, body\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mNone\u001b[39;00m, headers\u001b[38;5;241m=\u001b[39m{}, \u001b[38;5;241m*\u001b[39m,\n\u001b[1;32m   1283\u001b[0m             encode_chunked\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mFalse\u001b[39;00m):\n\u001b[1;32m   1284\u001b[0m \u001b[38;5;250m    \u001b[39m\u001b[38;5;124;03m\"\"\"Send a complete request to the server.\"\"\"\u001b[39;00m\n\u001b[0;32m-> 1285\u001b[0m     \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_send_request\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmethod\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mbody\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mheaders\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mencode_chunked\u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/http/client.py:1331\u001b[0m, in \u001b[0;36mHTTPConnection._send_request\u001b[0;34m(self, method, url, body, headers, encode_chunked)\u001b[0m\n\u001b[1;32m   1327\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(body, \u001b[38;5;28mstr\u001b[39m):\n\u001b[1;32m   1328\u001b[0m     \u001b[38;5;66;03m# RFC 2616 Section 3.7.1 says that text default has a\u001b[39;00m\n\u001b[1;32m   1329\u001b[0m     \u001b[38;5;66;03m# default charset of iso-8859-1.\u001b[39;00m\n\u001b[1;32m   1330\u001b[0m     body \u001b[38;5;241m=\u001b[39m _encode(body, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mbody\u001b[39m\u001b[38;5;124m'\u001b[39m)\n\u001b[0;32m-> 1331\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mendheaders\u001b[49m\u001b[43m(\u001b[49m\u001b[43mbody\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mencode_chunked\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mencode_chunked\u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/http/client.py:1280\u001b[0m, in \u001b[0;36mHTTPConnection.endheaders\u001b[0;34m(self, message_body, encode_chunked)\u001b[0m\n\u001b[1;32m   1278\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m   1279\u001b[0m     \u001b[38;5;28;01mraise\u001b[39;00m CannotSendHeader()\n\u001b[0;32m-> 1280\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_send_output\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmessage_body\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mencode_chunked\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mencode_chunked\u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/http/client.py:1040\u001b[0m, in \u001b[0;36mHTTPConnection._send_output\u001b[0;34m(self, message_body, encode_chunked)\u001b[0m\n\u001b[1;32m   1038\u001b[0m msg \u001b[38;5;241m=\u001b[39m \u001b[38;5;124mb\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;130;01m\\r\u001b[39;00m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;241m.\u001b[39mjoin(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_buffer)\n\u001b[1;32m   1039\u001b[0m \u001b[38;5;28;01mdel\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_buffer[:]\n\u001b[0;32m-> 1040\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msend\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmsg\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1042\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m message_body \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m   1043\u001b[0m \n\u001b[1;32m   1044\u001b[0m     \u001b[38;5;66;03m# create a consistent interface to message_body\u001b[39;00m\n\u001b[1;32m   1045\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mhasattr\u001b[39m(message_body, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mread\u001b[39m\u001b[38;5;124m'\u001b[39m):\n\u001b[1;32m   1046\u001b[0m         \u001b[38;5;66;03m# Let file-like take precedence over byte-like.  This\u001b[39;00m\n\u001b[1;32m   1047\u001b[0m         \u001b[38;5;66;03m# is needed to allow the current position of mmap'ed\u001b[39;00m\n\u001b[1;32m   1048\u001b[0m         \u001b[38;5;66;03m# files to be taken into account.\u001b[39;00m\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/http/client.py:980\u001b[0m, in \u001b[0;36mHTTPConnection.send\u001b[0;34m(self, data)\u001b[0m\n\u001b[1;32m    978\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39msock \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m    979\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mauto_open:\n\u001b[0;32m--> 980\u001b[0m         \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mconnect\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    981\u001b[0m     \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m    982\u001b[0m         \u001b[38;5;28;01mraise\u001b[39;00m NotConnected()\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/http/client.py:946\u001b[0m, in \u001b[0;36mHTTPConnection.connect\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m    944\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mconnect\u001b[39m(\u001b[38;5;28mself\u001b[39m):\n\u001b[1;32m    945\u001b[0m \u001b[38;5;250m    \u001b[39m\u001b[38;5;124;03m\"\"\"Connect to the host and port specified in __init__.\"\"\"\u001b[39;00m\n\u001b[0;32m--> 946\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39msock \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_create_connection\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m    947\u001b[0m \u001b[43m        \u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mhost\u001b[49m\u001b[43m,\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mport\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mtimeout\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msource_address\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    948\u001b[0m     \u001b[38;5;66;03m# Might fail in OSs that don't implement TCP_NODELAY\u001b[39;00m\n\u001b[1;32m    949\u001b[0m     \u001b[38;5;28;01mtry\u001b[39;00m:\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/socket.py:832\u001b[0m, in \u001b[0;36mcreate_connection\u001b[0;34m(address, timeout, source_address)\u001b[0m\n\u001b[1;32m    830\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m source_address:\n\u001b[1;32m    831\u001b[0m     sock\u001b[38;5;241m.\u001b[39mbind(source_address)\n\u001b[0;32m--> 832\u001b[0m \u001b[43msock\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mconnect\u001b[49m\u001b[43m(\u001b[49m\u001b[43msa\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    833\u001b[0m \u001b[38;5;66;03m# Break explicitly a reference cycle\u001b[39;00m\n\u001b[1;32m    834\u001b[0m err \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m\n",
      "\u001b[0;31mKeyboardInterrupt\u001b[0m: "
     ]
    }
   ],
   "source": [
    "from urllib.request import ProxyHandler\n",
    "from urllib.request import build_opener\n",
    "\n",
    "# Proxy IP\n",
    "proxy = ProxyHandler({\"http\": \"119.109.197.195:80\"})\n",
    "\n",
    "# Create an opener object and attach the proxy handler\n",
    "opener = build_opener(proxy)\n",
    "\n",
    "url = \"http://www.baidu.com\"\n",
    "\n",
    "# Send the request through the proxy\n",
    "response = opener.open(url)\n",
    "\n",
    "# Print the response\n",
    "print(response.read().decode(\"utf-8\")[:100])\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a4ab6151",
   "metadata": {},
   "outputs": [],
   "source": [
    "from urllib.request import Request, build_opener\n",
    "from fake_useragent import UserAgent\n",
    "from urllib.request import ProxyHandler\n",
    "\n",
    "url = 'http://httpbin.org/get'\n",
    "\n",
    "# Randomly generate a User-Agent\n",
    "headers = {'User-Agent': UserAgent().chrome}\n",
    "\n",
    "# Create the request object\n",
    "req = Request(url, headers=headers)\n",
    "\n",
    "# Shared proxy\n",
    "# handler = ProxyHandler({'type':'ip:port'})\n",
    "# handler = ProxyHandler({'http':'110.18.152.229:9999'})\n",
    "\n",
    "# Dedicated proxy (with authentication)\n",
    "# handler = ProxyHandler({'type':'user:pwd@ip:port'})\n",
    "handler = ProxyHandler({'http': '398707160:j8inhg2g@114.117.236.72:16819'})\n",
    "\n",
    "# Pass the handler to the opener\n",
    "opener = build_opener(handler)\n",
    "\n",
    "# Send the request\n",
    "resp = opener.open(req)\n",
    "print(resp.read().decode()[:100])\n",
    "\n",
    "'''\n",
    "快代理\n",
    "https://www.kuaidaili.com\n",
    "\n",
    "云代理\n",
    "http://www.ip3366.net\n",
    "\n",
    "无忧代理\n",
    "http://www.data5u.com/\n",
    "\n",
    "66ip 代理\n",
    "http://www.66ip.cn\n",
    "\n",
    "站大爷\n",
    "https://www.zdaye.com/FreeIPList.html\n",
    "\n",
    "讯代理\n",
    "http://www.xdaili.cn/\n",
    "\n",
    "蚂蚁代理\n",
    "http://www.mayidaili.com/free\n",
    "\n",
    "89免费代理\n",
    "http://www.89ip.cn/\n",
    "\n",
    "全网代理\n",
    "http://www.goubanjia.com/buy/high.html\n",
    "\n",
    "开心代理\n",
    "http://ip.kxdaili.com/\n",
    "\n",
    "'''"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "86394518",
   "metadata": {},
   "source": [
    "## Cookie"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8a0fe3ef",
   "metadata": {},
   "source": [
    "A cookie is data (usually encrypted) that a website stores on the user's machine in order to identify the user and track a session.\n",
    "\n",
    "Some pages can only be accessed after logging in; before that, scraping them is not possible. We can use urllib to save the cookies from our login and then use them to fetch the other pages.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "060b20e1",
   "metadata": {},
   "outputs": [],
   "source": [
    "from urllib.request import Request, build_opener\n",
    "from fake_useragent import UserAgent\n",
    "\n",
    "url = 'https://www.kuaidaili.com/usercenter/overview'\n",
    "\n",
    "headers = {\n",
    "    # Set the User-Agent\n",
    "    'User-Agent': UserAgent().chrome,\n",
    "    \n",
    "    # Set the cookie (copied from a logged-in browser session)\n",
    "    'Cookie': 'channelid=0; sid=1621786217815170; _ga=GA1.2.301996636.1621786363; _gid=GA1.2.699625050.1621786363; Hm_lvt_7ed65b1cc4b810e9fd37959c9bb51b31=1621786363,1621823311; _gat=1; Hm_lpvt_7ed65b1cc4b810e9fd37959c9bb51b31=1621823382; sessionid=48cc80a5da3a451c2fa3ce682d29fde7'\n",
    "}\n",
    "\n",
    "# Create the request object\n",
    "req = Request(url, headers=headers)\n",
    "\n",
    "# Create an opener object\n",
    "opener = build_opener()\n",
    "\n",
    "# Send the request\n",
    "resp = opener.open(req)\n",
    "\n",
    "print(resp.read().decode()[:100])\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "8733f84e",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T00:33:34.833268Z",
     "start_time": "2023-10-24T00:33:34.122438Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<!DOCTYPE html>\n",
      "<html>\n",
      "<head>\n",
      "<meta http-equiv=\"Content-Type\" content=\"text/html;charset=utf-8\">\n",
      "<me\n"
     ]
    }
   ],
   "source": [
    "from urllib.request import Request, build_opener\n",
    "from fake_useragent import UserAgent\n",
    "from urllib.parse import urlencode\n",
    "from urllib.request import HTTPCookieProcessor\n",
    "\n",
    "login_url = 'https://www.kuaidaili.com/login/'\n",
    "\n",
    "# Parameters to send\n",
    "args = {\n",
    "    'username': '398707160@qq.com',\n",
    "    'passwd': '123456abc'\n",
    "}\n",
    "\n",
    "# Randomly generate a User-Agent\n",
    "headers = {\n",
    "    'User-Agent': UserAgent().chrome\n",
    "}\n",
    "\n",
    "# POST request\n",
    "req = Request(login_url, headers=headers, data=urlencode(args).encode())\n",
    "\n",
    "# Create a handler object that can store cookies\n",
    "handler = HTTPCookieProcessor()\n",
    "\n",
    "# Build the opener that sends requests\n",
    "opener = build_opener(handler)\n",
    "\n",
    "# Send the login request\n",
    "resp = opener.open(req)\n",
    "\n",
    "'''\n",
    "------------------------- logged in above ----------------------------------\n",
    "'''\n",
    "\n",
    "# Create the next request object\n",
    "index_url = 'https://www.kuaidaili.com/usercenter/overview'\n",
    "index_req = Request(index_url, headers=headers)\n",
    "\n",
    "# Send the request; the opener now carries the cookies\n",
    "index_resp = opener.open(index_req)\n",
    "\n",
    "print(index_resp.read().decode()[:100])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cb5b8566",
   "metadata": {},
   "source": [
    "### Opener\n",
    "\n",
    "Fetching a URL goes through an opener (an instance of urllib.request.OpenerDirector). So far we have used the default opener via urlopen, which can be thought of as a special opener instance that only takes url, data, and timeout.\n",
    "\n",
    "The default opener is not enough when we need cookies, so we build a more general opener that supports setting cookies."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "251c6eec",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T00:18:41.870157Z",
     "start_time": "2023-10-24T00:18:40.788436Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\n",
      "  \"args\": {}, \n",
      "  \"headers\": {\n",
      "    \"Accept-Encoding\": \"identity\", \n",
      "    \"Host\": \"httpbin.org\", \n",
      "    \n"
     ]
    }
   ],
   "source": [
    "from urllib.request import Request, build_opener\n",
    "from fake_useragent import UserAgent\n",
    "\n",
    "url = 'http://httpbin.org/get'\n",
    "\n",
    "# 随机生成 用户代理\n",
    "headers = {'User-Agent': UserAgent().chrome}\n",
    "\n",
    "# 创建请求对象\n",
    "req = Request(url, headers=headers)\n",
    "\n",
    "# 创建 opener对象\n",
    "opener = build_opener()\n",
    "resp = opener.open(req)\n",
    "print(resp.read().decode()[:100])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "04ac65f3",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T00:19:09.054844Z",
     "start_time": "2023-10-24T00:19:08.500932Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "send: b'GET /get HTTP/1.1\\r\\nAccept-Encoding: identity\\r\\nHost: httpbin.org\\r\\nUser-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36\\r\\nConnection: close\\r\\n\\r\\n'\n",
      "reply: 'HTTP/1.1 200 OK\\r\\n'\n",
      "header: Date: Tue, 24 Oct 2023 00:19:08 GMT\n",
      "header: Content-Type: application/json\n",
      "header: Content-Length: 369\n",
      "header: Connection: close\n",
      "header: Server: gunicorn/19.9.0\n",
      "header: Access-Control-Allow-Origin: *\n",
      "header: Access-Control-Allow-Credentials: true\n",
      "{\n",
      "  \"args\": {}, \n",
      "  \"headers\": {\n",
      "    \"Accept-Encoding\": \"identity\", \n",
      "    \"Host\": \"httpbin.org\", \n",
      "    \n"
     ]
    }
   ],
   "source": [
    "from urllib.request import Request, build_opener\n",
    "from fake_useragent import UserAgent\n",
    "from urllib.request import HTTPHandler\n",
    "\n",
    "url = 'http://httpbin.org/get'\n",
    "\n",
    "# 随机生成用户代理\n",
    "headers = {'User-Agent': UserAgent().chrome}\n",
    "\n",
    "# 创建请求对象\n",
    "req = Request(url, headers=headers)\n",
    "\n",
    "# 打印 debug信息\n",
    "handler = HTTPHandler(debuglevel=1)\n",
    "\n",
    "# 创建 opener对象\n",
    "opener = build_opener(handler)\n",
    "\n",
    "# 发送请求\n",
    "resp = opener.open(req)\n",
    "print(resp.read().decode()[:100])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "972ad207",
   "metadata": {},
   "source": [
    "### Cookielib\n",
    "\n",
    "The http.cookiejar module (called cookielib in Python 2) provides objects that store cookies, designed to work together with urllib when accessing Internet resources. It is quite powerful: a CookieJar object can capture cookies and resend them on later requests, which is how features such as simulated login are implemented. Its main classes are CookieJar, FileCookieJar, MozillaCookieJar, and LWPCookieJar."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "cf913721",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T00:41:40.895526Z",
     "start_time": "2023-10-24T00:41:40.612450Z"
    },
    "collapsed": true
   },
   "outputs": [
    {
     "ename": "HTTPError",
     "evalue": "HTTP Error 404: Not Found",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mHTTPError\u001b[0m                                 Traceback (most recent call last)",
      "Input \u001b[0;32mIn [27]\u001b[0m, in \u001b[0;36m<cell line: 37>\u001b[0;34m()\u001b[0m\n\u001b[1;32m     34\u001b[0m request \u001b[38;5;241m=\u001b[39m Request(login_url, headers\u001b[38;5;241m=\u001b[39mheader, data\u001b[38;5;241m=\u001b[39mdata)\n\u001b[1;32m     36\u001b[0m \u001b[38;5;66;03m# 发送请求\u001b[39;00m\n\u001b[0;32m---> 37\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[43mopener\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mopen\u001b[49m\u001b[43m(\u001b[49m\u001b[43mrequest\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     39\u001b[0m \u001b[38;5;66;03m# 创建另一个请求对象\u001b[39;00m\n\u001b[1;32m     40\u001b[0m info_url \u001b[38;5;241m=\u001b[39m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mhttp://www.sxt.cn/index/user.html\u001b[39m\u001b[38;5;124m'\u001b[39m\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/urllib/request.py:523\u001b[0m, in \u001b[0;36mOpenerDirector.open\u001b[0;34m(self, fullurl, data, timeout)\u001b[0m\n\u001b[1;32m    521\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m processor \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mprocess_response\u001b[38;5;241m.\u001b[39mget(protocol, []):\n\u001b[1;32m    522\u001b[0m     meth \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mgetattr\u001b[39m(processor, meth_name)\n\u001b[0;32m--> 523\u001b[0m     response \u001b[38;5;241m=\u001b[39m \u001b[43mmeth\u001b[49m\u001b[43m(\u001b[49m\u001b[43mreq\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mresponse\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    525\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m response\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/urllib/request.py:632\u001b[0m, in \u001b[0;36mHTTPErrorProcessor.http_response\u001b[0;34m(self, request, response)\u001b[0m\n\u001b[1;32m    629\u001b[0m \u001b[38;5;66;03m# According to RFC 2616, \"2xx\" code indicates that the client's\u001b[39;00m\n\u001b[1;32m    630\u001b[0m \u001b[38;5;66;03m# request was successfully received, understood, and accepted.\u001b[39;00m\n\u001b[1;32m    631\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;241m200\u001b[39m \u001b[38;5;241m<\u001b[39m\u001b[38;5;241m=\u001b[39m code \u001b[38;5;241m<\u001b[39m \u001b[38;5;241m300\u001b[39m):\n\u001b[0;32m--> 632\u001b[0m     response \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mparent\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43merror\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m    633\u001b[0m \u001b[43m        \u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mhttp\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mrequest\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mresponse\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcode\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mmsg\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mhdrs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    635\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m response\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/urllib/request.py:555\u001b[0m, in \u001b[0;36mOpenerDirector.error\u001b[0;34m(self, proto, *args)\u001b[0m\n\u001b[1;32m    553\u001b[0m     http_err \u001b[38;5;241m=\u001b[39m \u001b[38;5;241m0\u001b[39m\n\u001b[1;32m    554\u001b[0m args \u001b[38;5;241m=\u001b[39m (\u001b[38;5;28mdict\u001b[39m, proto, meth_name) \u001b[38;5;241m+\u001b[39m args\n\u001b[0;32m--> 555\u001b[0m result \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_call_chain\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    556\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m result:\n\u001b[1;32m    557\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m result\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/urllib/request.py:494\u001b[0m, in \u001b[0;36mOpenerDirector._call_chain\u001b[0;34m(self, chain, kind, meth_name, *args)\u001b[0m\n\u001b[1;32m    492\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m handler \u001b[38;5;129;01min\u001b[39;00m handlers:\n\u001b[1;32m    493\u001b[0m     func \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mgetattr\u001b[39m(handler, meth_name)\n\u001b[0;32m--> 494\u001b[0m     result \u001b[38;5;241m=\u001b[39m \u001b[43mfunc\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    495\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m result \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m    496\u001b[0m         \u001b[38;5;28;01mreturn\u001b[39;00m result\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/urllib/request.py:747\u001b[0m, in \u001b[0;36mHTTPRedirectHandler.http_error_302\u001b[0;34m(self, req, fp, code, msg, headers)\u001b[0m\n\u001b[1;32m    744\u001b[0m fp\u001b[38;5;241m.\u001b[39mread()\n\u001b[1;32m    745\u001b[0m fp\u001b[38;5;241m.\u001b[39mclose()\n\u001b[0;32m--> 747\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mparent\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mopen\u001b[49m\u001b[43m(\u001b[49m\u001b[43mnew\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtimeout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mreq\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mtimeout\u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/urllib/request.py:523\u001b[0m, in \u001b[0;36mOpenerDirector.open\u001b[0;34m(self, fullurl, data, timeout)\u001b[0m\n\u001b[1;32m    521\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m processor \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mprocess_response\u001b[38;5;241m.\u001b[39mget(protocol, []):\n\u001b[1;32m    522\u001b[0m     meth \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mgetattr\u001b[39m(processor, meth_name)\n\u001b[0;32m--> 523\u001b[0m     response \u001b[38;5;241m=\u001b[39m \u001b[43mmeth\u001b[49m\u001b[43m(\u001b[49m\u001b[43mreq\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mresponse\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    525\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m response\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/urllib/request.py:632\u001b[0m, in \u001b[0;36mHTTPErrorProcessor.http_response\u001b[0;34m(self, request, response)\u001b[0m\n\u001b[1;32m    629\u001b[0m \u001b[38;5;66;03m# According to RFC 2616, \"2xx\" code indicates that the client's\u001b[39;00m\n\u001b[1;32m    630\u001b[0m \u001b[38;5;66;03m# request was successfully received, understood, and accepted.\u001b[39;00m\n\u001b[1;32m    631\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;241m200\u001b[39m \u001b[38;5;241m<\u001b[39m\u001b[38;5;241m=\u001b[39m code \u001b[38;5;241m<\u001b[39m \u001b[38;5;241m300\u001b[39m):\n\u001b[0;32m--> 632\u001b[0m     response \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mparent\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43merror\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m    633\u001b[0m \u001b[43m        \u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mhttp\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mrequest\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mresponse\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcode\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mmsg\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mhdrs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    635\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m response\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/urllib/request.py:561\u001b[0m, in \u001b[0;36mOpenerDirector.error\u001b[0;34m(self, proto, *args)\u001b[0m\n\u001b[1;32m    559\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m http_err:\n\u001b[1;32m    560\u001b[0m     args \u001b[38;5;241m=\u001b[39m (\u001b[38;5;28mdict\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mdefault\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mhttp_error_default\u001b[39m\u001b[38;5;124m'\u001b[39m) \u001b[38;5;241m+\u001b[39m orig_args\n\u001b[0;32m--> 561\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_call_chain\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/urllib/request.py:494\u001b[0m, in \u001b[0;36mOpenerDirector._call_chain\u001b[0;34m(self, chain, kind, meth_name, *args)\u001b[0m\n\u001b[1;32m    492\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m handler \u001b[38;5;129;01min\u001b[39;00m handlers:\n\u001b[1;32m    493\u001b[0m     func \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mgetattr\u001b[39m(handler, meth_name)\n\u001b[0;32m--> 494\u001b[0m     result \u001b[38;5;241m=\u001b[39m \u001b[43mfunc\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    495\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m result \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m    496\u001b[0m         \u001b[38;5;28;01mreturn\u001b[39;00m result\n",
      "File \u001b[0;32m~/opt/anaconda3/lib/python3.9/urllib/request.py:641\u001b[0m, in \u001b[0;36mHTTPDefaultErrorHandler.http_error_default\u001b[0;34m(self, req, fp, code, msg, hdrs)\u001b[0m\n\u001b[1;32m    640\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mhttp_error_default\u001b[39m(\u001b[38;5;28mself\u001b[39m, req, fp, code, msg, hdrs):\n\u001b[0;32m--> 641\u001b[0m     \u001b[38;5;28;01mraise\u001b[39;00m HTTPError(req\u001b[38;5;241m.\u001b[39mfull_url, code, msg, hdrs, fp)\n",
      "\u001b[0;31mHTTPError\u001b[0m: HTTP Error 404: Not Found"
     ]
    }
   ],
   "source": [
    "\"\"\"Capture cookies into an in-memory CookieJar\"\"\"\n",
    "from urllib.request import HTTPCookieProcessor\n",
    "from urllib.request import build_opener\n",
    "from urllib.request import Request\n",
    "from http.cookiejar import CookieJar\n",
    "from urllib.parse import urlencode\n",
    "\n",
    "# Declare a CookieJar instance to hold the cookies\n",
    "cookie = CookieJar()\n",
    "\n",
    "# Build a cookie handler from an HTTPCookieProcessor\n",
    "cookiePro = HTTPCookieProcessor(cookie)\n",
    "\n",
    "# Build the opener from the handler\n",
    "opener = build_opener(cookiePro)\n",
    "\n",
    "login_url = \"http://www.sxt.cn/index/login/login\"\n",
    "\n",
    "# Request headers with a fixed User-Agent\n",
    "header = {\n",
    "    \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.79 Safari/537.36\"\n",
    "}\n",
    "\n",
    "# Login form parameters\n",
    "form_data = {\n",
    "    \"user\": \"17703181473\",\n",
    "    \"password\": \"123456\"\n",
    "}\n",
    "\n",
    "# URL-encode the form parameters\n",
    "data = urlencode(form_data).encode()\n",
    "\n",
    "# Create the request object\n",
    "request = Request(login_url, headers=header, data=data)\n",
    "\n",
    "# Send the login request\n",
    "response = opener.open(request)\n",
    "\n",
    "# Create another request object\n",
    "info_url = 'http://www.sxt.cn/index/user.html'\n",
    "request_info = Request(info_url)\n",
    "\n",
    "# After login the cookies are stored in the opener; send another request\n",
    "response = opener.open(request_info)\n",
    "\n",
    "# Read the page data\n",
    "html = response.read()\n",
    "\n",
    "print(html.decode()[:100])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5e546074",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"Save cookies to a file and load them back\"\"\"\n",
    "from urllib.request import build_opener, Request\n",
    "from urllib.request import HTTPCookieProcessor\n",
    "from http.cookiejar import MozillaCookieJar\n",
    "from urllib.parse import urlencode\n",
    "\n",
    "\n",
    "def get_cookie():\n",
    "    \"\"\"Fetch and save the cookies\"\"\"\n",
    "    # Request headers with a User-Agent\n",
    "    headers = {\n",
    "        \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36\"}\n",
    "    login_url = \"http://www.sxt.cn/index/login/login.html\"\n",
    "\n",
    "    # Login form parameters\n",
    "    form_data = {\n",
    "        \"user\": \"17703181473\",\n",
    "        \"password\": \"123456\"\n",
    "    }\n",
    "\n",
    "    # URL-encode the parameters\n",
    "    f_data = urlencode(form_data)\n",
    "    \n",
    "    # Create the request object\n",
    "    req = Request(login_url, headers=headers, data=f_data.encode())\n",
    "    \n",
    "    # Create a cookie jar backed by a serializable file\n",
    "    cookie = MozillaCookieJar(\"cookie.txt\")\n",
    "    \n",
    "    # Build a cookie-storing handler\n",
    "    c_handler = HTTPCookieProcessor(cookie)\n",
    "    \n",
    "    # Build the opener\n",
    "    opener = build_opener(c_handler)\n",
    "    \n",
    "    # Send the request\n",
    "    opener.open(req)\n",
    "    \n",
    "    # Save the captured cookies to disk\n",
    "    cookie.save(ignore_discard=True, ignore_expires=True)\n",
    "\n",
    "\n",
    "def use_cookie():\n",
    "    \"\"\"Use the saved cookies\"\"\"\n",
    "    \n",
    "    # Request headers: set the User-Agent\n",
    "    headers = {\n",
    "        \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36\"}\n",
    "\n",
    "    info_url = \"http://www.sxt.cn/index/user.html\"\n",
    "    \n",
    "    # Create an (empty) file-backed cookie jar\n",
    "    cookie = MozillaCookieJar()\n",
    "    \n",
    "    # Load the cookie file\n",
    "    cookie.load(\"cookie.txt\", ignore_discard=True, ignore_expires=True)\n",
    "    \n",
    "    # Build a cookie-storing handler\n",
    "    c_handler = HTTPCookieProcessor(cookie)\n",
    "    \n",
    "    # Build the opener\n",
    "    opener = build_opener(c_handler)\n",
    "    \n",
    "    # Create the request\n",
    "    req1 = Request(info_url, headers=headers)\n",
    "    \n",
    "    # Send the request\n",
    "    resp2 = opener.open(req1)\n",
    "    \n",
    "    # Print the page\n",
    "    print(resp2.read().decode())\n",
    "\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    # get_cookie()\n",
    "    use_cookie()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "df074f9f",
   "metadata": {},
   "source": [
    "### URLError\n",
    "\n",
    "Possible causes of a URLError:\n",
    "\n",
    "- No network connection (the local machine is offline)\n",
    "- The target server cannot be reached\n",
    "- The server does not exist"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "032bc395",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T00:53:27.167592Z",
     "start_time": "2023-10-24T00:53:26.145543Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "8\n",
      "爬取完成\n"
     ]
    }
   ],
   "source": [
    "from urllib.request import Request, urlopen\n",
    "from fake_useragent import UserAgent\n",
    "from urllib.error import URLError\n",
    "\n",
    "url = 'http://www.sxtwerwf1jojhofsaf.cn/sadfa/sdfs14'\n",
    "\n",
    "# Request headers: randomly generated User-Agent\n",
    "headers = {'User-Agent': UserAgent().chrome}\n",
    "\n",
    "# Create the request object\n",
    "req = Request(url, headers=headers)\n",
    "\n",
    "try:\n",
    "    # Send the request\n",
    "    resp = urlopen(req)\n",
    "    print(resp.read().decode()[:100])\n",
    "    \n",
    "except URLError as e:\n",
    "    \n",
    "    if e.args:\n",
    "        # Connection-level failure: args carries the socket error\n",
    "        print(e.args[0].errno)\n",
    "        \n",
    "    else:\n",
    "        # HTTPError: carries an HTTP status code\n",
    "        print(e.code)\n",
    "        \n",
    "print('爬取完成')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a5c5e32d",
   "metadata": {},
   "source": [
    "## The requests library"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "66f6a32b",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"Basic request types\"\"\"\n",
    "import requests\n",
    "\n",
    "req = requests.get(\"http://www.baidu.com\")\n",
    "req = requests.post(\"http://www.baidu.com\")\n",
    "req = requests.put(\"http://www.baidu.com\")\n",
    "req = requests.delete(\"http://www.baidu.com\")\n",
    "req = requests.head(\"http://www.baidu.com\")\n",
    "req = requests.options(\"http://www.baidu.com\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "21619f89",
   "metadata": {},
   "source": [
    "### GET requests\n",
    "\n",
    "Parameters are passed as a dictionary; JSON-style data can also be passed:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "57f9bf0e",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T01:00:09.536516Z",
     "start_time": "2023-10-24T01:00:09.070133Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<!DOCTYPE html>\n",
      "<!--STATUS OK-\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
    "\n",
    "def no_args():\n",
    "    \"\"\"Request without parameters\"\"\"\n",
    "\n",
    "    url = 'http://www.sxt.cn/'\n",
    "\n",
    "    # Send a GET request\n",
    "    resp = requests.get(url)\n",
    "    print(resp.text[:30])\n",
    "\n",
    "\n",
    "def use_args():\n",
    "    \"\"\"Request with parameters\"\"\"\n",
    "\n",
    "    url = 'http://www.baidu.com/s'\n",
    "\n",
    "    # Query parameters\n",
    "    args = {\n",
    "        'wd': '北理工'\n",
    "    }\n",
    "\n",
    "    # Send a GET request with params\n",
    "    resp = requests.get(url, params=args)\n",
    "    print(resp.text[:30])\n",
    "\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    use_args()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "16cbcb1d",
   "metadata": {},
   "source": [
    "### POST requests\n",
    "\n",
    "Parameters are passed as a dictionary; JSON-style data can also be passed:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "6bcf4340",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T01:01:37.115915Z",
     "start_time": "2023-10-24T01:01:36.708926Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<!DOCTYPE html>\r\n",
      "<!--[if lt IE 10]><html class=\"ie\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
    "url = 'https://www.21wecan.com/rcwjs/searchlist.jsp'\n",
    "\n",
    "# Form parameters\n",
    "args = {\n",
    "    'searchword': '人才'\n",
    "}\n",
    "\n",
    "# Send a POST request\n",
    "resp = requests.post(url, data=args)\n",
    "print(resp.text[:50])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "19265e54",
   "metadata": {},
   "source": [
    "### Custom request headers\n",
    "\n",
    "> Faking the request headers is a common scraping technique; it lets us disguise the client"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "b4911f81",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T01:04:09.947223Z",
     "start_time": "2023-10-24T01:04:07.816545Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "python\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
    "headers = {'User-Agent': 'python'}\n",
    "r = requests.get('http://www.zhidaow.com', headers=headers)\n",
    "\n",
    "print(r.request.headers['User-Agent'])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "533eb33e",
   "metadata": {},
   "source": [
    "### Setting a timeout\n",
    "\n",
    "> The timeout parameter sets a time limit for the request; if no response arrives within it, an error is raised"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "05984a09",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T01:06:36.924883Z",
     "start_time": "2023-10-24T01:06:36.126402Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<Response [200]>"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import requests\n",
    "\n",
    "requests.get('http://github.com', timeout=5)"
   ]
  },
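  {
   "cell_type": "markdown",
   "id": "9f3c1a2b",
   "metadata": {},
   "source": [
    "When the time limit is exceeded, requests raises `requests.exceptions.Timeout`. A minimal sketch of catching it (the httpbin delay endpoint and the deliberately tiny timeout are illustrative values):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9f3c1a2c",
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "\n",
    "try:\n",
    "    # A timeout far shorter than any real round trip, to force the error\n",
    "    requests.get('http://httpbin.org/delay/3', timeout=0.01)\n",
    "except requests.exceptions.Timeout:\n",
    "    print('request timed out')"
   ]
  },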
  {
   "cell_type": "markdown",
   "id": "fd96192a",
   "metadata": {},
   "source": [
    "### Proxy access\n",
    "\n",
    "> Proxies are often used while scraping to avoid IP bans; requests supports them via the proxies parameter"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1328549d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "from fake_useragent import UserAgent\n",
    "\n",
    "url = 'http://httpbin.org/get'\n",
    "\n",
    "# Randomly generate a User-Agent\n",
    "headers = {'User-Agent': UserAgent().chrome}\n",
    "\n",
    "'''\n",
    "Proxy formats:\n",
    "\"type\":\"type://ip:port\"\n",
    "\"type\":\"type://username:password@ip:port\"\n",
    "'''\n",
    "proxy = {\n",
    "    'http': 'http://398707160:j8inhg2g@114.117.238.188:16819'\n",
    "}\n",
    "\n",
    "# Send a GET request with headers and proxy attached\n",
    "resp = requests.get(url, headers=headers, proxies=proxy)\n",
    "print(resp.text)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e2c4f8ce",
   "metadata": {},
   "source": [
    "### Sessions automatically keep cookies\n",
    "\n",
    "> A session maintains state across requests, e.g. continuing to act as a logged-in user, whereas each plain requests call is an independent request that records no identity"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3d370c77",
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "from fake_useragent import UserAgent\n",
    "\n",
    "login_url = 'https://www.kuaidaili.com/login/'\n",
    "\n",
    "# 要传递的参数\n",
    "args = {\n",
    "    'username': '398707160@qq.com',\n",
    "    'passwd': '123456abc'\n",
    "}\n",
    "\n",
    "# 随机生成用户代理\n",
    "headers = {'User-Agent': UserAgent().chrome}\n",
    "\n",
    "# 创建一个session对象\n",
    "session = requests.Session()\n",
    "\n",
    "# 发送 POST请求\n",
    "resp = session.post(login_url, data=args, headers=headers)\n",
    "\n",
    "'''-----------------------logged in above---------------------------'''\n",
    "index_url = 'https://www.kuaidaili.com/usercenter/overview'\n",
    "\n",
    "# Send another GET request within the same session (cookies are carried over)\n",
    "index_resp = session.get(index_url, headers=headers)\n",
    "print(index_resp.text[:50])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4c44e14d",
   "metadata": {},
   "source": [
    "### SSL verification"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "94abdcbd",
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "\n",
    "# Suppress the InsecureRequestWarning raised for unverified HTTPS requests\n",
    "requests.packages.urllib3.disable_warnings()\n",
    "\n",
    "# verify=False skips SSL certificate verification; url and headers come from earlier cells\n",
    "resp = requests.get(url, verify=False, headers=headers)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fe37ed9c",
   "metadata": {},
   "source": [
    "### Accessing response information"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1d65c7dd",
   "metadata": {},
   "source": [
    "![](01_爬虫基础与数据提取_images/04.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "58fd665d",
   "metadata": {},
   "source": [
    "## Data Extraction: Regular Expressions"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4eb27bcc",
   "metadata": {},
   "source": [
    "We have already covered how to fetch a page, but one step remains: how do we extract and organize what we want from all that markup mixed with text? Time to introduce an extremely powerful tool: the regular expression!\n",
    "\n",
    "> A regular expression is a logical formula for operating on strings: predefined special characters, and combinations of them, form a \"pattern string\" that expresses a filtering rule to apply against strings.\n",
    "\n",
    "Regular expressions are a very powerful tool for matching strings; with them, extracting what we want from the returned page content becomes easy"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d658e599",
   "metadata": {},
   "source": [
    "![](01_爬虫基础与数据提取_images/05.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "95929a39",
   "metadata": {},
   "source": [
    "![](01_爬虫基础与数据提取_images/06.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b9b03922",
   "metadata": {},
   "source": [
    "### Greedy vs. non-greedy quantifiers\n",
    "\n",
    "Regular expressions are typically used to find matching strings in text\n",
    "\n",
    "In Python, quantifiers are greedy by default: they always try to match as many characters as possible. Non-greedy quantifiers do the opposite, always trying to match as few characters as possible\n",
    "\n",
    "For example, applying the regex `ab*` to `abbbc` finds `abbb`, while the non-greedy `ab*?` finds only `a`"
   ]
  },
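  {
   "cell_type": "markdown",
   "id": "greedy-lazy-note",
   "metadata": {},
   "source": [
    "A quick check of the example above (a minimal sketch using the string and patterns just described):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "greedy-lazy-demo",
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "\n",
    "# Greedy: ab* consumes as many b's as possible\n",
    "print(re.match(r'ab*', 'abbbc').group())   # abbb\n",
    "\n",
    "# Non-greedy: ab*? stops as early as possible\n",
    "print(re.match(r'ab*?', 'abbbc').group())  # a"
   ]
  },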
  {
   "cell_type": "markdown",
   "id": "fb7c108a",
   "metadata": {},
   "source": [
    "### Common functions\n",
    "\n",
    "- re.match\n",
    "  - re.match attempts to match a pattern at the start of the string; if the match does not succeed at the start, match() returns None\n",
    "  - Syntax:\n",
    "    re.match(pattern, string, flags=0)\n",
    "    \n",
    "- re.search\n",
    "  - re.search scans the whole string and returns the first successful match\n",
    "  - Syntax:\n",
    "    re.search(pattern, string, flags=0)\n",
    "    \n",
    "- re.sub\n",
    "  - re.sub replaces substrings:\n",
    "    re.sub(pattern, replace, string)\n",
    "    \n",
    "- re.findall\n",
    "  - re.findall finds all matches:\n",
    "    re.findall(pattern, string, flags=0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "1890e3db",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T01:24:19.184604Z",
     "start_time": "2023-10-24T01:24:19.167573Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "------------------match (matches from the start)-----------------------\n",
      "s\n"
     ]
    }
   ],
   "source": [
    "import re\n",
    "\n",
    "str = 'I study python3.9 every_day'  # note: this shadows the built-in str, but later cells reuse it\n",
    "\n",
    "# Matches from the start of the string; returns None if the pattern does not match there\n",
    "print('------------------match (matches from the start)-----------------------')\n",
    "m1 = re.match(r'I', str)\n",
    "m2 = re.match(r'\\w', str)\n",
    "m3 = re.match(r'\\S', str)\n",
    "m4 = re.match(r'\\D', str)\n",
    "m5 = re.match(r'I (study)', str)\n",
    "m6 = re.match(r'I (s\\w*)', str)\n",
    "m6 = re.match(r'I (s\\w*?)', str)  # non-greedy: the group captures only 's'\n",
    "\n",
    "print(m6.group(1))  # group() is only valid when a match object was returned"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "cac2aae2",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T01:24:33.856077Z",
     "start_time": "2023-10-24T01:24:33.851399Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "------------------search (returns the first successful match)-----------------------\n",
      "python3.9\n"
     ]
    }
   ],
   "source": [
    "# Matches anywhere in the string; returns the first match\n",
    "print('------------------search (returns the first successful match)-----------------------')\n",
    "s1 = re.search(r'\\D',str)\n",
    "s2 = re.search(r's\\w+',str)\n",
    "s3 = re.search(r'y',str)\n",
    "s4 = re.search(r'p\\w+',str)\n",
    "s5 = re.search(r'p\\w+.\\d',str)\n",
    "\n",
    "print(s5.group())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "8c8ba7ac",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T01:25:52.381239Z",
     "start_time": "2023-10-24T01:25:52.377755Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "------------------findall (finds all matching substrings)-----------------------\n",
      "[]\n"
     ]
    }
   ],
   "source": [
    "# Matches anywhere in the string; finds all matches\n",
    "print('------------------findall (finds all matching substrings)-----------------------')\n",
    "f1 = re.findall(r'eva', str)\n",
    "\n",
    "print(f1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "b432caec",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T01:27:25.019259Z",
     "start_time": "2023-10-24T01:27:25.014284Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "------------------sub (replaces substrings)-----------------------\n",
      "I study python3.9 Every_day\n",
      "I study python3.9 every_day\n"
     ]
    }
   ],
   "source": [
    "print('------------------sub (replaces substrings)-----------------------')\n",
    "su1 = re.sub('python', 'Python', str)\n",
    "su2 = re.sub(r'e\\w+', 'Every_day', str)  # sub returns a new string; str itself is unchanged\n",
    "\n",
    "print(su2)\n",
    "print(str)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "id": "f63bad24",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T01:27:56.601173Z",
     "start_time": "2023-10-24T01:27:56.589459Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "------------------test()-----------------------\n",
      "[('http://www.bjsxt.com', '北理工')]\n"
     ]
    }
   ],
   "source": [
    "print('------------------test()-----------------------')\n",
    "html = '<div><a class=\"title\" href=\"http://www.bjsxt.com\">北理工</a></div>'\n",
    "\n",
    "t1 = re.findall(r'<a class=\"title\" href=\"http://www.bjsxt.com\">([\\u4e00-\\u9fa5]+)</a>',html)\n",
    "t2 = re.findall(r'<a class=\"title\" href=\"(.+)\">[\\u4e00-\\u9fa5]+</a>',html)\n",
    "t3 = re.findall(r'<a class=\"title\" href=\"(.+)\">(.+)</a>',html)\n",
    "print(t3)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "febad8c7",
   "metadata": {},
   "source": [
    "### Regular expression flags\n",
    "\n",
    "> Regular expressions can take optional flags that control how matching behaves. Each flag is an optional modifier, and multiple flags can be combined with bitwise OR (|); for example, re.I | re.M sets both the I and M flags:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5d70b21e",
   "metadata": {},
   "source": [
    "![](01_爬虫基础与数据提取_images/07.png)"
   ]
  },
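  {
   "cell_type": "markdown",
   "id": "regex-flags-note",
   "metadata": {},
   "source": [
    "A minimal sketch of two common flags, re.I (ignore case) and re.S (let . match newlines too):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "regex-flags-demo",
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "\n",
    "# re.I: case-insensitive matching\n",
    "print(re.findall(r'python', 'Python PYTHON python', re.I))  # ['Python', 'PYTHON', 'python']\n",
    "\n",
    "# re.S: . also matches newline, so the pattern can span lines\n",
    "print(re.search(r'a.b', 'a\\nb', re.S))  # match object\n",
    "print(re.search(r'a.b', 'a\\nb'))        # None without re.S"
   ]
  },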
  {
   "cell_type": "code",
   "execution_count": 44,
   "id": "eb898ac9",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T01:31:07.722938Z",
     "start_time": "2023-10-24T01:31:07.316042Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "3年1.86亿美元！字母哥顶薪续约雄鹿  豪言再夺冠\n",
      "亚冠-米特洛维奇3射1传+倒钩世界波，利雅得新月6-0血洗孟买城\n",
      "勇士vs太阳前瞻：库里遇上三巨头 对飚大战努尔基奇或称奇兵\n",
      "热刺2-0富勒姆登顶榜首，孙兴慜献传射，麦迪逊斩获主场首球\n",
      "NBA新赛季观战指南 十大故事线你怎么看？\n",
      "足坛同夜五佳球：韩国欧巴标志性兜射 新月米神炸裂倒挂金钩\n",
      "竞者｜足坛世三人未到谢幕时！马竞艺术品再献帽子戏法\n",
      "23日CBA五佳球：陈国豪大心脏三分 崔永熙抢断后双手重扣\n",
      "专访北控探花林彦廷：新赛季不想做菜鸟\n",
      "前无古人！陈国豪20+10+5帽成CBA选秀历史最炸状元首秀\n",
      "汤普森要离开勇士了？美记爆料汤神和勇士续约谈判陷入重大麻烦\n",
      "不慎给自己做上袋口斯诺克！魔术师犹豫再三完成完美解球\n",
      "给新赛季算一卦：绿军总冠军东契奇MVP 老詹破纪录火箭无缘季后赛\n",
      "小将邢子豪战胜世界冠军赛后：赢出了自信心，找到球感发挥出色\n",
      "吉达联合1-0亚冠领跑：坎特助攻哈默德补时绝杀，中超旧将脱衣狂奔庆祝\n",
      "太强了！热刺2-0登顶英超：孙兴慜鬼魅跑位传射建功，麦迪逊破门\n",
      "3年1.86亿！字母哥与雄鹿达成提前续约，本人发声：再冲冠军！\n",
      "湖人vs掘金前瞻：湖人欲破坏掘金颁奖仪式 约基奇仍是最大难题\n",
      "NBA季前赛最逆天扣篮：各大球星轮番争艳，令无数球迷为之疯狂\n",
      "90秒速看利雅得新月6-0狂胜：米神帽子戏法+助攻，内马尔观战点赞祝贺\n",
      "英超-孙兴慜传射麦迪逊破门，热刺2-0富勒姆7胜2平不败登顶\n"
     ]
    }
   ],
   "source": [
    "\"\"\" re in practice: scraping Tencent Sports headlines \"\"\"\n",
    "import requests\n",
    "from fake_useragent import UserAgent\n",
    "import re\n",
    "\n",
    "url = 'https://sports.qq.com/'\n",
    "\n",
    "# Generate a random User-Agent\n",
    "headers = {'User-Agent': UserAgent().chrome}\n",
    "\n",
    "# Send a GET request\n",
    "resp = requests.get(url, headers=headers)\n",
    "\n",
    "# Regex pattern for the headline links\n",
    "regx = r'<li><a target=\"_blank\" href=\".+?\" class=\".*?\">(.+?)</a></li>'\n",
    "\n",
    "# Find all data matching the pattern\n",
    "datas = re.findall(regx, resp.text)\n",
    "\n",
    "# Print the headlines\n",
    "for d in datas:\n",
    "    print(d)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a559bfaf",
   "metadata": {},
   "source": [
    "## Data Extraction: Beautiful Soup\n",
    "> Beautiful Soup automatically converts input documents to Unicode and output documents to UTF-8. You don't need to think about encodings unless the document doesn't declare one; in that case Beautiful Soup can't detect the encoding automatically, and you just need to state the original encoding.\n",
    "\n",
    "> Beautiful Soup pairs with excellent Python parsers such as lxml and html5lib, giving users the flexibility of different parsing strategies or the benefit of raw speed\n",
    "\n",
    "> [Official docs](http://beautifulsoup.readthedocs.io/zh_CN/latest/)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2a6205e4",
   "metadata": {},
   "source": [
    "Beautiful Soup supports the HTML parser in Python's standard library as well as several third-party parsers\n",
    "\n",
    "![](01_爬虫基础与数据提取_images/08.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "id": "0581fa62",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T01:54:48.279068Z",
     "start_time": "2023-10-24T01:54:48.162394Z"
    }
   },
   "outputs": [],
   "source": [
    "\"\"\"Create a Beautiful Soup object\"\"\"\n",
    "from bs4 import BeautifulSoup\n",
    "\n",
    "bs = BeautifulSoup(html, \"lxml\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3c27392f",
   "metadata": {},
   "source": [
    "### The four kinds of objects\n",
    "\n",
    "Beautiful Soup turns a complex HTML document into a complex tree of nodes, each a Python object. All objects fall into 4 kinds:\n",
    "- Tag\n",
    "- NavigableString\n",
    "- BeautifulSoup\n",
    "- Comment"
   ]
  },
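  {
   "cell_type": "markdown",
   "id": "bs4-types-note",
   "metadata": {},
   "source": [
    "A minimal sketch showing one instance of each of the four kinds, on a fresh tiny document (using the built-in html.parser so no third-party parser is needed):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bs4-types-demo",
   "metadata": {},
   "outputs": [],
   "source": [
    "from bs4 import BeautifulSoup\n",
    "\n",
    "tiny = BeautifulSoup('<b><!--note--></b><i>text</i>', 'html.parser')\n",
    "\n",
    "print(type(tiny).__name__)           # BeautifulSoup (the whole document)\n",
    "print(type(tiny.b).__name__)         # Tag\n",
    "print(type(tiny.i.string).__name__)  # NavigableString\n",
    "print(type(tiny.b.string).__name__)  # Comment"
   ]
  },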
  {
   "cell_type": "markdown",
   "id": "64345e9d",
   "metadata": {},
   "source": [
    "#### Tag"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "id": "d3686e35",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T02:02:54.422515Z",
     "start_time": "2023-10-24T02:02:54.412259Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "-------------------Getting tags------------------------\n",
      "<title id=\"title\">北理工</title>\n",
      "<div class=\"info\" float=\"left\">Welcome to BIT</div>\n",
      "<span>Good Good Study</span>\n"
     ]
    }
   ],
   "source": [
    "\"\"\"Getting tags. Note: for repeated tags, only the first matching tag is returned\"\"\"\n",
    "from bs4 import BeautifulSoup\n",
    "\n",
    "html = '''\n",
    "<title id=\"title\">北理工</title>\n",
    "\n",
    "<div class=\"info\" float=\"left\">Welcome to BIT</div>\n",
    "\n",
    "<div class=\"info\" float=\"right\">\n",
    "    <span>Good Good Study</span>\n",
    "    \n",
    "    <a href=\"www.bjsxt.com\"></a>\n",
    "    \n",
    "    <strong><!-- 这个是注释啊 --></strong>\n",
    "</div>\n",
    "'''\n",
    "\n",
    "# Create a BeautifulSoup object, parsed with lxml\n",
    "soup = BeautifulSoup(html, 'lxml')\n",
    "\n",
    "\n",
    "print('-------------------Getting tags------------------------')\n",
    "\n",
    "# Note: for repeated tags, only the first matching one is returned\n",
    "print(soup.title)\n",
    "print(soup.div)\n",
    "print(soup.span)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "id": "f544f46a",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T02:04:06.270336Z",
     "start_time": "2023-10-24T02:04:06.265541Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "-------------------Getting attributes------------------------\n",
      "{'class': ['info'], 'float': 'left'}\n",
      "['info']\n",
      "left\n",
      "www.bjsxt.com\n"
     ]
    }
   ],
   "source": [
    "print('-------------------Getting attributes------------------------')\n",
    "\n",
    "# Get all attributes of the tag\n",
    "print(soup.div.attrs)\n",
    "\n",
    "# Get the value of a single attribute\n",
    "print(soup.div.get('class'))\n",
    "print(soup.div['float'])\n",
    "print(soup.a.get('href'))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bac0e6cd",
   "metadata": {},
   "source": [
    "#### NavigableString: getting content"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "id": "4f990874",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T02:07:04.964504Z",
     "start_time": "2023-10-24T02:07:04.959102Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "-------------------Getting content------------------------\n",
      "北理工\n",
      "<class 'bs4.element.NavigableString'>\n",
      "北理工\n",
      "<class 'str'>\n"
     ]
    }
   ],
   "source": [
    "print('-------------------Getting content------------------------')\n",
    "\n",
    "# NavigableString object\n",
    "print(soup.title.string)\n",
    "print(type(soup.title.string))\n",
    "\n",
    "# plain str\n",
    "print(soup.title.text)\n",
    "print(type(soup.title.text))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "id": "94f6ae76",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T02:08:45.554152Z",
     "start_time": "2023-10-24T02:08:45.548288Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "-------------------Getting comment content------------------------\n",
      " 这个是注释啊 \n",
      "<class 'bs4.element.Comment'>\n",
      "\n",
      "<class 'str'>\n",
      "<strong>\n",
      " <!-- 这个是注释啊 -->\n",
      "</strong>\n",
      "\n"
     ]
    }
   ],
   "source": [
    "print('-------------------Getting comment content------------------------')\n",
    "\n",
    "# .string (a Comment, a kind of NavigableString) can retrieve the comment\n",
    "print(soup.strong.string)\n",
    "print(type(soup.strong.string))\n",
    "\n",
    "# .text (plain str) does not include comments\n",
    "print(soup.strong.text)\n",
    "print(type(soup.strong.text))\n",
    "\n",
    "print(soup.strong.prettify())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e9acc641",
   "metadata": {},
   "source": [
    "#### BeautifulSoup\n",
    "\n",
    "> The BeautifulSoup object represents the entire document. Most of the time you can treat it as a Tag object: it supports most of the methods for navigating and searching the tree.\n",
    "\n",
    "> Because the BeautifulSoup object is not a real HTML or XML tag, it has no name or attributes. But since inspecting its .name is sometimes handy, the BeautifulSoup object is given the special .name value \"[document]\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "id": "d1e07812",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T02:10:43.889172Z",
     "start_time": "2023-10-24T02:10:43.881631Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[document]\n",
      "head\n"
     ]
    }
   ],
   "source": [
    "print(soup.name)\n",
    "print(soup.head.name)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7c360b28",
   "metadata": {},
   "source": [
    "### Searching the tree\n",
    "\n",
    "> Beautiful Soup defines many search methods; here we focus on two, find() and find_all(). The other methods take similar arguments and work similarly\n",
    "\n",
    "#### String filters\n",
    "\n",
    "> The simplest filter is a string: pass a string to a search method and Beautiful Soup matches content against that exact string. The example below finds all \\<div> tags in the document"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "id": "32925020",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T02:19:12.610166Z",
     "start_time": "2023-10-24T02:19:12.600165Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[<div class=\"info\" float=\"left\">Welcome to BIT</div>, <div class=\"info\" float=\"right\">\n",
      "<span>Good Good Study</span>\n",
      "<a href=\"www.bjsxt.com\"></a>\n",
      "<strong><!-- 这个是注释啊 --></strong>\n",
      "</div>]\n"
     ]
    }
   ],
   "source": [
    "print(soup.find_all('div'))"
   ]
  },
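  {
   "cell_type": "markdown",
   "id": "bs4-find-note",
   "metadata": {},
   "source": [
    "find() takes the same filters but returns only the first match instead of a list; a minimal self-contained sketch (using a fresh demo document and the built-in html.parser rather than the soup above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bs4-find-demo",
   "metadata": {},
   "outputs": [],
   "source": [
    "from bs4 import BeautifulSoup\n",
    "\n",
    "demo = BeautifulSoup('<div>a</div><div>b</div>', 'html.parser')\n",
    "\n",
    "print(demo.find('div'))           # first match only: <div>a</div>\n",
    "print(len(demo.find_all('div')))  # 2\n",
    "print(demo.find('table'))         # no match: None"
   ]
  },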
  {
   "cell_type": "markdown",
   "id": "cb1f0641",
   "metadata": {},
   "source": [
    "#### Regular expression filters\n",
    "\n",
    "If you pass in a compiled regular expression, Beautiful Soup matches content via the expression's match()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "id": "25d17812",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T02:19:40.283579Z",
     "start_time": "2023-10-24T02:19:40.277763Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[<div class=\"info\" float=\"left\">Welcome to BIT</div>, <div class=\"info\" float=\"right\">\n",
      "<span>Good Good Study</span>\n",
      "<a href=\"www.bjsxt.com\"></a>\n",
      "<strong><!-- 这个是注释啊 --></strong>\n",
      "</div>]\n"
     ]
    }
   ],
   "source": [
    "# Return all div tags\n",
    "print(soup.find_all(re.compile(\"^div\")))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2f1187bc",
   "metadata": {},
   "source": [
    "#### Lists\n",
    "\n",
    "> If you pass in a list, Beautiful Soup returns content that matches any element of the list"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "id": "c752c102",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T02:20:51.774346Z",
     "start_time": "2023-10-24T02:20:51.770836Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[<span>Good Good Study</span>, <a href=\"www.bjsxt.com\"></a>]\n"
     ]
    }
   ],
   "source": [
    "# Return all matching span and a tags\n",
    "print(soup.find_all(['span', 'a']))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3e833676",
   "metadata": {},
   "source": [
    "#### keyword\n",
    "\n",
    "> If a keyword argument is not one of the built-in search parameter names, it is treated as a tag attribute to search on"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "id": "abb19104",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T02:27:01.494476Z",
     "start_time": "2023-10-24T02:27:01.489045Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[<title id=\"title\">北理工</title>]\n"
     ]
    }
   ],
   "source": [
    "# Return tags whose id is title\n",
    "print(soup.find_all(id='title'))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e98da6cf",
   "metadata": {},
   "source": [
    "#### Searching by CSS class\n",
    "\n",
    "> Searching tags by CSS class is very useful, but class, the keyword that identifies the CSS class name, is a reserved word in Python, so passing class as a keyword argument raises a syntax error. Since Beautiful Soup 4.1.1 you can search by CSS class with the class_ parameter"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "id": "c40a98dd",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T02:27:04.753569Z",
     "start_time": "2023-10-24T02:27:04.750063Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[<div class=\"info\" float=\"left\">Welcome to BIT</div>, <div class=\"info\" float=\"right\">\n",
      "<span>Good Good Study</span>\n",
      "<a href=\"www.bjsxt.com\"></a>\n",
      "<strong><!-- 这个是注释啊 --></strong>\n",
      "</div>]\n"
     ]
    }
   ],
   "source": [
    "# Return tags whose class is info\n",
    "print(soup.find_all(class_='info'))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "17283da0",
   "metadata": {},
   "source": [
    "#### Searching by attribute"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "id": "01ea8f6e",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T02:27:08.096466Z",
     "start_time": "2023-10-24T02:27:08.088797Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[<div class=\"info\" float=\"right\">\n",
      "<span>Good Good Study</span>\n",
      "<a href=\"www.bjsxt.com\"></a>\n",
      "<strong><!-- 这个是注释啊 --></strong>\n",
      "</div>]\n",
      "[<div class=\"info\" float=\"left\">Welcome to BIT</div>]\n"
     ]
    }
   ],
   "source": [
    "# Return tags whose float attribute is right\n",
    "print(soup.find_all(attrs={'float': 'right'}))\n",
    "\n",
    "# Return div tags whose float attribute is left\n",
    "print(soup.find_all('div', attrs={'float': 'left'}))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1f560ce8",
   "metadata": {},
   "source": [
    "#### CSS selectors"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7f05c0a2",
   "metadata": {},
   "source": [
    "![](01_爬虫基础与数据提取_images/09.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "id": "e93ce64f",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T02:31:38.829129Z",
     "start_time": "2023-10-24T02:31:38.821257Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[<div class=\"info\" float=\"left\">Welcome to BIT</div>, <div class=\"info\" float=\"right\">\n",
      "<span>Good Good Study</span>\n",
      "<a href=\"www.bjsxt.com\"></a>\n",
      "<strong><!-- 这个是注释啊 --></strong>\n",
      "</div>]\n",
      "\n",
      "[<title id=\"title\">北理工</title>]\n",
      "\n",
      "[<div class=\"info\" float=\"left\">Welcome to BIT</div>, <div class=\"info\" float=\"right\">\n",
      "<span>Good Good Study</span>\n",
      "<a href=\"www.bjsxt.com\"></a>\n",
      "<strong><!-- 这个是注释啊 --></strong>\n",
      "</div>]\n",
      "\n",
      "[<span>Good Good Study</span>]\n",
      "\n",
      "[<a href=\"www.bjsxt.com\"></a>]\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Select div tags\n",
    "print(soup.select('div'), end='\\n\\n')\n",
    "\n",
    "# Select the tag whose id is title\n",
    "print(soup.select('#title'), end='\\n\\n')\n",
    "\n",
    "# Select tags whose class is info\n",
    "print(soup.select('.info'), end='\\n\\n')\n",
    "\n",
    "# Select span tags that are direct children of a div\n",
    "print(soup.select('div > span'), end='\\n\\n')\n",
    "\n",
    "# Select a tags that are direct children of a div with class info\n",
    "print(soup.select('div.info > a'), end='\\n\\n')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "25de8881",
   "metadata": {},
   "source": [
    "## Data Extraction: pyquery\n",
    "[Official docs](https://pythonhosted.org/pyquery/)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3dbec2ed",
   "metadata": {},
   "source": [
    "### Initialization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "id": "5f4d170f",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T02:46:38.173608Z",
     "start_time": "2023-10-24T02:46:38.166830Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<title id=\"title\">北理工</title>\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Create a pyquery object from a string\n",
    "html = '''\n",
    "<title id=\"title\">北理工</title>\n",
    "\n",
    "<div class=\"info\" float=\"left\">Welcome to BIT</div>\n",
    "\n",
    "<div class=\"info\" float=\"right\">\n",
    "    <span>Good Good Study</span>\n",
    "    \n",
    "    <a href=\"www.bjsxt.com\"></a>\n",
    "    \n",
    "    <strong><!-- 这个是注释啊 --></strong>\n",
    "</div>\n",
    "'''\n",
    "from pyquery import PyQuery as pq\n",
    "\n",
    "# Create the pyquery object\n",
    "doc = pq(html)\n",
    "\n",
    "# Get the title tag\n",
    "print(doc('title'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "id": "06093690",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T02:49:30.192571Z",
     "start_time": "2023-10-24T02:49:30.119227Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<title>百度一下，你就知道</title>\n"
     ]
    }
   ],
   "source": [
    "# Create a pyquery object from a URL\n",
    "\n",
    "from pyquery import PyQuery as pq\n",
    "\n",
    "# Create the pyquery object\n",
    "doc = pq(url='http://www.baidu.com')\n",
    "\n",
    "# Get the title tag\n",
    "print(doc('title'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 70,
   "id": "f0a3eb84",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T02:52:41.644248Z",
     "start_time": "2023-10-24T02:52:41.633368Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<title>Document</title>\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Create a pyquery object from a file\n",
    "\n",
    "from pyquery import PyQuery as pq\n",
    "\n",
    "# Create the pyquery object\n",
    "doc = pq(filename='./html/alter.html')\n",
    "\n",
    "# Get the title tag\n",
    "print(doc('title'))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ce70334a",
   "metadata": {},
   "source": [
    "### Selecting nodes\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "id": "d41643e1",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T03:58:46.980310Z",
     "start_time": "2023-10-24T03:58:46.971796Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<title id=\"t\">Document</title>\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Get the current node\n",
    "\n",
    "from pyquery import PyQuery as pq\n",
    "\n",
    "# Create the pyquery object\n",
    "doc = pq(filename='./html/alter.html')\n",
    "\n",
    "# Select the tag whose id is t\n",
    "print(doc('#t'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 78,
   "id": "21c84192",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T04:02:44.004050Z",
     "start_time": "2023-10-24T04:02:43.996194Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<p id=\"top\"/><p>程序员</p>\n",
      "    \n"
     ]
    }
   ],
   "source": [
    "\"\"\"\n",
    "Parent node\n",
    "- after selecting the current node, call the parent method\n",
    "\n",
    "Sibling nodes\n",
    "- after selecting the current node, call the siblings method\n",
    "\n",
    "Child nodes\n",
    "- after selecting the current node, call the children method (shown below)\n",
    "\"\"\"\n",
    "\n",
    "from pyquery import PyQuery as pq\n",
    "doc = pq(filename='./html/alter.html')\n",
    "\n",
    "# Get the children of the node whose id is main\n",
    "print(doc('#main').children())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d72263c2",
   "metadata": {},
   "source": [
    "### Getting attributes"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 82,
   "id": "2c5466e8",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T04:06:34.571041Z",
     "start_time": "2023-10-24T04:06:34.559544Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "北理工\n"
     ]
    }
   ],
   "source": [
    "from pyquery import PyQuery as pq\n",
    "\n",
    "# Create the PyQuery object\n",
    "doc = pq(filename='./html/alter.html')\n",
    "\n",
    "# Get the tag whose id is main\n",
    "a = doc('#main')\n",
    "\n",
    "# Get the tag's href attribute\n",
    "print(a.attr('href'))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2d451e94",
   "metadata": {},
   "source": [
    "### Getting content"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 88,
   "id": "762d553d",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T04:09:45.275713Z",
     "start_time": "2023-10-24T04:09:45.269196Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "        <p id=\"top\"/><p>程序员</p>\n",
      "    \n",
      "程序员\n"
     ]
    }
   ],
   "source": [
    "from pyquery import PyQuery as pq\n",
    "\n",
    "doc = pq(filename='./html/alter.html')\n",
    "\n",
    "# Get the tag whose id is main\n",
    "div = doc('#main')\n",
    "\n",
    "# Get the HTML inside the tag\n",
    "print(div.html())\n",
    "\n",
    "# Get the text inside the tag\n",
    "print(div.text())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7d1fef2f",
   "metadata": {},
   "source": [
    "### Examples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 76,
   "id": "f22c3981",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T03:05:06.727557Z",
     "start_time": "2023-10-24T03:05:06.550536Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'hello'"
      ]
     },
     "execution_count": 76,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from pyquery import PyQuery as pq\n",
    "\n",
    "# 1. A pyquery object can be built from an HTML string, an HTML file, or a URL\n",
    "d = pq(\"<html><title>hello</title></html>\")\n",
    "\n",
    "path_to_html_file = './html/alter.html'\n",
    "d = pq(filename=path_to_html_file)\n",
    "\n",
    "d = pq(url='http://www.baidu.com')  # note: the url apparently must be written in full\n",
    "\n",
    "\n",
    "# 2. html() and text() return the matching HTML block or text block\n",
    "p = pq(\"<head><title>hello</title></head>\")\n",
    "\n",
    "p('head').html()  # returns <title>hello</title>\n",
    "p('head').text()  # returns hello"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 78,
   "id": "e167b9e9",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T03:06:14.156634Z",
     "start_time": "2023-10-24T03:06:14.150034Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<p>test 1</p><p>test 2</p>\n",
      "test 1\n"
     ]
    }
   ],
   "source": [
    "# 3. Get elements by HTML tag\n",
    "d = pq('<div><p>test 1</p><p>test 2</p></div>')\n",
    "\n",
    "print(d('p'))  # returns <p>test 1</p><p>test 2</p>\n",
    "\n",
    "# Note: when more than one element matches, html() returns only the first element's content\n",
    "print(d('p').html())  # returns test 1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 79,
   "id": "cd6a9a4b",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T03:07:10.797591Z",
     "start_time": "2023-10-24T03:07:10.792066Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "test 2\n"
     ]
    }
   ],
   "source": [
    "# 4. eq(index): get the element at the given index. Continuing the example, to get the second p tag's content:\n",
    "print(d('p').eq(1).html())  # returns test 2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "id": "360c9953",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T03:41:10.487062Z",
     "start_time": "2023-10-24T03:41:10.483125Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<p id=\"1\">test 1</p>\n"
     ]
    }
   ],
   "source": [
    "# 5. filter(): select elements by class or id, e.g.:\n",
    "\n",
    "from pyquery import PyQuery as pq\n",
    "# Create the pyquery object\n",
    "d = pq(\"\"\"<div><p id='1'>test 1</p><p class='2'>test 3</p></div>\"\"\")\n",
    "\n",
    "# Select the p tag whose id is 1\n",
    "print(d('p').filter('#1'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "id": "7f8d497c",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T03:42:02.452726Z",
     "start_time": "2023-10-24T03:42:02.447411Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<p id=\"1\">test 1</p><p class=\"2\">test 2</p>\n",
      "<p id=\"1\">test 1</p>\n"
     ]
    }
   ],
   "source": [
    "# 6. find(): search nested elements, e.g.:\n",
    "d = pq(\"<div><p id='1'>test 1</p><p class='2'>test 2</p></div>\")\n",
    "\n",
    "# Find all p tags under the div\n",
    "print(d('div').find('p'))\n",
    "\n",
    "# Find the first p tag under the div\n",
    "print(d('div').find('p').eq(0))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "id": "768697f5",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T03:45:14.988145Z",
     "start_time": "2023-10-24T03:45:14.983700Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'test 1'"
      ]
     },
     "execution_count": 53,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 7. Get elements directly by class or id, e.g.:\n",
    "d = pq(\"<div><p id='1'>test 1</p></div>\")\n",
    "\n",
    "# Get the content of the element whose id is 1\n",
    "d('#1').html()  # returns test 1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "id": "051fd93a",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T03:46:52.522269Z",
     "start_time": "2023-10-24T03:46:52.516744Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "http://hello.com\n",
      "my_id\n"
     ]
    }
   ],
   "source": [
    "# 8.获取属性值，例：\n",
    "d = pq(\"<p id='my_id'><a href='http://hello.com'>hello</a></p>\")\n",
    "\n",
    "# 获取 a标签的 href属性\n",
    "print(d('a').attr('href'))  # 返回 http://hello.com\n",
    "\n",
    "# 获取 p标签的 id\n",
    "print(d('p').attr('id'))  # 返回 my_id"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "id": "ae841375",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T03:47:21.497509Z",
     "start_time": "2023-10-24T03:47:21.492923Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "http://baidu.com\n"
     ]
    }
   ],
   "source": [
    "# 9.修改属性值，例：\n",
    "d('a').attr('href', 'http://baidu.com') # 把href属性修改为了baidu\n",
    "print(d('a').attr('href'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "id": "02b3cbbd",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T03:49:28.352457Z",
     "start_time": "2023-10-24T03:49:28.348301Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<div class=\"my_class\">牛逼</div>\n"
     ]
    }
   ],
   "source": [
    "# 10.addClass(value) ——为元素添加类，例：\n",
    "d = pq('<div>牛逼</div>')\n",
    "\n",
    "# 添加类属性\n",
    "d.addClass('my_class')\n",
    "\n",
    "print(d('.my_class'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "id": "5fe406dd",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T03:49:54.970281Z",
     "start_time": "2023-10-24T03:49:54.964543Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 60,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 11.hasClass(name) # 返回判断元素是否包含给定的类，例：\n",
    "d = pq(\"<div class='my_class'></div>\")\n",
    "\n",
    "d.hasClass('my_class')  # 返回True"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "id": "539c169a",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T03:51:27.283742Z",
     "start_time": "2023-10-24T03:51:27.279373Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<p id=\"1\">hello</p><p id=\"2\">world</p>\n",
      "<p id=\"2\">world</p>\n"
     ]
    }
   ],
   "source": [
    "# 12.children(selector=None) ——获取子元素，例：\n",
    "d = pq(\"<span><p id='1'>hello</p><p id='2'>world</p></span>\")\n",
    "\n",
    "# 获取 子元素\n",
    "print(d.children())\n",
    "\n",
    "# 获取 子元素中 id为2 的标签\n",
    "print(d.children('#2'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "id": "a7bc9d17",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T03:54:13.413094Z",
     "start_time": "2023-10-24T03:54:13.408031Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<p><span><p id=\"1\">hello</p><p id=\"2\">world</p></span></p><span><p id=\"1\">hello</p><p id=\"2\">world</p></span>\n",
      "<span><p id=\"1\">hello</p><p id=\"2\">world</p></span>\n",
      "<p><span><p id=\"1\">hello</p><p id=\"2\">world</p></span></p>\n"
     ]
    }
   ],
   "source": [
    "# 13.parents(selector=None)——获取父元素，例：\n",
    "d = pq(\"<p><span><p id='1'>hello</p><p id='2'>world</p></span></p>\")\n",
    "\n",
    "# 返回 p标签的 所有父标签\n",
    "print(d('p').parents())\n",
    "\n",
    "# 返回 id为1 标签的 span类型的父标签\n",
    "print(d('#1').parents('span'))\n",
    "print(d('#1').parents('p'))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "id": "3e1d26a0",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T03:57:02.118945Z",
     "start_time": "2023-10-24T03:57:02.111706Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<p id=\"2\">world</p><img scr=\"\"/>\n",
      "<img scr=\"\"/>\n"
     ]
    }
   ],
   "source": [
    "# 14.nextAll(selector=None) —— 返回后面全部的元素块，例：\n",
    "d = pq(\"<p id='1'>hello</p><p id='2'>world</p><img scr='' />\")\n",
    "\n",
    "# 获取 第一个 p标签后 所有元素块\n",
    "print(d('p:first').nextAll())\n",
    "\n",
    "# 获取 最后一个 p标签后 所有元素块\n",
    "print(d('p:last').nextAll())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 70,
   "id": "2c9591b8",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T03:57:54.502944Z",
     "start_time": "2023-10-24T03:57:54.498570Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<p id=\"1\">test 1</p>\n"
     ]
    }
   ],
   "source": [
    "# 15.not_(selector) ——返回不匹配选择器的元素，例：\n",
    "d = pq(\"<p id='1'>test 1</p><p id='2'>test 2</p>\")\n",
    "\n",
    "# 返回 id不为2的 p标签\n",
    "print(d('p').not_('#2'))  # 返回[<p#1>]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4c067891",
   "metadata": {},
   "source": [
    "## 数据提取_XPath\n",
    "[官网](http://lxml.de/index.html)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "12e41d17",
   "metadata": {},
   "source": [
    "### 选取节点"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "36434c52",
   "metadata": {},
   "source": [
    "**节点之间的关系：**\n",
    "- 父（Parent）\n",
    "- 子（Children）\n",
    "- 同胞（Sibling）\n",
    "- 先辈（Ancestor）\n",
    "- 后代（Descendant）"
   ]
  },
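  {
   "cell_type": "markdown",
   "id": "9a1f3c2e",
   "metadata": {},
   "source": [
    "上面几种节点关系可以用 XPath 轴（axis）直观地验证，下面是基于 lxml 的一个小演示（HTML 为假设数据）：\n",
    "\n",
    "```python\n",
    "from lxml import etree\n",
    "\n",
    "html = etree.HTML('<div><p>one</p><p id=\"two\">two</p></div>')\n",
    "p = html.xpath('//p[@id=\"two\"]')[0]\n",
    "\n",
    "# 父（parent 轴）\n",
    "print(p.xpath('parent::*')[0].tag)  # div\n",
    "\n",
    "# 同胞（preceding-sibling 轴）\n",
    "print(p.xpath('preceding-sibling::p/text()'))  # ['one']\n",
    "\n",
    "# 先辈（ancestor 轴，etree.HTML 会自动补全 html/body）\n",
    "print([e.tag for e in p.xpath('ancestor::*')])  # ['html', 'body', 'div']\n",
    "```"
   ]
  },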
  {
   "cell_type": "markdown",
   "id": "f6a6d411",
   "metadata": {},
   "source": [
    "**常用的路径表达式：**\n",
    "![](01_爬虫基础与数据提取_images/10.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d23d53b0",
   "metadata": {},
   "source": [
    "**选取若干路径：**\n",
    "\n",
    "通过在路径表达式中使用“|”运算符，您可以选取若干个路径\n",
    "\n",
    "xpath('//div`|`//table')  获取所有的div与table节点"
   ]
  },
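  {
   "cell_type": "markdown",
   "id": "8b2e4d5f",
   "metadata": {},
   "source": [
    "用 lxml 验证一下 `|` 运算符（HTML 为假设数据）：\n",
    "\n",
    "```python\n",
    "from lxml import etree\n",
    "\n",
    "html = etree.HTML('<div>d1</div><table><tr><td>t1</td></tr></table>')\n",
    "\n",
    "# 用 | 同时选取所有的 div 与 table 节点\n",
    "result = html.xpath('//div | //table')\n",
    "print([e.tag for e in result])  # ['div', 'table']\n",
    "```"
   ]
  },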
  {
   "cell_type": "markdown",
   "id": "7e5817e8",
   "metadata": {},
   "source": [
    "**谓语：**\n",
    "\n",
    "谓语被嵌在方括号内，用来查找某个特定的节点或包含某个制定的值的节点\n",
    "\n",
    "![](01_爬虫基础与数据提取_images/11.png)"
   ]
  },
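  {
   "cell_type": "markdown",
   "id": "7c3f5e6a",
   "metadata": {},
   "source": [
    "谓语的几种常见写法，用 lxml 演示（HTML 为假设数据）：\n",
    "\n",
    "```python\n",
    "from lxml import etree\n",
    "\n",
    "html = etree.HTML('''\n",
    "<ul>\n",
    "  <li class=\"item-0\"><a href=\"link1.html\">first</a></li>\n",
    "  <li class=\"item-1\"><a href=\"link2.html\">second</a></li>\n",
    "  <li class=\"item-0\"><a href=\"link3.html\">third</a></li>\n",
    "</ul>''')\n",
    "\n",
    "# [1]：选取第一个 li（XPath 下标从 1 开始）\n",
    "print(html.xpath('//li[1]/a/text()'))  # ['first']\n",
    "\n",
    "# [last()]：选取最后一个 li\n",
    "print(html.xpath('//li[last()]/a/text()'))  # ['third']\n",
    "\n",
    "# [@class=\"...\"]：按属性值选取\n",
    "print(html.xpath('//li[@class=\"item-1\"]/a/text()'))  # ['second']\n",
    "```"
   ]
  },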
  {
   "cell_type": "markdown",
   "id": "78ab6e2e",
   "metadata": {},
   "source": [
    "**选择XML文件中节点：**\n",
    "\n",
    "- element（元素节点）\n",
    "- attribute（属性节点）\n",
    "- text （文本节点）\n",
    "- concat (元素节点,元素节点)\n",
    "- comment （注释节点）\n",
    "- root （根节点）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fe5ec231",
   "metadata": {},
   "source": [
    "**XPath 运算符：**\n",
    "\n",
    "![](01_爬虫基础与数据提取_images/12.png)"
   ]
  },
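  {
   "cell_type": "markdown",
   "id": "6d4a7b8c",
   "metadata": {},
   "source": [
    "XPath 运算符同样可以写在谓语里，用 lxml 演示几个（HTML 为假设数据）：\n",
    "\n",
    "```python\n",
    "from lxml import etree\n",
    "\n",
    "html = etree.HTML('<ul><li>a</li><li>b</li><li>c</li></ul>')\n",
    "\n",
    "# or：位置为 1 或 3 的 li\n",
    "print(html.xpath('//li[position()=1 or position()=3]/text()'))  # ['a', 'c']\n",
    "\n",
    "# <=：前两个 li\n",
    "print(html.xpath('//li[position()<=2]/text()'))  # ['a', 'b']\n",
    "\n",
    "# mod：位置为奇数的 li\n",
    "print(html.xpath('//li[position() mod 2 = 1]/text()'))  # ['a', 'c']\n",
    "```"
   ]
  },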
  {
   "cell_type": "code",
   "execution_count": 92,
   "id": "3cadf879",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T10:22:39.301359Z",
     "start_time": "2023-10-24T10:22:39.296414Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<html><body><div>\n",
      "    <ul>\n",
      "         <li class=\"item-0\"><a href=\"link1.html\">first item</a></li>\n",
      "         <li class=\"item-1\"><a href=\"link2.html\">second item</a></li>\n",
      "         <li class=\"item-inactive\"><a href=\"link3.html\">third item</a></li>\n",
      "         <li class=\"item-1\"><a href=\"link4.html\">fourth item</a></li>\n",
      "         <li class=\"item-0\"><a href=\"link5.html\">fifth item</a>\n",
      "     </li></ul>\n",
      " </div>\n",
      "</body></html>\n"
     ]
    }
   ],
   "source": [
    "from lxml import etree\n",
    "text = '''\n",
    "<div>\n",
    "    <ul>\n",
    "         <li class=\"item-0\"><a href=\"link1.html\">first item</a></li>\n",
    "         <li class=\"item-1\"><a href=\"link2.html\">second item</a></li>\n",
    "         <li class=\"item-inactive\"><a href=\"link3.html\">third item</a></li>\n",
    "         <li class=\"item-1\"><a href=\"link4.html\">fourth item</a></li>\n",
    "         <li class=\"item-0\"><a href=\"link5.html\">fifth item</a>\n",
    "     </ul>\n",
    " </div>\n",
    "'''\n",
    "# 利用etree 初始化 HTML\n",
    "html = etree.HTML(text)\n",
    "\n",
    "# 将 HTML变成字符串\n",
    "result = etree.tostring(html)\n",
    "\n",
    "# 打印，发现 HTML被自动修正 -- 不仅补全了 li 标签，还添加了 body，html 标签\n",
    "print(result.decode())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 93,
   "id": "aa3fd673",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T10:25:12.370440Z",
     "start_time": "2023-10-24T10:25:12.362346Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<div>\n",
      "    <ul>\n",
      "         <li class=\"item-0\"><a href=\"link1.html\">first item</a></li>\n",
      "         <li class=\"item-1\"><a href=\"link2.html\">second item</a></li>\n",
      "         <li class=\"item-inactive\"><a href=\"link3.html\"><span class=\"bold\">third item</span></a></li>\n",
      "         <li class=\"item-1\"><a href=\"link4.html\">fourth item</a></li>\n",
      "         <li class=\"item-0\"><a href=\"link5.html\">fifth item</a></li>\n",
      "     </ul>\n",
      " </div>\n",
      "\n"
     ]
    }
   ],
   "source": [
    "\"\"\"利用 parse 方法来读取文件\"\"\"\n",
    "from lxml import etree\n",
    "\n",
    "# 读取 HTML文件\n",
    "html = etree.parse('./html/hello.html')\n",
    "\n",
    "# 将 HTML变成 string\n",
    "result = etree.tostring(html, pretty_print=True)\n",
    "\n",
    "print(result.decode())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "30439290",
   "metadata": {},
   "source": [
    "### XPath具体使用"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 94,
   "id": "2f6b9320",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T10:27:30.751904Z",
     "start_time": "2023-10-24T10:27:30.744667Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[<Element li at 0x12010b1c0>, <Element li at 0x12010be40>, <Element li at 0x12010b600>, <Element li at 0x12010b780>, <Element li at 0x12010b400>]\n"
     ]
    }
   ],
   "source": [
    "\"\"\"获取所有的 `<li>` 标签\"\"\"\n",
    "from lxml import etree\n",
    "\n",
    "# 读取 HTML文件\n",
    "html = etree.parse('./html/hello.html')\n",
    "\n",
    "# 获取 <li> 标签\n",
    "result = html.xpath('//li')\n",
    "\n",
    "print (result)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 95,
   "id": "5a9420a7",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T10:28:37.391131Z",
     "start_time": "2023-10-24T10:28:37.385440Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['item-0', 'item-1', 'item-inactive', 'item-1', 'item-0']\n"
     ]
    }
   ],
   "source": [
    "\"\"\"获取` <li> `标签的所有 class\"\"\"\n",
    "result = html.xpath('//li/@class')\n",
    "print (result)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 96,
   "id": "55927e51",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T10:29:27.999978Z",
     "start_time": "2023-10-24T10:29:27.995068Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[<Element a at 0x120119580>]\n"
     ]
    }
   ],
   "source": [
    "\"\"\"获取 `<li>` 标签下 href 为 link1.html 的 `<a>` 标签\"\"\"\n",
    "result = html.xpath('//li/a[@href=\"link1.html\"]')\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 97,
   "id": "a45460c8",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T10:31:38.075629Z",
     "start_time": "2023-10-24T10:31:38.066302Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[<Element span at 0x12011bd40>]\n"
     ]
    }
   ],
   "source": [
    "\"\"\"获取` <li> `标签下的所有 `<span>` 标签\"\"\"\n",
    "\n",
    "result = html.xpath('//li/span') # 这么写是不对的\n",
    "\n",
    "# 因为 / 是用来获取子元素的，而 <span> 并不是 <li> 的子元素，所以，要用双斜杠\n",
    "result = html.xpath('//li//span')\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 98,
   "id": "df404cba",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T10:32:43.458909Z",
     "start_time": "2023-10-24T10:32:43.454371Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['bold']\n"
     ]
    }
   ],
   "source": [
    "\"\"\"获取 `<li>` 标签下的所有 class，不包括`<li>`\"\"\"\n",
    "result = html.xpath('//li/a//@class')\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 99,
   "id": "6fbbbf2b",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T10:34:32.794532Z",
     "start_time": "2023-10-24T10:34:32.789424Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['link5.html']\n"
     ]
    }
   ],
   "source": [
    "\"\"\"获取最后一个 `<li>` 的 `<a>` 的 href\"\"\"\n",
    "result = html.xpath('//li[last()]/a/@href')\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 100,
   "id": "766ddc44",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T10:35:10.693885Z",
     "start_time": "2023-10-24T10:35:10.688184Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "fourth item\n"
     ]
    }
   ],
   "source": [
    "\"\"\"获取倒数第二个元素的内容\"\"\"\n",
    "result = html.xpath('//li[last()-1]/a')\n",
    "print(result[0].text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 101,
   "id": "961c6d44",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T10:35:51.800780Z",
     "start_time": "2023-10-24T10:35:51.796044Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "span\n"
     ]
    }
   ],
   "source": [
    "\"\"\"获取 class 为 bold 的标签名\"\"\"\n",
    "result = html.xpath('//*[@class=\"bold\"]')\n",
    "print(result[0].tag)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d746cac3",
   "metadata": {},
   "source": [
    "## 数据提取_jsonPath"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cf901a7f",
   "metadata": {},
   "source": [
    "### Json\n",
    "json简单说就是 javascript 中的对象 和 数组，所以这两种结构就是对象和数组两种结构\n",
    "\n",
    "1. 对象：对象在js中表示为{ }括起来的内容，数据结构为 { key：value, key：value, ... }的键值对的结构，在面向对象的语言中，key为对象的属性，value为对应的属性值，所以很容易理解，取值方法为 对象.key 获取属性值，这个属性值的类型可以是数字、字符串、数组、对象这几种\n",
    "  \n",
    "2. 数组：数组在js中是中括号[ ]括起来的内容，数据结构为 [\"Python\", \"javascript\", \"C++\", ...]，取值方式和所有语言中一样，使用索引获取，字段值的类型可以是 数字、字符串、数组、对象几种"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "702f049d",
   "metadata": {},
   "source": [
    "### json.loads()\n",
    "\n",
    "> 把Json格式字符串解码转换成 Python 对象 从 json到python 的类型转化对照如下："
   ]
  },
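  {
   "cell_type": "markdown",
   "id": "5e5b8c9d",
   "metadata": {},
   "source": [
    "这个对照关系可以直接验证：JSON 的 object/array/string/number/true/false/null 分别对应 Python 的 dict/list/str/int(float)/True/False/None（示例数据为假设）：\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "obj = json.loads('{\"ok\": true, \"count\": 3, \"ratio\": 0.5, \"tags\": [\"a\"], \"extra\": null}')\n",
    "print(obj)  # {'ok': True, 'count': 3, 'ratio': 0.5, 'tags': ['a'], 'extra': None}\n",
    "print(type(obj['ok']), type(obj['extra']))  # <class 'bool'> <class 'NoneType'>\n",
    "```"
   ]
  },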
  {
   "cell_type": "code",
   "execution_count": 103,
   "id": "9609d5ea",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T10:43:32.998146Z",
     "start_time": "2023-10-24T10:43:32.994078Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[1, 2, 3, 4]\n",
      "{'city': '北京', 'name': '范爷'}\n"
     ]
    }
   ],
   "source": [
    "import json\n",
    "\n",
    "strList = '[1, 2, 3, 4]'\n",
    "strDict = '{\"city\": \"北京\", \"name\": \"范爷\"}'\n",
    "\n",
    "# string -> json\n",
    "print(json.loads(strList))\n",
    "\n",
    "# string -> json\n",
    "print(json.loads(strDict))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "07ec8fab",
   "metadata": {},
   "source": [
    "### json.dumps()\n",
    "\n",
    "> 实现 python 类型转化为 json 字符串，返回一个 str 对象 把一个 Python对象编码转换成 Json字符串\n",
    "\n",
    "> 从python原始类型向json类型的转化对照如下："
   ]
  },
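  {
   "cell_type": "markdown",
   "id": "4f6c9dae",
   "metadata": {},
   "source": [
    "反方向的对照可以补充验证：Python 的 True/False/None 会分别转成 JSON 的 true/false/null（示例数据为假设）：\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "print(json.dumps({'ok': True, 'val': None}))  # {\"ok\": true, \"val\": null}\n",
    "print(json.dumps((1, 2)))  # 元组也会被转成 JSON 数组：[1, 2]\n",
    "```"
   ]
  },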
  {
   "cell_type": "code",
   "execution_count": 104,
   "id": "c5846941",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T10:45:54.912027Z",
     "start_time": "2023-10-24T10:45:54.904853Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[1, 2, 3, 4]\n",
      "[1, 2, 3, 4]\n",
      "{\"city\": \"\\u5317\\u4eac\", \"name\": \"\\u8303\\u7237\"}\n",
      "{\"city\": \"北京\", \"name\": \"范爷\"}\n"
     ]
    }
   ],
   "source": [
    "import json\n",
    "\n",
    "\n",
    "listStr = [1, 2, 3, 4]\n",
    "tupleStr = (1, 2, 3, 4)\n",
    "dictStr = {\"city\": \"北京\", \"name\": \"范爷\"}\n",
    "\n",
    "# list -> json字符串\n",
    "print(json.dumps(listStr))\n",
    "\n",
    "# tuple -> json字符串\n",
    "print(json.dumps(tupleStr))\n",
    "\n",
    "\n",
    "# 注意：json.dumps() 序列化时默认使用的 ascii编码\n",
    "# 添加参数 ensure_ascii=False 禁用ascii编码，按 utf-8编码\n",
    "\n",
    "print(json.dumps(dictStr))\n",
    "print(json.dumps(dictStr, ensure_ascii=False))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2aeccb13",
   "metadata": {},
   "source": [
    "### json.dump()\n",
    "\n",
    "> 将Python内置类型序列化为 json对象后写入文件"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 105,
   "id": "5f961f56",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T10:48:05.380569Z",
     "start_time": "2023-10-24T10:48:05.375664Z"
    }
   },
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "# 列表 -> json文件\n",
    "listStr = [{\"city\": \"北京\"}, {\"name\": \"范爷\"}]\n",
    "json.dump(listStr, open(\"./data/listStr.json\", \"w\"), ensure_ascii=False)\n",
    "\n",
    "# 字典 -> json文件\n",
    "dictStr = {\"city\": \"北京\", \"name\": \"范爷\"}\n",
    "json.dump(dictStr, open(\"./data/dictStr.json\", \"w\"), ensure_ascii=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fcf0d230",
   "metadata": {},
   "source": [
    "### json.load()\n",
    "\n",
    "> 读取文件中json形式 的字符串元素 转化成python类型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 107,
   "id": "a546d932",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T10:49:52.826618Z",
     "start_time": "2023-10-24T10:49:52.821023Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'city': '北京'}, {'name': '范爷'}]\n",
      "{'city': '北京', 'name': '范爷'}\n"
     ]
    }
   ],
   "source": [
    "import json\n",
    "\n",
    "# json -> list\n",
    "strList = json.load(open(\"./data/listStr.json\"))\n",
    "print(strList)\n",
    "\n",
    "# json -> dict\n",
    "strDict = json.load(open(\"./data/dictStr.json\"))\n",
    "print(strDict)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d005f128",
   "metadata": {},
   "source": [
    "### JsonPath\n",
    "\n",
    "JsonPath 是一种信息抽取类库，是从 JSON文档 中抽取指定信息的工具"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c4b8e6e4",
   "metadata": {},
   "source": [
    "![](01_爬虫基础与数据提取_images/13.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 113,
   "id": "12a9ecc6",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T11:01:01.272898Z",
     "start_time": "2023-10-24T11:01:01.141254Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['安阳', '安庆', '安康', '安顺', '鞍山', '澳门', '阿拉善盟', '阿坝藏族羌族自治州', '阿拉尔', '北京', '保定', '包头', '滨州', '蚌埠', '宝鸡', '北海', '亳州', '毕节', '百色', '保山', '本溪', '巴音郭楞', '巴中', '白银', '巴彦淖尔', '白城', '白山', '北屯', '成都', '长沙', '重庆', '长春', '常州', '沧州', '郴州', '赤峰', '承德', '常德', '滁州', '潮州', '朝阳', '楚雄', '澄迈', '池州', '昌吉', '崇左', '昌都', '东莞', '大连', '德州', '德阳', '大理', '大同', '东营', '达州', '大庆', '丹东', '定西', '德宏', '儋州', '大兴安岭', '鄂尔多斯', '鄂州', '恩施', '佛山', '福州', '阜阳', '抚州', '阜新', '抚顺', '防城港', '广州', '贵阳', '赣州', '桂林', '广安', '贵港', '甘孜藏族自治州', '广元', '甘南', '杭州', '合肥', '惠州', '哈尔滨', '海口', '呼和浩特', '湖州', '邯郸', '菏泽', '衡水', '淮安', '海外', '衡阳', '黄石', '怀化', '黄冈', '淮南', '淮北', '河源', '鹤壁', '汉中', '黑河', '河池', '呼伦贝尔', '红河', '葫芦岛', '黄山', '哈密', '海东', '贺州', '黄南', '济南', '金华', '嘉兴', '江门', '济宁', '揭阳', '九江', '荆州', '晋中', '焦作', '锦州', '景德镇', '吉林', '佳木斯', '晋城', '吉安', '荆门', '金昌', '酒泉', '济源', '嘉峪关', '昆明', '开封', '喀什', '克拉玛依', '廊坊', '兰州', '洛阳', '临沂', '聊城', '柳州', '连云港', '乐山', '丽水', '六安', '临汾', '泸州', '漯河', '龙岩', '吕梁', '凉山彝族自治州', '丽江', '拉萨', '娄底', '六盘水', '辽阳', '临沧', '辽源', '陵水黎族自治县', '临夏', '林芝', '来宾', '陇南', '绵阳', '眉山', '茂名', '梅州', '马鞍山', '牡丹江', '南京', '宁波', '南昌', '南宁', '南通', '南阳', '宁德', '南充', '内江', '南平', '莆田', '平顶山', '盘锦', '濮阳', '萍乡', '普洱', '攀枝花', '平凉', '青岛', '泉州', '清远', '秦皇岛', '曲靖', '衢州', '齐齐哈尔', '钦州', '庆阳', '琼海', '七台河', '黔南', '黔东南', '黔西南', '日照', '日喀则', '上海', '深圳', '苏州', '沈阳', '石家庄', '汕头', '绍兴', '宿迁', '商丘', '三亚', '上饶', '韶关', '宿州', '十堰', '汕尾', '遂宁', '邵阳', '绥化', '随州', '三门峡', '三明', '松原', '四平', '石嘴山', '朔州', '石河子', '商洛', '山南', '神农架林区', '天津', '太原', '台州', '唐山', '泰州', '泰安', '通辽', '铜仁', '通化', '铜陵', '铁岭', '天水', '台湾', '铜川', '天门', '铁门关', '武汉', '无锡', '温州', '潍坊', '芜湖', '乌鲁木齐', '威海', '梧州', '渭南', '乌兰察布', '武威', '文山', '万宁', '乌海', '文昌', '五家渠', '五指山', '西安', '厦门', '徐州', '新乡', '襄阳', '邢台', '咸阳', '香港', '许昌', '孝感', '西宁', '信阳', '新余', '湘潭', '咸宁', '宣城', '西双版纳', '仙桃', '忻州', '湘西土家族苗族自治州', '兴安盟', '锡林郭勒盟', '烟台', '扬州', '银川', '盐城', '宜宾', '宜昌', '阳江', '玉林', '岳阳', '宜春', '运城', '营口', '榆林', '玉溪', '益阳', '雅安', '云浮', '阳泉', '永州', '鹰潭', '延边', '伊犁', '延安', '伊春', '郑州', '珠海', '中山', '淄博', '株洲', '漳州', '湛江', '肇庆', '镇江', '遵义', '枣庄', '周口', '驻马店', '张家口', '长治', '资阳', '舟山', '张掖', '昭通', '自贡', '张家界', '中卫']\n"
     ]
    }
   ],
   "source": [
    "from urllib.request import urlopen\n",
    "from urllib.request import Request\n",
    "import jsonpath\n",
    "import json\n",
    "\n",
    "url = 'http://www.lagou.com/lbs/getAllCitySearchLabels.json'\n",
    "\n",
    "# 构建 Request请求对象\n",
    "request = Request(url)\n",
    "\n",
    "# 发送请求\n",
    "response = urlopen(request)\n",
    "\n",
    "# 获取数据\n",
    "html = response.read()\n",
    "\n",
    "# json格式字符串 -> python对象\n",
    "jsonobj = json.loads(html)\n",
    "\n",
    "# 从根节点开始，匹配 name节点\n",
    "citylist = jsonpath.jsonpath(jsonobj, '$..name')\n",
    "print(citylist)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6ca60acf",
   "metadata": {},
   "source": [
    "### 注意事项\n",
    "\n",
    "- json.loads() 是把 Json格式字符串 解码转换成 Python对象，如果在 json.loads 的时候出错，要注意被解码的Json字符的编码。\n",
    "  如果传入的字符串的编码 不是 UTF-8 的话，需要指定字符编码的 参数 encoding"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a35f0ff5",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "dataDict = json.loads(jsonStrGBK)\n",
    "  \n",
    "# dataJsonStr 是JSON字符串，假设其编码本身是非 UTF-8的话而是 GBK 的，那么上述代码会导致出错，改为对应的：\n",
    "dataDict = json.loads(jsonStrGBK, encoding=\"GBK\");\n",
    "\n",
    "  \n",
    "# 如果 dataJsonStr 通过 encoding 指定了合适的编码，\n",
    "# 但是其中又包含了其他编码的字符，则需要先去将 dataJsonStr 转换为 Unicode，然后再指定编码格式调用json.loads()\n",
    "dataJsonStrUni = dataJsonStr.decode(\"GB2312\"); \n",
    "dataDict = json.loads(dataJsonStrUni, encoding=\"GB2312\");\n",
    "\n",
    "\"\"\"\n"
   ]
  },
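  {
   "cell_type": "markdown",
   "id": "3a7dabf0",
   "metadata": {},
   "source": [
    "Python 3 下的完整做法是先把 GBK 字节串 decode 成 str 再解析，下面是一个可运行的小例子（GBK 字节串为手动构造的假设数据）：\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# 手动构造一个 GBK 编码的字节串，模拟从网页拿到的响应\n",
    "gbk_bytes = '{\"city\": \"北京\"}'.encode('gbk')\n",
    "\n",
    "# Python 3 的 json.loads 没有 encoding 参数，先 decode 成 str 再解析\n",
    "data = json.loads(gbk_bytes.decode('gbk'))\n",
    "print(data)  # {'city': '北京'}\n",
    "```"
   ]
  },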
  {
   "cell_type": "markdown",
   "id": "0cb65efe",
   "metadata": {},
   "source": [
    "### 字符串编码转换\n",
    "\n",
    "其实编码问题很好搞定，只要记住一点：\n",
    "\n",
    "**任何平台的任何编码 都能和 Unicode 互相转换**\n",
    "\n",
    "UTF-8 与 GBK 互相转换，那就先把 UTF-8 转换成 Unicode，再从 Unicode 转换成 GBK，反之同理"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 116,
   "id": "25e3d5c0",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T11:15:15.413326Z",
     "start_time": "2023-10-24T11:15:15.408580Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "你好地球\n",
      "b'\\xc4\\xe3\\xba\\xc3\\xb5\\xd8\\xc7\\xf2'\n",
      "你好地球\n",
      "b'\\xe4\\xbd\\xa0\\xe5\\xa5\\xbd\\xe5\\x9c\\xb0\\xe7\\x90\\x83'\n"
     ]
    }
   ],
   "source": [
    "# 这是一个 UTF-8 编码的字符串\n",
    "utf8Str = \"你好地球\"\n",
    "\n",
    "# 1. 将 UTF-8 编码的字符串 转换成 Unicode 编码\n",
    "unicodeStr = utf8Str.encode('UTF-8').decode(\"UTF-8\")\n",
    "print(unicodeStr)\n",
    "\n",
    "# 2. 再将 Unicode 编码格式字符串 转换成 GBK 编码\n",
    "gbkData = unicodeStr.encode(\"GBK\")\n",
    "print(gbkData)\n",
    "\n",
    "# 1. 再将 GBK 编码格式字符串 转化成 Unicode\n",
    "unicodeStr = gbkData.decode(\"gbk\")\n",
    "print(unicodeStr)\n",
    "\n",
    "# 2. 再将 Unicode 编码格式字符串转换成 UTF-8\n",
    "utf8Str = unicodeStr.encode(\"UTF-8\")\n",
    "print(utf8Str)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "86cf233e",
   "metadata": {},
   "source": [
    "## 实战：猫眼电影"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0d95bbff",
   "metadata": {},
   "source": [
    "### 猫眼电影 bs4 单电影"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 126,
   "id": "95d1c29b",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2023-10-24T11:51:52.335321Z",
     "start_time": "2023-10-24T11:51:52.299004Z"
    },
    "collapsed": true
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/macbook/opt/anaconda3/lib/python3.9/site-packages/bs4/__init__.py:435: MarkupResemblesLocatorWarning: The input looks more like a filename than markup. You may want to open this file and pass the filehandle into Beautiful Soup.\n",
      "  warnings.warn(\n"
     ]
    },
    {
     "ename": "IndexError",
     "evalue": "list index out of range",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mIndexError\u001b[0m                                Traceback (most recent call last)",
      "Input \u001b[0;32mIn [126]\u001b[0m, in \u001b[0;36m<cell line: 37>\u001b[0;34m()\u001b[0m\n\u001b[1;32m     34\u001b[0m     \u001b[38;5;28mprint\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m电影名：\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mname\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m  类型：\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mtypes\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m  演员：\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mactors\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m     37\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;18m__name__\u001b[39m \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m__main__\u001b[39m\u001b[38;5;124m'\u001b[39m:\n\u001b[0;32m---> 38\u001b[0m     \u001b[43mstart\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n",
      "Input \u001b[0;32mIn [126]\u001b[0m, in \u001b[0;36mstart\u001b[0;34m()\u001b[0m\n\u001b[1;32m     19\u001b[0m soup \u001b[38;5;241m=\u001b[39m BeautifulSoup(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m./html/猫眼电影.html\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mlxml\u001b[39m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m     21\u001b[0m \u001b[38;5;66;03m# 获取电影名称\u001b[39;00m\n\u001b[0;32m---> 22\u001b[0m name \u001b[38;5;241m=\u001b[39m \u001b[43msoup\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mselect\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mh1.name\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m)\u001b[49m\u001b[43m[\u001b[49m\u001b[38;5;241;43m0\u001b[39;49m\u001b[43m]\u001b[49m\n\u001b[1;32m     23\u001b[0m \u001b[38;5;28mprint\u001b[39m(name)\n\u001b[1;32m     25\u001b[0m \u001b[38;5;66;03m# 获取电影类型\u001b[39;00m\n",
      "\u001b[0;31mIndexError\u001b[0m: list index out of range"
     ]
    }
   ],
   "source": [
    "\"\"\"猫眼电影 bs4 单电影\"\"\"\n",
    "from random import betavariate\n",
    "from fake_useragent import UserAgent\n",
    "import requests\n",
    "from bs4 import BeautifulSoup\n",
    "\n",
    "\n",
    "def format_actors(a_list):\n",
    "    actor_set = set()\n",
    "    for a in a_list:\n",
    "        actor_set.add(a.text.strip())\n",
    "    return actor_set\n",
    "\n",
    "\n",
    "def start():\n",
    "\n",
    "    # URL地址\n",
    "    url = 'https://maoyan.com/films/1299372'\n",
    "    \n",
    "    # 随机生成用户代理\n",
    "    headers = {'User-Agent': UserAgent().chrome}\n",
    "    \n",
    "    # 发送 GET请求\n",
    "    resp = requests.get(url, headers=headers)\n",
    "    \n",
    "    # 创建 BeautifulSoup对象\n",
    "    soup = BeautifulSoup(resp.text, 'lxml')\n",
    "\n",
    "    # 获取 class为name的 h1标签列表 -- 选取符合要求的第一个标签内容\n",
    "    name = soup.select('h1.name')[0].text.strip()\n",
    "    \n",
    "    # 获取 class为ellipsis的 li标签列表 -- 选取符合要求的第一个标签内容\n",
    "    types = soup.select('li.ellipsis')[0].text.strip()\n",
    "    \n",
    "    # 获取 class为ellipsis actor 的 li标签下的 div标签下的 a标签列表\n",
    "    actors_m = soup.select('li.celebrity.actor > div > a')\n",
    "    \n",
    "    # 去重\n",
    "    actors = format_actors(actors_m)\n",
    "    print(f'电影名：{name}  类型：{types}  演员：{actors}')\n",
    "\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    start()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a615a2a7",
   "metadata": {},
   "source": [
    "### 猫眼电影 bs4 多电影"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0751bbeb",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"猫眼电影 bs4 多电影\"\"\"\n",
    "from fake_useragent import UserAgent\n",
    "import requests\n",
    "from bs4 import BeautifulSoup\n",
    "from time import sleep\n",
    "\n",
    "\n",
    "def get_list():\n",
    "    num = int(input('请输入要获取多少页数据：'))\n",
    "\n",
    "    for i in range(num):\n",
    "        url = 'https://maoyan.com/films?showType=3&offset={i*30}'\n",
    "\n",
    "        # 随机生成 用户代理\n",
    "        headers = {'User-Agent': UserAgent().chrome}\n",
    "        \n",
    "        # 发送 GET请求\n",
    "        resp = requests.get(url, headers=headers)\n",
    "        \n",
    "        # 创建 BeautifulSoup对象\n",
    "        soup = BeautifulSoup(resp.text, 'lxml')\n",
    "\n",
    "        # 选择 div标签下 data-act属性为 movies-click 的 a标签\n",
    "        all_a = soup.select('div > a[data-act=\"movies-click\"]')\n",
    "    \n",
    "    # 获取 a标签的 href属性\n",
    "    return [a.get('href') for a in all_a]\n",
    "\n",
    "\n",
    "def format_actors(a_list):\n",
    "    \"\"\"去重\"\"\"\n",
    "    actor_set = set()\n",
    "    for a in a_list:\n",
    "        actor_set.add(a.text.strip())\n",
    "    return actor_set\n",
    "\n",
    "\n",
    "def start():\n",
    "    \n",
    "    # 获取 a标签的 href属性\n",
    "    all_href = get_list()\n",
    "\n",
    "    for a in all_href:\n",
    "        sleep(2)\n",
    "        url = f'https://maoyan.com{a}'\n",
    "        \n",
    "        # 随机生成 用户代理\n",
    "        headers = {'User-Agent': UserAgent().chrome}\n",
    "        \n",
    "        # 发送 GET请求\n",
    "        resp = requests.get(url, headers=headers)\n",
    "        \n",
    "        # 创建 BeautifulSoup\n",
    "        soup = BeautifulSoup(resp.text, 'lxml')\n",
    "\n",
    "        # 获取 class为name的 h1标签列表 -- 选取符合要求的第一个标签内容\n",
    "        name = soup.select('h1.name')[0].text.strip()\n",
    "        \n",
    "        # 获取 class为ellipsis的 li标签列表 -- 选取符合要求的第一个标签内容\n",
    "        types = soup.select('li.ellipsis')[0].text.strip()\n",
    "        \n",
    "        # 获取 class为ellipsis actor 的 li标签下的 div标签下的 a标签列表\n",
    "        actors_m = soup.select('li.celebrity.actor > div > a')\n",
    "        \n",
    "        # 去重\n",
    "        actors = format_actors(actors_m)\n",
    "        print(f'电影名：{name}  类型：{types}  演员：{actors}')\n",
    "\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    start()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "06e49e37",
   "metadata": {},
   "source": [
    "### 猫眼电影 bs4 优化"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "db6fa1e8",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"猫眼电影 bs4 优化\"\"\"\n",
    "from fake_useragent import UserAgent\n",
    "import requests\n",
    "from bs4 import BeautifulSoup\n",
    "from time import sleep\n",
    "\n",
    "\n",
    "def get_html(url):\n",
    "    \n",
    "    # 随机生成 用户代理\n",
    "    headers = {'User-Agent': UserAgent().chrome}\n",
    "    \n",
    "    # 发送 GET请求\n",
    "    resp = requests.get(url, headers=headers)\n",
    "    sleep(2)\n",
    "    \n",
    "    # 如果成功访问\n",
    "    if resp.status_code == 200:\n",
    "        \n",
    "        # 进行 UTF-8编码\n",
    "        resp.encoding = 'utf-8'\n",
    "        \n",
    "        # 返回 HTML内容\n",
    "        return resp.text\n",
    "    else:\n",
    "        return None\n",
    "\n",
    "\n",
    "def get_list(html):\n",
    "    \n",
    "    # 创建 BeautifulSoup对象\n",
    "    soup = BeautifulSoup(html, 'lxml')\n",
    "    \n",
    "    # 选择 div标签下的 data-act属性为movies-click的 a标签列表\n",
    "    all_a = soup.select('div > a[data-act=\"movies-click\"]')\n",
    "    \n",
    "    # 获取 a标签的 href属性\n",
    "    return [a.get('href') for a in all_a]\n",
    "\n",
    "\n",
    "def get_index(html):\n",
    "    \n",
    "    # 创建 BeautifulSoup对象\n",
    "    soup = BeautifulSoup(html, 'lxml')\n",
    "    \n",
    "    # 获取 class为name的 h1标签列表 -- 选取符合要求的第一个标签内容\n",
    "    name = soup.select('h1.name')[0].text.strip()\n",
    "    \n",
    "    # 获取 class为ellipsis的 li标签列表 -- 选取符合要求的第一个标签内容\n",
    "    types = soup.select('li.ellipsis')[0].text.strip()\n",
    "    \n",
    "    # 获取 class为ellipsis actor 的 li标签下的 div标签下的 a标签列表\n",
    "    actors_m = soup.select('li.celebrity.actor > div > a')\n",
    "    \n",
    "    # 去重\n",
    "    actors = format_actors(actors_m)\n",
    "    \n",
    "    return f'电影名：{name}  类型：{types}  演员：{actors}'\n",
    "\n",
    "\n",
    "def format_actors(a_list):\n",
    "    \"\"\"去重\"\"\"\n",
    "    actor_set = set()\n",
    "    for a in a_list:\n",
    "        actor_set.add(a.text.strip())\n",
    "    return actor_set\n",
    "\n",
    "\n",
    "def start():\n",
    "    num = int(input('请输入要获取多少页数据：'))\n",
    "    for i in range(num):\n",
    "        url = f'https://maoyan.com/films?showType=3&offset={i*30}'\n",
    "        \n",
    "        # 获取 HTML\n",
    "        html = get_html(url)\n",
    "        \n",
    "        # 获取 href属性\n",
    "        all_href = get_list(html)\n",
    "        \n",
    "        # 遍历\n",
    "        for a in all_href:\n",
    "            url = f'https://maoyan.com{a}'\n",
    "            \n",
    "            # 获取 HTML\n",
    "            index_html = get_html(url)\n",
    "            \n",
    "            # 返回信息\n",
    "            info = get_index(index_html)\n",
    "            print(info)\n",
    "\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    start()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8af49c12",
   "metadata": {},
   "source": [
    "### 猫眼电影 pyquery获取"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "db60e576",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"猫眼电影 pyquery获取\"\"\"\n",
    "from fake_useragent import UserAgent\n",
    "import requests\n",
    "from pyquery import PyQuery\n",
    "from time import sleep\n",
    "\n",
    "\n",
    "def get_html(url):\n",
    "    \"\"\"获取 HTML内容\"\"\"\n",
    "    \n",
    "    # 随机生成 用户代理\n",
    "    headers = {'User-Agent': UserAgent().chrome}\n",
    "    \n",
    "    # 发送 GET请求\n",
    "    resp = requests.get(url, headers=headers)\n",
    "    sleep(2)\n",
    "    \n",
    "    # 如果成功获取 \n",
    "    if resp.status_code == 200:\n",
    "        \n",
    "        # 进行 UTF-8编码\n",
    "        resp.encoding = 'utf-8'\n",
    "        return resp.text\n",
    "    else:\n",
    "        return None\n",
    "\n",
    "\n",
    "def get_list(html):\n",
    "    \"\"\"获取 a标签的 href属性\"\"\"\n",
    "    \n",
    "    # 创建 PyQuery对象\n",
    "    pq = PyQuery(html)\n",
    "    \n",
    "    # 获取 div标签下 data-act属性为 movies-click的 a标签\n",
    "    all_a = pq('div > a[data-act=\"movies-click\"]')\n",
    "    \n",
    "    # 获取 a标签的 href属性\n",
    "    return [a.get('href') for a in all_a]\n",
    "\n",
    "\n",
    "def get_index(html):\n",
    "    \"\"\"获取信息\"\"\"\n",
    "    \n",
    "    # 创建 PyQuery对象\n",
    "    pq = PyQuery(html)\n",
    "    \n",
    "    # 获取 class为name的 h1标签列表 -- 选取符合要求的第一个标签内容\n",
    "    name = pq('h1.name').eq(0).text()\n",
    "    \n",
    "    # 获取 class为ellipsis的 li标签列表 -- 选取符合要求的第一个标签内容\n",
    "    types = pq('li.ellipsis').eq(0).text()\n",
    "    \n",
    "    # 获取 class为ellipsis actor 的 li标签下的 div标签下的 a标签列表\n",
    "    actors_m = pq('li.celebrity.actor > div > a')\n",
    "    \n",
    "    # 去重\n",
    "    actors = format_actors(actors_m)\n",
    "    return f'电影名：{name}  类型：{types}  演员：{actors}'\n",
    "\n",
    "\n",
    "def format_actors(a_list):\n",
    "    \"\"\"去重\"\"\"\n",
    "    actor_set = set()\n",
    "    for a in a_list:\n",
    "        actor_set.add(a.text.strip())\n",
    "    return actor_set\n",
    "\n",
    "\n",
    "def start():\n",
    "    num = int(input('请输入要获取多少页数据：'))\n",
    "    for i in range(num):\n",
    "        url = f'https://maoyan.com/films?showType=3&offset={i*30}'\n",
    "        \n",
    "        # 获取 HTML页面\n",
    "        html = get_html(url)\n",
    "        \n",
    "        # 获取 a标签的 href 属性\n",
    "        all_href = get_list(html)\n",
    "        \n",
    "        # 遍历\n",
    "        for a in all_href:\n",
    "            url = f'https://maoyan.com{a}'\n",
    "            \n",
    "            # 获取 HTML页面\n",
    "            index_html = get_html(url)\n",
    "            \n",
    "            # 获取信息\n",
    "            info = get_index(index_html)\n",
    "            print(info)\n",
    "\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    start()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e77e4ff4",
   "metadata": {},
   "source": [
    "### 猫眼电影 XPath 获取"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a2970da8",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"猫眼电影 XPath 获取\"\"\"\n",
    "from fake_useragent import UserAgent\n",
    "import requests\n",
    "from lxml import etree\n",
    "from time import sleep\n",
    "\n",
    "\n",
    "def get_html(url):\n",
    "    \"\"\"获取 HTML\"\"\"\n",
    "    \n",
    "    # 随机生成 用户代理\n",
    "    headers = {'User-Agent': UserAgent().chrome}\n",
    "    \n",
    "    # 发送 GET请求\n",
    "    resp = requests.get(url, headers=headers)\n",
    "    sleep(3)\n",
    "    \n",
    "    # 如果成功获取\n",
    "    if resp.status_code == 200:\n",
    "        \n",
    "        # 进行 UTF-8编码\n",
    "        resp.encoding = 'utf-8'\n",
    "        return resp.text\n",
    "    else:\n",
    "        return None\n",
    "\n",
    "\n",
    "def get_list(html):\n",
    "    \"\"\"获取 href属性的内容\"\"\"\n",
    "    \n",
    "    # 创建 etree对象\n",
    "    e = etree.HTML(html)\n",
    "    \n",
    "    # div标签下的 data-act属性为 movies-click 的 a标签 的 href属性内容\n",
    "    all_a = e.xpath('//div/a[@data-act=\"movies-click\"]/@href')\n",
    "    return all_a\n",
    "\n",
    "\n",
    "def get_index(html):\n",
    "    \"\"\"获取信息\"\"\"\n",
    "    \n",
    "    # 创建 etree对象\n",
    "    e = etree.HTML(html)\n",
    "    \n",
    "    # 获取 class为 name的 h1标签的内容\n",
    "    name = ''.join(e.xpath('//h1[@class=\"name\"]/text()'))\n",
    "    \n",
    "    # 获取 class为 ellipsis的 li标签下的 a标签内容\n",
    "    types = e.xpath('//li[@class=\"ellipsis\"]/a/text()')\n",
    "    \n",
    "    # 获取 class为 celebrity actor 的 li标签下的 div标签下的 a标签内容\n",
    "    actors_m = e.xpath('//li[@class=\"celebrity actor\"]/div/a/text()')\n",
    "    \n",
    "    # 去重\n",
    "    actors = format_actors(actors_m)\n",
    "    \n",
    "    return f'电影名：{name}  类型：{types}  演员：{actors}'\n",
    "\n",
    "\n",
    "def format_actors(a_list):\n",
    "    \"\"\"去重\"\"\"\n",
    "    actor_set = set()\n",
    "    for a in a_list:\n",
    "        actor_set.add(a.strip())\n",
    "    return actor_set\n",
    "\n",
    "\n",
    "def start():\n",
    "    num = int(input('请输入要获取多少页数据：'))\n",
    "    for i in range(num):\n",
    "        url = f'https://maoyan.com/films?showType=3&offset={i*30}'\n",
    "        \n",
    "        # 获取 HTML\n",
    "        html = get_html(url)\n",
    "        \n",
    "        # 获取 a标签下的 href属性\n",
    "        all_href = get_list(html)\n",
    "        \n",
    "        # 遍历\n",
    "        for a in all_href:\n",
    "            url = f'https://maoyan.com{a}'\n",
    "            \n",
    "            # 获取 HTML\n",
    "            index_html = get_html(url)\n",
    "            \n",
    "            # 获取信息\n",
    "            info = get_index(index_html)\n",
    "            \n",
    "            print(info)\n",
    "\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    start()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "741d6ccf",
   "metadata": {},
   "source": [
    "### 猫眼电影 re 获取"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1acbe8ce",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"猫眼电影 re 获取\"\"\"\n",
    "\n",
    "from fake_useragent import UserAgent\n",
    "import requests\n",
    "import re\n",
    "from time import sleep\n",
    "\n",
    "\n",
    "def get_html(url):\n",
    "    \"\"\"获取 HTML内容\"\"\"\n",
    "    \n",
    "    # 随机生成用户代理\n",
    "    headers = {'User-Agent': UserAgent().chrome}\n",
    "    \n",
    "    # 发送 GET请求\n",
    "    resp = requests.get(url, headers=headers)\n",
    "    sleep(5)\n",
    "    \n",
    "    # 如果获取成功\n",
    "    if resp.status_code == 200:\n",
    "        \n",
    "        # 进行 UTF-8编码\n",
    "        resp.encoding = 'utf-8'\n",
    "        return resp.text\n",
    "    else:\n",
    "        return None\n",
    "\n",
    "\n",
    "def get_list(html):\n",
    "    \"\"\"获取 a标签下的 href属性\"\"\"\n",
    "    \n",
    "    all_a = re.findall(\n",
    "        '<a href=\"(.+?)\" target=\"_blank\" data-act=\"movies-click\" data-val=\"{movieId:\\d+}\">.+?</a>', html)\n",
    "    return all_a\n",
    "\n",
    "\n",
    "def get_index(html):\n",
    "    \"\"\"获取信息\"\"\"\n",
    "\n",
    "    # 正则表达式，获取电影名\n",
    "    name = re.findall('<h1 class=\"name\">(.+?)</h1>', html)[0]\n",
    "    \n",
    "    # 获取电影类型\n",
    "    types = re.findall('<ul>\\s+?<li class=\"ellipsis\">\\s+?([\\S\\s]+?)</li>', html)[0]\n",
    "    types = re.findall('<a .+?>\\s*(.+)\\s*</a>', types)\n",
    "    \n",
    "    # 获取演员信息\n",
    "    actors_m = re.findall(\n",
    "        '<li class=\"celebrity actor\".+>\\s*<a[\\d\\D]+?</a>\\s*<div.+?>\\s+<a[\\d\\D]+?class=\"name\">\\s*(.+?)\\s*</a>', html)\n",
    "    \n",
    "    # 去重\n",
    "    actors = format_actors(actors_m)\n",
    "    \n",
    "    return f'电影名：{name}  类型：{types}  演员：{actors}'\n",
    "\n",
    "\n",
    "def format_actors(a_list):\n",
    "    \"\"\"去重\"\"\"\n",
    "    actor_set = set()\n",
    "    for a in a_list:\n",
    "        actor_set.add(a.strip())\n",
    "    return actor_set\n",
    "\n",
    "\n",
    "def start():\n",
    "    num = int(input('请输入要获取多少页数据：'))\n",
    "    for i in range(num):\n",
    "        url = f'https://maoyan.com/films?showType=3&offset={i*30}'\n",
    "        \n",
    "        # 获取 HTML内容\n",
    "        html = get_html(url)\n",
    "        \n",
    "        # 获取 href属性内容\n",
    "        all_href = get_list(html)\n",
    "        \n",
    "        # 遍历\n",
    "        for a in all_href:\n",
    "            url = f'https://maoyan.com{a}'\n",
    "            \n",
    "            # 获取 HTML\n",
    "            index_html = get_html(url)\n",
    "            \n",
    "            # 获取信息\n",
    "            info = get_index(index_html)\n",
    "            print(info)\n",
    "\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    start()\n",
    "    "
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {
    "height": "calc(100% - 180px)",
    "left": "10px",
    "top": "150px",
    "width": "288px"
   },
   "toc_section_display": true,
   "toc_window_display": true
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
