{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Getting Started with HTML Parsing and URL-Generation Combos\n",
    "![for humans](https://requests-html.kennethreitz.org/_static/requests-html-logo.png#thumbnail)\n",
    "\n",
    "*  This week: basics and techniques of batch page scraping\n",
    "*  Last week: parsing HTML and preparing URL-generation combos\n",
    "*  20春_Web数据挖掘_week04\n",
    "*  Lecture notes designed by: 廖汉腾, 许智超\n",
    "<br/>\n",
    "<br/>\n",
    "\n",
    "-----\n",
    "## Review\n",
    "\n",
    "Review: last week's content and hands-on practice\n",
    "\n",
    "* Extracting the useful job data (\"the beef\") from job-search URL parameters on Liepin's desktop site, liepin.com\n",
    "* How to generate a series of new URLs to crawl further data\n",
    "\n",
    "\n",
    "-----\n",
    "## This Week's Content and Learning Objectives\n",
    "\n",
    "This week focuses on:\n",
    "\n",
    "<mark> how to systematically crawl more pages of data that share the same structure </mark>\n",
    "\n",
    "To do this, we need to learn:\n",
    "\n",
    "* Pagination: decomposing the parameter dictionary\n",
    "  * xpath\n",
    "  * building a parameter template\n",
    "  * building a parameter dictionary\n",
    "* Pagination: systematic iteration\n",
    "  * robots.txt\n",
    "  * request frequency and timing\n",
    "* Pagination: data backup and consolidation\n",
    "  * saving backups\n",
    "  * merging the data\n",
    "  \n",
    "### Objectives\n",
    "1. Use requests-html to crawl and save web page text; consult the [requests-html Chinese docs](https://cncert.github.io/requests-html-doc-cn/#/)\n",
    "2. Get familiar with [XPath syntax](https://www.w3cschool.cn/xpath/xpath-syntax.html) and [XPath nodes](https://www.w3cschool.cn/xpath/xpath-nodes.html)\n",
    "3. Use the [xpath cheatsheet](https://devhints.io/xpath)\n",
    "  * in the Chrome Inspector\n",
    "  * in requests-html (Python)\n",
    "4. Basic use of [pd.DataFrame](https://www.pypandas.cn/doc/getting_started/dsintro.html#dataframe)\n",
    "5. Decompose and iterate over parameter dictionaries\n",
    "6. Back up and merge paginated data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<style>\n",
        "/* CSS used by these lecture notes */\n",
       "div.code_cell {\n",
       "    background-color: #e5f1fe;\n",
       "}\n",
       "div.cell.selected {\n",
       "    background-color: #effee2;\n",
       "    font-size: 2rem;\n",
       "    line-height: 2.4rem;\n",
       "}\n",
       "div.cell.selected .rendered_html table {\n",
       "    font-size: 2rem !important;\n",
       "    line-height: 2.4rem !important;\n",
       "}\n",
       ".rendered_html pre code {\n",
       "    background-color: #C4E4ff;   \n",
       "    padding: 2px 25px;\n",
       "}\n",
       ".rendered_html pre {\n",
       "    background-color: #99c9ff;\n",
       "}\n",
       "div.code_cell .CodeMirror {\n",
       "    font-size: 2rem !important;\n",
       "    line-height: 2.4rem !important;\n",
       "}\n",
       ".rendered_html img, .rendered_html svg {\n",
       "    max-width: 60%;\n",
       "    height: auto;\n",
       "    float: right;\n",
       "}\n",
       "\n",
       ".rendered_html img[src*=\"#full\"], .rendered_html svg[src*=\"#full\"] {\n",
       "    max-width: 100%;\n",
       "    height: auto;\n",
       "    float: none;\n",
       "}\n",
       "\n",
       ".rendered_html img[src*=\"#thumbnail\"], .rendered_html svg[src*=\"#thumbnail\"] {\n",
       "    max-width: 15%;\n",
       "    height: auto;\n",
       "}\n",
       "\n",
       "/* Gradient transparent - color - transparent */\n",
       "hr {\n",
       "    border: 0;\n",
       "    border-bottom: 1px dashed #ccc;\n",
       "}\n",
       ".emoticon{\n",
       "    font-size: 5rem;\n",
       "    line-height: 4.4rem;\n",
       "    text-align: center;\n",
       "    vertical-align: middle;\n",
       "}\n",
       ".bg-split_apply_comine {\n",
       "    width: 500px;     \n",
       "    height: 300px;\n",
       "    background: url('02_split-apply-comine_500x300.png') -10px -10px;\n",
       "    float: right;\n",
       "}\n",
       ".bg-comine {\n",
       "    width: 175px;\n",
       "    height: 150px;\n",
       "    background: url('02_split-apply-comine_500x300.png') -280px -80px;\n",
       "    float: right;\n",
       "}\n",
       ".bg-apply {\n",
       "    width: 155px;\n",
       "    height: 225px;\n",
       "    background: url('02_split-apply-comine_500x300.png') -160px -30px;\n",
       "    float: right;\n",
       "}\n",
       ".bg-split {\n",
       "    width: 205px;\n",
       "    height: 225px;\n",
       "    background: url('02_split-apply-comine_500x300.png') -10px -30px;\n",
       "    float: right;\n",
       "}\n",
       ".break {\n",
       "                   page-break-after: right; \n",
       "                   width:700px;\n",
       "                   clear:both;\n",
       "}\n",
       "</style>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "%%html\n",
    "<style>\n",
     "/* CSS used by these lecture notes */\n",
    "div.code_cell {\n",
    "    background-color: #e5f1fe;\n",
    "}\n",
    "div.cell.selected {\n",
    "    background-color: #effee2;\n",
    "    font-size: 2rem;\n",
    "    line-height: 2.4rem;\n",
    "}\n",
    "div.cell.selected .rendered_html table {\n",
    "    font-size: 2rem !important;\n",
    "    line-height: 2.4rem !important;\n",
    "}\n",
    ".rendered_html pre code {\n",
    "    background-color: #C4E4ff;   \n",
    "    padding: 2px 25px;\n",
    "}\n",
    ".rendered_html pre {\n",
    "    background-color: #99c9ff;\n",
    "}\n",
    "div.code_cell .CodeMirror {\n",
    "    font-size: 2rem !important;\n",
    "    line-height: 2.4rem !important;\n",
    "}\n",
    ".rendered_html img, .rendered_html svg {\n",
    "    max-width: 60%;\n",
    "    height: auto;\n",
    "    float: right;\n",
    "}\n",
    "\n",
    ".rendered_html img[src*=\"#full\"], .rendered_html svg[src*=\"#full\"] {\n",
    "    max-width: 100%;\n",
    "    height: auto;\n",
    "    float: none;\n",
    "}\n",
    "\n",
    ".rendered_html img[src*=\"#thumbnail\"], .rendered_html svg[src*=\"#thumbnail\"] {\n",
    "    max-width: 15%;\n",
    "    height: auto;\n",
    "}\n",
    "\n",
    "/* Gradient transparent - color - transparent */\n",
    "hr {\n",
    "    border: 0;\n",
    "    border-bottom: 1px dashed #ccc;\n",
    "}\n",
    ".emoticon{\n",
    "    font-size: 5rem;\n",
    "    line-height: 4.4rem;\n",
    "    text-align: center;\n",
    "    vertical-align: middle;\n",
    "}\n",
    ".bg-split_apply_comine {\n",
    "    width: 500px;     \n",
    "    height: 300px;\n",
    "    background: url('02_split-apply-comine_500x300.png') -10px -10px;\n",
    "    float: right;\n",
    "}\n",
    ".bg-comine {\n",
    "    width: 175px;\n",
    "    height: 150px;\n",
    "    background: url('02_split-apply-comine_500x300.png') -280px -80px;\n",
    "    float: right;\n",
    "}\n",
    ".bg-apply {\n",
    "    width: 155px;\n",
    "    height: 225px;\n",
    "    background: url('02_split-apply-comine_500x300.png') -160px -30px;\n",
    "    float: right;\n",
    "}\n",
    ".bg-split {\n",
    "    width: 205px;\n",
    "    height: 225px;\n",
    "    background: url('02_split-apply-comine_500x300.png') -10px -30px;\n",
    "    float: right;\n",
    "}\n",
    ".break {\n",
    "                   page-break-after: right; \n",
    "                   width:700px;\n",
    "                   clear:both;\n",
    "}\n",
    "</style>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Core modules\n",
     "import pandas as pd     # import the pandas data-analysis library\n",
     "from requests_html import HTMLSession   # import the HTMLSession class from requests_html"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## 0. Last Week's Consolidated Code"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "edu         5\n",
      "经验          9\n",
      "薪水         72\n",
      "时间         24\n",
      "职称        181\n",
      "公司地点       73\n",
      "公司名称       71\n",
      "链结        192\n",
      "公司URL      72\n",
      "热门公司类型      6\n",
      "dtype: int64\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th></th>\n",
       "      <th>职称</th>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>公司名称</th>\n",
       "      <th>edu</th>\n",
       "      <th></th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>阿里巴巴</th>\n",
       "      <th>学历不限</th>\n",
       "      <td>24</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>小米</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>15</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>明略科技集团</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>14</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th rowspan=\"2\" valign=\"top\">华为</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>12</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>统招本科</th>\n",
       "      <td>8</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>深圳市优必选科技股份有限公司</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>7</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>SenseTime（商汤集团）</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>5</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>明略科技集团</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>5</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>新东方教育科技集团有限公司</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>4</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>滴滴</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>4</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>小红书</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>4</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>小米</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>4</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>宁德时代新能源科技股份有限公司</th>\n",
       "      <th>大专及以上</th>\n",
       "      <td>4</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>华为</th>\n",
       "      <th>硕士及以上</th>\n",
       "      <td>4</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>黑芝麻智能科技(上海)有限公司</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>4</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>远东控股集团</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>4</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>远东国际融资租赁有限公司</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>4</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>网易集团</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>山东开创集团股份有限公司</th>\n",
       "      <th>大专及以上</th>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>医渡云</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>赛轮集团股份有限公司</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>通威股份</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>深圳市优必选科技股份有限公司</th>\n",
       "      <th>硕士及以上</th>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>网易集团</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>双胞胎</th>\n",
       "      <th>大专及以上</th>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>宋城集团</th>\n",
       "      <th>大专及以上</th>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>小米</th>\n",
       "      <th>大专及以上</th>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>上海擎创信息技术有限公司</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>CVTE</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>阿里巴巴</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>...</th>\n",
       "      <th>...</th>\n",
       "      <td>...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>众安在线财产保险</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>今日头条</th>\n",
       "      <th>大专及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>中联重科</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>中源家居</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>上海肇观电子科技有限公司</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th rowspan=\"2\" valign=\"top\">上海擎创信息技术有限公司</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>大专及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>上海寒武纪信息科技有限公司</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>上海丹瑞生物医药科技有限公司</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>SenseTime（商汤集团）</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>NIO蔚来</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>合生创展集团有限公司</th>\n",
       "      <th>大专及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>大疆创新</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>天士力集团网</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>平安银行</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>方太</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>新城控股集团住宅开发事业部</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>新城悦控股有限公司</th>\n",
       "      <th>大专及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>招金矿业</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>戴维医疗</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>广厦控股</th>\n",
       "      <th>大专及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>岁宝百货</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>天际电器</th>\n",
       "      <th>大专及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>岁宝百货</th>\n",
       "      <th>大专及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>山东荣信集团有限公司</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th rowspan=\"2\" valign=\"top\">宝德投资深圳</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>大专及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>安徽广信农化股份有限公司</th>\n",
       "      <th>统招本科</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>太平洋建设</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>平安医疗健康管理股份有限公司</th>\n",
       "      <th>本科及以上</th>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>96 rows × 1 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "                       职称\n",
       "公司名称            edu      \n",
       "阿里巴巴            学历不限   24\n",
       "小米              统招本科   15\n",
       "明略科技集团          统招本科   14\n",
       "华为              本科及以上  12\n",
       "                统招本科    8\n",
       "深圳市优必选科技股份有限公司  本科及以上   7\n",
       "SenseTime（商汤集团） 统招本科    5\n",
       "明略科技集团          本科及以上   5\n",
       "新东方教育科技集团有限公司   统招本科    4\n",
       "滴滴              本科及以上   4\n",
       "小红书             统招本科    4\n",
       "小米              本科及以上   4\n",
       "宁德时代新能源科技股份有限公司 大专及以上   4\n",
       "华为              硕士及以上   4\n",
       "黑芝麻智能科技(上海)有限公司 本科及以上   4\n",
       "远东控股集团          统招本科    4\n",
       "远东国际融资租赁有限公司    本科及以上   4\n",
       "网易集团            统招本科    3\n",
       "山东开创集团股份有限公司    大专及以上   3\n",
       "医渡云             本科及以上   3\n",
       "赛轮集团股份有限公司      本科及以上   3\n",
       "通威股份            本科及以上   3\n",
       "深圳市优必选科技股份有限公司  硕士及以上   3\n",
       "网易集团            本科及以上   3\n",
       "双胞胎             大专及以上   3\n",
       "宋城集团            大专及以上   2\n",
       "小米              大专及以上   2\n",
       "上海擎创信息技术有限公司    本科及以上   2\n",
       "CVTE            本科及以上   2\n",
       "阿里巴巴            统招本科    2\n",
       "...                    ..\n",
       "众安在线财产保险        本科及以上   1\n",
       "今日头条            大专及以上   1\n",
       "中联重科            本科及以上   1\n",
       "中源家居            统招本科    1\n",
       "上海肇观电子科技有限公司    统招本科    1\n",
       "上海擎创信息技术有限公司    统招本科    1\n",
       "                大专及以上   1\n",
       "上海寒武纪信息科技有限公司   本科及以上   1\n",
       "上海丹瑞生物医药科技有限公司  本科及以上   1\n",
       "SenseTime（商汤集团） 本科及以上   1\n",
       "NIO蔚来           统招本科    1\n",
       "合生创展集团有限公司      大专及以上   1\n",
       "大疆创新            本科及以上   1\n",
       "天士力集团网          统招本科    1\n",
       "平安银行            本科及以上   1\n",
       "方太              本科及以上   1\n",
       "新城控股集团住宅开发事业部   本科及以上   1\n",
       "新城悦控股有限公司       大专及以上   1\n",
       "招金矿业            统招本科    1\n",
       "戴维医疗            统招本科    1\n",
       "广厦控股            大专及以上   1\n",
       "岁宝百货            本科及以上   1\n",
       "天际电器            大专及以上   1\n",
       "岁宝百货            大专及以上   1\n",
       "山东荣信集团有限公司      本科及以上   1\n",
       "宝德投资深圳          统招本科    1\n",
       "                大专及以上   1\n",
       "安徽广信农化股份有限公司    统招本科    1\n",
       "太平洋建设           本科及以上   1\n",
       "平安医疗健康管理股份有限公司  本科及以上   1\n",
       "\n",
       "[96 rows x 1 columns]"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Last week C-1B-5: build the parameter template. Here keyword is set to UI; these parameters come from inspecting the hot-company <a> links on Liepin with Chrome DevTools\n",
    "params_compTag_UI ={'中国500强': {'init': ['-1'], 'headckid': ['d7d1454a4390af10'], 'flushckid': ['1'], 'fromSearchBtn': ['2'], 'keyword': ['UI'], 'compTag': ['155'], 'ckid': ['d7d1454a4390af10'], 'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'], 'd_sfrom': ['search_unknown'], 'd_ckId': ['4c88f62d6e1f952e049e334ecce1c876'], 'd_curPage': ['0'], 'd_pageSize': ['40'], 'd_headId': ['4c88f62d6e1f952e049e334ecce1c876']}, '2018互联网300强': {'init': ['-1'], 'headckid': ['d7d1454a4390af10'], 'flushckid': ['1'], 'fromSearchBtn': ['2'], 'keyword': ['UI'], 'compTag': ['182'], 'ckid': ['d7d1454a4390af10'], 'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'], 'd_sfrom': ['search_unknown'], 'd_ckId': ['4c88f62d6e1f952e049e334ecce1c876'], 'd_curPage': ['0'], 'd_pageSize': ['40'], 'd_headId': ['4c88f62d6e1f952e049e334ecce1c876']}, '制造业500强': {'init': ['-1'], 'headckid': ['d7d1454a4390af10'], 'flushckid': ['1'], 'fromSearchBtn': ['2'], 'keyword': ['UI'], 'compTag': ['186'], 'ckid': ['d7d1454a4390af10'], 'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'], 'd_sfrom': ['search_unknown'], 'd_ckId': ['4c88f62d6e1f952e049e334ecce1c876'], 'd_curPage': ['0'], 'd_pageSize': ['40'], 'd_headId': ['4c88f62d6e1f952e049e334ecce1c876']}, 'AI创新成长50强 ': {'init': ['-1'], 'headckid': ['d7d1454a4390af10'], 'flushckid': ['1'], 'fromSearchBtn': ['2'], 'keyword': ['UI'], 'compTag': ['189'], 'ckid': ['d7d1454a4390af10'], 'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'], 'd_sfrom': ['search_unknown'], 'd_ckId': ['4c88f62d6e1f952e049e334ecce1c876'], 'd_curPage': ['0'], 'd_pageSize': ['40'], 'd_headId': ['4c88f62d6e1f952e049e334ecce1c876']}, '独角兽': {'init': ['-1'], 'headckid': ['d7d1454a4390af10'], 'flushckid': ['1'], 'fromSearchBtn': ['2'], 'keyword': ['UI'], 'compTag': ['130'], 'ckid': ['d7d1454a4390af10'], 'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'], 'd_sfrom': ['search_unknown'], 'd_ckId': ['4c88f62d6e1f952e049e334ecce1c876'], 'd_curPage': ['0'], 
'd_pageSize': ['40'], 'd_headId': ['4c88f62d6e1f952e049e334ecce1c876']}, '上市公司': {'init': ['-1'], 'headckid': ['d7d1454a4390af10'], 'flushckid': ['1'], 'fromSearchBtn': ['2'], 'keyword': ['UI'], 'compTag': ['156'], 'ckid': ['d7d1454a4390af10'], 'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'], 'd_sfrom': ['search_unknown'], 'd_ckId': ['4c88f62d6e1f952e049e334ecce1c876'], 'd_curPage': ['0'], 'd_pageSize': ['40'], 'd_headId': ['4c88f62d6e1f952e049e334ecce1c876']}}\n",
     "# Last week C-1: multi-page preparation, test 1 (中国500强)\n",
     "session = HTMLSession()\n",
     "url = \"https://www.liepin.com/zhaopin/\"   # the URL we will scrape\n",
     "payload = params_compTag_UI['中国500强']  # dict lookup: fetch the value by its key\n",
     "r = session.get( url, params = payload)  # params here carries the query-string parameters\n",
    "\n",
    "# r.url\n",
    "\n",
     "# Last week C-2: simplified A-1, crawl + parse a single page\n",
    "session = HTMLSession()\n",
    "\n",
     "def requests_liepin( url, params):   # wrap the steps as a reusable function\n",
     "    # fetch the Liepin search results page\n",
     "    r = session.get( url , params = params)  # use the params argument passed in, not the global payload\n",
    "\n",
     "    # after a search, grab the outermost container of each result item\n",
    "    main_factor = r.html.xpath( '//ul[@class=\"sojob-list\"]/li')\n",
    "\n",
     "    # xpath dictionary: each key names a field we want to extract, each value is its xpath\n",
    "    dict_xpaths={ \n",
    "        'text': {\n",
    "            'edu':      '//div[contains(@class,\"job-info\")]/p/span[@class=\"edu\"]',\n",
    "            '经验':      '//div[contains(@class,\"job-info\")]/p/span[@class=\"edu\"]/following-sibling::span',\n",
    "            '薪水':    '//div[contains(@class,\"job-info\")]/p/span[@class=\"text-warning\"]', \n",
    "            '时间':    '//div[contains(@class,\"job-info\")]/p/time/@title', \n",
    "            '职称':    '//div[contains(@class,\"job-info\")]/h3/a', \n",
    "            '公司地点': '//div[contains(@class,\"job-info\")]/p/a',\n",
    "            '公司名称': '//div[contains(@class,\"sojob-item-main\")]//p[@class=\"company-name\"]/a', \n",
    "        },\n",
    "        'text_content': {\n",
    "        },\n",
    "        'href': {\n",
    "            '链结':    '//div[contains(@class,\"job-info\")]/h3/a', \n",
    "            '公司URL': '//div[contains(@class,\"sojob-item-main\")]//p[@class=\"company-name\"]/a', \n",
    "        }\n",
    "    }\n",
    "\n",
     "    def get_e_text_content(_xpath_):  # extract text_content from each main element via the given xpath\n",
     "        # advanced list comprehension\n",
     "        result = [e.xpath(_xpath_)[0].lxml.text_content() for e in main_factor]  # can be unpacked into two steps: for e in main_factor, then e.xpath(_xpath_)[0].lxml.text_content()\n",
    "        return(result)\n",
    "\n",
     "    def get_e_text(_xpath_):    # extract .text from each main element via the given xpath\n",
     "        # advanced list comprehension in four parts:\n",
     "        # 1) loop over main_factor to get e; 2) loop over e.xpath(_xpath_) to get x; 3) if x is a string keep x.strip(), otherwise x.text.strip(); 4) join the pieces with \"\".join()\n",
     "        result = [\"\".join([x.strip() if type(x) is str else x.text.strip() for x in e.xpath(_xpath_)]) for e in main_factor]\n",
    "        return(result)\n",
    "\n",
    "\n",
     "    def get_e_href(_xpath_):  # extract the href from each main element via the given xpath\n",
     "        # advanced list comprehension; \\ continues the statement on the next line (no space is allowed after the backslash)\n",
    "        result = [list(e.xpath(_xpath_, first=True).absolute_links)[0] \\\n",
    "                   if len(e.xpath(_xpath_, first=True).absolute_links) >= 1  \\\n",
    "                   else \"\" for e in main_factor]\n",
    "        return(result)\n",
    "    \n",
    "    \n",
     "    # run .xpath only within the main elements\n",
     "    data_dict = dict() # start from an empty dict\n",
     "    # the next three lines build the dict from each xpath group; .items() yields (key, value) pairs and .update() merges new entries into the dict\n",
    "    data_dict = {k:get_e_text_content(v) for k,v in dict_xpaths['text_content'].items()}  \n",
    "    data_dict.update({k:get_e_text(v) for k,v in dict_xpaths['text'].items()})\n",
    "    data_dict.update({k:get_e_href(v) for k,v in dict_xpaths['href'].items()})\n",
    "\n",
     "    data = pd.DataFrame(data_dict)   # tabulate the collected dict with a pandas DataFrame\n",
     "    #data.to_excel(\"20春_Web数据挖掘_week03_liepin.xlsx\", sheet_name=\"搜查结果\")\n",
    "    return (data)\n",
    "\n",
    "\n",
     "# Last week C-3: multiple pages\n",
    "url = \"https://www.liepin.com/zhaopin/\"\n",
    "\n",
     "list_df = list()  # create a list to collect one DataFrame per category\n",
     "for k,v in params_compTag_UI.items():\n",
     "    payload = v   # the query parameters for this hot-company category\n",
     "    df = requests_liepin( url, params = payload)    # fetch and parse the Liepin search results for these parameters\n",
     "    df = df.assign (热门公司类型 = k)    # .assign adds a new column to the DataFrame\n",
     "    list_df.append(df)   # append the DataFrame to the list (append is the list method for adding an element)\n",
    "\n",
     "df_all = pd.concat(list_df)  # pd.concat stacks the DataFrames along an axis into one table\n",
    "df_all\n",
    "\n",
     "# Last week C-4: output\n",
     "df_all.to_excel(\"20春_Web数据挖掘_week03_liepin_各热门公司类型.xlsx\", sheet_name=\"搜查结果\")  # save to Excel; make a habit of backing up your data\n",
    "\n",
     "# Last week C-5: Pandas basics\n",
     "\n",
     "print (df_all.nunique())   # nunique() counts the distinct values in each column of df_all\n",
     "df_all[['edu']].drop_duplicates()  # drop_duplicates removes duplicate rows for the selected column(s) and returns a DataFrame\n",
     "\n",
     "df_all.groupby(['公司名称','edu']).agg({\"职称\":\"count\"}).sort_values(by='职称', ascending=False)\n",
     "# groupby groups by multiple columns; .agg aggregates each group; sort_values sorts by the given column, with ascending controlling the order"
   ]
  },
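  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The final `groupby` line above follows the split-apply-combine pattern. A minimal sketch with made-up toy data (the rows are illustrative only, not scraped results):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "toy = pd.DataFrame({'公司名称': ['A公司', 'A公司', 'B公司'],\n",
    "                    'edu':  ['本科及以上', '本科及以上', '学历不限'],\n",
    "                    '职称': ['工程师', '设计师', '分析师']})\n",
    "# split by (公司名称, edu), apply count to 职称, combine into one table\n",
    "print(toy.groupby(['公司名称', 'edu']).agg({'职称': 'count'}))\n",
    "```"
   ]
  },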
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "\n",
     "-----\n",
     "\n",
     "## This Week's Hands-On Goals\n",
     "<mark> How to systematically crawl more pages of data that share the same structure </mark> on the [Liepin desktop site](https://www.liepin.com/zhaopin/)\n",
     "* Pagination: decomposing the parameter dictionary\n",
     "  * parsing the pager's a/@href with xpath\n",
     "  * building a parameter template\n",
     "  * building a parameter dictionary\n",
     "* Pagination: systematic iteration\n",
     "  * robots.txt\n",
     "  * request frequency and timing\n",
     "* Pagination: data backup and consolidation\n",
     "  * saving backups\n",
     "  * merging the data"
   ]
  },
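  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The systematic-iteration bullets above (robots.txt, request frequency) can be sketched in a few lines. This is a minimal illustration rather than the course's official code; it assumes the site's robots.txt is reachable and uses a fixed delay:\n",
    "\n",
    "```python\n",
    "import time\n",
    "from urllib.robotparser import RobotFileParser\n",
    "\n",
    "rp = RobotFileParser('https://www.liepin.com/robots.txt')\n",
    "rp.read()  # download and parse robots.txt\n",
    "\n",
    "url = 'https://www.liepin.com/zhaopin/'\n",
    "if rp.can_fetch('*', url):  # is a generic crawler allowed to fetch this path?\n",
    "    pass  # session.get(url) would go here\n",
    "time.sleep(2)  # pause between requests to keep the crawl polite\n",
    "```"
   ]
  },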
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Pagination: Decomposing the Parameter Dictionary\n",
     "## Parsing the pager's a/@href with xpath"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
     "# A-0: a single page\n",
     "url = \"https://www.liepin.com/zhaopin/?keyword=UI\"  # set the search keyword to UI\n",
    "session = HTMLSession()\n",
    "r = session.get( url )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[<Element 'a' href='/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=1'>, <Element 'a' href='/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=2'>, <Element 'a' href='/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=3'>, <Element 'a' href='/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=4'>, <Element 'a' href='/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=1'>, <Element 'a' class=('last',) href='/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=9' title='末页'>]\n",
      "{'2': '/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=1', '3': '/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=2', '4': '/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=3', '5': '/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=4', '下一页': '/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=1', '': '/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=9'}\n"
     ]
    }
   ],
   "source": [
    "# A-1  Parse the pager's a/@href with xpath\n",
    "xpath_翻页a = '//div[@class=\"pagerbar\"]/a' # first draft: every pager <a>; some (disabled, current, ...) have javascript hrefs\n",
    "xpath_翻页a = '//div[@class=\"pagerbar\"]/a[starts-with(@href,\"/zhaopin\")]'   # refined: keep only real /zhaopin hrefs\n",
    "print (r.html.xpath(xpath_翻页a)) # Element objects\n",
    "\n",
    "href_list = [x.xpath('//@href')[0] for x in r.html.xpath(xpath_翻页a)]  # list comprehension: pull the @href out of each element\n",
    "#print (href_list)\n",
    "\n",
    "text_list = [x.text for x in r.html.xpath(xpath_翻页a)]   # list comprehension: pull the link text out of each element\n",
    "#print (text_list)\n",
    "\n",
    "href_dict = {x.text:x.xpath('//@href')[0]  for x in r.html.xpath(xpath_翻页a)}  # dict comprehension: for each element x, map x.text to its first @href\n",
    "print (href_dict)"
   ]
  },
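  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The text-to-href dict comprehension above can be tried offline. A minimal sketch with the standard library instead of requests-html, run against a hand-written (made-up) pager fragment:\n",
    "\n",
    "```python\n",
    "import xml.etree.ElementTree as ET\n",
    "\n",
    "# a made-up pager fragment, kept well-formed so the stdlib XML parser accepts it\n",
    "pager = '''\n",
    "<div class='pagerbar'>\n",
    "  <a href='javascript:;'>prev</a>\n",
    "  <a href='/zhaopin/?keyword=UI&amp;curPage=1'>2</a>\n",
    "  <a href='/zhaopin/?keyword=UI&amp;curPage=2'>3</a>\n",
    "  <a href='/zhaopin/?keyword=UI&amp;curPage=9' title='last'>next</a>\n",
    "</div>\n",
    "'''\n",
    "root = ET.fromstring(pager)\n",
    "\n",
    "# same idea as xpath_翻页a: keep only the <a> whose href starts with /zhaopin\n",
    "links = [a for a in root.findall('a') if a.get('href', '').startswith('/zhaopin')]\n",
    "href_dict = {a.text: a.get('href') for a in links}\n",
    "print(href_dict)  # maps link text to href, e.g. '2' -> '/zhaopin/?keyword=UI&curPage=1'\n",
    "```\n",
    "\n",
    "Same mapping as the cell above, minus the network and the requests-html dependency."
   ]
  },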
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Observations\n",
    "Does this page expose the start, step, and end values we need to set up the iteration?\n",
    "\n",
    "* Old problem: the URLs are long. Reuse last week's URL + query parsing with a pandas DataFrame to find what the URLs share and where they differ.\n",
    "* Old problem: how do we generate URLs systematically? While comparing the URLs, build a parameter dictionary along the way, with at least these knobs adjustable:\n",
    "  * Search keyword: last week's keyword\n",
    "  * Where is the page number?\n",
    "* Hands-on challenge: how do we modularize last week's code for reuse?\n",
    "\n",
    "-----\n",
    "\n",
    "## Building the parameter template\n",
    "\n",
    "```python\n",
    "\n",
    "# Last week B-1: parse with urllib.parse\n",
    "from urllib.parse import urlparse, parse_qs\n",
    "\n",
    "\n",
    "# Last week B-2: use pd.DataFrame and nunique() to count distinct values\n",
    "import pandas as pd\n",
    "df = pd.DataFrame([ urlparse(x) for x in company_link.values()])   # urlparse each URL\n",
    "print(df.nunique())    # count distinct values per column\n",
    "\n",
    "# Last week B-3: parse the query strings further\n",
    "#df_qs = pd.DataFrame([ parse_qs(x) for x in df['query'] ])  # note: parse_qs maps each key to a LIST, so take v[0] below to get scalar values\n",
    "df_qs = pd.DataFrame([{k:v[0] for k,v in parse_qs(x).items()} for x in df['query'] ])  # nested comprehension: flatten each {key: [value]} into {key: value}\n",
    "print(df_qs.nunique())\n",
    "\n",
    "# Last week B-4: build the parameter template and dict_compTag\n",
    "def parse_url_qs_for_compTag (url):   # wrap as a function for reuse below\n",
    "    six_parts = urlparse(url)    # urlparse splits a URL into six parts\n",
    "    out = parse_qs(six_parts.query)\n",
    "    return (out)\n",
    "\n",
    "# parse_url_qs_for_compTag(list(company_link.values())[0])['compTag']\n",
    "params_mould = parse_url_qs_for_compTag(list(company_link.values())[0])\n",
    "print(params_mould )\n",
    "# [ parse_url_qs_for_compTag(x)['compTag'] for x in company_link.values()]\n",
    "[ parse_url_qs_for_compTag(x)['compTag'][0] for x in company_link.values()]\n",
    "\n",
    "dict_compTag = { k:parse_url_qs_for_compTag(v)['compTag'][0] for k,v in company_link.items()}\n",
    "print (dict_compTag)\n",
    "\n",
    "# Last week B-5: the parameter-template generator\n",
    "def params_mould_generation(compTag , keyword ):\n",
    "    params = params_mould .copy()\n",
    "    params['compTag'] = compTag\n",
    "    params['keyword'] = keyword\n",
    "    return (params)\n",
    "\n",
    "params_compTag_UI = { k:params_mould_generation(compTag = [v], keyword = ['UI']) for k,v in dict_compTag.items()}\n",
    "print(params_compTag_UI)\n",
    "\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>scheme</th>\n",
       "      <th>netloc</th>\n",
       "      <th>path</th>\n",
       "      <th>params</th>\n",
       "      <th>query</th>\n",
       "      <th>fragment</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td></td>\n",
       "      <td></td>\n",
       "      <td>/zhaopin/</td>\n",
       "      <td></td>\n",
       "      <td>init=-1&amp;headckid=9da7fe05bb64f8c5&amp;fromSearchBt...</td>\n",
       "      <td></td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td></td>\n",
       "      <td></td>\n",
       "      <td>/zhaopin/</td>\n",
       "      <td></td>\n",
       "      <td>init=-1&amp;headckid=9da7fe05bb64f8c5&amp;fromSearchBt...</td>\n",
       "      <td></td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td></td>\n",
       "      <td></td>\n",
       "      <td>/zhaopin/</td>\n",
       "      <td></td>\n",
       "      <td>init=-1&amp;headckid=9da7fe05bb64f8c5&amp;fromSearchBt...</td>\n",
       "      <td></td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td></td>\n",
       "      <td></td>\n",
       "      <td>/zhaopin/</td>\n",
       "      <td></td>\n",
       "      <td>init=-1&amp;headckid=9da7fe05bb64f8c5&amp;fromSearchBt...</td>\n",
       "      <td></td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td></td>\n",
       "      <td></td>\n",
       "      <td>/zhaopin/</td>\n",
       "      <td></td>\n",
       "      <td>init=-1&amp;headckid=9da7fe05bb64f8c5&amp;fromSearchBt...</td>\n",
       "      <td></td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td></td>\n",
       "      <td></td>\n",
       "      <td>/zhaopin/</td>\n",
       "      <td></td>\n",
       "      <td>init=-1&amp;headckid=9da7fe05bb64f8c5&amp;fromSearchBt...</td>\n",
       "      <td></td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "  scheme netloc       path params  \\\n",
       "0                /zhaopin/          \n",
       "1                /zhaopin/          \n",
       "2                /zhaopin/          \n",
       "3                /zhaopin/          \n",
       "4                /zhaopin/          \n",
       "5                /zhaopin/          \n",
       "\n",
       "                                               query fragment  \n",
       "0  init=-1&headckid=9da7fe05bb64f8c5&fromSearchBt...           \n",
       "1  init=-1&headckid=9da7fe05bb64f8c5&fromSearchBt...           \n",
       "2  init=-1&headckid=9da7fe05bb64f8c5&fromSearchBt...           \n",
       "3  init=-1&headckid=9da7fe05bb64f8c5&fromSearchBt...           \n",
       "4  init=-1&headckid=9da7fe05bb64f8c5&fromSearchBt...           \n",
       "5  init=-1&headckid=9da7fe05bb64f8c5&fromSearchBt...           "
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "scheme      1\n",
      "netloc      1\n",
      "path        1\n",
      "params      1\n",
      "query       5\n",
      "fragment    1\n",
      "dtype: int64\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>ckid</th>\n",
       "      <th>curPage</th>\n",
       "      <th>d_ckId</th>\n",
       "      <th>d_curPage</th>\n",
       "      <th>d_headId</th>\n",
       "      <th>d_pageSize</th>\n",
       "      <th>d_sfrom</th>\n",
       "      <th>fromSearchBtn</th>\n",
       "      <th>headckid</th>\n",
       "      <th>init</th>\n",
       "      <th>keyword</th>\n",
       "      <th>siTag</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>9da7fe05bb64f8c5°radeFlag=0</td>\n",
       "      <td>1</td>\n",
       "      <td>d8376c5dd17d772337b435c56baafc42</td>\n",
       "      <td>0</td>\n",
       "      <td>d8376c5dd17d772337b435c56baafc42</td>\n",
       "      <td>40</td>\n",
       "      <td>search_unknown</td>\n",
       "      <td>2</td>\n",
       "      <td>9da7fe05bb64f8c5</td>\n",
       "      <td>-1</td>\n",
       "      <td>UI</td>\n",
       "      <td>1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>9da7fe05bb64f8c5°radeFlag=0</td>\n",
       "      <td>2</td>\n",
       "      <td>d8376c5dd17d772337b435c56baafc42</td>\n",
       "      <td>0</td>\n",
       "      <td>d8376c5dd17d772337b435c56baafc42</td>\n",
       "      <td>40</td>\n",
       "      <td>search_unknown</td>\n",
       "      <td>2</td>\n",
       "      <td>9da7fe05bb64f8c5</td>\n",
       "      <td>-1</td>\n",
       "      <td>UI</td>\n",
       "      <td>1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>9da7fe05bb64f8c5°radeFlag=0</td>\n",
       "      <td>3</td>\n",
       "      <td>d8376c5dd17d772337b435c56baafc42</td>\n",
       "      <td>0</td>\n",
       "      <td>d8376c5dd17d772337b435c56baafc42</td>\n",
       "      <td>40</td>\n",
       "      <td>search_unknown</td>\n",
       "      <td>2</td>\n",
       "      <td>9da7fe05bb64f8c5</td>\n",
       "      <td>-1</td>\n",
       "      <td>UI</td>\n",
       "      <td>1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>9da7fe05bb64f8c5°radeFlag=0</td>\n",
       "      <td>4</td>\n",
       "      <td>d8376c5dd17d772337b435c56baafc42</td>\n",
       "      <td>0</td>\n",
       "      <td>d8376c5dd17d772337b435c56baafc42</td>\n",
       "      <td>40</td>\n",
       "      <td>search_unknown</td>\n",
       "      <td>2</td>\n",
       "      <td>9da7fe05bb64f8c5</td>\n",
       "      <td>-1</td>\n",
       "      <td>UI</td>\n",
       "      <td>1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>9da7fe05bb64f8c5°radeFlag=0</td>\n",
       "      <td>1</td>\n",
       "      <td>d8376c5dd17d772337b435c56baafc42</td>\n",
       "      <td>0</td>\n",
       "      <td>d8376c5dd17d772337b435c56baafc42</td>\n",
       "      <td>40</td>\n",
       "      <td>search_unknown</td>\n",
       "      <td>2</td>\n",
       "      <td>9da7fe05bb64f8c5</td>\n",
       "      <td>-1</td>\n",
       "      <td>UI</td>\n",
       "      <td>1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>9da7fe05bb64f8c5°radeFlag=0</td>\n",
       "      <td>9</td>\n",
       "      <td>d8376c5dd17d772337b435c56baafc42</td>\n",
       "      <td>0</td>\n",
       "      <td>d8376c5dd17d772337b435c56baafc42</td>\n",
       "      <td>40</td>\n",
       "      <td>search_unknown</td>\n",
       "      <td>2</td>\n",
       "      <td>9da7fe05bb64f8c5</td>\n",
       "      <td>-1</td>\n",
       "      <td>UI</td>\n",
       "      <td>1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                          ckid curPage                            d_ckId  \\\n",
       "0  9da7fe05bb64f8c5°radeFlag=0       1  d8376c5dd17d772337b435c56baafc42   \n",
       "1  9da7fe05bb64f8c5°radeFlag=0       2  d8376c5dd17d772337b435c56baafc42   \n",
       "2  9da7fe05bb64f8c5°radeFlag=0       3  d8376c5dd17d772337b435c56baafc42   \n",
       "3  9da7fe05bb64f8c5°radeFlag=0       4  d8376c5dd17d772337b435c56baafc42   \n",
       "4  9da7fe05bb64f8c5°radeFlag=0       1  d8376c5dd17d772337b435c56baafc42   \n",
       "5  9da7fe05bb64f8c5°radeFlag=0       9  d8376c5dd17d772337b435c56baafc42   \n",
       "\n",
       "  d_curPage                          d_headId d_pageSize         d_sfrom  \\\n",
       "0         0  d8376c5dd17d772337b435c56baafc42         40  search_unknown   \n",
       "1         0  d8376c5dd17d772337b435c56baafc42         40  search_unknown   \n",
       "2         0  d8376c5dd17d772337b435c56baafc42         40  search_unknown   \n",
       "3         0  d8376c5dd17d772337b435c56baafc42         40  search_unknown   \n",
       "4         0  d8376c5dd17d772337b435c56baafc42         40  search_unknown   \n",
       "5         0  d8376c5dd17d772337b435c56baafc42         40  search_unknown   \n",
       "\n",
       "  fromSearchBtn          headckid init keyword  \\\n",
       "0             2  9da7fe05bb64f8c5   -1      UI   \n",
       "1             2  9da7fe05bb64f8c5   -1      UI   \n",
       "2             2  9da7fe05bb64f8c5   -1      UI   \n",
       "3             2  9da7fe05bb64f8c5   -1      UI   \n",
       "4             2  9da7fe05bb64f8c5   -1      UI   \n",
       "5             2  9da7fe05bb64f8c5   -1      UI   \n",
       "\n",
       "                                           siTag  \n",
       "0  1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw  \n",
       "1  1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw  \n",
       "2  1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw  \n",
       "3  1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw  \n",
       "4  1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw  \n",
       "5  1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw  "
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ckid             1\n",
      "curPage          5\n",
      "d_ckId           1\n",
      "d_curPage        1\n",
      "d_headId         1\n",
      "d_pageSize       1\n",
      "d_sfrom          1\n",
      "fromSearchBtn    1\n",
      "headckid         1\n",
      "init             1\n",
      "keyword          1\n",
      "siTag            1\n",
      "dtype: int64\n"
     ]
    }
   ],
   "source": [
    "# A-2 Build the parameter template: find the key parameters and their structure\n",
    "\n",
    "# imports\n",
    "from urllib.parse import urlparse, parse_qs\n",
    "import pandas as pd\n",
    "from IPython.display import display, HTML\n",
    "\n",
    "# Goal: from href_list, build a parameter dictionary\n",
    "\n",
    "# urlparse each href, then load the parts into a DataFrame\n",
    "df = pd.DataFrame([ urlparse(x) for x in href_list])  # one row per href, six URL parts per row\n",
    "df_qs = pd.DataFrame([{k:v[0] for k,v in parse_qs(x).items()} for x in df['query'] ])  \n",
    "# the line above parses each row's query string into a flat dict, then loads the dicts into a second DataFrame\n",
    "\n",
    "display(df)\n",
    "print(df.nunique())   # distinct values per column of df\n",
    "display(df_qs)\n",
    "print(df_qs.nunique())   # distinct values per column of df_qs\n",
    "\n",
    "df_qs.curPage   # curPage is the page number\n",
    "df_qs = df_qs.assign (curPage_int=df_qs.curPage.astype(int)) # cast the page number to int"
   ]
  },
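  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The urlparse / DataFrame / nunique() workflow can be seen on toy data. A sketch with made-up example.com URLs that differ only in curPage:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "from urllib.parse import urlparse, parse_qs\n",
    "\n",
    "# made-up URLs that differ only in curPage, mimicking the pager hrefs\n",
    "urls = [f'https://example.com/zhaopin/?keyword=UI&pageSize=40&curPage={i}' for i in (1, 2, 3, 9)]\n",
    "\n",
    "df = pd.DataFrame([urlparse(u) for u in urls])  # one row per URL, six parts per row\n",
    "df_qs = pd.DataFrame([{k: v[0] for k, v in parse_qs(q).items()} for q in df['query']])\n",
    "\n",
    "print(df_qs.nunique())  # keyword and pageSize are constant; only curPage varies\n",
    "```\n",
    "\n",
    "A column whose nunique() is 1 is boilerplate to copy; the column that varies (curPage here) is the knob the template has to turn."
   ]
  },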
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Observations\n",
    "* query is the only URL part that varies\n",
    "* curPage takes 5 distinct values with max 9; the current page itself is not among the links?\n",
    "\n",
    "-----\n",
    "\n",
    "## Building the parameter template: curPage\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'init': ['-1'], 'headckid': ['9da7fe05bb64f8c5'], 'fromSearchBtn': ['2'], 'keyword': ['UI'], 'ckid': ['9da7fe05bb64f8c5°radeFlag=0'], 'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'], 'd_sfrom': ['search_unknown'], 'd_ckId': ['d8376c5dd17d772337b435c56baafc42'], 'd_curPage': ['0'], 'd_pageSize': ['40'], 'd_headId': ['d8376c5dd17d772337b435c56baafc42'], 'curPage': ['1']}\n",
      "{'2': '/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=1', '3': '/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=2', '4': '/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=3', '5': '/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=4', '下一页': '/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=1', '': '/zhaopin/?init=-1&headckid=9da7fe05bb64f8c5&fromSearchBtn=2&keyword=UI&ckid=9da7fe05bb64f8c5°radeFlag=0&siTag=1B2M2Y8AsgTpgAmY7PhCfg%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_unknown&d_ckId=d8376c5dd17d772337b435c56baafc42&d_curPage=0&d_pageSize=40&d_headId=d8376c5dd17d772337b435c56baafc42&curPage=9'}\n"
     ]
    }
   ],
   "source": [
    "# A-2 (cont.) Build the parameter template: parse one URL's query into a dict\n",
    "\n",
    "def parse_url_qs_for_curPage (url):  # helper: takes a URL...\n",
    "    six_parts = urlparse(url)     # ...splits it into its six parts...\n",
    "    out = parse_qs(six_parts.query)  # ...and deserializes the query string back into a parameter dict\n",
    "    return (out)\n",
    "\n",
    "# take one URL as the template\n",
    "params_mould = parse_url_qs_for_curPage(href_list[0])  # the first pager href becomes our parameter mould\n",
    "print (params_mould)\n",
    "\n",
    "print (href_dict)"
   ]
  },
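  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "parse_qs has an inverse, urlencode(doseq=True), which turns a dict-of-lists template back into a query string (requests performs this step itself when you pass params=). A sketch on a made-up URL:\n",
    "\n",
    "```python\n",
    "from urllib.parse import urlparse, parse_qs, urlencode\n",
    "\n",
    "url = 'https://example.com/zhaopin/?keyword=UI&pageSize=40&curPage=0'  # made-up example\n",
    "qs = parse_qs(urlparse(url).query)  # {'keyword': ['UI'], 'pageSize': ['40'], 'curPage': ['0']}\n",
    "\n",
    "qs['curPage'] = ['3']               # edit the template\n",
    "print(urlencode(qs, doseq=True))    # doseq=True unpacks the one-element lists\n",
    "```\n",
    "\n",
    "This round trip is why keeping values as lists (parse_qs style) costs nothing: both urlencode and requests' params= accept them directly."
   ]
  },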
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1\n",
      "9\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{0: {'init': ['-1'],\n",
       "  'headckid': ['9da7fe05bb64f8c5'],\n",
       "  'fromSearchBtn': ['2'],\n",
       "  'keyword': ['UI'],\n",
       "  'ckid': ['9da7fe05bb64f8c5°radeFlag=0'],\n",
       "  'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'],\n",
       "  'd_sfrom': ['search_unknown'],\n",
       "  'd_ckId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'd_curPage': ['0'],\n",
       "  'd_pageSize': ['40'],\n",
       "  'd_headId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'curPage': [0]},\n",
       " 1: {'init': ['-1'],\n",
       "  'headckid': ['9da7fe05bb64f8c5'],\n",
       "  'fromSearchBtn': ['2'],\n",
       "  'keyword': ['UI'],\n",
       "  'ckid': ['9da7fe05bb64f8c5°radeFlag=0'],\n",
       "  'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'],\n",
       "  'd_sfrom': ['search_unknown'],\n",
       "  'd_ckId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'd_curPage': ['0'],\n",
       "  'd_pageSize': ['40'],\n",
       "  'd_headId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'curPage': [1]},\n",
       " 2: {'init': ['-1'],\n",
       "  'headckid': ['9da7fe05bb64f8c5'],\n",
       "  'fromSearchBtn': ['2'],\n",
       "  'keyword': ['UI'],\n",
       "  'ckid': ['9da7fe05bb64f8c5°radeFlag=0'],\n",
       "  'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'],\n",
       "  'd_sfrom': ['search_unknown'],\n",
       "  'd_ckId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'd_curPage': ['0'],\n",
       "  'd_pageSize': ['40'],\n",
       "  'd_headId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'curPage': [2]},\n",
       " 3: {'init': ['-1'],\n",
       "  'headckid': ['9da7fe05bb64f8c5'],\n",
       "  'fromSearchBtn': ['2'],\n",
       "  'keyword': ['UI'],\n",
       "  'ckid': ['9da7fe05bb64f8c5°radeFlag=0'],\n",
       "  'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'],\n",
       "  'd_sfrom': ['search_unknown'],\n",
       "  'd_ckId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'd_curPage': ['0'],\n",
       "  'd_pageSize': ['40'],\n",
       "  'd_headId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'curPage': [3]},\n",
       " 4: {'init': ['-1'],\n",
       "  'headckid': ['9da7fe05bb64f8c5'],\n",
       "  'fromSearchBtn': ['2'],\n",
       "  'keyword': ['UI'],\n",
       "  'ckid': ['9da7fe05bb64f8c5°radeFlag=0'],\n",
       "  'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'],\n",
       "  'd_sfrom': ['search_unknown'],\n",
       "  'd_ckId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'd_curPage': ['0'],\n",
       "  'd_pageSize': ['40'],\n",
       "  'd_headId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'curPage': [4]},\n",
       " 5: {'init': ['-1'],\n",
       "  'headckid': ['9da7fe05bb64f8c5'],\n",
       "  'fromSearchBtn': ['2'],\n",
       "  'keyword': ['UI'],\n",
       "  'ckid': ['9da7fe05bb64f8c5°radeFlag=0'],\n",
       "  'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'],\n",
       "  'd_sfrom': ['search_unknown'],\n",
       "  'd_ckId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'd_curPage': ['0'],\n",
       "  'd_pageSize': ['40'],\n",
       "  'd_headId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'curPage': [5]},\n",
       " 6: {'init': ['-1'],\n",
       "  'headckid': ['9da7fe05bb64f8c5'],\n",
       "  'fromSearchBtn': ['2'],\n",
       "  'keyword': ['UI'],\n",
       "  'ckid': ['9da7fe05bb64f8c5°radeFlag=0'],\n",
       "  'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'],\n",
       "  'd_sfrom': ['search_unknown'],\n",
       "  'd_ckId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'd_curPage': ['0'],\n",
       "  'd_pageSize': ['40'],\n",
       "  'd_headId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'curPage': [6]},\n",
       " 7: {'init': ['-1'],\n",
       "  'headckid': ['9da7fe05bb64f8c5'],\n",
       "  'fromSearchBtn': ['2'],\n",
       "  'keyword': ['UI'],\n",
       "  'ckid': ['9da7fe05bb64f8c5°radeFlag=0'],\n",
       "  'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'],\n",
       "  'd_sfrom': ['search_unknown'],\n",
       "  'd_ckId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'd_curPage': ['0'],\n",
       "  'd_pageSize': ['40'],\n",
       "  'd_headId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'curPage': [7]},\n",
       " 8: {'init': ['-1'],\n",
       "  'headckid': ['9da7fe05bb64f8c5'],\n",
       "  'fromSearchBtn': ['2'],\n",
       "  'keyword': ['UI'],\n",
       "  'ckid': ['9da7fe05bb64f8c5°radeFlag=0'],\n",
       "  'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'],\n",
       "  'd_sfrom': ['search_unknown'],\n",
       "  'd_ckId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'd_curPage': ['0'],\n",
       "  'd_pageSize': ['40'],\n",
       "  'd_headId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'curPage': [8]},\n",
       " 9: {'init': ['-1'],\n",
       "  'headckid': ['9da7fe05bb64f8c5'],\n",
       "  'fromSearchBtn': ['2'],\n",
       "  'keyword': ['UI'],\n",
       "  'ckid': ['9da7fe05bb64f8c5°radeFlag=0'],\n",
       "  'siTag': ['1B2M2Y8AsgTpgAmY7PhCfg~fA9rXquZc5IkJpXC-Ycixw'],\n",
       "  'd_sfrom': ['search_unknown'],\n",
       "  'd_ckId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'd_curPage': ['0'],\n",
       "  'd_pageSize': ['40'],\n",
       "  'd_headId': ['d8376c5dd17d772337b435c56baafc42'],\n",
       "  'curPage': [9]}}"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# A-3 Build the parameter-template generator: keyword, curPage\n",
    "def params_mould_generation(keyword, curPage):   # copy the template, then override two parameters\n",
    "    params = params_mould.copy()\n",
    "    params['curPage'] = curPage   # override curPage\n",
    "    params['keyword'] = keyword  # override keyword\n",
    "    return (params)\n",
    "\n",
    "# dict comprehension: iterate href_dict, fix keyword to 'UI', collect one params dict per page\n",
    "# gotcha: a trailing backslash continues a statement on the next line, and nothing (not even a space) may follow it\n",
    "params_keyword_UI_curPage = { \n",
    "    i:params_mould_generation(curPage = [i], \\\n",
    "                  keyword = ['UI']) \\\n",
    "    for i,v in href_dict.items()\\\n",
    "    }\n",
    "\n",
    "# print(params_keyword_UI_curPage) # only generates the pager URLs present on this page; it neither extrapolates up to &curPage=9 nor includes the current page\n",
    "\n",
    "print (df_qs.curPage_int.min()) # min is only 1\n",
    "print (df_qs.curPage_int.max()) # max is 9\n",
    "\n",
    "# what we really want is 0 (this page) ... 9 (the max)\n",
    "# loop i from 0 up to the current max page, fix keyword to 'UI', collect into a dict\n",
    "params_keyword_UI_curPage = { \n",
    "    i:params_mould_generation(curPage = [i], \\\n",
    "                  keyword = ['UI']) \\\n",
    "    for i in range(0,df_qs.curPage_int.max()+1)\\\n",
    "    }\n",
    "params_keyword_UI_curPage"
   ]
  },
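  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A note on params_mould.copy(): it is a shallow copy, which is safe here only because the generator replaces whole values instead of mutating the inner lists. A toy sketch of the difference (the two-key template below is made up):\n",
    "\n",
    "```python\n",
    "from copy import deepcopy\n",
    "\n",
    "params_mould = {'keyword': ['UI'], 'curPage': ['0']}  # toy template\n",
    "\n",
    "def make_params(curPage, keyword):\n",
    "    params = params_mould.copy()         # shallow copy: fine as long as we REPLACE values...\n",
    "    params['curPage'] = [str(curPage)]   # ...with brand-new lists, as the cell above does\n",
    "    params['keyword'] = [keyword]\n",
    "    return params\n",
    "\n",
    "pages = {i: make_params(i, 'UI') for i in range(0, 3)}\n",
    "assert params_mould['curPage'] == ['0']  # the template itself is untouched\n",
    "\n",
    "# had we written params['curPage'][0] = ... instead, the SHARED inner list would\n",
    "# have mutated the template too; deepcopy(params_mould) guards against that.\n",
    "```"
   ]
  },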
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Paging: systematic iteration\n",
    "\n",
    "## Crawling has its rules\n",
    "* robots.txt: the site owner's \"rules of the road\" for search engines and other crawlers\n",
    "* Rate and timing\n",
    "  * Don't crawl too fast\n",
    "  * Be as polite as a human visitor would be\n",
    "  * time.sleep\n",
    "  \n",
    "```python\n",
    "\n",
    "# Last week C-3: multiple pages\n",
    "url = \"https://www.liepin.com/zhaopin/\"\n",
    "\n",
    "list_df = list()\n",
    "for k,v in 参数_compTag_用户体验.items():\n",
    "    payload = v\n",
    "    df = requests_liepin( url, params = payload)\n",
    "    df = df.assign (热门公司类型 = k)    \n",
    "    list_df.append(df)\n",
    "\n",
    "df_all = pd.concat(list_df)\n",
    "df_all\n",
    "```"
   ]
  },
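  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "robots.txt can be checked from Python with the standard library's urllib.robotparser. An offline sketch that feeds the parser hand-written rules instead of fetching a real file (a live crawler would call rp.set_url(...) followed by rp.read()):\n",
    "\n",
    "```python\n",
    "from urllib.robotparser import RobotFileParser\n",
    "\n",
    "rp = RobotFileParser()\n",
    "rp.parse('''\n",
    "User-agent: *\n",
    "Disallow: /private/\n",
    "Crawl-delay: 5\n",
    "'''.splitlines())\n",
    "\n",
    "print(rp.can_fetch('*', 'https://example.com/zhaopin/'))   # True: not disallowed\n",
    "print(rp.can_fetch('*', 'https://example.com/private/x'))  # False: under /private/\n",
    "print(rp.crawl_delay('*'))                                 # 5: the delay the site asks for\n",
    "```\n",
    "\n",
    "If a site declares a Crawl-delay, honoring it is the minimum courtesy; otherwise the time.sleep pacing below is a sensible default."
   ]
  },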
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "# B-1 (last week C-2, a simplification of A-1 from the week before): fetch and parse a single page\n",
    "session = HTMLSession()\n",
    "\n",
    "def requests_liepin( url, params):   # fetch one liepin search page and parse it into a DataFrame\n",
    "    # request the page with the given query parameters\n",
    "    r = session.get( url , params = params)  # bug fix: use the params argument, not the global payload\n",
    "\n",
    "    # after the search, grab the outermost container of each result item\n",
    "    main_factor = r.html.xpath( '//ul[@class=\"sojob-list\"]/li')\n",
    "\n",
    "    # xpath dictionary: key = the data (the \"beef\") we want to grab, value = its xpath\n",
    "    dict_xpaths={ \n",
    "        'text': {\n",
    "            'edu':      '//div[contains(@class,\"job-info\")]/p/span[@class=\"edu\"]',\n",
    "            '经验':      '//div[contains(@class,\"job-info\")]/p/span[@class=\"edu\"]/following-sibling::span',\n",
    "            '薪水':    '//div[contains(@class,\"job-info\")]/p/span[@class=\"text-warning\"]', \n",
    "            '时间':    '//div[contains(@class,\"job-info\")]/p/time/@title', \n",
    "            '职称':    '//div[contains(@class,\"job-info\")]/h3/a', \n",
    "            '公司地点': '//div[contains(@class,\"job-info\")]/p/a',\n",
    "            '公司名称': '//div[contains(@class,\"sojob-item-main\")]//p[@class=\"company-name\"]/a', \n",
    "        },\n",
    "        'text_content': {\n",
    "        },\n",
    "        'href': {\n",
    "            '链结':    '//div[contains(@class,\"job-info\")]/h3/a', \n",
    "            '公司URL': '//div[contains(@class,\"sojob-item-main\")]//p[@class=\"company-name\"]/a', \n",
    "        }\n",
    "    }\n",
    "\n",
    "    def get_e_text_content(_xpath_):  # for each main element, get the text_content at this xpath\n",
    "        # list comprehension\n",
    "        result = [e.xpath(_xpath_)[0].lxml.text_content() for e in main_factor]  # two steps: loop e over main_factor, then take e.xpath(_xpath_)[0].lxml.text_content()\n",
    "        return(result)\n",
    "\n",
    "    def get_e_text(_xpath_):    # for each main element, get the text at this xpath\n",
    "        # list comprehension, in four parts:\n",
    "        # loop e over main_factor; loop x over e.xpath(_xpath_); keep x if it is already a string, else take x.text; join the stripped pieces\n",
    "        result = [\"\".join([x.strip() if type(x) is str else x.text.strip() for x in e.xpath(_xpath_)]) for e in main_factor]\n",
    "        return(result)\n",
    "\n",
    "\n",
    "    def get_e_href(_xpath_):  # for each main element, get the href at this xpath\n",
    "        # list comprehension; the trailing backslash continues the expression on the next line\n",
    "        result = [list(e.xpath(_xpath_, first=True).absolute_links)[0] \\\n",
    "                   if len(e.xpath(_xpath_, first=True).absolute_links) >= 1  \\\n",
    "                   else \"\" for e in main_factor]\n",
    "        return(result)\n",
    "    \n",
    "    \n",
    "    # run .xpath only under each main element\n",
    "    data_dict = dict() # start from an empty dict\n",
    "    # the next three lines each build {field: values} for one xpath group; .update merges them into data_dict\n",
    "    data_dict = {k:get_e_text_content(v) for k,v in dict_xpaths['text_content'].items()}  \n",
    "    data_dict.update({k:get_e_text(v) for k,v in dict_xpaths['text'].items()})\n",
    "    data_dict.update({k:get_e_href(v) for k,v in dict_xpaths['href'].items()})\n",
    "\n",
    "    data = pd.DataFrame(data_dict)   # tabulate the collected dict with a pandas DataFrame\n",
    "    #data.to_excel(\"20春_Web数据挖掘_week03_liepin.xlsx\", sheet_name=\"搜查结果\")\n",
    "    return (data)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Crawling has its rules: don't crawl too fast\n",
    "time.sleep"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wall time: 6.41 s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "import time\n",
    "from random import random\n",
    "time.sleep(3+4*random())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wall time: 1min 10s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "# gotcha: %%time must be the very first line of the cell; even a comment above it raises an error\n",
    "# B-2 Multiple pages, but slow down with time.sleep\n",
    "import time   # stdlib timing module\n",
    "from random import random  # random() draws a float in [0, 1)\n",
    "\n",
    "url = \"https://www.liepin.com/zhaopin/\"   # liepin search URL\n",
    "\n",
    "list_df = list()  # start from an empty list\n",
    "for k,v in params_keyword_UI_curPage.items():  # loop over the (page, params) pairs in the dict\n",
    "    payload = v    # the params dict for this page\n",
    "    df = requests_liepin( url, params = payload)   # fetch the liepin URL with these query parameters\n",
    "    time.sleep(3+4*random())  # slow down: 3-7 seconds, about 5 on average\n",
    "    df = df.assign (curPage = k)  # tag the rows with their curPage\n",
    "    list_df.append(df)     # collect this page's DataFrame\n",
    "\n",
    "df_all = pd.concat(list_df).reset_index()\n",
    "df_all.index.name = '序'\n",
    "\n",
    "# Last week C-4: output\n",
    "df_all.to_excel(\"20春_Web数据挖掘_week04_liepin_翻页.xlsx\",\\\n",
    "                sheet_name=\"用户体验\")  # save to Excel: back up the data as you go\n",
    "\n",
    "# estimated time: 5 s * 10 pages = 50 s\n",
    "# estimated rows: 40 per page * 10 pages = 400"
   ]
  },
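  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The accumulate-then-concat pattern in the loop above can be exercised without the network. A sketch with toy DataFrames standing in for what requests_liepin returns:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# toy per-page results: two rows per page, three pages\n",
    "pages = [pd.DataFrame({'职称': [f'job-{p}-{i}' for i in range(2)]}) for p in range(3)]\n",
    "\n",
    "list_df = []\n",
    "for p, df in enumerate(pages):\n",
    "    df = df.assign(curPage=p)   # tag every row with the page it came from\n",
    "    list_df.append(df)\n",
    "\n",
    "df_all = pd.concat(list_df).reset_index(drop=True)  # drop=True discards the old per-page 0..1 indexes\n",
    "df_all.index.name = '序'\n",
    "print(len(df_all))  # 6 rows, distinguishable by the curPage column\n",
    "```\n",
    "\n",
    "Tagging each page with assign() before appending is what lets the merged frame tell the pages apart afterwards."
   ]
  },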
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<function time.sleep>"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "## Multiple pages + multiple keywords\n",
    "time.sleep"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "用户体验 10\n",
      "UX 10\n",
      "网页设计 10\n",
      "Wall time: 3min 15s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "# B-3 Multiple pages + multiple keywords\n",
    "import time  # stdlib timing module\n",
    "from random import random  # random() draws a float in [0, 1)\n",
    "\n",
    "url = \"https://www.liepin.com/zhaopin/\"\n",
    "xpath_翻页a = '//div[@class=\"pagerbar\"]/a[starts-with(@href,\"/zhaopin\")]'\n",
    "\n",
    "keywords = ['用户体验','UX','网页设计'] # the keywords to search\n",
    "list_df = list()  # start from an empty list\n",
    "\n",
    "## probe page 0 of each keyword to learn how many pages it has\n",
    "for key in keywords:\n",
    "    payload = params_mould_generation(keyword=[key], curPage=['0'])  # template generator with its two required arguments\n",
    "    r = session.get( url, params = payload)   # bug fix: fetch page 0 ourselves so r is THIS keyword's response, not a stale global\n",
    "    href_list = [x.xpath('//@href')[0] for x in r.html.xpath(xpath_翻页a)]  # pull the @href out of each pager link\n",
    "    df = pd.DataFrame([ urlparse(x) for x in href_list])   # urlparse each href into a DataFrame\n",
    "    df_qs = pd.DataFrame([{k:v[0] for k,v in parse_qs(x).items()} for x in df['query'] ])  # parse each query string into a flat dict, then tabulate\n",
    "    df_qs = df_qs.assign (curPage_int=df_qs.curPage.astype(int)) # cast to int\n",
    "    length = df_qs.curPage_int.max()+1   # number of pages: max page index plus one\n",
    "    params_keyword_X_curPage = { \n",
    "        i:params_mould_generation(curPage = [i], \\\n",
    "                      keyword = [key]) \\\n",
    "        for i in range(0,length)\\\n",
    "        }\n",
    "    #print (params_keyword_X_curPage)\n",
    "    print (key,length)\n",
    "    \n",
    "    for k,v in params_keyword_X_curPage.items():  # .items() iterates the (page, params) pairs\n",
    "        payload = v   # the params dict for this page\n",
    "        df = requests_liepin( url, params = payload)  # fetch this page\n",
    "        time.sleep(3+4*random())  # slow down: 3-7 seconds, about 5 on average\n",
    "        df = df.assign (keyword = key)  # tag rows with the keyword\n",
    "        df = df.assign (curPage = k)  # tag rows with the page\n",
    "        list_df.append(df)  # collect this page's DataFrame\n",
    "        \n",
    "df_all = pd.concat(list_df).reset_index()  # pd.concat stitches the per-page frames together; reset_index restores a default integer index\n",
    "df_all.index.name = '序'  # name the index\n",
    "\n",
    "df_all.to_excel(\"20春_Web数据挖掘_week04_liepin_翻页.xlsx\",\\\n",
    "                sheet_name=\"_\".join(keywords))  # save to Excel; the sheet name joins the keywords\n",
    "# 预估时间: 2*5秒*10 =100\n",
    "# 预估数量: 2*40*10 =800"
   ]
  },
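  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The probe step above recovers the page count from the pager links. A standalone sketch of just that step, with hypothetical hrefs (the real ones come from `r.html.xpath(xpath_翻页a)`): parse each query string with `urlparse` + `parse_qs`, then take the highest `curPage` plus one."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from urllib.parse import urlparse, parse_qs\n",
    "\n",
    "# Hypothetical pager hrefs; real ones come from the pagerbar <a> elements\n",
    "href_list = [\n",
    "    '/zhaopin/?keyword=UX&curPage=1',\n",
    "    '/zhaopin/?keyword=UX&curPage=2',\n",
    "    '/zhaopin/?keyword=UX&curPage=9',\n",
    "]\n",
    "# one flat {param: first value} dict per href\n",
    "qs_dicts = [{k: v[0] for k, v in parse_qs(urlparse(h).query).items()}\n",
    "            for h in href_list]\n",
    "length = max(int(d['curPage']) for d in qs_dicts) + 1  # pages run 0..max\n",
    "print(length)  # 10\n"
   ]
  },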
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 翻页：数据备份与整合\n",
    "多个页面+多个关键词执行时，若怕中断最好把每一页的df内容备份做中继"
   ]
  },
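  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If a long run does crash, the per-page TSV backups can be stitched back together afterwards. A minimal sketch (the temp directory and `liepin_UX_*.tsv` filenames here are illustrative, following the `df.to_csv` pattern used below): write two sample backups, then reload and stack them with `glob` + `pd.concat`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import glob, os, tempfile\n",
    "import pandas as pd\n",
    "\n",
    "tmp = tempfile.mkdtemp()\n",
    "# Simulate two per-page backup files like those written by df.to_csv below\n",
    "for k in (0, 1):\n",
    "    pd.DataFrame({'title': ['job{k}'.format(k=k)]}).to_csv(\n",
    "        os.path.join(tmp, 'liepin_UX_{k}.tsv'.format(k=k)),\n",
    "        sep='\\t', encoding='utf8')\n",
    "\n",
    "# Reload every backup and stack them into one DataFrame\n",
    "files = sorted(glob.glob(os.path.join(tmp, 'liepin_UX_*.tsv')))\n",
    "df_all = pd.concat([pd.read_csv(f, sep='\\t', index_col=0) for f in files],\n",
    "                   ignore_index=True)\n",
    "print(len(df_all))  # 2\n"
   ]
  },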
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "用户体验 10\n",
      "UX 10\n",
      "产品需求 10\n",
      "PRD 10\n",
      "UI 10\n",
      "Wall time: 5min 21s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "# C-1 多个页面+多个关键词\n",
    "import time  #导入模块\n",
    "from random import random  #从模块中导入函数\n",
    "\n",
    "url = \"https://www.liepin.com/zhaopin/\"\n",
    "xpath_翻页a = '//div[@class=\"pagerbar\"]/a[starts-with(@href,\"/zhaopin\")]'  #翻页的xpath\n",
    "\n",
    "keywords = ['用户体验','UX','产品需求','PRD','UI']  #添加关键词UI\n",
    "list_df = list()  #新建一个列表\n",
    "\n",
    "## 第一页试探有多长的页面\n",
    "for key in keywords:   \n",
    "    payload = params_mould_generation(keyword=[key], curPage=['0'])#调用函数，并给出两个必须的参数值，关键词和当前页数\n",
    "    df = requests_liepin( url, params = payload)#获取到猎聘网带参数的网址\n",
    "    href_list = [x.xpath('//@href')[0] for x in r.html.xpath(xpath_翻页a)]  #for循环获取到xpath，然后提取xpath('//@href')的第一个值\n",
    "    df = pd.DataFrame([ urlparse(x) for x in href_list]) # for循环遍历href_list，并对其进行解析，丢进数据框里\n",
    "    df_qs = pd.DataFrame([{k:v[0] for k,v in parse_qs(x).items()} for x in df['query'] ])#高级列表推导，for循环的到query,对其进行解析，再字典格式化，数据框化\n",
    "    df_qs = df_qs.assign (curPage_int=df_qs.curPage.astype(int)) # 变成整数\n",
    "    lenth = df_qs.curPage_int.max()+1  #最大值的长度\n",
    "    params_keyword_X_curPage = { \n",
    "        i:params_mould_generation(curPage = [i], \\\n",
    "                      keyword = [key]) \\\n",
    "        for i in range(0,lenth)\\\n",
    "        }\n",
    "    #print (参数_keyword_X_curPage)\n",
    "    print (key,lenth)\n",
    "\n",
    "    for k,v in params_keyword_X_curPage.items():#.item（）是排序\n",
    "        payload = v  #v值为参数值\n",
    "        df = requests_liepin( url, params = payload)  #带参数的完整网址\n",
    "        time.sleep(3+4*random())  #放慢脚步 3-7秒, 平均约5秒\n",
    "        ## 备份\n",
    "        df.to_csv(\"20春_Web数据挖掘_week04_liepin_{key}_{k}.tsv\"\\\n",
    "                  .format(key=key, k=k), sep=\"\\t\", encoding=\"utf8\")\n",
    "        \n",
    "        df = df.assign (keyword = key)  # 区分  keyword    \n",
    "        df = df.assign (curPage = k)  # 区分  curPage    \n",
    "        list_df.append(df)#通过append的方法添加到列表里\n",
    "        \n",
    "df_all = pd.concat(list_df).reset_index()#concat函数是在pandas底下的方法，可以将数据根据不同的轴作简单的融合，reset_index可以还原索引，\n",
    "                                         #重新变为默认的整型索引\n",
    "df_all.index.name = '序'\n",
    "\n",
    "df_all.to_excel(\"20春_Web数据挖掘_week04_liepin_翻页_4.xlsx\",\\\n",
    "                sheet_name=\"_\".join(keywords)) #存为excel表格，并加keyword加上去\n",
    "# 预估时间: 4*5秒*10 =200\n",
    "# 预估数量: 4*40*10 =1600"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 本周练习\n",
    "\n",
    "* 开始试验各类参数的调整\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {
    "height": "749px",
    "left": "1125.609375px",
    "top": "110px",
    "width": "281.390625px"
   },
   "toc_section_display": true,
   "toc_window_display": true
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
