{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Mining Product Manager Job Data for Shanghai on Liepin\n",
    "### Value Proposition\n",
    "- Data value statement:\n",
    "  - This project mines product manager job postings for the Shanghai area from Liepin, scraping the requirements employers set for the role (education, work experience, salary, and so on) to give new graduates who want to become product managers a reference for early career planning and a sense of the salaries on offer.\n",
    "- Data value of the MVP:\n",
    "1. Problem: many university students do not yet understand what the product manager role requires\n",
    "2. Solution: scrape product manager job requirements for Shanghai (education, experience, salary, etc.) as a reference for early career planning\n",
    "### Choosing Data Parameters for the Problem\n",
    "#### Query parameters:\n",
    "- dqs\n",
    "- curPage\n",
    "#### Keyword: 产品经理 (product manager)\n",
    "```\n",
    "# change the keyword and sweep curPage over the first ten result pages\n",
    "参数修改后列表 = [参数修改(curPage=[i], key=[\"产品经理\"]) for i in range(10)]\n",
    "参数修改后列表\n",
    "```\n",
    "### Data Mining: Approach and Execution\n",
    "- Method: Liepin serves the listing data directly in the HTML, so the fields can be extracted with XPath; the spider therefore uses XPath selectors\n",
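    "\n",
    "When XPath queries run against nodes selected from a list, the paths must be relative (starting with `.//`), or they match from the document root and return the same full-page list every time. A minimal sketch of the relative form, using only the standard library's `xml.etree` (which supports a small XPath subset); the markup here is a made-up stand-in for the real listing HTML:\n",
    "\n",
    "```python\n",
    "import xml.etree.ElementTree as ET\n",
    "\n",
    "# toy version of the job list: one <li> per posting\n",
    "doc = ET.fromstring(\n",
    "    '<ul>'\n",
    "    '<li><p><span class=\"edu\">本科</span></p></li>'\n",
    "    '<li><p><span class=\"edu\">硕士</span></p></li>'\n",
    "    '</ul>'\n",
    ")\n",
    "rows = []\n",
    "for li in doc.findall('li'):\n",
    "    # relative search: only this <li>'s span is matched\n",
    "    rows.append(li.find(\".//span[@class='edu']\").text)\n",
    "print(rows)  # -> ['本科', '硕士']\n",
    "```\n",
    "\n",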
    "```\n",
    "class LiepinspiderSpider(scrapy.Spider):\n",
    "    name = 'liepinSpider'\n",
    "    allowed_domains = ['www.liepin.com']\n",
    "    start_urls = starts_url  # search-result URLs built from the modified parameters\n",
    "\n",
    "    def parse(self, response):\n",
    "        # every listing sits in an <li> under <ul class=\"sojob-list\">; the\n",
    "        # fields are extracted as page-level lists (note the leading \".\" --\n",
    "        # the paths are relative to the selected <li> nodes)\n",
    "        r = response.xpath('//ul[@class=\"sojob-list\"]/li')\n",
    "        job_xueli = r.xpath('.//div[contains(@class,\"job-info\")]/p/span[@class=\"edu\"]/text()').extract()\n",
    "        job_jingyan = r.xpath('.//div[contains(@class,\"job-info\")]/p/span[@class=\"edu\"]/following-sibling::span/text()').extract()\n",
    "        job_xinshui = r.xpath('.//div[contains(@class,\"job-info\")]/p/span[@class=\"text-warning\"]/text()').extract()\n",
    "        job_shijian = r.xpath('.//div[contains(@class,\"job-info\")]/p/time/@title').extract()  # @title is already a string; text() does not apply\n",
    "        job_zhicheng = [x.strip() for x in r.xpath('.//div[contains(@class,\"job-info\")]/h3/a/text()').extract()]\n",
    "        job_company_name = r.xpath('.//div[contains(@class,\"sojob-item-main\")]//p[@class=\"company-name\"]/a/text()').extract()\n",
    "        job_url = r.xpath('.//div[contains(@class,\"job-info\")]/h3/a/@href').extract()\n",
    "        job_company_url = r.xpath('.//div[contains(@class,\"sojob-item-main\")]//p[@class=\"company-name\"]/a/@href').extract()\n",
    "        # hand one item per page to the pipeline, keyed as the pipeline expects\n",
    "        yield {\n",
    "            'liepin_xueli': job_xueli, 'liepin_jingyan': job_jingyan,\n",
    "            'job_xinshui': job_xinshui, 'job_shijian': job_shijian,\n",
    "            'job_zhicheng': job_zhicheng, 'job_company_name': job_company_name,\n",
    "            'job_url': job_url, 'job_company_url': job_company_url,\n",
    "        }\n",
    "```\n",
    "### Single-Page Data and URL Parsing\n",
    "```\n",
    "url=\"https://www.liepin.com/zhaopin/?compkind=&dqs=020&pubTime=&pageSize=40&salary=&compTag=155&sortFlag=15&compIds=&subIndustry=&jobKind=&industries=&compscale=&key=%E6%95%B0%E6%8D%AE%E5%88%86%E6%9E%90%E5%B8%88&siTag=bFGQTbwE_AAQSb-u11jrBw%7EfA9rXquZc5IkJpXC-Ycixw&d_sfrom=search_prime&d_ckId=29ad48338b62368ba7537c9cdb34d6ff&d_curPage=1&d_pageSize=40&d_headId=29ad48338b62368ba7537c9cdb34d6ff\"\n",
    "from urllib.parse import urlparse, parse_qs, urlencode\n",
    "\n",
    "def parse_url_qs_for_curPage(url):\n",
    "    six_parts = urlparse(url)        # split the url into its six components\n",
    "    out = parse_qs(six_parts.query)  # parse the query string into a dict of lists\n",
    "    return out\n",
    "\n",
    "参数模板 = parse_url_qs_for_curPage(url)\n",
    "参数模板\n",
    "# note: the URL above was captured from a different search, so 参数修改 below\n",
    "# must override key as well as curPage\n",
    "def 参数修改(key, curPage):\n",
    "    参数 = 参数模板.copy()\n",
    "    参数[\"key\"] = key\n",
    "    参数[\"curPage\"] = curPage\n",
    "    return 参数\n",
    "```\n",
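    "\n",
    "Because `parse_qs` returns list-valued entries, a modified parameter dict can be serialized back into a query string with `urlencode(..., doseq=True)`. A stdlib-only sketch of the round trip; the short URL is a simplified stand-in for the full search URL:\n",
    "\n",
    "```python\n",
    "from urllib.parse import urlparse, parse_qs, urlencode, urlunparse\n",
    "\n",
    "# simplified stand-in for the real search URL\n",
    "url = 'https://www.liepin.com/zhaopin/?dqs=020&curPage=0&key=pm'\n",
    "parts = urlparse(url)\n",
    "params = parse_qs(parts.query)  # {'dqs': ['020'], 'curPage': ['0'], 'key': ['pm']}\n",
    "params['curPage'] = ['3']       # jump to result page 3\n",
    "new_url = urlunparse(parts._replace(query=urlencode(params, doseq=True)))\n",
    "print(new_url)\n",
    "```\n",
    "\n",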
    "### Multi-Page Data\n",
    "```\n",
    "# sweep curPage to cover the first ten result pages (参数修改 is defined above)\n",
    "参数修改后列表 = [参数修改(curPage=[i], key=[\"产品经理\"]) for i in range(10)]\n",
    "starts_url = [\"https://www.liepin.com/zhaopin/?\" + urlencode(参数, doseq=True) for 参数 in 参数修改后列表]\n",
    "```\n",
    "### Systems Design Thinking\n",
    "- This project mines product manager job data for Shanghai from Liepin, scraping employers' requirements for the role (education, work experience, salary, and so on) as a data reference\n",
    "\n",
    "|      | scrapy | requests | selenium |\n",
    "|:----:|:------:|:--------:|:--------:|\n",
    "| Pros | modular, fast | flexible to customize | automates a real browser |\n",
    "| Cons | JS-rendered pages need extra handling | poor concurrency | slow |\n",
    "\n",
    "- Why scrapy: modular, fast crawling, and efficient\n",
    "\n",
    "\n",
    "### Data Export\n",
    "```\n",
    "import pandas as pd\n",
    "\n",
    "ulist = list()\n",
    "\n",
    "class LiepinPipeline:\n",
    "    def process_item(self, item, spider):\n",
    "        df = pd.DataFrame(item[\"liepin_xueli\"]).rename(columns={0: \"学历\"})\n",
    "        df[\"经验\"] = item[\"liepin_jingyan\"]\n",
    "        df[\"薪水\"] = item[\"job_xinshui\"]\n",
    "        df[\"职称\"] = item[\"job_zhicheng\"]\n",
    "        df[\"公司名称\"] = item[\"job_company_name\"]\n",
    "        df[\"链接\"] = item[\"job_url\"]\n",
    "        df[\"公司链接\"] = item[\"job_company_url\"]\n",
    "        self.addition(df)\n",
    "        return item\n",
    "\n",
    "    def addition(self, df):\n",
    "        ulist.append(df)\n",
    "        df_合并 = pd.concat(ulist)\n",
    "        df_合并.to_excel(\"猎聘上海产品经理职位信息.xlsx\")\n",
    "```\n",
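    "\n",
    "The pipeline's accumulate-then-export pattern (append each batch to a module-level list, merge everything, rewrite the output file) can be sketched without pandas. This stdlib `csv` version only illustrates the pattern; the two column names are a made-up subset of the real fields:\n",
    "\n",
    "```python\n",
    "import csv\n",
    "import io\n",
    "\n",
    "batches = []  # plays the role of ulist in the pipeline\n",
    "\n",
    "def addition(rows):\n",
    "    # append one batch, then re-export the merged data (like pd.concat + to_excel)\n",
    "    batches.append(rows)\n",
    "    merged = [row for batch in batches for row in batch]\n",
    "    buf = io.StringIO()\n",
    "    writer = csv.DictWriter(buf, fieldnames=['学历', '薪水'])\n",
    "    writer.writeheader()\n",
    "    writer.writerows(merged)\n",
    "    return buf.getvalue()\n",
    "\n",
    "addition([{'学历': '本科', '薪水': '15-20k'}])\n",
    "out = addition([{'学历': '硕士', '薪水': '20-30k'}])\n",
    "print(out.splitlines())  # -> ['学历,薪水', '本科,15-20k', '硕士,20-30k']\n",
    "```\n",
    "\n",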
    "### Reflections\n",
    "- Through this semester's coursework and the final project, I gained an initial understanding of how a crawler collects data from web pages. Being able to gather data will also broaden my options when looking for work, and the project let me examine, from a data-science perspective, the data's main value and its contribution and practical significance for the humanities. In future study I will pay more attention to the beneficial use of data and keep deepening my understanding of data science.\n",
    "### Thanks\n",
    "- Thanks to the scrapy framework for making data mining faster and simpler\n",
    "- Thanks also to the many experts whose articles shared online gave us material to learn from and consult\n",
    "- [Big-data analysis: tracing the course of the Wuhan novel coronavirus outbreak](http://www.woshipm.com/data-analysis/3347742.html), which discusses some usage scenarios for scrapy\n",
    "- [How can data mining find potential users?](http://www.woshipm.com/data-analysis/593685.html), a useful broader perspective\n",
    "\n",
    "\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
