{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "## MVP加值主张宣言 \n",
    "* 数据加值宣言：本项目产出按行业及地区挖掘的关于互联网设计的数据，以解决游戏产业下互联网设计就业需求及特性的就业分析问题，给想要在游戏产业发展互联网设计的大学生一个职业能力需要的标准，帮助他们有目的的提升自己，做好职业规划。\n",
    "\n",
    "* MVP的数据加值：\n",
    "    * 现有问题：大学生就业难，不知道往哪个方面提升自己，也不知道自己想要的工作需要什么样的能力。\n",
    "    * 解决方案：通过详情页的职称、经验、学历等详细信息分析广州地区游戏产业下的互联网设计工作的共性与特性，帮助大学生有针对性的提升自己，找到自己心仪的工作，做好职业规划\n",
    "\n",
    "\n",
    "## 问题情境的数据参数选择 \n",
    "1. query参数： \n",
    "    * dqs\n",
    "    * curPage\n",
    "\n",
    " 2. 关键词：互联网设计\n",
    "```\n",
    "#关键词更改\n",
    "参数修改后列表=[参数修改(curPage=[i],key=[\"互联网设计\"]) for i in range(10)]\n",
    "参数修改后列表\n",
    "```\n",
    "## 数据挖掘：思路方法及具体执行 \n",
    "### 方法选择：由于猎聘网的数据都在html中，使用xpath便可以获取，于是使用了xpath获取网页数据\n",
    "```\n",
    "class LiepinspiderSpider(scrapy.Spider):\n",
    "    name = 'liepinSpider'\n",
    "    allowed_domains = ['www.liepin.com']\n",
    "    start_urls =starts_url\n",
    "    def parse(self, response): \n",
    "        r=response.xpath('//ul[@class=\"sojob-list\"]/li')\n",
    "        for a in r:\n",
    "            job_xueli =a.xpath('//div[contains(@class,\"job-info\")]/p/span[@class=\"edu\"]/text()').extract()\n",
    "            job_jingyan=a.xpath('//div[contains(@class,\"job-info\")]/p/span[@class=\"edu\"]/following-sibling::span/text()').extract()\n",
    "            job_xinshui=a.xpath('//div[contains(@class,\"job-info\")]/p/span[@class=\"text-warning\"]/text()').extract()\n",
    "            job_shijian=a.xpath('//div[contains(@class,\"job-info\")]/p/time/@title/text()').extract()\n",
    "            job_zhicheng=[x.strip()for x in (a.xpath('//div[contains(@class,\"job-info\")]/h3/a/text()')).extract() ]\n",
    "            job_company_name=a.xpath('//div[contains(@class,\"sojob-item-main\")]//p[@class=\"company-name\"]/a/text()').extract()\n",
    "            job_position=a.xpath('//div[contains(@class,\"job-info\")]/p/a/text()').extract()\n",
    "            job_company_url=a.xpath('//div[contains(@class,\"sojob-item-main\")]//p[@class=\"company-name\"]/a/@href').extract() \n",
    "```\n",
    "\n",
    "### 单页数据+url解析\n",
    "```\n",
    "url=\"https://www.liepin.com/zhaopin/?compkind=&dqs=050020&pubTime=&pageSize=40&salary=&compTag=&sortFlag=15&degradeFlag=0&compIds=&subIndustry=&jobKind=&industries=420&compscale=&key=%E4%BA%92%E8%81%94%E7%BD%91%E8%AE%BE%E8%AE%A1&siTag=qUvdX0afE0-4-1bYcFF5vw%7E9NgYuqQOK_ZE5No4cv1wsA&d_sfrom=search_prime&d_ckId=8fb205edb185d8cd6b5c2118afb22a5a&d_curPage=9&d_pageSize=40&d_headId=937320f2851c43a3580d0554fd8a8557&curPage=0\"\n",
    "from urllib.parse import urlparse, parse_qs,urlencode\n",
    "import pandas as pd\n",
    "def parse_url_qs_for_curPage (url):\n",
    "    six_parts = urlparse(url) #把url拆成6部分\n",
    "    out = parse_qs(six_parts.query)#取出query值并输出为字典out\n",
    "    return (out)\n",
    "参数模板=parse_url_qs_for_curPage(url)\n",
    "参数模板\n",
    "#下面这个函数要改，上面的url要改\n",
    "def 参数修改(key,curPage):\n",
    "    参数=参数模板.copy()\n",
    "    参数[\"key\"]=key\n",
    "    参数[\"curPage\"]=curPage\n",
    "    return 参数\n",
    "```\n",
    "\n",
    "### 多页数据\n",
    "```\n",
    "def 参数修改(key,curPage):\n",
    "    参数=参数模板.copy()\n",
    "    参数[\"key\"]=key\n",
    "    参数[\"curPage\"]=curPage\n",
    "    return 参数\n",
    "#关键词更改\n",
    "参数修改后列表=[参数修改(curPage=[i],key=[\"互联网设计\"]) for i in range(10)]\n",
    "参数修改后列表\n",
    "```\n",
    "\n",
    "### 系统设计思维\n",
    "* 本项目运用scrapy框架爬取猎聘网广州地区的游戏产业下关于互联网分析的数据\n",
    "* 网页数据抓取对比\n",
    "\n",
    "|    | scrapy | request | selenium |\n",
    "|----|--------|---------|----------|\n",
    "| 优点 |    模块化、并发性好、爬取时间快    |     入门简单，定制灵活   |      自动化爬取    |\n",
    "| 缺点 |     入门较难，不能爬取需要执行js才能获取数据的网页   |     并发性较差，性能低    |      速度慢    |\n",
    "\n",
    "* 选用scrapy框架的原因：模块化、爬取时间快、抓取猎聘这样的网页简单高效\n",
    "\n",
    "\n",
    "\n",
    "### 数据导出\n",
    "```\n",
    "ulist=list()\n",
    "class LiepinPipeline:\n",
    "    def process_item(self, item, spider):\n",
    "        df=pd.DataFrame(item[\"liepin_xueli\"]).rename(columns={0:\"学历\"})\n",
    "        df[\"经验\"]=item[\"liepin_jingyan\"]\n",
    "        df[\"薪水\"]=item[\"job_xinshui\"]\n",
    "        df[\"职称\"]=item[\"job_zhicheng\"]\n",
    "        df[\"公司名称\"]=item[\"job_company_name\"]\n",
    "        df[\"公司地点\"]=item[\"job_position\"]\n",
    "        df[\"公司链接\"]=item[\"job_company_url\"]\n",
    "        self.addition(df)\n",
    "    def addition(self,df):\n",
    "\n",
    "        ulist.append(df)\n",
    "\n",
    "        df_合并=pd.concat(ulist)\n",
    "\n",
    "        df_合并.to_excel(\"猎聘广州互联网设计职位信息.xlsx\")\n",
    "```\n",
    "\n",
    "### 数据整理\n",
    "```\n",
    "        item=LiepinItem()\n",
    "        item[\"liepin_jingyan\"]=job_jingyan\n",
    "        item[\"liepin_xueli\"]=job_xueli\n",
    "        item[\"job_xinshui\"]=job_xinshui\n",
    "        item[\"job_zhicheng\"]=job_zhicheng\n",
    "        item[\"job_company_name\"]=job_company_name\n",
    "        item[\"job_position\"]=job_position\n",
    "        item[\"job_company_url\"]=job_company_url\n",
    "```\n",
    "```\n",
    "        df=pd.DataFrame(item[\"liepin_xueli\"]).rename(columns={0:\"学历\"})\n",
    "        df[\"经验\"]=item[\"liepin_jingyan\"]\n",
    "        df[\"薪水\"]=item[\"job_xinshui\"]\n",
    "        df[\"职称\"]=item[\"job_zhicheng\"]\n",
    "        df[\"公司名称\"]=item[\"job_company_name\"]\n",
    "        df[\"公司地点\"]=item[\"job_position\"]\n",
    "        df[\"公司链接\"]=item[\"job_company_url\"]\n",
    "        self.addition(df)\n",
    "```\n",
    "\n",
    "\n",
    "## 心得总结及感谢\n",
    "### 心得\n",
    "* 在经过了本学期的学习后，我认为数据挖掘中最重要的是根据网页数据有针对性的去选择爬取的工具，这样才能准确高效的爬取到自己所需要的数据。以及在对挖掘数据的选择时，要先想这些数据具体有什么作用，能解决什么问题，这些问题是否以人为本，有哪些向善的价值。这样爬取到的数据才能效益最大化。\n",
    "### 感谢\n",
    "* 感谢一些博主写的scrapy的文章对我的关键代码及项目的帮助，并在此附上URL。\n",
    "* [爬虫框架Scrapy个人总结（详细）熟悉](https://www.jianshu.com/p/cecb29c04cd2)\n",
    "* [从爬虫到数据可视化（1）—猎聘网](https://www.jianshu.com/p/c80badcaa5bf)\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": true,
   "toc_window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
