{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Are Game-Design Jobs in Demand Among Emerging Internet Positions?\n",
    "\n",
    "- Data value proposition: following conclusion 3 of the 2019 Internet Emerging Talent White Paper (\"game, interaction, and other emerging internet positions are in short supply\"), this project mines job-posting data about game-design roles to analyze the demand for, and characteristics of, game-industry employment.\n",
    "\n",
    "### What this data product helps analyze\n",
    "- 1. Which positions exist in the game industry.\n",
    "- 2. What compensation those positions offer.\n",
    "- 3. Which city gives a job seeker the best chance of landing a game-industry interview.\n",
    "- 4. What requirements applicants must meet for game-industry positions.\n",
    "- 5. Whether these positions are open to fresh graduates, i.e. whether demand really outstrips supply to the point of hiring candidates with no experience.\n",
    "\n",
    "\n",
    "# Minimum Viable Data Product\n",
    "\n",
    "### Value added by the MVP:\n",
    "- 1. The scraped text of each job posting shows concretely which game-industry positions exist.\n",
    "- 2. The scraped salary and company fields support analysis of compensation in game-industry positions.\n",
    "- 3. The scraped location field shows which regions have the highest demand for this kind of talent.\n",
    "- 4. The scraped experience and education requirements show how demanding these positions are and whether they are open to fresh graduates.\n",
    "\n",
    "\n",
    "# Mining the Query Parameters\n",
    "\n",
    "\n",
    "- Searching Liepin (猎聘网) with the game-industry filter as the keyword, the scraper fetches multiple pages of results and consolidates them into a CSV, recording the page number, item number, and each field of interest.\n",
    "\n"
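    "\n",
    "The pagination scheme above can be sketched as follows (a minimal sketch: the `curPage` parameter name and the base URL are the ones used by the scraper code later in this notebook):\n",
    "\n",
    "```python\n",
    "# Build the URL for each results page by appending the curPage query parameter.\n",
    "base_url = 'https://www.liepin.com/zhaopin/?headckid=4ba8c02991d96408&industries=420'\n",
    "page_urls = [base_url + '&curPage=' + str(n) for n in range(1, 4)]\n",
    "print(page_urls[0])  # the first page's URL ends with curPage=1\n",
    "```\n",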
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Core modules\n",
    "import pandas as pd\n",
    "from requests_html import HTMLSession"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "import csv\n",
    "import time\n",
    "from lxml import etree\n",
    "from requests.exceptions import RequestException\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Building the scraper functions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "url = 'https://www.liepin.com/zhaopin/?headckid=4ba8c02991d96408&industries=420'  # change the link to match the target keyword/industry\n",
    "num_1, num_2 = 0, 0  # current page number, item number within the page\n",
    "\n",
    "def get_one_page(url):\n",
    "    try:\n",
    "        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'}\n",
    "        response = requests.get(url, headers=headers)\n",
    "        if response.status_code == 200:\n",
    "            return etree.HTML(response.text)\n",
    "        return None\n",
    "    except RequestException:\n",
    "        return None\n",
    "\n",
    "def parse_one_page(html):  # locate each field with XPath\n",
    "    global num_2\n",
    "    for job in html.xpath('//div[@class=\"sojob-item-main clearfix\"]'):\n",
    "        num_2 += 1\n",
    "        try:\n",
    "            city = job.xpath('div/p/a/text()')[0].strip()\n",
    "            name = job.xpath('div/h3/a/text()')[0].strip()\n",
    "            job_url = job.xpath('div/h3/a/@href')[0].strip()  # renamed from url to avoid shadowing the module-level url\n",
    "            firm = job.xpath('div//p[@class=\"company-name\"]/a/text()')[0].strip()\n",
    "            salary = job.xpath('div/p/span/text()')[0].strip()\n",
    "            exper = job.xpath('div/p//span[3]/text()')[0].strip()\n",
    "            edu = job.xpath('div/p//span[2]/text()')[0].strip()\n",
    "            pub_time = job.xpath('div//p[@class=\"time-info clearfix\"]/time/text()')[0].strip()  # renamed from time to avoid shadowing the time module\n",
    "            yield {\n",
    "                '城市': city,\n",
    "                '职位': name,\n",
    "                '网址': job_url,\n",
    "                '公司': firm,\n",
    "                '薪酬': salary,\n",
    "                '工作经验要求': exper,\n",
    "                '学历要求': edu,\n",
    "                '发布时间': pub_time,\n",
    "                '页面': num_1,\n",
    "                '条目': num_2\n",
    "            }\n",
    "        except Exception as e:  # typically IndexError when an XPath matches nothing\n",
    "            print(num_1, num_2, e)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Writing the scraped data straight to CSV"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "5 11 'gbk' codec can't encode character '\\u200b' in position 3: illegal multibyte sequence\n"
     ]
    }
   ],
   "source": [
    "def init_csv():  # initialise the CSV file with the crawl metadata and a header row\n",
    "    crawl_time = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime())\n",
    "    with open('result.csv', 'a', newline='', encoding='utf-8-sig') as my_csv:  # explicit UTF-8 avoids the gbk encode error seen in the output above\n",
    "        my_writer = csv.writer(my_csv)\n",
    "        my_writer.writerow(['爬取对象：猎聘网互联网游戏', url, crawl_time])\n",
    "        my_writer.writerow(['城市', '职位', '网址', '公司', '薪酬', '工作经验要求', '学历要求', '发布时间', '页面', '条目'])\n",
    "\n",
    "def write_to_csv(content):  # append one record to the CSV file\n",
    "    with open('result.csv', 'a', newline='', encoding='utf-8-sig') as my_csv:\n",
    "        fieldnames = ['城市', '职位', '网址', '公司', '薪酬', '工作经验要求', '学历要求', '发布时间', '页面', '条目']\n",
    "        my_writer = csv.DictWriter(my_csv, fieldnames=fieldnames)\n",
    "        try:\n",
    "            my_writer.writerow(content)\n",
    "        except Exception as e:\n",
    "            print(num_1, num_2, e)\n",
    "\n",
    "def main(offset):\n",
    "    global num_1, num_2\n",
    "    num_1, num_2 = num_1 + 1, 0\n",
    "    crawl_url = url + '&curPage=' + str(offset)\n",
    "    html = get_one_page(crawl_url)\n",
    "    if html is None:  # skip the page if the request failed\n",
    "        return\n",
    "    for item in parse_one_page(html):\n",
    "        write_to_csv(item)\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    init_csv()\n",
    "    for i in range(1, 100):\n",
    "        main(i)\n",
    "        time.sleep(1)  # throttle requests to avoid an IP ban"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
