{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "b149af9c",
   "metadata": {},
   "source": [
    "# 知识图谱实战（第21期）第2课书面作业\n",
    "学号：115688\n",
    "\n",
    "**作业内容：**  \n",
    "1. 用python（或其它您熟悉的语言）编写爬虫程序，抓取新浪天气（http://weather.sina.com.cn ）中，您所在的城市的未来10天的天气预报，包括温度，风向等  \n",
    "2. 部署Neo4j，运行一些Cypher语句验证部署正常，抓图实验过程。\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "269b1acd",
   "metadata": {},
   "source": [
    "## 第1题\n",
    "用python（或其它您熟悉的语言）编写爬虫程序，抓取新浪天气（http://weather.sina.com.cn ）中，您所在的城市的未来10天的天气预报，包括温度，风向等。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dbf3ce9a",
   "metadata": {},
   "source": [
    "### 抓取新浪天气\n",
    "* 用Scrapy实现新浪天气热抓取。  \n",
    "* 但是新浪天气目前已经不更新了，只有一天的。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5634c045",
   "metadata": {},
   "source": [
    "items.py："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "25438abf",
   "metadata": {},
   "outputs": [],
   "source": [
    "import scrapy\n",
    "\n",
    "class WeatherItem(scrapy.Item):\n",
    "    city = scrapy.Field()\n",
    "    date = scrapy.Field()\n",
    "    dayDesc = scrapy.Field()\n",
    "    dayTemp = scrapy.Field()\n",
    "    dayWind = scrapy.Field()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1faf47ff",
   "metadata": {},
   "source": [
    "pipelines.py："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3522679f",
   "metadata": {},
   "outputs": [],
   "source": [
    "class WeatherPipeline(object):\n",
    "\tdef __init__(self):\n",
    "\t\tpass\n",
    "\n",
    "\tdef process_item(self, item, spider):\n",
    "\t\twith open('weather.txt', 'w+', encoding='utf8') as file:\n",
    "\t\t\t# file = open('weather.txt', 'w+', encoding='utf8')\n",
    "\t\t\tcity = item['city']\n",
    "\t\t\tfile.write('city:' + str(city) + '\\n\\n')\n",
    "\n",
    "\t\t\tdate = item['date']\n",
    "\n",
    "\t\t\tdesc = item['dayDesc']\n",
    "\t\t\tdayDesc = desc[1::2]\n",
    "\t\t\tnightDesc = desc[0::2]\n",
    "\n",
    "\t\t\tdayTemp = item['dayTemp']\n",
    "\t\t\tdayWind = item['dayWind']\n",
    "\n",
    "\t\t\tweaitem_t = zip(date, dayDesc, nightDesc, dayTemp, dayWind)\n",
    "\t\t\tweaitem = list(weaitem_t)\n",
    "\n",
    "\t\t\tfor i in range(len(weaitem)):\n",
    "\t\t\t\titem = weaitem[i]\n",
    "\t\t\t\td = item[0]\n",
    "\t\t\t\tdd = item[1]\n",
    "\t\t\t\tnd = item[2]\n",
    "\t\t\t\tta = item[3].split('/')\n",
    "\t\t\t\tdt = ta[0]\n",
    "\t\t\t\tnt = ta[1]\n",
    "\t\t\t\twd = item[4]\n",
    "\t\t\t\ttxt = 'date:{0}\\t\\tday:{1}({2})\\t\\tnight:{3}({4})\\t\\twind:{5}\\n\\n'.format(d, dd, dt, nd, nt, wd)\n",
    "\t\t\t\tfile.write(txt)\n",
    "\n",
    "\t\treturn item"
   ]
  },
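  {
   "cell_type": "markdown",
   "id": "a1b2c3d4",
   "metadata": {},
   "source": [
    "The pipeline's slicing assumes the scraped description list alternates night/day entries, with the night icon first. A minimal sketch of that assumption, using made-up icon titles rather than real scraped data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2c3d4e5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative stand-in for item['dayDesc'], not real scraped data.\n",
    "desc = ['night-cloudy', 'day-sunny', 'night-clear', 'day-rain']\n",
    "dayDesc = desc[1::2]    # odd indices -> daytime descriptions\n",
    "nightDesc = desc[0::2]  # even indices -> nighttime descriptions\n",
    "print(dayDesc)    # ['day-sunny', 'day-rain']\n",
    "print(nightDesc)  # ['night-cloudy', 'night-clear']"
   ]
  },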
  {
   "cell_type": "markdown",
   "id": "e71172db",
   "metadata": {},
   "source": [
    "爬虫部分源代码weather.py："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1e62cc36",
   "metadata": {},
   "outputs": [],
   "source": [
    "import scrapy\n",
    "from bs4 import BeautifulSoup\n",
    "from weather.items import WeatherItem\n",
    "\n",
    "\n",
    "class WeatherSpider(scrapy.Spider):\n",
    "    name = \"myweather\"\n",
    "    allowed_domains = [\"sina.com.cn\"]\n",
    "    start_urls = ['http://weather.sina.com.cn']\n",
    "\n",
    "    def parse(self, response):\n",
    "        html_doc = response.body\n",
    "        #html_doc = html_doc.decode('utf-8')\n",
    "        soup = BeautifulSoup(html_doc, features=\"lxml\")\n",
    "        itemTemp = {}\n",
    "        itemTemp['city'] = soup.find(id='slider_ct_name')\n",
    "        tenDay = soup.find(id='blk_fc_c0')\n",
    "        itemTemp['date'] = tenDay.findAll(\"p\", {\"class\": 'wt_fc_c0_i_date'})\n",
    "        itemTemp['dayDesc'] = tenDay.findAll(\"img\", {\"class\": 'icons0_wt'})\n",
    "        itemTemp['dayTemp'] = tenDay.findAll('p', {\"class\": 'wt_fc_c0_i_temp'})\n",
    "        itemTemp['dayWind'] = tenDay.findAll('p', {\"class\": 'wt_fc_c0_i_tip'})\n",
    "        item = WeatherItem()\n",
    "        for att in itemTemp:\n",
    "            item[att] = []\n",
    "            if att == 'city':\n",
    "                item[att] = itemTemp.get(att).text\n",
    "                continue\n",
    "            for obj in itemTemp.get(att):\n",
    "                if att == 'dayDesc':\n",
    "                    item[att].append(obj['title'])\n",
    "                else:\n",
    "                    item[att].append(obj.text)\n",
    "        return item"
   ]
  },
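  {
   "cell_type": "markdown",
   "id": "c3d4e5f6",
   "metadata": {},
   "source": [
    "The selectors above are tied to Sina's markup (the element id and class names come from the page). The same BeautifulSoup pattern can be sanity-checked against a tiny static snippet; the HTML below is a made-up stand-in, not real page source:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e5f6a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "from bs4 import BeautifulSoup\n",
    "\n",
    "# Tiny stand-in for Sina's forecast block; id/classes copied from the spider.\n",
    "html = '''\n",
    "<div id=\"blk_fc_c0\">\n",
    "  <p class=\"wt_fc_c0_i_date\">05-19</p>\n",
    "  <p class=\"wt_fc_c0_i_temp\">23/19</p>\n",
    "  <p class=\"wt_fc_c0_i_tip\">SE wind 3-4</p>\n",
    "  <img class=\"icons0_wt\" title=\"cloudy\"/>\n",
    "</div>\n",
    "'''\n",
    "soup = BeautifulSoup(html, 'html.parser')  # html.parser avoids the lxml dependency\n",
    "block = soup.find(id='blk_fc_c0')\n",
    "dates = [p.text for p in block.findAll('p', {'class': 'wt_fc_c0_i_date'})]\n",
    "descs = [img['title'] for img in block.findAll('img', {'class': 'icons0_wt'})]\n",
    "print(dates, descs)  # ['05-19'] ['cloudy']"
   ]
  },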
  {
   "cell_type": "markdown",
   "id": "4753edad",
   "metadata": {},
   "source": [
    "运行截图：  \n",
    "![kgclass02-11](https://gitee.com/dotzhen/cloud-notes/raw/master/kgclass02-11.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "432744a2",
   "metadata": {},
   "source": [
    "### 1.2 爬取百度天气\n",
    "* 又尝试了一下爬取百度天气，百度天气是有更新的，有完整的15天长期预报。  \n",
    "* 这次直接使用requests组件来做，不采用scrapy。  \n",
    "* 用chrome分析了一下百度天气的网页，百度天气有一定的反爬虫设置，里面的长期预报信息藏在javascript脚本中。这里用re正则表达式将其提取出来。  \n",
    "\n",
    "见源代码："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "5de6b305",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2022-05-19: 23~19摄氏度\n",
      "2022-05-20: 24~19摄氏度\n",
      "2022-05-21: 25~19摄氏度\n",
      "2022-05-22: 27~18摄氏度\n",
      "2022-05-23: 26~19摄氏度\n",
      "2022-05-24: 27~19摄氏度\n",
      "2022-05-25: 25~18摄氏度\n",
      "2022-05-26: 28~19摄氏度\n",
      "2022-05-27: 30~20摄氏度\n",
      "2022-05-28: 27~20摄氏度\n",
      "2022-05-29: 28~19摄氏度\n",
      "2022-05-30: 28~19摄氏度\n",
      "2022-05-31: 27~19摄氏度\n",
      "2022-06-01: 29~19摄氏度\n",
      "2022-06-02: 30~21摄氏度\n",
      "2022-06-03: 22~19摄氏度\n",
      "2022-06-04: 24~19摄氏度\n",
      "2022-06-05: 26~19摄氏度\n",
      "2022-06-06: 27~20摄氏度\n",
      "2022-06-07: 26~22摄氏度\n",
      "2022-06-08: 26~22摄氏度\n",
      "2022-06-09: 25~22摄氏度\n",
      "2022-06-10: 27~22摄氏度\n",
      "2022-06-11: 28~23摄氏度\n",
      "2022-06-12: 30~25摄氏度\n",
      "2022-06-13: 32~25摄氏度\n",
      "2022-06-14: 32~25摄氏度\n",
      "2022-06-15: 30~25摄氏度\n",
      "2022-06-16: 33~26摄氏度\n",
      "2022-06-17: 31~23摄氏度\n",
      "2022-06-18: 27~22摄氏度\n",
      "2022-06-19: 25~23摄氏度\n",
      "2022-06-20: 26~23摄氏度\n",
      "2022-06-21: 27~24摄氏度\n",
      "2022-06-22: 28~24摄氏度\n",
      "2022-06-23: 27~25摄氏度\n",
      "2022-06-24: 27~25摄氏度\n",
      "2022-06-25: 30~26摄氏度\n",
      "2022-06-26: 31~26摄氏度\n",
      "2022-06-27: 31~26摄氏度\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "import json\n",
    "import re\n",
    "from urllib.parse import quote\n",
    "\n",
    "city = '上海'\n",
    "city = quote(city+'天气')\n",
    "# print(city)\n",
    "url=f\"https://weathernew.pae.baidu.com/weathernew/pc?query={city}&srcid=4982\"\n",
    "\n",
    "headers = {\n",
    "    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.67 Safari/537.36'\n",
    "}\n",
    "\n",
    "r=requests.get(url,headers=headers)\n",
    "r.encoding = 'utf8'\n",
    "\n",
    "find = re.search('data\\[\"longDayForecast\"\\]=.*;', r.text)\n",
    "longForecast = find.group()[24:-1]\n",
    "\n",
    "data = json.loads(longForecast)\n",
    "\n",
    "for x in data['info']:\n",
    "    print(f\"{x['date']}: {x['temperature_day']}~{x['temperature_night']}摄氏度\")"
   ]
  },
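  {
   "cell_type": "markdown",
   "id": "e5f6a7b8",
   "metadata": {},
   "source": [
    "The regex step is the fragile part of this scrape. It can be exercised on a synthetic page fragment (the string below is made up, not real Baidu page source); a capture group picks out the JSON literal directly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f6a7b8c9",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import re\n",
    "\n",
    "# Synthetic stand-in for the inline script on the Baidu page.\n",
    "page = 'data[\"longDayForecast\"]={\"info\":[{\"date\":\"2022-05-19\",\"temperature_day\":\"23\",\"temperature_night\":\"19\"}]};'\n",
    "\n",
    "# Capture everything between '=' and the trailing ';' as a JSON literal.\n",
    "m = re.search(r'data\\[\"longDayForecast\"\\]=(.*);', page)\n",
    "forecast = json.loads(m.group(1))\n",
    "print(forecast['info'][0]['date'])  # 2022-05-19"
   ]
  },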
  {
   "cell_type": "markdown",
   "id": "cf67c76c",
   "metadata": {},
   "source": [
    "上面就是运行于jupyter notebook效果，可以抓取到30天的最高温与最低温。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "58fb8ee4",
   "metadata": {},
   "source": [
    "## 第2题\n",
    "部署Neo4j，运行一些Cypher语句验证部署正常，抓图实验过程。\n",
    "\n",
    "* 安装Neo4j当前最新版本，Neo desktop 1.14.5版本。 "
   ]
  },
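  {
   "cell_type": "markdown",
   "id": "a7b8c9d0",
   "metadata": {},
   "source": [
    "Besides the Browser screenshots that follow, a deployment can also be verified from Python with the official `neo4j` driver. This is only a sketch: the bolt URL, user, and password below are assumptions for a default local install, and the query assumes the built-in Movies sample database is loaded."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8c9d0e1",
   "metadata": {},
   "outputs": [],
   "source": [
    "from neo4j import GraphDatabase\n",
    "\n",
    "# Assumed connection details for a local Neo4j Desktop database.\n",
    "driver = GraphDatabase.driver('bolt://localhost:7687', auth=('neo4j', 'password'))\n",
    "\n",
    "with driver.session() as session:\n",
    "    # Same sanity check as in the Browser: actors of \"The Da Vinci Code\".\n",
    "    result = session.run(\n",
    "        'MATCH (p:Person)-[:ACTED_IN]->(m:Movie {title: $title}) RETURN p.name AS name',\n",
    "        title='The Da Vinci Code',\n",
    "    )\n",
    "    for record in result:\n",
    "        print(record['name'])\n",
    "\n",
    "driver.close()"
   ]
  },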
  {
   "cell_type": "markdown",
   "id": "c73ec809",
   "metadata": {},
   "source": [
    "![kgclass02-2](https://gitee.com/dotzhen/cloud-notes/raw/master/kgclass02-2.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "36c58f83",
   "metadata": {},
   "source": [
    "加载系统缺省的电影数据库：  \n",
    "![kgclass02-3](https://gitee.com/dotzhen/cloud-notes/raw/master/kgclass02-3.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c7be4416",
   "metadata": {},
   "source": [
    "执行命令查询出演电影“达芬奇密码”的演员：  \n",
    "![kgclass02-3](https://gitee.com/dotzhen/cloud-notes/raw/master/kgclass02-4.png)"
   ]
  }
 ],
 "metadata": {
  "interpreter": {
   "hash": "9f35b62a1d17a2b36b9c54ddf6a1c189fdf51f8ec8cea898ba9fed29bc45b6fd"
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
