{
 "cells": [
  {
   "source": [
    "### Acquiring data\n",
    "1. Crawling with requests (the data then needs cleaning)\n",
    "2. Downloading CSV/TXT files from a website\n",
    "3. Fetching data through an API library (some are paid)\n",
    ">```\n",
    "import pandas_datareader.data as web\n",
    "f = web.DataReader('AXP', 'stooq')\n",
    "f.head(5)\n",
    "```\n"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "### How a crawler works\n",
    "Data flow: crawler <--> server, in four steps: 'get - parse - extract - store'\n",
    "\n",
    "1. Get: send a request to the server for the URL we supply; the server returns data.  \n",
    "2. Parse: convert the returned data into a format we can read.  \n",
    "3. Extract: pull out the pieces of data we actually need.  \n",
    "4. Store: save the useful data for later use and analysis.  \n",
    ">Note  \n",
    "&nbsp;&nbsp;In crawling, knowing what kind of object your data is matters enormously:  \n",
    "only once you know the object's type do you know which attributes and methods you can call on it.  \n",
    ">How a browser works  \n",
    "&nbsp;&nbsp;Data flow: client 'URL' -> browser 'request' -> server 'response' [returns HTML] -> browser 'parses the data [renders it]' -> client 'extracts and saves'"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
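  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch of the four steps above (get - parse - extract - store).\n",
    "# The get step needs a network, so parse/extract/store are demonstrated on a local\n",
    "# sample string, and the regex title-grab is only an illustration -- real crawlers\n",
    "# use an HTML parser for this.\n",
    "import re\n",
    "\n",
    "def fetch(url):\n",
    "    import requests\n",
    "    r = requests.get(url, timeout=30)   # step 1, get: ask the server for the page\n",
    "    r.raise_for_status()\n",
    "    return r.text\n",
    "\n",
    "def parse_title(html):\n",
    "    m = re.search(r'<title>(.*?)</title>', html, re.S)   # step 2, parse\n",
    "    return m.group(1) if m else None\n",
    "\n",
    "sample = '<html><head><title>demo page</title></head><body></body></html>'\n",
    "title = parse_title(sample)                              # step 3, extract\n",
    "with open('title.txt', 'w', encoding='utf-8') as f:      # step 4, store\n",
    "    f.write(title)\n",
    "print(title)"
   ]
  },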
  {
   "source": [
    "### The requests third-party library [getting data]: download page source, text, images, even audio\n",
    "## The [official docs](https://docs.python-requests.org/zh_CN/latest/) are short and very readable!"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests     # import the requests library\n",
    "res = requests.get('https://localprod.pandateacher.com/python-manuscript/crawler-html/sanguo.md')\n",
    "# requests.get sends a request to the server; the argument is the URL where the data lives.\n",
    "# The server responds, and the response is assigned to the variable res.\n",
    "print(type(res))  # a class defined in the library: requests.models.Response\n",
    "# Getting data boils down to sending a request to a server via a URL; the server wraps\n",
    "# the relevant content in a Response object and sends it back to us"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A general-purpose fetch template\n",
    "import requests\n",
    "\n",
    "def getHTMLText(url):\n",
    "    try:\n",
    "        r = requests.get(url, timeout=30)\n",
    "        r.raise_for_status()  # raises HTTPError for 4xx/5xx status codes\n",
    "        r.encoding = r.apparent_encoding  # guess the encoding from the content (can be slow!)\n",
    "        return r.text\n",
    "    except requests.RequestException:  # catch only requests errors, not everything\n",
    "        return 'request failed'\n",
    "\n",
    "url = 'https://www.baidu.com/'\n",
    "print(getHTMLText(url))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Checking the request status [response.status_code] (200: OK, 403: Forbidden)\n",
    "import requests\n",
    "res = requests.get('https://res.pandateacher.com/2018-12-18-10-43-07.png')\n",
    "print(res.status_code)  # each code has its own meaning -- look it up when needed;\n",
    "# even a 200 may not mean you got the data you wanted\n",
    "# res is an instance of the Response class"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Downloading images, audio, video [response.content] -- returns the body as bytes\n",
    "import requests\n",
    "res = requests.get('https://res.pandateacher.com/2018-12-18-10-43-07.png')\n",
    "# send the request and keep the returned result in res\n",
    "pic = res.content\n",
    "# res.content gives the Response body as raw binary data\n",
    "photo = open('crawler0519.jpg', 'wb')\n",
    "# creates the file crawler0519.jpg; with no path given, it lands in the current working directory\n",
    "# image content must be written in binary mode ('wb'), as covered with the open() function\n",
    "photo.write(pic)\n",
    "# write the binary content of pic\n",
    "photo.close()\n",
    "# close the file"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Downloading text / page source [response.text]\n",
    "import requests\n",
    "\n",
    "res = requests.get('https://localprod.pandateacher.com/python-manuscript/crawler-html/sanguo.md')\n",
    "# we get back a Response object, named res here\n",
    "novel = res.text\n",
    "# res.text gives the Response body decoded as a string\n",
    "with open('《三国演义》.txt', 'w', encoding='utf-8') as file:\n",
    "    file.write(novel)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Fixing garbled text [response.encoding]\n",
    "res.encoding = 'utf-8'   # set the Response object's encoding (res comes from the cell above)\n",
    "# the target data has a fixed encoding of its own, and res.encoding must match the source\n",
    "# requests makes its own guess at the encoding, but that guess is not always right!\n",
    "# set res.encoding before reading res.text, since .text is decoded on access"
   ]
  },
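  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Why res.encoding matters, shown offline: res.text is simply res.content (raw\n",
    "# bytes) decoded with res.encoding, so a wrong value turns the text into mojibake.\n",
    "# The bytes below stand in for a server response; no network involved.\n",
    "raw = '三国演义'.encode('utf-8')   # pretend this is res.content\n",
    "print(raw.decode('utf-8'))        # the right encoding recovers the text\n",
    "print(raw.decode('latin-1'))      # a wrong guess prints unreadable characters"
   ]
  },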
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The Robots protocol is a widely accepted code of conduct for web crawlers. Its full name\n",
    "# is the Robots Exclusion Protocol, and it tells crawlers which pages may be fetched and which may not.\n",
    "# To view a site's robots rules, append /robots.txt to its domain name.\n",
    "# How do we tell from robots.txt which content we are allowed to crawl?"
   ]
  },
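  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One way to answer the question above: the standard library's urllib.robotparser\n",
    "# reads robots.txt rules and reports whether a given user agent may fetch a URL.\n",
    "# The rules here are a made-up in-memory sample, so no network is needed; for a real\n",
    "# site you would call rp.set_url('https://example.com/robots.txt') and then rp.read().\n",
    "from urllib.robotparser import RobotFileParser\n",
    "\n",
    "rp = RobotFileParser()\n",
    "rp.parse([\n",
    "    'User-agent: *',\n",
    "    'Disallow: /private/',\n",
    "])\n",
    "print(rp.can_fetch('*', 'https://example.com/index.html'))   # True\n",
    "print(rp.can_fetch('*', 'https://example.com/private/x'))    # False"
   ]
  },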
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Exercise 0.1\n",
    "import requests\n",
    "res = requests.get('https://localprod.pandateacher.com/python-manuscript/crawler-html/exercise/HTTP%E5%93%8D%E5%BA%94%E7%8A%B6%E6%80%81%E7%A0%81.md')\n",
    "novel = res.text\n",
    "with open('HTTP statuscode.txt', 'w', encoding='utf-8') as file1:\n",
    "    file1.write(novel)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Exercise 0.1\n",
    "import requests\n",
    "res = requests.get('https://res.pandateacher.com/2018-12-18-10-43-07.png')\n",
    "pic = res.content\n",
    "with open('crawler0520.png','wb') as pic_cont:\n",
    "    pic_cont.write(pic)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# HTML basics [the markup language of web pages]\n",
    "# read >> modify >> write  [HTML (HyperText Markup Language)]\n",
    "# reading an HTML document >> understanding page structure >> parsing and extracting data\n",
    "# modifying HTML code >> writing HTML from scratch [professional-level skill]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "An element = `<tag>content</tag>`\n",
    "```html\n",
    "<html>                       <!-- <**> : opening tag -->\n",
    "    <head>\n",
    "        <meta charset=\"utf-8\">               <!-- declares the page's encoding -->\n",
    "        <title>I am the page's name</title>   <!-- shown in the browser tab -->\n",
    "    </head>\n",
    "    <body>\n",
    "        <h1>I am a level-1 heading</h1>\n",
    "        <h2>I am a level-2 heading</h2>\n",
    "        <h3>I am a level-3 heading</h3>       <!-- the page's visible content -->\n",
    "        <p>I am a paragraph. The level-1 heading, the level-2 heading and I\n",
    "        together make up the body.</p>\n",
    "    </body>\n",
    "</html>                      <!-- </**> : closing tag -->\n",
    "```\n",
    "The content of `<head>` is not rendered directly in the page body; the content of `<body>` is what the page displays."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Attributes: `<opening-tag name=\"value\">`\n",
    "```html\n",
    "<body>\n",
    "  <h1 style=\"color:#20b2aa;\">这个书苑不太冷</h1>\n",
    "  <!-- (1) style sets the element's formatting -->\n",
    "  <h3>Books recommended by Wu Feng:</h3>\n",
    "  <a href=\"https://spidermen.cn\" target=\"_blank\">click here to take a look</a>\n",
    "  <!-- (2) href adds a link -->\n",
    "  <br>\n",
    "  <h2>《奇点遗民》</h2>\n",
    "  <div class=\"book\">\n",
    "  <!-- (3) class marks a group of elements, which inherit every style defined for .book in <head> -->\n",
    "  <!-- (4) id is similar but identifies one unique element, which inherits the styles of '#book' in <head> -->\n",
    "    <h2>《奇点遗民》</h2>\n",
    "    <p>This book collects 22 of Ken Liu's finest science-fiction stories. It weaves together the genre's most appealing elements: digital life, recorded memory, artificial intelligence, alien visitors... What sets Ken Liu apart is that he writes not sci-fi adventure or heroic fantasy, but the lives and shifting emotions of ordinary people in the data age. Through this book we see not only the future but also the present.\n",
    "    </p>\n",
    "  </div>\n",
    "</body>\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```html\n",
    "<head>\n",
    "    <meta charset=\"utf-8\">\n",
    "    <title>这个书苑不太冷3.0</title>\n",
    "    <style>\n",
    "    .book {\n",
    "    /* the style shared by every element with class=\"book\" */\n",
    "    float: left;\n",
    "    margin: 5px;\n",
    "    padding: 15px;\n",
    "    width: 350px;\n",
    "    height: 240px;\n",
    "    border: 3px solid\n",
    "    }\n",
    "    </style>\n",
    "</head>\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A fuller sample to read through:\n",
    "```html\n",
    "<div id=\"article\">\n",
    "    <div id=\"nav\">\n",
    "        <a href=\"#type1\" class=\"catlog\">Science fiction</a><br>\n",
    "        <a href=\"#type2\" class=\"catlog\">Humanities</a><br>\n",
    "        <a href=\"#type3\" class=\"catlog\">Technical references</a><br>\n",
    "    </div>\n",
    "    <div id=\"main\">\n",
    "        <div class=\"books\">\n",
    "            <h2><a name=\"type1\">Science fiction</a></h2>  <!-- the name attribute lets the nav links above jump here -->\n",
    "            <a href=\"https://book.douban.com/subject/27077140/\" class=\"title\">《奇点遗民》</a>\n",
    "            <p class=\"info\">This book collects 22 of Ken Liu's finest science-fiction stories. It weaves together the genre's most appealing elements: digital life, recorded memory, artificial intelligence, alien visitors... What sets Ken Liu apart is that he writes not sci-fi adventure or heroic fantasy, but the lives and shifting emotions of ordinary people in the data age. Through this book we see not only the future but also the present.</p>\n",
    "            <img class=\"img\" src=\"./spider-men5.0_files/s29492583.jpg\">\n",
    "            <br/>\n",
    "            <br/>\n",
    "            <hr size=\"1\">\n",
    "        </div>\n",
    "\n",
    "        <div class=\"books\">\n",
    "            <h2><a name=\"type2\">Humanities</a></h2>\n",
    "            <a href=\"https://book.douban.com/subject/26943161/\" class=\"title\">《未来简史》</a>\n",
    "            <p class=\"info\">In the future, humanity will face three big questions: organisms are themselves algorithms, and life is a continuous process of data processing; consciousness will be decoupled from intelligence; and an outside world armed with accumulated big data may come to know us better than we know ourselves. How we view these questions, and how we respond to them, will directly shape humanity's future.</p>\n",
    "            <img class=\"img\" src=\"./spider-men5.0_files/s29287103.jpg\">\n",
    "            <br/>\n",
    "            <br/>\n",
    "            <hr size=\"1\">\n",
    "        </div>\n",
    "\n",
    "        <div class=\"books\">\n",
    "            <h2><a name=\"type3\">Technical references</a></h2>\n",
    "            <a href=\"https://book.douban.com/subject/25779298/\" class=\"title\">《利用Python进行数据分析》</a>\n",
    "            <p class=\"info\">Packed with practical case studies, this book teaches you to solve all kinds of data-analysis problems efficiently with Python libraries such as NumPy, pandas, matplotlib and IPython. Since the author, Wes McKinney, is the principal author of pandas, the book also serves as a hands-on guide to scientific computing for data-intensive applications in Python. It suits analysts who are new to Python as well as Python programmers who are new to scientific computing.</p>\n",
    "            <img class=\"img\" src=\"./spider-men5.0_files/s27275372.jpg\">\n",
    "            <br/>\n",
    "            <br/>\n",
    "            <hr size=\"1\">\n",
    "        </div>\n",
    "    </div>\n",
    "</div>\n",
    "```\n",
    "Editing the HTML in your browser is just for fun: the source file still lives on the server, so refreshing the page resets everything."
   ]
  },
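  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Once the page structure is understood, data can be pulled out of it. A minimal\n",
    "# sketch with the standard library's html.parser, collecting the text of\n",
    "# <a class=\"title\"> links like those in the sample page above. A dedicated parser\n",
    "# such as BeautifulSoup is the usual tool; the HTML snippet here is a made-up stand-in.\n",
    "from html.parser import HTMLParser\n",
    "\n",
    "class TitleCollector(HTMLParser):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.in_title = False\n",
    "        self.titles = []\n",
    "    def handle_starttag(self, tag, attrs):\n",
    "        if tag == 'a' and dict(attrs).get('class') == 'title':\n",
    "            self.in_title = True   # we are now inside an <a class=\"title\"> element\n",
    "    def handle_data(self, data):\n",
    "        if self.in_title:\n",
    "            self.titles.append(data)\n",
    "    def handle_endtag(self, tag):\n",
    "        if tag == 'a':\n",
    "            self.in_title = False\n",
    "\n",
    "p = TitleCollector()\n",
    "p.feed('<div><a href=\"#\" class=\"title\">《奇点遗民》</a><a href=\"#\" class=\"title\">《未来简史》</a></div>')\n",
    "print(p.titles)   # ['《奇点遗民》', '《未来简史》']"
   ]
  },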
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Locate the data in the page source >> get the data\n",
    "import requests\n",
    "res = requests.get('https://localprod.pandateacher.com/python-manuscript/crawler-html/spider-men5.0.html')\n",
    "res.encoding = 'utf-8'\n",
    "code = res.text\n",
    "with open('这个杀手不太冷.txt', 'w', encoding='utf-8') as file:\n",
    "    file.write(code)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# How a browser works:\n",
    "# the browser receives an HTML document from the server, parses it, and renders the result --\n",
    "# it can turn an HTML document on your own machine into a nicely laid-out page."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "name": "python374jvsc74a57bd02a7e95a32014fc1ccf24626d45a98c6e7b4373277259c22f47a91d487fc3e8a5",
   "display_name": "Python 3.7.4 64-bit ('base': conda)"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}