{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 6. Data Loading, Storage, and File Formats\n",
    "\n",
    "Input and output typically fall into a few main categories:\n",
    "+ reading text files and other more efficient on-disk formats, \n",
    "+ loading data from databases, \n",
    "+ interacting with network sources like web APIs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Module imports\n",
    "import pathlib, sys\n",
    "sys.path.append(str(pathlib.Path.cwd().parent))\n",
    "import numpy\n",
    "import pandas\n",
    "from dependency import arr_info"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 6.1 Reading and Writing Data in Text Format\n",
    "\n",
    "+ `read_csv(filepath, sep=\",\")` Load delimited data from a file, URL, or file-like object; comma is the default delimiter (`sep` and `delimiter` are interchangeable)\n",
    "+ `read_table()` Load delimited data from a file, URL, or file-like object; use tab ('\\t') as default delimiter\n",
    "+ `read_fwf()` Read data in fixed-width column format (i.e., no delimiters)\n",
    "+ `read_clipboard()` Version of read_table that reads data from the clipboard; useful for converting tables from web pages\n",
    "+ `read_excel()` Read tabular data from an Excel XLS or XLSX file\n",
    "+ `read_hdf()` Read HDF5 files written by pandas\n",
    "+ `read_html()` Read all tables found in the given HTML document\n",
    "+ `read_json()` Read data from a JSON (JavaScript Object Notation) string representation\n",
    "+ `read_msgpack()` Read pandas data encoded using the MessagePack binary format\n",
    "+ `read_pickle()` Read an arbitrary object stored in Python pickle format\n",
    "+ `read_sas()` Read a SAS dataset stored in one of the SAS system’s custom storage formats\n",
    "+ `read_sql()` Read the results of a SQL query (using SQLAlchemy) as a pandas DataFrame\n",
    "+ `read_stata()` Read a dataset from Stata file format\n",
    "+ `read_feather()` Read the Feather binary file format"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Some read_csv/read_table function arguments\n",
    "\n",
    "+ `path` String indicating filesystem location, URL, or file-like object\n",
    "+ `sep` or `delimiter` Character sequence or regular expression to use to split fields in each row\n",
    "+ `header` Row number to use as column names; defaults to 0 (first row), but should be None if there is no header row\n",
    "+ `index_col` Column numbers or names to use as the row index in the result; can be a single name/number or a list of them for a hierarchical index\n",
    "+ `names` List of column names for the result; use together with header=None\n",
    "+ `skiprows` Number of rows at beginning of file to ignore or list of row numbers (starting from 0) to skip.\n",
    "+ `na_values` Sequence of values to replace with NA.\n",
    "+ `comment` Character(s) to split comments off the end of lines.\n",
    "+ `parse_dates` Attempt to parse data to datetime; False by default. If True, will attempt to parse all columns. Otherwise can specify a list of column numbers or names to parse. If an element of the list is a tuple or list, will combine multiple columns together and parse to date (e.g., if date/time split across two columns).\n",
    "+ `keep_date_col` If joining columns to parse date, keep the joined columns; False by default.\n",
    "+ `converters` Dict mapping column numbers or names to functions (e.g., {'foo': f} would apply the function f to all values in the 'foo' column).\n",
    "+ `dayfirst` When parsing potentially ambiguous dates, treat as international format (e.g., 7/6/2012 -> June 7, 2012); False by default.\n",
    "+ `date_parser` Function to use to parse dates.\n",
    "+ `nrows` Number of rows to read from beginning of file.\n",
    "+ `iterator` Return a TextParser object for reading file piecemeal.\n",
    "+ `chunksize` For iteration, size of file chunks.\n",
    "+ `skipfooter` Number of lines to ignore at end of file.\n",
    "+ `verbose` Print various parser output information, like the number of missing values placed in non-numeric columns.\n",
    "+ `encoding` Text encoding for Unicode (e.g., 'utf-8' for UTF-8 encoded text).\n",
    "+ `squeeze` If the parsed data only contains one column, return a Series.\n",
    "+ `thousands` Separator for thousands (e.g., ',' or '.')."
   ]
  },
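  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sketch of a few of these arguments (`names`, `na_values`, `parse_dates`, `converters`) on inline data; the data and column names here are made up for illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Demonstrate several read_csv arguments on an in-memory \"file\"\n",
    "import io\n",
    "\n",
    "raw = \"2012-06-01,1,NA\\n2012-06-02,2,foo\\n\"\n",
    "df_args = pandas.read_csv(\n",
    "    io.StringIO(raw),\n",
    "    header=None,                      # the data has no header row\n",
    "    names=[\"when\", \"value\", \"note\"],  # supply column names\n",
    "    na_values=[\"foo\"],                # treat 'foo' as NA, in addition to the defaults\n",
    "    parse_dates=[\"when\"],             # parse this column to datetime\n",
    "    converters={\"value\": lambda x: int(x) * 10},  # apply a function on read\n",
    ")\n",
    "df_args"
   ]
  },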
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# read_csv() uses a comma as the default delimiter\n",
    "\n",
    "df0_1 = pandas.read_csv(\"./data/ex1.csv\")\n",
    "df0_1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# read_table() defaults to tab; pass sep=\",\" for a comma-delimited file\n",
    "\n",
    "df0_2 = pandas.read_table(\"./data/ex1.csv\", sep=\",\")\n",
    "df0_2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Data without a header row\n",
    "\n",
    "df0_3 = pandas.read_csv(\"./data/ex2.csv\", header=None)    # without header=None the first data row would be used as the header\n",
    "df0_3"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Specify column names\n",
    "\n",
    "df0_4 = pandas.read_csv(\"./data/ex2.csv\", names=[\"A\",\"B\",\"C\",\"D\",\"Message\"])\n",
    "df0_4"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Use one of the columns as the row index\n",
    "\n",
    "df0_4 = pandas.read_csv(\"./data/ex1.csv\", index_col=4)            # by position\n",
    "df0_4 = pandas.read_csv(\"./data/ex1.csv\", index_col=\"message\")    # by name\n",
    "df0_4"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hierarchical (multi-level) index\n",
    "\n",
    "df0_5 = pandas.read_csv(\"./data/csv_mindex.csv\", index_col=[\"key1\", \"key2\"])\n",
    "df0_5"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Use a regular expression for irregular delimiters\n",
    "\n",
    "df0_6 = pandas.read_csv(\"./data/ex3.csv\", sep=r\"\\s+\")\n",
    "df0_6 = pandas.read_table(\"./data/ex3.csv\", sep=r\"\\s+\")\n",
    "df0_6"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Skip selected rows while reading\n",
    "\n",
    "df0_7 = pandas.read_csv(\"./data/ex4.csv\", skiprows=[0, 2, 3])\n",
    "df0_7"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Handling missing data\n",
    "\n",
    "## Read the file\n",
    "df0_8 = pandas.read_csv(\"./data/ex5.csv\")\n",
    "arr_info( df0_8 )\n",
    "arr_info( pandas.isnull(df0_8) )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Specify sentinel values to treat as NA\n",
    "df0_9 = pandas.read_csv(\"./data/ex5.csv\", na_values=[\"NULL\", \"foo\"])  # values to be treated as missing, in addition to the defaults\n",
    "arr_info( df0_9 )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Different NA sentinels per column\n",
    "sentinels = {\"message\": [\"foo\", \"NA\"], \"something\": [\"two\"]}\n",
    "df0_10 = pandas.read_csv(\"./data/ex5.csv\", na_values=sentinels)\n",
    "arr_info( df0_10 )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 6.1.1 Reading Text Files in Pieces"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Change display settings\n",
    "\n",
    "pandas.options.display.max_rows = 10    # maximum number of rows to display\n",
    "df1_1 = pandas.read_csv(\"./data/ex6.csv\")\n",
    "df1_1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Read only the first few rows\n",
    "\n",
    "df1_2 = pandas.read_csv(\"./data/ex6.csv\", nrows=10)\n",
    "df1_2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Read the file in chunks\n",
    "\n",
    "chunker1_3 = pandas.read_csv(\"./data/ex6.csv\", chunksize=100)\n",
    "print(chunker1_3)\n",
    "\n",
    "ser = pandas.Series([ ], dtype=float)    # best to specify a dtype for an empty Series\n",
    "for piece in chunker1_3:\n",
    "    # print(piece)\n",
    "    ser = ser.add(piece[\"key\"].value_counts(), fill_value=0)\n",
    "ser = ser.sort_values()\n",
    "arr_info(ser)"
   ]
  },
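  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With `iterator=True`, the reader returned by `read_csv` also has a `get_chunk` method for pulling pieces of an arbitrary size on demand; inline data is used here for illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# get_chunk() reads an arbitrarily sized piece on demand\n",
    "import io\n",
    "\n",
    "raw = \"key\\n\" + \"a\\n\" * 5 + \"b\\n\" * 5\n",
    "reader = pandas.read_csv(io.StringIO(raw), iterator=True)\n",
    "piece = reader.get_chunk(4)   # first 4 rows\n",
    "piece"
   ]
  },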
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 6.1.2 Writing Data to Text Format"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Write data out with to_csv()\n",
    "\n",
    "df0_8.to_csv(sys.stdout, sep=\"|\")  # a different delimiter can be specified\n",
    "df0_8.to_csv(sys.stdout, sep=\" \")  # writing to the console makes the result easy to inspect\n",
    "df0_8.to_csv(\"./data/out.csv\", sep=\"\\t\", na_rep=\"NULL\") # representation for missing values; empty string by default"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Suppress the index and the column labels\n",
    "\n",
    "df0_1.to_csv(sys.stdout, sep=\"\\t\", index=False, header=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Write only selected columns\n",
    "\n",
    "df0_1.to_csv(sys.stdout, sep=\"\\t\", columns=[\"b\", \"c\", \"a\"])     # columns are written in the given order"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Series\n",
    "\n",
    "date = pandas.date_range(\"2000-01-01\", periods=7)\n",
    "arr_info(date)\n",
    "ser0_1 = pandas.Series(numpy.arange(7), index=date)\n",
    "ser0_1.to_csv(\"./data/tseries.csv\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 6.1.3 Working with Delimited Formats"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# "
   ]
  },
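  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For files with messier delimiter conventions, Python's built-in `csv` module can parse rows directly; a minimal sketch on inline data (the sample values are made up for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Parse a delimited file manually with the csv module\n",
    "import csv\n",
    "import io\n",
    "\n",
    "raw = '\"a\",\"b\",\"c\"\\n\"1\",\"2\",\"3\"\\n\"4\",\"5\",\"6\"\\n'\n",
    "lines = list(csv.reader(io.StringIO(raw)))    # each row becomes a list of strings\n",
    "header, values = lines[0], lines[1:]\n",
    "data_dict = {h: v for h, v in zip(header, zip(*values))}  # columns keyed by header\n",
    "data_dict"
   ]
  },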
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 6.1.4 JSON Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# "
   ]
  },
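  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sketch of moving between JSON and pandas with the standard `json` module and `pandas.read_json`; the JSON string here is made up for illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# JSON string <-> Python objects <-> DataFrame\n",
    "import io\n",
    "import json\n",
    "\n",
    "obj = '[{\"a\": 1, \"b\": 2}, {\"a\": 3, \"b\": 4}]'\n",
    "result = json.loads(obj)                      # JSON string -> Python objects\n",
    "print(json.dumps(result))                     # Python objects -> JSON string\n",
    "df_json = pandas.read_json(io.StringIO(obj))  # JSON -> DataFrame directly\n",
    "df_json"
   ]
  },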
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 6.1.5 XML and HTML: Web Scraping"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# "
   ]
  }
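,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`pandas.read_html` requires an HTML parser such as lxml to be installed. As a dependency-free sketch, XML can be parsed with the standard library's `xml.etree` and turned into a DataFrame by hand; the inline XML document here is made up for illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Parse XML into a DataFrame using only the standard library\n",
    "from xml.etree import ElementTree\n",
    "\n",
    "xml_doc = \"\"\"<rows>\n",
    "  <row><name>a</name><value>1</value></row>\n",
    "  <row><name>b</name><value>2</value></row>\n",
    "</rows>\"\"\"\n",
    "root = ElementTree.fromstring(xml_doc)\n",
    "records = [{child.tag: child.text for child in row} for row in root]  # one dict per <row>\n",
    "df_xml = pandas.DataFrame(records)\n",
    "df_xml"
   ]
  }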
 ],
 "metadata": {
  "interpreter": {
   "hash": "2df30c634058628fc5df5036be3dee25b811a252316c0aa1ff7f50eb8aecb5be"
  },
  "kernelspec": {
   "display_name": "Python 3.9.6 64-bit ('venv': venv)",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.9"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
