{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data Preprocessing: a Housing Price Dataset\n",
    "### Importing the Dataset\n",
    "Before preprocessing the housing price dataset, import the libraries needed for the analysis:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import seaborn as sns\n",
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "\n",
    "from sklearn.impute import SimpleImputer\n",
    "from sklearn.cluster import KMeans\n",
    "import sklearn.preprocessing as skp\n",
    "from scipy import stats\n",
    "\n",
    "import warnings\n",
    "warnings.filterwarnings('ignore')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Define the file path of the housing price dataset and load it:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset_path = './house_train.csv'\n",
    "data = pd.read_csv(dataset_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check that the dataset was loaded successfully:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "        id  distirct  built_date  green_rate  area   floor  oriented  traffic  \\\n",
      "0        1       164    2003/1/4        45.0    90  Medium         0       74   \n",
      "1        2       111   2000/6/18        51.0    72     Low         0       61   \n",
      "2        3        28   1993/9/27        58.0    92  Medium         1       94   \n",
      "3        4        90    1993/6/6        45.0    83  Medium         1       81   \n",
      "4        5        63   1997/7/15        62.0    77  Medium         1       71   \n",
      "...    ...       ...         ...         ...   ...     ...       ...      ...   \n",
      "1995  1996        49   1995/7/26        42.0    77    High         1       75   \n",
      "1996  1997        90   1999/2/27        70.0    89  Medium         1       85   \n",
      "1997  1998        68    1997/2/5        45.0    67    High         0       62   \n",
      "1998  1999        51   1999/3/14        50.0    75  Medium         1       73   \n",
      "1999  2000         1  2000/12/24        47.0    86  Medium         1       61   \n",
      "\n",
      "      shockproof  school  crime_rate  pm25   price  \n",
      "0             70       2         6.5    72  380.00  \n",
      "1             65       3         5.8    55  245.00  \n",
      "2             39       2         7.0    87  212.50  \n",
      "3             39       3         7.4    74  480.00  \n",
      "4             83       1         6.5    71  293.75  \n",
      "...          ...     ...         ...   ...     ...  \n",
      "1995          65       3         6.3    66  323.75  \n",
      "1996          26       1         6.7    81  690.00  \n",
      "1997          53       3         5.2    58  230.00  \n",
      "1998          44       2         4.4    63  297.50  \n",
      "1999          68       2         6.0    65  210.00  \n",
      "\n",
      "[2000 rows x 13 columns]\n",
      "Index(['id', 'distirct', 'built_date', 'green_rate', 'area', 'floor',\n",
      "       'oriented', 'traffic', 'shockproof', 'school', 'crime_rate', 'pm25',\n",
      "       'price'],\n",
      "      dtype='object')\n"
     ]
    }
   ],
   "source": [
    "print(data)\n",
    "print(data.columns)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The dataset contains 13 columns and 2000 rows. The `built_date` and `floor` columns hold string values, which are unsuitable for numerical analysis, so we drop them."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "data = data.drop(['built_date', 'floor'], axis=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Missing-Value Detection and Imputation\n",
    "First, write a function that reports each column's missing rate, and use it to check the original dataset for missing values:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "def missing_rate(data):\n",
    "    # per-column count and fraction of missing values, sorted descending\n",
    "    total = data.isnull().sum().sort_values(ascending=False)\n",
    "    percent = (data.isnull().sum() / data.isnull().count()).sort_values(ascending=False)\n",
    "    missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])\n",
    "    return missing_data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, write a function that fills missing values with the column mean:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "def impute_missing_values(data):\n",
    "    # fill NaN entries in a single column with the column mean\n",
    "    imp = SimpleImputer(missing_values=np.nan, strategy='mean')\n",
    "    data_reshape = data.values.reshape(-1, 1)  # SimpleImputer expects 2-D input\n",
    "    return imp.fit_transform(data_reshape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With both functions in place, first inspect the missing rate of the original dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "            Total  Percent\n",
      "crime_rate      4   0.0020\n",
      "green_rate      3   0.0015\n",
      "price           0   0.0000\n",
      "pm25            0   0.0000\n",
      "school          0   0.0000\n",
      "shockproof      0   0.0000\n",
      "traffic         0   0.0000\n",
      "oriented        0   0.0000\n",
      "area            0   0.0000\n",
      "distirct        0   0.0000\n",
      "id              0   0.0000\n"
     ]
    }
   ],
   "source": [
    "print(missing_rate(data))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `crime_rate` and `green_rate` columns contain missing values, so we impute both:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "data['crime_rate'] = impute_missing_values(data['crime_rate'])\n",
    "data['green_rate'] = impute_missing_values(data['green_rate'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After imputation, check the missing rate again to confirm the fill succeeded:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "            Total  Percent\n",
      "price           0      0.0\n",
      "pm25            0      0.0\n",
      "crime_rate      0      0.0\n",
      "school          0      0.0\n",
      "shockproof      0      0.0\n",
      "traffic         0      0.0\n",
      "oriented        0      0.0\n",
      "area            0      0.0\n",
      "green_rate      0      0.0\n",
      "distirct        0      0.0\n",
      "id              0      0.0\n"
     ]
    }
   ],
   "source": [
    "print(missing_rate(data))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Outlier Detection\n",
    "First, write a normality-test function that flags outliers by both the $3\\sigma$ rule and the IQR bounds, and returns the data with outliers removed:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "def normality_test(data):\n",
    "    std = data.std()\n",
    "    u = data.mean()\n",
    "    # Kolmogorov-Smirnov test against N(u, std); result kept for inspection\n",
    "    ks_result = stats.kstest(data, 'norm', (u, std))\n",
    "    print('mean = %3f, std = %3f' % (u, std))\n",
    "\n",
    "    # 3-sigma rule: flag values more than three standard deviations from the mean\n",
    "    error = data[np.abs(data - u) > 3 * std]\n",
    "    data_c = data[np.abs(data - u) <= 3 * std]\n",
    "    print('outliers: %i' % len(error))\n",
    "\n",
    "    # IQR rule: flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]\n",
    "    s = data.describe()\n",
    "    q1 = s['25%']\n",
    "    q3 = s['75%']\n",
    "    iqr = q3 - q1\n",
    "    mi = q1 - 1.5 * iqr\n",
    "    ma = q3 + 1.5 * iqr\n",
    "    print('IQR = %.3f, lower bound = %.3f, upper bound = %.3f' % (iqr, mi, ma))\n",
    "\n",
    "    error = data[(data < mi) | (data > ma)]\n",
    "    data_c = data[(data >= mi) & (data <= ma)]\n",
    "    print('outliers: %i' % len(error))\n",
    "\n",
    "    return data_c"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Run outlier detection on every column of the dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "id outlier detection:\n",
      "mean = 1009.874052, std = 577.161101\n",
      "outliers: 0\n",
      "IQR = 981.000, lower bound = -954.750, upper bound = 2969.250\n",
      "outliers: 0\n",
      "\n",
      "distirct outlier detection:\n",
      "mean = 91.795903, std = 48.996537\n",
      "outliers: 0\n",
      "IQR = 82.000, lower bound = -69.000, upper bound = 259.000\n",
      "outliers: 0\n",
      "\n",
      "green_rate outlier detection:\n",
      "mean = 52.116666, std = 6.870057\n",
      "outliers: 0\n",
      "IQR = 10.000, lower bound = 32.000, upper bound = 72.000\n",
      "outliers: 0\n",
      "\n",
      "area outlier detection:\n",
      "mean = 81.030349, std = 5.635794\n",
      "outliers: 0\n",
      "IQR = 8.000, lower bound = 65.000, upper bound = 97.000\n",
      "outliers: 0\n",
      "\n",
      "oriented outlier detection:\n",
      "mean = 1.000000, std = 0.000000\n",
      "outliers: 0\n",
      "IQR = 0.000, lower bound = 1.000, upper bound = 1.000\n",
      "outliers: 0\n",
      "\n",
      "traffic outlier detection:\n",
      "mean = 64.764036, std = 13.074276\n",
      "outliers: 0\n",
      "IQR = 19.000, lower bound = 27.500, upper bound = 103.500\n",
      "outliers: 0\n",
      "\n",
      "shockproof outlier detection:\n",
      "mean = 58.190440, std = 13.833460\n",
      "outliers: 0\n",
      "IQR = 17.000, lower bound = 25.500, upper bound = 93.500\n",
      "outliers: 43\n",
      "\n",
      "school outlier detection:\n",
      "mean = 2.333080, std = 0.743883\n",
      "outliers: 0\n",
      "IQR = 1.000, lower bound = 0.500, upper bound = 4.500\n",
      "outliers: 0\n",
      "\n",
      "crime_rate outlier detection:\n",
      "mean = 5.741180, std = 0.956420\n",
      "outliers: 0\n",
      "IQR = 1.300, lower bound = 3.150, upper bound = 8.350\n",
      "outliers: 2\n",
      "\n",
      "pm25 outlier detection:\n",
      "mean = 62.960546, std = 9.257965\n",
      "outliers: 0\n",
      "IQR = 13.000, lower bound = 37.500, upper bound = 89.500\n",
      "outliers: 0\n",
      "\n",
      "price outlier detection:\n",
      "mean = 305.137709, std = 66.912167\n",
      "outliers: 0\n",
      "IQR = 85.000, lower bound = 127.500, upper bound = 467.500\n",
      "outliers: 47\n",
      "\n"
     ]
    }
   ],
   "source": [
    "for column in data.columns:\n",
    "    print(column, 'outlier detection:')\n",
    "    data[column] = normality_test(data[column])\n",
    "    print()\n",
    "\n",
    "data = data.dropna()\n",
    "data_ori = data.copy(deep=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After dropping the rows that contained outliers, inspect the cleaned dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "        id  distirct  green_rate  area  oriented  traffic  shockproof  school  \\\n",
      "2        3        28        58.0  92.0       1.0       94        39.0       2   \n",
      "4        5        63        62.0  77.0       1.0       71        83.0       1   \n",
      "5        6        68        50.0  73.0       1.0       74        53.0       2   \n",
      "7        8       164        53.0  76.0       1.0       52        51.0       1   \n",
      "9       10       127        57.0  71.0       1.0       70        45.0       2   \n",
      "...    ...       ...         ...   ...       ...      ...         ...     ...   \n",
      "1993  1994       149        52.0  74.0       1.0       52        61.0       2   \n",
      "1994  1995       111        41.0  78.0       1.0       64        40.0       3   \n",
      "1995  1996        49        42.0  77.0       1.0       75        65.0       3   \n",
      "1998  1999        51        50.0  75.0       1.0       73        44.0       2   \n",
      "1999  2000         1        47.0  86.0       1.0       61        68.0       2   \n",
      "\n",
      "      crime_rate  pm25   price  \n",
      "2            7.0  87.0  212.50  \n",
      "4            6.5  71.0  293.75  \n",
      "5            5.7  66.0  271.25  \n",
      "7            5.3  54.0  234.50  \n",
      "9            4.1  55.0  252.50  \n",
      "...          ...   ...     ...  \n",
      "1993         5.4  57.0  236.00  \n",
      "1994         3.7  48.0  230.00  \n",
      "1995         6.3  66.0  323.75  \n",
      "1998         4.4  63.0  297.50  \n",
      "1999         6.0  65.0  210.00  \n",
      "\n",
      "[1227 rows x 11 columns]\n"
     ]
    }
   ],
   "source": [
    "print(data)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Correlation Analysis Between Features\n",
    "First, write a function that computes the covariance and correlation coefficient of two columns:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "def relativity_analysis(data, a, b):\n",
    "    # pair the two columns into a fresh DataFrame, then report their\n",
    "    # covariance and Pearson correlation coefficient\n",
    "    ab = np.array([data[a], data[b]])\n",
    "    dfab = pd.DataFrame(ab.T, columns=[a, b])\n",
    "    print(a, 'vs', b, 'covariance:', dfab[a].cov(dfab[b]))\n",
    "    print(a, 'vs', b, 'correlation:', dfab[a].corr(dfab[b]))\n",
    "    return dfab[a].corr(dfab[b])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Test the function on the `price` and `school` columns:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "price vs school covariance: 4.739748900154359\n",
      "price vs school correlation: 0.11465356269108218\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "0.11465356269108218"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "relativity_analysis(data, 'price', 'school')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Standardizing the `price` Attribute\n",
    "There are three common normalization methods:\n",
    "\n",
    "- min-max normalization\n",
    "- Z-score normalization\n",
    "- decimal scaling\n",
    "\n",
    "Since the preprocessing above has already removed the outliers, no extreme values remain to distort min-max normalization, while decimal scaling tends to lose precision. We therefore apply min-max normalization to `price`.\n",
    "\n",
    "Before scaling:"
   ]
  },
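  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note, the three methods can be sketched in plain NumPy on a small made-up sample (these values are illustrative, not taken from the dataset):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch of the three normalization methods on toy values\n",
    "import numpy as np\n",
    "\n",
    "x = np.array([210.0, 254.0, 290.0, 335.0, 465.0])\n",
    "\n",
    "# min-max normalization: rescale linearly onto [0, 1]\n",
    "minmax = (x - x.min()) / (x.max() - x.min())\n",
    "\n",
    "# Z-score normalization: zero mean, unit standard deviation\n",
    "zscore = (x - x.mean()) / x.std()\n",
    "\n",
    "# decimal scaling: divide by 10**j so that max(|x|) < 1 (here j = 3)\n",
    "decimal = x / 10 ** int(np.ceil(np.log10(np.abs(x).max())))\n",
    "\n",
    "print(minmax)\n",
    "print(zscore)\n",
    "print(decimal)"
   ]
  },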
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "count    1227.000000\n",
      "mean      299.437653\n",
      "std        58.257598\n",
      "min       210.000000\n",
      "25%       254.000000\n",
      "50%       290.000000\n",
      "75%       335.000000\n",
      "max       465.000000\n",
      "Name: price, dtype: float64\n"
     ]
    }
   ],
   "source": [
    "print(data['price'].describe())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After scaling:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "count    1227.000000\n",
      "mean        0.350736\n",
      "std         0.228461\n",
      "min         0.000000\n",
      "25%         0.172549\n",
      "50%         0.313725\n",
      "75%         0.490196\n",
      "max         1.000000\n",
      "Name: price, dtype: float64\n"
     ]
    }
   ],
   "source": [
    "data['price'] = skp.MinMaxScaler().fit_transform(data['price'].values.reshape(-1, 1))\n",
    "print(data['price'].describe())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Discretizing the `price` Attribute\n",
    "Discretization methods are either unsupervised or supervised; here we use the unsupervised K-means clustering algorithm:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "price = data['price']\n",
    "price_re = price.values.reshape(-1, 1)\n",
    "k = 10\n",
    "# note: KMeans' n_jobs parameter was removed in scikit-learn 1.0;\n",
    "# on older versions, n_jobs=4 parallelized the restarts\n",
    "k_model = KMeans(n_clusters=k)\n",
    "result = k_model.fit_predict(price_re)\n",
    "data['price'] = result"
   ]
  },
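  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One caveat about K-means discretization: the cluster labels it returns are arbitrary, so bin 0 is not necessarily the cheapest price range. If ordered bins are needed, the labels can be remapped by sorting the cluster centers. A minimal self-contained sketch on toy values (in this notebook the same remapping would be applied to `k_model.cluster_centers_` and `data['price']`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: remap K-means labels so that label 0 is the lowest-value cluster\n",
    "import numpy as np\n",
    "from sklearn.cluster import KMeans\n",
    "\n",
    "toy = np.array([1.0, 1.1, 5.0, 5.2, 9.0, 9.1]).reshape(-1, 1)\n",
    "km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(toy)\n",
    "labels = km.labels_\n",
    "\n",
    "# sort the cluster centers, then map each old label to its rank\n",
    "order = np.argsort(km.cluster_centers_.ravel())\n",
    "label_map = {old: new for new, old in enumerate(order)}\n",
    "ordered = np.array([label_map[l] for l in labels])\n",
    "print(ordered)"
   ]
  },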
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Inspect the discretization result:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "price\n",
      "0    171\n",
      "1     85\n",
      "2     35\n",
      "3    214\n",
      "4    140\n",
      "5     40\n",
      "6     89\n",
      "7    215\n",
      "8    128\n",
      "9    110\n",
      "Name: price, dtype: int64\n",
      "count    1227.000000\n",
      "mean        4.572127\n",
      "std         2.937962\n",
      "min         0.000000\n",
      "25%         3.000000\n",
      "50%         4.000000\n",
      "75%         7.000000\n",
      "max         9.000000\n",
      "Name: price, dtype: float64\n"
     ]
    }
   ],
   "source": [
    "print(data.groupby('price')['price'].count())\n",
    "print(data['price'].describe())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Finding the Three Features Most Correlated with `price`\n",
    "Finally, reuse the correlation function written earlier to compute the correlation between `price` and every other attribute:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "price vs id covariance: 1314.9209851146913\n",
      "price vs id correlation: 0.039068123138824434\n",
      "price vs distirct covariance: -41.78301680779527\n",
      "price vs distirct correlation: -0.014654674086677293\n",
      "price vs green_rate covariance: 29.30880073330759\n",
      "price vs green_rate correlation: 0.07395426422536977\n",
      "price vs area covariance: 110.40738860946804\n",
      "price vs area correlation: 0.34543599387897145\n",
      "price vs oriented covariance: 0.0\n",
      "price vs oriented correlation: nan\n",
      "price vs traffic covariance: 139.26522882373357\n",
      "price vs traffic correlation: 0.18435159030371132\n",
      "price vs shockproof covariance: 164.93380085913603\n",
      "price vs shockproof correlation: 0.22712633997041048\n",
      "price vs school covariance: 4.739748900154359\n",
      "price vs school correlation: 0.11465356269108218\n",
      "price vs crime_rate covariance: 14.771381911897125\n",
      "price vs crime_rate correlation: 0.2693795467385551\n",
      "price vs pm25 covariance: 137.5759217564026\n",
      "price vs pm25 correlation: 0.25886341319064377\n"
     ]
    }
   ],
   "source": [
    "for column in data_ori.columns:\n",
    "    if column != 'price':\n",
    "        relativity_analysis(data_ori, 'price', column)"
   ]
  },
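  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-column loop above can also be collapsed into a single pandas call. A sketch on a small made-up frame (in the notebook itself this would be `data_ori.corr()['price']`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: rank features by absolute correlation with price in one call\n",
    "import pandas as pd\n",
    "\n",
    "df = pd.DataFrame({\n",
    "    'area': [90, 72, 92, 83, 77],\n",
    "    'pm25': [72, 55, 87, 74, 71],\n",
    "    'school': [2, 3, 2, 3, 1],\n",
    "    'price': [380.0, 245.0, 212.5, 480.0, 293.75],\n",
    "})\n",
    "top = df.corr()['price'].drop('price').abs().sort_values(ascending=False)\n",
    "print(top.head(3))"
   ]
  },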
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The three features most correlated with `price` are `area`, `crime_rate`, and `pm25`. Price rises with floor area, which matches intuition; expensive areas tend to be wealthy neighborhoods that attract more theft, so the crime rate is higher; and wealthy areas have a larger share of private cars, whose emissions contribute more PM2.5."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
