{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Data transformation is a key step in data preparation: it converts data into a form suitable for data mining, via smoothing, aggregation, generalization, normalization, and similar techniques\n",
    "\n",
    "Common transformation methods:\n",
    "1. Data smoothing: remove noise from the data and discretize continuous values; binning, clustering, and regression are typical smoothing techniques\n",
    "2. Data aggregation: summarize the data. SQL offers aggregate functions for this, e.g. MAX() returns the maximum value of a field and SUM() returns the total of a numeric field\n",
    "3. Data generalization: abstract lower-level concepts into higher-level ones to reduce data complexity, i.e. replace low-level concepts with high-level ones. For example, Shanghai, Hangzhou, Shenzhen, and Beijing can all be generalized to China\n",
    "4. Data normalization: scale attribute values proportionally so the original values map into a new, specific range. Common methods include min-max normalization, Z-score normalization, and decimal scaling\n",
    "5. Attribute construction: build new attributes and add them to the attribute set. This is essentially feature engineering: constructing new attributes by combining existing ones is exactly what feature engineering does."
   ]
  },
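  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a small sketch of step 1 (smoothing by binning): each value is assigned to a bin, then replaced by the mean of its bin. The age values and bin boundaries below are made-up illustration data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Made-up sample: ages to be smoothed\n",
    "ages = np.array([4., 12., 13., 35., 36., 70.])\n",
    "# Bin boundaries: bin 0 is <18, bin 1 is [18, 40), bin 2 is >=40\n",
    "labels = np.digitize(ages, [18, 40])\n",
    "# Smoothing by bin means: replace each value by the mean of its bin\n",
    "smoothed = np.array([ages[labels == b].mean() for b in labels])\n",
    "print(labels)\n",
    "print(smoothed)"
   ]
  },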
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Normalization methods and how to use them in scikit-learn\n",
    "\n",
    "#### 1. Min-max normalization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[0.         0.         0.66666667]\n",
      " [1.         1.         1.        ]\n",
      " [0.         1.         0.        ]]\n"
     ]
    }
   ],
   "source": [
    "from sklearn import preprocessing\n",
    "import numpy as np\n",
    "\n",
    "# Initialize the data: each row is a sample, each column is a feature\n",
    "x = np.array([[0., -3., 1.],\n",
    "              [3., 1., 2.],\n",
    "              [0., 1., -1.]])\n",
    "# Normalize the data to the [0, 1] range\n",
    "min_max_scaler = preprocessing.MinMaxScaler()\n",
    "minmax_x = min_max_scaler.fit_transform(x)\n",
    "print(minmax_x)"
   ]
  },
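  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The result above can be checked by hand with the min-max formula (x - min) / (max - min), computed per feature (column):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "x = np.array([[0., -3., 1.],\n",
    "              [3., 1., 2.],\n",
    "              [0., 1., -1.]])\n",
    "# Per-feature (column-wise) minimum and maximum\n",
    "col_min = x.min(axis=0)\n",
    "col_max = x.max(axis=0)\n",
    "# Min-max formula: (x - min) / (max - min)\n",
    "minmax_manual = (x - col_min) / (col_max - col_min)\n",
    "print(minmax_manual)  # matches MinMaxScaler's output"
   ]
  },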
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2. Z-score normalization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[-0.70710678 -1.41421356  0.26726124]\n",
      " [ 1.41421356  0.70710678  1.06904497]\n",
      " [-0.70710678  0.70710678 -1.33630621]]\n"
     ]
    }
   ],
   "source": [
    "from sklearn import preprocessing\n",
    "import numpy as np\n",
    "\n",
    "# Initialize the data: each row is a sample, each column is a feature\n",
    "x = np.array([[0., -3., 1.],\n",
    "              [3., 1., 2.],\n",
    "              [0., 1., -1.]])\n",
    "# Standardize the data with Z-score normalization\n",
    "scale_x = preprocessing.scale(x)\n",
    "print(scale_x)"
   ]
  },
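  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "preprocessing.scale is a one-off convenience. When the same standardization must be reused on new data (e.g. a test set), the StandardScaler estimator stores the fitted per-feature mean and standard deviation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn import preprocessing\n",
    "import numpy as np\n",
    "\n",
    "x = np.array([[0., -3., 1.],\n",
    "              [3., 1., 2.],\n",
    "              [0., 1., -1.]])\n",
    "# Fit once, then reuse the learned mean/std on any data\n",
    "scaler = preprocessing.StandardScaler().fit(x)\n",
    "print(scaler.mean_)         # per-feature means\n",
    "print(scaler.transform(x))  # same values as preprocessing.scale(x)"
   ]
  },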
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3. Decimal scaling normalization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[ 0.  -0.3  0.1]\n",
      " [ 0.3  0.1  0.2]\n",
      " [ 0.   0.1 -0.1]]\n"
     ]
    }
   ],
   "source": [
    "from sklearn import preprocessing\n",
    "import numpy as np\n",
    "\n",
    "# Initialize the data: each row is a sample, each column is a feature\n",
    "x = np.array([[0., -3., 1.],\n",
    "              [3., 1., 2.],\n",
    "              [0., 1., -1.]])\n",
    "# Decimal scaling: divide by 10**j, where j is the number of digits\n",
    "# in the largest absolute value (here computed over the whole matrix)\n",
    "j = np.ceil(np.log10(np.max(abs(x))))\n",
    "scaled_x = x / (10 ** j)\n",
    "print(scaled_x)"
   ]
  },
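  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that the cell above derives a single exponent j from the global maximum; decimal scaling is usually applied per attribute. A column-wise variant, using made-up data whose features differ in magnitude, could look like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Made-up data: the second feature is two orders of magnitude larger\n",
    "x = np.array([[0., -300., 1.],\n",
    "              [3., 100., 2.],\n",
    "              [0., 100., -1.]])\n",
    "# Per-column exponent: smallest power of 10 covering each column's max |value|\n",
    "j = np.ceil(np.log10(np.max(np.abs(x), axis=0)))\n",
    "scaled = x / (10 ** j)\n",
    "print(scaled)"
   ]
  },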
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "Data transformation matters: values of very different magnitudes, or features measured on different scales, are hard to compare directly. In deep learning, for instance, inputs are routinely normalized so that features sit on comparable scales and no single feature dominates, which makes the downstream algorithms easier to train."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
