{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "534ebf4c-c39b-4ee7-8c1d-fd667c8e3342",
   "metadata": {},
   "source": [
    "# 特征工程\n",
    "- 特征抽取\n",
    "- 数据特征的预处理\n",
    "- 特征选择"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5b57bd64-3de7-4112-968f-749fa14e5dab",
   "metadata": {},
   "source": [
    "- 为什么需要特征工程\n",
    "  - 样本数据的特征有可能会存在缺失值,重复值,异常值等等,那么我们是需要对特征中的相关噪点数据进行处理的,那么处理的目的就是为了营造出一个更纯净的样本集(数据集越纯净则越便于让模型总结出数据集中潜在的规律),让模型基于这组数据可以有更好的预测能力."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4d002cfa-ebc4-46f3-96dd-a1634bc52349",
   "metadata": {},
   "source": [
    "- 什么是特征工程\n",
    "  - 特征工程是将原始数据转化为更好的能代表模型能处理数据的潜在问题对应特征的过程,从而提高对未知数据预测的准确性.所以特征工程就是对特征的相关处理"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d4951861-826a-451b-9e9f-2f8f683e5b8c",
   "metadata": {},
   "source": [
    "- 特征工程的意义\n",
    "   - 直接影响模型预测的结果"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "54ab6d95-722c-4c9e-8d0b-99e38c441b54",
   "metadata": {},
   "source": [
    "# 特征抽取\n",
    "- 目的:\n",
    "  - 我们所采集到样本中的特征数据往往很多时候为字符串或者其他类型的数据,但是电脑只可以识别二进制数值,如果把字符串给电脑,电脑是看不懂的;机器学习学习的数据如果不是数值型的数据,它是识别不了的."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e09b79a1-4491-447e-8f04-6da9e027333d",
   "metadata": {},
   "source": [
    "**字符串类型分类变量**\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aed7d587-7140-4435-9636-b1920d4db7ec",
   "metadata": {},
   "source": [
    "  - 无序分类变量\n",
    "    - 说明事物的类别的一个名称,如性别有男女两种,二者无大小之分,无顺序之分,还有如血型,名族等."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "14909362-3a5b-4f09-8d60-8c1e7e7a4204",
   "metadata": {},
   "source": [
    "  - 有序分类变量\n",
    "    - 也是说明事物类型的一个名称,但是有次序之分,如:满意度分为[满意,一般,不满意],三者有顺序之分,但是无大小之分"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4e4bd539-92b4-4900-8471-2368674773d6",
   "metadata": {},
   "source": [
    "特征值化:将非数值型的特征转化成数值型的特质"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "55c1572f-a545-41c4-9b6d-37cf5614d29b",
   "metadata": {},
   "source": [
    "**效果演示**\n",
    "\n",
    "将字符转化为数字"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "e313cf1a-f3fe-4849-9419-88da3e92f987",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[0 1 1 0 1 1 1]\n",
      " [1 1 1 1 0 1 0]]\n"
     ]
    }
   ],
   "source": [
    "from sklearn.feature_extraction.text import CountVectorizer\n",
    "vector=CountVectorizer()\n",
    "res=vector.fit_transform(['life is short,i love python','life is long,i hate python'])\n",
    "print(res.toarray())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "57e045e7-b66a-423c-829f-1a49be67534f",
   "metadata": {},
   "source": [
    "- 演示后的结论:\n",
    "\n",
    "  - 特征抽取对文本等数据进行特质值化.特征值化是为了让机器更好的理解数据"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ff77ecee-951c-49d0-9469-f3baace1ed91",
   "metadata": {},
   "source": [
    "### 字典特征值化\n",
    "- 作用:对字典数据进行特征值化\n",
    "\n",
    "- API:from sklearn.feature_extraction import DictVectorizer\n",
    "\n",
    "  - fit_transform(X):X为字典或包含字典的迭代器,返回值为sparse矩阵\n",
    " \n",
    "  - inverse_transform(X):X为sparse矩阵或者array数组,返回值为转换前的数据格式\n",
    " \n",
    "  - get_feature_names():返回类别名称"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "299d6140-678c-4834-80e0-b7eb9193de71",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<Compressed Sparse Row sparse matrix of dtype 'float64'\n",
      "\twith 6 stored elements and shape (3, 4)>\n",
      "  Coords\tValues\n",
      "  (0, 0)\t1.0\n",
      "  (0, 3)\t33.0\n",
      "  (1, 1)\t1.0\n",
      "  (1, 3)\t42.0\n",
      "  (2, 2)\t1.0\n",
      "  (2, 3)\t40.0\n"
     ]
    }
   ],
   "source": [
    "from sklearn.feature_extraction import DictVectorizer\n",
    "# 创建工具对象\n",
    "d = DictVectorizer(sparse=True)\n",
    "\n",
    "alist=[\n",
    "    {'city':'BeiJing','temp':33},\n",
    "    {'city':'GZ','temp':42},\n",
    "    {'city':'SH','temp':40}\n",
    "      ]\n",
    "\n",
    "# 使用该工具对象进行特征值化\n",
    "\n",
    "result=d.fit_transform(alist)\n",
    "print(result) # 返回的结果是一个sparese矩阵"
   ]
  },
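   {
    "cell_type": "markdown",
    "id": "b7d3c1a2-4e5f-4a6b-8c9d-0e1f2a3b4c5d",
    "metadata": {},
    "source": [
     "The inverse_transform method listed above maps the featurized result back to the dict format. A minimal sketch, for illustration only:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "c8e4d2b3-5f60-4b7c-9d0e-1f2a3b4c5d6e",
    "metadata": {},
    "outputs": [],
    "source": [
     "from sklearn.feature_extraction import DictVectorizer\n",
     "\n",
     "d = DictVectorizer(sparse=False)\n",
     "alist = [\n",
     "    {'city': 'BeiJing', 'temp': 33},\n",
     "    {'city': 'GZ', 'temp': 42},\n",
     "    {'city': 'SH', 'temp': 40}\n",
     "]\n",
     "result = d.fit_transform(alist)\n",
     "\n",
     "# inverse_transform recovers one dict per row,\n",
     "# e.g. the first row becomes {'city=BeiJing': 1.0, 'temp': 33.0}\n",
     "print(d.inverse_transform(result))"
    ]
   },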
  {
   "cell_type": "markdown",
   "id": "acd13166-956b-487d-ba22-4691c3aa6e2d",
   "metadata": {},
   "source": [
    "* 什么是sparse矩阵\n",
    "  * 在DictVectorizer类的构造方法中设定sparse=False则返回的就不是sparse矩阵,而是一个数组\n",
    "      * get_feature_names():返回类别名称\n",
    "  * sparse矩阵就是一个变相的数组或者列表,目的是为了节省空间"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "97ee16cc-0229-4aa4-b2a2-eaf48f45cb0c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['city=BeiJing' 'city=GZ' 'city=SH' 'temp']\n",
      "[[ 1.  0.  0. 33.]\n",
      " [ 0.  1.  0. 42.]\n",
      " [ 0.  0.  1. 40.]]\n"
     ]
    }
   ],
   "source": [
    "# 将sparse举证转化成数组的形式\n",
    "from sklearn.feature_extraction import DictVectorizer\n",
    "\n",
    "\n",
    "alist=[\n",
    "    {'city':'BeiJing','temp':33},\n",
    "    {'city':'GZ','temp':42},\n",
    "    {'city':'SH','temp':40}\n",
    "      ]\n",
    "# 创建工具对象\n",
    "d = DictVectorizer(sparse=False)\n",
    "\n",
    "# 使用该工具对象进行特征值化\n",
    "result=d.fit_transform(alist)\n",
    "\n",
    "names=d.get_feature_names_out()\n",
    "print(names) # 特征名称\n",
    "print(result) # 返回的结果是一个二维数组"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3e344e61-ad9a-4507-81bd-e12c4ecd5f76",
   "metadata": {},
   "source": [
    "- OneHot编码(独热编码)\n",
    "  - sparse举证中的0and1就是onehot编码\n",
    "- 为什么需要onehot编码\n",
    "  - 特征抽取的主要目标就是对非数值的数据进行特征值化"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9c6ef9a0-2a1f-435b-86a0-558a4441b7a9",
   "metadata": {},
   "source": [
    "- 基于pandas实现one-hot编码\n",
    "  - pd.get_dummies(df['col'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "b11bc8cd-eafb-49ac-83e9-308a6caf340f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>color</th>\n",
       "      <th>size</th>\n",
       "      <th>weight</th>\n",
       "      <th>class label</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>green</td>\n",
       "      <td>M</td>\n",
       "      <td>20</td>\n",
       "      <td>class1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>red</td>\n",
       "      <td>L</td>\n",
       "      <td>21</td>\n",
       "      <td>class2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>blue</td>\n",
       "      <td>XL</td>\n",
       "      <td>30</td>\n",
       "      <td>class3</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   color size  weight class label\n",
       "0  green    M      20      class1\n",
       "1    red    L      21      class2\n",
       "2   blue   XL      30      class3"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import pandas as pd\n",
    "df = pd.DataFrame([\n",
    "    ['green','M',20,'class1'],\n",
    "    ['red','L',21,'class2'],\n",
    "    ['blue','XL',30,'class3'],\n",
    "])\n",
    "df.columns = ['color','size','weight','class label']\n",
    "df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "bae7038a-9f2b-4e4c-826f-6597f15cfac7",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>weight</th>\n",
       "      <th>class label</th>\n",
       "      <th>blue</th>\n",
       "      <th>green</th>\n",
       "      <th>red</th>\n",
       "      <th>new_size</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>20</td>\n",
       "      <td>class1</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>21</td>\n",
       "      <td>class2</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>30</td>\n",
       "      <td>class3</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   weight class label  blue  green  red  new_size\n",
       "0      20      class1     0      1    0         1\n",
       "1      21      class2     0      0    1         2\n",
       "2      30      class3     1      0    0         3"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 无序字符值\n",
    "r1=pd.get_dummies(df['color'],dtype=int)# 默认返回的是布尔值,需要手动该类型\n",
    "new_df=pd.concat((df,r1),axis=1).drop(labels='color',axis=1)# 级联生成好color的独热编码并把原color列删除\n",
    "\n",
    "# 有序字符值\n",
    "new_df['new_size']=new_df['size'].map({'M':1,'L':2,'XL':3})# 将size字符映射指定成指定数值并添加到new_size列中\n",
    "new_df.drop(labels='size',axis=1)# 将原size删除"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "771fead3-ecb0-4c90-ae34-cf3b21b89b7f",
   "metadata": {},
   "source": [
    "### 文本特质抽取\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "10f55566-5ca1-4e1d-8ea4-8d35a17471a0",
   "metadata": {},
   "source": [
    "- 作用:对文本数据进行特质值化\n",
    "- API:from sklearn.feature_extraction.text import CountVectorizer\n",
    "- fit_transform(X):X为文本或者包含文本字符串的可迭代对象,返回sparse矩阵\n",
    "- inverse_transform(X):X为array数组或者sparse矩阵,返回转化之前的格式数据\n",
    "- get_feature_names()\n",
    "- toarray():将sparse矩阵转化为数组"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "033d0c77-5b36-46cc-a5ae-e40b41c220cb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['hate' 'is' 'life' 'long' 'love' 'python' 'short']\n",
      "[[0 1 1 0 1 1 1]\n",
      " [1 1 1 1 0 1 0]]\n"
     ]
    }
   ],
   "source": [
    "alist = [\n",
    "            'life is short,i love python',\n",
    "            'life is long,i hate python'\n",
    "        ]\n",
    "c = CountVectorizer()\n",
    "result = c.fit_transform(alist)\n",
    "print(c.get_feature_names_out())\n",
    "print(result.toarray())#toarray() 将sparese矩阵转化为数组"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "74648208-224a-4655-83ac-f7e6948d0919",
   "metadata": {},
   "source": [
    "- ### 中文文本特征抽取\n",
    "  - 对有标点符号的中文文本进行特征抽取"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "6f5377c1-5cb4-4cb3-954c-5e4dc6d8d583",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['今天的天气' '对有标点' '符号的中文文本' '进行特征抽取']\n",
      "[[2 0 0 0]\n",
      " [0 1 1 1]]\n"
     ]
    }
   ],
   "source": [
    "alist = ['今天的天气,今天的天气','对有标点,符号的中文文本,进行特征抽取']\n",
    "c = CountVectorizer()\n",
    "\n",
    "result=c.fit_transform(alist)\n",
    "# 根据标点符号做分词\n",
    "print(c.get_feature_names_out())\n",
    "print(result.toarray())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "65736a27-be88-48d6-af94-2eb28b343ae6",
   "metadata": {},
   "source": [
    "- 目前CountVectorizer只可以对有标点符号和分隔符对应的文本进行特征抽取.但这满足不了日常需要\n",
    "  - 因为在自然语言处理中,我们是需要将一段中文文本中相关的词语,成语形容词等都要进行抽取的"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0da340e0-c9a1-4c69-9f83-bba2b0c509f4",
   "metadata": {},
   "source": [
    "- jieba分词\n",
    "  - 对中文文章进行分词处理\n",
    "  - pip install jieba"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f4da5d3a-4445-4de3-9e82-24b930d0c4c7",
   "metadata": {},
   "source": [
    "- jieba分词的基本使用"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "b9affdbb-a228-4fef-b4da-2674d9c636a5",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['因为', '在', '自然语言', '处理', '中', ',', '我们', '是', '需要', '将', '一段', '中文', '文本', '中', '相关', '的', '词语', ',', '成语', '形容词', '等', '都', '要', '进行', '抽取', '的']\n",
      "['目前', 'CountVectorizer', '只', '可以', '对', '有', '标点符号', '和', '分隔符', '对应', '的', '文本', '进行', '特征', '抽取', '.', '但', '这', '满足', '不了', '日常', '需要']\n",
      "['countvectorizer' '一段' '不了' '中文' '分隔符' '可以' '因为' '处理' '对应' '形容词' '成语'\n",
      " '我们' '抽取' '文本' '日常' '标点符号' '满足' '特征' '目前' '相关' '自然语言' '词语' '进行' '需要']\n",
      "[[0 1 0 1 0 0 1 1 0 1 1 1 1 1 0 0 0 0 0 1 1 1 1 1]\n",
      " [1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1]]\n"
     ]
    }
   ],
   "source": [
    "import jieba\n",
    "\n",
    "text = ['因为在自然语言处理中,我们是需要将一段中文文本中相关的词语,成语形容词等都要进行抽取的',\n",
    "       '目前CountVectorizer只可以对有标点符号和分隔符对应的文本进行特征抽取.但这满足不了日常需要'\n",
    "       ]\n",
    "\n",
    "c = jieba.cut(text)\n",
    "\n",
    "new_text=[]\n",
    "\n",
    "for t in text:\n",
    "    r=list(jieba.cut(t))\n",
    "    print(r)\n",
    "    s = ' ' .join(r)\n",
    "    new_text.append(s)\n",
    "new_text\n",
    "\n",
    "c = CountVectorizer()\n",
    "result = c.fit_transform(new_text)\n",
    "print(c.get_feature_names_out())\n",
    "print(result.toarray())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1e8c74c5-0979-429b-a5b2-455dc371a6f0",
   "metadata": {},
   "source": [
    "### 特征的预处理:对数值型的数据进行处理\n",
    "\n",
    "- 无量纲化:\n",
    "  - 在机器学习算法实践中,我们往往有着将不同规格的数据转换为到同一规格,或不同分布的数据转换到某个特定分布的需求这种需求统称为将数据'无量纲化'\n",
    "    - 譬如:梯度和矩阵为核心的算法中,譬如逻辑回归,支持向量机,神经网络,无量纲化可以加快求解速度;\n",
    "    - 而在距离类模型,譬如K临近,K-Means聚类中,无量纲化可以帮助我们提升模型精度,避免某一取值范围特别大的特征对距离计算造成影响\n",
    "    - 一个特例是决策树和数的集成算法们,对决策树我们不需要无量纲化,决策树可以把任意数据都处理好\n",
    "  - 预处理就是实现无量纲化的方式\n",
    "- 含义:特征抽取后我们就可以获取对应的数值型的样本数据,然后就可以对数据进行处理\n",
    "- 概念:通过特定的统计方法(数学方法),将数据转化成算法要求的数据\n",
    "- 方式:\n",
    "  - 归一化:易受异常值影响\n",
    "  - 标准化:对异常值不敏感"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "baf7f668-1966-4afb-b2dd-98739725adce",
   "metadata": {},
   "source": [
    "- 如果每一个特征具有相同大小的权重都同等重要,则必须要对其进行归一化处理\n",
    "- 可以使用KNN的算法对特征影响进行说明\n",
    "- 特点:通过对原始数据进行交换把数据映射到(默认[0,1])之间\n",
    "- 公式:X'=(x-min)/max-min  X''=X'*(mx-mi)+mi\n",
    "   - 注:作用于每一列,max为一列的最大值,min为一列的最小值,那么X''为最终结果,mx,mi分别指区间值默认mx为1,mi为0 \n",
    "- 归一化后的数据符合正太分布"
   ]
  },
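   {
    "cell_type": "markdown",
    "id": "d9f5e3c4-6a71-4c8d-0e1f-2a3b4c5d6e7f",
    "metadata": {},
    "source": [
     "The min-max formula above can be checked by hand against sklearn's MinMaxScaler on a tiny array (a sketch; the numbers are illustrative):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "e0a6f4d5-7b82-4d9e-1f20-3b4c5d6e7f80",
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "from sklearn.preprocessing import MinMaxScaler\n",
     "\n",
     "X = np.array([[1.0, 10.0],\n",
     "              [2.0, 20.0],\n",
     "              [3.0, 40.0]])\n",
     "\n",
     "# Manual min-max: X' = (x - min) / (max - min), applied per column\n",
     "manual = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))\n",
     "\n",
     "mm = MinMaxScaler()  # default feature_range=(0, 1), so X'' = X'\n",
     "print(np.allclose(manual, mm.fit_transform(X)))  # True"
    ]
   },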
  {
   "cell_type": "markdown",
   "id": "cf1e92c9-8cbd-4604-8d8c-ddb2bea24e5b",
   "metadata": {},
   "source": [
    "- API:from sklearn.perprocessing import MinMaxScaler\n",
    "  - 参数:feature_range表示缩放范围,通常使用(0,1)\n",
    "- 作用:使得某一个特征对最终结果不会造成很大影响"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "3de4fa02-1ce0-4c31-85c8-1235d4559e34",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>0</th>\n",
       "      <th>1</th>\n",
       "      <th>2</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>40920</td>\n",
       "      <td>8.326976</td>\n",
       "      <td>0.953952</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>14488</td>\n",
       "      <td>7.153469</td>\n",
       "      <td>1.673904</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>26052</td>\n",
       "      <td>1.441871</td>\n",
       "      <td>0.805124</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>75136</td>\n",
       "      <td>13.147394</td>\n",
       "      <td>0.428964</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>38344</td>\n",
       "      <td>1.669788</td>\n",
       "      <td>0.134296</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>...</th>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>995</th>\n",
       "      <td>11145</td>\n",
       "      <td>3.410627</td>\n",
       "      <td>0.631838</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>996</th>\n",
       "      <td>68846</td>\n",
       "      <td>9.974715</td>\n",
       "      <td>0.669787</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>997</th>\n",
       "      <td>26575</td>\n",
       "      <td>10.650102</td>\n",
       "      <td>0.866627</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>998</th>\n",
       "      <td>48111</td>\n",
       "      <td>9.134528</td>\n",
       "      <td>0.728045</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>999</th>\n",
       "      <td>43757</td>\n",
       "      <td>7.882601</td>\n",
       "      <td>1.332446</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>1000 rows × 3 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "         0          1         2\n",
       "0    40920   8.326976  0.953952\n",
       "1    14488   7.153469  1.673904\n",
       "2    26052   1.441871  0.805124\n",
       "3    75136  13.147394  0.428964\n",
       "4    38344   1.669788  0.134296\n",
       "..     ...        ...       ...\n",
       "995  11145   3.410627  0.631838\n",
       "996  68846   9.974715  0.669787\n",
       "997  26575  10.650102  0.866627\n",
       "998  48111   9.134528  0.728045\n",
       "999  43757   7.882601  1.332446\n",
       "\n",
       "[1000 rows x 3 columns]"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "data = pd.read_csv('datingTestSet.txt',header=None,sep='\\t')\n",
    "feature = data[[0,1,2]]\n",
    "feature"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "id": "81cbf4e3-596f-44a5-92af-67cb91b5040d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[0.44832535 0.39805139 0.56233353]\n",
      " [0.15873259 0.34195467 0.98724416]\n",
      " [0.28542943 0.06892523 0.47449629]\n",
      " ...\n",
      " [0.29115949 0.50910294 0.51079493]\n",
      " [0.52711097 0.43665451 0.4290048 ]\n",
      " [0.47940793 0.3768091  0.78571804]]\n"
     ]
    }
   ],
   "source": [
    "from sklearn.preprocessing import MinMaxScaler\n",
    "mm = MinMaxScaler()\n",
    "result = mm.fit_transform(feature)\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "41943a28-d75a-4cbb-90b0-76dc52b00410",
   "metadata": {},
   "source": [
    "- 标准化的处理\n",
    "  - 当数据按均值中心化之后,在按标准差缩放,数据就会服从为均值为0,方差为1的正态分布(即标准化正态分布),而这个过程,就叫数据标准化(Standrardization,又称Z-score normailzation)\n",
    "  - 公式:X'=(x-mean)/σ\n",
    "    - 注:作用于每一列,mean为平均值,σ为标准差,var成为方差,var=[(x1-mean)^2+(x2-mean)^2+...]/n(每个特征的样本数),σ=var^1/2\n",
    "    - 其中:方差(考量数据的稳定性)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "id": "9e89c35d-d22e-4199-9bc8-d9871caab285",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[ 0.33193158  0.41660188  0.24523407]\n",
      " [-0.87247784  0.13992897  1.69385734]\n",
      " [-0.34554872 -1.20667094 -0.05422437]\n",
      " ...\n",
      " [-0.32171752  0.96431572  0.06952649]\n",
      " [ 0.65959911  0.60699509 -0.20931587]\n",
      " [ 0.46120328  0.31183342  1.00680598]]\n"
     ]
    }
   ],
   "source": [
    "from sklearn.preprocessing import StandardScaler\n",
    "s = StandardScaler()\n",
    "result = s.fit_transform(feature)\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5eaf6058-110c-4f53-ac51-8ac7f6abe19c",
   "metadata": {},
   "source": [
    "- 归一化和标准化总结:\n",
    "  - 对于归一化来说,如果出现了异常值则会影响特征的最大最小值,那么最终结果会受到比较大影响\n",
    "  - 对于标准化来说,如果出现异常值,由于具有一定的数据量,少量的异常点对于平均值的影响不大,从而标准差改变比较少"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "637b41e7-b0a2-47e9-89be-5b212c121cff",
   "metadata": {},
   "source": [
    "- StandardScaler和MinMaxScaler\n",
    "  - 大多数机器学习算法中,会选择StandardScaler来进行特征缩放,因为MinMaxScaler对异常值非常敏感.在PCA,聚类,逻辑回归,支持向量机,神经网络这些算法中,StandardScaler往往是最好的选择.MinMax在不涉及距离度量,梯度,协方差计算以及数据需要被压缩到特定区间时使用广泛,比如数字图像处理中量化像素强度时,都会使用MinMaxScaler将数据压缩于[0,1]区间之中"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "47105d36-4428-4b96-aac0-0ca3c32b2e13",
   "metadata": {},
   "source": [
    "### 特征选择:从特征中选择出有意义对模型有帮助的特征作为最终的机器学习输入的数据\n",
    "\n",
    "- 特征选择的原因:\n",
    "  - 冗余:部分特征相关度高,容易消耗计算机性能\n",
    "  - 噪点:部分特征对预测结果有偏执影响\n",
    "- 特征选择的实现:\n",
    "  - 人为对不相关的特征进行主观舍弃\n",
    "  - 在已有特征和对应预测结果的基础上,使用相关的工具过滤掉一些无用的或权重较低的特征\n",
    "    - 工具:\n",
    "      - Filter(过滤式)\n",
    "      - Embedded(嵌入式):决策树模型会自己选择出对其重要的特征\n",
    "      - PCA降维\n",
    "      - 相关性分析"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "11400e3b-5772-4cc7-b3c1-a4bf5c4258ad",
   "metadata": {},
   "source": [
    "- Filter过滤式(方差过滤):\n",
    "  - 原理:这是通过特征本身的方差来筛选特征的类.比如一个特征本身的方差很小,就表示样本在这个特征上基本没有差异,可能特征中的大多数值都一样,甚至整个特征的取值都相同,那这个特征对于样本区分没有用.所以无论接下来的特征工程要做什么,都要优先消除方差为0或者方差极低的特征.\n",
    "    - 比如:朝阳区的房价预测,其中样本有一列特征为温度,则要知道朝阳区包含在样本的房子对应的气象温度几乎一致或者大同小异,则温度特征正则对房价的区分式无意义的\n",
    "  - API:from sklearn.feature_selection import VarianceThreshold\n",
    "  - VarianceThreshold(threshold=x)threshold方差的值,删除所有方差低于x的特征,默认值为0表示保留所有方差不为0的特征\n",
    "  - fit_transform(X):X为特征"
   ]
  },
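   {
    "cell_type": "markdown",
    "id": "f1b7a5e6-8c93-4eaf-2031-4c5d6e7f8091",
    "metadata": {},
    "source": [
     "A minimal sketch of VarianceThreshold on toy data (the matrix below is illustrative):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "a2c8b6f7-9d04-4fb0-3142-5d6e7f8092a3",
    "metadata": {},
    "outputs": [],
    "source": [
     "from sklearn.feature_selection import VarianceThreshold\n",
     "\n",
     "# Columns 1, 2 and 4 are constant (variance 0); only column 3 varies\n",
     "X = [[0, 2, 0, 3],\n",
     "     [0, 2, 4, 3],\n",
     "     [0, 2, 1, 3]]\n",
     "\n",
     "v = VarianceThreshold(threshold=0)  # remove features with variance <= 0\n",
     "print(v.fit_transform(X))  # keeps only the varying column"
    ]
   },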
  {
   "cell_type": "code",
   "execution_count": 62,
   "id": "70ae3b92-4a98-4fd9-8622-7483ace11e46",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>radius_mean</th>\n",
       "      <th>texture_mean</th>\n",
       "      <th>perimeter_mean</th>\n",
       "      <th>area_mean</th>\n",
       "      <th>smoothness_mean</th>\n",
       "      <th>compactness_mean</th>\n",
       "      <th>concavity_mean</th>\n",
       "      <th>concave_mean</th>\n",
       "      <th>symmetry_mean</th>\n",
       "      <th>fractal_mean</th>\n",
       "      <th>...</th>\n",
       "      <th>radius_max</th>\n",
       "      <th>texture_max</th>\n",
       "      <th>perimeter_max</th>\n",
       "      <th>area_max</th>\n",
       "      <th>smoothness_max</th>\n",
       "      <th>compactness_max</th>\n",
       "      <th>concavity_max</th>\n",
       "      <th>concave_max</th>\n",
       "      <th>symmetry_max</th>\n",
       "      <th>fractal_max</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>17.99</td>\n",
       "      <td>10.38</td>\n",
       "      <td>122.80</td>\n",
       "      <td>1001.0</td>\n",
       "      <td>0.11840</td>\n",
       "      <td>0.27760</td>\n",
       "      <td>0.30010</td>\n",
       "      <td>0.14710</td>\n",
       "      <td>0.2419</td>\n",
       "      <td>0.07871</td>\n",
       "      <td>...</td>\n",
       "      <td>25.380</td>\n",
       "      <td>17.33</td>\n",
       "      <td>184.60</td>\n",
       "      <td>2019.0</td>\n",
       "      <td>0.16220</td>\n",
       "      <td>0.66560</td>\n",
       "      <td>0.7119</td>\n",
       "      <td>0.2654</td>\n",
       "      <td>0.4601</td>\n",
       "      <td>0.11890</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>20.57</td>\n",
       "      <td>17.77</td>\n",
       "      <td>132.90</td>\n",
       "      <td>1326.0</td>\n",
       "      <td>0.08474</td>\n",
       "      <td>0.07864</td>\n",
       "      <td>0.08690</td>\n",
       "      <td>0.07017</td>\n",
       "      <td>0.1812</td>\n",
       "      <td>0.05667</td>\n",
       "      <td>...</td>\n",
       "      <td>24.990</td>\n",
       "      <td>23.41</td>\n",
       "      <td>158.80</td>\n",
       "      <td>1956.0</td>\n",
       "      <td>0.12380</td>\n",
       "      <td>0.18660</td>\n",
       "      <td>0.2416</td>\n",
       "      <td>0.1860</td>\n",
       "      <td>0.2750</td>\n",
       "      <td>0.08902</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>19.69</td>\n",
       "      <td>21.25</td>\n",
       "      <td>130.00</td>\n",
       "      <td>1203.0</td>\n",
       "      <td>0.10960</td>\n",
       "      <td>0.15990</td>\n",
       "      <td>0.19740</td>\n",
       "      <td>0.12790</td>\n",
       "      <td>0.2069</td>\n",
       "      <td>0.05999</td>\n",
       "      <td>...</td>\n",
       "      <td>23.570</td>\n",
       "      <td>25.53</td>\n",
       "      <td>152.50</td>\n",
       "      <td>1709.0</td>\n",
       "      <td>0.14440</td>\n",
       "      <td>0.42450</td>\n",
       "      <td>0.4504</td>\n",
       "      <td>0.2430</td>\n",
       "      <td>0.3613</td>\n",
       "      <td>0.08758</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>11.42</td>\n",
       "      <td>20.38</td>\n",
       "      <td>77.58</td>\n",
       "      <td>386.1</td>\n",
       "      <td>0.14250</td>\n",
       "      <td>0.28390</td>\n",
       "      <td>0.24140</td>\n",
       "      <td>0.10520</td>\n",
       "      <td>0.2597</td>\n",
       "      <td>0.09744</td>\n",
       "      <td>...</td>\n",
       "      <td>14.910</td>\n",
       "      <td>26.50</td>\n",
       "      <td>98.87</td>\n",
       "      <td>567.7</td>\n",
       "      <td>0.20980</td>\n",
       "      <td>0.86630</td>\n",
       "      <td>0.6869</td>\n",
       "      <td>0.2575</td>\n",
       "      <td>0.6638</td>\n",
       "      <td>0.17300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>20.29</td>\n",
       "      <td>14.34</td>\n",
       "      <td>135.10</td>\n",
       "      <td>1297.0</td>\n",
       "      <td>0.10030</td>\n",
       "      <td>0.13280</td>\n",
       "      <td>0.19800</td>\n",
       "      <td>0.10430</td>\n",
       "      <td>0.1809</td>\n",
       "      <td>0.05883</td>\n",
       "      <td>...</td>\n",
       "      <td>22.540</td>\n",
       "      <td>16.67</td>\n",
       "      <td>152.20</td>\n",
       "      <td>1575.0</td>\n",
       "      <td>0.13740</td>\n",
       "      <td>0.20500</td>\n",
       "      <td>0.4000</td>\n",
       "      <td>0.1625</td>\n",
       "      <td>0.2364</td>\n",
       "      <td>0.07678</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>...</th>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>564</th>\n",
       "      <td>21.56</td>\n",
       "      <td>22.39</td>\n",
       "      <td>142.00</td>\n",
       "      <td>1479.0</td>\n",
       "      <td>0.11100</td>\n",
       "      <td>0.11590</td>\n",
       "      <td>0.24390</td>\n",
       "      <td>0.13890</td>\n",
       "      <td>0.1726</td>\n",
       "      <td>0.05623</td>\n",
       "      <td>...</td>\n",
       "      <td>25.450</td>\n",
       "      <td>26.40</td>\n",
       "      <td>166.10</td>\n",
       "      <td>2027.0</td>\n",
       "      <td>0.14100</td>\n",
       "      <td>0.21130</td>\n",
       "      <td>0.4107</td>\n",
       "      <td>0.2216</td>\n",
       "      <td>0.2060</td>\n",
       "      <td>0.07115</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>565</th>\n",
       "      <td>20.13</td>\n",
       "      <td>28.25</td>\n",
       "      <td>131.20</td>\n",
       "      <td>1261.0</td>\n",
       "      <td>0.09780</td>\n",
       "      <td>0.10340</td>\n",
       "      <td>0.14400</td>\n",
       "      <td>0.09791</td>\n",
       "      <td>0.1752</td>\n",
       "      <td>0.05533</td>\n",
       "      <td>...</td>\n",
       "      <td>23.690</td>\n",
       "      <td>38.25</td>\n",
       "      <td>155.00</td>\n",
       "      <td>1731.0</td>\n",
       "      <td>0.11660</td>\n",
       "      <td>0.19220</td>\n",
       "      <td>0.3215</td>\n",
       "      <td>0.1628</td>\n",
       "      <td>0.2572</td>\n",
       "      <td>0.06637</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>566</th>\n",
       "      <td>16.60</td>\n",
       "      <td>28.08</td>\n",
       "      <td>108.30</td>\n",
       "      <td>858.1</td>\n",
       "      <td>0.08455</td>\n",
       "      <td>0.10230</td>\n",
       "      <td>0.09251</td>\n",
       "      <td>0.05302</td>\n",
       "      <td>0.1590</td>\n",
       "      <td>0.05648</td>\n",
       "      <td>...</td>\n",
       "      <td>18.980</td>\n",
       "      <td>34.12</td>\n",
       "      <td>126.70</td>\n",
       "      <td>1124.0</td>\n",
       "      <td>0.11390</td>\n",
       "      <td>0.30940</td>\n",
       "      <td>0.3403</td>\n",
       "      <td>0.1418</td>\n",
       "      <td>0.2218</td>\n",
       "      <td>0.07820</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>567</th>\n",
       "      <td>20.60</td>\n",
       "      <td>29.33</td>\n",
       "      <td>140.10</td>\n",
       "      <td>1265.0</td>\n",
       "      <td>0.11780</td>\n",
       "      <td>0.27700</td>\n",
       "      <td>0.35140</td>\n",
       "      <td>0.15200</td>\n",
       "      <td>0.2397</td>\n",
       "      <td>0.07016</td>\n",
       "      <td>...</td>\n",
       "      <td>25.740</td>\n",
       "      <td>39.42</td>\n",
       "      <td>184.60</td>\n",
       "      <td>1821.0</td>\n",
       "      <td>0.16500</td>\n",
       "      <td>0.86810</td>\n",
       "      <td>0.9387</td>\n",
       "      <td>0.2650</td>\n",
       "      <td>0.4087</td>\n",
       "      <td>0.12400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>568</th>\n",
       "      <td>7.76</td>\n",
       "      <td>24.54</td>\n",
       "      <td>47.92</td>\n",
       "      <td>181.0</td>\n",
       "      <td>0.05263</td>\n",
       "      <td>0.04362</td>\n",
       "      <td>0.00000</td>\n",
       "      <td>0.00000</td>\n",
       "      <td>0.1587</td>\n",
       "      <td>0.05884</td>\n",
       "      <td>...</td>\n",
       "      <td>9.456</td>\n",
       "      <td>30.37</td>\n",
       "      <td>59.16</td>\n",
       "      <td>268.6</td>\n",
       "      <td>0.08996</td>\n",
       "      <td>0.06444</td>\n",
       "      <td>0.0000</td>\n",
       "      <td>0.0000</td>\n",
       "      <td>0.2871</td>\n",
       "      <td>0.07039</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>569 rows × 30 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "     radius_mean  texture_mean  perimeter_mean  area_mean  smoothness_mean  \\\n",
       "0          17.99         10.38          122.80     1001.0          0.11840   \n",
       "1          20.57         17.77          132.90     1326.0          0.08474   \n",
       "2          19.69         21.25          130.00     1203.0          0.10960   \n",
       "3          11.42         20.38           77.58      386.1          0.14250   \n",
       "4          20.29         14.34          135.10     1297.0          0.10030   \n",
       "..           ...           ...             ...        ...              ...   \n",
       "564        21.56         22.39          142.00     1479.0          0.11100   \n",
       "565        20.13         28.25          131.20     1261.0          0.09780   \n",
       "566        16.60         28.08          108.30      858.1          0.08455   \n",
       "567        20.60         29.33          140.10     1265.0          0.11780   \n",
       "568         7.76         24.54           47.92      181.0          0.05263   \n",
       "\n",
       "     compactness_mean  concavity_mean  concave_mean  symmetry_mean  \\\n",
       "0             0.27760         0.30010       0.14710         0.2419   \n",
       "1             0.07864         0.08690       0.07017         0.1812   \n",
       "2             0.15990         0.19740       0.12790         0.2069   \n",
       "3             0.28390         0.24140       0.10520         0.2597   \n",
       "4             0.13280         0.19800       0.10430         0.1809   \n",
       "..                ...             ...           ...            ...   \n",
       "564           0.11590         0.24390       0.13890         0.1726   \n",
       "565           0.10340         0.14400       0.09791         0.1752   \n",
       "566           0.10230         0.09251       0.05302         0.1590   \n",
       "567           0.27700         0.35140       0.15200         0.2397   \n",
       "568           0.04362         0.00000       0.00000         0.1587   \n",
       "\n",
       "     fractal_mean  ...  radius_max  texture_max  perimeter_max  area_max  \\\n",
       "0         0.07871  ...      25.380        17.33         184.60    2019.0   \n",
       "1         0.05667  ...      24.990        23.41         158.80    1956.0   \n",
       "2         0.05999  ...      23.570        25.53         152.50    1709.0   \n",
       "3         0.09744  ...      14.910        26.50          98.87     567.7   \n",
       "4         0.05883  ...      22.540        16.67         152.20    1575.0   \n",
       "..            ...  ...         ...          ...            ...       ...   \n",
       "564       0.05623  ...      25.450        26.40         166.10    2027.0   \n",
       "565       0.05533  ...      23.690        38.25         155.00    1731.0   \n",
       "566       0.05648  ...      18.980        34.12         126.70    1124.0   \n",
       "567       0.07016  ...      25.740        39.42         184.60    1821.0   \n",
       "568       0.05884  ...       9.456        30.37          59.16     268.6   \n",
       "\n",
       "     smoothness_max  compactness_max  concavity_max  concave_max  \\\n",
       "0           0.16220          0.66560         0.7119       0.2654   \n",
       "1           0.12380          0.18660         0.2416       0.1860   \n",
       "2           0.14440          0.42450         0.4504       0.2430   \n",
       "3           0.20980          0.86630         0.6869       0.2575   \n",
       "4           0.13740          0.20500         0.4000       0.1625   \n",
       "..              ...              ...            ...          ...   \n",
       "564         0.14100          0.21130         0.4107       0.2216   \n",
       "565         0.11660          0.19220         0.3215       0.1628   \n",
       "566         0.11390          0.30940         0.3403       0.1418   \n",
       "567         0.16500          0.86810         0.9387       0.2650   \n",
       "568         0.08996          0.06444         0.0000       0.0000   \n",
       "\n",
       "     symmetry_max  fractal_max  \n",
       "0          0.4601      0.11890  \n",
       "1          0.2750      0.08902  \n",
       "2          0.3613      0.08758  \n",
       "3          0.6638      0.17300  \n",
       "4          0.2364      0.07678  \n",
       "..            ...          ...  \n",
       "564        0.2060      0.07115  \n",
       "565        0.2572      0.06637  \n",
       "566        0.2218      0.07820  \n",
       "567        0.4087      0.12400  \n",
       "568        0.2871      0.07039  \n",
       "\n",
       "[569 rows x 30 columns]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "(569, 30)"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "(np.float64(0.002646070967089195), np.float64(569.356992669949))"
      ]
     },
     "execution_count": 62,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "data = pd.read_csv('./data/cancer.csv', sep='\\t')\n",
     "\n",
     "# Select every column except the 'Diagnosis' label column\n",
     "fea_col_index = data.columns[data.columns != 'Diagnosis']\n",
     "\n",
     "# Drop the 'ID' column as well: it carries no predictive information\n",
     "feature = data[fea_col_index].drop(labels='ID', axis=1)\n",
     "display(feature, feature.shape)\n",
     "# feature.std(axis=0).min(), feature.std(axis=0).max()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "id": "4aacda54-a360-4d3b-8488-ee260e3b606e",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(569, 11)"
      ]
     },
     "execution_count": 69,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "from sklearn.feature_selection import VarianceThreshold\n",
     "v = VarianceThreshold(threshold=0.2)  # threshold=x keeps only features whose variance is greater than x (default 0.0)\n",
    "result = v.fit_transform(feature)\n",
    "result.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "af0d6d52-f977-4843-8106-a89a98572436",
   "metadata": {},
   "source": [
     "- If, after removing zero-variance or near-zero-variance features, many features still remain and the model shows no significant improvement, variance can also help us finish feature selection in a single step. For example, to keep half of the features, choose a variance threshold that cuts the feature count in half: compute the median of the feature variances and pass that median in as the threshold parameter\n",
     "  - VarianceThreshold(np.median(X.var().values)).fit_transform(X)\n",
     "    - where X is the feature columns of the sample data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 77,
   "id": "cbc000ac-388b-4c17-902a-01b29abd22c2",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(569, 30)\n",
      "(569, 15)\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "print(feature.shape)\n",
    "\n",
    "median_value = np.median(feature.var(axis=0).values)\n",
    "v = VarianceThreshold(threshold=median_value)\n",
    "result = v.fit_transform(feature)\n",
    "\n",
    "print(result.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "453e180c-9cfb-4c86-b005-380cce59927f",
   "metadata": {},
   "source": [
     "- How variance filtering affects the model\n",
     "  - For KNN, the effect of filtering is striking: accuracy improves slightly, while the algorithm's efficiency improves by about a third\n",
     "- Note:\n",
     "  - Variance filtering mainly serves algorithm models that must traverse every feature\n",
     "  - The main goal of filtering is to lower the algorithms' computational cost while maintaining their performance"
   ]
  },
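  {
   "cell_type": "markdown",
   "id": "b7f1c2d3-1111-4aaa-8bbb-0123456789ab",
   "metadata": {},
   "source": [
    "The claim above can be sketched with a quick comparison. This is a minimal example, assuming scikit-learn's bundled breast cancer dataset as a stand-in for `cancer.csv`; the exact scores will differ slightly from the CSV used earlier:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c8e2d3f4-2222-4bbb-8ccc-0123456789ab",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.datasets import load_breast_cancer\n",
    "from sklearn.feature_selection import VarianceThreshold\n",
    "from sklearn.model_selection import cross_val_score\n",
    "from sklearn.neighbors import KNeighborsClassifier\n",
    "\n",
    "X, y = load_breast_cancer(return_X_y=True)\n",
    "# Keep only the higher-variance half of the features (median as threshold)\n",
    "X_filtered = VarianceThreshold(threshold=np.median(X.var(axis=0))).fit_transform(X)\n",
    "\n",
    "knn = KNeighborsClassifier()\n",
    "print('all features   :', X.shape, cross_val_score(knn, X, y, cv=5).mean())\n",
    "print('after filtering:', X_filtered.shape, cross_val_score(knn, X_filtered, y, cv=5).mean())"
   ]
  },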
  {
   "cell_type": "markdown",
   "id": "88ad1563-70fb-4781-b72a-2a57ab0b92ac",
   "metadata": {},
   "source": [
     "### PCA (Principal Component Analysis): a technique for analyzing and simplifying datasets, and the core algorithm of the [matrix factorization] family\n",
     "- The dimensionality being reduced is the number of features\n",
     "- Purpose: once the feature count reaches hundreds or thousands, consider optimizing the data by compressing its dimensionality, i.e. lowering the dimensionality (complexity) of the source data as much as possible while losing only a little information\n",
     "- Use: can cut down the number of features in regression analysis or cluster analysis\n",
     "- Matrix factorization\n",
     "  - Matrix factorization is the technique of finding n new feature vectors so that the data can be compressed onto a few features without losing too much of the total information\n",
     "- PCA syntax\n",
     "  - from sklearn.decomposition import PCA\n",
     "  - pca = PCA(n_components=None)\n",
     "    - n_components can be a float (the proportion of variance to retain) or an integer (the number of components to keep)\n",
     "    - pca.fit_transform(X)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 84,
   "id": "20203d19-2e6e-4559-b33a-4dfbb8fbc15d",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[-2.88362421e+00, -1.25443227e+00,  8.61572474e-17],\n",
       "       [-1.45140588e+00,  1.56492061e+00,  8.61572474e-17],\n",
       "       [ 4.33503009e+00, -3.10488337e-01,  8.61572474e-17]])"
      ]
     },
     "execution_count": 84,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.decomposition import PCA\n",
     "# Project the data into a lower-dimensional space\n",
     "# n_components can be a float (proportion of variance to retain) or an integer (number of components to keep)\n",
    "pca = PCA(n_components=3)\n",
    "pca.fit_transform([[0,2,4,3],[0,3,7,3],[0,9,6,3]])"
   ]
  },
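  {
   "cell_type": "markdown",
   "id": "d9f3e4a5-3333-4ccc-8ddd-0123456789ab",
   "metadata": {},
   "source": [
    "A minimal sketch of the fractional form of `n_components`, again assuming scikit-learn's bundled breast cancer dataset: PCA keeps just enough components to explain at least the requested share of the variance."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e0a4f5b6-4444-4ddd-8eee-0123456789ab",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_breast_cancer\n",
    "from sklearn.decomposition import PCA\n",
    "\n",
    "X, _ = load_breast_cancer(return_X_y=True)\n",
    "# Keep as many components as needed to retain at least 95% of the variance\n",
    "pca = PCA(n_components=0.95)\n",
    "reduced = pca.fit_transform(X)\n",
    "print(reduced.shape, pca.explained_variance_ratio_.sum())"
   ]
  },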
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "612c897b-e5f3-460d-b1e4-84666f464516",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
