{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Computing a composite score with Principal Component Analysis (PCA):\n",
     "\n",
     "    a. Data preparation: collect the indicator data and handle outliers with an *Isolation Forest*\n",
     "    b. Standardization: apply *Z-score standardization* to center the data and fix the coordinate system; each value is mapped by subtracting the mean and dividing by the standard deviation\n",
     "    c. Compute the *covariance matrix* to capture correlations: the pairwise covariances measure the linear relationships between the variables\n",
     "    d. Solve for the *eigenvalues and eigenvectors* to obtain variance ratios: each eigenvalue gives a principal component's variance contribution, and its eigenvector gives the component's direction\n",
     "    e. Select the *principal components*: keep the eigenvectors corresponding to the few largest eigenvalues\n",
     "    f. Construct the new feature space: transform the original data into a new representation in which each new feature is a weighted sum of the original features\n",
     "    g. Composite score: using each component's *explained variance ratio* as its weight, sum the weighted components of the new space into a single score"
   ]
  },
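  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The steps above can be sketched end-to-end with plain NumPy on a small synthetic matrix. This is an illustrative sketch only, not the pipeline used below; the random data `X` and the 80% variance threshold are assumptions made for the example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "X = rng.normal(size=(50, 4))  # toy data (assumed): 50 samples, 4 indicators\n",
    "\n",
    "# b. Z-score: subtract the mean, divide by the standard deviation\n",
    "Z = (X - X.mean(axis=0)) / X.std(axis=0)\n",
    "\n",
    "# c. Covariance matrix of the standardized indicators\n",
    "C = np.cov(Z, rowvar=False)\n",
    "\n",
    "# d. Eigenvalues (variance contributions) and eigenvectors (directions)\n",
    "eigvals, eigvecs = np.linalg.eigh(C)\n",
    "order = np.argsort(eigvals)[::-1]  # sort in descending order\n",
    "eigvals, eigvecs = eigvals[order], eigvecs[:, order]\n",
    "\n",
    "# e. Keep enough components to explain 80% of the variance\n",
    "ratio = eigvals / eigvals.sum()\n",
    "k = int(np.searchsorted(np.cumsum(ratio), 0.8)) + 1\n",
    "\n",
    "# f. Project the data into the new feature space\n",
    "component_scores = Z @ eigvecs[:, :k]\n",
    "\n",
    "# g. Composite score: weight each component by its explained-variance ratio\n",
    "composite = component_scores @ ratio[:k]"
   ]
  },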
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Inspect the data in ./score.xlsx and drop any record that contains a 0 value"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "import openpyxl\n",
    "import numpy as np\n",
    "from sklearn.decomposition import PCA\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.ensemble import IsolationForest\n",
    "\n",
     "# Read the spreadsheet into the program; if the file has not changed, this step need not be re-run\n",
     "sheet = openpyxl.open('./score.xlsx')['成绩']\n",
     "# Header row: class, name, math, Chinese, English, physics, chemistry, biology, politics, history, geography.\n",
     "# Data starts on the second row; the subject order does not affect the analysis\n",
     "scoreList = []\n",
     "nameList = []\n",
     "# Loop over the spreadsheet rows, collecting scores into scoreList\n",
     "for row in list(sheet.rows)[1:]:\n",
     "    # A score of 0 usually means an absence or similar accident that would distort the analysis, so skip the record\n",
     "    if 0 in [r.value for r in row]: continue\n",
     "    scoreList.append([r.value for r in row[2:11]])\n",
     "    nameList.append((row[0].value, row[1].value))\n",
     "\n",
     "# The cleaned data is now in scoreList"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "1. Handle outliers with an Isolation Forest"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Outliers: [((32, '兴祖'), array([ 88. , 126. ,  69.5,  60. ,  52. ,  45. ,  26. ,  30. ,  40. ])), ((28, '杨'), array([ 91. , 105. , 110.5,  52. ,  49. ,  68. ,  16. ,  27. ,  16. ])), ((15, '卓航'), array([55. , 62. , 61.5, 60. , 64. , 45. , 18. , 51. , 25. ])), ((19, '鑫钰'), array([30., 57., 65., 19., 33., 23.,  6., 44., 43.])), ((31, '长通'), array([96. , 13. , 50.5, 10. , 14. , 11. , 49. , 45. , 26. ])), ((6, '璐瑶'), array([72. , 16. , 36.5, 18. , 17. , 25. , 27. , 30. , 46. ])), ((10, '悦豪'), array([69. , 23. , 55.5, 13. ,  8. , 20. , 24. , 24. , 43. ]))]\n"
     ]
    }
   ],
   "source": [
     "def outlier_detection(scoreList, nameList=nameList, contamin=0.003):\n",
     "    \"\"\"\n",
     "    Detect and remove outliers with an Isolation Forest.\n",
     "    \n",
     "    Parameters:\n",
     "    - scoreList (array-like): the score data.\n",
     "    - nameList (list-like): the name records, aligned with scoreList.\n",
     "    - contamin (float): the Isolation Forest contamination, i.e. the expected proportion of outliers.\n",
     "    \n",
     "    Returns:\n",
     "    - data_clean (np.ndarray): the score data with outliers removed.\n",
     "    - name_clean (list): the name records with outliers removed.\n",
     "    \"\"\"\n",
     "    # Sanity check\n",
     "    if len(scoreList) != len(nameList):\n",
     "        raise ValueError(\"scoreList and nameList must have the same length.\")\n",
     "    \n",
     "    data = np.asarray(scoreList)\n",
     "\n",
     "    # Build the outlier detector\n",
     "    clf = IsolationForest(contamination=contamin, random_state=42)\n",
     "    # Fit the model\n",
     "    clf.fit(data)\n",
     "\n",
     "    # Predict labels for the dataset: -1 marks an outlier, 1 an inlier\n",
     "    y_pred = clf.predict(data)\n",
     "\n",
     "    # Indices of the outliers\n",
     "    outlier_index = np.where(y_pred == -1)[0]\n",
     "\n",
     "    # Drop the outliers, keeping only the clean rows\n",
     "    data_clean = np.delete(data, outlier_index, axis=0)\n",
     "    name_clean = [nameList[i] for i in range(len(data)) if i not in outlier_index]\n",
     "    if len(data_clean) != len(name_clean):\n",
     "        raise ValueError(\"data_clean and name_clean must have the same length.\")\n",
     "    \n",
     "    # Collect and print the outliers\n",
     "    outlier_name = [nameList[o] for o in outlier_index]\n",
     "    outlier_data = data[outlier_index]\n",
     "    if len(outlier_name) != len(outlier_data):\n",
     "        raise ValueError(\"outlier_name and outlier_data must have the same length.\")\n",
     "    print('Outliers:', [z for z in zip(outlier_name, outlier_data)])\n",
     "    \n",
     "    return data_clean, name_clean\n",
     "# When not using the main function, run these lines\n",
     "data = np.array(scoreList)\n",
     "data_clean, name_clean = outlier_detection(data, nameList, contamin=0.003)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [],
   "source": [
     "def save_data(name, data, name_clean):\n",
     "\n",
     "    # Write the data into the Excel file\n",
     "    workbook = openpyxl.load_workbook('./score.xlsx')\n",
     "    new_sheet = workbook.create_sheet(name) if name not in workbook.sheetnames else workbook[name]\n",
     "    # Clear any existing values in this sheet first, to avoid appending duplicate rows\n",
     "    for row in new_sheet.iter_rows():\n",
     "        for cell in row:\n",
     "            new_sheet[cell.coordinate] = None\n",
     "    # data may be 1-D (one score per person) or 2-D (one row of values per person)\n",
     "    for i in range(len(name_clean)):\n",
     "        ls = [*name_clean[i], data[i]] if isinstance(data[i], np.float64) else [*name_clean[i], *list(data[i])]\n",
     "        new_sheet.append(ls)\n",
     "    workbook.save(filename='./score.xlsx')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
     "def standar_data(name_clean, data, save=False):\n",
     "    # StandardScaler implements Z-score standardization\n",
     "    scaler = StandardScaler()\n",
     "    data_scaled = scaler.fit_transform(data)\n",
     "    # Optionally persist the standardized data\n",
     "    if save:\n",
     "        save_data(name='Standardized', data=data_scaled, name_clean=name_clean)\n",
     "    return data_scaled\n",
     "# When not using the main function, run this line\n",
     "data_scaled = standar_data(name_clean, data_clean)"
   ]
  },
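  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check on the standardization step, `StandardScaler` should agree with the hand-written mapping. A minimal sketch with a made-up matrix `X_demo` (note that sklearn uses the population standard deviation, i.e. `ddof=0`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "\n",
    "X_demo = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])\n",
    "scaled = StandardScaler().fit_transform(X_demo)\n",
    "# Subtract the column mean, divide by the (population) standard deviation\n",
    "manual = (X_demo - X_demo.mean(axis=0)) / X_demo.std(axis=0)\n",
    "assert np.allclose(scaled, manual)\n",
    "assert np.allclose(scaled.mean(axis=0), 0)  # each column is centered"
   ]
  },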
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Dimensionality reduction with PCA"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Covariance matrix:\n",
      " [[1.    0.351 0.457 0.349 0.394 0.468 0.405 0.403 0.309]\n",
      " [0.351 1.    0.357 0.744 0.686 0.642 0.228 0.39  0.384]\n",
      " [0.457 0.357 1.    0.355 0.381 0.429 0.368 0.349 0.239]\n",
      " [0.349 0.744 0.355 1.    0.734 0.683 0.218 0.44  0.464]\n",
      " [0.394 0.686 0.381 0.734 1.    0.717 0.258 0.451 0.447]\n",
      " [0.468 0.642 0.429 0.683 0.717 1.    0.316 0.514 0.509]\n",
      " [0.405 0.228 0.368 0.218 0.258 0.316 1.    0.38  0.255]\n",
      " [0.403 0.39  0.349 0.44  0.451 0.514 0.38  1.    0.44 ]\n",
      " [0.309 0.384 0.239 0.464 0.447 0.509 0.255 0.44  1.   ]]\n",
       "Eigenvalues (sorted):\n",
      " [4.55 1.18 0.78 0.59 0.53 0.53 0.34 0.27 0.24]\n",
       "Eigenvectors (sorted):\n",
      " [[ 0.29  0.37  0.28  0.38  0.39  0.4   0.23  0.32  0.3 ]\n",
      " [-0.41  0.32 -0.37  0.34  0.26  0.13 -0.59 -0.19  0.07]\n",
      " [ 0.18  0.23  0.49  0.1   0.12  0.01 -0.13 -0.43 -0.67]\n",
      " [ 0.4  -0.23  0.34 -0.14 -0.11  0.04 -0.7   0.01  0.37]\n",
      " [ 0.69  0.06 -0.59  0.    0.05  0.06  0.08 -0.39  0.01]\n",
      " [ 0.21 -0.01 -0.27 -0.02  0.02  0.02 -0.27  0.72 -0.54]\n",
      " [-0.13 -0.57 -0.04 -0.23  0.43  0.62  0.03 -0.11 -0.14]\n",
      " [-0.07  0.35 -0.03 -0.23 -0.63  0.65 -0.01 -0.03 -0.04]\n",
      " [-0.03  0.46  0.   -0.78  0.4  -0.12 -0.04  0.04  0.09]]\n",
       "Needs 5 principal components; explained variance ratios (normalized eigenvalues): [0.50496901 0.13057574 0.08688091 0.06555865 0.05913341]\n",
       "Component loadings (eigenvectors): \n",
      " [[ 0.29  0.37  0.28  0.38  0.39  0.4   0.23  0.32  0.3 ]\n",
      " [-0.41  0.32 -0.37  0.34  0.26  0.13 -0.59 -0.19  0.07]\n",
      " [ 0.18  0.23  0.49  0.1   0.12  0.01 -0.13 -0.43 -0.67]\n",
      " [ 0.4  -0.23  0.34 -0.14 -0.11  0.04 -0.7   0.01  0.37]\n",
      " [ 0.69  0.06 -0.59  0.    0.05  0.06  0.08 -0.39  0.01]]\n",
       "Weight of each subject: [ 0.17801118  0.23511061  0.12197644  0.23821824  0.23692938  0.22553015\n",
      " -0.01431449  0.07648455  0.12546469]\n"
     ]
    }
   ],
   "source": [
     "def pca_reduction(data, n=9, save=False):\n",
     "    # Compute the covariance matrix: the pairwise covariances measure the linear relationships between variables\n",
     "    # Solve for eigenvalues and eigenvectors: eigenvalues give each component's variance contribution, eigenvectors its direction\n",
     "    pca = PCA(n_components=n)\n",
     "    # Fitting computes the covariance matrix and its eigenvalues and eigenvectors\n",
     "    pca.fit(data)\n",
     "    # Print the covariance matrix, rounded to three decimals\n",
     "    print('Covariance matrix:\\n', np.round(pca.get_covariance(), 3))\n",
     "    # Print the eigenvalues and eigenvectors\n",
     "    print('Eigenvalues (sorted):\\n', np.round(pca.explained_variance_, 2))\n",
     "    print('Eigenvectors (sorted):\\n', np.round(pca.components_, 2))\n",
     "\n",
     "\n",
     "    # Keep components until the cumulative explained variance passes the threshold; 80%-85% is a common rule of thumb\n",
     "    i = 0\n",
     "    # np.cumsum gives the running total\n",
     "    for r in np.cumsum(pca.explained_variance_ratio_):\n",
     "        i += 1\n",
     "        if r > 0.8:\n",
     "            break\n",
     "\n",
     "    # i is the number of components; X_pca is the matrix of the i component scores\n",
     "    X_pca = pca.transform(data)[:, :i]\n",
     "    # Reduced data:\n",
     "    # print(X_pca[:10])\n",
     "    \n",
     "    # 4. Component loadings\n",
     "    loadings_components = pca.components_[:i, :]\n",
     "    \n",
     "    # 5. Explained variance ratios of the retained components\n",
     "    explained_variance_ratio = pca.explained_variance_ratio_[:i]\n",
     "\n",
     "    # Print the retained variance ratios and the loadings\n",
     "    print(\"Needs {} principal components; explained variance ratios (normalized eigenvalues):\".format(i), explained_variance_ratio)\n",
     "    print(\"Component loadings (eigenvectors): \\n\", np.round(loadings_components, 2))\n",
     "    \n",
     "    # For debugging\n",
     "    # s = sum(explained_variance_ratio)\n",
     "    print('Weight of each subject:', loadings_components.T @ explained_variance_ratio.T)\n",
     "        #   loadings_components@(explained_variance_ratio/s))\n",
     "    if save: save_data(name='PrincipalComponents', data=X_pca, name_clean=name_clean)\n",
     "    return X_pca, explained_variance_ratio\n",
     "\n",
     "# When not using the main function, run this line\n",
     "X_pca, evr = pca_reduction(data_scaled)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Inspect the composite scores here, or save them to the file\n",
     "import math\n",
     "r = X_pca @ evr\n",
     "# r = r - math.floor(min(r))\n",
     "# Shift the scores by a hand-picked constant so they read as positive numbers\n",
     "r = r + 3\n",
     "for i in r:\n",
     "    print(i)\n",
     "# save_data('CompositeScore', r.T, name_clean)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Main function (if you run the steps one at a time, you can ignore it)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "def score_analysis(scoreList):\n",
     "    data = np.array(scoreList)\n",
     "\n",
     "    # 1. Clean the data with an Isolation Forest; the 0.3% outlier rate should be tuned to the actual data\n",
     "    data, name_clean = outlier_detection(data, nameList, contamin=0.003)\n",
     "\n",
     "    # 2. Standardize the data (Z-score)\n",
     "    data_scaled = standar_data(name_clean, data)\n",
     "\n",
     "    # 3. Compute the principal components\n",
     "    X_pca, evr = pca_reduction(data_scaled)\n",
     "\n",
     "    # 4. Compute the final scores\n",
     "    scores = X_pca @ evr\n",
     "\n",
     "    return list(zip(name_clean, scores))\n",
     "# Uncomment to run the whole pipeline through the main function\n",
     "# final_score = score_analysis(scoreList)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Use together with the main function\n",
    "# print(scoreList[:10])\n",
    "# final_score[:10]"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
